Rice distribution

Statistics
Published: October 28, 2025

Abstract
In coronagraphy, atmospheric scattering creates speckle patterns that obscure faint astronomical companions. Stochastic Speckle Discrimination (SSD) exploits statistical differences between faint sources and speckle patterns to separate them. The speckle pattern follows a Rice distribution, while faint sources exhibit Poisson statistics. This document derives the fundamental equations of the Rice distribution and demonstrates its application to power spectrum analysis in coronagraphic observations.
Keywords

Rice distribution, statistics


Introduction

In coronagraphy, the main light source is blocked by a mask. However, a small fraction of the light is scattered through the atmosphere and forms a speckle pattern. This creates a confounding pattern in the image, where a faint companion is superimposed on the speckle pattern. Stochastic Speckle Discrimination (SSD) is a technique to separate the faint companion from the speckle pattern by exploiting the statistical properties of the speckle pattern. The power spectrum of the speckle pattern is different from that of the faint companion: the faint source follows Poisson statistics, while the speckle pattern follows a Rice distribution. A precise measurement of the power spectrum of the light will therefore allow us to separate the faint companion from the speckle pattern.

In this document, we will derive the fundamental equations that describe these statistical distributions.

Notes on Functions of Random Variables

We will need certain transformations of random variables, which requires some basic knowledge of probability theory and functions of random variables. In the standard notation, a random variable is denoted by a capital letter and its value by a lowercase letter; for example, \(U\) is a random variable and \(u\) is its value. Assume we know that \(U\) has a probability density function \(f_U(u)\), and that it is subject to a transformation \(V = g(U)\), where \(g\) is a known function. We want to find the probability density function of the new random variable \(V\), denoted \(f_V(v)\).

The cumulative distribution function of the transformed random variable \(V\) is given by:

\[\begin{equation} F_V(v) = P(V \leq v) \label{eq:cumulative_distribution_function}. \end{equation}\] An illustration of this transformation is shown in Figure 1. The domain of \(V\leq v\) is mapped to a domain of \(u\), shown as the green line segments on the x-axis. Let us define this domain as \(L_v\).

Figure 1: Transformation of a random variable \(U\) to \(V\): \(v=g(u)\). The green line segments represent the values of \(u\) for which the random variable \(V\) will be less than or equal to the particular value of \(v\) shown by the dashed line.

We can express \(\ref{eq:cumulative_distribution_function}\) in terms of \(u\): \[\begin{equation} F_V(v) = P(V \leq v) = P( U \in L_v) \label{eq:cumulative_distribution_function_in_terms_of_u}. \end{equation}\]

The probability density function of the transformed random variable \(V\) is given by:

\[\begin{equation} f_V(v) = \frac{d}{dv} F_V(v) = \frac{d}{dv} P( U \in L_v) \label{eq:probability_density_function_of_v}. \end{equation}\]

In the general case, we need to find the domains (\(L_v\)) and compute the derivative in \(\ref{eq:probability_density_function_of_v}\). Let’s start with an example:

\[\begin{equation} g(u) =a u^2 \label{eq:gofu}. \end{equation}\]

Figure 2: Transformation of a random variable \(U\) to \(V\): \(v=a u^2\). The green line segments represent the values of \(u\) for which the random variable \(V\) will be less than or equal to the particular value of \(v\) shown by the dashed line.

\[\begin{equation} F_V(v) = P(V \leq v) = P( U \in L_v)=P(-\sqrt{\frac{v}{a}} \leq u \leq \sqrt{\frac{v}{a}}) =\int_{-\sqrt{\frac{v}{a}}}^{\sqrt{\frac{v}{a}}} f_U(u) du \label{eq:cumulative_distribution_function_in_terms_of_usq}. \end{equation}\]

\[\begin{eqnarray} f_V(v) &=& \frac{d}{dv} F_V(v) = \left\{f_U\left(\sqrt{\frac{v}{a}}\right) +f_U\left(-\sqrt{\frac{v}{a}}\right) \right\} \left[\frac{d}{dv}\sqrt{\frac{v}{a}}\right]\nonumber\\ &=& \frac{f_U\left(\sqrt{\frac{v}{a}}\right) +f_U\left(-\sqrt{\frac{v}{a}}\right)} {2\sqrt{a v}} \label{eq:probability_density_function_of_v_sq}. \end{eqnarray}\]
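As a quick numerical sanity check (not part of the original derivation), the following Python sketch verifies \(\ref{eq:probability_density_function_of_v_sq}\) for a standard normal \(f_U\) and \(a=1\), in which case \(V=U^2\) is chi-squared with one degree of freedom:

```python
# Sanity check of the change-of-variables formula: with U standard normal
# and a = 1, V = U^2 is chi-squared with one degree of freedom.
import numpy as np
from scipy import stats

v = np.linspace(0.05, 5.0, 200)
f_U = stats.norm.pdf

# f_V(v) = [f_U(sqrt(v/a)) + f_U(-sqrt(v/a))] / (2 sqrt(a v)), with a = 1
f_V = (f_U(np.sqrt(v)) + f_U(-np.sqrt(v))) / (2.0 * np.sqrt(v))

assert np.allclose(f_V, stats.chi2(df=1).pdf(v))   # matches chi^2_1
```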

For example, if \(f_U\) is a uniform distribution, \(f_U(u) = \frac{1}{c-b}\) for \(u \in [b,c]\) with \(0\leq b<c\), then under the transformation \(v=a\,u^2\) the support gets mapped to \([a\,b^2,a\,c^2]\), and only the \(f_U\left(\sqrt{v/a}\right)\) term contributes since \(-\sqrt{v/a}\) lies outside the support. Inserting this into \(\ref{eq:probability_density_function_of_v_sq}\) we get:

\[\begin{equation} f_V(v) = \begin{cases} \frac{1}{2\sqrt{a v} (c-b)}, & a\,b^2<v<a\,c^2 \\ 0, & \text{otherwise} \end{cases} \label{eq:probability_density_function_of_v_sq_uniform}, \end{equation}\] which is shown in Figure 3.

Figure 3: The density function of \(V\) with \(v=a\,u^2\) where \(U\) is uniformly distributed between \(b\) and \(c\).

One may be concerned about the singularity at \(v=0\) (which lies at the edge of the support when \(b=0\)). The density is still integrable, and the probability density function is well defined. We can show that the normalization still holds after the transformation:

\[\begin{equation} \int_{-\infty}^{\infty} dv f_V(v) = \int_{a\,b^2}^{a\,c^2} \frac{dv}{2\sqrt{a v} (c-b)} = \frac{1}{\sqrt{a} (c-b)}\left.\sqrt{v} \right|_{a\,b^2}^{a\,c^2} = 1 \label{eq:probability_density_function_of_v_sq_uniform_normalization}. \end{equation}\]
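We can also verify the uniform example by simulation. The following sketch uses illustrative values \(a=2\), \(b=1\), \(c=3\) (assumptions, not from the text) and checks both the normalization and the agreement between a histogram of simulated values and \(\ref{eq:probability_density_function_of_v_sq_uniform}\):

```python
# Monte Carlo check of the transformed density, with illustrative values
# a = 2, b = 1, c = 3 (assumptions, not from the text).
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
a, b, c = 2.0, 1.0, 3.0

u = rng.uniform(b, c, size=1_000_000)
v = a * u**2                               # V = a U^2, supported on [a b^2, a c^2]

def f_V(x):
    return 1.0 / (2.0 * np.sqrt(a * x) * (c - b))

# The density integrates to 1 over its support.
total, _ = quad(f_V, a * b**2, a * c**2)
print(total)                               # ~1.0

# Histogram of the simulated V against the analytic density.
hist, edges = np.histogram(v, bins=50, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - f_V(mids))))    # small (Monte Carlo error)
```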


Statistics of the wave amplitude

We will assume that the wavefront is described by a random vector (phasor) in the complex plane. It may have a deterministic component with a random fluctuation on top of it; the random fluctuation represents the atmospheric turbulence. We illustrate this with a random walk in the complex plane, as shown in Figure 4.

Figure 4: A random walk in complex plane with an initial constant phasor.

We can express the resultant phasor as a sum of a deterministic phasor and a collection of random phasors: \[\begin{equation} V=V_0+\frac{1}{\sqrt{N}}\sum_{i=1}^{N} \alpha_i e^{i\phi_i}, \label{eq:resultant_v} \end{equation}\] where \(V_0\) is the deterministic phasor, \(\alpha_i\) and \(\phi_i\) are the amplitudes and phases of the random phasors, respectively. Individual phasors are assumed to be independent of each other.

We will assume that the \(\alpha_i\) are i.i.d. random variables with mean \(\bar{\alpha}\) and variance \(\sigma_{\alpha}^2\), and that the phases \(\phi_i\) are i.i.d., uniformly distributed over \((-\pi,\pi]\), and independent of the amplitudes. The real and imaginary parts of the resultant phasor are: \[\begin{equation} \Re \{V\}=V_{0r}+\frac{1}{\sqrt{N}}\sum_{i=1}^{N} \alpha_i \cos(\phi_i), \label{eq:resultant_v_r} \end{equation}\] \[\begin{equation} \Im \{V\}=V_{0i}+\frac{1}{\sqrt{N}}\sum_{i=1}^{N} \alpha_i \sin(\phi_i) \label{eq:resultant_v_i}. \end{equation}\]

The expected values of the real and imaginary parts of the resultant phasor are set by the deterministic phasor alone, since \(\langle\cos(\phi_i)\rangle=\langle\sin(\phi_i)\rangle=0\) for uniformly distributed phases: \[\begin{equation} \langle\Re \{V\} \rangle = \Re \{V_0\} \label{eq:resultant_v_r_mean}, \end{equation}\] \[\begin{equation} \langle \Im \{V\} \rangle = \Im \{V_0\} \label{eq:resultant_v_i_mean}. \end{equation}\]

We can also compute the variance. The phasors are independent and the cross terms vanish because \(\langle\cos(\phi_i)\rangle=0\), so: \[\begin{equation} \langle\left(\Re \{V\}-\langle\Re \{V\} \rangle\right)^2 \rangle = \frac{1}{N}\sum_{i=1}^{N} \langle \alpha_i^2 \rangle \langle \cos^2(\phi_i) \rangle = \frac{\langle \alpha^2 \rangle}{2}\equiv \sigma^2 \label{eq:resultant_v_r_variance}, \end{equation}\] where \(\langle \alpha^2 \rangle = \sigma_{\alpha}^2+\bar{\alpha}^2\) is the second moment of the amplitudes and \(\langle\cos^2(\phi_i)\rangle=\frac{1}{2}\) for uniform phases. The same variance is obtained for the imaginary part.
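These moments can be checked by simulation. The sketch below makes illustrative choices that are not from the text (Rayleigh amplitudes with \(\langle\alpha^2\rangle=2\), \(V_0=3\), \(N=100\)):

```python
# Simulation of the phasor sum, assuming illustrative choices
# (Rayleigh amplitudes, V0 = 3, N = 100); none of these come from the text.
import numpy as np

rng = np.random.default_rng(1)
N, trials = 100, 50_000
V0 = 3.0 + 0.0j                                      # deterministic phasor on the real axis

alpha = rng.rayleigh(scale=1.0, size=(trials, N))    # i.i.d. amplitudes, <alpha^2> = 2
phi = rng.uniform(-np.pi, np.pi, size=(trials, N))   # i.i.d. uniform phases
V = V0 + (alpha * np.exp(1j * phi)).sum(axis=1) / np.sqrt(N)

print(V.real.mean(), V.imag.mean())                  # ~ Re{V0} = 3, ~ Im{V0} = 0
print(V.real.var(), (alpha**2).mean() / 2)           # both ~ <alpha^2>/2 = 1
```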

Note that the coordinate system can be rotated so that the constant phasor is aligned with the real axis, i.e.,

\[\begin{equation} \langle\Re \{V\} \rangle = \Re \{V_0\}\equiv s \label{eq:resultant_v_r_meanr}, \end{equation}\] \[\begin{equation} \langle \Im \{V\} \rangle \equiv 0 \label{eq:resultant_v_i_meanr}. \end{equation}\]

As the resultant vector is composed of many independent random phasors, its real and imaginary parts become Gaussian distributed due to the central limit theorem. We have computed their means and variances, and the two parts are uncorrelated since \(\langle\cos(\phi_i)\sin(\phi_i)\rangle=0\). We can therefore write the joint normal distribution:

\[\begin{equation} f_{R,I}(r,i) =\frac{1}{2\pi \sigma^2} \exp\left(-\frac{(r-s)^2+i^2}{2\sigma^2}\right) \label{eq:resultant_gaussian}, \end{equation}\] where we denote the real and imaginary parts of the resultant vector as \(r\) and \(i\), respectively.

This is the representation of the resultant vector in the Cartesian coordinate system \((r,i)\). We can transform it into the polar coordinate system \((A,\Theta)\):

\[\begin{equation} \begin{aligned} a &= \sqrt{r^2+i^2} \\ \theta &= \tan^{-1}\left(\frac{i}{r}\right) \end{aligned} \quad\iff\quad \begin{aligned} r &= a \cos \theta \\ i &= a \sin \theta \end{aligned} \label{eq:resultant_polar_inverse} \end{equation}\] Transforming the probability density function of the resultant vector from Cartesian to polar coordinates, we get: \[\begin{equation} f_{A,\Theta}(a,\theta) =\begin{cases} \frac{a}{2\pi \sigma^2} \exp\left(-\frac{(a \cos \theta-s)^2+(a \sin \theta)^2}{2\sigma^2}\right), & a\geq 0 \text{ and } -\pi<\theta<\pi \\ 0, & \text{otherwise} \end{cases} \label{eq:resultant_polar_pdf} \end{equation}\] where the factor of \(a\) comes from the Jacobian of the transformation.
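As a sanity check, the polar density should integrate to one. A short sketch with illustrative parameters \(s=2\), \(\sigma=1\) (assumptions, not from the text):

```python
# Normalization check of the polar-coordinates density, with illustrative
# parameters s = 2, sigma = 1 (assumptions, not from the text).
import numpy as np
from scipy.integrate import dblquad

s, sigma = 2.0, 1.0

def f(a, theta):
    return (a / (2 * np.pi * sigma**2)) * np.exp(
        -((a * np.cos(theta) - s) ** 2 + (a * np.sin(theta)) ** 2) / (2 * sigma**2)
    )

# Integrate a over [0, s + 10*sigma] (the tail beyond is negligible)
# and theta over (-pi, pi); the result should be ~1.
total, _ = dblquad(f, -np.pi, np.pi, 0.0, s + 10 * sigma)
print(total)   # ~1.0
```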

In order to find the probability density function of the amplitude, we need to integrate over the angle \(\theta\). Expanding \((a\cos\theta-s)^2+(a\sin\theta)^2 = a^2+s^2-2as\cos\theta\), we get: \[\begin{eqnarray} f_A(a) &=& \int_{-\pi}^{\pi} f_{A,\Theta}(a,\theta) d\theta= \frac{a}{2\pi \sigma^2} \exp\left(-\frac{a^2 +s^2}{2\sigma^2}\right) \int_{-\pi}^{\pi} \exp\left(\frac{a s \cos \theta}{\sigma^2}\right) d\theta\nonumber\\ &=& \frac{a}{\sigma^2} \exp\left(-\frac{a^2 +s^2}{2\sigma^2}\right) I_0\left(\frac{a s}{\sigma^2}\right) \label{eq:resultant_polar_pdf_a}, \end{eqnarray}\] where \(I_0\) is the modified Bessel function of the first kind, which satisfies \(\int_{-\pi}^{\pi} e^{x \cos\theta}\, d\theta = 2\pi I_0(x)\).
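The angular integral above is the standard integral representation of \(I_0\); a quick numerical check:

```python
# Numerical check of the angular integral used above:
# the identity  int_{-pi}^{pi} exp(x cos(theta)) dtheta = 2*pi*I0(x).
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

for x in (0.1, 1.0, 5.0):
    val, _ = quad(lambda t: np.exp(x * np.cos(t)), -np.pi, np.pi)
    print(val, 2 * np.pi * i0(x))    # the two columns agree
```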

Writing out the support explicitly: \[\begin{equation} f_A(a) = \begin{cases} \frac{a}{\sigma^2} \exp\left(-\frac{a^2 +s^2}{2\sigma^2}\right) I_0\left(\frac{a s}{\sigma^2}\right), & a\geq 0 \\ 0, & \text{otherwise,} \end{cases} \label{eq:resultant_polar_pdf_a2} \end{equation}\] which is the Rice (Rician) distribution.
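Finally, a Monte Carlo sketch compares simulated amplitudes \(A=|V|\) against SciPy's Rice density. The setup mirrors the illustrative phasor simulation above (again an assumption, not from the text); note that scipy.stats.rice parameterizes the distribution by the shape \(b=s/\sigma\) together with scale \(\sigma\):

```python
# Monte Carlo check of the Rice density, assuming the same illustrative
# setup as the phasor simulation above (s = 3, Rayleigh amplitudes, N = 100).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N, trials = 100, 50_000
s = 3.0                                        # deterministic amplitude, on the real axis

alpha = rng.rayleigh(scale=1.0, size=(trials, N))
phi = rng.uniform(-np.pi, np.pi, size=(trials, N))
V = s + (alpha * np.exp(1j * phi)).sum(axis=1) / np.sqrt(N)

sigma = np.sqrt((alpha**2).mean() / 2)         # sigma^2 = <alpha^2>/2 ~ 1
amp = np.abs(V)

hist, edges = np.histogram(amp, bins=60, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])

# scipy.stats.rice uses the shape parameter b = s/sigma with scale = sigma.
f_A = stats.rice.pdf(mids, b=s / sigma, scale=sigma)
print(np.max(np.abs(hist - f_A)))              # small (Monte Carlo error)
```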

Figure 5: Probability density function of the amplitude (Rician distribution) for different values of \(k = s/\sigma\).
