Stationarity and Autocorrelation

A stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time.[1] Consequently, parameters such as mean and variance also do not change over time. To get an intuition of stationarity, one can imagine a frictionless pendulum. It swings back and forth in an oscillatory motion, yet the amplitude and frequency remain constant. Although the pendulum is moving, the process is stationary as its "statistics" are constant (frequency and amplitude). However, if a force were to be applied to the pendulum (for example, friction with the air), either the frequency or amplitude would change, thus making the process non-stationary.[2]

Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data are often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, which can be due either to the presence of a unit root or of a deterministic trend. In the former case of a unit root, stochastic shocks have permanent effects, and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend-stationary process, and stochastic shocks have only transitory effects after which the variable tends toward a deterministically evolving (non-constant) mean.

A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not exhibit trend-like behavior is a cyclostationary process, which is a stochastic process that varies cyclically with time.

Strict-sense stationarity

Definition

Formally, let [math]\left\{X_t\right\}[/math] be a stochastic process and let [math]F_{X}(x_{t_1 + \tau}, \ldots, x_{t_n + \tau})[/math] represent the cumulative distribution function of the unconditional (i.e., with no reference to any particular starting value) joint distribution of [math]\left\{X_t\right\}[/math] at times [math]t_1 + \tau, \ldots, t_n + \tau[/math]. Then, [math]\left\{X_t\right\}[/math] is said to be strictly stationary, strongly stationary or strict-sense stationary if[3]:p. 155

[[math]]\begin{equation}\label{sss} F_{X}(x_{t_1+\tau} ,\ldots, x_{t_n+\tau}) = F_{X}(x_{t_1},\ldots, x_{t_n}) \quad \text{for all } \tau,t_1, \ldots, t_n \in \mathbb{R} \text{ and for all } n \in \mathbb{N}\end{equation}[[/math]]

Since [math]\tau[/math] does not affect [math]F_X(\cdot)[/math], [math] F_{X}[/math] is not a function of time.

Examples

Two simulated time series processes, one stationary and the other non-stationary, are shown above. The augmented Dickey–Fuller (ADF) test statistic is reported for each process; non-stationarity cannot be rejected for the second process at a 5% significance level.
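 
Such a comparison can be reproduced directly. The following is a minimal sketch, assuming Python with numpy and the statsmodels package (function statsmodels.tsa.stattools.adfuller): it simulates a stationary AR(1) series and a random walk with a unit root, then reports the ADF statistic and p-value for each. The coefficients and sample size are arbitrary choices for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 500
eps = rng.standard_normal(n)

# Stationary AR(1): x_t = 0.5 * x_{t-1} + eps_t
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

# Non-stationary random walk (unit root): y_t = y_{t-1} + eps_t
walk = np.cumsum(eps)

for name, series in [("AR(1)", ar1), ("random walk", walk)]:
    stat, pvalue = adfuller(series)[:2]
    # A small p-value rejects the unit-root (non-stationarity) null hypothesis.
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```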

Example 1 (Constant Process)

Let [math]Y[/math] be any scalar random variable, and define a time-series [math]\left\{X_t\right\}[/math], by [math]X_t=Y[/math] for all [math]t[/math]. Then [math]\left\{X_t\right\}[/math] is a stationary time series, for which realisations consist of a series of constant values, with a different constant value for each realisation. A law of large numbers does not apply in this case, as the limiting value of an average from a single realisation takes the random value determined by [math]Y[/math], rather than the expected value of [math]Y[/math].
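 
The failure of the law of large numbers here is easy to see numerically. The sketch below is illustrative only; the choice [math]Y \sim N(0,1)[/math] is an assumption made for the example. The time average of each realisation equals that realisation's draw of [math]Y[/math], not [math]\operatorname{E}[Y]=0[/math].

```python
import numpy as np

rng = np.random.default_rng(1)

def realisation(n=1000):
    y = rng.standard_normal()   # one draw of Y per realisation (E[Y] = 0)
    return np.full(n, y)        # X_t = Y for every t

# The time average of each realisation equals that realisation's Y,
# not the expected value E[Y] = 0.
for _ in range(3):
    print(realisation().mean())
```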

Example 2

As a further example of a stationary process for which any single realisation has an apparently noise-free structure, let [math]Y[/math] have a uniform distribution on [math](0,2\pi][/math] and define the time series [math]\left\{X_t\right\}[/math] by

[[math]]X_t=\cos (t+Y) \quad \text{ for } t \in \mathbb{R}. [[/math]]

Then [math]\left\{X_t\right\}[/math] is strictly stationary since ([math] (t+ Y) [/math] modulo [math] 2 \pi [/math]) follows the same uniform distribution as [math] Y [/math] for any [math] t [/math].
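 
A quick simulation makes this plausible: across many independent draws of [math]Y[/math], the distribution of [math]X_t[/math] looks the same for every [math]t[/math]. The snippet below is only a numerical sanity check of the first two moments (which should be [math]0[/math] and [math]1/2[/math] regardless of [math]t[/math]); the sample size and the particular values of [math]t[/math] are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.uniform(0.0, 2.0 * np.pi, size=100_000)   # one Y per realisation

# The marginal distribution of X_t across realisations does not depend on t:
# the mean is always close to 0 and the variance close to 1/2.
for t in [0.0, 1.0, 7.3]:
    X_t = np.cos(t + Y)
    print(t, round(X_t.mean(), 3), round(X_t.var(), 3))
```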

Example 3 (White Noise Process)

A discrete-time stochastic process [math]W(n)[/math] is called white noise if its mean is equal to zero for all [math]n[/math], i.e. [math]\operatorname{E}[W(n)] = 0[/math], and if the autocorrelation function [math]R_{W}(n) = \operatorname{E}[W(k+n)W(k)][/math] has a nonzero value only for [math]n = 0[/math]. Note that a white noise process is not necessarily strictly stationary. Let [math]\omega[/math] be a random variable uniformly distributed in the interval [math](0, 2\pi)[/math] and define the time series [math]\left\{z_t\right\}[/math] by

[[math]]z_t=\cos(t\omega) \quad (t=1,2,...) [[/math]]

Then

[[math]] \begin{align*} \operatorname{E}(z_t) &= \frac{1}{2\pi} \int_0^{2\pi} \cos(t\omega) \,d\omega = 0,\\ \operatorname{Var}(z_t) &= \frac{1}{2\pi} \int_0^{2\pi} \cos^2(t\omega) \,d\omega = \tfrac{1}{2},\\ \operatorname{Cov}(z_t, z_j) &= \frac{1}{2\pi} \int_0^{2\pi} \cos(t\omega)\cos(j\omega) \,d\omega = 0 \quad \forall t\neq j. \end{align*} [[/math]]

So [math]\{z_t\}[/math] is white noise; however, it is not strictly stationary. For example, [math]z_2 = \cos(2\omega) = 2\cos^2(\omega) - 1 = 2z_1^2 - 1[/math] in every realisation, so the joint distribution of [math](z_1, z_2)[/math] is concentrated on a parabola, whereas the joint distribution of [math](z_2, z_3)[/math] is not; the joint distributions therefore change under a time shift.
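 
These ensemble moments can be checked by simulation. The sketch below (the sample size and the set of times are arbitrary choices) draws many realisations of [math]\omega[/math] and estimates the means, variances and covariances of [math]z_1, \ldots, z_5[/math] across realisations.

```python
import numpy as np

rng = np.random.default_rng(3)
omega = rng.uniform(0.0, 2.0 * np.pi, size=200_000)  # one omega per realisation

t_values = np.arange(1, 6)
Z = np.cos(np.outer(t_values, omega))   # row k holds z_{t_k} across realisations

print(np.round(Z.mean(axis=1), 3))      # ensemble means, all close to 0
print(np.round(Z.var(axis=1), 3))       # ensemble variances, all close to 1/2
print(np.round(np.cov(Z), 3))           # off-diagonal covariances close to 0
```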

Weak or wide-sense stationarity

A weaker form of stationarity commonly employed in signal processing is known as weak-sense stationarity, wide-sense stationarity (WSS), or covariance stationarity. WSS random processes only require that the first moment (i.e. the mean) and the autocovariance do not vary with respect to time and that the second moment is finite for all times. Any strictly stationary process with a finite mean and a finite covariance is also WSS.[4]:p. 299

So, a continuous time random process [math]\left\{X_t\right\}[/math] which is WSS has the following restrictions on its mean function [math]m_X(t) \triangleq \operatorname E[X_t][/math] and autocovariance function [math]K_{XX}(t_1, t_2) \triangleq \operatorname E[(X_{t_1}-m_X(t_1))(X_{t_2}-m_X(t_2))][/math]:

[[math]] \begin{align*} & m_X(t) = m_X(t + \tau) & & \text{for all } \tau \in \mathbb{R} \\ & K_{XX}(t_1, t_2) = K_{XX}(t_1 - t_2, 0) & & \text{for all } t_1,t_2 \in \mathbb{R} \\ & \operatorname E[|X(t)|^2] \lt \infty & & \text{for all } t \in \mathbb{R} \end{align*} [[/math]]

The first property implies that the mean function [math]m_X(t)[/math] must be constant. The second property implies that the covariance function depends only on the difference between [math]t_1[/math] and [math]t_2[/math] and only needs to be indexed by one variable rather than two variables.[3]:p. 159 Thus, instead of writing,

[[math]]K_{XX}(t_1 - t_2, 0)[[/math]]

the notation is often abbreviated by the substitution [math]\tau = t_1 - t_2[/math]:

[[math]]K_{XX}(\tau) \triangleq K_{XX}(t_1 - t_2, 0)[[/math]]

This also implies that the autocorrelation depends only on [math]\tau = t_1 - t_2[/math], that is

[[math]]R_{XX}(t_1,t_2) = R_{XX}(t_1-t_2,0) \triangleq R_{XX}(\tau).[[/math]]

The third property says that the second moments must be finite for any time [math]t[/math].
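 
As an illustration of the three conditions, the following sketch simulates many realisations of a stationary AR(1) process (an assumed example, with coefficient 0.7 and unit-variance innovations) and checks empirically that the ensemble mean is constant, that the autocovariance at a fixed lag does not depend on the starting time, and that the second moment stays bounded.

```python
import numpy as np

rng = np.random.default_rng(4)
n_real, n_time, phi = 20_000, 200, 0.7

# Many realisations of a stationary AR(1) process with unit-variance innovations.
X = np.zeros((n_real, n_time))
X[:, 0] = rng.standard_normal(n_real) / np.sqrt(1 - phi**2)   # stationary initial condition
for t in range(1, n_time):
    X[:, t] = phi * X[:, t - 1] + rng.standard_normal(n_real)

# 1) Constant mean function: m_X(t) is close to 0 for every t.
print(np.abs(X.mean(axis=0)).max())

# 2) Lag-only dependence: the ensemble average of X_t * X_{t+5} (means are ~0)
#    is roughly the same for every t, near the theoretical phi**5 / (1 - phi**2).
lag = 5
K = np.mean(X[:, :-lag] * X[:, lag:], axis=0)
print(K.min(), K.max(), phi**lag / (1 - phi**2))

# 3) Finite second moment: E[|X_t|^2] stays bounded (about 1 / (1 - phi**2)).
print(X.var(axis=0).max())
```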

Differencing

One way to make some time series stationary is to compute the differences between consecutive observations. This is known as differencing. Differencing can help stabilize the mean of a time series by removing changes in the level of a time series, and so eliminating trends. This can also remove seasonality, if differences are taken appropriately (e.g. differencing observations one year apart to remove a yearly seasonal pattern).

Transformations such as logarithms can help to stabilize the variance of a time series.
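 
A minimal sketch of both ideas, assuming Python with numpy and a monthly series with an exponential trend and yearly seasonality (the series itself is fabricated for illustration): the log transform stabilises the variance, the first difference removes the trend in the mean, and a lag-12 difference removes the yearly seasonal pattern.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(120)   # e.g. 10 years of monthly observations

# Fabricated series: exponential trend, yearly (period-12) seasonality, noise.
y = np.exp(0.02 * t) * (10 + 2 * np.sin(2 * np.pi * t / 12)) + rng.normal(0, 0.5, t.size)

log_y = np.log(y)                 # log transform stabilises the variance
d1 = np.diff(log_y)               # first difference removes the trend in the mean
d12 = log_y[12:] - log_y[:-12]    # lag-12 difference removes the yearly seasonal pattern
```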

One of the ways of identifying non-stationary time series is the ACF plot. Sometimes, seasonal patterns will be more visible in the ACF plot than in the original time series; however, this is not always the case.[5] Non-stationary time series can nevertheless look stationary in a plot of the raw data; in the ACF plot, the autocorrelations of a non-stationary series typically decay slowly, whereas those of a stationary series drop to zero relatively quickly.
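 
For example, the sample ACF of a random walk decays very slowly, while the ACF of white noise drops to roughly zero after lag 0. A small sketch, assuming Python with numpy and the statsmodels function statsmodels.tsa.stattools.acf:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(6)
eps = rng.standard_normal(400)

white = eps             # stationary white noise
walk = np.cumsum(eps)   # non-stationary random walk

# ACF of the stationary series drops to roughly zero after lag 0;
# ACF of the random walk stays close to 1 and decays very slowly.
print(np.round(acf(white, nlags=5), 2))
print(np.round(acf(walk, nlags=5), 2))
```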

Another approach to identifying non-stationarity is to look at the Laplace transform of a series, which will identify both exponential trends and sinusoidal seasonality (complex exponential trends). Related techniques from signal analysis such as the wavelet transform and Fourier transform may also be helpful.
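 
As a rough sketch of the Fourier-transform idea (the series below is fabricated, with a period of 12 steps plus noise), the discrete Fourier transform of the demeaned series shows a sharp peak at the seasonal frequency:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(240)
y = 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)   # period-12 seasonality plus noise

# A sharp peak in the spectrum of the demeaned series reveals the seasonal frequency.
spectrum = np.abs(np.fft.rfft(y - y.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0)
peak = freqs[np.argmax(spectrum)]
print(peak, 1.0 / peak)   # roughly 0.083 cycles per step, i.e. a period of about 12
```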

Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.

Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.

Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.

Autocorrelation of stochastic processes

In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let [math]\left\{ X_t \right\}[/math] be a random process, and [math]t[/math] be any point in time ([math]t[/math] may be an integer for a discrete-time process or a real number for a continuous-time process). Then [math]X_t[/math] is the value (or realization) produced by a given run of the process at time [math]t[/math]. Suppose that the process has mean [math]\mu_t[/math] and variance [math]\sigma_t^2[/math] at time [math]t[/math], for each [math]t[/math]. Then the definition of the auto-correlation function between times [math]t_1[/math] and [math]t_2[/math] is[6]:p.388[3]:p.165

[[math]]\operatorname{R}_{XX}(t_1,t_2) = \operatorname{E} \left[ X_{t_1} \overline{X}_{t_2}\right][[/math]]

where the bar represents complex conjugation.

Subtracting the mean before multiplication yields the auto-covariance function between times [math]t_1[/math] and [math]t_2[/math]:[6]:p.392[3]:p.168

[[math]]\operatorname{K}_{XX}(t_1,t_2) = \operatorname{E} \left[ (X_{t_1} - \mu_{t_1})\overline{(X_{t_2} - \mu_{t_2})} \right] = \operatorname{E}\left[X_{t_1} \overline{X}_{t_2} \right] - \mu_{t_1}\overline{\mu}_{t_2}[[/math]]

Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law).
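 
When many independent realisations of the process are available, [math]\operatorname{R}_{XX}(t_1,t_2)[/math] and [math]\operatorname{K}_{XX}(t_1,t_2)[/math] can be estimated by averaging across realisations. The sketch below uses a random walk with unit-variance steps as an assumed example, for which [math]\operatorname{K}_{XX}(t_1,t_2)=\min(t_1,t_2)[/math], so the estimates at times 10 and 20 should both be close to 10.

```python
import numpy as np

rng = np.random.default_rng(8)
n_real, n_time = 50_000, 30

# Random walk with unit-variance steps: E[X_t] = 0 and K(t1, t2) = min(t1, t2).
steps = rng.standard_normal((n_real, n_time))
X = np.cumsum(steps, axis=1)

i1, i2 = 9, 19   # zero-based indices, i.e. times t1 = 10 and t2 = 20
R_hat = np.mean(X[:, i1] * X[:, i2])                 # ensemble estimate of R(t1, t2)
K_hat = R_hat - X[:, i1].mean() * X[:, i2].mean()    # subtract the (near-zero) means
print(R_hat, K_hat)   # both close to min(10, 20) = 10
```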

Definition for wide-sense stationary stochastic process

If [math]\left\{ X_t \right\}[/math] is a wide-sense stationary process then the mean [math]\mu[/math] and the variance [math]\sigma^2[/math] are time-independent, and further the autocovariance function depends only on the lag between [math]t_1[/math] and [math]t_2[/math]: the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and auto-correlation can be expressed as a function of the time-lag, and that this would be an even function of the lag [math]\tau=t_2-t_1[/math]. This gives the more familiar forms for the auto-correlation function[6]:p.395

[[math]]\operatorname{R}_{XX}(\tau) = \operatorname{E}\left[X_{t+\tau} \overline{X}_{t} \right][[/math]]

and the auto-covariance function:

[[math]]\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[ (X_{t+\tau} - \mu)\overline{(X_{t} - \mu)} \right] = \operatorname{E} \left[ X_{t+\tau} \overline{X}_{t} \right] - \mu\overline{\mu}[[/math]]

Properties

Symmetry property

The fact that the auto-correlation function [math]\operatorname{R}_{XX}[/math] is an even function can be stated as[3]:p.171

[[math]]\operatorname{R}_{XX}(t_1,t_2) = \overline{\operatorname{R}_{XX}(t_2,t_1)}[[/math]]

respectively for a WSS process:[3]:p.173

[[math]]\operatorname{R}_{XX}(\tau) = \overline{\operatorname{R}_{XX}(-\tau)} .[[/math]]

Maximum at zero

For a WSS process:[3]:p.174

[[math]]\left|\operatorname{R}_{XX}(\tau)\right| \leq \operatorname{R}_{XX}(0)[[/math]]

Notice that [math]\operatorname{R}_{XX}(0)[/math] is always real.

Cauchy–Schwarz inequality

The Cauchy–Schwarz inequality for stochastic processes:[6]:p.392

[[math]]\left|\operatorname{R}_{XX}(t_1,t_2)\right|^2 \leq \operatorname{E}\left[ |X_{t_1}|^2\right] \operatorname{E}\left[|X_{t_2}|^2\right][[/math]]
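 
For a real-valued WSS series, these properties can be checked numerically on a time-average estimate of [math]\operatorname{R}_{XX}(\tau)[/math]. The sketch below uses a short moving-average series as an assumed example; the biased estimator (dividing by the full sample size) is used because it satisfies the maximum-at-zero bound exactly.

```python
import numpy as np

rng = np.random.default_rng(9)
x = np.convolve(rng.standard_normal(2_000), np.ones(5) / 5, mode="valid")  # a simple WSS series

def R_hat(tau):
    """Biased time-average estimate of R_XX(tau) for a real-valued series."""
    tau = abs(tau)
    return np.dot(x[tau:], x[:x.size - tau]) / x.size

# Symmetry: R(tau) = R(-tau) for a real process.
print(R_hat(3), R_hat(-3))
# Maximum at zero / Cauchy-Schwarz: |R(tau)| <= R(0).
print(all(abs(R_hat(tau)) <= R_hat(0) for tau in range(1, 20)))
```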

References

  1. Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. NJ, USA: John Wiley & Sons. pp. 1–256. ISBN 978-1-119-38755-8.
  2. Laumann, Timothy O. (2016-09-02). "On the Stability of BOLD fMRI Correlations". Cerebral Cortex. doi:10.1093/cercor/bhw265. ISSN 1047-3211. PMID 27591147. PMC 6248456.
  3. Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
  4. Florescu, Ionut (2014). Probability and Stochastic Processes. John Wiley & Sons. ISBN 978-1-118-59320-2.
  5. "8.1 Stationarity and differencing | OTexts". www.otexts.org. Retrieved 2016-05-18.
  6. Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.
