Time Series Analysis (tsa)

statsmodels.tsa contains model classes and functions that are useful for time series analysis. Basic models include univariate autoregressive (AR) models, vector autoregressive (VAR) models and univariate autoregressive moving average (ARMA) models. Non-linear models include Markov switching dynamic regression and autoregression. The module also provides descriptive statistics for time series, for example the autocorrelation and partial autocorrelation functions and the periodogram, as well as the corresponding theoretical properties of ARMA and related processes. It further includes methods to work with autoregressive and moving average lag-polynomials. Additionally, related statistical tests and some useful helper functions are available.

Estimation is done either by exact or conditional maximum likelihood or by conditional least-squares, using either the Kalman filter or direct filters.

Currently, functions and classes have to be imported from the corresponding module, but the main classes will be made available in the statsmodels.tsa namespace. The module structure within statsmodels.tsa is:

  • stattools : empirical properties and tests, acf, pacf, granger-causality, adf unit root test, kpss test, bds test, ljung-box test and others.

  • ar_model : univariate autoregressive process, estimation with conditional and exact maximum likelihood and conditional least-squares

  • arima_model : univariate ARMA process, estimation with conditional and exact maximum likelihood and conditional least-squares

  • statespace : Comprehensive statespace model specification and estimation. See the statespace documentation.

  • vector_ar, var : vector autoregressive process (VAR) and vector error correction models, estimation, impulse response analysis, forecast error variance decompositions, and data visualization tools. See the vector_ar documentation.

  • kalmanf : estimation classes for ARMA and other models with exact MLE using Kalman Filter

  • arima_process : properties of arma processes with given parameters; this includes tools to convert between ARMA, MA and AR representations, as well as acf, pacf, spectral density, impulse response function and similar

  • sandbox.tsa.fftarma : similar to arma_process but working in frequency domain

  • tsatools : additional helper functions, e.g. to create arrays of lagged variables, construct regressors for trend, detrend and similar

  • filters : helper functions for filtering time series

  • regime_switching : Markov switching dynamic regression and autoregression models
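
For example, assuming a reasonably recent statsmodels release (0.11 or later for AutoReg), the usual import patterns look like this; the specific functions and classes shown are only illustrative choices:

    # Import individual functions or classes from their modules ...
    from statsmodels.tsa.stattools import acf, adfuller
    from statsmodels.tsa.ar_model import AutoReg
    from statsmodels.tsa.arima_process import ArmaProcess

    # ... or use the convenience namespaces, which re-export the main classes.
    import statsmodels.api as sm          # sm.tsa points to statsmodels.tsa.api
    from statsmodels.tsa.api import VAR, SARIMAX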

Some additional functions that are also useful for time series analysis are in other parts of statsmodels, for example additional statistical tests.

Some related functions are also available in matplotlib, nitime, and scikits.talkbox. Those functions are designed more for use in signal processing, where longer time series are available, and more often work in the frequency domain.

Descriptive Statistics and Tests

  • stattools.acovf(x[, unbiased, demean, fft, …]) : Estimate autocovariances.
  • stattools.acf(x[, unbiased, nlags, qstat, …]) : Calculate the autocorrelation function.
  • stattools.pacf(x[, nlags, method, alpha]) : Partial autocorrelation estimate.
  • stattools.pacf_yw(x[, nlags, method]) : Partial autocorrelation estimated with non-recursive yule_walker.
  • stattools.pacf_ols(x[, nlags, efficient, …]) : Calculate partial autocorrelations via OLS.
  • stattools.pacf_burg(x[, nlags, demean]) : Calculate Burg's partial autocorrelation estimator.
  • stattools.ccovf(x, y[, unbiased, demean]) : Calculate the cross-covariance between two series.
  • stattools.ccf(x, y[, unbiased]) : The cross-correlation function.
  • stattools.periodogram(x) : Compute the periodogram for the natural frequency of x.
  • stattools.adfuller(x[, maxlag, regression, …]) : Augmented Dickey-Fuller unit root test.
  • stattools.kpss(x[, regression, nlags, store]) : Kwiatkowski-Phillips-Schmidt-Shin test for stationarity.
  • stattools.zivot_andrews : Zivot-Andrews structural-break unit-root test.
  • stattools.coint(y0, y1[, trend, method, …]) : Test for no-cointegration of a univariate equation.
  • stattools.bds(x[, max_dim, epsilon, distance]) : BDS test statistic for independence of a time series.
  • stattools.q_stat(x, nobs[, type]) : Compute the Ljung-Box Q statistic.
  • stattools.grangercausalitytests(x, maxlag[, …]) : Four tests for Granger non-causality of two time series.
  • stattools.levinson_durbin(s[, nlags, isacov]) : Levinson-Durbin recursion for autoregressive processes.
  • stattools.innovations_algo(acov[, nobs, rtol]) : Innovations algorithm to convert autocovariances to MA parameters.
  • stattools.innovations_filter(endog, theta) : Filter observations using the innovations algorithm.
  • stattools.levinson_durbin_pacf(pacf[, nlags]) : Levinson-Durbin algorithm that returns the acf and ar coefficients.
  • stattools.arma_order_select_ic(y[, max_ar, …]) : Compute information criteria for many ARMA models.
  • x13.x13_arima_select_order(endog[, …]) : Perform automatic seasonal ARIMA order identification using x12/x13 ARIMA.
  • x13.x13_arima_analysis(endog[, maxorder, …]) : Perform x13-arima analysis for monthly or quarterly data.
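
A minimal sketch of how a few of these functions are typically called; the simulated random-walk series and the parameter choices are only illustrative:

    import numpy as np
    from statsmodels.tsa.stattools import acf, pacf, adfuller, kpss

    rng = np.random.default_rng(12345)
    x = rng.standard_normal(200).cumsum()    # a random walk, so it contains a unit root

    autocorr = acf(x, nlags=20, fft=True)    # autocorrelations at lags 0..20
    partial = pacf(x, nlags=20)              # partial autocorrelations
    adf_stat, pvalue, usedlag, nobs, crit, icbest = adfuller(x)
    kpss_stat, kpss_pvalue, lags, kpss_crit = kpss(x, regression="c")
    print(pvalue, kpss_pvalue)               # ADF should not reject; KPSS should reject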

Estimation

The following are the main estimation classes (which can be accessed through statsmodels.tsa.api) and their result classes.

Univariate Autoregressive Processes (AR)

Beginning in version 0.11, Statsmodels has introduced a new class dedicated to autoregressive models.

  • ar_model.AutoReg(endog, lags[, trend, …]) : Autoregressive AR-X(p) model.
  • ar_model.AutoRegResults(model, params, …) : Class to hold results from fitting an AutoReg model.
  • ar_model.ar_select_order(endog, maxlag[, …]) : Autoregressive AR-X(p) model order selection.

The ar_model.AutoReg model estimates parameters using conditional MLE (OLS), and supports exogenous regressors (an AR-X model) and seasonal effects.
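
A minimal sketch of the new class on simulated data, assuming statsmodels 0.11 or later; the lag choices and the use of ar_select_order are only illustrative:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.ar_model import AutoReg, ar_select_order

    rng = np.random.default_rng(0)
    y = pd.Series(rng.standard_normal(250)).cumsum()

    # Pick a lag length by information criterion, then fit by conditional MLE (OLS).
    sel = ar_select_order(y, maxlag=8)
    res = AutoReg(y, lags=sel.ar_lags, trend="c").fit()
    print(res.summary())

    fcast = res.predict(start=len(y), end=len(y) + 9)   # 10-step out-of-sample forecast
    # Exogenous regressors (exog=...) and seasonal terms (seasonal=True, period=...)
    # can be added through the corresponding keyword arguments.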

AR-X and related models can also be fitted with the arima.ARIMA class and the SARIMAX class (using full MLE via the Kalman Filter).

Finally, the old class, ar_model.AR, is still available but it has been deprecated.

  • ar_model.AR(endog[, dates, freq, missing]) : Autoregressive AR(p) model.
  • ar_model.ARResults(model, params[, …]) : Class to hold results from fitting an AR model.

Autoregressive Moving-Average Processes (ARMA) and Kalman Filter

Basic ARIMA model and results classes are as follows:

  • arima_model.ARMA(endog, order[, exog, …]) : Autoregressive Moving Average ARMA(p,q) model.
  • arima_model.ARMAResults(model, params[, …]) : Class to hold results from fitting an ARMA model.
  • arima_model.ARIMA(endog, order[, exog, …]) : Autoregressive Integrated Moving Average ARIMA(p,d,q) model.
  • arima_model.ARIMAResults(model, params[, …]) : Class to hold results from fitting an ARIMA model.

However, beginning in version 0.11, Statsmodels has introduced a new class dedicated to ARIMA models. While this class is still in a testing phase, it should be the starting point for most users going forward:

  • arima.model.ARIMA(endog[, exog, order, …]) : Autoregressive Integrated Moving Average (ARIMA) model, and extensions.
  • arima.model.ARIMAResults(model, params, …) : Class to hold results from fitting an ARIMA model.

The arima.model.ARIMA model allows estimating parameters by various methods (including conditional MLE via the Hannan-Rissanen method and full MLE via the Kalman filter). Since it is a special case of the SARIMAX model, it includes all features of state space models (including prediction / forecasting, residual diagnostics, simulation and impulse responses, etc.).
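
For instance, a minimal sketch of fitting and forecasting with the new class, assuming statsmodels 0.11 or later; the simulated data and the order (1, 1, 1) are only illustrative:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    y = rng.standard_normal(300).cumsum()    # an integrated series, so d = 1 is sensible

    res = ARIMA(y, order=(1, 1, 1), trend="n").fit()   # full MLE via the Kalman filter by default
    print(res.summary())

    fcast = res.forecast(steps=12)                           # 12-step-ahead point forecasts
    diag = res.test_serial_correlation(method="ljungbox")    # state space residual diagnostics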

Exponential Smoothing

Linear and non-linear exponential smoothing models are available:

  • holtwinters.ExponentialSmoothing(endog[, …]) : Holt-Winters' exponential smoothing.
  • holtwinters.SimpleExpSmoothing(endog) : Simple exponential smoothing.
  • holtwinters.Holt(endog[, exponential, damped]) : Holt's exponential smoothing.
  • holtwinters.HoltWintersResults(model, …) : Holt-Winters' exponential smoothing results.
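
A minimal sketch of the Holt-Winters classes on a simulated monthly series; the seasonal period of 12 and the additive specification are only illustrative choices:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing, SimpleExpSmoothing

    idx = pd.date_range("2015-01-01", periods=48, freq="MS")
    rng = np.random.default_rng(2)
    y = pd.Series(10 + 3 * np.sin(2 * np.pi * np.arange(48) / 12)
                  + 0.2 * np.arange(48) + rng.standard_normal(48), index=idx)

    ses = SimpleExpSmoothing(y).fit(smoothing_level=0.5)    # fixed smoothing parameter
    hw = ExponentialSmoothing(y, trend="add", seasonal="add",
                              seasonal_periods=12).fit()    # additive Holt-Winters
    fcast = hw.forecast(24)                                 # two years of forecasts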

Linear exponential smoothing models have also been separately implemented as a special case of the state space framework. Although this approach does not allow for the non-linear (multiplicative) exponential smoothing models, it includes all features of state space models (including prediction / forecasting, residual diagnostics, simulation and impulse responses, etc.).

  • statespace.exponential_smoothing.ExponentialSmoothing(endog) : Linear exponential smoothing models.
  • statespace.exponential_smoothing.ExponentialSmoothingResults(…) : Results from fitting a linear exponential smoothing model.

ARMA Process

The following are tools to work with the theoretical properties of an ARMA process for given lag-polynomials.

  • arima_process.ArmaProcess([ar, ma, nobs]) : Theoretical properties of an ARMA process for specified lag-polynomials.
  • arima_process.ar2arma(ar_des, p, q[, n, …]) : Find an ARMA approximation to an AR process.
  • arima_process.arma2ar(ar, ma[, lags]) : A finite-lag AR approximation of an ARMA process.
  • arima_process.arma2ma(ar, ma[, lags]) : A finite-lag approximate MA representation of an ARMA process.
  • arima_process.arma_acf(ar, ma[, lags]) : Theoretical autocorrelation function of an ARMA process.
  • arima_process.arma_acovf(ar, ma[, nobs, …]) : Theoretical autocovariance function of an ARMA process.
  • arima_process.arma_generate_sample(ar, ma, …) : Simulate data from an ARMA process.
  • arima_process.arma_impulse_response(ar, ma) : Compute the impulse response function (MA representation) for an ARMA process.
  • arima_process.arma_pacf(ar, ma[, lags]) : Theoretical partial autocorrelation function of an ARMA process.
  • arima_process.arma_periodogram(ar, ma[, …]) : Periodogram for an ARMA process given by lag-polynomials ar and ma.
  • arima_process.deconvolve(num, den[, n]) : Deconvolve the divisor out of the signal (polynomial division for n terms).
  • arima_process.index2lpol(coeffs, index) : Expand coefficients to a lag polynomial.
  • arima_process.lpol2index(ar) : Remove zeros from a lag polynomial.
  • arima_process.lpol_fiar(d[, n]) : AR representation of fractional integration.
  • arima_process.lpol_fima(d[, n]) : MA representation of fractional integration.
  • arima_process.lpol_sdiff(s) : Return coefficients for the seasonal difference (1 - L^s).
  • ArmaFft(ar, ma, n) : FFT tools for ARMA processes.
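
A minimal sketch of ArmaProcess, illustrating the lag-polynomial convention (coefficients include the zero lag, and the AR coefficients enter with the opposite sign of the process parameters):

    import numpy as np
    from statsmodels.tsa.arima_process import ArmaProcess

    # The process y_t = 0.75 * y_{t-1} + e_t + 0.25 * e_{t-1}
    ar = np.array([1.0, -0.75])
    ma = np.array([1.0, 0.25])

    proc = ArmaProcess(ar, ma)
    print(proc.isstationary, proc.isinvertible)   # checks the roots of both polynomials
    theoretical_acf = proc.acf(lags=10)           # theoretical autocorrelations
    sample = proc.generate_sample(nsample=500)    # simulate 500 observations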

Statespace Models

See the statespace documentation.

Vector ARs and Vector Error Correction Models

See the vector_ar documentation.

Regime switching models

  • MarkovRegression(endog, k_regimes[, trend, …]) : First-order k-regime Markov switching regression model.
  • MarkovAutoregression(endog, k_regimes, order) : Markov switching autoregression model.
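
A minimal sketch of a two-regime switching-mean model on simulated data; the regime means and sample sizes are only illustrative:

    import numpy as np
    from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

    rng = np.random.default_rng(3)
    # A level shift halfway through the sample: two regimes with different means.
    y = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

    res = MarkovRegression(y, k_regimes=2, trend="c").fit()
    print(res.summary())
    probs = res.smoothed_marginal_probabilities   # smoothed regime probabilities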

Time Series Filters

  • bkfilter(x[, low, high, K]) : Filter a time series using the Baxter-King bandpass filter.
  • hpfilter(x[, lamb]) : Hodrick-Prescott filter.
  • cffilter(x[, low, high, drift]) : Christiano-Fitzgerald asymmetric, random walk filter.
  • convolution_filter(x, filt[, nsides]) : Linear filtering via convolution.
  • recursive_filter(x, ar_coeff[, init]) : Autoregressive, or recursive, filtering.
  • miso_lfilter(ar, ma, x[, useic]) : Filter multiple time series into a single time series.
  • fftconvolve3(in1[, in2, in3, mode]) : Convolve two N-dimensional arrays using FFT.
  • fftconvolveinv(in1, in2[, mode]) : Convolve two N-dimensional arrays using FFT.
  • seasonal_decompose(x[, model, filt, period, …]) : Seasonal decomposition using moving averages.
  • STL(endog[, period, seasonal, trend, …]) : Season-Trend decomposition using LOESS.
  • DecomposeResult(observed, seasonal, trend, resid) : Results class for seasonal decompositions.
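
A minimal sketch of a filter and the two decomposition classes on a simulated monthly series, assuming a recent release; the smoothing parameter lamb=129600 (a common choice for monthly data) and period=12 are only illustrative:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter
    from statsmodels.tsa.seasonal import STL, seasonal_decompose

    idx = pd.date_range("2010-01-01", periods=120, freq="MS")
    rng = np.random.default_rng(4)
    y = pd.Series(5 * np.sin(2 * np.pi * np.arange(120) / 12)
                  + 0.05 * np.arange(120) + rng.standard_normal(120), index=idx)

    cycle, trend = hpfilter(y, lamb=129600)                       # Hodrick-Prescott trend/cycle split
    decomp = seasonal_decompose(y, model="additive", period=12)   # moving-average decomposition
    stl_res = STL(y, period=12).fit()                             # LOESS-based decomposition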

TSA Tools

  • add_lag(x[, col, lags, drop, insert]) : Returns an array with lags included given an array.
  • add_trend(x[, trend, prepend, has_constant]) : Add a trend and/or constant to an array.
  • detrend(x[, order, axis]) : Detrend an array with a trend of given order along axis 0 or 1.
  • lagmat(x, maxlag[, trim, original, use_pandas]) : Create 2d array of lags.
  • lagmat2ds(x, maxlag0[, maxlagex, dropex, …]) : Generate lagmatrix for 2d array, columns arranged by variables.
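
A minimal sketch of the lag and trend helpers; the array contents are only illustrative:

    import numpy as np
    from statsmodels.tsa.tsatools import add_trend, detrend, lagmat

    x = np.arange(10.0)

    lags = lagmat(x, maxlag=2, trim="both", original="in")   # columns: x, x lagged once, x lagged twice
    with_trend = add_trend(x[:, None], trend="ct")           # append constant and linear-trend columns
    residual = detrend(x, order=1)                           # remove a fitted linear trend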

VARMA Process

  • VarmaPoly(ar[, ma]) : Class to keep track of the VARMA polynomial format.

Interpolation

  • dentonm(indicator, benchmark[, freq]) : Modified Denton's method to convert low-frequency to high-frequency data.