nep-ets New Economics Papers
on Econometric Time Series
Issue of 2019‒02‒04
twelve papers chosen by
Jaqueson K. Galimberti
KOF Swiss Economic Institute

  1. Variational Bayesian inference in large Vector Autoregressions with hierarchical shrinkage By Deborah Gefang; Gary Koop; Aubrey Poon
  2. Will the real eigensystem VAR please stand up? A univariate primer By Leo Krippner
  3. Bootstrap Procedures for Detecting Multiple Persistence Shifts in a Heteroskedastic Time Series By Mohitosh Kejriwal; Xuewen Yu
  4. Volatility Models Applied to Geophysics and High Frequency Financial Market Data By Maria C Mariani; Md Al Masum Bhuiyan; Osei K Tweneboah; Hector Gonzalez-Huizar; Ionut Florescu
  5. Efficient Estimation of Nonparametric Regression in the Presence of Dynamic Heteroskedasticity By Linton, O.; Xiao, Z.
  6. Simple methods for consistent estimation of dynamic panel data sample selection models By Majid M. Al-Sadoon; Sergi Jiménez-Martín; Jose M. Labeaga
  7. Temporal Logistic Neural Bag-of-Features for Financial Time Series Forecasting Leveraging Limit Order Book Data By Nikolaos Passalis; Anastasios Tefas; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
  8. lassopack: Model selection and prediction with regularized regression in Stata By Achim Ahrens; Christian B. Hansen; Mark E. Schaffer
  9. Keynesian Models, Detrending, and the Method of Moments By MAO TAKONGMO, Charles Olivier
  10. The Wisdom of a Kalman Crowd By Ulrik W. Nash
  11. Quasi Maximum Likelihood Analysis of High Dimensional Constrained Factor Models By Li, Kunpeng; Li, Qi; Lu, Lina
  12. Taming the Factor Zoo: A Test of New Factors By Guanhao Feng; Stefano Giglio; Dacheng Xiu

  1. By: Deborah Gefang; Gary Koop; Aubrey Poon
    Abstract: Many recent papers in macroeconomics have used large Vector Autoregressions (VARs) involving a hundred or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital in achieving reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayes methods for large VARs which overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.
    Keywords: Variational inference, Vector Autoregression, Stochastic Volatility, Hierarchical Prior, Forecasting
    JEL: C11 C32 C53
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-08&r=all
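    Sketch: a minimal statement of the generic variational idea (the paper's specific hierarchical priors, factorization, and time-varying volatilities are detailed in the article). The VAR is
    $$ y_t = a_0 + A_1 y_{t-1} + \dots + A_p y_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma_t), $$
    and variational Bayes replaces posterior simulation with optimization: a tractable approximation $q(\theta)$ to the posterior of the parameters $\theta$ is chosen to maximize the evidence lower bound
    $$ \mathrm{ELBO}(q) = \mathbb{E}_q[\log p(Y, \theta)] - \mathbb{E}_q[\log q(\theta)] \le \log p(Y), $$
    which is equivalent to minimizing $\mathrm{KL}(q(\theta)\,\|\,p(\theta \mid Y))$.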
  2. By: Leo Krippner
    Abstract: I introduce the essential aspects of the eigensystem vector autoregression (EVAR), which allows VARs to be specified and estimated directly in terms of their eigensystem, using univariate examples for clarity. The EVAR guarantees non-explosive dynamics and, if included, non-redundant moving-average components. In the empirical application, constraining the EVAR eigenvalues to be real and positive leads to “desirable” impulse response functions and improved out-of-sample forecasts.
    Keywords: vector autoregression, moving average, lag polynomial
    JEL: C22 C32 C53
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-01&r=all
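    Sketch: a minimal univariate illustration (editorial, not the paper's EVAR estimator) of the core idea: parameterize an AR(p) by the roots of its lag polynomial rather than by its coefficients, so that keeping every root inside the unit circle makes non-explosive dynamics automatic.
```python
import numpy as np

def ar_coeffs_from_roots(roots):
    """Map the roots r_i of prod_i (1 - r_i L) to the coefficients a_k of
    y_t = a_1 y_{t-1} + ... + a_p y_{t-p} + e_t."""
    roots = np.asarray(roots, dtype=float)
    assert np.all(np.abs(roots) < 1.0), "roots must lie inside the unit circle"
    return -np.poly(roots)[1:]  # np.poly builds the monic polynomial from its roots

def simulate_ar(roots, n=500, sigma=1.0, seed=0):
    """Simulate a stationary AR(p) generated from the given roots."""
    a = ar_coeffs_from_roots(roots)
    p = len(a)
    rng = np.random.default_rng(seed)
    y = np.zeros(n + p)
    for t in range(p, n + p):
        y[t] = a @ y[t - p:t][::-1] + sigma * rng.standard_normal()
    return y[p:]

# Two real positive roots, the restriction favoured in the paper's application:
print(ar_coeffs_from_roots([0.9, 0.5]))  # [1.4, -0.45], i.e. (1 - 0.9L)(1 - 0.5L)
y = simulate_ar([0.9, 0.5])
```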
  3. By: Mohitosh Kejriwal; Xuewen Yu
    Abstract: This paper proposes new bootstrap procedures for detecting multiple persistence shifts in a time series driven by nonstationary volatility. The assumed volatility process can accommodate discrete breaks, smooth transition variation as well as trending volatility. We develop wild bootstrap sup-Wald tests of the null hypothesis that the process is either stationary [I(0)] or has a unit root [I(1)] throughout the sample. We also propose a sequential procedure to estimate the number of persistence breaks based on ordering the regime-specific bootstrap p-values. The asymptotic validity of the advocated procedures is established both under the null of stability and a variety of persistence change alternatives. Monte Carlo simulations support the use of a non-recursive scheme for generating the I(0) bootstrap samples and a partially recursive scheme for generating the I(1) bootstrap samples, especially when the data generating process contains an I(1) segment. A comparison with existing tests illustrates the finite sample improvements offered by our methods in terms of both size and power. An application to OECD inflation rates is included.
    Keywords: heteroskedasticity, multiple structural changes
    JEL: C22
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:pur:prukra:1308&r=all
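    Sketch: the wild-bootstrap mechanics in schematic form, using a toy statistic and a toy null fit (the paper's sup-Wald statistics for persistence shifts and its recursive scheme for the I(1) case are considerably more involved). Resampling residuals with Rademacher multipliers preserves a time-varying variance profile, which is why the wild bootstrap suits nonstationary volatility.
```python
import numpy as np

def wild_bootstrap_pvalue(y, statistic, fit_null, n_boot=499, seed=0):
    """Wild-bootstrap p-value: resample residuals from the null fit with
    Rademacher multipliers (a non-recursive, fixed-regressor scheme)."""
    rng = np.random.default_rng(seed)
    stat_obs = statistic(y)
    fitted, resid = fit_null(y)
    exceed = 0
    for _ in range(n_boot):
        flips = rng.choice([-1.0, 1.0], size=len(resid))  # Rademacher weights
        y_star = fitted + resid * flips
        exceed += statistic(y_star) >= stat_obs
    return (1 + exceed) / (1 + n_boot)

# Toy example: absolute first-order autocorrelation under a constant-mean null.
def toy_stat(y):
    yc = y - y.mean()
    return abs((yc[1:] @ yc[:-1]) / (yc @ yc))

def toy_null_fit(y):
    return np.full_like(y, y.mean()), y - y.mean()

y = np.random.default_rng(1).standard_normal(200)
print(wild_bootstrap_pvalue(y, toy_stat, toy_null_fit))
```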
  4. By: Maria C Mariani; Md Al Masum Bhuiyan; Osei K Tweneboah; Hector Gonzalez-Huizar; Ionut Florescu
    Abstract: This work is devoted to modeling geophysical and financial time series. A class of volatility models with time-varying parameters is presented to forecast the volatility of time series in a stationary environment. Modeling stationary time series with consistent properties facilitates prediction with greater certainty. Using GARCH and stochastic volatility models, estimated via Maximum Likelihood, we produce one-step-ahead volatility forecasts with +/- 2 standard prediction errors. We compare the stochastic volatility model, which relies on a filtering technique to obtain the conditional volatility, with the GARCH model. We conclude that the stochastic volatility model is a better forecasting tool than GARCH(1,1), since it is less conditioned by autoregressive past information.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.09145&r=all
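    Sketch: the GARCH(1,1) side of the comparison, fit by maximum likelihood with a one-step-ahead volatility forecast. This is an editorial illustration assuming the Python `arch` package and simulated placeholder data; the paper's stochastic volatility model, its filtering step, and the prediction-error bands are not shown.
```python
import numpy as np
from arch import arch_model  # assumes the `arch` package is installed

rng = np.random.default_rng(0)
returns = 100 * rng.standard_normal(1000)  # stand-in for a financial or geophysical series

am = arch_model(returns, mean='Constant', vol='Garch', p=1, q=1)
res = am.fit(disp='off')                   # maximum likelihood estimation
fcast = res.forecast(horizon=1)
sigma_next = np.sqrt(fcast.variance.iloc[-1, 0])
print(f"one-step-ahead volatility forecast: {sigma_next:.3f}")
```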
  5. By: Linton, O.; Xiao, Z.
    Abstract: We study the efficient estimation of nonparametric regression in the presence of heteroskedasticity. We focus our analysis on local polynomial estimation of nonparametric regressions with conditional heteroskedasticity in a time series setting. We introduce a weighted local polynomial regression smoother that takes account of the dynamic heteroskedasticity. We show that, although it is traditionally advised that one should not weight for heteroskedasticity in nonparametric regressions, in many popular nonparametric regression models our method has lower asymptotic variance than the usual unweighted procedures. We conduct a Monte Carlo investigation that confirms the efficiency gain over conventional nonparametric regression estimators in finite samples.
    Keywords: Efficiency; Heteroskedasticity; Local Polynomial Estimation; Nonparametric Regression.
    JEL: C13 C14
    Date: 2019–01–15
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1907&r=all
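    Sketch: the weighting idea stated for the local linear case (a hedged paraphrase, not the paper's full construction). With a first-stage estimate $\hat{\sigma}^2_t$ of the conditional variance, estimate $m(x)$ by the intercept $\hat{a}$ of
    $$ \min_{a,b}\ \sum_t \frac{1}{\hat{\sigma}^2_t}\, K_h(X_t - x)\, \bigl(Y_t - a - b\,(X_t - x)\bigr)^2, $$
    whereas the conventional local linear smoother sets the weights $1/\hat{\sigma}^2_t$ to one.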
  6. By: Majid M. Al-Sadoon; Sergi Jiménez-Martín; Jose M. Labeaga
    Abstract: We analyse the properties of generalised method of moments-instrumental variables (GMM-IV) estimators of AR(1) dynamic panel data sample selection models. We show the consistency of the first-differenced GMM-IV estimator of Arellano and Bond (1991), uncorrected for sample selection (a property also shared by the Anderson and Hsiao, 1982, proposal). Alternatively, the system GMM-IV estimator (Arellano and Bover, 1995, and Blundell and Bond, 1998) shows a moderate bias. We perform a Monte Carlo study to evaluate the finite sample properties of the proposed estimators. Our results confirm the absence of bias of the Arellano and Bond estimator under a variety of circumstances, as well as the small bias of the system estimator, mostly due to the correlation between the individual heterogeneity components in both the outcome and selection equations. However, we should not discard the system estimator because, in small samples, its performance is similar to or even better than that of the Arellano-Bond estimator. These results hold in dynamic models with exogenous, predetermined or endogenous covariates. They are especially relevant for practitioners using unbalanced panels either when there is selection of unknown form or when selection is difficult to model.
    Keywords: Panel data, sample selection, dynamic model, generalized method of moments
    JEL: J52 C23 C24
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1631&r=all
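    Sketch: the first-differenced GMM-IV moments underlying the consistency result, written for the AR(1) panel $y_{it} = \alpha y_{i,t-1} + \eta_i + \varepsilon_{it}$ (notation is editorial; the selection equation and covariates are omitted). First-differencing removes the individual effect $\eta_i$, and the Arellano-Bond conditions
    $$ \mathbb{E}\bigl[\, y_{i,t-s}\, \Delta\varepsilon_{it} \,\bigr] = 0, \qquad s \ge 2, $$
    identify $\alpha$; the paper's point is that these moments remain valid on the selected subsample, while the additional level moments used by system GMM induce a moderate bias.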
  7. By: Nikolaos Passalis; Anastasios Tefas; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
    Abstract: Time series forecasting is a crucial component of many important applications, ranging from forecasting the stock markets to energy load prediction. The high-dimensionality, velocity and variety of the data collected in these applications pose significant and unique challenges that must be carefully addressed for each of them. In this work, a novel Temporal Logistic Neural Bag-of-Features approach, that can be used to tackle these challenges, is proposed. The proposed method can be effectively combined with deep neural networks, leading to powerful deep learning models for time series analysis. However, combining existing BoF formulations with deep feature extractors poses significant challenges: the distribution of the input features is not stationary, tuning the hyper-parameters of the model can be especially difficult and the normalizations involved in the BoF model can cause significant instabilities during the training process. The proposed method is capable of overcoming these limitations by employing a novel adaptive scaling mechanism and replacing the classical Gaussian-based density estimation involved in the regular BoF model with a logistic kernel. The effectiveness of the proposed approach is demonstrated using extensive experiments on a large-scale financial time series dataset that consists of more than 4 million limit orders.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.08280&r=all
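    Sketch: a rough numpy rendering of a bag-of-features layer over a window of feature vectors: distances to a set of codewords are passed through a kernel and normalized into membership weights, which are averaged over time into a fixed-length histogram. The standard logistic density kernel is used below only as a placeholder; the paper's exact logistic formulation, its adaptive scaling mechanism, and the joint training with a deep feature extractor differ.
```python
import numpy as np

def logistic_kernel(d):
    # standard logistic density kernel, used here as a stand-in
    return 1.0 / (np.exp(d) + 2.0 + np.exp(-d))

def bof_histogram(features, codewords, scale=1.0):
    """features: (T, d) window of feature vectors; codewords: (K, d)."""
    dists = np.linalg.norm(features[:, None, :] - codewords[None, :, :], axis=-1)
    sims = logistic_kernel(dists / scale)               # (T, K) kernel responses
    memberships = sims / sims.sum(axis=1, keepdims=True)
    return memberships.mean(axis=0)                     # (K,) histogram fed to a classifier

rng = np.random.default_rng(0)
hist = bof_histogram(rng.standard_normal((50, 16)), rng.standard_normal((8, 16)))
print(hist.round(3), hist.sum())
```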
  8. By: Achim Ahrens; Christian B. Hansen; Mark E. Schaffer
    Abstract: This article introduces lassopack, a suite of programs for regularized regression in Stata. lassopack implements lasso, square-root lasso, elastic net, ridge regression, adaptive lasso and post-estimation OLS. The methods are suitable for the high-dimensional setting where the number of predictors $p$ may be large and possibly greater than the number of observations, $n$. We offer three different approaches for selecting the penalization (`tuning') parameters: information criteria (implemented in lasso2), $K$-fold cross-validation and $h$-step ahead rolling cross-validation for cross-section, panel and time-series data (cvlasso), and theory-driven (`rigorous') penalization for the lasso and square-root lasso for cross-section and panel data (rlasso). We discuss the theoretical framework and practical considerations for each approach. We also present Monte Carlo results to compare the performance of the penalization approaches.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.05397&r=all
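    Sketch: lassopack itself is a Stata package; as a rough Python analogue of two of its tuning approaches, scikit-learn selects the lasso penalty either by an information criterion (cf. lasso2) or by K-fold cross-validation (cf. cvlasso). The theory-driven 'rigorous' penalization of rlasso has no direct counterpart here; data below are simulated for illustration.
```python
import numpy as np
from sklearn.linear_model import LassoCV, LassoLarsIC

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

ic_fit = LassoLarsIC(criterion='bic').fit(X, y)  # penalty chosen by BIC
cv_fit = LassoCV(cv=5).fit(X, y)                 # penalty chosen by 5-fold CV
print(ic_fit.alpha_, cv_fit.alpha_)
```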
  9. By: MAO TAKONGMO, Charles Olivier
    Abstract: One important question in the Keynesian literature is whether we should detrend data when estimating the parameters of a Keynesian model using the moment method. It has been common in the literature to detrend data in the same way the model is detrended. Doing so works relatively well with linear models, in part because in such a case the information that disappears from the data after the detrending process is usually related to the parameters that also disappear from the detrended model. Unfortunately, in heavily non-linear Keynesian models, parameters rarely disappear from detrended models, but information does disappear from the detrended data. Using a simple real business cycle model, we show that both the moment method estimators of parameters and the estimated responses of endogenous variables to a technological shock can be seriously inaccurate when the data used in the estimation process are detrended. Using a dynamic stochastic general equilibrium model and U.S. data, we show that detrending the data before estimating the parameters may result in a seriously misleading response of endogenous variables to monetary shocks. We suggest building the moment conditions using raw data, irrespective of the trend observed in the data.
    Keywords: RBC models, DSGE models, Trend.
    JEL: C12 C13 C15 E17 E51
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:91709&r=all
  10. By: Ulrik W. Nash
    Abstract: The Kalman Filter has been called one of the greatest inventions in statistics during the 20th century. Its purpose is to measure the state of a system by processing the noisy data received from different electronic sensors. In comparison, a useful resource for managers in their effort to make the right decisions is the wisdom of crowds. This phenomenon allows managers to combine judgments by different employees to get estimates that are often more accurate and reliable than estimates managers produce alone. Since harnessing the collective intelligence of employees and filtering signals from multiple noisy sensors appear related, we looked at the possibility of using the Kalman Filter on estimates by people. Our predictions suggest, and our findings based on the Survey of Professional Forecasters reveal, that the Kalman Filter can help managers solve their decision-making problems by giving them stronger signals before they choose. Indeed, when used on a subset of forecasters identified by the Contribution Weighted Model, the Kalman Filter clearly beat that rule across all the forecasting horizons in the survey.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.08133&r=all
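    Sketch: a minimal scalar Kalman filter treating a stream of expert forecasts as noisy sensor readings of a slowly drifting latent quantity. Purely illustrative with simulated data and assumed noise variances; the paper's application to the Survey of Professional Forecasters and its comparison with the Contribution Weighted Model involve more structure.
```python
import numpy as np

def kalman_filter(observations, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Random-walk state, noisy observations; q and r are the state and
    observation noise variances (assumed known here)."""
    x, p, filtered = x0, p0, []
    for z in observations:
        p = p + q                      # predict: state variance grows by q
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new forecast z
        p = (1 - k) * p
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(0)
truth = np.cumsum(0.1 * rng.standard_normal(40)) + 2.0
crowd = truth + rng.standard_normal(40)   # noisy expert forecasts
print(kalman_filter(crowd)[-5:].round(2))
```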
  11. By: Li, Kunpeng (Capital University of Economics and Business); Li, Qi (Texas A&M University); Lu, Lina (Federal Reserve Bank of Boston)
    Abstract: Factor models have been widely used in practice. However, an undesirable feature of a high dimensional factor model is that the model has too many parameters. An effective way to address this issue, proposed in a seminal work by Tsai and Tsay (2010), is to decompose the loadings matrix into a high-dimensional known matrix multiplied by a low-dimensional unknown matrix, a specification Tsai and Tsay (2010) name the constrained factor model. This paper investigates the estimation and inferential theory of constrained factor models under a large-N and large-T setup, where N denotes the number of cross sectional units and T the number of time periods. We propose using the quasi maximum likelihood method to estimate the model and investigate the asymptotic properties of the quasi maximum likelihood estimators, including consistency, rates of convergence and limiting distributions. A new statistic is proposed for testing the null hypothesis of constrained factor models against the alternative of standard factor models. Partially constrained factor models are also investigated. Monte Carlo simulations confirm our theoretical results and show that the quasi maximum likelihood estimators and the proposed new statistic perform well in finite samples. We also consider the extension to an approximate constrained factor model where the idiosyncratic errors are allowed to be weakly dependent processes.
    Keywords: Constrained factor models; Maximum likelihood estimation; High dimensionality; Inferential theory
    JEL: C13 C38
    Date: 2018–04–24
    URL: http://d.repec.org/n?u=RePEc:fip:fedbqu:rpa18-2&r=all
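    Sketch: in the notation the abstract suggests (dimensions are editorial), the constrained factor model for an $N \times 1$ vector $x_t$ is
    $$ x_t = \Lambda f_t + e_t, \qquad \Lambda = H\,\Gamma, $$
    with $H$ a known $N \times m$ matrix and $\Gamma$ an unknown $m \times r$ matrix, $m \ll N$, so the number of free loading parameters falls from $Nr$ to $mr$; the paper estimates $\Gamma$ and the remaining parameters by quasi maximum likelihood.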
  12. By: Guanhao Feng; Stefano Giglio; Dacheng Xiu
    Abstract: We propose a model-selection method to systematically evaluate the contribution to asset pricing of any new factor, above and beyond what a high-dimensional set of existing factors explains. Our methodology explicitly accounts for potential model-selection mistakes, unlike the standard approaches that assume perfect variable selection, which rarely occurs in practice and produces a bias due to the omitted variables. We apply our procedure to a set of factors recently discovered in the literature. While most of these new factors are found to be redundant relative to the existing factors, a few — such as profitability — have statistically significant explanatory power beyond the hundreds of factors proposed in the past. In addition, we show that our estimates and their significance are stable, whereas the model selected by simple LASSO is not.
    JEL: C01 C12 C23 C52 C58 G00 G1 G10 G12
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25481&r=all
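    Sketch: a generic post-double-selection regression in the spirit of the selection-mistake-robust inference the abstract describes (editorial illustration with simulated data; the paper's procedure is tailored to cross-sectional asset pricing and differs in detail): select controls that predict the outcome, select controls that predict the factor of interest, and run OLS on the union so that moderate selection mistakes do not bias the coefficient of interest.
```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 80
Z = rng.standard_normal((n, p))                 # the "zoo" of candidate controls
g = Z[:, 0] + 0.5 * rng.standard_normal(n)      # new factor, correlated with zoo member 0
y = 0.3 * g + Z[:, 0] - Z[:, 1] + rng.standard_normal(n)

sel_y = np.flatnonzero(LassoCV(cv=5).fit(Z, y).coef_)   # controls that predict the outcome
sel_g = np.flatnonzero(LassoCV(cv=5).fit(Z, g).coef_)   # controls that predict the new factor
controls = np.union1d(sel_y, sel_g)

design = np.column_stack([g, Z[:, controls]])
fit = LinearRegression().fit(design, y)
print("estimated contribution of the new factor:", round(fit.coef_[0], 3))
```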

This nep-ets issue is ©2019 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.