nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒04‒19
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Recurrent Dictionary Learning for State-Space Models with an Application in Stock Forecasting By Shalini Sharma; Víctor Elvira; Emilie Chouzenoux; Angshul Majumdar
  2. Score-driven time series models By Harvey, A.
  3. Efficient and robust inference of models with occasionally binding constraints By Giovannini, Massimo; Pfeiffer, Philipp; Ratto, Marco
  4. Nonparametric, Stochastic Frontier Models with Multiple Inputs and Outputs By Simar, Léopold; Wilson, Paul
  5. Analytic and Bootstrap-after-Cross-Validation Methods for Selecting Penalty Parameters of High-Dimensional M-Estimators By Denis Chetverikov; Jesper Riis-Vestergaard Sørensen
  6. The identification of dominant macroeconomic drivers: coping with confounding shocks By Dieppe, Alistair; Francis, Neville; Kindberg-Hanlon, Gene
  7. Identification of Dynamic Panel Logit Models with Fixed Effects By Christopher Dobronyi; Jiaying Gu; Kyoo il Kim
  8. Factor Models with Local Factors—Determining the Number of Relevant Factors By Simon Freyaldenhoven
  9. Single and multiple-group penalized factor analysis: a trust-region algorithm approach with integrated automatic multiple tuning parameter selection By Geminiani, Elena; Marra, Giampiero; Moustaki, Irini
  10. Improved Tests for Granger Non-Causality in Panel Data By Xiao, Jiaqi; Juodis, Arturas; Karavias, Yiannis; Sarafidis, Vasilis
  11. Forecast Error Variance Decompositions with Local Projections By Gorodnichenko, Y; Lee, B
  12. Time Series (re)sampling using Generative Adversarial Networks By Christian M. Dahl; Emil N. Sørensen
  13. The Econometrics and Some Properties of Separable Matching Models By Alfred Galichon; Bernard Salanié
  14. Improving the Estimation and Predictions of Small Time Series Models By Gareth Liu-Evans
  15. A Bayesian analysis of gain-loss asymmetry By Andrea Giuseppe Di Iura; Giulia Terenzi
  16. A survey of some recent applications of optimal transport methods to econometrics By Alfred Galichon
  17. Identification in the Random Utility Model By Christopher Turansick

  1. By: Shalini Sharma (IIIT-Delhi - Indraprastha Institute of Information Technology [New Delhi]); Víctor Elvira (School of Mathematics - University of Edinburgh - University of Edinburgh); Emilie Chouzenoux (OPIS - OPtimisation Imagerie et Santé - CVN - Centre de vision numérique - CentraleSupélec - Université Paris-Saclay - Inria - Institut National de Recherche en Informatique et en Automatique - Inria Saclay - Ile de France - Inria - Institut National de Recherche en Informatique et en Automatique); Angshul Majumdar (IIIT-Delhi - Indraprastha Institute of Information Technology [New Delhi])
    Abstract: In this work, we introduce a new modeling and inferential tool for dynamical processing of time series. The approach is called recurrent dictionary learning (RDL). The proposed model reads as a linear Gaussian Markovian state-space model involving two linear operators, the state evolution and the observation matrices, that we assume to be unknown. These two unknown operators (which can be interpreted as dictionaries) and the sequence of hidden states are jointly learnt via an expectation-maximization algorithm. The RDL model combines several advantages, namely online processing, probabilistic inference, and the high model expressiveness typical of neural networks. RDL is particularly well suited for stock forecasting. Its performance is illustrated on two problems: next-day forecasting (a regression problem) and next-day trading (a classification problem), given past stock market observations. Experimental results show that our proposed method outperforms state-of-the-art stock analysis models such as CNN-TA, MFNN, and LSTM.
    Keywords: Stock Forecasting, Recurrent dictionary learning, Kalman filter, expectation-maximization, dynamical modeling, uncertainty quantification
    Date: 2021
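The linear Gaussian state-space backbone that RDL builds on can be sketched with a standard Kalman filter. This is a hedged illustration only: the matrices A, C, Q, R below are toy placeholders, whereas in the paper the state-evolution and observation operators are unknown dictionaries estimated jointly with the states via EM.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Standard Kalman filter for x_t = A x_{t-1} + q_t, y_t = C x_t + r_t,
    with q_t ~ N(0, Q) and r_t ~ N(0, R)."""
    x, P = x0, P0
    filtered = []
    for yt in y:
        # Predict step
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update step
        S = C @ P_pred @ C.T + R                # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
        x = x_pred + K @ (yt - C @ x_pred)
        P = (np.eye(len(x)) - K @ C) @ P_pred
        filtered.append(x)
    return np.array(filtered)

# Toy demo: simulate a scalar AR(1) state observed in noise, then filter it
rng = np.random.default_rng(0)
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
T = 200
x_true = np.zeros((T, 1)); y = np.zeros((T, 1))
for t in range(T):
    prev = x_true[t - 1] if t > 0 else np.zeros(1)
    x_true[t] = A @ prev + rng.normal(scale=Q[0, 0] ** 0.5, size=1)
    y[t] = C @ x_true[t] + rng.normal(scale=R[0, 0] ** 0.5, size=1)
states = kalman_filter(y, A, C, Q, R, np.zeros(1), np.eye(1))
```

Because the filter pools information across time, its state estimates are closer to the true hidden states than the raw noisy observations are.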
  2. By: Harvey, A.
    Abstract: The construction of score-driven filters for nonlinear time series models is described and it is shown how they apply over a wide range of disciplines. Their theoretical and practical advantages over other methods are highlighted. Topics covered include robust time series modeling, conditional heteroscedasticity, count data, dynamic correlation and association, censoring, circular data and switching regimes.
    Keywords: copula, count data, directional data, generalized autoregressive conditional heteroscedasticity, generalized beta distribution of the second kind, observation-driven model, robustness
    JEL: C22 C32
    Date: 2021–04–07
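As a hedged illustration of the score-driven idea (a minimal sketch, not a model taken from the survey itself), a Gaussian GAS-style filter drives the log-variance with the scaled score of the likelihood:

```python
import numpy as np

def gas_volatility_filter(y, omega=0.0, alpha=0.1, beta=0.95):
    """Score-driven (GAS) filter for log-variance under Gaussian errors.
    The time-varying parameter lambda_t = log sigma_t^2 is updated with the
    score of the Gaussian log-likelihood:
        lambda_{t+1} = omega + beta * lambda_t + alpha * (y_t^2 / exp(lambda_t) - 1)
    Parameter values here are illustrative, not estimated."""
    lam = np.empty(len(y) + 1)
    lam[0] = np.log(np.var(y))          # initialize at the sample variance
    for t, yt in enumerate(y):
        score = yt ** 2 / np.exp(lam[t]) - 1.0
        lam[t + 1] = omega + beta * lam[t] + alpha * score
    return np.exp(lam[:-1])             # fitted conditional variances

# Demo on white noise: the filter hovers around the (constant) unit variance
rng = np.random.default_rng(0)
returns = rng.normal(size=300)
var_t = gas_volatility_filter(returns)
```

The same recipe — replace the lagged observation in an observation-driven update with the score of the conditional density — is what yields the robustness and generality the abstract highlights.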
  3. By: Giovannini, Massimo (European Commission); Pfeiffer, Philipp (European Commission); Ratto, Marco (European Commission)
    Abstract: This paper proposes a piecewise-linear Kalman filter (PKF) to estimate DSGE models with occasionally binding constraints. This method expands the set of models suitable for nonlinear estimation. It straightforwardly handles missing data, non-singularity (more shocks than observed time series), and large-scale models. We provide several applications to highlight its efficiency and robustness compared to existing methods. Our toolkit integrates the PKF into Dynare, the most popular software in DSGE modeling.
    Keywords: DSGE, occasionally binding constraints, nonlinear estimation, Piecewise Kalman Filter
    JEL: C11 C32 C51
    Date: 2021–04
  4. By: Simar, Léopold (Université catholique de Louvain, LIDAM/ISBA, Belgium); Wilson, Paul
    Abstract: Stochastic frontier models along the lines of Aigner et al. (1977) are widely used to benchmark firms' performances in terms of efficiency. The models are typically fully-parametric, with functional form specifications for the frontier as well as both the noise and the inefficiency processes. Studies such as Kumbhakar et al. (2007) have attempted to relax some of the restrictions in parametric models, but so far all such approaches are limited to a univariate response variable. Some (e.g., Simar and Zelenyuk, 2011; Kuosmanen and Johnson, 2017) have proposed nonparametric estimation of directional distance functions to handle multiple inputs and outputs, raising issues of endogeneity that are either ignored or addressed by imposing restrictive and implausible assumptions. This paper extends nonparametric methods developed by Simar et al. (2017) and Hafner et al. (2018) to allow multiple inputs and outputs in an almost fully nonparametric framework while avoiding endogeneity problems. We discuss identification issues and properties of the resulting estimators, and examine their finite-sample performance through Monte Carlo experiments. Practical implementation of the method is illustrated using data on U.S. commercial banks.
    Keywords: stochastic frontier ; nonparametric ; efficiency
    JEL: C01 C21 C40 C51
    Date: 2021–02–01
  5. By: Denis Chetverikov; Jesper Riis-Vestergaard Sørensen
    Abstract: We develop two new methods for selecting the penalty parameter for the $\ell^1$-penalized high-dimensional M-estimator, which we refer to as the analytic and bootstrap-after-cross-validation methods. For both methods, we derive nonasymptotic error bounds for the corresponding $\ell^1$-penalized M-estimator and show that the bounds converge to zero under mild conditions, thus providing a theoretical justification for these methods. We demonstrate via simulations that the finite-sample performance of our methods is much better than that of previously available and theoretically justified methods.
    Date: 2021–04
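For flavor, one widely used "analytic" penalty rule for the plain lasso (the Belloni et al. style plug-in, shown here as an assumed illustration; the paper's rules for general M-estimators differ) can be computed in closed form:

```python
import numpy as np
from statistics import NormalDist

def plugin_lasso_penalty(n, p, sigma=1.0, c=1.1, alpha=0.05):
    """Analytic (plug-in) penalty level for the lasso in the style of
    Belloni et al.: lambda = 2 c sigma Phi^{-1}(1 - alpha / (2p)) / sqrt(n).
    Illustrative only; sigma is the (assumed known) noise scale."""
    q = NormalDist().inv_cdf(1 - alpha / (2 * p))   # Gaussian quantile
    return 2 * c * sigma * q / np.sqrt(n)

lam = plugin_lasso_penalty(n=500, p=100)
```

The rule shrinks with the sample size n and grows slowly with the dimension p, which is the qualitative behavior any theoretically justified penalty selector must deliver.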
  6. By: Dieppe, Alistair; Francis, Neville; Kindberg-Hanlon, Gene
    Abstract: We address the identification of low-frequency macroeconomic shocks, such as technology, in Structural Vector Autoregressions. Whilst identification issues with long-run restrictions are well documented, we demonstrate that recent attempts to overcome them using the Max-Share approach of Francis et al. (2014) and Barsky and Sims (2011) have their own shortcomings: primarily, they are vulnerable to bias from confounding non-technology shocks, although less so than long-run specifications. We offer a new spectral methodology to improve empirical identification. This preferred methodology offers equivalent or improved identification in a wide range of data-generating processes and when applied to US data. Our findings on the bias generated by confounding shocks also extend, importantly, to the identification of dominant business-cycle shocks, which will be a combination of shocks rather than a single structural driver; this can result in a mis-characterization of the business-cycle anatomy. JEL Classification: C11, C30, E32
    Keywords: confounding shocks, identification, long-horizon and business-cycle shocks
    Date: 2021–04
  7. By: Christopher Dobronyi; Jiaying Gu; Kyoo il Kim
    Abstract: We show that the identification problem for a class of dynamic panel logit models with fixed effects has a connection to the truncated moment problem in mathematics. We use this connection to show that the sharp identified set of the structural parameters is characterized by a set of moment equality and inequality conditions. This result provides sharp bounds in models where moment equality conditions do not exist or do not point identify the parameters. We also show that the sharp identifying content of the non-parametric latent distribution of the fixed effects is characterized by a vector of its generalized moments, and that the number of moments grows linearly in T. This final result lets us point identify, or sharply bound, specific classes of functionals, without solving an optimization problem with respect to the latent distribution.
    Date: 2021–04
  8. By: Simon Freyaldenhoven
    Abstract: We extend the theory on factor models by incorporating “local” factors into the model. Local factors affect only an unknown subset of the observed variables. This implies a continuum of eigenvalues of the covariance matrix, as is commonly observed in applications. We derive which factors are pervasive enough to be economically important and which factors are pervasive enough to be estimable using the common principal component estimator. We then introduce a new class of estimators to determine the number of those relevant factors. Unlike existing estimators, our estimators use not only the eigenvalues of the covariance matrix, but also its eigenvectors. We find that incorporating partial sums of the eigenvectors into our estimators leads to significant gains in performance in simulations.
    Keywords: high-dimensional data; factor models; weak factors; local factors; sparsity
    JEL: C38 C52 C55
    Date: 2021–04–15
  9. By: Geminiani, Elena; Marra, Giampiero; Moustaki, Irini
    Abstract: Penalized factor analysis is an efficient technique that produces a factor loading matrix with many zero elements thanks to the introduction of sparsity-inducing penalties within the estimation process. However, sparse solutions and stable model selection procedures are only possible if the employed penalty is non-differentiable, which poses certain theoretical and computational challenges. This article proposes a general penalized likelihood-based estimation approach for single and multiple-group factor analysis models. The framework builds upon differentiable approximations of non-differentiable penalties, a theoretically founded definition of degrees of freedom, and an algorithm with integrated automatic multiple tuning parameter selection that exploits second-order analytical derivative information. The proposed approach is evaluated in two simulation studies and illustrated using a real data set. All the necessary routines are integrated into the R package penfa.
    Keywords: effective degrees of freedom; generalized information criterion; measurement invariance; penalized likelihood; simple structure; CRUI-CARE Agreement; Alma Mater Studiorum - Universitá di Bologna within the CRUI-CARE Agreement
    JEL: C1
    Date: 2021–03–26
  10. By: Xiao, Jiaqi; Juodis, Arturas; Karavias, Yiannis; Sarafidis, Vasilis
    Abstract: This article introduces the xtgranger command in Stata, which implements the panel Granger non-causality test approach developed by Juodis, Karavias and Sarafidis (2021). This test offers superior size and power performance relative to existing tests, which stems from the use of a pooled estimator that has a faster √NT convergence rate. The test has two other useful properties: it can be used in multivariate systems, and it has power against both homogeneous and heterogeneous alternatives.
    Keywords: Panel data, Granger non-causality, Nickell bias, Heterogeneous panels, Fixed effects, Half-panel Jackknife, xtgranger.
    JEL: C12 C23 C33
    Date: 2021–04–14
  11. By: Gorodnichenko, Y; Lee, B
    Abstract: We propose and study properties of an estimator of the forecast error variance decomposition in the local projections framework. We find for empirically relevant sample sizes that, after being bias-corrected with the bootstrap, our estimator performs well in simulations. We also illustrate the workings of our estimator empirically for monetary policy and productivity shocks.
    Keywords: Forecast error variance decomposition; Local projections
    Date: 2020–10–01
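The impulse responses underlying such a variance decomposition amount to horizon-by-horizon OLS. A minimal numpy sketch of a Jorda-style local projection (an illustration of the framework, not the authors' bias-corrected estimator):

```python
import numpy as np

def local_projection_irf(y, shock, horizons):
    """Jorda-style local projections: at each horizon h, regress y_{t+h}
    on the shock at time t by OLS; the slope is the impulse response."""
    irf = []
    for h in horizons:
        yh = y[h:]                                   # lead the outcome by h periods
        s = shock[:len(shock) - h] if h > 0 else shock
        X = np.column_stack([np.ones(len(s)), s])
        beta = np.linalg.lstsq(X, yh, rcond=None)[0]
        irf.append(beta[1])
    return np.array(irf)

# Toy MA(1) check: y_t = s_t + 0.5 s_{t-1}, so the true IRF is (1, 0.5, 0, ...)
rng = np.random.default_rng(0)
s = rng.normal(size=2000)
y = s + 0.5 * np.concatenate([[0.0], s[:-1]])
irf = local_projection_irf(y, s, horizons=[0, 1, 2])
```

Squaring and cumulating such horizon-specific responses, scaled by the forecast error variance, is what produces a forecast error variance decomposition.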
  12. By: Christian M. Dahl; Emil N. Sørensen
    Abstract: We propose a novel bootstrap procedure for dependent data based on Generative Adversarial Networks (GANs). We show that the dynamics of common stationary time series processes can be learned by GANs and demonstrate that GANs trained on a single sample path can be used to generate additional samples from the process. We find that temporal convolutional neural networks provide a suitable design for the generator and discriminator, and that convincing samples can be generated on the basis of a vector of iid normal noise. We demonstrate the finite sample properties of GAN sampling and the suggested bootstrap using simulations where we compare the performance to circular block bootstrapping in the case of resampling an AR(1) time series process. We find that resampling using the GAN can outperform circular block bootstrapping in terms of empirical coverage.
    Date: 2021–01
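The circular block bootstrap used as the benchmark can be sketched in a few lines of numpy (the block length here is an arbitrary choice for illustration):

```python
import numpy as np

def circular_block_bootstrap(x, block_len, rng=None):
    """Circular block bootstrap: concatenate blocks of fixed length whose
    start positions are drawn uniformly, wrapping around the end of the
    series, then truncate back to the original length."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    n_blocks = -(-n // block_len)                        # ceil(n / block_len)
    starts = rng.integers(0, n, size=n_blocks)
    idx = (starts[:, None] + np.arange(block_len)) % n   # wrap circularly
    return x[idx.ravel()][:n]

# Demo: resample a simulated AR(1) path
rng = np.random.default_rng(1)
e = rng.normal(size=200)
x = np.empty(200); x[0] = e[0]
for t in range(1, 200):
    x[t] = 0.7 * x[t - 1] + e[t]
resample = circular_block_bootstrap(x, block_len=10, rng=rng)
```

Because blocks only preserve dependence up to the block length, a GAN that has learned the full dynamics of the process can in principle generate more faithful resamples, which is the comparison the paper makes.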
  13. By: Alfred Galichon; Bernard Salanié
    Abstract: We present a class of one-to-one matching models with perfectly transferable utility. We discuss identification and inference in these separable models, and we show how their comparative statics are readily analyzed.
    Date: 2021–02
  14. By: Gareth Liu-Evans
    Abstract: A new approach is developed for improving the point estimation and predictions of parametric time-series models. The method targets performance criteria such as estimation bias, root mean squared error, variance, or prediction error, and produces closed-form estimators focused towards these targets via a computational approximation method. This is done for an autoregression coefficient, for the mean reversion parameter in Vasicek and CIR diffusion models, for the Binomial thinning parameter in integer-valued autoregressive (INAR) models, and for predictions from a CIR model. The success of the prediction targeting approach is shown in Monte Carlo simulations and in out-of-sample forecasting of the US Federal Funds rate.
    Date: 2021–01
  15. By: Andrea Giuseppe Di Iura; Giulia Terenzi
    Abstract: We perform a quantitative analysis of the gain/loss asymmetry for financial time series by using a Bayesian approach. In particular, we focus on some selected indices and analyze the statistical significance of the asymmetry amount through a Bayesian generalization of the t-Test, which relaxes the normality assumption on the underlying distribution. We propose two different models for data distribution, we study the convergence of our method and we provide several graphical representations of our numerical results. Finally, we perform a sensitivity analysis with respect to model parameters in order to study the reliability and robustness of our results.
    Date: 2021–04
  16. By: Alfred Galichon
    Abstract: This paper surveys recent applications of methods from the theory of optimal transport to econometric problems.
    Date: 2021–02
  17. By: Christopher Turansick
    Abstract: The random utility model is known to be unidentified. However, there are times when a data set is uniquely rationalizable by the random utility model. We ask for which data sets the random utility model has a unique representation. Our first result characterizes the data sets that admit a unique representation. Our second result provides a finite test that determines whether a distribution of preferences is observationally equivalent to some other distribution of preferences. We then explore the implications of our results in the context of other random utility models.
    Date: 2021–02

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.