nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒12‒21
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. The Role of the Prior in Estimating VAR Models with Sign Restrictions By Atsushi Inoue; Lutz Kilian
  2. Inference in mixed causal and noncausal models with generalized Student's t-distributions By Francesco Giancaterini; Alain Hecq
  3. Testing for Time Stochastic Dominance By Lee, K.; Linton, O.; Whang, Y-J.;
  4. Large Non-Stationary Noisy Covariance Matrices: A Cross-Validation Approach By Vincent W. C. Tan; Stefan Zohren
  5. Asymptotic Normality for Multivariate Random Forest Estimators By Kevin Li
  6. Binary Response Models for Heterogeneous Panel Data with Interactive Fixed Effects By Jiti Gao; Fei Liu; Bin Peng
  7. Testing fixed and random effects in linear mixed models By Marco Barnabani
  8. Non-Identifiability in Network Autoregressions By Federico Martellosio
  9. A Canonical Representation of Block Matrices with Applications to Covariance and Correlation Matrices By Ilya Archakov; Peter Reinhard Hansen
  10. Sharp Bounds in the Latent Index Selection Model By Philip Marx
  11. Latent Variables Analysis in Structural Models: A New Decomposition of the Kalman Smoother By Hess Chung; Cristina Fuentes-Albero; Matthias Paustian; Damjan Pfajfar
  12. Likelihood-based Dynamic Asset Pricing: Learning Time-varying Risk Premia from Cross-Sectional Models By Dennis Umlandt
  13. Double machine learning for (weighted) dynamic treatment effects By Hugo Bodory; Martin Huber; Luk\'a\v{s} Laff\'ers
  14. Modeling Turning Points In Global Equity Market By Daniel Felix Ahelegbey; Monica Billio; Roberto Casarin
  15. A Multivariate Realized GARCH Model By Ilya Archakov; Peter Reinhard Hansen; Asger Lunde
  16. Capturing GDP nowcast uncertainty in real time By Paul Labonne
  17. GMM weighting matrices in cross-sectional asset pricing tests By Laurinaityte, Nora; Meinerding, Christoph; Schlag, Christian; Thimme, Julian
  18. Uncertainty on the Reproduction Ratio in the SIR Model. By Sean ELLIOTT; Christian GOURIEROUX
  19. Separating predicted randomness from residual behavior By Jose Apesteguia; Miguel Ángel Ballester

  1. By: Atsushi Inoue; Lutz Kilian
    Abstract: Several recent studies have expressed concern that the Haar prior typically imposed in estimating sign-identified VAR models may be unintentionally informative about the implied prior for the structural impulse responses. This question is indeed important, but we show that the tools that have been used in the literature to illustrate this potential problem are invalid. Specifically, we show that it does not make sense from a Bayesian point of view to characterize the impulse response prior based on the distribution of the impulse responses conditional on the maximum likelihood estimator of the reduced-form parameters, since the prior does not, in general, depend on the data. We illustrate that this approach tends to produce highly misleading estimates of the impulse response priors. We formally derive the correct impulse response prior distribution and show that there is no evidence that typical sign-identified VAR models estimated using conventional priors tend to imply unintentionally informative priors for the impulse response vector or that the corresponding posterior is dominated by the prior. Our evidence suggests that concerns about the Haar prior for the rotation matrix have been greatly overstated and that alternative estimation methods are not required in typical applications. Finally, we demonstrate that the alternative Bayesian approach to estimating sign-identified VAR models proposed by Baumeister and Hamilton (2015) suffers from exactly the same conceptual shortcoming as the conventional approach. We illustrate that this alternative approach may imply highly economically implausible impulse response priors.
    Keywords: Prior; posterior; impulse response; loss function; joint inference; absolute loss; median
    JEL: C22 C32 C52 E31 Q43
    Date: 2020–12–03
    URL: http://d.repec.org/n?u=RePEc:fip:feddwp:89121&r=all
  2. By: Francesco Giancaterini; Alain Hecq
    Abstract: This paper analyzes the properties of the Maximum Likelihood Estimator for mixed causal and noncausal models when the error term follows a Student's t-distribution. In particular, we compare several existing methods to compute the expected Fisher information matrix and show that they cannot be applied in the heavy-tail framework. For this purpose, we propose a new approach to inference on causal and noncausal parameters in finite samples. It is based on the empirical variance computed on the generalized Student's t, even when the population variance is not finite. Monte Carlo simulations show the good performance of our new estimator for fat-tailed series. We illustrate how the different approaches lead to different standard errors in four time series: annual debt to GDP for Canada, the variation of daily Covid-19 deaths in Belgium, monthly wheat prices, and the monthly inflation rate in Brazil.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.01888&r=all
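    The paper's key device — relying on the empirical variance even when the population variance does not exist — can be sketched in a few lines (an illustrative simulation; the degrees of freedom and sample size are our choices, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Student's t with 1.5 degrees of freedom: the population variance is
    # infinite (it exists only for df > 2), yet the empirical variance of
    # any finite sample is a well-defined, finite number.
    df = 1.5
    sample = rng.standard_t(df, size=10_000)
    empirical_var = sample.var(ddof=1)
    print(f"empirical variance of {len(sample)} draws: {empirical_var:.2f}")

    # The empirical variance is finite but unstable across samples, which is
    # why inference built on it needs careful treatment for fat-tailed series.
    for seed in range(3):
        s = np.random.default_rng(seed).standard_t(df, size=10_000)
        print(f"seed {seed}: {s.var(ddof=1):.2f}")
    ```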
  3. By: Lee, K.; Linton, O.; Whang, Y-J.;
    Abstract: We propose nonparametric tests for the null hypothesis of time stochastic dominance. Time stochastic dominance defines a partial order over different prospects over time based on the net present value criterion for general classes of utility and time discount functions. For example, time stochastic dominance can be used for ranking investment strategies or environmental policies based on the expected net present value of their future benefits. We consider an Lp integrated test statistic and derive its large sample distribution. We suggest a path-wise bootstrap procedure that allows for time dependence in a panel data structure. In addition to the least favorable case based bootstrap method, we describe two approaches, the contact-set approach and the numerical delta method, for the purpose of enhancing the power of the test. We prove the asymptotic validity of our testing procedures. We investigate the finite sample performance of the tests in simulation studies. As an illustration, we apply the proposed tests to evaluate the welfare improvement of Thailand's Million Baht Village Fund Program.
    Keywords: Bootstrap, Discounting, Stochastic Dominance, Testing
    JEL: C10 C12 C14
    Date: 2020–12–10
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:20121&r=all
  4. By: Vincent W. C. Tan; Stefan Zohren
    Abstract: We introduce a novel covariance estimator that exploits the heteroscedastic nature of financial time series by employing exponential weighted moving averages and shrinking the in-sample eigenvalues through cross-validation. Our estimator is model-agnostic in that we make no assumptions on the distribution of the random entries of the matrix or structure of the covariance matrix. Additionally, we show how Random Matrix Theory can provide guidance for automatic tuning of the hyperparameter which characterizes the time scale for the dynamics of the estimator. By attenuating the noise from both the cross-sectional and time-series dimensions, we empirically demonstrate the superiority of our estimator over competing estimators that are based on exponentially-weighted and uniformly-weighted covariance matrices.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.05757&r=all
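    The two ingredients the abstract describes — exponential weighting in time and eigenvalue shrinkage guided by Random Matrix Theory — can be illustrated with a simple sketch (not the authors' estimator; the half-life and the rough Marchenko–Pastur clipping rule below are illustrative choices):

    ```python
    import numpy as np

    def ewma_covariance(returns, halflife=60):
        """Exponentially weighted covariance: recent observations get more weight."""
        T, N = returns.shape
        alpha = np.log(2) / halflife
        w = np.exp(-alpha * np.arange(T)[::-1])      # newest row weighted most
        w /= w.sum()
        demeaned = returns - returns.mean(axis=0)
        return (demeaned * w[:, None]).T @ demeaned

    def clip_eigenvalues(cov, T, N):
        """Flatten eigenvalues below a rough Marchenko-Pastur edge to their mean,
        attenuating the noisy bulk while keeping the signal eigenvalues."""
        vals, vecs = np.linalg.eigh(cov)
        edge = vals.mean() * (1 + np.sqrt(N / T)) ** 2
        noisy = vals < edge
        if noisy.any():
            vals[noisy] = vals[noisy].mean()
        return (vecs * vals) @ vecs.T

    # Toy example: 500 days of 50 simulated asset returns.
    rng = np.random.default_rng(1)
    R = rng.normal(scale=0.01, size=(500, 50))
    cov = clip_eigenvalues(ewma_covariance(R, halflife=60), T=500, N=50)
    print(np.linalg.eigvalsh(cov).min() > 0)  # cleaned matrix stays positive definite
    ```

    A practical caveat the paper addresses and this sketch does not: exponential weighting shrinks the effective sample size, so the appropriate clipping edge depends on the half-life, which is where the paper's cross-validation and RMT-based tuning come in.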
  5. By: Kevin Li
    Abstract: Regression trees and random forests are popular and effective non-parametric estimators in practical applications. A recent paper by Athey and Wager shows that the random forest estimate at any point is asymptotically Gaussian; in this paper, we extend this result to the multivariate case and show that the vector of estimates at multiple points is jointly normal. Specifically, the covariance matrix of the limiting normal distribution is diagonal, so that the estimates at any two points are asymptotically independent in sufficiently deep trees. Moreover, the off-diagonal terms are bounded by quantities capturing how likely two points belong to the same partition of the resulting tree. Our results rely on a certain stability property when constructing splits, and we give examples of splitting rules for which this assumption is and is not satisfied. We test our proposed covariance bound and the associated coverage rates of confidence intervals in numerical simulations.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.03486&r=all
  6. By: Jiti Gao; Fei Liu; Bin Peng
    Abstract: In this paper, we investigate binary response models for heterogeneous panel data with interactive fixed effects, allowing both the cross-sectional dimension and the temporal dimension to diverge. From a practical point of view, the proposed framework can be applied to predict the probability of corporate failure, conduct credit rating analysis, etc. Theoretically and methodologically, we establish a link between maximum likelihood estimation and a least squares approach, provide a simple information criterion to detect the number of factors, and derive the corresponding asymptotic distributions. In addition, we conduct intensive simulations to examine the theoretical findings. In the empirical study, we focus on the sign prediction of stock returns and then use the sign forecasts to conduct portfolio analysis. By implementing rolling-window based out-of-sample forecasts, we show the finite-sample performance and demonstrate the practical relevance of the proposed model and estimation method.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.03182&r=all
  7. By: Marco Barnabani (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze)
    Abstract: In linear mixed models, selecting fixed and random effects via a hypothesis testing approach raises several problems. In this paper, we consider the so-called boundary problem and the confounding impact of effects from one set of coefficients on the other set. These problems are addressed by defining two test statistics based on ordinary least squares, obtained by dividing two quadratic forms, one that contains the effect and another that does not. As a result, the test statistics are sufficiently general, easy to compute, and have known finite-sample properties. The test on randomness has a known exact distribution under both the null and alternative hypotheses; the test on fixed effects is approximated by a noncentral F-distribution. Because of its importance in the variable selection approach, the goodness of this approximation is examined in depth in final simulations.
    Keywords: Selection procedure; Hypothesis testing; Linear Mixed Models; Generalized F -distribution;
    JEL: C12 C63 C52
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:fir:econom:wp2020_09&r=all
  8. By: Federico Martellosio
    Abstract: We study identification in autoregressions defined on a general network. Most identification conditions that are available for these models either rely on repeated observations, are only sufficient, or require strong distributional assumptions. We derive conditions that apply even if only one observation of a network is available, are necessary and sufficient for identification, and require weak distributional assumptions. We find that the models are generically identified even without repeated observations, and analyze the combinations of the interaction matrix and the regressor matrix for which identification fails. This is done both in the original model and after certain transformations in the sample space, the latter case being important for some fixed effects specifications.
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2011.11084&r=all
  9. By: Ilya Archakov; Peter Reinhard Hansen
    Abstract: We obtain a canonical representation for block matrices. The representation facilitates simple computation of the determinant, the matrix inverse, and other powers of a block matrix, as well as the matrix logarithm and the matrix exponential. These results are particularly useful for block covariance and block correlation matrices, where evaluation of the Gaussian log-likelihood and estimation are greatly simplified. We illustrate this with an empirical application using a large panel of daily asset returns. Moreover, the representation paves new ways to regularize large covariance/correlation matrices and to test for block structures in matrices.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.02698&r=all
  10. By: Philip Marx
    Abstract: A fundamental question underlying the literature on partial identification is: what can we learn about parameters that are relevant for policy but not necessarily point-identified by the exogenous variation we observe? This paper provides an answer in terms of sharp, closed-form characterizations and bounds for the latent index selection model, which defines a large class of policy-relevant treatment effects via its marginal treatment effect (MTE) function [Heckman and Vytlacil (1999,2005), Vytlacil (2002)]. The sharp bounds use the full content of identified marginal distributions, and closed-form expressions rely on the theory of stochastic orders. The proposed methods also make it possible to sharply incorporate new auxiliary assumptions on distributions into the latent index selection framework. Empirically, I apply the methods to study the effects of Medicaid on emergency room utilization in the Oregon Health Insurance Experiment, showing that the predictions from extrapolations based on a distribution assumption (rank similarity) differ substantively and consistently from existing extrapolations based on a parametric mean assumption (linearity). This underscores the value of utilizing the model's full empirical content.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.02390&r=all
  11. By: Hess Chung; Cristina Fuentes-Albero; Matthias Paustian; Damjan Pfajfar
    Abstract: This paper advocates chaining the decomposition of shocks into contributions from forecast errors to the shock decomposition of the latent vector to better understand model inference about latent variables. Such a double decomposition allows us to gauge the influence of data on latent variables, like the data decomposition. However, by taking into account the transmission mechanisms of each type of shock, we can highlight the economic structure underlying the relationship between the data and the latent variables. We demonstrate the usefulness of this approach by detailing the role of observable variables in estimating the output gap in two models.
    Keywords: Kalman smoother; Latent variables; Shock decomposition; Data decomposition; Double decomposition
    JEL: C18 C32 C52
    Date: 2020–12–04
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2020-100&r=all
  12. By: Dennis Umlandt
    Abstract: This paper proposes a new parametric approach to estimate linear factor pricing models with time-varying risk premia. In contrast to recent contributions to the literature, the framework presented abstains from introducing instrument variables to describe the time variation of risk prices. Instead, time-varying risk prices and exposures follow a recursive updating scheme constructed to reduce the one-step ahead prediction error from a cross-sectional factor model at the current observation. This agnostic approach is particularly useful in situations where instrument variables are unavailable or of poor quality. Estimation and inference are done by likelihood maximization. A Monte Carlo study compares the ability of the method to predict risk prices and returns to that of a regression-based method that uses noisy signals from true risk price predictors. In a realistic setting, the two approaches keep pace when the signal contains 80 percent correct information. An application to a macro-finance model of currency carry trades illustrates the novel approach.
    Keywords: Dynamic Asset Pricing, Generalized Autoregressive Score Models, Time-varying Risk Premia, Return Predictability
    JEL: G12 G17 C58
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:trr:qfrawp:202006&r=all
  13. By: Hugo Bodory; Martin Huber; Luk\'a\v{s} Laff\'ers
    Abstract: We consider evaluating the causal effects of dynamic treatments, i.e. of multiple treatment sequences in various periods, based on double machine learning to control for observed, time-varying covariates in a data-driven way under a selection-on-observables assumption. To this end, we make use of so-called Neyman-orthogonal score functions, which imply the robustness of treatment effect estimation to moderate (local) misspecifications of the dynamic outcome and treatment models. This robustness property permits approximating outcome and treatment models by double machine learning even under high dimensional covariates and is combined with data splitting to prevent overfitting. In addition to effect estimation for the total population, we consider weighted estimation that permits assessing dynamic treatment effects in specific subgroups, e.g. among those treated in the first treatment period. We demonstrate that the estimators are asymptotically normal and $\sqrt{n}$-consistent under specific regularity conditions and investigate their finite sample properties in a simulation study. Finally, we apply the methods to the Job Corps study in order to assess different sequences of training programs under a large set of covariates.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.00370&r=all
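    The Neyman-orthogonal (doubly robust) score with cross-fitting that the paper extends to dynamic treatment sequences can be sketched in its static, one-period form (a simplified illustration, not the authors' estimator; the nuisance models here are deliberately crude — OLS outcome regressions and a clipped linear-probability propensity):

    ```python
    import numpy as np

    def aipw_ate_crossfit(y, d, X, n_folds=2, rng=None):
        """Cross-fitted AIPW estimate of an average treatment effect.
        Nuisances are fit on the complementary fold to prevent overfitting,
        mirroring the data-splitting step in double machine learning."""
        rng = rng or np.random.default_rng(0)
        n = len(y)
        folds = rng.permutation(n) % n_folds
        Xc = np.column_stack([np.ones(n), X])
        scores = np.empty(n)
        for k in range(n_folds):
            train, test = folds != k, folds == k
            # outcome regressions E[Y | D=d, X], fit on the training fold only
            b1, *_ = np.linalg.lstsq(Xc[train & (d == 1)], y[train & (d == 1)], rcond=None)
            b0, *_ = np.linalg.lstsq(Xc[train & (d == 0)], y[train & (d == 0)], rcond=None)
            # propensity P(D=1 | X) via a clipped linear-probability model
            g, *_ = np.linalg.lstsq(Xc[train], d[train], rcond=None)
            p = np.clip(Xc[test] @ g, 0.05, 0.95)
            m1, m0 = Xc[test] @ b1, Xc[test] @ b0
            yt, dt = y[test], d[test]
            # Neyman-orthogonal score: regression adjustment + IPW correction
            scores[test] = m1 - m0 + dt * (yt - m1) / p - (1 - dt) * (yt - m0) / (1 - p)
        return scores.mean(), scores.std(ddof=1) / np.sqrt(n)

    # Simulated data with a true effect of 1.0 and confounding through x.
    rng = np.random.default_rng(0)
    n = 4000
    x = rng.normal(size=n)
    d = (x + rng.normal(size=n) > 0).astype(float)
    y = 1.0 * d + x + rng.normal(size=n)
    ate, se = aipw_ate_crossfit(y, d, x[:, None], rng=rng)
    print(f"ATE estimate {ate:.2f} (se {se:.2f})")
    ```

    The paper replaces these parametric nuisance fits with machine learners and chains such scores across treatment periods; the orthogonality of the score is what makes the estimate robust to moderate nuisance misspecification.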
  14. By: Daniel Felix Ahelegbey (University of Pavia); Monica Billio (University of Venice); Roberto Casarin (University of Venice)
    Abstract: Turning points in financial markets are often characterized by changes in the direction and/or magnitude of market movements, with short-to-long-term impacts on investors' decisions. This paper develops a Bayesian technique for turning point detection in financial equity markets. We derive the interconnectedness among stock market returns from a piecewise network vector autoregressive model. The empirical application examines turning points in global equity markets over the past two decades. We also compare the Covid-19-induced interconnectedness with that of the global financial crisis in 2008 to identify similarities and the most central market for spillover propagation.
    Keywords: Bayesian inference, Dynamic Programming, Turning points, Networks, VAR.
    JEL: C11 C15 C51 C52 C55 C58 G01
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0195&r=all
  15. By: Ilya Archakov; Peter Reinhard Hansen; Asger Lunde
    Abstract: We propose a novel class of multivariate GARCH models that utilize realized measures of volatilities and correlations. The central component is an unconstrained vector parametrization of the correlation matrix that facilitates modeling of the correlation structure. The parametrization is based on the matrix logarithmic transformation that retains positive definiteness as an innate property. A factor approach offers a way to impose a parsimonious structure in high-dimensional systems, and we show that a factor framework arises naturally in some existing models. We apply the model to returns of nine assets and employ the factor structure that emerges from a block correlation specification. An auxiliary empirical finding is that the empirical distribution of parametrized realized correlations is approximately Gaussian. This observation is analogous to the well-known result for logarithmically transformed realized variances.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.02708&r=all
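    The matrix-log parametrization at the heart of the model can be demonstrated in a few lines (a sketch of the positive-definiteness property the abstract highlights; the example matrix and perturbation are arbitrary):

    ```python
    import numpy as np
    from scipy.linalg import logm, expm

    # Take the matrix logarithm of a correlation matrix and keep its
    # off-diagonal elements as an unconstrained parameter vector.
    C = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])

    L = logm(C).real                            # symmetric matrix logarithm
    theta = L[np.triu_indices_from(L, k=1)]     # unconstrained parameters
    print("parameters:", np.round(theta, 3))

    # Perturb the parameters arbitrarily: mapping any symmetric matrix back
    # through expm always yields a positive definite matrix, unlike a direct
    # parametrization of the correlations themselves.
    L_pert = L.copy()
    L_pert[np.triu_indices_from(L, k=1)] = theta + 0.8
    L_pert = np.triu(L_pert) + np.triu(L_pert, k=1).T
    M = expm(L_pert)
    print("positive definite after perturbation:", np.linalg.eigvalsh(M).min() > 0)
    ```

    Note that after an arbitrary perturbation the exponential no longer has a unit diagonal; recovering a proper correlation matrix requires the diagonal adjustment developed in the authors' related work (paper 9 in this issue treats the block-structured case). The sketch only shows the innate positive definiteness.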
  16. By: Paul Labonne
    Abstract: Nowcasting methods rely on timely series related to economic growth for producing and updating estimates of GDP growth before publication of official figures. But the statistical uncertainty attached to these forecasts, which is critical to their interpretation, is only improved marginally when new data on related series become available. That is particularly problematic in times of high economic uncertainty. As a solution this paper proposes to model common factors in scale and shape parameters alongside the mixed-frequency dynamic factor model typically used for location parameters in nowcasting frameworks. Scale and shape parameters control the time-varying dispersion and asymmetry around point forecasts which are necessary to capture the increase in variance and negative skewness found in times of recessions. It is shown how cross-sectional dependencies in scale and shape parameters may be modelled in mixed-frequency settings, with a particularly convenient approximation for scale parameters in Gaussian models. The benefit of this methodology is explored using vintages of U.S. economic growth data with a focus on the economic depression resulting from the coronavirus pandemic. The results show that modelling common factors in scale and shape parameters improves nowcasting performance towards the end of the nowcasting window in recessionary episodes.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.02601&r=all
  17. By: Laurinaityte, Nora; Meinerding, Christoph; Schlag, Christian; Thimme, Julian
    Abstract: Cross-sectional asset pricing tests with GMM can generate spuriously high explanatory power for factor models when the moment conditions are specified such that they allow the estimated factor means to substantially deviate from the observed sample averages. In fact, by shifting the weights on the moment conditions, any level of cross-sectional fit can be attained. This property is a feature of the GMM estimation design and applies to strong as well as weak factors, and to all sample sizes and test assets. We reveal the origins of this bias theoretically, gauge its size using simulations, and document its relevance empirically.
    Keywords: asset pricing, cross-section of expected returns, GMM, factor zoo
    JEL: G00 G12 C21 C13
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:622020&r=all
  18. By: Sean ELLIOTT (University of Toronto); Christian GOURIEROUX (University of Toronto, Toulouse School of Economics, and CREST)
    Abstract: The aim of this paper is to understand the extreme variability of the estimated reproduction ratio R0 observed in practice. For expository purposes, we consider a discrete-time stochastic version of the Susceptible-Infected-Recovered (SIR) model and introduce different approximate maximum likelihood (AML) estimators of R0. We carefully discuss the properties of these estimators and illustrate, through a Monte Carlo study, the width of confidence intervals on R0.
    Keywords: SIR Model, Reproduction Ratio, COVID-19, Approximate Maximum Likelihood, EpiEstim, Final Size.
    Date: 2020–12–09
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2020-31&r=all
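    A discrete-time stochastic SIR model and a simple estimator of R0 can be sketched as follows (an illustrative binomial-transition discretization with our own parameter choices and a crude moment-based estimator, not the paper's AML estimators):

    ```python
    import numpy as np

    def simulate_sir(n, i0, beta, gamma, T, rng):
        """Discrete-time stochastic SIR with binomial transitions."""
        S, I, R = [n - i0], [i0], [0]
        for _ in range(T):
            p_inf = min(beta * I[-1] / n, 1.0)   # per-susceptible infection prob.
            new_inf = rng.binomial(S[-1], p_inf)
            new_rec = rng.binomial(I[-1], gamma) # per-infected recovery prob.
            S.append(S[-1] - new_inf)
            I.append(I[-1] + new_inf - new_rec)
            R.append(R[-1] + new_rec)
        return np.array(S), np.array(I), np.array(R)

    rng = np.random.default_rng(3)
    n, beta, gamma = 1_000_000, 0.4, 0.2         # true R0 = beta/gamma = 2
    S, I, R = simulate_sir(n, i0=100, beta=beta, gamma=gamma, T=60, rng=rng)

    # Simple moment-based estimates of beta and gamma from the observed path
    ok = I[:-1] > 0                              # avoid dividing by zero
    beta_hat = np.mean(-np.diff(S)[ok] / (S[:-1][ok] * I[:-1][ok] / n))
    gamma_hat = np.mean(np.diff(R)[ok] / I[:-1][ok])
    print(f"estimated R0: {beta_hat / gamma_hat:.2f}")
    ```

    Rerunning with different seeds, especially with few observed periods or a small initial infected count, shows the estimate bouncing around the true value — the kind of variability in estimated R0 the paper sets out to quantify.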
  19. By: Jose Apesteguia; Miguel Ángel Ballester
    Abstract: We propose a novel measure of goodness of fit for stochastic choice models: the maximal fraction of the data that can be reconciled with the model. The procedure separates the data into two parts: one generated by the best specification of the model and another representing residual behavior. We claim that the three elements involved in such a separation are instrumental to understanding the data. We show how to apply our approach to any stochastic choice model and then study the case of four well-known models, each capturing a different notion of randomness. We illustrate our results with an experimental dataset.
    Keywords: Goodness of fit; Stochastic Choice; Residual Behavior
    JEL: C91 D81 G12 G20 G41
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1757&r=all
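    For a fixed model prediction, the maximal reconcilable fraction has a closed form: writing the data p as a mixture p = λq + (1−λ)r of the model prediction q and a residual distribution r ≥ 0 forces λ ≤ p_i/q_i for every alternative, so λ* = min_i p_i/q_i. A minimal sketch (the frequencies below are hypothetical; the paper additionally optimizes over model specifications):

    ```python
    import numpy as np

    def maximal_fraction(p, q):
        """Largest lambda with p = lambda*q + (1-lambda)*r for a probability
        vector r >= 0; nonnegativity of r pins down lambda = min_i p_i/q_i."""
        mask = q > 0
        lam = min(1.0, np.min(p[mask] / q[mask]))
        if lam >= 1.0:
            return 1.0, np.zeros_like(p)      # model explains all the data
        r = (p - lam * q) / (1 - lam)         # residual behavior
        return lam, r

    # Observed choice frequencies vs. a model's predicted frequencies
    # (hypothetical numbers, for illustration only).
    p = np.array([0.50, 0.30, 0.20])          # data
    q = np.array([0.60, 0.25, 0.15])          # model prediction
    lam, r = maximal_fraction(p, q)
    print(f"fraction explained: {lam:.3f}")   # min(0.833, 1.2, 1.333) = 0.833
    print("residual behavior:", np.round(r, 3))
    ```

    The residual r sums to one by construction (since p and q both do), so the separation really does split the data into a model-consistent part and a residual choice distribution.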

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.