nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒09‒17
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A further look at Modified ML estimation of the panel AR(1) model with fixed effects and arbitrary initial conditions. By Kruiniger, Hugo
  2. A Class of Time-Varying Parameter Structural VARs for Inference under Exact or Set Identification By Bognanni, Mark
  3. The Evolution of Forecast Density Combinations in Economics By Knut Are Aastveit; James Mitchell; Francesco Ravazzolo; Herman van Dijk
  4. Hamiltonian Sequential Monte Carlo with Application to Consumer Choice Behavior By Martin Burda; Remi Daviet
  5. Bayesian Estimation of Fractionally Integrated Vector Autoregressions and an Application to Identified Technology Shocks By Ross Doppelt; Keith O'Hara
  6. Agnostic Structural Disturbances (ASDs): Detecting and Reducing Misspecification in Empirical Macroeconomic Models By Den Haan, Wouter; Drechsel, Thomas
  7. Identification of structural multivariate GARCH models By HAFNER Christian,; HERWARTZ Helmut,; MAXAND Simone,
  8. Inference based on Kotlarski's Identity By Kengo Kato; Yuya Sasaki; Takuya Ura
  9. Methods Matter: P-Hacking and Causal Inference in Economics By Abel Brodeur; Nikolai Cook; Anthony Heyes
  10. A Residual Bootstrap for Conditional Value-at-Risk By Eric Beutner; Alexander Heinemann; Stephan Smeekes
  11. Analytic Moments for GARCH Processes By Carol Alexander; Emese Lazar; Silvia Stanescu
  12. Testing for bubbles in cryptocurrencies with time-varying volatility By HAFNER Christian,
  13. Emergence of Turbulent Epochs in Oil Prices By Josselin Garnier; Knut Solna
  14. Asymmetry of copulas arising from shock models By Damjana Kokol Bukovšek; Tomaž Košir; Blaž Mojškerc; Matjaž Omladič
  15. New More Powerful Likelihood Ratio Tests for Short Horizon Event Studies By CHIRANJIT MUKHOPADHYAY
  16. Network constrained covariate coefficient and connection sign estimation By WEBER Matthias,; STRIAUKAS Jonas,; SCHUMACHER Martin,; BINDER Harald,

  1. By: Kruiniger, Hugo
    Abstract: In this paper we consider two kinds of generalizations of Lancaster's (Review of Economic Studies, 2002) Modified ML estimator (MMLE) for the panel AR(1) model with fixed effects and arbitrary initial conditions and possibly covariates when the time dimension, T, is fixed. When the autoregressive parameter ρ=1, the limiting modified profile log-likelihood function for this model has a stationary point of inflection and ρ is first-order underidentified but second-order identified. We show that the generalized MMLEs exist w.p.a.1 and are uniquely defined w.p.1. and consistent for any value of ρ≥-1. When ρ=1, the rate of convergence of the MMLEs is N^{1/4}, where N is the cross-sectional dimension of the panel. We then develop an asymptotic theory for GMM estimators when one of the parameters is only second-order identified and use this to derive the limiting distributions of the MMLEs. They are generally asymmetric when ρ=1. One kind of generalized MMLE depends on a weight matrix W_{N} and we show that a suitable choice of W_{N} yields an asymptotically unbiased MMLE. We also show that Quasi LM tests that are based on the modified profile log-likelihood and use its expected rather than observed Hessian, with an additional modification for ρ=1, and confidence regions that are based on inverting these tests have correct asymptotic size in a uniform sense when |ρ|≤1. Finally, we investigate the finite sample properties of the MMLEs and the QLM test in a Monte Carlo study.
    Keywords: dynamic panel data, expected Hessian, fixed effects, Generalized Method of Moments (GMM), inflection point, Modified Maximum Likelihood, Quasi LM test, second-order identification, singular information matrix, weak moment conditions.
    JEL: C11 C12 C13 C23
    Date: 2018–06–16
  2. By: Bognanni, Mark (Federal Reserve Bank of Cleveland)
    Abstract: This paper develops a new class of structural vector autoregressions (SVARs) with time-varying parameters, which I call a drifting SVAR (DSVAR). The DSVAR is the first structural time-varying parameter model to allow for internally consistent probabilistic inference under exact—or set—identification, nesting the widely used SVAR framework as a special case. I prove that the DSVAR implies a reduced-form representation, from which structural inference can proceed similarly to the widely used two-step approach for SVARs: beginning with estimation of a reduced form and then choosing among observationally equivalent candidate structural parameters via the imposition of identifying restrictions. In a special case, the implied reduced form is a tractable known model for which I provide the first algorithm for Bayesian estimation of all free parameters. I demonstrate the framework in the context of Baumeister and Peersman’s (2013b) work on time variation in the elasticity of oil demand.
    Keywords: structural vector autoregressions; time-varying parameters; Gibbs sampling; stochastic volatility; Bayesian inference;
    JEL: C11 C15 C32 C52 E3 E4 E5
    Date: 2018–09–11
  3. By: Knut Are Aastveit (Norges Bank); James Mitchell (Warwick Business School); Francesco Ravazzolo (Free University of Bozen/Bolzano); Herman van Dijk (Erasmus University, Norges Bank)
    Abstract: Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from several sources, such as experts, models and large micro-data sets. Given this increased relevance of forecast density combinations, the genesis and evolution of this approach, both inside and outside economics, are explored. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination has been made more flexible in the recent literature: it has evolved from simple average weights to optimized weights and then to `richer' procedures that allow for time variation, learning features and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policy-making environment of central banks.
    Keywords: Forecasting; Model Uncertainty; Density Combinations
    JEL: C10 C11
    Date: 2018–09–02
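The `fundamental density combination equation' in its simplest form is a finite mixture of individual predictive densities. The sketch below (not from the paper; the two Gaussian "expert" densities and all parameters are invented for illustration) shows the step the abstract describes, from fixed equal weights to weights optimized by the average log predictive score:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two hypothetical "expert" predictive densities for y, both Gaussian:
# one with a biased mean, one overdispersed. The truth is y ~ N(0, 1).
y = rng.normal(0.0, 1.0, size=500)          # realized outcomes
dens_a = norm.pdf(y, loc=0.5, scale=1.0)    # expert A: biased mean
dens_b = norm.pdf(y, loc=0.0, scale=2.0)    # expert B: overdispersed

def neg_log_score(w):
    # Finite-mixture combination p(y) = w * p_A(y) + (1 - w) * p_B(y)
    mix = w * dens_a + (1.0 - w) * dens_b
    return -np.mean(np.log(mix))

# Optimized weight maximizes the average log predictive score over [0, 1]
res = minimize_scalar(neg_log_score, bounds=(0.0, 1.0), method="bounded")
w_opt = res.x
print(f"optimized weight on expert A: {w_opt:.3f}")
print(f"equal-weight log score: {-neg_log_score(0.5):.4f}")
print(f"optimized   log score: {-neg_log_score(w_opt):.4f}")
```

By construction the optimized weight can only improve on the fixed equal-weight mixture in sample; the time-varying and learning schemes surveyed in the paper generalize this static step.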
  4. By: Martin Burda; Remi Daviet
    Abstract: Practical use of nonparametric Bayesian methods requires the availability of efficient algorithms for posterior inference. The inherently serial nature of Markov Chain Monte Carlo (MCMC) imposes limitations on its efficiency and scalability. In recent years there has been a surge of research activity devoted to developing alternative implementation methods that target parallel computing environments. Sequential Monte Carlo (SMC), also known as the particle filter, has been gaining popularity due to its desirable properties. SMC uses a genetic mutation-selection sampling approach with a set of particles representing the posterior distribution of a stochastic process. We propose to enhance the performance of SMC by utilizing Hamiltonian transition dynamics in the particle transition phase, in place of the random-walk dynamics used in the previous literature. We call the resulting procedure Hamiltonian Sequential Monte Carlo (HSMC). Hamiltonian transition dynamics have been shown to yield superior mixing and convergence properties relative to random-walk transition dynamics in the context of MCMC procedures. The rationale behind HSMC is to translate such gains to the SMC environment. We apply both SMC and HSMC to a panel discrete choice model with a nonparametric distribution of unobserved individual heterogeneity. We contrast both methods in terms of convergence properties and show the favorable performance of HSMC.
    Keywords: Particle filtering, Bayesian nonparametrics, mixed panel logit, discrete choice
    JEL: C11 C14 C15 C23 C25
    Date: 2018–09–12
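To illustrate the general idea (not the authors' code or model; the toy Gaussian target, tempering schedule and tuning constants are all invented), the sketch below runs a tempered SMC sampler whose particle-move step uses leapfrog Hamiltonian dynamics with a Metropolis correction in place of a random walk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target standing in for a posterior: 2-D Gaussian, log pi(x) = -x'Px/2
P = np.array([[2.0, 1.2], [1.2, 2.0]])      # precision matrix

def logp(x, lam):                            # tempered log target, lam in (0, 1]
    return -0.5 * lam * np.einsum("ni,ij,nj->n", x, P, x)

def grad_logp(x, lam):
    return -lam * x @ P

N, D = 2000, 2
lams = np.linspace(0.1, 1.0, 10)             # tempering schedule
x = rng.normal(0.0, 3.0, size=(N, D))        # initial particle cloud ~ N(0, 9 I)
logw = logp(x, lams[0]) + 0.5 * np.sum(x**2, axis=1) / 9.0  # importance correction

eps, L = 0.15, 10                            # leapfrog step size / number of steps
for lam_prev, lam in zip(lams[:-1], lams[1:]):
    logw += logp(x, lam) - logp(x, lam_prev)          # reweight to next bridge
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]                 # multinomial resampling
    logw = np.zeros(N)
    # Particle move via Hamiltonian dynamics (leapfrog + MH accept/reject),
    # replacing the random-walk proposal of plain SMC
    p0 = rng.normal(size=(N, D))
    p = p0 + 0.5 * eps * grad_logp(x, lam)
    xn = x + eps * p
    for _ in range(L - 1):
        p += eps * grad_logp(xn, lam)
        xn += eps * p
    p += 0.5 * eps * grad_logp(xn, lam)
    log_acc = (logp(xn, lam) - logp(x, lam)
               - 0.5 * np.sum(p**2, axis=1) + 0.5 * np.sum(p0**2, axis=1))
    accept = np.log(rng.uniform(size=N)) < log_acc
    x[accept] = xn[accept]

print("posterior mean estimate:", x.mean(axis=0))
print("posterior cov estimate:\n", np.cov(x.T))
```

The final particle cloud should approximate the target with mean zero and covariance inv(P); the gradient-informed move is what distinguishes HSMC from a random-walk rejuvenation step.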
  5. By: Ross Doppelt (Penn State); Keith O'Hara (New York University)
    Abstract: We introduce a new method for Bayesian estimation of fractionally integrated vector autoregressions (FIVARs). The FIVAR, which nests a standard VAR as a special case, allows each series to exhibit long memory, meaning that low frequencies can play a dominant role — a salient feature of many macroeconomic and financial time series. Although the parameter space is typically high-dimensional, our inferential procedure is computationally tractable and relatively easy to implement. We apply our methodology to the identification of technology shocks, an empirical problem in which business-cycle predictions depend on carefully accounting for low-frequency fluctuations.
    Date: 2018
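A FIVAR builds on the fractional difference operator (1-L)^d, whose binomial-series coefficients decay hyperbolically rather than geometrically for 0 < d < 1/2 (the long-memory range). A minimal sketch of the filter (illustrative only; this is not the authors' estimation code):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - L)^d expanded as sum_k w_k L^k (binomial series)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - L)^d to a series, truncating the filter at the sample start."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

w = frac_diff_weights(0.4, 500)
print(w[:4])       # starts 1.0, -0.4, -0.12, ...

# Differencing by d and then by -d should recover the original series exactly,
# since the truncated causal filters compose to the identity.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
x_rec = frac_diff(frac_diff(x, 0.4), -0.4)
print("max reconstruction error:", np.max(np.abs(x_rec - x)))
```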
  6. By: Den Haan, Wouter; Drechsel, Thomas
    Abstract: Exogenous random structural disturbances are the main driving force behind fluctuations in most business cycle models and typically a wide variety is used. This paper documents that a minor misspecification regarding structural disturbances can lead to large distortions for parameter estimates and implied model properties, such as impulse response functions with a wrong shape and even an incorrect sign. We propose a novel concept, namely an agnostic structural disturbance (ASD), that can be used to both detect and correct for misspecification of the structural disturbances. In contrast to regular disturbances and wedges, ASDs do not impose additional restrictions on policy functions. When applied to the Smets-Wouters (SW) model, we find that its risk-premium disturbance and its investment-specific productivity disturbance are rejected in favor of our ASDs. While agnostic in nature, studying the estimated associated coefficients and the impulse response functions of these ASDs allows us to interpret them economically as a risk-premium/preference and an investment-specific productivity type disturbance as in SW, but our results indicate that they enter the model quite differently than the original SW disturbances. Our procedure also selects an additional wage mark-up disturbance that is associated with increased capital efficiency.
    Keywords: DSGE; full-information model estimation; structural disturbances
    JEL: C13 C52 E30
    Date: 2018–08
  7. By: HAFNER Christian, (CORE and ISBA, UCLouvain); HERWARTZ Helmut, (University of Goettingen); MAXAND Simone, (University of Helsinki)
    Abstract: Multivariate GARCH models are widely used to model volatility and correlation dynamics of financial time series. These models are typically silent about the transmission of implied orthogonalized shocks to vector returns. We propose a loss statistic to discriminate in a data-driven way between alternative structural assumptions about the transmission scheme. In its structural form, a four-dimensional system comprising US and Latin American stock market returns points to a substantial volatility transmission from the US to the Latin American markets. The identified structural model improves the estimation of classical measures of portfolio risk, as well as corresponding variations.
    Keywords: structural innovations; identifying assumptions; MGARCH; portfolio risk; volatility transmission
    JEL: C32 G15
    Date: 2018–07–25
  8. By: Kengo Kato; Yuya Sasaki; Takuya Ura
    Abstract: This paper studies nonparametric inference on the probability density function of a latent variable in a measurement error model with repeated measurements. We construct a system of linear complex-valued moment restrictions via Kotlarski's identity and then establish a confidence band for the density of the latent variable. Our confidence band controls the asymptotic size uniformly over a class of data generating processes, and it is consistent against all fixed alternatives. Simulation studies support our theoretical results.
    Date: 2018–08
  9. By: Abel Brodeur (Department of Economics, University of Ottawa, Ottawa, ON); Nikolai Cook (Department of Economics, University of Ottawa, Ottawa, ON); Anthony Heyes (Department of Economics, University of Ottawa, Ottawa, ON, and University of Sussex)
    Abstract: The economics 'credibility revolution' has promoted the identification of causal relationships using difference-in-differences (DID), instrumental variables (IV), randomized control trials (RCT) and regression discontinuity design (RDD) methods. The extent to which a reader should trust claims about the statistical significance of results proves very sensitive to the method used. Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking are a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.
    Keywords: Research methods, causal inference, p-curves, p-hacking, publication bias.
    JEL: A11 B41 C13 C44
    Date: 2018
  10. By: Eric Beutner; Alexander Heinemann; Stephan Smeekes
    Abstract: This paper proposes a fixed-design residual bootstrap method for the two-step estimator of Francq and Zakoïan (2015) associated with the conditional Value-at-Risk. The bootstrap's consistency is proven under mild assumptions for a general class of volatility models, and bootstrap intervals are constructed for the conditional Value-at-Risk to quantify the uncertainty induced by estimation. A large-scale simulation study is conducted revealing that the equal-tailed percentile interval based on the fixed-design residual bootstrap tends to fall short of its nominal coverage. In contrast, the reversed-tails interval based on the fixed-design residual bootstrap yields accurate coverage. In the simulation study we also consider the recursive-design bootstrap. It turns out that the recursive-design and the fixed-design bootstrap perform equally well in terms of average coverage. Yet in smaller samples the fixed-design scheme leads on average to shorter intervals. An empirical application illustrates the interval estimation using the fixed-design residual bootstrap.
    Date: 2018–08
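The mechanics of a fixed-design residual bootstrap can be sketched as follows (illustrative only: a simulated GARCH(1,1) series, Gaussian QMLE and invented tuning constants; note the abstract finds the plain equal-tailed percentile interval shown here can undercover, with a reversed-tails variant preferred):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

rng = np.random.default_rng(7)

def simulate_garch(n, omega, alpha, beta):
    e = rng.standard_t(df=7, size=n) / np.sqrt(7.0 / 5.0)  # unit-variance innovations
    y, s2 = np.empty(n), omega / (1 - alpha - beta)
    for t in range(n):
        y[t] = np.sqrt(s2) * e[t]
        s2 = omega + alpha * y[t] ** 2 + beta * s2
    return y

def sigma2_path(theta, y):
    """Conditional variances s2_1..s2_{n+1}, with a fixed initial value."""
    omega, alpha, beta = theta
    s2_0 = y.var()
    x = omega + alpha * y ** 2
    s2_rest, _ = lfilter([1.0], [1.0, -beta], x, zi=[beta * s2_0])
    return np.concatenate(([s2_0], s2_rest))

def qmle(y):
    """Gaussian quasi-maximum likelihood for (omega, alpha, beta)."""
    def nll(theta):
        omega, alpha, beta = theta
        if min(theta) <= 0 or alpha + beta >= 0.999:
            return 1e10                      # crude feasibility penalty
        s2 = sigma2_path(theta, y)[:-1]
        return 0.5 * np.sum(np.log(s2) + y ** 2 / s2)
    return minimize(nll, x0=[0.05, 0.05, 0.85], method="Nelder-Mead").x

y = simulate_garch(1000, omega=0.05, alpha=0.10, beta=0.85)
theta_hat = qmle(y)
s2_hat = sigma2_path(theta_hat, y)
resid = y / np.sqrt(s2_hat[:-1])             # standardized residuals
var_hat = -np.sqrt(s2_hat[-1]) * np.quantile(resid, 0.05)   # 1-step 5% VaR

# Fixed-design residual bootstrap: the estimated volatility path is held
# fixed, and only the innovations are resampled with replacement.
var_boot = np.empty(200)
for b in range(200):
    e_star = rng.choice(resid, size=len(y), replace=True)
    y_star = np.sqrt(s2_hat[:-1]) * e_star
    theta_star = qmle(y_star)
    s2_star = sigma2_path(theta_star, y_star)
    r_star = y_star / np.sqrt(s2_star[:-1])
    var_boot[b] = -np.sqrt(s2_star[-1]) * np.quantile(r_star, 0.05)
lo, hi = np.percentile(var_boot, [2.5, 97.5])  # equal-tailed 95% interval
print(f"VaR estimate {var_hat:.3f}, bootstrap interval [{lo:.3f}, {hi:.3f}]")
```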
  11. By: Carol Alexander; Emese Lazar; Silvia Stanescu
    Abstract: Conditional returns distributions generated by a GARCH process, which are important for many applications in market risk assessment and portfolio optimization, are typically generated via simulation. This paper extends previous research on analytic moments of GARCH returns distributions in several ways: we consider a general GARCH model -- the GJR specification with a generic innovation distribution; we derive analytic expressions for the first four conditional moments of the forward return, of the forward variance, of the aggregated return and of the aggregated variance -- corresponding moments for some specific GARCH models largely used in practice are recovered as special cases; we derive the limits of these moments as the time horizon increases, establishing regularity conditions for the moments of aggregated returns to converge to normal moments; and we demonstrate empirically that some excellent approximate predictive distributions can be obtained from these analytic moments, thus precluding the need for time-consuming simulations.
    Date: 2018–08
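For the plain GARCH(1,1) special case, the best-known analytic moment is the h-step conditional variance forecast, E_t[s2_{t+h}] = s2_bar + (alpha+beta)^(h-1) (s2_{t+1} - s2_bar), which also gives the conditional variance of the aggregated return as a sum. A quick simulation check of this standard formula (not the paper's more general GJR derivations; parameters invented):

```python
import numpy as np

rng = np.random.default_rng(3)
omega, alpha, beta = 0.05, 0.08, 0.90
phi = alpha + beta                      # variance persistence
s2_bar = omega / (1 - phi)              # unconditional variance
s2_next = 2.0                           # today's known sigma^2_{t+1} (low-vol state here is 2.5)
H = 20

# Analytic h-step forecast: E_t[s2_{t+h}] = s2_bar + phi^(h-1) (s2_{t+1} - s2_bar)
analytic = s2_bar + phi ** np.arange(H) * (s2_next - s2_bar)
agg_var_analytic = analytic.sum()       # conditional variance of the H-period return

# Monte Carlo check of the same moments
M = 100_000
s2 = np.full(M, s2_next)
sim = np.zeros((H, M))
for h in range(H):
    r = np.sqrt(s2) * rng.normal(size=M)
    sim[h] = r ** 2
    s2 = omega + alpha * r ** 2 + beta * s2
print("max abs gap, analytic vs simulated:", np.max(np.abs(analytic - sim.mean(axis=1))))
```

The aggregated-return variance equals the sum of the per-period forecasts here because zero-mean GARCH returns are conditionally uncorrelated.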
  12. By: HAFNER Christian, (CORE and ISBA, UCLouvain)
    Abstract: The recent evolution of cryptocurrencies has been characterized by bubble-like behavior and extreme volatility. While it is difficult to assign an intrinsic value to a specific cryptocurrency, one can employ recently proposed bubble tests that rely on recursive applications of classical unit root tests. This paper extends this approach to the case where volatility is time-varying, assuming a deterministic long-run component that may take into account a decrease of unconditional volatility as the cryptocurrency matures and gains wider market dissemination. Volatility also includes a stochastic short-run component to capture volatility clustering. The wild bootstrap is shown to correctly adjust the size properties of the bubble test, which retains good power. In an empirical application using eleven of the largest cryptocurrencies and the CRIX index, the general evidence in favor of bubbles is confirmed, but is much less pronounced than under constant volatility.
    Keywords: cryptocurrencies; speculative bubbles; wild bootstrap; volatility
    JEL: C14 C43 Z11
    Date: 2018–07–25
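The recursive unit-root approach the paper extends can be sketched as a sup of forward-expanding-window Dickey-Fuller statistics, with a wild bootstrap used to obtain critical values that are robust to time-varying volatility. The toy example below (not the paper's test; the series, window fraction and replication count are invented) flags a simulated explosive episode:

```python
import numpy as np

rng = np.random.default_rng(5)

def df_tstat(y):
    """t-statistic on rho in  dy_t = mu + rho * y_{t-1} + err."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    coef = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se

def sadf(y, r0=0.3):
    """Sup of forward-expanding-window DF statistics (recursive bubble test)."""
    n, start = len(y), int(r0 * len(y))
    return max(df_tstat(y[:t]) for t in range(start, n + 1))

# Random walk with an explosive (bubble) episode over the last third
n = 300
y = np.cumsum(rng.normal(size=n))
for t in range(200, n):
    y[t] = 1.02 * y[t - 1] + rng.normal()
stat = sadf(y)

# Wild bootstrap under the unit-root null: rebuild the series from first
# differences multiplied by external N(0,1) draws, which preserves volatility
# patterns but destroys the explosive signal.
dy = np.diff(y)
boot = np.array([sadf(np.concatenate(([y[0]], y[0] + np.cumsum(dy * rng.normal(size=n - 1)))))
                 for _ in range(99)])
p_value = np.mean(boot >= stat)
print(f"SADF statistic: {stat:.2f}, wild-bootstrap p-value: {p_value:.3f}")
```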
  13. By: Josselin Garnier; Knut Solna
    Abstract: Oil price data have a complicated multi-scale structure that may vary with time. We use time-frequency analysis to identify the main features of these variations and, in particular, the regime shifts. The analysis is based on a wavelet-based decomposition and analysis of the associated scale spectrum. The joint estimation of the local Hurst exponent and volatility is the key to detecting and identifying regime shifting and switching of the oil price. In particular, the framework involves modeling in terms of a process of `multi-fractional' type, so that both the roughness and the volatility of the price process may vary with time. Special epochs then emerge as a result of these degrees of freedom and of the particular type of spectral estimator used. These special epochs are discussed and related to historical events. Some of them are not detected by standard analysis based on maximum likelihood estimation. The paper presents a novel algorithm for robust detection of such special epochs and multi-fractional behavior in financial or other types of data. In the financial context, insight into such behavior of the asset price is important for evaluating financial contracts involving the asset.
    Date: 2018–08
  14. By: Damjana Kokol Bukovšek; Tomaž Košir; Blaž Mojškerc; Matjaž Omladič
    Abstract: When choosing the right copula for our data, a key point is to identify the family that describes it best. In this respect, a better choice of copula can be guided by information about the (non)symmetry of the data. Exchangeability as a probability concept (first next to independence) has been studied since the 1930s, copulas have been studied since the 1950s, and even the most important class of copulas from the point of view of applications, i.e. those arising from shock models such as Marshall's copulas, have been studied since the 1960s. However, the issue of non-exchangeability of copulas was brought up only in 2006 and has been intensively studied ever since. One of the main contributions of this paper is the maximal asymmetry function for a family of copulas. We compute this function for the major families of shock-based copulas, i.e. Marshall, maxmin and reflected maxmin (RMM for short) copulas, and also for some other important families. We compute the sharp bound of the asymmetry measure $\mu_\infty$, the most important of the asymmetry measures, for the family of Marshall copulas and the family of maxmin copulas, which both equal $\frac{4}{27}\ (\approx 0.148)$. One should compare this bound to the one for the class of PQD copulas to which they belong, which is $3-2\sqrt{2}\ (\approx 0.172)$, and to the general bound for all copulas, which is $\frac13$. Furthermore, we give the sharp bound of the same asymmetry measure for RMM copulas, which is $3-2\sqrt{2}$, compared to the same bound for NQD copulas, where they belong, which is $\sqrt{5}-2\ (\approx 0.236)$. One of our main results is also the statistical interpretation of shocks in a given model at which the maximal asymmetry measure bound is attained. These interpretations for the three families studied are illustrated by examples that should be helpful to practitioners when choosing the model for their data.
    Date: 2018–08
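As a numerical companion to the $\frac{4}{27}$ bound, the sketch below grid-evaluates the asymmetry $\mu_\infty(C) = \sup_{u,v} |C(u,v) - C(v,u)|$ over the classical Marshall-Olkin subfamily of shock-model copulas (an illustration written for this digest, assuming the Marshall-Olkin copulas fall within the paper's Marshall class; grid sizes are arbitrary):

```python
import numpy as np

def mo_copula(u, v, a, b):
    """Marshall-Olkin copula C(u,v) = min(u^(1-a) v, u v^(1-b)), 0 <= a, b <= 1."""
    return np.minimum(u ** (1 - a) * v, u * v ** (1 - b))

# Grid approximation of mu_inf(C) = sup |C(u,v) - C(v,u)|
g = np.linspace(0.0, 1.0, 501)
U, V = np.meshgrid(g, g)

def mu_inf(a, b):
    return np.max(np.abs(mo_copula(U, V, a, b) - mo_copula(V, U, a, b)))

# Scan the (a, b) family; the paper's sharp bound for Marshall copulas is 4/27
best = max(mu_inf(a, b) for a in np.linspace(0, 1, 11) for b in np.linspace(0, 1, 11))
print(f"max grid asymmetry over MO family: {best:.4f}  (bound 4/27 = {4/27:.4f})")
```

The symmetric corners (a or b equal to 0, and a = b = 1) give exchangeable copulas with zero asymmetry, while interior parameter pairs such as (1, 1/2) produce clearly nonzero asymmetry below the bound.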
  15. By: CHIRANJIT MUKHOPADHYAY (Indian Institute of Science)
    Abstract: Short horizon Event Studies (ES) in financial research are concerned with the effects of firm-specific or market-wide events, such as stock splits, earnings announcements, mergers and acquisitions, and derivatives introductions, on the underlying firms' stock prices. Though the field has been around for half a century, and evidence abounds for the phenomenon of Event Induced Variance (EIV), methodological development in the ES literature has mostly focused on a shift in location of the expected abnormal returns. In this work, a random-effect model is proposed which explicitly accounts for the (empirically observed) cross-sectional variance of the (predicted) abnormal returns, along with another parameter accommodating a change in post-event volatility (another empirically observed phenomenon). Under this model, the null hypothesis of "no event effect" also involves these additional variance parameters beyond the usual mean. This necessitates development of new tests for this and other hypotheses of interest in ES, for which new Likelihood Ratio Tests (LRTs) are derived. As is standard in the ES literature, the specification and power behavior of the newly developed LRTs are compared with those of the existing ES tests, using real returns of 1231 stocks listed on the National Stock Exchange, India between April 1998 and January 2016. 100,000 samples of sizes 5 and 50 are drawn to estimate and compare the probabilities of type-I error and power of the tests. The new LRTs are compared with both popular and recent parametric and non-parametric ES tests in the literature. The powers are compared under both presence and absence of a shift in location of the distribution of the abnormal returns, along with the two components of EIV. The newly developed LRTs are found to be adequately specified, and far more powerful than the existing ES tests in the literature, for a wide spectrum of alternatives.
    Keywords: Cumulative Abnormal Return, Efficient Market Hypothesis, Event Induced Variance, Power, Specification
    JEL: G14 C12 C58
    Date: 2018–06
  16. By: WEBER Matthias, (Bank of Lithuania and Vilnius University); STRIAUKAS Jonas, (CORE, UCLouvain); SCHUMACHER Martin, (University of Freiburg); BINDER Harald, (University of Freiburg)
    Abstract: Often, variables are linked to each other via a network. When such a network structure is known, this knowledge can be incorporated into regularized regression settings. In particular, an additional network penalty can be added on top of another penalty term, such as a Lasso penalty. However, when the type of interaction via the network is unknown (that is, whether connections are of an activating or a repressing type), the connection signs have to be estimated simultaneously with the covariate coefficients. This can be done with an algorithm iterating a connection sign estimation step and a covariate coefficient estimation step. We show detailed simulation results of such an algorithm. The algorithm performs well in a variety of settings. We also briefly describe the R-package that we developed for this purpose, which is publicly available.
    Keywords: network regression; network penalty; connection sign estimation; regularized regression
    Date: 2018–06–11
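The iteration the abstract describes — alternating between a connection-sign estimation step and a covariate-coefficient estimation step — can be sketched as follows. This is an illustrative Python translation, not the authors' R package: the network, penalty weights and data are invented, and the network penalty is taken to be a signed-Laplacian quadratic on top of a Lasso term, solved by proximal gradient:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: p covariates linked by a known network (edge list),
# but with unknown connection signs (activating vs repressing).
n, p = 120, 10
edges = [(0, 1), (1, 2), (3, 4), (5, 6)]
beta_true = np.array([2.0, 2.0, -2.0, 1.5, 1.5, 0.0, 0.0, 0.0, 0.0, 0.0])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(size=n)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit(signs, lam1=0.05, lam2=0.5, iters=500):
    # Signed network penalty lam2 * sum_edges (b_j - s_jk * b_k)^2 encourages
    # connected coefficients to agree up to the estimated connection sign.
    L = np.zeros((p, p))
    for (j, k), s in zip(edges, signs):
        L[j, j] += 1; L[k, k] += 1
        L[j, k] -= s; L[k, j] -= s
    A = X.T @ X / n + 2 * lam2 * L
    step = 1.0 / np.linalg.eigvalsh(A).max()
    b = np.zeros(p)
    for _ in range(iters):                   # proximal gradient (ISTA) for the Lasso part
        grad = A @ b - X.T @ y / n
        b = soft(b - step * grad, step * lam1)
    return b

# Alternate sign estimation and coefficient estimation
signs = np.ones(len(edges))
for _ in range(5):
    b = fit(signs)
    signs = np.array([np.sign(b[j] * b[k]) or 1.0 for j, k in edges])  # default +1 at zero
print("estimated connection signs:", signs)
print("estimated coefficients:", np.round(b, 2))
```

In this toy setting the procedure should recover the repressing edge between covariates 1 and 2 (true coefficients of opposite sign) while keeping the activating edges positive.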

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.