nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒09‒14
23 papers chosen by
Sune Karlsson
Örebro universitet

  1. Coverage Optimal Empirical Likelihood Inference for Regression Discontinuity Design By Jun Ma; Zhengfei Yu
  2. Better Lee Bounds By Vira Semenova
  3. Efficient closed-form estimation of large spatial autoregressions By Abhimanyu Gupta
  4. Variable Selection and Forecasting in High Dimensional Linear Regressions with Structural Breaks By Alexander Chudik; M. Hashem Pesaran; Mahrad Sharifvaghefi
  5. Inference for Moment Inequalities: A Constrained Moment Selection Procedure By Rami V. Tabri; Christopher D. Walker
  6. Robust inference in time-varying structural VAR models: The DC-Cholesky multivariate stochastic volatility model By Hartwig, Benny
  7. Spectral Targeting Estimation of $\lambda$-GARCH models By Simon Hetland
  8. Time stable small area estimates of general parameters under a unit-level model By Maria Guadarrama; Domingo Morales; Isabel Molina
  9. Finite-Sample Average Bid Auction By Haitian Xie
  10. A Vector Monotonicity Assumption for Multiple Instruments By Leonard Goff
  11. Bellman filtering for state-space models By Rutger Jan Lange
  12. Optimizing tail risks using an importance sampling based extrapolation for heavy-tailed objectives By Anand Deo; Karthyek Murthy
  13. Value-at-risk — the comparison of state-of-the-art models on various assets By Karol Kielak; Robert Ślepaczuk
  14. Structural Gaussian mixture vector autoregressive model By Savi Virolainen
  15. Difference-in-Differences Estimators of Intertemporal Treatment Effects By Clément de Chaisemartin; Xavier D'Haultfœuille
  16. Bayesian estimation of DSGE models with Hamiltonian Monte Carlo By Farkas, Mátyás; Tatar, Balint
  17. Should the Randomistas (Continue to) Rule? By Martin Ravallion
  18. On cointegration for processes integrated at different frequencies By del Barrio Castro, Tomás; Cubadda, Gianluca; Osborn, Denise R.
  19. Nowcasting in a Pandemic using Non-Parametric Mixed Frequency VARs By Florian Huber; Gary Koop; Luca Onorante; Michael Pfarrhofer; Josef Schreiner
  20. Fourier instantaneous estimators and the Epps effect By Patrick Chang
  21. Optimal Decision Rules for Weak GMM By Isaiah Andrews; Anna Mikusheva
  22. Time-Varying Parameters as Ridge Regressions By Philippe Goulet Coulombe
  23. On the Origin(s) and Development of "Big Data": The Phenomenon, the Term, and the Discipline By Francis X. Diebold

  1. By: Jun Ma; Zhengfei Yu
    Abstract: This paper proposes an empirical likelihood inference method for a general framework that covers various types of treatment effect parameters in regression discontinuity designs (RDDs). Our method can be applied to standard sharp and fuzzy RDDs, RDDs with categorical outcomes, augmented sharp and fuzzy RDDs with covariates, and testing problems that involve multiple RDD treatment effect parameters. Our method is based on the first-order conditions from local polynomial fitting and avoids explicit asymptotic variance estimation. We investigate both first-order and second-order asymptotic properties and derive the coverage-optimal bandwidth which minimizes the leading term in the coverage error expansion. In some cases, the coverage-optimal bandwidth has a simple explicit form, which the Wald-type inference method usually lacks. We also find that Bartlett-corrected empirical likelihood inference further improves the coverage accuracy. An easily implementable coverage-optimal bandwidth selector and Bartlett correction are proposed for practical use. We conduct Monte Carlo simulations to assess the finite-sample performance of our method and also apply it to two real datasets to illustrate its usefulness.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.09263&r=all
  2. By: Vira Semenova
    Abstract: This paper develops methods for tightening Lee (2009) bounds on average causal effects when the number of pre-randomization covariates is large, potentially exceeding the sample size. These Better Lee Bounds are guaranteed to be sharp when few of the covariates affect the selection and the outcome. If this sparsity assumption fails, the bounds remain valid. I propose inference methods that enable hypothesis testing in either case. My results rely on a weakened monotonicity assumption that only needs to hold conditional on covariates. I show that the unconditional monotonicity assumption that motivates traditional Lee bounds fails for the JobCorps training program. After imposing only conditional monotonicity, Better Lee Bounds are found to be much more informative than standard Lee bounds in a variety of settings.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.12720&r=all
  3. By: Abhimanyu Gupta
    Abstract: Newton-step approximations to pseudo maximum likelihood estimates of spatial autoregressive models with a large number of parameters are examined, commencing from initial instrumental variables or least squares estimates. These have the same asymptotic efficiency properties as maximum likelihood under Gaussianity but are of closed form. Hence they are computationally simple and free from compactness assumptions, thereby avoiding two notorious pitfalls of implicitly defined estimates of large spatial autoregressions. For an initial least squares estimate, the Newton step can also lead to weaker regularity conditions for a central limit theorem than those extant in the literature. A simulation study demonstrates excellent finite sample gains from Newton iterations, especially in large multiparameter models for which grid search is costly. A small empirical illustration shows improvements in estimation precision with real data.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.12395&r=all
  4. By: Alexander Chudik; M. Hashem Pesaran; Mahrad Sharifvaghefi
    Abstract: This paper is concerned with the problem of variable selection and forecasting in the presence of parameter instability. There are a number of approaches proposed for forecasting in the presence of breaks, including the use of rolling windows or exponential down-weighting. However, these studies start with a given model specification and do not consider the problem of variable selection. It is clear that, in the absence of breaks, researchers should weigh the observations equally at both the variable selection and forecasting stages. In this study, we investigate whether or not we should use weighted observations at the variable selection stage in the presence of structural breaks, particularly when the number of potential covariates is large. Amongst the extant variable selection approaches we focus on the recently developed One Covariate at a time Multiple Testing (OCMT) method that allows a natural distinction between the selection and forecasting stages, and provide theoretical justification for using the full (not down-weighted) sample in the selection stage of OCMT and down-weighting of observations only at the forecasting stage (if needed). The benefits of the proposed method are illustrated by empirical applications to forecasting output growths and stock market returns.
    Keywords: Time-varying parameters; structural breaks; high-dimensionality; multiple testing; variable selection; one covariate at a time multiple testing (OCMT); forecasting
    JEL: C22 C52 C53 C55
    Date: 2020–08–19
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:88638&r=all
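    A minimal sketch (my own illustration under simplifying assumptions, not the authors' code or their exact critical values) of the two-stage logic described above: covariates are screened one at a time on the full, equally weighted sample, and only the forecasting regression down-weights older observations.
```python
import numpy as np
from scipy import stats

def ocmt_then_downweighted_forecast(y, X, x_next, alpha=0.05, decay=0.99):
    T, K = X.shape
    # Stage 1: one-covariate-at-a-time t-tests on the full (equally weighted) sample.
    selected = []
    crit = stats.norm.ppf(1 - alpha / (2 * K))     # a simple Bonferroni-style cut-off (illustrative only)
    for j in range(K):
        xj = np.column_stack([np.ones(T), X[:, j]])
        b, *_ = np.linalg.lstsq(xj, y, rcond=None)
        resid = y - xj @ b
        se = np.sqrt(resid @ resid / (T - 2) * np.linalg.inv(xj.T @ xj)[1, 1])
        if abs(b[1] / se) > crit:
            selected.append(j)
    # Stage 2: forecast with exponentially down-weighted least squares on the selected set.
    Z = np.column_stack([np.ones(T), X[:, selected]])
    w = decay ** np.arange(T - 1, -1, -1)          # recent observations receive larger weights
    beta = np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * y))
    return np.concatenate([[1.0], x_next[selected]]) @ beta
```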
  5. By: Rami V. Tabri; Christopher D. Walker
    Abstract: Inference in models where the parameter is defined by moment inequalities is of interest in many areas of economics. This paper develops a new method for improving the finite-sample performance of generalized moment selection (GMS) testing procedures. The method modifies GMS tests by tilting the empirical distribution in its moment selection step by an amount that maximizes the empirical likelihood subject to the restrictions of the null hypothesis. We characterize sets of population distributions on which a modified GMS test is (i) asymptotically equivalent to its non-modified version to first order, and (ii) superior to its non-modified version according to local power when the sample size is large enough. An important feature of the proposed modification is that it remains computationally feasible even when the number of moment inequalities is large. We report simulation results showing that the modified tests control size well and have markedly improved local power over their non-modified counterparts.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.09021&r=all
  6. By: Hartwig, Benny
    Abstract: This paper investigates how the ordering of variables affects properties of the time-varying covariance matrix in the Cholesky multivariate stochastic volatility model. It establishes that systematically different dynamic restrictions are imposed when the ratio of volatilities is time-varying. Simulations demonstrate that estimated covariance matrices become more divergent when volatility clusters idiosyncratically. It is illustrated that this property is important for empirical applications. Specifically, alternative estimates of the evolution of U.S. systematic monetary policy and inflation-gap persistence indicate that conclusions may critically hinge on the selected ordering of variables. The dynamic correlation Cholesky multivariate stochastic volatility model is proposed as a robust alternative.
    Keywords: Model uncertainty, Multivariate stochastic volatility, Dynamic correlations, Monetary policy, Structural VAR
    JEL: C11 C32 E32 E52
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:342020&r=all
  7. By: Simon Hetland
    Abstract: This paper presents a novel estimator of orthogonal GARCH models, which combines (eigenvalue and -vector) targeting estimation with stepwise (univariate) estimation. We denote this the spectral targeting estimator. This two-step estimator is consistent under finite second order moments, while asymptotic normality holds under finite fourth order moments. The estimator is especially well suited for modelling larger portfolios: we compare the empirical performance of the spectral targeting estimator to that of the quasi maximum likelihood estimator for five portfolios of 25 assets. The spectral targeting estimator dominates in terms of computational complexity, being up to 57 times faster in estimation, while both estimators produce similar out-of-sample forecasts, indicating that the spectral targeting estimator is well suited for high-dimensional empirical applications.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.02588&r=all
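    The following is a minimal sketch of the two-step logic described in the abstract above (eigenvalue/eigenvector targeting followed by stepwise univariate estimation), written to illustrate the general orthogonal-GARCH idea rather than to reproduce the paper's exact estimator; it assumes the Python "arch" package for the univariate GARCH(1,1) fits.
```python
import numpy as np
from arch import arch_model

def spectral_two_step(returns):
    """returns: (T, N) array of demeaned returns; yields (T, N, N) conditional covariances."""
    T, N = returns.shape
    # Step 1: "target" the unconditional covariance via its eigendecomposition.
    evals, evecs = np.linalg.eigh(np.cov(returns, rowvar=False))
    pcs = returns @ evecs                       # rotate into (approximately) orthogonal components
    # Step 2: fit a univariate GARCH(1,1) to each component separately.
    cond_var = np.empty((T, N))
    for i in range(N):
        res = arch_model(pcs[:, i], mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
        cond_var[:, i] = res.conditional_volatility ** 2
    # Rotate the diagonal component variances back to the asset space: H_t = V diag(d_t) V'.
    return np.einsum("ij,tj,kj->tik", evecs, cond_var, evecs)
```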
  8. By: Maria Guadarrama; Domingo Morales; Isabel Molina
    Abstract: Longitudinal surveys that collect information on certain phenomena at several time points are very popular because they allow changes over time to be analyzed. Data coming from such surveys often present correlation over time that should be accounted for by the statistical procedures employed. In fact, methods that account for the existing time correlation are expected to yield more stable small area estimates over time. Temporal stability is a desirable property of statistics that are published regularly, especially in applications such as poverty mapping, where poverty estimates for the same area with big jumps from one period to the next are rarely credible. This paper considers a unit-level temporal linear mixed model for small area estimation that includes random time effects, nested within the usual area effects, following an autoregressive process of order 1, AR(1). Based on the proposed model, we obtain empirical best predictors of general area parameters, giving explicit expressions for some common poverty indicators. We also propose a parametric bootstrap method for estimating their mean squared errors under the model. The proposed methods are studied through simulation experiments and illustrated with an application to poverty mapping in Spanish provinces using survey data from 2004-2006.
    Keywords: Small area estimation; Empirical best predictor; Linear mixed models; Time correlation; Poverty mapping
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:irs:cepswp:2020-10&r=all
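    For concreteness, a plausible algebraic form of the unit-level temporal model described above (my rendering in standard nested-error notation, not taken verbatim from the paper) is
```latex
y_{dtj} = \mathbf{x}_{dtj}'\boldsymbol{\beta} + u_{1,d} + u_{2,dt} + e_{dtj},
\qquad u_{2,dt} = \rho\, u_{2,d,t-1} + \epsilon_{dt}, \quad |\rho| < 1,
```
    where d, t and j index areas, time periods and units; u_{1,d} ~ N(0, sigma_1^2) are area effects, u_{2,dt} are AR(1) time effects nested within areas with innovations epsilon_{dt} ~ N(0, sigma_2^2), and e_{dtj} ~ N(0, sigma_e^2) are unit-level errors, all mutually independent.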
  9. By: Haitian Xie
    Abstract: The paper studies the problem of auction design in a setting where the auctioneer accesses knowledge of the valuation distribution only through statistical samples. A new framework is established that combines statistical decision theory with mechanism design. Two optimality criteria, maxmin and equivariance, are studied along with their implications for the form of auctions. The simplest form of the equivariant auction is the average bid auction, which sets individual reservation prices proportional to the average of other bids and historical samples. This form of auction can be motivated by the Gamma distribution, and it sheds new light on the estimation of the optimal price, an irregular parameter. Theoretical results show that it is often possible to use the regular parameter, the population mean, to approximate the optimal price. An adaptive average bid estimator is developed from this idea, and it has the same asymptotic properties as the empirical Myerson estimator. The newly proposed estimator performs significantly better in terms of value at risk and expected shortfall when the sample size is small.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.10217&r=all
  10. By: Leonard Goff
    Abstract: When a researcher wishes to use multiple instrumental variables for a single binary treatment, the familiar LATE monotonicity assumption can become restrictive: it requires that all units share a common direction of response even when different instruments are shifted in opposing directions. What I call vector monotonicity, by contrast, simply restricts treatment status to be monotonic in each instrument separately. This is a natural assumption in many contexts, capturing the intuitive notion of "no defiers" for each instrument. I show that in a setting with a binary treatment and multiple discrete instruments, a class of causal parameters is point identified under vector monotonicity, including the average treatment effect among units that are responsive to any particular subset of the instruments. I propose a simple "2SLS-like" estimator for the family of identified treatment effect parameters. An empirical application revisits the labor market returns to college education.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.00553&r=all
  11. By: Rutger Jan Lange (Erasmus School of Economics)
    Abstract: This article presents a new filter for state-space models based on Bellman's dynamic programming principle applied to the posterior mode. The proposed Bellman filter generalises the Kalman filter including its extended and iterated versions, while remaining equally inexpensive computationally. The Bellman filter is also (unlike the Kalman filter) robust under heavy-tailed observation noise and applicable to a wider range of models. Simulation studies reveal that the mean absolute error of the Bellman-filtered states using estimated parameters typically falls within a few percent of that produced by the mode estimator evaluated at the true parameters, which is optimal but generally infeasible.
    Keywords: Bellman filter, dynamic programming, Kalman filter, maximum a posteriori (MAP) estimate, posterior mode, state-space model
    JEL: C32 C53 C61
    Date: 2020–08–27
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20200052&r=all
  12. By: Anand Deo; Karthyek Murthy
    Abstract: Motivated by the prominence of Conditional Value-at-Risk (CVaR) as a measure for tail risk in settings affected by uncertainty, we develop a new formula for approximating CVaR based optimization objectives and their gradients from limited samples. A key difficulty that limits the widespread practical use of these optimization formulations is the large amount of data required by the state-of-the-art sample average approximation schemes to approximate the CVaR objective with high fidelity. Unlike the state-of-the-art sample average approximations which require impractically large amounts of data in tail probability regions, the proposed approximation scheme exploits the self-similarity of heavy-tailed distributions to extrapolate data from suitable lower quantiles. The resulting approximations are shown to be statistically consistent and are amenable for optimization by means of conventional gradient descent. The approximation is guided by means of a systematic importance-sampling scheme whose asymptotic variance reduction properties are rigorously examined. Numerical experiments demonstrate the superiority of the proposed approximations and the ease of implementation points to the versatility of settings to which the approximation scheme can be applied.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.09818&r=all
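    A minimal sketch of the generic heavy-tail extrapolation idea mentioned above (a Hill-estimator/Pareto-scaling illustration of extrapolating from a lower quantile; it is not the authors' importance-sampling scheme):
```python
import numpy as np

def cvar_extrapolated(losses, p_target=0.999, p_anchor=0.95):
    """Approximate VaR/CVaR at p_target from the p_anchor quantile of the sample."""
    losses = np.asarray(losses)
    var_anchor = np.quantile(losses, p_anchor)
    tail = losses[losses > var_anchor]
    alpha = 1.0 / np.mean(np.log(tail / var_anchor))   # Hill estimator of the tail index
    # Pareto-type scaling: P(L > x) ~ c x^{-alpha} implies the quantile relation below.
    var_target = var_anchor * ((1 - p_anchor) / (1 - p_target)) ** (1.0 / alpha)
    cvar_target = var_target * alpha / (alpha - 1.0)    # valid only for alpha > 1
    return var_target, cvar_target
```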
  13. By: Karol Kielak (Quantitative Finance Research Group; Faculty of Economic Sciences, University of Warsaw); Robert Ślepaczuk (Quantitative Finance Research Group; Faculty of Economic Sciences, University of Warsaw)
    Abstract: This paper compares different approaches to Value-at-Risk measurement based on parametric and non-parametric methods. Three portfolios are taken into consideration: the first containing only stocks from the London Stock Exchange, the second based on different assets of various origins, and the third consisting of cryptocurrencies. The data used cover a period of more than 20 years. In the empirical part of the study, parametric methods based on the mean-variance framework are compared with GARCH(1,1) and EGARCH(1,1) models. Different assumptions concerning the returns' distribution are taken into consideration, with the fat-tails effect accommodated by using the Student t distribution. One-day-ahead 95% VaR estimates are then calculated. Thereafter, the models are validated using the Kupiec and Christoffersen tests and Monte Carlo simulation for reliable verification of the hypotheses. The overall goal of this paper is to establish whether the analyzed models accurately estimate the Value-at-Risk measure, especially for assets with various return distribution characteristics.
    Keywords: risk management, Value-at-Risk, GARCH models, returns distribution, Monte Carlo Simulation, asset class, cryptocurrencies
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-28&r=all
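    For readers who want to reproduce the basic setup, a minimal sketch of one-day-ahead 95% parametric VaR under a GARCH(1,1) with Student-t innovations, using the Python "arch" package (an assumption of this illustration; the paper's own implementation may differ):
```python
import numpy as np
from scipy import stats
from arch import arch_model

def var_95_garch_t(returns):
    """One-day-ahead 95% VaR from a GARCH(1,1)-t fit on (percentage) returns."""
    am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
    res = am.fit(disp="off")
    fc = res.forecast(horizon=1)
    mu = fc.mean.iloc[-1, 0]                      # conditional mean forecast
    sigma = np.sqrt(fc.variance.iloc[-1, 0])      # conditional volatility forecast
    nu = res.params["nu"]                         # estimated degrees of freedom
    # 5% quantile of the unit-variance Student-t used for the innovations
    q05 = stats.t.ppf(0.05, nu) * np.sqrt((nu - 2) / nu)
    return mu + sigma * q05                       # 5% return quantile (the 95% VaR)

# A rolling backtest would refit daily, compare realised returns to the VaR, and
# apply Kupiec/Christoffersen tests to the resulting exceedance series.
```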
  14. By: Savi Virolainen
    Abstract: A structural version of the Gaussian mixture vector autoregressive model is introduced. The shocks are identified by combining simultaneous diagonalization of the error term covariance matrices with zero and sign constraints. It turns out that this often leads to less restrictive identification conditions than in conventional SVAR models, while some of the constraints are also testable. The accompanying R-package gmvarkit provides easy-to-use tools for estimating the models and applying the introduced methods.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.04713&r=all
  15. By: Clément de Chaisemartin; Xavier D'Haultfœuille
    Abstract: We consider the estimation of the effect of a policy or treatment, using panel data where different groups of units are exposed to the treatment at different times. We focus on parameters aggregating instantaneous and dynamic treatment effects, with a clear welfare interpretation. We show that under parallel trends conditions, these parameters can be unbiasedly estimated by a weighted average of differences-in-differences, provided that at least one group is always untreated, and another group is always treated. Our estimators are valid if the treatment effect is heterogeneous, contrary to the commonly-used event-study regression.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.04267&r=all
  16. By: Farkas, Mátyás; Tatar, Balint
    Abstract: In this paper we adopt the Hamiltonian Monte Carlo (HMC) estimator for DSGE models by implementing it in a state-of-the-art, freely available high-performance software package. We estimate a small-scale textbook New Keynesian model and the Smets-Wouters model on US data. Our results and sampling diagnostics confirm the parameter estimates available in the existing literature. In addition, we combine the HMC framework with the Sequential Monte Carlo (SMC) algorithm, which permits the estimation of DSGE models with ill-behaved posterior densities.
    Keywords: DSGE Estimation, Bayesian Analysis, Hamiltonian Monte Carlo
    JEL: C11 C15 E10
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:imfswp:144&r=all
  17. By: Martin Ravallion
    Abstract: The rising popularity of randomized controlled trials (RCTs) in development applications has come with continuing debates about the merits of this approach. The paper takes stock of the issues. It argues that an unconditional preference for RCTs is questionable on three main counts. First, the case for such a preference is unclear on a priori grounds. For example, with a given budget, even a biased observational study can come closer to the truth than a costly RCT. Second, the ethical objections to RCTs have not been properly addressed by advocates. Third, there is a risk of distorting the evidence-base for informing policymaking, given that an insistence on RCTs generates selection bias in what gets evaluated. Going forward, pressing knowledge gaps should drive the questions asked and how they are answered, not the methodological preferences of some researchers. The gold standard is the best method for the question at hand.
    JEL: B41 C93 O22
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27554&r=all
  18. By: del Barrio Castro, Tomás; Cubadda, Gianluca; Osborn, Denise R.
    Abstract: This paper explores the possibility of cointegration existing between processes integrated at different frequencies. Using the demodulator operator, we show that such cointegration can exist and explore its form using both complex- and real-valued representations. A straightforward approach to test for the presence of cointegration between processes integrated at different frequencies is proposed, with a Monte Carlo study and an application showing that the testing approach works well.
    Keywords: Periodic Cointegration, Polynomial Cointegration, Demodulator Operator.
    JEL: C32
    Date: 2020–08–25
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:102611&r=all
  19. By: Florian Huber; Gary Koop; Luca Onorante; Michael Pfarrhofer; Josef Schreiner
    Abstract: This paper develops Bayesian econometric methods for posterior and predictive inference in a non-parametric mixed frequency VAR using additive regression trees. We argue that regression tree models are ideally suited for macroeconomic nowcasting in the face of the extreme observations produced by the pandemic due to their flexibility and ability to model outliers. In a nowcasting application involving four major countries in the European Union, we find substantial improvements in nowcasting performance relative to a linear mixed frequency VAR. A detailed examination of the predictive densities in the first six months of 2020 shows where these improvements are achieved.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.12706&r=all
  20. By: Patrick Chang
    Abstract: We compare the Malliavin-Mancino and Cuchiero-Teichmann Fourier instantaneous estimators to investigate the impact of the Epps effect arising from asynchrony in the instantaneous estimates. We demonstrate the instantaneous Epps effect under a simulation setting and provide a simple method to ameliorate the effect. We find that using the previous tick interpolation in the Cuchiero-Teichmann estimator results in unstable estimates when dealing with asynchrony, while the ability to bypass the time domain with the Malliavin-Mancino estimator allows it to produce stable estimates and is therefore better suited for ultra-high frequency finance. An empirical analysis using Trade and Quote data from the Johannesburg Stock Exchange illustrates the instantaneous Epps effect and how the intraday correlation dynamics can vary between days for the same equity pair.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.03453&r=all
  21. By: Isaiah Andrews; Anna Mikusheva
    Abstract: This paper derives the limit experiment for nonlinear GMM models with weak and partial identification. We propose a theoretically-motivated class of default priors on a non-parametric nuisance parameter. These priors imply computationally tractable Bayes decision rules in the limit problem, while leaving the prior on the structural parameter free to be selected by the researcher. We further obtain quasi-Bayes decision rules as the limit of sequences in this class, and derive weighted average power-optimal identification-robust frequentist tests. Finally, we prove a Bernstein-von Mises-type result for the quasi-Bayes posterior under weak and partial identification.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.04050&r=all
  22. By: Philippe Goulet Coulombe
    Abstract: Time-varying parameter (TVP) models are frequently used in economics to model structural change. I show that they are in fact ridge regressions. Instantly, this makes computations, tuning, and implementation much easier than in the state-space paradigm. Among other things, solving the equivalent dual ridge problem is computationally very fast even in high dimensions, and the crucial "amount of time variation" is tuned by cross-validation. Evolving volatility is dealt with using a two-step ridge regression. I consider extensions that incorporate sparsity (the algorithm selects which parameters vary and which do not) and reduced-rank restrictions (variation is tied to a factor model). To demonstrate the usefulness of the approach, I use it to study the evolution of monetary policy in Canada. The application requires the estimation of about 4600 TVPs, a task well within the reach of the new method.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.00401&r=all
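    As a minimal sketch of the equivalence the abstract states (my own illustration, not the paper's code), a random-walk TVP regression y_t = x_t' beta_t + e_t, beta_t = beta_{t-1} + u_t, can be written as one large ridge regression on the initial coefficient and its increments:
```python
import numpy as np

def tvp_ridge(y, X, lam):
    """Ridge solution of a random-walk TVP regression; lam plays the role of sigma_e^2 / sigma_u^2."""
    T, K = X.shape
    # Z maps the stacked parameter theta = (beta_1, u_2, ..., u_T) to the fitted values:
    # row t gets x_t in the beta_1 block and in every u_s block with s <= t.
    Z = np.zeros((T, T * K))
    for t in range(T):
        for s in range(t + 1):
            Z[t, s * K:(s + 1) * K] = X[t]
    # Penalize only the increments u_2, ..., u_T (not the initial state beta_1).
    D = np.ones(T * K)
    D[:K] = 0.0
    theta = np.linalg.solve(Z.T @ Z + lam * np.diag(D), Z.T @ y)
    # Recover the coefficient path by cumulating the increments.
    beta = np.cumsum(theta.reshape(T, K), axis=0)
    return beta  # beta[t] is the estimated coefficient vector at time t

# lam would be tuned by cross-validation, as the abstract describes; the dual (T x T)
# formulation mentioned there keeps the computation fast even when T*K is large.
```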
  23. By: Francis X. Diebold
    Abstract: I investigate Big Data (the phenomenon, the term, and the discipline), with emphasis on the origins of the term in industry and academia, and in computer science and statistics/econometrics. Big Data the phenomenon continues unabated, Big Data the term is now firmly entrenched, and Big Data the discipline is emerging.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.05835&r=all

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.