nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒10‒02
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Bootstrapping DSGE models By Giovanni Angelini; Giuseppe Cavaliere; Luca Fanelli
  2. Should I stay or should I go? Bayesian inference in the threshold time varying parameter (TTVP) model By Florian Huber; Gregor Kastner; Martin Feldkircher
  3. Residual-augmented IVX predictive regression By Paulo M.M. Rodrigues; Matei Demetrescu
  4. Generic Inference on Quantile and Quantile Effect Functions for Discrete Outcomes By Victor Chernozhukov; Ivan Fernandez-Val; Blaise Melly; Kaspar Wüthrich
  5. Efficient Inference of Average Treatment Effects in High Dimensions via Approximate Residual Balancing By Athey, Susan; Imbens, Guido W.; Wager, Stefan
  6. Jackknife Bias Reduction in the Presence of a Near-Unit Root By Chambers, Marcus J; Kyriacou, Maria
  7. Yet another look at MIDAS regression By Franses, Ph.H.B.F.
  8. A wavelet-based multivariate multiscale approach for forecasting By António Rua
  9. Asset allocation with judgment By Manganelli, Simone
  10. Adjusted p-values for genome-wide regression analysis with non-normally distributed quantitative phenotypes By Gregory Connor
  11. A sequential hypothesis testing procedure for the process capability index Cpk By Michele Scagliarini
  12. A Practical Approach to Testing Calibration Strategies By Yongquan Cao; Grey Gordon
  13. Flexible Functional Forms and Curvature Conditions: Parametric Productivity Estimation in Canadian and U.S. Manufacturing Industries By Jakir Hussain; Jean-Thomas Bernard
  14. Modeling censored losses using splicing: A global fit strategy with mixed Erlang and extreme value distributions By Tom Reynkens; Roel Verbelen; Jan Beirlant; Katrien Antonio
  15. Narrative Sign Restrictions for SVARs By Antolin-Diaz, Juan; Rubio-Ramírez, Juan Francisco
  16. Explaining Causal Findings without Bias: Detecting and Assessing Direct Effects By Acharya, Avidit; Blackwell, Matthew; Sen, Maya
  17. Testing Theories of Attitude Change with Online Panel Field Experiments By Broockman, David E.; Kalla, Joshua L.; Sekhon, Jasjeet S.

  1. By: Giovanni Angelini (Università di Bologna); Giuseppe Cavaliere (Università di Bologna); Luca Fanelli (Università di Bologna)
    Abstract: This paper explores the potential of bootstrap methods in the empirical evaluation of dynamic stochastic general equilibrium (DSGE) models and, more generally, in linear rational expectations models featuring unobservable (latent) components. We consider two dimensions. First, we provide mild regularity conditions that suffice for the bootstrap Quasi-Maximum Likelihood (QML) estimator of the structural parameters to mimic the asymptotic distribution of the QML estimator. Consistency of the bootstrap makes it possible to keep the probability of false rejections of the cross-equation restrictions under control. Second, we show that the realizations of the bootstrap estimator of the structural parameters can be constructively used to build novel, computationally straightforward tests for model misspecification, including the case of weak identification. In particular, we show that under strong identification and bootstrap consistency, a test statistic based on a set of realizations of the bootstrap QML estimator approximates the Gaussian distribution. Instead, when the regularity conditions for inference do not hold, as happens, e.g., when (part of) the structural parameters are weakly identified, this result is no longer valid. We can therefore evaluate how close the estimated model is to the case of strong identification. Our Monte Carlo experiments suggest that the bootstrap plays an important role along both dimensions and represents a promising tool for evaluating the cross-equation restrictions and, under certain conditions, the strength of identification. An empirical illustration based on a small-scale DSGE model estimated on U.S. quarterly observations shows the practical usefulness of our approach.
    Keywords: Bootstrap, Cross-equation restrictions, DSGE, QLR test, State space model, Weak identification.
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:133&r=ecm
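    Illustration: a minimal Python sketch (not the authors' code) of the bootstrap-normality idea, with a Gaussian AR(1) standing in for the DSGE model; under regularity the parametric bootstrap draws of the estimator should look approximately Gaussian, so a normality test on the draws is a crude analogue of the paper's identification diagnostic.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def simulate_ar1(rho, n):
          y = np.zeros(n)
          for t in range(1, n):
              y[t] = rho * y[t - 1] + rng.normal()
          return y

      def ols_rho(y):
          return y[:-1] @ y[1:] / (y[:-1] @ y[:-1])

      n, B = 200, 499
      y = simulate_ar1(0.7, n)
      rho_hat = ols_rho(y)

      # parametric bootstrap: simulate from the estimated model, re-estimate
      boot = np.array([ols_rho(simulate_ar1(rho_hat, n)) for _ in range(B)])

      # normality check on the bootstrap realizations (Jarque-Bera)
      stat, pval = stats.jarque_bera(boot)
      print(f"rho_hat = {rho_hat:.3f}, bootstrap normality p-value = {pval:.3f}")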
  2. By: Florian Huber (Department of Economics, Vienna University of Economics and Business); Gregor Kastner (Department of Statistics and Mathematics, Vienna University of Economics and Business); Martin Feldkircher (Oesterreichische Nationalbank (OeNB))
    Abstract: We provide a flexible means of estimating time-varying parameter models in a Bayesian framework. By specifying the state innovations to be characterized through a threshold process that is driven by the absolute size of parameter changes, our model detects at each point in time whether a given regression coefficient is constant or time-varying. Moreover, our framework accounts for model uncertainty in a data-based fashion through Bayesian shrinkage priors on the initial values of the states. In a simulation, we show that our model reliably identifies regime shifts in cases where the data generating processes display high, moderate, and low numbers of movements in the regression parameters. Finally, we illustrate the merits of our approach by means of two applications. In the first application we forecast the US equity premium and in the second application we investigate the macroeconomic effects of a US monetary policy shock.
    Keywords: Change point model, Threshold mixture innovations, Structural breaks, Shrinkage, Bayesian statistics, Monetary policy
    JEL: C11 C32 C52 E42
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp235&r=ecm
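    Illustration: a minimal Python simulation (hypothetical, not the authors' code) of the threshold mixture-innovation mechanism: a proposed move in the regression coefficient is kept only when its absolute size exceeds a threshold, so the coefficient is constant most of the time and jumps occasionally.
      import numpy as np
      rng = np.random.default_rng(2)

      T, thresh, sd = 300, 0.3, 0.15
      beta = np.zeros(T)
      for t in range(1, T):
          step = rng.normal(0.0, sd)
          # threshold mixture innovation: small proposed moves are set to zero
          beta[t] = beta[t - 1] + (step if abs(step) > thresh else 0.0)

      x = rng.normal(size=T)
      y = beta * x + rng.normal(0.0, 0.5, size=T)   # observed regression data
      print("number of coefficient moves:", int(np.sum(np.diff(beta) != 0)))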
  3. By: Paulo M.M. Rodrigues; Matei Demetrescu
    Abstract: Bias correction in predictive regressions stabilizes the empirical size properties of OLS-based predictability tests. This paper shows that bias correction also improves the finite sample power of tests, in particular in the context of the extended instrumental variable (IVX) predictability testing framework introduced by Kostakis et al. (2015, Review of Financial Studies). We introduce new IVX statistics subject to a bias correction analogous to that proposed by Amihud and Hurvich (2004, Journal of Financial and Quantitative Analysis). Three important contributions are provided: first, we characterize the effects that bias-reduction adjustments have on the asymptotic distributions of the IVX test statistics in a general context allowing for short-run dynamics and heterogeneity; second, we discuss the validity of the procedure when predictors are stationary as well as near-integrated; and third, we conduct an exhaustive Monte Carlo analysis to investigate the small-sample properties of the test procedure and its sensitivity to distinctive features that characterize predictive regressions in practice, such as strong persistence, endogeneity, non-Gaussian innovations and heterogeneity. An application of the new procedure to the Welch and Goyal (2008) database illustrates its usefulness in practice.
    JEL: C12 C22
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201605&r=ecm
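    Illustration: a minimal Python sketch (not the authors' code) of the basic IVX instrument construction behind this testing framework; the toy DGP, the tuning constants c_z and delta, and the simplified t-statistic are illustrative choices, and the paper's bias correction and short-run dynamics are omitted.
      import numpy as np
      rng = np.random.default_rng(3)

      n, beta, rho = 500, 0.0, 0.98        # near-integrated predictor, null true
      x = np.zeros(n)
      for t in range(1, n):
          x[t] = rho * x[t - 1] + rng.normal()
      y = beta * x[:-1] + rng.normal(size=n - 1)    # predictive regression

      # IVX instrument: a mildly integrated filter of the predictor's changes,
      # z_t = rho_z * z_{t-1} + dx_t with rho_z = 1 - c_z / n**delta
      c_z, delta = 1.0, 0.95
      rho_z = 1.0 - c_z / n**delta
      dx = np.diff(x, prepend=0.0)
      z = np.zeros(n)
      for t in range(1, n):
          z[t] = rho_z * z[t - 1] + dx[t]

      # simple IV estimate and t-statistic using z as instrument for x_{t-1}
      zlag, xlag = z[:-1], x[:-1]
      b_ivx = (zlag @ y) / (zlag @ xlag)
      u = y - b_ivx * xlag
      se = np.sqrt((u @ u) / (n - 1)) * np.sqrt(zlag @ zlag) / abs(zlag @ xlag)
      print(f"IVX beta = {b_ivx:.4f}, t = {b_ivx / se:.2f}")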
  4. By: Victor Chernozhukov; Ivan Fernandez-Val; Blaise Melly; Kaspar Wüthrich
    Abstract: This paper provides a method to construct simultaneous confidence bands for quantile and quantile effect functions for possibly discrete or mixed discrete-continuous random variables. The construction is generic and does not depend on the nature of the underlying problem. It works in conjunction with parametric, semiparametric, and nonparametric modeling strategies and does not depend on the sampling scheme. It is based on the projection of simultaneous confidence bands for distribution functions. We apply our method to analyze the distributional impact of insurance coverage on health care utilization and to provide a distributional decomposition of the racial test score gap. Our analysis generates interesting new findings and complements previous analyses that focused on mean effects only. In both applications the outcomes of interest are discrete, rendering standard inference methods invalid for obtaining uniform confidence bands for quantile and quantile effect functions.
    Keywords: quantiles; quantile effects; treatment effects; distribution; discrete; mixed; count data; confidence bands; uniform inference.
    JEL: C12 C21 C25
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1607&r=ecm
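    Illustration: a minimal Python sketch (not the authors' code) of the projection idea for a discrete outcome: build a uniform band for the distribution function (here a simple DKW band) and invert it, so the band for the tau-quantile collects every support point whose CDF band straddles tau.
      import numpy as np
      rng = np.random.default_rng(4)

      x = rng.poisson(3.0, size=400)               # discrete outcome
      grid = np.arange(x.max() + 2)
      Fhat = np.array([(x <= g).mean() for g in grid])

      # 95% uniform (DKW) confidence band for the distribution function
      eps = np.sqrt(np.log(2 / 0.05) / (2 * len(x)))
      lo, hi = np.clip(Fhat - eps, 0, 1), np.clip(Fhat + eps, 0, 1)

      def band_quantile(tau):
          # projection: left end inverts the upper band, right end the lower band
          q_lo = grid[np.argmax(hi >= tau)]
          q_hi = grid[np.argmax(lo >= tau)]
          return q_lo, q_hi

      for tau in (0.25, 0.5, 0.75):
          print(f"tau = {tau}: quantile band {band_quantile(tau)}")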
  5. By: Athey, Susan (Stanford University); Imbens, Guido W. (Stanford University); Wager, Stefan
    Abstract: There are many studies where researchers are interested in estimating average treatment effects and are willing to rely on the unconfoundedness assumption, which requires that treatment assignment is as good as random conditional on pre-treatment variables. The unconfoundedness assumption is often more plausible if a large number of pre-treatment variables are included in the analysis, but this can worsen the finite sample properties of existing approaches to estimation. In particular, existing methods do not handle well the case where the model for the propensity score (that is, the model relating pre-treatment variables to treatment assignment) is not sparse. In this paper, we propose a new method for estimating average treatment effects in high dimensions that combines balancing weights and regression adjustments. We show that our estimator achieves the semi-parametric efficiency bound for estimating average treatment effects without requiring any modeling assumptions on the propensity score. The result relies on two key assumptions, namely overlap (that is, all units have a propensity score that is bounded away from 0 and 1), and sparsity of the model relating pre-treatment variables to outcomes.
    Date: 2016–04
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:3408&r=ecm
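    Illustration: a minimal Python sketch (not the authors' code) of combining balancing weights with a regression adjustment, using scikit-learn's LassoCV for the outcome model and a simplified simplex-constrained balancing program solved with SciPy; the paper's exact objective and tuning are not reproduced.
      import numpy as np
      from scipy.optimize import minimize
      from sklearn.linear_model import LassoCV
      rng = np.random.default_rng(5)

      n, p = 200, 20
      X = rng.normal(size=(n, p))
      w = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))             # treatment
      y = X[:, 0] + 0.5 * X[:, 1] + 2.0 * w + rng.normal(size=n)  # true effect 2

      Xc, yc = X[w == 0], y[w == 0]
      xbar_t = X[w == 1].mean(axis=0)

      # (i) regularized outcome regression on the controls
      lasso = LassoCV(cv=5).fit(Xc, yc)

      # (ii) balancing weights: trade covariate imbalance toward the treated
      # mean against weight dispersion, on the simplex
      nc = len(yc)
      def objective(g):
          imb = Xc.T @ g - xbar_t
          return imb @ imb + 0.5 * g @ g
      res = minimize(objective, np.full(nc, 1 / nc), method='SLSQP',
                     bounds=[(0, 1)] * nc,
                     constraints=({'type': 'eq', 'fun': lambda g: g.sum() - 1},))
      gamma = res.x

      # (iii) combine: regression prediction plus balancing-weighted residuals
      att = y[w == 1].mean() - (lasso.predict(xbar_t[None, :])[0]
                                + gamma @ (yc - lasso.predict(Xc)))
      print(f"average effect on the treated ~ {att:.2f} (truth 2.0)")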
  6. By: Chambers, Marcus J; Kyriacou, Maria
    Abstract: This paper considers the specification and performance of jackknife estimators of the autoregressive coefficient in a model with a near-unit root. The limit distributions of sub-sample estimators that are used in the construction of the jackknife estimator are derived and the joint moment generating function (MGF) of two components of these distributions is obtained and its properties are explored. The MGF can be used to derive the weights for an optimal jackknife estimator that removes fully the first-order finite sample bias from the estimator. The resulting jackknife estimator is shown to perform well in finite samples and, with a suitable choice of the number of sub-samples, is shown to reduce the overall finite sample root mean squared error as well as bias. However, the optimal jackknife weights rely on knowledge of the near-unit root parameter, which is typically unknown in practice, and so an alternative, feasible, jackknife estimator is proposed which achieves the intended bias reduction but does not rely on knowledge of this parameter. This feasible jackknife estimator is also capable of substantial bias and root mean squared error reductions in finite samples across a range of values of the near-unit root parameter and across different sample sizes.
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:17623&r=ecm
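    Illustration: a minimal Python Monte Carlo (not the authors' code) of sub-sample jackknife bias reduction for the AR(1) coefficient under a near-unit root; it uses the standard stationary-case jackknife weights, whereas the paper derives optimal weights that depend on the near-unit-root parameter and a feasible version that does not.
      import numpy as np
      from scipy.signal import lfilter
      rng = np.random.default_rng(6)

      def ar1_ols(y):
          return y[:-1] @ y[1:] / (y[:-1] @ y[:-1])

      def jackknife(y, m=2):
          # non-overlapping sub-sample jackknife with the standard weights:
          # m/(m-1) * full-sample estimate - 1/(m-1) * mean sub-sample estimate
          subs = [ar1_ols(s) for s in np.array_split(y, m)]
          return m / (m - 1) * ar1_ols(y) - np.mean(subs) / (m - 1)

      n, c = 100, 5.0
      rho = 1 - c / n                      # local-to-unity autoregressive root
      bias_ols, bias_jack = [], []
      for _ in range(2000):
          y = lfilter([1.0], [1.0, -rho], rng.normal(size=n))
          bias_ols.append(ar1_ols(y) - rho)
          bias_jack.append(jackknife(y) - rho)
      print("mean bias, OLS      :", round(float(np.mean(bias_ols)), 4))
      print("mean bias, jackknife:", round(float(np.mean(bias_jack)), 4))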
  7. By: Franses, Ph.H.B.F.
    Abstract: A MIDAS regression involves a dependent variable observed at a low frequency and independent variables observed at a higher frequency. This paper relates a true high-frequency data generating process, in which the dependent variable is also (hypothetically) observed at the high frequency, to a MIDAS regression. It is shown that a correctly specified MIDAS regression usually includes lagged dependent variables, a substantial number of explanatory variables (observable at the low frequency) and a moving average term. Moreover, the parameters on the explanatory variables are unlikely to obey convenient patterns, and hence imposing such restrictions in practice is not recommended.
    Keywords: high frequency, low frequency, MIDAS regression
    JEL: C32
    Date: 2016–08–24
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:93331&r=ecm
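    Illustration: a minimal Python sketch (not the author's code) of the setting: the data are generated at the high frequency, the dependent variable is only observed every m-th period, and an unrestricted MIDAS regression with a lagged dependent variable and the m within-period high-frequency lags is fitted (the implied moving average term is ignored here).
      import numpy as np
      rng = np.random.default_rng(7)

      m, T_lf = 3, 200            # m high-frequency periods per low-frequency one
      n = m * T_lf
      x = rng.normal(size=n)
      yh = np.zeros(n)            # the true DGP lives at the high frequency
      for t in range(1, n):
          yh[t] = 0.5 * yh[t - 1] + 0.8 * x[t] + rng.normal(0, 0.3)
      y = yh[m - 1::m]            # dependent variable observed at low frequency

      # unrestricted MIDAS: lagged y plus the m high-frequency lags of x,
      # with no pattern imposed on the lag coefficients
      X = np.column_stack([y[:-1]] + [x[m - 1 - j::m][1:] for j in range(m)])
      X = np.column_stack([np.ones(len(X)), X])
      b, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
      print("lagged-y and high-frequency lag coefficients:", np.round(b[1:], 3))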
  8. By: António Rua
    Abstract: In an increasingly data-rich environment, factor models have become the workhorse approach for modelling and forecasting purposes. However, factors are non-observable and have to be estimated. In particular, the space spanned by the unknown factors is typically estimated via principal components. Herein, a novel procedure is proposed to estimate the factor space, resorting to wavelet-based multiscale principal component analysis. Through a Monte Carlo simulation study, it is shown that such an approach improves both factor model estimation and forecasting performance. An empirical application illustrates its usefulness for forecasting GDP growth and inflation in the United States.
    JEL: C22 C40 C53
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201612&r=ecm
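    Illustration: a minimal Python sketch (not the author's code) of wavelet-based multiscale principal components, assuming the PyWavelets package: each series is wavelet-decomposed, the first principal component is extracted scale by scale from the coefficients, and the transform is inverted to recover a multiscale factor estimate.
      import numpy as np
      import pywt                            # PyWavelets
      rng = np.random.default_rng(8)

      T, N, level = 256, 10, 3
      f = 0.1 * np.cumsum(rng.normal(size=T))          # common factor
      X = np.outer(f, rng.normal(size=N)) + rng.normal(size=(T, N))

      def first_pc(M):
          # leading principal component via SVD of the de-meaned block
          M = M - M.mean(axis=0)
          u, s, vt = np.linalg.svd(M, full_matrices=False)
          return u[:, 0] * s[0]

      coeffs = [pywt.wavedec(X[:, i], 'db4', mode='periodization', level=level)
                for i in range(N)]
      factor_coeffs = [first_pc(np.column_stack([c[k] for c in coeffs]))
                       for k in range(level + 1)]      # one PCA per scale
      factor = pywt.waverec(factor_coeffs, 'db4', mode='periodization')
      print("|corr(factor, truth)| =", round(abs(np.corrcoef(factor, f)[0, 1]), 2))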
  9. By: Manganelli, Simone
    Abstract: This paper shows how to incorporate judgment in a decision problem under uncertainty, within a classical framework. The method relies on the specification of a judgmental decision with associated confidence level and application of hypothesis testing. The null hypothesis tests whether marginal deviations from the judgmental decision generate negative changes in expected utility. The resulting estimator is always at the boundary of the confidence interval: beyond that point the probability of decreasing the expected utility becomes greater than the chosen confidence level. The decision maker chooses the confidence level as a mapping from the p-value of the judgmental decision into the unit interval. I show how the choice of priors in Bayesian estimators is equivalent to the choice of this confidence level mapping. I illustrate the implications of this new framework with a portfolio choice between cash and the EuroStoxx50 index.
    Keywords: judgmental estimation, out-of-sample, portfolio selection, statistical risk propensity
    JEL: C1 C11 C12 C13 D81
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20161947&r=ecm
  10. By: Gregory Connor (Department of Economics, Finance and Accounting, Maynooth University.)
    Abstract: This paper provides a small-sample adjustment for Bonferroni-corrected p-values in multiple univariate regressions of a quantitative phenotype (such as a social trait) on individual genome markers. The p-value estimator conventionally used in existing genome-wide association (GWA) regressions assumes a normally-distributed dependent variable, or relies on a central limit theorem based approximation. We show that the central limit theorem approximation is unreliable for GWA regression Bonferroni-corrected p-values except in very large samples. We note that measured phenotypes (particularly in the case of social traits) often have markedly non-normal distributions. We propose a mixed normal distribution to better fit observed phenotypic variables, and derive exact small-sample p-values for the standard GWA regression under this distributional assumption.
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:may:mayecw:n274-16.pdf&r=ecm
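    Illustration: a minimal Python simulation (not the author's code) of why the normal approximation can mislead at the extreme tail cutoffs that Bonferroni correction requires: null t-statistics are simulated under a mixture-normal phenotype and their tail frequencies are compared with the normal tail.
      import numpy as np
      from scipy import stats
      rng = np.random.default_rng(10)

      n, reps = 200, 20000
      def mix_errors(size):
          # mixture normal: 90% N(0,1), 10% N(0,9) -- a heavy-tailed phenotype
          comp = rng.random(size) < 0.9
          return np.where(comp, rng.normal(0, 1, size), rng.normal(0, 3, size))

      g = rng.binomial(2, 0.3, n).astype(float)       # genotype coded 0/1/2
      g = (g - g.mean()) / g.std()
      t_null = np.empty(reps)
      for r in range(reps):
          y = mix_errors(n)                           # null: marker has no effect
          b = g @ y / (g @ g)
          resid = y - b * g
          se = np.sqrt(resid @ resid / (n - 1) / (g @ g))
          t_null[r] = b / se
      for cut in (2.5, 3.5, 4.5):
          print(cut, (np.abs(t_null) > cut).mean(), 2 * stats.norm.sf(cut))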
  11. By: Michele Scagliarini (Università di Bologna)
    Abstract: In this study we propose a sequential procedure for hypothesis testing on the Cpk process capability index. We compare the properties of the sequential test with the performances of non-sequential tests by performing an extensive simulation study. The results indicate that the proposed sequential procedure can substantially reduce the required sample size, which translates into reduced costs, time and resources.
    Keywords: Average Sample Size; Brownian Motion; Maximal Allowable Sample Size; Power Function; Simulation Studies.
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:134&r=ecm
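    Illustration: the abstract does not spell out the procedure, so this Python sketch is only a generic sequential idea (not the author's test): sample in batches and stop as soon as an approximate confidence bound for Cpk (Bissell-type standard error) falls decisively above or below the hypothesized value.
      import numpy as np
      rng = np.random.default_rng(11)

      LSL, USL, c0, z = 4.0, 16.0, 1.33, 2.326  # spec limits, target, crude cutoff

      def cpk(x):
          mu, s = x.mean(), x.std(ddof=1)
          return min(USL - mu, mu - LSL) / (3 * s)

      x = list(rng.normal(10.3, 1.2, size=20))  # true Cpk is about 1.58
      for step in range(50):
          xa = np.array(x)
          n, c = len(xa), cpk(xa)
          se = np.sqrt(1 / (9 * n) + c**2 / (2 * (n - 1)))  # Bissell approximation
          if c - z * se > c0:
              print(f"accept capability at n={n} (Cpk_hat={c:.2f})"); break
          if c + z * se < c0:
              print(f"reject capability at n={n} (Cpk_hat={c:.2f})"); break
          x += list(rng.normal(10.3, 1.2, size=10))         # take another batch
      else:
          print("no decision within the sample budget")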
  12. By: Yongquan Cao (Indiana University); Grey Gordon (Indiana University)
    Abstract: A calibration strategy tries to match target moments using a model's parameters. We propose tests for determining whether this is possible. The tests use moments at random parameter draws to assess whether the target moments are similar to the computed ones (evidence of existence) or appear to be outliers (evidence of non-existence). Our experiments show the tests are effective at detecting both existence and non-existence in a non-linear model. Multiple calibration strategies can be quickly tested using just one set of simulated data. Code is provided.
    Keywords: Calibration, GMM, Existence, Outlier, Data Mining
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:inu:caeprp:2016004&r=ecm
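    Illustration: a minimal Python sketch (not the authors' code, and a cruder criterion than theirs) of testing calibration targets against moments computed at random parameter draws; here the Mahalanobis distance of the target from the simulated moment cloud flags likely non-existence.
      import numpy as np
      rng = np.random.default_rng(12)

      def moments(theta):
          # toy "model": a deliberately non-linear map from parameters to moments
          a, b = theta
          return np.array([np.exp(a) / (1 + np.exp(a)), a * b + b**2])

      draws = rng.uniform(-2, 2, size=(5000, 2))       # random parameter draws
      cloud = np.array([moments(th) for th in draws])  # attainable moments

      mu, cov = cloud.mean(axis=0), np.cov(cloud.T)
      def mahal(m):
          d = m - mu
          return float(np.sqrt(d @ np.linalg.solve(cov, d)))

      print("attainable target :", round(mahal(moments(np.array([0.5, 1.0]))), 2))
      print("implausible target:", round(mahal(np.array([5.0, 50.0])), 2))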
  13. By: Jakir Hussain (Department of Economics, University of Ottawa, Ottawa, ON); Jean-Thomas Bernard (Department of Economics, University of Ottawa, Ottawa, ON)
    Abstract: It is well known that econometric productivity estimation using flexible functional forms often encounters violations of curvature conditions. However, the productivity literature does not provide any guidance on the selection of appropriate functional forms once they satisfy the theoretical regularity conditions. In this paper, we provide empirical evidence that imposing local curvature conditions on flexible functional forms affects total factor productivity (TFP) estimates in addition to the elasticity estimates. Moreover, we use this as a criterion for evaluating the performance of three widely used locally flexible cost functional forms - the translog (TL), the Generalized Leontief (GL), and the Normalized Quadratic (NQ) - in providing TFP estimates. Results suggest that the NQ model performs better than the other two functional forms in providing TFP estimates.
    Keywords: Technical change, Productivity, Flexible functional forms, Translog (TL), Generalized Leontief (GL), Normalized Quadratic (NQ), Cost function, Concavity
    JEL: C22 F33
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:ott:wpaper:1612e&r=ecm
  14. By: Tom Reynkens; Roel Verbelen; Jan Beirlant; Katrien Antonio
    Abstract: In risk analysis, a global fit that appropriately captures the body and the tail of the distribution of losses is essential. Modeling the whole range of the losses using a standard distribution is usually very hard and often impossible due to the specific characteristics of the body and the tail of the loss distribution. A possible solution is to combine two distributions in a splicing model: a light-tailed distribution for the body which covers light and moderate losses, and a heavy-tailed distribution for the tail to capture large losses. We propose a splicing model with a mixed Erlang (ME) distribution for the body and a Pareto distribution for the tail. This combines the flexibility of the ME distribution with the ability of the Pareto distribution to model extreme values. We extend our splicing approach to censored and/or truncated data. Relevant examples of such data can be found in financial risk analysis. We illustrate the flexibility of this splicing model using practical examples from risk measurement.
    Keywords: censoring, composite model, expectation-maximization algorithm, risk measurement, tail modeling
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:ete:kbiper:549545&r=ecm
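    Illustration: a minimal Python sketch (not the authors' code) of a spliced density, with a single Erlang (gamma) component standing in for the mixed Erlang body and a Pareto tail above the splicing point; the paper's EM fitting and the censoring/truncation extensions are omitted.
      import numpy as np
      from scipy import stats

      t, p, alpha = 10.0, 0.9, 1.8          # splicing point, body mass, tail index
      body = stats.gamma(a=2.0, scale=2.0)  # body component (one Erlang term)

      def spliced_pdf(x):
          x = np.asarray(x, dtype=float)
          below = p * body.pdf(x) / body.cdf(t)                 # renormalized body
          above = (1 - p) * alpha * t**alpha / x**(alpha + 1)   # Pareto(t) tail
          return np.where(x <= t, below, above)

      # sanity check: the density should integrate to (almost) one
      grid = np.linspace(1e-6, 5000.0, 500_000)
      dx = grid[1] - grid[0]
      print("integral ~", round(float((spliced_pdf(grid) * dx).sum()), 4))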
  15. By: Antolin-Diaz, Juan; Rubio-Ramírez, Juan Francisco
    Abstract: This paper identifies structural vector autoregressions using narrative sign restrictions. Narrative sign restrictions constrain the structural shocks and the historical decomposition of the data around key historical events, ensuring that they agree with the established account of these episodes. Using models of the oil market and monetary policy, we show that narrative sign restrictions can be highly informative. In particular, we highlight that adding a small number of narrative sign restrictions, or sometimes even a single one, dramatically sharpens and even changes the inference of SVARs originally identified via the established practice of placing sign restrictions only on the impulse response functions. We see our approach as combining the appeal of narrative methods with the desire to base inference on a few uncontroversial restrictions, which popularized the use of sign restrictions.
    Date: 2016–09
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:11517&r=ecm
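    Illustration: a minimal Python sketch (not the authors' code) of how a narrative restriction thins the set of admissible rotations in a sign-restricted SVAR: random orthonormal rotations are kept only if the impact signs hold and the identified shock has the required sign on a given historical date (the paper also restricts historical decompositions, omitted here).
      import numpy as np
      rng = np.random.default_rng(15)

      T = 200
      u = rng.normal(size=(T, 2)) @ np.array([[1.0, 0.0], [0.5, 1.0]]).T
      sigma = np.cov(u.T)               # reduced-form residual covariance
      P = np.linalg.cholesky(sigma)
      date = 57                         # a historical episode with a known shock sign

      kept = 0
      for _ in range(5000):
          q, r = np.linalg.qr(rng.normal(size=(2, 2)))
          q = q @ np.diag(np.sign(np.diag(r)))   # uniform random rotation
          A0 = P @ q                             # candidate impact matrix
          eps = u @ np.linalg.inv(A0).T          # implied structural shocks
          signs_ok = (A0[0, 0] > 0 and A0[1, 0] > 0
                      and A0[1, 1] > 0 and A0[0, 1] < 0)
          narrative_ok = eps[date, 0] > 0        # shock 1 positive at the episode
          kept += signs_ok and narrative_ok
      print(f"{kept} of 5000 rotations satisfy sign + narrative restrictions")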
  16. By: Acharya, Avidit (Stanford University); Blackwell, Matthew (Harvard University); Sen, Maya (Harvard University)
    Abstract: Researchers seeking to establish causal relationships frequently control for variables on the purported causal pathway, checking whether the original treatment effect then disappears. Unfortunately, this common approach may lead to biased estimates. In this paper, we show that the bias can be avoided by focusing on a quantity of interest called the controlled direct effect. Under certain conditions, the controlled direct effect enables researchers to rule out competing explanations, an important objective for political scientists. To estimate the controlled direct effect without bias, we describe an easy-to-implement estimation strategy from the biostatistics literature. We extend this approach by deriving a consistent variance estimator and demonstrating how to conduct a sensitivity analysis. Two examples, one on ethnic fractionalization's effect on civil war and one on the impact of historical plough use on contemporary female political participation, illustrate the framework and methodology.
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:ecl:harjfk:15-064&r=ecm
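    Illustration: a minimal Python sketch of two-stage sequential g-estimation, one easy-to-implement biostatistics strategy for controlled direct effects, under an assumed linear model (a hypothetical example, not the authors' data or code): first estimate the mediator's effect on the outcome, then "demediate" the outcome and regress it on treatment and pre-treatment covariates.
      import numpy as np
      rng = np.random.default_rng(16)

      n = 5000
      x = rng.normal(size=n)                       # pre-treatment covariate
      a = rng.binomial(1, 0.5, size=n)             # treatment
      m = 0.8 * a + 0.3 * x + rng.normal(size=n)   # mediator, affected by treatment
      y = 1.0 * a + 0.7 * m + 0.5 * x + rng.normal(size=n)  # true CDE of a is 1.0

      def ols(X, yv):
          X = np.column_stack([np.ones(len(yv)), X])
          return np.linalg.lstsq(X, yv, rcond=None)[0]

      # stage 1: effect of the mediator on the outcome, holding treatment
      # and covariates fixed
      g = ols(np.column_stack([m, a, x]), y)[1]    # coefficient on m

      # stage 2: remove the mediator's contribution, then regress the
      # demediated outcome on treatment and pre-treatment covariates
      cde = ols(np.column_stack([a, x]), y - g * m)[1]
      print(f"controlled direct effect ~ {cde:.2f} (truth 1.0)")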
  17. By: Broockman, David E. (Stanford University); Kalla, Joshua L. (University of CA, Berkeley); Sekhon, Jasjeet S. (University of CA, Berkeley)
    Abstract: Social scientists increasingly wish to use field experiments to test theories. However, common experimental designs for studying the effects of treatments delivered in the field on individuals' attitudes are infeasible for most researchers and vulnerable to bias. We detail an alternative field experiment design exploiting a placebo control and multiple waves of panel surveys delivered online with multiple measures of outcomes. This design can make persuasion field experiments feasible by decreasing costs (often by nearly two orders of magnitude), allows experiments to test additional theories, and facilitates the evaluation of design assumptions. We then report an original application study, a field experiment implementing the design to evaluate a persuasive canvass targeting abortion attitudes. This study estimated a precise zero, suggesting the design can successfully evade social desirability bias. We conclude by discussing potential limitations and extensions.
    Date: 2016–04
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:3402&r=ecm

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.