nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒05‒18
twenty-one papers chosen by
Sune Karlsson
Örebro universitet

  1. Distributional Robustness of K-class Estimators and the PULSE By Martin Emil Jakobsen; Jonas Peters
  2. Fractional trends in unobserved components models By Tobias Hartl; Rolf Tschernig; Enzo Weber
  3. Structural Regularization By Jiaming Mao; Zhesheng Zheng
  4. Maximum Likelihood Estimation of Stochastic Frontier Models with Endogeneity By Samuele Centorrino; María Pérez-Urdiales
  5. Dynamic stochastic general equilibrium inference using a score-driven approach By Licht, Adrian; Escribano Saez, Alvaro; Blazsek, Szabolcs Istvan
  6. Inference with Many Weak Instruments By Anna Mikusheva; Liyang Sun
  7. Endogenous Time Variation in Vector Autoregressions By Danilo Leiva-Leon; Luis Uzeda
  8. Nearly Efficient Likelihood Ratio Tests of a Unit Root in an Autoregressive Model of Arbitrary Order By Samuel Brien; Michael Jansson; Morten Ørregaard Nielsen
  9. Forecasting a Nonstationary Time Series with a Mixture of Stationary and Nonstationary Factors as Predictors By Sium Bodha Hannadige; Jiti Gao; Mervyn J. Silvapulle; Param Silvapulle
  10. Nonlinear common trends for the global crude oil market: Markov-switching score-driven models of the multivariate t-distribution By Licht, Adrian; Escribano Saez, Alvaro; Blazsek, Szabolcs Istvan
  11. Dynamic Shrinkage Priors for Large Time-varying Parameter Regressions using Scalable Markov Chain Monte Carlo Methods By Niko Hauzenberger; Florian Huber; Gary Koop
  12. Sensitivity to Calibrated Parameters By Thomas H. Joergensen
  13. Incentive-Compatible Critical Values By Adam McCloskey; Pascal Michaillat
  14. Bayesian Clustered Coefficients Regression with Auxiliary Covariates Assistant Random Effects By Guanyu Hu; Yishu Xue; Zhihua Ma
  15. Smile: A Simple Diagnostic for Selection on Observables By Slichter, David
  16. Solving non-linear dynamic models (more) efficiently: application to a simple monetary policy model By Shalva Mkhatrishvili; Douglas Laxton; Davit Tutberidze; Tamta Sopromadze; Saba Metreveli; Lasha Arevadze; Tamar Mdivnishvili; Giorgi Tsutskiridze
  17. How Reliable are Bootstrap-based Heteroskedasticity Robust Tests? By Benedikt M. Pötscher; David Preinerstorfer
  18. An introduction to time-varying lag autoregression By Franses, Ph.H.B.F.
  19. Causal Inference on Networks under Continuous Treatment Interference By Davide Del Prete; Laura Forastiere; Valerio Leone Sciabolazza
  20. Identification and Inference of Network Formation Games with Misclassified Links By Candelaria, Luis E.; Ura, Takuya
  21. Direct versus iterated multi-period Value at Risk By Ruiz Ortega, Esther; Nieto Delfin, Maria Rosa

  1. By: Martin Emil Jakobsen; Jonas Peters
    Abstract: In causal settings, such as instrumental variable settings, it is well known that estimators based on ordinary least squares (OLS) can yield biased and inconsistent estimates of the causal parameters. This is partially overcome by two-stage least squares (TSLS) estimators. These are, under weak assumptions, consistent but do not have desirable finite sample properties: in many models, for example, they do not have finite moments. K-class estimators can be seen as a non-linear interpolation between OLS and TSLS and are known to have improved finite sample properties. Recently, in causal discovery, invariance properties such as the moment criterion which TSLS estimators leverage have been exploited for causal structure learning: e.g., in cases where the causal parameter is not identifiable, some structure of the non-zero components may be identified, and coverage guarantees are available. Subsequently, anchor regression has been proposed to trade off invariance and predictability. The resulting estimator is shown to have optimal predictive performance under bounded shift interventions. In this paper, we show that the concepts of anchor regression and K-class estimators are closely related. Establishing this connection comes with two benefits: (1) it enables us to prove robustness properties for existing K-class estimators when considering distributional shifts, and (2) we propose a novel estimator in instrumental variable settings by minimizing the mean squared prediction error subject to the constraint that the estimator lies in an asymptotically valid confidence region of the causal parameter. We call this estimator PULSE (p-uncorrelated least squares estimator) and show that it can be computed efficiently, even though the underlying optimization problem is non-convex. We further prove that it is consistent.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.03353&r=all
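    As context for the abstract, here is a minimal numpy sketch of the K-class estimator it builds on (kappa = 0 gives OLS, kappa = 1 gives TSLS); the simulated design and all names are illustrative, not from the paper:

        import numpy as np

        def k_class(y, X, Z, kappa):
            """K-class estimator: kappa = 0 gives OLS, kappa = 1 gives TSLS."""
            n = len(y)
            M = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)  # annihilator of Z
            W = np.eye(n) - kappa * M
            return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

        # illustrative just-identified IV design with an endogenous regressor
        rng = np.random.default_rng(0)
        Z = rng.standard_normal((500, 1))
        u = rng.standard_normal(500)
        X = Z + (0.8 * u + 0.6 * rng.standard_normal(500))[:, None]
        y = X[:, 0] + u
        b_ols, b_tsls = k_class(y, X, Z, 0.0), k_class(y, X, Z, 1.0)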
  2. By: Tobias Hartl; Rolf Tschernig; Enzo Weber
    Abstract: We develop a generalization of unobserved components models that allows for a wide range of long-run dynamics by modelling the permanent component as a fractionally integrated process. The model does not require stationarity and can be cast in state space form. In a multivariate setup, fractional trends may yield a cointegrated system. We derive the Kalman filter estimator for the common fractionally integrated component and establish consistency and asymptotic (mixed) normality of the maximum likelihood estimator. We apply the model to extract a common long-run component of three US inflation measures, where we show that the $I(1)$ assumption is likely to be violated for the common trend.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.03988&r=all
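    For readers unfamiliar with fractional integration, a short sketch of the standard binomial-expansion weights of (1 - L)^d and a truncated I(d) simulation; this is generic background, not the paper's state space estimator:

        import numpy as np

        def frac_diff_weights(d, n):
            """First n coefficients of the binomial expansion of (1 - L)^d."""
            w = np.empty(n)
            w[0] = 1.0
            for j in range(1, n):
                w[j] = w[j - 1] * (j - 1 - d) / j
            return w

        def simulate_fractional_trend(d, n, rng):
            """Truncated simulation of an I(d) series x_t = (1 - L)^{-d} eps_t."""
            psi = frac_diff_weights(-d, n)      # MA weights of (1 - L)^{-d}
            eps = rng.standard_normal(n)
            return np.convolve(psi, eps)[:n]

        # e.g. x = simulate_fractional_trend(0.75, 1000, np.random.default_rng(0))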
  3. By: Jiaming Mao; Zhesheng Zheng
    Abstract: We propose a novel method for modeling data by using structural models based on economic theory as regularizers for statistical models. We show that even if a structural model is misspecified, as long as it is informative about the data-generating mechanism, our method can outperform both the (misspecified) structural model and statistical models without structural regularization. Our method permits a Bayesian interpretation of theory as prior knowledge and can be used both for statistical prediction and for causal inference. It contributes to transfer learning by showing how incorporating theory into statistical modeling can significantly improve out-of-domain predictions, and it offers a way to synthesize reduced-form and structural approaches to causal effect estimation. Simulation experiments demonstrate the potential of our method in various settings, including first-price auctions, dynamic models of entry and exit, and demand estimation with instrumental variables. Our method has potential applications not only in economics but also in other (social) scientific disciplines whose theoretical models offer important insight but are subject to significant misspecification concerns.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.12601&r=all
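    One stylized instance of the idea, assuming a linear statistical model shrunk toward the fitted values of a structural model; the penalty form and names are this note's simplification, not the paper's general method:

        import numpy as np

        def structurally_regularized_ls(X, y, g_hat, lam):
            """Least squares shrunk toward a structural model's fitted values
            g_hat: argmin_b ||y - X b||^2 + lam * ||g_hat - X b||^2, which
            equals OLS on the blended target (y + lam * g_hat) / (1 + lam).
            lam -> 0 recovers the unregularized fit; lam -> inf fits g_hat."""
            target = (y + lam * g_hat) / (1.0 + lam)
            return np.linalg.lstsq(X, target, rcond=None)[0]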
  4. By: Samuele Centorrino; María Pérez-Urdiales
    Abstract: We provide a closed-form maximum likelihood estimation of stochastic frontier models with endogeneity. We consider cross-section data when both components of the composite error term may be correlated with inputs and environmental variables. Under appropriate restrictions, we show that the conditional distribution of the stochastic inefficiency term is a folded normal distribution, which reduces to the half-normal distribution when both inputs and environmental variables are independent of the stochastic inefficiency term. Our framework is thus a natural generalization of the normal half-normal stochastic frontier model with endogeneity. Among other things, this allows us to provide a generalization of the Battese-Coelli estimator of technical efficiency. Our maximum likelihood estimator is computationally fast and easy to implement. We showcase its finite sample properties in Monte Carlo simulations and in an empirical application to farmers in Nepal.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.12369&r=all
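    The exogenous normal half-normal benchmark that the paper generalizes fits in a few lines; this sketch is the textbook Aigner-Lovell-Schmidt likelihood, not the paper's endogenous estimator:

        import numpy as np
        from scipy.stats import norm

        def neg_loglik_half_normal_sfa(params, y, X):
            """Normal half-normal production frontier: y = X b + v - u,
            v ~ N(0, s_v^2), u ~ |N(0, s_u^2)|; sigma^2 = s_u^2 + s_v^2 and
            lam = s_u / s_v. Suitable for scipy.optimize.minimize."""
            k = X.shape[1]
            beta = params[:k]
            sigma, lam = np.exp(params[k]), np.exp(params[k + 1])  # force > 0
            eps = y - X @ beta                  # composite error v - u
            ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))
            return -ll.sum()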
  5. By: Licht, Adrian; Escribano Saez, Alvaro; Blazsek, Szabolcs Istvan
    Abstract: In this paper, the benefits of statistical inference of score-driven state-space models are incorporated into the inference of dynamic stochastic general equilibrium (DSGE) models. We focus on DSGE models for which a Gaussian ABCD representation exists. Precision of statistical estimation is improved by using a score-driven multivariate t-distribution for the errors. First, the updating term of the transition equation of the ABCD representation is replaced by the conditional score of the log-likelihood (LL) with respect to location. Second, the time-constant scale parameters of the error terms in the measurement equation of the ABCD representation are replaced by a dynamic parameter that is updated by the conditional score of the LL with respect to scale. Impulse response functions (IRFs) and conditions of the maximum likelihood (ML) estimator are presented. In the empirical application, a benchmark DSGE model is estimated on real data for US economic output, inflation, and the interest rate for the period 1954-2019. The score-driven ABCD representation improves the estimation precision of the Gaussian ABCD representation. The score-driven ABCD representation with dynamic scale provides the best description of the time series data, identifying a structural change in the sample period and providing the most precise IRF estimates.
    Keywords: Beta-t-EGARCH; Generalized Autoregressive Score (GAS); Dynamic Conditional Score (DCS); Dynamic Stochastic General Equilibrium (DSGE)
    Date: 2020–05–07
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:30347&r=all
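    The mechanics of a score-driven (DCS/GAS) update are easiest to see in the univariate case; a sketch of a Student-t location filter, a much-simplified cousin of the multivariate filter used in the paper:

        import numpy as np

        def t_score_location_filter(y, omega, beta, alpha, nu, sigma):
            """Score-driven location filter under a Student-t likelihood:
            mu_{t+1} = omega + beta * mu_t + alpha * s_t, where s_t is the
            conditional score of the t log-likelihood w.r.t. location."""
            mu = np.empty(len(y))
            mu[0] = omega / (1.0 - beta)        # unconditional mean (|beta| < 1)
            for t in range(len(y) - 1):
                eps = y[t] - mu[t]
                score = (nu + 1.0) * eps / (nu * sigma**2 + eps**2)
                mu[t + 1] = omega + beta * mu[t] + alpha * score
            return mu

    Note how the t-score dampens the update for large residuals, which is what makes such filters robust to outliers.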
  6. By: Anna Mikusheva; Liyang Sun
    Abstract: We develop a concept of weak identification in linear IV models in which the number of instruments can grow at the same rate or slower than the sample size. We propose a jackknifed version of the classical weak identification-robust Anderson-Rubin (AR) test statistic. Large-sample inference based on the jackknifed AR is valid under heteroscedasticity and weak identification. The feasible version of this statistic uses a novel variance estimator. The test has uniformly correct size and good power properties. We also develop a pre-test for weak identification that is related to the size property of a Wald test based on the Jackknife Instrumental Variable Estimator (JIVE). This new pre-test is valid under heteroscedasticity and with many instruments.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.12445&r=all
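    The jackknifing removes the own-observation (i = j) terms from the usual AR quadratic form; a sketch of that numerator (the paper's feasible statistic additionally divides by its novel variance estimator, omitted here):

        import numpy as np

        def jackknife_ar_numerator(y, X, Z, beta0):
            """Own-observation-excluded quadratic form sum_{i != j} P_ij e_i e_j,
            with e = y - X beta0 and P the projection onto the instruments Z."""
            e = y - X @ beta0
            P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
            P_off = P - np.diag(np.diag(P))     # drop the i = j terms
            return e @ P_off @ e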
  7. By: Danilo Leiva-Leon; Luis Uzeda
    Abstract: We introduce a new class of time-varying parameter vector autoregressions (TVP-VARs) in which the identified structural innovations are allowed to influence, both contemporaneously and with a lag, the dynamics of the intercept and autoregressive coefficients in these models. An estimation algorithm and a parametrization conducive to model comparison are also provided. We apply our framework to the US economy. Scenario analysis suggests that the effects of monetary policy on economic activity are larger and more persistent in the proposed models than in an otherwise standard TVP-VAR. Our results also indicate that cost-push shocks play an important role in understanding historical changes in inflation persistence.
    Keywords: Econometric and statistical methods; Inflation and prices; Transmission of monetary policy
    JEL: C32 E52
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:20-16&r=all
  8. By: Samuel Brien; Michael Jansson (UC Berkeley and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES)
    Abstract: We study large-sample properties of likelihood ratio tests of the unit root hypothesis in an autoregressive model of arbitrary, finite order. Earlier research on this testing problem has developed likelihood ratio tests in the autoregressive model of order one, but resorted to a plug-in approach when dealing with higher-order models. In contrast, we consider the full model and derive the relevant large-sample properties of likelihood ratio tests under a local-to-unity asymptotic framework. As in the simpler model, we show that the full likelihood ratio tests are nearly efficient, in the sense that their asymptotic local power functions are virtually indistinguishable from the Gaussian power envelopes.
    Keywords: Efficiency, Likelihood ratio test, Nuisance parameters, Unit root hypothesis
    JEL: C12 C22
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1429&r=all
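    A Gaussian quasi-likelihood ratio statistic in the ADF parametrization conveys the object under study; this is the textbook construction, without constant or trend, and not the paper's exact test:

        import numpy as np

        def lr_unit_root(y, p):
            """Quasi-LR for a unit root in an AR(p+1), p >= 1, in ADF form:
            Dy_t = rho * y_{t-1} + sum_{j=1}^{p} c_j * Dy_{t-j} + e_t,
            with H0: rho = 0."""
            dy = np.diff(y)
            Y = dy[p:]
            T = len(Y)
            lags = np.column_stack([dy[p - j:len(dy) - j]
                                    for j in range(1, p + 1)])
            X1 = np.column_stack([y[p:-1], lags])   # unrestricted
            X0 = lags                               # restricted (rho = 0)
            rss = lambda X: Y @ Y - Y @ X @ np.linalg.solve(X.T @ X, X.T @ Y)
            return T * (np.log(rss(X0)) - np.log(rss(X1)))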
  9. By: Sium Bodha Hannadige; Jiti Gao; Mervyn J. Silvapulle; Param Silvapulle
    Abstract: This paper develops a method for forecasting a nonstationary time series, such as GDP, using a set of high-dimensional panel data as predictors. To this end, we use what is known as a factor augmented regression [FAR] model that contains a small number of estimated factors as predictors; the factors are estimated using time series data on a large number of potential predictors. The validity of this method for forecasting has been established when all the variables are stationary and also when they are all nonstationary, but not when they consist of a mixture of stationary and nonstationary ones. This paper fills this gap. More specifically, we develop a method for constructing an asymptotically valid prediction interval using the FAR model when the predictors include a mixture of stationary and nonstationary factors; we refer to this as the mixture-FAR model. This topic is important because time series data on a large number of economic variables are likely to contain a mixture of stationary and nonstationary variables. In a simulation study, we observed that the mixture-FAR performed better than its competitor that requires all the variables to be nonstationary. As an empirical illustration, we evaluated the aforementioned methods for forecasting the nonstationary variables, GDP and Industrial Production [IP], using the quarterly panel data on US macroeconomic variables known as FRED-QD. We observed that the mixture-FAR model proposed in this paper performed better than its aforementioned competitors.
    Keywords: Bootstrap, generated factors, panel data, prediction interval
    JEL: C22 C33 C38 C53
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2020-19&r=all
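    The generic factor-augmented regression forecast is simple to sketch; this version ignores the stationary/nonstationary split and the interval construction that are the paper's contribution:

        import numpy as np

        def pca_factors(panel, r):
            """First r principal-component factors from a (T x N) panel."""
            Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
            U, _, _ = np.linalg.svd(Z, full_matrices=False)
            return np.sqrt(len(panel)) * U[:, :r]

        def far_one_step(y, factors):
            """One-step-ahead factor-augmented regression forecast of y."""
            X = np.column_stack([np.ones(len(y) - 1), y[:-1], factors[:-1]])
            b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
            return np.concatenate(([1.0], [y[-1]], factors[-1])) @ b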
  10. By: Licht, Adrian; Escribano Saez, Alvaro; Blazsek, Szabolcs Istvan
    Abstract: Relevant works from the literature on crude oil market use structural vector autoregressive(SVAR) models with several lags to approximate the true model for the variables change in globalcrude oil production, global real economic activity and log real crude oil prices. Those variables involveseasonality, co-integration, structural changes, and outliers. We introduce nonlinear Markov-switchingscore-driven models with common trends of the multivariate t-distribution (MS-Seasonal-t-QVAR), forwhich filters are optimal according to the Kullback-Leibler divergence. We find that MS-Seasonal-t-QVAR provides a better approximation of the true data generating process and more precise short-runand long-run impulse responses than SVAR.
    Keywords: Markov Regime-Switching Models; Outliers And Structural Changes; Nonlinear Co-Integration; Score-Driven Models; Global Crude Oil Market
    JEL: C52 C51 C32
    Date: 2020–05–07
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:30346&r=all
  11. By: Niko Hauzenberger; Florian Huber; Gary Koop
    Abstract: Time-varying parameter (TVP) regression models can involve a huge number of coefficients. Careful prior elicitation is required to yield sensible posterior and predictive inferences. In addition, the computational demands of Markov Chain Monte Carlo (MCMC) methods mean their use is limited to the case where the number of predictors is not too large. In light of these two concerns, this paper proposes a new dynamic shrinkage prior which reflects the empirical regularity that TVPs are typically sparse (i.e. time variation may occur only episodically and only for some of the coefficients). A scalable MCMC algorithm is developed which is capable of handling very high dimensional TVP regressions or TVP Vector Autoregressions. In an exercise using artificial data we demonstrate the accuracy and computational efficiency of our methods. In an application involving the term structure of interest rates in the eurozone, we find our dynamic shrinkage prior to effectively pick out small amounts of parameter change and our methods to forecast well.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.03906&r=all
  12. By: Thomas H. Joergensen (CEBI, Department of Economics, University of Copenhagen)
    Abstract: Across many fields in economics, a common approach to estimation of economic models is to calibrate a subset of model parameters and keep them fixed when estimating the remaining parameters. Calibrated parameters likely affect conclusions based on the model, but estimation time often makes a systematic investigation of the sensitivity to calibrated parameters infeasible. I propose a simple and computationally low-cost measure of the sensitivity of parameters and other objects of interest to the calibrated parameters. In the main empirical application, I revisit the analysis of life-cycle savings motives in Gourinchas and Parker (2002) and show that some estimates are sensitive to calibrations.
    Keywords: Sensitivity, Transparency, Structural Estimation, Calibration, Savings Motives
    JEL: C10 C52 C60
    Date: 2020–04–27
    URL: http://d.repec.org/n?u=RePEc:kud:kucebi:2014&r=all
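    In the spirit of the proposed measure, a local sensitivity of a GMM-type estimate to calibrated parameters can be read off Jacobians at the estimate, with no re-estimation; this numerical sketch follows the generic implicit-function formula and is not the paper's exact derivation:

        import numpy as np

        def calibration_sensitivity(g, theta_hat, gamma, W, h=1e-5):
            """Local sensitivity S = -(G'WG)^{-1} G'W D of theta_hat to the
            calibrated gamma, where G = dg/dtheta and D = dg/dgamma are
            numerical Jacobians of the moment function g at (theta_hat, gamma)."""
            def num_jac(f, x):
                x = np.asarray(x, dtype=float)
                f0 = f(x)
                cols = []
                for i in range(len(x)):
                    xp = x.copy()
                    xp[i] += h
                    cols.append((f(xp) - f0) / h)
                return np.column_stack(cols)

            G = num_jac(lambda t: np.asarray(g(t, gamma)), theta_hat)
            D = num_jac(lambda c: np.asarray(g(theta_hat, c)), gamma)
            return -np.linalg.solve(G.T @ W @ G, G.T @ W @ D)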
  13. By: Adam McCloskey; Pascal Michaillat
    Abstract: Statistical hypothesis tests are a cornerstone of scientific research. The tests are informative when their size is properly controlled, so the frequency of rejecting true null hypotheses (type I error) stays below a prespecified nominal level. Publication bias exaggerates test sizes, however. Since scientists can typically only publish results that reject the null hypothesis, they have an incentive to continue conducting studies until attaining rejection. Such $p$-hacking takes many forms, from collecting additional data to examining multiple regression specifications, all in search of statistical significance. The process inflates test sizes above their nominal levels because the critical values used to determine rejection assume that test statistics are constructed from a single study, abstracting from $p$-hacking. This paper addresses the problem by constructing critical values that are compatible with scientists' behavior given their incentives. We assume that researchers conduct studies until finding a test statistic that exceeds the critical value, or until the benefit from conducting an extra study falls below the cost. We then solve for the incentive-compatible critical value (ICCV). When the ICCV is used to determine rejection, readers can be confident that size is controlled at the desired significance level and that the researcher's response to the incentives delineated by the critical value is accounted for. Since they allow researchers to search for significance across multiple studies, ICCVs are larger than classical critical values. Yet, for a broad range of researcher behaviors and beliefs, ICCVs lie in a fairly narrow range.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.04141&r=all
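    A stylized fixed-horizon version makes the logic concrete: if a researcher runs up to n_max independent two-sided z-tests and reports the first rejection, the critical value solving 1 - (1 - p(c))^n_max = alpha controls size. The paper's ICCV additionally models costs and beliefs; this sketch does not:

        from scipy.stats import norm
        from scipy.optimize import brentq

        def iccv_fixed_n(alpha, n_max):
            """Critical value c with 1 - (1 - 2 * Phi(-c))^n_max = alpha."""
            size_gap = lambda c: 1.0 - (1.0 - 2.0 * norm.sf(c)) ** n_max - alpha
            return brentq(size_gap, 0.1, 10.0)

        # iccv_fixed_n(0.05, 1) ~ 1.96, rising to about 2.57 for n_max = 5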
  14. By: Guanyu Hu; Yishu Xue; Zhihua Ma
    Abstract: In regional economics research, a problem of interest is to detect similarities between regions and estimate their shared coefficients in economic models. In this article, we propose a mixture of finite mixtures (MFM) clustered regression model with auxiliary covariates that account for similarities in demographic or economic characteristics over a spatial domain. Our Bayesian construction provides inference both for the number of clusters and for the clustering configuration, together with estimation of the parameters for each cluster. The empirical performance of the proposed model is illustrated through simulation experiments and further demonstrated in a study of influential factors for monthly housing cost in Georgia.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.12022&r=all
  15. By: Slichter, David
    Abstract: This paper develops a simple diagnostic for the selection on observables assumption in the case of a binary treatment variable. I show that, under common assumptions, when selection on observables does not hold, designs based on selection on observables will estimate treatment effects approaching infinity or negative infinity among observations with propensity scores close to 0 or 1. Researchers can check for violations of selection on observables either informally by looking for a "smile" shape in a binned scatterplot, or with a simple formal test. When selection on observables fails, the researcher can detect the sign of the resulting bias.
    Keywords: unconfoundedness, diagnostic test
    JEL: C21 C29
    Date: 2020–04–25
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:99921&r=all
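    A crude version of the diagnostic: compute a per-bin treated-minus-control mean difference across propensity-score bins and look for blow-ups near 0 and 1 (the binning and names are this note's choices, not the paper's formal test):

        import numpy as np

        def smile_diagnostic(y, d, pscore, bins=20):
            """Binned treated-minus-control means by propensity score; effects
            diverging near pscore 0 or 1 suggest selection on observables fails.
            y: outcomes, d: 0/1 treatment, pscore: estimated propensity scores."""
            edges = np.linspace(0.0, 1.0, bins + 1)
            mids, effects = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                m = (pscore >= lo) & (pscore < hi)
                if 0 < d[m].sum() < m.sum():    # need both groups in the bin
                    effects.append(y[m][d[m] == 1].mean() - y[m][d[m] == 0].mean())
                    mids.append(0.5 * (lo + hi))
            return np.array(mids), np.array(effects)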
  16. By: Shalva Mkhatrishvili (Macroeconomic Research Division, National Bank of Georgia); Douglas Laxton (NOVA School of Business and Economics, Saddle Point Research, The Better Policy Project); Davit Tutberidze (Macroeconomic Research Division, National Bank of Georgia); Tamta Sopromadze (Macroeconomic Research Division, National Bank of Georgia); Saba Metreveli (Macroeconomic Research Division, National Bank of Georgia); Lasha Arevadze (Macroeconomic Research Division, National Bank of Georgia); Tamar Mdivnishvili (Macroeconomic Research Division, National Bank of Georgia); Giorgi Tsutskiridze (Macroeconomic Research Division, National Bank of Georgia)
    Abstract: There has been an increased acceptance of non-linear linkages being the major driver of the most pronounced phases of business and financial cycles. However, modelling these non-linear phenomena has been a challenge, since existing solution methods are either efficient but unable to accurately capture non-linear dynamics (e.g. linear methods), or accurate but quite resource-intensive (e.g. the stacked system or the stochastic Extended Path). This paper proposes two new solution approaches that aim to be accurate enough and less costly. Moreover, one of those methods lets us perform Kalman filtering on non-linear models in a non-linear way, which is also important for making this kind of model more policy-relevant in general. Impulse responses, simulations and Kalman filtering exercises show the advantages of the new approaches when applied to a simple, but strongly non-linear, monetary policy model.
    Keywords: Non-linear dynamic models, Solution methods, Monetary policy
    JEL: C60 C61 C63 E17
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:aez:wpaper:01/2019&r=all
  17. By: Benedikt M. Pötscher; David Preinerstorfer
    Abstract: We develop theoretical finite-sample results concerning the size of wild bootstrap-based heteroskedasticity robust tests in linear regression models. In particular, these results provide an efficient diagnostic check, which can be used to weed out tests that are unreliable for a given testing problem in the sense that they overreject substantially. This allows us to assess the reliability of a large variety of wild bootstrap-based tests in an extensive numerical study.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.04089&r=all
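    For readers who want the baseline object of study, a sketch of a null-imposed Rademacher wild bootstrap for an HC0-type robust t-test (one common variant among the many the paper evaluates):

        import numpy as np

        def wild_bootstrap_pvalue(y, X, j, B, rng):
            """Null-imposed Rademacher wild bootstrap p-value for H0: beta_j = 0,
            using an HC0 heteroskedasticity robust t-statistic."""
            XtX_inv = np.linalg.inv(X.T @ X)

            def hc0_t(y_):
                b = XtX_inv @ X.T @ y_
                u = y_ - X @ b
                V = XtX_inv @ (X.T * u**2) @ X @ XtX_inv   # HC0 sandwich
                return b[j] / np.sqrt(V[j, j])

            t_obs = hc0_t(y)
            X0 = np.delete(X, j, axis=1)                   # impose the null
            b0 = np.linalg.lstsq(X0, y, rcond=None)[0]
            fit0, u0 = X0 @ b0, y - X0 @ b0
            t_star = np.array([hc0_t(fit0 + u0 * rng.choice([-1.0, 1.0], len(y)))
                               for _ in range(B)])
            return np.mean(np.abs(t_star) >= np.abs(t_obs))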
  18. By: Franses, Ph.H.B.F.
    Abstract: This paper introduces a new autoregressive model, with the specific feature that the lag structure can vary over time. More precisely, and to keep matters simple, the autoregressive model sometimes has lag 1 and sometimes lag 2. Representation, autocorrelation, specification, inference, and the creation of forecasts are presented. A detailed illustration for annual inflation rates for eight countries in Africa shows the empirical relevance of the new model. Various potential extensions are discussed.
    Keywords: Autoregression, Time-varying lags, Forecasting
    JEL: C22 C53
    Date: 2020–04–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:126706&r=all
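    The model's defining feature is easy to simulate; here the active lag is drawn i.i.d. purely for illustration, whereas the paper treats the lag process more carefully:

        import numpy as np

        def simulate_tv_lag_ar(n, phi, p_lag1, rng):
            """y_t = phi * y_{t - l_t} + e_t with the active lag l_t in {1, 2};
            l_t is i.i.d. Bernoulli(p_lag1) in this illustrative sketch."""
            y = np.zeros(n)
            y[:2] = rng.standard_normal(2)
            for t in range(2, n):
                lag = 1 if rng.random() < p_lag1 else 2
                y[t] = phi * y[t - lag] + rng.standard_normal()
            return y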
  19. By: Davide Del Prete; Laura Forastiere; Valerio Leone Sciabolazza
    Abstract: This paper presents a methodology to draw causal inference in a non-experimental setting subject to network interference. Specifically, we develop a generalized propensity score-based estimator that allows us to estimate both direct and spillover effects of a continuous treatment, which spreads through weighted and directed edges of a network. To showcase this methodology, we investigate whether and how spillover effects shape the optimal level of policy interventions in agricultural markets. Our results show that, in this context, neglecting interference may lead to a downward bias when assessing policy effectiveness.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.13459&r=all
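    As background, the generalized propensity score for a continuous treatment can be sketched in the classical no-interference case (a Hirano-Imbens-style dose-response; the paper's estimator extends this logic to spillovers on a network, which this sketch does not cover):

        import numpy as np
        from scipy.stats import norm

        def gps_dose_response(y, t, X, t_grid):
            """GPS adjustment for a continuous treatment, no interference:
            model t | X as normal, regress y on (1, t, t^2, gps, gps^2, t*gps),
            then average predictions over the sample at each dose in t_grid."""
            # stage 1: treatment model t = X g + e, e ~ N(0, s^2)
            g = np.linalg.lstsq(X, t, rcond=None)[0]
            s = (t - X @ g).std()
            gps = norm.pdf(t, loc=X @ g, scale=s)
            # stage 2: flexible outcome model in (t, gps)
            D = np.column_stack([np.ones_like(t), t, t**2, gps, gps**2, t * gps])
            b = np.linalg.lstsq(D, y, rcond=None)[0]
            # stage 3: average dose-response over the empirical X distribution
            out = []
            for tv in t_grid:
                r = norm.pdf(tv, loc=X @ g, scale=s)
                Dv = np.column_stack([np.ones_like(r), np.full_like(r, tv),
                                      np.full_like(r, tv**2), r, r**2, tv * r])
                out.append((Dv @ b).mean())
            return np.array(out)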
  20. By: Candelaria, Luis E. (University of Warwick); Ura, Takuya (University of California, Davis)
    Abstract: This paper considers a network formation model when links are potentially measured with error. We focus on a game-theoretical model of strategic network formation with incomplete information, in which the linking decisions depend on agents’ exogenous attributes and endogenous network characteristics. In the presence of link misclassification, we derive moment conditions that characterize the identified set for the preference parameters associated with homophily and network externalities. Based on the moment equality conditions, we provide an inference method that is asymptotically valid when a single network of many agents is observed. Finally, we apply our proposed method to study trust networks in rural villages in southern India.
    Keywords: Misclassification; Network formation models; Strategic interactions; Incomplete information
    JEL: C13 C31
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1258&r=all
  21. By: Ruiz Ortega, Esther; Nieto Delfin, Maria Rosa
    Abstract: Although the Basel Accords require financial institutions to report daily predictions of Value at Risk (VaR) computed using ten-day returns, a vast part of the literature deals with VaR predictions based on one-day returns. From the practitioner's point of view, some of the conclusions about the best methods to estimate one-period VaR cannot be directly generalized to multi-period VaR. Consequently, in the context of two-step VaR predictors, we use simulated and real data to compare direct and iterated predictions of multi-period VaR based on ten-day returns, assuming that the conditional variances of one-period returns follow a GARCH-type model. We show that multi-period VaR predictions based on iterating an asymmetric GJR model with normal or bootstrapped errors are often preferred when compared with direct methods, which are often biased and inefficient.
    Keywords: Risk; Multi-Step Forecasts; GJR Model; Feasible Historical Simulation
    JEL: C58 C53 C22 G17
    Date: 2020–05–07
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:30349&r=all
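    The iterated approach the paper favors can be sketched directly: simulate the GJR-GARCH recursion forward over the horizon and read the VaR off the simulated ten-day returns (parameter names and Gaussian errors are this note's simplifications):

        import numpy as np

        def iterated_var_gjr(r_last, sig2_last, params, horizon, level, n_sim, rng):
            """Iterated multi-period VaR: simulate a GJR-GARCH(1,1),
            sig2_t = w + (a + g * 1{r_{t-1} < 0}) * r_{t-1}**2 + b * sig2_{t-1},
            and take minus the (1 - level) quantile of the cumulated return."""
            w, a, g, b = params
            totals = np.empty(n_sim)
            for s in range(n_sim):
                r, sig2, total = r_last, sig2_last, 0.0
                for _ in range(horizon):
                    sig2 = w + (a + g * (r < 0.0)) * r**2 + b * sig2
                    r = np.sqrt(sig2) * rng.standard_normal()
                    total += r
                totals[s] = total
            return -np.quantile(totals, 1.0 - level)

        # e.g. a 99% ten-day VaR:
        # iterated_var_gjr(-0.01, 1e-4, (1e-6, 0.05, 0.10, 0.85), 10, 0.99,
        #                  10000, np.random.default_rng(0))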

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.