nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒07‒08
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Adaptive Testing for Cointegration with Nonstationary Volatility By Peter Boswijk; Yang Zu
  2. A robust approach to heteroskedasticity, error serial correlation and slope heterogeneity for large linear panel data models with interactive effects By Guowei Cui; Kazuhiko Hayakawa; Shuichi Nagata; Takashi Yamagata
  3. Flexible Estimation of Heteroskedastic Stochastic Frontier Models via Two-step Iterative Nonlinear Least Squares By Federico Belotti; Giancarlo Ferrara
  4. An automated prior robustness analysis in Bayesian model comparison By Joshua C. C. Chan; Liana Jacobi; Dan Zhu
  5. New Misspecification Tests for Multinomial Logit Models By Fok, D.; Paap, R.
  6. Implications of partial information for econometric modeling of macroeconomic systems By Adrian Pagan; Tim Robinson
  7. Empirically-transformed linear opinion pools By Anthony Garratt; Timo Henckel; Shaun P. Vahey
  8. Time-Varying Cointegration and the Kalman Filter By Burak Alparslan Eroglu; J. Isaac Miller; Taner Yigit
  9. Synthetic Estimation of Dynamic Panel Models When Either N or T or Both Are Not Large: Bias Decomposition in Systematic and Random Components By Carbajal-De-Nova, Carolina; Venegas-Martínez, Francisco
  10. Efficient selection of hyperparameters in large Bayesian VARs using automatic differentiation By Joshua C. C. Chan; Liana Jacobi; Dan Zhu
  11. Second Order Time Dependent Inflation Persistence in the United States: a GARCH-in-Mean Model with Time Varying Coefficients. By Alessandra Canepa,; Menelaos G. Karanasos; Alexandros G. Paraskevopoulos,
  12. Regression with an imputed dependent variable By Crossley, Thomas F.; Levell, Peter; Poupakis, Stavros
  13. Testing for Risk Aversion in First-Price Sealed-Bid Auctions By Federico Zincenko
  14. Entry games for the airline industry By Christian Bontemps; Bezerra Sampaio
  15. Testing for breaks in the cointegrating relationship: On the stability of government bond markets' equilibrium By Rodrigues, Paulo M.M.; Sibbertsen, Philipp; Voges, Michelle
  16. Estimating macroeconomic uncertainty and discord using info-metrics By Kajal Lahiri; Wuwei Wang
  17. On the use of machine learning for causal inference in climate economics By Isabel Hovdahl

  1. By: Peter Boswijk (University of Amsterdam); Yang Zu (University of Nottingham)
    Abstract: This paper generalises Boswijk and Zu (2018)'s adaptive unit root test for time series with nonstationary volatility to a multivariate context. Persistent changes in the innovation variance matrix of a vector autoregressive model lead to size distortions in conventional cointegration tests, which may be resolved using the wild bootstrap, as shown by Cavaliere et al. (2010, 2014). We show that such nonstationary volatility also opens the possibility of constructing tests with higher power, by taking the time-varying volatilities and correlations into account in the formulation of the likelihood function and the resulting likelihood ratio test statistic. We find that under suitable conditions, adaptation with respect to the volatility process is possible, in the sense that nonparametric volatility matrix estimation does not lead to a loss of asymptotic local power relative to the case where the volatilities are observed. The asymptotic null distribution of the test is nonstandard and depends on the volatility process; we show that various bootstrap implementations may be used to conduct asymptotically valid inference. Monte Carlo simulations show that the resulting test has good size properties, and higher power than existing tests. Two empirical examples illustrate the applicability of the tests.
    Keywords: Adaptive estimation, Nonparametric volatility estimation, Wild bootstrap
    JEL: C32 C12
    Date: 2019–06–21
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20190043&r=all
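A minimal sketch of the wild-bootstrap device the abstract refers to, applied to a toy zero-mean test under trending volatility (the data, seed, and statistic are purely illustrative, not the paper's cointegration test):

```python
import numpy as np

def t_stat(u):
    # Absolute t-statistic for a zero-mean null.
    return abs(u.mean()) / (u.std(ddof=1) / np.sqrt(u.size))

def wild_bootstrap_pvalue(y, n_boot=999, seed=0):
    """Wild bootstrap: multiplying each observation by an independent
    Rademacher sign imposes the zero-mean null while preserving the
    (possibly time-varying) variance profile of the data."""
    rng = np.random.default_rng(seed)
    observed = t_stat(y)
    boot = np.array([t_stat(y * rng.choice([-1.0, 1.0], size=y.size))
                     for _ in range(n_boot)])
    return (1 + np.sum(boot >= observed)) / (1 + n_boot)

# Heteroskedastic toy data: volatility trends upward over the sample.
rng = np.random.default_rng(1)
sigma = np.linspace(0.5, 3.0, 200)
y0 = sigma * rng.standard_normal(200)   # null true (mean zero)
y1 = y0 + 2.0                           # null false
p0 = wild_bootstrap_pvalue(y0)
p1 = wild_bootstrap_pvalue(y1)
```

Because the Rademacher multipliers leave each observation's variance untouched, the bootstrap distribution mimics the nonstationary-volatility null without any parametric model for the variance path.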
  2. By: Guowei Cui; Kazuhiko Hayakawa; Shuichi Nagata; Takashi Yamagata
    Abstract: In this paper, we propose an approach for large linear panel data models that is robust to heteroskedasticity, error serial correlation and slope heterogeneity. First, we establish the asymptotic validity of the Wald test based on the widely used panel heteroskedasticity and autocorrelation consistent (HAC) variance estimator of the pooled estimator under random coefficient models. Then, we show that a similar result holds with the proposed bias-corrected principal component-based estimators for models with unobserved interactive effects. Our new theoretical result justifies the use of the same slope estimator and variance estimator for both slope homogeneous and heterogeneous models. This robust approach can significantly reduce model selection uncertainty for applied researchers. In addition, we propose a novel test for correlation and dependence between the random coefficients and the covariates. The test is of great importance, since the widely used estimators and/or their variance estimators can, in general, become inconsistent when the variation of the coefficients depends on the covariates. The finite sample evidence supports the usefulness and reliability of our approach.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:dpr:wpaper:1037r&r=all
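The panel-HAC idea can be illustrated with the generic cluster-robust sandwich estimator, which allows heteroskedasticity and arbitrary serial correlation within each cross-sectional unit; this is a stand-in sketch of that variance estimator class, not the authors' bias-corrected procedure:

```python
import numpy as np

def pooled_ols_cluster(y, X, unit):
    """Pooled OLS with a sandwich variance estimator that is robust to
    heteroskedasticity and arbitrary within-unit serial correlation
    (one standard panel-HAC choice, clustering by unit)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for i in np.unique(unit):
        s = X[unit == i].T @ u[unit == i]   # unit-level score
        meat += np.outer(s, s)
    V = bread @ meat @ bread
    return beta, np.sqrt(np.diag(V))

# Simulated panel: N=50 units, T=10 periods, AR(1) errors within unit.
rng = np.random.default_rng(0)
N, T = 50, 10
unit = np.repeat(np.arange(N), T)
x = rng.standard_normal(N * T)
e = rng.standard_normal(N * T)
for t in range(1, N * T):
    if unit[t] == unit[t - 1]:
        e[t] += 0.5 * e[t - 1]              # serial correlation
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(N * T), x])
beta, se = pooled_ols_cluster(y, X, unit)
```

The Wald test in the abstract then uses `beta` with this robust variance; the paper's contribution is showing when the same pair remains valid under random coefficients and interactive effects.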
  3. By: Federico Belotti (CEIS & DEF University of Rome "Tor Vergata"); Giancarlo Ferrara (SOSE and University of Palermo)
    Abstract: This article illustrates a straightforward and useful method for incorporating exogenous inefficiency effects in the estimation of semiparametric stochastic frontier models. An iterative estimation algorithm based on two-step nonlinear least squares is developed, allowing for any flexible and monotonic specification of the production technology. We investigate the behavior of the proposed procedure through a set of Monte Carlo experiments comparing its finite sample properties with those of available alternatives. The new algorithm performs very well, outperforming the competitors in small samples and in the presence of small signal-to-noise ratios. Two applications to agricultural data illustrate the usefulness of the proposed algorithm, even when it is used as a tool for sensitivity analysis.
    Keywords: Stochastic frontier, Heteroskedasticity, Inefficiency effects, Generalized additive model, Nonlinear least-squares, P-Splines.
    JEL: C14 C51 D24
    Date: 2019–07–03
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:462&r=all
  4. By: Joshua C. C. Chan; Liana Jacobi; Dan Zhu
    Abstract: The marginal likelihood is the gold standard for Bayesian model comparison although it is well-known that the value of marginal likelihood could be sensitive to the choice of prior hyperparameters. Most models require computationally intense simulation-based methods to evaluate the typically high-dimensional integral of the marginal likelihood expression. Hence, despite the recognition that prior sensitivity analysis is important in this context, it is rarely done in practice. In this paper we develop efficient and feasible methods to compute the sensitivities of marginal likelihood, obtained via two common simulation-based methods, with respect to any prior hyperparameter alongside the MCMC estimation algorithm. Our approach builds on Automatic Differentiation (AD), which has only recently been introduced to the more computationally intensive setting of Markov chain Monte Carlo simulation. We illustrate our approach with two empirical applications in the context of widely used multivariate time series models.
    Keywords: automatic differentiation, model comparison, vector autoregression, factor models
    JEL: C11 C53 E37
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-45&r=all
  5. By: Fok, D.; Paap, R.
    Abstract: Misspecification tests for Multinomial Logit [MNL] models are known to have low power or large size distortions. We propose two new misspecification tests. Both exploit the fact that, when the MNL model is true, preferences across binary pairs of alternatives can be described by independent binary logit models. The first test compares Composite Likelihood parameter estimates based on choice pairs with standard Maximum Likelihood estimates using a Hausman (1978) test. The second tests for overidentification in a GMM framework using more pairs than necessary. A Monte Carlo study shows that the GMM test is in general superior with respect to power and has correct size.
    Keywords: Discrete choices, Multinomial Logit, IIA, Hausman test, Composite Likelihood
    JEL: C25 C12 C52
    Date: 2019–06–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:116745&r=all
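The first test's Hausman device reduces to the familiar quadratic form in the contrast between a robust and an efficient estimator; a generic sketch (the estimator names and the numbers in the example are purely illustrative, not output of the paper's models):

```python
import numpy as np

def hausman_statistic(b_robust, b_efficient, V_robust, V_efficient):
    """Hausman (1978) statistic H = d' (V_r - V_e)^+ d, where d is the
    contrast between an estimator that stays consistent under the
    alternative (here: the pairwise composite-likelihood logit) and one
    that is efficient under the null (full MNL maximum likelihood).
    Under the null, H is asymptotically chi-squared with
    rank(V_r - V_e) degrees of freedom."""
    d = np.asarray(b_robust, float) - np.asarray(b_efficient, float)
    dV = np.asarray(V_robust, float) - np.asarray(V_efficient, float)
    H = float(d @ np.linalg.pinv(dV) @ d)
    df = int(np.linalg.matrix_rank(dV))
    return H, df

# Illustrative one-parameter example.
H, df = hausman_statistic([1.2], [1.0], [[0.05]], [[0.01]])
```

The pseudo-inverse handles the common case where the variance contrast is singular, so only the identified directions of the contrast enter the statistic.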
  6. By: Adrian Pagan; Tim Robinson
    Abstract: Representative models of the macroeconomy (RMs), such as DSGE models, frequently contain unobserved variables. A finite-order VAR representation in the observed variables may not exist, and therefore the impulse responses of the RMs and SVAR models may differ. We demonstrate that this divergence often (i) is not substantial; (ii) reflects the omission of stock variables from the VAR; and (iii) can be ameliorated by estimating a latent-variable VECM when the RM features I(1) variables. We show that DSGE models utilize identifying restrictions stemming from common factor dynamics that reflect statistical, not economic, assumptions. We analyze the use of measurement error, and demonstrate that it may have unintended consequences, particularly in models featuring I(1) variables.
    Keywords: SVAR, Partial Information, Identification, Measurement Error, DSGE
    JEL: E37 C51 C52
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-41&r=all
  7. By: Anthony Garratt; Timo Henckel; Shaun P. Vahey
    Abstract: Many studies have found that combining density forecasts improves predictive accuracy for macroeconomic variables. A prevalent approach known as the Linear Opinion Pool (LOP) combines forecast densities from “experts”; see, among others, Stone (1961), Geweke and Amisano (2011), Kascha and Ravazzolo (2011), Ranjan and Gneiting (2010) and Gneiting and Ranjan (2013). Since the LOP approach averages the experts’ probabilistic assessments, the distribution of the combination generally differs from the marginal distributions of the experts. As a result, the LOP combination forecasts sometimes fail to match salient features of the sample data, including asymmetries in risk. In this paper, we propose a computationally convenient transformation for a target macroeconomic variable with an asymmetric marginal distribution. Our methodology involves a Smirnov transform to reshape the LOP combination forecasts using a nonparametric kernel-smoothed empirical cumulative distribution function. We illustrate our methodology with an application examining quarterly real-time forecasts for US inflation based on multiple output gap measures over an evaluation sample from 1990:1 to 2017:2. Our proposed methodology improves combination forecast performance by approximately 10% in terms of both the root mean squared forecast error and the continuous ranked probability score. We find that our methodology delivers a similar performance gain for the Logarithmic Opinion Pool (LogOP), a commonly-used alternative to the LOP.
    Keywords: Forecast density combination, Smirnov transform, inflation
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-47&r=all
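The Smirnov (probability integral) transform at the heart of the method maps pool draws through the pool CDF and back through an empirical quantile function of the sample data. A simplified sketch, using a plain interpolated ECDF rather than the kernel-smoothed version in the paper, with two hypothetical normal "experts":

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

def lop_cdf(x):
    # Equal-weight linear opinion pool of two normal experts:
    # the pool's CDF is the average of the expert CDFs.
    return np.array([0.5 * norm_cdf(v, -1.0, 1.0)
                     + 0.5 * norm_cdf(v, 1.0, 1.0) for v in x])

def smirnov_transform(draws, pool_cdf, target_sample):
    """u = F_pool(draw) is approximately uniform; the empirical
    quantile function of target_sample then sends u to a draw that
    inherits the sample's marginal shape (e.g. its asymmetry)."""
    u = pool_cdf(draws)
    q = np.sort(np.asarray(target_sample, float))
    probs = (np.arange(1, q.size + 1) - 0.5) / q.size
    return np.interp(u, probs, q)

rng = np.random.default_rng(0)
pool_draws = np.where(rng.random(5000) < 0.5,
                      rng.normal(-1.0, 1.0, 5000),
                      rng.normal(1.0, 1.0, 5000))
skewed_history = rng.exponential(1.0, 400)   # asymmetric target data
z = smirnov_transform(pool_draws, lop_cdf, skewed_history)
```

The transformed draws `z` are right-skewed like the historical sample, even though the symmetric pool could not produce that asymmetry on its own.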
  8. By: Burak Alparslan Eroglu; J. Isaac Miller (Department of Economics, University of Missouri); Taner Yigit
    Abstract: We show that time-varying parameter state-space models estimated using the Kalman filter are particularly vulnerable to the problem of spurious regression, because the integrated error is transferred to the estimated state equation. We offer a simple yet effective methodology to reliably recover the instability in cointegrating vectors. In the process, the proposed methodology successfully distinguishes between the cases of no cointegration, fixed cointegration, and time-varying cointegration. We apply these proposed tests to elucidate the relationship between concentrations of greenhouse gases and global temperatures, an important relationship to both climate scientists and economists.
    Keywords: time-varying cointegration, Kalman filter, spurious regression
    JEL: C12 C32 C51 Q54
    Date: 2019–06–27
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1905&r=all
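The spurious-regression phenomenon the abstract builds on is easy to reproduce with static OLS on two independent random walks; the paper's point is that Kalman-filtered time-varying-parameter models transfer the same integrated error into the estimated state equation. A toy replication of the classic symptom (all settings illustrative):

```python
import numpy as np

def spurious_t(n, rng):
    # Regress one independent random walk on another and return the
    # conventional OLS t-statistic on the slope.
    x = np.cumsum(rng.standard_normal(n))
    y = np.cumsum(rng.standard_normal(n))
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    V = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(V[1, 1])

# Under valid inference, |t| > 1.96 should occur about 5% of the time;
# with independent random walks it occurs far more often.
rng = np.random.default_rng(42)
rej = np.mean([abs(spurious_t(200, rng)) > 1.96 for _ in range(300)])
```

The rejection rate `rej` is a large multiple of the nominal 5% level and grows with the sample size, which is the signature of spurious regression.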
  9. By: Carbajal-De-Nova, Carolina; Venegas-Martínez, Francisco
    Abstract: In panel data analysis, bias can be reduced asymptotically to zero by increasing the dimension N or T, or both. This research develops an econometric methodology to separate and measure bias through synthetic estimators without altering the panel dimensions. This is done by recursively decomposing bias into systematic and random components. The methodology provides consistent synthetic estimators.
    Keywords: panel data models; bias analysis; econometric modeling
    JEL: C51
    Date: 2019–06–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:94405&r=all
  10. By: Joshua C. C. Chan; Liana Jacobi; Dan Zhu
    Abstract: Large Bayesian VARs with the natural conjugate prior are now routinely used for forecasting and structural analysis. It has been shown that selecting the prior hyperparameters in a data-driven manner can often substantially improve forecast performance. We propose a computationally efficient method to obtain the optimal hyperparameters based on Automatic Differentiation, which is an efficient way to compute derivatives. Using a large US dataset, we show that using the optimal hyperparameter values leads to substantially better forecast performance. Moreover, the proposed method is much faster than the conventional grid-search approach, and is applicable in high-dimensional optimization problems. The new method thus provides a practical and systematic way to develop better shrinkage priors for forecasting in a data-rich environment.
    Keywords: automatic differentiation, vector autoregression, optimal hyperparameters, forecasts, marginal likelihood
    JEL: C11 C53 E37
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-46&r=all
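Automatic Differentiation itself can be sketched with forward-mode dual numbers, here differentiating a toy closed-form log marginal likelihood with respect to a prior variance hyperparameter. This shows only the AD primitive on an assumed one-observation conjugate model; the paper differentiates the analytic large-BVAR marginal likelihood and embeds the gradient in an optimizer:

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation: each number
    carries a value and its derivative with respect to one input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __rtruediv__(self, c):  # plain number / Dual
        return Dual(c / self.val, -c * self.dot / self.val ** 2)

def dlog(x):
    return Dual(math.log(x.val), x.dot / x.val)

def log_marginal_likelihood(tau, y, sigma2=1.0):
    # Toy conjugate model with one observation: y ~ N(0, tau + sigma2),
    # so the marginal likelihood is available in closed form.
    v = tau + sigma2
    return -0.5 * dlog(2.0 * math.pi * v) - (y * y) / (2.0 * v)

# Seeding tau with derivative 1 yields d(log ML)/d(tau) in one sweep,
# ready to feed a gradient-based hyperparameter search.
g = log_marginal_likelihood(Dual(0.5, 1.0), y=1.3).dot
```

Because every arithmetic operation propagates the derivative exactly, the gradient comes at roughly the cost of one likelihood evaluation, which is what makes AD attractive relative to grid search in high dimensions.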
  11. By: Alessandra Canepa,; Menelaos G. Karanasos; Alexandros G. Paraskevopoulos, (University of Turin)
    Abstract: In this paper we investigate the behavior of inflation persistence in the United States. To model inflation we estimate an autoregressive GARCH-in-mean model with variable coefficients and we propose a new measure of second-order time-varying persistence, which not only distinguishes between changes in the dynamics of inflation and its volatility, but also allows for feedback from nominal uncertainty to inflation. Our empirical results suggest that inflation persistence in the United States is best described as unchanged. Another important result relates to the Monte Carlo evidence, which reveals that if the model is misspecified, then commonly used unit root tests will misclassify inflation as a nonstationary, rather than a stationary, process.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:uto:dipeco:201911&r=all
  12. By: Crossley, Thomas F.; Levell, Peter; Poupakis, Stavros
    Abstract: Researchers are often interested in the relationship between two variables, with no single data set containing both. A common strategy is to use proxies for the dependent variable that are common to two surveys to impute the dependent variable into the data set containing the independent variable. We show that commonly employed regression- or matching-based imputation procedures lead to inconsistent estimates. We offer an easily-implemented correction and corrected asymptotic standard errors. We illustrate these with Monte Carlo experiments and empirical examples using data from the US Consumer Expenditure Survey (CE) and the Panel Study of Income Dynamics (PSID).
    Date: 2019–06–24
    URL: http://d.repec.org/n?u=RePEc:ese:iserwp:2019-07&r=all
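A stylized single-proxy illustration of the inconsistency: imputing the dependent variable from a noisy proxy attenuates the estimated slope by the proxy's reliability. The reliability rescaling at the end is a textbook-style fix for this special case, not the paper's general correction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# "Donor" sample observes the outcome y and a noisy proxy z.
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)               # true slope on x is 1
z = y + rng.normal(0.0, np.sqrt(2.0), n)     # proxy, reliability 0.5

# Imputation step: predict y from z in the donor data ...
delta = np.cov(y, z)[0, 1] / np.var(z)       # reliability ~ 0.5
y_hat = delta * z                             # imputed outcome

# ... then regress the imputed outcome on x in the recipient data.
slope = lambda a, b: np.cov(a, b)[0, 1] / np.var(b)
b_direct = slope(y, x)        # ~ 1.0 (infeasible benchmark)
b_imputed = slope(y_hat, x)   # attenuated toward delta * 1 = 0.5

# In this special case, rescaling by the reliability undoes the bias.
b_corrected = b_imputed / delta
```

The imputed-outcome regression recovers only `delta` times the true slope, which is why naive two-survey imputation understates the relationship of interest.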
  13. By: Federico Zincenko
    Abstract: We consider testing for risk aversion in first-price sealed-bid auctions with symmetric bidders and independent private values; the parameters are the bidders' utility function and valuation distribution. First, we show that any test based on a sample of bids will generally be inconsistent and will fail to detect any sequence of local alternatives converging to the null of risk neutrality. Second, we introduce restrictions on the parameter space, which are implied by Guerre, Perrigne, and Vuong (2009)'s exclusion restriction, and then we develop a consistent nonparametric test that controls the limiting size and detects local alternatives at the parametric rate.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:pit:wpaper:6641&r=all
  14. By: Christian Bontemps (ENAC - Ecole Nationale de l'Aviation Civile); Bezerra Sampaio
    Abstract: In this paper we review the literature on static entry games and show how they can be used to estimate the market structure of the airline industry. We present the econometric challenges, in particular the problem of multiple equilibria, and discuss some solutions used in the literature. We also show how these models, in either the complete-information or the incomplete-information setting, can be estimated from i.i.d. data on market presence and market characteristics. We illustrate this by estimating a static entry game with heterogeneous firms by Simulated Maximum Likelihood on European data for the year 2015.
    Keywords: multiple equilibria,airlines,estimation,industrial organization,entry
    Date: 2019–05–22
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02137358&r=all
  15. By: Rodrigues, Paulo M.M.; Sibbertsen, Philipp; Voges, Michelle
    Abstract: In this paper, test procedures for no fractional cointegration against possible breaks in the persistence structure of a fractional cointegrating relationship are introduced. The tests proposed are based on the supremum of the Hassler and Breitung (2006) test statistic for no cointegration over possible breakpoints in the long-run equilibrium. We show that the new tests, correctly standardized, converge to the supremum of a chi-squared distribution, and that this convergence is uniform. An in-depth Monte Carlo analysis provides results on the finite sample performance of our tests. We then use the new procedures to investigate whether there was a dissolution of fractional cointegrating relationships between benchmark government bonds of ten EMU countries (Spain, Italy, Portugal, Ireland, Greece, Belgium, Austria, Finland, the Netherlands and France) and Germany with the beginning of the European debt crisis.
    Keywords: Fractional cointegration, Persistence breaks, Hassler-Breitung test, Changing Long-run equilibrium
    JEL: C12 C32
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-656&r=all
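The sup-over-breakpoints device can be sketched with a simple squared t-statistic for a mean shift scanned over a trimmed set of candidate break dates; the paper instead takes the supremum of the Hassler-Breitung fractional cointegration statistic, so this is only an illustration of the construction:

```python
import numpy as np

def sup_break_statistic(y, trim=0.15):
    """Supremum, over trimmed candidate break dates, of a squared
    t-statistic for a shift in mean at date k."""
    n = y.size
    best = -np.inf
    for k in range(int(trim * n), int((1 - trim) * n)):
        m1, m2 = y[:k].mean(), y[k:].mean()
        s2 = (np.sum((y[:k] - m1) ** 2)
              + np.sum((y[k:] - m2) ** 2)) / (n - 2)
        stat = (m1 - m2) ** 2 / (s2 * (1.0 / k + 1.0 / (n - k)))
        best = max(best, stat)
    return best

rng = np.random.default_rng(0)
e = rng.standard_normal(100)
s_flat = sup_break_statistic(e)                        # no break
shift = np.concatenate([np.zeros(50), np.full(50, 5.0)])
s_break = sup_break_statistic(e + shift)               # break at t=50
```

Trimming keeps the candidate breaks away from the sample edges, where the subsample statistics are erratic; critical values come from the supremum of the limiting process, as in the abstract's sup-of-chi-squared result.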
  16. By: Kajal Lahiri; Wuwei Wang
    Abstract: We apply generalized beta and triangular distributions to histograms from the Survey of Professional Forecasters (SPF) to estimate forecast uncertainty, shocks and discord within an info-metrics framework, and compare these with moment-based estimates. We find that the two approaches produce analogous results, except in cases where the underlying densities deviate significantly from normality. Even though the Shannon entropy is more inclusive of different facets of a forecast density, we find that with SPF forecasts it is largely driven by the variance of the densities. We use Jensen-Shannon information to measure ex ante “news” or “uncertainty shocks” in real time, and find that this ‘news’ is closely related to revisions in forecast means, is countercyclical, and raises uncertainty. Using standard vector autoregression analysis, we confirm that uncertainty affects the economy negatively.
    Keywords: density forecasts, uncertainty, disagreement, entropy measures, Jensen-Shannon information, Survey of Professional Forecasters
    JEL: E37
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_7674&r=all
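Jensen-Shannon information between two discrete forecast histograms has a short direct implementation (a minimal histogram version; the paper works with fitted generalized beta and triangular densities):

```python
import numpy as np

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (in nats) between two probability
    histograms over the same bins: the average Kullback-Leibler
    divergence of each density from their equal-weight mixture.
    It is symmetric in p and q and bounded above by log(2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0          # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Applied to a forecaster's consecutive-quarter histograms for the same target, the divergence serves as the real-time "news" measure described in the abstract: zero when beliefs are unchanged, log(2) when the two densities share no mass.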
  17. By: Isabel Hovdahl
    Abstract: One of the most important research questions in climate economics is the relationship between temperatures and human mortality. This paper develops a procedure that enables the use of machine learning for estimating the causal temperature-mortality relationship. The machine-learning model is compared to a traditional OLS model, and although both models capture the causal temperature-mortality relationship, they deliver very different predictions of the effect of climate change on mortality. These differences are mainly caused by different abilities regarding extrapolation and estimation of marginal effects. The procedure developed in this paper can find applications in other fields far beyond climate economics.
    Keywords: Climate change, machine learning, mortality
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:bny:wpaper:0077&r=all
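The extrapolation difference the abstract points to can be seen in a toy comparison of OLS with a nearest-neighbour predictor standing in for tree-type learners (all variables and numbers are hypothetical, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 30.0, 500)             # observed "temperatures"
y = 0.2 * x + rng.standard_normal(500)      # toy outcome response

# Linear model: extrapolates its fitted slope beyond the data range.
b1, b0 = np.polyfit(x, y, 1)
ols_at_60 = b1 * 60.0 + b0

# Nearest-neighbour predictor (stand-in for tree-type learners):
# outside the training support it predicts the boundary observation,
# i.e. it extrapolates flat rather than along the trend.
nn_at_60 = y[np.argmin(np.abs(x - 60.0))]
```

Both fits describe the in-sample relationship equally well, yet at an unseen 60-degree input the linear model continues the trend while the flat-extrapolating learner does not, which is exactly why climate projections from the two can diverge.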

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.