nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒01‒28
eleven papers chosen by
Sune Karlsson
Örebro universitet

  1. Markov chain Monte Carlo estimation of spatial dynamic panel models for large samples By LeSage, James P.; Chih, Yao-Yu; Vance, Colin
  2. Reconciling VAR-based Forecasts with Survey Forecasts By Doh, Taeyoung; Smith, Andrew Lee
  3. Non-Parametric Inference Adaptive to Intrinsic Dimension By Khashayar Khosravi; Greg Lewis; Vasilis Syrgkanis
  4. Inference on Functionals under First Order Degeneracy By Qihui Chen; Zheng Fang
  5. Semiparametric Bayes Multiple Imputation for Regression Models with Missing Mixed Continuous-Discrete Covariates By Ryo Kato; Takahiro Hoshino
  6. Structural vector autoregression with time varying transition probabilities: identifying uncertainty shocks via changes in volatility By Wenjuan Chen; Aleksei Netsunajev
  7. Forensic Econometrics: Demand Estimation when Data are Missing By Julian Hidalgo; Michelle Sovinsky
  8. A Comparison of Semiparametric Tests for Fractional Cointegration By Leschinski, Christian; Voges, Michelle; Sibbertsen, Philipp
  9. Evaluating heterogeneous forecasts for vintages of macroeconomic variables By Franses, Ph.H.B.F.; Welz, M.
  10. A Probabilistic Approach to Nonparametric Local Volatility By Martin Tegnér; Stephen Roberts
  11. Mastering Panel 'Metrics: Causal Impact of Democracy on Growth By Shuowen Chen; Victor Chernozhukov; Iván Fernández-Val

  1. By: LeSage, James P.; Chih, Yao-Yu; Vance, Colin
    Abstract: Focus is on efficient estimation of a dynamic space-time panel data model that incorporates spatial dependence, temporal dependence, as well as space-time covariance and can be implemented in large N and T situations, where N is the number of spatial units and T the number of time periods. Quasi-maximum likelihood (QML) estimation in cases involving large N and T poses computational challenges because optimizing the (log) likelihood requires: 1) evaluating the log-determinant of an NT x NT matrix that appears in the likelihood, 2) imposing stability restrictions on parameters reflecting space-time dynamics, as well as 3) simulations to produce an empirical distribution of the partial derivatives used to interpret model estimates that require numerous inversions of large matrices. We set forth a Markov Chain Monte Carlo (MCMC) estimation procedure capable of handling large problems, which we illustrate using a sample of T = 487 daily fuel prices for N = 12,435 German gas stations, resulting in N x T over 6 million. The procedure produces estimates equivalent to those from QML and has the additional advantage of producing a Monte Carlo integrated estimate of the log-marginal likelihood, useful for purposes of model comparison. Our MCMC estimation procedure uses: 1) a Taylor series approximation to the log-determinant based on traces of matrix products calculated prior to MCMC sampling, 2) block sampling of the spatiotemporal parameters, which allows imposition of the stability restrictions, and 3) a Metropolis-Hastings guided Monte Carlo integration of the log-marginal likelihood. We also provide an efficient approach to simulations needed to produce the empirical distribution of the partial derivatives for model interpretation.
    Keywords: dynamic panel models, spatial dependence, Markov Chain Monte Carlo estimation
    JEL: C23 D40
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:rwirep:769&r=all
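    The Taylor-series log-determinant approximation mentioned in this abstract can be sketched in a few lines. The minimal example below covers only the purely spatial term log|I_N - rho*W| rather than the authors' full NT x NT space-time matrix, and the random weight matrix, truncation order K and value of rho are illustrative assumptions, not taken from the paper.
```python
import numpy as np

def logdet_taylor(rho, traces):
    """Approximate log|I - rho*W| by the truncated series
    -sum_{k=1}^{K} rho^k * tr(W^k) / k, with the traces precomputed once."""
    ks = np.arange(1, len(traces) + 1)
    return -np.sum(rho ** ks * traces / ks)

# Precompute tr(W^k) for k = 1..K a single time, before any MCMC sampling.
rng = np.random.default_rng(0)
N, K = 500, 50
W = rng.random((N, N)) * (rng.random((N, N)) < 0.01)   # sparse random weights
W = W / W.sum(axis=1, keepdims=True).clip(min=1.0)     # keep row sums <= 1
traces = np.empty(K)
Wk = np.eye(N)
for k in range(K):
    Wk = Wk @ W
    traces[k] = np.trace(Wk)

rho = 0.4
exact = np.linalg.slogdet(np.eye(N) - rho * W)[1]
approx = logdet_taylor(rho, traces)
print(exact, approx)   # close for |rho| < 1; no O(N^3) determinant inside the sampler
```
    Because the traces do not depend on rho, each MCMC iteration only re-evaluates the cheap polynomial, which is what makes the approach feasible for very large samples.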
  2. By: Doh, Taeyoung (Federal Reserve Bank of Kansas City); Smith, Andrew Lee (Federal Reserve Bank of Kansas City)
    Abstract: This paper proposes a novel Bayesian approach to jointly model realized data and survey forecasts of the same variable in a vector autoregression (VAR). In particular, our method imposes a prior distribution on the consistency between the forecast implied by the VAR and the survey forecast for the same variable. When the prior is placed on unconditional forecasts from the VAR, the prior shapes the posterior of the reduced-form VAR coefficients. When the prior is placed on conditional forecasts (specifically, impulse responses), the prior shapes the posterior of the structural VAR coefficients. To implement our prior, we combine importance sampling with a maximum entropy prior for forecast consistency to obtain posterior draws of VAR parameters at low computational cost. We use two empirical examples to illustrate some potential applications of our methodology: (i) the evolution of tail risks for inflation in a time-varying parameter VAR model and (ii) the identification of forward guidance shocks using sign and forecast-consistency restrictions in a monetary VAR model.
    Keywords: Vector Autoregression (VAR); Survey Forecasts; Bayesian VAR; Inflation Risk; Forward Guidance
    JEL: C11 C32 E31
    Date: 2018–12–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp18-13&r=all
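    The importance-sampling idea behind a forecast-consistency prior can be conveyed with a toy example: draw from an unrestricted posterior, then reweight the draws according to how closely the model-implied forecast matches the survey number. The sketch below uses a univariate AR(1) in place of a VAR, a Gaussian approximation to the posterior, and a simple squared-error tilting penalty with tuning constant lam; all of these are stand-ins for illustration, not the paper's maximum-entropy construction.
```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: fit y_t = c + b*y_{t-1} + e_t to simulated data, and suppose a
# survey gives an h-step-ahead forecast the model forecast should respect.
T, h = 200, 4
y = np.empty(T); y[0] = 0.0
for t in range(1, T):
    y[t] = 0.5 + 0.8 * y[t - 1] + rng.normal(scale=0.5)

X = np.column_stack([np.ones(T - 1), y[:-1]])
beta_hat, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ beta_hat
cov = np.var(resid) * np.linalg.inv(X.T @ X)

# 1) Draws from an unrestricted (Gaussian-approximation) posterior of (c, b).
draws = rng.multivariate_normal(beta_hat, cov, size=5000)

def forecast(c, b, y_last, h):
    f = y_last
    for _ in range(h):
        f = c + b * f
    return f

# 2) Tilt the draws toward agreement with a hypothetical survey forecast.
survey = 2.4          # hypothetical survey value
lam = 10.0            # tightness of the consistency penalty (assumed)
f = np.array([forecast(c, b, y[-1], h) for c, b in draws])
logw = -lam * (f - survey) ** 2
w = np.exp(logw - logw.max()); w /= w.sum()

# 3) Resample according to the weights to obtain the tilted posterior.
idx = rng.choice(len(draws), size=5000, p=w)
tilted = draws[idx]
print(draws.mean(axis=0), tilted.mean(axis=0))
```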
  3. By: Khashayar Khosravi; Greg Lewis; Vasilis Syrgkanis
    Abstract: We consider non-parametric estimation and inference of conditional moment models in high dimensions. We show that even when the dimension $D$ of the conditioning variable is larger than the sample size $n$, estimation and inference are feasible as long as the distribution of the conditioning variable has small intrinsic dimension $d$, as measured by the doubling dimension. Our estimation is based on a sub-sampled ensemble of the $k$-nearest neighbors $Z$-estimator. We show that if the intrinsic dimension of the covariate distribution is equal to $d$, then the finite sample estimation error of our estimator is of order $n^{-1/(d+2)}$ and our estimate is $n^{1/(d+2)}$-asymptotically normal, irrespective of $D$. We discuss extensions and applications to heterogeneous treatment effect estimation.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.03719&r=all
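    A stripped-down version of a sub-sampled k-nearest-neighbour Z-estimator, for the simplest possible moment (a conditional mean), illustrates the mechanics: the covariates sit in a large ambient dimension D but on a low-dimensional set, and each estimate averages the outcome over the k nearest neighbours of the target point within a random subsample. The data-generating process and the choices of k, subsample size s and number of subsamples B below are illustrative assumptions, not the paper's tuning.
```python
import numpy as np

rng = np.random.default_rng(2)

# Covariates with large ambient dimension D but intrinsic dimension 1:
# points along a line in R^D, plus a nonlinear conditional mean in t.
n, D = 2000, 50
t = rng.uniform(-1, 1, n)
X = np.outer(t, rng.normal(size=D))
y = np.sin(3 * t) + rng.normal(scale=0.3, size=n)

def knn_subsampled(X, y, x0, k=20, s=500, B=200, rng=rng):
    """Sub-sampled k-NN Z-estimator for the conditional-mean moment
    E[y - theta | X = x0] = 0: average y over the k nearest neighbours of x0
    within each random subsample of size s, then average over B subsamples."""
    est = np.empty(B)
    for b in range(B):
        idx = rng.choice(len(y), size=s, replace=False)
        d = np.linalg.norm(X[idx] - x0, axis=1)
        nn = idx[np.argsort(d)[:k]]
        est[b] = y[nn].mean()              # solves the local moment equation
    return est.mean(), est.std(ddof=1)

x0 = X[0]
theta_hat, spread = knn_subsampled(X, y, x0)
print(theta_hat, np.sin(3 * t[0]), spread)
```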
  4. By: Qihui Chen; Zheng Fang
    Abstract: This paper presents a unified second order asymptotic framework for conducting inference on parameters of the form $\phi(\theta_0)$, where $\theta_0$ is unknown but can be estimated by $\hat\theta_n$, and $\phi$ is a known map that admits null first order derivative at $\theta_0$. For a large number of examples in the literature, the second order Delta method reveals a nondegenerate weak limit for the plug-in estimator $\phi(\hat\theta_n)$. We show, however, that the `standard' bootstrap is consistent if and only if the second order derivative $\phi_{\theta_0}''=0$ under regularity conditions, i.e., the standard bootstrap is inconsistent if $\phi_{\theta_0}''\neq 0$, and provides degenerate limits unhelpful for inference otherwise. We thus identify a source of bootstrap failures distinct from that in Fang and Santos (2018) because the problem (of consistently bootstrapping a \textit{nondegenerate} limit) persists even if $\phi$ is differentiable. We show that the correction procedure in Babu (1984) can be extended to our general setup. Alternatively, a modified bootstrap is proposed when the map is \textit{in addition} second order nondifferentiable. Both are shown to provide local size control under some conditions. As an illustration, we develop a test of common conditional heteroskedastic (CH) features, a setting with both degeneracy and nondifferentiability -- the latter is because the Jacobian matrix is degenerate at zero and we allow the existence of multiple common CH features.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.04861&r=all
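    The bootstrap failure described in this abstract is easy to reproduce numerically in the textbook case theta_0 = 0 and phi(theta) = theta^2, where the first-order derivative vanishes and n*phi(theta_hat) has a nondegenerate, chi-square-type limit. The toy simulation below compares the sampling distribution of n*phi(theta_hat) with the standard bootstrap distribution of n*(phi(theta_star) - phi(theta_hat)); it is only an illustration of the degeneracy issue, not the paper's conditional-heteroskedasticity application.
```python
import numpy as np

rng = np.random.default_rng(3)

def phi(t):
    return t ** 2          # phi'(0) = 0: first order degeneracy at theta_0 = 0

n, R, B = 500, 2000, 2000

# Sampling distribution of n * phi(theta_hat) under theta_0 = 0.
samp = np.empty(R)
for r in range(R):
    x = rng.normal(size=n)
    samp[r] = n * phi(x.mean())

# Standard bootstrap of n * (phi(theta_star) - phi(theta_hat)) for one sample.
x = rng.normal(size=n)
theta_hat = x.mean()
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    boot[b] = n * (phi(xb.mean()) - phi(theta_hat))

# The bootstrap statistic takes negative values and its quantiles generally
# differ from those of the nonnegative sampling distribution above.
print(np.quantile(samp, [0.5, 0.95]))
print(np.quantile(boot, [0.5, 0.95]))
```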
  5. By: Ryo Kato (Research Institute for Economics & Business Administration (RIEB), Kobe University, Japan); Takahiro Hoshino (Department of Economics, Keio University, Japan and RIKEN Center for Advanced Intelligence Project, Japan)
    Abstract: Issues regarding missing data are critical in observational and experimental research, as they induce loss of information and biased results. Recently, for datasets with mixed continuous and discrete variables in various study areas, multiple imputation by chained equations (MICE) has been more widely used, although MICE may yield severely biased estimates. We propose a new semiparametric Bayes multiple imputation approach that can deal with continuous and discrete variables. This enables us to overcome the shortcomings of MICE, whose conditional models must satisfy strong conditions (known as compatibility) to guarantee that the obtained estimators are consistent. Our exhaustive simulation studies show that the coverage probability of the 95% interval calculated using MICE can be less than 1%, while the MSE of the proposed method can be less than one-fiftieth of that of MICE. We also applied our method to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the results are consistent with those of previous research works that used panel data other than the ADNI database, whereas existing methods such as MICE resulted in entirely inconsistent results.
    Keywords: Full conditional specification, Missing data, Multiple imputation, Probit stickbreaking process mixture, Semiparametric Bayes model
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:kob:dpaper:dp2018-15&r=all
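    Whatever engine produces the completed datasets (the authors' semiparametric Bayes sampler or MICE), the per-imputation estimates are typically pooled with Rubin's combining rules. A minimal helper with made-up numbers for five imputations is sketched below; it does not implement the probit stick-breaking mixture itself.
```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine M multiple-imputation estimates with Rubin's rules:
    pooled point estimate, total variance = within + (1 + 1/M) * between."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    M = len(estimates)
    qbar = estimates.mean()
    within = variances.mean()
    between = estimates.var(ddof=1)
    total = within + (1 + 1 / M) * between
    return qbar, np.sqrt(total)

# e.g. five imputed-data regressions giving a slope estimate and its variance
est, se = pool_rubin([0.42, 0.47, 0.40, 0.45, 0.44],
                     [0.010, 0.011, 0.009, 0.010, 0.012])
print(est, se)
```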
  6. By: Wenjuan Chen; Aleksei Netsunajev
    Keywords: structural vector autoregression; Markov switching; time varying transition probabilities; identification via heteroscedasticity; uncertainty shocks; unemployment dynamics
    JEL: C32 D80 E24
    Date: 2018–02–13
    URL: http://d.repec.org/n?u=RePEc:eea:boewps:wp2018-02&r=all
  7. By: Julian Hidalgo; Michelle Sovinsky
    Abstract: Often empirical researchers face many data constraints when estimating models of demand. These constraints can sometimes prevent adequate evaluation of policies. In this article, we discuss two such missing data problems that arise frequently: missing data on prices and missing information on the size of the potential market. We present some ways to overcome these limitations in the context of two recent research projects: Liana and Sovinsky (2018), which addresses how to incorporate unobserved price heterogeneity, and Hidalgo and Sovinsky (2018), which focuses on how to use modeling techniques to estimate missing market size. Our aim is to provide a starting point for thinking about ways to overcome common data issues.
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2018_058&r=all
  8. By: Leschinski, Christian; Voges, Michelle; Sibbertsen, Philipp
    Abstract: There are various competing procedures to determine whether fractional cointegration is present in a multivariate time series, but no standard approach has emerged. We provide a synthesis of this literature and conduct a detailed comparative Monte Carlo study to guide empirical researchers in their choice of appropriate methodologies. Special attention is paid to empirically relevant issues such as assumptions about the form of the underlying process and the ability of the procedures to distinguish between short-run correlation and long-run equilibria. It is found that several approaches are severely oversized in the presence of correlated short-run components and that the methods show different performance in terms of power when applied to common-component models instead of triangular systems.
    Keywords: Long Memory; Fractional Cointegration; Semiparametric Estimation and Testing
    JEL: C14 C32
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-651&r=all
  9. By: Franses, Ph.H.B.F.; Welz, M.
    Abstract: There are various reasons why professional forecasters may disagree in their quotes for macroeconomic variables. One reason is that they target different vintages of the data. We propose a novel method to test for forecast bias in the case of such heterogeneity. The method is based on Symbolic Regression, where the variables of interest become interval variables. We associate the interval containing the vintages of data with the intervals of the forecasts. An illustration using 18 years of forecasts for annual USA real GDP growth, given by the Consensus Economics forecasters, shows the relevance of the method.
    Keywords: Forecast bias, Data revisions, Interval data, Symbolic regression
    JEL: C53
    Date: 2019–09–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:114113&r=all
  10. By: Martin Tegnér; Stephen Roberts
    Abstract: The local volatility model is widely used for pricing and hedging financial derivatives. While its main appeal is its capability of reproducing any given surface of observed option prices---it provides a perfect fit---the essential component is a latent function which can be uniquely determined only in the limit of infinite data. To (re)construct this function, numerous calibration methods have been suggested involving steps of interpolation and extrapolation, most often of parametric form and with point-estimate representations. We look at the calibration problem in a probabilistic framework with a nonparametric approach based on a Gaussian process prior. This immediately gives a way of encoding prior beliefs about the local volatility function and a hypothesis model which is highly flexible yet not prone to over-fitting. Besides providing a method for calibrating a (range of) point-estimate(s), we draw posterior inference from the distribution over local volatility. This leads to a better understanding of the uncertainty associated with the calibration in particular, and with the model in general. Further, we infer dynamical properties of local volatility by augmenting the hypothesis space with a time dimension. Ideally, this provides predictive distributions not only locally, but also for entire surfaces forward in time. We apply our approach to S&P 500 market data.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.06021&r=all
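    The Gaussian-process calibration idea can be illustrated in one dimension: place a GP prior on the volatility function of moneyness at a fixed maturity, condition on noisy observations, and read off a posterior mean together with an uncertainty band. The kernel, the prior mean level of 0.2, the noise level and the synthetic "smile" in the sketch below are all assumptions made for illustration; the paper works with full surfaces and a time dimension.
```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(a, b, ell=0.2, sf=0.3):
    """Squared-exponential kernel between two 1-d input grids."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

# Toy observations: noisy volatility values on a strike (moneyness) grid.
k = np.linspace(0.7, 1.3, 15)
vol = 0.2 + 0.3 * (k - 1.0) ** 2          # hypothetical smile
obs = vol + rng.normal(scale=0.01, size=k.size)

# GP posterior over the latent volatility function on a finer grid.
sn = 0.01                                  # observation noise sd (assumed)
ks = np.linspace(0.6, 1.4, 200)
K = rbf(k, k) + sn ** 2 * np.eye(k.size)
Ks = rbf(ks, k)
Kss = rbf(ks, ks)
alpha = np.linalg.solve(K, obs - 0.2)      # prior mean level 0.2 (assumed)
mean = 0.2 + Ks @ alpha
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))
print(mean[:5], sd[:5])                    # posterior mean and uncertainty band
```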
  11. By: Shuowen Chen; Victor Chernozhukov; Iván Fernández-Val
    Abstract: The relationship between democracy and economic growth is of long-standing interest. We revisit the panel data analysis of this relationship by Acemoglu, Naidu, Restrepo and Robinson (forthcoming) using state-of-the-art econometric methods. We argue that this and many other panel data settings in economics are in fact high-dimensional, which causes the principal estimators -- the fixed effects (FE) and Arellano-Bond (AB) estimators -- to be biased to a degree that invalidates statistical inference. We can, however, remove these biases by using simple analytical and sample-splitting methods, and thereby restore valid statistical inference. We find that the debiased FE and AB estimators produce substantially higher estimates of the long-run effect of democracy on growth, providing even stronger support for the key hypothesis in Acemoglu, Naidu, Restrepo and Robinson (forthcoming). Given the ubiquitous nature of panel data, we conclude that the use of debiased panel data estimators should substantially improve the quality of empirical inference in economics.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.03821&r=all
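    The sample-splitting bias correction mentioned in this abstract can be demonstrated on a toy dynamic panel: the within (FE) estimator of the autoregressive coefficient carries an O(1/T) Nickell bias, and combining the full-sample estimate with estimates from the two halves of the time dimension (a half-panel jackknife) removes the leading bias term. The AR(1) design and the values of N, T and rho below are illustrative and are not taken from the democracy-growth application.
```python
import numpy as np

rng = np.random.default_rng(5)

def fe_ar1(y):
    """Within (fixed-effects) estimate of rho in y_it = a_i + rho*y_i,t-1 + e_it."""
    ylag, ycur = y[:, :-1], y[:, 1:]
    ylag = ylag - ylag.mean(axis=1, keepdims=True)
    ycur = ycur - ycur.mean(axis=1, keepdims=True)
    return (ylag * ycur).sum() / (ylag ** 2).sum()

# Simulate a dynamic panel where the FE estimator has a sizable O(1/T) bias.
N, T, rho = 500, 10, 0.6
a = rng.normal(size=N)
y = np.zeros((N, T + 1))
for t in range(1, T + 1):
    y[:, t] = a + rho * y[:, t - 1] + rng.normal(size=N)
y = y[:, 1:]

fe = fe_ar1(y)
# Half-panel (split-sample) jackknife: average the FE estimates on the two
# halves of the time dimension and extrapolate the O(1/T) bias away.
half = T // 2
fe_half = 0.5 * (fe_ar1(y[:, :half]) + fe_ar1(y[:, half:]))
debiased = 2 * fe - fe_half
print(fe, debiased, rho)   # the debiased estimate should be much closer to rho
```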

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.