nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒03‒25
twenty-two papers chosen by
Sune Karlsson
Örebro universitet

  1. Nonparametric conditional density specification testing and quantile estimation, with application to S&P500 returns By Patrick Marsh
  2. Maximum Likelihood Estimation for the Fractional Vasicek Model By Tanaka, Katsuto; Xiao, Weilin; Yu, Jun
  3. An Integrated Panel Data Approach to Modelling Economic Growth By Guohua Feng; Jiti Gao; Bin Peng
  4. The role of information in nonstationary regression By Patrick Marsh
  5. Inference for First-Price Auctions with Guerre, Perrigne, and Vuong's Estimator By Jun Ma; Vadim Marmer; Artyom Shneyerov
  6. How cluster-robust inference is changing applied econometrics By James G. MacKinnon
  7. Spatial Blind Source Separation By Bachoc, François; Genton, Mark G.; Nordhausen, Klaus; Ruiz-Gazen, Anne; Virta, Joni
  8. Bayesian MIDAS Penalized Regressions: Estimation, Selection, and Prediction By Matteo Mogliani
  9. Streamlining Time-varying VAR with a Factor Structure in the Parameters By Simon Beyeler
  10. Estimating the Benefits of New Products: Some Approximations By Diewert, Erwin; Feenstra, Robert
  11. Forecasting Equity Index Volatility by Measuring the Linkage among Component Stocks By Qiu, Yue; Xie, Tian; Yu, Jun; Zhou, Qiankun
  12. Identification of Auction Models Using Order Statistics By Yao Luo; Ruli Xiao
  13. Properties of the power envelope for tests against both stationary and explosive alternatives: the effect of trends By Patrick Marsh
  14. Counting Defiers By Amanda E. Kowalski
  15. Dynamic discrete mixtures for high frequency prices By Leopoldo Catania; Roberto Di Mari; Paolo Santucci de Magistris
  16. The effects of conventional and unconventional monetary policy: A new approach By Atsushi Inoue; Barbara Rossi
  17. Identifying and estimating the effects of unconventional monetary policy in the data: How to do It and what have we learned? By Barbara Rossi
  18. Machine Learning Risk Models By Zura Kakushadze; Willie Yu
  19. VAR-based Granger-causality test in the presence of instabilities By Yiru Wang; Barbara Rossi
  20. Bayesian Nonparametric Learning of How Skill Is Distributed across the Mutual Fund Industry By Fisher, Mark; Jensen, Mark J.; Tkac, Paula A.
  21. Moving Beyond Statistical Significance: The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations By John Deke; Mariel Finucane
  22. A Principled Approach to Assessing Missing-Wage Induced Selection Bias By Duo Qin, Sophie van Huellen, Raghda Elshafie, Yimeng Liu and Thanos Moraitis

  1. By: Patrick Marsh
    Abstract: This paper develops a two-stage procedure to test for correct dynamic conditional specification. It exploits nonparametric likelihood for an exponential series density estimator applied to the in-sample Probability Integral Transforms obtained from a fitted conditional model. The test is shown to be asymptotically pivotal, without modification. Numerical experiments illustrate both this and that it can have significantly more power than equivalent tests based on the empirical distribution function, when applied to a number of simple time series specifications. In the event of rejection, the second-stage nonparametric estimator can both consistently estimate quantiles of the data, under empirically relevant conditions, and correct the predictive log-scores of mis-specified models. Both test and estimator are applied to monthly S&P500 returns data. The estimator leads to narrower predictive confidence bands which also enjoy better coverage, and contributes positively to the predictive log-score of fitted Gaussian models. Additional applications involve risk evaluation, such as Value at Risk calculations or estimation of the probability of a negative return. The contribution of the nonparametric estimator is particularly clear during the financial crisis of 2007/8 and highlights the usefulness of a specification procedure which offers the possibility of partially correcting rejected specifications.
    Keywords: Conditional specification, series density estimator, nonparametric likelihood ratio, predictive quantiles for returns, log-score.
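    As an illustrative sketch only (not the authors' procedure, which uses a nonparametric likelihood test built on an exponential series density estimator), the first stage can be pictured as computing Probability Integral Transforms from a fitted model and checking them for uniformity; the function name below is our own.

```python
import numpy as np
from scipy import stats

def gaussian_pits(returns):
    """Probability Integral Transforms under a fitted i.i.d. Gaussian model.

    If the Gaussian specification is correct, the PITs are approximately
    i.i.d. Uniform(0, 1); departures from uniformity signal mis-specification.
    """
    mu, sigma = returns.mean(), returns.std(ddof=1)
    return stats.norm.cdf(returns, loc=mu, scale=sigma)

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.01, size=1000)  # simulated "returns"
u = gaussian_pits(r)
# A crude uniformity check; the paper's test replaces this with a
# nonparametric likelihood ratio and is asymptotically pivotal:
ks_stat, p_value = stats.kstest(u, "uniform")
```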
  2. By: Tanaka, Katsuto (Gakushuin University); Xiao, Weilin (Zhejiang University); Yu, Jun (School of Economics and Lee Kong Chian School of Business, Singapore Management University)
    Abstract: This paper is concerned with the problem of estimating the drift parameters in the fractional Vasicek model from a continuous record of observations. Based on the Girsanov theorem for the fractional Brownian motion, the maximum likelihood (ML) method is used. The asymptotic theory for the ML estimates (MLE) is established in the stationary case, the explosive case, and the null recurrent case for the entire range of the Hurst parameter, providing a complete treatment of asymptotic analysis. It is shown that changing the sign of the persistence parameter will change the asymptotic theory for the MLE, including the rate of convergence and the limiting distribution. It is also found that the asymptotic theory depends on the value of the Hurst parameter.
    Keywords: Maximum likelihood estimate; Fractional Vasicek model; Asymptotic distribution; Stationary process; Explosive process; Null recurrent process
    JEL: C15 C22 C32
    Date: 2019–03–03
  3. By: Guohua Feng; Jiti Gao; Bin Peng
    Abstract: Empirical growth analysis has three major problems --- variable selection, parameter heterogeneity and cross-sectional dependence --- which are addressed independently from each other in most studies. The purpose of this study is to propose an integrated framework that extends the conventional linear growth regression model to allow for parameter heterogeneity and cross-sectional error dependence, while simultaneously performing variable selection. We also derive the asymptotic properties of the estimator under both low and high dimensions, and further investigate the finite sample performance of the estimator through Monte Carlo simulations. We apply the framework to a dataset of 89 countries over the period from 1960 to 2014. Our results reveal some cross-country patterns not found in previous studies (e.g., "middle income trap hypothesis", "natural resources curse hypothesis", "religion works via belief, not practice", etc.).
    Date: 2019–03
  4. By: Patrick Marsh
    Abstract: The role of standard likelihood based measures of information and efficiency is unclear when regressions involve nonstationary data. Typically the standardized score is not asymptotically Gaussian and the standardized Hessian has a stochastic, rather than deterministic, limit. Here we consider a time series regression involving a deterministic covariate which can be evaporating, slowly evolving or nonstationary. It is shown that conditional information, or equivalently, profile Kullback-Leibler and Fisher Information remain informative about both the accuracy, i.e. asymptotic variance, of profile maximum likelihood estimators, as well as the power of point optimal invariant tests for a unit root. Specifically, these information measures indicate that fractional, rather than linear, trends may minimize inferential accuracy. This is confirmed in numerical experiments.
  5. By: Jun Ma; Vadim Marmer; Artyom Shneyerov
    Abstract: We consider inference on the probability density of valuations in the first-price sealed-bid auctions model within the independent private value paradigm. We show the asymptotic normality of the two-step nonparametric estimator of Guerre, Perrigne, and Vuong (2000) (GPV), and propose an easily implementable and consistent estimator of the asymptotic variance. We prove the validity of the pointwise percentile bootstrap confidence intervals based on the GPV estimator. Lastly, we use the intermediate Gaussian approximation approach to construct bootstrap-based asymptotically valid uniform confidence bands for the density of the valuations.
    Date: 2019–03
  6. By: James G. MacKinnon (Queen's University)
    Abstract: In many fields of economics, and also in other disciplines, it is hard to justify the assumption that the random error terms in regression models are uncorrelated. It seems more plausible to assume that they are correlated within clusters, such as geographical areas or time periods, but uncorrelated across clusters. It has therefore become very popular to use "clustered" standard errors, which are robust against arbitrary patterns of within-cluster variation and covariation. Conventional methods for inference using clustered standard errors work very well when the model is correct and the data satisfy certain conditions, but they can produce very misleading results in other cases. This paper discusses some of the issues that users of these methods need to be aware of.
    Keywords: clustered data, cluster-robust variance estimator, CRVE, wild cluster bootstrap, robust inference
    JEL: C15 C21 C23
    Date: 2019–03
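    To make the cluster-robust idea concrete, here is a minimal sketch (our own, not from the paper) of the standard CR0 cluster-robust "sandwich" variance for OLS, which sums score outer products cluster by cluster rather than observation by observation.

```python
import numpy as np

def cluster_robust_se(X, y, clusters):
    """OLS coefficients with CR0 cluster-robust standard errors:
    V = (X'X)^{-1} (sum_g X_g' u_g u_g' X_g) (X'X)^{-1},
    robust to arbitrary within-cluster variation and covariation.
    """
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        Xg, ug = X[clusters == g], u[clusters == g]
        sg = Xg.T @ ug  # cluster-level score
        meat += np.outer(sg, sg)
    V = bread @ meat @ bread
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(1)
G, n_g = 20, 30
clusters = np.repeat(np.arange(G), n_g)
x = rng.normal(size=G * n_g)
e = rng.normal(size=G)[clusters] + rng.normal(size=G * n_g)  # cluster-correlated errors
y = 1.0 + 0.5 * x + e
X = np.column_stack([np.ones_like(x), x])
beta, se = cluster_robust_se(X, y, clusters)
```

As the paper stresses, such conventional clustered inference can mislead when clusters are few or unbalanced; the wild cluster bootstrap is one remedy.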
  7. By: Bachoc, François; Genton, Mark G.; Nordhausen, Klaus; Ruiz-Gazen, Anne; Virta, Joni
    Abstract: Recently a blind source separation model was suggested for spatial data together with an estimator based on the simultaneous diagonalization of two scatter matrices. The asymptotic properties of this estimator are derived here and a new estimator, based on the joint diagonalization of more than two scatter matrices, is proposed. The limiting properties and merits of the novel estimator are verified in simulation studies. A real data example illustrates the method.
    Keywords: joint diagonalisation; limiting distribution; multivariate random fields; spatial scatter matrices
    Date: 2019–03
  8. By: Matteo Mogliani
    Abstract: We propose a new approach to mixed-frequency regressions in a high-dimensional environment that resorts to Group Lasso penalization and Bayesian techniques for estimation and inference. To improve the sparse recovery ability of the model, we also consider a Group Lasso with a spike-and-slab prior. Penalty hyper-parameters governing the model shrinkage are automatically tuned via an adaptive MCMC algorithm. Simulations show that the proposed models have good selection and forecasting performance, even when the design matrix presents high cross-correlation. When applied to U.S. GDP data, the results suggest that financial variables may have some, although limited, short-term predictive content.
    Date: 2019–03
  9. By: Simon Beyeler (Swiss National Bank)
    Abstract: I introduce a factor structure on the parameters of a Bayesian TVP-VAR to reduce the dimension of the model's state space. To further limit the scope of over-fitting the estimation of the factor loadings uses a new generation of shrinkage priors. A Monte Carlo study illustrates the ability of the proposed sampler to well distinguish between time-varying and constant parameters. In an application with Swiss data the model proves useful to capture changes in the economy's dynamics due to the lower bound on nominal interest rates.
    Date: 2019–03
  10. By: Diewert, Erwin; Feenstra, Robert
    Abstract: A major challenge facing statistical agencies is the problem of adjusting price and quantity indexes for changes in the availability of commodities. This problem arises in the scanner data context as products in a commodity stratum appear and disappear in retail outlets. Hicks suggested a reservation price methodology for dealing with this problem in the context of the economic approach to index number theory. Feenstra and Hausman suggested specific methods for implementing the Hicksian approach. The present paper evaluates these approaches and suggests some alternative approaches to the estimation of reservation prices. The various approaches are implemented using some scanner data on frozen juice products that are available online.
    Keywords: Hicksian reservation prices, virtual prices, Laspeyres, Paasche, Fisher
    JEL: C33 C43 C81 D11 D60 E31
    Date: 2019–03–13
  11. By: Qiu, Yue (WISE and School of Economics, Xiamen University); Xie, Tian (School of Economics, Singapore Management University); Yu, Jun (School of Economics and Lee Kong Chian School of Business, Singapore Management University); Zhou, Qiankun (Department of Economics, Louisiana State University)
    Abstract: The linkage among the realized volatilities across component stocks is important when modeling and forecasting the relevant index volatility. In this paper, the linkage is measured via an extended Common Correlated Effects (CCE) approach under a panel heterogeneous autoregression model where unobserved common factors in errors are assumed. Consistency of the CCE estimator is obtained. The common factors are extracted using the principal component analysis. Empirical studies show that realized volatility models exploiting the linkage effects lead to significantly better out-of-sample forecast performance, for example, an up to 32% increase in the pseudo R2. We also conduct various forecasting exercises on the linkage variables that compare conventional regression methods with popular machine learning techniques.
    Keywords: Volatility Forecasting; Heterogeneous autoregression; Common correlated effect; Factor analysis; Random forest
    JEL: C31 C32 G12 G17
    Date: 2019–03–02
  12. By: Yao Luo; Ruli Xiao
    Abstract: Auction data often fail to record all bids or all relevant factors that shift bidder values. In this paper, we study the identification of auction models with unobserved heterogeneity (UH) using multiple order statistics of bids. Classical measurement error approaches require multiple independent measurements. Order statistics, by definition, are dependent, rendering classical approaches inapplicable. First, we show that models with nonseparable finite UH are identifiable using three consecutive order statistics or two consecutive ones with an instrument. Second, two arbitrary order statistics identify the models if UH provides support variations. Third, models with separable continuous UH are identifiable using two consecutive order statistics under a weak restrictive stochastic dominance condition. Lastly, we apply our methods to U.S. Forest Service timber auctions and find evidence of UH.
    Keywords: Unobserved Heterogeneity, Measurement Error, Finite Mixture, Multiplicative Separability, Support Variations, Deconvolution
    JEL: C14 D44
    Date: 2019–03–16
  13. By: Patrick Marsh
    Abstract: This paper details a precise analytic effect that inclusion of a linear trend has on the power of Neyman-Pearson point optimal unit root tests and thence the power envelope. Both stationary and explosive alternatives are considered. The envelope can be characterized by probabilities for two, related, sums of chi-square random variables. A stochastic expansion, in powers of the local-to-unity parameter, of the difference between these loses its leading term when a linear trend is included. This implies that the power envelope converges to size at a faster rate, which can then be exploited to prove that the power envelope must necessarily be lower. This effect is shown to be, analytically, greater asymptotically than in small samples and numerically far greater for explosive than for stationary alternatives. Only a linear trend has a specific rate effect on the power envelope, however other deterministic variables will have some effect. The methods of the paper lead to a simple direct measure of this effect which is then informative about power, in practice.
  14. By: Amanda E. Kowalski
    Abstract: The LATE monotonicity assumption of Imbens and Angrist (1994) precludes “defiers,” individuals whose treatment always runs counter to the instrument, in the terminology of Balke and Pearl (1993) and Angrist et al. (1996). I allow for defiers in a model with a binary instrument and a binary treatment. The model is explicit about the randomization process that gives rise to the instrument. I use the model to develop estimators of the counts of defiers, always takers, compliers, and never takers. I propose separate versions of the estimators for contexts in which the parameter of the randomization process is unspecified, which I intend for use with natural experiments with virtual random assignment. I present an empirical application that revisits Angrist and Evans (1998), which examines the impact of virtual random assignment of the sex of the first two children on subsequent fertility. I find that subsequent fertility is much more responsive to the sex mix of the first two children when defiers are allowed.
    JEL: C1 C9 H10 J01
    Date: 2019–03
  15. By: Leopoldo Catania; Roberto Di Mari; Paolo Santucci de Magistris
    Abstract: The tick structure of the financial markets entails that price changes observed at very high frequency are discrete. Departing from this empirical evidence, we develop a new model to describe the dynamic properties of multivariate time series of high frequency price changes, including the high probability of observing no variations (price staleness). We assume the existence of two independent latent/hidden Markov processes determining the dynamic properties of the price changes and the excess probability of the occurrence of zeros. We study the probabilistic properties of the model, which generates a zero-inflated mixture of Skellam distributions, and we develop an EM estimation procedure with a closed-form M step. In the empirical application, we study the joint distribution of the price changes of four assets traded on NYSE. Particular focus is dedicated to the precision of the univariate and multivariate density forecasts, to the quality of the predictions of quantities like the volatility and correlations across assets, and to the possibility of disentangling the different sources of zero price variation as generated by absence of news, microstructural frictions or the offsetting positions taken by traders.
    Keywords: Dynamic Mixtures; Skellam Distribution; Zero-inflated series; EM Algorithm; High frequency prices; Volatility
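    The static building block of such a model is easy to illustrate. The sketch below (our own, with hypothetical function names; the paper's model is dynamic and multivariate) gives the pmf of a zero-inflated Skellam distribution, where the Skellam law is the distribution of the difference of two independent Poisson counts — a natural model for integer tick-size price changes.

```python
from scipy.stats import skellam

def zi_skellam_pmf(k, pi0, mu1, mu2):
    """PMF of a zero-inflated Skellam: with probability pi0 the price is
    stale (a structural zero), otherwise the change k follows
    Skellam(mu1, mu2), the difference of two independent Poissons.
    """
    base = skellam.pmf(k, mu1, mu2)
    return pi0 * (k == 0) + (1 - pi0) * base

# probability of observing no price change when 30% of ticks are stale
p_zero = zi_skellam_pmf(0, pi0=0.3, mu1=0.8, mu2=0.8)
```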
  16. By: Atsushi Inoue; Barbara Rossi
    Abstract: We propose a new approach to analyze economic shocks. Our new procedure identifies economic shocks as exogenous shifts in a function; hence, we call them "functional shocks". We show how to identify such shocks and how to trace their effects in the economy via VARs using a procedure that we call "VARs with functional shocks". Using our new procedure, we address the crucial question of studying the effects of monetary policy by identifying monetary policy shocks as shifts in the whole term structure of government bond yields in a narrow window of time around monetary policy announcements. Our identification sheds new light on the effects of monetary policy shocks, both in conventional and unconventional periods, and shows that traditional identification procedures may miss important effects. We find that, overall, unconventional monetary policy has similar effects to conventional expansionary monetary policy, leading to an increase in both output growth and inflation; the response is hump-shaped, peaking around one year to one year and a half after the shock. The new procedure has the advantage of identifying monetary policy shocks during both conventional and unconventional monetary policy periods in a unified manner and can be applied more generally to other economic shocks.
    Keywords: Shock identification, VARs, zero-lower bound, unconventional monetary policy, forward guidance.
    JEL: E4 E52 E21 H31 I3 D1
    Date: 2018–10
  17. By: Barbara Rossi
    Abstract: How should one identify monetary policy shocks in unconventional times? Are unconventional monetary policies as effective as conventional ones? And has the transmission mechanism of monetary policy changed in the zero lower bound era? The recent financial crisis led central banks to lower their interest rates in order to stimulate the economy, and interest rates in many advanced economies hit the zero lower bound. As a consequence, the traditional approach to the identification and the estimation of monetary policy faces new econometric challenges in unconventional times. This article aims at providing a broad overview of the recent literature on the identification of unconventional monetary policy shocks and the estimation of their effects on both financial as well as macroeconomic variables. Given that the prospects of slow recoveries and long periods of very low interest rates are becoming the norm, many economists believe that we are likely to face unconventional monetary policy measures often in the future. Hence, these are potentially very important issues in practice.
    Keywords: Shock identification, VARs, zero lower bound, unconventional monetary policy, monetary policy, external instruments, forward guidance.
    JEL: E4 E52 E21 H31 I3 D1
    Date: 2018–01
  18. By: Zura Kakushadze; Willie Yu
    Abstract: We give an explicit algorithm and source code for constructing risk models based on machine learning techniques. The resultant covariance matrices are not factor models. Based on empirical backtests, we compare the performance of these machine learning risk models to other constructions, including statistical risk models, risk models based on fundamental industry classifications, and also those utilizing multilevel clustering based industry classifications.
    Date: 2019–03
  19. By: Yiru Wang; Barbara Rossi
    Abstract: In this article, we review Granger-causality tests robust to the presence of instabilities in a Vector Autoregressive framework. We also introduce the gcrobustvar command, which implements the procedure in Stata. In the presence of instabilities, the robust Granger-causality test is more powerful than the traditional Granger-causality test.
    Keywords: gcrobustvar, Granger-causality, VAR, instability, structural breaks, local projections
    Date: 2019–01
  20. By: Fisher, Mark (Federal Reserve Bank of Atlanta); Jensen, Mark J. (Federal Reserve Bank of Atlanta); Tkac, Paula A. (Federal Reserve Bank of Atlanta)
    Abstract: In this paper, we use Bayesian nonparametric learning to estimate the skill of actively managed mutual funds and also to estimate the population distribution for this skill. A nonparametric hierarchical prior, where the hyperprior distribution is unknown and modeled with a Dirichlet process prior, is used for the skill parameter, with its posterior predictive distribution being an estimate of the population distribution. Our nonparametric approach is equivalent to an infinitely ordered mixture of normals where we resolve the uncertainty in the mixture order by partitioning the funds into groups according to the group's average ability and variability. Applying our Bayesian nonparametric learning approach to a panel of actively managed, domestic equity funds, we find the population distribution of skill to be fat-tailed, skewed towards higher levels of performance. We also find that it has three distinct modes: a primary mode where the average ability covers the average fees charged by funds, a secondary mode at a performance level where a fund loses money for its investors, and lastly, a minor mode at an exceptionally high skill level.
    Keywords: Bayesian nonparametrics; mutual funds; unsupervised learning
    JEL: C11 C14 G11
    Date: 2019–03–01
  21. By: John Deke; Mariel Finucane
    Abstract: This brief describes an alternative framework for interpreting impact estimates, known as the BAyeSian Interpretation of Estimates (BASIE).
    Keywords: BAyeSian Interpretation, Estimates, BASIE, Statistics, evaluation
  22. By: Duo Qin, Sophie van Huellen, Raghda Elshafie, Yimeng Liu and Thanos Moraitis (Department of Economics, SOAS University of London, UK)
    Abstract: Multiple imputation (MI) techniques are applied to simulate missing wage rates of non-working wives under the missing-at-random (MAR) condition. The assumed selection effect of the labour force participation decision is framed as deviations of the imputed wage rates from MAR. By varying the deviations, we assess the severity of subsequent selection bias in standard human capital models through sensitivity analyses (SA). Our experiments show that the bias remains largely insignificant. While similar findings are possibly attainable through the Heckman procedure, SA under the MI approach provides a more structured and principled approach to assessing selection bias.
    Keywords: wage, labour supply, selection, missing at random, multiple imputation
    JEL: C21 C52 J20 J24
    Date: 2019–01

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.