nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒10‒09
ten papers chosen by
Sune Karlsson
Örebro universitet

  1. Data-driven particle Filters for particle Markov Chain Monte Carlo By Patrick Leung; Catherine S. Forbes; Gael M. Martin; Brendan McCabe
  2. On the Consistency of Bootstrap Testing for a Parameter on the Boundary of the Parameter Space By Giuseppe Cavaliere; Heino Bohn Nielsen; Anders Rahbek
  3. Propensity Score Matching and Subclassification in Observational Studies with Multi-level Treatments By Yang, Shu; Imbens, Guido W.; Cui, Zhanglin; Faries, Douglas E.; Kadziola, Zbigniew
  4. Asymptotic Properties of Approximate Bayesian Computation By D.T. Frazier; G.M. Martin; C.P. Robert; J. Rousseau
  5. Feasible Invertibility Conditions and Maximum Likelihood Estimation for Observation-Driven Models By Francisco Blasques; Paolo Gorgi; Siem Jan Koopman; Olivier Wintenberger
  6. "Multiple-block Dynamic Equicorrelations with Realized Measures, Leverage and Endogeneity" By Yuta Kurose; Yasuhiro Omori
  7. The Drift Burst Hypothesis By Kim Christensen; Roel Oomen; Roberto Renò
  8. Estimating “Gamma” for Tail-hedge Discount Rates When Project Returns Are Co-integrated with GDP By Hultkrantz, Lars; Mantalos, Panagiotis
  9. Co-integration rank determination in partial systems using information criteria By Giuseppe Cavaliere; Luca De Angelis; Luca Fanelli
  10. Recession forecasting using Bayesian classification By Davig, Troy A.; Smalter Hall, Aaron

  1. By: Patrick Leung; Catherine S. Forbes; Gael M. Martin; Brendan McCabe
    Abstract: This paper proposes new automated proposal distributions for sequential Monte Carlo algorithms, including particle filtering and related sequential importance sampling methods. The weights for these proposal distributions are easily established, as is the unbiasedness property of the resultant likelihood estimators, so that the methods may be used within a particle Markov chain Monte Carlo (PMCMC) inferential setting. Simulation exercises, based on a range of state space models, are used to demonstrate the linkage between the signal-to-noise ratio of the system and the performance of the new particle filters, in comparison with existing filters. In particular, we demonstrate that one of our proposed filters performs well in a high signal-to-noise ratio setting, that is, when the observation is informative in identifying the location of the unobserved state. A second filter, deliberately designed to draw proposals that are informed by both the current observation and past states, is shown to work well across a range of signal-to-noise ratios and to be much more robust than the auxiliary particle filter, which is often used as the default choice. We then extend the study to explore the performance of the PMCMC algorithm using the new filters to estimate the likelihood function, once again in comparison with existing alternatives. Taking into consideration robustness to the signal-to-noise ratio, computation time and the efficiency of the chain, the second of the new filters is again found to be the best-performing method. Application of the preferred filter to a stochastic volatility model for weekly Australian/US exchange rate returns completes the paper.
    Keywords: Bayesian inference, non-Gaussian time series, state space models, unbiased likelihood estimation, sequential Monte Carlo
    JEL: C11 C32 C53
    Date: 2016
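The baseline against which such data-driven proposals are judged is the standard bootstrap particle filter, whose "blind" proposal is the state transition density itself and whose likelihood estimator is unbiased. A minimal sketch for a linear Gaussian AR(1) state space model (the model, parameter values and function names are illustrative, not from the paper):

```python
import numpy as np

def bootstrap_particle_filter(y, phi=0.9, sigma_x=1.0, sigma_y=1.0,
                              n_particles=500, rng=None):
    """Bootstrap particle filter for the illustrative model
        state:       x_t = phi * x_{t-1} + sigma_x * w_t
        observation: y_t = x_t + sigma_y * v_t
    Returns an unbiased estimate of the log-likelihood (in levels), the key
    ingredient for particle MCMC. Assumes |phi| < 1."""
    rng = np.random.default_rng(rng)
    n = len(y)
    # initialise particles from the stationary distribution of the state
    x = rng.normal(0.0, sigma_x / np.sqrt(1.0 - phi**2), size=n_particles)
    loglik = 0.0
    for t in range(n):
        # propagate through the transition density (the "blind" proposal)
        x = phi * x + sigma_x * rng.normal(size=n_particles)
        # weight by the observation density N(y_t; x_t, sigma_y^2)
        logw = -0.5 * np.log(2 * np.pi * sigma_y**2) \
               - 0.5 * ((y[t] - x) / sigma_y) ** 2
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())      # accumulate likelihood estimate
        # multinomial resampling
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]
    return loglik
```

The proposals studied in the paper replace the transition-density draw with one informed by the current observation (and, for the second filter, past states), which is what stabilises the weights in high signal-to-noise settings.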
  2. By: Giuseppe Cavaliere (Università di Bologna); Heino Bohn Nielsen (University of Copenhagen); Anders Rahbek (University of Copenhagen)
    Abstract: It is well-known that with a parameter on the boundary of the parameter space, such as in the classic cases of testing for a zero location parameter or no ARCH effects, the classic nonparametric bootstrap - based on unrestricted parameter estimates - leads to inconsistent testing. In contrast, we show here that for the two aforementioned cases a nonparametric bootstrap test based on parameter estimates obtained under the null - referred to as 'restricted bootstrap' - is indeed consistent. While the restricted bootstrap is simple to implement in practice, novel theoretical arguments are required in order to establish consistency. In particular, since the bootstrap is analyzed both under the null hypothesis and under the alternative, non-standard asymptotic expansions are required to deal with parameters on the boundary. Detailed proofs of the asymptotic validity of the restricted bootstrap are given and, for the leading case of testing for no ARCH, a Monte Carlo study demonstrates that the bootstrap quasi-likelihood ratio statistic performs extremely well in terms of empirical size and power for even remarkably small samples, outperforming the standard and bootstrap Lagrange multiplier tests as well as the asymptotic quasi-likelihood ratio test.
    Keywords: Bootstrap; Boundary; ARCH; Location model
    Date: 2016
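The location case can be illustrated in the model y_i = mu + eps_i with parameter space mu >= 0 and H0: mu = 0. A stylised sketch of the restricted idea, i.e. generating bootstrap samples under the null rather than around the unrestricted estimate; the statistic and the recentring device here are illustrative, not the paper's exact construction:

```python
import numpy as np

def boundary_bootstrap_pvalue(y, n_boot=999, rng=None):
    """Bootstrap test of H0: mu = 0 against H1: mu > 0 in the location model
    y_i = mu + eps_i with parameter space mu >= 0.  Illustrative 'restricted'
    scheme: bootstrap samples are generated so that the null holds (residuals
    recentred at the null value 0), not around the estimate max(ybar, 0)."""
    rng = np.random.default_rng(rng)
    n = len(y)
    s = y.std(ddof=1)
    # test statistic: scaled estimator constrained to the boundary mu >= 0
    t_obs = np.sqrt(n) * max(y.mean(), 0.0) / s
    # residuals recentred so the null holds in the bootstrap world
    eps = y - y.mean()
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        yb = rng.choice(eps, size=n, replace=True)
        t_boot[b] = np.sqrt(n) * max(yb.mean(), 0.0) / yb.std(ddof=1)
    return (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)
```

The point of the paper is that this null-imposing scheme is consistent, whereas resampling around the unrestricted boundary estimate is not.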
  3. By: Yang, Shu (Harvard University); Imbens, Guido W. (Stanford University); Cui, Zhanglin (Eli Lilly and Company); Faries, Douglas E. (Eli Lilly and Company); Kadziola, Zbigniew (Eli Lilly and Company)
    Abstract: In this paper, we develop new methods for estimating average treatment effects in observational studies, focusing on settings with more than two treatment levels under unconfoundedness given pre-treatment variables. We emphasize subclassification and matching methods which have been found to be effective in the binary treatment literature and which are among the most popular methods in that setting. Whereas the literature has suggested that these particular propensity-based methods do not naturally extend to the multi-level treatment case, we show, using the concept of weak unconfoundedness, that adjusting for or matching on a scalar function of the pre-treatment variables removes all biases associated with observed pre-treatment variables. We apply the proposed methods to an analysis of the effect of treatments for fibromyalgia. We also carry out a simulation study to assess the finite sample performance of the methods relative to previously proposed methods.
    Date: 2015–12
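To picture the subclassification idea, suppose the generalized propensity score for treatment level t, P(T = t | X), has already been estimated (e.g. by multinomial logit). A hypothetical sketch of stratifying on that scalar score and averaging within strata (names and the stratum-handling shortcut are illustrative):

```python
import numpy as np

def subclassification_mean(y, treat, score_t, t, n_strata=5):
    """Estimate E[Y(t)] by subclassification on the generalized propensity
    score.  y: outcomes; treat: observed treatment level per unit; score_t:
    estimated P(T = t | X) per unit (assumed estimated beforehand); t: the
    treatment level of interest."""
    # strata defined by quantiles of the score over the full sample
    edges = np.quantile(score_t, np.linspace(0, 1, n_strata + 1))
    stratum = np.digitize(score_t, edges[1:-1])
    est, n = 0.0, len(y)
    for s in range(n_strata):
        in_s = stratum == s
        received_t = in_s & (treat == t)
        if not received_t.any():
            continue  # in practice one would widen strata; skipped for brevity
        # mean among units in the stratum that received level t,
        # weighted by the stratum's share of the whole sample
        est += (in_s.sum() / n) * y[received_t].mean()
    return est
```

Under weak unconfoundedness, adjusting on this one scalar per treatment level removes the bias from observed pre-treatment variables, which is the result the paper establishes.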
  4. By: D.T. Frazier; G.M. Martin; C.P. Robert; J. Rousseau
    Abstract: Approximate Bayesian computation (ABC) is becoming an accepted tool for statistical analysis in models with intractable likelihoods. With the initial focus being primarily on the practical import of ABC, exploration of its formal statistical properties has begun to attract more attention. In this paper we consider the asymptotic behaviour of the posterior obtained from ABC and the ensuing posterior mean. We give general results on: (i) the rate of concentration of the ABC posterior on sets containing the true parameter (vector); (ii) the limiting shape of the posterior; and (iii) the asymptotic distribution of the ABC posterior mean. These results hold under given rates for the tolerance used within ABC, mild regularity conditions on the summary statistics, and a condition linked to identification of the true parameters. Important implications of the theoretical results for practitioners of ABC are highlighted.
    Keywords: asymptotic properties, Bayesian inference, Bernstein-von Mises theorem, consistency, likelihood-free methods
    JEL: C11 C15 C18
    Date: 2016
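The simplest member of the ABC family is accept/reject ABC with a fixed tolerance; the paper's asymptotics concern what happens to the accepted draws as that tolerance shrinks with the sample size. A toy sketch (the helper names and the toy model are illustrative):

```python
import numpy as np

def abc_rejection(y_obs, prior_sampler, simulator, summary, tol,
                  n_draws=20000, rng=None):
    """Basic accept/reject ABC: keep prior draws whose simulated summary
    statistics land within `tol` of the observed summaries."""
    rng = np.random.default_rng(rng)
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        s_sim = summary(simulator(theta, rng))
        if np.linalg.norm(s_sim - s_obs) <= tol:
            accepted.append(theta)
    return np.array(accepted)

# toy example: infer the mean of a N(theta, 1) sample from its sample mean
rng = np.random.default_rng(0)
y_obs = rng.normal(2.0, 1.0, size=100)
draws = abc_rejection(
    y_obs,
    prior_sampler=lambda r: r.uniform(-5, 5),
    simulator=lambda th, r: r.normal(th, 1.0, size=100),
    summary=lambda y: np.atleast_1d(y.mean()),
    tol=0.1,
)
```

With an informative summary the accepted draws concentrate around the value matching the observed summary; the paper characterises the rate of that concentration and the limiting shape of the resulting posterior.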
  5. By: Francisco Blasques (VU University Amsterdam, the Netherlands); Paolo Gorgi (VU University Amsterdam, the Netherlands; University of Padua, Italy); Siem Jan Koopman (VU University Amsterdam, the Netherlands; Aarhus University, Denmark); Olivier Wintenberger (University of Copenhagen, Denmark; Sorbonne Universités, UPMC University Paris 06, France)
    Abstract: Invertibility conditions for observation-driven time series models often fail to be guaranteed in empirical applications. As a result, the asymptotic theory of maximum likelihood and quasi-maximum likelihood estimators may be compromised. We derive considerably weaker conditions that can be used in practice to ensure the consistency of the maximum likelihood estimator for a wide class of observation-driven time series models. Our consistency results hold for both correctly specified and misspecified models. The practical relevance of the theory is highlighted in a set of empirical examples. We further obtain an asymptotic test and confidence bounds for the unfeasible “true” invertibility region of the parameter space.
    Keywords: consistency; invertibility; maximum likelihood estimation; observation-driven models; stochastic recurrence equations
    JEL: C13 C32 C58
    Date: 2016–10–06
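A canonical observation-driven model is the GARCH(1,1), whose variance filter is linear in the lagged variance, so the filter forgets an arbitrary initialisation whenever beta < 1; that forgetting property is the practical meaning of invertibility. (The paper's weaker conditions matter for more general, nonlinear filters; this sketch, with placeholder data, only shows the forgetting property and the likelihood being filtered.)

```python
import numpy as np

def garch_filter_loglik(y, omega, alpha, beta, s2_init):
    """Filtered variances and Gaussian log-likelihood for a GARCH(1,1):
        sigma2_t = omega + alpha * y_{t-1}^2 + beta * sigma2_{t-1}"""
    s2 = np.empty(len(y))
    s2[0] = s2_init
    for t in range(1, len(y)):
        s2[t] = omega + alpha * y[t - 1] ** 2 + beta * s2[t - 1]
    loglik = -0.5 * np.sum(np.log(2 * np.pi * s2) + y**2 / s2)
    return s2, loglik

# invertibility in action: two very different initialisations, same filter
rng = np.random.default_rng(0)
y = rng.normal(size=2000) * 1.5   # placeholder data; any series will do
s2_a, _ = garch_filter_loglik(y, 0.1, 0.1, 0.8, s2_init=0.5)
s2_b, _ = garch_filter_loglik(y, 0.1, 0.1, 0.8, s2_init=5.0)
```

The two filtered paths differ by beta^t times the initial gap, so the likelihood is asymptotically unaffected by the initialisation; when invertibility fails, such paths need not converge and the MLE's asymptotic theory is compromised.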
  6. By: Yuta Kurose (Department of Mathematical Sciences, Kwansei Gakuin University); Yasuhiro Omori (Faculty of Economics, The University of Tokyo)
    Abstract: The single equicorrelation structure among several daily asset returns is attractive because it reduces the number of parameters in multivariate stochastic volatility models. However, such an assumption may not be realistic as the number of assets increases, for example, in portfolio optimization. As a solution to this oversimplification, a multiple-block equicorrelation structure is proposed for high dimensional financial time series, where we assume common correlations within a group of asset returns, but allow different correlations for different groups. The realized volatilities and realized correlations are also jointly modelled to obtain stable and accurate estimates of parameters, latent variables and leverage effects. Using a state space representation, we describe an efficient Markov chain Monte Carlo estimation method. Illustrative examples are given using simulated data, and empirical studies using U.S. daily stock returns data show that our proposed model outperforms other competing models in portfolio performance.
    Date: 2016–09
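The block structure itself is easy to picture: given group sizes and a small matrix of within- and between-group correlations, the full correlation matrix is obtained by expanding each block. A static, illustrative sketch (the paper embeds this structure in a dynamic multivariate stochastic volatility model):

```python
import numpy as np

def block_equicorrelation(block_sizes, R_block):
    """Expand a small matrix of group-level correlations into the full
    correlation matrix: assets in group g share the within-group correlation
    R_block[g, g]; assets in distinct groups g, h share R_block[g, h]."""
    groups = np.repeat(np.arange(len(block_sizes)), block_sizes)
    C = R_block[np.ix_(groups, groups)].astype(float)
    np.fill_diagonal(C, 1.0)
    return C
```

With G groups the correlation block is governed by G(G+1)/2 parameters instead of p(p-1)/2, which is the dimension reduction the equicorrelation assumption buys.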
  7. By: Kim Christensen (Aarhus University and CREATES); Roel Oomen (Deutsche Bank AG (London) and London School of Economics & Political Science (LSE) - Department of Statistics); Roberto Renò (Department of Economics, University of Verona)
    Abstract: The Drift Burst Hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent US equity and Treasury flash crashes can be viewed as two high profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude are an expected and regular occurrence in financial markets that can arise through established mechanisms such as feedback trading. At a theoretical level, we show how to build drift bursts into the continuous-time Itô semi-martingale model in such a way that the fundamental arbitrage-free property is preserved. We then develop a non-parametric test statistic that allows for the identification of drift bursts from noisy high-frequency data. We apply this methodology to a comprehensive set of tick data and show that drift bursts form an integral part of the price dynamics across equities, fixed income, currencies and commodities. We find that the majority of identified drift bursts are accompanied by strong price reversals and these can therefore be regarded as “flash crashes” that span brief periods of severe market disruption without any material longer term price impacts.
    Keywords: flash crashes, drift bursts, volatility bursts, nonparametric statistics, reversals
    JEL: G10 C58
    Date: 2016–09–27
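The flavour of the identification problem can be conveyed by a naive local drift-over-volatility t-statistic on high-frequency returns; the paper develops a properly kernel-weighted, noise-robust version with a formal limit theory. A crude illustrative stand-in:

```python
import numpy as np

def drift_burst_stat(returns, window):
    """Rolling local drift-to-volatility t-statistic; large absolute values
    flag candidate drift bursts.  A crude stand-in for the kernel-weighted
    statistic developed in the paper."""
    r = np.asarray(returns, dtype=float)
    out = np.full(len(r), np.nan)
    for t in range(window, len(r)):
        w = r[t - window:t]
        mu = w.mean()          # local drift estimate
        sig = w.std(ddof=1)    # local volatility estimate
        out[t] = np.sqrt(window) * mu / sig
    return out
```

In quiet periods the statistic behaves like a standard normal; during a locally explosive trend it diverges, which is how drift bursts are separated from ordinary volatility.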
  8. By: Hultkrantz, Lars (Örebro University School of Business); Mantalos, Panagiotis (Department of Economics and Statistics, School of Business and Economics Linnaeus University)
    Abstract: Weitzman (2012, 2013) has suggested a method for calculating social discount rates for long-term investments when project returns are covariant with consumption or other macroeconomic variables, so-called “tail-hedge discounting”. This method relies on a parameter called “real project gamma” that measures the proportion of project returns that is covariant with the macroeconomic variable. We suggest two approaches for estimating this gamma when the project returns and the macroeconomic variable are co-integrated: first, Weitzman’s (2012) own approach; second, a simple data transformation that keeps gamma within the zero-to-one interval. In a Monte Carlo study we show that the approach based on standardized series is more accurate and robust under different data-generating processes. Both approaches are applied to Swedish annual time-series data from 1950 to 2011 on rail freight (a measure of returns from rail investments) and GDP.
    Keywords: GDP
    JEL: D61 D90 G11 H43 R42
    Date: 2016–09–30
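The standardisation idea is simple to state: once both series are scaled to unit variance, the OLS slope of project returns on the macroeconomic variable equals their sample correlation, so the estimate cannot leave [-1, 1] (and stays in [0, 1] whenever the co-movement is positive). A sketch of that transformation, not the paper's exact estimator:

```python
import numpy as np

def gamma_standardized(project_returns, macro_series):
    """Estimate 'real project gamma' as the OLS slope after standardising
    both series to zero mean and unit variance; the slope then equals the
    sample correlation and is confined to [-1, 1]."""
    x = (macro_series - macro_series.mean()) / macro_series.std(ddof=1)
    y = (project_returns - project_returns.mean()) / project_returns.std(ddof=1)
    return float(np.sum(x * y) / np.sum(x * x))
```

An unstandardised regression slope, by contrast, can wander outside the unit interval in finite samples, which is the problem the transformation is designed to avoid.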
  9. By: Giuseppe Cavaliere (Università di Bologna); Luca De Angelis (Università di Bologna); Luca Fanelli
    Abstract: We investigate the asymptotic and finite sample properties of the most widely used information criteria for co-integration rank determination in ‘partial’ systems, i.e. in co-integrated Vector Autoregressive (VAR) models where a subset of variables of interest is modeled conditional on another subset of variables. The asymptotic properties of the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Hannan-Quinn Information Criterion (HQC) are established, and consistency of BIC and HQC is proved. Notably, consistency of BIC and HQC is robust to violations of the hypothesis of weak exogeneity of the conditioning variables with respect to the co-integration parameters. More precisely, BIC and HQC recover the true co-integration rank from the partial system analysis also when the conditional model does not convey all information about the co-integration parameters. This result opens up interesting possibilities for practitioners who can determine the co-integration rank in partial systems without being concerned with the weak exogeneity of the conditioning variables. A Monte Carlo experiment which considers large systems as the data generating process shows that BIC and HQC applied in partial systems perform reasonably well in small samples and comparatively better than ‘traditional’ approaches for co-integration rank determination. We further show the usefulness of our approach and the benefits of the conditional system analysis to co-integration rank determination with two empirical illustrations, both based on the estimation of VAR systems on U.S. quarterly data. Overall, our analysis clearly shows that the gains of combining information criteria with partial systems analysis are indisputable.
    Keywords: Information criteria, Co-integration, Partial system, Conditional model, VAR
    Date: 2016
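Given the eigenvalues from the Johansen reduced-rank regression, rank selection by information criteria is a one-line comparison. A sketch using the standard eigenvalue form of the maximised likelihood and a stylised parameter count (the partial-system likelihood and the exact counts in the paper differ; this only shows the mechanics):

```python
import numpy as np

def ic_rank(eigvals, T, p, criterion="BIC"):
    """Select the co-integration rank by minimising an information criterion,
    given the Johansen eigenvalues lambda_1 >= ... >= lambda_p (assumed
    already computed from the reduced-rank regression) and sample size T."""
    c = {"AIC": 2.0, "HQC": 2.0 * np.log(np.log(T)), "BIC": np.log(T)}[criterion]
    best_r, best_ic = 0, np.inf
    for r in range(p + 1):
        # -2 * maximised log-likelihood at rank r, up to a rank-free constant
        neg2ll = T * np.sum(np.log(1.0 - eigvals[:r]))
        n_par = r * (2 * p - r)   # stylised free-parameter count in Pi = a*b'
        ic = neg2ll + c * n_par
        if ic < best_ic:
            best_r, best_ic = r, ic
    return best_r
```

The consistency result for BIC and HQC says this argmin converges to the true rank even from a partial system, with or without weak exogeneity of the conditioning variables.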
  10. By: Davig, Troy A. (Federal Reserve Bank of Kansas City); Smalter Hall, Aaron (Federal Reserve Bank of Kansas City)
    Abstract: The authors demonstrate the use of a Naïve Bayes model as a recession forecasting tool. The approach has close connections to Markov-switching models and to logistic regression, but also important differences. In contrast to Markov-switching models, Naïve Bayes treats National Bureau of Economic Research business cycle turning points as data rather than as hidden states to be inferred by the model. Although Naïve Bayes and logistic regression are asymptotically equivalent under certain distributional assumptions, those assumptions do not hold for business cycle data.
    Keywords: Forecasting; Naïve Bayes model; Recession
    JEL: C11 C5 E32 E37
    Date: 2016–08–01
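A Gaussian Naïve Bayes classifier, with class-conditional features treated as independent normals and the NBER recession labels entering as observed data, can be written from scratch in a few lines. A sketch (the feature choice and class labels here are illustrative):

```python
import numpy as np

class GaussianNaiveBayes:
    """Gaussian Naive Bayes: within each class, features are modelled as
    independent normals; class labels (e.g. recession = 1, expansion = 0)
    are observed data, not hidden states."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors = np.array([(y == c).mean() for c in self.classes])
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) for c in self.classes])
        return self

    def predict_proba(self, X):
        # log p(c) + sum_j log N(x_j; mu_cj, var_cj), normalised over classes
        logp = np.log(self.priors) - 0.5 * np.sum(
            np.log(2 * np.pi * self.var)
            + (X[:, None, :] - self.mu) ** 2 / self.var,
            axis=2)
        logp -= logp.max(axis=1, keepdims=True)
        p = np.exp(logp)
        return p / p.sum(axis=1, keepdims=True)
```

Fitting reduces to class-wise means and variances, which is what makes the method fast and transparent relative to fitting a Markov-switching model; the columns of `predict_proba` follow the sorted class labels.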

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.