nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒05‒28
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Auxiliary Likelihood-Based Approximate Bayesian Computation in State Space Models By Gael M. Martin; Brendan P.M. McCabe; David T. Frazier; Worapree Maneesoonthorn; Christian P. Robert
  2. Inference in partially identified models with many moment inequalities using Lasso By Federico A. Bugni; Mehmet Caner; Anders Bredahl Kock; Soumendra Lahiri
  3. Mean-correction and Higher Order Moments for a Stochastic Volatility Model with Correlated Errors By Sujay Mukhoti; Pritam Ranjan
  4. Comparing Predictive Accuracy under Long Memory - With an Application to Volatility Forecasting By Robinson Kruse; Christian Leschinski; Michael Will
  5. Measuring Segregation on Small Units: A Partial Identification Analysis By Xavier D'Haultfoeuille; Roland Rathelot
  6. Removing Specification Errors from the Usual Formulation of Binary Choice Models* By P. A. V. B. Swamy; I-Lok Chang; Jatinder S. Mehta; William H. Greene; Stephen G. Hall; George S. Tavlas
  7. Testing for Deterministic Seasonality in Mixed-Frequency VARs By Tomás del Barrio Castro; Alain Hecq
  8. Estimation of Hierarchical Archimedean Copulas as a Shortest Path Problem By Matsypura, Dmytro; Neo, Emily; Prokhorov, Artem
  9. Asymptotic Eigenvalue Distribution of Wishart Matrices whose Components are not Independently and Identically Distributed By Takashi Shinzato
  10. A Nonparametric Approach to Identifying a Subset of Forecasters that Outperforms the Simple Average By Constantin Bürgi; Tara M. Sinclair
  11. Learning Time-Varying Forecast Combinations By Antoine Mandel; Amir Sani
  12. A New Method for Working With Sign Restrictions in SVARs By S Ouliaris; A R Pagan
  13. Estimation and filtering of nonlinear MS-DSGE models By Sergey Ivashchenko

  1. By: Gael M. Martin; Brendan P.M. McCabe; David T. Frazier; Worapree Maneesoonthorn; Christian P. Robert
    Abstract: A new approach to inference in state space models is proposed, using approximate Bayesian computation (ABC). ABC avoids evaluation of an intractable likelihood by matching summary statistics computed from observed data with statistics computed from data simulated from the true process, based on parameter draws from the prior. Draws that produce a ‘match’ between observed and simulated summaries are retained and used to estimate the inaccessible posterior. As exact inference is typically not possible in the state space setting, we pursue summaries via the maximization of an auxiliary likelihood function. We derive conditions under which this auxiliary likelihood-based approach achieves Bayesian consistency and show that – in a precise limiting sense – results yielded by the auxiliary maximum likelihood estimator are replicated by the auxiliary score. Particular attention is given to a structure in which the state variable is driven by a continuous time process, with exact inference typically infeasible in this case due to intractable transitions. Two models for continuous time stochastic volatility are used for illustration, with auxiliary likelihoods constructed by applying computationally efficient filtering methods to discrete time approximations. The extent to which the conditions for consistency are satisfied is demonstrated in both cases, and the accuracy of the proposed technique when applied to a square root volatility model is also demonstrated numerically. In multiple parameter settings, a separate treatment of each parameter, based on integrated likelihood techniques, is advocated as a way of avoiding the curse of dimensionality associated with ABC methods.
    Keywords: Likelihood-free methods, latent diffusion models, Bayesian consistency, asymptotic sufficiency, unscented Kalman filter, stochastic volatility
    JEL: C11 C22 C58
    Date: 2016
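The rejection step at the heart of ABC can be sketched on a toy AR(1) model. This is only an illustration of generic summary-matching ABC, not the paper's auxiliary-likelihood construction: the AR(1) process, the crude autocorrelation/variance summaries, the uniform prior, and the tolerance are all assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, rng):
    # Toy latent-dynamics process standing in for a state space model
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

def summaries(y):
    # Crude stand-ins for the paper's auxiliary-likelihood summaries:
    # lag-1 autocorrelation and sample variance
    return np.array([np.corrcoef(y[:-1], y[1:])[0, 1], y.var()])

n = 300
y_obs = simulate_ar1(0.6, n, rng)      # "observed" data, true phi = 0.6
s_obs = summaries(y_obs)

# ABC rejection: draw phi from the prior, simulate, and retain draws
# whose summaries land within tolerance eps of the observed summaries.
eps, kept = 0.25, []
for _ in range(3000):
    phi = rng.uniform(-0.95, 0.95)
    if np.linalg.norm(summaries(simulate_ar1(phi, n, rng)) - s_obs) < eps:
        kept.append(phi)

posterior_mean = float(np.mean(kept))  # estimate of the ABC posterior mean
```

In the paper the summaries instead come from maximizing an auxiliary likelihood (or from the auxiliary score), which is what delivers the consistency properties the abstract describes.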
  2. By: Federico A. Bugni (Duke University); Mehmet Caner (Ohio State University); Anders Bredahl Kock (Aarhus University and CREATES); Soumendra Lahiri (North Carolina State University)
    Abstract: This paper considers the problem of inference in a partially identified moment (in)equality model with possibly many moment inequalities. Our contribution is to propose a novel two-step inference method based on the combination of two ideas. On the one hand, our test statistic and critical values are based on those proposed by Chernozhukov et al. (2014c) (CCK14, hereafter). On the other hand, we propose a new first-step selection procedure based on the Lasso. Some of the advantages of our two-step inference method are that (i) it can be used to conduct hypothesis tests and to construct confidence sets for the true parameter value that are uniformly valid, both in the underlying parameter θ and the distribution of the data; (ii) our test is asymptotically optimal in a minimax sense; and (iii) our method has better power than CCK14 in large parts of the parameter space, both in theory and in simulations. Finally, we show that the Lasso-based first step can be implemented with a thresholding least squares procedure that makes it extremely simple to compute.
    Keywords: Many moment inequalities, self-normalizing sum, multiplier bootstrap, empirical bootstrap, Lasso, inequality selection
    JEL: C13 C23 C26
    Date: 2016–04–26
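The "thresholding least squares" idea behind the Lasso first step can be sketched via componentwise soft-thresholding, which is the closed-form Lasso solution in the orthonormal-design special case. The moment values and penalty level below are toy numbers, and this is only a schematic of the thresholding idea, not the paper's exact first-step procedure.

```python
import numpy as np

def soft_threshold(x, lam):
    # Componentwise soft-thresholding: the closed-form Lasso solution
    # in the orthonormal-design special case.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Schematic moment selection: standardized sample moments (toy numbers);
# moments shrunk exactly to zero are treated as slack inequalities and
# dropped before the second-step test.
moments = np.array([0.1, -2.5, 0.3, 4.0, -0.05])
selected = soft_threshold(moments, lam=1.0) != 0
```

Only the two moments whose standardized magnitude exceeds the penalty survive the shrinkage, which is what makes the first step cheap to compute.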
  3. By: Sujay Mukhoti; Pritam Ranjan
    Abstract: In an efficient stock market, the log-returns and their time-dependent variances are often jointly modelled by stochastic volatility models (SVMs). Many SVMs assume that errors in log-return and latent volatility process are uncorrelated, which is unrealistic. It turns out that if a non-zero correlation is included in the SVM (e.g., Shephard (2005)), then the expected log-return at time t conditional on the past returns is non-zero, which is not a desirable feature of an efficient stock market. In this paper, we propose a mean-correction for such an SVM for discrete-time returns with non-zero correlation. We also find closed form analytical expressions for higher moments of log-return and its lead-lag correlations with the volatility process. We compare the performance of the proposed and classical SVMs on S&P 500 index returns obtained from NYSE.
    Date: 2016–05
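The nonzero-mean phenomenon the abstract describes can be seen in a stripped-down special case: iid log-volatility (an assumed phi = 0 case chosen so the mean has a simple closed form; it is not the paper's model, and the correction below is for this toy case only).

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, mu, sig_h = 200_000, -0.5, -1.0, 0.5

# Correlated shock pairs: eps drives returns, eta drives log-volatility.
eps = rng.standard_normal(n)
eta = rho * eps + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)

# Stripped-down SV model with contemporaneous correlation and iid
# log-volatility h.
h = mu + sig_h * eta
y = np.exp(h / 2) * eps

# In this special case E[y] = exp(mu/2) * (rho*sig_h/2) * exp(sig_h^2/8),
# which is nonzero whenever rho != 0 -- the undesirable feature that a
# mean-correction removes.
mean_theory = np.exp(mu / 2) * (rho * sig_h / 2) * np.exp(sig_h ** 2 / 8)
y_corrected = y - mean_theory          # mean-corrected returns
```

With rho < 0 (the usual leverage sign) the uncorrected returns have a strictly negative mean, and subtracting the closed-form term re-centres them at zero.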
  4. By: Robinson Kruse (Rijksuniversiteit Groningen and CREATES); Christian Leschinski (Leibniz University Hannover); Michael Will (Leibniz University Hannover)
    Abstract: This paper extends the popular Diebold-Mariano test to situations when the forecast error loss differential exhibits long memory. It is shown that this situation can arise frequently, since long memory can be transmitted from forecasts and the forecast objective to forecast error loss differentials. The nature of this transmission mainly depends on the (un)biasedness of the forecasts and whether the involved series share common long memory. Further results show that the conventional Diebold-Mariano test is invalidated under these circumstances. Robust statistics based on a memory and autocorrelation consistent estimator and an extended fixed-bandwidth approach are considered. The subsequent Monte Carlo study provides a novel comparison of these robust statistics. As empirical applications, we conduct forecast comparison tests for the realized volatility of the Standard and Poor's 500 index among recent extensions of the heterogeneous autoregressive model. While we find that forecasts improve significantly if jumps in the log-price process are considered separately from continuous components, improvements achieved by the inclusion of implied volatility turn out to be insignificant in most situations.
    Keywords: Equal Predictive Ability, Long Memory, Diebold-Mariano Test, Long-run Variance Estimation, Realized Volatility
    JEL: C22 C52 C53
    Date: 2016–05–19
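For reference, the conventional (short-memory) Diebold-Mariano statistic that the paper shows is invalidated under long memory can be sketched as follows; the toy data, the squared-error loss, and the rectangular-kernel variance are assumptions of this sketch.

```python
import numpy as np

def dm_stat(e1, e2, h=1):
    # Conventional short-memory Diebold-Mariano statistic under
    # squared-error loss; asymptotically standard normal under equal
    # predictive accuracy when the loss differential is short-memory.
    d = e1 ** 2 - e2 ** 2                  # loss differential
    n = d.size
    dbar = d.mean()
    lrv = np.mean((d - dbar) ** 2)         # long-run variance estimate,
    for k in range(1, h):                  # rectangular kernel up to h-1
        lrv += 2 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    return dbar / np.sqrt(lrv / n)

# Toy comparison: forecaster 1 is unbiased, forecaster 2 is biased by 0.5.
rng = np.random.default_rng(2)
y = rng.standard_normal(1000)
stat = dm_stat(y - 0.0, y - 0.5)           # strongly negative: 1 wins
```

Under long memory in d, this t-type statistic no longer has a standard normal limit, which motivates the memory and autocorrelation consistent variants studied in the paper.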
  5. By: Xavier D'Haultfoeuille (Centre de Recherche en Économie et Statistique (CREST)); Roland Rathelot (University of Warwick)
    Abstract: We consider the issue of measuring segregation in a population of small units, considering establishments in our application. Each establishment may have a different probability to hire an individual from the minority group. We define segregation indices as inequality indices on these unobserved, random probabilities. Because these probabilities are measured with error by proportions, standard estimators are inconsistent. We model this problem as a nonparametric binomial mixture. Under this testable assumption and conditions satisfied by standard segregation indices, such indices are partially identified and sharp bounds can be easily obtained by an optimization over a low dimensional space. We also develop bootstrap confidence intervals and a test of the binomial mixture model. Finally, we apply our method to measure the segregation of foreigners in small French firms.
    Keywords: segregation, small units, partial identification
    JEL: C13 C14 J71
    Date: 2016–05
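Why plug-in segregation indices fail on small units can be seen in a toy simulation. The variance-of-proportions "index", the unit size, and the common hiring probability below are all assumptions of this sketch, chosen so that the true index is exactly zero.

```python
import numpy as np

rng = np.random.default_rng(3)

# 5,000 small units (establishments), each hiring k = 10 workers.
# Every unit has the SAME minority-hiring probability, so there is no
# segregation and a variance-based index should equal zero.
units, k, p_true = 5000, 10, 0.3
minority = rng.binomial(k, p_true, size=units)
p_hat = minority / k

# Naive plug-in index: variance of observed proportions. It is strictly
# positive purely because of binomial sampling noise in small units.
naive_index = p_hat.var()

# The binomial noise contributes p(1-p)/k, which is exactly what the
# naive index picks up here.
expected_noise = p_true * (1 - p_true) / k
```

The plug-in estimator reports spurious segregation of about p(1-p)/k even under perfect mixing, which is the inconsistency that the paper's binomial-mixture bounds are designed to handle.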
  6. By: P. A. V. B. Swamy; I-Lok Chang; Jatinder S. Mehta; William H. Greene; Stephen G. Hall; George S. Tavlas
    Abstract: We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.
  7. By: Tomás del Barrio Castro (Universitat de les Illes Balears); Alain Hecq (Maastricht University)
    Abstract: This paper investigates the presence of deterministic seasonal features within a mixed frequency vector autoregressive model. A strategy based on Wald tests is proposed.
    Keywords: deterministic seasonal features, mixed frequency VARs
    JEL: C32
    Date: 2016
  8. By: Matsypura, Dmytro; Neo, Emily; Prokhorov, Artem
    Abstract: We formulate the problem of finding and estimating the optimal hierarchical Archimedean copula as an amended shortest path problem. The standard network flow problem is amended by certain constraints specific to copulas, which limit scalability of the problem. However, we show in dimensions as high as twenty that the new approach dominates the alternatives which usually require recursive estimation or full enumeration.
    Keywords: network flow problem, copulas
    Date: 2016–04–16
  9. By: Takashi Shinzato
    Abstract: In the present work, eigenvalue distributions defined by a random rectangular matrix whose components are neither independently nor identically distributed are analyzed using replica analysis and belief propagation. In particular, we consider the case in which the components are independently but not identically distributed; for example, only the components in each row or in each column may be identically distributed. We also consider the more general case in which the components are correlated with one another. We use the replica approach while making only weak assumptions in order to determine the asymptotic eigenvalue distribution and to derive an algorithm for doing so, based on belief propagation. One of our findings supports the results obtained from Feynman diagrams. We present the results of several numerical experiments that validate our proposed methods.
    Date: 2016–05
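The iid baseline that the paper generalizes is the classical Wishart ensemble, whose asymptotic spectrum follows the Marchenko-Pastur law; a quick simulation check of that baseline (dimensions and seed are arbitrary choices here) looks like this.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 2000, 500
q = p / n                                  # aspect ratio

# Sample covariance of iid standard-normal data: the classical Wishart
# baseline. The paper relaxes the iid assumption (row/column-wise
# distributions, correlated components) via replica analysis and
# belief propagation.
X = rng.standard_normal((n, p))
eigvals = np.linalg.eigvalsh(X.T @ X / n)

# Marchenko-Pastur support edges for the iid case
lam_minus = (1 - np.sqrt(q)) ** 2
lam_plus = (1 + np.sqrt(q)) ** 2
```

For finite n, p the empirical eigenvalues concentrate on [lam_minus, lam_plus] up to edge fluctuations; under the non-iid structures the paper studies, the limiting support and density change, which is what the replica/belief-propagation machinery computes.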
  10. By: Constantin Bürgi (The George Washington University); Tara M. Sinclair (The George Washington University)
    Abstract: Empirical studies in the forecast combination literature have shown that it is notoriously difficult to improve upon the simple average despite the availability of optimal combination weights. In particular, historical performance-based combination approaches do not select forecasters that improve upon the simple average going forward. This paper shows that this is due to the high correlation among forecasters, which only by chance causes some individuals to have lower root mean squared errors (RMSE) than the simple average. We introduce a new nonparametric approach to eliminate forecasters who perform well based purely on chance as well as poor performers. This leaves a subset of forecasters with better performance in subsequent periods. It improves upon the simple average in the SPF for bond yields where some forecasters may be more likely to have specialized knowledge.
    Keywords: Forecast combination; Forecast evaluation; Multiple model comparisons; Real-time data; Survey of Professional Forecasters
    JEL: C22 C52 C53
    Date: 2015–12
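The "chance winners" mechanism can be reproduced in a small simulation: when forecast errors are highly correlated, a few individuals beat the simple average in-sample by luck, and the advantage does not persist. The one-factor error structure, correlation level, and sample sizes are assumptions of this sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
T, m, corr = 50, 30, 0.95

def make_errors(T):
    # Highly correlated forecast errors: one common shock plus a small
    # idiosyncratic part (the correlation level is an assumption here).
    common = rng.standard_normal(T)
    idio = rng.standard_normal((T, m))
    return np.sqrt(corr) * common[:, None] + np.sqrt(1 - corr) * idio

e_in, e_out = make_errors(T), make_errors(T)

rmse_in = np.sqrt((e_in ** 2).mean(axis=0))            # individual RMSEs
rmse_avg_in = np.sqrt((e_in.mean(axis=1) ** 2).mean())  # simple average
winners = rmse_in < rmse_avg_in                         # in-sample winners

# Out of sample, the same individuals rarely keep beating the average.
rmse_out = np.sqrt((e_out ** 2).mean(axis=0))
rmse_avg_out = np.sqrt((e_out.mean(axis=1) ** 2).mean())
persistence = (rmse_out[winners] < rmse_avg_out).mean() if winners.any() else 0.0
```

The simple average beats the typical forecaster in both samples, while in-sample winners are essentially a random draw; the paper's nonparametric procedure is designed to separate such lucky winners from genuinely skilled forecasters.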
  11. By: Antoine Mandel (Centre d'Economie de la Sorbonne - Paris School of Economics); Amir Sani (Centre d'Economie de la Sorbonne - Paris School of Economics)
    Abstract: Non-parametric forecast combination methods choose a set of static weights to combine over candidate forecasts as opposed to traditional forecasting approaches, such as ordinary least squares, that combine over information (e.g. exogenous variables). While they are robust to noise, structural breaks, inconsistent predictors and changing dynamics in the target variable, sophisticated combination methods fail to outperform the simple mean. Time-varying weights have been suggested as a way forward. Here we address the challenge to “develop methods better geared to the intermittent and evolving nature of predictive relations” in Stock and Watson (2001) and propose a data-driven machine learning approach to learn time-varying forecast combinations for output, inflation or any macroeconomic time series of interest. Further, the proposed procedure “hedges” combination weights against poor performance relative to the mean, while optimizing weights to minimize the performance gap to the best candidate forecast in hindsight. Theoretical results are reported along with empirical performance on a standard macroeconomic dataset for predicting output and inflation.
    Keywords: Forecast combinations; Machine Learning; Econometrics; Forecasting; Forecast combination Puzzle
    JEL: C71 D85
    Date: 2016–04
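One generic way to learn time-varying combination weights of the kind described above is a multiplicative-weights ("Hedge"-style) update with a fixed-share mixing step; this is a standard online-learning scheme, not necessarily the authors' algorithm, and the break structure and tuning constants below are assumptions of the toy.

```python
import numpy as np

rng = np.random.default_rng(6)
T, m, eta, alpha = 300, 3, 0.5, 0.05

# Target with a structural break at mid-sample: expert 0 is best early,
# expert 1 is best late, expert 2 is pure noise.
y = np.concatenate([np.zeros(T // 2), np.ones(T - T // 2)])
forecasts = np.column_stack([
    np.zeros(T),             # expert 0: always predicts 0
    np.ones(T),              # expert 1: always predicts 1
    rng.standard_normal(T),  # expert 2: noise
])

# Weights shrink exponentially in each expert's squared-error loss; the
# small uniform ("fixed share") mixing keeps all weights alive so the
# combination can re-adapt after the break.
w = np.full(m, 1.0 / m)
losses = np.zeros(T)
for t in range(T):
    pred = w @ forecasts[t]
    losses[t] = (pred - y[t]) ** 2
    w = w * np.exp(-eta * (forecasts[t] - y[t]) ** 2)
    w = w / w.sum()
    w = (1 - alpha) * w + alpha / m        # fixed-share mixing

avg_loss_hedge = losses.mean()
avg_loss_mean = ((forecasts.mean(axis=1) - y) ** 2).mean()
```

Because the weights track whichever expert is currently best, the combination recovers within a few periods of the break and ends with a far lower average loss than the static simple mean.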
  12. By: S Ouliaris; A R Pagan (UniSyd)
    Abstract: Structural VARs are used to compute impulse responses to shocks. One problem that has arisen involves the information needed to perform this task, i.e. how are the shocks to be separated into those representing technology, monetary effects, etc.? Increasingly, the signs of impulse responses are used for this task. However, it is often desirable to impose some parametric assumption as well, e.g. that monetary shocks have no long-run impact on output. Existing methods for combining sign and parametric restrictions are not well developed. In this paper we provide a relatively simple way to allow for these combinations and show how it works in a number of different contexts.
    Keywords: VAR
    Date: 2015–05–11
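The standard pure-sign-restriction step that the paper starts from can be sketched for a bivariate system: rotate the Cholesky factor of the reduced-form covariance by Haar-uniform orthogonal matrices and keep draws whose impact responses have the required signs. The covariance matrix and the sign pattern are toy assumptions here, and the paper's actual contribution, combining such sign restrictions with parametric (e.g. long-run) restrictions, is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Reduced-form residual covariance of a bivariate VAR (assumed known
# here; in practice it is estimated from the VAR residuals).
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
P = np.linalg.cholesky(Sigma)

# Accept/reject over random rotations: keep impact matrices whose first
# column (the labelled shock) raises both variables on impact.
accepted = []
for _ in range(1000):
    A = rng.standard_normal((2, 2))
    Q, R = np.linalg.qr(A)
    Q = Q @ np.diag(np.sign(np.diag(R)))   # make the draw Haar-uniform
    B = P @ Q                              # candidate impact matrix
    if (B[:, 0] > 0).all():
        accepted.append(B)

n_accepted = len(accepted)
```

Every accepted B reproduces the reduced-form covariance exactly (B @ B.T == Sigma), so the sign restrictions select a set of observationally equivalent structural models rather than a point-identified one.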
  13. By: Sergey Ivashchenko (National Research University Higher School of Economics)
    Abstract: This article suggests and compares the properties of several nonlinear Markov-switching filters. Two are sigma-point filters: the Markov switching central difference Kalman filter (MSCDKF) and MSCDKFA. Two are Gaussian-assumed filters: the Markov switching quadratic Kalman filter (MSQKF) and MSQKFA. A small-scale financial MS-DSGE model is used for tests. MSQKF greatly outperforms the other filters in terms of computational costs. It is also the first- or second-best according to most tests of filtering quality (including the quality of quasi-maximum likelihood estimation based on the filter, and the RMSE and LPS of unobserved variables).
    Keywords: regime switching, second-order approximation, non-linear MS-DSGE estimation, MSQKF, MSCDKF
    JEL: C13 C32 E32
    Date: 2016

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.