nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒06‒11
twelve papers chosen by
Sune Karlsson
Örebro University

  1. Indirect likelihood inference By Michael Creel; Dennis Kristensen
  2. Pointwise adaptive estimation for quantile regression By Markus Reiß; Yves Rozenholc; Charles A. Cuenod
  3. Fixed Effects Estimation in Panel Nonlinear Fractional Response Models By Xiaoming Li
  4. Regime-Switching Cointegration By Markus Jochmann; Gary Koop
  5. Inference on Impulse Response Functions in Structural VAR Models By Inoue, Atsushi; Kilian, Lutz
  6. Estimation of Zenga's new index of economic inequality in heavy tailed populations By Francesca, Greselin; Leo, Pasquazzi
  7. Improving Real-time Estimates of Output Gaps and Inflation Trends with Multiple-vintage Models By Michael P. Clements; Ana Beatriz Galvão
  8. The near-extreme density of intraday log-returns By Mauro Politi; Nicolas Millot; Anirban Chakraborti
  9. News Shocks or Correlated Sunspots? An Observational Equivalence Result in Linear Rational Expectations Models By Marco M. Sorge
  10. The Regression Tournament: A Novel Approach to Prediction Model Assessment By Adi Schnytzer; Janez Šušteršič
  11. Modelling Breaks and Clusters in the Steady States of Macroeconomic Variables By Gary Koop; Joshua Chan
  12. Ranking Multivariate GARCH Models by Problem Dimension: An Empirical Evaluation By Massimiliano Caporin; Michael McAleer

  1. By: Michael Creel; Dennis Kristensen
    Abstract: Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
    Keywords: indirect inference; maximum-likelihood; simulation-based
    JEL: C13 C14 C15 C33
    Date: 2011–05–19
    URL: http://d.repec.org/n?u=RePEc:aub:autbar:874.11&r=ecm
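    As a toy illustration of the simulated indirect likelihood idea, the sketch below estimates a normal location-scale model by maximizing a kernel-density estimate of the likelihood of Zn. The model, the choice of Zn (sample mean and variance), the kernel smoother, and the grid search are all illustrative assumptions, not the authors' implementation.

      # Simulated maximum indirect likelihood (SMIL) sketch, illustrative only.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)

      def statistic(y):
          # Zn: a low-dimensional summary statistic of the sample.
          return np.array([y.mean(), y.var()])

      def simulated_log_lik(theta, zn_obs, n, s=500):
          # Simulate s datasets under theta, estimate the density of Zn with a
          # kernel smoother, and evaluate it at the observed statistic.
          mu, sigma = theta
          draws = np.array([statistic(rng.normal(mu, sigma, n)) for _ in range(s)])
          return np.log(gaussian_kde(draws.T).evaluate(zn_obs)[0] + 1e-300)

      y = rng.normal(1.0, 2.0, 200)            # observed sample
      zn = statistic(y)
      grid = [(m, s_) for m in np.linspace(0.5, 1.5, 21)
                      for s_ in np.linspace(1.5, 2.5, 21)]
      theta_hat = max(grid, key=lambda th: simulated_log_lik(th, zn, len(y)))
      print("SMIL estimate (mu, sigma):", theta_hat)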
  2. By: Markus Reiß; Yves Rozenholc; Charles A. Cuenod
    Abstract: A nonparametric procedure for quantile regression, or more generally nonparametric M-estimation, is proposed which is completely data-driven and adapts locally to the regularity of the regression function. This is achieved by considering, at each point, M-estimators over different local neighbourhoods and by a local model selection procedure based on sequential testing. Non-asymptotic risk bounds are obtained, which yield rate optimality for large-sample asymptotics under weak conditions. Simulations for different univariate median regression models show good finite-sample properties, also in comparison to traditional methods. The approach is the basis for denoising CT scans in cancer research.
    Keywords: M-estimation, median regression, robust estimation, local model selection, unsupervised learning, local bandwidth selection, median filter, Lepski procedure, minimax rate, image denoising
    JEL: C14 C31
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-029&r=ecm
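    A minimal sketch of the local model selection idea for median regression, using a crude Lepski-type stopping rule at a single point. The design, the noise-scale constant, and the critical values are illustrative assumptions, not the paper's sequential-testing procedure.

      # Lepski-type local median regression at one point, illustrative only.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 400
      x = np.linspace(0.0, 1.0, n)
      f = np.where(x < 0.5, 1.0, 2.0)              # true function with a jump
      y = f + 0.3 * rng.standard_t(df=3, size=n)   # heavy-tailed noise

      def local_median(x0, h):
          # M-estimator (here the median) over the neighbourhood |x - x0| <= h.
          return np.median(y[np.abs(x - x0) <= h])

      def adaptive_median(x0, bandwidths, kappa=2.0, scale=0.3):
          # Enlarge the neighbourhood while the new estimate stays compatible
          # with all previously accepted ones; stop at the first conflict.
          crit = lambda h: kappa * scale / np.sqrt(n * h)
          kept = [(bandwidths[0], local_median(x0, bandwidths[0]))]
          for h in bandwidths[1:]:
              cand = local_median(x0, h)
              if all(abs(cand - e) <= crit(g) + crit(h) for g, e in kept):
                  kept.append((h, cand))
              else:
                  break
          return kept[-1]   # selected bandwidth and local estimate

      print(adaptive_median(0.45, [0.02, 0.04, 0.08, 0.16, 0.32]))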
  3. By: Xiaoming Li (University of Connecticut)
    Abstract: Estimation of nonlinear panel models that include individual-specific fixed effects is complicated by the incidental parameters problem: the estimators of typical fixed effects panel models are asymptotically biased and hence generally inconsistent. In this paper, I characterize the leading term of a large-T expansion of the biases in the nonlinear least squares estimator (NLSE) and in estimators of the average partial effects in panel fractional response models. The resulting estimator, after analytical bias correction, is robust to the incidental parameters bias and reduces the order of the bias from O(T^-1) to O(T^-2). I also examine the finite sample performance of the proposed estimator using a new data generating process in which panel fractional response variables are collapsed from repeated, clustered cross-sectional binary probit choices, and I provide a proof that the generated data satisfy the identification assumption at the cluster level. Simulation results suggest that, in the static case, the bias-corrected estimator performs comparably to the quasi-maximum likelihood estimator (QMLE), the standard approach in the literature, for 8 or more periods, while in the dynamic case the bias-corrected estimators are substantially superior to the QMLE.
    Keywords: Fractional responses, Panel Data, Unobserved effects, Probit, Partial effects, Bias, Incidental parameters problem, Fixed effects, Bias Correction
    JEL: C23 C25 I22
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:uct:uconnp:2011-11&r=ecm
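    The data generating process described above can be sketched in a few lines: fractional responses arise as within-cell means of clustered binary probit choices. All parameter values and dimensions below are illustrative assumptions.

      # Fractional responses collapsed from clustered probits, illustrative only.
      import numpy as np

      rng = np.random.default_rng(2)
      N, T, M = 100, 8, 50                 # individuals, periods, cluster size
      beta = 1.0
      alpha = rng.normal(0.0, 1.0, N)      # individual fixed effects
      x = rng.normal(0.0, 1.0, (N, T))     # strictly exogenous regressor

      # Each (i, t) cell aggregates M binary probit choices that share the
      # index x_it * beta + alpha_i; y_it in [0, 1] is their within-cell mean.
      index = x * beta + alpha[:, None]
      eps = rng.normal(0.0, 1.0, (N, T, M))
      y = (index[:, :, None] + eps > 0).mean(axis=2)
      print(y.shape, float(y.min()), float(y.max()))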
  4. By: Markus Jochmann (Department of Economics, Newcastle University); Gary Koop (Department of Economics, University of Strathclyde)
    Abstract: We develop methods for Bayesian inference in vector error correction models which are subject to a variety of switches in regime (e.g. Markov switches in regime or structural breaks). An important aspect of our approach is that we allow both the cointegrating vectors and the number of cointegrating relationships to change when the regime changes. We show how Bayesian model averaging or model selection methods can be used to deal with the high-dimensional model space that results. Our methods are used in an empirical study of the Fisher effect.
    Keywords: Bayesian, Markov switching, structural breaks, cointegration
    JEL: C11 C32 C52
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:str:wpaper:1125&r=ecm
  5. By: Inoue, Atsushi; Kilian, Lutz
    Abstract: Skepticism toward traditional identifying assumptions based on exclusion restrictions has led to a surge in the use of structural VAR models in which structural shocks are identified by restricting the sign of the responses of selected macroeconomic aggregates to these shocks. Researchers commonly report the vector of pointwise posterior medians of the impulse responses as a measure of central tendency of the estimated response functions, along with pointwise 68 percent posterior error bands. It can be shown that this approach cannot be used to characterize the central tendency of the structural impulse response functions. We propose an alternative method of summarizing the evidence from sign-identified VAR models designed to enhance their practical usefulness. Our objective is to characterize the most likely admissible model(s) within the set of structural VAR models that satisfy the sign restrictions. We show how the set of most likely structural response functions can be computed from the posterior mode of the joint distribution of admissible models both in the fully identified and in the partially identified case, and we propose a highest-posterior density credible set that characterizes the joint uncertainty about this set. Our approach can also be used to resolve the long-standing problem of how to conduct joint inference on sets of structural impulse response functions in exactly identified VAR models. We illustrate the differences between our approach and the traditional approach for the analysis of the effects of monetary policy shocks and of the effects of oil demand and oil supply shocks.
    Keywords: Credible Set; Impulse responses; Median; Mode; Sign restrictions; Simultaneous inference; Vector autoregression
    JEL: C32 C52 E37 Q43
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:8419&r=ecm
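    A sketch of the sign-restriction setting: admissible impact matrices are generated by rotating a Cholesky factor, and the pointwise median of the admissible responses need not coincide with any single admissible model, which is the problem the authors' mode-based summary addresses. The reduced-form covariance and the impact-only restrictions below are illustrative assumptions.

      # Sign-restriction identification in a bivariate VAR, illustrative only.
      import numpy as np

      rng = np.random.default_rng(3)
      sigma = np.array([[1.0, 0.5],        # assumed reduced-form covariance
                        [0.5, 1.0]])
      P = np.linalg.cholesky(sigma)

      admissible = []
      for _ in range(2000):
          # Draw a random rotation via the QR decomposition of a Gaussian matrix.
          q, r = np.linalg.qr(rng.standard_normal((2, 2)))
          q = q * np.sign(np.diag(r))      # normalization for a uniform draw
          B = P @ q                        # candidate impact matrix
          if B[0, 0] > 0 and B[1, 0] > 0:  # sign restrictions on shock 1
              admissible.append(B[:, 0])

      admissible = np.array(admissible)
      # The pointwise median mixes coordinates from different admissible models,
      # so it need not be the impact response of any one admissible model.
      print("pointwise median impact response:", np.median(admissible, axis=0))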
  6. By: Francesca, Greselin; Leo, Pasquazzi
    Abstract: In this work we propose a new estimator for Zenga's inequality measure in heavy-tailed populations. The new estimator is based on the Weissman estimator for high quantiles. We show that, under fairly general conditions, it has an asymptotically normal distribution. Further, we present the results of a simulation study in which we compare confidence intervals based on the new estimator with those based on the plug-in estimator.
    Keywords: Heavy-tailed distributions; inequality measures; conditional tail expectation; Hill estimator; Weissman estimator; extreme value theory
    JEL: C13 D63 C14
    Date: 2011–06–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:31230&r=ecm
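    The tail ingredients of the proposed estimator, the Hill and Weissman estimators, are classical and easy to state in code; the Pareto sample and the choice of k below are illustrative assumptions.

      # Hill and Weissman estimators on a Pareto sample, illustrative only.
      import numpy as np

      rng = np.random.default_rng(4)
      n, k, gamma_true = 2000, 100, 0.4
      x = np.sort(rng.pareto(1.0 / gamma_true, n) + 1.0)  # Pareto, index 1/gamma

      # Hill estimator of the extreme value index from the k largest observations.
      gamma_hat = np.mean(np.log(x[-k:])) - np.log(x[-k - 1])

      # Weissman estimator of the high quantile x_p for p < k/n.
      p = 0.001
      xp_hat = x[-k - 1] * (k / (n * p)) ** gamma_hat
      print(f"gamma_hat = {gamma_hat:.3f}, estimated {1 - p:.1%} quantile = {xp_hat:.1f}")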
  7. By: Michael P. Clements (University of Warwick); Ana Beatriz Galvão (Queen Mary, University of London)
    Abstract: Real-time estimates of output gaps and inflation trends differ from the values that are obtained using data available long after the event. Part of the problem is that the data on which the real-time estimates are based is subsequently revised. We show that vector-autoregressive models of data vintages provide forecasts of post-revision values of future observations and of already-released observations capable of improving real-time output gap and inflation trend estimates. Our findings indicate that annual revisions to output and inflation data are in part predictable based on their past vintages.
    Keywords: Revisions, Real-time forecasting, Output gap, Inflation trend
    JEL: C53
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp678&r=ecm
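    A stylized sketch of the vintage idea: stack the first release of each observation together with the revised value of the previous observation and fit a VAR by least squares, so that past vintages help predict post-revision values. The single-revision structure and the DGP are simplifying assumptions, not the authors' specification.

      # A stylized vintage VAR, illustrative only.
      import numpy as np

      rng = np.random.default_rng(5)
      T = 300
      final = np.cumsum(rng.normal(0.5, 1.0, T))       # post-revision series
      first = final + 0.3 + rng.normal(0.0, 0.8, T)    # biased, noisy release

      # Vintage vector at t: (first release of period t, final value of t-1).
      z = np.column_stack([first[1:], final[:-1]])

      # One-lag VAR by least squares: z_t = c + A z_{t-1} + u_t, so past
      # vintages carry information about not-yet-observed revisions.
      Y = z[1:]
      X = np.column_stack([np.ones(len(z) - 1), z[:-1]])
      coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
      print("intercepts and VAR coefficients:\n", coef.T.round(3))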
  8. By: Mauro Politi; Nicolas Millot; Anirban Chakraborti
    Abstract: Extreme event statistics play a very important role in the theory and practice of time series analysis. The applicability of classical theoretical results is often undermined by non-stationarity and dependence between increments. Furthermore, convergence to the limit distributions can be slow, requiring a huge number of records to obtain significant statistics and thus limiting practical applications. Focusing instead on the closely related density of "near-extremes" -- the distance between an observation and the maximal value -- can make the statistical methods more suitable for practical applications and for the validation of models. We apply this recently proposed method in the empirical validation of an adapted financial market model of the intraday market fluctuations.
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1106.0039&r=ecm
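    In its simplest empirical form, the near-extreme density is the histogram of distances between each observation and the within-window maximum; the Gaussian returns and the window length below are illustrative assumptions.

      # Empirical near-extreme density for simulated intraday returns.
      import numpy as np

      rng = np.random.default_rng(6)
      returns = rng.normal(0.0, 1.0, (500, 390))    # 500 "days" x 390 returns
      # Distance of every log-return from its day's maximum (the record).
      d = returns.max(axis=1, keepdims=True) - returns
      density, edges = np.histogram(d[d > 0], bins=50, density=True)
      print("near-extreme density, first five bins:", density[:5].round(3))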
  9. By: Marco M. Sorge
    Abstract: This paper studies identification of linear rational expectations models under news shocks. Exploiting the general martingale difference solution approach, we show that news shock models are observationally equivalent to a class of indeterminate equilibrium frameworks that are subject only, though arbitrarily, to i.i.d. fundamental shocks. The equivalent models are characterized by a lagged expectations structure, which typically arises when choice variables are predetermined or otherwise based on past information relative to current observables. This feature creates room for serially correlated sunspot variables to arise in equilibrium reduced forms, whose dynamics can equivalently be induced by news shock processes. This finding, which is inherent to the rational expectations construct, calls for careful design of empirical investigations of news shocks in estimated DSGE models.
    Keywords: Rational expectations; News shocks; Indeterminacy; Observational equivalence.
    JEL: C1 E3
    Date: 2011–05–09
    URL: http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2011_09&r=ecm
  10. By: Adi Schnytzer (Bar-Ilan University); Janez Šušteršič (University of Primorska)
    Abstract: Standard methods to assess the statistical quality of econometric models implicitly assume there is only one person in the world, namely the forecaster with her model(s), and that there exists an objective and independent reality to which the model predictions may be compared. However, on many occasions, the reality with which we compare our predictions, and in which we take our actions, is co-determined and changed constantly by actions taken by other actors based on their own models. We propose a new method, called a regression tournament, that assesses the utility of forecasting models while taking these interactions into account. We present an empirical case of betting on Australian Rules Football matches in which the most accurate predictive model does not yield the highest betting return, or, in our terms, does not win the regression tournament.
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:biu:wpaper:2011-10&r=ecm
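    A minimal tournament evaluator along these lines: competing probability models are ranked by realized betting profit against posted odds rather than by predictive accuracy. The odds rule, unit stakes, and the two toy models below are illustrative assumptions.

      # A minimal regression tournament, illustrative only.
      import numpy as np

      def tournament(models, outcomes, odds):
          # Rank models by realized betting profit, not by accuracy.
          table = {}
          for name, p in models.items():
              accuracy = np.mean((p > 0.5) == outcomes)
              bet = p * odds > 1.0               # bet only on perceived value
              profit = np.where(outcomes[bet], odds[bet] - 1.0, -1.0).sum()
              table[name] = (accuracy, profit)
          return sorted(table.items(), key=lambda kv: -kv[1][1])

      rng = np.random.default_rng(7)
      p_true = rng.uniform(0.05, 0.95, 5000)
      outcomes = rng.random(5000) < p_true
      odds = 0.95 / p_true                       # odds with a bookmaker margin
      models = {
          "sharp": np.clip(p_true + rng.normal(0.0, 0.02, 5000), 0.01, 0.99),
          "noisy": np.clip(p_true + rng.normal(0.0, 0.15, 5000), 0.01, 0.99),
      }
      for name, (acc, prof) in tournament(models, outcomes, odds):
          print(f"{name}: accuracy = {acc:.3f}, profit = {prof:+.1f}")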
  11. By: Gary Koop (Department of Economics, University of Strathclyde); Joshua Chan (Australian National University)
    Abstract: Macroeconomists working with multivariate models typically face uncertainty over which (if any) of their variables have long run steady states which are subject to breaks. Furthermore, the nature of the break process is often unknown. In this paper, we draw on methods from the Bayesian clustering literature to develop an econometric methodology which: i) finds groups of variables which have the same number of breaks; and ii) determines the nature of the break process within each group. We present an application involving a five-variate steady-state VAR.
    Keywords: mixtures of normals, steady state VARs, Bayesian
    JEL: C11 C24 C32
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:str:wpaper:1111&r=ecm
  12. By: Massimiliano Caporin; Michael McAleer (University of Canterbury)
    Abstract: In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of models, namely BEKK, DCC, Corrected DCC (cDCC) of Aielli (2008), CCC, Exponentially Weighted Moving Average, and covariance shrinking, using historical data on 89 US equities. Our methods follow part of the approach described in Patton and Sheppard (2009), and the paper contributes to the literature in several directions. First, we consider a wide range of models, including the recent cDCC model and covariance shrinking. Second, we use a range of tests and approaches for direct and indirect model comparison, including the Weighted Likelihood Ratio test of Amisano and Giacomini (2007). Third, we examine how the model rankings are influenced by the cross-sectional dimension of the problem.
    Keywords: Covariance forecasting; model confidence set; model ranking; MGARCH; model comparison
    JEL: C32 C53 C52
    Date: 2011–05–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:11/23&r=ecm
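    Two of the simpler covariance forecasters in the comparison, EWMA and covariance shrinking, can be sketched directly; the decay parameter and the fixed shrinkage intensity below are conventional illustrative choices rather than the paper's estimated values.

      # EWMA and shrinkage covariance forecasters, illustrative only.
      import numpy as np

      def ewma_cov(returns, lam=0.94):
          # RiskMetrics-style recursion H_t = lam * H_{t-1} + (1 - lam) * r r'.
          H = np.cov(returns.T)                  # initialize at the sample cov
          for r in returns:
              H = lam * H + (1.0 - lam) * np.outer(r, r)
          return H

      def shrink_cov(returns, delta=0.2):
          # Linear shrinkage of the sample covariance toward a scaled identity
          # (Ledoit-Wolf style target); delta is fixed here, not estimated.
          S = np.cov(returns.T)
          mu = np.trace(S) / S.shape[0]
          return (1.0 - delta) * S + delta * mu * np.eye(S.shape[0])

      rng = np.random.default_rng(8)
      r = 0.01 * rng.standard_normal((1000, 10)) # 1000 days, 10 assets
      print(ewma_cov(r)[:2, :2].round(6))
      print(shrink_cov(r)[:2, :2].round(6))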

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.