nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒09‒29
twenty papers chosen by
Sune Karlsson
Orebro University

  1. EGARCH and Stochastic Volatility: Modeling Jumps and Heavy-tails for Stock Returns By Jouchi Nakajima
  2. Optimal inference in dynamic models with conditional moment restrictions By Bent Jesper Christensen; Michael Sørensen
  3. Likelihood based testing for no fractional cointegration By Katarzyna Lasak
  4. Real Time Detection of Structural Breaks in GARCH Models By Zhongfang He; John M Maheu
  5. Maximum likelihood estimation of fractionally cointegrated systems By Katarzyna Lasak
  6. Testing for Co-integration in Vector Autoregressions with Non-Stationary Volatility By Giuseppe Cavaliere; Anders Rahbek; A.M.Robert Taylor
  7. Matching Theory and Data: Bayesian Vector Autoregression and Dynamic Stochastic General Equilibrium Models By Alexander Kriwoluzky
  8. Heteroscedasticity and Autocorrelation Efficient (HAE) Estimation and Pivots for Jointly Evolving Series By Hrishikesh D. Vinod
  9. Forecast Combination With Entry and Exit of Experts By Carlos Capistrán; Allan Timmermann
  10. Optimally combining Censored and Uncensored Datasets By Paul J. Devereux; Gautam Tripathi
  11. Seasonal Mackey-Glass-GARCH process and short-term dynamics By Catherine Kyrtsou; Michel Terraza
  12. Modeling technology and technological change in manufacturing: how do countries differ? By Eberhardt, Markus; Teal, Francis
  13. Glossary to ARCH (GARCH) By Tim Bollerslev
  14. Testing downside risk efficiency under market distress By Jesus Gonzalo; Jose Olmo
  15. The Resolution of Macroeconomic Uncertainty: Evidence from Survey Forecast By Andrew J. Patton; Allan Timmermann
  16. Real-time Prediction with UK Monetary Aggregates in the Presence of Model Uncertainty By Anthony Garratt; Gary Koop; Emi Mise; Shaun Vahey
  17. On identifiability of MAP processes By Pepa Ramirez; Rosa E. Lillo; Michael P. Wiper
  18. The Effect of High School Employment on Educational Attainment: A Conditional Difference-in-Differences Approach By Buscha, Franz; Maurel, Arnaud; Page, Lionel; Speckesser, Stefan
  19. Non-Linearities, Model Uncertainty, and Macro Stress Testing By Miroslav Misina; David Tessier
  20. Real-Time Measurement of Business Conditions By S. Boragan Aruoba; Francis X. Diebold; Chiara Scotti

  1. By: Jouchi Nakajima (Institute for Monetary and Economic Studies, Bank of Japan (E-mail: jouchi.nakajima-1@boj.or.jp))
    Abstract: This paper proposes the EGARCH model with jumps and heavy-tailed errors, and studies the empirical performance of different models, including stochastic volatility models with leverage, jumps and heavy-tailed errors, for daily stock returns. Within a Bayesian inference framework, Markov chain Monte Carlo estimation methods for these models are illustrated with a simulation study. A model comparison based on marginal likelihood estimation is provided for data on the U.S. stock index.
    Keywords: Bayesian analysis, EGARCH, Heavy-tailed error, Jumps, Marginal likelihood, Markov chain Monte Carlo, Stochastic volatility
    JEL: C11 C15 G12
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:ime:imedps:08-e-23&r=ecm
  2. By: Bent Jesper Christensen; Michael Sørensen (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: By applying the theory of optimal estimating functions, optimal instruments for dynamic models with conditional moment restrictions are derived. The general efficiency bound is provided, along with estimators attaining the bound. It is demonstrated that the optimal estimators are always at least as efficient as the traditional optimal generalized method of moments estimator, and usually more efficient. The form of our optimal instruments resembles that from Newey (1990), but involves conditioning on the history of the stochastic process. In the special case of i.i.d. observations, our optimal estimator reduces to Newey’s. Specification and hypothesis testing in our framework are introduced. We derive the theory of optimal instruments and the associated asymptotic distribution theory for general cases including non-martingale estimating functions and general history dependence. Examples involving time-varying conditional volatility and stochastic volatility are offered.
    Keywords: optimal estimating function, generalized method of moments, conditional moment restrictions, dynamic models, optimal instruments, martingale estimating function, specification test
    JEL: C12 C13 C22 C32
    Date: 2008–09–11
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-51&r=ecm
  3. By: Katarzyna Lasak (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: We consider two likelihood ratio tests, the so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, a first step towards generalizing Johansen's procedure to the fractional cointegration case. Standard cointegration analysis only allows deviations from equilibrium to be integrated of order zero, which is very restrictive in many cases and may imply an important loss of power in the fractional case. We consider alternative hypotheses with equilibrium deviations that can be mean reverting with an order of integration possibly greater than zero. Moreover, the degree of fractional cointegration is not assumed to be known, and the asymptotic null distribution of both tests is derived over an interval of possible values. We investigate the power of the proposed tests under fractional alternatives and the size accuracy provided by the asymptotic distribution in finite samples.
    Keywords: Error correction model, Gaussian VAR model, Maximum likelihood estimation, Fractional cointegration, Likelihood ratio tests, fractional Brownian motion
    JEL: C12 C15 C32
    Date: 2008–09–11
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-52&r=ecm
  4. By: Zhongfang He; John M Maheu
    Abstract: This paper proposes a sequential Monte Carlo method for estimating GARCH models subject to an unknown number of structural breaks. We use particle filtering techniques that allow for fast and efficient updates of posterior quantities and forecasts in real time. The method conveniently deals with the path-dependence problem that arises in this type of model. The method is shown to perform well on simulated data. Applied to daily NASDAQ returns, the evidence favors a partial structural break specification, in which only the intercept of the conditional variance equation has breaks, over the full structural break specification, in which all parameters are subject to change. Our empirical application underscores the importance of model assumptions when investigating breaks. A model with normal return innovations results in strong evidence of breaks, while more flexible return distributions, such as t-innovations or models with jumps, still favor breaks but indicate much more uncertainty about their timing and impact.
    Keywords: particle filter, GARCH model, change point, sequential Monte Carlo
    JEL: C11 C22 C53 G10
    Date: 2008–09–19
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-336&r=ecm
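The "partial structural break" specification above can be made concrete with a small simulation: a GARCH(1,1) in which only the variance intercept shifts at a break date while the ARCH and GARCH coefficients stay fixed. This is an illustrative Python sketch, not the authors' sequential Monte Carlo estimator; all parameter values and the break date are invented.

```python
import numpy as np

def simulate_garch_with_break(T=2000, break_at=1000,
                              omega=(0.05, 0.20), alpha=0.05, beta=0.90,
                              seed=0):
    """Simulate r_t = sqrt(h_t) * z_t with GARCH(1,1) variance
    h_t = omega_t + alpha * r_{t-1}^2 + beta * h_{t-1}, where only the
    intercept omega shifts at the break date (a partial structural break)."""
    rng = np.random.default_rng(seed)
    r = np.empty(T)
    h = omega[0] / (1.0 - alpha - beta)  # pre-break unconditional variance
    r_prev_sq = h                        # initialize the lagged squared return
    for t in range(T):
        w = omega[0] if t < break_at else omega[1]
        h = w + alpha * r_prev_sq + beta * h
        r[t] = np.sqrt(h) * rng.standard_normal()
        r_prev_sq = r[t] ** 2
    return r

returns = simulate_garch_with_break()
```

With these values the unconditional variance is omega/(1 - alpha - beta), so the intercept break quadruples the long-run variance even though alpha and beta never change.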
  5. By: Katarzyna Lasak (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to equilibrium, and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system yields consistent estimates, and we provide their asymptotic distributions. The cointegration matrix is asymptotically mixed normal, while the degree of fractional cointegration and the speed-of-adjustment matrix have a joint normal distribution, which confirms the intuition that the memory of the cointegrating residuals affects the speed of convergence to the long-run equilibrium but has no influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the cointegration degree, but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegration does not affect the consistency of the estimators of the cointegration relationships, although the usual inference rules are not valid. We illustrate our results in finite samples by Monte Carlo analysis.
    Keywords: Error correction model, Gaussian VAR model, Maximum likelihood estimation, Fractional cointegration
    JEL: C13 C32
    Date: 2008–09–12
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-53&r=ecm
  6. By: Giuseppe Cavaliere; Anders Rahbek; A.M.Robert Taylor (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: Many key macro-economic and financial variables are characterised by permanent changes in unconditional volatility. In this paper we analyse vector autoregressions with non-stationary (unconditional) volatility of a very general form, which includes single and multiple volatility breaks as special cases. We show that the conventional rank statistics computed as in Johansen (1988,1991) are potentially unreliable. In particular, their large sample distributions depend on the integrated covariation of the underlying multivariate volatility process which impacts on both the size and power of the associated co-integration tests, as we demonstrate numerically. A solution to the identified inference problem is provided by considering wild bootstrap-based implementations of the rank tests. These do not require the practitioner to specify a parametric model for volatility, nor to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. The bootstrap is shown to perform very well in practice.
    Keywords: Co-integration, non-stationary volatility, trace and maximum eigenvalue tests, wild bootstrap
    JEL: C30 C32
    Date: 2008–09–08
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-50&r=ecm
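The wild bootstrap at the heart of the proposed solution can be sketched in a few lines: each residual is multiplied by an independent Rademacher draw, so every bootstrap sample preserves the original pattern of residual magnitudes, and hence any non-stationary volatility, without fitting a parametric volatility model. A minimal illustration, not the authors' full rank-test algorithm:

```python
import numpy as np

def wild_bootstrap(residuals, n_boot=999, seed=0):
    """Wild bootstrap with Rademacher weights: each bootstrap sample is
    the original residual series with signs flipped independently at
    random, keeping every residual's magnitude (and thus the volatility
    pattern) intact."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(n_boot, len(residuals)))
    return signs * np.asarray(residuals)

# toy residuals with a variance break halfway through the sample
rng = np.random.default_rng(1)
eps = np.concatenate([rng.normal(0, 1, 100), rng.normal(0, 3, 100)])
boot = wild_bootstrap(eps, n_boot=200)  # shape (200, 200)
```

Rank statistics recomputed on series rebuilt from such bootstrap residuals inherit the sample's volatility pattern, which is why no parametric volatility model, and no assumption that the volatility pattern is common or independent across series, is needed.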
  7. By: Alexander Kriwoluzky
    Abstract: This paper shows how to identify the structural shocks of a Vector Autoregression (VAR) while at the same time estimating a dynamic stochastic general equilibrium (DSGE) model that is not assumed to replicate the data generating process. It proposes a framework to estimate the parameters of the VAR model and the DSGE model jointly: the VAR model is identified by sign restrictions derived from the DSGE model; the DSGE model is estimated by matching the corresponding impulse response functions.
    Keywords: Bayesian Model Estimation, Vector Autoregression, Identification.
    JEL: C51
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-060&r=ecm
  8. By: Hrishikesh D. Vinod (Fordham University, Department of Economics)
    Abstract: A new two-way map between the time domain and the domain of numerical magnitudes or values (v-dom) provides a new solution to heteroscedasticity. Since sorted logs of squared fitted residuals are monotonic in the v-dom, we obtain a parsimonious fit there. Two theorems prove consistency, asymptotic normality, efficiency and specification-robustness, supplemented by a simulation. Since Dufour’s (1997) impossibility theorems show how confidence intervals from Wald-type tests can have zero coverage, I suggest Godambe pivot functions (GPF) with good finite sample coverage and distribution-free robustness. I use the Frisch-Waugh theorem and the scalar GPF to construct new confidence intervals for regression parameters and apply Vinod’s (2004, 2006) maximum entropy bootstrap. I use Irving Fisher’s model for interest rates and the Keynesian consumption function for illustration.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:frd:wpaper:dp2008-15&r=ecm
  9. By: Carlos Capistrán; Allan Timmermann (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: Combination of forecasts from survey data is complicated by the frequent entry and exit of individual forecasters, which renders conventional least squares regression approaches infeasible. We explore the consequences of this issue for existing combination methods and propose new methods for bias-adjusting the equal-weighted forecast or applying combinations on an extended panel constructed by back-filling missing observations using an EM algorithm. Through simulations and an application to a range of macroeconomic variables we show that the entry and exit of forecasters can have a large effect on the real-time performance of conventional combination methods. The bias-adjusted combination method is found to work well in practice.
    Keywords: Real-time Data, Survey of Professional Forecasters, Bias-adjustment, EM Algorithm.
    Date: 2008–09–19
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-55&r=ecm
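The back-filling idea can be illustrated with a toy EM-style imputation. The sketch below alternates between imputing missing panel entries from a simple additive model (overall mean plus forecaster and period effects) and re-estimating that model on the completed panel. The paper's actual EM algorithm and underlying model may differ; the additive structure here is purely an illustrative assumption.

```python
import numpy as np

def em_backfill(panel, tol=1e-8, max_iter=500):
    """Iteratively fill NaNs in a forecaster-by-period panel.
    E-step: replace each missing entry with its fitted value from an
            additive model: overall mean + forecaster effect + period effect.
    M-step: re-estimate those effects from the completed panel.
    Repeats until the imputed values stop changing."""
    panel = np.asarray(panel, dtype=float)
    miss = np.isnan(panel)
    X = np.where(miss, np.nanmean(panel), panel)  # crude starting values
    for _ in range(max_iter):
        mu = X.mean()
        row_eff = X.mean(axis=1, keepdims=True) - mu  # forecaster effects
        col_eff = X.mean(axis=0, keepdims=True) - mu  # period effects
        fitted = mu + row_eff + col_eff
        X_new = np.where(miss, fitted, panel)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    return X
```

On an exactly additive panel the iteration recovers a missing entry exactly; with noisy data it converges to the model's best guess, and the completed panel can then feed a standard combination regression.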
  10. By: Paul J. Devereux (University College Dublin); Gautam Tripathi (University of Connecticut)
    Abstract: We develop a simple semiparametric framework for combining censored and uncensored samples so that the resulting estimators are consistent, asymptotically normal, and use all information optimally. No nonparametric smoothing is required to implement our estimators. To illustrate our results in an empirical setting, we show how to estimate the effect of changes in compulsory schooling laws on age at first marriage, a variable that is censored for younger individuals. Results from a small simulation experiment suggest that the estimator proposed in this paper can work very well in finite samples.
    Date: 2008–09–19
    URL: http://d.repec.org/n?u=RePEc:ucn:wpaper:200820&r=ecm
  11. By: Catherine Kyrtsou (Department of Economics, University of Macedonia); Michel Terraza (Department of Economics, LAMETA)
    Abstract: The aim of this article is the study of complex structures which are behind the short-term predictability of stock returns series. In this regard, we employ a seasonal version of the Mackey-Glass-GARCH(p,q) model, initially proposed by Kyrtsou and Terraza (2003) and generalized by Kyrtsou (2005, 2006). It has either negligible or significant autocorrelations in the conditional mean, and a rich structure in the conditional variance. To reveal short or long memory components and non-linear structures in the French Stock Exchange (CAC40) returns series, we apply the test of Geweke and Porter-Hudak (1983), the Brock et al. (1996) and Dechert (1995) tests, the correlation-dimension method of Grassberger and Procaccia (1983), the Lyapunov exponents method of Gencay and Dechert (1992), and the Recurrence Quantification Analysis introduced by Webber and Zbilut (1994). As a confirmation procedure of the dynamics generating future movements in CAC40, we forecast the return series using a seasonal Mackey-Glass-GARCH(1,1) model. The interest of the forecasting exercise is found in the inclusion of high-dimensional non-linearities in the mean equation of returns.
    Keywords: Noisy chaos, short-term dynamics, correlation dimension, Lyapunov exponents, recurrence quantifications, forecasting.
    JEL: C49 C51 C52 C53 D84 G12 G14
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:mcd:mcddps:2008_09&r=ecm
  12. By: Eberhardt, Markus; Teal, Francis
    Abstract: In this paper we ask how technological differences in manufacturing across countries can best be modeled when using a standard production function approach. We show that it is important to allow for differences in technology as measured by differences in parameters. Of similar importance are time-series properties of the data and the role of dynamic processes, which can be thought of as aspects of technological change. Regarding the latter we identify both an element that is common across all countries and a part which is country-specific. The estimator we develop, which we term the Augmented Mean Group estimator (AMG), is closely related to the Mean Group version of the Pesaran (2006) Common Correlated Effects estimator. Once we allow for parameter heterogeneity and the underlying time-series properties of the data we are able to show that the parameter estimates from the production function are consistent with information on factor shares.
    Keywords: Manufacturing Production; Parameter Heterogeneity; Nonstationary Panel Econometrics; Cross-section Dependence
    JEL: O47 C33 O14
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10690&r=ecm
  13. By: Tim Bollerslev (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: The literature on modeling and forecasting time-varying volatility is ripe with acronyms and abbreviations used to describe the many different parametric models that have been put forth since the original linear ARCH model introduced in the seminal Nobel Prize winning paper by Engle (1982). The present paper provides an easy-to-use encyclopedic reference guide to this long list of ARCH acronyms. In addition to the acronyms associated with specific parametric models, I have also included descriptions of various abbreviations associated with more general statistical procedures and ideas that figure especially prominently in the ARCH literature.
    Keywords: (G)ARCH, Volatility models
    JEL: C22
    Date: 2008–09–04
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-49&r=ecm
  14. By: Jesus Gonzalo; Jose Olmo
    Abstract: In moments of distress, downside risk measures like Lower Partial Moments (LPM) are more appropriate than the standard variance to characterize risk. The goal of this paper is to study how to compare portfolios in these situations. To do so, we show the close connection between mean-risk efficiency sets and stochastic dominance under distress episodes of the market, and use the latter property to propose a hypothesis test to discriminate between portfolios across risk aversion levels. Our novel family of test statistics for testing stochastic dominance under distress allows for orders of dominance higher than zero and for general forms of dependence between portfolios, and can be extended to residuals of regression models. These results are illustrated in an empirical application to data on US stocks. We show that mean-variance strategies are stochastically dominated by mean-risk efficient sets in episodes of financial distress.
    Keywords: Comovements, Downside risk, Lower partial moments, Market Distress, Mean-risk models, Mean-variance models, Stochastic dominance
    JEL: C1 C2 G1
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we084321&r=ecm
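A lower partial moment, the building block of the downside risk measures discussed above, is straightforward to compute: average the n-th power of the shortfall below a threshold. A minimal sketch; the threshold and order shown are illustrative choices, not values from the paper.

```python
import numpy as np

def lower_partial_moment(returns, threshold=0.0, order=2):
    """LPM_n(tau) = E[max(tau - r, 0)^n]: only returns below the
    threshold tau contribute, penalized by the chosen order n."""
    shortfall = np.maximum(threshold - np.asarray(returns), 0.0)
    return float(np.mean(shortfall ** order))

lpm2 = lower_partial_moment([-0.10, 0.05, -0.05, 0.20])  # -> 0.003125
```

Unlike the variance, gains never offset losses here, which is why LPM-based efficiency sets can rank portfolios differently from mean-variance sets in distress episodes.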
  15. By: Andrew J. Patton; Allan Timmermann (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: We develop an unobserved components approach to study surveys of forecasts containing multiple forecast horizons. Under the assumption that forecasters optimally update their beliefs about past, current and future state variables as new information arrives, we use our model to extract information on the degree of predictability of the state variable and the importance of measurement errors on that variable. Empirical estimates of the model are obtained using survey forecasts of annual GDP growth and inflation in the US with forecast horizons ranging from 1 to 24 months. The model is found to closely match the joint realization of forecast errors at different horizons and is used to demonstrate how uncertainty about macroeconomic variables is resolved.
    Keywords: Fixed-event forecasts, multiple forecast horizons, Kalman filtering, survey data
    Date: 2008–09–19
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-54&r=ecm
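The filtering machinery behind such an unobserved components model can be sketched in scalar form: a persistent latent state observed with measurement error, tracked by a Kalman filter. This is a stylized stand-in, not the paper's multi-horizon survey model; all parameter values are illustrative.

```python
import numpy as np

def kalman_filter_ar1(y, phi, q, r, x0=0.0, p0=1.0):
    """Kalman filter for the scalar model
        x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)   (latent state)
        y_t = x_t + v_t,            v_t ~ N(0, r)   (noisy observation)
    Returns the filtered state means E[x_t | y_1..y_t]."""
    x, p = x0, p0
    filtered = np.empty(len(y))
    for t, obs in enumerate(y):
        x, p = phi * x, phi * phi * p + q      # predict
        k = p / (p + r)                        # Kalman gain
        x = x + k * (obs - x)                  # update mean
        p = (1.0 - k) * p                      # update variance
        filtered[t] = x
    return filtered
```

Each update shrinks the noisy observation toward the model's prediction, so the filtered series tracks the latent state more closely than the raw observations do; this is the sense in which the filter separates predictable variation from measurement error.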
  16. By: Anthony Garratt; Gary Koop; Emi Mise; Shaun Vahey (Reserve Bank of New Zealand)
    Abstract: A popular account for the demise of the UK’s monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and inflation and real output growth. Yet ex post policy analysis based on heavily-revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily-revised data. We consider a large set of recursively estimated Vector Autoregressive (VAR) and Vector Error Correction models (VECM). These models differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data as heavily-revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the UK’s monetary targeting regime.
    JEL: C11 C32 C53 E51 E52
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:nzb:nzbdps:2008/13&r=ecm
  17. By: Pepa Ramirez; Rosa E. Lillo; Michael P. Wiper
    Abstract: Two types of transitions can be found in the Markovian Arrival process or MAP: with and without arrivals. In transient transitions the chain jumps from one state to another with no arrival; in effective transitions, a single arrival occurs. We assume that in practice only arrival times are observed in a MAP. This leads us to define and study the Effective Markovian Arrival process or E-MAP. In this work we define identifiability of MAPs in terms of equivalence between the corresponding E-MAPs and study conditions under which two sets of parameters induce identical laws for the observable process, in the case of 2- and 3-state MAPs. We illustrate and discuss our results with examples.
    Keywords: Batch Markovian Arrival process, Hidden Markov models, Identifiability problems
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws084513&r=ecm
  18. By: Buscha, Franz (University of Westminster); Maurel, Arnaud (ENSAE-CREST); Page, Lionel (University of Westminster); Speckesser, Stefan (University of Westminster)
    Abstract: Using American panel data from the National Educational Longitudinal Study of 1988 (NELS:88), this paper investigates the effect of working during grade 12 on attainment. We exploit the longitudinal nature of the NELS by employing, for the first time in the related literature, a semiparametric propensity score matching approach combined with difference-in-differences. This identification strategy allows us to address in a flexible way selection on both observables and unobservables associated with part-time work decisions. Once such factors are controlled for, insignificant effects on reading and math scores are found. We show that these results are robust to a matching approach combined with difference-in-difference-in-differences, which allows differential time trends in attainment according to working status in grade 12.
    Keywords: education, evaluation, propensity score matching
    JEL: J24 J22 I21
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3696&r=ecm
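The difference-in-differences component of the identification strategy reduces to simple arithmetic once treated and control groups are fixed: the change in the outcome for the treated minus the change for the controls. A bare-bones sketch; the paper combines this with propensity score matching, which is omitted here, and the numbers below are invented.

```python
import numpy as np

def did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Difference-in-differences: (treated change) - (control change).
    Under the parallel-trends assumption this nets out any common
    time trend shared by the two groups."""
    return ((np.mean(post_treat) - np.mean(pre_treat))
            - (np.mean(post_ctrl) - np.mean(pre_ctrl)))

# treated scores rise by 2.0, controls by 0.5 -> estimated effect 1.5
effect = did_estimate([1.0, 2.0, 3.0], [3.0, 4.0, 5.0],
                      [0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
```

Matching before differencing, as the paper does, makes the parallel-trends assumption more plausible by comparing workers only with observably similar non-workers.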
  19. By: Miroslav Misina; David Tessier
    Abstract: A distinguishing feature of macro stress testing exercises is the use of macroeconomic models in scenario design and implementation. It is widely agreed that scenarios should be based on "rare but plausible" events that have either resulted in vulnerabilities in the past or could do so in the future. This requirement, however, raises a number of difficult statistical and methodological problems. Economic models, as well as statistical models of the relationships among economic variables, generally focus on capturing average rather than extreme behaviour, and frequently rely on the assumption of linearity. In this paper we show that these models are particularly ill-suited for stress testing, as they neither adequately capture past behaviour in extreme events nor generate plausible responses to shocks under stress. While one might argue that using these models is still preferable to having no models, since they at least impose consistency restrictions on the paths generated under the scenario, failing to account for the considerable uncertainty surrounding these paths may lead to results that are non-informative and potentially misleading. The paper illustrates both of these problems with a series of examples, but our conclusions have broader implications for the types of models that would be useful in these exercises.
    Keywords: Financial stability
    JEL: C15 G21 G33
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:08-30&r=ecm
  20. By: S. Boragan Aruoba; Francis X. Diebold; Chiara Scotti
    Abstract: We construct a framework for measuring economic activity at high frequency, potentially in real time. We use a variety of stock and flow data observed at mixed frequencies (including very high frequencies), and we use a dynamic factor model that permits exact filtering. We illustrate the framework in a prototype empirical example and a simulation study calibrated to the example.
    JEL: C01 C22 E32 E37
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14349&r=ecm

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.