nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒03‒14
twelve papers chosen by
Sune Karlsson
Örebro University

  1. Testing Conditional Factor Models By Dennis Kristensen; Andrew Ang
  2. Forecasting Large Datasets with Conditionally Heteroskedastic Dynamic Common Factors By Lucia Alessi; Matteo Barigozzi; Marco Capasso
  3. Likelihood-Based Confidence Sets for the Timing of Structural Breaks By Eo, Yunjong; Morley, James C.
  4. Forecasting Errors: Yet More Problems for Identification? By Contini, Bruno
  5. Transformation kernel density estimation of actuarial loss functions By Catalina Bolance (Universitat de Barcelona); Montserrat Guillen (Universitat de Barcelona); Jens Perch Nielsen (City University London)
  6. ESTAR model with multiple fixed points. Testing and Estimation By David Peel; Ivan Paya; Ioannis A. Venetis
  7. Forecasting Exchange Rate Volatility: The Superior Performance of Conditional Combinations of Time Series and Option Implied Forecasts By Guillermo Benavides; Carlos Capistrán
  8. The Theta Model in the Presence of a Unit Root: Some new results on “optimal” theta forecasts By Dimitrios Thomakos; Konstantinos Nikolopoulos
  9. Poor identification and estimation problems in panel data models with random effects and autocorrelated errors By Giorgio Calzolari; Laura Magazzini
  10. The Factor-Spline-GARCH Model for High and Low Frequency Correlations By Jose Gonzalo Rangel; Robert F. Engle
  11. A Likelihood Analysis of Models with Information Frictions By Leonardo Melosi
  12. Estimating Sequential-move Games by a Recursive Conditioning Simulator By Shiko Maruyama

  1. By: Dennis Kristensen (Columbia University and CREATES); Andrew Ang (Columbia University and NBER)
    Abstract: We develop a new methodology for estimating time-varying factor loadings and conditional alphas based on nonparametric techniques. We test whether long-run alphas, or averages of conditional alphas over the sample, are equal to zero and derive test statistics for the constancy of factor loadings. The tests can be performed for a single asset or jointly across portfolios. The traditional Gibbons, Ross and Shanken (1989) test arises as a special case when there is no time variation in the factor loadings. As applications of the methodology, we estimate conditional CAPM and Fama and French (1993) models on book-to-market and momentum decile portfolios. We reject the null that long-run alphas are equal to zero even though there is substantial variation in the conditional factor loadings of these portfolios.
    Keywords: factor models, time-varying loadings, nonparametric estimation, kernel methods, testing
    JEL: C12 C13 C14 C32 G11
    Date: 2009–03–04
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-09&r=ecm
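    A minimal sketch of the kernel approach described above: conditional alphas and betas are obtained as kernel-weighted least-squares estimates computed at each point in time. The function name, the Gaussian kernel, and the default bandwidth are illustrative choices, not the authors' code.

```python
import numpy as np

def conditional_alphas_betas(y, X, h=0.1):
    """Kernel-weighted OLS estimates of time-varying alphas and betas.

    y : (T,) asset excess returns; X : (T, k) factor returns.
    h : bandwidth on the rescaled time axis [0, 1] (illustrative).
    Returns a (T, k+1) array whose columns are [alpha_t, beta_1t, ...].
    """
    T = len(y)
    Z = np.column_stack([np.ones(T), X])   # intercept column carries alpha_t
    u = np.arange(T) / T                   # rescaled time points
    est = np.empty((T, Z.shape[1]))
    for t in range(T):
        w = np.exp(-0.5 * ((u - u[t]) / h) ** 2)   # Gaussian kernel weights
        sw = np.sqrt(w)
        est[t], *_ = np.linalg.lstsq(Z * sw[:, None], y * sw, rcond=None)
    return est
```

    Averaging the first column over the sample, e.g. `conditional_alphas_betas(y, X)[:, 0].mean()`, gives the long-run alpha on which the tests are based.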
  2. By: Lucia Alessi; Matteo Barigozzi; Marco Capasso
    Abstract: We propose a new method for multivariate forecasting which combines Dynamic Factor and multivariate GARCH models. We call the model Dynamic Factor GARCH, as the information contained in large macroeconomic or financial datasets is captured by a few dynamic common factors, which we assume to be conditionally heteroskedastic. After describing the estimation of the model, we present simulation results and carry out two empirical applications on financial asset returns and macroeconomic series, with a particular focus on different measures of inflation. Our proposed model outperforms the benchmarks in forecasting the conditional volatility of returns and the inflation level. Moreover, it allows us to predict the conditional covariances of all the time series in the panel.
    Keywords: Dynamic factors, multivariate GARCH, covolatility forecasting, inflation forecasting
    JEL: C52 C53
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2009_005&r=ecm
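    A stylized two-step version of the idea, assuming principal-component factors and a univariate Gaussian GARCH(1,1) fitted to each factor by quasi-maximum likelihood; the paper's estimator and multivariate GARCH specification are richer.

```python
import numpy as np
from scipy.optimize import minimize

def pca_factors(Y, r):
    """First r principal-component factors of a (T, n) panel Y."""
    Yc = Y - Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
    return U[:, :r] * S[:r]                     # (T, r) common factors

def garch11_negloglik(params, x):
    """Negative Gaussian quasi-log-likelihood of a GARCH(1,1)."""
    omega, alpha, beta = params
    s2 = np.empty(len(x))
    s2[0] = x.var()                             # initialize at sample variance
    for t in range(1, len(x)):
        s2[t] = omega + alpha * x[t-1]**2 + beta * s2[t-1]
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + x**2 / s2)

def fit_garch11(x):
    """QML estimates (omega, alpha, beta) for one common factor."""
    res = minimize(garch11_negloglik, x0=[0.1 * x.var(), 0.05, 0.90],
                   args=(x,), method="L-BFGS-B",
                   bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
    return res.x
```

    Fitting `fit_garch11` to each column of `pca_factors(Y, r)` gives the conditionally heteroskedastic factor dynamics, from which panel-wide conditional covariances can be rebuilt through the factor loadings.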
  3. By: Eo, Yunjong; Morley, James C.
    Abstract: In this paper, we propose a new approach to constructing confidence sets for the timing of structural breaks. This approach involves using Markov-chain Monte Carlo methods to simulate marginal “fiducial” distributions of break dates from the likelihood function. We compare our proposed approach to asymptotic and bootstrap confidence sets and find that it performs best in terms of producing short confidence sets with accurate coverage rates. Our approach also has the advantages of i) being broadly applicable to different patterns of structural breaks, ii) being computationally efficient, and iii) requiring only the ability to evaluate the likelihood function over parameter values, thus allowing for many possible distributional assumptions for the data. In our application, we investigate the nature and timing of structural breaks in postwar U.S. real GDP. Based on marginal fiducial distributions, we find much tighter 95% confidence sets for the timing of the so-called “Great Moderation” than has been reported in previous studies.
    Keywords: Fiducial Inference; Bootstrap Methods; Structural Breaks; Confidence Intervals and Sets; Coverage Accuracy and Expected Length; Markov-chain Monte Carlo
    JEL: C15 C22
    Date: 2008–09–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:13913&r=ecm
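    For a single break in mean, the marginal "fiducial" distribution can be illustrated without MCMC: evaluate the Gaussian profile log-likelihood at every admissible break date, normalize it into a distribution over dates, and keep the highest-weight dates until the desired mass is covered. A minimal sketch (the function name and trimming choice are ours; the paper simulates such distributions by Markov-chain Monte Carlo in richer settings):

```python
import numpy as np

def break_date_fiducial_set(y, coverage=0.95, trim=0.05):
    """Fiducial-style confidence set for the date of a single mean break."""
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)
    ll = np.full(T, -np.inf)
    for k in range(lo, hi):
        e1 = y[:k] - y[:k].mean()               # residuals before the break
        e2 = y[k:] - y[k:].mean()               # residuals after the break
        ll[k] = -0.5 * T * np.log((e1 @ e1 + e2 @ e2) / T)
    w = np.exp(ll - ll.max())
    w /= w.sum()                                # "fiducial" weights over dates
    order = np.argsort(w)[::-1]                 # highest-density set first
    n_keep = np.searchsorted(np.cumsum(w[order]), coverage) + 1
    return np.sort(order[:n_keep]), w
```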
  4. By: Contini, Bruno (LABORatorio R. Revelli)
    Abstract: Forecasting errors pose a serious problem of identification, one often neglected in empirical applications. Any attempt to estimate choice models under uncertainty may lead to severely biased results in the presence of forecasting errors, even when individual expectations about future events are observed together with the standard outcome variables.
    Keywords: identification, forecasting errors, subjective probabilities
    JEL: C01 C51
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4035&r=ecm
  5. By: Catalina Bolance (Universitat de Barcelona); Montserrat Guillen (Universitat de Barcelona); Jens Perch Nielsen (City University London)
    Abstract: A transformation kernel density estimator that is suitable for heavy-tailed distributions is discussed. Using a truncated Beta transformation, the choice of the bandwidth parameter becomes straightforward. An application to insurance data and the calculation of the value-at-risk are presented.
    Keywords: non-parametric methods, heavy-tailed distributions, value at risk
    JEL: G22 C14
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:bar:bedcje:2009219&r=ecm
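    The mechanics of transformation kernel density estimation can be sketched as follows: transform the positive, heavy-tailed losses to (0, 1), run an ordinary kernel density estimate there, and map the density back with the Jacobian of the transformation. The simple transform T(x) = x/(x + c) below is an illustrative stand-in for the truncated Beta transformation the paper actually uses.

```python
import numpy as np

def transformation_kde(x, grid, c=None, h=0.1):
    """Transformation KDE for positive heavy-tailed loss data.

    T(x) = x / (x + c) maps (0, inf) into (0, 1); a Gaussian KDE is
    computed on the transformed scale and mapped back with the
    Jacobian T'(x) = c / (x + c)^2.
    """
    c = np.median(x) if c is None else c        # scale of the transform
    u = x / (x + c)                             # transformed sample in (0, 1)
    ug = grid / (grid + c)                      # transformed evaluation points
    z = (ug[:, None] - u[None, :]) / h
    g = np.exp(-0.5 * z**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))
    return g * c / (grid + c)**2                # density back on original scale
```

    The point of the transformation is that, on the bounded scale, a single global bandwidth behaves well even though the original data are heavy-tailed.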
  6. By: David Peel; Ivan Paya; Ioannis A. Venetis
    Abstract: In this paper we propose a globally stationary augmentation of the Exponential Smooth Transition Autoregressive (ESTAR) model that allows for multiple fixed points in the transition function. An F-type test statistic for the null of nonstationarity against such a globally stationary nonlinear alternative is developed. The test statistic is based on the standard approximation of the nonlinear function under the null hypothesis by a Taylor series expansion. The model is applied to U.S. real interest rate data, for which we find evidence of the new ESTAR process.
    Keywords: ESTAR, unit root, real interest rates
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:lan:wpaper:005916&r=ecm
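    The testing idea is in the spirit of existing ESTAR unit-root tests: replace the unidentified nonlinear transition function by its Taylor-series approximation and test the joint significance of the resulting polynomial terms with an F-type statistic. A generic sketch, with an illustrative set of powers (the powers implied by the multiple-fixed-point model, and the null critical values, are those derived in the paper and must be obtained by simulation):

```python
import numpy as np

def estar_f_stat(y, powers=(2, 3)):
    """F-type statistic from a Taylor-approximation auxiliary regression.

    Regresses dy_t on y_{t-1} raised to the given powers and tests
    their joint significance against the unit-root null.
    """
    dy = np.diff(y)
    X = np.column_stack([y[:-1]**p for p in powers])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    rss1 = e @ e                                # unrestricted sum of squares
    rss0 = dy @ dy                              # null: all coefficients zero
    q, T = X.shape[1], len(dy)
    return ((rss0 - rss1) / q) / (rss1 / (T - q))
```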
  7. By: Guillermo Benavides; Carlos Capistrán
    Abstract: This paper provides empirical evidence that combinations of option-implied and time series volatility forecasts that are conditional on current information are statistically superior to individual models, unconditional combinations, and hybrid forecasts. Superior forecasting performance is achieved both by taking into account the conditional expected performance of each model given current information and by combining individual forecasts. The method used in this paper to produce conditional combinations extends the application of conditional predictive ability tests to the selection of forecast combinations. The application is to volatility forecasts of the Mexican Peso-US Dollar exchange rate, where realized volatility calculated using intra-day data is used as a proxy for the (latent) daily volatility.
    Keywords: Composite Forecasts, Forecast Evaluation, GARCH, Implied volatility, Mexican Peso-U.S. Dollar Exchange Rate, Regime-Switching
    JEL: C22 C52 C53 G10
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:bdm:wpaper:2009-01&r=ecm
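    A stylized version of the conditional-combination step, in the spirit of conditional predictive ability testing: regress the loss differential between two volatility forecasts on lagged instruments, and let the predicted differential decide which forecast to use each period. The function name and the simple selection rule are illustrative, not the paper's exact procedure.

```python
import numpy as np

def conditional_weights(loss_a, loss_b, instruments):
    """Period-by-period weight on forecast A from predicted loss differences.

    loss_a, loss_b : (T,) realized losses of the two forecasts.
    instruments    : (T, m) variables known when the forecast is made.
    """
    d = loss_a[1:] - loss_b[1:]                       # loss differential
    H = np.column_stack([np.ones(len(d)), instruments[:-1]])
    beta, *_ = np.linalg.lstsq(H, d, rcond=None)
    pred = H @ beta                                   # predicted differential
    return np.where(pred < 0, 1.0, 0.0)               # use A when expected to lose less
```

    In practice the regression would be estimated on a rolling window and the weight applied out of sample, which is what makes the combination conditional on current information.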
  8. By: Dimitrios Thomakos; Konstantinos Nikolopoulos
    Abstract: We significantly extend earlier work by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003) on the properties and performance of the Theta model, and potentially explain its very good performance in the M3 forecasting competition. We derive a number of new theoretical results for theta forecasts when the data generating process contains both deterministic and stochastic trends. In particular, (a) we show that the standard theta forecast coincides with the minimum mean-squared error forecast when the innovations are uncorrelated; (b) we provide, for the first time, an optimal value for the theta parameter, which coincides with the first-order autocorrelation of the innovations, and thus provide a single optimal theta line; (c) we show that the optimal linear combination of two standard theta lines coincides with the single optimal theta line of (b). Under (b) and (c) we show that the optimal theta forecast function is identical to that of an ARIMA(1,1,0) model. Furthermore, we illustrate how the Theta model can be generalized to include local behavior in two different ways.
    Keywords: forecasting, theta model, unit roots.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:uop:wpaper:0034&r=ecm
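    For reference, the classical Theta method that these results characterize can be written in a few lines: a theta line rescales the curvature of the series around a fitted linear trend, and the standard forecast averages the extrapolations of lines theta = 0 and theta = 2. A sketch under those textbook choices (the paper's optimality results replace the fixed theta = 2 with a value tied to the first-order autocorrelation of the innovations):

```python
import numpy as np

def theta_forecast(y, horizon, theta=2.0, alpha=0.5):
    """Classical Theta-method forecast combining theta lines 0 and theta.

    Q_t(theta) = theta * y_t + (1 - theta) * (a + b t), with (a, b)
    from an OLS trend fit; line 0 is extrapolated linearly and line
    theta by simple exponential smoothing (parameter alpha).
    """
    T = len(y)
    t = np.arange(T)
    b, a = np.polyfit(t, y, 1)                  # slope, intercept of trend
    q = theta * y + (1 - theta) * (a + b * t)   # theta line
    level = q[0]
    for v in q[1:]:                             # SES of the theta line
        level = alpha * v + (1 - alpha) * level
    h = np.arange(1, horizon + 1)
    line0 = a + b * (T - 1 + h)                 # theta = 0: trend extrapolation
    return 0.5 * (line0 + level)                # equally weighted combination
```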
  9. By: Giorgio Calzolari (University of Firenze); Laura Magazzini (Università di Verona; Dipartimento di Scienze economiche (Università di Verona))
    Abstract: A dramatically large number of corner solutions occur when estimating, by (Gaussian) maximum likelihood, a simple model for panel data with random effects and autocorrelated errors. This can invalidate the results of applications to panel data with a short time dimension, even in a correctly specified model. We explain this unpleasant effect (usually underestimated, almost ignored in the literature) by showing that the expected log-likelihood is nearly flat, thus raising problems of poor identification.
    Keywords: panel data, maximum likelihood, identification.
    JEL: C13 C23
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:ver:wpaper:53&r=ecm
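    The flatness the authors describe is easy to visualize: evaluate the exact Gaussian log-likelihood of the random-effects model with AR(1) errors over a grid of parameter values and note how little it moves along a ridge. A minimal evaluator for a balanced, demeaned panel (our parameterization, not the paper's full model):

```python
import numpy as np
from scipy.stats import multivariate_normal

def re_ar1_loglik(Y, rho, var_u, var_e):
    """Log-likelihood of y_it = u_i + e_it with AR(1) errors e_it.

    Y : (N, T) balanced, demeaned panel.
    Cov of each individual's T-vector: var_u * 1 1' + var_e * R(rho),
    with R_st = rho**|s - t| the AR(1) correlation matrix.
    """
    N, T = Y.shape
    s, t = np.meshgrid(np.arange(T), np.arange(T))
    V = var_u + var_e * rho**np.abs(s - t)      # var_u broadcasts to 1 1'
    return multivariate_normal(mean=np.zeros(T), cov=V).logpdf(Y).sum()
```

    Scanning rho against the variance ratio var_u / var_e for small T shows nearly identical likelihood values along whole curves of parameter pairs, which is the poor-identification problem behind the corner solutions.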
  10. By: Jose Gonzalo Rangel; Robert F. Engle
    Abstract: We propose a new approach to model high and low frequency components of equity correlations. Our framework combines a factor asset pricing structure with other specifications capturing dynamic properties of volatilities and covariances between a single common factor and idiosyncratic returns. High frequency correlations mean revert to slowly varying functions that characterize long-term correlation patterns. We associate such long-term behavior with low frequency economic variables, including determinants of market and idiosyncratic volatilities. Flexibility in the time-varying level of mean reversion improves the empirical fit of equity correlations in the US and correlation forecasts at long horizons.
    Keywords: Yield curve, forecasting, economic activity
    JEL: C22 C32 C51 C53 G11 G12 G32
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:bdm:wpaper:2009-03&r=ecm
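    The low-frequency building block of such models is typically an exponential quadratic spline in time, around which the short-run (co)variance dynamics mean revert. A sketch of that component, following the spline-GARCH idea of Engle and Rangel (knot positions and coefficients are free parameters to be estimated):

```python
import numpy as np

def exp_quadratic_spline(T, knots, coefs, w0=0.0, w1=0.0):
    """Low-frequency variance component tau_t of a Spline-GARCH model.

    tau_t = exp(w0 + w1*t + sum_k c_k * max(t - t_k, 0)**2), t in [0, 1].
    The short-run GARCH component g_t (not shown) multiplies tau_t,
    so sigma^2_t = tau_t * g_t with g_t mean-reverting to one.
    """
    t = np.arange(T) / T
    log_tau = w0 + w1 * t
    for tk, ck in zip(knots, coefs):
        log_tau += ck * np.maximum(t - tk, 0.0)**2
    return np.exp(log_tau)
```

    In the factor version, separate spline components for market and idiosyncratic volatilities induce the slowly varying level to which high-frequency correlations revert.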
  11. By: Leonardo Melosi (Department of Economics, University of Pennsylvania)
    Abstract: This paper develops a dynamic stochastic general equilibrium model where firms are imperfectly informed. We estimate the model through likelihood-based methods and find that it can explain the highly persistent real effects of monetary disturbances that are documented by a benchmark VAR. The model of imperfect information nests a model of rational inattention where firms optimally choose the variances of signal noise, subject to an information-processing constraint. We present an econometric procedure to evaluate the predictions of this rational inattention model. Implementing this procedure delivers insights on how to improve the fit of rational inattention models.
    Keywords: Imperfect common knowledge; rational inattention; Bayesian econometrics; real effects of nominal shocks; VAR identification
    JEL: E3 E5 C32 D8
    Date: 2009–02–27
    URL: http://d.repec.org/n?u=RePEc:pen:papers:09-009&r=ecm
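    Likelihood-based estimation of such models usually reduces to evaluating a linear state-space likelihood with the Kalman filter at each parameter value, whether for maximum likelihood or inside a Bayesian sampler. A generic evaluator (the standard textbook filter, not the paper's specific model):

```python
import numpy as np

def kalman_loglik(y, A, C, Q, R, x0, P0):
    """Gaussian log-likelihood of y_t = C x_t + v_t, x_t = A x_{t-1} + w_t,
    with w ~ N(0, Q) and v ~ N(0, R); y is a (T, m) array."""
    x, P, ll = x0, P0, 0.0
    for yt in y:
        x, P = A @ x, A @ P @ A.T + Q           # prediction step
        S = C @ P @ C.T + R                     # forecast-error variance
        e = yt - C @ x                          # forecast error
        Sinv = np.linalg.inv(S)
        _, logdet = np.linalg.slogdet(S)
        ll += -0.5 * (len(yt) * np.log(2 * np.pi) + logdet + e @ Sinv @ e)
        K = P @ C.T @ Sinv                      # Kalman gain
        x, P = x + K @ e, P - K @ C @ P         # update step
    return ll
```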
  12. By: Shiko Maruyama (School of Economics, The University of New South Wales)
    Abstract: Sequential decision-making is a noticeable feature of strategic interactions among agents. The full estimation of sequential games, however, has been challenging due to the sheer computational burden, especially when the game is large and asymmetric. In this paper, I propose an estimation method for discrete choice sequential games that is computationally feasible, easy to implement, and efficient, by modifying the Geweke-Hajivassiliou-Keane (GHK) simulator, the most widely used probit simulator. I show that the recursive nature of the GHK simulator is easily dovetailed with the sequential structure of strategic interactions.
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:swe:wpaper:2009-01&r=ecm
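    For context, the recursive conditioning at the heart of the GHK simulator is easiest to see in the baseline case of a multivariate normal orthant probability: the Cholesky factor turns the joint event into a sequence of one-dimensional truncations, each conditioned on the previous draws. A standard sketch (the paper's contribution is to align this recursion with the order of moves in the game):

```python
import numpy as np
from scipy.stats import norm

def ghk_orthant(b, Sigma, draws=1000, seed=0):
    """GHK estimate of P(Z_1 < b_1, ..., Z_m < b_m) for Z ~ N(0, Sigma)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)               # Z = L e, e standard normal
    m = len(b)
    prob = np.ones(draws)
    e = np.zeros((draws, m))
    for j in range(m):
        upper = (b[j] - e[:, :j] @ L[j, :j]) / L[j, j]   # conditional bound
        pj = np.clip(norm.cdf(upper), 1e-12, 1.0)
        prob *= pj                              # truncation probabilities multiply
        u = rng.uniform(0.0, 1.0, draws)
        e[:, j] = norm.ppf(u * pj)              # truncated standard normal draw
    return prob.mean()
```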

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.