nep-for New Economics Papers
on Forecasting
Issue of 2015‒07‒18
seven papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting Inflation in Tunisia Using Dynamic Factors Model By AMMOURI, Bilel; TOUMI, Hassen; Zitouna, Habib
  2. In-Sample Confidence Bands and Out-of-Sample Forecast Bands for Time-Varying Parameters in Observation Driven Models By Francisco Blasques; Siem Jan Koopman; Katarzyna Łasak; André Lucas
  3. High-Dimensional Copula-Based Distributions with Mixed Frequency Data By Oh, Dong Hwan; Patton, Andrew J.
  4. Looking into the future of complex dynamic systems By Baggio, Rodolfo
  5. Local Polynomial Regressions versus OLS for Generating Location Value Estimates: Which is More Efficient in Out-of-Sample Forecasts? By Cohen, Jeffrey P.; Coughlin, Cletus C.; Clapp, John M.
  6. Rare Shocks vs. Non-linearities: What Drives Extreme Events in the Economy? Some Empirical Evidence By Michal Franta
  7. Backtesting Strategies Based on Multiple Signals By Robert Novy-Marx

  1. By: AMMOURI, Bilel; TOUMI, Hassen; Zitouna, Habib
    Abstract: This work presents a model for forecasting inflation using a monthly database. Conventional models for forecasting inflation use a small number of macroeconomic variables. In a globalized, interdependent world economy, models have to account for a large amount of information. Building such models is the goal of recent research in industrialized as well as developing countries. With a Dynamic Factors Model, short-term forecast values are closer to actual inflation than those obtained from conventional models. In our research we divide inflation into free inflation and administered inflation, and we test the performance of the DFM on each type. We find that the dynamic factors model leads to substantial forecasting improvements over simple benchmark regressions.
    Keywords: Inflation, PCA, VAR, Dynamic Factors Model, Kalman Filter, EM algorithm, State-space, forecast.
    JEL: C13 C22 C53 E31
    Date: 2015–07–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:65514&r=for
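The core DFM idea — compress a large panel into a few common factors, then forecast the factors — can be sketched in a toy example. This is not the authors' model: the one-factor setup, loadings, and panel below are invented, and the factor is extracted with plain power iteration rather than the Kalman filter/EM machinery the paper uses.

```python
import random
import statistics

random.seed(0)

# Simulated panel of 8 monthly series driven by one persistent common factor
# plus idiosyncratic noise (a stand-in for a large macro panel).
T, N = 120, 8
factor = [0.0]
for _ in range(1, T):
    factor.append(0.8 * factor[-1] + random.gauss(0, 1))
loadings = [random.uniform(0.5, 1.5) for _ in range(N)]
panel = [[loadings[i] * factor[t] + random.gauss(0, 0.5) for i in range(N)]
         for t in range(T)]

# Standardize each series.
means = [statistics.mean(col) for col in zip(*panel)]
sds = [statistics.stdev(col) for col in zip(*panel)]
z = [[(panel[t][i] - means[i]) / sds[i] for i in range(N)] for t in range(T)]

# First principal component via power iteration on the sample covariance.
cov = [[sum(z[t][i] * z[t][j] for t in range(T)) / T for j in range(N)]
       for i in range(N)]
v = [1.0] * N
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Estimated factor, AR(1) fit, one-step-ahead forecast.
f_hat = [sum(v[i] * z[t][i] for i in range(N)) for t in range(T)]
num = sum(f_hat[t] * f_hat[t - 1] for t in range(1, T))
den = sum(f_hat[t - 1] ** 2 for t in range(1, T))
phi = num / den
forecast = phi * f_hat[-1]
print(round(phi, 2))
```

The estimated AR coefficient recovers the factor's persistence (0.8 here, up to estimation noise); a full DFM would instead filter the factor and forecast inflation from it.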
  2. By: Francisco Blasques (VU University Amsterdam, the Netherlands); Siem Jan Koopman (VU University Amsterdam, the Netherlands); Katarzyna Łasak (VU University Amsterdam, the Netherlands); André Lucas (VU University Amsterdam, the Netherlands)
    Abstract: We study the performance of alternative methods for calculating in-sample confidence and out-of-sample forecast bands for time-varying parameters. The in-sample bands reflect parameter uncertainty only. The out-of-sample bands reflect both parameter uncertainty and innovation uncertainty. The bands are applicable to a large class of observation driven models and a wide range of estimation procedures. A Monte Carlo study is conducted for time-varying parameter models such as generalized autoregressive conditional heteroskedasticity and autoregressive conditional duration models. Our results show clear differences between the actual coverage provided by the different methods. We illustrate our findings in a volatility analysis for monthly Standard & Poor’s 500 index returns.
    Keywords: autoregressive conditional duration; delta-method; generalized autoregressive
    JEL: C52 C53
    Date: 2015–07–09
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20150083&r=for
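The paper's notion of "actual coverage" — how often a nominal 90% forecast band contains the realized value — can be checked by Monte Carlo even in a minimal setting. The AR(1) example below is an invented illustration, not one of the paper's models, and its simulation band reflects innovation uncertainty given estimated parameters only.

```python
import random

random.seed(1)

def simulate_ar1(phi, sigma, T):
    x = [0.0]
    for _ in range(1, T):
        x.append(phi * x[-1] + random.gauss(0, sigma))
    return x

def ar1_fit(x):
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    phi = num / den
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    sigma = (sum(e * e for e in resid) / len(resid)) ** 0.5
    return phi, sigma

# How often does a simulation-based 90% one-step band cover the outcome?
hits, reps = 0, 500
for _ in range(reps):
    x = simulate_ar1(0.7, 1.0, 201)
    train, actual = x[:200], x[200]
    phi, sigma = ar1_fit(train)
    draws = sorted(phi * train[-1] + random.gauss(0, sigma) for _ in range(400))
    lo, hi = draws[19], draws[379]  # approx. 5th and 95th percentiles
    if lo <= actual <= hi:
        hits += 1
coverage = hits / reps
print(coverage)
```

Coverage comes out near the nominal 0.90; bands that also account for parameter uncertainty, as studied in the paper, would be somewhat wider.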
  3. By: Oh, Dong Hwan (Board of Governors of the Federal Reserve System (U.S.)); Patton, Andrew J. (Duke University)
    Abstract: This paper proposes a new model for high-dimensional distributions of asset returns that utilizes mixed frequency data and copulas. The dependence between returns is decomposed into linear and nonlinear components, enabling the use of high frequency data to accurately forecast linear dependence, and a new class of copulas designed to capture nonlinear dependence among the resulting uncorrelated, low frequency, residuals. Estimation of the new class of copulas is conducted using composite likelihood, facilitating applications involving hundreds of variables. In- and out-of-sample tests confirm the superiority of the proposed models applied to daily returns on constituents of the S&P 100 index.
    Keywords: Composite likelihood; forecasting; high frequency data; nonlinear dependence
    JEL: C32 C51 C58
    Date: 2015–05–19
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2015-50&r=for
  4. By: Baggio, Rodolfo
    Abstract: The desire to know and foresee the future is bound to human nature. Traditional forecasting methods have favored reductionist linear approaches: variables and relationships are monitored in order to foresee future outcomes with simplified models and to derive theoretical and practical implications. The limitations of this attitude have become apparent in many cases, mainly when dealing with dynamic, evolving complex systems that encompass numerous interdependent factors and activities whose relationships may be highly nonlinear, resulting in an inherent unpredictability of their long-term behavior. Complexity science comprises important interdisciplinary research themes that have emerged in the last few decades and that allow us to tackle the issue, at least partially. This paper presents a brief overview of the complexity framework as a means to understand structures, characteristics, and relationships, and explores the most important implications and contributions of the literature on the predictability of complex systems. The objective is to allow the reader to gain a deeper appreciation of this approach.
    Keywords: forecasting, predictability, complex systems, nonlinear analysis, time series
    JEL: C00 C65 L83
    Date: 2015–06–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:65549&r=for
  5. By: Cohen, Jeffrey P. (University of Hartford); Coughlin, Cletus C. (Federal Reserve Bank of St. Louis); Clapp, John M. (University of Connecticut)
    Abstract: As an alternative to ordinary least squares (OLS), we estimate location values for single family houses using a standard housing price and characteristics dataset by local polynomial regressions (LPR), a semi-parametric procedure. We also compare the LPR and OLS models in the Denver metropolitan area in the years 2003, 2006 and 2010 with out-of-sample forecasting. We determine that the LPR model is more efficient than OLS at predicting location values in counties with greater densities of sales. Also, LPR outperforms OLS in 2010 for all 5 counties in our dataset. Our findings suggest that LPR is a preferable approach in areas with greater concentrations of sales and in periods of recovery following a financial crisis.
    Keywords: Land Values; Location Values; Semi-Parametric Estimation; Local Polynomial Regressions
    JEL: C14 H41 H54 R51 R53
    Date: 2015–07–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2015-014&r=for
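The LPR-versus-OLS comparison can be illustrated on synthetic data. The example below is an invented stand-in for the housing application: a one-dimensional nonlinear surface, a local linear (degree-1 local polynomial) smoother with a Gaussian kernel, and an out-of-sample mean squared error comparison against a global OLS line.

```python
import math
import random

random.seed(2)

def local_linear(xs, ys, x0, h):
    """Local linear estimate at x0 with a Gaussian kernel of bandwidth h."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    d = [x - x0 for x in xs]
    s0 = sum(w)
    s1 = sum(wi * di for wi, di in zip(w, d))
    s2 = sum(wi * di * di for wi, di in zip(w, d))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * di * yi for wi, di, yi in zip(w, d, ys))
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det  # intercept = fitted value at x0

# Nonlinear "location value" surface stand-in.
train_x = [random.uniform(0, 6) for _ in range(300)]
train_y = [math.sin(x) + random.gauss(0, 0.3) for x in train_x]
test_x = [random.uniform(0.5, 5.5) for _ in range(100)]
test_y = [math.sin(x) + random.gauss(0, 0.3) for x in test_x]

# Global OLS straight-line fit.
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
b = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
     / sum((x - mx) ** 2 for x in train_x))
a = my - b * mx

mse_ols = sum((test_y[i] - (a + b * test_x[i])) ** 2 for i in range(100)) / 100
mse_lpr = sum((test_y[i] - local_linear(train_x, train_y, test_x[i], 0.5)) ** 2
              for i in range(100)) / 100
print(mse_lpr < mse_ols)
```

With a curved surface and dense training data, the local smoother forecasts far better out-of-sample — consistent with the paper's finding that LPR wins where sales densities are high.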
  6. By: Michal Franta
    Abstract: A small-scale vector autoregression (VAR) is used to shed some light on the roles of extreme shocks and non-linearities during stress events observed in the economy. The model focuses on the link between credit/financial markets and the real economy and is estimated on US quarterly data for the period 1984–2013. Extreme shocks are accounted for by assuming t-distributed reduced-form shocks. Non-linearity is allowed for through a possible regime switch in the shock propagation mechanism. Strong evidence for fat tails in error distributions is found. Moreover, the results suggest that accounting for extreme shocks rather than explicit modeling of non-linearity contributes to the explanatory power of the model. Finally, it is shown that the accuracy of density forecasts improves if non-linearities and shock distributions with fat tails are considered.
    Keywords: Bayesian VAR, density forecasting, fat tails, non-linearity
    JEL: C11 C32 E44
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:cnb:wpaper:2015/04&r=for
  7. By: Robert Novy-Marx
    Abstract: Strategies selected by combining multiple signals suffer severe overfitting biases, because underlying signals are typically signed such that each predicts positive in-sample returns. “Highly significant” backtested performance is easy to generate by selecting stocks on the basis of combinations of randomly generated signals, which by construction have no true power. This paper analyzes t-statistic distributions for multi-signal strategies, both empirically and theoretically, to determine appropriate critical values, which can be several times standard levels. Overfitting bias also severely exacerbates the multiple testing bias that arises when investigators consider more results than they present. Combining the best k out of n candidate signals yields a bias almost as large as that obtained by selecting the single best of n^k candidate signals.
    JEL: C58 G11
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21329&r=for
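The mechanism — sign pure-noise signals in-sample, keep the best few, combine them — is easy to reproduce in simulation. The numbers below (120 periods, 20 signals, best 5) are arbitrary choices for illustration, not the paper's empirical design.

```python
import random
import statistics

random.seed(3)
T, n, k = 120, 20, 5

# n candidate signals whose strategy returns are pure noise (no true power).
strategies = [[random.gauss(0, 1) for _ in range(T)] for _ in range(n)]

def t_stat(r):
    return statistics.mean(r) / (statistics.stdev(r) / len(r) ** 0.5)

# Sign each signal so its in-sample mean return is positive, keep the best k.
signed = [r if statistics.mean(r) >= 0 else [-x for x in r] for r in strategies]
best = sorted(signed, key=t_stat, reverse=True)[:k]

# Equal-weight combination of the selected strategies.
combined = [sum(r[t] for r in best) / k for t in range(T)]
print(round(t_stat(combined), 2))
```

The combined backtest looks "highly significant" by the usual 1.96 cutoff even though every signal is noise by construction — exactly the overfitting bias the paper quantifies, and why multi-signal strategies need much larger critical values.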

This nep-for issue is ©2015 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.