nep-for New Economics Papers
on Forecasting
Issue of 2012‒02‒27
fifteen papers chosen by
Rob J Hyndman
Monash University

  1. Combination of Forecasts for the Oil Price: Application and Evaluation of Methodologies By Ercio Muñoz; Miguel Ricaurte; Mariel Siravegna
  2. How Informative are In–Sample Information Criteria to Forecasting? The Case of Chilean GDP By Carlos Medel
  3. Akaike or Schwarz? Which to Choose for Forecasting Chilean GDP? By Carlos Medel
  4. Forecasting inflation in Asian economies By Liew, Freddy
  5. The case for higher frequency inflation expectations By Guzman, Giselle C.
  6. Are Forecast Combinations Efficient? By Pablo Pincheira
  7. Demand forecasting methods and inventory management in Polish internet shops – research results By Chodak, Grzegorz; Latus, Łukasz
  8. New cohort fertility forecasts for the developed world By Mikko Myrskylä; Joshua R. Goldstein; Yen-hsin Alice Cheng
  9. Modelling Changes in the Unconditional Variance of Long Stock Return Series By Cristina Amado; Timo Terasvirta
  10. Measuring and Predicting Heterogeneous Recessions By Cem Çakmaklı; Richard Paap; Dick van Dijk
  11. Two Exercises of Inflation Modelling and Forecasting for Azerbaijan By Alexander Chubrik; Przemyslaw Wozniak; Gulnar Hajiyeva
  12. "Prediction via the Quantile-Copula Conditional Density Estimator". By Faugeras, Olivier
  13. U-MIDAS: MIDAS regressions with unrestricted lag polynomials By Foroni, Claudia; Marcellino, Massimiliano; Schumacher, Christian
  14. The relationship between central bank transparency and the quality of inflation forecasts: is it U-shaped? By Emna Trabelsi
  15. The "Out of Sample" Performance of Long-run Risk Models By Wayne E. Ferson; Suresh K. Nallareddy; Biqin Xie

  1. By: Ercio Muñoz; Miguel Ricaurte; Mariel Siravegna
    Abstract: This paper conducts an exhaustive out-of-sample forecasting evaluation exercise for the monthly price of crude oil between 1992 and 2011. The idea is to identify the forecasting strategy that results in the “best” forecasts in terms of mean forecasting error. To this end, a wide variety of econometric models as well as futures prices are tested for different forecasting horizons, both individually and in combination. We find that for short horizons (1 and 3 months), an ARIMA specification results in smaller forecasting errors, but for longer horizons (6-24 months), futures prices outperform the other models. All models are found to underestimate the true price of oil, on average. The combination of these individual models yields smaller forecasting errors than the “best” individual strategy only in a restricted sample ending in 2005. Nevertheless, when we tabulate the number of times each strategy yields the largest forecasting error relative to the alternatives, combinations of forecasts never yield the highest absolute error except one month ahead. These results are robust to the sample selection.
    Date: 2012–01
  2. By: Carlos Medel
    Abstract: There is no standard economic forecasting procedure that systematically outperforms the others at all horizons and with any dataset. A common way to proceed, in many contexts, is to choose the best model within a family based on a fitting criterion, and then forecast. I compare the out-of-sample performance of a large number of autoregressive integrated moving average (ARIMA) models, with some variations, chosen by three information criteria commonly used for model building: Akaike, Schwarz, and Hannan-Quinn. I perform this exercise to identify how to achieve the smallest root mean squared forecast error with models based on information criteria. Using the Chilean GDP dataset, I estimate with a rolling window sample to generate one- to four-step-ahead forecasts. I also examine the role of seasonal adjustment and the Easter effect on out-of-sample performance. After the estimation of more than 20 million models, the results show that Akaike and Schwarz are the better criteria for forecasting purposes, with the traditional ARMA specification preferred. Accounting for the Easter effect improves forecast accuracy only with seasonally adjusted data, and second-order stationarity works best.
    Date: 2012–01
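The three selection rules compared in Medel's exercise differ only in how they penalize extra parameters. As an illustrative sketch (not the paper's code; the log-likelihood values below are made up), the criteria can be computed from a fitted model's log-likelihood as follows:

```python
import math

def info_criteria(loglik, k, n):
    """Return (Akaike, Schwarz, Hannan-Quinn) criteria for a model with
    log-likelihood `loglik`, k estimated parameters, and n observations."""
    aic = -2 * loglik + 2 * k
    bic = -2 * loglik + k * math.log(n)
    hq = -2 * loglik + 2 * k * math.log(math.log(n))
    return aic, bic, hq

# Hypothetical fits: an ARMA(2,1) against a more parsimonious ARMA(1,0).
n = 120  # e.g. 30 years of quarterly GDP growth
big = info_criteria(loglik=-150.0, k=4, n=n)    # better fit, more parameters
small = info_criteria(loglik=-153.0, k=2, n=n)  # worse fit, fewer parameters

# Schwarz penalizes the extra parameters harder than Akaike (log(120) > 2),
# so here it prefers the smaller model while Akaike prefers the larger one.
print(big, small)
```

Lower values are better for all three criteria; which model is selected can therefore differ across criteria for the same data, which is why their out-of-sample forecasting performance can be compared at all.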
  3. By: Carlos Medel
    Abstract: Schwarz. In this paper I evaluate the predictive ability of the Akaike and Schwarz information criteria using autoregressive integrated moving average models with sectoral data of Chilean GDP. In terms of root mean square error, and after the estimation of more than a million models, the results indicate that, on average, models based on the Schwarz criterion perform better than those selected with the Akaike, for all four horizons analyzed. Furthermore, the statistical significance of these differences indicates that the superiority of the Schwarz criterion holds mainly at longer horizons.
    Date: 2012–01
  4. By: Liew, Freddy
    Abstract: This paper surveys the recent literature on inflation forecasting and conducts an extensive empirical analysis of forecasting inflation in Singapore, Japan, South Korea and Hong Kong, paying particular attention to whether the inflation-markup theory can help to forecast inflation. We first review the relative performance of different predictors in forecasting h-quarter-ahead inflation using single equations. These models include the autoregressive model and bivariate Phillips curve models. The predictors are selected from business activity, financial activity, trade activity, labour market, interest rate market, money market, exchange rate market and global commodity market variables. We then evaluate a vector autoregressive inflation-markup model against the single-equation models to understand whether there is any gain in forecasting using the inflation-markup theory. The paper subsequently analyses the robustness of these results by examining different forecasting procedures in the presence of structural breaks. Empirical results suggest that inflation in Singapore, Hong Kong and South Korea is best predicted by financial and business activity variables. For Japan, global commodity variables provide the most predictive content for inflation. In general, monetary variables tend to perform poorly. These results hold even when structural breaks are taken into consideration. The vector autoregressive inflation-markup model does improve on the single-equation models as the forecasting horizon increases, and these gains are found to be significant for Japan and Korea.
    Keywords: Inflation; Markup; Forecasting; Asia; Structural Break
    JEL: C32 C53 E31
    Date: 2012–01–01
  5. By: Guzman, Giselle C.
    Abstract: I present evidence that higher frequency measures of inflation expectations outperform lower frequency measures of inflation expectations in tests of accuracy, predictive power, and rationality. For decades, the academic literature has focused on three survey measures of expected inflation: the Livingston Survey, the Survey of Professional Forecasters, and the Michigan Surveys of Consumers. While these measures have been useful in developing models of forecasting inflation, the data are low frequency measures that are anachronistic in the modern era of high frequency and real-time data. I present a collection of 37 different measures of inflation expectations, including many previously unexploited monthly and real-time measures of inflation expectations. These higher frequency measures tend to outperform the standard three low frequency survey measures in tests of accuracy, predictive power, and rationality, indicating that there are benefits to using higher frequency measures of inflation expectations. Out-of-sample forecasts confirm the findings.
    Keywords: inflation; expectations; sentiment; TIPS; surveys; forecasting; Michigan; SPF; Livingston; time-series; econometrics; predictive power; out-of-sample forecasts; high frequency; Rational Expectations Hypothesis; Efficient Markets Hypothesis; hypothesis testing; inflation forecasting
    JEL: C51 C52 C12 G00 E47 D84 E58 E30 C02 G14 C82 E31 E44 C32 C13 D03 C53 C20 C22 C42 D83 C81 G10 E37 C01
    Date: 2011–06–29
  6. By: Pablo Pincheira
    Abstract: It is well known that weighted averages of two competing forecasts may reduce Mean Squared Prediction Errors (MSPE) and may also introduce certain inefficiencies. In this paper we take an in-depth view of one particular type of inefficiency stemming from simple combination schemes. We identify testable conditions under which every linear convex combination of two forecasts displays this type of inefficiency. In particular, we show that the process of taking averages of forecasts may induce inefficiencies in the combination, even when the individual forecasts are efficient. Furthermore, we show that the so-called "optimal weighted average" traditionally presented in the literature may indeed be suboptimal. We propose a simple testable condition to detect whether this traditional weighted average is optimal in a broader sense. An optimal "recombination weight" is introduced. Finally, we illustrate our findings with simulations and an empirical application in the context of the combination of inflation forecasts.
    Date: 2012–01
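The "optimal weighted average" the abstract refers to is the classical Bates-Granger weight, which minimizes the MSPE of a convex combination of two forecasts given their error variances and covariance. A minimal sketch with toy error series (assuming unbiased forecasts, so second moments are taken around zero; the numbers are invented):

```python
def optimal_weight(e1, e2):
    """Bates-Granger weight w on forecast 1 (and 1-w on forecast 2) that
    minimizes the MSPE of the combination, computed from the two
    forecast-error series: w* = (s22 - s12) / (s11 + s22 - 2*s12)."""
    n = len(e1)
    s11 = sum(a * a for a in e1) / n          # MSPE of forecast 1
    s22 = sum(b * b for b in e2) / n          # MSPE of forecast 2
    s12 = sum(a * b for a, b in zip(e1, e2)) / n  # error covariance
    return (s22 - s12) / (s11 + s22 - 2 * s12)

# Toy errors: forecast 2 is noisier, so the weight leans on forecast 1.
e1 = [0.1, -0.2, 0.15, -0.1, 0.05]
e2 = [-0.3, 0.4, 0.2, 0.35, -0.25]
w = optimal_weight(e1, e2)
combined = [w * a + (1 - w) * b for a, b in zip(e1, e2)]
```

By construction the combined MSPE is no larger than either individual MSPE; the paper's point is that this weight, optimal within the class of convex combinations, can still leave exploitable inefficiencies in the combined forecast.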
  7. By: Chodak, Grzegorz; Latus, Łukasz
    Abstract: This article presents research on inventory control in Polish internet shops. The first part reports the results of a study of the demand forecasting methods most commonly used by online shops. The second part presents a comprehensive analysis of replenishment strategies, emphasising the radical solutions and the factors influencing the choice of specific logistic solutions, along with extensive commentary by the authors. Changes over the last four years in the percentage share of in-stock products are also analysed.
    Keywords: demand forecasting; e-commerce; inventory control; internet shop
    JEL: L81 L86
    Date: 2011–09
  8. By: Mikko Myrskylä (Max Planck Institute for Demographic Research, Rostock, Germany); Joshua R. Goldstein (Max Planck Institute for Demographic Research, Rostock, Germany); Yen-hsin Alice Cheng (Max Planck Institute for Demographic Research, Rostock, Germany)
    Abstract: The 1970s worries of the "population bomb" were replaced in the 1990s with concerns of population aging driven by falling birth rates. Across the developed world, the nearly universally used fertility indicator, the period total fertility rate, fell well below two children per woman. However, declines in period fertility have largely been an artifact of later – but not necessarily less – childbearing. We produce new estimates of the actual number of children women have over their lifetimes – cohort fertility – for 37 developed countries. Our results suggest that family size has remained high in many "low fertility" countries. For example, cohort fertility averages 1.8 for the 1975 birth cohort in the 37 countries for which the average period total fertility rate was only 1.5 in 2000. Moreover, we find that the long-term decline in cohort fertility has flattened or reversed in all world regions previously characterized by low fertility. These results are robust to statistical forecast uncertainty and the impact of the late 2000s recession. An application of the new forecasts analyzing the determinants of cohort fertility finds that the key dimensions of development that have been hypothesized to be important for fertility – general socioeconomic development, per capita income, and gender equality – are all positively correlated with fertility for the 1970s cohorts. Gender equality, however, emerges as the strongest determinant: where the gap in economic, political, and educational achievement between women and men is small, cohort fertility is high, whereas where the gap is large, fertility is low. Our new cohort fertility forecasts that document the flattening and even reversal of cohort fertility have large implications for the future of population aging and growth, particularly over the long term.
    Keywords: World, cohort fertility, developed areas, forecasts
    JEL: J1 Z0
    Date: 2012–02
  9. By: Cristina Amado (Universidade do Minho - NIPE); Timo Terasvirta (CREATES, Department of Economics and Business, Aarhus University)
    Abstract: In this paper we develop a testing and modelling procedure for describing long-term volatility movements in very long return series. For this purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component, as in Amado and Teräsvirta (2011). The latter component is modelled by incorporating smooth changes so that the unconditional variance is allowed to evolve slowly over time. Statistical inference is used to specify the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building procedure is illustrated with an application to daily returns of the Dow Jones Industrial Average stock index covering a period of more than ninety years. The main conclusions are as follows. First, the LM tests strongly reject the assumption of constancy of the unconditional variance. Second, the results show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test, we find evidence that the new model's volatility forecasts are more accurate than those of the GJR-GARCH model at all horizons for a subset of the long return series.
    Keywords: Model specification; Conditional heteroskedasticity; Lagrange multiplier test; Time-varying unconditional variance; Long financial time series; Volatility persistence
    JEL: C12 C22 C51 C52 C53
    Date: 2012
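The decomposition the abstract describes writes the variance as sigma_t^2 = h_t * g_t, where h_t is a standard GARCH-type conditional component and g_t is a deterministic, slowly moving function of rescaled time built from logistic transitions. A minimal sketch of the unconditional component with a single transition (the parameter values are illustrative, not estimates from the paper):

```python
import math

def logistic_transition(s, gamma, c):
    """Smooth transition G(s; gamma, c) in (0, 1), with s = t/T rescaled time,
    gamma the transition speed, and c the transition location."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def unconditional_component(s, delta=1.5, gamma=20.0, c=0.5):
    """g(t) = 1 + delta * G(t/T): slowly varying unconditional-variance
    component with one upward shift around the middle of the sample."""
    return 1.0 + delta * logistic_transition(s, gamma, c)

# g_t rises smoothly from about 1 early in the sample to about 1 + delta
# late in the sample; sigma_t^2 = h_t * g_t scales the GARCH variance by it.
T = 1000
g = [unconditional_component(t / T) for t in range(T)]
```

With several additive transition terms, g_t can capture multiple slow shifts in the variance level; the sequence of LM tests in the paper is what decides how many transitions to include.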
  10. By: Cem Çakmaklı (Department of Quantitative Economics, University of Amsterdam); Richard Paap (Econometric Institute, Erasmus University Rotterdam); Dick van Dijk (Econometric Institute, Erasmus University Rotterdam)
    Abstract: This paper conducts an empirical analysis of the heterogeneity of recessions in monthly U.S. coincident and leading indicator variables. Univariate Markov-switching models indicate that it is appropriate to allow for two distinct recession regimes, corresponding with ‘mild’ and ‘severe’ recessions. All downturns start with a mild decline in the level of economic activity. Contractions that develop into severe recessions mostly correspond with periods of substantial credit squeezes as suggested by the ‘financial accelerator’ theory. Multivariate Markov-switching models that allow for phase shifts between the cyclical regimes of industrial production and the Conference Board Leading Economic Index confirm these findings.
    Keywords: Business cycle, phase shifts, regime-switching models, Bayesian analysis
    JEL: C11 C32 C51 C52 E32
    Date: 2012–02
  11. By: Alexander Chubrik; Przemyslaw Wozniak; Gulnar Hajiyeva
    Abstract: The paper proposes two econometric models of inflation for Azerbaijan: one eclectic and based on monthly data, the other based on quarterly data and taking into account disequilibrium in the money market. The inflation regression based on monthly data shows that consumer price dynamics are explained by money growth (the more money, the higher the inflation), exchange rate behaviour (appreciation drives disinflation), commodity price dynamics (“imported” inflation), and administrative changes in regulated prices. For the quarterly model, a nominal money demand equation (with inflation, real non-oil GDP, and the nominal interest rate on foreign currency deposits as predictors) and a money supply equation were estimated, and the error-correction mechanism from the money demand equation was included in the inflation equation. It is shown that disequilibrium in the money market (supply higher than demand) drives inflation, together with money supply growth, nominal exchange rate depreciation, and administrative changes in prices. No cost-push variables appeared significant in this equation specification. Both models give similar inflation projections, but sudden changes in money demand (2012) lead to significant differences between the projections. It is shown that money is the most important inflation determinant, explaining up to 97.8% of CPI growth between 2012 and 2015, and that in order to keep inflation under control the Central Bank of Azerbaijan should link money supply to real non-oil GDP growth.
    Keywords: Inflation modelling, Inflation forecasting, Money demand, Money supply, Azerbaijan
    JEL: C32 E31 E41 O52
    Date: 2012
  12. By: Faugeras, Olivier
    Date: 2012
  13. By: Foroni, Claudia; Marcellino, Massimiliano; Schumacher, Christian
    Abstract: Mixed-data sampling (MIDAS) regressions allow one to estimate dynamic equations that explain a low-frequency variable by high-frequency variables and their lags. When the difference in sampling frequencies between the regressand and the regressors is large, distributed lag functions are typically employed to model the dynamics while avoiding parameter proliferation. In macroeconomic applications, however, differences in sampling frequencies are often small. In such a case, it might not be necessary to employ distributed lag functions. In this paper, we discuss the pros and cons of unrestricted lag polynomials in MIDAS regressions. We derive unrestricted MIDAS regressions (U-MIDAS) from linear high-frequency models, discuss identification issues, and show that their parameters can be estimated by OLS. In Monte Carlo experiments, we compare U-MIDAS to MIDAS with functional distributed lags estimated by NLS. We show that U-MIDAS generally performs better than MIDAS when mixing quarterly and monthly data. On the other hand, with larger differences in sampling frequencies, distributed lag functions outperform unrestricted polynomials. In an empirical application to out-of-sample nowcasting of GDP in the US and the euro area using monthly predictors, we find a good performance of U-MIDAS for a number of indicators, although the results depend on the evaluation sample. We suggest considering U-MIDAS as a potential alternative to the existing MIDAS approach, in particular when mixing monthly and quarterly variables. In practice, the choice between the two approaches should be made on a case-by-case basis, depending on their relative performance.
    Keywords: mixed data sampling; distributed lag polynomials; time aggregation; nowcasting
    JEL: E37 C53
    Date: 2011
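In the simplest quarterly/monthly case, a U-MIDAS regression gives each of the three monthly observations within the quarter its own coefficient and estimates them by plain OLS, with no functional lag polynomial. A self-contained sketch on synthetic data (the data-generating process and coefficient values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: quarterly target y_t explained by the three monthly
# observations of an indicator x within each quarter.
n_q = 200                          # number of quarters
x = rng.normal(size=3 * n_q)       # monthly indicator series
true_beta = np.array([0.5, 0.3, 0.1])  # one unrestricted coefficient per month

X = x.reshape(n_q, 3)              # row t = the 3 monthly obs of quarter t
y = X @ true_beta + 0.1 * rng.normal(size=n_q)

# U-MIDAS: stack the monthly regressors and run OLS, no lag polynomial.
Xc = np.column_stack([np.ones(n_q), X])
beta_hat, *_ = np.linalg.lstsq(Xc, y, rcond=None)
# beta_hat[0] is the intercept; beta_hat[1:] recovers the monthly weights.
```

With larger frequency mismatches (e.g. daily regressors for a quarterly target), this unrestricted design matrix grows quickly, which is exactly when the paper finds the restricted distributed-lag MIDAS polynomials regain the advantage.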
  14. By: Emna Trabelsi
    Abstract: A recent theoretical literature has highlighted the potential dangers of further increasing information disclosure by central banks. This paper provides an empirical investigation of the existence of an optimal degree of transparency along the lines of van der Cruijsen et al. We test for a quadratic relationship between central bank transparency and inflation persistence, introducing some technical and economic modifications. In particular, we use three new measures of transparency. An appropriate U-shape test, run through a Stata routine recently developed by Lind and Mehlum, indicates that an intermediate optimal degree of transparency exists and is robust, although its level is not. These results were obtained using a panel of 11 OECD central banks over the period 1999-2009. The estimations were run using a bias-corrected LSDVC estimator, a recent technique developed by Bruno for short dynamic panels with fixed effects, extended to accommodate unbalanced data.
    Keywords: Intermediate optimal transparency degree, inflation forecasts, inflation persistence, u-shaped relationship, non linear modeling, LSDVC, Principal Component Analysis.
    JEL: C23 E58
    Date: 2012–01–02
  15. By: Wayne E. Ferson; Suresh K. Nallareddy; Biqin Xie
    Abstract: This paper studies the ability of long-run risk models to explain out-of-sample asset returns during 1931-2009. The long-run risk models perform relatively well on the momentum effect. A cointegrated version of the model outperforms the classical, stationary version. Both the long-run and the short-run consumption shocks in the models are empirically important for the models’ performance. The models’ average pricing errors are especially small in the decades from the 1950s to the 1990s. When we restrict the risk premiums to identify structural parameters, average pricing errors are larger but error variances are often smaller. The mean squared errors are not substantially better than those of the classical CAPM, except for momentum.
    JEL: E21 E27 G12
    Date: 2012–02

This nep-for issue is ©2012 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.