nep-for New Economics Papers
on Forecasting
Issue of 2006‒06‒03
ten papers chosen by
Rob J Hyndman
Monash University

  1. Short-term forecasts of euro area real GDP growth - an assessment of real-time performance based on vintage data By Marie Diron
  2. Modelling inflation in the Euro Area By Eilev S. Jansen
  3. Volatility Forecast Comparison using Imperfect Volatility Proxies By Andrew Patton
  4. Proprietary Income, Entrepreneurial Risk, and the Predictability of U.S. Stock Returns By Mathias Hoffmann
  5. Learning to Forecast the Exchange Rate: Two Competing Approaches By Paul De Grauwe; Agnieszka Markiewicz
  6. Forecasting the book return on owners' equity and the identification of mispriced securities: market efficiency tests on Swedish data By Skogsvik, Stina; Skogsvik, Kenth
  7. The airport Network and Catchment area Competition Model - A comprehensive airport demand forecasting system using a partially observed database By Eric Kroes; Abigail Lierens; Marco Kouwenhoven
  8. Competing Approaches to Forecasting Elections: Economic Models, Opinion Polling and Prediction Markets By Andrew Leigh; Justin Wolfers
  9. Modelling departure time and mode choice By Andrew Daly; Stephane Hess; Geoff Hyman; John Polak; Charlene Rohr
  10. Valuation of uncertainty in travel time and arrival time - some findings from a choice experiment By Dirk Van Amelsfort; Michiel Bliemer

  1. By: Marie Diron (Brevan Howard Asset Management LLP, London, SW1Y 6XA, United Kingdom.)
    Abstract: Economic policy makers, international organisations and private-sector forecasters commonly use short-term forecasts of real GDP growth based on monthly indicators, such as industrial production, retail sales and confidence surveys. An assessment of the reliability of such tools and of the source of potential forecast errors is essential. While many studies have evaluated the size of forecast errors related to model specifications and unavailability of data in real time, few have provided a complete assessment of forecast errors, which should notably take into account the impact of data revision. This paper proposes to bridge this gap. Using four years of data vintages for euro area conjunctural indicators, the paper decomposes forecast errors into four elements (model specification, erroneous extrapolations of the monthly indicators, revisions to the monthly indicators and revisions to the GDP data series) and assesses their relative sizes. The results show that gains in accuracy of forecasts achieved by using monthly data on actual activity rather than surveys or financial indicators are offset by the fact that the former set of monthly data is harder to forecast and less timely than the latter set. While the results presented in the paper remain tentative due to limited data availability, they provide a benchmark which future research may build on.
    Keywords: Forecasting, conjunctural analysis, bridge equations, real-time forecasting, vintage data.
    JEL: C22 C53 E17 E37 E66
    Date: 2006–05
  2. By: Eilev S. Jansen (Norges Bank and Norwegian University of Science and Technology)
    Abstract: The paper presents an incomplete competition model (ICM), where inflation is determined jointly with unit labour cost growth. The ICM is estimated on data for the Euro area and evaluated against existing models, i.e. the implicit inflation equation of the Area Wide model (AWM) - cf. Fagan, Henry and Mestre (2001) - and estimated versions of the (single-equation) P* model and a hybrid New Keynesian Phillips curve. The evidence from these comparisons does not invite decisive conclusions. There is, however, some support in favour of the (reduced-form) AWM inflation equation. It is the only model that encompasses a general unrestricted model, and it forecast-encompasses the competitors when tested on 20 quarters of one-step-ahead forecasts.
    Keywords: inflation, incomplete competition model, Area Wide model, P*-model, New Keynesian Phillips curve, model evaluation, forecast encompassing.
    JEL: C22 C32 C52 C53 E31
    Date: 2004–06–20
  3. By: Andrew Patton (London School of Economics)
    Abstract: The use of a conditionally unbiased, but imperfect, volatility proxy can lead to undesirable outcomes in standard methods for comparing conditional variance forecasts. We derive necessary and sufficient conditions on the functional form of the loss function for the ranking of competing volatility forecasts to be robust to the presence of noise in the volatility proxy, and derive some interesting special cases of this class of "robust" loss functions. We motivate the theory with analytical results on the distortions caused by some widely-used loss functions when used with standard volatility proxies such as squared returns, the intra-daily range or realised volatility. The methods are illustrated with an application to the volatility of returns on IBM over the period 1993 to 2003.
    Keywords: forecast evaluation; forecast comparison; loss functions; realised variance; range
    JEL: C53 C52 C22
    Date: 2006–05–01
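The robust-loss idea in the abstract above can be sketched numerically: with the squared return as a conditionally unbiased but noisy proxy, a loss from the robust class still ranks an accurate variance forecast ahead of a distorted one on average. MSE and QLIKE are two standard members of this class; the data-generating process and forecasts below are illustrative inventions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
h_true = 0.5 + 0.9 * np.abs(np.sin(np.arange(T) / 50.0))  # true conditional variance
r = rng.normal(0.0, np.sqrt(h_true))                      # returns with that variance
proxy = r ** 2          # squared return: conditionally unbiased but noisy proxy

h_good = h_true.copy()  # an accurate variance forecast
h_bad = 1.5 * h_true    # a systematically inflated forecast

def mse(proxy, h):
    """MSE loss, one member of the robust class."""
    return np.mean((proxy - h) ** 2)

def qlike(proxy, h):
    """QLIKE loss, another robust member (requires a positive proxy)."""
    return np.mean(proxy / h - np.log(proxy / h) - 1.0)

# Despite the noise in the proxy, both robust losses rank the accurate
# forecast ahead of the inflated one.
assert mse(proxy, h_good) < mse(proxy, h_bad)
assert qlike(proxy, h_good) < qlike(proxy, h_bad)
```

A non-robust loss evaluated on the same noisy proxy can reverse such rankings, which is the distortion the paper characterises.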
  4. By: Mathias Hoffmann
    Abstract: Small businesses tend to be owned by wealthy households. Such entrepreneur households also own a large share of U.S. stock market wealth. Fluctuations in entrepreneurs’ hunger for risk could therefore help explain time variation in the equity premium. The paper suggests an entrepreneurial distress factor that is based on a cointegrating relationship between consumption and income from proprietary and non-proprietary wealth. I call this factor the cpy residual. It reflects cyclical fluctuations in proprietary income, is highly correlated with cross-sectional measures of idiosyncratic entrepreneurial risk and has considerable forecasting power for U.S. stock returns. In line with the theoretical mechanism, the correlation between cpy and the stock market has been declining since the beginning of the 1980s as stock market participation has widened and as entrepreneurial risk has become more easily diversifiable in the wake of U.S. state-level bank deregulation.
    Keywords: non-insurable background risk, entrepreneurial income, equity risk premium, long-horizon predictability
    JEL: E21 E31 G12
    Date: 2006
  5. By: Paul De Grauwe; Agnieszka Markiewicz
    Abstract: In this paper, we investigate the behavior of the exchange rate within the framework of an asset pricing model. We assume boundedly rational agents who use simple rules to forecast the future exchange rate. They test these rules continuously using two learning mechanisms. The first one, the fitness method, assumes that agents evaluate forecasts by computing their past profitability. In the second mechanism, agents learn to improve these rules using statistical methods. First, we find that both learning mechanisms reveal the fundamental value of the exchange rate in the steady state. Second, both mechanisms mimic regularities observed in foreign exchange markets, namely exchange rate disconnect and excess volatility. The fitness learning rule generates the disconnection at different frequencies, while the statistical method has this ability only at high frequencies. Statistical learning can produce excess volatility of a magnitude closer to reality than fitness learning, but can also lead to explosive solutions.
    JEL: C32 F31
    Date: 2006
  6. By: Skogsvik, Stina (Centre for Financial Analysis and Managerial Economics in Accounting); Skogsvik, Kenth (Centre for Financial Analysis and Managerial Economics in Accounting)
    Abstract: The general purpose of the study is to evaluate whether financial statement information is relevant to investors in determining stock values, and to test whether the Swedish stock market is efficient with respect to publicly available financial statement information. One major contribution of the study is to provide a more valid test of traditional "fundamental analysis", and a more powerful test of market efficiency, than in previous research.
Ou & Penman (1989) constructed the so-called Pr-measure to predict the sign of one-year-ahead earnings changes and tested a trading strategy based on these predictions on U.S. stock market data. Their results indicated that financial statements capture fundamentals that are not reflected in stock market prices. However, later studies have shown that the results in Ou & Penman are very sensitive to various choices of test procedures, and a lack of stability in the results over time and across countries has been documented.
An obvious shortcoming of a trading strategy based solely on the Pr-measure is that it does not incorporate any assessment of whether its predictions of future earnings changes are already included in stock market prices. This study contributes by formulating an investment strategy that combines the predictions of future company profitability according to a prediction model with an assessment of whether these predictions are reflected in stock market prices. The study provides a more powerful test of market efficiency, since positions are only taken in those stocks for which the predictions of future company profitability are not reflected in current prices. In principle, the investor takes a long position when the estimated prediction model indicates an increase in future company profitability while the market expects future profitability to decrease. Analogously, the investor takes a short position when the prediction model indicates a decrease in future profitability while the market expects future profitability to increase.
The sample companies were all listed on the Stockholm Stock Exchange. The prediction models were estimated and validated using financial statement information for the period 1970-2002. Logit analysis was chosen as the statistical method of estimation: the probability of an increase in the future return on owners' equity was estimated given the past return on owners' equity. The performance of the investment strategy was evaluated in the period 1983-2003. Based on the latest available annual report, the probability of an increase in the future return on owners' equity was estimated in accordance with the prediction model. Next, an assessment of whether these predictions were reflected in stock market prices was made, using a specification of the residual income valuation model. Stock market positions were held for 36 months.
Two abnormal return metrics were tried in the study: the abnormal CAPM return (Jensen's alpha) and the market-adjusted return. Controls for company characteristics such as size, book-to-market, E/P ratio and dividend yield were also conducted. The abnormal return to the investment strategy was considerably higher than for an investment strategy in the spirit of Ou & Penman, i.e. a strategy based only on the prediction model. In the main, the tests show that the abnormal return to the strategy was significant. Thus, the results reported in this study are not consistent with the efficient market hypothesis.
    Keywords: Fundamental analysis; Return on owners' equity; Market efficiency
    Date: 2005–06–21
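The estimation step described above can be sketched roughly as follows: a one-regressor logit for the probability of an increase in next-year return on owners' equity, given past return on owners' equity. The data here are synthetic and mean-reverting, and the single-regressor specification and gradient-ascent fit are my simplifications, not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic panel: past return on owners' equity (ROE) and an indicator of
# whether next year's ROE increased.  Built-in mean reversion means high past
# ROE predicts a decrease, the kind of pattern a logit model can exploit.
n = 500
roe_past = rng.normal(0.10, 0.08, n)
p_up = 1.0 / (1.0 + np.exp(8.0 * (roe_past - 0.10)))   # true model: mean reversion
up = (rng.uniform(size=n) < p_up).astype(float)

# Fit logit P(increase) = sigmoid(a + b * roe_past) by gradient ascent
# on the log-likelihood.
a, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(a + b * roe_past)))
    a += 0.5 * np.mean(up - p)
    b += 0.5 * np.mean((up - p) * roe_past)

# The fitted slope recovers the mean-reverting (negative) sign.
assert b < 0
```

In the study itself, such fitted probabilities are then compared with market-implied expectations before any position is taken.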
  7. By: Eric Kroes; Abigail Lierens; Marco Kouwenhoven
    Abstract: For airport capacity planning, long-term forecasts of aircraft movements are required. The classical approach to generating such forecasts has been to use time series data together with econometric models to extrapolate observed patterns of growth into the future. More recently, the dramatically increased competition between airports, airlines and alliances on the one hand, and serious capacity problems on the other, have made this approach inadequate. Airport demand forecasts now need to focus heavily on the many competitive elements in addition to the growth element. In our paper we describe a comprehensive, pragmatic air demand model system that has been implemented for Amsterdam’s Schiphol Airport. This model, called the Airport Network and Catchment area Competition Model (ACCM), provides forecasts of future air passenger volumes and aircraft movements, explicitly taking account of the choices of air passengers among competing airports in Europe. The model uses a straightforward nested logit structure to represent the choices of air passengers among alternative departure airports, transport modes to the airport, airlines/alliances/low-cost carriers, types of flight (direct versus transfer), air routes, and main modes of transport (for those distances where car and high-speed train may be an alternative option). Target year passenger forecasts are obtained by taking observed base year passenger numbers and applying two factors to these: (1) a growth factor, to express the global impact of key drivers of passenger demand growth such as population size, income and trade volume; (2) a market share ratio factor, to express the increase (or decline) in attractiveness of the airport due to anticipated changes in its air network and landside accessibility, relative to other (competing) airports. The target year passenger forecasts are then converted into aircraft movements to assess whether or not the available runway capacity is adequate.
Key inputs to the model are databases describing, for base year and target year, the level of service (travel times, costs, service frequencies) of the landside accessibility of all departure airports considered, and the air-side networks of all departure and hub airports considered. The air-side networks (supply) are derived from a detailed OAG-based flight simulation model developed elsewhere. A particular characteristic of the ACCM implementation for Schiphol Airport is that it had to be developed using only a partial data set describing existing demand: although detailed OD information was available for air passengers using Schiphol Airport in 2003, no such data was available for other airports or other transport modes. As a consequence, a synthetic modelling approach was adopted, in which the unobserved passenger segments for the base year were synthesised using market share ratios between unobserved and observed segment forecasts for the base year, together with the observed base year passenger volumes. This process is elegant and appealing in principle, but is not without problems when applied in a real case. In the paper we first set out the objectives of the ACCM as it was developed, and the operational and practical constraints that were imposed. We then describe how the ACCM fits with model developments in the literature, and sketch the overall structure that was adopted. The following sections describe the modelled alternatives and the utility structures, and the level-of-service databases used for landside and air-side networks, for base year and target year. We then describe in some detail how we dealt with the partial data issue: the procedure to generate non-observed base year data, the validation, the problems encountered, and the solutions chosen.
Finally we shall show a number of the results obtained (subject to permission by the Dutch Ministry of Transport), and provide some conclusions and recommendations for further application of the methodology.
    Date: 2005–08
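The two-factor forecast described in the abstract reduces to simple arithmetic. The numbers below are entirely hypothetical, not Schiphol figures:

```python
# Target-year passenger forecast as described: base-year volume scaled by a
# growth factor and a market share ratio factor.  All numbers are illustrative.
base_passengers = 40_000_000     # hypothetical base-year passenger volume
growth_factor = 1.25             # global demand growth (population, income, trade)
share_target = 0.32              # modelled target-year market share
share_base = 0.30                # modelled base-year market share

target_passengers = base_passengers * growth_factor * (share_target / share_base)

# Convert passengers to aircraft movements with an assumed average load
# per movement, to compare against runway capacity.
passengers_per_movement = 110    # hypothetical seats times load factor
movements = target_passengers / passengers_per_movement
```

The share ratio term is what distinguishes this from a pure growth extrapolation: an airport losing relative attractiveness can see movements fall even as total demand grows.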
  8. By: Andrew Leigh; Justin Wolfers
    Abstract: We review the efficacy of three approaches to forecasting elections: econometric models that project outcomes on the basis of the state of the economy; public opinion polls; and election betting (prediction markets). We assess the efficacy of each in light of the 2004 Australian election. This election is particularly interesting both because of innovations in each forecasting technology, and because the increased majority achieved by the Coalition surprised most pundits. While the evidence for economic voting has historically been weak for Australia, the 2004 election suggests an increasingly important role for these models. The performance of polls was quite uneven, and predictions varied too much, both across pollsters and through time, to be particularly useful. Betting markets provide an interesting contrast: data from various betting agencies suggest a more reasonable degree of volatility, and useful forecasting performance both throughout the election cycle and across individual electorates.
    Keywords: Voting, elections, prediction markets, opinion polling, macroeconomic voting
    JEL: D72 D84
    Date: 2005–11
  9. By: Andrew Daly; Stephane Hess; Geoff Hyman; John Polak; Charlene Rohr
    Abstract: As a result of increasing road congestion and road pricing, modelling the temporal response of travellers to transport policy interventions has rapidly emerged as a major issue in many practical transport planning studies. A substantial body of research is therefore being carried out to understand the complexities involved in modelling time of day choice. These models are contributing substantially to our understanding of how travellers make time-of-day decisions (Hess et al, 2004; de Jong et al, 2003). These models, however, tend to be far too complex and far too data intensive to be of use in large-scale modelling and forecasting systems, where socio-economic detail is limited and detailed scheduling information is rarely available. Moreover, model systems making use of some of the latest analytical structures, such as Mixed Logit, are generally inapplicable in practical planning, since they rely on computer-intensive simulation in application as well as in estimation. The aim of this paper, therefore, is to describe the development of time-period choice models which are suitable for application in large-scale modelling and forecasting systems. Large-scale practical planning models often rely on systems of nested logit models, which can incorporate many of the most important interactions that are present in the complex models but which have low enough run-times to be used for practical planning. In these systems, temporal choice is represented as the choice between a finite set of discrete alternatives, represented by mutually exclusive time-periods that are obtained by aggregation of the actual observed continuous time values. The issues that face modellers are then:
- how should the time periods be defined, and in particular how long should they be?
- how should the choices of time periods be related to each other, e.g. is the elasticity for shorter shifts greater than for longer shifts?
- how should time period choice be placed in the model system relative to other choices, such as that of the mode of travel?
These questions cannot be answered on a purely theoretical basis but require the analysis of empirical data. However, there is not a great deal of data available on the relevant choices. The time period models described in the paper are developed from three related stated preference (SP) studies undertaken over the past decade in the United Kingdom and the Netherlands. Because of the complications involved in using advanced models in large-scale modelling and forecasting systems, the model structures are limited to nested logit models. Two different tree structures are explored in the analysis, nesting mode above time period choice or time period choice above mode. The analysis examines how these structures differ by data set, purpose of travel and time period specification. Three time period specifications were tested, dividing the 24-hour day into:
- twenty-four 1-hour periods;
- five coarse time-periods;
- sixteen 15-minute morning-peak periods, and two coarse pre-peak and post-peak periods.
In each case, the time periods are used to define both the outbound and the return trip timings. The analysis shows that, with a few exceptions, the nested models outperform the basic Multinomial Logit structures, which operate under the assumption of equal substitution patterns across alternatives. With a single exception, the nested models in turn show higher substitution between alternative time periods than between alternative modes, showing that, for all the time period lengths studied, travellers are more sensitive to transport levels of service in their choice of departure time than in their choice of mode. The advantages of the nesting structures are especially pronounced in the 1-hour and 15-minute models, while in the coarse time-period models the MNL model often remains the preferred structure; this is a clear effect of the broader time-periods, and the consequently lower substitution between time-periods.
    Date: 2005–08
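The nested logit structure discussed above can be sketched in a few lines, with time periods nested under mode. The two modes, three periods, utilities and nesting parameter are all illustrative assumptions, not estimates from the paper:

```python
import math

# Nested logit with time periods nested under mode.  A nesting parameter
# theta < 1 implies higher substitution within a nest (between time periods
# of the same mode) than across nests (between modes).
utilities = {                       # systematic utilities V[mode][period]
    "car":   {"am_early": -1.0, "am_peak": -0.4, "am_late": -1.2},
    "train": {"am_early": -1.5, "am_peak": -0.9, "am_late": -1.4},
}
theta = 0.5                         # illustrative nesting parameter

def nested_logit_probs(utilities, theta):
    """Joint choice probabilities P(mode, period) via the logsum linkage."""
    logsums = {m: math.log(sum(math.exp(v / theta) for v in periods.values()))
               for m, periods in utilities.items()}
    denom = sum(math.exp(theta * iv) for iv in logsums.values())
    probs = {}
    for m, periods in utilities.items():
        p_mode = math.exp(theta * logsums[m]) / denom       # upper level: mode
        within = sum(math.exp(v / theta) for v in periods.values())
        for t, v in periods.items():
            probs[(m, t)] = p_mode * math.exp(v / theta) / within  # lower level
    return probs

probs = nested_logit_probs(utilities, theta)
assert abs(sum(probs.values()) - 1.0) < 1e-12
```

Setting theta = 1 collapses this to the Multinomial Logit with its equal substitution patterns, which is exactly the restriction the paper tests the nested structures against.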
  10. By: Dirk Van Amelsfort; Michiel Bliemer
    Abstract: We are developing a dynamic modeling framework in which we can evaluate the effects of different road pricing measures on individual choice behavior as well as on a network level. Important parts of this framework are different choice models which forecast the route, departure time and mode choice behavior of travelers under road pricing in the Netherlands. In this paper we discuss the setup of the experiment in detail and present our findings on dealing with uncertainty, travel time and schedule delays in the utility functions. To develop the desired choice models, a stated choice experiment was conducted in which respondents were presented with four alternatives:
- Alternative A: paying for preferred travel conditions;
- Alternative B: adjusting arrival time and paying less;
- Alternative C: adjusting route and paying less;
- Alternative D: adjusting mode to avoid paying the charge.
The four alternatives differ mainly in price, travel time, time of departure/arrival and mode, and are based on the respondents’ current morning commute characteristics. The travel time in the experiment is based on the free-flow travel time for the home-to-work trip and the trip length, both reported by the respondent. We calculate the level of travel time by setting a certain part of the trip length to be in free-flow conditions, and calculate a free-flow and a congested part of travel time. Adding the free-flow and congested travel times gives the total minimum travel time for the trip; it is a minimum because an uncertainty margin is added to it, creating the maximum travel time. The level of uncertainty introduced between minimum and maximum travel time was based on the difference between the reported average and free-flow travel times.
In simpler words than used here, we told respondents that the actual travel time for this trip is unknown, but that each travel time between the minimum and maximum has an equal chance of occurring. As a consequence of introducing uncertainty in travel time, the arrival time also receives the same margin. Using the data from the experiment we estimated choice models following the schedule delay framework of Vickrey (1969) and Small (1987), assigning penalties to shifts from the preferred time of departure/arrival to earlier or later times. In the models we used the minimum travel time and the expected travel time (the average of minimum and maximum). Using the expected travel time already incorporates some (half) of the uncertainty in the travel time attribute, making the uncertainty attribute in the utility function not significant. The parameter values and values-of-time for the minimum and expected travel time specifications do not differ. Initially, we looked at schedule delays only from an arrival time perspective. Here we also distinguished between schedule delays based on the minimum arrival time and the expected arrival time (the average of minimum and maximum). Again, when using expected schedule delays the uncertainty is included in the schedule delays, and a separate uncertainty attribute in the utility function is not significant. There is another issue involved when looking at the preferred arrival time of the respondents; there are three cases to take into account:
1. If the minimum and maximum arrival times are both earlier than the preferred arrival time, we are certain about a schedule delay early situation (based on minimum or expected schedule delays).
2. If the minimum and maximum arrival times are both later than the preferred arrival time, we are certain about a schedule delay late situation (based on minimum or expected schedule delays).
3. The scheduling situation is undetermined when the preferred arrival time is between the minimum and maximum arrival time. In this case we use an expected schedule delay, assuming a uniform distribution of arrival times between the minimum and maximum arrival time.
Parameter values for the two approaches are very different, and results from the minimum arrival time approach are more in line with expectations. There is thus a choice of where to take uncertainty into account in the utility function: in the expected travel time, in the expected schedule delays, or as a separate attribute. In the paper we discuss the effects of the different approaches. We extended our models to also include schedule delays based on the preferred departure time; in the departure time scheduling components uncertainty is not included. Results show that the departure schedule delay late term is significant and substantial, together with significant arrival schedule delay early and late terms. A further extension of the model takes into account the amount of flexibility in departure and arrival times for each respondent. The results will be included in this paper.
    Date: 2005–08
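For the undetermined case 3 above, with arrival time uniform between the minimum and maximum, the expected schedule delays have simple closed forms. The function and the example times below are my sketch under that uniform assumption, not the authors' code or survey values:

```python
# Expected schedule delays when the preferred arrival time (PAT) lies between
# the minimum and maximum arrival times, and the actual arrival time is
# uniform on [t_min, t_max].  Times are minutes after 8:00, illustrative only.

def expected_schedule_delays(t_min, t_max, pat):
    """E[max(PAT - t, 0)] and E[max(t - PAT, 0)] for t ~ Uniform(t_min, t_max)."""
    assert t_min <= pat <= t_max, "formula covers only the undetermined case"
    width = t_max - t_min
    # Integrating the linear delay against the uniform density gives:
    e_early = (pat - t_min) ** 2 / (2 * width)  # expected early arrival (minutes)
    e_late = (t_max - pat) ** 2 / (2 * width)   # expected late arrival (minutes)
    return e_early, e_late

# Arrival window 8:40-9:00, preferred arrival 8:45.
e_early, e_late = expected_schedule_delays(40, 60, 45)
```

With the window skewed late relative to the preferred arrival time, the expected late delay dominates, which is why such terms pick up most of the scheduling penalty in models of this kind.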

This nep-for issue is ©2006 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.