nep-for New Economics Papers
on Forecasting
Issue of 2011‒12‒13
eighteen papers chosen by
Rob J Hyndman
Monash University

  1. The role of high frequency intra-daily data, daily range and implied volatility in multi-period Value-at-Risk forecasting By Louzis, Dimitrios P.; Xanthopoulos-Sisinis, Spyros; Refenes, Apostolos P.
  2. Combining benchmarking and chain-linking for short-term regional forecasting By Ángel Cuevas; Enrique M. Quilis; Antoni Espasa
  3. Forecasting Key Macroeconomic Variables of the South African Economy Using Bayesian Variable Selection By Mirriam Chitalu Chama-Chiliba; Rangan Gupta; Nonophile Nkambule; Naomi Tlotlego
  4. Institutions and Public Sector Performance: Empirical Analyses of Revenue Forecasting and Spatial Administrative Structures By Kauder, Björn
  5. Forecasting Financial and Macroeconomic Variables Using Data Reduction Methods: New Empirical Evidence By Hyun Hak Kim; Norman R. Swanson
  6. Can Internet search queries help to predict stock market volatility? By Dimpfl, Thomas; Jank, Stephan
  7. Real-Time Datasets Really Do Make a Difference: Definitional Change, Data Release, and Forecasting By Norman R. Swanson; Andres Fernandez
  8. Long-term penetration and traffic forecasts for the Western European fixed broadband market By Stordahl, Kjell
  9. In- and Out-of-Sample Specification Analysis of Spot Rate Models: Further Evidence for the Period 1982-2008 By Norman R. Swanson; Lili Cai
  10. Quantifying survey expectations: What's wrong with the probability approach? By Breitung, Jörg; Schmeling, Maik
  11. Is the Chinese Stock Market Really Efficient By Yan, Isabel K.; Chong, Terence; Lam, Tau-Hing
  12. Forecasting Investment-Grade Credit-Spreads. A Regularized Approach By Thiago De Oliveira Souza
  13. Seeing Inside the Black Box: Using Diffusion Index Methodology to Construct Factor Proxies in Largescale Macroeconomic Time Series Environments By Norman R. Swanson; Nii Ayi Armah
  14. Some Variables are More Worthy Than Others: New Diffusion Index Evidence on the Monitoring of Key Economic Indicators By Norman R. Swanson; Nii Ayi Armah
  15. Diffusion Index Models and Index Proxies: Recent Results and New Directions By Norman R. Swanson; Nii Ayi Armah
  16. Comparison of Bayesian Model Selection Criteria and Conditional Kolmogorov Test as Applied to Spot Asset Pricing Models By Xiangjin Shen; Hiroki Tsurumi
  17. Predictive Inference Under Model Misspecification with an Application to Assessing the Marginal Predictive Content of Money for Output By Norman R. Swanson; Nii Ayi Armah
  18. Monitoring and Forecasting Ocean Dynamics at a Regional Scale By Bastos, Luisa; Bos, Machiel; Caldeira, Rui; Couvelard, Xavier; Allis, Sheila; Bio, Ana; Araujo, Isabel; Fernandes, Joana; Lazaro, Clara

  1. By: Louzis, Dimitrios P.; Xanthopoulos-Sisinis, Spyros; Refenes, Apostolos P.
    Abstract: In this paper, we assess the informational content of daily range, realized variance, realized bipower variation, two time scale realized variance, realized range and implied volatility in daily, weekly, biweekly and monthly out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student distribution for the innovations process and a Monte Carlo simulation approach in order to produce the multi-period VaR estimates. The VaR forecasts are evaluated in terms of statistical and regulatory accuracy as well as capital efficiency. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce VaR forecasts that are precise in both statistical and regulatory terms across forecasting horizons, with implied volatility being especially accurate in monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures such as the adjusted realized range and the realized bipower variation, which are immune to microstructure noise bias and price jumps respectively, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Our results highlight the importance of robust volatility estimators based on high frequency intra-daily data in a multi-step VaR forecasting context, as they balance statistical and regulatory accuracy against capital efficiency.
    Keywords: Realized GARCH; Value-at-Risk; multiple forecasting horizons; alternative volatility measures; microstructure noise; price jumps
    JEL: C53
    Date: 2012–10–29
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:35252&r=for
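    Illustration: A minimal sketch of the multi-period Monte Carlo idea behind the abstract above, using a plain GARCH(1,1) with standardized Student-t innovations rather than the paper's Realized GARCH with a skewed Student distribution; all parameter values are illustrative assumptions, not estimates from S&P 500 data.

      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative GARCH(1,1) parameters (assumed, not estimated)
      omega, alpha, beta = 0.02, 0.08, 0.90
      nu = 7.0                      # Student-t degrees of freedom
      h0, r_last = 1.1, -0.5        # current conditional variance and last return (daily, %)

      def mc_var(horizon, n_paths=100_000, level=0.01):
          """Simulate multi-day return paths and read VaR off the empirical quantile."""
          h = np.full(n_paths, omega + alpha * r_last**2 + beta * h0)   # day-1 variance
          cum_ret = np.zeros(n_paths)
          for _ in range(horizon):
              z = rng.standard_t(nu, n_paths) * np.sqrt((nu - 2) / nu)  # unit-variance shocks
              r = np.sqrt(h) * z
              cum_ret += r
              h = omega + alpha * r**2 + beta * h                       # update conditional variance
          return -np.quantile(cum_ret, level)                           # VaR reported as a positive loss

      for days in (1, 5, 10, 22):    # daily, weekly, biweekly, monthly horizons
          print(f"{days:>2}-day 1% VaR: {mc_var(days):.2f}%")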
  2. By: Ángel Cuevas; Enrique M. Quilis; Antoni Espasa
    Abstract: In this paper we propose a methodology to estimate and forecast the GDP of the different regions of a country, providing quarterly profiles for the annual official observed data. The paper thus offers a new instrument for short-term monitoring that allows analysts to quantify the degree of synchronicity among regional business cycles. Technically, we combine time series models with benchmarking methods to forecast short-term quarterly indicators and to estimate quarterly regional GDPs, ensuring their temporal and transversal consistency with the National Accounts data. The methodology addresses the issue of non-additivity, taking into account the linked volume indexes used by the National Accounts, and provides an efficient combination of structural as well as short-term information. The methodology is illustrated by an application to the Spanish economy, providing real-time quarterly GDP estimates and forecasts at the regional level (i.e., with a minimum compilation delay with respect to the national quarterly GDP).
    Keywords: Forecasting, Spanish economy, Regional analysis, Benchmarking, Chain-linking
    JEL: C53 C43 C82 R11
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws114130&r=for
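    Illustration: The benchmarking step in its simplest pro-rata form can be sketched as follows: quarterly values inherit the within-year profile of an indicator and are scaled so that each year sums to the annual benchmark. The figures and the pro-rata rule are illustrative assumptions only; the paper combines time series models with formal benchmarking and chain-linking.

      import numpy as np

      # Illustrative data: two years of a quarterly regional indicator and annual regional GDP
      indicator = np.array([[102.0, 104.0, 101.0, 108.0],    # year 1, quarters 1-4
                            [106.0, 109.0, 107.0, 112.0]])   # year 2
      annual_gdp = np.array([420.0, 445.0])                  # annual benchmarks

      # Pro-rata benchmarking: keep the indicator's quarterly profile, enforce annual totals
      quarterly_gdp = indicator * (annual_gdp / indicator.sum(axis=1))[:, None]

      print(quarterly_gdp)
      print(quarterly_gdp.sum(axis=1))   # equals annual_gdp by construction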
  3. By: Mirriam Chitalu Chama-Chiliba (Department of Economics, University of Pretoria); Rangan Gupta (Department of Economics, University of Pretoria); Nonophile Nkambule (Department of Economics, University of Pretoria); Naomi Tlotlego (Department of Economics, University of Pretoria)
    Abstract: We compare the forecasting performances of the classical and the Minnesota-type Bayesian vector autoregressive (VAR) models with those of linear (fixed-parameter) and nonlinear (time-varying parameter) VARs involving a stochastic search algorithm for variable selection, estimated using Markov Chain Monte Carlo methods. In this regard, we analyze the forecasting performances of all these models in predicting, one to eight quarters ahead, the growth rate of GDP, the consumer price index inflation rate and the three-month Treasury bill rate for South Africa over an out-of-sample period of 2000:Q1-2011:Q2, using an in-sample period of 1960:Q1-1999:Q4. In general, we find that variable selection, whether imposed on a time-varying VAR or a fixed-parameter VAR, and nonlinearity in VARs play an important part in improving predictions when compared to the linear fixed-coefficient classical VAR. However, we do not observe marked gains in forecasting power across the different Bayesian models, or over the classical VAR model, possibly because the problem of over-parameterization in the classical VAR is not that acute in our three-variable system.
    Keywords: Forecasting, time varying parameters, variable selection, Bayesian vector autoregression
    JEL: C11 C32 C52 E37
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:201132&r=for
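    Illustration: As background on the iterated one- to eight-quarter-ahead forecasts being compared, a classical fixed-coefficient VAR(1) can be estimated by OLS and iterated forward. The simulated three-variable system below is an illustrative stand-in, not the South African dataset, and the Bayesian and time-varying-parameter variants are not shown.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated stand-in for a three-variable system (e.g. GDP growth, inflation, T-bill rate)
      T, k = 200, 3
      A_true = np.array([[0.5, 0.1, 0.0],
                         [0.0, 0.6, 0.1],
                         [0.1, 0.0, 0.7]])
      y = np.zeros((T, k))
      for t in range(1, T):
          y[t] = A_true @ y[t - 1] + rng.normal(scale=0.5, size=k)

      # OLS estimation of a VAR(1) with intercept: y_t = c + A y_{t-1} + e_t
      X = np.hstack([np.ones((T - 1, 1)), y[:-1]])
      B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # (k+1) x k coefficient matrix
      c, A = B[0], B[1:].T

      # Iterated forecasts one to eight quarters ahead
      f = y[-1].copy()
      for h in range(1, 9):
          f = c + A @ f
          print(f"h={h}: {np.round(f, 3)}")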
  4. By: Kauder, Björn
    Abstract: This book analyzes the role of institutions in public finance, focusing on the issues of revenue forecasting and the spatial administrative structure of municipalities. Chapter 2 analyzes the international differences in forecasting practices and shows that forecasting performance depends on the institutional arrangement. The performance turns out to improve with the degree of independence, but also hinges on the timing of the forecast. Chapter 3 looks into revenue forecasting in Germany and broadly confirms the unbiasedness and efficiency of the forecasts. Only with regard to tax law changes and the term of office does there appear to be some room to improve the forecasts. In Chapter 4 we turn to local policies and provide evidence on how the design of borders impacts local tax policy. Both the number of competitors and the size of a core city in its agglomeration prove important. Chapter 5 analyzes the effects of reforms of spatial administrative structures. Considering the reforms in Germany in the 1960s and 1970s, we show that incorporated surrounding municipalities of core cities perform better in terms of population growth than comparable municipalities that have remained independent.
    Keywords: Revenue Forecasting; International Comparison; Spatial Administrative Structure; Local Tax Competition; Population Growth
    Date: 2011–11–04
    URL: http://d.repec.org/n?u=RePEc:lmu:dissen:13683&r=for
  5. By: Hyun Hak Kim (Rutgers University); Norman R. Swanson (Rutgers University)
    Abstract: In this paper, we empirically assess the predictive accuracy of a large group of models based on the use of principal components and other shrinkage methods, including Bayesian model averaging and various bagging, boosting, LASSO and related methods. Our results suggest that model averaging does not dominate other well-designed prediction model specification methods, and that using a combination of factor and other shrinkage methods often yields superior predictions. For example, when using recursive estimation windows, which dominate other “windowing” approaches in our experiments, prediction models constructed using pure principal component type models combined with shrinkage methods yield mean square forecast error “best” models around 70% of the time, when used to predict 11 key macroeconomic indicators at various forecast horizons. Baseline linear models (which “win” around 5% of the time) and model averaging methods (which win around 25% of the time) fare substantially worse than our sophisticated nonlinear models. Ancillary findings based on our forecasting experiments underscore the advantages of using recursive estimation strategies, and provide new evidence of the usefulness of yield and yield-spread variables in nonlinear prediction specification.
    Keywords: prediction, bagging, boosting, Bayesian model averaging, ridge regression, least angle regression, elastic net, non-negative garotte
    JEL: G1
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201119&r=for
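    Illustration: A minimal sketch of combining principal-component factors with a shrinkage step, using scikit-learn on simulated data; the panel, the number of factors and the LASSO penalty are illustrative assumptions, not the paper's specifications.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import Lasso
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)

      # Simulated panel: 200 periods, 80 predictors driven by 3 common factors
      T, N, r = 200, 80, 3
      F = rng.normal(size=(T, r))
      X = F @ rng.normal(size=(r, N)) + rng.normal(scale=0.5, size=(T, N))
      y = F @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.3, size=T)   # target series

      # Step 1: principal-component factors from the standardized panel
      factors = PCA(n_components=r).fit_transform(StandardScaler().fit_transform(X))

      # Step 2: shrinkage regression of the one-step-ahead target on lagged factors
      lasso = Lasso(alpha=0.05).fit(factors[:-1], y[1:])
      print("forecast for next period:", lasso.predict(factors[[-1]])[0])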
  6. By: Dimpfl, Thomas; Jank, Stephan
    Abstract: This paper studies the dynamics of stock market volatility and retail investor attention measured by internet search queries. We find a strong co-movement of stock market indices' realized volatility and the search queries for their names. Furthermore, Granger causality is bi-directional: high searches follow high volatility, and high volatility follows high searches. Using the latter feedback effect to predict volatility, we find that search queries contain additional information about market volatility. They help to improve volatility forecasts in-sample and out-of-sample as well as for different forecasting horizons. Search queries are particularly useful to predict volatility in high-volatility phases.
    Keywords: realized volatility, forecasting, investor behavior, noise trader, search engine data
    JEL: G10 G14 G17
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:tuewef:18&r=for
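    Illustration: A minimal sketch of augmenting an autoregressive volatility forecast with a search-volume series via OLS; the simulated data, variable names and one-lag specification are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      # Simulated weekly log realized volatility and log search volume (stand-ins)
      T = 300
      search, log_rv = np.zeros(T), np.zeros(T)
      for t in range(1, T):
          search[t] = 0.6 * search[t - 1] + 0.3 * log_rv[t - 1] + rng.normal(scale=0.3)
          log_rv[t] = 0.5 * log_rv[t - 1] + 0.2 * search[t - 1] + rng.normal(scale=0.3)

      def fit(use_search):
          """OLS of log RV on its own lag, optionally plus the lagged search series."""
          cols = [np.ones(T - 1), log_rv[:-1]] + ([search[:-1]] if use_search else [])
          X = np.column_stack(cols)
          beta, *_ = np.linalg.lstsq(X, log_rv[1:], rcond=None)
          return np.var(log_rv[1:] - X @ beta)

      for flag in (False, True):
          print(f"with search queries = {flag}: residual variance {fit(flag):.4f}")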
  7. By: Norman R. Swanson (Rutgers University); Andres Fernandez (Universidad de Los Andes)
    Abstract: In this paper, we empirically assess the extent to which early release inefficiency and definitional change affect prediction precision. In particular, we carry out a series of ex-ante prediction experiments in order to examine: the marginal predictive content of the revision process, the trade-offs associated with predicting different releases of a variable, the importance of particular forms of definitional change which we call “definitional breaks”, and the rationality of early releases of economic variables. An important feature of our rationality tests is that they are based solely on the examination of ex-ante predictions, rather than being based on in-sample regression analysis, as are many tests in the extant literature. Our findings point to the importance of making real-time datasets available to forecasters, as the revision process has marginal predictive content, and because predictive accuracy increases when multiple releases of data are used when specifying and estimating prediction models. We also present new evidence that early releases of money are rational, whereas those of prices and output are irrational. Moreover, we find that regardless of which release of our price variable one specifies as the “target” variable to be predicted, using only “first release” data in model estimation and prediction construction yields mean square forecast error (MSFE) “best” predictions. On the other hand, models estimated and implemented using “latest available release” data are MSFE-best for predicting all releases of money. We argue that these contradictory findings are due to the relevance of definitional breaks in the data generating processes of the variables that we examine. In an empirical analysis, we examine the real-time predictive content of money for income, and we find that vector autoregressions with money do not perform significantly worse than autoregressions when predicting output during the last 20 years.
    Keywords: bias, efficiency, generically comprehensive tests, rationality, preliminary, final, and real-time data.
    JEL: C32 C53 E01
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201113&r=for
  8. By: Stordahl, Kjell
    Abstract: The objective of the paper is to describe, analyze and forecast the future fixed broadband penetration and traffic growth in NGN and NGA networks in Western Europe - one of the most advanced telecommunications areas in the world. Analyses show that the broadband penetrations are very well fitted by Logistic models. Here, extended four-parameter Logistic models are used to develop broadband penetration forecasts for 2011-2015. Separate forecasts are developed for DSL, HFC (Hybrid Fiber Coax), FTTx and FWA (Fixed Wireless Access). The traffic forecasts are developed per user in the busy hour. Hence, it is possible to assess the future fixed broadband busy hour traffic in NGA networks and also the accumulated busy hour traffic in NGN networks, taking into account the fixed broadband penetration forecasts.
    Keywords: Fixed broadband, NGA, NGN, long-term forecasts, penetration, traffic
    JEL: O33
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:itse11:52210&r=for
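    Illustration: A minimal sketch of fitting a four-parameter Logistic curve to penetration data and extrapolating it over 2011-2015; the observations, the start values and the particular four-parameter form are illustrative assumptions, not Stordahl's extended model or data.

      import numpy as np
      from scipy.optimize import curve_fit

      # Illustrative annual fixed-broadband penetration observations (household fractions)
      years = np.arange(2000, 2011)
      penetration = np.array([0.01, 0.03, 0.06, 0.11, 0.18, 0.26,
                              0.34, 0.41, 0.47, 0.52, 0.56])

      def logistic4(t, M, k, t0, s):
          """One common four-parameter form: saturation M, growth k, midpoint t0, shape s."""
          return M / (1.0 + np.exp(-k * (t - t0))) ** s

      params, _ = curve_fit(logistic4, years, penetration,
                            p0=(0.7, 0.5, 2006.0, 1.0), maxfev=10000)

      for year in range(2011, 2016):
          print(year, round(float(logistic4(year, *params)), 3))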
  9. By: Norman R. Swanson (Rutgers University); Lili Cai (Shanghai Jiao Tong University)
    Abstract: We review and construct consistent in-sample specification and out-of-sample model selection tests on conditional distributions and predictive densities associated with continuous multifactor (possibly with jumps) and (non)linear discrete models of the short term interest rate. The results of our empirical analysis are used to carry out a “horserace” comparing discrete and continuous models across multiple sample periods, forecast horizons, and evaluation intervals. Our evaluation involves comparing models during two distinct historical periods, as well as across our entire weekly sample of Eurodollar deposit rates from 1982-2008. Interestingly, when our entire sample of data is used to estimate competing models, the “best” performer in terms of distributional “fit” as well as predictive density accuracy, both in-sample and out-of-sample, is the three factor Chen (CHEN: 1996) model examined by Andersen, Benzoni and Lund (2004). Just as interestingly, a logistic type discrete smooth transition autoregression (STAR) model is preferred to the “best” continuous model (i.e. the one factor Cox, Ingersoll, and Ross (CIR: 1985) model) when comparing predictive accuracy for the “Stable 1990s” period that we examine. Moreover, an analogous result holds for the “Post 1990s” period that we examine, where the STAR model is preferred to a two factor stochastic mean model. Thus, when the STAR model is parameterized using only data corresponding to a particular sub-sample, it outperforms the “best” continuous alternative during that period. However, when models are estimated using the entire dataset, the continuous CHEN model is preferred, regardless of the variety of model specification (selection) test that is carried out. Given that it is very difficult to ascertain the particular future regime that will ensue when constructing ex ante predictions, the CHEN model is our overall “winning” model, regardless of sample period.
    Keywords: interest rate, multi-factor diffusion process, specification test, out-of-sample forecasts, block bootstrap
    JEL: C1 C5 G0
    Date: 2011–05–13
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201102&r=for
  10. By: Breitung, Jörg; Schmeling, Maik
    Abstract: We study a matched sample of individual stock market forecasts consisting of both qualitative and quantitative forecasts. This allows us to test for the quality of forecast quantification methods by comparing quantified qualitative forecasts with actual quantitative forecasts. Focusing mainly on the widely used quantification framework advocated by Carlson and Parkin (1975), the so-called "probability approach", we find that quantified expectations derived from the probability approach display a surprisingly weak correlation with reported quantitative stock return forecasts. We trace the reason for this low correlation to the importance of asymmetric and time-varying thresholds, whereas individual heterogeneity across forecasters seems to play a minor role. Hence, our results suggest that qualitative survey data may not be a very useful device to obtain quantitative forecasts and we suggest ways to remedy this problem when designing qualitative surveys.
    Keywords: Quantification, Stock Market Expectations, Probability Approach, Heterogeneity
    JEL: C53 D84 G17
    Date: 2011–12
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-485&r=for
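    Illustration: For reference, the probability-approach quantification under the textbook assumptions (normally distributed expectations, a symmetric and constant indifference threshold) takes only a few lines; the survey shares and threshold below are illustrative, and the paper's point is precisely that asymmetric, time-varying thresholds matter.

      from scipy.stats import norm

      def carlson_parkin(share_up, share_down, c=0.01):
          """Quantify qualitative survey shares into a mean expected change.

          Assumes expectations are normal across respondents and that 'up'/'down'
          answers are given when the expected change exceeds +c / falls below -c.
          """
          a = norm.ppf(1.0 - share_up)    # (c - mu) / sigma
          b = norm.ppf(share_down)        # (-c - mu) / sigma
          sigma = 2.0 * c / (a - b)
          mu = -c * (a + b) / (a - b)
          return mu, sigma

      # Illustrative survey: 45% expect the market to rise, 25% expect it to fall
      mu, sigma = carlson_parkin(0.45, 0.25)
      print(f"implied mean expected return {mu:.4f}, dispersion {sigma:.4f}")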
  11. By: Yan, Isabel K.; Chong, Terence; Lam, Tau-Hing
    Abstract: Groenewold et al (2004a) documented that the Chinese stock market is inefficient. In this paper, we revisit the efficiency problem of the Chinese stock market using time-series model based trading rules. Our paper distinguishes itself from previous studies in several aspects. First, while previous studies concentrate on the viability of linear forecasting techniques, we evaluate the profitability of the forecasts of the self-exciting threshold autoregressive model (SETAR), and compare it with the conventional linear AR and MA trading rules. Second, the finding of market inefficiency in earlier studies mainly rests on the statistical significance of the autocorrelation or regression coefficients. In contrast, this paper directly examines the profitability of various trading rules. Third, our sample covers an extensive period of 1991-2010. Sub-sample analysis shows that positive returns mainly concentrate in the pre-SOE reform period, suggesting that China’s stock market has become more efficient after the reform.
    Keywords: Efficient Market Hypothesis; SETAR Model; Bootstrapping; SOE reform
    JEL: G12 C22 G10
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:35219&r=for
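    Illustration: A minimal sketch of a SETAR-style trading rule: grid-search a threshold on the lagged return, fit an AR(1) within each regime, and go long only when the fitted model predicts a positive return. The simulated returns, the two-regime structure and the long/flat rule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      # Simulated daily returns whose dynamics depend on the sign of the lagged return
      T = 1500
      r = np.zeros(T)
      for t in range(1, T):
          phi = 0.3 if r[t - 1] <= 0 else -0.1
          r[t] = phi * r[t - 1] + rng.normal()

      def fit_ar1(x_lag, x):
          X = np.column_stack([np.ones_like(x_lag), x_lag])
          beta, *_ = np.linalg.lstsq(X, x, rcond=None)
          return beta

      lag, cur = r[:-1], r[1:]
      best = None
      for thr in np.quantile(lag, np.linspace(0.15, 0.85, 29)):   # candidate thresholds
          ssr, betas = 0.0, {}
          for name, mask in (("lo", lag <= thr), ("hi", lag > thr)):
              b = fit_ar1(lag[mask], cur[mask])
              betas[name] = b
              ssr += np.sum((cur[mask] - (b[0] + b[1] * lag[mask])) ** 2)
          if best is None or ssr < best[0]:
              best = (ssr, thr, betas)            # keep the threshold with lowest pooled SSR

      _, thr, betas = best
      b = betas["lo"] if r[-1] <= thr else betas["hi"]
      forecast = b[0] + b[1] * r[-1]
      print(f"threshold {thr:.3f}, forecast {forecast:.3f}, signal:",
            "long" if forecast > 0 else "flat")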
  12. By: Thiago De Oliveira Souza
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/103693&r=for
  13. By: Norman R. Swanson (Rutgers University); Nii Ayi Armah (Bank of Canada)
    Abstract: In economics, common factors are often assumed to underlie the co-movements of a set of macroeconomic variables. For this reason, many authors have used estimated factors in the construction of prediction models. In this paper, we begin by surveying the extant literature on diffusion indexes. We then outline a number of approaches to the selection of factor proxies (observed variables that proxy unobserved estimated factors) using the statistics developed in Bai and Ng (2006a,b). Our approach to factor proxy selection is examined via a small Monte Carlo experiment, where evidence supporting our proposed methodology is presented, and via a large set of prediction experiments using the panel dataset of Stock and Watson (2005). One of our main empirical findings is that our “smoothed” approaches to factor proxy selection appear to yield predictions that are often superior not only to a benchmark factor model, but also to simple linear time series models, which are generally difficult to beat in forecasting competitions. In some sense, by using our approach to predictive factor proxy selection, one is able to open up the “black box” often associated with factor analysis, and to identify actual variables that can serve as primitive building blocks for (prediction) models of a host of macroeconomic variables, and that can also serve as policy instruments, for example. Our findings suggest that important observable variables include: various S&P500 variables, including stock price indices and dividend series; a 1-year Treasury bond rate; various housing activity variables; industrial production; and exchange rates.
    Keywords: diffusion index, factor, forecast, macroeconometrics, parameter estimation error, proxy
    JEL: C22
    Date: 2011–05–14
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201105&r=for
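    Illustration: A minimal sketch of the proxy-selection idea: estimate a factor by principal components and pick the observed series most correlated with it as its proxy. The simulated panel is illustrative, and the correlation criterion is a simplification of the Bai and Ng (2006a,b) statistics used in the paper.

      import numpy as np

      rng = np.random.default_rng(5)

      # Simulated panel of 100 macro series driven by one common factor
      T, N = 250, 100
      f = rng.normal(size=T)
      X = np.outer(f, rng.normal(size=N)) + rng.normal(scale=0.8, size=(T, N))

      # Estimate the factor by the first principal component of the standardized panel
      Z = (X - X.mean(0)) / X.std(0)
      _, _, Vt = np.linalg.svd(Z, full_matrices=False)
      f_hat = Z @ Vt[0]                       # first principal component scores

      # Proxy selection: the observed series most correlated with the estimated factor
      corrs = np.abs([np.corrcoef(f_hat, X[:, j])[0, 1] for j in range(N)])
      best = int(np.argmax(corrs))
      print(f"series {best} chosen as factor proxy (|corr| = {corrs[best]:.3f})")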
  14. By: Norman R. Swanson (Rutgers University); Nii Ayi Armah (Bank of Canada)
    Abstract: Central banks regularly monitor select financial and macroeconomic variables in order to obtain early indication of the impact of monetary policies. This practice is discussed on the Federal Reserve Bank of New York website, for example, where one particular set of macroeconomic “indicators” is given. In this paper, we define a particular set of “indicators” that is chosen to be representative of the typical sort of variable used in practice by both policy-setters and economic forecasters. As a measure of the “adequacy” of the “indicators”, we compare their predictive content with that of a group of observable factor proxies selected from amongst 132 macroeconomic and financial time series, using the diffusion index methodology of Stock and Watson (2002a,b) and the factor proxy methodology of Bai and Ng (2006a,b) and Armah and Swanson (2010). The variables that we predict are output growth and inflation, two representative variables from our set of indicators that are often discussed when assessing the impact of monetary policy. Interestingly, we find that the indicators are all contained within the set of observable variables that proxy our factors. Our findings thus support the notion that a judiciously chosen set of macroeconomic indicators can effectively provide the same macroeconomic policy-relevant information as that contained in a large-scale time series dataset. Of course, the large-scale datasets are still required in order to select the key indicator variables or confirm one’s prior choice of key variables. Our findings also suggest that certain yield “spreads” are useful indicators. The particular spreads that we find to be useful are the difference between Treasury or corporate yields and the federal funds rate. After conditioning on these variables, traditional spreads, such as the yield curve slope and the reverse yield gap, are found to contain no additional marginal predictive content. We also find that the macroeconomic indicators (not including spreads) perform best when forecasting inflation in non-volatile time periods, while inclusion of our spread variables improves predictive accuracy in times of high volatility.
    Keywords: diffusion index, factor, forecast, macroeconometrics, monetary policy, parameter estimation error, proxy, federal reserve bank
    JEL: C22 C33 C51
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201115&r=for
  15. By: Norman R. Swanson (Rutgers University); Nii Ayi Armah (Bank of Canada)
    Abstract: Diffusion index models have received considerable attention from both theoreticians and empirical econometricians in recent years. One reason for this is that datasets with many variables are increasingly becoming available and being utilized for economic modelling, and another is that common factors are often assumed to underlie the co-movements of a set of macroeconomic variables. In this paper we review some recent results in the study of diffusion index models, focusing primarily on advances due to [4, 5] and [1]. We discuss, for example, the construction of factors used in prediction models implemented using diffusion index methodology and approaches that are useful for assessing whether there are observable variables that adequately “proxy” for estimated factors.
    Keywords: diffusion index, factor, forecast, macroeconometrics, parameter estimation error, proxy
    Date: 2011–05–15
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201114&r=for
  16. By: Xiangjin Shen (Rutgers University); Hiroki Tsurumi (Rutgers University)
    Abstract: We compare Bayesian and sample theory model specification criteria. For the Bayesian criteria we use the deviance information criterion (DIC) and the cumulative density of the mean squared errors of forecast. For the sample theory criterion we use the conditional Kolmogorov test (CKT). We use Markov chain Monte Carlo methods to obtain the Bayesian criteria and bootstrap sampling to obtain the conditional Kolmogorov test. The two non-nested models we consider are the CIR and Vasicek models for spot asset prices. Monte Carlo experiments show that the DIC performs better than the cumulative density of the mean squared errors of forecast and the CKT. According to the DIC and the mean squared errors of forecast, the CIR model explains the daily data on the uncollateralized Japanese call rate from January 1, 1990 to April 18, 1996; but according to the CKT, neither the CIR nor the Vasicek model explains the daily data.
    Keywords: Deviance information criterion, Markov chain Monte Carlo algorithms, Block bootstrap, Conditional Kolmogorov test, CIR and Vasicek models
    JEL: C1 C5 G0
    Date: 2011–06–07
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201126&r=for
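    Illustration: For reference, the two competing short-rate models can be simulated side by side with a simple Euler scheme; the parameter values and the truncation of the CIR square root at zero are illustrative assumptions, not the paper's estimates.

      import numpy as np

      rng = np.random.default_rng(6)

      # Illustrative parameters: mean reversion kappa, long-run level theta, volatilities
      kappa, theta, sigma_vas, sigma_cir = 0.5, 0.03, 0.01, 0.05
      r0, dt, n = 0.05, 1.0 / 250.0, 250        # one year of daily steps

      def simulate(model):
          r = np.empty(n + 1)
          r[0] = r0
          for t in range(n):
              dw = rng.normal(scale=np.sqrt(dt))
              if model == "vasicek":             # dr = kappa (theta - r) dt + sigma dW
                  r[t + 1] = r[t] + kappa * (theta - r[t]) * dt + sigma_vas * dw
              else:                              # CIR: dr = kappa (theta - r) dt + sigma sqrt(r) dW
                  r[t + 1] = (r[t] + kappa * (theta - r[t]) * dt
                              + sigma_cir * np.sqrt(max(r[t], 0.0)) * dw)
          return r

      for model in ("vasicek", "cir"):
          print(model, "end-of-year rate:", round(simulate(model)[-1], 4))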
  17. By: Norman R. Swanson (Rutgers University); Nii Ayi Armah (Bank of Canada)
    Abstract: In this chapter we discuss model selection and predictive accuracy tests in the context of parameter and model uncertainty under recursive and rolling estimation schemes. We begin by summarizing some recent theoretical findings, with particular emphasis on the construction of valid bootstrap procedures for calculating the impact of parameter estimation error. We then discuss the Corradi and Swanson (CS: 2002) test of (non)linear out-of-sample Granger causality. Thereafter, we carry out a series of Monte Carlo experiments examining the properties of the CS test and a variety of other related predictive accuracy and model selection tests. Finally, we present the results of an empirical investigation of the marginal predictive content of money for income, in the spirit of Stock and Watson (1989), Swanson (1998) and Amato and Swanson (2001).
    Keywords: block bootstrap, recursive estimation scheme, rolling estimation scheme, prediction, nonlinear causality
    JEL: C22 C51
    Date: 2011–05–14
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201103&r=for
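    Illustration: A minimal sketch of the kind of out-of-sample comparison described above: recursive one-step forecasts of output from an AR model and from a model that adds lagged money, with a circular block bootstrap of the mean squared-error loss differential. The simulated series, block length and bootstrap design are illustrative assumptions, not the CS test itself.

      import numpy as np

      rng = np.random.default_rng(7)

      # Simulated stand-ins for output growth and money growth
      T = 300
      money, output = np.zeros(T), np.zeros(T)
      for t in range(1, T):
          money[t] = 0.5 * money[t - 1] + rng.normal(scale=0.5)
          output[t] = 0.4 * output[t - 1] + 0.2 * money[t - 1] + rng.normal(scale=0.5)

      def one_step(y, X, t):
          """Recursive OLS forecast of y[t] using observations up to t-1."""
          beta, *_ = np.linalg.lstsq(X[:t - 1], y[1:t], rcond=None)
          return X[t - 1] @ beta

      X_ar = np.column_stack([np.ones(T), output])            # constant + lagged output
      X_money = np.column_stack([np.ones(T), output, money])  # ... + lagged money

      R = 150                                                  # start of the out-of-sample period
      e_ar = np.array([output[t] - one_step(output, X_ar, t) for t in range(R, T)])
      e_m = np.array([output[t] - one_step(output, X_money, t) for t in range(R, T)])
      d = e_ar**2 - e_m**2                                     # squared-error loss differential

      # Circular block bootstrap of the mean loss differential
      P, block = len(d), 10
      boot = []
      for _ in range(999):
          starts = rng.integers(0, P, size=P // block + 1)
          idx = (starts[:, None] + np.arange(block)).ravel()[:P] % P
          boot.append(d[idx].mean())
      lo, hi = np.quantile(boot, [0.05, 0.95])
      print(f"mean loss differential {d.mean():.4f}; bootstrap 90% interval [{lo:.4f}, {hi:.4f}]")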
  18. By: Bastos, Luisa (University of Porto); Bos, Machiel (CIIMAR, University of Porto); Caldeira, Rui (CIIMAR, University of Porto); Couvelard, Xavier (University of Madeira); Allis, Sheila (University of Las Palmas); Bio, Ana (CIIMAR, University of Porto); Araujo, Isabel (CIIMAR, University of Porto); Fernandes, Joana (University of Porto); Lazaro, Clara (University of Porto)
    Abstract: A new Oceanic Observatory for the North-West Iberian Margin is being developed within the scope of the RAIA project. The objective of RAIA is not only to improve our scientific knowledge of the ocean in this North-East Atlantic region, but also to use the in situ observations and ocean models to derive commercial products and services for a range of marine activities related to: sediment transport, coastal erosion, pollution (spill) monitoring, understanding of marine life, search & rescue and renewable energies. The Coastal & Ocean Dynamics Group at CIIMAR is participating in this project and its contributions are presented here.
    Keywords: Ocean Modelling; Sensors; Data-base; Tides
    JEL: Q55
    Date: 2011–11–30
    URL: http://d.repec.org/n?u=RePEc:ris:cieodp:2011_014&r=for

This nep-for issue is ©2011 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.