
on Econometrics 
By:  Inoue, Atsushi; Kilian, Lutz 
Abstract:  This paper explores the usefulness of bagging methods in forecasting economic time series from linear multiple regression models. We focus on the widely studied question of whether the inclusion of indicators of real economic activity lowers the prediction mean-squared error of forecast models of US consumer price inflation. We study bagging methods for linear regression models with correlated regressors and for factor models. We compare the accuracy of simulated out-of-sample forecasts of inflation based on these bagging methods to that of alternative forecast methods, including factor model forecasts, shrinkage estimator forecasts, combination forecasts and Bayesian model averaging. We find that bagging methods in this application are almost as accurate as or more accurate than the best alternatives. Our empirical analysis demonstrates that large reductions in the prediction mean-squared error are possible relative to existing methods, a result that is also suggested by the asymptotic analysis of some stylized linear multiple regression examples. 
Keywords:  Bayesian model averaging; bootstrap aggregation; factor models; forecast combination; forecast model selection; pretesting; shrinkage estimation 
JEL:  C22 C52 C53 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5304&r=ecm 
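The pre-test-plus-bagging idea behind this abstract can be sketched in a few lines: fit a forecast rule that keeps a predictor only when its t-statistic clears a threshold, then average that unstable rule over bootstrap resamples. This is a minimal one-regressor illustration under an assumed data-generating process, not the authors' implementation.

```python
import math
import random

random.seed(1)

def pretest_forecast(x, y, x_new, crit=1.96):
    """Forecast y from one demeaned regressor, zeroed out unless |t| > crit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    rss = sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)
    return my + b * (x_new - mx) if abs(b / se) > crit else my

def bagged_forecast(x, y, x_new, B=200):
    """Bagging: average the pre-test rule over pairs-bootstrap resamples."""
    n = len(x)
    fcs = []
    for _ in range(B):
        idx = [random.randrange(n) for _ in range(n)]
        fcs.append(pretest_forecast([x[i] for i in idx],
                                    [y[i] for i in idx], x_new))
    return sum(fcs) / B

# Toy data with a moderately useful predictor -- the region where the
# pre-test rule is most unstable and bagging smooths it the most.
x = [random.gauss(0, 1) for _ in range(100)]
y = [0.3 * xi + random.gauss(0, 1) for xi in x]
fc = bagged_forecast(x, y, x_new=1.0)
```

Because the bagged forecast averages over resamples in which the pre-test sometimes keeps and sometimes drops the regressor, it behaves like a smooth shrinkage rule rather than a hard in-or-out decision.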
By:  Seung Hyun Hong (Dept. of Economics, Concordia University); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York) 
Abstract:  This paper develops a linearity test that can be applied to cointegrating relations. We consider the widely used RESET specification test and show that when this test is applied to nonstationary time series its asymptotic distribution involves a mixture of noncentral chi^2 distributions, which leads to severe size distortions in conventional testing based on the central chi^2. Nonstationarity is shown to introduce two bias terms in the limit distribution, which are the source of the size distortion in testing. Appropriate corrections for this asymptotic bias lead to a modified version of the RESET test which has a central chi^2 limit distribution under linearity. The modified test has power not only against nonlinear cointegration but also against the absence of cointegration. Simulation results reveal that the modified test has good size in finite samples and reasonable power against many nonlinear models as well as models with no cointegration, confirming the analytic results. In an empirical illustration, the linear purchasing power parity (PPP) specification is tested using monthly post-Bretton Woods data for the US, Japan, and Canada. While the commonly used ADF and PP cointegration tests give mixed results on the presence of linear cointegration in the series, the modified test rejects the null of linear PPP cointegration. 
Keywords:  Nonlinear cointegration, Specification test, RESET test, Noncentral chi^2 distribution 
JEL:  C12 C22 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1541&r=ecm 
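The plain RESET test this paper starts from is simple to state: regress y on x, then test whether the squared fitted values add explanatory power in an auxiliary regression. The sketch below shows the standard stationary-case statistic with an assumed data-generating process; it does not include the paper's bias corrections for nonstationary regressors.

```python
import random

random.seed(2)

def ols_beta(X, y):
    """Solve the normal equations X'X b = X'y by Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [arj - f * acj for arj, acj in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

def reset_stat(x, y):
    """RESET: does yhat^2 add explanatory power to a linear fit of y on x?"""
    b = ols_beta([[1.0, xi] for xi in x], y)
    fit = [b[0] + b[1] * xi for xi in x]
    rss1 = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
    X2 = [[1.0, xi, fi ** 2] for xi, fi in zip(x, fit)]
    b2 = ols_beta(X2, y)
    rss2 = sum((yi - sum(bj * xj for bj, xj in zip(b2, row))) ** 2
               for yi, row in zip(y, X2))
    n = len(y)
    return (rss1 - rss2) / (rss2 / (n - 3))  # ~ chi^2(1) under linearity

# A clearly nonlinear relation should produce a large statistic.
x = [random.uniform(0.5, 2.5) for _ in range(200)]
y = [xi ** 2 + random.gauss(0, 0.1) for xi in x]
stat = reset_stat(x, y)
```

The paper's point is that this chi^2(1) reference distribution is only valid for stationary data; with nonstationary regressors the uncorrected statistic over-rejects badly.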
By:  Timmermann, Allan G 
Abstract:  Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this paper we analyse theoretically the factors that determine the advantages from combining forecasts (for example, the degree of correlation between forecast errors and the relative size of the individual models’ forecast error variances). Although the reasons for the success of simple combination schemes are poorly understood, we discuss several possibilities related to model misspecification, instability (nonstationarities) and estimation error in situations where the number of models is large relative to the available sample size. We discuss the role of combinations under asymmetric loss and consider combinations of point, interval and probability forecasts. 
Keywords:  diversification gains; forecast combinations; model misspecification; pooling and trimming; shrinkage methods 
JEL:  C22 C53 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5361&r=ecm 
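The variance algebra that drives combination gains is short enough to compute directly. For two forecasts with error standard deviations s1, s2 and error correlation rho, the combined error variance and the optimal weight are standard textbook formulas; this snippet just evaluates them for illustration.

```python
def combo_var(w, s1, s2, rho):
    """Variance of the combined forecast error w*e1 + (1-w)*e2."""
    return (w * w * s1 * s1 + (1 - w) ** 2 * s2 * s2
            + 2 * w * (1 - w) * rho * s1 * s2)

def optimal_weight(s1, s2, rho):
    """Weight on forecast 1 that minimizes the combined error variance."""
    return (s2 * s2 - rho * s1 * s2) / (s1 * s1 + s2 * s2 - 2 * rho * s1 * s2)

# Two equally good, uncorrelated forecasts: equal weights halve the variance.
w_star = optimal_weight(1.0, 1.0, 0.0)   # -> 0.5
v_equal = combo_var(0.5, 1.0, 1.0, 0.0)  # -> 0.5, versus 1.0 for either alone

# With unequal variances, equal weights are suboptimal but often not by much,
# one reason simple averages are hard to beat once weights must be estimated.
v_eq = combo_var(0.5, 1.0, 1.5, 0.3)
v_opt = combo_var(optimal_weight(1.0, 1.5, 0.3), 1.0, 1.5, 0.3)
```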
By:  Joerg Breitung; M. Hashem Pesaran 
Abstract:  This paper provides a review of the literature on unit roots and cointegration in panels where the time dimension (T) and the cross-section dimension (N) are relatively large. It distinguishes between the first-generation tests, developed on the assumption of cross-section independence, and the second-generation tests, which allow, in a variety of forms and degrees, for the dependence that might prevail across the different units in the panel. In the analysis of cointegration, the hypothesis testing and estimation problems are further complicated by the possibility of cross-section cointegration, which could arise if the unit roots in the different cross-section units are due to common random walk components. 
Keywords:  panel unit roots, panel cointegration, cross section dependence, common effects 
JEL:  C12 C15 C22 C23 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_1565&r=ecm 
By:  Yingyao Hu; Geert Ridder 
Abstract:  We consider the estimation of nonlinear models with mismeasured explanatory variables, when information on the marginal distribution of the true values of these variables is available. We derive a semiparametric MLE that is shown to be √n-consistent and asymptotically normally distributed. In a simulation experiment we find that the finite-sample distribution of the estimator is close to the asymptotic approximation. The semiparametric MLE is applied to a duration model for AFDC welfare spells with misreported welfare benefits. The marginal distribution of the correctly measured welfare benefits is obtained from an administrative source. 
Keywords:  measurement error model, marginal information, deconvolution, Fourier transform, duration model, welfare spells 
JEL:  C14 C41 I38 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:scp:wpaper:0539&r=ecm 
By:  Stephen G. Donald (Dept. of Economics, University of Texas at Austin); Natércia Fortuna (CEMPRE, Faculdade de Economia, Universidade do Porto); Vladas Pipiras (Dept. of Statistics and Operations Research, UNC at Chapel Hill) 
Abstract:  In a multivariate varying-coefficient model, the response vectors Y are regressed on known functions u(X) of some explanatory variables X, and the coefficients in an unknown regression matrix q(Z) depend on another set of explanatory variables Z. We provide statistical tests, called local and global rank tests, which allow one to estimate the rank of an unknown regression coefficient matrix q(Z) locally at a fixed level of the variable Z or globally as the maximum rank over all levels of Z, respectively. In the case of local rank tests, we do so by applying already available rank tests to a kernel-based estimator of the coefficient matrix q(z). Global rank tests are obtained by integrating the test statistics used in estimating local ranks. We present a simulation study where, focusing on global ranks, we examine the small-sample properties of the considered statistical tests. We also apply our results to estimate the so-called local and global ranks in a demand system where budget shares are regressed on known functions of total expenditures and the coefficients in a regression matrix depend on prices faced by a consumer. 
Keywords:  varying-coefficient model, kernel smoothing, matrix rank estimation, demand systems, local and global ranks 
JEL:  C12 C13 C14 D12 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:por:fepwps:196&r=ecm 
By:  Beyer, Andreas; Farmer, Roger E A; Henry, Jérôme; Marcellino, Massimiliano 
Abstract:  New Keynesian models are characterized by the presence of expectations as explanatory variables. To use these models for policy evaluation, the econometrician must estimate the parameters of the expectation terms. Standard estimation methods have several drawbacks, including possible lack of identification of the parameters, misspecification of the model due to omitted variables or parameter instability, and the common use of inefficient estimation methods. Several authors have raised concerns over the validity of commonly used instruments to achieve identification. In this paper we analyse the practical relevance of these problems and propose remedies for weak identification based on recent developments in factor analysis for information extraction from large data sets. Using these techniques, we evaluate the robustness of recent findings on the importance of forward-looking components in the equations of the New Keynesian model. 
Keywords:  determinacy of equilibrium; factor analysis; forward-looking output equation; New Keynesian Phillips curve; rational expectations; Taylor rule 
JEL:  E5 E52 E58 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5266&r=ecm 
By:  Peter Egger; Mario Larch; Michael Pfaffermayr; Janette Walde 
Abstract:  This paper undertakes a Monte Carlo study to compare MLE-based and GMM-based tests regarding the spatial autocorrelation coefficient of the error term in a Cliff and Ord type model. The main finding is that a Wald test based on GMM estimation, as derived by Kelejian and Prucha (2005a), performs surprisingly well. Our Monte Carlo study indicates that the GMM Wald test is correctly sized even in small samples and exhibits the same power as its MLE-based counterpart. Since GMM estimates are much easier to calculate, the GMM Wald test is recommended for applied researchers. 
Keywords:  spatial autocorrelation, hypothesis tests, Monte Carlo studies, maximum likelihood estimation, generalized method of moments 
JEL:  C12 C21 R10 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_1558&r=ecm 
By:  Troy Matheson (Reserve Bank of New Zealand) 
Abstract:  This paper focuses on forecasting four key New Zealand macroeconomic variables using a dynamic factor model and a large number of predictors. We compare the (simulated) real-time forecasting performance of the factor model with a variety of other time series models and gauge the sensitivity of our results to alternative variable selection algorithms. We find that the factor model performs particularly well at longer horizons. 
JEL:  C32 E47 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:nzb:nzbdps:2005/01&r=ecm 
By:  Pedro H. Albuquerque (Texas A&M International University) 
Abstract:  This paper presents an asymptotically optimal time interval selection criterion for the long-run correlation block estimator (Bartlett kernel estimator) based on the Newey-West approach. An alignment criterion that enhances finite-sample performance is also proposed. The procedure offers an optimal yet unobtrusive alternative to the common practice in finance and economics of arbitrarily choosing time intervals or lags in correlation studies. A Monte Carlo experiment using parameters derived from Dow Jones returns data confirms that the procedure is MSE-superior to typical alternatives such as aggregation over arbitrary time intervals, VAR estimation, and Newey-West automatic lag selection. 
Keywords:  Long-Run Correlation, Bartlett, Lag Selection, Time Interval, Alignment, Newey-West 
JEL:  C14 
Date:  2005–11–23 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511017&r=ecm 
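The Bartlett-kernel (Newey-West) long-run variance estimator that the paper optimizes can be sketched directly: weight the sample autocovariances by a triangular kernel up to a chosen lag. The bandwidth below is the common 4(T/100)^(2/9) rule of thumb, used here only as a stand-in for the paper's optimal selection criterion.

```python
import random

def bartlett_lrv(x, lag=None):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance of x."""
    n = len(x)
    if lag is None:
        lag = int(4 * (n / 100.0) ** (2.0 / 9.0))  # rule-of-thumb bandwidth
    m = sum(x) / n

    def gamma(j):  # sample autocovariance at lag j
        return sum((x[t] - m) * (x[t - j] - m) for t in range(j, n)) / n

    # Triangular (Bartlett) weights guarantee a non-negative estimate.
    return gamma(0) + 2 * sum((1 - j / (lag + 1)) * gamma(j)
                              for j in range(1, lag + 1))

random.seed(4)
iid = [random.gauss(0, 1) for _ in range(2000)]
lrv = bartlett_lrv(iid)  # for i.i.d. data this is close to the variance
```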
By:  Marcellino, Massimiliano 
Abstract:  Pooling forecasts obtained from different procedures typically reduces the mean square forecast error and more generally improves the quality of the forecast. In this paper we evaluate whether pooling interpolated or backdated time series obtained from different procedures can also improve the quality of the generated data. Both simulation results and empirical analyses with macroeconomic time series indicate that pooling also plays a positive and important role in this context. 
Keywords:  factor model; interpolation; Kalman filter; pooling; spline 
JEL:  C32 C43 C82 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5295&r=ecm 
By:  Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York) 
Abstract:  In a simple model composed of a structural equation and an identity, the finite-sample distribution of the IV/LIML estimator is always bimodal, and this is most apparent when the concentration parameter is small. Weak instrumentation is the energy that feeds the secondary mode, and the coefficient in the structural identity provides a point of compression in the density that gives rise to it. The IV limit distribution can be normal, bimodal, or inverse normal depending on the behavior of the concentration parameter and the weakness of the instruments. The limit distribution of the OLS estimator is normal in all cases and has a much faster rate of convergence under very weak instrumentation. The IV estimator is therefore more resistant than OLS to the attractive effect of the identity. Some of these limit results differ from conventional weak instrument asymptotics, including convergence to a constant in very weak instrument cases and limit distributions that are inverse normal. 
Keywords:  Attraction, Bimodality, Concentration parameter, Identity, Inverse normal, Point of compression, Structural Equation, Weak instrumentation 
JEL:  C30 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1540&r=ecm 
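The erratic behaviour of IV under weak instruments is easy to reproduce by simulation. This is a stylized Monte Carlo with an assumed design (first-stage coefficient 0.1, true beta 1), not the paper's structural-equation-plus-identity model; it illustrates the wild dispersion of the IV sampling distribution when the concentration parameter is small.

```python
import random

random.seed(5)

def iv_estimate(z, x, y):
    """Just-identified IV estimator: cov(z, y) / cov(z, x)."""
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    szy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    szx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return szy / szx

ests = []
for _ in range(1000):
    z = [random.gauss(0, 1) for _ in range(50)]
    v = [random.gauss(0, 1) for _ in range(50)]
    u = [0.8 * vi + random.gauss(0, 0.6) for vi in v]  # endogenous error
    x = [0.1 * zi + vi for zi, vi in zip(z, v)]        # weak first stage
    y = [1.0 * xi + ui for xi, ui in zip(x, u)]        # true beta = 1
    ests.append(iv_estimate(z, x, y))

# Under weak instruments the IV distribution is heavy-tailed and can be
# bimodal; OLS on the same data would be biased but tightly clustered.
ests_sorted = sorted(ests)
spread = ests_sorted[990] - ests_sorted[9]
```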
By:  Marcella Veronesi (University of Maryland); Anna Alberini (University of Maryland and Fondazione Eni Enrico Mattei); Joseph C. Cooper (The Resource and Rural Economics Division, Economic Research Service) 
Abstract:  We examine starting point bias in CV surveys with dichotomous choice payment questions and follow-ups, and double-bounded models of the WTP responses. We wish to investigate (1) the seriousness of the biases for the location and scale parameters of WTP in the presence of starting point bias; (2) whether or not these biases depend on the distribution of WTP and on the bids used; and (3) how well a commonly used diagnostic for starting point bias—a test of the null that bid set dummies entered on the right-hand side of the WTP model are jointly equal to zero—performs under various circumstances. Because starting point bias cannot be separately identified in any reliable manner from biases caused by model specification, we use simulation approaches to address this issue. Our Monte Carlo simulations suggest that the effect of ignoring starting point bias is complex and depends on the true distribution of WTP. Bid set dummies tend to soak up misspecifications in the distribution assumed by the researcher for the latent WTP, rather than capturing the presence of starting point bias. Their power in detecting starting point bias is low. 
Keywords:  Anchoring, Dichotomous choice contingent valuation, Starting point bias, Double-bounded models, Estimation bias 
JEL:  Q51 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:fem:femwpa:2005.119&r=ecm 
By:  Kirdan Lees; Troy Matheson (Reserve Bank of New Zealand) 
Abstract:  We utilise prior information from a simple RBC model to improve ARMA forecasts of postwar US GDP. We develop three alternative ARMA forecasting processes that use varying degrees of information from the Campbell (1994) flexible labour model. Directly calibrating the model produces poor forecasting performance, whereas a model that uses a Bayesian framework to take the model to the data yields forecasting performance comparable to that of a purely statistical ARMA process. A final model that uses theory only to restrict the order of the ARMA process (the ps and qs), but estimates the ARMA parameters by maximum likelihood, yields improved forecasting performance. 
JEL:  C11 C22 E37 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:nzb:nzbdps:2005/02&r=ecm 
By:  Golinelli, Roberto; Parigi, Giuseppe 
Abstract:  National accounts statistics undergo a process of revision over time because of the accumulation of information and, less frequently, because of deeper changes, as new definitions, new methodologies etc. are implemented. In this paper we characterise the revision process of Italian GDP data as published by the national statistical office (ISTAT), following the noise-models literature. The analysis shows that this task is better accomplished by concentrating on the growth rates of the data instead of the levels. Another issue tackled in the paper concerns the informative content of the preliminary releases vis-à-vis an intermediate vintage supposed to embody all statistical information (or no longer revisable as far as purely statistical changes are concerned) and the latest vintage of the data, supposed to be the definitive one. The analysis of the news models in differences is based on comparing the forecasting performance of the preliminary releases with that of a number of one-step-ahead forecasts computed from alternative models, ranging from very simple univariate specifications to multivariate specifications based on indicators (bridge models). Results show that, for the intermediate vintage, the preliminary version is the better forecast, while the latest vintage, which embodies statistical as well as definitional revisions, may be better characterised by considering both the preliminary version and the bridge-model forecasts. 
Keywords:  consistent vintages; predictions of 'actual' GDP; preliminary GDP forecasting; real-time data set for Italian GDP 
JEL:  C22 C53 C82 E10 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5302&r=ecm 
By:  Denis Fougère (CNRS and CREST-INSEE, CEPR and IZA Bonn); Thierry Kamionka (CNRS and CREST-INSEE) 
Abstract:  This survey is devoted to the modelling and estimation of reduced-form transition models, which have been extensively used and estimated in labor microeconometrics. The first section contains a general presentation of the statistical modelling of such processes using continuous-time (event-history) data. It also presents parametric and nonparametric estimation procedures, and focuses on the treatment of unobserved heterogeneity. The second section deals with the estimation of Markovian processes using discrete-time panel observations. Here the main question is whether the discrete-time panel observation of a transition process is generated by a continuous-time homogeneous Markov process. After discussing this problem, we present maximum-likelihood and Bayesian procedures for estimating the transition intensity matrix governing the process evolution. Particular attention is paid to the estimation of the continuous-time mover-stayer model, the most elementary mixed Markov chain model. 
Keywords:  labor market transitions, Markovian processes, mover-stayer model, unobserved heterogeneity 
JEL:  C41 C51 J64 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp1850&r=ecm 
By:  Pesaran, M Hashem; Zaffaroni, Paolo 
Abstract:  This paper considers the problem of model uncertainty in the case of multi-asset volatility models and discusses the use of model averaging techniques as a way of dealing with the risk of inadvertently using false models in portfolio management. Evaluation of volatility models is then considered and a simple Value-at-Risk (VaR) diagnostic test is proposed for individual as well as ‘average’ models. The asymptotic and exact finite-sample distributions of the test statistic, which deal with the possibility of parameter uncertainty, are established. The model averaging idea and the VaR diagnostic tests are illustrated by an application to portfolios of daily returns based on 22 of Standard & Poor’s 500 industry group indices over the period 1995-2003. We find strong evidence in support of the ‘thick’ modelling approach proposed in the forecasting literature by Granger and Jeon (2004). 
Keywords:  decision-based evaluations; model averaging; value-at-risk 
JEL:  C32 C52 C53 G11 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5279&r=ecm 
By:  Favero, Carlo A; Marcellino, Massimiliano 
Abstract:  In this paper we assess the possibility of producing unbiased forecasts for fiscal variables in the euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider ARMA models, VARs, small-scale semi-structural models at the national and euro area level, institutional forecasts (OECD), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the fundamentals relevant for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance, using the mean square and mean absolute error criteria at different horizons. Overall, simple time series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt-GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic change. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time series models can produce more accurate forecasts, due to their parsimonious specification. 
Keywords:  euro area; fiscal forecasting; fiscal rules; forecast comparison 
JEL:  C30 C53 E62 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5294&r=ecm 
By:  Haupt, Harry; Oberhofer, Walter 
Abstract:  Dhrymes (1994, Econometric Theory, 10, 254-285) demonstrates the identification and estimation problems that arise in singular equation systems when the error vector obeys an autoregressive scheme, as an extension of restricted least squares. Unfortunately, his main theorem concerning the identification of such systems does not hold in general. 
Keywords:  singular equation systems, identification 
JEL:  C32 
Date:  2005–11–28 
URL:  http://d.repec.org/n?u=RePEc:bay:rdwiwi:597&r=ecm 
By:  Raquel Andres (Centre for the Study of Wealth and Inequality, Columbia University and Centre of Research on Welfare Economics (CREB)); Samuel Calonge (Department of Econometrics, Statistics and Spanish Economy, University of Barcelona and Centre of Research on Welfare Economics (CREB)) 
Abstract:  This paper discusses asymptotic and bootstrap inference methods for a set of inequality and progressivity indices. The application of non-degenerate U-statistics theory is described, particularly through the derivation of the Suits progressivity index distribution. We also provide formulae for the “plug-in” estimator of the index variances, which are less onerous than the U-statistic version (this is especially relevant for those indices whose asymptotic variances contain kernels of degree 3). As far as inference issues are concerned, there are arguments in favour of applying bootstrap methods. Using an accurate database on income and taxes of Spanish households (statistical matching EPF90-IRPF90), our results show that bootstrap methods perform better in terms of sample precision, particularly those yielding asymmetric confidence intervals. We also show that the bootstrap method is a useful technique for Lorenz dominance analysis. An illustration of such an application is made for the Spanish tax and welfare system. We find clear dominance of cash benefits in income redistribution. Public health and state school education also have significant redistributive effects. 
Keywords:  Income Inequality; Tax Progressivity; Statistical Inference; U-statistics; Bootstrap method. 
JEL:  H00 C14 C15 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq200509&r=ecm 
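A percentile bootstrap for an inequality index, the basic device the paper builds on, fits in a few lines. This is a generic sketch with simulated lognormal incomes, not the paper's Suits-index derivation or its asymmetric-interval refinements.

```python
import random

random.seed(6)

def gini(xs):
    """Gini coefficient via the sorted-rank formula."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * cum / (n * sum(xs)) - (n + 1.0) / n

def bootstrap_ci(xs, stat, B=999, alpha=0.05):
    """Percentile bootstrap confidence interval for stat(xs)."""
    sims = sorted(stat([random.choice(xs) for _ in xs]) for _ in range(B))
    return sims[int(alpha / 2 * B)], sims[int((1 - alpha / 2) * B)]

# Simulated incomes; lognormal is a common stylized income distribution.
incomes = [random.lognormvariate(0, 0.7) for _ in range(400)]
g = gini(incomes)
lo_ci, hi_ci = bootstrap_ci(incomes, gini)
```

Sanity check on the index itself: a perfectly equal distribution has a Gini of exactly zero.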
By:  Bauer, Thomas; Sinning, Mathias 
Abstract:  In this paper, a decomposition method for Tobit models is derived, which allows the differences in a censored outcome variable between two groups to be decomposed into a part that is explained by differences in observed characteristics and a part attributable to differences in the estimated coefficients. The method is applied to a decomposition of the gender wage gap using German data. 
Keywords:  Blinder-Oaxaca decomposition; Tobit model; wage gap 
JEL:  C24 J31 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5309&r=ecm 
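The linear Blinder-Oaxaca decomposition that the paper extends to censored outcomes can be sketched with one regressor: the mean-outcome gap splits exactly into an endowment (explained) part and a coefficient (unexplained) part. The simulated groups below are assumptions for illustration; the paper's contribution is handling the Tobit censoring, which this linear sketch omits.

```python
import random

random.seed(7)

def ols_line(x, y):
    """Intercept and slope of a one-regressor OLS fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def oaxaca(xa, ya, xb, yb):
    """Split the mean-outcome gap into explained and unexplained parts."""
    aa, ba = ols_line(xa, ya)
    ab, bb = ols_line(xb, yb)
    mxa, mxb = sum(xa) / len(xa), sum(xb) / len(xb)
    gap = sum(ya) / len(ya) - sum(yb) / len(yb)
    explained = ba * (mxa - mxb)               # endowment gap, group-A prices
    unexplained = (aa - ab) + mxb * (ba - bb)  # coefficient differences
    return gap, explained, unexplained

# Group A has both a higher mean characteristic and a higher return to it.
xa = [random.gauss(1.0, 1) for _ in range(300)]
ya = [1.0 + 0.8 * xi + random.gauss(0, 0.5) for xi in xa]
xb = [random.gauss(0.5, 1) for _ in range(300)]
yb = [1.0 + 0.5 * xi + random.gauss(0, 0.5) for xi in xb]
gap, explained, unexplained = oaxaca(xa, ya, xb, yb)
```

Because each OLS fit passes through its group means, the two components sum to the gap as an exact identity.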
By:  Kevin D. Hoover (University of California, Davis); Mark V. Siegler (California State University, Sacramento) 
Abstract:  For about twenty years, Deirdre McCloskey has campaigned to convince the economics profession that it is hopelessly confused about statistical significance. She argues that many practices associated with significance testing are bad science and that most economists routinely employ these bad practices: “Though to a child they look like science, with all that really hard math, no science is being done in these and 96 percent of the best empirical economics. . .” (McCloskey 1999). McCloskey’s charges are analyzed and rejected. That statistical significance is not economic significance is a jejune and uncontroversial claim, and there is no convincing evidence that economists systematically mistake the two. Other elements of McCloskey’s analysis of statistical significance are shown to be ill-founded, and her criticisms of practices of economists are found to be based in inaccurate readings and tendentious interpretations of their work. Properly used, significance tests are a valuable tool for assessing signal strength, for assisting in model specification, and for determining causal structure. 
Keywords:  statistical significance, economic significance, significance testing, regression analysis, econometric methodology, Deirdre McCloskey, Neyman-Pearson testing 
JEL:  C10 C12 B41 
Date:  2005–11–29 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511018&r=ecm 
By:  Zhong Zhao (IZA Bonn) 
Abstract:  Propensity score matching estimators have two advantages. One is that they overcome the curse of dimensionality of covariate matching, and the other is that they are nonparametric. However, the propensity score is usually unknown and needs to be estimated. If we estimate it nonparametrically, we incur the curse-of-dimensionality problem we are trying to avoid. If we estimate it parametrically, how sensitive the estimated treatment effects are to the specification of the propensity score becomes an important question. In this paper, we study this issue. First, we use a Monte Carlo experimental method to investigate the sensitivity issue under the unconfoundedness assumption. We find that the estimates are not sensitive to the specifications. Next, we provide some theoretical justifications, using the insight from Rosenbaum and Rubin (1983) that any score finer than the propensity score is a balancing score. Then, we reconcile our finding with the finding in Smith and Todd (2005) that, if the unconfoundedness assumption fails, the matching results can be sensitive. However, failure of the unconfoundedness assumption will not necessarily result in sensitive estimates. Matching estimators can be speciously robust in the sense that the treatment effects are consistently overestimated or underestimated. Sensitivity checks applied in empirical studies are helpful in eliminating sensitive cases, but in general they cannot solve the fundamental problem that the matching assumptions are inherently untestable. Last, our results suggest that including irrelevant variables in the propensity score will not bias the results, but overspecifying it (e.g., adding unnecessary nonlinear terms) probably will. 
Keywords:  sensitivity, propensity score, matching, causal model, Monte Carlo 
JEL:  C21 C14 C15 C16 C52 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp1873&r=ecm 
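The estimator under study can be sketched as nearest-neighbour matching on the propensity score. In this simulated example we match on the *true* score of an assumed logit design with a known treatment effect of 1.0; in practice the score must be estimated, which is exactly the specification-sensitivity question the paper investigates.

```python
import math
import random

random.seed(8)

def att_nn_match(treated, controls):
    """ATT via nearest-neighbour matching on the score, with replacement."""
    diffs = []
    for s, y in treated:
        sc, yc = min(controls, key=lambda c: abs(c[0] - s))
        diffs.append(y - yc)
    return sum(diffs) / len(diffs)

# Simulate confounded treatment assignment with a known effect of 1.0.
treated, controls = [], []
for _ in range(2000):
    x = random.gauss(0, 1)
    p = 1.0 / (1.0 + math.exp(-x))          # true propensity score
    d = 1 if random.random() < p else 0
    y = 1.0 * d + x + random.gauss(0, 0.5)  # outcome depends on x too
    (treated if d else controls).append((p, y))

att = att_nn_match(treated, controls)
```

A naive difference in group means would be biased upward here because treated units have systematically higher x; matching on the score removes that confounding.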
By:  Evans, Martin D.D. 
Abstract:  This paper describes a method for calculating daily real-time estimates of the current state of the US economy. The estimates are computed from data on scheduled US macroeconomic announcements using an econometric model that allows for variable reporting lags, temporal aggregation, and other complications in the data. The model can be applied to find real-time estimates of GDP, inflation, unemployment or any other macroeconomic variable of interest. In this paper I focus on the problem of estimating the current level of and growth rate in GDP. I construct daily real-time estimates of GDP that incorporate public information known on the day in question. The real-time estimates produced by the model are uniquely suited to studying how perceived developments in the macro economy are linked to asset prices over a wide range of frequencies. The estimates also provide, for the first time, daily time series that can be used in practical policy decisions. 
Keywords:  forecasting GDP; Kalman filtering; real-time data 
JEL:  C32 E37 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5270&r=ecm 
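The core filtering idea, updating a daily state estimate only on announcement days and letting uncertainty grow in between, can be sketched with a scalar local-level Kalman filter. This is a minimal illustration of the mechanism, with assumed variances; the paper's model handles many variables, reporting lags and temporal aggregation.

```python
def local_level_filter(obs, q=0.01, r=1.0, a0=0.0, p0=1e6):
    """Kalman filter for a local-level state; None marks days with no release."""
    a, p = a0, p0
    path = []
    for y in obs:
        p = p + q                 # predict: state variance grows each day
        if y is not None:         # update only on announcement days
            k = p / (p + r)       # Kalman gain
            a = a + k * (y - a)
            p = (1 - k) * p
        path.append(a)            # daily real-time estimate
    return path

# A constant "true" growth rate of 1.0, observed only every third day.
obs = [1.0 if t % 3 == 0 else None for t in range(60)]
estimates = local_level_filter(obs)
```

On the missing days the estimate is simply carried forward with higher uncertainty, which is how the filter turns irregular announcement data into a daily series.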
By:  M. Hashem Pesaran; Martin Weale 
Abstract:  This paper focuses on survey expectations and discusses their uses for testing and modeling of expectations. Alternative models of expectations formation are reviewed and the importance of allowing for heterogeneity of expectations is emphasized. A weak form of the rational expectations hypothesis which focuses on average expectations rather than individual expectations is advanced. Other models of expectations formation, such as the adaptive expectations hypothesis, are briefly discussed. Testable implications of rational and extrapolative models of expectations are reviewed and the importance of the loss function for the interpretation of the test results is discussed. The paper then provides an account of the various surveys of expectations, reviews alternative methods of quantifying the qualitative surveys, and discusses the use of aggregate and individual survey responses in the analysis of expectations and for forecasting. 
Keywords:  models of expectations formation, survey data, heterogeneity, tests of rational expectations 
JEL:  C40 C50 C53 C80 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_1599&r=ecm 