
on Econometrics 
By:  Marc P. Giannoni; Jean Boivin 
Abstract:  Standard practice for the estimation of dynamic stochastic general equilibrium (DSGE) models maintains the assumption that economic variables are properly measured by a single indicator, and that all relevant information for the estimation is adequately summarized by a small number of data series, whether or not measurement error is allowed for. However, recent empirical research on factor models has shown that information contained in large data sets is relevant for the evolution of important macroeconomic series. This suggests that conventional model estimates and inference based on estimated DSGE models are likely to be distorted. In this paper, we propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a data-rich environment. This framework provides an interpretation of all information contained in a large data set through the lens of a DSGE model. The estimation involves Bayesian Markov Chain Monte Carlo (MCMC) methods extended so that the estimates can, in some cases, inherit the properties of classical maximum likelihood estimation. We apply this estimation approach to a state-of-the-art DSGE monetary model. Treating theoretical concepts of the model, such as output, inflation and employment, as partially observed, we show that the information from a large set of macroeconomic indicators is important for accurate estimation of the model. It also allows us to improve the forecasts of important economic variables.
Keywords:  DSGE models, model estimation, measurement error, large data sets, factor models, MCMC techniques, Bayesian estimation 
JEL:  E52 E3 C32 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:431&r=ecm 
By:  Denis Bolduc; Lynda Khalaf; Clément Yélou 
Abstract:  The problem of constructing confidence set estimates for parameter ratios arises in a variety of econometrics contexts; these include value-of-time estimation in transportation research and inference on elasticities given several model specifications. Even when the model under consideration is identifiable, parameter ratios involve a possibly discontinuous parameter transformation that becomes ill-behaved as the denominator parameter approaches zero. More precisely, the parameter ratio is not identified over the whole parameter space: it is locally almost unidentified or (equivalently) weakly identified over a subset of the parameter space. It is well known that such situations can strongly affect the distributions of estimators and test statistics, leading to the failure of standard asymptotic approximations, as shown by Dufour. Here, we provide explicit solutions for projection-based simultaneous confidence sets for ratios of parameters when the joint confidence set is obtained through a generalized Fieller approach. A simulation study for a ratio of slope parameters in a simple binary probit model shows that the coverage rate of Fieller's confidence interval is immune to weak identification, whereas the confidence interval based on the delta method performs poorly, even when the sample size is large. The procedures are examined in illustrative empirical models, with a focus on choice models.
Keywords:  confidence interval; generalized Fieller's theorem; delta method; weak identification; ratio of parameters. 
JEL:  C10 C35 R40 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:48&r=ecm 
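The contrast the abstract draws between Fieller's construction and the delta method can be sketched in a generic two-parameter setting (not the authors' probit design; the estimates and variances below are illustrative numbers):

```python
import numpy as np

def fieller_interval(b1, b2, v11, v22, v12, z=1.96):
    """Fieller confidence set for the ratio b1/b2.

    Solves the quadratic (b2^2 - z^2*v22)*t^2 - 2*(b1*b2 - z^2*v12)*t
    + (b1^2 - z^2*v11) <= 0 in t.  When the leading coefficient is
    positive the set is a bounded interval; otherwise it is unbounded,
    which is how the method signals weak identification of the ratio.
    """
    a = b2 ** 2 - z ** 2 * v22
    b = -2.0 * (b1 * b2 - z ** 2 * v12)
    c = b1 ** 2 - z ** 2 * v11
    disc = b ** 2 - 4 * a * c
    if a > 0 and disc > 0:
        r = np.sqrt(disc)
        return ((-b - r) / (2 * a), (-b + r) / (2 * a))
    return (-np.inf, np.inf)   # denominator too close to zero: unbounded set

def delta_interval(b1, b2, v11, v22, v12, z=1.96):
    """First-order delta-method interval for b1/b2 (can be badly sized
    when the denominator is weakly identified)."""
    r = b1 / b2
    se = np.sqrt((v11 - 2 * r * v12 + r ** 2 * v22) / b2 ** 2)
    return (r - z * se, r + z * se)

# Well-identified case: the two intervals nearly coincide.
lo_f, hi_f = fieller_interval(2.0, 1.0, 0.01, 0.01, 0.0)
lo_d, hi_d = delta_interval(2.0, 1.0, 0.01, 0.01, 0.0)

# Weakly identified case (denominator small relative to its standard
# error): Fieller returns an unbounded set, the delta method would
# still report a misleadingly finite interval.
weak_f = fieller_interval(2.0, 0.05, 0.01, 0.01, 0.0)
```

The quadratic-root form makes the projection idea concrete: the confidence set is whatever region of the ratio's parameter space survives inversion of the joint test, bounded or not.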
By:  Fabio Trojani; Francesco Audrino 
Abstract:  We propose a multivariate nonparametric technique for generating reliable historical yield curve scenarios and confidence intervals. The approach is based on a Functional Gradient Descent (FGD) estimation of the conditional mean vector and volatility matrix of a multivariate interest rate series. It is computationally feasible in large dimensions and it can account for nonlinearities in the dependence of interest rates at all available maturities. Based on FGD we apply filtered historical simulation to compute reliable out-of-sample yield curve scenarios and confidence intervals. We backtest our methodology on daily USD bond data for forecasting horizons from 1 to 10 days. Based on several statistical performance measures we find significant evidence of a higher predictive power of our method when compared to scenario-generating techniques based on (i) factor analysis, (ii) a multivariate CCC-GARCH model, or (iii) exponential smoothing volatility estimators as in the RiskMetrics approach.
Keywords:  Conditional mean and volatility estimation; Filtered Historical Simulation; Functional Gradient Descent; Term structure; Multivariate CCC-GARCH models 
JEL:  C14 C15 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:14&r=ecm 
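The filtered-historical-simulation step the abstract builds on can be sketched in one dimension. This is a minimal stand-in: a RiskMetrics-style EWMA plays the role of the paper's FGD volatility fit, and the data are simulated rather than USD bond yields:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily rate changes with slowly time-varying volatility
# (a stand-in for one maturity of the yield curve panel).
n = 1000
true_vol = 0.01 * np.exp(0.5 * np.sin(np.linspace(0, 12, n)))
returns = true_vol * rng.standard_normal(n)

# Step 1: fit a conditional volatility model (EWMA here; the paper
# uses Functional Gradient Descent instead).
lam = 0.94
sig2 = np.empty(n)
sig2[0] = returns[:50].var()
for t in range(1, n):
    sig2[t] = lam * sig2[t - 1] + (1 - lam) * returns[t - 1] ** 2
sigma = np.sqrt(sig2)

# Step 2: filter -- standardize returns by their fitted volatility.
std_resid = returns / sigma

# Step 3: filtered historical simulation -- bootstrap the standardized
# residuals and rescale by the one-step-ahead volatility forecast.
sig2_next = lam * sig2[-1] + (1 - lam) * returns[-1] ** 2
scenarios = np.sqrt(sig2_next) * rng.choice(std_resid, size=10_000, replace=True)

# Empirical 1%/99% scenario band for tomorrow's rate change.
band = np.percentile(scenarios, [1, 99])
```

Resampling standardized rather than raw returns is what lets the scenarios inherit the empirical (possibly fat-tailed) shock distribution while respecting current volatility.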
By:  Fuchun Li 
Abstract:  A new consistent test is proposed for the parametric specification of the diffusion function in a diffusion process without any restrictions on the functional form of the drift function. The data are assumed to be sampled discretely in a time interval that can be fixed or lengthened to infinity. The test statistic is shown to follow an asymptotic normal distribution under the null hypothesis that the parametric diffusion function is correctly specified. Monte Carlo simulations are conducted to examine the finite-sample performance of the test, revealing that the test has good size and power. 
Keywords:  Econometric and statistical methods; Interest rates 
JEL:  C12 C14 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:0535&r=ecm 
By:  Jose M. VidalSanz; Mercedes EstebanBravo 
Abstract:  This paper proposes a worst-case approach for estimating econometric models containing unobservable variables. Worst-case estimators are robust against the adverse effects of unobservables and, unlike in the classical literature, no assumptions are made about the statistical nature of the unobservables. This method should be seen as complementing standard methods; cautious modelers should compare different estimates to determine robust models. Limiting theory is obtained, and a Monte Carlo study of finite-sample properties is conducted. An economic application is included.
Keywords:  unobservable variables, robust estimation, minimax optimization, M-estimators, GMM-estimators 
JEL:  C13 C51 C60 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:385&r=ecm 
By:  Kirstin Hubrich; David F. Hendry (Research Department, European Central Bank) 
Abstract:  We explore whether forecasting an aggregate variable using information on its disaggregate components can improve the prediction mean squared error over forecasting the disaggregates and aggregating those forecasts, or using only aggregate information in forecasting the aggregate. An implication of a general theory of prediction is that the first should outperform the alternative methods of forecasting the aggregate in population. However, forecast models are based on sample information. The data generation process and the forecast model selected might differ. We show how changes in collinearity between regressors affect the bias-variance trade-off in model selection and how the criterion used to select variables in the forecasting model affects forecast accuracy. We investigate why forecasting the aggregate using information on its disaggregate components improves forecast accuracy of the aggregate forecast of Euro area inflation in some situations, but not in others. 
Keywords:  Disaggregate information, predictability, forecast model selection, VAR, factor models 
JEL:  C32 C53 E31 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:270&r=ecm 
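The three forecasting strategies the abstract compares can be set side by side in a toy simulation. This sketch covers only two of them (direct aggregate forecasting versus aggregating component forecasts), with simple AR(1) fits standing in for the paper's model-selection machinery; the components are simulated, not Euro area inflation data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two disaggregate components with different persistence; the
# aggregate is their sum (a toy stand-in for sectoral inflation).
n = 600
phi = np.array([0.9, 0.2])
x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal(2)
agg = x.sum(axis=1)

def ar1_forecast(y_train, y_last):
    """OLS AR(1) slope on the training sample, then a one-step forecast."""
    b = (y_train[1:] @ y_train[:-1]) / (y_train[:-1] @ y_train[:-1])
    return b * y_last

split = 400
err_direct, err_sum = [], []
for t in range(split, n - 1):
    # (a) forecast the aggregate from its own past
    f_direct = ar1_forecast(agg[:t], agg[t])
    # (b) forecast each component, then aggregate the forecasts
    f_sum = sum(ar1_forecast(x[:t, j], x[t, j]) for j in range(2))
    err_direct.append((agg[t + 1] - f_direct) ** 2)
    err_sum.append((agg[t + 1] - f_sum) ** 2)

mse_direct = np.mean(err_direct)
mse_sum = np.mean(err_sum)
```

With heterogeneous persistence the aggregate follows an ARMA process that an AR(1) fits poorly, which is the population argument for using disaggregate information; in any finite sample, estimation noise in the component models can eat that gain, which is the tension the paper investigates.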
By:  A. Onatski; V. Karguine 
Abstract:  Data in which each observation is a curve occur in many applied problems. This paper explores prediction in time series in which the data are generated by a curve-valued autoregression process. It develops a novel technique, the predictive factor decomposition, for estimation of the autoregression operator, which is designed to be better suited for prediction purposes than the principal components method. The technique is based on finding a reduced-rank approximation to the autoregression operator that minimizes the norm of the expected prediction error. Implementing this idea, we relate the operator approximation problem to an eigenvalue problem for an operator pencil that is formed by the cross-covariance and covariance operators of the autoregressive process. We develop an estimation method based on regularization of the empirical counterpart of this eigenvalue problem, and prove that with a certain choice of parameters, the method consistently estimates the predictive factors. In addition, we show that forecasts based on the estimated predictive factors converge in probability to the optimal forecasts. The new method is illustrated by an analysis of the dynamics of the term structure of Eurodollar futures rates. We restrict the sample to the period of normal growth and find that in this subsample the predictive factor technique not only outperforms the principal components method but also performs on par with the best available prediction methods.
Keywords:  Functional data analysis; Dimension reduction; Reduced-rank regression; Principal component; Predictive factor; Generalized eigenvalue problem; Term structure; Interest rates 
JEL:  C23 C53 E43 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:59&r=ecm 
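The finite-dimensional core of the reduced-rank idea can be sketched directly. This is a minimal matrix version, not the authors' functional/regularized estimator: the best rank-k predictor for a VAR(1) truncates the SVD of the OLS operator weighted by the square root of the covariance (equivalently, it solves a generalized eigenvalue problem involving the cross-covariance and covariance matrices). The dimensions and the low-rank DGP are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a p-dimensional "curve-valued" AR(1): y_{t+1} = A y_t + e_t,
# where A has low rank, so few predictive factors carry the forecast power.
p, n, k = 6, 4000, 2
u = rng.standard_normal((p, k))
v = rng.standard_normal((k, p))
A = 0.5 * u @ v / np.linalg.norm(u @ v, 2)   # rank-k, spectral norm 0.5
y = np.zeros((n, p))
for t in range(n - 1):
    y[t + 1] = A @ y[t] + 0.1 * rng.standard_normal(p)

# Sample covariance C0 and cross-covariance C1 = Cov(y_{t+1}, y_t).
Y0, Y1 = y[:-1], y[1:]
C0 = Y0.T @ Y0 / (n - 1)
C1 = Y1.T @ Y0 / (n - 1)

# Full OLS estimate of the autoregression operator.
A_ols = C1 @ np.linalg.inv(C0)

# Rank-k predictive-factor approximation: the best rank-k B for the
# prediction loss E|y_{t+1} - B y_t|^2 truncates the SVD of A_ols C0^{1/2}.
d, Q = np.linalg.eigh(C0)
C0_half = Q @ np.diag(np.sqrt(d)) @ Q.T
C0_halfinv = Q @ np.diag(1 / np.sqrt(d)) @ Q.T
U, s, Vt = np.linalg.svd(A_ols @ C0_half)
B = U[:, :k] @ np.diag(s[:k]) @ Vt[:k] @ C0_halfinv

# With a genuinely rank-k DGP the truncated predictor loses almost nothing.
mse_full = np.mean((Y1 - Y0 @ A_ols.T) ** 2)
mse_rank = np.mean((Y1 - Y0 @ B.T) ** 2)
```

The C0^{1/2} weighting is the point of difference from principal components: the rank restriction is chosen to minimize prediction error, not to explain the variance of y_t.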
By:  Aaron Smallwood; Alex Maynard; Mark Wohar 
Abstract:  Persistent regressors pose a common problem in predictive regressions. Tests of the forward rate unbiased hypothesis (FRUH) constitute a prime example. Standard regression tests that strongly reject FRUH have been questioned on the grounds of potential long memory in the forward premium. Researchers have argued that this could create a regression imbalance, thereby invalidating standard statistical inference. To address this concern we employ a two-step procedure that rebalances the predictive equation, while still permitting us to impose the null of FRUH. We conduct a comprehensive simulation study to validate our procedure. The simulations demonstrate the good small-sample performance of our two-stage procedure, and its robustness to possible errors in the first-stage estimation of the memory parameter. By contrast, the simulations for standard regression tests show the potential for significant size distortion, validating the concerns of previous researchers. Our empirical application to excess returns suggests less evidence against FRUH than found using the standard, but possibly questionable, t-tests. 
Keywords:  Long Memory; Predictive Regressions; Forward Rate Unbiasedness Hypothesis 
JEL:  C22 C12 F31 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:384&r=ecm 
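The rebalancing step rests on fractional differencing: given an estimate of the memory parameter d, the long-memory regressor is filtered by (1-L)^d before entering the predictive regression. A minimal sketch of that filter (the estimation of d, e.g. by log-periodogram regression, is omitted here; d is taken as known):

```python
import numpy as np

def frac_diff(x, d):
    """Fractional difference (1-L)^d x via the binomial expansion:
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    out = np.empty(n)
    for t in range(n):
        out[t] = pi[: t + 1] @ x[t::-1]   # sum_k pi_k * x_{t-k}
    return out

rng = np.random.default_rng(3)
e = rng.standard_normal(500)

# Build a long-memory "forward premium" by fractional integration,
# then rebalance it by differencing with the (here known) d = 0.4.
premium = frac_diff(e, -0.4)
rebalanced = frac_diff(premium, 0.4)   # recovers the short-memory shocks
```

The round trip is exact for these truncated causal filters, which is why applying the filter with an estimated d brings the regressor back to (approximately) short memory and rebalances the two sides of the predictive equation.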
By:  Annastiina Silvennoinen (Department of Economic Statistics, Stockholm School of Economics); Timo Teräsvirta (Department of Economic Statistics, Stockholm School of Economics) 
Abstract:  In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The approach adopted here is based on the decomposition of the covariances into correlations and standard deviations. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to an endogenous or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the Standard & Poor's 500 stock index completes the paper. The model is estimated for the full five-dimensional system as well as several subsystems, and the results are discussed in detail. 
Keywords:  multivariate GARCH; constant conditional correlation; dynamic conditional correlation; return comovement; variable correlation GARCH model; volatility model evaluation 
JEL:  C12 C32 C51 C52 G1 
Date:  2005–10–01 
URL:  http://d.repec.org/n?u=RePEc:uts:rpaper:168&r=ecm 
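The correlation part of the model can be sketched on its own: the conditional correlation matrix is a convex combination of two constant extreme states, weighted by a logistic transition function of some variable. The two correlation matrices and the transition parameters below are illustrative, and the GARCH standard-deviation part is left out:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Smooth transition function G in (0,1); gamma controls the speed
    of the transition, c its location."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# Two extreme correlation states (here 3 assets).
R1 = np.array([[1.0, 0.2, 0.1],
               [0.2, 1.0, 0.3],
               [0.1, 0.3, 1.0]])
R2 = np.array([[1.0, 0.8, 0.7],
               [0.8, 1.0, 0.9],
               [0.7, 0.9, 1.0]])

# Conditional correlations as a smooth function of a transition
# variable s_t (e.g. calendar time or lagged market volatility):
# R_t = (1 - G(s_t)) R1 + G(s_t) R2.
s = np.linspace(-3, 3, 7)
G = logistic_transition(s, gamma=2.0, c=0.0)
R_t = [(1 - g) * R1 + g * R2 for g in G]
```

A convex combination of two valid correlation matrices is again a valid correlation matrix (unit diagonal, positive definite), which is what makes this parameterization attractive relative to modeling each correlation path separately.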
By:  Martijn van Hasselt 
Abstract:  This paper considers two models to deal with an outcome variable that contains a large fraction of zeros, such as individual expenditures on health care: a sample-selection model and a two-part model. The sample-selection model uses two possibly correlated processes to determine the outcome: a decision process and an outcome process; conditional on a favorable decision, the outcome is observed. The two-part model comprises uncorrelated decision and outcome processes. The paper addresses the issue of selecting between these two models. With a Gaussian specification of the likelihood, the models are nested and inference can focus on the correlation coefficient. Using a fully parametric Bayesian approach, I present sampling algorithms for the model parameters that are based on data augmentation. In addition to the sampler output of the correlation coefficient, a Bayes factor can be computed to distinguish between the models. The paper illustrates the methods and their potential pitfalls using simulated data sets.
Keywords:  Sample Selection, Data Augmentation, Gibbs Sampling 
JEL:  C11 C15 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:241&r=ecm 
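The role of the correlation coefficient that nests the two models can be seen in a simulation. This sketch only generates data from the Gaussian sample-selection model and checks the selection shift analytically; the Gibbs sampler and Bayes factor of the paper are not reproduced. All parameter values are illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
rho = 0.6                      # correlation between the two error processes

# Correlated Gaussian errors for the decision and outcome equations.
u = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
d = (0.3 + u[:, 0] > 0)        # decision process: outcome observed iff True
y = np.where(d, 1.0 + u[:, 1], 0.0)   # zeros for non-participants

# Under selection (rho != 0) the mean of observed outcomes is shifted by
# rho * lambda(0.3), where lambda = phi/Phi is the inverse Mills ratio.
# A two-part model assumes rho = 0 and would miss this shift.
c = 0.3
phi_c = math.exp(-c * c / 2) / math.sqrt(2 * math.pi)
Phi_c = 0.5 * (1 + math.erf(c / math.sqrt(2)))
implied_mean = 1.0 + rho * (phi_c / Phi_c)
observed_mean = y[d].mean()
```

With rho = 0 the two processes decouple and the observed-outcome mean equals the latent mean, which is why the comparison between the models can be cast entirely as inference on the correlation coefficient.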
By:  Changli He (Department of Economic Statistics, Stockholm School of Economics); Annastiina Silvennoinen (Department of Economic Statistics, Stockholm School of Economics); Timo Teräsvirta (Department of Economic Statistics, Stockholm School of Economics) 
Abstract:  In this paper we consider the third-moment structure of a class of nonlinear time series models. Empirically it is often found that the marginal distribution of financial time series is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modelling the unconditional mean and variance using models which respond nonlinearly or asymmetrically to shocks. We investigate the implications these models have on the third-moment structure of the marginal distribution and different conditions under which the unconditional distribution exhibits skewness as well as a nonzero third-order autocovariance structure. In this respect, the asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible, explicit analytical expressions are provided for all third-order moments and cross-moments. Finally, we introduce a new tool, the shock impact curve, that can be used to investigate the impact of shocks on the conditional mean squared error of the return. 
Keywords:  asymmetry; GARCH; nonlinearity; shock impact curve; time series; unconditional skewness 
JEL:  C22 
Date:  2005–10–01 
URL:  http://d.repec.org/n?u=RePEc:uts:rpaper:169&r=ecm 
By:  J. Huston McCulloch 
Abstract:  Adaptive Least Squares (ALS), i.e. recursive regression with asymptotically constant gain, as proposed by Ljung (1992), Sargent (1993, 1999), and Evans and Honkapohja (2001), is an increasingly widely used method of estimating time-varying relationships and of proxying agents' time-evolving expectations. This paper provides theoretical foundations for ALS as a special case of the generalized Kalman solution of a Time Varying Parameter (TVP) model. This approach is in the spirit of that proposed by Ljung (1992) and Sargent (1999), but unlike theirs, nests the rigorous Kalman solution of the elementary Local Level Model, and employs a very simple, yet rigorous, initialization. Unlike other approaches, the proposed method allows the asymptotic gain to be estimated by maximum likelihood (ML). The ALS algorithm is illustrated with univariate time series models of U.S. unemployment and inflation. Because the null hypothesis that the coefficients are in fact constant lies on the boundary of the permissible parameter space, the usual regularity conditions for the chi-square limiting distribution of likelihood-based test statistics are not met. Consequently, critical values of the Likelihood Ratio test statistics are established by Monte Carlo simulation and used to test the constancy of the parameters in the estimated models. 
Keywords:  Kalman Filter, Adaptive Learning, Adaptive Least Squares, Time Varying Parameter Model, Natural Unemployment Rate, Inflation Forecasting 
JEL:  C22 E37 E31 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:239&r=ecm 
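The constant-gain recursion at the heart of ALS is short enough to state in full. This is the textbook Evans-Honkapohja form, not the paper's Kalman-nested variant with its particular initialization and ML-estimated gain; the drifting-slope DGP is illustrative:

```python
import numpy as np

def adaptive_ls(X, y, gain):
    """Recursive least squares with constant gain: each step nudges the
    coefficient estimate toward the newest observation, discounting old
    data geometrically (constant-gain learning)."""
    n, k = X.shape
    b = np.zeros(k)
    R = np.eye(k)               # running second-moment matrix of regressors
    path = np.empty((n, k))
    for t in range(n):
        x = X[t]
        R = R + gain * (np.outer(x, x) - R)
        b = b + gain * np.linalg.solve(R, x) * (y[t] - x @ b)
        path[t] = b
    return path

# Time-varying DGP: the slope jumps from 1 to 3 halfway through the
# sample.  ALS tracks the move; full-sample OLS would average the regimes.
rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta = np.where(np.arange(n) < n // 2, 1.0, 3.0)
y = 0.5 * X[:, 0] + beta * X[:, 1] + 0.5 * rng.standard_normal(n)

path = adaptive_ls(X, y, gain=0.02)
```

The gain fixes the effective memory (roughly 1/gain observations), which is exactly the quantity the paper proposes to pin down by maximum likelihood rather than by ad hoc choice.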
By:  James Morley; Tara M. Sinclair (Economics, Washington University) 
Abstract:  While tests for unit roots and cointegration have important econometric and economic implications, they do not always offer conclusive results. For example, Rudebusch (1992; 1993) demonstrates that standard unit root tests have low power against estimated trend stationary alternatives. In addition, Perron (1989) shows that standard unit root tests cannot always distinguish unit root from stationary processes that contain segmented or shifted trends. Recent research (Harvey 1993; Engel and Morley 2001; Morley, Nelson et al. 2003; Morley 2004; Sinclair 2004) suggests that unobserved components models can provide a useful framework for representing economic time series which contain unit roots, including those that are cointegrated. These series can be modeled as containing an unobserved permanent component, representing the stochastic trend, and an unobserved transitory component, representing the stationary component of the series. These unobserved components are then estimated using the Kalman filter. The unobserved components framework can also provide a more powerful way to test for unit roots and cointegration than what is currently available (Nyblom and Harvey 2000). This paper develops a new test that nests a partial unobserved components model within a more general unobserved components model. This nesting allows the general and the restricted models to be compared using a likelihood ratio test. The likelihood ratio test statistic has a nonstandard distribution, but Monte Carlo simulation can provide its proper distribution. The simulation uses data generated with the results from the partial unobserved components model as the values for the null hypothesis. Consequently, the null hypothesis for this test is stationarity, which is useful in many cases. In this sense our test is like the well-known KPSS test (Kwiatkowski, Phillips et al. 1992), but our test is a parametric version which provides more power by considering the unobserved components structure in calculation of the test statistic. This more powerful test can be used to evaluate important macroeconomic theories such as the permanent income hypothesis, real business cycle theories, and purchasing power parity for exchange rates.
Keywords:  unobserved components, unit roots, cointegration 
JEL:  C32 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:451&r=ecm 
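The permanent/transitory split the abstract describes is easiest to see in the simplest unobserved components model, the local level model, filtered by the Kalman recursions. This sketch uses known variances and a crude initialization rather than the paper's estimated models and likelihood-ratio machinery:

```python
import numpy as np

def local_level_filter(y, var_eta, var_eps):
    """Kalman filter for the local level model
        y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t,
    the simplest unobserved-components split into a permanent
    (random walk) and a transitory (noise) part."""
    n = len(y)
    mu = np.empty(n)                     # filtered level estimates
    a, p = y[0], var_eps + var_eta       # crude start at the first observation
    mu[0] = a
    for t in range(1, n):
        p = p + var_eta                  # predict
        k = p / (p + var_eps)            # Kalman gain
        a = a + k * (y[t] - a)           # update with the new observation
        p = (1 - k) * p
        mu[t] = a
    return mu

rng = np.random.default_rng(6)
n = 2000
level = np.cumsum(0.1 * rng.standard_normal(n))     # permanent component
y = level + 0.5 * rng.standard_normal(n)            # plus transitory noise

mu_hat = local_level_filter(y, var_eta=0.01, var_eps=0.25)
```

In the paper's test, restricted and unrestricted versions of such a model are each estimated by maximum likelihood and compared by a likelihood ratio whose null distribution is simulated, since the restriction sits on the boundary of the parameter space.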
By:  Emmanuel Guerre; Hyungsik Roger Moon 
Abstract:  This paper studies a semiparametric nonstationary binary choice model. Imposing a spherical normalization constraint on the parameter for identification purposes, we find that the maximum score estimator (MSE) and the smoothed maximum score estimator (SMSE) are at least sqrt(n)-consistent. Comparing this rate to the parametric MLE's convergence rate, we show that when a normalization restriction is imposed on the parameter, the parametric MLE of Park and Phillips (2000) converges at a rate of n^(3/4) and its limiting distribution is mixed normal. Finally, we show briefly how to apply our estimation method to a nonstationary single index model. 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:scp:wpaper:0537&r=ecm 
By:  Beirlant, Jan; Joossens, Elisabeth; Segers, Johan (Tilburg University, Center for Economic Research) 
Abstract:  The generalized Pareto distribution (GPD) is probably the most popular model for inference on the tail of a distribution. The peaks-over-threshold methodology postulates the GPD as the natural model for excesses over a high threshold. However, for the GPD to fit such excesses well, the threshold should often be rather large, thereby restricting the model to only a small upper fraction of the data. In the case of heavy-tailed distributions, we propose an extension of the GPD with a single parameter, motivated by a second-order refinement of the underlying Pareto-type model. Not only can the extended model be fitted to a larger fraction of the data, but in addition the resulting maximum likelihood estimator of the tail index is asymptotically unbiased. In practice, sample paths of the new tail index estimator as a function of the chosen threshold exhibit much larger regions of stability around the true value. We apply the method to daily log-returns of the euro-UK pound exchange rate. Some simulation results are presented as well. 
Keywords:  heavy tails; peaks-over-threshold; regular variation; tail index; bias; exchange rate; 62G20; 62G32; C13 
JEL:  C14 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2005112&r=ecm 
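The peaks-over-threshold ingredient the paper extends can be sketched with the baseline GPD alone. This is a plain method-of-moments fit to simulated excesses, not the authors' extended model or their maximum likelihood estimator; the true shape and scale below are illustrative:

```python
import numpy as np

def gpd_fit_moments(z):
    """Method-of-moments fit of the GPD to threshold excesses z:
    shape xi = (1 - m^2/v)/2, scale sigma = m*(m^2/v + 1)/2,
    from mean m = sigma/(1-xi) and variance v = sigma^2/((1-xi)^2 (1-2 xi));
    valid for xi < 1/2 (finite variance)."""
    m, v = z.mean(), z.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

rng = np.random.default_rng(7)

# Simulate GPD excesses by inverse CDF: F(z) = 1 - (1 + xi*z/sigma)^(-1/xi),
# so z = sigma/xi * (U^(-xi) - 1) for U uniform.
xi_true, sigma_true = 0.2, 1.0
u = rng.uniform(size=100_000)
z = sigma_true / xi_true * (u ** (-xi_true) - 1.0)

xi_hat, sigma_hat = gpd_fit_moments(z)
```

The practical difficulty the abstract targets is visible here only by its absence: with genuinely GPD excesses any threshold works, whereas with real Pareto-type data the plain GPD fit is biased unless the threshold is high, which is what the second-order extension is designed to correct.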
By:  Daniel Ventosa-Santaularia; Antonio E. Noriega 
Abstract:  We study the phenomenon of spurious regression between two random variables when the generating mechanism for the individual series follows a stationary process around a trend with (possibly) multiple breaks in its level and slope. We develop the relevant asymptotic theory and show that spurious regression occurs independently of the structure assumed for the errors. In contrast to previous findings, the spurious relationship is less severe when breaks are present, whether or not the regression model includes a linear trend. Simulations confirm our asymptotic results and reveal that, in finite samples, the spurious regression is sensitive to the presence of a linear trend and to the relative locations of the breaks within the sample.
Keywords:  Spurious regression, Structural breaks, Stationarity 
JEL:  C22 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:186&r=ecm 
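The basic experiment behind such simulations is easy to reproduce: regress two independent broken-trend-stationary series on each other and record how often a nominal 5% t-test on the slope rejects. The break dates and slopes below are illustrative, and this sketch omits the paper's specifications with a linear trend in the regression:

```python
import numpy as np

rng = np.random.default_rng(8)

def broken_trend_series(n, break_at, slopes, noise_sd=1.0):
    """Stationary noise around a linear trend whose slope shifts at a break."""
    t = np.arange(n)
    trend = np.where(t < break_at, slopes[0] * t,
                     slopes[0] * break_at + slopes[1] * (t - break_at))
    return trend + noise_sd * rng.standard_normal(n)

def t_stat_slope(y, x):
    """t-statistic on x in the OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    b, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = e @ e / (len(y) - 2)
    var_b = s2 * np.linalg.inv(X.T @ X)
    return b[1] / np.sqrt(var_b[1, 1])

# Two independent trending series: the nominal 5% test rejects far too often.
n_sim, n = 500, 200
rejections = 0
for _ in range(n_sim):
    y = broken_trend_series(n, 120, (0.05, 0.15))
    x = broken_trend_series(n, 60, (0.10, 0.02))
    if abs(t_stat_slope(y, x)) > 1.96:
        rejections += 1
reject_rate = rejections / n_sim
```

Since both series trend upward deterministically, OLS attributes the common drift to the regressor and the t-test rejects the (true) null of no relationship almost always, which is the spurious-regression phenomenon the paper quantifies under breaks.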
By:  Jaromír Beneš (Czech National Bank, Monetary and Statistics Department, Prague, Czech Republic); David Vávra (Czech National Bank, Monetary and Statistics Department, Prague, Czech Republic) 
Abstract:  We propose the method of eigenvalue filtering as a new tool to extract time series subcomponents (such as business-cycle or irregular) defined by properties of the underlying eigenvalues. We logically extend the Beveridge-Nelson decomposition of VAR time-series models, focusing on the transient component. We introduce the canonical state-space representation of the VAR models to facilitate this type of analysis. We illustrate the eigenvalue filtering by examining a stylized model of inflation determination estimated on Czech data. We characterize the estimated components of CPI, WPI and import inflation, together with the real production wage and real output, survey their basic properties, and impose an identification scheme to calculate the structural innovations. We test the results in a simple bootstrap simulation experiment. We find two major areas for further research: first, verifying and improving the robustness of the method, and, second, exploring the method's potential for empirical validation of structural economic models. 
Keywords:  Business cycle; inflation; eigenvalues; filtering; Beveridge-Nelson decomposition; time series analysis. 
JEL:  C32 E32 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20050549&r=ecm 
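The core idea of sorting subcomponents by eigenvalue properties can be sketched schematically for a small VAR(1) in state-space form: rotate into the eigenbasis of the transition matrix, keep only the coordinates whose roots lie in a chosen modulus band, and rotate back. This is a stylized illustration, not the authors' filter; the transition matrix is constructed with known roots (a persistent complex pair plus one fast real root), and the modulus cutoff is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(9)

# Build a stable 3-variable VAR(1) transition matrix with known roots:
# a complex pair of modulus 0.9 (a "cycle") and a real root 0.2 ("irregular").
theta = 0.5
D = np.zeros((3, 3))
D[:2, :2] = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])
D[2, 2] = 0.2
T = rng.standard_normal((3, 3))
A = T @ D @ np.linalg.inv(T)
lam, V = np.linalg.eig(A)

# Simulate y_t = A y_{t-1} + e_t.
n = 300
y = np.zeros((n, 3))
for t in range(1, n):
    y[t] = A @ y[t - 1] + 0.1 * rng.standard_normal(3)

# Eigenvalue filtering (schematic): keep the subcomponent driven by
# roots with modulus above a cutoff, i.e. the persistent cycle part.
z = np.linalg.solve(V, y.T)                    # eigen-coordinates
keep = np.abs(lam) > 0.5                       # select persistent roots
component = np.real(V[:, keep] @ z[keep]).T    # filtered subcomponent
```

The discarded coordinates form the complementary (fast-decaying) subcomponent, and the two pieces add back up to the original series, which is the sense in which the eigenvalues define a decomposition.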
By:  Magdalena E. Sokalska; Ananda Chanda (Finance, New York University); Robert F. Engle 
Abstract:  This paper proposes a new way of modeling and forecasting intraday returns. We decompose the volatility of high frequency asset returns into components that may be easily interpreted and estimated. The conditional variance is expressed as a product of daily, diurnal and stochastic intraday volatility components. This model is applied to a comprehensive sample consisting of 10-minute returns on more than 2500 US equities. We apply a number of different specifications. Apart from building a new model, we obtain several interesting forecasting results. In particular, it turns out that forecasts obtained from the pooled cross section of groups of companies seem to outperform the corresponding forecasts from company-by-company estimation. 
Keywords:  ARCH, Intraday Returns, Volatility 
JEL:  C22 G15 C53 
Date:  2005–11–11 
URL:  http://d.repec.org/n?u=RePEc:sce:scecf5:409&r=ecm 
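The multiplicative decomposition can be sketched on simulated data: strip out a daily volatility level, then an average diurnal pattern, and what remains is the stochastic intraday component. This uses simple realized-volatility averages in place of the paper's model-based estimation (the paper fits GARCH-type dynamics to the residual component), and all magnitudes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulate 10-minute returns for 60 days x 39 bins with a daily
# volatility level and a U-shaped diurnal pattern (high at open/close).
days, bins = 60, 39
daily_vol = 0.01 * np.exp(0.3 * rng.standard_normal(days))        # day level
diurnal = 1.0 + 0.8 * np.cos(np.linspace(0, 2 * np.pi, bins))     # U-shape
r = (daily_vol[:, None] * diurnal[None, :]
     * rng.standard_normal((days, bins)))

# Component decomposition in the spirit of the paper:
# 1) daily component: realized volatility per day,
hat_daily = np.sqrt((r ** 2).mean(axis=1))
# 2) diurnal component: average squared return per intraday bin after
#    removing the daily level,
hat_diurnal = np.sqrt(((r / hat_daily[:, None]) ** 2).mean(axis=0))
# 3) stochastic intraday residual: what is left after both components.
resid = r / (hat_daily[:, None] * hat_diurnal[None, :])
```

By construction the residual has unit average squared value, so any remaining dynamics in it (the stochastic intraday component) can be modeled separately, and pooling that last step across groups of companies is where the paper finds its forecasting gains.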