
on Econometrics 
By:  Liesenfeld, Roman; Richard, Jean-Francois 
Abstract:  In this paper we discuss parameter identification and likelihood evaluation for multinomial multi-period probit models. It is shown in particular that the standard autoregressive specification used in the literature can be interpreted as a latent common factor model. However, this specification is not invariant with respect to the selection of the baseline category. Hence, we propose an alternative specification which is invariant with respect to such a selection and identifies coefficients characterizing the stationary covariance matrix which are not identified in the standard approach. For likelihood evaluation requiring high-dimensional truncated integration we propose to use a generic procedure known as Efficient Importance Sampling (EIS). A special case of our proposed EIS algorithm is the standard GHK probability simulator. To illustrate the relative performance of both procedures we perform a set of Monte Carlo experiments. Our results indicate substantial numerical efficiency gains of the ML estimates based on GHK-EIS relative to ML estimates obtained by using GHK. 
Keywords:  Discrete choice, Importance sampling, Monte Carlo integration, Panel data, Parameter identification, Simulated maximum likelihood 
JEL:  C15 C35 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:zbw:cauewp:6340&r=ecm 
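The GHK probability simulator that the abstract identifies as a special case of EIS can be illustrated in a few lines. The sketch below is a minimal, generic Python version for the orthant probability P(X < b) with X ~ N(0, Sigma); the function name and the inverse-CDF device for truncated draws are illustrative choices, not code from the paper.

```python
import numpy as np
from statistics import NormalDist

_nd = NormalDist()
cdf = np.vectorize(_nd.cdf)
inv_cdf = np.vectorize(_nd.inv_cdf)

def ghk_probability(b, Sigma, n_draws=5000, seed=0):
    """GHK importance-sampling estimate of P(X < b) for X ~ N(0, Sigma)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)              # lower-triangular factor, X = L @ eta
    d = len(b)
    eta = np.zeros((n_draws, d))
    weights = np.ones(n_draws)
    for j in range(d):
        # conditional upper bound for eta_j given the earlier truncated draws
        u = (b[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        pu = cdf(u)
        weights *= pu                          # sequential importance weight
        # draw eta_j from a standard normal truncated to (-inf, u)
        q = np.clip(rng.uniform(size=n_draws) * pu, 1e-12, 1 - 1e-12)
        eta[:, j] = inv_cdf(q)
    return weights.mean()
```

With independent components the weights are constant and the estimator is exact; correlation makes the weights random, and EIS can be read as choosing a better importance density to shrink that variance.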
By:  Andrea Carriero (Queen Mary, University of London); George Kapetanios (Queen Mary, University of London); Massimiliano Marcellino (IEP-Bocconi University, IGIER and CEPR) 
Abstract:  The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance with the most promising existing alternatives, namely, factor models, large-scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination rather than separately substantially improves the accuracy of forecasts, both when the whole set of variables is to be forecast and for key variables such as industrial production growth, inflation, and the federal funds rate. 
Keywords:  Bayesian VARs, Factor models, Forecasting, Reduced rank 
JEL:  C11 C13 C33 C53 
Date:  2007–10 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp617&r=ecm 
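Classical reduced rank regression, the first of the three approaches the abstract compares, admits a compact sketch: fit OLS, then project the fitted values onto their leading singular directions. The version below uses identity weighting; names and the exact normalization are illustrative assumptions.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-r restricted multivariate regression Y = X B + E.
    Classical RRR with identity weighting: project the OLS coefficient
    matrix onto the top-r right singular vectors of the fitted values."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    fitted = X @ B_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    V_r = Vt[:rank].T                      # leading r directions in Y-space
    return B_ols @ V_r @ V_r.T             # rank-r coefficient matrix
```

The two-step procedure in the paper would shrink B_ols first and then impose the rank restriction; the projection step is the same.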
By:  Alfredo A. Romero (Department of Economics, College of William and Mary) 
Abstract:  The use of R-squared in model selection is a common practice in econometrics. The rationale is that the statistic produces a consistent estimator of the true coefficient of determination for the underlying data while taking into consideration the number of variables involved in the model. This pursuit of parsimony comes with a cost: the researcher has no control over the error probabilities of the statistic. Alternative measures of goodness of fit, such as the Schwarz Information Criterion, provide only a marginal improvement. The F-test under the Neyman-Pearson testing framework provides the best alternative criterion for model selection. 
Keywords:  Adjusted R-squared, Schwarz Information Criterion (BIC), Neyman-Pearson Testing, Nonsense Correlations 
JEL:  C12 C52 
Date:  2007–10–21 
URL:  http://d.repec.org/n?u=RePEc:cwm:wpaper:62&r=ecm 
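The two goodness-of-fit measures the abstract weighs against the F-test are easy to compute side by side. A minimal sketch (textbook formulas, not code from the paper; X is assumed to include the intercept column):

```python
import numpy as np

def adjusted_r2_and_bic(y, X):
    """Adjusted R-squared and the Schwarz criterion (BIC) for an OLS fit.
    k counts all regressors, including the intercept."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    tss = ((y - y.mean()) ** 2).sum()
    adj_r2 = 1.0 - (rss / (n - k)) / (tss / (n - 1))   # penalizes extra regressors
    bic = n * np.log(rss / n) + k * np.log(n)          # Schwarz criterion
    return adj_r2, bic
```

The paper's point is that neither statistic comes with controlled error probabilities; ranking models by either one is not a hypothesis test in the Neyman-Pearson sense.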
By:  Eklund, Jana (Department of Business, Economics, Statistics and Informatics); Karlsson, Sune (Department of Business, Economics, Statistics and Informatics) 
Abstract:  Large scale Bayesian model averaging and variable selection exercises present, despite the great increase in desktop computing power, considerable computational challenges. Due to the large scale it is impossible to evaluate all possible models, and estimates of posterior probabilities are instead obtained from stochastic (MCMC) schemes designed to converge on the posterior distribution over the model space. While this frees us from the requirement of evaluating all possible models, the computational effort is still substantial and efficient implementation is vital. Efficient implementation is concerned with two issues: the efficiency of the MCMC algorithm itself and efficient computation of the quantities needed to obtain a draw from the MCMC algorithm. We evaluate several different MCMC algorithms and find that relatively simple algorithms with local moves perform competitively, except possibly when the data are highly collinear. For the second aspect, efficient computation within the sampler, we focus on the important case of linear models, where the computations essentially reduce to least squares calculations. Least squares solvers that update a previous model estimate are appealing when the MCMC algorithm makes local moves, and we find that the Cholesky update is both fast and accurate. 
Keywords:  Bayesian Model Averaging; Sweep operator; Cholesky decomposition; QR decomposition; SwendsenWang algorithm 
JEL:  C11 C15 C52 C63 
Date:  2007–09–10 
URL:  http://d.repec.org/n?u=RePEc:hhs:oruesi:2007_004&r=ecm 
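The Cholesky update the authors find fast and accurate can be illustrated for the "add one variable" local move: when column z enters the model, the upper factor R of X'X grows by one row and column in O(p^2) operations instead of an O(p^3) refactorization. A sketch in generic notation (not the authors' code):

```python
import numpy as np

def chol_add_column(R, X, z):
    """Update the upper Cholesky factor R (R'R = X'X) when column z joins
    the design. Solving one triangular system replaces a full refactorization,
    which is what makes local-move MCMC over models cheap."""
    w = np.linalg.solve(R.T, X.T @ z)      # forward substitution: R'w = X'z
    d = np.sqrt(z @ z - w @ w)             # new diagonal element
    p = R.shape[0]
    R_new = np.zeros((p + 1, p + 1))
    R_new[:p, :p] = R
    R_new[:p, p] = w
    R_new[p, p] = d
    return R_new
```

Dropping a variable is the mirror image (a sequence of Givens rotations); both moves keep each MCMC step's least squares cost quadratic rather than cubic in the model size.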
By:  Knüppel, Malte; Tödter, Karl-Heinz 
Abstract:  This paper discusses methods to quantify risk and uncertainty in macroeconomic forecasts. Both parametric and nonparametric procedures are developed. The former are based on a class of asymmetrically weighted normal distributions, whereas the latter employ asymmetric bootstrap simulations. The two procedures are closely related. The bootstrap is applied to the structural macroeconometric model of the Bundesbank for Germany. Forecast intervals that integrate judgement on risk and uncertainty are obtained. 
Keywords:  Macroeconomic forecasts, stochastic forecast intervals, risk, uncertainty, asymmetrically weighted normal distribution, asymmetric bootstrap 
JEL:  C14 C53 E37 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:6341&r=ecm 
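The bootstrap side of the paper can be sketched on a toy model: resample estimated residuals along simulated forecast paths and read the interval off the simulated quantiles. The AR(1) below is purely illustrative (the paper applies the idea to the Bundesbank's structural model, and adds asymmetric weighting to encode judgement).

```python
import numpy as np

def ar1_bootstrap_interval(y, horizon, n_boot=2000, level=0.9, seed=0):
    """Residual-bootstrap forecast interval for an AR(1), fitted by OLS.
    A nonparametric analogue of the paper's simulation approach."""
    rng = np.random.default_rng(seed)
    Y, X = y[1:], np.column_stack([np.ones(len(y) - 1), y[:-1]])
    (c, phi), *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ np.array([c, phi])
    paths = np.empty(n_boot)
    for b in range(n_boot):
        x = y[-1]
        for _ in range(horizon):           # resample residuals along the path
            x = c + phi * x + rng.choice(resid)
        paths[b] = x
    lo, hi = np.quantile(paths, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

An asymmetric variant would reweight the resampled residuals, tilting the simulated distribution toward the judged balance of risks.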
By:  DeJong, David Neil; Dharmarajan, Hariharan; Liesenfeld, Roman; Richard, Jean-Francois 
Abstract:  We develop a numerical filtering procedure that facilitates efficient likelihood evaluation in applications involving nonlinear and non-Gaussian state-space models. The procedure approximates necessary integrals using continuous or piecewise-continuous approximations of target densities. Construction is achieved via efficient importance sampling, and approximating densities are adapted to fully incorporate current information. 
Keywords:  particle filter, adaptation, efficient importance sampling, kernel density approximation 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:zbw:cauewp:6339&r=ecm 
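The natural baseline for such a filter is the plain bootstrap particle filter, which propagates particles blindly through the transition density and weights them by the observation density; the paper's contribution is to replace that blind proposal with an EIS-adapted one. A minimal bootstrap version for an illustrative linear Gaussian model (model and names are assumptions for the sketch):

```python
import numpy as np

def bootstrap_pf_loglik(y, phi, sigma_x, sigma_y, n_part=5000, seed=0):
    """Bootstrap particle filter log-likelihood for the toy state-space model
    x_t = phi * x_{t-1} + sigma_x * e_t,   y_t = x_t + sigma_y * u_t."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x / np.sqrt(1 - phi**2), n_part)  # stationary init
    loglik = 0.0
    for yt in y:
        x = phi * x + sigma_x * rng.normal(size=n_part)  # propagate (prior proposal)
        logw = -0.5 * ((yt - x) / sigma_y) ** 2 - np.log(sigma_y * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                   # incremental likelihood
        x = rng.choice(x, size=n_part, p=w / w.sum())    # multinomial resample
    return loglik
```

Because the prior proposal ignores the current observation, the weights degenerate when observations are informative; adapting the proposal to current information, as the abstract describes, is precisely what keeps the approximation efficient.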
By:  John M. Maheu (University of Toronto, Canada and The Rimini Centre for Economic Analysis, Rimini, Italy.); Thomas H. McCurdy (University of Toronto, Canada) 
Abstract:  We provide an approach to forecasting the long-run (unconditional) distribution of equity returns making optimal use of historical data in the presence of structural breaks. Our focus is on learning about breaks in real time and assessing their impact on out-of-sample density forecasts. Forecasts use a probability-weighted average of submodels, each of which is estimated over a different history of data. The paper illustrates the importance of uncertainty about structural breaks and the value of modeling higher-order moments of excess returns when forecasting the return distribution and its moments. The shape of the long-run distribution and the dynamics of the higher-order moments are quite different from those generated by forecasts which cannot capture structural breaks. The empirical results strongly reject ignoring structural change in favor of our forecasts which weight historical data to accommodate uncertainty about structural breaks. We also strongly reject the common practice of using a fixed-length moving window. These differences in long-run forecasts have implications for many financial decisions, particularly for risk management and long-run investment decisions. 
Keywords:  density forecasts, structural change, model risk, parameter uncertainty, Bayesian learning, market returns 
JEL:  F22 J24 J61 
Date:  2007–07 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:1907&r=ecm 
By:  Richard H. Spady 
Abstract:  We model attitudes as latent variables that induce stochastic dominance relations in (item) responses. Observable characteristics that affect attitudes can be incorporated into the analysis to improve the measurement of the attitudes; the measurements are posterior distributions that condition on the responses and characteristics of each respondent. Methods to use these measurements to characterize the relation between attitudes and behaviour are developed and implemented. 
Keywords:  Latent variables 
JEL:  C01 C14 C25 C35 C51 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2007/29&r=ecm 
By:  Joshua Gallin (Board of Governors of the Federal Reserve System); Randal Verbrugge (U.S. Bureau of Labor Statistics) 
Abstract:  As a rental unit ages, its quality typically falls; a failure to correct for this would result in downward bias in the CPI. We investigate the BLS age-bias imputation and explore two potential categories of error: approximations related to the construction of the age-bias factor, and model misspecification. We find that, as long as one stays within the context of the current official regression specification, the approximation errors are innocuous. On the other hand, we find that the official regression specification (which is more or less of the form commonly used in the hedonic rent literature) is severely deficient in its ability to match the conditional log-rent vs. age relationship in the data, and performs poorly in out-of-sample tests. It is straightforward to improve the specification in order to address these deficiencies. However, basing estimates upon a single regression model is risky. Age-bias adjustment inherently suffers from a general problem facing some types of hedonic-based adjustments, which is related to model uncertainty. In particular, age-bias adjustment relies upon specific coefficient estimates, but there is no guarantee that the true marginal influence of a regressor is being estimated in any given model, since one cannot guarantee that the Gauss-Markov conditions hold. To address this problem, we advocate the use of model averaging, a method that minimizes downside risks related to model misspecification and generates more reliable coefficient estimates. Thus, after selecting several appropriate models, we estimate age-bias factors by taking a trimmed average over the factors derived from each model. We argue that similar methods may be readily implemented by statistical agencies (even very small ones) with little additional effort. We find that, in 2004 data, BLS age-bias factors were too small, on average, by nearly 40%. 
Since the age-bias term itself is rather small, the implied downward bias of the aggregate indexes is modest. On the other hand, errors in particular metropolitan areas were much larger, with annual downward bias as large as 0.6%. 
Keywords:  Depreciation, Hedonics, Model Averaging, Inflation, CPI Bias 
JEL:  E31 C81 C82 R31 R21 O47 
Date:  2007–10 
URL:  http://d.repec.org/n?u=RePEc:bls:wpaper:ec070100&r=ecm 
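The trimmed average over model-specific age-bias factors is the kind of one-liner a small statistical agency could adopt directly. A generic symmetric trim is sketched below; the trim fraction and the exact trimming rule used in the paper are not specified in the abstract, so these are illustrative.

```python
import numpy as np

def trimmed_average(estimates, trim=0.4):
    """Symmetric trimmed mean across model-specific estimates: drop the
    most extreme trim/2 fraction from each tail, then average. Guards
    against any single misspecified model dominating the adjustment."""
    x = np.sort(np.asarray(estimates, dtype=float))
    k = int(np.floor(trim * len(x) / 2))   # observations dropped per tail
    return x[k:len(x) - k].mean()
```

With five candidate models and a 40% trim, the single outlying factor on each side is discarded before averaging.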
By:  Brad Baxter (School of Economics, Mathematics & Statistics, Birkbeck); Liam Graham; Stephen Wright (School of Economics, Mathematics & Statistics, Birkbeck) 
Abstract:  We relax the assumption of full information that underlies most dynamic general equilibrium models, and instead assume agents optimally form estimates of the states from an incomplete information set. We derive a version of the Kalman filter that is endogenous to agents' optimising decisions, and state conditions for its convergence. We show the (restrictive) conditions under which the endogenous Kalman filter will at least asymptotically reveal the true states. In general we show that incomplete information can have significant implications for the time-series properties of economies. We provide a Matlab toolkit which allows the easy implementation of models with incomplete information. 
Keywords:  Dynamic general equilibrium, Kalman filter, imperfect information, signal extraction 
JEL:  E27 E37 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:bbk:bbkefp:0719&r=ecm 
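For reference, the standard (exogenous) Kalman filter that the paper endogenises can be sketched compactly; in the paper's setting the system matrices would themselves depend on agents' optimising decisions. The notation below is the generic textbook one, not the paper's.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Linear Kalman filter for x_t = A x_{t-1} + w_t, y_t = C x_t + v_t,
    with w ~ N(0, Q), v ~ N(0, R). Returns the filtered state means."""
    x, P, out = x0, P0, []
    for yt in np.atleast_1d(y):
        # predict
        x, P = A @ x, A @ P @ A.T + Q
        # update with the innovation yt - C x
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (np.atleast_1d(yt) - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)
```

When observation noise vanishes the filtered state tracks the data exactly; the paper's question is when the agents' endogenous filter achieves the analogous revelation of the true states, even asymptotically.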
By:  Ravi Kanbur; Stuart Sayer; Andy Snell 
Abstract:  Standard measures of inequality have long been criticized on the grounds that they are snapshot measures which do not take into account the process generating the observed distribution. Rather than focusing on outcomes, it is argued, we should be interested in whether the underlying process is “fair”. Following this line of argument, this paper develops statistical tests for fairness within a well defined income distribution generating process and a well specified notion of “fairness”. We find that standard test procedures, such as LR, LM and Wald, lead to test statistics which are closely related to standard measures of inequality. The answer to the “process versus outcomes” critique is thus not to stop calculating inequality measures, but to interpret their values differently: to compare them to critical values for a test of the null hypothesis of fairness. 
Date:  2007–10–26 
URL:  http://d.repec.org/n?u=RePEc:edn:esedps:174&r=ecm 
By:  Patrick Fève; Alain Guay 
Abstract:  The usefulness of SVARs for developing empirically plausible models is currently the subject of much controversy in quantitative macroeconomics. In this paper, we propose a simple alternative two-step SVAR-based procedure which consistently identifies and estimates the effect of permanent technology shocks on aggregate variables. Simulation experiments from a standard business cycle model show that our approach outperforms standard SVARs. The two-step procedure, when applied to actual data, predicts a significant short-run decrease of hours after a technology improvement, followed by a delayed and hump-shaped positive response. Additionally, the rate of inflation and the nominal interest rate display a significant decrease after a positive technology shock. 
Keywords:  SVARs, long-run restriction, technology shocks, consumption to output ratio, hours worked 
JEL:  C32 E32 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:0736&r=ecm 
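The long-run restriction underlying both the standard SVAR benchmark and the proposed two-step procedure is the usual Blanchard-Quah one: only the technology shock moves the first variable permanently. Given estimated VAR lag matrices and the residual covariance, the identified impact matrix can be computed as below (a generic sketch in my own notation, not the authors' procedure).

```python
import numpy as np

def long_run_impact(A_lags, Sigma_u):
    """Long-run identification: find the impact matrix B with B B' = Sigma_u
    such that the long-run response matrix C(1) B is lower triangular, so
    only the first (technology) shock has a permanent effect on variable 1."""
    C1 = np.linalg.inv(np.eye(len(Sigma_u)) - sum(A_lags))  # long-run multiplier
    LR = np.linalg.cholesky(C1 @ Sigma_u @ C1.T)            # lower-tri long-run responses
    return np.linalg.solve(C1, LR)                          # contemporaneous impact B
```

The two-step procedure in the paper changes how the reduced-form objects feeding this computation are estimated, not the identifying restriction itself.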