
on Econometrics 
By:  De Mol, Christine; Giannone, Domenico; Reichlin, Lucrezia 
Abstract:  This paper considers Bayesian regression with normal and double exponential priors as forecasting methods based on large panels of time series. We show that, empirically, these forecasts are highly correlated with principal component forecasts and that they perform equally well for a wide range of prior choices. Moreover, we study the asymptotic properties of the Bayesian regression under a Gaussian prior, under the assumption that the data are quasi-collinear, to establish a criterion for setting the parameters in a large cross-section. 
Keywords:  Bayesian VAR; large cross-sections; Lasso regression; principal components; ridge regressions 
JEL:  C11 C13 C33 C53 
Date:  2006–09 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5829&r=ecm 
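Under a Gaussian prior, the Bayesian regression forecast described above reduces to ridge regression, with the prior tightness acting as the shrinkage parameter. A minimal sketch (function name, toy data, and the choice of penalty are illustrative, not from the paper):

```python
import numpy as np

def ridge_forecast(X, y, x_new, nu):
    """Posterior-mean forecast under a Gaussian prior: the coefficients
    solve (X'X + nu*I) b = X'y, i.e. ridge regression with penalty nu."""
    k = X.shape[1]
    b = np.linalg.solve(X.T @ X + nu * np.eye(k), X.T @ y)
    return x_new @ b

# Toy large cross-section: 50 predictors, 5 of them informative.
rng = np.random.default_rng(0)
n, k = 100, 50
X = rng.standard_normal((n, k))
beta = np.zeros(k); beta[:5] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)
x_new = rng.standard_normal(k)
yhat = ridge_forecast(X, y, x_new, nu=1.0)
```

Increasing `nu` shrinks the forecast toward zero; the paper's criterion concerns how the prior parameter should be set as the cross-sectional dimension grows.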
By:  Oliver Linton; Enno Mammen 
Abstract:  We consider a semiparametric distributed lag model in which the "news impact curve" m is nonparametric but the response is dynamic through some linear filters. A special case of this is a nonparametric regression with serially correlated errors. We propose an estimator of the news impact curve based on a dynamic transformation that produces white noise errors. This yields an estimating equation for m that is a type two linear integral equation. We investigate both the stationary case and the case where the error has a unit root. In the stationary case we establish pointwise asymptotic normality. In the special case of a nonparametric regression subject to time series errors, our estimator achieves efficiency improvements over the usual estimators; see Xiao, Linton, Carroll, and Mammen (2003). In the unit root case our procedure is consistent and asymptotically normal, unlike the standard regression smoother. We also present the distribution theory for the parameter estimates, which is nonstandard in the unit root case, and investigate the finite sample performance through simulation experiments. 
Keywords:  Efficiency, Inverse Problem, Kernel Estimation, Nonparametric regression, Time Series, Unit Roots. 
JEL:  C14 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/503&r=ecm 
By:  Ke-Li Xu (Dept. of Economics, Yale University); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  Stable autoregressive models of known finite order are considered with martingale difference errors scaled by an unknown nonparametric time-varying function generating heterogeneity. An important special case involves structural change in the error variance, but in most practical cases the pattern of variance change over time is unknown and may involve shifts at unknown discrete points in time, continuous evolution, or combinations of the two. This paper develops kernel-based estimators of the residual variances and associated adaptive least squares (ALS) estimators of the autoregressive coefficients. These are shown to be asymptotically efficient, having the same limit distribution as the infeasible generalized least squares (GLS) estimator. Comparisons of the efficient procedure with ordinary least squares (OLS) reveal that least squares can be extremely inefficient in some cases while nearly optimal in others. Simulations show that, when least squares works well, the adaptive estimators perform comparably, whereas when least squares works poorly, major efficiency gains are achieved by the new estimators. 
Keywords:  Adaptive estimation, Autoregression, Heterogeneity, Weighted regression 
JEL:  C14 C22 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1585&r=ecm 
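The two-step idea (kernel-smooth the squared OLS residuals over rescaled time, then reweight) can be sketched for an AR(1) as follows; the function name, Gaussian kernel, and bandwidth are illustrative choices, not the paper's:

```python
import numpy as np

def als_ar1(y, bandwidth=0.1):
    """Adaptive least squares sketch for an AR(1) with time-varying error
    variance: OLS first, then a Nadaraya-Watson smoother of the squared
    residuals over scaled time, then weighted least squares."""
    x, z = y[:-1], y[1:]
    n = len(x)
    rho_ols = (x @ z) / (x @ x)
    u2 = (z - rho_ols * x) ** 2
    t = np.arange(n) / n
    # Gaussian-kernel smoother of squared residuals -> fitted variance path
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    sig2 = (K @ u2) / K.sum(axis=1)
    w = 1.0 / sig2
    rho_als = (w * x * z).sum() / (w * x * x).sum()
    return rho_ols, rho_als
```

When the error variance shifts mid-sample, the reweighted estimate downweights the high-variance observations, which is the source of the efficiency gains the abstract describes.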
By:  Doz, Catherine; Giannone, Domenico; Reichlin, Lucrezia 
Abstract:  This paper considers quasi-maximum likelihood estimation of a dynamic approximate factor model when the panel of time series is large. Maximum likelihood is analyzed under different sources of misspecification: omitted serial correlation of the observations and cross-sectional correlation of the idiosyncratic components. It is shown that the effects of misspecification on the estimation of the common factors are negligible for large sample size (T) and cross-sectional dimension (n). The estimator is feasible when n is large and easily implementable using the Kalman smoother and the EM algorithm, as in traditional factor analysis. Simulation results illustrate the empirical conditions under which we can expect improvement with respect to the simple principal components estimators considered by Bai (2003), Bai and Ng (2002), Forni, Hallin, Lippi, and Reichlin (2000, 2005b), and Stock and Watson (2002a,b). 
Keywords:  factor model; large cross-sections; Quasi Maximum Likelihood 
JEL:  C32 C33 C51 
Date:  2006–06 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5724&r=ecm 
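The principal components benchmark against which the quasi-ML estimator is compared can be sketched with the standard construction below (illustrative names; the paper's EM/Kalman-smoother refinement is not shown):

```python
import numpy as np

def pc_factors(X, r):
    """Estimate r common factors by principal components: sqrt(T) times
    the leading r eigenvectors of the T x T matrix XX'/T."""
    T = X.shape[0]
    Xc = X - X.mean(axis=0)
    _, vecs = np.linalg.eigh(Xc @ Xc.T / T)  # eigenvalues in ascending order
    return np.sqrt(T) * vecs[:, ::-1][:, :r]
```

In a large panel with a strong common component, the first estimated factor tracks the true factor very closely, which is the starting point the likelihood-based methods then improve upon.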
By:  Zhongjun Qu (Department of Economics, University of Illinois at UrbanaChampaign); Pierre Perron (Department of Economics, Boston University) 
Abstract:  We consider Johansen’s (1988, 1991) cointegration tests when a Vector AutoRegressive (VAR) process of order k is used to approximate a more general linear process with an infinite VAR representation. In this case, and in particular when a moving average component is present, traditional methods to select the lag order, such as Akaike’s information criterion (AIC) or the Bayesian information criterion (BIC), lead to too parsimonious a model, with the implication that the cointegration tests suffer from substantial size distortions in finite samples. We extend the analysis of Ng and Perron (2001) to derive a Modified Akaike’s Information Criterion (MAIC) in this multivariate setting. The idea is to use the information specified by the null hypothesis, as it relates to restrictions on the parameters of the model, to keep an extra term in the penalty function of the AIC. This MAIC takes a very simple form in which the extra term is simply the likelihood ratio test for testing the null hypothesis of r against more than r cointegrating vectors. We provide theoretical analyses of its validity and of the fact that cointegration tests constructed from a VAR whose lag order is selected using the MAIC have the same limit distribution as when the order is finite and known. We also provide theoretical and simulation analyses to show how the MAIC leads to VAR approximations that yield tests with drastically improved size properties with little loss of power. 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2006011&r=ecm 
By:  Isabel Molina 
Abstract:  Assuming a multivariate linear regression model with one random factor, we consider the parameters defined as exponentials of mixed effects, i.e., linear combinations of fixed and random effects. Such parameters are of particular interest in prediction problems where the dependent variable is the logarithm of the variable that is the object of inference. We derive biascorrected empirical predictors of such parameters. A second order approximation for the mean crossed product error of the predictors of two of these parameters is obtained, and an estimator is derived from it. The mean squared error is obtained as a particular case. 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws066117&r=ecm 
By:  Ai Deng (Department of Economics, Boston University); Pierre Perron (Department of Economics, Boston University) 
Abstract:  We consider the CUSUM of squares test in a linear regression model with general mixing assumptions on the regressors and the errors. We derive its limit distribution and show how it depends on the nature of the error process. We suggest a corrected version that has a limit distribution free of nuisance parameters. We also discuss how it provides an improvement over the standard approach to testing for a change in the variance of a univariate time series, and present simulation evidence to support this. We illustrate the usefulness of our method by analyzing changes in the variance of stock returns and a variety of macroeconomic time series, as well as by testing for a change in the variance of the residuals in a typical four-variable VAR model. Our results show the widespread prevalence of changes in the variance of such series and indicate that the variability of shocks affecting the U.S. economy has decreased. 
Keywords:  Change-point, Variance shift, Recursive residuals, Dynamic models, Conditional heteroskedasticity. 
JEL:  D80 D91 G11 E21 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2005046&r=ecm 
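The basic statistic, before the correction for serial dependence that the paper proposes, compares the cumulated squared residuals with the 45-degree line. An unscaled sketch with illustrative names:

```python
import numpy as np

def cusum_of_squares(e):
    """CUSUM-of-squares statistic: the maximum deviation of the cumulated
    share of squared residuals from the 45-degree line k/n."""
    e2 = np.asarray(e, dtype=float) ** 2
    s = np.cumsum(e2) / e2.sum()
    n = len(e2)
    return np.max(np.abs(s - np.arange(1, n + 1) / n))
```

A variance shift bends the cumulated-squares path away from the diagonal, inflating the statistic; the paper's corrected version rescales this quantity by an estimate of the long-run variance of the squared errors so that the limit distribution is free of nuisance parameters.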
By:  Chang Sik Kim (Dept. of Economics, Ewha Womans University); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  Estimation of the memory parameter (d) is considered for models of nonstationary fractionally integrated time series with d > 1/2. It is shown that the log periodogram regression estimator of d is inconsistent when 1 < d < 2 and is consistent when 1/2 < d ≤ 1. For d > 1, the estimator is shown to converge in probability to unity. 
Keywords:  Discrete Fourier transform, Fractional Brownian motion, Fractional integration, Inconsistency, Log periodogram regression, Long memory parameter, Nonstationarity, Semiparametric estimation 
JEL:  C22 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1587&r=ecm 
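The log periodogram regression estimator analysed above regresses the log periodogram on minus twice the log frequency over the first m Fourier frequencies. A minimal numpy sketch (names and the choice of m are illustrative):

```python
import numpy as np

def log_periodogram_d(x, m):
    """Log periodogram (GPH-type) estimate of the memory parameter d:
    regress log I(lambda_j) on -2*log(lambda_j) for j = 1..m."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x)[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)  # periodogram ordinates
    X = -2 * np.log(lam)
    Xc = X - X.mean()                        # centring absorbs the intercept
    return (Xc @ np.log(I)) / (Xc @ Xc)
```

Consistent with the abstract, the estimator should return values near 0 for white noise (d = 0) and near 1 for a random walk (d = 1).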
By:  Ai Deng (Department of Economics, Boston University) 
Abstract:  This paper provides an asymptotic theory for the spurious regression analyzed by Ferson, Sarkissian and Simin (2003). The asymptotic framework developed by Nabeya and Perron (1994) is used to provide approximations for the various estimates and statistics. Also, using a fixed-bandwidth asymptotic framework, a convergent t test is constructed, following Sun (2005). These approximations are shown to be accurate and to explain the simulation findings in Ferson et al. (2003). Monte Carlo studies show that our asymptotic distribution provides a very good finite sample approximation for sample sizes often encountered in finance. Our analysis also reveals an important potential problem in the theoretical hypothesis testing literature on predictability. A possible reconciling interpretation is provided. 
Keywords:  spurious regression, observational equivalence, Nabeya-Perron asymptotics, fixed-b asymptotics, data mining, nearly integrated, nearly white noise (NINW) 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2005048&r=ecm 
By:  Peter M Robinson 
Abstract:  Employing recent results of Robinson (2005), we consider the asymptotic properties of conditional-sum-of-squares (CSS) estimates of parametric models for stationary time series with long memory. CSS estimation has been considered as a rival to Gaussian maximum likelihood and Whittle estimation of time series models. The latter kinds of estimate have been rigorously shown to be asymptotically normally distributed in case of long memory. However, CSS estimates, which should have the same asymptotic distributional properties under similar conditions, have not received comparable treatment: the truncation of the infinite autoregressive representation inherent in CSS estimation has been essentially ignored in proofs of asymptotic normality. Unlike in short memory models, it is not straightforward to show the truncation has negligible effect. 
Keywords:  Long memory, conditional-sum-of-squares estimation, central limit theorem, almost sure convergence. 
Date:  2006–09 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/505&r=ecm 
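For the simplest long-memory case, ARFIMA(0,d,0), CSS estimation minimizes the sum of squared residuals from the truncated infinite autoregressive representation of (1-L)^d, which is exactly the truncation whose effect the paper analyses. A grid-search sketch with illustrative names, restricted to the stationary range of d:

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1-L)^d via its AR(inf) weights pi_0 = 1,
    pi_j = pi_{j-1} * (j - 1 - d) / j, truncated at the sample start."""
    n = len(x)
    pi = np.empty(n); pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    # residual_t = sum_{j<=t} pi_j * x_{t-j}
    return np.array([pi[:t + 1][::-1] @ x[:t + 1] for t in range(n)])

def css_d(x, grid=np.linspace(0.0, 0.49, 50)):
    """Conditional-sum-of-squares estimate of d by grid search."""
    sse = [np.sum(frac_diff(x, d) ** 2) for d in grid]
    return grid[int(np.argmin(sse))]
```

The truncation at the sample start is the step whose asymptotic effect the paper shows is nontrivial to dismiss under long memory.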
By:  George Athanasopoulos; Rob J. Hyndman 
Abstract:  In this paper, we model and forecast Australian domestic tourism demand. We use a regression framework to estimate important economic relationships for domestic tourism demand, and we identify the impact of world events such as the 2000 Sydney Olympics and the 2002 Bali bombings on Australian domestic tourism. To explore the time series nature of the data, we use innovation state space models to forecast domestic tourism demand. Combining these two frameworks, we build innovation state space models with exogenous variables. These models are able to capture the time series dynamics in the data, as well as economic and other relationships. We show that these models outperform alternative approaches for short-term forecasting and also produce sensible long-term forecasts. The forecasts are compared with the official Australian government forecasts, which turn out to be more optimistic than ours. 
Keywords:  Australia, domestic tourism, exponential smoothing, forecasting, innovation state space models. 
JEL:  C13 C22 C53 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200619&r=ecm 
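The simplest member of the innovation state space family used here is simple exponential smoothing, ETS(A,N,N); the models in the paper add trend, seasonal, and exogenous-regressor components to the same kind of recursion. A sketch (function name is illustrative):

```python
def ses_forecast(y, alpha):
    """Simple exponential smoothing in innovations state-space form:
    e_t = y_t - level_{t-1};  level_t = level_{t-1} + alpha * e_t.
    Every h-step-ahead forecast equals the final level."""
    level = y[0]
    for obs in y[1:]:
        level += alpha * (obs - level)
    return level
```

For example, `ses_forecast([0.0, 10.0], 0.5)` returns 5.0: the level starts at the first observation and moves halfway toward each new one. The exogenous-variable extension in the paper adds regression terms to the observation equation of this state space form.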
By:  Ivan Fernandez-Val (Department of Economics, Boston University) 
Abstract:  Fixed effects estimates of structural parameters in nonlinear panel models can be severely biased due to the incidental parameters problem. In this paper I show that the most important component of this incidental parameters bias for probit fixed effects estimators of index coefficients is proportional to the true parameter value, using a large-T expansion of the bias. This result allows me to derive a lower bound for this bias, and to show that fixed effects estimates of ratios of coefficients and average marginal effects have zero bias in the absence of heterogeneity and have negligible bias relative to their true values for a wide range of distributions of regressors and individual effects. Numerical examples suggest that this small bias property also holds for logit and linear probability models, and for exogenous variables in dynamic binary choice models. An empirical analysis of female labor force participation using data from the PSID shows that whereas the significant biases in fixed effects estimates of model parameters do not contaminate the estimates of marginal effects in static models, estimates of both index coefficients and marginal effects can be severely biased in dynamic models. Improved bias corrected estimators for index coefficients and marginal effects are also proposed for both static and dynamic models. 
JEL:  C23 C25 J22 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp200538&r=ecm 
By:  Sokbae Lee; Oliver Linton; YoonJae Whang 
Abstract:  We propose a test of the hypothesis of stochastic monotonicity. This hypothesis is of interest in many applications. Our test is based on the supremum of a rescaled U-statistic. We show that its asymptotic distribution is Gumbel. The proof is difficult because the approximating Gaussian stochastic process contains both a stationary and a nonstationary part, and so we have to extend existing results that only apply to either one or the other case. 
Keywords:  Distribution function, Extreme Value Theory, Gaussian Process, Monotonicity. 
JEL:  C14 C15 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/504&r=ecm 
By:  Stanislav Anatolyev (NES) 
Abstract:  This article surveys estimation in stationary time series models using the approach of optimal instrumentation. We review tools that allow construction and implementation of optimal instrumental variables estimators in various circumstances: in single- and multi-period models, in the absence and presence of conditional heteroskedasticity, and with linear and nonlinear instruments. We also discuss issues adjacent to the theme of optimal instruments. The article is directed primarily towards practitioners, but may also prove useful to econometric theorists and teachers of graduate econometrics. 
Keywords:  Instrumental variables estimation; Moment restrictions; Optimal instrument; Efficiency bounds; Stationary time series. 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cfr:cefirw:w0069&r=ecm 
By:  Offer Lieberman (Technion-Israel Institute of Technology); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  An infinite-order asymptotic expansion is given for the autocovariance function of a general stationary long-memory process with memory parameter d in (-1/2, 1/2). The class of spectral densities considered includes as a special case the stationary and invertible ARFIMA(p,d,q) model. The leading term of the expansion is of the order O(1/k^{1-2d}), where k is the autocovariance order, consistent with the well-known power law decay for such processes, and is shown to be accurate to an error of O(1/k^{3-2d}). The derivation uses Erdélyi's (1956) expansion for Fourier-type integrals when there are critical points at the boundaries of the range of integration, here the frequencies {0, 2π}. Numerical evaluations show that the expansion is accurate even for small k in cases where the autocovariance sequence decays monotonically, and in other cases for moderate to large k. The approximations are easy to compute across a variety of parameter values and models. 
Keywords:  Autocovariance, Asymptotic expansion, Critical point, Fourier integral, Long memory 
JEL:  C13 C22 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1586&r=ecm 
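For the ARFIMA(0,d,0) special case the exact autocovariance is available in closed form, so the leading O(1/k^{1-2d}) term can be checked numerically. A sketch using the standard formula (unit innovation variance assumed; function names are illustrative):

```python
import math

def arfima0d0_acov(k, d):
    """Exact autocovariance of ARFIMA(0,d,0), unit innovation variance:
    gamma(k) = Gamma(1-2d)/(Gamma(d)Gamma(1-d)) * Gamma(k+d)/Gamma(k+1-d)."""
    c = math.lgamma(1 - 2 * d) - math.lgamma(d) - math.lgamma(1 - d)
    return math.exp(c + math.lgamma(k + d) - math.lgamma(k + 1 - d))

def power_law_approx(k, d):
    """Leading term of the expansion: the same constant times k^(2d-1)."""
    c = math.lgamma(1 - 2 * d) - math.lgamma(d) - math.lgamma(1 - d)
    return math.exp(c) * k ** (2 * d - 1)
```

The relative error of the power-law term shrinks quickly in k, in line with the stated O(1/k^{3-2d}) accuracy of the leading term.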
By:  Roger Klein (Rutgers University); Francis Vella (Georgetown University and IZA Bonn) 
Abstract:  This paper provides a control function estimator to adjust for endogeneity in the triangular simultaneous equations model when no exclusion restrictions are available to generate suitable instruments. Our approach is to exploit the dependence of the errors on exogenous variables (e.g. heteroskedasticity) to adjust the conventional control function estimator. The form of the error dependence on the exogenous variables is subject to restrictions, but is not parametrically specified. In addition to providing the estimator and deriving its large-sample properties, we present simulation evidence which indicates that the estimator works well. 
Keywords:  endogeneity, heteroskedasticity, control function 
JEL:  C14 C30 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp2378&r=ecm 
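For contrast, the conventional control function estimator that the paper adjusts does require an excluded instrument. A textbook-style sketch with illustrative names; the paper's contribution is precisely to replace the instrument with restrictions on how the error variance depends on the exogenous variables:

```python
import numpy as np

def control_function(y, x, z):
    """Conventional control-function estimator for a triangular system:
    regress the endogenous x on the instrument z, then include the
    first-stage residual v as an extra regressor in the outcome equation."""
    Z = np.column_stack([np.ones(len(z)), z])
    v = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X = np.column_stack([np.ones(len(y)), x, v])
    return np.linalg.lstsq(X, y, rcond=None)[0]  # [intercept, beta, rho]
```

Including the residual absorbs the endogenous part of the error, so the coefficient on x is consistent while plain OLS is not.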
By:  Richard K. Crump; V. Joseph Hotz; Guido W. Imbens; Oscar A. Mitnik 
Abstract:  Estimation of average treatment effects under unconfoundedness or exogenous treatment assignment is often hampered by lack of overlap in the covariate distributions. This lack of overlap can lead to imprecise estimates and can make commonly used estimators sensitive to the choice of specification. In such cases researchers have often used informal methods for trimming the sample. In this paper we develop a systematic approach to addressing such lack of overlap. We characterize optimal subsamples for which the average treatment effect can be estimated most precisely, as well as optimally weighted average treatment effects. Under some conditions the optimal selection rules depend solely on the propensity score. For a wide range of distributions a good approximation to the optimal rule is provided by the simple selection rule to drop all units with estimated propensity scores outside the range [0.1,0.9]. 
JEL:  C1 C13 C14 C2 C21 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberte:0330&r=ecm 
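The simple selection rule quoted in the abstract is easy to state in code. Helper names are illustrative, and the difference-in-means step is only for demonstration, not the paper's estimator:

```python
import numpy as np

def trim_overlap(pscore, low=0.1, high=0.9):
    """The rule of thumb from the abstract: keep only units whose
    estimated propensity score lies inside [low, high]."""
    pscore = np.asarray(pscore, dtype=float)
    return (pscore >= low) & (pscore <= high)

def trimmed_ate(y, treat, pscore, low=0.1, high=0.9):
    """Illustrative difference in means on the trimmed subsample."""
    keep = trim_overlap(pscore, low, high)
    y = np.asarray(y, dtype=float)[keep]
    treat = np.asarray(treat, dtype=bool)[keep]
    return y[treat].mean() - y[~treat].mean()
```

Dropping units with extreme scores discards the observations that make the estimate imprecise, at the cost of changing the estimand to an average effect on the trimmed subpopulation.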
By:  Stanislav Anatolyev (NES) 
Abstract:  We develop and evaluate sequential testing tools for a class of nonparametric tests for predictability of financial returns that includes, in particular, the directional accuracy and excess profitability tests. We consider both the retrospective context, where a researcher wants to track predictability over time in a historical sample, and the monitoring context, where a researcher conducts testing as new observations arrive. Throughout, we elaborate on both two-sided and one-sided testing, focusing on linear monitoring boundaries that are continuations of horizontal lines corresponding to retrospective critical values. We illustrate our methodology by testing for directional and mean predictability of returns in a dozen young stock markets in Eastern Europe. 
Keywords:  Testing, monitoring, predictability, stock returns 
JEL:  C12 C22 C52 C53 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:cfr:cefirw:w0071&r=ecm 
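A simplified directional accuracy statistic, comparing the hit rate with its value when forecast and actual signs are independent, conveys the flavour of the retrospective tests. This sketch ignores the variance adjustment of the exact test and the monitoring boundaries discussed in the paper:

```python
def directional_accuracy_z(actual, forecast):
    """Simplified directional-accuracy z-statistic: standardized excess of
    the sign hit rate over its expected value under independence."""
    n = len(actual)
    hit = sum((a > 0) == (f > 0) for a, f in zip(actual, forecast)) / n
    pa = sum(a > 0 for a in actual) / n
    pf = sum(f > 0 for f in forecast) / n
    p0 = pa * pf + (1 - pa) * (1 - pf)  # expected hit rate under independence
    return (hit - p0) / (p0 * (1 - p0) / n) ** 0.5
```

In the monitoring context, such a statistic would be recomputed as each observation arrives and compared against a boundary that widens linearly beyond the retrospective critical value.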
By:  Gregory Connor; Oliver Linton 
Abstract:  We introduce an alternative version of the Fama-French three-factor model of stock returns together with a new estimation methodology. We assume that the factor betas in the model are smooth nonlinear functions of observed security characteristics. We develop an estimation procedure that combines nonparametric kernel methods for constructing mimicking portfolios with parametric nonlinear regression to estimate factor returns and factor betas simultaneously. The methodology is applied to US common stocks and the empirical findings compared to those of Fama and French. 
Keywords:  characteristic-based factor model, arbitrage pricing theory, kernel estimation, nonparametric estimation. 
JEL:  G12 C14 
Date:  2006–09 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/506&r=ecm 
By:  Manuela Angelucci (University of Arizona and IZA Bonn); Orazio Attanasio (University College London, NBER, BREAD and CEPR) 
Abstract:  In this paper we discuss several methodological issues related to the identification and estimation of Average Treatment on the Treated (ATT) effects in the presence of low compliance. We consider non-experimental data consisting of a treatment group, where a program is implemented, and of a non-randomly drawn control group, where the program is not offered. Estimating the ATT involves tackling both the non-random assignment of the program and the non-random participation among treated individuals. We argue against standard matching approaches to deal with the latter issue because they are based on the assumption that we observe all variables that determine both participation and outcome. Instead, we propose an IV-type estimator which exploits the fact that, in the absence of spillover effects, the ATT can be expressed as the Average Intent to Treat divided by the participation share. We propose a semiparametric estimator that couples the flexibility of matching estimators with a standard Instrumental Variable approach. We discuss the different assumptions necessary for the identification of the ATT under each of the two approaches, and we provide an empirical application by estimating the effect of the Mexican conditional cash transfer program, Oportunidades, on food consumption. 
Keywords:  program evaluation, treatment effects 
JEL:  C31 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp2368&r=ecm 
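The identifying identity behind the proposed IV-type estimator is simple. A sketch with a hypothetical function name; in practice both the intent-to-treat effect and the participation share are themselves estimated, e.g. semiparametrically as the paper proposes:

```python
def att_from_itt(mean_y_treatment_area, mean_y_control_area, participation_share):
    """IV-type identity from the abstract: with no spillover effects, the
    Average Treatment on the Treated equals the Average Intent to Treat
    divided by the participation share among the treated."""
    itt = mean_y_treatment_area - mean_y_control_area
    return itt / participation_share
```

For example, if outcomes average 12 in treatment areas and 10 in control areas while only half of eligible individuals participate, the implied ATT is (12 - 10) / 0.5 = 4.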
By:  Li, Youwei; Donkers, Bas; Melenberg, Bertrand (Tilburg University, Center for Economic Research) 
Abstract:  Microscopic simulation models are often evaluated based on visual inspection of the results. This paper presents formal econometric techniques to compare microscopic simulation (MS) models with real-life data. A related result is a methodology to compare different MS models with each other. For this purpose, possible parameters of interest, such as mean returns or autocorrelation patterns, are classified and characterized. For each class of characteristics, the appropriate techniques are presented. We illustrate the methodology by comparing the MS model developed by Levy, Levy, and Solomon (2000) and the market fraction model developed by He and Li (2005a, b) with actual data. 
Keywords:  Microscopic simulation models; Econometric analysis 
JEL:  C10 G12 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200699&r=ecm 
By:  Ilze Kalnina; Oliver Linton 
Abstract:  We propose an econometric model that captures the effects of market microstructure on a latent price process. In particular, we allow for correlation between the measurement error and the return process, and we allow the measurement error process to have a diurnal heteroskedasticity. We propose a modification of the TSRV estimator of quadratic variation. We show that this estimator is consistent, with a rate of convergence that depends on the size of the measurement error, but is no worse than n^{1/6}. We investigate in simulation experiments the finite sample performance of various proposed implementations. 
Keywords:  Endogenous noise, Market Microstructure, Realised Volatility, Semimartingale 
JEL:  C12 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2006/509&r=ecm 
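The TSRV estimator being modified averages realized variance over K sparse subgrids and bias-corrects with the full-grid realized variance, which is dominated by the noise. A minimal sketch of the standard (unmodified) estimator, with illustrative names; the paper's version adapts this to endogenous and diurnally heteroskedastic noise:

```python
import numpy as np

def tsrv(p, K):
    """Two-scales realized volatility: average the subsampled realized
    variances over K offset subgrids, then subtract a noise correction
    based on the full-grid realized variance."""
    n = len(p) - 1                      # number of high-frequency returns
    rv_all = np.sum(np.diff(p) ** 2)    # noise-dominated at the finest scale
    rv_sub = np.mean([np.sum(np.diff(p[k::K]) ** 2) for k in range(K)])
    nbar = (n - K + 1) / K              # average subgrid sample size
    return rv_sub - (nbar / n) * rv_all
```

On noisy prices the full-grid realized variance can be several times the true quadratic variation, while the two-scales combination removes most of the noise bias.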
By:  Li, Youwei; Donkers, Bas; Melenberg, Bertrand (Tilburg University, Center for Economic Research) 
Abstract:  This paper illustrates how to compare different microscopic simulation (MS) models, and how to compare an MS model with real data, when the parameters of interest are estimated non- or semiparametrically. As examples we investigate the marginal single-period probability density function of stock returns, and the corresponding spectral density function and memory parameters. We illustrate the methodology using the MS models developed by Levy, Levy, and Solomon (2000) and the market fraction model developed by He and Li (2005a, b), and confront the resulting return data with the S&P 500 stock index data. 
Keywords:  Microscopic simulation models; Probability density function; Spectral density function; Memory parameters 
JEL:  C14 G12 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200695&r=ecm 
By:  Pierre Perron (Department of Economics, Boston University); Zhongjun Qu (Department of Economics, Boston University) 
Abstract:  Recently, there has been an upsurge of interest in the possibility of confusing long memory and structural changes in level. Many studies have documented the fact that, when a stationary short memory process is contaminated by level shifts, the estimate of the fractional differencing parameter is biased away from zero and the autocovariance function exhibits a slow rate of decay, akin to a long memory process. Yet, no theoretical results are available pertaining to the distributions of the estimates. We fill this gap by analyzing the properties of the log periodogram estimate when the jump component is specified by a simple mixture model. Our theoretical results explain many reported findings and uncover new features. Simulations are presented to highlight the properties of the distributions and to assess the adequacy of our limit results as approximations to the finite sample distributions. We also explain how the limit distribution changes as the number of frequencies used varies, a feature that is different from the case of a pure fractionally integrated model. We confront this practical implication with daily S&P 500 absolute returns and their square roots over the period 1928-2002. Our findings are remarkable: the path of the log periodogram estimates clearly follows the pattern that would obtain if the true underlying process were short memory contaminated by level shifts, rather than a pure fractionally integrated process. A simple testing procedure is also proposed, which reinforces this conclusion. 
Keywords:  structural change, jumps, long memory processes, fractional integration, Poisson process, frequency domain estimates. 
JEL:  C22 
Date:  2004–06 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2006015&r=ecm 
By:  A. PRINZIE; D. VAN DEN POEL 
Abstract:  Data mining applications addressing classification problems must master two key tasks: feature selection and model selection. This paper proposes a random feature selection procedure integrated within the multinomial logit (MNL) classifier to perform both tasks simultaneously. We assess the potential of the random feature selection procedure (exploiting randomness) as compared to an expert feature selection method (exploiting domain knowledge) on a CRM cross-sell application. The results show great promise, as the predictive accuracy of the integrated random feature selection in the MNL algorithm is substantially higher than that of the expert feature selection method. 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:06/390&r=ecm 
By:  Zhong Zhao (IZA Bonn) 
Abstract:  We use the data from the National Supported Work Demonstration to study the performance of non-propensity-score matching estimators, and to compare them with propensity score matching. We find that all matching estimators studied here are sensitive to the choice of data set. Propensity score methods are sensitive to smoothing parameters and usually have larger standard errors. Difference-in-differences and bias-corrected matching improve the performance of the matching estimators considered here. Our results suggest that the 1974 earnings are important for Dehejia and Wahba’s PSID data but not for their CPS data in replicating the experimental results. After decomposing the selection bias, we find that a sizable selection bias on unobservables is present in all data sets. 
Keywords:  treatment effect, matching estimators, NSW data, selection bias 
JEL:  C14 C21 I38 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp2375&r=ecm 
By:  Esther Ruiz; Helena Veiga 
Abstract:  In this paper, we propose a new stochastic volatility model, called ALMSV, to cope simultaneously with the leverage effect and long memory. We derive its statistical properties and compare them with the properties of the FIEGARCH model. We show that the dependence of the autocorrelations of squares on the parameters measuring the asymmetry and the persistence is different in the two models. The kurtosis and autocorrelations of squares do not depend on the asymmetry in the ALMSV model, while they increase with the asymmetry in the FIEGARCH model. Furthermore, the autocorrelations of squares increase with the persistence in the ALMSV model and decrease in the FIEGARCH model. On the other hand, the autocorrelations of absolute returns increase with the magnitude of the asymmetry in the FIEGARCH model, while they can increase or decrease depending on the sign of the asymmetry in the ALMSV model. Finally, the cross-correlations between squares and original observations are, in general, larger in the FIEGARCH model than in the ALMSV model. The results are illustrated by fitting both models to the dynamic evolution of volatilities of daily returns of the S&P 500 and DAX indexes. 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws066016&r=ecm 
By:  Giannone, Domenico; Reichlin, Lucrezia 
Abstract:  This paper asks two questions. First, can we detect empirically whether the shocks recovered from the estimates of a structural VAR are truly structural? Second, can the problem of nonfundamentalness be solved by considering additional information? The answer to the first question is 'yes' and that to the second is 'under some conditions'. 
Keywords:  identification; information; invertibility; structural VAR 
JEL:  C32 C33 E00 E32 O3 
Date:  2006–06 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5725&r=ecm 
By:  AssenmacherWesche, Katrin; Gerlach, Stefan 
Abstract:  While monetary targeting has become increasingly rare, many central banks attach weight to money growth in setting interest rates. This raises the issue of how money can be combined with other variables, in particular the output gap, when analysing inflation. The Swiss National Bank emphasises that the indicators it uses to do so vary across forecasting horizons. While real indicators are employed for short-run forecasts, money growth is more important at longer horizons. Using band spectral regressions and causality tests in the frequency domain, we show that this interpretation of the inflation process fits the data well. 
Keywords:  frequency domain; Phillips curve; quantity theory; spectral regression 
JEL:  C22 E3 E5 
Date:  2006–06 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5723&r=ecm 
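A band spectral regression estimates the same coefficient using only the Fourier frequencies inside a chosen band, which is how short-run and long-run relationships are separated here. A minimal sketch with illustrative names; the paper's frequency-domain causality tests are not shown:

```python
import numpy as np

def band_spectral_ols(y, x, band):
    """Band spectral regression: OLS on the discrete Fourier transforms
    of y and x, restricted to frequencies (in radians) inside `band`."""
    n = len(y)
    freqs = 2 * np.pi * np.fft.fftfreq(n)
    keep = (np.abs(freqs) >= band[0]) & (np.abs(freqs) <= band[1])
    wy, wx = np.fft.fft(y)[keep], np.fft.fft(x)[keep]
    return (np.conj(wx) @ wy).real / (np.conj(wx) @ wx).real
```

If y loads differently on the low- and high-frequency components of x, a low band recovers the long-run coefficient and a high band the short-run one, mirroring the money-versus-output-gap split in the abstract.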
By:  Frank A Cowell 
Abstract:  This article provides a brief overview of the key issues in inequality measurement and has been prepared for inclusion in the second edition of The New Palgrave. 
Keywords:  inequality, social welfare, ranking. 
JEL:  C13 D63 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stidar:86&r=ecm 
By:  Edward L. Glaeser 
Abstract:  Economists are quick to assume opportunistic behavior in almost every walk of life other than our own. Our empirical methods are based on assumptions of human behavior that would not pass muster in any of our models. The solution to this problem is not to expect a mass renunciation of data mining, selective data cleaning or opportunistic methodology selection, but rather to follow Leamer's lead in designing and using techniques that anticipate the behavior of optimizing researchers. In this essay, I make ten points about a more economic approach to empirical methods and suggest paths for methodological progress. 
JEL:  A11 B4 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberte:0329&r=ecm 
By:  Tatsuma Wada (Department of Economics, Boston University); Pierre Perron (Department of Economics, Boston University) 
Abstract:  This paper first generalizes the trend-cycle decomposition framework of Perron and Wada (2005), based on an unobserved components model with innovations having a mixture of normals distribution, which is able to handle sudden level and slope changes to the trend function as well as outliers. Second, we investigate how important the differences in the implied trend and cycle are compared to the popular decomposition based on the Hodrick and Prescott (HP) (1997) filter. Our results show important qualitative and quantitative differences in the implied cycles for both real GDP and consumption series for the G7 countries. Most of the differences can be ascribed to the fact that the HP filter does not handle well slope changes, level shifts and outliers, while our method does. Third, we assess how such different cycles affect some so-called “stylized facts” about the relative variability of consumption and output across countries. Our results again show important differences. In particular, the cross-country consumption correlations are generally higher than the output correlations, except for the period from 1975 to 1985, provided Canada is excluded. Our results therefore provide a partial solution to this puzzle. The evidence is particularly strong for the most recent period. 
Keywords:  Trend-Cycle Decomposition, Unobserved Components Model, International Business Cycle, Non-Gaussian Filter. 
JEL:  C22 E32 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp200544&r=ecm 