
on Econometrics 
By:  Mohitosh Kejriwal; Pierre Perron 
Abstract:  This paper considers issues related to testing for multiple structural changes in cointegrated systems. We derive the limiting distribution of the sup-Wald test under mild conditions on the errors and regressors for a variety of testing problems. We show that even if the coefficients of the integrated regressors are held fixed but the intercept is allowed to change, the limit distributions are not the same as would prevail in a stationary framework. Including stationary regressors whose coefficients are not allowed to change does not affect the limiting distribution of the tests under the null hypothesis. We also propose a procedure that allows one to test the null hypothesis of, say, k changes, versus the alternative hypothesis of k + 1 changes. This sequential procedure is useful in that it permits consistent estimation of the number of breaks present. We show via simulations that our tests maintain the correct size in finite samples and are much more powerful than the commonly used LM tests, which suffer from important problems of nonmonotonic power in the presence of serial correlation in the errors.
Keywords:  changepoint, sequential procedure, Wald tests, unit roots, cointegration
JEL:  C22 
Date:  2008–11 
URL:  http://d.repec.org/n?u=RePEc:pur:prukra:1216&r=ecm 
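As a companion illustration, the sup-Wald idea (maximize a break-test statistic over all admissible break dates) can be sketched in the simplest possible setting, a single break in the mean of a univariate series. The trimming fraction and simulated data below are illustrative choices, not from the paper:

```python
import numpy as np

def sup_wald_mean_break(y, trim=0.15):
    """Sup-Wald statistic for a single break in the mean of y.

    Computes a Wald statistic for every admissible break date and takes
    the maximum. This is only the basic mechanic; the paper's setting,
    multiple breaks in cointegrated systems, is far richer.
    """
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)
    stats = []
    for k in range(lo, hi):
        y1, y2 = y[:k], y[k:]
        # pooled residual variance under the one-break alternative
        ssr = ((y1 - y1.mean()) ** 2).sum() + ((y2 - y2.mean()) ** 2).sum()
        s2 = ssr / (T - 2)
        # Wald statistic for H0: equal means before and after date k
        w = (y1.mean() - y2.mean()) ** 2 / (s2 * (1 / k + 1 / (T - k)))
        stats.append(w)
    return max(stats)

rng = np.random.default_rng(0)
y_break = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
y_nobreak = rng.normal(0, 1, 200)
```

On `y_break` the statistic is very large near the true break date, while on `y_nobreak` it stays modest, which is the contrast the test exploits.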
By:  Hiroaki Chigira; Tsunemasa Shiba 
Abstract:  We propose a Bayesian procedure to estimate heteroscedastic variances of the regression error term ω when the form of heteroscedasticity is unknown. The prior information on ω is elicited from the well-known Eicker-White heteroscedasticity-consistent variance-covariance matrix estimator. A Markov chain Monte Carlo algorithm is used to simulate the posterior pdfs of the unknown elements of ω. In addition to numerical examples, we present an empirical investigation of the stock prices of Japanese pharmaceutical and biomedical companies to demonstrate the usefulness of the proposed method.
Keywords:  Eicker-White HCCM, orthogonal regressors, informative prior pdfs, MCMC, stock return variance
JEL:  C11 C13 
Date:  2009–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08051&r=ecm 
By:  Hadri, Kaddour; Kurozumi, Eiji 
Abstract:  This paper develops a simple test for the null hypothesis of stationarity in heterogeneous panel data with cross-sectional dependence in the form of a common factor in the disturbance. We do not estimate the common factor but mop up its effect by employing the same method as the one proposed by Pesaran (2007) in the unit root testing context. Our test is essentially the KPSS test, but with the regression augmented by the cross-sectional average of the observations. We also develop a Lagrange multiplier (LM) test allowing for cross-sectional dependence and, under restrictive assumptions, compare our augmented KPSS test with the extended LM test under the null of stationarity, under the local alternative and under the fixed alternative, and discuss the differences between the two tests. We also extend our test to the more realistic case where the shocks are serially correlated. We use Monte Carlo simulations to examine the finite-sample properties of the augmented KPSS test.
Keywords:  Panel data, stationarity, KPSS test, cross-sectional dependence, LM test, locally best test
JEL:  C12 C33 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:hit:ccesdp:7&r=ecm 
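The augmentation idea (add the cross-sectional average to each unit's regression to mop up the common factor, then compute unit-level KPSS statistics) can be sketched as follows. This is an illustrative simplification: the long-run variance is taken to be the plain residual variance, valid only for serially uncorrelated shocks, and no critical values are computed:

```python
import numpy as np

def kpss_stat(e, lrv):
    """KPSS statistic: scaled squared partial sums of residuals."""
    S = np.cumsum(e)
    return (S @ S) / (len(e) ** 2 * lrv)

def augmented_kpss_panel(Y):
    """Average of unit-level KPSS statistics, with each unit's regression
    augmented by the cross-sectional average to absorb a common factor.
    Uses the naive residual variance as the long-run variance (iid shocks).
    """
    T, N = Y.shape
    ybar = Y.mean(axis=1)
    X = np.column_stack([np.ones(T), ybar])  # intercept + cross-sectional average
    stats = []
    for i in range(N):
        b, *_ = np.linalg.lstsq(X, Y[:, i], rcond=None)
        e = Y[:, i] - X @ b
        stats.append(kpss_stat(e, e @ e / T))
    return float(np.mean(stats))

rng = np.random.default_rng(6)
T, N = 200, 10
f = rng.normal(size=T)                       # common factor
lam = rng.normal(1.0, 0.2, N)                # heterogeneous loadings
stationary = f[:, None] * lam + rng.normal(size=(T, N))
nonstationary = stationary + np.cumsum(rng.normal(size=(T, N)), axis=0)
stat_null = augmented_kpss_panel(stationary)
stat_alt = augmented_kpss_panel(nonstationary)
```

The statistic stays small for the stationary panel and diverges when unit-root components are added, which is the direction of rejection for a stationarity test.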
By:  Drew Creal; Siem Jan Koopman; Andre Lucas 
Abstract:  We propose a new class of observation-driven time series models that we refer to as Generalized Autoregressive Score (GAS) models. The driving mechanism of the GAS model is the scaled likelihood score. This provides a unified and consistent framework for introducing time-varying parameters in a wide class of nonlinear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, autoregressive conditional duration, autoregressive conditional intensity and single source of error models. In addition, the GAS specification gives rise to a wide range of new observation-driven models. Examples include nonlinear regression models with time-varying parameters, observation-driven analogues of unobserved components time series models, multivariate point process models with time-varying parameters and pooling restrictions, new models for time-varying copula functions and models for time-varying higher order moments. We study the properties of GAS models and provide several nontrivial examples of their application.
Keywords:  dynamic models, time-varying parameters, nonlinearity, exponential family, marked point processes, copulas
JEL:  C10 C22 C32 C51 
Date:  2009–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08038&r=ecm 
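The scaled-score driving mechanism can be made concrete in the simplest Gaussian volatility case, where the GAS recursion reduces to a GARCH-type update (the encompassing property the abstract mentions). The parameter values below are illustrative, not estimates:

```python
import numpy as np

def gas_gaussian_volatility(y, omega=0.05, alpha=0.1, beta=0.9):
    """GAS(1,1) filter for the variance f_t of zero-mean Gaussian data.

    For y_t ~ N(0, f_t) the score of the log-density w.r.t. f_t is
    (y_t^2 - f_t) / (2 f_t^2); scaling by the inverse Fisher information
    2 f_t^2 gives the update s_t = y_t^2 - f_t, so the recursion
    f_{t+1} = omega + beta * f_t + alpha * s_t is a GARCH-type model.
    """
    f = np.empty(len(y))
    f[0] = np.var(y)  # initialise at the sample variance
    for t in range(len(y) - 1):
        s = y[t] ** 2 - f[t]               # scaled score
        f[t + 1] = omega + beta * f[t] + alpha * s
    return f

rng = np.random.default_rng(1)
y = rng.normal(0, 1, 500)
f = gas_gaussian_volatility(y)
```

Other members of the class (durations, intensities, copula parameters) follow the same recipe with the score of the relevant log-density swapped in.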
By:  Kukenova, Madina; Monteiro, JoseAntonio 
Abstract:  This paper investigates the finite-sample properties of estimators for spatial dynamic panel models in the presence of several endogenous variables. So far, none of the available estimators in spatial econometrics allows for spatial dynamic models with one or more endogenous variables. We propose to apply system-GMM, since it can correct for the endogeneity of the dependent variable, the spatial lag, as well as other potentially endogenous variables, using internal and/or external instruments. The Monte Carlo investigation compares the performance of spatial MLE, spatial dynamic MLE (Elhorst (2005)), spatial dynamic QMLE (Yu et al. (2008)), LSDV, difference-GMM (Arellano & Bond (1991)), as well as extended GMM (Arellano & Bover (1995), Blundell & Bond (1998)) in terms of bias, root mean squared error and standard-error accuracy. The results suggest that, in order to account for the endogeneity of several covariates, spatial dynamic panel models should be estimated using extended GMM. On practical grounds this is also important, because system-GMM avoids the inversion of high-dimensional spatial weights matrices, which can be computationally unfeasible for large N and/or T.
Keywords:  Spatial Econometrics; Dynamic Panel Model; System GMM; Monte Carlo Simulations 
JEL:  C15 C33 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:13404&r=ecm 
By:  Alessandra Amendola; Giuseppe Storti 
Abstract:  This paper proposes a novel approach to the combination of conditional covariance matrix forecasts based on the use of the Generalized Method of Moments (GMM). It is shown how the procedure can be generalized to deal with large dimensional systems by means of a two-step strategy. The finite-sample properties of the GMM estimator of the combination weights are investigated by Monte Carlo simulations. Finally, in order to give an appraisal of the economic implications of the combined volatility predictor, the results of an application to tactical asset allocation are presented.
Keywords:  Multivariate GARCH, Forecast Combination, GMM, Portfolio Optimization 
JEL:  C52 C53 C32 G11 
Date:  2009–01 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009007&r=ecm 
By:  Hyungsik Roger Moon; Frank Schorfheide 
Abstract:  A large sample approximation of the posterior distribution of partially identified structural parameters is derived for models that can be indexed by a finite-dimensional reduced form parameter vector. It is used to analyze the differences between frequentist confidence sets and Bayesian credible sets in partially identified models. A key difference is that frequentist set estimates extend beyond the boundaries of the identified set (conditional on the estimated reduced form parameter), whereas Bayesian credible sets can asymptotically be located in the interior of the identified set. Our asymptotic approximations are illustrated in the context of simple moment inequality models, and a numerical illustration for a two-player entry game is provided.
JEL:  C11 C32 C35 
Date:  2009–04 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:14882&r=ecm 
By:  Ole E. Barndorff-Nielsen; Peter Reinhard Hansen; Asger Lunde; Neil Shephard
Abstract:  We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement noise of certain types, and can also handle non-synchronous trading. It is the first estimator with all three of these properties, each of which is essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used returns measured over 5- or 10-minute intervals. We show the new estimator is substantially more precise.
Keywords:  HAC estimator, Long run variance estimator, Market frictions, Quadratic variation, Realised variance 
Date:  2009–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08037&r=ecm 
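A stripped-down sketch of a realised-kernel estimator, a Parzen-weighted sum of return autocovariances, conveys the basic construction; the paper's refinements (end-point jittering, refresh-time sampling for non-synchronous data, bandwidth choice) are omitted, and the bandwidth H below is an arbitrary illustrative value:

```python
import numpy as np

def parzen(x):
    """Parzen weight function, whose use yields a positive semi-definite estimate."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x) ** 3
    return 0.0

def realised_kernel(returns, H):
    """Realised kernel: weighted sum of lag-h autocovariances of returns.

    `returns` is a (T, d) array of high-frequency return vectors; the
    h = 0 term is the ordinary realised covariance, and the weighted
    lag terms absorb autocorrelation induced by market-microstructure noise.
    """
    T, d = returns.shape
    K = returns.T @ returns                    # h = 0: realised covariance
    for h in range(1, H + 1):
        g = returns[h:].T @ returns[:-h]       # lag-h autocovariance
        K += parzen(h / (H + 1)) * (g + g.T)
    return K

rng = np.random.default_rng(2)
r = rng.normal(0, 0.01, size=(1000, 3))        # simulated 3-asset returns
K = realised_kernel(r, H=10)
```

The resulting matrix is symmetric and (with Parzen-type weights) positive semi-definite, which is what makes it usable directly in portfolio applications.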
By:  Roy Cerqueti (University of Macerata); Paolo Falbo (University of Brescia); Cristian Pelizzari (University of Brescia)
Abstract:  While the bulk of the literature on Markov chain bootstrap methods (possibly of order higher than one) has focused on the correct estimation of the transition probabilities, little or no attention has been devoted to the problem of estimating the dimension of the transition probability matrix. Indeed, it is usual to assume that the Markov chain has a one-step memory property and that the state space cannot be clustered and coincides with the distinct observed values. In this paper we question the appropriateness of such a standard approach. In particular, we advance a method to jointly estimate the order of the Markov chain and identify a suitable clustering of the states. Indeed, in several real-life applications the "memory" of many processes extends well beyond the last observation; in those cases a correct representation of past trajectories requires a significantly richer set than the state space. On the contrary, it can sometimes happen that some distinct values do not correspond to really "different" states of a process; this is a common conclusion whenever, for example, a process assuming two distinct values in t is not affected in its distribution in t+1. Such a situation would suggest reducing the dimension of the transition probability matrix. Our methods are based on solving two optimization problems. More specifically, we consider two competing objectives that a researcher will in general pursue when dealing with bootstrapping: preserving the similarity between the observed and the bootstrap series, and reducing the probability of getting a perfect replication of the original sample. A brief axiomatic discussion is developed to define the desirable properties of such optimal criteria. Two numerical examples are presented to illustrate the method.
Keywords:  order of Markov chains, similarity of time series, transition probability matrices, multiplicity of time series, partition of states of Markov chains, Markov chains, bootstrap methods
JEL:  O1 O11 
Date:  2009–04 
URL:  http://d.repec.org/n?u=RePEc:mcr:wpdief:wpaper35&r=ecm 
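For contrast with the paper's proposal, the standard approach it questions (one-step memory, state space equal to the distinct observed values) can be sketched in a few lines; the series and seed below are illustrative:

```python
import numpy as np

def markov_bootstrap(series, n_boot, rng):
    """First-order Markov chain bootstrap of a discrete-valued series.

    Estimates the transition matrix from the observed one-step
    transitions, then simulates a bootstrap path from it. This
    hard-codes exactly the choices the paper questions: order one,
    and one state per distinct observed value (no clustering).
    """
    states = sorted(set(series))
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    P = np.zeros((n, n))
    for a, b in zip(series[:-1], series[1:]):
        P[idx[a], idx[b]] += 1            # count one-step transitions
    P /= P.sum(axis=1, keepdims=True)     # row-normalise to probabilities
    path = [series[0]]
    for _ in range(n_boot - 1):
        i = idx[path[-1]]
        path.append(states[rng.choice(n, p=P[i])])
    return path, P

rng = np.random.default_rng(3)
series = list(rng.choice(["L", "H"], size=200))
boot, P = markov_bootstrap(series, 100, rng)
```

The paper's method would instead choose both the order and a clustering of the states by optimizing the two competing criteria described in the abstract.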
By:  Andrew Ching (University of Toronto); Susumu Imai (Queen's University); Masakazu Ishihara (University of Toronto); Neelam Jain (Northern Illinois University) 
Abstract:  This paper provides a step-by-step guide to estimating discrete choice dynamic programming (DDP) models using the Bayesian Dynamic Programming algorithm developed by Imai, Jain and Ching (2008) (IJC). The IJC method combines the DDP solution algorithm with the Bayesian Markov chain Monte Carlo algorithm into a single algorithm, which solves the DDP model and estimates its structural parameters simultaneously. The main computational advantage of this estimation algorithm is the efficient use of information obtained from past iterations. In the conventional Nested Fixed Point algorithm, most of the information obtained in past iterations remains unused in the current iteration. In contrast, the Bayesian Dynamic Programming algorithm extensively uses the computational results obtained from past iterations to help solve the DDP model at the current iterated parameter values. Consequently, it significantly alleviates the computational burden of estimating a DDP model. We carefully discuss how to implement the algorithm in practice, and use a simple dynamic store choice model to illustrate how to apply this algorithm to obtain parameter estimates.
Keywords:  Bayesian Dynamic Programming, Discrete Choice Dynamic Programming, Markov Chain Monte Carlo 
JEL:  C11 M3 
Date:  2009–04 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1201&r=ecm 
By:  Ostap Okhrin; Yarema Okhrin; Wolfgang Schmid 
Abstract:  In this paper we analyse the properties of hierarchical Archimedean copulas. This class is a generalisation of the Archimedean copulas and allows for general non-exchangeable dependency structures. We show that the structure of the copula can be uniquely recovered from all bivariate margins. We derive the distribution of the copula value, which is particularly useful for tests and for constructing confidence intervals. Furthermore, we analyse dependence orderings, multivariate dependence measures and extreme value copulas. We pay special attention to tail dependencies and derive several tail dependence indices for general hierarchical Archimedean copulas.
Keywords:  copula; multivariate distribution; Archimedean copula; stochastic ordering; hierarchical copula 
JEL:  C16 C46 
Date:  2009–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009014&r=ecm 
By:  Choi, In; Kurozumi, Eiji 
Abstract:  In this paper, Mallows' (1973) Cp criterion, Akaike's (1973) AIC, Hurvich and Tsai's (1989) corrected AIC and the BIC of Akaike (1978) and Schwarz (1978) are derived for the leads-and-lags cointegrating regression. Deriving model selection criteria for the leads-and-lags regression is a nontrivial task since the true model is of infinite dimension. This paper justifies using the conventional formulas of those model selection criteria for the leads-and-lags cointegrating regression. The numbers of leads and lags can be selected in scientific ways using the model selection criteria. Simulation results regarding the bias and mean squared error of the long-run coefficient estimates are reported. It is found that the model selection criteria are successful in reducing bias and mean squared error relative to the conventional, fixed selection rules. Among the model selection criteria, the BIC appears to be most successful in reducing MSE, and Cp in reducing bias. We also observe that, in most cases, the selection rules without the restriction that the numbers of leads and lags be the same have an advantage over those with it.
Keywords:  Cointegration, Leads-and-lags regression, AIC, Corrected AIC, BIC, Cp
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:hit:ccesdp:6&r=ecm 
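The selection idea can be sketched for a symmetric leads-and-lags (DOLS-type) regression, choosing the common number of leads and lags by BIC. This is a simplified symmetric version (the paper also studies rules where the numbers of leads and lags differ), and the data-generating process below is an illustrative bivariate cointegrated system, not one of the paper's designs:

```python
import numpy as np

def dols_bic(y, x, max_k):
    """Select the number of leads/lags in a leads-and-lags cointegrating
    regression of y_t on x_t and dx_{t-k},...,dx_{t+k} by minimising BIC.

    Returns (bic, chosen k, estimated long-run coefficient on x).
    """
    dx = np.diff(x)                        # dx[i] = x[i+1] - x[i]
    best = None
    for k in range(max_k + 1):
        t0, t1 = k + 1, len(y) - k         # usable sample after losing ends
        T = t1 - t0
        cols = [np.ones(T), x[t0:t1]]
        for j in range(-k, k + 1):         # leads and lags of the difference
            cols.append(dx[t0 - 1 + j : t1 - 1 + j])
        X = np.column_stack(cols)
        b, *_ = np.linalg.lstsq(X, y[t0:t1], rcond=None)
        resid = y[t0:t1] - X @ b
        bic = T * np.log(resid @ resid / T) + X.shape[1] * np.log(T)
        if best is None or bic < best[0]:
            best = (bic, k, b[1])
    return best

rng = np.random.default_rng(4)
T = 300
x = np.cumsum(rng.normal(size=T))          # I(1) regressor
y = 2.0 * x + rng.normal(size=T)           # cointegrated with coefficient 2
bic, k, beta_hat = dols_bic(y, x, max_k=4)
```

With iid errors the criterion should favour a small k, and the long-run coefficient estimate is super-consistent, so it lands very close to the true value of 2.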
By:  Peter C. B. Phillips; Jun Yu 
Abstract:  A model of financial asset price determination is proposed that incorporates flat trading features into an efficient price process. The model involves the superposition of a Brownian semimartingale process for the efficient price and a Bernoulli process that determines the extent of flat price trading. The approach is related to sticky price modeling and the Calvo pricing mechanism in macroeconomic dynamics. A limit theory for the conventional realized volatility (RV) measure of integrated volatility is developed. The results show that RV is still consistent but has an inflated asymptotic variance that depends on the probability of flat trading. Estimated quarticity is similarly affected, so that both the feasible central limit theorem and the inferential framework suggested in Barndorff-Nielsen and Shephard (2002) remain valid under flat price trading even though there is information loss due to flat trading effects. The results are related to work by Jacod (1993) and Mykland and Zhang (2006) on realized volatility measures with random and intermittent sampling, and to ACD models for irregularly spaced transactions data. Extensions are given to include models with microstructure noise. Some simulation results are reported. Empirical evaluations with tick-by-tick data indicate that the effect of flat trading on the limit theory under microstructure noise is likely to be minor in most cases, thereby affirming the relevance of existing approaches.
Keywords:  Bernoulli process, Brownian semimartingale, Calvo pricing, Flat trading, Microstructure noise, Quarticity function, Realized volatility, Stopping times 
JEL:  C15 G12 
Date:  2009–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08039&r=ecm 
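The flat-trading mechanism and the consistency of RV under it are easy to illustrate by simulation: flatten a discretised Brownian price with Bernoulli draws and compare realized volatility with and without flat trading. All numerical choices below (grid size, volatility, flat-trading probability) are illustrative:

```python
import numpy as np

def realized_volatility(p):
    """Realized volatility: sum of squared intra-period log-price returns."""
    r = np.diff(p)
    return (r ** 2).sum()

def flat_trade(p, prob_flat, rng):
    """Impose Bernoulli flat trading: with probability prob_flat the price
    does not update at time t and the previous (possibly stale) price is
    repeated; when trading resumes, the increments telescope into one return.
    """
    out = p.copy()
    for t in range(1, len(p)):
        if rng.random() < prob_flat:
            out[t] = out[t - 1]
    return out

rng = np.random.default_rng(5)
n = 10_000
sigma = 0.01                                # total volatility over the period
p = np.cumsum(rng.normal(0, sigma / np.sqrt(n), n))   # efficient log-price
rv_full = realized_volatility(p)
rv_flat = realized_volatility(flat_trade(p, 0.3, rng))
```

Both estimates centre on the integrated variance sigma^2 = 1e-4; the flat-traded version is noisier, consistent with the inflated asymptotic variance the abstract describes.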
By:  Deniz Dilan Karaman Örsal; Bernd Droge 
Abstract:  In this note we establish the existence of the first two moments of the asymptotic trace statistic, which appears as the weak limit of the likelihood ratio statistic for testing the cointegration rank in a vector autoregressive model and whose moments may be used to develop panel cointegration tests. Moreover, we justify the common practice of approximating these moments by simulating a certain statistic which converges weakly to the asymptotic trace statistic. To accomplish this, we show that the moments of the mentioned statistic converge to those of the asymptotic trace statistic as the time dimension tends to infinity.
Keywords:  Cointegration, Trace statistic, Asymptotic moments, Uniform integrability 
JEL:  C32 C33 C12 
Date:  2009–02 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009012&r=ecm 
By:  Mendonca, Gui Pedro 
Abstract:  Even though the output and unemployment relation has always been a key theme in applied macroeconometric research, the global hypothesis of modular short- and long-run dynamics under classic macroeconomic assumptions has yet to become a widely discussed subject in the field, and therefore leaves a large scope for further improvement, discussion and experimentation. Following recent advances in nonlinear bivariate estimation techniques, this paper evaluates the joint hypotheses of endogenous growth, the natural rate hypothesis and asymmetric short-run error correction. To tackle this global proposal, a three-step methodology based on numeric grid search procedures is employed on data from nineteen OECD countries. First, a numerical grid search is used to estimate linear trend output regimes with structural breaks. Second, long-run natural unemployment rate regimes are endogenously obtained from these estimates. Finally, different grid search procedures, based on the original two-step procedure for estimating linear cointegration models, are used to estimate the short-run adjustment process assuming threshold vector error correction dynamics, following recent proposals on asymmetric Okun adjustment.
Keywords:  Okun Law; Structural Change; AdditiveOutlier Models; Bivariate Threshold Vector Error Correction Systems; Output/Unemployment Dynamics 
JEL:  C32 C51 C52 E27 
Date:  2008–11–28 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:14648&r=ecm 
By:  Ji Cao; Wolfgang Härdle; Julius Mungo 
Abstract:  Estimating the implied volatility as a function of strike and time to maturity is a challenging task in financial econometrics. Dynamic Semiparametric Factor Models (DSFM) are a model class that allows for the estimation of the implied volatility surface (IVS) in a dynamic context, employing semiparametric factor functions and time-varying loadings. Because financial asset volatilities move over time, across assets and over markets, this paper analyses volatility interaction between the German and Korean stock markets. As a proxy for volatility, factor loading series derived from a DSFM application to option prices are employed. We examine volatility transmission between the markets in a vector autoregressive (VAR) framework. Our results show that a shock in the volatility of one market may not translate directly into greater uncertainty in another market, and that, due to cointegration, portfolio investors are unlikely to benefit from diversification among these markets.
Keywords:  implied volatility surface, dynamic semiparametric factor model, VAR, cointegration 
JEL:  C14 G12 
Date:  2009–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009019&r=ecm 
By:  Klein, Achim; Urbig, Diemo; Kirn, Stefan 
Abstract:  Introduction. The objects of investigation of this work are micro-level behaviors in stock markets. We aim at better understanding which strategies of market participants drive stock markets. The problem is that micro-level data from real stock markets are largely unobservable. We take an estimation perspective to obtain daily time series of the fractions of chartists and fundamentalists among market participants. We fit the heterogeneous agent-based financial market model introduced by Lux and Marchesi [1] to the S&P 500. This model has more realistic time series properties than less complex econometric and other agent-based models. Such models have a rather complex dependency between micro and macro parameters that has to be mapped to empirical data by the estimation method. This poses heavy computational burdens. Our contribution to this field is a new method for indirectly estimating time-varying micro-parameters of highly complex agent-based models at high frequency. Related work. Due to the high complexity, few authors have published on this topic to date (e.g., [2], [3], and [4]). Recent approaches to directly estimating agent-based models are restricted to simpler models, make simplifying assumptions in the estimation procedure, estimate only non-time-varying parameters, or estimate only low-frequency time series. Approach and computational methods. The indirect estimation method we propose is based on estimating the inverse model of a rich agent-based model that derives realistic macro market behavior from heterogeneous market participants' behaviors. Applying the inverse model, which maps macro parameters back to micro parameters, to widely available macro-level financial market data allows for estimating time series of aggregated real-world micro-level strategy data at daily frequency.
To estimate the inverse model in the first place, a neural network approach is used, as it allows for a large degree of freedom concerning the structure of the mapping to be represented by the neural network. As a basis for learning the mapping, micro and macro time series of the market model are generated artificially using a multi-agent simulation based on RePast [5]. After applying several preprocessing and smoothing methods to these time series, a feed-forward multilayer perceptron is trained using a variant of the Levenberg-Marquardt algorithm combined with Bayesian regularization [6]. Finally, the trained network is applied to the S&P 500 to estimate daily time series of the fractions of strategies used by market participants. Results. The main contribution of this work is a model-free indirect estimation approach. It allows estimating micro-parameter time series of an underlying agent-based model of high complexity at high frequency. No simplifying assumptions concerning the model or the estimation process have to be made. Our results also contribute to the understanding of theoretical models. By investigating fundamental dependencies in the Lux and Marchesi model by means of a sensitivity analysis of the resulting neural network inverse model, price volatility is found to be a major driver. This provides additional support for findings in [1]. Some face validity of the concrete estimation results obtained from the S&P 500 is shown by comparison to the results of Boswijk et al. [3]. This is the work that comes closest to our approach, albeit their model is simpler and the estimation frequency is yearly. We find support for Boswijk et al.'s key finding of a large fraction of chartists during the late-1990s price bubble in technology stocks. Eventually, our work contributes to understanding what kind of micro-level behaviors drive stock markets.
Analyzing correlations of our estimation results with historic market events, we find that the fraction of chartists is large at times of crises, crashes, and bubbles. See also http://www.whodrivesthemarket.com for continuously updated live results.
Keywords:  stock market; heterogeneous agentbased models; indirect estimation; inverse model; trading strategies; chartists; fundamentalists; neural networks 
JEL:  C32 G12 C45 C81 C15 
Date:  2008–06–24 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:14433&r=ecm 
By:  Stefan Laséen; Lars E.O. Svensson
Abstract:  This paper specifies how to do policy simulations with alternative instrument-rate paths in DSGE models such as Ramses, the Riksbank's main model for policy analysis and forecasting. The new element is that these alternative instrument-rate paths are anticipated by the private sector. Such simulations correspond to situations where the Riksbank transparently announces that it plans to implement a particular instrument-rate path and where this announcement is believed by the private sector. Previous methods have instead implemented alternative instrument-rate paths by adding unanticipated shocks to an instrument rule, as in the method of modest interventions by Leeper and Zha (2003). This corresponds to a very different situation, where the Riksbank would non-transparently and secretly plan to implement deviations from an announced instrument rule. Such deviations are in practical simulations normally both serially correlated and large, which seems inconsistent with the assumption that they would remain unanticipated by the private sector. Simulations with anticipated instrument-rate paths seem more relevant for the transparent flexible inflation targeting that the Riksbank conducts. We provide an algorithm for the computation of policy simulations with arbitrary restrictions on nominal and real instrument-rate paths for an arbitrary number of periods, after which a given policy rule (including targeting rules and explicit, implicit, or forecast-based instrument rules) is implemented. When inflation projections are sufficiently sensitive to the real interest-rate path, restrictions on real interest-rate paths provide more intuitive and robust results, whereas restrictions on nominal interest-rate paths may provide somewhat counterintuitive results.
JEL:  E52 E58 
Date:  2009–04 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:14902&r=ecm 
By:  James D. Hamilton; Michael T. Owyang 
Abstract:  This paper develops a framework for inferring common Markov-switching components in a panel data set with large cross-section and time-series dimensions. We apply the framework to studying similarities and differences across U.S. states in the timing of business cycles. We hypothesize that there exists a small number of cluster designations, with individual states in a given cluster sharing certain business cycle characteristics. We find that although oil-producing and agricultural states can sometimes experience a separate recession from the rest of the United States, for the most part differences across states appear to be a matter of timing, with some states entering recession or recovering before others.
Keywords:  Business cycles; Recessions
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:200913&r=ecm 
By:  Yann Bramoullé; Bernard Fortin 
Abstract:  In a social network, agents have their own reference group that may influence their behavior. In turn, the agents' attributes and their behavior affect the formation and the structure of the social network. We survey the econometric literature on both aspects of social networks and discuss the identification and estimation issues they raise. 
Keywords:  Social network, peer effects, identification, network formation, pairwise regressions, separability, mutual consent 
JEL:  D85 L14 Z13 C3 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:0913&r=ecm 