
New Economics Papers on Econometrics 
By:  Jean-Marie Dufour; Tarek Jouini 
Abstract:  We study two linear estimators for stationary invertible VARMA models in echelon form – used to achieve identification (model parameter unicity) – with known Kronecker indices. Such linear estimators are much simpler to compute than the Gaussian maximum-likelihood estimators often proposed for such models, which require highly nonlinear optimization. The first estimator is an improved two-step estimator which can be interpreted as a generalized-least-squares extension of the two-step least-squares estimator studied in Dufour and Jouini (2005). The setup considered is also more general and allows for the presence of drift parameters. The second estimator is a new, relatively simple three-step linear estimator which is asymptotically equivalent to ML, hence asymptotically efficient, when the innovations of the process are Gaussian. The latter is based on using modified approximate residuals which better take into account the truncation error associated with the approximate long autoregression used in the first step of the method. We show that both estimators are consistent and asymptotically normal under the assumption that the innovations are a strong white noise, possibly non-Gaussian. Explicit formulae for the asymptotic covariance matrices are provided. The proposed estimators are computationally simpler than earlier “efficient” estimators, and the distributional theory we supply does not rely on a Gaussian assumption, in contrast with Gaussian maximum likelihood or the estimators considered by Hannan and Kavalieris (1984b) and Reinsel, Basu and Yap (1992). We present simulation evidence which indicates that the proposed three-step estimator typically performs better in finite samples than the alternative multistep linear estimators suggested by Hannan and Kavalieris (1984b), Reinsel et al. (1992), and Poskitt and Salau (1995). 
Keywords:  echelon form, linear estimation, generalized least squares, GLS, two-step linear estimation, three-step linear estimation, asymptotically efficient, maximum likelihood, ML, stationary process, invertible process, Kronecker indices, simulation 
JEL:  C13 C32 
Date:  2011–02–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2011s25&r=ecm 
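The multistep linear idea – a long autoregression to approximate the innovations, followed by least squares on lagged observations and lagged approximate residuals – can be illustrated in the simplest possible case. The sketch below applies it to a univariate ARMA(1,1) rather than an echelon-form VARMA, and all tuning choices (the long-AR order, plain OLS instead of GLS in the second step) are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def two_step_arma11(y, p_long=10):
    """Two-step linear estimation of an ARMA(1,1):
    step 1 fits a long AR(p_long) by OLS to approximate the innovations,
    step 2 regresses y_t on (y_{t-1}, e_{t-1}) by OLS."""
    y = np.asarray(y, float)
    n = len(y)
    # Step 1: long autoregression y_t = sum_i a_i y_{t-i} + e_t
    X = np.column_stack([y[p_long - i - 1:n - i - 1] for i in range(p_long)])
    Y = y[p_long:]
    a, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = np.zeros(n)
    e[p_long:] = Y - X @ a          # approximate innovations
    # Step 2: OLS of y_t on its own lag and the lagged approximate innovation
    Z = np.column_stack([y[p_long:n - 1], e[p_long:n - 1]])
    b, *_ = np.linalg.lstsq(Z, y[p_long + 1:], rcond=None)
    return b  # (phi_hat, theta_hat)
```

On a long simulated ARMA(1,1) sample, the two OLS coefficients approach the true autoregressive and moving-average parameters; the paper's GLS refinement and third step improve efficiency on top of this basic construction.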
By:  Elise Coudin; Jean-Marie Dufour 
Abstract:  We propose estimators for the parameters of a linear median regression without any assumption on the shape of the error distribution – including no condition on the existence of moments – allowing for heterogeneity (or heteroskedasticity) of unknown form, noncontinuous distributions, and very general serial dependence (linear or nonlinear), including GARCH-type and stochastic volatility of unknown order. The estimators follow from a reverse inference approach, based on the class of distribution-free sign tests proposed in Coudin and Dufour (2009, Econometrics J.) under a mediangale assumption. As a result, the estimators inherit strong robustness properties from their generating tests. Since the proposed estimators are based on maximizing a test statistic (or a p-value function) over different null hypotheses, they can be interpreted as Hodges-Lehmann-type (HL) estimators. It is easy to adapt the sign-based estimators to account for linear serial dependence. Both finite-sample and large-sample properties are established under weak regularity conditions. The proposed estimators are median unbiased (under symmetry and estimator unicity) and satisfy natural equivariance properties. Consistency and asymptotic normality are established without any condition on error moment existence, allowing for heterogeneity (or heteroskedasticity) of unknown form, noncontinuous distributions, and very general serial dependence (linear or nonlinear). These conditions are considerably weaker than those used to show corresponding results for LAD estimators. In a Monte Carlo study on bias and mean square error, we find that sign-based estimators perform better than LAD-type estimators, especially in heteroskedastic settings. The proposed procedures are applied to a trend model of the Standard and Poor’s composite price index, where disturbances are affected by both heavy tails (non-normality) and heteroskedasticity. 
Keywords:  sign test, median regression, Hodges-Lehmann estimator, p-value, least absolute deviations, quantile regression, simultaneous inference, Monte Carlo tests, projection methods, non-normality, heteroskedasticity, serial dependence, GARCH, stochastic volatility 
JEL:  C13 C12 C14 C15 
Date:  2011–02–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2011s24&r=ecm 
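To fix ideas, here is a minimal sketch of the Hodges-Lehmann logic in the scalar case: for each candidate beta, compute a sign-based statistic for the hypothesis that the median of y - x*beta given x is zero, and keep the beta at which the null is "least rejected". The particular statistic and the grid search below are illustrative simplifications, not the authors' test inversion.

```python
import numpy as np

def sign_statistic(beta, y, x):
    """Sign-based statistic for H0: med(y - x*beta | x) = 0.
    Small values mean the residual signs look balanced, i.e. H0 is plausible."""
    s = np.sign(y - x * beta)
    return abs(np.sum(s * np.sign(x)))   # regressor-weighted sign statistic

def hl_sign_estimator(y, x, grid):
    """Hodges-Lehmann-type estimator: the candidate beta whose null
    hypothesis is least rejected by the sign statistic."""
    stats = [sign_statistic(b, y, x) for b in grid]
    return grid[int(np.argmin(stats))]
```

Because only signs are used, the estimator needs no error moments at all; the test drives it through Cauchy noise, where least squares would be inconsistent.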
By:  Christophe Ley; Davy Paindaveine 
Abstract:  McWilliams (1990) introduced a nonparametric procedure based on runs for the problem of testing univariate symmetry about the origin (equivalently, about an arbitrary specified center). His procedure first reorders the observations according to their absolute values, then rejects the null when the number of runs in the resulting series of signs is too small. This test is universally consistent and enjoys nice robustness properties, but is unfortunately limited to the univariate setup. In this paper, we extend McWilliams’ procedure to tests of central symmetry in any dimension. The proposed tests first reorder the observations according to their statistical depth in a symmetrized version of the sample, then reject the null when the number of runs – based on an original concept of simplicial runs – in the resulting series of (spatial) signs is too small. Our tests are affine-invariant and have good robustness properties. In particular, they do not require any finite moment assumption. We derive their limiting null distribution, which establishes their asymptotic distribution-freeness. We study their finite-sample properties through Monte Carlo experiments, and conclude with some final comments. 
Keywords:  Antiranks; Central symmetry testing; Statistical depth; Multivariate runs; Spatial signs 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/76999&r=ecm 
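McWilliams' univariate building block is easy to state in code: sort the sample by absolute value and count runs in the resulting sign sequence. This is only the univariate procedure the paper generalizes; the multivariate depth-and-simplicial-runs version is not attempted here.

```python
import numpy as np

def mcwilliams_runs(x):
    """Number of sign runs after sorting the observations by absolute value
    (McWilliams 1990). Under symmetry about 0, the ordered signs behave
    like fair coin flips, so unusually few runs indicate asymmetry."""
    s = np.sign(np.asarray(x)[np.argsort(np.abs(x))])
    s = s[s != 0]                         # drop exact zeros
    return 1 + int(np.sum(s[1:] != s[:-1]))
```

Under the null, roughly n/2 runs are expected; under a location shift, the large observations share a common sign and the run count collapses, which is what makes the test consistent.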
By:  Chafik Bouhaddioui; Jean-Marie Dufour 
Abstract:  We propose a semiparametric approach for testing orthogonality and causality between two infinite-order cointegrated vector autoregressive IVAR(1) series. The procedures considered can be viewed as extensions of classical methods proposed by Haugh (1976, JASA) and Hong (1996, Biometrika) for testing independence between stationary univariate time series. The tests are based on the residuals of long autoregressions, hence allowing for computational simplicity, weak assumptions on the form of the underlying process, and a direct interpretation of the results in terms of innovations (or reduced-form shocks). The test statistics are standardized versions of the sum of weighted squares of residual cross-correlation matrices. The weights depend on a kernel function and a truncation parameter. The asymptotic distributions of the test statistics under the null hypothesis are derived, and consistency is established against fixed alternatives of serial cross-correlation of unknown form. Apart from standardization factors, the multivariate portmanteau statistic, which takes into account a fixed number of lags, can be viewed as a special case of our procedure based on the truncated uniform kernel. A simulation study is presented which indicates that the proposed tests have good size and power properties in finite samples. The proposed procedures are applied to study interactions between Canadian and American quarterly variables associated with monetary policy (money, interest rates, prices, aggregate output). The empirical results clearly allow us to reject the absence of correlation between the shocks in both countries, and indicate a unidirectional Granger causality running from the U.S. variables to the Canadian ones. 
Keywords:  Infinite-order cointegrated vector autoregressive process; independence; causality; residual cross-correlation; consistency; asymptotic power 
Date:  2011–02–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2011s23&r=ecm 
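A stripped-down univariate version of the testing idea – residuals from long autoregressions, then a standardized sum of squared residual cross-correlations – can be sketched as follows. The truncated-lag, chi-square-calibrated construction below is the textbook Haugh-type special case; the paper's kernel weighting, standardization, and cointegrated IVAR setting are not reproduced.

```python
import numpy as np

def long_ar_residuals(y, p):
    """OLS residuals from an AR(p) 'long autoregression'."""
    y = np.asarray(y, float)
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return y[p:] - X @ a

def haugh_statistic(u, v, m):
    """Haugh-type portmanteau: n times the sum of squared residual
    cross-correlations at lags -m..m; approximately chi2(2m+1) when
    the two series are independent."""
    n = min(len(u), len(v))
    u = u[-n:] - u[-n:].mean()
    v = v[-n:] - v[-n:].mean()
    su, sv = u.std(), v.std()
    stat = 0.0
    for k in range(-m, m + 1):
        if k >= 0:
            r = np.mean(u[k:] * v[:n - k]) / (su * sv)
        else:
            r = np.mean(u[:n + k] * v[-k:]) / (su * sv)
        stat += n * r * r
    return stat
```

Under independence the statistic stays near its chi-square mean of 2m+1; cross-correlation at any lag inflates it, which is the source of the tests' power.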
By:  Diaa Noureldin; Neil Shephard; Kevin Sheppard 
Abstract:  This paper introduces a new class of multivariate volatility models that utilizes high-frequency data. We discuss the models’ dynamics and highlight their differences from multivariate GARCH models. We also discuss their covariance targeting specification and provide closed-form formulas for multistep forecasts. Estimation and inference strategies are outlined. Empirical results suggest that the HEAVY model outperforms the multivariate GARCH model out-of-sample, with the gains being particularly significant at short forecast horizons. Forecast gains are obtained for both forecast variances and correlations. 
Keywords:  HEAVY model, GARCH, multivariate volatility, realized covariance, covariance targeting, multistep forecasting, Wishart distribution 
JEL:  C32 C52 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:533&r=ecm 
By:  Taisuke Otsu (Cowles Foundation, Yale University) 
Abstract:  This paper studies large deviation properties of the generalized method of moments and generalized empirical likelihood estimators for moment restriction models. We consider two cases for the data generating probability measure: the model assumption and local deviations from the model assumption. For both cases, we derive conditions where these estimators have exponentially small error probabilities for point estimation. 
Keywords:  Generalized method of moments, Empirical likelihood, Large deviations 
JEL:  C13 C14 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1783&r=ecm 
By:  Taisuke Otsu (Cowles Foundation, Yale University) 
Abstract:  This paper studies moderate deviation behaviors of the generalized method of moments and generalized empirical likelihood estimators for generalized estimating equations, where the number of equations can be larger than the number of unknown parameters. We consider two cases for the data generating probability measure: the model assumption and local contaminations or deviations from the model assumption. For both cases, we characterize the firstorder terms of the moderate deviation error probabilities of these estimators. Our moderate deviation analysis complements the existing literature of the local asymptotic analysis and misspecification analysis for estimating equations, and is useful to evaluate power and robust properties of statistical tests for estimating equations which typically involve some estimators for nuisance parameters. 
Keywords:  Generalized method of moments, Empirical likelihood, Moderate deviations, Large deviations 
JEL:  C13 C14 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1785&r=ecm 
By:  Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf; Maral Kichian 
Abstract:  We test for the presence of time-varying parameters (TVP) in the long-run dynamics of energy prices for oil, natural gas and coal, within a standard class of mean-reverting models. We also propose residual-based diagnostic tests and examine out-of-sample forecasts. In-sample LR tests support the TVP model for coal and gas but not for oil, though companion diagnostics suggest that the model is too restrictive to conclusively fit the data. Out-of-sample analysis suggests a random-walk specification for the oil price, and TVP models for both real-time forecasting in the case of gas and long-run forecasting in the case of coal. 
Keywords:  structural change, time-varying parameter, energy prices, coal, gas, crude oil, unidentified nuisance parameter, exact test, Monte Carlo test, Kalman filter, normality test 
JEL:  C22 C52 C53 Q40 
Date:  2011–02–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2011s22&r=ecm 
By:  Paulina Ilmonen; Davy Paindaveine 
Keywords:  Independent component analysis; Invariance principle; Local asymptotic normality; Rankbased inference; Semiparametric efficiency; Signed ranks 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/76045&r=ecm 
By:  Russo, Carlo; Sabbatini, Massimo 
Abstract:  In the absence of reliable a priori information, choosing the appropriate theoretical model to describe an industry’s behavior is a critical issue for empirical studies of market power. A wrong choice may result in model misspecification, and the conclusions of the empirical analysis may be driven by the wrong assumption about the behavioral model. This paper develops a methodology aimed at reducing the risk of misspecification bias. The approach is based on the sequential application of a sliced inverse regression (SIR) and a nonparametric Nadaraya-Watson regression (NW). The SIR-NW algorithm identifies the factors affecting pricing behavior in an industry and provides a nonparametric characterization of the function linking these variables to price. This information may be used to guide the choice of the model specification for a parametric estimation of market power. The SIR-NW algorithm is designed to complement the estimation of structural models of market behavior, rather than to replace it. The value of this methodology for empirical industrial organization studies lies in its data-driven approach that does not rely on prior knowledge of the industry. The method reverses the usual hypothesis-testing approach. Instead of first choosing the model based on a priori information and then testing whether it is compatible with the data, the econometrician selects a theoretical model based on the observed data. Thus, the methodology is particularly suited to cases where the researcher has no a priori information about the behavioral model, or little confidence in the information that is available. 
Keywords:  Agribusiness, Agricultural and Food Policy, Farm Management, Food Consumption/Nutrition/Food Safety, Research Methods/Statistical Methods 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:ags:iefi10:100478&r=ecm 
By:  Livan, Giacomo; Alfarano, Simone; Scalas, Enrico 
Abstract:  We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross-correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by Random Matrix Theory for such models. 
Keywords:  random matrix theory; financial econometrics; correlation matrix 
JEL:  C51 G11 C01 
Date:  2011–02–19 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:28964&r=ecm 
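The basic diagnostic behind such analyses is easy to reproduce: compare the eigenvalues of an empirical correlation matrix against the Marchenko-Pastur upper edge that bounds the pure-noise bulk. The sketch below uses a simulated one-factor market as an illustrative stand-in for the paper's data sets.

```python
import numpy as np

def correlation_eigenvalues(returns):
    """Eigenvalues of the empirical correlation matrix of a T x N
    return panel, sorted in decreasing order."""
    C = np.corrcoef(returns, rowvar=False)
    return np.sort(np.linalg.eigvalsh(C))[::-1]

def mp_upper_edge(n_assets, n_obs):
    """Marchenko-Pastur upper edge for i.i.d. data: eigenvalues above
    this bound are inconsistent with pure noise."""
    q = n_assets / n_obs
    return (1 + np.sqrt(q)) ** 2
```

Eigenvalues above the edge carry genuine cross-correlation structure (factors, clusters); the bulk below it is compatible with noise, which is exactly the distinction the paper's filtering procedure probes.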
By:  Knoth, Sven (Institute of Mathematics and Statistics, Helmut Schmidt University Hamburg); Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  Different change point models for AR(1) processes are reviewed. For some models, the change is in the distribution conditional on earlier observations; for others, the change is in the unconditional distribution. Some models include an observation before the first possible change time; others do not. Earlier and new CUSUM-type methods are given and minimax optimality is examined. For the conditional model with an observation before the possible change, there are sharp optimality results in the literature. The unconditional model with a possible change at (or before) the first observation is of interest for applications. We examine this case and derive new variants of four earlier suggestions. By numerical methods and Monte Carlo simulations it is demonstrated that the new variants dominate the original ones. However, none of the methods is uniformly minimax optimal. 
Keywords:  Autoregressive; Change point; Monitoring; Online detection 
JEL:  C10 
Date:  2011–02–10 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_004&r=ecm 
By:  Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry 
Abstract:  Success in accurately forecasting breaks requires that they are predictable from relevant information available at the forecast origin, using an appropriate model form, which can be selected and estimated before the break. To clarify the roles of these six necessary conditions, we distinguish between the information set for ‘normal forces’ and that for ‘break drivers’, then outline sources of potential information. Relevant nonlinear, dynamic models facing multiple breaks can have more candidate variables than observations, so we discuss automatic model selection. As a failure to accurately forecast breaks remains likely, we augment our strategy by modelling breaks during their progress, and consider robust forecasting devices. 
Keywords:  Economic forecasting, structural breaks, information sets, nonlinearity 
JEL:  C1 C53 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:535&r=ecm 
By:  Mario Forni; Luca Gambetti 
Abstract:  We derive necessary and sufficient conditions under which a set of variables is informationally sufficient, i.e. it contains enough information to estimate the structural shocks with a VAR model. Based on such conditions, we suggest a procedure to test for informational sufficiency. Moreover, we show how to amend the VAR if informational sufficiency is rejected. We apply our procedure to a VAR including TFP, unemployment and per-capita hours worked. We find that the three variables are not informationally sufficient. When adding the missing information, the effects of technology shocks change dramatically. 
Keywords:  Structural VAR, nonfundamentalness, information, FAVAR models, technology shocks. 
JEL:  C32 E32 E62 
Date:  2011–02–22 
URL:  http://d.repec.org/n?u=RePEc:aub:autbar:863.11&r=ecm 
By:  Huffaker, Ray 
Abstract:  In “dissipative” dynamical systems, variables evolve asymptotically toward low-dimensional “attractors” that define their dynamical properties. Unfortunately, real-world dynamical systems are generally too complex for us to directly observe these attractors. Fortunately, there is a method – phase space reconstruction – that can be used to indirectly detect attractors in real-world dynamical systems using time series data on a single variable (Broomhead and King, 1985; Schaffer and Kott, 1985; Kott et al., 1988; Williams, 1997). Armed with this knowledge, we can formulate more accurate and informative models of real-world dynamical systems. We begin by introducing the concept of phase space attractors within the context of a dynamic IS-LM model. We next demonstrate how phase space reconstruction faithfully reproduces one of the model’s attractors. Finally, we discuss how phase space reconstruction fits into a more general “diagnostic” modeling approach that relies on historical data to guide and test the deterministic formulation of theoretical dynamical models. As an example of diagnostic modeling, we test how closely the attractor generated by the dynamic IS-LM model visually approximates the attractor reconstructed from time series data on real-world interest rates. 
Keywords:  Agribusiness, Agricultural and Food Policy, Farm Management, Food Consumption/Nutrition/Food Safety, Research Methods/Statistical Methods 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:ags:iefi10:100455&r=ecm 
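Phase space reconstruction in its simplest form is the time-delay embedding: a scalar series is mapped into vectors of lagged copies of itself, and for a suitable dimension and delay the attractor's geometry is recovered up to a smooth change of coordinates. A minimal sketch:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: map a scalar series into dim-dimensional
    vectors (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
```

For a sine wave with the delay set to a quarter period, the 2-D embedding traces out a circle, the limit cycle that serves as the system's attractor; that is the kind of low-dimensional structure the diagnostic approach looks for in interest-rate data.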
By:  Silvio M. Duarte Queiros; Evaldo M. F. Curado; Fernando D. Nobre 
Abstract:  We introduce a generalisation of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation, which exhibits a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one where the local standard deviation exceeds it. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterised by values of the Hurst exponent greater than 0.8, which are ubiquitous features in complex systems. 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1102.4819&r=ecm 
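The two-regime idea can be caricatured in a few lines. The sketch below is one possible reading of the mechanism, with all functional choices (a plain ARCH(1) in the regular regime, a variance loading on the mean of past squared exceedances in the recall regime) and all parameter names being our own illustrative assumptions rather than the authors' specification.

```python
import numpy as np

def threshold_arch(n, omega, alpha, thresh, gamma, seed=0):
    """Loose two-regime ARCH sketch: below the threshold the variance
    follows a plain ARCH(1); after a return exceeds thresh in absolute
    value, the next variance also loads (with weight gamma) on the mean
    square of all past exceedances -- a crude 'memory of impacting
    events'. Parameters are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    memory = []                           # past squared exceedances
    s2 = omega / (1 - alpha)              # start at the ARCH(1) variance
    for t in range(n):
        r[t] = np.sqrt(s2) * rng.standard_normal()
        if abs(r[t]) > thresh:
            memory.append(r[t] ** 2)
        s2 = omega + alpha * r[t] ** 2
        if abs(r[t]) > thresh and memory:
            s2 += gamma * np.mean(memory)  # recall regime kicks in
    return r
```

Even this crude version produces fatter tails than the Gaussian and extra volatility clustering beyond plain ARCH(1), since large events permanently enter the memory term.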
By:  Mazzeu, Joao; Otuki, Thiago; Da Silva, Sergio 
Abstract:  We carry out a statistical physics analysis of the flash crash of May 6, 2010, using data from the Dow Jones Industrial Average index sampled at a one-minute frequency from September 1, 2009 to May 31, 2010. We evaluate the hypothesis of a non-Gaussian Lévy-stable distribution to model the data and pay particular attention to the distribution-tail behavior. We conclude that there is non-Gaussian scaling and thus that the flash crash cannot be considered an anomaly. From the study of tails, we find that the flash crash followed a power-law pattern outside the Lévy regime, which was not the inverse cubic law. Finally, we show that the time-dependent variance of the DJIA-index returns, not tracked by the Lévy-stable model, can be modeled in a straightforward manner by a GARCH(1,1) process. 
Keywords:  flash crash; econophysics; stable distribution; extreme events 
JEL:  C46 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:29138&r=ecm 
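The model invoked for the time-dependent variance is the standard GARCH(1,1); a minimal simulator makes its two signature properties – a finite unconditional variance and excess kurtosis (fat tails) even with Gaussian innovations – easy to verify. The parameter values in the test are illustrative, not estimates from the DJIA data.

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate returns r_t = sigma_t * z_t with Gaussian z_t and
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    s2 = omega / (1 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(s2) * rng.standard_normal()
        s2 = omega + alpha * r[t] ** 2 + beta * s2
    return r
```

With alpha + beta < 1 the unconditional variance is omega / (1 - alpha - beta), and the stochastic variance makes the marginal distribution of returns fat-tailed, which is why GARCH(1,1) can track the clustering that a fixed-parameter Lévy fit misses.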
By:  Xavier De Scheemaekere; Ariane Szafarz 
Abstract:  Bernoulli’s (1713) well-known Law of Large Numbers (LLN) establishes a legitimate one-way transition from mathematical probability to observed frequency. However, Bernoulli went one step further and abusively introduced the inverse proposition. Based on a careful analysis of Bernoulli’s original proof, this paper identifies this appealing, but illegitimate, inversion of the LLN as a strong driver of confusion among probabilists. Indeed, for more than two centuries this “inference fallacy” hampered the emergence of rigorous mathematical foundations for the theory of probability. In particular, the confusion pertaining to the status of statistical inference was detrimental to both Laplace’s approach based on “equipossibility” and Mises’ approach based on “collectives”. Only Kolmogorov’s (1933) axiomatization made it possible to adequately frame statistical inference within probability theory. This paper argues that a key factor in Kolmogorov’s success was his ability to overcome the inference fallacy. 
Keywords:  Probability; Bernoulli; Kolmogorov; Statistics; Law of Large Numbers 
JEL:  N01 B31 C65 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:sol:wpaper:2013/77259&r=ecm 
By:  Nicola Barban; Francesco Billari 
Abstract:  In this article we compare two techniques that are widely used in the analysis of life course trajectories: latent class analysis (LCA) and sequence analysis (SA). In particular, we focus on the use of these techniques as devices to obtain classes of individual life course trajectories. We first compare the consistency of the classifications obtained via the two techniques using an actual dataset on the life course trajectories of young adults. Then, we adopt a simulation approach to measure the ability of these two methods to correctly classify groups of life course trajectories when specific forms of "random" variability are introduced within prespecified classes in an artificial dataset. In order to do so, we introduce simulation operators that have a life course and/or observational meaning. Our results contribute, on the one hand, to outlining the usefulness and robustness of findings based on the classification of life course trajectories through LCA and SA and, on the other hand, to illuminating the potential pitfalls of actual applications of these techniques. 
Keywords:  sequence analysis; latent class analysis; life course analysis; categorical time series 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:don:donwpa:041&r=ecm 
By:  Brissimis, Sophocles; Migiakis, Petros 
Abstract:  The rational expectations hypothesis for survey and model-based inflation forecasts − from the Survey of Professional Forecasters and the Greenbook respectively − is examined by properly taking into account the persistence characteristics of the data. The finding of near-unit-root effects in the inflation and inflation expectations series motivates the use of a local-to-unity specification of the inflation process that enables us to test whether the data are generated by locally nonstationary or stationary processes. Thus, we test, rather than assume, stationarity of near-unit-root processes. In addition, we set out an empirical framework for assessing relationships between locally nonstationary series. In this context, we test the rational expectations hypothesis by allowing the coexistence of a long-run relationship obtained under the rational expectations restrictions with short-run "learning" effects. Our empirical results indicate that the rational expectations hypothesis holds in the long run, while forecasters adjust their expectations slowly in the short run. This finding lends support to the hypothesis that the persistence of inflation comes from the dynamics of expectations. 
Keywords:  Inflation; rational expectations; high persistence 
JEL:  C32 D84 C50 E31 E52 E37 
Date:  2010–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:29052&r=ecm 
By:  Mario Forni; Luca Gambetti; Luca Sala 
Abstract:  This paper uses a structural, large-dimensional factor model to evaluate the role of 'news' shocks (shocks with a delayed effect on productivity) in generating the business cycle. We find that (i) existing small-scale VECM models are affected by 'nonfundamentalness' and therefore fail to recover the correct shocks and impulse response functions; (ii) news shocks have a limited role in explaining the business cycle; (iii) their effects are in line with what is predicted by standard neoclassical theory; (iv) the bulk of business cycle fluctuations is explained by shocks unrelated to technology. 
Keywords:  structural factor model, news shocks, invertibility, fundamentalness. 
JEL:  C32 E32 E62 
Date:  2011–02–21 
URL:  http://d.repec.org/n?u=RePEc:aub:autbar:862.11&r=ecm 