nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒03‒05
twenty-one papers chosen by
Sune Karlsson
Orebro University

  1. Asymptotic Distributions for Some Quasi-Efficient Estimators in Echelon VARMA Models By Jean-Marie Dufour; Tarek Jouini
  2. Robust Sign-Based and Hodges-Lehmann Estimators in Linear Median Regressions with Heterogenous Serially Dependent Errors By Elise Coudin; Jean-Marie Dufour
  3. Depth-Based Runs Tests for Multivariate Central Symmetry By Christophe Ley; Davy Paindaveine
  4. Semiparametric Innovation-Based Tests of Orthogonality and Causality Between Two Infinite-Order Cointegrated Series with Application to Canada/US Monetary Interactions By Chafik Bouhaddioui; Jean-Marie Dufour
  5. Multivariate High-Frequency-Based Volatility (HEAVY) Models By Diaa Noureldin; Neil Shephard; Kevin Sheppard
  6. Large Deviations of Generalized Method of Moments and Empirical  Likelihood Estimators By Taisuke Otsu
  7. Moderate Deviations of Generalized Method of Moments and  Empirical Likelihood Estimators By Taisuke Otsu
  8. An Identification-Robust Test for Time-Varying Parameters in the Dynamics of Energy Prices By Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf; Maral Kichian
  9. Semiparametrically Efficient Inference Based on Signed Ranks in Symmetric Independent Component Models By Paulina Ilmonen; Davy Paindaveine
  10. Modelling Pricing Behavior with Weak A-Priori Information: Exploratory Approach By Russo, Carlo; Sabbatini, Massimo
  11. The fine structure of spectral properties for random correlation matrices: an application to financial markets By Livan, Giacomo; Alfarano, Simone; Scalas, Enrico
  12. Minimax Optimality of CUSUM for an Autoregressive Model By Knoth, Sven; Frisén, Marianne
  13. Forecasting breaks and forecasting during breaks By Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry
  14. Testing for Sufficient Information in Structural VARs By Mario Forni; Luca Gambetti
  15. Phase Space Reconstruction from Time Series Data: Where History Meets Theory By Huffaker, Ray
  16. Minding impacting events in a model of stochastic variance By Silvio M. Duarte Queiros; Evaldo M. F. Curado; Fernando D. Nobre
  17. The canonical econophysics approach to the flash crash of May 6, 2010 By Mazzeu, Joao; Otuki, Thiago; Da Silva, Sergio
  18. The Inference Fallacy From Bernoulli to Kolmogorov By Xavier De Scheemaekere; Ariane Szafarz
  19. Classifying life course trajectories: A comparison of latent class and sequence analysis By Nicola Barban; Francesco Billari
  20. Inflation persistence and the rationality of inflation expectations By Brissimis, Sophocles; Migiakis, Petros
  21. No News in Business Cycles By Mario Forni; Luca Gambetti; Luca Sala

  1. By: Jean-Marie Dufour; Tarek Jouini
    Abstract: We study two linear estimators for stationary invertible VARMA models in echelon form – to achieve identification (model parameter unicity) – with known Kronecker indices. Such linear estimators are much simpler to compute than the Gaussian maximum-likelihood estimators often proposed for such models, which require highly nonlinear optimization. The first estimator is an improved two-step estimator which can be interpreted as a generalized-least-squares extension of the two-step least-squares estimator studied in Dufour and Jouini (2005). The setup considered is also more general and allows for the presence of drift parameters. The second estimator is a new, relatively simple three-step linear estimator which is asymptotically equivalent to ML, hence asymptotically efficient, when the innovations of the process are Gaussian. The latter is based on modified approximate residuals which better take into account the truncation error associated with the approximate long autoregression used in the first step of the method. We show that both estimators are consistent and asymptotically normal under the assumption that the innovations are a strong white noise, possibly non-Gaussian. Explicit formulae for the asymptotic covariance matrices are provided. The proposed estimators are computationally simpler than earlier “efficient” estimators, and the distributional theory we supply does not rely on a Gaussian assumption, in contrast with Gaussian maximum likelihood and the estimators considered by Hannan and Kavalieris (1984b) and Reinsel, Basu and Yap (1992). We present simulation evidence which indicates that the proposed three-step estimator typically performs better in finite samples than the alternative multi-step linear estimators suggested by Hannan and Kavalieris (1984b), Reinsel et al. (1992), and Poskitt and Salau (1995). (An illustrative sketch of the two-step idea follows this entry.)
    Keywords: echelon form, linear estimation, generalized least squares (GLS), two-step linear estimation, three-step linear estimation, asymptotic efficiency, maximum likelihood (ML), stationary process, invertible process, Kronecker indices, simulation
    JEL: C13 C32
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-25&r=ecm
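    Sketch: The core of the two-step linear method described above can be illustrated in a simplified setting: fit a long autoregression by OLS to approximate the innovations, then use the lagged approximate residuals as regressors in a second OLS pass. The Python sketch below does this for a univariate ARMA(1,1) rather than an echelon-form VARMA, with made-up parameter values; it illustrates the idea under these simplifying assumptions and is not the authors' estimator.

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulate an ARMA(1,1): y_t = phi*y_{t-1} + e_t + theta*e_{t-1}
      n, phi, theta = 2000, 0.6, 0.4
      e = rng.standard_normal(n)
      y = np.zeros(n)
      for t in range(1, n):
          y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

      # Step 1: long autoregression by OLS to approximate the innovations.
      p = 20  # truncation lag of the long AR
      X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
      ehat = y[p:] - X @ np.linalg.lstsq(X, y[p:], rcond=None)[0]

      # Step 2: OLS of y_t on y_{t-1} and the lagged approximate residual.
      Z = np.column_stack([y[p:-1], ehat[:-1]])
      phi_hat, theta_hat = np.linalg.lstsq(Z, y[p + 1:], rcond=None)[0]
      print(phi_hat, theta_hat)  # roughly (0.6, 0.4)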
  2. By: Elise Coudin; Jean-Marie Dufour
    Abstract: We propose estimators for the parameters of a linear median regression without any assumption on the shape of the error distribution – including no condition on the existence of moments – allowing for heterogeneity (or heteroskedasticity) of unknown form, noncontinuous distributions, and very general serial dependence (linear or nonlinear) including GARCH-type and stochastic volatility of unknown order. The estimators follow from a reverse inference approach, based on the class of distribution-free sign tests proposed in Coudin and Dufour (2009, Econometrics J.) under a mediangale assumption. As a result, the estimators inherit strong robustness properties from their generating tests. Since the proposed estimators are based on maximizing a test statistic (or a p-value function) over different null hypotheses, they can be interpreted as Hodges-Lehmann-type (HL) estimators. It is easy to adapt the sign-based estimators to account for linear serial dependence. Both finite-sample and large-sample properties are established under weak regularity conditions. The proposed estimators are median unbiased (under symmetry and estimator unicity) and satisfy natural equivariance properties. Consistency and asymptotic normality are established without any condition on error moment existence, allowing for heterogeneity (or heteroskedasticity) of unknown form, noncontinuous distributions, and very general serial dependence (linear or nonlinear). These conditions are considerably weaker than those used to show corresponding results for LAD estimators. In a Monte Carlo study on bias and mean square error, we find that sign-based estimators perform better than LAD-type estimators, especially in heteroskedastic settings. The proposed procedures are applied to a trend model of the Standard and Poor’s composite price index, where disturbances are affected by both heavy tails (non-normality) and heteroskedasticity. (An illustrative sketch follows this entry.)
    Keywords: sign test, median regression, Hodges-Lehmann estimator, p-value, least absolute deviations, quantile regression, simultaneous inference, Monte Carlo tests, projection methods, nonnormality, heteroskedasticity, serial dependence, GARCH, stochastic volatility
    JEL: C13 C12 C14 C15
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-24&r=ecm
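    Sketch: The Hodges-Lehmann idea above (estimate the parameter by the value the sign test rejects least) can be illustrated in the simplest location model. The Python sketch below minimizes the absolute sign statistic over a grid; the data-generating process and grid are invented for illustration, and the paper's method covers general median regressions with serial dependence.

      import numpy as np

      rng = np.random.default_rng(1)

      # Location model y_i = beta0 + u_i with heavy-tailed, heteroskedastic
      # noise; sign-based inference needs no moment conditions at all.
      n, beta0 = 500, 2.0
      y = beta0 + rng.standard_cauchy(n) * (1 + 0.5 * rng.random(n))

      # Sign statistic S(b) = |sum_i sign(y_i - b)|: centered at zero under
      # the true median, so the HL-type estimator minimizes it over b.
      grid = np.linspace(np.quantile(y, 0.25), np.quantile(y, 0.75), 2001)
      S = np.array([np.abs(np.sign(y - b).sum()) for b in grid])
      print(grid[S.argmin()])  # essentially the sample median, near beta0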
  3. By: Christophe Ley; Davy Paindaveine
    Abstract: McWilliams (1990) introduced a nonparametric procedure based on runs for the problem of testing univariate symmetry about the origin (equivalently, about an arbitrary specified center). His procedure first reorders the observations according to their absolute values, then rejects the null when the number of runs in the resulting series of signs is too small. This test is universally consistent and enjoys nice robustness properties, but is unfortunately limited to the univariate setup. In this paper, we extend McWilliams’ procedure to tests of central symmetry in any dimension. The proposed tests first reorder the observations according to their statistical depth in a symmetrized version of the sample, then reject the null when the number of simplicial runs (an original concept introduced here) in the resulting series of (spatial) signs is too small. Our tests are affine-invariant and have good robustness properties. In particular, they do not require any finite moment assumption. We derive their limiting null distribution, which establishes their asymptotic distribution-freeness. We study their finite-sample properties through Monte Carlo experiments, and conclude with some final comments. (An illustrative sketch of the univariate runs test follows this entry.)
    Keywords: Anti-ranks; central symmetry testing; statistical depth; multivariate runs; spatial signs
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/76999&r=ecm
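    Sketch: The univariate McWilliams test that this paper generalizes is easy to state in code: sort the signs by increasing absolute value and count runs; under symmetry the sorted signs are i.i.d. fair coin flips, so R - 1 is Binomial(n - 1, 1/2). A minimal Python sketch with a normal approximation (the multivariate depth-based version is not attempted here):

      import numpy as np
      from scipy.stats import norm

      def mcwilliams_runs_test(x):
          # Reorder signs by increasing |x|; under symmetry about 0 the
          # reordered signs are i.i.d. fair coins, so few runs indicate
          # asymmetry (large |x| values sharing one sign).
          s = np.sign(x)[np.argsort(np.abs(x))]
          R = 1 + np.count_nonzero(s[1:] != s[:-1])
          n = len(x)
          z = (R - (n + 1) / 2) / np.sqrt((n - 1) / 4)
          return R, norm.cdf(z)  # small p-value => reject symmetry

      rng = np.random.default_rng(2)
      print(mcwilliams_runs_test(rng.standard_normal(300)))         # symmetric
      print(mcwilliams_runs_test(rng.exponential(size=300) - 0.5))  # asymmetric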
  4. By: Chafik Bouhaddioui; Jean-Marie Dufour
    Abstract: We propose a semiparametric approach for testing orthogonality and causality between two infinite-order cointegrated vector autoregressive IVAR(1) series. The procedures considered can be viewed as extensions of classical methods proposed by Haugh (1976, JASA) and Hong (1996, Biometrika) for testing independence between stationary univariate time series. The tests are based on the residuals of long autoregressions, hence allowing for computational simplicity, weak assumptions on the form of the underlying process, and a direct interpretation of the results in terms of innovations (or reduced-form shocks). The test statistics are standardized versions of the sum of weighted squares of residual cross-correlation matrices. The weights depend on a kernel function and a truncation parameter. The asymptotic distributions of the test statistics under the null hypothesis are derived, and consistency is established against fixed alternatives of serial cross-correlation of unknown form. Apart from standardization factors, the multivariate portmanteau statistic, which takes into account a fixed number of lags, can be viewed as a special case of our procedure based on the truncated uniform kernel. A simulation study is presented which indicates that the proposed tests have good size and power properties in finite samples. The proposed procedures are applied to study interactions between Canadian and American monetary quarterly variables associated with monetary policy (money, interest rates, prices, aggregate output). The empirical results clearly reject the absence of correlation between the shocks in both countries, and indicate a unidirectional Granger causality running from the U.S. variables to the Canadian ones. (An illustrative sketch follows this entry.)
    Keywords: Infinite-order cointegrated vector autoregressive process; independence; causality; residual cross-correlation; consistency; asymptotic power
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-23&r=ecm
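    Sketch: The building block of the test statistic is a sum of squared residual cross-correlations across lags, which the abstract notes reduces to a portmanteau statistic under the truncated uniform kernel. The Python sketch below implements that special case for two univariate series; the paper's setting is multivariate, with residuals from long autoregressions and general kernels, and the sample sizes here are illustrative.

      import numpy as np
      from scipy.stats import chi2

      def cross_corr(u, v, lag):
          # Sample cross-correlation corr(u_t, v_{t-lag}).
          n = len(u)
          u = (u - u.mean()) / u.std()
          v = (v - v.mean()) / v.std()
          if lag >= 0:
              return np.mean(u[lag:] * v[:n - lag])
          return cross_corr(v, u, -lag)

      def haugh_test(u, v, M):
          # Truncated-uniform-kernel case: n * sum_{|j|<=M} r_uv(j)^2 is
          # asymptotically chi-square with 2M+1 degrees of freedom.
          n = len(u)
          Q = n * sum(cross_corr(u, v, j) ** 2 for j in range(-M, M + 1))
          return Q, chi2.sf(Q, 2 * M + 1)

      rng = np.random.default_rng(3)
      u, v = rng.standard_normal(1000), rng.standard_normal(1000)
      print(haugh_test(u, v, M=10))            # independent: large p-value
      print(haugh_test(u, 0.5 * u + v, M=10))  # correlated: tiny p-value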
  5. By: Diaa Noureldin; Neil Shephard; Kevin Sheppard
    Abstract: This paper introduces a new class of multivariate volatility models that utilizes high-frequency data. We discuss the models’ dynamics and highlight their differences from multivariate GARCH models. We also discuss their covariance targeting specification and provide closed-form formulas for multi-step forecasts. Estimation and inference strategies are outlined. Empirical results suggest that the HEAVY model outperforms the multivariate GARCH model out-of-sample, with the gains being particularly significant at short forecast horizons. Forecast gains are obtained for both forecast variances and correlations. (An illustrative sketch of the variance recursion follows this entry.)
    Keywords: HEAVY model, GARCH, multivariate volatility, realized covariance, covariance targeting, multi-step forecasting, Wishart distribution
    JEL: C32 C52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:533&r=ecm
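    Sketch: In its scalar form, the HEAVY variance recursion replaces the squared daily return of a GARCH(1,1) update with a realized measure built from high-frequency data. A minimal Python sketch of that recursion, with made-up parameter values and a simulated stand-in for the realized measure; the paper's models are multivariate.

      import numpy as np

      def heavy_filter(rm, omega, alpha, beta):
          # Scalar HEAVY recursion driven by a realized measure rm:
          #   h_{t+1} = omega + alpha * rm_t + beta * h_t.
          # A GARCH(1,1) filter has the same form with rm_t -> r_t^2.
          h = np.empty(len(rm) + 1)
          h[0] = rm.mean()  # crude initialization at the unconditional level
          for t in range(len(rm)):
              h[t + 1] = omega + alpha * rm[t] + beta * h[t]
          return h

      rng = np.random.default_rng(4)
      rm = rng.gamma(shape=2.0, scale=0.5, size=250)  # stand-in realized variances
      h = heavy_filter(rm, omega=0.1, alpha=0.35, beta=0.55)
      print(h[-5:])  # one-step-ahead conditional variance path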
  6. By: Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: This paper studies large deviation properties of the generalized method of moments and generalized empirical likelihood estimators for moment restriction models. We consider two cases for the data generating probability measure: the model assumption and local deviations from the model assumption. For both cases, we derive conditions where these estimators have exponentially small error probabilities for point estimation.
    Keywords: Generalized method of moments, Empirical likelihood, Large deviations
    JEL: C13 C14
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1783&r=ecm
  7. By: Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: This paper studies moderate deviation behaviors of the generalized method of moments and generalized empirical likelihood estimators for generalized estimating equations, where the number of equations can be larger than the number of unknown parameters. We consider two cases for the data generating probability measure: the model assumption and local contaminations or deviations from the model assumption. For both cases, we characterize the first-order terms of the moderate deviation error probabilities of these estimators. Our moderate deviation analysis complements the existing literature of the local asymptotic analysis and misspecification analysis for estimating equations, and is useful to evaluate power and robust properties of statistical tests for estimating equations which typically involve some estimators for nuisance parameters.
    Keywords: Generalized method of moments, Empirical likelihood, Moderate deviations, Large deviations
    JEL: C13 C14
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1785&r=ecm
  8. By: Marie-Claude Beaulieu; Jean-Marie Dufour; Lynda Khalaf; Maral Kichian
    Abstract: We test for the presence of time-varying parameters (TVP) in the long-run dynamics of energy prices for oil, natural gas and coal, within a standard class of mean-reverting models. We also propose residual-based diagnostic tests and examine out-of-sample forecasts. In-sample LR tests support the TVP model for coal and gas but not for oil, though companion diagnostics suggest that the model is too restrictive to conclusively fit the data. Out-of-sample analysis suggests a random-walk specification for the oil price, and TVP models for both real-time forecasting in the case of gas and long-run forecasting in the case of coal. (An illustrative sketch of TVP filtering follows this entry.)
    Keywords: structural change, time-varying parameter, energy prices, coal, gas, crude oil, unidentified nuisance parameter, exact test, Monte Carlo test, Kalman filter, normality test
    JEL: C22 C52 C53 Q40
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-22&r=ecm
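    Sketch: TVP models of the kind tested here are typically estimated with the Kalman filter (mentioned in the keywords). The Python sketch below filters a single random-walk regression coefficient; the state-space form, variances and data are illustrative stand-ins, not the paper's specification.

      import numpy as np

      def tvp_kalman(y, x, sig_e2, sig_w2):
          # Kalman filter for y_t = x_t * b_t + e_t, b_t = b_{t-1} + w_t,
          # with scalar state b_t, Var(e_t) = sig_e2, Var(w_t) = sig_w2.
          n = len(y)
          b = np.zeros(n)   # filtered state estimates
          P = 1e4           # diffuse-ish initial state variance
          b_prev = 0.0
          for t in range(n):
              P_pred = P + sig_w2                    # predict
              f = x[t] ** 2 * P_pred + sig_e2        # innovation variance
              k = P_pred * x[t] / f                  # Kalman gain
              b_prev = b_prev + k * (y[t] - x[t] * b_prev)
              P = P_pred * (1 - k * x[t])
              b[t] = b_prev
          return b

      rng = np.random.default_rng(5)
      n = 400
      x = rng.standard_normal(n)
      b_true = np.cumsum(0.05 * rng.standard_normal(n))  # drifting slope
      y = x * b_true + 0.3 * rng.standard_normal(n)
      b_filt = tvp_kalman(y, x, sig_e2=0.09, sig_w2=0.05 ** 2)
      print(np.corrcoef(b_true, b_filt)[0, 1])  # filtered path tracks the drift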
  9. By: Paulina Ilmonen; Davy Paindaveine
    Keywords: Independent component analysis; Invariance principle; Local asymptotic normality; Rank-based inference; Semiparametric efficiency; Signed ranks
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/76045&r=ecm
  10. By: Russo, Carlo; Sabbatini, Massimo
    Abstract: In the absence of reliable a priori information, choosing the appropriate theoretical model to describe an industry’s behavior is a critical issue for empirical studies about market power. A wrong choice may result in model misspecification, and the conclusions of the empirical analysis may be driven by the wrong assumption about the behavioral model. This paper develops a methodology aimed at reducing the risk of misspecification bias. The approach is based on the sequential application of a sliced inverse regression (SIR) and a nonparametric Nadaraya-Watson regression (NW). The SIR-NW algorithm identifies the factors affecting pricing behavior in an industry and provides a nonparametric characterization of the function linking these variables to price. This information may be used to guide the choice of the model specification for a parametric estimation of market power. The SIR-NW algorithm is designed to complement the estimation of structural models of market behavior, rather than to replace it. The value of this methodology for empirical industrial organization studies lies in its data-driven approach that does not rely on prior knowledge of the industry. The method reverses the usual hypothesis-testing approach. Instead of first choosing the model based on a priori information and then testing whether it is compatible with the data, the econometrician selects a theoretical model based on the observed data. Thus, the methodology is particularly suited for those cases where the researcher has no a priori information about the behavioral model, or little confidence in the information that is available. (An illustrative sketch of the NW step follows this entry.)
    Keywords: Agribusiness, Agricultural and Food Policy, Farm Management, Food Consumption/Nutrition/Food Safety, Research Methods/Statistical Methods
    Date: 2010–10
    URL: http://d.repec.org/n?u=RePEc:ags:iefi10:100478&r=ecm
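    Sketch: The second stage of the SIR-NW algorithm is a standard Nadaraya-Watson kernel regression. A minimal Python version with a Gaussian kernel follows; the data-generating function, bandwidth and sample are invented for illustration, and the SIR stage (estimating the index directions) is not shown.

      import numpy as np

      def nadaraya_watson(x_grid, x, y, h):
          # m(x0) = sum_i K((x_i - x0)/h) y_i / sum_i K((x_i - x0)/h),
          # here with a Gaussian kernel K and bandwidth h.
          out = np.empty(len(x_grid))
          for i, x0 in enumerate(x_grid):
              w = np.exp(-0.5 * ((x - x0) / h) ** 2)
              out[i] = np.dot(w, y) / w.sum()
          return out

      rng = np.random.default_rng(6)
      x = rng.uniform(0, 3, 400)
      y = np.sin(2 * x) + 0.2 * rng.standard_normal(400)  # unknown pricing rule
      grid = np.linspace(0.2, 2.8, 9)
      print(np.round(nadaraya_watson(grid, x, y, h=0.15), 2))
      print(np.round(np.sin(2 * grid), 2))  # close to the true function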
  11. By: Livan, Giacomo; Alfarano, Simone; Scalas, Enrico
    Abstract: We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross-correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by Random Matrix Theory for such models. (An illustrative sketch follows this entry.)
    Keywords: random matrix theory; financial econometrics; correlation matrix
    JEL: C51 G11 C01
    Date: 2011–02–19
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28964&r=ecm
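    Sketch: A standard first step in this literature is to compare the eigenvalue spectrum of an empirical correlation matrix with the Marchenko-Pastur bulk predicted by Random Matrix Theory for pure noise. The Python sketch below does this on simulated data and shows how a common factor pushes one eigenvalue far outside the bulk; the dimensions and factor loading are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)

      # T observations of N uncorrelated "returns": the pure-noise null model.
      N, T = 100, 500
      C = np.corrcoef(rng.standard_normal((T, N)), rowvar=False)
      eig = np.sort(np.linalg.eigvalsh(C))

      # Marchenko-Pastur support for noise eigenvalues, aspect ratio q = N/T:
      q = N / T
      lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
      print(lam_min, lam_max)  # theoretical bulk edges
      print(eig[0], eig[-1])   # empirical extremes fall near those edges

      # With a common "market" factor, a large eigenvalue splits off the bulk.
      R2 = 0.4 * rng.standard_normal((T, 1)) + rng.standard_normal((T, N))
      eig2 = np.sort(np.linalg.eigvalsh(np.corrcoef(R2, rowvar=False)))
      print(eig2[-1])          # far above lam_max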
  12. By: Knoth, Sven (Institute of Mathematics and Statistics, Helmut Schmidt University Hamburg); Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: Different change point models for AR(1) processes are reviewed. For some models, the change is in the distribution conditional on earlier observations. For others, the change is in the unconditional distribution. Some models include an observation before the first possible change time — others do not. Earlier and new CUSUM type methods are given and minimax optimality is examined. For the conditional model with an observation before the possible change, sharp optimality results exist in the literature. The unconditional model with possible change at (or before) the first observation is of interest for applications. We examined this case and derived new variants of four earlier suggestions. By numerical methods and Monte Carlo simulations it was demonstrated that the new variants dominate the original ones. However, none of the methods is uniformly minimax optimal. (An illustrative sketch of a basic CUSUM follows this entry.)
    Keywords: Autoregressive; Change point; Monitoring; Online detection
    JEL: C10
    Date: 2011–02–10
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_004&r=ecm
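    Sketch: The basic one-sided CUSUM recursion that these variants refine is S_t = max(0, S_{t-1} + x_t - k), with an alarm when S_t exceeds h. The Python sketch below runs it on one-step prediction errors of a simulated AR(1) with a level shift; the reference value k, threshold h and shift size are illustrative choices, and this is not one of the paper's new variants.

      import numpy as np

      def cusum(x, k, h):
          # One-sided CUSUM: S_t = max(0, S_{t-1} + x_t - k); alarm if S_t > h.
          S = 0.0
          for t, xt in enumerate(x):
              S = max(0.0, S + xt - k)
              if S > h:
                  return t  # first alarm time
          return None

      rng = np.random.default_rng(8)
      n, phi, tau, delta = 300, 0.5, 150, 2.0  # level shift delta at time tau
      e = rng.standard_normal(n)
      z = np.zeros(n)
      for t in range(1, n):
          z[t] = phi * z[t - 1] + e[t]
      x = z + np.where(np.arange(n) >= tau, delta, 0.0)

      # Monitor one-step prediction errors x_t - phi*x_{t-1}; after the change
      # their mean jumps from 0 to roughly delta*(1 - phi) = 1.
      resid = x[1:] - phi * x[:-1]
      print(cusum(resid, k=0.5, h=5.0))  # expected to alarm soon after t = 150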
  13. By: Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry
    Abstract: Success in accurately forecasting breaks requires that they are predictable from relevant information available at the forecast origin using an appropriate model form, which can be selected and estimated before the break. To clarify the roles of these six necessary conditions, we distinguish between the information set for ‘normal forces’ and the one for ‘break drivers’, then outline sources of potential information. Relevant non-linear, dynamic models facing multiple breaks can have more candidate variables than observations, so we discuss automatic model selection. As a failure to accurately forecast breaks remains likely, we augment our strategy by modelling breaks during their progress, and consider robust forecasting devices.
    Keywords: Economic forecasting, structural breaks, information sets, non-linearity
    JEL: C1 C53
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:535&r=ecm
  14. By: Mario Forni; Luca Gambetti
    Abstract: We derive necessary and sufficient conditions under which a set of variables is informationally sufficient, i.e. it contains enough information to estimate the structural shocks with a VAR model. Based on such conditions, we suggest a procedure to test for informational sufficiency. Moreover, we show how to amend the VAR if informational sufficiency is rejected. We apply our procedure to a VAR including TFP, unemployment and per-capita hours worked. We find that the three variables are not informationally sufficient. When adding missing information, the effects of technology shocks change dramatically.
    Keywords: Structural VAR, non-fundamentalness, information, FAVAR models, technology shocks.
    JEL: C32 E32 E62
    Date: 2011–02–22
    URL: http://d.repec.org/n?u=RePEc:aub:autbar:863.11&r=ecm
  15. By: Huffaker, Ray
    Abstract: In “dissipative” dynamical systems, variables evolve asymptotically toward low-dimensional “attractors” that define their dynamical properties. Unfortunately, real-world dynamical systems are generally too complex for us to directly observe these attractors. Fortunately, there is a method, “phase space reconstruction”, that can be used to indirectly detect attractors in real-world dynamical systems using time series data on a single variable (Broomhead and King, 1985; Schaffer and Kott, 1985; Kott et al., 1988; Williams, 1997). Armed with this knowledge, we can formulate more accurate and informative models of real-world dynamical systems. We begin by introducing the concept of phase space attractors within the context of a dynamic ISLM model. We next demonstrate how phase space reconstruction faithfully reproduces one of the model’s attractors. Finally, we discuss how phase space reconstruction fits into a more general “diagnostic” modeling approach that relies on historical data to guide and test the deterministic formulation of theoretical dynamical models. As an example of diagnostic modeling, we test how closely the attractor generated by the dynamic ISLM model visually approximates the attractor reconstructed from time series data on real-world interest rates. (An illustrative sketch of delay embedding follows this entry.)
    Keywords: Agribusiness, Agricultural and Food Policy, Farm Management, Food Consumption/Nutrition/Food Safety, Research Methods/Statistical Methods
    Date: 2010–10
    URL: http://d.repec.org/n?u=RePEc:ags:iefi10:100455&r=ecm
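    Sketch: Phase space reconstruction is usually done by delay embedding (Takens' theorem): map a scalar series x_t to vectors (x_t, x_{t+tau}, ..., x_{t+(m-1)tau}). A minimal Python sketch, using a sine wave whose reconstructed attractor is a closed loop; the embedding dimension and delay are illustrative choices.

      import numpy as np

      def delay_embed(x, dim, tau):
          # Row t of the output is (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}).
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      # A noiseless sine wave reconstructs to a closed loop (a limit cycle);
      # a chaotic series would trace out a more complex attractor.
      t = np.linspace(0, 20 * np.pi, 2000)
      emb = delay_embed(np.sin(t), dim=2, tau=25)
      print(emb.shape)  # (1975, 2): points on a closed curve in the plane
      print(emb[:3])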
  16. By: Silvio M. Duarte Queiros; Evaldo M. F. Curado; Fernando D. Nobre
    Abstract: We introduce a generalisation of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation outnumbers it. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterised by values of the Hurst exponent greater than 0.8, which are ubiquitous features in complex systems. (An illustrative sketch follows this entry.)
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.4819&r=ecm
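    Sketch: The description above suggests a two-regime ARCH(1) in which large past shocks feed a memory term. The Python sketch below is a loose reading of that idea with invented parameter values and an invented recall rule (the mean square of past over-threshold shocks); it is not the authors' exact specification.

      import numpy as np

      rng = np.random.default_rng(9)

      # Simplified two-regime ARCH(1): below the threshold the variance
      # follows the usual recursion; above it, the update also recalls past
      # "impacting events" (shocks that exceeded the threshold).
      n, a0, a1, thr, lam = 5000, 0.2, 0.6, 1.5, 0.3
      x = np.zeros(n)
      sig2 = np.ones(n)
      memory = []  # past over-threshold shocks
      for t in range(1, n):
          sig2[t] = a0 + a1 * x[t - 1] ** 2
          if abs(x[t - 1]) > thr:
              if memory:
                  sig2[t] += lam * np.mean(np.square(memory))  # recall term
              memory.append(x[t - 1])
          x[t] = np.sqrt(sig2[t]) * rng.standard_normal()

      # Fraction of |x| beyond 3 standard deviations: typically well above
      # the Gaussian 0.27%, i.e. fat tails.
      print(np.mean(np.abs(x) > 3 * x.std()))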
  17. By: Mazzeu, Joao; Otuki, Thiago; Da Silva, Sergio
    Abstract: We carry out a statistical physics analysis of the flash crash of May 6, 2010 using data from the Dow Jones Industrial Average index sampled at a one-minute frequency from September 1, 2009 to May 31, 2010. We evaluate the hypothesis of a non-Gaussian Levy-stable distribution to model the data and pay particular attention to the distribution-tail behavior. We conclude that there is non-Gaussian scaling and thus that the flash crash cannot be considered an anomaly. From the study of tails, we find that the flash crash followed a power-law pattern outside the Levy regime, which was not the inverse cubic law. Finally, we show that the time-dependent variance of the DJIA-index returns, not captured by the Levy-stable model, can be modeled in a straightforward manner by a GARCH(1,1) process. (An illustrative sketch of tail estimation follows this entry.)
    Keywords: flash crash; econophysics; stable distribution; extreme events
    JEL: C46
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:29138&r=ecm
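    Sketch: Tail exponents of the kind discussed here are commonly estimated with the Hill estimator: a Levy-stable regime requires a tail index below 2, while the "inverse cubic law" corresponds to an index near 3. A minimal Python sketch on simulated Student-t(3) returns; the data and the choice of k are illustrative, not the paper's DJIA analysis.

      import numpy as np

      def hill(x, k):
          # Hill estimator of the tail exponent alpha from the k largest
          # absolute values: 1/alpha = mean of log(x_(i) / x_(k)).
          s = np.sort(np.abs(x))[::-1]
          return 1.0 / np.mean(np.log(s[:k] / s[k]))

      rng = np.random.default_rng(10)
      # Student-t(3) returns have a power-law tail with exponent 3 (the
      # inverse cubic law); a Levy-stable law would require alpha < 2.
      r = rng.standard_t(df=3, size=20000)
      print(hill(r, k=500))  # roughly 3 => outside the Levy-stable regime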
  18. By: Xavier De Scheemaekere; Ariane Szafarz
    Abstract: Bernoulli’s (1713) well-known Law of Large Numbers (LLN) establishes a legitimate one-way transition from mathematical probability to observed frequency. However, Bernoulli went one step further and abusively introduced the inverse proposition. Based on a careful analysis of Bernoulli’s original proof, this paper identifies this appealing, but illegitimate, inversion of LLN as a strong driver of confusion among probabilists. Indeed, for more than two centuries this “inference fallacy” hampered the emergence of rigorous mathematical foundations for the theory of probability. In particular, the confusion pertaining to the status of statistical inference was detrimental to both Laplace’s approach based on “equipossibility” and Mises’ approach based on “collectives”. Only Kolmogorov’s (1933) axiomatization made it possible to adequately frame statistical inference within probability theory. This paper argues that a key factor in Kolmogorov’s success has been his ability to overcome the inference fallacy.
    Keywords: Probability; Bernoulli; Kolmogorov; Statistics; Law of Large Numbers
    JEL: N01 B31 C65
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:sol:wpaper:2013/77259&r=ecm
  19. By: Nicola Barban; Francesco Billari
    Abstract: In this article we compare two techniques that are widely used in the analysis of life course trajectories, latent class analysis (LCA) and sequence analysis (SA). In particular, we focus on the use of these techniques as devices to obtain classes of individual life course trajectories. We first compare the consistency of the classification obtained via the two techniques using an actual dataset on the life course trajectories of young adults. Then, we adopt a simulation approach to measure the ability of these two methods to correctly classify groups of life course trajectories when specific forms of "random" variability are introduced within pre-specified classes in an artificial dataset. In order to do so, we introduce simulation operators that have a life course and/or observational meaning. Our results contribute, on the one hand, to outlining the usefulness and robustness of findings based on the classification of life course trajectories through LCA and SA, and, on the other hand, to illuminating the potential pitfalls of actual applications of these techniques.
    Keywords: sequence analysis; latent class analysis; life course analysis; categorical time series
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:don:donwpa:041&r=ecm
  20. By: Brissimis, Sophocles; Migiakis, Petros
    Abstract: The rational expectations hypothesis for survey and model-based inflation forecasts (from the Survey of Professional Forecasters and the Greenbook, respectively) is examined by properly taking into account the persistence characteristics of the data. The finding of near-unit-root effects in the inflation and inflation expectations series motivates the use of a local-to-unity specification of the inflation process that enables us to test whether the data are generated by locally non-stationary or stationary processes. Thus, we test, rather than assume, stationarity of near-unit-root processes. In addition, we set out an empirical framework for assessing relationships between locally non-stationary series. In this context, we test the rational expectations hypothesis by allowing the co-existence of a long-run relationship obtained under the rational expectations restrictions with short-run "learning" effects. Our empirical results indicate that the rational expectations hypothesis holds in the long run, while forecasters adjust their expectations slowly in the short run. This finding lends support to the hypothesis that the persistence of inflation comes from the dynamics of expectations.
    Keywords: Inflation; rational expectations; high persistence
    JEL: C32 D84 C50 E31 E52 E37
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:29052&r=ecm
  21. By: Mario Forni; Luca Gambetti; Luca Sala
    Abstract: This paper uses a structural, large dimensional factor model to evaluate the role of 'news' shocks (shocks with a delayed effect on productivity) in generating the business cycle. We find that (i) existing small-scale VECM models are affected by 'non-fundamentalness' and therefore fail to recover the correct shock and impulse response functions; (ii) news shocks have a limited role in explaining the business cycle; (iii) their effects are in line with what is predicted by standard neoclassical theory; (iv) the bulk of business cycle fluctuations is explained by shocks unrelated to technology.
    Keywords: structural factor model, news shocks, invertibility, fundamentalness.
    JEL: C32 E32 E62
    Date: 2011–02–21
    URL: http://d.repec.org/n?u=RePEc:aub:autbar:862.11&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.