nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒07‒23
23 papers chosen by
Sune Karlsson
Orebro University

  1. Goodness-of-fit tests based on series estimators in nonparametric instrumental regression By Breunig, Christoph
  2. Inference for Income Distributions Using Grouped Data By Gholamreza Hajargasht, William E. Griffiths, Joseph Brice, D.S. Prasada Rao, Duangkamon Chotikapanich
  3. GMM Estimation of Mixtures from Grouped Data By William E. Griffiths and Gholamreza Hajargasht
  4. Testing macroeconomic models by indirect inference on unfiltered data By Meenagh, David; Minford, Patrick; Wickens, Michael R.
  5. Modeling Multivariate Extreme Events Using Self-Exciting Point Processes By Oliver Grothe; Volodymyr Korniichuk; Hans Manner
  6. Testing DSGE models by Indirect inference and other methods: some Monte Carlo experiments By Le, Vo Phuong Mai; Meenagh, David; Minford, Patrick; Wickens, Michael R.
  7. Residual test for cointegration with GLS detrended data By Pierre Perron; Gabriel Rodriguez
  8. A higher order correlation unscented Kalman filter By Oliver Grothe
  9. Maximum Likelihood Characterization of Distributions By Mitia Duerinckx; Christophe Ley; Yves-Caoimhin Swan
  10. Testing Causality Between Two Vectors in Multivariate GARCH Models By Tomasz Wozniak
  11. Copula-Based Dynamic Conditional Correlation Multiplicative Error Processes By Taras Bodnar; Nikolaus Hautsch
  12. Generating Random Optimising Choices By Jan Heufer
  13. GROUPED PATTERNS OF HETEROGENEITY IN PANEL DATA By Stéphane Bonhomme; Elena Manresa
  14. Vine Constructions of Levy Copulas By Oliver Grothe; Stephan Nicklas
  15. Measurement error and imputation of consumption in survey data By Rodolfo G. Campos; Ilina Reggio
  16. Selecting predictors by using Bayesian model averaging in bridge models By Lorenzo Bencivelli; Massimiliano Marcellino; Gianluca Moretti
  17. Realized Volatility and Change of Regimes By Giampiero M. Gallo; Edoardo Otranto
  18. A new approach to unbiased estimation for SDE's By Chang-han Rhee; Peter W. Glynn
  19. Detecting Spatial Clustering Using a Firm-Level Cluster Index By Tobias Scholl; Thomas Brenner
  20. A skew and leptokurtic distribution with polynomial tails and characterizing functions in closed form By Fischer, Matthias
  21. Inference for Systems of Stochastic Differential Equations from Discretely Sampled data: A Numerical Maximum Likelihood Approach By Thomas Lux
  22. Calibrated spline estimation of detailed fertility schedules from abridged data By Carl Schmertmann
  23. On ABCs (and Ds) of VAR representations of DSGE models By Massimo Franchi; Paolo Paruolo

  1. By: Breunig, Christoph
    Abstract: This paper proposes several tests of restricted specification in nonparametric instrumental regression. Based on series estimators, test statistics are established that allow for tests of the general model against a parametric or nonparametric specification as well as a test of exogeneity of the vector of regressors. The tests are asymptotically normally distributed under correct specification and consistent against any alternative model. Under a sequence of local alternative hypotheses, the asymptotic distribution of the tests is derived. Moreover, uniform consistency is established over a class of alternatives whose distance to the null hypothesis shrinks appropriately as the sample size increases.
    JEL: C12 C14
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:mnh:wpaper:32111&r=ecm
  2. By: Gholamreza Hajargasht, William E. Griffiths, Joseph Brice, D.S. Prasada Rao, Duangkamon Chotikapanich
    Abstract: We develop a general approach to estimation and inference for income distributions using grouped or aggregate data that are typically available in the form of population shares and class mean incomes, with unknown group bounds. Generic moment conditions and an optimal weight matrix that can be used for GMM estimation of any parametric income distribution are derived. Our derivation of the weight matrix and its inverse allows us to express the seemingly complex GMM objective function in a relatively simple form that facilitates estimation. We show that our proposed approach, which incorporates information on class means as well as population proportions, is more efficient than maximum likelihood estimation of the multinomial distribution that uses only population proportions. In contrast to the earlier work of Chotikapanich et al. (2007, 2012), which did not specify a formal GMM framework, did not provide methodology for obtaining standard errors, and restricted the analysis to the beta-2 distribution, we provide standard errors for estimated parameters and relevant functions of them, such as inequality and poverty measures, and we provide methodology for all distributions. A test statistic for testing the adequacy of a distribution is proposed. Using eight countries/regions for the year 2005, we show how the methodology can be applied to estimate the parameters of the generalized beta distribution of the second kind, and its special-case distributions, the beta-2, Singh-Maddala, Dagum, generalized gamma and lognormal distributions. We test the adequacy of each distribution and compare predicted and actual income shares, where the number of groups used for prediction can differ from the number used in estimation. Estimates and standard errors for inequality and poverty measures are provided.
    Keywords: GMM; Generalized beta distribution; Inequality and poverty.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1140&r=ecm
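    A sketch of what such moment conditions look like (illustrative notation, mine rather than the authors'): with J income classes with unknown bounds z_0 < z_1 < ... < z_J, a parametric cdf F(y; \theta) and its first-moment distribution function F_1, the observed population shares and class means identify \theta through
      F_1(z; \theta) = \frac{1}{\mu(\theta)} \int_0^{z} y\, f(y; \theta)\, dy, \qquad \mu(\theta) = E[Y],
      E\left[ \mathbf{1}\{z_{j-1} < Y \le z_j\} \right] = F(z_j; \theta) - F(z_{j-1}; \theta),
      E\left[ Y\, \mathbf{1}\{z_{j-1} < Y \le z_j\} \right] = \mu(\theta)\left( F_1(z_j; \theta) - F_1(z_{j-1}; \theta) \right), \qquad j = 1, \dots, J.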
  3. By: William E. Griffiths and Gholamreza Hajargasht
    Abstract: We show how the generalized method of moments (GMM) framework developed in Hajargasht et al. (2012) for estimating income distributions from grouped data can be adapted for estimating mixtures. This approach can be used to estimate a mixture of any distributions where the moments and moment distribution functions of the mixture components can be expressed in terms of the parameters of those components. The required expressions for mixtures of lognormal and gamma densities are provided; in our empirical work we focus on estimation of mixtures of lognormal distributions. Two- and three-component lognormal mixtures are estimated for the income distributions of rural and urban China, rural and urban India, Pakistan, Russia, South Africa, Brazil and Indonesia. Their performance, in terms of goodness-of-fit and validity of moment conditions, is compared with that of a generalized beta (GB2) distribution. We find that the three-component lognormal mixture always outperforms the GB2 distribution, but the two-component mixture does not. For Brazil and Indonesia we have individual (ungrouped) observations, making it possible to compare maximum likelihood estimation of the mixtures from the complete set of individual observations with GMM estimates obtained after grouping the data. Estimates from both procedures are found to be comparable, lending support to the usefulness of the GMM approach.
    Keywords: Lognormal distribution, Generalized beta distribution, Inequality measures
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1148&r=ecm
  4. By: Meenagh, David; Minford, Patrick; Wickens, Michael R.
    Abstract: We extend the method of indirect inference testing to data that are not filtered and so may be non-stationary. We apply the method to an open economy real business cycle model on UK data. We review the method using a Monte Carlo experiment and find that it performs accurately and has good power.
    Keywords: Bootstrap; DSGE; indirect inference; Monte Carlo; VECM
    JEL: C12 C32 C52 E1
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9058&r=ecm
  5. By: Oliver Grothe (Department of Economic and Social Statistics, University of Cologne); Volodymyr Korniichuk (CGS, University of Cologne); Hans Manner (Department of Economic and Social Statistics, University of Cologne)
    Abstract: We propose a new model that can capture the typical features of multivariate extreme events observed in financial time series, namely clustering behavior in magnitudes and arrival times of multivariate extreme events, and time-varying dependence. The model is developed in the framework of the peaks-over-threshold approach in extreme value theory and relies on a Poisson process with self-exciting intensity. We discuss the properties of the model, treat its estimation, deal with testing goodness-of-fit, and develop a simulation algorithm. The model is applied to return data of two stock markets and four major European banks.
    Keywords: Time Series, Peaks Over Threshold, Hawkes Processes, Extreme Value Theory
    JEL: C32 C51 C58 G15
    Date: 2012–06–27
    URL: http://d.repec.org/n?u=RePEc:cgr:cgsser:03-06&r=ecm
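    For reference, the self-exciting (Hawkes) intensity at the core of such models takes, with the common exponential kernel (one standard choice, not necessarily the authors' exact specification), the form
      \lambda(t) = \mu + \sum_{t_i < t} \alpha\, e^{-\beta (t - t_i)}, \qquad \mu > 0,\ 0 \le \alpha < \beta,
    so each extreme event at time t_i raises the arrival intensity of further events, and the branching ratio \alpha/\beta < 1 keeps the process stationary.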
  6. By: Le, Vo Phuong Mai; Meenagh, David; Minford, Patrick; Wickens, Michael R.
    Abstract: Using Monte Carlo experiments, we examine the performance of indirect inference tests of DSGE models, usually versions of the Smets-Wouters New Keynesian model of the US postwar period. We compare these with tests based on direct inference (using the likelihood ratio) and on the Del Negro-Schorfheide DSGE-VAR weight. We find that the power of all three tests is substantial, so that a false model will tend to be rejected by all three; but the power of the indirect inference tests is by far the greatest, necessitating re-estimation by indirect inference to ensure that the model is tested in its fullest sense.
    Keywords: Bootstrap; DSGE; DSGE-VAR weight; indirect inference; likelihood ratio; New Classical; New Keynesian; Wald statistic
    JEL: C12 C32 C52 E1
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9056&r=ecm
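    The indirect inference Wald test at the center of these experiments has the generic form (sketch in illustrative notation): with \hat{a} denoting auxiliary-model (e.g., VAR) parameter estimates on the actual data, and \bar{a} and \hat{\Omega} the mean and covariance of the same estimates across bootstrap simulations of the DSGE model,
      W = (\hat{a} - \bar{a})'\, \hat{\Omega}^{-1}\, (\hat{a} - \bar{a}),
    and the model is rejected when W is large relative to its simulated distribution.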
  7. By: Pierre Perron (Boston University); Gabriel Rodriguez (Departamento de Economía - Pontificia Universidad Católica del Perú)
    Abstract: We analyze different residual-based tests for the null of no cointegration using GLS detrended data. We find and simulate the limiting distributions of these statistics when GLS demeaned and GLS detrended data are used. The distributions depend on the number of right-hand-side variables, the type of deterministic components used in the cointegration equation, and a nuisance parameter R² which measures the long-run correlation between xt and yt. We present an extensive set of figures showing the asymptotic power functions of the different statistics analyzed in this paper. The results show that GLS detrending yields greater asymptotic power than OLS detrending. The simpler residual-based tests (such as the ADF) show power gains for small values of R² and a single right-hand-side variable; this evidence is valid for R² less than 0.4. When R² is larger, the figures show that the ECR statistics are better for any number of right-hand-side variables. In particular, the evidence shows that the ECR statistic which assumes a known cointegration vector is the most powerful. A set of simulated asymptotic critical values is also presented. Unlike other references, in the present framework we use different values of c̄ for different numbers of right-hand-side variables (the xt variables) and for each set of deterministic components. In this selection we use R² = 0.4, which appears to be a sensible choice.
    Keywords: Cointegration, Residual-Based Unit Root Test, ECR Test, OLS and GLS Detrended Data, Hypothesis Testing
    JEL: C2 C3 C5
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00327&r=ecm
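    The GLS detrending referred to above is the Elliott-Rothenberg-Stock quasi-differencing device (sketch in illustrative notation): for a chosen non-centrality parameter c̄, set \bar{\rho} = 1 + \bar{c}/T, regress the quasi-differenced data (y_1,\, y_2 - \bar{\rho} y_1,\, \dots,\, y_T - \bar{\rho} y_{T-1}) on the correspondingly quasi-differenced deterministic terms z_t to obtain \hat{\psi}, and carry out the residual-based test on the detrended series \tilde{y}_t = y_t - \hat{\psi}' z_t.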
  8. By: Oliver Grothe
    Abstract: Many nonlinear extensions of the Kalman filter, e.g., the extended and the unscented Kalman filter, reduce the state densities to Gaussian densities. This approximation yields adequate results in many cases. However, these filters can only estimate states that are correlated with the observations. Therefore, sequential estimation of diffusion parameters, e.g., volatility, which are not correlated with the observations, is not possible. While other filters overcome this problem with simulations, we extend the measurement update of the Gaussian two-moment filters by a higher-order correlation measurement update. We explicitly state formulas for a higher-order unscented Kalman filter within a continuous-discrete state space. We demonstrate the filter in the context of parameter estimation of an Ornstein-Uhlenbeck process.
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1207.4300&r=ecm
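    For orientation, a minimal sketch of the standard unscented transform on which such filters build (textbook sigma-point recipe in Python; the paper's higher-order correlation update is not reproduced here):

      import numpy as np

      def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
          """Propagate a Gaussian (mean, cov) through a nonlinearity f: R^n -> R^m,
          where f returns a 1-d array, using 2n+1 sigma points."""
          n = len(mean)
          lam = alpha**2 * (n + kappa) - n
          L = np.linalg.cholesky((n + lam) * cov)            # matrix square root
          sigma = np.vstack([mean, mean + L.T, mean - L.T])  # 2n+1 sigma points
          wm = np.full(2 * n + 1, 0.5 / (n + lam))           # mean weights
          wc = wm.copy()                                     # covariance weights
          wm[0] = lam / (n + lam)
          wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
          Y = np.array([f(s) for s in sigma])                # propagated points
          y_mean = wm @ Y
          y_cov = (wc[:, None] * (Y - y_mean)).T @ (Y - y_mean)
          return y_mean, y_cov

    A two-moment (Gaussian) filter uses exactly these weighted moments in its measurement update; the paper's contribution is to add higher-order cross-moment information to that step.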
  9. By: Mitia Duerinckx; Christophe Ley; Yves-Caoimhin Swan
    Abstract: Gauss’ principle states that the maximum likelihood estimator of the parameter in a location family is the sample mean for all samples of all sample sizes if and only if the family is Gaussian. There exist many extensions of this result in diverse directions. In this paper we propose a unified treatment of this literature. In doing so we define the fundamental concept of minimal necessary sample size at which a given characterization holds. Many of the cornerstone references on this topic are retrieved and discussed in light of our findings, and several new characterization theorems are provided.
    Keywords: location parameter; maximum likelihood estimator; minimal necessary sample size; MLE characterization; scale parameter; score function
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/122336&r=ecm
  10. By: Tomasz Wozniak
    Abstract: Spillover and contagion effects have gained significant interest in the recent years of financial crisis. Attention has been directed not only to relations between returns of financial variables, but to spillovers in risk as well. I use the family of Constant Conditional Correlation GARCH models to model the risk associated with financial time series and to make inferences about Granger causal relations between second conditional moments. The restrictions for second-order Granger noncausality between two vectors of variables are derived. To assess the credibility of the noncausality hypotheses, I employ Bayes factors. Bayesian testing procedures have not yet been applied to the problem of testing Granger noncausality. Contrary to classical tests, Bayes factors make such testing possible regardless of the form of the restrictions on the parameters of the model. Moreover, they relax the assumptions about the existence of higher-order moments of the processes required by classical tests.
    Keywords: Second-Order Causality; Volatility Spillovers; Bayes Factors; GARCH Models
    JEL: C11 C12 C32 C53
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1139&r=ecm
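    For reference, the Bayes factor for the noncausality restriction H_0 against the unrestricted model H_1 is the ratio of marginal data densities (standard definition),
      B_{01} = \frac{p(y \mid H_0)}{p(y \mid H_1)}
             = \frac{\int p(y \mid \theta_0, H_0)\, p(\theta_0 \mid H_0)\, d\theta_0}
                    {\int p(y \mid \theta_1, H_1)\, p(\theta_1 \mid H_1)\, d\theta_1},
    which is well defined regardless of how nonlinear the restrictions on the GARCH parameters are; this is the property the testing procedure exploits.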
  11. By: Taras Bodnar; Nikolaus Hautsch
    Abstract: We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model, we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows disentangling (multivariate) dynamics in higher order moments. To capture the latter, we propose a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood, which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
    Keywords: multiplicative error model, trading processes, copula, DCC-GARCH, liquidity risk
    JEL: C32 C58 C46
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012-044&r=ecm
  12. By: Jan Heufer
    Abstract: This paper provides an efficient way to generate a set of random choices on a set of budgets which satisfy the Generalised Axiom of Revealed Preference (GARP), that is, they are consistent with utility maximisation. The choices are drawn from an approximate uniform distribution on the admissible region of each budget which ensures consistency with GARP, based on a Markovian Monte Carlo algorithm due to Smith (1984). This procedure can be used to extend Bronars’ (1987) method, as it can be used to approximate the power of tests for conditions for which GARP is a necessary but not sufficient condition (e.g., homotheticity, separability, risk aversion, etc.). For example, it allows one to approximate the probability that a set of random choices which happens to satisfy GARP is also consistent with homotheticity. The approach can also be applied to production analysis and nonparametric tests of cost minimisation.
    Keywords: Cost minimisation; GARP; hypothesis testing; Monte Carlo methods; nonparametric tests; test power; random optimal choices; revealed preference; utility maximisation
    JEL: C14 C63 D11 D12
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:rwi:repape:0354&r=ecm
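    A minimal sketch of the GARP consistency check that any such generator must respect (standard revealed-preference algebra in Python; variable names are mine):

      import numpy as np

      def satisfies_garp(prices, bundles):
          """prices, bundles: (T, n) arrays of observed price vectors and chosen bundles."""
          expend = prices @ bundles.T            # expend[i, j] = p_i . x_j
          own = np.diag(expend)                  # own[i] = p_i . x_i
          R = expend <= own[:, None]             # x_i directly revealed preferred to x_j
          for k in range(len(prices)):           # Warshall closure: R becomes R*
              R = R | (R[:, [k]] & R[[k], :])
          strict = expend < own[:, None]         # strict[j, i]: p_j . x_j > p_j . x_i
          # GARP: no pair with x_i R* x_j while x_j is strictly cheaper than x_j's choice
          return not np.any(R & strict.T)

    Bronars-style power calculations then amount to the fraction of randomly generated choice sets passing this check; the paper's contribution is to draw approximately uniformly from the region in which it passes.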
  13. By: Stéphane Bonhomme (CEMFI, Centro de Estudios Monetarios y Financieros); Elena Manresa (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: This paper introduces time-varying grouped patterns of heterogeneity in linear panel data models. A distinctive feature of our approach is that group membership is left unspecified. We estimate the model’s parameters using a “grouped fixed-effects” estimator that minimizes a least-squares criterion with respect to all possible groupings of the cross-sectional units. We rely on recent advances in the clustering literature for fast and efficient computation. Our estimator is higher-order unbiased as both dimensions of the panel tend to infinity, under conditions that we characterize. As a result, inference is not affected by the fact that group membership is estimated. We apply our approach to study the link between income and democracy across countries, while allowing for grouped patterns of unobserved heterogeneity. The results shed new light on the evolution of political and economic outcomes of countries.
    Keywords: Discrete heterogeneity, panel data, fixed effects, democracy.
    JEL: C23
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2012_1208&r=ecm
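    A hedged sketch of the alternating least-squares idea behind a grouped fixed-effects estimator (my simplification in Python; the paper's actual algorithm, initialization strategy and inference refinements may differ):

      import numpy as np

      def grouped_fe(Y, X, G, n_iter=100, seed=0):
          """Y: (N, T) outcomes; X: (N, T, K) covariates; G: number of groups.
          Fits y_it = x_it' beta + alpha_{g_i, t} + e_it by alternating minimization."""
          rng = np.random.default_rng(seed)
          N, T = Y.shape
          K = X.shape[2]
          beta = np.zeros(K)
          groups = rng.integers(G, size=N)               # random initial grouping
          for _ in range(n_iter):
              U = Y - X @ beta                           # residuals net of covariates
              alpha = np.vstack([U[groups == g].mean(axis=0) if np.any(groups == g)
                                 else np.zeros(T) for g in range(G)])
              dist = ((U[:, None, :] - alpha[None, :, :]) ** 2).sum(axis=2)
              groups = dist.argmin(axis=1)               # best-fitting group profile per unit
              Z = (Y - alpha[groups]).ravel()            # outcomes net of group effects
              beta = np.linalg.lstsq(X.reshape(N * T, K), Z, rcond=None)[0]
          return beta, groups, alpha

    The objective is a single least-squares criterion minimized jointly over beta, the group time profiles alpha, and all possible assignments; each step above weakly lowers it, so the loop converges to a local optimum, and in practice one restarts from many initial groupings.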
  14. By: Oliver Grothe; Stephan Nicklas
    Abstract: Levy copulas are the most natural concept to capture jump dependence in multivariate Levy processes. They translate the intuition and many features of the copula concept into a time series setting. A challenge faced by both distributional and Levy copulas is to find flexible but still applicable models for higher dimensions. To overcome this problem, the concept of pair copula constructions has been successfully applied to distributional copulas. In this paper we develop the pair construction for Levy copulas (PLCC). Similar to pair constructions of distributional copulas, the pair construction of a d-dimensional Levy copula consists of d(d-1)/2 bivariate dependence functions. We show that only d-1 of these bivariate functions are Levy copulas, whereas the remaining functions are distributional copulas. Since there are no restrictions concerning the choice of the copulas, the proposed pair construction adds the desired flexibility to Levy copula models. We provide detailed estimation and simulation algorithms and apply the pair construction in a simulation study.
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1207.4309&r=ecm
  15. By: Rodolfo G. Campos; Ilina Reggio
    Abstract: We study how estimators used to impute consumption in survey data are inconsistent due to measurement error in consumption. Previous research suggests instrumenting consumption to overcome this problem. We show that, if additional regressors are present in the estimation, then instrumenting consumption may still produce inconsistent estimators. This inconsistency arises from the likely correlation between those additional regressors and the measurement error, a correlation not directly observable. We propose an additional condition to be satisfied by the instrument that reduces and even eliminates measurement error bias in the presence of additional regressors. This condition is directly observable in the data.
    Keywords: Consumption, Measurement error, Instrumental variables, Consumer expenditure survey, Panel study of income dynamics
    JEL: C13 C26 E21
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1219&r=ecm
  16. By: Lorenzo Bencivelli (Bank of Italy); Massimiliano Marcellino (European University Institute, Bocconi University and CEPR); Gianluca Moretti (UBS Global Asset Management)
    Abstract: This paper proposes the use of Bayesian model averaging (BMA) as a tool to select the set of predictors for bridge models. BMA is a computationally feasible method that allows us to explore the model space even in the presence of a large set of candidate predictors. We test the performance of BMA in now-casting by means of a recursive experiment for the euro area and its three largest member countries. The method allows flexibility in selecting the information set month by month. We find that BMA-based bridge models produce smaller forecast errors than fixed-composition bridge models. In an application to the euro area they perform at least as well as medium-scale factor models.
    Keywords: business cycle analysis, forecasting, Bayesian model averaging, bridge models.
    JEL: C22 C52 C53
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_872_12&r=ecm
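    The BMA weighting at work here, in generic form (standard expressions, not the authors' specific prior structure): each candidate bridge model M_k receives posterior weight proportional to its marginal likelihood, and the now-cast averages across models,
      P(M_k \mid y) = \frac{p(y \mid M_k)\, P(M_k)}{\sum_{l} p(y \mid M_l)\, P(M_l)}, \qquad
      \hat{y}_{BMA} = \sum_{k} P(M_k \mid y)\, \hat{y}_{M_k},
    which is what lets the composition of the predictor set adapt month by month as new information arrives.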
  17. By: Giampiero M. Gallo (Dipartimento di Statistica, Università di Firenze); Edoardo Otranto (Università degli Studi di Messina, Dipartimento di Scienze Cognitive e della Formazione)
    Abstract: Persistence and occasional abrupt changes in the average level characterize the dynamics of high-frequency-based measures of volatility. Since the beginning of the 2000s, this pattern can be attributed to the dot-com bubble, the quiet period of credit expansion between 2003 and 2006, and then the harsh times after the burst of the subprime mortgage crisis. We conjecture that the inadequacy of many econometric volatility models (a very high level of estimated persistence, serially correlated residuals) can be solved with an adequate representation of such a pattern. We insert a Markovian dynamics into a Multiplicative Error Model to represent the conditional expectation of the realized volatility, allowing us to address the issues of a slow-moving average level of volatility and of different dynamics across regimes. We apply the model to the realized volatility of the S&P500 index and gauge the usefulness of such an approach by a more interpretable persistence, better residual properties, and an increased goodness of fit.
    Keywords: Multiplicative Error Models, regime switching, realized volatility, volatility persistence
    JEL: C22 C51 C52 C58
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:fir:econom:wp2012_02&r=ecm
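    In skeleton form, a Markov-switching multiplicative error model for realized volatility x_t reads (illustrative specification, not necessarily the authors' exact parameterization):
      x_t = \mu_t\, \varepsilon_t, \qquad \varepsilon_t \mid \mathcal{F}_{t-1} \sim D^{+}(1, \sigma^2), \qquad
      \mu_t = \omega_{s_t} + \alpha_{s_t}\, x_{t-1} + \beta_{s_t}\, \mu_{t-1},
    where \varepsilon_t is a unit-mean positive innovation and s_t is a latent Markov chain that shifts the level and dynamics of the conditional expectation \mu_t across regimes.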
  18. By: Chang-han Rhee; Peter W. Glynn
    Abstract: In this paper, we introduce a new approach to constructing unbiased estimators when computing expectations of path functionals associated with stochastic differential equations (SDEs). Our randomization idea is closely related to multi-level Monte Carlo and provides a simple mechanism for constructing a finite variance unbiased estimator with "square root convergence rate" whenever one has available a scheme that produces strong error of order greater than 1/2 for the path functional under consideration.
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1207.2452&r=ecm
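    The randomization device can be summarized as follows (illustrative notation; a sketch of the debiasing idea rather than the paper's exact construction). Let Y_n be the estimator based on a discretization with 2^n time steps, set \Delta_n = Y_n - Y_{n-1} with Y_{-1} = 0, and draw an independent integer truncation level N; then
      Z = \sum_{n=0}^{N} \frac{\Delta_n}{P(N \ge n)}, \qquad
      E[Z] = \sum_{n=0}^{\infty} E[\Delta_n] = \lim_{n \to \infty} E[Y_n],
    so Z is unbiased for the limiting expectation, and the tail of N can be chosen so that Z has finite variance and finite expected computational cost whenever the strong order of the scheme exceeds 1/2.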
  19. By: Tobias Scholl (Schumpeter Center for Clusters, Entrepreneurship and Innovation, Frankfurt University); Thomas Brenner (Working Group on Economic Geography and Location Research, Philipps University Marburg)
    Abstract: We present a new statistical method that detects industrial clusters at the firm level. The proposed method does not divide space into subunits and is therefore not affected by the Modifiable Areal Unit Problem (MAUP). Our metric differs both in its calculation and interpretation from existing distance-based metrics and exhibits four central properties that enable its meaningful use for cluster analysis. The method fulfills all five criteria for a test of localization proposed by Duranton and Overman (2005).
    Keywords: Spatial concentration, localization, clusters, MAUP, distance-based measures
    JEL: C40 C60 R12
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:pum:wpaper:2012-02&r=ecm
  20. By: Fischer, Matthias
    Abstract: We introduce a new skewed and leptokurtic distribution derived from the hyperbolic secant distribution and Johnson's SU transformation. Properties of this new distribution are given. Finally, we empirically demonstrate in the context of financial return data that its flexibility is comparable to that of its most advanced peers.
    Keywords: hyperbolic secant distribution, SU-transformation, skewness, leptokurtosis, polynomial tails
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:032012&r=ecm
  21. By: Thomas Lux
    Abstract: Maximum likelihood estimation of discretely observed diffusion processes is mostly hampered by the lack of a closed-form solution for the transient density. It has recently been argued that a most generic remedy to this problem is the numerical solution of the pertinent Fokker-Planck (FP) or forward Kolmogorov equation. Here we expand extant work on univariate diffusions to higher dimensions. We find that in the bivariate and trivariate cases, a numerical solution of the FP equation via alternating direction finite difference schemes yields results surprisingly close to exact maximum likelihood in a number of test cases. After providing evidence for the efficiency of such a numerical approach, we illustrate its application for the estimation of a joint system of short-run and medium-run investor sentiment and asset price dynamics using German stock market data.
    Keywords: stochastic differential equations, numerical maximum likelihood, Fokker-Planck equation, finite difference schemes, asset pricing
    JEL: C58 G12 C13
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:kie:kieliw:1781&r=ecm
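    Concretely, in the univariate case the approach solves the forward Kolmogorov (Fokker-Planck) equation for the transition density of dX_t = \mu(X_t; \theta)\, dt + \sigma(X_t; \theta)\, dW_t and plugs the numerical solution into the likelihood (standard setup; the paper's contribution is carrying this to two and three dimensions via alternating direction schemes):
      \frac{\partial p(x, t)}{\partial t}
        = -\frac{\partial}{\partial x}\bigl[\mu(x; \theta)\, p(x, t)\bigr]
          + \frac{1}{2} \frac{\partial^2}{\partial x^2}\bigl[\sigma^2(x; \theta)\, p(x, t)\bigr],
      \qquad
      \log L(\theta) = \sum_{i=1}^{n-1} \log p\bigl(x_{t_{i+1}} \mid x_{t_i}; \theta\bigr).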
  22. By: Carl Schmertmann (Max Planck Institute for Demographic Research, Rostock, Germany)
    Abstract: OBJECTIVE: I develop and explain a new method for interpolating detailed fertility schedules from age-group data. The method allows estimation of fertility rates over a fine grid of ages, from either standard or non-standard age groups. Users can calculate detailed schedules directly from the input data, using only elementary arithmetic. METHODS: The new method, the calibrated spline (CS) estimator, expands an abridged fertility schedule by finding the smooth curve that minimizes a squared error penalty. The penalty is based both on fit to the available age-group data, and on similarity to patterns of 1fx schedules observed in the Human Fertility Database (HFD) and in the US Census International Database (IDB). RESULTS: I compare the CS estimator to two very good alternative methods that require more computation: Beers interpolation and the HFD's splitting protocol. CS replicates known 1fx schedules from 5fx data better than the other two methods, and its interpolated schedules are also smoother. CONCLUSIONS: The CS method is an easily computed, flexible, and accurate method for interpolating detailed fertility schedules from age-group data. COMMENTS: Data and R programs for replicating this paper's results are available online at http://calibrated-spline.schmert.net
    JEL: J1 Z0
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2012-022&r=ecm
  23. By: Massimo Franchi ("Sapienza" Università di Roma); Paolo Paruolo (Università dell'Insubria)
    Abstract: This paper shows that the poor man's invertibility condition in Fernandez-Villaverde et al. (2007) is, in general, sufficient but not necessary for fundamentalness; that is, a violation of this condition does not necessarily imply the impossibility of recovering the structural shocks of a DSGE via a VAR. The permanent income model in Fernandez-Villaverde et al. (2007) is used to illustrate this fact. A necessary and sufficient condition for fundamentalness is formulated and its relations with the poor man's invertibility condition are discussed.
    Keywords: DSGE; VAR; invertibility; non-fundamentalness.
    Date: 2012–07
    URL: http://d.repec.org/n?u=RePEc:sas:wpaper:20124&r=ecm
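    For context, the condition at issue applies to the state-space form x_{t+1} = A x_t + B w_{t+1}, y_{t+1} = C x_t + D w_{t+1} with D square and invertible; the poor man's invertibility condition requires
      \text{all eigenvalues of } A - B D^{-1} C \text{ to lie strictly inside the unit circle,}
    in which case the structural shocks w_t can be recovered from current and past observables. The paper's point is that violating this eigenvalue condition does not by itself rule out such recovery.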

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.