nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒02‒22
fifteen papers chosen by
Sune Karlsson
Örebro University

  1. Fitting dynamic factor models to non-stationary time series By Eichler Michael; Motta Giovanni; Sachs Rainer von
  2. Wavelet-based detection of outliers in volatility models By Aurea Grané; Helena Veiga
  3. Optimal Bandwidth Choice for the Regression Discontinuity Estimator By Guido Imbens; Karthik Kalyanaraman
  4. Consistent Estimation of Global VAR Models By Mutl, Jan
  5. Model selection, estimation and forecasting in VAR models with short-run and long-run restrictions By Athanasopoulos, George; Issler, João Victor; Guillén, Osmani Teixeira de Carvalho; Vahid, Farshid
  6. Forecasting inflation with gradual regime shifts and exogenous information By Andrés González; Kirstin Hubrich; Timo Teräsvirta
  7. The Econometrics of DSGE Models By Jesús Fernández-Villaverde
  8. Simple tests for exogeneity of a binary explanatory variable in count data regression models By Kevin E. Staub
  9. GARCH models with leverage effect : differences and similarities By María José Rodríguez; Esther Ruiz
  10. On Granger-causality and the effect of interventions in time series By Eichler Michael; Didelez Vanessa
  11. A Local Examination for Persistence in Exclusions-from-Core Measures of Inflation Using Real-Time Data By Tierney, Heather L.R.
  12. Identification-Robust Minimum Distance Estimation of the New Keynesian Phillips Curve By Leandro M. Magnusson; Sophocles Mavroeidis
  13. Measuring Inequality Using Censored Data: A Multiple Imputation Approach By Jenkins S; Burkhauser R; Feng S; Larrimore J
  14. Spatial HAC estimator: analysis of convergence of European regions By Oleksandr Shepotylo
  15. Regression Discontinuity Designs in Economics By David S. Lee; Thomas Lemieux

  1. By: Eichler Michael; Motta Giovanni; Sachs Rainer von (METEOR)
    Abstract: Factor modelling of a large time series panel has widely proven useful in reducing its cross-sectional dimensionality. This is done by explaining common co-movements in the panel through the existence of a small number of common components, up to some idiosyncratic behaviour of each individual series. To capture serial correlation in the common components, a dynamic structure is used as in traditional (uni- or multivariate) time series analysis of second-order structure, i.e. allowing for infinite-length filtering of the factors via dynamic loadings. In this paper, motivated by economic data observed over long time periods which show smooth transitions over time in their covariance structure, we allow the dynamic structure of the factor model to be non-stationary over time by proposing a deterministic time variation of its loadings. In this respect we generalise recent work on static factor models with time-varying loadings as well as the classical, i.e. stationary, dynamic approximate factor model. Motivated by the stationary case, we estimate the common components of our dynamic factor model by the eigenvectors of a consistent estimator of the now time-varying spectral density matrix of the underlying data-generating process. This can be seen as a time-varying principal components approach in the frequency domain. We derive consistency of this estimator in a "double-asymptotic" framework in which both the cross-section and time dimensions tend to infinity. A simulation study illustrates the performance of our estimators.
    Keywords: econometrics
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2009002&r=ecm
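    The following is a schematic numpy sketch, not the authors' implementation, of the time-varying frequency-domain principal components idea described in item 1: estimate a local spectral density matrix on a rolling window and take its leading eigenvectors at each point in time. The window width, the frequency band and the function names are illustrative assumptions.

    import numpy as np

    def local_spectral_matrix(X, t0, width):
        # Smoothed-periodogram estimate of the spectral density matrix of
        # the N x T panel X on a window centred at time t0 (low frequencies).
        seg = X[:, max(0, t0 - width // 2): t0 + width // 2]
        seg = seg - seg.mean(axis=1, keepdims=True)
        d = np.fft.rfft(seg, axis=1)
        band = d[:, :3]                          # average a small frequency band
        return band @ band.conj().T / (3 * seg.shape[1])

    def tv_frequency_pca(X, width=64, q=2):
        # Leading q eigenvectors of the local spectral matrix at each t:
        # a time-varying principal components analysis in the frequency domain.
        N, T = X.shape
        loadings = np.empty((T, N, q), dtype=complex)
        for t in range(T):
            _, vecs = np.linalg.eigh(local_spectral_matrix(X, t, width))
            loadings[t] = vecs[:, ::-1][:, :q]   # eigenvectors, largest first
        return loadings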
  2. By: Aurea Grané; Helena Veiga
    Abstract: Outliers in financial data can lead to biased model parameter estimates, invalid inferences and poor volatility forecasts. Therefore, their detection and correction should be taken seriously when modeling financial data. This paper focuses on these issues and proposes a general wavelet-based detection and correction method that can be applied to a large class of volatility models. The effectiveness of our proposal is tested in an intensive Monte Carlo study for six well-known volatility models and compared to alternative proposals in the literature, before we apply it to three daily stock market indexes. The Monte Carlo experiments show that our method is very effective in detecting both isolated outliers and outlier patches, and that it is much more reliable than other wavelet-based procedures since it detects a significantly smaller number of false outliers.
    Keywords: Outliers, Outlier patches, Volatility models, Wavelets
    JEL: C22 C5
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws090403&r=ecm
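    As a rough illustration of the kind of wavelet-based detection rule studied in item 2 (a minimal sketch: the Haar filter, the 4-sigma threshold and the PyWavelets dependency are assumptions, and the paper's actual procedure and thresholds differ):

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_outlier_candidates(returns, k=4.0):
        # Level-1 Haar detail coefficients react strongly to isolated spikes
        _, detail = pywt.dwt(np.asarray(returns, dtype=float), 'haar')
        sigma = np.median(np.abs(detail)) / 0.6745   # robust MAD-based scale
        flagged = np.abs(detail) > k * sigma         # threshold rule
        # each detail coefficient spans two consecutive observations
        return np.where(np.repeat(flagged, 2)[:len(returns)])[0]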
  3. By: Guido Imbens; Karthik Kalyanaraman
    Abstract: We investigate the problem of the optimal choice of the smoothing parameter (bandwidth) for the regression discontinuity estimator. We focus on estimation by local linear regression, which has been shown to be rate optimal (Porter, 2003). Investigation of an expected-squared-error-loss criterion reveals the need for regularization. We propose an optimal, data-dependent bandwidth choice rule. We illustrate the proposed bandwidth choice with the data previously analyzed by Lee (2008), as well as in a simulation study based on this data set. The simulations suggest that the proposed rule performs well.
    JEL: C14
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14726&r=ecm
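    For readers unfamiliar with the estimator whose bandwidth item 3 tunes, here is a minimal numpy sketch of local linear regression discontinuity estimation with a triangular kernel. The kernel choice and the function name are illustrative; this is not the authors' bandwidth rule itself, which chooses h from the data.

    import numpy as np

    def rd_local_linear(x, y, cutoff=0.0, h=1.0):
        # Fit a weighted linear regression on each side of the cutoff and
        # take the difference of the two intercepts (the jump at the cutoff).
        def side_fit(mask):
            xs, ys = x[mask] - cutoff, y[mask]
            w = np.clip(1 - np.abs(xs) / h, 0, None)   # triangular kernel
            X = np.column_stack([np.ones_like(xs), xs])
            W = np.diag(w)
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)
            return beta[0]                             # limit of E[y|x] at cutoff
        right = side_fit((x >= cutoff) & (x < cutoff + h))
        left = side_fit((x < cutoff) & (x > cutoff - h))
        return right - left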
  4. By: Mutl, Jan (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: In this paper, I propose an instrumental variable (IV) estimation procedure to estimate global VAR (GVAR) models and show that it leads to consistent and asymptotically normal estimates of the parameters. I also provide computationally simple conditions that guarantee that the GVAR model is stable.
    Keywords: Global VAR, GVAR, Consistent estimation, Instrumental variables
    JEL: C31 C32 C33
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:234&r=ecm
  5. By: Athanasopoulos, George; Issler, João Victor; Guillén, Osmani Teixeira de Carvalho; Vahid, Farshid
    Abstract: We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.
    Date: 2009–02–05
    URL: http://d.repec.org/n?u=RePEc:fgv:epgewp:689&r=ecm
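    A stylised sketch of the kind of joint selection item 5 studies: scan a grid of lag lengths p and cointegrating ranks r, scoring each pair with an information criterion built from the Johansen reduced-rank log-likelihood. This is a generic BIC scan, not the authors' hybrid criteria, which additionally restrict the rank of the short-run matrix; the parameter count below is a rough approximation.

    import numpy as np

    def _resid(A, Z):
        # Residuals from OLS of each column of A on Z
        beta, *_ = np.linalg.lstsq(Z, A, rcond=None)
        return A - Z @ beta

    def vecm_bic_grid(Y, p_max=4):
        # BIC over (lag length p, cointegrating rank r); for a careful
        # comparison, the estimation sample should be held fixed across p.
        T, n = Y.shape
        dY = np.diff(Y, axis=0)
        table = {}
        for p in range(1, p_max + 1):
            m = T - p
            Z = np.column_stack([dY[p - 1 - j:-j] for j in range(1, p)]
                                + [np.ones((m, 1))])
            R0 = _resid(dY[p - 1:], Z)      # Delta Y_t purged of short-run terms
            R1 = _resid(Y[p - 1:-1], Z)     # Y_{t-1} purged of short-run terms
            S00, S11 = R0.T @ R0 / m, R1.T @ R1 / m
            S01 = R0.T @ R1 / m
            lam = np.linalg.eigvals(np.linalg.solve(S11, S01.T)
                                    @ np.linalg.solve(S00, S01))
            lam = np.sort(np.clip(lam.real, 0, 1 - 1e-12))[::-1]
            _, base = np.linalg.slogdet(S00)
            for r in range(n + 1):
                ll = -0.5 * m * (base + np.log(1 - lam[:r]).sum())
                k = 2 * n * r - r * r + n * n * (p - 1) + n  # rough count
                table[(p, r)] = -2 * ll + k * np.log(m)
        return min(table, key=table.get), table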
  6. By: Andrés González (Banco de la República, Bogotá and CREATES, University of Aarhus, Denmark); Kirstin Hubrich (European Central Bank, Frankfurt am Main and CREATES, University of Aarhus, Denmark); Timo Teräsvirta (CREATES, University of Aarhus, Denmark)
    Abstract: In this work, we make use of the shifting-mean autoregressive model, which is a flexible univariate nonstationary model suitable for describing characteristic features in inflation series as well as for medium-term forecasting. With this model we decompose the inflation process into a slowly moving nonstationary component and dynamic short-run fluctuations around it. We fit the model to the monthly euro area, UK and US inflation series. An important feature of our model is that it provides a way of combining the information in the sample and a priori information about the quantity to be forecast to form a single inflation forecast. We show, both theoretically and by simulations, how this is done by using a penalised likelihood in the estimation of the model parameters. In forecasting inflation, the central bank's inflation target, where one exists, is a natural example of such prior information. We further illustrate the application of our method by an ex post forecasting experiment for euro area and UK inflation. We find that taking the exogenous information into account improves forecast accuracy relative to a linear autoregressive benchmark model.
    Keywords: Nonlinear forecast, nonlinear model, nonlinear trend, penalised likelihood, structural shift, time-varying parameter
    JEL: C22 C52 C53 E31 E47
    Date: 2009–01–28
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-03&r=ecm
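    The penalised-likelihood idea in item 6 can be caricatured in a few lines: estimate an AR model around a smoothly shifting mean, adding a penalty that pulls the end-of-sample mean toward an exogenous target such as an inflation target. Everything below (the logistic shift, the least-squares criterion standing in for the likelihood, the penalty weight) is an illustrative assumption, not the authors' specification.

    import numpy as np
    from scipy.optimize import minimize

    def penalised_sse(theta, y, target, lam=10.0):
        a, m0, m1, c, gamma = theta
        t = np.arange(len(y))
        mu = m0 + m1 / (1 + np.exp(-gamma * (t - c)))  # smoothly shifting mean
        e = (y[1:] - mu[1:]) - a * (y[:-1] - mu[:-1])  # AR(1) around the mean
        return e @ e + lam * (mu[-1] - target) ** 2    # pull toward the target

    rng = np.random.default_rng(0)
    y = 2.0 + rng.standard_normal(200).cumsum() * 0.05  # placeholder series
    res = minimize(penalised_sse, x0=[0.5, 2.0, 0.0, 100.0, 0.1],
                   args=(y, 2.0), method='Nelder-Mead')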
  7. By: Jesús Fernández-Villaverde (Department of Economics, University of Pennsylvania)
    Abstract: In this paper, I review the literature on the formulation and estimation of dynamic stochastic general equilibrium (DSGE) models with a special emphasis on Bayesian methods. First, I discuss the evolution of DSGE models over the last couple of decades. Second, I explain why the profession has decided to estimate these models using Bayesian methods. Third, I briefly introduce some of the techniques required to compute and estimate these models. Fourth, I illustrate the techniques under consideration by estimating a benchmark DSGE model with real and nominal rigidities. I conclude by offering some pointers for future research.
    Keywords: DSGE Models, Likelihood Estimation, Bayesian Methods
    JEL: C11 C13 E30
    Date: 2009–01–19
    URL: http://d.repec.org/n?u=RePEc:pen:papers:09-008&r=ecm
  8. By: Kevin E. Staub (Socioeconomic Institute, University of Zurich)
    Abstract: This article investigates the power and size of several tests for exogeneity of a binary explanatory variable in count models by conducting extensive Monte Carlo simulations. The tests under consideration are Hausman contrast tests as well as univariate Wald tests, including a new test that is notably easy to implement. Performance of the tests is explored under misspecification of the underlying model and under different conditions regarding the instruments. The results indicate that the tests that are simpler to implement often outperform the more demanding ones. This is especially the case for the new test.
    Keywords: Endogeneity, Poisson, dummy variable, testing
    JEL: C12 C25
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:soz:wpaper:0904&r=ecm
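    One simple test in the spirit of the univariate Wald tests item 8 examines is the control-function check sketched below (an illustration under standard assumptions, not necessarily the paper's new test): include the first-stage residual of the binary regressor in the Poisson regression and test its coefficient.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    z = rng.normal(size=n)                    # instrument
    u = rng.normal(size=n)                    # unobservable driving endogeneity
    d = (0.8 * z + u > 0).astype(float)       # endogenous binary regressor
    y = rng.poisson(np.exp(0.3 + 0.5 * d + 0.4 * u))

    fs = sm.OLS(d, sm.add_constant(z)).fit()  # first stage (LPM on instrument)
    vhat = d - fs.fittedvalues
    X2 = sm.add_constant(np.column_stack([d, vhat]))
    pois = sm.GLM(y, X2, family=sm.families.Poisson()).fit()
    print(pois.tvalues[-1])  # a large |t| on vhat signals endogeneity of d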
  9. By: María José Rodríguez; Esther Ruiz
    Abstract: In this paper, we compare the statistical properties of some of the most popular GARCH models with leverage effect when their parameters satisfy the positivity, stationarity and finite fourth-order moment restrictions. We show that the EGARCH specification is the most flexible, while the GJR model may have important limitations when restricted to have finite kurtosis. On the other hand, we show empirically that the conditional standard deviations estimated by the TGARCH and EGARCH models are almost identical and very similar to those estimated by the APARCH model. However, the estimates of the QGARCH and GJR models differ from each other and from the other three specifications.
    Keywords: EGARCH, GJR, QGARCH, TGARCH, APARCH
    JEL: C22
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws090302&r=ecm
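    For reference, the two leverage specifications that item 9 contrasts most directly, GJR and EGARCH, have the following one-lag conditional variance recursions (a plain sketch with textbook parameterisations; parameter names are generic):

    import numpy as np

    def gjr_sigma2(eps, omega, alpha, gamma, beta):
        # GJR(1,1): sigma2_t = omega + (alpha + gamma*1{eps<0}) eps^2 + beta*sigma2
        s2 = np.empty_like(eps)
        s2[0] = omega / (1 - alpha - gamma / 2 - beta)  # unconditional variance
        for t in range(1, len(eps)):
            s2[t] = (omega + (alpha + gamma * (eps[t - 1] < 0)) * eps[t - 1] ** 2
                     + beta * s2[t - 1])
        return s2

    def egarch_logsigma2(eps, omega, alpha, gamma, beta):
        # EGARCH(1,1): ln sigma2_t = omega + beta ln sigma2_{t-1}
        #              + alpha*(|z|-E|z|) + gamma*z, with standardised shock z
        ls2 = np.empty_like(eps)
        ls2[0] = omega / (1 - beta)
        for t in range(1, len(eps)):
            z = eps[t - 1] * np.exp(-ls2[t - 1] / 2)
            ls2[t] = (omega + beta * ls2[t - 1]
                      + alpha * (abs(z) - np.sqrt(2 / np.pi)) + gamma * z)
        return ls2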
  10. By: Eichler Michael; Didelez Vanessa (METEOR)
    Abstract: We combine two approaches to causal reasoning. Granger-causality, on the one hand, is popular in fields like econometrics, where randomised experiments are not very common. Instead, information about the dynamic development of a system is explicitly modelled and used to define potentially causal relations. On the other hand, the notion of causality as the effect of interventions is predominant in fields like medical statistics or computer science. In this paper, we consider the effect of external, possibly multiple and sequential, interventions in a system of multivariate time series whose Granger-causal structure is taken to be known. We address the following questions: under what assumptions about the system and the interventions does Granger-causality inform us about the effectiveness of interventions, and when does the possibly smaller system of observable time series allow us to estimate this effect? For the latter we derive criteria that can be checked graphically and are in the same spirit as Pearl's back-door and front-door criteria (Pearl, 1995).
    Keywords: econometrics
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2009003&r=ecm
  11. By: Tierney, Heather L.R.
    Abstract: Using parametric and nonparametric methods, inflation persistence is examined through the relationship between exclusions-from-core inflation and total inflation. The analysis covers two sample periods and five in-sample forecast horizons, ranging from one quarter to three years, over fifty vintages of real-time data for two measures of inflation: personal consumption expenditures and the consumer price index. Unbiasedness is examined at the aggregate and local levels. A local nonparametric hypothesis test for unbiasedness is developed and proposed for testing the local conditional nonparametric regression estimates, which can be vastly different from the aggregated nonparametric model. This paper finds that the nonparametric model outperforms the parametric model for both data samples and for all five in-sample forecast horizons.
    Keywords: Real-Time Data; Local Estimation; Nonparametrics; Inflation Persistence; Monetary Policy
    JEL: C14 E52 E40
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:13383&r=ecm
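    A rough sketch of what a local test for unbiasedness might look like in the setting of item 11: a kernel-weighted local linear regression of total inflation on exclusions-from-core inflation, with a pointwise Wald check that the local mean and slope equal the identity line. The Gaussian kernel, the sandwich variance and the chi-squared form are illustrative assumptions, not the paper's exact statistic.

    import numpy as np

    def local_linear_fit(x, y, x0, h):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        XtWX = X.T @ (w[:, None] * X)
        beta = np.linalg.solve(XtWX, X.T @ (w * y))
        e = y - X @ beta
        meat = X.T @ (((w ** 2) * (e ** 2))[:, None] * X)  # sandwich "meat"
        V = np.linalg.solve(XtWX, meat) @ np.linalg.inv(XtWX)
        return beta, V

    def local_unbiasedness_wald(x, y, x0, h):
        beta, V = local_linear_fit(x, y, x0, h)
        r = beta - np.array([x0, 1.0])    # H0: local mean x0, local slope 1
        return r @ np.linalg.solve(V, r)  # approximately chi2(2) under H0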
  12. By: Leandro M. Magnusson (Department of Economics, Tulane University); Sophocles Mavroeidis (Department of Economics, Brown University)
    Abstract: Limited-information identification-robust methods for the indexation and price-rigidity parameters of the new Keynesian Phillips curve yield very wide confidence intervals. Full-information methods impose more restrictions on the reduced-form dynamics, and thus make more efficient use of the information in the data. We propose identification-robust minimum distance methods for exploiting these additional restrictions and show that they yield considerably smaller confidence intervals for the coefficients of the model compared to their limited-information GMM counterparts. In contrast to previous studies that used GMM, we find evidence of partial but not full indexation, and we obtain sharper inference on the degree of price stickiness.
    Keywords: weak identification, minimum distance, GMM, Phillips curve
    JEL: C22 E31
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:tul:wpaper:0904&r=ecm
  13. By: Jenkins S (Institute for Social and Economic Research); Burkhauser R (Cornell University); Feng S (Shanghai University of Finance and Economics); Larrimore J (Cornell University)
    Abstract: To measure income inequality with right-censored (topcoded) data, we propose multiple imputation for censored observations, using draws from Generalized Beta of the Second Kind distributions to provide partially synthetic datasets that are analyzed with complete-data methods. Estimation and inference use Reiter's (Survey Methodology, 2003) formulae. Using Current Population Survey (CPS) internal data, we find few statistically significant differences in income inequality for pairs of years between 1995 and 2004. We also show that using CPS public use data with cell mean imputations may lead to incorrect inferences about inequality differences. Multiply-imputed public use data provide an intermediate solution.
    Date: 2009–02–09
    URL: http://d.repec.org/n?u=RePEc:ese:iserwp:2009-04&r=ecm
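    The imputation step in item 13 can exploit the fact that if X ~ GB2(a, b, p, q), then (X/b)^a / (1 + (X/b)^a) follows a Beta(p, q) distribution, so draws truncated above the topcode can be made through the Beta quantile function. The parameter values, the topcode and the Gini step below are placeholders; the paper fits the GB2 parameters to internal CPS data and combines results across imputations with Reiter's formulae.

    import numpy as np
    from scipy.stats import beta

    def gb2_draws_above(c, a, b, p, q, size, rng):
        s = (c / b) ** a
        lo = beta.cdf(s / (1 + s), p, q)       # F(c) mapped into Beta space
        u = rng.uniform(lo, 1.0, size)         # truncate above the topcode
        B = beta.ppf(u, p, q)
        return b * (B / (1 - B)) ** (1 / a)

    rng = np.random.default_rng(1)
    income = rng.lognormal(10, 1, 5000)        # placeholder income data
    topcode = np.quantile(income, 0.99)
    censored = np.minimum(income, topcode)
    mask = censored >= topcode
    ginis = []
    for _ in range(10):                        # M imputed datasets
        filled = censored.copy()
        filled[mask] = gb2_draws_above(topcode, 3.0, 2e4, 0.9, 0.6,
                                       mask.sum(), rng)
        srt = np.sort(filled)
        n = len(srt)
        ginis.append((2 * np.arange(1, n + 1) - n - 1) @ srt / (n * srt.sum()))
    # combine the M Gini estimates with Reiter's (2003) variance formulae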
  14. By: Oleksandr Shepotylo (Kyiv School of Economics and Kyiv Economics Institute)
    Abstract: This paper applies a nonparametric heteroskedasticity and autocorrelation consistent (HAC) estimator of the error variance-covariance matrix in the context of a spatial autoregressive model of GDP per capita convergence of European regions at the NUTS 2 level. By introducing the spatial dimension, it examines how the equilibrium distribution of GDP per capita of EU regions evolves in both the time and space dimensions. The results demonstrate that global spatial spillovers of growth rates make an important contribution to the process of convergence by reinforcing the economic growth of neighboring regions. The results are even more pronounced when convergence in wages per worker is considered. The choice of kernel function does not significantly affect the estimation of the variance-covariance matrix, while the choice of the bandwidth parameter is quite important. Finally, the results are sensitive to the specification of the weighting matrix, and further research is needed to give a more rigorous justification for its selection.
    Keywords: Convergence, spatial econometrics, regional economics, EU
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:kse:dpaper:15&r=ecm
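    The core of the spatial HAC estimator applied in item 14 can be sketched as a kernel-weighted "sandwich" variance for regression coefficients, in which pairs of observations are weighted by a kernel in their spatial distance. The Bartlett kernel and the names below are illustrative; the paper's point is precisely that the bandwidth h and the distance/weighting choices matter.

    import numpy as np

    def spatial_hac_cov(X, u, D, h):
        # X: n x k regressors, u: n residuals, D: n x n distance matrix
        K = np.clip(1 - D / h, 0, None)   # Bartlett kernel, zero beyond h
        Xu = X * u[:, None]               # rows x_i * u_i
        meat = Xu.T @ K @ Xu              # sum_ij K_ij u_i u_j x_i x_j'
        bread = np.linalg.inv(X.T @ X)
        return bread @ meat @ bread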
  15. By: David S. Lee; Thomas Lemieux
    Abstract: This paper provides an introduction and "user guide" to Regression Discontinuity (RD) designs for empirical researchers. It presents the basic theory behind the research design, details when RD is likely to be valid or invalid given economic incentives, explains why it is considered a "quasi-experimental" design, and summarizes different ways (with their advantages and disadvantages) of estimating RD designs, as well as the limitations of interpreting these estimates. Concepts are discussed using examples drawn from the growing body of empirical research that employs RD.
    JEL: C1 H0 I0 J0
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14723&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.