nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒08‒28
twelve papers chosen by
Sune Karlsson
Orebro University

  1. PANEL GROWTH REGRESSIONS WITH GENERAL PREDETERMINED VARIABLES: LIKELIHOOD-BASED ESTIMATION AND BAYESIAN AVERAGING By Enrique Moral-Benito
  2. Identifying Finite Mixtures in Econometric Models By Marc Henry; Yuichi Kitamura; Bernard Salanié
  3. Nonparametric transfer function models By Liu, Jun M.; Chen, Rong; Yao, Qiwei
  4. Non-linear DSGE Models and The Central Difference Kalman Filter By Martin M. Andreasen
  5. Nonparametric estimation of the volatility under microstructure noise: wavelet adaptation By Hoffmann, Marc; Munk, Axel; Schmidt-Hieber, Johannes
  6. Pre-Averaging Based Estimation of Quadratic Variation in the Presence of Noise and Jumps: Theory, Implementation, and Empirical Evidence By Nikolaus Hautsch; Mark Podolskij
  7. Maximum penalized quasi-likelihood estimation of the diffusion function By Jeff Hamrick; Yifei Huang; Constantinos Kardaras; Murad Taqqu
  8. Understanding and Forecasting Aggregate and Disaggregate Price Dynamics By D'Agostino, Antonello; Bermingham, Colin
  9. DSGE Model Validation in a Bayesian Framework: an Assessment By Paccagnini, Alessia
  10. Unit Roots, Level Shifts and Trend Breaks in Per Capita Output: A Robust Evaluation By Mohitosh Kejriwal; Claude Lopez
  11. Modeling Seasonality in New Product Diffusion By Peers, Y.; Fok, D.; Franses, Ph.H.B.F.
  12. The Impact of Data Revisions on the Robustness of Growth Determinants - A Note on 'Determinants of Economic Growth. Will Data Tell?' By Feldkircher, Martin; Zeugner, Stefan

  1. By: Enrique Moral-Benito (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: In this paper I estimate empirical growth models simultaneously considering endogenous regressors and model uncertainty. In order to apply Bayesian methods such as Bayesian Model Averaging (BMA) to dynamic panel data models with predetermined or endogenous variables and fixed effects, I propose a likelihood function for such models. The resulting maximum likelihood estimator can be interpreted as the LIML counterpart of GMM estimators. Via Monte Carlo simulations, I conclude that the finite-sample performance of the proposed estimator is better than that of the commonly-used standard GMM. In contrast to the previous consensus in the empirical growth literature, empirical results indicate that once endogeneity and model uncertainty are accounted for, the estimated convergence rate is not significantly different from zero. Moreover, there seems to be only one variable, the investment ratio, that causes long-run economic growth.
    Keywords: Dynamic panel estimation, growth regressions, Bayesian Model Averaging, weak instruments, maximum likelihood.
    JEL: C11 C33 O40
    Date: 2010–07
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2010_1006&r=ecm
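The model-averaging machinery in the abstract above can be illustrated with a generic BMA computation. The sketch below is not the paper's likelihood-based panel estimator: it runs plain OLS on an invented cross-section and uses the standard BIC approximation to the marginal likelihood to turn the fits of all candidate regressor subsets into posterior model weights and posterior inclusion probabilities.

```python
import itertools
import numpy as np

def bma_by_bic(y, X):
    """Enumerate all regressor subsets, fit each by OLS, and convert
    BIC values into approximate posterior model weights."""
    n, kmax = len(y), X.shape[1]
    models, bics = [], []
    for k in range(kmax + 1):
        for subset in itertools.combinations(range(kmax), k):
            Z = np.column_stack([np.ones(n), X[:, list(subset)]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
            models.append(set(subset))
    b = np.array(bics)
    w = np.exp(-0.5 * (b - b.min()))
    return models, w / w.sum()

# simulated cross-section (invented): only regressor 0 truly drives y
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = 1.0 + 2.0 * X[:, 0] + rng.standard_normal(200)
models, w = bma_by_bic(y, X)
# posterior inclusion probability of each candidate regressor
pip = [sum(wi for m, wi in zip(models, w) if j in m) for j in range(3)]
```

The inclusion probability of the relevant regressor goes to one, while the spurious regressors receive much less posterior support.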
  2. By: Marc Henry (Université de Montréal - Département de sciences économiques); Yuichi Kitamura (Yale University - Department of Economics); Bernard Salanié (Columbia University - Department of Economics)
    Abstract: Mixtures of distributions are present in many econometric models, such as models with unobserved heterogeneity. It is thus crucial to have a general approach to identify them nonparametrically. Yet the literature so far only contains isolated examples, applied to specific models. We derive the identifying implications of a conditional independence assumption in finite mixture models. It applies for instance to models with unobserved heterogeneity, regime switching models, and models with mismeasured discrete regressors. Under this assumption, we derive sharp bounds on the mixture weights and components. For models with two mixture components, we show that if in addition the components behave differently in the tails of their distributions, then components and weights are fully nonparametrically identified. We apply our findings to the nonparametric identification and estimation of outcome distributions with a misclassified binary regressor. This provides a simple estimator that does not require instrumental variables, auxiliary data, symmetric error distributions or other shape restrictions.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:clu:wpaper:0910-20&r=ecm
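The paper's contribution is nonparametric identification; as a contrast, the finite-mixture structure it studies (weights plus component distributions) can be shown in its simplest parametric form. The sketch below fits a two-component Gaussian mixture by EM on simulated data — a textbook baseline, not the authors' method, since their results require no Gaussian shape assumption.

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """EM for a two-component Gaussian mixture: recover the mixture
    weights pi, means mu, and standard deviations sd."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior component probabilities (responsibilities)
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations
        pi = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))
    return pi, mu, sd

# invented example: 30/70 mixture of N(-2,1) and N(3,1)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
pi, mu, sd = em_two_gaussians(x)
```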
  3. By: Liu, Jun M.; Chen, Rong; Yao, Qiwei
    Abstract: In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between 'input' and 'output' time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modelling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example.
    Date: 2010–07
    URL: http://d.repec.org/n?u=RePEc:ner:lselon:http://eprints.lse.ac.uk/28868/&r=ecm
  4. By: Martin M. Andreasen (Bank of England and CREATES)
    Abstract: This paper introduces a Quasi Maximum Likelihood (QML) approach based on the Central Difference Kalman Filter (CDKF) to estimate non-linear DSGE models with potentially non-Gaussian shocks. We argue that this estimator can be expected to be consistent and asymptotically normal for DSGE models solved up to third order. A Monte Carlo study shows that this QML estimator is basically unbiased and normally distributed in finite samples for DSGE models solved using a second order or a third order approximation. These results hold even when structural shocks are Gaussian, Laplace distributed, or display stochastic volatility.
    Keywords: Non-linear filtering, Non-Gaussian shocks, Quasi Maximum Likelihood, Stochastic volatility, Third order perturbation.
    JEL: C13 C15 E10 E32
    Date: 2010–07–20
    URL: http://d.repec.org/n?u=RePEc:aah:create:2010-30&r=ecm
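The central-difference idea behind the CDKF can be sketched in the scalar case: instead of analytic Jacobians, the filter uses central-difference slopes evaluated over an interval scaled by the current state uncertainty. The code below is a minimal one-dimensional illustration on an invented non-linear state-space model, not the multivariate CDKF the paper applies to DSGE models; delta = sqrt(3) is the interval scaling usually recommended for Gaussian states.

```python
import numpy as np

def cdkf_1d(ys, f, h, q, r, x0, p0, delta=np.sqrt(3.0)):
    """Scalar filter in the central-difference spirit: the Jacobians of
    the transition f and observation h are replaced by central-difference
    slopes over an interval proportional to the state's standard deviation."""
    x, p, out = x0, p0, []
    for y in ys:
        s = np.sqrt(p) * delta                  # predict step
        F = (f(x + s) - f(x - s)) / (2 * s)
        x, p = f(x), F * p * F + q
        s = np.sqrt(p) * delta                  # update step
        H = (h(x + s) - h(x - s)) / (2 * s)
        k = p * H / (H * p * H + r)
        x, p = x + k * (y - h(x)), (1.0 - k * H) * p
        out.append(x)
    return np.array(out)

# invented non-linear state space: bounded non-linear drift,
# mildly non-linear observation, Gaussian shocks
rng = np.random.default_rng(2)
f = lambda z: 0.8 * np.tanh(z)
h = lambda z: z + 0.05 * z ** 2
truth, ys, z = [], [], 0.0
for _ in range(500):
    z = f(z) + 0.3 * rng.standard_normal()
    truth.append(z)
    ys.append(h(z) + 0.5 * rng.standard_normal())
est = cdkf_1d(np.array(ys), f, h, q=0.09, r=0.25, x0=0.0, p0=1.0)
```

The filtered path tracks the latent state with a markedly smaller mean squared error than the raw observations.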
  5. By: Hoffmann, Marc; Munk, Axel; Schmidt-Hieber, Johannes
    Abstract: We study nonparametric estimation of the volatility function of a diffusion process from discrete data, when the data are blurred by additional noise. This noise can be white or correlated, and serves as a model for microstructure effects in financial modeling, when the data are given on an intra-day scale. By developing pre-averaging techniques combined with wavelet thresholding, we construct adaptive estimators that achieve a nearly optimal rate within a large scale of smoothness constraints of Besov type. Since the underlying signal (the volatility) is genuinely random, we propose a new criterion to assess the quality of estimation; we retrieve the usual minimax theory when this approach is restricted to deterministic volatility.
    Keywords: Adaptive estimation; diffusion processes; high-frequency data; microstructure noise; minimax estimation; semimartingales; wavelets.
    JEL: C14 C0 C22
    Date: 2010–07–27
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:24562&r=ecm
  6. By: Nikolaus Hautsch (Humboldt-Universität zu Berlin); Mark Podolskij (ETH Zurich and CREATES)
    Abstract: This paper provides theory as well as empirical results for pre-averaging estimators of the daily quadratic variation of asset prices. We derive jump robust inference for pre-averaging estimators, corresponding feasible central limit theorems and an explicit test on serial dependence in microstructure noise. Using transaction data of different stocks traded at the NYSE, we analyze the estimators’ sensitivity to the choice of the pre-averaging bandwidth and suggest an optimal interval length. Moreover, we investigate the dependence of pre-averaging based inference on the sampling scheme, the sampling frequency, microstructure noise properties as well as the occurrence of jumps. As a result of a detailed empirical study we provide guidance for optimal implementation of pre-averaging estimators and discuss potential pitfalls in practice.
    Keywords: Quadratic Variation, Market Microstructure Noise, Pre-averaging, Sampling Schemes, Jumps
    JEL: C14 C22 G10
    Date: 2010–07–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2010-29&r=ecm
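The core pre-averaging step can be written down compactly. The sketch below is the textbook estimator with weight function g(x) = min(x, 1-x) (so psi1 = 1 and psi2 = 1/12), applied to a simulated constant-volatility log-price contaminated by i.i.d. noise; it is an illustration of the general technique, not necessarily the exact jump-robust variant implemented in the paper.

```python
import numpy as np

def preaveraged_qv(y, K):
    """Pre-averaging estimate of quadratic variation from noisy log-prices y.
    Local averages of returns damp the noise; the remaining noise bias is
    removed using the standard noise-variance estimate sum(dy^2)/(2n)."""
    n = len(y) - 1
    dy = np.diff(y)
    j = np.arange(1, K)
    g = np.minimum(j / K, 1 - j / K)            # weight g(x) = min(x, 1-x)
    psi1, psi2 = 1.0, 1.0 / 12.0                # integral of g'^2 and of g^2
    ybar = np.convolve(dy, g[::-1], mode="valid")  # overlapping pre-averaged returns
    noise_var = np.sum(dy ** 2) / (2 * n)       # omega^2 when noise dominates
    raw = np.sum(ybar ** 2)
    return (raw - (n / K) * psi1 * noise_var) / (K * psi2)

# one trading day (invented): Brownian log-price, sigma = 0.2, i.i.d. noise
rng = np.random.default_rng(3)
n, sigma, omega = 23400, 0.2, 0.001
x = np.cumsum(sigma / np.sqrt(n) * rng.standard_normal(n))
y = x + omega * rng.standard_normal(n)
K = int(np.sqrt(n))                             # bandwidth of order sqrt(n)
qv = preaveraged_qv(y, K)                       # true QV = sigma^2 = 0.04
```

The choice K ~ theta * sqrt(n) is exactly the pre-averaging bandwidth whose practical selection the paper investigates.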
  7. By: Jeff Hamrick; Yifei Huang; Constantinos Kardaras; Murad Taqqu
    Abstract: We develop a maximum penalized quasi-likelihood estimator for estimating in a nonparametric way the diffusion function of a diffusion process, as an alternative to more traditional kernel-based estimators. After developing a numerical scheme for computing the maximizer of the penalized maximum quasi-likelihood function, we study the asymptotic properties of our estimator by way of simulation. Under the assumption that overnight London Interbank Offered Rates (LIBOR); the USD/EUR, USD/GBP, JPY/USD, and EUR/USD nominal exchange rates; and 1-month, 3-month, and 30-year Treasury bond yields are generated by diffusion processes, we use our numerical scheme to estimate the diffusion function. Finally, we provide a guide to MATLAB software that executes the estimation procedure and that is available from the authors.
    Date: 2010–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1008.2421&r=ecm
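For contrast with the penalized approach above, the kernel-type baseline it competes with is easy to state: the squared diffusion function is the conditional expectation of squared increments per unit time, estimated by Nadaraya-Watson smoothing. The sketch below applies this kernel estimator (not the authors' penalized quasi-likelihood procedure) to a simulated Ornstein-Uhlenbeck path with invented parameters.

```python
import numpy as np

def kernel_diffusion(x, dt, grid, bw):
    """Gaussian-kernel (Nadaraya-Watson) estimate of the squared diffusion
    function: sigma^2(u) ~ E[(X_{t+dt} - X_t)^2 | X_t = u] / dt."""
    dx2 = np.diff(x) ** 2
    xs = x[:-1]
    est = []
    for u in grid:
        w = np.exp(-0.5 * ((xs - u) / bw) ** 2)
        est.append(np.sum(w * dx2) / (np.sum(w) * dt))
    return np.array(est)

# simulate an Ornstein-Uhlenbeck path with constant sigma = 0.5,
# so the true squared diffusion function is flat at 0.25
rng = np.random.default_rng(4)
dt, n, sigma = 0.01, 100_000, 0.5
shocks = sigma * np.sqrt(dt) * rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = x[t - 1] - 0.5 * x[t - 1] * dt + shocks[t]
est = kernel_diffusion(x, dt, grid=np.array([-0.5, 0.0, 0.5]), bw=0.1)
```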
  8. By: D'Agostino, Antonello (Central Bank and Financial Services Authority of Ireland); Bermingham, Colin (Central Bank and Financial Services Authority of Ireland)
    Abstract: The issue of forecast aggregation is to determine whether it is better to forecast a series directly or instead construct forecasts of its components and then sum these component forecasts. Notwithstanding some underlying theoretical results, it is generally accepted that forecast aggregation is an empirical issue. Empirical results in the literature often go unexplained. This leaves forecasters in the dark when confronted with the option of forecast aggregation. We take our empirical exercise a step further by considering the underlying issues in more detail. We analyse two price datasets, one for the United States and one for the Euro Area, which have distinctive dynamics and provide a guide to model choice. We also consider multiple levels of aggregation for each dataset. The models include an autoregressive model, a factor augmented autoregressive model, a large Bayesian VAR and a time-varying model with stochastic volatility. We find that once the appropriate model has been found, forecast aggregation can significantly improve forecast performance. These results are robust to the choice of data transformation.
    Date: 2010–08
    URL: http://d.repec.org/n?u=RePEc:cbi:wpaper:8/rt/10&r=ecm
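The direct-versus-aggregated question can be shown in miniature. In the invented example below, the aggregate is the sum of two components with very different AR(1) dynamics, so a single AR(1) fitted to the total is misspecified, while summing correctly specified component forecasts does much better; this is only a stylized illustration of the trade-off, not the paper's multi-model comparison.

```python
import numpy as np

def ar1_forecast(y):
    """OLS fit of y_t = c + phi * y_{t-1}; return the one-step-ahead forecast."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c + phi * y[-1]

# two components with very different persistence (invented parameters)
rng = np.random.default_rng(5)
n = 2000
a, b = np.zeros(n), np.zeros(n)
for t in range(1, n):
    a[t] = 0.95 * a[t - 1] + rng.standard_normal()
    b[t] = -0.5 * b[t - 1] + rng.standard_normal()
# recursive out-of-sample comparison of the two strategies
err_direct, err_agg = [], []
for t in range(1000, n - 1):
    direct = ar1_forecast(a[:t + 1] + b[:t + 1])                    # model the total
    aggregated = ar1_forecast(a[:t + 1]) + ar1_forecast(b[:t + 1])  # sum component forecasts
    actual = a[t + 1] + b[t + 1]
    err_direct.append((direct - actual) ** 2)
    err_agg.append((aggregated - actual) ** 2)
```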
  9. By: Paccagnini, Alessia
    Abstract: This paper presents the concept of model validation applied to Dynamic Stochastic General Equilibrium (DSGE) models. The main problem discussed is the approximation of the statistical representation of a DSGE model when not all endogenous variables are observable. Monte Carlo experiments in an artificial world are implemented to assess this problem by using the DSGE-VAR. Two Data Generating Processes are compared: a forward-looking and a backward-looking model. These experiments are followed by an empirical analysis with real-world data for the US economy.
    Keywords: Bayesian Analysis; DSGE Models; Vector Autoregressions; Monte Carlo experiments
    JEL: C32 C15 C01 C11
    Date: 2010–05–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:24509&r=ecm
  10. By: Mohitosh Kejriwal; Claude Lopez
    Abstract: Determining whether per capita output can be characterized by a stochastic trend is complicated by the fact that infrequent breaks in trend can bias standard unit root tests towards non-rejection of the unit root hypothesis. The bulk of the existing literature has focused on the application of unit root tests allowing for structural breaks in the trend function under the trend stationary alternative but not under the unit root null. These tests, however, provide little information regarding the existence and number of trend breaks. Moreover, these tests suffer from serious power and size distortions due to the asymmetric treatment of breaks under the null and alternative hypotheses. This paper estimates the number of breaks in trend employing procedures that are robust to the unit root/stationarity properties of the data. Our analysis of the per-capita GDP for OECD countries thereby permits a robust classification of countries according to the 'growth shift', 'level shift' and 'linear trend' hypotheses. In contrast to the extant literature, unit root tests conditional on the presence or absence of breaks do not provide evidence against the unit root hypothesis.
    Keywords: growth shift, level shift, structural change, trend breaks, unit root
    JEL: C22
    Date: 2009–12
    URL: http://d.repec.org/n?u=RePEc:pur:prukra:1227&r=ecm
  11. By: Peers, Y.; Fok, D.; Franses, Ph.H.B.F.
    Abstract: Although high-frequency diffusion data are nowadays available, common practice is still to use only yearly figures in order to get rid of seasonality. This paper proposes a diffusion model that captures seasonality in a way that naturally matches the overall S-shaped pattern. The model is based on the assumption that additional sales at seasonal peaks are drawn from previous or future periods. This implies that the seasonal pattern does not influence the underlying diffusion pattern. The model is compared with alternative approaches through simulations and empirical examples. As alternatives we consider the standard Generalized Bass Model and ignoring seasonality by using the basic Bass model. One of our main findings is that modeling seasonality in a Generalized Bass Model does generate good predictions, but gives biased estimates. In particular, the market potential parameter will be underestimated. Ignoring seasonality gives the true parameter estimates if data for the entire diffusion period are available. However, when only part of the diffusion period is available, estimates and predictions become biased. Our model gives correct estimates and predictions even if the full diffusion process is not yet available.
    Keywords: new product diffusion; seasonality
    Date: 2010–07–15
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:1765020378&r=ecm
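The mechanics can be sketched with the basic Bass model plus a simple seasonal overlay. The code below simulates monthly adoptions from the discrete Bass recursion and applies a multiplicative seasonal factor that averages one over each year, so the seasonal peaks borrow sales from neighbouring months while the cumulative S-curve is essentially unchanged; the parameter values are invented and the seasonal factor is an illustration of the idea, not the authors' exact specification.

```python
import numpy as np

def bass_adoptions(p, q, m, T):
    """Per-period adoptions from the discrete Bass model:
    n_t = (p + q * N_{t-1} / m) * (m - N_{t-1}),
    where p is innovation, q imitation, m the market potential."""
    N, out = 0.0, []
    for _ in range(T):
        n_t = (p + q * N / m) * (m - N)
        out.append(n_t)
        N += n_t
    return np.array(out)

# illustrative parameters (invented): 10 years of monthly adoptions
p, q, m = 0.003, 0.25, 1_000_000
base = bass_adoptions(p, q, m, T=120)
# multiplicative yearly seasonal factor with mean one: it reshuffles
# sales within each year without altering the underlying diffusion path
season = 1 + 0.3 * np.sin(2 * np.pi * np.arange(120) / 12)
seasonal = base * season
```

By the end of the sample, cumulative adoptions have essentially exhausted the market potential m, with the familiar single interior peak in per-period sales.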
  12. By: Feldkircher, Martin (Oesterreichische Nationalbank); Zeugner, Stefan (Université Libre de Bruxelles)
    Abstract: Ciccone and Jarocinski (2010) show that inference in Bayesian model averaging (BMA) can be highly sensitive to small changes in the dependent variable. In particular they demonstrate that the importance of growth determinants in explaining growth varies tremendously over different revisions of Penn World Table (PWT) income data. They conclude that ’agnostic’ priors appear too sensitive for this strand of growth empirics. In response, we show that the instability found owes much to a specific BMA set-up: the variation in results can be considerably reduced by applying an equally ’agnostic’, but flexible prior.
    Keywords: Bayesian model averaging; Growth determinants; Zellner’s g prior; Model uncertainty
    JEL: C11 C15 E01 O47
    Date: 2010–08–20
    URL: http://d.repec.org/n?u=RePEc:ris:sbgwpe:2010_012&r=ecm

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.