nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒11‒25
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. Smooth Minimum Distance Estimation and Testing in Conditional Moment Restrictions Models: Uniform in Bandwidth Theory By Pascal Lavergne; Valentin Patilea
  2. Estimation and Model Selection of Semiparametric Multivariate Survival Functions under General Censorship By Xiaohong Chen; Yanqin Fan; Demian Pouzo; Zhiliang Ying
  3. U-statistic Type Tests for Structural Breaks in Linear Regression Models By Jose Olmo; William Pouliot
  4. One for All and All for One: Regression Checks With Many Regressors By Pascal Lavergne; Valentin Patilea
  5. Improved small sample inference for efficient method of moments and indirect inference estimators By Veronika Czellar; Eric Zivot
  6. Asymptotic properties of the Bernstein density copula for dependent data By BOUEZMARNI, Taoufik; ROMBOUTS, Jeroen V.K.; TAAMOUTI, Abderrahim
  7. Covariance estimation via Fourier method in the presence of asynchronous trading and microstructure noise By S. Sanfelici; M. E. Mancino
  8. An easy test for two stationary long processes being uncorrelated via AR approximations By WANG, Shin-Huei; HSIAO, Cheng
  9. Spatial Dynamic Panel Model and System GMM: A Monte Carlo Investigation By Kukenova, Madina; Monteiro, Jose-Antonio
  10. Modified Fast Double Sieve Bootstraps for ADF Tests By Patrick Richard
  11. Practical Issues in the Analysis of Univariate GARCH Models By Eric Zivot
  12. Long Memory versus Structural Breaks in Modeling and Forecasting Realized Volatility By Kyongwook Choi; Wei-Choun Yu; Eric Zivot
  13. Valid Inference for a Class of Models Where Standard Inference Performs Poorly: Including Nonlinear Regression, ARMA, GARCH, and Unobserved Components By Jun Ma; Charles R. Nelson
  14. Bayesian inference for Hidden Markov Model By Rosella Castellano; Luisa Scaccia
  15. The effect of the great moderation on the U.S. business cycle in a time-varying multivariate trend-cycle model By Drew Creal; Siem Jan Koopman; Eric Zivot
  16. Considering threshold effects in the long-run equilibrium in a vector error correction model: An application to the German apple market By Goetz, Linde; von Cramon-Taubadel, Stephan
  17. Confidence Intervals for Estimates of Elasticities By J. G. Hirschberg; J. N. Lye; D. J. Slottje
  18. Alternative Approaches to Evaluation in Empirical Microeconomics By Blundell, Richard; Costa Dias, Monica
  19. Predictive Densities for Shire Level Wheat Yield in Western Australia By William E Griffiths; Lisa S Newton; Christopher J O’Donnell
  20. Tobit at Fifty: A Brief History of Tobin's Remarkable Estimator, of Related Empirical Methods, and of Limited Dependent Variable Econometrics in Health Economics By Kohei Enami; John Mullahy
  21. A Comment on Weak Instrument Robust Tests in GMM and the New Keynesian Phillips Curve By Eric Zivot; Saraswata Chaudhuri
  22. The Calibration of Probabilistic Economic Forecasts By John Galbraith; Simon van Norden

  1. By: Pascal Lavergne (Simon Fraser University); Valentin Patilea (IRMAR-INSA)
    Abstract: We propose a new estimation method for models defined by conditional moment restrictions that minimizes a distance criterion based on kernel smoothing. Whether the bandwidth parameter is fixed or decreases to zero with the sample size, our approach defines a whole class of estimators. We develop a theory that focuses on uniformity in bandwidth. We establish a root-n asymptotic representation of our estimator as a process depending on the bandwidth within a wide range including fixed bandwidths, and that applies to misspecified models. We also study an efficient version of our estimator. We develop inference procedures based on a distance metric statistic for testing restrictions on parameters, and we propose a new bootstrap technique. Our new methods apply to non-smooth problems, are simple to implement, and perform well in small samples. (A schematic implementation of the smoothed criterion follows this entry.)
    Keywords: Conditional Moments, Smoothing Methods
    JEL: C31 C13 C14
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp08-08&r=ecm
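    The criterion in this paper pairs residuals through a kernel in the conditioning variables. The following is a minimal sketch under assumed notation, not the authors' exact construction: the quadratic model f, the Gaussian kernel and the fixed bandwidth h are illustrative choices.
```python
import numpy as np
from scipy.optimize import minimize

def smd_objective(theta, y, x, h=0.5):
    """Kernel-smoothed distance criterion for E[y - f(x, theta) | x] = 0.

    Illustrative model: f(x, theta) = theta0 + theta1 * x + theta2 * x**2.
    """
    e = y - (theta[0] + theta[1] * x + theta[2] * x ** 2)  # residuals
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d ** 2)                              # Gaussian kernel weights
    np.fill_diagonal(K, 0.0)                               # drop i == j terms
    n = len(y)
    return (e @ K @ e) / (n * (n - 1))

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 1.0 + 0.5 * x - 0.7 * x ** 2 + 0.1 * rng.standard_normal(200)
fit = minimize(smd_objective, x0=np.zeros(3), args=(y, x), method="Nelder-Mead")
print(fit.x)  # estimates of (theta0, theta1, theta2)
```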
  2. By: Xiaohong Chen (Cowles Foundation, Yale University); Yanqin Fan (Dept. of Economics, Vanderbilt University); Demian Pouzo (Dept. of Economics, Columbia University); Zhiliang Ying (Dept. of Statistics, Columbia University)
    Abstract: Many models of semiparametric multivariate survival functions are characterized by nonparametric marginal survival functions and parametric copula functions, where different copulas imply different dependence structures. This paper considers estimation and model selection for these semiparametric multivariate survival functions, allowing for misspecified parametric copulas and data subject to general censoring. We first establish convergence of the two-step estimator of the copula parameter to the pseudo-true value, defined as the value of the parameter that minimizes the Kullback-Leibler information criterion (KLIC) between the parametric-copula-induced multivariate density and the unknown true density. We then derive its root-n asymptotically normal distribution and provide a simple consistent asymptotic variance estimator by accounting for the impact of the nonparametric estimation of the marginal survival functions. These results are used to establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application of the model selection test to the Loss-ALAE insurance data set is provided.
    Keywords: Multivariate survival models, Misspecified copulas, Penalized pseudo-likelihood ratio, Fixed or random censoring, Kaplan-Meier estimator
    JEL: C14 C22 G22
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1683&r=ecm
  3. By: Jose Olmo (Department of Economics, City University, London); William Pouliot (Department of Economics, City University, London)
    Abstract: This article introduces a U-statistic type process based on a kernel function that can depend on nuisance parameters. We show that this process easily accommodates anti-symmetric kernels, which are very useful for detecting changing patterns in the dynamics of time series. The theory is applied to tests of the structural break hypothesis in linear regression models. In particular, the flexibility of these processes is exploited to introduce simultaneous and joint tests that exhibit statistical power against changes in either the intercept or the slope. In contrast to the existing literature, these tests can distinguish rejections due to changes in the intercept from rejections due to changes in the slope; they allow control of the global error rate; and they are explicitly designed to have power when the error distribution is asymmetric. The tests can also incorporate different weight functions devised to detect changes early or late in the sample, and they show very good performance in small samples. They therefore outperform the CUSUM-type tests widely employed in this literature.
    Keywords: Change-Point tests; CUSUM test; Linear regression models; Stochastic processes; U-statistics
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:cty:dpaper:0815&r=ecm
  4. By: Pascal Lavergne (Simon Fraser University); Valentin Patilea (CREST-ENSAI and IRMAR-INSA)
    Abstract: We develop a novel approach to building checks of parametric regression models when many regressors are present, based on a class of sufficiently rich semiparametric alternatives, namely single-index models. We propose an omnibus test based on the kernel method that performs against a sequence of directional nonparametric alternatives as if there were only one regressor, whatever the number of regressors. This test can be viewed as a smooth version of the integrated conditional moment (ICM) test of Bierens. Qualitative information can easily be incorporated into the procedure to enhance power. Our test is not very sensitive to the smoothing parameter and performs better than several known lack-of-fit tests in multidimensional settings, as illustrated by extensive simulations and an application to a cross-country growth regression.
    Keywords: Dimensionality, Hypothesis testing, Nonparametric methods
    JEL: C52 C12
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp08-06&r=ecm
  5. By: Veronika Czellar (HEC Paris); Eric Zivot (Department of Economics, University of Washington)
    Abstract: The efficient method of moments (EMM) and indirect inference (II) are two widely used simulation-based techniques for estimating structural models that have intractable likelihood functions. The poor performance in finite samples of traditional coefficient and overidentification tests based on the EMM or II objective function indicates a failure of first-order asymptotic theory for the distribution of these tests, especially for EMM. We propose practically feasible saddlepoint coefficient tests for hypotheses on structural coefficients estimated by II and EMM that are asymptotically chi-square distributed and have much better finite sample performance than traditional tests. To construct the tests, we make use of the fact that II and EMM estimators have asymptotically equivalent M-estimators, and then use the coefficient saddlepoint tests for M-estimators developed by Robinson, Ronchetti and Young (2003). We evaluate the finite sample behavior of our coefficient saddlepoint tests by Monte Carlo methods using an MA(1) model. Whereas traditional likelihood-ratio type tests can exhibit substantial size distortions, we show that our saddlepoint tests do not. We also find that the size-adjusted power of our saddlepoint tests is similar to, and sometimes greater than, the power of traditional tests. (A schematic indirect inference example follows this entry.)
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2008-04&r=ecm
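    The tests build on the indirect inference estimator, which matches auxiliary-model coefficients computed on observed and simulated data. A minimal sketch for an MA(1) model with an assumed AR(3) auxiliary model and common random numbers follows; the saddlepoint tests themselves are not reproduced here.
```python
import numpy as np
from scipy.optimize import minimize_scalar
from statsmodels.tsa.ar_model import AutoReg

def ma1(theta, eps):
    """Generate an MA(1) series y_t = eps_t + theta * eps_{t-1}."""
    return eps[1:] + theta * eps[:-1]

def aux_params(y, p=3):
    """Auxiliary AR(p) coefficients (argument of the binding function)."""
    return AutoReg(y, lags=p, trend="c").fit().params

rng = np.random.default_rng(1)
y_obs = ma1(0.5, rng.standard_normal(501))        # "observed" data, theta0 = 0.5
beta_hat = aux_params(y_obs)

eps_sim = rng.standard_normal(10001)              # common random numbers, S >> n

def ii_loss(theta):
    beta_tilde = aux_params(ma1(theta, eps_sim))  # simulated binding function
    d = beta_hat - beta_tilde
    return d @ d                                  # identity weighting matrix

res = minimize_scalar(ii_loss, bounds=(-0.95, 0.95), method="bounded")
print(res.x)  # II estimate of theta
```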
  6. By: BOUEZMARNI, Taoufik; ROMBOUTS, Jeroen V.K.; TAAMOUTI, Abderrahim
    Abstract: Copulas are extensively used for dependence modeling. In many cases the data do not reveal how the dependence can be modeled using a particular parametric copula. Nonparametric copulas do not share this problem, since they are entirely data based. This paper proposes nonparametric estimation of the copula density for α-mixing data using Bernstein polynomials. We study the asymptotic properties of the Bernstein density copula: we provide the exact asymptotic bias and variance, and we establish uniform strong consistency and asymptotic normality. (A schematic implementation follows this entry.)
    Keywords: nonparametric estimation, copula, Bernstein polynomial, α-mixing, asymptotic properties, boundary bias
    JEL: C13 C14
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2008045&r=ecm
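    A minimal sketch of a Bernstein copula density estimator of the kind studied here, written as a mixture of products of Beta densities with cell weights computed from the pseudo-observations; the order k and the Gaussian data-generating example are illustrative assumptions.
```python
import numpy as np
from scipy.stats import beta

def bernstein_copula_density(u, v, U, V, k=10):
    """Bernstein estimator of the copula density at (u, v).

    U, V are pseudo-observations (ranks / (n+1)); k is the polynomial order.
    """
    n = len(U)
    # proportion of pseudo-observations in each cell of a k x k grid
    iu = np.minimum((U * k).astype(int), k - 1)
    iv = np.minimum((V * k).astype(int), k - 1)
    p = np.zeros((k, k))
    np.add.at(p, (iu, iv), 1.0 / n)
    # mixture of products of Beta(i+1, k-i) densities; weights sum to one
    bu = np.array([beta.pdf(u, i + 1, k - i) for i in range(k)])
    bv = np.array([beta.pdf(v, j + 1, k - j) for j in range(k)])
    return bu @ p @ bv

rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
ranks = lambda x: x.argsort().argsort() + 1
U = ranks(z[:, 0]) / 501.0
V = ranks(z[:, 1]) / 501.0
print(bernstein_copula_density(0.5, 0.5, U, V))  # density near the centre
```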
  7. By: S. Sanfelici; M. E. Mancino
    Abstract: We analyze the effects of market microstructure noise on the Fourier estimator of multivariate volatilities. We prove that the estimator is consistent in the case of asynchronous data and robust in the presence of microstructure noise. This result is obtained through an analytical computation of the bias and the mean squared error of the Fourier estimator and confirmed by Monte Carlo experiments. (A schematic implementation of the Fourier estimator follows this entry.)
    JEL: C14 C32 G1
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:par:dipeco:2008-me01&r=ecm
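    A minimal sketch of the Fourier covariance estimator in its Dirichlet-kernel form, with observation times rescaled to [0, 2*pi]; the cutoff frequency N and the toy data are illustrative assumptions, and no noise correction is applied.
```python
import numpy as np

def fourier_covariance(t1, p1, t2, p2, N=50):
    """Fourier estimator of integrated covariance from asynchronous log-prices.

    t1, t2: observation times rescaled to [0, 2*pi]; p1, p2: log-prices.
    Uses the Dirichlet-kernel representation with cutoff frequency N.
    """
    r1, r2 = np.diff(p1), np.diff(p2)           # log-returns
    s1, s2 = t1[:-1], t2[:-1]                   # left-point times of returns
    x = s1[:, None] - s2[None, :]
    # rescaled Dirichlet kernel D_N(x) = sin((N + 1/2) x) / ((2N+1) sin(x/2))
    with np.errstate(invalid="ignore", divide="ignore"):
        D = np.sin((N + 0.5) * x) / ((2 * N + 1) * np.sin(x / 2))
    D = np.where(np.isclose(x, 0.0), 1.0, D)    # D_N(0) = 1
    return r1 @ D @ r2

# toy example: one Brownian path observed on two irregular grids
rng = np.random.default_rng(3)
T = 2 * np.pi
grid = np.sort(rng.uniform(0, T, 2000))
w = np.cumsum(np.sqrt(np.diff(np.concatenate([[0.0], grid]))) *
              rng.standard_normal(2000))
keep1 = np.sort(rng.choice(2000, 700, replace=False))
keep2 = np.sort(rng.choice(2000, 500, replace=False))
print(fourier_covariance(grid[keep1], w[keep1], grid[keep2], w[keep2]))
# should be close to the integrated covariance, here T = 2*pi
```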
  8. By: WANG, Shin-Huei (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); HSIAO, Cheng
    Abstract: This paper proposes an easy test for whether two stationary autoregressive fractionally integrated moving average (ARFIMA) processes are uncorrelated, via AR approximations. We prove that an ARFIMA process can be approximated well by an autoregressive (AR) model and establish the theoretical foundation of Haugh's (1976) statistics for testing whether two ARFIMA processes are uncorrelated. Using the AIC or Mallows' Cp criterion as a guide, we demonstrate through Monte Carlo studies that a low-order AR(k) model is sufficient to prewhiten an ARFIMA process, and that the Haugh test statistics perform very well in finite samples. We illustrate the methodology by investigating the independence between the volatilities of two daily nominal dollar exchange rates, the Euro and the Japanese Yen, and find strong simultaneous correlation between the volatilities of the Euro and the Yen within 25 days. (A schematic implementation of the prewhitening test follows this entry.)
    Keywords: forecasting, long memory process, structural break.
    JEL: C22 C53
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2008047&r=ecm
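    A minimal sketch of the Haugh-type procedure, assuming the statsmodels AR machinery: each series is prewhitened by an AR model chosen by AIC, and the statistic sums squared residual cross-correlations over a window of lags.
```python
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

def haugh_test(x, y, M=25, maxlag=12):
    """Haugh (1976) test that two series are uncorrelated, via AR prewhitening.

    The statistic sums n * r_uv(j)^2 over lags -M..M and is chi2(2M+1)
    under the null of no cross-correlation.
    """
    resid = []
    for s in (x, y):
        sel = ar_select_order(s, maxlag=maxlag, ic="aic").ar_lags
        resid.append(AutoReg(s, lags=sel if sel else 1, trend="c").fit().resid)
    u, v = resid
    n = min(len(u), len(v))
    u = (u[-n:] - u[-n:].mean()) / u[-n:].std()
    v = (v[-n:] - v[-n:].mean()) / v[-n:].std()
    S = 0.0
    for j in range(-M, M + 1):
        if j >= 0:
            r = (u[j:] * v[:n - j]).sum() / n
        else:
            r = (u[:n + j] * v[-j:]).sum() / n
        S += n * r ** 2
    return S, chi2.sf(S, df=2 * M + 1)  # statistic and p-value

rng = np.random.default_rng(4)
e = rng.standard_normal((2, 500))
x = np.zeros(500)
y = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + e[0, t]
    y[t] = 0.5 * y[t - 1] + e[1, t]
print(haugh_test(x, y))  # independent series: expect a large p-value
```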
  9. By: Kukenova, Madina; Monteiro, Jose-Antonio
    Abstract: So far there is no estimator that makes it possible to estimate a dynamic panel model that includes a spatial lag as well as other potentially endogenous variables. This paper therefore asks whether it is suitable to instrument the spatial lag variable (which is by definition endogenous/simultaneous) using the instruments proposed by system GMM, i.e., lagged values of the spatial lag. The Monte Carlo investigation highlights the possibility of estimating a dynamic spatial lag model using the extended GMM estimator proposed by Arellano and Bover (1995) and Blundell and Bond (1998), especially when N and T are large.
    Keywords: Spatial Econometrics; Dynamic Panel Model; System GMM; Monte Carlo Simulations
    JEL: C15 C33
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:11569&r=ecm
  10. By: Patrick Richard (GREDI, Département d'économique, Université de Sherbrooke)
    Abstract: This paper studies the finite sample performance of the sieve bootstrap augmented Dickey-Fuller (ADF) unit root test. It is well known that this test's accuracy, in terms of rejection probability under the null, depends greatly on the underlying DGP. Through extensive simulations, we find that it also depends on the numbers of lags employed in the bootstrap DGP and in the bootstrap ADF regression. Based on this finding, and using some well-established theoretical results, we propose a simple modification that significantly improves the test's accuracy. We also introduce different versions of the fast double bootstrap, each modified according to the same theoretical basis. According to our simulations, these new testing procedures have lower error in rejection probability under the null while retaining good power. (A schematic sieve bootstrap ADF test follows this entry.)
    Keywords: ARMA; bias correction; GLS
    JEL: C13 C22
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:shr:wpaper:08-17&r=ecm
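    A minimal sketch of a baseline sieve bootstrap ADF p-value under the unit-root null; the fixed sieve and ADF lag orders are illustrative assumptions, and none of the paper's proposed modifications or fast double bootstrap variants are implemented.
```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import adfuller

def sieve_bootstrap_adf(y, p_sieve=4, p_adf=4, B=399, seed=0):
    """Sieve bootstrap p-value for the ADF unit-root test.

    Null data are rebuilt from an AR(p_sieve) fitted to first differences,
    with i.i.d. resampling of its recentred residuals.
    """
    rng = np.random.default_rng(seed)
    tau_obs = adfuller(y, maxlag=p_adf, autolag=None)[0]
    dy = np.diff(y)
    ar = AutoReg(dy, lags=p_sieve, trend="c").fit()
    coefs = ar.params                      # [const, phi_1, ..., phi_p]
    resid = ar.resid - ar.resid.mean()
    count = 0
    for _ in range(B):
        e = rng.choice(resid, size=len(dy) + p_sieve, replace=True)
        dy_star = list(dy[:p_sieve])       # warm start from the sample
        for t in range(p_sieve, len(dy) + p_sieve):
            ar_part = sum(coefs[1 + i] * dy_star[t - 1 - i]
                          for i in range(p_sieve))
            dy_star.append(coefs[0] + ar_part + e[t])
        y_star = np.concatenate([[y[0]], y[0] + np.cumsum(dy_star)])
        count += adfuller(y_star, maxlag=p_adf, autolag=None)[0] <= tau_obs
    return (count + 1) / (B + 1)           # bootstrap p-value

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(200))    # random walk: the null is true
print(sieve_bootstrap_adf(y))
```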
  11. By: Eric Zivot (Department of Economics, University of Washington)
    Abstract: This paper gives a tour through the empirical analysis of univariate GARCH models for financial time series, with stops along the way to discuss various practical issues associated with model specification, estimation, diagnostic evaluation and forecasting. (A minimal estimation example follows this entry.)
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2008-03-fc&r=ecm
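    A minimal estimation example using the Python arch package (an assumed tool choice, not the paper's): a GARCH(1,1) with Student-t errors fitted to scaled returns, with summary diagnostics and variance forecasts.
```python
import numpy as np
from arch import arch_model

# simulated daily returns for illustration; replace with real data
rng = np.random.default_rng(5)
returns = 0.01 * rng.standard_normal(1000)

am = arch_model(100 * returns, mean="Constant", vol="GARCH",
                p=1, q=1, dist="t")           # GARCH(1,1), Student-t errors
res = am.fit(disp="off")
print(res.summary())                          # specification diagnostics
print(res.forecast(horizon=10).variance)      # 10-step variance forecasts
```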
  12. By: Kyongwook Choi (Department of Economics, The University of Seoul,); Wei-Choun Yu (Economics and Finance Department, Winona State University); Eric Zivot (Department of Economics, University of Washington)
    Abstract: We explore the possibility of structural breaks in the daily realized volatility of the Deutschemark/Dollar, Yen/Dollar and Yen/Deutschemark spot exchange rates, series with observed long-memory properties. We find that structural breaks can partly explain the persistence of realized volatility. We propose a VAR-RV-Break model that provides superior predictive ability compared with most of the forecasting models when the future break is known. With unknown break dates and sizes, however, we find that the VAR-RV-I(d) long memory model is a very robust forecasting method even when the true financial volatility series are generated by structural breaks.
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2008-20&r=ecm
  13. By: Jun Ma (U of Alabama); Charles R. Nelson
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2008-06-r&r=ecm
  14. By: Rosella Castellano (University of Macerata); Luisa Scaccia (University of Macerata)
    Abstract: Hidden Markov Models can be considered an extension of mixture models, allowing for dependent observations. In a hierarchical Bayesian framework, we show how Reversible Jump Markov Chain Monte Carlo techniques can be used to estimate the parameters of the model, as well as the number of regimes. We consider a mixture of normal distributions characterized by different means and variances under each regime, extending the model proposed by Robert et al. (2000), which is based on a mixture of zero-mean normal distributions.
    JEL: O1 O11
    Date: 2007–10
    URL: http://d.repec.org/n?u=RePEc:mcr:wpdief:wpaper00043&r=ecm
  15. By: Drew Creal (Department of Econometrics, Vrije Universiteit Amsterdam); Siem Jan Koopman (Department of Econometrics, Vrije Universiteit Amsterdam); Eric Zivot (University of Washington)
    Abstract: In this paper we investigate whether the dynamic properties of the U.S. business cycle have changed in the last fifty years. For this purpose we develop a flexible business cycle indicator that is constructed from a moderate set of macroeconomic time series. The coincident economic indicator is based on a multivariate trend-cycle decomposition model that accounts for time variation in macroeconomic volatility, known as the great moderation. In particular, we consider an unobserved components time series model with a common cycle that is shared across different time series but adjusted for phase shift and amplitude. The extracted cycle can be interpreted as the result of a model-based bandpass filter and is designed to emphasize the business cycle frequencies that are of interest to applied researchers and policymakers. Stochastic volatility processes and mixture distributions for the irregular components and the common cycle disturbances enable us to account for all the heteroskedasticity present in the data. The empirical results are based on a Bayesian analysis and show that time-varying volatility is present only in a selection of idiosyncratic components, while the coefficients driving the dynamic properties of the business cycle indicator have remained stable over the last fifty years.
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2008-15&r=ecm
  16. By: Goetz, Linde; von Cramon-Taubadel, Stephan
    Abstract: We propose a three-step procedure to estimate a regime-dependent vector error correction model (VECM). In this model, not only is the short-run adjustment process towards equilibrium non-linear, as in threshold VECM and Markov switching VECM frameworks, but the long-run equilibrium relationship itself can also display threshold-type non-linearity. The proposed approach is unique in explicitly testing the null hypothesis of linear cointegration against the alternative of threshold cointegration, based on the Gonzalo and Pitarakis (2006) test. The model is applied to apple price data on wholesale markets in Hamburg and Munich, using the share of domestic apples in total wholesale trade as the threshold variable. We identify four price transmission regimes characterized by different equilibrium relationships and short-run adjustment processes. The proposed approach is particularly suitable for capturing the irregular seasonal threshold effects in price transmission that are typical for fresh fruits and vegetables. (A schematic threshold grid search follows this entry.)
    Keywords: threshold cointegration, spatial price transmission, vector error correction model, Marketing
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:ags:eaae08:44247&r=ecm
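    A minimal sketch of the regime-dependent long-run step only: a least-squares grid search for a threshold in a share variable q that splits the cointegrating regression into two regimes. The Gonzalo and Pitarakis (2006) linearity test and the short-run VECM estimation are not reproduced; the variable names and toy data are illustrative assumptions.
```python
import numpy as np

def threshold_longrun(p1, p2, q, trim=0.15):
    """Grid-search a threshold gamma in q that splits the long-run
    regression p1 = a + b * p2 into two regimes, minimising total SSR."""
    cands = np.quantile(q, np.linspace(trim, 1 - trim, 50))
    best = (np.inf, None)
    for g in cands:
        ssr = 0.0
        for mask in (q <= g, q > g):
            X = np.column_stack([np.ones(mask.sum()), p2[mask]])
            b, *_ = np.linalg.lstsq(X, p1[mask], rcond=None)
            ssr += np.sum((p1[mask] - X @ b) ** 2)
        if ssr < best[0]:
            best = (ssr, g)
    return best  # (minimised SSR, estimated threshold)

rng = np.random.default_rng(8)
p2 = np.cumsum(rng.standard_normal(300))
q = rng.uniform(0, 1, 300)                      # e.g. share of domestic apples
b = np.where(q <= 0.4, 1.0, 0.6)                # regime-dependent long-run slope
p1 = b * p2 + 0.3 * rng.standard_normal(300)
print(threshold_longrun(p1, p2, q))             # threshold estimate near 0.4
```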
  17. By: J. G. Hirschberg; J. N. Lye; D. J. Slottje
    Abstract: Elasticities are often estimated from the results of demand analysis; however, drawing inferences from them may involve assumptions that could influence the outcome. In this paper we investigate one of the most common forms of elasticity, which is defined as a ratio of estimated relationships, and demonstrate how the Fieller method for the construction of confidence intervals can be used to draw inferences. We estimate elasticities of expenditure from Engel curves using a variety of estimation models. Parametric Engel curves are modelled using OLS, MM robust regression, and Tobit. Semiparametric Engel curves are estimated using a penalized spline regression. We demonstrate the construction of confidence intervals for the expenditure elasticities for a series of expenditure levels, as well as the estimated cumulative density function for the elasticity evaluated for a particular household. (A minimal Fieller-interval computation follows this entry.)
    Keywords: Engel curves, Fieller method, Tobit, robust regression, semiparametric
    JEL: C12 C13 C14 C24 D12
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1053&r=ecm
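    A minimal sketch of the Fieller interval for a ratio of estimates, obtained from the quadratic inequality (a - theta*b)^2 <= z^2 * (vaa - 2*theta*vab + theta^2*vbb); the numerical inputs are illustrative assumptions.
```python
import numpy as np
from scipy.stats import norm

def fieller_interval(a, b, vaa, vbb, vab, level=0.95):
    """Fieller confidence interval for the ratio theta = a / b.

    a, b: point estimates; vaa, vbb, vab: variances and covariance.
    Returns (lower, upper) when the interval is finite.
    """
    z2 = norm.ppf(0.5 + level / 2) ** 2
    A = b ** 2 - z2 * vbb
    Bq = a * b - z2 * vab
    C = a ** 2 - z2 * vaa
    disc = Bq ** 2 - A * C
    if A <= 0 or disc < 0:
        raise ValueError("interval unbounded: b not significantly nonzero")
    return (Bq - np.sqrt(disc)) / A, (Bq + np.sqrt(disc)) / A

# illustrative inputs: ratio estimate a/b = 0.8/1.6 = 0.5
print(fieller_interval(0.8, 1.6, vaa=0.04, vbb=0.09, vab=0.01))
```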
  18. By: Blundell, Richard (University College London); Costa Dias, Monica (Institute for Fiscal Studies, London)
    Abstract: This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy, assumptions, and data requirements of each approach are discussed, drawing on empirical evidence from the education and employment policy evaluation literature. A workhorse simulation model of education returns is used throughout the paper to discuss and illustrate each approach. The full set of STATA datasets and do-files is available free online and can be used to reproduce all estimation and simulation results.
    Keywords: evaluation methods
    JEL: C52 J24
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3800&r=ecm
  19. By: William E Griffiths; Lisa S Newton; Christopher J O’Donnell
    Abstract: Wheat yield in Western Australia (WA) depends critically on rainfall during three periods – germination, growing and flowering. The degree of uncertainty attached to a wheat-yield prediction depends on whether the prediction is made before or after the rainfall in each period has been realised. Bayesian predictive densities that reflect the different levels of uncertainty in wheat-yield predictions made at four different points in time are derived for five shires in Western Australia. The framework used for prediction is a linear regression model with stochastic regressors and inequality restrictions on the coefficients. An algorithm is developed that can be used more generally for obtaining Bayesian predictive densities in linear and nonlinear models with inequality constraints, and with or without stochastic regressors.
    Keywords: Bayesian forecasting; inequality restrictions; random regressors.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:mlb:wpaper:1051&r=ecm
  20. By: Kohei Enami; John Mullahy
    Abstract: Practitioners of empirical health economics might be forgiven for paying little heed to the recent 50th anniversary of the publication of one of the most important papers in its methodological heritage: James Tobin's widely-cited 1958 Econometrica paper that developed what later became known as the Tobit estimator. This golden anniversary milestone provides a fitting opportunity to reflect on Tobin's contribution and to assess the role that econometric limited dependent variable modeling has played in empirical health economics. Of primary focus here is how Tobin's estimator came to be and came to take root in empirical health economics. The paper provides a brief history of Tobin's estimator and related methods up through about 1971, discusses the early applications of Tobit and related estimators in health economics, i.e., the "technology diffusion" of Tobit in health economics, and offers some concluding remarks. (A minimal Tobit likelihood implementation follows this entry.)
    JEL: I1
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14512&r=ecm
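    For reference, a minimal maximum-likelihood implementation of Tobin's estimator for data censored at zero; the simulated data and starting values are illustrative.
```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, y, X):
    """Negative log-likelihood of the type-1 Tobit model, censoring at zero."""
    beta, logsig = params[:-1], params[-1]
    sig = np.exp(logsig)
    xb = X @ beta
    ll = np.where(y <= 0,
                  norm.logcdf(-xb / sig),                # Pr(y* <= 0)
                  norm.logpdf((y - xb) / sig) - logsig)  # density of observed y
    return -ll.sum()

rng = np.random.default_rng(6)
n = 1000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y_star = X @ np.array([0.5, 1.0]) + rng.standard_normal(n)
y = np.maximum(y_star, 0.0)                              # censor at zero

start = np.zeros(X.shape[1] + 1)
fit = minimize(tobit_negloglik, start, args=(y, X), method="BFGS")
print(fit.x[:-1], np.exp(fit.x[-1]))   # beta estimates and sigma
```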
  21. By: Eric Zivot (University of Washington); Saraswata Chaudhuri (University of North Carolina, Chapel Hill)
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2008-23&r=ecm
  22. By: John Galbraith; Simon van Norden
    Abstract: A probabilistic forecast is the estimated probability with which a future event will satisfy a specified criterion. One interesting feature of such forecasts is their calibration, or the match between predicted probabilities and actual outcome probabilities. Calibration has been evaluated in the past by grouping probability forecasts into discrete categories. Here we show that we can do so without discrete groupings; the kernel estimators that we use produce efficiency gains and smooth estimated curves relating predicted and actual probabilities. We use such estimates to evaluate the empirical evidence on calibration error in a number of economic applications, including recession and inflation prediction, using both forecasts made and stored in real time and pseudo-forecasts made using the data vintage available at the forecast date. We evaluate outcomes using both first-release outcome measures and later, thoroughly revised data. We find strong evidence of incorrect calibration in professional forecasts of recessions and inflation. We also present evidence of asymmetries in the performance of inflation forecasts based on real-time output gaps. (A minimal kernel calibration-curve sketch follows this entry.)
    Keywords: calibration, probability forecast, real-time data, inflation, recession
    Date: 2008–11–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2008s-28&r=ecm
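    A minimal sketch of a kernel (Nadaraya-Watson) calibration curve of the kind described here: realized 0/1 outcomes are smoothed against forecast probabilities, and departures from the 45-degree line indicate calibration error. The bandwidth and toy data are illustrative assumptions.
```python
import numpy as np

def calibration_curve(p, y, grid=None, h=0.075):
    """Nadaraya-Watson estimate of Pr(event | forecast probability = p0).

    p: forecast probabilities in [0, 1]; y: 0/1 realized outcomes.
    Perfect calibration puts the curve on the 45-degree line.
    """
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    d = (p[None, :] - grid[:, None]) / h
    K = np.exp(-0.5 * d ** 2)                # Gaussian kernel weights
    return grid, (K @ y) / K.sum(axis=1)

# toy example: overconfident forecasts of a binary event
rng = np.random.default_rng(7)
true_p = rng.uniform(0.1, 0.9, 5000)
y = (rng.uniform(size=5000) < true_p).astype(float)
p = np.clip(0.5 + 1.4 * (true_p - 0.5), 0.01, 0.99)   # forecasts too extreme
grid, curve = calibration_curve(p, y)
print(np.column_stack([grid, curve]))    # compare column 2 with column 1
```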

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.