nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒08‒30
seventeen papers chosen by
Sune Karlsson
Orebro University

  1. Combining parametric and nonparametric approaches for more efficient time series prediction By Dabo-Niang, Sophie; Francq, Christian; Zakoian, Jean-Michel
  2. Nonparametric Estimation in Random Coefficients Binary Choice Models By Eric Gautier; Yuichi Kitamura
  3. Testing for Unit Root against LSTAR model – wavelet improvements under GARCH distortion By Li, Yushu; Shukur, Ghazi
  4. Optimal Probabilistic Forecasts for Counts By Brendan P.M. McCabe; Gael M. Martin; David Harris
  5. Testing for Long Memory Against ESTAR Nonlinearities By Kuswanto, Heri; Sibbertsen, Philipp
  6. Noncausal vector autoregression By Lanne, Markku; Saikkonen, Pentti
  7. Optimal Comparison of Misspecified Moment Restriction Models By Vadim Marmer; Taisuke Otsu
  8. Robustness, Infinitesimal Neighborhoods, and Moment Restrictions By Yuichi Kitamura; Taisuke Otsu; Kirill Evdokimov
  9. On the Asymptotic Optimality of Empirical Likelihood for Testing Moment Restrictions By Yuichi Kitamura; Andres Santos; Azeem M. Shaikh
  10. Low-Frequency Robust Cointegration Testing By Ulrich Müller; Mark W. Watson
  11. Developing Median Regression for SURE Models - with Application to 3-Generation Immigrants’ data in Sweden By Zeebari, Zangin; Shukur, Ghazi
  12. Macro modelling with many models By Ida Wolden Bache; James Mitchell; Francesco Ravazzolo; Shaun P. Vahey
  13. Matching on the Estimated Propensity Score By Alberto Abadie; Guido W. Imbens
  14. Forecasting the Real Exchange Rate using a Long Span of Data. A Rematch: Linear vs Nonlinear By David Peel; Ivan Paya; E Pavlidis
  15. Understanding forecast failure of ESTAR models of real exchange rates By Daniel Buncic
  16. Exploring Time-Varying Jump Intensities: Evidence from S&P500 Returns and Options By Peter Christoffersen; Kris Jacobs; Chayawat Ornthanalai
  17. Calibration and Resolution Diagnostics for Bank of England Density Forecasts By John Galbraith; Simon van Norden

  1. By: Dabo-Niang, Sophie; Francq, Christian; Zakoian, Jean-Michel
    Abstract: We introduce a two-step procedure for more efficient nonparametric prediction of a strictly stationary process admitting an ARMA representation. The procedure is based on the estimation of the ARMA representation, followed by a nonparametric regression where the ARMA residuals are used as explanatory variables. Compared to standard nonparametric regression methods, the number of explanatory variables can be reduced because our approach exploits the linear dependence of the process. We establish consistency and asymptotic normality results for our estimator. A Monte Carlo study and an empirical application on stock market indices suggest that significant gains can be achieved with our approach.
    Keywords: ARMA representation; noisy data; Nonparametric regression; optimal prediction
    JEL: C14 C22
    Date: 2009
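The two-step procedure in this abstract can be sketched in a few lines: estimate the linear (ARMA) part first, then run a nonparametric regression that uses the estimated residuals as explanatory variables. The AR(1) fit, Gaussian kernel, and fixed bandwidth below are illustrative simplifications on simulated data, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary AR(1) process (a stand-in for an ARMA representation)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

# Step 1: estimate the linear part -- here an AR(1) fitted by OLS
x, z = y[:-1], y[1:]
phi_hat = (x @ z) / (x @ x)
resid = z - phi_hat * x               # estimated linear innovations

# Step 2: Nadaraya-Watson kernel regression with the lagged residual as
# the (single) explanatory variable, exploiting the linear dependence
def nw_predict(u0, u, v, h=0.5):
    """Kernel-weighted average of v at the point u0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    return (w @ v) / w.sum()

# One-step-ahead forecast: linear part plus a nonparametric correction
# estimated from past (residual, next-residual) pairs
correction = nw_predict(resid[-1], resid[:-1], resid[1:])
forecast = phi_hat * y[-1] + correction
```

Because the linear step absorbs the serial dependence, the kernel regression here needs only one regressor, which is the dimension-reduction point the abstract makes.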
  2. By: Eric Gautier (ENSAE-CREST); Yuichi Kitamura (Cowles Foundation, Yale University)
    Abstract: This paper considers random coefficients binary choice models. The main goal is to estimate the density of the random coefficients nonparametrically. This is an ill-posed inverse problem characterized by an integral transform. A new density estimator for the random coefficients is developed, utilizing Fourier-Laplace series on spheres. This approach offers a clear insight on the identification problem. More importantly, it leads to a closed form estimator formula that yields a simple plug-in procedure requiring no numerical optimization. The new estimator, therefore, is easy to implement in empirical applications, while being flexible about the treatment of unobserved heterogeneity. Extensions including treatments of non-random coefficients and models with endogeneity are discussed.
    Keywords: Inverse problems, Discrete choice models
    JEL: C14 C25
    Date: 2009–08
  3. By: Li, Yushu (CAFO, Växjö University); Shukur, Ghazi (CESIS - Centre of Excellence for Science and Innovation Studies, Royal Institute of Technology)
    Abstract: In this paper, we propose a nonlinear Dickey-Fuller test for a unit root against a first-order Logistic Smooth Transition Autoregressive, LSTAR(1), model with time as the transition variable. The test statistic is established under the null hypothesis of a random walk without drift against the alternative of a nonlinear LSTAR(1) model. The asymptotic distribution of the test is derived analytically, while its small-sample size and power properties are investigated by Monte Carlo experiment. The results show a serious size distortion for the test when GARCH errors appear in the Data Generating Process (DGP), which leads to over-rejection of the unit root null hypothesis. To solve this problem, we use the wavelet technique to offset the GARCH distortion and improve the size properties of the test under GARCH errors. We also discuss the asymptotic distributions of the test statistics in GARCH and wavelet environments. Finally, an empirical example is used to compare our test with the traditional Dickey-Fuller test.
    Keywords: Unit root Test; Dickey-Fuller test; STAR model; GARCH; Wavelet method; MODWT
    JEL: C32
    Date: 2009–08–26
  4. By: Brendan P.M. McCabe; Gael M. Martin; David Harris
    Abstract: Optimal probabilistic forecasts of integer-valued random variables are derived. The optimality is achieved by estimating the forecast distribution nonparametrically over a given broad model class and proving asymptotic efficiency in that setting. The ideas are demonstrated within the context of the integer autoregressive class of models, which is a suitable class for any count data that can be interpreted as a queue, stock, birth and death process or branching process. The theoretical proofs of asymptotic optimality are supplemented by simulation results which demonstrate the overall superiority of the nonparametric method relative to a misspecified parametric maximum likelihood estimator, in large but finite samples. The method is applied to counts of wage claim benefits, stock market iceberg orders and civilian deaths in Iraq, with bootstrap methods used to quantify sampling variation in the estimated forecast distributions.
    Keywords: Nonparametric Inference; Asymptotic Efficiency; Count Time Series; INAR Model Class; Bootstrap Distributions; Iceberg Stock Market Orders.
    JEL: C14 C22 C53
    Date: 2009–08
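As a toy illustration of estimating a forecast distribution for counts nonparametrically, the sketch below tabulates empirical one-step-ahead frequencies conditional on the current count of a simulated INAR(1) series. The thinning and arrival parameters are made up, and the raw frequency table is far cruder than the estimator the paper develops; it only shows what "forecasting a distribution rather than a point" means for count data.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Simulate an INAR(1) count series: binomial thinning plus Poisson arrivals
n, alpha, lam = 2000, 0.4, 1.0
y = np.zeros(n, dtype=int)
for t in range(1, n):
    y[t] = rng.binomial(y[t - 1], alpha) + rng.poisson(lam)

def forecast_dist(series, current):
    """Empirical one-step forecast distribution P(y_{t+1} = k | y_t = current)."""
    nxt = [series[t + 1] for t in range(len(series) - 1)
           if series[t] == current]
    counts = Counter(nxt)
    total = sum(counts.values())
    return {k: c / total for k, c in sorted(counts.items())}

dist = forecast_dist(y, y[-1])   # a full predictive pmf, not a point forecast
```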
  5. By: Kuswanto, Heri; Sibbertsen, Philipp
    Abstract: We develop a Wald type test to distinguish between long memory and ESTAR nonlinearity by using a directed-Wald statistic to overcome the problem of restricted parameters under the alternative. The test is derived from two basic model specifications where the first is the standard model based on an auxiliary regression and the second allows the parameter to appear as a nuisance parameter in the transition function. A simulation study indicates that both approaches lead to tests with good size and power properties to distinguish between stationary long memory and ESTAR. Moreover, the second approach is shown to have more power.
    Keywords: directed-Wald test, ESTAR, long memory
    JEL: C12 C22
    Date: 2009–08
  6. By: Lanne, Markku (Department of Economics, and HECER, University of Helsinki); Saikkonen, Pentti (Department of Mathematics and Statistics, and HECER, University of Helsinki)
    Abstract: In this paper, we propose a new noncausal vector autoregressive (VAR) model for non-Gaussian time series. The assumption of non-Gaussianity is needed for reasons of identifiability. Assuming that the error distribution belongs to a fairly general class of elliptical distributions, we develop an asymptotic theory of maximum likelihood estimation and statistical inference. We argue that allowing for noncausality is of importance in empirical economic research, which currently uses only conventional causal VAR models. Indeed, if noncausality is incorrectly ignored, the use of a causal VAR model may yield suboptimal forecasts and misleading economic interpretations. This is emphasized in the paper by noting that noncausality is closely related to the notion of nonfundamentalness, under which structural economic shocks cannot be recovered from an estimated causal VAR model. As detecting nonfundamentalness is therefore of great importance, we propose a procedure for discriminating between causality and noncausality that can be seen as a test of nonfundamentalness. The methods are illustrated with applications to fiscal foresight and the term structure of interest rates.
    Keywords: elliptic distribution; fiscal foresight; maximum likelihood estimation; noncausal; nonfundamentalness; non-Gaussian; term structure of interest rates
    JEL: C32 C46 C52 E62 G12
    Date: 2009–08–12
  7. By: Vadim Marmer (University of British Columbia); Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: This paper considers optimal testing of model comparison hypotheses for misspecified unconditional moment restriction models. We adopt the generalized Neyman-Pearson optimality criterion, which focuses on the convergence rates of the type I and II error probabilities under fixed global alternatives, and derive an optimal but practically infeasible test. We then propose feasible approximation test statistics to the optimal one. For linear instrumental variable regression models, the conventional empirical likelihood ratio test statistic emerges. For general nonlinear moment restrictions, we propose a new test statistic based on an iterative algorithm. We derive asymptotic properties of these test statistics.
    Keywords: Moment restriction, Model comparison, Misspecification, Generalized Neyman-Pearson optimality, Empirical likelihood, GMM
    JEL: C12 C14 C52
    Date: 2009–08
  8. By: Yuichi Kitamura (Cowles Foundation, Yale University); Taisuke Otsu (Cowles Foundation, Yale University); Kirill Evdokimov (Dept. of Economics, Yale University)
    Abstract: This paper is concerned with robust estimation under moment restrictions. A moment restriction model is semiparametric and distribution-free, therefore it imposes mild assumptions. Yet it is reasonable to expect that the probability law of observations may have some deviations from the ideal distribution being modeled, due to various factors such as measurement errors. It is then sensible to seek an estimation procedure that is robust against slight perturbations in the probability measure that generates observations. This paper considers local deviations within shrinking topological neighborhoods to develop its large sample theory, so that both bias and variance matter asymptotically. The main result shows that there exists a computationally convenient estimator that achieves optimal minimax robust properties. It is semiparametrically efficient when the model assumption holds, and at the same time it enjoys desirable robust properties when it does not.
    Keywords: Asymptotic minimax theorem, Hellinger distance, Semiparametric efficiency
    JEL: C10
    Date: 2009–08
  9. By: Yuichi Kitamura (Cowles Foundation, Yale University); Andres Santos (Dept. of Economics, University of California, San Diego); Azeem M. Shaikh (Dept. of Economics, University of Chicago)
    Abstract: In this paper we make two contributions. First, we show by example that empirical likelihood and other commonly used tests for parametric moment restrictions, including the GMM-based J-test of Hansen (1982), are unable to control the rate at which the probability of a Type I error tends to zero. From this it follows that, for the optimality claim for empirical likelihood in Kitamura (2001) to hold, additional assumptions and qualifications need to be introduced. The example also reveals that empirical and parametric likelihood may have non-negligible differences for the types of properties we consider, even in models in which they are first-order asymptotically equivalent. Second, under stronger assumptions than those in Kitamura (2001), we establish the following optimality result: (i) empirical likelihood controls the rate at which the probability of a Type I error tends to zero and (ii) among all procedures for which the probability of a Type I error tends to zero at least as fast, empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for "most" alternatives. This result further implies that empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for all alternatives among a class of tests that satisfy a weaker criterion for their Type I error probabilities.
    Keywords: Empirical likelihood, Large deviations, Hoeffding optimality, Moment restrictions
    JEL: C12 C14
    Date: 2009–08
  10. By: Ulrich Müller; Mark W. Watson
    Abstract: Standard inference in cointegrating models is fragile because it relies on an assumption of an I(1) model for the common stochastic trends, which may not accurately describe the data's persistence. This paper discusses efficient low-frequency inference about cointegrating vectors that is robust to this potential misspecification. A simple test motivated by the analysis in Wright (2000) is developed and shown to be approximately optimal in the case of a single cointegrating vector.
    JEL: C32 E32
    Date: 2009–08
  11. By: Zeebari, Zangin (CAFO, Växjö University); Shukur, Ghazi (CESIS - Centre of Excellence for Science and Innovation Studies, Royal Institute of Technology)
    Abstract: In this paper we generalize the median regression method to make it applicable to systems of regression equations. Given the existence of proper systemwise medians of the errors from the different equations, we apply weighted median regression with weights obtained from the covariance matrix of the errors across equations, calculated by the conventional SURE method. The Seemingly Unrelated Median Regression Equations (SUMRE) method produces results that are more robust than the usual SURE or single-equation OLS estimations when the distributions of the dependent variables are not symmetric. Moreover, the SUMRE estimates are also more efficient than those of single-equation median regressions when the cross-equation errors are correlated. More precisely, the aim of our SUMRE method is to accommodate both the skewness and the correlations of the errors in systems of regression equations. A theorem is derived which indicates that, even in the absence of statistically significant correlations between the equations, using the SUMRE method instead of the SURE method will not damage the estimation of the parameters. A Monte Carlo experiment was conducted to investigate the properties of the SUMRE method while varying the number of equations in the system, the number of observations, the strength of the cross-equation error correlations, and the departure of the errors from the normal distribution. The results show that, when the cross-equation correlations are medium or high and the skewness of the errors is also medium or high, the SUMRE method produces estimators that are more efficient and less biased than the ordinary SURE GLS estimators. Moreover, the SUMRE estimates are also more efficient and less biased than those obtained from OLS or single-equation median regressions.
In addition, the results of an empirical application accord with the simulation study with respect to the relative efficiency gain of SUMRE estimators over SURE estimators in the presence of skewness of the error terms.
    Keywords: Median regression; SURE models; robustness; efficiency
    JEL: C10 C13 C51
    Date: 2009–08–26
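The core primitive here is the weighted median, which minimises a weighted sum of absolute deviations. A minimal helper is sketched below for intuition; the full SUMRE procedure additionally derives the weights from the SURE error covariance matrix, which this fragment does not attempt.

```python
def weighted_median(values, weights):
    """Smallest value at which cumulative weight reaches half the total.

    This minimises sum_i w_i * |values_i - m| over m, the objective that
    weighted median regression applies to regression residuals.
    """
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= total / 2:
            return v

# With equal weights this reduces to the ordinary (lower) median
m1 = weighted_median([3, 1, 2], [1.0, 1.0, 1.0])   # -> 2
# A dominant weight pulls the median to that observation
m2 = weighted_median([1, 2, 10], [0.1, 0.1, 5.0])  # -> 10
```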
  12. By: Ida Wolden Bache (Norges Bank (Central Bank of Norway)); James Mitchell (National Institute of Economic and Social Research); Francesco Ravazzolo (Norges Bank (Central Bank of Norway)); Shaun P. Vahey (Melbourne Business School)
    Abstract: We argue that the next generation of macro modellers at Inflation Targeting central banks should adapt a methodology from the weather forecasting literature known as 'ensemble modelling'. In this approach, uncertainty about model specifications (e.g., initial conditions, parameters, and boundary conditions) is explicitly accounted for by constructing ensemble predictive densities from a large number of component models. The components allow the modeller to explore a wide range of uncertainties; and the resulting ensemble 'integrates out' these uncertainties using time-varying weights on the components. We provide two examples of this modelling strategy: (i) forecasting inflation with a disaggregate ensemble; and (ii) forecasting inflation with an ensemble DSGE.
    Keywords: Ensemble modelling, Forecasting, DSGE models, Density combination
    JEL: C11 C32 C53 E37 E52
    Date: 2009–08–17
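A stripped-down version of the ensemble idea: combine component predictive densities with weights proportional to their past log scores. The two normal components, their parameters, and the cumulative log-score weighting rule below are illustrative assumptions on simulated outcomes, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

def pdf(x, mu, sigma):
    """Normal density -- the component predictive densities in this toy ensemble."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

components = [(2.0, 0.5), (3.0, 1.0)]      # hypothetical (mean, sd) pairs
outcomes = rng.normal(2.0, 0.5, size=40)   # realizations favour component 0

# Time-varying weights proportional to exponentiated cumulative log scores
# (subtracting the max log score for numerical stability)
ls = np.array([[np.log(pdf(x, mu, s)) for mu, s in components]
               for x in outcomes]).sum(axis=0)
w = np.exp(ls - ls.max())
w = w / w.sum()

def ensemble_pdf(x):
    """Weighted mixture of the component densities."""
    return sum(wi * pdf(x, mu, s) for wi, (mu, s) in zip(w, components))
```

The weights shift toward whichever component has scored the realized outcomes best, which is the sense in which the ensemble 'integrates out' specification uncertainty.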
  13. By: Alberto Abadie; Guido W. Imbens
    Abstract: Propensity score matching estimators (Rosenbaum and Rubin, 1983) are widely used in evaluation research to estimate average treatment effects. In this article, we derive the large sample distribution of propensity score matching estimators. Our derivations take into account that the propensity score is itself estimated in a first step, prior to matching. We prove that first step estimation of the propensity score affects the large sample distribution of propensity score matching estimators. Moreover, we derive an adjustment to the large sample variance of propensity score matching estimators that corrects for first step estimation of the propensity score. In spite of the great popularity of propensity score matching estimators, these results were previously unavailable in the literature.
    JEL: C13 C14
    Date: 2009–08
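The two-step structure the paper studies can be sketched on simulated data: estimate the propensity score in a first step, then match treated units to controls on the estimated score. The logistic first step via Newton's method and the greedy nearest-neighbour matching with replacement below are simplified choices; the sketch reproduces the setting, not the paper's large-sample variance adjustment.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: one covariate drives both treatment and outcome
n = 1000
x = rng.standard_normal(n)
d = rng.binomial(1, 1 / (1 + np.exp(-x)))      # treatment indicator
tau = 2.0                                       # true treatment effect
y = tau * d + x + rng.standard_normal(n)

# Step 1: estimate the propensity score by logistic regression (Newton's method)
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (d - p)                        # score of the log-likelihood
    hess = -(X * (p * (1 - p))[:, None]).T @ X  # Hessian of the log-likelihood
    beta -= np.linalg.solve(hess, grad)
p_hat = 1 / (1 + np.exp(-X @ beta))

# Step 2: match each treated unit to the control with the closest estimated score
treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
matches = controls[np.abs(p_hat[controls][None, :]
                          - p_hat[treated][:, None]).argmin(axis=1)]
att = np.mean(y[treated] - y[matches])          # average effect on the treated
```

The paper's point is that inference on `att` must account for `p_hat` being estimated rather than known; the point estimate itself is straightforward, as above.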
  14. By: David Peel; Ivan Paya; E Pavlidis
    Abstract: This paper deals with the nonlinear modeling and forecasting of the dollar-sterling real exchange rate using a long span of data. Our contribution is threefold. First, we provide significant evidence of smooth transition dynamics in the series by employing a battery of recently developed in-sample statistical tests. Second, we investigate the small sample properties of several evaluation measures for comparing recursive forecasts when one of the competing models is nonlinear. Finally, we run a forecasting race for the post-Bretton Woods era between the nonlinear real exchange rate model, the random walk, and the linear autoregressive model. The winner turns out to be the nonlinear model, against the odds.
    Keywords: Real Exchange Rate, Nonlinearity, Robust Linearity Tests, Forecast Evaluation, Bootstrapping.
    Date: 2009
  15. By: Daniel Buncic
    Abstract: The forecast performance of the empirical ESTAR model of Taylor, Peel and Sarno (2001) is examined for 4 bilateral real exchange rate series over an out-of-sample evaluation period of nearly 12 years. Point as well as density forecasts are constructed, considering forecast horizons of 1 to 22 steps ahead. The study finds that no forecast gains over a simple AR(1) specification exist at any of the forecast horizons that are considered, regardless of whether point or density forecasts are utilised in the evaluation. Non-parametric methods are used in conjunction with simulation techniques to learn about the models and their forecasts. It is shown graphically that the nonlinearity in the point forecasts of the ESTAR model decreases as the forecast horizon increases. The non-parametric methods also show that the multiple steps ahead forecast densities are normal looking with no signs of bi-modality, skewness or kurtosis. Overall, there seems little to be gained from using an ESTAR specification over a simple AR(1) model.
    Keywords: Purchasing power parity, regime modelling, non-linear real exchange rate models, ESTAR, forecast evaluation, density forecasts, non-parametric methods.
    JEL: C22 C52 C53 F31 F47
    Date: 2009–08–18
  16. By: Peter Christoffersen; Kris Jacobs; Chayawat Ornthanalai
    Abstract: Standard empirical investigations of jump dynamics in returns and volatility are fairly complicated due to the presence of latent continuous-time factors. We present a new discrete-time framework that combines heteroskedastic processes with rich specifications of jumps in returns and volatility. Our models can be estimated with ease using standard maximum likelihood techniques. We provide a tractable risk neutralization framework for this class of models which allows for separate modeling of risk premia for the jump and normal innovations. We anchor our models in the literature by providing continuous time limits of the models. The models are evaluated by fitting a long sample of S&P500 index returns, and by valuing a large sample of options. We find strong empirical support for time-varying jump intensities. A model with jump intensity that is affine in the conditional variance performs particularly well both in return fitting and option valuation. Our implementation allows for multiple jumps per day, and the data indicate support for this model feature, most notably on Black Monday in October 1987. Our results also confirm the importance of jump risk premia for option valuation: jumps cannot significantly improve the performance of option pricing models unless sizeable jump risk premia are present.
    Keywords: compound Poisson process, option valuation, filtering, volatility jumps, jump risk premia, time-varying jump intensity, heteroskedasticity
    JEL: G12
    Date: 2009–08–01
  17. By: John Galbraith; Simon van Norden
    Abstract: This paper applies new diagnostics to the Bank of England’s pioneering density forecasts (fan charts). We compute their implicit probability forecast for annual rates of inflation and output growth that exceed a given threshold (in this case, the target inflation rate and 2.5% respectively). Unlike earlier work on these forecasts, we measure both their calibration and their resolution, providing both formal tests and graphical interpretations of the results. These results both reinforce earlier evidence on some of the limitations of these forecasts and provide new evidence on their information content.
    Keywords: calibration, density forecast, probability forecast, resolution
    Date: 2009–08–01
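One standard way to measure calibration and resolution jointly for threshold-exceedance probability forecasts is the Murphy decomposition of the Brier score, Brier = reliability - resolution + uncertainty. The forecasts and outcomes below are made-up numbers, and the paper's diagnostics are more elaborate, but the decomposition illustrates the two quantities being measured.

```python
import numpy as np

# Hypothetical probability forecasts that inflation exceeds its target,
# with binary outcomes; binned by unique forecast value, so the classic
# Murphy decomposition  Brier = REL - RES + UNC  holds exactly.
p = np.array([0.1, 0.1, 0.3, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9, 0.9])
o = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   0])

brier = np.mean((p - o) ** 2)
obar = o.mean()

rel = res = 0.0
for pk in np.unique(p):
    idx = p == pk
    nk, ok = idx.sum(), o[idx].mean()
    rel += nk * (pk - ok) ** 2      # reliability: calibration error per bin
    res += nk * (ok - obar) ** 2    # resolution: bins separate from base rate
rel, res = rel / len(p), res / len(p)
unc = obar * (1 - obar)             # uncertainty: variance of the outcome
```

A well-calibrated forecaster drives `rel` toward zero; a forecaster with high resolution makes `res` large by issuing probabilities that genuinely discriminate between outcomes. Both can be read off this decomposition.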

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.