nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒01‒23
seventeen papers chosen by
Sune Karlsson
Örebro University

  1. A Simple Panel-CADF Test for Unit Roots By Costantini, Mauro; Lupi, Claudio
  2. Instrumental Variable Estimation of a Spatial Autoregressive Panel Model with Random Effects By Badi H. Baltagi; Long Liu
  3. Folklore Theorems, Implicit Maps and New Unit Root Limit Theory By Peter C.B. Phillips
  4. First Difference MLE and Dynamic Panel Estimation By Chirok Han; Peter C.B. Phillips
  5. Specification Testing for Nonlinear Cointegrating Regression By Qiying Wang; Peter C.B. Phillips
  6. Bias in Estimating Multivariate and Univariate Diffusions By Xiaohu Wang; Peter C.B. Phillips; Jun Yu
  7. Bayesian prior elicitation in DSGE models: macro- vs micro-priors By Marco J. Lombardi; Giulio Nicoletti
  8. Change point for multinomial data using phi-divergence test statistics By Apostolos Batsidis; Nirian Martín; Leandro Pardo; Konstantinos Zografos
  9. One‐Step R‐Estimation in Linear Models with Stable Errors By Marc Hallin; Yves-Caoimhin Swan; Thomas Verdebout; David Veredas
  10. Modelling Volatility by Variance Decomposition By Cristina Amado; Timo Teräsvirta
  11. Bias from the use of mean-based methods on test scores By Koerselman, Kristian
  12. Bias Correction of ML and QML Estimators in the EGARCH(1,1) Model By Antonis Demos; Dimitra Kyriakopoulou
  13. Long memory in an oil market: a spectral approach By Yuriy Balagula; Yulia Abakumova
  14. Inconsistent VAR Regression with Common Explosive Roots By Peter C.B. Phillips; Tassos Magdalinos
  15. Dynamic Evaluation of Job Search Assistance By Kastoryano, Stephen; van der Klaauw, Bas
  16. Using Large Data Sets to Forecast Sectoral Employment By Rangan Gupta; Alain Kabundi; Stephen M. Miller; Josine Uwilingiye
  17. Unter verallgemeinerter Mittelwertbildung abgeschlossene Familien von Copulas (Families of Copulas Closed under Generalized Mean Formation) By Klein, Ingo

  1. By: Costantini, Mauro (Department of Economics, University of Vienna, Vienna, Austria); Lupi, Claudio (Department SEGeS, Faculty of Economics, University of Molise, Campobasso, Italy)
    Abstract: In this paper we propose a simple extension to the panel case of the covariate-augmented Dickey-Fuller (CADF) test for unit roots developed in Hansen (1995). The extension we propose is based on a p-values combination approach that takes into account cross-section dependence. We show that the test is easy to compute, has good size properties and gives power gains with respect to other popular panel approaches. A procedure to compute the asymptotic p-values of Hansen's CADF test is also a side contribution of the paper. We also complement Hansen (1995) and Caporale and Pittis (1999) with some new theoretical results. Two empirical applications are carried out for illustration purposes on international data to test the PPP hypothesis and the presence of a unit root in international industrial production indices.
    Keywords: Unit root, panel data, approximate p-values, Monte Carlo
    JEL: C22 C23 F31
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:261&r=ecm
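    A minimal Python sketch of the generic inverse-normal p-value combination idea behind panel tests of this kind, using the ordinary ADF test from statsmodels as a stand-in for Hansen's CADF statistic and ignoring the cross-section dependence adjustment developed in the paper; all data and settings are illustrative:
      import numpy as np
      from scipy.stats import norm
      from statsmodels.tsa.stattools import adfuller

      rng = np.random.default_rng(0)
      N, T = 20, 200
      # simulate N independent random walks (unit roots), purely for illustration
      panel = np.cumsum(rng.standard_normal((N, T)), axis=1)

      # unit-root test p-value for each cross-sectional unit
      pvals = np.array([adfuller(y, regression="c")[1] for y in panel])

      # inverse-normal (Stouffer) combination; a small z rejects the panel unit root
      z = norm.ppf(pvals).sum() / np.sqrt(N)
      combined_p = norm.cdf(z)
      print(f"combined panel p-value: {combined_p:.3f}")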
  2. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Long Liu (Department of Economics, College of Business, University of Texas at San Antonio, One UTSA Circle, TX 78249-0633)
    Abstract: This paper extends the instrumental variable estimators of Kelejian and Prucha (1998) and Lee (2003) proposed for the cross-sectional spatial autoregressive model to the random effects spatial autoregressive panel data model. It also suggests an extension of the Baltagi (1981) error component 2SLS estimator to this spatial panel model.
    Keywords: Panel Data, Spatial Model, Two Stage Least Squares, Error Components.
    JEL: C13 C21
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:127&r=ecm
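    A minimal Python sketch of the generic Kelejian-Prucha style 2SLS for a cross-sectional spatial autoregressive model, using (X, WX, W^2X) as instruments for the spatial lag Wy; the random-effects panel extension and the error component 2SLS of the paper are not reproduced, and the weight matrix and parameter values are illustrative assumptions:
      import numpy as np

      rng = np.random.default_rng(9)
      n, lam, beta = 200, 0.4, 1.5
      # simple row-normalised "two nearest neighbours" weight matrix on a circle
      W = np.zeros((n, n))
      for i in range(n):
          W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

      X = np.column_stack([np.ones(n), rng.standard_normal(n)])
      y = np.linalg.solve(np.eye(n) - lam * W,
                          X @ np.array([1.0, beta]) + rng.standard_normal(n))

      # 2SLS: regress y on [Wy, X], instrumenting Wy with [X, WX, W^2 X]
      Z = np.column_stack([W @ y, X])
      H = np.column_stack([X, W @ X[:, 1:], W @ W @ X[:, 1:]])
      P = H @ np.linalg.solve(H.T @ H, H.T)
      theta = np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)
      print("2SLS estimates (lambda, intercept, beta):", np.round(theta, 3))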
  3. By: Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: The delta method and continuous mapping theorem are among the most extensively used tools in asymptotic derivations in econometrics. Extensions of these methods are provided for sequences of functions, which are commonly encountered in applications, and where the usual methods sometimes fail. Important examples of failure arise in the use of simulation based estimation methods such as indirect inference. The paper explores the application of these methods to the indirect inference estimator (IIE) in first order autoregressive estimation. The IIE uses a binding function that is sample size dependent. Its limit theory relies on a sequence-based delta method in the stationary case and a sequence-based implicit continuous mapping theorem in unit root and local to unity cases. The new limit theory shows that the IIE achieves much more than bias correction. It changes the limit theory of the maximum likelihood estimator (MLE) when the autoregressive coefficient is in the locality of unity, reducing the bias and the variance of the MLE without affecting the limit theory of the MLE in the stationary case. Thus, in spite of the fact that the IIE is a continuously differentiable function of the MLE, the limit distribution of the IIE is not simply a scale multiple of the MLE but depends implicitly on the full binding function mapping. The unit root case therefore represents an important example of the failure of the delta method and shows the need for an implicit mapping extension of the continuous mapping theorem.
    Keywords: Binding function, Delta method, Exact bias, Implicit continuous maps, Indirect inference, Maximum likelihood
    JEL: C23
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1781&r=ecm
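    A minimal Python sketch of the indirect inference mechanism described above for the AR(1) coefficient: the binding function mapping the true coefficient to the mean of its OLS/ML estimate is approximated by simulation on a grid and then inverted numerically. This illustrates the general idea only, not the paper's estimator or its limit theory; all settings are illustrative:
      import numpy as np

      rng = np.random.default_rng(1)
      T = 50

      def ols_ar1(y):
          """OLS / Gaussian ML estimate of the AR(1) coefficient (zero intercept)."""
          return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

      def mean_ols_ar1(rho, T, reps, rng):
          """Monte Carlo approximation of the binding function E[rho_hat | rho]."""
          y = np.zeros((reps, T))
          for t in range(1, T):
              y[:, t] = rho * y[:, t - 1] + rng.standard_normal(reps)
          return np.mean((y[:, :-1] * y[:, 1:]).sum(1) / (y[:, :-1] ** 2).sum(1))

      grid = np.linspace(0.0, 0.99, 100)
      binding = np.array([mean_ols_ar1(r, T, 2000, rng) for r in grid])

      # "observed" series generated with true rho = 0.9
      y_obs = np.zeros(T)
      for t in range(1, T):
          y_obs[t] = 0.9 * y_obs[t - 1] + rng.standard_normal()
      rho_hat = ols_ar1(y_obs)

      # indirect inference estimate: invert the (monotone) binding function
      rho_ii = grid[np.argmin(np.abs(binding - rho_hat))]
      print(f"ML/OLS estimate {rho_hat:.3f}, indirect inference estimate {rho_ii:.3f}")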
  4. By: Chirok Han (Korea University); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: First difference maximum likelihood (FDML) seems an attractive estimation methodology in dynamic panel data modeling because differencing eliminates fixed effects and, in the case of a unit root, differencing transforms the data to stationarity, thereby addressing both incidental parameter problems and the possible effects of nonstationarity. This paper draws attention to certain pathologies that arise in the use of FDML that have gone unnoticed in the literature and that affect both finite sample performance and asymptotics. FDML uses the Gaussian likelihood function for first differenced data and parameter estimation is based on the whole domain over which the log-likelihood is defined. However, extending the domain of the likelihood beyond the stationary region has certain consequences that have a major effect on finite sample and asymptotic performance. First, the extended likelihood is not the true likelihood even in the Gaussian case and it has a finite upper bound of definition. Second, it is often bimodal, and one of its peaks can be so peculiar that numerical maximization of the extended likelihood frequently fails to locate the global maximum. As a result of these pathologies, the FDML estimator is a restricted estimator, numerical implementation is not straightforward and asymptotics are hard to derive in cases where the peculiarity occurs with non-negligible probability. We investigate these problems, provide a convenient new expression for the likelihood and a new algorithm to maximize it. The peculiarities in the likelihood are found to be particularly marked in time series with a unit root. In this case, the asymptotic distribution of the FDMLE has bounded support and its density is infinite at the upper bound as the time series sample size T approaches infinity. As the panel width n approaches infinity the pathology is removed and the limit theory is normal. This result applies even for fixed T, and we present an expression for the asymptotic distribution which does not depend on the time dimension. When both n and T approach infinity, the FDMLE has smaller asymptotic variance than that of the bias corrected MLE, an outcome that is explained by the restricted nature of the FDMLE.
    Keywords: Asymptote, Bounded support, Dynamic panel, Efficiency, First difference MLE, Likelihood, Quartic equation, Restricted extremum estimator
    JEL: C22 C23
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1780&r=ecm
  5. By: Qiying Wang (University of Sydney); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: We provide a limit theory for a general class of kernel smoothed U statistics that may be used for specification testing in time series regression with nonstationary data. The framework allows for linear and nonlinear models of cointegration and regressors that have autoregressive unit roots or near unit roots. The limit theory for the specification test depends on the self-intersection local time of a Gaussian process. A new weak convergence result is developed for certain partial sums of functions involving nonstationary time series that converge to the intersection local time process. This result is of independent interest and useful in other applications.
    Keywords: Intersection local time, Kernel regression, Nonlinear nonparametric model, Ornstein-Uhlenbeck process, Specification tests, Weak convergence
    JEL: C14 C22
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1779&r=ecm
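    A minimal Python sketch of a kernel-smoothed U statistic of the kind used for specification testing, computed from residuals of a fitted linear regression on simulated near-integrated data; the normalisation, null distribution and intersection local time limit theory of the paper are not reproduced:
      import numpy as np

      rng = np.random.default_rng(8)
      n, h = 400, 0.5
      x = np.cumsum(rng.standard_normal(n))            # (near-)integrated regressor
      y = 0.5 * x + rng.standard_normal(n)             # the linear null model holds here

      # residuals from the fitted linear regression
      beta_hat = (x @ y) / (x @ x)
      u = y - beta_hat * x

      # U statistic: sum over s != t of K((x_s - x_t)/h) * u_s * u_t
      diff = (x[:, None] - x[None, :]) / h
      K = np.exp(-0.5 * diff ** 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
      np.fill_diagonal(K, 0.0)
      U = u @ K @ u
      print(f"kernel-smoothed U statistic: {U:.2f}")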
  6. By: Xiaohu Wang (Singapore Management University); Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University)
    Abstract: Multivariate continuous time models are now widely used in economics and finance. Empirical applications typically rely on some process of discretization so that the system may be estimated with discrete data. This paper introduces a framework for discretizing linear multivariate continuous time systems that includes the commonly used Euler and trapezoidal approximations as special cases and leads to a general class of estimators for the mean reversion matrix. Asymptotic distributions and bias formulae are obtained for estimates of the mean reversion parameter. Explicit expressions are given for the discretization bias and its relationship to estimation bias in both multivariate and univariate settings. In the univariate context, we compare the performance of the two approximation methods relative to exact maximum likelihood (ML) in terms of bias and variance for the Vasicek process. The bias and the variance of the Euler method are found to be smaller than those of the trapezoidal method, which are in turn smaller than those of exact ML. Simulations suggest that when the mean reversion is slow the approximation methods work better than ML, the bias formulae are accurate, and for scalar models the estimates obtained from the two approximate methods have smaller bias and variance than exact ML. For the square root process, the Euler method outperforms the Nowman method in terms of both bias and variance. Simulation evidence indicates that the Euler method has smaller bias and variance than exact ML, Nowman's method and the Milstein method.
    Keywords: Bias, Diffusion, Euler approximation, Trapezoidal approximation, Milstein approximation
    JEL: C15 G12
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1778&r=ecm
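    A minimal Python sketch comparing an Euler-type and an exact-discretization (ML-type) estimate of the mean reversion parameter in a simulated Vasicek process with slow mean reversion; parameter values are illustrative, and the trapezoidal approximation and the paper's bias formulae are not reproduced:
      import numpy as np

      rng = np.random.default_rng(2)
      kappa, mu, sigma = 0.1, 0.0, 0.2        # slow mean reversion
      delta, n, reps = 1 / 12, 600, 1000      # monthly observations, 50 years

      a = np.exp(-kappa * delta)                          # exact AR(1) coefficient
      sd = sigma * np.sqrt((1 - a ** 2) / (2 * kappa))    # exact transition std. dev.

      x = np.zeros((reps, n))
      for t in range(1, n):
          x[:, t] = mu + a * (x[:, t - 1] - mu) + sd * rng.standard_normal(reps)

      # OLS slope of x_t on x_{t-1} (with intercept), one per replication
      xl = x[:, :-1] - x[:, :-1].mean(axis=1, keepdims=True)
      xf = x[:, 1:] - x[:, 1:].mean(axis=1, keepdims=True)
      a_hat = (xl * xf).sum(axis=1) / (xl ** 2).sum(axis=1)

      kappa_euler = (1 - a_hat) / delta       # Euler-approximation estimate
      kappa_exact = -np.log(a_hat) / delta    # exact-discretization (ML-type) estimate
      print(f"true kappa {kappa}; mean Euler estimate {kappa_euler.mean():.3f}; "
            f"mean exact-ML estimate {kappa_exact.mean():.3f}")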
  7. By: Marco J. Lombardi (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.); Giulio Nicoletti (Banca d’Italia, Via Nazionale, 91, 00184 Roma, Italy.)
    Abstract: Bayesian approaches to the estimation of DSGE models are becoming increasingly popular. Prior knowledge is normally formalized either as information concerning deep parameters' values ('micro-prior') or as some macroeconomic indicator, e.g. moments of observable variables ('macro-prior'). In this paper we introduce a nonparametric prior which is elicited from impulse response functions. Results show that using either a micro-prior or a macro-prior can lead to different posterior estimates. We probe into the details of our result, showing that model misspecification is to blame for this discrepancy.
    Keywords: DSGE Models, Bayesian Estimation, Prior Distribution, Impulse Response Function.
    JEL: C11 C51 E30
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20111289&r=ecm
  8. By: Apostolos Batsidis; Nirian Martín; Leandro Pardo; Konstantinos Zografos
    Abstract: We propose two families of maximally selected phi-divergence tests for studying change-point locations when the unknown probability vectors of a sequence of multinomial random variables, with possibly different sizes, are piecewise constant. In addition, these test statistics can be used to estimate the location of the change point. Two variants of the first family are considered, following two versions of the Darling-Erdős formula. Under the null hypothesis of no change, we derive their limit distributions, of extreme-value and Gaussian type respectively. We pay special attention to checking the accuracy of these limit distributions for finite sample sizes. In this framework, a Monte Carlo analysis shows that it is possible to improve on the behaviour of the test statistics based on the likelihood ratio and chi-square tests introduced in Horváth and Serbinowska (1995). The data of the classical Lindisfarne Scribes problem are used to illustrate the proposed test statistics.
    Keywords: Multinomial sampling, Change-point, Phi-divergence test-statistics
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws110101&r=ecm
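    A minimal Python sketch of a maximally selected Pearson chi-square statistic (one member of the phi-divergence family) used to locate a change point in a sequence of multinomial observations; the Darling-Erdős normalisations and critical values derived in the paper are not reproduced, and the data are simulated:
      import numpy as np

      rng = np.random.default_rng(3)
      n = 200
      # counts with a change in cell probabilities at t = 120 (illustrative data)
      p1, p2, trials = [0.2, 0.3, 0.5], [0.4, 0.3, 0.3], 50
      counts = np.vstack([rng.multinomial(trials, p1, size=120),
                          rng.multinomial(trials, p2, size=80)])

      def chi2_divergence(c1, c2):
          """Two-sample Pearson chi-square distance between pooled count vectors."""
          n1, n2 = c1.sum(), c2.sum()
          p_hat1, p_hat2 = c1 / n1, c2 / n2
          pooled = (c1 + c2) / (n1 + n2)
          return n1 * n2 / (n1 + n2) * np.sum((p_hat1 - p_hat2) ** 2 / pooled)

      # maximally selected statistic over candidate change points
      stats = [chi2_divergence(counts[:t].sum(axis=0), counts[t:].sum(axis=0))
               for t in range(10, n - 10)]
      t_hat = 10 + int(np.argmax(stats))
      print(f"estimated change point: {t_hat}, max statistic: {max(stats):.2f}")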
  9. By: Marc Hallin; Yves-Caoimhin Swan; Thomas Verdebout; David Veredas
    Abstract: Classical estimation techniques for linear models are either inconsistent or perform somewhat poorly under α-stable error densities; most of them are not even rate-optimal. In this paper, we develop an original R-estimation method and investigate its asymptotic performance under stable densities. Contrary to traditional least squares, the proposed R-estimators remain root-n consistent (the optimal rate) under the whole family of stable distributions, irrespective of their asymmetry and tail index. While stable-likelihood estimation, due to the absence of a closed form for stable densities, is generally considered unfeasible, our method allows us to construct estimators reaching the parametric efficiency bounds associated with any prescribed values (α0, β0) of the tail index α and skewness parameter β, while preserving root-n consistency under any (α, β). The method furthermore avoids all forms of multidimensional argmin computation. Simulations confirm its excellent finite-sample performance.
    Keywords: Stable distributions, local asymptotic normality, R-estimation, asymptotic relative efficiencies
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/73398&r=ecm
  10. By: Cristina Amado (Universidade do Minho - NIPE); Timo Teräsvirta (CREATES, School of Economics and Management, Aarhus University)
    Abstract: In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional and unconditional variances where the transition between regimes over time is smooth. The main focus is on the multiplicative decomposition that decomposes the variance into an unconditional and conditional component. A modelling strategy for the time-varying GARCH model based on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice. The results show that the long memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance.
    Keywords: Conditional heteroskedasticity; Structural change; Lagrange multiplier test; Misspecification test; Nonlinear time series; Time-varying parameter model.
    JEL: C12 C22 C51 C52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:nip:nipewp:01/2011&r=ecm
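    A minimal Python sketch of the multiplicative decomposition idea, simulating returns whose variance is the product of a smooth deterministic (logistic) component and a GARCH(1,1) component; the logistic form and parameter values are illustrative assumptions, not the paper's fitted specification:
      import numpy as np

      rng = np.random.default_rng(4)
      T = 4000
      omega, alpha, beta = 0.05, 0.05, 0.90      # GARCH(1,1) component
      gamma, c, delta1 = 10.0, 0.5, 3.0          # smooth transition over time

      s = np.arange(T) / T
      g = 1.0 + delta1 / (1.0 + np.exp(-gamma * (s - c)))   # unconditional component

      eps = np.empty(T)
      h = np.empty(T)
      h[0] = omega / (1 - alpha - beta)
      eps[0] = np.sqrt(g[0] * h[0]) * rng.standard_normal()
      for t in range(1, T):
          # conditional component driven by returns standardised by g
          h[t] = omega + alpha * (eps[t - 1] ** 2 / g[t - 1]) + beta * h[t - 1]
          eps[t] = np.sqrt(g[t] * h[t]) * rng.standard_normal()

      # slowly decaying sample autocorrelations of |returns| mimic long memory
      abs_r = np.abs(eps) - np.abs(eps).mean()
      acf = [abs_r[:-k] @ abs_r[k:] / (abs_r @ abs_r) for k in (1, 20, 100)]
      print("ACF of |returns| at lags 1, 20, 100:", np.round(acf, 3))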
  11. By: Koerselman, Kristian (Swedish Institute for Social Research, Stockholm University)
    Abstract: Economists regularly regress IQ scores or achievement test scores on covariates, for example to evaluate educational policy. These test scores are ordinal measures, and their distributions can take an arbitrary shape, even though they are often constructed to look normal. The ordinality of test scores makes the use of mean-based methods such as OLS inappropriate: estimates are not robust to changes in test score estimation assumptions and methods. I simulate the magnitude of these robustness problems, and show that in practice, problems with mean-based regression of normally distributed test scores are small. Even so, test score distributions with more exotic shapes will need to be transformed before use.
    Keywords: admissible statistics; test scores; educational achievement; item response theory; IQ; PISA.
    JEL: C40 I20 I21 J24
    Date: 2011–01–03
    URL: http://d.repec.org/n?u=RePEc:hhs:sofiwp:2011_001&r=ecm
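    A minimal Python sketch of the robustness issue discussed above: two monotone rescalings of the same ordinal test-score information give different standardised OLS effect estimates; the transformations and effect size are illustrative, not taken from the paper's simulations:
      import numpy as np

      rng = np.random.default_rng(5)
      n = 5000
      treat = rng.integers(0, 2, n)                  # binary covariate (e.g. a policy)
      latent = 0.2 * treat + rng.standard_normal(n)  # latent achievement

      # two observationally equivalent scalings carrying the same ordinal information
      score_normal = latent                          # "normal-looking" scores
      score_skewed = np.exp(latent)                  # exotic-shaped, same ranks

      def ols_slope(y, x):
          x_c = x - x.mean()
          return (x_c @ (y - y.mean())) / (x_c @ x_c)

      def std(y):
          return (y - y.mean()) / y.std()

      print("effect on standardised normal scores:",
            round(ols_slope(std(score_normal), treat), 3))
      print("effect on standardised skewed scores:",
            round(ols_slope(std(score_skewed), treat), 3))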
  12. By: Antonis Demos (www.aueb.gr/users/demos); Dimitra Kyriakopoulou
    Abstract: In this paper we derive the bias approximations of the Maximum Likelihood (ML) and Quasi-Maximum Likelihood (QML) Estimators of the EGARCH(1,1) parameters and we check our theoretical results through simulations. With the approximate bias expressions up to O(1/T), we are then able to correct the bias of all estimators. To this end, a Monte Carlo exercise is conducted and the results are presented and discussed. We conclude that, for the given sets of parameter values, the bias correction works satisfactorily for all parameters. The results for the bias expressions can be used to formulate the approximate Edgeworth distribution of the estimators.
    Date: 2010–06–10
    URL: http://d.repec.org/n?u=RePEc:aue:wpaper:1108&r=ecm
  13. By: Yuriy Balagula (Department of Economics, European University at St. Petersburg); Yulia Abakumova (Department of Economics, European University at St. Petersburg)
    Abstract: In this paper we propose a spectral approach to the estimation of the long-memory effect in time series and illustrate its practical application to the analysis of oil prices.
    Keywords: econometrics, long memory, oil price
    JEL: C32 O13 E3
    Date: 2010–11–16
    URL: http://d.repec.org/n?u=RePEc:eus:wpaper:ec0111&r=ecm
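    A minimal Python sketch of a log-periodogram (GPH-type) regression, one standard spectral estimator of the long-memory parameter d, applied to a simulated fractionally integrated series; the paper's exact estimator and the oil price data are not used here:
      import numpy as np

      rng = np.random.default_rng(6)
      d_true, T = 0.3, 4000

      # simulate ARFIMA(0, d, 0) via a truncated fractional-integration filter
      psi = np.ones(T)
      for k in range(1, T):
          psi[k] = psi[k - 1] * (k - 1 + d_true) / k
      x = np.convolve(rng.standard_normal(T), psi)[:T]

      def gph(x, m=None):
          """Regress log-periodogram ordinates on -2*log(2*sin(lambda_j/2)), j=1..m."""
          n = len(x)
          m = m or int(n ** 0.5)
          lam = 2 * np.pi * np.arange(1, m + 1) / n
          periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
          reg = -2 * np.log(2 * np.sin(lam / 2))
          reg_c = reg - reg.mean()
          return (reg_c @ np.log(periodogram)) / (reg_c @ reg_c)   # slope = d estimate

      print(f"true d = {d_true}, GPH estimate = {gph(x):.3f}")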
  14. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Tassos Magdalinos (University of Nottingham)
    Abstract: Nielsen (2009) shows that vector autoregression is inconsistent when there are common explosive roots with geometric multiplicity greater than unity. This paper discusses that result, provides a co-explosive system extension and an illustrative example that helps to explain the finding, gives a consistent instrumental variable procedure, and reports some simulations. Some exact limit distribution theory is derived and a useful new reverse martingale central limit theorem is proved.
    Keywords: Co-explosive behavior, Common roots, Endogeneity, Forward instrumentation, Geometric multiplicity, Reverse martingale
    JEL: C22
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1777&r=ecm
  15. By: Kastoryano, Stephen (University of Amsterdam); van der Klaauw, Bas (VU University Amsterdam)
    Abstract: This paper evaluates a job search assistance program for unemployment insurance recipients. The assignment to the program is dynamic. We provide a discussion of dynamic treatment effects and identification conditions. In the empirical analyses we use administrative data from a unique institutional environment. This allows us to compare different microeconometric evaluation estimators. All estimators find that the job search assistance program reduces the exit rate to work, in particular when provided early during the spell of unemployment. Furthermore, continuous-time (timing-of-events and regression discontinuity) methods are more robust than discrete-time (propensity score and regression discontinuity) methods.
    Keywords: treatment evaluation, dynamic enrollment, empirical evaluation
    JEL: C22 J64 J68
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp5424&r=ecm
  16. By: Rangan Gupta (University of Pretoria); Alain Kabundi (University of Johannesburg); Stephen M. Miller (University of Connecticut and University of Nevada, Las Vegas); Josine Uwilingiye (University of Johannesburg)
    Abstract: We implement several Bayesian and classical models to forecast employment for eight sectors of the US economy. In addition to standard vector autoregressive and Bayesian vector autoregressive models, we also include the information content of 143 additional monthly series in some models. Several approaches exist for incorporating information from a large number of series. We consider two approaches: extracting common factors (principal components) for use in factor-augmented vector autoregressive or vector error-correction models, in classical or Bayesian form, or applying Bayesian shrinkage in a large-scale Bayesian vector autoregressive model. Using the period of January 1972 to December 1999 as the in-sample period and January 2000 to March 2009 as the out-of-sample horizon, we compare the forecast performance of the alternative models. Finally, we forecast out-of-sample from April 2009 through March 2010, using the best forecasting model for each employment series. We find that factor-augmented models, especially the error-correction versions, generally prove the best in out-of-sample forecast performance, implying that, in addition to macroeconomic variables, incorporating long-run relationships along with short-run dynamics plays an important role in forecasting employment.
    Keywords: Sectoral Employment, Forecasting, Factor Augmented Models, Large-Scale BVAR models
    JEL: C32 R31
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:uct:uconnp:2011-02&r=ecm
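    A minimal Python sketch of the factor-augmented approach: principal-component factors extracted from a large simulated panel are combined with a target series in a small VAR and used to forecast; the employment data, lag selection and Bayesian variants of the paper are not reproduced:
      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(7)
      T, N = 300, 143

      # two persistent latent factors driving the panel (illustrative DGP)
      f = np.zeros((T, 2))
      for t in range(1, T):
          f[t] = 0.8 * f[t - 1] + rng.standard_normal(2)
      panel = f @ rng.standard_normal((2, N)) + rng.standard_normal((T, N))
      target = 0.5 * f[:, 0] + 0.2 * rng.standard_normal(T)   # series to be forecast

      # principal components of the standardised panel as factor estimates
      z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
      _, _, vt = np.linalg.svd(z, full_matrices=False)
      factors = z @ vt[:2].T / np.sqrt(N)

      # small factor-augmented VAR and a 12-step-ahead forecast
      data = np.column_stack([target, factors])
      res = VAR(data).fit(maxlags=2)
      fcast = res.forecast(data[-res.k_ar:], steps=12)
      print("12-step-ahead forecasts of the target series:", np.round(fcast[:, 0], 2))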
  17. By: Klein, Ingo
    Abstract: We identify sufficient and partly necessary conditions for a family of copulas to be closed under the construction of generalized linear mean values. These families of copulas generalize results well known from the literature for the Farlie-Gumbel-Morgenstern (FGM), the Ali-Mikhail-Haq (AMH) and the Barnett-Gumbel (BG) families of copulas, which are closed under weighted linear, harmonic and geometric means, respectively. For these generalizations we calculate the range of Spearman's ρ depending on the choice of weights α, the copula's generating function φ and the exponent γ that determines which kind of mean value is considered. It seems that the FGM and AMH generating function φ(υ) = 1 - υ maximizes the range of Spearman's ρ. Furthermore, it is shown that the considered families of copulas closed under the construction of generalized linear means have no tail dependence in the sense of Ledford & Tawn.
    Keywords: copula, generalized linear means, Spearman's ρ, tail dependence
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:zbw:faucse:862010&r=ecm
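    A minimal Python check, for the baseline Farlie-Gumbel-Morgenstern copula C(u,v) = uv(1 + θ(1-u)(1-v)), that Spearman's ρ computed from the standard integral formula equals θ/3; the paper's generalized-mean constructions and tail dependence results are not reproduced:
      import numpy as np
      from scipy import integrate

      theta = 0.7

      def fgm(u, v):
          """FGM copula evaluated at (u, v)."""
          return u * v * (1 + theta * (1 - u) * (1 - v))

      # Spearman's rho: 12 * integral of C(u, v) over the unit square, minus 3
      val, _ = integrate.dblquad(fgm, 0, 1, lambda u: 0, lambda u: 1)
      print(f"numerical rho: {12 * val - 3:.4f}, closed form theta/3: {theta / 3:.4f}")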

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.