nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒03‒26
nineteen papers chosen by
Sune Karlsson
Örebro University

  1. Cointegrating Polynomial Regressions By Hong, Seung Hyun; Wagner, Martin
  2. Markov-Switching MIDAS Models By Pierre Guerin; Massimiliano Marcellino
  3. Small Sample Properties of Alternative Tests for Martingale Difference Hypothesis By Amélie Charles; Olivier Darné; Jae H Kim
  4. Bootstrap Tests for Structural Breaks When the Regressors and Error Term are Nonstationary By Dong Jin Lee
  5. Specification and estimation of rating scale models - with an application to the determinants of life satisfaction By Raphael Studer; Rainer Winkelmann
  6. Observation Driven Mixed-Measurement Dynamic Factor Models with an Application to Credit Risk By Drew Creal; Bernd Schwaab; Siem Jan Koopman; Andre Lucas
  7. A practical comparison of the bivariate probit and linear IV estimators By Chiburis, Richard C.; Das, Jishnu; Lokshin, Michael
  8. Forecasting the Term Structure of Interest Rates Using Integrated Nested Laplace Approximations By Márcio Laurini; Luiz Koodi Hotta
  9. Testing for non-causality by using the Autoregressive Metric By Di Iorio, Francesca; Triacca, Umberto
  10. Density estimators through Zero Variance Markov Chain Monte Carlo By Antonietta Mira; Daniele Imparato
  11. Zero Variance Markov Chain Monte Carlo for Bayesian Estimators By Antonietta Mira; Daniele Imparato; Reza Solgi
  12. Relating Stochastic Volatility Estimation Methods By Charles S. Bos
  13. Hodges-Lehmann Optimality for Testing Moment Conditions By Ivan Canay; Taisuke Otsu
  14. To Aggregate or Not to Aggregate: Should decisions and models have the same frequency? By Kiygi Calli, M.; Weverbergh, M.; Franses, Ph.H.B.F.
  15. The Identification of Price Jumps By Jan Hanousek; Evzen Kocenda; Jan Novotny
  16. Give missings a chance: Combined stochastic and rule-based approach to improve regression models with mismeasured monotonic covariates without side information By Dlugosz, Stephan
  17. Unit-root and stationarity testing with empirical application on industrial production of CEE-4 countries By Lyócsa, Štefan; Výrost, Tomáš; Baumöhl, Eduard
  18. Clustering life trajectories: A new divisive hierarchical clustering algorithm for discrete-valued discrete time series By Dlugosz, Stephan
  19. Note on the Interpretation of Convergence Speed in the Dynamic Panel Model By Masahiko Shibamoto; Yoshiro Tsutsui

  1. By: Hong, Seung Hyun (Korea Institute of Public Finance, Seoul, Korea); Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: This paper develops a fully modified OLS estimator for cointegrating polynomial regressions, i.e. for regressions including deterministic variables, integrated processes and powers of integrated processes as explanatory variables, with stationary errors. The errors are allowed to be serially correlated and the regressors are allowed to be endogenous. The paper thus extends the fully modified approach developed in Phillips and Hansen (1990). The FM-OLS estimator has a zero mean Gaussian mixture limiting distribution, which is the basis for standard asymptotic inference. In addition, Wald and LM tests for specification as well as a KPSS-type test for cointegration are derived. The theoretical analysis is complemented by a simulation study which shows that the developed FM-OLS estimator and the tests based upon it perform well, in the sense that their performance advantages over OLS are by and large similar to those of FM-OLS over OLS in cointegrating regressions.
    Keywords: Cointegrating polynomial regression, fully modified OLS estimation, integrated process, testing
    JEL: C12 C13 C32
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:264&r=ecm
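    A minimal Python sketch of the regression setup only (illustrative assumptions throughout; it runs plain OLS and omits the paper's fully modified correction for endogeneity and serial correlation):

```python
import numpy as np

# Quadratic cointegrating polynomial regression: y_t = b0 + b1*x_t + b2*x_t^2 + u_t,
# where x_t is a random walk (I(1)) and u_t is stationary.
rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))           # integrated regressor
u = rng.normal(scale=0.5, size=T)           # stationary error
y = 1.0 + 0.5 * x + 0.1 * x**2 + u

X = np.column_stack([np.ones(T), x, x**2])  # deterministic term + powers of x
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_ols)  # close to (1.0, 0.5, 0.1); FM-OLS would additionally correct
                 # for endogeneity and serial correlation, omitted here
```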
  2. By: Pierre Guerin; Massimiliano Marcellino
    Abstract: This paper introduces a new regression model - Markov-switching mixed data sampling (MS-MIDAS) - that incorporates regime changes in the parameters of mixed data sampling (MIDAS) models and allows for the use of mixed-frequency data in Markov-switching models. After a discussion of estimation and inference for MS-MIDAS and a small-sample simulation-based evaluation, the MS-MIDAS model is applied to predicting US and UK economic activity, in terms of both quantitative forecasts of aggregate economic activity and prediction of business cycle regimes. Both simulation and empirical results indicate that MS-MIDAS is a very useful specification.
    Keywords: Business cycle, Mixed-frequency data, Non-linear models, Forecasting, Nowcasting
    JEL: C22 C53 E37
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2011/03&r=ecm
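    For readers unfamiliar with MIDAS, the sketch below shows the standard exponential Almon weighting that maps a high-frequency regressor into a low-frequency equation; the parameter values and series are made up for illustration:

```python
import numpy as np

def exp_almon_weights(K, theta1, theta2):
    """Normalized exponential Almon lag weights commonly used in MIDAS regressions."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

# Collapse one quarter of daily data (66 obs) into a single quarterly regressor.
rng = np.random.default_rng(1)
x_daily = rng.normal(size=66)
w = exp_almon_weights(66, 0.05, -0.005)
x_midas = w @ x_daily   # weighted regressor entering the low-frequency equation
print(x_midas)
```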
  3. By: Amélie Charles (Audencia Nantes, School of Management); Olivier Darné (LEMNA, University of Nantes); Jae H Kim (School of Economics and Finance, La Trobe University)
    Abstract: A Monte Carlo experiment is conducted to compare the power properties of alternative tests for the martingale difference hypothesis. Overall, we find that the wild bootstrap automatic variance ratio test shows the highest power against linear dependence, while the generalized spectral test performs most desirably under nonlinear dependence.
    Keywords: Monte Carlo experiment; Nonlinear dependence; Portmanteau test; Variance ratio test
    JEL: C12 C14
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:ltr:wpaper:2010.07&r=ecm
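    As background, a minimal sketch of the (unstandardized) Lo-MacKinlay variance ratio that such tests build on; under the martingale difference hypothesis VR(q) should be close to one:

```python
import numpy as np

def variance_ratio(r, q):
    """Lo-MacKinlay variance ratio VR(q); approximately 1 under the martingale
    difference hypothesis (standardization and bootstrapping omitted here)."""
    r = np.asarray(r) - np.mean(r)
    var1 = np.mean(r**2)
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period sums
    varq = np.mean(rq**2) / q
    return varq / var1

rng = np.random.default_rng(2)
returns = rng.normal(size=1000)    # a martingale difference sequence by construction
print(variance_ratio(returns, 5))  # should be near 1
```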
  4. By: Dong Jin Lee (University of Connecticut)
    Abstract: This paper considers tests for structural breaks in linear models when the regressors and the serially dependent error process are unstable. The set of models covers various economic circumstances, such as structural breaks in the regressors and/or the error variance, and a linear trend model with I(0)/I(1) errors. We show that the existing heteroskedasticity-robust tests and the fixed regressor bootstrap method of Hansen (2000) suffer from severe size distortions, even asymptotically. We suggest a method that combines the fixed regressor bootstrap with the sieve-wild bootstrap to nonparametrically approximate the serially dependent, unstable error process. The suggested method is shown to asymptotically replicate the true distributions of the existing tests under various circumstances. Monte Carlo experiments show significant improvements in both size and power. Once size is controlled by the bootstrap, Wald-type tests have better power properties than LM-type tests.
    Keywords: structural break, sieve bootstrap, fixed regressor bootstrap, robust test, break in linear trend
    JEL: C10 C12 C22
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:uct:uconnp:2011-05&r=ecm
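    A hedged sketch of the sieve-wild idea, assuming an AR(p) sieve and standard normal wild multipliers (the paper's combination with the fixed regressor bootstrap, and the test statistics themselves, are omitted):

```python
import numpy as np

def sieve_wild_bootstrap(e, p, rng):
    """One sieve-wild bootstrap replicate of a serially dependent error series e:
    fit an AR(p) sieve, perturb the fitted innovations by N(0,1) multipliers
    (the 'wild' step), then rebuild the series recursively."""
    T = len(e)
    Y = e[p:]
    X = np.column_stack([e[p - j - 1:T - j - 1] for j in range(p)])
    phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
    eps = Y - X @ phi
    eps_star = eps * rng.normal(size=len(eps))   # wild multipliers
    e_star = list(e[:p])                         # initial conditions
    for t in range(len(eps_star)):
        e_star.append(np.dot(phi, e_star[-1:-p - 1:-1]) + eps_star[t])
    return np.array(e_star)

rng = np.random.default_rng(3)
e = np.zeros(300)
for t in range(1, 300):                          # AR(1) errors for the demo
    e[t] = 0.6 * e[t - 1] + rng.normal()
print(sieve_wild_bootstrap(e, p=4, rng=rng)[:5])
```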
  5. By: Raphael Studer; Rainer Winkelmann
    Abstract: This article proposes a new class of rating scale models which merges the advantages and overcomes the shortcomings of the traditional linear and ordered latent regression models. Both parametric and semi-parametric estimation are considered. The insights of an empirical application to satisfaction data are threefold. First, the methods are easily implementable in standard statistical software. Second, the non-linear model allows for flexible marginal effects, and predicted means respect the boundaries of the dependent variable. Third, average marginal effects are similar to ordinary least squares estimates.
    Keywords: Rating variables, non-linear least squares, quasi-maximum likelihood, semiparametric least squares, subjective well-being
    JEL: C21 I00
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:003&r=ecm
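    A minimal sketch of a non-linear mean that respects the boundaries of a 0-10 rating scale, fitted by least squares; the scale, logistic link, parameter values and data-generating process are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
mu = 10 / (1 + np.exp(-(0.3 + 0.8 * x)))   # predicted mean stays inside [0, 10]
y = np.clip(np.round(mu + rng.normal(scale=1.5, size=n)), 0, 10)

def resid(beta):
    return y - 10 / (1 + np.exp(-(beta[0] + beta[1] * x)))

fit = least_squares(resid, x0=np.zeros(2))  # non-linear least squares
print(fit.x)                                # roughly (0.3, 0.8)
```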
  6. By: Drew Creal (University of Chicago, Booth School of Business); Bernd Schwaab (European Central Bank); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam)
    Abstract: We propose a dynamic factor model for mixed-measurement and mixed-frequency panel data. In this framework time series observations may come from a range of families of parametric distributions, may be observed at different time frequencies, may have missing observations, and may exhibit common dynamics and cross-sectional dependence due to shared exposure to dynamic latent factors. The distinguishing feature of our model is that the likelihood function is known in closed form and need not be obtained by means of simulation, thus enabling straightforward parameter estimation by standard maximum likelihood. We use the new mixed-measurement framework for the signal extraction and forecasting of macro, credit, and loss given default risk conditions for U.S. Moody's-rated firms from January 1982 until March 2010.
    Keywords: panel data; loss given default; default risk; dynamic beta density; dynamic ordered probit; dynamic factor model
    JEL: C32 G32
    Date: 2011–02–21
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20110042&r=ecm
  7. By: Chiburis, Richard C.; Das, Jishnu; Lokshin, Michael
    Abstract: This paper presents asymptotic theory and Monte Carlo simulations comparing maximum-likelihood bivariate probit and linear instrumental variables estimators of treatment effects in models with a binary endogenous treatment and binary outcome. The three main contributions of the paper are (a) clarifying the relationship between the Average Treatment Effect obtained in the bivariate probit model and the Local Average Treatment Effect estimated through linear IV; (b) comparing the mean-square error and the actual size and power of tests based on these estimators across a wider range of parameter values than the existing literature; and (c) assessing the performance of misspecification tests for bivariate probit models. The authors recommend two changes to common practices: bootstrapped confidence intervals for both estimators, and a score test to check goodness of fit for the bivariate probit model.
    Keywords: Scientific Research & Science Parks, Science Education, Statistical & Mathematical Sciences, Econometrics, Educational Technology and Distance Education
    Date: 2011–03–01
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:5601&r=ecm
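    A hedged simulation sketch of the setting compared in the paper: binary instrument, binary treatment and binary outcome with correlated latent normal errors. Only the linear IV (Wald) estimate is computed; the bivariate probit MLE, which models the latent errors directly, is omitted:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
z = rng.integers(0, 2, size=n)                     # binary instrument
e1, e2 = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
d = (0.8 * z - 0.4 + e1 > 0).astype(int)           # endogenous binary treatment
y = (0.5 * d - 0.3 + e2 > 0).astype(int)           # binary outcome

# Wald / linear IV estimate of the (local) average treatment effect
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(wald)
```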
  8. By: Márcio Laurini (IBMEC Business School); Luiz Koodi Hotta (IMECC-Unicamp)
    Abstract: This article discusses the use of Bayesian methods for inference and forecasting in dynamic term structure models through Integrated Nested Laplace Approximations (INLA). This method of analytical approximation allows accurate inference for latent factors, parameters and forecasts in dynamic models at reduced computational cost. In the estimation of dynamic term structure models it also avoids some simplifications in the inference procedures, such as two-stage estimation. The results obtained in the estimation of the dynamic Nelson-Siegel model indicate that this methodology produces more accurate out-of-sample forecasts than two-stage estimation by OLS and than Bayesian estimation using MCMC. These analytical approximations also allow efficient calculation of model selection measures, such as generalized cross-validation and marginal likelihood, that may be computationally prohibitive in MCMC estimation.
    Keywords: Term Structure, Latent Factors, Bayesian Forecasting, Laplace Approximations
    JEL: C11 G12
    Date: 2011–03–14
    URL: http://d.repec.org/n?u=RePEc:ibr:dpaper:2011-01&r=ecm
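    As background, a sketch of the cross-sectional step of the Nelson-Siegel model referenced in the abstract, with made-up yields and the common lambda = 0.0609 monthly calibration; the paper's INLA machinery and factor dynamics are not shown:

```python
import numpy as np

def nelson_siegel_loadings(tau, lam):
    """Nelson-Siegel loadings for maturities tau: level, slope, curvature."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

tau = np.array([3, 6, 12, 24, 60, 120.0])   # maturities in months
B = nelson_siegel_loadings(tau, lam=0.0609)
yields = np.array([1.0, 1.2, 1.5, 2.0, 2.8, 3.3])   # illustrative yields (%)
beta, *_ = np.linalg.lstsq(B, yields, rcond=None)   # cross-sectional OLS step
print(beta)   # level, slope and curvature factors for this date
```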
  9. By: Di Iorio, Francesca; Triacca, Umberto
    Abstract: A new non-causality test based on the notion of distance between ARMA models is proposed in this paper. The advantage of this test is that it can be used in possibly integrated and cointegrated systems, without pre-testing for unit roots and cointegration. Monte Carlo experiments indicate that the proposed method performs reasonably well in finite samples. The empirical relevance of the test is illustrated via two applications.
    Keywords: AR metric; Bootstrap test; Granger non-causality; VAR
    JEL: C12 C15 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:29637&r=ecm
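    A minimal sketch of the AR metric (Piccolo's distance), computed as the Euclidean distance between truncated AR(infinity) expansions of two ARMA models; the truncation at K = 50 is an arbitrary illustrative choice:

```python
import numpy as np

def pi_weights(phi, theta, K=50):
    """First K AR(inf) coefficients of an ARMA model with AR polynomial
    phi(B) = 1 - phi[0]B - ... and MA polynomial theta(B) = 1 + theta[0]B + ..."""
    a = np.zeros(K)
    a[0] = 1.0
    for j, ph in enumerate(phi, start=1):
        if j < K:
            a[j] = -ph
    c = np.zeros(K)
    for j in range(K):                 # long division of phi(B) by theta(B)
        s = a[j]
        for k, th in enumerate(theta, start=1):
            if k <= j:
                s -= th * c[j - k]
        c[j] = s
    return c

def ar_metric(m1, m2, K=50):
    """Piccolo's AR metric: Euclidean distance between AR(inf) expansions."""
    c1, c2 = pi_weights(*m1, K), pi_weights(*m2, K)
    return np.sqrt(np.sum((c1[1:] - c2[1:]) ** 2))  # skip the leading 1

print(ar_metric(([0.5], []), ([0.9], [])))      # two AR(1) models: |0.5 - 0.9| = 0.4
print(ar_metric(([0.5], [0.3]), ([0.5], [])))   # ARMA(1,1) vs AR(1)
```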
  10. By: Antonietta Mira (Department of Economics, University of Insubria, Italy); Daniele Imparato (Department of Economics, University of Insubria, Italy)
    Abstract: A Markov chain Monte Carlo method is proposed for the pointwise evaluation of a density whose normalizing constant is not known. The method was introduced in the physics literature by Assaraf et al. (2007). Conditions for unbiasedness of the estimator are derived, and a central limit theorem is proved under regularity conditions. The new idea is tested on some toy examples.
    Keywords: Density estimator, Fundamental solution, MCMC simulation
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:ins:quaeco:qf1108&r=ecm
  11. By: Antonietta Mira (Department of Economics, University of Insubria, Italy); Daniele Imparato (Department of Economics, University of Insubria, Italy); Reza Solgi (Istituto di Finanza, Universita di Lugano)
    Abstract: A general purpose variance reduction technique for Markov chain Monte Carlo (MCMC) estimators, based on the zero-variance principle introduced in the physics literature, is proposed to evaluate the expected value of a function f with respect to a, possibly unnormalized, probability distribution π. In this context, a control variate approach, generally used for Monte Carlo simulation, is exploited by replacing f with a different function, f̃. The function f̃ is constructed so that its expectation under π equals that of f, but its variance with respect to π is much smaller. Theoretically, an optimal re-normalization of f exists which may lead to zero variance; in practice, a suitable approximation to it must be found. In this paper, an efficient class of re-normalized f̃ is investigated, based on a polynomial parametrization. We find that a low-degree polynomial (1st, 2nd or 3rd degree) can lead to dramatic variance reduction in the resulting zero-variance MCMC estimator. General formulas for the construction of the control variates in this context are given. These allow for an easy implementation of the method in very general settings, regardless of the form of the target/posterior distribution (only differentiability is required) and of the MCMC algorithm implemented (in particular, no reversibility is needed).
    Keywords: Control variates, GARCH models, Logistic regression, Metropolis-Hastings algorithm, Variance reduction
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:ins:quaeco:qf1109&r=ecm
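    A minimal sketch of a first-degree zero-variance control variate, assuming the score of the target is available (as it is, up to normalization, for typical posteriors). For clarity, i.i.d. Gaussian draws stand in for MCMC output; for f(x) = x and a Gaussian target the first-degree correction happens to be exactly optimal, so the variance essentially collapses:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=100_000)    # stand-in for MCMC draws from pi

f = x                                      # we want E[X] under pi
z = 0.5 * (x - mu) / sigma**2              # z = -0.5 * d log pi/dx, E[z] = 0
a = -np.cov(z, f)[0, 1] / np.var(z)        # optimal control-variate coefficient
f_tilde = f + a * z                        # same mean as f, far smaller variance

print(np.var(f), np.var(f_tilde))          # variance drops by orders of magnitude
print(f.mean(), f_tilde.mean())            # both estimate E[X] = mu
```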
  12. By: Charles S. Bos
    Abstract: Estimation of the volatility of time series has taken off since the introduction of the GARCH and stochastic volatility models. While variants of the GARCH model are applied in scores of articles, use of the stochastic volatility model is less widespread. In this article it is argued that one reason for this difference is the relative difficulty of estimating the unobserved stochastic volatility, and the variety of approaches that have been taken for such estimation. In order to simplify the comprehension of these estimation methods, the main methods for estimating stochastic volatility are discussed, with a focus on their commonalities. In this manner, the advantages of each method are investigated, resulting in a comparison of the methods in terms of efficiency, difficulty of implementation, and precision.
    Keywords: Stochastic volatility; estimation; methodology
    JEL: C13 C51
    Date: 2011–03–03
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20110049&r=ecm
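    For reference, a sketch of the canonical stochastic volatility model whose estimation the article surveys; parameter values are illustrative:

```python
import numpy as np

# Log-variance follows an AR(1); returns are Gaussians scaled by exp(h/2).
rng = np.random.default_rng(7)
T, mu, phi, sigma_eta = 1000, -1.0, 0.95, 0.2
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)   # observed returns; h stays latent

# Estimating (mu, phi, sigma_eta) is the hard part the article surveys:
# because h is unobserved, QML, GMM, simulated ML and MCMC all differ.
```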
  13. By: Ivan Canay (Dept. of Economics, Northwestern University); Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: This paper studies the Hodges and Lehmann (1956) optimality of tests in a general setup. The tests are compared by the exponential rates at which their power functions, evaluated at a fixed alternative, approach one, while keeping the asymptotic sizes bounded by some constant. We present two sets of sufficient conditions for a test to be Hodges-Lehmann optimal. These new conditions extend the scope of the Hodges-Lehmann optimality analysis to setups that cannot be covered by other conditions in the literature. The general result is illustrated by our applications of interest: testing for moment conditions and overidentifying restrictions. In particular, we show that (i) the empirical likelihood test does not necessarily satisfy existing conditions for optimality but does satisfy our new conditions; and (ii) the generalized method of moments (GMM) test and the generalized empirical likelihood (GEL) tests are Hodges-Lehmann optimal under mild primitive conditions. These results support the belief that Hodges-Lehmann optimality is a weak asymptotic requirement.
    Keywords: Asymptotic optimality, Large deviations, Moment condition, Generalized method of moments, Generalized empirical likelihood
    JEL: C13 C14
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1789&r=ecm
  14. By: Kiygi Calli, M.; Weverbergh, M.; Franses, Ph.H.B.F.
    Abstract: We examine the situation where hourly data are available for designing advertising-response models, whereas managerial decision making can concern hourly, daily or weekly intervals. The key question is how models for hourly data compare to models based on weekly data with respect to forecasting accuracy and to assessing advertising impact. Simulation experiments suggest that the strategy of modeling the least aggregated data and forecasting the more aggregated data yields better forecasts, provided that one has a correct model specification for the higher-frequency data. A detailed analysis of three actual data sets confirms this conclusion. A key feature of this confirmation is that aggregation affects the data transformation used to dampen the variance, and the estimated advertising impact is sensitive to the appropriate transformation. Our conclusion is that disaggregated models are preferable even when decisions have to be made at lower frequencies.
    Keywords: advertising effectiveness; advertising response; aggregation; normative and predictive validity
    Date: 2010–12–15
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:1765022614&r=ecm
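    A hedged sketch of the paper's comparison, using a simulated hourly AR(1) and a daily decision horizon in place of the paper's hourly/weekly advertising data (the true hourly coefficient is assumed known for brevity):

```python
import numpy as np

rng = np.random.default_rng(8)
T, phi = 24 * 400, 0.7
y = np.zeros(T)
for t in range(1, T):                     # hourly AR(1) process
    y[t] = phi * y[t - 1] + rng.normal()

# (a) hourly model: forecast each of the next 24 hours, then sum
fc_hourly = sum(y[-1] * phi**h for h in range(1, 25))

# (b) daily model: fit an AR(1) to daily sums by OLS, one-step forecast
d = y.reshape(-1, 24).sum(axis=1)
rho = np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1])
fc_daily = rho * d[-1]

print(fc_hourly, fc_daily)   # the paper finds strategy (a) typically forecasts better
```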
  15. By: Jan Hanousek; Evzen Kocenda; Jan Novotny
    Abstract: We performed an extensive simulation study to compare the relative performance of many price-jump indicators with respect to false positive and false negative probabilities. We simulated twenty different time series specifications with different intraday noise volatility patterns and price-jump specifications. The double McNemar (1947) non-parametric test was applied to the constructed artificial time series to compare fourteen price-jump indicators that are widely used in the literature. The results suggest large differences in performance among the indicators, but we were able to identify the best-performing ones. For false positive probability, the best-performing price-jump indicator is based on thresholding with respect to centiles; for false negative probability, the best indicator is based on bipower variation.
    Keywords: price jumps; price-jump indicators; non-parametric testing; Monte Carlo simulations; financial econometrics
    JEL: C14 F37 G15
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:cer:papers:wp434&r=ecm
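    As background, a sketch of the bipower-variation logic behind the class of indicator found best for false negatives: realized variance picks up jumps while bipower variation is robust to them, so their ratio flags jump days (the formal test statistic and its distribution are omitted):

```python
import numpy as np

def rv_bv_jump_ratio(r):
    """Realized variance, bipower variation, and their ratio: RV/BV well
    above 1 signals a jump component in the day's intraday returns."""
    rv = np.sum(r**2)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    return rv, bv, rv / bv

rng = np.random.default_rng(9)
r = rng.normal(scale=0.001, size=288)   # 5-minute returns, no jump
print(rv_bv_jump_ratio(r)[2])           # close to 1
r[100] += 0.01                          # inject a price jump
print(rv_bv_jump_ratio(r)[2])           # noticeably above 1
```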
  16. By: Dlugosz, Stephan
    Abstract: Register data are known for their large sample size and good data quality. The measurement accuracy of a variable, however, depends strongly on its importance for administrative processes. The education variable in the IAB employment sub-sample is an example of information gathered without a clear purpose; it therefore suffers severely from missing values and misclassification. In this paper, a classical approach to dealing with incomplete data is used in combination with rule-based plausibility checks for misclassification to improve the quality of the variable. The developed correction procedure is applied to simple Mincer-type wage regressions. The procedure reveals that the quality of the years-of-education variable is very important: the German labour market rewards general education less than vocational training. Furthermore, using this method, no indication of an inflation in formal education degrees could be found.
    Keywords: Measurement error, EM by the method of weights, wage regression, expansion of educational degrees, misclassification, imputation rules
    JEL: C13 J24 J31
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:11013&r=ecm
  17. By: Lyócsa, Štefan; Výrost, Tomáš; Baumöhl, Eduard
    Abstract: The purpose of this paper is to explain both the need for and the procedures of unit-root testing to a wider audience. The topic of stationarity testing in general, and unit-root testing in particular, covers a vast amount of research. We discuss the problem in four settings. First, we investigate the nature of the problem that motivated the study of unit-root processes. Second, we present a short list of several traditional as well as more recent univariate and panel data tests. Third, we give a brief overview of economic theories in which the underlying research hypothesis can be expressed as a unit-root or stationarity test: issues such as purchasing power parity, economic bubbles, industry dynamics, economic convergence and unemployment hysteresis can all be formulated as tests of a unit root in a particular series. Fourth, we present an empirical application testing for non-stationarity in the industrial production of the CEE-4 countries using a simulation-based unit-root testing methodology.
    Keywords: Unit-root; Stationarity; Univariate tests; Panel tests; Simulation based unit root tests; Industrial production
    JEL: C20 E23 C30 E60
    Date: 2011–03–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:29648&r=ecm
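    A minimal illustration of the complementary logic of unit-root and stationarity tests on a simulated random walk, using standard statsmodels implementations (not the authors' simulation-based tests):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(10)
x = np.cumsum(rng.normal(size=300))     # random walk: unit root present

adf_p = adfuller(x)[1]                  # ADF, H0: unit root
kpss_p = kpss(x, regression="c")[1]     # KPSS, H0: (level-)stationarity
print(f"ADF p-value:  {adf_p:.3f}")     # large -> cannot reject the unit root
print(f"KPSS p-value: {kpss_p:.3f}")    # small -> reject stationarity
```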
  18. By: Dlugosz, Stephan
    Abstract: A new algorithm for clustering life course trajectories is presented and tested on large register data. Life courses are represented as sequences on a monthly timescale over the working life, with an age span from 16 to 65. A meaningful clustering result for this kind of data provides interesting subgroups with similar life course trajectories. The high sampling rate allows precise discrimination of the different subgroups, but it produces a lot of highly correlated data for phases with low variability. The main challenge is to select the variables (points in time) that carry most of the relevant information. The new algorithm deals with this problem by simultaneously clustering and identifying critical junctures for each of the relevant subgroups. The developed divisive algorithm is able to handle large amounts of data with multiple dimensions within reasonable time, as demonstrated on data from the Federal German pension insurance.
    Keywords: Clustering, measures of association, discrete data, time series
    JEL: C33 J00
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:11015&r=ecm
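    A crude, hedged sketch of the divisive idea described in the abstract: repeatedly split on the single time point whose state distribution is most heterogeneous, a stand-in for the paper's critical junctures (the actual algorithm and its association measures differ):

```python
import numpy as np
from collections import Counter

def best_split_time(seqs):
    """Pick the time point with the highest state entropy (most informative)."""
    seqs = np.asarray(seqs)
    ent = []
    for t in range(seqs.shape[1]):
        counts = np.array(list(Counter(seqs[:, t]).values()), dtype=float)
        p = counts / counts.sum()
        ent.append(-np.sum(p * np.log(p)))
    return int(np.argmax(ent))

def divisive_cluster(seqs, min_size=2, depth=3):
    """Recursively split sequences on the state at the most informative time."""
    seqs = np.asarray(seqs)
    if depth == 0 or len(seqs) < min_size:
        return [seqs]
    t = best_split_time(seqs)
    groups = [seqs[seqs[:, t] == s] for s in np.unique(seqs[:, t])]
    if len(groups) == 1:
        return [seqs]
    out = []
    for g in groups:
        out.extend(divisive_cluster(g, min_size, depth - 1))
    return out

seqs = np.array([[0, 0, 1, 1], [0, 0, 1, 2], [1, 1, 2, 2], [1, 2, 2, 2]])
for g in divisive_cluster(seqs, depth=1):   # one split at the critical juncture
    print(g)
```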
  19. By: Masahiko Shibamoto (Research Institute for Economics and Business Administration, Kobe University); Yoshiro Tsutsui (Graduate School of Economics, Osaka University)
    Abstract: Studies using the dynamic panel regression approach have found the speed of income convergence among the world and regional economies to be high. For example, Lee et al. (1997, 1998) report the income convergence speed to be 30% per annum. This note argues that their estimates may be seriously overstated. Using a factor model, we show that the coefficient on lagged income in their specification may capture not the long-run convergence speed but the speed of adjustment of short-run deviations from the long-run equilibrium path. We give an example of an empirical analysis in which the short-run adjustment speed is about 40%.
    Keywords: convergence speed, dynamic panel regression, factor model
    JEL: O40
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:kob:dpaper:dp2011-04&r=ecm
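    A worked illustration of the magnitudes discussed in the note, assuming the usual mapping from a lagged-income coefficient rho to an implied annual speed of -ln(rho):

```python
import numpy as np

# In y_it = a_i + rho * y_i,t-1 + ..., a common convention reports the
# implied convergence speed as -ln(rho) per period.
rho = 0.70
print(-np.log(rho))   # ~0.357, i.e. about 36% per annum; the note argues such
                      # a figure may reflect short-run adjustment rather than
                      # long-run convergence
```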

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.