nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒08‒28
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Uniform Inference on Quantile Effects under Sharp Regression Discontinuity Designs By Zhongjun Qu; Jungmo Yoon
  2. Fractal approach towards power-law coherency to measure cross-correlations between time series By Ladislav Kristoufek
  3. Forecasting in the presence of in and out of sample breaks By Jiawen Xu; Pierre Perron
  4. Specification testing for errors-in-variables models By Taisuke Otsu; Luke Taylor
  5. Residuals-based Tests for Cointegration with GLS Detrended Data By Pierre Perron; Gabriel Rodríguez
  6. Combining Long Memory and Level Shifts in Modeling and Forecasting the Volatility of Asset Returns By Rasmus T. Varneskov; Pierre Perron
  7. Convergence rates of sums of α-mixing triangular arrays: with an application to non-parametric drift function estimation of continuous-time processes By Shin Kanaya
  8. Estimating Country Heterogeneity in Capital-Labor Substitution Using Panel Data By Lucciano Villacorta
  9. Contagion, spillover and interdependence By Rigobon, Roberto
  10. Improved Tests for Forecast Comparisons in the Presence of Instabilities By Luis Filipe Martins; Pierre Perron
  11. Confidence Intervals for Projections of Partially Identified Parameters By Hiroaki Kaido; Francesca Molinari; Jorg Stoye
  12. Measuring Business Cycles with Structural Breaks and Outliers: Applications to International Data By Pierre Perron; Tatsuma Wada
  13. A Framework for Measurement Error in Self-Reported Health Conditions By Perry Singleton; Ling Li
  14. A Dynamic Multi-Level Factor Model with Long-Range Dependence By Yunus Emre Ergemen; Carlos Vladimir Rodríguez-Caballero
  15. Eigenvalue Ratio Estimators for the Number of Common Factors By Cavicchioli, Maddalena; Forni, Mario; Lippi, Marco; Zaffaroni, Paolo

  1. By: Zhongjun Qu (Boston University); Jungmo Yoon (Hanyang University)
    Abstract: This paper builds upon conditional quantile processes to develop methods for conducting uniform inference on quantile treatment effects under sharp regression discontinuity (RD) designs. It begins by developing Score and Wald type tests for a range of hypotheses that are related to treatment significance, homogeneity and unambiguity. It gives conditions under which the asymptotic distributions of these tests are unaffected by the biases from the nonparametric estimation without requiring under-smoothing. Further, for situations where the conditions can be stringent, the paper develops a procedure that explicitly accounts for the effects of the biases while paying special attention to their estimation uncertainty. The paper also provides a procedure for constructing uniform confidence bands for the quantile treatment effects. As an empirical application, we apply the methods to study the effects of cash-on-hand on unemployment durations. The results reveal pronounced treatment heterogeneity and also point to the importance of considering the long-term unemployed.
    Keywords: heterogeneity, quantile regression, regression discontinuity, treatment effect, unemployment duration
    JEL: C14 C21
    Date: 2015–11–11
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-009&r=ecm
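    As a fixing-ideas sketch (not the paper's inference procedure, which adds bias corrections and uniform bands): quantile treatment effects at a sharp RD cutoff can be estimated by local linear quantile regression on each side of the cutoff. The bandwidth, data-generating process and names below are illustrative assumptions.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(0)
      n, cutoff, h = 2000, 0.0, 0.3            # h: an assumed bandwidth
      x = rng.uniform(-1, 1, n)                # running variable
      d = (x >= cutoff).astype(float)          # sharp treatment assignment
      y = 1 + 0.5 * x + d * (0.8 + 0.6 * rng.standard_normal(n)) + rng.standard_normal(n)

      def fit_at_cutoff(mask, tau):
          # local linear quantile fit (uniform kernel); intercept = Q_tau at cutoff
          X = sm.add_constant(x[mask] - cutoff)
          return QuantReg(y[mask], X).fit(q=tau).params[0]

      for tau in (0.25, 0.5, 0.75):
          left = (x < cutoff) & (x > cutoff - h)
          right = (x >= cutoff) & (x < cutoff + h)
          qte = fit_at_cutoff(right, tau) - fit_at_cutoff(left, tau)
          print(f"QTE at tau={tau}: {qte:.3f}")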
  2. By: Ladislav Kristoufek
    Abstract: We focus on power-law coherency as an alternative approach towards studying power-law cross-correlations between simultaneously recorded time series. To be able to study empirical data, we introduce three estimators of the power-law coherency parameter $H_{\rho}$ based on popular techniques usually utilized for studying power-law cross-correlations -- detrended cross-correlation analysis (DCCA), detrending moving-average cross-correlation analysis (DMCA) and height cross-correlation analysis (HXA). In the finite sample properties study, we focus on the bias, variance and mean squared error of the estimators. We find that the DMCA-based method is the safest choice among the three. The HXA method is reasonable for long time series with at least $10^4$ observations, which can be easily attainable in some disciplines but problematic in others. The DCCA-based method does not provide favorable properties, and these even deteriorate with an increasing time series length. The paper opens a new avenue towards studying cross-correlations between time series.
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1608.06781&r=ecm
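    A minimal sketch of the DCCA building block: estimate the bivariate scaling exponent H_xy from the slope of log F_xy^2(s) on log s; the power-law coherency parameter is then formed by combining H_xy with the univariate exponents (roughly H_rho = H_xy - (H_x + H_y)/2). Data and scale choices below are illustrative.

      import numpy as np

      def dcca_exponent(x, y, scales):
          X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
          F2 = []
          for s in scales:
              t, covs = np.arange(s), []
              for b in range(len(X) // s):
                  xs, ys = X[b*s:(b+1)*s], Y[b*s:(b+1)*s]
                  rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrend box
                  ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
                  covs.append(np.mean(rx * ry))
              F2.append(np.mean(covs))
          # F_xy^2(s) ~ s^(2 H_xy); abs() guards against negative covariances
          return np.polyfit(np.log(scales), np.log(np.abs(F2)), 1)[0] / 2.0

      rng = np.random.default_rng(1)
      z = rng.standard_normal(5000)                    # common component
      x, y = z + rng.standard_normal(5000), z + rng.standard_normal(5000)
      print("estimated H_xy:", dcca_exponent(x, y, [16, 32, 64, 128, 256]))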
  3. By: Jiawen Xu (Shanghai University of Finance and Economics); Pierre Perron (Boston University)
    Abstract: We present a frequentist-based approach to forecast time series in the presence of in-sample and out-of-sample breaks in the parameters of the forecasting model. We first model the parameters as following a random level shift process, with the occurrence of a shift governed by a Bernoulli process. In order to have a structure so that changes in the parameters are forecastable, we introduce two modifications. The first models the probability of shifts according to some covariates that can be forecasted. The second incorporates a built-in mean reversion mechanism to the time path of the parameters. Similar modifications can also be made to model changes in the variance of the error process. Our full model can be cast into a non-linear, non-Gaussian state space framework. To estimate it, we use particle filtering and a Monte Carlo expectation maximization algorithm. Simulation results show that the algorithm delivers accurate in-sample estimates; in particular, the filtered estimates of the time path of the parameters follow closely their true variations. We provide a number of empirical applications and compare the forecasting performance of our approach with a variety of alternative methods. These show that substantial gains in forecasting accuracy are obtained.
    Keywords: instabilities; structural change; forecasting; random level shifts; particle filter
    JEL: C22 C53
    Date: 2015–09–20
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-012&r=ecm
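    A minimal sketch of the random level shift building block (parameter values and names are illustrative; the paper estimates the full model by particle filtering and Monte Carlo EM): shifts arrive at Bernoulli times, and the second modification adds mean reversion to the parameter path.

      import numpy as np

      rng = np.random.default_rng(2)
      T, p_shift, sigma_shift, kappa, mu0 = 500, 0.02, 1.0, 0.05, 0.0
      jumps = rng.binomial(1, p_shift, size=T)     # Bernoulli shift indicators

      beta = np.empty(T)
      beta[0] = mu0
      for t in range(1, T):
          # mean reversion pulls the level back toward mu0 between shifts
          beta[t] = (beta[t-1] + kappa * (mu0 - beta[t-1])
                     + jumps[t] * sigma_shift * rng.standard_normal())

      y = beta + 0.5 * rng.standard_normal(T)      # observations around the level
      print("realized number of shifts:", jumps.sum())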
  4. By: Taisuke Otsu; Luke Taylor
    Abstract: This paper considers specification testing for regression models with errors-in-variables and proposes a test statistic comparing the distance between the parametric and nonparametric fits based on deconvolution techniques. In contrast to the method proposed by Hall and Ma (2007), our test allows general nonlinear regression models. Since our test employs the smoothing approach, it complements the nonsmoothing one by Hall and Ma in terms of local power properties. The other existing method, by Song (2008), is shown to possess trivial power under certain alternatives. We establish the asymptotic properties of our test statistic for the ordinary and supersmooth measurement error densities and develop a bootstrap method to approximate the critical value. We apply the test to the specification of Engel curves in the US. Finally, some simulation results endorse our theoretical findings: our test has advantages in detecting high frequency alternatives and dominates the existing tests under certain specifications.
    Keywords: specification test, measurement errors, deconvolution
    JEL: C12
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2015/586&r=ecm
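    A minimal sketch of the deconvolution ingredient the test builds on: recovering the density of X from contaminated observations W = X + e with a known ordinary-smooth (Laplace) error density. The kernel, bandwidth and error scale are illustrative assumptions; the test itself compares parametric and nonparametric fits built from such estimators.

      import numpy as np

      rng = np.random.default_rng(3)
      n, b, h = 2000, 0.3, 0.4                     # Laplace scale b, bandwidth h
      X = rng.standard_normal(n)
      W = X + rng.laplace(0.0, b, n)               # contaminated observations

      t = np.linspace(-1, 1, 401)
      phi_K = (1 - t**2) ** 3                      # kernel c.f. with compact support

      def decon_kernel(u):
          # K_n(u) = (1/2pi) int cos(tu) phi_K(t) / phi_eps(t/h) dt,
          # with phi_eps(s) = 1 / (1 + b^2 s^2) for Laplace errors
          integrand = np.cos(np.outer(u, t)) * phi_K * (1 + (b / h) ** 2 * t**2)
          return np.trapz(integrand, t, axis=1) / (2 * np.pi)

      grid = np.linspace(-3, 3, 7)
      fhat = [decon_kernel((x0 - W) / h).mean() / h for x0 in grid]
      print(np.round(fhat, 3))                     # compare to N(0,1) density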
  5. By: Pierre Perron (Boston University); Gabriel Rodríguez (Pontificia Universidad Católica del Perú)
    Abstract: We provide GLS-detrended versions of single-equation static regression or residuals-based tests for testing whether or not non-stationary time series are cointegrated. Our approach is to consider nearly optimal tests for unit roots and apply them in the cointegration context. We derive the local asymptotic power functions of all tests considered for a triangular DGP imposing a directional restriction such that the regressors are pure integrated processes. Our GLS versions of the tests do indeed provide substantial power improvements over their OLS counterparts. Simulations show that the gains in power are important and stable across various configurations.
    Keywords: Cointegration, Residuals-Based Unit Root Tests, ECR Tests, OLS and GLS Detrended Data, Hypothesis Testing
    JEL: C22 C32 C52
    Date: 2015–10–19
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-017&r=ecm
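    For readers new to GLS detrending, a minimal sketch of the quasi-difference step in the spirit of Elliott, Rothenberg and Stock; the noncentrality parameter cbar below is the usual constant-only value, whereas the paper's cointegration setting calibrates it to the number of I(1) regressors.

      import numpy as np

      def gls_detrend(y, cbar=-7.0):
          # remove a constant by GLS detrending at alpha_bar = 1 + cbar/T
          T = len(y)
          abar = 1.0 + cbar / T
          z = np.ones((T, 1))                       # deterministics: constant only
          ya = np.r_[y[:1], y[1:] - abar * y[:-1]]  # quasi-differenced data
          za = np.vstack([z[:1], z[1:] - abar * z[:-1]])
          psi = np.linalg.lstsq(za, ya, rcond=None)[0]
          return y - z @ psi                        # GLS-detrended series

      rng = np.random.default_rng(4)
      y = 5.0 + np.cumsum(rng.standard_normal(300)) # I(1) series with a level
      ytil = gls_detrend(y)
      # a residuals-based cointegration test is then run on GLS-detrended
      # regressand and regressors instead of OLS-detrended ones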
  6. By: Rasmus T. Varneskov (Aarhus University and CREATES); Pierre Perron (Boston University)
    Abstract: We propose a parametric state space model of asset return volatility with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts and measurement errors. The Kalman filter is used to construct the state-augmented likelihood function and subsequently to generate forecasts, which are mean- and path-corrected. We apply our model to eight daily volatility series constructed from both high-frequency and daily returns. Full sample parameter estimates reveal that random level shifts are present in all series. Genuine long memory is present in high-frequency measures of volatility whereas there is little remaining dynamics in the volatility measures constructed using daily returns. From extensive forecast evaluations, we find that our ARFIMA model with random level shifts consistently belongs to the 10% Model Confidence Set across a variety of forecast horizons, asset classes, and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
    Keywords: Forecasting, Kalman Filter, Long Memory Processes, State Space Modeling, Stochastic Volatility, Structural Change
    JEL: C13 C22 C53
    Date: 2015–09–08
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-015&r=ecm
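    A minimal simulation of the two components the state space model combines (illustrative parameters only): an ARFIMA(0,d,0) long-memory path built from fractional-difference weights, plus a random level shift component that can masquerade as long memory.

      import numpy as np

      rng = np.random.default_rng(5)
      T, d, p_shift = 2000, 0.3, 0.005

      # MA(inf) weights of (1-L)^(-d): psi_0 = 1, psi_j = psi_{j-1} (j-1+d)/j
      psi = np.ones(T)
      for j in range(1, T):
          psi[j] = psi[j-1] * (j - 1 + d) / j
      longmem = np.convolve(rng.standard_normal(T), psi)[:T]  # ARFIMA(0,d,0)

      shifts = np.cumsum(rng.binomial(1, p_shift, T) * rng.standard_normal(T))
      log_vol = longmem + shifts     # "long memory + level shifts" volatility proxy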
  7. By: Shin Kanaya (Aarhus University and CREATES)
    Abstract: The convergence rates of the sums of α-mixing (or strongly mixing) triangular arrays of heterogeneous random variables are derived. We pay particular attention to the case where central limit theorems may fail to hold, due to relatively strong time-series dependence and/or the non-existence of higher-order moments. Several previous studies have presented various versions of laws of large numbers for sequences/triangular arrays, but their convergence rates were not fully investigated. This study is the first to investigate the convergence rates of the sums of α-mixing triangular arrays whose mixing coefficients are permitted to decay arbitrarily slowly. We consider two kinds of asymptotic assumptions: one is that the time distance between adjacent observations is fixed for any sample size n; and the other, called the infill assumption, is that it shrinks to zero as n tends to infinity. Our convergence theorems indicate that an explicit trade-off exists between the rate of convergence and the degree of dependence. While the results under the infill assumption can be seen as a direct extension of those under the fixed-distance assumption, they are new and particularly useful for deriving sharper convergence rates of discretization biases in estimating continuous-time processes from discretely sampled observations. We also discuss some examples to which our results and techniques are useful and applicable: a moving-average process with long-lasting past shocks, a continuous-time diffusion process with weak mean reversion, and a near-unit-root process.
    Keywords: Law of large numbers, rate of convergence, α-mixing triangular array, infill asymptotics, kernel estimation
    JEL: C14 C22 C58
    Date: 2016–07–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2016-24&r=ecm
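    An illustrative Monte Carlo of the trade-off the theorems formalize (an AR(1) with coefficient phi near one proxies a slowly mixing, strongly dependent sequence; all numbers are illustrative): the sample mean still converges, but more slowly as dependence strengthens.

      import numpy as np

      rng = np.random.default_rng(6)

      def rmse_of_sample_mean(phi, n, reps=200):
          errs = []
          for _ in range(reps):
              x = np.empty(n)
              x[0] = rng.standard_normal() / np.sqrt(1 - phi**2)  # stationary start
              for t in range(1, n):
                  x[t] = phi * x[t-1] + rng.standard_normal()
              errs.append(x.mean() ** 2)            # true mean is zero
          return np.sqrt(np.mean(errs))

      for phi in (0.2, 0.9, 0.99):
          print(phi, [round(rmse_of_sample_mean(phi, n), 3) for n in (200, 800)])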
  8. By: Lucciano Villacorta
    Abstract: The aggregate elasticity of substitution between labor and capital is central in understanding the global decline in the labor share and the cross-country heterogeneity in productivities. Available estimates vary substantially because of the different assumptions made about unobserved technologies. In this paper, I develop a flexible framework to estimate the aggregate elasticity of substitution between labor and capital for a panel of countries. In contrast to previous studies, my framework considers country heterogeneity in the elasticity of substitution. The growth rates of labor- and capital-augmenting technologies are also allowed to vary across countries and time while retaining some commonalities across the panel via a dynamic factor model. Estimation is based on posterior distributions in a Bayesian fixed effects framework. I propose a computationally convenient procedure to compute posterior distributions in two steps that combines the Gibbs and the Metropolis-Hastings algorithms. Using the EU KLEMS database, I find evidence of heterogeneity in the elasticity of substitution across countries, with a mean of 0.90 and standard deviation of 0.23. The bias in the technical change is the dominant mechanism in explaining the labor share decline in the majority of countries. However, the increase in the capital-labor ratio (or the decline in the price of investment goods) is also an important mechanism for some countries. Finally, I find a strong correlation between the direction of technical change, the elasticity of substitution and the relative endowment of capital and labor.
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:788&r=ecm
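    A toy Metropolis-within-Gibbs sketch of the two-step flavor described in the abstract, on a one-country CES regression (model, priors and tuning are illustrative assumptions, not the paper's): a conjugate draw for the noise variance alternates with a random-walk Metropolis step for the CES parameter rho, where the elasticity is 1/(1-rho).

      import numpy as np

      rng = np.random.default_rng(7)
      n, delta, rho_true, sig_true = 400, 0.4, -0.1, 0.1
      k, l = rng.lognormal(size=(2, n))            # capital and labor inputs
      def g(rho):                                  # log CES output, delta fixed
          return np.log(delta * k**rho + (1 - delta) * l**rho) / rho
      y = g(rho_true) + sig_true * rng.standard_normal(n)

      rho, sig2, draws = -0.5, 1.0, []
      for _ in range(3000):
          # Gibbs step: inverse-gamma draw for sigma^2 given rho (flat prior)
          sse = np.sum((y - g(rho)) ** 2)
          sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / sse)
          # Metropolis step: random-walk proposal for rho given sigma^2
          prop = rho + 0.05 * rng.standard_normal()
          logr = (sse - np.sum((y - g(prop)) ** 2)) / (2.0 * sig2)
          if np.log(rng.uniform()) < logr:
              rho = prop
          draws.append(rho)
      print("posterior mean elasticity:", 1.0 / (1.0 - np.mean(draws[1000:])))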
  9. By: Rigobon, Roberto (Massachusetts Institute of Technology)
    Abstract: This paper reviews the empirical literature on international spillovers and contagion. Theoretical models of spillover and contagion imply that the reduced form observable variables suffer from two possible sources of bias: endogeneity and omitted variables. These econometric problems, in combination with the heteroskedasticity that plagues the data, produce time-varying biases. Several empirical methodologies are evaluated from this perspective: non-parametric techniques such as correlations and principal components, as well as parametric methods such as OLS, VAR, event studies, ARCH, non-linear regressions, etc. The paper concludes that there is no single technique that can solve the full-fledged problem and discusses three methodologies that can partially address some of the questions in the literature.
    Keywords: spillovers; contagion; heteroskedasticity
    JEL: C58 F32 F36 G15
    Date: 2016–08–11
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0607&r=ecm
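    A worked illustration of the heteroskedasticity problem the paper stresses: the raw correlation between two markets rises mechanically when the source market's variance rises, even though the transmission coefficient is fixed. The scaling adjustment applied below is the well-known Forbes and Rigobon (2002) correction; the simulated returns are placeholders.

      import numpy as np

      rng = np.random.default_rng(8)
      beta, n = 0.5, 20000                         # fixed spillover coefficient
      for scale in (1.0, 3.0):                     # calm vs high-volatility regime
          x = scale * rng.standard_normal(n)       # source-market returns
          y = beta * x + rng.standard_normal(n)    # recipient-market returns
          rho = np.corrcoef(x, y)[0, 1]
          delta = scale**2 - 1.0                   # relative increase in var(x)
          rho_adj = rho / np.sqrt(1.0 + delta * (1.0 - rho**2))
          print(f"scale={scale}: raw corr={rho:.3f}, adjusted={rho_adj:.3f}")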
  10. By: Luis Filipe Martins (Lisbon University Institute); Pierre Perron (Boston University)
    Abstract: Of interest is comparing the out-of-sample forecasting performance of two competing models in the presence of possible instabilities. To that effect, we suggest using simple structural change tests, sup-Wald and UDmax as proposed by Andrews (1993) and Bai and Perron (1998), for changes in the mean of the loss-differences. Giacomini and Rossi (2010) proposed a fluctuations test and a one-time reversal test, also applied to the loss-differences. When these tests are properly constructed to account for potential serial correlation under the null hypothesis, so as to have a pivotal limit distribution, it is shown that they have undesirable power properties: power can be low and non-increasing as the alternative gets further from the null hypothesis. The good power properties they reported are simply an artifact of imposing a priori that the loss differentials are serially uncorrelated and using the simple sample variance to scale the tests. On the contrary, our statistics are shown to have higher monotonic power, especially the UDmax version. We use their empirical examples to show the practical relevance of the issues raised.
    Keywords: non-monotonic power, structural change, forecasts, long-run variance
    JEL: C22 C53
    Date: 2015–10–06
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-014&r=ecm
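    A minimal sketch of the suggested procedure (simulated loss differentials, Bartlett long-run variance, and a simple grid; critical values come from Andrews (1993)): a sup-Wald test for a one-time change in the mean of the loss differentials d_t.

      import numpy as np

      def long_run_var(d, bw):
          u = d - d.mean()
          T = len(u)
          gamma = [u @ u / T] + [u[l:] @ u[:-l] / T for l in range(1, bw + 1)]
          return gamma[0] + 2 * sum((1 - l / (bw + 1)) * gamma[l]
                                    for l in range(1, bw + 1))

      def sup_wald_mean_break(d, trim=0.15):
          T = len(d)
          lrv = long_run_var(d, bw=int(4 * (T / 100) ** (2 / 9)))
          stats = [(d[:k].mean() - d[k:].mean()) ** 2
                   / (lrv * (1 / k + 1 / (T - k)))
                   for k in range(int(trim * T), int((1 - trim) * T))]
          return max(stats)

      rng = np.random.default_rng(9)
      e1, e2 = rng.standard_normal((2, 300))       # two models' forecast errors
      d = e1**2 - e2**2                            # squared-error loss differential
      print("sup-Wald:", round(sup_wald_mean_break(d), 2))  # 5% CV approx 8.85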
  11. By: Hiroaki Kaido (Boston University); Francesca Molinari (Cornell University); Jorg Stoye (Cornell University)
    Abstract: This paper proposes a bootstrap-based procedure to build confidence intervals for single components of a partially identified parameter vector, and for smooth functions of such components, in moment (in)equality models. The extreme points of our confidence interval are obtained by maximizing/minimizing the value of the component (or function) of interest subject to the sample analog of the moment (in)equality conditions properly relaxed. The novelty is that the amount of relaxation, or critical level, is computed so that the component (or function) of the parameter vector, instead of the vector itself, is uniformly asymptotically covered with prespecified probability. Calibration of the critical level is based on repeatedly checking feasibility of linear programming problems, rendering it computationally attractive. Computation of the extreme points of the confidence interval is based on a novel application of the response surface method for global optimization, which may prove of independent interest also for applications of other methods of inference in the moment (in)equalities literature. The critical level is by construction smaller (in finite sample) than the one used if projecting confidence regions designed to cover the entire parameter vector. Hence, our confidence interval is weakly shorter than the projection of established confidence sets (Andrews and Soares, 2010), if one holds the choice of tuning parameters constant. We provide simple conditions under which the comparison is strict. Our inference method controls asymptotic coverage uniformly over a large class of data generating processes. Our assumptions and those used in the leading alternative approach (a profiling based method) are not nested. We explain why we employ some restrictions that are not required by other methods and provide examples of models for which our method is uniformly valid but profiling based methods are not.
    Keywords: Partial identification; Inference on projections; Moment inequalities; Uniform inference
    Date: 2016–01–04
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2016-001&r=ecm
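    A minimal sketch of the projection step with linear moment inequalities, where each endpoint of the confidence interval is a linear program; the fixed critical level c_n below is a placeholder for the bootstrap calibration that is the paper's actual contribution.

      import numpy as np
      from scipy.optimize import linprog

      # sample analog of linear moment inequalities  A @ theta <= b_hat
      A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.], [1., 1.]])
      b_hat = np.array([1.0, 0.0, 1.0, 0.0, 1.5])
      sigma = np.ones(len(b_hat))                  # moment std errors (placeholder)
      n, c_n = 500, 1.64                           # c_n: placeholder critical level

      relaxed = b_hat + c_n * sigma / np.sqrt(n)   # relaxed (in)equality conditions
      p = np.array([1.0, 0.0])                     # project onto theta_1
      lo = linprog(c=p, A_ub=A, b_ub=relaxed, bounds=[(None, None)] * 2)
      hi = linprog(c=-p, A_ub=A, b_ub=relaxed, bounds=[(None, None)] * 2)
      print("CI for theta_1:", (round(lo.fun, 3), round(-hi.fun, 3)))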
  12. By: Pierre Perron (Boston University); Tatsuma Wada (Keio University)
    Abstract: This paper first generalizes the trend-cycle decomposition framework of Perron and Wada (2009) based on unobserved components models with innovations having a mixture of normals distribution, which is able to handle sudden level and slope changes to the trend function as well as outliers. We investigate how important the differences in the implied trend and cycle are compared to the popular decomposition based on the Hodrick and Prescott (HP) (1997) filter. Our results show important qualitative and quantitative differences in the implied cycles for both real GDP and consumption series for the G7 countries. Most of the differences can be ascribed to the fact that the HP filter does not handle well slope changes, level shifts and outliers, while our method does so. Then, we reassess how such different cycles affect some so-called “stylized facts” about the relative variability of consumption and output across countries.
    Keywords: Trend-Cycle Decomposition, Unobserved Components Model, International Business Cycle, Non Gaussian Filter
    JEL: C22 E32
    Date: 2015–10–29
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-016&r=ecm
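    For reference, the HP benchmark being compared against: the HP trend solves min_tau sum_t (y_t - tau_t)^2 + lambda sum_t (Delta^2 tau_t)^2, a sparse linear system. A minimal implementation (the simulated series is a placeholder):

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import spsolve

      def hp_filter(y, lam=1600.0):                # lam=1600: quarterly convention
          T = len(y)
          D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(T - 2, T))
          trend = spsolve((sparse.eye(T) + lam * (D.T @ D)).tocsc(), y)
          return trend, y - trend                  # trend, cycle

      rng = np.random.default_rng(12)
      y = np.cumsum(0.5 + rng.standard_normal(80)) # quarterly log GDP stand-in
      trend, cycle = hp_filter(y)
      # the paper's point: level shifts and outliers in y distort this cycle,
      # while a mixture-of-normals UC model can absorb them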
  13. By: Perry Singleton (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Ling Li (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244)
    Abstract: This study develops and estimates a model of measurement error in self-reported health conditions. The model allows self-reports of a health condition to differ from a contemporaneous medical examination, prior medical records, or both. The model is estimated using a two-sample strategy, which combines survey data linked to medical examination results and survey data linked to prior medical records. The study finds substantial inconsistencies between self-reported health, the medical examination, and prior medical records. The study proposes alternative estimators for the prevalence of diagnosed and undiagnosed conditions and estimates the bias that arises when using self-reported health conditions as explanatory variables.
    Keywords: Measurement Error, Disease Prevalence, Diabetes, Hypertension
    JEL: I12 J22
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:191&r=ecm
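    A worked example of the kind of correction such a model delivers: if linked records identify the sensitivity and specificity of self-reports, true prevalence can be backed out of the observed self-report rate. All numbers are hypothetical.

      sens, spec = 0.75, 0.95    # P(report | condition), P(no report | no condition)
      p_report = 0.20            # observed self-report rate

      # p_report = sens*pi + (1 - spec)*(1 - pi), solved for true prevalence pi
      pi = (p_report + spec - 1.0) / (sens + spec - 1.0)
      print(f"corrected prevalence: {pi:.3f}")     # 0.214 vs the naive 0.20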
  14. By: Yunus Emre Ergemen (Aarhus University and CREATES); Carlos Vladimir Rodríguez-Caballero (Aarhus University and CREATES)
    Abstract: A dynamic multi-level factor model with stationary or nonstationary global and regional factors is proposed. In the model, persistence in global and regional common factors as well as innovations allows for the study of fractional cointegrating relationships. Estimation of global and regional common factors is performed in two steps employing canonical correlation analysis and a sequential least-squares algorithm. Selection of the number of global and regional factors is discussed. The small sample properties of our methodology are investigated by some Monte Carlo simulations. The method is then applied to the Nord Pool power market for the analysis of price comovements among different regions within the power grid. We find that the global factor can be interpreted as the system price of the power grid, and we find evidence of a fractional cointegration relationship between prices and the global factor.
    Keywords: Multi-level factor, long memory, fractional cointegration, electricity prices
    JEL: C12 C22
    Date: 2016–08–12
    URL: http://d.repec.org/n?u=RePEc:aah:create:2016-23&r=ecm
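    A deliberately simplified two-step extraction in the spirit of the multi-level model (the paper's estimator uses canonical correlations and sequential least squares and allows fractional dynamics, all omitted here): a global factor from the pooled panel, then one regional factor from each region's residuals.

      import numpy as np

      rng = np.random.default_rng(14)
      T, regions = 300, {"A": 10, "B": 10}
      g = np.cumsum(rng.standard_normal(T))        # persistent global factor
      panel = {r: np.outer(g, rng.uniform(0.5, 1.5, k))
                  + np.outer(np.cumsum(rng.standard_normal(T)),
                             rng.uniform(0.5, 1.5, k))
                  + rng.standard_normal((T, k))
               for r, k in regions.items()}

      X = np.hstack(list(panel.values()))
      U, S, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
      ghat = U[:, 0] * S[0]                        # step 1: global factor (first PC)

      for r, Xr in panel.items():
          Xc = Xr - Xr.mean(0)
          coef = np.linalg.lstsq(ghat[:, None], Xc, rcond=None)[0]
          Ur, Sr, _ = np.linalg.svd(Xc - ghat[:, None] @ coef, full_matrices=False)
          fr = Ur[:, 0] * Sr[0]                    # step 2: regional factor
          print(r, "regional share of residual variance:",
                round(Sr[0]**2 / (Sr**2).sum(), 2))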
  15. By: Cavicchioli, Maddalena; Forni, Mario; Lippi, Marco; Zaffaroni, Paolo
    Abstract: In this paper we introduce three dynamic eigenvalue ratio estimators for the number of dynamic factors. Two of them, the Dynamic Eigenvalue Ratio (DER) and the Dynamic Growth Ratio (DGR), are dynamic counterparts of the eigenvalue ratio estimators (ER and GR) proposed by Ahn and Horenstein (2013). The third, the Dynamic eigenvalue Difference Ratio (DDR), is a new one but closely related to the test statistic proposed by Onatski (2009). The advantage of such estimators is that they do not require preliminary determination of discretionary parameters. Finally, a static counterpart of the latter estimator, called the eigenvalue Difference Ratio estimator (DR), is also proposed. We prove consistency of such estimators and evaluate their performance under simulation. We conclude that both DDR and DR are valid alternatives to existing criteria. Application to real data gives new insights on the number of factors driving the US economy.
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:11440&r=ecm
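    A minimal sketch of the static eigenvalue ratio idea of Ahn and Horenstein (2013) that the paper takes to the dynamic domain (simulated data with three true factors; kmax is an arbitrary cap): pick the k maximizing the ratio of adjacent eigenvalues.

      import numpy as np

      rng = np.random.default_rng(15)
      T, N, r = 200, 50, 3
      F, L = rng.standard_normal((T, r)), rng.standard_normal((N, r))
      X = F @ L.T + rng.standard_normal((T, N))    # panel with r common factors

      eig = np.linalg.eigvalsh(X.T @ X / (T * N))[::-1]   # eigenvalues, descending
      kmax = 8
      ER = eig[:kmax] / eig[1:kmax + 1]            # ER(k) = lambda_k / lambda_{k+1}
      print("estimated number of factors:", int(np.argmax(ER)) + 1)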

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.