nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒01‒01
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Focused Information Criterion and Model Averaging for Large Panels with a Multifactor Error Structure By Shou-Yung Yin; Chu-An Liu; Chang-Ching Lin
  2. A One-Covariate at a Time, Multiple Testing Approach to Variable Selection in High-Dimensional Linear Regression Models By Chudik, A.; Kapetanios, G.; Pesaran, Hashem
  3. Identifying Treatment Effects with Data Combination and Unobserved Heterogeneity By Pablo Lavado; Gonzalo Rivera
  4. Asymptotic Properties of QML Estimators for VARMA Models with Time-Dependent Coefficients By Abdelkamel Alj; Rajae Azrak; Christophe Ley; Guy Melard
  5. Technical Appendix to Asymptotic Properties of QML Estimators for VARMA Models with Time-Dependent Coefficients By Abdelkamel Alj; Rajae Azrak; Christophe Ley; Guy Melard
  6. Markov-Switching Three-Pass Regression Filter By Pierre Guerin; Danilo Leiva-Leon; Massimiliano Marcellino
  7. Bayesian Semi-parametric Realized-CARE Models for Tail Risk Forecasting Incorporating Realized Measures By Richard Gerlach; Chao Wang
  8. Model Averaging Estimators for the Stochastic Frontier Model By Christopher F. Parmeter; Alan T. K. Wan; Xinyu Zhang
  9. Classification Trees for Heterogeneous Moment-Based Models By Sam Asher; Denis Nekipelov; Paul Novosad; Stephen P. Ryan
  10. Panel data estimators and aggregation By Biørn, Erik
  11. What is the truth about DSGE models? Testing by indirect inference By Meenagh, David; Minford, Patrick; Wickens, Michael; Xu, Yongdeng
  12. A Bridge Too Far? The State of the Art in Combining the Virtues of Stochastic Frontier Analysis and Data Envelopment Analysis By Christopher F. Parmeter; Valentin Zelenyuk
  13. Another Look at Single-Index Models Based on Series Estimation By Chaohua Dong; Jiti Gao; Bin Peng
  14. Evaluating trends in time series of distributions: A spatial fingerprint of human effects on climate By Yoosoon Chang; Robert K. Kaufmann; Chang Sik Kim; J. Isaac Miller; Joon Y. Park; Sungkeun Park
  15. Structural Change in (Economic) Time Series By Christian Kleiber
  16. Homogeneity Pursuit in Panel Data Models: Theory and Applications By Wuyi Wang; Peter C.B. Phillips; Liangjun Su
  17. Adaptive Minnesota Prior for High-Dimensional Vector Autoregressions By Korobilis, Dimitris; Pettenuzzo, Davide
  18. Words are the new numbers: A newsy coincident index of business cycles By Leif Anders Thorsrud
  19. Econometric Analysis of Production Networks with Dominant Units By Pesaran, Hashem; Fan Yang, Cynthia

  1. By: Shou-Yung Yin (Institute of Economics, Academia Sinica, Taipei, Taiwan); Chu-An Liu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Chang-Ching Lin (Department of Economics, National Cheng Kung University)
    Abstract: This paper considers model selection and model averaging in panel data models with a multifactor error structure. We investigate the limiting distribution of the common correlated effects estimator (Pesaran, 2006) in a local asymptotic framework and show that the trade-off between bias and variance remains in the asymptotic theory. We then propose a focused information criterion and a plug-in averaging estimator for large heterogeneous panels and examine their theoretical properties. The novel feature of the proposed method is that it aims to minimize the sample analog of the asymptotic mean squared error and can be applied irrespective of whether the rank condition holds. Monte Carlo simulations show that both proposed selection and averaging methods generally achieve lower expected squared error than other methods. The proposed methods are applied to analyze the consumer response to gasoline taxes.
    Keywords: Cross-sectional dependence, Common correlated effects, Focused information criterion, Model averaging, Model selection
    JEL: C23 C51 C52
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:16-a016&r=ecm
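    A minimal numpy sketch of the common correlated effects (CCE) mean-group estimator (Pesaran, 2006) on which the paper's selection and averaging methods build; the function name and the toy one-factor data-generating process are illustrative, and the FIC/plug-in averaging steps themselves are not shown:

      import numpy as np

      def cce_mean_group(y, X):
          """CCE mean-group estimator: each unit's regression is augmented
          with cross-sectional averages of y and X to absorb the unobserved
          common factors. y: (N, T), X: (N, T, k)."""
          N, T, k = X.shape
          F = np.column_stack([np.ones(T), y.mean(axis=0), X.mean(axis=0)])
          M = np.eye(T) - F @ np.linalg.pinv(F)   # projects out the averages
          betas = np.empty((N, k))
          for i in range(N):
              betas[i] = np.linalg.lstsq(M @ X[i], M @ y[i], rcond=None)[0]
          return betas.mean(axis=0)

      # toy check: one common factor with heterogeneous loadings
      rng = np.random.default_rng(0)
      N, T, k = 50, 100, 2
      f = rng.standard_normal(T)
      X = rng.standard_normal((N, T, k)) + f[:, None] * rng.standard_normal((N, 1, k))
      y = X @ np.array([1.0, -0.5]) + np.outer(rng.standard_normal(N), f) \
          + rng.standard_normal((N, T))
      print(cce_mean_group(y, X))   # close to [1.0, -0.5]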
  2. By: Chudik, A.; Kapetanios, G.; Pesaran, Hashem
    Abstract: Model specification and selection are recurring themes in econometric analysis. Both topics become considerably more complicated in the case of large-dimensional data sets where the set of specification possibilities can become quite large. In the context of linear regression models, penalised regression has become the de facto benchmark technique used to trade off parsimony and fit when the number of possible covariates is large, often much larger than the number of available observations. However, issues such as the choice of a penalty function and tuning parameters associated with the use of penalized regressions remain contentious. In this paper, we provide an alternative approach that considers the statistical significance of the individual covariates one at a time, whilst taking full account of the multiple testing nature of the inferential problem involved. We refer to the proposed method as the One Covariate at a Time Multiple Testing (OCMT) procedure. OCMT provides an alternative to penalised regression methods: it is based on statistical inference and is therefore easier to interpret and relate to classical statistical analysis; it allows working under more general assumptions; it is faster; and it performs well in small samples for almost all of the different sets of experiments considered in this paper. We provide extensive theoretical and Monte Carlo results in support of adding the proposed OCMT model selection procedure to the toolbox of applied researchers. The usefulness of OCMT is also illustrated by an empirical application to forecasting U.S. output growth and inflation.
    Keywords: One covariate at a time, multiple testing, model selection, high dimensionality, penalised regressions, boosting, Monte Carlo experiments
    JEL: C52 C55
    Date: 2016–12–16
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1677&r=ecm
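    A one-stage Python sketch of the OCMT idea under simplifying assumptions (Gaussian critical value, a single stage rather than the paper's iterated procedure; all names are illustrative):

      import numpy as np
      from scipy import stats

      def ocmt_one_stage(y, X, p=0.05, delta=1.0):
          """Test each covariate one at a time and keep those whose
          t-statistic exceeds the multiple-testing critical value
          c = Phi^{-1}(1 - p / (2 * n**delta)), n = number of covariates."""
          T, n = X.shape
          crit = stats.norm.ppf(1 - p / (2 * n**delta))
          keep = []
          for j in range(n):
              Z = np.column_stack([np.ones(T), X[:, j]])
              b = np.linalg.lstsq(Z, y, rcond=None)[0]
              e = y - Z @ b
              se = np.sqrt(e @ e / (T - 2) * np.linalg.inv(Z.T @ Z)[1, 1])
              if abs(b[1] / se) > crit:
                  keep.append(j)
          return keep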
  3. By: Pablo Lavado (Universidad del Pacífico); Gonzalo Rivera (World Bank)
    Abstract: This paper considers identification of treatment effects when the outcome variables and covariates are not observed in the same data sets. Ecological inference models, where aggregate outcome information is combined with individual demographic information, are a common example of these situations. In this context, the counterfactual distributions and the treatment effects are not point identified. However, recent results provide bounds to partially identify causal effects. Unlike previous work, this paper adopts the selection-on-unobservables assumption, which means that randomization of treatment assignment is not achieved until time-fixed unobserved heterogeneity is controlled for. Panel data models linear in the unobserved components are considered to achieve identification. To assess the performance of these bounds, this paper provides a simulation exercise.
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:apc:wpaper:2016-079&r=ecm
  4. By: Abdelkamel Alj; Rajae Azrak; Christophe Ley; Guy Melard
    Abstract: This paper is about vector autoregressive-moving average (VARMA) models with time-dependent coefficients to represent non-stationary time series. Contrary to other papers in the univariate case, the coefficients depend on time but not on the series’ length n. Under appropriate assumptions, it is shown that a Gaussian quasi-maximum likelihood estimator is almost surely consistent and asymptotically normal. The theoretical results are illustrated by means of two examples of bivariate processes. It is shown that the assumptions underlying the theoretical results apply. In the second example the innovations are marginally heteroscedastic with a correlation ranging from −0.8 to 0.8. In the two examples, the asymptotic information matrix is obtained in the Gaussian case. Finally, the finite-sample behavior is checked via a Monte Carlo simulation study for n from 25 to 400. The results confirm the validity of the asymptotic properties even for short series and the asymptotic information matrix deduced from the theory.
    Keywords: non-stationary process; multivariate time series; time-varying models
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/241623&r=ecm
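    A scalar analogue in Python: Gaussian QML for an AR(1) whose coefficient is a deterministic function of time t (but not of the series length n), matching the setting described above; the particular coefficient path is an assumption for illustration:

      import numpy as np
      from scipy.optimize import minimize

      def simulate_tv_ar1(n, phi0=0.3, phi1=0.3, rng=None):
          # coefficient phi_t = phi0 + phi1*cos(2*pi*t/50) depends on t, not n
          rng = rng or np.random.default_rng(0)
          y = np.zeros(n)
          for t in range(1, n):
              y[t] = (phi0 + phi1 * np.cos(2 * np.pi * t / 50)) * y[t - 1] \
                     + rng.standard_normal()
          return y

      def negloglik(theta, y):
          phi0, phi1, log_s = theta
          t = np.arange(1, len(y))
          e = y[1:] - (phi0 + phi1 * np.cos(2 * np.pi * t / 50)) * y[:-1]
          s2 = np.exp(2 * log_s)
          return 0.5 * np.sum(e**2 / s2 + np.log(2 * np.pi * s2))

      y = simulate_tv_ar1(400)
      fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], args=(y,), method="BFGS")
      print(fit.x[:2])   # close to (0.3, 0.3)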
  5. By: Abdelkamel Alj; Rajae Azrak; Christophe Ley; Guy Melard
    Abstract: This technical appendix contains proofs for the asymptotic properties of quasi-maximum likelihood (QML) estimators for vector autoregressive moving average (VARMA) models in the case where the coefficients depend on time instead of being constant. We refer to the main theorems of the paper "Asymptotic Properties of QML Estimators for VARMA Models with Time-Dependent Coefficients" (Alj, Azrak, Ley and Mélard, 2016).
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/241626&r=ecm
  6. By: Pierre Guerin; Danilo Leiva-Leon; Massimiliano Marcellino
    Abstract: We introduce a new approach for the estimation of high-dimensional factor models with regime-switching factor loadings by extending the linear three-pass regression filter to settings where parameters can vary according to Markov processes. The new method, denoted the Markov-switching three-pass regression filter (MS-3PRF), is suitable for datasets with large cross-sectional dimensions, since estimation and inference are straightforward, as opposed to existing regime-switching factor models, where computational complexity limits applicability to few variables. In a Monte Carlo experiment, we study the finite sample properties of the MS-3PRF and find that it performs favorably compared with alternative modelling approaches whenever there is structural instability in factor loadings. As empirical applications, we consider forecasting economic activity and bilateral exchange rates, finding that the MS-3PRF approach is competitive in both cases.
    Keywords: Factor model, Markov-switching, Forecasting
    JEL: C22 C23 C53
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:igi:igierp:591&r=ecm
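    The linear three-pass regression filter (Kelly and Pruitt) that the MS-3PRF extends, sketched in numpy; the Markov-switching loadings of pass one are not implemented here:

      import numpy as np

      def three_prf(X, Z, y):
          """X: (T, N) predictors, Z: (T, L) proxies, y: (T,) target."""
          T, N = X.shape
          Zc = np.column_stack([np.ones(T), Z])
          # pass 1: time-series regression of each predictor on the proxies
          phi = np.linalg.lstsq(Zc, X, rcond=None)[0][1:].T        # (N, L)
          # pass 2: cross-section regression of X_t on the loadings
          phic = np.column_stack([np.ones(N), phi])
          F = np.linalg.lstsq(phic, X.T, rcond=None)[0][1:].T      # (T, L)
          # pass 3: predictive regression of the target on the factors
          Fc = np.column_stack([np.ones(T), F])
          beta = np.linalg.lstsq(Fc, y, rcond=None)[0]
          return Fc @ beta, F

    Using the target itself as the single proxy (Z = y, the "target-proxy" case) already delivers a one-factor forecast.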
  7. By: Richard Gerlach; Chao Wang
    Abstract: A new model framework called Realized Conditional Autoregressive Expectile (Realized-CARE) is proposed, through incorporating a measurement equation into the conventional CARE model, in a manner analogous to the Realized-GARCH model. Competing realized measures (e.g. Realized Variance and Realized Range) are employed as the dependent variable in the measurement equation and to drive expectile dynamics. The measurement equation here models the contemporaneous dependence between the realized measure and the latent conditional expectile. We also propose employing the quantile loss function as the target criterion, instead of the conventional violation rate, during the expectile level grid search. For the proposed model, the usual search procedure and asymmetric least squares (ALS) optimization to estimate the expectile level and CARE parameters prove challenging and often fail to converge. We incorporate a fast random walk Metropolis stochastic search method, combined with a more targeted grid search procedure, to allow reasonably fast and accurate estimation of the expectile level and the associated model parameters. Given the convergence issues, Bayesian adaptive Markov Chain Monte Carlo methods are proposed for estimation, and their properties are assessed and compared with ALS via a simulation study. In a real forecasting study applied to 7 market indices and 2 individual asset returns, compared to the original CARE, the parametric GARCH and Realized-GARCH models, one-day-ahead Value-at-Risk and Expected Shortfall forecasting results favor the proposed Realized-CARE model, especially when incorporating the Realized Range and the sub-sampled Realized Range as the realized measure in the model.
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1612.08488&r=ecm
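    The static building block, a tau-expectile computed by asymmetric least squares (ALS), sketched in Python; the CARE model makes this quantity time-varying, and the paper adds a measurement equation for the realized measure:

      import numpy as np

      def expectile(y, tau, tol=1e-10, max_iter=200):
          """Minimizer of sum_t |tau - 1(y_t < mu)| * (y_t - mu)**2,
          found by iteratively reweighted averaging."""
          mu = y.mean()
          for _ in range(max_iter):
              w = np.where(y < mu, 1 - tau, tau)
              mu_new = np.average(y, weights=w)
              if abs(mu_new - mu) < tol:
                  break
              mu = mu_new
          return mu

      rng = np.random.default_rng(1)
      r = 0.01 * rng.standard_t(df=5, size=10_000)   # fat-tailed "returns"
      print(expectile(r, tau=0.01))                  # deep left-tail expectile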
  8. By: Christopher F. Parmeter (University of Miami); Alan T. K. Wan (City University of Hong Kong); Xinyu Zhang (Academy of Mathematics and Systems Science)
    Abstract: Given the concern over the impact of distributional assumptions for the stochastic frontier model, the present research proposes two distinct model averaging estimators, one which can average over distinct distributions for firm specific inefficiency and another that can average over a nested class of distributions. These estimators are shown to produce optimal weights when the criterion is to uncover conditional inefficiency. We study the finite-sample performance of the model average estimator via Monte Carlo experiments and an empirical application focusing on Philippines rice farming.
    Keywords: Jackknife, optimal, focus criterion, leave-one-out
    JEL: C1
    Date: 2016–12–06
    URL: http://d.repec.org/n?u=RePEc:mia:wpaper:2016-09&r=ecm
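    A generic jackknife model-averaging weight chooser in the spirit of Hansen and Racine (2012), which the paper adapts to stochastic frontier models; the simplex-constrained optimization below is a sketch, not the paper's estimator:

      import numpy as np
      from scipy.optimize import minimize

      def jackknife_weights(y, preds_loo):
          """preds_loo: (n, M) leave-one-out predictions, one column per
          candidate model; returns simplex weights minimizing LOO error."""
          n, M = preds_loo.shape
          cv = lambda w: np.mean((y - preds_loo @ w) ** 2)
          res = minimize(cv, np.full(M, 1 / M), method="SLSQP",
                         bounds=[(0, 1)] * M,
                         constraints=({"type": "eq",
                                       "fun": lambda w: w.sum() - 1},))
          return res.x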
  9. By: Sam Asher; Denis Nekipelov; Paul Novosad; Stephen P. Ryan
    Abstract: A basic problem in applied settings is that different parameters may apply to the same model in different populations. We address this problem by proposing a method using moment trees; leveraging the basic intuition of a classification tree, our method partitions the covariate space into disjoint subsets and fits a set of moments within each subset. We prove the consistency of this estimator and show that standard rates of convergence apply post-model selection. Monte Carlo evidence demonstrates the excellent small sample performance and faster-than-parametric convergence rates of the model selection step in two common empirical contexts. Finally, we showcase the usefulness of our approach by estimating heterogeneous treatment effects in a regression discontinuity design in a development setting.
    JEL: C14 C18 C51 C52 O12 O18
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:22976&r=ecm
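    A minimal moment-tree sketch in Python using the simplest possible moment, E[y - theta] = 0 (i.e. a leaf mean); the paper's estimator covers general moment-based models, and the split grid and stopping rules here are illustrative:

      import numpy as np

      def moment_tree(X, y, min_leaf=50, depth=3):
          """Recursively split the covariate space; fit the moment (here
          the mean of y) in each leaf; choose splits minimizing the sum of
          squared moment residuals."""
          node = {"theta": y.mean(), "split": None}
          if depth == 0 or len(y) < 2 * min_leaf:
              return node
          best = ((y - y.mean()) ** 2).sum()
          for j in range(X.shape[1]):
              for c in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
                  left = X[:, j] <= c
                  if left.sum() < min_leaf or (~left).sum() < min_leaf:
                      continue
                  loss = ((y[left] - y[left].mean()) ** 2).sum() \
                       + ((y[~left] - y[~left].mean()) ** 2).sum()
                  if loss < best:
                      best, node["split"] = loss, (j, c)
          if node["split"] is not None:
              j, c = node["split"]
              left = X[:, j] <= c
              node["left"] = moment_tree(X[left], y[left], min_leaf, depth - 1)
              node["right"] = moment_tree(X[~left], y[~left], min_leaf, depth - 1)
          return node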
  10. By: Biørn, Erik (Dept. of Economics, University of Oslo)
    Abstract: For a panel data regression equation with two-way unobserved heterogeneity, individual-specific and period-specific, ‘within-individual’ and ‘within-period’ estimators, which can be given Ordinary Least Squares (OLS) or Instrumental Variables (IV) interpretations, are considered. A class of estimators, defined as linear aggregates of these estimators, is introduced. Nine aggregate estimators, including between, within, and Generalized Least Squares (GLS), are special cases. Other estimators are shown to be more robust to simultaneity and measurement error bias than the standard aggregate estimators and more efficient than the ‘disaggregate’ estimators. Empirical illustrations relating to manufacturing productivity are given.
    Keywords: Panel data; Aggregation; IV estimation; Robustness; Method of moments; Factor productivity
    JEL: C13 C23 C43
    Date: 2016–12–17
    URL: http://d.repec.org/n?u=RePEc:hhs:osloec:2016_019&r=ecm
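    The basic 'disaggregate' ingredients for a balanced one-regressor panel, sketched in numpy; the paper's contribution, the class of linear aggregates of such estimators, is not reproduced here:

      import numpy as np

      def panel_slopes(y, x):
          """y, x: (N, T). Returns within-individual, within-period and
          between-individual OLS slopes."""
          def ols(a, b):
              a, b = a - a.mean(), b - b.mean()
              return (a * b).sum() / (a * a).sum()
          xi, yi = x - x.mean(1, keepdims=True), y - y.mean(1, keepdims=True)
          xt, yt = x - x.mean(0, keepdims=True), y - y.mean(0, keepdims=True)
          return {"within_individual": (xi * yi).sum() / (xi ** 2).sum(),
                  "within_period": (xt * yt).sum() / (xt ** 2).sum(),
                  "between_individual": ols(x.mean(1), y.mean(1))}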
  11. By: Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School); Wickens, Michael (Cardiff Business School); Xu, Yongdeng (Cardiff Business School)
    Abstract: This paper addresses the growing gulf between traditional macroeconometrics and the increasingly dominant preference among macroeconomists to use DSGE models and to estimate them using Bayesian estimation with strong priors, but not to test them, as they are likely to fail conventional statistical tests. This is in conflict with the high scientific ideals with which DSGE models were first invested in their aim of finding true models of the macroeconomy. As macro models are in reality only approximate representations of the economy, we argue that a pseudo-true inferential framework should be used to provide a measure of the robustness of DSGE models.
    Keywords: Pseudo-true inference, DSGE models, Indirect Inference, Wald tests, Likelihood Ratio tests, robustness
    JEL: C12 C32 C52 E1
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2016/14&r=ecm
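    An indirect-inference test skeleton in Python: compare auxiliary-model coefficients estimated on the data with their distribution across samples simulated from the structural model under the null; simulate_model and aux_stat are user-supplied placeholders:

      import numpy as np

      def ii_wald(y_obs, simulate_model, aux_stat, S=500, rng=None):
          """simulate_model(rng) -> artificial sample from the null model;
          aux_stat(sample) -> vector of auxiliary coefficients (e.g. VAR
          parameters). Returns the Wald statistic and a simulated p-value."""
          rng = rng or np.random.default_rng(0)
          G = np.array([aux_stat(simulate_model(rng)) for _ in range(S)])
          gbar, V = G.mean(axis=0), np.atleast_2d(np.cov(G.T))
          d = aux_stat(y_obs) - gbar
          W = float(d @ np.linalg.solve(V, d))
          Ws = [float((g - gbar) @ np.linalg.solve(V, g - gbar)) for g in G]
          return W, np.mean(np.array(Ws) >= W)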
  12. By: Christopher F. Parmeter (University of Miami); Valentin Zelenyuk (University of Queensland)
    Abstract: A recent spate of research has attempted to develop estimators for the stochastic frontier model which embrace semi- and nonparametric insights to enjoy the advantages inherent in the more traditional operations research method of data envelopment analysis. These newer methods explicitly allow statistical noise in the model, the absence of which is a common criticism of the data envelopment estimator. Further, several of these newer methods have focused on ensuring that axioms of production hold. These models and their subsequent estimators, despite having many appealing features, have yet to appear regularly in applied research. Given the pace at which estimators of this style are being proposed, coupled with the dearth of formal applications, we seek to review the literature and discuss practical implementation issues of these methods. We provide a general overview of the major recent developments in this exciting field, draw connections with the data envelopment analysis field and discuss how useful synergies can be achieved.
    Keywords: Partly Linear, Heteroskedasticity, Nonparametric, Bandwidth
    JEL: C1
    Date: 2016–12–06
    URL: http://d.repec.org/n?u=RePEc:mia:wpaper:2016-10&r=ecm
  13. By: Chaohua Dong; Jiti Gao; Bin Peng
    Abstract: In this paper, a semiparametric single-index model is investigated. The link function is allowed to be unbounded and to have unbounded support, which answers a pending issue in the literature. Meanwhile, the link function is treated as a point in an infinite-dimensional function space, which enables us to derive the estimates for the index parameter and the link function simultaneously. This approach is different from the profile method commonly used in the literature. The estimator is derived from an optimization under the constraint of the identification condition for the index parameter, which is natural but has been ignored in the literature. In addition, making use of a property of Hermite orthogonal polynomials, an explicit estimator for the index parameter is obtained. Asymptotic properties for the two estimators of the index parameter are established. Their efficiency is discussed in some special cases as well. The finite sample properties of the two estimates are demonstrated through an extensive Monte Carlo study and an empirical example.
    Keywords: asymptotic theory, closed-form estimate, cross-sectional model, Hermite orthogonal expansion, series method
    JEL: C13 C14 C51
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2016-19&r=ecm
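    A Hermite-series sketch of the single-index fit in Python; unlike the paper's simultaneous estimation, this sketch concentrates the link function out for a given index direction, so it is the profile-style variant the paper explicitly departs from:

      import numpy as np
      from scipy.optimize import minimize
      from numpy.polynomial.hermite_e import hermevander

      def single_index(X, y, deg=6):
          """Fit y = g(X @ theta) + e with g approximated by Hermite
          polynomials; theta normalized to unit length with positive
          first element for identification."""
          def ssr(t):
              theta = t / np.linalg.norm(t)
              H = hermevander(X @ theta, deg)        # (n, deg+1) basis
              coef = np.linalg.lstsq(H, y, rcond=None)[0]
              return ((y - H @ coef) ** 2).sum()
          res = minimize(ssr, np.ones(X.shape[1]), method="Nelder-Mead")
          theta = res.x / np.linalg.norm(res.x)
          return theta * np.sign(theta[0])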
  14. By: Yoosoon Chang (Department of Economics, Indiana University); Robert K. Kaufmann (Department of Earth and Environment, Boston University); Chang Sik Kim (Department of Economics, Sungkyunkwan University); J. Isaac Miller (Department of Economics, University of Missouri); Joon Y. Park (Department of Economics, Indiana University and Sungkyunkwan University); Sungkeun Park (Korea Institute for Industrial Economics and Trade)
    Abstract: We analyze a time series of global temperature anomaly distributions to identify and estimate persistent features in climate change. Temperature densities from globally distributed data between 1850 and 2012 are treated as a time series of functional observations that change over time. We employ a formal test for the existence of functional unit roots in the time series of these densities. Further, we develop a new test to distinguish functional unit roots from functional deterministic trends or explosive behavior. Results suggest that temperature anomalies contain stochastic trends (as opposed to deterministic trends or explosive roots), that two stochastic trends are present in the Northern Hemisphere while one is present in the Southern Hemisphere, and that the probabilities of observing moderately positive anomalies have increased while the probabilities of extremely positive anomalies have decreased. These results are consistent with the anthropogenic theory of climate change, in which a natural experiment causes human emissions of greenhouse gases and sulfur to be greater in the Northern Hemisphere and radiative forcing to be greater in the Southern Hemisphere.
    Keywords: attribution of climate change, temperature distribution, global temperature trends, functional unit roots
    JEL: C14 C23 C33 Q54
    Date: 2015–09–09
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1622&r=ecm
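    A heuristic stand-in for the paper's formal functional tests, sketched in Python: treat each period's density (on a common grid) as a functional observation, project onto principal components, and run ADF tests on the score series; the grid, the component count and the ADF stand-in are all assumptions:

      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      def density_score_pvalues(D, n_pc=2):
          """D: (T, m) densities evaluated on a common grid of m points.
          Returns ADF p-values for the leading n_pc score series."""
          Dc = D - D.mean(axis=0)
          U, s, _ = np.linalg.svd(Dc, full_matrices=False)
          scores = U[:, :n_pc] * s[:n_pc]            # (T, n_pc)
          return [adfuller(scores[:, j])[1] for j in range(n_pc)]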
  15. By: Christian Kleiber (University of Basel)
    Abstract: Methods for detecting structural changes, or change points, in time series data are widely used in many fields of science and engineering. This chapter sketches some basic methods for the analysis of structural changes in time series data. The exposition is confined to retrospective methods for univariate time series. Several recent methods for dating structural changes are compared using a time series of oil prices spanning more than 60 years. The methods broadly agree for the first part of the series up to the mid-1980s, for which changes are associated with major historical events, but provide somewhat different solutions thereafter, reflecting a gradual increase in oil prices that is not well described by a step function. As a further illustration, 1990s data on the volatility of the Hang Seng stock market index are reanalyzed.
    Keywords: change point problem, segmentation, structural change, time series
    JEL: C22 C87
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:bsl:wpaper:2016/06&r=ecm
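    The core of least-squares dating of a single mean shift, sketched in Python; multiple-break procedures of the kind compared in the chapter apply this idea recursively or by dynamic programming:

      import numpy as np

      def single_break(y, trim=0.15):
          """Return the break index minimizing the total SSR of a step
          function, searching the central (1 - 2*trim) share of the sample."""
          n = len(y)
          ssr = lambda a: ((a - a.mean()) ** 2).sum()
          cands = {k: ssr(y[:k]) + ssr(y[k:])
                   for k in range(int(trim * n), int((1 - trim) * n))}
          return min(cands, key=cands.get)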
  16. By: Wuyi Wang (School of Economics, Singapore Management University); Peter C.B. Phillips (Cowles Foundation, Yale University); Liangjun Su (School of Economics, Singapore Management University)
    Abstract: This paper studies estimation of a panel data model with latent structures in which individuals can be classified into different groups, with slope parameters homogeneous within a group but heterogeneous across groups. To identify the unknown group structure of vector parameters, we design an algorithm called Panel-CARDS, which is a systematic extension of the CARDS procedure proposed by Ke, Fan, and Wu (2015) in a cross-section framework. The extension addresses the problem of comparing vector coefficients in a panel model for homogeneity and introduces a new concept of controlled classification of multidimensional quantities called the segmentation net. We show that the Panel-CARDS method identifies the group structure asymptotically and consistently estimates the model parameters at the same time. External information on the minimum number of elements within each group is not required but can be used to improve the accuracy of classification and estimation in finite samples. Simulations evaluate performance and corroborate the asymptotic theory in several practical design settings. Two empirical economic applications are considered: one explores the effect of income on democracy by using cross-country data over the period 1961-2000; the other examines the effect of minimum wage legislation on unemployment in 50 states of the United States over the period 1988-2014. Both applications reveal the presence of latent groupings in these panel data.
    Keywords: CARDS, Clustering, Heterogeneous slopes, Income and democracy, Minimum wage and employment, Oracle estimator, Panel structure model
    JEL: C33 C38 C51
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2063&r=ecm
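    Not Panel-CARDS itself, but a crude stand-in for the same target, sketched in Python: estimate unit-level slopes, cluster them with k-means, and re-estimate pooled slopes within each recovered group:

      import numpy as np
      from sklearn.cluster import KMeans

      def group_slopes(y, X, n_groups=3, seed=0):
          """y: (N, T), X: (N, T, k). Returns group labels and pooled
          within-group slope estimates."""
          N, T, k = X.shape
          b = np.array([np.linalg.lstsq(X[i], y[i], rcond=None)[0]
                        for i in range(N)])
          labels = KMeans(n_clusters=n_groups, n_init=10,
                          random_state=seed).fit_predict(b)
          return labels, {g: np.linalg.lstsq(X[labels == g].reshape(-1, k),
                                             y[labels == g].reshape(-1),
                                             rcond=None)[0]
                          for g in range(n_groups)}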
  17. By: Korobilis, Dimitris; Pettenuzzo, Davide
    Abstract: We develop a novel, highly scalable estimation method for large Bayesian Vector Autoregressive models (BVARs) and employ it to introduce an "adaptive" version of the Minnesota prior. This flexible prior structure allows each coefficient of the VAR to have its own shrinkage intensity, which is treated as an additional parameter and estimated from the data. Most importantly, our estimation procedure does not rely on computationally intensive Markov Chain Monte Carlo (MCMC) methods, making it suitable for high-dimensional VARs with more predictors than observations. We use a Monte Carlo study to demonstrate the accuracy and computational gains of our approach. We further illustrate the forecasting performance of our new approach by applying it to a quarterly macroeconomic dataset, and find that it forecasts better than both factor models and other existing BVAR methods.
    Keywords: Bayesian VARs, Minnesota prior, Large datasets, Macroeconomic forecasting
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:18626&r=ecm
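    A basic (non-adaptive) Minnesota-style posterior mean for a VAR(p), sketched in numpy to show the ridge-type computation that avoids MCMC; the single hyperparameter lam is exactly what the paper replaces with coefficient-specific, data-driven shrinkage:

      import numpy as np

      def minnesota_posterior_mean(Y, p=2, lam=0.2):
          """Y: (T, n). Equation-by-equation posterior mean under a
          zero-mean Minnesota prior with prior variance lam**2 / l**2 on
          lag l (error variances normalized to one for simplicity)."""
          T, n = Y.shape
          Z = np.column_stack([np.ones(T - p)]
                              + [Y[p - l: T - l] for l in range(1, p + 1)])
          prec = np.concatenate([[1e-4],                 # loose on intercept
                                 np.repeat([(l / lam) ** 2
                                            for l in range(1, p + 1)], n)])
          return np.linalg.solve(Z.T @ Z + np.diag(prec), Z.T @ Y[p:])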
  18. By: Leif Anders Thorsrud (Norges Bank (Central Bank of Norway))
    Abstract: I construct a daily business cycle index based on quarterly GDP and textual information contained in a daily business newspaper. The newspaper data are decomposed into time series representing newspaper topics using a Latent Dirichlet Allocation model. The business cycle index is estimated using the newspaper topics and a time-varying Dynamic Factor Model in which dynamic sparsity is enforced upon the factor loadings using a latent threshold mechanism. The resulting index is shown to be not only more timely but also more accurate than commonly used alternative business cycle indicators. Moreover, the derived index provides the index user with broad-based, high-frequency information about the type of news that drives or reflects economic fluctuations.
    Keywords: Business cycles, Dynamic Factor Model, Latent Dirichlet Allocation (LDA)
    JEL: C11 C32 E32
    Date: 2016–12–21
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2016_21&r=ecm
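    The first step of the pipeline, the LDA topic decomposition, sketched with scikit-learn; the time-varying dynamic factor model with latent-threshold sparsity is the paper's own machinery and is not reproduced here:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      def newspaper_topic_shares(docs, n_topics=20, seed=0):
          """docs: list of article texts. Returns a (docs x topics) matrix
          of topic shares to be aggregated to daily frequency."""
          counts = CountVectorizer(stop_words="english",
                                   min_df=5).fit_transform(docs)
          lda = LatentDirichletAllocation(n_components=n_topics,
                                          random_state=seed)
          shares = lda.fit_transform(counts)
          return shares / shares.sum(axis=1, keepdims=True)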
  19. By: Pesaran, Hashem; Fan Yang, Cynthia
    Abstract: This paper builds on the work of Acemoglu et al. (2012), considering a production network with an unobserved common technological factor, and establishes general conditions under which the network structure contributes to aggregate fluctuations. It introduces the notions of strongly and weakly dominant units, and shows that at most a finite number of units in the network can be strongly dominant, while the number of weakly dominant units can rise with N (the cross-section dimension). The paper further establishes the equivalence between the highest degree of dominance in a network and the inverse of the shape parameter of the power law. A new extremum estimator for the degree of pervasiveness of individual units in the network is proposed, and is shown to be robust to the choice of the underlying distribution. Using Monte Carlo techniques, the proposed estimator is shown to have satisfactory small sample properties. Empirical applications to US input-output tables suggest the presence of production sectors with a high degree of pervasiveness, but their effects are not sufficiently pervasive to be considered strongly dominant.
    Keywords: aggregate fluctuations, strongly and weakly dominant units, spatial models, outdegrees, degree of pervasiveness, power law, input-output tables, US economy
    JEL: C12 C13 C23 C67 E32
    Date: 2016–12–16
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1678&r=ecm
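    The standard log-rank benchmark for the power-law shape parameter, to which the paper's extremum estimator is an alternative, sketched in Python with the Gabaix-Ibragimov rank-1/2 correction (positive outdegrees assumed):

      import numpy as np

      def power_law_shape(outdegrees, tail_frac=0.1):
          """Regress log(rank - 1/2) on log(size) over the upper tail of
          the outdegree distribution; returns the shape estimate beta."""
          x = np.sort(np.asarray(outdegrees))[::-1]
          k = max(int(tail_frac * len(x)), 10)
          ranks = np.arange(1, k + 1)
          slope = np.polyfit(np.log(x[:k]), np.log(ranks - 0.5), 1)[0]
          return -slope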

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.