nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒10‒25
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Orthogonal Series Estimation in Nonlinear Cointegrating Models with Endogeneity By Biqing Cai; Chaohua Dong; Jiti Gao
  2. Cross-sectional Independence Test for a Class of Parametric Panel Data Models By Guangming Pan; Jiti Gao; Yanrong Yang; Meihui Guo
  3. A SPECTRAL EM ALGORITHM FOR DYNAMIC FACTOR MODELS By Gabriele Fiorentini; Alessandro Galesi; Enrique Sentana
  4. Partial Identification in Applied Research: Benefits and Challenges By Ho, Katherine; Rosen, Adam M.
  5. Real-Time Forecasting with a Large, Mixed Frequency, Bayesian VAR By McCracken, Michael W.; Owyang, Michael T.; Sekhposyan, Tatevik
  6. Composite likelihood inference for hidden Markov models for dynamic networks By Bartolucci, Francesco; Marino, Maria Francesca; Pandolfi, Silvia
  7. Testing for a Structural Break in Dynamic Panel Data Models with Common Factors By Huanjun Zhu; Vasilis Sarafidis; Mervyn Silvapulle; Jiti Gao
  8. Endogeneity and Non-Response Bias in Treatment Evaluation: Nonparametric Identification of Causal Effects by Instruments By Fricke, Hans; Frölich, Markus; Huber, Martin; Lechner, Michael
  9. Testing for Spatial Lag and Spatial Error Dependence in a Fixed Effects Panel Data Model Using Double Length Artificial Regressions By Badi H. Baltagi; Long Liu
  10. Portfolio Optimization under Expected Shortfall: Contour Maps of Estimation Error By Fabio Caccioli; Imre Kondor; Gábor Papp
  11. TESTING A LARGE NUMBER OF HYPOTHESES IN APPROXIMATE FACTOR MODELS By Dante Amengual; Luca Repetto
  12. Model Averaging by Stacking By Morana, Claudio
  13. On Consistency of Approximate Bayesian Computation By David T. Frazier; Gael M. Martin; Christian P. Robert
  14. Networks, Dynamic Factors, and the Volatility Analysis of High-Dimensional Financial Series By Matteo Barigozzi; Marc Hallin
  15. Tests for sphericity in multivariate GARCH models By Francq, Christian; Jiménez Gamero, Maria Dolores; Meintanis, Simos
  16. Looking for efficient QML estimation of conditional Value-at-Risk at multiple risk levels By Francq, Christian; Zakoian, Jean-Michel
  17. Heterogeneity in the Dynamic Effects of Uncertainty on Investment By Sungje Byun; Soojin Jo

  1. By: Biqing Cai; Chaohua Dong; Jiti Gao
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-18&r=all
  2. By: Guangming Pan; Jiti Gao; Yanrong Yang; Meihui Guo
    Abstract: This paper proposes a new statistic to conduct cross-sectional independence test for the residuals involved in a parametric panel data model. The proposed test statistic, which is called linear spectral statistic (LSS), is established based on the characteristic function of the empirical spectral distribution (ESD) of the sample correlation matrix of the residuals. The main advantage of the proposed test statistic is that it can capture nonlinear cross-sectional dependence. Asymptotic theory for a general class of linear spectral statistics is established, as the cross-sectional dimension N and time length T go to infinity proportionally. This type of statistics covers many classical statistics, including the bias-corrected Lagrange Multiplier (LM) test statistic and the likelihood ratio test statistic. Furthermore, the power under a local alternative hypothesis is analyzed and the asymptotic distribution of the proposed statistic under this local hypothesis is also established. Finite sample performance shows that the proposed test statistic works well numerically in each individual case and it can also distinguish some dependent but uncorrelated structures, for example, nonlinear MA(1) models and multiple ARCH(1) models.
    Keywords: Characteristic function, cross-sectional independence, empirical spectral distribution, linear panel data models, Marčenko–Pastur law
    JEL: C12 C21 C22
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-17&r=all
  3. By: Gabriele Fiorentini (Università di Firenze); Alessandro Galesi (CEMFI, Centro de Estudios Monetarios y Financieros); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: We introduce a frequency domain version of the EM algorithm for general dynamic factor models. We consider both AR and ARMA processes, for which we develop iterative indirect inference procedures analogous to the algorithms in Hannan (1969). Although our proposed procedure allows researchers to estimate such models by maximum likelihood with many series even without good initial values, we recommend switching to a gradient method that uses the EM principle to swiftly compute frequency domain analytical scores near the optimum. We successfully employ our algorithm to construct an index that captures the common movements of US sectoral employment growth rates.
    Keywords: Indirect inference, Kalman filter, sectoral employment, spectral maximum likelihood, Wiener-Kolmogorov filter.
    JEL: C32 C38 C51
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2014_1411&r=all
  4. By: Ho, Katherine; Rosen, Adam M.
    Abstract: Advances in the study of partial identification allow applied researchers to learn about parameters of interest without making assumptions needed to guarantee point identification. We discuss the roles that assumptions and data play in partial identification analysis, with the goal of providing information to applied researchers that can help them employ these methods in practice. To this end, we present a sample of econometric models that have been used in a variety of recent applications where parameters of interest are partially identified, highlighting common features and themes across these papers. In addition, in order to help illustrate the combined roles of data and assumptions, we present numerical illustrations for a particular application, the joint determination of wages and labor supply. Finally we discuss the benefits and challenges of using partially identifying models in empirical work and point to possible avenues of future research.
    Keywords: partial identification
    JEL: C13 C18
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:10883&r=all
  5. By: McCracken, Michael W. (Federal Reserve Bank of St. Louis); Owyang, Michael T. (Federal Reserve Bank of St. Louis); Sekhposyan, Tatevik (Texas A&M University)
    Abstract: We assess point and density forecasts from a mixed-frequency vector autoregression (VAR) to obtain intra-quarter forecasts of output growth as new information becomes available. The econometric model is specified at the lowest sampling frequency; high frequency observations are treated as different economic series occurring at the low frequency. We impose restrictions on the VAR to account explicitly for the temporal ordering of the data releases. Because this type of data stacking results in a high-dimensional system, we rely on Bayesian shrinkage to mitigate parameter proliferation. The model's forecasts are compared with those from various time-series models and the Survey of Professional Forecasters. We further illustrate the possible usefulness of our proposed VAR for causal analysis.
    Keywords: Vector autoregression; Blocking model; Stacked vector autoregression; Mixed-frequency estimation; Bayesian methods; Nowcasting; Forecasting
    JEL: C22 C52 C53
    Date: 2015–10–08
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2015-030&r=all
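A minimal sketch of the stacking ("blocking") idea described in the abstract, assuming a single monthly series and a quarterly target frequency (all numbers are invented; the VAR itself, the release-timing restrictions, and the Bayesian shrinkage are omitted):

```python
import numpy as np

# Blocking: re-express one monthly series as three quarterly series
# (month 1, month 2, month 3 of each quarter), so the whole system can
# be specified at the lowest (quarterly) sampling frequency.
monthly = np.arange(1, 25, dtype=float)   # 24 months = 8 quarters

blocked = monthly.reshape(-1, 3)          # one row per quarter; columns: m1, m2, m3
print(blocked.shape)                      # (8, 3)
```

Each column of `blocked` then enters the quarterly VAR as a separate series, which is what makes the stacked system high-dimensional and motivates the shrinkage.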
  6. By: Bartolucci, Francesco; Marino, Maria Francesca; Pandolfi, Silvia
    Abstract: We introduce a hidden Markov model for dynamic network data in which directed relations among a set of units are observed at different time occasions. The model can also be used, with minor adjustments, to deal with undirected networks. In the directional case, the dyads referring to each pair of units are explicitly modelled conditional on the latent states of both units. Given the complexity of the model, we propose a composite likelihood method for making inference on its parameters. This method is studied in detail for the directional case through a simulation study covering different scenarios. The proposed approach is illustrated by an example based on the well-known Enron dataset of email exchanges.
    Keywords: Dyads; EM algorithm; Enron dataset; Latent Markov models
    JEL: C13 C14 C18 C3
    Date: 2015–10–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67242&r=all
  7. By: Huanjun Zhu; Vasilis Sarafidis; Mervyn Silvapulle; Jiti Gao
    Abstract: This paper develops a method for testing for the presence of a single structural break in panel data models with unobserved heterogeneity represented by a factor error structure. The common factor approach is an appealing way to capture the effect of unobserved variables, such as skills and innate ability in studies of returns to education, common shocks and cross-sectional dependence in models of economic growth, and law enforcement acts and public attitudes towards crime in statistical modelling of criminal behaviour. Ignoring these variables may result in inconsistent parameter estimates and invalid inferences. We focus on the case where the data frequency may be yearly, so that the number of time series observations is small even if the sample covers a rather long period of time. We develop a Distance-type statistic based on a Method of Moments estimator that allows for unobserved common factors. Existing structural break tests proposed in the literature are not valid under these circumstances. The asymptotic properties of the test statistic are established for both known and unknown breakpoints. In our simulation study, the method performed well, both in terms of size and power and in terms of successfully locating the time at which the break occurred. The method is illustrated using data from a large sample of banking institutions, providing empirical evidence on the well-known Gibrat's Law.
    Keywords: Method of moments, unobserved heterogeneity, break-point detection, fixed T asymptotics
    JEL: C11 C15 C18
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-20&r=all
  8. By: Fricke, Hans (University of St. Gallen); Frölich, Markus (University of Mannheim); Huber, Martin (University of Fribourg); Lechner, Michael (University of St. Gallen)
    Abstract: This paper proposes a nonparametric method for evaluating treatment effects in the presence of both treatment endogeneity and attrition/non-response bias, using two instrumental variables. Making use of a discrete instrument for the treatment and a continuous instrument for non-response/attrition, we identify the average treatment effect for compliers as well as for the total population, and suggest non- and semiparametric estimators. We apply the latter to a randomized experiment at a Swiss university in order to estimate the effect of gym training on students' self-assessed health. The treatment (gym training) and attrition are instrumented by randomized cash incentives paid out conditional on gym visits and by a cash lottery for participating in the follow-up survey, respectively.
    Keywords: endogeneity, attrition, local average treatment effect, weighting, instrument, experiment
    JEL: C14 C21 C23 C24 C26
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9428&r=all
  9. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Long Liu (Department of Economics, College of Business, University of Texas at San Antonio, One UTSA Circle, TX 78249-0633)
    Abstract: This paper revisits the joint and conditional Lagrange Multiplier tests derived by Debarsy and Ertur (2010) for a fixed effects spatial lag regression model with spatial auto-regressive error, and derives these tests using artificial Double Length Regressions (DLR). These DLR tests and their corresponding LM tests are compared using an empirical example and a Monte Carlo simulation.
    Keywords: Double Length Regression; Spatial Lag Dependence; Spatial Error Dependence; Artificial Regressions; Panel Data; Fixed Effects
    JEL: C12 R15
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:183&r=all
  10. By: Fabio Caccioli; Imre Kondor; Gábor Papp
    Abstract: The contour maps of the error of historical and parametric estimates for large random portfolios optimized under the risk measure Expected Shortfall (ES) are constructed. Similar maps for the sensitivity of the portfolio weights to small changes in the returns, as well as for the VaR of the ES-optimized portfolio, are also presented, along with results for the distribution of portfolio weights over the random samples and for the out-of-sample and in-sample estimates of ES. The contour maps allow one to determine quantitatively the sample size (the length of the time series) required by the optimization for a given number of assets in the portfolio, at a given confidence level and a given level of relative estimation error. The necessary sample sizes invariably turn out to be unrealistically large for any reasonable choice of the number of assets and the confidence level. These results are obtained via analytical calculations based on methods borrowed from the statistical physics of random systems, supported by numerical simulations.
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1510.04943&r=all
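The abstract's point about estimation error at realistic sample sizes can be illustrated with a toy Monte Carlo (this is only a sketch of a historical ES estimator on i.i.d. normal data, not the paper's optimized-portfolio analysis; the sample sizes and confidence level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def historical_es(returns, alpha=0.975):
    """Historical Expected Shortfall: mean loss beyond the alpha-level VaR."""
    losses = -returns
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

# Reference value from a very long sample, then the average relative error
# of the estimate over many short (one trading year) samples.
true_es = historical_es(rng.standard_normal(1_000_000))
short_es = np.array([historical_es(rng.standard_normal(250)) for _ in range(200)])
rel_err = np.abs(short_es / true_es - 1).mean()
print(round(true_es, 2), round(rel_err, 2))
```

Even in this one-asset case the short-sample estimates scatter noticeably around the reference value; the paper quantifies how much worse this becomes once the weights of a large portfolio are themselves optimized on the same short sample.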
  11. By: Dante Amengual (CEMFI, Centro de Estudios Monetarios y Financieros); Luca Repetto (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: We propose a method to test hypotheses in approximate factor models when the number of restrictions under the null hypothesis grows with the sample size. We use a simple test statistic, based on the sums of squared residuals of the restricted and the unrestricted versions of the model, and derive its asymptotic distribution under different assumptions on the covariance structure of the error term. We show how to standardize the test statistic in the presence of both serial and cross-section correlation to obtain a standard normal limiting distribution. We provide estimators for those quantities that are easy to implement. Finally, we illustrate the small sample performance of these testing procedures through Monte Carlo simulations and apply them to reconsider Reis and Watson (2010)'s hypothesis of existence of a pure inflation factor in the US economy.
    Keywords: Approximate factor model, hypothesis testing, principal components, large model analysis, large data sets, inflation.
    JEL: C12 C33
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2014_1410&r=all
  12. By: Morana, Claudio
    Abstract: The paper introduces a new frequentist model averaging estimation procedure, based on a stacked OLS estimator across models, applicable to cross-sectional, panel, and time series data. The proposed estimator has the same optimal properties as the OLS estimator under the usual assumptions on the population regression model. Relative to available alternative approaches, it has the advantage of performing model averaging ex ante in a single step, optimally selecting model weights according to the MSE metric, i.e., by minimizing the squared Euclidean distance between the actual and predicted value vectors. Moreover, it is straightforward to implement, requiring only the estimation of a single augmented OLS regression. By exploiting a broader information set ex ante and benefiting from more degrees of freedom, the proposed approach yields more accurate and (relatively) more efficient estimation than available ex-post methods.
    Keywords: Model Averaging, Model Uncertainty
    JEL: C30 C51
    URL: http://d.repec.org/n?u=RePEc:mib:wpaper:310&r=all
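A hypothetical two-step sketch in the spirit of stacking (note: the paper's estimator is a single-step augmented OLS regression, which this toy version does not reproduce; all data and model choices here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two misspecified candidate models for y, each using only one regressor;
# their fitted values are then stacked as regressors in one auxiliary OLS,
# whose coefficients are the model weights minimizing the squared Euclidean
# distance between the actual and combined predicted value vectors.
n = 500
x1, x2 = rng.standard_normal((2, n))
y = 1.0 * x1 + 0.5 * x2 + rng.standard_normal(n)

fit1 = x1 * np.linalg.lstsq(x1[:, None], y, rcond=None)[0]   # model 1: y on x1
fit2 = x2 * np.linalg.lstsq(x2[:, None], y, rcond=None)[0]   # model 2: y on x2

F = np.column_stack([fit1, fit2])
weights = np.linalg.lstsq(F, y, rcond=None)[0]               # stacking weights
print(np.round(weights, 1))
```

Because the two candidate models capture disjoint parts of the signal, both stacking weights come out close to one here rather than summing to one: stacking chooses weights by predictive fit, not as a convex combination.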
  13. By: David T. Frazier; Gael M. Martin; Christian P. Robert
    Abstract: Approximate Bayesian computation (ABC) methods have become increasingly prevalent of late, facilitating as they do the analysis of intractable, or challenging, statistical problems. With the initial focus being primarily on the practical import of ABC, exploration of its formal statistical properties has begun to attract more attention. The aim of this paper is to establish general conditions under which ABC methods are Bayesian consistent, in the sense of producing draws that yield a degenerate posterior distribution at the true parameter (vector) asymptotically (in the sample size). We derive conditions under which arbitrary summary statistics yield consistent inference in the Bayesian sense, with these conditions linked to identification of the true parameters. Using simple illustrative examples that have featured in the literature, we demonstrate that identification, and hence consistency, is unlikely to be achieved in many cases, and propose a simple diagnostic procedure that can indicate the presence of this problem. We also formally explore the link between consistency and the use of auxiliary models within ABC, and illustrate the subsequent results in the Lotka-Volterra predator-prey model.
    Keywords: Bayesian consistency, likelihood-free methods, conditioning, auxiliary model-based ABC, ordinary differential equations, Lotka-Volterra model.
    JEL: C11 C15 C18
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2015-19&r=all
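As a rough illustration of the identification point in this abstract, here is a deliberately minimal rejection-ABC sketch (an invented example, not from the paper): the sample mean identifies the mean of a normal model, so the accepted draws concentrate near the truth.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Observed" data and its summary statistic.
data = rng.normal(3.0, 1.0, size=200)
s_obs = data.mean()

# Rejection ABC: draw theta from the prior, simulate data under theta,
# keep theta whenever the simulated summary falls within the tolerance.
prior_draws = rng.uniform(-10, 10, size=50_000)
accepted = []
for theta in prior_draws:
    s_sim = rng.normal(theta, 1.0, size=200).mean()
    if abs(s_sim - s_obs) < 0.05:      # tolerance epsilon
        accepted.append(theta)

print(len(accepted), round(float(np.mean(accepted)), 1))
```

With a summary that failed to identify the mean (say, the sample variance alone), the accepted draws would not concentrate, which is the kind of failure the paper's diagnostic procedure is designed to flag.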
  14. By: Matteo Barigozzi; Marc Hallin
    Abstract: In this paper, we define weighted directed networks for large panels of financial time series, where the edges and the associated weights reflect the dynamic conditional correlation structure of the panel. These networks produce a highly informative picture of the interconnections among the various series in the panel. In particular, we combine this network-based analysis with a general dynamic factor decomposition in a study of the volatilities of the stocks of the Standard & Poor's 100 index over the period 2000-2013. This approach allows us to decompose the panel into two components representing the two main sources of variation of financial time series: common or market shocks, and stock-specific or idiosyncratic ones. While the common components, driven by market shocks, are related to the non-diversifiable or systematic components of risk, the idiosyncratic components show important interdependencies which are nicely described through network structures. These networks shed light on the contagion phenomena associated with financial crises and help assess how systemic a given firm is likely to be. We show how to estimate them by combining dynamic principal components and sparse VAR techniques. The results provide evidence of high positive intra-sectoral dependencies and lower, but nevertheless quite important, negative inter-sectoral dependencies, with the Energy and Financials sectors being the most interconnected. In particular, the Financials stocks appear to be the most central vertices in the network, making them the main source of contagion.
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1510.05118&r=all
  15. By: Francq, Christian; Jiménez Gamero, Maria Dolores; Meintanis, Simos
    Abstract: Tests for spherical symmetry of the innovation distribution are proposed in multivariate GARCH models. The new tests are of Kolmogorov–Smirnov and Cramér–von Mises type and make use of the common geometry underlying the characteristic function of any spherically symmetric distribution. The asymptotic null distribution of the test statistics, as well as the consistency of the tests, is investigated under general conditions. It is shown that both the finite sample and the asymptotic null distribution depend on the unknown distribution of the Euclidean norm of the innovations. Therefore a conditional Monte Carlo procedure is used to actually carry out the tests. The validity of this resampling scheme is formally justified. Results on the behavior of the tests in finite samples are included, as well as an application to financial data.
    Keywords: Extended CCC-GARCH; Spherical symmetry; Empirical characteristic function; Conditional Monte Carlo test
    JEL: C12 C15 C32 C58
    Date: 2015–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67411&r=all
  16. By: Francq, Christian; Zakoian, Jean-Michel
    Abstract: We consider joint estimation of conditional Value-at-Risk (VaR) at several levels, in the framework of general GARCH-type models. The conditional VaR at level α is expressed as the product of the volatility and the opposite of the α-quantile of the innovation. A standard method is to estimate the volatility parameter by Gaussian Quasi-Maximum Likelihood (QML) in a first step, and to use the residuals to estimate the innovation quantiles in a second step. We argue that the Gaussian QML may be inefficient with respect to more general QML estimators and can even fail for heavy-tailed conditional distributions. We therefore study, for a vector of risk levels, a two-step procedure based on a generalized QML. For a portfolio of VaRs at different levels, confidence intervals accounting for both market and estimation risks are deduced. An empirical study based on stock indices illustrates the theoretical results.
    Keywords: Asymmetric Power GARCH; Distortion Risk Measures; Estimation risk; Non-Gaussian Quasi-Maximum Likelihood; Value-at-Risk
    JEL: C13 C22 C58
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:67195&r=all
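A hypothetical sketch of the two-step idea for an i.i.d. scale model r_t = σ·ε_t, used here as a stand-in for the GARCH volatility (the data, scale, and risk levels are invented, and the paper's generalized non-Gaussian QML is not implemented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Heavy-tailed returns: Student-t(5) innovations rescaled to unit variance,
# with true scale sigma = 2.
returns = 2.0 * rng.standard_t(df=5, size=5000) / np.sqrt(5 / 3)

sigma_hat = returns.std()                 # step 1: Gaussian QML scale estimate
residuals = returns / sigma_hat           # standardized residuals
levels = np.array([0.01, 0.05])
var_hat = -sigma_hat * np.quantile(residuals, levels)  # step 2: VaR jointly at two levels

print(np.round(var_hat, 1))
```

In this static toy the scale cancels out of the final VaR; in the dynamic GARCH case σ_t varies over time, so the quality of the first-step volatility estimator, which is the paper's concern, feeds directly into every conditional VaR.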
  17. By: Sungje Byun; Soojin Jo
    Abstract: How does aggregate profit uncertainty influence investment activity at the firm level? We propose a parsimonious adaptation of a factor-autoregressive conditional heteroscedasticity model to exploit information in a subindustry sales panel for an efficient and tractable estimation of aggregate volatility. The resulting uncertainty measure is then included in an investment forecasting model interacted with firm-specific coefficients. We find that higher profit uncertainty induces firms to lower capital expenditure on average, yet to a considerably different degree: for example, both small and large firms are expected to reduce investment much more than medium-sized firms. This highlights significant and substantial heterogeneity in the uncertainty transmission mechanism.
    Keywords: Econometric and statistical methods; International topics; Domestic demand and components
    JEL: E22 D80 C22 C23
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:15-34&r=all

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.