
on Econometrics 
By:  Biqing Cai; Chaohua Dong; Jiti Gao 
Abstract:  This paper proposes a new statistic for testing cross-sectional independence of the residuals in a parametric panel data model. The proposed test statistic, called the linear spectral statistic (LSS), is based on the characteristic function of the empirical spectral distribution (ESD) of the sample correlation matrix of the residuals. Its main advantage is that it can capture nonlinear cross-sectional dependence. Asymptotic theory is established for a general class of linear spectral statistics as the cross-sectional dimension N and the time length T go to infinity proportionally. This class covers many classical statistics, including the bias-corrected Lagrange Multiplier (LM) test statistic and the likelihood ratio test statistic. Furthermore, the power under a local alternative hypothesis is analyzed and the asymptotic distribution of the proposed statistic under this local alternative is established. Finite sample results show that the proposed test statistic performs well numerically and can also distinguish some dependent but uncorrelated structures, for example, nonlinear MA(1) models and multiple ARCH(1) models. 
Keywords:  Characteristic function, cross-sectional independence, empirical spectral distribution, linear panel data models, Marcenko-Pastur Law 
JEL:  C12 C21 C22 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201518&r=all 
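As a rough illustration of the objects in the abstract above, the ESD of the sample correlation matrix and its characteristic function can be computed in a few lines. This is a minimal numpy sketch under i.i.d. normal stand-in "residuals", not the authors' test; the dimensions N, T and the evaluation point t = 1 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 200                       # cross-sectional dimension and time length
X = rng.standard_normal((N, T))       # stand-in for the panel residuals

R = np.corrcoef(X)                    # N x N sample correlation matrix
lam = np.linalg.eigvalsh(R)           # eigenvalues: support points of the ESD

def esd_cf(t, eigenvalues):
    """Characteristic function of the empirical spectral distribution at t."""
    return np.mean(np.exp(1j * t * eigenvalues))

phi = esd_cf(1.0, lam)                # one point of the LSS building block
```

Under the null of cross-sectional independence with N/T converging to a constant, the ESD converges to the Marcenko-Pastur law, which is the benchmark a linear spectral statistic compares against.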
By:  Guangming Pan; Jiti Gao; Yanrong Yang; Meihui Guo 
Abstract:  This paper proposes a new statistic for testing cross-sectional independence of the residuals in a parametric panel data model. The proposed test statistic, called the linear spectral statistic (LSS), is based on the characteristic function of the empirical spectral distribution (ESD) of the sample correlation matrix of the residuals. Its main advantage is that it can capture nonlinear cross-sectional dependence. Asymptotic theory is established for a general class of linear spectral statistics as the cross-sectional dimension N and the time length T go to infinity proportionally. This class covers many classical statistics, including the bias-corrected Lagrange Multiplier (LM) test statistic and the likelihood ratio test statistic. Furthermore, the power under a local alternative hypothesis is analyzed and the asymptotic distribution of the proposed statistic under this local alternative is established. Finite sample results show that the proposed test statistic performs well numerically and can also distinguish some dependent but uncorrelated structures, for example, nonlinear MA(1) models and multiple ARCH(1) models. 
Keywords:  Characteristic function, cross-sectional independence, empirical spectral distribution, linear panel data models, Marcenko-Pastur Law 
JEL:  C12 C21 C22 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201517&r=all 
By:  Gabriele Fiorentini (Università di Firenze); Alessandro Galesi (CEMFI, Centro de Estudios Monetarios y Financieros); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros) 
Abstract:  We introduce a frequency domain version of the EM algorithm for general dynamic factor models. We consider both AR and ARMA processes, for which we develop iterative indirect inference procedures analogous to the algorithms in Hannan (1969). Although our proposed procedure allows researchers to estimate such models by maximum likelihood with many series even without good initial values, we recommend switching to a gradient method that uses the EM principle to swiftly compute frequency domain analytical scores near the optimum. We successfully employ our algorithm to construct an index that captures the common movements of US sectoral employment growth rates. 
Keywords:  Indirect inference, Kalman filter, sectoral employment, spectral maximum likelihood, Wiener-Kolmogorov filter. 
JEL:  C32 C38 C51 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2014_1411&r=all 
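The spectral (frequency domain) likelihood underlying the approach can be illustrated in the simplest case: Whittle estimation of an AR(1) coefficient from the periodogram. This is only a sketch of the frequency-domain principle, not the authors' EM algorithm for dynamic factor models; the sample size, true coefficient, and grid search are illustrative, and the innovation variance is held fixed at its true value.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi_true = 512, 0.6
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# periodogram at the Fourier frequencies, omitting frequency zero
freqs = 2.0 * np.pi * np.arange(1, T) / T
I = np.abs(np.fft.fft(x)[1:]) ** 2 / (2.0 * np.pi * T)

def whittle_negloglik(phi, sigma2=1.0):
    """Negative Whittle log-likelihood under an AR(1) spectral density."""
    f = sigma2 / (2.0 * np.pi) / np.abs(1.0 - phi * np.exp(-1j * freqs)) ** 2
    return np.sum(np.log(f) + I / f)

grid = np.linspace(-0.95, 0.95, 191)
phi_hat = grid[np.argmin([whittle_negloglik(p) for p in grid])]
```

The same objective generalizes to multivariate spectral density matrices, which is where the EM iterations of the paper come in.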
By:  Ho, Katherine; Rosen, Adam M. 
Abstract:  Advances in the study of partial identification allow applied researchers to learn about parameters of interest without making assumptions needed to guarantee point identification. We discuss the roles that assumptions and data play in partial identification analysis, with the goal of providing information to applied researchers that can help them employ these methods in practice. To this end, we present a sample of econometric models that have been used in a variety of recent applications where parameters of interest are partially identified, highlighting common features and themes across these papers. In addition, in order to help illustrate the combined roles of data and assumptions, we present numerical illustrations for a particular application, the joint determination of wages and labor supply. Finally we discuss the benefits and challenges of using partially identifying models in empirical work and point to possible avenues of future research. 
Keywords:  partial identification 
JEL:  C13 C18 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10883&r=all 
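A concrete example of how data and assumptions combine in partial identification is Manski's worst-case bounds for a mean with nonresponse: the data identify the mean among respondents and the response rate, and an assumed bounded support does the rest. The sketch below is a generic textbook illustration with made-up wage numbers, not the paper's numerical illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
wage = rng.uniform(10.0, 50.0, n)       # outcome with known logical bounds
observed = rng.random(n) < 0.7          # ~30% of outcomes are missing

p = observed.mean()                     # identified: response probability
m = wage[observed].mean()               # identified: mean among respondents
y_lo, y_hi = 10.0, 50.0                 # assumption: bounded support

# worst-case (Manski) bounds on the population mean
lower = m * p + y_lo * (1.0 - p)
upper = m * p + y_hi * (1.0 - p)
```

Tightening the support assumption, or adding behavioral assumptions, shrinks the interval; dropping all assumptions about the missing outcomes leaves it as wide as shown.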
By:  McCracken, Michael W. (Federal Reserve Bank of St. Louis); Owyang, Michael T. (Federal Reserve Bank of St. Louis); Sekhposyan, Tatevik (Texas A&M University) 
Abstract:  We assess point and density forecasts from a mixed-frequency vector autoregression (VAR) to obtain intra-quarter forecasts of output growth as new information becomes available. The econometric model is specified at the lowest sampling frequency; high-frequency observations are treated as different economic series occurring at the low frequency. We impose restrictions on the VAR to account explicitly for the temporal ordering of the data releases. Because this type of data stacking results in a high-dimensional system, we rely on Bayesian shrinkage to mitigate parameter proliferation. The relative performance of the model is compared to forecasts from various time-series models and the Survey of Professional Forecasters. We further illustrate the possible usefulness of our proposed VAR for causal analysis. 
Keywords:  Vector autoregression; Blocking model; Stacked vector autoregression; Mixed-frequency estimation; Bayesian methods; Nowcasting; Forecasting 
JEL:  C22 C52 C53 
Date:  2015–10–08 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2015030&r=all 
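The "stacking" idea described above can be made concrete: each quarter's three monthly observations of an indicator are treated as three distinct quarterly series. A minimal sketch on simulated noise; a real application would join these columns with quarterly series in a VAR carrying the release-ordering restrictions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_quarters = 8
monthly = rng.standard_normal(3 * n_quarters)   # a single monthly indicator

# blocking: month 1, 2, 3 of each quarter become three quarterly series
stacked = monthly.reshape(n_quarters, 3)
```

The system dimension grows with the number of high-frequency series times the frequency ratio, which is why the paper leans on Bayesian shrinkage.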
By:  Bartolucci, Francesco; Marino, Maria Francesca; Pandolfi, Silvia 
Abstract:  We introduce a hidden Markov model for dynamic network data, where directed relations among a set of units are observed at different time occasions. The model can also be used, with minor adjustments, to deal with undirected networks. In the directional case, the dyads referring to each pair of units are explicitly modelled conditional on the latent states of both units. Given the complexity of the model, we propose a composite likelihood method for making inference on its parameters. This method is studied in detail for the directional case by a simulation study in which different scenarios are considered. The proposed approach is illustrated by an example based on the well-known Enron dataset about e-mail exchange. 
Keywords:  Dyads; EM algorithm; Enron dataset; Latent Markov models 
JEL:  C13 C14 C18 C3 
Date:  2015–10–14 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:67242&r=all 
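To see the mechanics of a composite likelihood on network data, consider a deliberately degenerate sketch: a directed network with independent edges and a single density parameter, so the composite likelihood is just a sum of per-dyad Bernoulli contributions. In the paper each dyad's contribution is conditioned on the latent Markov states of the two units; here there are none, so this only illustrates the summation-over-dyads structure.

```python
import numpy as np

rng = np.random.default_rng(11)
n, p_true = 30, 0.2
A = (rng.random((n, n)) < p_true).astype(float)   # directed adjacency matrix
np.fill_diagonal(A, 0.0)
offdiag = A[~np.eye(n, dtype=bool)]               # the n(n-1) directed dyad entries

def composite_loglik(p):
    """Sum of per-dyad Bernoulli log-likelihood contributions."""
    return np.sum(offdiag * np.log(p) + (1.0 - offdiag) * np.log(1.0 - p))

grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([composite_loglik(p) for p in grid])]
```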
By:  Huanjun Zhu; Vasilis Sarafidis; Mervyn Silvapulle; Jiti Gao 
Abstract:  This paper develops a method for testing for the presence of a single structural break in panel data models with unobserved heterogeneity represented by a factor error structure. The common factor approach is an appealing way to capture the effect of unobserved variables, such as skills and innate ability in studies of returns to education; common shocks and cross-sectional dependence in models of economic growth; and law enforcement acts and public attitudes towards crime in statistical modelling of criminal behaviour. Ignoring these variables may result in inconsistent parameter estimates and invalid inferences. We focus on the case where the time frequency of the data may be yearly and therefore the number of time-series observations is small, even if the sample covers a rather long period of time. We develop a Distance-type statistic based on a Method of Moments estimator that allows for unobserved common factors. Existing structural break tests proposed in the literature are not valid under these circumstances. The asymptotic properties of the test statistic are established for both known and unknown breakpoints. In our simulation study, the method performed well, both in terms of size and power, as well as in terms of successfully locating the time at which the break occurred. The method is illustrated using data from a large sample of banking institutions, providing empirical evidence on the well-known Gibrat's 'Law'. 
Keywords:  Method of moments, unobserved heterogeneity, breakpoint detection, fixed T asymptotics 
JEL:  C11 C15 C18 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201520&r=all 
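The idea of locating a break by comparing fit before and after each candidate date can be sketched with plain least squares: minimize the total sum of squared residuals over candidate breakpoints. This is a simplified OLS analogue, not the paper's Method of Moments Distance statistic with common factors; the slopes, noise level, and trimming are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
T, k0 = 100, 60                                  # sample size, true break date
beta = np.where(np.arange(T) < k0, 1.0, 2.0)     # slope shifts at the break
x = rng.standard_normal(T)
y = beta * x + 0.1 * rng.standard_normal(T)

def ssr(xs, ys):
    """Sum of squared residuals of a no-intercept OLS fit."""
    b = xs @ ys / (xs @ xs)
    r = ys - b * xs
    return r @ r

# minimize total SSR over candidate break dates, trimming 15% at each end
cands = range(15, T - 15)
k_hat = min(cands, key=lambda k: ssr(x[:k], y[:k]) + ssr(x[k:], y[k:]))
```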
By:  Fricke, Hans (University of St. Gallen); Frölich, Markus (University of Mannheim); Huber, Martin (University of Fribourg); Lechner, Michael (University of St. Gallen) 
Abstract:  This paper proposes a nonparametric method for evaluating treatment effects in the presence of both treatment endogeneity and attrition/nonresponse bias, using two instrumental variables. Making use of a discrete instrument for the treatment and a continuous instrument for nonresponse/attrition, we identify the average treatment effect for compliers as well as for the total population, and suggest non- and semiparametric estimators. We apply the latter to a randomized experiment at a Swiss university in order to estimate the effect of gym training on students' self-assessed health. The treatment (gym training) and attrition are instrumented by randomized cash incentives paid out conditional on gym visits and by a cash lottery for participating in the follow-up survey, respectively. 
Keywords:  endogeneity, attrition, local average treatment effect, weighting, instrument, experiment 
JEL:  C14 C21 C23 C24 C26 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp9428&r=all 
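With a randomized binary instrument and one-sided noncompliance, the average treatment effect for compliers reduces to the familiar Wald ratio. The simulation below is a toy stand-in for the gym-training setting, with all numbers invented; it shows only the estimator's form, not the paper's weighting estimators that additionally correct for attrition.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
z = rng.integers(0, 2, n)                  # randomized incentive (instrument)
complier = rng.random(n) < 0.6             # latent complier status
d = np.where(complier, z, 0)               # one-sided noncompliance
effect = 1.5                               # treatment effect for compliers
y = 0.5 * complier + effect * d + rng.standard_normal(n)

# Wald ratio: ITT effect on the outcome divided by ITT effect on take-up
late = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
```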
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Long Liu (Department of Economics, College of Business, University of Texas at San Antonio, One UTSA Circle, TX 78249-0633) 
Abstract:  This paper revisits the joint and conditional Lagrange Multiplier tests derived by Debarsy and Ertur (2010) for a fixed effects spatial lag regression model with spatial autoregressive error, and derives these tests using artificial Double Length Regressions (DLR). These DLR tests and their corresponding LM tests are compared using an empirical example and a Monte Carlo simulation. 
Keywords:  Double Length Regression; Spatial Lag Dependence; Spatial Error Dependence; Artificial Regressions; Panel Data; Fixed Effects 
JEL:  C12 R15 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:183&r=all 
By:  Fabio Caccioli; Imre Kondor; G\'abor Papp 
Abstract:  We construct contour maps of the error of historical and parametric estimates for large random portfolios optimized under the risk measure Expected Shortfall (ES). Similar maps for the sensitivity of the portfolio weights to small changes in the returns, as well as for the VaR of the ES-optimized portfolio, are also presented, along with results for the distribution of portfolio weights over the random samples and for the out-of-sample and in-sample estimates of ES. The contour maps allow one to quantitatively determine the sample size (the length of the time series) required by the optimization for a given number of different assets in the portfolio, at a given confidence level and a given level of relative estimation error. The necessary sample sizes invariably turn out to be unrealistically large for any reasonable choice of the number of assets and the confidence level. These results are obtained via analytical calculations based on methods borrowed from the statistical physics of random systems, supported by numerical simulations. 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1510.04943&r=all 
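For reference, the historical (nonparametric) ES estimate that the error maps are built around is simply the average of the tail losses beyond VaR. A minimal sketch for a single return series; the portfolio optimization and the parametric estimates studied in the paper are not reproduced here, and the level and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
returns = rng.standard_normal(250)           # one year of daily portfolio returns
alpha = 0.975                                # ES confidence level

losses = np.sort(-returns)[::-1]             # losses, largest first
k = int(np.ceil((1 - alpha) * len(losses)))  # number of tail observations
var = losses[k - 1]                          # historical VaR at level alpha
es = losses[:k].mean()                       # historical ES: mean loss beyond VaR
```

With only a handful of observations in the tail, the estimate is noisy; the paper quantifies exactly how long the series must be before that noise becomes tolerable.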
By:  Dante Amengual (CEMFI, Centro de Estudios Monetarios y Financieros); Luca Repetto (CEMFI, Centro de Estudios Monetarios y Financieros) 
Abstract:  We propose a method to test hypotheses in approximate factor models when the number of restrictions under the null hypothesis grows with the sample size. We use a simple test statistic, based on the sums of squared residuals of the restricted and the unrestricted versions of the model, and derive its asymptotic distribution under different assumptions on the covariance structure of the error term. We show how to standardize the test statistic in the presence of both serial and cross-section correlation to obtain a standard normal limiting distribution. We provide estimators for those quantities that are easy to implement. Finally, we illustrate the small sample performance of these testing procedures through Monte Carlo simulations and apply them to reconsider Reis and Watson's (2010) hypothesis of the existence of a pure inflation factor in the US economy. 
Keywords:  Approximate factor model, hypothesis testing, principal components, large model analysis, large data sets, inflation. 
JEL:  C12 C33 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2014_1410&r=all 
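The building block of the test statistic, sums of squared residuals from restricted and unrestricted factor models, can be sketched with principal components. This toy version compares a one-factor null against a two-factor alternative on simulated data; the paper's standardization yielding a normal limit is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
T, N = 200, 50
f = rng.standard_normal((T, 2))                  # two true common factors
loadings = rng.standard_normal((2, N))
X = f @ loadings + rng.standard_normal((T, N))

def pca_ssr(X, k):
    """Sum of squared residuals after removing k principal-component factors."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    fit = (U[:, :k] * s[:k]) @ Vt[:k]
    return np.sum((X - fit) ** 2)

ssr_restricted = pca_ssr(X, 1)                   # null: one factor
ssr_unrestricted = pca_ssr(X, 2)                 # alternative: two factors
stat = (ssr_restricted - ssr_unrestricted) / ssr_unrestricted
```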
By:  Claudio, Morana 
Abstract:  The paper introduces a new Frequentist model averaging estimation procedure, based on an OLS estimator stacked across models, implementable on cross-sectional, panel, as well as time series data. The proposed estimator has the same optimality properties as the OLS estimator under the usual set of assumptions concerning the population regression model. Relative to available alternative approaches, it has the advantage of performing model averaging ex ante in a single step, optimally selecting the models' weights according to the MSE metric, i.e., by minimizing the squared Euclidean distance between the actual and predicted value vectors. Moreover, it is straightforward to implement, requiring only the estimation of a single augmented OLS regression. By exploiting a broader information set ex ante and benefiting from more degrees of freedom, the proposed approach yields more accurate and (relatively) more efficient estimation than available ex post methods. 
Keywords:  Model Averaging, Model Uncertainty 
JEL:  C30 C51 
URL:  http://d.repec.org/n?u=RePEc:mib:wpaper:310&r=all 
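One reading of the single-step procedure is: obtain each candidate model's fitted values, then run one OLS regression of the outcome on those fitted-value vectors, so the weights minimize the squared Euclidean distance between the actual and averaged predictions. The sketch below implements that reading on made-up data; it is an interpretation for illustration, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 300
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 1.0 + 0.8 * x1 + 0.3 * x2 + rng.standard_normal(n)

def ols_fitted(X, y):
    """Fitted values from an OLS regression of y on X."""
    return X @ np.linalg.lstsq(X, y, rcond=None)[0]

# two candidate models, each using one regressor plus an intercept
yhat1 = ols_fitted(np.column_stack([np.ones(n), x1]), y)
yhat2 = ols_fitted(np.column_stack([np.ones(n), x2]), y)

# single augmented OLS: weights minimize ||y - P w||^2 over the stacked fits
P = np.column_stack([yhat1, yhat2])
w = np.linalg.lstsq(P, y, rcond=None)[0]
y_avg = P @ w
```

By construction the averaged fit can do no worse in-sample than any single candidate, since each candidate's fit lies in the span of the stacked predictions.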
By:  David T. Frazier; Gael M. Martin; Christian P. Robert 
Abstract:  Approximate Bayesian computation (ABC) methods have become increasingly prevalent of late, facilitating as they do the analysis of intractable, or challenging, statistical problems. With the initial focus being primarily on the practical import of ABC, exploration of its formal statistical properties has begun to attract more attention. The aim of this paper is to establish general conditions under which ABC methods are Bayesian consistent, in the sense of producing draws that yield a degenerate posterior distribution at the true parameter (vector) asymptotically (in the sample size). We derive conditions under which arbitrary summary statistics yield consistent inference in the Bayesian sense, with these conditions linked to identification of the true parameters. Using simple illustrative examples that have featured in the literature, we demonstrate that identification, and hence consistency, is unlikely to be achieved in many cases, and propose a simple diagnostic procedure that can indicate the presence of this problem. We also formally explore the link between consistency and the use of auxiliary models within ABC, and illustrate the subsequent results in the Lotka-Volterra predator-prey model. 
Keywords:  Bayesian consistency, likelihood-free methods, conditioning, auxiliary model-based ABC, ordinary differential equations, Lotka-Volterra model. 
JEL:  C11 C15 C18 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201519&r=all 
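The simplest ABC scheme the consistency question applies to is rejection ABC: draw parameters from the prior, simulate the summary statistic, and keep the draws whose simulated summary falls within a tolerance of the observed one. Below is a sketch for a normal mean, where the sample mean is a sufficient summary, so identification holds and the ABC posterior concentrates correctly; the prior, tolerance, and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
n, theta_true = 50, 2.0
data = rng.normal(theta_true, 1.0, n)
s_obs = data.mean()                              # observed summary statistic

# rejection ABC: draw from the prior, simulate the summary, keep close draws.
# The sample mean of n N(theta, 1) draws is exactly N(theta, 1/n), so we
# simulate the summary directly instead of simulating full datasets.
prior = rng.uniform(-10.0, 10.0, 100_000)
sims = rng.normal(prior, 1.0 / np.sqrt(n))
keep = prior[np.abs(sims - s_obs) < 0.05]
posterior_mean = keep.mean()
```

The paper's point is precisely that when the chosen summary does not identify the parameter, the accepted draws fail to concentrate on the truth no matter how much data arrives.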
By:  Matteo Barigozzi; Marc Hallin 
Abstract:  In this paper, we define weighted directed networks for large panels of financial time series, where the edges and the associated weights reflect the dynamic conditional correlation structure of the panel. Those networks produce a most informative picture of the interconnections among the various series in the panel. In particular, we combine this network-based analysis with a general dynamic factor decomposition in a study of the volatilities of the stocks of the Standard & Poor's 100 index over the period 2000-2013. This approach allows us to decompose the panel into two components which represent the two main sources of variation of financial time series: common or market shocks, and stock-specific or idiosyncratic ones. While the common components, driven by market shocks, are related to the non-diversifiable or systematic components of risk, the idiosyncratic components show important interdependencies which are nicely described through network structures. Those networks shed some light on the contagion phenomena associated with financial crises, and help assess how systemic a given firm is likely to be. We show how to estimate them by combining dynamic principal components and sparse VAR techniques. The results provide evidence of high positive intra-sectoral dependencies and lower, but nevertheless quite important, negative inter-sectoral ones, the Energy and Financials sectors being the most interconnected. In particular, the Financials stocks appear to be the most central vertices in the network, making them the main source of contagion. 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1510.05118&r=all 
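The two-stage logic, strip the common (market) component and then build a network from the remaining idiosyncratic dependence, can be sketched with one principal component and a correlation threshold. This replaces the paper's general dynamic factor model and sparse VAR with the crudest static analogues, on simulated data with one planted idiosyncratic link.

```python
import numpy as np

rng = np.random.default_rng(13)
T, N = 500, 10
market = rng.standard_normal(T)                   # common (market) shock
e = rng.standard_normal((T, N))                   # idiosyncratic shocks
e[:, 1] += 0.8 * e[:, 0]                          # one planted idiosyncratic link
X = market[:, None] + e                           # stand-in volatility panel

# strip the common component using the first principal component
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
idio = Xc - np.outer(U[:, 0] * s[0], Vt[0])

# network: draw an edge where idiosyncratic correlation exceeds a threshold
C = np.corrcoef(idio.T)
np.fill_diagonal(C, 0.0)
edges = np.argwhere(np.abs(C) > 0.3)
```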
By:  Francq, Christian; Jiménez Gamero, Maria Dolores; Meintanis, Simos 
Abstract:  Tests for spherical symmetry of the innovation distribution are proposed in multivariate GARCH models. The new tests are of Kolmogorov-Smirnov and Cramér-von Mises type and make use of the common geometry underlying the characteristic function of any spherically symmetric distribution. The asymptotic null distribution of the test statistics, as well as the consistency of the tests, is investigated under general conditions. It is shown that both the finite-sample and the asymptotic null distribution depend on the unknown distribution of the Euclidean norm of the innovations. Therefore a conditional Monte Carlo procedure is used to actually carry out the tests. The validity of this resampling scheme is formally justified. Results on the behavior of the tests in finite samples are included, as well as an application to financial data. 
Keywords:  Extended CCC-GARCH; Spherical symmetry; Empirical characteristic function; Conditional Monte Carlo test 
JEL:  C12 C15 C32 C58 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:67411&r=all 
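The geometric fact the tests exploit is that the characteristic function of a spherically symmetric distribution depends on its argument only through its norm, so the empirical characteristic function (ECF) should take similar values at equal-norm points. A minimal numpy sketch of that invariance on simulated spherical innovations; the actual Kolmogorov-Smirnov and Cramér-von Mises functionals and the conditional Monte Carlo step are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 2000
X = rng.standard_normal((n, 2))            # spherically symmetric innovations

def ecf(t, X):
    """Empirical characteristic function of the rows of X, evaluated at t."""
    return np.mean(np.exp(1j * X @ t))

t1 = np.array([1.0, 0.0])
t2 = np.array([0.0, 1.0])                  # a rotation of t1 with the same norm
diff = abs(ecf(t1, X) - ecf(t2, X))        # should be small under sphericity
```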
By:  Francq, Christian; Zakoian, JeanMichel 
Abstract:  We consider joint estimation of conditional Value-at-Risk (VaR) at several levels, in the framework of general GARCH-type models. The conditional VaR at level $\alpha$ is expressed as the product of the volatility and the opposite of the $\alpha$-quantile of the innovation. A standard method is to estimate the volatility parameter by Gaussian Quasi-Maximum Likelihood (QML) in a first step, and to use the residuals for estimating the innovation quantiles in a second step. We argue that the Gaussian QML may be inefficient with respect to more general QML and can even fail for heavy-tailed conditional distributions. We therefore study, for a vector of risk levels, a two-step procedure based on a generalized QML. For a portfolio of VaRs at different levels, confidence intervals accounting for both market and estimation risks are deduced. An empirical study based on stock indices illustrates the theoretical results. 
Keywords:  Asymmetric Power GARCH; Distortion Risk Measures; Estimation risk; Non-Gaussian Quasi-Maximum Likelihood; Value-at-Risk 
JEL:  C13 C22 C58 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:67195&r=all 
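The standard two-step method described above, Gaussian QML for the volatility parameters followed by empirical quantiles of the standardized residuals, can be sketched for an ARCH(1) model. Everything below is illustrative: made-up true parameters, a grid search in place of a proper optimizer, and a single 5% level rather than a vector of levels.

```python
import numpy as np

rng = np.random.default_rng(10)
T, omega, a = 2000, 0.1, 0.3                 # ARCH(1): sigma2_t = omega + a*eps_{t-1}^2
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = np.sqrt(omega + a * eps[t - 1] ** 2) * rng.standard_normal()

def gaussian_qml(params):
    """Negative Gaussian quasi-log-likelihood: the volatility step."""
    w, al = params
    s2 = np.empty(T)
    s2[0] = w / (1 - al)                     # unconditional variance as start-up
    s2[1:] = w + al * eps[:-1] ** 2
    return np.sum(np.log(s2) + eps ** 2 / s2)

# step 1: Gaussian QML over a coarse grid (a sketch; use an optimizer in practice)
grid = [(w, al) for w in np.linspace(0.05, 0.2, 16) for al in np.linspace(0.05, 0.6, 56)]
w_hat, a_hat = min(grid, key=gaussian_qml)

# step 2: empirical alpha-quantile of the standardized residuals
s2_hat = np.empty(T)
s2_hat[0] = w_hat / (1 - a_hat)
s2_hat[1:] = w_hat + a_hat * eps[:-1] ** 2
eta = eps / np.sqrt(s2_hat)
alpha = 0.05
var_next = -np.sqrt(w_hat + a_hat * eps[-1] ** 2) * np.quantile(eta, alpha)
```

The paper's point is that replacing the Gaussian quasi-likelihood in step 1 by a more general one can improve efficiency, especially when the innovations are heavy-tailed.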
By:  Sungje Byun; Soojin Jo 
Abstract:  How does aggregate profit uncertainty influence investment activity at the firm level? We propose a parsimonious adaptation of a factor autoregressive conditional heteroscedasticity model to exploit information in a sub-industry sales panel for an efficient and tractable estimation of aggregate volatility. The resulting uncertainty measure is then included in an investment forecasting model interacted with firm-specific coefficients. We find that higher profit uncertainty induces firms to lower capital expenditure on average, yet to a considerably different degree: for example, both small and large firms are expected to reduce investment much more than medium-sized firms. This highlights significant and substantial heterogeneity in the uncertainty transmission mechanism. 
Keywords:  Econometric and statistical methods; International topics; Domestic demand and components 
JEL:  E22 D80 C22 C23 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:1534&r=all 