
on Econometrics 
By:  Roberto Casarin (Department of Economics, University of Venice Ca' Foscari); Daniel Felix Ahelegbey (Department of Economics, University of Venice Ca' Foscari); Monica Billio (Department of Economics, University of Venice Ca' Foscari) 
Abstract:  In high-dimensional vector autoregressive (VAR) models, the number of predictors is naturally large relative to the number of observations, which leads to a loss of efficiency in estimation and forecasting. In this context, model selection is a difficult issue and standard procedures are often inefficient. In this paper we aim to provide a solution to these problems. We introduce sparsity in the structure of temporal dependence of a graphical VAR and develop an efficient model selection approach. We follow a Bayesian approach and introduce prior restrictions to control the maximal number of explanatory variables in VAR models. We discuss the joint inference of the temporal dependence, the maximum lag order and the parameters of the model, and provide an efficient Markov chain Monte Carlo procedure. The efficiency of the proposed approach is shown in simulation experiments and on real data, where we model and forecast selected US macroeconomic variables with many predictors. 
Keywords:  High-dimensional Models, Large Vector Autoregression, Model Selection, Prior Distribution, Sparse Graphical Models. 
JEL:  C11 C15 C52 E17 G17 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2014:29&r=ecm 
By:  Pierre Perron (Boston University); Mototsugu Shintani (University of Tokyo and Vanderbilt University); Tomoyoshi Yabu (Keio University) 
Abstract:  This paper proposes a new test for the presence of a nonlinear deterministic trend approximated by a Fourier expansion in a univariate time series for which there is no prior knowledge as to whether the noise component is stationary or contains an autoregressive unit root. Our approach builds on the work of Perron and Yabu (2009a) and is based on a Feasible Generalized Least Squares procedure that uses a super-efficient estimator of the sum of the autoregressive coefficients α when α=1. The resulting Wald test statistic asymptotically follows a chi-square limit distribution in both the I(0) and I(1) cases. To improve the finite sample properties of the test, we use a bias-corrected version of the OLS estimator of α proposed by Roy and Fuller (2001). We show that our procedure is substantially more powerful than currently available alternatives. We illustrate the usefulness of our method via an application to modeling the trend of global and hemispheric temperatures. 
Keywords:  nonlinear trends, unit root, median-unbiased estimator, GLS procedure, super-efficient estimator 
JEL:  C2 
Date:  2015–02–27 
URL:  http://d.repec.org/n?u=RePEc:van:wpaper:vueconsub1500001&r=ecm 
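The Fourier approximation of a nonlinear deterministic trend that the abstract above builds on can be illustrated with a plain OLS regression on trigonometric terms. This is a sketch only: the FGLS step, the super-efficient estimator of α and the Wald test itself are not implemented, and the variable names and simulated data are assumptions, not from the paper.

```python
import numpy as np

def fourier_trend_design(T, n_freq):
    """Design matrix: constant, linear trend, and a Fourier expansion of order n_freq."""
    t = np.arange(1, T + 1)
    cols = [np.ones(T), t]
    for k in range(1, n_freq + 1):
        cols.append(np.sin(2 * np.pi * k * t / T))
        cols.append(np.cos(2 * np.pi * k * t / T))
    return np.column_stack(cols)

# simulate a series with a single-frequency nonlinear trend plus noise
rng = np.random.default_rng(0)
T = 200
t = np.arange(1, T + 1)
y = 0.5 + 0.01 * t + 1.5 * np.sin(2 * np.pi * t / T) + rng.normal(0, 0.2, T)

X = fourier_trend_design(T, n_freq=1)
# beta holds: intercept, trend slope, sine coefficient, cosine coefficient
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With this setup the estimated sine coefficient recovers the true value 1.5 up to sampling noise.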
By:  Xu Cheng (Department of Economics, University of Pennsylvania); Zhipeng Liao (Department of Economics, UCLA); Ruoyao Shi (Department of Economics, UCLA) 
Abstract:  This paper studies the averaging generalized method of moments (GMM) estimator that combines a conservative GMM estimator based on valid moment conditions and an aggressive GMM estimator based on both valid and possibly misspecified moment conditions, where the weight is the sample analog of an infeasible optimal weight. It is an alternative to pretest estimators that switch between the conservative and aggressive estimators based on model specification tests. This averaging estimator is robust in the sense that it uniformly dominates the conservative estimator by reducing the risk under any degree of misspecification, whereas the pretest estimators reduce the risk in parts of the parameter space and increase it in other parts. To establish the uniform dominance of one estimator over another, we develop an asymptotic theory of uniform approximations to the finite-sample risk differences between the two estimators. These asymptotic results are developed along drifting sequences of data generating processes (DGPs) that model various degrees of local misspecification as well as global misspecification. Extending seminal results on the James-Stein estimator, the uniform dominance is established in non-Gaussian semiparametric nonlinear models. The proposed averaging estimator is applied to estimate the human capital production function in a life-cycle labor supply model. 
Keywords:  Finite-Sample Risk, Generalized Shrinkage Estimator, GMM Misspecification, Model Averaging, Uniform Approximation 
JEL:  C26 C36 C51 C52 
Date:  2013–08–01 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:15017&r=ecm 
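The basic idea of the averaging estimator above, combining a conservative estimate (valid moments only) with an aggressive one (valid plus possibly misspecified moments), can be sketched in a toy mean-estimation setting. The fixed weight w is an assumption for illustration; in the paper the weight is the sample analog of an infeasible optimal weight.

```python
import numpy as np

def averaging_estimate(theta_c, theta_a, w):
    """Convex combination of conservative and aggressive estimates."""
    return (1 - w) * theta_c + w * theta_a

rng = np.random.default_rng(6)
valid = rng.normal(0.0, 1.0, 200)   # sample whose mean identifies the true value 0
extra = rng.normal(0.1, 1.0, 800)   # extra, possibly misspecified sample (small bias)

theta_c = valid.mean()                              # conservative: valid data only
theta_a = np.concatenate([valid, extra]).mean()     # aggressive: pools both samples
theta_avg = averaging_estimate(theta_c, theta_a, w=0.5)
```

The averaged estimate trades the conservative estimator's higher variance against the aggressive estimator's bias.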
By:  Joseph Esdras; Pedro Galeano; Rosa E. Lillo 
Abstract:  The comparison of the means of two independent samples is one of the most popular problems in real-world data analysis. In the multivariate context, the two-sample Hotelling's T² statistic is frequently used to test the equality of means of two independent Gaussian random samples, assuming either the same or a different covariance matrix. In this paper, we derive two-sample Hotelling's T² statistics for samples from two functional distributions. The statistics that we propose are based on the functional Mahalanobis semi-distance and, under certain conditions, their asymptotic distributions are chi-squared, regardless of the distribution of the functional random samples. Additionally, we provide the link between the two-sample Hotelling's T² statistics and statistics based on the functional principal components semi-distance. A Monte Carlo study indicates that the two-sample Hotelling's T² statistics outperform in power those based on the functional principal components semi-distance. We analyze a data set of daily temperature records of 35 Canadian weather stations over a year with the goal of testing whether or not the mean temperature functions of the stations in the Eastern and Western Canada regions are equal. The results appear to indicate differences between the two regions that are not found with statistics based on the functional principal components semi-distance. 
Keywords:  Functional Behrens-Fisher problem, Functional data analysis, Functional Mahalanobis semi-distance, Functional principal components semi-distance, Hotelling's T² statistics 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1503&r=ecm 
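For reference, the classical multivariate two-sample Hotelling's T² that the paper above extends to functional data can be computed in a few lines. This is the standard equal-covariance (pooled) version only, not the functional Mahalanobis semi-distance statistics of the paper; the simulated data are an assumption.

```python
import numpy as np

def two_sample_T2(X, Y):
    """Two-sample Hotelling's T^2 with pooled covariance (equal-covariance case)."""
    n1, p = X.shape
    n2, _ = Y.shape
    d = X.mean(axis=0) - Y.mean(axis=0)
    # pooled sample covariance
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(S, d)

rng = np.random.default_rng(1)
X = rng.normal(0, 1, size=(100, 3))
Y = rng.normal(0, 1, size=(120, 3))
T2 = two_sample_T2(X, Y)  # approximately chi-squared(3) under equal means
```

Shifting one sample's mean makes the statistic grow, which is what the test exploits.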
By:  Rinke, Saskia; Sibbertsen, Philipp 
Abstract:  In this paper the performance of different information criteria for simultaneous model class and lag order selection is evaluated using simulation studies. We focus on the ability of the criteria to distinguish between linear and nonlinear models. In the simulation studies, we consider three different versions of the commonly known criteria AIC, SIC and AICc. In addition, we assess the performance of WIC and evaluate the impact of the error term variance estimator. Our results confirm the findings of different authors that AIC and AICc favor nonlinear over linear models, whereas weighted versions of WIC and all versions of SIC are able to successfully distinguish between linear and nonlinear models. However, the discrimination between different nonlinear model classes is more difficult. Nevertheless, the lag order selection is reliable. In general, information criteria involving the unbiased error term variance estimator overfit less and should be preferred to those using the usual ML estimator of the error term variance. 
Keywords:  Information Criteria, Nonlinear Time Series, Threshold Models, Monte Carlo 
JEL:  C15 C22 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp548&r=ecm 
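The role of the error term variance estimator discussed in the abstract above can be made concrete with the standard Gaussian-likelihood forms of AIC and SIC. A minimal sketch, assuming the common T·log(σ̂²) + penalty formulation; the function name and numbers are illustrative, not from the paper.

```python
import numpy as np

def info_criteria(ssr, T, k, unbiased=False):
    """AIC and SIC from a model's SSR; optionally use the unbiased variance estimator."""
    # ML estimator divides by T; the unbiased estimator divides by T - k
    sigma2 = ssr / (T - k) if unbiased else ssr / T
    aic = T * np.log(sigma2) + 2 * k
    sic = T * np.log(sigma2) + k * np.log(T)
    return aic, sic

aic, sic = info_criteria(ssr=10.0, T=100, k=2)
aic_u, sic_u = info_criteria(ssr=10.0, T=100, k=2, unbiased=True)
```

For T = 100 and k = 2 the SIC penalty k·log(T) exceeds the AIC penalty 2k, and the unbiased variance estimator raises both criteria, penalizing extra parameters more heavily.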
By:  Anne Péguin-Feissolle (Aix Marseille University (Aix-Marseille School of Economics), CNRS & EHESS, Aix-Marseille); Bilel Sanhaji (Aix Marseille University (Aix-Marseille School of Economics), CNRS & EHESS, Aix-Marseille) 
Abstract:  We introduce two multivariate constant conditional correlation tests that require little knowledge of the functional relationship determining the conditional correlations. The first test is based on artificial neural networks and the second on a Taylor expansion of each unknown conditional correlation. These new tests can be seen as general misspecification tests of a large set of multivariate GARCH-type models. We investigate the size and the power of these tests through Monte Carlo experiments. Moreover, we study their robustness to non-normality by simulating some models such as the GARCH-t and Beta-t-EGARCH models. We give some illustrative empirical examples based on financial data. 
Keywords:  multivariate GARCH, neural network, Taylor expansion 
JEL:  C22 C45 C58 
Date:  2015–03–10 
URL:  http://d.repec.org/n?u=RePEc:aim:wpaimx:1516&r=ecm 
By:  YAE IN BAEK (University of California, San Diego); Jin Seo Cho (Yonsei University) 
Abstract:  We develop a method of testing linearity using power transforms of regressors, allowing for stationary processes and time trends. The linear model is a simplifying hypothesis that derives from the power transform model in three different ways, each producing its own identification problem. We call this modeling difficulty the trifold identification problem and show that it may be overcome using a test based on the quasi-likelihood ratio (QLR) statistic. More specifically, the QLR statistic may be approximated under each identification problem and the separate null approximations may be combined to produce a composite approximation that embodies the linear model hypothesis. The limit theory for the QLR test statistic depends on a Gaussian stochastic process. In the important special case of a linear time trend regressor and martingale difference errors, asymptotic critical values of the test are provided. Test power is analyzed and an empirical application to crop-yield distributions is provided. The paper also considers generalizations of the Box-Cox transformation, which are associated with the QLR test statistic. 
Keywords:  Box-Cox transform; Gaussian stochastic process; Neglected nonlinearity; Power transformation; Quasi-likelihood ratio test; Trend exponent; Trifold identification problem. 
JEL:  C12 C18 C46 C52 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2015rwp79&r=ecm 
By:  YAE IN BAEK (University of California, San Diego); Jin Seo Cho (Yonsei University); PETER C.B. PHILLIPS (Yale University, University of Auckland, Singapore Management University & University of Southampton) 
JEL:  C12 C18 C46 C52 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2015rwp79a&r=ecm 
By:  Prosper Donovon; Alastair R. Hall 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1505&r=ecm 
By:  Nelson Ramírez-Rondán (Central Bank of Peru) 
Abstract:  Threshold estimation methods are developed for dynamic panels with individual-specific fixed effects covering short time periods. Maximum likelihood estimation of the threshold and the slope parameters is proposed using first-difference transformations. The threshold estimate is shown to be consistent and to converge to a double-sided standard Brownian motion distribution as the number of individuals grows to infinity for a fixed time period; the slope estimates are consistent and asymptotically normally distributed. The method is applied to a sample of 72 countries and 8 periods of 5-year averages to determine the effect of the inflation rate on long-run economic growth. 
Keywords:  Threshold Models, Dynamic Panel Data, Maximum Likelihood Estimation, Inflation, Economic Growth 
JEL:  C13 C23 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:apc:wpaper:2015032&r=ecm 
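The core of threshold estimation in models like the one above is a grid search over candidate thresholds that minimizes the sum of squared residuals. A minimal cross-sectional sketch (the paper's dynamic-panel, first-difference ML setting is not implemented; all variable names and simulated values are assumptions):

```python
import numpy as np

def threshold_lse(y, x, q, grid):
    """Grid-search least-squares estimate of the threshold gamma."""
    best_ssr, best_gamma = np.inf, None
    for g in grid:
        lo = q <= g                      # regime indicator
        X = np.column_stack([x * lo, x * ~lo])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        ssr = np.sum((y - X @ beta) ** 2)
        if ssr < best_ssr:
            best_ssr, best_gamma = ssr, g
    return best_gamma

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
q = rng.uniform(0, 1, n)                 # threshold variable, true threshold at 0.5
y = np.where(q <= 0.5, 0.2 * x, 1.0 * x) + rng.normal(0, 0.1, n)

gamma_hat = threshold_lse(y, x, q, np.linspace(0.1, 0.9, 81))
```

Because the fit deteriorates sharply when observations are assigned to the wrong regime, the threshold estimate converges to the true value very quickly (the super-consistency behind the non-standard limit distribution in the abstract).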
By:  Tim Bollerslev (Duke University, NBER and CREATES); Andrew J. Patton (Duke University); Rogier Quaedvlieg (Maastricht University) 
Abstract:  We propose a new family of easy-to-implement realized-volatility-based forecasting models. The models exploit the asymptotic theory for high-frequency realized volatility estimation to improve the accuracy of the forecasts. By allowing the parameters of the models to vary explicitly with the (estimated) degree of measurement error, the models exhibit stronger persistence, and in turn generate more responsive forecasts, when the measurement error is relatively low. Implementing the new class of models for the S&P 500 equity index and the individual constituents of the Dow Jones Industrial Average, we document significant improvements in the accuracy of the resulting forecasts compared to the forecasts from some of the most popular existing models, which implicitly ignore the temporal variation in the magnitude of the realized volatility measurement errors. 
Keywords:  Realized volatility, Forecasting, Measurement Errors, HAR, HARQ 
JEL:  C22 C51 C53 C58 
Date:  2015–03–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201514&r=ecm 
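The HAR model named in the keywords above regresses next-day realized volatility on its lagged daily value and weekly (5-day) and monthly (22-day) averages. A minimal sketch of that baseline only; the HARQ extension, where coefficients vary with the estimated measurement error, is not implemented, and the simulated series is an assumption.

```python
import numpy as np

def har_design(rv):
    """Build the HAR regression: target rv[t] on lagged daily, weekly, monthly averages."""
    T = len(rv)
    rows, y = [], []
    for t in range(22, T):
        rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    return np.array(rows), np.array(y)

rng = np.random.default_rng(3)
rv = np.abs(rng.normal(1.0, 0.1, 300))   # placeholder realized-volatility series

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
forecast = X[-1] @ beta                  # one-step-ahead fitted value
```

On real data the three averaging horizons let the regression capture volatility's long-memory-like persistence with only four parameters.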
By:  Liusha Yang; Romain Couillet; Matthew R. McKay 
Abstract:  We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with a heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing the shrinkage intensity online. Our portfolio optimization method is shown via simulations to outperform existing methods for both synthetic and real market data. 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1503.08013&r=ecm 
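The shrinkage component of the estimator above can be illustrated with Ledoit-Wolf-style linear shrinkage of the sample covariance toward a scaled identity, plugged into a minimum-variance portfolio. A sketch only: the Tyler robust M-estimation step and the online optimization of the intensity are not implemented, and rho is fixed arbitrarily.

```python
import numpy as np

def shrink_cov(returns, rho):
    """Linear shrinkage of the sample covariance toward a scaled identity target."""
    S = np.cov(returns, rowvar=False)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)
    return (1 - rho) * S + rho * target

def min_var_weights(sigma):
    """Global minimum-variance portfolio weights for covariance sigma."""
    p = sigma.shape[0]
    w = np.linalg.solve(sigma, np.ones(p))
    return w / w.sum()

rng = np.random.default_rng(4)
R = rng.normal(0, 0.01, size=(60, 50))   # n close to p: sample covariance is ill-conditioned
w = min_var_weights(shrink_cov(R, rho=0.5))
```

Shrinkage pulls the extreme sample eigenvalues toward their average, keeping the inverted covariance, and hence the weights, well behaved when n is close to p.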
By:  Michael P. Clements (ICMA Centre, Henley Business School, University of Reading) 
Abstract:  Model-based estimates of future uncertainty are generally based on the in-sample fit of the model, as when Box-Jenkins prediction intervals are calculated. However, this approach will generate biased uncertainty estimates in real time when there are data revisions. A simple remedy is suggested, and used to generate more accurate prediction intervals for 25 macroeconomic variables, in line with the theory. A simulation study based on an empirically estimated model of data revisions for US output growth is used to investigate small-sample properties. 
Keywords:  in-sample uncertainty, out-of-sample uncertainty, real-time-vintage estimation 
JEL:  C53 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:rdg:icmadp:icmadp201502&r=ecm 
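For context, the standard Box-Jenkins prediction interval the abstract argues is biased under data revisions is built from the model's in-sample error variance. A minimal sketch for a mean-zero AR(1) with known parameters (the paper's remedy for revisions is not implemented; all numbers are illustrative):

```python
import math

def ar1_interval(y_T, phi, sigma, h, z=1.96):
    """h-step-ahead point forecast and 95% interval for a mean-zero AR(1)."""
    point = (phi ** h) * y_T
    # h-step forecast-error variance: sigma^2 * (1 + phi^2 + ... + phi^(2(h-1)))
    var = sigma ** 2 * (1 - phi ** (2 * h)) / (1 - phi ** 2)
    half = z * math.sqrt(var)
    return point - half, point + half

lo1, hi1 = ar1_interval(y_T=1.0, phi=0.8, sigma=1.0, h=1)
lo5, hi5 = ar1_interval(y_T=1.0, phi=0.8, sigma=1.0, h=5)
```

The interval widens with the horizon and is centered on the point forecast; the paper's point is that sigma estimated from revised in-sample data understates real-time uncertainty.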
By:  Nikolas Mittag 
Abstract:  Models with high-dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that cause bias are often invoked to make computation feasible, and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assumptions. It computes the exact OLS solution, including estimates of the fixed effects, and makes testing feasible even with multi-way clustered errors. An application using German linked employer-employee data illustrates the advantages: the data reject the assumptions of simpler estimators, and omitting match effects biases estimates, including the returns to experience and the gender wage gap. Specification tests detect both problems. Firm fixed effects, not match effects, are the main channel through which job transitions drive wage dynamics, which underlines the importance of firm heterogeneity for labor market dynamics. 
Keywords:  multi-way fixed effects, linked employer-employee data, matching, wage dynamics 
JEL:  J31 J63 C23 C63 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:cer:papers:wp532&r=ecm 
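At toy scale, the exact OLS solution of a two-way fixed effects model can be computed by simply including worker and firm dummies as regressors. This is only a sketch of the estimand, not the paper's method: real linked employer-employee data require sparse solvers, match effects are omitted here, and the simulated panel is an assumption.

```python
import numpy as np

# tiny worker-firm panel: wage = 2*x + worker effect + firm effect + noise
rng = np.random.default_rng(7)
n_workers, n_firms, n_obs = 30, 10, 300
worker = rng.integers(0, n_workers, n_obs)
firm = rng.integers(0, n_firms, n_obs)
x = rng.normal(size=n_obs)
w_eff = rng.normal(size=n_workers)
f_eff = rng.normal(size=n_firms)
y = 2.0 * x + w_eff[worker] + f_eff[firm] + rng.normal(0, 0.1, n_obs)

D_w = np.eye(n_workers)[worker]          # worker dummies
D_f = np.eye(n_firms)[firm][:, 1:]       # firm dummies, one dropped for identification
X = np.column_stack([x, D_w, D_f])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta[0] is the slope on x
```

With dense dummies this is exact OLS; the computational challenge the paper addresses is doing the same with millions of workers and firms.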
By:  Jesus Crespo Cuaresma (Department of Economics, Vienna University of Economics and Business); Bettina Grün (Department of Applied Statistics, Johannes Kepler University Linz); Paul Hofmarcher (Department of Economics, Vienna University of Economics and Business); Stefan Humer (Department of Economics, Vienna University of Economics and Business); Mathias Moser (Department of Economics, Vienna University of Economics and Business) 
Abstract:  Posterior analysis in Bayesian model averaging (BMA) applications often includes the assessment of measures of jointness (joint inclusion) across covariates. We link the discussion of jointness measures in the econometric literature to the literature on association rules in data mining exercises. We analyze a group of alternative jointness measures that include those proposed in the BMA literature and several others put forward in the field of data mining. The way these measures address the joint exclusion of covariates appears particularly important in terms of the conclusions that can be drawn from them. Using a dataset of economic growth determinants, we assess how the measurement of jointness in BMA can affect inference about the structure of bivariate inclusion patterns across covariates. 
Keywords:  Bayesian Model Averaging, Jointness, Robust Growth Determinants, Machine Learning, Association Rules 
JEL:  C11 O40 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp193&r=ecm 
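The link the abstract draws to association rules in data mining can be illustrated by computing the "lift" measure, joint inclusion frequency relative to what independence would imply, from a matrix of posterior inclusion indicators. A sketch under assumed independent draws; this is one association-rule measure, not a specific jointness statistic endorsed by the paper.

```python
import numpy as np

def jointness_lift(inclusion, i, j):
    """Lift: joint inclusion frequency of covariates i and j relative to independence."""
    pi = inclusion[:, i].mean()
    pj = inclusion[:, j].mean()
    pij = (inclusion[:, i] & inclusion[:, j]).mean()
    return pij / (pi * pj)

rng = np.random.default_rng(5)
# stand-in for BMA output: rows are model draws, columns are covariate inclusion flags
draws = rng.random((1000, 3)) < 0.5      # independent inclusion, probability 0.5 each

lift = jointness_lift(draws, 0, 1)       # near 1 under independence
```

Values above 1 indicate complementary covariates (included together more often than chance); values below 1 indicate substitutes.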
By:  Alastair R. Hall; Denise R. Osborn; Nikolaos Sakkas 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1504&r=ecm 
By:  Fedor Iskhakov (University of New South Wales); John Rust (Georgetown University); Bertel Schjerning (University of Copenhagen) 
Abstract:  We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive approximations. We redo their comparison using the more efficient version of NFXP proposed by Rust (1987), which combines successive approximations and Newton-Kantorovich iterations to solve the fixed point problem (NFXP-NK). We show that MPEC and NFXP-NK are similar in performance when the sample size is relatively small. However, in problems with larger sample sizes, NFXP-NK outperforms MPEC by a significant margin. 
Keywords:  Structural estimation, dynamic discrete choice, NFXP, MPEC, successive approximations, Newton-Kantorovich algorithm. 
Date:  2015–03–23 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:1505&r=ecm 
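The speed difference between successive approximations and Newton-type iterations that drives the NFXP-NK design can be seen on a scalar contraction mapping standing in for the Bellman operator (an illustrative toy, not the dynamic discrete choice fixed point of the paper):

```python
import math

def gamma(v, beta=0.95):
    """A scalar contraction mapping standing in for the Bellman operator."""
    return 1.0 + beta * math.log(1.0 + v)

def successive_approx(v=0.0, tol=1e-10, beta=0.95):
    """Linear convergence: iterate v <- Gamma(v) until the update is tiny."""
    its = 0
    while True:
        v_new = gamma(v, beta)
        its += 1
        if abs(v_new - v) < tol:
            return v_new, its
        v = v_new

def newton(v=0.0, tol=1e-10, beta=0.95):
    """Quadratic convergence: Newton steps on f(v) = v - Gamma(v)."""
    its = 0
    while True:
        f = v - gamma(v, beta)
        fp = 1.0 - beta / (1.0 + v)      # derivative of f
        v_new = v - f / fp
        its += 1
        if abs(v_new - v) < tol:
            return v_new, its
        v = v_new

v_sa, n_sa = successive_approx()
v_nk, n_nk = newton()
```

Both reach the same fixed point, but Newton needs far fewer iterations; NFXP-NK uses cheap successive approximations to get near the solution, then switches to Newton-Kantorovich steps to finish fast.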