
New Economics Papers on Econometrics 
By:  Stypka, Oliver (Faculty of Statistics, Technical University Dortmund); Wagner, Martin (Faculty of Statistics, Technical University Dortmund, Institute for Advanced Studies, Vienna and Bank of Slovenia, Ljubljana); Grabarczyk, Peter (Faculty of Statistics, Technical University Dortmund); Kawka, Rafael (Faculty of Statistics, Technical University Dortmund) 
Abstract:  The paper considers estimation and inference in cointegrating polynomial regressions, i.e., regressions that include deterministic variables, integrated processes and their powers as explanatory variables. The stationary errors are allowed to be serially correlated and the regressors are allowed to be endogenous. The main result shows that estimating such relationships with the Phillips and Hansen (1990) fully modified OLS approach developed for linear cointegrating relationships, i.e., by (incorrectly) treating all integrated regressors and their powers as integrated regressors, leads to the same limiting distribution as the Wagner and Hong (2016) fully modified type estimator developed for cointegrating polynomial regressions. A key ingredient for the main result is a set of novel limit results for kernel-weighted sums of properly scaled nonstationary processes involving scaled powers of integrated processes. Even though the simulation results indicate performance advantages of the Wagner and Hong (2016) estimator that persist in part even in large samples, the results of the paper drastically enlarge the usability of the Phillips and Hansen (1990) estimator as implemented in many software packages. 
Keywords:  Cointegrating Polynomial Regression, Cointegration Test, Environmental Kuznets Curve, Fully Modified OLS Estimation, Integrated Process, Nonlinearity 
JEL:  C13 C32 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:333&r=ecm 
By:  Hacioglu Hoke, Sinem (Bank of England); Kapetanios, George (King's College London) 
Abstract:  This paper provides an approach to estimation and inference for nonlinear conditional mean panel data models in the presence of cross-sectional dependence. We modify the common correlated effects (CCE) correction of Pesaran (2006) to filter out the interactive unobserved multifactor structure. The estimation can be carried out using nonlinear least squares, by augmenting the set of explanatory variables with cross-sectional averages of both linear and nonlinear terms. We propose pooled and mean group estimators, derive their asymptotic distributions, and show the consistency and asymptotic normality of the estimated coefficients of the model. The features of the proposed estimators are investigated through extensive Monte Carlo experiments. We apply our method to estimate UK banks' wholesale funding costs and to explore the nonlinear relationship between public debt and output growth. 
Keywords:  Nonlinear panel data model; cross-sectional dependence; common correlated effects estimator 
JEL:  C31 C33 C51 
Date:  2017–10–16 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0683&r=ecm 
By:  Ziping Zhao; Daniel P. Palomar 
Abstract:  In econometrics and finance, the vector error correction model (VECM) is an important time series model for cointegration analysis, used to estimate long-run equilibrium relationships among variables. Traditional analysis and estimation methodologies assume an underlying Gaussian distribution but, in practice, heavy-tailed data and outliers can render these methods inapplicable. In this paper, we propose a robust model estimation method based on the Cauchy distribution to tackle this issue. In addition, sparse cointegration relations are considered to achieve feature selection and dimension reduction. An efficient algorithm based on the majorization-minimization (MM) method is applied to solve the proposed nonconvex problem. The performance of this algorithm is shown through numerical simulations. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.05513&r=ecm 
By:  Riccardo Scarpa (University of Waikato); Cristiano Franceschinis (University of Padova); Mara Thiene (University of Padova) 
Abstract:  The recently introduced logit-mixed logit (LML) model (Train 2016) is a key advancement in choice modelling: it generalizes many previous parametric and semi-nonparametric methods to represent taste heterogeneity for bundled non-market goods and services. We report results from Monte Carlo experiments designed to assess performance across workable sample sizes and to retrieve data-driven random coefficient distributions in the three variants of the LML model proposed in the seminal paper. Assuming a multimodal data generating process, with a panel of four and eight choices per respondent, we compare the performance of WTP-space LML models with conventional parametric model specifications based on the mixed logit model with normally distributed coefficients (MXLN) in preference and WTP space. Results are encouraging and support the adoption of flexible LML specifications with a high number of parameters, as they seem to do better, but only at large enough sample sizes. To explore the saliency of the Monte Carlo results in an empirical application, we use data obtained from a discrete choice experiment to derive preferences for tap water quality in the province of Vicenza (northern Italy). LML models retrieve multimodal and asymmetric distributions of marginal WTPs for water quality attributes. Results show not only how the shape of such distributions varies across tap water attributes, but also the importance of being able to uncover them, considering that they would remain hidden when using the MXLN. 
Keywords:  logit-mixed logit; flexible taste distributions; panel random utility models 
Date:  2017–10–24 
URL:  http://d.repec.org/n?u=RePEc:wai:econwp:17/23&r=ecm 
By:  Xu Guo (School of Statistics, Beijing Normal University, Beijing); Gao-Rong Li (Beijing Institute for Scientific and Engineering Computing, Beijing University of Technology, Beijing); Wing-Keung Wong (Department of Finance and Big Data Research Center, Asia University; Department of Economics and Finance, Hang Seng Management College; Department of Economics, Lingnan University); Michael McAleer (Department of Quantitative Finance, National Tsing Hua University, Taiwan; Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands; Department of Quantitative Economics, Complutense University of Madrid, Spain; and Institute of Advanced Sciences, Yokohama National University, Japan) 
Abstract:  Parametric production frontier functions are frequently used in stochastic frontier models, but there do not seem to be any empirical test statistics for their plausibility. To bridge this gap in the literature, we develop two test statistics based on local smoothing and an empirical process, respectively. Residual-based wild bootstrap versions of these two test statistics are also suggested. The distributions of technical inefficiency and the noise term are not specified, which allows specification testing of the production frontier function even under heteroscedasticity. Simulation studies and a real data example are presented to examine the finite-sample sizes and powers of the test statistics. The theory developed in this paper is useful for production managers in their decisions on production. 
Keywords:  Production frontier function; Stochastic frontier model; Specification testing; Wild bootstrap; Smoothing process; Empirical process; Simulations. 
JEL:  C0 C13 C14 D81 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:ucm:doicae:1723&r=ecm 
By:  Michael B. Giles; Francisco Bernal 
Abstract:  This paper proposes and analyses a new multilevel Monte Carlo method for the estimation of mean exit times for multidimensional Brownian diffusions, and associated functionals which correspond to solutions of high-dimensional parabolic PDEs through the Feynman-Kac formula. In particular, it is proved that the complexity to achieve a root-mean-square error of $\varepsilon$ is $O(\varepsilon^{-2}\,|\log \varepsilon|^{3})$. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.07492&r=ecm 
By:  Anastasia Kratsios; Cody B. Hyndman 
Abstract:  We develop a method for incorporating relevant non-Euclidean geometric information into a broad range of classical filtering and statistical or machine learning algorithms. We apply these techniques to approximate the solution of the non-Euclidean filtering problem to arbitrary precision. We then extend the particle filtering algorithm to compute our asymptotic solution to arbitrary precision. Moreover, we find explicit error bounds measuring the discrepancy between our locally triangulated filter and the true theoretical non-Euclidean filter. Our methods are motivated by certain fundamental problems in mathematical finance. In particular we apply these filtering techniques to incorporate the non-Euclidean geometry present in stochastic volatility models and optimal Markowitz portfolios. We also extend Euclidean statistical or machine learning algorithms to non-Euclidean problems by using the local triangulation technique, which we show improves the accuracy of the original algorithm. We apply the local triangulation method to obtain improvements of the (sparse) principal component analysis and the principal geodesic analysis algorithms and show how these improved algorithms can be used to parsimoniously estimate the evolution of the shape of forward-rate curves. While focused on financial applications, the non-Euclidean geometric techniques presented in this paper can be employed to provide improvements to a range of other statistical or machine learning algorithms and may be useful in other areas of application. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.05829&r=ecm 
By:  Kajal Lahiri; Zhongwen Liang; Huaming Peng 
Abstract:  This paper investigates the asymptotic local power of the averaged t-test of Im, Pesaran and Shin (2003, IPS hereafter) in the presence of both initial explosive conditions and incidental trends. Utilizing least squares detrending methods, it is found that the initial condition plays no role in determining the asymptotic local power of the IPS test, a result strikingly different from the finding in Harris et al. (2010), who examined the impact of initial conditions on the local power of the IPS test without incidental trends. The paper also presents, via an application of the Fredholm method discussed in Nabeya and Tanaka (1990a, 1990b), the exact asymptotic local power of the IPS test, thereby providing theoretical justification for its lack of asymptotic local power in neighborhoods of unity shrinking at the rate N^{-1/2}T^{-1}, while attaining nontrivial power in neighborhoods of unity that shrink at the rate N^{-1/4}T^{-1}. This latter finding is consistent with Moon et al. (2007) and extends their results to the IPS test. It is also of practical significance to empirical researchers, as the presence of incidental trends in panel unit root testing is ubiquitous. 
Keywords:  panel data, unit root test, individual heterogeneity 
JEL:  C13 C22 C23 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_6313&r=ecm 
By:  Benjamin David 
Abstract:  The aim of this paper is to highlight the advantages of algorithmic methods for quantitatively oriented economic research. We describe four typical problems involved in econometric modeling, namely the choice of explanatory variables, of a functional form, of a probability distribution, and the inclusion of interactions in a model. We detail how these problems can be solved by using CART and Random Forest algorithms in a context of massively increasing data availability. We base our analysis on two examples, the identification of growth drivers and the prediction of growth cycles. More generally, we also discuss the fields of application of these methods, which come from a machine-learning framework, by underlining their potential for economic applications. 
Keywords:  decision trees, CART, Random Forest 
JEL:  C4 C18 C38 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:drm:wpaper:201746&r=ecm 
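As a rough illustration of the variable-selection mechanism behind CART (and, by extension, Random Forest), the following stdlib-only Python sketch picks the explanatory variable and split threshold that most reduce Gini impurity. The data and variable roles are purely hypothetical and are not taken from the paper.

```python
# Minimal sketch of the CART splitting rule that drives variable selection:
# for each candidate variable and threshold, compute the weighted Gini
# impurity of the resulting partition and keep the split that reduces
# impurity most. Toy data only; not the paper's implementation.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(X, y):
    """Return (variable index, threshold) minimising weighted Gini impurity."""
    n = len(y)
    best = (None, None, float("inf"))
    for j in range(len(X[0])):                     # each candidate variable
        for t in sorted({row[j] for row in X}):    # each candidate threshold
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best[0], best[1]

# Toy data: two candidate "growth drivers"; only the first separates the classes.
X = [[0.1, 5.0], [0.2, 4.0], [0.8, 5.5], [0.9, 4.5]]
y = [0, 0, 1, 1]
print(best_split(X, y))  # -> (0, 0.2): the first variable is selected
```

A Random Forest repeats this greedy split search on bootstrap samples with random subsets of candidate variables, which is what makes it usable for screening large sets of potential explanatory variables.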
By:  Olivier Sterck 
Abstract:  It is frequent to hear in economic seminars, or to read in academic papers, that an effect is economically significant or economically important. Yet the economic literature is vague on what economic importance means and how it should be measured. In this paper, I show that existing measures of economic importance are flawed and misused. Using an axiomatic approach, I derive a new method to assess the economic importance of each variable in linear regressions. The new measure is interpreted as the percentage contribution of each explanatory variable to deviations in the dependent variable. As an illustration, the method is applied to the study of the causes of long-run economic development. 
Keywords:  Regression; Economic importance; Effect size; Standardized beta coefficients; Long-run growth 
JEL:  B4 C18 O10 O47 Z13 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:csa:wpaper:2016312&r=ecm 
By:  Giuseppe De Luca (University of Palermo); Jan R. Magnus (Vrije Universiteit Amsterdam and Tinbergen Institute); Franco Peracchi (Georgetown University and EIEF) 
Abstract:  The weighted-average least squares (WALS) approach, introduced by Magnus et al. (2010) in the context of Gaussian linear models, has been shown to enjoy important advantages over other strictly Bayesian and strictly frequentist model-averaging estimators when accounting for uncertainty in the choice of regressors. In this paper we extend the WALS approach to deal with uncertainty about the specification of the linear predictor in the wider class of generalized linear models (GLMs). We study the large-sample properties of the WALS estimator for GLMs under a local misspecification framework, and its finite-sample properties by means of a Monte Carlo experiment whose design is based on a real empirical analysis of attrition in the first two waves of the Survey of Health, Ageing and Retirement in Europe (SHARE). 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:eie:wpaper:1711&r=ecm 
By:  Guglielmo Maria Caporale; Luis A. GilAlana 
Abstract:  In this paper we propose a new modelling framework for the analysis of macro series that includes both stochastic trends and stochastic cycles, in addition to deterministic terms such as linear and nonlinear trends. We examine four US macro series, namely annual and quarterly real GDP and GDP per capita. The results indicate that the behaviour of US GDP can be captured accurately by a model incorporating both stochastic trends and stochastic cycles that allows for some degree of persistence in the data. Both components appear to be mean-reverting, although the stochastic trend is nonstationary whilst the cyclical component is stationary, with cycles repeating themselves every 6–10 years. 
Keywords:  GDP, GDP per capita, trends, cycles, long memory, fractional integration 
JEL:  C22 E32 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1695&r=ecm 
By:  Davy Paindaveine; Julien Remy; Thomas Verdebout 
Abstract:  We consider the problem of testing, on the basis of a p-variate Gaussian random sample, the null hypothesis H_0: \theta_1 = \theta_1^0 against the alternative H_1: \theta_1 \neq \theta_1^0, where \theta_1 is the "first" eigenvector of the underlying covariance matrix and \theta_1^0 is a fixed unit p-vector. In the classical setup where the eigenvalues \lambda_1 > \lambda_2 \geq \ldots \geq \lambda_p are fixed, the Anderson (1963) likelihood ratio test (LRT) and the Hallin, Paindaveine and Verdebout (2010) Le Cam optimal test for this problem are asymptotically equivalent under the null, hence also under sequences of contiguous alternatives. We show that this equivalence does not survive asymptotic scenarios where \lambda_{n1} - \lambda_{n2} = o(r_n) with r_n = O(1/\sqrt{n}). For such scenarios, the Le Cam optimal test still asymptotically meets the nominal level constraint, whereas the LRT becomes extremely liberal. Consequently, the former test should be favored over the latter whenever the two largest sample eigenvalues are close to each other. By relying on the Le Cam theory of asymptotic experiments, we study in the aforementioned asymptotic scenarios the non-null and optimality properties of the Le Cam optimal test and show that the null robustness of this test is not obtained at the expense of efficiency. Our asymptotic investigation is extensive in the sense that it allows r_n to converge to zero at an arbitrary rate. To make our results as striking as possible, we restrict not only to the multinormal case but also to single-spiked spectra of the form \lambda_{n1} > \lambda_{n2} = \ldots = \lambda_{np}. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/259598&r=ecm 
By:  Joshua C C Chan; Yong Song 
Abstract:  Inflation expectations play a key role in determining future economic outcomes. The associated uncertainty provides a direct gauge of how well-anchored inflation expectations are. We construct a model-based measure of inflation expectations uncertainty by augmenting a standard unobserved components model of inflation with information from noisy and possibly biased measures of inflation expectations obtained from financial markets. This new model-based measure of inflation expectations uncertainty is more accurately estimated and can provide valuable information for policymakers. Using US data, we find significant changes in inflation expectations uncertainty during the Great Recession. 
Keywords:  Trend inflation, inflation expectations, stochastic volatility 
JEL:  C11 C32 E31 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201761&r=ecm 