nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒10‒29
fourteen papers chosen by
Sune Karlsson
Örebro universitet

  1. The Asymptotic Validity of "Standard" Fully Modified OLS Estimation and Inference in Cointegrating Polynomial Regressions By Stypka, Oliver; Wagner, Martin; Grabarczyk, Peter; Kawka, Rafael
  2. Common correlated effect cross-sectional dependence corrections for non-linear conditional mean panel models By Hacioglu Hoke, Sinem; Kapetanios, George
  3. Robust Maximum Likelihood Estimation of Sparse Vector Error Correction Model By Ziping Zhao; Daniel P. Palomar
  4. A Monte Carlo Evaluation of the Logit-Mixed Logit under Asymmetry and Multimodality By Riccardo Scarpa; Cristiano Franceschinis; Mara Thiene
  5. Specification Testing of Production in a Stochastic Frontier Model By Xu Guo; Gao-Rong Li; Wing-Keung Wong; Michael McAleer
  6. Multilevel estimation of expected exit times and other functionals of stopped diffusions By Michael B. Giles; Francisco Bernal
  7. Geometric Learning and Filtering in Finance By Anastasia Kratsios; Cody B. Hyndman
  8. The Local Power of the IPS Test with Both Initial Conditions and Incidental Trends By Kajal Lahiri; Zhongwen Liang; Huaming Peng
  9. Model economic phenomena with CART and Random Forest algorithms By Benjamin David
  10. Beyond the Stars By Olivier Sterck
  11. Weighted-average least squares estimation of generalized linear models By Giuseppe De Luca; Jan R. Magnus; Franco Peracchi
  12. Trends and Cycles in Macro Series: The Case of US Real GDP By Guglielmo Maria Caporale; Luis A. Gil-Alana
  13. Testing for Principal Component Directions under Weak Identifiability By Davy Paindaveine; Julien Remy; Thomas Verdebout
  14. Measuring inflation expectations uncertainty using high-frequency data By Joshua C C Chan; Yong Song

  1. By: Stypka, Oliver (Faculty of Statistics, Technical University Dortmund); Wagner, Martin (Faculty of Statistics, Technical University Dortmund, Institute for Advanced Studies, Vienna and Bank of Slovenia, Ljubljana); Grabarczyk, Peter (Faculty of Statistics, Technical University Dortmund); Kawka, Rafael (Faculty of Statistics, Technical University Dortmund)
    Abstract: The paper considers estimation and inference in cointegrating polynomial regressions, i.e., regressions that include deterministic variables, integrated processes and their powers as explanatory variables. The stationary errors are allowed to be serially correlated and the regressors are allowed to be endogenous. The main result shows that estimating such relationships using the Phillips and Hansen (1990) fully modified OLS approach developed for linear cointegrating relationships, incorrectly treating all integrated regressors and their powers as integrated regressors, leads to the same limiting distribution as the Wagner and Hong (2016) fully modified type estimator developed for cointegrating polynomial regressions. Key ingredients for the main result are novel limit results for kernel weighted sums of properly scaled nonstationary processes involving scaled powers of integrated processes. Even though the simulation results indicate performance advantages of the Wagner and Hong (2016) estimator that are partly present even in large samples, the results of the paper drastically enlarge the usability of the Phillips and Hansen (1990) estimator as implemented in many software packages.
    Keywords: Cointegrating Polynomial Regression, Cointegration Test, Environmental Kuznets Curve, Fully Modified OLS Estimation, Integrated Process, Nonlinearity
    JEL: C13 C32
    Date: 2017–10
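    As a rough numerical illustration of the setting (not the paper's FM-OLS corrections, which additionally require kernel-based long-run covariance estimation), the sketch below simulates a cointegrating polynomial regression with a serially correlated error and estimates it by plain OLS on the integrated regressor and its square; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

# Integrated regressor: a pure random walk.
x = np.cumsum(rng.standard_normal(T))

# Cointegrating polynomial relationship y = 1 + 0.5*x + 0.1*x^2 + u,
# with a stationary AR(1) error term u (serial correlation allowed).
u = np.zeros(T)
eps = rng.standard_normal(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + eps[t]
y = 1.0 + 0.5 * x + 0.1 * x**2 + u

# OLS on the intercept, the integrated regressor and its square;
# the slope estimates are superconsistent in this setup.
X = np.column_stack([np.ones(T), x, x**2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    With exogenous regressors, as here, OLS already recovers the slopes very precisely; the fully modified corrections matter for inference and under endogeneity.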
  2. By: Hacioglu Hoke, Sinem (Bank of England); Kapetanios, George (King's College London)
    Abstract: This paper provides an approach to estimation and inference for non-linear conditional mean panel data models, in the presence of cross-sectional dependence. We modify the common correlated effects (CCE) correction of Pesaran (2006) to filter out the interactive unobserved multifactor structure. The estimation can be carried out using non-linear least squares, by augmenting the set of explanatory variables with cross-sectional averages of both linear and non-linear terms. We propose pooled and mean group estimators, derive their asymptotic distributions, and show the consistency and asymptotic normality of the coefficients of the model. The features of the proposed estimators are investigated through extensive Monte Carlo experiments. We apply our method to estimate UK banks’ wholesale funding costs and explore the non-linear relationship between public debt and output growth.
    Keywords: Non-linear panel data model; cross-sectional dependence; common correlated effects estimator
    JEL: C31 C33 C51
    Date: 2017–10–16
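    The paper treats non-linear conditional mean models; as a deliberately stripped-down linear sketch of the CCE idea (hypothetical data, homogeneous slope, no unit-specific coefficients on the averages), the code below shows how augmenting a pooled regression with cross-sectional averages filters out an unobserved common factor.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, beta = 100, 100, 1.0

f = rng.standard_normal(T)                    # unobserved common factor
gam = 1.0 + 0.2 * rng.standard_normal(N)      # factor loadings in y
lam = 1.0 + 0.2 * rng.standard_normal(N)      # factor loadings in x

x = lam[:, None] * f[None, :] + rng.standard_normal((N, T))
y = beta * x + gam[:, None] * f[None, :] + rng.standard_normal((N, T))

# Naive pooled OLS ignores the factor structure and is biased,
# because x is correlated with the omitted term gam_i * f_t.
b_ols = (x * y).sum() / (x * x).sum()

# CCE-style correction: augment the regression with cross-sectional
# averages of y and x, which proxy for the unobserved factor.
ybar, xbar = y.mean(axis=0), x.mean(axis=0)
Z = np.column_stack([x.ravel(), np.tile(ybar, N), np.tile(xbar, N)])
coef, *_ = np.linalg.lstsq(Z, y.ravel(), rcond=None)
b_cce = coef[0]
```

    The augmented estimate is close to the true slope of 1, while the naive pooled estimate carries a sizeable omitted-factor bias.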
  3. By: Ziping Zhao; Daniel P. Palomar
    Abstract: In econometrics and finance, the vector error correction model (VECM) is an important time series model for cointegration analysis, which is used to estimate the long-run equilibrium variable relationships. The traditional analysis and estimation methodologies assume the underlying Gaussian distribution but, in practice, heavy-tailed data and outliers can lead to the inapplicability of these methods. In this paper, we propose a robust model estimation method based on the Cauchy distribution to tackle this issue. In addition, sparse cointegration relations are considered to realize feature selection and dimension reduction. An efficient algorithm based on the majorization-minimization (MM) method is applied to solve the proposed nonconvex problem. The performance of this algorithm is shown through numerical simulations.
    Date: 2017–10
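    The paper's estimator targets a full sparse VECM; as a toy illustration of the majorization-minimization idea with a Cauchy-type heavy-tailed loss, the sketch below fits a simple robust linear regression by iteratively reweighted least squares. All names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.standard_normal(n)
y = 2.0 * x + 0.3 * rng.standard_normal(n)
y[:10] += 15.0            # gross outliers that distort Gaussian-based OLS

X = np.column_stack([np.ones(n), x])

# Gaussian ML (= OLS) for reference.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Cauchy-loss fit, minimizing sum(log(1 + r^2)) by MM: each iteration
# solves a weighted least-squares problem whose weights 1/(1 + r^2)
# downweight observations with large residuals.
b = b_ols.copy()
for _ in range(50):
    r = y - X @ b
    w = np.sqrt(1.0 / (1.0 + r**2))   # square root of the MM weights
    b, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
```

    Each weighted step minimizes a quadratic majorizer of the Cauchy loss at the current iterate, so the objective decreases monotonically, which is the defining property of MM schemes.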
  4. By: Riccardo Scarpa (University of Waikato); Cristiano Franceschinis (University of Padova); Mara Thiene (University of Padova)
    Abstract: The recently introduced (Train 2016) logit-mixed logit (LML) model is a key advancement in choice modelling: it generalizes many previous parametric and semi-nonparametric methods to represent taste heterogeneity for bundled nonmarket goods and services. We report results from Monte Carlo experiments designed to assess performance across workable sample sizes and to retrieve data-driven random coefficients distributions in the three variants of the LML model proposed in the seminal paper. Assuming a multi-modal data generating process, with a panel of four and eight choices per respondent, we compare the performance of WTP-space LML models with conventional parametric model specifications based on the mixed logit model with normals (MXL-N) in preference and WTP space. Results are encouraging and support the adoption of flexible LML specifications with a high number of parameters, as these seem to perform better, but only at large enough sample sizes. To explore the saliency of the Monte Carlo results in an empirical application, we use data obtained from a discrete choice experiment to derive preferences for tap water quality in the province of Vicenza (northern Italy). LML models retrieve multimodal and asymmetric distributions of marginal WTPs for water quality attributes. Results show not only how the shape of such distributions varies across tap water attributes, but also the importance of being able to uncover them, considering that they would remain hidden when using the MXL-N.
    Keywords: logit-mixed logit; flexible taste distributions; panel random utility models
    Date: 2017–10–24
  5. By: Xu Guo (School of Statistics, Beijing Normal University, Beijing.); Gao-Rong Li (Beijing Institute for Scientific and Engineering Computing, Beijing University of Technology, Beijing.); Wing-Keung Wong (Department of Finance and Big Data Research Center, Asia University Department of Economics and Finance, Hang Seng Management College Department of Economics, Lingnan University.); Michael McAleer (Department of Quantitative Finance National Tsing Hua University, Taiwan and Econometric Institute Erasmus School of Economics Erasmus University Rotterdam, The Netherlands and Department of Quantitative Economics Complutense University of Madrid, Spain And Institute of Advanced Sciences Yokohama National University, Japan.)
    Abstract: Parametric production frontier functions are frequently used in stochastic frontier models, but there do not seem to be any empirical test statistics for their plausibility. To bridge the gap in the literature, we develop two test statistics based on local smoothing and an empirical process, respectively. Residual-based wild bootstrap versions of these two test statistics are also suggested. The distributions of technical inefficiency and the noise term are not specified, which allows specification testing of the production frontier function even under heteroscedasticity. Simulation studies and a real data example are presented to examine the finite sample sizes and powers of the test statistics. The theory developed in this paper is useful for production managers in their decisions on production.
    Keywords: Production frontier function; Stochastic frontier model; Specification testing; Wild bootstrap; Smoothing process; Empirical process; Simulations.
    JEL: C0 C13 C14 D81
    Date: 2017–10
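    The general mechanics of a residual-based wild bootstrap (not the paper's specific frontier test statistics) can be sketched in a few lines: regenerate the outcome under the null with Rademacher-weighted residuals, which preserves heteroscedasticity, and compare the observed statistic with its bootstrap distribution. The statistic used here, a crude functional-form check, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 200, 499

x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n) * (0.5 + 0.5 * x)  # heteroscedastic

# Null model: linear in x.  Illustrative statistic: |correlation| between
# the null residuals and a left-out nonlinear direction (x^2).
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
fit, res = X @ b, y - X @ b
stat = abs(np.corrcoef(res, x**2)[0, 1])

# Residual-based wild bootstrap with Rademacher multipliers.
count = 0
for _ in range(B):
    v = rng.choice([-1.0, 1.0], size=n)
    yb = fit + res * v                       # data regenerated under the null
    bb, *_ = np.linalg.lstsq(X, yb, rcond=None)
    rb = yb - X @ bb
    count += abs(np.corrcoef(rb, x**2)[0, 1]) >= stat
pval = (count + 1) / (B + 1)
```

    Because the multipliers only flip residual signs, each bootstrap sample inherits the conditional variance pattern of the original errors, which is what makes the wild bootstrap valid under heteroscedasticity.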
  6. By: Michael B. Giles; Francisco Bernal
    Abstract: This paper proposes and analyses a new multilevel Monte Carlo method for the estimation of mean exit times for multi-dimensional Brownian diffusions, and associated functionals which correspond to solutions to high-dimensional parabolic PDEs through the Feynman-Kac formula. In particular, it is proved that the complexity to achieve an $\varepsilon$ root-mean-square error is $O(\varepsilon^{-2}\, |\!\log \varepsilon|^3)$.
    Date: 2017–10
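    For context, a crude single-level Euler Monte Carlo estimate of a mean exit time looks as follows; the paper's multilevel construction reduces the cost of driving the combined discretization and sampling errors down to a target $\varepsilon$. The example is a standard Brownian motion exiting $(-1,1)$ from $0$, for which the optional stopping theorem gives an exact mean exit time of 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, dt = 2000, 1e-3
sqdt = np.sqrt(dt)

# Simulate all paths in lockstep; record the first step at which each
# path leaves (-1, 1).  Discrete monitoring slightly overestimates the
# exit time, a bias that shrinks with dt.
w = np.zeros(n_paths)
taus = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
step = 0
while alive.any():
    step += 1
    w[alive] += sqdt * rng.standard_normal(alive.sum())
    exited = alive & (np.abs(w) >= 1.0)
    taus[exited] = step * dt
    alive &= ~exited
est = taus.mean()
```

    A multilevel scheme replaces this single fine discretization with a telescoping sum of coupled coarse/fine estimators, which is where the $O(\varepsilon^{-2}|\log\varepsilon|^3)$ complexity in the abstract comes from.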
  7. By: Anastasia Kratsios; Cody B. Hyndman
    Abstract: We develop a method for incorporating relevant non-Euclidean geometric information into a broad range of classical filtering and statistical or machine learning algorithms. We apply these techniques to approximate the solution of the non-Euclidean filtering problem to arbitrary precision. We then extend the particle filtering algorithm to compute our asymptotic solution to arbitrary precision. Moreover, we find explicit error bounds measuring the discrepancy between our locally triangulated filter and the true theoretical non-Euclidean filter. Our methods are motivated by certain fundamental problems in mathematical finance. In particular we apply these filtering techniques to incorporate the non-Euclidean geometry present in stochastic volatility models and optimal Markowitz portfolios. We also extend Euclidean statistical or machine learning algorithms to non-Euclidean problems by using the local triangulation technique, which we show improves the accuracy of the original algorithm. We apply the local triangulation method to obtain improvements of the (sparse) principal component analysis and the principal geodesic analysis algorithms and show how these improved algorithms can be used to parsimoniously estimate the evolution of the shape of forward-rate curves. While focused on financial applications, the non-Euclidean geometric techniques presented in this paper can be employed to provide improvements to a range of other statistical or machine learning algorithms and may be useful in other areas of application.
    Date: 2017–10
  8. By: Kajal Lahiri; Zhongwen Liang; Huaming Peng
    Abstract: This paper investigates the asymptotic local power of the averaged t-test of Im, Pesaran and Shin (2003, IPS hereafter) in the presence of both explosive initial conditions and incidental trends. By utilizing least squares detrending methods, it is found that the initial condition plays no role in determining the asymptotic local power of the IPS test, a result strikingly different from the finding in Harris et al. (2010), who examined the impact of the initial conditions on the local power of the IPS test without incidental trends. The paper also presents, via an application of the Fredholm method discussed in Nabeya and Tanaka (1990a, 1990b), the exact asymptotic local power of the IPS test, thereby providing theoretical justification for its lack of asymptotic local power in neighborhoods of unity shrinking at the rate N^{-1/2}T^{-1}, while it attains nontrivial power in neighborhoods of unity shrinking at the rate N^{-1/4}T^{-1}. This latter finding is consistent with Moon et al. (2007) and extends their results to the IPS test. It is also of practical significance to empirical researchers, as the presence of incidental trends in panel unit root testing is ubiquitous.
    Keywords: panel data, unit root test, individual heterogeneity
    JEL: C13 C22 C23
    Date: 2017
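    The IPS statistic itself is simple to compute: run a Dickey-Fuller regression on each cross-sectional unit and average the resulting t-statistics. The sketch below does this for a simulated panel of independent unit-root processes (the null); it uses the intercept-only case rather than the incidental-trends case studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 20, 200

def df_tstat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = c + rho * y_{t-1} + e_t (intercept, no trend, no lags)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ b
    s2 = e @ e / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se

# Panel of N independent random walks, i.e. the unit-root null holds
# in every unit; t_bar is the averaged statistic of the IPS test.
panel = np.cumsum(rng.standard_normal((N, T)), axis=1)
t_bar = np.mean([df_tstat(panel[i]) for i in range(N)])
```

    In practice t_bar is standardized with tabulated moments before being compared with normal critical values; the averaging over N units is what generates the N-dependent local power rates discussed in the abstract.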
  9. By: Benjamin David
    Abstract: The aim of this paper is to highlight the advantages of algorithmic methods for economic research with quantitative orientation. We describe four typical problems involved in econometric modeling, namely the choice of explanatory variables, a functional form, a probability distribution and the inclusion of interactions in a model. We detail how those problems can be solved by using "CART" and "Random Forest" algorithms in a context of massive increasing data availability. We base our analysis on two examples, the identification of growth drivers and the prediction of growth cycles. More generally, we also discuss the application fields of these methods that come from a machine-learning framework by underlining their potential for economic applications.
    Keywords: decision trees, CART, Random Forest
    JEL: C4 C18 C38
    Date: 2017
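    To give a concrete flavour of the splitting rule at the heart of CART (a toy reimplementation on hypothetical data, not the paper's applications), the code below exhaustively searches for the single threshold on one regressor that minimizes the within-node sum of squared errors; a regression tree applies this search recursively, and a Random Forest averages many such trees grown on resampled data.

```python
import numpy as np

def best_split(x, y):
    """CART-style search for the split threshold on x that minimizes
    the total within-node sum of squared errors of y."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_thr, best_sse = None, np.inf
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # no valid threshold between ties
        thr = 0.5 * (xs[i] + xs[i - 1])
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best_sse:
            best_thr, best_sse = thr, sse
    return best_thr

x = np.linspace(0.0, 1.0, 101)
y = (x > 0.5).astype(float)              # a clean step located at 0.5
thr = best_split(x, y)
```

    On this step function the search recovers the true breakpoint, which illustrates why trees need no prior choice of functional form.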
  10. By: Olivier Sterck
    Abstract: One frequently hears in economics seminars, or reads in academic papers, that an effect is economically significant or economically important. Yet the economic literature is vague on what economic importance means and how it should be measured. In this paper, I show that existing measures of economic importance are flawed and misused. Using an axiomatic approach, I derive a new method to assess the economic importance of each variable in linear regressions. The new measure is interpreted as the percentage contribution of each explanatory variable to deviations in the dependent variable. As an illustration, the method is applied to the study of the causes of long-run economic development.
    Keywords: Regression; Economic importance; Effect size; Standardized beta coefficients; Long-run growth
    JEL: B4 C18 O10 O47 Z13
    Date: 2017
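    For context, the conventional standardized ("beta") coefficients that the paper argues are flawed and misused take only a few lines to compute: each raw slope is rescaled by sd(x)/sd(y). The sketch below, on hypothetical data, shows the rescaling making two regressors of very different raw scales comparable; the paper's own axiomatic measure is different and not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x1 = rng.standard_normal(n)            # unit-variance regressor
x2 = 10.0 * rng.standard_normal(n)     # same explanatory role, 10x the scale
y = 1.0 * x1 + 0.1 * x2 + rng.standard_normal(n)

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standardized betas: raw slope times sd(x)/sd(y).  Despite raw slopes of
# roughly 1.0 and 0.1, both regressors contribute equally by this measure.
beta_std = b[1:] * X[:, 1:].std(axis=0) / y.std()
```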
  11. By: Giuseppe De Luca (University of Palermo); Jan R. Magnus (Vrije Universiteit Amsterdam and Tinbergen Institute); Franco Peracchi (Georgetown University and EIEF)
    Abstract: The weighted-average least squares (WALS) approach, introduced by Magnus et al. (2010) in the context of Gaussian linear models, has been shown to enjoy important advantages over other strictly Bayesian and strictly frequentist model-averaging estimators when accounting for problems of uncertainty in the choice of the regressors. In this paper we extend the WALS approach to deal with uncertainty about the specification of the linear predictor in the wider class of generalized linear models (GLMs). We study the large-sample properties of the WALS estimator for GLMs under a local misspecification framework, and the finite-sample properties of this estimator by a Monte Carlo experiment the design of which is based on a real empirical analysis of attrition in the first two waves of the Survey of Health, Ageing and Retirement in Europe (SHARE).
    Date: 2017
  12. By: Guglielmo Maria Caporale; Luis A. Gil-Alana
    Abstract: In this paper we propose a new modelling framework for the analysis of macro series that includes both stochastic trends and stochastic cycles in addition to deterministic terms such as linear and non-linear trends. We examine four US macro series, namely annual and quarterly real GDP and GDP per capita. The results indicate that the behaviour of US GDP can be captured accurately by a model incorporating both stochastic trends and stochastic cycles that allows for some degree of persistence in the data. Both components appear to be mean-reverting, although the stochastic trend is nonstationary whilst the cyclical component is stationary, with cycles repeating themselves every 6–10 years.
    Keywords: GDP, GDP per capita, trends, cycles, long memory, fractional integration
    JEL: C22 E32
    Date: 2017
  13. By: Davy Paindaveine; Julien Remy; Thomas Verdebout
    Abstract: We consider the problem of testing, on the basis of a p-variate Gaussian random sample, the null hypothesis H_0: \theta_1 = \theta_1^0 against the alternative H_1: \theta_1 \neq \theta_1^0, where \theta_1 is the "first" eigenvector of the underlying covariance matrix and \theta_1^0 is a fixed unit p-vector. In the classical setup where the eigenvalues \lambda_1 > \lambda_2 \geq \ldots \geq \lambda_p are fixed, the Anderson (1963) likelihood ratio test (LRT) and the Hallin, Paindaveine and Verdebout (2010) Le Cam optimal test for this problem are asymptotically equivalent under the null, hence also under sequences of contiguous alternatives. We show that this equivalence does not survive asymptotic scenarios where \lambda_{n1} - \lambda_{n2} = o(r_n) with r_n = O(1/\sqrt{n}). For such scenarios, the Le Cam optimal test still asymptotically meets the nominal level constraint, whereas the LRT becomes extremely liberal. Consequently, the former test should be favored over the latter whenever the two largest sample eigenvalues are close to each other. By relying on the Le Cam theory of asymptotic experiments, we study in the aforementioned asymptotic scenarios the non-null and optimality properties of the Le Cam optimal test and show that the null robustness of this test is not obtained at the expense of efficiency. Our asymptotic investigation is extensive in the sense that it allows r_n to converge to zero at an arbitrary rate. To make our results as striking as possible, we restrict not only to the multinormal case but also to single-spiked spectra of the form \lambda_{n1} > \lambda_{n2} = \ldots = \lambda_{np}.
    Date: 2017–10
  14. By: Joshua C C Chan; Yong Song
    Abstract: Inflation expectations play a key role in determining future economic outcomes. The associated uncertainty provides a direct gauge of how well-anchored the inflation expectations are. We construct a model-based measure of inflation expectations uncertainty by augmenting a standard unobserved components model of inflation with information from noisy and possibly biased measures of inflation expectations obtained from financial markets. This new model-based measure of inflation expectations uncertainty is more accurately estimated and can provide valuable information for policymakers. Using US data, we find significant changes in inflation expectations uncertainty during the Great Recession.
    Keywords: Trend inflation, inflation expectations, stochastic volatility
    JEL: C11 C32 E31
    Date: 2017–10

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.