nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒11‒02
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Roughness in spot variance? A GMM approach for estimation of fractional log-normal stochastic volatility models using realized measures By Anine E. Bolko; Kim Christensen; Mikko S. Pakkanen; Bezirgen Veliyev
  2. On GMM Inference: Partial Identification, Identification Strength, and Non-Standard Asymptotics By Don S. Poskitt
  3. Factor and factor loading augmented estimators for panel regression By Jad Beyhum; Eric Gautier
  4. A New Class of Robust Observation-Driven Models By Francisco Blasques; Christian Francq; Sébastien Laurent
  5. On discriminating between lognormal and Pareto tail: A mixture-based approach By Marco Bee
  6. Proxy SVAR identification of monetary policy shocks: Monte Carlo evidence and insights for the US By Herwartz, Helmut; Rohloff, Hannes; Wang, Shu
  7. On Causal Networks of Financial Firms: Structural Identification via Non-parametric Heteroskedasticity By Ruben Hipp
  8. A Practical Guide of Off-Policy Evaluation for Bandit Problems By Masahiro Kato; Kenshi Abe; Kaito Ariu; Shota Yasui
  9. Parsimonious Quantile Regression of Financial Asset Tail Dynamics via Sequential Learning By Xing Yan; Weizhong Zhang; Lin Ma; Wei Liu; Qi Wu
  10. Binary Choice with Asymmetric Loss in a Data-Rich Environment: Theory and an Application to Racial Justice By Andrii Babii; Xi Chen; Eric Ghysels; Rohit Kumar
  11. Low-Rank Approximations of Nonseparable Panel Models By Iván Fernández-Val; Hugo Freeman; Martin Weidner
  12. US shocks and the uncovered interest rate parity By Mengheng Li; Bowen Fu
  13. Time series models for epidemics: leading indicators, control groups and policy assessment By Andrew C. Harvey
  14. Density Forecasting with BVAR Models under Macroeconomic Data Uncertainty By Clements, Michael P.; Galvao, Ana Beatriz
  15. Data science in economics: comprehensive review of advanced machine learning and deep learning methods By Nosratabadi, Saeed; Mosavi, Amir; Duan, Puhong; Ghamisi, Pedram; Filip, Ferdinand; Band, Shahab S.; Reuter, Uwe; Gama, Joao; Gandomi, Amir H.

  1. By: Anine E. Bolko (Aarhus University and CREATES); Kim Christensen (Aarhus University and CREATES); Mikko S. Pakkanen (Imperial College London and CREATES); Bezirgen Veliyev (Aarhus University and CREATES)
    Abstract: In this paper, we develop a generalized method of moments approach for joint estimation of the parameters of a fractional log-normal stochastic volatility model. We show that an estimator based on integrated variance is consistent for an arbitrary Hurst exponent. Moreover, under stronger conditions we also derive a central limit theorem. These results hold even when integrated variance is replaced with a realized measure of volatility calculated from discrete high-frequency data. However, in practice a realized estimator contains sampling error, the effect of which is to skew the fractal coefficient toward "roughness". We construct an analytical approach to control this error. In a simulation study, we demonstrate convincing small sample properties of our approach based on both integrated and realized variance over the entire memory spectrum. We show that the bias correction attenuates any systematic deviance in the estimated parameters. Our procedure is applied to empirical high-frequency data from numerous leading equity indexes. With our robust approach the Hurst index is estimated around 0.05, confirming roughness in integrated variance.
    Keywords: GMM estimation, realized variance, rough volatility, stochastic volatility
    JEL: C10 C50
    Date: 2020–10–19
    URL: http://d.repec.org/n?u=RePEc:aah:create:2020-12&r=all
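    A minimal Python sketch of the moment-based idea: the Hurst exponent can be read off the scaling of second moments of log realized-variance increments. This is a common simplification, not the authors' full GMM system with bias correction:

      import numpy as np

      def estimate_hurst(log_rv, max_lag=20):
          # Match the scaling law E[(log RV_{t+h} - log RV_t)^2] ~ c * h^(2H):
          # regress the log empirical second moment on log lag; slope = 2H.
          lags = np.arange(1, max_lag + 1)
          m2 = [np.mean((log_rv[h:] - log_rv[:-h]) ** 2) for h in lags]
          slope, _ = np.polyfit(np.log(lags), np.log(m2), 1)
          return slope / 2.0

      # Sanity check on a random walk, for which H = 0.5 by construction.
      rng = np.random.default_rng(0)
      print(estimate_hurst(rng.standard_normal(5000).cumsum()))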
  2. By: Don S. Poskitt
    Abstract: This paper analyses aspects of GMM inference in moment equality models when the moment Jacobian is allowed to be rank deficient. In this setting first order identification may fail, and the singular values of the Jacobian are not constrained, thereby allowing for varying levels of identification strength. No specific structure is imposed on the functional form of the moment conditions, the long-run variance of the moment conditions can be singular, and the GMM criterion function weighting matrix may also be chosen sub-optimally. Explicit analytic formulations for the asymptotic distributions of estimable functions of the resulting GMM estimator and the asymptotic distributions of GMM criterion test statistics are derived under relatively mild assumptions. The distributions can be computed using standard software without recourse to bootstrap or simulation methods. The practical operation of the theoretical results, and the relationship between lack of identification and identification strength, are illustrated via numerical examples involving instrumental variables estimation of a structural equation with endogenous regressors. The results suggest that although the presence and origin of identification problems can in practice be obscure, the applied researcher can take comfort from the fact that probabilities and quantile values calculated using the new asymptotic sampling distributions of statistics constructed from the standard GMM criterion function will give accurate approximations in the presence of identification issues, irrespective of the latter's source.
    Keywords: asymptotic distribution, estimable function, Laguerre series expansion, observational equivalence, singular values, stochastic dominance
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2020-40&r=all
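    For readers who want the objects under discussion in code, here is a compact two-step GMM for a linear IV model, with the long-run variance inverted by pseudo-inverse so that a singular S does not halt the computation; the paper's non-standard limit theory itself is not reproduced here:

      import numpy as np

      def linear_iv_gmm(y, X, Z):
          # Two-step GMM for y = X b + u with instruments Z (moments E[Z'u] = 0).
          n = len(y)
          W = np.linalg.inv(Z.T @ Z / n)                 # first-step weight matrix
          b1 = np.linalg.solve(X.T @ Z @ W @ Z.T @ X, X.T @ Z @ W @ Z.T @ y)
          g = Z * (y - X @ b1)[:, None]                  # moment contributions
          W2 = np.linalg.pinv(g.T @ g / n)               # pinv allows a singular S
          b2 = np.linalg.solve(X.T @ Z @ W2 @ Z.T @ X, X.T @ Z @ W2 @ Z.T @ y)
          gbar = Z.T @ (y - X @ b2) / n
          return b2, n * gbar @ W2 @ gbar                # estimate, GMM criterion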
  3. By: Jad Beyhum (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Eric Gautier (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement)
    Abstract: This paper considers linear panel data models where the dependence of the regressors and the unobservables is modelled through a factor structure. The asymptotic setting is such that the number of time periods and the sample size both go to infinity. Non-strong factors are allowed and the number of factors can grow to infinity with the sample size. We study a class of two-step estimators of the regression coefficients. In the first step, factors and factor loadings are estimated. Then, the second step corresponds to the panel regression of the outcome on the regressors and the estimates of the factors and the factor loadings from the first step. Different methods can be used in the first step, while the second step is the same across them. We derive sufficient conditions on the first-step estimator and the data generating process under which the two-step estimator is asymptotically normal. Assumptions under which using an approach based on principal components analysis in the first step yields an asymptotically normal estimator are also given. The two-step procedure exhibits good finite sample properties in simulations.
    Keywords: panel data, interactive fixed effects, factor models, flexible unobserved heterogeneity, principal components analysis
    Date: 2020–10–04
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02957008&r=all
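    A stylized sketch of the two-step idea under strong simplifying assumptions (a single scalar regressor panel, factors extracted from the regressors by principal components; the paper allows general first-step estimators):

      import numpy as np

      def two_step_factor_panel(Y, X, r):
          # Step 1: estimate r factors/loadings from the N x T regressor panel X.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          F = np.sqrt(X.shape[1]) * Vt[:r].T            # T x r factors
          L = X @ F / X.shape[1]                        # N x r loadings
          # Step 2: pooled regression of Y on X and the estimated common component.
          D = np.column_stack([X.ravel(), (L @ F.T).ravel()])
          coef, *_ = np.linalg.lstsq(D, Y.ravel(), rcond=None)
          return coef[0]                                # slope on X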
  4. By: Francisco Blasques (Vrije Universiteit Amsterdam); Christian Francq (University of Lille); Sébastien Laurent (Aix-Marseille University)
    Abstract: This paper introduces a new class of observation-driven models, including score models as a special case. This new class inherits and extends the basic ideas behind the development of score models and addresses a number of unsolved issues in the score literature. In particular, the new class of models (i) allows QML estimation of static parameters, (ii) allows the production of leverage effects in the presence of negative outliers, (iii) allows update asymmetry and asymmetric forecast loss functions in the presence of symmetric or skewed innovations, and (iv) achieves out-of-sample outlier robustness in the presence of sub-exponential tails. We establish the asymptotic properties of the QLE, QMLE, and MLE as well as likelihood ratio and Lagrange multiplier test statistics. The finite sample properties are studied by means of an extensive Monte Carlo study. Finally, we show the empirical relevance of this new class of models on real data.
    Date: 2020–10–21
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20200073&r=all
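    As a reference point, a minimal score-driven (GAS) volatility filter with Student-t scores, the baseline class that the paper's new observation-driven models generalize; omega, alpha, beta, nu are assumed given, with nu > 2:

      import numpy as np

      def t_gas_volatility(y, omega, alpha, beta, nu):
          # f_t is log variance; the scaled t-score bounds the impact of outliers.
          f = np.empty(len(y))
          f[0] = np.log(np.var(y))
          for t in range(len(y) - 1):
              e2 = y[t] ** 2 / np.exp(f[t])
              score = 0.5 * ((nu + 1) * e2 / (nu - 2 + e2) - 1)
              f[t + 1] = omega + beta * f[t] + alpha * score
          return np.exp(f)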
  5. By: Marco Bee
    Abstract: A large literature deals with the problem of testing for a Pareto tail and estimating the parameters of the Pareto distribution. We first review the most widely used statistical tools and identify their weaknesses. Then we develop a methodology that exploits all the available information by taking into account the data generating process of the entire population. Accordingly, we estimate a lognormal-Pareto mixture via the EM algorithm and the maximization of the profile likelihood function. Simulation experiments and an empirical application to the size of US metropolitan areas confirm that the proposed method works well and outperforms two commonly used techniques.
    Keywords: Mixture distributions, EM algorithm, lognormal distribution, Pareto distribution
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:trn:utwprg:2020/9&r=all
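    A condensed EM sketch for the lognormal-Pareto mixture with the Pareto threshold xm held fixed; the paper additionally maximizes the profile likelihood over the threshold:

      import numpy as np
      from scipy import stats

      def em_lognormal_pareto(x, xm, n_iter=200):
          pi, mu, sig, alpha = 0.9, np.log(x).mean(), np.log(x).std(), 2.0
          for _ in range(n_iter):
              f_ln = stats.lognorm.pdf(x, s=sig, scale=np.exp(mu))
              f_pa = np.where(x >= xm, alpha * xm**alpha / x**(alpha + 1), 0.0)
              w = pi * f_ln / (pi * f_ln + (1 - pi) * f_pa + 1e-300)  # E-step
              pi = w.mean()                                           # M-step
              mu = np.average(np.log(x), weights=w)
              sig = np.sqrt(np.average((np.log(x) - mu) ** 2, weights=w))
              tail = x >= xm
              alpha = (1 - w)[tail].sum() / max(((1 - w) * np.log(x / xm))[tail].sum(), 1e-12)
          return pi, mu, sig, alpha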
  6. By: Herwartz, Helmut; Rohloff, Hannes; Wang, Shu
    Abstract: In empirical macroeconomics, proxy structural vector autoregressive models (SVARs) have become a prominent path towards detecting monetary policy (MP) shocks. However, in practice, the merits of proxy SVARs depend on the relevance and exogeneity of the instrumental information employed. Our Monte Carlo analysis sheds light on the performance of proxy SVARs under realistic scenarios of low relative signal strength attached to MP shocks and alternative assumptions on instrument accuracy. In an empirical application with US data we argue in favor of the specific informational content of instruments based on the dynamic stochastic general equilibrium model of Smets and Wouters (2007). A joint assessment of the benchmark proxy SVAR and the outcomes of a structural covariance change model implies that from 1973 until 1979 monetary policy contributed on average between 2.2 and 2.4 units of inflation in the GDP deflator. For the so-called Volcker disinflation starting in 1979Q4, the benchmark structural model shows that the Fed's policy measures effectively reduced the GDP deflator within three years (i.e. by -3.06 units until 1982Q3). While the empirical analysis largely conditions on a small-dimensional trinity SVAR, the benchmark proxy SVAR shocks remain remarkably robust within a six-dimensional factor-augmented model comprising rich information from Michael McCracken's database (FRED-QD).
    Keywords: structural vector autoregression,external instruments,proxy SVAR,heteroskedasticity,monetary policy shocks
    JEL: C15 C32 C36 E47
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:cegedp:404&r=all
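    The core of proxy SVAR identification fits in a few lines: the impact vector of the instrumented shock is proportional to the covariance between the reduced-form residuals and the proxy. The unit normalization on the first residual below is an assumption for illustration:

      import numpy as np

      def proxy_impact_vector(u, z):
          # u: T x n reduced-form VAR residuals, z: length-T external instrument.
          cov_uz = (u - u.mean(0)).T @ (z - z.mean()) / len(z)
          return cov_uz / cov_uz[0]   # scaled so the shock moves residual 1 by one unit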
  7. By: Ruben Hipp
    Abstract: We investigate the causal structure of financial systems by accounting for contemporaneous relationships. To identify structural parameters, we introduce a novel non-parametric approach that exploits the fact that most financial data empirically exhibit heteroskedasticity. The identification works locally and thus allows structural matrices to vary smoothly with time. With this causality in hand, we derive a new measure of systemic relevance. An application to volatility spillovers in the US financial market demonstrates the importance of structural parameters in spillover analyses. Finally, we highlight that the COVID-19 period is mostly an aggregate crisis, with financial firms' spillovers edging slightly higher.
    Keywords: Econometric and statistical methods; Financial markets; Financial stability
    JEL: C32 C58 L14
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:20-42&r=all
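    As a simplified stand-in for the paper's smoothly time-varying, non-parametric scheme, the classic two-regime identification through heteroskedasticity (Rigobon 2003) can be written as a generalized eigenproblem on the regime covariance matrices:

      import numpy as np
      from scipy.linalg import eigh

      def identify_by_heteroskedasticity(u1, u2):
          # u1, u2: residuals from two volatility regimes (T1 x n, T2 x n).
          S1, S2 = np.cov(u1.T), np.cov(u2.T)
          lam, V = eigh(S2, S1)            # solves S2 v = lambda S1 v
          B = np.linalg.inv(V.T)           # then S1 = B B', S2 = B diag(lam) B'
          return B, lam                    # columns of B: impact vectors, up to order/sign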
  8. By: Masahiro Kato; Kenshi Abe; Kaito Ariu; Shota Yasui
    Abstract: Off-policy evaluation (OPE) is the problem of estimating the value of a target policy from samples obtained via different policies. Recently, applying OPE methods to bandit problems has garnered attention. For the theoretical guarantees of an estimator of the policy value, OPE methods require various conditions on the target policy and on the policy used to generate the samples. However, existing studies have not carefully discussed the practical situations in which such conditions hold, so a gap between theory and practice remains. This paper aims to bridge that gap. Based on the properties of the evaluation policy, we categorize OPE situations. Among practical applications, we focus mainly on best policy selection, and for this situation we propose a meta-algorithm based on existing OPE estimators. We investigate the proposed concepts experimentally using synthetic and open real-world datasets.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.12470&r=all
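    The workhorse OPE estimators in this literature are short enough to state directly; a sketch of inverse probability weighting and its doubly robust refinement, with all probabilities and fitted rewards evaluated at the logged actions:

      import numpy as np

      def ipw_value(rewards, logging_probs, target_probs):
          # Importance weighting; requires the logging policy to cover the target.
          return np.mean(target_probs / logging_probs * rewards)

      def dr_value(rewards, logging_probs, target_probs, q_logged, q_target):
          # q_logged: fitted reward for the logged action; q_target: its
          # expectation under the target policy. Consistent if either the
          # propensities or the reward model is correctly specified.
          return np.mean(q_target + target_probs / logging_probs * (rewards - q_logged))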
  9. By: Xing Yan; Weizhong Zhang; Lin Ma; Wei Liu; Qi Wu
    Abstract: We propose a parsimonious quantile regression framework to learn the dynamic tail behaviors of financial asset returns. Our model captures well both the time-varying characteristics and the asymmetric heavy tails of financial time series. It combines the merits of a popular sequential neural network model, the LSTM, with a novel parametric quantile function that we construct to represent the conditional distribution of asset returns. Our model also captures the serial dependence of higher moments individually, rather than just the volatility. Across a wide range of asset classes, the out-of-sample forecasts of conditional quantiles or VaR from our model outperform the GARCH family. Further, the proposed approach does not suffer from the issue of quantile crossing, nor is it exposed to the ill-posedness that affects the parametric probability density function approach.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.08263&r=all
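    Two ingredients of such models are easy to make concrete: the pinball loss that trains conditional quantile models, and a parametric quantile curve that is monotone in the quantile level. The functional form below is a hypothetical illustration, not the paper's specification:

      import numpy as np

      def pinball_loss(y, q, tau):
          # Standard quantile-regression check loss at level tau.
          e = y - q
          return np.mean(np.maximum(tau * e, (tau - 1) * e))

      def quantile_curve(tau, mu, sigma, left, right):
          # Logistic base quantile with separate left/right tail scales;
          # monotone in tau whenever left, right > 0 (so no quantile crossing).
          z = np.log(tau / (1 - tau))
          return mu + sigma * np.where(z < 0, left * z, right * z)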
  10. By: Andrii Babii; Xi Chen; Eric Ghysels; Rohit Kumar
    Abstract: The importance of asymmetries in prediction problems arising in economics has been recognized for a long time. In this paper, we focus on binary choice problems in a data-rich environment with general loss functions. In contrast to asymmetric regression problems, binary choice with general loss functions and high-dimensional datasets is challenging and not well understood. Econometricians have studied binary choice problems for a long time, but the literature does not offer computationally attractive solutions in data-rich environments. In contrast, the machine learning literature has many computationally attractive algorithms that form the basis for many of the automated procedures implemented in practice, but it is focused on symmetric loss functions that are independent of individual characteristics. One of the main contributions of our paper is to show that theoretically valid predictions of binary outcomes with arbitrary loss functions can be achieved via a very simple reweighting of the logistic regression, or of other state-of-the-art machine learning techniques such as boosting or (deep) neural networks. We apply our analysis to racial justice in pretrial detention.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.08463&r=all
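    The reweighting result is simple to operationalize; a minimal sketch with constant (hypothetical) misclassification costs using scikit-learn's sample weights. The paper also covers costs that vary with individual characteristics, handled the same way by making the weights a function of the covariates:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def fit_asymmetric_logit(X, y, cost_fn=5.0, cost_fp=1.0):
          # Weight each observation by the cost of misclassifying it:
          # cost_fn for missing a positive, cost_fp for a false alarm.
          w = np.where(y == 1, cost_fn, cost_fp)
          clf = LogisticRegression(max_iter=1000)
          clf.fit(X, y, sample_weight=w)
          return clf   # classify with threshold 0.5 on the reweighted model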
  11. By: Iván Fernández-Val; Hugo Freeman; Martin Weidner
    Abstract: We provide estimation methods for panel nonseparable models based on low-rank factor structure approximations. The factor structures are estimated by matrix-completion methods to deal with the computational challenges of principal component analysis in the presence of missing data. We show that the resulting estimators are consistent in large panels, but suffer from approximation and shrinkage biases. We correct these biases using matching and difference-in-difference approaches. Numerical examples and an empirical application to the effect of election day registration on voter turnout in the U.S. illustrate the properties and usefulness of our methods.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.12439&r=all
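    The matrix-completion step can be sketched with the standard soft-impute iteration; the singular-value soft-thresholding below is also the source of the shrinkage bias that the paper corrects:

      import numpy as np

      def soft_impute(Y, mask, lam, n_iter=100):
          # Y: N x T panel with arbitrary entries where mask is False;
          # returns a nuclear-norm-regularized low-rank fit.
          Z = np.where(mask, Y, 0.0)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(np.where(mask, Y, Z), full_matrices=False)
              Z = (U * np.maximum(s - lam, 0.0)) @ Vt   # shrink singular values
          return Z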
  12. By: Mengheng Li; Bowen Fu
    Abstract: The literature on uncovered interest rate parity (UIP) documents two empirical puzzles. One is the failure of UIP, and the other is the instability of the coefficients in the UIP regression. We propose a time-varying coefficients model with stochastic volatility and US structural shocks (TVC-SVX) to study how US structural shocks affect time-variation in the bilateral UIP relation for twelve countries. An unconditional test and a conditional test for UIP are developed. The former tests whether UIP coefficients mean-revert to their theoretical values, whereas the latter tests the coefficients at each point in time. Our findings suggest that the failure of UIP results from omitted US factors, in particular US monetary policy, productivity and preference shocks, which are also found to Granger cause local movements of UIP coefficients.
    Keywords: Time-varying parameter, Stochastic volatility, Model uncertainty, Exchange rate, Uncovered interest rate parity
    JEL: C11 C32 F31 F37
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2020-87&r=all
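    A minimal stand-in for the filtering machinery involved: a Kalman filter for a regression with random-walk coefficients. The TVC-SVX model adds stochastic volatility and the US structural shocks as regressors; the variances q and r below are assumed known:

      import numpy as np

      def tvp_kalman(y, X, q, r):
          # y_t = x_t' b_t + e_t,  b_t = b_{t-1} + w_t  (random-walk coefficients)
          T, k = X.shape
          b, P = np.zeros(k), 10.0 * np.eye(k)          # vague initialization
          path = np.empty((T, k))
          for t in range(T):
              P = P + q * np.eye(k)                     # predict
              x = X[t]
              K = P @ x / (x @ P @ x + r)               # Kalman gain
              b = b + K * (y[t] - x @ b)                # update
              P = P - np.outer(K, x @ P)
              path[t] = b
          return path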
  13. By: Andrew C. Harvey
    Abstract: This article shows how new time series models can be used to track the progress of an epidemic, forecast key variables and evaluate the effects of policies. A class of univariate time series models was developed by Harvey and Kattuman (2020). Here the framework is extended to modelling the relationship between two or more series. The role of common trends is discussed, and it is shown that when there is balanced growth in the logarithms of the growth rates of the cumulated series, simple regression models can be used to forecast using leading indicators. Data on daily deaths from Covid-19 in Italy and the UK provides an example. When growth is not balanced, the model can be extended by including a stochastic trend: the viability of this model is investigated by examining the relationship between new cases and deaths in the Florida second wave of summer 2020. The balanced growth framework is then used as the basis for policy evaluation by showing how some variables can serve as control groups for a target variable. This approach is used to investigate the consequences of Sweden's soft lockdown coronavirus policy.
    Keywords: Balanced growth, Co-integration, Covid-19, Gompertz curve, Kalman filter, Stochastic trend
    JEL: C22 C32
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:nsr:niesrd:517&r=all
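    The balanced-growth forecasting device reduces to arithmetic on the log growth rates of the cumulated series; a sketch assuming equal-length series, with the slope fixed at one, which is the balanced-growth restriction:

      import numpy as np

      def ln_growth(cum):
          # ln g_t with g_t = (Y_t - Y_{t-1}) / Y_{t-1} for a cumulated series Y.
          g = np.diff(cum) / cum[:-1]
          return np.log(np.maximum(g, 1e-12))

      def leading_indicator_forecast(target_cum, leader_cum, lag):
          lgt, lgl = ln_growth(target_cum), ln_growth(leader_cum)
          a = np.mean(lgt[lag:] - lgl[:-lag])   # intercept, unit slope imposed
          return a + lgl[-lag:]                 # ln-growth forecasts, lag steps ahead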
  14. By: Clements, Michael P. (University of Reading); Galvao, Ana Beatriz (University of Warwick)
    Abstract: Macroeconomic data are subject to revision as later vintages are released. Yet the usual way of generating real-time density forecasts from BVAR models makes no allowance for this form of data uncertainty. We evaluate two methods that take data uncertainty into account when forecasting with BVAR models, with and without stochastic volatility. First, the BVAR forecasting model is estimated on real-time vintages. Second, a model of data revisions is included, so that the BVAR is estimated on, and the forecasts are conditioned on, estimates of the revised values. We show that both methods improve the accuracy of density forecasts for US and UK output growth and inflation. We also investigate how the characteristics of the underlying data and revisions processes affect forecasting performance, and provide guidance that may benefit professional forecasters.
    Keywords: real-time forecasting, inflation and output growth predictive densities, real-time vintages, stochastic volatility
    JEL: C53
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:wrk:wrkemf:36&r=all
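    A toy version of the second method's revisions step: regress final values on first releases, then translate the newest vintage into an estimate of its revised value before it enters the forecasting model. This is a deliberately minimal stand-in for the paper's joint BVAR and revisions specification:

      import numpy as np

      def predict_revised(first_release, final, new_release):
          # Fit final = a + b * first_release by least squares, apply to new data.
          A = np.column_stack([np.ones_like(first_release), first_release])
          coef, *_ = np.linalg.lstsq(A, final, rcond=None)
          return coef[0] + coef[1] * new_release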
  15. By: Nosratabadi, Saeed; Mosavi, Amir; Duan, Puhong; Ghamisi, Pedram; Filip, Ferdinand; Band, Shahab S.; Reuter, Uwe; Gama, Joao; Gandomi, Amir H.
    Abstract: This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.
    Date: 2020–10–15
    URL: http://d.repec.org/n?u=RePEc:osf:lawarx:kczj5&r=all

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.