nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒12‒06
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Exponential GARCH-Ito Volatility Models By Donggyu Kim
  2. Inference of Jumps Using Wavelet Variance By Heng Chen; Mototsugu Shintani
  3. Pair copula constructions of point-optimal sign-based tests for predictive linear and nonlinear regressions By Kaveh Salehzadeh Nobari
  4. Generalized Kernel Ridge Regression for Causal Inference with Missing-at-Random Sample Selection By Rahul Singh
  5. Bounding Treatment Effects by Pooling Limited Information across Observations By Sokbae Lee; Martin Weidner
  6. Kernel Methods for Multistage Causal Inference: Mediation Analysis and Dynamic Treatment Effects By Rahul Singh; Liyuan Xu; Arthur Gretton
  7. A tale of two “AR” models: a spatial analysis of Corsican second home incidence By Yuheng Ling
  8. Reduced Rank Regression Models in Economics and Finance By Gianluca Cubadda; Alain Hecq
  9. Approximate Bayesian Estimation of Stochastic Volatility in Mean Models using Hidden Markov Models: Empirical Evidence from Stock Latin American Markets By Carlos A. Abanto-Valle; Gabriel Rodríguez; Luis M. Castro Cepero; Hernán B. Garrafa-Aragón
  10. Rate-Optimal Cluster-Randomized Designs for Spatial Interference By Michael P. Leung
  11. A machine learning dynamic switching approach to forecasting when there are structural breaks By Jeronymo Marcondes Pinto; Jennifer L. Castle
  12. A Replication of “The effect of the conservation reserve program on rural economies: Deriving a statistical verdict from a null finding” (American Journal of Agricultural Economics, 2019) By Jiarui Tian
  13. Procurements with Bidder Asymmetry in Cost and Risk-Aversion By Gaurab Aryal; Hanna Charankevich; Seungwon Jeong; Dong-Hyuk Kim
  14. Global and Local Components of Output Gaps By Florian Eckert; Nina Mühlebach
  15. Detecting Distributional Differences between Temporal Granularities for Exploratory Time Series Analysis By Sayani Gupta; Rob J Hyndman; Dianne Cook
  16. Impact of COVID-19: Nowcasting and Big Data to Track Economic Activity in Sub-Saharan Africa By Reda Cherif; Karl Walentin; Brandon Buell; Carissa Chen; Jiawen Tang; Nils Wendt

  1. By: Donggyu Kim
    Abstract: This paper introduces a novel Ito diffusion process to model high-frequency financial data, which can accommodate low-frequency volatility dynamics by embedding the discrete-time non-linear exponential GARCH structure with log-integrated volatility in a continuous instantaneous volatility process. The key feature of the proposed model is that, unlike existing GARCH-Ito models, the instantaneous volatility process has a non-linear structure, which ensures that the log-integrated volatilities have the realized GARCH structure. We call this the exponential realized GARCH-Ito (ERGI) model. Given the auto-regressive structure of the log-integrated volatility, we propose a quasi-likelihood estimation procedure for parameter estimation and establish its asymptotic properties. We conduct a simulation study to check the finite sample performance of the proposed model and an empirical study with 50 assets among the S&P 500 constituents. The numerical studies show the advantages of the newly proposed model.
    Date: 2021–11
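    A minimal discrete-time sketch of the exponential (log-linear) volatility recursion the abstract describes: because the recursion acts on log-variance, positivity of volatility holds without parameter constraints. The parameter values, and the omission of the asymmetric leverage term, are illustrative simplifications and not the paper's ERGI specification.

    ```python
    import numpy as np

    def simulate_exp_garch(n, omega=-0.1, phi=0.95, tau=0.2, seed=0):
        """Simulate returns whose log-variance follows an AR(1)-type
        exponential-GARCH-style recursion:
        log sig2_{t+1} = omega + phi * log sig2_t + tau * z_t."""
        rng = np.random.default_rng(seed)
        log_sig2 = np.empty(n)
        r = np.empty(n)
        log_sig2[0] = omega / (1 - phi)       # start at the unconditional mean
        z = rng.standard_normal(n)
        for t in range(n):
            r[t] = np.exp(0.5 * log_sig2[t]) * z[t]
            if t + 1 < n:
                log_sig2[t + 1] = omega + phi * log_sig2[t] + tau * z[t]
        return r, np.exp(log_sig2)

    returns, sig2 = simulate_exp_garch(1000)
    ```

    Note that `sig2` is strictly positive by construction, the point of working on the log scale.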
  2. By: Heng Chen (Currency Department, The Bank of Canada); Mototsugu Shintani (Faculty of Economics, The University of Tokyo)
    Abstract: We consider the statistical inference of jumps in nonparametric regression models with long memory noise. A test statistic is proposed for the presence of jumps, based on a robust estimator of the variance of the wavelet coefficients. Sequential application of the tests allows us to estimate the number of jumps and their locations. In comparison with the existing inference procedure, whose test statistic converges very slowly to the extreme value distribution, ours possesses more accurate finite-sample performance, derived from the asymptotic normality of our test statistic.
    Date: 2021–11
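    A toy illustration of jump detection via wavelet coefficients with a robust variance estimate, in the spirit of the abstract. The use of non-decimated level-1 Haar differences and a MAD-based scale is an assumption for this sketch, not the authors' statistic, and the long-memory aspect of their noise model is ignored here.

    ```python
    import numpy as np

    def haar_jump_test(y, crit=5.0):
        """Flag candidate jump locations: compute non-decimated level-1 Haar
        detail coefficients, standardize them by a robust (MAD-based) scale
        estimate, and return indices whose statistic exceeds the threshold."""
        y = np.asarray(y, float)
        d = (y[1:] - y[:-1]) / np.sqrt(2.0)          # Haar detail coefficients
        scale = 1.4826 * np.median(np.abs(d - np.median(d)))  # robust sigma
        stat = np.abs(d) / scale
        return np.flatnonzero(stat > crit) + 1       # index of post-jump point

    # smooth signal plus noise, with one jump at t = 500
    t = np.linspace(0, 1, 1000)
    y = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(1).standard_normal(1000)
    y[500:] += 1.0
    hits = haar_jump_test(y)
    ```

    The MAD scale keeps the variance estimate robust to the jump itself, which is the role the robust wavelet-variance estimator plays in the paper.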
  3. By: Kaveh Salehzadeh Nobari
    Abstract: We propose pair copula constructed point-optimal sign tests in the context of linear and nonlinear predictive regressions with endogenous, persistent regressors and disturbances exhibiting serial (nonlinear) dependence. The proposed approach entails considering the entire dependence structure of the signs to capture the serial dependence, and building feasible test statistics based on pair copula constructions of the sign process. The tests are exact and valid in the presence of heavy-tailed and nonstandard errors, as well as heterogeneous and persistent volatility. Furthermore, they may be inverted to build confidence regions for the parameters of the regression function. Finally, we adopt an adaptive approach based on the split-sample technique to maximize the power of the test by finding an appropriate alternative hypothesis. In a Monte Carlo study, we assess the performance of the proposed "quasi"-point-optimal sign tests based on pair copula constructions by comparing their size and power to those of certain existing tests that are intended to be robust against heteroskedasticity. The simulation results confirm the superiority of our procedures over existing popular tests.
    Date: 2021–11
  4. By: Rahul Singh
    Abstract: I propose kernel ridge regression estimators for nonparametric dose response curves and semiparametric treatment effects in the setting where an analyst has access to a selected sample rather than a random sample; the outcome is observed only for selected observations. I assume selection is as good as random conditional on treatment and a sufficiently rich set of observed covariates, where the covariates are allowed to cause treatment or be caused by treatment -- an extension of missingness-at-random (MAR). I propose estimators of means, increments, and distributions of counterfactual outcomes with closed form solutions in terms of kernel matrix operations, allowing treatment and covariates to be discrete or continuous, and low, high, or infinite dimensional. For the continuous treatment case, I prove uniform consistency with finite sample rates. For the discrete treatment case, I prove root-n consistency, Gaussian approximation, and semiparametric efficiency.
    Date: 2021–11
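    A hedged sketch of the complete-case backbone of this approach: kernel ridge regression fit only on observations whose outcome is observed, then evaluated at new points. The function names, RBF bandwidth, and ridge penalty are illustrative choices; the paper's full estimator additionally conditions on covariates rich enough to justify the MAR-type selection assumption.

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix between the rows of A and B."""
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def krr_fit_predict(X, y, selected, X_new, lam=1e-3, gamma=1.0):
        """Kernel ridge regression fit on the selected subsample only,
        with the standard closed-form solution in kernel matrix operations."""
        Xs, ys = X[selected], y[selected]
        K = rbf_kernel(Xs, Xs, gamma)
        alpha = np.linalg.solve(K + lam * len(ys) * np.eye(len(ys)), ys)
        return rbf_kernel(X_new, Xs, gamma) @ alpha

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(300, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)
    sel = rng.random(300) < 0.7            # outcome missing for ~30% of units
    pred = krr_fit_predict(X, y, sel, np.array([[0.0], [1.0]]))
    ```

    The closed-form solve is what makes the abstract's "kernel matrix operations" claim concrete: no iterative optimization is needed.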
  5. By: Sokbae Lee; Martin Weidner
    Abstract: We provide novel bounds on average treatment effects (on the treated) that are valid under an unconfoundedness assumption. Our bounds are designed to be robust in challenging situations, for example, when the conditioning variables take on a large number of different values in the observed sample, or when the overlap condition is violated. This robustness is achieved by only using limited "pooling" of information across observations. Namely, the bounds are constructed as sample averages over functions of the observed outcomes such that the contribution of each outcome only depends on the treatment status of a limited number of observations. No information pooling across observations leads to so-called "Manski bounds", while unlimited information pooling leads to standard inverse propensity score weighting. We explore the intermediate range between these two extremes and provide corresponding inference methods. We show in Monte Carlo experiments and through an empirical application that our bounds are indeed robust and informative in practice.
    Date: 2021–11
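    The no-pooling endpoint of the paper's range, the "Manski bounds", can be sketched directly for a bounded outcome; the intermediate limited-pooling estimators that are the paper's contribution are not reproduced here.

    ```python
    import numpy as np

    def manski_bounds(y, d, y_lo=0.0, y_hi=1.0):
        """Worst-case (no information pooling) bounds on E[Y(1)] - E[Y(0)]
        when the outcome is known to lie in [y_lo, y_hi]: missing potential
        outcomes are replaced by the endpoints of the outcome range."""
        y, d = np.asarray(y, float), np.asarray(d, int)
        p = d.mean()                                  # share treated
        m1, m0 = y[d == 1].mean(), y[d == 0].mean()
        lo = (m1 * p + y_lo * (1 - p)) - (m0 * (1 - p) + y_hi * p)
        hi = (m1 * p + y_hi * (1 - p)) - (m0 * (1 - p) + y_lo * p)
        return lo, hi

    rng = np.random.default_rng(0)
    d = rng.integers(0, 2, 1000)
    y = np.clip(0.5 + 0.2 * d + 0.1 * rng.standard_normal(1000), 0, 1)
    lo, hi = manski_bounds(y, d)   # true effect here is roughly 0.2
    ```

    The width of these bounds always equals the outcome range `y_hi - y_lo`, which is why pooling information across observations, as the paper does, can tighten them.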
  6. By: Rahul Singh; Liyuan Xu; Arthur Gretton
    Abstract: We propose kernel ridge regression estimators for mediation analysis and dynamic treatment effects over short horizons. We allow treatments, covariates, and mediators to be discrete or continuous, and low, high, or infinite dimensional. We propose estimators of means, increments, and distributions of counterfactual outcomes with closed form solutions in terms of kernel matrix operations. For the continuous treatment case, we prove uniform consistency with finite sample rates. For the discrete treatment case, we prove root-n consistency, Gaussian approximation, and semiparametric efficiency. We conduct simulations then estimate mediated and dynamic treatment effects of the US Job Corps program for disadvantaged youth.
    Date: 2021–11
  7. By: Yuheng Ling (Università di Corsica)
    Abstract: Spatial autoregressive (AR) models can accommodate various forms of dependence among data with discrete support in a space, and hence are widely used in economics and social science. We examine the relationship between spatial (autoregressive) error models and conditional autoregressive models, considered to be the two main types of spatial AR models. This topic has received only incomplete treatment in the literature and is often overlooked by econometricians. To further develop and broaden it, we demonstrate that spatial error and conditional autoregressive models can be made equivalent via hierarchical models, but have different variance-covariance matrices. We then propose a Bayesian approach, known as integrated nested Laplace approximations (INLA), to produce accurate estimates for these models and to speed up inference. We also discuss how to interpret model coefficients, especially estimates of spatial latent effects. We illustrate the two AR models with the proposed methodology in an application to the second home incidence rates of Corsica, France in 2017. We find that both models can capture spatial dependence, but conditional autoregressive models perform slightly better and produce a higher spatial autocorrelation coefficient. We further illustrate estimates of latent effects by identifying several “hot spots” and “cold spots” in terms of second home incidence rates.
    Date: 2021–12
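    A small sketch of the textbook proper-CAR precision matrix Q = tau * (D - rho * W), which illustrates the distinct variance-covariance structure the abstract contrasts with the spatial error model; this is generic CAR machinery, not the paper's INLA implementation.

    ```python
    import numpy as np

    def car_precision(W, rho=0.9, tau=1.0):
        """Precision matrix of a proper CAR model: Q = tau * (D - rho * W),
        with W a symmetric adjacency matrix and D its row-sum diagonal."""
        D = np.diag(W.sum(axis=1))
        return tau * (D - rho * W)

    # 4-neighbour adjacency on a 3x3 lattice
    n = 3
    W = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            for di, dj in [(1, 0), (0, 1)]:
                if i + di < n and j + dj < n:
                    W[k, (i + di) * n + (j + dj)] = 1
                    W[(i + di) * n + (j + dj), k] = 1
    Q = car_precision(W)
    ```

    With |rho| < 1 the matrix is strictly diagonally dominant and hence positive definite, so the CAR prior is proper; the implied covariance is the inverse of Q, which differs from the spatial-error covariance even for the same W.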
  8. By: Gianluca Cubadda (DEF & CEIS,University of Rome "Tor Vergata"); Alain Hecq (Maastricht University)
    Abstract: This chapter surveys the importance of reduced rank regression (RRR) techniques for modelling economic and financial time series. We mainly focus on models that are capable of reproducing the presence of common dynamics among variables, such as the serial correlation common feature and the multivariate autoregressive index models. Cointegration analysis, for which RRR plays a central role, is not discussed in this chapter as it deserves a specific treatment of its own. Instead, we show how to detect and model comovements in time series that are stationary or that have been stationarized after proper transformations. The motivations for the use of RRR in time series econometrics include dimension reduction, which simplifies complex dynamics and thus makes interpretation easier, as well as efficiency gains in both estimation and prediction. Via the final equation representation, RRR also provides the nexus between multivariate time series and parsimonious marginal ARIMA models. The drawback of RRR, which is common to all dimension reduction techniques, is that the underlying restrictions may or may not be present in the data. We provide a couple of empirical applications to illustrate the concepts and methods.
    Keywords: Reduced-rank regression, common features, vector autoregressive models, multivariate volatility models, dimension reduction.
    Date: 2021–11–08
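    A compact sketch of rank-constrained least squares under identity weighting: fit OLS, then project the fitted values onto their leading principal directions. This is the classical RRR solution in its simplest form; the weighting and dynamic specifications surveyed in the chapter can differ.

    ```python
    import numpy as np

    def reduced_rank_regression(X, Y, rank):
        """Rank-constrained least squares (identity weighting): compute the
        OLS coefficients, then project onto the top right singular vectors
        of the fitted values, yielding a coefficient matrix of given rank."""
        B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
        Yhat = X @ B_ols
        _, _, Vt = np.linalg.svd(Yhat, full_matrices=False)
        P = Vt[:rank].T @ Vt[:rank]        # rank-r projection in outcome space
        return B_ols @ P

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 6))
    B_true = rng.standard_normal((6, 1)) @ rng.standard_normal((1, 4))  # rank 1
    Y = X @ B_true + 0.1 * rng.standard_normal((500, 4))
    B_hat = reduced_rank_regression(X, Y, rank=1)
    ```

    Imposing the (correct) rank restriction here reduces the number of free parameters from 24 to 9, the efficiency motivation the abstract cites.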
  9. By: Carlos A. Abanto-Valle (Department of Statistics, Federal University of Rio de Janeiro); Gabriel Rodríguez (Department of Economics, Pontificia Universidad Católica del Perú); Luis M. Castro Cepero (Department of Statistics, Pontificia Universidad Católica de Chile); Hernán B. Garrafa-Aragón (Escuela de Ingeniería Estadística de la Universidad Nacional de Ingeniería)
    Abstract: The stochastic volatility in mean (SVM) model proposed by Koopman and Uspensky (2002) is revisited. This paper has two goals. The first is to offer a methodology that requires less computational time in simulation and estimation than others proposed in the literature, such as Abanto-Valle et al. (2021). To achieve this, we propose to approximate the likelihood function of the SVM model using Hidden Markov Model (HMM) machinery, making Bayesian inference possible in real time. We sample from the posterior distribution of parameters using importance sampling (IS), with a multivariate Normal proposal whose mean and variance are given by the posterior mode and the inverse of the Hessian matrix evaluated at that mode. The frequentist properties of the estimators are analyzed through a simulation study. The second goal is to provide empirical evidence by estimating the SVM model using daily data for five Latin American stock markets. The results indicate that volatility negatively impacts returns, suggesting that the volatility feedback effect is stronger than the effect related to the expected volatility. This result is exactly opposite to the finding of Koopman and Uspensky (2002). We compare our methodology with the Hamiltonian Monte Carlo (HMC) and Riemannian HMC methods based on Abanto-Valle et al. (2021). JEL: C11, C15, C22, C51, C52, C58, G12.
    Keywords: Stock Latin American Markets, Stochastic Volatility in Mean, Feed-Back Effect, Hamiltonian Monte Carlo, Hidden Markov Models, Riemannian Manifold Hamiltonian Monte Carlo, Non Linear State Space Models.
    Date: 2021
  10. By: Michael P. Leung
    Abstract: We consider a potential outcomes model in which interference may be present between any two units but the extent of interference diminishes with spatial distance. The causal estimand is the global average treatment effect, which compares counterfactual outcomes when all units are treated to outcomes when none are. We study a class of designs in which space is partitioned into clusters that are randomized into treatment and control. For each design, we estimate the treatment effect using a Horvitz-Thompson estimator that compares the average outcomes of units with all neighbors treated to units with no neighbors treated, where the neighborhood radius is of the same order as the cluster size dictated by the design. We derive the estimator's rate of convergence as a function of the design and degree of interference and use this to obtain estimator-design pairs in this class that achieve near-optimal rates of convergence under relatively minimal assumptions on interference. We prove that the estimators are asymptotically normal and provide a variance estimator. Finally, we discuss practical implementation of the designs by partitioning space using clustering algorithms.
    Date: 2021–11
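    A stylized version of the Horvitz-Thompson contrast in the simplest possible case, where each unit's neighbourhood lies entirely inside its own cluster, so a unit has all neighbours treated exactly when its cluster is treated. The paper's estimator and designs handle neighbourhood radii that need not respect cluster boundaries; this sketch only illustrates the exposure-weighted contrast.

    ```python
    import numpy as np

    def ht_cluster_estimator(y, cluster, treat_cluster, p=0.5):
        """Horvitz-Thompson estimate of the global average treatment effect:
        average of outcomes with all neighbours treated (weight 1/p) minus
        outcomes with no neighbours treated (weight 1/(1-p)), where exposure
        is determined by the unit's own cluster assignment."""
        d = treat_cluster[cluster]           # unit-level exposure indicator
        return np.mean(y * (d / p - (1 - d) / (1 - p)))

    rng = np.random.default_rng(0)
    n_clusters, m = 2000, 5
    cluster = np.repeat(np.arange(n_clusters), m)
    treat = rng.integers(0, 2, n_clusters)   # clusters randomized with p = 0.5
    tau = 1.0                                # true global effect
    y = 2.0 + tau * treat[cluster] + 0.5 * rng.standard_normal(n_clusters * m)
    tau_hat = ht_cluster_estimator(y, cluster, treat, p=0.5)
    ```

    The inverse-probability weights make the contrast unbiased for the global effect under the design's known exposure probabilities.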
  11. By: Jeronymo Marcondes Pinto; Jennifer L. Castle
    Abstract: Forecasting economic indicators is an important task for analysts. However, many indicators suffer from structural breaks, leading to forecast failure. Methods that are robust following a structural break have been proposed in the literature, but they come at a cost: an increase in forecast error variance. We propose a method to select between a set of robust and non-robust forecasting models. Our method uses time-series clustering to identify possible structural breaks in a time series, and then switches between forecasting models depending on the series dynamics. We perform a rigorous empirical evaluation with 400 simulated series containing an artificial structural break and with real economic series: Industrial Production and Consumer Prices for all Western European countries available from the OECD database. Our results show that the proposed method statistically outperforms benchmarks in forecast accuracy in most scenarios, particularly at short horizons.
    Keywords: Machine Learning, Forecasting, Structural Breaks, Model Selection, Cluster Analysis
    Date: 2021–10–13
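    A toy version of the robust/non-robust switch: forecast with the full-sample mean unless the recent window's mean has shifted significantly, in which case fall back to the break-robust recent-window mean. The z-score rule below is an illustrative stand-in; the paper's switch is driven by time-series clustering, not this simple test.

    ```python
    import numpy as np

    def switching_forecast(y, window=20, z_crit=3.0):
        """One-step-ahead forecast of the level: use the efficient full-sample
        mean by default, but switch to the break-robust recent-window mean
        when the recent window's mean departs significantly from history."""
        y = np.asarray(y, float)
        hist, recent = y[:-window], y[-window:]
        z = (recent.mean() - hist.mean()) / (hist.std(ddof=1) / np.sqrt(window))
        return recent.mean() if abs(z) > z_crit else y.mean()

    rng = np.random.default_rng(0)
    stable = rng.standard_normal(200)                      # no break
    broken = np.concatenate([rng.standard_normal(180),
                             5 + rng.standard_normal(20)]) # break at t = 180
    f_stable = switching_forecast(stable)
    f_broken = switching_forecast(broken)
    ```

    The trade-off the abstract describes is visible here: the robust branch tracks the post-break level quickly but, being based on few observations, has higher variance when no break occurred.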
  12. By: Jiarui Tian (University of Canterbury)
    Abstract: This study replicates Brown, Lambert, and Wojan (2019) (BLW) and their bootstrapping procedure for calculating ex post power. At the current time there is no generally accepted way of calculating ex post power. BLW provide a novel method for doing this, though they provide little justification for their method or evidence of its reliability. My replication makes three contributions. First, it confirms that the data and code provided with their paper are sufficient to reproduce their results. Second, it performs two robustness checks to determine if slight alterations to their procedure affect their results. I determine that including a constant term in their procedure does not affect the results. On the other hand, using a different bootstrapping procedure produces somewhat different results. However, without any ground truth to use as a benchmark, one cannot say which bootstrapping procedure is better. My third contribution is that I use Monte Carlo experiments to assess the performance of BLW’s method. My experimental results indicate that their method is unbiased and produces a relatively narrow range of estimates. This suggests that BLW’s method may provide a reliable method for researchers to calculate ex post power, though further investigation is needed.
    Keywords: Ex post power, Statistical insignificance, Monte Carlo experiments, Bootstrapping, Replication
    JEL: C12 C15 C18
    Date: 2021–11–01
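    For readers unfamiliar with the quantity being replicated, here is a generic Monte Carlo power calculation for a two-sided z-test given a hypothesized effect and the study's standard error. This is the textbook notion of power, not BLW's bootstrap procedure, which is the subject of the replication and is not reproduced here.

    ```python
    import numpy as np

    def ex_post_power(effect, se, n_sim=10000, seed=0):
        """Monte Carlo power of a two-sided 5% z-test: simulate estimates
        around the hypothesized effect with the study's standard error and
        count how often the test rejects the null of zero effect."""
        rng = np.random.default_rng(seed)
        z_crit = 1.959963984540054           # Phi^{-1}(0.975)
        draws = effect + se * rng.standard_normal(n_sim)
        return np.mean(np.abs(draws / se) > z_crit)

    # power to detect an effect equal to twice the standard error
    power = ex_post_power(effect=2.0, se=1.0)
    ```

    An effect of two standard errors yields power of roughly one half, which is why a null finding on its own, the situation BLW analyze, is hard to interpret without a power calculation.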
  13. By: Gaurab Aryal; Hanna Charankevich; Seungwon Jeong; Dong-Hyuk Kim
    Abstract: We propose an empirical method to analyze data from first-price procurements where bidders are asymmetric in their risk-aversion (CRRA) coefficients and distributions of private costs. Our Bayesian approach evaluates the likelihood by solving type-symmetric equilibria using the boundary-value method and integrates out unobserved heterogeneity through data augmentation. We study a new dataset from Russian government procurements focusing on the category of printing papers. We find that there is no unobserved heterogeneity (presumably because the job is routine), but bidders are highly asymmetric in their cost and risk-aversion. Our counterfactual study shows that choosing a type-specific cost-minimizing reserve price marginally reduces the procurement cost; however, inviting one more bidder substantially reduces the cost, by at least 5.5%. Furthermore, incorrectly imposing risk-neutrality would severely mislead inference and policy recommendations, but the bias from imposing homogeneity in risk-aversion is small.
    Date: 2021–11
  14. By: Florian Eckert (ETH Zurich, Switzerland); Nina Mühlebach (ETH Zurich, Switzerland)
    Abstract: This paper proposes a multi-level dynamic factor model to identify common components in output gap estimates. We pool multiple output gap estimates for 157 countries and decompose them into one global, eight regional, and 157 country-specific cycles. Our approach easily deals with mixed frequencies, ragged edges, and discontinuities in the underlying output gap estimates. To restrict the parameter space in the Bayesian state space model, we apply a stochastic search variable selection approach and base the prior inclusion probabilities on spatial information. Our results suggest that the global and the regional cycles explain a substantial proportion of the output gaps. On average, 18% of a country’s output gap is attributable to the global cycle, 24% to the regional cycle, and 58% to the local cycle.
    Keywords: Multi-Level DFM, Bayesian State Space Model, Output Gap Decomposition, Model Combination, Business Cycles, Variable Selection, Spatial Prior
    JEL: C11 C32 C52 F44 R11
    Date: 2021–11
  15. By: Sayani Gupta; Rob J Hyndman; Dianne Cook
    Abstract: Cyclic temporal granularities are temporal deconstructions of a time period into units such as hour-of-the-day and work-day/weekend. They can be useful for measuring repetitive patterns in large univariate time series data, and feed new approaches to exploring time series data. One use is to take pairs of granularities, and make plots of response values across the categories induced by the temporal deconstruction. However, when there are many granularities that can be constructed for a time period, there will also be too many possible displays to decide which might be the most interesting to display. This work proposes a new distance metric to screen and rank the possible granularities, and hence choose the most interesting ones to plot. The distance measure is computed for a single cyclic granularity or a pair of them, and can be compared across different cyclic granularities or on a collection of time series. The methods are implemented in the open-source R package hakear.
    Keywords: data visualization, cyclic granularities, periodicities, permutation tests, distributional difference, Jensen-Shannon distances, smart meter data, R
    JEL: C55 C65 C80
    Date: 2021
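    The keywords mention Jensen-Shannon distances; a minimal implementation of the JS distance between two granularity-induced discrete distributions is sketched below (in Python rather than the paper's R). The hour-of-day profiles are made-up illustrative data, and the paper's measure for pairs of granularities and its permutation-based thresholds are more elaborate than this.

    ```python
    import numpy as np

    def js_distance(p, q, eps=1e-12):
        """Jensen-Shannon distance (square root of the JS divergence, base-2
        logs) between two discrete distributions; lies in [0, 1]."""
        p = np.asarray(p, float) / np.sum(p)
        q = np.asarray(q, float) / np.sum(q)
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
        return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

    # illustrative hour-block demand profiles: weekday vs weekend
    weekday = [5, 3, 2, 8, 12, 14]
    weekend = [2, 2, 3, 6, 9, 14]
    d = js_distance(weekday, weekend)
    ```

    Because the distance is symmetric, bounded, and zero only for identical distributions, it is well suited to ranking many candidate granularity displays against each other.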
  16. By: Reda Cherif; Karl Walentin; Brandon Buell; Carissa Chen; Jiawen Tang; Nils Wendt
    Abstract: The COVID-19 pandemic underscores the critical need for detailed, timely information on its evolving economic impacts, particularly for Sub-Saharan Africa (SSA) where data availability and lack of generalizable nowcasting methodologies limit efforts for coordinated policy responses. This paper presents a suite of high frequency and granular country-level indicator tools that can be used to nowcast GDP and track changes in economic activity for countries in SSA. We make two main contributions: (1) demonstration of the predictive power of alternative data variables such as Google search trends and mobile payments, and (2) implementation of two types of modelling methodologies, machine learning and parametric factor models, that have flexibility to incorporate mixed-frequency data variables. We present nowcast results for 2019Q4 and 2020Q1 GDP for Kenya, Nigeria, South Africa, Uganda, and Ghana, and argue that our factor model methodology can be generalized to nowcast and forecast GDP for other SSA countries with limited data availability and shorter timeframes.
    Keywords: model prediction; quantile plot; ML model; GDP YoY; data variable; YoY percent change; Factor models; Machine learning; Time series analysis; Spot exchange rates; Mobile banking; Africa; Sub-Saharan Africa
    Date: 2021–05–01

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.