nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒09‒26
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Safe Policy Learning under Regression Discontinuity Designs By Yi Zhang; Eli Ben-Michael; Kosuke Imai
  2. Smoothed bootstrapping kernel density estimation under higher order kernel By Kun Yi; Yoshihiko Nishiyama
  3. What Impulse Response Do Instrumental Variables Identify? By Bonsoo Koo; Seojeong Lee; Myung Hwan Seo
  4. Linear Panel Regression Models with Non-Classical Measurement Errors: An Application to Investment Equations By Kazuhiko Hayakawa; Takashi Yamagata
  5. Optimal Recovery for Causal Inference By Ibtihal Ferwana; Lav R. Varshney
  6. Center-outward Rank- and Sign-based VARMA Portmanteau Tests By Marc Hallin; Hang Liu
  7. Large Volatility Matrix Analysis Using Global and National Factor Models By Sung Hoon Choi; Donggyu Kim
  8. Robust Tests of Model Incompleteness in the Presence of Nuisance Parameters By Shuowen Chen; Hiroaki Kaido
  9. Assessing External Validity in Practice By Sebastian Galiani; Brian Quistorff
  10. Characterizing the Anchoring Effects of Official Forecasts on Private Expectations By Barrera, Carlos
  11. Proposing a global model to manage the bias-variance tradeoff in the context of hedonic house price models By Julian Granna; Wolfgang Brunauer; Stefan Lang
  12. Do Recessions Occur Concurrently Across Countries? A Multinomial Logistic Approach By Poon, Aubrey; Zhu, Dan
  13. The Holt-Winters filter and the one-sided HP filter: A close correspondence By Rodrigo Alfaro; Mathias Drehmann
  14. Sectoral Uncertainty By Efrem Castelnuovo; Kerem Tuzcuoglu; Luis Uzeda
  15. A Hybrid Approach on Conditional GAN for Portfolio Analysis By Jun Lu; Danny Ding
  16. A statistical test of market efficiency based on information theory By Xavier Brouty; Matthieu Garcin
  17. Regime-based Implied Stochastic Volatility Model for Crypto Option Pricing By Danial Saef; Yuanrong Wang; Tomaso Aste

  1. By: Yi Zhang; Eli Ben-Michael; Kosuke Imai
    Abstract: The regression discontinuity (RD) design is widely used for program evaluation with observational data. The RD design enables the identification of the local average treatment effect (LATE) at the treatment cutoff by exploiting known deterministic treatment assignment mechanisms. The primary focus of the existing literature has been the development of rigorous estimation methods for the LATE. In contrast, we consider policy learning under the RD design. We develop a robust optimization approach to finding an optimal treatment cutoff that improves upon the existing one. Under the RD design, policy learning requires extrapolation. We address this problem by partially identifying the conditional expectation function of the counterfactual outcome under a smoothness assumption commonly used for the estimation of the LATE. We then minimize the worst-case regret relative to the status quo policy. The resulting new treatment cutoffs have a safety guarantee, enabling policy makers to limit the probability that they yield a worse outcome than the existing cutoff. Going beyond the standard single-cutoff case, we generalize the proposed methodology to the multi-cutoff RD design by developing a doubly robust estimator. We establish the asymptotic regret bounds for the learned policy using semi-parametric efficiency theory. Finally, we apply the proposed methodology to empirical and simulated data sets.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.13323&r=
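    For intuition, the worst-case-regret cutoff search can be illustrated in a few lines of Python. The sketch below assumes a sharp single-cutoff design, a known Lipschitz constant M for the conditional expectation functions, and a crude local mean at the cutoff; none of these choices reproduce the authors' estimators.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        x = rng.uniform(-1, 1, n)            # running variable
        c0 = 0.0                             # status quo cutoff: treat if x >= c0
        y0 = 1.0 + 0.8 * x + rng.normal(0, 0.3, n)   # untreated outcome
        y = y0 + (x >= c0) * (0.4 - 0.5 * x)         # treated where x >= c0

        M = 1.0                              # assumed Lipschitz bound on the CEFs
        h = 0.1
        mu1_c0 = y[(x >= c0) & (x < c0 + h)].mean()  # crude E[Y(1) | X = c0+]

        def worst_case_regret(c):
            # Lowering the cutoff to c < c0 treats units with x in [c, c0), whose
            # treated outcome is unobserved; bound it pessimistically by
            # mu1_c0 - M * |x - c0| and compare with the observed outcome.
            band = (x >= c) & (x < c0)
            y1_lower = mu1_c0 - M * np.abs(x[band] - c0)
            return np.sum(y[band] - y1_lower) / n    # regret vs the status quo

        grid = np.linspace(-0.5, 0.0, 26)
        regrets = [worst_case_regret(c) for c in grid]
        c_safe = grid[int(np.argmin(regrets))]
        print(f"safe cutoff: {c_safe:.2f}, worst-case regret: {min(regrets):.4f}")

    A cutoff is adopted only if even the pessimistic bound on the counterfactual shows no loss relative to the status quo, which is the spirit of the safety guarantee.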
  2. By: Kun Yi (Graduate School of Economics, Kyoto University); Yoshihiko Nishiyama (Institute of Economic Research, Kyoto University)
    Abstract: The smoothed bootstrap is a useful method for approximating the bias of kernel density estimation. However, it can only be applied when the kernel function is of second order. In this study, we propose a novel method that generalizes the smoothed bootstrap to higher-order kernels for bias estimation, and we construct a bias-corrected estimator based on it. Theoretical analysis and numerical simulations demonstrate that the proposed method achieves better performance than the traditional bias correction method.
    Keywords: kernel density estimation, smoothed bootstrap, bias estimation, higher order kernel
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:1081&r=
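    For intuition, here is a minimal Python sketch of the classical second-order smoothed bootstrap bias estimate that the paper generalizes; the higher-order-kernel construction itself is not reproduced, and the bandwidth and sample size are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n, h, x0, B = 500, 0.3, 0.0, 200
        data = rng.normal(0, 1, n)            # sample from a standard normal

        def kde(sample, x, h):
            # Gaussian-kernel density estimate at the point x
            u = (x - sample) / h
            return np.exp(-0.5 * u**2).sum() / (len(sample) * h * np.sqrt(2 * np.pi))

        f_hat = kde(data, x0, h)

        # Smoothed bootstrap: adding kernel-distributed jitter to resampled points
        # draws from the estimated density itself; the average re-estimate minus
        # f_hat approximates the bias of the KDE at x0.
        boot = np.empty(B)
        for b in range(B):
            star = rng.choice(data, n) + h * rng.normal(0, 1, n)
            boot[b] = kde(star, x0, h)

        bias_est = boot.mean() - f_hat
        true_f = 1 / np.sqrt(2 * np.pi)       # true N(0,1) density at x0 = 0
        print(f"f_hat={f_hat:.4f}  bias_est={bias_est:.4f}  "
              f"realized error={f_hat - true_f:.4f}")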
  3. By: Bonsoo Koo; Seojeong Lee; Myung Hwan Seo
    Abstract: Instrumental variables (IV) are often used to provide exogenous variation in impulse response analysis, but the heterogeneous effects the IV may identify are rarely discussed. In microeconometrics, on the other hand, it is well understood that an IV identifies the local average treatment effect (Imbens and Angrist, 1994). Recognizing that macro shocks are often composites (e.g., the government spending shock is the sum of sectoral spending shocks), we show that the IV estimand can be written as a weighted average of componentwise impulse responses. Negative weights can arise if the correlations between the IV and the components of the macro shock have different signs. This implies that the response of the variable of interest to an IV shock may evolve in an arbitrary way regardless of the true underlying componentwise impulse responses. We provide conditions under which the IV estimand has a structural interpretation. When additional IVs or auxiliary disaggregated data are available, we derive informative bounds on the componentwise impulse responses. With multiple IVs and disaggregated data, the impulse responses can be point-identified. To demonstrate the empirical relevance of our theory, we revisit the IV estimation of the government spending multiplier in the US. We show that a non-defense spending multiplier larger than one can be obtained from the conventional IV estimates of the aggregate spending multiplier, which are smaller than one.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.11828&r=
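    The weighted-average representation is easy to verify numerically. Below is a toy Python check (a static simplification, not the paper's estimator) with a composite shock e = e1 + e2 and an instrument whose relevance for the two components has opposite signs, so one weight is negative and the IV estimand lies outside the range of the componentwise effects.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000
        e1 = rng.normal(0, 1, n)
        e2 = rng.normal(0, 1, n)
        z = 1.0 * e1 - 0.8 * e2 + rng.normal(0, 1, n)  # opposite-signed relevance
        e = e1 + e2                                    # observed composite shock
        b1, b2 = 2.0, 0.5                              # componentwise responses
        y = b1 * e1 + b2 * e2 + rng.normal(0, 1, n)

        beta_iv = np.cov(z, y)[0, 1] / np.cov(z, e)[0, 1]
        w1 = np.cov(z, e1)[0, 1] / np.cov(z, e)[0, 1]
        w2 = np.cov(z, e2)[0, 1] / np.cov(z, e)[0, 1]
        print(f"IV estimand: {beta_iv:.3f}")           # approx. 8, outside [0.5, 2]
        print(f"weights: w1={w1:.3f}, w2={w2:.3f}, sum={w1 + w2:.3f}")
        print(f"weighted average of effects: {w1 * b1 + w2 * b2:.3f}")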
  4. By: Kazuhiko Hayakawa; Takashi Yamagata
    Abstract: This paper proposes a new minimum distance estimator for linear panel regression models with measurement error and analyzes its theoretical properties. The model considered is more general than those examined in the literature in that (i) the measurement error is non-classical, in the sense that it is allowed to be correlated with the true regressors, and (ii) the measurement error and the idiosyncratic error can be serially correlated. Notably, the proposed estimator does not require any instrumental variables to deal with the endogeneity. The finite sample evidence confirms that the proposed estimator performs well. We revisit the investment model and theoretically illustrate that measurement error is negatively correlated with Tobin's marginal $q$, which is empirically supported by applying the proposed method to US manufacturing firm data for the period 2002-2016. Furthermore, we find a structural break in 2008: cash flow is insignificant before 2007 but becomes significant after 2009.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:dpr:wpaper:1188&r=
  5. By: Ibtihal Ferwana; Lav R. Varshney
    Abstract: It is crucial to successfully quantify causal effects of a policy intervention to determine whether the policy achieved the desired outcomes. We present a deterministic approach to a classical method of policy evaluation, synthetic control (Abadie and Gardeazabal, 2003), that estimates the unobservable outcome of a treatment unit using ellipsoidal optimal recovery (EOpR). EOpR provides policy evaluators with "worst-case" outcomes and "typical" outcomes to help in decision making. It is an approximation-theoretic technique, related to the theory of principal components, that recovers unknown observations given a learned signal class and a set of known observations. We show that EOpR can improve pre-treatment fit and reduce the bias of post-treatment estimation relative to other econometric methods. Beyond recovering the unit of interest, an advantage of EOpR is that it produces worst-case bounds alongside the estimates produced by the recovery. We assess our approach on artificially generated data and on datasets commonly used in the econometrics literature, and also derive results in the context of the COVID-19 pandemic. Such an approach is novel in the econometrics literature on causality and policy evaluation.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.06729&r=
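    For context, here is a minimal Python sketch of the classical synthetic control baseline (simplex-weighted pre-treatment fit) that EOpR builds on and is compared against; it does not implement ellipsoidal optimal recovery, and the data are simulated.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        T0, T, J = 30, 40, 8                  # pre-period length, total, donors
        donors = rng.normal(0, 1, (T, J)).cumsum(axis=0)
        treated = donors[:, :3].mean(axis=1) + rng.normal(0, 0.2, T)
        treated[T0:] += 2.0                   # treatment effect after T0

        def pre_fit_loss(w):
            return ((treated[:T0] - donors[:T0] @ w) ** 2).sum()

        res = minimize(pre_fit_loss, np.full(J, 1 / J),
                       bounds=[(0, 1)] * J,
                       constraints=({"type": "eq", "fun": lambda w: w.sum() - 1},),
                       method="SLSQP")
        synthetic = donors @ res.x            # counterfactual outcome path
        effect = treated[T0:] - synthetic[T0:]
        print(f"estimated post-treatment effect: {effect.mean():.2f} (true: 2.0)")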
  6. By: Marc Hallin; Hang Liu
    Abstract: Revisiting the pseudo-Gaussian tests of Chitturi (1974), Hosking (1980), and Li and McLeod (1981) for VARMA models from a Le Cam perspective, we first provide a more precise and rigorous description of the asymptotic behavior of the multivariate portmanteau test statistic. Then, based on the concepts of center-outward ranks and signs recently developed in Hallin et al. (2021), we propose a class of multivariate portmanteau rank- and sign-based test statistics which, under the null hypothesis and under a broad family of innovation densities, can be approximated by an asymptotically chi-square variable. The asymptotic properties of these tests are derived; simulations demonstrate their advantages over their classical pseudo-Gaussian counterpart.
    Keywords: Multivariate ranks and signs, Measure transportation, Distribution-freeness, Le Cam’s asymptotic theory, Multivariate time series
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/349259&r=
  7. By: Sung Hoon Choi; Donggyu Kim
    Abstract: Several large volatility matrix inference procedures have been developed based on the latent factor model. They often assume that a few common factors account for the volatility dynamics. However, several studies have demonstrated the presence of local factors. In particular, when analyzing the global stock market, we often observe that nation-specific factors explain their own country's volatility dynamics. To account for this, we propose the Double Principal Orthogonal complEment Thresholding (Double-POET) method, based on multi-level factor models, and establish its asymptotic properties. Furthermore, we demonstrate the drawback of using the regular principal orthogonal complement thresholding (POET) when a local factor structure exists. We also describe the blessing of dimensionality of using Double-POET for local covariance matrix estimation. Finally, we investigate the performance of the Double-POET estimator in an out-of-sample portfolio allocation study using international stocks from 20 financial markets.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.12323&r=
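    A stylized Python sketch of the two-level idea behind Double-POET: strip a global principal component, then country-level ones, and soft-threshold the residual covariance. The factor counts and the threshold level are illustrative assumptions, and the low-rank parts of the final estimator are omitted.

        import numpy as np

        rng = np.random.default_rng(4)
        T, countries, per = 500, 4, 25
        p = countries * per
        g = rng.normal(0, 1, (T, 1))                   # one global factor
        X = np.zeros((T, p))
        for c in range(countries):
            f_c = rng.normal(0, 1, (T, 1))             # one national factor
            cols = slice(c * per, (c + 1) * per)
            X[:, cols] = (g @ rng.normal(1, 0.2, (1, per))
                          + f_c @ rng.normal(1, 0.2, (1, per))
                          + rng.normal(0, 1, (T, per)))

        def pca_residual(Y, k):
            # remove the top-k principal components and return the residuals
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return Y - (U[:, :k] * s[:k]) @ Vt[:k]

        R = pca_residual(X, 1)                         # strip the global factor
        for c in range(countries):                     # strip national factors
            cols = slice(c * per, (c + 1) * per)
            R[:, cols] = pca_residual(R[:, cols], 1)

        S = R.T @ R / T                                # residual sample covariance
        thr = 2 * np.sqrt(np.log(p) / T)               # illustrative threshold
        off = ~np.eye(p, dtype=bool)
        S[off] = np.sign(S[off]) * np.maximum(np.abs(S[off]) - thr, 0)
        print(f"surviving off-diagonal share: {(S[off] != 0).mean():.3f}")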
  8. By: Shuowen Chen; Hiroaki Kaido
    Abstract: Economic models exhibit incompleteness for various reasons. They are, for example, consequences of strategic interaction, state dependence, or self-selection. Whether a model makes such incomplete predictions or not often has policy-relevant implications. We provide a novel test of model incompleteness using a score-based statistic and derive its asymptotic properties. The test is computationally tractable because it suffices to estimate nuisance parameters only under the null hypothesis of model completeness. We illustrate the test by applying it to a model of market entry and a triangular model with a set-valued control function.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.11281&r=
  9. By: Sebastian Galiani; Brian Quistorff
    Abstract: We review, from a practical standpoint, the evolving literature on assessing external validity (EV) of estimated treatment effects. We provide an implementation and real-world assessment of the general EV measures developed in Bo and Galiani (2021). In the context of estimating conditional average treatment effect models for assessing external validity, we provide a novel method utilizing the Group Lasso (Yuan and Lin, 2006) to estimate a tractable regression-based model. This approach can perform better when settings have differing covariate distributions and allows for easily extrapolating the average treatment effect to new settings. We apply these measures to a set of identical field experiments conducted in three different countries (Galiani et al., 2017).
    JEL: C55
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:30398&r=
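    A minimal Python sketch of a regression-based conditional-average-treatment-effect model with a group lasso penalty, fit here by proximal gradient descent on simulated data; the group structure, penalty level, and estimation routine are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(5)
        n, d = 2000, 6
        X = rng.normal(0, 1, (n, d))
        Tr = rng.integers(0, 2, n)                 # randomized treatment indicator
        cate = 1.0 + 0.8 * X[:, 0]                 # effect varies with X1 only
        y = X @ rng.normal(0, 0.5, d) + Tr * cate + rng.normal(0, 1, n)

        # Design: main effects, the treatment dummy, and treatment-covariate
        # interactions; each interaction gets its own penalty group.
        Z = np.column_stack([X, Tr, Tr[:, None] * X])
        groups = [np.arange(d), np.array([d])] + \
                 [np.array([d + 1 + j]) for j in range(d)]

        def prox(b, t):
            # blockwise soft-thresholding (proximal operator of the group penalty)
            for g in groups:
                nrm = np.linalg.norm(b[g])
                if nrm > 0:
                    b[g] *= max(0.0, 1 - t / nrm)
            return b

        lam = 100.0
        step = 1 / np.linalg.norm(Z, 2) ** 2       # 1 / Lipschitz constant
        b = np.zeros(Z.shape[1])
        for _ in range(3000):                      # proximal gradient iterations
            b = prox(b - step * Z.T @ (Z @ b - y), step * lam)

        print("surviving interactions:", np.flatnonzero(np.abs(b[d + 1:]) > 1e-8))

    With this penalty level, only the interaction with the first covariate should survive, matching the simulated effect heterogeneity.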
  10. By: Barrera, Carlos
    Abstract: The paper proposes a method for simultaneously estimating the treatment effects of a change in a policy variable on a countable set of interrelated outcome variables (different moments from the same probability density function). First, it defines a non-Gaussian probability density function as the outcome variable. Second, it uses a functional regression to explain the density in terms of a set of scalar variables. From both the observed and the fitted probability density functions, two sets of interrelated moments are then obtained by simulation. Finally, a set of difference-in-differences estimators can be defined from the available pairs of moments in the sample. A stylized application provides a 29-moment characterization of the direct treatment effects of the Peruvian Central Bank’s forecasts on two sequences of Peruvian firms’ probability densities of expectations (for inflation, π, and real growth, g) during 2004-2015.
    JEL: C15 C30 E37 E47 E58 G14
    Date: 2022–08–19
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:114258&r=
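    A stripped-down Python illustration of the moment-by-moment difference-in-differences idea: compute the first K moments of the observed and fitted (counterfactual) expectation densities before and after the policy change, then difference the differences. The functional regression step is replaced here by hand-made simulated densities.

        import numpy as np

        rng = np.random.default_rng(6)
        K = 5
        draws = {("obs", "pre"):  rng.normal(3.0, 1.0, 10_000),
                 ("obs", "post"): rng.normal(2.5, 0.8, 10_000),
                 ("fit", "pre"):  rng.normal(3.0, 1.0, 10_000),  # counterfactual
                 ("fit", "post"): rng.normal(2.9, 1.0, 10_000)}  # no-treatment path

        def moments(x, K):
            # first K raw moments, obtained from simulated draws
            return np.array([np.mean(x ** k) for k in range(1, K + 1)])

        did = (moments(draws["obs", "post"], K) - moments(draws["obs", "pre"], K)) \
            - (moments(draws["fit", "post"], K) - moments(draws["fit", "pre"], K))
        for k, dd in enumerate(did, 1):
            print(f"moment {k}: DiD estimate {dd:+.3f}")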
  11. By: Julian Granna; Wolfgang Brunauer; Stefan Lang
    Abstract: The most widely used approaches in hedonic price modelling of real estate data and price index construction are the Time Dummy and Imputation methods. Both methods, however, represent extreme approaches to regression modelling of real estate data. In the time dummy approach, the data are pooled and the dependence on time is modelled solely via a (nonlinear) time effect through dummies. Possible heterogeneity of effects across time, i.e. interactions with time, is completely ignored. Hence, the approach is prone to biased estimates due to underfitting. The other extreme is the imputation method, where separate regression models are estimated for each time period. While this approach naturally includes interactions with time, it tends to overfit and therefore yields estimates with increased variability. In this paper, we therefore propose a generalized approach of which the time dummy and imputation methods are special cases. This is achieved by re-expressing the separate regression models of the imputation method as an equivalent global regression model with interactions of all available regressors with time. Our approach is applied to a large dataset of offer prices for private single as well as semi-detached houses in Germany. More specifically, we a) compute a Time Dummy Method index based on a Generalized Additive Model allowing for smooth effects of the continuous covariates on the price, utilizing the pooled data set, b) construct an Imputation Approach model, fitting a regression model separately for each time period, and c) develop a global model that captures only the relevant interactions of the covariates with time. An important methodological aspect in developing the global model is the use of model-based recursive partitioning trees to define data-driven and parsimonious time intervals.
    Keywords: hedonic models
    Date: 2022–12
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2022-12&r=
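    The nesting of the three specifications can be made concrete with regression formulas. A minimal Python sketch on simulated data (the variable names and linear functional form are illustrative; the paper uses generalized additive models with smooth effects):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 3000
        df = pd.DataFrame({"area": rng.uniform(50, 250, n),
                           "period": rng.integers(0, 4, n)})
        # DGP: the area effect drifts over time (heterogeneity across periods)
        df["log_price"] = (10 + (0.004 + 0.001 * df.period) * df.area
                           + 0.05 * df.period + rng.normal(0, 0.1, n))

        # (a) time dummy: pooled data, no interactions with time
        time_dummy = smf.ols("log_price ~ area + C(period)", df).fit()
        # (b) imputation: a separate regression for each time period
        imputation = {t: smf.ols("log_price ~ area", df[df.period == t]).fit()
                      for t in range(4)}
        # (c) global model: all regressors interacted with time
        global_mod = smf.ols("log_price ~ area * C(period)", df).fit()

        print("time-dummy AIC:", round(time_dummy.aic, 1))
        print("global AIC    :", round(global_mod.aic, 1))
        print("imputation area effects:",
              [round(m.params["area"], 4) for m in imputation.values()])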
  12. By: Poon, Aubrey (Örebro University School of Business); Zhu, Dan (Monash University)
    Abstract: We develop a novel multinomial logistic model to detect and forecast concurrent recessions across multiple countries. The key advantage of our proposed framework is that we can detect recessions across countries using the additional informational content from the cross-country panel feature of the data. Furthermore, in a simulation study, we show that our proposed model accurately captures the true underlying probabilities. Finally, we apply our proposed framework in an empirical application to the US and the UK. In terms of recession forecastability, the multinomial logistic model with both countries’ interest rate spreads and the weekly US NFCI as exogenous predictors was the best-performing model. In the counterfactual analysis, we find that a previous US recession increases the probability of a recession occurring jointly in the US and the UK. However, a tightening of the US NFCI and a negative interest rate spread in both countries increase the probability of a recession occurring exclusively in the US and in the UK, respectively.
    Keywords: Recession prediction; multinomial logistic; cross-country; mixed frequency; Bayesian estimation
    JEL: C22 C25 E32 E37
    Date: 2022–09–07
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2022_011&r=
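    A minimal Python sketch of a four-state multinomial logit for joint recession outcomes (neither, US only, UK only, both); the predictors and data below are simulated stand-ins for the interest rate spreads and the weekly US NFCI used in the paper, and the mixed-frequency Bayesian estimation is not reproduced.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(8)
        T = 600
        spread_us = rng.normal(1.0, 1.0, T)   # US term spread (illustrative)
        spread_uk = rng.normal(1.0, 1.0, T)   # UK term spread (illustrative)
        nfci = rng.normal(0.0, 1.0, T)        # US financial conditions index

        # Latent recession propensities: inverted spreads and tight conditions
        p_us = 1 / (1 + np.exp(1.0 + 1.2 * spread_us - 0.8 * nfci))
        p_uk = 1 / (1 + np.exp(1.0 + 1.0 * spread_uk - 0.5 * nfci))
        rec_us = rng.random(T) < p_us
        rec_uk = rng.random(T) < p_uk
        state = rec_us.astype(int) + 2 * rec_uk.astype(int)   # 0, 1, 2, 3

        X = np.column_stack([spread_us, spread_uk, nfci])
        model = LogisticRegression(max_iter=1000).fit(X, state)

        x_new = [[-0.5, 0.8, 1.5]]            # inverted US spread, tight NFCI
        probs = model.predict_proba(x_new)[0].round(3)
        print(dict(zip(["neither", "US only", "UK only", "both"], probs)))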
  13. By: Rodrigo Alfaro; Mathias Drehmann
    Abstract: We show that the trend of the one-sided HP filter can be asymptotically approximated by the Holt-Winters (HW) filter. The latter has an elegant moving-average representation and simplifies the computation of trends tremendously. We confirm the accuracy of this approximation empirically by comparing the one-sided HP filter with the HW filter for generating credit-to-GDP gaps. We find negligible differences, most of them concentrated at the beginning of the sample.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:959&r=
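    The correspondence can be checked numerically. Below is a minimal Python sketch comparing the one-sided HP trend (the HP filter re-run on expanding samples, keeping the last point) with a Holt-Winters local-trend recursion; the smoothing parameters are illustrative choices, not the mapping derived in the paper.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def hp_trend(y, lam):
            # standard two-sided HP trend via a sparse linear solve
            n = len(y)
            D = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
            return spsolve((sp.eye(n) + lam * D.T @ D).tocsc(), y)

        def one_sided_hp(y, lam):
            # keep only the final trend point of each expanding-sample HP run
            return np.array([hp_trend(y[:t + 1], lam)[-1]
                             for t in range(2, len(y))])

        def holt_winters(y, alpha, beta):
            level, slope, out = y[0], y[1] - y[0], []
            for obs in y:
                prev = level
                level = alpha * obs + (1 - alpha) * (level + slope)
                slope = beta * (level - prev) + (1 - beta) * slope
                out.append(level)
            return np.array(out)

        rng = np.random.default_rng(9)
        y = np.cumsum(rng.normal(0.05, 1.0, 300))   # trending series (illustrative)
        hp1 = one_sided_hp(y, lam=1600)
        hw = holt_winters(y, alpha=0.05, beta=0.05)[2:]
        print(f"correlation of the two trends: {np.corrcoef(hp1, hw)[0, 1]:.3f}")

    The HW recursion requires a single pass through the data, which is the computational advantage the abstract highlights over re-solving the HP problem at every date.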
  14. By: Efrem Castelnuovo; Kerem Tuzcuoglu; Luis Uzeda
    Abstract: We propose a new empirical framework that jointly decomposes the conditional variance of economic time series into a common and a sector-specific uncertainty component. We apply our framework to a large dataset of disaggregated industrial production series for the US economy. Our results indicate that common uncertainty and uncertainty linked to non-durable goods both recorded their pre-pandemic global peaks during the 1973-75 recession. In contrast, durable goods uncertainty recorded its pre-pandemic peak during the global financial crisis of 2008-09. Vector autoregression exercises identify unexpected changes in durable goods uncertainty as drivers of downturns that are both economically and statistically significant, while unexpected hikes in non-durable goods uncertainty are expansionary. Our findings suggest that: (i) uncertainty is heterogeneous at a sectoral level; and (ii) durable goods uncertainty may drive some business cycle effects typically attributed to aggregate uncertainty.
    Keywords: Business fluctuations and cycles; Econometric and statistical methods; Monetary policy and uncertainty
    JEL: E32 E44 C51 C55
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:22-38&r=
  15. By: Jun Lu; Danny Ding
    Abstract: Over the decades, the Markowitz framework has been used extensively in portfolio analysis, though it puts too much emphasis on the analysis of market uncertainty rather than on trend prediction. Meanwhile, generative adversarial networks (GANs), conditional GANs (CGANs), and autoencoding CGANs (ACGANs) have been explored to generate financial time series and extract features that can help portfolio analysis. The limitation of the CGAN and ACGAN frameworks lies in putting too much emphasis on generating series and finding the internal trends of the series rather than predicting future trends. In this paper, we introduce a hybrid approach to conditional GANs, based on deep generative models, that learns the internal trend of historical data while modeling market uncertainty and future trends. We evaluate the model on several real-world datasets from both the US and European markets, and show that the proposed HybridCGAN and HybridACGAN models lead to better portfolio allocation than the existing Markowitz, CGAN, and ACGAN approaches.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.07159&r=
  16. By: Xavier Brouty; Matthieu Garcin
    Abstract: We determine the amount of information contained in a time series of price returns at a given time scale by using a widespread tool of information theory, namely Shannon entropy, applied to a symbolic representation of this time series. By deriving the exact and asymptotic distributions of this market information indicator in the case where the efficient market hypothesis holds, we develop a statistical test of market efficiency. We apply it to a real dataset of stock indices, single stocks, and cryptocurrencies, for which we are able to determine at each date whether the efficient market hypothesis is to be rejected, with respect to a given confidence level.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.11976&r=
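    A minimal Python sketch of the idea: symbolize returns into up/down moves, measure the Shannon entropy of length-L blocks, and compare with the entropy distribution under the efficient-market null. Here the null is simulated with coin flips rather than the exact and asymptotic distributions derived in the paper.

        import numpy as np

        def block_entropy(sym, L):
            # Shannon entropy (bits) of overlapping length-L symbol blocks
            blocks = [tuple(sym[i:i + L]) for i in range(len(sym) - L + 1)]
            _, counts = np.unique(blocks, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(10)
        n, L, B = 2000, 3, 500
        returns = rng.standard_t(4, n) * 0.01      # stand-in for price returns
        sym = (returns > 0).astype(int)            # binary symbolization

        H_obs = block_entropy(sym, L)
        H_null = np.array([block_entropy(rng.integers(0, 2, n), L)
                           for _ in range(B)])
        p_value = (H_null <= H_obs).mean()         # low entropy = predictability
        print(f"H_obs = {H_obs:.4f} bits (max {L}), p-value = {p_value:.3f}")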
  17. By: Danial Saef; Yuanrong Wang; Tomaso Aste
    Abstract: The increasing adoption of Digital Assets (DAs), such as Bitcoin (BTC), raises the need for accurate option pricing models, yet existing methodologies fail to cope with the volatile nature of the emerging DAs. Many models have been proposed to address the unorthodox market dynamics and frequent disruptions in the microstructure caused by the non-stationarity and peculiar statistics of DA markets. However, they are either prone to the curse of dimensionality, as additional complexity is required to employ traditional theories, or they overfit historical patterns that may never repeat. Instead, we leverage recent advances in market regime (MR) clustering with the Implied Stochastic Volatility Model (ISVM). Time-regime clustering is a temporal clustering method that clusters the historic evolution of a market into different volatility periods, accounting for non-stationarity. ISVM can incorporate investor expectations in each of the sentiment-driven periods by using implied volatility (IV) data. In this paper, we apply this integrated time-regime clustering and ISVM method (termed MR-ISVM) to high-frequency data on BTC options at the popular trading platform Deribit. We demonstrate that MR-ISVM helps overcome the burden of complex adaptation to jumps in the higher-order characteristics of option pricing models. This allows us to price the market based on the expectations of its participants in an adaptive fashion.
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2208.12614&r=

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.