nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒01‒24
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with Unmeasured Confounding By Jacob Dorn; Kevin Guo; Nathan Kallus
  2. Volatility of volatility estimation: central limit theorems for the Fourier transform estimator and empirical study of the daily time series stylized facts By Giulia Livieri; Maria Elvira Mancino; Stefano Marmi; Giacomo Toscano
  3. Nonparametric Treatment Effect Identification in School Choice By Jiafeng Chen
  4. Efficient Estimation of State-Space Mixed-Frequency VARs: A Precision-Based Approach By Joshua C. C. Chan; Aubrey Poon; Dan Zhu
  5. A parsimonious test of constancy of a positive definite correlation matrix in a multivariate time-varying GARCH model By Jian Kang; Johan Stax Jakobsen; Annastiina Silvennoinen; Timo Teräsvirta; Glen Wade
  6. A Finite Sample Theorem for Longitudinal Causal Inference with Machine Learning: Long Term, Dynamic, and Mediated Effects By Rahul Singh
  7. Semiparametric Conditional Factor Models: Estimation and Inference By Qihui Chen; Nikolai Roussanov; Xiaoliang Wang
  8. The Oracle estimator is suboptimal for global minimum variance portfolio optimisation By Christian Bongiorno; Damien Challet
  9. Deep Quantile and Deep Composite Model Regression By Tobias Fissler; Michael Merz; Mario V. Wüthrich
  10. A Bayesian take on option pricing with Gaussian processes By Martin Tegner; Stephen Roberts
  11. Assessing the overall validity of randomised controlled trials By Krauss, Alexander
  12. Real-Time Forecast of DSGE Models with Time-Varying Volatility in GARCH Form By Sergey Ivashchenko; Semih Emre Cekin; Rangan Gupta
  13. Fractional integration and cointegration By Javier Hualde; Morten Ørregaard Nielsen

  1. By: Jacob Dorn; Kevin Guo; Nathan Kallus
    Abstract: We study the problem of constructing bounds on the average treatment effect in the presence of unobserved confounding under the marginal sensitivity model of Tan (2006). Combining an existing characterization involving adversarial propensity scores with a new distributionally robust characterization of the problem, we propose novel estimators of these bounds that we call "doubly-valid/doubly-sharp" (DVDS) estimators. Double sharpness corresponds to the fact that DVDS estimators consistently estimate the tightest possible (i.e., sharp) bounds implied by the sensitivity model even when one of two nuisance parameters is misspecified and achieve semiparametric efficiency when all nuisance parameters are suitably consistent. Double validity is an entirely new property for partial identification: DVDS estimators still provide valid, though not sharp, bounds even when most nuisance parameters are misspecified. In fact, even in cases when DVDS point estimates fail to be asymptotically normal, standard Wald confidence intervals may remain valid. In the case of binary outcomes, the DVDS estimators are particularly convenient and possess a closed-form expression in terms of the outcome regression and propensity score. We demonstrate the DVDS estimators in a simulation study as well as a case study of right heart catheterization.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.11449&r=
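    Editor's sketch (Python): a minimal plug-in illustration of bounds under Tan's (2006) marginal sensitivity model. It computes the basic linear-fractional IPW bound on E[Y(1)] in the spirit of Zhao, Small and Bhattacharya (2019), not the paper's DVDS estimator; all function and variable names are the editor's own.
      import numpy as np

      def _upper(y1, lo, hi):
          # maximise sum(w*y)/sum(w) over the box w_i in [lo_i, hi_i]: the
          # optimum puts the upper weight on the largest outcomes, so it
          # suffices to sort y in decreasing order and sweep the split point
          order = np.argsort(-y1)
          ys, los, his = y1[order], lo[order], hi[order]
          best = -np.inf
          for k in range(len(ys) + 1):
              w = np.concatenate([his[:k], los[k:]])
              best = max(best, (w * ys).sum() / w.sum())
          return best

      def msm_ipw_bounds(y, a, ehat, lam):
          # bounds on E[Y(1)]: under the marginal sensitivity model the true
          # inverse propensity weight 1/e(X,U) may deviate from the nominal
          # 1/ehat(X) by at most a factor lam >= 1 in odds terms
          y1, e1 = y[a == 1], ehat[a == 1]
          lo = 1 + (1 / e1 - 1) / lam
          hi = 1 + (1 / e1 - 1) * lam
          return -_upper(-y1, lo, hi), _upper(y1, lo, hi)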
  2. By: Giulia Livieri; Maria Elvira Mancino; Stefano Marmi; Giacomo Toscano
    Abstract: We study the asymptotic normality of two estimators of the integrated volatility of volatility based on the Fourier methodology, which does not require the pre-estimation of the spot volatility. We show that the bias-corrected estimator reaches the optimal rate 1/4, while the estimator without bias-correction has a slower convergence rate and a smaller asymptotic variance. Additionally, we provide simulation results that support the theoretical asymptotic distribution of the rate-efficient estimator and show the accuracy of the Fourier estimator in comparison with a rate-optimal estimator based on the pre-estimation of the spot volatility. Finally, we reconstruct the daily volatility of volatility of the S&P500 and EUROSTOXX50 indices over long samples via the rate-optimal Fourier estimator and provide novel insight into the existence of stylized facts about its dynamics.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.14529&r=
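    Editor's sketch (Python): a schematic rendering of the Fourier methodology, assuming time rescaled to [0, 2*pi] and M <= N. It returns an uncorrected integrated variance and a vol-of-vol proxy obtained by iterating the Bohr convolution on the variance coefficients; the paper's bias correction and the tuning of the cutting frequencies N and M, which drive the convergence rates, are not addressed here, and all names are the editor's own.
      import numpy as np

      def fourier_volvol(logp, times, N, M):
          t = 2 * np.pi * (times - times[0]) / (times[-1] - times[0])
          dp = np.diff(logp)                          # log-return increments
          # Fourier coefficients of dp for |k| <= 2N (needed by the convolution)
          ks = np.arange(-2 * N, 2 * N + 1)
          coef = np.exp(-1j * np.outer(ks, t[:-1])) @ dp / (2 * np.pi)
          c = dict(zip(ks.tolist(), coef))
          # Bohr convolution: Fourier coefficients of the spot variance path
          def c_var(k):
              return (2 * np.pi / (2 * N + 1)) * sum(
                  c[s] * c[k - s] for s in range(-N, N + 1))
          iv = 2 * np.pi * c_var(0).real              # integrated variance
          # second pass: differentiate the variance path in the Fourier domain
          # (c_k(dv) ~ i*k*c_k(v)) and convolve once more for the vol-of-vol
          c_dv = {k: 1j * k * c_var(k) for k in range(-M, M + 1)}
          volvol = (2 * np.pi) ** 2 / (2 * M + 1) * sum(
              (c_dv[k] * c_dv[-k]).real for k in range(-M, M + 1))
          return iv, volvol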
  3. By: Jiafeng Chen
    Abstract: We study identification and estimation of treatment effects in common school choice settings, under unrestricted heterogeneity in individual potential outcomes. We propose two notions of identification, corresponding to design- and sampling-based uncertainty, respectively. We characterize the set of causal estimands that are identified for a large variety of school choice mechanisms, including ones that feature both random and non-random tie-breaking; we discuss their policy implications. We also study the asymptotic behavior of nonparametric estimators for these causal estimands. Lastly, we connect our approach to the propensity score approach proposed in Abdulkadiroglu, Angrist, Narita, and Pathak (2017a, forthcoming), and derive the implicit estimands of the latter approach, under fully heterogeneous treatment effects.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.03872&r=
  4. By: Joshua C. C. Chan; Aubrey Poon; Dan Zhu
    Abstract: State-space mixed-frequency vector autoregressions are now widely used for nowcasting. Despite their popularity, estimating such models can be computationally intensive, especially for large systems with stochastic volatility. To tackle the computational challenges, we propose two novel precision-based samplers to draw the missing observations of the low-frequency variables in these models, building on recent advances in the band and sparse matrix algorithms for state-space models. We show via a simulation study that the proposed methods are more numerically accurate and computationally efficient compared to standard Kalman-filter based methods. We demonstrate how the proposed method can be applied in two empirical macroeconomic applications: estimating the monthly output gap and studying the response of GDP to a monetary policy shock at the monthly frequency. Results from these two empirical applications highlight the importance of incorporating high-frequency indicators in macroeconomic models.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.11315&r=
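    Editor's sketch (Python): in a linear Gaussian state-space model the missing low-frequency observations are, conditionally on the data, jointly Gaussian with a banded precision matrix, so a precision-based draw reduces to two triangular solves. Dense Cholesky is used here for clarity; the computational gains in the paper come from exploiting the band/sparse structure of Q. Names are the editor's own.
      import numpy as np

      def precision_sampler(Q, b, rng):
          # draw x ~ N(mu, Q^{-1}) with mean mu solving Q mu = b
          L = np.linalg.cholesky(Q)                        # Q = L L'
          mu = np.linalg.solve(L.T, np.linalg.solve(L, b))
          z = rng.standard_normal(len(b))
          return mu + np.linalg.solve(L.T, z)              # cov(L'^{-1} z) = Q^{-1}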
  5. By: Jian Kang (School of Finance, Dongbei University of Finance and Economics); Johan Stax Jakobsen (Copenhagen Business School and CREATES); Annastiina Silvennoinen (NCER, Queensland University of Technology); Timo Teräsvirta (Aarhus University, CREATES, C.A.S.E, Humboldt-Universität zu Berlin); Glen Wade (NCER, Queensland University of Technology)
    Abstract: We construct a parsimonious test of constancy of the correlation matrix in the multivariate conditional correlation GARCH model, where the GARCH equations are time-varying. The alternative to constancy is that the correlations change deterministically as a function of time. The alternative is a covariance matrix, not a correlation matrix, so the test may be viewed as a general test of stability of a constant correlation matrix. The size of the test in finite samples is studied by simulation. An empirical example is given.
    Keywords: Deterministically varying correlation, multiplicative time-varying GARCH, multivariate GARCH, nonstationary volatility, smooth transition GARCH
    JEL: C32 C52 C58
    Date: 2022–01–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2022-01&r=
  6. By: Rahul Singh
    Abstract: I construct and justify confidence intervals for longitudinal causal parameters estimated with machine learning. Longitudinal parameters include long term, dynamic, and mediated effects. I provide a nonasymptotic theorem for any longitudinal causal parameter estimated with any machine learning algorithm that satisfies a few simple, interpretable conditions. The main result encompasses local parameters defined for specific demographics as well as proximal parameters defined in the presence of unobserved confounding. Formally, I prove consistency, Gaussian approximation, and semiparametric efficiency. The rate of convergence is $n^{-1/2}$ for global parameters, and it degrades gracefully for local parameters. I articulate a simple set of conditions to translate mean square rates into statistical inference. A key feature of the main result is a new multiple robustness to ill-posedness for proximal causal inference in longitudinal settings.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.14249&r=
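    Editor's sketch (Python): the simplest static special case of the recipe the abstract describes (machine-learn the nuisances, cross-fit, average a doubly robust score, read off a Wald interval), shown here for the ordinary ATE with sklearn learners. The paper's longitudinal, local and proximal parameters require more elaborate scores; this is only the template.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
      from sklearn.model_selection import KFold

      def dml_ate(y, d, X, n_folds=5, seed=0):
          # y: outcome, d: binary treatment (0/1), X: covariate matrix
          psi = np.zeros(len(y))
          for tr, te in KFold(n_folds, shuffle=True, random_state=seed).split(X):
              e = GradientBoostingClassifier().fit(X[tr], d[tr]).predict_proba(X[te])[:, 1]
              e = np.clip(e, 0.01, 0.99)                   # trim extreme propensities
              m1 = GradientBoostingRegressor().fit(X[tr][d[tr] == 1], y[tr][d[tr] == 1]).predict(X[te])
              m0 = GradientBoostingRegressor().fit(X[tr][d[tr] == 0], y[tr][d[tr] == 0]).predict(X[te])
              # doubly robust (AIPW) score, evaluated on the held-out fold
              psi[te] = (m1 - m0 + d[te] * (y[te] - m1) / e
                         - (1 - d[te]) * (y[te] - m0) / (1 - e))
          ate = psi.mean()
          se = psi.std(ddof=1) / np.sqrt(len(y))
          return ate, (ate - 1.96 * se, ate + 1.96 * se)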
  7. By: Qihui Chen; Nikolai Roussanov; Xiaoliang Wang
    Abstract: This paper introduces a simple and tractable sieve estimation of semiparametric conditional factor models with latent factors. We establish large-$N$-asymptotic properties of the estimators and the tests without requiring large $T$. We also develop a simple bootstrap procedure for conducting inference about the conditional pricing errors as well as the shapes of the factor loading functions. These results enable us to estimate the conditional factor structure of a large set of individual assets by utilizing arbitrary nonlinear functions of a number of characteristics without the need to pre-specify the factors, while allowing us to disentangle the characteristics' role in capturing factor betas from alphas (i.e., undiversifiable risk from mispricing). We apply these methods to the cross-section of individual U.S. stock returns and find strong evidence of large nonzero pricing errors that combine to produce arbitrage portfolios with Sharpe ratios above 3.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.07121&r=
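    Editor's sketch (Python): one concrete instance of the idea of extracting latent factors after projecting returns on functions of characteristics, in the spirit of projected PCA. The polynomial sieve in a single scalar characteristic and all names are the editor's simplifications, not the paper's basis or estimator.
      import numpy as np

      def projected_pca(R, Z, n_factors, degree=3):
          # R: T x N panel of returns; Z: T x N panel of a scalar characteristic
          T, N = R.shape
          proj = np.empty_like(R)
          for t in range(T):
              B = np.vander(Z[t], degree + 1)              # sieve basis: z^p, ..., 1
              coef, *_ = np.linalg.lstsq(B, R[t], rcond=None)
              proj[t] = B @ coef                           # characteristic-spanned part
          # PCA on the projected panel recovers the latent factor paths
          U, s, Vt = np.linalg.svd(proj, full_matrices=False)
          factors = U[:, :n_factors] * s[:n_factors]       # T x K factor paths
          loadings = Vt[:n_factors].T                      # N x K projected betas
          return factors, loadings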
  8. By: Christian Bongiorno; Damien Challet
    Abstract: A common misconception is that the Oracle eigenvalue estimator of the covariance matrix yields the best realized portfolio performance. In reality, the Oracle estimator simply modifies the empirical covariance matrix eigenvalues so as to minimize the Frobenius distance between the filtered and the realized covariance matrices. This leads to the best portfolios only when the in-sample eigenvectors coincide with the out-of-sample ones. In all other cases, the optimal eigenvalue correction can be obtained from the solution of a quadratic programming problem. Solving it shows that the Oracle estimator only yields the best portfolios in the limit of infinite data points per asset and only in stationary systems.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.07521&r=
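    Editor's sketch (Python): the two objects the abstract contrasts, for hypothetical in-sample and out-of-sample covariance matrices C_in and C_out. The Oracle filter keeps the in-sample eigenvectors and substitutes the Frobenius-optimal eigenvalues u_i' C_out u_i; comparing the realized variance w' C_out w of the resulting GMV weights with other eigenvalue corrections illustrates why Frobenius-optimal need not be variance-optimal.
      import numpy as np

      def gmv_weights(cov):
          # global minimum variance weights: w = C^{-1} 1 / (1' C^{-1} 1)
          x = np.linalg.solve(cov, np.ones(len(cov)))
          return x / x.sum()

      def oracle_filtered_cov(C_in, C_out):
          # keep in-sample eigenvectors, set eigenvalue i to u_i' C_out u_i,
          # which minimises the Frobenius distance to C_out in that basis
          _, U = np.linalg.eigh(C_in)
          d = np.einsum('ji,jk,ki->i', U, C_out, U)
          return U @ np.diag(d) @ U.T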
  9. By: Tobias Fissler; Michael Merz; Mario V. Wüthrich
    Abstract: A main difficulty in actuarial claim size modeling is that there is no simple off-the-shelf distribution that simultaneously provides a good distributional model for the main body and the tail of the data. In particular, covariates may have different effects for small and for large claim sizes. To cope with this problem, we introduce a deep composite regression model whose splicing point is given in terms of a quantile of the conditional claim size distribution rather than a constant. To facilitate M-estimation for such models, we introduce and characterize the class of strictly consistent scoring functions for the triplet consisting of a quantile, as well as the lower and upper expected shortfall beyond that quantile. In a second step, this elicitability result is applied to fit deep neural network regression models. We demonstrate the applicability of our approach and its superiority over classical approaches on a real accident insurance data set.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.03075&r=
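    Editor's sketch (Python/PyTorch): the quantile half of the story only. The pinball loss below is the classical strictly consistent scoring function for a quantile; the paper's contribution is a joint strictly consistent score for the triplet (quantile, lower and upper expected shortfall), which is not reproduced here. Network size and names are the editor's choices.
      import torch
      import torch.nn as nn

      def pinball_loss(pred, y, tau):
          # strictly consistent score for the tau-quantile
          u = y - pred
          return torch.mean(torch.maximum(tau * u, (tau - 1) * u))

      def fit_deep_quantile(X, y, tau=0.9, epochs=500, lr=1e-2):
          # X: (n, p) float tensor, y: (n,) float tensor
          net = nn.Sequential(nn.Linear(X.shape[1], 32), nn.Tanh(), nn.Linear(32, 1))
          opt = torch.optim.Adam(net.parameters(), lr=lr)
          for _ in range(epochs):
              opt.zero_grad()
              loss = pinball_loss(net(X).squeeze(-1), y, tau)
              loss.backward()
              opt.step()
          return net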
  10. By: Martin Tegner; Stephen Roberts
    Abstract: Local volatility is a versatile option pricing model due to its state dependent diffusion coefficient. Calibration is, however, non-trivial as it involves both proposing a hypothesis model of the latent function and a method for fitting it to data. In this paper we present novel Bayesian inference with Gaussian process priors. We obtain a rich representation of the local volatility function with a probabilistic notion of uncertainty attached to the calibration. We propose an inference algorithm and apply our approach to S&P 500 market data.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.03718&r=
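    Editor's sketch (Python): generic Gaussian process regression on a toy volatility surface, only to illustrate the "posterior mean plus pointwise uncertainty" output the abstract refers to. The paper instead places the GP prior on the latent local volatility function and infers it through the pricing map, a harder inverse problem; the toy data and kernel choices here are entirely the editor's.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      K = rng.uniform(80, 120, 200)                    # strikes
      T = rng.uniform(0.1, 2.0, 200)                   # maturities
      iv = 0.2 + 0.001 * (K - 100) ** 2 / (1 + T) + 0.01 * rng.standard_normal(200)

      X = np.column_stack([K, T])
      gp = GaussianProcessRegressor(
          kernel=RBF(length_scale=[10.0, 0.5]) + WhiteKernel(1e-4),
          normalize_y=True).fit(X, iv)
      mean, sd = gp.predict(X, return_std=True)        # surface and uncertainty band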
  11. By: Krauss, Alexander
    Abstract: In the biomedical, behavioural and social sciences, the leading method used to estimate causal effects is the randomised controlled trial (RCT), generally viewed as both the source and justification of the most valid evidence. In studying the foundation and theory behind RCTs, the existing literature analyses in isolation important single issues and biases that influence causal outcomes in trials (such as randomisation, statistical probabilities and placebos). The common account of biased causal inference is described in a general way in terms of probabilistic imbalances between trial groups. This paper expands the common account of causal bias by distinguishing between the range of biases arising between trial groups, within one of the groups, or across the entire sample during trial design, implementation and analysis. This is done by providing concrete examples from highly influential RCT studies. In going beyond the existing RCT literature, the paper provides a broader, practice-based account of causal bias that specifies the between-group, within-group and across-group biases that affect the estimated causal results of trials, impacting both the effect size and statistical significance. Within this expanded framework, we can better identify the range of different types of biases we face in practice and address the central question about the overall validity of the RCT method and its causal claims. A study can face several smaller biases (related simultaneously to a smaller sample, a smaller estimated effect, greater unblinding etc.) that generally add up to greater aggregate bias. Though difficult to measure precisely, it is important to assess and provide information in studies on how much different sources of bias, combined, can explain the estimated causal effect. The RCT method is nonetheless often the best we have to inform our policy decisions, and the evidence is strengthened when combined with multiple studies and other methods. Yet there is room for continually improving trials, identifying ways to reduce the biases they face, and increasing their overall validity. Implications are discussed.
    Keywords: philosophy of science; philosophy of medicine; randomised controlled trials; RCTs; bias; validity; internal validity; Marie Curie programme; T&F deal
    JEL: C1
    Date: 2021–11–22
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:112576&r=
  12. By: Sergey Ivashchenko (The North-Western Main Branch of the Bank of Russia; The Institute of Regional Economy Studies (Russian Academy of Sciences); The Financial Research Institute); Semih Emre Cekin (Department of Economics, Turkish-German University, Istanbul, Turkey); Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa)
    Abstract: Recent research shows that time-varying volatility plays a crucial role in nonlinear modeling. Contributing to this literature, we suggest a DSGE-GARCH approach that allows for straightforward computation of DSGE models with time-varying volatility. As an application of our approach, we examine the forecasting performance of the DSGE-GARCH model using Eurozone real-time data. Our findings suggest that the DSGE-GARCH approach is superior in out-of-sample forecasting performance in comparison to various other benchmarks for the forecast of inflation rates, output growth and interest rates, especially in the short term. Comparing our approach to the widely used stochastic volatility specification using in-sample forecasts, we also show that the DSGE-GARCH is superior in in-sample forecast quality and computational efficiency. In addition to these results, our approach reveals interesting properties and dynamics of time-varying correlations (conditional correlations).
    Keywords: DSGE, forecasting, GARCH, stochastic volatility, conditional correlations
    JEL: C32 E30 E37
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:202204&r=
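    Editor's sketch (Python): the GARCH(1,1) recursion that the DSGE-GARCH approach embeds in the shock processes, in its generic textbook form (the paper's multivariate specification and estimation are not reproduced; names are the editor's own).
      import numpy as np

      def garch11_variance(eps, omega, alpha, beta):
          # h_t = omega + alpha * eps_{t-1}^2 + beta * h_{t-1}; requires
          # alpha + beta < 1 for the unconditional variance used to initialise
          h = np.empty(len(eps))
          h[0] = omega / (1 - alpha - beta)
          for t in range(1, len(eps)):
              h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
          return h                   # h[-1] feeds the one-step-ahead forecast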
  13. By: Javier Hualde (Department of Economics, Universidad Pública de Navarra); Morten Ørregaard Nielsen (Aarhus University, Department of Economics and Business Economics and CREATES)
    Abstract: In this chapter we present an overview of the main ideas and methods in the fractional integration and cointegration literature. We do not attempt to give a complete survey of this enormous literature, but rather a more introductory treatment suitable for a researcher or graduate student wishing to learn about this exciting field of research. With this aim, we have surely overlooked many relevant references for which we apologize in advance. Knowledge of standard time series methods, and in particular methods related to nonstationary time series, at the level of a standard graduate course or advanced undergraduate course is assumed.
    Keywords: Arfima model, cofractional, cointegration, fractional Brownian motion, fractional integration, long memory, long-range dependence, nonstationary, strong dependence
    JEL: C22 C32
    Date: 2022–01–10
    URL: http://d.repec.org/n?u=RePEc:aah:create:2022-02&r=
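    Editor's sketch (Python): the fractional difference filter (1-L)^d that underlies the chapter, applied via the binomial expansion pi_0 = 1, pi_k = pi_{k-1}(k-1-d)/k and truncated at the start of the sample. For d in (0, 0.5) the series is stationary with long memory; filtering with the true d should leave approximately short-memory residuals. Names are the editor's own.
      import numpy as np

      def frac_diff(x, d):
          n = len(x)
          pi = np.empty(n)
          pi[0] = 1.0
          for k in range(1, n):
              pi[k] = pi[k - 1] * (k - 1 - d) / k      # (-1)^k * binom(d, k)
          # (1-L)^d x_t = sum_{k=0}^{t} pi_k x_{t-k}
          return np.array([pi[:t + 1] @ x[t::-1] for t in range(n)])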

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.