nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒03‒28
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A Nonparametric Dynamic Network via Multivariate Quantile Autoregressions By Zongwu Cai; Xiyuan Liu
  2. A Unified Nonparametric Test of Transformations on Distribution Functions with Nuisance Parameters By Xingyu Li; Xiaojun Song; Zhenting Sun
  3. A procedure for upgrading linear-convex combination forecasts with an application to volatility prediction By Verena Monschang; Bernd Wilfling
  4. Optimal Forecast under Structural Breaks By Tae-Hwy Lee; Shahnaz Parsaeian; Aman Ullah
  5. Distributional Counterfactual Analysis in High-Dimensional Setup By Ricardo Masini
  6. Leverage, Influence, and the Jackknife in Clustered Regression Models: Reliable Inference Using summclust By James G. MacKinnon; Morten Ørregaard Nielsen; Matthew D. Webb
  7. Identifying the elasticity of substitution between capital and labour. A pooled GMM panel estimator By Thomas von Brasch; Arvid Raknerud; Trond C. Vigtel
  8. Make the Difference! Computationally Trivial Estimators for Grouped Fixed Effects Models By Martin Mugnier
  9. Identification through the Forecast Error Variance Decomposition: an Application to Uncertainty By Andrea Carriero; Alessio Volpicella
  10. Differentially Private Estimation of Heterogeneous Causal Effects By Fengshi Niu; Harsha Nori; Brian Quistorff; Rich Caruana; Donald Ngwe; Aadharsh Kannan
  11. Exponential High-Frequency-Based-Volatility (EHEAVY) Models By Xu, Yongdeng
  12. Neural Generalised AutoRegressive Conditional Heteroskedasticity By Zexuan Yin; Paolo Barucca
  13. Iterated Function Systems driven by non independent sequences: structure and inference By Baye Matar Kandji
  14. Interpolation and shock persistence of prewar U.S. macroeconomic time series: A reconsideration By Hashem Dezhbakhsh; Daniel Levy
  15. Higher-Order Asymptotic Properties of Kernel Density Estimator with Plug-In Bandwidth By Shunsuke Imai; Yoshihiko Nishiyama
  16. Optimality in Multivariate Tie-breaker Designs By Tim P. Morrison; Art B. Owen
  17. Score Driven Generalized Fitness Model for Sparse and Weighted Temporal Networks By Domenico Di Gangi; Giacomo Bormetti; Fabrizio Lillo
  18. Fairness constraint in Structural Econometrics and Application to fair estimation using Instrumental Variables By Samuele Centorrino; Jean-Pierre Florens; Jean-Michel Loubes

  1. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Xiyuan Liu (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: In this article, we propose a vector autoregressive model for conditional quantiles with functional coefficients to construct a novel class of nonparametric dynamic network systems, in which the interdependences among tail risks such as Value-at-Risk are allowed to vary smoothly with a variable representing the general state of the economy. Methodologically, we develop an easy-to-implement two-stage procedure to estimate the functionals in the dynamic network system by the local linear smoothing technique. We establish the consistency and the asymptotic normality of the proposed estimator under strongly mixing time series settings. Simulation studies are conducted to show that our new methods work fairly well. The potential of the proposed estimation procedures is demonstrated by an empirical study that constructs and estimates a new type of nonparametric dynamic financial network.
    Keywords: Conditional quantile models; Dynamic financial network; Functional coefficient models; Nonparametric estimation; VAR modeling.
    JEL: C14 C58 C45 G32
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202209&r=
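    A minimal sketch of the kernel-weighting idea behind such local linear quantile estimation (illustrative only: the data-generating process and variable names are ours, and this is a single-equation toy rather than the authors' full two-stage network procedure). Because the check loss is positively homogeneous, scaling the response and regressors by the kernel weights turns an ordinary quantile regression into a locally weighted one:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      u = rng.uniform(0, 1, n)            # smoothing variable (state of the economy)
      x = rng.normal(size=n)              # e.g., another unit's lagged tail risk
      y = np.sin(2 * np.pi * u) * x + rng.normal(size=n)   # functional coefficient

      def local_linear_qreg(u0, tau, h):
          """Local linear quantile fit of y on x at smoothing point u0."""
          w = np.exp(-0.5 * ((u - u0) / h) ** 2)   # Gaussian kernel weights
          X = np.column_stack([np.ones(n), x, u - u0, x * (u - u0)])
          # rho_tau(c * e) = c * rho_tau(e) for c > 0, so row-scaling is valid
          res = sm.QuantReg(w * y, X * w[:, None]).fit(q=tau)
          return res.params[1]                     # coefficient on x at u0

      print(local_linear_qreg(u0=0.5, tau=0.95, h=0.1))   # true value sin(pi) = 0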
  2. By: Xingyu Li; Xiaojun Song; Zhenting Sun
    Abstract: This paper proposes a simple unified approach to testing transformations on cumulative distribution functions (CDFs) with nuisance parameters. We consider testing general parametric transformations on two CDFs and then generalize the test to multiple CDFs. We construct the test using a numerical bootstrap method that is easy to implement. The proposed test is shown to be asymptotically size-controlled and consistent. Monte Carlo simulations and an empirical application show that the test performs well in finite samples.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.11031&r=
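    For intuition, a plain pooled-bootstrap version of a Kolmogorov-Smirnov-type test for the simplest special case (equality of two CDFs, no nuisance parameters); the paper's numerical bootstrap and general parametric transformations go well beyond this sketch:

      import numpy as np

      rng = np.random.default_rng(1)

      def ks_stat(x, y, grid):
          """Kolmogorov-Smirnov distance between two empirical CDFs on a grid."""
          Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
          Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
          return np.max(np.abs(Fx - Fy))

      x, y = rng.normal(0, 1, 300), rng.normal(0, 1, 300)
      grid = np.linspace(-4, 4, 400)
      t_obs = ks_stat(x, y, grid)

      # bootstrap under the null: resample both samples from the pooled data
      pooled, B, boot = np.concatenate([x, y]), 999, []
      for _ in range(B):
          xs = rng.choice(pooled, len(x), replace=True)
          ys = rng.choice(pooled, len(y), replace=True)
          boot.append(ks_stat(xs, ys, grid))
      pval = (1 + sum(b >= t_obs for b in boot)) / (B + 1)
      print(round(pval, 3))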
  3. By: Verena Monschang; Bernd Wilfling
    Abstract: We investigate mean-squared-forecast-error (MSE) accuracy improvements for linear-convex combination forecasts whose components are pretreated by a procedure called 'Vector Autoregressive Forecast Error Modeling' (VAFEM). Assuming that the forecast-error series of the individual forecasts are governed by a stable VAR process under classic conditions, we obtain the following results: (i) VAFEM treatment bias-corrects all individual and linear-convex combination forecasts. (ii) Any VAFEM-treated combination has smaller theoretical MSE than its untreated analogue if the VAR parameters are known. (iii) In empirical applications, VAFEM gains depend on (1) in-sample sizes, (2) out-of-sample forecast horizons, and (3) the bias of the untreated forecast combination. We demonstrate the VAFEM capacity for realized-volatility forecasting, using S&P 500 data.
    Keywords: Combination forecasts, mean-squared-error loss, VAR forecast-error modeling, multivariate least squares estimation
    JEL: C10 C32 C51 C53
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:9722&r=
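    A stylized reading of the VAFEM pretreatment step (our simulation and notation; the paper's exact procedure may differ): fit a VAR to the joint forecast-error series in-sample, then subtract the VAR's predicted next-period errors from the individual forecasts before combining:

      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(2)
      T = 400
      # e[:, j] = forecast minus outcome for individual forecast j (simulated:
      # biased and autocorrelated, as VAFEM assumes a stable VAR structure)
      e = np.zeros((T, 2))
      for t in range(1, T):
          e[t] = 0.5 * e[t - 1] + rng.normal(0.2, 1.0, 2)

      fit = VAR(e[:300]).fit(maxlags=1)                 # in-sample VAFEM step
      e_hat = fit.forecast(e[299:300], steps=1)[0]      # predicted next error

      f = np.array([1.9, 2.1])          # untreated individual forecasts
      f_treated = f - e_hat             # VAFEM-treated (bias-corrected) forecasts
      w = np.array([0.5, 0.5])          # any linear-convex combination weights
      print(w @ f, w @ f_treated)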
  4. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Shahnaz Parsaeian (University of Kansas); Aman Ullah (University of California Riverside)
    Abstract: This paper develops an optimal combined estimator to forecast out-of-sample under structural breaks. When it comes to forecasting, using only the post-break observations after the most recent break point may not be optimal. In this paper we propose a new estimation method that exploits the pre-break information. In particular, we show how to combine the estimator using the full sample (i.e., both the pre-break and post-break data) and the estimator using only the post-break sample. The full-sample estimator is efficient but inconsistent when there is a break. The post-break estimator is consistent but inefficient. Hence, depending on the severity of the breaks, the full-sample estimator and the post-break estimator can be combined to balance consistency and efficiency. We derive the Stein-like combined estimator of the full-sample and post-break estimators to balance the bias-variance trade-off. The combination weight depends on the break severity, which we measure by the Wu-Hausman statistic. We examine the properties of the proposed method analytically in theory, numerically in simulation, and empirically in forecasting real output growth across nine industrial economies.
    Keywords: Forecasting, Structural breaks, Stein-like combined estimator, Output growth
    JEL: C13 C32 C53
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202208&r=
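    A rough sketch of the combination logic. The paper derives the optimal weight formally; the min(1, tau/H) Stein-type cap below and the simulated break are our illustrative choices, not the authors' formula:

      import numpy as np

      rng = np.random.default_rng(3)
      T, Tb = 200, 120                       # sample size and break date
      x = rng.normal(size=T)
      beta = np.where(np.arange(T) < Tb, 1.0, 1.5)   # slope shifts at the break
      y = beta * x + rng.normal(size=T)

      X = np.column_stack([np.ones(T), x])
      def ols(Xm, ym):
          b = np.linalg.lstsq(Xm, ym, rcond=None)[0]
          e = ym - Xm @ b
          V = np.linalg.inv(Xm.T @ Xm) * (e @ e / (len(ym) - Xm.shape[1]))
          return b, V

      b_full, V_full = ols(X, y)             # efficient but biased under a break
      b_post, V_post = ols(X[Tb:], y[Tb:])   # consistent but uses fewer data

      # Wu-Hausman-type statistic measuring break severity
      d = b_post - b_full
      H = d @ np.linalg.solve(V_post - V_full, d)

      tau = X.shape[1]                       # illustrative shrinkage constant
      alpha = min(1.0, tau / H)              # weight on the full-sample estimator
      b_comb = alpha * b_full + (1 - alpha) * b_post
      print(b_full[1], b_post[1], b_comb[1])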
  5. By: Ricardo Masini
    Abstract: In the context of treatment effect estimation, this paper proposes a new methodology to recover the counterfactual distribution when there is a single (or a few) treated unit and possibly a high-dimensional number of potential controls observed in a panel structure. The methodology accommodates, although it does not require, the number of units to be larger than the number of time periods (high-dimensional setup). Rather than modeling only the conditional mean, we propose to model the entire conditional quantile function (CQF) in the absence of intervention and estimate it from the pre-intervention period using a penalized regression. We derive non-asymptotic bounds for the estimated CQF valid uniformly over the quantiles, allowing the practitioner to reconstruct the entire counterfactual distribution. Moreover, we bound the probability coverage of this estimated CQF, which can be used to construct valid confidence intervals for the (possibly random) treatment effect for every post-intervention period or simultaneously. We also propose a new hypothesis test for the sharp null of no effect based on the $\mathcal{L}^p$ norm of the deviation of the estimated CQF from the population one. Interestingly, the null distribution is quasi-pivotal in the sense that it depends only on the estimated CQF, the $\mathcal{L}^p$ norm, and the number of post-intervention periods, but not on the size of the pre-intervention sample. Critical values can therefore be easily simulated. We illustrate the methodology by revisiting the empirical study in Acemoglu et al. (2016).
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.11671&r=
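    The estimation step can be sketched with off-the-shelf l1-penalized quantile regression (scikit-learn's QuantileRegressor). Everything below (dimensions, penalty level, simulated data) is illustrative, and the paper's uniform bounds and inference are not reproduced:

      import numpy as np
      from sklearn.linear_model import QuantileRegressor

      rng = np.random.default_rng(4)
      T0, T1, J = 100, 20, 50                 # pre/post periods, control units
      Xc = rng.normal(size=(T0 + T1, J))      # potential controls (possibly J > T0)
      y = Xc[:, 0] + 0.5 * Xc[:, 1] + rng.normal(size=T0 + T1)
      y[T0:] += 1.0                           # treatment effect after T0

      taus = [0.1, 0.25, 0.5, 0.75, 0.9]
      cqf = {}
      for tau in taus:
          # l1-penalized quantile regression fit on the pre-intervention period
          m = QuantileRegressor(quantile=tau, alpha=0.05, fit_intercept=True)
          m.fit(Xc[:T0], y[:T0])
          cqf[tau] = m.predict(Xc[T0:])       # counterfactual quantiles, post period

      # deviation of observed outcomes from the estimated counterfactual median
      print(np.mean(y[T0:] - cqf[0.5]))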
  6. By: James G. MacKinnon (Queen's University); Morten Ørregaard Nielsen (Queen's University and CREATES); Matthew D. Webb (Carleton University)
    Abstract: Cluster-robust inference is widely used in modern empirical work in economics and many other disciplines. The key unit of observation is the cluster. We propose measures of "high-leverage" clusters and "influential" clusters for linear regression models. The measures of leverage and partial leverage, and functions of them, can be used as diagnostic tools to identify datasets and regression designs in which cluster-robust inference is likely to be challenging. The measures of influence can provide valuable information about how the results depend on the data in the various clusters. We also show how to calculate two jackknife variance matrix estimators, CV3 and CV3J, as a byproduct of our other computations. All these quantities, including the jackknife variance estimators, are computed in a new Stata package called summclust that summarizes the cluster structure of a dataset.
    Keywords: clustered data, cluster-robust variance estimator, grouped data, high-leverage clusters, influential clusters, jackknife, partial leverage, robust inference
    JEL: C10 C12 C21 C23 C87
    Date: 2022–05
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1483&r=
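    summclust is a Stata package, but the central quantities are easy to state. A numpy sketch of cluster leverage and the delete-one-cluster jackknife coefficients that underlie CV3 (simulated data; the centering below is one common variant of the jackknife variance matrix):

      import numpy as np

      rng = np.random.default_rng(5)
      G, ng = 10, 30
      g_id = np.repeat(np.arange(G), ng)
      X = np.column_stack([np.ones(G * ng), rng.normal(size=(G * ng, 2))])
      y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=G * ng)

      XtX_inv = np.linalg.inv(X.T @ X)
      beta = XtX_inv @ X.T @ y
      b_jack = []
      for g in range(G):
          S = g_id == g
          # leverage of cluster g: trace of its block of the hat matrix
          lev_g = np.trace(X[S] @ XtX_inv @ X[S].T)
          # delete-one-cluster coefficients (the jackknife behind CV3)
          b_jack.append(np.linalg.solve(X[~S].T @ X[~S], X[~S].T @ y[~S]))
          print(g, round(lev_g, 3))

      b_jack = np.array(b_jack)
      CV3 = (G - 1) / G * (b_jack - beta).T @ (b_jack - beta)
      print(np.sqrt(np.diag(CV3)))           # jackknife standard errors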
  7. By: Thomas von Brasch; Arvid Raknerud; Trond C. Vigtel (Statistics Norway)
    Abstract: Simultaneity represents a fundamental problem when estimating the elasticity of substitution between capital and labour. To overcome this problem, a wide variety of external instruments has been applied in the literature. However, the use of instruments may lead to wrong inference if they are either weak or endogenous to the system being estimated. In this paper, we extend the widely used Feenstra (1994) estimator, which does not depend on external instruments, to make it applicable to the problem of estimating the elasticity of substitution between capital and labour. We propose a pooled GMM (PGMM) estimator, examine its properties in a Monte Carlo study, and apply it to a Norwegian sample of manufacturing firms. We identify the conditions under which PGMM yields unbiased estimates and compare it to a fixed effects estimator which is unbiased when factor prices are exogenous – a typical assumption in the literature. We find that the fixed effects estimator is heavily downward biased in the presence of simultaneity. In contrast, the PGMM estimator is nearly unbiased provided the number of time periods (T) is not too small (say, more than 10). In our application, with an unbalanced sample and T = 12, we estimate the elasticity of substitution to be 1.8 using PGMM and 1.0 using a fixed effects estimator. Hence, neglecting simultaneity may lead to the conclusion that capital and labour are complements when, in fact, they are substitutes.
    Keywords: Elasticity of Substitution; Simultaneity; Factor Demand; Non-Linear GMM; Pooled Estimator
    JEL: C13 C15 C33 C51
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:976&r=
  8. By: Martin Mugnier (Department of Economics, CREST, ENSAE, Institut Polytechnique de Paris, France)
    Abstract: Novel estimators are proposed for linear grouped fixed effects models. Rather than predicting a single grouping of units, they deliver a collection of groupings with the same flavor as the so-called LASSO regularization path. Mild conditions are found that ensure their asymptotic guarantees are the same as those of the so-called grouped fixed effects and post-spectral estimators (Bonhomme and Manresa, 2015; Chetverikov and Manresa, 2021). In contrast, the new estimators are computationally straightforward and do not require prior knowledge of the number of groups. Monte Carlo simulations suggest good finite sample performance. Applying the approach to real data provides new insights into the potential network structure of the unobserved heterogeneity.
    Keywords: panel data, grouped fixed effects, time-varying unobserved heterogeneity, k-means clustering
    JEL: C14 C23 C25
    Date: 2022–03–14
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2022-07&r=
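    For context, a compact sketch of the benchmark two-step grouped fixed effects idea (k-means on unit profiles, then regression with group-time effects) that the paper's differencing estimators aim to simplify further. The paper's own estimators are not reproduced here, and the simulation design is ours:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)
      N, T, K = 200, 20, 3
      g_true = rng.integers(K, size=N)
      trend = np.array([np.linspace(0, k, T) for k in range(K)])  # group paths
      x = rng.normal(size=(N, T))
      y = 0.8 * x + trend[g_true] + 0.5 * rng.normal(size=(N, T))

      # step 1: cluster unit-level residual profiles (benchmark GFE idea)
      b0 = (x * y).sum() / (x * x).sum()      # pooled slope, ignores groups
      groups = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(y - b0 * x)

      # step 2: re-estimate the slope with group-by-time fixed effects
      D = np.zeros((N * T, K * T))
      D[np.arange(N * T), np.repeat(groups, T) * T + np.tile(np.arange(T), N)] = 1
      Z = np.column_stack([x.ravel(), D])
      coef = np.linalg.lstsq(Z, y.ravel(), rcond=None)[0]
      print(round(coef[0], 3))                # close to the true slope 0.8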
  9. By: Andrea Carriero (Queen Mary University of London and University of Bologna); Alessio Volpicella (University of Surrey)
    Abstract: We develop a novel approach to achieve point identification in a Structural Vector Autoregression, based on imposing constraints on the forecast error variance decomposition. We characterize the properties of this approach and provide Bayesian algorithms for estimation and inference. We use the approach to study the effects of uncertainty shocks, allowing for the possibility that uncertainty is an endogenous variable, and distinguishing macroeconomic from financial uncertainty. Using US data we find that macroeconomic uncertainty is mostly endogenous, and that overlooking this fact can lead to distortions in the estimates of its effects. We show that the distinction between macroeconomic and financial uncertainty is empirically relevant. Finally, we study the relation between uncertainty shocks and pure financial shocks, showing that the latter can have attenuated effects if one does not take into account the endogeneity of uncertainty.
    JEL: C11 C32 E32 E37 E44
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:sur:surrec:0322&r=
  10. By: Fengshi Niu; Harsha Nori; Brian Quistorff; Rich Caruana; Donald Ngwe; Aadharsh Kannan
    Abstract: Estimating heterogeneous treatment effects in domains such as healthcare or social science often involves sensitive data where protecting privacy is important. We introduce a general meta-algorithm for estimating conditional average treatment effects (CATE) with differential privacy (DP) guarantees. Our meta-algorithm can work with simple, single-stage CATE estimators such as S-learner and more complex multi-stage estimators such as DR and R-learner. We perform a tight privacy analysis by taking advantage of sample splitting in our meta-algorithm and the parallel composition property of differential privacy. In this paper, we implement our approach using DP-EBMs as the base learner. DP-EBMs are interpretable, high-accuracy models with privacy guarantees, which allow us to directly observe the impact of DP noise on the learned causal model. Our experiments show that multi-stage CATE estimators incur larger accuracy loss than single-stage CATE or ATE estimators and that most of the accuracy loss from differential privacy is due to an increase in variance, not biased estimates of treatment effects.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.11043&r=
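    A skeleton of the single-stage (S-learner) case under sample splitting. DPRegressor below is a hypothetical stand-in for any differentially private base learner such as a DP-EBM; its noise is not a calibrated privacy mechanism, and the privacy accounting itself is not shown:

      import numpy as np

      class DPRegressor:
          """Stand-in for a DP base learner; adds unscaled coefficient noise
          purely for illustration, NOT a calibrated DP mechanism."""
          def fit(self, X, y, noise=0.1, rng=np.random.default_rng(0)):
              Xc = np.column_stack([np.ones(len(y)), X])
              b = np.linalg.lstsq(Xc, y, rcond=None)[0]
              self.b = b + rng.normal(0, noise, b.shape)  # perturbed coefficients
              return self
          def predict(self, X):
              return np.column_stack([np.ones(len(X)), X]) @ self.b

      def dp_s_learner(X, t, y, split=0.5, rng=np.random.default_rng(0)):
          # sample splitting: disjoint subsamples let privacy costs compose
          # in parallel rather than sequentially
          tr = rng.permutation(len(y))[: int(split * len(y))]
          Xt = np.column_stack([X, t])                  # S-learner: t as a feature
          model = DPRegressor().fit(Xt[tr], y[tr])
          X1 = np.column_stack([X, np.ones(len(y))])    # everyone treated
          X0 = np.column_stack([X, np.zeros(len(y))])   # everyone control
          return model.predict(X1) - model.predict(X0)  # CATE estimates

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 3))
      t = rng.integers(0, 2, 200)
      y = X[:, 0] + 2 * t + rng.normal(size=200)
      print(dp_s_learner(X, t, y)[:5])                  # near the true effect 2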
  11. By: Xu, Yongdeng (Cardiff Business School)
    Abstract: This paper proposes an Exponential HEAVY (EHEAVY) model. The model specifies the dynamics of returns and realized measures of volatility in an exponential form, which guarantees the positivity of volatility without restrictions on parameters and naturally allows for asymmetric effects. It provides more flexible modelling of volatility than the HEAVY models. A joint quasi-maximum likelihood estimation and closed-form multi-step-ahead forecasting are derived. The model is applied to 31 assets extracted from the Oxford-Man Institute's realized library. The empirical results show that the dynamics of return volatility are driven by the realized measure, while the asymmetric effect is captured by the return shock (not by the realized return shock). Hence, both the return and the realized measure are included in the return volatility equation. Out-of-sample forecasting and portfolio exercises further show the superior forecasting performance of the EHEAVY model, in both a statistical and an economic sense.
    Keywords: HEAVY model, High-frequency data, Asymmetric effects, Realized variance, Portfolio
    JEL: C32 C53 G11 G17
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2022/5&r=
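    In our notation, a simplified reading of the exponential specification (an assumption for illustration, not necessarily the paper's exact parameterization): with return $r_t$, conditional variance $h_t$, and realized measure $RM_t$,

      \log h_t = \omega + \beta \log h_{t-1} + \gamma \log RM_{t-1} + \delta\, r_{t-1}/\sqrt{h_{t-1}}.

    Because the recursion is in logs, $h_t > 0$ holds for any parameter values, and the $\delta$ term lets negative return shocks move volatility differently from positive ones, matching the asymmetry the abstract attributes to the return shock.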
  12. By: Zexuan Yin; Paolo Barucca
    Abstract: We propose Neural GARCH, a class of methods to model conditional heteroskedasticity in financial time series. Neural GARCH is a neural network adaptation of the GARCH(1,1) model in the univariate case and of the diagonal BEKK(1,1) model in the multivariate case. We allow the coefficients of a GARCH model to be time-varying in order to reflect the constantly changing dynamics of financial markets. The time-varying coefficients are parameterised by a recurrent neural network that is trained with stochastic gradient variational Bayes. We propose two variants of our model, one with normal innovations and the other with Student's t innovations. We test our models on a wide range of univariate and multivariate financial time series, and we find that the Neural Student's t model consistently outperforms the others.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.11285&r=
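    The core recursion is a GARCH(1,1) filter whose coefficients are re-drawn each period. A minimal sketch with the network replaced by an arbitrary callable; the paper's recurrent network and variational training are not reproduced:

      import numpy as np

      rng = np.random.default_rng(7)

      def neural_garch_filter(r, coef_fn):
          """GARCH(1,1) filter with time-varying (omega, alpha, beta).

          coef_fn maps past information to coefficients; the paper uses a
          recurrent network trained by stochastic gradient variational Bayes,
          but any callable returning positive (omega, alpha, beta) works here.
          """
          T = len(r)
          h = np.empty(T)
          h[0] = np.var(r)
          for t in range(1, T):
              omega, alpha, beta = coef_fn(r[:t], h[:t])
              h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
          return h

      # placeholder "network": constant coefficients recover plain GARCH(1,1)
      r = rng.normal(0, 1, 500) * 0.01
      h = neural_garch_filter(r, lambda r_past, h_past: (1e-6, 0.08, 0.9))
      print(h[-5:])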
  13. By: Baye Matar Kandji (CREST, ENSAE, Institut Polytechnique de Paris)
    Abstract: The paper investigates the existence of a strictly stationary solution to an Iterated Function System (IFS) driven by a stationary and ergodic sequence. When the driving sequence is not independent, the strictly stationary solution may admit no moments, but we show an exponential control of the trajectories. We exploit these results to prove, under mild conditions, the consistency of the quasi-maximum likelihood estimator of GARCH models with non-independent innovations.
    Keywords: Stochastic Recurrence Equation, Semi-strong GARCH, Quasi Maximum Likelihood, inference without moments
    Date: 2022–01–26
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2022-03&r=
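    The QMLE in question is the usual Gaussian one. A self-contained sketch that simulates a semi-strong GARCH(1,1) with dependent (martingale-difference) innovations and recovers the parameters by quasi-maximum likelihood (our toy design, not the paper's setup):

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(8)
      T = 2000
      # innovations: dependent but with unit unconditional variance
      eta = np.ones(T)
      z = rng.normal(size=T)
      for t in range(1, T):
          eta[t] = z[t] * np.sqrt(0.7 + 0.3 * eta[t - 1] ** 2)
      eps = np.empty(T); h = np.empty(T); h[0] = 1.0; eps[0] = eta[0]
      for t in range(1, T):
          h[t] = 0.1 + 0.1 * eps[t - 1] ** 2 + 0.8 * h[t - 1]
          eps[t] = np.sqrt(h[t]) * eta[t]

      def neg_qlik(theta):
          """Gaussian quasi-log-likelihood (up to constants), negated."""
          omega, alpha, beta = theta
          ht = np.empty(T); ht[0] = np.var(eps)
          for t in range(1, T):
              ht[t] = omega + alpha * eps[t - 1] ** 2 + beta * ht[t - 1]
          return 0.5 * np.sum(np.log(ht) + eps ** 2 / ht)

      res = minimize(neg_qlik, x0=[0.05, 0.05, 0.85],
                     bounds=[(1e-6, None), (0, 1), (0, 1)])
      print(np.round(res.x, 3))    # close to (0.1, 0.1, 0.8)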
  14. By: Hashem Dezhbakhsh (Department of Economics, Emory University, USA); Daniel Levy (Department of Economics, Emory University, USA; Department of Economics, Bar-Ilan University, Israel; Rimini Centre for Economic Analysis; International Centre for Economic Analysis, Wilfrid Laurier University, Canada; International School of Economics, Tbilisi State University, Georgia)
    Abstract: The U.S. prewar output series exhibit smaller shock-persistence than postwar series. Some studies suggest this may be due to the linear interpolation used to generate missing prewar data. Monte Carlo simulations that support this view generate large standard errors, making such inference imprecise. We assess analytically the effect of linear interpolation on a nonstationary process. We find that interpolation indeed reduces shock-persistence, but the interpolated series can still exhibit greater shock-persistence than a pure random walk. Moreover, linear interpolation makes the series periodically nonstationary, with the parameters of the data generating process and the length of the interpolation time-segments affecting shock-persistence in conflicting ways.
    Keywords: Linear Interpolation, Random Walk, Shock-Persistence, Nonstationary series, Periodic nonstationarity, Stationary series, Prewar US Time Series
    JEL: C01 C02 E01 E30 N10
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:22-05&r=
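    A toy illustration of the mechanism (not the paper's analytical results): sample a random walk every m-th period, interpolate linearly, and compare the autocorrelation of the differenced series, which drives measured shock-persistence:

      import numpy as np

      rng = np.random.default_rng(9)
      T, m = 1200, 12                    # e.g., monthly truth, annual observation
      x = np.cumsum(rng.normal(size=T))  # underlying random walk
      obs_idx = np.arange(0, T, m)       # only every m-th point is observed
      xi = np.interp(np.arange(T), obs_idx, x[obs_idx])  # linear interpolation

      def acf1(u):                       # lag-1 autocorrelation
          u = u - u.mean()
          return (u[1:] @ u[:-1]) / (u @ u)

      print("true differences:        ", round(acf1(np.diff(x)), 3))   # near 0
      print("interpolated differences:", round(acf1(np.diff(xi)), 3))  # strongly positive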
  15. By: Shunsuke Imai (Faculty of Economics, Kyoto University, JAPAN); Yoshihiko Nishiyama (Institute of Economic Research, Kyoto University, JAPAN)
    Abstract: This study investigates the effect of bandwidth selection via the plug-in method on the asymptotic structure of the nonparametric kernel density estimator. We find that the plug-in method has no effect on the asymptotic structure of the estimator up to the order of $O\{(nh_0)^{-1/2}\} = O(n^{-L/(2L+1)})$ for a bandwidth $h_0$ and any kernel order $L$. We also provide the valid Edgeworth expansion up to the order of $O\{(nh_0)^{-1}\}$ and find that the plug-in method starts to have an effect on the term whose convergence rate is $O\{(nh_0)^{-1/2}h_0\} = O(n^{-(L+1)/(2L+1)})$. In other words, we derive the exact convergence rate of the deviation between the distribution functions of the estimator with a deterministic bandwidth and with the plug-in bandwidth. Monte Carlo experiments are conducted to see whether our approximation improves previous results.
    Keywords: nonparametric statistics, kernel density estimator, plug-in bandwidth, Edgeworth expansion
    JEL: C14
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:1076&r=
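    To fix ideas, a hand-rolled kernel density estimator with one familiar plug-in bandwidth (Silverman's rule of thumb for a second-order kernel, L = 2, so the bandwidth is of order n^(-1/5)); the paper's higher-order expansions concern exactly how such data-driven bandwidths perturb the estimator's distribution:

      import numpy as np

      rng = np.random.default_rng(10)
      x = rng.normal(size=400)
      n = len(x)

      # rule-of-thumb plug-in bandwidth (Silverman): h = O(n^(-1/(2L+1))), L = 2
      sigma = min(x.std(ddof=1),
                  (np.quantile(x, .75) - np.quantile(x, .25)) / 1.349)
      h_hat = 0.9 * sigma * n ** (-1 / 5)

      def kde(x0, data, h):
          """Second-order Gaussian kernel density estimate at x0."""
          u = (x0 - data) / h
          return np.exp(-0.5 * u ** 2).mean() / (h * np.sqrt(2 * np.pi))

      print(h_hat, kde(0.0, x, h_hat))   # density near 1/sqrt(2*pi) at zero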
  16. By: Tim P. Morrison; Art B. Owen
    Abstract: Tie-breaker designs (TBDs), in which subjects with extreme values are assigned treatment deterministically and those in the middle are randomized, are intermediate between regression discontinuity designs (RDDs) and randomized controlled trials (RCTs). TBDs thus provide a convenient mechanism by which to trade off between the treatment benefit of an RDD and the statistical efficiency gains of an RCT. We study a model where the expected response is one multivariate regression for treated subjects and another for control subjects. For a given set of subject data we show how to use convex optimization to choose treatment probabilities that optimize a prospective $D$-optimality condition (expected information gain) adapted from Bayesian optimal design. We can incorporate economically motivated linear constraints on those treatment probabilities as well as monotonicity constraints that have a strong ethical motivation. Our condition can be used in two scenarios: known covariates with random treatments, and random covariates with random treatments. We find that optimality for the treatment effect coincides with optimality for the whole regression, and that the RCT satisfies moment conditions for optimality. For Gaussian data we can find optimal linear scorings of subjects, one for statistical efficiency and another for short-term treatment benefit. We apply the convex optimization solution to some real emergency triage data from MIMIC.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.10030&r=
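    A small convex-optimization sketch of the D-optimality idea in one running variable (our simplified design; the paper treats multivariate regressions and richer constraints). The expected information matrix is affine in the treatment probabilities, so its log-determinant is concave and cvxpy can maximize it directly:

      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(11)
      n = 60
      x = np.sort(rng.normal(size=n))          # subject scores
      def feats(t):                            # regressors (1, x, t, t*x), t = +/-1
          return np.column_stack([np.ones(n), x, t * np.ones(n), t * x])
      A1, A0 = feats(1.0), feats(-1.0)

      p = cp.Variable(n)                       # P(treatment = +1) per subject
      M = sum(p[i] * np.outer(A1[i], A1[i]) +
              (1 - p[i]) * np.outer(A0[i], A0[i]) for i in range(n))
      cons = [p >= 0, p <= 1,
              p[1:] >= p[:-1],                 # monotone in the score (ethics)
              cp.sum(p) == n / 2]              # fixed treatment budget
      cp.Problem(cp.Maximize(cp.log_det(M)), cons).solve()
      print(np.round(p.value, 2))              # 0/1 in the tails, interior middle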
  17. By: Domenico Di Gangi; Giacomo Bormetti; Fabrizio Lillo
    Abstract: While the vast majority of the literature on models for temporal networks focuses on binary graphs, often one can associate a weight with each link. In such cases the data are better described by a weighted, or valued, network. An important and well-known fact is that real-world weighted networks are typically sparse. We propose a novel time-varying parameter model for sparse and weighted temporal networks as a combination of the fitness model, appropriately extended, and the score-driven framework. We consider a zero-augmented generalized linear model to handle the weights and an observation-driven approach to describe time-varying parameters. The result is a flexible approach in which the probability that a link exists is independent of its expected weight. This represents a crucial difference with alternative specifications proposed in the recent literature, with relevant implications for the flexibility of the model. Our approach also accommodates the dependence of the network dynamics on external variables. We present a link forecasting analysis of data describing the overnight exposures in the Euro interbank market and investigate whether the influence of EONIA rates on the interbank network dynamics has changed over time.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.09854&r=
  18. By: Samuele Centorrino; Jean-Pierre Florens; Jean-Michel Loubes
    Abstract: A supervised machine learning algorithm determines a model from a learning sample that will be used to predict new observations. To this end, it aggregates individual characteristics of the observations in the learning sample. But this information aggregation does not consider any potential selection on unobservables or any status-quo biases which may be contained in the training sample. The latter bias has raised concerns around the so-called fairness of machine learning algorithms, especially towards disadvantaged groups. In this chapter, we review the issue of fairness in machine learning through the lens of structural econometrics models in which the unknown index is the solution of a functional equation and issues of endogeneity are explicitly accounted for. We model fairness as a linear operator whose null space contains the set of strictly fair indexes. A fair solution is obtained by projecting the unconstrained index onto the null space of this operator or by directly finding the closest solution of the functional equation within this null space. We also acknowledge that policymakers may incur a cost when moving away from the status quo. Approximate fairness is achieved by introducing a fairness penalty in the learning procedure and balancing the influence of the status quo against that of a fully fair solution.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.08977&r=
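    A linear-algebra sketch of the projection and penalization steps for a finite-dimensional index (the chapter works in function spaces with instruments; the fairness operator A and the dimensions below are illustrative):

      import numpy as np

      rng = np.random.default_rng(12)
      k = 5
      beta = rng.normal(size=k)        # status-quo (unconstrained) index
      A = rng.normal(size=(1, k))      # linear fairness operator; null(A) = fair set

      # strictly fair solution: project beta onto the null space of A
      P = np.eye(k) - A.T @ np.linalg.solve(A @ A.T, A)
      beta_fair = P @ beta

      # approximate fairness: penalty lam * ||A b||^2 interpolates between the
      # status quo (lam = 0) and the fully fair projection (lam -> infinity)
      lam = 10.0
      beta_pen = np.linalg.solve(np.eye(k) + lam * A.T @ A, beta)
      print(A @ beta, A @ beta_fair, A @ beta_pen)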

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.