nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒01‒01
eighteen papers chosen by
Sune Karlsson, Örebro universitet


  1. Eigen-Analysis for High-Dimensional Time Series Clustering By Bo Zhang; Jiti Gao; Guangming Pan; Yanrong Yang
  2. Regressions under Adverse Conditions By Timo Dimitriadis; Yannick Hoga
  3. Kolmogorov-Smirnov Type Testing for Structural Breaks: A New Adjusted-Range Based Self-Normalization Approach By Hong, Y.; Linton, O. B.; McCabe, B.; Sun, J.; Wang, S.
  4. Large-Sample Properties of the Synthetic Control Method under Selection on Unobservables By Dmitry Arkhangelsky; David Hirshberg
  5. Simultaneously incomplete and incoherent (SII) dynamic LDV models: with an application to financing constraints and firms’ decision to innovate By Hajivassiliou, Vassilis; Savignac, Frédérique
  6. Exponential Time Trends in a Fractional Integration Model By Guglielmo Maria Caporale; Luis Alberiko Gil-Alana
  7. Quasi-Bayes in Latent Variable Models By Sid Kankanala
  8. ABC-based Forecasting in State Space Models By Chaya Weerasinghe; Ruben Loaiza-Maya; Gael M. Martin; David T. Frazier
  9. Econometric Causality: The Central Role of Thought Experiments By James J. Heckman; Rodrigo Pinto
  10. Belief identification by proxy By Elias Tsakas
  11. Identification using Revealed Preferences in Linearly Separable Models By Nikhil Agarwal; Pearl Z. Li; Paulo J. Somaini
  12. The distribution of sample mean-variance portfolio weights By Kan, Raymond; Lassance, Nathan; Wang, Xiaolu
  13. Using multiple outcomes to improve the synthetic control method By Liyang Sun; Eli Ben-Michael; Avi Feller
  14. Comparative Statics for Difference-in-Differences By Finn Christensen
  15. Designing Difference in Difference Studies With Staggered Treatment Adoption: Key Concepts and Practical Guidelines By Seth M. Freedman; Alex Hollingsworth; Kosali I. Simon; Coady Wing; Madeline Yozwiak
  16. What to do when you can't use '1.96' Confidence Intervals for IV By David S. Lee; Justin McCrary; Marcelo J. Moreira; Jack R. Porter; Luther Yap
  17. Asymptotic Error Analysis of Multilevel Stochastic Approximations for the Value-at-Risk and Expected Shortfall By Stéphane Crépey; Noufel Frikha; Azar Louzi; Gilles Pagès
  18. Predicting Recessions in (almost) Real Time in a Big-data Setting By Alexandre Bonnet R. Costa; Pedro Cavalcanti G. Ferreira; Wagner Piazza Gaglianone; Osmani Teixeira C. Guillén; João Victor Issler; Artur Brasil Fialho Rodrigues

  1. By: Bo Zhang; Jiti Gao; Guangming Pan; Yanrong Yang
    Abstract: Cross-sectional structure and temporal tendency are important features of high-dimensional time series. Based on eigen-analysis of sample covariance matrices, we propose a novel approach to identifying four popular structures of high-dimensional time series, grouped in terms of factor structure and stationarity. The proposed three-step method comprises: (1) a ratio statistic of empirical eigenvalues; (2) a projected Augmented Dickey-Fuller test; (3) a new unit-root test based on the largest empirical eigenvalues. We develop asymptotic properties for these three statistics to ensure the feasibility of the whole procedure. Finite-sample performance is illustrated via various simulations. Our results are further applied to analyze U.S. mortality data, U.S. house prices and income, and U.S. sectoral employment, all of which possess cross-sectional dependence as well as non-stationary temporal dependence. It is worth mentioning that we also contribute a statistical justification for the benchmark paper by Lee and Carter (1992) in mortality forecasting.
    Keywords: factor model, non-stationarity, sample covariance matrix, stationarity
    JEL: C18 C32 C55
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2023-22&r=ecm
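    A rough illustration of step (1): the sketch below applies a generic eigenvalue-ratio rule to the sample covariance matrix of a panel. The cutoff behaviour, the projected ADF test of step (2), and the largest-eigenvalue unit-root test of step (3) are specific to the paper and are not reproduced here.

    ```python
    # Generic eigenvalue-ratio rule (illustrative; not the paper's exact statistic).
    import numpy as np

    def eigenvalue_ratio_k(panel: np.ndarray, k_max: int = 10) -> int:
        """panel: T x N array of observations. Returns a candidate factor number
        as the argmax of consecutive eigenvalue ratios of the sample covariance."""
        T, N = panel.shape
        S = panel.T @ panel / T                       # N x N (uncentered) sample covariance
        eig = np.sort(np.linalg.eigvalsh(S))[::-1]    # eigenvalues, descending
        ratios = eig[:k_max] / eig[1:k_max + 1]
        return int(np.argmax(ratios)) + 1
    ```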
  2. By: Timo Dimitriadis; Yannick Hoga
    Abstract: We introduce a new regression method that relates the mean of an outcome variable to covariates, given the "adverse condition" that a distress variable falls in its tail. This makes it possible to tailor classical mean regressions to adverse economic scenarios, which are receiving increasing interest in the management of macroeconomic and financial risks, among many other areas. In the terminology of the systemic risk literature, our method can be interpreted as a regression for the Marginal Expected Shortfall. We propose a two-step procedure to estimate the new models, show consistency and asymptotic normality of the estimator, and propose feasible inference under weak conditions allowing for cross-sectional and time series applications. The accuracy of the asymptotic approximations of the two-step estimator is verified in simulations. Two empirical applications show that our regressions under adverse conditions are valuable in such diverse fields as the study of the relation between systemic risk and asset price bubbles, and the dissection of macroeconomic growth vulnerabilities into individual components.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.13327&r=ecm
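    In the systemic-risk notation the abstract invokes, the target is a conditional mean given that the distress variable exceeds its Value-at-Risk; the linear specification below is an illustrative assumption rather than necessarily the authors' exact model:

        \mathbb{E}\bigl[\, Y_t \mid X_t,\; D_t \ge \operatorname{VaR}_{\beta}(D_t) \,\bigr] \;=\; X_t^{\top}\theta,

    where D_t is the distress variable and \operatorname{VaR}_{\beta}(D_t) its \beta-quantile.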
  3. By: Hong, Y.; Linton, O. B.; McCabe, B.; Sun, J.; Wang, S.
    Abstract: A popular self-normalization (SN) approach in time series analysis uses the variance of a partial sum as a self-normalizer. This is known to be sensitive to irregularities such as persistent autocorrelation, heteroskedasticity, unit roots and outliers. We propose a novel SN approach based on the adjusted-range of a partial sum, which is robust to these aforementioned irregularities. We develop an adjusted-range based Kolmogorov-Smirnov type test for structural breaks for both univariate and multivariate time series, and consider testing parameter constancy in a time series regression setting. Our approach can rectify the well-known power decrease issue associated with existing self-normalized KS tests without having to use backward and forward summations as in Shao and Zhang (2010), and can alleviate the “better size but less power” phenomenon when the existing SN approaches (Shao, 2010; Zhang et al., 2011; Wang and Shao, 2022) are used. Moreover, our proposed tests can cater for more general alternatives. Monte Carlo simulations and empirical studies demonstrate the merits of our approach.
    Keywords: Change-Point Testing, CUSUM Process, Parameter Constancy, Studentization
    JEL: C12 C19
    Date: 2023–11–06
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2367&r=ecm
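    For intuition, the adjusted range of a demeaned partial-sum process used as a self-normalizer is typically of the form (the exact studentization in the paper may differ):

        R_n \;=\; \max_{1\le k\le n}\sum_{t=1}^{k}\bigl(X_t-\bar X_n\bigr)\;-\;\min_{1\le k\le n}\sum_{t=1}^{k}\bigl(X_t-\bar X_n\bigr),

    which replaces the partial-sum variance as the normalizer in the KS-type statistic.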
  4. By: Dmitry Arkhangelsky; David Hirshberg
    Abstract: We analyze the properties of the synthetic control (SC) method in settings with a large number of units. We assume that the selection into treatment is based on unobserved permanent heterogeneity and pretreatment information, thus allowing for both strictly and sequentially exogenous assignment processes. Exploiting duality, we interpret the solution of the SC optimization problem as an estimator for the underlying treatment probabilities. We use this to derive the asymptotic representation for the SC method and characterize sufficient conditions for its asymptotic normality. We show that the critical property that determines the behavior of the SC method is the ability of input features to approximate the unobserved heterogeneity. Our results imply that the SC method delivers asymptotically normal estimators for a large class of linear panel data models as long as the number of pretreatment periods is large, making it a natural alternative to conventional methods built on the Difference-in-Differences.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.13575&r=ecm
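    For reference, a generic statement of the SC weighting problem whose dual the abstract exploits (feature matrices and penalties vary across implementations; this is a sketch rather than the paper's exact program):

        \hat w \;\in\; \arg\min_{w \ge 0,\; \mathbf{1}^{\top} w = 1}\; \bigl\lVert x_{1} - X_{0}\, w \bigr\rVert^{2},

    where x_1 stacks pre-treatment features of the treated unit and X_0 collects those of the control units.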
  5. By: Hajivassiliou, Vassilis; Savignac, Frédérique
    Abstract: We develop novel methods for establishing coherency and completeness conditions in Static and Dynamic Limited Dependent Variables (LDV) Models. We characterize the two distinct problems as “empty-region” incoherency and “overlap-region” incoherency or incompleteness, and show that the two properties can co-exist. We focus on the class of models that can be Simultaneously Incomplete and Incoherent (SII). We propose estimation strategies based on Conditional Maximum Likelihood Estimation (CMLE) for simultaneous dynamic LDV models without imposing recursivity. Point identification is achieved through sign restrictions on parameters or other prior assumptions that complete the underlying data process. Using the Panel Bivariate Probit model with State Dependence as our modelling framework, we analyse the impact of financing constraints on innovation: ceteris paribus, a firm facing binding finance constraints is substantially less likely to undertake innovation, while the probability that a firm encounters a binding finance constraint more than doubles if the firm is innovative. In addition, a strong role for state dependence in dynamic versions of our models is established.
    Keywords: financing constraints; innovation; dynamic limited dependent variable models; joint bivariate probit model; econometric coherency and completeness conditions; state dependence
    JEL: C51 C15 C52
    Date: 2024–01–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:119379&r=ecm
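    For context, the coherency and completeness issues the abstract describes arise in simultaneous binary-outcome systems such as the following stylized static bivariate probit, in which both feedback coefficients may be non-zero (the paper's dynamic panel version adds lags and state dependence):

        y_{1} = \mathbf{1}\{x_{1}^{\top}\beta_{1} + \gamma_{1} y_{2} + \varepsilon_{1} > 0\}, \qquad
        y_{2} = \mathbf{1}\{x_{2}^{\top}\beta_{2} + \gamma_{2} y_{1} + \varepsilon_{2} > 0\}.

    Without further restrictions, a solution (y_1, y_2) may fail to exist for some shock realizations (an “empty region”) or may fail to be unique (an “overlap region”); the classical remedy is recursivity (\gamma_1 \gamma_2 = 0), which the paper avoids imposing.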
  6. By: Guglielmo Maria Caporale; Luis Alberiko Gil-Alana
    Abstract: This paper introduces a new modelling approach that incorporates nonlinear, exponential deterministic terms into a fractional integration model. The proposed model is based on a specific version of Robinson’s (1994) tests and is more general than standard time series models, which only allow for linear trends. Monte Carlo simulations show that it performs well in finite samples. Three empirical examples confirm that the suggested specification captures the properties of the data adequately.
    Keywords: exponential time trends, fractional integration, Monte Carlo simulations
    JEL: C22 C15
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10774&r=ecm
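    A model of the kind described, written in the style of Robinson (1994)-type testing frameworks (the paper's precise deterministic specification may differ), is

        y_t \;=\; \alpha + \beta\, e^{\gamma t} + x_t, \qquad (1-L)^{d} x_t = u_t, \quad u_t \sim I(0),

    so that inference concerns the memory parameter d while the deterministic component is exponential rather than linear in t.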
  7. By: Sid Kankanala
    Abstract: Latent variable models are widely used to account for unobserved determinants of economic behavior. Traditional nonparametric methods to estimate latent heterogeneity do not scale well into multidimensional settings. Distributional restrictions alleviate tractability concerns but may impart non-trivial misspecification bias. Motivated by these concerns, this paper introduces a quasi-Bayes approach to estimate a large class of multidimensional latent variable models. Our approach to quasi-Bayes is novel in that we center it around relating the characteristic function of observables to the distribution of unobservables. We propose a computationally attractive class of priors that are supported on Gaussian mixtures and derive contraction rates for a variety of latent variable models.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.06831&r=ecm
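    A textbook instance of relating the characteristic function of observables to the distribution of unobservables, used here purely as an illustration (the paper covers a much larger class of latent variable models), is the classical measurement-error model:

        Y = X^{*} + \varepsilon, \quad \varepsilon \perp X^{*} \;\Longrightarrow\; \varphi_{Y}(t) = \varphi_{X^{*}}(t)\,\varphi_{\varepsilon}(t),

    so that restrictions on observables constrain the latent distribution of X^{*} through its characteristic function.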
  8. By: Chaya Weerasinghe; Ruben Loaiza-Maya; Gael M. Martin; David T. Frazier
    Abstract: Approximate Bayesian Computation (ABC) has gained popularity as a method for conducting inference and forecasting in complex models, most notably those which are intractable in some sense. In this paper we use ABC to produce probabilistic forecasts in state space models (SSMs). Whilst ABC-based forecasting in correctly-specified SSMs has been studied, the misspecified case has not been investigated, and it is that case which we emphasize. We invoke recent principles of ‘focused’ Bayesian prediction, whereby Bayesian updates are driven by a scoring rule that rewards predictive accuracy; the aim being to produce predictives that perform well in that rule, despite misspecification. Two methods are investigated for producing the focused predictions. In a simulation setting, ‘coherent’ predictions are in evidence for both methods: the predictive constructed via the use of a particular scoring rule predicts best according to that rule. Importantly, both focused methods typically produce more accurate forecasts than an exact, but misspecified, predictive. An empirical application to a truly intractable SSM completes the paper.
    Keywords: Approximate Bayesian computation, auxiliary model, loss-based prediction, focused Bayesian prediction, proper scoring rules, stochastic volatility model
    JEL: C11 C53 C58
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2023-12&r=ecm
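    For readers new to ABC, the sketch below shows plain rejection ABC on a toy state space model; it is an assumption-laden illustration of the general idea only, not the focused, scoring-rule-driven procedure studied in the paper.

    ```python
    # Plain rejection ABC on a toy state space model (illustrative assumptions only).
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(theta, T):
        """Latent AR(1) state observed with noise -- a stand-in model, not the paper's."""
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = theta * x[t - 1] + rng.normal()
        return x + rng.normal(size=T)

    def summary(y):
        """Low-dimensional summary statistics used to compare datasets."""
        return np.array([y.mean(), y.std(), np.corrcoef(y[:-1], y[1:])[0, 1]])

    def abc_rejection(y_obs, n_draws=5000, eps=0.5):
        """Accept prior draws whose simulated summaries lie within eps of the observed ones."""
        s_obs, T = summary(y_obs), len(y_obs)
        accepted = []
        for _ in range(n_draws):
            theta = rng.uniform(-1, 1)          # prior draw
            if np.linalg.norm(summary(simulate(theta, T)) - s_obs) < eps:
                accepted.append(theta)
        return np.array(accepted)               # approximate posterior draws
    ```

    Probabilistic forecasts would then be formed by simulating the model forward from the accepted parameter draws.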
  9. By: James J. Heckman (The University of Chicago); Rodrigo Pinto (University of California, Los Angeles)
    Abstract: This paper examines the econometric causal model and the interpretation of empirical evidence based on thought experiments that was developed by Ragnar Frisch and Trygve Haavelmo. We compare the econometric causal model with two currently popular causal frameworks: the Neyman-Rubin causal model and the Do-Calculus. The Neyman-Rubin causal model is based on the language of potential outcomes and was largely developed by statisticians. Instead of being based on thought experiments, it takes statistical experiments as its foundation. The Do-Calculus, developed by Judea Pearl and co-authors, relies on Directed Acyclic Graphs (DAGs) and is a popular causal framework in computer science and applied mathematics. We make the case that economists who uncritically use these frameworks often discard the substantial benefits of the econometric causal model to the detriment of more informative analyses. We illustrate the versatility and capabilities of the econometric framework using causal models developed in economics.
    Keywords: Structural Equation Models, causality, causal inference, directed acyclic graphs, Simultaneous Causality
    JEL: C10 C18
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:hka:wpaper:2023-029&r=ecm
  10. By: Elias Tsakas
    Abstract: It is well known that individual beliefs cannot be identified using traditional choice data, unless we exogenously assume state-independent utilities. In this paper, I propose a novel methodology that solves this long-standing identification problem in a simple way. This method relies on extending the state space by introducing a proxy, for which the agent has no stakes conditional on the original state space. The latter allows us to identify the agent's conditional beliefs about the proxy given each state realization, which in turn suffices for indirectly identifying her beliefs about the original state space. This approach is analogous to that of instrumental variables in econometrics. Similarly to instrumental variables, the appeal of this method comes from the flexibility in selecting a proxy.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.13394&r=ecm
  11. By: Nikhil Agarwal; Pearl Z. Li; Paulo J. Somaini
    Abstract: Revealed preference arguments are commonly used when identifying models of both single-agent decisions and non-cooperative games. We develop general identification results for a large class of models that have a linearly separable payoff structure. Our model allows for both discrete and continuous choice sets. It incorporates widely studied models such as discrete and hedonic choice models, auctions, school choice mechanisms, oligopoly pricing and trading games. We characterize the identified set and show that point identification can be achieved either if the choice set is sufficiently rich or if a variable that shifts preferences is available. Our identification results also suggest an estimation approach. Finally, we implement this approach to estimate values in a combinatorial procurement auction for school lunches in Chile.
    JEL: C51 C57
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31868&r=ecm
  12. By: Kan, Raymond (Rotman School of Management, University of Toronto); Lassance, Nathan (Université catholique de Louvain, LIDAM/LFIN, Belgium); Wang, Xiaolu (Ivy College of Business, Iowa State University)
    Abstract: We present a simple stochastic representation for the joint distribution of sample estimates of three scalar parameters and two vectors of portfolio weights that characterize the minimum-variance frontier. This stochastic representation is useful for sampling observations efficiently, deriving moments in closed-form, and studying the distribution and performance of many portfolio strategies that are functions of these five variables. We also present the asymptotic joint distributions of these five variables for both the standard regime and the high-dimensional regime. Both asymptotic distributions are simpler than the finite-sample one, and the one for the high-dimensional regime, i.e., when the number of assets and the sample size go to infinity together at a constant rate, reveals the high-dimensional properties of the considered estimators. Our results extend upon [T. Bodnar, H. Dette, N. Parolya and E. Thorsén, Sampling distributions of optimal portfolio weights and characteristics in low and large dimensions, Random Matrices: Theory Appl. 11 (2022)].
    Keywords: Portfolio choice ; estimation risk ; stochastic representation ; high-dimensional asymptotics ; minimum-variance frontier
    Date: 2023–05–18
    URL: http://d.repec.org/n?u=RePEc:ajf:louvlf:2023006&r=ecm
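    For reference, the two portfolio-weight vectors that trace out the minimum-variance frontier are commonly written as follows (sample counterparts replace \mu and \Sigma with their estimates; the notation here is generic rather than the paper's):

        w_{\mathrm{gmv}} \;=\; \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}}, \qquad
        w_{\mathrm{slope}} \;=\; \Sigma^{-1}\Bigl(\mu - \frac{\mathbf{1}^{\top}\Sigma^{-1}\mu}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}}\,\mathbf{1}\Bigr),

    i.e. the global minimum-variance weights and the direction along which the frontier is traversed as risk aversion varies.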
  13. By: Liyang Sun; Eli Ben-Michael; Avi Feller
    Abstract: When there are multiple outcome series of interest, Synthetic Control analyses typically proceed by estimating separate weights for each outcome. In this paper, we instead propose estimating a common set of weights across outcomes, by balancing either a vector of all outcomes or an index or average of them. Under a low-rank factor model, we show that these approaches lead to lower bias bounds than separate weights, and that averaging leads to further gains when the number of outcomes grows. We illustrate this via simulation and in a re-analysis of the impact of the Flint water crisis on educational outcomes.
    Date: 2023–12–11
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:24/23&r=ecm
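    Concretely, the averaging variant amounts to choosing a single simplex-weight vector that balances the across-outcome average of the treated unit in the pre-period; a stylized statement (the paper also covers balancing the full vector of outcomes, possibly after standardization):

        \hat w \;\in\; \arg\min_{w \ge 0,\; \mathbf{1}^{\top} w = 1}\; \sum_{t \le T_0} \Bigl( \bar Y_{1t} - \sum_{j \ne 1} w_j\, \bar Y_{jt} \Bigr)^{2}, \qquad \bar Y_{it} = \frac{1}{K}\sum_{k=1}^{K} Y^{(k)}_{it},

    where K is the number of outcome series and T_0 the last pre-treatment period.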
  14. By: Finn Christensen (Department of Economics, Towson University)
    Abstract: The stable unit treatment value assumption (SUTVA) in causal estimation rules out spillover effects, but spillover effects are the hallmark of many economic models. Testing model predictions with techniques that employ SUTVA is thus problematic. To address this issue, we first show that without the no-interference component of SUTVA, the population difference-in-differences (DiD) identifies the difference in the average potential outcomes between the treated and untreated. We call this estimand the marginal average treatment effect among the treated with spillovers (MATTS). Then, in the context of a model whose equilibrium is characterized by a system of smooth equations, we provide comparative statics results which restrict the sign of MATTS. Specifically, we show that MATTS is positive for any nontrivial treatment group whenever treatment has a strictly positive direct effect if and only if the inverse of the negated Jacobian is a B0-matrix by columns. We then provide several conditions on the Jacobian such that its negated inverse is a B-matrix by columns. Additional related results are presented. These predictions can be tested directly within the DiD framework even when SUTVA is violated. Consequently, the results in this paper render economic models rejectable with reduced-form DiD methods.
    Keywords: Comparative statics, difference-in-differences, SUTVA, spillovers, profit maximization hypothesis, refutability, B-matrix.
    JEL: C31 C33 C65 C72 D21 L21
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:tow:wpaper:2023-08&r=ecm
  15. By: Seth M. Freedman; Alex Hollingsworth; Kosali I. Simon; Coady Wing; Madeline Yozwiak
    Abstract: Difference-in-Difference (DID) estimators are a valuable method for identifying causal effects in the public health researcher’s toolkit. A growing methods literature points out potential problems with DID estimators when treatment is staggered in adoption and varies with time. Despite this, no practical guide exists for addressing these new critiques in public health research. We illustrate these new DID concepts with step-by-step examples, code, and a checklist. We draw insights by comparing the simple 2 × 2 DID design (single treatment group, single control group, two time periods) with more complex cases: additional treated groups, additional time periods of treatment, and treatment effects that possibly vary over time. We outline newly uncovered threats to causal interpretation of DID estimates and the solutions the literature has proposed, relying on a decomposition that shows how the more complex DID designs are an average of simpler 2 × 2 DID sub-experiments.
    JEL: I0 I1
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31842&r=ecm
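    The 2 × 2 building block that the decomposition averages over can be computed directly from group-by-period means; a minimal sketch in which the column names are assumptions for illustration:

    ```python
    # Minimal 2x2 DiD from a long-format panel. `df` is assumed to contain
    # columns 'outcome', 'treated' (0/1 group) and 'post' (0/1 period).
    import pandas as pd

    def did_2x2(df: pd.DataFrame) -> float:
        means = df.groupby(["treated", "post"])["outcome"].mean()
        return (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
    ```

    Staggered designs are then, per the decomposition the abstract cites, weighted averages of such 2 × 2 comparisons.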
  16. By: David S. Lee; Justin McCrary; Marcelo J. Moreira; Jack R. Porter; Luther Yap
    Abstract: To address the well-established large-sample invalidity of the +/-1.96 critical values for the t-ratio in the single variable just-identified IV model, applied research typically qualifies the inference based on the first-stage-F (Staiger and Stock (1997) and Stock and Yogo (2005)). We fully extend this F-based approach to its logical conclusion by presenting new critical values for the t-ratio to additionally accommodate values of F that do not meet existing thresholds needed for validity. These new t-ratio critical values simultaneously fix the main problem of over-rejection (invalidity) and the under-appreciated possibility of under-rejection (conservativeness) that can occur when relying solely on the usual 1.96 critical value. We show that the corresponding new confidence intervals are generally expected to be substantially shorter than competing “robust to weak instrument” intervals, including those from the recommended benchmark of Anderson and Rubin (1949) (AR). In a sample of 89 specifications from 10 recent empirical studies drawn from five general interest journals, the new “VtF” intervals are shorter than AR intervals 100 percent of the time, and even more likely to produce statistically significant results than the usual +/-1.96 procedure.
    JEL: C01 C26 J0
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31893&r=ecm
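    To fix ideas, the sketch below computes the just-identified IV estimate and the first-stage F on which the usual screening is based; the paper's new “VtF” critical values are read from its tables and are not reproduced here.

    ```python
    # Just-identified IV estimate and first-stage F (single regressor, single instrument).
    import numpy as np

    def iv_and_first_stage_f(y, x, z):
        """y, x, z: 1-D arrays; demeaned in lieu of an intercept."""
        y, x, z = (v - v.mean() for v in (y, x, z))
        beta_iv = (z @ y) / (z @ x)                   # just-identified IV estimate
        pi_hat = (z @ x) / (z @ z)                    # first-stage slope
        resid = x - pi_hat * z
        se_pi = np.sqrt((resid @ resid) / (len(x) - 2) / (z @ z))
        first_stage_f = (pi_hat / se_pi) ** 2         # squared first-stage t-ratio
        return beta_iv, first_stage_f
    ```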
  17. By: Stéphane Crépey (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UPCité - Université Paris Cité); Noufel Frikha (CES - Centre d'économie de la Sorbonne - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Azar Louzi (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UPCité - Université Paris Cité); Gilles Pagès (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UPCité - Université Paris Cité)
    Abstract: This article is a follow-up to Crépey, Frikha, and Louzi (2023), where we introduced a nested stochastic approximation algorithm and its multilevel acceleration for computing the value-at-risk and expected shortfall of a random financial loss. We establish central limit theorems for the renormalized errors associated with both algorithms and their averaged variations. Our findings are substantiated through numerical examples.
    Keywords: value-at-risk, expected shortfall, nested stochastic approximation, multilevel Monte Carlo, Ruppert & Polyak averaging, convergence rate, central limit theorem
    Date: 2023–11–24
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:hal-04304985&r=ecm
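    For orientation, the two targets and a basic single-level stochastic-approximation update for the quantile are (for a continuous loss distribution; the paper's nested and multilevel schemes build on this):

        \operatorname{VaR}_{\alpha}(X) = \inf\{\xi : \mathbb{P}(X \le \xi) \ge \alpha\}, \qquad
        \operatorname{ES}_{\alpha}(X) = \mathbb{E}\bigl[X \mid X \ge \operatorname{VaR}_{\alpha}(X)\bigr],

        \xi_{k+1} = \xi_{k} - \gamma_{k+1}\bigl(\mathbf{1}\{X_{k+1} \le \xi_{k}\} - \alpha\bigr),

    with step sizes \gamma_k \to 0 chosen so that \xi_k converges to \operatorname{VaR}_{\alpha}(X).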
  18. By: Alexandre Bonnet R. Costa; Pedro Cavalcanti G. Ferreira; Wagner Piazza Gaglianone; Osmani Teixeira C. Guillén; João Victor Issler; Artur Brasil Fialho Rodrigues
    Abstract: The objective of this paper is to propose an approach for dating recessions in real time (or slightly a posteriori) that is suitable for a big-data environment. Our proposal is to mix the canonical correlation approach of Issler and Vahid (2006) with the big-data approach advocated by Stock and Watson (2014), incorporating the best elements of each into one framework. This involves solving both the problem of missing data and that of high dimensionality in big databases, as well as defining a decision rule for choosing the best forecasting model in real time. Our empirical results show it is possible to track the state of the U.S. and European economies using the models developed here, as long as appropriate techniques to reduce the dimensionality of the databases are implemented, namely canonical correlations coupled with principal component analysis. Depending on the cutoffs chosen, the models predict recessions in real time with an accuracy of 98% and 80%, respectively, for the U.S. and the Euro Area.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:587&r=ecm
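    A minimal sketch of the dimension-reduce-then-predict idea using principal components and a logit; the paper's approach additionally uses canonical correlations, handles missing data, and applies a real-time rule for selecting among models (function and variable names here are illustrative assumptions).

    ```python
    # Reduce a large macro panel to a few factors, then model recession probabilities.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    def recession_probabilities(X: np.ndarray, recession: np.ndarray, n_components: int = 8) -> np.ndarray:
        """X: (T x N) panel of predictors; recession: length-T 0/1 indicator."""
        factors = PCA(n_components=n_components).fit_transform(X)   # principal components
        model = LogisticRegression(max_iter=1000).fit(factors, recession)
        return model.predict_proba(factors)[:, 1]                   # in-sample probabilities
    ```

    A real-time exercise would re-estimate the factors and the model on each data vintage and apply a probability cutoff to date recessions.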

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.