nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒09‒02
twenty-six papers chosen by
Sune Karlsson, Örebro universitet


  1. Regression Adjustment for Estimating Distributional Treatment Effects in Randomized Controlled Trials By Tatsushi Oka; Shota Yasui; Yuta Hayakawa; Undral Byambadalai
  2. Bayesian Synthetic Control Methods with Spillover Effects: Estimating the Economic Cost of the 2011 Sudan Split By Shosei Sakaguchi; Hayato Tagawa
  3. Distributional Difference-in-Differences Models with Multiple Time Periods: A Monte Carlo Analysis By Andrea Ciaccio
  4. Regularizing stock return covariance matrices via multiple testing of correlations By Richard Luger
  5. Reduced-Rank Matrix Autoregressive Models: A Medium $N$ Approach By Alain Hecq; Ivan Ricardo; Ines Wilms
  6. Revisiting Randomization with the Cube Method By Laurent Davezies; Guillaume Hollard; Pedro Vergara Merino
  7. Distilling interpretable causal trees from causal forests By Patrick Rehill
  8. Sparse Asymptotic PCA: Identifying Sparse Latent Factors Across Time Horizon By Zhaoxing Gao
  9. Covariance Matrix Analysis for Optimal Portfolio Selection By Lim Hao Shen Keith
  10. OLS Limit Theory for Drifting Sequences of Parameters on the Explosive Side of Unity By Tassos Magdalinos; Katerina Petrova
  11. Estimation of Integrated Volatility Functionals with Kernel Spot Volatility Estimators By José E. Figueroa-López; Jincheng Pang; Bei Wu
  12. A nonparametric test for rough volatility By Carsten H. Chong; Viktor Todorov
  13. Modelling shock propagation and resilience in financial temporal networks By Fabrizio Lillo; Giorgio Rizzini
  14. Causal modelling without counterfactuals and individualised effects By Benedikt Höltgen; Robert C. Williamson
  15. Estimation of bid-ask spreads in the presence of serial dependence By Xavier Brouty; Matthieu Garcin; Hugo Roccaro
  16. A Short Note on Event-Study Synthetic Difference-in-Differences Estimators By Diego Ciccia
  17. Multi-dimensional monetary policy shocks based on heteroscedasticity By Marc Burri; Daniel Kaufmann
  18. ROLCH: Regularized Online Learning for Conditional Heteroskedasticity By Simon Hirsch; Jonathan Berrisch; Florian Ziel
  19. Starting Small: Prioritizing Safety over Efficacy in Randomized Experiments Using the Exact Finite Sample Likelihood By Neil Christy; A. E. Kowalski
  20. Local Projections By Òscar Jordà; Alan M. Taylor
  21. Conduct Parameter Estimation in Homogeneous Goods Markets with Equilibrium Existence and Uniqueness Conditions: The Case of Log-linear Specification By Yuri Matsumura; Suguru Otani
  22. Predicting the Distribution of Treatment Effects: A Covariate-Adjustment Approach By Bruno Fava
  23. An Introduction to Causal Discovery By Martin Huber
  24. The Dynamic, the Static, and the Weak factor models and the analysis of high-dimensional time series By Matteo Barigozzi; Marc Hallin
  25. Deep Learning for Economists By Melissa Dell
  26. Constructing Fan Charts from the Ragged Edge of SPF Forecasts By Todd E. Clark; Gergely Ganics; Elmar Mertens

  1. By: Tatsushi Oka; Shota Yasui; Yuta Hayakawa; Undral Byambadalai
    Abstract: In this paper, we address the problem of estimating and conducting inference on distributional treatment effects in randomized experiments. The distributional treatment effect provides a more comprehensive understanding of treatment effects by characterizing heterogeneous effects across individual units, rather than relying solely on the average treatment effect. To enhance the precision of distributional treatment effect estimation, we propose a regression adjustment method that utilizes distributional regression and pre-treatment information. Our method is designed to be free from restrictive distributional assumptions. We establish theoretical efficiency gains and develop a practical, statistically sound inferential framework. Through extensive simulation studies and empirical applications, we illustrate the substantial advantages of our method, equipping researchers with a powerful tool for capturing the full spectrum of treatment effects in experimental research.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.14074
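    Illustrative sketch (not the authors' estimator): one simple way to combine distributional regression with pre-treatment covariates is to model P(Y <= y | X) at a grid of thresholds separately by arm, average the fitted probabilities over the pooled covariate sample, and take differences. The logistic specification, threshold grid, and simulated data below are assumptions made purely for illustration.
      # Hypothetical sketch: covariate-adjusted distributional treatment effects.
      # Logistic distributional regression per threshold is one simple way to use
      # pre-treatment information; it is not the paper's exact procedure.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(size=(n, 3))                      # pre-treatment covariates
      d = rng.integers(0, 2, size=n)                   # random treatment assignment
      y = x @ np.array([1.0, 0.5, -0.5]) + d * (0.5 + 0.5 * x[:, 0]) + rng.normal(size=n)

      thresholds = np.quantile(y, np.linspace(0.1, 0.9, 9))
      for thr in thresholds:
          cdf = {}
          for arm in (0, 1):
              mask = d == arm
              z = (y[mask] <= thr).astype(int)
              model = LogisticRegression(max_iter=1000).fit(x[mask], z)
              # Adjustment step: average predicted P(Y <= thr) over the full sample.
              cdf[arm] = model.predict_proba(x)[:, 1].mean()
          print(f"P(Y <= {thr:6.2f}): treated - control = {cdf[1] - cdf[0]:+.3f}")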
  2. By: Shosei Sakaguchi; Hayato Tagawa
    Abstract: The synthetic control method (SCM) is widely used for causal inference with panel data, particularly when there are few treated units. SCM assumes the stable unit treatment value assumption (SUTVA), which posits that potential outcomes are unaffected by the treatment status of other units. However, interventions often impact not only treated units but also untreated units, known as spillover effects. This study introduces a novel panel data method that extends SCM to allow for spillover effects and estimate both treatment and spillover effects. This method leverages a spatial autoregressive panel data model to account for spillover effects. We also propose Bayesian inference methods using Bayesian horseshoe priors for regularization. We apply the proposed method to two empirical studies: evaluating the effect of the California tobacco tax on consumption and estimating the economic impact of the 2011 division of Sudan on GDP per capita.
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2408.00291
  3. By: Andrea Ciaccio
    Abstract: Researchers are often interested in evaluating the impact of a policy on the entire distribution of the outcome of interest, or on specific parts of it. In this paper, I provide a practical toolkit to recover the whole counterfactual distribution of the untreated potential outcome for the treated group in non-experimental settings with staggered treatment adoption by generalizing the existing quantile treatment effects on the treated (QTT) estimator proposed by Callaway and Li (2019). Besides the QTT, I consider different approaches that anonymously summarize the quantiles of the distribution of the outcome of interest (such as tests for stochastic dominance rankings) without relying on rank invariance assumptions. The finite-sample properties of the proposed estimator are analyzed via different Monte Carlo simulations. Although it is slightly biased in relatively small samples, the proposed method's performance improves substantially as the sample size increases.
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2408.01208
  4. By: Richard Luger
    Abstract: This paper develops a large-scale inference approach for the regularization of stock return covariance matrices. The framework allows for the presence of heavy tails and multivariate GARCH-type effects of unknown form among the stock returns. The approach involves simultaneous testing of all pairwise correlations, followed by setting non-statistically significant elements to zero. This adaptive thresholding is achieved through sign-based Monte Carlo resampling within multiple testing procedures, controlling either the traditional familywise error rate, a generalized familywise error rate, or the false discovery proportion. Subsequent shrinkage ensures that the final covariance matrix estimate is positive definite and well-conditioned while preserving the achieved sparsity. Compared to alternative estimators, this new regularization method demonstrates strong performance in simulation experiments and real portfolio optimization.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.09696
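    Schematic sketch of the thresholding-plus-shrinkage workflow described in the abstract above. The paper uses sign-based Monte Carlo resampling within multiple testing procedures; the version below substitutes a textbook Fisher z-test with a Bonferroni correction, so it is only a rough illustration of the idea.
      # Rough illustration: threshold a correlation matrix by multiple testing of
      # all pairwise correlations, then shrink toward the identity until the
      # result is positive definite. (Fisher z + Bonferroni stand in for the
      # paper's sign-based Monte Carlo resampling.)
      import numpy as np
      from scipy import stats

      def regularize_corr(returns, alpha=0.05):
          t, n = returns.shape
          r = np.corrcoef(returns, rowvar=False)
          z = np.arctanh(np.clip(r, -0.9999, 0.9999)) * np.sqrt(t - 3)   # Fisher z statistics
          pvals = 2 * stats.norm.sf(np.abs(z))
          n_tests = n * (n - 1) / 2                                      # all pairwise tests
          r_thr = np.where(pvals < alpha / n_tests, r, 0.0)              # adaptive thresholding
          np.fill_diagonal(r_thr, 1.0)
          lam = 0.0                                                      # shrink until positive definite
          while np.linalg.eigvalsh((1 - lam) * r_thr + lam * np.eye(n)).min() <= 1e-8:
              lam += 0.01
          return (1 - lam) * r_thr + lam * np.eye(n)

      rng = np.random.default_rng(1)
      R = regularize_corr(rng.normal(size=(500, 20)))
      print(np.linalg.eigvalsh(R).min())   # strictly positive => well-conditioned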
  5. By: Alain Hecq; Ivan Ricardo; Ines Wilms
    Abstract: Reduced-rank regressions are powerful tools used to identify co-movements within economic time series. However, this task becomes challenging when we observe matrix-valued time series, where each dimension may have a different co-movement structure. We propose reduced-rank regressions with a tensor structure for the coefficient matrix to provide new insights into co-movements within and between the dimensions of matrix-valued time series. Moreover, we relate the co-movement structures to two commonly used reduced-rank models, namely the serial correlation common feature and the index model. Two empirical applications involving U.S. states and economic indicators for the Eurozone and North American countries illustrate how our new tools identify co-movements.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.07973
  6. By: Laurent Davezies; Guillaume Hollard; Pedro Vergara Merino
    Abstract: We propose a novel randomization approach for randomized controlled trials (RCTs), named the cube method. The cube method allows for the selection of balanced samples across various covariate types, ensuring consistent adherence to balance tests and, hence, substantial precision gains when estimating treatment effects. We establish several statistical properties for the population and sample average treatment effects (PATE and SATE, respectively) under randomization using the cube method. The advantage of the cube method is particularly striking when comparing the behavior of prevailing treatment-allocation methods as the number of covariates to balance increases. We formally derive and compare bounds on balancing adjustments as functions of the number of units $n$ and the number of covariates $p$, and show that our randomization approach outperforms methods proposed in the literature when $p$ is large and $p/n$ tends to 0. We run simulation studies to illustrate the substantial gains from the cube method for a large set of covariates.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.13613
  7. By: Patrick Rehill
    Abstract: Machine learning methods for estimating treatment effect heterogeneity promise greater flexibility than existing methods that test a few pre-specified hypotheses. However, it can be challenging to extract insights from complicated machine learning models. A high-dimensional distribution of conditional average treatment effects may give accurate, individual-level estimates, but it can be hard to understand the underlying patterns and to know what the implications of the analysis are. This paper proposes the Distilled Causal Tree, a method for distilling a single, interpretable causal tree from a causal forest. This compares well to existing methods of extracting a single tree, particularly in noisy or high-dimensional data with many correlated features. Here it even outperforms the base causal forest in most simulations. Its estimates are doubly robust and asymptotically normal, just as those of the causal forest are.
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2408.01023
  8. By: Zhaoxing Gao
    Abstract: This paper proposes a novel method for sparse latent factor modeling using a new sparse asymptotic Principal Component Analysis (APCA). This approach analyzes the co-movements of large-dimensional panel data systems over time horizons within a general approximate factor model framework. Unlike existing sparse factor modeling approaches based on sparse PCA, which assume sparse loading matrices, our sparse APCA assumes that factor processes are sparse over the time horizon, while the corresponding loading matrices are not necessarily sparse. This development is motivated by the observation that the assumption of sparse loadings may not be appropriate for financial returns, where exposure to market factors is generally universal and non-sparse. We propose a truncated power method to estimate the first sparse factor process and a sequential deflation method for multi-factor cases. Additionally, we develop a data-driven approach to identify the sparsity of risk factors over the time horizon using a novel cross-sectional cross-validation method. Theoretically, we establish that our estimators are consistent under mild conditions. Monte Carlo simulations demonstrate that the proposed method performs well in finite samples. Empirically, we analyze daily stock returns for a balanced panel of S&P 500 stocks from January 2004 to December 2016. Through textual analysis, we examine specific events associated with the identified sparse factors that systematically influence the stock market. Our approach offers a new pathway for economists to study and understand the systematic risks of economic and financial systems over time.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.09738
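    Illustrative sketch of a truncated power iteration for one leading sparse factor, in the spirit of the abstract above: in asymptotic PCA the factor process is an eigenvector of the T x T cross-product matrix, and hard-thresholding that eigenvector at each iteration enforces sparsity over the time horizon. The sparsity level k is treated as known here; the paper's cross-sectional cross-validation for choosing it is not reproduced.
      # Hypothetical sketch of a truncated power method for one sparse factor.
      # The T x T cross-product matrix plays its usual role in asymptotic PCA;
      # the eigenvector (the factor process) is hard-thresholded to k entries.
      import numpy as np

      def truncated_power(S, k, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          v = rng.normal(size=S.shape[0])
          v /= np.linalg.norm(v)
          for _ in range(iters):
              w = S @ v
              w[np.argsort(np.abs(w))[:-k]] = 0.0      # keep only the k largest entries
              v = w / np.linalg.norm(w)
          return v

      rng = np.random.default_rng(2)
      T, N, k = 100, 300, 20
      f = np.zeros(T); f[:k] = rng.normal(size=k)      # factor active only in a sub-period
      X = np.outer(f, rng.normal(size=N)) + 0.5 * rng.normal(size=(T, N))
      S = X @ X.T / N                                  # T x T cross-product matrix
      print(np.count_nonzero(truncated_power(S, k)))   # k nonzero estimated factor values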
  9. By: Lim Hao Shen Keith
    Abstract: In portfolio risk minimization, the inverse covariance matrix of returns is often unknown and has to be estimated in practice. This inverse covariance matrix also prescribes the hedge trades in which a stock is hedged by all the other stocks in the portfolio. In practice with finite samples, however, multicollinearity gives rise to considerable estimation errors, making the hedge trades too unstable and unreliable for use. By adopting ideas from current methodologies in the existing literature, we propose two new estimators of the inverse covariance matrix, one relying only on the l2 norm and the other utilizing both the l1 and l2 norms. These two estimators are classified as shrinkage estimators in the literature. Comparing favorably with other methods (sample-based estimation, equal weighting, estimation based on Principal Component Analysis), a portfolio formed on the proposed estimators achieves substantial out-of-sample risk reduction and improves the out-of-sample risk-adjusted returns of the portfolio, particularly in high-dimensional settings. Furthermore, the proposed estimators can still be computed even in instances where the sample covariance matrix is ill-conditioned or singular.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.08748
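    For intuition only, a generic l2-type (ridge) regularization of the inverse covariance matrix and the implied minimum-variance weights; the paper's two estimators are different, and all tuning values below are arbitrary assumptions.
      # Generic l2-regularized ("ridge") precision matrix and minimum-variance
      # portfolio weights; an illustration of the shrinkage idea, not the paper's
      # two proposed estimators.
      import numpy as np

      def min_variance_weights(returns, ridge=1e-2):
          n_assets = returns.shape[1]
          sample_cov = np.cov(returns, rowvar=False)
          # Adding ridge * I keeps the matrix invertible even when assets > observations.
          precision = np.linalg.inv(sample_cov + ridge * np.eye(n_assets))
          w = precision @ np.ones(n_assets)
          return w / w.sum()                            # weights sum to one

      rng = np.random.default_rng(3)
      rets = rng.normal(0.0005, 0.01, size=(60, 100))   # more assets than observations
      w = min_variance_weights(rets)
      print(w.sum(), float(w @ np.cov(rets, rowvar=False) @ w))   # 1.0 and portfolio variance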
  10. By: Tassos Magdalinos; Katerina Petrova
    Abstract: A limit theory is developed for the least squares estimator for mildly and purely explosive autoregressions under drifting sequences of parameters with autoregressive roots $\rho_n$ satisfying $\rho_n \to \rho \in (-\infty, -1] \cup [1, \infty)$ and $n(|\rho_n| - 1) \to \infty$. Drifting sequences of innovations and initial conditions are also considered. A standard specification of a short memory linear process for the autoregressive innovations is extended to a triangular array formulation, both for the deterministic weights and for the primitive innovations of the linear process, which are allowed to be heteroskedastic $L_1$-mixingales. The paper provides conditions that guarantee the validity of a Cauchy limit distribution for the OLS estimator and a standard Gaussian limit distribution for the t-statistic under this extended explosive and mildly explosive framework.
    Keywords: triangular array; explosive autoregression; linear process; conditional heteroskedasticity; mixingale; Cauchy distribution
    JEL: C12 C18 C22
    Date: 2024–08–01
    URL: https://d.repec.org/n?u=RePEc:fip:fednsr:98657
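    A small simulation of the classical special case behind the abstract above: for a purely explosive Gaussian AR(1) with zero initial condition, the OLS error normalized by $\rho^n/(\rho^2-1)$ has a standard Cauchy limit. The fixed values of $\rho$ and $n$ below are arbitrary choices for illustration.
      # Simulation of the classical explosive AR(1) benchmark: with Gaussian
      # innovations and a zero initial condition, rho**n / (rho**2 - 1) times the
      # OLS error converges to a standard Cauchy distribution.
      import numpy as np

      def ols_rho(y):
          return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

      rho, n, reps = 1.05, 200, 5000
      rng = np.random.default_rng(4)
      stats = []
      for _ in range(reps):
          y = np.zeros(n + 1)
          for t in range(n):
              y[t + 1] = rho * y[t] + rng.normal()
          stats.append(rho**n / (rho**2 - 1) * (ols_rho(y) - rho))
      stats = np.array(stats)
      # A standard Cauchy has interquartile range 2; the simulated IQR should be close.
      print(np.percentile(stats, 75) - np.percentile(stats, 25))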
  11. By: José E. Figueroa-López; Jincheng Pang; Bei Wu
    Abstract: For a multidimensional Itô semimartingale, we consider the problem of estimating integrated volatility functionals. Jacod and Rosenbaum (2013) studied a plug-in type of estimator based on a Riemann sum approximation of the integrated functional and a spot volatility estimator with a forward uniform kernel. Motivated by recent results showing that spot volatility estimators with general two-sided kernels of unbounded support are more accurate, in this paper we consider an estimator that uses a general kernel spot volatility estimator as the plug-in. A biased central limit theorem for estimating the integrated functional is established with an optimal convergence rate. Unbiased central limit theorems for estimators with proper de-biasing terms are also obtained, both at the optimal convergence regime for the bandwidth and under undersmoothing. Our results show that one can significantly reduce the estimator's bias by adopting a general kernel instead of the standard uniform kernel. Our proposed bias-corrected estimators are found to maintain remarkable robustness against bandwidth selection across a variety of sampling frequencies and functions.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.09759
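    A toy version of the plug-in idea discussed above: estimate the spot variance with a two-sided Gaussian kernel on squared returns and average a functional of it over the day. The deterministic variance path, bandwidth, and functional g are illustrative assumptions; jumps and microstructure noise are ignored.
      # Toy plug-in estimate of an integrated volatility functional
      # int_0^1 g(sigma_t^2) dt using a two-sided kernel spot variance estimator.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 23400                                            # one "day" of 1-second returns
      dt = 1.0 / n
      t = np.linspace(0, 1, n)
      sigma2 = 0.02 * (1 + 0.5 * np.sin(2 * np.pi * t))    # deterministic variance path
      returns = rng.normal(0.0, np.sqrt(sigma2 * dt))      # no jumps, no noise

      h = 0.02                                             # kernel bandwidth (fraction of the day)
      def spot_var(i):
          w = np.exp(-0.5 * ((t - t[i]) / h) ** 2)         # two-sided Gaussian kernel weights
          return (w @ (returns ** 2 / dt)) / w.sum()

      g = lambda s: s ** 2                                 # example functional (quarticity-type)
      grid = np.arange(0, n, 50)
      estimate = np.mean([g(spot_var(i)) for i in grid])   # Riemann-sum plug-in
      truth = np.mean(g(sigma2))                           # integral of g over [0, 1]
      print(estimate, truth)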
  12. By: Carsten H. Chong; Viktor Todorov
    Abstract: We develop a nonparametric test for deciding whether volatility of an asset follows a standard semimartingale process, with paths of finite quadratic variation, or a rough process with paths of infinite quadratic variation. The test utilizes the fact that volatility is rough if and only if volatility increments are negatively autocorrelated at high frequencies. It is based on the sample autocovariance of increments of spot volatility estimates computed from high-frequency asset return data. By showing a feasible CLT for this statistic under the null hypothesis of semimartingale volatility paths, we construct a test with fixed asymptotic size and an asymptotic power equal to one. The test is derived under very general conditions for the data-generating process. In particular, it is robust to jumps with arbitrary activity and to the presence of market microstructure noise. In an application of the test to SPY high-frequency data, we find evidence for rough volatility.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.10659
  13. By: Fabrizio Lillo; Giorgio Rizzini
    Abstract: Modelling how a shock propagates in a temporal network and how the system relaxes back to equilibrium is challenging but important in many applications, such as financial systemic risk. Most studies so far have focused on shocks hitting a link of the network, while often it is the node and its propensity to be connected that are affected by a shock. Using as a starting point the configuration model, a specific Exponential Random Graph model, we propose a vector autoregressive (VAR) framework to analytically compute the Impulse Response Function (IRF) of a network metric conditional on a shock to a node. Unlike the standard VAR, the model is a nonlinear function of the shock size, and the IRF depends on the state of the network at the shock time. We propose a novel econometric estimation method that combines maximum likelihood estimation and the Kalman filter to estimate the dynamics of the latent parameters and compute the IRF, and we apply the proposed methodology to the dynamical network describing the electronic Market of Interbank Deposit (e-MID).
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.09340
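    For readers less familiar with impulse responses, the linear benchmark that the paper generalizes is very short to state: in a VAR(1) the response at horizon h to a shock vector s is simply $A^h s$. The coefficient matrix and shock below are made up for illustration; the paper's IRF is instead nonlinear in the shock size and depends on the state of the network.
      # Linear benchmark: impulse responses in a VAR(1), y_t = A y_{t-1} + e_t.
      # The response at horizon h to a shock vector s is A**h @ s.
      import numpy as np

      A = np.array([[0.5, 0.2],
                    [0.1, 0.6]])             # illustrative coefficient matrix
      shock = np.array([1.0, 0.0])           # unit shock to the first variable
      response = shock
      for h in range(11):
          print(h, np.round(response, 3))    # decays back toward equilibrium
          response = A @ response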
  14. By: Benedikt Höltgen; Robert C. Williamson
    Abstract: The most common approach to causal modelling is the potential outcomes framework due to Neyman and Rubin. In this framework, outcomes of counterfactual treatments are assumed to be well-defined. This metaphysical assumption is often thought to be problematic yet indispensable. The conventional approach relies not only on counterfactuals, but also on abstract notions of distributions and assumptions of independence that are not directly testable. In this paper, we construe causal inference as treatment-wise predictions for finite populations where all assumptions are testable; this means that one can not only test predictions themselves (without any fundamental problem), but also investigate sources of error when they fail. The new framework highlights the model-dependence of causal claims as well as the difference between statistical and scientific inference.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.17385
  15. By: Xavier Brouty; Matthieu Garcin; Hugo Roccaro
    Abstract: Starting from a basic model in which the dynamics of the transaction prices follow a geometric Brownian motion disrupted by a microstructure white noise, corresponding to the random alternation of bids and asks, we propose moment-based estimators along with their statistical properties. We then make the model more realistic by considering serial dependence: we assume a geometric fractional Brownian motion for the price, and then an Ornstein-Uhlenbeck process for the microstructure noise. In these two cases of serial dependence, we again propose consistent and asymptotically normal estimators. All our estimators are compared on simulated data with existing approaches, such as the Roll, Corwin-Schultz, Abdi-Ranaldo, and Ardia-Guidotti-Kroencke estimators.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.17401
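    The simplest of the benchmark estimators named in the abstract, the Roll estimator, takes two lines: the spread is $2\sqrt{-\mathrm{cov}(\Delta p_t, \Delta p_{t-1})}$ whenever that first-order autocovariance of price changes is negative. The simulated prices below follow the basic bid-ask bounce model purely for illustration.
      # The Roll benchmark: under the basic bid-ask bounce model, the effective
      # (proportional) spread equals 2 * sqrt(-cov(dp_t, dp_{t-1})), where dp are
      # log-price changes; it is defined only when that covariance is negative.
      import numpy as np

      def roll_spread(prices):
          dp = np.diff(np.log(prices))
          cov = np.cov(dp[1:], dp[:-1])[0, 1]
          return 2 * np.sqrt(-cov) if cov < 0 else np.nan

      rng = np.random.default_rng(7)
      n, half_spread = 10_000, 0.001
      mid = np.cumsum(rng.normal(0, 0.0005, size=n))       # efficient log-price
      side = rng.choice([-1.0, 1.0], size=n)               # random alternation of bids/asks
      prices = np.exp(mid + side * half_spread)            # observed transaction prices
      print(roll_spread(prices))                           # close to 2 * half_spread = 0.002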
  16. By: Diego Ciccia
    Abstract: I propose an event study extension of Synthetic Difference-in-Differences (SDID) estimators. I show that, in simple and staggered adoption designs, estimators from Arkhangelsky et al. (2021) can be disaggregated into dynamic treatment effect estimators, comparing the lagged outcome differentials of treated and synthetic controls to their pre-treatment average. Estimators presented in this note can be computed using the sdid_event Stata package.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.09565
  17. By: Marc Burri; Daniel Kaufmann
    Abstract: We propose a two-step approach to estimate multi-dimensional monetary policy shocks and their causal effects requiring only daily financial market data and policy events. First, we combine a heteroscedasticity-based identification scheme with recursive zero restrictions along the term structure of interest rates to disentangle multi-dimensional monetary policy shocks and derive an instrumental variables estimator to estimate dynamic causal effects. Second, we propose to use the Kalman filter to compute the linear minimum mean-square-error prediction of the unobserved monetary policy shocks. We apply the approach to examine the causal effects of US monetary policy on the exchange rate. The heteroscedasticity-based monetary policy shocks display a relevant correlation with existing high-frequency surprises. In addition, their dynamic causal effects on the exchange rate are similar. This suggests the approach is a valid alternative if high-frequency identification schemes are not applicable.
    Keywords: Monetary policy shocks, forward guidance, large-scale asset purchases, identification through heteroscedasticity, instrumental variables, term structure of interest rates, exchange rate
    JEL: C3 E3 E4 E5 F3
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:irn:wpaper:24-03
  18. By: Simon Hirsch; Jonathan Berrisch; Florian Ziel
    Abstract: Large-scale streaming data are common in modern machine learning applications and have led to the development of online learning algorithms. Many fields, such as supply chain management, weather and meteorology, energy markets, and finance, have pivoted towards using probabilistic forecasts, which creates the need to learn accurately not only the expected value but also the conditional heteroskedasticity. Against this backdrop, we present a methodology for online estimation of regularized linear distributional models for conditional heteroskedasticity. The proposed algorithm is based on a combination of recent developments for the online estimation of LASSO models and the well-known GAMLSS framework. We provide a case study on day-ahead electricity price forecasting, in which we show the competitive performance of the adaptive estimation combined with strongly reduced computational effort. Our algorithms are implemented in a computationally efficient Python package.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.08750
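    Not the ROLCH algorithm itself, but a minimal example of what "online learning of conditional heteroskedasticity" means: a Gaussian model with a linear mean and a log-linear variance, updated one observation at a time by stochastic gradient steps on the likelihood. The learning rate, sample size, and coefficients are arbitrary assumptions.
      # Minimal online learner for conditional heteroskedasticity (not ROLCH):
      # y_t ~ N(x_t' beta, exp(x_t' gamma)), updated one observation at a time
      # by stochastic gradient ascent on the Gaussian log-likelihood.
      import numpy as np

      rng = np.random.default_rng(8)
      p, n_obs, lr = 3, 50_000, 0.01
      beta_true, gamma_true = np.array([1.0, -0.5, 0.2]), np.array([-2.0, 0.8, 0.0])
      beta, gamma = np.zeros(p), np.zeros(p)

      for _ in range(n_obs):
          x = np.append(1.0, rng.normal(size=p - 1))                     # intercept + covariates
          y = rng.normal(x @ beta_true, np.exp(0.5 * x @ gamma_true))    # stream one observation
          resid = y - x @ beta
          v = np.exp(x @ gamma)
          beta += lr * (resid / v) * x                                   # score step for the mean
          gamma += lr * 0.5 * (resid**2 / v - 1.0) * x                   # score step for the log-variance

      print(np.round(beta, 2), np.round(gamma, 2))                       # roughly recovers the truth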
  19. By: Neil Christy; A. E. Kowalski
    Abstract: We use the exact finite sample likelihood and statistical decision theory to answer questions of "why?" and "what should you have done?" using data from randomized experiments and a utility function that prioritizes safety over efficacy. We propose a finite sample Bayesian decision rule and a finite sample maximum likelihood decision rule. We show that in finite samples of sizes 2 to 50, it is possible for these rules to achieve better performance according to established maximin and maximum regret criteria than a rule based on the Boole-Fréchet-Hoeffding bounds. We also propose a finite sample maximum likelihood criterion. We apply our rules and criterion to an actual clinical trial that yielded a promising estimate of efficacy, and our results point to safety as a reason why results were mixed in subsequent trials.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.18206
  20. By: Òscar Jordà; Alan M. Taylor
    Abstract: A central question in applied research is how to estimate the effect of an exogenous intervention or shock on an outcome. The intervention can affect the outcome and controls on impact and over time. Moreover, there can be subsequent feedback between outcomes, controls, and the intervention. Many of these interactions can be untangled using local projections. This method's simplicity makes it a convenient and versatile tool in the empiricist's kit, one that is generalizable to complex settings. This article reviews the state of the art for the practitioner, discusses best practices and possible extensions of local projection methods, along with their limitations.
    Keywords: local projections; impulse response; multipliers; bias; inference; instrumental variables; policy evaluation; Kitagawa decomposition; panel data
    JEL: C01 C14 C22 C26 C32 C54
    Date: 2024–08–12
    URL: https://d.repec.org/n?u=RePEc:fip:fedfwp:98669
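    The core recipe is compact enough to show: for each horizon h, regress y_{t+h} on the shock at time t (plus controls such as a lag of the outcome); the sequence of shock coefficients traces out the impulse response. The data-generating process below is invented, and HAC standard errors, which practitioners would normally report, are omitted for brevity.
      # Local projections in their simplest form: one regression per horizon h of
      # y_{t+h} on the shock at t and a lagged control; the shock coefficient at
      # each horizon is the impulse response estimate.
      import numpy as np

      rng = np.random.default_rng(9)
      T, H = 500, 12
      shock = rng.normal(size=T)
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = 0.7 * y[t - 1] + shock[t] + 0.3 * rng.normal()          # true IRF: 0.7**h

      irf = []
      for h in range(H + 1):
          lhs = y[h:]                                                    # y_{t+h}
          y_lag = np.concatenate(([0.0], y[:T - h - 1]))                 # y_{t-1}, zero-padded at t=0
          rhs = np.column_stack([np.ones(T - h), shock[:T - h], y_lag])
          coef, *_ = np.linalg.lstsq(rhs, lhs, rcond=None)
          irf.append(coef[1])                                            # coefficient on the shock

      print(np.round(irf, 2))                                            # roughly 1.0, 0.7, 0.49, ...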
  21. By: Yuri Matsumura; Suguru Otani
    Abstract: We propose a constrained generalized method of moments (GMM) estimator incorporating theoretical conditions for the existence and uniqueness of equilibrium prices to estimate conduct parameters in a log-linear model for homogeneous goods markets. First, we derive such conditions. Second, Monte Carlo simulations confirm that, in a log-linear model, incorporating the conditions resolves the problem of implausibly low or negative values of conduct parameters.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.12422
  22. By: Bruno Fava
    Abstract: Important questions for impact evaluation require knowledge not only of average effects, but of the distribution of treatment effects. What proportion of people are harmed? Does a policy help many by a little? Or a few by a lot? The inability to observe individual counterfactuals makes these empirical questions challenging. I propose an approach to inference on points of the distribution of treatment effects by incorporating predicted counterfactuals through covariate adjustment. I show that finite-sample inference is valid under weak assumptions, for example when data come from a Randomized Controlled Trial (RCT), and that large-sample inference is asymptotically exact under suitable conditions. Finally, I revisit five RCTs in microcredit where average effects are not statistically significant and find evidence of both positive and negative treatment effects in household income. On average across studies, at least 13.6% of households benefited and 12.5% were negatively affected.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.14635
  23. By: Martin Huber
    Abstract: In social sciences and economics, causal inference traditionally focuses on assessing the impact of predefined treatments (or interventions) on predefined outcomes, such as the effect of education programs on earnings. Causal discovery, in contrast, aims to uncover causal relationships among multiple variables in a data-driven manner, by investigating statistical associations rather than relying on predefined causal structures. This approach, more common in computer science, seeks to understand causality in an entire system of variables, which can be visualized by causal graphs. This survey provides an introduction to key concepts, algorithms, and applications of causal discovery from the perspectives of economics and social sciences. It covers fundamental concepts like d-separation, causal faithfulness, and Markov equivalence, sketches various algorithms for causal discovery, and discusses the back-door and front-door criteria for identifying causal effects. The survey concludes with more specific examples of causal discovery, e.g. for learning all variables that directly affect an outcome of interest and/or testing identification of causal effects in observational data.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.08602
  24. By: Matteo Barigozzi; Marc Hallin
    Abstract: Several fundamental and closely interconnected issues related to factor models are reviewed and discussed: dynamic versus static loadings, rate-strong versus rate-weak factors, the concept of a weakly common component recently introduced by Gersing et al. (2023), the irrelevance of cross-sectional ordering and the assumption of cross-sectional exchangeability, and the problem of undetected strong factors.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.10653
  25. By: Melissa Dell
    Abstract: Deep learning provides powerful methods to impute structured information from large-scale, unstructured text and image datasets. For example, economists might wish to detect the presence of economic activity in satellite images, or to measure the topics or entities mentioned in social media, the congressional record, or firm filings. This review introduces deep neural networks, covering methods such as classifiers, regression models, generative AI, and embedding models. Applications include classification, document digitization, record linkage, and methods for data exploration in massive-scale text and image corpora. When suitable methods are used, deep learning models can be cheap to tune and can scale affordably to problems involving millions or billions of data points. The review is accompanied by a companion website, EconDL, with user-friendly demo notebooks, software resources, and a knowledge base that provides technical details and additional applications.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.15339
  26. By: Todd E. Clark; Gergely Ganics; Elmar Mertens
    Abstract: We develop models that take point forecasts from the Survey of Professional Forecasters (SPF) as inputs and produce estimates of survey-consistent term structures of expectations and uncertainty at arbitrary forecast horizons. Our models combine fixed-horizon and fixed-event forecasts, accommodating time-varying horizons and availability of survey data, as well as potential inefficiencies in survey forecasts. The estimated term structures of SPF-consistent expectations are comparable in quality to the published, widely used short-horizon forecasts. Our estimates of time-varying forecast uncertainty reflect historical variations in realized errors of SPF point forecasts, and generate fan charts with reliable coverage rates.
    Keywords: term structure of expectations; uncertainty; survey forecasts; fan charts
    JEL: E37 C53
    Date: 2024–08–06
    URL: https://d.repec.org/n?u=RePEc:fip:fedcwq:98629

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.