nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒10‒14
twenty-two papers chosen by
Sune Karlsson, Örebro universitet


  1. Bayesian Dynamic Factor Models for High-dimensional Matrix-valued Time Series By Wei Zhang
  2. Empirical likelihood inference for Oaxaca-Blinder decomposition By Otsu, Taisuke; Tanaka, Shiori
  3. Distribution Regression Difference-In-Differences By Iván Fernández-Val; Jonas Meier; Aico van Vuuren; Francis Vella
  4. Instrumental Variables with Unobserved Heterogeneity in Treatment Effects By Magne Mogstad; Alexander Torgovitsky
  5. On LASSO Inference for High Dimensional Predictive Regression By Zhan Gao; Ji Hyung Lee; Ziwei Mei; Zhentao Shi
  6. Group-specific linear trends and the triple-differences in time design By Strezhnev, Anton
  7. The Clustered Dose-Response Function Estimator for continuous treatment with heterogeneous treatment effects By Cerqua Augusto; Di Stefano Roberta; Mattera Raffaele
  8. Improving the Finite Sample Performance of Double/Debiased Machine Learning with Propensity Score Calibration By Daniele Ballinari; Nora Bearth
  9. Nonparametric Identification of Models for Dyadic Data By Diegert, Paul; Jochmans, Koen
  10. Enhancing Preference-based Linear Bandits via Human Response Time By Shen Li; Yuyang Zhang; Zhaolin Ren; Claire Liang; Na Li; Julie A. Shah
  11. Bootstrap Adaptive Lasso Solution Path Unit Root Tests By Martin C. Arnold; Thilo Reinschlüssel
  12. Design of Partial Population Experiments with an Application to Spillovers in Tax Compliance By Cruces, Guillermo; Tortarolo, Dario; Vazquez-Bare, Gonzalo
  13. Performance of Empirical Risk Minimization For Principal Component Regression By Christian Brownlees; Guðmundur Stefán Guðmundsson; Yaping Wang
  14. Estimating Choice Models with Unobserved Expectations over Attributes By Reynaert, Mathias; Xu, Wenxuan; Zhao, Hanlin
  15. Floods and financial stability: Scenario-based evidence from below sea level By Ramon F. A. de Punder; Cees G. H. Diks; Roger J. A. Laeven; Dick J. C. van Dijk
  16. Bandit Algorithms for Policy Learning: Methods, Implementation, and Welfare-performance By Toru Kitagawa; Jeff Rowley
  17. Imputation without nightMARs: Graphical criteria for valid imputation of missing data By Mathur, Maya B; Shpitser, Ilya
  18. Estimating Wage Disparities Using Foundation Models By Keyon Vafa; Susan Athey; David M. Blei
  19. Regime-Switching Factor Models and Nowcasting with Big Data By Omer Faruk Akbal
  20. A robust Beveridge-Nelson decomposition using a score-driven approach with an application By Francisco Blasques; Janneke van Brummelen; Paolo Gorgi; Siem Jan Koopman
  21. State-Space Dynamic Functional Regression for Multicurve Fixed Income Spread Analysis and Stress Testing By Peilun He; Gareth W. Peters; Nino Kordzakhia; Pavel V. Shevchenko
  22. Asymmetries in the transmission of monetary policy shocks over the business cycle: a Bayesian Quantile Factor Augmented VAR By Velasco, Sofia

  1. By: Wei Zhang
    Abstract: High-dimensional matrix-valued time series are of significant interest in economics and finance, with prominent examples including cross-region macroeconomic panels and firms' financial data panels. We introduce a class of Bayesian matrix dynamic factor models that utilize matrix structures to identify more interpretable factor patterns and factor impacts. Our model accommodates time-varying volatility, adjusts for outliers, and allows cross-sectional correlations in the idiosyncratic components. To determine the dimension of the factor matrix, we employ an importance-sampling estimator based on the cross-entropy method to estimate marginal likelihoods. Through a series of Monte Carlo experiments, we show the properties of the factor estimators and the performance of the marginal likelihood estimator in correctly identifying the true dimensions of the factor matrices. Applying our model to a macroeconomic dataset and a financial dataset, we demonstrate its ability to unveil interesting features within matrix-valued time series.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.08354
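    Code sketch: a minimal numpy illustration of the data structure this model targets, generating a matrix-valued panel X_t = R F_t C' + E_t from row and column loadings; all dimensions and parameters are hypothetical, and the Bayesian estimation with time-varying volatility and outlier adjustment is not shown.

      import numpy as np

      rng = np.random.default_rng(0)
      T, p1, p2, k1, k2 = 200, 20, 15, 3, 2   # panel length, matrix dimensions, factor dimensions

      R = rng.normal(size=(p1, k1))           # row loading matrix
      C = rng.normal(size=(p2, k2))           # column loading matrix
      F = rng.normal(size=(T, k1, k2))        # matrix-valued factor process
      E = rng.normal(scale=0.5, size=(T, p1, p2))

      # Matrix factor model: each observation is a p1 x p2 matrix, X_t = R F_t C' + E_t.
      X = np.einsum("ik,tkl,jl->tij", R, F, C) + E
      print(X.shape)                          # (T, p1, p2) matrix-valued time series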
  2. By: Otsu, Taisuke; Tanaka, Shiori
    Abstract: This paper proposes an empirical likelihood inference method for the Oaxaca-Blinder decomposition. In contrast to the conventional Wald statistic based on the delta method, our approach circumvents linearization errors and estimation of the variance terms. Furthermore, the shape of the resulting empirical likelihood confidence set is determined flexibly by the data. Simulation results illustrate the usefulness of the proposed inference method.
    Keywords: empirical likelihood; Oaxaca–Blinder decomposition; two-sample test
    JEL: C14
    Date: 2022–10–01
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:115982
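    Code sketch: the point estimator the paper builds inference for is the standard two-group Oaxaca-Blinder split of a mean gap into explained and unexplained parts; a minimal numpy version on simulated data (the empirical-likelihood confidence set itself is not shown).

      import numpy as np

      rng = np.random.default_rng(0)

      def ols(X, y):
          """OLS coefficients with an intercept prepended."""
          X1 = np.column_stack([np.ones(len(X)), X])
          return np.linalg.lstsq(X1, y, rcond=None)[0]

      # Simulated covariates and outcomes for groups A and B (hypothetical data).
      XA = rng.normal(1.0, 1.0, size=(500, 2))
      XB = rng.normal(0.6, 1.0, size=(500, 2))
      yA = 1.0 + XA @ np.array([0.5, 0.3]) + rng.normal(0, 1, 500)
      yB = 0.8 + XB @ np.array([0.4, 0.3]) + rng.normal(0, 1, 500)

      bA, bB = ols(XA, yA), ols(XB, yB)
      mA = np.concatenate([[1.0], XA.mean(axis=0)])
      mB = np.concatenate([[1.0], XB.mean(axis=0)])

      gap = yA.mean() - yB.mean()
      explained = (mA - mB) @ bB           # part due to different covariate means
      unexplained = mA @ (bA - bB)         # part due to different coefficients
      print(gap, explained + unexplained)  # the two quantities coincide up to rounding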
  3. By: Iván Fernández-Val; Jonas Meier; Aico van Vuuren; Francis Vella
    Abstract: We provide a simple distribution regression estimator for treatment effects in the difference-in-differences (DiD) design. Our procedure is particularly useful when the treatment effect differs across the distribution of the outcome variable. Our proposed estimator easily incorporates covariates and, importantly, can be extended to settings where the treatment potentially affects the joint distribution of multiple outcomes. Our key identifying restriction is that the counterfactual distribution of the treated in the untreated state has no interaction effect between treatment and time. This assumption results in a parallel trend assumption on a transformation of the distribution. We highlight how our procedure and assumptions relate to the changes-in-changes approach of Athey and Imbens (2006). We also reexamine two existing empirical examples that highlight the utility of our approach.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.02311
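    Code sketch: the building block of the approach is distribution regression, which estimates the conditional CDF of the outcome by running binary regressions of 1{Y <= y} on covariates over a grid of thresholds; a minimal sketch of that step on simulated data using scikit-learn (the DiD counterfactual construction is not shown).

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 2000
      X = rng.normal(size=(n, 2))
      y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(size=n)

      # Distribution regression: for each threshold t, model P(Y <= t | X).
      thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
      x_new = np.array([[0.0, 0.0]])      # evaluation point for the conditional CDF

      cdf_hat = []
      for t in thresholds:
          model = LogisticRegression().fit(X, (y <= t).astype(int))
          cdf_hat.append(model.predict_proba(x_new)[0, 1])

      print(np.round(cdf_hat, 3))         # estimated conditional CDF of Y at x = (0, 0)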
  4. By: Magne Mogstad; Alexander Torgovitsky
    Abstract: This chapter synthesizes and critically reviews the modern instrumental variables (IV) literature that allows for unobserved heterogeneity in treatment effects (UHTE). We start by discussing why UHTE is often an essential aspect of IV applications in economics and we explain the conceptual challenges raised by allowing for it. Then we review and survey two general strategies for incorporating UHTE. The first strategy is to continue to use linear IV estimators designed for classical constant (homogeneous) treatment effect models, acknowledge their likely misspecification, and attempt to reverse engineer an attractive interpretation in the presence of UHTE. This strategy commonly leads to interpretations of linear IV that involve local average treatment effects (LATEs). We review the various ways in which the use and justification of LATE interpretations have expanded and contracted since their introduction in the early 1990s. The second strategy is to forward engineer new estimators that explicitly allow for UHTE. This strategy has its roots in the Gronau-Heckman selection model of the 1970s, ideas from which have been revitalized through marginal treatment effects (MTE) analysis. We discuss implementation of MTE methods and draw connections with related control function and bounding methods that are scattered throughout the econometric and causal inference literature.
    JEL: C26 C36
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:32927
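    Code sketch: the starting point of the chapter is linear IV, which with a binary instrument and binary treatment reduces to the Wald ratio and, under the LATE assumptions, estimates the average effect among compliers; a minimal simulated illustration (the compliance share and effect distribution are hypothetical).

      import numpy as np

      rng = np.random.default_rng(2)
      n = 10_000

      z = rng.integers(0, 2, n)                        # binary instrument
      complier = rng.random(n) < 0.6                   # compliers (hypothetical share)
      d_other = rng.integers(0, 2, n)                  # take-up of always/never takers
      d = np.where(complier, z, d_other)               # observed treatment
      tau = rng.normal(1.0, 0.5, n)                    # heterogeneous treatment effects
      y = 0.5 + tau * d + rng.normal(0, 1, n)

      # Wald / linear IV estimate: reduced form divided by first stage.
      wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
      print(wald)   # approximates the average effect among compliers (the LATE), here 1.0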
  5. By: Zhan Gao; Ji Hyung Lee; Ziwei Mei; Zhentao Shi
    Abstract: LASSO introduces shrinkage bias into estimated coefficients, which can adversely affect the desirable asymptotic normality and invalidate the standard inferential procedure based on the $t$-statistic. The desparsified LASSO has emerged as a well-known remedy for this issue. In the context of high dimensional predictive regression, the desparsified LASSO faces an additional challenge: the Stambaugh bias arising from nonstationary regressors. To restore the standard inferential procedure, we propose a novel estimator called IVX-desparsified LASSO (XDlasso). XDlasso eliminates the shrinkage bias and the Stambaugh bias simultaneously and does not require prior knowledge about the identities of nonstationary and stationary regressors. We establish the asymptotic properties of XDlasso for hypothesis testing, and our theoretical findings are supported by Monte Carlo simulations. Applying our method to real-world applications from the FRED-MD database -- which includes a rich set of control variables -- we investigate two important empirical questions: (i) the predictability of the U.S. stock returns based on the earnings-price ratio, and (ii) the predictability of the U.S. inflation using the unemployment rate.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.10030
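    Code sketch: a minimal version of the generic desparsified LASSO correction for shrinkage bias (initial lasso fit, nodewise-lasso approximate inverse Gram matrix, one-step debiasing) on simulated i.i.d. data; this is the textbook construction, not the paper's XDlasso, which additionally applies IVX filtering to handle nonstationary regressors.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      n, p = 200, 50
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[:3] = [1.0, -0.5, 0.75]
      y = X @ beta + rng.normal(size=n)

      # Step 1: initial lasso fit (introduces shrinkage bias).
      b_init = Lasso(alpha=0.1).fit(X, y).coef_

      # Step 2: nodewise lasso builds an approximate inverse Gram matrix Theta.
      Theta = np.zeros((p, p))
      for j in range(p):
          others = np.delete(np.arange(p), j)
          nw = Lasso(alpha=0.1).fit(X[:, others], X[:, j])
          resid = X[:, j] - X[:, others] @ nw.coef_
          tau2 = resid @ X[:, j] / n
          row = np.zeros(p)
          row[j] = 1.0
          row[others] = -nw.coef_
          Theta[j] = row / tau2

      # Step 3: one-step correction removes the shrinkage bias, restoring
      # approximately normal coordinates usable for t-type inference.
      b_debiased = b_init + Theta @ X.T @ (y - X @ b_init) / n
      print(np.round(b_debiased[:5], 3))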
  6. By: Strezhnev, Anton
    Abstract: Differences-in-differences designs for estimating causal effects rely on an assumption of "parallel trends" -- that in the absence of the intervention, treated units would have followed the same outcome trajectory as observed in control units. When parallel trends fails, researchers often turn to alternative strategies that relax this identifying assumption. One popular approach is the inclusion of a group-specific linear time trend in the commonly used two-way fixed effects (TWFE) estimator. In a setting with a single post-treatment and two pre-treatment periods it is well known that this is equivalent to a non-parametric "triple-differences" estimator which is valid under a "parallel trends-in-trends" assumption (Egami and Yamauchi, 2023). This paper analyzes the TWFE estimator with group-specific linear time trends in the more general setting with many pre- and post-treatment periods. It shows that this estimator can be interpreted as an average over triple-differences terms involving both pre-treatment and post-treatment observations. As a consequence, this estimator does not identify a convex average of post-treatment ATTs without additional effect homogeneity assumptions even when there is no staggering in treatment adoption. A straightforward solution is to make the TWFE specification fully dynamic with a separate parameter for each relative treatment time. However, identification requires that researchers omit at least two pre-treatment relative treatment time indicators to estimate a group-specific linear trend. The paper shows how to properly extend this estimator to the staggered adoption setting using the approach of Sun and Abraham (2021), correcting a perfect collinearity error in recent implementations of this method in Hassell and Holbein (2024). It concludes with a note of caution for researchers, showing through a replication of Kogan (2021) how inferences from group-specific time trend specifications can be extremely sensitive to arbitrary specification choices when parallel trends violations are present but do not follow an easily observed functional form.
    Date: 2024–09–04
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:dg5ps
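    Code sketch: the specification analysed here is a two-way fixed effects regression augmented with a group-specific linear time trend; a minimal statsmodels version on simulated data with a constant treatment effect, where the specification recovers the effect despite a parallel-trends violation (the data-generating process and parameter values are hypothetical).

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      units, periods = 40, 10
      df = pd.DataFrame([(i, t) for i in range(units) for t in range(periods)],
                        columns=["unit", "time"])
      df["group"] = (df["unit"] < 20).astype(int)      # treated group indicator
      df["post"] = (df["time"] >= 5).astype(int)
      df["d"] = df["group"] * df["post"]               # treatment indicator
      # Outcome with a group-specific linear trend (plain parallel trends fails).
      df["y"] = (0.1 * df["unit"] + 0.2 * df["time"] + 0.3 * df["group"] * df["time"]
                 + 1.0 * df["d"] + rng.normal(0, 0.5, len(df)))

      # TWFE with a group-specific linear time trend, the specification analysed above.
      fit = smf.ols("y ~ C(unit) + C(time) + group:time + d", data=df).fit()
      print(fit.params["d"])                           # close to the true effect of 1.0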
  7. By: Cerqua Augusto; Di Stefano Roberta; Mattera Raffaele
    Abstract: Many treatments are non-randomly assigned, continuous in nature, and exhibit heterogeneous effects even at identical treatment intensities. Taken together, these characteristics pose significant challenges for identifying causal effects, as no existing estimator can provide an unbiased estimate of the average causal dose-response function. To address this gap, we introduce the Clustered Dose-Response Function (Cl-DRF), a novel estimator designed to discern the continuous causal relationships between treatment intensity and the dependent variable across different subgroups. This approach leverages both theoretical and data-driven sources of heterogeneity and operates under relaxed versions of the conditional independence and positivity assumptions, which are required to be met only within each identified subgroup. To demonstrate the capabilities of the Cl-DRF estimator, we present both simulation evidence and an empirical application examining the impact of European Cohesion funds on economic growth.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.08773
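    Code sketch: a simplified stand-in for the idea of subgroup-specific dose-response curves, clustering units and fitting a separate polynomial dose-response within each cluster; this is not the Cl-DRF estimator itself, which additionally adjusts for non-random assignment of the treatment intensity, and the data-generating process below is hypothetical.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      n = 3000
      w = rng.normal(size=(n, 3))                              # unit characteristics
      dose = np.clip(0.5 + 0.3 * w[:, 0] + rng.normal(0, 0.3, n), 0, 2)
      group = (w[:, 1] > 0).astype(int)                        # source of effect heterogeneity
      y = (1 + group) * dose - 0.3 * dose**2 + w[:, 0] + rng.normal(0, 0.5, n)

      # Split units into subgroups (here k-means on one observable), then fit a
      # separate dose-response curve within each subgroup.
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(w[:, [1]])
      grid = np.linspace(0, 2, 5).reshape(-1, 1)
      poly = PolynomialFeatures(degree=2, include_bias=False)
      for g in range(2):
          mask = labels == g
          fit = LinearRegression().fit(poly.fit_transform(dose[mask].reshape(-1, 1)), y[mask])
          print(g, np.round(fit.predict(poly.transform(grid)), 2))   # curve on a dose grid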
  8. By: Daniele Ballinari; Nora Bearth
    Abstract: Machine learning techniques are widely used for estimating causal effects. Double/debiased machine learning (DML) (Chernozhukov et al., 2018) uses a double-robust score function that relies on the prediction of nuisance functions, such as the propensity score, which is the probability of treatment assignment conditional on covariates. Estimators relying on double-robust score functions are highly sensitive to errors in propensity score predictions. Machine learners increase the severity of this problem as they tend to over- or underestimate these probabilities. Several calibration approaches have been proposed to improve probabilistic forecasts of machine learners. This paper investigates the use of probability calibration approaches within the DML framework. Simulation results demonstrate that calibrating propensity scores may significantly reduce the root mean squared error of DML estimates of the average treatment effect in finite samples. We showcase this in an empirical example and provide conditions under which calibration does not alter the asymptotic properties of the DML estimator.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.04874
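    Code sketch: one of the calibration approaches studied in this literature is isotonic calibration of machine-learned propensity scores before they enter a doubly robust score; a minimal scikit-learn version computing an AIPW estimate with calibrated scores (in-sample fits for brevity, whereas the DML estimator proper relies on cross-fitting).

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
      from sklearn.isotonic import IsotonicRegression

      rng = np.random.default_rng(6)
      n = 4000
      X = rng.normal(size=(n, 5))
      d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))          # treatment assignment
      y = X[:, 0] + 1.0 * d + rng.normal(size=n)               # true ATE = 1.0

      # Raw propensity scores from a machine learner (often poorly calibrated).
      ps_raw = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, d).predict_proba(X)[:, 1]

      # Calibrate the scores with isotonic regression before they enter the score function.
      iso = IsotonicRegression(y_min=0.01, y_max=0.99, out_of_bounds="clip").fit(ps_raw, d)
      ps_cal = iso.predict(ps_raw)

      # Outcome regressions for each treatment arm.
      mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[d == 1], y[d == 1]).predict(X)
      mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[d == 0], y[d == 0]).predict(X)

      # Doubly robust (AIPW) score evaluated with the calibrated propensity scores.
      psi = mu1 - mu0 + d * (y - mu1) / ps_cal - (1 - d) * (y - mu0) / (1 - ps_cal)
      print(psi.mean())                                        # average treatment effect estimate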
  9. By: Diegert, Paul; Jochmans, Koen
    Abstract: Consider dyadic random variables on units from a given population. It is common to assume that these variables are jointly exchangeable and dissociated. In this case they admit a non-separable specification with two-way unobserved heterogeneity. The analysis of this type of structure is of considerable interest, but little is known about its nonparametric identifiability, especially when the unobserved heterogeneity is continuous. We provide conditions under which both the distribution of the observed random variables conditional on the unit-specific heterogeneity and the distribution of the unit-specific heterogeneity itself are uniquely recoverable from knowledge of the joint marginal distribution of the observable random variables alone, without imposing parametric restrictions.
    Keywords: Exchangeability; conditional independence; dyadic data; network; two-way; heterogeneity
    Date: 2024–09–17
    URL: https://d.repec.org/n?u=RePEc:tse:wpaper:129722
  10. By: Shen Li; Yuyang Zhang; Zhaolin Ren; Claire Liang; Na Li; Julie A. Shah
    Abstract: Binary human choice feedback is widely used in interactive preference learning for its simplicity, but it provides limited information about preference strength. To overcome this limitation, we leverage human response times, which inversely correlate with preference strength, as complementary information. Our work integrates the EZ-diffusion model, which jointly models human choices and response times, into preference-based linear bandits. We introduce a computationally efficient utility estimator that reformulates the utility estimation problem using both choices and response times as a linear regression problem. Theoretical and empirical comparisons with traditional choice-only estimators reveal that for queries with strong preferences ("easy" queries), choices alone provide limited information, while response times offer valuable complementary information about preference strength. As a result, incorporating response times makes easy queries more useful. We demonstrate this advantage in the fixed-budget best-arm identification problem, with simulations based on three real-world datasets, consistently showing accelerated learning when response times are incorporated.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.05798
  11. By: Martin C. Arnold; Thilo Reinschlüssel
    Abstract: We propose sieve wild bootstrap analogues to the adaptive Lasso solution path unit root tests of Arnold and Reinschlüssel (2024) arXiv:2404.06205 to improve finite sample properties and extend their applicability to a generalised framework, allowing for non-stationary volatility. Numerical evidence shows the bootstrap to improve the tests' precision for error processes that promote spurious rejections of the unit root null, depending on the detrending procedure. The bootstrap mitigates finite-sample size distortions and restores asymptotically valid inference when the data features time-varying unconditional variance. We apply the bootstrap tests to real residential property prices of the top six Eurozone economies and find evidence of stationarity to be period-specific, supporting the conjecture that exuberance in the housing market characterises the development of Euro-era residential property prices in the recent past.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.07859
  12. By: Cruces, Guillermo (CEDLAS-UNLP); Tortarolo, Dario (World Bank); Vazquez-Bare, Gonzalo (UC Santa Barbara)
    Abstract: We develop a framework to analyze partial population experiments, a generalization of the cluster experimental design where clusters are assigned to different treatment intensities. Our framework allows for heterogeneity in cluster sizes and outcome distributions. We study the large-sample behavior of OLS estimators and cluster-robust variance estimators and show that (i) ignoring cluster heterogeneity may result in severely underpowered experiments and (ii) the cluster-robust variance estimator may be upward-biased when clusters are heterogeneous. We derive formulas for power, minimum detectable effects, and optimal cluster assignment probabilities. All our results apply to cluster experiments, a particular case of our framework. We set up a potential outcomes framework to interpret the OLS estimands as causal effects. We implement our methods in a large-scale experiment to estimate the direct and spillover effects of a communication campaign on property tax compliance. We find an increase in tax compliance among individuals directly targeted with our mailing, as well as compliance spillovers on untreated individuals in clusters with a high proportion of treated taxpayers.
    Keywords: partial population experiments, spillovers, randomized controlled trials, cluster experiments, two-stage designs, property tax, tax compliance
    JEL: C01 C93 H71 H26 H21 O23
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp17256
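    Code sketch: the design calculations in the paper generalise the familiar cluster-experiment formulas; the textbook benchmark with equal cluster sizes, shown below, already illustrates why ignoring within-cluster correlation overstates power (the paper's formulas additionally handle heterogeneous clusters and treatment intensities).

      import numpy as np
      from scipy.stats import norm

      def cluster_mde(n_clusters, cluster_size, icc, sigma2=1.0,
                      alpha=0.05, power=0.8, p_treated=0.5):
          """Minimum detectable effect for a cluster-randomized design with equal
          cluster sizes, using the design effect 1 + (m - 1) * ICC."""
          deff = 1 + (cluster_size - 1) * icc
          var = sigma2 * deff / (n_clusters * cluster_size * p_treated * (1 - p_treated))
          return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * np.sqrt(var)

      # Ignoring within-cluster correlation (icc = 0) badly understates the MDE,
      # i.e. overstates the power of the experiment.
      print(cluster_mde(100, 50, icc=0.05), cluster_mde(100, 50, icc=0.0))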
  13. By: Christian Brownlees; Guðmundur Stefán Guðmundsson; Yaping Wang
    Abstract: This paper establishes bounds on the predictive performance of empirical risk minimization for principal component regression. Our analysis is nonparametric, in the sense that the relation between the prediction target and the predictors is not specified. In particular, we do not rely on the assumption that the prediction target is generated by a factor model. In our analysis we consider the cases in which the largest eigenvalues of the covariance matrix of the predictors grow linearly in the number of predictors (strong signal regime) or sublinearly (weak signal regime). The main result of this paper shows that empirical risk minimization for principal component regression is consistent for prediction and, under appropriate conditions, it achieves optimal performance (up to a logarithmic factor) in both the strong and weak signal regimes.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.03606
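    Code sketch: principal component regression as studied here simply projects the predictors onto their leading principal components and runs least squares on the scores; a minimal scikit-learn version (the simulated factor design is only for illustration, since the paper's risk bounds do not assume a factor model).

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n, p, k = 500, 100, 3
      F = rng.normal(size=(n, k))                        # latent factors (illustration only)
      Lam = rng.normal(size=(p, k))
      X = F @ Lam.T + rng.normal(size=(n, p))            # many correlated predictors
      y = F @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)

      # Principal component regression: project X onto its leading principal
      # components, then run least squares on the component scores.
      scores = PCA(n_components=k).fit_transform(X)
      fit = LinearRegression().fit(scores, y)
      print(fit.score(scores, y))                        # in-sample R^2 of the PCR fit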
  14. By: Reynaert, Mathias; Xu, Wenxuan; Zhao, Hanlin
    Abstract: When making choices, agents often must form expectations about option attributes in the choice set. The information used to form these expectations is usually unobserved by researchers. We develop a discrete choice model where agents make choices with heterogeneous information sets that are unobserved. We demonstrate that preferences can be point-identified through a finite mixture approximation of the unobserved information structure, or set-identified using knowledge from a single agent type. These approaches are compatible with both individual- and market-level data. Applications include replicating Dickstein and Morales (2018) and estimating consumer valuations for future fuel costs without assumptions on expectation formation.
    Keywords: Discrete choice; unobserved information; mixture model; set identification
    JEL: C5 C8 D8
    Date: 2024–09–13
    URL: https://d.repec.org/n?u=RePEc:tse:wpaper:129713
  15. By: Ramon F. A. de Punder (University of Amsterdam); Cees G. H. Diks (University of Amsterdam); Roger J. A. Laeven (University of Amsterdam); Dick J. C. van Dijk (Erasmus University Rotterdam)
    Abstract: When comparing predictive distributions, forecasters are typically not equally interested in all regions of the outcome space. To address the demand for focused forecast evaluation, we propose a procedure to transform strictly proper scoring rules into their localized counterparts while preserving strict propriety. This is accomplished by applying the original scoring rule to a censored distribution, acknowledging that censoring emerges as a natural localization device due to its ability to retain precisely all relevant information of the original distribution. Our procedure nests the censored likelihood score as a special case. Among a multitude of others, it also implies a class of censored kernel scores that offers a multivariate alternative to the threshold-weighted Continuous Ranked Probability Score (twCRPS), extending its local propriety to more general weight functions than single tail indicators. Within this localized framework, we obtain a generalization of the Neyman-Pearson lemma, establishing the censored likelihood ratio test as uniformly most powerful. For other tests of localized equal predictive performance, results of Monte Carlo simulations and empirical applications to risk management, inflation and climate data consistently emphasize the superior power properties of censoring.
    Keywords: Density forecast evaluation, Tests for equal predictive ability, Censoring, Likelihood ratio, CRPS.
    Date: 2023–12–29
    URL: https://d.repec.org/n?u=RePEc:tin:wpaper:20230084
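    Code sketch: the censored likelihood score, which the proposed class of localized scoring rules nests as a special case, uses log f(y) inside the region of interest and the log forecast probability of the complement outside it; a minimal comparison of two density forecasts in the left tail (distributions and threshold are hypothetical).

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      y = rng.standard_normal(5000)          # realisations, truly N(0, 1)
      r = np.quantile(y, 0.10)               # region of interest: the left tail below r

      def censored_log_score(y, dist, r):
          """Censored likelihood score for the region (-inf, r]: log f(y) inside
          the region, log(1 - F(r)) outside it."""
          return np.where(y <= r, dist.logpdf(y), np.log(1 - dist.cdf(r)))

      good = norm(loc=0.0, scale=1.0)        # correctly specified forecast
      bad = norm(loc=0.0, scale=0.7)         # too thin-tailed forecast
      print(censored_log_score(y, good, r).mean(), censored_log_score(y, bad, r).mean())
      # The correctly specified forecast attains the higher average score.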
  16. By: Toru Kitagawa; Jeff Rowley
    Abstract: Static supervised learning, in which experimental data serve as a training sample for the estimation of an optimal treatment assignment policy, is a commonly assumed framework for policy learning. An arguably more realistic but challenging scenario is a dynamic setting in which the planner performs experimentation and exploitation simultaneously with subjects that arrive sequentially. This paper studies bandit algorithms for learning an optimal individualised treatment assignment policy. Specifically, we study the applicability of the EXP4.P (Exponential weighting for Exploration and Exploitation with Experts) algorithm developed by Beygelzimer et al. (2011) to policy learning. Assuming that the class of policies has a finite Vapnik-Chervonenkis dimension and that the number of subjects to be allocated is known, we present a high probability welfare-regret bound of the algorithm. To implement the algorithm, we use an incremental enumeration algorithm for hyperplane arrangements. We perform extensive numerical analysis to assess the algorithm's sensitivity to its tuning parameters and its welfare-regret performance. Further simulation exercises are calibrated to the National Job Training Partnership Act (JTPA) Study sample to determine how the algorithm performs when applied to economic data. Our findings highlight various computational challenges and suggest that the limited welfare gain from the algorithm is due to substantial heterogeneity in causal effects in the JTPA data.
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.00379
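    Code sketch: a stripped-down exponential-weighting bandit over a finite class of threshold policies with importance-weighted reward estimates, in the spirit of EXP4; this simplified illustration omits the confidence term that distinguishes EXP4.P and uses a hypothetical policy class and reward function.

      import numpy as np

      rng = np.random.default_rng(9)
      T, n_arms = 5000, 2
      x = rng.normal(size=T)                             # covariate of each arriving subject

      thresholds = np.linspace(-1, 1, 9)                 # policy class: treat if x > c
      def reward(action, xt):                            # treatment helps only when x > 0
          return action * (0.5 if xt > 0 else -0.5) + rng.normal(0, 0.1)

      # EXP4-style exponential weighting over policies (no confidence bonus here).
      w = np.ones(len(thresholds))
      gamma, eta = 0.05, 0.05
      for t in range(T):
          advice = (x[t] > thresholds).astype(int)       # action each policy recommends
          probs = np.array([w[advice == a].sum() for a in range(n_arms)]) / w.sum()
          probs = (1 - gamma) * probs + gamma / n_arms   # forced exploration
          a_t = rng.choice(n_arms, p=probs)
          r_hat = reward(a_t, x[t]) / probs[a_t]         # importance-weighted reward
          w *= np.exp(eta * r_hat * (advice == a_t))     # reward the policies that agreed

      print(thresholds[np.argmax(w)])                    # best-performing threshold rule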
  17. By: Mathur, Maya B; Shpitser, Ilya
    Abstract: By revisiting imputation from the modern perspective of missing data graphs, we correct common guidance about which auxiliary variables should be included in an imputation model. We propose a generalized definition of missingness at random (MAR), called "z-MAR", which makes explicit the set of variables to be analyzed and a separate set of auxiliary variables that are included in the imputation model, but not analyzed. We provide a graphical equivalent of z-MAR that we call the "m-backdoor criterion". In a sense that we formalize, a standard imputation model trained on complete cases is valid for a given analysis if and only if the m-backdoor criterion holds. As the criterion indicates, the standard recommendations to use all available auxiliary variables, or all that are associated with missingness status, can lead to invalid imputation models and biased estimates. This bias arises from collider stratification and can occur even with non-causal estimands. Instead, the set of auxiliary variables should be restricted to those that affect incomplete variables or missingness indicators. These auxiliary variables will always suffice for a valid imputation model, if such a set exists among the candidate auxiliary variables. Applying this result does not require full knowledge of the graph.
    Date: 2024–09–10
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:zqne9
  18. By: Keyon Vafa; Susan Athey; David M. Blei
    Abstract: One thread of empirical work in social science focuses on decomposing group differences in outcomes into unexplained components and components explained by observable factors. In this paper, we study gender wage decompositions, which require estimating the portion of the gender wage gap explained by career histories of workers. Classical methods for decomposing the wage gap employ simple predictive models of wages which condition on a small set of simple summaries of labor history. The problem is that these predictive models cannot take advantage of the full complexity of a worker's history, and the resulting decompositions thus suffer from omitted variable bias (OVB), where covariates that are correlated with both gender and wages are not included in the model. Here we explore an alternative methodology for wage gap decomposition that employs powerful foundation models, such as large language models, as the predictive engine. Foundation models excel at making accurate predictions from complex, high-dimensional inputs. We use a custom-built foundation model, designed to predict wages from full labor histories, to decompose the gender wage gap. We prove that the way such models are usually trained might still lead to OVB, but develop fine-tuning algorithms that empirically mitigate this issue. Our model captures a richer representation of career history than simple models and predicts wages more accurately. In detail, we first provide a novel set of conditions under which an estimator of the wage gap based on a fine-tuned foundation model is $\sqrt{n}$-consistent. Building on the theory, we then propose methods for fine-tuning foundation models that minimize OVB. Using data from the Panel Study of Income Dynamics, we find that history explains more of the gender wage gap than standard econometric models can measure, and we identify elements of history that are important for reducing OVB.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.09894
  19. By: Omer Faruk Akbal
    Abstract: This paper shows that the Expectation-Maximization (EM) algorithm for regime-switching dynamic factor models provides satisfactory performance relative to other estimation methods and delivers a good trade-off between accuracy and speed, which makes it especially useful for large dimensional data. Unlike traditional numerical maximization approaches, this methodology benefits from closed-form solutions for parameter estimation, enhancing its practicality for real-time applications and historical data exercises with a focus on frequent updates. In a nowcasting application to vintage US data, I study the information content and relative performance of the regime-switching model after each data release over a fifteen-year period, an exercise that was only feasible due to the time efficiency of the proposed estimation methodology. While the existing literature has already acknowledged the performance improvement of nowcasting models under regime switching, this paper shows that the superior nowcasting performance is observed particularly when key economic indicators are released. In a backcasting exercise, I show that the model can closely match the NBER recession start and end dates despite having less information than the actual committee meetings; the fit between actual dates and model estimates becomes more apparent with the additional available information, and recession end dates are fully covered with a lag of three to six months. Given that the EM algorithm proposed in this paper is suitable for various regime-switching configurations, it provides economists and policymakers with a valuable tool for conducting comprehensive analyses, ranging from point estimates to information decomposition and the persistence of recessions in larger datasets.
    Date: 2024–09–06
    URL: https://d.repec.org/n?u=RePEc:imf:imfwpa:2024/190
  20. By: Francisco Blasques (Vrije Universiteit Amsterdam); Janneke van Brummelen (Vrije Universiteit Amsterdam); Paolo Gorgi (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam)
    Abstract: The equivalence of the Beveridge-Nelson decomposition and the trend-cycle decomposition is well established. In this paper we argue that this equivalence is almost immediate when a Gaussian score-driven location model is considered. We also provide a natural extension towards heavy-tailed distributions for the disturbances, which leads to a robust version of the Beveridge-Nelson decomposition.
    Keywords: trend and cycle, filtering, autoregressive integrated moving average model, score-driven model, heavy-tailed distributions
    JEL: C22 E32
    Date: 2024–11–01
    URL: https://d.repec.org/n?u=RePEc:tin:wpaper:20240003
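    Code sketch: the classical Beveridge-Nelson decomposition for a Gaussian ARIMA(1, 1, 0), where the permanent component is the long-run forecast of the level net of drift and has a closed form; the paper's score-driven version replaces this Gaussian filter with a robust, heavy-tailed one, which is not shown.

      import numpy as np

      rng = np.random.default_rng(10)
      T, phi, c = 400, 0.5, 0.1
      dy = np.zeros(T)
      for t in range(1, T):                   # simulate the first differences as an AR(1)
          dy[t] = c + phi * dy[t - 1] + rng.normal(0, 1)
      y = np.cumsum(dy)                       # level series, an ARIMA(1, 1, 0)

      # Estimate the AR(1) for the differences by OLS.
      X = np.column_stack([np.ones(T - 1), dy[:-1]])
      c_hat, phi_hat = np.linalg.lstsq(X, dy[1:], rcond=None)[0]
      mu_hat = c_hat / (1 - phi_hat)          # long-run mean growth rate

      # Beveridge-Nelson trend: the long-run forecast of the level net of drift,
      # which for an AR(1) in differences has a closed form.
      bn_trend = y + phi_hat / (1 - phi_hat) * (dy - mu_hat)
      bn_cycle = y - bn_trend
      print(np.round(bn_cycle[-5:], 3))       # transitory component in the last periods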
  21. By: Peilun He; Gareth W. Peters; Nino Kordzakhia; Pavel V. Shevchenko
    Abstract: The Nelson-Siegel model is widely used in fixed income markets to describe yield curve dynamics. Its multiple time-dependent parameters conveniently capture the level, slope, and curvature dynamics of yield curves. In this study, we present a novel state-space functional regression model that incorporates a dynamic Nelson-Siegel model and functional regression formulations applied to a multi-economy setting. This framework offers distinct advantages in explaining the relative spreads in yields between a reference economy and a response economy. To address the inherent challenges of model calibration, a kernel principal component analysis is employed to transform the representation of functional regression into a finite-dimensional, tractable estimation problem. A comprehensive empirical analysis is conducted to assess the efficacy of the functional regression approach, including an in-sample performance comparison with the dynamic Nelson-Siegel model. We also conduct a stress-testing analysis of the yield curve term structure within a dual-economy framework, and examine a bond ladder portfolio through a case study focused on spread modelling using historical data for US Treasury and UK bonds.
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.00348
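    Code sketch: the static Nelson-Siegel curve underlying the dynamic model, with level, slope and curvature loadings evaluated at a set of maturities; the parameter values and decay rate below are illustrative, and the state-space dynamics and functional regression across economies are not shown.

      import numpy as np

      def nelson_siegel(maturity, level, slope, curvature, lam=0.6):
          """Static Nelson-Siegel yield curve with level, slope and curvature loadings."""
          x = lam * np.asarray(maturity, dtype=float)
          slope_load = (1 - np.exp(-x)) / x
          curv_load = slope_load - np.exp(-x)
          return level + slope * slope_load + curvature * curv_load

      maturities = np.array([0.25, 1, 2, 5, 10, 30])       # in years
      print(nelson_siegel(maturities, level=4.0, slope=-2.0, curvature=1.5))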
  22. By: Velasco, Sofia
    Abstract: This paper introduces a Bayesian Quantile Factor Augmented VAR (BQFAVAR) to examine the asymmetric effects of monetary policy throughout the business cycle. Monte Carlo experiments demonstrate that the model effectively captures non-linearities in impulse responses. Analysis of aggregate responses to a contractionary monetary policy shock reveals that financial variables and industrial production exhibit more pronounced impacts during recessions compared to expansions, aligning with predictions from the "financial accelerator" propagation mechanism literature. Additionally, inflation displays a higher level of symmetry across economic conditions, consistent with households' loss aversion in the context of reference-dependent preferences and central banks' commitment to maintaining price stability. The examination of price rigidities at a granular level, employing sectoral prices and quantities, demonstrates that during recessions the contractionary policy shock results in a more pronounced negative impact on quantities compared to expansions. This finding supports the notion of stronger downward than upward price rigidity, as suggested by "menu-cost" models.
    Keywords: asymmetric effects of monetary policy, Bayesian Quantile VAR, disaggregate prices, FAVAR, non-linear models
    JEL: C11 C32 E32 E37 E52
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:ecb:ecbwps:20242983

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.