| New Economics Papers | on Computational Economics |
| By: | Julian Sester; Huansang Xu |
| Abstract: | In this paper, we propose an alternative valuation approach for CAT bonds in which a pricing formula is learned by deep neural networks. Once trained, these networks can be used to price CAT bonds as a function of inputs that reflect both the current market conditions and the specific features of the contract. This approach offers two main advantages. First, due to the expressive power of neural networks, the trained model enables fast and accurate evaluation of CAT bond prices. Second, because of its fast execution, the trained neural network can easily be analyzed to study its sensitivities with respect to changes in the underlying market conditions, offering valuable insights for risk management. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.25899 |
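The abstract does not spell out an architecture, so the following is only a minimal sketch of the general idea: fit a small neural network to (market state, contract feature) inputs with price labels, then probe its sensitivities by finite differences. The toy pricing rule, input ranges, and network size are hypothetical placeholders, not the authors' model.

```python
# Hypothetical sketch: learn a CAT-bond pricing map and probe its sensitivities.
# The toy "true" price below is NOT the paper's model; it only provides labels.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 20_000
# Inputs: hazard intensity, loss given event, risk-free rate, maturity (years)
X = np.column_stack([
    rng.uniform(0.01, 0.30, n),   # lambda: annual catastrophe intensity
    rng.uniform(0.10, 0.90, n),   # expected principal loss given an event
    rng.uniform(0.00, 0.06, n),   # risk-free rate
    rng.uniform(0.25, 5.00, n),   # time to maturity in years
])
lam, loss, r, T = X.T
# Toy label: discounted expected principal repayment (illustrative only)
y = np.exp(-r * T) * ((lam * T < 1) * (1 - lam * T * loss) + (lam * T >= 1) * (1 - loss))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X, y)

# Sensitivity of the learned price w.r.t. hazard intensity via central differences
x0 = np.array([[0.05, 0.5, 0.03, 3.0]])
h = 1e-3
up, dn = x0.copy(), x0.copy()
up[0, 0] += h
dn[0, 0] -= h
dprice_dlam = (net.predict(up) - net.predict(dn)) / (2 * h)
print("price:", net.predict(x0)[0], " dPrice/dlambda:", dprice_dlam[0])
```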
| By: | Zofia Bracha (Faculty of Economic Sciences, University of Warsaw); Jakub Michańków (TripleSun, Krakow); Paweł Sakowski (Faculty of Economic Sciences, University of Warsaw) |
| Abstract: | This paper explores the application of deep Q-learning to hedging at-the-money options on the S&P 500 index. We develop an agent based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, trained to simulate hedging decisions without making explicit model assumptions on price dynamics. The agent was trained on historical intraday prices of S&P 500 call options from 2004 to 2024, using a single time series of six predictor variables: option price, underlying asset price, moneyness, time to maturity, realized volatility, and current hedge position. A walk-forward procedure was applied for training, which led to nearly 17 years of out-of-sample evaluation. The performance of the deep reinforcement learning (DRL) agent is benchmarked against the Black–Scholes delta hedging strategy over the same time period. We assess both approaches using metrics such as annualized return, volatility, information ratio, and Sharpe ratio. To test the models' adaptability, we performed simulations across varying market conditions and added constraints such as transaction costs and risk-awareness penalties. Our results show that the DRL agent can outperform traditional hedging methods, particularly in volatile or high-cost environments, highlighting its robustness and flexibility in practical trading contexts. While the agent consistently outperforms delta hedging, its performance deteriorates when the risk-awareness parameter is higher. We also observed that the longer the time interval used for volatility estimation, the more stable the results. |
| Keywords: | Deep learning, Reinforcement learning, Double Deep Q-networks, options market, options hedging, deep hedging |
| JEL: | C4 C14 C45 C53 C58 G13 |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:war:wpaper:2025-25 |
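The Black–Scholes delta-hedging benchmark that the DRL agent is compared against is standard; a minimal sketch of discrete delta hedging of a short call on one simulated price path might look as follows. The GBM path and parameters are illustrative placeholders, not the paper's intraday S&P 500 option data or its TD3 agent.

```python
# Minimal sketch of the Black-Scholes delta-hedging benchmark on one simulated path.
import numpy as np
from scipy.stats import norm

def bs_call_price_delta(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2), norm.cdf(d1)

S0, K, r, sigma, T, steps = 100.0, 100.0, 0.02, 0.20, 0.25, 63
dt = T / steps
rng = np.random.default_rng(1)
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * rng.standard_normal(steps)))
S = np.concatenate([[S0], S])

cash = bs_call_price_delta(S0, K, r, sigma, T)[0]   # premium received for the short call
delta_prev = 0.0
for i in range(steps):
    tau = T - i * dt
    _, delta = bs_call_price_delta(S[i], K, r, sigma, tau)
    cash -= (delta - delta_prev) * S[i]   # buy/sell stock to match the new delta
    cash *= np.exp(r * dt)                # accrue one period of interest
    delta_prev = delta

payoff = max(S[-1] - K, 0.0)
pnl = cash + delta_prev * S[-1] - payoff   # hedged P&L of the short call at expiry
print(f"hedging error for a short call: {pnl:.4f}")
```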
| By: | Federico Gabriele; Aldo Glielmo; Marco Taboga |
| Abstract: | Current macroeconomic models with agent heterogeneity can be broadly divided into two main groups. Heterogeneous-agent general equilibrium (GE) models, such as those based on Heterogeneous Agents New Keynesian (HANK) or Krusell-Smith (KS) approaches, rely on GE and 'rational expectations', somewhat unrealistic assumptions that make the models very computationally cumbersome, which in turn limits the amount of heterogeneity that can be modelled. In contrast, agent-based models (ABMs) can flexibly encompass a large number of arbitrarily heterogeneous agents, but typically require the specification of explicit behavioural rules, which can lead to a lengthy trial-and-error model-development process. To address these limitations, we introduce MARL-BC, a framework that integrates deep multi-agent reinforcement learning (MARL) with Real Business Cycle (RBC) models. We demonstrate that MARL-BC can: (1) recover textbook RBC results when using a single agent; (2) recover the results of the mean-field KS model using a large number of identical agents; and (3) effectively simulate rich heterogeneity among agents, a hard task for traditional GE approaches. Our framework can be thought of as an ABM if used with a variety of heterogeneous interacting agents, and can reproduce GE results in limit cases. As such, it is a step towards a synthesis of these often opposed modelling paradigms. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.12272 |
| By: | Revelas, Christos (Tilburg University, School of Economics and Management) |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:tiu:tiutis:2ee1c0cb-ed62-441e-9bf4-103e06ff245f |
| By: | Yiyao Zhang; Diksha Goel; Hussain Ahmad; Claudia Szabo |
| Abstract: | Financial markets are inherently non-stationary, with shifting volatility regimes that alter asset co-movements and return distributions. Standard portfolio optimization methods, typically built on stationarity or regime-agnostic assumptions, struggle to adapt to such changes. To address these challenges, we propose RegimeFolio, a novel regime-aware and sector-specialized framework that, unlike existing regime-agnostic models such as DeepVol and DRL optimizers, integrates explicit volatility regime segmentation with sector-specific ensemble forecasting and adaptive mean-variance allocation. This modular architecture ensures forecasts and portfolio decisions remain aligned with current market conditions, enhancing robustness and interpretability in dynamic markets. RegimeFolio combines three components: (i) an interpretable VIX-based classifier for market regime detection; (ii) regime- and sector-specific ensemble learners (Random Forest, Gradient Boosting) to capture conditional return structures; and (iii) a dynamic mean-variance optimizer with shrinkage-regularized covariance estimates for regime-aware allocation. We evaluate RegimeFolio on 34 large-cap U.S. equities from 2020 to 2024. The framework achieves a cumulative return of 137 percent, a Sharpe ratio of 1.17, a 12 percent lower maximum drawdown, and a 15 to 20 percent improvement in forecast accuracy compared to conventional and advanced machine learning benchmarks. These results show that explicitly modeling volatility regimes in predictive learning and portfolio allocation enhances robustness and leads to more dependable decision-making in real markets. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.14986 |
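Two of the building blocks named in the abstract, a VIX-threshold regime label and a mean-variance step with a shrinkage-regularized covariance, can be sketched in a few lines. The thresholds, synthetic returns, and long-only normalization below are assumptions for illustration, not RegimeFolio's actual configuration.

```python
# Sketch of two RegimeFolio-style building blocks on synthetic data:
# a VIX-threshold regime label and a shrinkage-based mean-variance allocation.
import numpy as np
from sklearn.covariance import LedoitWolf

def vix_regime(vix_level, low=15.0, high=25.0):
    """Map a VIX level to a coarse regime label (hypothetical cutoffs)."""
    if vix_level < low:
        return "calm"
    return "stressed" if vix_level > high else "normal"

rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.01, size=(500, 10))   # 500 days, 10 assets (placeholder)

mu = returns.mean(axis=0)
sigma = LedoitWolf().fit(returns).covariance_        # shrinkage-regularized covariance
w = np.linalg.solve(sigma, mu)                       # unconstrained mean-variance direction
w = np.clip(w, 0, None)
w = w / w.sum()                                      # long-only, fully invested weights

print("regime today:", vix_regime(18.3))
print("weights:", np.round(w, 3))
```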
| By: | Tian Guo; Emmanuel Hauptmann |
| Abstract: | In quantitative investing, return prediction supports various tasks, including stock selection, portfolio optimization, and risk management. Quantitative factors, such as valuation, quality, and growth, capture various characteristics of stocks. Unstructured financial data, like news and transcripts, has attracted growing attention, driven by recent advances in large language models (LLMs). This paper examines effective methods for leveraging multimodal factors and newsflow in return prediction and stock selection. First, we introduce a fusion learning framework to learn a unified representation from factors and newsflow representations generated by an LLM. Within this framework, we compare three representative methods: representation combination, representation summation, and attentive representations. Next, building on empirical observations from fusion learning, we explore the mixture model that adaptively combines predictions made by single modalities and their fusion. To mitigate the training instability observed in the mixture model, we introduce a decoupled training approach with theoretical insights. Finally, our experiments on real investment universes yield several insights into effective multimodal modeling of factors and news for stock return prediction. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.15691 |
| By: | Gabriel Nixon Raj |
| Abstract: | This study proposes a regime-aware reinforcement learning framework for long-horizon portfolio optimization. Moving beyond traditional feedforward and GARCH-based models, we design realistic environments where agents dynamically reallocate capital in response to latent macroeconomic regime shifts. Agents receive hybrid observations and are trained using constrained reward functions that incorporate volatility penalties, capital resets, and tail-risk shocks. We benchmark multiple architectures, including PPO, LSTM-based PPO, and Transformer PPO, against classical baselines such as equal-weight and Sharpe-optimized portfolios. Our agents demonstrate robust performance under financial stress. While Transformer PPO achieves the highest risk-adjusted returns, LSTM variants offer a favorable trade-off between interpretability and training cost. The framework promotes regime-adaptive, explainable reinforcement learning for dynamic asset allocation. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.14385 |
| By: | Garg, Devansh |
| Abstract: | We present an agent-based simulation of democratic decision-making in which autonomous learning agents interact under alternative electoral institutions and social structures. The model integrates six voting mechanisms (Plurality, Approval, Borda, IRV, STV, PR with D'Hondt and Sainte-Laguë divisors), a multi-round coalition protocol with binding/non-binding contracts and side-payments, turnout and ballot-error realism, and networked interaction on Erdős–Rényi, Barabási–Albert, and Watts–Strogatz graphs with homophily. Agents use reinforcement learning algorithms (PPO, A2C, A3C) with a social-welfare objective based on the inequality-averse Atkinson function, augmented by fairness regularizers (representation loss, participation fairness, envy-freeness proxy) and explicit participation costs. We report diagnostics-rich evaluations covering representation and proportionality (e.g., Gallagher, Loosemore–Hanby), fragmentation (effective number of parties), strategic behavior, coalition stability, and welfare/inequality. Classic regularities emerge—e.g., two-bloc competition under Plurality (Duverger-consistent), greater proportionality and fragmentation under PR, and differential seat allocation under D'Hondt vs. Sainte-Laguë—providing face validity. The framework delivers a reproducible virtual laboratory for mechanism comparison, institutional design, and welfare–fairness trade-off analysis at population scale. |
| Date: | 2025–10–14 |
| URL: | https://d.repec.org/n?u=RePEc:osf:socarx:mp9kh_v1 |
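For readers unfamiliar with the two proportional-representation divisor rules the abstract compares, here is a small standalone sketch of highest-averages seat allocation under D'Hondt and Sainte-Laguë. The vote totals are made up, and this is not the authors' simulation code.

```python
# Highest-averages seat allocation: D'Hondt (divisors 1, 2, 3, ...) vs
# Sainte-Lague (divisors 1, 3, 5, ...). Vote totals below are illustrative.
def allocate(votes, seats, divisor):
    seats_won = {p: 0 for p in votes}
    for _ in range(seats):
        # Give the next seat to the party with the largest current quotient
        winner = max(votes, key=lambda p: votes[p] / divisor(seats_won[p]))
        seats_won[winner] += 1
    return seats_won

dhondt = lambda s: s + 1           # 1, 2, 3, ...
sainte_lague = lambda s: 2 * s + 1  # 1, 3, 5, ...

votes = {"A": 48_000, "B": 29_000, "C": 16_000, "D": 7_000}
print("D'Hondt:      ", allocate(votes, 10, dhondt))
print("Sainte-Lague: ", allocate(votes, 10, sainte_lague))
```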
| By: | Boughabi, Houssam |
| Abstract: | This paper develops a stochastic Keynesian model linking inflation, unemployment, and GDP. Inflation follows a fractional Brownian motion, capturing persistent shocks, while a temporal convolutional network forecasts conditional paths, allowing machine learning to account for nonlinear interactions and long-memory effects. Unemployment responds conditionally to inflation thresholds, permitting involuntary joblessness, while GDP depends on both variables, reflecting aggregate demand and labor market frictions. The model is applied to Pakistan, simulating macroeconomic dynamics under alternative policy scenarios. We demonstrate that sustained growth is possible even under persistent inflation, reinforcing the empirical relevance of Keynesian theory in contemporary macroeconomic analysis and highlighting the value of machine learning for policy evaluation. |
| Keywords: | Stagflation, Fractional Brownian Motion, Temporal Convolutional Networks, Keynesian Policy, Pakistan |
| JEL: | C22 C45 E24 E31 E37 O53 |
| Date: | 2025–08–29 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126294 |
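Fractional Brownian motion, the driver assumed for inflation here, can be simulated exactly on a grid via a Cholesky factorization of its covariance; a minimal sketch follows, with a Hurst parameter and grid chosen purely for illustration (not calibrated to the paper's Pakistan application).

```python
# Exact (Cholesky) simulation of fractional Brownian motion B_H(t) on a grid,
# using the fBm covariance  Cov(B_H(s), B_H(t)) = 0.5*(s^{2H} + t^{2H} - |t-s|^{2H}).
import numpy as np

def fbm_path(n_steps=250, T=1.0, hurst=0.7, seed=0):
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # small jitter for stability
    z = np.random.default_rng(seed).standard_normal(n_steps)
    return np.concatenate([[0.0], L @ z])   # path starts at B_H(0) = 0

path = fbm_path()
print("std of fBm increments:", np.diff(path).std())
```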
| By: | Yuhan Cheng; Heyang Zhou; Yanchu Liu |
| Abstract: | We leverage the capacity of large language models such as Generative Pre-trained Transformer (GPT) to construct factor models for Chinese futures markets. We successfully obtain 40 factors to design single-factor and multi-factor portfolios through long-short and long-only strategies, conducting backtests during the in-sample and out-of-sample periods. Comprehensive empirical analysis reveals that GPT-generated factors deliver remarkable Sharpe ratios and annualized returns while maintaining acceptable maximum drawdowns. Notably, the GPT-based factor models also achieve significant alphas over the IPCA benchmark. Moreover, these factors maintain significant performance across extensive robustness tests, particularly excelling after the cutoff date of GPT's training data. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.23609 |
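The long-short construction applied to each factor is the usual cross-sectional ranking portfolio; a generic sketch on synthetic data is shown below. The factor scores and returns are random placeholders, not GPT-generated factors or the authors' Chinese futures data.

```python
# Generic long-short single-factor backtest: each period, go long the top quintile
# of assets by factor score and short the bottom quintile.
import numpy as np

rng = np.random.default_rng(3)
n_periods, n_assets = 252, 50
scores = rng.standard_normal((n_periods, n_assets))                        # factor values at t
fwd_returns = 0.02 * scores + rng.normal(0, 0.02, (n_periods, n_assets))   # next-period returns

pnl = []
for t in range(n_periods):
    order = scores[t].argsort()
    short_leg, long_leg = order[:n_assets // 5], order[-(n_assets // 5):]
    pnl.append(fwd_returns[t, long_leg].mean() - fwd_returns[t, short_leg].mean())

pnl = np.array(pnl)
sharpe = np.sqrt(252) * pnl.mean() / pnl.std()
print(f"annualized Sharpe of the long-short factor portfolio: {sharpe:.2f}")
```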
| By: | Jose Blanchet; Mark S. Squillante; Mario Szegedy; Guanyang Wang |
| Abstract: | This tutorial paper introduces quantum approaches to Monte Carlo computation with applications in computational finance. We outline the basics of quantum computing using Grover's algorithm for unstructured search to build intuition. We then move gradually to amplitude estimation problems and applications to counting and Monte Carlo integration, again using Grover-type iterations. A hands-on Python/Qiskit implementation illustrates these concepts applied to finance. The paper concludes with a discussion of current challenges in scaling quantum simulation techniques. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.18614 |
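The Grover intuition the tutorial builds can be reproduced with plain linear algebra on a classical state vector, without Qiskit; the sketch below amplifies the amplitude of one marked basis state and is only an illustration of the idea, not the paper's implementation.

```python
# Classical state-vector illustration of Grover's amplitude amplification:
# each iteration applies the oracle (phase-flip of the marked state) followed by
# the diffusion operator (reflection about the uniform superposition).
import numpy as np

n_qubits, marked = 4, 7                  # 16 basis states; item |0111> is "marked"
N = 2 ** n_qubits
state = np.full(N, 1 / np.sqrt(N))       # uniform superposition from Hadamards

oracle = np.eye(N)
oracle[marked, marked] = -1.0            # phase flip on the marked item
uniform = np.full(N, 1 / np.sqrt(N))
diffusion = 2 * np.outer(uniform, uniform) - np.eye(N)

n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ optimal number of Grover iterations
for _ in range(n_iter):
    state = diffusion @ (oracle @ state)

print(f"after {n_iter} iterations, P(marked) = {abs(state[marked])**2:.3f}")
```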
| By: | Ruslan Tepelyan |
| Abstract: | OHLC bar data is a widely used format for representing financial asset prices over time due to its balance of simplicity and informativeness. Bloomberg has recently introduced a new bar data product that includes additional timing information: specifically, the timestamps of the open, high, low, and close prices within each bar. In this paper, we investigate the impact of incorporating this timing data into machine learning models for predicting volume-weighted average price (VWAP). Our experiments show that including these features consistently improves predictive performance across multiple ML architectures. We observe gains across several key metrics, including log-likelihood, mean squared error (MSE), $R^2$, conditional variance estimation, and directional accuracy. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.16137 |
| By: | Liu, Zihao (Tilburg University, School of Economics and Management) |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:tiu:tiutis:440468a5-5c38-4aab-ba29-b83a255fac6a |
| By: | Jinkyu Kim; Hyunjung Yi; Mogan Gim; Donghee Choi; Jaewoo Kang |
| Abstract: | We propose DeepAries, a novel deep reinforcement learning framework for dynamic portfolio management that jointly optimizes the timing and allocation of rebalancing decisions. Unlike prior reinforcement learning methods that employ fixed rebalancing intervals regardless of market conditions, DeepAries adaptively selects optimal rebalancing intervals along with portfolio weights to reduce unnecessary transaction costs and maximize risk-adjusted returns. Our framework integrates a Transformer-based state encoder, which effectively captures complex long-term market dependencies, with Proximal Policy Optimization (PPO) to generate simultaneous discrete (rebalancing intervals) and continuous (asset allocations) actions. Extensive experiments on multiple real-world financial markets demonstrate that DeepAries significantly outperforms traditional fixed-frequency and full-rebalancing strategies in terms of risk-adjusted returns, transaction costs, and drawdowns. Additionally, we provide a live demo of DeepAries at https://deep-aries.github.io/, along with the source code and dataset at https://github.com/dmis-lab/DeepAries, illustrating DeepAries' capability to produce interpretable rebalancing and allocation decisions aligned with shifting market regimes. Overall, DeepAries introduces an innovative paradigm for adaptive and practical portfolio management by integrating both timing and allocation into a unified decision-making process. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.14985 |
| By: | Paglialunga, Elena; Resce, Giuliano; Zanoni, Angela |
| Abstract: | This paper predicts regional unemployment in the European Union by applying machine learning techniques to a dataset covering 198 NUTS-2 regions, 2000 to 2019. Tree-based models substantially outperform traditional regression approaches for this task, while accommodating reinforcement effects and spatial spillovers as determinants of regional labor market outcomes. Inflation—particularly energy-related—emerges as a critical predictor, highlighting vulnerabilities to energy shocks and green transition policies. Environmental policy stringency and eco-innovation capacity also prove significant. Our findings demonstrate the potential of machine learning to support proactive, place-sensitive interventions, aiming to predict and mitigate the uneven socioeconomic impacts of structural change across regions. |
| Keywords: | Regional unemployment; Inflation; Environmental policy; Spatial spillovers; Machine learning. |
| JEL: | E24 J64 Q52 R23 |
| Date: | 2025–10–15 |
| URL: | https://d.repec.org/n?u=RePEc:mol:ecsdps:esdp25101 |
| By: | Yan, Jiani |
| Abstract: | In social science and epidemiological research, individual risk factors for mortality are often examined in isolation, while approaches that consider multiple risk factors simultaneously remain less common. Using the Health and Retirement Study in the US, the Survey of Health, Ageing and Retirement in Europe, and the English Longitudinal Study of Ageing in the UK, we explore the predictability of death with machine learning and explainable AI algorithms, which integrate explanation and prediction simultaneously. Specifically, we extract information from all datasets in seven health-related domains: demographics, socioeconomic status, psychology, social connections, childhood adversity, adulthood adversity, and health behaviours. Our self-devised algorithm reveals consistent domain-level patterns across datasets, with demographic and socioeconomic factors being the most significant. However, at the individual risk-factor level, notable differences emerge, emphasising the context-specific nature of certain predictors. |
| Date: | 2025–10–14 |
| URL: | https://d.repec.org/n?u=RePEc:osf:socarx:euv7f_v2 |
| By: | Meng Cai; Tianze Li |
| Abstract: | The Heston stochastic-local volatility (HSLV) model is widely used because it combines accurate market calibration with realistic volatility dynamics, but simulating its CIR-type variance process is numerically challenging. This paper compares two alternative schemes for HSLV simulation, the truncated Euler method and the backward Euler method, with the conventional Euler and almost exact simulation methods in (van2014heston), using a Monte Carlo study. Numerical results show that the truncated method achieves strong convergence and remains robust under high volatility, while the backward method provides the smallest errors and the most stable performance in stress scenarios, though at a higher computational cost. |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.24449 |
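The (full) truncation Euler scheme for a CIR-type variance process is standard; a minimal sketch of that single ingredient is given below, with placeholder parameters. It covers only the plain variance leg, not the paper's full HSLV model with its leverage function or the backward Euler and almost exact schemes.

```python
# Full-truncation Euler simulation of the CIR variance process used in Heston-type
# models: dv = kappa*(theta - v)dt + xi*sqrt(v) dW. Parameters are placeholders.
import numpy as np

def cir_truncated_euler(v0=0.04, kappa=1.5, theta=0.04, xi=0.6,
                        T=1.0, n_steps=252, n_paths=10_000, seed=0):
    dt = T / n_steps
    rng = np.random.default_rng(seed)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        v_plus = np.maximum(v, 0.0)                      # truncate negatives in drift/diffusion
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        v = v + kappa * (theta - v_plus) * dt + xi * np.sqrt(v_plus) * dW
    return np.maximum(v, 0.0)

vT = cir_truncated_euler()
print("simulated E[v_T]:", vT.mean(), " (long-run mean theta = 0.04)")
```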
| By: | Peter B. Dixon; Maureen T. Rimmer |
| Abstract: | Computable General Equilibrium (CGE) modelling started with the publication in 1960 of Johansen's model of Norway. It continues to the present time as an active research and policy field. In a recent count, there were 33,000 people in the GTAP CGE modelling network alone. This paper identifies GEMPACK software, developed in Australia for solving large-scale CGE models in the Johansen school, as one of the factors contributing to the enduring popularity of CGE. The paper tells the story of how GEMPACK came into existence, how it works, and how it relates to Johansen. The paper was prepared for a workshop on bridging the gap between CGE and New Quantitative Trade (NQT) models. In illustrating GEMPACK and showing connections between CGE and NQT, we present a GEMPACK solution and analysis of Eaton and Kortum's seminal NQT model, published in Econometrica in 2002. |
| Keywords: | GEMPACK CGE software, GEMPACK and GAMS, New Quantitative trade modelling, Eaton and Kortum |
| JEL: | C68 F11 F13 C63 |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:cop:wpaper:g-357 |
| By: | Daniel Cunha Oliveira; Grover Guzman; Nick Firoozye |
| Abstract: | Robust optimization provides a principled framework for decision-making under uncertainty, with broad applications in finance, engineering, and operations research. In portfolio optimization, uncertainty in expected returns and covariances demands methods that mitigate estimation error, parameter instability, and model misspecification. Traditional approaches, including parametric, bootstrap-based, and Bayesian methods, enhance stability by relying on confidence intervals or probabilistic priors but often impose restrictive assumptions. This study introduces a non-parametric bootstrap framework for robust optimization in financial decision-making. By resampling empirical data, the framework constructs flexible, data-driven confidence intervals without assuming specific distributional forms, thus capturing uncertainty in statistical estimates, model parameters, and utility functions. Treating utility as a random variable enables percentile-based optimization, naturally suited for risk-sensitive and worst-case decision-making. The approach aligns with recent advances in robust optimization, reinforcement learning, and risk-aware control, offering a unified perspective on robustness and generalization. Empirically, the framework mitigates overfitting and selection bias in trading strategy optimization and improves generalization in portfolio allocation. Results across portfolio and time-series momentum experiments demonstrate that the proposed method delivers smoother, more stable out-of-sample performance, offering a practical, distribution-free alternative to traditional robust optimization methods. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.12725 |
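Percentile-based optimization over nonparametric bootstrap resamples can be illustrated very simply: resample the return history, evaluate the utility (here a Sharpe ratio) of each candidate portfolio on every resample, and keep the candidate with the best low percentile. The two-asset data and candidate grid below are synthetic placeholders, not the paper's experiments.

```python
# Sketch of percentile-based (worst-case-leaning) selection over bootstrap resamples:
# choose the portfolio whose 5th-percentile Sharpe ratio across resamples is highest.
import numpy as np

rng = np.random.default_rng(4)
returns = rng.multivariate_normal([0.0004, 0.0003],
                                  [[1e-4, 2e-5], [2e-5, 8e-5]], size=750)

candidates = [np.array([w, 1 - w]) for w in np.linspace(0, 1, 21)]   # two-asset weight grid
n_boot, n_obs = 500, returns.shape[0]

def sharpe(port_returns):
    return np.sqrt(252) * port_returns.mean() / port_returns.std()

p05 = []
for w in candidates:
    stats = []
    for _ in range(n_boot):
        sample = returns[rng.integers(0, n_obs, n_obs)]   # nonparametric bootstrap resample
        stats.append(sharpe(sample @ w))
    p05.append(np.percentile(stats, 5))                   # 5th-percentile Sharpe per candidate

best = candidates[int(np.argmax(p05))]
print("weights chosen by 5th-percentile Sharpe:", best)
```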
| By: | Bach, Philipp; Klaaßen, Sven; Kueck, Jannis; Mattes, Mara; Spindler, Martin |
| Abstract: | Difference-in-differences (DiD) is one of the most popular approaches for empirical research in economics, political science, and beyond. Identification in these models is based on the conditional parallel trends assumption: In the absence of treatment, the average outcomes of the treated and untreated groups are assumed to evolve in parallel over time, conditional on pre-treatment covariates. We introduce a novel approach to sensitivity analysis for DiD models that assesses the robustness of DiD estimates to violations of this assumption due to unobservable confounders, allowing researchers to transparently assess and communicate the credibility of their causal estimation results. Our method focuses on estimation by Double Machine Learning and extends previous work on sensitivity analysis based on Riesz Representation in cross-sectional settings. We establish asymptotic bounds for point estimates and confidence intervals in the canonical 2 × 2 setting and for group-time causal parameters in settings with staggered treatment adoption. Our approach makes it possible to relate the formulation of parallel trends violations to empirical evidence from (1) pre-testing, (2) covariate benchmarking and (3) standard reporting statistics and visualizations. We provide extensive simulation experiments demonstrating the validity of our sensitivity approach and diagnostics and apply our approach to two empirical applications. |
| Keywords: | Sensitivity Analysis, Difference-in-differences, Double Machine Learning, Riesz Representation, Causal Inference |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:zbw:fubsbe:330188 |
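The canonical 2 × 2 point estimate that the sensitivity analysis starts from is just a double difference of group-by-period means; a toy numpy illustration follows. The Double Machine Learning estimator and Riesz-representer bounds of the paper are not reproduced here.

```python
# Canonical 2x2 difference-in-differences on simulated data:
# ATT_hat = (E[Y|D=1,post] - E[Y|D=1,pre]) - (E[Y|D=0,post] - E[Y|D=0,pre]).
# The data-generating process is a toy example, not the paper's applications.
import numpy as np

rng = np.random.default_rng(5)
n = 2_000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
true_att = 1.5
y = 2.0 + 0.5 * treated + 1.0 * post + true_att * treated * post + rng.normal(0, 1, n)

def mean_y(d, t):
    return y[(treated == d) & (post == t)].mean()

att_hat = (mean_y(1, 1) - mean_y(1, 0)) - (mean_y(0, 1) - mean_y(0, 0))
print(f"2x2 DiD estimate: {att_hat:.3f} (true effect {true_att})")
```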
| By: | Yunhan Liu (Carleton University) |
| Abstract: | Monte Carlo simulations in Stata are often constrained by the software’s memory architecture, particularly when the total number of replications required for inference or robustness checks is large. As memory consumption accumulates over the course of a simulation, performance can degrade severely, with many replications failing because of insufficient available RAM. This poster presents a procedure that bypasses these constraints by dividing the full simulation task into smaller, memory-manageable batches, which are executed independently in separate Stata sessions. The method relies on partitioning the total number of replications, R, into B batches of r replications each, where R=B×r. Each batch is encoded in a distinct Stata do-file, generated automatically via a short Python script. These batch files are then executed sequentially or in parallel using a Bash shell script. Because each batch runs in its own instance of Stata, memory usage is reset between runs, preventing the accumulation of data across replications. This approach allows simulations that were previously infeasible because of RAM limitations to run to completion. In addition to resolving memory constraints, the method enables embarrassingly parallel computation on multicore machines without requiring any specialized parallel-processing software. By assigning different batch files to different processor cores via concurrent shell calls, total run time can be substantially reduced. After a brief setup phase involving preprocessing and batch generation, the entire simulation can be launched with a single command. The proposed workflow improves the feasibility and efficiency of large-scale Monte Carlo experiments in Stata, especially in environments with modest hardware and limited software support for parallelization. |
| Date: | 2025–10–05 |
| URL: | https://d.repec.org/n?u=RePEc:boc:cand25:07 |
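The abstract describes a short Python script that writes one do-file per batch; a sketch of what such a generator could look like is given below. The do-file template, file names, seeds, and the inner simulate call (with a hypothetical my_sim_program) are placeholders, not the poster's actual files.

```python
# Hypothetical sketch of the batch-generation step: write B do-files, each running
# r replications and saving its own results file. All names are placeholders.
from pathlib import Path

B, r = 20, 500                       # R = B * r = 10,000 total replications
out_dir = Path("batches")
out_dir.mkdir(exist_ok=True)

template = """clear all
set seed {seed}
* my_sim_program is a placeholder for the user's own simulation/estimation program
simulate beta=_b[x], reps({reps}): my_sim_program
save results_batch_{batch:02d}.dta, replace
"""

for b in range(B):
    (out_dir / f"batch_{b:02d}.do").write_text(
        template.format(seed=1000 + b, reps=r, batch=b)
    )
print(f"wrote {B} do-files to {out_dir}/")
```

Each generated file can then be launched in its own Stata session, for example with stata -b do batches/batch_00.do, either sequentially or from a parallel shell loop, so that memory is reset between batches as the poster describes.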
| By: | Mangave, Darshan |
| Abstract: | This paper looks at Adam Smith's idea of the “invisible hand” and asks how it works in today's world of algorithms and digital platforms. In the Wealth of Nations, Smith explained that when people act in their self-interest, markets balance themselves and society benefits. But now, in this century, many economic choices are not made only by people. They are guided by algorithms. For example, this can be seen in Amazon's product rankings, Uber's surge pricing, Google's search results, Netflix's recommendations, and AI trading in stock markets. These algorithmic systems connect buyers and sellers quickly, but they also create new problems like reduced competition, unfair pricing, manipulation of consumer choices, and market instability. The paper argues that the invisible hand has not disappeared, but it now takes the form of an “algorithmic hand.” For this hand to truly serve society, there must be careful attention to ethics and policy. |
| Keywords: | Adam Smith, Invisible Hand, Wealth of Nations, Algorithms, Digital Economy, Market Competition, Consumer Behaviour, Algorithmic Pricing, Platform Capitalism, Economic Policy. |
| JEL: | B12 D47 K23 L17 L86 O33 |
| Date: | 2025–09–14 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126154 |
| By: | Alex Asher (StataCorp) |
| Abstract: | Stata's built-in power command accepts user-defined programs to calculate power, sample size, or effect size. Power can be estimated by simulation, even in complex scenarios where there is no closed-form expression. To estimate sample size given power, multiple simulations are needed. This talk describes how to use simulation to estimate power and sample size using the power command. Learn how to do the following: write simulation programs that are compatible with all the features of power, ciwidth, and gsdesign; customize graphs and tables using an initializer; control Monte Carlo errors; and estimate sample size using the bisection method. |
| Date: | 2025–09–04 |
| URL: | https://d.repec.org/n?u=RePEc:boc:lsug25:14 |
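The idea of estimating power by simulation and inverting it for sample size with bisection is language-agnostic; the Python illustration below does this for a two-sample t test. It is not Stata code and not StataCorp's implementation of power, just the general recipe the talk describes.

```python
# Simulation-based power for a two-sample t test, and sample size via bisection.
import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect=0.5, alpha=0.05, reps=2_000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n_per_group)
        y = rng.normal(effect, 1.0, n_per_group)
        rejections += stats.ttest_ind(x, y).pvalue < alpha
    return rejections / reps

def sample_size_by_bisection(target_power=0.8, lo=2, hi=500):
    # Power is (approximately) monotone in n, so bisect until the bracket closes.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if simulated_power(mid) >= target_power:
            hi = mid
        else:
            lo = mid
    return hi

print("n per group for 80% power (effect size 0.5):", sample_size_by_bisection())
```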
| By: | Coleman Drake; Mark K. Meiselbach; Daniel Polsky |
| Abstract: | Enrollment in the Health Insurance Marketplaces created by the Affordable Care Act reached an all-time high of approximately 25 million Americans in 2025, roughly doubling since enhanced premium tax credit subsidies were made available in 2021. The scheduled expiration of enhanced subsidies in 2026 is estimated to leave over seven million Americans without health insurance coverage. Ten states have created supplemental Marketplace subsidies, yet little attention has been paid to how to best structure these subsidies to maximize coverage. Using administrative enrollment data from Maryland's Marketplace, we estimate demand for Marketplace coverage. Then, using estimated parameters and varying budget constraints, we simulate how to optimally allocate supplemental state premium subsidies to mitigate coverage losses from enhanced premium subsidy expiration. We find that premium sensitivity is greatest among enrollees with incomes below 200 percent of the federal poverty level, where the marginal effect of an additional ten dollars in monthly subsidies on the probability of coverage is approximately 6.5 percentage points, and decreases to roughly 2.5 percentage points above 200 percent FPL. Simulation results indicate that each 10 million dollars in annual state subsidies could retain roughly 5,000 enrollees, though the cost-effectiveness of these subsidies falls considerably once all enrollees below 200 percent of the federal poverty level are fully subsidized. We conclude that states are well positioned to mitigate, but not stop, coverage losses from expanded premium tax credit subsidy expiration. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.13791 |
| By: | Boughabi, Houssam |
| Abstract: | This study simulates labor market dynamics in Germany during 1914-1920, focusing on the impact of World War I on wage expectations and economic stability. Building on the historical context of the period, the simulation takes into account the initial wage hikes driven by involuntary unemployment and the exodus of workers from civilian industries to war-related industries. It then explores the subsequent stabilization of wages, as observed in the empirical reality of the era. Using simulated data, the paper examines how workers' expectations of future wages were shaped by current wage conditions amidst the economic strain caused by the war. The study also investigates the role of national unity, government policies, and institutional frameworks in preserving wage stability during market volatility. To analyze wage dynamics, the research applies a dynamic wage model alongside the FIGARCH(1, d, 1) model, estimating long-memory effects in wage volatility. The findings suggest that past economic conditions played a significant role in shaping current wage expectations, with mild long-memory properties observed in wage volatility. This simulated analysis offers insights into how economic pressures and government interventions during wartime may have contributed to wage stability, shaping workers' uncertainty about when the war would end, in accordance with the martingale hypothesis of wage expectations. Our study is contextual and qualitative, and discusses the properties of the economic series in the context of the period. |
| Keywords: | Wage Volatility, Long Memory Processes, Labor Market Dynamics, Economic History |
| JEL: | C22 E24 J31 N34 |
| Date: | 2025–05–13 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126295 |