| on Risk Management |
| By: | Rozana Himaz; Vanina Farber; Saut Sagal |
| Abstract: | Households in disaster-prone environments face multiple market failures (credit constraints, coordination breakdowns, and behavioural biases) that undermine the effectiveness of standalone financial instruments such as insurance. This paper develops a theoretical model showing that welfare-optimal household disaster risk management requires bundling financial instruments across the ex-ante and ex-post disaster risk management cycle, covering prevention, mitigation, coping, and recovery, and layering tools by hazard probability and severity. We show that bundling dominates single-instrument approaches when it simultaneously relaxes distinct market frictions and is complemented by coordination effectiveness. Numerical simulations illustrate hazard-specific optimal portfolios for frequent floods and rare catastrophic earthquakes. We use two programmes from Indonesia to illustrate how strategic bundling can be applied in programme design. The framework provides testable predictions and guidance for designing integrated household financial protection systems in developing countries. |
| Keywords: | Risk Layering, Bundling, Market Failures, Disaster Risk Management, Developing Countries, Household Finance |
| JEL: | D81 G22 O16 Q54 H84 D91 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:csa:wpaper:2026-01 |
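The layering logic in the abstract above lends itself to a small expected-utility comparison. The sketch below is a minimal illustration under invented parameters (hazard probabilities, loss sizes, premium loadings, and the CRRA risk aversion are all hypothetical, not taken from the paper): it compares no protection, insuring both hazards, and a layered bundle that retains the frequent flood layer while insuring only the rare catastrophic layer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
wealth = 100.0
gamma = 3.0  # relative risk aversion (hypothetical)

# Hypothetical hazards: frequent/moderate floods vs. rare/catastrophic earthquakes.
flood = rng.binomial(1, 0.20, n) * 10.0   # 20% chance of a 10-unit loss
quake = rng.binomial(1, 0.01, n) * 80.0   # 1% chance of an 80-unit loss

def eu(consumption):
    """Expected CRRA utility."""
    return np.mean(consumption ** (1 - gamma) / (1 - gamma))

loading = 1.2  # premium loading factor (hypothetical)
prem_flood = loading * 0.20 * 10.0
prem_quake = loading * 0.01 * 80.0

portfolios = {
    "no protection": wealth - flood - quake,
    "insure both hazards": np.full(n, wealth - prem_flood - prem_quake),
    "layered: retain floods, insure quakes": wealth - flood - prem_quake,
}
for name, c in portfolios.items():
    print(f"{name:40s} EU = {eu(c):.6f}")
```

Which option wins depends entirely on the invented loadings and risk aversion; the point of the sketch is only to show how layering by hazard probability and severity enters an expected-utility comparison.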
| By: | Marc Schmitt |
| Abstract: | I construct a Market Stress Probability Index (MSPI) that estimates the probability of high stress in the U.S. equity market one month ahead using information from the cross-section of individual stocks. Using CRSP daily data, each month is summarized by a set of interpretable cross-sectional fragility signals and mapped into a forward-looking stress probability via an L1-regularized logistic regression in a real-time expanding-window design. Out of sample, MSPI tracks major stress episodes and improves discrimination and accuracy relative to a parsimonious benchmark based on lagged market return and realized volatility, delivering calibrated stress probabilities on an economically meaningful scale. Further, I illustrate how MSPI can be used as a probability-based measurement object in financial econometrics. The resulting index provides a transparent and easily updated measure of near-term equity-market stress risk. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.07066 |
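The expanding-window, L1-regularized design described in the abstract above can be sketched with scikit-learn. The fragility-signal construction from CRSP data is not reproduced; the feature matrix and the high-stress label below are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical monthly data: rows are months, columns stand in for cross-sectional
# fragility signals; y[t] = 1 if month t+1 is a high-stress month.
rng = np.random.default_rng(0)
X = rng.normal(size=(360, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=360) > 1.5).astype(int)

min_train = 120          # initial training window
probs = np.full(len(y), np.nan)

for t in range(min_train, len(y)):
    # Real-time expanding window: fit only on data available before month t.
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", C=0.5, solver="liblinear"),
    )
    model.fit(X[:t], y[:t])
    probs[t] = model.predict_proba(X[t:t + 1])[0, 1]  # one-month-ahead stress probability

print("Out-of-sample stress probabilities (last 5 months):", np.round(probs[-5:], 3))
```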
| By: | Niakh, Fallou; Bassière, Alicia; Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium); Robert, Christian |
| Abstract: | This work presents a framework for peer-to-peer (P2P) basis risk management applied to solar electricity generation. The approach leverages physically based simulation models to estimate the day-ahead production forecasts and the actual realized production at the solar farm level. We quantify the financial loss from mismatches between forecasted and actual production using the outputs of these simulations. The framework then implements a parametric insurance mechanism to mitigate these financial losses and combines it with a P2P market structure to enhance participant risk sharing. By integrating day-ahead forecasts and actual production data with physical modeling, this method provides a comprehensive solution to manage production variability, offering practical insights for improving financial resilience in renewable energy systems. The results highlight the potential of combining parametric insurance with P2P mechanisms to foster reliability and collaboration in renewable energy markets. |
| Keywords: | Parametric insurance ; Basis risk ; P2P insurance ; Renewable production insurance |
| Date: | 2025–04–15 |
| URL: | https://d.repec.org/n?u=RePEc:aiz:louvad:2025007 |
| By: | Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium); Michaelides, Marie (Heriot-Watt University); Trufin, Julien (ULB); Verelst, Harrison (Detralytics) |
| Abstract: | This paper proposes a variant of the well-known boosting trees algorithm to estimate conditional distributions. Since regression trees partition observations into subgroups, the corresponding empirical distributions can be used to define the splitting criterion. Precisely, the parametric approach using Poisson deviance is replaced with a non-parametric one maximizing probabilistic distances between empirical distributions in child nodes. Proceeding in this way, the actuary obtains an estimated conditional distribution for the response, from which a conditional mean can be derived as well as any other quantity of interest in risk management. The numerical performances of the proposed method are assessed with simulated data while a case study demonstrates its usefulness for insurance applications. |
| Keywords: | Wasserstein distance ; regression trees ; boosting ; conditional distribution ; count data |
| Date: | 2025–11–06 |
| URL: | https://d.repec.org/n?u=RePEc:aiz:louvad:2025024 |
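The non-parametric splitting criterion can be illustrated for a single numeric feature: among candidate split points, pick the one that maximizes the Wasserstein distance between the empirical response distributions of the two child nodes. The sketch below uses simulated count data and is not the authors' implementation; it shows a single split, not the full boosting procedure.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
# Simulated data: the response distribution changes with the feature.
x = rng.uniform(0, 1, 2000)
y = rng.poisson(lam=np.where(x > 0.6, 3.0, 0.8))

def best_split(x, y, min_leaf=50):
    """Pick the split point maximizing the Wasserstein distance
    between the empirical response distributions of the child nodes."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (None, -np.inf)
    for i in range(min_leaf, len(xs) - min_leaf):
        if xs[i] == xs[i - 1]:
            continue
        d = wasserstein_distance(ys[:i], ys[i:])
        if d > best[1]:
            best = (0.5 * (xs[i - 1] + xs[i]), d)
    return best

split, dist = best_split(x, y)
print(f"best split at x = {split:.3f}, Wasserstein distance = {dist:.3f}")
```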
| By: | Stefano Scoleri; Marco Bianchetti; Sergei Kucherenko |
| Abstract: | Quasi Monte Carlo (QMC) and Global Sensitivity Analysis (GSA) techniques are applied for pricing and hedging representative financial instruments of increasing complexity. We compare standard Monte Carlo (MC) vs QMC results using Sobol' low discrepancy sequences, different sampling strategies, and various analyses of performance. We find that QMC outperforms MC in most cases, including the highest-dimensional simulations, showing faster and more stable convergence. Regarding greeks computation, we compare standard approaches, based on finite differences (FD) approximations, with adjoint methods (AAD), providing evidence that, when the number of greeks is small, the FD approach combined with QMC can reach the same accuracy as AAD, thanks to increased convergence rate and stability, thus saving a lot of implementation effort while keeping computational cost low. Using GSA, we are able to fully explain our findings in terms of the reduced effective dimension of QMC simulation, achieved in most cases, but not always, through Brownian bridge discretization or PCA construction. We conclude that, beyond pricing, QMC is a very efficient technique also for computing risk measures, greeks in particular, as it reduces the computational effort of the high-dimensional Monte Carlo simulations typical of modern risk management. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.14354 |
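A minimal sketch of the MC-versus-QMC comparison, for a one-dimensional Black-Scholes European call with hypothetical parameters, using a scrambled Sobol' sequence from scipy.stats.qmc; the paper's higher-dimensional instruments, greeks, and GSA diagnostics are not reproduced.

```python
import numpy as np
from scipy.stats import norm, qmc

# Hypothetical Black-Scholes European call (terminal value only, dimension 1).
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.25, 1.0
n = 2 ** 14

def payoff(z):
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Standard Monte Carlo.
rng = np.random.default_rng(0)
mc_price = payoff(rng.standard_normal(n)).mean()

# Quasi-Monte Carlo with a scrambled Sobol' sequence, mapped to normals.
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
u = sobol.random(n).ravel()
qmc_price = payoff(norm.ppf(u)).mean()

# Black-Scholes reference value.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"BS = {bs:.4f}  MC = {mc_price:.4f}  QMC = {qmc_price:.4f}")
```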
| By: | Antonini, Marcello; Henriquez, Josefa; van Kleef, Richard; Melia, Adrian; Paolucci, Francesco |
| Abstract: | Several mandatory health insurance schemes include some consumer choice of coverage (e.g., in terms of deductible levels). Premiums in these schemes are typically community-rated per insurance plan. While community-rated premiums help achieve objectives of fairness, they can also lead to adverse selection across coverage options. Consequently, consumers may sort inefficiently across these coverage options, resulting in forgone welfare gains. This paper explores under what conditions risk rating of incremental premiums for more comprehensive coverage (compared to a basic plan) can improve consumer sorting. In a simulation analysis on Chilean data, results show that under perfect risk adjustment, risk rating of incremental premiums can improve consumer sorting. With imperfect risk adjustment, however, the effects of risk rating on consumer sorting are ambiguous, as incremental premiums will not just reflect the direct effect of more comprehensive coverage on healthcare spending, but also the under/overcompensation from the (imperfect) risk adjustment system. Moreover, we find that in the presence of imperfect risk adjustment, risk rating improves welfare over community rating but does not fully solve the problem of inefficient sorting. |
| Keywords: | health insurance; coverage options; community rating; risk rating; risk adjustment; moral hazard |
| JEL: | F3 G3 |
| Date: | 2026–03–31 |
| URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:137182 |
| By: | German Nova Orozco; Duy-Minh Dang; Peter A. Forsyth |
| Abstract: | Money-back guarantees (MBGs) are features of pooled retirement income products that address bequest concerns by ensuring the initial premium is returned through lifetime payments or, upon early death, as a death benefit to the estate. This paper studies optimal retirement decumulation in an individual tontine account with an MBG overlay under international diversification and systematic longevity risk. The retiree chooses withdrawals and asset allocation dynamically to trade off expected total withdrawals (EW) against the Conditional Value-at-Risk (CVaR) of terminal wealth, subject to realistic investment constraints. The optimization is solved under a plan-to-live convention, while stochastic mortality affects outcomes through its impact on mortality credits at the pool level. We develop a neural-network based computational approach for the resulting high-dimensional, constrained control problem. The MBG is priced ex post under the induced EW–CVaR optimal policy via a simulation-based actuarial rule that combines expected guarantee costs with a prudential tail buffer. Using long-horizon historical return data expressed in real domestic-currency terms, we find that international diversification and longevity pooling jointly deliver the largest improvements in the EW–CVaR trade-off, while stochastic mortality shifts the frontier modestly in the expected direction. The optimal controls use foreign equity primarily as a state-dependent catch-up instrument, and implied MBG loads are driven mainly by tail outcomes (and the chosen prudential buffer) rather than by mean payouts. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.16212 |
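The CVaR side of the EW–CVaR objective can be illustrated with the standard empirical estimator: the mean of the worst alpha-fraction of simulated terminal-wealth outcomes. The simulated distribution below is purely hypothetical.

```python
import numpy as np

def cvar(outcomes, alpha=0.05):
    """Empirical CVaR at level alpha: the mean of the worst alpha-fraction of
    outcomes (here lower outcomes = worse, e.g. terminal wealth)."""
    x = np.sort(np.asarray(outcomes))
    k = max(1, int(np.ceil(alpha * len(x))))
    return x[:k].mean()

rng = np.random.default_rng(0)
terminal_wealth = rng.lognormal(mean=5.0, sigma=0.6, size=100_000)  # hypothetical simulated paths
print(f"mean terminal wealth   = {terminal_wealth.mean():.1f}")
print(f"5% CVaR terminal wealth = {cvar(terminal_wealth, 0.05):.1f}")
```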
| By: | Yijie Wang; Hao Gao; Campbell R. Harvey; Yan Liu; Xinyuan Tao |
| Abstract: | The standard approach to portfolio selection involves two stages: forecast the asset returns and then plug them into an optimizer. We argue that this separation is deeply problematic. The first stage treats cross-sectional prediction errors as equally important across all securities. However, given that final portfolios may differ across risk preferences and investment restrictions, the standard approach fails to recognize that the investor is concerned not just with the average forecast error, but with the precision of the forecasts for the specific assets that are most important for their portfolio. Hence, it is crucial to integrate the two stages. We propose a novel implementation utilizing machine learning tools that unifies the expected return generation process and the final optimized portfolio. Our empirical example provides convincing evidence that our end-to-end method outperforms the traditional two-stage approach. In our framework, each investor has their own, endogenously determined, efficient frontier that depends on risk preferences, investor-specific constraints, as well as exposure to market frictions. |
| JEL: | C45 C55 G11 G12 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34861 |
| By: | Eric Cuijpers; Razvan Vlahu |
| Abstract: | Regulatory limits on intragroup exposures constrain capital allocation within multi-national banking groups. We develop a theoretical model of cross-border banking that captures internal capital markets under supranational supervision and borrowing constraints. Our analysis shows that relaxing intragroup exposure limits can amplify risk-taking by enabling parent banks to draw on affiliate resources and reallocate risk toward the home market, particularly when foreign affiliates are large, well capitalized, and subject to weaker liquidity requirements. We characterize the conditions under which this channel operates and discuss its implications for financial stability. Our findings inform the debate on multinational banking groups by showing how risks can emerge within these organizations and how regulatory tools can mitigate them. |
| Keywords: | Multinational banks; Intragroup exposures; Risk-taking; Prudential regulation; Liquidity requirements |
| JEL: | F23 G21 G28 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:dnb:dnbwpp:854 |
| By: | Leow, Maggi |
| Abstract: | This research study examined firm-specific (internal) and macroeconomic (external) determinants of financial risk performance at Target Corporation in the United States over a ten-year period, 2014-2023. The company experienced variability in operating profitability, and several indicators pointed to rising financial pressure, making it essential to identify the factors that determine financial risk. Financial ratios and economic indicators were analyzed in SPSS using correlation and regression models. Three models were estimated to identify the impact of internal factors, external factors, and a combination of both sets of variables on Operating Profit Margin (OPM), which served as the primary proxy for financial risk. The results identify Net Profit Margin (NPM) as the most important internal factor enhancing OPM, whereas Operational Risk is the most important internal factor reducing it. GDP and the interest rate have only weak external effects, indicating that Target is affected more by internal managerial factors than by macroeconomic conditions. The findings suggest that cost optimization, control of operational failures, and management of interest rate risk are pivotal in reducing financial risk. The study is limited in scope, covering a single company and ten years of data, which may not represent the industry as a whole; further research could involve multiple companies and additional financial indicators. |
| Keywords: | Target Corporation, Financial Risk Performance, United States, Operating Profit Margin, Net Profit Margin, Operational Risk, GDP, Interest Rate |
| JEL: | G3 G32 |
| Date: | 2026–01–08 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:127634 |
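The regression design described above (OPM on internal and external factors) can be sketched with statsmodels in place of SPSS; the variable names mirror the abstract, but the ten annual observations below are simulated placeholders, not Target Corporation's actual figures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical annual data standing in for 2014-2023 financial ratios and macro indicators.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "NPM": rng.normal(4.0, 1.0, 10),               # net profit margin (%)
    "operational_risk": rng.normal(0.5, 0.2, 10),
    "GDP_growth": rng.normal(2.3, 1.0, 10),
    "interest_rate": rng.normal(2.0, 1.5, 10),
})
df["OPM"] = 1.5 + 0.8 * df["NPM"] - 2.0 * df["operational_risk"] + rng.normal(0, 0.3, 10)

# Combined model: internal and external factors together explaining OPM.
X = sm.add_constant(df[["NPM", "operational_risk", "GDP_growth", "interest_rate"]])
model = sm.OLS(df["OPM"], X).fit()
print(model.summary())
```

With only ten observations the degrees of freedom are very limited, which mirrors the single-company, ten-year limitation the study itself acknowledges.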
| By: | Aur\'elien Alfonsi; Ahmed Kebaier |
| Abstract: | For a class of stochastic models with Gaussian and rough mean-reverting volatility that embeds the genuine rough Stein-Stein model, we study the weak approximation rate when using an Euler-type scheme with integrated kernels. Our first result is a weak convergence rate for the discretised rough Ornstein-Uhlenbeck process, which is essentially of order $\min(3\alpha-1, 1)$, where $\frac{t^{\alpha-1}}{\Gamma(\alpha)}$ is the fractional convolution kernel with $\alpha \in (1/2, 1)$. Then, our main result is to obtain the same convergence rate for the corresponding stochastic rough volatility model with polynomial test functions. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.18234 |
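A crude illustration of the object being discretised: a rough (Volterra) Ornstein-Uhlenbeck process simulated with an Euler-type recursion using integrated kernel weights. The treatment of the stochastic term below (averaging the kernel over each interval) is a simplifying assumption for illustration, not the scheme analysed in the paper, and all parameters are hypothetical.

```python
import numpy as np
from scipy.special import gamma as Gamma

# Hypothetical rough Ornstein-Uhlenbeck (Volterra) process
#   X_t = int_0^t K(t - s) (-lam * X_s ds + eta dW_s),  K(u) = u^(alpha-1) / Gamma(alpha)
alpha, lam, eta = 0.7, 1.0, 0.3
T, N = 1.0, 500
dt = T / N
t = np.linspace(0.0, T, N + 1)

rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(dt), N)

X = np.zeros(N + 1)
for i in range(1, N + 1):
    j = np.arange(i)
    # Integrated kernel weights w[j] = int_{t_j}^{t_{j+1}} K(t_i - s) ds.
    w = ((t[i] - t[j]) ** alpha - (t[i] - t[j + 1]) ** alpha) / Gamma(alpha + 1)
    drift = np.sum(w * (-lam * X[j]))
    # Simple (assumed) choice for the stochastic part: kernel averaged over each interval.
    diff = np.sum((w / dt) * eta * dW[j])
    X[i] = drift + diff

print("simulated rough OU endpoint:", X[-1])
```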
| By: | Lan Bu; Ning Cai; Chenxi Xia; Jingping Yang |
| Abstract: | This paper addresses a key challenge in CDO modeling: achieving a perfect fit to market prices across all tranches using a single, consistent model. The existence of such a perfect-fit model implies the absence of arbitrage among CDO tranches and is thus essential for unified risk management and the pricing of nonstandard credit derivatives. To address this central challenge, we face three primary difficulties: standard parametric models typically fail to achieve a perfect fit; the calibration of standard parametric models inherently relies on computationally intensive simulation-based optimization; and there is a lack of formal theory to determine when a perfect-fit model exists and, if it exists, how to construct it. We propose a theoretical framework to overcome these difficulties. We first introduce and define two compatibility levels of market prices: weak compatibility and strong compatibility. Specifically, market prices across all tranches are said to be weakly (resp. strongly) compatible if there exists a single model (resp. a single conditionally i.i.d. model) that perfectly fits these market prices. We then derive sufficient and necessary conditions for both levels of compatibility by establishing a relationship between compatibility and linear programming (LP) problems. Furthermore, under either condition, we construct a corresponding concrete copula model that achieves a perfect fit. Notably, our framework not only allows for efficient verification of weak compatibility and strong compatibility through LP problems but also facilitates the construction of the corresponding copula models that achieve a perfect fit, eliminating the need for simulation-based optimization. The practical applications of our framework are demonstrated in risk management and the pricing of nonstandard credit derivatives. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.08039 |
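The link between price compatibility and linear programming can be illustrated with a toy feasibility check: does a probability distribution over a discretised pool-loss grid exist whose implied expected tranche losses match given market-implied values? The attachment points and target values below are hypothetical, and this toy LP is not the paper's construction.

```python
import numpy as np
from scipy.optimize import linprog

# Discretised pool-loss grid (fraction of notional), hypothetical.
L = np.linspace(0.0, 1.0, 101)

def tranche_loss(L, a, d):
    """Loss on a tranche with attachment a and detachment d, as a fraction of tranche notional."""
    return np.clip(L - a, 0.0, d - a) / (d - a)

tranches = [(0.00, 0.03), (0.03, 0.07), (0.07, 0.15), (0.15, 0.30)]
# Hypothetical market-implied expected tranche losses to be matched exactly.
target = np.array([0.55, 0.18, 0.06, 0.01])

# Feasibility LP: find p >= 0 with sum(p) = 1 and expected tranche losses = target.
A_eq = np.vstack([tranche_loss(L, a, d) for a, d in tranches] + [np.ones_like(L)])
b_eq = np.append(target, 1.0)
res = linprog(c=np.zeros_like(L), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(L))

print("compatible (a perfectly fitting loss distribution exists):", res.success)
```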
| By: | Hainaut, Donatien (Université catholique de Louvain, LIDAM/ISBA, Belgium); Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
| Abstract: | This paper proposes a new approach to risk classification based on Generalized Gaussian Process Regression (GGPR). The response under consideration obeys a distribution belonging to the Exponential Dispersion (ED) family. It typically corresponds to a claim count or a claim severity in the context of insurance studies. GGPR is a supervised machine learning method with a Bayesian flavor. Individual random effects obeying a multivariate Normal distribution are connected with the help of their covariance matrix built from a so-called kernel function. The latter enforces smoothness, borrowing information from similar risk profiles. Bayesian Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) are recovered as special cases, assuming a highly-structured prior covariance matrix. Compared to the existing literature, this paper innovates to account for the specificity of data entering insurance studies. First, proper risk exposures are included in model formulation and development. Second, parameters are estimated by minimizing deviance instead of an approximated loglikelihood. Third, categorical features that are often encountered in insurance databases are coded with the help of an embedding method based on Burt matrices. Fourth, K-means clustering is used to reduce the dimension of the problem and create model points within large insurance portfolios. Numerical experiments performed on publicly available insurance data sets illustrate the relevance of the GGPR approach to risk classification. Benchmarked against the classical GLM, the performances of GGPR turn out to be excellent given its reduced number of parameters. This suggests that GGPR nicely enriches the actuarial toolkit by providing preliminary predictions that can then be structured with additive scores like those entering GLMs and GAMs. |
| Keywords: | Exponential Dispersion family ; Mixed models ; Risk classification ; Categorical embedding ; Burt distance ; Model points |
| Date: | 2025–03–06 |
| URL: | https://d.repec.org/n?u=RePEc:aiz:louvad:2025004 |
| By: | Miguel C. Herculano |
| Abstract: | Parametric Portfolio Policies (PPP) estimate optimal portfolio weights directly as functions of observable signals by maximizing expected utility, bypassing the need to model asset returns and covariances. However, PPP ignores policy risk. We show that this is consequential, leading to an overstatement of expected utility and an understatement of portfolio risk. We develop Bayesian Parametric Portfolio Policies (BPPP), which place a prior on policy coefficients, thereby correcting the decision rule. We derive a general result showing that the utility gap between PPP and BPPP is strictly positive and proportional to posterior parameter uncertainty and signal magnitude. Under a mean-variance approximation, this correction appears as an additional estimation-risk term in portfolio variance, implying that PPP overexposes when signals are strongest and when risk aversion is high. Empirically, in a high-dimensional setting with 242 signals and six factors over 1973–2023, BPPP delivers higher Sharpe ratios, substantially lower turnover, larger investor welfare, and lower tail risk, with advantages that increase monotonically in risk aversion and are strongest during crisis episodes. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.21173 |
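The (non-Bayesian) PPP baseline that the paper extends can be sketched in a few lines: weights are an equal-weight benchmark tilted linearly by asset characteristics, and the tilt coefficients maximize average realized CRRA utility in-sample. Data, dimensions, and the policy form below are illustrative assumptions; the paper's Bayesian correction would replace the point estimate with utility averaged over posterior draws of the coefficients.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, N, K = 240, 50, 3            # months, assets, characteristics (all hypothetical)
signals = rng.normal(size=(T, N, K))
returns = 0.01 + 0.02 * signals[..., 0] + 0.05 * rng.normal(size=(T, N))  # signal 0 is informative
gamma = 5.0                      # relative risk aversion
bench = np.full(N, 1.0 / N)      # equal-weight benchmark portfolio

def neg_avg_utility(theta):
    """Negative average realized CRRA utility of the parametric policy
    w_t = benchmark + (signals_t @ theta) / N (Brandt et al.-style PPP)."""
    w = bench + signals @ theta / N           # weights, shape (T, N)
    rp = np.einsum("tn,tn->t", w, returns)    # realized portfolio return each month
    rp = np.clip(rp, -0.95, None)             # numerical guard against ruinous returns
    return -np.mean((1.0 + rp) ** (1.0 - gamma) / (1.0 - gamma))

res = minimize(neg_avg_utility, x0=np.zeros(K), method="Nelder-Mead")
print("estimated policy coefficients:", np.round(res.x, 3))
```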
| By: | Srijan Sood; Kassiani Papasotiriou; Marius Vaiciulis; Tucker Balch |
| Abstract: | Portfolio Management is the process of overseeing a group of investments, referred to as a portfolio, with the objective of achieving predetermined investment goals. Portfolio optimization is a key component that involves allocating the portfolio assets so as to maximize returns while minimizing risk taken. It is typically carried out by financial professionals who use a combination of quantitative techniques and investment expertise to make decisions about the portfolio allocation. Recent applications of Deep Reinforcement Learning (DRL) have shown promising results when used to optimize portfolio allocation by training model-free agents on historical market data. Many of these methods compare their results against basic benchmarks or other state-of-the-art DRL agents but often fail to compare their performance against traditional methods used by financial professionals in practical settings. One of the most commonly used methods for this task is Mean-Variance Portfolio Optimization (MVO), which uses historical time series information to estimate expected asset returns and covariances, which are then used to optimize for an investment objective. Our work is a thorough comparison between model-free DRL and MVO for optimal portfolio allocation. We detail the specifics of how to make DRL for portfolio optimization work in practice, also noting the adjustments needed for MVO. Backtest results demonstrate strong performance of the DRL agent across many metrics, including Sharpe ratio, maximum drawdowns, and absolute returns. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.17098 |
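The MVO benchmark used in the comparison can be sketched as a long-only mean-variance optimization with scipy, with expected returns and covariances estimated from a simulated historical window (all figures hypothetical; the paper's DRL agent is not reproduced).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_assets, n_days = 5, 500
daily = rng.multivariate_normal(
    mean=[0.0004, 0.0003, 0.0005, 0.0002, 0.0004],
    cov=0.0001 * (np.eye(n_assets) * 0.8 + 0.2),   # hypothetical return-generating process
    size=n_days,
)

mu = daily.mean(axis=0)                 # estimated expected returns
Sigma = np.cov(daily, rowvar=False)     # estimated covariance matrix
risk_aversion = 10.0

def neg_objective(w):
    # Mean-variance objective: maximize mu'w - (lambda/2) w'Sigma w.
    return -(w @ mu - 0.5 * risk_aversion * w @ Sigma @ w)

constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # fully invested
bounds = [(0.0, 1.0)] * n_assets                                  # long-only
res = minimize(neg_objective, x0=np.full(n_assets, 1 / n_assets),
               bounds=bounds, constraints=constraints, method="SLSQP")
print("MVO weights:", np.round(res.x, 3))
```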
| By: | Christopher Busch; Rocio Madera |
| Abstract: | The tax and transfer system partially insures households against individual income risk. We build a framework to assess the size and (welfare) value of this partial insurance, which is based on exploiting distributional differences between household gross income and disposable income. Our approach flexibly accounts for these differences and does not require the specification of a tax function. The key feature of the model around which our framework is built is that the degree of partial insurance is directly parameterized, which allows us to solve for the degree of insurance provided by the tax and transfer system as a fixed point. Our approach works with standard homothetic preferences and is flexible regarding the distributions of income shocks. Only in the nested special case of homoskedastic log-Normal distributions does the ratio of the dispersion of permanent shocks to gross and disposable incomes provide a sufficient statistic for the degree of government-provided consumption insurance. In an application to data from Swedish tax registers, we find that the degree of partial insurance against permanent shocks by the tax and transfer system amounts to about 49%. Using the Panel Study of Income Dynamics, we also document that the model-based measure aligns well with empirical estimates based on survey data on consumption, implying a degree of insurance of about 25% in the United States. |
| Keywords: | idiosyncratic income risk, tax and transfer system, public insurance, partial insurance, incomplete markets |
| JEL: | D31 D52 E21 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12467 |
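One plausible reading of the log-Normal special case, stated as an assumption rather than the paper's derivation: if permanent shocks to disposable income are a scaled-down version of permanent shocks to gross income, the degree of public insurance is one minus the ratio of their dispersions. The inputs below are invented to make the arithmetic concrete; only the 49% headline figure comes from the abstract.

```python
# Assumed linear pass-through of permanent shocks: eta_disp = (1 - theta) * eta_gross,
# so theta = 1 - sigma_disp / sigma_gross measures the degree of insurance provided
# by the tax and transfer system (a simplified reading, not the paper's formula).
sigma_gross = 0.150   # hypothetical s.d. of permanent shocks to gross income
sigma_disp = 0.0765   # hypothetical s.d. of permanent shocks to disposable income
theta = 1.0 - sigma_disp / sigma_gross
print(f"implied degree of public insurance: {theta:.0%}")   # 49% with these invented inputs
```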
| By: | Min Dai; Yuchao Dong; Yanwei Jia; Xun Yu Zhou |
| Abstract: | The classical Merton investment problem predicts deterministic, state-dependent portfolio rules; however, laboratory and field evidence suggests that individuals often prefer randomized decisions leading to stochastic and noisy choices. Fudenberg et al. (2015) develop the additive perturbed utility theory to explain the preference for randomization in the static setting, which, however, becomes ill-posed or intractable in the dynamic setting. We introduce the recursive perturbed utility (RPU), a special stochastic differential utility that incorporates an entropy-based preference for randomization into a recursive aggregator. RPU endogenizes the intertemporal trade-off between utilities from randomization and bequest via a discounting term dependent on past accumulated randomization, thereby avoiding excessive randomization and yielding a well-posed problem. In a general Markovian incomplete market with CRRA preferences, we prove that the RPU-optimal portfolio policy (in terms of the risk exposure ratio) is Gaussian and can be expressed in closed form, independent of wealth. Its variance is inversely proportional to risk aversion and stock volatility, while its mean is based on the solution to a partial differential equation. Moreover, the mean is the sum of a myopic term and an intertemporal hedging term (against market incompleteness) that intertwines with policy randomization. Finally, we carry out an asymptotic expansion in terms of the perturbed utility weight to show that the optimal mean policy deviates from the classical Merton policy at first order, while the associated relative wealth loss is of a higher order, quantifying the financial cost of the preference for randomization. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.13544 |
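A stylized sampler for the Gaussian randomized policy described above. The mean is set to the classical Merton ratio (omitting the paper's intertemporal hedging term) and the variance scaling is an assumed functional form chosen to be decreasing in risk aversion and volatility; neither is the paper's closed-form expression.

```python
import numpy as np

# Stylized randomized Merton policy (functional forms are illustrative assumptions).
mu, r, sigma, gamma = 0.08, 0.02, 0.20, 3.0   # drift, risk-free rate, volatility, risk aversion
lam = 0.05                                     # weight of the preference for randomization

merton_ratio = (mu - r) / (gamma * sigma ** 2)   # classical deterministic policy
policy_mean = merton_ratio                        # the paper adds an intertemporal hedging term
policy_var = lam / (gamma * sigma)                # assumed scaling: decreasing in gamma and sigma

rng = np.random.default_rng(0)
exposures = rng.normal(policy_mean, np.sqrt(policy_var), size=5)
print("Merton ratio:", round(merton_ratio, 3))
print("sampled risk exposures:", np.round(exposures, 3))
```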
| By: | Yousfi, Ridha |
| Abstract: | This study investigates how liquidity risk affects the performance of deposit money banks (DMBs) in Tunisia, while also examining the moderating influence of nonperforming loans on this relationship. Using a two-step system generalized method of moments (GMM) estimator, the analysis is conducted on a sample of 50 listed banks across six countries: Nigeria, Ghana, South Africa, Zambia, Kenya, and Tanzania. Bank performance is measured through return on assets (ROA) and return on equity (ROE), with net interest margin (NIM) serving as a robustness indicator. The results reveal that liquidity risk has a significant and negative impact on bank performance, indicating that higher liquidity risk reduces profitability. Similarly, nonperforming loans negatively and significantly influence bank performance, and their interaction with liquidity risk further exacerbates this adverse effect. These findings are consistent across alternative performance metrics and econometric models that address potential endogeneity issues. Overall, this study provides one of the earliest cross-country empirical insights into how liquidity risk affects DMB performance in Tunisia and contributes to the literature by integrating the joint effect of liquidity risk and nonperforming loans, thereby highlighting the compounded challenges facing banks in the region. |
| Keywords: | Liquidity risk, GMM, Return on assets, Return on equity |
| JEL: | G3 G38 |
| Date: | 2025–03–01 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126784 |
| By: | Yousfi, Ridha |
| Abstract: | This study examines the effect of liquidity risk on the financial stability of Islamic commercial banks in Tunisia over the period 2012–2021. Using a purposive sample of 10 Islamic banks and a balanced panel that yields 78 bank-year observations, we estimate a set of OLS regressions to assess direct and indirect relationships among liquidity risk (measured by the Financing to Deposit Ratio, FDR), credit risk (Non-Performing Financing, NPF), operational efficiency (the ratio of operating expenses to operating income, BOPO) and bank stability (Z-score). Diagnostic tests (normality, multicollinearity, heteroscedasticity and autocorrelation) support the adequacy of the models. Empirical results show that higher liquidity risk (FDR) is associated with lower bank stability, and that elevated NPF significantly reduces stability. Operational efficiency (lower BOPO) is positively associated with bank stability. The analysis also reveals that FDR negatively affects NPF, while NPF strongly reduces operational efficiency. These findings underscore the need for balanced liquidity management that preserves lending activity without compromising solvency and operational performance. Policy implications for regulators and bank managers in emerging Islamic banking systems are discussed. |
| Keywords: | Liquidity Risk; Bank Stability; Islamic Banks; Credit Risk; Operational Efficiency |
| JEL: | G33 |
| Date: | 2025–01–01 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126782 |
| By: | Sumin Kim; Minjae Kim; Jihoon Kwon; Yoon Kim; Nicole Kagan; Joo Won Lee; Oscar Levy; Alejandro Lopez-Lira; Yongjae Lee; Chanyeol Choi |
| Abstract: | Prediction markets provide a unique setting where event-level time series are directly tied to natural-language descriptions, yet discovering robust lead-lag relationships remains challenging due to spurious statistical correlations. We propose a hybrid two-stage causal screener to address this challenge: (i) a statistical stage that uses Granger causality to identify candidate leader-follower pairs from market-implied probability time series, and (ii) an LLM-based semantic stage that re-ranks these candidates by assessing whether the proposed direction admits a plausible economic transmission mechanism based on event descriptions. Because causal ground truth is unobserved, we evaluate the ranked pairs using a fixed, signal-triggered trading protocol that maps relationship quality into realized profit and loss (PnL). On Kalshi Economics markets, our hybrid approach consistently outperforms the statistical baseline. Across rolling evaluations, the win rate increases from 51.4% to 54.5%. Crucially, the average magnitude of losing trades decreases substantially from 649 USD to 347 USD. This reduction is driven by the LLM's ability to filter out statistically fragile links that are prone to large losses, rather than relying on rare gains. These improvements remain stable across different trading configurations, indicating that the gains are not driven by specific parameter choices. Overall, the results suggest that LLMs function as semantic risk managers on top of statistical discovery, prioritizing lead-lag relationships that generalize under changing market conditions. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.07048 |
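The statistical stage can be sketched with statsmodels' Granger causality test on a pair of market-implied probability series; the LLM re-ranking stage and the trading protocol are not reproduced. The two series below are simulated with a built-in two-day lead-lag so that the test has something to detect.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 300
leader = np.clip(0.5 + np.cumsum(rng.normal(0, 0.01, n)), 0.01, 0.99)          # market-implied probability
follower = np.clip(0.4 + 0.5 * np.roll(leader - 0.5, 2) + rng.normal(0, 0.02, n), 0.01, 0.99)
follower[:2] = 0.4  # overwrite the wrap-around values produced by np.roll

# Test whether 'leader' Granger-causes 'follower' (second column tested as the cause),
# using first differences of the probability series.
data = pd.DataFrame({"follower": np.diff(follower), "leader": np.diff(leader)})
results = grangercausalitytests(data[["follower", "leader"]], maxlag=3, verbose=False)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][0], res[0]["ssr_ftest"][1]
    print(f"lag {lag}: F = {fstat:.2f}, p-value = {pval:.4f}")
```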
| By: | Klinge, Jonathan (Center for Mathematical Economics, Bielefeld University); Schmeck, Maren Diane (Center for Mathematical Economics, Bielefeld University) |
| Abstract: | We study a dynamic model of a non-life insurance portfolio. The foundation of the model is a compound Poisson process that represents the claims side of the insurer. To introduce clusters of claims appearing, for example, with catastrophic events, this process is time-changed by a Lévy subordinator. The subordinator is chosen so that it evolves, on average, at the same speed as calendar time, creating a trade-off between intensity and severity. We show that such a transformation always has a negative impact on the probability of ruin. Despite the expected total claim amount remaining invariant, it turns out that the probability of ruin as a function of the initial capital can fall arbitrarily slowly, depending on the choice of the subordinator. |
| Keywords: | Cramér-Lundberg Model, Ruin-Theory, Subordination, Subexponential Distribution, Regular Variation |
| Date: | 2026–02–20 |
| URL: | https://d.repec.org/n?u=RePEc:bie:wpaper:765 |
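A Monte Carlo illustration of the time-change construction: finite-horizon ruin probabilities for a standard Cramér-Lundberg surplus process versus a variant whose claim arrivals run on a Gamma-subordinated clock with unit mean rate, so the expected total claim amount is unchanged while claims cluster. All parameters are hypothetical, and this simulation is not the paper's analytical argument.

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_prob(u0, nu=None, lam=1.0, mean_claim=1.0, premium=1.3, T=50.0,
              n_paths=10_000, n_steps=250):
    """Finite-horizon ruin probability by Monte Carlo.
    nu=None: classical Cramer-Lundberg model in calendar time.
    nu>0:    claim arrivals run on a Gamma-subordinated clock with unit mean rate,
             so the expected total claim amount is unchanged but claims cluster."""
    dt = T / n_steps
    surplus = np.full(n_paths, float(u0))
    ruined = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        if nu is None:
            dS = np.full(n_paths, dt)                               # deterministic operational time
        else:
            dS = rng.gamma(shape=dt / nu, scale=nu, size=n_paths)   # E[dS] = dt, Var[dS] = nu * dt
        n_claims = rng.poisson(lam * dS)
        # Sum of n exponential(mean_claim) claims is Gamma(n, mean_claim).
        claims = np.where(n_claims > 0, rng.gamma(n_claims.clip(min=1), mean_claim), 0.0)
        surplus += premium * dt - claims
        ruined |= surplus < 0
    return ruined.mean()

for u0 in (5.0, 10.0, 20.0):
    print(f"u0 = {u0:5.1f}   classical: {ruin_prob(u0):.3f}   subordinated: {ruin_prob(u0, nu=5.0):.3f}")
```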
| By: | Janosch Brenzel-Weiss; Winfried Koeniger; Arnau Valladares-Esteban |
| Abstract: | We calibrate a lifecycle portfolio-choice model of homeowners facing uninsurable income risk to show that tax deductions for mortgage interest payments and voluntary pension contributions have sizable effects on household portfolios and macroprudential risks. The deductions reduce the after-tax cost of debt and increase the after-tax return of pension savings so that the mortgage incidence increases and portfolios shift from home equity and liquid assets towards pension savings. Because the consumption responses to a house-price decline are heterogeneous, the distribution of household debt shapes the quantitative effect of the tax deductions on the homeowners' resilience after a house price bust. |
| Keywords: | mortgage amortization, tax incentives, household consumption, portfolio choice, housing busts, economic stability, macroprudential policy |
| JEL: | D14 D15 D31 E21 G11 G21 H24 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12436 |
| By: | Pieter Nel (Department of Economics, University of Pretoria); Renee van Eyden (Department of Economics, University of Pretoria) |
| Abstract: | Does media sentiment create artificial volatility, or do stock markets efficiently filter media sentiment as noise? This study tests these hypotheses using daily data (1994-2024) across the S&P 500, Dow Jones, and NASDAQ. Principal Component Analysis decomposes four uncertainty measures into fundamental uncertainty (PC1) and media-amplified supply sentiment (PC2). EGARCH modeling reveals that media sentiment mutes rather than amplifies volatility, contradicting behavioral finance predictions. Time-varying Granger causality tests suggest no causality from the uncertainty variables to volatility, but volatility does Granger-cause fundamental uncertainty. This asymmetric relationship demonstrates that information flows from stock markets to uncertainty sentiment, not from uncertainty sentiment to stock markets. These findings support the rational updating hypothesis, whereby investors observe volatility and correctly infer elevated uncertainty, rather than being misled by media sentiment. |
| Keywords: | Media sentiment, EGARCH modeling, Principal component analysis, Time-varying causality |
| JEL: | G41 C58 E44 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:pre:wpaper:202605 |
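The two building blocks, PCA on the uncertainty measures and EGARCH volatility modeling, can be sketched with scikit-learn and the arch package; the data below are simulated, and the paper's full specification (exogenous regressors, time-varying causality tests) is not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from arch import arch_model

rng = np.random.default_rng(0)
n = 2000

# Hypothetical daily data: four uncertainty measures sharing a common factor.
common = rng.normal(size=n)
uncertainty = pd.DataFrame({
    f"U{i}": 0.7 * common + 0.7 * rng.normal(size=n) for i in range(1, 5)
})

# PC1 stands in for fundamental uncertainty, PC2 for the orthogonal sentiment component.
pcs = PCA(n_components=2).fit_transform(
    (uncertainty - uncertainty.mean()) / uncertainty.std()
)

# EGARCH(1,1) with a leverage term on simulated daily returns (in percent).
returns = 100 * rng.standard_t(df=6, size=n) * 0.01
am = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="t")
fit = am.fit(disp="off")
print(fit.summary())
```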
| By: | Mourahib, Anas (Université catholique de Louvain, LIDAM/ISBA, Belgium); Kiriliouk, Anna (UNamur); Segers, Johan (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
| Abstract: | Estimating the parameters of max-stable parametric models poses significant challenges, particularly when some parameters lie on the boundary of the parameter space. This situation arises when a subset of variables exhibits extreme values simultaneously, while the remaining variables do not—a phenomenon referred to as an extreme direction in the literature. In this paper, we propose a novel estimator for the parameters of a general parametric mixture model, incorporating a penalization approach based on a pseudo-norm. This penalization plays a crucial role in accurately identifying parameters at the boundary of the parameter space. Additionally, our estimator comes with a data-driven algorithm to detect groups of variables corresponding to extreme directions. We assess the performance of our estimator in terms of both parameter estimation and the identification of extreme directions through extensive simulation studies. Finally, we apply our methods to data on river discharges and financial portfolio losses. |
| Date: | 2025–06–19 |
| URL: | https://d.repec.org/n?u=RePEc:aiz:louvad:2025015 |
| By: | Luke Morgan; Carlos Ramírez; André F. Silva; Andrei Zlate |
| Abstract: | During times of increased trade policy uncertainty and geopolitical tensions, supply chain disruptions can be an important source of instability. Due to the interconnected nature of modern economies, problems in one market can often ripple across others, triggering logistical bottlenecks and longer delivery times. |
| Date: | 2026–01–30 |
| URL: | https://d.repec.org/n?u=RePEc:fip:fedgfn:102442 |
| By: | Robben, Jens (University of Amsterdam); Barigou, Karim (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
| Abstract: | Accurate forecasts of weekly mortality are essential for public health and the insurance industry. We develop a forecasting framework that extends the Lee–Carter model with age- and region-specific seasonal effects and penalized distributed lag non-linear components that capture the delayed and non-linear effects of heat, cold, and influenza on mortality. The model accommodates overdispersed mortality rates via a negative binomial distribution. We model the temporal dynamics of the latent factors in the model using SARIMAX processes and capture cross-regional dependencies through a copula-based approach. Using regional French mortality data (1990–2019), we demonstrate that the proposed framework yields well-calibrated forecast distributions and improves predictive accuracy relative to benchmark models. The results further show substantial heterogeneity in temperature- and influenza-related relative risks between ages and regions. These findings underscore the importance of incorporating exogenous drivers and dependence structures into a weekly mortality forecasting framework. |
| Keywords: | Stochastic mortality modeling ; seasonal mortality ; distributed lag non-linear models ; excess mortality |
| Date: | 2025–09–29 |
| URL: | https://d.repec.org/n?u=RePEc:aiz:louvad:2025016 |
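The Lee-Carter backbone that the framework extends can be sketched via the classical SVD estimation on a matrix of log mortality rates (annual, single population, simulated data); the paper's weekly frequency, seasonal effects, distributed-lag terms, and copula dependence are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
ages = np.arange(0, 100)
years = np.arange(1990, 2020)

# Simulated log central death rates log m(x, t) with a downward time trend (hypothetical data).
true_ax = -9.0 + 0.09 * ages
true_bx = np.full(len(ages), 1.0 / len(ages))
true_kt = np.linspace(10, -10, len(years))
log_m = true_ax[:, None] + true_bx[:, None] * true_kt[None, :] + rng.normal(0, 0.02, (len(ages), len(years)))

# Classical Lee-Carter estimation: a_x is the age-specific mean of log rates,
# (b_x, k_t) come from the leading singular vectors of the centered matrix.
a_x = log_m.mean(axis=1)
centered = log_m - a_x[:, None]
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()            # normalize so that sum(b_x) = 1
k_t = S[0] * Vt[0, :] * U[:, 0].sum()    # rescale k_t so the product b_x * k_t is unchanged

print("estimated k_t (first and last years):", round(k_t[0], 2), round(k_t[-1], 2))
```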