on Econometrics
By: | Todd Prono |
Abstract: | In heavy-tailed cases, variance targeting the Student's-t estimator proposed in Bollerslev (1987) for the linear GARCH model is shown to be robust to density misspecification, just like the popular Quasi-Maximum Likelihood Estimator (QMLE). The resulting Variance-Targeted, Non-Gaussian, Quasi-Maximum Likelihood Estimator (VTNGQMLE) is shown to possess a stable limit, albeit one that is highly non-Gaussian, with an ill-defined variance. The rate of convergence to this non-standard limit is slow relative to √n and dependent upon unknown parameters. Fortunately, the sub-sample bootstrap is applicable, given a carefully constructed normalization. Surprisingly, both Monte Carlo experiments and empirical applications reveal VTNGQMLE to sizably outperform QMLE and other performance-enhancing (relative to QMLE) alternatives. In an empirical application, VTNGQMLE is applied to VIX (option-implied volatility of the S&P 500 Index). The resulting GARCH variance estimates are then used to forecast option-implied volatility of volatility (VVIX), thus demonstrating a link between historical volatility of VIX and risk-neutral volatility-of-volatility. |
Keywords: | GARCH; VIX; VVIX; Heavy tails; Robust estimation; Variance forecasting; Volatility; Volatility-of-volatility |
JEL: | C13 C22 C58 |
Date: | 2025–08–27 |
URL: | https://d.repec.org/n?u=RePEc:fip:fedgfe:2025-75 |
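The entry above centers on variance targeting a Student's-t quasi-likelihood for the linear GARCH model. As a rough illustration only, the sketch below combines the two ingredients for a GARCH(1, 1): the intercept is pinned down by the sample variance (the "target"), and the remaining parameters maximize a Student's-t quasi-log-likelihood with a fixed, arbitrary degrees-of-freedom choice. It is not the paper's estimator, normalization, or sub-sample bootstrap.

```python
# Hypothetical sketch: variance-targeted Student's-t quasi-likelihood for a GARCH(1, 1),
# sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}, where omega is replaced by
# sigmahat2*(1 - alpha - beta) so the unconditional variance hits the sample variance.
import numpy as np
from scipy import optimize, special

def neg_student_t_quasi_loglik(params, eps, nu=5.0):
    alpha, beta = params
    if min(alpha, beta) <= 0.0 or alpha + beta >= 1.0:
        return 1e10                              # keep the implied intercept positive
    sigmahat2 = np.mean(eps**2)                  # variance target
    omega = sigmahat2 * (1.0 - alpha - beta)     # implied intercept
    sigma2 = np.empty_like(eps)
    sigma2[0] = sigmahat2
    for t in range(1, eps.shape[0]):
        sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
    # Student's-t quasi-log-likelihood with nu degrees of freedom, scaled to unit variance
    c = special.gammaln((nu + 1) / 2) - special.gammaln(nu / 2) - 0.5 * np.log(np.pi * (nu - 2))
    ll = c - 0.5 * np.log(sigma2) - 0.5 * (nu + 1) * np.log1p(eps**2 / ((nu - 2) * sigma2))
    return -np.sum(ll)

def fit_vt_garch(eps, nu=5.0):
    res = optimize.minimize(neg_student_t_quasi_loglik, x0=np.array([0.05, 0.90]),
                            args=(eps, nu), method="Nelder-Mead")
    alpha, beta = res.x
    return alpha, beta                           # omega is recovered from the variance target
```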
By: | Wenze Li |
Abstract: | We propose a simple modification to the wild bootstrap procedure and establish its asymptotic validity for linear regression models with many covariates and heteroskedastic errors. Monte Carlo simulations show that the modified wild bootstrap has excellent finite sample performance compared with alternative methods that are based on standard normal critical values, especially when the sample size is small and/or the number of controls is of the same order of magnitude as the sample size. |
Date: | 2025–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2506.20972 |
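The paper above modifies the wild bootstrap for settings with many covariates; the modification itself is not reproduced here. Purely as a baseline reference, the following sketch shows the standard wild bootstrap with Rademacher weights for a single regression coefficient under heteroskedasticity.

```python
# Baseline wild bootstrap (Rademacher weights) for the t-statistic of coefficient j
# in a heteroskedastic linear regression; the many-covariates modification is omitted.
import numpy as np

def wild_bootstrap_pvalue(y, X, j=0, B=999, seed=0):
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)

    def hc0_se(e):
        V = XtX_inv @ ((X * (e**2)[:, None]).T @ X) @ XtX_inv   # HC0 sandwich variance
        return np.sqrt(V[j, j])

    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    t_hat = beta[j] / hc0_se(resid)

    t_star = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=n)                     # Rademacher multipliers
        y_star = X @ beta + resid * w
        beta_star = np.linalg.lstsq(X, y_star, rcond=None)[0]
        t_star[b] = (beta_star[j] - beta[j]) / hc0_se(y_star - X @ beta_star)
    return t_hat, np.mean(np.abs(t_star) >= np.abs(t_hat))      # symmetric bootstrap p-value
```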
By: | Fischer, Manfred M.; LeSage, James P. |
Abstract: | Spatial econometrics deals with econometric modeling in the presence of spatial dependence and heterogeneity, where observations correspond to specific spatial units such as points or regions. Traditional estimation techniques assume independent observations and are inadequate when spatial dependence exists. This article provides an overview of spatial econometric models, highlighting the challenges posed by spatial dependence in cross-sectional data. It examines key models, including the Spatial Autoregressive (SAR), Spatial Error (SEM), and Spatial Durbin (SDM) models, while detailing maximum likelihood estimation (MLE) techniques and computational advancements for handling large datasets. Alternative estimation approaches, such as the generalized method of moments, Bayesian methods, non-parametric locally linear models, and matrix exponential spatial models, are also discussed. The article explores methods applicable to continuous, dichotomous, and censored variables. Interpreting spatial regression model estimates correctly is crucial for drawing valid inferences. Distinguishing between direct, indirect (spillover), and total effects, together with careful specification of the spatial weight matrix, is essential. Misinterpretation can lead to flawed conclusions, undermining policy relevance – especially when assessing interventions with potential spillovers. By adhering to rigorous interpretation practices, researchers can fully leverage spatial regression models while mitigating analytical pitfalls. |
Keywords: | Bayesian methods; censored dependent models; cross-sectional models; generalized method of moments; marginal effects; matrix exponential spatial models; maximum likelihood; non-parametric locally linear models; spatial dependence; spillover effects |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:wiw:wus046:72854367 |
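As a concrete companion to the MLE discussion in the entry above, here is a minimal concentrated-likelihood estimator for the SAR model y = ρWy + Xβ + ε. It evaluates log|I − ρW| directly, which is only practical for moderate n; the computational devices the article surveys for large datasets are not shown.

```python
# Concentrated maximum likelihood for the SAR model y = rho*W y + X beta + eps:
# beta and sigma^2 are concentrated out, leaving a one-dimensional search over rho.
import numpy as np
from scipy import optimize

def sar_mle(y, X, W):
    """y: (n,) outcomes; X: (n, k) regressors; W: (n, n) spatial weight matrix."""
    n = y.shape[0]
    I = np.eye(n)

    def neg_conc_loglik(rho):
        Ay = (I - rho * W) @ y
        beta = np.linalg.lstsq(X, Ay, rcond=None)[0]
        e = Ay - X @ beta
        sigma2 = (e @ e) / n
        _, logdet = np.linalg.slogdet(I - rho * W)               # log|I - rho W|
        return -(logdet - 0.5 * n * np.log(sigma2))

    rho = optimize.minimize_scalar(neg_conc_loglik, bounds=(-0.99, 0.99), method="bounded").x
    Ay = (I - rho * W) @ y
    beta = np.linalg.lstsq(X, Ay, rcond=None)[0]
    return rho, beta
```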
By: | Matias D. Cattaneo; Jason M. Klusowski; Ruiqi Rae Yu |
Abstract: | Recursive decision trees have emerged as a leading methodology for heterogeneous causal treatment effect estimation and inference in experimental and observational settings. These procedures are fitted using the celebrated CART (Classification And Regression Tree) algorithm [Breiman et al., 1984], or custom variants thereof, and hence are believed to be "adaptive" to high-dimensional data, sparsity, or other specific features of the underlying data generating process. Athey and Imbens [2016] proposed several "honest" causal decision tree estimators, which have become the standard in both academia and industry. We study their estimators, and variants thereof, and establish lower bounds on their estimation error. We demonstrate that these popular heterogeneous treatment effect estimators cannot achieve a polynomial-in-$n$ convergence rate under basic conditions, where $n$ denotes the sample size. Contrary to common belief, honesty does not resolve these limitations and at best delivers negligible logarithmic improvements in sample size or dimension. As a result, these commonly used estimators can exhibit poor performance in practice, and even be inconsistent in some settings. Our theoretical insights are empirically validated through simulations. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.11381 |
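For readers unfamiliar with the "honest" construction whose limits the paper studies, the sketch below illustrates the sample-splitting idea: one half of the data grows a tree and a disjoint half estimates the treatment effect within each leaf. The splitting rule here is an ordinary regression tree used purely as a stand-in, not the Athey-Imbens causal criterion.

```python
# Illustrative honest estimation: fit the partition on one subsample, estimate leaf-level
# treatment effects (treated minus control means) on the other. Assumes numpy arrays and a
# binary treatment indicator d in {0, 1}; leaves lacking either group yield NaN.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

def honest_leaf_effects(X, y, d, max_depth=3, seed=0):
    Xa, Xb, ya, yb, _, db = train_test_split(X, y, d, test_size=0.5, random_state=seed)
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=seed).fit(Xa, ya)
    leaves = tree.apply(Xb)                      # leaf membership of the estimation sample
    effects = {}
    for leaf in np.unique(leaves):
        m = leaves == leaf
        treated, control = yb[m][db[m] == 1], yb[m][db[m] == 0]
        effects[leaf] = treated.mean() - control.mean()
    return tree, effects
```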
By: | Degui Li (University of Macau); Yuning Li (University of York); Peter C.B. Phillips (Yale University) |
Abstract: | This paper studies high-dimensional curve time series with common stochastic trends. A dual functional factor model structure is adopted with a high-dimensional factor model for the observed curve time series and a low-dimensional factor model for the latent curves with common trends. A functional PCA technique is applied to estimate the common stochastic trends and functional factor loadings. Under some regularity conditions we derive the mean square convergence and limit distribution theory for the developed estimates, allowing the dimension and sample size to jointly diverge to infinity. We propose an easy-to-implement criterion to consistently select the number of common stochastic trends and further discuss model estimation when the nonstationary factors are cointegrated. Extensive Monte-Carlo simulations and two empirical applications to large-scale temperature curves in Australia and log-price curves of S&P 500 stocks are conducted, showing finite-sample performance and providing practical implementations of the new methodology. |
Date: | 2025–09–15 |
URL: | https://d.repec.org/n?u=RePEc:cwl:cwldpp:2460 |
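A bare-bones version of the functional PCA step described above, after discretizing each curve on a common grid: the leading eigenvectors of a second-moment matrix give estimated loadings, and projections give the common (stochastic trend) factors. This illustrates the generic technique only, not the paper's exact estimator or its criterion for selecting the number of trends.

```python
# Discretized functional PCA for a panel of curves X[t, i, s]: T periods, N series, S grid points.
# With nonstationary (trending) data the second-moment matrix is used without demeaning.
import numpy as np

def functional_pca_factors(X, r):
    T, N, S = X.shape
    Z = X.reshape(T, N * S)                      # stack the discretized curves
    M = (Z.T @ Z) / T                            # (N*S, N*S) second-moment matrix
    eigval, eigvec = np.linalg.eigh(M)
    lead = np.argsort(eigval)[::-1][:r]          # indices of the r largest eigenvalues
    loadings = eigvec[:, lead]                   # discretized functional loadings
    factors = Z @ loadings                       # estimated common trend factors, (T, r)
    return factors, loadings.reshape(N, S, r)
```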
By: | Oliver Cassagneau-Francis (UCL Centre for Education Policy and Equalising Opportunities) |
Abstract: | Recent work has highlighted the significant variation in returns to higher education across individuals. I develop a novel methodology --- exploiting recent advances in the identification of mixture models --- which groups individuals according to their prior ability and estimates the wage returns to a university degree by group, and show that the model is non-parametrically identified. Applying the method to data from a UK cohort study, the findings reflect recent evidence that skills and ability are multidimensional. The flexible model allows the returns to university to vary across the (multi-dimensional) ability distribution, a flexibility missing from commonly used additive models, but which I show is empirically important. Returns are generally increasing in ability for both men and women, but vary non-monotonically across the ability distribution. |
Keywords: | Mixture models; Distributions; Treatment effects; Higher education; Wages; Human capital; Cognitive and non-cognitive abilities. |
JEL: | E24 I23 I26 J24 |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:ucl:cepeow:25-10 |
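In the spirit of the entry above (though far simpler than its non-parametrically identified mixture model), the sketch below groups individuals by a finite Gaussian mixture over multidimensional ability measures and then estimates a group-specific degree premium. All variable names and the choice of three groups are illustrative.

```python
# Hypothetical two-step illustration: (1) classify individuals into latent ability groups with a
# Gaussian mixture; (2) regress log wages on a degree dummy within each group. Assumes numpy arrays.
import numpy as np
from sklearn.mixture import GaussianMixture

def returns_by_ability_group(log_wage, degree, ability_measures, n_groups=3, seed=0):
    gm = GaussianMixture(n_components=n_groups, random_state=seed).fit(ability_measures)
    groups = gm.predict(ability_measures)
    premia = {}
    for g in range(n_groups):
        m = groups == g
        Xg = np.column_stack([np.ones(m.sum()), degree[m]])   # intercept + degree dummy
        coef, *_ = np.linalg.lstsq(Xg, log_wage[m], rcond=None)
        premia[g] = coef[1]                                   # group-specific return to a degree
    return premia, groups
```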
By: | Hyung Joo Kim; Dong Hwan Oh |
Abstract: | We propose a novel estimation framework for option pricing models that incorporates local, state-dependent information to improve out-of-sample forecasting performance. Rather than modifying the underlying option pricing model, such as the Heston-Nandi GARCH or the Heston stochastic volatility framework, we introduce a local M-estimation approach that conditions on key state variables including VIX, realized volatility, and time. Our method reweights historical observations based on their relevance to current market conditions, using kernel functions with bandwidths selected via a validation procedure. This adaptive estimation improves the model’s responsiveness to evolving dynamics while maintaining tractability. Empirically, we show that local estimators substantially outperform traditional non-local approaches in forecasting near-term option implied volatilities. The improvements are particularly pronounced in low-volatility environments and across the cross-section of options. The local estimators also outperform the non-local estimators in explaining future option returns. Our findings suggest that local information, when properly incorporated into the estimation process, can enhance the accuracy and robustness of option pricing models. |
Keywords: | Local maximum likelihood; Implied volatility forecasting; Option pricing; Model misspecification |
JEL: | C14 C51 C53 C58 G13 |
Date: | 2025–08–27 |
URL: | https://d.repec.org/n?u=RePEc:fip:fedgfe:2025-76 |
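The core device in the entry above is reweighting historical observations by their proximity to the current state. A minimal version of that kernel weighting is sketched below; the loss function, the state variables, and the grid search are placeholders, and the paper's validation-based bandwidth selection is not shown.

```python
# Kernel-weighted ("local") M-estimation sketch: Gaussian product-kernel weights in the state
# variables (e.g., VIX, realized volatility, time) localize an arbitrary per-observation loss.
import numpy as np

def local_weights(states, current_state, bandwidths):
    """states: (T, d) historical states; current_state, bandwidths: length-d arrays."""
    z = (states - current_state) / bandwidths
    w = np.exp(-0.5 * np.sum(z**2, axis=1))
    return w / w.sum()

def local_m_estimate(theta_grid, loss_per_obs, states, current_state, bandwidths):
    """loss_per_obs(theta) returns the (T,) vector of losses; pick theta minimizing the weighted mean."""
    w = local_weights(states, current_state, bandwidths)
    objective = [np.dot(w, loss_per_obs(theta)) for theta in theta_grid]
    return theta_grid[int(np.argmin(objective))]
```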
By: | Francois-Michel Boire; Thibaut Duprey; Alexander Ueberfeldt |
Abstract: | This paper studies how financial shocks shape the distribution of output growth by introducing a quantile-augmented vector autoregression (QAVAR), which integrates quantile regressions into a structural VAR framework. The QAVAR preserves standard shock identification while delivering flexible, nonparametric forecasts of conditional moments and tail risk measures for gross domestic product (GDP). Applying the model to financial conditions and credit spread shocks, we find that adverse financial shocks worsen the downside risk to GDP growth significantly, while the median and upper percentiles respond more moderately. This underscores the importance of nonlinearities and heterogeneous tail dynamics in assessing macro-financial risks. |
Keywords: | Central bank research; Econometric and statistical methods; Financial markets; Financial stability; Monetary and financial indicators |
JEL: | C32 C53 E32 E44 G01 |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:bca:bocawp:25-25 |
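To make the quantile-augmented idea concrete, the sketch below recovers orthogonalized shocks from an ordinary VAR and then runs quantile regressions of future GDP growth on a chosen shock, tracing out lower-tail versus median responses. The Cholesky identification, lag length, and horizons are illustrative stand-ins, not the paper's QAVAR specification.

```python
# Illustrative quantile projections on a VAR-identified shock. Assumes `data` is a pandas
# DataFrame of stationary series including a GDP growth column and the financial variable.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

def quantile_shock_responses(data, shock_col, gdp_col, horizons=(1, 4),
                             quantiles=(0.1, 0.5, 0.9), lags=4):
    fit = VAR(data).fit(lags)
    u = fit.resid.values
    P = np.linalg.cholesky(np.cov(u, rowvar=False))            # recursive identification
    shocks = u @ np.linalg.inv(P).T                            # orthogonalized shocks
    s = shocks[:, data.columns.get_loc(shock_col)]
    g = data[gdp_col].values[lags:]                            # growth aligned with residuals
    out = {}
    for h in horizons:
        y, x = g[h:], sm.add_constant(s[:-h])
        out[h] = {q: sm.QuantReg(y, x).fit(q=q).params[1] for q in quantiles}
    return out                                                  # slope of each quantile on the shock
```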
By: | John T. Rickard; William A. Dembski; James Rickards |
Abstract: | Bayesian inference is widely used in many different fields to test hypotheses against observations. In most such applications, an assumption is made of precise input values to produce a precise output value. However, this is unrealistic for real-world applications. Often the best available information from subject matter experts (SMEs) in a given field is interval range estimates of the input probabilities involved in Bayes Theorem. This paper provides two key contributions to extend Bayes Theorem to an interval type-2 (IT2) version. First, we develop an IT2 version of Bayes Theorem that uses a novel and conservative method to avoid potential inconsistencies in the input IT2 membership functions (MFs) that otherwise might produce invalid output results. We then describe a novel and flexible algorithm for encoding SME-provided intervals into IT2 fuzzy MFs, which we can use to specify the input probabilities in Bayes Theorem. Our algorithm generalizes and extends previous work on this problem that primarily addressed the encoding of intervals into word MFs for Computing with Words applications. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.08834 |
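A much-reduced illustration of interval-valued Bayesian updating for a single hypothesis: because the posterior p·l1 / (p·l1 + (1−p)·l0) is increasing in the prior p and in P(E|H) and decreasing in P(E|¬H), interval bounds follow from evaluating at the appropriate endpoints. This is plain interval arithmetic, not the paper's interval type-2 fuzzy construction with membership functions.

```python
# Interval Bayes for one hypothesis H and evidence E, with (lo, hi) probability intervals.
def interval_bayes(prior, lik_h, lik_not_h):
    def posterior(p, l1, l0):
        return p * l1 / (p * l1 + (1.0 - p) * l0)
    lo = posterior(prior[0], lik_h[0], lik_not_h[1])   # least favorable endpoints for H
    hi = posterior(prior[1], lik_h[1], lik_not_h[0])   # most favorable endpoints for H
    return lo, hi

# Example: prior in [0.2, 0.4], P(E|H) in [0.6, 0.8], P(E|not H) in [0.1, 0.3]
# gives posterior bounds of roughly (0.33, 0.84).
```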
By: | Eiji Goto; Jan P.A.M. Jacobs; Simon van Norden |
Abstract: | We investigate the causes of changing productivity growth trend perceptions using a novel state-space framework for statistically efficient estimation of growth trends in the presence of data revision. Uncertainty around contemporary US productivity growth trends has been exacerbated by data revisions that typically occur several years after the initial data release, as well as by publication lags. However, the largest source of revisions in perceived trends comes from future realizations of productivity growth. This underlines the importance of estimation uncertainty in estimates of trend productivity growth. |
Keywords: | productivity, real-time data, news, trend-cycle decomposition |
JEL: | C32 C51 E6 E24 O47 |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:een:camaaa:2025-53 |
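The trend-extraction backbone of such an exercise is a state-space trend-plus-noise model estimated with the Kalman filter. The minimal local-level sketch below conveys that step only; the paper's treatment of data revisions across vintages and publication lags is not reproduced.

```python
# Local-level (random-walk trend plus noise) model for productivity growth via statsmodels.
import statsmodels.api as sm

def trend_growth_estimate(growth_series):
    model = sm.tsa.UnobservedComponents(growth_series, level="local level")
    res = model.fit(disp=False)
    return res.smoothed_state[0]     # smoothed trend estimate at each date
```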
By: | Hao Wang; Jingshu Peng; Yanyan Shen; Xujia Li; Lei Chen |
Abstract: | Stock recommendation is critical in Fintech applications, which use price series and alternative information to estimate future stock performance. Although deep learning models are prevalent in stock recommendation systems, traditional time-series forecasting training often fails to capture stock trends and rankings simultaneously, which are essential consideration factors for investors. To tackle this issue, we introduce a Multi-Task Learning (MTL) framework for stock recommendation, Momentum-integrated Multi-task Stock Recommendation with Converge-based Optimization (MiM-StocR). To improve the model's ability to capture short-term trends, we incorporate a momentum line indicator into model training. To prioritize top-performing stocks and optimize investment allocation, we propose a list-wise ranking loss function called Adaptive-k ApproxNDCG. Moreover, due to the volatility and uncertainty of the stock market, existing MTL frameworks face overfitting issues when applied to stock time series. To mitigate this issue, we introduce the Converge-based Quad-Balancing (CQB) method. We conducted extensive experiments on three stock benchmarks: SEE50, CSI 100, and CSI 300. MiM-StocR outperforms state-of-the-art MTL baselines across both ranking and profitability evaluations. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.10461 |
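The list-wise objective named above builds on the ApproxNDCG idea of replacing hard ranks with sigmoid-based soft ranks. The PyTorch sketch below implements that generic smooth NDCG loss; the paper's Adaptive-k variant, the momentum indicator, and the Converge-based Quad-Balancing scheme are not reproduced.

```python
# Smooth ApproxNDCG-style loss: the soft rank of item i is 1 + sum_j sigmoid((s_j - s_i)/tau).
import torch

def approx_ndcg_loss(scores, relevance, temperature=0.1):
    """scores, relevance: 1-D tensors over the candidate stocks for one day."""
    diff = (scores.unsqueeze(0) - scores.unsqueeze(1)) / temperature   # [i, j] = s_j - s_i
    soft_rank = 1.0 + torch.sigmoid(diff).sum(dim=1) - 0.5             # drop the self term sigmoid(0)
    gains = 2.0 ** relevance - 1.0
    dcg = (gains / torch.log2(1.0 + soft_rank)).sum()
    ideal_rank = torch.arange(1, relevance.numel() + 1, dtype=scores.dtype)
    idcg = (gains.sort(descending=True).values / torch.log2(1.0 + ideal_rank)).sum()
    return 1.0 - dcg / idcg        # minimizing the loss pushes the soft NDCG toward 1
```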
By: | Stéphane Lhuissier |
Abstract: | I propose a dynamic factor model with time-varying skewness to assess asymmetric risk around the economic outlook across a set of macroeconomic aggregates. Applied to U.S. data, the model shows that macroeconomic skewness is procyclical, displays significant independent variations from GDP growth skewness, and does not require conditioning on financial variables to manifest. Compared to univariate benchmarks, the model improves the detection of downside risk to growth and delivers more accurate predictive distributions, especially during downturns. These findings underscore the value of using a richer information set to quantify the balance of macroeconomic risks. |
Keywords: | Dynamic Factor Models, Markov-Switching, Skewness |
JEL: | C34 C38 C53 E37 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:bfr:banfra:1004 |
By: | Sagi Schwartz; Qinling Wang; Fang Fang |
Abstract: | Predicting default is essential for banks to ensure profitability and financial stability. While modern machine learning methods often outperform traditional regression techniques, their lack of transparency limits their use in regulated environments. Explainable artificial intelligence (XAI) has emerged as a solution in domains like credit scoring. However, most XAI research focuses on post-hoc interpretation of black-box models, which does not produce models lightweight or transparent enough to meet regulatory requirements, such as those for Internal Ratings-Based (IRB) models. This paper proposes a hybrid approach: post-hoc interpretations of black-box models guide feature selection, followed by training glass-box models that maintain both predictive power and transparency. Using the Lending Club dataset, we demonstrate that this approach achieves performance comparable to a benchmark black-box model while using only 10 features - an 88.5% reduction. In our example, SHapley Additive exPlanations (SHAP) is used for feature selection, eXtreme Gradient Boosting (XGBoost) serves as the benchmark and the base black-box model, and Explainable Boosting Machine (EBM) and Penalized Logistic Tree Regression (PLTR) are the investigated glass-box models. We also show that model refinement using feature interaction analysis, correlation checks, and expert input can further enhance model interpretability and robustness. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.11389 |
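A compact rendering of the hybrid workflow described above: a black-box booster is fitted, features are ranked by mean absolute SHAP value, and a transparent model is refit on the top-k features. A plain logistic regression stands in here for the paper's glass-box learners (EBM, PLTR), and all hyperparameters are placeholders.

```python
# SHAP-guided feature selection followed by a glass-box refit. Assumes numpy arrays X, y.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression

def shap_guided_glass_box(X, y, k=10):
    black_box = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)
    shap_values = shap.TreeExplainer(black_box).shap_values(X)
    importance = np.abs(shap_values).mean(axis=0)        # mean |SHAP| per feature
    top_k = np.argsort(importance)[::-1][:k]
    glass_box = LogisticRegression(max_iter=1000).fit(X[:, top_k], y)
    return glass_box, top_k
```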
By: | Jirong Zhuang; Xuan Wu |
Abstract: | Constructing the implied volatility surface (IVS) is reframed as a meta-learning problem: training across trading days yields a general process that reconstructs a full IVS from a few quotes, eliminating daily recalibration. We introduce the Volatility Neural Process, an attention-based model trained in two stages: pre-training on SABR-generated surfaces to encode a financial prior, followed by fine-tuning on market data. On S&P 500 options (2006-2023; out-of-sample 2019-2023), our model outperforms SABR, SSVI, Gaussian Process, and an ablation trained only on real data. Relative to the ablation, the SABR-induced prior reduces RMSE by about 40% and dominates in mid- and long-maturity regions where quotes are sparse. The learned prior suppresses large errors, providing a practical, data-efficient route to stable IVS construction with a single deployable model. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.11928 |
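The two-stage training described above (pre-training on synthetic SABR surfaces to instill a prior, then fine-tuning on market data) reduces to a simple loop once the model and the data samplers are given. The skeleton below is a generic PyTorch rendering; `model`, `sample_synthetic_batch`, and `sample_market_batch` are hypothetical placeholders, not the paper's attention architecture or data pipeline.

```python
# Generic two-stage (pre-train, then fine-tune) loop for a surface-reconstruction model that
# maps a few observed quotes (context) to implied volatilities at target grid points.
import torch

def two_stage_training(model, sample_synthetic_batch, sample_market_batch,
                       pretrain_steps=10000, finetune_steps=2000, lr=1e-4):
    loss_fn = torch.nn.MSELoss()
    for steps, sampler, rate in [(pretrain_steps, sample_synthetic_batch, lr),
                                 (finetune_steps, sample_market_batch, lr / 10)]:
        opt = torch.optim.Adam(model.parameters(), lr=rate)
        for _ in range(steps):
            context, target_points, target_vols = sampler()
            pred = model(context, target_points)
            loss = loss_fn(pred, target_vols)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```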