nep-ecm New Economics Papers
on Econometrics
Issue of 2025–11–03
23 papers chosen by
Sune Karlsson, Örebro universitet


  1. Mixed LR-$C(\alpha)$-type tests for irregular hypotheses, general criterion functions and misspecified models By Jean-Marie Dufour; Purevdorj Tuvaandorj
  2. Identification and Debiased Learning of Causal Effects with General Instrumental Variables By Shuyuan Chen; Peng Zhang; Yifan Cui
  3. Causal Inference in High-Dimensional Generalized Linear Models with Binary Outcomes By Jing Kong
  4. Local Overidentification and Efficiency Gains in Modern Causal Inference and Data Combination By Xiaohong Chen; Haitian Xie
  5. On the Asymptotics of the Minimax Linear Estimator By Jing Kong
  6. Prediction Intervals for Model Averaging By Zhongjun Qu; Wendun Wang; Xiaomeng Zhang
  7. SLIM: Stochastic Learning and Inference in Overidentified Models By Xiaohong Chen; Min Seong Kim; Sokbae Lee; Myung Hwan Seo; Myunghyun Song
  8. Equilibrium-Constrained Estimation of Recursive Logit Choice Models By Hung Tran; Tien Mai; Minh Hoang Ha
  9. Centered MA Dirichlet ARMA for Financial Compositions: Theory & Empirical Evidence By Harrison Katz
  10. A high-frequency approach to Realized Risk Measures By Federico Gatta; Fabrizio Lillo; Piero Mazzarisi
  11. Distributional regression for seasonal data: an application to river flows By Samuel Perreault; Silvana M. Pesenti; Daniyal Shahzad
  12. Denoising Complex Covariance Matrices with Hybrid ResNet and Random Matrix Theory: Cryptocurrency Portfolio Applications By Andres Garcia-Medina
  13. Parameter Inference for Structural System Identification Based on Static State Estimation By Alahmad, Ahmad; Mínguez Solana, Roberto; Porras Soriano, Rocío; Lozano Galant, José Antonio; Turmo, José
  14. Evaluating Local Policies in Centralized Markets By Dmitry Arkhangelsky; Wisse Rutgers
  15. Optimized Multi-Level Monte Carlo Parametrization and Antithetic Sampling for Nested Simulations By Alexandre Boumezoued; Adel Cherchali; Vincent Lemaire; Gilles Pagès; Mathieu Truc
  16. Parameter Proliferation in Nowcasting: Issues and Approaches—An Application to Nowcasting China’s Real GDP By Mr. Paul Cashin; Mr. Fei Han; Ivy Sabuga; Jing Xie; Fan Zhang
  17. An Empirical Framework for Discrete Games with Costly Information Acquisition By Youngjae Jeong
  18. LLM Survey Framework: Coverage, Reasoning, Dynamics, Identification By Jing Cynthia Wu; Jin Xi; Shihan Xie
  19. Estimating the New Keynesian Phillips Curve (NKPC) with Fat-tailed Events By Kaustubh; Gopalakrishnan, Pawan; Ranjan, Abhishek
  20. Bayesian Analysis of Business Cycles in Japan by Extending the Markov Switching Model By WATANABE, Toshiaki
  21. Robust Yield Curve Estimation for Mortgage Bonds Using Neural Networks By Sina Molavipour; Alireza M. Javid; Cassie Ye; Björn Löfdahl; Mikhail Nechaev
  22. Testing Most Influential Sets By Lucas Darius Konrad; Nikolas Kuschnig
  23. Latent class models with persistence in regime changes: a distributed lag analysis By Luis Orea; K Hervé Dakpo

  1. By: Jean-Marie Dufour; Purevdorj Tuvaandorj
    Abstract: This paper introduces a likelihood ratio (LR)-type test that possesses the robustness properties of \(C(\alpha)\)-type procedures in an extremum estimation setting. The test statistic is constructed by applying separate adjustments to the restricted and unrestricted criterion functions, and is shown to be asymptotically pivotal under minimal conditions. It features two main robustness properties. First, unlike standard LR-type statistics, its null asymptotic distribution remains chi-square even under model misspecification, where the information matrix equality fails. Second, it accommodates irregular hypotheses involving constrained parameter spaces, such as boundary parameters, relying solely on root-\(n\)-consistent estimators for nuisance parameters. When the model is correctly specified, no boundary constraints are present, and parameters are estimated by extremum estimators, the proposed test reduces to the standard LR-type statistic. Simulations with ARCH models, where volatility parameters are constrained to be nonnegative, and parametric survival regressions with potentially monotone increasing hazard functions, demonstrate that our test maintains accurate size and exhibits good power. An empirical application to a two-way error components model shows that the proposed test can provide more informative inference than the conventional \(t\)-test.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.17070
  2. By: Shuyuan Chen; Peng Zhang; Yifan Cui
    Abstract: Instrumental variable methods are fundamental to causal inference when treatment assignment is confounded by unobserved variables. In this article, we develop a general nonparametric framework for identification and learning with multi-categorical or continuous instrumental variables. Specifically, we propose an additive instrumental variable framework to identify mean potential outcomes and the average treatment effect with a weighting function. Leveraging semiparametric theory, we derive efficient influence functions and construct consistent, asymptotically normal estimators via debiased machine learning. Extensions to longitudinal data, dynamic treatment regimes, and multiplicative instrumental variables are further developed. We demonstrate the proposed method by employing simulation studies and analyzing real data from the Job Training Partnership Act program.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.20404
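    A minimal Python sketch of the cross-fitting pattern behind the debiased machine learning step above, using the standard partially linear IV orthogonal moment rather than the paper's general weighting-function estimator (the learner choice and fold count are placeholders):
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold

      def dml_pliv(Y, D, Z, X, n_folds=5, seed=0):
          """Cross-fitted partially linear IV: theta from orthogonalized moments."""
          n = len(Y)
          res = {"Y": np.zeros(n), "D": np.zeros(n), "Z": np.zeros(n)}
          raw = {"Y": Y, "D": D, "Z": Z}
          for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
              for key in res:
                  m = RandomForestRegressor(random_state=seed).fit(X[train], raw[key][train])
                  res[key][test] = raw[key][test] - m.predict(X[test])  # cross-fitted residual
          denom = np.mean(res["D"] * res["Z"])
          theta = np.mean(res["Y"] * res["Z"]) / denom
          psi = res["Z"] * (res["Y"] - theta * res["D"]) / denom        # orthogonal score
          return theta, psi.std(ddof=1) / np.sqrt(n)                    # estimate, std. error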
  3. By: Jing Kong
    Abstract: This paper proposes a debiased estimator for causal effects in high-dimensional generalized linear models with binary outcomes and general link functions. The estimator augments a regularized regression plug-in with weights computed from a convex optimization problem that approximately balances link-derivative-weighted covariates and controls variance; it does not rely on estimated propensity scores. Under standard conditions, the estimator is $\sqrt{n}$-consistent and asymptotically normal for dense linear contrasts and causal parameters. Simulation results show the superior performance of our approach in comparison to alternatives such as inverse propensity score estimators and double machine learning estimators in finite samples. In an application to the National Supported Work training data, our estimates and confidence intervals are close to the experimental benchmark.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16669
  4. By: Xiaohong Chen; Haitian Xie
    Abstract: This paper studies nonparametric local (over-)identification, in the sense of Chen and Santos (2018), and the associated semiparametric efficiency in modern causal frameworks. We develop a unified approach that begins by translating structural models with latent variables into their induced statistical models of observables and then analyzes local overidentification through conditional moment restrictions. We apply this approach to three leading models: (i) the general treatment model under unconfoundedness, (ii) the negative control model, and (iii) the long-term causal inference model under unobserved confounding. The first design yields a locally just-identified statistical model, implying that all regular asymptotically linear estimators of the treatment effect share the same asymptotic variance, equal to the (trivial) semiparametric efficiency bound. In contrast, the latter two models involve nonparametric endogeneity and are naturally locally overidentified; consequently, some doubly robust orthogonal moment estimators of the average treatment effect are inefficient. Whereas existing work typically imposes strong conditions to restore just-identification before deriving the efficiency bound, we relax such assumptions and characterize the general efficiency bound, along with efficient estimators, in the overidentified models (ii) and (iii).
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16683
  5. By: Jing Kong
    Abstract: Many causal estimands, such as average treatment effects under unconfoundedness, can be written as continuous linear functionals of an unknown regression function. We study a weighting estimator that sets weights by a minimax procedure: solving a convex optimization problem that trades off worst-case conditional bias against variance. Despite its growing use, general root-$n$ theory for this method has been limited. This paper fills that gap. Under regularity conditions, we show that the minimax linear estimator is root-$n$ consistent and asymptotically normal, and we derive its asymptotic variance. These results justify ignoring worst-case bias when forming large-sample confidence intervals and make inference less sensitive to the scaling of the function class. With a mild variance condition, the estimator attains the semiparametric efficiency bound, so an augmentation step commonly used in the literature is not needed to achieve first-order optimality. Evidence from simulations and three empirical applications, including job-training and minimum-wage policies, points to a simple rule: in designs satisfying our regularity conditions, standard-error confidence intervals suffice; otherwise, bias-aware intervals remain important.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16661
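    A small numpy sketch of one finite-dimensional instance of the minimax weighting program above: when the regression function is restricted to a basis expansion with bounded coefficients, the worst-case-bias/variance trade-off has a closed-form ridge-type solution (the basis B, bound C, and noise level sigma2 are assumptions):
      import numpy as np

      def minimax_weights(B, ell, C=1.0, sigma2=1.0):
          # B: (n, p) basis functions evaluated at the data points;
          # ell: (p,) the target linear functional applied to each basis function.
          # Worst-case bias over ||beta|| <= C is C * ||B.T w - ell||, so we minimize
          # C**2 * ||B.T w - ell||**2 + sigma2 * ||w||**2 in closed form.
          n = B.shape[0]
          A = C ** 2 * (B @ B.T) + sigma2 * np.eye(n)
          w = np.linalg.solve(A, C ** 2 * (B @ ell))
          return w  # plug-in estimate of the functional: w @ Y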
  6. By: Zhongjun Qu; Wendun Wang; Xiaomeng Zhang
    Abstract: A rich set of frequentist model averaging methods has been developed, but their applications have largely been limited to point prediction, as measuring prediction uncertainty in general settings remains an open problem. In this paper we propose prediction intervals for model averaging based on conformal inference. These intervals cover out-of-sample realizations of the outcome variable with a pre-specified probability, providing a way to assess predictive uncertainty beyond point prediction. The framework allows general model misspecification and applies to averaging across multiple models that can be nested, disjoint, overlapping, or any combination thereof, with weights that may depend on the estimation sample. We establish coverage guarantees under two sets of assumptions: exact finite-sample validity under exchangeability, relevant for cross-sectional data, and asymptotic validity under stationarity, relevant for time-series data. We first present a benchmark algorithm and then introduce a locally adaptive refinement and split-sample procedures that broaden applicability. The methods are illustrated with a cross-sectional application to real estate appraisal and a time-series application to equity premium forecasting.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16224
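    The benchmark algorithm's core is split-conformal calibration around the model-averaged point forecast; a hedged sketch (any estimators with a fit/predict interface, and weights computed on the training split, are placeholders for the paper's procedures):
      import numpy as np

      def conformal_ma_interval(models, weights, X_tr, y_tr, X_cal, y_cal, x_new, alpha=0.1):
          fitted = [m.fit(X_tr, y_tr) for m in models]       # fit on the proper training split
          avg = lambda X: sum(w * m.predict(X) for w, m in zip(weights, fitted))
          scores = np.sort(np.abs(y_cal - avg(X_cal)))       # calibration residuals
          k = min(int(np.ceil((1 - alpha) * (len(scores) + 1))) - 1, len(scores) - 1)
          q = scores[k]                                      # conformal quantile
          yhat = avg(np.atleast_2d(x_new))[0]
          return yhat - q, yhat + q                          # covers with prob. >= 1 - alpha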
  7. By: Xiaohong Chen; Min Seong Kim; Sokbae Lee; Myung Hwan Seo; Myunghyun Song
    Abstract: We propose SLIM (Stochastic Learning and Inference in overidentified Models), a scalable stochastic approximation framework for nonlinear GMM. SLIM forms iterative updates from independent mini-batches of moments and their derivatives, producing unbiased directions that ensure almost-sure convergence. It requires neither a consistent initial estimator nor global convexity and accommodates both fixed-sample and random-sampling asymptotics. We further develop an optional second-order refinement and inference procedures based on random scaling and on plug-in, debiased plug-in, and online versions of the Sargan–Hansen $J$-test tailored to stochastic learning. In Monte Carlo experiments based on a nonlinear EASI demand system with 576 moment conditions, 380 parameters, and $n = 10^5$, SLIM solves the model in under 1.4 hours, whereas full-sample GMM in Stata on a powerful laptop converges only after 18 hours. The debiased plug-in $J$-test delivers satisfactory finite-sample inference, and SLIM scales smoothly to $n = 10^6$.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.20996
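    A schematic Python sketch of the kind of mini-batch recursion SLIM builds on, with independent batches for the moments and their derivatives so the update direction is unbiased (the step-size schedule and the user-supplied moment functions g and G are assumptions, not the paper's exact algorithm):
      import numpy as np

      def slim_sketch(g, G, data, theta0, W, n_steps=10_000, batch=256, seed=0):
          # g(theta, rows) -> (b, m) moments on a mini-batch;
          # G(theta, rows) -> (m, p) average Jacobian of the moments on a batch
          rng = np.random.default_rng(seed)
          theta, avg = theta0.astype(float), theta0.astype(float)
          for t in range(1, n_steps + 1):
              idx = rng.choice(len(data), size=batch, replace=False)
              b1, b2 = data[idx[: batch // 2]], data[idx[batch // 2:]]
              gbar = g(theta, b1).mean(axis=0)        # moments from one batch
              direction = G(theta, b2).T @ W @ gbar   # derivatives from an independent batch
              theta = theta - direction / t ** 0.75   # Robbins-Monro step
              avg = avg + (theta - avg) / t           # Polyak-Ruppert averaging
          return avg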
  8. By: Hung Tran; Tien Mai; Minh Hoang Ha
    Abstract: The recursive logit (RL) model provides a flexible framework for modeling sequential decision-making in transportation and choice networks, with important applications in route choice analysis, multiple discrete choice problems, and activity-based travel demand modeling. Despite its versatility, estimation of the RL model typically relies on nested fixed-point (NFXP) algorithms that are computationally expensive and prone to numerical instability. We propose a new approach that reformulates the maximum likelihood estimation problem as an optimization problem with equilibrium constraints, where both the structural parameters and the value functions are treated as decision variables. We further show that this formulation can be equivalently transformed into a conic optimization problem with exponential cones, enabling efficient solution using modern conic solvers such as MOSEK. Experiments on synthetic and real-world datasets demonstrate that our convex reformulation achieves accuracy comparable to traditional methods while offering significant improvements in computational stability and efficiency, thereby providing a practical and scalable alternative for recursive logit model estimation.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16886
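    For context, the inner fixed point that NFXP estimation must re-solve at every trial parameter value, and that the reformulation above avoids, is the log-sum-exp value-function recursion of the RL model; a toy sketch (network inputs are placeholders, and every node is assumed to reach the destination):
      import numpy as np
      from scipy.special import logsumexp

      def rl_value_function(util, dest, tol=1e-10, max_iter=10_000):
          # util: (n, n) arc utilities, -inf where no arc; dest: absorbing destination
          V = np.zeros(util.shape[0])
          for _ in range(max_iter):
              V_new = logsumexp(util + V[None, :], axis=1)   # expected downstream utility
              V_new[dest] = 0.0                              # terminal condition
              if np.max(np.abs(V_new - V)) < tol:
                  break
              V = V_new
          return V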
  9. By: Harrison Katz
    Abstract: Observation-driven Dirichlet models for compositional time series often use the additive log-ratio (ALR) link and include a moving-average (MA) term built from ALR residuals. In the standard B-DARMA recursion, the usual MA regressor $\alr(\mathbf{Y}_t)-\boldsymbol{\eta}_t$ has nonzero conditional mean under the Dirichlet likelihood, which biases the mean path and blurs the interpretation of MA coefficients. We propose a minimal change: replace the raw regressor with a centered innovation $\boldsymbol{\epsilon}_t^{\circ}=\alr(\mathbf{Y}_t)-\mathbb{E}\{\alr(\mathbf{Y}_t)\mid \boldsymbol{\eta}_t, \phi_t\}$, computable in closed form via digamma functions. Centering restores mean-zero innovations for the MA block without altering either the likelihood or the ALR link. We provide simple identities for the conditional mean and the forecast recursion, show first-order equivalence to a digamma-link DARMA while retaining a closed-form inverse to $\boldsymbol{\mu}_t$, and give ready-to-use code. A weekly application to the Federal Reserve H.8 bank-asset composition compares the original (raw-MA) and centered specifications under a fixed holdout and rolling one-step origins. The centered formulation improves log predictive scores with essentially identical point error and markedly cleaner Hamiltonian Monte Carlo diagnostics.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.18903
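    The centering step is fully explicit: for $\mathbf{Y}_t \sim$ Dirichlet$(\boldsymbol{\alpha}_t)$, $\mathbb{E}[\log Y_j] = \psi(\alpha_j) - \psi(\sum_k \alpha_k)$, so each ALR coordinate has conditional mean $\psi(\alpha_j) - \psi(\alpha_D)$. A sketch under a mean-precision parameterization (assumed here):
      import numpy as np
      from scipy.special import digamma

      def centered_alr_innovation(y, eta, phi):
          # y: observed composition (D,); eta: ALR-scale linear predictor (D-1,);
          # phi: Dirichlet precision, so alpha = phi * inverse_alr(eta)
          mu = np.exp(np.append(eta, 0.0))
          mu /= mu.sum()                                       # inverse-ALR mean composition
          alpha = phi * mu
          alr_y = np.log(y[:-1]) - np.log(y[-1])
          mean_alr = digamma(alpha[:-1]) - digamma(alpha[-1])  # E[alr(Y) | eta, phi]
          return alr_y - mean_alr                              # mean-zero MA regressor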
  10. By: Federico Gatta; Fabrizio Lillo; Piero Mazzarisi
    Abstract: We propose a new approach, termed Realized Risk Measures (RRM), to estimate Value-at-Risk (VaR) and Expected Shortfall (ES) using high-frequency financial data. It extends the Realized Quantile (RQ) approach proposed by Dimitriadis and Halbleib by lifting the assumption of return self-similarity, which displays some limitations in describing empirical data. More specifically, like the RQ, the RRM method transforms intra-day returns into intrinsic time using a subordinator process, in order to capture the inhomogeneity of trading activity and/or volatility clustering. Then, microstructural effects resulting in non-zero autocorrelation are filtered out using a suitable moving average process. Finally, a fat-tailed distribution is fitted to the cleaned intra-day returns. The return distribution at low frequency (daily) is then extrapolated via either a characteristic function approach or Monte Carlo simulations. VaR and ES are estimated as the quantile and the tail mean of the distribution, respectively. The proposed approach is benchmarked against the RQ through several experiments. Extensive numerical simulations and an empirical study on 18 US stocks show the outperformance of our method, both in terms of the in-sample estimated risk measures and in out-of-sample risk forecasting.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16526
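    A stripped-down sketch of the final two stages, fitting a fat-tailed law to intra-day returns and extrapolating to the daily horizon by Monte Carlo; the subordination to intrinsic time and the MA microstructure filter are omitted here, and the Student-t is one possible choice of fat-tailed family:
      import numpy as np
      from scipy import stats

      def realized_var_es(intraday_returns, n_intraday, alpha=0.01, n_sim=100_000, seed=0):
          r = np.asarray(intraday_returns, dtype=float)
          df, loc, scale = stats.t.fit(r)                    # fat-tailed fit to cleaned returns
          rng = np.random.default_rng(seed)
          daily = stats.t.rvs(df, loc, scale, size=(n_sim, n_intraday),
                              random_state=rng).sum(axis=1)  # Monte Carlo daily aggregation
          var = -np.quantile(daily, alpha)                   # VaR at level alpha
          es = -daily[daily <= -var].mean()                  # ES as the tail mean
          return var, es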
  11. By: Samuel Perreault; Silvana M. Pesenti; Daniyal Shahzad
    Abstract: Risk assessment in casualty insurance, such as flood risk, traditionally relies on extreme-value methods that emphasize rare events. These approaches are well suited for characterizing tail risk, but do not capture the broader dynamics of environmental variables, such as moderate or frequent loss events. To complement these methods, we propose a modelling framework for estimating the full (daily) distribution of environmental variables as a function of time, that is, a distributional version of typical climatological summary statistics, thereby incorporating both seasonal variation and gradual long-term changes. Aside from the time trend, to capture seasonal variation our approach simultaneously estimates the distribution for each instant of the seasonal cycle, without explicitly modelling the temporal dependence present in the data. To do so, we adopt a framework inspired by GAMLSS (Generalized Additive Models for Location, Scale, and Shape), where the parameters of the distribution vary over the seasonal cycle as a function of explanatory variables depending only on the time of year, and not on the past values of the process under study. Ignoring the temporal dependence in the seasonal variation greatly simplifies the modelling but poses inference challenges that we clarify and overcome. We apply our framework to daily river flow data from three hydrometric stations along the Fraser River in British Columbia, Canada, and analyse the flood of the Fraser River in early winter of 2021.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.18639
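    A minimal version of the idea, in the spirit of GAMLSS: let the location and log-scale of a Gaussian vary with harmonics of the day of year and fit by maximum likelihood, ignoring temporal dependence exactly as described above (the Gaussian family and two harmonics are simplifying assumptions):
      import numpy as np
      from scipy.optimize import minimize

      def fit_seasonal_gaussian(day_of_year, y, n_harmonics=2):
          t = 2 * np.pi * np.asarray(day_of_year) / 365.25
          X = np.column_stack([np.ones_like(t)] +
                              [f(k * t) for k in range(1, n_harmonics + 1)
                               for f in (np.sin, np.cos)])
          p = X.shape[1]
          def nll(beta):                                   # Gaussian negative log-likelihood
              mu, log_sigma = X @ beta[:p], X @ beta[p:]
              return np.sum(log_sigma + 0.5 * ((y - mu) / np.exp(log_sigma)) ** 2)
          res = minimize(nll, np.zeros(2 * p), method="BFGS")
          return res.x[:p], res.x[p:]                      # coefficients for mu and log sigma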
  12. By: Andres Garcia-Medina
    Abstract: Covariance matrices estimated from short, noisy, and non-Gaussian financial time series, particularly cryptocurrencies, are notoriously unstable. Empirical evidence indicates that these covariance structures often exhibit power-law scaling, reflecting complex and hierarchical interactions among assets. Building on this insight, we propose a power-law covariance model to characterize the collective dynamics of cryptocurrencies and develop a hybrid estimator that integrates Random Matrix Theory (RMT) with Residual Neural Networks (ResNets). The RMT component regularizes the eigenvalue spectrum under high-dimensional noise, while the ResNet learns data-driven corrections to recover latent structural dependencies. Monte Carlo simulations show that ResNet-based estimators consistently minimize both Frobenius and minimum-variance (MV) losses across diverse covariance models. Empirical experiments on 89 cryptocurrencies (2020-2025), using a training period ending at the local BTC maximum in November 2021 and testing through the subsequent bear market, demonstrate that a two-step estimator combining hierarchical filtering with ResNet corrections yields the most profitable and balanced portfolios, remaining robust under market regime shifts. These findings highlight the potential of combining RMT, deep learning, and power-law modeling to capture the intrinsic complexity of financial systems and enhance portfolio optimization under realistic conditions.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.19130
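    The RMT component corresponds to classical Marchenko-Pastur eigenvalue clipping, on top of which the ResNet learns corrections; a standard sketch of the clipping step alone:
      import numpy as np

      def mp_clip_covariance(returns):
          T, N = returns.shape
          C = np.corrcoef(returns, rowvar=False)           # correlation matrix
          lam, V = np.linalg.eigh(C)
          lam_max = (1 + np.sqrt(N / T)) ** 2              # Marchenko-Pastur upper edge
          noise = lam < lam_max
          lam[noise] = lam[noise].mean()                   # flatten the noise bulk (trace kept)
          C_clean = V @ np.diag(lam) @ V.T
          np.fill_diagonal(C_clean, 1.0)
          s = returns.std(axis=0, ddof=1)
          return C_clean * np.outer(s, s)                  # back to covariance scale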
  13. By: Alahmad, Ahmad; Mínguez Solana, Roberto; Porras Soriano, Rocío; Lozano Galant, José Antonio; Turmo, José
    Abstract: Building on previous work that introduced an observability analysis (OA) based on static state estimation (SSE) for structural system identification (SSI), this study extends SSE to perform parameter inference by augmenting the state vector with structural parameters in addition to conventional state variables. Performing OA beforehand ensures that the selected measurements enable unique and robust parameter recovery. The estimation problem is formulated as a weighted nonlinear least-squares optimization and solved through an iterative nonlinear process, with both structural analysis and parameter derivatives of the stiffness matrix obtained numerically using third-party finite element software. The framework unifies state and parameter estimation in a single formulation, enables the use of high-fidelity models for response and sensitivity calculations, and incorporates robust handling of heterogeneous measurements and faulty data. Numerical studies show accurate recovery of stiffness parameters under varied measurement layouts, uncertainty levels, and data quality, confirming the method's robustness for model updating and structural health monitoring.
    Keywords: Structural health monitoring; Structural system identification; State estimation; Parameter estimation; Observability analysis
    Date: 2025–10–21
    URL: https://d.repec.org/n?u=RePEc:cte:wsrepe:48232
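    The weighted nonlinear least-squares formulation leads to iterations of the familiar Gauss-Newton form; a generic sketch, with the response function h and its Jacobian standing in for the finite-element evaluations described above:
      import numpy as np

      def gauss_newton_step(x, z, h, jac, W):
          # x: augmented vector (states + stiffness parameters); z: measurements;
          # h(x): model-predicted responses; jac(x): Jacobian of h; W: weight matrix
          r = z - h(x)
          J = jac(x)
          dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)   # weighted normal equations
          return x + dx                                    # one iteration; repeat to converge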
  14. By: Dmitry Arkhangelsky; Wisse Rutgers
    Abstract: We study a policy evaluation problem in centralized markets. We show that the aggregate impact of any marginal reform, the Marginal Policy Effect (MPE), is nonparametrically identified using data from a baseline equilibrium, without additional variation in the policy rule. We achieve this by constructing the equilibrium-adjusted outcome: a policy-invariant structural object that augments an agent's outcome with the full equilibrium externality their participation imposes on others. We show that these externalities can be constructed using estimands that are already common in empirical work. The MPE is identified as the covariance between our structural outcome and the reform's direction, providing a flexible tool for optimal policy targeting and a novel bridge to the Marginal Treatment Effects literature.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.20032
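    Once the equilibrium-adjusted outcome has been constructed, the MPE estimand itself is elementary; a one-line sketch of the covariance form stated above (both inputs are assumed already estimated):
      import numpy as np

      def marginal_policy_effect(adjusted_outcome, reform_direction):
          # MPE = Cov(equilibrium-adjusted outcome, direction of the marginal reform)
          return np.cov(adjusted_outcome, reform_direction, ddof=1)[0, 1]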
  15. By: Alexandre Boumezoued; Adel Cherchali; Vincent Lemaire; Gilles Pag\`es; Mathieu Truc
    Abstract: Estimating risk measures such as large loss probabilities and Value-at-Risk is fundamental in financial risk management and often relies on computationally intensive nested Monte Carlo methods. While Multi-Level Monte Carlo (MLMC) techniques and their weighted variants are typically more efficient, their effectiveness tends to deteriorate when dealing with irregular functions, notably indicator functions, which are intrinsic to these risk measures. We address this issue by introducing a novel MLMC parametrization that significantly improves performance in practical, non-asymptotic settings while maintaining theoretical asymptotic guarantees. We also prove that antithetic sampling of MLMC levels enhances efficiency regardless of the regularity of the underlying function. Numerical experiments motivated by the calculation of economic capital in a life insurance context confirm the practical value of our approach for estimating loss probabilities and quantiles, bridging theoretical advances and practical requirements in financial risk estimation.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.18995
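    A sketch of an antithetic MLMC level estimator for a nested expectation E[f(E[X|Y])], where the coarse term reuses the two halves of the fine inner sample; the geometric level sizes are an assumed parametrization, and summing the level estimates over l = 0, ..., L gives the full MLMC estimator:
      import numpy as np

      def mlmc_level(f, sample_outer, sample_inner, level, n_outer, m0=2, seed=0):
          # Level-l correction for E[f(E[X | Y])]; inner sample size doubles per level.
          rng = np.random.default_rng(seed)
          m = m0 * 2 ** level
          total = 0.0
          for _ in range(n_outer):
              y = sample_outer(rng)                        # outer scenario
              x = sample_inner(y, m, rng)                  # (m,) inner draws given y
              fine = f(x.mean())
              if level == 0:
                  total += fine
              else:                                        # antithetic coarse estimator:
                  a, b = x[: m // 2], x[m // 2:]           # reuse the two halves
                  total += fine - 0.5 * (f(a.mean()) + f(b.mean()))
          return total / n_outer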
  16. By: Mr. Paul Cashin; Mr. Fei Han; Ivy Sabuga; Jing Xie; Fan Zhang
    Abstract: This paper evaluates three approaches to address the parameter proliferation issue in nowcasting: (i) variable selection using adjusted stepwise autoregressive integrated moving average with exogenous variables (AS-ARIMAX); (ii) regularization in machine learning (ML); and (iii) dimensionality reduction via principal component analysis (PCA). Utilizing 166 variables, we estimate our models from 2007Q2 to 2019Q4 using rolling-window regression, while applying these three approaches. We then conduct a pseudo out-of-sample performance comparison of various nowcasting models (Bridge, MIDAS, U-MIDAS, the dynamic factor model (DFM), and machine learning techniques including Ridge Regression, LASSO, and Elastic Net) to predict China's annualized real GDP growth rate from 2020Q1 to 2023Q1. Our findings suggest that the LASSO method outperforms all other models, but only when guided by economic judgment and sign restrictions in variable selection. Notably, simpler models like Bridge with AS-ARIMAX variable selection yield reliable estimates nearly comparable to those from LASSO, underscoring the importance of effective variable selection in capturing strong signals.
    Keywords: China; GDP; Nowcasting
    Date: 2025–10–24
    URL: https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/217
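    A bare-bones version of the rolling-window regularized nowcast used in the comparison (the window length, cross-validated penalty, and absence of sign restrictions are simplifications of the exercise described above):
      import numpy as np
      from sklearn.linear_model import LassoCV

      def rolling_lasso_nowcast(X, y, window=40):
          # X: (T, k) predictor panel; y: (T,) target growth rate
          preds = []
          for t in range(window, len(y)):
              model = LassoCV(cv=5).fit(X[t - window:t], y[t - window:t])
              preds.append(model.predict(X[t:t + 1])[0])   # one-step pseudo out-of-sample
          return np.array(preds)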
  17. By: Youngjae Jeong
    Abstract: This paper develops a novel econometric framework for static discrete choice games with costly information acquisition. In traditional discrete games, players are assumed to know their own payoffs perfectly when making decisions, ignoring that information acquisition can be a strategic choice. In the proposed framework, I relax this assumption by allowing players to face uncertainty about their own payoffs and to optimally choose both the precision of information and their actions, balancing the expected payoffs from precise information against the information cost. The model provides a unified structure to analyze how information and strategic interactions jointly determine equilibrium outcomes. The model primitives are point identified, and the identification results are illustrated through Monte Carlo experiments. An empirical application to the U.S. airline entry game shows that low-cost carriers acquire less precise information about profits and incur lower information costs than other airlines, which is consistent with a business model focused on cost efficiency. The analysis highlights how differences in firms' information strategies can explain observed heterogeneity in market entry behavior and competition.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.19140
  18. By: Jing Cynthia Wu; Jin Xi; Shihan Xie
    Abstract: We propose a new LLM-based survey framework that enables retrospective coverage, economic reasoning, dynamic effects, and clean identification. We recover human-comparable treatment effects in a multi-wave randomized controlled trial of inflation expectations surveys, at 1/1000 the cost. To demonstrate the framework’s full potential, we extend the benchmark human survey (10 waves, 2018–2023) to over 50 waves dating back to 1990. We further examine the economic mechanisms underlying agents’ expectation formation, identifying the mean-reversion and individual-attention channels. Finally, we trace dynamic treatment effects and demonstrate clean identification. Together, these innovations demonstrate that LLM surveys enable research designs unattainable with human surveys.
    JEL: C83 E31 E52
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34308
  19. By: Kaustubh; Gopalakrishnan, Pawan; Ranjan, Abhishek
    Abstract: This paper estimates the New Keynesian Phillips curve accounting for unexpectedly large shocks such as COVID-19. The recent pandemic distorted estimates of the output gap derived from standard trend-cycle decompositions of GDP (HP filter, BP filter, Kalman filter). We propose a modified unobserved components model (UCM) that introduces an additional Student-t distributed irregular component into the trend-cycle decomposition of GDP, which successfully isolates transitory shocks like COVID-19 from the trend and cycle estimates. We also construct a model-based measure of inflation expectations that captures adaptive learning from a long inflation history and real-time updating during the pandemic. For India, we find a stable linear NKPC. Our results demonstrate that accounting for fat-tailed events is crucial for obtaining reliable Phillips curve estimates in emerging markets.
    Keywords: Phillips Curve, Potential Growth, Output Gap, Inflation Expectations, Unobserved Components Model, Kalman Filter, Almon Lag
    JEL: C51 C60 E32
    Date: 2025–10–01
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:126329
  20. By: WATANABE, Toshiaki
    Abstract: This paper analyzes business cycles in Japan by applying Markov switching (MS) models to monthly data on the coincident composite index (CI) over the period 1985/01-2025/05, calculated by the Economic and Social Research Institute (ESRI), Cabinet Office, Government of Japan. During the latter half of the sample period, the Japanese economy experienced major shocks such as the global financial crisis in 2008, the Great East Japan Earthquake in 2011, and the COVID-19 pandemic in 2020. The CI fell sharply during these periods, which makes it difficult to estimate business cycle turning points using the simple MS model. In this paper, the MS model is extended by incorporating Student's t-errors and stochastic volatility (SV). Since the likelihood is difficult to evaluate once SV is introduced, a Bayesian method via Markov chain Monte Carlo (MCMC) is employed. The MS model with t-errors or SV is shown to provide estimates of the business cycle turning points close to those published by ESRI. A new method for evaluating the marginal likelihood is also introduced. Bayesian model comparison based on the marginal likelihood provides evidence that t-errors are not needed once SV is introduced. Using the MS model with normal errors and SV, structural changes in the CI's mean growth rates during booms and recessions are also analyzed, and two break points are found in both mean growth rates: one in 2008/10 and the other in 2010/02, around which the mean growth rate during recessions falls and that during booms rises, reflecting the global financial crisis.
    JEL: C11 C22 C51 C52 E32
    Date: 2025–08–24
    URL: https://d.repec.org/n?u=RePEc:hit:hiasdp:hias-e-148
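    The backbone of all these specifications is the Hamilton filter for the regime probabilities; a sketch for the baseline two-regime Gaussian MS model, before the t-error and SV extensions (which require the MCMC machinery described above):
      import numpy as np
      from scipy.stats import norm

      def hamilton_filter(y, mu, sigma, P, p0=(0.5, 0.5)):
          # mu, sigma: per-regime mean and sd (2,); P[i, j] = Pr(s_t = j | s_{t-1} = i)
          prob = np.asarray(p0, dtype=float)
          filtered, loglik = [], 0.0
          for yt in y:
              pred = prob @ P                              # one-step regime prediction
              lik = pred * norm.pdf(yt, loc=mu, scale=sigma)
              loglik += np.log(lik.sum())                  # likelihood contribution
              prob = lik / lik.sum()                       # filtered regime probabilities
              filtered.append(prob)
          return np.array(filtered), loglik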
  21. By: Sina Molavipour; Alireza M. Javid; Cassie Ye; Björn Löfdahl; Mikhail Nechaev
    Abstract: Robust yield curve estimation is crucial in fixed-income markets for accurate instrument pricing, effective risk management, and informed trading strategies. Traditional approaches, including the bootstrapping method and parametric Nelson-Siegel models, often struggle with overfitting or instability, especially when underlying bonds are sparse or bond prices are volatile or contain hard-to-remove noise. In this paper, we propose a neural network-based framework for robust yield curve estimation tailored to small mortgage bond markets. Our model estimates the yield curve independently for each day and introduces a new loss function to enforce smoothness and stability, addressing challenges associated with limited and noisy data. Empirical results on Swedish mortgage bonds demonstrate that our approach delivers more robust and stable yield curve estimates than existing methods such as Nelson-Siegel-Svensson (NSS) and Kernel-Ridge (KR). Furthermore, the framework allows for the integration of domain-specific constraints, such as alignment with risk-free benchmarks, enabling practitioners to balance the trade-off between smoothness and accuracy according to their needs.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.21347
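    A schematic version of the kind of loss the paper describes, combining bond-pricing error with a smoothness penalty on the fitted curve; the penalty weight, curvature discretization (uniform grid assumed), and pricing function are placeholders:
      import numpy as np

      def yield_curve_loss(y_curve, maturities, price_model, market_prices, lam=1.0):
          # y_curve: yields on a maturity grid; price_model(y_curve, maturities)
          # maps the curve to model prices of the observed bonds
          fit_err = np.mean((price_model(y_curve, maturities) - market_prices) ** 2)
          curv = np.diff(y_curve, n=2) / np.diff(maturities)[:-1] ** 2  # discrete curvature
          return fit_err + lam * np.mean(curv ** 2)        # accuracy vs. smoothness trade-off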
  22. By: Lucas Darius Konrad; Nikolas Kuschnig
    Abstract: Small subsets of data with disproportionate influence on model outcomes can have dramatic impacts on conclusions, with a few data points sometimes overturning key findings. While recent work has developed methods to identify these most influential sets, no formal theory exists to determine when their influence reflects genuine problems rather than natural sampling variation. We address this gap by developing a principled framework for assessing the statistical significance of most influential sets. Our theoretical results characterize the extreme value distributions of maximal influence and enable rigorous hypothesis tests for excessive influence, replacing current ad-hoc sensitivity checks. We demonstrate the practical value of our approach through applications across economics, biology, and machine learning benchmarks.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.20372
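    For concreteness, a common way to approximate a most influential set in OLS uses one-step influence (DFBETA) scores; the paper's contribution is the extreme-value null for the resulting maximal influence, which this sketch does not reproduce:
      import numpy as np

      def most_influential_set(X, y, coef_index, k):
          XtX_inv = np.linalg.inv(X.T @ X)
          resid = y - X @ (XtX_inv @ (X.T @ y))            # OLS residuals
          h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)      # leverages
          dfbeta = (X @ XtX_inv)[:, coef_index] * resid / (1 - h)
          return np.argsort(-np.abs(dfbeta))[:k]           # top-k one-step influence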
  23. By: Luis Orea (Department of Economics - Universidad de Oviedo = University of Oviedo); K Hervé Dakpo (UMR PSAE - Paris-Saclay Applied Economics - AgroParisTech - Université Paris-Saclay - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement)
    Abstract: One of the approaches to address the issue of production heterogeneity is to use latent class models. In most of these models, class membership either does not vary or might change freely over time. While the first assumption becomes increasingly untenable as the number of observed periods becomes larger, the second assumption is difficult to justify if important factors exist that prevent firms from switching classes back and forth several times. The present paper aims to develop a latent class model that allows firms to change from one class to another over time while permitting some degree of persistence in class membership. Our model can be used in settings with more than two classes and estimated using unbalanced panel datasets. An application of the empirical model in the context of dairy farm intensification is also provided. We find evidence of moderate resistance to replacing one milk production system with another in this sector, especially for small farms. Despite this, the standard latent class model performs reasonably well in terms of class-membership probabilities and temporal patterns.
    Keywords: Stochastic frontier, Class persistence, Latent class model
    Date: 2025–10–10
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05322436

This nep-ecm issue is ©2025 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.