nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒06‒10
24 papers chosen by
Sune Karlsson, Örebro universitet


  1. Inference in a Stationary/Nonstationary Autoregressive Time-Varying-Parameter Model By Donald W. K. Andrews; Ming Li
  2. A quantile-based nonadditive fixed effects model By Xin Liu
  3. Two-way Fixed Effects and Differences-in-Differences Estimators in Heterogeneous Adoption Designs By Clément de Chaisemartin; Xavier D'Haultfœuille
  4. Identification and Estimation of Nonseparable Triangular Equations with Mismeasured Instruments By Shaomin Wu
  5. Asymptotic Properties of the Distributional Synthetic Controls By Lu Zhang; Xiaomeng Zhang; Xinyu Zhang
  6. Optimal Bias-Correction and Valid Inference in High-Dimensional Ridge Regression: A Closed-Form Solution By Zhaoxing Gao
  7. Optimal nonparametric estimation of the expected shortfall risk By Daniel Bartl; Stephan Eckstein
  8. (Empirical) Bayes Approaches to Parallel Trends By Soonwoo Kwon; Jonathan Roth
  9. Dynamic Local Average Treatment Effects By Ravi B. Sojitra; Vasilis Syrgkanis
  10. Finite-Sample Inference on Auction Bid Distributions Using Transaction Prices By David M. Kaplan; Xin Liu
  11. Quantifying the Internal Validity of Weighted Estimands By Alexandre Poirier; Tymon Słoczyński
  12. Stochastic Volatility in Mean: Efficient Analysis by a Generalized Mixture Sampler By Daichi Hiraki; Siddhartha Chib; Yasuhiro Omori
  13. A Comparison of Traditional and Deep Learning Methods for Parameter Estimation of the Ornstein-Uhlenbeck Process By Jacob Fein-Ashley
  14. Identifying the Volatility Risk Price Through the Leverage Effect By Xu Cheng; Eric Renault; Paul Sangrey
  15. A Bayesian semi-parametric approach to stochastic frontier models with inefficiency heterogeneity By Deng, Yaguo
  16. How do applied researchers use the Causal Forest? A methodological review of a method By Patrick Rehill
  17. Tuning parameter selection in econometrics By Denis Chetverikov
  18. Sequential monitoring for explosive volatility regimes By Lajos Horvath; Lorenzo Trapani; Shixuan Wang
  19. Percentage Coefficient (bp) -- Effect Size Analysis (Theory Paper 1) By Xinshu Zhao; Dianshi Moses Li; Ze Zack Lai; Piper Liping Liu; Song Harris Ao; Fei You
  20. Panel Data Analysis By Rüttenauer, Tobias; Kapelle, Nicole
  21. Detailed Gender Wage Gap Decompositions: Controlling for Worker Unobserved Heterogeneity Using Network Theory By Jamie Fogel; Bernardo Modenesi
  22. Demystifying Inference after Adaptive Experiments By Aurélien Bibaut; Nathan Kallus
  23. Calibration of the rating transition model for high and low default portfolios By Jian He; Asma Khedher; Peter Spreij
  24. Robust Bellman State Prediction with Learning and Model Preferences By Estey, Clayton

  1. By: Donald W. K. Andrews (Yale University); Ming Li (National University of Singapore)
    Abstract: This paper considers nonparametric estimation and inference in first-order autoregressive (AR(1)) models with deterministically time-varying parameters. A key feature of the proposed approach is to allow for time-varying stationarity in some time periods, time-varying nonstationarity (i.e., unit root or local-to-unit root behavior) in other periods, and smooth transitions between the two. The estimation of the AR parameter at any time point is based on a local least squares regression method, where the relevant initial condition is endogenous. We obtain limit distributions for the AR parameter estimator and t-statistic at a given point T in time when the parameter exhibits unit root, local-to-unity, or stationary/stationary-like behavior at time T. These results are used to construct confidence intervals and median-unbiased interval estimators for the AR parameter at any specified point in time. The confidence intervals have correct uniform asymptotic coverage probability regardless of the time-varying stationary/nonstationary behavior of the observations.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2389&r=
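    To make the local least squares idea above concrete, here is a minimal sketch of a kernel-weighted AR(1) coefficient estimate at a given time point. It illustrates the general approach rather than the paper's exact estimator; the Gaussian kernel and the bandwidth choice are assumptions.

```python
import numpy as np

def local_ar1(y, t0, h):
    """Kernel-weighted least squares estimate of a time-varying AR(1)
    coefficient at time index t0 with bandwidth h (illustrative)."""
    idx = np.arange(1, len(y))                 # time indices of the regressands
    w = np.exp(-0.5 * ((idx - t0) / h) ** 2)   # Gaussian kernel weights (assumed)
    x, z = y[:-1], y[1:]                       # lagged regressor and regressand
    return np.sum(w * x * z) / np.sum(w * x * x)
```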
  2. By: Xin Liu
    Abstract: I propose a quantile-based nonadditive fixed effects panel model to study heterogeneous causal effects. Similar to the standard fixed effects (FE) model, my model allows arbitrary dependence between regressors and unobserved heterogeneity, but it generalizes the additive separability of standard FE to allow the unobserved heterogeneity to enter nonseparably. Similar to structural quantile models, my model's random coefficient vector depends on an unobserved, scalar "rank" variable, in which outcomes (excluding an additive noise term) are monotonic at a particular value of the regressor vector; this is much weaker than the conventional monotonicity assumption that must hold at all possible values. This rank is assumed to be stable over time, which is often more economically plausible than the assumption in panel quantile studies that individual rank is i.i.d. over time. The model uncovers the heterogeneous causal effects as functions of the rank variable. I provide identification and estimation results, establishing uniform consistency and uniform asymptotic normality of the heterogeneous causal effect function estimator. Simulations show reasonable finite-sample performance and show that my model complements fixed effects quantile regression. Finally, I illustrate the proposed methods by examining the causal effect of a country's oil wealth on its military defense spending.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.03826&r=
  3. By: Clément de Chaisemartin; Xavier D'Haultfœuille
    Abstract: We consider treatment-effect estimation under a parallel trends assumption, in heterogeneous adoption designs where no unit is treated at period one, and units receive a weakly positive dose at period two. First, we develop a test of the assumption that the treatment effect is mean independent of the treatment, under which the commonly-used two-way-fixed-effects estimator is consistent. When this test is rejected, we propose alternative, robust estimators. If there are stayers with a period-two treatment equal to 0, the robust estimator is a difference-in-differences (DID) estimator using stayers as the control group. If there are quasi-stayers with a period-two treatment arbitrarily close to zero, the robust estimator is a DID using units with a period-two treatment below a bandwidth as controls. Finally, without stayers or quasi-stayers, we propose non-parametric bounds, and an estimator relying on a parametric specification of treatment-effect heterogeneity. We use our results to revisit Pierce and Schott (2016) and Enikolopov et al. (2011).
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.04465&r=
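    A minimal sketch of the robust estimator when stayers exist, for the two-period case described above (variable names are illustrative; `d2` is the period-two dose):

```python
import numpy as np

def did_stayers(y1, y2, d2):
    """DID estimator using stayers (period-two dose of zero) as controls:
    compare outcome changes of dosed units to those of stayers."""
    dy = y2 - y1                               # within-unit outcome change
    return dy[d2 > 0].mean() - dy[d2 == 0].mean()
```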
  4. By: Shaomin Wu
    Abstract: In this paper, I study the nonparametric identification and estimation of the marginal effect of an endogenous variable $X$ on the outcome variable $Y$, given a potentially mismeasured instrumental variable $W^*$, without assuming linearity or separability of the functions governing the relationship between observables and unobservables. To address the challenges arising from the co-existence of measurement error and nonseparability, I first employ the deconvolution technique from the measurement error literature to identify the joint distribution of $Y, X, W^*$ using two error-laden measurements of $W^*$. I then recover the structural derivative of the function of interest and the "Local Average Response" (LAR) from the joint distribution via the "unobserved instrument" approach in Matzkin (2016). I also propose nonparametric estimators for these parameters and derive their uniform rates of convergence. Monte Carlo exercises show evidence that the estimators I propose have good finite-sample performance.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.13735&r=
  5. By: Lu Zhang; Xiaomeng Zhang; Xinyu Zhang
    Abstract: This paper enhances our understanding of the Distributional Synthetic Control (DSC) estimator proposed by Gunsilius (2023), focusing on its asymptotic properties. We first establish the DSC estimator's asymptotic optimality: the treatment effect estimator given by DSC achieves the lowest possible squared prediction error among all potential treatment effect estimators that depend on an average of quantiles of control units. We also establish the convergence of the DSC weights when certain requirements are met, as well as the convergence rate. A significant aspect of our research is the finding that DSC synthesis forms an optimal weighted average, particularly in situations where it is impractical to perfectly fit the treated unit's quantiles through a weighted average of the control units' quantiles. To corroborate our theoretical insights, we provide empirical evidence derived from simulations.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00953&r=
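    The synthesis step can be sketched as constrained least squares over quantile functions, a minimal illustration of the weighted-average-of-quantiles idea (the solver choice is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def dsc_weights(q_treated, q_controls):
    """Simplex weights matching the treated unit's quantile function by a
    weighted average of control quantile functions.
    q_treated: (K,) quantile grid of the treated unit; q_controls: (K, J)."""
    J = q_controls.shape[1]
    obj = lambda w: np.sum((q_treated - q_controls @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(J, 1.0 / J), bounds=[(0, 1)] * J,
                   constraints=cons)
    return res.x
```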
  6. By: Zhaoxing Gao
    Abstract: Ridge regression is an indispensable tool in big data econometrics but suffers from bias issues affecting both statistical efficiency and scalability. We introduce an iterative strategy to correct the bias effectively when the dimension $p$ is less than the sample size $n$. For $p>n$, our method optimally reduces the bias to a level unachievable through linear transformations of the response. We employ a Ridge-Screening (RS) method to handle the remaining bias when $p>n$, creating a reduced model suitable for bias-correction. Under certain conditions, the selected model nests the true one, making RS a novel variable selection approach. We establish the asymptotic properties and valid inferences of our de-biased ridge estimators for both $p<n$ and $p>n$, where $p$ and $n$ may grow towards infinity, along with the number of iterations. Our method is validated using simulated and real-world data examples, providing a closed-form solution to bias challenges in ridge regression inferences.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00424&r=
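    For intuition, a textbook-style iterative plug-in correction for the $p<n$ case is sketched below: since the ridge bias is $-\lambda(X'X+\lambda I)^{-1}\beta$, one can iterate a fixed-point update. This illustrates the bias-correction idea only, not the paper's closed-form solution or its $p>n$ screening step.

```python
import numpy as np

def ridge_debias(X, y, lam, iters=50):
    """Ridge estimate plus an iterative plug-in bias correction (p < n).
    Iterating b <- b_ridge + lam * A_inv @ b converges to the OLS
    solution when X'X has full rank (illustrative sketch)."""
    p = X.shape[1]
    A_inv = np.linalg.inv(X.T @ X + lam * np.eye(p))
    b_ridge = A_inv @ X.T @ y
    b = b_ridge.copy()
    for _ in range(iters):
        b = b_ridge + lam * A_inv @ b      # plug-in estimate of the bias term
    return b
```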
  7. By: Daniel Bartl; Stephan Eckstein
    Abstract: We address the problem of estimating the expected shortfall risk of a financial loss using a finite number of i.i.d. data points. It is well known that the classical plug-in estimator suffers from poor statistical performance when faced with (heavy-tailed) distributions that are commonly used in financial contexts. Further, it lacks robustness, as the modification of even a single data point can cause a significant distortion. We propose a novel procedure for the estimation of the expected shortfall and prove that it recovers the best possible statistical properties (dictated by the central limit theorem) under minimal assumptions and for all finite numbers of data. Further, this estimator is adversarially robust: even if a (small) proportion of the data is maliciously modified, the procedure continues to optimally estimate the true expected shortfall risk. We demonstrate that our estimator outperforms the classical plug-in estimator through a variety of numerical experiments across a range of standard loss distributions.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00357&r=
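    For reference, the classical plug-in estimator that serves as the paper's benchmark is a one-liner (illustrative):

```python
import numpy as np

def plugin_es(losses, alpha=0.95):
    """Classical plug-in expected shortfall: average loss beyond the
    empirical alpha-quantile (the benchmark the paper improves upon)."""
    var = np.quantile(losses, alpha)       # empirical value-at-risk
    return losses[losses >= var].mean()    # mean of the tail losses
```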
  8. By: Soonwoo Kwon; Jonathan Roth
    Abstract: We consider Bayes and Empirical Bayes (EB) approaches for dealing with violations of parallel trends. In the Bayes approach, the researcher specifies a prior over both the pre-treatment violations of parallel trends $\delta_{pre}$ and the post-treatment violations $\delta_{post}$. The researcher then updates their posterior about the post-treatment bias $\delta_{post}$ given an estimate of the pre-trends $\delta_{pre}$. This allows them to form posterior means and credible sets for the treatment effect of interest, $\tau_{post}$. In the EB approach, the prior on the violations of parallel trends is learned from the pre-treatment observations. We illustrate these approaches in two empirical applications.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.11839&r=
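    A minimal sketch of the Bayes updating step under a joint normal prior; all numerical values are made-up assumptions, and the update is the standard conditional-normal formula:

```python
import numpy as np

# Joint normal prior on (delta_pre, delta_post) plus a normal estimate
# of the pre-trend violation; posterior mean of delta_post follows from
# the conditional-normal update.
mu = np.zeros(2)                       # prior means for (pre, post) violations
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])         # prior covariance: violations co-move
se_pre = 0.5                           # sampling s.e. of the pre-trend estimate
delta_pre_hat = 0.6                    # estimated pre-trend violation (assumed)

k = Sigma[1, 0] / (Sigma[0, 0] + se_pre ** 2)          # shrinkage factor
post_mean = mu[1] + k * (delta_pre_hat - mu[0])        # E[delta_post | data]
```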
  9. By: Ravi B. Sojitra; Vasilis Syrgkanis
    Abstract: We consider Dynamic Treatment Regimes (DTRs) with one-sided non-compliance that arise in applications such as digital recommendations and adaptive medical trials. These are settings where decision makers encourage individuals to take treatments over time, but adapt encouragements based on previous encouragements, treatments, states, and outcomes. Importantly, individuals may choose to (not) comply with a treatment recommendation, whenever it is made available to them, based on unobserved confounding factors. We provide non-parametric identification, estimation, and inference for Dynamic Local Average Treatment Effects, which are expected values of multi-period treatment contrasts among appropriately defined complier subpopulations. Under standard assumptions in the Instrumental Variable and DTR literature, we show that one can identify local average effects of contrasts that correspond to offering treatment at any single time step. Under an additional cross-period effect-compliance independence assumption, which is satisfied in Staggered Adoption settings and a generalization of them, which we define as Staggered Compliance settings, we identify local average treatment effects of treating in multiple time periods.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.01463&r=
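    The single-period building block behind these dynamic contrasts is the familiar Wald ratio (a sketch; `z` is the encouragement, `d` the take-up, `y` the outcome):

```python
import numpy as np

def wald_late(y, d, z):
    """Single-period LATE via the Wald ratio: intent-to-treat effect on
    outcomes divided by the intent-to-treat effect on take-up."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d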
  10. By: David M. Kaplan (University of Missouri); Xin Liu (Washington State University)
    Abstract: We provide finite-sample, nonparametric, uniform confidence bands for the bid distribution's quantile function in first-price, second-price, descending, and ascending auctions with symmetric independent private values, when only the transaction price (highest or second-highest bid) is observed. Even with a varying number of bidders, finite-sample coverage is exact. With a fixed number of bidders, we also derive uniform confidence bands robust to auction-level unobserved heterogeneity. This includes new bounds on the bid quantile function in terms of the transaction price quantile function. We also provide results on computation, median-unbiased quantile estimation, and pointwise quantile inference. Empirically, our new methodology is applied to timber auction data to examine heterogeneity across appraisal value and number of bidders, which helps assess the combination of symmetric independent private values and exogenous participation.
    Keywords: first-price; order statistics; second-price; uniform confidence band; unobserved heterogeneity
    JEL: C57
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:2403&r=
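    To illustrate the link between price and bid distributions exploited above: if the transaction price is the maximum of n i.i.d. bids with CDF F, the price CDF is F^n, so bid quantiles can be read off price quantiles. This is a point-estimate sketch only, not the paper's finite-sample confidence bands.

```python
import numpy as np

def bid_quantile_from_prices(prices, tau, n_bidders):
    """Bid quantile at level tau when the observed transaction price is
    the maximum of n i.i.d. bids: Q_bid(tau) = Q_price(tau**n)."""
    return np.quantile(prices, tau ** n_bidders)
```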
  11. By: Alexandre Poirier; Tymon Słoczyński
    Abstract: In this paper we study a class of weighted estimands, which we define as parameters that can be expressed as weighted averages of the underlying heterogeneous treatment effects. The popular ordinary least squares (OLS), two-stage least squares (2SLS), and two-way fixed effects (TWFE) estimands are all special cases within our framework. Our focus is on answering two questions concerning weighted estimands. First, under what conditions can they be interpreted as the average treatment effect for some (possibly latent) subpopulation? Second, when these conditions are satisfied, what is the upper bound on the size of that subpopulation, either in absolute terms or relative to a target population of interest? We argue that this upper bound provides a valuable diagnostic for empirical research. When a given weighted estimand corresponds to the average treatment effect for a small subset of the population of interest, we say its internal validity is low. Our paper develops practical tools to quantify the internal validity of weighted estimands.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.14603&r=
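    By the Frisch-Waugh-Lovell theorem, the OLS coefficient on a treatment D with controls X is a weighted average of outcomes with weights proportional to D's residual on X; inspecting those weights is a simple diagnostic in the spirit of the paper (a sketch of the general idea, not the authors' upper-bound calculation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def ols_implicit_weights(D, X):
    """FWL residual weights: beta_OLS = sum_i w_i * Y_i with
    w_i = e_i / sum_j e_j**2, where e is D's residual on X.
    Returns the weights and the share that is negative among treated."""
    e = D - LinearRegression().fit(X, D).predict(X)
    w = e / np.sum(e ** 2)
    return w, np.mean(w[D == 1] < 0)
```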
  12. By: Daichi Hiraki; Siddhartha Chib; Yasuhiro Omori
    Abstract: In this paper we consider the simulation-based Bayesian analysis of stochastic volatility in mean (SVM) models. Extending the highly efficient Markov chain Monte Carlo mixture sampler for the SV model proposed in Kim et al. (1998) and Omori et al. (2007), we develop an accurate approximation of the non-central chi-squared distribution as a mixture of thirty normal distributions. Under this mixture representation, we sample the parameters and latent volatilities in one block. We also detail a correction of the small approximation error by using additional Metropolis-Hastings steps. The proposed method is extended to the SVM model with leverage. The methodology and models are applied to excess holding yields in empirical studies, and the SVM model with leverage is shown to outperform competing volatility models based on marginal likelihoods.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.13986&r=
  13. By: Jacob Fein-Ashley
    Abstract: We consider the Ornstein-Uhlenbeck (OU) process, a stochastic process widely used in finance, physics, and biology. Parameter estimation of the OU process is a challenging problem. Thus, we review traditional tracking methods and compare them with novel applications of deep learning to estimate the parameters of the OU process. We use a multi-layer perceptron to estimate the parameters of the OU process and compare its performance with traditional parameter estimation methods, such as the Kalman filter and maximum likelihood estimation. We find that the multi-layer perceptron can accurately estimate the parameters of the OU process given a large dataset of observed trajectories; however, traditional parameter estimation methods may be more suitable for smaller datasets.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.11526&r=
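    The standard benchmark is closed-form estimation via the exact AR(1) discretization of the OU process, a sketch of the traditional approach the paper compares against:

```python
import numpy as np

def ou_mle(x, dt):
    """Estimate OU parameters (theta, mu, sigma) from a sampled path via
    the exact discretization x[t+1] = mu + (x[t]-mu)*exp(-theta*dt) + eps."""
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)           # AR(1) slope and intercept
    theta = -np.log(b) / dt                # mean-reversion speed
    mu = a / (1.0 - b)                     # long-run mean
    resid = x1 - (a + b * x0)
    sigma2 = 2 * theta * resid.var() / (1 - b ** 2)   # invert AR(1) noise var
    return theta, mu, np.sqrt(sigma2)
```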
  14. By: Xu Cheng (University of Pennsylvania); Eric Renault (University of Warwick); Paul Sangrey (Noom Inc)
    Abstract: In asset pricing models with stochastic volatility, uncertainty about volatility affects risk premia through two channels: aversion to decreasing returns and aversion to increasing volatility. We analyze the identification of and robust inference for structural parameters measuring investors' aversions to these risks: the return risk price and the volatility risk price. In the presence of a leverage effect (instantaneous causality between the asset return and its volatility), we study the identification of both structural parameters with the price data only, without relying on additional option pricing models or option data. We analyze this identification challenge in a nonparametric discrete-time exponentially affine model, complementing the continuous-time approach of Bandi and Renò (2016). We then specialize to a parametric model and derive the implied minimum distance criterion relating the risk prices to the asset return and volatility's joint distribution. This criterion is almost flat when the leverage effect is small, and we introduce identification-robust confidence sets for both risk prices regardless of the magnitude of the leverage effect.
    Keywords: leverage effect, nonparametric identification, stochastic volatility, volatility factor, volatility risk price, weak identification
    JEL: C12 C14 C38 C58 G12
    Date: 2024–04–23
    URL: http://d.repec.org/n?u=RePEc:pen:papers:24-013&r=
  15. By: Deng, Yaguo
    Abstract: In this chapter, we present a semiparametric Bayesian approach for stochastic frontier (SF) models that incorporates exogenous covariates into the inefficiency component by using a Dirichlet process model for conditional distributions. We highlight the advantages of our method by contrasting it with traditional SF models and parametric Bayesian SF models using two different applications in the agricultural sector. In the first application, the accounting data of 2,500 dairy farms from five countries are analyzed. In the second case study, data from forty-three smallholder rice producers in the Tarlac region of the Philippines from 1990 to 1997 are analyzed. Our empirical results suggest that the semi-parametric Bayesian stochastic frontier model outperforms its counterparts in predictive efficiency, highlighting its robustness and utility in different agricultural contexts.
    Keywords: Bayesian semi-parametric inference; Efficiency; Heterogeneity; Production function; Stochastic frontier analysis
    Date: 2024–04–23
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:43837&r=
  16. By: Patrick Rehill
    Abstract: This paper conducts a methodological review of papers using the causal forest machine learning method for flexibly estimating heterogeneous treatment effects. It examines 133 peer-reviewed papers. It shows that the emerging best practice relies heavily on the approach and tools created by the original authors of the causal forest, such as their grf package and the approaches given by them in examples. Generally, researchers use the causal forest on a relatively low-dimensional dataset, relying on randomisation or observed controls to identify effects. There are several common ways to then communicate results -- by mapping out the univariate distribution of individual-level treatment effect estimates, displaying variable importance results for the forest, and graphing the distribution of treatment effects across covariates that are important either for theoretical reasons or because they have high variable importance. Some deviations from this common practice are interesting and deserve further development and use. Others are unnecessary or even harmful.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.13356&r=
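    A minimal sketch of the workflow the review describes, using Python's econml package as a stand-in for the R grf package that most of the reviewed papers use (the simulated data and variable names are illustrative assumptions):

```python
import numpy as np
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))            # effect modifiers of interest
W = rng.normal(size=(n, 3))            # observed controls for identification
T = rng.binomial(1, 0.5, size=n)       # (here) randomised treatment
Y = X[:, 0] * T + W.sum(axis=1) + rng.normal(size=n)

cf = CausalForestDML(discrete_treatment=True, random_state=0)
cf.fit(Y, T, X=X, W=W)
tau_hat = cf.effect(X)                 # individual-level effect estimates
# Typical reporting step: plot the distribution of tau_hat and relate it
# to covariates with high variable importance.
```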
  17. By: Denis Chetverikov
    Abstract: I review some of the main methods for selecting tuning parameters in nonparametric and $\ell_1$-penalized estimation. For the nonparametric estimation, I consider the methods of Mallows, Stein, Lepski, cross-validation, penalization, and aggregation in the context of series estimation. For the $\ell_1$-penalized estimation, I consider the methods based on the theory of self-normalized moderate deviations, bootstrap, Stein's unbiased risk estimation, and cross-validation in the context of Lasso estimation. I explain the intuition behind each of the methods and discuss their comparative advantages. I also give some extensions.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.03021&r=
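    As one concrete example from the survey's Lasso section, penalty selection by K-fold cross-validation (a sketch with simulated data):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
beta = np.zeros(50)
beta[:3] = 1.0                          # sparse true coefficient vector
y = X @ beta + rng.normal(size=200)

model = LassoCV(cv=5).fit(X, y)         # 5-fold CV over a penalty grid
print(model.alpha_)                     # selected tuning parameter
```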
  18. By: Lajos Horvath; Lorenzo Trapani; Shixuan Wang
    Abstract: In this paper, we develop two families of sequential monitoring procedures to (timely) detect changes in a GARCH(1, 1) model. Whilst our methodologies can be applied for the general analysis of changepoints in GARCH(1, 1) sequences, they are in particular designed to detect changes from stationarity to explosivity or vice versa, thus allowing one to check for volatility bubbles. Our statistics can be applied irrespective of whether the historical sample is stationary or not, and indeed without prior knowledge of the regime of the observations before and after the break. In particular, we construct our detectors as the CUSUM process of the quasi-Fisher scores of the log likelihood function. In order to ensure timely detection, we then construct our boundary function (exceeding which would indicate a break) by including a weighting sequence which is designed to shorten the detection delay in the presence of a changepoint. We consider two types of weights: a lighter set of weights, which ensures timely detection in the presence of changes occurring early, but not too early after the end of the historical sample; and a heavier set of weights, called Rényi weights, which is designed to ensure timely detection in the presence of changepoints occurring very early in the monitoring horizon. In both cases, we derive the limiting distribution of the detection delays, indicating the expected delay for each set of weights. Our theoretical results are validated via a comprehensive set of simulations and an empirical application to daily returns of individual stocks.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.17885&r=
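    A generic sketch of the detector's structure: cumulate (quasi-)scores and flag the first exceedance of a weighted boundary. The boundary shape below is a common choice in the sequential monitoring literature and stands in for the paper's specific light and Rényi weights (m is the historical sample size):

```python
import numpy as np

def monitor(scores, m, c=1.0, gamma=0.25):
    """Sequential CUSUM detector: return the first monitoring time at
    which |cumulative score| exceeds a weighted boundary (illustrative)."""
    csum = 0.0
    for k, s in enumerate(scores, start=1):
        csum += s
        boundary = c * np.sqrt(m) * (1 + k / m) * (k / (k + m)) ** gamma
        if abs(csum) > boundary:
            return k                    # detection time
    return None                         # no change flagged
```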
  19. By: Xinshu Zhao (Department of Communication, Faculty of Social Science, University of Macau); Dianshi Moses Li (Centre for Empirical Legal Studies, Faculty of Law, University of Macau); Ze Zack Lai (Department of Communication, Faculty of Social Science, University of Macau); Piper Liping Liu (School of Media and Communication, Shenzhen University); Song Harris Ao (Department of Communication, Faculty of Social Science, University of Macau); Fei You (Department of Communication, Faculty of Social Science, University of Macau)
    Abstract: Percentage coefficient (bp) has emerged in recent publications as an additional and alternative estimator of effect size for regression analysis. This paper retraces the theory behind the estimator. It is posited that an estimator must first serve the fundamental function of enabling researchers and readers to comprehend an estimand, the target of estimation. It may then serve the instrumental function of enabling researchers and readers to compare two or more estimands. Defined as the regression coefficient when the dependent variable (DV) and independent variable (IV) are both on conceptual 0-1 percentage scales, the percentage coefficient (bp) features 1) a clearly comprehensible interpretation and 2) equitable scales for comparison. Thus, the coefficient (bp) serves both functions effectively and efficiently, thereby serving some needs not completely served by other indicators such as the raw coefficient (bw) and standardized beta. Another fundamental premise of the functionalist theory is that "effect" is not a monolithic concept. Rather, it is a collection of compartments, each of which measures a component of the conglomerate that we call "effect." A regression coefficient (b), for example, measures one aspect of effect, namely unit effect, aka efficiency, as it indicates the unit change in DV associated with a one-unit increase in IV. Percentage coefficient (bp) indicates the change in DV in percentage points associated with a whole-scale increase in IV. It is not meant to be an all-encompassing indicator of the all-encompassing concept of effect, but rather an interpretable and comparable indicator of efficiency, one of the key components of effect.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.19495&r=
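    Operationally, bp is an OLS slope after both variables are placed on 0-1 scales. In this sketch the observed sample ranges stand in for the conceptual scale endpoints, which is an assumption; in practice the conceptual minima and maxima should be used.

```python
import numpy as np

def percentage_coefficient(y, x):
    """Percentage coefficient b_p: rescale DV and IV to 0-1 and take the
    OLS slope (observed ranges proxy for the conceptual scales here)."""
    y01 = (y - y.min()) / (y.max() - y.min())
    x01 = (x - x.min()) / (x.max() - x.min())
    slope, _ = np.polyfit(x01, y01, 1)
    return slope
```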
  20. By: Rüttenauer, Tobias; Kapelle, Nicole (Humboldt-Universität zu Berlin)
    Abstract: Panel data offer a valuable lens through which social science phenomena can be examined over time. With panel data, we can overcome some of the fundamental problems with conventional cross-sectional analyses by focusing on the within-unit changes rather than the differences between units. This chapter delves into the foundations, recent advancements, and critical issues associated with panel data analysis. The chapter illustrates the basic concepts of random effects (RE) and fixed effects (FE) estimators. Moving beyond the fundamentals, we provide an intuition for various recent developments and advances in the field of panel data methods, paying particular attention to the identification of time-varying treatment effects or impact functions. To illustrate practical application, we investigate how marriage influences sexual satisfaction. While married individuals report a higher sexual satisfaction than unmarried respondents (between-comparison), individuals experience a decline in satisfaction after marriage compared to their pre-marital levels (within-comparison).
    Date: 2024–05–01
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:3mfzq&r=
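    The FE (within) estimator the chapter builds on can be sketched as within-unit demeaning followed by pooled OLS (illustrative; a single regressor for simplicity):

```python
import pandas as pd

def fixed_effects(df, y, x, unit):
    """Within (FE) estimator: demean outcome and regressor within units,
    then regress the deviations on each other."""
    dy = df[y] - df.groupby(unit)[y].transform("mean")
    dx = df[x] - df.groupby(unit)[x].transform("mean")
    return (dx * dy).sum() / (dx * dx).sum()
```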
  21. By: Jamie Fogel; Bernardo Modenesi
    Abstract: Recent advances in the literature on decomposition methods in economics have allowed for the identification and estimation of detailed wage gap decompositions. In this context, building reliable counterfactuals requires using tighter controls to ensure that similar workers are correctly identified, by making sure that important unobserved variables such as skills are controlled for, as well as comparing only workers with similar observable characteristics. This paper contributes to the wage decomposition literature in two main ways: (i) developing an economically principled, network-based approach to control for unobserved worker skill heterogeneity in the presence of potential discrimination; and (ii) extending existing generic decomposition tools to accommodate a potential lack of overlapping support in covariates between the groups being compared, which is likely to be the norm in more detailed decompositions. We illustrate the methodology by decomposing the gender wage gap in Brazil.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.04365&r=
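    For orientation, the generic two-fold decomposition that the paper extends looks like this (a minimal Oaxaca-Blinder sketch; the paper's contribution adds network-based skill controls and support adjustments on top of such tools):

```python
from sklearn.linear_model import LinearRegression

def oaxaca_blinder(X_m, y_m, X_f, y_f):
    """Two-fold decomposition of a mean wage gap into an 'explained'
    part (covariates, priced at group-m returns) and an 'unexplained'
    remainder (returns)."""
    fit_m = LinearRegression().fit(X_m, y_m)
    gap = y_m.mean() - y_f.mean()
    explained = fit_m.predict(X_m).mean() - fit_m.predict(X_f).mean()
    return gap, explained, gap - explained
```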
  22. By: Aurélien Bibaut; Nathan Kallus
    Abstract: Adaptive experiments such as multi-arm bandits adapt the treatment-allocation policy and/or the decision to stop the experiment to the data observed so far. This has the potential to improve outcomes for study participants within the experiment, to improve the chance of identifying best treatments after the experiment, and to avoid wasting data. Seen as an experiment (rather than just a continually optimizing system), it is still desirable to draw statistical inferences with frequentist guarantees. The concentration inequalities and union bounds that generally underlie adaptive experimentation algorithms can yield overly conservative inferences, but at the same time the asymptotic normality we would usually appeal to in non-adaptive settings can be imperiled by adaptivity. In this article we aim to explain why, how, and when adaptivity is in fact an issue for inference and, when it is, understand the various ways to fix it: reweighting to stabilize variances and recover asymptotic normality, always-valid inference based on joint normality of an asymptotic limiting sequence, and characterizing and inverting the non-normal distributions induced by adaptivity.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.01281&r=
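    One of the reweighting fixes can be sketched as follows: weight inverse-propensity terms by the square root of the (time-varying) assignment probability, which stabilizes variances under adaptive allocation. This is an illustrative implementation of the idea with assumed variable names, not the article's full estimator.

```python
import numpy as np

def stabilized_arm_mean(y, a, e, arm):
    """Adaptively weighted mean outcome of one arm: IPW terms weighted by
    sqrt(e_t), the variance-stabilizing choice. y: outcomes, a: assigned
    arms, e: assignment probabilities of `arm` at each time."""
    w = np.sqrt(e)                      # variance-stabilizing weights
    ipw = (a == arm) * y / e            # inverse-propensity-weighted terms
    return np.sum(w * ipw) / np.sum(w)
```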
  23. By: Jian He; Asma Khedher; Peter Spreij
    Abstract: In this paper we develop maximum likelihood (ML) based algorithms to calibrate the model parameters in credit rating transition models. Since credit rating transition models are not Gaussian linear models, the celebrated Kalman filter is not directly suitable for computing the likelihood of observed migrations. We therefore develop a Laplace approximation of the likelihood function, after which the Kalman filter can be used to compute the likelihood. This approach is applied to so-called high-default portfolios, in which the number of migrations (defaults) is large enough to obtain high accuracy of the Laplace approximation. By contrast, low-default portfolios have a limited number of observed migrations (defaults). Therefore, in order to calibrate low-default portfolios, we develop an ML algorithm using a particle filter (PF) and Gaussian process regression. Experiments show that both algorithms are efficient and produce accurate approximations of the likelihood function and the ML estimates of the model parameters.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00576&r=
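    The Gaussian building block reused after the Laplace approximation is the standard Kalman-filter likelihood recursion, sketched here for a scalar linear-Gaussian state-space model (generic; not the rating transition model itself):

```python
import numpy as np

def kalman_loglik(y, a, c, q, r, m0, p0):
    """Log-likelihood of the scalar model x_t = a*x_{t-1} + w_t (var q),
    y_t = c*x_t + v_t (var r), via the Kalman filter recursion."""
    ll, m, p = 0.0, m0, p0
    for yt in y:
        m, p = a * m, a * a * p + q          # predict state mean/variance
        s = c * c * p + r                    # innovation variance
        v = yt - c * m                       # innovation
        ll += -0.5 * (np.log(2 * np.pi * s) + v * v / s)
        k = p * c / s                        # Kalman gain
        m, p = m + k * v, (1 - k * c) * p    # update
    return ll
```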
  24. By: Estey, Clayton
    Abstract: I contribute to stochastic modeling methodology in a theoretical framework spanning core decisions in a model's lifetime. These are: predicting an out-of-sample unit's latent state even from non-series data, deciding when to start and stop learning about the state variable, and choosing models based on important trade-offs. States evolve from linear dynamics with time-varying predictors and coefficients (drift) and generalized continuous noise (diffusion). Coefficients must address misprediction costs, data complexity, and distributional uncertainty (ambiguity) about the state's diffusion and stopping time. I exactly solve a stochastic dynamic program robust to worst-case costs from both uncertainties. The Bellman optimal coefficients extend generalized ridge regression by out-of-sample components impacting value changes given state changes. Performance issues trigger sequential analysis of whether learning alternative models, given the effort, is better than keeping the baseline. Learning is method-general and stops in the fewest average attempts within specified decision errors. I derive preference functions for comparing models under state and cost-change constraints in order to decide on a model, as well as joint-time state and value distributions and other properties beneficial to modelers.
    Date: 2024–04–13
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:75fc9&r=

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.