New Economics Papers on Econometrics
By: | Annalivia Polselli |
Abstract: | When the assumption of homoskedasticity is violated, least squares estimators become inefficient and statistical inference conducted with invalid standard errors leads to misleading rejection rates. Despite a vast cross-sectional literature on the downward bias of robust standard errors, the problem is not extensively covered in the panel data framework. We investigate the consequences for statistical inference of the simultaneous presence of a small sample, heteroskedasticity, and data points with extreme values in the covariates ('good leverage points'). Focusing on one-way linear panel data models, we examine the asymptotic and finite sample properties of a battery of heteroskedasticity-consistent (HC) estimators using Monte Carlo simulations. We also propose a hybrid estimator of the variance-covariance matrix. Results show that conventional standard errors are always dominated by more conservative estimators of the variance, especially in small samples. In addition, all types of HC standard errors perform excellently in terms of the size and power of tests under homoskedasticity. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.17676&r=ecm |
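A minimal numpy sketch of the cross-sectional HC0-HC3 "sandwich" variance estimators whose panel analogues are compared in the entry above; the fixed-effects versions and the proposed hybrid estimator are not reproduced, and the function name is ours.

```python
# Illustrative sketch (not the paper's code): HC0-HC3 robust variance estimators for OLS.
import numpy as np

def hc_standard_errors(X, y, kind="HC1"):
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                                  # OLS residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)       # leverage values h_ii
    if kind == "HC0":
        omega = u**2
    elif kind == "HC1":
        omega = u**2 * n / (n - k)                    # degrees-of-freedom correction
    elif kind == "HC2":
        omega = u**2 / (1 - h)                        # leverage adjustment
    elif kind == "HC3":
        omega = u**2 / (1 - h)**2                     # most conservative in small samples
    else:
        raise ValueError(kind)
    meat = (X * omega[:, None]).T @ X                 # sum_i omega_i * x_i x_i'
    V = XtX_inv @ meat @ XtX_inv                      # sandwich variance
    return beta, np.sqrt(np.diag(V))
```

The HC2/HC3 forms inflate squared residuals at high-leverage observations, which is why they behave more conservatively when 'good leverage points' are present.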
By: | Joann Jasiak; Aryan Manafi Neyazi |
Abstract: | We examine the finite-sample performance of the Generalized Covariance (GCov) residual-based specification test for semiparametric models with i.i.d. errors. The residual-based multivariate portmanteau test statistic follows asymptotically a $\chi^2$ distribution when the model is estimated by the GCov estimator. The test is shown to perform well when applied to the univariate mixed causal-noncausal autoregressive (MAR), double autoregressive (DAR) and multivariate Vector Autoregressive (VAR) models. We also introduce a bootstrap procedure that provides the limiting distribution of the test statistic when the specification test is applied to a model estimated by maximum likelihood, or by approximate or quasi-maximum likelihood under a parametric assumption on the error distribution. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05373&r=ecm |
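A stylized sketch of a residual-based multivariate portmanteau check in the spirit of the entry above; the actual GCov statistic is built on generalized covariances of nonlinear residual transforms, so the lag choice, the omitted transforms, and the degrees of freedom below are illustrative assumptions only.

```python
# Stylized residual-based multivariate portmanteau check (illustrative, not the
# exact GCov statistic): sum of squared standardized residual autocovariances
# over H lags, compared with a chi-squared distribution.
import numpy as np
from scipy import stats

def portmanteau_test(residuals, H=10):
    e = residuals - residuals.mean(axis=0)
    T, K = e.shape
    S0_inv = np.linalg.inv(e.T @ e / T)          # inverse contemporaneous covariance
    Q = 0.0
    for h in range(1, H + 1):
        Sh = e[h:].T @ e[:-h] / T                # lag-h autocovariance matrix
        Q += np.trace(Sh.T @ S0_inv @ Sh @ S0_inv)
    Q *= T
    df = K * K * H                               # illustrative degrees of freedom
    return Q, stats.chi2.sf(Q, df)
```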
By: | Gregory Faletto |
Abstract: | To address the bias of the canonical two-way fixed effects estimator for difference-in-differences under staggered adoptions, Wooldridge (2021) proposed the extended two-way fixed effects estimator, which adds many parameters. However, this reduces efficiency. Restricting some of these parameters to be equal helps, but ad hoc restrictions may reintroduce bias. We propose a machine learning estimator with a single tuning parameter, fused extended two-way fixed effects (FETWFE), that enables automatic data-driven selection of these restrictions. We prove that under an appropriate sparsity assumption FETWFE identifies the correct restrictions with probability tending to one. We also prove the consistency, asymptotic normality, and oracle efficiency of FETWFE for two classes of heterogeneous marginal treatment effect estimators under either conditional or marginal parallel trends, and we prove consistency for two classes of conditional average treatment effects under conditional parallel trends. We demonstrate FETWFE in simulation studies and an empirical application. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05985&r=ecm |
By: | Demetrescu, Matei; Rodrigues, Paulo MM; Taylor, AM Robert |
Abstract: | We develop new tests for predictability, based on the Lagrange Multiplier [LM] principle, in the context of quantile regression [QR] models which allow for persistent and endogenous predictors driven by conditionally and/or unconditionally heteroskedastic errors. Of the extant predictive QR tests in the literature, only the moving blocks bootstrap implementation, due to Fan and Lee (2019), of the Wald-type test of Lee (2016) can allow for conditionally heteroskedastic errors in the context of a QR model with persistent predictors. In common with all other tests in the literature it cannot, however, allow for any form of unconditionally heteroskedastic behaviour in the errors. The LM-based approach we adopt in this paper is obtained from a simple auxiliary linear test regression which facilitates inference based on established instrumental variable methods. We demonstrate that, as a result, the tests we develop, based on either conventional or heteroskedasticity-consistent standard errors in the auxiliary regression, are robust under the null hypothesis of no predictability to conditional heteroskedasticity and to unconditional heteroskedasticity in the errors driving the predictors, with no need for bootstrap implementation. Tests are developed both for predictability at a single quantile, and also jointly over a set of quantiles. Simulation results highlight the superior finite sample size and power properties of our proposed LM tests over the tests of Lee (2016) and Fan and Lee (2019) for both conditionally and unconditionally heteroskedastic errors. An empirical application to the equity premium for the S&P 500 highlights the practical usefulness of our proposed tests, uncovering significant evidence of predictability in the left and right tails of the returns distribution for a number of predictors containing information on market or firm risk. |
Keywords: | Predictive regression, Conditional quantile, Unknown persistence, Endogeneity, Time-varying volatility |
Date: | 2024–01–03 |
URL: | http://d.repec.org/n?u=RePEc:esy:uefcwp:37486&r=ecm |
By: | Philipp Otto; Osman Doğan; Süleyman Taşpınar |
Abstract: | This paper explores the estimation of a dynamic spatiotemporal autoregressive conditional heteroscedasticity (ARCH) model. The log-volatility term in this model can depend on (i) the spatial lag of the log-squared outcome variable, (ii) the time-lag of the log-squared outcome variable, (iii) the spatiotemporal lag of the log-squared outcome variable, (iv) exogenous variables, and (v) the unobserved heterogeneity across regions and time, i.e., the regional and time fixed effects. We examine the small and large sample properties of two quasi-maximum likelihood estimators and a generalized method of moments estimator for this model. We first summarize the theoretical properties of these estimators and then compare their finite sample properties through Monte Carlo simulations. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05898&r=ecm |
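The log-volatility specification listed in (i)-(v) above can be collected in a single equation; the notation below is ours, chosen to match the verbal description rather than the paper, with $w_{ij}$ the spatial weights, $\mu_i$ and $\alpha_t$ the regional and time fixed effects, and $\varepsilon_{it}$ an i.i.d. error.

```latex
y_{it} = h_{it}^{1/2}\,\varepsilon_{it}, \qquad
\log h_{it} = \rho \sum_{j} w_{ij}\log y_{jt}^{2}
            + \gamma \log y_{i,t-1}^{2}
            + \delta \sum_{j} w_{ij}\log y_{j,t-1}^{2}
            + x_{it}'\beta + \mu_i + \alpha_t .
```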
By: | Paul Clarke; Annalivia Polselli |
Abstract: | Machine Learning (ML) algorithms are powerful data-driven tools for approximating high-dimensional or non-linear nuisance functions, which are useful in practice because the true functional form of the predictors is ex-ante unknown. In this paper, we develop estimators of policy interventions from panel data which allow for non-linear effects of the confounding regressors, and investigate the performance of these estimators using three well-known ML algorithms, specifically, LASSO, classification and regression trees, and random forests. We use Double Machine Learning (DML) (Chernozhukov et al., 2018) for the estimation of causal effects of homogeneous treatments with unobserved individual heterogeneity (fixed effects) and no unobserved confounding by extending Robinson (1988)'s partially linear regression model. We develop three alternative approaches for handling unobserved individual heterogeneity based on extending the within-group estimator, first-difference estimator, and correlated random effects estimator (Mundlak, 1978) to non-linear models. Using Monte Carlo simulations, we find that conventional least squares estimators can perform well even if the data generating process is non-linear, but there are substantial performance gains in terms of bias reduction under a process where the true effect of the regressors is non-linear and discontinuous. However, for the same scenarios, we also find -- despite extensive hyperparameter tuning -- inference to be problematic for both tree-based learners because they lead to highly non-normal estimator distributions and severely underestimated estimator variances. This contradicts the performance of trees in other circumstances and requires further investigation. Finally, we provide an illustrative example of DML for observational panel data showing the impact of the introduction of the national minimum wage in the UK. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.08174&r=ecm |
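A hedged sketch of one of the approaches described above: DML for a partially linear panel model after a within-group transformation, with random forests as the nuisance learner. The function names, the demeaning routine, and the tuning choices are illustrative assumptions, not the authors' implementation.

```python
# Hedged DML sketch: within-transform, cross-fit nuisances, residual-on-residual.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def within_transform(a, ids):
    out = a.astype(float).copy()
    for i in np.unique(ids):                      # subtract unit means (fixed effects)
        out[ids == i] -= out[ids == i].mean(axis=0)
    return out

def dml_plr_fe(y, d, X, ids, n_folds=5):
    y_t, d_t, X_t = (within_transform(v, ids) for v in (y, d, X))
    res_y, res_d = np.zeros_like(y_t), np.zeros_like(d_t)
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X_t):
        m = RandomForestRegressor(n_estimators=200, random_state=0)
        g = RandomForestRegressor(n_estimators=200, random_state=0)
        m.fit(X_t[train], d_t[train])             # nuisance: E[D | X]
        g.fit(X_t[train], y_t[train])             # nuisance: E[Y | X]
        res_d[test] = d_t[test] - m.predict(X_t[test])
        res_y[test] = y_t[test] - g.predict(X_t[test])
    return (res_d @ res_y) / (res_d @ res_d)      # Robinson-style treatment effect
```

Cross-fitting keeps the nuisance fits out of the sample on which the residuals are formed, which is what makes the final least-squares step valid despite the ML first stage.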
By: | Andrew Chesher; Adam Rosen; Yuanqi Zhang |
Abstract: | Many structural econometric models include latent variables on whose probability distributions one may wish to place minimal restrictions. Leading examples in panel data models are individual-specific variables sometimes treated as “fixed effects” and, in dynamic models, initial conditions. This paper presents a generally applicable method for characterizing sharp identified sets when models place no restrictions on the probability distribution of certain latent variables and no restrictions on their covariation with other variables. In our analysis, latent variables on which restrictions are undesirable are removed, yielding econometric analysis that is robust to misspecification of the distributional restrictions that are commonplace in the applied panel data literature. Endogenous explanatory variables are easily accommodated. Examples of application to some static and dynamic binary, ordered and multiple discrete choice and censored panel data models are presented. |
Date: | 2024–01–08 |
URL: | http://d.repec.org/n?u=RePEc:azt:cemmap:01/24&r=ecm |
By: | Matias D. Cattaneo; Fang Han; Zhexiao Lin |
Abstract: | In two influential contributions, Rosenbaum (2005, 2020) advocated for using the distances between component-wise ranks, instead of the original data values, to measure covariate similarity when constructing matching estimators of average treatment effects. While the intuitive benefits of using covariate ranks for matching estimation are apparent, there is no theoretical understanding of such procedures in the literature. We fill this gap by demonstrating that Rosenbaum's rank-based matching estimator, when coupled with a regression adjustment, enjoys the properties of double robustness and semiparametric efficiency without the need to enforce restrictive covariate moment assumptions. Our theoretical findings further emphasize the statistical virtues of employing ranks for estimation and inference, more broadly aligning with the insights put forth by Peter Bickel in his 2004 Rietz lecture (Bickel, 2004). |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.07683&r=ecm |
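An illustrative sketch of rank-based nearest-neighbour matching with a regression adjustment, as a simplified stand-in for the estimator analyzed in the entry above; the distance metric, number of neighbours, and the linear adjustment model are assumptions, not Rosenbaum's exact procedure.

```python
# Illustrative rank-based matching estimator of the ATT with bias correction.
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import LinearRegression

def rank_match_att(y, d, X, n_neighbors=1):
    R = np.column_stack([rankdata(col) for col in X.T])   # component-wise ranks
    treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
    mu0 = LinearRegression().fit(R[control], y[control])  # outcome model on controls
    effects = []
    for i in treated:
        dist = np.abs(R[control] - R[i]).sum(axis=1)      # distance between rank vectors
        j = control[np.argsort(dist)[:n_neighbors]]       # nearest control units
        adj = mu0.predict(R[[i]])[0] - mu0.predict(R[j]).mean()   # regression adjustment
        effects.append(y[i] - (y[j].mean() + adj))
    return float(np.mean(effects))                        # ATT estimate
```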
By: | Augusto Cerqua; Marco Letta; Fiammetta Menchetti |
Abstract: | Without a credible control group, the most widespread methodologies for estimating causal effects cannot be applied. To fill this gap, we propose the Machine Learning Control Method (MLCM), a new approach for causal panel analysis based on counterfactual forecasting with machine learning. The MLCM estimates policy-relevant causal parameters in short- and long-panel settings without relying on untreated units. We formalize identification in the potential outcomes framework and then provide estimation based on supervised machine learning algorithms. To illustrate the advantages of our estimator, we present simulation evidence and an empirical application on the impact of the COVID-19 crisis on educational inequality in Italy. We implement the proposed method in the companion R package MachineControl. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05858&r=ecm |
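A conceptual sketch of the idea in the entry above, counterfactual forecasting without untreated units: learn the pre-intervention outcome dynamics, forecast the post-intervention counterfactual, and take the difference with observed outcomes. The learner and lag structure are assumptions; this is not the MachineControl implementation.

```python
# Conceptual sketch only: forecast the no-intervention counterfactual from
# pre-period dynamics and difference it against observed post-period outcomes.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def mlcm_effects(panel_pre, y_post):
    """panel_pre: (N, T0) pre-period outcomes; y_post: (N,) observed post-period outcomes."""
    X_train = panel_pre[:, :-1]                 # earlier pre-period outcomes as predictors
    y_train = panel_pre[:, -1]                  # last pre-period outcome as target
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    y_hat = model.predict(panel_pre[:, 1:])     # shift the window forward one period
    return y_post - y_hat                       # unit-level estimated causal effects
```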
By: | Annalivia Polselli |
Abstract: | The presence of units with extreme values in the dependent and/or independent variables (i.e., vertical outliers, leveraged data) has the potential to severely bias regression coefficients and/or standard errors. This is common with short panel data because the researcher cannot rely on asymptotic theory. Examples include cross-country studies, cell-group analyses, and field or laboratory experimental studies, where the researcher is forced to use few cross-sectional observations repeated over time due to the structure of the data or research design. Available diagnostic tools may fail to properly detect these anomalies because they are not designed for panel data. In this paper, we formalise statistical measures for panel data models with fixed effects to quantify the degree of leverage and outlyingness of units, and the joint and conditional influences of pairs of units. We first develop a method to visually detect anomalous units in a panel data set and identify their type. Second, we investigate the effect of these units on LS estimates, and on other units' influence on the estimated parameters. To illustrate and validate the proposed method, we use a synthetic data set contaminated with different types of anomalous units. We also provide an empirical example. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05700&r=ecm |
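An illustrative sketch of leverage and outlyingness summaries for a fixed-effects panel, in the spirit of the entry above: within-transform the data, compute observation-level hat values and standardized residuals, and average them by unit. The paper's formal measures, pairwise influence statistics, and cut-offs are not reproduced.

```python
# Illustrative unit-level leverage/outlyingness summaries after the within transform.
import numpy as np

def unit_leverage_outlyingness(y, X, ids):
    yt, Xt = y.astype(float).copy(), X.astype(float).copy()
    for i in np.unique(ids):                         # within (demeaning) transformation
        m = ids == i
        yt[m] -= yt[m].mean()
        Xt[m] -= Xt[m].mean(axis=0)
    XtX_inv = np.linalg.inv(Xt.T @ Xt)
    u = yt - Xt @ (XtX_inv @ Xt.T @ yt)              # within-regression residuals
    h = np.einsum("ij,jk,ik->i", Xt, XtX_inv, Xt)    # observation-level leverage
    r = u / (u.std(ddof=Xt.shape[1]) * np.sqrt(1 - h))   # standardized residuals
    units = np.unique(ids)
    lev = np.array([h[ids == i].mean() for i in units])          # flags leverage points
    out = np.array([(r[ids == i] ** 2).mean() for i in units])   # flags vertical outliers
    return units, lev, out
```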
By: | Lihua Lei; Brad Ross |
Abstract: | We develop a more flexible approach for identifying and estimating average counterfactual outcomes when several but not all possible outcomes are observed for each unit in a large cross section. Such settings include event studies and studies of outcomes of "matches" between agents of two types, e.g. workers and firms or people and places. When outcomes are generated by a factor model that allows for low-dimensional unobserved confounders, our method yields consistent, asymptotically normal estimates of counterfactual outcome means under asymptotics that fix the number of outcomes as the cross section grows and general outcome missingness patterns, including those not accommodated by existing methods. Our method is also computationally efficient, requiring only a single eigendecomposition of a particular aggregation of any factor estimates constructed using subsets of units with the same observed outcomes. In a semi-synthetic simulation study based on matched employer-employee data, our method performs favorably compared to a Two-Way-Fixed-Effects-model-based estimator. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.07520&r=ecm |
By: | William Liu |
Abstract: | I develop the theory around using control functions to instrument hazard models, allowing the inclusion of endogenous (e.g., mismeasured) regressors. Simple discrete-data hazard models can be expressed as binary choice panel data models, and the widespread Prentice and Gloeckler (1978) discrete-data proportional hazards model can specifically be expressed as a complementary log-log model with time fixed effects. This allows me to recast it as GMM estimation and its instrumented version as sequential GMM estimation in a Z-estimation (non-classical GMM) framework; this framework can then be leveraged to establish asymptotic properties and sufficient conditions. Whilst this paper focuses on the Prentice and Gloeckler (1978) model, the methods and discussion developed here can be applied more generally to other hazard models and binary choice models. I also introduce my Stata command for estimating a complementary log-log model instrumented via control functions (available as ivcloglog on SSC), which allows practitioners to easily instrument the Prentice and Gloeckler (1978) model. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.03165&r=ecm |
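A hedged sketch of the two-stage control-function idea for a discrete-data cloglog hazard with time fixed effects on person-period data. The variable names and the use of statsmodels are illustrative, the reported second-stage standard errors are not corrected for the first-stage estimation (the paper's sequential GMM framework handles that), and this is not the ivcloglog code.

```python
# Hedged control-function sketch for an instrumented cloglog hazard model.
import pandas as pd
import statsmodels.api as sm

def cf_cloglog(df, y="event", endog="x_endog", instr="z", exog=("x1",), time="period"):
    # Stage 1: regress the endogenous regressor on instrument(s) and controls
    X1 = sm.add_constant(df[[instr, *exog]].astype(float))
    v_hat = df[endog] - sm.OLS(df[endog].astype(float), X1).fit().predict(X1)
    # Stage 2: cloglog hazard with time dummies, adding the first-stage residual
    X2 = pd.concat([df[[endog, *exog]],
                    pd.get_dummies(df[time], prefix="t", drop_first=True),
                    v_hat.rename("v_hat")], axis=1).astype(float)
    X2 = sm.add_constant(X2)
    family = sm.families.Binomial(link=sm.families.links.CLogLog())
    return sm.GLM(df[y].astype(float), X2, family=family).fit()
```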
By: | Jean-Jacques Forneron |
Abstract: | When fitting a particular economic model on a sample of data, the model may turn out to be heavily misspecified for some observations. This can happen because of unmodelled idiosyncratic events, such as an abrupt but short-lived change in policy. These outliers can significantly alter estimates and inferences. Robust estimation is desirable to limit their influence, but for skewed data it induces another bias which can also invalidate the estimation and inferences. This paper proposes a robust GMM estimator with a simple bias correction that does not degrade robustness significantly. The paper provides finite-sample robustness bounds, and asymptotic uniform equivalence with an oracle that discards all outliers. Consistency and asymptotic normality ensue from that result. An application to the "Price Puzzle", which finds that inflation increases when monetary policy tightens, illustrates the concerns and the method. The proposed estimator finds the intuitive result: tighter monetary policy leads to a decline in inflation. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05342&r=ecm |
By: | Peter Knaus; Sylvia Frühwirth-Schnatter |
Abstract: | Many current approaches to shrinkage within the time-varying parameter framework assume that each state is equipped with only one innovation variance for all time points. Sparsity is then induced by shrinking this variance towards zero. We argue that this is not sufficient if the states display large jumps or structural changes, something which is often the case in time series analysis. To remedy this, we propose the dynamic triple gamma prior, a stochastic process that has a well-known triple gamma marginal form, while still allowing for autocorrelation. Crucially, the triple gamma has many interesting limiting and special cases (including the horseshoe shrinkage prior) which can also be chosen as the marginal distribution. Not only is the marginal form well understood, we further derive many interesting properties of the dynamic triple gamma, which showcase its dynamic shrinkage characteristics. We develop an efficient Markov chain Monte Carlo algorithm to sample from the posterior and demonstrate the performance through sparse covariance modeling and forecasting of the returns of the components of the EURO STOXX 50 index. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.10487&r=ecm |
By: | Marcin Pitera; Thorsten Schmidt; Łukasz Stettner |
Abstract: | The assessment of risk based on historical data faces many challenges, in particular due to the limited amount of available data, lack of stationarity, and heavy tails. While estimation on a short-term horizon for less extreme percentiles tends to be reasonably accurate, extending it to longer time horizons or extreme percentiles poses significant difficulties. The application of theoretical risk scaling laws to address this issue has been extensively explored in the literature. This paper presents a novel approach to scaling a given risk estimator, ensuring that the estimated capital reserve is robust and conservatively estimates the risk. We develop a simple statistical framework that allows efficient risk scaling and has a direct link to backtesting performance. Our method allows time scaling beyond the conventional square-root-of-time rule, enables risk transfers, such as those involved in economic capital allocation, and could be used for unbiased risk estimation in small sample settings. To demonstrate the effectiveness of our approach, we provide various examples related to the estimation of value-at-risk and expected shortfall together with a short empirical study analysing the impact of our method. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05655&r=ecm |
By: | Lajos Horváth; Lorenzo Trapani |
Abstract: | We propose a family of weighted statistics based on the CUSUM process of the WLS residuals for the online detection of changepoints in a Random Coefficient Autoregressive model, using both the standard CUSUM and the Page-CUSUM process. We derive the asymptotics under the null of no changepoint for all possible weighting schemes, including the case of the standardised CUSUM, for which we derive a Darling-Erdős-type limit theorem; our results guarantee the procedure-wise size control under both open-ended and closed-ended monitoring. In addition to considering the standard RCA model with no covariates, we also extend our results to the case of exogenous regressors. Our results can be applied irrespective of (and with no prior knowledge required as to) whether the observations are stationary or not, and irrespective of whether they change into a stationary or nonstationary regime. Hence, our methodology is particularly suited to detect the onset, or the collapse, of a bubble or an epidemic. Our simulations show that our procedures, especially when standardising the CUSUM process, can ensure very good size control and short detection delays. We complement our theory by studying the online detection of breaks in epidemiological and housing price series. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.11710&r=ecm |
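A stylized open-ended monitoring sketch in the spirit of the entry above: after a training sample of size m, a change is flagged when the cumulative sum of the new residuals crosses a boundary. The RCA-specific WLS residuals, weighting schemes, and critical values of the paper are not reproduced; the boundary below is an illustrative placeholder.

```python
# Stylized open-ended CUSUM monitor on a sequence of residuals.
import numpy as np

def cusum_monitor(residuals, m, crit=1.0):
    e = np.asarray(residuals, dtype=float)
    sigma = e[:m].std(ddof=1)                        # scale estimated on the training sample
    csum = np.cumsum(e[m:])                          # cumulative sum of monitored residuals
    k = np.arange(1, len(csum) + 1)
    boundary = crit * np.sqrt(m) * (1 + k / m)       # illustrative linear boundary
    hits = np.where(np.abs(csum) / sigma > boundary)[0]
    return int(m + hits[0] + 1) if hits.size else None   # first detection time, if any
```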
By: | Álvaro Cartea; Gerardo Duran-Martin; Leandro Sánchez-Betancourt |
Abstract: | This paper develops a framework to predict toxic trades that a broker receives from her clients. Toxic trades are predicted with a novel online Bayesian method which we call the projection-based unification of last-layer and subspace estimation (PULSE). PULSE is a fast and statistically-efficient online procedure to train a Bayesian neural network sequentially. We employ a proprietary dataset of foreign exchange transactions to test our methodology. PULSE outperforms standard machine learning and statistical methods when predicting if a trade will be toxic; the benchmark methods are logistic regression, random forests, and a recursively-updated maximum-likelihood estimator. We devise a strategy for the broker who uses toxicity predictions to internalise or to externalise each trade received from her clients. Our methodology can be implemented in real-time because it takes less than one millisecond to update parameters and make a prediction. Compared with the benchmarks, PULSE attains the highest PnL and the largest avoided loss for the horizons we consider. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05827&r=ecm |
By: | José Luis Montiel Olea; Chen Qiu; Jörg Stoye |
Abstract: | We apply classical statistical decision theory to a large class of treatment choice problems with partial identification, revealing important theoretical and practical challenges but also interesting research opportunities. The challenges are: In a general class of problems with Gaussian likelihood, all decision rules are admissible; it is maximin-welfare optimal to ignore all data; and, for severe enough partial identification, there are infinitely many minimax-regret optimal decision rules, all of which sometimes randomize the policy recommendation. The opportunities are: We introduce a profiled regret criterion that can reveal important differences between rules and render some of them inadmissible; and we uniquely characterize the minimax-regret optimal rule that least frequently randomizes. We apply our results to aggregation of experimental estimates for policy adoption, to extrapolation of Local Average Treatment Effects, and to policy making in the presence of omitted variable bias. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.17623&r=ecm |
By: | Niccolò Lomys (CSEF and Università degli Studi di Napoli Federico II.); Emanuele Tarantino (Luiss University, EIEF, and CEPR) |
Abstract: | We theoretically study the problem of a researcher seeking to identify and estimate the search cost distribution when a share of agents in the population observes some peers’ choices. To begin with, we show that social information changes agents’ optimal search and, as a result, the distributions of observable outcomes identifying the search model. Consequently, neglecting social information leads to non-identification of the search cost distribution. Whether, as a result, search frictions are under- or overestimated depends on the dataset’s content. Next, we present empirical strategies that restore identification and correct estimation. First, we show how to recover robust bounds on the search cost distribution by imposing only minimal assumptions on agents’ social information. Second, we explore how leveraging additional data or stronger assumptions can help obtain more informative estimates. |
Keywords: | Search & Learning; Social Information; Identification; Networks; Robustness; Partial Identification. |
JEL: | C1 C5 C8 D1 D6 D8 |
Date: | 2023–11–24 |
URL: | http://d.repec.org/n?u=RePEc:sef:csefwp:694&r=ecm |
By: | Narine Yegoryan (HU Berlin); Daniel Guhl (HU Berlin); Friederike Paetz (Clausthal University of Technology) |
Abstract: | Identifying consumer heterogeneity is a central topic in marketing. While the main focus has been on developing models and estimation procedures that allow uncovering consumer heterogeneity in preferences, a new stream of literature has focused on models that account for consumers’ heterogeneous attribute information usage. These models acknowledge that consumers may ignore subsets of attributes when making decisions, also commonly termed "attribute non-attendance" (ANA). In this paper, we explore the performance of choice models that explicitly account for ANA across ten different applications, which vary in terms of the choice context, the associated financial risk, and the complexity of the purchase decision. We systematically compare five different models that either neglect ANA and preference heterogeneity, account only for one at a time, or account for both across these applications. First, we showcase that ANA occurs across all ten applications. It prevails even in simple settings and high-stakes decisions. Second, we contribute by examining the direction and the magnitude of biases in parameters. We find that the location of zero with regard to the preference distribution affects the expected direction of biases in preference heterogeneity (i.e., variance) parameters. Neglecting ANA when the preference distribution is away from zero, often related to whether the attribute enables vertical differentiation of products, may lead to an overestimation of preference heterogeneity. In contrast, neglecting ANA when the preference distribution spreads on both sides of zero, often related to attributes enabling horizontal differentiation, may lead to an underestimation of preference heterogeneity. Lastly, we present how the empirical results translate into managerial implications and provide guidance to practitioners on when these models are beneficial. |
Keywords: | choice modeling; preference heterogeneity; attribute non-attendance; inattention; |
Date: | 2023–12–15 |
URL: | http://d.repec.org/n?u=RePEc:rco:dpaper:482&r=ecm |
By: | Daniel Ngo; Keegan Harris; Anish Agarwal; Vasilis Syrgkanis; Zhiwei Steven Wu |
Abstract: | We consider a panel data setting in which one observes measurements of units over time, under different interventions. Our focus is on the canonical family of synthetic control methods (SCMs) which, after a pre-intervention time period when all units are under control, estimate counterfactual outcomes for test units in the post-intervention time period under control by using data from donor units who have remained under control for the entire post-intervention period. In order for the counterfactual estimate produced by synthetic control for a test unit to be accurate, there must be sufficient overlap between the outcomes of the donor units and the outcomes of the test unit. As a result, a canonical assumption in the literature on SCMs is that the outcomes for the test units lie within either the convex hull or the linear span of the outcomes for the donor units. However, despite their ubiquity, such overlap assumptions may not always hold, as is the case when, e.g., units select their own interventions and different subpopulations of units prefer different interventions a priori. We shed light on this typically overlooked assumption, and we address this issue by incentivizing units with different preferences to take interventions they would not normally consider. Specifically, we propose an SCM for incentivizing exploration in panel data settings which provides incentive-compatible intervention recommendations to units by leveraging tools from information design and online learning. Using our algorithm, we show how to obtain valid counterfactual estimates using SCMs without the need for an explicit overlap assumption on the unit outcomes. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.16307&r=ecm |
By: | Viola Monostoriné Grolmusz (Central Bank of Hungary) |
Abstract: | Forecast combinations have been repeatedly shown to outperform individual professional forecasts and complicated time series models in accuracy. Their ease of use and accuracy make them important tools for policy decisions. While simple combinations work remarkably well in some situations, time-varying combinations can be even more accurate in other real-life scenarios involving economic forecasts. This paper uses a regime switching framework to model the time-variation in forecast combination weights. I use an optimization problem based on asymmetric loss functions in deriving optimal forecast combination weights. The switching framework is based on the work of Elliott and Timmermann (2005); however, I extend their setup by using asymmetric quadratic loss in the optimization problem. This is an important extension, since with my setup it is possible to quantify and analyze optimal forecast biases for different directions and levels of asymmetry in the loss function, contributing to the vast literature on forecast bias. I interpret the equations for the optimal weights through analytical examples and examine how the weights depend on the model parameters, the level of asymmetry of the loss function, the transition probabilities, and the starting state. |
Keywords: | Forecast combination, Loss functions, Time-varying combination weights, Markov switching. |
JEL: | C53 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:mnb:wpaper:2023/3&r=ecm |
By: | Igor Ferreira Batista Martins; Hedibert Freitas Lopes |
Abstract: | This paper expands traditional stochastic volatility models by allowing for time-varying skewness without imposing it. While dynamic asymmetry may capture the likely direction of future asset returns, it comes at the risk of leading to overparameterization. Our proposed approach mitigates this concern by leveraging sparsity-inducing priors to automatically select the skewness parameter as being dynamic, static or zero in a data-driven framework. We consider two empirical applications. First, in a bond yield application, dynamic skewness captures interest rate cycles of monetary easing and tightening, being partially explained by central banks' mandates. In a currency modeling framework, our model indicates no skewness in the carry factor after accounting for stochastic volatility, which supports the idea of carry crashes being the result of volatility surges instead of dynamic skewness. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.00282&r=ecm |
By: | Milan Ficura (Faculty of Finance and Accounting, Prague University of Economics and Business, Czech Republic); Jiri Witzany (Faculty of Finance and Accounting, Prague University of Economics and Business, Czech Republic.) |
Abstract: | We show how deep neural networks can be used to calibrate the parameters of Stochastic-Volatility Jump-Diffusion (SVJD) models to historical asset return time series. 1-Dimensional Convolutional Neural Networks (1D-CNN) are used for that purpose. The accuracy of the deep learning approach is compared with machine learning methods based on shallow neural networks and hand-crafted features, and with commonly used statistical approaches such as MCMC and approximate MLE. The deep learning approach is found to be accurate and robust, outperforming the other approaches in simulation tests. The main advantage of the deep learning approach is that it is fully generic and can be applied to any SVJD model from which simulations can be drawn. An additional advantage is the speed of the deep learning approach in situations when the parameter estimation needs to be repeated on new data. In these situations, the trained neural network can be used to estimate the SVJD model parameters almost instantaneously. |
Keywords: | Stochastic volatility, price jumps, SVJD, neural networks, deep learning, CNN |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2023_36&r=ecm |
By: | K. B. Gubbels; J. Y. Ypma; C. W. Oosterlee |
Abstract: | We introduce a class of copulas that we call Principal Component Copulas. This class is intended to combine the strong points of copula-based techniques with principal component-based models, which results in flexibility when modelling tail dependence along the most important directions in multivariate data. The proposed techniques have conceptual similarities and technical differences with the increasingly popular class of factor copulas. Such copulas can generate complex dependence structures and also perform well in high dimensions. We show that Principal Component Copulas give rise to practical and technical advantages compared to other techniques. We perform a simulation study and apply the copula to multivariate return data. The copula class offers the possibility of avoiding the curse of dimensionality when estimating very large copula models, and it performs particularly well on aggregate measures of tail risk, which is of importance for capital modeling. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.13195&r=ecm |
By: | Yuan Liao; Xinjie Ma; Andreas Neuhierl; Zhentao Shi |
Abstract: | This paper addresses a key question in economic forecasting: does pure noise truly lack predictive power? Economists typically conduct variable selection to eliminate noise from predictors. Yet, we prove a compelling result: in most economic forecasts, the inclusion of noise variables in predictions yields greater benefits than their exclusion. Furthermore, if the total number of predictors is not sufficiently large, intentionally adding more noise variables yields superior forecast performance, outperforming benchmark predictors relying on dimension reduction. The intuition lies in economic predictive signals being densely distributed among regression coefficients, maintaining modest forecast bias while diversifying away overall variance, even when a significant proportion of predictors constitute pure noise. One of our empirical demonstrations shows that intentionally adding 300 to 6,000 pure noise variables to the Welch and Goyal (2008) dataset achieves a noteworthy 10% out-of-sample R-squared in forecasting the annual U.S. equity premium. The performance surpasses the majority of sophisticated machine learning models. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.05593&r=ecm |
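An illustrative experiment in the spirit of the entry above: append pure-noise columns to a small set of predictors and evaluate the out-of-sample R-squared of a least-squares-type forecast (ridge-regularized here for numerical stability when columns outnumber observations). The noise count and tuning are placeholders, not the paper's setup.

```python
# Illustrative sketch: does appending pure noise columns change out-of-sample fit?
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def oos_r2_with_noise(X, y, n_noise=300, train_frac=0.7, seed=0):
    rng = np.random.default_rng(seed)
    Z = np.hstack([X, rng.standard_normal((X.shape[0], n_noise))])  # add noise columns
    cut = int(train_frac * len(y))                                  # time-ordered split
    model = Ridge(alpha=1.0).fit(Z[:cut], y[:cut])
    return r2_score(y[cut:], model.predict(Z[cut:]))                # out-of-sample R^2
```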
By: | Masahiro Kato |
Abstract: | We address the problem of best arm identification (BAI) with a fixed budget for two-armed Gaussian bandits. In BAI, given multiple arms, we aim to find the best arm, an arm with the highest expected reward, through an adaptive experiment. Kaufmann et al. (2016) develop a lower bound for the probability of misidentifying the best arm. They also propose a strategy, assuming that the variances of rewards are known, and show that it is asymptotically optimal in the sense that its probability of misidentification matches the lower bound as the budget approaches infinity. However, an asymptotically optimal strategy is unknown when the variances are unknown. For this open issue, we propose a strategy that estimates variances during an adaptive experiment and draws arms with a ratio of the estimated standard deviations. We refer to this strategy as the Neyman Allocation (NA)-Augmented Inverse Probability weighting (AIPW) strategy. We then demonstrate that this strategy is asymptotically optimal by showing that its probability of misidentification matches the lower bound when the budget approaches infinity and the gap between the expected rewards of the two arms approaches zero (the small-gap regime). Our results suggest that under the worst-case scenario characterized by the small-gap regime, our strategy, which employs estimated variances, is asymptotically optimal even when the variances are unknown. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.12741&r=ecm |
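A simplified sketch of the sampling rule described above: after a short warm-up, draw each arm with probability proportional to its estimated standard deviation (Neyman allocation) and recommend the arm with the higher sample mean. The AIPW adjustment used for the paper's final recommendation is omitted, and the interface is an assumption.

```python
# Simplified Neyman-allocation sampling rule for two-armed fixed-budget BAI.
import numpy as np

def neyman_allocation_bai(pull, budget, warmup=10, seed=0):
    """pull(arm) returns one reward draw from arm 0 or arm 1."""
    rng = np.random.default_rng(seed)
    rewards = {0: [], 1: []}
    for t in range(budget):
        if t < 2 * warmup:
            arm = t % 2                                  # warm-up: alternate arms
        else:
            s0 = np.std(rewards[0], ddof=1)
            s1 = np.std(rewards[1], ddof=1)
            arm = int(rng.random() < s1 / (s0 + s1 + 1e-12))  # draw arm 1 w.p. s1/(s0+s1)
        rewards[arm].append(pull(arm))
    return int(np.mean(rewards[1]) > np.mean(rewards[0]))   # recommended best arm
```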
By: | Agathe Sadeghi; Achintya Gopal; Mohammad Fesanghary |
Abstract: | A deeper comprehension of financial markets necessitates understanding not only the statistical dependencies among various entities but also the causal dependencies. This paper extends the Constraint-based Causal Discovery from Heterogeneous Data algorithm to account for lagged relationships in time-series data (an algorithm we call CD-NOTS), shedding light on the complex causal relations between different financial assets and variables. We compare the performance of different algorithmic choices, such as the choice of conditional independence test, to give general advice on the effective way to use CD-NOTS. Using the results from the simulated data, we apply CD-NOTS to a broad range of indices and factors in order to identify causal connections among the entities, thereby showing how causal discovery can serve as a valuable tool for factor-based investing, portfolio diversification, and comprehension of market dynamics. Further, we show that our algorithm is a more effective alternative to other causal discovery algorithms, since its assumptions are more realistic for financial data, a conclusion we find to be statistically significant. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2312.17375&r=ecm |
By: | Yoosoon Chang (Department of Economics, Indiana University); Yongok Choi (School of Economics, Chung-Ang University); Chang Sik Kim (Department of Economics, Sungkyunkwan University); J. Isaac Miller (Department of Economics, University of Missouri); Joon Y. Park (Department of Economics, Indiana University) |
Abstract: | We employ a semiparametric functional coefficient panel approach to allow an economic relationship of interest to have both country-specific heterogeneity and a common component that may be nonlinear in the covariate and may vary over time. Surfaces of the common component of coefficients and partial derivatives (elasticities) are estimated and then decomposed by functional principal components, and we introduce a bootstrap-based procedure for inference on the loadings of the functional principal components. Applying this approach to national energy-GDP elasticities, we find that elasticities are driven by common components that are distinct across two groups of countries yet have leading functional principal components that share similarities. The groups roughly correspond to OECD and non-OECD countries, but we utilize a novel methodology to regroup countries based on common energy consumption patterns to minimize root mean squared error within groups. The common component of the group containing more developed countries has an additional functional principal component that decreases the elasticity of the wealthiest countries in recent decades. |
Keywords: | energy consumption, energy-GDP elasticity, partially linear semiparametric panel model, functional coefficient panel model |
JEL: | C14 C23 C51 Q43 |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:umc:wpaper:2401&r=ecm |
By: | Christopher Palmer |
Abstract: | This paper develops a control-function methodology accounting for endogenous or mismeasured regressors in hazard models. I provide sufficient identifying assumptions and regularity conditions for the estimator to be consistent and asymptotically normal. Applying my estimator to the subprime mortgage crisis, I quantify what caused the foreclosure rate to triple across the 2003-2007 subprime cohorts. To identify the elasticity of default with respect to housing prices, I use various home-price instruments including historical variation in home-price cyclicality. Loose credit played a significant role in the crisis, but much of the increase in defaults across cohorts was caused by home-price declines unrelated to lending standards, with a 10% decline in home prices increasing subprime mortgage default rates by 50%. |
JEL: | C26 C41 G01 G21 R31 R38 |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32000&r=ecm |