
on Econometrics 
By:  Fang, Qin; Guo, Shaojun; Qiao, Xinghao 
Abstract:  Covariance function estimation is a fundamental task in multivariate functional data analysis and arises in many applications. In this paper, we consider estimating sparse covariance functions for high-dimensional functional data, where the number of random functions p is comparable to, or even larger than, the sample size n. Aided by the Hilbert–Schmidt norm of functions, we introduce a new class of functional thresholding operators that combine functional versions of thresholding and shrinkage, and propose the adaptive functional thresholding estimator by incorporating the variance effects of individual entries of the sample covariance function into functional thresholding. To handle the practical scenario where curves are partially observed with errors, we also develop a nonparametric smoothing approach to obtain the smoothed adaptive functional thresholding estimator and its binned implementation to accelerate the computation. We investigate the theoretical properties of our proposals when p grows exponentially with n under both fully and partially observed functional scenarios. Finally, we demonstrate that the proposed adaptive functional thresholding estimators significantly outperform the competitors through extensive simulations and the functional connectivity analysis of two neuroimaging datasets. 
Keywords:  binning; high-dimensional functional data; functional connectivity; functional sparsity; local linear smoothing; partially observed functional data 
JEL:  C1 
Date:  2023–05–26 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:118700&r=ecm 
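The adaptive thresholding idea has a simple scalar-matrix analogue that fixes the intuition: soft-threshold each off-diagonal entry of a sample covariance matrix by an amount proportional to an estimate of that entry's own standard error. A minimal sketch under that simplification — this is not the paper's functional operator, which acts on Hilbert–Schmidt norms of covariance-function entries, and the tuning constant `c` is a hypothetical choice:

```python
import numpy as np

def adaptive_soft_threshold_cov(X, c=2.0):
    """Adaptive soft thresholding of a sample covariance matrix.

    A scalar analogue of adaptive functional thresholding: each
    off-diagonal entry of the sample covariance S is shrunk by a
    threshold proportional to an estimate of its own standard error
    (the 'adaptive' part). The constant c is a hypothetical tuning
    parameter, not a value from the paper.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                   # sample covariance
    # theta_ij estimates Var(X_i * X_j), entrywise
    theta = (Xc[:, :, None] ** 2 * Xc[:, None, :] ** 2).mean(axis=0) - S ** 2
    lam = c * np.sqrt(np.maximum(theta, 0) * np.log(p) / n)  # adaptive threshold
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0)          # soft thresholding
    np.fill_diagonal(T, np.diag(S))                     # keep variances untouched
    return T
```

With independent columns, nearly all off-diagonal entries fall below their adaptive thresholds and are zeroed out, recovering the sparse (here diagonal) truth.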
By:  Jia Chen (Department of Economics and Related Studies, University of York); Yongcheol Shin (Department of Economics and Related Studies, University of York); Chaowen Zheng (Department of Economics, University of Reading) 
Abstract:  We propose a simple two-step procedure for estimating the dynamic quantile panel data model with unobserved interactive effects. To account for the endogeneity induced by correlation between factors and the lagged dependent variable/regressors, we first estimate factors consistently via an iterative principal component analysis. In the second step, we run a quantile regression for the augmented model with estimated factors and estimate the slope parameters. In particular, we adopt a smoothed quantile regression analysis where the quantile loss function is smoothed to have well-defined derivatives. The proposed two-step estimator is consistent and asymptotically normally distributed, but subject to asymptotic bias due to the incidental parameters. We then apply the split-panel jackknife approach to correct the bias. Monte Carlo simulations confirm that our proposed estimator has good finite sample performance. Finally, we demonstrate the usefulness of our proposed approach with an application to the analysis of bilateral trade for 380 country pairs over 59 years. 
Keywords:  dynamic quantile panel data model, interactive effects, principal component analysis, smoothed quantile regression, bilateral trade flows 
JEL:  C31 C33 F14 
Date:  2023–06–11 
URL:  http://d.repec.org/n?u=RePEc:rdg:emxxdp:emdp202306&r=ecm 
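Smoothing the quantile loss to obtain well-defined derivatives can be illustrated with Gaussian-kernel convolution smoothing, one common choice (the paper's kernel and two-step factor augmentation are not reproduced here; this sketches only a plain smoothed quantile regression):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def smoothed_quantile_loss(u, tau, h):
    """Convolution-smoothed check loss with a Gaussian kernel.

    As h -> 0 this recovers the standard check function
    rho_tau(u) = u * (tau - 1{u < 0}); for h > 0 it is everywhere
    differentiable, which makes gradient-based optimization and
    asymptotic analysis tractable.
    """
    return h * norm.pdf(u / h) + u * (tau - norm.cdf(-u / h))

def smoothed_qr(y, X, tau=0.5, h=0.5):
    """Smoothed quantile regression by direct minimization — a sketch,
    not the paper's factor-augmented panel estimator."""
    def obj(beta):
        return smoothed_quantile_loss(y - X @ beta, tau, h).mean()
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting value
    return minimize(obj, beta0, method="BFGS").x
```

The smoothed loss is convex in the residual (its second derivative is a scaled Gaussian density), so standard quasi-Newton solvers apply directly.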
By:  Haiqi Li (College of Finance and Statistics, Hunan University, Changsha, China); Jing Zhang (College of Finance and Statistics, Hunan University, Changsha, China); Chaowen Zheng (Department of Economics, University of Reading) 
Abstract:  This paper proposes a generalized quantile cointegrating regressive model for nonstationary time series, allowing coefficients to be unknown functions of informative covariates at each quantile level. Using a local polynomial quantile regressive method, we obtain the estimator for the functional coefficients at each quantile level, which is shown to be nonparametrically superconsistent. To alleviate the endogeneity of the model, this paper proposes a fully modified local polynomial quantile cointegrating regressive estimator which is shown to follow a mixed normal distribution asymptotically. We then propose two types of test statistics related to the functional coefficient quantile cointegrating model. The first tests the stability of the cointegrating vector to determine whether the conventional fixed-coefficient cointegration model is appropriate. The second tests the presence of a varying coefficient cointegrating relationship among the economic variables based on a modified quantile residual cumulative sum (MQCS) statistic. Monte Carlo simulation results show that the two tests perform quite well in finite samples. Finally, using the proposed functional coefficient quantile cointegrating model, this paper examines the validity of the purchasing power parity (PPP) theory between the United States and each of China, Japan and South Korea. 
Keywords:  bootstrap method, functional coefficient quantile cointegrating model, local polynomial approach, PPP theory 
JEL:  C12 C13 
Date:  2023–06–12 
URL:  http://d.repec.org/n?u=RePEc:rdg:emxxdp:emdp202307&r=ecm 
By:  Wei Tian 
Abstract:  Policy evaluation in empirical microeconomics has focused on estimating the average treatment effect and, more recently, heterogeneous treatment effects, often relying on the unconfoundedness assumption. We propose a method based on the interactive fixed effects model to estimate treatment effects at the individual level, which allows both the treatment assignment and the potential outcomes to be correlated with the unobserved individual characteristics. This method is suitable for panel datasets where multiple related outcomes are observed for a large number of individuals over a small number of time periods. Monte Carlo simulations show that our method outperforms related methods. To illustrate our method, we provide an example of estimating the effect of health insurance coverage on individual usage of hospital emergency departments using the Oregon Health Insurance Experiment data. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.01969&r=ecm 
By:  Christian Holberg; Susanne Ditlevsen 
Abstract:  Uniformly valid inference for cointegrated vector autoregressive processes has so far proven difficult due to certain discontinuities arising in the asymptotic distribution of the least squares estimator. We show how asymptotic results from the univariate case can be extended to multiple dimensions and how inference can be based on these results. Furthermore, we show that the novel instrumental variable procedure proposed by [20] (IVX) yields uniformly valid confidence regions for the entire autoregressive matrix. The results are applied to two specific examples for which we verify the theoretical findings and investigate finite sample properties in simulation experiments. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.03632&r=ecm 
By:  Christopher D. Walker 
Abstract:  This paper considers Bayesian inference for the partially linear model. Our approach exploits a parametrization of the regression function that is tailored toward estimating a low-dimensional parameter of interest. The key property of the parametrization is that it generates a Neyman orthogonal moment condition, meaning that the low-dimensional parameter is less sensitive to the estimation of nuisance parameters. Our large sample analysis supports this claim. In particular, we derive sufficient conditions under which the posterior for the low-dimensional parameter contracts around the truth at the parametric rate and is asymptotically normal with a variance that coincides with the semiparametric efficiency bound. These conditions allow for a larger class of nuisance parameters relative to the original parametrization of the regression model. Overall, we conclude that a parametrization that embeds Neyman orthogonality can be a useful device for debiasing posterior distributions in semiparametric models. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.03816&r=ecm 
By:  Alejandro SanchezBecerra 
Abstract:  Experimenters often collect baseline data to study heterogeneity. I propose the first valid confidence intervals for the VCATE, the treatment effect variance explained by observables. Conventional approaches yield incorrect coverage when the VCATE is zero. As a result, practitioners could be prone to detect heterogeneity even when none exists. The reason why coverage worsens at the boundary is that all efficient estimators have a locally degenerate influence function and may not be asymptotically normal. I solve the problem for a broad class of multi-step estimators with a predictive first stage. My confidence intervals account for higher-order terms in the limiting distribution and are fast to compute. I also find new connections between the VCATE and the problem of deciding whom to treat. The gains of targeting treatment are (sharply) bounded by half the square root of the VCATE. Finally, I document excellent performance in simulation and reanalyze an experiment from Malawi. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.03363&r=ecm 
By:  Jiti Gao; Bin Peng; Yayi Yan 
Abstract:  This paper considers a time-varying vector error-correction model that allows for different time series behaviours (e.g., unit-root and locally stationary processes) to interact with each other and coexist. From a practical perspective, this framework can be used to estimate shifts in the predictability of nonstationary variables, test whether economic theories hold periodically, etc. We first develop a time-varying Granger Representation Theorem, which facilitates the establishment of asymptotic properties for the model, and then propose estimation and inferential methods and theory for both short-run and long-run coefficients. We also propose an information criterion to estimate the lag length, a singular-value ratio test to determine the cointegration rank, and a hypothesis test to examine the parameter stability. To validate the theoretical findings, we conduct extensive simulations. Finally, we demonstrate the empirical relevance by applying the framework to investigate the rational expectations hypothesis of the U.S. term structure. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.17829&r=ecm 
By:  Timothy B. Armstrong; Patrick Kline; Liyang Sun 
Abstract:  Empirical research typically involves a robustness-efficiency tradeoff. A researcher seeking to estimate a scalar parameter can invoke strong assumptions to motivate a restricted estimator that is precise but may be heavily biased, or they can relax some of these assumptions to motivate a more robust, but variable, unrestricted estimator. When a bound on the bias of the restricted estimator is available, it is optimal to shrink the unrestricted estimator towards the restricted estimator. For settings where a bound on the bias of the restricted estimator is unknown, we propose adaptive shrinkage estimators that minimize the percentage increase in worst-case risk relative to an oracle that knows the bound. We show that adaptive estimators solve a weighted convex minimax problem and provide lookup tables facilitating their rapid computation. Revisiting five empirical studies where questions of model specification arise, we examine the advantages of adapting to, rather than testing for, misspecification. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.14265&r=ecm 
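The oracle benchmark in this setting — shrinking an unbiased estimator toward a precise but possibly biased one when the bias bound is known — admits a simple closed form in the scalar case. A sketch of that oracle linear rule (the worst-case-MSE-optimal convex combination), not the paper's adaptive procedure:

```python
import numpy as np

def oracle_shrinkage(y_u, y_r, v_u, v_r, cov_ur, b):
    """Worst-case-MSE-optimal linear combination w*y_u + (1-w)*y_r.

    y_u is unbiased with variance v_u; y_r has variance v_r,
    covariance cov_ur with y_u, and bias bounded by b in absolute
    value. Minimizing worst-case MSE
        w^2 v_u + (1-w)^2 v_r + 2 w (1-w) cov_ur + (1-w)^2 b^2
    over w gives the closed-form weight below. Scalar illustration
    only; the paper's adaptive estimators do not require knowing b.
    """
    num = v_r - cov_ur + b ** 2
    den = v_u + v_r - 2 * cov_ur + b ** 2
    w = np.clip(num / den, 0.0, 1.0)
    return w * y_u + (1.0 - w) * y_r, w
```

As the bias bound b grows, the weight on the unbiased estimator goes to one; with b = 0 the rule reduces to the usual variance-minimizing combination.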
By:  Giovanni Ballarin 
Abstract:  Linear time series models are the workhorse of structural macroeconometric analysis. However, economic theory as well as data suggests that nonlinear and asymmetric effects might be important for understanding the potential effects of policy makers' choices. Taking a dynamical system view, this paper compares known approaches to constructing impulse responses in nonlinear time series models and proposes a new approach that relies more directly on the underlying model properties. Nonparametric estimation of autoregressive models is discussed under natural physical dependence assumptions, as well as inference for structural impulse responses. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.19089&r=ecm 
By:  Eiji Kurozumi; Anton Skrobotov 
Abstract:  In this study, we consider a four-regime bubble model under the assumption of time-varying volatility and propose an algorithm for estimating the break dates with a volatility correction: first, we estimate the emerging date of the explosive bubble, its collapsing date, and the recovering date to the normal market under the assumption of homoskedasticity; second, we collect the residuals and then employ WLS-based estimation of the bubble dates. We demonstrate by Monte Carlo simulations that, compared to the OLS method, this two-step procedure significantly improves the accuracy of the break date estimators in some cases. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.02977&r=ecm 
By:  Ivan Jeliazkov; Shubham Karnawat; Mohammad Arshad Rahman; Angela Vossmeyer 
Abstract:  This article develops a random effects quantile regression model for panel data that allows for increased distributional flexibility, multivariate heterogeneity, and time-invariant covariates in situations where mean regression may be unsuitable. Our approach is Bayesian and builds upon the generalized asymmetric Laplace distribution to decouple the modeling of skewness from the quantile parameter. We derive an efficient simulation-based estimation algorithm, demonstrate its properties and performance in targeted simulation studies, and employ it in the computation of marginal likelihoods to enable formal Bayesian model comparisons. The methodology is applied in a study of U.S. residential rental rates following the Global Financial Crisis. Our empirical results provide interesting insights on the interaction between rents and economic, demographic and policy variables, weigh in on key modeling features, and overwhelmingly support the additional flexibility at nearly all quantiles and across several subsamples. The practical differences that arise as a result of allowing for flexible modeling can be nontrivial, especially for quantiles away from the median. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.13687&r=ecm 
By:  Bruno Ferman; Otávio Tecchio 
Abstract:  In many situations, researchers are interested in identifying the dynamic effects of an irreversible treatment with a static binary instrumental variable (IV), for example, in evaluations of the dynamic effects of training programs where a single lottery determines eligibility. A common approach in these situations is to report per-period IV estimates. Under a dynamic extension of standard IV assumptions, we show that such IV estimators identify a weighted sum of treatment effects for different latent groups and treatment exposures; however, some of these weights may be negative. We consider point and partial identification of dynamic treatment effects in this setting under different sets of assumptions. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.18114&r=ecm 
By:  Wei Tian 
Abstract:  The synthetic control estimator (Abadie et al., 2010) is asymptotically unbiased assuming that the outcome is a linear function of the underlying predictors and that the treated unit can be well approximated by the synthetic control before the treatment. When the outcome is nonlinear, the bias of the synthetic control estimator can be severe. In this paper, we provide conditions for the synthetic control estimator to be asymptotically unbiased when the outcome is nonlinear, and propose a flexible and data-driven method to choose the synthetic control weights. Monte Carlo simulations show that compared with the competing methods, the nonlinear synthetic control method has similar or better performance when the outcome is linear, and better performance when the outcome is nonlinear, and that the confidence intervals have good coverage probabilities across settings. In the empirical application, we illustrate the method by estimating the impact of the 2019 anti-extradition law amendments bill protests on Hong Kong's economy, and find that the yearlong protests reduced real GDP per capita in Hong Kong by 11.27% in the first quarter of 2020, which was larger in magnitude than the economic decline during the 1997 Asian financial crisis or the 2008 global financial crisis. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.01967&r=ecm 
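The linear baseline that this paper generalizes — the classic synthetic control of Abadie et al. (2010) — chooses nonnegative weights summing to one that best reproduce the treated unit's pre-treatment outcomes. A minimal sketch of that baseline (the paper's nonlinear, data-driven weight selection is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(x1, X0):
    """Classic synthetic control weights (Abadie et al., 2010).

    x1: pre-treatment outcomes of the treated unit (T0,)
    X0: pre-treatment outcomes of control units (T0 x J)
    Returns nonnegative weights summing to one that minimize the
    pre-treatment fit discrepancy. This is the linear baseline the
    paper starts from, not its nonlinear extension.
    """
    J = X0.shape[1]
    res = minimize(
        lambda w: np.sum((x1 - X0 @ w) ** 2),
        np.full(J, 1.0 / J),                      # start at equal weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * J,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x
```

The counterfactual path is then `X0_post @ w`, and the treatment effect is the treated unit's post-treatment outcome minus that path.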
By:  Sungyoon Lee; Sokbae Lee 
Abstract:  In recent years, there has been a significant growth in research focusing on minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators. However, the majority of these analyses have been limited to a simple regression error structure, assuming independent and identically distributed errors with zero mean and common variance, independent of the feature vectors. Additionally, the main focus of these theoretical analyses has been on the out-of-sample prediction risk. This paper breaks away from the existing literature by examining the mean squared error of the ridgeless interpolation least squares estimator, allowing for more general assumptions about the regression errors. Specifically, we investigate the potential benefits of overparameterization by characterizing the mean squared error in a finite sample. Our findings reveal that including a large number of unimportant parameters relative to the sample size can effectively reduce the mean squared error of the estimator. Notably, we establish that the estimation difficulties associated with the variance term can be summarized through the trace of the variance-covariance matrix of the regression errors. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.12883&r=ecm 
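The estimator under study has a one-line definition: among all coefficient vectors minimizing the least squares criterion, take the one with the smallest $\ell_2$ norm, i.e. the ridge limit as the penalty vanishes. A minimal sketch via the pseudoinverse:

```python
import numpy as np

def ridgeless(X, y):
    """Minimum l2-norm (ridgeless) interpolation least squares.

    Among all b minimizing ||y - X b||^2, returns the one with the
    smallest l2 norm — the limit of ridge regression as the penalty
    goes to zero. Computed via the Moore-Penrose pseudoinverse; in
    the overparameterized case (p > n) it interpolates the data.
    """
    return np.linalg.pinv(X) @ y
```

In the overparameterized regime the fitted values reproduce y exactly, and the solution coincides with `np.linalg.lstsq`, which also returns the minimum-norm least squares solution.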
By:  Tommaso Manfè (University of Chicago); Luca Nunziata (University of Padova and IZA) 
Abstract:  We discuss the potentially severe bias of commonly used methods in the Difference-in-Differences (DiD) design, including the regression specification known as Two-Way Fixed Effects (TWFE), when researchers must invoke the conditional trend assumption but the distribution of the covariates changes over time. Building on Abadie (2005), we propose a Double Inverse Probability Weighting (DIPW) estimator for repeated cross-sections based on both the probability of being treated and that of belonging to the post-treatment period, and derive its doubly robust version (DR-DIPW), similarly to Sant'Anna and Zhao (2020). Through Monte Carlo simulations, we compare its performance with a number of methods suggested by the literature, which span from the basic TWFE (and our proposed correction) to semiparametric estimators, including those using machine-learning first-stage estimates, following Chernozhukov et al. (2018). Results show that DR-DIPW outperforms the other estimators in most realistic scenarios, even if TWFE corrections provide substantial benefits. Following Sequeira (2016), we estimate the effect of tariff reduction on bribing behavior by analyzing trade between South Africa and Mozambique during the period 2006–2014. Contrary to the replication by Chang (2020), our findings show that the effect is smaller in magnitude than the one presented in the original paper. 
Keywords:  Difference-in-Differences, Monte Carlo simulations, semiparametric estimation, machine learning 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:pad:wpaper:0305&r=ecm 
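The Abadie (2005) building block the DIPW estimator extends is inverse probability weighting of controls by the propensity odds, so their covariate distribution matches the treated group's. A sketch of that single-weighting baseline for repeated cross-sections — the paper's DIPW additionally weights by the probability of belonging to the post period, which is not reproduced here, and the simple Newton logistic fit is an illustrative stand-in for any first-stage estimator:

```python
import numpy as np

def fit_logit(X, d, iters=50):
    """Plain Newton-Raphson logistic regression; returns fitted
    probabilities (an intercept is added internally)."""
    Z = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ b))
        W = p * (1 - p)
        b += np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (d - p))
    return 1.0 / (1.0 + np.exp(-Z @ b))

def ipw_did(y, d, post, X):
    """Hajek-type IPW difference-in-differences for repeated
    cross-sections: control observations are reweighted by the
    propensity odds p(X)/(1-p(X)) so their covariate distribution
    matches the treated group's; treated observations keep weight 1.
    """
    p = fit_logit(X, d)
    w = np.where(d == 1, 1.0, p / (1 - p))
    def wmean(mask):
        return np.average(y[mask], weights=w[mask])
    return (wmean((d == 1) & (post == 1)) - wmean((d == 1) & (post == 0))) \
         - (wmean((d == 0) & (post == 1)) - wmean((d == 0) & (post == 0)))
```

Under conditional parallel trends with selection on X, this recovers the ATT even when covariate-dependent trends would bias the unweighted DiD.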
By:  Christian Gourieroux; Quinlan Lee 
Abstract:  The goal of this paper is to extend the method of estimating Impulse Response Functions (IRFs) by means of Local Projection (LP) to a nonlinear dynamic framework. We discuss the existence of a nonlinear autoregressive representation for a Markov process, and explain how its Impulse Response Functions are directly linked to the nonlinear Local Projection, as in the linear setting. We then present a nonparametric LP estimator, and compare its asymptotic properties to those of IRFs obtained through direct estimation. We also explore issues of identification for the nonlinear IRF in the multivariate framework, which differs markedly from the Gaussian linear case. In particular, we show that identification is conditional on the uniqueness of the deconvolution. Then, we consider IRF and LP in augmented Markov models. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.18145&r=ecm 
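The linear Local Projection this paper generalizes is a sequence of horizon-by-horizon regressions: regress $y_{t+h}$ on the impulse variable $x_t$ plus lagged controls, and read the horizon-$h$ response off the coefficient on $x_t$. A sketch of that linear baseline only (the paper's nonparametric/nonlinear LP estimator is not reproduced):

```python
import numpy as np

def local_projection_irf(y, x, H, lags=1):
    """Linear local projections: for each horizon h = 0..H, regress
    y_{t+h} on x_t plus `lags` lags of y and x as controls; the
    coefficient on x_t is the horizon-h impulse response."""
    irf = []
    T = len(y)
    for h in range(H + 1):
        rows = range(lags, T - h)
        Y = np.array([y[t + h] for t in rows])
        Z = np.array([[1.0, x[t]] + [y[t - j] for j in range(1, lags + 1)]
                      + [x[t - j] for j in range(1, lags + 1)] for t in rows])
        beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
        irf.append(beta[1])               # coefficient on x_t
    return np.array(irf)
```

For an AR(1) outcome y_t = 0.5 y_{t-1} + x_t + e_t with an i.i.d. impulse x, the population LP coefficients trace out the geometric IRF 0.5^h.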
By:  David M. Kaplan (University of Missouri); Xin Liu (Washington State University) 
Abstract:  We propose and study two confidence intervals (CIs) centered at an estimator that is intentionally biased in order to reduce mean squared error. The first CI simply uses an unbiased estimator's standard error. It is not obvious that this CI should work well; indeed, for confidence levels below 68.3%, the coverage probability can be near zero, and the CI using the biased estimator's standard error is also known to suffer from undercoverage. However, for confidence levels 91.7% and higher, even if the unbiased and biased estimators have identical mean squared error (which yields a bound on coverage probability), our CI is better than the benchmark CI centered at the unbiased estimator: they are the same length, but our CI has higher coverage probability (lower coverage error rate), regardless of the magnitude of bias. That is, whereas generally there is a tradeoff that requires increasing CI length in order to reduce the coverage error rate, in this case we can reduce the error rate for free (without increasing length) simply by recentering the CI at the biased estimator instead of the unbiased estimator. If rounding to the nearest hundredth of a percent, then even at the 90% confidence level our CI's worstcase coverage probability is 90.00% and can be significantly higher depending on the magnitude of bias. In addition to its favorable statistical properties, our proposed CI applies broadly and is simple to compute, making it attractive in practice. Building on these results, our second CI trades some of the first CI's "excess" coverage probability for shorter length. It also dominates the benchmark CI (centered at unbiased estimator) for conventional confidence levels, with higher coverage probability and shorter length, so we recommend this CI in practice. 
Keywords:  averaging estimators, bias-variance tradeoff, coverage probability, mean squared error, smoothing 
JEL:  C13 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:2308&r=ecm 
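The first CI's coverage under the equal-MSE benchmark can be computed in closed form for normal estimators: center the interval at the biased estimator but use the unbiased estimator's standard error. A sketch under those assumptions (unbiased SE normalized to one, biased estimator normal with bias b and, per the equal-MSE benchmark, variance 1 - b^2):

```python
import numpy as np
from scipy.stats import norm

def coverage_equal_mse_ci(b, level=0.90):
    """Coverage probability of the CI centered at a biased estimator
    but using the UNBIASED estimator's standard error (normalized
    to 1). The biased estimator is assumed normal with bias b, in
    unbiased-SE units, and variance 1 - b^2 (the equal-MSE
    benchmark), so |b| < 1 is required.
    """
    z = norm.ppf(0.5 + level / 2.0)
    se_b = np.sqrt(1.0 - b ** 2)       # equal-MSE variance of the biased estimator
    return norm.cdf((z - b) / se_b) - norm.cdf((-z - b) / se_b)
```

Evaluating this across bias magnitudes reproduces the abstract's claim: at conventional high confidence levels, coverage stays at or above (up to rounding) the nominal level regardless of b, and can be strictly higher.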
By:  Rong J. B. Zhu 
Abstract:  Estimating weights in the synthetic control method involves an optimization procedure that simultaneously selects and aligns control units in order to closely match the treated unit. However, this simultaneous selection and alignment of control units may lead to a loss of efficiency in the synthetic control method. Another concern arising from the aforementioned procedure is its susceptibility to underfitting due to imperfect pre-treatment fit. It is not uncommon for the linear combination, using nonnegative weights, of pre-treatment period outcomes for the control units to inadequately approximate the pre-treatment outcomes for the treated unit. To address both of these issues, this paper proposes a simple and effective method called Synthetic Matching Control (SMC). The SMC method begins by performing a univariate linear regression to establish a proper match between the pre-treatment periods of the control units and the treated unit. Subsequently, an SMC estimator is obtained by synthesizing (taking a weighted average of) the matched controls. To determine the weights in the synthesis procedure, we propose an approach based on an unbiased risk estimator criterion. Theoretically, we show that the synthesis step is asymptotically optimal in the sense of achieving the lowest possible squared error. Extensive numerical experiments highlight the advantages of the SMC method. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.02584&r=ecm 
By:  Lei Wang 
Abstract:  Economists often implement TSLS to handle endogeneity. The bias of TSLS is severe when the number of instruments is large. Hence, JIVE has been proposed to reduce the bias of overidentified TSLS. However, both methods have critical drawbacks: while overidentified TSLS has a large bias when the degree of overidentification is large, JIVE is unstable. In this paper, I bridge the optimization problems of TSLS and JIVE, solve the connected problem, and propose a new estimator, TSJI. TSJI has a user-defined parameter $\lambda$. By approximating the bias of TSJI up to $o_p(1/N)$, I find a $\lambda$ value that produces an approximately unbiased TSJI. TSJI with the selected $\lambda$ value not only has the same first-order distribution as TSLS when the numbers of first-stage and second-stage regressors are fixed, but is also consistent and asymptotically normal under many-instrument asymptotics. Under three different simulation settings, I test TSJI against TSLS and JIVE with instruments of different strengths. TSJI clearly outperforms TSLS and JIVE in simulations. I apply TSJI to two empirical studies. TSJI mostly agrees with TSLS and JIVE, but it also gives conclusions different from theirs in specific cases. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.17615&r=ecm 
By:  Duarte, Belmiro P.M.; Atkinson, Anthony C.; P. Singh, Satya; S. Reis, Marco 
Abstract:  We find experimental plans for hypothesis testing when a prior ordering of experimental groups or treatments is expected. Despite the practical interest of the topic, namely in dose finding, algorithms for systematically calculating good plans are still elusive. Here, we consider the Intersection-Union principle for constructing optimal experimental designs for testing hypotheses about ordered treatments. We propose an optimization-based formulation to handle the problem when the power of the test is to be maximized. This formulation yields a complex objective function which we handle with a surrogate-based optimizer. The algorithm proposed is demonstrated for several ordering relations. The relationship between designs maximizing power for the Intersection-Union Test (IUT) and optimality criteria used for linear regression models is analyzed; we demonstrate that IUT-based designs are well approximated by C-optimal designs and maximum entropy sampling designs while DA-optimal designs are equivalent to balanced designs. Theoretical and numerical results supporting these relations are presented. 
Keywords:  optimal design of experiments; hypothesis testing; ordered treatments; surrogate optimization; power function; alphabetic optimality 
JEL:  C1 
Date:  2023–04–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:115187&r=ecm 
By:  Matthieu Garcin 
Abstract:  We are interested in the nonparametric estimation of the probability density of price returns, using the kernel approach. The output of the method heavily relies on the selection of a bandwidth parameter. Many selection methods have been proposed in the statistical literature. We put forward an alternative selection method based on a criterion coming from information theory and from the physics of complex systems: the bandwidth to be selected maximizes a new measure of complexity, with the aim of avoiding both overfitting and underfitting. We review existing methods of bandwidth selection and show that they lead to contradictory conclusions regarding the complexity of the probability distribution of price returns. This also has some striking consequences for the evaluation of the relevance of the efficient market hypothesis. We apply these methods to real financial data, focusing on Bitcoin. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.13123&r=ecm 
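Two of the standard bandwidth selectors the paper reviews — and shows can disagree — are Silverman's rule of thumb and least-squares cross-validation. A sketch of both for a Gaussian kernel (the paper's complexity-maximizing criterion is not reproduced here):

```python
import numpy as np

def gaussian_kde(x, data, h):
    """Gaussian kernel density estimate evaluated at points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def silverman_bandwidth(data):
    """Silverman's rule of thumb: 0.9 * min(sd, IQR/1.349) * n^(-1/5)."""
    n = len(data)
    sigma = min(data.std(ddof=1),
                (np.quantile(data, 0.75) - np.quantile(data, 0.25)) / 1.349)
    return 0.9 * sigma * n ** (-0.2)

def lscv_bandwidth(data, grid):
    """Least-squares cross-validation: minimize an unbiased estimate
    of integrated squared error, using the closed-form Gaussian-kernel
    identities for the integral and leave-one-out terms."""
    n = len(data)
    diffs = data[:, None] - data[None, :]
    scores = []
    for h in grid:
        u = diffs / h
        # integral of fhat^2: pairwise N(0, 2h^2) kernels
        term1 = np.exp(-0.25 * u ** 2).sum() / (n ** 2 * h * 2 * np.sqrt(np.pi))
        # leave-one-out density averaged over the sample (diagonal removed)
        loo = (np.exp(-0.5 * u ** 2).sum() - n) / ((n - 1) * n * h * np.sqrt(2 * np.pi))
        scores.append(term1 - 2 * loo)
    return grid[int(np.argmin(scores))]
```

Running both on the same sample generally yields different bandwidths, hence visibly different density estimates — the disagreement the paper's complexity criterion is designed to arbitrate.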
By:  F. Cipollini; G.M. Gallo; A. Palandri 
Abstract:  We focus on the time-varying modeling of VaR at a given coverage τ, assessing whether the quantiles of the distribution of the returns standardized by their conditional means and standard deviations exhibit predictable dynamics. Models are evaluated via simulation, determining the merits of the asymmetric Mean Absolute Deviation as a loss function to rank forecast performances. The empirical application on the Fama–French 25 value-weighted portfolios with a moving forecast window shows substantial improvements in forecasting conditional quantiles by keeping the predicted quantile unchanged unless the empirical frequency of violations falls outside a data-driven interval around τ. 
Keywords:  Risk management; Value at Risk; dynamic quantile; asymmetric loss function; forecast evaluation 
Date:  2023 
URL:  http://d.repec.org/n?u=RePEc:cns:cnscwp:202308&r=ecm 
By:  Chen, Yudong; Wang, Tengyao; Samworth, Richard J. 
Abstract:  We introduce and study two new inferential challenges associated with the sequential detection of change in a high-dimensional mean vector. First, we seek a confidence interval for the changepoint, and second, we estimate the set of indices of coordinates in which the mean changes. We propose an online algorithm that produces an interval with guaranteed nominal coverage, and whose length is, with high probability, of the same order as the average detection delay, up to a logarithmic factor. The corresponding support estimate enjoys control of both false negatives and false positives. Simulations confirm the effectiveness of our methodology, and we also illustrate its applicability on the U.S. excess deaths data from 2017 to 2020. The supplementary material, which contains the proofs of our theoretical results, is available online. 
Keywords:  confidence interval; sequential method; sparsity; support estimate 
JEL:  C1 
Date:  2023–05–26 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:119449&r=ecm 
By:  Atsushi Inoue; Òscar Jordà; Guido M. Kuersteiner 
Abstract:  An impulse response function describes the dynamic evolution of an outcome variable following a stimulus or treatment. A common hypothesis of interest is whether the treatment affects the outcome. We show that this hypothesis is best assessed using significance bands rather than relying on commonly displayed confidence bands. Under the null hypothesis, we show that significance bands are trivial to construct with standard statistical software using the LM principle, and should be reported as a matter of routine when displaying impulse responses graphically. 
Date:  2023–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2306.03073&r=ecm 
By:  Andrew J. Patton; Yasin Simsek 
Abstract:  We propose methods to improve the forecasts from generalized autoregressive score (GAS) models (Creal et al., 2013; Harvey, 2013) by localizing their parameters using decision trees and random forests. These methods avoid the curse of dimensionality faced by kernel-based approaches, and allow one to draw on information from multiple state variables simultaneously. We apply the new models to four distinct empirical analyses, and in all applications the proposed new methods significantly outperform the baseline GAS model. In our applications to stock return volatility and density prediction, the optimal GAS tree model reveals a leverage effect and a variance risk premium effect. Our study of stock-bond dependence finds evidence of a flight-to-quality effect in the optimal GAS forest forecasts, while our analysis of high-frequency trade durations uncovers a volume-volatility effect. 
Date:  2023–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2305.18991&r=ecm 
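The baseline GAS model being localized updates its time-varying parameter with the scaled score of the observation density. For a Gaussian density with time-varying variance and inverse-Fisher-information scaling, the recursion takes a simple GARCH-like form. A sketch of that baseline filter with fixed parameters — the paper's contribution is precisely to let (omega, alpha, beta) vary with state variables via trees and forests, which is not reproduced here, and the parameter values below are illustrative only:

```python
import numpy as np

def gas_volatility_filter(y, omega=0.05, alpha=0.1, beta=0.9):
    """GAS(1,1) filter for a Gaussian observation density with
    time-varying variance f_t. With inverse-Fisher-information
    scaling, the scaled score is (y_t^2 - f_t), so the update is
        f_{t+1} = omega + beta * f_t + alpha * (y_t^2 - f_t).
    Returns the filtered variance path aligned with y."""
    f = np.empty(len(y) + 1)
    f[0] = y.var()                     # initialize at the unconditional variance
    for t in range(len(y)):
        f[t + 1] = omega + beta * f[t] + alpha * (y[t] ** 2 - f[t])
    return f[:-1]
```

With omega > 0 and 0 < alpha < beta < 1 the filtered variance stays positive and mean-reverts, rising when squared returns exceed the current variance estimate.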