NEP: New Economics Papers on Econometric Time Series
By: | Aknouche, Abdelhakim; Almohaimeed, Bader; Dimitrakopoulos, Stefanos |
Abstract: | This paper proposes a noisy GARCH model with two volatility sequences (an unobserved and an observed one) and a stochastic time-varying conditional kurtosis. The unobserved volatility equation, equipped with random coefficients, is a linear function of the past squared observations and of the past observed volatility. The observed volatility is the conditional mean of the unobserved volatility and thus follows the standard GARCH specification, with coefficients equal to the means of the random coefficients. The means and variances of the random coefficients, as well as the unobserved volatilities, are estimated with a three-stage procedure: first, the means of the random coefficients via the Gaussian quasi-maximum likelihood estimator (QMLE); then the variances of the random coefficients via a weighted least squares estimator (WLSE); and finally the latent volatilities through a filtering process, under the assumption that the random parameters follow an inverse Gaussian distribution and the innovation is normally distributed. The conditional distribution of the model is therefore the normal inverse Gaussian (NIG), which yields a closed-form expression for the posterior mean of the unobserved volatility. Consistency and asymptotic normality of the QMLE and WLSE are established under quite tractable assumptions. The proposed methodology is illustrated with various simulated and real examples. (A simulation sketch of the two volatility recursions follows this entry.)
Keywords: | Noised volatility GARCH, Random coefficient GARCH, Markov switching GARCH, QMLE, Weighted least squares, filtering volatility, time-varying conditional kurtosis.
JEL: | C13 C22 C51 C58 |
Date: | 2024–03–15 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:120456&r=ets |
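The two recursions in the abstract above are concrete enough to simulate. Below is a minimal Python sketch of the model; the exact IG parameterization (drawn via numpy's wald sampler) and all parameter values are illustrative assumptions, not the paper's choices.

```python
# A minimal simulation sketch of the two volatility recursions described in the
# abstract above. The IG parameterization (via numpy's wald sampler) and all
# parameter values are illustrative assumptions, not the paper's exact choices.
import numpy as np

rng = np.random.default_rng(0)

def ig(mean, shape):
    """One draw from an inverse Gaussian IG(mean, shape)."""
    return rng.wald(mean, shape)

T = 1000
omega, alpha, beta = 0.1, 0.1, 0.8   # means of the random coefficients
lam = 50.0                           # IG shape scaling: larger => less coefficient noise

y = np.zeros(T)
v = np.zeros(T)                      # observed volatility: standard GARCH recursion
h = np.zeros(T)                      # unobserved volatility: random-coefficient recursion
v[0] = h[0] = omega / (1 - alpha - beta)

for t in range(1, T):
    # observed volatility = conditional mean of h_t (coefficients fixed at their means)
    v[t] = omega + alpha * y[t - 1] ** 2 + beta * v[t - 1]
    # unobserved volatility: IG random coefficients whose means are (omega, alpha, beta)
    h[t] = (ig(omega, lam * omega) + ig(alpha, lam * alpha) * y[t - 1] ** 2
            + ig(beta, lam * beta) * v[t - 1])
    # normally distributed innovation scaled by the *unobserved* volatility
    y[t] = np.sqrt(h[t]) * rng.standard_normal()

# random coefficients inflate the tails relative to a standard GARCH
print("sample kurtosis:", ((y - y.mean()) ** 4).mean() / y.var() ** 2)
```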
By: | Hong, Y.; Linton, O. B.; McCabe, B.; Sun, J.; Wang, S. |
Abstract: | A popular self-normalization (SN) approach in time series analysis uses the variance of a partial sum as a self-normalizer. This is known to be sensitive to irregularities such as persistent autocorrelation, heteroskedasticity, unit roots and outliers. We propose a novel SN approach based on the adjusted range of a partial sum, which is robust to these aforementioned irregularities. We develop an adjusted-range based Kolmogorov-Smirnov type test for structural breaks for both univariate and multivariate time series, and consider testing parameter constancy in a time series regression setting. Our approach can rectify the well-known power decrease issue associated with existing self-normalized KS tests without having to use backward and forward summations as in Shao and Zhang (2010), and can alleviate the “better size but less power” phenomenon when the existing SN approaches (Shao, 2010; Zhang et al., 2011; Wang and Shao, 2022) are used. Moreover, our proposed tests can cater for more general alternatives. Monte Carlo simulations and empirical studies demonstrate the merits of our approach. (A toy illustration of the adjusted-range normalizer follows this entry.)
Keywords: | Change-Point Testing, CUSUM Process, Parameter Constancy, Studentization |
JEL: | C12 C19 |
Date: | 2023–11–06 |
URL: | http://d.repec.org/n?u=RePEc:cam:camjip:2316&r=ets |
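As a rough illustration of the idea, the sketch below computes a KS-type CUSUM statistic normalized by the adjusted range (max minus min) of the demeaned partial-sum process instead of its variance. The paper's exact statistic, scaling, and critical values may differ; this only shows the mechanics.

```python
# Toy adjusted-range self-normalized CUSUM statistic for a mean break.
import numpy as np

def range_sn_cusum(x):
    n = len(x)
    s = np.cumsum(x - x.mean())                   # demeaned partial sums (s[-1] = 0)
    ks = np.abs(s).max() / np.sqrt(n)             # KS-type numerator
    adj_range = (s.max() - s.min()) / np.sqrt(n)  # adjusted-range self-normalizer
    return ks / adj_range

rng = np.random.default_rng(1)
x0 = rng.standard_normal(500)                      # no break
x1 = np.concatenate([x0[:250], x0[250:] + 1.0])    # mean shift at the midpoint
print("no break:", range_sn_cusum(x0), " break:", range_sn_cusum(x1))
```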
By: | Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, Japan; Rimini Centre for Economic Analysis); Blessings Majoni (National Graduate Institute for Policy Studies, Japan) |
Abstract: | Common factor stochastic volatility (CSV) models capture the commonality that is often observed in volatility patterns. However, they assume that all the time variation in volatility is driven by a single multiplicative factor. This paper has two contributions. First, we develop a novel CSV model in which the volatility follows an inverse gamma process (CSV-IG), which implies fat Student’s t tails for the observed data. We obtain an analytic expression for the likelihood of this CSV model, which facilitates the numerical calculation of the marginal and predictive likelihood for model comparison. We also show that it is possible to simulate exactly from the posterior distribution of the volatilities using mixtures of gammas. Second, we generalize this CSV-IG model by parsimoniously substituting conditionally homoscedastic shocks with heteroscedastic factors which interact multiplicatively with the common factor in an approximate factor model (CSV-IG-AF). In empirical applications we compare these models to other multivariate stochastic volatility models, including different types of CSV models and exact factor stochastic volatility (FSV) models. The models are first estimated using daily exchange rate returns of 8 currencies. A second application estimates the models using 20 macroeconomic variables for each of four countries: the US, the UK, Japan and Brazil. The comparison method is based on the predictive likelihood. In the application to exchange rate data we find strong evidence of CSV and that the best model is the CSV-IG-AF. In the macro application we find that 1) the CSV-IG model performs better than all other CSV models, 2) the CSV-IG-AF is the best model for the US, 3) the CSV-IG is the best model for Brazil, and 4) exact FSV models are the best for the UK and Japan. (A small numerical illustration of the inverse-gamma-to-Student-t link follows this entry.)
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:24-04&r=ets |
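The distributional claim (inverse-gamma volatility implies Student's t tails) is easy to verify numerically. The sketch below drops the dynamics and the factor structure's details and draws an iid inverse-gamma common factor; nu and the loadings are made-up values.

```python
# Numerical check of the distributional fact behind CSV-IG: Gaussian shocks
# scaled by a common inverse-gamma volatility factor have Student's t tails.
# The factor is drawn iid here (the paper's process is persistent).
import numpy as np

rng = np.random.default_rng(2)
T, N, nu = 50000, 8, 6.0

# inverse-gamma mixing scaled so that sqrt(f) * eps is exactly t_nu distributed
f = 1.0 / rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=T)
sigma = rng.uniform(0.5, 1.5, size=N)          # series-specific scales
eps = rng.standard_normal((T, N))
y = np.sqrt(f)[:, None] * sigma * eps          # common volatility factor

# kurtosis of a t_nu variable is 3 + 6/(nu - 4), i.e. 6 when nu = 6
kurt = ((y - y.mean(0)) ** 4).mean(0) / y.var(0) ** 2
print(np.round(kurt, 2))
```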
By: | Silvia Goncalves; Serena Ng |
Abstract: | A crucial input into causal inference is the imputed counterfactual outcome. Imputation error can arise because of sampling uncertainty from estimating the prediction model using the untreated observations, or from out-of-sample information not captured by the model. While the literature has focused on sampling uncertainty, it vanishes with the sample size. Often overlooked is the possibility that the out-of-sample error can be informative about the missing counterfactual outcome if it is mutually or serially correlated. Motivated by the best linear unbiased predictor (BLUP) of Goldberger (1962) in a time series setting, we propose an improved predictor of the potential outcome when the errors are correlated. The proposed predictor (PUP) is practical: it is not restricted to linear models, can be used with consistent estimators already developed, and improves mean-squared error for a large class of strong mixing error processes. Ignoring predictability in the errors can distort conditional inference. However, the precise impact will depend on the choice of estimator as well as the realized values of the residuals. (A Monte Carlo sketch of the BLUP correction follows this entry.)
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.08130&r=ets |
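The BLUP logic referenced above has a compact textbook form under AR(1) errors: add rho_hat times the last in-sample residual to the model-based prediction. A small Monte Carlo sketch, with a DGP and numbers that are ours rather than the paper's, shows the mean-squared-error gain.

```python
# Monte Carlo sketch of the Goldberger (1962) BLUP logic under AR(1) errors.
import numpy as np

rng = np.random.default_rng(3)

def one_rep(T=200, rho=0.7, beta=1.5):
    x = rng.standard_normal(T + 1)
    e = np.zeros(T + 1)
    for t in range(1, T + 1):
        e[t] = rho * e[t - 1] + rng.standard_normal()
    y = beta * x + e
    X, Y = x[:T], y[:T]                        # estimation ("untreated") sample
    b = (X @ Y) / (X @ X)                      # OLS slope, no intercept
    resid = Y - b * X
    rho_hat = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])
    naive = b * x[T]                           # ignores the error correlation
    blup = naive + rho_hat * resid[-1]         # adds the predictable error part
    return (y[T] - naive) ** 2, (y[T] - blup) ** 2

mse_naive, mse_blup = np.mean([one_rep() for _ in range(2000)], axis=0)
print("naive MSE:", mse_naive, " BLUP-style MSE:", mse_blup)
```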
By: | Gunes Kamber; James Morley; Benjamin Wong |
Abstract: | We revisit some popular univariate trend-cycle decomposition methods given the Covid-era data and find that only the output gap estimates from the Beveridge-Nelson filter remain both intuitive and reliable throughout the crisis and its aftermath. The real-time Hodrick-Prescott filter estimates for the output gap just prior to the pandemic are highly unreliable, although the estimated gap during the pandemic is reasonably similar to that of the Beveridge-Nelson filter. The Hamilton filter produces reliable estimates, but suffers from base effects that imply a purely mechanical spike in the output gap exactly two years after the onset of the crisis, in line with the filter horizon. Notably, unlike with the Beveridge-Nelson and Hodrick-Prescott filters, forecasts of the output gap for the Hamilton filter do not settle down to zero given plausible projected values of future output growth and display large spurious dynamics due to base effects given a simulated Covid-like shock in the projection. We also provide some refinements to the original Beveridge-Nelson filter that produce even more intuitive estimates of the output gap, while retaining the same strong revision properties. (A simulation of the Hamilton-filter base effect follows this entry.)
Keywords: | Beveridge-Nelson decomposition, output gap, real-time reliability |
JEL: | C18 E17 E32 |
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2024-24&r=ets |
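The base effect attributed to the Hamilton filter can be reproduced in a few lines. The sketch below implements the standard Hamilton (2018) regression (y at t+h on a constant and p lags of y, cycle = residual) on simulated data with a one-off Covid-like drop; the mechanical spike appears h = 8 quarters after the shock, matching the two-year lag described above. All numbers are invented.

```python
# Reproducing the Hamilton-filter base effect on simulated quarterly data.
import numpy as np

def hamilton_cycle(y, h=8, p=4):
    t_idx = np.arange(p - 1, len(y) - h)              # regression dates t
    X = np.column_stack([np.ones(t_idx.size)] + [y[t_idx - j] for j in range(p)])
    Y = y[t_idx + h]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return t_idx + h, Y - X @ beta                    # cycle dated at t + h

rng = np.random.default_rng(4)
T, t0 = 120, 80
y = np.cumsum(0.5 + 0.5 * rng.standard_normal(T))    # trending log output
y[t0] -= 8.0                                         # crisis quarter
y[t0 + 1] -= 3.0                                     # partial-recovery quarter
dates, cycle = hamilton_cycle(y)
for d in (t0, t0 + 8):                               # shock date, then 8 quarters later
    print(f"cycle at t={d}: {cycle[dates == d][0]:+.2f}")
```

The second printed value is the purely mechanical spike: the crisis observations re-enter the regressor window h periods after the shock and drag the prediction down, pushing the residual up.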
By: | Yunyun Wang; Tatsushi Oka; Dan Zhu |
Abstract: | Macro variables frequently display time-varying distributions, driven by the dynamic and evolving characteristics of economic, social, and environmental factors that consistently reshape the fundamental patterns and relationships governing these variables. To better understand the distributional dynamics beyond the central tendency, this paper introduces a novel semi-parametric approach for constructing time-varying conditional distributions, relying on recent advances in distributional regression. We present an efficient precision-based Markov chain Monte Carlo algorithm that simultaneously estimates all model parameters while explicitly enforcing the monotonicity condition on the conditional distribution function. Our model is applied to construct the forecasting distribution of inflation for the U.S., conditional on a set of macroeconomic and financial indicators. The risks of future inflation deviating excessively above or below the desired range are carefully evaluated. Moreover, we provide a thorough discussion of the interplay between inflation and unemployment rates during the Global Financial Crisis, the COVID period, and the third quarter of 2023. (A simplified distributional-regression sketch follows this entry.)
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2403.12456&r=ets |
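The paper's estimator is Bayesian (precision-based MCMC with an explicit monotonicity constraint); the sketch below is a deliberately simpler frequentist stand-in for the same two ingredients: threshold-by-threshold binary regressions for the conditional CDF, followed by monotonization via rearrangement (sorting). The data, predictor, and threshold grid are invented.

```python
# Minimal distributional-regression sketch: fit P(y <= tau | x) by a separate
# logit for each threshold tau, then enforce a monotone fitted CDF by sorting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 500
x = rng.standard_normal(T)                                     # one predictor
y = 0.5 * x + (1 + 0.5 * np.abs(x)) * rng.standard_normal(T)   # heteroskedastic outcome

taus = np.linspace(-3, 3, 25)                                  # threshold grid
X = sm.add_constant(x)
x_new = sm.add_constant(np.array([[1.0]]), has_constant='add') # evaluate at x = 1

cdf = np.array([
    sm.Logit((y <= tau).astype(float), X).fit(disp=0).predict(x_new)[0]
    for tau in taus
])
cdf_mono = np.sort(cdf)        # rearrangement guarantees a monotone CDF
print(np.round(cdf_mono, 2))
```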
By: | Martín Almuzara; Víctor Sancibrián |
Abstract: | We study estimation and inference in panel data regression models when the regressors of interest are macro shocks, which speaks to a large empirical literature that targets impulse responses via local projections. Our results hold under general dynamics and are uniformly valid over the degree of signal-to-noise of aggregate shocks. We show that the regression scores feature strong cross-sectional dependence and a known autocorrelation structure induced only by leads of the regressor. In general, including lags as controls and then clustering over the cross-section leads to simple, robust inference. (A sketch of this regression recipe follows this entry.)
Keywords: | panel data; local projections; impulse responses; aggregate shocks; inference; heterogeneity |
JEL: | C32 C33 C38 C51 |
Date: | 2024–03–01 |
URL: | http://d.repec.org/n?u=RePEc:fip:fednsr:97956&r=ets |
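One natural implementation of the recommended recipe is sketched below: a panel local projection of y at t+h on a common macro shock, with lags of the shock as controls and standard errors clustered by time period, reading "clustering over the cross-section" as treating each date's cross-section as a cluster (since a common shock makes scores dependent within a period). The DGP and every name here are ours, not the paper's.

```python
# Panel local projection with a common shock, shock lags as controls, and
# standard errors clustered on the time dimension.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
N, T, h, n_lags = 50, 80, 4, 2
shock = rng.standard_normal(T)                    # common aggregate shock

beta_i = 1.0 + 0.3 * rng.standard_normal(N)       # heterogeneous unit responses
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = 0.8 * y[:, t - 1] + beta_i * shock[t] + rng.standard_normal(N)

rows, lhs, time_id = [], [], []
for i in range(N):
    for t in range(n_lags, T - h):
        rows.append([shock[t]] + [shock[t - l] for l in range(1, n_lags + 1)])
        lhs.append(y[i, t + h])
        time_id.append(t)

X = sm.add_constant(np.array(rows))
res = sm.OLS(np.array(lhs), X).fit(cov_type='cluster',
                                   cov_kwds={'groups': np.array(time_id)})
# true average IRF at h = 4 is 0.8**4 = 0.41 in this DGP
print("IRF estimate:", res.params[1], " clustered s.e.:", res.bse[1])
```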