nep-ets New Economics Papers
on Econometric Time Series
Issue of 2021‒02‒01
twelve papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Asymptotic Properties of Least Squares Estimator in Local to Unity Processes with Fractional Gaussian Noises By Wang, Xiaohu; Xiao, Weilin; Yu, Jun
  2. Bayesian analysis of seasonally cointegrated VAR model By Justyna Wróblewska
  3. Bayesian Fuzzy Clustering with Robust Weighted Distance for Multiple ARIMA and Multivariate Time-Series By Pacifico, Antonio
  4. Time-Transformed Test for the Explosive Bubbles under Non-stationary Volatility By Eiji Kurozumi; Anton Skrobotov; Alexey Tsarev
  5. Three questions regarding impulse responses and their interpretation found from sign restrictions By Sam Ouliaris; Adrian Pagan
  6. Integrated nested Laplace approximations for threshold stochastic volatility models By Lopes Moreira Da Veiga, María Helena; Rue, Havard; Marín Díazaraque, Juan Miguel; Zea Bermudez, P. De
  7. Long-term prediction intervals with many covariates By Sayar Karmakar; Marek Chudy; Wei Biao Wu
  8. Structural Estimation of Time-Varying Spillovers: An Application to International Credit Risk Transmission By Boeckelmann Lukas; Stalla-Bourdillon Arthur
  9. A geometric analysis of nonlinear dynamics and its application to financial time series By Isao Shoji; Masahiro Nozawa
  10. Split-then-Combine simplex combination and selection of forecasters By Antonio Martin Arroyo; Aranzazu de Juan Fernandez
  11. Real-time Inflation Forecasting Using Non-linear Dimension Reduction Techniques By Niko Hauzenberger; Florian Huber; Karin Klieber
  12. GARCH Analyses of Risk and Uncertainty in the Theories of the Interest Rate of Keynes and Kalecki By Hubert Gabrisch

  1. By: Wang, Xiaohu (Fudan University); Xiao, Weilin (Zhejiang University); Yu, Jun (School of Economics, Singapore Management University)
    Abstract: This paper derives asymptotic properties of the least squares estimator of the autoregressive parameter in local to unity processes with errors being fractional Gaussian noises with the Hurst parameter H. It is shown that the estimator is consistent when H ∈ (0, 1). Moreover, the rate of convergence is n when H ∈ [0.5, 1). The rate of convergence is n2H when H ∈ (0, 0.5). Furthermore, the limit distribution of the centered least squares estimator depends on H. When H = 0.5, the limit distribution is the same as that obtained in Phillips (1987a) for the local to unity model with errors for which the standard functional central limit theorem is applicable. When H > 0.5 or when H
    Keywords: Least squares; Local to unity; Fractional Brownian motion; Fractional Ornstein-Uhlenbeck process
    JEL: C22
    Date: 2020–12–23
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2020_027&r=all
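As a toy illustration of the setup in the H = 0.5 case (where the fractional Gaussian noise reduces to i.i.d. Gaussian errors and the paper's convergence rate is n), one can simulate a local-to-unity process and compute the least squares estimator; the values of c, n, and the seed below are arbitrary, and this is a sketch rather than the paper's own experiment:

```python
import numpy as np

# Toy simulation of a local-to-unity process y_t = rho_n * y_{t-1} + u_t with
# rho_n = 1 + c/n. Assumption: H = 0.5, so the errors are i.i.d. Gaussian.
rng = np.random.default_rng(0)
n, c = 5000, -2.0
rho = 1.0 + c / n
u = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + u[t]

# Least squares estimator of the autoregressive parameter.
rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

# With H = 0.5 the paper's convergence rate is n, so n*(rho_hat - rho)
# should be bounded in probability.
print(rho_hat, n * (rho_hat - rho))
```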
  2. By: Justyna Wróblewska
    Abstract: The paper aims at developing the Bayesian seasonally cointegrated model for quarterly data. We propose the prior structure, derive the set of full conditional posterior distributions, and propose the sampling scheme. The identification of cointegrating spaces is obtained via orthonormality restrictions imposed on vectors spanning them. In the case of annual frequency, the cointegrating vectors are complex, which should be taken into account when identifying them. The point estimation of the cointegrating spaces is also discussed. The presented methods are illustrated by a simulation experiment and are employed in the analysis of money and prices in the Polish economy.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.14820&r=all
  3. By: Pacifico, Antonio
    Abstract: The paper suggests and develops a computational approach to improve hierarchical fuzzy clustering time-series analysis when accounting for high-dimensionality and noise problems in dynamic data. A Robust Weighted Distance measure between pairs of sets of Auto-Regressive Integrated Moving Average models is used. It is robust because Bayesian Model Selection is performed with a set of conjugate informative priors in order to discover the most probable set of clusters capturing different dynamics and interconnections among time-varying data, and weighted because each time series is 'adjusted' by its own Posterior Model Size distribution in order to group dynamic data objects into 'ad hoc' homogeneous clusters. Monte Carlo methods are used to compute exact posterior probabilities for each chosen cluster and thus avoid the problem of increasing the overall probability of error that plagues classical statistical methods based on significance tests. Empirical and simulated examples describe the functioning and the performance of the procedure. Comparisons with related work and possible extensions of the methodology to jointly deal with endogeneity issues and misspecified dynamics in high-dimensional multicountry setups are also discussed.
    Keywords: Distance Measures; Fuzzy Clustering; ARIMA Time-Series; Bayesian Model Selection; MCMC Integrations.
    JEL: C1 C52 C61
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:104379&r=all
  4. By: Eiji Kurozumi; Anton Skrobotov; Alexey Tsarev
    Abstract: This paper is devoted to testing for explosive bubbles under time-varying non-stationary volatility. Because the limiting distribution of the seminal Phillips et al. (2011) test depends on the variance function and usually requires a bootstrap implementation under heteroskedasticity, we construct a test based on a deformation of the time domain. The proposed test is asymptotically pivotal under the null hypothesis, and its limiting distribution coincides with that of the standard test under homoskedasticity, so the test does not require computationally intensive methods for inference. Appealing finite-sample properties are demonstrated through Monte Carlo simulations. An empirical application demonstrates that the upsurge behavior of cryptocurrency time series in the middle of the sample is partially explained by the volatility change.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.13937&r=all
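For orientation, a bare-bones version of the original (homoskedastic) sup-ADF statistic of Phillips et al. (2011), without the paper's time transformation, can be sketched as follows; the window fraction r0 and the regression specification (constant, no lag augmentation) are simplifications:

```python
import numpy as np

def df_tstat(y):
    # t-statistic on b in the Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t
    # (no lag augmentation, for simplicity).
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = (e @ e) / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sadf(y, r0=0.2):
    # Supremum of forward-expanding-window DF statistics.
    n = len(y)
    return max(df_tstat(y[:k]) for k in range(int(r0 * n), n + 1))

rng = np.random.default_rng(0)
rw = np.cumsum(rng.standard_normal(200))                            # unit root, no bubble
bubble = np.cumprod(np.full(200, 1.03)) + rng.standard_normal(200)  # explosive path
print(sadf(rw), sadf(bubble))  # the explosive series yields a far larger statistic
```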
  5. By: Sam Ouliaris; Adrian Pagan
    Abstract: When sign restrictions are used in SVARs impulse responses are only set identified. If sign restrictions are just given for a single shock the shocks may not be separated, and so the resulting structural equations can be unacceptable. Thus, in a supply demand model, if only signs are given for the impulse responses to a demand shock this may result in two supply curves being in the SVAR. One needs to find the identified set so that this effect is excluded. Granziera el al’s (2018) frequentist approach to inference potentially suffers from this issue. One also has to recognize that the identified set should be adjusted so that it produces responses to the same size shock. Finally, because researchers are often unwilling to set out sign restrictions to separate all shocks, we describe how this can be done with a SVAR/VAR system rather than a straight SVAR.
    Keywords: SVAR, Sign Restrictions, Identified Set
    JEL: E37 C51 C52
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2020-101&r=all
  6. By: Lopes Moreira Da Veiga, María Helena; Rue, Havard; Marín Díazaraque, Juan Miguel; Zea Bermudez, P. De
    Abstract: The aim of the paper is to implement the integrated nested Laplace approximations (INLA), known to be very fast and efficient, for a threshold stochastic volatility model. INLA replaces MCMC simulations with accurate deterministic approximations. We use proper although not very informative priors and Penalizing Complexity (PC) priors. The simulation results favor the use of PC priors, especially when the sample size varies from small to moderate. For these sample sizes, they provide more accurate estimates of the model's parameters, but as the sample size increases both types of priors lead to reliable estimates of the parameters. We also validate the estimation method in-sample and out-of-sample by applying it to six series of returns, including stock market, commodity, and cryptocurrency returns, and by forecasting their one-day-ahead volatilities, respectively. Our empirical results support that the TSV model does a good job in forecasting the one-day-ahead volatility of stock market and gold returns but faces difficulties when the volatility of returns is extreme, which occurs in the case of cryptocurrencies.
    Keywords: Threshold Stochastic Volatility Model; PC Priors; INLA
    JEL: C58 C52 C32 C13
    Date: 2021–01–27
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:31804&r=all
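The threshold mechanism can be illustrated by simulating a stylized threshold stochastic volatility model in which the log-volatility dynamics switch on the sign of the previous return; the parameter values below are hypothetical, and this sketch does not involve INLA estimation itself:

```python
import numpy as np

# Stylized threshold stochastic volatility simulation: the log-volatility
# AR(1) parameters (mu, phi) switch on the sign of the previous return.
# All parameter values are hypothetical.
rng = np.random.default_rng(4)
n = 1000
params = {0: (-0.10, 0.97),   # regime 0: previous return >= 0
          1: (-0.30, 0.90)}   # regime 1: previous return < 0
h = np.zeros(n)  # log-volatility
y = np.zeros(n)  # returns
for t in range(1, n):
    mu, phi = params[1 if y[t - 1] < 0 else 0]
    h[t] = mu + phi * h[t - 1] + 0.2 * rng.standard_normal()
    y[t] = np.exp(h[t] / 2) * rng.standard_normal()
```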
  7. By: Sayar Karmakar; Marek Chudy; Wei Biao Wu
    Abstract: Accurate forecasting is a fundamental focus of the econometric time-series literature. Practitioners and policy makers often want to predict outcomes over an entire future time horizon rather than just a single $k$-step-ahead prediction. These series, apart from their own possibly non-linear dependence, are often also influenced by many external predictors. In this paper, we construct prediction intervals of time-aggregated forecasts in a high-dimensional regression setting. Our approach is based on quantiles of residuals obtained by the popular LASSO routine. We allow for general heavy-tailed, long-memory, and non-linear stationary error processes and stochastic predictors. Through a series of systematically arranged consistency results, we provide theoretical guarantees for our proposed quantile-based method in all of these scenarios. After validating our approach using simulations, we also propose a novel bootstrap-based method that can boost the coverage of the theoretical intervals. Finally, analyzing the EPEX Spot data, we construct prediction intervals for hourly electricity prices over horizons spanning 17 weeks and contrast them with selected Bayesian and bootstrap interval forecasts.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.08223&r=all
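The residual-quantile idea can be sketched in a stylized form, with OLS standing in for the paper's LASSO step to keep the example dependency-free; the sample sizes, horizon h, and 90% level are all arbitrary:

```python
import numpy as np

# Stylized residual-quantile prediction interval for a time-aggregated forecast.
# OLS stands in for the paper's LASSO step; all sizes are arbitrary.
rng = np.random.default_rng(1)
n, p, h = 300, 5, 24
X = rng.standard_normal((n, p))
beta = np.array([1.0, -0.5, 0.0, 0.0, 0.3])
y = X @ beta + rng.standard_normal(n)

# Fit on a training window and keep the residuals.
coef, *_ = np.linalg.lstsq(X[:200], y[:200], rcond=None)
resid = y[:200] - X[:200] @ coef

# Point forecast aggregated over the next h periods, with an interval built
# from quantiles of resampled h-sums of residuals.
point = (X[200:200 + h] @ coef).sum()
sums = np.array([rng.choice(resid, h).sum() for _ in range(2000)])
lo, hi = point + np.quantile(sums, [0.05, 0.95])
print(lo, point, hi)
```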
  8. By: Boeckelmann Lukas; Stalla-Bourdillon Arthur
    Abstract: We propose a novel approach to quantifying spillovers on financial markets based on a structural version of the Diebold-Yilmaz framework. Key to our approach is a SVAR-GARCH model that is statistically identified by heteroskedasticity, economically identified by maximum shock contribution, and that allows for time-varying forecast error variance decompositions. We analyze credit risk spillovers between EZ sovereign and bank CDS. Methodologically, we find the model to better match economic narratives than common spillover approaches and to be more reactive than models relying on rolling-window estimations. We find that spillovers explain, on average, 37% of the variation in our sample, with strong variation over time.
    Keywords: CDS, spillover, sovereign debt, systemic risk, SVAR, identification by heteroskedasticity
    JEL: C58 G01 G18 G21
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:bfr:banfra:798&r=all
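For context, the Diebold-Yilmaz total spillover index derived from a forecast error variance decomposition is the average off-diagonal share; the 3x3 FEVD matrix below is hypothetical, and the paper's contribution lies in obtaining such decompositions from a structurally identified, time-varying SVAR-GARCH rather than in this final step:

```python
import numpy as np

# Hypothetical 3-market forecast error variance decomposition: entry (i, j)
# is the share of market i's forecast error variance due to shocks from
# market j; each row sums to one.
D = np.array([[0.70, 0.20, 0.10],
              [0.15, 0.75, 0.10],
              [0.05, 0.25, 0.70]])

# Diebold-Yilmaz total spillover index: average off-diagonal share.
total_spillover = (D.sum() - np.trace(D)) / D.shape[0]
print(total_spillover)
```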
  9. By: Isao Shoji; Masahiro Nozawa
    Abstract: A geometric method to analyze nonlinear oscillations is discussed. We consider a nonlinear oscillation modeled by a second-order ordinary differential equation without specifying the function form. By transforming the differential equation into a system of first-order ordinary differential equations, the trajectory is embedded in $R^3$ as a curve, and thereby the time evolution of the original state can be translated into the behavior of the curve in $R^3$, or the vector field along the curve. We analyze the vector field to investigate the dynamic properties of a nonlinear oscillation. While the function form of the model is unspecified, the vector fields and their associated quantities can be estimated by a nonparametric filtering method. We apply the proposed analysis to the time series of the Japanese stock price index. The application shows that the vector field and its derivative can be used as tools for picking up various signals that aid understanding of the dynamic properties of the stock price index.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.11825&r=all
  10. By: Antonio Martin Arroyo; Aranzazu de Juan Fernandez
    Abstract: This paper considers the Split-Then-Combine (STC) approach (Arroyo and de Juan, 2014) to combining forecasts inside the simplex space, the sample space of positive weights adding up to one. As it turns out, the simplicial statistic given by the center of the simplex compares favorably against the fixed-weight average forecast. In addition, we develop a Combine-After-Selection (CAS) method to get rid of redundant forecasters. We apply these two approaches to make out-of-sample one-step-ahead combinations and subcombinations of forecasts for several economic variables. This methodology is particularly useful when the sample size is smaller than the number of forecasts, a case where other methods (e.g., Least Squares (LS) or Principal Component Analysis (PCA)) are not applicable.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.11935&r=all
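A minimal sketch of the simplex center, assuming it is the compositional (Aitchison) center, i.e. the normalized vector of componentwise geometric means of the weight vectors; the weight matrix below is made up, and the paper's STC procedure for producing the per-split weights is not reproduced here:

```python
import numpy as np

def simplex_center(W):
    # Compositional (Aitchison) center of a set of weight vectors:
    # normalized componentwise geometric means.
    g = np.exp(np.log(W).mean(axis=0))
    return g / g.sum()

# Made-up weight vectors from three splits; each row lies in the simplex.
W = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.6, 0.2, 0.2]])
w = simplex_center(W)
print(w)  # a combination weight vector that again sums to one
```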
  11. By: Niko Hauzenberger; Florian Huber; Karin Klieber
    Abstract: In this paper, we assess whether using non-linear dimension reduction techniques pays off for forecasting inflation in real time. Several recent methods from the machine learning literature are adopted to map a large-dimensional dataset into a lower-dimensional set of latent factors. We model the relationship between inflation and these latent factors using state-of-the-art time-varying parameter (TVP) regressions with shrinkage priors. Using monthly real-time data for the US, our results suggest that adding such non-linearities yields forecasts that are, on average, highly competitive with those obtained from methods using linear dimension reduction techniques. Zooming into model performance over time moreover reveals that controlling for non-linear relations in the data is of particular importance during recessionary episodes of the business cycle.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.08155&r=all
  12. By: Hubert Gabrisch (The Vienna Institute for International Economic Studies, wiiw)
    Abstract: This study attempts to identify uncertainty in the long-term rate of interest based on the controversial interest rate theories of Keynes and Kalecki. While Keynes stated that the future of the rate of interest is uncertain because it is numerically incalculable, Kalecki was convinced that it could be predicted. The theories are empirically tested using a reduced-form GARCH-in-mean model applied to six globally leading financial markets. The obtained results support Keynes's theory: the long-term rate of interest is a nonergodic financial phenomenon. Analyses of the relation between the interest rate and macroeconomic variables without interest uncertainty are thus seriously incomplete.
    Keywords: uncertainty, interest rate, Keynes, Kalecki, GARCH
    JEL: B26 C58 E43 E47
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:wii:wpaper:191&r=all
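The GARCH(1,1) recursion underlying a GARCH-in-mean specification can be simulated as follows; the parameter values are hypothetical, and the in-mean term lam * sig2_t is the channel through which volatility (uncertainty) enters the mean equation:

```python
import numpy as np

# Simulation of the GARCH(1,1)-in-mean building block:
#   r_t = mu + lam * sig2_t + eps_t,   eps_t = sqrt(sig2_t) * z_t,
#   sig2_t = omega + alpha * eps_{t-1}^2 + beta * sig2_{t-1}.
# All parameter values are hypothetical.
rng = np.random.default_rng(3)
n = 2000
omega, alpha, beta, mu, lam = 0.05, 0.08, 0.90, 0.01, 0.10
sig2 = np.empty(n)
eps = np.empty(n)
r = np.empty(n)
sig2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
for t in range(n):
    if t > 0:
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    r[t] = mu + lam * sig2[t] + eps[t]  # the in-mean term carries the risk premium
```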

This nep-ets issue is ©2021 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.