on Econometric Time Series |
By: | Niels Haldrup (Aarhus University and CREATES); Oskar Knapik (Aarhus University and CREATES); Tommaso Proietti (University of Rome “Tor Vergata” and CREATES) |
Abstract: | We consider the issue of modelling and forecasting daily electricity spot prices on the Nord Pool Elspot power market. We propose a method that can handle seasonal and non-seasonal persistence by modelling the price series as a generalized exponential process. As the presence of spikes can distort the estimation of the dynamic structure of the series, we consider an iterative estimation strategy which, conditional on a set of parameter estimates, clears the spikes using a data cleaning algorithm, and re-estimates the parameters using the cleaned data so as to robustify the estimates. Conditional on the estimated model, the best linear predictor is constructed. Our modelling approach provides a good within-sample fit and outperforms competing benchmark predictors in terms of forecasting accuracy. We also find that building separate models for each hour of the day and averaging the forecasts is a better strategy than forecasting the daily average directly. |
Keywords: | Robust estimation, long-memory, seasonality, electricity spot prices, Nord Pool power market, forecasting, robust Kalman filter, generalized exponential model |
JEL: | C1 C5 C53 Q4 |
Date: | 2016–03–18 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2016-08&r=ets |
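The iterative spike-cleaning idea in the abstract above can be illustrated with a deliberately simplified sketch: fit a model, flag residuals that are extreme relative to a robust scale, replace the flagged observations with their fitted values, and refit. The AR(1) specification, the 3-MAD cut-off, and the function name below are illustrative assumptions, not the authors' generalized exponential model.

```python
import numpy as np

def robust_ar1_fit(y, k=3.0, max_iter=10):
    """Iteratively fit a zero-mean AR(1) by OLS, flag residuals
    beyond k robust standard deviations (MAD-based) as spikes,
    replace them with one-step predictions, and refit."""
    y = np.asarray(y, dtype=float).copy()
    phi = 0.0
    for _ in range(max_iter):
        x, z = y[:-1], y[1:]
        phi_new = np.dot(x, z) / np.dot(x, x)      # OLS slope
        resid = z - phi_new * x
        # robust scale: 1.4826 * MAD is consistent for the normal sd
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        spikes = np.abs(resid) > k * scale
        if abs(phi_new - phi) < 1e-8 and not spikes.any():
            return phi_new, y
        # clean: replace spiked observations by their predictions
        z_clean = np.where(spikes, phi_new * x, z)
        y = np.concatenate(([y[0]], z_clean))
        phi = phi_new
    return phi, y
```

On a persistent series contaminated with additive spikes, the naive OLS estimate is attenuated toward zero, while the cleaned refit stays close to the true coefficient.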
By: | Tetsuya Takaishi |
Abstract: | The realized stochastic volatility (RSV) model that utilizes the realized volatility as additional information has been proposed to infer volatility of financial time series. We consider the Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time in performing the HMC algorithm on GPU (GTX 760) and CPU (Intel i7-4770 3.4GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a speedup similar to that of CUDA Fortran. |
Date: | 2016–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1603.08114&r=ets |
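The Hybrid (Hamiltonian) Monte Carlo update used in the paper above has a standard core: refresh a momentum, integrate Hamiltonian dynamics with the leapfrog scheme, and accept or reject with a Metropolis test. A minimal single-chain Python sketch (not the paper's CUDA Fortran implementation; target, step size and trajectory length are illustrative):

```python
import numpy as np

def hmc_step(x, logp, logp_grad, eps=0.1, L=20, rng=None):
    """One HMC update: momentum refresh, L leapfrog steps,
    Metropolis accept/reject on the Hamiltonian."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)            # momentum refresh
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * logp_grad(x_new)       # half step for momentum
    for _ in range(L - 1):
        x_new += eps * p_new                    # full step for position
        p_new += eps * logp_grad(x_new)         # full step for momentum
    x_new += eps * p_new
    p_new += 0.5 * eps * logp_grad(x_new)       # final half step
    # Hamiltonian = -log target + kinetic energy
    h_old = -logp(x) + 0.5 * np.dot(p, p)
    h_new = -logp(x_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return x_new
    return x
```

Because each leapfrog step is built from dense arithmetic over the parameter vector, the integrator parallelizes naturally, which is what makes the GPU port worthwhile.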
By: | Seok Young Hong; Oliver Linton; Hui Jun Zhang |
Abstract: | We propose several multivariate variance ratio statistics. We derive the asymptotic distribution of the statistics and scalar functions thereof under the null hypothesis that returns are unpredictable after a constant mean adjustment (i.e., under the weak form Efficient Market Hypothesis). We do not impose the no leverage assumption of Lo and MacKinlay (1988) but our asymptotic standard errors are relatively simple and in particular do not require the selection of a bandwidth parameter. We extend the framework to allow for a time varying risk premium through common systematic factors. We show the limiting behaviour of the statistic under a multivariate fads model and under a moderately explosive bubble process: these alternative hypotheses give opposite predictions with regard to the long run value of the statistics. We apply the methodology to five weekly size-sorted CRSP portfolio returns from 1962 to 2013 in three subperiods. We find evidence of a reduction of linear predictability in the most recent period, for small and medium cap stocks. The main findings are not substantially affected by allowing for a common factor time varying risk premium. |
Keywords: | Bubbles; Fads; Martingale; Momentum; Predictability |
JEL: | C10 C32 G10 G12 |
Date: | 2015–03–24 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:1552&r=ets |
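The building block that the multivariate statistics above generalize is the univariate variance ratio: the variance of (overlapping) k-period returns divided by k times the one-period variance, which equals one under a random walk. A minimal sketch of that univariate building block, not the paper's multivariate statistics or standard errors:

```python
import numpy as np

def variance_ratio(returns, k):
    """Variance ratio VR(k): variance of overlapping k-period
    returns over k times the one-period variance; close to 1
    under the unpredictability (random walk) null."""
    r = np.asarray(returns, dtype=float) - np.mean(returns)
    var1 = np.mean(r ** 2)
    rk = np.convolve(r, np.ones(k), mode="valid")  # overlapping k-period sums
    vark = np.mean(rk ** 2)
    return vark / (k * var1)
```

Positive return autocorrelation (momentum, or an explosive bubble) pushes VR(k) above one; mean reversion (fads) pushes it below one, which is the "opposite predictions" contrast the abstract refers to.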
By: | Barigozzi, Matteo; Lippi, Marco; Luciani, Matteo |
Abstract: | The paper studies Non-Stationary Dynamic Factor Models such that: (1) the factors F are I(1) and singular, i.e. F has dimension r and is driven by a q-dimensional white noise, the common shocks, with q < r.
Keywords: | Cointegration for singular vectors ; Dynamic Factor Models for I(1) variables ; Granger Representation Theorem for singular vectors |
JEL: | C01 E00 |
Date: | 2016–02–16 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2016-18&r=ets |
By: | Warusawitharana, Missaka |
Abstract: | While many studies find that the tail distribution of high frequency stock returns follows a power law, there are only a few explanations for this finding. This study presents evidence that time-varying volatility can account for the power law property of high frequency stock returns. The power law coefficients obtained by estimating a conditional normal model with nonparametric volatility show a striking correspondence to the power law coefficients estimated from returns data for stocks in the Dow Jones index. A cross-sectional regression of the data coefficients on the model-implied coefficients yields a slope close to one, supportive of the hypothesis that the two sets of power law coefficients are identical. Further, for most of the stocks in the sample taken individually, the model-implied coefficient falls within the 95 percent confidence interval for the coefficient estimated from returns data. |
Keywords: | Tail distributions ; high frequency returns ; power laws ; time-varying volatility |
JEL: | C58 D30 G12 |
Date: | 2016–03–18 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2016-22&r=ets |
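A common way to estimate the power law (tail index) coefficients that the study above compares is the Hill estimator, computed from the largest order statistics of the absolute returns. This is offered as a standard illustration of tail-index estimation, not necessarily the estimator used in the paper:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index alpha from the k largest
    absolute observations:
    alpha_hat = k / sum_i log(x_(i) / x_(k+1))."""
    a = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]  # descending
    tail, threshold = a[:k], a[k]
    return k / np.sum(np.log(tail / threshold))
```

On exact Pareto data with tail index 3 (roughly the "cubic law" often reported for high frequency returns), the estimator recovers the index up to sampling noise of order alpha divided by the square root of k.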
By: | Han Lin Shang; Rob J Hyndman |
Abstract: | Age-specific mortality rates are often disaggregated by different attributes, such as sex, state and ethnicity. Forecasting age-specific mortality rates at the national and sub-national levels plays an important role in developing social policy. However, independent forecasts of age-specific mortality rates at the sub-national levels may not add up to the forecasts at the national level. To address this issue, we consider the problem of reconciling age-specific mortality rate forecasts from the viewpoint of grouped univariate time series forecasting methods (Hyndman, Ahmed, et al., 2011), and extend these methods to functional time series forecasting, where age is considered as a continuum. The grouped functional time series methods are used to produce point forecasts of mortality rates that are aggregated appropriately across different disaggregation factors. For evaluating forecast uncertainty, we propose a bootstrap method for reconciling interval forecasts. Using the regional age-specific mortality rates in Japan, obtained from the Japanese Mortality Database, we investigate the one- to ten-step-ahead point and interval forecast accuracies between the independent and grouped functional time series forecasting methods. The proposed methods are shown to be useful for reconciling forecasts of age-specific mortality rates at the national and sub-national levels, and they also enjoy improved forecast accuracy averaged over different disaggregation factors. |
Keywords: | Forecast reconciliation, hierarchical time series forecasting, bottom-up, optimal combination, Japanese Mortality Database |
JEL: | C14 C32 J11 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2016-4&r=ets |
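The aggregation constraint that motivates the reconciliation above (sub-national forecasts should add up to the national forecast) is easiest to see in the simplest member of this class of methods, bottom-up aggregation; the paper itself extends grouped methods such as optimal combination to functional time series, which this sketch does not attempt:

```python
import numpy as np

def bottom_up(bottom_forecasts, groups):
    """Bottom-up reconciliation: forecast only the most
    disaggregated series, then sum, so every group total is by
    construction the sum of its members.
    bottom_forecasts: dict  series name -> forecast array
    groups:           dict  group name  -> list of member names"""
    out = dict(bottom_forecasts)
    for g, members in groups.items():
        out[g] = np.sum([bottom_forecasts[m] for m in members], axis=0)
    return out
```

Independent forecasts at each level generally violate this adding-up identity; reconciliation methods restore it, and, as the abstract reports, can improve accuracy at the same time.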
By: | Tingting Cheng; Jiti Gao; Peter CB Phillips |
Abstract: | The ergodic theorem shows that ergodic averages of the posterior draws converge in probability to the posterior mean under the stationarity assumption. The literature also shows that the posterior distribution is asymptotically normal when the sample size of the original data considered goes to infinity. To the best of our knowledge, there is little discussion on the large sample behaviour of the posterior mean. In this paper, we aim to fill this gap. In particular, we extend the posterior mean idea to the conditional mean case, which conditions on a given summary statistic of the original data. We establish a new asymptotic theory for the conditional mean estimator for the case when both the sample size of the original data concerned and the number of Markov chain Monte Carlo iterations go to infinity. Simulation studies show that this conditional mean estimator has very good finite sample performance. In addition, we employ the conditional mean estimator to estimate a GARCH(1,1) model for S&P 500 stock returns and find that the conditional mean estimator performs better than quasi-maximum likelihood estimation in terms of out-of-sample forecasting. |
Keywords: | Bayesian average, conditional mean estimation, ergodic theorem, summary statistic |
JEL: | C11 C15 C21 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2016-5&r=ets |
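The starting point of the paper above, the ergodic average of MCMC draws converging to the posterior mean, can be shown in a few lines with a random-walk Metropolis chain. The sampler, step size and target below are illustrative assumptions, not the paper's conditional mean estimator for GARCH models:

```python
import numpy as np

def posterior_mean_rw(logpost, x0, n_iter=20000, step=0.5,
                      burn=2000, rng=None):
    """Estimate the posterior mean as the ergodic average of
    random-walk Metropolis draws; by the ergodic theorem the
    average converges to E[theta | data] as n_iter grows."""
    rng = rng or np.random.default_rng()
    x, lp = float(x0), logpost(x0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept
            x, lp = prop, lp_prop
        draws[i] = x
    return draws[burn:].mean()
```

The paper's question is what happens to this average when the data sample size and the number of iterations grow jointly, rather than the iterations alone.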
By: | Bin Jiang; Anastasios Panagiotelis; George Athanasopoulos; Rob Hyndman; Farshid Vahid |
Abstract: | Estimating the rank of the coefficient matrix is a major challenge in multivariate regression, including vector autoregression (VAR). In this paper, we develop a novel fully Bayesian approach that allows for rank estimation. The key to our approach is reparameterizing the coefficient matrix using its singular value decomposition and conducting Bayesian inference on the decomposed parameters. By implementing a stochastic search variable selection on the singular values of the coefficient matrix, the ultimate selected rank can be identified as the number of nonzero singular values. Our approach is appropriate for small multivariate regressions as well as for higher dimensional models with up to about 40 predictors. In macroeconomic forecasting using VARs, the advantages of shrinkage through proper Bayesian priors are well documented. Consequently, the shrinkage approach proposed here that selects or averages over low rank coefficient matrices is evaluated in a forecasting environment. We show in both simulations and empirical studies that our Bayesian approach provides forecasts that are better than those of the most promising benchmark methods, dynamic factor models and factor augmented VARs. |
Keywords: | Singular value decomposition, model selection, vector autoregression, macroeconomic forecasting, dynamic factor models |
JEL: | C11 C52 C53 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2016-6&r=ets |
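The reparameterization at the heart of the approach above, identifying rank with the number of nonzero singular values of the coefficient matrix, has a simple deterministic analogue: keep the largest singular values and zero the rest. The sketch below shows that analogue only; the paper's stochastic search variable selection over the singular values is not reproduced here:

```python
import numpy as np

def reduced_rank(B, rank):
    """Reduced-rank approximation of a coefficient matrix via its
    singular value decomposition: keep the `rank` largest singular
    values and set the remainder to zero."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt
```

In the Bayesian version, a spike-and-slab style prior decides which singular values are shrunk exactly to zero, so the rank is selected (or averaged over) rather than fixed in advance.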
By: | Shujie Ma (Department of Statistics, University of California, Riverside); Liangjun Su (Singapore Management University) |
Abstract: | In this paper we study the estimation of a large dimensional factor model when the factor loadings exhibit an unknown number of changes over time. We propose a novel three-step procedure to detect the breaks, if any, and identify their locations. In the first step, we divide the whole time span into subintervals and fit a conventional factor model on each interval. In the second step, we apply the adaptive fused group Lasso to identify intervals containing a break. In the third step, we devise a grid search method to estimate the location of the break on each identified interval. We show that with probability approaching one our method can identify the correct number of changes and estimate the break locations. Simulation studies indicate superb finite sample performance of our method. We apply our method to investigate Stock and Watson’s (2009) U.S. monthly macroeconomic data set and identify five breaks in the factor loadings, spanning 1959-2006. |
Keywords: | Break point; Convergence rate; Factor model; Fused Lasso; Group Lasso; Information criterion; Principal component; Structural change; Super-consistency; Time-varying parameter. |
JEL: | C12 C33 C38
Date: | 2016–03 |
URL: | http://d.repec.org/n?u=RePEc:siu:wpaper:05-2016&r=ets |
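The grid search in the third step above can be illustrated in a stripped-down scalar setting: on an interval known to contain one break in a slope coefficient, try every admissible split point and keep the one minimizing the combined sum of squared residuals of the two sub-sample fits. The single-regressor setup below is an illustrative simplification, not the paper's factor-loading procedure:

```python
import numpy as np

def grid_search_break(y, x, trim=10):
    """Grid search for a single break date in the slope of
    y_t = b * x_t + e_t: choose the split minimizing the combined
    SSR of separate OLS fits on each side of the candidate date."""
    n = len(y)
    best_tau, best_ssr = None, np.inf
    for tau in range(trim, n - trim):
        ssr = 0.0
        for s in (slice(0, tau), slice(tau, n)):
            b = np.dot(x[s], y[s]) / np.dot(x[s], x[s])  # sub-sample OLS
            ssr += np.sum((y[s] - b * x[s]) ** 2)
        if ssr < best_ssr:
            best_tau, best_ssr = tau, ssr
    return best_tau
```

Break-date estimators of this least-squares type are super-consistent, which is why the grid search can pin down the location sharply once the right interval has been flagged.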
By: | Florian Huber (Department of Economics, Vienna University of Economics and Business); Martin Feldkircher (Oesterreichische Nationalbank (OeNB)) |
Abstract: | Vector autoregressive (VAR) models are frequently used for forecasting and impulse response analysis. For both applications, shrinkage priors can help improve inference. In this paper we derive the shrinkage prior of Griffin et al. (2010) for the VAR case and its relevant conditional posterior distributions. This framework imposes a set of normally distributed priors on the autoregressive coefficients and the covariances of the VAR along with Gamma priors on a set of local and global prior scaling parameters. This prior setup is then generalized by introducing another layer of shrinkage with scaling parameters that push certain regions of the parameter space to zero. A simulation exercise shows that the proposed framework yields more precise estimates of the model parameters and impulse response functions. In addition, a forecasting exercise applied to US data shows that the proposed prior outperforms other specifications in terms of point and density predictions. |
Keywords: | Normal-Gamma prior, density predictions, hierarchical modeling |
JEL: | C11 C30 C53 E52 |
Date: | 2016–03 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp221&r=ets |
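The shrinkage mechanism of the Normal-Gamma hierarchy above can be seen by simulating from the prior itself: local variances drawn from a Gamma with small shape concentrate coefficients near zero while leaving heavy tails for genuine signals. The hyperparameter values below are illustrative, not those of the paper:

```python
import numpy as np

def normal_gamma_draws(n, shape=0.1, rate=0.1, rng=None):
    """Draw coefficients from a Normal-Gamma hierarchy:
    psi_j ~ Gamma(shape, rate)  (local prior variances),
    beta_j | psi_j ~ N(0, psi_j).
    Small `shape` puts heavy prior mass near zero (strong
    shrinkage) while retaining fat tails for large coefficients."""
    rng = rng or np.random.default_rng()
    psi = rng.gamma(shape, 1.0 / rate, size=n)   # numpy uses scale = 1/rate
    return rng.standard_normal(n) * np.sqrt(psi)
```

Compared with a plain normal prior of the same overall variance, a far larger share of draws sits essentially at zero, which is exactly the behaviour that makes this prior attractive for densely parameterized VARs.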