on Econometric Time Series |
By: | Matthieu Garcin (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, Natixis Asset Management - SAMS, LABEX Refi - ESCP Europe); Clément Goulet (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, LABEX Refi - ESCP Europe) |
Abstract: | In this paper we propose a new model for estimating returns and volatility. Our approach combines the wavelet denoising technique with variational theory. We show that the volatility can be expressed as a non-parametric functional form of past returns. We are therefore able to forecast both returns and volatility and to build confidence intervals for predicted returns. Our technique outperforms classical time series models. Our model does not require stationarity of the observed log-returns, it preserves the stylised facts of volatility, and it rests on a fully non-parametric form, obtained via multiplicative noise theory. To our knowledge, this is the first time that such a method has been used for financial modeling. We propose an application to intraday and daily financial data. |
Keywords: | Volatility modeling, variational calculus, wavelet theory, trading strategy |
Date: | 2015–09 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:halshs-01244292&r=ets |
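The entry above couples wavelet denoising with return and volatility estimation. As a rough illustration of the denoising step only (not the authors' model, which also rests on variational and multiplicative-noise arguments), here is a one-level Haar transform with soft thresholding at the Donoho-Johnstone universal threshold; the even series length and the MAD-based noise estimate are assumptions of this sketch:

```python
import math

def haar_dwt(x):
    # one decomposition level: approximation (a) and detail (d) coefficients
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    # exact inverse of one Haar decomposition level
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft_threshold(coeffs, t):
    # shrink each coefficient towards zero by t, zeroing the small ones
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(returns):
    # assumes an even number of observations
    a, d = haar_dwt(returns)
    # noise scale from the median absolute detail coefficient (MAD estimate)
    mad = sorted(abs(c) for c in d)[len(d) // 2]
    sigma = mad / 0.6745
    # Donoho-Johnstone universal threshold
    t = sigma * math.sqrt(2 * math.log(len(returns)))
    return haar_idwt(a, soft_threshold(d, t))
```

In practice one would use a multi-level transform with a smoother wavelet (e.g. via PyWavelets); the one-level Haar case keeps the mechanics visible.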
By: | Yang, Yuan; Wang, Lu |
Abstract: | We develop a procedure that efficiently computes the likelihood function in nonlinear dynamic stochastic general equilibrium (DSGE) models. The procedure applies a linearization to the measurement equation and delivers results competitive with the fully-adapted particle filter. The resulting likelihood approximation has much lower Monte Carlo variance than currently available particle filters, which greatly enhances likelihood-based inference of DSGE models. We illustrate our procedure in applications to Bayesian estimation of a new Keynesian macroeconomic model. |
Keywords: | DSGE model, auxiliary particle filter, Bayesian estimation |
JEL: | C11 C15 C32 C63 |
Date: | 2015–11 |
URL: | http://d.repec.org/n?u=RePEc:cpm:dynare:047&r=ets |
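The procedure above targets nonlinear DSGE models. For the special case of a fully linear Gaussian state space (a hypothetical scalar example, not the paper's setting), the likelihood that a particle filter can only approximate is available exactly from the Kalman recursions, which also serve as the standard benchmark when assessing a filter's Monte Carlo variance:

```python
import math

def kalman_loglik(y, phi, q, h, r, x0=0.0, p0=1.0):
    # scalar model: x_t = phi * x_{t-1} + w_t, w_t ~ N(0, q)
    #               y_t = h * x_t + v_t,     v_t ~ N(0, r)
    x, p, ll = x0, p0, 0.0
    for yt in y:
        # prediction step
        x = phi * x
        p = phi * phi * p + q
        # innovation and its variance
        s = h * h * p + r
        e = yt - h * x
        ll += -0.5 * (math.log(2 * math.pi * s) + e * e / s)
        # update step
        k = p * h / s
        x += k * e
        p = (1 - k * h) * p
    return ll
```

With `phi = q = 0` and a degenerate prior the recursion collapses to summing i.i.d. Gaussian log-densities, a convenient sanity check.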
By: | Antoine Mandel (Centre d'Economie de la Sorbonne - Paris School of Economics); Amir Sani (Centre d'Economie de la Sorbonne - Paris School of Economics) |
Abstract: | Combining forecasts has been demonstrated as a robust solution to noisy data, structural breaks, unstable forecasters and shifting environmental dynamics. In practice, sophisticated combination methods have failed to consistently outperform the mean over multiple horizons, pools of varying forecasters and different endogenous variables. This paper addresses the challenge to “develop methods better geared to the intermittent and evolving nature of predictive relations”, noted in Stock and Watson (2001), by proposing an adaptive non-parametric “meta” approach that provides a time-varying hedge against the performance of the mean for any selected forecast combination approach. This approach arguably solves the so-called “Forecast Combination Puzzle” using a meta-algorithm that adaptively hedges weights between the mean and a specific forecast combination algorithm or pool of forecasters augmented with one or more forecast combination algorithms. Theoretical performance bounds are reported and empirical performance is evaluated on the seven country macroeconomic output and inflation dataset introduced in Stock and Watson (2001) as well as the Euro-area Survey of Professional Forecasters. |
Keywords: | Forecast combinations; Forecast combination puzzle; Machine learning; Econometrics |
JEL: | C71 D85 |
Date: | 2016–04 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:16036r&r=ets |
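The abstract describes a meta-algorithm that adaptively hedges between the mean combination and another combiner. A standard exponential-weights (Hedge) scheme in that spirit, with a made-up learning rate and just two experts, is sketched below; this is not the authors' exact procedure and carries none of their performance bounds:

```python
import math

def hedge_combine(expert_losses, eta=0.5):
    # expert_losses: per period, [loss of the mean forecast, loss of the combiner]
    # returns the sequence of weights placed on the mean forecast
    w = [1.0, 1.0]
    weights_on_mean = []
    for losses in expert_losses:
        total = w[0] + w[1]
        weights_on_mean.append(w[0] / total)
        # exponential down-weighting of whichever expert lost more
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return weights_on_mean
```

If the sophisticated combiner keeps underperforming, the weight drifts back towards the mean, which is one reading of how such a scheme sidesteps the forecast combination puzzle.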
By: | Kim Christensen (Aarhus University and CREATES); Roel Oomen (Deutsche Bank AG (London) and London School of Economics & Political Science (LSE) - Department of Statistics); Roberto Renò (Department of Economics, University of Verona) |
Abstract: | The Drift Burst Hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent US equity and Treasury flash crashes can be viewed as two high profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude are an expected and regular occurrence in financial markets that can arise through established mechanisms such as feedback trading. At a theoretical level, we show how to build drift bursts into the continuous-time Itô semi-martingale model in such a way that the fundamental arbitrage-free property is preserved. We then develop a non-parametric test statistic that allows for the identification of drift bursts from noisy high-frequency data. We apply this methodology to a comprehensive set of tick data and show that drift bursts form an integral part of the price dynamics across equities, fixed income, currencies and commodities. We find that the majority of identified drift bursts are accompanied by strong price reversals and these can therefore be regarded as “flash crashes” that span brief periods of severe market disruption without any material longer term price impacts. |
Keywords: | flash crashes, drift bursts, volatility bursts, nonparametric statistics, reversals |
JEL: | G10 C58 |
Date: | 2016–09–27 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2016-28&r=ets |
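The paper's identification statistic localizes the drift and the volatility with kernel weights applied to noisy high-frequency data. A stripped-down flat-kernel analogue over a single window of returns (an illustration of the idea, not the authors' statistic) compares the cumulative drift to the realized volatility of the window:

```python
import math

def drift_burst_stat(returns):
    # simplified t-type statistic: localized drift / localized volatility
    drift = sum(returns)                               # cumulative return in window
    vol = math.sqrt(sum(r * r for r in returns))       # realized volatility in window
    return drift / vol if vol > 0 else 0.0
```

A persistent one-directional move inflates the numerator faster than the denominator, so a large absolute value flags a candidate drift burst; alternating returns leave the statistic near zero.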
By: | Gabriele Fiorentini (UNIVERSITÀ DI FIRENZE and RCEA); Alessandro Galesi (Banco de España); Enrique Sentana (CEMFI) |
Abstract: | We make two complementary contributions to efficiently estimate dynamic factor models: a frequency domain EM algorithm and a swift iterated indirect inference procedure for ARMA models with no asymptotic efficiency loss for any finite number of iterations. Although our procedures can estimate such models with many series without good initial values, near the optimum we recommend switching to a gradient method that analytically computes spectral scores using the EM principle. We successfully employ our methods to construct an index that captures the common movements of US sectoral employment growth rates, which we compare to the indices obtained by semiparametric methods. |
Keywords: | indirect inference, Kalman filter, sectoral employment, spectral maximum likelihood, Wiener-Kolmogorov filter |
JEL: | C32 C38 C51 |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:1619&r=ets |
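The entry above estimates dynamic factor models by a frequency-domain EM algorithm. As a much cruder time-domain stand-in (illustration only, not the authors' method), the common movement of several demeaned series can be proxied by their first principal component, computed here with power iteration:

```python
def common_factor(series, iters=200):
    # series: list of equal-length demeaned time series
    k, n = len(series), len(series[0])
    # sample covariance matrix across series
    cov = [[sum(series[i][t] * series[j][t] for t in range(n)) / n
            for j in range(k)] for i in range(k)]
    # power iteration for the leading eigenvector (the factor loadings);
    # assumes a positive leading eigenvalue and a non-orthogonal start
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # factor estimate: loading-weighted combination of the series
    return [sum(v[i] * series[i][t] for i in range(k)) for t in range(n)]
```

When every series is the same up to scale, the extracted factor reproduces that common path exactly, which is the degenerate limit of "common movements".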
By: | Giuseppe Cavaliere (Università di Bologna); Luca De Angelis (Università di Bologna); Luca Fanelli |
Abstract: | We investigate the asymptotic and finite sample properties of the most widely used information criteria for co-integration rank determination in ‘partial’ systems, i.e. in co-integrated Vector Autoregressive (VAR) models where a subset of variables of interest is modeled conditional on another subset of variables. The asymptotic properties of the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Hannan-Quinn Information Criterion (HQC) are established, and consistency of BIC and HQC is proved. Notably, consistency of BIC and HQC is robust to violations of the hypothesis of weak exogeneity of the conditioning variables with respect to the co-integration parameters. More precisely, BIC and HQC recover the true co-integration rank from the partial system analysis also when the conditional model does not convey all information about the co-integration parameters. This result opens up interesting possibilities for practitioners who can determine the co-integration rank in partial systems without being concerned with the weak exogeneity of the conditioning variables. A Monte Carlo experiment which considers large systems as the data generating process shows that BIC and HQC applied in partial systems perform reasonably well in small samples and comparatively better than ‘traditional’ approaches for co-integration rank determination. We further show the usefulness of our approach and the benefits of the conditional system analysis to co-integration rank determination with two empirical illustrations, both based on the estimation of VAR systems on U.S. quarterly data. Overall, our analysis clearly shows that the gains of combining information criteria with partial systems analysis are indisputable. |
Keywords: | Information criteria, Co-integration, Partial system, Conditional model, VAR |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:135&r=ets |
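Criterion-based rank determination as studied above reduces, once the profile log-likelihood of each candidate rank is in hand, to minimizing -2 log L plus a penalty on the parameter count. A generic sketch for BIC and HQC (with hypothetical inputs, not the paper's partial-system likelihoods):

```python
import math

def select_rank(logliks, n_params, n_obs, criterion="bic"):
    # logliks[r]: maximized log-likelihood of the model with co-integration rank r
    # n_params[r]: number of free parameters at rank r
    if criterion == "bic":
        pen = math.log(n_obs)                  # BIC penalty per parameter
    else:
        pen = 2.0 * math.log(math.log(n_obs))  # HQC penalty per parameter
    crit = [-2.0 * ll + k * pen for ll, k in zip(logliks, n_params)]
    return min(range(len(crit)), key=crit.__getitem__)
```

The HQC penalty grows more slowly in the sample size than BIC's, yet both are strong enough for the consistency results the abstract reports.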
By: | Andrew Clare; James Seaton; Peter N. Smith; Stephen Thomas |
Abstract: | Sequence risk is a poorly understood, but crucial, aspect of the risk faced by many investors. Using US equity data from 1872 to 2015, we apply the concept of Perfect Withdrawal Rates to show how this risk can be significantly reduced by applying simple, trend-following investment strategies. We also show that knowing the CAPE ratio at the beginning of a decumulation period is useful for predicting and enhancing the sustainable withdrawal rate. |
Keywords: | Sequence Risk; Perfect Withdrawal Rate; Decumulation; Trend-Following; CAPE |
JEL: | G10 G11 G22 |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:16/11&r=ets |
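A Perfect Withdrawal Rate, as used in the abstract, is the constant withdrawal (as a fraction of initial wealth) that exhausts the portfolio exactly at the end of a known return sequence. Assuming withdrawals at the start of each period (one common convention; the paper's exact timing may differ), it has a closed form:

```python
def perfect_withdrawal_rate(returns, wealth=1.0):
    # withdrawal taken at the start of each period that leaves exactly
    # zero wealth after the last period of the realized return sequence
    growth = 1.0   # compounded growth from the current period to the end
    annuity = 0.0  # sum of forward growth factors over all periods
    for r in reversed(returns):
        growth *= (1.0 + r)
        annuity += growth
    return wealth * growth / annuity
```

With two flat years the investor can take exactly half the initial wealth each year; reshuffling the same returns changes the answer, which is precisely the sequence risk at issue.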
By: | Patrick Leung; Catherine S. Forbes; Gael M. Martin; Brendan McCabe |
Abstract: | This paper proposes new automated proposal distributions for sequential Monte Carlo algorithms, including particle filtering and related sequential importance sampling methods. The weights for these proposal distributions are easily established, as is the unbiasedness property of the resultant likelihood estimators, so that the methods may be used within a particle Markov chain Monte Carlo (PMCMC) inferential setting. Simulation exercises, based on a range of state space models, are used to demonstrate the linkage between the signal-to-noise ratio of the system and the performance of the new particle filters, in comparison with existing filters. In particular, we demonstrate that one of our proposed filters performs well in a high signal-to-noise ratio setting, that is, when the observation is informative in identifying the location of the unobserved state. A second filter, deliberately designed to draw proposals that are informed by both the current observation and past states, is shown to work well across a range of signal-to-noise ratios and to be much more robust than the auxiliary particle filter, which is often used as the default choice. We then extend the study to explore the performance of the PMCMC algorithm using the new filters to estimate the likelihood function, once again in comparison with existing alternatives. Taking into consideration robustness to the signal-to-noise ratio, computation time and the efficiency of the chain, the second of the new filters is again found to be the best-performing method. Application of the preferred filter to a stochastic volatility model for weekly Australian/US exchange rate returns completes the paper. |
Keywords: | Bayesian inference, non-Gaussian time series, state space models, unbiased likelihood estimation, sequential Monte Carlo |
JEL: | C11 C32 C53 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2016-17&r=ets |
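For contrast with the automated proposals studied above, the baseline bootstrap particle filter proposes blindly from the transition density; its likelihood estimate is unbiased, but its variance is exactly what better proposals are designed to reduce. A sketch for a scalar AR(1)-plus-noise model (an illustrative model choice, not one of the paper's applications):

```python
import math
import random

def bootstrap_filter_loglik(y, phi, q, r, n_particles=500, seed=0):
    # bootstrap particle filter for:
    #   x_t = phi * x_{t-1} + N(0, q),  y_t = x_t + N(0, r),  with |phi| < 1
    rng = random.Random(seed)
    # initialize from the stationary distribution of the state
    particles = [rng.gauss(0.0, math.sqrt(q / (1 - phi * phi)))
                 for _ in range(n_particles)]
    ll = 0.0
    for yt in y:
        # propagate from the transition density (the "blind" proposal)
        particles = [phi * x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
        # weight by the observation density
        w = [math.exp(-0.5 * (yt - x) ** 2 / r) / math.sqrt(2 * math.pi * r)
             for x in particles]
        ll += math.log(sum(w) / n_particles)
        # multinomial resampling
        particles = rng.choices(particles, weights=w, k=n_particles)
    return ll
```

When the observation noise `r` is small (high signal-to-noise), most blindly proposed particles land far from the data and receive negligible weight, which is the degeneracy that observation-informed proposals, such as those developed in the paper, are meant to avoid.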