on Econometric Time Series |
By: | Belén García Cárceles; Bernardí Cabrer Borrás; Jose Manuel Pavía Miralles |
Abstract: | Time series modeling via automatic signal-extraction methods has been widely studied and used in many contexts of economic analysis. The methodological innovation of ARIMA/SARIMA model estimation made significant contributions to the understanding of the temporal dynamics of events, even when the time structure was apparently irregular and unpredictable. The popularity of these models is reflected in the development of applications implementing algorithms that automatically extract the temporal patterns of a series and provide a reasonably accurate fit by a mathematical model, doing so quickly and consistently. One of the most common uses of these programs is in the univariate context, to filter a series for later use in a multivariate structure. However, there is significant untapped potential in the results these applications provide. In this paper we describe a methodology in which TRAMO-SEATS and X13-ARIMA are used directly within a multivariate structure. Specifically, we apply data-analysis techniques related to artificial neural networks. Under the neural-network philosophy, events are conceived as linked nodes that activate or not depending on the intensity of an input signal; at that point TRAMO-SEATS or X13-ARIMA comes into play. To illustrate the methodology and the use of the model, health-related time series are used, and a consistent model able to "react" to the dynamic interrelations of the variables considered is described. Standard panel data modeling is included in the example and compared with the new methodology. |
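The automated fitting the abstract refers to can be illustrated with a toy order-selection loop. This is a minimal sketch assuming a simulated AR(1) series and AR(p)-by-OLS candidates scored by AIC; it is not the actual TRAMO-SEATS or X13-ARIMA algorithm, whose model space and diagnostics are far richer:

```python
# Toy illustration of automatic order selection by an information criterion,
# in the spirit of the automated ARIMA fitting done by TRAMO-SEATS / X13-ARIMA
# (NOT their actual algorithms: we only compare AR(p) fits by OLS here).
import numpy as np

rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()  # simulated AR(1) "economic" series

def ar_aic(y, p):
    """Fit AR(p) by OLS and return a simple AIC value."""
    if p == 0:
        resid = y - y.mean()
    else:
        # lagged regressors y_{t-1}, ..., y_{t-p}
        X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        resid = y[p:] - X @ coef
    return len(resid) * np.log(resid.var()) + 2 * (p + 1)

best_p = min(range(5), key=lambda p: ar_aic(y, p))
print(best_p)  # AIC-selected autoregressive order; a low order is expected here
```

An automatic procedure of this kind is what lets the filtering step run unattended over many series before the results are fed into a multivariate structure.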
Keywords: | Spain, Germany, Netherlands, Sweden, Belgium, Modeling: new developments, Forecasting and projection methods |
Date: | 2015–07–01 |
URL: | http://d.repec.org/n?u=RePEc:ekd:008007:8669&r=ets |
By: | Hacène Djellout (LMBP - Laboratoire de Mathématiques Blaise Pascal - UBP - Université Blaise Pascal - Clermont-Ferrand 2 - CNRS - Centre National de la Recherche Scientifique); Arnaud Guillin (IUF - Institut Universitaire de France - M.E.N.E.S.R. - Ministère de l'Éducation nationale, de l’Enseignement supérieur et de la Recherche, LMBP - Laboratoire de Mathématiques Blaise Pascal - UBP - Université Blaise Pascal - Clermont-Ferrand 2 - CNRS - Centre National de la Recherche Scientifique); Yacouba Samoura (LMBP - Laboratoire de Mathématiques Blaise Pascal - UBP - Université Blaise Pascal - Clermont-Ferrand 2 - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Realized statistics based on high-frequency returns have become very popular in financial economics. In recent years, different non-parametric estimators of the variation of a log-price process have appeared. These were developed by many authors and were motivated by the existence of complete records of price data. Among them, the realized quadratic (co-)variation is perhaps the best known example, providing a consistent estimator of the integrated (co-)volatility when the logarithmic price process is continuous. Limit results such as the weak law of large numbers and the central limit theorem have been proved in different contexts. In this paper, we study the large deviation properties of realized (co-)volatility, i.e., when the number of high-frequency observations in a fixed time interval increases to infinity. More specifically, we consider a bivariate model with synchronous observation schemes and correlated Brownian motions of the form $dX_{\ell,t} = \sigma_{\ell,t}dB_{\ell,t}+b_{\ell}(t,\omega)dt$ for $\ell=1,2$, where $X_{\ell}$ denotes the log-price. We are concerned with the large deviation estimation of the vector $V_t^n(X)=\left(Q_{1,t}^n(X), Q_{2,t}^n(X), C_{t}^n(X)\right)$, where $Q_{\ell,t}^n(X)$ and $C_{t}^n(X)$ represent the estimators of the quadratic variation processes $Q_{\ell,t}=\int_0^t\sigma_{\ell,s}^2\,ds$ and of the integrated covariance $C_t=\int_0^t\sigma_{1,s}\sigma_{2,s}\rho_s\,ds$ respectively, with $\rho_t=\mathrm{cov}(B_{1,t}, B_{2,t})$. Our main motivation is to improve upon the existing limit theorems. Our large deviations results can be used to evaluate and approximate tail probabilities of realized (co-)volatility. As an application, we provide the large deviations for standard dependence measures between the two assets' returns, such as the realized regression coefficients up to time $t$ or the realized correlation. Our study should contribute to the recent trend of research on (co-)variance estimation problems, which are often discussed in high-frequency financial data analysis. |
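The estimators $Q_{\ell,t}^n$ and $C_t^n$ in the abstract are simple sums of high-frequency increments. A minimal simulated sketch, assuming constant volatilities and correlation and zero drift (our simplification of the paper's general setting):

```python
# Sketch of the realized (co-)variation estimators Q_{l,t}^n and C_t^n from
# the abstract, on a simulated bivariate diffusion with constant volatilities
# and correlation and zero drift (an illustrative special case, not the
# paper's general model).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                      # high-frequency observations on [0, 1]
dt = 1.0 / n
rho, s1, s2 = 0.5, 0.2, 0.3     # correlation and spot volatilities (assumed)

dB1 = rng.normal(0.0, np.sqrt(dt), n)
dB2 = rho * dB1 + np.sqrt(1 - rho**2) * rng.normal(0.0, np.sqrt(dt), n)

dX1, dX2 = s1 * dB1, s2 * dB2   # log-price increments

Q1 = np.sum(dX1**2)             # realized variance -> integral of s1^2 = 0.04
Q2 = np.sum(dX2**2)             # realized variance -> integral of s2^2 = 0.09
C = np.sum(dX1 * dX2)           # realized covariance -> rho*s1*s2 = 0.03

realized_corr = C / np.sqrt(Q1 * Q2)   # a "standard dependence measure"
print(Q1, Q2, C, realized_corr)
```

The realized correlation computed on the last line is one of the dependence measures to which the paper's large deviation results apply.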
Keywords: | large deviations, diffusion, discrete-time observation, realized volatility and covolatility |
Date: | 2017–01–30 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-01082903&r=ets |
By: | Hacène Djellout (LMBP - Laboratoire de Mathématiques Blaise Pascal - UBP - Université Blaise Pascal - Clermont-Ferrand 2 - CNRS - Centre National de la Recherche Scientifique); Hui Jiang (Nanjing University of Aeronautics and Astronautics - Department of Mathematics) |
Abstract: | Recently, considerable interest has been paid in financial econometrics to the problem of estimating realized volatility and covolatility from high-frequency data on financial price processes. Threshold estimation is one of the useful techniques for inference on jump-type stochastic processes from discrete observations. In this paper, we adopt the threshold estimator introduced by Mancini, in which only the variations below a given threshold function are taken into account. The purpose of this work is to investigate large and moderate deviations for the threshold estimator of the integrated variance-covariance vector. This paper extends the previous work of Djellout, Guillin and Samoura, where the problem was studied in the absence of a jump component. We use an approximation lemma to prove the LDP. As the reader might expect, we obtain the same results as in the case without jumps. |
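The threshold idea is easy to see in simulation: squared increments above a vanishing threshold are discarded, so jumps drop out of the realized variance. A minimal sketch (the threshold choice dt**0.49 and the jump sizes are illustrative assumptions, not the paper's):

```python
# Sketch of Mancini's threshold estimator: squared increments above a
# threshold r(dt) are discarded, filtering the jump contribution out of
# the realized variance (simulated example; parameter choices are ours).
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
dt = 1.0 / n
sigma = 0.2

dX = sigma * rng.normal(0.0, np.sqrt(dt), n)     # continuous diffusion increments
jump_idx = rng.choice(n, size=5, replace=False)
dX[jump_idx] += rng.normal(0.0, 0.5, 5)          # add a few Poisson-style jumps

rv = np.sum(dX**2)                               # plain realized variance (jump-biased)
threshold = dt**0.49                             # vanishes more slowly than sqrt(dt)
tv = np.sum(np.where(dX**2 <= threshold**2, dX**2, 0.0))  # threshold estimator

print(rv, tv)  # tv should be near sigma**2 = 0.04, rv inflated by the jumps
```

Because continuous increments are of order sqrt(dt) while jumps are not, any threshold shrinking more slowly than sqrt(dt) separates the two asymptotically.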
Keywords: | Poisson jumps, large deviation principle, quadratic variation, threshold estimator |
Date: | 2017–03–19 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-01147189&r=ets |
By: | Bartolucci, Francesco; Pigini, Claudia |
Abstract: | Strict exogeneity of covariates other than the lagged dependent variable, conditional on unobserved heterogeneity, is often required for consistent estimation of binary panel data models. This assumption is likely to be violated in practice because of feedback effects from the past of the outcome variable on the present value of covariates, and no general solution is yet available. In this paper, we provide the conditions for a logit model formulation that takes feedback effects into account without specifying a joint parametric model for the outcome and the predetermined explanatory variables. Our formulation is based on the equivalence between Granger's definition of noncausality and a modification of Sims' strict exogeneity assumption for nonlinear panel data models, introduced by Chamberlain (1982) and for which we provide a more general theorem. We further propose estimating the model parameters with a recent fixed-effects approach based on pseudo-conditional inference, adapted to the present case, thereby also taking care of the correlation between individual permanent unobserved heterogeneity and the model's covariates. Our results hold for short panels with a large number of cross-section units, a case of great interest in microeconomic applications. |
Keywords: | fixed effects, noncausality, predetermined covariates, pseudo-conditional inference, strict exogeneity |
JEL: | C12 C23 C25 |
Date: | 2017–03–13 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:77486&r=ets |
By: | Antoch, Jaromir; Hanousek, Jan; Horvath, Lajos; Huskova, Marie; Wang, Shixuan |
Abstract: | The detection of a (structural) break, the so-called change point problem, has drawn increasing attention in both theoretical and applied economic and financial research over the last decade. A large part of the existing research concentrates on the detection and asymptotic properties of the change point problem for panels with a large time dimension T. In this article we study a different approach, i.e., we consider the asymptotic properties with respect to N (the number of panel members) while keeping T fixed. This situation (N → ∞ with T fixed and rather small) is typically related to large (firm-level) data sets containing financial information about an immense number of firms/stocks across a limited number of years/quarters/months. We propose a general approach for testing for break(s) in this setup, which also allows their detection. In particular, we derive the asymptotic behavior of the test statistics, along with an alternative wild bootstrap procedure that can be used to generate the critical values of the test statistics. The theoretical approach is supplemented by numerous simulations and extended by an empirical illustration. In the practical application we demonstrate the testing procedure in the framework of the four-factor CAPM model. In particular, we estimate breaks in the monthly returns of US mutual funds during the period January 2006 to February 2010, which covers the subprime crisis. |
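The fixed-T, large-N setup with wild-bootstrap critical values can be sketched on simulated data. The statistic below is our own simplified CUSUM-type construction, not the paper's exact test:

```python
# Our own simplified illustration (not the paper's statistic): a CUSUM-type
# test for a common mean break in a panel with T fixed and N large, with
# critical values from a Rademacher wild bootstrap.
import numpy as np

rng = np.random.default_rng(3)
N, T, k0 = 200, 8, 4
X = rng.normal(0.0, 1.0, (N, T))
X[:, k0:] += 0.5                      # common mean shift after period k0

def cusum_stat(data):
    """sqrt(N) * max_k |partial sums of cross-sectional means|, after demeaning."""
    Nn, _ = data.shape
    centered = data - data.mean(axis=1, keepdims=True)   # remove unit fixed effects
    S = np.cumsum(centered.mean(axis=0))
    return np.sqrt(Nn) * np.max(np.abs(S[:-1]))          # S[-1] is 0 by construction

stat = cusum_stat(X)

# wild bootstrap: flip each unit's demeaned trajectory with a random sign,
# which destroys any common break while preserving the error structure
centered = X - X.mean(axis=1, keepdims=True)
B = 499
boot = np.array([cusum_stat(rng.choice([-1.0, 1.0], size=(N, 1)) * centered)
                 for _ in range(B)])

pval = (1 + np.sum(boot >= stat)) / (B + 1)
print(stat, pval)   # a small p-value signals a break
```

Note that the asymptotics run in N here, so the bootstrap must resample across units, not across the (short, fixed) time dimension.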
Keywords: | Change point problem; stationarity; panel data; bootstrap; four-factor CAPM model; US mutual funds |
JEL: | C10 C23 C33 |
Date: | 2017–03 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11891&r=ets |
By: | Christophe Chorro (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Florian Ielpo (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, Unigestion SA - UNIGESTION , IPAG Business School); Benoît Sévi (LEMNA - Laboratoire d'économie et de management de Nantes Atlantique - UN - Université de Nantes) |
Abstract: | The extraction of the jump component in the dynamics of asset prices has witnessed a considerably growing body of literature. Of particular interest is the decomposition of returns' quadratic variation into its continuous and jump components. Recent contributions highlight the importance of this decomposition in forecasting volatility at different horizons. In this article, we extend a methodology developed in Maheu and McCurdy (2011) to exploit the information content of intraday data in forecasting the density of returns at horizons up to sixty days. We follow Boudt et al. (2011) to detect intraday returns that should be considered as jumps. The methodology is robust to intra-week periodicity and further delivers estimates of signed jumps, in contrast to the rest of the literature, where only the squared jump component can be estimated. Then, we estimate a bivariate model of returns and volatilities in which the jump component is modeled independently, using a jump distribution that fits the stylized facts of the estimated jumps. Our empirical results for S&P 500 futures, U.S. 10-year Treasury futures, the USD/CAD exchange rate and WTI crude oil futures highlight the importance of considering the continuous/jump decomposition for density forecasting, while this is not the case for volatility point forecasts. In particular, we show that the model treating jumps separately from the continuous component consistently delivers better density forecasts for forecasting horizons ranging from 1 to 30 days. |
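A generic way to see the continuous/jump split of quadratic variation is to compare realized variance with bipower variation, which is robust to jumps. The paper itself uses the Boudt et al. (2011) detection rule (which also recovers jump signs); the sketch below is only the standard bipower illustration:

```python
# Generic illustration of decomposing quadratic variation into continuous and
# jump parts via realized variance vs. bipower variation (not the authors'
# exact procedure, which detects individual signed jumps).
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
dt = 1.0 / n
sigma = 0.2

r = sigma * rng.normal(0.0, np.sqrt(dt), n)   # continuous intraday returns
r[5000] += 0.3                                 # one jump of size 0.3

rv = np.sum(r**2)                              # quadratic variation: continuous + jump
mu1_sq = 2.0 / np.pi                           # (E|N(0,1)|)^2
bv = np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1_sq   # bipower variation: continuous part
jump_part = max(rv - bv, 0.0)                  # squared-jump component

print(rv, bv, jump_part)  # bv near sigma**2 = 0.04, jump_part near 0.3**2 = 0.09
```

Bipower variation stays close to the integrated variance because each jump enters it only through products with neighboring small increments, whereas realized variance absorbs the squared jump in full.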
Keywords: | leverage effect,density forecasting,jumps,realized volatility,bipower variation,median realized volatility |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01442618&r=ets |
By: | Dominique Guegan (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Giovanni De Luca (Parthenope University - University Parthenope of Naples); Giorgia Rivieccio (Parthenope University - University Parthenope of Naples) |
Abstract: | We present a three-stage pseudo maximum likelihood estimation procedure that reduces the computational burden when a copula-based model is applied to multiple time series in high dimensions. The method is applied to general stationary Markov time series, under assumptions that include a time-invariant copula as well as time-invariant marginal distributions, extending the results of Yi and Liao [2010]. We explore, via simulated and real data, the performance of the model compared with the classical vector autoregressive model, showing the implications of misspecified assumptions for the margins and/or the joint distribution and providing tail dependence measures of the economic variables involved in the analysis. |
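The staged logic (estimate margins first, then transform, then fit the copula) can be sketched in a static toy case. Gaussian margins and a Gaussian copula are assumed purely for illustration, and the example ignores the Markov time dependence the paper handles:

```python
# Minimal sketch of multi-stage pseudo maximum likelihood for a copula model:
# fit the margins first, transform to pseudo-uniforms, then fit the copula.
# Gaussian margins/copula and i.i.d. data are illustrative assumptions only.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
n, true_rho = 2000, 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, true_rho], [true_rho, 1.0]], size=n)
x = 1.0 + 2.0 * z[:, 0]            # margin 1: N(1, 2^2)
y = -0.5 + 0.5 * z[:, 1]           # margin 2: N(-0.5, 0.5^2)

# Stage 1: estimate each margin separately (for normal margins, by moments)
mu_x, sd_x = x.mean(), x.std()
mu_y, sd_y = y.mean(), y.std()

# Stage 2: probability integral transform to pseudo-uniform observations
def norm_cdf(t):
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

u = np.array([norm_cdf(t) for t in (x - mu_x) / sd_x])
v = np.array([norm_cdf(t) for t in (y - mu_y) / sd_y])

# Stage 3: fit the copula to (u, v). For a Gaussian copula the normal scores
# Phi^{-1}(u), Phi^{-1}(v) are exactly the standardized data, so the copula
# pseudo-MLE reduces to their sample correlation.
rho_hat = np.corrcoef((x - mu_x) / sd_x, (y - mu_y) / sd_y)[0, 1]
print(rho_hat)  # close to true_rho = 0.6
```

Splitting the estimation this way is what keeps the computation tractable in high dimensions: each margin is a small univariate problem, and only the copula stage involves the joint dependence parameters.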
Keywords: | Copula function,Three stage estimator,Multiple time series |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01439860&r=ets |