New Economics Papers on Econometrics |
By: | Griffin, Jim; Steel, Mark F.J. |
Abstract: | This paper discusses Bayesian inference for stochastic volatility models based on continuous superpositions of Ornstein-Uhlenbeck processes. These processes represent an alternative to the previously considered discrete superpositions. An interesting class of continuous superpositions is defined by a Gamma mixing distribution which can define long memory processes. We develop efficient Markov chain Monte Carlo methods which allow the estimation of such models with leverage effects. This model is compared with a two-component superposition on the daily Standard and Poor's 500 index from 1980 to 2000. |
Keywords: | Leverage effect; Lévy process; Long memory; Markov chain Monte Carlo; Stock price |
JEL: | C32 G10 C11 |
Date: | 2008–10–13 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:11071&r=ecm |
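As a quick illustration of the superposition idea, the sketch below simulates the discrete analogue the paper compares against: a finite sum of OU components whose decay rates are drawn from a Gamma mixing distribution. The Gaussian OU components and the squared-state variance proxy are simplifying assumptions; the paper's components are Lévy-driven and the superposition is continuous.

```python
import numpy as np

rng = np.random.default_rng(0)
m, T, dt = 50, 2000, 1.0 / 252                  # components, days, step
lam = rng.gamma(shape=2.0, scale=1.0, size=m)   # Gamma-mixed decay rates
x = np.zeros(m)                                 # OU component states
sigma2 = np.empty(T)
for t in range(T):
    decay = np.exp(-lam * dt)                   # exact one-step OU update
    x = x * decay + np.sqrt((1 - decay**2) / (2 * lam)) * rng.standard_normal(m)
    sigma2[t] = np.mean(x**2)                   # crude aggregate variance proxy
returns = np.sqrt(sigma2 * dt) * rng.standard_normal(T)
```

Averaging exponential autocorrelations over Gamma-distributed rates is what yields the slow hyperbolic decay behind the long memory property mentioned in the abstract.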
By: | Christophe Hurlin (Laboratoire d'Economie d'Orléans - Université d'Orléans - CNRS : FRE2783); Gilbert Colletaz (Laboratoire d'Economie d'Orléans - Université d'Orléans - CNRS : FRE2783); Sessi Tokpavi (Laboratoire d'Economie d'Orléans - Université d'Orléans - CNRS : FRE2783); Bertrand Candelon (Laboratoire d'Economie d'Orléans - Université d'Orléans - CNRS : FRE2783) |
Abstract: | This paper proposes a new duration-based backtesting procedure for VaR forecasts. The GMM test framework proposed by Bontemps (2006) for testing distributional assumptions (here, the geometric distribution) is applied to the validity of VaR forecasts. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the geometric distribution, this new approach tackles most of the drawbacks usually associated with duration-based backtesting procedures. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence, and conditional coverage hypotheses (Christoffersen, 1998). Third, the feasibility of the tests is improved. Fourth, Monte Carlo simulations show that for realistic sample sizes, our GMM test outperforms traditional duration-based tests. An empirical application to Nasdaq returns confirms that using the GMM test has major consequences for the ex-post evaluation of risk by regulatory authorities. Without any doubt, this paper provides strong support for the empirical application of duration-based tests for VaR forecasts. |
Keywords: | Value-at-Risk; backtesting; GMM; duration-based test |
Date: | 2008–10–10 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-00329495_v1&r=ecm |
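The mechanics of the duration-based GMM test are easy to sketch. Under a correct VaR with coverage p and independent violations, durations between violations are Geometric(p); the code below forms a J-statistic from two raw moment conditions of that distribution. This is a simplified stand-in: the paper uses the orthonormal polynomials of Bontemps (2006) rather than raw moments.

```python
import numpy as np
from scipy import stats

def duration_gmm_j(hits, p=0.01):
    """J-test that durations between VaR violations are Geometric(p).
    Raw moment conditions stand in for the orthonormal polynomials
    of Bontemps (2006) used in the paper."""
    viol = np.flatnonzero(hits)                  # indices of violations
    d = np.diff(viol).astype(float)              # durations between them
    g = np.column_stack([d - 1 / p,              # E[d] = 1/p
                         d * (d - 1) - 2 * (1 - p) / p**2])
    gbar = g.mean(axis=0)
    S = np.cov(g, rowvar=False)
    J = len(d) * gbar @ np.linalg.solve(S, gbar)
    return J, stats.chi2.sf(J, df=2)

# usage: hits = 0/1 indicator of returns breaching the VaR forecast
hits = np.random.default_rng(1).random(2500) < 0.01
print(duration_gmm_j(hits))
```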
By: | Xiaohong Chen (Cowles Foundation, Yale University); Roger Koenker (Dept. of Economics, University of Illinois at Urbana-Champaign); Zhijie Xiao (Dept. of Economics, Boston College) |
Abstract: | Parametric copulas are shown to be attractive devices for specifying quantile autoregressive models for nonlinear time series. Estimation of local, quantile-specific copula-based time series models offers some salient advantages over classical global parametric approaches. Consistency and asymptotic normality of the proposed quantile estimators are established under mild conditions, allowing for global misspecification of parametric copulas and marginals, and without assuming any mixing rate condition. These results lead to a general framework for inference and model specification testing of extreme conditional value-at-risk for financial time series data. |
Keywords: | Quantile autoregression, Copula, Ergodic nonlinear Markov models |
JEL: | C22 C63 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1679&r=ecm |
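To make the copula-based quantile autoregression concrete: in a first-order Markov model with a Gaussian copula, the conditional τ-quantile has a closed form. The Gaussian copula and t(5) marginal below are illustrative assumptions, not the specifications studied in the paper.

```python
import numpy as np
from scipy import stats

def copula_qar_quantile(y_prev, tau, rho, marginal=stats.t(df=5)):
    """tau-quantile of Y_t given Y_{t-1} = y_prev under a Gaussian copula
    Markov model: F^{-1}(Phi(rho*Phi^{-1}(F(y)) + sqrt(1-rho^2)*Phi^{-1}(tau)))."""
    z = rho * stats.norm.ppf(marginal.cdf(y_prev))
    z += np.sqrt(1 - rho**2) * stats.norm.ppf(tau)
    return marginal.ppf(stats.norm.cdf(z))

# extreme conditional value-at-risk after a bad day, as in the abstract
print(copula_qar_quantile(y_prev=-2.5, tau=0.01, rho=0.3))
```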
By: | Roxana Chiriac (Universität Konstanz); Valeri Voev |
Abstract: | This paper proposes a methodology for modelling time series of realized covariance matrices in order to forecast multivariate risks. The approach allows for flexible dynamic dependence patterns and guarantees positive definiteness of the resulting forecasts without imposing parameter restrictions. We provide an empirical application of the model, in which we show by means of stochastic dominance tests that the returns from an optimal portfolio based on the model's forecasts second-order stochastically dominate the returns of portfolios optimized on the basis of traditional MGARCH models. This result implies that any risk-averse investor, regardless of the type of utility function, would be better off using our model. |
Date: | 2008–09–01 |
URL: | http://d.repec.org/n?u=RePEc:knz:cofedp:0806&r=ecm |
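The positive definiteness guarantee comes from modelling Cholesky factors rather than the covariance matrices themselves. Below is a minimal sketch of that device with an elementwise AR(1) on the factor elements; the paper's dynamics (a VARFIMA model) are considerably richer.

```python
import numpy as np

def forecast_rcov(rcov_series):
    """One-step realized covariance forecast that is positive semi-definite
    by construction: forecast Cholesky elements, then rebuild C @ C.T.
    Elementwise AR(1) is an illustrative assumption."""
    chols = np.array([np.linalg.cholesky(S) for S in rcov_series])
    n = chols.shape[1]
    idx = np.tril_indices(n)
    X = chols[:, idx[0], idx[1]]                 # T x n(n+1)/2 factor series
    fc = np.empty(X.shape[1])
    for j in range(X.shape[1]):                  # x_t = a + b x_{t-1} + e_t
        b, a = np.polyfit(X[:-1, j], X[1:, j], 1)
        fc[j] = a + b * X[-1, j]
    C = np.zeros((n, n))
    C[idx] = fc
    return C @ C.T

# usage: rcov_series = sequence of daily realized covariance matrices
```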
By: | Visser, Marcel P. |
Abstract: | Daily volatility proxies based on intraday data, such as the high-low range and the realized volatility, are important to the specification of discrete time volatility models, and to the quality of their parameter estimation. The main result of this paper is a simple procedure for combining such proxies into a single, highly efficient volatility proxy. The approach is novel in optimizing proxies in relation to the scale factor (the volatility) in discrete time models, rather than optimizing proxies as estimators of the quadratic variation. For the S&P 500 index tick data over the years 1988-2006 the procedure yields a proxy which puts, among other things, more weight on the sum of the highs than on the sum of the lows over ten-minute intervals. The empirical analysis indicates that this finite-grid optimized proxy outperforms the standard five-minute realized volatility by at least 40%, and the limiting case of the square root of the quadratic variation by 25%. |
Keywords: | volatility proxy; realized volatility; quadratic variation; scale factor; arch/garch/stochastic volatility; variance of logarithm |
JEL: | G1 C65 C52 C22 |
Date: | 2008–10–09 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:11001&r=ecm |
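A rough sketch of the combination step: treat each candidate proxy as the latent scale times multiplicative noise, and pick weights that minimize the dispersion of the combined proxy's log-ratio to a reference proxy. This is a crude stand-in for the paper's criterion, which measures proxy noise relative to the scale factor inside the discrete-time model itself; the reference proxy and the convex-weight constraint are assumptions of this illustration.

```python
import numpy as np
from scipy.optimize import minimize

def combine_proxies(P, ref):
    """Weights for a combined proxy sum_k w_k P[:, k], chosen to minimize the
    sample variance of log(combined / reference). Crude stand-in for the
    scale-factor criterion of the paper."""
    k = P.shape[1]
    obj = lambda w: np.var(np.log(P @ w) - np.log(ref))
    res = minimize(obj, np.full(k, 1.0 / k),
                   bounds=[(0, None)] * k,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x

# usage: columns of P = high-low range, ten-minute highs/lows, RV, ...
```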
By: | Chiara Tommasi (University of Milano) |
Abstract: | Usually, in the theory of optimal experimental design the model is assumed to be known at the design stage. In practice, however, several competing models may be plausible for the same data. One possibility is thus to find an optimal design which takes both model discrimination and parameter estimation into consideration. In this paper we follow a different approach: we find a design which is optimal for estimation purposes but is also robust to a misspecified model. In other words, the optimal design is "good" for estimating the unknown parameters even if the assumed model is not correct. |
Keywords: | D-optimality, information sandwich variance matrix, maximum likelihood estimator |
Date: | 2008–09–23 |
URL: | http://d.repec.org/n?u=RePEc:bep:unimip:1078&r=ecm |
By: | Marc Hallin; Roman Liska |
Abstract: | Macroeconometric data often come in the form of large panels of time series, which themselves decompose into smaller but still quite large subpanels or blocks. We show how the dynamic factor analysis method proposed in Forni et al. (2000), combined with the identification method of Hallin and Liska (2007), allows for identifying and estimating joint and block-specific common factors. This leads to a more sophisticated analysis of the structures of dynamic interrelations within and between the blocks in such datasets, along with an informative decomposition of explained variances. The method is illustrated with an analysis of the Industrial Production Index data for France, Germany, and Italy. |
Keywords: | Panel data; Time series; High dimensional data; Dynamic factor model; Business cycle; Block specific factors; Dynamic principal components; Information criterion |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2008_012&r=ecm |
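A static PCA sketch conveys the joint-versus-block decomposition: extract common factors from the pooled panel, then extract block-specific factors from what those leave unexplained within each block. The paper's actual method is dynamic (frequency-domain) principal components with data-driven factor numbers, so the code below is only a schematic under simplifying assumptions.

```python
import numpy as np

def joint_and_block_factors(blocks, q=1, q_block=1):
    """Joint factors from the pooled panel, block-specific factors from the
    within-block residuals. Static PCA stand-in for the dynamic method."""
    X = np.hstack(blocks)                        # T x N pooled panel
    X = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    F = U[:, :q] * s[:q]                         # joint common factors
    resid = X - F @ Vt[:q]
    block_factors, col = [], 0
    for B in blocks:
        R = resid[:, col:col + B.shape[1]]
        Ub, sb, _ = np.linalg.svd(R, full_matrices=False)
        block_factors.append(Ub[:, :q_block] * sb[:q_block])
        col += B.shape[1]
    return F, block_factors

# usage: blocks = [french_panel, german_panel, italian_panel]
```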
By: | Thomas Busch (Danske Bank and CREATES); Bent Jesper Christensen (University of Aarhus and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES) |
Abstract: | We study the forecasting of future realized volatility in the foreign exchange, stock, and bond markets, and of its separate continuous sample path and jump components, from variables in the information set, including implied volatility backed out from option prices. Recent nonparametric statistical techniques of Barndorff-Nielsen & Shephard (2004, 2006) are used to separate realized volatility into its continuous and jump components, which enhances forecasting performance, as shown by Andersen, Bollerslev & Diebold (2007). The heterogeneous autoregressive (HAR) model of Corsi (2004) is applied with implied volatility as an additional forecasting variable and with the forecasts of the two realized components separated. A new vector HAR (VecHAR) model for the resulting simultaneous system is introduced, controlling for possible endogeneity issues. Implied volatility contains incremental information about future volatility in all three markets, even when the continuous and jump components of past realized volatility are separated in the information set, and it is an unbiased forecast in the foreign exchange and stock markets. In the foreign exchange market, implied volatility completely subsumes the information content of daily, weekly, and monthly realized volatility measures when forecasting future realized volatility or either of its components. In out-of-sample forecasting experiments, implied volatility alone is the preferred forecast of future realized volatility in all three markets, as mean absolute forecast error increases if realized volatility components are included in the forecast. Perhaps surprisingly, the jump component of realized volatility is, to some extent, predictable, and options appear to be calibrated to incorporate information about future jumps in all three markets. |
Keywords: | Bipower variation, HAR, Heterogeneous Autoregressive Model, implied volatility, jumps, options, realized volatility, VecHAR, volatility forecasting |
JEL: | C22 C32 F31 G1 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1181&r=ecm |
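The building blocks of the paper's forecasting setup are standard enough to sketch: realized variance and bipower variation from intraday returns give the jump separation, and a HAR regression stacks daily, weekly, and monthly averages. The single-equation version below, optionally augmented with implied volatility, is a simplification of the paper's VecHAR system.

```python
import numpy as np

def rv_bpv_jump(r):
    """Realized variance, bipower variation, and implied jump component for
    one day of intraday returns r (Barndorff-Nielsen & Shephard)."""
    rv = np.sum(r**2)
    bpv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    return rv, bpv, max(rv - bpv, 0.0)

def har_fit(rv, extra=None):
    """OLS HAR (Corsi): RV_t on daily, weekly, monthly lagged averages of the
    array rv, optionally with an extra regressor such as implied volatility."""
    t = np.arange(22, len(rv))
    X = np.column_stack([np.ones(len(t)),
                         rv[t - 1],                         # daily lag
                         [rv[s - 5:s].mean() for s in t],   # weekly average
                         [rv[s - 22:s].mean() for s in t]]) # monthly average
    if extra is not None:
        X = np.column_stack([X, extra[t - 1]])
    beta, *_ = np.linalg.lstsq(X, rv[t], rcond=None)
    return beta
```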
By: | Roger Hammersland (Statistics Norway) |
Abstract: | This paper addresses how to enhance the role of data in structural model design by utilizing structural breaks and superfluous information as auxiliary tools of exact identification. To illustrate the procedure and to study the simultaneous interplay between financial variables and the real side of the economy, a simultaneous equation model is constructed on Norwegian aggregate data. In this model, while innovations to stock prices and credit do cause short-run movements in real activity, such innovations do not precede real-economy movements in the long run. |
Keywords: | Structural vector Error Correction modeling; Identification; Cointegration; Financial variables and the real economy. |
JEL: | C30 C32 C50 C51 C53 E44 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:562&r=ecm |
By: | Catherine Dehon; Marjorie Gassner; Vincenzo Verardi |
Abstract: | In the presence of outliers in a dataset, least squares estimation may not be the most adequate choice for obtaining representative results. Indeed, estimates can be excessively influenced by even a very limited number of atypical observations. In this article, we propose a new Hausman-type test to check for this. The test is based on the trade-off between robustness and efficiency and allows one to conclude whether a least squares estimation is appropriate or whether a robust method should be preferred. An economic example is provided to illustrate the usefulness of the test. |
Keywords: | Efficiency, Hausman Test, Linear Regression, Outliers, Robustness, S-estimator |
JEL: | C12 C13 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2008_006&r=ecm |
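The test contrasts a robust fit with least squares in the usual Hausman form. The sketch below uses Huber's M-estimator as a stand-in for the S-estimator of the paper, and its variance of the contrast assumes least squares is efficient under the null; both are assumptions of this illustration.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def hausman_ols_vs_robust(y, X):
    """Hausman-type contrast between OLS and a robust regression fit.
    Huber M-estimation stands in for the paper's S-estimator."""
    Xc = sm.add_constant(X)
    ols = sm.OLS(y, Xc).fit()
    rob = sm.RLM(y, Xc, M=sm.robust.norms.HuberT()).fit()
    diff = rob.params - ols.params
    V = rob.cov_params() - ols.cov_params()      # efficient-under-null form
    H = diff @ np.linalg.pinv(V) @ diff
    return H, stats.chi2.sf(H, df=len(diff))
```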
By: | Marmer, Vadim; Hnatkovska, Viktoria; Tang, Yao |
Abstract: | This paper presents testing procedures for comparison of misspecified calibrated models. The proposed tests are of the Vuong-type (Vuong, 1989; Rivers and Vuong, 2002). In our framework, an econometrician selects values for the parameters in order to match some characteristics of the data with those implied by the competing theoretical models. We assume that all competing models are misspecified, and suggest a test for the null hypothesis that all considered models provide equivalent fit to the data characteristics, against the alternative that one of the models is a better approximation. We consider both nested and non-nested cases. Our discussion includes the case when parameters are estimated to match one set of moments and the models are evaluated by their ability to match another. We also relax the dependence of models' ranking on the choice of weight matrix by suggesting averaged and sup procedures, as well as constructing confidence sets for weight matrices favorable for one of the models. The proposed method is illustrated by comparing standard cash-in-advance and portfolio adjustment cost models. Our comparison is based on the ability of the two models to match the impulse responses of output and inflation to money growth shocks. We find that both models provide equally poor fit to the data and therefore adjustment costs do not play a significant role in explaining the impulse response dynamics. |
Keywords: | misspecified models; calibration; matching; minimum distance estimation |
Date: | 2008–10–16 |
URL: | http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer-2008-14&r=ecm |
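One way to see the equal-fit null in action is a bootstrap version of the comparison: compute each model's weighted distance to the data moments across bootstrap replications and check whether the difference is centered at zero. This is a simplified stand-in for the paper's asymptotic Vuong-type statistics; the percentile rule and the fixed weight matrix are assumptions.

```python
import numpy as np

def equal_fit_test(g_boot, m1, m2, W):
    """Bootstrap check that two calibrated models match the data moments
    equally well under quadratic-form distances with weight matrix W."""
    def q(g, m):
        d = g - m
        return d @ W @ d
    diff = np.array([q(g, m1) - q(g, m2) for g in g_boot])
    lo, hi = np.percentile(diff, [2.5, 97.5])
    return diff.mean(), not (lo <= 0 <= hi)      # True = reject equal fit

# usage: m1, m2 = model-implied impulse responses; g_boot = bootstrap
# replications of the empirical impulse responses to a money growth shock
```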
By: | Alessandro De Gregorio (Università di Milano, Italy); Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT) |
Abstract: | In this paper we propose the use of $\phi$-divergences as test statistics to verify simple hypotheses about a one-dimensional parametric diffusion process $dX_t = b(X_t, \theta)\,dt + \sigma(X_t, \theta)\,dW_t$, from discrete observations $\{X_{t_i}, i=0, \ldots, n\}$ with $t_i = i\Delta_n$, $i=0, 1, \ldots, n$, under the asymptotic scheme $\Delta_n\to0$, $n\Delta_n\to\infty$ and $n\Delta_n^2\to 0$. The class of $\phi$-divergences is wide and includes several special members like Kullback-Leibler, R\'enyi, power and $\alpha$-divergences. We derive the asymptotic distribution of the test statistics based on $\phi$-divergences. The limiting law takes different forms depending on the regularity of $\phi$. These convergence results differ from the classical ones for independent and identically distributed random variables. Numerical analysis is used to show the small sample properties of the test statistics in terms of estimated level and power of the test. |
Keywords: | diffusion processes, empirical level, divergences |
Date: | 2008–08–06 |
URL: | http://d.repec.org/n?u=RePEc:bep:unimip:1076&r=ecm |
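For the Kullback-Leibler member of the $\phi$-divergence family, the test statistic reduces to a likelihood-ratio-type quantity, which can be approximated with Euler transition densities for small $\Delta_n$. The OU-type drift with known $\sigma$ below is an illustrative assumption; the paper treats general $b(x, \theta)$ and $\sigma(x, \theta)$, and shows that the limit law depends on the regularity of $\phi$.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def euler_loglik(x, theta, sigma, dt):
    """Euler pseudo-log-likelihood for dX = -theta*X dt + sigma dW."""
    mean = x[:-1] - theta * x[:-1] * dt
    return stats.norm.logpdf(x[1:], mean, sigma * np.sqrt(dt)).sum()

def kl_test(x, theta0, sigma, dt):
    """Likelihood-ratio-type statistic (Kullback-Leibler divergence) for
    H0: theta = theta0; chi2(1) limit in this smooth case."""
    fit = minimize_scalar(lambda th: -euler_loglik(x, th, sigma, dt),
                          bounds=(1e-3, 10.0), method="bounded")
    D = 2 * (-fit.fun - euler_loglik(x, theta0, sigma, dt))
    return D, stats.chi2.sf(D, df=1)
```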
By: | Laurens Cherchye; Bram De Rock; Jeroen Sabbe; Frederic Vermeulen |
Abstract: | We present an IP-based nonparametric (revealed preference) testing procedure for rational consumption behavior in terms of general collective models, which include consumption externalities and public consumption. An empirical application to data drawn from the Russia Longitudinal Monitoring Survey (RLMS) demonstrates the practical usefulness of the procedure. Finally, we present extensions of the testing procedure to evaluate the goodness-of-fit of the collective model subject to testing, and to quantify and improve the power of the corresponding collective rationality tests. |
Keywords: | collective consumption model, revealed preferences, nonparametric rationality tests, integer programming (IP) |
JEL: | D11 D12 C14 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2008_001&r=ecm |
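The simplest instance of this kind of revealed preference test, before any integer programming enters, is the unitary GARP check: build the direct revealed preference relation from observed prices and quantities, take its transitive closure, and look for strict cycles. The collective tests of the paper add IP machinery on top of constructions like the one sketched below.

```python
import numpy as np

def satisfies_garp(p, q):
    """GARP check for T observations of prices p (T x n) and quantities
    q (T x n) -- the unitary special case of revealed preference testing."""
    cost_own = np.einsum('ti,ti->t', p, q)       # p_t . q_t
    cross = p @ q.T                              # cross[t, s] = p_t . q_s
    R = cross <= cost_own[:, None]               # t revealed preferred to s
    for k in range(len(q)):                      # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    P_strict = cross < cost_own[:, None]         # strict direct relation
    return not np.any(R & P_strict.T)            # no cycle t R s, s strict over t
```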
By: | Li, Minqiang |
Abstract: | Asset price bubbles can arise unintentionally when one uses continuous-time diffusion processes to model financial quantities. We propose a flexible damped diffusion framework that is able to break many types of bubbles and preserve the martingale pricing approach. Damping can be done on either the diffusion or the drift function. Oftentimes, certain solutions to the valuation PDE can be ruled out by requiring the solution to be a limit of martingale prices for damped diffusion models. A Monte Carlo study shows that with finite time-series length, maximum likelihood estimation often fails to detect the damped diffusion function while fabricating a nonlinear drift function. |
Keywords: | Damped diffusion; asset price bubbles; martingale pricing; maximum likelihood estimation |
JEL: | G12 G13 C60 |
Date: | 2008–07–30 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:11185&r=ecm |
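The damping idea can be illustrated on a CEV-type process: leave the dynamics untouched in the normal range and flatten the volatility function above a threshold, so that the explosive growth responsible for bubble behavior is cut off. The particular damping form below is an assumption for illustration, not the paper's specification.

```python
import numpy as np

def simulate_damped_cev(x0, mu, sigma, beta, x_damp, T, dt, seed=0):
    """Euler path of a CEV-type diffusion whose volatility function is
    capped (damped) above x_damp; illustrative functional form only."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        vol = sigma * min(x[t], x_damp) ** beta  # damped diffusion function
        x[t + 1] = x[t] + mu * x[t] * dt + vol * np.sqrt(dt) * rng.standard_normal()
    return x
```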
By: | Matsuki, Takashi; Usami, Ryoichi |
Abstract: | This study investigates the existence of regional convergence of per capita outputs in China over 1952–2004, focusing in particular on the presence of multiple structural breaks in the provincial-level panel data. First, a panel-based unit root test that allows for the occurrence of multiple breaks at various break dates across provinces is developed; this test is based on the p-value combination approach suggested by Fisher (1932). Next, the test is applied to China's provincial real per capita outputs to examine regional convergence in China. To obtain the p-values of the unit root tests for each province, which are combined to construct the panel unit root test, this study assumes three data generating processes in the Monte Carlo simulation: a driftless random walk process, an ARMA process, and an AR process with cross-sectionally dependent errors. The results reveal that the provincial per capita outputs converge within each of the three geographically classified regions of China: the Eastern, Central, and Western regions. |
Keywords: | panel unit root test; multiple breaks; combining p-values; nonstationary panels; China; convergence |
JEL: | O47 C12 C33 |
Date: | 2007–03–17 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:10167&r=ecm |
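The combination step itself is classical: Fisher's statistic $-2\sum_i \log p_i$ is chi-square with 2N degrees of freedom under independence across units. The sketch below combines plain ADF p-values; the paper's contribution, handling multiple breaks and cross-sectionally dependent errors when generating the individual p-values, is not reproduced here.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller

def fisher_panel_unit_root(panel):
    """Fisher (1932) combination of per-unit unit root p-values:
    P = -2 * sum(log p_i) ~ chi2(2N) under cross-sectional independence."""
    pvals = [adfuller(series)[1] for series in panel]
    P = -2.0 * np.sum(np.log(pvals))
    return P, stats.chi2.sf(P, df=2 * len(pvals))

# usage: panel = list of provincial log real per capita output series
```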
By: | Christophe Croux; Catherine Dehon |
Abstract: | Nonparametric correlation measures such as the Kendall and Spearman correlations are widely used in the behavioral sciences. These measures are often said to be robust, in the sense of being resistant to outlying observations. In this note we formally study their robustness by means of their influence functions. Since the robustness of an estimator often comes at the price of a loss in precision, we compute efficiencies at the normal model. A comparison with robust correlation measures derived from robust covariance matrices is made. We conclude that both the Spearman and Kendall correlation measures combine good robustness properties with high efficiency. |
Keywords: | Asymptotic Variance, Correlation, Gross-Error Sensitivity, Influence Function, Kendall correlation, Robustness, Spearman correlation |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2008_002&r=ecm |
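The efficiency comparison at the normal model is easy to reproduce by simulation: transform Kendall's tau and Spearman's rho into estimates of the normal correlation via the classical identities rho = sin(pi*tau/2) and rho = 2*sin(pi*rho_S/6), then compare sampling variances against the Pearson estimate. A minimal sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho, n, reps = 0.5, 200, 2000
est = {"pearson": [], "kendall": [], "spearman": []}
for _ in range(reps):
    x, y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
    est["pearson"].append(stats.pearsonr(x, y)[0])
    # consistent transformations to rho at the bivariate normal model
    est["kendall"].append(np.sin(np.pi / 2 * stats.kendalltau(x, y)[0]))
    est["spearman"].append(2 * np.sin(np.pi / 6 * stats.spearmanr(x, y)[0]))
for name, vals in est.items():                   # efficiency relative to Pearson
    print(name, np.var(est["pearson"]) / np.var(vals))
```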
By: | Pepa Ramirez; Rosa E. Lillo; Michael P. Wiper |
Abstract: | Two types of transitions can be found in the Markovian Arrival Process (MAP): with and without arrivals. In transient transitions the chain jumps from one state to another with no arrival; in effective transitions, a single arrival occurs. We assume that, in practice, only arrival times are observed in a MAP. This leads us to define and study the Effective Markovian Arrival Process, or E-MAP. In this work we define identifiability of MAPs in terms of equivalence between the corresponding E-MAPs and study conditions under which two sets of parameters induce identical laws for the observable process, in the case of two- and three-state MAPs. We illustrate and discuss our results with examples. |
Keywords: | Batch Markovian Arrival process, Hidden Markov models, Identifiability problems |
Date: | 2008–09 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws084613&r=ecm |
By: | Seymen, Atilim |
Abstract: | The paper questions the reasonableness of using forecast error variance decompositions for assessing the role of different structural shocks in business cycle fluctuations. It is shown that the forecast error variance decomposition is related to a dubious definition of the business cycle. A historical variance decomposition approach is proposed to overcome the problems associated with the forecast error variance decomposition. |
Keywords: | Business Cycles, Structural Vector Autoregression Models, Forecast Error Variance Decomposition, Historical Variance Decomposition |
JEL: | C32 E32 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:zbw:zewdip:7388&r=ecm |
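The proposed alternative is computable directly from an estimated VAR: write the sample path as a sum of contributions of identified structural shocks and decompose the variance of those contributions. The sketch below uses recursive (Cholesky) identification and ignores deterministic terms; both are assumptions of the illustration.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def historical_decomposition(data):
    """contrib[k, t, i]: contribution of structural shock k to variable i
    at time t, from a Cholesky-identified VAR on a T x n data array."""
    res = VAR(data).fit(maxlags=4, ic='aic')
    B0 = np.linalg.cholesky(res.sigma_u)         # impact matrix
    eps = np.linalg.solve(B0, res.resid.T).T     # structural shocks
    T, n = eps.shape
    Theta = np.array([P @ B0 for P in res.ma_rep(maxn=T - 1)])
    contrib = np.zeros((n, T, n))
    for t in range(T):
        for j in range(t + 1):
            contrib[:, t, :] += (Theta[j] * eps[t - j]).T
    return contrib
```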
By: | Cecilia Frale; David Veredas |
Abstract: | We estimate the monthly volatility of the US economy from 1968 to 2006 by extending the coincident index model of Stock and Watson (1991). Our volatility index, which we call VOLINX, has four applications. First, it sheds light on the Great Moderation. VOLINX captures the decrease in volatility in the mid-80s as well as the different episodes of stress over the sample period. In the 70s and early 80s the stagflation and the two oil crises marked the pace of volatility, whereas 9/11 is the most relevant shock after the moderation. Second, it helps to understand the economic indicators that cause volatility. While the main determinant of the coincident index is industrial production, VOLINX is mainly affected by employment and income. Third, it adapts the confidence bands of the forecasts. In- and out-of-sample evaluations show that the confidence bands may differ by up to 50% with respect to a model with constant variance. Last, the methodology permits us to estimate monthly GDP, whose conditional volatility is partly explained by VOLINX. These applications can be used by policy makers for monitoring and surveillance of the stress of the economy. |
Keywords: | Great Moderation, temporal disaggregation, volatility, dynamic factor models, Kalman filter |
JEL: | C32 C51 E32 E37 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2008_008&r=ecm |
By: | Sonali Das (CSIR, Pretoria); Rangan Gupta (Department of Economics, University of Pretoria); Alain Kabundi (Department of Economics and Econometrics, University of Johannesburg) |
Abstract: | This paper develops large-scale Bayesian Vector Autoregressive (BVAR) models, based on 268 quarterly series, for forecasting annualized real house price growth rates for large-, medium- and small-middle-segment housing in the South African economy. Given the in-sample period of 1980:01 to 2000:04, the large-scale BVARs, estimated under alternative hyperparameter values specifying the priors, are used to forecast real house price growth rates over a 24-quarter out-of-sample horizon of 2001:01 to 2006:04. The forecast performance of the large-scale BVARs is then compared with classical and Bayesian versions of univariate and multivariate Vector Autoregressive (VAR) models comprising only the real growth rates of the large-, medium- and small-middle-segment houses, and with a large-scale Dynamic Factor Model (DFM) comprising the same 268 variables included in the large-scale BVARs. Based on the one- to four-quarters-ahead Root Mean Square Errors (RMSEs) over the out-of-sample horizon, we find that the large-scale BVARs not only outperform all the alternative models, but also predict the recent downturn in the real house price growth rates for the three categories of middle-segment housing over an ex ante period of 2007:01 to 2008:02. |
Keywords: | Dynamic Factor Model, BVAR, Forecast Accuracy |
JEL: | C11 C13 C33 C53 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:200831&r=ecm |
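The flavor of the estimation step can be conveyed with a ridge-form Minnesota-type prior on a single-lag VAR: coefficients are shrunk toward a random walk with tightness lambda, the kind of hyperparameter varied across the paper's BVARs. One lag and a common tightness are simplifying assumptions of this sketch.

```python
import numpy as np

def bvar_posterior_mean(Y, lam=0.1):
    """Posterior mean of a VAR(1) under a Minnesota-style shrinkage prior
    centered on a random walk; ridge form with tightness lam."""
    X, y = Y[:-1], Y[1:]
    n = Y.shape[1]
    k = 1.0 / lam**2                             # prior precision
    A = np.linalg.solve(X.T @ X + k * np.eye(n),
                        X.T @ y + k * np.eye(n)) # prior mean = identity
    return A                                     # one-step forecast: Y[-1] @ A
```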
By: | Michal Franta (Czech National Bank; CERGE-EI); Branislav Saxa (Czech National Bank; CERGE-EI); Katerina Smidkova (Czech National Bank; Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic) |
Abstract: | Inflation persistence has been put forward as one of the potential reasons for divergence among euro area members. It has also been proposed that the new EU Member States (NMS) may struggle with even higher persistence due to convergence factors. We argue that persistence may not be as different between the two country groups as one might expect. However, this empirical result can only be obtained if adequate estimation methods, reflecting the scope of the convergence process the NMS went through, are applied. We emphasize that time-varying mean models suggest similar or lower inflation persistence for the NMS compared to euro area countries, while more traditional parametric statistical measures assuming a constant mean deliver substantially higher persistence estimates for the NMS than for the euro area countries. This difference is due to frequent breaks in the inflation time series of the NMS. Structural persistence measures show that backward-looking behavior may be a more important component in explaining inflation dynamics in the NMS than in the euro area countries. |
Keywords: | inflation persistence, new hybrid Phillips curve, new member states, time-varying mean |
JEL: | E31 C22 C11 C32 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2008_25&r=ecm |
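The key empirical contrast, constant versus time-varying mean, can be reproduced with the standard persistence measure, the sum of AR coefficients, computed on deviations from each mean concept. A minimal sketch (the paper's time-varying means come from formal models, not the user-supplied path assumed here):

```python
import numpy as np

def persistence(x, p=4, mean_path=None):
    """Sum of AR(p) coefficients of inflation deviations from a constant
    mean (default) or from a supplied time-varying mean path."""
    z = x - (x.mean() if mean_path is None else mean_path)
    X = np.column_stack([z[p - 1 - i:len(z) - 1 - i] for i in range(p)])
    beta, *_ = np.linalg.lstsq(X, z[p:], rcond=None)
    return beta.sum()

# the abstract's finding: persistence(x) with a constant mean overstates
# persistence relative to persistence(x, mean_path=break_adjusted_mean)
```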
By: | Erich Gundlach; Martin Paldam (School of Economics and Management, University of Aarhus, Denmark) |
Abstract: | Acemoglu, Johnson, Robinson, and Yared (2008) demonstrate that estimation of the standard adjustment model with country-fixed and time-fixed effects removes the statistical significance of income as a causal factor of democracy. We argue that their empirical approach must produce insignificant income effects and that a small change in the estimation process immediately reveals the strong effect of income on democracy. |
Keywords: | Democracy, Modernization hypothesis, fixed-effects estimation |
JEL: | D72 O43 |
Date: | 2008–10–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:aarhec:2008-13&r=ecm |
By: | Campbell, Danny |
Abstract: | This paper reports the findings from a discrete choice experiment study designed to estimate the economic benefits associated with rural landscape improvements in Ireland. Using a mixed logit model, the panel nature of the dataset is exploited to retrieve willingness to pay values for every individual in the sample. This departs from customary approaches in which willingness to pay estimates are expressed as measures of central tendency of an a priori distribution. In a different vein from analyses conducted in previous discrete choice experiment studies, this paper uses random effects models for panel data to identify the determinants of the individual-specific willingness to pay estimates. In comparison with the standard methods used to incorporate individual-specific variables into the analysis of discrete choice experiments, the analytical approach outlined in this paper is shown to add considerably more validity and explanatory power to welfare estimates. |
Keywords: | Agri-environment, discrete choice experiments, mixed logit, panel data, random effects, willingness to pay, Demand and Price Analysis, Environmental Economics and Policy |
JEL: | C33 C35 Q24 Q51 |
Date: | 2008–01–14 |
URL: | http://d.repec.org/n?u=RePEc:ags:aes007:7975&r=ecm |
By: | Astrid Mathiassen (Statistics Norway) |
Abstract: | This paper examines the performance of a particular method for predicting poverty. The method is a supplement to the approach of measuring poverty through a fully-fledged household expenditure survey. As most developing countries cannot justify the expense of frequent household expenditure surveys, low-cost methods are of interest, and such models have been developed and used. The basic idea is a model for predicting the proportion of poor households in a population based on estimates from a total consumption regression relation, using data from a household expenditure survey. As a result, the model links the proportion of poor households to the explanatory variables of the consumption relation. These explanatory variables are fast to collect and easy to measure. Information on the explanatory variables may be collected through annual light surveys. Several applications have shown that this information, together with the poverty model, can produce poverty estimates with confidence intervals of a similar magnitude to those of the poverty estimates from the household expenditure surveys. There is, however, limited evidence of how well the method performs in predicting poverty from other surveys. A series of seven household expenditure surveys conducted in Uganda in the period 1993-2006 is available, allowing us to test the predictive ability of the models. We have tested the poverty models by using data from one survey to predict the proportion of poor households in other surveys, and vice versa. All the models predict similar poverty trends, whereas the respective levels are predicted differently. Although in most cases the predictions are precise, they sometimes differ significantly from the poverty level estimated directly from the survey. A long time span between surveys may explain some of these cases, as do large and sudden changes in poverty. |
Keywords: | Poverty prediction; Poverty model; Money metric poverty; Uganda; Household Survey |
JEL: | C31 C42 C81 D12 D31 I32 |
Date: | 2008–10 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:560&r=ecm |
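The prediction model can be sketched in a few lines: regress log consumption on the easy-to-collect indicators in the expenditure survey, then compute for each household in a light survey the probability of falling below the poverty line, and average. Normal regression errors are an assumption of this sketch, as in standard poverty-mapping practice.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def predict_poverty_rate(X_train, log_cons, X_new, z):
    """Predicted proportion of poor households in a new (light) survey,
    from a log-consumption regression fitted on an expenditure survey;
    z is the poverty line in consumption units."""
    res = sm.OLS(log_cons, sm.add_constant(X_train)).fit()
    sig = np.sqrt(res.scale)                     # residual std. deviation
    mu = sm.add_constant(X_new) @ res.params
    return stats.norm.cdf((np.log(z) - mu) / sig).mean()
```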
By: | Alessandro De Gregorio (Università di Milano, Italy); Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT) |
Abstract: | In this paper a new dissimilarity measure for identifying groups of assets with similar dynamics is proposed. The underlying generating process is assumed to be a diffusion process, the solution of a stochastic differential equation observed at discrete times. The mesh of observations is not required to shrink to zero. As the distance between two observed paths, the quadratic distance of the corresponding estimated Markov operators is considered. Analysis of both synthetic data and real financial data from NYSE/NASDAQ stocks gives evidence that this distance seems capable of capturing differences in both the drift and diffusion coefficients, contrary to other commonly used metrics. |
Keywords: | Clustering of time series, discretely observed diffusion processes, financial assets, Markov processes |
Date: | 2008–09–18 |
URL: | http://d.repec.org/n?u=RePEc:bep:unimip:1077&r=ecm |
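A finite-dimensional caricature of the procedure: discretize the state space, estimate each asset's one-step transition matrix (a crude stand-in for the estimated Markov operator), and cluster on pairwise quadratic distances between those matrices. The bin grid and average-linkage clustering are assumptions of this sketch.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def transition_matrix(x, bins):
    """Empirical one-step transition matrix on a discretized state space."""
    s = np.digitize(x, bins)
    P = np.zeros((len(bins) + 1, len(bins) + 1))
    for a, b in zip(s[:-1], s[1:]):
        P[a, b] += 1
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1)

def cluster_assets(paths, n_bins=10, k=3):
    """Hierarchical clustering of observed paths on quadratic distances
    between their discretized transition matrices."""
    bins = np.linspace(np.min(paths), np.max(paths), n_bins - 1)
    feats = np.array([transition_matrix(x, bins).ravel() for x in paths])
    D = pdist(feats)                             # quadratic (L2) distances
    return fcluster(linkage(D, method='average'), k, criterion='maxclust')
```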