New Economics Papers on Econometrics
By: | José Olmo (Department of Economics, City University, London) |
Abstract: | The extremal index (theta) is the key parameter for extending extreme value theory results from IID to stationary sequences. It determines the extent of clustering found in the largest observations of a stationary sequence {Xi}. This paper introduces an alternative interpretation of theta as the ratio of the limiting expected values of two random variables defined by extreme levels un, vn and a partition of the stationary sequence into blocks. These random variables consist of elements of the sequence of block maxima exceeding such levels. The estimator of theta derived from this interpretation is simple and follows a binomial distribution. This estimator is asymptotically unbiased, in contrast to other estimators for theta (blocks method and runs method). Under certain conditions this methodology can be extended to moderately high levels u'n and v'n. The estimator obtained in this context is consistent. Furthermore, it has a binomial distribution that converges to a normal distribution with mean theta. This family of estimators outperforms the other candidates commonly used to estimate theta. Some simulation experiments reinforce these findings. These experiments highlight the importance of block size selection and provide some guidance on how to proceed in practice with the estimation of the extremal index. |
Keywords: | Extremal index, extreme value theory, order statistic |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:cty:dpaper:0601&r=ecm |
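To make the clustering interpretation concrete, the following is a minimal sketch of a generic blocks-type estimator of the extremal index (not necessarily the exact estimator proposed by Olmo), assuming a stationary series x, a high threshold u and a chosen block length: theta is approximated by the ratio of the number of blocks whose maximum exceeds the threshold to the total number of exceedances.

```python
import numpy as np

def blocks_extremal_index(x, u, block_size):
    """Generic blocks-type estimate of the extremal index theta:
    number of blocks whose maximum exceeds u divided by the total
    number of observations exceeding u."""
    x = np.asarray(x)
    n_blocks = len(x) // block_size
    x = x[:n_blocks * block_size].reshape(n_blocks, block_size)
    block_max_exceed = np.sum(x.max(axis=1) > u)   # proxy for number of clusters
    total_exceed = np.sum(x > u)                   # all exceedances
    return block_max_exceed / total_exceed if total_exceed else np.nan

# Example: a max-moving sequence exhibits clustering, so theta < 1
rng = np.random.default_rng(0)
z = rng.standard_cauchy(100_000)
x = np.maximum(z[1:], 0.5 * z[:-1])
print(blocks_extremal_index(x, u=np.quantile(x, 0.99), block_size=100))
```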
By: | Doz, Catherine; Giannone, Domenico; Reichlin, Lucrezia |
Abstract: | This paper shows consistency of a two-step estimator of the parameters of a dynamic approximate factor model when the panel of time series is large (n large). In the first step, the parameters are estimated by OLS on principal components. In the second step, the factors are estimated via the Kalman smoother. This projection allows dynamics in the factors and heteroskedasticity in the idiosyncratic variance to be taken into account. The analysis provides theoretical backing for the estimator considered in Giannone, Reichlin, and Sala (2004) and Giannone, Reichlin, and Small (2005). |
Keywords: | Factor Models; Kalman filter; large cross-sections; principal components |
JEL: | C32 C33 C51 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6043&r=ecm |
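A minimal sketch of the first step described above (principal components on a large panel, standardised series by series), with hypothetical function and variable names; the second step, which runs a Kalman smoother on the estimated factors, is omitted here.

```python
import numpy as np

def pca_factors(X, r):
    """First-step estimates for an approximate factor model X = F L' + e:
    principal components of the T x n panel X give factors F (T x r)
    and loadings L (n x r)."""
    T, n = X.shape
    X = (X - X.mean(0)) / X.std(0)            # standardise each series
    eigval, eigvec = np.linalg.eigh(X.T @ X / T)
    idx = np.argsort(eigval)[::-1][:r]        # r largest eigenvalues
    L = eigvec[:, idx] * np.sqrt(n)           # loadings (PC normalisation L'L/n = I)
    F = X @ L / n                             # factor estimates
    return F, L

# toy panel: two common factors plus idiosyncratic noise
rng = np.random.default_rng(1)
T, n = 200, 100
F0 = rng.standard_normal((T, 2))
L0 = rng.standard_normal((n, 2))
X = F0 @ L0.T + rng.standard_normal((T, n))
F_hat, L_hat = pca_factors(X, r=2)
print(F_hat.shape, L_hat.shape)
```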
By: | Fabrizio Cipollini (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti"); Robert F. Engle (Department of Finance, Stern School of Business, New York University); Giampiero Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti") |
Abstract: | The Multiplicative Error Model introduced by Engle (2002) for positive-valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support. In this paper we propose a multivariate extension of such a model, taking into consideration the possibility that the vector innovation process may be contemporaneously correlated. The estimation procedure is hindered by the lack of probability density functions for multivariate positive-valued random variables. We suggest the use of copula functions and of estimating equations to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes. Empirical applications on volatility indicators are used to illustrate the gains over the equation-by-equation procedure. |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2006_15&r=ecm |
By: | Karel Mertens |
Abstract: | A test for the cointegrating rank of a vector autoregressive (VAR) process with a possible shift and broken linear trend is proposed. The break point is assumed to be known. The setup is a VAR process for cointegrated variables. The tests are not likelihood ratio tests; instead, the deterministic terms, including the broken trends, are first removed by a GLS procedure, and a likelihood-ratio-type test is applied to the adjusted series. The asymptotic null distribution of the test is derived, and it is shown by a Monte Carlo experiment that the test has better small-sample properties in many cases than a corresponding Gaussian likelihood ratio test for the cointegrating rank. |
Keywords: | Cointegration, structural break, vector autoregressive process, error correction model |
JEL: | C32 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2006/34&r=ecm |
By: | Adam Clements; Stan Hurn; Scott White (National Centre for Econometric Research) |
Abstract: | Many approaches have been proposed for estimating stochastic volatility (SV) models, a number of which are filtering methods. While non-linear filtering methods are superior to linear approaches, they have not gained wide acceptance in the econometrics literature due to their computational cost. This paper proposes a discretised non-linear filtering (DNF) algorithm for the estimation of latent variable models. It is shown that the DNF approach leads to significant computational gains relative to other procedures in the context of SV estimation without any associated loss in accuracy. It is also shown how a number of extensions to standard SV models can be accommodated within the DNF algorithm. |
Keywords: | non-linear filtering, stochastic volatility, state-space models, asymmetries, latent factors, two factor volatility models |
Date: | 2006–08 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2006-3&r=ecm |
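The abstract does not spell out the filter, but a grid-based (discretised) filter for a basic stochastic volatility model can be sketched as follows; the model, grid width and parameter values are illustrative assumptions, not the authors' exact DNF algorithm.

```python
import numpy as np
from scipy import stats

def sv_grid_loglik(y, mu, phi, sig_eta, n_grid=50):
    """Log-likelihood of a basic SV model via a discretised (grid) filter.

    Model: y_t = exp(h_t/2)*eps_t,  h_t = mu + phi*(h_{t-1}-mu) + sig_eta*eta_t,
    with eps_t, eta_t ~ N(0,1).  The latent h is discretised on a grid and the
    prediction/update recursions are carried out on that grid."""
    sd_h = sig_eta / np.sqrt(1.0 - phi**2)               # stationary sd of h
    grid = np.linspace(mu - 4*sd_h, mu + 4*sd_h, n_grid)
    means = mu + phi * (grid - mu)
    # transition matrix P[i, j] = Pr(h_t near grid[j] | h_{t-1} = grid[i])
    P = stats.norm.pdf(grid[None, :], loc=means[:, None], scale=sig_eta)
    P /= P.sum(axis=1, keepdims=True)
    p = stats.norm.pdf(grid, loc=mu, scale=sd_h)          # stationary start
    p /= p.sum()
    loglik = 0.0
    for yt in y:
        p = p @ P                                         # prediction step
        lik = stats.norm.pdf(yt, scale=np.exp(grid / 2))  # measurement density
        joint = p * lik
        c = joint.sum()
        loglik += np.log(c)
        p = joint / c                                     # update step
    return loglik

# quick check on simulated data
rng = np.random.default_rng(2)
mu, phi, sig = -1.0, 0.95, 0.2
h = np.full(1000, mu)
for t in range(1, 1000):
    h[t] = mu + phi*(h[t-1]-mu) + sig*rng.standard_normal()
y = np.exp(h/2) * rng.standard_normal(1000)
print(sv_grid_loglik(y, mu, phi, sig))
```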
By: | Stanislav Anatolyev (New Economic School) |
Abstract: | We enhance the theory of asymptotic inference about predictive ability by considering the case when the set of variables used to construct predictions is sizable. To this end, we consider an alternative asymptotic framework where the number of predictors tends to infinity with the sample size, although more slowly. Depending on the situation, the asymptotic normal distribution of an average prediction criterion either gains additional variance, as in the few-predictors case, or gains a non-zero bias which has no analog in the few-predictors case. By properly modifying conventional test statistics it is possible to remove most size distortions when there are many predictors, and to improve test sizes even when there are few of them. |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cfr:cefirw:w0096&r=ecm |
By: | Stan Hurn; J. Jeisman; K.A. Lindsay (National Centre for Econometric Research) |
Abstract: | Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This paper provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox-Ingersoll-Ross and Ornstein-Uhlenbeck equations respectively. |
Keywords: | stochastic differential equations, parameter estimation, maximum likelihood, simulation, moments |
Date: | 2006–07 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2006-2&r=ecm |
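As an illustration of the benchmark case in which the transition density is available in closed form, here is a sketch of exact maximum likelihood for the Ornstein-Uhlenbeck process; parameter values and function names are made up for the example.

```python
import numpy as np
from scipy.optimize import minimize

def ou_negloglik(params, x, dt):
    """Exact (transition-density) negative log-likelihood for the
    Ornstein-Uhlenbeck process dX = kappa*(theta - X) dt + sigma dW."""
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    mean = theta + (x[:-1] - theta) * e
    var = sigma**2 * (1 - e**2) / (2 * kappa)
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2*np.pi*var) + resid**2 / var)

# simulate an OU path with the exact discretisation and re-estimate
rng = np.random.default_rng(3)
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.1, 1/252, 5000
x = np.empty(n); x[0] = theta
e = np.exp(-kappa*dt)
sd = sigma*np.sqrt((1-e**2)/(2*kappa))
for t in range(1, n):
    x[t] = theta + (x[t-1]-theta)*e + sd*rng.standard_normal()
res = minimize(ou_negloglik, x0=[1.0, 0.0, 0.2], args=(x, dt), method="Nelder-Mead")
print(res.x)   # should be roughly (2.0, 0.05, 0.1)
```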
By: | Monica Billio (Department of Economics, University of Venice Ca’ Foscari); Massimiliano Caporin (massimiliano.caporin@unipd.it) |
Abstract: | We propose a generalization of the Dynamic Conditional Correlation multivariate GARCH model of Engle (2002) and of the Asymmetric Dynamic Conditional Correlation model of Cappiello et al. (2006). The model we propose introduces a block structure in parameter matrices that allows for interdependence with a reduced number of parameters. Our model nests the Flexible Dynamic Conditional Correlation model of Billio et al. (2006) and is named Quadratic Flexible Dynamic Conditional Correlation Multivariate GARCH. In the paper, we provide conditions for positive definiteness of the conditional correlations. We also present an empirical application to the Italian stock market comparing alternative correlation models for portfolio risk evaluation. |
Keywords: | Dynamic correlations, Block-structures, Flexible correlation models |
JEL: | C51 C32 G18 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:53_06&r=ecm |
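For reference, the scalar DCC recursion of Engle (2002) that the proposed block-structured model generalises can be sketched as follows; this is the baseline model, not the Quadratic Flexible specification itself. Positive definiteness of the correlation matrices requires a, b >= 0 and a + b < 1.

```python
import numpy as np

def dcc_correlations(eps, a, b):
    """Conditional correlation path of the scalar DCC recursion of Engle (2002).

    eps : (T, k) matrix of GARCH-standardised residuals."""
    T, k = eps.shape
    Qbar = eps.T @ eps / T                      # unconditional correlation target
    Q = Qbar.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)               # rescale Q_t to a correlation matrix
        Q = (1 - a - b) * Qbar + a * np.outer(eps[t], eps[t]) + b * Q
    return R

# toy example with two weakly correlated standardised series
rng = np.random.default_rng(4)
eps = rng.multivariate_normal([0, 0], [[1, .3], [.3, 1]], size=500)
R = dcc_correlations(eps, a=0.03, b=0.95)
print(R[-1])
```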
By: | Eickmeier, Sandra; Ziegler, Christina |
Abstract: | This paper surveys existing factor forecast applications for real economic activity and inflation by means of a meta-analysis and contributes to the current debate on the determinants of the forecast performance of large-scale dynamic factor models relative to other models. We find that, on average, factor forecasts are slightly better than other models’ forecasts. In particular, factor models tend to outperform small-scale models, whereas they perform slightly worse than alternative methods which are also able to exploit large datasets. Our results further suggest that factor forecasts are better for US than for UK macroeconomic variables, and that they are better for US than for euro-area output; however, there are no significant differences between the relative factor forecast performance for US and euro-area inflation. There is also some evidence that factor models are better suited to predict output at shorter forecast horizons than at longer horizons. These findings all relate to the forecasting environment (which cannot be influenced by the forecasters). Among the variables capturing the forecasting design (which can, by contrast, be influenced by the forecasters), the size of the dataset from which factors are extracted seems to positively affect the relative factor forecast performance. There is some evidence that quarterly data lend themselves better to factor forecasts than monthly data. Rolling forecasts are preferable to recursive forecasts. The factor estimation technique seems to matter as well. Other potential determinants - namely whether forecasters rely on a balanced or an unbalanced panel, whether restrictions implied by the factor structure are imposed in the forecasting equation or not and whether an iterated or a direct multi-step forecast is made - are found to be rather irrelevant. Moreover, we find no evidence that pre-selecting the variables to be included in the panel from which factors are extracted helped to improve factor forecasts in the past. |
Keywords: | Factor models, forecasting, meta-analysis |
JEL: | C2 C3 E37 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdp1:5170&r=ecm |
By: | Marco Lombardi (European Central Bank); Giorgio Calzolari (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti") |
Abstract: | The alpha-stable family of distributions constitutes a generalization of the Gaussian distribution, allowing for asymmetry and thicker tails. Its many useful properties, including a central limit theorem, are especially appreciated in the financial field. However, estimation difficulties have up to now hindered its diffusion among practitioners. In this paper we propose an indirect estimation approach to stochastic volatility models with alpha-stable innovations that exploits, as the auxiliary model, a GARCH(1,1) with t-distributed innovations. We consider both the case of heavy-tailed noise in the returns and that of heavy-tailed noise in the volatility. The approach is illustrated by means of a detailed simulation study and an application to currency crises. |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2006_07&r=ecm |
By: | Stanislav Anatolyev (New Economic School); Nikolay Gospodinov (Concordia University) |
Abstract: | While the predictability of excess stock returns is statistically small, their sign and volatility exhibit a substantially larger degree of dependence over time. We capitalize on this observation and consider prediction of excess stock returns by decomposing the equity premium into a product of sign and absolute value components and carefully modeling the marginal predictive densities of the two parts. We then construct the joint density of a positively valued (absolute returns) random variable and a discrete binary (sign) random variable by copula methods and discuss computation of the conditional mean predictor. Our empirical analysis of US stock return data shows, among other interesting findings, that despite the large unconditional correlation between the two multiplicative components they are conditionally very weakly dependent. |
Keywords: | Stock returns predictability; Directional forecasting; Absolute returns; Joint predictive distribution; Copulas. |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cfr:cefirw:w0095&r=ecm |
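A stripped-down sketch of the sign/absolute-value decomposition: marginal models for the two components are fitted and combined under the (strong) assumption of conditional independence, whereas the paper couples the margins through a copula. The lag structure and the lognormal magnitude model are illustrative choices only.

```python
import numpy as np
import statsmodels.api as sm

# r: series of excess returns (here a placeholder simulated series)
rng = np.random.default_rng(5)
r = rng.standard_t(df=5, size=2000) * 0.01

absr, sign_pos = np.abs(r), (r > 0).astype(float)
X = sm.add_constant(np.column_stack([absr[:-1], sign_pos[:-1]]))  # simple lags as predictors

# marginal model for the direction: a logit for Pr(r_{t+1} > 0)
logit = sm.Logit(sign_pos[1:], X).fit(disp=0)
# marginal model for the magnitude: OLS for log|r_{t+1}| (a crude stand-in for a MEM)
ols = sm.OLS(np.log(absr[1:]), X).fit()

x_new = np.array([[1.0, absr[-1], sign_pos[-1]]])                 # predictors dated today
p_up = logit.predict(x_new)[0]
e_abs = np.exp(ols.predict(x_new)[0] + 0.5 * ols.mse_resid)       # lognormal mean correction
# conditional mean under conditional independence of the two components
print(e_abs * (2 * p_up - 1))
```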
By: | Adrian Pagan; Don Harding (National Centre for Econometric Research) |
Abstract: | Macroeconometric and financial researchers often use secondary or constructed binary random variables that differ in terms of their statistical properties from the primary random variables used in microeconometric studies. One important difference between primary and secondary binary variables is that while the former are, in many instances, independently distributed (i.d.), the latter are rarely i.d. We show how popular rules for constructing binary states determine the degree and nature of the dependence in those states. When using constructed binary variables as regressands, a common mistake is to ignore this dependence by using a probit model. We present an alternative non-parametric method that allows for dependence and apply that method to the issue of using the yield spread to predict recessions. |
Keywords: | Business cycle; binary variable, Markov chain, probit model, yield curve |
Date: | 2006–04 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2006-1&r=ecm |
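As an illustration of how a constructed binary state inherits dependence from the dating rule, here is a simple turning-point rule in the spirit of BBQ/Bry-Boschan dating (not the authors' method); the window length and the absence of censoring rules are arbitrary simplifications.

```python
import numpy as np

def binary_states(y, window=2):
    """Construct a recession indicator from a level series y with a simple
    turning-point rule: peaks (troughs) are local maxima (minima) over
    +/- `window` periods, and the state is 1 between a peak and the next
    trough.  States built this way are serially dependent by construction."""
    y = np.asarray(y)
    n = len(y)
    peaks, troughs = [], []
    for t in range(window, n - window):
        seg = y[t - window : t + window + 1]
        if y[t] == seg.max():
            peaks.append(t)
        if y[t] == seg.min():
            troughs.append(t)
    s = np.zeros(n, dtype=int)
    for p in peaks:
        later = [tr for tr in troughs if tr > p]
        if later:
            s[p + 1 : later[0] + 1] = 1       # recession = peak to next trough
    return s

# toy example: log output with a single dip
y = np.concatenate([np.linspace(0, 1, 40), np.linspace(1, 0.9, 8), np.linspace(0.9, 1.5, 40)])
print(binary_states(y).sum(), "recession periods identified")
```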
By: | Massimiliano Caporin (Department of Economics, Università di Padova); Domenico Sartore (Department of Economics, University of Venice Ca’ Foscari) |
Abstract: | This paper provides the theoretical and operational framework for estimating past values of relevant time series starting from a (limited) information set. We consider a general approach that includes as special cases time series aggregation and temporal and/or spatial disaggregation problems. Furthermore, we explore the problems and possible solutions associated with a retropolation exercise, showing that linear models may be the preferred representation for the production of the needed data. The methodology is designed with a focus on economic time series, but it could also be applied in other statistical areas. An empirical example is presented: we analyze the back-calculation of the EU15 Industrial Production Index, comparing our approach with the official Eurostat one. |
Keywords: | benchmarking, retropolation, historical reconstruction, back-forecasting, missing past values, aggregation, disaggregation. |
JEL: | C10 C82 C50 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:56_06&r=ecm |
By: | de Luna, Xavier (Department of Statistics, Umeå University); Johansson, Per (Institute for Labour Market Policy Evaluation) |
Abstract: | We perform inference on the effect of a treatment on survival times in studies where the treatment assignment is not randomized and the assignment time is not known in advance. We estimate survival functions for a treated and a control group which are made comparable through matching on observed covariates. The inference is performed by conditioning on waiting time to treatment, that is, the time between entrance into the study and treatment. This can be done only when sufficient data are available. In other cases, averaging over waiting times is a possibility, although the classical interpretation of the estimated survival functions is then lost unless the hazards do not depend on the waiting times. To show unbiasedness and to obtain an estimator of the variance, we build on the potential outcome framework, which was introduced by J. Neyman in the context of randomized experiments and adapted to observational studies by D. B. Rubin. Our approach does not make parametric or distributional assumptions. In particular, we do not assume proportionality of the hazards compared. Small-sample performance of the estimator and a derived test of no treatment effect are studied in a Monte Carlo study. |
Keywords: | Effect of a treatment; treatment |
JEL: | J64 |
Date: | 2007–01–16 |
URL: | http://d.repec.org/n?u=RePEc:hhs:ifauwp:2007_001&r=ecm |
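A rough sketch of the estimation idea: match each treated unit to its nearest control on observed covariates and compare Kaplan-Meier survival curves for the two groups. The one-covariate nearest-neighbour matching and the simulated data are purely illustrative.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate; event is 1 if the spell ended, 0 if censored."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk, surv, out_t, out_s = len(time), 1.0, [], []
    for t in np.unique(time):
        d = np.sum((time == t) & (event == 1))
        if d > 0:
            surv *= 1 - d / at_risk
            out_t.append(t); out_s.append(surv)
        at_risk -= np.sum(time == t)
    return np.array(out_t), np.array(out_s)

def matched_survival(x, treated, time, event):
    """Survival curves for treated units and their nearest-neighbour matches on x."""
    x = np.asarray(x, dtype=float)
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    # 1-NN matching with replacement on a single covariate (illustrative only)
    matches = c_idx[np.abs(x[c_idx][None, :] - x[t_idx][:, None]).argmin(axis=1)]
    return (kaplan_meier(time[t_idx], event[t_idx]),
            kaplan_meier(time[matches], event[matches]))

# toy data: treatment shortens spells, assignment depends on x
rng = np.random.default_rng(6)
n = 400
x = rng.normal(size=n)
treated = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(int)
time = rng.exponential(scale=np.exp(0.5 * x - 0.7 * treated))
event = np.ones(n, dtype=int)
(tt, st), (tc, sc) = matched_survival(x, treated, time, event)
print(st[-1], sc[-1])
```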
By: | Stan Hurn; Ralf Becker (National Centre for Econometric Research) |
Abstract: | This paper considers an important practical problem in testing time-series data for nonlinearity in mean. Most popular tests reject the null hypothesis of linearity too frequently if the data are heteroskedastic. Two approaches to redressing this size distortion are considered, both of which have been proposed previously in the literature, although not in relation to this particular problem. These are the heteroskedasticity-robust auxiliary regression approach and the wild bootstrap. Simulation results indicate that both approaches are effective in reducing the size distortion and that the wild bootstrap offers better performance in smaller samples. Two practical examples are then used to illustrate the procedures and demonstrate the potential pitfalls encountered when using non-robust tests. |
Keywords: | nonlinearity in mean, heteroskedasticity, wild bootstrap, empirical size and power |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2007-2&r=ecm |
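A sketch of the wild-bootstrap idea applied to a RESET-type test of nonlinearity in mean (an illustrative test, not necessarily the one used in the paper): the statistic is recomputed on data resampled under the linear null with Rademacher multipliers, so conditional heteroskedasticity is preserved in the bootstrap distribution.

```python
import numpy as np

def neglected_nonlinearity_stat(y, x):
    """RESET-type statistic: regress y on [1, x], then test whether powers of
    the fitted values add explanatory power (n * R^2 of the auxiliary regression)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fit = X @ beta
    resid = y - fit
    Z = np.column_stack([X, fit**2, fit**3])
    gamma, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    u = resid - Z @ gamma
    r2 = 1 - u.var() / resid.var()
    return len(y) * r2, fit, resid

def wild_bootstrap_pvalue(y, x, n_boot=499, seed=0):
    """p-value with the null distribution generated by a wild bootstrap
    (Rademacher weights), so heteroskedasticity is preserved."""
    rng = np.random.default_rng(seed)
    stat, fit, resid = neglected_nonlinearity_stat(y, x)
    count = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(y))      # Rademacher multipliers
        y_star = fit + resid * w                      # resample under the linear null
        stat_star, *_ = neglected_nonlinearity_stat(y_star, x)
        count += stat_star >= stat
    return (1 + count) / (1 + n_boot)

# heteroskedastic but linear data: the robust p-value should not over-reject
rng = np.random.default_rng(7)
x = rng.normal(size=500)
y = 1 + 2 * x + np.abs(x) * rng.normal(size=500)
print(wild_bootstrap_pvalue(y, x))
```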
By: | Ovidiu Precup (King’s College London); Giulia Iori (Department of Economics, City University, London) |
Abstract: | On a high-frequency scale, financial time series are not homogeneous, so standard correlation measures cannot be directly applied to the raw data. To deal with this problem, the time series either have to be homogenized through interpolation or methods that can handle raw non-synchronous time series need to be employed. This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series. The three methods are tested on simulated data and on actual trade time series. The temporal evolution of the correlation matrix is revealed through the analysis of the full correlation matrix and of its Minimum Spanning Tree representation. To perform the analysis we implement several measures from the theory of random weighted networks. |
Keywords: | High-Frequency Correlation, Fourier method, Epps Effect, Minimum Spanning Tree, random networks |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:cty:dpaper:0504&r=ecm |
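To illustrate the interpolation route the paper compares against direct methods, here is a sketch of previous-tick synchronisation followed by an ordinary Pearson correlation of the homogenised returns; at fine grids this estimator is subject to the Epps effect mentioned in the keywords.

```python
import numpy as np

def previous_tick(times, prices, grid):
    """Homogenise an irregular price series onto `grid` by previous-tick interpolation."""
    idx = np.searchsorted(times, grid, side="right") - 1
    idx = np.clip(idx, 0, len(prices) - 1)
    return prices[idx]

def synced_correlation(t1, p1, t2, p2, n_grid=100):
    """Pearson correlation of log-returns after putting both series on a common grid."""
    grid = np.linspace(max(t1[0], t2[0]), min(t1[-1], t2[-1]), n_grid)
    r1 = np.diff(np.log(previous_tick(t1, p1, grid)))
    r2 = np.diff(np.log(previous_tick(t2, p2, grid)))
    return np.corrcoef(r1, r2)[0, 1]

# toy non-synchronous observations of two correlated random walks
rng = np.random.default_rng(8)
steps = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=5000) * 1e-3
logp = steps.cumsum(axis=0) + np.log(100)
base = np.linspace(0, 1, 5000)
t1 = np.sort(rng.uniform(0, 1, 800)); t2 = np.sort(rng.uniform(0, 1, 500))
p1 = np.exp(np.interp(t1, base, logp[:, 0]))
p2 = np.exp(np.interp(t2, base, logp[:, 1]))
print(synced_correlation(t1, p1, t2, p2))
```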
By: | Giovanni De Luca (Dipartimento di Statistica e Matematica per la Ricerca Economica, Università di Napoli Parthenope); Giampiero M. Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti") |
Abstract: | Financial market price formation and exchange activity can be investigated by means of ultra-high frequency data. In this paper we investigate an extension of the Autoregressive Conditional Duration (ACD) model of Engle and Russell (1998) by adopting a mixture-of-distributions approach with time-varying weights. Empirical estimation of the Mixture ACD model shows that it suitably resolves the limitations of the standard base model, in particular its inadequacy in modelling the tail behavior of the distribution. When the weights are made dependent on some market activity data, the model lends itself to a structural interpretation related to price formation and information diffusion in the market. |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2005_11&r=ecm |
By: | Roberto Leon-Gonzalez; Riccardo Scarpa |
Abstract: | A Benefit Function Transfer obtains estimates of Willingness-to-Pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the Benefit Transfer model. A more expensive alternative for estimating WTP is to analyse only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian Model Averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small dataset is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample by a factor of about eight when the forest is ‘poolable’. |
Keywords: | Benefit Transfer; Bayesian Model Averaging; Exchangeability; Non-market Valuation; Panel Data |
JEL: | C11 C33 C81 Q23 Q26 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:lec:leecon:07/1&r=ecm |
By: | Christian Kascha; Karel Mertens |
Abstract: | An important question in empirical macroeconomics is whether structural vector autoregressions (SVARs) can reliably discriminate between competing DSGE models. Several recent papers have suggested that one reason SVARs may fail to do so is because they are finite-order approximations to infinite-order processes. In this context, we investigate the performance of models that do not suffer from this type of misspecification. We estimate VARMA and state space models using simulated data from a standard economic model and compare true with estimated impulse responses. For our examples, we find that one cannot gain much by using algorithms based on a VARMA representation. However, algorithms that are based on the state space representation do outperform VARs. Unfortunately, these alternative estimates remain heavily biased and very imprecise. The findings of this paper suggest that the reason SVARs perform weakly in these types of simulation studies is not because they are simple finite-order approximations. Given the properties of the generated data, their failure seems almost entirely due to the use of small samples. |
Keywords: | Structural VARs, VARMA, State Space Models, Identification, Business Cycles |
JEL: | E32 C15 C52 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2006/37&r=ecm |
By: | Christian T. Brownlees (Università di Firenze, Dipartimento di Statistica "G. Parenti"); Giampiero Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti") |
Abstract: | The financial econometrics literature on Ultra High-Frequency Data (UHFD) has been growing steadily in recent years. However, it is not always straightforward to construct the time series of interest from the raw data, and the consequences of data handling procedures for the subsequent statistical analysis are not fully understood. Some results could be sample or asset specific, and in this paper we address some of these issues focussing on the data produced by the New York Stock Exchange, summarizing the structure of their TAQ ultra high-frequency dataset. We review and present a number of methods for the handling of UHFD, and explain the rationale and implications of using such algorithms. We then propose procedures to construct the time series of interest from the raw data. Finally, we examine the impact of data handling on statistical modeling within the context of ACD models of financial durations. |
Keywords: | Ultra-high Frequency Data, ACD models, Outliers, New York Stock Exchange |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2006_03&r=ecm |
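The exact cleaning rules are described in the paper; the sketch below only illustrates the flavour of such procedures (drop non-positive prices, merge duplicate time stamps, screen outliers against a rolling median), with thresholds chosen arbitrarily.

```python
import pandas as pd

def clean_trades(df, k=25, n_mad=5):
    """Illustrative ultra-high-frequency cleaning steps (not the exact TAQ rules
    of the paper): drop non-positive prices, merge trades with identical time
    stamps using the median price, and remove prices lying more than n_mad
    mean absolute deviations from a centred rolling median of k neighbours."""
    df = df[df["price"] > 0]
    df = df.groupby("time", as_index=False)["price"].median()   # one obs per time stamp
    med = df["price"].rolling(k, center=True, min_periods=1).median()
    mad = (df["price"] - med).abs().rolling(k, center=True, min_periods=1).mean()
    keep = (df["price"] - med).abs() <= n_mad * (mad + 1e-12)
    return df[keep].reset_index(drop=True)

# toy tick data with an obvious outlier and a duplicated time stamp
ticks = pd.DataFrame({
    "time":  [0.0, 0.5, 0.5, 1.0, 1.5, 2.0, 2.5],
    "price": [100.0, 100.1, 100.2, 250.0, 100.1, 100.0, 99.9],
})
print(clean_trades(ticks))
```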
By: | Adrian Pagan; Hashem Pesaran (National Centre for Econometric Research) |
Abstract: | This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah (1989), and shows that structural equations for which there are known permanent shocks must have no error correction terms present in them, thereby freeing up the latter to be used as instruments in estimating their parameters. The proposed approach is illustrated by a re-examination of the identification scheme used in a monetary model by Wickens and Motta (2001), and in a well known paper by Gali (1992) which deals with the construction of an IS-LM model with supply-side effects. We show that the latter imposes more short-run restrictions than are needed because of a failure to fully utilize the cointegration information. |
Keywords: | Permanent shocks, structural identification, error correction models, IS-LM models |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2007-1&r=ecm |
By: | Giulia Iori (Department of Economics, City University, London); Ovidiu V. Precup |
Abstract: | In this paper we implement a Fourier method to estimate high frequency correlation matrices from small data sets. The Fourier estimates are shown to be considerably less noisy than the standard Pearson correlation measure and thus capable of detecting subtle changes in correlation matrices with just a month of data. The evolution of correlation at different time scales is analysed from the full correlation matrix and its Minimum Spanning Tree representation. The analysis is performed by implementing measures from the theory of random weighted networks. |
Keywords: | High Frequency Correlation, Fourier Method, Random weighted Networks |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:cty:dpaper:0610&r=ecm |
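A sketch of a Fourier-type correlation estimator in the spirit of the method used in the paper, applied directly to irregularly spaced observations; normalising constants, which differ across references, cancel when the covariance is converted to a correlation, and the number of frequencies is an arbitrary choice here.

```python
import numpy as np

def fourier_coeffs(times, prices, n_freq):
    """Fourier coefficients of dp for an irregularly observed log-price series,
    with observation times rescaled to [0, 2*pi]."""
    t = 2 * np.pi * (times - times[0]) / (times[-1] - times[0])
    r = np.diff(np.log(prices))
    k = np.arange(1, n_freq + 1)[:, None]
    a = (r[None, :] * np.cos(k * t[:-1][None, :])).sum(axis=1) / np.pi
    b = (r[None, :] * np.sin(k * t[:-1][None, :])).sum(axis=1) / np.pi
    return a, b

def fourier_correlation(t1, p1, t2, p2, n_freq=50):
    """Fourier-type correlation of two non-synchronous series: the covariance is
    proportional to sum_k (a1_k a2_k + b1_k b2_k); the proportionality constant
    cancels when forming the correlation."""
    a1, b1 = fourier_coeffs(t1, p1, n_freq)
    a2, b2 = fourier_coeffs(t2, p2, n_freq)
    c12 = np.sum(a1 * a2 + b1 * b2)
    c11 = np.sum(a1**2 + b1**2)
    c22 = np.sum(a2**2 + b2**2)
    return c12 / np.sqrt(c11 * c22)

# toy non-synchronous data from two correlated random walks
rng = np.random.default_rng(9)
steps = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=5000) * 1e-3
logp = steps.cumsum(axis=0) + np.log(100)
base = np.linspace(0, 1, 5000)
t1 = np.sort(rng.uniform(0, 1, 800)); t2 = np.sort(rng.uniform(0, 1, 500))
p1 = np.exp(np.interp(t1, base, logp[:, 0]))
p2 = np.exp(np.interp(t2, base, logp[:, 1]))
print(fourier_correlation(t1, p1, t2, p2))
```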
By: | Giampiero M. Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti"); Edoardo Otranto (Università di Sassari, Dipartimento di Economia, Impresa e Regolamentazione) |
Abstract: | In this paper we suggest ways to characterize the transmission mechanisms of volatility between markets by making use of a new Markov Switching bivariate model where the state of one variable feeds into the transition probability of the state of the other. The comparison between this model and other Markov Switching models allows us to derive statistical tests stressing the role of one market relative to another (contagion, interdependence, comovement, independence, Granger causality). We estimate the model on the weekly high–low range of several Asian markets, with a specific interest in the role of Hong Kong. |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2005_10&r=ecm |
By: | Duflo, Esther; Glennerster, Rachel; Kremer, Michael |
Abstract: | This paper is a practical guide (a toolkit) for researchers, students and practitioners wishing to introduce randomization as part of a research design in the field. It first covers the rationale for the use of randomization, as a solution to selection bias and a partial solution to publication biases. Second, it discusses various ways in which randomization can be practically introduced in field settings. Third, it discusses design issues such as sample size requirements, stratification, level of randomization and data collection methods. Fourth, it discusses how to analyze data from randomized evaluations when there are departures from the basic framework. It reviews in particular how to handle imperfect compliance and externalities. Finally, it discusses some of the issues involved in drawing general conclusions from randomized evaluations, including the necessary use of theory as a guide when designing evaluations and interpreting results. |
Keywords: | development; experiments; program evaluation |
JEL: | C93 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6059&r=ecm |
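As an example of the sample-size calculations mentioned above, the standard two-arm formula for detecting a difference in means can be coded in a few lines; the significance level, power and effect size below are illustrative.

```python
from scipy.stats import norm

def n_per_arm(effect, sd, alpha=0.05, power=0.8):
    """Standard two-arm sample-size formula for detecting a difference in means
    `effect` with outcome standard deviation `sd` (equal allocation, two-sided test):
        n = 2 * sd^2 * (z_{1-alpha/2} + z_{power})^2 / effect^2
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (sd * z / effect) ** 2

# e.g. detecting a 0.2 standard-deviation effect at 80% power
print(round(n_per_arm(effect=0.2, sd=1.0)))   # about 392 per arm
```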
By: | Alberto Abadie; Alexis Diamond; Jens Hainmueller |
Abstract: | Building on an idea in Abadie and Gardeazabal (2003), this article investigates the application of synthetic control methods to comparative case studies. We discuss the advantages of these methods and apply them to study the effects of Proposition 99, a large-scale tobacco control program that California implemented in 1988. We demonstrate that, following Proposition 99, tobacco consumption fell markedly in California relative to a comparable synthetic control region. We estimate that by the year 2000 annual per-capita cigarette sales in California were about 26 packs lower than what they would have been in the absence of Proposition 99. Given that many policy interventions and events of interest in the social sciences take place at an aggregate level (countries, regions, cities, etc.) and affect a small number of aggregate units, the potential applicability of synthetic control methods to comparative case studies is very large, especially in situations where traditional regression methods are not appropriate. The methods proposed in this article produce informative inference regardless of the number of available comparison units, the number of available time periods, and whether the data are individual (micro) or aggregate (macro). Software to compute the estimators proposed in this article is available at the authors' web-pages. |
JEL: | C21 C23 H75 I18 K32 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0335&r=ecm |
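A minimal sketch of the core optimisation behind a synthetic control: choose non-negative donor weights summing to one so that the weighted donors match the treated unit's pre-treatment predictors. The simple quadratic objective (identity predictor weighting) is an assumption; the full method also optimises the predictor weights.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(x1, X0):
    """Weights for a synthetic control: non-negative weights summing to one such
    that the weighted donor-pool predictors X0 @ w are as close as possible to
    the treated unit's predictors x1 (identity weighting matrix for simplicity)."""
    J = X0.shape[1]
    obj = lambda w: np.sum((x1 - X0 @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    res = minimize(obj, x0=np.full(J, 1.0 / J), bounds=[(0, 1)] * J,
                   constraints=cons, method="SLSQP")
    return res.x

# toy example: the treated unit is (roughly) a mix of donors 0 and 2
rng = np.random.default_rng(10)
X0 = rng.normal(size=(5, 4))                          # 5 predictors, 4 donor units
x1 = 0.6 * X0[:, 0] + 0.4 * X0[:, 2] + 0.01 * rng.normal(size=5)
print(np.round(synthetic_control_weights(x1, X0), 2))
```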
By: | Martin Fukac; Adrian Pagan (National Centre for Econometric Research) |
Abstract: | We advance the proposal that DSGE models should not just be estimated and evaluated with reference to full information methods. These make strong assumptions, and therefore there is uncertainty about their impact upon results. Some limited information analysis, which can be used in a complementary way, seems important. Because it is sometimes difficult to implement limited information methods when there are unobservable non-stationary variables in the system, we present a simple method of overcoming this that involves normalizing the non-stationary variables with their permanent components and then estimating the resulting Euler equations. We illustrate the interaction between full and limited information methods in the context of a well-known open economy model of Lubik and Schorfheide. The transformation was effective in revealing possible mis-specifications in the equations of Lubik and Schorfheide's system, and the limited information analysis highlighted the role of priors in having a major influence upon the estimates. |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2006-6&r=ecm |
By: | Giampiero Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti"); Edoardo Otranto (Università di Sassari, Dipartimento di Economia, Impresa e Regolamentazione) |
Abstract: | The integration of financial markets across countries has modified the way prices react to news. Innovations originating in one market diffuse to other markets following patterns which usually stress the presence of interdependence. In some cases, though, covariances across markets have an asymmetric component which reflects the dominance of one over the others. The volatility transmission mechanisms in such events may be more complex than what can be modelled as a multivariate GARCH model. In this paper we adopt a new Markov Switching approach and we suppose that periods of high volatility and periods of low volatility represent the states of an ergodic Markov Chain where the transition probability is made dependent on the state of the “dominant” series. We provide some theoretical background and illustrate the model on Asian markets data showing support for the idea of dominant market and the good prediction performance of the model on a multi-period horizon. |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2006_04&r=ecm |
By: | Antonio Matas-Mir (European Central Bank); Denise R. Osborn (University of Manchester, Centre for Growth and Business Cycle Research, Economic Studies, School of Social Sciences); Marco Lombardi (European Central Bank) |
Abstract: | We study the impact of seasonal adjustment on the properties of business cycle expansion and recession regimes using analytical, simulation and empirical methods. Analytically, we show that the X-11 adjustment filter both reduces the magnitude of change at turning points and reduces the depth of recessions, with specific effects depending on the length of the recession. A simulation analysis using Markov switching models confirms these properties, with particularly undesirable effects in delaying the recognition of the end of a recession. However, seasonal adjustment can have desirable properties in clarifying the true regime when this is well underway. The empirical findings, based on four coincident US business cycle indicators, reinforce the analytical and simulation results by showing that seasonal adjustment leads to the identification of longer and shallower recessions than obtained using unadjusted data. |
Date: | 2005–09 |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2005_15&r=ecm |