nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒02‒09
29 papers chosen by
Sune Karlsson
Orebro University

  1. Forecasting Time Series with Long Memory and Level Shifts, A Bayesian Approach By Silvestro Di Sanzo
  2. Bayesian Inference on Dynamic Models with Latent Factors By Monica Billio; Roberto Casarin; Domenico Sartore
  3. Statistical tests and estimators of the rank of a matrix and their applications in econometric modelling By Gonzalo Camba-Méndez; George Kapetanios
  4. Robust Performance Hypothesis Testing with the Sharpe Ratio By Oliver Ledoit; Michael Wolf
  5. Semiparametric estimation of duration models when the parameters are subject to inequality constraints and the error distribution is unknown By Kulan Ranasinghe; Mervyn J. Silvapulle
  6. A Maximum Likelihood Method for the Incidental Parameter Problem By Marcelo Moreira
  7. An analysis of the indicator saturation estimator as a robust regression By Søren Johansen; Bent Nielsen
  8. Nonparametric Identification and Estimation of Multivariate Mixtures By Hiroyuki Kasahara; Katsumi Shimotsu
  9. THRET: Threshold Regression with Endogenous Threshold Variables. By Andros Kourtellos; Chih Ming Tan; Thanasis Stengos
  10. Harmonic Regression Models: A Comparative Review with Applications. By Michael Artis; José G. Clavel; Mathias Hoffmann; Dilip Nachane
  11. Estimation of k-factor GIGARCH process : a Monte Carlo study By Abdou Kâ Diongue; Dominique Guegan
  12. Estimating probabilities of default with support vector machines By Härdle, Wolfgang; Moro, Rouslan A.; Schäfer, Dorothea
  13. Solving, Estimating and Selecting Nonlinear Dynamic Models without the Curse of Dimensionality By Viktor Winschel; Markus Krätzig
  14. Simple nonparametric estimators for unemployment duration analysis By Wichert, Laura; Wilke, Ralf A.
  15. Copula-Based Tests for Cross-Sectional Independence in Panel Models By Hong-Ming Huang; Chihwa Kao; Giovanni Urga
  16. Application of the Generalized Method of Moments for Estimating Continuous-Time Models of U.S. Short-Term Interest Rates By Balázs Cserna
  17. Unimodal regression in the two-parameter exponential family with constant or known dispersion parameter By Pettersson, Kjell
  18. Control of the False Discovery Rate under Dependence using the Bootstrap and Subsampling By Joseph P. Romano; Azeem M. Shaikh; Michael Wolf
  19. Structural Constant Conditional Correlation By Enzo Weber
  20. First order asymptotic theory for parametric misspecification tests of GARCH models By Andreea Halunga; Chris D. Orme
  21. Nonlinearity, Nonstationarity, and Thick Tails: How They Interact to Generate Persistency in Memory By J. Isaac Miller; Joon Y. Park
  22. Support Vector Regression Based GARCH Model with Application to Forecasting Volatility of Financial Returns By Shiyi Chen; Kiho Jeong; Wolfgang Härdle
  23. Effect of noise filtering on predictions : on the routes of chaos By Dominique Guegan
  24. Exploring a stochastic frontier model when the dependent variable is a count By Eduardo Fé Rodríguez
  25. Market linkages, variance spillovers and correlation stability: empirical evidences of financial contagion By Monica Billio; Massimiliano Caporin
  26. Business Cycle Analysis with Multivariate Markov Switching Models By Monica Billio; Jacques Anas; Laurent Ferrara; Marco Lo Duca
  27. Bounds analysis of competing risks : a nonparametric evaluation of the effect of unemployment benefits on migration in Germany By Arntz, Melanie; Lo, Simon M. S.; Wilke, Ralf A.
  28. Which Structural Parameters Are "Structural"? Identifying the Sources of Instabilities in Economic Models By Inoue, Atsushi; Rossi, Barbara
  29. Weighted smooth transition regressions By Ralf Becker; Denise Osborn

  1. By: Silvestro Di Sanzo (Department of Economics, University Of Alicante)
    Abstract: Recent studies have shown that it is troublesome, in practice, to distinguish between long memory and nonlinear processes. It is therefore of obvious interest to capture both long memory and nonlinearity in a single time series model so that their relative importance can be assessed. In this paper we put forward such a model, combining long memory with Markov-switching nonlinearity. A Markov chain Monte Carlo algorithm is proposed to estimate the model and to evaluate its forecasting performance using Bayesian predictive densities. The resulting forecasts are a significant improvement over those obtained from linear long memory and Markov-switching models.
    Keywords: Markov-Switching models, Bootstrap, Gibbs Sampling
    JEL: C11 C15 C22
    Date: 2007
  2. By: Monica Billio (Department of Economics, University Of Venice Cà Foscari); Roberto Casarin (University of Brescia); Domenico Sartore (Department of Economics, University Of Venice Cà Foscari)
    Abstract: In time series analysis, latent factors are often introduced to model the heterogeneous time evolution of the observed processes. The presence of unobserved components makes the maximum likelihood estimation method more difficult to apply. A Bayesian approach can sometimes be preferable, since it can handle general state space models and eases simulation-based parameter estimation and latent factor filtering. The paper examines economic time series models from a Bayesian perspective, focusing, through some examples, on the extraction of business cycle components. We briefly review some general univariate Bayesian dynamic models, discuss simulation-based techniques such as Gibbs sampling and adaptive importance sampling, and finally suggest the use of the particle filter for parameter estimation and latent factor extraction.
    Keywords: Bayesian Dynamic Models, Simulation Based Inference, Particle Filters, Latent Factors, Business Cycle
    JEL: C11 C15 C22 C63 O40
    Date: 2007
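    The particle-filtering step recommended above can be illustrated with a minimal bootstrap (SIR) particle filter for a toy linear Gaussian state space model; the model, parameter values, and function name below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def bootstrap_particle_filter(ys, n_particles=500, phi=0.9, q=1.0, r=1.0, seed=7):
    """Bootstrap (SIR) particle filter for the toy model
       x_t = phi * x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
       Returns the filtered means E[x_t | y_1..t]."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # Propagate each particle through the state equation.
        particles = [phi * x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
        # Weight by the Gaussian observation likelihood (constants cancel).
        weights = [math.exp(-0.5 * (y - x) ** 2 / r) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means
```

    With moderate noise the filtered means track the latent state; more refined proposal distributions, of the kind surveyed in the paper, reduce the variance of the importance weights.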
  3. By: Gonzalo Camba-Méndez (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); George Kapetanios (Queen Mary, University of London, Mile End Road, London, E1 4NS, United Kingdom.)
    Abstract: Testing and estimating the rank of a matrix of estimated parameters is key in a large variety of econometric modelling scenarios. This paper describes general methods to test for and estimate the rank of a matrix, and provides details on a variety of modelling scenarios in the econometrics literature where such methods are required. Four different methods to test the true rank of a general matrix are described, as well as one method that can handle the case of a matrix subject to parameter constraints associated with definiteness structures. The technical requirements for the implementation of the tests of rank of a general matrix differ, and hence there are merits to all of them that justify their use in applied work. Nonetheless, we review available evidence on their small sample properties in the context of different modelling scenarios where all, or some, are applicable. JEL Classification: C12, C15, C32.
    Keywords: Multiple time series, model specification, tests of rank.
    Date: 2008–01
  4. By: Oliver Ledoit; Michael Wolf
    Abstract: Applied researchers often test for the difference of the Sharpe ratios of two investment strategies. A very popular tool to this end is the test of Jobson and Korkie (1981), which has been corrected by Memmel (2003). Unfortunately, this test is not valid when returns have tails heavier than the normal distribution or exhibit time series dependence. Instead, we propose the use of robust inference methods. In particular, we suggest constructing a studentized time series bootstrap confidence interval for the difference of the Sharpe ratios and declaring the two ratios different if zero is not contained in the obtained interval. This approach has the advantage that one can simply resample from the observed data, as opposed to some null-restricted data. A simulation study demonstrates the improved finite sample performance compared to existing methods. In addition, two applications to real data are provided.
    Keywords: Bootstrap, HAC inference, Sharpe ratio
    JEL: C12 C14 C22
    Date: 2008–01
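    A stripped-down version of this idea — a circular block bootstrap percentile interval for the Sharpe ratio difference, without the studentization the authors employ — might look as follows; the block length and function names are illustrative assumptions.

```python
import random
import statistics

def sharpe(returns):
    """Sample Sharpe ratio: mean return over (population) standard deviation."""
    return statistics.mean(returns) / statistics.pstdev(returns)

def block_bootstrap_ci(ret_a, ret_b, block_len=5, n_boot=2000, alpha=0.05, seed=1):
    """Circular block bootstrap percentile interval for sharpe(a) - sharpe(b).
       Resampling blocks of paired observations preserves both serial
       dependence and the cross-correlation between the two strategies."""
    rng = random.Random(seed)
    n = len(ret_a)
    diffs = []
    for _ in range(n_boot):
        idx = []
        while len(idx) < n:
            start = rng.randrange(n)
            idx.extend((start + j) % n for j in range(block_len))
        idx = idx[:n]
        a = [ret_a[i] for i in idx]
        b = [ret_b[i] for i in idx]
        diffs.append(sharpe(a) - sharpe(b))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

    One declares the two Sharpe ratios different if zero falls outside the interval; the studentized version in the paper has better finite-sample coverage under heavy tails.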
  5. By: Kulan Ranasinghe; Mervyn J. Silvapulle
    Abstract: The parameters in duration models are usually estimated by a Quasi Maximum Likelihood Estimator [QMLE]. This estimator is efficient if the errors are iid and exponentially distributed. Otherwise, it may not be the most efficient. Motivated by this, a class of estimators has been introduced by Drost and Werker (2004). Their estimator is asymptotically most efficient when the error distribution is unknown. However, the practical relevance of their method remains to be evaluated. Further, although some parameters in several common duration models are known to be nonnegative, this estimator may turn out to be negative. This paper addresses these two issues. We propose a new semiparametric estimator when there are inequality constraints on parameters, and a simulation study evaluates the two semiparametric estimators. The results lead us to conclude the following when the error distribution is unknown: (i) If there are no inequality constraints on parameters then the Drost-Werker estimator is better than the QMLE, and (ii) if there are inequality constraints on parameters then the estimator proposed in this paper is better than the Drost-Werker estimator and the QMLE. In conclusion, this paper recommends estimators that are better than the often used QMLE for estimating duration models.
    Keywords: Adaptive inference; Conditional duration model; Constrained inference; Efficient semiparametric estimation; Order restricted inference; Semiparametric efficiency bound.
    JEL: C14 C41
    Date: 2008–01
  6. By: Marcelo Moreira
    Abstract: This paper uses the invariance principle to solve the incidental parameter problem. We seek group actions that preserve the structural parameter and yield a maximal invariant in the parameter space with fixed dimension. M-estimation from the likelihood of the maximal invariant statistic yields the maximum invariant likelihood estimator (MILE). We apply our method to (i) a stationary autoregressive model with fixed effects; (ii) an agent-specific monotonic transformation model; (iii) an instrumental variable (IV) model; and (iv) a dynamic panel data model with fixed effects. In the first two examples, there exist group actions that completely discard the incidental parameters. In a stationary autoregressive model with fixed effects, MILE coincides with existing conditional and integrated likelihood methods. The invariance principle also gives a new perspective to the marginal likelihood approach. In an agent-specific monotonic transformation model, our approach yields an estimator that is consistent and asymptotically normal when errors are Gaussian. In an instrumental variable (IV) model, this paper unifies asymptotic results under strong instruments (SIV) and many weak instruments (MWIV) frameworks. We obtain consistency, asymptotic normality, and optimality results for the limited information maximum likelihood estimator directly from the invariant likelihood. Our approach is parallel to M-estimation in problems in which the number of parameters does not change with the sample size. In a dynamic panel data model with N individuals and T time periods, MILE is consistent as long as NT goes to infinity. We obtain a large N, fixed T bound; this bound coincides with Hahn and Kuersteiner's (2002) bound when T goes to infinity. MILE reaches (i) our bound when N is large and T is fixed; and (ii) Hahn and Kuersteiner's (2002) bound when both N and T are large.
    JEL: C13 C23 C30
    Date: 2008–02
  7. By: Søren Johansen (Department of Economics, University of Copenhagen); Bent Nielsen (Department of Economics, University of Oxford)
    Abstract: An algorithm suggested by Hendry (1999) for estimation in a regression with more regressors than observations is analyzed with the purpose of finding an estimator that is robust to outliers and structural breaks. This estimator is an example of a one-step M-estimator based on Huber's skip function. The asymptotic theory is derived, using empirical process techniques, in the situation where there are no outliers or structural breaks. Stationary processes, trend stationary autoregressions and unit root processes are considered. Classification JEL: C32
    Keywords: empirical processes; Huber's skip; indicator saturation; M-estimator; outlier robustness; vector autoregressive process
    Date: 2008–02
  8. By: Hiroyuki Kasahara (University of Western Ontario); Katsumi Shimotsu (Queen's University)
    Abstract: We study nonparametric identifiability of finite mixture models of k-variate data with M subpopulations, in which the components of the data vector are independent conditional on belonging to a subpopulation. We provide a sufficient condition for nonparametrically identifying M subpopulations when k>=3. Our focus is on the relationship between the number of values the components of the data vector can take on, and the number of identifiable subpopulations. Intuition would suggest that if the data vector can take many different values, then combining information from these different values helps identification. Hall and Zhou (2003) show, however, that when k=2, two-component finite mixture models are not nonparametrically identifiable regardless of the number of values the data vector can take. When k>=3, there emerges a link between the variation in the data vector and the number of identifiable subpopulations: the number of identifiable subpopulations increases as the data vector takes on additional (different) values. This points to the possibility of identifying many components even when k=3, if the data vector has a continuously distributed element. Our identification method is constructive and leads to an estimation strategy. It is not as efficient as the MLE, but can be used as the initial value of the optimization algorithm in computing the MLE. We also provide a sufficient condition for identifying the number of nonparametrically identifiable components, and develop a method for statistically testing and consistently estimating that number. We extend these procedures to develop a test for the number of components in binomial mixtures.
    Keywords: finite mixture, binomial mixture, model selection, number of components, rank estimation
    JEL: C13 C14 C51 C52
    Date: 2007–12
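    The binomial-mixture case mentioned at the end of the abstract can be made concrete with a small EM fit of a two-component binomial mixture; this is a generic EM sketch under invented data, not the authors' constructive identification procedure.

```python
from math import comb

def em_binomial_mixture(xs, n, p1=0.3, p2=0.7, w=0.5, iters=200):
    """EM for a two-component mixture of Binomial(n, p) distributions.
       xs are observed success counts out of n trials each.
       Returns (p_low, p_high, w) with the success probabilities sorted."""
    for _ in range(iters):
        # E-step: posterior probability each observation came from component 1.
        resp = []
        for x in xs:
            l1 = w * comb(n, x) * p1**x * (1 - p1) ** (n - x)
            l2 = (1 - w) * comb(n, x) * p2**x * (1 - p2) ** (n - x)
            resp.append(l1 / (l1 + l2))
        # M-step: responsibility-weighted maximum likelihood updates.
        s = sum(resp)
        w = s / len(xs)
        p1 = sum(r * x for r, x in zip(resp, xs)) / (n * s)
        p2 = sum((1 - r) * x for r, x in zip(resp, xs)) / (n * (len(xs) - s))
    return (min(p1, p2), max(p1, p2), w)
```

    On well-separated data the two estimated success probabilities recover the subpopulations; as the abstract notes, such estimates can also serve as starting values for a full MLE.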
  9. By: Andros Kourtellos (University of Cyprus, Cyprus.); Chih Ming Tan (Tufts University, USA.); Thanasis Stengos (University of Guelph, Canada and The Rimini Centre for Economic Analysis, Rimini, Italy.)
    Abstract: This paper extends the simple threshold regression framework of Hansen (2000) and Caner and Hansen (2004) to allow for endogeneity of the threshold variable. We develop a concentrated two-stage least squares (C2SLS) estimator of the threshold parameter that is based on an inverse Mills ratio bias correction. Our method also allows for the endogeneity of the slope variables. We show that our estimator is consistent and investigate its performance using a Monte Carlo simulation that indicates the applicability of the method in finite samples. We also illustrate its usefulness with an empirical example from economic growth. JEL Classifications: C13, C51
    Date: 2008–01
  10. By: Michael Artis; José G. Clavel; Mathias Hoffmann; Dilip Nachane
    Abstract: Strongly periodic series occur frequently in many disciplines. This paper reviews one specific approach to analyzing such series, viz. the harmonic regression approach. The five major methods suggested under this approach are critically reviewed and compared, and their empirical potential is highlighted via two applications. The out-of-sample forecast comparisons are made using the Superior Predictive Ability test, which specifically guards against the perils of data snooping. Certain tentative conclusions are drawn regarding the relative forecasting ability of the different methods.
    Keywords: Mixed spectrum, autoregressive methods, eigenvalue methods, dynamic harmonic regression, data snooping: multiple forecast comparisons
    JEL: C22 C52 C53
    Date: 2007–09
  11. By: Abdou Kâ Diongue (UFR SAT - Université Gaston Berger - Université Gaston Berger de Saint-Louis, School of Economics and Finance - Queensland University of Technology); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Ecole d'économie de Paris - Paris School of Economics - Université Panthéon-Sorbonne - Paris I)
    Abstract: In this paper, we discuss the parameter estimation for a k-factor generalized long memory process with conditionally heteroskedastic noise. Two estimation methods are proposed. The first method is based on the conditional distribution of the process and the second is obtained as an extension of Whittle's estimation approach. For comparison purposes, Monte Carlo simulations are used to evaluate the finite sample performance of these estimation techniques.
    Keywords: Long memory, Gegenbauer polynomial, heteroskedasticity, conditional sum of squares, Whittle estimation.
    Date: 2008–01
  12. By: Härdle, Wolfgang; Moro, Rouslan A.; Schäfer, Dorothea
    Abstract: This paper proposes a rating methodology that is based on a non-linear classification method, the support vector machine, and a non-parametric technique for mapping rating scores into probabilities of default. We give an introduction to the underlying statistical models and present the results of testing our approach on Deutsche Bundesbank data. In particular, we discuss the selection of variables and give a comparison with more traditional approaches such as discriminant analysis and logit regression. The results demonstrate that the SVM has clear advantages over these methods for all variables tested.
    Keywords: Bankruptcy, Company rating, Default probability, Support vector machines
    JEL: C14 C45 G33
    Date: 2007
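    The classifier at the core of the rating method can be illustrated with a toy linear SVM trained by Pegasos-style subgradient descent on invented, linearly separable data; the paper's kernel SVM and its nonparametric mapping of scores into default probabilities are not reproduced here.

```python
def train_linear_svm(xs, ys, lam=0.1, epochs=200):
    """Pegasos-style stochastic subgradient descent for a linear SVM
       (no intercept). xs: feature tuples; ys: labels in {-1, +1}."""
    d = len(xs[0])
    w = [0.0] * d
    t = 0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Shrink w; add the hinge-loss subgradient when the margin is violated.
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    """Classify by the sign of the decision score w'x."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score >= 0 else -1
```

    In a rating application the raw decision score, rather than its sign, is retained and mapped monotonically into a default probability.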
  13. By: Viktor Winschel; Markus Krätzig
    Abstract: We present a comprehensive framework for Bayesian estimation of structural nonlinear dynamic economic models on sparse grids. The Smolyak operator underlying the sparse grids approach frees global approximation from the curse of dimensionality, and we apply it to a Chebyshev approximation of the model solution. The operator also eliminates the curse from Gaussian quadrature, and we use it for the integrals arising from rational expectations and in three new nonlinear state space filters. The filters substantially decrease the computational burden compared to the sequential importance resampling particle filter. The posterior of the structural parameters is estimated by a new Metropolis-Hastings algorithm with mixing parallel sequences. The parallel extension improves the global maximization property of the algorithm, simplifies the choice of the innovation variances, and allows for unbiased convergence diagnostics and a simple implementation of the estimation on parallel computers. Finally, we provide all algorithms in the open source software JBendge for the solution and estimation of a general class of models.
    Keywords: Dynamic Stochastic General Equilibrium (DSGE) Models, Bayesian Time Series Econometrics, Curse of Dimensionality
    JEL: C11 C13 C15 C32 C52 C63 C68 C87
    Date: 2008–02
  14. By: Wichert, Laura; Wilke, Ralf A.
    Abstract: We consider an extension of conventional univariate Kaplan-Meier type estimators for the hazard rate and the survivor function to multivariate censored data with a censored random regressor. It is an Akritas (1994) type estimator which adapts the nonparametric conditional hazard rate estimator of Beran (1981) to data situations more typical in applied analysis. We show with simulations that the estimator has good finite sample properties, and our implementation appears to be fast. As an application, we estimate nonparametric conditional quantile functions with German administrative unemployment duration data.
    Keywords: unemployment duration, estimation methods, IAB-Beschäftigtenstichprobe (IAB employment subsample)
    Date: 2007–10–16
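    The unconditional building block of this approach — the Kaplan-Meier product-limit estimator of the survivor function — fits in a few lines; the conditional Akritas/Beran estimator additionally weights observations by a kernel in the regressor, which this sketch omits.

```python
def kaplan_meier(times, events):
    """Product-limit estimator of the survivor function S(t).
       times: observed durations; events: 1 = exit observed, 0 = right-censored.
       Returns a list of (t, S(t)) at each observed exit time."""
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths > 0:
            surv *= 1 - deaths / at_risk  # survive this exit time
            curve.append((t, surv))
        # Skip all observations tied at t (censored spells leave the risk set).
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve
```

    Censored spells contribute to the risk set up to their censoring time but never trigger a downward step, which is what distinguishes this estimator from the empirical distribution function.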
  15. By: Hong-Ming Huang; Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Giovanni Urga (Cass Business School, City University, 106 Bunhill Row, London EC1Y 8TZ, U.K.)
    Abstract: This paper processes copula-based tests for testing cross-sectional independence of panel models.
    Keywords: Copulas; Panel data; Cross-sectional independence
    JEL: C13 C33
    Date: 2007–12
  16. By: Balázs Cserna (University of Heidelberg, Department of Economics)
    Abstract: We show by Monte Carlo simulations that the jackknife estimation of Quenouille (1956) provides substantial bias reduction for the estimation of the short-term interest rate models applied in Chan et al. (1992), hereafter CKLS (1992). We find that an alternative estimation based on Nowman (1997) does not sufficiently solve the problem of time aggregation. We provide empirical distributions for parameter tests depending on the elasticity of conditional variance. Using three-month U.S. Treasury bill yields and the federal funds rate, we demonstrate that the estimation results can depend on both the sampling frequency and the proxy that is used for interest rates.
    Keywords: Elasticity of conditional variance, generalized method of moments, jackknife estimation, stochastic differential equations, short-term interest rate.
    JEL: C16 C52
    Date: 2008–01
  17. By: Pettersson, Kjell (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: In this paper we discuss statistical methods for curve-estimation under the assumption of unimodality for variables with distributions belonging to the two-parameter exponential family with known or constant dispersion parameter. We suggest a non-parametric method based on monotonicity properties. The method is applied to Swedish data on laboratory verified diagnoses of influenza and data on inflation from an episode of hyperinflation in Bulgaria.
    Keywords: Non-parametric; Order restrictions; Two-parameter exponential family; Known dispersion parameter; Poisson distribution
    JEL: C10
    Date: 2008–02–04
  18. By: Joseph P. Romano; Azeem M. Shaikh; Michael Wolf
    Abstract: This paper considers the problem of testing s null hypotheses simultaneously while controlling the false discovery rate (FDR). Benjamini and Hochberg (1995) provide a method for controlling the FDR based on p-values for each of the null hypotheses under the assumption that the p-values are independent. Subsequent research has since shown that this procedure is valid under weaker assumptions on the joint distribution of the p-values. Related procedures that are valid under no assumptions on the joint distribution of the p-values have also been developed. None of these procedures, however, incorporate information about the dependence structure of the test statistics. This paper develops methods for control of the FDR under weak assumptions that incorporate such information and, by doing so, are better able to detect false null hypotheses. We illustrate this property via a simulation study and two empirical applications. In particular, the bootstrap method is competitive with methods that require independence if independence holds, but it outperforms these methods under dependence.
    Keywords: Bootstrap, Subsampling, False Discovery Rate, Multiple Testing, Stepdown Procedure.
    JEL: C12 C14
    Date: 2007–10
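    For reference, the marginal p-value procedure the paper takes as its point of departure — the Benjamini-Hochberg (1995) step-up — fits in a few lines; the bootstrap and subsampling methods developed in the paper replace its fixed thresholds with ones that adapt to the dependence of the test statistics.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure at FDR level q.
       Rejects the k* smallest p-values, where k* is the largest k
       with p_(k) <= (k/m) * q. Returns the rejected indices."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_star = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_star = rank
    return sorted(order[:k_star])
```

    Note the step-up character: a p-value above its own threshold can still be rejected if some larger p-value clears its threshold.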
  19. By: Enzo Weber
    Abstract: A small strand of recent literature is concerned with identifying simultaneity in multiple equation systems through autoregressive conditional heteroscedasticity. Since this approach assumes that the structural innovations are uncorrelated, any contemporaneous connection of the endogenous variables needs to be exclusively explained by mutual spillover effects. In contrast, this paper allows for instantaneous covariances, which become identifiable by imposing the constraint of structural constant conditional correlation (SCCC). In this way, common driving forces can be modelled in addition to simultaneous transmission effects. The new methodology is applied to the Dow Jones and Nasdaq Composite indexes in a small empirical example, illuminating the scope and functioning of the SCCC model.
    Keywords: Simultaneity, Identification, EGARCH, CCC
    JEL: C32 G10
    Date: 2008–01
  20. By: Andreea Halunga; Chris D. Orme
    Date: 2007
  21. By: J. Isaac Miller (Department of Economics, University of Missouri-Columbia); Joon Y. Park
    Abstract: We consider nonlinear transformations of random walks driven by thick-tailed innovations that may have infinite means or variances. These three nonstandard characteristics (nonlinearity, nonstationarity, and thick tails) interact to generate a spectrum of asymptotic autocorrelation patterns consistent with long-memory processes. Such autocorrelations may decay very slowly as the number of lags increases or may not decay at all and remain constant at all lags. Depending upon the type of transformation considered and how the model error is specified, the autocorrelation functions are given by random constants, deterministic functions that decay slowly at hyperbolic rates, or mixtures of the two. Such patterns, along with other sample characteristics of the transformed time series, such as jumps in the sample path, excessive volatility, and leptokurtosis, suggest the possibility that these three ingredients are involved in the data generating processes of many actual economic and financial time series data. In addition to time series characteristics, we explore nonlinear regression asymptotics when the regressor is observable and an alternative regression technique when it is unobservable. To illustrate, we examine two empirical applications: wholesale electricity price spikes driven by capacity shortfalls and exchange rates governed by a target zone.
    Keywords: persistency in memory, nonlinear transformations, random walks, thick tails, stable distributions, wholesale electricity prices, target zone exchange rates
    JEL: C22
    Date: 2008–01–15
  22. By: Shiyi Chen; Kiho Jeong; Wolfgang Härdle
    Abstract: In recent years, support vector regression (SVR), a novel neural network (NN) technique, has been successfully used for financial forecasting. This paper deals with the application of SVR to volatility forecasting. Based on a recurrent SVR, a GARCH method is proposed and compared with a moving average (MA), a recurrent NN and a parametric GARCH in terms of their ability to forecast financial market volatility. The data used in this study are British Pound-US Dollar (GBP) daily exchange rates from July 2, 2003 to June 30, 2005 and the New York Stock Exchange (NYSE) daily composite index from July 3, 2003 to June 30, 2005. The experiment shows that, under both varying and fixed forecasting schemes, the SVR-based GARCH outperforms the MA, the recurrent NN and the parametric GARCH on the criteria of mean absolute error (MAE) and directional accuracy (DA). Since no structured way is available to choose the free parameters of the SVR, the sensitivity of performance to these parameters is also examined.
    Keywords: recurrent support vector regression, GARCH model, volatility forecasting
    JEL: C45 C53 G32
    Date: 2008–01
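    The parametric GARCH(1,1) benchmark in the comparison rests on a simple variance recursion, sketched below with illustrative parameter values; in the paper the SVR replaces this linear recursion with a nonparametrically learned one.

```python
def garch11_forecast(returns, omega=0.1, alpha=0.1, beta=0.8, h0=1.0):
    """One-step-ahead conditional variance forecasts from a GARCH(1,1):
       h_{t+1} = omega + alpha * r_t**2 + beta * h_t.
       Returns [h_1, ..., h_{T+1}], where h_1 = h0 is the initial variance."""
    h = [h0]
    for r in returns:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h
```

    In practice the parameters are estimated by maximum likelihood; with returns identically zero the recursion simply converges to omega / (1 - beta).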
  23. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Ecole d'économie de Paris - Paris School of Economics - Université Panthéon-Sorbonne - Paris I)
    Abstract: The detection of chaotic behaviors in commodities, stock markets and weather data is usually complicated by the large noise perturbation inherent to the underlying system. It is well known that predictions from purely deterministic chaotic systems can be accurate mainly in the short term. Thus, it is important to be able to reconstruct in a robust way the attractor in which the data evolve, if this attractor exists. In chaos theory, deconvolution methods have been studied extensively, and different competitive and complementary approaches exist. In this work, we apply two methods: the singular value method and the wavelet approach. The latter has not been investigated much for filtering chaotic systems. Using very large Monte Carlo simulations, we demonstrate the ability of this deconvolution method. We then use the de-noised data set to forecast, and we discuss in depth the possibility of long-term forecasting with chaotic systems.
    Keywords: Deconvolution, chaos, SVD, state space method, wavelets method.
    Date: 2008–01
  24. By: Eduardo Fé Rodríguez
    Date: 2007
  25. By: Monica Billio (Department of Economics, University Of Venice Cà Foscari); Massimiliano Caporin (Dipartimento di Scienze Economiche “Marco Fanno”, University of Padova)
    Abstract: We propose a simultaneous equation system with GARCH errors to model the contemporaneous relations among Asian and American stock markets. On the estimated residuals, we evaluate the correlation matrix over rolling windows and introduce a correlation matrix distance, which allows both a graphical analysis and the development of a statistical test of correlation movements. Furthermore, we introduce a methodology that can be used for identifying turmoil periods on a data-driven basis. We employ the previous results in the analysis of the contagion issue between Asian and American stock markets. Our results show some evidence of contagion, and the proposed statistic identifies, on a data-driven basis, turmoil periods consistent with the ones currently assumed in the literature.
    Keywords: Financial market contagion, Market linkages, Variance spillovers, Dynamic correlations, Rolling correlations, Transformed correlations
    JEL: C51 F3 C22 C32
    Date: 2007
  26. By: Monica Billio (Department of Economics, University Of Venice Cà Foscari); Jacques Anas (Coe Rexecode, Paris); Laurent Ferrara (Banque de France); Marco Lo Duca (European Central Bank)
    Abstract: The class of Markov switching models can be extended in two main directions in a multivariate framework. In the first approach, the switching dynamics are introduced by way of a common latent factor. In the second approach a VAR model with parameters depending on one common Markov chain is considered (MSVAR). We will extend the MSVAR approach allowing for the presence of specific Markov chains in each equation of the VAR (MMSVAR). In the MMSVAR approach we also explore the introduction of correlated Markov chains which allow us to evaluate the relationships among phases in different economies or sectors and introduce causality relationships, which allow a more parsimonious representation. We apply our model to study the relationship between cyclical phases of the industrial production in the US and Euro zone. Moreover, we construct a MMS model to explore the cyclical relationship between the Euro zone industrial production and the industrial component of the European Sentiment Index.
    Keywords: Economic cycles, Multivariate models, Markov switching models, Common latent factors, Causality, Euro-zone
    JEL: C50 C32 E32
    Date: 2007
  27. By: Arntz, Melanie; Lo, Simon M. S.; Wilke, Ralf A.
    Abstract: In this paper we derive nonparametric bounds for the cumulative incidence curve within a competing risks model with partly identified interval data. As an advantage over earlier attempts, our approach also gives valid results in the case of dependent competing risks. We apply our framework to empirically evaluate the effect of unemployment benefits on observed migration of unemployed workers in Germany. Our findings weakly indicate that reducing the entitlement length for unemployment benefits increases migration among high-skilled individuals.
    Keywords: unemployment benefits, benefit entitlement duration, internal migration, regional mobility, migration motives, willingness to move, unemployed workers, high-skilled workers, IAB-Beschäftigtenstichprobe (IAB employment subsample)
    JEL: C41 C14 J61
    Date: 2007–08–13
  28. By: Inoue, Atsushi; Rossi, Barbara
    Abstract: The objective of this paper is to identify which parameters of a model are stable over time. Existing procedures can only be used to test whether a given subset of parameters is stable, and cannot be used to find which subset of parameters is stable. We propose a new procedure that is informative on the nature of instabilities affecting economic models, and sheds light on the economic interpretation and causes of such instabilities. Furthermore, our procedure provides clear guidelines on which parts of the model are reliable for policy analysis and which are possibly mis-specified. Our empirical findings suggest that instabilities during the Great Moderation were mainly concentrated in Euler and IS equations as well as in monetary policy. Such results offer important insights to guide the future theoretical development of macroeconomic models.
    Keywords: Instability, Model Evaluation, Great Moderation
    JEL: E32 E52 E58 C22 C52
    Date: 2008
  29. By: Ralf Becker; Denise Osborn
    Date: 2007

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.