nep-ecm New Economics Papers
on Econometrics
Issue of 2006‒10‒21
fourteen papers chosen by
Sune Karlsson
Orebro University

  1. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices By Hiroyuki Kasahara; Katsumi Shimotsu
  2. Can One Estimate the Unconditional Distribution of Post-Model-Selection Estimators ? By Leeb, Hannes; Pötscher, Benedikt M.
  3. Robust volatility forecasts and model selection in financial time series By L. Grossi; G. Morelli
  4. The extremal index for GARCH(1,1) processes with t-distributed innovations By F. Laurini; J. A. Tawn
  5. Unit Roots and Structural Breaks: A Survey of the Literature By Joseph P. Byrne; Roger Perman
  6. The stability of electricity prices: estimation and inference of the Lyapunov exponents By Bask, Mikael; Liu, Tung; Widerberg, Anna
  7. The art of fitting financial time series with Levy stable distributions By Scalas, Enrico; Kim, Kyungsik
  8. The smooth transition autoregressive target zone model with the Gaussian stochastic volatility and TGARCH error terms with applications By Oleg Korenok; Stanislav Radchenko
  9. An empirically based implementation and evaluation of a network model for commuting flows By Gitlesen, Jens Petter; Kleppe, Gisle; Thorsen, Inge; Ubøe, Jan
  10. Frequent Turbulence? A Dynamic Copula Approach By Chollete, Lorán; Heinen, Andreas
  11. Color Harmonization in Car Manufacturing Process By Anton Andriyashin; Michal Benko; Wolfgang Härdle; Roman Timofeev; Uwe Ziegenhagen
  12. Forecasting and testing a non-constant volatility By Abramov, Vyacheslav; Klebaner, Fima
  13. A wavelet analysis of scaling laws and long-memory in stock market volatility By Vuorenmaa, Tommi
  14. Forecasting with a forward-looking DGE model: combining long-run views of financial markets with macro forecasting By Männistö, Hanna-Leena

  1. By: Hiroyuki Kasahara (University of Western Ontario); Katsumi Shimotsu (Queen's University)
    Abstract: In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in applied work. Three elements emerge as the important determinants of identification: the time-dimension of panel data, the number of values the covariates can take, and the heterogeneity of the response of different types to changes in the covariates. For example, in a simple case, a time-dimension of T = 3 is sufficient for identification, provided that the number of values the covariates can take is no smaller than the number of types, and that the changes in the covariates induce sufficiently heterogeneous variations in the choice probabilities across types. Type-specific components are identifiable even when state dependence is present as long as the panel has a moderate time-dimension (T = 6). We also develop a series logit estimator for finite mixture models of dynamic discrete choices and derive its convergence rate.
    Keywords: dynamic discrete choice models; finite mixture; nonparametric identification; panel data; sieve estimator; unobserved heterogeneity
    JEL: C13 C14 C23 C25
    Date: 2006
  2. By: Leeb, Hannes; Pötscher, Benedikt M.
    Abstract: We consider the problem of estimating the unconditional distribution of a post-model-selection estimator. The notion of a post-model-selection estimator here refers to the combined procedure resulting from first selecting a model (e.g., by a model selection criterion like AIC or by a hypothesis testing procedure) and then estimating the parameters in the selected model (e.g., by least-squares or maximum likelihood), all based on the same data set. We show that it is impossible to estimate the unconditional distribution with reasonable accuracy even asymptotically. In particular, we show that no estimator for this distribution can be uniformly consistent (not even locally). This follows as a corollary to (local) minimax lower bounds on the performance of estimators for the distribution. These lower bounds are shown to approach 1/2 or even 1 in large samples, depending on the situation considered. Similar impossibility results are also obtained for the distribution of linear functions (e.g., predictors) of the post-model-selection estimator.
    Keywords: Inference after model selection; Post-model-selection estimator; Pre-test estimator; Selection of regressors; Akaike's information criterion AIC; Thresholding; Model uncertainty; Consistency; Uniform consistency; Lower risk bound.
    JEL: C20 C13 C52 C12 C51
    Date: 2005–04
  3. By: L. Grossi; G. Morelli
    Abstract: In order to cope with the stylized facts of financial time series, many models have been proposed within the GARCH family (e.g. EGARCH, GJR-GARCH, QGARCH, FIGARCH, LSTGARCH) and the class of stochastic volatility (SV) models. Generally, all these models tend to produce very similar forecasting performance, and most of the time it is difficult to choose the most appropriate specification. In addition, all these models are very sensitive to the presence of atypical observations. The purpose of this paper is to provide the user with new robust model selection procedures for financial models which downweight or eliminate the effect of atypical observations. The extreme case is when outliers are treated as missing data. We extend the theory of missing data to the family of GARCH models and show how to robustify the log-likelihood to make it insensitive to the presence of outliers. The suggested procedure enables us both to detect atypical observations and to select the models with the best forecasting performance.
    Keywords: GARCH models, extreme value, robust estimation
    JEL: C16 C22 C53 G15
    Date: 2006
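As a rough sketch of the idea of robustifying a GARCH log-likelihood (illustrative only: the winsorization rule, the threshold c and all parameter values below are assumptions, not the authors' procedure):

```python
import math

def garch_nll_robust(returns, omega, alpha, beta, c=2.5):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model in which
    squared standardized residuals larger than c**2 are winsorized, both
    in the likelihood and in the variance recursion, so that isolated
    outliers cannot dominate estimation. Setting c = float('inf')
    recovers the ordinary (non-robust) likelihood."""
    h = sum(r * r for r in returns) / len(returns)  # start at the sample variance
    nll = 0.0
    for r in returns:
        z2 = min(r * r / h, c * c)                  # capped squared standardized residual
        nll += 0.5 * (math.log(2 * math.pi) + math.log(h) + z2)
        h = omega + alpha * z2 * h + beta * h       # winsorized variance update
    return nll
```

Minimizing this criterion over (omega, alpha, beta) with any numerical optimizer would give outlier-insensitive estimates; with the cap active, a single large return raises the criterion by a bounded amount only.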
  4. By: F. Laurini; J. A. Tawn
    Abstract: Generalised autoregressive conditional heteroskedastic (GARCH) processes have wide application in financial modelling. To characterise the extreme values of this process the extremal index is required. Mikosch and Starica (2000) derive the extremal index for the squared GARCH(1,1) process. Here we propose an algorithm for the evaluation of the extremal index and for the limiting distribution of the size of clusters of extremes for GARCH(1,1) processes with t-distributed innovations, and tabulate values of these characteristics for a range of parameters of the GARCH(1,1) process. This algorithm also enables properties of other cluster functionals to be evaluated.
    Keywords: clusters, extreme value theory, extremal index, finance, GARCH, multivariate regular variation
    JEL: C15 C32 C53
    Date: 2006
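The paper evaluates the extremal index analytically; as a complementary Monte Carlo sketch (arbitrary illustrative parameter values, not the authors' algorithm), the blocks estimator of the extremal index can be computed on a simulated GARCH(1,1) path with t-distributed innovations:

```python
import math, random

def t_innovation(df, rng):
    # Student-t draw: Gaussian divided by sqrt(chi-square / df)
    z = rng.gauss(0, 1)
    chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def simulate_garch11(n, omega=1e-6, alpha=0.1, beta=0.85, df=5, seed=1):
    """Simulate a stationary GARCH(1,1) path with t innovations."""
    rng = random.Random(seed)
    h = omega / (1 - alpha - beta)  # start at the unconditional variance
    x = []
    for _ in range(n):
        r = math.sqrt(h) * t_innovation(df, rng)
        x.append(r)
        h = omega + alpha * r * r + beta * h
    return x

def blocks_extremal_index(x, block_len, u):
    """Blocks estimator: (# blocks with an exceedance) / (# exceedances of u)."""
    exceedances = sum(1 for v in x if v > u)
    blocks = sum(1 for i in range(0, len(x) - block_len + 1, block_len)
                 if max(x[i:i + block_len]) > u)
    return blocks / exceedances if exceedances else float('nan')
```

The estimator lies in (0, 1]; values well below 1 indicate clustering of extremes, which is the phenomenon the extremal index quantifies.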
  5. By: Joseph P. Byrne and Roger Perman
    Abstract: Since Perron (1989) the time series literature has emphasised the importance of testing for structural breaks in typical economic data sets and examined the implications of structural breaks when testing for unit root processes. In this paper we survey recent developments in testing for unit roots taking account of possible structural breaks. In doing so we discuss the distinction between taking structural break dates as exogenously determined, an approach initially adopted in the literature, and endogenously testing for break dates. That is, we differentiate between testing for breaks when the break date is known and when it is assumed to be unknown. Also important is the distinction between discrete breaks and gradual breaks. Additionally we describe tests for both single and multiple breaks and discuss some of the pitfalls of the latter.
    JEL: C12 C32
  6. By: Bask, Mikael (Bank of Finland Research); Liu, Tung (Department of Economics, Ball State University); Widerberg, Anna (Department of Economics)
    Abstract: The aim of this paper is to illustrate how the stability of a stochastic dynamic system is measured using the Lyapunov exponents. Specifically, we use a feedforward neural network to estimate these exponents as well as asymptotic results for this estimator to test for unstable (chaotic) dynamics. The data set used is spot electricity prices from the Nordic power exchange, Nord Pool, and the dynamic system that generates these prices appears to be chaotic in one case.
    Keywords: feedforward neural network; Nord Pool; Lyapunov exponents; spot electricity prices; stochastic dynamic system
    JEL: C12 C14 C22
    Date: 2006–06–12
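As a minimal illustration of what a Lyapunov exponent measures (using the deterministic logistic map as a toy system, not the paper's neural-network estimator for electricity prices):

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2x)|.
    A positive exponent indicates chaotic (unstable) dynamics; a
    negative one indicates a stable attractor."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n
```

At r = 4 the map is fully chaotic and the exponent is known to equal log 2; at r = 3.2 the orbit settles on a stable period-2 cycle and the exponent is negative.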
  7. By: Scalas, Enrico; Kim, Kyungsik
    Abstract: This paper illustrates a procedure for fitting financial data with alpha-stable distributions. After using all the available methods to evaluate the distribution parameters, one can qualitatively select the best estimate and run some goodness-of-fit tests on this estimate, in order to quantitatively assess its quality. It turns out that, for the two investigated data sets (MIB30 and DJIA from 2000 to present), an alpha-stable fit of log-returns is reasonably good.
    Keywords: finance; statistical methods; stable distributions
    JEL: C14 C16 G00
    Date: 2006–08–23
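The paper's estimation and goodness-of-fit procedures are not reproduced here; as a supporting sketch, symmetric alpha-stable variates can be simulated with the Chambers-Mallows-Stuck method (beta = 0 assumed for simplicity), which is useful e.g. for parametric-bootstrap checks of a fitted model:

```python
import math, random

def sym_stable(alpha, rng):
    """One symmetric alpha-stable draw (beta = 0, unit scale) via the
    Chambers-Mallows-Stuck method: V ~ Uniform(-pi/2, pi/2), W ~ Exp(1)."""
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if abs(alpha - 1.0) < 1e-12:
        return math.tan(v)  # alpha = 1 reduces to the Cauchy distribution
    return (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
            * (math.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))
```

For alpha < 2 the draws exhibit the heavy power-law tails that motivate stable fits of log-returns; alpha = 2 recovers a (rescaled) Gaussian.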
  8. By: Oleg Korenok (Department of Economics, VCU School of Business); Stanislav Radchenko (Department of Economics, University of North Carolina at Charlotte)
    Abstract: This paper proposes to model the error term in the smooth transition autoregressive target zone model as Gaussian with stochastic volatility (STARTZ-SV) or as Student-t with GARCH volatility (STARTZ-TGARCH). Using the dynamics of the Norwegian krone exchange rate index, we show that both models produce standardized residuals that are closer to the assumed distributions and do not produce a hump in the estimated marginal distribution of the exchange rate, which is more consistent with theoretical predictions. We apply the developed models to test whether the dynamics of the oil price can be well approximated by Krugman's target zone model. Our estimates of conditional volatility and of the marginal distribution reject the target zone hypothesis.
    Keywords: target zone, oil price, exchange rate, stochastic volatility, griddy Gibbs, smooth transition
    JEL: C52 Q38 F31
    Date: 2005–08
  9. By: Gitlesen, Jens Petter (University of Stavanger); Kleppe, Gisle (Stord/Haugesund University College (HSH)); Thorsen, Inge (Stord/Haugesund University College (HSH)); Ubøe, Jan (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration)
    Abstract: In this paper we present empirical results based on a network model for commuting flows. The model is a modified version of a construction introduced in Thorsen et al. (1999). Journeys-to-work are determined by distance deterrence effects, the effects of intervening opportunities, and the location of potential destinations relative to alternatives at subsequent steps in the transportation network. Calibration is based on commuting data from a region in Western Norway. Estimated parameter values are reasonable, and the explanatory power is found to be very satisfactory compared to results from a competing destinations approach. We also provide theoretical arguments in favor of a network approach to represent spatial structure characteristics.
    Keywords: Journeys-to-work; transportation network; network approach; spatial structure characteristics
    JEL: C13 C51 C52
    Date: 2006–04–27
  10. By: Chollete, Lorán (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Heinen, Andreas (Dept. of Statistics and Econometrics, Universidad Carlos III de Madrid)
    Abstract: How common and how persistent are turbulent periods? We address these questions by developing and applying a dynamic dependence framework. In order to answer the first question we estimate an unconditional mixture model of normal copulas, based on both economic and econometric justification. In order to answer the second question, we develop and estimate a hidden Markov model of copulas, which allows for dynamic clustering of correlations. These models permit one to infer the relative importance of turbulent and quiescent periods in international markets. Empirically, the three most striking findings are as follows. First, for the unconditional model, turbulent regimes are more common. Second, the conditional copula model dominates the unconditional model. Third, turbulent regimes tend to be more persistent.
    Keywords: International Markets; Turbulence; Hidden Markov Model; Copula
    JEL: C14 C22 C50 F30 G15
    Date: 2006–10–11
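A static version of the first ingredient, a mixture of Gaussian copulas with a "calm" and a "turbulent" component, can be sketched as follows (all parameter values are hypothetical, and the paper's hidden Markov extension is not reproduced here):

```python
import math, random

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mixture_copula_sample(n, rho_calm=0.2, rho_turb=0.8, p_turb=0.3, seed=7):
    """Draw (u, v) pairs from a two-component mixture of bivariate
    Gaussian copulas: with probability p_turb the pair comes from a
    high-correlation 'turbulent' copula, otherwise from a 'calm' one.
    Marginals are uniform on (0, 1) by construction."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        rho = rho_turb if rng.random() < p_turb else rho_calm
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        pairs.append((norm_cdf(z1), norm_cdf(z2)))
    return pairs
```

Fitting such a mixture by maximum likelihood and comparing component weights is one way to gauge how common turbulent periods are, which is the question the unconditional model addresses.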
  11. By: Anton Andriyashin; Michal Benko; Wolfgang Härdle; Roman Timofeev; Uwe Ziegenhagen
    Abstract: One of the major cost factors in car manufacturing is the painting of the body and other parts such as the wing or bonnet. Surprisingly, the painting may be even more expensive than the body itself. From this point of view it is clear that car manufacturers need to monitor the painting process carefully to avoid any deviations from the desired result. Especially for metallic colors, where the shine is based on microscopic aluminium particles, customers tend to be very sensitive to differences in the light reflection of different parts of the car. The following study, carried out in close cooperation with a partner from the car industry, combines classical tests and nonparametric smoothing techniques to detect trends in the car painting process. Localized versions of the t-test and of the Mann-Kendall, Cox-Stuart and change point tests are employed in this study. Suitable parameter settings and the properties of the proposed tests are studied by simulations based on resampling methods borrowed from nonparametric smoothing. The aim of the analysis is to find a reliable technical solution that avoids any human intervention.
    Keywords: smoothing, resampling, nonparametric regression, trend detection
    JEL: C14 C19 C89
    Date: 2006–10
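One of the building blocks, the Mann-Kendall trend test, can be sketched in its basic global form (the localized versions used in the study are not reproduced here; no correction for ties is applied):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and an
    approximate two-sided p-value from the normal approximation.
    S counts concordant minus discordant pairs, so S > 0 suggests
    an upward trend and S < 0 a downward trend."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18
    z = (s - math.copysign(1, s)) / math.sqrt(var) if s != 0 else 0.0
    p = 1 - math.erf(abs(z) / math.sqrt(2))   # two-sided normal p-value
    return s, p
```

A monotone series yields a large S and a tiny p-value, while a trendless series yields a p-value well above conventional significance levels.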
  12. By: Abramov, Vyacheslav; Klebaner, Fima
    Abstract: In this paper we study volatility functions. Our main assumption is that the volatility is deterministic or stochastic but driven by a Brownian motion independent of the stock. We propose a forecasting method and check its consistency with option pricing theory. To estimate the unknown volatility function we use the approach of Goldentayer, Klebaner and Liptser, based on filters for the estimation of an unknown function from its noisy observations. One of the main assumptions is that the volatility is a continuous function, with a derivative satisfying some smoothness conditions. The two forecasting methods correspond to the first and second order filters: the first order filter tracks the unknown function, and the second order filter tracks the function and its derivative. Therefore the quality of forecasting depends on the type of the volatility function: if oscillations of the volatility around its average are frequent, the first order filter seems appropriate; otherwise the second order filter is better. Further, in deterministic volatility models the price of options is given by the Black-Scholes formula with averaged future volatility (Hull and White, 1987; Stein and Stein, 1991). This enables us to compare the implied volatility with the averaged estimated historical volatility. This comparison is done for five companies and shows that the implied volatility and the historical volatilities are not statistically related.
    Keywords: Non-constant volatility; approximating and forecasting volatility; Black-Scholes formula; best linear predictor
    JEL: G13
    Date: 2006–06–06
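The first-order vs second-order tracking idea can be illustrated with a generic alpha-beta tracker (a stand-in sketch with made-up gains, not the specific filters used in the paper): setting beta = 0 gives a level-only (first order) tracker, while beta > 0 also tracks the derivative (second order).

```python
def alpha_beta_filter(obs, alpha=0.5, beta=0.1):
    """Track a slowly varying signal from noisy observations. With
    beta = 0 this is a first order (level-only) exponential tracker;
    with beta > 0 the slope is also estimated, so a trending signal
    can be followed without a persistent lag."""
    level, slope = obs[0], 0.0
    estimates = []
    for y in obs[1:]:
        prediction = level + slope          # one-step-ahead prediction
        residual = y - prediction
        level = prediction + alpha * residual
        slope = slope + beta * residual     # beta = 0 freezes the slope at 0
        estimates.append(level)
    return estimates
```

On a steadily trending input the second order tracker converges to zero error, while the first order tracker retains a constant lag, mirroring the paper's point that the right filter order depends on how the volatility function evolves.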
  13. By: Vuorenmaa, Tommi (Department of Economics, University of Helsinki)
    Abstract: This paper investigates the dependence of average stock market volatility on the timescale, that is, on the time interval used to measure price changes, a dependence often referred to as a scaling law. The scaling factor, in turn, refers to the elasticity of the volatility measure with respect to the timescale. This paper studies, in particular, whether the scaling factor differs from the one implied by a simple random walk model and whether it has remained stable over time. It also explores possible underlying reasons for the observed behaviour of volatility in terms of the heterogeneity of stock market players and the periodicity of intraday volatility. The data consist of volatility series of Nokia Oyj at the Helsinki Stock Exchange at five minute frequency over the period from January 4, 1999 to December 30, 2002. The paper uses wavelet methods to decompose stock market volatility at different timescales. Wavelet methods are particularly well motivated in the present context due to their superior ability to describe the local properties of time series. The results are, in general, consistent with multiscaling in Finnish stock markets. Furthermore, the scaling factor and the long-memory parameters of the volatility series are not constant over time, nor consistent with a random walk model. Interestingly, the evidence also suggests that, for a significant part, the behaviour of volatility is accounted for by an intraday volatility cycle referred to as the New York effect. Long-memory features emerge more clearly in the data over the period around the burst of the IT bubble and may, consequently, be an indication of irrational exuberance on the part of investors.
    Keywords: long-memory; scaling; stock market; volatility; wavelets
    JEL: C14 C22
    Date: 2005–10–11
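The basic scaling-law computation, regressing log realized variance on the log of the timescale, can be sketched as follows (the wavelet decomposition and long-memory estimation are omitted; for i.i.d. zero-mean returns the scaling factor should be close to 1):

```python
import math

def scaling_exponent(returns, scales=(1, 2, 4, 8, 16)):
    """Aggregate zero-mean returns over non-overlapping blocks of each
    timescale m, compute the variance of the aggregated returns (around
    zero), and regress log variance on log timescale. For a simple
    random walk the variance grows linearly in m, so the slope is 1."""
    xs, ys = [], []
    for m in scales:
        agg = [sum(returns[i:i + m])
               for i in range(0, len(returns) - m + 1, m)]
        var = sum(a * a for a in agg) / len(agg)
        xs.append(math.log(m))
        ys.append(math.log(var))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A slope significantly different from 1, or one that drifts over subsamples, is the kind of departure from random walk scaling the paper examines with wavelet methods.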
  14. By: Männistö, Hanna-Leena (Bank of Finland Research)
    Abstract: To develop forecasting procedures with a forward-looking dynamic general equilibrium model, we built a small New Keynesian model and calibrated it to euro area data. It was essential in this context that we allowed for long-run growth in GDP. We brought additional asset price equations, based on the expectations hypothesis and the Gordon growth model, into the standard open economy model in order to extract information on private sector long-run expectations of fundamentals, and to combine that information into the macroeconomic forecast. We propose a method of transforming the model for forecasting use in such a way as to match, in an economically meaningful way, the short-term forecast levels, especially of the model's jump variables, to the parameters affecting the long-run trends of the key macroeconomic variables. More specifically, in the model we have used for illustrative purposes, we pinned down long-run inflation expectations and domestic and foreign potential growth rates using the model's steady state solution in combination with, by assumption, forward-looking information in up-to-date financial market data. Consequently, our proposed solution preserves consistency with market expectations and results, as a favourable by-product, in forecast paths with no initial, first-forecast-period jumps. Furthermore, no ad hoc re-calibration is called for in the proposed forecasting procedures, which clearly is an advantage from the point of view of transparency in communication.
    Keywords: forecasting; New Keynesian model; DSGE model; rational expectations; open economy
    JEL: E17 E30 E31 F41
    Date: 2005–10–11

This nep-ecm issue is ©2006 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.