
on Econometrics 
By:  Hiroyuki Kasahara (University of Western Ontario); Katsumi Shimotsu (Queen's University) 
Abstract:  In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for it. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in applied work. Three elements emerge as the important determinants of identification: the time dimension of panel data, the number of values the covariates can take, and the heterogeneity of the response of different types to changes in the covariates. For example, in a simple case, a time dimension of T = 3 is sufficient for identification, provided that the number of values the covariates can take is no smaller than the number of types, and that changes in the covariates induce sufficiently heterogeneous variations in the choice probabilities across types. Type-specific components are identifiable even when state dependence is present, as long as the panel has a moderate time dimension (T = 6). We also develop a series logit estimator for finite mixture models of dynamic discrete choices and derive its convergence rate.
Keywords:  dynamic discrete choice models; finite mixture; nonparametric identification; panel data; sieve estimator; unobserved heterogeneity 
JEL:  C13 C14 C23 C25 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:uwo:uwowop:20065&r=ecm 
By:  Leeb, Hannes; Pötscher, Benedikt M. 
Abstract:  We consider the problem of estimating the unconditional distribution of a post-model-selection estimator. The notion of a post-model-selection estimator here refers to the combined procedure resulting from first selecting a model (e.g., by a model selection criterion like AIC or by a hypothesis testing procedure) and then estimating the parameters in the selected model (e.g., by least squares or maximum likelihood), all based on the same data set. We show that it is impossible to estimate the unconditional distribution with reasonable accuracy even asymptotically. In particular, we show that no estimator for this distribution can be uniformly consistent (not even locally). This follows as a corollary to (local) minimax lower bounds on the performance of estimators for the distribution. These lower bounds are shown to approach 1/2 or even 1 in large samples, depending on the situation considered. Similar impossibility results are also obtained for the distribution of linear functions (e.g., predictors) of the post-model-selection estimator.
Keywords:  Inference after model selection; Post-model-selection estimator; Pre-test estimator; Selection of regressors; Akaike's information criterion AIC; Thresholding; Model uncertainty; Consistency; Uniform consistency; Lower risk bound.
JEL:  C20 C13 C52 C12 C51 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:72&r=ecm 
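The impossibility result concerns the distribution of combined select-then-estimate procedures such as the thresholding ("pre-test") estimator listed in the keywords. As a hedged illustration (a minimal Monte Carlo of our own, not the paper's construction), the unconditional distribution of a hard-thresholding estimator in a normal location model mixes a point mass at zero with a continuous part, and the mixing weight moves with a local parameter — the feature that drives the non-uniformity:

```python
import numpy as np

# Hard-thresholding ("pre-test") estimator in a normal location model:
#   theta_hat = ybar * 1{ |sqrt(n) * ybar| > c }
# Its unconditional distribution is a mixture of a point mass at 0
# (the "model deselected" event) and a truncated-normal part, and it
# shifts with the local-to-zero parameter theta.
rng = np.random.default_rng(0)
n, c, reps = 100, 1.96, 20_000
theta = 1.0 / np.sqrt(n)                   # local parameter, order n^{-1/2}
y = rng.normal(theta, 1.0, size=(reps, n))
ybar = y.mean(axis=1)
keep = np.abs(np.sqrt(n) * ybar) > c       # the "model selection" step
theta_hat = np.where(keep, ybar, 0.0)

share_at_zero = np.mean(theta_hat == 0.0)  # mass contributed by deselection
```

With these values, roughly four fifths of the simulated estimates are exactly zero, so the finite-sample distribution is nothing like the normal limit of the unrestricted estimator.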
By:  L. Grossi; G. Morelli 
Abstract:  In order to cope with the stylized facts of financial time series, many models have been proposed within the GARCH family (e.g. EGARCH, GJR-GARCH, QGARCH, FIGARCH, LST-GARCH) and among stochastic volatility (SV) models. Generally, all these models tend to produce very similar results in terms of forecasting performance, and most of the time it is difficult to choose the most appropriate specification. In addition, all these models are very sensitive to the presence of atypical observations. The purpose of this paper is to provide the user with new robust model selection procedures for financial models which downweight or eliminate the effect of atypical observations. The extreme case is when outliers are treated as missing data. In this paper we extend the theory of missing data to the family of GARCH models and show how to robustify the log-likelihood to make it insensitive to the presence of outliers. The suggested procedure enables us both to detect atypical observations and to select the best models in terms of forecasting performance.
Keywords:  GARCH models, extreme value, robust estimation 
JEL:  C16 C22 C53 G15 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:par:dipeco:2006se02&r=ecm 
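One simple reading of "outliers as missing data" in a GARCH(1,1) likelihood — a sketch under our own assumptions, not the authors' estimator — is to drop flagged observations from the likelihood sum while keeping the variance recursion alive by substituting the conditional expectation of the squared observation:

```python
import math

def garch11_loglik(x, omega, alpha, beta, outlier=None):
    """Gaussian GARCH(1,1) log-likelihood in which observations flagged
    as outliers are treated as missing: they are excluded from the
    likelihood, and their squared value is replaced by its conditional
    expectation sigma2_t in the variance recursion.  This is one simple
    missing-data treatment, illustrative only."""
    outlier = outlier or set()
    s2 = omega / (1.0 - alpha - beta)        # start at unconditional variance
    ll = 0.0
    for t, xt in enumerate(x):
        if t in outlier:
            x2 = s2                          # E[x_t^2 | past] = sigma2_t
        else:
            x2 = xt * xt
            ll += -0.5 * (math.log(2 * math.pi) + math.log(s2) + x2 / s2)
        s2 = omega + alpha * x2 + beta * s2  # GARCH(1,1) recursion
    return ll

data = [0.1, -0.2, 5.0, 0.15]                # hypothetical returns, one spike
ll_all = garch11_loglik(data, 0.05, 0.10, 0.85)
ll_rob = garch11_loglik(data, 0.05, 0.10, 0.85, outlier={2})
```

Flagging the spike both removes its own huge negative likelihood contribution and stops it from contaminating the fitted variance of the following observations.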
By:  F. Laurini; J. A. Tawn 
Abstract:  Generalised autoregressive conditional heteroskedastic (GARCH) processes have wide application in financial modelling. To characterise the extreme values of this process the extremal index is required. Mikosch and Starica (2000) derive the extremal index for the squared GARCH(1,1) process. Here we propose an algorithm for evaluating the extremal index and the limiting distribution of the size of clusters of extremes for GARCH(1,1) processes with t-distributed innovations, and tabulate values of these characteristics for a range of parameters of the GARCH(1,1) process. This algorithm also enables properties of other cluster functionals to be evaluated.
Keywords:  clusters, extreme value theory, extremal index, finance, GARCH, multivariate regular variation 
JEL:  C15 C32 C53 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:par:dipeco:2006se01&r=ecm 
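The extremal index the abstract refers to measures the degree of clustering of extremes (its reciprocal is the mean cluster size). As a hedged empirical counterpart to the paper's analytical algorithm — illustrative parameters, not the tabulated ones — one can simulate a GARCH(1,1) with t innovations and apply a simple runs estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate a GARCH(1,1) with standardized Student-t innovations
# (illustrative parameters; not those tabulated in the paper).
T, omega, alpha, beta, nu = 100_000, 0.05, 0.10, 0.85, 7.0
z = rng.standard_t(nu, T) * np.sqrt((nu - 2.0) / nu)  # unit-variance t
x = np.empty(T)
s2 = omega / (1.0 - alpha - beta)
for t in range(T):
    x[t] = np.sqrt(s2) * z[t]
    s2 = omega + alpha * x[t] ** 2 + beta * s2

# Runs estimator of the extremal index: a new cluster starts when an
# exceedance of the threshold u is preceded by at least r observations
# below u; theta_hat = (# clusters) / (# exceedances).
u = np.quantile(np.abs(x), 0.99)
exc_times = np.flatnonzero(np.abs(x) > u)
r = 10
starts, last = 0, -r - 1
for t in exc_times:
    if t - last > r:
        starts += 1
    last = t
theta_hat = starts / len(exc_times)   # in (0, 1]; 1 means no clustering
```

Volatility persistence (alpha + beta close to one) pushes theta_hat well below one, i.e. extremes arrive in clusters.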
By:  Joseph P. Byrne and Roger Perman 
Abstract:  Since Perron (1989), the time series literature has emphasised the importance of testing for structural breaks in typical economic data sets and the implications of structural breaks when testing for unit root processes. In this paper we survey recent developments in testing for unit roots while taking account of possible structural breaks. In doing so we discuss the distinction between treating structural break dates as exogenously determined, the approach initially adopted in the literature, and determining break dates endogenously. That is, we differentiate between testing for breaks when the break date is known and when it is assumed to be unknown. Also important is the distinction between discrete breaks and gradual breaks. Additionally, we describe tests for both single and multiple breaks and discuss some of the pitfalls of the latter.
JEL:  C12 C32 
URL:  http://d.repec.org/n?u=RePEc:gla:glaewp:2006_10&r=ecm 
By:  Bask, Mikael (Bank of Finland Research); Liu, Tung (Department of Economics, Ball State University); Widerberg, Anna (Department of Economics)
Abstract:  The aim of this paper is to illustrate how the stability of a stochastic dynamic system can be measured using Lyapunov exponents. Specifically, we use a feedforward neural network to estimate these exponents, as well as asymptotic results for this estimator, to test for unstable (chaotic) dynamics. The data set used is spot electricity prices from the Nordic power exchange, Nord Pool, and the dynamic system that generates these prices appears to be chaotic in one case.
Keywords:  feedforward neural network; Nord Pool; Lyapunov exponents; spot electricity prices; stochastic dynamic system 
JEL:  C12 C14 C22 
Date:  2006–06–12 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2006_009&r=ecm 
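The largest Lyapunov exponent measures the average exponential rate at which nearby trajectories separate; a positive value signals chaos. The paper estimates it from data with a neural network, but the quantity itself is easiest to see on a map with a known answer — a direct (non-neural-network) sketch on the logistic map, whose exponent is ln 2:

```python
import math

def lyapunov_logistic(x0=0.3, n=200_000, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> 4x(1-x),
    estimated as the orbit average of log|f'(x_t)| with f'(x) = 4 - 8x.
    The known value for this map is ln 2."""
    x = x0
    for _ in range(burn):                  # discard transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()   # positive => sensitive dependence (chaos)
```

A neural-network estimator replaces the analytic derivative above with the Jacobian of a fitted one-step-ahead map, which is what makes the approach feasible for observed data such as electricity prices.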
By:  Scalas, Enrico; Kim, Kyungsik 
Abstract:  This paper illustrates a procedure for fitting financial data with alpha-stable distributions. After using all the available methods to evaluate the distribution parameters, one can qualitatively select the best estimate and run some goodness-of-fit tests on this estimate, in order to quantitatively assess its quality. It turns out that, for the two investigated data sets (MIB30 and DJIA from 2000 to present), an alpha-stable fit of log-returns is reasonably good.
Keywords:  finance; statistical methods; stable distributions 
JEL:  C14 C16 G00 
Date:  2006–08–23 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:336&r=ecm 
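Fitting a general alpha-stable law requires numerical methods because the density has no closed form. As a minimal sketch of the fit-then-test workflow on hypothetical (simulated) log-returns, one can use the alpha = 1 member of the stable family — the Cauchy distribution — whose density is closed-form, together with a Kolmogorov-Smirnov goodness-of-fit test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical heavy-tailed "log-returns" (simulated, not market data).
logret = stats.cauchy.rvs(loc=0.0, scale=0.01, size=2000, random_state=rng)

# Step 1: estimate the parameters (Cauchy = stable with alpha = 1,
# skewness 0, so only location and scale remain).
loc, scale = stats.cauchy.fit(logret)

# Step 2: goodness-of-fit test of the fitted law.  Note: using the same
# data to fit and test makes the KS p-value conservative (the Lilliefors
# caveat), so it is a rough quality check, as in the paper's procedure.
ks_stat, p_value = stats.kstest(logret, 'cauchy', args=(loc, scale))
```

For a full alpha-stable fit one would estimate alpha and the skewness as well (e.g. quantile or characteristic-function methods) before running the same kind of test.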
By:  Oleg Korenok (Department of Economics, VCU School of Business); Stanislav Radchenko (Department of Economics, University of North Carolina at Charlotte) 
Abstract:  This paper proposes to model the error term in a smooth transition autoregressive target zone model as Gaussian with stochastic volatility (STARTZ-SV) or as Student-t with GARCH volatility (STARTZ-TGARCH). Using the dynamics of the Norwegian krone exchange rate index, we show that both models produce standardized residuals that are closer to the assumed distributions and do not produce a hump in the estimated marginal distribution of the exchange rate, which is more consistent with theoretical predictions. We apply the developed models to test whether the dynamics of the oil price can be well approximated by Krugman's target zone model. Our estimates of conditional volatility and the marginal distribution reject the target zone hypothesis.
Keywords:  target zone, oil price, exchange rate, stochastic volatility, griddy Gibbs, smooth transition 
JEL:  C52 Q38 F31 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:vcu:wpaper:0505&r=ecm 
By:  Gitlesen, Jens Petter (University of Stavanger); Kleppe, Gisle (Stord/Haugesund University College (HSH)); Thorsen, Inge (Stord/Haugesund University College (HSH)); Ubøe, Jan (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration) 
Abstract:  In this paper we present empirical results based on a network model for commuting flows. The model is a modified version of a construction introduced in Thorsen et al. (1999). Journeys-to-work are determined by distance deterrence effects, the effects of intervening opportunities, and the location of potential destinations relative to alternatives at subsequent steps in the transportation network. Calibration is based on commuting data from a region in Western Norway. Estimated parameter values are reasonable, and the explanatory power is found to be very satisfactory compared to results from a competing destinations approach. We also provide theoretical arguments in favor of a network approach to representing spatial structure characteristics.
Keywords:  Journeys-to-work; transportation network; network approach; spatial structure characteristics
JEL:  C13 C51 C52 
Date:  2006–04–27 
URL:  http://d.repec.org/n?u=RePEc:hhs:nhhfms:2006_004&r=ecm 
By:  Chollete, Lorán (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Heinen, Andreas (Dept. of Statistics and Econometrics, Universidad Carlos III de Madrid) 
Abstract:  How common and how persistent are turbulent periods? We address these questions by developing and applying a dynamic dependence framework. In order to answer the first question we estimate an unconditional mixture model of normal copulas, based on both economic and econometric justification. In order to answer the second question, we develop and estimate a hidden Markov model of copulas, which allows for dynamic clustering of correlations. These models permit one to infer the relative importance of turbulent and quiescent periods in international markets. Empirically, the three most striking findings are as follows. First, for the unconditional model, turbulent regimes are more common. Second, the conditional copula model dominates the unconditional model. Third, turbulent regimes tend to be more persistent.
Keywords:  International Markets; Turbulence; Hidden Markov Model; Copula 
JEL:  C14 C22 C50 F30 G15 
Date:  2006–10–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:nhhfms:2006_010&r=ecm 
By:  Anton Andriyashin; Michal Benko; Wolfgang Härdle; Roman Timofeev; Uwe Ziegenhagen 
Abstract:  One of the major cost factors in car manufacturing is the painting of the body and other parts such as the wing or bonnet. Surprisingly, the painting may be even more expensive than the body itself. From this point of view it is clear that car manufacturers need to monitor the painting process carefully to avoid any deviations from the desired result. Especially for metallic colors, where the shine is based on microscopic aluminium particles, customers tend to be very sensitive to differences in the light reflection of different parts of the car. The following study, carried out in close cooperation with a partner from the car industry, combines classical tests and nonparametric smoothing techniques to detect trends in the process of car painting. Localized versions of the t-test, Mann-Kendall, Cox-Stuart and change-point tests are employed in this study. Suitable parameter settings and the properties of the proposed tests are studied by simulations based on resampling methods borrowed from nonparametric smoothing. The aim of the analysis is to find a reliable technical solution which avoids any human intervention.
Keywords:  smoothing, resampling, nonparametric regression, trend detection 
JEL:  C14 C19 C89 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006071&r=ecm 
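Of the trend tests named in the abstract, the Mann-Kendall test is the most compact to state: its statistic S counts concordant minus discordant pairs, and under the no-trend null a normal approximation yields a Z score. A minimal global version (the paper applies localized, windowed variants) is:

```python
import math

def mann_kendall_z(x):
    """Mann-Kendall trend statistic without tie correction.
    S = sum over pairs i < j of sign(x[j] - x[i]);
    Var(S) = n(n-1)(2n+5)/18 under the null of no trend;
    Z uses the standard continuity correction."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

z_up = mann_kendall_z([1, 2, 3, 5, 8, 13, 21, 34])  # monotone series
```

A localized variant simply evaluates this Z over a moving window and flags windows where |Z| exceeds a critical value, which is the kind of automated monitoring the study aims at.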
By:  Abramov, Vyacheslav; Klebaner, Fima 
Abstract:  In this paper we study volatility functions. Our main assumption is that the volatility is deterministic, or stochastic but driven by a Brownian motion independent of the stock. We propose a forecasting method and check its consistency with option pricing theory. To estimate the unknown volatility function we use the approach of Goldentayer, Klebaner and Liptser, based on filters for estimating an unknown function from its noisy observations. One of the main assumptions is that the volatility is a continuous function, with a derivative satisfying some smoothness conditions. The two forecasting methods correspond to first- and second-order filters: the first-order filter tracks the unknown function, while the second-order filter tracks the function and its derivative. The quality of forecasting therefore depends on the type of volatility function: if oscillations of volatility around its average are frequent, the first-order filter seems appropriate; otherwise the second-order filter is better. Further, in deterministic volatility models the price of options is given by the Black-Scholes formula with averaged future volatility (Hull and White, 1987; Stein and Stein, 1991). This enables us to compare the implied volatility with the averaged estimated historical volatility. This comparison is done for five companies and shows that the implied volatility and the historical volatilities are not statistically related.
Keywords:  Non-constant volatility; approximating and forecasting volatility; Black-Scholes formula; best linear predictor
JEL:  G13 
Date:  2006–06–06 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:207&r=ecm 
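The comparison of implied and historical volatility rests on the Black-Scholes formula evaluated at an averaged volatility. For reference, the standard European call formula (standard textbook material, not code from the paper) is:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call.  Under deterministic
    (or independent stochastic) volatility, sigma is replaced by the
    root-mean-square of volatility over [0, T], which is why averaged
    future volatility enters the pricing formula."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money example: S = K = 100, r = 5%, sigma = 20%, T = 1 year.
price = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)   # about 10.45
```

Implied volatility is then the sigma that makes `bs_call` match an observed option price, which can be compared with the filter-based historical estimate.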
By:  Vuorenmaa, Tommi (Department of Economics, University of Helsinki)
Abstract:  This paper investigates the dependence of average stock market volatility on the timescale, that is, on the time interval used to measure price changes; this dependence is often referred to as the scaling law. The scaling factor, in turn, refers to the elasticity of the volatility measure with respect to the timescale. This paper studies, in particular, whether the scaling factor differs from the one implied by a simple random walk model and whether it has remained stable over time. It also explores possible underlying reasons for the observed behaviour of volatility in terms of the heterogeneity of stock market players and the periodicity of intraday volatility. The data consist of volatility series of Nokia Oyj at the Helsinki Stock Exchange at five-minute frequency over the period from January 4, 1999 to December 30, 2002. The paper uses wavelet methods to decompose stock market volatility at different timescales. Wavelet methods are particularly well motivated in the present context due to their superior ability to describe local properties of time series. The results are, in general, consistent with multiscaling in Finnish stock markets. Furthermore, the scaling factor and the long-memory parameters of the volatility series are neither constant over time nor consistent with a random walk model. Interestingly, the evidence also suggests that, to a significant extent, the behaviour of volatility is accounted for by an intraday volatility cycle referred to as the New York effect. Long-memory features emerge more clearly in the data over the period around the burst of the IT bubble and may, consequently, be an indication of irrational exuberance on the part of investors.
Keywords:  long-memory; scaling; stock market; volatility; wavelets
JEL:  C14 C22 
Date:  2005–10–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2005_027&r=ecm 
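The scaling law can be illustrated without wavelets (the paper's wavelet decomposition is a refinement of the same idea): aggregate returns over timescale tau, note that var(tau) is proportional to tau^(2H), and read the scaling factor H off a log-log regression. For a simple random walk H = 0.5, the benchmark the paper tests against. A hedged sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated i.i.d. "high-frequency" returns (a random walk's increments),
# standing in for, e.g., five-minute returns.
r1 = rng.normal(0.0, 1.0, 2**16)

# Aggregate over timescales tau and record the volatility at each scale.
taus = np.array([1, 2, 4, 8, 16, 32])
sds = []
for tau in taus:
    agg = r1[: len(r1) // tau * tau].reshape(-1, tau).sum(axis=1)
    sds.append(agg.std())

# log sd(tau) = const + H * log tau  =>  the slope is the scaling factor.
H = np.polyfit(np.log(taus), np.log(sds), 1)[0]
```

Departures of H from 0.5, or instability of H over subsamples, are the kinds of deviations from random-walk scaling the paper documents for the Nokia series.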
By:  Männistö, Hanna-Leena (Bank of Finland Research)
Abstract:  To develop forecasting procedures with a forward-looking dynamic general equilibrium model, we built a small New Keynesian model and calibrated it to euro area data. It was essential in this context that we allowed for long-run growth in GDP. We brought additional asset price equations, based on the expectations hypothesis and the Gordon growth model, into the standard open economy model, in order to extract information on private sector long-run expectations of fundamentals and to combine that information into the macroeconomic forecast. We propose a method of transforming the model for forecasting use in such a way as to match, in an economically meaningful way, the short-term forecast levels, especially of the model's jump variables, to the parameters affecting the long-run trends of the key macroeconomic variables. More specifically, in the model we have used for illustrative purposes, we pinned down the long-run inflation expectations and the domestic and foreign potential growth rates using the model's steady state solution in combination with, by assumption, forward-looking information in up-to-date financial market data. Consequently, our proposed solution preserves consistency with market expectations and results, as a favourable by-product, in forecast paths with no jumps in the first forecast period. Furthermore, no ad hoc recalibration is called for in the proposed forecasting procedures, which is clearly an advantage from the point of view of transparency in communication.
Keywords:  forecasting; New Keynesian model; DSGE model; rational expectations; open economy 
JEL:  E17 E30 E31 F41 
Date:  2005–10–11 
URL:  http://d.repec.org/n?u=RePEc:hhs:bofrdp:2005_021&r=ecm 
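The Gordon growth model mentioned in the abstract is the mechanism that lets asset prices reveal long-run expectations: P = D(1+g)/(r - g) links the price to the dividend, the required return r, and the long-run growth rate g, so observed prices can be inverted for the implied g. A minimal sketch with illustrative numbers (not the paper's calibration):

```python
def gordon_price(D, r, g):
    """Gordon growth model: price of a claim to dividends growing at
    constant rate g, discounted at required return r (requires r > g)."""
    return D * (1.0 + g) / (r - g)

def implied_growth(P, D, r):
    """Invert P = D(1+g)/(r-g) for g:  g = (P*r - D) / (P + D).
    This is how a market price reveals the long-run growth expectation."""
    return (P * r - D) / (P + D)

# Illustrative values: dividend 2, required return 7%, growth 3%.
P = gordon_price(2.0, 0.07, 0.03)    # = 2 * 1.03 / 0.04 = 51.5
g = implied_growth(P, 2.0, 0.07)     # recovers 0.03
```

In the paper's procedure this kind of inversion, applied to up-to-date financial market data, is one of the inputs pinning down the long-run trends in the forecast.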