
on Econometrics 
By:  Jose Olmo 
Abstract:  This paper introduces an estimator for the extremal index as the ratio of the number of elements of two point processes defined by threshold sequences u_n, v_n and a partition of the sequence into blocks of the same size. The first point process is defined by the sequence of block maxima that exceed u_n. This paper introduces a thinning of this point process, defined by a threshold v_n with v_n > u_n, with the appealing property that under some mild conditions the ratio of the number of elements of the two point processes is a consistent estimator of the extremal index. The method supports a hypothesis test for the extremal index, and hence for the existence of clustering in the extreme values. Other advantages are that it allows some freedom in choosing u_n and that it is not very sensitive to the choice of the partition. Finally, the stylized facts found in financial returns (clustering, skewness, heavy tails) are tested via the extremal index, in this case for the DAX returns. 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we051809&r=ecm 
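The ratio construction described in the abstract can be sketched in a few lines. This is only one plausible reading of the estimator: the block size and the threshold pair (u, v) below are illustrative choices, and the paper's conditions on the sequences u_n, v_n govern consistency.

```python
import numpy as np

def extremal_index_ratio(x, block_size, u, v):
    """Sketch of a ratio-of-counts estimator in the spirit of the abstract:
    count block maxima exceeding u (first point process), then those
    exceeding v > u (the thinned process), and return the ratio."""
    assert v > u
    n_blocks = len(x) // block_size
    maxima = np.array([x[i * block_size:(i + 1) * block_size].max()
                       for i in range(n_blocks)])
    n_u = np.sum(maxima > u)   # elements of the first point process
    n_v = np.sum(maxima > v)   # elements of the thinned point process
    return n_v / n_u if n_u > 0 else np.nan
```

Since v > u, the thinned count never exceeds the original one, so the ratio lies in [0, 1], as an extremal index must.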
By:  Sara Lopez-Pintado; Juan Romo 
Abstract:  A recent and highly attractive area of research in statistics is the analysis of functional data. In this paper a new definition of depth for functional observations is introduced, based on the notion of the “half-graph” of a curve. It has computational advantages with respect to other concepts of depth previously proposed. The half-graph depth provides a natural criterion to measure the centrality of a function within a sample of curves. Based on this depth a sample of curves can be ordered from the center outward and L-statistics are defined. The properties of the half-graph depth, such as consistency and uniform convergence, are established. A simulation study shows the robustness of this new definition of depth when the curves are contaminated. Finally, real data examples are analyzed. 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws051603&r=ecm 
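A naive discretized sketch of a half-graph-type depth helps fix ideas. The paper's exact definition and its modified variants may differ; treat this only as an illustration of how a depth orders curves from the center outward.

```python
import numpy as np

def halfgraph_depth(curves):
    """Illustrative half-graph-type depth (an assumption, not the paper's
    exact formula): for each curve, take the minimum of the fraction of
    sample curves lying entirely below it and the fraction lying entirely
    above it.  `curves` is an (n_curves, n_points) array of discretized
    functions on a common grid."""
    n = len(curves)
    depths = np.empty(n)
    for i, x in enumerate(curves):
        below = np.mean(np.all(curves <= x, axis=1))  # curves in the hypograph
        above = np.mean(np.all(curves >= x, axis=1))  # curves in the epigraph
        depths[i] = min(below, above)
    return depths
```

Central curves have many sample curves on each side of their graph and thus get high depth; extreme curves get low depth, which is the ordering used to build L-statistics.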
By:  Olivier SCAILLET (HEC, University of Geneva and FAME) 
Abstract:  We consider a nonparametric method to estimate conditional expected shortfalls, i.e. conditional expected losses knowing that losses are larger than a given loss quantile. We derive the asymptotic properties of kernel estimators of conditional expected shortfalls in the context of a stationary process satisfying strong mixing conditions. An empirical illustration is given for several stock index returns, namely CAC40, DAX30, S&P500, DJI, and Nikkei225. 
Keywords:  Nonparametric; Kernel; Time series; Conditional VaR; Conditional expected shortfall; Risk management; Loss severity distribution 
JEL:  C14 D81 G10 G21 G22 G28 
Date:  2004–05 
URL:  http://d.repec.org/n?u=RePEc:fam:rpseri:rp112&r=ecm 
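The plug-in idea behind a kernel estimator of the conditional expected shortfall can be sketched as follows. The Gaussian kernel, the fixed bandwidth, and the weighted-quantile step are illustrative assumptions, not the paper's exact estimator or its mixing-based asymptotics.

```python
import numpy as np

def kernel_ces(x, y, x0, alpha=0.95, h=0.5):
    """Nadaraya-Watson-style sketch of the conditional expected shortfall
    E[Y | Y > q_alpha(Y | X = x0)]: kernel weights in X, a weighted empirical
    conditional quantile as the VaR, then a weighted tail mean."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    w = w / w.sum()
    # weighted empirical conditional quantile at level alpha (the VaR)
    order = np.argsort(y)
    cum = np.cumsum(w[order])
    q = y[order][np.searchsorted(cum, alpha)]
    tail = y > q
    return np.sum(w[tail] * y[tail]) / np.sum(w[tail])
```

For losses independent of the conditioning variable, the estimate should approach the unconditional expected shortfall (about 2.06 at the 95% level for standard normal losses).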
By:  Olivier Scaillet (HEC, University of Geneva and FAME) 
Abstract:  We consider a consistent test, similar to a Kolmogorov-Smirnov test, of the complete set of restrictions that relate to the copula representation of positive quadrant dependence. For such a test we propose and justify inference relying on a simulation-based multiplier method and a bootstrap method. We also explore the finite sample behaviour of both methods with Monte Carlo experiments. A first empirical illustration is given for US insurance claim data. A second one examines the presence of positive quadrant dependence in life expectancies at birth of males and females among countries. 
Keywords:  Nonparametric; Positive Quadrant Dependence; Copula; Risk Management; Loss Severity Distribution; Bootstrap; Multiplier Method; Empirical Process 
JEL:  C12 D81 G10 G21 G22 
URL:  http://d.repec.org/n?u=RePEc:fam:rpseri:rp128&r=ecm 
By:  Veronika Czellar (Dept. of Econometrics, University of Geneva); G. Andrew Karolyi (Fisher College of Business, Ohio State University); Elvezio Ronchetti (Dept. Econometrics, University of Geneva) 
Abstract:  We introduce Indirect Robust Generalized Method of Moments (IRGMM), a new simulation-based estimation methodology, to model short-term interest rate processes. The primary advantage of IRGMM relative to classical estimators of continuous-time short-rate diffusion processes is that it corrects both the errors due to discretization and the errors due to model misspecification. We apply this new approach to various monthly and weekly Eurocurrency interest rate series. 
Keywords:  GMM and RGMM estimators; CKLS one factor model; indirect inference 
JEL:  G10 G12 C10 C22 C15 C53 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:fam:rpseri:rp135&r=ecm 
By:  Quoreshi, Shahiduzzaman (Department of Economics, Umeå University) 
Abstract:  A bivariate integer-valued moving average (BINMA) model is proposed. The BINMA model allows for both positive and negative correlation between the counts. This model can be seen as an inverse of the conditional duration model in the sense that short durations in a time interval correspond to a large count and vice versa. The conditional mean, variance and covariance of the BINMA model are given. Model extensions to include explanatory variables are suggested. Using the BINMA model for AstraZeneca and Ericsson B, it is found that there is positive correlation between the stock transactions series. Empirically, we find support for the use of long-lag bivariate moving average models for the two series, and explanatory variables are found to have significant effects for both series. 
Keywords:  Count data; Intraday; High frequency; Time series; Estimation; Long memory; Finance 
JEL:  C13 C22 C25 C51 G12 G14 
Date:  2005–04–14 
URL:  http://d.repec.org/n?u=RePEc:hhs:umnees:0655&r=ecm 
By:  Welz, Peter (Department of Economics); Österholm, Pär (Department of Economics) 
Abstract:  This paper contributes to the recent debate about the estimated high partial adjustment coefficient in dynamic Taylor rules, commonly interpreted as deliberate interest rate smoothing on the part of the monetary authority. We argue that a high coefficient on the lagged interest rate term may be a consequence of an incorrectly specified central bank reaction function. Focusing on omitted variables, our Monte Carlo study first confirms the well-known result that all coefficients in the misspecified equation are biased in such cases. In particular, if relevant variables are left out of the estimated equation, a high partial adjustment coefficient is obtained even when it is in fact zero in the data generating process. Misspecification also leads to considerable size distortions in two tests that were recently proposed by English, Nelson, and Sack (2003) in order to distinguish between interest rate smoothing and serially correlated disturbances. Our results question the common interpretation of very slow partial adjustment as interest rate smoothing in estimated dynamic Taylor rules. 
Keywords:  Monetary policy; Taylor rule; Interest rate smoothing; Serially correlated error term; Omitted variables 
JEL:  C12 C15 E52 
Date:  2005–03–31 
URL:  http://d.repec.org/n?u=RePEc:hhs:uunewp:2005_014&r=ecm 
By:  Kazuhiko Hayakawa 
Abstract:  This paper examines analytically and experimentally why the system GMM estimator in dynamic panel data models is less biased than the first differencing or the level estimators even though the former uses more instruments. We find that the bias of the system GMM estimator is a weighted sum of the biases in opposite directions of the first differencing and the level estimator. We also find that an important condition for the system GMM estimator to have small bias is that the variances of the individual effects and the disturbances are almost of the same magnitude. If the variance of individual effects is much larger than that of disturbances, then all GMM estimators are heavily biased. To reduce such biases, we propose biascorrected GMM estimators. On the other hand, if the variance of individual effects is smaller than that of disturbances, the system estimator has a more severe downward bias than the level estimator. 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:hst:hstdps:d0582&r=ecm 
By:  Lancelot F. James; Antonio Lijoi; Igor Pruenster 
Abstract:  One of the main research areas in Bayesian Nonparametrics is the proposal and study of priors which generalize the Dirichlet process. Here we exploit theoretical properties of Poisson random measures in order to provide a comprehensive Bayesian analysis of random probabilities which are obtained by an appropriate normalization. Specifically we achieve explicit and tractable forms of the posterior and the marginal distributions, including an explicit and easily used description of generalizations of the important Blackwell-MacQueen Pólya urn distribution. Such simplifications are achieved by the use of a latent variable which admits quite interesting interpretations and allows one to gain a better understanding of the behaviour of these random probability measures. It is noteworthy that these models are generalizations of models considered by Kingman (1975) in a non-Bayesian context. Such models are known to play a significant role in a variety of applications including genetics, physics, and work involving random mappings and assemblies. Hence our analysis is of utility in those contexts as well. We also show how our results may be applied to Bayesian mixture models and describe computational schemes which are generalizations of known efficient methods for the case of the Dirichlet process. We illustrate new examples of processes which can play the role of priors for Bayesian nonparametric inference and finally point out some interesting connections with the theory of generalized gamma convolutions initiated by Thorin and further developed by Bondesson. 
Keywords:  Bayesian Nonparametrics; Chinese restaurant process; Generalized gamma convolutions; Gibbs partitions; Poisson random measure 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:icr:wpmath:52005&r=ecm 
By:  Nikolaus Hautsch (Institute of Economics, University of Copenhagen) 
Abstract:  In this paper, we propose a framework for the modelling of multivariate dynamic processes which are driven by an unobservable common autoregressive component. Economically motivated by the mixture-of-distribution hypothesis, we model the multivariate intraday trading process of return volatility, volume and trading intensity by a VAR model that is augmented by a joint latent factor serving as a proxy for the unobserved information flow. The model is estimated by simulated maximum likelihood using efficient importance sampling techniques. Analyzing intraday data from the NYSE, we find strong empirical evidence for the existence of an underlying persistent component as an important driving force of the trading process. It is shown that the inclusion of the latent factor clearly improves the goodness-of-fit of the model as well as its dynamic and distributional properties. 
Keywords:  observation vs. parameter driven dynamics; mixture-of-distribution hypothesis; VAR model; efficient importance sampling 
JEL:  C15 C32 C52 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiefr:200503&r=ecm 
By:  Frank T. Denton 
Abstract:  The use of a nonparametrically generated instrumental variable in estimating a single-equation linear parametric model is explored, using kernel and other smoothing functions. The method, termed IVOS (Instrumental Variables Obtained by Smoothing), is applied to the estimation of measurement error and endogenous regressor models. Asymptotic and small-sample properties are investigated by simulation, using artificial data sets. IVOS is easy to apply and the simulation results exhibit good statistical properties. It can be used in situations in which standard IV cannot because suitable instruments are not available. 
Keywords:  single equation models; nonparametric; instrumental variables 
JEL:  C13 C14 C21 
Date:  2005–01 
URL:  http://d.repec.org/n?u=RePEc:mcm:sedapp:124&r=ecm 
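One way to read the IVOS idea is to generate the instrument by kernel-smoothing the troublesome regressor on an exogenous variable, then plug it into the standard IV formula. The following sketch, with an assumed Gaussian kernel and a fixed bandwidth, illustrates that mechanism on a classical measurement error model; the paper's exact construction and bandwidth choice may differ.

```python
import numpy as np

def ivos(y, x, z, h=0.5):
    """Sketch of an IVOS-type estimator: the instrument is the
    Nadaraya-Watson smooth of the regressor x on the exogenous variable z
    (an illustrative reading of the method), followed by the usual
    single-regressor IV formula."""
    d = (z[:, None] - z[None, :]) / h
    k = np.exp(-0.5 * d ** 2)            # Gaussian kernel matrix
    x_hat = (k @ x) / k.sum(axis=1)      # smoothed regressor = instrument
    return (x_hat @ y) / (x_hat @ x)     # simple IV estimator
```

In a classical measurement error design, smoothing averages out the measurement noise, so the smoothed regressor is a valid instrument and the IV slope is close to the true coefficient.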
By:  Ralph D Snyder 
Abstract:  An approach to exponential smoothing that relies on a linear single source of error state space model is outlined. A maximum likelihood method for the estimation of the associated smoothing parameters is developed. Commonly used restrictions on the smoothing parameters are rationalised. Issues surrounding model identification and selection are also considered. It is argued that the proposed revised version of exponential smoothing provides a better framework for forecasting than either the Box-Jenkins or the traditional multi-disturbance state space approaches. 
Keywords:  Time Series Analysis, Prediction, Exponential Smoothing, ARIMA Models, Kalman Filter, State Space Models 
JEL:  C22 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20055&r=ecm 
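The single source of error formulation makes the likelihood easy to concentrate: for the local level model, simple exponential smoothing delivers the one-step errors, and the Gaussian likelihood is maximized by minimizing their sum of squares. A minimal sketch, with a grid search standing in for a proper optimizer:

```python
import numpy as np

def ses_sse(alpha, y):
    """One-step-ahead squared errors of simple exponential smoothing,
    written as the single-source-of-error local level model:
        y_t = l_{t-1} + e_t,    l_t = l_{t-1} + alpha * e_t."""
    level = y[0]
    sse = 0.0
    for yt in y[1:]:
        e = yt - level      # one-step forecast error
        sse += e * e
        level += alpha * e  # state update driven by the same error
    return sse

def fit_ses(y):
    """Concentrated Gaussian ML estimate of the smoothing parameter over
    the conventional range [0, 1] (grid search for simplicity)."""
    grid = np.linspace(0.0, 1.0, 101)
    return grid[int(np.argmin([ses_sse(a, y) for a in grid]))]
```

Because the state and the observation share one error, the filter above is exact rather than an approximation, which is what gives the SSOE scheme its transparent likelihood.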
By:  Baki Billah; Maxwell L King; Ralph D Snyder; Anne B Koehler 
Abstract:  Applications of exponential smoothing to forecast time series usually rely on three basic methods: simple exponential smoothing, trend corrected exponential smoothing and a seasonal variation thereof. A common approach to selecting the method appropriate to a particular time series is based on prediction validation on a withheld part of the sample, using criteria such as the mean absolute percentage error. A second approach is to rely on the most general of the three methods. For annual series this is trend corrected exponential smoothing; for sub-annual series it is the seasonal adaptation of trend corrected exponential smoothing. The rationale for this approach is that a general method automatically collapses to its nested counterparts when the pertinent conditions hold in the data. A third approach may be based on an information criterion when maximum likelihood methods are used in conjunction with exponential smoothing to estimate the smoothing parameters. In this paper, such approaches for selecting the appropriate forecasting method are compared in a simulation study. They are also compared on real time series from the M3 forecasting competition. The results indicate that the information criterion approach appears to provide the best basis for an automated approach to method selection, provided that it is based on Akaike's information criterion. 
Keywords:  Model Selection; Exponential Smoothing; Information Criteria; Prediction; Forecast Validation 
JEL:  C22 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20056&r=ecm 
By:  J Keith Ord; Ralph D Snyder; Anne B Koehler; Rob J Hyndman; Mark Leeds 
Abstract:  The state space approach to modelling univariate time series is now widely used both in theory and in applications. However, the very richness of the framework means that quite different model formulations are possible, even when they purport to describe the same phenomena. In this paper, we examine the single source of error [SSOE] scheme, which has perfectly correlated error components. We then proceed to compare SSOE to the more common version of the state space models, for which all the error terms are independent; we refer to this as the multiple source of error [MSOE] scheme. As expected, there are many similarities between the MSOE and SSOE schemes, but also some important differences. Both have ARIMA models as their reduced forms, although the mapping is more transparent for SSOE. Further, SSOE does not require a canonical form to complete its specification. An appealing feature of SSOE is that the estimates of the state variables converge in probability to their true values, thereby leading to a formal inferential structure for the ad hoc exponential smoothing methods of forecasting. The parameter space for SSOE models may be specified to match that of the corresponding ARIMA scheme, or it may be restricted to meaningful subspaces, as for MSOE but with somewhat different outcomes. The SSOE formulation enables straightforward extensions to certain classes of nonlinear models, including the linear trend with multiplicative seasonals version that underlies the Holt-Winters forecasting method. Conditionally heteroscedastic models may be developed in a similar manner. Finally we note that smoothing and decomposition, two crucial practical issues, may be performed within the SSOE framework. 
Keywords:  ARIMA, Dynamic Linear Models, Equivalence, Exponential Smoothing, Forecasting, GARCH, Holt's Method, Holt-Winters Method, Kalman Filter, Prediction Intervals. 
JEL:  C22 C53 C51 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20057&r=ecm 
By:  DUFOUR, Jean-Marie 
Abstract:  The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete, test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter). 
Keywords:  Monte Carlo test ; maximized Monte Carlo test ; finite sample test ; exact test ; nuisance parameter ; bounds ; bootstrap ; parametric bootstrap ; simulated annealing ; asymptotics ; nonstandard asymptotic distribution. 
JEL:  C12 C15 C2 C52 C22 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:mtl:montde:200503&r=ecm 
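The core Monte Carlo test device for a pivotal statistic is compact enough to state in code. This sketch omits the randomized tie-breaking for discrete statistics and the nuisance-parameter maximization (MMC) that the paper develops.

```python
import numpy as np

def mc_test_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Monte Carlo test p-value in the Dwass/Barnard sense: for a pivotal
    continuous statistic, p = (1 + #{simulated >= observed}) / (n_rep + 1)
    yields an exact test.  `simulate_stat(rng)` draws one statistic under
    the null."""
    rng = rng or np.random.default_rng(0)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)
```

With n_rep = 99, rejecting when p <= 0.05 gives an exact 5% test, because under the null the observed statistic is exchangeable with the 99 simulated ones.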
By:  BEAULIEU, Marie-Claude; DUFOUR, Jean-Marie; KHALAF, Lynda 
Abstract:  In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, from which exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (5-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis. 
Keywords:  capital asset pricing model ; mean-variance efficiency ; nonnormality ; multivariate linear regression ; stable distribution ; skewness ; kurtosis ; asymmetry ; uniform linear hypothesis ; exact test ; Monte Carlo test ; nuisance parameter ; specification test ; diagnostics. 
JEL:  C3 C12 C33 C15 G1 G12 G14 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:mtl:montde:200504&r=ecm 
By:  DUFOUR, Jean-Marie; FARHAT, Abdeljelil; HALLIN, Marc 
Abstract:  We consider the problem of testing whether the observations X1, ..., Xn of a time series are independent with unspecified (possibly nonidentical) distributions symmetric about a common known median. Various bounds on the distributions of serial correlation coefficients are proposed: exponential bounds, Eaton-type bounds, Chebyshev bounds and Berry-Esséen-Zolotarev bounds. The bounds are exact in finite samples, distribution-free and easy to compute. The performance of the bounds is evaluated and compared with traditional serial dependence tests in a simulation experiment. The procedures proposed are applied to U.S. data on interest rates (commercial paper rate). 
Keywords:  autocorrelation ; serial dependence ; nonparametric test ; distribution-free test ; heterogeneity ; heteroskedasticity ; symmetric distribution ; robustness ; exact test ; bound ; exponential bound ; large deviations ; Chebyshev inequality ; Berry-Esséen ; interest rates. 
JEL:  C14 C22 C12 C32 E4 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:mtl:montde:200505&r=ecm 
By:  DUFOUR, Jean-Marie; JOUINI, Tarek 
Abstract:  In this paper, we study the asymptotic distribution of a simple two-stage (Hannan-Rissanen-type) linear estimator for stationary invertible vector autoregressive moving average (VARMA) models in the echelon form representation. General conditions for consistency and asymptotic normality are given. A consistent estimator of the asymptotic covariance matrix of the estimator is also provided, so that tests and confidence intervals can easily be constructed. 
Keywords:  Time series ; VARMA ; stationary ; invertible ; echelon form ; estimation ; asymptotic normality ; bootstrap ; Hannan-Rissanen 
JEL:  C3 C32 C53 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:mtl:montde:200509&r=ecm 
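The two-stage idea is easiest to see in the scalar ARMA case, which the following sketch implements; the paper's echelon-form VARMA version generalizes the same pair of regressions.

```python
import numpy as np

def hannan_rissanen(y, p=1, q=1, p_long=10):
    """Two-stage (Hannan-Rissanen-type) linear estimation of a univariate
    ARMA(p, q) -- the scalar analogue of the paper's VARMA procedure.
    Stage 1: a long autoregression proxies the unobserved innovations.
    Stage 2: OLS of y_t on its own lags and lagged stage-1 residuals."""
    n = len(y)
    # stage 1: AR(p_long) by OLS
    X1 = np.column_stack([y[p_long - k: n - k] for k in range(1, p_long + 1)])
    y1 = y[p_long:]
    phi_long = np.linalg.lstsq(X1, y1, rcond=None)[0]
    e = np.zeros(n)
    e[p_long:] = y1 - X1 @ phi_long        # innovation proxies
    # stage 2: regress on p lags of y and q lags of the residuals
    start = p_long + max(p, q)
    X2 = np.column_stack(
        [y[start - k: n - k] for k in range(1, p + 1)]
        + [e[start - k: n - k] for k in range(1, q + 1)])
    coef = np.linalg.lstsq(X2, y[start:], rcond=None)[0]
    return coef[:p], coef[p:]              # AR part, MA part
```

Because both stages are linear regressions, the estimator avoids nonlinear optimization entirely, which is what makes its asymptotic covariance straightforward to estimate.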
By:  Barbara Rossi (Duke University) 
Abstract:  Many authors have documented that it is challenging to explain exchange rate fluctuations with macroeconomic fundamentals: a random walk forecasts future exchange rates better than existing macroeconomic models. This paper applies newly developed tests for nested models that are robust to the presence of parameter instability. The empirical evidence shows that for some countries we can reject the hypothesis that exchange rates are random walks. This raises the possibility that economic models were previously rejected not because the fundamentals are completely unrelated to exchange rate fluctuations, but because the relationship is unstable over time and, thus, difficult to capture by Granger causality tests or by forecast comparisons. We also analyze forecasts that exploit the time variation in the parameters and find that, in some cases, they can improve over the random walk. 
Keywords:  forecasting, exchange rates, parameter instability, random walks 
JEL:  C52 C53 F3 
Date:  2005–03–19 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpda:0503001&r=ecm 
By:  Javier Escobal (GRADE); Sonia Laszlo (McGill University) 
Abstract:  Studies in the microeconometric literature increasingly utilize distance to or time to reach markets or social services as determinants of economic issues. These studies typically use selfreported measures from survey data, often characterized by nonclassical measurement error. This paper is the first validation study of access to markets data. New and unique data from Peru allow comparison of selfreported variables with scientifically calculated variables. We investigate the determinants of the deviation between imputed and selfreported data and show that it is nonclassical and dependent on observable socioeconomic variables. Our results suggest that studies using selfreported measures of access may be estimating biased effects. 
JEL:  O P 
Date:  2005–03–29 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpdc:0503008&r=ecm 
By:  Dubois (Ministère de l'Economie, des Finances et de l'Industrie, Paris, France) 
Abstract:  Grocer is an econometric toolbox for Scilab, a free open-source matrix-oriented package similar to Matlab and Gauss. It contains more than 50 different econometric methods, with many variants and extensions. Most standard econometric tools are available. Grocer also contains two original econometric tools: a function allowing the 'automatic' estimation of the 'true' model starting from a more general and bigger one, a method that provides the most thorough expression of the so-called LSE econometric methodology; and a function calculating the contributions of exogenous variables to an endogenous one. 
Keywords:  econometric software, estimation, general to specific, contributions 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–01–21 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0501014&r=ecm 
By:  Eric Hillebrand (Louisiana State University, Department of Economics) 
Abstract:  Apart from the well-known high persistence of daily financial volatility data, there is also a short correlation structure that reverts to the mean in less than a month. We find this short correlation time scale in six different daily financial time series and use it to improve the short-term forecasts from GARCH models. We study different generalizations of GARCH that allow for several time scales. On our holding sample, none of the considered models can fully exploit the information contained in the short scale. Wavelet analysis shows a correlation between fluctuations on long and on short scales. Models accounting for this correlation, as well as long memory models for absolute returns, appear to be promising. 
Keywords:  GARCH, volatility persistence, spurious high persistence, long memory, fractional integration, changepoints, wavelets, time scales 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–01–31 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0501015&r=ecm 
By:  Edgar L. Feige (University of WisconsinMadison); Harold W. Watts (University of WisconsinMadison) 
Abstract:  A proposal for maintaining privacy protection in large databases by the use of partially aggregated data instead of the original individual data. Proper micro aggregation techniques can serve to protect the confidential nature of the individual data with minimal information loss. Reference: Data Bases, Computers and the Social Sciences, R. Bisco (ed.), Wiley, 1970, pp. 261-272. 
Keywords:  Privacy, data protection, microaggregation 
JEL:  C43 C82 C88 
Date:  2005–02–03 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502001&r=ecm 
By:  Stanislav Radchenko (UNC at Charlotte) 
Abstract:  This paper constructs long-term forecasts of energy prices using a reduced form model of shifting trends developed by Pindyck (1999). A Gibbs sampling algorithm is developed to estimate models with a shifting trend line, which are used to construct 10-period-ahead and 15-period-ahead forecasts. An advantage of forecasts from this model is that they are not greatly influenced by the presence of large, long-lived increases and decreases in energy prices. The forecasts from the shifting trends model are combined with forecasts from the random walk model and the autoregressive model to substantially decrease the mean squared forecast error compared to each individual model. 
Keywords:  energy forecasting, oil price, coal price, natural gas price, shifting trends model, long-term forecasting 
JEL:  C53 
Date:  2005–02–04 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502002&r=ecm 
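The combination step at the end of the abstract is simple to illustrate. The paper combines forecasts from the shifting trends, random walk, and autoregressive models; this sketch uses equal weights, an assumption in place of whatever weighting scheme the paper adopts.

```python
import numpy as np

def combined_mse(actual, *forecast_sets):
    """Equal-weight forecast combination: average the competing forecast
    paths pointwise and report the mean squared error of the combination.
    When individual forecasts carry offsetting biases or weakly correlated
    errors, the combined MSE falls below each individual MSE."""
    combo = np.mean(forecast_sets, axis=0)
    return np.mean((actual - combo) ** 2)
```

The gain is largest when the models err in different directions, which is exactly the situation the abstract describes for trend-based versus random walk forecasts of energy prices.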
By:  Rafal Weron (Hugo Steinhaus Center); Adam Misiorek (Institute of Power Systems Automation) 
Abstract:  In this paper we study two statistical approaches to load forecasting. Both of them model electricity load as a sum of two components – a deterministic component (representing seasonalities) and a stochastic component (representing noise). They differ in the choice of the seasonality reduction method. Model A utilizes differencing, while Model B uses a recently developed seasonal volatility technique. In both models the stochastic component is described by an ARMA time series. The models are tested on a time series of system-wide loads from the California power market and compared with the official forecast of the California System Operator (CAISO). 
Keywords:  Electricity, load forecasting, ARMA model, seasonal component 
JEL:  C22 C53 L94 Q40 
Date:  2005–02–07 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502004&r=ecm 
By:  Ewa BroszkiewiczSuwaj (Wroclaw University of Technology); Andrzej Makagon (Hampton University); Rafal Weron (Hugo Steinhaus Center); Agnieszka Wylomanska (Wroclaw University of Technology) 
Abstract:  For many economic problems standard statistical analysis, based on the notion of stationarity, is not adequate. These include modeling seasonal decisions of consumers, forecasting business cycles and, as we show in the present article, modeling wholesale power market prices. We apply standard methods and a novel spectral domain technique to conclude that electricity price returns exhibit periodic correlation with daily and weekly periods. As such they should be modeled with periodically correlated processes. We propose to apply periodic autoregression (PAR) models, which are closely related to standard instruments in econometric analysis: vector autoregression (VAR) models. 
Keywords:  periodic correlation, sample coherence, electricity price, periodic autoregression, vector autoregression 
JEL:  C22 C32 L94 Q40 
Date:  2005–02–07 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502006&r=ecm 
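A periodic autoregression simply lets the AR coefficients depend on the season of the observation (hour of day, day of week, etc.). A minimal PAR(1) sketch, fitted season by season, illustrates the idea; the paper's models and the seasonal structure of power prices are of course richer.

```python
import numpy as np

def fit_par(y, period):
    """Periodic autoregression of order 1: a separate AR(1) coefficient
    phi_s for each season s, y_t = phi_{t mod period} * y_{t-1} + eps_t,
    estimated by season-by-season OLS."""
    t = np.arange(1, len(y))
    phis = np.empty(period)
    for s in range(period):
        idx = t[t % period == s]          # times belonging to season s
        x, z = y[idx - 1], y[idx]
        phis[s] = (x @ z) / (x @ x)       # single-regressor OLS slope
    return phis
```

Stacking the seasons turns a PAR into a VAR for the vector of within-cycle observations, which is the link to standard econometric tools that the abstract mentions.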
By:  Bragoudakis Zacharias (Bank of Greece) 
Abstract:  This paper is an exercise in applied macroeconomic forecasting. We examine the forecasting power of a vector error-correction model (VECM) that is anchored by a long-run equilibrium relationship between Greek national income and productive public expenditure, as suggested by economic theory. We compare the estimated forecasting values of the endogenous variables to the real historical values using a stochastic simulation analysis. The simulation results provide new evidence supporting the ability of the model to forecast not only one period ahead but also many periods into the future. 
Keywords:  Cointegration, Forecasting, Simulation Analysis, Vector error correction models 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–02–09 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502007&r=ecm 
By:  Kusum Mundra (San Diego State University) 
Abstract:  In panel data the interest is often in slope estimation while taking account of unobserved cross-sectional heterogeneity. This paper proposes two nonparametric slope estimators in which the unobserved effect is treated as fixed across the cross section. The first estimator uses the first-differencing transformation and the second uses the mean deviation transformation. The asymptotic properties of the two estimators are established and their finite sample Monte Carlo properties are investigated, allowing for systematic dependence between the cross-sectional effect and the independent variable. Simulation results suggest that the new nonparametric estimators perform better than their parametric counterparts. We also investigate the finite sample properties of the parametric within and first-differencing estimators. A very common practice in estimating earnings functions is to assume earnings to be quadratic in age and tenure, but that functional form may be misspecified. In this paper we estimate the nonparametric slope effects of age and tenure on earnings using NLSY data and compare them to the parametric (quadratic) effects. 
Keywords:  Nonparametric, Fixed-effect, Kernel, Monte Carlo 
JEL:  C1 C14 C23 C15 
Date:  2005–02–09 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502008&r=ecm 
By:  Costas Milas (City University); Phil Rothman (East Carolina University) 
Abstract:  In this paper we use smooth transition vector error-correction models (STVECMs) in a simulated out-of-sample forecasting experiment for the unemployment rates of the four non-Euro G7 countries: the U.S., U.K., Canada, and Japan. For the U.S., pooled forecasts constructed by taking the median value across the point forecasts generated by the STVECMs perform better than the linear VECM benchmark, more so during business cycle expansions. Pooling across the linear and nonlinear forecasts tends to lead to statistically significant forecast improvement during business cycle expansions for Canada, while the opposite is the case for the U.K. 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–02–18 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502010&r=ecm 
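The median pooling scheme described above is simple to state in code (the forecast values are made up for illustration, not taken from the paper):

```python
from statistics import median

# Point forecasts of next month's unemployment rate from several
# hypothetical STVECM specifications, plus one badly behaved model.
forecasts = [5.1, 5.3, 5.2, 5.0, 9.7]

pooled = median(forecasts)                  # robust to the outlying forecast
mean_pool = sum(forecasts) / len(forecasts) # the mean is dragged upward
```

The median is unaffected by the single wild forecast, which is one reason median pooling across nonlinear specifications can be attractive.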
By:  Edoardo Otranto (DEIR, Università di Sassari) 
Abstract:  The extraction of a common signal from a group of time series is generally obtained using variables recorded at the same frequency or transformed to have the same frequency (monthly, quarterly, etc.). The statistical literature has not paid a great deal of attention to this topic. In this paper we extend an approach based on the use of dummy variables to the well-known trend-plus-cycle model, in a multivariate context, using both quarterly and monthly data. This procedure is applied to the Italian economy, using the variables suggested by an Italian institution (ISAE) to provide a national dating. 
Keywords:  Business cycle; State-space; Time Series; Trend; Turning Points 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–02–18 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502011&r=ecm 
By:  Victor Aguirregabiria (Boston University); Pedro Mira (CEMFI) 
Abstract:  This paper proposes an algorithm to obtain maximum likelihood estimates of structural parameters in discrete games with multiple equilibria. The method combines a genetic algorithm (GA) with a pseudo maximum likelihood (PML) procedure. The GA searches efficiently over the huge space of possible combinations of equilibria in the data. The PML procedure avoids the repeated computation of equilibria for each trial value of the parameters of interest. To test the ability of this method to get maximum likelihood estimates, we present a Monte Carlo experiment in the context of a game of price competition and collusion. 
Keywords:  Empirical games; Maximum likelihood estimation; Multiple equilibria; Genetic algorithms. 
JEL:  C13 C35 
Date:  2005–02–28 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502017&r=ecm 
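The GA component can be sketched generically. The code below is a minimal genetic algorithm over binary strings, where each bit could stand for which of two candidate equilibria a market plays; the fitness function is a stand-in for the pseudo likelihood, not the authors' actual objective:

```python
import random

random.seed(1)
M = 12  # markets; each bit selects one of two candidate equilibria

def fitness(chrom, target=(1, 0) * 6):
    # Stand-in for a pseudo maximum likelihood value: rewards matching a
    # fixed "true" equilibrium selection (hypothetical, for illustration).
    return sum(c == t for c, t in zip(chrom, target))

def evolve(pop_size=30, gens=40, pmut=0.05):
    pop = [[random.randint(0, 1) for _ in range(M)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, M)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pmut else g for g in child]
            children.append(child)            # mutation applied bit by bit
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

The point of the sketch is only the search structure (selection, crossover, mutation) that lets a GA explore the combinatorially huge space of equilibrium assignments without enumerating it.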
By:  Vadim Marmer (Yale University) 
Abstract:  Various implications of nonlinearity, nonstationarity and misspecification are considered from a forecasting perspective. My model allows for small departures from the martingale difference sequence hypothesis by including an additive nonlinear component, formulated as a general, integrable transformation of the predictor, which is assumed to be I(1). Such a generating mechanism provides for predictability only in the extremely short run. In the stock market example, this formulation corresponds to a situation where some relevant information may escape the attention of market participants only for very short periods of time. I assume that the true generating mechanism involving the nonlinear dependency is unknown to the econometrician, who is therefore forced to use approximating functions. I show that the usual regression techniques lead to spurious forecasts. Improvements in forecast accuracy are possible with properly chosen integrable approximating functions. This paper derives the limiting distribution of the forecast MSE. In the case of square integrable approximants, it depends on the $L_{2}$-distance between the nonlinear component and the approximating function. Optimal forecasts are available for a given class of approximants. Finally, I present a Monte Carlo simulation study and an empirical example in support of the theoretical findings. 
Keywords:  forecasting, integrated time series, misspecified models, nonlinear transformations, stock returns, dividend-price ratio. 
JEL:  C22 C53 G14 
Date:  2005–03–05 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503002&r=ecm 
By:  Matteo M. Pelagatti (University of Milan-Bicocca); Stefania Rondena (University of Milan-Bicocca) 
Abstract:  The Dynamic Conditional Correlation model of Engle has made the estimation of multivariate GARCH models feasible for reasonably large vectors of securities’ returns. In the present paper we show how Engle’s two-step estimation of the model can easily be extended to elliptical conditional distributions, and we apply different leptokurtic DCC models to some stocks listed on the Milan Stock Exchange. Free software written by the authors to carry out all the required computations is presented as well. 
Keywords:  Multivariate GARCH, Dynamic conditional correlation, Generalized method of moments 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–03–11 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503007&r=ecm 
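The correlation recursion at the heart of the DCC model can be sketched for two assets. This is the standard scalar DCC update, not the authors' elliptical extension, and the parameter values and unconditional correlation matrix are illustrative:

```python
import math, random

random.seed(2)
a, b = 0.05, 0.90                 # DCC parameters (a + b < 1)
S = [[1.0, 0.3], [0.3, 1.0]]      # unconditional correlation of standardized returns
Q = [row[:] for row in S]         # initialize Q_0 at S

def dcc_step(Q, eps):
    """One DCC update: Q_t = (1-a-b)*S + a*eps eps' + b*Q_{t-1}."""
    newQ = [[(1 - a - b) * S[i][j] + a * eps[i] * eps[j] + b * Q[i][j]
             for j in range(2)] for i in range(2)]
    # Conditional correlation: rescale Q_t to unit diagonal.
    r = newQ[0][1] / math.sqrt(newQ[0][0] * newQ[1][1])
    return newQ, r

corrs = []
for _ in range(500):
    eps = [random.gauss(0, 1), random.gauss(0, 1)]  # standardized residuals
    Q, r = dcc_step(Q, eps)
    corrs.append(r)
```

Because each Q_t is a positively weighted sum of positive definite and positive semidefinite matrices, the implied conditional correlations always stay strictly inside (-1, 1).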
By:  Matteo M. Pelagatti (University of Milan-Bicocca) 
Abstract:  Duration-dependent Markov-switching VAR (DDMSVAR) models are time series models whose data-generating process is a mixture of two VAR processes, switching according to a two-state Markov chain with transition probabilities that depend on how long the process has been in a state. In the present paper I propose an MCMC-based methodology to carry out inference on the model's parameters and introduce DDMSVAR for Ox, software written by the author for the analysis of time series by means of DDMSVAR models. An application of the methodology to the U.S. business cycle concludes the article. 
Keywords:  Markov-switching, Business cycle, Gibbs sampling, Duration dependence, Vector autoregression 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–03–11 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503008&r=ecm 
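The duration-dependent switching mechanism can be simulated directly. The logistic form and the coefficients below are illustrative choices, not estimates from the paper:

```python
import math, random

random.seed(3)

def stay_prob(state, duration):
    # Duration-dependent transition probability: the longer the chain has
    # been in a state, the lower the probability of staying there
    # (logistic form; coefficients are hypothetical).
    a = 3.0 if state == 0 else 2.5
    return 1.0 / (1.0 + math.exp(-(a - 0.15 * duration)))

states, durations = [0], [1]
for _ in range(999):
    s, d = states[-1], durations[-1]
    if random.random() < stay_prob(s, d):
        states.append(s); durations.append(d + 1)   # stay, duration grows
    else:
        states.append(1 - s); durations.append(1)   # switch, duration resets
```

Unlike a plain Markov chain, regimes here become progressively more likely to end as they age, which is the feature DDMSVAR models exploit for business cycle phases.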
By:  Amjad D. AlNasser 
Abstract:  This paper presents the methodology of the Generalised Maximum Entropy (GME) approach for estimating linear models that contain latent variables, such as customer satisfaction measurement models. The GME approach is a distribution-free method and provides a better alternative to the conventional method, namely Partial Least Squares (PLS), which is used in the context of customer satisfaction measurement. A simplified version of the model used for the Swedish customer satisfaction index (CSI) has been used to generate simulated data in order to study the performance of GME and PLS. The results show that GME outperforms PLS in terms of mean square error (MSE). The simulated data are also used to compute the CSI via the GME approach. 
Keywords:  Generalised Maximum Entropy, Partial Least Squares, Customer Satisfaction Models. 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–03–10 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503013&r=ecm 
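The entropy-maximization step that underlies GME can be sketched in its simplest form: recovering probabilities on a set of support points subject to a single moment constraint. The support points and target moment below are arbitrary, and this sketch omits the full GME reparameterization of model coefficients:

```python
import math

# GME reparameterizes each unknown as a convex combination of support
# points z_k with probabilities p_k.  Minimal sketch: find the maximum
# entropy p_k subject to sum_k p_k * z_k = m.
z = [-2.0, -1.0, 0.0, 1.0, 2.0]
m = 0.5

# The maximum entropy solution is exponential: p_k proportional to
# exp(lam * z_k); solve for lam by Newton on the moment condition.
lam = 0.0
for _ in range(100):
    w = [math.exp(lam * zk) for zk in z]
    tot = sum(w)
    mean = sum(wk * zk for wk, zk in zip(w, z)) / tot
    var = sum(wk * zk * zk for wk, zk in zip(w, z)) / tot - mean ** 2
    lam += (m - mean) / var          # Newton step on mean(lam) = m

p = [math.exp(lam * zk) for zk in z]
tot = sum(p)
p = [pk / tot for pk in p]           # normalized probabilities
```

Among all probability vectors consistent with the moment constraint, this exponential-family solution is the one that imposes the least additional structure, which is the sense in which GME is distribution-free.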
By:  Ozgen Sayginsoy (University at AlbanySUNY) 
Abstract:  In this paper, a likelihood ratio approach is taken to derive a test of the economic convergence hypothesis in the context of the linear deterministic trend model. The test is designed to directly address the nonstandard nature of the hypothesis, and it is a systematic improvement over existing methods for testing convergence in the same context. The test is first derived under the assumption of Gaussian errors with known serial correlation. The normality assumption is then relaxed, and the results are naturally extended to the case of covariance stationary errors with unknown serial correlation. The test statistic is a continuous function of individual t-statistics on the intercept and slope parameters of the linear deterministic trend model, and therefore standard heteroskedasticity and autocorrelation consistent (HAC) estimators of the long-run variance can be directly implemented. Building upon the likelihood ratio framework, concrete and specific tests are recommended for use in practice. The recommended tests do not require knowledge of the form of serial correlation in the data, and they are robust to highly persistent serial correlation, including the case of a unit root in the errors. The recommended tests utilize nonparametric kernel variance estimators, which are analyzed using the fixed-bandwidth (fixed-b) asymptotic framework recently proposed by Kiefer and Vogelsang (2003). The fixed-b framework makes possible the choice of kernel and bandwidth that delivers tests with maximal asymptotic power within a specific class of tests. It is shown that when the Daniell kernel variance estimator is implemented with specific bandwidth choices, the recommended tests have asymptotic power close to that of the known-variance case, as well as good finite-sample size and power properties. 
Finally, the newly developed tests are used to investigate economic convergence among eight regions of the United States (as defined by the Bureau of Economic Analysis) in the post-World War II period. Empirical evidence is found for convergence in three of the eight regions. 
Keywords:  Likelihood Ratio, Joint Inequality, HAC Estimator, Fixed-b Asymptotics, Power Envelope, Unit Root, Linear Trend, BEA Regions. 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–03–11 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503014&r=ecm 
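A Daniell-kernel long-run variance estimator of the kind the recommended tests build on can be sketched as follows (the bandwidth fraction b = 0.1 is an arbitrary choice here, not the paper's power-maximizing value):

```python
import math, random

random.seed(4)

def daniell(x):
    # Daniell (sinc) kernel: k(x) = sin(pi*x) / (pi*x), with k(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lrv_daniell(y, b=0.1):
    """Nonparametric long-run variance with the Daniell kernel and
    fixed-b style bandwidth M = b*n."""
    n = len(y)
    ybar = sum(y) / n
    e = [v - ybar for v in y]
    M = b * n

    def gamma(j):          # sample autocovariance at lag j
        return sum(e[t] * e[t - j] for t in range(j, n)) / n

    return gamma(0) + 2.0 * sum(daniell(j / M) * gamma(j) for j in range(1, n))

y = [random.gauss(0, 1) for _ in range(400)]
omega2 = lrv_daniell(y)    # close to 1 for iid N(0,1) data
```

Because the Daniell kernel has a nonnegative Fourier transform, the estimate is a nonnegative quadratic form, so it can be plugged into t-statistics without sign problems.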
By:  Jonathan B. Hill (Florida International University) 
Abstract:  In this paper, we develop a parametric test procedure for multiple-horizon "Granger" causality and apply the procedure to the well-established problem of determining causal patterns in aggregate monthly U.S. money and output. As opposed to most papers in the parametric causality literature, we are interested in whether money ever "causes" (can ever be used to forecast) output, when causation occurs, and how (through which causal chains). For brevity, we consider only causal patterns up to horizon h = 3. Our tests are based on new recursive parametric characterizations of causality chains which help to distinguish between mere noncausation (the total absence of indirect causal routes) and causal neutralization, in which several causal routes exist that cancel each other out such that noncausation occurs. In many cases the recursive characterizations imply greatly simplified linear compound hypotheses for multi-step-ahead causation, and permit Wald tests with the usual asymptotic χ²-distribution. A simulation study demonstrates that a sequential test method does not generate the type of size distortions typically reported in the literature, and null rejection frequencies depend entirely on how we define the "null hypothesis" of noncausality (at which horizon, if any). Using monthly data employed in Stock and Watson (1989), and others, we demonstrate that while Friedman and Kuttner's (1993) result that detrended money growth fails to cause output one month ahead continues into the third quarter of 2003, a significant causal lag may exist through a variety of short-term interest rates: money appears to cause output after at least one month passes, although in some cases using recent data conflicting evidence suggests money may never cause output and be truly irrelevant in matters of real decisions. 
Keywords:  multiple horizon causation; multivariate time series; sequential tests. 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–03–15 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503016&r=ecm 
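The distinction between one-step noncausation and causation through an indirect chain can be seen in a trivariate VAR(1); the coefficients below are made up purely for illustration:

```python
# Trivariate VAR(1) with a purely indirect causal chain
# money -> interest rate -> output.  Variables ordered (money, rate, output).
A = [[0.5, 0.0, 0.0],
     [0.4, 0.3, 0.0],   # the rate responds to money
     [0.0, 0.6, 0.2]]   # output responds to the rate, not directly to money

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A2 = matmul(A, A)  # coefficient matrix of the 2-step-ahead predictor
# A[2][0] == 0: money does not cause output at horizon 1.
# A2[2][0] = 0.6 * 0.4 = 0.24 != 0: money causes output at horizon 2,
# entirely through the interest-rate channel.
```

This is the kind of causal chain the paper's recursive characterizations are built to detect, and also the structure in which routes could in principle cancel (causal neutralization) if signs were arranged to offset.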
By:  Patrick Crowley (Texas A&M University  Corpus Christi) 
Abstract:  Wavelet analysis, although used extensively in disciplines such as signal processing, engineering, medical sciences, physics and astronomy, has not yet fully entered the economics discipline. In this discussion paper, wavelet analysis is introduced in an intuitive manner, and the existing economics and finance literature that utilises wavelets is explored. Extensive examples of exploratory wavelet analysis are given, many using Canadian, US and Finnish industrial production data. Finally, potential future applications for wavelet analysis in economics are also discussed and explored. 
Keywords:  statistical methodology, multiresolution analysis, wavelets, business cycles, economic growth 
JEL:  C19 C87 E32 
Date:  2005–03–17 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503017&r=ecm 
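A single level of the simplest multiresolution analysis, the Haar transform, can be written in a few lines (a textbook sketch, not tied to the industrial production applications in the paper):

```python
def haar_step(x):
    """One level of the Haar transform: pairwise averages (smooth part)
    and pairwise differences (detail part), scaled to be orthonormal."""
    s = 2 ** 0.5
    smooth = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return smooth, detail

def haar_inverse(smooth, detail):
    """Exact reconstruction from one level of the Haar transform."""
    s = 2 ** 0.5
    x = []
    for a, d in zip(smooth, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

series = [3.0, 1.0, 0.0, 4.0, 8.0, 6.0, 9.0, 9.0]
smooth, detail = haar_step(series)
rebuilt = haar_inverse(smooth, detail)
```

Iterating `haar_step` on the smooth part yields coarser and coarser approximations while the detail coefficients isolate fluctuations at each scale, which is what makes wavelets natural for separating business cycle frequencies from noise.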
By:  Marie Bessec (EURIsCO  University Paris Dauphine); Othman Bouabdallah (EUREQua  University Paris Panthéon Sorbonne) 
Abstract:  This paper explores the forecasting abilities of Markov-switching models. Although MS models generally display a superior in-sample fit relative to linear models, the gain in prediction remains small. We confirm this result using simulated data for a wide range of specifications by applying several tests of forecast accuracy and forecast encompassing that are robust to nested models. In order to explain this poor performance, we use a forecast-error decomposition. We identify four components and derive their analytical expressions in different MS specifications. The relative contribution of each source is assessed through Monte Carlo simulations. We find that the main source of error is the misclassification of future regimes. 
Keywords:  Forecasting, Regime Shifts, Markov-Switching. 
JEL:  C22 C32 C53 
Date:  2005–03–22 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503018&r=ecm 
By:  ChingKang Ing (Institute of Statistical Science, Academia Sinica) 
Abstract:  The predictive capability of a modification of Rissanen's accumulated prediction error (APE) criterion, APE$_{\delta_{n}}$, is investigated in infinite-order autoregressive (AR($\infty$)) models. Instead of accumulating squares of sequential prediction errors from the beginning, APE$_{\delta_{n}}$ is obtained by summing these squared errors from stage $n\delta_{n}$, where $n$ is the sample size and $0 < \delta_{n} < 1$ may depend on $n$. Under certain regularity conditions, an asymptotic expression is derived for the mean-squared prediction error (MSPE) of an AR predictor with order determined by APE$_{\delta_{n}}$. This expression shows that the prediction performance of APE$_{\delta_{n}}$ can vary dramatically depending on the choice of $\delta_{n}$. Another interesting finding is that when $\delta_{n}$ approaches 1 at a certain rate, APE$_{\delta_{n}}$ can achieve asymptotic efficiency in most practical situations. An asymptotic equivalence between APE$_{\delta_{n}}$ and an information criterion with a suitable penalty term is also established from the MSPE point of view. It offers a new perspective for comparing the information- and prediction-based model selection criteria in AR($\infty$) models. Finally, we provide the first asymptotic efficiency result for the case when the underlying AR($\infty$) model is allowed to degenerate to a finite autoregression. 
Keywords:  Accumulated prediction errors, Asymptotic equivalence, Asymptotic efficiency, Information criterion, Order selection, Optimal forecasting 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–03–23 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503020&r=ecm 
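The APE$_{\delta_{n}}$ idea, accumulating squared one-step prediction errors only from stage $n\delta_{n}$ onward with recursively re-estimated coefficients, can be sketched for choosing between an AR(1) and a zero-mean white-noise model (a toy version of the criterion, with $\delta_{n}$ fixed at 0.5 and made-up data):

```python
import random

random.seed(5)
# Simulate AR(1) data: y_t = 0.8 * y_{t-1} + e_t
y = [0.0]
for _ in range(600):
    y.append(0.8 * y[-1] + random.gauss(0, 1))

def ape(order, delta=0.5):
    """Accumulated squared one-step prediction errors from stage n*delta,
    re-estimating the AR coefficient from data up to t-1 at each stage
    (order 0 = predict zero, i.e. white noise with known zero mean)."""
    n = len(y)
    start = int(n * delta)
    total = 0.0
    for t in range(start, n):
        if order == 0:
            pred = 0.0
        else:
            num = sum(y[s] * y[s - 1] for s in range(1, t))
            den = sum(y[s - 1] ** 2 for s in range(1, t))
            pred = (num / den) * y[t - 1]   # recursive least squares AR(1)
        total += (y[t] - pred) ** 2
    return total

ape0, ape1 = ape(0), ape(1)   # the criterion prefers the smaller value
```

On AR(1) data the AR(1) predictor accumulates markedly smaller errors, so the criterion selects the right order; varying `delta` changes how much early, noisy estimation is allowed to influence the comparison.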
By:  Chen Pu (Universität Bielefeld, Fakultät für Wirtschaftswissenschaften); Hsiao Chihying (Universität Bielefeld, Fakultät für Wirtschaftswissenschaften) 
Abstract:  In this paper we investigate the possibility of applying a subsampling procedure to testing cointegration relations in large multivariate systems. The subsampling technique is applied to overcome the difficulty of nonstandard distributions and nuisance parameters in testing for cointegration rank without an explicitly formulated structural model. The contribution of this paper is twofold: theoretically, it shows that the subsampling testing procedure is consistent and asymptotically most powerful; practically, it demonstrates that the subsampling procedure can be applied to determine the cointegration rank in large-scale models, where the standard procedures already reach their limits. Especially in cases with few stochastic trends in a system, the subsampling procedure yields robust and reliable results. 
Keywords:  Cointegration, Large System, Nonparametric Tests, Subsampling, PPP 
JEL:  C19 C40 C50 
Date:  2005–04–08 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0504002&r=ecm 
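The generic subsampling idea, approximating a statistic's sampling distribution by its values over all overlapping blocks of the data, can be sketched as follows (this illustrates subsampling in general, with a toy statistic, not the paper's cointegration-rank procedure):

```python
import random

random.seed(6)
data = [random.gauss(0, 1) for _ in range(500)]

def subsample_critical_value(data, stat, block=50, alpha=0.05):
    """Approximate the (1-alpha) quantile of a statistic's sampling
    distribution from its values on all overlapping blocks of length
    `block`, which is the core subsampling recipe."""
    n = len(data)
    vals = sorted(stat(data[i:i + block]) for i in range(n - block + 1))
    return vals[int((1 - alpha) * len(vals))]

# Toy statistic: sqrt(b) * |sample mean|, whose null distribution is
# nonstandard-free here but would be nonstandard in the cointegration case.
stat = lambda x: (len(x) ** 0.5) * abs(sum(x) / len(x))
cv = subsample_critical_value(data, stat)
```

The appeal for cointegration testing is that the critical value is obtained without ever deriving the nonstandard limit distribution or estimating its nuisance parameters.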
By:  Patrick Marsh 
Abstract:  This paper considers the information available to invariant unit root tests at and near the unit root. Since all invariant tests will be functions of the maximal invariant, the Fisher information in this statistic will be the available information. The main finding of the paper is that the available information for all tests invariant to a linear trend is zero at the unit root. This result applies for any sample size, over a variety of distributions and correlation structures, and is robust to the inclusion of any other deterministic component. In addition, an explicit bound upon the power of all invariant unit root tests is shown to depend solely upon the information. This bound is illustrated via comparison with the local-to-unity power envelope, and a brief simulation study illustrates the impact that the requirements of invariance have on power. 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:05/03&r=ecm 
By:  Joaquim J.S. Ramalho (Department of Economics, University of Évora); Richard J. Smith (Department of Economics, University of Warwick) 
Abstract:  This paper proposes novel methods for the construction of tests for models specified by unconditional moment restrictions. It exploits the classical-like nature of generalized empirical likelihood (GEL) to define Pearson-type statistics for overidentifying moment conditions and parametric constraints based on contrasts of GEL implied probabilities, which are natural by-products of GEL estimation. As is increasingly recognized, GEL can possess both theoretical and empirical advantages over the more standard generalized method of moments (GMM). Monte Carlo evidence comparing GMM, GEL and Pearson-type statistics for overidentifying moment conditions indicates that the size properties of a particular Pearson-type statistic are competitive in most circumstances and an improvement over other statistics in many. 
Keywords:  GMM, Generalized Empirical Likelihood, Overidentifying Moments, Parametric Restrictions, Pearson-Type Tests 
JEL:  C13 C30 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:evo:wpecon:5_2005&r=ecm 
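GEL implied probabilities are easy to exhibit in the empirical likelihood special case with a single overidentifying moment condition; the Pearson-type contrast below compares them with the uniform weights 1/n (its scaling is chosen for illustration and may differ from the statistics defined in the paper):

```python
import random

random.seed(7)
n = 200
x = [random.gauss(0.0, 1.0) for _ in range(n)]
g = [xi - 0.0 for xi in x]   # moment function for the hypothesis E[X] = 0

# EL multiplier: solve sum_i g_i / (1 + lam * g_i) = 0 by Newton's method.
lam = 0.0
for _ in range(50):
    f = sum(gi / (1 + lam * gi) for gi in g)
    df = -sum(gi * gi / (1 + lam * gi) ** 2 for gi in g)
    lam -= f / df

# EL implied probabilities: pi_i = 1 / (n * (1 + lam * g_i)); they tilt
# the empirical weights 1/n just enough to satisfy the moment condition.
pi = [1.0 / (n * (1 + lam * gi)) for gi in g]

# A Pearson-type chi-squared contrast of pi_i against the uniform 1/n
# (scaled so that it is O(1) under the null for this one-moment example).
pearson = n ** 2 * sum((p - 1.0 / n) ** 2 for p in pi)
```

Under the moment condition, the implied probabilities are close to uniform and the contrast is small; a badly violated moment condition forces heavily tilted probabilities and a large statistic.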
By:  Joaquim J.S. Ramalho (Department of Economics, University of Évora); Esmeralda A. Ramalho (Department of Economics, University of Évora) 
Abstract:  Empirical likelihood (EL) is appropriate for estimating moment condition models when a random sample from the target population is available. However, many economic surveys are subject to some form of stratification, with different subsets of the underlying population of interest being sampled at different frequencies. In this setting, the available data provide a related but distorted picture of the features of the target population, and direct application of EL will produce inconsistent estimators. In this paper we propose some adaptations of EL to deal with stratified samples in models defined by unconditional moment restrictions. We develop two distinct modified EL estimators: the weighted EL estimator, which requires knowledge of the marginal strata probabilities; and the two-step EL estimator, which assumes the availability of some auxiliary aggregate information on the target population. A Monte Carlo simulation study reveals promising results both for the weighted and some versions of the two-step EL estimators. 
Keywords:  Stratified Sampling, Empirical Likelihood, Weighted Estimation, Auxiliary Information 
JEL:  C13 C30 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:evo:wpecon:6_2005&r=ecm 
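The reweighting idea behind the weighted EL estimator — inflating or deflating each observation by the ratio of population to sample strata shares — can be illustrated with a simple mean (a sketch of the weighting principle only, with made-up strata, not the EL machinery itself):

```python
import random

random.seed(8)
# Target population: stratum 0 has X ~ N(0,1) and makes up 80% of the
# population; stratum 1 has X ~ N(3,1) and makes up 20%.
# The survey oversamples stratum 1: half the sample from each stratum.
sample = [(0, random.gauss(0, 1)) for _ in range(500)] + \
         [(1, random.gauss(3, 1)) for _ in range(500)]

pop_share = {0: 0.8, 1: 0.2}    # known marginal strata probabilities
samp_share = {0: 0.5, 1: 0.5}   # sampling frequencies actually used

# Naive estimator ignores the stratification and is badly distorted.
naive = sum(x for _, x in sample) / len(sample)

# Reweight each observation by (population share / sample share) of its
# stratum, the same correction the weighted EL estimator builds on.
w = [pop_share[s] / samp_share[s] for s, _ in sample]
weighted = sum(wi * x for wi, (_, x) in zip(w, sample)) / sum(w)
```

The true population mean is 0.8·0 + 0.2·3 = 0.6; the naive mean sits near 1.5 because stratum 1 is over-represented, while the reweighted mean recovers the population target.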