on Econometric Time Series |
By: | Amine JALAL (HEC-University of Lausanne and FAME); Michael ROCKINGER (HEC-University of Lausanne, FAME and CEPR) |
Abstract: | We investigate the consequences for value-at-risk and expected shortfall purposes of using a GARCH filter on various mis-specified processes. We show that careful investigation of the adequacy of the GARCH filter is necessary, since under mis-specification a GARCH filter appears to do more harm than good. An unconditional, non-filtered tail estimate appears to perform satisfactorily for dependent data with a degree of dependency corresponding to actual market conditions. |
Keywords: | Extreme value theory; Value at Risk (VaR); Expected shortfall; GARCH; Markov switching; Jump diffusion; Backtesting. |
JEL: | G12 C32 |
Date: | 2004–06 |
URL: | http://d.repec.org/n?u=RePEc:fam:rpseri:rp115&r=ets |
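As a rough illustration of the VaR comparison described in the abstract above, the following Python sketch filters a toy return series with a fixed-parameter GARCH(1,1) recursion and contrasts an unconditional tail quantile with a GARCH-filtered one. It is not the authors' code: the GARCH parameters, the simulated Student-t returns and the function names are illustrative assumptions, and a full EVT tail estimator is replaced here by a simple empirical quantile.

```python
import numpy as np

def garch11_filter(returns, omega=0.05, alpha=0.08, beta=0.90):
    """Run a GARCH(1,1) variance recursion with fixed (illustrative) parameters;
    return conditional volatilities and standardized residuals."""
    n = len(returns)
    sigma2 = np.empty(n)
    sigma2[0] = returns.var()
    for t in range(1, n):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    vol = np.sqrt(sigma2)
    return vol, returns / vol

def var_forecast(returns, level=0.01):
    """Compare an unconditional (non-filtered) VaR with a GARCH-filtered VaR."""
    var_uncond = np.quantile(returns, level)     # unconditional tail quantile
    vol, z = garch11_filter(returns)
    z_q = np.quantile(z, level)                  # quantile of standardized residuals
    # one-step-ahead conditional variance, reusing the same fixed parameters
    sigma2_next = 0.05 + 0.08 * returns[-1] ** 2 + 0.90 * vol[-1] ** 2
    var_filtered = np.sqrt(sigma2_next) * z_q
    return var_uncond, var_filtered

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=2000)              # toy heavy-tailed return series
print(var_forecast(r))
```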
By: | Matthias HAGMANN (HEC-University of Lausanne and FAME); Carlos LENZ (University of Basel, Department of Economics) |
Abstract: | We shed new light on the negative relationship between real stock returns or real interest rates and (i) ex post inflation, (ii) expected inflation, (iii) unexpected inflation and (iv) changes in expected inflation. Using the structural vector autoregression methodology, we propose a decomposition of those series into economically interpretable components driven by aggregate supply, real demand and money market shocks. Our empirical results support Fama’s ‘proxy hypothesis’ and the predictions of several general equilibrium models. Concerning the negative relation between the real rate of interest and inflation, we find that the Mundell-Tobin model and the explanation of Fama and Gibbons (1982) are not competitors: both add insight in their own way about the reasons for the negative correlation between those variables. However, the importance of the latter explanation has decreased since the 1980s. |
Keywords: | Real stock returns; Real rate of interest; Expected and unexpected inflation; 'Fisher hypothesis'; Structural VAR |
JEL: | E44 G1 |
Date: | 2004–10 |
URL: | http://d.repec.org/n?u=RePEc:fam:rpseri:rp118&r=ets |
By: | Olivier Scaillet (HEC, University of Geneva and FAME) |
Abstract: | We consider a consistent test, similar to a Kolmogorov-Smirnov test, of the complete set of restrictions that relate to the copula representation of positive quadrant dependence. For such a test we propose and justify inference relying on a simulation-based multiplier method and a bootstrap method. We also explore the finite sample behaviour of both methods with Monte Carlo experiments. A first empirical illustration is given for US insurance claim data. A second one examines the presence of positive quadrant dependence in life expectancies at birth of males and females among countries. |
Keywords: | Nonparametric; Positive Quadrant Dependence; Copula; Risk Management; Loss Severity Distribution; Bootstrap; Multiplier Method; Empirical Process |
JEL: | C12 D81 G10 G21 G22 |
URL: | http://d.repec.org/n?u=RePEc:fam:rpseri:rp128&r=ets |
By: | Quoreshi, Shahiduzzaman (Department of Economics, Umeå University) |
Abstract: | A bivariate integer-valued moving average (BINMA) model is proposed. The BINMA model allows for both positive and negative correlation between the counts. This model can be seen as an inverse of the conditional duration model in the sense that short durations in a time interval correspond to a large count and vice versa. The conditional mean, variance and covariance of the BINMA model are given. Model extensions to include explanatory variables are suggested. Using the BINMA model for AstraZeneca and Ericsson B, it is found that there is positive correlation between the stock transaction series. Empirically, we find support for the use of long-lag bivariate moving average models for the two series, and the explanatory variables have significant effects for both series. |
Keywords: | Count data; Intra-day; High frequency; Time series; Estimation; Long memory; Finance |
JEL: | C13 C22 C25 C51 G12 G14 |
Date: | 2005–04–14 |
URL: | http://d.repec.org/n?u=RePEc:hhs:umnees:0655&r=ets |
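To make the thinning-based structure of an integer-valued moving average concrete, here is a minimal simulation sketch of a bivariate INMA(1) with Poisson innovations and a common component inducing positive cross-correlation. The specification, parameter values and function names are illustrative assumptions, not the BINMA parameterization used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(count, prob):
    """Binomial thinning operator 'prob o count' used in integer-valued MA models."""
    return rng.binomial(count, prob)

def simulate_binma1(n=5000, a1=0.4, a2=0.3, lam_common=2.0, lam1=1.0, lam2=1.5):
    """Simulate a toy bivariate INMA(1): each count series is eps_t + a o eps_{t-1},
    with cross-correlation induced by a common Poisson component in the innovations."""
    common = rng.poisson(lam_common, n + 1)
    eps1 = rng.poisson(lam1, n + 1) + common
    eps2 = rng.poisson(lam2, n + 1) + common
    x1 = np.array([eps1[t] + thin(eps1[t - 1], a1) for t in range(1, n + 1)])
    x2 = np.array([eps2[t] + thin(eps2[t - 1], a2) for t in range(1, n + 1)])
    return x1, x2

x1, x2 = simulate_binma1()
print("contemporaneous correlation:", np.corrcoef(x1, x2)[0, 1])
```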
By: | Welz, Peter (Department of Economics); Österholm, Pär (Department of Economics) |
Abstract: | This paper contributes to the recent debate about the high estimated partial adjustment coefficient in dynamic Taylor rules, commonly interpreted as deliberate interest rate smoothing on the part of the monetary authority. We argue that a high coefficient on the lagged interest rate term may be a consequence of an incorrectly specified central bank reaction function. Focusing on omitted variables, our Monte Carlo study first reproduces the well-known result that, in such cases, all coefficients in the misspecified equation are biased. In particular, if relevant variables are left out of the estimated equation, a high partial adjustment coefficient is obtained even when it is in fact zero in the data generating process. Misspecification also leads to considerable size distortions in two tests that were recently proposed by English, Nelson, and Sack (2003) in order to distinguish between interest rate smoothing and serially correlated disturbances. Our results question the common interpretation of very slow partial adjustment as interest rate smoothing in estimated dynamic Taylor rules. |
Keywords: | Monetary policy; Taylor rule; Interest rate smoothing; Serially correlated error term; Omitted variables |
JEL: | C12 C15 E52 |
Date: | 2005–03–31 |
URL: | http://d.repec.org/n?u=RePEc:hhs:uunewp:2005_014&r=ets |
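The omitted-variable mechanism described above is easy to reproduce in a small Monte Carlo. The sketch below is not the authors' design; all coefficients and the persistence of the omitted variable are illustrative assumptions. It generates a Taylor rule with no interest rate smoothing plus a persistent omitted regressor, then estimates a rule that omits that regressor but includes a lagged interest rate, and recovers a spuriously large partial adjustment coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, phi, sigma=1.0):
    """Simulate a simple AR(1) process."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

def one_replication(n=400):
    """True Taylor rule with NO interest rate smoothing but an omitted persistent
    variable z; the estimated rule drops z and adds a lagged interest rate."""
    infl, gap, z = ar1(n, 0.8), ar1(n, 0.8), ar1(n, 0.9)
    i = 1.5 * infl + 0.5 * gap + 1.0 * z + 0.5 * rng.standard_normal(n)
    # OLS of i_t on a constant, inflation, output gap and i_{t-1} (z omitted)
    X = np.column_stack([np.ones(n - 1), infl[1:], gap[1:], i[:-1]])
    beta = np.linalg.lstsq(X, i[1:], rcond=None)[0]
    return beta[3]                    # estimated partial adjustment coefficient

rhos = [one_replication() for _ in range(500)]
print("mean estimated smoothing coefficient (true value 0):", np.mean(rhos))
```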
By: | Kazuhiko Hayakawa |
Abstract: | This paper examines analytically and experimentally why the system GMM estimator in dynamic panel data models is less biased than the first differencing or the level estimators even though the former uses more instruments. We find that the bias of the system GMM estimator is a weighted sum of the biases in opposite directions of the first differencing and the level estimator. We also find that an important condition for the system GMM estimator to have small bias is that the variances of the individual effects and the disturbances are almost of the same magnitude. If the variance of individual effects is much larger than that of disturbances, then all GMM estimators are heavily biased. To reduce such biases, we propose bias-corrected GMM estimators. On the other hand, if the variance of individual effects is smaller than that of disturbances, the system estimator has a more severe downward bias than the level estimator. |
Date: | 2005–04 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d05-82&r=ets |
By: | Nikolaus Hautsch (Institute of Economics, University of Copenhagen) |
Abstract: | In this paper, we propose a framework for the modelling of multivariate dynamic processes which are driven by an unobservable common autoregressive component. Economically motivated by the mixture-of-distribution hypothesis, we model the multivariate intraday trading process of return volatility, volume and trading intensity by a VAR model that is augmented by a joint latent factor serving as a proxy for the unobserved information flow. The model is estimated by simulated maximum likelihood using efficient importance sampling techniques. Analyzing intraday data from the NYSE, we find strong empirical evidence for the existence of an underlying persistent component as an important driving force of the trading process. It is shown that the inclusion of the latent factor clearly improves the goodness-of-fit of the model as well as its dynamical and distributional properties. |
Keywords: | observation vs. parameter driven dynamics; mixture-of-distribution hypothesis; VAR model; efficient importance sampling |
JEL: | C15 C32 C52 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:kud:kuiefr:200503&r=ets |
By: | D H Kim |
Abstract: | This paper investigates the nature of nonlinearities in the term structure using the flexible approach to nonlinear inference. The paper reports clear evidence of nonlinearity, in contrast to the affine term structure model and consistent with recent claims in the literature. We find that there is a threshold effect of volatility on the interest rate, but this effect does not capture the entire nature of the nonlinearity. The recently proposed quadratic term structure model captures the nonlinearity better than the threshold model, but it seems to miss some aspect of the nonlinearity for short-term rates. However, our flexible nonlinear model, which incorporates the threshold effect and the convexity of volatility into the quadratic model, generally performs well for all interest rates. The paper suggests that this model is a promising representation of nonlinearities, and out-of-sample forecasts support the claim of nonlinearities. |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:man:cgbcrp:51&r=ets |
By: | Baki Billah; Maxwell L King; Ralph D Snyder; Anne B Koehler |
Abstract: | Applications of exponential smoothing to forecast time series usually rely on three basic methods: simple exponential smoothing, trend-corrected exponential smoothing and a seasonal variation thereof. A common approach to selecting the method appropriate to a particular time series is based on prediction validation on a withheld part of the sample, using criteria such as the mean absolute percentage error. A second approach is to rely on the most general applicable case of the three methods: for annual series this is trend-corrected exponential smoothing; for sub-annual series it is the seasonal adaptation of trend-corrected exponential smoothing. The rationale for this approach is that a general method automatically collapses to its nested counterparts when the relevant conditions are present in the data. A third approach may be based on an information criterion when maximum likelihood methods are used in conjunction with exponential smoothing to estimate the smoothing parameters. In this paper, such approaches for selecting the appropriate forecasting method are compared in a simulation study. They are also compared on real time series from the M3 forecasting competition. The results indicate that the information criterion approach appears to provide the best basis for an automated approach to method selection, provided that it is based on Akaike's information criterion. |
Keywords: | Model Selection; Exponential Smoothing; Information Criteria; Prediction; Forecast Validation |
JEL: | C22 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2005-6&r=ets |
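A minimal sketch of the information criterion approach to method selection: compute one-step-ahead sums of squared errors for simple and trend-corrected (Holt) exponential smoothing, turn them into Gaussian AICs, and keep the method with the smaller value. The grid search, parameter counts and toy series are assumptions for illustration; the paper's own study uses maximum likelihood estimation and also covers the seasonal case.

```python
import numpy as np

def ses_sse(y, alpha):
    """One-step-ahead sum of squared errors for simple exponential smoothing."""
    level, sse = y[0], 0.0
    for obs in y[1:]:
        sse += (obs - level) ** 2
        level = alpha * obs + (1 - alpha) * level
    return sse

def holt_sse(y, alpha, beta):
    """One-step-ahead SSE for trend-corrected (Holt) exponential smoothing."""
    level, trend, sse = y[0], y[1] - y[0], 0.0
    for obs in y[1:]:
        forecast = level + trend
        sse += (obs - forecast) ** 2
        new_level = alpha * obs + (1 - alpha) * forecast
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return sse

def aic(sse, n, n_params):
    """Gaussian AIC computed from the one-step-ahead SSE."""
    return n * np.log(sse / n) + 2 * n_params

y = np.cumsum(np.random.default_rng(3).standard_normal(120)) + 0.2 * np.arange(120)
grid = np.linspace(0.05, 0.95, 19)
best_ses = min(ses_sse(y, a) for a in grid)
best_holt = min(holt_sse(y, a, b) for a in grid for b in grid)
n = len(y)
print("AIC  SES :", aic(best_ses, n, 2))    # smoothing parameter + level seed
print("AIC  Holt:", aic(best_holt, n, 4))   # two smoothing parameters + two seeds
print("selected :", "Holt" if aic(best_holt, n, 4) < aic(best_ses, n, 2) else "SES")
```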
By: | J Keith Ord; Ralph D Snyder; Anne B Koehler; Rob J Hyndman; Mark Leeds |
Abstract: | The state space approach to modelling univariate time series is now widely used both in theory and in applications. However, the very richness of the framework means that quite different model formulations are possible, even when they purport to describe the same phenomena. In this paper, we examine the single source of error [SSOE] scheme, which has perfectly correlated error components. We then proceed to compare SSOE to the more common version of the state space models, for which all the error terms are independent; we refer to this as the multiple source of error [MSOE] scheme. As expected, there are many similarities between the MSOE and SSOE schemes, but also some important differences. Both have ARIMA models as their reduced forms, although the mapping is more transparent for SSOE. Further, SSOE does not require a canonical form to complete its specification. An appealing feature of SSOE is that the estimates of the state variables converge in probability to their true values, thereby leading to a formal inferential structure for the ad-hoc exponential smoothing methods for forecasting. The parameter space for SSOE models may be specified to match that of the corresponding ARIMA scheme, or it may be restricted to meaningful sub-spaces, as for MSOE but with somewhat different outcomes. The SSOE formulation enables straightforward extensions to certain classes of non-linear models, including a linear trend with multiplicative seasonals version that underlies the Holt-Winters forecasting method. Conditionally heteroscedastic models may be developed in a similar manner. Finally we note that smoothing and decomposition, two crucial practical issues, may be performed within the SSOE framework. |
Keywords: | ARIMA, Dynamic Linear Models, Equivalence, Exponential Smoothing, Forecasting, GARCH, Holt's Method, Holt-Winters Method, Kalman Filter, Prediction Intervals. |
JEL: | C22 C53 C51 |
Date: | 2005–04 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2005-7&r=ets |
By: | DUFOUR, Jean-Marie; FARHAT, Abdeljelil; HALLIN, Marc |
Abstract: | We consider the problem of testing whether the observations X1, ..., Xn of a time series are independent with unspecified (possibly nonidentical) distributions symmetric about a common known median. Various bounds on the distributions of serial correlation coefficients are proposed: exponential bounds, Eaton-type bounds, Chebyshev bounds and Berry-Esséen-Zolotarev bounds. The bounds are exact in finite samples, distribution-free and easy to compute. The performance of the bounds is evaluated and compared with traditional serial dependence tests in a simulation experiment. The procedures proposed are applied to U.S. data on interest rates (commercial paper rate). |
Keywords: | autocorrelation ; serial dependence ; nonparametric test ; distribution-free test ; heterogeneity ; heteroskedasticity ; symmetric distribution ; robustness ; exact test ; bound ; exponential bound ; large deviations ; Chebyshev inequality ; Berry-Esséen ; interest rates. |
JEL: | C14 C22 C12 C32 E4 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:mtl:montde:2005-05&r=ets |
By: | DUFOUR, Jean-Marie; JOUINI, Tarek |
Abstract: | In this paper, we study the asymptotic distribution of a simple two-stage (Hannan-Rissanen-type) linear estimator for stationary invertible vector autoregressive moving average (VARMA) models in the echelon form representation. General conditions for consistency and asymptotic normality are given. A consistent estimator of the asymptotic covariance matrix of the estimator is also provided, so that tests and confidence intervals can easily be constructed. |
Keywords: | Time series ; VARMA ; stationary ; invertible ; echelon form ; estimation ; asymptotic normality ; bootstrap ; Hannan-Rissanen |
JEL: | C3 C32 C53 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:mtl:montde:2005-09&r=ets |
By: | Eric Hillebrand (Louisiana State University, Department of Economics) |
Abstract: | Apart from the well-known, high persistence of daily financial volatility data, there is also a short correlation structure that reverts to the mean in less than a month. We find this short correlation time scale in six different daily financial time series and use it to improve the short-term forecasts from GARCH models. We study different generalizations of GARCH that allow for several time scales. On our holding sample, none of the considered models can fully exploit the information contained in the short scale. Wavelet analysis shows a correlation between fluctuations on long and on short scales. Models accounting for this correlation as well as long memory models for absolute returns appear to be promising. |
Keywords: | GARCH, volatility persistence, spurious high persistence, long memory, fractional integration, change-points, wavelets, time scales |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–01–31 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0501015&r=ets |
By: | Stanislav Radchenko (UNC at Charlotte) |
Abstract: | This paper constructs long-term forecasts of energy prices using a reduced form model of shifting trend developed by Pindyck (1999). A Gibbs sampling algorithm is developed to estimate models with a shifting trend line, which are used to construct 10-period-ahead and 15-period-ahead forecasts. An advantage of forecasts from this model is that they are not strongly influenced by the presence of large, long-lived increases and decreases in energy prices. The forecasts from the shifting trends model are combined with forecasts from the random walk model and the autoregressive model, substantially decreasing the mean squared forecast error relative to each individual model. |
Keywords: | energy forecasting, oil price, coal price, natural gas price, shifting trends model, long term forecasting |
JEL: | C53 |
Date: | 2005–02–04 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502002&r=ets |
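The abstract combines forecasts from the shifting trends, random walk and autoregressive models; the exact weighting scheme is not stated there, so the sketch below simply illustrates one common choice, inverse-MSE weights, on made-up numbers. All names and values are hypothetical.

```python
import numpy as np

def combine_forecasts(forecasts, errors):
    """Combine competing point forecasts with weights inversely proportional to
    each model's historical mean squared error (equal weights are the obvious fallback)."""
    mse = np.array([np.mean(e ** 2) for e in errors])
    weights = (1.0 / mse) / np.sum(1.0 / mse)
    return np.dot(weights, forecasts), weights

# toy example: three models' point forecasts and their past forecast errors
rng = np.random.default_rng(4)
past_errors = [rng.normal(0, s, 50) for s in (1.0, 2.0, 3.0)]
point_forecasts = np.array([52.0, 55.0, 48.0])
combined, w = combine_forecasts(point_forecasts, past_errors)
print("weights:", np.round(w, 3), "combined forecast:", round(combined, 2))
```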
By: | Rafal Weron (Hugo Steinhaus Center); Adam Misiorek (Institute of Power Systems Automation) |
Abstract: | In this paper we study two statistical approaches to load forecasting. Both of them model electricity load as a sum of two components: a deterministic component (representing seasonalities) and a stochastic one (representing noise). They differ in the choice of the seasonality reduction method. Model A utilizes differencing, while Model B uses a recently developed seasonal volatility technique. In both models the stochastic component is described by an ARMA time series. The models are tested on a time series of system-wide loads from the California power market and compared with the official forecast of the California System Operator (CAISO). |
Keywords: | Electricity, load forecasting, ARMA model, seasonal component |
JEL: | C22 C53 L94 Q40 |
Date: | 2005–02–07 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502004&r=ets |
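A stripped-down version of the "deterministic seasonal plus stochastic ARMA" decomposition described above, assuming daily data with a weekly pattern and using an AR(2) fitted by least squares as a stand-in for the ARMA component. The parameter choices and the toy load series are illustrative, not the paper's calibration.

```python
import numpy as np

def deseasonalize(load, period=7):
    """Split the load into a deterministic seasonal mean and a stochastic remainder."""
    seasonal = np.array([load[i::period].mean() for i in range(period)])
    remainder = load - seasonal[np.arange(len(load)) % period]
    return seasonal, remainder

def fit_ar(x, p=2):
    """Estimate an AR(p) for the stochastic component by least squares."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def forecast_next(load, period=7, p=2):
    seasonal, rem = deseasonalize(load, period)
    phi = fit_ar(rem, p)
    rem_next = np.dot(phi, rem[-1:-p - 1:-1])       # AR forecast of the remainder
    return rem_next + seasonal[len(load) % period]  # add the seasonal level back

rng = np.random.default_rng(5)
days = np.arange(365)
toy_load = 100 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, 365)
print("next-day load forecast:", round(forecast_next(toy_load), 2))
```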
By: | Michael Bierbrauer (University of Karlsruhe); Stefan Trueck (University of Karlsruhe); Rafal Weron (Hugo Steinhaus Center) |
Abstract: | We address the issue of modeling spot electricity prices with regime switching models. After reviewing the stylized facts about power markets we propose and fit various models to spot prices from the Nordic power exchange. Afterwards we assess their performance by comparing simulated and market prices. |
Keywords: | Power market, Electricity price modeling, Regime switching model |
JEL: | C51 L94 Q40 |
Date: | 2005–02–07 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502005&r=ets |
By: | Ewa Broszkiewicz-Suwaj (Wroclaw University of Technology); Andrzej Makagon (Hampton University); Rafal Weron (Hugo Steinhaus Center); Agnieszka Wylomanska (Wroclaw University of Technology) |
Abstract: | For many economic problems standard statistical analysis, based on the notion of stationarity, is not adequate. These include modeling seasonal decisions of consumers, forecasting business cycles and - as we show in the present article - modeling wholesale power market prices. We apply standard methods and a novel spectral domain technique to conclude that electricity price returns exhibit periodic correlation with daily and weekly periods. As such they should be modeled with periodically correlated processes. We propose to apply periodic autoregression (PAR) models which are closely related to the standard instruments in econometric analysis - vector autoregression (VAR) models. |
Keywords: | periodic correlation, sample coherence, electricity price, periodic autoregression, vector autoregression |
JEL: | C22 C32 L94 Q40 |
Date: | 2005–02–07 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502006&r=ets |
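A periodic autoregression makes the AR coefficient a function of the position within the cycle (here, the day of the week). The sketch below fits a PAR(1) by running a separate lag-one regression for each day; the simulated series and coefficient values are assumptions for illustration only.

```python
import numpy as np

def fit_par1(x, period=7):
    """Fit a PAR(1): a separate lag-one autoregressive coefficient for each
    'season' (e.g. day of the week) of the periodically correlated series."""
    phi = np.zeros(period)
    for d in range(period):
        idx = np.arange(d, len(x), period)
        idx = idx[idx > 0]                    # need a lagged observation
        y, ylag = x[idx], x[idx - 1]
        phi[d] = np.dot(ylag, y) / np.dot(ylag, ylag)
    return phi

rng = np.random.default_rng(6)
true_phi = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.3, 0.2])
x = np.zeros(7 * 300)
for t in range(1, len(x)):
    x[t] = true_phi[t % 7] * x[t - 1] + rng.standard_normal()
print(np.round(fit_par1(x), 2))
```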
By: | Bragoudakis Zacharias (Bank of Greece) |
Abstract: | This paper is an exercise in applied macroeconomic forecasting. We examine the forecasting power of a vector error-correction model (VECM) that is anchored by a long-run equilibrium relationship between Greek national income and productive public expenditure, as suggested by economic theory. We compare the forecast values of the endogenous variables to the actual historical values using a stochastic simulation analysis. The simulation results provide new evidence supporting the ability of the model to forecast not only one period ahead but also many periods into the future. |
Keywords: | Cointegration, Forecasting, Simulation Analysis, Vector error-correction models |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–02–09 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502007&r=ets |
By: | Costas Milas (City University); Phil Rothman (East Carolina University) |
Abstract: | In this paper we use smooth transition vector error-correction models (STVECMs) in a simulated out-of-sample forecasting experiment for the unemployment rates of the four non-Euro G-7 countries, the U.S., U.K., Canada, and Japan. For the U.S., pooled forecasts constructed by taking the median value across the point forecasts generated by the STVECMs perform better than the linear VECM benchmark, particularly during business cycle expansions. Pooling across the linear and nonlinear forecasts tends to lead to statistically significant forecast improvement during business cycle expansions for Canada, while the opposite is the case for the U.K. |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–02–18 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502010&r=ets |
By: | Edoardo Otranto (DEIR-Università di Sassari) |
Abstract: | The extraction of a common signal from a group of time series is generally obtained using variables recorded with the same frequency or transformed to have the same frequency (monthly, quarterly, etc.). The statistical literature has not paid a great deal of attention to this topic. In this paper we extend an approach based on the use of dummy variables to the well known trend plus cycle model, in a multivariate context, using both quarterly and monthly data. This procedure is applied to the Italian economy, using the variables suggested by an Italian Institution (ISAE) to provide a national dating. |
Keywords: | Business cycle; State-space; Time Series; Trend; Turning Points |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–02–18 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0502011&r=ets |
By: | Vadim Marmer (Yale University) |
Abstract: | Various implications of nonlinearity, nonstationarity and misspecification are considered from a forecasting perspective. My model allows for small departures from the martingale difference sequence hypothesis by including an additive nonlinear component, formulated as a general, integrable transformation of the predictor, which is assumed to be I(1). Such a generating mechanism provides for predictability only in the extremely short run. In the stock market example, this formulation corresponds to a situation where some relevant information may escape the attention of market participants only for very short periods of time. I assume that the true generating mechanism involving the nonlinear dependency is unknown to the econometrician and he is therefore forced to use some approximating functions. I show that the usual regression techniques lead to spurious forecasts. Improvements of the forecast accuracy are possible with properly chosen integrable approximating functions. This paper derives the limiting distribution of the forecast MSE. In the case of square integrable approximants, it depends on the $L_{2}$-distance between the nonlinear component and the approximating function. Optimal forecasts are available for a given class of approximants. Finally, I present a Monte Carlo simulation study and an empirical example in support of the theoretical findings. |
Keywords: | forecasting, integrated time series, misspecified models, nonlinear transformations, stock returns, dividend-price ratio. |
JEL: | C22 C53 G14 |
Date: | 2005–03–05 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503002&r=ets |
By: | Matteo M. Pelagatti (University of Milan-Bicocca) |
Abstract: | A methodology based on the multivariate generalized Butterworth filter for extracting the business cycles of the whole economy and of its productive sectors is developed. The method is then illustrated through an application to the Italian gross value added time series of the main economic sectors. |
Keywords: | Business cycle, Butterworth filter, Unobserved components, Kalman Filter |
JEL: | C13 C32 E32 |
Date: | 2005–03–11 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503006&r=ets |
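For a univariate flavour of the approach, the sketch below extracts a business-cycle band (periods of 6 to 32 quarters, an assumed convention) from a toy quarterly series with an ordinary band-pass Butterworth filter from SciPy. The paper's multivariate generalized Butterworth filter, cast in unobserved-components form, is more elaborate than this stand-in; all names and parameter values here are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def business_cycle_component(series, low_period=6, high_period=32):
    """Keep fluctuations with periods between low_period and high_period
    observations using a zero-phase band-pass Butterworth filter."""
    nyquist = 0.5                                   # cycles per observation
    wn = [1.0 / high_period / nyquist, 1.0 / low_period / nyquist]
    b, a = butter(2, wn, btype="bandpass")
    return filtfilt(b, a, series)

rng = np.random.default_rng(7)
t = np.arange(200)
# toy quarterly series: linear trend + 20-quarter cycle + noise
gdp = 0.02 * t + 0.5 * np.sin(2 * np.pi * t / 20) + 0.1 * rng.standard_normal(200)
cycle = business_cycle_component(gdp)
print("extracted cycle standard deviation:", round(cycle.std(), 3))
```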
By: | Matteo M. Pelagatti (University of Milan-Bicocca); Stefania Rondena (University of Milan-Bicocca) |
Abstract: | The Dynamic Conditional Correlation model of Engle has made the estimation of multivariate GARCH models feasible for reasonably large vectors of security returns. In the present paper we show how Engle’s two-step estimator of the model can easily be extended to elliptical conditional distributions, and we apply different leptokurtic DCC models to some stocks listed on the Milan Stock Exchange. Free software written by the authors to carry out all the required computations is also presented. |
Keywords: | Multivariate GARCH, Dynamic conditional correlation, Generalized method of moments |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–03–11 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503007&r=ets |
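The two-step logic mentioned above: first standardize each return series with a univariate GARCH model, then run the DCC recursion on the standardized residuals. In the sketch the GARCH and DCC parameters are fixed at illustrative values rather than estimated, the Gaussian (rather than elliptical leptokurtic) case is simulated, and the function names are assumptions.

```python
import numpy as np

def garch11_std_resid(r, omega=0.05, alpha=0.08, beta=0.9):
    """Step 1 (illustrative): standardize one return series with a GARCH(1,1)
    variance recursion run at fixed parameters."""
    s2 = np.empty(len(r))
    s2[0] = r.var()
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return r / np.sqrt(s2)

def dcc_correlations(z, a=0.05, b=0.9):
    """Step 2: DCC recursion Q_t = (1-a-b)*Qbar + a*z_{t-1}z_{t-1}' + b*Q_{t-1},
    mapped to a correlation matrix R_t at every date."""
    T, k = z.shape
    qbar = np.cov(z, rowvar=False)
    q = qbar.copy()
    corr = np.empty((T, k, k))
    for t in range(T):
        d = np.diag(1.0 / np.sqrt(np.diag(q)))
        corr[t] = d @ q @ d
        q = (1 - a - b) * qbar + a * np.outer(z[t], z[t]) + b * q
    return corr

rng = np.random.default_rng(8)
returns = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=1000)
z = np.column_stack([garch11_std_resid(returns[:, i]) for i in range(2)])
R = dcc_correlations(z)
print("last conditional correlation:", round(R[-1, 0, 1], 3))
```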
By: | Matteo M. Pelagatti (University of Milan-Bicocca) |
Abstract: | Duration-dependent Markov-switching VAR (DDMS-VAR) models are time series models whose data generating process is a mixture of two VAR processes, which switches according to a two-state Markov chain with transition probabilities depending on how long the process has been in a state. In the present paper I propose an MCMC-based methodology to carry out inference on the model's parameters and introduce DDMSVAR for Ox, software written by the author for the analysis of time series by means of DDMS-VAR models. An application of the methodology to the U.S. business cycle concludes the article. |
Keywords: | Markov-switching, Business cycle, Gibbs sampling, Duration dependence, Vector autoregression |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–03–11 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503008&r=ets |
By: | Ozgen Sayginsoy (University at Albany--SUNY) |
Abstract: | In this paper, a likelihood ratio approach is taken to derive a test of the economic convergence hypothesis in the context of the linear deterministic trend model. The test is designed to directly address the nonstandard nature of the hypothesis, and is a systematic improvement over existing methods for testing convergence in the same context. The test is first derived under the assumption of Gaussian errors with known serial correlation. However, the normality assumption is then relaxed, and the results are naturally extended to the case of covariance stationary errors with unknown serial correlation. The test statistic is a continuous function of individual t-statistics on the intercept and slope parameters of the linear deterministic trend model, and therefore standard heteroskedasticity and autocorrelation consistent estimators of the long-run variance can be directly implemented. Building upon the likelihood ratio framework, concrete and specific tests are recommended for use in practice. The recommended tests do not require knowledge of the form of serial correlation in the data, and they are robust to highly persistent serial correlation, including the case of a unit root in the errors. The recommended tests utilize nonparametric kernel variance estimators, which are analyzed using the fixed bandwidth (fixed-b) asymptotic framework recently proposed by Kiefer and Vogelsang (2003). The fixed-b framework makes possible the choice of kernel and bandwidth that deliver tests with maximal asymptotic power within a specific class of tests. It is shown that when the Daniell kernel variance estimator is implemented with specific bandwidth choices, the recommended tests have asymptotic power close to that of the known-variance case, as well as good finite sample size and power properties. Finally, the newly developed tests are used to investigate economic convergence among eight regions of the United States (as defined by the Bureau of Economic Analysis) in the post-World-War-II period. Empirical evidence is found for convergence in three of the eight regions. |
Keywords: | Likelihood Ratio, Joint Inequality, HAC Estimator, Fixed-b Asymptotics, Power Envelope, Unit Root, Linear Trend, BEA Regions. |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–03–11 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503014&r=ets |
By: | Patrick Crowley (Texas A&M University - Corpus Christi) |
Abstract: | Wavelet analysis, although used extensively in disciplines such as signal processing, engineering, medical sciences, physics and astronomy, has not yet fully entered the economics discipline. In this discussion paper, wavelet analysis is introduced in an intuitive manner, and the existing economics and finance literature that utilises wavelets is explored. Extensive examples of exploratory wavelet analysis are given, many using Canadian, US and Finnish industrial production data. Finally, potential future applications for wavelet analysis in economics are also discussed and explored. |
Keywords: | statistical methodology, multiresolution analysis, wavelets, business cycles, economic growth |
JEL: | C19 C87 E32 |
Date: | 2005–03–17 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503017&r=ets |
By: | Marie Bessec (EURIsCO - University Paris Dauphine); Othman Bouabdallah (EUREQua - University Paris Panthéon Sorbonne) |
Abstract: | This paper explores the forecasting abilities of Markov-Switching models. Although MS models generally display a superior in-sample fit relative to linear models, the gain in prediction remains small. We confirm this result using simulated data for a wide range of specifications, applying several tests of forecast accuracy and forecast encompassing that are robust to nested models. In order to explain this poor performance, we use a forecasting error decomposition. We identify four components and derive their analytical expressions in different MS specifications. The relative contribution of each source is assessed through Monte Carlo simulations. We find that the main source of error is due to the misclassification of future regimes. |
Keywords: | Forecasting, Regime Shifts, Markov-Switching. |
JEL: | C22 C32 C53 |
Date: | 2005–03–22 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503018&r=ets |
By: | Ching-Kang Ing (Institute of Statistical Science, Academia Sinica) |
Abstract: | The predictive capability of a modification of Rissanen's accumulated prediction error (APE) criterion, APE$_{\delta_{n}}$,is investigated in infinite-order autoregressive (AR($\infty$)) models. Instead of accumulating squares of sequential prediction errors from the beginning, APE$_{\delta_{n}}$ is obtained by summing these squared errors from stage $n\delta_{n}$, where $n$ is the sample size and $0 < \delta_{n} < 1$ may depend on $n$. Under certain regularity conditions, an asymptotic expression is derived for the mean-squared prediction error (MSPE) of an AR predictor with order determined by APE$_{\delta_{n}}$. This expression shows that the prediction performances of APE$_{\delta_{n}}$ can vary dramatically depending on the choice of $\delta_{n}$. Another interesting finding is that when $\delta_{n}$ approaches 1 at a certain rate, APE$_{\delta_{n}}$ can achieve asymptotic efficiency in most practical situations. An asymptotic equivalence between APE$_{\delta_{n}}$ and an information criterion with a suitable penalty term is also established from the MSPE point of view. It offers a new perspective for comparing the information- and prediction-based model selection criteria in AR($\infty$) models. Finally, we provide the first asymptotic efficiency result for the case when the underlying AR($\infty$) model is allowed to degenerate to a finite autoregression. |
Keywords: | Accumulated prediction errors, Asymptotic equivalence, Asymptotic efficiency, Information criterion, Order selection, Optimal forecasting |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–03–23 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0503020&r=ets |
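The accumulated prediction error idea is mechanical enough to sketch: for each candidate AR order, re-estimate the model recursively, accumulate squared one-step prediction errors only from stage n*delta onward, and choose the order with the smallest total. The OLS estimator, delta = 0.5 and the toy AR(2) process below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def ols_ar(y, p):
    """Least-squares AR(p) coefficients estimated from the sample y."""
    X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

def ape(y, p, delta=0.5):
    """Accumulated squared one-step prediction errors for an AR(p),
    summed only from stage floor(n*delta) onward, in the spirit of APE_delta."""
    n = len(y)
    start = max(int(n * delta), 5 * p)
    err2 = 0.0
    for t in range(start, n):
        phi = ols_ar(y[:t], p)                       # estimate on data up to t-1
        pred = np.dot(phi, y[t - 1:t - p - 1:-1])    # one-step-ahead prediction of y_t
        err2 += (y[t] - pred) ** 2
    return err2

rng = np.random.default_rng(9)
y = np.zeros(400)
for t in range(2, 400):                              # true AR(2) data generating process
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()
scores = {p: ape(y, p) for p in range(1, 6)}
print("selected order:", min(scores, key=scores.get))
```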
By: | Rafal Weron (Hugo Steinhaus Center); Adam Misiorek (Institute of Power Systems Automation) |
Abstract: | In this paper we study simple time series models and assess their forecasting performance. In particular we calibrate ARMA and ARMAX (where the exogenous variable is the system load) processes. The models are tested on a time series of California power market system prices and loads from the period preceding and including the market crash. |
Keywords: | Electricity, price forecasting, ARMA model, seasonal component |
JEL: | C22 C53 L94 Q40 |
Date: | 2005–04–06 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0504001&r=ets |
By: | Chen Pu (Universität Bielefeld, Fakultät für Wirtschaftswissenschaften); Hsiao Chihying (Universität Bielefeld, Fakultät für Wirtschaftswissenschaften) |
Abstract: | In this paper we investigate the applicability of the subsampling procedure to testing cointegration relations in large multivariate systems. The subsampling technique is applied to overcome the difficulty of nonstandard distributions and nuisance parameters in testing for cointegration rank without an explicitly formulated structural model. The contribution of this paper is twofold: theoretically, it shows that the subsampling testing procedure is consistent and asymptotically most powerful; practically, it demonstrates that the subsampling procedure can be applied to determine the cointegration rank in large-scale models, where standard procedures already hit their limits. Especially in cases with few stochastic trends in a system, the subsampling procedure delivers robust and reliable results. |
Keywords: | Cointegration, Large System, Nonparametric Tests, Subsampling, PPP |
JEL: | C19 C40 C50 |
Date: | 2005–04–08 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0504002&r=ets |
By: | Patrick Crowley (Texas A&M University - Corpus Christi); Jim Lee (Texas A&M University - Corpus Christi) |
Abstract: | This article analyses the frequency components of European business cycles using real GDP by employing multiresolution decomposition (MRD) with the use of maximal overlap discrete wavelet transforms (MODWT). Static wavelet variance and correlation analysis is performed, and phasing is studied using co-correlation with the eurozone by scale. Lastly, dynamic conditional correlation GARCH models are used to obtain dynamic correlation estimates by scale against the EU to evaluate synchronicity of cycles through time. The general findings are that eurozone members fall into one of three categories: i) high static and dynamic correlations at all frequency cycles (e.g. France, Belgium, Germany), ii) low static and dynamic correlations, with little sign of convergence occurring (e.g. Greece), and iii) low static correlation but convergent dynamic correlations (e.g. Finland and Ireland). |
Keywords: | Business cycles, growth cycles, European Union, multiresolution analysis, wavelets, co-correlation, dynamic correlation. |
JEL: | E32 O52 |
Date: | 2005–03–17 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpma:0503015&r=ets |
By: | Patrick Marsh |
Abstract: | This paper considers the information available to invariant unit root tests at and near the unit root. Since all invariant tests will be functions of the maximal invariant, the Fisher information in this statistic will be the available information. The main finding of the paper is that the available information for all tests invariant to a linear trend is zero at the unit root. This result applies for any sample size, over a variety of distributions and correlation structures and is robust to the inclusion of any other deterministic component. In addition, an explicit bound upon the power of all invariant unit root tests is shown to depend solely upon the information. This bound is illustrated via comparison with the local-to-unity power envelope and a brief simulation study illustrates the impact that the requirements of invariance have on power. |
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:05/03&r=ets |
By: | Joaquim J.S. Ramalho (Department of Economics, University of Évora); Richard J. Smith (Department of Economics, University of Warwick) |
Abstract: | This paper proposes novel methods for the construction of tests for models specified by unconditional moment restrictions. It exploits the classical-like nature of generalized empirical likelihood (GEL) to define Pearson-type statistics for over-identifying moment conditions and parametric constraints based on contrasts of GEL implied probabilities, which are natural by-products of GEL estimation. As is increasingly recognized, GEL can possess both theoretical and empirical advantages over the more standard generalized method of moments (GMM). Monte Carlo evidence comparing GMM, GEL and Pearson-type statistics for over-identifying moment conditions indicates that the size properties of a particular Pearson-type statistic are competitive in most circumstances and an improvement over other statistics in many. |
Keywords: | GMM, Generalized Empirical Likelihood, Overidentifying Moments, Parametric Restrictions, Pearson-Type Tests |
JEL: | C13 C30 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:evo:wpecon:5_2005&r=ets |