NEP: New Economics Papers on Econometric Time Series
By: | Fuchun Li |
Abstract: | A new consistent test is proposed for the parametric specification of the diffusion function in a diffusion process without any restrictions on the functional form of the drift function. The data are assumed to be sampled discretely in a time interval that can be fixed or lengthened to infinity. The test statistic is shown to follow an asymptotic normal distribution under the null hypothesis that the parametric diffusion function is correctly specified. Monte Carlo simulations are conducted to examine the finite-sample performance of the test, revealing that the test has good size and power. |
Keywords: | Econometric and statistical methods; Interest rates |
JEL: | C12 C14 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:05-35&r=ets |
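The object under test in the entry above can be pictured with a kernel regression. Below is a minimal numpy sketch (not Li's actual statistic): a Nadaraya-Watson estimate of the squared diffusion function from discretely sampled data, set against a parametric null of CIR form sigma^2(x) = s2*x. All function names, bandwidths and parameter values here are illustrative assumptions.

```python
import numpy as np

def np_diffusion_estimate(x, delta, grid, h):
    """Nadaraya-Watson estimate of the squared diffusion function:
    sigma^2(x) ~ E[(X_{t+delta} - X_t)^2 | X_t = x] / delta."""
    dx2 = np.diff(x) ** 2
    u = (grid[:, None] - x[:-1][None, :]) / h
    k = np.exp(-0.5 * u ** 2)                   # Gaussian kernel weights
    return (k @ dx2) / (delta * k.sum(axis=1))

# Simulate a CIR-type path so the parametric null sigma^2(x) = s2 * x is true.
rng = np.random.default_rng(0)
delta, n, s2 = 1 / 252, 20000, 0.04
x = np.empty(n); x[0] = 0.06
for t in range(n - 1):
    x[t + 1] = abs(x[t] + 0.5 * (0.06 - x[t]) * delta
                   + np.sqrt(s2 * x[t] * delta) * rng.standard_normal())
grid = np.linspace(0.03, 0.10, 15)
est = np_diffusion_estimate(x, delta, grid, h=0.01)
print(np.c_[grid, est, s2 * grid])              # estimate vs. parametric null
```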
By: | Beirlant, Jan; Joossens, Elisabeth; Segers, Johan (Tilburg University, Center for Economic Research) |
Abstract: | The generalized Pareto distribution (GPD) is probably the most popular model for inference on the tail of a distribution. The peaks-over-threshold methodology postulates the GPD as the natural model for excesses over a high threshold. However, for the GPD to fit such excesses well, the threshold should often be rather large, thereby restricting the model to only a small upper fraction of the data. For heavy-tailed distributions, we propose a single-parameter extension of the GPD, motivated by a second-order refinement of the underlying Pareto-type model. Not only can the extended model be fitted to a larger fraction of the data, but in addition the resulting maximum likelihood estimator of the tail index is asymptotically unbiased. In practice, sample paths of the new tail index estimator as a function of the chosen threshold exhibit much larger regions of stability around the true value. We apply the method to daily log-returns of the euro-UK pound exchange rate. Some simulation results are presented as well. |
Keywords: | heavy tails; peaks-over-threshold; regular variation; tail index; bias; exchange rate; 62G20; 62G32; C13 |
JEL: | C14 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:dgr:kubcen:2005112&r=ets |
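A minimal peaks-over-threshold sketch, assuming scipy: it fits the plain GPD (not the authors' extended model) to exceedances of a simulated heavy-tailed sample, to make concrete the threshold/bias trade-off the paper addresses. The data and threshold choice are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
# Stand-in for daily log-returns: Student-t noise has a Pareto-type tail.
returns = rng.standard_t(df=4, size=5000)

u = np.quantile(returns, 0.95)               # high threshold
excesses = returns[returns > u] - u          # peaks over the threshold
xi, loc, beta = genpareto.fit(excesses, floc=0)
print(f"threshold={u:.3f}  tail index xi={xi:.3f}  scale={beta:.3f}")
# Lowering u uses more data but, for the plain GPD, raises the bias --
# the trade-off the extended model in the paper is designed to relax.
```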
By: | Michiel de Pooter (Faculty of Economics, Erasmus Universiteit Rotterdam); Martin Martens (Faculty of Economics, Erasmus Universiteit Rotterdam); Dick van Dijk (Faculty of Economics, Erasmus Universiteit Rotterdam) |
Abstract: | This paper investigates the merits of high-frequency intraday data when forming minimum variance portfolios and minimum tracking error portfolios with daily rebalancing from the individual constituents of the S&P 100 index. We focus on the issue of determining the optimal sampling frequency, which strikes a balance between the variance and the bias in covariance matrix estimates due to market microstructure effects such as non-synchronous trading and bid-ask bounce. The optimal sampling frequency typically ranges between 30 and 65 minutes, considerably lower than the popular five-minute frequency. We also examine how bias-correction procedures, based on the addition of leads and lags and on scaling, and a variance-reduction technique, based on subsampling, affect the performance. |
Keywords: | realized volatility; high-frequency data; volatility timing; mean-variance analysis; tracking error |
JEL: | G11 |
Date: | 2005–10–12 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20050089&r=ets |
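The variance/bias trade-off behind the optimal sampling frequency can be seen in a toy volatility-signature computation, sketched below with numpy under an assumed iid microstructure noise. The price process, noise scale and frequencies are made-up illustrations, not the paper's data.

```python
import numpy as np

def realized_variance(prices, step):
    """Realized variance from every `step`-th observation of a price path."""
    logp = np.log(prices[::step])
    return np.sum(np.diff(logp) ** 2)

rng = np.random.default_rng(2)
n = 390 * 60                                  # one trading day of 1-second 'prices'
true_sigma2 = 0.0001
efficient = np.cumsum(np.sqrt(true_sigma2 / n) * rng.standard_normal(n))
noise = 0.0005 * rng.standard_normal(n)       # bid-ask-style microstructure noise
prices = np.exp(efficient + noise)

for minutes in (1, 5, 30, 65):
    rv = realized_variance(prices, step=60 * minutes)
    print(f"{minutes:>3}-minute sampling: RV = {rv:.6f} (target {true_sigma2})")
```

Fine sampling piles up noise-induced bias; coarse sampling removes it at the cost of fewer observations, which is why an intermediate frequency can be optimal.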
By: | Torben G. Andersen; Tim Bollerslev; Francis X. Diebold |
Abstract: | A rapidly growing literature has documented important improvements in financial return volatility measurement and forecasting via use of realized variation measures constructed from high-frequency returns coupled with simple modeling procedures. Building on recent theoretical results in Barndorff-Nielsen and Shephard (2004a, 2005) for related bi-power variation measures, the present paper provides a practical and robust framework for non-parametrically measuring the jump component in asset return volatility. In an application to the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find that jumps are both highly prevalent and distinctly less persistent than the continuous sample path variation process. Moreover, many jumps appear directly associated with specific macroeconomic news announcements. Separating jump from non-jump movements in a simple but sophisticated volatility forecasting model, we find that almost all of the predictability in daily, weekly, and monthly return volatilities comes from the non-jump component. Our results thus set the stage for a number of interesting future econometric developments and important financial applications by separately modeling, forecasting, and pricing the continuous and jump components of the total return variation process. |
JEL: | C1 G1 |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:11775&r=ets |
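The jump/continuous split described above rests on comparing realized variance with bi-power variation. A minimal numpy sketch of that comparison, following the cited Barndorff-Nielsen and Shephard logic, with an illustrative simulated return vector rather than the paper's data:

```python
import numpy as np

def rv_bv_jump(returns):
    """Realized variance, bi-power variation, and the implied jump part."""
    rv = np.sum(returns ** 2)
    mu1 = np.sqrt(2 / np.pi)                  # E|Z| for standard normal Z
    bv = mu1 ** -2 * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1]))
    jump = max(rv - bv, 0.0)                  # truncate the difference at zero
    return rv, bv, jump

rng = np.random.default_rng(3)
r = 0.001 * rng.standard_normal(288)          # 5-minute returns, no jumps
r[100] += 0.01                                # inject one jump
print(rv_bv_jump(r))
```

Bi-power variation is robust to the injected jump while realized variance is not, so their difference isolates the jump contribution.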
By: | Carlos Capistrán-Carmona |
Abstract: | This paper documents that inflation forecasts of the Federal Reserve systematically under-predicted inflation before Volcker's appointment as Chairman and systematically over-predicted it afterward. It also documents that, under quadratic loss, commercial forecasts have information not contained in the forecasts of the Federal Reserve. It demonstrates that this evidence leads to a rejection of the joint hypothesis that the Federal Reserve has rational expectations and quadratic loss. To investigate the causes of this failure, this paper uses moment conditions derived from a model of an inflation targeting central bank to back out the loss function implied by the forecasts of the Federal Reserve. It finds that the cost of having inflation above the target was larger than the cost of having inflation below it for the post-Volcker Federal Reserve, and that the opposite was true for the pre-Volcker era. Once these asymmetries are taken into account, the Federal Reserve is found to be rational and to efficiently incorporate the information contained in forecasts from the Survey of Professional Forecasters. |
Keywords: | Asymmetric loss function, Inflation forecasts, Forecast Evaluation |
JEL: | C53 E52 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:127&r=ets |
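Under lin-lin (asymmetric piecewise-linear) loss, a rational forecast is a quantile rather than the mean, so the fraction of non-positive forecast errors estimates the asymmetry parameter. A hedged numpy sketch of that idea; this is one simple asymmetric-loss diagnostic, not the paper's moment-condition estimator, and the data are simulated:

```python
import numpy as np

def implied_asymmetry(actual, forecast):
    """Implied lin-lin asymmetry: with loss (tau - 1{e<0}) * e, e = y - f,
    a rational forecast is the tau-quantile, so tau is estimated by P(e <= 0)."""
    e = np.asarray(actual) - np.asarray(forecast)
    return np.mean(e <= 0)

rng = np.random.default_rng(4)
inflation = rng.normal(3.0, 1.0, 200)
# A systematically under-predicting forecaster, pre-Volcker style.
under_predicting = inflation - 0.5 + rng.normal(0, 0.3, 200)
print(f"implied tau = {implied_asymmetry(inflation, under_predicting):.2f}"
      "  (0.5 would correspond to symmetric loss)")
```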
By: | Daniel Ventosa-Santaularia; Antonio E. Noriega |
Abstract: | We study the phenomenon of spurious regression between two random variables when the generating mechanism for the individual series follows a stationary process around a trend with (possibly) multiple breaks in its level and slope. We develop the relevant asymptotic theory and show that spurious regression occurs independently of the structure assumed for the errors. In contrast to previous findings, the spurious relationship is less severe when breaks are present, whether or not the regression model includes a linear trend. Simulations confirm our asymptotic results and reveal that, in finite samples, the spurious regression is sensitive to the presence of a linear trend and to the relative locations of the breaks within the sample. |
Keywords: | Spurious regression, Structural breaks, Stationarity |
JEL: | C22 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:186&r=ets |
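The mechanics are easy to reproduce: regress two independent broken-trend stationary series on each other and watch the t-statistic reject. A small Monte Carlo sketch with numpy; break dates, slopes and sample size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
T, reps, tstats = 200, 2000, []
for _ in range(reps):
    t = np.arange(T)
    # Independent trend-stationary series, each with a mid-sample break.
    y = 0.5 * t + 3.0 * (t > T // 2) + rng.standard_normal(T)
    x = 0.3 * t + 0.2 * (t - T // 2) * (t > T // 2) + rng.standard_normal(T)
    X = np.column_stack([np.ones(T), x])
    b, res = np.linalg.lstsq(X, y, rcond=None)[:2]
    s2 = res[0] / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    tstats.append(b[1] / se)
print("share of |t| > 1.96:", np.mean(np.abs(tstats) > 1.96))
```

The rejection rate is near one even though the series are independent, which is the spurious-regression phenomenon the paper characterizes analytically.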
By: | Eymen Errais (Management Science and Engineering, Stanford University); Fabio Mercurio |
Abstract: | We introduce a simple extension of a shifted geometric Brownian motion for modelling forward LIBOR rates under their canonical measures. The extension is based on a parameter uncertainty modelled through a random variable whose value is drawn at an infinitesimal time after zero. The shift in the proposed model captures the skew commonly seen in the cap market, whereas the uncertain volatility component allows us to obtain more symmetric implied volatility structures. We show how this model can be calibrated to cap prices. We also propose an approximate analytical formula to price swaptions from the cap-calibrated model. Finally, we build a bridge between the cap and swaption markets by calibrating the correlation structure to swaption prices and analysing some implications of the calibrated model parameters. |
Keywords: | Libor Models, Volatility Skew, Interest Rate Derivatives |
JEL: | C6 G12 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:192&r=ets |
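A sketch of the pricing logic, assuming scipy: with a shifted lognormal forward and a volatility drawn from a discrete set just after time zero, a caplet is a weighted average of shifted Black prices. All inputs below are illustrative, and this is a caricature of the model, not the authors' calibrated version.

```python
import numpy as np
from scipy.stats import norm

def black_call(F, K, sigma, T):
    """Undiscounted Black (1976) call on a forward rate."""
    d1 = (np.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
    return F * norm.cdf(d1) - K * norm.cdf(d1 - sigma * np.sqrt(T))

def uncertain_vol_shifted_caplet(F0, K, alpha, sigmas, weights, T, annuity=1.0):
    """Caplet under a shifted lognormal whose volatility is drawn, just after
    time 0, from a discrete set: a weighted sum of shifted Black prices."""
    prices = [black_call(F0 + alpha, K + alpha, s, T) for s in sigmas]
    return annuity * np.dot(weights, prices)

price = uncertain_vol_shifted_caplet(F0=0.04, K=0.04, alpha=0.01,
                                     sigmas=[0.10, 0.25], weights=[0.6, 0.4],
                                     T=1.0)
print(f"caplet value per unit notional/accrual: {price:.6f}")
```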
By: | Christoph Schleicher; Francisco Barillas |
Abstract: | This paper examines evidence of long- and short-run co-movement in Canadian sectoral output data. Our framework builds on a vector-error-correction representation that allows us to test for and compute full-information maximum-likelihood estimates of models with codependent cycle restrictions. We find that the seven sectors under consideration contain five common trends and five codependent cycles, and use their estimates to obtain a multivariate Beveridge-Nelson decomposition to isolate and compare the common components. A forecast error variance decomposition indicates that some sectors, such as manufacturing and construction, are subject to persistent transitory shocks, whereas other sectors, such as financial services, are not. We also find that imposing common feature restrictions leads to a non-trivial gain in the ability to forecast both aggregate and sectoral output. Among the main conclusions is that manufacturing, construction, and the primary sector are the most important sources of business cycle fluctuations for the Canadian economy. |
Keywords: | common features, business cycles, vector autoregressions |
JEL: | C15 C22 C32 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:214&r=ets |
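For the common-trends side of such an exercise, a standard starting point is the Johansen trace test. A small sketch assuming statsmodels' coint_johansen, on simulated "sectoral" data with two common trends; the paper's codependent-cycle restrictions are not attempted here.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(18)
T = 400
trends = np.cumsum(rng.standard_normal((T, 2)), axis=0)   # 2 common trends
load = rng.standard_normal((2, 4))
sectors = trends @ load + rng.standard_normal((T, 4))     # 4 'sectoral outputs'

res = coint_johansen(sectors, det_order=0, k_ar_diff=1)
print("trace statistics:  ", np.round(res.lr1, 1))
print("95% critical values:", res.cvt[:, 1])              # columns: 90/95/99%
```

With four series driven by two common trends, the trace test should point to a cointegrating rank of two.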
By: | Vitaliy Vandrovych (International Business School, Brandeis University) |
Abstract: | This paper studies the dynamics of six major exchange rates and runs formal tests to distinguish among different types of nonlinearities. In particular, we study exchange rate returns, normalized exchange rates and exchange rate volatilities, classifying these series using the BDS test, correlation dimensions and maximal Lyapunov exponents. Estimates of dimension indicate high complexity in all series, suggesting that the series are either stochastic processes or high-dimensional deterministic processes. Though we obtain a number of positive estimates of the Lyapunov exponent, they are quite small, and it seems more appropriate to interpret them as indicating a stochastic origin of the series. |
JEL: | C22 F31 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:234&r=ets |
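A minimal version of the first classification step, assuming the bds function in statsmodels.tsa.stattools: the BDS test should not reject for iid data but should reject for a GARCH-type series that is dependent through its variance. The series below are simulated stand-ins for exchange rate returns.

```python
import numpy as np
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(6)
iid = rng.standard_normal(1000)              # null case: iid, BDS should not reject

# A GARCH(1,1)-type series: iid is violated through the conditional variance.
garch, h = np.empty(1000), 0.1
for t in range(1000):
    h = 0.01 + 0.09 * (garch[t - 1] ** 2 if t else 0.0) + 0.90 * h
    garch[t] = np.sqrt(h) * rng.standard_normal()

for name, x in [("iid", iid), ("garch", garch)]:
    stat, pval = bds(x, max_dim=3)           # statistics for embedding dims 2-3
    print(name, np.round(stat, 2), np.round(pval, 3))
```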
By: | J. Huston McCulloch |
Abstract: | Adaptive Least Squares (ALS), i.e. recursive regression with asymptotically constant gain, as proposed by Ljung (1992), Sargent (1993, 1999), and Evans and Honkapohja (2001), is an increasingly widely used method of estimating time-varying relationships and of proxying agents’ time-evolving expectations. This paper provides theoretical foundations for ALS as a special case of the generalized Kalman solution of a Time Varying Parameter (TVP) model. This approach is in the spirit of that proposed by Ljung (1992) and Sargent (1999), but unlike theirs, nests the rigorous Kalman solution of the elementary Local Level Model, and employs a very simple, yet rigorous, initialization. Unlike other approaches, the proposed method allows the asymptotic gain to be estimated by maximum likelihood (ML). The ALS algorithm is illustrated with univariate time series models of U.S. unemployment and inflation. Because the null hypothesis that the coefficients are in fact constant lies on the boundary of the permissible parameter space, the usual regularity conditions for the chi-square limiting distribution of likelihood-based test statistics are not met. Consequently, critical values of the Likelihood Ratio test statistics are established by Monte Carlo means and used to test the constancy of the parameters in the estimated models. |
Keywords: | Kalman Filter, Adaptive Learning, Adaptive Least Squares, Time Varying Parameter Model, Natural Unemployment Rate, Inflation Forecasting |
JEL: | C22 E37 E31 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:239&r=ets |
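The core ALS recursion is short. Below is a hedged constant-gain recursive least squares sketch in numpy, with a naive identity initialization (not the paper's rigorous one) and an arbitrary gain:

```python
import numpy as np

def adaptive_least_squares(X, y, gain):
    """Constant-gain recursive least squares (the ALS limiting case):
    R_t    = R_{t-1} + gain * (x_t x_t' - R_{t-1})
    beta_t = beta_{t-1} + gain * R_t^{-1} x_t (y_t - x_t' beta_{t-1})."""
    k = X.shape[1]
    beta, R, path = np.zeros(k), np.eye(k), []
    for x, yt in zip(X, y):
        R = R + gain * (np.outer(x, x) - R)
        beta = beta + gain * np.linalg.solve(R, x) * (yt - x @ beta)
        path.append(beta.copy())
    return np.array(path)

rng = np.random.default_rng(7)
T = 500
x = np.column_stack([np.ones(T), rng.standard_normal(T)])
slope = np.linspace(1.0, 2.0, T)             # slowly drifting true coefficient
y = 0.5 * x[:, 0] + slope * x[:, 1] + 0.5 * rng.standard_normal(T)
path = adaptive_least_squares(x, y, gain=0.05)
print("final estimates:", np.round(path[-1], 2), " true final: [0.5, 2.0]")
```

The constant gain lets the estimates track the drifting slope, which is exactly what makes ALS a proxy for evolving expectations.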
By: | L. Christopher Plantier; Ozer Karagedikli |
Abstract: | The output gap plays a crucial role in the thinking and actions of many central banks, but real-time measurements undergo substantial revisions as more data become available (Orphanides (2001), Orphanides and van Norden (forthcoming)). Some central banks, such as the Bank of Canada and the Reserve Bank of New Zealand, augment the Hodrick and Prescott (1997) filter with conditioning structural information to mitigate the impact of revisions to the output gap estimates. In this paper, we use a state-space Kalman filter framework to examine whether the augmented, so-called "multivariate", filters achieve this objective. We find that the multivariate filters are no better than the Hodrick-Prescott filter for real-time NZ data. The addition of structural equations increases the number of signal equations, but at the same time adds more unobserved trend/equilibrium variables to the system. We find that how these additional trends/equilibrium values are treated matters a lot, as they increase the uncertainty around the estimates. In addition, the revisions from these models can be as large as those from a univariate Hodrick-Prescott filter. |
Keywords: | output gap, real time, multivariate filters |
JEL: | C32 E32 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:250&r=ets |
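The revision problem is easy to demonstrate: the HP gap estimated at the end of a real-time sample differs from the gap at the same date once later data arrive. A sketch assuming statsmodels' hpfilter and a simulated "log GDP" series (all numbers illustrative):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(8)
T = 120
gdp = np.cumsum(0.5 + 0.8 * rng.standard_normal(T))   # random walk with drift

cycle_full, _ = hpfilter(gdp, lamb=1600)              # full-sample (final) gap
cycle_rt, _ = hpfilter(gdp[:100], lamb=1600)          # gap known in real time
print("real-time gap at t=99:", round(cycle_rt[-1], 3),
      "  final gap at t=99:", round(cycle_full[99], 3))
```

The endpoint estimate is the one policy must act on, and it is precisely the one that gets revised; the paper asks whether adding structural equations shrinks that revision.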
By: | Tatsuma Wada; Pierre Perron |
Abstract: | Recent work on trend-cycle decompositions for US real GDP yields the following puzzling features: methods based on Unobserved Components models, the Beveridge-Nelson decomposition, the Hodrick-Prescott filter and others yield very different cycles that bear little resemblance to the NBER chronology, ascribe much of the movement to the trend leaving little to the cycle, and in some cases imply a negative correlation between the noise to the cycle and the trend. We argue that these features are artifacts created by neglecting the presence of a change in the slope of the trend function of real GDP in 1973. Once this is properly accounted for, all methods yield the same cycle, with a trend that is non-stochastic except for a few periods around 1973. This cycle is larger in magnitude than previously reported, accords very well with the NBER chronology, and implies no correlation between the trend and the cycle, since the former is non-stochastic. We propose a new approach to univariate trend-cycle decomposition using a generalized Unobserved Components model in which the errors of both the slope of the trend function and the cycle component follow a mixture of Normals distribution. It can account endogenously for infrequent changes such as level shifts and changes in slope, as well as for different variances in expansions and recessions. It yields a decomposition that accords very well with common notions of the business cycle. |
Keywords: | Trend-Cycle Decomposition, Structural Change, Non Gaussian Filtering, Unobserved Components Model, Beveridge-Nelson Decomposition |
JEL: | C22 E32 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:252&r=ets |
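A Gaussian unobserved-components baseline for such decompositions can be run with statsmodels' UnobservedComponents; the sketch below fits a local linear trend plus stochastic cycle to a series with a mid-sample slope break. It is only a baseline: the paper's mixture-of-Normals errors, which pick up the break endogenously, are not implemented here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(19)
T = 300
slope = np.where(np.arange(T) < 150, 0.8, 0.3)        # slope break mid-sample
trend = np.cumsum(slope + 0.05 * rng.standard_normal(T))
cycle = 2 * np.sin(np.arange(T) * 2 * np.pi / 40)
y = trend + cycle + 0.3 * rng.standard_normal(T)

mod = sm.tsa.UnobservedComponents(y, level='local linear trend',
                                  cycle=True, stochastic_cycle=True)
res = mod.fit(disp=False)
print("smoothed cycle, last 5 points:", np.round(res.cycle.smoothed[-5:], 2))
```

With the Gaussian model, the unmodelled slope break leaks into the estimated cycle, illustrating the distortion the authors' mixture specification is designed to remove.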
By: | Kirstin Hubrich; David F. Hendry (Research Department European Central Bank) |
Abstract: | We explore whether forecasting an aggregate variable using information on its disaggregate components can improve the prediction mean squared error over forecasting the disaggregates and aggregating those forecasts, or using only aggregate information in forecasting the aggregate. An implication of a general theory of prediction is that the first approach should outperform the alternative methods of forecasting the aggregate in population. However, forecast models are based on sample information. The data generation process and the selected forecast model might differ. We show how changes in collinearity between regressors affect the bias-variance trade-off in model selection and how the criterion used to select variables in the forecasting model affects forecast accuracy. We investigate why forecasting the aggregate using information on its disaggregate components improves the forecast accuracy of the aggregate forecast of Euro area inflation in some situations, but not in others. |
Keywords: | Disaggregate information, predictability, forecast model selection, VAR, factor models |
JEL: | C32 C53 E31 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:270&r=ets |
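The aggregate-versus-disaggregate comparison can be illustrated with a toy simulation, sketched below in numpy: forecast the aggregate directly with one AR(1), or forecast two heterogeneous AR(1) components and sum. The model choices are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(9)

def ar1_forecast(series):
    """One-step AR(1) forecast, slope estimated by OLS on the history."""
    b = np.polyfit(series[:-1], series[1:], 1)[0]
    return b * series[-1]

errs_agg, errs_dis = [], []
for _ in range(300):
    T, phi = 200, np.array([0.8, 0.3])        # heterogeneous component dynamics
    comp = np.zeros((T, 2))
    for t in range(1, T):
        comp[t] = phi * comp[t - 1] + rng.standard_normal(2)
    agg = comp.sum(axis=1)
    errs_agg.append(agg[-1] - ar1_forecast(agg[:-1]))
    errs_dis.append(agg[-1] - sum(ar1_forecast(comp[:-1, i]) for i in range(2)))

print("MSE, aggregate model:   ", np.mean(np.square(errs_agg)).round(3))
print("MSE, summed components: ", np.mean(np.square(errs_dis)).round(3))
```

Whether the disaggregate route wins depends on estimation noise versus the heterogeneity it exploits, which is the bias-variance trade-off the abstract describes.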
By: | Aaron Smallwood; Alex Maynard; Mark Wohar |
Abstract: | Persistent regressors pose a common problem in predictive regressions. Tests of the forward rate unbiasedness hypothesis (FRUH) constitute a prime example. Standard regression tests that strongly reject FRUH have been questioned on the grounds of potential long memory in the forward premium. Researchers have argued that this could create a regression imbalance, thereby invalidating standard statistical inference. To address this concern we employ a two-step procedure that rebalances the predictive equation, while still permitting us to impose the null of FRUH. We conduct a comprehensive simulation study to validate our procedure. The simulations demonstrate the good small-sample performance of our two-stage procedure, and its robustness to possible errors in the first-stage estimation of the memory parameter. By contrast, the simulations for standard regression tests show the potential for significant size distortion, validating the concerns of previous researchers. Our empirical application to excess returns suggests less evidence against FRUH than is found using the standard, but possibly questionable, t-tests. |
Keywords: | Long Memory; Predictive Regressions; Forward Rate Unbiasedness Hypothesis |
JEL: | C22 C12 F31 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:384&r=ets |
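A schematic of the two-step idea, in numpy: estimate the memory parameter d of the regressor by log-periodogram (GPH) regression, then fractionally difference it to rebalance the predictive equation. The bandwidth, memory value and series are illustrative, and this is a simplification of the authors' procedure.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d."""
    n = len(x)
    m = m or int(n ** 0.5)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = np.column_stack([np.ones(m), -np.log(4 * np.sin(freqs / 2) ** 2)])
    return np.linalg.lstsq(X, np.log(I), rcond=None)[0][1]

def frac_diff(x, d):
    """Apply (1-L)^d via its binomial expansion, truncated at the sample start."""
    w = [1.0]
    for k in range(1, len(x)):
        w.append(w[-1] * (k - 1 - d) / k)
    w = np.array(w)
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

rng = np.random.default_rng(10)
# Build a long-memory 'forward premium' by fractional integration (d = 0.4).
premium = frac_diff(rng.standard_normal(1000), d=-0.4)
d_hat = gph_estimate(premium)                 # first stage
balanced = frac_diff(premium, d_hat)          # second stage: rebalanced regressor
print(f"estimated d = {d_hat:.2f}")
```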
By: | Magdalena E. Sokalska; Ananda Chanda (Finance, New York University); Robert F. Engle |
Abstract: | This paper proposes a new way of modeling and forecasting intraday returns. We decompose the volatility of high frequency asset returns into components that may be easily interpreted and estimated. The conditional variance is expressed as a product of daily, diurnal and stochastic intraday volatility components. This model is applied to a comprehensive sample consisting of 10-minute returns on more than 2500 US equities. We apply a number of different specifications. Apart from building a new model, we obtain several interesting forecasting results. In particular, it turns out that forecasts obtained from the pooled cross section of groups of companies seem to outperform the corresponding forecasts from company-by-company estimation. |
Keywords: | ARCH, Intra-day Returns, Volatility |
JEL: | C22 G15 C53 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:409&r=ets |
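A sketch of the multiplicative decomposition on a toy return panel, assuming numpy and pandas: a slow daily variance component, a diurnal pattern estimated from standardized squared returns, and a residual intraday factor. The paper models that residual with GARCH; that last estimation step is omitted here, and all numbers are illustrative.

```python
import numpy as np
import pandas as pd

# Toy 10-minute return panel: rows = days, columns = intraday bins.
rng = np.random.default_rng(11)
days, bins = 250, 39
diurnal = 1.0 + 0.8 * np.cos(np.linspace(0, 2 * np.pi, bins))  # U-shaped pattern
daily_var = np.exp(rng.normal(-9, 0.3, days))                  # slow daily level
r = np.sqrt(daily_var[:, None] * diurnal[None, :]) * rng.standard_normal((days, bins))

# Step 1: daily variance component (here: trailing mean of daily realized var).
rv_daily = (r ** 2).sum(axis=1)
v_hat = pd.Series(rv_daily).rolling(22).mean().to_numpy()
# Step 2: diurnal component from returns standardized by the daily level.
std1 = r / np.sqrt((v_hat / bins)[:, None])
s_hat = np.nanmean(std1 ** 2, axis=0)
# Step 3: residual intraday component, to be fed to a GARCH model.
q = std1 ** 2 / s_hat
print("estimated diurnal pattern (first 5 bins):", np.round(s_hat[:5], 2))
```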
By: | James Morley; Tara M. Sinclair (Economics, Washington University) |
Abstract: | While tests for unit roots and cointegration have important econometric and economic implications, they do not always offer conclusive results. For example, Rudebusch (1992; 1993) demonstrates that standard unit root tests have low power against estimated trend stationary alternatives. In addition, Perron (1989) shows that standard unit root tests cannot always distinguish unit root from stationary processes that contain segmented or shifted trends. Recent research (Harvey 1993; Engel and Morley 2001; Morley, Nelson et al. 2003; Morley 2004; Sinclair 2004) suggests that unobserved components models can provide a useful framework for representing economic time series that contain unit roots, including those that are cointegrated. These series can be modeled as containing an unobserved permanent component, representing the stochastic trend, and an unobserved transitory component, representing the stationary component of the series. These unobserved components are then estimated using the Kalman filter. The unobserved components framework can also provide a more powerful way to test for unit roots and cointegration than what is currently available (Nyblom and Harvey 2000). This paper develops a new test that nests a partial unobserved components model within a more general unobserved components model. This nesting allows the general and the restricted models to be compared using a likelihood ratio test. The likelihood ratio test statistic has a nonstandard distribution, but Monte Carlo simulation can provide its proper distribution. The simulation generates data from the estimated partial unobserved components model, which serves as the null hypothesis. Consequently, the null hypothesis for this test is stationarity, which is useful in many cases. In this sense our test is like the well-known KPSS test (Kwiatkowski, Phillips et al. 1992), but our test is a parametric version which provides more power by considering the unobserved components structure in the calculation of the test statistic. This more powerful test can be used to evaluate important macroeconomic theories such as the permanent income hypothesis, real business cycle theories, and purchasing power parity for exchange rates. |
Keywords: | unobserved components, unit roots, cointegration |
JEL: | C32 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:451&r=ets |
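The simulation logic for a likelihood ratio test with a non-standard null distribution can be sketched compactly: fit the restricted model, generate artificial samples from it, and read the critical value off the simulated LR distribution. The toy below (numpy only) nests iid noise inside an AR(1); the paper's actual nesting is between unobserved components models.

```python
import numpy as np

rng = np.random.default_rng(20)

def lr_stat(y):
    """LR of an AR(1) alternative against iid noise around a mean --
    a stand-in for the paper's nested unobserved-components comparison."""
    x, z = y[1:], y[:-1]
    e0 = x - x.mean()                        # restricted: iid around a mean
    b = np.polyfit(z, x, 1)
    e1 = x - np.polyval(b, z)                # unrestricted: AR(1)
    n = len(x)
    return n * (np.log(e0 @ e0 / n) - np.log(e1 @ e1 / n))

y_obs = np.zeros(200)
for t in range(1, 200):
    y_obs[t] = 0.6 * y_obs[t - 1] + rng.standard_normal()

# Null distribution by simulation: draw from the fitted restricted model.
null_lrs = [lr_stat(y_obs.mean() + y_obs.std() * rng.standard_normal(200))
            for _ in range(1000)]
print("LR =", round(lr_stat(y_obs), 1),
      "  simulated 95% critical value =", round(np.quantile(null_lrs, 0.95), 1))
```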
By: | A. Onatski; V. Karguine |
Abstract: | Data in which each observation is a curve occur in many applied problems. This paper explores prediction in time series in which the data are generated by a curve-valued autoregression process. It develops a novel technique, the predictive factor decomposition, for estimation of the autoregression operator, which is designed to be better suited for prediction purposes than the principal components method. The technique is based on finding a reduced-rank approximation to the autoregression operator that minimizes the norm of the expected prediction error. Implementing this idea, we relate the operator approximation problem to an eigenvalue problem for an operator pencil that is formed by the cross-covariance and covariance operators of the autoregressive process. We develop an estimation method based on regularization of the empirical counterpart of this eigenvalue problem, and prove that with a certain choice of parameters, the method consistently estimates the predictive factors. In addition, we show that forecasts based on the estimated predictive factors converge in probability to the optimal forecasts. The new method is illustrated by an analysis of the dynamics of the term structure of Eurodollar futures rates. We restrict the sample to the period of normal growth and find that in this subsample the predictive factor technique not only outperforms the principal components method but also performs on par with the best available prediction methods. |
Keywords: | Functional data analysis; Dimension reduction; Reduced-rank regression; Principal component; Predictive factor; Generalized eigenvalue problem; Term structure; Interest rates |
JEL: | C23 C53 E43 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:59&r=ets |
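In finite dimensions, the predictive-factor idea reduces to a generalized symmetric eigenproblem built from covariance and cross-covariance matrices. A regularized caricature of that construction, assuming scipy; the ridge term stands in for the paper's regularization, and the exact operator pencil there differs.

```python
import numpy as np
from scipy.linalg import eigh

def predictive_factors(X, k, ridge=1e-3):
    """Directions of X_t most useful for predicting X_{t+1}: the sample
    analogue of an eigenproblem for the pencil formed by the covariance
    and cross-covariance matrices (a finite-dimensional caricature)."""
    X0, X1 = X[:-1] - X[:-1].mean(0), X[1:] - X[1:].mean(0)
    n, p = X0.shape
    C00 = X0.T @ X0 / n + ridge * np.eye(p)
    C11 = X1.T @ X1 / n + ridge * np.eye(p)
    C01 = X0.T @ X1 / n
    A = C01 @ np.linalg.solve(C11, C01.T)    # symmetric PSD left-hand side
    vals, vecs = eigh(A, C00)                # generalized symmetric eigenproblem
    return vecs[:, ::-1][:, :k]              # top-k predictive directions

rng = np.random.default_rng(12)
n, p = 500, 8
u = np.ones(p) / np.sqrt(p)
B = 0.9 * np.outer(u, u)                     # rank-1 predictable dynamics
X = np.zeros((n, p))
for t in range(1, n):
    X[t] = X[t - 1] @ B + rng.standard_normal(p)
V = predictive_factors(X, k=1)
print("leading predictive direction:", np.round(V[:, 0], 2))
```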
By: | Xiao Qin; Gee Kwang Randolph Tan |
Abstract: | Diba and Grossman (1988) and Hamilton and Whiteman (1985) recommended unit root tests for rational bubbles. They argued that if stock prices are not more explosive than dividends, then it can be concluded that rational bubbles are not present. Evans (1991) demonstrated that these tests will fail to detect the class of rational bubbles that collapse periodically. When such bubbles are present, stock prices will not appear more explosive than dividends on the basis of these tests, even though the bubbles are substantial in magnitude and volatility. Hall et al. (1999) show that the power of the unit root test can be improved substantially when the underlying process of the sample observations is allowed to follow a first-order Markov process. Our paper applies unit root tests to the property prices of Hong Kong and Seoul, allowing the data generating process to follow a three-state Markov chain. The null hypothesis of a unit root is tested against the explosive-bubble or stable alternative. Simulation studies are used to generate the critical values for the one-sided test. The time series used in the tests are the monthly price and rent indices of Seoul’s housing (1986:1 to 2003:6) and Hong Kong’s retail premises (1980:12 to 2003:1). The investigations show that only one state appears to be highly likely in all series under investigation, and the switching unit root procedure fails to find explosive bubbles in either price series. |
Keywords: | unit root, bootstrap, Markov-Switching |
JEL: | C52 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:95&r=ets |
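Simulated critical values for a one-sided, right-tailed unit root test (explosive alternative) take only a few lines. The sketch below (numpy) does the simple iid-null case, whereas the paper simulates under a three-state Markov-switching law.

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic from the regression dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(15)
T, reps = 200, 5000
null_stats = [df_tstat(np.cumsum(rng.standard_normal(T))) for _ in range(reps)]
cv = np.quantile(null_stats, 0.95)           # right tail: explosive alternative
print(f"simulated 5% critical value (right-tailed): {cv:.2f}")
```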
By: | Annastiina Silvennoinen (Department of Economic Statistics, Stockholm School of Economics); Timo Teräsvirta (Department of Economic Statistics, Stockholm School of Economics) |
Abstract: | In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The approach adopted here is based on the decomposition of the covariances into correlations and standard deviations. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to an endogenous or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the Standard & Poor 500 stock index completes the paper. The model is estimated for the full five-dimensional system as well as several subsystems, and the results are discussed in detail. |
Keywords: | multivariate GARCH; constant conditional correlation; dynamic conditional correlation; return comovement; variable correlation GARCH model; volatility model evaluation |
JEL: | C12 C32 C51 C52 G1 |
Date: | 2005–10–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:168&r=ets |
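The correlation dynamics can be sketched directly: a logistic transition function blends two extreme correlation states, and the blend is again a valid correlation matrix. A minimal numpy illustration with arbitrary parameters, not the authors' estimated model:

```python
import numpy as np

def stcc_correlation(R1, R2, s, gamma, c):
    """Smoothly time-varying correlation matrix: a logistic blend of two
    extreme states R1, R2, driven by a transition variable s."""
    G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))   # transition function in (0,1)
    return (1 - G) * R1 + G * R2                 # convex combo stays a corr. matrix

R_low = np.array([[1.0, 0.2], [0.2, 1.0]])
R_high = np.array([[1.0, 0.8], [0.8, 1.0]])
for s in (-2.0, 0.0, 2.0):                       # e.g. a lagged market return
    Rt = stcc_correlation(R_low, R_high, s, gamma=2.0, c=0.0)
    print(f"s={s:+.1f} -> corr = {Rt[0, 1]:.3f}")
```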
By: | Changli He (Department of Economic Statistics, Stockholm School of Economics); Annastiina Silvennoinen (Department of Economic Statistics, Stockholm School of Economics); Timo Teräsvirta (Department of Economic Statistics, Stockholm School of Economics) |
Abstract: | In this paper we consider the third-moment structure of a class of nonlinear time series models. Empirically it is often found that the marginal distribution of financial time series is skewed. It is therefore important to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modelling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks. We investigate the implications these models have for the third-moment structure of the marginal distribution, and the conditions under which the unconditional distribution exhibits skewness as well as a nonzero third-order autocovariance structure. In this respect, the asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible, explicit analytical expressions are provided for all third-order moments and cross-moments. Finally, we introduce a new tool, the shock impact curve, which can be used to investigate the impact of shocks on the conditional mean squared error of the return. |
Keywords: | asymmetry; GARCH; nonlinearity; shock impact curve; time series; unconditional skewness |
JEL: | C22 |
Date: | 2005–10–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:169&r=ets |
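The paper's headline point, that asymmetry in the conditional mean, more than in the variance, generates unconditional skewness, can be illustrated by simulation. A numpy sketch with a deliberately asymmetric conditional mean; the model and coefficients are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(16)
n = 100000
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * max(y[t - 1], 0.0) + e[t]   # mean responds only to positive lags
s = y - y.mean()
print("unconditional skewness:", (np.mean(s**3) / np.mean(s**2) ** 1.5).round(3))
print("third-order autocovariance E[y_t^2 y_{t-1}]:",
      np.mean(s[1:] ** 2 * s[:-1]).round(3))
```

Despite symmetric innovations, the asymmetric mean recursion produces a skewed marginal distribution and nonzero third-order autocovariances, the two features the paper characterizes analytically.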
By: | Eleftherios Giovanis (Chamber of Commerce of Serres, Greece) |
Abstract: | For hundreds of years the future has preoccupied people. The ancient Greeks, Romans, Egyptians, Indians, Chinese and other great ancient cultures, as well as modern ones such as the English, the Germans and the Americans, have tried, with the help of advances in technology and computing, to improve forecasts of the future to a great degree, whether in meteorology, seismology, statistics, management or economics. People have always been, and remain, more interested in the future than in the past or the present. Where will I be working, and under what conditions? Will I marry, and if I do, will I someday separate? Will I be fired from my job? Is there any possibility that a nuclear war will start? And if a war does break out, what extent will it reach and what consequences will it have? Should I declare war on my competitor by cutting my prices, and by how much? Or is it better to ally with him, because the losses I may suffer if I lose the war would be disastrous? The same reasoning applies to actual military conflict: should I bomb with nuclear weapons, or should I also consider the environmental pollution that would result? These and many other questions occupy billions of people daily. This research certainly does not aim or aspire to answer all of these questions, nor even a few of them. It simply presents an alternative forecasting method for the grant, private consumption, and gross domestic and national product, one that can also be applied, with reservations and always after many trials, in meteorology and in other sciences. |
Keywords: | basic econometrics; moving average; moving median; forecasting; ARIMA; autoregressive moving median model. |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511013&r=ets |
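The forecasting device named in the keywords is easy to sketch: replace the moving average with a moving median, which is robust to a single gross outlier. A toy numpy illustration; window length and data are arbitrary:

```python
import numpy as np

def moving_stat_forecast(x, window, stat):
    """One-step 'forecast' = a statistic of the last `window` observations."""
    return stat(x[-window:])

rng = np.random.default_rng(13)
series = 100 + np.cumsum(rng.normal(0, 1, 60))
series[55] += 25                              # one gross outlier inside the window
f_mean = moving_stat_forecast(series, 12, np.mean)
f_median = moving_stat_forecast(series, 12, np.median)
print(f"moving average: {f_mean:.1f}   moving median: {f_median:.1f}")
# The median-based forecast is far less distorted by the single outlier,
# which is the intuition behind replacing MA terms with moving medians.
```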
By: | Hyungsik Roger Moon; Benoit Perron; Peter C.B. Phillips |
Abstract: | The asymptotic local power of various panel unit root tests is investigated. The (Gaussian) power envelope is obtained under homogeneous and heterogeneous alternatives. The envelope is compared with the asymptotic power functions for the pooled t-test, the Ploberger-Phillips (2002) test, and a point optimal test in neighborhoods of unity that are of order n^(1/4)T^(-1) and n^(1/2)T^(-1), depending on whether or not incidental trends are extracted from the panel data. In the latter case, when the alternative hypothesis is homogeneous across individuals, it is shown that the point optimal test and the Ploberger-Phillips test both achieve the power envelope and are uniformly most powerful, in contrast to point optimal unit root tests for time series. Some simulations examining the finite sample performance of the tests are reported. |
Keywords: | Asymptotic power envelope, common point optimal test, heterogeneous alternatives, incidental trends, local to unity, power function, panel unit root test |
JEL: | C22 C23 |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:scp:wpaper:05-38&r=ets |
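The pooled t-statistic referred to above, under a homogeneous local-to-unity alternative, can be sketched as follows (numpy; the n^(1/2)T^(-1) neighborhood is hard-coded and no incidental trends are extracted, so this is only the simplest case in the paper's taxonomy):

```python
import numpy as np

def pooled_t(panel):
    """Pooled panel unit-root t-statistic from the regression
    dy_it = rho * y_{i,t-1} + e_it, pooled across all units."""
    dy = np.diff(panel, axis=1).ravel()
    ylag = panel[:, :-1].ravel()
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(17)
n, T, c = 50, 100, 2.0
rho = 1 - c / (np.sqrt(n) * T)               # homogeneous local-to-unity root
panel = np.zeros((n, T))
for t in range(1, T):
    panel[:, t] = rho * panel[:, t - 1] + rng.standard_normal(n)
print("pooled t-statistic:", round(pooled_t(panel), 2))
```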
By: | Jaromír Beneš (Czech National Bank, Monetary and Statistics Department, Prague, Czech Republic); David Vávra (Czech National Bank, Monetary and Statistics Department, Prague, Czech Republic) |
Abstract: | We propose the method of eigenvalue filtering as a new tool to extract time series subcomponents (such as business-cycle or irregular) defined by properties of the underlying eigenvalues. We logically extend the Beveridge-Nelson decomposition of VAR time-series models, focusing on the transient component. We introduce the canonical state-space representation of VAR models to facilitate this type of analysis. We illustrate eigenvalue filtering by examining a stylized model of inflation determination estimated on Czech data. We characterize the estimated components of CPI, WPI and import inflation, together with the real production wage and real output, survey their basic properties, and impose an identification scheme to calculate the structural innovations. We test the results in a simple bootstrap simulation experiment. We find two major areas for further research: first, verifying and improving the robustness of the method, and second, exploring its potential for empirical validation of structural economic models. |
Keywords: | Business cycle; inflation; eigenvalues; filtering; Beveridge-Nelson decomposition; time series analysis. |
JEL: | C32 E32 |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20050549&r=ets |
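A finite-order caricature of eigenvalue filtering, in numpy: fit a VAR(1), rotate the state into eigen-coordinates of the estimated coefficient matrix, and keep only components whose eigenvalues pass a modulus test (here, the transient ones). The threshold and data are illustrative assumptions.

```python
import numpy as np

def eigenvalue_filter(X, keep):
    """Split a VAR(1) state into eigen-components and retain those whose
    eigenvalues satisfy `keep` (e.g. transient ones with small modulus)."""
    X0, X1 = X[:-1], X[1:]
    A = np.linalg.lstsq(X0, X1, rcond=None)[0].T     # OLS VAR(1) matrix
    vals, V = np.linalg.eig(A)
    Z = np.linalg.solve(V, X.T)                      # states in eigen-coordinates
    mask = keep(vals)
    return np.real(V[:, mask] @ Z[mask]).T, vals     # filtered component

rng = np.random.default_rng(14)
T = 300
X = np.zeros((T, 2))
A_true = np.array([[0.98, 0.0], [0.3, 0.5]])         # one persistent, one transient root
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(2)
transient, vals = eigenvalue_filter(X, keep=lambda v: np.abs(v) < 0.9)
print("estimated eigenvalues:", np.round(vals, 2))
```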
By: | Szymon Borak; Wolfgang Härdle; Rafal Weron |
JEL: | C16 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-008&r=ets |
By: | Matthias R. Fengler |
Abstract: | The pricing accuracy and pricing performance of local volatility models crucially depend on the absence of arbitrage in the implied volatility surface: an input implied volatility surface that is not arbitrage-free invariably results in negative transition probabilities and/or negative local volatilities and, ultimately, in mispricings. The common smoothing algorithms for the implied volatility surface cannot guarantee the absence of arbitrage. Here, we propose an approach for smoothing the implied volatility smile in an arbitrage-free way. Our methodology is simple to implement, computationally cheap and builds on the well-founded theory of natural smoothing splines under suitable shape constraints. Unlike other methods, our approach also works when input data are scarce and not arbitrage-free. Thus, it can easily be integrated into standard local volatility pricers. |
Keywords: | Arbitrage-Free Smoothing, Volatility, Implied Volatility Surface |
JEL: | C14 C81 G12 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-019&r=ets |
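The arbitrage constraints such a smoother must respect can be checked mechanically: call prices computed from the smile must be convex in strike, and a negative second difference signals a butterfly arbitrage, i.e. the negative "transition probabilities" the abstract warns about. A sketch assuming scipy; the deliberately kinked smile below fails the check.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, sigma, r=0.0):
    """Black-Scholes call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def butterfly_violations(S, T, strikes, vols):
    """Static-arbitrage screen: second differences of call prices in strike
    should be non-negative; negative values flag butterfly arbitrage."""
    C = bs_call(S, np.asarray(strikes), T, np.asarray(vols))
    return np.diff(C, 2)

strikes = np.array([80, 90, 100, 110, 120.0])
vols = np.array([0.25, 0.21, 0.20, 0.26, 0.22])   # a deliberately kinked smile
d2 = butterfly_violations(S=100.0, T=0.5, strikes=strikes, vols=vols)
print("second differences of call prices:", np.round(d2, 4),
      "-> arbitrage where negative")
```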