nep-ets New Economics Papers
on Econometric Time Series
Issue of 2005‒06‒14
24 papers chosen by
Yong Yin
SUNY at Buffalo

  1. Measuring Trend Output: How Useful Are the Great Ratios? By Attfield, Clifford; Temple, Jonathan
  2. Granger Causality of the Inflation-Growth Mirror in Accession Countries By Gillman, Max; Nakov, Anton
  3. On the Fit and Forecasting Performance of New Keynesian Models By Del Negro, Marco; Schorfheide, Frank; Smets, Frank; Wouters, Rafael
  4. Forecasting the Spot Exchange Rate with the Term Structure of Forward Premia: Multivariate Threshold Cointegration By van Tol, Michel R; Wolff, Christian C
  5. Loss Functions in Option Valuation: A Framework for Model Selection By Bams, Dennis; Lehnert, Thorsten; Wolff, Christian C
  6. A Comparison of Direct and Iterated Multistep AR Methods for Forecasting Macroeconomic Time Series By Marcellino, Massimiliano; Stock, James H; Watson, Mark W
  7. Shock Identification of Macroeconomic Forecasts Based on Daily Panels By Amstad, Marlene; Fischer, Andreas M
  8. The Time-Series Properties of Aggregate Consumption: Implications for the Costs of Fluctuations By Reis, Ricardo A.M.R.
  9. Improved HAR Inference By Peter C.B. Phillips; Yixiao Sun; Sainan Jin
  10. Economic Transition and Growth By Peter C.B. Phillips; Donggyu Sul
  11. GMM with Many Moment Conditions By Chirok Han; Peter C.B. Phillips
  12. Nonstationary Discrete Choice: A Corrigendum and Addendum By Peter C.B. Phillips; Sainan Jin; Ling Hu
  13. Limit Theory for Moderate Deviations from a Unit Root under Weak Dependence By Peter C.B. Phillips; Tassos Magdalinos
  14. Semiparametric estimation in perturbed long memory series. By Josu Arteche
  15. A comparison of the real-time performance of business cycle dating methods By Marcelle Chauvet; Jeremy M. Piger
  16. Phillips-Perron-type unit root tests in the nonlinear ESTAR framework By Rothe, Christoph; Sibbertsen, Philipp
  17. Estimation of dynamic linear models in short panels with ordinal observation By Stephen Pudney
  18. Nonparametric inference for unbalanced time series data By Oliver Linton
  19. Automatic positive semi-definite HAC covariance matrix and GMM estimation By Richard Smith
  20. Graph-Based Search Procedure for Vector Autoregressive Models By Alessio Moneta; Peter Spirtes
  21. Identifying Structural Breaks in Cointegrated VAR Models By Håvard Hungnes
  22. Neural Networks to Predict Financial Time Series in a Minority Game Context By Luca Grilli; Angelo Sfrecola
  23. Forecasting Exchange Rate: A Univariate Out-of-Sample Approach By Mahesh Kumar Tambi
  24. Modified Two Stage Least Squares Estimators for the Estimation of a Structural Vector Autoregressive Integrated Process By Cheng Hsiao; Siyan Wang

  1. By: Attfield, Clifford; Temple, Jonathan
    Abstract: Standard macroeconomic models suggest that the ‘great ratios’ of consumption to output and investment to output should be stationary. The joint behaviour of consumption, investment and output can then be used to measure trend output. We adopt this approach for the USA and UK, and find support for stationarity of the great ratios when structural breaks are taken into account. From the estimated vector error correction models, we extract multivariate estimates of the permanent component in output, and comment on trend growth in the 1980s and the New Economy boom of the 1990s.
    Keywords: great ratios; New Economy; permanent components; structural breaks; trend output
    JEL: C32 C51 E20 E30
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4796&r=ets
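    As a rough illustration of the premise that a ‘great ratio’ such as log consumption minus log output should be stationary, the sketch below runs an ADF test on a simulated ratio; it assumes statsmodels and stands in for, rather than reproduces, the paper’s VECM-based analysis:

      # Illustration only: test stationarity of a simulated "great ratio"
      # log(C_t) - log(Y_t). The paper's actual analysis estimates a vector
      # error correction model on US and UK data with structural breaks.
      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      rng = np.random.default_rng(0)
      T = 200
      # Common stochastic trend in log output; log consumption shares it,
      # so the log ratio is stationary by construction.
      trend = np.cumsum(rng.normal(0.005, 0.01, T))
      log_y = trend + rng.normal(0, 0.01, T)
      log_c = np.log(0.7) + trend + rng.normal(0, 0.01, T)
      ratio = log_c - log_y

      stat, pvalue, *_ = adfuller(ratio)
      print(f"ADF statistic: {stat:.2f}, p-value: {pvalue:.3f}")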
  2. By: Gillman, Max; Nakov, Anton
    Abstract: The paper presents a model in which the exogenous money supply causes changes in the inflation rate and the output growth rate. While inflation and growth rate changes occur simultaneously, inflation acts as a tax on the return to human capital and in this sense induces the decline in the growth rate. Shifts in the model’s credit-sector productivity cause shifts in the income velocity of money that can break the otherwise stable relation between money, inflation, and output growth. Applied to two accession countries, Hungary and Poland, a VAR system is estimated for each that incorporates endogenously determined multiple structural breaks. Results indicate positive Granger causality from money to inflation and negative Granger causality from inflation to growth for both countries, as suggested by the model, although there is some feedback to money for Poland. Three structural breaks are found for each country; they are linked to changes in velocity trends and to the breaks found in the other country.
    Keywords: Granger causality; growth; inflation; structural breaks; transition; VAR; velocity
    JEL: C22 E31 O42
    Date: 2005–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4845&r=ets
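    A minimal sketch of a Granger-causality test of the kind reported above, on simulated stand-ins for inflation and growth (the paper’s treatment of endogenously determined structural breaks is omitted; grangercausalitytests is from statsmodels):

      # Simulated system in which lagged inflation lowers growth, so
      # inflation should Granger-cause growth in the test output.
      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(1)
      T = 300
      inflation = np.zeros(T)
      growth = np.zeros(T)
      for t in range(1, T):
          inflation[t] = 0.6 * inflation[t - 1] + rng.normal()
          growth[t] = 0.3 * growth[t - 1] - 0.4 * inflation[t - 1] + rng.normal()

      # Tests whether the second column Granger-causes the first.
      res = grangercausalitytests(np.column_stack([growth, inflation]), maxlag=2)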
  3. By: Del Negro, Marco; Schorfheide, Frank; Smets, Frank; Wouters, Rafael
    Abstract: The paper provides new tools for the evaluation of DSGE models and applies them to a large-scale New Keynesian dynamic stochastic general equilibrium (DSGE) model with price and wage stickiness and capital accumulation. Specifically, we approximate the DSGE model by a vector autoregression (VAR), and then systematically relax the implied cross-equation restrictions. Let delta denote the extent to which the restrictions are being relaxed. We document how the in- and out-of-sample fit of the resulting specification (DSGE-VAR) changes as a function of delta. Furthermore, we learn about the precise nature of the misspecification by comparing the DSGE model’s impulse responses to structural shocks with those of the best-fitting DSGE-VAR. We find that the degree of misspecification in large-scale DSGE models is no longer so large as to prevent their use in day-to-day policy analysis, yet it is not small enough to be ignored.
    Keywords: Bayesian Analysis; DSGE models; model evaluation; vector autoregression
    JEL: C11 C32 C53
    Date: 2005–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4848&r=ets
  4. By: van Tol, Michel R; Wolff, Christian C
    Abstract: In this paper we develop a multivariate threshold vector error correction model of spot and forward exchange rates that allows for different forms of equilibrium reversion in each of the cointegrating residual series. By introducing the notion of an indicator matrix to differentiate between the various regimes in the set of nonlinear processes, we provide a convenient framework for estimation by OLS. Empirically, out-of-sample forecasting exercises demonstrate the model’s superiority over a linear VECM, while it is unable to out-predict a (driftless) random walk model. As such we provide empirical evidence against the findings of Clarida and Taylor (1997).
    Keywords: foreign exchange; multivariate threshold cointegration; TAR models
    JEL: C51 C53 F31
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4958&r=ets
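    The indicator-matrix idea reduces to regime-wise OLS once a threshold is fixed, so the threshold itself can be chosen by grid search. The following stylized sketch (simulated data, a single cointegrating residual, illustrative variable names; not the paper’s multivariate estimator) conveys that logic:

      # Two-regime threshold error correction: conditional on a threshold
      # gamma, each regime is linear and estimable by OLS; gamma is then
      # chosen to minimize the pooled sum of squared residuals.
      import numpy as np

      rng = np.random.default_rng(2)
      T = 400
      z = np.zeros(T)                      # cointegrating residual
      for t in range(1, T):
          # Strong reversion only when |z| is large (outer regime).
          speed = 0.8 if abs(z[t - 1]) > 1.0 else 0.05
          z[t] = (1 - speed) * z[t - 1] + rng.normal(0, 0.5)

      dz, zlag = np.diff(z), z[:-1]

      def ssr_given_threshold(gamma):
          inner = np.abs(zlag) <= gamma    # indicator splitting the regimes
          ssr = 0.0
          for mask in (inner, ~inner):
              X = zlag[mask].reshape(-1, 1)
              beta, *_ = np.linalg.lstsq(X, dz[mask], rcond=None)
              ssr += np.sum((dz[mask] - X @ beta) ** 2)
          return ssr

      grid = np.quantile(np.abs(zlag), np.linspace(0.15, 0.85, 50))
      gamma_hat = min(grid, key=ssr_given_threshold)
      print(f"estimated threshold: {gamma_hat:.2f}")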
  5. By: Bams, Dennis; Lehnert, Thorsten; Wolff, Christian C
    Abstract: In this paper, we investigate the importance of different loss functions when estimating and evaluating option pricing models. Our analysis shows that it is important to take parameter uncertainty into account, since this leads to uncertainty in the predicted option price. We illustrate the effect on the out-of-sample pricing errors in an application of the ad hoc Black-Scholes model to DAX index options. Our empirical results suggest that different loss functions lead to uncertainty about the pricing error itself. At the same time, the analysis provides a first yardstick for evaluating the adequacy of the loss function, accomplished through a data-driven method that delivers not just a point estimate of the pricing error but a confidence interval.
    Keywords: estimation risk; GARCH; implied volatility; loss functions; option pricing
    JEL: G12
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4960&r=ets
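    The paper’s central point, that the loss function used in calibration matters, can be seen in a toy Black-Scholes exercise: fitting one volatility to the same synthetic option prices under a dollar-price loss and under a relative-price loss yields different estimates. This is an illustration under assumed synthetic prices, not the authors’ DAX application:

      # Calibrate a single Black-Scholes sigma under two loss functions.
      # The dollar loss weights expensive in-the-money options more, the
      # relative loss weights cheap out-of-the-money options more.
      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import norm

      def bs_call(S, K, T, r, sigma):
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          d2 = d1 - sigma * np.sqrt(T)
          return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

      S, r, T = 100.0, 0.02, 0.5
      strikes = np.arange(80, 125, 5.0)
      # Synthetic "market" prices with a crude volatility smile.
      true_sigmas = 0.2 + 0.002 * (strikes - 100) ** 2 / 100
      market = bs_call(S, strikes, T, r, true_sigmas)

      def calibrate(loss):
          res = minimize_scalar(lambda s: loss(bs_call(S, strikes, T, r, s), market),
                                bounds=(0.01, 1.0), method="bounded")
          return res.x

      dollar = calibrate(lambda m, p: np.mean((m - p) ** 2))
      relative = calibrate(lambda m, p: np.mean(((m - p) / p) ** 2))
      print(f"sigma under $ loss: {dollar:.4f}, under relative loss: {relative:.4f}")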
  6. By: Marcellino, Massimiliano; Stock, James H; Watson, Mark W
    Abstract: ‘Iterated’ multiperiod ahead time series forecasts are made using a one-period ahead model, iterated forward for the desired number of periods, whereas ‘direct’ forecasts are made using a horizon-specific estimated model, where the dependent variable is the multi-period ahead value being forecasted. Which approach is better is an empirical matter: in theory, iterated forecasts are more efficient if correctly specified, but direct forecasts are more robust to model misspecification. This paper compares empirical iterated and direct forecasts from linear univariate and bivariate models by applying simulated out-of-sample methods to 171 US monthly macroeconomic time series spanning 1959-2002. The iterated forecasts typically outperform the direct forecasts, particularly if the models can select long lag specifications. The relative performance of the iterated forecasts improves with the forecast horizon.
    Keywords: forecast comparisons; multistep forecasts; VAR forecasts
    JEL: C32 E37 E47
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:4976&r=ets
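    The two forecasting schemes being compared are easy to state in code. The sketch below fits an AR(p) one step ahead and iterates it h periods, then fits a direct h-step regression of y_{t+h} on the same lags, using simulated AR(2) data in place of the 171 macroeconomic series:

      import numpy as np

      rng = np.random.default_rng(3)
      T, p, h = 500, 2, 4
      y = np.zeros(T)
      for t in range(2, T):
          y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()

      def lagmatrix(y, p, h):
          # Rows: (y_t, ..., y_{t-p+1}); target: y_{t+h}.
          X = np.column_stack([y[p - 1 - j: len(y) - h - j] for j in range(p)])
          return np.column_stack([np.ones(len(X)), X]), y[p - 1 + h:]

      # One-step model, iterated forward h times.
      X1, y1 = lagmatrix(y, p, 1)
      b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
      state = list(y[-p:][::-1])                 # (y_T, y_{T-1}, ...)
      for _ in range(h):
          nxt = b1[0] + b1[1:] @ np.array(state[:p])
          state.insert(0, nxt)
      iterated = state[0]

      # Direct h-step regression on the same lags.
      Xh, yh = lagmatrix(y, p, h)
      bh, *_ = np.linalg.lstsq(Xh, yh, rcond=None)
      direct = bh[0] + bh[1:] @ y[-p:][::-1]
      print(f"iterated: {iterated:.3f}, direct: {direct:.3f}")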
  7. By: Amstad, Marlene; Fischer, Andreas M
    Abstract: A new procedure for shock identification of macroeconomic forecasts based on factor analysis is proposed. The identification scheme relies on daily panels and on the recognition that macroeconomic releases exhibit a high level of clustering. A large number of data releases on a single day is of considerable practical interest not only for the estimation but also for the identification of the factor model. The clustering of cross-sectional information facilitates the interpretation of the forecast innovations as real or as nominal shocks. An empirical application is provided for Swiss inflation. We show that monetary policy shocks generate an asymmetric response in inflation, that the pass-through for CPI inflation is weak, and that the information shocks to inflation are not synchronized.
    Keywords: common factors; daily panels; inflation forecasting
    JEL: E52 E58
    Date: 2005–04
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:5008&r=ets
  8. By: Reis, Ricardo A.M.R.
    Abstract: While this is typically ignored, the properties of the stochastic process followed by aggregate consumption affect the estimates of the costs of fluctuations. This paper pursues two approaches to modelling aggregate consumption dynamics and to measuring how much society dislikes fluctuations, one statistical and one economic. The statistical approach estimates the properties of consumption and calculates the cost of having consumption fluctuating around its mean growth. The paper finds that the persistence of consumption is a crucial determinant of these costs and that the high persistence in the data severely distorts conventional measures. It shows how to compute valid estimates and confidence intervals. The economic approach uses a calibrated model of optimal consumption and measures the costs of eliminating income shocks. This uncovers a further cost of uncertainty, through its impact on precautionary savings and investment. The two approaches lead to costs of fluctuations that are higher than the common wisdom, between 0.5% and 5% of per capita consumption.
    Keywords: consumption persistence; costs of fluctuations; models of aggregate consumption
    JEL: E21 E32 E60
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:5054&r=ets
  9. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Yixiao Sun (Dept. of Economics, University of California, San Diego); Sainan Jin (Guanghua School of Management, Peking University)
    Abstract: Employing power kernels suggested in earlier work by the authors (2003), this paper shows how to refine methods of robust inference on the mean in a time series that rely on families of untruncated kernel estimates of the long-run parameters. The new methods improve the size properties of heteroskedasticity and autocorrelation robust (HAR) tests in comparison with conventional methods that employ consistent HAC estimates, and they raise test power in comparison with other tests that are based on untruncated kernel estimates. Large power parameter (rho) asymptotic expansions of the nonstandard limit theory are developed in terms of the usual limiting chi-squared distribution, and corresponding large sample size and large rho asymptotic expansions of the finite sample distribution of Wald tests are developed to justify the new approach. Exact finite sample distributions are given using operational techniques. The paper further shows that the optimal rho that minimizes a weighted sum of type I and II errors has an expansion rate of at most O(T^{1/2}) and can even be O(1) for certain loss functions, and is therefore slower than the O(T^{2/3}) rate which minimizes the asymptotic mean squared error of the corresponding long run variance estimator. A new plug-in procedure for implementing the optimal rho is suggested. Simulations show that the new plug-in procedure works well in finite samples.
    Keywords: Asymptotic expansion, consistent HAC estimation, data-determined kernel estimation, exact distribution, HAR inference, large rho asymptotics, long run variance, loss function, power parameter, sharp origin kernel
    JEL: C13 C14 C22 C51
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1513&r=ets
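    One member of the kernel class studied here, a Bartlett-based power kernel applied without truncation to all sample autocovariances, is simple to compute. The sketch below (simulated MA(1) data, true long-run variance 2.25) shows the estimate for several values of the power parameter rho; the paper’s plug-in rule for choosing rho is not reproduced:

      # Untruncated power-kernel long-run variance estimate: weights
      # (1 - |j|/T)^rho on all T - 1 sample autocovariances, with rho
      # controlling the effective amount of smoothing.
      import numpy as np

      def lrv_power_kernel(u, rho):
          T = len(u)
          u = u - u.mean()
          # Sample autocovariances gamma_0, ..., gamma_{T-1}.
          gammas = np.array([u[j:] @ u[:T - j] for j in range(T)]) / T
          weights = (1.0 - np.arange(T) / T) ** rho
          return gammas[0] + 2.0 * np.sum(weights[1:] * gammas[1:])

      rng = np.random.default_rng(4)
      e = rng.normal(size=600)
      u = np.convolve(e, [1.0, 0.5], mode="valid")   # MA(1), true LRV = 2.25
      for rho in (1, 16, 64):
          print(f"rho={rho:3d}: LRV estimate = {lrv_power_kernel(u, rho):.3f}")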
  10. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Donggyu Sul (Dept. of Economics, University of Auckland)
    Abstract: Some extensions of neoclassical growth models are discussed that allow for cross section heterogeneity among economies and evolution in rates of technological progress over time. The models offer a spectrum of transitional behavior among economies that includes convergence to a common steady state path as well as various forms of transitional divergence and convergence. Mechanisms for modeling such transitions and measuring them econometrically are developed in the paper. A new regression test of convergence is proposed, its asymptotic properties are derived and some simulations of its finite sample properties are reported. Transition curves for individual economies and subgroups of economies are estimated in a series of empirical applications of the methods to regional US data, OECD data and Penn World Table data.
    Keywords: Economic growth, Growth convergence, Heterogeneity, Neoclassical growth, Relative transition, Transition curve, Transitional divergence
    JEL: O30 O40 C33
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1514&r=ets
  11. By: Chirok Han (Victoria University of Wellington); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformed) instruments and some panel data models covering moderate time spans and with correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent LIML estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported.
    Keywords: Epiconvergence, GMM, Irrelevant instruments, IV, Large numbers of instruments, LIML estimation, Panel models, Pseudo true value, Signal, Signal Variability, Weak instrumentation
    JEL: C22 C23
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1515&r=ets
  12. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Sainan Jin (Guanghua School of Management, Peking University); Ling Hu (Dept. of Economics, Ohio State University)
    Abstract: We correct the limit theory presented in an earlier paper by Hu and Phillips (Journal of Econometrics, 2004) for nonstationary time series discrete choice models with multiple choices and thresholds. The new limit theory shows that, in contrast to the binary choice model with nonstationary regressors and a zero threshold where there are dual rates of convergence (n^{1/4} and n^{3/4}), all parameters including the thresholds converge at the rate n^{3/4}. The presence of non-zero thresholds therefore materially affects rates of convergence. Dual rates of convergence reappear when stationary variables are present in the system. Some simulation evidence is provided, showing how the magnitude of the thresholds affects finite sample performance. A new finding is that predicted probabilities and marginal effect estimates have finite sample distributions that manifest a pile-up, or increasing density, towards the limits of the domain of definition.
    Keywords: Brownian motion, Brownian local time, Discrete choices, Integrated processes, Pile-up problem, Threshold parameters
    JEL: C23 C25
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1516&r=ets
  13. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Tassos Magdalinos (Dept. of Mathematics, University of York)
    Abstract: An asymptotic theory is given for autoregressive time series with weakly dependent innovations and a root of the form rho_{n} = 1+c/n^{alpha}, involving moderate deviations from unity when alpha in (0,1) and c in R are constant parameters. The limit theory combines a functional law to a diffusion on D[0,infinity) and a central limit theorem. For c > 0, the limit theory of the first order serial correlation coefficient is Cauchy and is invariant to both the distribution and the dependence structure of the innovations. To our knowledge, this is the first invariance principle of its kind for explosive processes. The rate of convergence is found to be n^{alpha}rho_{n}^{n}, which bridges asymptotic rate results for conventional local to unity cases (n) and explosive autoregressions ((1 + c)^{n}). For c < 0, we provide results for alpha in (0,1) that give an n^{(1+alpha)/2} rate of convergence and lead to asymptotic normality for the first order serial correlation, bridging the n^{1/2} and n convergence rates for the stationary and conventional local to unity cases. Weakly dependent errors are shown to induce a bias in the limit distribution, analogous to that of the local to unity case. Linkages to the limit theory in the stationary and explosive cases are established.
    Keywords: Central limit theory; Diffusion; Explosive autoregression; Local to unity; Moderate deviations; Unit root distribution; Weak dependence
    JEL: C22
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1517&r=ets
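    The mildly explosive case is easy to simulate. The sketch below generates autoregressions with root rho_n = 1 + c/n^{alpha}, c > 0, and records the serial-correlation estimator centred and scaled at the paper’s n^{alpha}rho_{n}^{n} rate (constants omitted); heavy, Cauchy-like tails show up in the extreme quantiles:

      import numpy as np

      rng = np.random.default_rng(5)
      n, c, alpha, reps = 500, 1.0, 0.7, 1000
      rho_n = 1.0 + c / n**alpha
      stats = []
      for _ in range(reps):
          y = np.zeros(n)
          for t in range(1, n):
              y[t] = rho_n * y[t - 1] + rng.normal()
          rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
          # Rate n^alpha * rho_n^n from the paper's limit theory.
          stats.append(n**alpha * rho_n**n * (rho_hat - rho_n))
      stats = np.array(stats)
      # Heavy-tailed (Cauchy-like) spread appears in the extreme quantiles.
      print(np.quantile(stats, [0.05, 0.25, 0.5, 0.75, 0.95]))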
  14. By: Josu Arteche (Dpto. Economía Aplicada III (UPV/EHU))
    Abstract: The estimation of the memory parameter in perturbed long memory series has recently attracted attention, motivated especially by the strong persistence of volatility in many financial and economic time series and the use of Long Memory in Stochastic Volatility (LMSV) processes to model such behaviour. This paper discusses frequency domain semiparametric estimation of the memory parameter and proposes an extension of the log periodogram regression which explicitly accounts for the added noise, comparing it, asymptotically and in finite samples, with similar extant techniques. Contrary to the nonlinear log periodogram regression of Sun and Phillips (2003), we do not use a linear approximation of the logarithmic term which accounts for the added noise. A reduction of the asymptotic bias is achieved in this way, permitting a larger bandwidth and hence faster convergence in long memory signal plus noise series. Monte Carlo results confirm the bias reduction, but at the cost of higher variability. Finally, an application to a series of returns of the Spanish Ibex35 stock index is included.
    Keywords: long memory, stochastic volatility, semiparametric estimation
    JEL: C22
    Date: 2005–06–09
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:200502&r=ets
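    As a baseline for the method being extended, the following sketch computes a standard log periodogram (GPH-type) estimate of the memory parameter d on a simulated long memory series; the paper’s modified regression with an explicit noise term is not reproduced here:

      # Log-periodogram regression: log I(lam_j) is regressed on
      # -2 log lam_j over the first m Fourier frequencies, so the slope
      # estimates the memory parameter d.
      import numpy as np

      def gph_estimate(x, m):
          n = len(x)
          lam = 2 * np.pi * np.arange(1, m + 1) / n
          I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
          X = np.column_stack([np.ones(m), -2 * np.log(lam)])
          beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
          return beta[1]           # estimate of d

      # Fractional noise via truncated MA(inf) weights, true d = 0.3.
      rng = np.random.default_rng(6)
      d, n = 0.3, 2000
      k = np.arange(1, n)
      psi = np.concatenate([[1.0], np.cumprod((k - 1 + d) / k)])
      x = np.convolve(rng.normal(size=2 * n), psi, mode="valid")[:n]
      print(f"d-hat = {gph_estimate(x, m=int(n**0.5)):.3f}")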
  15. By: Marcelle Chauvet; Jeremy M. Piger
    Abstract: This paper evaluates the ability of formal rules to establish U.S. business cycle turning point dates in real time. We consider two approaches, a nonparametric algorithm and a parametric Markov-switching dynamic-factor model. In order to accurately assess the real-time performance of these rules, we construct a new unrevised "real-time" data set of employment, industrial production, manufacturing and trade sales, and personal income. We then apply the rules to this data set to simulate the accuracy and timeliness with which they would have identified the NBER business cycle chronology had they been used in real time for the past 30 years. Both approaches accurately identified the NBER dated turning points in the sample in real time, with no instances of false positives. Further, both approaches, and especially the Markov-switching model, yielded significant improvement over the NBER in the speed with which business cycle troughs were identified. In addition to suggesting that business cycle dating rules are an informative tool to use alongside the traditional NBER analysis, these results provide formal evidence regarding the speed with which macroeconomic data reveals information about new business cycle phases.
    Keywords: Business cycles
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2005-021&r=ets
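    A toy version of a nonparametric dating rule, in the Bry-Boschan spirit of the algorithms evaluated here, marks a peak (trough) wherever a series is the maximum (minimum) within a window; the actual rules and the Markov-switching alternative involve considerably more structure:

      import numpy as np

      def turning_points(y, k=5):
          # A peak (trough) is a local max (min) within a +/- k window.
          peaks, troughs = [], []
          for t in range(k, len(y) - k):
              window = y[t - k: t + k + 1]
              if y[t] == window.max():
                  peaks.append(t)
              elif y[t] == window.min():
                  troughs.append(t)
          return peaks, troughs

      rng = np.random.default_rng(7)
      # A slow cycle plus noise as a stand-in for a coincident indicator.
      t = np.arange(240)
      y = 0.002 * t + 0.05 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.01, 240)
      peaks, troughs = turning_points(y)
      print("peaks at:", peaks, "troughs at:", troughs)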
  16. By: Rothe, Christoph; Sibbertsen, Philipp
    Abstract: In this paper, we propose Phillips-Perron type, semiparametric testing procedures to distinguish a unit root process from a mean-reverting exponential smooth transition autoregressive one. The limiting nonstandard distributions are derived under very general conditions and simulation evidence shows that the tests perform better than the standard Phillips-Perron or Dickey-Fuller tests in the region of the null.
    Keywords: Exponential smooth transition autoregressive model, Unit roots, Monte Carlo simulations, Purchasing Power Parity
    JEL: C12 C32
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-315&r=ets
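    For orientation, the sketch below implements the textbook Phillips-Perron Z_t statistic (with a Bartlett long-run variance correction) and applies it to a simulated ESTAR series that behaves like a unit root near zero; the paper’s modified tests are designed for exactly this setting, and their nonstandard critical values are not computed here:

      import numpy as np

      def pp_zt(y, lags=8):
          # OLS of y_t on (1, y_{t-1}), then the semiparametric correction.
          ylag, ycur = y[:-1], y[1:]
          T = len(ycur)
          X = np.column_stack([np.ones(T), ylag])
          (mu, rho), *_ = np.linalg.lstsq(X, ycur, rcond=None)
          u = ycur - mu - rho * ylag
          s2 = u @ u / (T - 2)
          se_rho = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
          t_rho = (rho - 1.0) / se_rho
          gamma = lambda j: u[j:] @ u[:T - j] / T
          lam2 = gamma(0) + 2 * sum((1 - j / (lags + 1)) * gamma(j)
                                    for j in range(1, lags + 1))
          g0 = gamma(0)
          return (np.sqrt(g0 / lam2) * t_rho
                  - 0.5 * (lam2 - g0) / np.sqrt(lam2) * T * se_rho / np.sqrt(s2))

      rng = np.random.default_rng(8)
      T = 500
      y = np.zeros(T)
      for t in range(1, T):
          # ESTAR: reversion speed grows smoothly with distance from zero.
          G = 1 - np.exp(-0.5 * y[t - 1] ** 2)
          y[t] = y[t - 1] - 0.5 * G * y[t - 1] + rng.normal()
      print(f"Z_t = {pp_zt(y):.2f}  (compare to Dickey-Fuller critical values)")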
  17. By: Stephen Pudney (Institute for Fiscal Studies and Institute for Social and Economic Research)
    Abstract: We develop a simulated ML method for short-panel estimation of one or more dynamic linear equations, where the dependent variables are only partially observed through ordinal scales. We argue that this latent autoregression (LAR) model is often more appropriate than the usual state-dependence (SD) probit model for attitudinal and interval variables. We propose a score test for assisting in the treatment of initial conditions and a new simulation approach to calculate the required partial derivative matrices. An illustrative application to a model of households’ perceptions of their financial well-being demonstrates the superior fit of the LAR model.
    Keywords: Dynamic panel data models, ordinal variables, simulated maximum likelihood, GHK simulator, BHPS
    JEL: C23 C25 C33 C35 D84
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:05/05&r=ets
  18. By: Oliver Linton (Institute for Fiscal Studies and London School of Economics)
    Abstract: Estimation of heteroskedasticity and autocorrelation consistent covariance matrices (HACs) is a well established problem in time series. Results have been established under a variety of weak conditions on temporal dependence and heterogeneity that allow one to conduct inference on a variety of statistics, see Newey and West (1987), Hansen (1992), de Jong and Davidson (2000), and Robinson (2004). Indeed there is an extensive literature on automating these procedures starting with Andrews (1991). Alternative methods for conducting inference include the bootstrap, for which there is also now a very active research program in time series especially, see Lahiri (2003) for an overview. One convenient method for time series is the subsampling approach of Politis, Romano, and Wolf (1999). This method was used by Linton, Maasoumi, and Whang (2003) (henceforth LMW) in the context of testing for stochastic dominance. This paper is concerned with the practical problem of conducting inference in a vector time series setting when the data is unbalanced or incomplete. In this case, one can work only with the common sample, to which a standard HAC/bootstrap theory applies, but at the expense of throwing away data and perhaps losing efficiency. An alternative is to use some sort of imputation method, but this requires additional modelling assumptions, which we would rather avoid. We show how the sampling theory changes and how to modify the resampling algorithms to accommodate the problem of missing data. We also discuss efficiency and power. Unbalanced data of the type we consider are quite common in financial panel data, see for example Connor and Korajczyk (1993). These data also occur in cross-country studies.
    Date: 2004–04
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:06/04&r=ets
  19. By: Richard Smith (Institute for Fiscal Studies and University of Warwick)
    Abstract: This paper proposes a new class of HAC covariance matrix estimators. The standard HAC estimation method re-weights estimators of the autocovariances. Here we initially smooth the data observations themselves using kernel function based weights. The resultant HAC covariance matrix estimator is the normalised outer product of the smoothed random vectors and is therefore automatically positive semi-definite. A corresponding efficient GMM criterion may also be defined as a quadratic form in the smoothed moment indicators whose normalised minimand provides a test statistic for the over-identifying moment conditions.
    Keywords: GMM, HAC Covariance Matrix Estimation, Overidentifying Moments
    JEL: C13 C30
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:17/04&r=ets
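    The construction described above is short to write down: smooth the moment series with kernel weights, then take the (automatically positive semi-definite) outer product of the smoothed vectors. The sketch below illustrates the principle with Bartlett-type weights and an ad hoc normalisation, not the paper’s exact estimator:

      import numpy as np

      def smoothed_hac(g, S=6):
          # g: (T, k) array of moment indicators; S: smoothing bandwidth.
          T, k = g.shape
          j = np.arange(-S, S + 1)
          w = 1.0 - np.abs(j) / (S + 1)            # Bartlett-type weights
          g_tilde = np.vstack([np.convolve(g[:, i], w, mode="same")
                               for i in range(k)]).T
          # Gram matrix of smoothed moments: psd by construction. Dividing
          # by sum(w^2) gives the lag-zero autocovariance weight one.
          return g_tilde.T @ g_tilde / (T * np.sum(w**2))

      rng = np.random.default_rng(9)
      e = rng.normal(size=(500, 2))
      g = 0.7 * np.vstack([np.zeros((1, 2)), e[:-1]]) + e   # autocorrelated moments
      omega = smoothed_hac(g)
      print(np.round(omega, 3))
      print("eigenvalues >= 0:", np.all(np.linalg.eigvalsh(omega) >= -1e-12))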
  20. By: Alessio Moneta; Peter Spirtes
    Abstract: Vector Autoregressions (VARs) are a class of time series models commonly used in econometrics to study the dynamic effect of exogenous shocks to the economy. While the estimation of a VAR is straightforward, there is the problem of finding the transformation of the estimated model consistent with the causal relations among the contemporaneous variables. This problem, a version of what econometrics calls “the problem of identification,” is addressed in this paper using a semi-automated search procedure. The unobserved causal relations of the structural form, to be identified, are represented by a directed graph. Discovery algorithms are developed to infer features of the causal graph from tests on vanishing partial correlations among the VAR residuals. Such tests cannot be based on the usual tests of conditional independence, because of sampling problems due to the time series nature of the data. This paper proposes consistent tests on vanishing partial correlations based on the asymptotic distribution of the estimated VAR residuals. Two types of search algorithm are considered: the first restricts the analysis to direct causation among the contemporaneous variables; the second allows for cycles (feedback loops) and common shocks among contemporaneous variables. Recovering the causal structure allows a reliable transformation of the estimated vector autoregressive model, which is very useful for macroeconomic empirical investigations such as comparing the effects of different shocks (real vs. nominal) on the economy and finding a measure of the monetary policy shock.
    Keywords: VARs, Problem of Identification, Causal Graphs, Structural Shocks
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2005/14&r=ets
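    The basic ingredient of the search, testing vanishing partial correlations among residuals, can be sketched as follows: partial correlations are read off the inverse covariance matrix, and a Fisher z-transform gives an (i.i.d.-based) test. The paper derives corrected tests for estimated VAR residuals, which this sketch does not:

      import numpy as np
      from scipy.stats import norm

      def partial_corr_tests(resid):
          T, k = resid.shape
          P = np.linalg.inv(np.cov(resid, rowvar=False))
          for i in range(k):
              for j in range(i + 1, k):
                  # Partial correlation of (i, j) given all other variables.
                  r = -P[i, j] / np.sqrt(P[i, i] * P[j, j])
                  z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(T - k - 1)
                  pval = 2 * (1 - norm.cdf(abs(z)))
                  print(f"pcorr({i},{j} | rest) = {r:+.3f}, p = {pval:.3f}")

      rng = np.random.default_rng(10)
      T = 400
      # Contemporaneous causal chain u0 -> u1 -> u2 in the "residuals",
      # so pcorr(0, 2 | 1) should be near zero.
      u0 = rng.normal(size=T)
      u1 = 0.8 * u0 + rng.normal(size=T)
      u2 = 0.8 * u1 + rng.normal(size=T)
      partial_corr_tests(np.column_stack([u0, u1, u2]))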
  21. By: Håvard Hungnes (Statistics Norway)
    Abstract: The paper describes a procedure for decomposing the deterministic terms in cointegrated VAR models into growth rate parameters and cointegration mean parameters. These parameters express long-run properties of the model. For example, the growth rate parameters tell us how much we should (unconditionally) expect the variables in the system to grow from one period to the next, representing the underlying (steady state) growth in the variables. The procedure can be used for analysing structural breaks when the deterministic terms include shift dummies and broken trends. By decomposing the coefficients into interpretable components, different types of structural breaks can be identified: shifts in intercepts, shifts in growth rates, or combinations of these can each be tested for. The ability to distinguish between different types of structural breaks, together with a more efficient use of the information in the data, makes the procedure superior to alternative procedures.
    Keywords: Johansen procedure; cointegrated VAR; structural breaks; growth rates; cointegration mean levels.
    JEL: C32 C51 C52
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:422&r=ets
  22. By: Luca Grilli; Angelo Sfrecola
    Abstract: In this paper we consider financial time series from the U.S. Fixed Income Market, the S&P500, the Exchange Market and the Oil Market. It is well known that financial time series reveal anomalies with respect to the Efficient Market Hypothesis, and scaling behaviour such as fat tails and clustered volatility is evident. This suggests treating financial time series as “pseudo”-random time series. For this kind of time series the predictive power of neural networks has been shown to be appreciable. We first consider the financial time series from the Minority Game point of view, and then we apply a neural network with a learning algorithm in order to analyze its predictive power. We show that the Fixed Income Market differs in many respects from the other markets in terms of predictability, as a measure of market efficiency.
    Keywords: Minority Game, Learning Algorithms, Neural Networks, Financial Time Series, Efficient Market Hypothesis
    JEL: C45 C70 C22 G14
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:ufg:qdsems:14-2005&r=ets
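    A deliberately small stand-in for the prediction exercise: a single-layer perceptron trained online to predict the next sign of a return series from its last m signs, scored by hit rate. The paper’s networks and the Minority Game framing are richer than this sketch:

      import numpy as np

      rng = np.random.default_rng(11)
      T, m = 2000, 5
      # Synthetic returns with weak sign predictability.
      r = np.zeros(T)
      for t in range(1, T):
          r[t] = -0.2 * r[t - 1] + rng.normal()
      s = np.sign(r)
      s[s == 0] = 1.0

      w, hits, count = np.zeros(m), 0, 0
      for t in range(m, T):
          x = s[t - m:t][::-1]                # last m signs, most recent first
          pred = 1.0 if w @ x >= 0 else -1.0
          hits += pred == s[t]
          count += 1
          if pred != s[t]:                    # perceptron update on mistakes
              w += s[t] * x
      print(f"hit rate: {hits / count:.3f} (0.5 = no predictability)")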
  23. By: Mahesh Kumar Tambi (IIMT, Hyderabad-India)
    Abstract: In this paper we build univariate models to forecast the exchange rate of the Indian Rupee against several currencies: the SDR, USD, GBP, Euro and JPY. The paper uses the Box-Jenkins methodology of building ARIMA models. Sample data were taken from March 1992 to June 2004; data through December 2002 were used to build the model, while the remaining data points were used for out-of-sample forecasting to check the forecasting ability of the model. All data were collected from the Indiastat database. The results show that ARIMA models provide better forecasts of exchange rates than simple autoregressive or moving average models. We were able to build models for all the currencies except the USD, which suggests the relative efficiency of the USD currency market.
    Keywords: Exchange rate forecasting, univariate analysis, ARIMA, Box-Jenkins methodology, out-of-sample approach
    JEL: F3 F4
    Date: 2005–06–08
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpif:0506005&r=ets
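    A condensed Box-Jenkins-style out-of-sample exercise on simulated data, using statsmodels’ ARIMA: fit on a training window, forecast the holdout, report RMSE. Order selection, which the methodology performs through identification and diagnostic checking, is fixed here for brevity:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(12)
      T = 160
      # Simulated log exchange rate: a drifting random walk with AR(1) errors.
      e = np.zeros(T)
      for t in range(1, T):
          e[t] = 0.3 * e[t - 1] + rng.normal(0, 0.01)
      logfx = np.cumsum(0.001 + e)

      train, test = logfx[:130], logfx[130:]
      fit = ARIMA(train, order=(1, 1, 0)).fit()
      fcast = fit.forecast(steps=len(test))
      rmse = np.sqrt(np.mean((fcast - test) ** 2))
      print(f"out-of-sample RMSE: {rmse:.4f}")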
  24. By: Cheng Hsiao; Siyan Wang
    Abstract: We consider the estimation of a structural vector autoregressive model of nonstationary and possibly cointegrated variables without the prior knowledge of unit roots or rank of cointegration. We propose two modified two stage least squares estimators that are consistent and have limiting distributions that are either normal or mixed normal. Limited Monte Carlo studies are also conducted to evaluate their finite sample properties.
    Keywords: Structural vector autoregression; Unit root; Cointegration; Asymptotic properties; Hypothesis testing
    JEL: C32 C12 C13
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:scp:wpaper:05-23&r=ets

This nep-ets issue is ©2005 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.