NEP: New Economics Papers on Econometrics
By: | Guglielmo Maria Caporale; Luis A. Gil-Alana |
Abstract: | This paper examines aggregate money demand relationships in five industrial countries by employing a two-step strategy for testing the null hypothesis of no cointegration against alternatives which are fractionally cointegrated. Fractional cointegration would imply that, although there exists a long-run relationship, the equilibrium errors exhibit slow reversion to zero, i.e. that the error correction term possesses long memory, and hence deviations from equilibrium are highly persistent. It is found that the null hypothesis of no cointegration cannot be rejected for Japan. By contrast, there is some evidence of fractional cointegration for the remaining countries, i.e., Germany, Canada, the US, and the UK (where, however, the negative income elasticity which is found is not theory-consistent). Consequently, it appears that money targeting might be the appropriate policy framework for monetary authorities in the first three countries, but not in Japan or in the UK. |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:bru:bruedp:05-01&r=ecm |
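The two-step logic described in the abstract above can be illustrated with a simplified sketch (not the authors' actual testing procedure): estimate the long-run money demand relation by OLS, then gauge the memory parameter d of the equilibrium errors with a Geweke-Porter-Hudak log-periodogram regression. A d near 0 points to standard cointegration, 0 < d < 1 to slow, fractional reversion, and d near 1 to no cointegration. The data, bandwidth choice and coefficients below are illustrative assumptions.

# Illustrative two-step check for fractional cointegration (simplified sketch).
import numpy as np

def gph_d(x, bandwidth_power=0.5):
    """Geweke-Porter-Hudak estimate of the fractional integration order d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** bandwidth_power)              # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    t = np.arange(n)
    # periodogram at the first m Fourier frequencies
    I = np.array([np.abs(np.sum(x * np.exp(-1j * f * t))) ** 2 / (2 * np.pi * n)
                  for f in freqs])
    y = np.log(I)
    z = -2 * np.log(2 * np.sin(freqs / 2))     # regressor in the GPH regression
    Z = np.column_stack([np.ones(m), z])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1]                             # slope = estimate of d

# synthetic example: money, income, interest rate sharing a common trend
rng = np.random.default_rng(0)
n = 400
trend = np.cumsum(rng.normal(size=n))
income, rate = trend + rng.normal(size=n), rng.normal(size=n)
money = 1.0 * income - 0.5 * rate + rng.normal(size=n)       # cointegrated case

X = np.column_stack([np.ones(n), income, rate])
resid = money - X @ np.linalg.lstsq(X, money, rcond=None)[0]  # step 1: OLS residuals
print("estimated d of equilibrium errors:", round(gph_d(resid), 3))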
By: | Sarai Criado Nuevo |
URL: | http://d.repec.org/n?u=RePEc:fda:fdaeee:05-01&r=ecm |
By: | Cantoni Eva; Field Chris; Mills Flemming Joanna; Ronchetti Elvezio |
Abstract: | Longitudinal models are commonly used for studying data collected on individuals repeatedly through time. While there are now a variety of such models available (Marginal Models, Mixed Effects Models, etc.), far fewer options appear to exist for the closely related issue of variable selection. In addition, longitudinal data typically derive from medical or other large-scale studies where often large numbers of potential explanatory variables and hence even larger numbers of candidate models must be considered. Cross-validation is a popular method for variable selection based on the predictive ability of the model. Here, we propose a cross-validation Markov Chain Monte Carlo procedure as a general variable selection tool which avoids the need to visit all candidate models. Inclusion of a “one-standard error” rule provides users with a collection of good models as is often desired. We demonstrate the effectiveness of our procedure both in a simulation setting and in a real application. |
Date: | 2005–02 |
URL: | http://d.repec.org/n?u=RePEc:gen:geneem:2005.01&r=ecm |
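A stripped-down illustration of a cross-validation MCMC model search follows. The paper works with longitudinal models and a "one-standard-error" rule, whereas this sketch uses plain OLS, squared prediction error and an assumed temperature parameter, simply to show how a Markov chain over inclusion indicators avoids visiting all candidate models.

# Sketch: Metropolis search over variable-inclusion indicators, scored by CV error.
import numpy as np

def cv_error(X, y, gamma, k=5, rng=None):
    """K-fold cross-validated MSE of OLS using the columns flagged in gamma."""
    idx = np.arange(len(y))
    if rng is not None:
        rng.shuffle(idx)
    folds = np.array_split(idx, k)
    cols = np.flatnonzero(gamma)
    err = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        Xt = np.column_stack([np.ones(len(train)), X[train][:, cols]])
        Xv = np.column_stack([np.ones(len(f)), X[f][:, cols]])
        beta, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)
        err.append(np.mean((y[f] - Xv @ beta) ** 2))
    return np.mean(err)

def cv_mcmc(X, y, n_iter=2000, temp=0.05, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    gamma = rng.integers(0, 2, size=p)
    score = cv_error(X, y, gamma, rng=rng)
    visited = {}
    for _ in range(n_iter):
        prop = gamma.copy()
        prop[rng.integers(p)] ^= 1                  # flip one inclusion indicator
        new = cv_error(X, y, prop, rng=rng)
        if new <= score or rng.random() < np.exp((score - new) / temp):
            gamma, score = prop, new                # Metropolis accept step
        visited[tuple(map(int, gamma))] = score
    return sorted(visited.items(), key=lambda kv: kv[1])[:5]   # best models seen

# toy data: only the first two of eight candidate covariates matter
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=200)
for model, err in cv_mcmc(X, y):
    print(model, round(err, 3))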
By: | Czellar Veronika; Karolyi G. Andrew; Ronchetti Elvezio |
Abstract: | We introduce Indirect Robust Generalized Method of Moments (IRGMM), a new simulation-based estimation methodology, to model short-term interest rate processes. The primary advantage of IRGMM relative to classical estimators of the continuous-time short-rate diffusion processes is that it corrects both the errors due to discretization and the errors due to model misspecification. We apply this new approach to various monthly and weekly Eurocurrency interest rate series.
Keywords: | GMM and RGMM estimators, CKLS one factor model, indirect inference |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:gen:geneem:2005.02&r=ecm |
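For context, the CKLS one-factor model and its classical Euler-discretised GMM moment conditions can be sketched as below. The paper's IRGMM additionally corrects discretisation and misspecification errors through simulation-based indirect inference with a robust GMM auxiliary estimator, which is not reproduced here; the data and starting values are simulated/assumed.

# CKLS: dr = (a + b*r) dt + sigma * r^gamma dW, classical just-identified GMM sketch.
import numpy as np
from scipy.optimize import minimize

def ckls_moments(theta, r):
    a, b, sigma, gamma = theta
    e = r[1:] - r[:-1] - a - b * r[:-1]               # discretised drift error
    v = e ** 2 - sigma ** 2 * r[:-1] ** (2 * gamma)   # conditional-variance error
    g = np.column_stack([e, e * r[:-1], v, v * r[:-1]])
    return g.mean(axis=0)                             # sample moment conditions

def gmm_objective(theta, r):
    g = ckls_moments(theta, r)
    return g @ g                                      # identity weighting matrix

# simulate a square-root-type short rate just to have data to fit
rng = np.random.default_rng(2)
r = np.empty(1000); r[0] = 0.05
for t in range(999):
    r[t + 1] = abs(r[t] + 0.2 * (0.05 - r[t]) / 12
                   + 0.1 * np.sqrt(r[t] / 12) * rng.normal())

res = minimize(gmm_objective, x0=[0.0, 0.0, 0.1, 0.5], args=(r,), method="Nelder-Mead")
print("a, b, sigma, gamma:", np.round(res.x, 4))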
By: | Adolfson, Malin (Research Department, Central Bank of Sweden); Laséen, Stefan (Monetary Policy Department, Central Bank of Sweden); Lindé, Jesper (Research Department, Central Bank of Sweden); Villani, Mattias (Research Department, Central Bank of Sweden) |
Abstract: | In this paper we develop a dynamic stochastic general equilibrium (DSGE) model for an open economy, and estimate it on Euro area data using Bayesian estimation techniques. The model incorporates several open economy features, as well as a number of nominal and real frictions that have proven to be important for the empirical fit of closed economy models. The paper offers: i) a theoretical development of the standard DSGE model into an open economy setting, ii) Bayesian estimation of the model, including assessments of the relative importance of various shocks and frictions for explaining the dynamic development of an open economy, and iii) an evaluation of the model's empirical properties using standard validation methods.
Keywords: | DSGE model; Open economy; Monetary Policy; Bayesian Inference; Business cycle |
JEL: | C11 E40 E47 E52 |
Date: | 2005–03–01 |
URL: | http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0179&r=ecm |
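Posteriors of DSGE models of this kind are typically sampled with a random-walk Metropolis algorithm applied to the log prior plus log likelihood; the likelihood itself requires solving the model and running a Kalman filter, which is well beyond this sketch. The generic sampler below illustrates only the sampling step, on a toy bivariate-normal "posterior".

# Generic random-walk Metropolis sampler (illustrative; pass in any log posterior).
import numpy as np

def rw_metropolis(log_post, theta0, n_draws=5000, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_draws, theta.size))
    accepted = 0
    for i in range(n_draws):
        prop = theta + scale * rng.normal(size=theta.size)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:              # accept/reject
            theta, lp = prop, lp_prop
            accepted += 1
        draws[i] = theta
    return draws, accepted / n_draws

# toy check: sample a correlated bivariate normal "posterior"
def toy_log_post(t):
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * t @ np.linalg.solve(cov, t)

draws, acc = rw_metropolis(toy_log_post, [0.0, 0.0], scale=0.5)
print("acceptance rate:", round(acc, 2), " posterior means:", draws.mean(axis=0).round(2))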
By: | Villani, Mattias (Research Department, Central Bank of Sweden) |
Abstract: | Vector autoregressions have steadily gained in popularity since their introduction in econometrics 25 years ago. A drawback of the otherwise fairly well developed methodology is the inability to incorporate prior beliefs regarding the system's steady state in a satisfactory way. Such prior information is typically readily available and may be crucial for forecasts at long horizons. This paper develops easily implemented numerical simulation algorithms for analyzing stationary and cointegrated VARs in a parametrization where prior beliefs on the steady state may be adequately incorporated. The analysis is illustrated on macroeconomic data for the Euro area.
Keywords: | Cointegration; Bayesian inference; Forecasting; Unconditional mean; VARs |
JEL: | C11 C32 C53 E50 |
Date: | 2005–03–16 |
URL: | http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0181&r=ecm |
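The kind of parametrisation the paper exploits can be written as a mean-adjusted VAR, y_t = psi + u_t with u_t = A u_{t-1} + e_t, so that a prior can be placed directly on the steady state psi. The sketch below simply conditions on a given psi (think of it as a prior mean rather than a full posterior draw) and shows long-horizon forecasts reverting to it; the data and dynamics are simulated.

# Mean-adjusted VAR(1) sketch: forecasts revert to the (prior) steady state psi.
import numpy as np

def fit_mean_adjusted_var1(y, psi):
    """OLS estimate of A in (y_t - psi) = A (y_{t-1} - psi) + e_t."""
    u = y - psi
    X, Y = u[:-1], u[1:]
    return np.linalg.lstsq(X, Y, rcond=None)[0].T

def forecast(y_last, psi, A, h):
    u = y_last - psi
    path = []
    for _ in range(h):
        u = A @ u
        path.append(psi + u)
    return np.array(path)

rng = np.random.default_rng(3)
psi_true = np.array([2.0, 0.5])                   # e.g. inflation and growth
A_true = np.array([[0.7, 0.1], [0.0, 0.6]])
y = np.empty((300, 2)); y[0] = psi_true
for t in range(299):
    y[t + 1] = psi_true + A_true @ (y[t] - psi_true) + 0.3 * rng.normal(size=2)

psi_prior = np.array([2.0, 0.5])                  # assumed known steady state
A_hat = fit_mean_adjusted_var1(y, psi_prior)
print(forecast(y[-1], psi_prior, A_hat, h=40)[[0, 4, 39]].round(2))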
By: | Hosung Jung |
Abstract: | This paper presents an autocorrelation test that is applicable to dynamic panel data models with serially correlated errors. Our residual-based GMM t-test (hereafter: t-test) differs from the m2 and Sargan's over-identifying restriction test (hereafter: Sargan test) in Arellano and Bond (1991), both of which are based on residuals from the first-difference equation. It is a significance test which is applied after estimating a dynamic model by the instrumental variable (IV) method and is directly applicable to any other consistently estimated residual. Two interesting points are found: the test depends only on the consistency of the first-step estimation, not on its efficiency; and the test is applicable to both forms of serial correlation (i.e., AR(1) or MA(1)). Monte Carlo simulations are also performed to study the practical performance of these three tests, the m2, the Sargan and the t-test, for models with first-order autoregressive AR(1) and first-order moving-average MA(1) serial correlation. The m2 and Sargan test statistics appear to accept the null too often in small samples, even when the autocorrelation coefficient approaches unity in the AR(1) disturbance. Overall, our residual-based t-test has considerably more power than the m2 test or the Sargan test.
Keywords: | Dynamic panel data, Residual based GMM t-test, m2 and Sargan tests |
Date: | 2005–02 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d04-77&r=ecm |
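The abstract does not spell out the construction of the statistic, so the following is only a schematic illustration: estimate the first-differenced dynamic panel model by the Anderson-Hsiao instrumental-variable estimator, then form a t-ratio from regressing the differenced residuals on their second lag (first-order correlation is present by construction after differencing), loosely in the spirit of the m2 diagnostic. The paper's test additionally accounts for the first-step estimation error, which this sketch ignores.

# Schematic residual-based autocorrelation check after IV estimation of a dynamic panel.
import numpy as np

rng = np.random.default_rng(4)
N, T, rho = 200, 10, 0.5
alpha = rng.normal(size=N)                            # individual effects
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

dy = np.diff(y, axis=1)                               # first differences
# Anderson-Hsiao: instrument the lagged difference with the level y_{t-2}
Y = dy[:, 2:].ravel()
X = dy[:, 1:-1].ravel()
Z = y[:, 1:-2].ravel()
rho_iv = (Z @ Y) / (Z @ X)                            # consistent first-step estimate

e = dy[:, 2:] - rho_iv * dy[:, 1:-1]                  # differenced-equation residuals
u, u_lag2 = e[:, 2:].ravel(), e[:, :-2].ravel()       # residuals and their second lag
phi = (u_lag2 @ u) / (u_lag2 @ u_lag2)
resid = u - phi * u_lag2
se = np.sqrt(resid @ resid / (len(u) - 1) / (u_lag2 @ u_lag2))
print("rho_hat:", round(rho_iv, 3), " t-ratio on second-lag residual:", round(phi / se, 2))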
By: | Massimiliano Marcellino; James Stock; Mark Watson |
Abstract: | “Iterated” multiperiod ahead time series forecasts are made using a one-period ahead model, iterated forward for the desired number of periods, whereas “direct” forecasts are made using a horizon-specific estimated model, where the dependent variable is the multi-period ahead value being forecasted. Which approach is better is an empirical matter: in theory, iterated forecasts are more efficient if correctly specified, but direct forecasts are more robust to model misspecification. This paper compares empirical iterated and direct forecasts from linear univariate and bivariate models by applying simulated out-of-sample methods to 171 U.S. monthly macroeconomic time series spanning 1959 – 2002. The iterated forecasts typically outperform the direct forecasts, particularly if the models can select long lag specifications. The relative performance of the iterated forecasts improves with the forecast horizon. |
URL: | http://d.repec.org/n?u=RePEc:igi:igierp:285&r=ecm |
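A minimal sketch of the two forecasting schemes being compared, for a univariate autoregression: the iterated forecast fits a one-step AR(p) and iterates it forward h periods, while the direct forecast projects y_{t+h} directly on (y_t, ..., y_{t-p+1}). The AR(2) data below are simulated purely for illustration.

# Iterated versus direct h-step forecasts from an AR(p).
import numpy as np

def lag_matrix(y, p):
    return np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])

def iterated_forecast(y, p, h):
    X = np.column_stack([np.ones(len(y) - p), lag_matrix(y, p)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    hist = list(y[-p:])
    for _ in range(h):
        hist.append(beta[0] + beta[1:] @ np.array(hist[::-1][:p]))  # iterate one step
    return hist[-1]

def direct_forecast(y, p, h):
    X = np.column_stack([np.ones(len(y) - p - h + 1),
                         lag_matrix(y[:len(y) - h + 1], p)])
    beta, *_ = np.linalg.lstsq(X, y[p + h - 1:], rcond=None)        # horizon-h projection
    x_now = np.concatenate([[1.0], y[::-1][:p]])
    return beta @ x_now

rng = np.random.default_rng(5)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] + 0.25 * y[t - 2] + e[t]

print("iterated h=12:", round(iterated_forecast(y, p=2, h=12), 3))
print("direct   h=12:", round(direct_forecast(y, p=2, h=12), 3))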
By: | Massimiliano Marcellino |
Abstract: | We provide an updated summary guide for the construction, use and evaluation of leading indicators, and an assessment of the most relevant recent developments in this field of economic forecasting. To begin with, we analyze the problem of selecting a target coincident variable for the leading indicators, which requires coincident indicator selection, construction of composite coincident indexes, choice of filtering methods, and business cycle dating procedures to transform the continuous target into a binary expansion/recession indicator. Next, we deal with criteria for choosing good leading indicators, and simple non-model based methods to combine them into composite indexes. Then, we examine models and methods to transform the leading indicators into forecasts of the target variable. Finally, we consider the evaluation of the resulting leading indicator based forecasts, and review the recent literature on the forecasting performance of leading indicators.
URL: | http://d.repec.org/n?u=RePEc:igi:igierp:286&r=ecm |
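One of the simple non-model-based combination methods discussed in this literature is an equally weighted average of standardised month-on-month changes of the component indicators, cumulated into an index. The sketch below uses simulated indicator levels and an arbitrary base value.

# Minimal equal-weight composite leading index from standardised monthly changes.
import numpy as np

def composite_index(components):
    """components: (T, k) array of indicator levels; returns a (T,) index."""
    changes = np.diff(np.log(components), axis=0)          # monthly log changes
    z = changes / changes.std(axis=0, ddof=1)              # equalise volatilities
    avg = z.mean(axis=1)                                   # equal-weight combination
    return 100 + np.concatenate([[0.0], np.cumsum(avg)])   # cumulate to an index

rng = np.random.default_rng(6)
levels = 100 * np.exp(np.cumsum(0.002 + 0.01 * rng.normal(size=(240, 4)), axis=0))
print(composite_index(levels)[-5:].round(1))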
By: | Torben G. Andersen; Tim Bollerslev; Peter F. Christoffersen; Francis X. Diebold
Abstract: | Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. |
JEL: | C1 G1 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:11188&r=ecm |
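As a pointer to the simplest of the univariate paradigms surveyed, the GARCH(1,1) recursion and its multi-step variance forecast can be written down in a few lines; the parameter values below are illustrative, not estimated.

# GARCH(1,1) filtering and multi-step variance forecasting.
import numpy as np

def garch_filter(returns, omega, alpha, beta):
    """Run the GARCH(1,1) conditional-variance recursion through the sample."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch_forecast(last_sigma2, last_return, omega, alpha, beta, h):
    """h-step-ahead variance forecasts; they revert to omega / (1 - alpha - beta)."""
    f = np.empty(h)
    f[0] = omega + alpha * last_return ** 2 + beta * last_sigma2
    for k in range(1, h):
        f[k] = omega + (alpha + beta) * f[k - 1]
    return f

rng = np.random.default_rng(7)
omega, alpha, beta = 0.05, 0.08, 0.90
r = np.zeros(1000); s2 = np.full(1000, omega / (1 - alpha - beta))
for t in range(1, 1000):
    s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    r[t] = np.sqrt(s2[t]) * rng.normal()

sig2 = garch_filter(r, omega, alpha, beta)
print(garch_forecast(sig2[-1], r[-1], omega, alpha, beta, h=10).round(3))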
By: | Jaroslava Hlouskova; Martin Wagner |
Abstract: | This paper presents results concerning the size and power of first generation panel unit root and stationarity tests obtained from a large scale simulation study, with in total about 290 million test statistics computed. The tests developed in the following papers are included: Levin, Lin and Chu (2002), Harris and Tzavalis (1999), Breitung (2000), Im, Pesaran and Shin (1997 and 2003), Maddala and Wu (1999), Hadri (2000) and Hadri and Larsson (2002). Our simulation set-up is designed to address, inter alia, the following issues. First, we assess the performance as a function of the time and the cross-section dimension. Second, we analyze the impact of positive MA roots on the test performance. Third, we investigate the power of the panel unit root tests (and the size of the stationarity tests) for a variety of first order autoregressive coefficients. Fourth, we consider both of the two usual specifications of deterministic variables in the unit root literature.
Keywords: | Panel Unit Root Test; Panel Stationarity Test; Size; Power; Simulation Study |
JEL: | C12 C15 C23 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:ube:dpvwib:dp0503&r=ecm |
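A drastically scaled-down version of such a Monte Carlo design is sketched below: the 5% critical value of a pooled Dickey-Fuller t-statistic (no deterministics, no MA component) is obtained by simulation under the unit-root null, and power is then computed for a stationary alternative. The actual study varies N, T, deterministic terms and MA roots across many tests and vastly more replications.

# Scaled-down size/power Monte Carlo for a pooled panel unit root statistic.
import numpy as np

def pooled_df_t(y):
    """Pooled t-statistic for rho = 0 in  dy_it = rho * y_i,t-1 + e_it."""
    x, dy = y[:, :-1].ravel(), np.diff(y, axis=1).ravel()
    rho = (x @ dy) / (x @ x)
    resid = dy - rho * x
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (x @ x))
    return rho / se

def simulate_panel(N, T, rho, rng):
    y = np.zeros((N, T))
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + rng.normal(size=N)
    return y

rng = np.random.default_rng(8)
N, T, reps = 20, 50, 2000
null_stats = np.array([pooled_df_t(simulate_panel(N, T, 1.0, rng)) for _ in range(reps)])
crit = np.quantile(null_stats, 0.05)                  # simulated left-tail 5% critical value
power = np.mean([pooled_df_t(simulate_panel(N, T, 0.95, rng)) < crit for _ in range(reps)])
print("5% critical value:", round(crit, 2), " power at rho = 0.95:", round(power, 2))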
By: | Rien Wagenvoort (European Investment Bank, Luxembourg); Paul Schure (Department of Economics, University of Northern B.C.) |
Abstract: | We introduce a new panel data estimation technique for cost and production functions: the Recursive Thick Frontier Approach (RTFA). RTFA has two advantages over existing thick frontier methods. First, technical inefficiency is allowed to be dependent on the explanatory variables of the frontier model. Second, no distributional assumptions are imposed on the inefficiency component of the error term. We show by means of simulation experiments that RTFA can outperform the popular stochastic frontier approach (SFA) and the “within” OLS estimator for realistic parameterisations of the productivity model.
Keywords: | Technical Efficiency, Efficiency Measurement, Frontier Production Functions, Recursive Thick Frontier Approach |
JEL: | C15 C23 C50 D2 |
Date: | 2005–03–11 |
URL: | http://d.repec.org/n?u=RePEc:vic:vicewp:0503&r=ecm |
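The exact RTFA recursion is not given in the abstract, so the sketch below captures only the generic recursive thick-frontier idea for a panel cost function: estimate by OLS, flag the firm whose average residual is largest (highest cost given its outputs), drop it, and re-estimate until an assumed share of presumed best-practice firms remains. The data, the stopping share and the selection rule are all illustrative assumptions.

# Generic recursive thick-frontier sketch for a panel cost function.
import numpy as np

def thick_frontier(X, y, firm_id, keep_share=0.5):
    firms = np.unique(firm_id)
    keep = np.ones(len(y), dtype=bool)
    while np.unique(firm_id[keep]).size > keep_share * firms.size:
        Z = np.column_stack([np.ones(keep.sum()), X[keep]])
        beta, *_ = np.linalg.lstsq(Z, y[keep], rcond=None)
        resid = y - np.column_stack([np.ones(len(y)), X]) @ beta
        avg = {f: resid[(firm_id == f) & keep].mean() for f in np.unique(firm_id[keep])}
        worst = max(avg, key=avg.get)              # most inefficient remaining firm
        keep &= firm_id != worst                   # drop it and re-estimate
    return beta

rng = np.random.default_rng(9)
n_firms, T = 60, 6
firm_id = np.repeat(np.arange(n_firms), T)
x = rng.normal(size=n_firms * T)
ineff = np.abs(rng.normal(scale=0.5, size=n_firms))[firm_id]    # firm-specific inefficiency
y = 1.0 + 0.8 * x + ineff + 0.1 * rng.normal(size=n_firms * T)  # log cost
print("frontier coefficients:", thick_frontier(x[:, None], y, firm_id).round(2))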
By: | Patrick Marsh |
Abstract: | This paper proposes and analyses a measure of distance for the unit root hypothesis tested against stochastic stationarity. It applies over a family of distributions, for any sample size, for any specification of deterministic components and under additional autocorrelation, here parameterised by a finite-order moving average. The measure is shown to obey a set of inequalities involving the measures of distance of Gibbs and Su (2002), which are also extended to include power. It is also shown to be a convex function of both the degree of the time polynomial regressors and the moving-average parameters, and is thus minimisable with respect to either. Implicitly, therefore, we find that linear trends and innovations having a moving-average negative unit root will necessarily make power small. In the context of the Nelson and Plosser (1982) data, the distance is used to measure the impact that specification of the deterministic trend has on our ability to make unit root inferences. For certain series it highlights how imposition of a linear trend can lead to estimated models indistinguishable from unit root processes, while freely estimating the degree of the trend yields a model very different in character.
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:05/02&r=ecm |