on Econometrics |
By: | Mika Meitz; Pentti Saikkonen |
Abstract: | This paper develops an asymptotic estimation theory for nonlinear autoregressive models with conditionally heteroskedastic errors. We consider a functional coefficient autoregression of order p (AR(p)) with the conditional variance specified as a general nonlinear first order generalized autoregressive conditional heteroskedasticity (GARCH(1,1)) model. Strong consistency and asymptotic normality of the global Gaussian quasi maximum likelihood (QML) estimator are established under conditions comparable to those recently used in the corresponding linear case. To the best of our knowledge, this paper provides the first results on consistency and asymptotic normality of the QML estimator in nonlinear autoregressive models with GARCH errors. |
Keywords: | AR-GARCH, asymptotic normality, consistency, nonlinear time series, quasi maximum likelihood estimation |
JEL: | C13 C22 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/25&r=ecm |
By: | Jason Allen; Allan W. Gregory; Katsumi Shimotsu |
Abstract: | Monte Carlo evidence has made it clear that asymptotic tests based on generalized method of moments (GMM) estimation have disappointing size. The problem is exacerbated when the moment conditions are serially correlated. Several block bootstrap techniques have been proposed to correct the problem, including Hall and Horowitz (1996) and Inoue and Shintani (2006). We propose an empirical likelihood block bootstrap procedure to improve inference where models are characterized by nonlinear moment conditions that are serially correlated of possibly infinite order. Combining the ideas of Kitamura (1997) and Brown and Newey (2002), the parameters of a model are initially estimated by GMM which are then used to compute the empirical likelihood probability weights of the blocks of moment conditions. The probability weights serve as the multinomial distribution used in resampling. The first-order asymptotic validity of the proposed procedure is proven, and a series of Monte Carlo experiments show it may improve test sizes over conventional block bootstrapping. |
Keywords: | Econometric and statistical methods |
JEL: | C14 C22 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:08-18&r=ecm |
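The resampling procedure described above lends itself to a compact illustration. The Python sketch below is only a toy version under strong simplifications: a scalar, already-centred moment series, non-overlapping blocks, and a placeholder bootstrap statistic (the resampled block mean) rather than the re-estimated GMM statistics studied in the paper. The function names `el_block_weights` and `el_block_bootstrap` are invented for this sketch.

```python
import numpy as np
from scipy.optimize import brentq

def el_block_weights(block_means):
    """Empirical-likelihood weights for scalar block means m_1,...,m_B:
    maximise sum_i log p_i subject to sum_i p_i = 1 and sum_i p_i m_i = 0,
    using the dual p_i = 1 / (B * (1 + lam * m_i))."""
    m = np.asarray(block_means, float)
    B = m.size
    if m.min() >= 0 or m.max() <= 0:
        raise ValueError("zero is not inside the convex hull of the block means")
    eps = 1e-10
    lo, hi = -1.0 / m.max() + eps, -1.0 / m.min() - eps   # keep 1 + lam*m_i > 0
    lam = brentq(lambda l: np.sum(m / (1.0 + l * m)), lo, hi)
    p = 1.0 / (B * (1.0 + lam * m))
    return p / p.sum()

def el_block_bootstrap(moments, block_len, n_boot, seed=0):
    """Resample non-overlapping blocks of the moment series, using the EL
    probabilities as the multinomial resampling distribution."""
    rng = np.random.default_rng(seed)
    m = np.asarray(moments, float)
    n_blocks = m.size // block_len
    blocks = m[: n_blocks * block_len].reshape(n_blocks, block_len)
    p = el_block_weights(blocks.mean(axis=1))
    draws = []
    for _ in range(n_boot):
        idx = rng.choice(n_blocks, size=n_blocks, p=p)    # weighted block resampling
        draws.append(blocks[idx].mean())                  # placeholder statistic
    return np.array(draws)

# toy serially correlated "moment" series, blocks of length 5
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
for t in range(1, 500):
    u[t] += 0.5 * u[t - 1]
boot = el_block_bootstrap(u - u.mean(), block_len=5, n_boot=999)
```

In the paper the resampled blocks feed back into the GMM criterion and the test statistics; here the resampled mean merely demonstrates the weighting and resampling steps.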
By: | Brendan K. Beare (Nuffield College, Oxford University) |
Abstract: | It is known that unit root test statistics may not have the usual asymptotic properties when the variance of innovations is unstable. In particular, persistent changes in volatility can cause the size of unit root tests to differ from the nominal level. In this paper we propose a class of modified unit root test statistics that are robust to the presence of unstable volatility. The modification is achieved by purging heteroskedasticity from the data using a kernel estimate of volatility prior to the application of standard tests. In the absence of deterministic trend components, this approach delivers test statistics that achieve standard asymptotics under the null hypothesis of a unit root. When the data are homoskedastic, the local power of unit root tests is unchanged by our modification. We use Monte Carlo simulations to compare the finite sample performance of our modified tests with that of existing methods of correcting for unstable volatility. |
Keywords: | unit root, heteroskedasticity, nonstationary volatility. |
JEL: | C14 C22 |
Date: | 2008–05–05 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:0806&r=ecm |
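As a rough illustration of the idea of purging heteroskedasticity before testing (not necessarily the paper's exact kernel estimator or test statistic), one can rescale the increments of the series by a kernel estimate of their local variance and then apply a standard ADF test. The helper `purge_volatility`, the Gaussian kernel, and the bandwidth choice are assumptions of this sketch.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def purge_volatility(y, bandwidth=0.1):
    """Divide the increments of y by a kernel (Nadaraya-Watson) estimate of
    their local standard deviation and re-accumulate the series."""
    dy = np.diff(y)
    n = dy.size
    u = np.arange(1, n + 1) / n                               # rescaled time
    sig2 = np.empty(n)
    for i in range(n):
        w = np.exp(-0.5 * ((u - u[i]) / bandwidth) ** 2)      # Gaussian kernel weights
        sig2[i] = np.sum(w * dy ** 2) / np.sum(w)
    return np.concatenate(([y[0]], y[0] + np.cumsum(dy / np.sqrt(sig2))))

# random walk whose innovation standard deviation doubles mid-sample
rng = np.random.default_rng(0)
e = rng.standard_normal(400) * np.where(np.arange(400) < 200, 1.0, 2.0)
y = np.cumsum(e)
print(adfuller(y)[0])                     # ADF statistic on the raw series
print(adfuller(purge_volatility(y))[0])   # ADF statistic on the purged series
```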
By: | Subbotin, Viktor |
Abstract: | The paper develops the bootstrap theory and extends the asymptotic theory of rank estimators, such as the Maximum Rank Correlation Estimator (MRC) of Han (1987), Monotone Rank Estimator (MR) of Cavanagh and Sherman (1998) or Pairwise-Difference Rank Estimators (PDR) of Abrevaya (2003). It is known that under general conditions these estimators have asymptotic normal distributions, but the asymptotic variances are difficult to find. Here we prove that the quantiles and the variances of the asymptotic distributions can be consistently estimated by the nonparametric bootstrap. We investigate the accuracy of inference based on the asymptotic approximation and the bootstrap, and provide bounds on the associated error. In the case of MRC and MR, the bound is a function of the sample size of order close to n^{-1/6}. The PDR estimators belong to a special subclass of rank estimators for which the bound is vanishing with the rate close to n^{-1/2}. The theoretical findings are illustrated with Monte-Carlo experiments and a real data example. |
Keywords: | Rank Estimators; Bootstrap; M-Estimators; U-Statistics; U-Processes |
JEL: | C14 C12 C15 |
Date: | 2007–11–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:9030&r=ecm |
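A minimal sketch of one of the rank estimators involved, Han's maximum rank correlation estimator, together with a nonparametric (pairs) bootstrap of its sampling distribution. The normalisation (first coefficient set to one), the grid search, and the function names are choices made for this illustration only.

```python
import numpy as np

def mrc_objective(b2, y, x):
    """Count of concordant pairs 1{y_i > y_j} 1{x_i'b > x_j'b} with b = (1, b2)."""
    idx = x[:, 0] + b2 * x[:, 1]
    return np.sum((y[:, None] > y[None, :]) & (idx[:, None] > idx[None, :]))

def mrc_fit(y, x, grid):
    """Grid search; the objective is a step function, so gradients are useless."""
    return grid[int(np.argmax([mrc_objective(b, y, x) for b in grid]))]

def mrc_bootstrap(y, x, grid, n_boot=199, seed=0):
    """Nonparametric bootstrap: resample observations, re-maximise the rank objective."""
    rng = np.random.default_rng(seed)
    n = len(y)
    return np.array([mrc_fit(y[i], x[i], grid)
                     for i in (rng.integers(0, n, n) for _ in range(n_boot))])

# toy binary-choice design with true coefficients (1, 2)
rng = np.random.default_rng(0)
x = rng.standard_normal((300, 2))
y = (x @ np.array([1.0, 2.0]) + rng.logistic(size=300) > 0).astype(float)
grid = np.linspace(0.5, 4.0, 71)
print(mrc_fit(y, x, grid), mrc_bootstrap(y, x, grid, n_boot=99).std())
```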
By: | Helen Armstrong (School of Mathematics, University of New South Wales); Christopher K. Carter (School of Economics, University of New South Wales); Kevin K. F. Wong (Graduate University for Advanced Studies, Tokyo, Japan); Robert Kohn (School of Economics, University of New South Wales) |
Abstract: | Estimating a covariance matrix efficiently and discovering its structure are important statistical problems with applications in many fields. This article takes a Bayesian approach to estimate the covariance matrix of Gaussian data. We use ideas from Gaussian graphical models and model selection to construct a prior for the covariance matrix that is a mixture over all decomposable graphs, where a graph means the configuration of nonzero off-diagonal elements in the inverse of the covariance matrix. Our prior for the covariance matrix is such that the probability of each graph size is specified by the user and graphs of equal size are assigned equal probability. Most previous approaches assume that all graphs are equally probable. We give empirical results that show the prior that assigns equal probability over graph sizes outperforms the prior that assigns equal probability over all graphs, both in identifying the correct decomposable graph and in more efficiently estimating the covariance matrix. The advantage is greatest when the number of observations is small relative to the dimension of the covariance matrix. The article also shows empirically that there is minimal change in statistical efficiency in using the mixture over decomposable graphs prior for estimating a general covariance compared to the Bayesian estimator by Wong et al. (2003), even when the graph of the covariance matrix is nondecomposable. However, our approach has some important advantages over that of Wong et al. (2003). Our method requires the number of decomposable graphs for each graph size. We show how to estimate these numbers using simulation and that the simulation results agree with analytic results when such results are known. We also show how to estimate the posterior distribution of the covariance matrix using Markov chain Monte Carlo with the elements of the covariance matrix integrated out, and give empirical results that show the sampler is computationally efficient and converges rapidly. Finally, we note that both the prior and the simulation method to evaluate the prior apply generally to any decomposable graphical model. |
Keywords: | Covariance selection; Graphical models; Reduced conditional sampling; Variable selection |
Date: | 2007–04 |
URL: | http://d.repec.org/n?u=RePEc:swe:wpaper:2007-13&r=ecm |
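The construction of the prior (equal probability across graph sizes, equal probability within a size) and the simulation-based count of decomposable graphs can be illustrated in a few lines. The sketch below uses the fact that decomposable graphs are exactly the chordal graphs and checks chordality with networkx; the function names and the crude uniform-sampling estimator are assumptions of this illustration, not the authors' algorithm.

```python
import itertools, math
import numpy as np
import networkx as nx

def n_decomposable(p, k, n_sim=2000, seed=0):
    """Monte Carlo estimate of the number of decomposable (chordal) graphs on p
    vertices with k edges: the fraction of uniformly drawn k-edge graphs that
    are chordal, scaled by C(p*(p-1)/2, k)."""
    rng = np.random.default_rng(seed)
    all_edges = list(itertools.combinations(range(p), 2))
    total = math.comb(len(all_edges), k)
    if k <= 1:
        return total                              # 0- and 1-edge graphs are chordal
    hits = 0
    for _ in range(n_sim):
        g = nx.Graph()
        g.add_nodes_from(range(p))
        g.add_edges_from(all_edges[i] for i in rng.choice(len(all_edges), k, replace=False))
        hits += nx.is_chordal(g)
    return total * hits / n_sim

def graph_prior(graph_edges, counts):
    """Prior probability of one decomposable graph with `graph_edges` edges when
    every graph size gets equal probability and graphs of a given size are
    equiprobable; counts[k] is the (estimated) number of size-k graphs."""
    return (1.0 / len(counts)) / counts[graph_edges]

p = 5
counts = [n_decomposable(p, k, seed=k) for k in range(p * (p - 1) // 2 + 1)]
print(graph_prior(3, counts))
```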
By: | Matei Demetrescu; Helmut Luetkepohl; Pentti Saikkonen |
Abstract: | When applying Johansen's procedure for determining the cointegrating rank to systems of variables with linear deterministic trends, there are two possible tests to choose from. One test allows for a trend in the cointegration relations and the other one restricts the trend to be orthogonal to the cointegration relations. The first test is known to have reduced power relative to the second one if there is in fact no trend in the cointegration relations, whereas the second one is based on a misspecified model if the linear trend is not orthogonal to the cointegration relations. Hence, the treatment of the linear trend term is crucial for the outcome of the rank determination procedure. We compare two alternative testing strategies which are applicable if there is uncertainty regarding the proper trend specification. In the first one a specific cointegrating rank is rejected if one of the two tests rejects and in the second one the trend term is decided upon by a pretest. The first strategy is shown to be preferable in applied work. |
Keywords: | Cointegration analysis, likelihood ratio test, vector autoregressive model, vector error correction model |
JEL: | C32 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/24&r=ecm |
By: | Markku Lanne; Helmut Luetkepohl |
Abstract: | Different identification schemes for monetary policy shocks have been proposed in the literature. They typically specify just-identifying restrictions in a standard structural vector autoregressive (SVAR) framework. Thus, in this framework the different schemes cannot be checked against the data with statistical tests. We consider different approaches to using the data properties to augment the standard SVAR setup for identifying the shocks. Thereby it becomes possible to test models that are just-identified in a standard setting. For monthly US data it is found that a model where monetary shocks are induced via the federal funds rate is the only one which cannot be rejected when the data properties are used for identification. |
Keywords: | Mixed normal distribution, structural vector autoregressive model, vector autoregressive process |
JEL: | C32 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/23&r=ecm |
By: | Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA) |
Abstract: | This paper considers a first-order autoregressive model with conditionally heteroskedastic innovations. The asymptotic distributions of least squares (LS), infeasible generalized least squares (GLS), and feasible GLS estimators and t statistics are determined. The GLS procedures allow for misspecification of the form of the conditional heteroskedasticity and, hence, are referred to as quasi-GLS procedures. The asymptotic results are established for drifting sequences of the autoregressive parameter and the distribution of the time series of innovations. In particular, we consider the full range of cases in which the autoregressive parameter rho_n satisfies (i) n(1 - rho_n) -> infinity and (ii) n(1 - rho_n) -> h_1 < infinity as n -> infinity, where n is the sample size. Results of this type are needed to establish the uniform asymptotic properties of the LS and quasi-GLS statistics. |
Keywords: | Asymptotic distribution, Autoregression, Conditional heteroskedasticity, Generalized least squares, Least squares |
JEL: | C22 |
Date: | 2008–06 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1665&r=ecm |
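A bare-bones two-step feasible quasi-GLS in the spirit of the setup above: least squares for the autoregressive parameter, a deliberately simple (and possibly misspecified) ARCH(1)-type regression for the conditional variance, and weighted least squares in the second step. The variance specification and the function name are choices of this sketch, not the paper's.

```python
import numpy as np

def ar1_quasi_gls(y):
    """LS and feasible quasi-GLS estimates of rho in y_t = rho*y_{t-1} + eps_t."""
    y0, y1 = y[1:], y[:-1]
    rho_ls = (y0 @ y1) / (y1 @ y1)                        # step 1: least squares
    e = y0 - rho_ls * y1
    # step 2: crude conditional-variance fit, sigma_t^2 = a + b*e_{t-1}^2
    X = np.column_stack([np.ones(e.size - 1), e[:-1] ** 2])
    a, b = np.linalg.lstsq(X, e[1:] ** 2, rcond=None)[0]
    sig2 = np.maximum(a + b * e[:-1] ** 2, 1e-6)          # guard against negative fits
    # step 3: weighted LS using the fitted variances as weights
    w = 1.0 / sig2
    rho_gls = np.sum(w * y0[1:] * y1[1:]) / np.sum(w * y1[1:] ** 2)
    return rho_ls, rho_gls

# AR(1) with ARCH(1) innovations
rng = np.random.default_rng(0)
n, rho = 500, 0.9
eps, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    s2 = 0.2 + 0.7 * eps[t - 1] ** 2
    eps[t] = np.sqrt(s2) * rng.standard_normal()
    y[t] = rho * y[t - 1] + eps[t]
print(ar1_quasi_gls(y))
```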
By: | Pesaran, M.H.; Schleicher, C.; Zaffaroni, P. |
Abstract: | This paper considers the problem of model uncertainty in the case of multi-asset volatility models and discusses the use of model averaging techniques as a way of dealing with the risk of inadvertently using false models in portfolio management. Evaluation of volatility models is then considered and a simple Value-at-Risk (VaR) diagnostic test is proposed for individual as well as 'average' models. The asymptotic as well as the exact finite-sample distributions of the test statistic, dealing with the possibility of parameter uncertainty, are established. The model averaging idea and the VaR diagnostic tests are illustrated by an application to portfolios of daily returns on six currencies, four equity indices, four ten-year government bonds and four commodities over the period 1991-2007. The empirical evidence supports the use of 'thick' model averaging strategies over single models or Bayesian-type model averaging procedures. |
Keywords: | Model Averaging, Value-at-Risk, Decision Based Evaluations. |
JEL: | C32 C52 C53 G11 |
Date: | 2008–01 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:0808&r=ecm |
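The paper's diagnostic accounts for parameter uncertainty; the sketch below only shows a generic unconditional-coverage check for VaR exceedances, i.e. a standardised comparison of the empirical violation rate with the nominal level, as a point of reference. Function and variable names are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def var_coverage_z(returns, var_forecasts, alpha=0.01):
    """z statistic for the hypothesis that VaR violations occur with probability alpha."""
    hits = (np.asarray(returns) < -np.asarray(var_forecasts)).astype(float)
    n = hits.size
    z = np.sqrt(n) * (hits.mean() - alpha) / np.sqrt(alpha * (1 - alpha))
    return z, 2 * (1 - norm.cdf(abs(z)))                  # two-sided p-value

# sanity check: iid N(0, 0.01^2) returns against their correct 1% VaR
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1000)
var99 = np.full(1000, 0.01 * norm.ppf(0.99))
print(var_coverage_z(r, var99, alpha=0.01))
```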
By: | Lokshin, Boris (UNU-MERIT, and Maastricht University) |
Abstract: | This paper extends the LSDV bias-corrected estimator in [Bun, M., Carree, M.A. 2005. Bias-corrected estimation in dynamic panel data models, Journal of Business and Economic Statistics, 23(2): 200-10] to unbalanced panels and discusses the analytic method of obtaining the solution. Using a Monte Carlo approach, the paper compares the performance of this estimator with three other available techniques for dynamic panel data models. Simulation reveals that the LSDV-bc estimator is a good choice except for samples with small T, where it may be impractical. The methodology is applied to examine the impact of internal and external R&D on labor productivity in an unbalanced panel of innovating firms. |
Keywords: | Bias Correction, Unbalanced Panel Data, GMM, Dynamic Models |
JEL: | C23 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:dgr:unumer:2008039&r=ecm |
By: | Marc Hallin; Roman Liska |
Abstract: | Macroeconometric data often come in the form of large panels of time series, which themselves decompose into smaller but still quite large subpanels or blocks. We show how the dynamic factor analysis method proposed in Forni et al. (2000), combined with the identification method of Hallin and Liska (2007), allows for identifying and estimating joint and block-specific common factors. This leads to a more sophisticated analysis of the structures of dynamic interrelations within and between the blocks in such datasets, along with an informative decomposition of explained variances. The method is illustrated with an analysis of the Industrial Production Index data for France, Germany, and Italy. |
Keywords: | Panel data; Time series; High dimensional data; Dynamic factor model; Business cycle; Block specific factors; Dynamic principal components; Information criterion. |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/22&r=ecm |
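Hallin and Liska work with dynamic principal components in the frequency domain; the sketch below only conveys the joint-versus-block-specific idea with ordinary static principal components: joint factors are extracted from the pooled panel, projected out of each block, and block-specific factors are then taken from the remainders. All function names are specific to this illustration.

```python
import numpy as np

def pca_factors(X, k):
    """First k static principal-component factor estimates of a (T x N) panel."""
    Xc = X - X.mean(0)
    u, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return u[:, :k] * s[:k]

def joint_and_block_factors(blocks, k_joint=1, k_block=1):
    """Joint factors from the pooled panel; block-specific factors from the
    residuals of each block after projecting out the joint factors."""
    F = pca_factors(np.hstack(blocks), k_joint)
    specific = []
    for X in blocks:
        Xc = X - X.mean(0)
        load, *_ = np.linalg.lstsq(F, Xc, rcond=None)     # loadings on joint factors
        specific.append(pca_factors(Xc - F @ load, k_block))
    return F, specific

# toy panel: one global factor plus one factor per block, three blocks of 30 series
rng = np.random.default_rng(0)
T = 200
g = rng.standard_normal(T)                                # global factor
blocks = [np.outer(g, rng.standard_normal(30))                            # global loadings
          + np.outer(rng.standard_normal(T), rng.standard_normal(30))     # block factor
          + rng.standard_normal((T, 30))                                  # idiosyncratic noise
          for _ in range(3)]
F, block_F = joint_and_block_factors(blocks)
```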
By: | Markku Lanne; Pentti Saikkonen |
Abstract: | This paper is concerned with univariate noncausal autoregressive models and their potential usefulness in economic applications. We argue that noncausal autoregressive models are especially well suited for modeling expectations. Unlike conventional causal autoregressive models, they explicitly show how the considered economic variable is affected by expectations and how expectations are formed. Noncausal autoregressive models can also be used to examine the related issue of backward-looking or forward-looking dynamics of an economic variable. We show in the paper how the parameters of a noncausal autoregressive model can be estimated by the method of maximum likelihood and how related test procedures can be obtained. Because noncausal autoregressive models cannot be distinguished from conventional causal autoregressive models by second order properties or Gaussian likelihood, a detailed discussion of their specification is provided. Motivated by economic applications, we explicitly use a forward-looking autoregressive polynomial in the formulation of the model. This is different from the practice used in the previous statistics literature on noncausal autoregressions and, in addition to its economic motivation, it is also convenient from a statistical point of view. In particular, it facilitates obtaining likelihood-based diagnostic tests for the specified orders of the backward-looking and forward-looking autoregressive polynomials. Such test procedures are not only useful in the specification of the model but also in testing economically interesting hypotheses such as whether the considered variable only exhibits forward-looking behavior. As an empirical application, we consider modeling U.S. inflation dynamics which, according to our results, are purely forward-looking. |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/20&r=ecm |
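For the purely noncausal special case, a crude conditional maximum-likelihood sketch is easy to write down: with Student-t errors (some non-Gaussian distribution is needed, since the abstract notes that Gaussian likelihood cannot tell causal and noncausal models apart), the forward-looking AR(1) y_t = phi*y_{t+1} + eps_t can be fitted by maximising the conditional density of the residuals. This ignores initial-value corrections and the general mixed backward/forward case treated in the paper; the function names and the fixed degrees of freedom are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def negloglik(params, y, df=5):
    """Approximate conditional log-likelihood of y_t = phi*y_{t+1} + eps_t with
    scaled Student-t errors; conditions on the last observation."""
    phi, log_sigma = params
    eps = y[:-1] - phi * y[1:]
    return -np.sum(student_t.logpdf(eps / np.exp(log_sigma), df) - log_sigma)

def fit_noncausal_ar1(y, df=5):
    res = minimize(negloglik, x0=[0.0, 0.0], args=(y, df), method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])          # (phi_hat, sigma_hat)

# generate data backwards from the forward-looking recursion, then trim the ends
n, phi0 = 600, 0.6
eps = student_t.rvs(5, size=n, random_state=0)
y = np.zeros(n)
for t in range(n - 2, -1, -1):
    y[t] = phi0 * y[t + 1] + eps[t]
print(fit_noncausal_ar1(y[50:-50]))
```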
By: | Pesaran, M.H.; Schuermann, T.; Smith, L.V. |
Abstract: | This paper considers the problem of forecasting real and financial macroeconomic variables across a large number of countries in the global economy. To this end a global vector autoregressive (GVAR) model, previously estimated over the 1979Q1-2003Q4 period by Dees, di Mauro, Pesaran, and Smith (2007), is used to generate out-of-sample one- and four-quarter-ahead forecasts of real output, inflation, real equity prices, exchange rates and interest rates over the period 2004Q1-2005Q4. Forecasts are obtained for 134 variables from 26 regions made up of 33 countries covering about 90% of world output. The forecasts are compared to typical benchmarks: univariate autoregressive and random walk models. Building on the forecast combination literature, the effects of model and estimation uncertainty on forecast outcomes are examined by pooling forecasts obtained from different GVAR models estimated over alternative sample periods. Given the size of the modeling problem, the heterogeneity of the economies considered (industrialised, emerging, and less developed countries), and the very real possibility of multiple structural breaks, averaging forecasts across both models and windows makes a significant difference. Indeed, the double-averaged GVAR forecasts performed better than the benchmark competitors, especially for output, inflation and real equity prices. |
Keywords: | Forecasting using GVAR, structural breaks and forecasting, average forecasts across models and windows, financial and macroeconomic forecasts. |
JEL: | C32 C51 C53 |
Date: | 2008–01 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:0807&r=ecm |
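The averaging-across-estimation-windows idea can be shown with a much smaller model than a GVAR. The sketch below averages h-step forecasts from the same autoregression fitted on progressively shorter samples; equal weights and the simple AR(1) stand-in are assumptions of the illustration.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def average_forecasts_across_windows(y, window_starts, lags=1, horizon=4):
    """Equal-weight average of h-step forecasts from the same AR model
    estimated over different starting points of the sample."""
    fcasts = [AutoReg(y[s:], lags=lags).fit().forecast(steps=horizon)
              for s in window_starts]
    return np.column_stack(fcasts).mean(axis=1)

# series with a break in the mean half-way through the sample
rng = np.random.default_rng(0)
y = np.concatenate([rng.standard_normal(100), 1.5 + rng.standard_normal(100)])
print(average_forecasts_across_windows(y, window_starts=[0, 50, 100, 150]))
```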
By: | Bernardi, Mauro; Della Corte, Giuseppe; Proietti, Tommaso |
Abstract: | The series on average hours worked in the manufacturing sector is a key leading indicator of the U.S. business cycle. The paper deals with robust estimation of the cyclical component for the seasonally adjusted time series. This is achieved by an unobserved components model featuring an irregular component that is represented by a Gaussian mixture with two components. The mixture aims at capturing the kurtosis which characterizes the data. After presenting a Gibbs sampling scheme, we illustrate that the Gaussian mixture model provides a satisfactory representation of the data, allowing for the robust estimation of the cyclical component of per capita hours worked. Another important piece of evidence is that the outlying observations are not scattered randomly throughout the sample, but have a distinctive seasonal pattern. Therefore, seasonal adjustment plays a role. We finally show that, if a flexible seasonal model is adopted for the unadjusted series, the level of outlier contamination is drastically reduced. |
Keywords: | Gaussian Mixtures; Robust signal extraction; State Space Models; Bayesian model selection; Seasonality |
JEL: | E32 C52 C22 C11 |
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:8880&r=ecm |
By: | Cheti Nicoletti (Institute for Social and Economic Research) |
Abstract: | The estimation of occupational mobility across generations can be biased because of various sample selection issues such as, for example, selection into employment. Most empirical papers have either neglected sample selection issues or adopted Heckman-type correction methods. These methods are generally not adequate for estimating intergenerational mobility models. In this paper, we show how to use new methods to estimate linear and quantile intergenerational mobility equations while taking account of multiple sample selection. |
Keywords: | intergenerational links, sample selection |
Date: | 2008–05 |
URL: | http://d.repec.org/n?u=RePEc:ese:iserwp:2008-20&r=ecm |
By: | Bent Nielsen (Nuffield College, Oxford University); Heino Bohn Nielsen (University of Copenhagen) |
Abstract: | Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies a non-differentiability, so the delta method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples this has a considerable influence on the distribution unless the roots are far apart. With increasing order of the autoregressions it becomes increasingly difficult to place the roots far apart, giving a very noisy signal from the characteristic roots. |
Keywords: | Autoregression; Characteristic root. |
JEL: | C22 |
Date: | 2008–05–30 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:0807&r=ecm |
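The noisiness of estimated characteristic roots when the population roots coincide is easy to reproduce. The short Monte Carlo below fits an AR(2) with a double root at 0.6 by OLS and looks at the dispersion of the estimated roots around it; the helper functions are written just for this experiment.

```python
import numpy as np

def char_roots(ar_coefs):
    """Roots of z^p - a_1 z^{p-1} - ... - a_p for AR coefficients (a_1, ..., a_p)."""
    return np.roots(np.concatenate(([1.0], -np.asarray(ar_coefs, float))))

def fit_ar_ols(y, p):
    """OLS for a zero-mean AR(p) without intercept."""
    n = y.size
    X = np.column_stack([y[p - j:n - j] for j in range(1, p + 1)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

# AR(2) with a repeated real root: (1 - 0.6L)^2, i.e. a1 = 1.2, a2 = -0.36
a1, a2 = 1.2, -0.36
rng = np.random.default_rng(0)
roots = []
for _ in range(500):
    e = rng.standard_normal(400)
    y = np.zeros(400)
    for t in range(2, 400):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]
    roots.append(char_roots(fit_ar_ols(y, 2)))
roots = np.array(roots)
print(np.abs(roots - 0.6).mean())    # average distance from the double root at 0.6
```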
By: | Framroze Moller, Niels |
Abstract: | Examples of simple economic theory models are analyzed as restrictions on the Cointegrated VAR (CVAR). This establishes a correspondence between basic economic concepts and the econometric concepts of the CVAR: the economic relations correspond to cointegrating vectors, and exogeneity in the economic model implies the econometric concept of strong exogeneity for β. The economic equilibrium corresponds to the so-called long-run value (Johansen 2005), the comparative statics are captured by the long-run impact matrix, C, and the exogenous variables are the common trends. Also, the adjustment parameters of the CVAR are shown to be interpretable in terms of expectations formation, market clearing, nominal rigidities, etc. The distinction between general and partial equilibrium is also discussed. |
Keywords: | Cointegrated VAR, unit root approximation, economic theory models, expectations, general equilibrium, DSGE models |
JEL: | C32 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwedp:7283&r=ecm |
By: | Harvey, A. |
Abstract: | The relationship between inflation and the output gap can be modeled simply and effectively by including an unobserved random walk component in the model. The dynamic properties match the stylized facts and the random walk component satisfies the properties normally required for core inflation. The model may be generalized so as to include a term for the expectation of next period's output, but it is shown that this is difficult to distinguish from the original specification. The model is fitted as a single equation and as part of a bivariate model that includes an equation for GDP. Fitting the bivariate model highlights some new aspects of unobserved components modeling. Single equation and bivariate models tell a similar story: an output gap two per cent above trend is associated with an annual inflation rate that is one per cent above core inflation. |
Keywords: | Cycle; hybrid new Keynesian Phillips curve; inflation gap; Kalman filter; output gap. |
Date: | 2008–01 |
URL: | http://d.repec.org/n?u=RePEc:cam:camdae:0805&r=ecm |
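The single-equation version described above (inflation equals a random-walk core plus a loading on the output gap) maps directly into a local-level model with an exogenous regressor. The sketch below fits such a specification to simulated data with the statsmodels state-space tools; the simulated series, the parameter values, and whether this matches the paper's exact bivariate specification are all assumptions of the illustration.

```python
import numpy as np
import statsmodels.api as sm

# simulate: core inflation is a random walk; observed inflation adds 0.5 * gap + noise
rng = np.random.default_rng(0)
T = 200
core = np.cumsum(0.1 * rng.standard_normal(T))
gap = np.sin(2 * np.pi * np.arange(T) / 40) + 0.3 * rng.standard_normal(T)  # stylised cycle
infl = core + 0.5 * gap + 0.2 * rng.standard_normal(T)

# local level (random-walk core) plus a regression on the output gap
mod = sm.tsa.UnobservedComponents(infl, level="local level", exog=gap)
res = mod.fit(disp=False)
print(res.params)                      # disturbance variances and the gap coefficient
core_hat = res.smoothed_state[0]       # smoothed estimate of the core-inflation level
```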
By: | Freeman, Alan |
Abstract: | This note lays out a roadmap to Datapedia: the goal is to share numbers with the same power and ease that the Wiki has delivered for documents. This would transform the quality and usability of economic data. The goal is a system which, by analogy with Wikipedia, can establish a world resource for reliable data. The paper discusses a process by which data providers and users can evolve a new set of systems for exchanging, describing and interacting with data to bring this about. The proposal centres on the metadata (additional descriptive data) that is associated with numeric data, and suggests how, in two cases (World GDP and Creative Industry Employment), data could be mapped in such a way that viable Datawiki platforms can be built. The proposal also allows existing communities of users to start reshaping the way they exchange and handle data, to permit, and also to improve, existing standards for collaborative use of data. The first step would be Datawiki: an open-source system for recording revisions, changes and sources of data, allowing users to compare different revisions and versions of data with each other. It would be a set of protocols, and simple web tools, to help data researchers pool, compare, scrutinise, and revise datasets from multiple sources. The first step towards Datawiki is Wikidata: rethinking the way that data itself is transmitted between the people who collaborate on it, via a platform-independent standard for exchanging specifically numeric data. I show that the ubiquitous standard for exchanging data, the spreadsheet, is not up to the task of serving as a platform for Datawiki, and assess how alternatives can be developed. |
Keywords: | Creative Industries; Economic statistics; Datapedia; Wikipedia; Wiki; Data; Macroeconomics |
JEL: | Z1 E01 C8 |
Date: | 2008–06–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:9012&r=ecm |
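To make the "platform-independent standard for exchanging specifically numeric data" more concrete, here is one entirely hypothetical shape such a Datawiki record could take, with the observation, its metadata and a revision trail carried together; none of the field names or values come from the paper.

```python
import json

# hypothetical Datawiki-style record (illustrative field names and dummy values only)
record = {
    "series": "World GDP (illustrative placeholder)",
    "unit": "index, 2000 = 100",
    "observation": {"period": "2006", "value": 123.4},
    "source": "placeholder source description",
    "revisions": [
        {"rev": 1, "value": 120.0, "by": "contributorA", "note": "initial upload"},
        {"rev": 2, "value": 123.4, "by": "contributorB", "note": "revised vintage"},
    ],
}
print(json.dumps(record, indent=2))
```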