New Economics Papers on Econometrics
By: | Giorgio Calzolari (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti"); Laura Neri (Università di Siena, Dipartimento di Metodi Quantitativi) |
Abstract: | Given a set of continuous variables with missing data, we prove in this paper that the iterative application of a simple “least-squares estimation/multivariate normal simulation” procedure produces an efficient parameter estimator. There are two main assumptions behind our proof: (1) the missing data mechanism is ignorable; (2) the data generating process is a multivariate normal linear regression. Disentangling the iterative procedure and its convergence conditions, we show that the estimator is a “method of simulated scores” (a particular case of McFadden’s “method of simulated moments”), and thus equivalent to maximum likelihood if the number of replications is conveniently large. We thus provide a non-Bayesian re-interpretation of the estimation/simulation problem. The computational procedure is obtained by introducing a simple modification into existing algorithms. Its software implementation is straightforward (a few simple statements in any programming language) and easily applicable to datasets with a large number of variables. (A schematic code sketch follows this entry.) |
Keywords: | Simulated scores, missing data, multivariate normal regression model, estimation/simulation, general pattern of missingness, simultaneous equations, structural form, reduced form |
JEL: | C13 C15 C30 C81 |
Date: | 2010–01 |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2010_01&r=ecm |
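The estimate/simulate iteration described above can be pictured concretely. The following is a minimal sketch, not the authors' code: it assumes an unconditional multivariate normal (the paper treats a normal linear regression) and ignorable missingness, with all names illustrative.

```python
import numpy as np

def simulate_missing(X, mu, Sigma, rng):
    """Redraw each row's missing entries from the conditional normal
    given its observed entries (the 'simulation' half of the loop)."""
    Xc = X.copy()
    for i in range(X.shape[0]):
        m = np.isnan(X[i])          # mask of missing entries in row i
        if not m.any():
            continue
        o = ~m
        K = Sigma[np.ix_(m, o)] @ np.linalg.inv(Sigma[np.ix_(o, o)])
        cond_mu = mu[m] + K @ (X[i, o] - mu[o])
        cond_S = Sigma[np.ix_(m, m)] - K @ Sigma[np.ix_(o, m)]
        Xc[i, m] = rng.multivariate_normal(cond_mu, cond_S)
    return Xc

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=500)
X[rng.random(X.shape) < 0.2] = np.nan                  # ignorable missingness
Xc = np.where(np.isnan(X), np.nanmean(X, axis=0), X)   # crude starting fill
for _ in range(100):                                   # iterate estimate/simulate
    mu, Sigma = Xc.mean(axis=0), np.cov(Xc, rowvar=False)
    Xc = simulate_missing(X, mu, Sigma, rng)
```

Because each pass resimulates the missing values, a natural way to read off a final estimate is to average the parameter estimates over iterations after a burn-in; the paper's contribution is making the simulated-scores interpretation of this loop precise.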
By: | Korobilis, Dimitris |
Abstract: | This paper develops methods for automatic selection of variables in forecasting Bayesian vector autoregressions (VARs) using the Gibbs sampler. In particular, I provide computationally efficient algorithms for stochastic variable selection in generic (linear and nonlinear) VARs. The performance of the proposed variable selection method is assessed in a small Monte Carlo experiment and in forecasting four UK macroeconomic series using time-varying parameter vector autoregressions (TVP-VARs). Restricted models consistently improve upon their unrestricted counterparts in forecasting, showing the merits of variable selection in arriving at parsimonious models. (A schematic code sketch follows this entry.) |
Keywords: | Forecasting; variable selection; time-varying parameters; Bayesian |
JEL: | C32 C53 C52 E37 C11 E47 |
Date: | 2009–12 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:21124&r=ecm |
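One standard building block for this kind of stochastic variable selection is a spike-and-slab (SSVS) indicator draw inside the Gibbs sampler. Below is a minimal sketch of that single step, under illustrative prior settings; it is generic SSVS, not necessarily the paper's exact algorithm.

```python
import numpy as np

def ssvs_indicator_draw(beta, tau0=0.01, tau1=10.0, p=0.5, rng=None):
    """Gibbs step for inclusion indicators: given current coefficients,
    gamma_j = 1 with posterior probability proportional to the 'slab'
    N(0, tau1^2) density times the prior inclusion probability p,
    against the near-zero 'spike' N(0, tau0^2) alternative."""
    rng = rng or np.random.default_rng()
    slab = np.exp(-0.5 * (beta / tau1) ** 2) / tau1
    spike = np.exp(-0.5 * (beta / tau0) ** 2) / tau0
    prob = p * slab / (p * slab + (1 - p) * spike)
    return (rng.random(beta.shape) < prob).astype(int)
```

The drawn indicators then set each VAR coefficient's prior variance (spike or slab) on the next sweep, shrinking excluded coefficients toward zero.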
By: | Francesco Ravazzolo (Norges Bank (Central Bank of Norway)); Shaun P. Vahey |
Abstract: | We propose a methodology for producing forecast densities for economic aggregates based on disaggregate evidence. Our ensemble predictive methodology utilizes a linear mixture of experts framework to combine the forecast densities from potentially many component models. Each component represents the univariate dynamic process followed by a single disaggregate variable. The ensemble produced from these components approximates the many unknown relationships between the disaggregates and the aggregate by using time-varying weights on the component forecast densities. In our application, we use the disaggregate ensemble approach to forecast US Personal Consumption Expenditure inflation from 1997Q2 to 2008Q1. Our ensemble combining the evidence from 11 disaggregate series outperforms an aggregate autoregressive benchmark, and an aggregate time-varying parameter specification, in density forecasting. (A schematic code sketch follows this entry.) |
Keywords: | Ensemble forecasting, disaggregates |
JEL: | C11 C32 C53 E37 E52 |
Date: | 2010–03–05 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2010_02&r=ecm |
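A linear opinion pool of this kind reduces, at each date, to a weighted mixture of the component densities, with weights updated from past density-forecast performance. A minimal sketch under illustrative assumptions (normal component densities, weights from accumulated log scores):

```python
import numpy as np
from scipy import stats

def pool_weights(past_logscores):
    """Time-varying weights from each component's accumulated log
    scores on past realizations; rows are dates, columns components.
    The max is subtracted for numerical stability (softmax)."""
    s = past_logscores.sum(axis=0)
    w = np.exp(s - s.max())
    return w / w.sum()

def pool_density(y, means, sds, w):
    """Ensemble predictive density at y: a finite mixture of the
    component (here normal) forecast densities."""
    return float(np.sum(w * stats.norm.pdf(y, loc=means, scale=sds)))
```

Many weighting schemes fit this template (rolling windows, discounting); the abstract's "time-varying weights" corresponds to recomputing `pool_weights` as each new realization arrives.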
By: | David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor |
Abstract: | In this paper we propose tests for the null hypothesis that a time series process displays a constant level against the alternative that it displays (possibly) multiple changes in level. Our proposed tests are based on functions of appropriately standardized sequences of the differences between sub-sample mean estimates from the series under investigation. The tests we propose differ notably from extant tests for level breaks in the literature in that they are designed to be robust to whether the process admits an autoregressive unit root (the data are I(1)) or stable autoregressive roots (the data are I(0)). We derive the asymptotic null distributions of our proposed tests, along with representations for their asymptotic local power functions against Pitman drift alternatives under both I(0) and I(1) environments. Associated estimators of the level break fractions are also discussed. We initially outline our procedure for the case of non-trending series, but our analysis is subsequently extended to allow for series which display an underlying linear trend, in addition to possible level breaks. Monte Carlo simulation results are presented which suggest that the proposed tests perform well in small samples, showing good size control under the null, regardless of the order of integration of the data, and displaying good power when level breaks occur. (A schematic code sketch follows this entry.) |
Keywords: | Level breaks; unit root; moving means; long run variance estimation; robust tests; breakpoint estimation |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:not:notgts:10/01&r=ecm |
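The core statistic is easy to picture: scan candidate break dates and standardize the difference between the pre- and post-break sample means. A deliberately simplified sketch follows; the paper's actual standardization differs between the I(0) and I(1) cases and handles multiple breaks.

```python
import numpy as np

def max_level_shift_stat(y, trim=0.15):
    """Maximum absolute standardized difference of sub-sample means,
    scanned over candidate break points in the trimmed interior of
    the sample (single-break, illustrative scaling only)."""
    T = len(y)
    best = 0.0
    for k in range(int(trim * T), int((1 - trim) * T)):
        diff = y[k:].mean() - y[:k].mean()
        best = max(best, abs(diff) * np.sqrt(k * (T - k) / T))
    return best
```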
By: | Valter Di Giacinto (Bank of Italy) |
Abstract: | Although it provides a potentially useful analytical tool, allowing for the joint modeling of dynamic interdependencies within a group of connected areas, until recently the VAR approach had received little attention in regional science and spatial economic analysis. This paper aims to contribute to this field by dealing with the issues of parameter identification and estimation and of structural impulse response analysis. In particular, we discuss the adaptation of the recursive identification scheme (one of the more common approaches in the time series VAR literature) to a space-time environment. Parameter estimation is subsequently based on the Full Information Maximum Likelihood (FIML) method, a standard approach in structural VAR analysis. As a convenient tool to summarize the information conveyed by regional dynamic multipliers, with a specific emphasis on the scope of spatial spillover effects, a synthetic space-time impulse response function (STIR) is introduced, portraying average effects as a function of displacement in time and space. Asymptotic confidence bands for the STIR estimates are also derived from bootstrap estimates of the standard errors. Finally, to provide a basic illustration of the methodology, the paper presents an application of a simple bivariate fiscal model fitted to data for Italian NUTS 2 regions. (A schematic code sketch follows this entry.) |
Keywords: | structural VAR model, spatial econometrics, identification, space-time impulse response analysis |
JEL: | C32 C33 R10 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wptemi:td_746_10&r=ecm |
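Read literally, a STIR averages the estimated impulse responses over region pairs at each spatial displacement, yielding a response surface in horizon and distance. A minimal sketch under that reading; array names are illustrative, not the paper's notation.

```python
import numpy as np

def stir(irf, dist):
    """Space-time impulse response: irf[h, i, j] is region i's period-h
    response to a shock in region j; dist[i, j] is their spatial
    displacement (0 for own region, 1 for neighbours, ...). Returns,
    for each displacement, the average response profile over horizons."""
    return {s: irf[:, dist == s].mean(axis=1) for s in np.unique(dist)}
```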
By: | Stephan Smeekes; A. M. Robert Taylor |
Abstract: | We provide a joint treatment of three major issues that surround testing for a unit root in practice: uncertainty as to whether or not a linear deterministic trend is present in the data, uncertainty as to whether the initial condition of the process is (asymptotically) negligible or not, and the possible presence of nonstationary volatility in the data. Harvey, Leybourne and Taylor (2010, Journal of Econometrics, forthcoming) propose decision rules based on a four-way union of rejections of QD and OLS detrended tests, both with and without allowing for a linear trend, to deal with the first two problems. However, in the presence of nonstationary volatility these test statistics have limit distributions which depend on the form of the volatility process, making tests based on the standard asymptotic critical values invalid. We construct bootstrap versions of the four-way union of rejections test, which, by employing the wild bootstrap, are shown to be asymptotically valid in the presence of nonstationary volatility. These bootstrap union tests therefore allow for a joint treatment of all three of the aforementioned problems. (A schematic code sketch follows this entry.) |
Keywords: | Unit root; local trend; initial condition; asymptotic power; union of rejections decision rule; nonstationary volatility; wild bootstrap |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:not:notgts:10/03&r=ecm |
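The wild bootstrap at the heart of the procedure replicates the data's volatility pattern by rescaling first differences with external noise. A minimal sketch of generating one bootstrap pseudo-sample under the unit root null (illustrative only, not the authors' full union-of-rejections algorithm):

```python
import numpy as np

def wild_bootstrap_sample(y, rng):
    """One wild-bootstrap pseudo-sample: multiply each first difference
    by an independent N(0,1) draw and re-cumulate. The multiplicative
    noise preserves any nonstationary volatility profile in the
    differences, which ordinary i.i.d. resampling would destroy."""
    dy = np.diff(y)
    dy_star = dy * rng.standard_normal(dy.size)
    return y[0] + np.concatenate([[0.0], np.cumsum(dy_star)])
```

Roughly, each pseudo-sample is then passed through the four detrended test statistics, and the union-of-rejections rule is applied with bootstrap rather than asymptotic critical values.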
By: | Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics); Patrick Rakotomarolahy (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I) |
Abstract: | The aim of this paper is to introduce a new methodology for forecasting the monthly economic indicators used in Gross Domestic Product (GDP) modelling, in order to improve forecasting accuracy. Our approach is based on the multivariate k-nearest neighbors method and the radial basis function method, for which we provide new theoretical results. We apply these two methods to compute quarterly GDP for the Euro zone, comparing our approach with the GDP obtained when the monthly indicators are estimated with a linear model, which is often used as a benchmark. (A schematic code sketch follows this entry.) |
Keywords: | Multivariate k-Nearest Neighbor, Radial Basis Functions, Non-Parametric Forecasts, Economic indicators, GDP, Euro area |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00460472_v1&r=ecm |
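The univariate intuition behind k-nearest-neighbour forecasting carries over directly: find the k past patterns closest to the most recent one and average their successors. A minimal sketch for a single indicator (the paper works with a multivariate version and supplies the accompanying theory):

```python
import numpy as np

def knn_forecast(series, k=5, m=3):
    """Predict the next value of a series by averaging the successors
    of the k historical length-m patterns nearest (in Euclidean
    distance) to the most recent length-m pattern."""
    s = np.asarray(series, dtype=float)
    X = np.array([s[i:i + m] for i in range(len(s) - m)])  # past patterns
    y = s[m:]                                              # their successors
    d = np.linalg.norm(X - s[-m:], axis=1)
    return y[np.argsort(d)[:k]].mean()
```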
By: | Gaure, Simen (Ragnar Frisch Centre for Economic Research); Roed, Knut (Ragnar Frisch Centre for Economic Research); van den Berg, Gerard J. (University of Mannheim); Zhang, Tao (Ragnar Frisch Centre for Economic Research) |
Abstract: | Consider a setting where a treatment that starts at some point during a spell (e.g. in unemployment) may impact on the hazard rate of the spell duration, and where the impact may be heterogeneous across subjects. We provide Monte Carlo evidence on the feasibility of estimating the distribution of treatment effects from duration data with selectivity, by means of a nonparametric maximum likelihood estimator with an unrestricted number of mass points for the heterogeneity distribution. We find that specifying the treatment effect as homogeneous may yield misleading average results if the true effects are heterogeneous, even when the sorting into treatment is appropriately accounted for. Specifying the treatment effect as a random coefficient allows for precise estimation of informative average treatment effects, including the program’s overall impact on the mean duration. (A schematic code sketch follows this entry.) |
Keywords: | duration analysis, unobserved heterogeneity, program evaluation, nonparametric estimation, Monte Carlo simulation, timing of events, random effects |
JEL: | C31 C41 J64 C63 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4794&r=ecm |
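The estimator's workhorse is a discrete mixture over unobserved-heterogeneity mass points, with the number of points left unrestricted. For a given support, the marginal log-likelihood is a logsumexp over mass points; a minimal sketch, with illustrative names:

```python
import numpy as np
from scipy.special import logsumexp

def mixture_loglik(ll, log_p):
    """NPMLE-style mixture log-likelihood. ll[i, q] is subject i's
    duration log-likelihood evaluated at heterogeneity mass point q;
    log_p[q] are the log mass-point probabilities. Marginalizing the
    discrete heterogeneity is a logsumexp across columns."""
    return float(logsumexp(ll + log_p, axis=1).sum())
```

Roughly, following common NPMLE practice, estimation alternates between optimizing the mass-point locations and probabilities and adding further points for as long as the likelihood improves.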
By: | Torben G. Andersen (Kellogg School of Management, Northwestern University, Evanston, IL; NBER, Cambridge, MA; and CREATES, Aarhus, Denmark); Luca Benzoni (Federal Reserve Bank of Chicago, Chicago, Illinois, USA.) |
Abstract: | We give an overview of a broad class of models designed to capture stochastic volatility in financial markets, with illustrations of the scope of application of these models to practical finance problems. In a broad sense, this model class includes GARCH, but we focus on a narrower set of specifications in which volatility follows its own random process and is therefore a latent factor. These stochastic volatility specifications fit naturally in the continuous-time finance paradigm, and therefore serve as a prominent tool for a wide range of pricing and hedging applications. Moreover, the continuous-time paradigm of financial economics is naturally linked with the theory of volatility modeling and forecasting, and in particular with the practice of constructing ex-post volatility measures from high-frequency intraday data (realized volatility). One drawback is that in this setting volatility is not measurable with respect to observable information, and this feature complicates estimation and inference. Further, the presence of an additional state variable, volatility, renders the model less tractable from an analytic perspective. New estimation methods, combined with model restrictions that allow for closed-form solutions, make it possible to address these challenges while keeping the model consistent with the main properties of the data. (A schematic code sketch follows this entry.) |
Keywords: | Stochastic Volatility, Realized Volatility, Implied Volatility, Options, Volatility Smirk, Volatility Smile, Dynamic Term Structure Models, Affine Models |
JEL: | E43 G12 |
Date: | 2010–02–25 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2010-10&r=ecm |
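Realized volatility, the ex-post measure mentioned in the abstract, is simply the sum of squared high-frequency log-returns over the day. A minimal sketch:

```python
import numpy as np

def realized_variance(intraday_prices):
    """Ex-post daily return variation: the sum of squared intraday
    log-returns. Its square root is the day's realized volatility."""
    r = np.diff(np.log(np.asarray(intraday_prices, dtype=float)))
    return float(np.sum(r ** 2))

# e.g. realized_variance(prices) on 5-minute prices for one trading day
```

Under weak conditions this quantity consistently estimates the day's integrated variance as the sampling interval shrinks, which is what makes it a useful observable proxy for the latent volatility factor these models feature.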
By: | David Pitt (Dept. Economics, University of Melbourne); Montserrat Guillén (Dept. Econometrics, University of Barcelona) |
Abstract: | We present a real data set of claims amounts where costs related to damage are recorded separately from those related to medical expenses. Only claims with positive costs are considered here. Two approaches to density estimation are presented: a classical parametric method and a semi-parametric method based on transformation kernel density estimation. We explore the data set with standard univariate methods. We also propose ways to select the bandwidth and transformation parameters in the univariate case based on Bayesian methods. We indicate how to compare the results of alternative methods, both by looking at the shape of the density over the whole domain and by exploring the density estimates in the right tail. (A schematic code sketch follows this entry.) |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap2010-3&r=ecm |
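Transformation kernel density estimation maps skewed, heavy-tailed claim amounts to a friendlier scale, smooths there, and maps back with the change-of-variables Jacobian. A minimal sketch using a log transform; the paper's transformation family and Bayesian bandwidth selection are more elaborate:

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_transform_kde(claims):
    """Transformation KDE for positive claim amounts: estimate the
    density of log(claims) with a Gaussian kernel, then map back via
    f_X(x) = f_Y(log x) / x (the Jacobian of y = log x)."""
    kde = gaussian_kde(np.log(claims))
    return lambda x: kde(np.log(x)) / np.asarray(x, dtype=float)
```

Evaluating the returned function far into the right tail is where the transformation approach pays off relative to a plain kernel estimator on the original scale.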
By: | Picchio, Matteo (Tilburg University); Mussida, Chiara (University of Milan) |
Abstract: | Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates sample selection. We describe a simulation algorithm to implement counterfactual comparisons of densities. The proposed methodology is used to investigate the gender wage gap in Italy. It is found that when sample selection is taken into account, the gender wage gap widens, especially at the bottom of the wage distribution. Explanations are offered for this empirical finding. |
Keywords: | gender wage gap, hazard function, sample selection, glass ceiling, sticky floor |
JEL: | C21 C41 J16 J31 J71 |
Date: | 2010–02 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4783&r=ecm |
By: | Elmar Mertens |
Abstract: | No, not really, since spectral estimators suffer from small-sample and misspecification biases just as VARs do. Spectral estimators are no panacea for implementing long-run restrictions. In addition, when combining VAR coefficients with non-parametric estimates of the spectral density, care needs to be taken to consistently account for information embedded in the non-parametric estimates about serial correlation in VAR residuals. This paper uses a spectral factorization to ensure a correct representation of the data's variance. But this cannot overcome the fundamental problems of estimating the long-run dynamics of macroeconomic data in samples of typical length. (A schematic code sketch follows this entry.) |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2010-09&r=ecm |
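The non-parametric spectral objects at issue are estimates of the spectral density at frequency zero, i.e. the long-run variance. A minimal kernel-based sketch, for orientation only:

```python
import numpy as np

def bartlett_long_run_variance(x, bandwidth):
    """Bartlett-kernel estimate of the long-run variance (2*pi times
    the spectral density at frequency zero): a weighted sum of sample
    autocovariances out to the chosen bandwidth."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    T = x.size
    lrv = x @ x / T                        # lag-0 autocovariance
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1)      # Bartlett weight
        lrv += 2.0 * w * (x[j:] @ x[:-j]) / T
    return lrv
```

The small-sample bias the abstract alludes to enters through exactly this object: the bandwidth choice trades bias against variance, and no choice can recover low-frequency dynamics that a short sample barely contains.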
By: | Mimako Kobayashi (Department of Resource Economics, University of Nevada, Reno); Klaus Moeltner (Department of Resource Economics, University of Nevada, Reno); Kimberly Rollins (Department of Resource Economics, University of Nevada, Reno) |
Abstract: | In many stated preference settings stakeholders will be uncertain as to their exact willingness-to-pay for a proposed environmental amenity. To accommodate this possibility, analysts have designed elicitation formats with multiple bids and response options that allow for the expression of uncertainty. We argue that the information content flowing from such elicitation has not yet been fully and efficiently exploited in existing contributions. We introduce a Latent Thresholds Estimator that focuses on the simultaneous identification of the full set of thresholds that delineate an individual's value space in accordance with observed response categories. Our framework provides a more complete picture of the underlying value distribution, the marginal effects of regressors, and the impact of bid designs on estimation efficiency. We show that the common practice of re-coding responses to derive point estimates of willingness-to-pay leaves useful information untapped and can produce misleading results if thresholds are highly correlated. |
Keywords: | Stated Preference; Multiple Bounded Elicitation; Polychotomous Choice; Bayesian Estimation; Value Uncertainty |
JEL: | C11 C15 C35 C52 Q51 |
Date: | 2010–01 |
URL: | http://d.repec.org/n?u=RePEc:unr:wpaper:10-001&r=ecm |
By: | Arnab Bhattacharjee; Sean Holly |
Abstract: | While much of the literature on cross section dependence has focused mainly on estimation of the regression coefficients in the underlying model, estimation and inferences on the magnitude and strength of spill-overs and interactions have been largely ignored. At the same time, such inferences are important in many applications, not least because they carry structural interpretations and provide a structural explanation for the strength of any interactions. In this paper we propose GMM methods designed to uncover underlying (hidden) interactions in social networks and committees. Special attention is paid to the interval censored regression model. Our methods are applied to a study of committee decision making within the Bank of England's monetary policy committee. |
Keywords: | Committee Decision Making, Social Networks, Cross Section and Spatial Interaction, Generalised Method of Moments, Censored Regression Model, Expectation-Maximisation Algorithm, Monetary Policy, Interest Rates. |
JEL: | D71 D85 E43 E52 C31 C34 |
Date: | 2010–09 |
URL: | http://d.repec.org/n?u=RePEc:san:cdmawp:1004&r=ecm |
By: | Arnab Bhattacharjee; Sean Holly |
Abstract: | Until recently, much effort has been devoted to the estimation of panel data regression models without adequate attention being paid to the drivers of diffusion and interaction across cross section and spatial units. We discuss some new methodologies in this emerging area and demonstrate their use in measurement and inferences on cross section and spatial interactions. Specifically, we highlight the important distinction between spatial dependence driven by unobserved common factors and that based on a spatial weights matrix. We argue that purely factor-driven models of spatial dependence may be somewhat inadequate because of their connection with the exchangeability assumption. Limitations and potential enhancements of the existing methods are discussed, and several directions for new research are highlighted. |
Keywords: | Cross Sectional and Spatial Dependence, Spatial Weights Matrix, Interactions and Diffusion, Monetary Policy Committee, Generalised Method of Moments. |
JEL: | E42 E43 E50 E58 |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:san:cdmawp:1003&r=ecm |
By: | Andrew J. Buck (Department of Economics, Temple University); George M. Lady (Department of Economics, Temple University) |
Abstract: | This paper proposes a method for assessing the information content and validity of a mathematical structural model for which only the directions of influence among its endogenous and exogenous variables are known, as expressed by the sign patterns of the associated arrays. The traditional literature on this issue presents extremely restrictive conditions under which such a “qualitative analysis” can be conducted. As a result, there have been very few successful applications of the traditional method. We propose a means of vastly expanding the scope of such an analysis to virtually any applied model. Our method works with the restrictions found for the sign patterns of complete rows and columns, or even the entire sign pattern, of the reduced form, rather than only individual entries. The information provided by the model is measured by the Shannon entropy of the possible sign patterns of the reduced form and the frequency of occurrence of each possibility. An example of the method is provided for Klein’s Model I. Although this model has been used for over fifty years for a variety of purposes, we found that the sign pattern of the estimated, unrestricted reduced form from the original data set was not consistent with the proposed structural directions of influence among the model’s variables. (A schematic code sketch follows this entry.) |
Keywords: | Qualitative Analysis, Entropy, Falsification, Comparative Statics |
JEL: | C12 C14 C52 |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:tem:wpaper:1003&r=ecm |
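The entropy measure is concrete: enumerate the reduced-form sign patterns arising across admissible draws and compute Shannon entropy over their frequencies. A minimal sketch, with the draw mechanism left abstract and all names illustrative:

```python
import numpy as np
from collections import Counter

def sign_pattern_entropy(reduced_forms):
    """Shannon entropy (in bits) of the empirical distribution of
    reduced-form sign patterns across sampled reduced-form arrays."""
    counts = Counter(tuple(np.sign(R).astype(int).ravel())
                     for R in reduced_forms)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())
```

Low entropy means the structure sharply restricts which reduced-form sign patterns can occur, so an estimated pattern falling outside the admissible set (as the authors report for Klein's Model I) is informative, falsifying evidence.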
By: | John M. Rose (Institute of Transport and Logistics Studies (ITLS), The University of Sydney, Australia); Lorenzo Masiero (Istituto Ricerche Economiche (IRE), Università della Svizzera Italiana, Svizzera) |
Abstract: | The importance of willingness to pay (WTP) and willingness to accept (WTA) measures in the evaluation of policy measures has led to a constant stream of research examining survey methods and model specifications that seek to capture and explain marginal rates of substitution as well as possible. Stated choice experiments pivoted around a reference alternative allow the specification of discrete choice models to accommodate the reference-dependence assumption of prospect theory. This permits an investigation of theories related to loss aversion and diminishing sensitivity, and a test of the discrepancy between WTP and WTA widely documented in the literature. With more advanced classes of discrete choice models at our disposal, it is now possible to test different preference specifications that are better able to measure WTP and WTA values. One such model, which allows utility to be specified directly in WTP space, has recently shown interesting qualities. This paper compares and contrasts models estimated in preference space with those estimated in WTP space, allowing for asymmetry in the marginal utilities by estimating different parameters for reference, gain and loss values. The results suggest a better model fit for the data estimated in WTP space, contradicting the findings of previous research. The parameter estimates show significant evidence of loss aversion and diminishing sensitivity, even though the symmetric specification outperforms the asymmetric ones. Finally, the analysis of the WTP and WTA measures confirms that WTA exceeds WTP, and highlights the appeal of the WTP space specification in terms of the plausibility of the estimated measures. |
Keywords: | choice experiments, willingness to pay space, preference asymmetry |
JEL: | C25 L91 |
Date: | 2010–03 |
URL: | http://d.repec.org/n?u=RePEc:lug:wpaper:1006&r=ecm |