on Econometrics |
By: | Baltagi, Badi H.; Bresson, Georges; Chaturvedi, Anoop; Lacroix, Guy |
Abstract: | The paper develops a general Bayesian framework for robust linear static panel data models using epsilon-contamination. A two-step approach is employed to derive the conditional type II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior densities are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman-Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications while conventional methods require separate estimators for each case. We illustrate the performance of our estimator relative to classic panel estimators using data on earnings and crime. |
Keywords: | epsilon-contamination, hyper g-priors, type II maximum likelihood posterior density, panel data, robust Bayesian estimator, three-stage hierarchy. |
JEL: | C11 C23 C26 |
Date: | 2014–11–14 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:59896&r=ecm |
By: | Warne, Anders; Coenen, Günter; Christoffel, Kai |
Abstract: | The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes to utilize a missing observations consistent Kalman filter in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models. |
Keywords: | Bayesian inference, density forecasting, Kalman filter, missing data, Monte Carlo integration, predictive likelihood |
JEL: | C11 C32 C52 C53 E37 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:zbw:cfswop:478&r=ecm |
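The role of a missing-observations-consistent Kalman filter in computing the predictive likelihood can be sketched on a toy univariate state-space model: periods marked NaN receive only a time update and contribute nothing to the accumulated one-step-ahead log predictive likelihood. The local-level model and all parameter names below are hypothetical stand-ins, not the paper's DSGE state space.

```python
import numpy as np

def predictive_loglik(y, q, h, a0=0.0, p0=1e7):
    """Log predictive likelihood for a local-level model
    y_t = alpha_t + eps_t, alpha_{t+1} = alpha_t + eta_t, with
    Var(eps_t) = h and Var(eta_t) = q. NaN entries are treated as
    missing: the filter skips the measurement update and those periods
    add nothing to the likelihood."""
    a, p, ll = a0, p0, 0.0
    for obs in y:
        if np.isnan(obs):
            p += q                      # time update only
            continue
        f = p + h                       # one-step-ahead forecast variance
        v = obs - a                     # one-step-ahead forecast error
        ll += -0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
        k = p / f                       # Kalman gain
        a += k * v                      # measurement update
        p = p * (1.0 - k) + q           # time update
    return ll

rng = np.random.default_rng(1)
alpha = np.cumsum(rng.normal(scale=0.3, size=60))
y = alpha + rng.normal(scale=1.0, size=60)
y[10:15] = np.nan                       # a block of missing observations
ll = predictive_loglik(y, q=0.09, h=1.0)
```

Restricting the predictive likelihood to a subset of observables works the same way: the excluded series are simply flagged as missing in the evaluation sample.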
By: | Laurent Callot (VU University Amsterdam, the Tinbergen Institute and CREATES); Johannes Tang Kristensen (University of Southern Denmark and CREATES) |
Abstract: | This paper proposes a parsimoniously time-varying parameter vector autoregressive model (with exogenous variables, VARX) and studies the properties of the Lasso and adaptive Lasso as estimators of this model. The parameters of the model are assumed to follow parsimonious random walks, where parsimony stems from the assumption that increments to the parameters have a non-zero probability of being exactly equal to zero. By varying the degree of parsimony our model can accommodate constant parameters, an unknown number of structural breaks, or parameters with a high degree of variation. We characterize the finite sample properties of the Lasso by deriving upper bounds on the estimation and prediction errors that are valid with high probability; and asymptotically we show that these bounds tend to zero with probability tending to one if the number of non-zero increments grows slower than √T. By simulation experiments we investigate the properties of the Lasso and the adaptive Lasso in settings where the parameters are stable, experience structural breaks, or follow a parsimonious random walk. We use our model to investigate the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time-varying parameter Taylor rule. We document substantial changes in the policy response of the Fed in the 1980s and since 2008. |
Keywords: | Parsimony, time varying parameters, VAR, structural break, Lasso |
JEL: | C01 C13 C32 E52 |
Date: | 2014–11–04 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2014-41&r=ecm |
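The increment formulation above can be made concrete: stacking the initial level and the subsequent increments turns the time-varying parameter regression into an ordinary Lasso problem whose design has column j equal to x_t·1{t ≥ j}. A minimal sketch, assuming a single regressor, an illustrative penalty lam, and a plain proximal-gradient (ISTA) solver rather than the estimators studied in the paper:

```python
import numpy as np

def tvp_lasso(y, x, lam, n_iter=50000):
    """Parsimoniously time-varying coefficient: in
    y_t = x_t * beta_t + e_t with beta_t = beta_{t-1} + delta_t, stack
    theta = (beta_1, delta_2, ..., delta_T). The design then has column
    j equal to x_t * 1{t >= j}, and an l1 penalty on the increments
    selects the few non-zero breaks."""
    T = len(y)
    Z = np.tril(np.ones((T, T))) * x[:, None]    # Z[t, j] = x_t if t >= j
    step = 1.0 / np.linalg.norm(Z, 2) ** 2       # ISTA step size
    theta = np.zeros(T)
    for _ in range(n_iter):
        theta -= step * (Z.T @ (Z @ theta - y))  # gradient step
        # soft-threshold the increments; the initial level is unpenalised
        theta[1:] = np.sign(theta[1:]) * np.maximum(np.abs(theta[1:]) - step * lam, 0.0)
    return np.cumsum(theta)                      # implied path beta_1, ..., beta_T

rng = np.random.default_rng(2)
T = 80
x = np.ones(T)                                   # mean-shift example
beta_true = np.where(np.arange(T) < 40, 0.0, 2.0)
y = x * beta_true + rng.normal(scale=0.2, size=T)
beta_hat = tvp_lasso(y, x, lam=3.0)
```

With a single break in the true path, the penalised fit keeps almost all increments at exactly zero and places a (slightly shrunken) jump near the break date.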
By: | Eisenhauer, Philipp (University of Chicago); Heckman, James J. (University of Chicago); Mosso, Stefano (University of Chicago) |
Abstract: | We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. |
Keywords: | returns to education, dynamic discrete choice, simulation-based estimation |
JEL: | C13 C15 C35 I21 |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp8548&r=ecm |
By: | Petra Andrlíková (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nábreží 6, 111 01 Prague 1, Czech Republic) |
Abstract: | This paper proposes a methodology for default probability estimation for low default portfolios, where statistical inference may become troublesome. The author suggests using logistic regression models with Bayesian estimation of the parameters. A piecewise logistic regression model and a Box-Cox transformation of the credit risk score are used to derive the estimates of the probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more accurate in statistical terms, as evaluated by the Hosmer-Lemeshow goodness-of-fit test, Hosmer et al. (2013). |
Keywords: | default probability, Bayesian analysis, logistic regression, goodness-of-fit |
JEL: | C11 C51 C52 G10 |
Date: | 2014–04 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2014_14&r=ecm |
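The appeal of a Bayesian treatment in low default portfolios is easy to demonstrate: with only a handful of defaults, maximum likelihood can drift to infinity under quasi-complete separation, while a Gaussian prior keeps the estimate finite. The sketch below computes the MAP estimate under independent normal priors via Newton-Raphson; it illustrates the Bayesian flavour only and is not the paper's piecewise/Box-Cox model.

```python
import numpy as np

def logistic_map(X, y, tau2=4.0, n_iter=50):
    """MAP estimate of logistic-regression coefficients under independent
    N(0, tau2) priors (equivalently, ridge-penalised logistic regression),
    computed by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = np.clip(X @ beta, -30, 30)         # guard against overflow
        p = 1.0 / (1.0 + np.exp(-eta))
        grad = X.T @ (y - p) - beta / tau2       # penalised score
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None]) + np.eye(X.shape[1]) / tau2
        beta += np.linalg.solve(H, grad)
    return beta

# low-default portfolio: 200 obligors, only a few defaults
rng = np.random.default_rng(3)
score = rng.normal(size=200)                     # credit risk score
X = np.column_stack([np.ones(200), score])
y = (score + rng.normal(scale=0.5, size=200) > 1.8).astype(float)
beta_hat = logistic_map(X, y)
pd_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))     # fitted default probabilities
```

The prior variance tau2 plays the role the paper assigns to prior information: tightening it pulls the fitted default probabilities toward the prior centre.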
By: | Yuki Kawakubo (Graduate School of Economics, The University of Tokyo); Shonosuke Sugasawa (Graduate School of Economics, The University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo) |
Abstract: | In this paper, we consider the problem of selecting explanatory variables of fixed effects in linear mixed models under covariate shift, the situation in which the values of the covariates in the predictive model differ from those in the observed model. We construct a variable selection criterion based on the conditional Akaike information introduced by Vaida and Blanchard (2005); the proposed criterion is a generalization of the conditional Akaike information criterion (conditional AIC) to covariate shift. We especially focus on covariate shift in small area prediction and show the usefulness of the proposed criterion through simulation studies. |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2014cf944&r=ecm |
By: | Azam, Kazim (Vrije Universiteit, Amsterdam) |
Abstract: | This paper studies the effect of marginal distributions on a copula, in the case of mixed discrete-continuous random variables. The existing literature has proposed various methods to deal with mixed marginals: this paper is the first to quantify their effect in a unified Bayesian setting. Using order statistics based information for the marginals, as proposed by Hoff (2007), we find that in small samples the bias and mean square error are at least half in size as compared to those of empirical or misspecified marginal distributions. The difference in the bias and mean square error enlarges with increasing sample size, especially for low count discrete variables. We employ the order statistics method on firm-level patents data, containing both discrete and continuous random variables, and consistently estimate their correlation. |
Keywords: | Bayesian copula; discrete data; order statistics; semi-parametric |
JEL: | C11 C14 C52 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:1053&r=ecm |
By: | Sundström, David (Department of Economics, Umeå School of Business and Economics) |
Abstract: | A novel method to measure bidders’ costs in descending first price sealed bid auctions is introduced. The novelty rests on two intuitively appealing economic assumptions under which bidders’ costs are given an imperfect-measurements interpretation. Theory provides no guidance as to the shape of the cost distributions, while empirical evidence suggests that they are positively skewed. Consequently, a flexible distribution is assumed in an imperfect-measurements setting. An illustration of the method on Swedish public procurement data is provided. |
Keywords: | log-generalized gamma distribution; latent variable; maximum likelihood; prediction; public procurement |
JEL: | C24 C51 D22 D44 |
Date: | 2014–11–13 |
URL: | http://d.repec.org/n?u=RePEc:hhs:umnees:0899&r=ecm |
By: | Bellemare, Charles (Université Laval); Bissonnette, Luc (Laval University); Kröger, Sabine (Université Laval) |
Abstract: | This paper discusses the choice of the number of participants for within-subjects (WS) designs and between-subjects (BS) designs based on simulations of statistical power allowing for different numbers of experimental periods. We illustrate the usefulness of the approach in the context of field experiments on gift exchange. Our results suggest that a BS design requires between 4 and 8 times more subjects than a WS design to reach an acceptable level of statistical power. Moreover, the predicted minimal sample sizes required to correctly detect a treatment effect with a probability of 80% greatly exceed sizes currently used in the literature. Our results suggest that adding experimental periods in an experiment can substantially increase the statistical power of a WS design, but has very little effect on the statistical power of the BS design. Finally, we discuss issues relating to numerical computation and present the powerBBK package programmed for Stata. This package allows users to conduct their own analysis of power for the different designs (WS and BS), conditional on user-specified experimental parameters (true effect size, sample size, number of periods, noise levels for control and treatment, error distributions), statistical tests (parametric and nonparametric), and estimation methods (linear regression, binary choice models (probit and logit), censored regression models (tobit)). |
Keywords: | within-subjects design, between-subjects design, sample sizes, statistical power, experiments |
JEL: | C8 C9 D03 |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp8583&r=ecm |
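The WS-versus-BS power comparison can be reproduced in miniature by Monte Carlo: simulate a random subject effect that the within-subjects design differences out, and count rejections. This is a toy sketch, not the powerBBK package; the additive subject-effect model and the normal critical value 1.96 are simplifying assumptions.

```python
import numpy as np

def power_bs_ws(n, effect, sd_noise, sd_subject, reps=2000, seed=0):
    """Monte Carlo power of a between-subjects two-sample test versus a
    within-subjects paired test at the (approximate) 5% level."""
    rng = np.random.default_rng(seed)
    rej_bs = rej_ws = 0
    for _ in range(reps):
        # BS: n control subjects and n different treated subjects
        subj = rng.normal(0.0, sd_subject, size=2 * n)
        y0 = subj[:n] + rng.normal(0.0, sd_noise, n)
        y1 = subj[n:] + effect + rng.normal(0.0, sd_noise, n)
        t_bs = (y1.mean() - y0.mean()) / np.sqrt(y0.var(ddof=1) / n + y1.var(ddof=1) / n)
        rej_bs += abs(t_bs) > 1.96
        # WS: the same n subjects in both conditions; subject effect cancels
        s = rng.normal(0.0, sd_subject, n)
        d = (s + effect + rng.normal(0.0, sd_noise, n)) - (s + rng.normal(0.0, sd_noise, n))
        t_ws = d.mean() / (d.std(ddof=1) / np.sqrt(n))
        rej_ws += abs(t_ws) > 1.96
    return rej_bs / reps, rej_ws / reps

p_bs, p_ws = power_bs_ws(n=30, effect=1.0, sd_noise=1.0, sd_subject=2.0)
```

When between-subject heterogeneity dominates the noise, the WS design reaches far higher power at the same n, which is the mechanism behind the 4-to-8-fold sample-size gap the abstract reports.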
By: | David Azriel; Yosef Rinott |
Abstract: | Statistical models in econometrics, biology, and most other areas, are not expected to be correct, and often are not very accurate. The choice of a model for the analysis of data depends on the purpose of the analysis, the relation between the data and the model, and also on the sample or data size. Combining ideas from Erev, Roth, Slonim, and Barron (2007), the well-known AIC criterion, and cross-validation, we propose a model selection approach that depends on both the models and the data size, with quantification of the chosen model's relative value. Our research is motivated by data from experimental economics, and we also give a simple biological example. |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:huj:dispap:dp669&r=ecm |
By: | Laurent Callot (VU University Amsterdam, the Netherlands); Anders B. Kock (Aarhus University, Denmark); Marcelo C. Medeiros (Pontifical Catholic University of Rio de Janeiro, Brazil) |
Abstract: | In this paper we consider modeling and forecasting of large realized covariance matrices by penalized vector autoregressive models. We propose using Lasso-type estimators to reduce the dimensionality to a manageable one and provide strong theoretical performance guarantees on the forecast capability of our procedure. To be precise, we show that we can forecast future realized covariance matrices almost as precisely as if we had known the true driving dynamics of these in advance. We next investigate the sources of these driving dynamics for the realized covariance matrices of the 30 Dow Jones stocks and find that these dynamics are not stable as the data is aggregated from the daily to the weekly and monthly frequency. The theoretical performance guarantees on our forecasts are illustrated on the Dow Jones index. In particular, we can beat our benchmark by a wide margin at the longer forecast horizons. Finally, we investigate the economic value of our forecasts in a portfolio selection exercise and find that in certain cases an investor is willing to pay a considerable amount in order to get access to our forecasts. |
Keywords: | Realized covariance; vector autoregression; shrinkage; Lasso; forecasting; portfolio allocation |
JEL: | C22 |
Date: | 2014–11–13 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20140147&r=ecm |
By: | Davide Pettenuzzo (Brandeis University); Francesco Ravazzolo (Norges Bank, and BI Norwegian Business School) |
Abstract: | We propose a novel Bayesian model combination approach where the combination weights depend on the past forecasting performance of the individual models entering the combination through a utility-based objective function. We use this approach in the context of stock return predictability and optimal portfolio decisions, and investigate its forecasting performance relative to a host of existing combination schemes. We find that our method produces markedly more accurate predictions than the existing model combinations, both in terms of statistical and economic measures of out-of-sample predictability. We also investigate the incremental role of our model combination method in the presence of model instabilities, by considering predictive regressions that feature time-varying regression coefficients and volatility. We find that the gains from using our model combination method increase significantly when we allow for instabilities in the individual models entering the combination. |
Keywords: | Bayesian econometrics; Time-varying parameters; Model combinations; Portfolio choice. |
JEL: | C11 C22 G11 G12 |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:brd:wpaper:80&r=ecm |
By: | Becker, Gideon |
Abstract: | What determines the risk structure of financial portfolios of German households? In this paper we estimate the determinants of the share of financial wealth invested in three broad risk classes. We employ a new econometric approach - the so-called fractional multinomial logit model - which allows for joint estimation of shares while accounting for their fractional nature. We extend the model to allow for unobserved heterogeneity across households via maximum simulated likelihood. We find that self-assessed appetite for risk as well as the level of wealth have strong positive effects on the riskiness of the average household's portfolio. These findings largely stay true even after we control for the potential confounding effects of unobserved differences across households via correlated random effects. |
Keywords: | household finance, portfolio composition, non-linear panel data model, fractional response model, unobserved heterogeneity |
JEL: | C15 C33 C35 C51 C58 D14 G11 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:zbw:tuewef:74&r=ecm |
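The core of the fractional multinomial logit is a softmax specification for the share vector, estimated by maximising a multinomial quasi-log-likelihood. The sketch below fits it by plain gradient ascent with one base category; function names are illustrative, and the unobserved-heterogeneity extension via simulated likelihood is omitted.

```python
import numpy as np

def fmlogit_fit(X, S, lr=0.5, n_iter=5000):
    """Fractional multinomial logit: shares S (n x J, rows sum to 1) are
    modelled as softmax(X @ B) with category 0 as the base (its column
    of coefficients fixed at zero). B maximises the quasi-log-likelihood
    sum_ij S_ij * log p_ij by gradient ascent."""
    n, k = X.shape
    B = np.zeros((k, S.shape[1] - 1))
    for _ in range(n_iter):
        eta = np.hstack([np.zeros((n, 1)), X @ B])
        eta -= eta.max(axis=1, keepdims=True)    # numeric stability
        P = np.exp(eta)
        P /= P.sum(axis=1, keepdims=True)        # predicted shares
        B += lr * (X.T @ (S - P)[:, 1:]) / n     # score of the quasi-likelihood
    return B

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
B_true = np.array([[0.2, -0.4], [0.5, 0.3]])
eta = np.hstack([np.zeros((n, 1)), X @ B_true])
S = np.exp(eta)
S /= S.sum(axis=1, keepdims=True)                # noiseless shares for the demo
B_hat = fmlogit_fit(X, S)
```

Because the outcome enters only through E[S | X], the quasi-likelihood handles shares that are exactly 0 or 1 as well as interior values, which is what "accounting for their fractional nature" refers to.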
By: | Francis X. Diebold (Department of Economics, University of Pennsylvania); Minchul Shin (Department of Economics, University of Pennsylvania) |
Abstract: | We propose point forecast accuracy measures based directly on distance of the forecast-error c.d.f. from the unit step function at 0 ("stochastic error distance," or SED). We provide a precise characterization of the relationship between SED and standard predictive loss functions, showing that all such loss functions can be written as weighted SEDs. The leading case is absolute-error loss, in which the SED weights are unity, establishing its primacy. Among other things, this suggests shifting attention away from conditional-mean forecasts and toward conditional-median forecasts. |
Keywords: | Forecast accuracy, forecast evaluation, absolute-error loss, quadratic loss, squared-error loss |
JEL: | C53 |
Date: | 2014–11–02 |
URL: | http://d.repec.org/n?u=RePEc:pen:papers:14-038&r=ecm |
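The leading case of the characterisation can be checked numerically: with unit weights, the SED of the empirical forecast-error c.d.f. coincides with the mean absolute error. The brute-force integration below is a sketch of that identity, not the paper's estimator.

```python
import numpy as np

def sed(errors, grid_pts=200000):
    """Stochastic error distance: integrate |F_hat(x) - 1{x >= 0}| over a
    grid covering the support of the forecast errors, where F_hat is the
    empirical c.d.f. With unit weights this matches mean absolute error."""
    lo, hi = errors.min() - 1.0, errors.max() + 1.0
    x = np.linspace(lo, hi, grid_pts)
    # empirical c.d.f. evaluated on the grid
    F = np.searchsorted(np.sort(errors), x, side="right") / len(errors)
    step = (x >= 0.0).astype(float)              # unit step at 0
    return np.abs(F - step).sum() * (x[1] - x[0])

rng = np.random.default_rng(5)
e = rng.normal(scale=1.5, size=500)              # simulated forecast errors
sed_val = sed(e)
mae = np.abs(e).mean()
```

Reweighting the integrand before integrating recovers other loss functions (e.g. weights proportional to |x| for squared-error loss), which is the sense in which every standard loss is a weighted SED.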
By: | Dominic Coey; Bradley Larsen; Kane Sweeney |
Abstract: | We introduce a simple and robust approach to answering two key questions in empirical auction analysis: discriminating between models of entry and quantifying the revenue gains from improving auction design. The approach builds on Bulow and Klemperer (1996), connecting their theoretical results to empirical work. It applies in a broad range of information settings and auction formats without requiring instruments or estimation of a complex structural model. We demonstrate the approach using US timber and used-car auction data. |
JEL: | C10 D44 L10 L13 L40 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:20523&r=ecm |
By: | Yuko Onishi (Graduate School of Economics, The University of Tokyo); Yasuhiro Omori (Faculty of Economics, The University of Tokyo) |
Abstract: | Entry game models are often used to study the nature of firms’ profit and the nature of competition among firms in empirical studies. However, when there are multiple players in an oligopoly market, the resulting multiple equilibria have made it difficult for previous studies to estimate the payoff functions of players in complete information, static and discrete games without using unreasonable assumptions. To overcome this difficulty, this paper proposes a practical estimation method for an entry game with three players using a Bayesian approach. Some mild assumptions are imposed on the payoff function, and the average competitive effect is used to capture the entry effect of the number of firms. Our proposed methodology is applied to Japanese airline data in 2000, when there were three major airline companies, ANA, JAL and JAS. A model comparison is conducted to investigate the nature of strategic interaction among these Japanese airline companies. |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2014cf943&r=ecm |
By: | Jennifer Roberts (University of Sheffield); Gurleen Popli (University of Leicester); Rosemary J. Harris (Queen Mary University of London) |
Abstract: | In order to meet their ambitious climate change goals, governments around the world will need to encourage behaviour change as well as technological progress; and in particular they need to weaken our attachment to the private car. A prerequisite to designing effective policy is a thorough understanding of the factors that drive behaviours and decisions. In an effort to better understand how the public’s environmental attitudes affect their behaviours we estimate a hybrid choice model (HCM) for commuting mode choice using a large household survey data set. HCMs combine traditional discrete choice models with a structural equation model to integrate latent variables, such as attitudes and other psychological constructs, into the choice process. To date HCMs have been estimated on small bespoke data sets, beset with problems of sample selection, focusing effects and limited generalizability. To overcome these problems we demonstrate the feasibility of using this valuable modelling approach with nationally representative data. Our estimates suggest that environmental attitudes and behaviours are separable constructs, and both have an important influence on commute mode choice. These psychological factors can be exploited by governments looking to add to their climate change policy toolbox in an effort to change travel behaviours. |
Keywords: | hybrid choice model, structural equation modelling, environment |
JEL: | C38 Q50 R41 |
Date: | 2014–11 |
URL: | http://d.repec.org/n?u=RePEc:shf:wpaper:2014019&r=ecm |
By: | Arcidiacono, Peter (Duke University); Hotz, V. Joseph (Duke University); Maurel, Arnaud (Duke University); Romano, Teresa (Goucher College) |
Abstract: | We show that data on subjective expectations, especially on outcomes from counterfactual choices and choice probabilities, are a powerful tool in recovering ex ante treatment effects as well as preferences for different treatments. In this paper we focus on the choice of occupation, and use elicited beliefs from a sample of male undergraduates at Duke University. By asking individuals about potential earnings associated with counterfactual choices of college majors and occupations, we can recover the distribution of the ex ante monetary returns to particular occupations, and how these returns vary across majors. We then propose a model of occupational choice which allows us to link subjective data on earnings and choice probabilities with the non-pecuniary preferences for each occupation. We find large differences in expected earnings across occupations, and substantial heterogeneity across individuals in the corresponding ex ante returns. However, while sorting across occupations is partly driven by the ex ante monetary returns, non-monetary factors play a key role in this decision. Finally, our results point to the existence of sizable complementarities between college major and occupations, both in terms of earnings and non-monetary benefits. |
Keywords: | ex ante treatment effects, occupational choice, subjective expectations |
JEL: | J24 I23 C31 |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp8549&r=ecm |