By: | Ruben Crevits; Christophe Croux |
Abstract: | The model parameters of linear state space models are typically estimated with maximum likelihood estimation, where the likelihood is computed analytically with the Kalman filter. Outliers, however, can degrade the estimation, so we propose an alternative estimation method. The Kalman filter is replaced by a robust version and the maximum likelihood estimator is robustified as well. The performance of the robust estimator is investigated in a simulation study. Robust estimation of time-varying parameter regression models is considered as a special case. Finally, the methodology is applied to real data. |
Keywords: | Kalman Filter, Forecasting, Outliers, Time varying parameters |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:ete:kbiper:588734&r=ecm |
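The sketch below illustrates the general idea of a robustified Kalman filter for a local level model, with the standardized innovation clipped by a Huber psi function before the state update. It is a minimal illustration under assumed variances and a simple initialization, not the authors' exact filter or their robustified likelihood.

```python
import numpy as np

def huber_psi(u, c=2.0):
    """Huber psi function: identity for |u| <= c, clipped to +/- c otherwise."""
    return np.clip(u, -c, c)

def robust_local_level_filter(y, sigma_eps2, sigma_eta2, c=2.0):
    """Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t,
    with the standardized innovation clipped before the state update."""
    n = len(y)
    a = np.empty(n + 1)                       # one-step-ahead state mean
    p = np.empty(n + 1)                       # one-step-ahead state variance
    a[0], p[0] = y[0], sigma_eps2             # simple initialization (assumption)
    for t in range(n):
        f = p[t] + sigma_eps2                 # innovation variance
        v = y[t] - a[t]                       # innovation
        u = huber_psi(v / np.sqrt(f), c)      # standardized, clipped innovation
        k = p[t] / f                          # Kalman gain
        a[t + 1] = a[t] + k * np.sqrt(f) * u  # robust state update
        p[t + 1] = p[t] * (1 - k) + sigma_eta2
    return a[1:], p[1:]

# Example: a slowly moving level contaminated by one large additive outlier.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.1, 200)) + rng.normal(0, 1, 200)
y[100] += 15.0
level, _ = robust_local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.01)
```

Clipping bounds the influence of any single observation on the filtered state, which is what protects the downstream likelihood evaluation from isolated outliers.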
By: | Shosei Sakaguchi (Graduate School of Economics, Kyoto University) |
Abstract: | This paper proposes a new approach to identifying and estimating the time-varying average treatment effect (ATE) using panel data to control for unobserved fixed effects. The proposed approach allows for treatment effect heterogeneity induced by unobserved fixed effects. Under such heterogeneity, existing panel data approaches identify and estimate the ATE only for limited subpopulations, whereas the proposed approach identifies and estimates the ATE for the entire population. The approach requires two conditions: (i) the proportion of the additive unobserved fixed effects terms in the treated and untreated potential outcome models is constant across units and time, and (ii) exogenous variables are available that correlate with the unobserved fixed effects conditional on the assigned treatment. Under these conditions, the approach first identifies the parameters on the observed covariates and the proportion of the fixed effects terms. It then identifies the ATE by combining them with the observed data to predict, and adjust for, the unobserved potential outcome of each treated and untreated unit. Based on the identification results, this paper proposes an estimator of the ATE that takes the form of a generalized method of moments (GMM) estimator. I apply the estimator to estimate the impact of a mother's smoking during pregnancy on her child's birth weight. |
Keywords: | Potential outcome; Program evaluation; Time-varying treatment; Treatment effect heterogeneity; Unobserved heterogeneity |
JEL: | C21 C23 |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:kyo:wpaper:970&r=ecm |
By: | Do Won Kwak (School of Economics, The University of Queensland); Kam Ki Tang (School of Economics, The University of Queensland); Juyoung Cheong (School of Economics, The University of Queensland) |
Abstract: | This paper presents a new estimation framework for partially balanced panel data models with multiple error components, which are the main source of the endogeneity problem in such models. When the dimension of the error components is high, computational difficulty arises even if the data are partially balanced (i.e., balanced in some, but not all, dimensions). For linear models, our proposed projection approach can control for as many error components as the number of balanced dimensions plus one, and gives consistent estimates. For Poisson models, using correlated random effects obtained from multiple projections also provides consistent estimates. Using Monte Carlo simulations, we confirm that the proposed estimators are consistent and produce correct inferences, with substantial power and no size distortion. The proposed estimators are applied to improve estimates of the trade cost elasticity through proper treatment of zero observations and unobserved factors. We obtain smaller trade cost elasticity estimates in both the linear and the Poisson models than previous estimates. |
Keywords: | Pseudo Poisson ML; Unbalanced Panel Data; Within estimator |
JEL: | C23 F14 |
Date: | 2017–07–04 |
URL: | http://d.repec.org/n?u=RePEc:qld:uq2004:583&r=ecm |
By: | Matteo Barigozzi; Lorenzo Trapani |
Abstract: | We develop a monitoring procedure to detect a change in a large approximate factor model. Our statistics are based on a well-known property of the $(r+1)$-th eigenvalue of the sample covariance matrix of the data: whilst under the null the $(r+1)$-th eigenvalue is bounded, under the alternative of a change (either in the loadings, or in the number of factors itself) it becomes spiked. Given that the sample eigenvalue cannot be estimated consistently under the null, we regularise the problem by randomising the test statistic in conjunction with sample conditioning, obtaining a sequence of i.i.d., asymptotically chi-square statistics which are then employed to build the monitoring scheme. Numerical evidence shows that our procedure works very well in finite samples, with a very small probability of false detections and tight detection times in the presence of a genuine change-point. |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1708.02786&r=ecm |
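As a rough orientation to the quantity being monitored (and not the authors' randomised, conditioned statistic), the snippet below tracks the scaled $(r+1)$-th largest eigenvalue of the sample covariance matrix over an expanding window: under the null it stays bounded, while an emerging extra factor makes it grow. The simulated panel and window sizes are illustrative assumptions.

```python
import numpy as np

def monitor_eigenvalue(X, r, start):
    """X: (T, N) panel; r: assumed number of factors; start: first monitoring point."""
    T, N = X.shape
    path = []
    for t in range(start, T + 1):
        S = np.cov(X[:t].T)                      # sample covariance up to time t
        eigvals = np.linalg.eigvalsh(S)[::-1]    # eigenvalues in descending order
        path.append(eigvals[r] / N)              # scaled (r+1)-th largest eigenvalue
    return np.array(path)

# Simulated one-factor panel that gains a second factor halfway through.
rng = np.random.default_rng(1)
T, N = 400, 50
f1, f2 = rng.normal(size=T), rng.normal(size=T)
lam1, lam2 = rng.normal(size=N), rng.normal(size=N)
X = np.outer(f1, lam1) + rng.normal(size=(T, N))
X[T // 2:] += np.outer(f2[T // 2:], lam2)        # structural change: an extra factor
path = monitor_eigenvalue(X, r=1, start=100)     # grows after the change-point
```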
By: | Deniz Akinc; Martina Vandebroek |
Abstract: | In this study, we compare the parameter estimates of the mixed logit model obtained with maximum likelihood and with hierarchical Bayesian estimation. The choice of priors in Bayesian estimation, and of the type and number of quasi-random draws in maximum likelihood estimation, has a large impact on the estimates. Our main focus is on the effect of the prior for the covariance matrix in hierarchical Bayes estimation. We investigate several priors, such as the Inverse Wishart, the Separation Strategy, the Scaled Inverse Wishart and the Huang Half-t priors, and we compute the root mean square errors of the resulting estimates for the mean, the covariance matrix and the individual parameters in a large simulation study. We show that the default settings in many software packages can lead to very unreliable results and that it is important to check the robustness of the results. |
Keywords: | Mixed Logit Model, Hierarchical Bayesian Estimation, Separation Strategy, Inverse Wishart Distribution, Scaled Inverse Wishart Distribution, Huang Half-t Distribution |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:ete:kbiper:588550&r=ecm |
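A small illustration of the prior-sensitivity issue discussed above: the implied prior on the random-coefficient standard deviations differs sharply across two Inverse Wishart configurations. The dimension, degrees of freedom and scale matrices below are arbitrary choices for illustration, not the settings studied in the paper.

```python
import numpy as np
from scipy.stats import invwishart

k = 3  # dimension of the random-coefficient covariance matrix
draws_diffuse = invwishart(df=k + 1, scale=np.eye(k)).rvs(size=5000)
draws_tight = invwishart(df=k + 10, scale=10 * np.eye(k)).rvs(size=5000)

def implied_sd(draws):
    """Median prior standard deviation of each random coefficient."""
    return np.median(np.sqrt(draws[:, np.arange(k), np.arange(k)]), axis=0)

print("IW(k+1, I):    median sds", implied_sd(draws_diffuse))
print("IW(k+10, 10I): median sds", implied_sd(draws_tight))
```

The two "default-looking" priors imply very different beliefs about taste heterogeneity, which is one reason unchecked software defaults can produce unreliable results.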
By: | David T. Frazier; Dan Zhu |
Abstract: | Indirect inference requires simulating realizations of endogenous variables from the model under study. When the endogenous variables are discontinuous functions of the model parameters, the resulting indirect inference criterion function is discontinuous and does not permit the use of derivative-based optimization routines. Using a specific class of measure changes, we propose a novel simulation algorithm that alleviates the underlying discontinuities inherent in the indirect inference criterion function, permitting the application of derivative-based optimization routines to estimate the unknown model parameters. Unlike competing approaches, this approach does not rely on kernel smoothing or bandwidth parameters. Several Monte Carlo examples that have featured in the literature on indirect inference with discontinuous outcomes illustrate the approach. These examples demonstrate that this new method gives superior performance over existing alternatives in terms of bias, variance and coverage. |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1708.02365&r=ecm |
By: | Thomas Kneib; Nikolaus Umlauf |
Abstract: | Bayesian methods have become increasingly popular in the past two decades. With the constant rise of computational power, even very complex models can be estimated on virtually any modern computer. Moreover, interest has shifted from conditional mean models to probabilistic distributional models capturing location, scale, shape and other aspects of a response distribution, where covariate effects can have flexible forms, e.g., linear, nonlinear, spatial or random effects. This tutorial paper discusses how to select models in the Bayesian distributional regression setting, how to monitor convergence of the Markov chains, how to evaluate the relevance of effects using simultaneous credible intervals, and how to use simulation-based inference for quantities derived from the original model parameterisation. We exemplify the workflow using daily weather data on (i) temperatures on Germany's highest mountain and (ii) extreme values of precipitation all over Germany. |
Keywords: | Distributional regression, generalized additive models for location, scale and shape, Markov chain Monte Carlo simulations, semiparametric regression, tutorial |
JEL: | C11 C14 C61 C63 |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2017-13&r=ecm |
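One ingredient of the workflow, simultaneous credible intervals, can be sketched directly from MCMC output. The construction below is a standard sup-t band, chosen here for illustration and not necessarily the authors' exact implementation: the pointwise band is widened by the posterior quantile of the maximal standardized deviation across the grid.

```python
import numpy as np

def simultaneous_band(draws, alpha=0.05):
    """draws: (n_samples, n_gridpoints) MCMC draws of an effect on a grid."""
    mean = draws.mean(axis=0)
    sd = draws.std(axis=0, ddof=1)
    z = np.abs(draws - mean) / sd                   # standardised deviations
    crit = np.quantile(z.max(axis=1), 1 - alpha)    # sup-t critical value
    return mean - crit * sd, mean + crit * sd

# Fake posterior draws of a nonlinear effect f(x), for illustration only.
rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 50)
draws = np.sin(2 * np.pi * grid) + rng.normal(0, 0.2, size=(1000, 50))
lower, upper = simultaneous_band(draws)
covers_zero = (lower <= 0) & (0 <= upper)  # effect judged "not relevant" where the band covers zero
```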
By: | Ferman, Bruno; Ponczek, Vladimir |
Abstract: | It is well known that sample attrition can lead to inconsistent treatment effect estimators, even in randomized control trials. Standard solutions to attrition problems either rely on strong assumptions about the attrition mechanism or consider the estimation of bounds, which may be uninformative if attrition problems are severe. In this paper, we analyze strategies that focus the analysis on subsets of the data with less severe observed attrition problems. We show that these strategies are asymptotically valid when the number of observations in each covariate cell goes to infinity. However, they can lead to important distortions when the number of observations per covariate cell is finite. |
Keywords: | impact evaluation, attrition, partial identification |
JEL: | C01 C93 |
Date: | 2017–08–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:80686&r=ecm |
By: | Michael B. Gordy; Hsiao Yen Lok; Alexander J. McNeil |
Abstract: | We study a class of backtests for forecast distributions in which the test statistic is a spectral transformation that weights exceedance events by a function of the modeled probability level. The choice of the kernel function makes explicit the user's priorities for model performance. The class of spectral backtests includes tests of unconditional coverage and tests of conditional coverage. We show how the class embeds a wide variety of backtests in the existing literature, and propose novel variants as well. We assess the size and power of the backtests in realistic sample sizes, and in particular demonstrate the tradeoff between power and specificity in validating quantile forecasts. |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1708.01489&r=ecm |
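The following sketch shows one very simple member of such a class of spectral backtests, with a discrete kernel that puts equal weight on two probability levels; the null moments follow from the PIT values being i.i.d. uniform under correct forecasts. It is an illustration of the idea, not the authors' test suite, and the levels and weights are assumptions.

```python
import numpy as np
from scipy.stats import norm

def spectral_backtest(pit, levels, weights):
    """pit: PIT values u_t = F_t(x_t); levels/weights: discrete kernel over probability levels."""
    pit = np.asarray(pit)
    levels, weights = np.asarray(levels), np.asarray(weights)
    W = (pit[:, None] > levels).astype(float) @ weights        # weighted exceedances per period
    p_exc = 1 - levels
    mu = weights @ p_exc                                       # null mean of W_t
    cov = np.minimum.outer(p_exc, p_exc) - np.outer(p_exc, p_exc)
    sigma2 = weights @ cov @ weights                           # null variance of W_t
    z = np.sqrt(len(pit)) * (W.mean() - mu) / np.sqrt(sigma2)
    return z, 2 * (1 - norm.cdf(abs(z)))                       # two-sided p-value

# Under a correctly specified forecast the PIT values are i.i.d. uniform.
rng = np.random.default_rng(3)
z, pval = spectral_backtest(rng.uniform(size=1000), levels=[0.95, 0.99], weights=[0.5, 0.5])
```

Putting all the kernel weight on a single level recovers a test of unconditional coverage at that level; spreading weight across levels encodes the user's priorities over the tail region.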
By: | Ruben Crevits; Christophe Croux |
Abstract: | We provide a framework for robust exponential smoothing. For a class of exponential smoothing variants, we present a robust alternative. The class includes models with a damped trend and/or seasonal components. We provide robust forecasting equations, robust starting values, robust smoothing parameter estimation and a robust information criterion. The method is implemented in the R package robets, allowing for automatic forecasting. We compare the standard non-robust version with the robust alternative in a simulation study. Finally, the methodology is tested on data. |
Keywords: | Automatic Forecasting, Outliers, R package, Time series |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:ete:kbiper:588812&r=ecm |
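A minimal sketch of the kind of robust recursion involved, here for simple exponential smoothing only: the one-step error is Huberised using a running robust scale estimate before it updates the level. The robets package implements a much richer model class (trend, damping, seasonality) with its own starting values and parameter estimation; the constants below are illustrative assumptions.

```python
import numpy as np

def robust_ses(y, alpha=0.3, c=2.0, lam=0.1):
    """Robustified simple exponential smoothing; returns one-step-ahead forecasts."""
    level = np.median(y[:10])                                 # robust starting value (assumption)
    scale = np.median(np.abs(y[:10] - level)) * 1.4826 + 1e-8  # robust starting scale
    fitted = np.empty(len(y))
    for t, yt in enumerate(y):
        fitted[t] = level                                     # forecast made before seeing y_t
        e = yt - level
        e_clean = np.clip(e / scale, -c, c) * scale           # Huberised one-step error
        level = level + alpha * e_clean                       # robust level update
        scale = np.sqrt((1 - lam) * scale**2 + lam * e_clean**2)  # smoothed scale estimate
    return fitted

rng = np.random.default_rng(4)
y = 10 + np.cumsum(rng.normal(0, 0.2, 150)) + rng.normal(0, 1, 150)
y[[40, 90]] += 12                                             # two additive outliers
forecasts = robust_ses(y)                                     # barely disturbed by the outliers
```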
By: | Aufenanger, Tobias |
Abstract: | This paper analyses the optimal allocation of experimental units to treatment and control groups. 'Optimal' means that the allocation of treatments should balance covariates across the treatment and control groups in a way that minimizes the variance of the treatment estimator in a given linear model. This paper shows both the benefits and the limits of this approach. In particular, it presents a sample size formula and several simulations that give intuition about the minimum and maximum benefits of this approach compared with random allocation and with alternative methods of treatment allocation. |
Keywords: | experiment design, treatment allocation
JEL: | C90 C61 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iwqwdp:142017&r=ecm |
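The flavour of covariate-balancing allocation can be illustrated with a rerandomisation-style search: among many candidate 50/50 assignments, keep the one with the smallest Mahalanobis imbalance between treatment and control covariate means. This is a sketch of the general idea, not the paper's variance-minimizing optimiser or its sample size formula.

```python
import numpy as np

def balanced_allocation(X, n_candidates=10000, seed=0):
    """Return the candidate 50/50 assignment with minimal Mahalanobis imbalance."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    S_inv = np.linalg.inv(np.cov(X.T))
    best, best_d = None, np.inf
    for _ in range(n_candidates):
        treat = np.zeros(n, dtype=bool)
        treat[rng.choice(n, n // 2, replace=False)] = True
        diff = X[treat].mean(axis=0) - X[~treat].mean(axis=0)
        d = diff @ S_inv @ diff                   # Mahalanobis distance between group means
        if d < best_d:
            best, best_d = treat, d
    return best, best_d

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))                     # 100 units, 4 covariates (illustrative)
treat, imbalance = balanced_allocation(X)
```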
By: | Austin Nichols (Abt Associates); Linden McBride (Cornell University) |
Abstract: | We compare a variety of methods for predicting the probability of a binary treatment (the propensity score), with the goal of comparing otherwise similar cases in treatment and control conditions for causal inference about treatment effects. Better prediction methods can, under some circumstances, improve causal inference by reducing the finite-sample bias and variability of estimators; however, better predictions of the probability of treatment can sometimes increase bias and variance. We clarify the conditions under which different methods produce better or worse inference, in terms of the mean squared error of causal impact estimates. |
Date: | 2017–08–10 |
URL: | http://d.repec.org/n?u=RePEc:boc:scon17:13&r=ecm |
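A compact version of the comparison being described: two propensity score estimators, a logistic regression and gradient boosting, feed into the same inverse-probability-weighted (IPW) estimate of the average treatment effect on simulated data. The data-generating process and the trimming threshold are illustrative assumptions, not the authors' design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(6)
n = 5000
X = rng.normal(size=(n, 5))
p_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.5)))  # nonlinear true score
D = rng.binomial(1, p_true)                       # treatment indicator
Y = 2 * D + X[:, 0] + rng.normal(size=n)          # outcome, true ATE = 2

def ipw_ate(Y, D, pscore):
    """Horvitz-Thompson style IPW estimate of the ATE."""
    return np.mean(D * Y / pscore) - np.mean((1 - D) * Y / (1 - pscore))

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("boosting", GradientBoostingClassifier())]:
    ps = model.fit(X, D).predict_proba(X)[:, 1]   # estimated propensity score
    ps = np.clip(ps, 0.01, 0.99)                  # trim extreme scores
    print(name, round(ipw_ate(Y, D, ps), 3))
```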
By: | Luca Barbaglia; Christophe Croux; Ines Wilms |
Abstract: | Volatility is a key measure of risk in financial analysis. The high volatility of one financial asset today could affect the volatility of another asset tomorrow. These lagged effects among volatilities - which we call volatility spillovers - are studied using the Vector AutoRegressive (VAR) model. We account for the possible fat-tailed distribution of the VAR model errors using a VAR model with errors following a multivariate Student t-distribution with unknown degrees of freedom. Moreover, we study volatility spillovers among a large number of assets. To this end, we use penalized estimation of the VAR model with t-distributed errors. We study volatility spillovers among energy, biofuel and agricultural commodities and reveal bidirectional volatility spillovers between energy and biofuel, and between energy and agricultural commodities. |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1708.02073&r=ecm |
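The following is a bare-bones sketch of a penalised VAR fitted equation by equation with a lasso penalty on simulated fat-tailed data; the paper's estimator additionally models multivariate Student t errors with unknown degrees of freedom, which is not reproduced here, and the penalty level below is an arbitrary choice.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_var(Y, lags=1, alpha=0.05):
    """Y: (T, N) matrix of volatility measures; returns an (N, N*lags) coefficient matrix."""
    T, N = Y.shape
    X = np.hstack([Y[lags - l - 1:T - l - 1] for l in range(lags)])  # lagged regressors
    Z = Y[lags:]
    B = np.empty((N, N * lags))
    for i in range(N):                          # one penalised regression per equation
        B[i] = Lasso(alpha=alpha, max_iter=10000).fit(X, Z[:, i]).coef_
    return B

rng = np.random.default_rng(7)
Y = rng.standard_t(df=5, size=(300, 10))        # fat-tailed toy "volatility" panel
B_hat = lasso_var(Y, lags=1)
spillover = B_hat != 0                          # nonzero entries: estimated lagged spillovers
```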
By: | Golan, Amos; LaFrance, Jeffrey T; Perloff, Jeffrey M.; Seabold, Skipper |
Abstract: | We present a new, information-theoretic approach for estimating a system of many demand equations where the unobserved reservation or choke prices vary across consumers. We illustrate this method by estimating a nonlinear, almost ideal demand system (AIDS) for four types of meat using cross-sectional data from Mexico, where most households did not buy at least one type of meat during the survey week. The system of demand curves varies across demographic groups. |
Keywords: | Social and Behavioral Sciences, demand system, choke prices, generalized maximum entropy |
Date: | 2017–08–04 |
URL: | http://d.repec.org/n?u=RePEc:cdl:agrebk:qt4qt9q8vr&r=ecm |
By: | Joseph P. Romano; Michael Wolf |
Abstract: | In many multiple testing problems, the individual null hypotheses (i) concern univariate parameters and (ii) are one-sided. In such problems, power gains can be obtained for bootstrap multiple testing procedures in scenarios where some of the parameters are 'deep in the null' by making certain adjustments to the null distribution under which to resample. In this paper, we compare a Bonferroni adjustment that is based on finite-sample considerations with certain 'asymptotic' adjustments previously suggested in the literature. |
Keywords: | Bonferroni, multiple hypothesis testing, stepwise method |
JEL: | C12 C14 |
Date: | 2017–06 |
URL: | http://d.repec.org/n?u=RePEc:zur:econwp:254&r=ecm |
By: | JIANG, Peiyun; KUROZUMI, Eiji |
Abstract: | The CUSUM test has played an important role in theory and applications related to structural change, but its drawback is that it loses power when the break is orthogonal to the mean of the regressors. In this study, we consider two modified CUSUM tests that have been proposed, implicitly or explicitly, in the literature to detect such structural changes, and investigate the limiting power properties of these tests under a fixed alternative. We demonstrate that the modified tests are superior to the classic tests, in terms of both asymptotic theory and finite-sample performance, when detecting an orthogonal structural shift. |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:hit:econdp:2017-05&r=ecm |
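For reference, a sketch of the classic recursive-residual CUSUM path that the modified tests build on; the boundary-crossing decision rule and the modifications studied in the paper are omitted, and the simulated regression is illustrative only.

```python
import numpy as np

def cusum_path(y, X):
    """Recursive-residual CUSUM path for the regression of y on X (first column a constant)."""
    T, k = X.shape
    w = np.empty(T - k)
    for t in range(k, T):
        XtX_inv = np.linalg.inv(X[:t].T @ X[:t])
        beta = XtX_inv @ X[:t].T @ y[:t]               # OLS on the first t observations
        f = 1 + X[t] @ XtX_inv @ X[t]
        w[t - k] = (y[t] - X[t] @ beta) / np.sqrt(f)   # recursive residual
    sigma = w.std(ddof=1)
    return np.cumsum(w) / (sigma * np.sqrt(T - k))     # normalised cumulative sum

rng = np.random.default_rng(10)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)
y[T // 2:] += 1.0                                      # intercept shift halfway through
path = cusum_path(y, X)                                # drifts away from zero after the break
```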
By: | David T. Frazier; Christian P. Robert; Judith Rousseau |
Abstract: | We analyze the behavior of approximate Bayesian computation (ABC) when the model generating the simulated data differs from the actual data generating process, i.e., when the data simulator in ABC is misspecified. We demonstrate both theoretically and in simple, but practically relevant, examples that when the model is misspecified, different versions of ABC can lead to substantially different results. Our theoretical results demonstrate that, under regularity conditions, a version of the ABC accept-reject approach concentrates posterior mass on an appropriately defined pseudo-true parameter value, while the popular linear regression adjustment to ABC concentrates posterior mass on a completely different pseudo-true value. Our results suggest two approaches to diagnose model misspecification in ABC. |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1708.01974&r=ecm |
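A bare-bones accept-reject ABC sketch, for orientation only: prior draws are kept when the simulated summary statistic lands close to the observed one. In the toy example the simulator's error variance deliberately differs from that of the observed data, the kind of misspecification the paper analyses; the prior, summary statistic and tolerance are illustrative choices.

```python
import numpy as np

def abc_accept_reject(y_obs, n_draws=50000, tol=0.05, seed=8):
    """Accept-reject ABC for a location parameter with an assumed N(theta, 1) simulator."""
    rng = np.random.default_rng(seed)
    n = len(y_obs)
    s_obs = y_obs.mean()                                     # summary statistic: sample mean
    theta = rng.normal(0, 10, n_draws)                       # prior draws
    y_sim = theta[:, None] + rng.normal(size=(n_draws, n))   # simulator assumes unit variance
    keep = np.abs(y_sim.mean(axis=1) - s_obs) < tol          # keep draws with close summaries
    return theta[keep]

# Misspecification: the observed data have standard deviation 2, the simulator assumes 1.
rng = np.random.default_rng(9)
y_obs = rng.normal(1.0, 2.0, size=200)
posterior = abc_accept_reject(y_obs)
print(posterior.mean(), posterior.std())                     # ABC pseudo-posterior summaries
```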
By: | Ralf Brüggemann (Department of Economics, University of Konstanz, Germany); Christian Kascha (http://www.christiankascha.com/) |
Abstract: | We represent the dynamic relations among variables in vector autoregressive (VAR) models as directed graphs. Based on these graphs, we identify so-called strongly connected components (SCCs). Using this graphical representation, we consider the problem of variable selection. We use the relations among the strongly connected components to select variables that need to be included in a VAR if interest is in forecasting or impulse response analysis of a given set of variables. We show that the set of variables selected by the graphical method coincides with the set of variables that is multi-step causal for the variables of interest, by relating the paths in the graph to the coefficients of the `direct' VAR representation. Empirical applications illustrate the usefulness of the suggested approach: including the selected variables in a small US monetary VAR is useful for impulse response analysis as it avoids the well-known `price puzzle'. We also find that including the selected variables in VARs typically improves forecasting accuracy at short horizons. |
Keywords: | Vector autoregression, Variable selection, Directed graphs, Multi-step causality, Forecasting, Impulse response analysis |
JEL: | C32 C51 E52 |
Date: | 2017–08–08 |
URL: | http://d.repec.org/n?u=RePEc:knz:dpteco:1706&r=ecm |
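The graph construction can be sketched as follows: draw an edge from variable j to variable i whenever variable j enters equation i of the VAR with a nonzero coefficient, and read off the strongly connected components with networkx. The coefficient matrix below is made up for illustration and the sketch does not reproduce the paper's selection rule.

```python
import numpy as np
import networkx as nx

A = np.array([[0.5, 0.2, 0.0, 0.0],     # VAR(1) coefficient matrix (illustrative)
              [0.0, 0.4, 0.0, 0.0],
              [0.3, 0.0, 0.6, 0.1],
              [0.0, 0.0, 0.2, 0.5]])

G = nx.DiGraph()
G.add_nodes_from(range(A.shape[0]))
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        if i != j and A[i, j] != 0:
            G.add_edge(j, i)             # variable j feeds into equation i

sccs = list(nx.strongly_connected_components(G))
print(sccs)                              # e.g. [{0}, {1}, {2, 3}]: variables 2 and 3 form one SCC
```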
By: | Peter Z. Schochet |
Abstract: | This brief aims to broaden knowledge of design-based methods by describing their key concepts and how they compare to model-based methods. |
Keywords: | Randomized controlled trials, impact estimation, quantitative methods, causal inference, education interventions |
JEL: | I |
URL: | http://d.repec.org/n?u=RePEc:mpr:mprres:82a207630f374ef6a7dfd4a6050d80c8&r=ecm |
By: | Di Tillio, Alfredo; Ottaviani, Marco; Sorensen, Peter Norman |
Abstract: | What is the impact of sample selection on the inference payoff of an evaluator testing a simple hypothesis based on the outcome of a location experiment? We show that anticipated selection locally reduces noise dispersion and thus increases informativeness if and only if the noise distribution is double logconvex, as with normal noise. The results are applied to the analysis of strategic sample selection by a biased researcher and extended to the case of uncertain and unanticipated selection. Our theoretical analysis offers applied research a new angle on the problem of selection in empirical studies, by characterizing when selective assignment based on untreated outcomes benefits or hurts the evaluator. |
Keywords: | Comparison of experiments; Dispersion; Persuasion; Strategic selection; Welfare
JEL: | C72 C90 D82 D83 |
Date: | 2017–08 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:12202&r=ecm |