
on Econometrics 
By:  Giovanni Angelini (Università di Bologna); Giuseppe Cavaliere (Università di Bologna); Luca Fanelli (Università di Bologna) 
Abstract:  This paper explores the potential of bootstrap methods in the empirical evaluation of dynamic stochastic general equilibrium (DSGE) models and, more generally, of linear rational expectations models featuring unobservable (latent) components. We consider two dimensions. First, we provide mild regularity conditions that suffice for the bootstrap Quasi Maximum Likelihood (QML) estimator of the structural parameters to mimic the asymptotic distribution of the QML estimator. Consistency of the bootstrap allows one to keep the probability of false rejections of the cross-equation restrictions under control. Second, we show that the realizations of the bootstrap estimator of the structural parameters can be constructively used to build novel, computationally straightforward tests for model misspecification, including the case of weak identification. In particular, we show that under strong identification and bootstrap consistency, a test statistic based on a set of realizations of the bootstrap QML estimator approximates the Gaussian distribution. Instead, when the regularity conditions for inference do not hold, as happens, e.g., when (part of) the structural parameters are weakly identified, this result is no longer valid. Therefore, we can evaluate how close the estimated model is to the case of strong identification. Our Monte Carlo experiments suggest that the bootstrap plays an important role along both dimensions and represents a promising tool for evaluating the cross-equation restrictions and, under certain conditions, the strength of identification. An empirical illustration based on a small-scale DSGE model estimated on U.S. quarterly observations shows the practical usefulness of our approach. 
Keywords:  Bootstrap, Cross-equation restrictions, DSGE, QLR test, State space model, Weak identification. 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:133&r=ecm 
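The bootstrap principle the paper builds on can be illustrated far more modestly than in the authors' DSGE setting: a parametric bootstrap re-estimates the model on samples drawn from the fitted model, and the spread of the re-estimates mimics the sampling distribution of the original estimator. The AR(1) stand-in below, with QML coinciding with OLS, is an illustrative assumption, not the paper's state-space setup.

```python
import random
import math

def fit_ar1(y):
    """QML/OLS estimate of rho in y_t = rho * y_{t-1} + e_t."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def simulate_ar1(rho, sigma, n, rng):
    """Generate an AR(1) path of length n starting from zero."""
    y = [0.0]
    for _ in range(n - 1):
        y.append(rho * y[-1] + rng.gauss(0.0, sigma))
    return y

def bootstrap_distribution(y, n_boot=500, seed=0):
    """Parametric bootstrap: fit the model, then re-estimate rho on
    n_boot samples simulated from the fitted model."""
    rng = random.Random(seed)
    rho_hat = fit_ar1(y)
    resid = [y[t] - rho_hat * y[t - 1] for t in range(1, len(y))]
    sigma_hat = math.sqrt(sum(e * e for e in resid) / len(resid))
    return [fit_ar1(simulate_ar1(rho_hat, sigma_hat, len(y), rng))
            for _ in range(n_boot)]

rng = random.Random(42)
y = simulate_ar1(0.7, 1.0, 400, rng)
boot = bootstrap_distribution(y, n_boot=200)
```

Under bootstrap consistency, the empirical distribution of `boot` approximates the sampling distribution of the estimator; the paper's misspecification tests exploit how Gaussian that cloud of realizations looks.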
By:  Florian Huber (Department of Economics, Vienna University of Economics and Business); Gregor Kastner (Department of Statistics and Mathematics, Vienna University of Economics and Business); Martin Feldkircher (Oesterreichische Nationalbank (OeNB)) 
Abstract:  We provide a flexible means of estimating time-varying parameter models in a Bayesian framework. By specifying the state innovations to be characterized through a threshold process that is driven by the absolute size of parameter changes, our model detects at each point in time whether a given regression coefficient is constant or time-varying. Moreover, our framework accounts for model uncertainty in a data-based fashion through Bayesian shrinkage priors on the initial values of the states. In a simulation, we show that our model reliably identifies regime shifts in cases where the data generating processes display high, moderate, and low numbers of movements in the regression parameters. Finally, we illustrate the merits of our approach by means of two applications: in the first we forecast the US equity premium, and in the second we investigate the macroeconomic effects of a US monetary policy shock. 
Keywords:  Change point model, Threshold mixture innovations, Structural breaks, Shrinkage, Bayesian statistics, Monetary policy 
JEL:  C11 C32 C52 E42 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp235&r=ecm 
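The threshold mixture innovation mechanism can be sketched on the simulation side: a parameter innovation is retained only when its absolute size exceeds a threshold, so the coefficient path alternates between constant and time-varying spells. The function and its arguments are a hypothetical illustration, not the authors' Bayesian sampler.

```python
import random

def simulate_threshold_tvp(n, d, sigma, seed=0):
    """Random-walk regression coefficient whose innovation is kept only
    when the proposed change exceeds threshold d in absolute value."""
    rng = random.Random(seed)
    beta = [0.0]
    for _ in range(n - 1):
        e = rng.gauss(0.0, sigma)
        beta.append(beta[-1] + (e if abs(e) > d else 0.0))
    return beta
```

With `d = 0` this collapses to a standard time-varying-parameter random walk; with a very large `d` the coefficient stays constant throughout, which is the nesting the model exploits to decide, period by period, whether a coefficient moves.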
By:  Paulo M.M. Rodrigues; Matei Demetrescu 
Abstract:  Bias correction in predictive regressions stabilizes the empirical size properties of OLS-based predictability tests. This paper shows that bias correction also improves the finite sample power of tests, in particular in the context of the extended instrumental variable (IVX) predictability testing framework introduced by Kostakis et al. (2015, Review of Financial Studies). We introduce new IVX statistics subject to a bias correction analogous to that proposed by Amihud and Hurvich (2014, Journal of Financial and Quantitative Analysis). Three important contributions are provided: first, we characterize the effects that bias-reduction adjustments have on the asymptotic distributions of the IVX test statistics in a general context allowing for short-run dynamics and heterogeneity; second, we discuss the validity of the procedure when predictors are stationary as well as near-integrated; and third, we conduct an exhaustive Monte Carlo analysis to investigate the small-sample properties of the test procedure and its sensitivity to distinctive features that characterize predictive regressions in practice, such as strong persistence, endogeneity, non-Gaussian innovations and heterogeneity. An application of the new procedure to the Welch and Goyal (2008) database illustrates its usefulness in practice. 
JEL:  C12 C22 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:ptu:wpaper:w201605&r=ecm 
By:  Victor Chernozhukov; Ivan FernandezVal; Blaise Melly; Kaspar Wüthrich 
Abstract:  This paper provides a method to construct simultaneous confidence bands for quantile and quantile effect functions for possibly discrete or mixed discrete-continuous random variables. The construction is generic and does not depend on the nature of the underlying problem. It works in conjunction with parametric, semiparametric, and nonparametric modeling strategies and does not depend on the sampling scheme. It is based upon projection of simultaneous confidence bands for distribution functions. We apply our method to analyze the distributional impact of insurance coverage on health care utilization and to provide a distributional decomposition of the racial test score gap. Our analysis generates new interesting findings, and complements previous analyses that focused on mean effects only. In both applications, the outcomes of interest are discrete, rendering standard inference methods invalid for obtaining uniform confidence bands for quantile and quantile effect functions. 
Keywords:  quantiles; quantile effects; treatment effects; distribution; discrete; mixed; count data; confidence bands; uniform inference. 
JEL:  C12 C21 C25 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1607&r=ecm 
By:  Athey, Susan (Stanford University); Imbens, Guido W. (Stanford University); Wager, Stefan (?) 
Abstract:  There are many studies where researchers are interested in estimating average treatment effects and are willing to rely on the unconfoundedness assumption, which requires that treatment assignment is as good as random conditional on pretreatment variables. The unconfoundedness assumption is often more plausible if a large number of pretreatment variables are included in the analysis, but this can worsen the finite sample properties of existing approaches to estimation. In particular, existing methods do not handle well the case where the model for the propensity score (that is, the model relating pretreatment variables to treatment assignment) is not sparse. In this paper, we propose a new method for estimating average treatment effects in high dimensions that combines balancing weights and regression adjustments. We show that our estimator achieves the semiparametric efficiency bound for estimating average treatment effects without requiring any modeling assumptions on the propensity score. The result relies on two key assumptions, namely overlap (that is, all units have a propensity score that is bounded away from 0 and 1), and sparsity of the model relating pretreatment variables to outcomes. 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:ecl:stabus:3408&r=ecm 
By:  Chambers, Marcus J; Kyriacou, Maria 
Abstract:  This paper considers the specification and performance of jackknife estimators of the autoregressive coefficient in a model with a near-unit root. The limit distributions of the subsample estimators used in the construction of the jackknife estimator are derived, and the joint moment generating function (MGF) of two components of these distributions is obtained and its properties are explored. The MGF can be used to derive the weights for an optimal jackknife estimator that fully removes the first-order finite sample bias from the estimator. The resulting jackknife estimator is shown to perform well in finite samples and, with a suitable choice of the number of subsamples, to reduce the overall finite sample root mean squared error as well as the bias. However, the optimal jackknife weights rely on knowledge of the near-unit root parameter, which is typically unknown in practice, and so an alternative, feasible, jackknife estimator is proposed which achieves the intended bias reduction without relying on knowledge of this parameter. This feasible jackknife estimator is also capable of substantial bias and root mean squared error reductions in finite samples across a range of values of the near-unit root parameter and across different sample sizes. 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:esx:essedp:17623&r=ecm 
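The basic jackknife combination the paper refines can be sketched as follows: estimate the autoregressive coefficient on the full sample and on m non-overlapping subsamples, then combine with fixed weights. These are the standard weights that remove first-order bias in the stationary case; the paper's optimal and feasible estimators replace them with weights adapted to the near-unit-root parameter.

```python
def ar1_ols(y):
    """OLS estimate of the autoregressive coefficient in an AR(1)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def jackknife_ar1(y, m=2):
    """Standard jackknife over m non-overlapping subsamples:
    rho_J = (m/(m-1)) * rho_full - (1/(m(m-1))) * sum of subsample estimates."""
    n = len(y)
    k = n // m
    subs = [ar1_ols(y[i * k:(i + 1) * k]) for i in range(m)]
    full = ar1_ols(y)
    return (m / (m - 1)) * full - sum(subs) / (m * (m - 1))
```

The choice of m trades off bias removal against variance inflation, which is why the abstract stresses a suitable choice of the number of subsamples.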
By:  Franses, Ph.H.B.F. 
Abstract:  A MIDAS regression involves a dependent variable observed at a low frequency and independent variables observed at a higher frequency. This paper relates a true high-frequency data generating process, in which the dependent variable is also (hypothetically) observed at the high frequency, to a MIDAS regression. It is shown that a correctly specified MIDAS regression usually includes lagged dependent variables, a substantial number of explanatory variables (observable at the low frequency) and a moving average term. Moreover, the parameters of the explanatory variables are unlikely to obey certain convenient patterns, and hence imposing such restrictions in practice is not recommended. 
Keywords:  high frequency, low frequency, MIDAS regression 
JEL:  C32 
Date:  2016–08–24 
URL:  http://d.repec.org/n?u=RePEc:ems:eureir:93331&r=ecm 
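The frequency alignment behind an unrestricted (U-MIDAS) regression, the kind whose coefficients are left unconstrained as the paper recommends, amounts to stacking the high-frequency observations of each low-frequency period into one regressor row. The helper below is a hypothetical sketch of that alignment step only, with no estimation.

```python
def umidas_rows(x_high, periods_per_low=3):
    """Stack a high-frequency series into low-frequency rows of lags:
    row q holds [x_{3q}, x_{3q-1}, x_{3q-2}], most recent observation first
    (for monthly data aggregated to quarters)."""
    m = periods_per_low
    n_low = len(x_high) // m
    return [list(reversed(x_high[q * m:(q + 1) * m])) for q in range(n_low)]

rows = umidas_rows(list(range(1, 13)))
# each quarterly row collects that quarter's three monthly observations
```

Each coefficient on these columns is then estimated freely by OLS, rather than forced onto an Almon or beta lag pattern, consistent with the abstract's warning against imposing convenient parameter restrictions.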
By:  António Rua 
Abstract:  In an increasingly data-rich environment, factor models have become the workhorse approach for modelling and forecasting purposes. However, factors are unobservable and have to be estimated; in particular, the space spanned by the unknown factors is typically estimated via principal components. Herein, a novel procedure is proposed to estimate the factor space, resorting to wavelet-based multiscale principal component analysis. Through a Monte Carlo simulation study, it is shown that such an approach improves both factor model estimation and forecasting performance. An empirical application illustrates its usefulness for forecasting GDP growth and inflation in the United States. 
JEL:  C22 C40 C53 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:ptu:wpaper:w201612&r=ecm 
By:  Manganelli, Simone 
Abstract:  This paper shows how to incorporate judgment in a decision problem under uncertainty, within a classical framework. The method relies on the specification of a judgmental decision with an associated confidence level and the application of hypothesis testing. The null hypothesis tests whether marginal deviations from the judgmental decision generate negative changes in expected utility. The resulting estimator is always at the boundary of the confidence interval: beyond that point the probability of decreasing the expected utility becomes greater than the chosen confidence level. The decision maker chooses the confidence level as a mapping from the p-value of the judgmental decision into the unit interval. I show how the choice of priors in Bayesian estimators is equivalent to the choice of this confidence level mapping. I illustrate the implications of this new framework with a portfolio choice between cash and the EuroStoxx50 index. 
JEL:  C1 C11 C12 C13 D81 
Keywords:  judgmental estimation, outofsample, portfolio selection, statistical risk propensity 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20161947&r=ecm 
By:  Gregory Connor (Department of Economics, Finance and Accounting, Maynooth University.) 
Abstract:  This paper provides a small-sample adjustment for Bonferroni-corrected p-values in multiple univariate regressions of a quantitative phenotype (such as a social trait) on individual genome markers. The p-value estimator conventionally used in existing genome-wide association (GWA) regressions assumes a normally distributed dependent variable, or relies on a central limit theorem based approximation. We show that the central limit theorem approximation is unreliable for GWA regression Bonferroni-corrected p-values except in very large samples. We note that measured phenotypes (particularly in the case of social traits) often have markedly non-normal distributions. We propose a mixed normal distribution to better fit observed phenotypic variables, and derive exact small-sample p-values for the standard GWA regression under this distributional assumption. 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:may:mayecw:n27416.pdf&r=ecm 
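For reference, the baseline Bonferroni correction that the paper's small-sample adjustment refines is simply each p-value multiplied by the number of tests, capped at one. This is the generic textbook correction, not the paper's exact mixed-normal p-value derivation.

```python
def bonferroni(p_values):
    """Bonferroni-corrected p-values: p_adj = min(1, m * p) over m tests."""
    m = len(p_values)
    return [min(1.0, m * p) for p in p_values]

adj = bonferroni([0.001, 0.02, 0.4])
# large raw p-values are capped at 1.0 after multiplication by m
```

With the millions of markers in a GWA study, m is enormous, which is why the reliability of the underlying per-regression p-values in small samples matters so much.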
By:  Michele Scagliarini (Università di Bologna) 
Abstract:  In this study we propose a sequential procedure for hypothesis testing on the Cpk process capability index. We compare the properties of the sequential test with the performances of nonsequential tests by performing an extensive simulation study. The results indicate that the proposed sequential procedure makes it possible to save a large amount of sample size, which can be translated into reduced costs, time and resources. 
Keywords:  Average Sample Size; Brownian Motion; Maximal Allowable Sample Size; Power Function; Simulation Studies. 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:134&r=ecm 
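The quantity under test, the Cpk index, compares the distance from the process mean to the nearest specification limit with three process standard deviations. The sketch below shows only the point estimate the hypotheses concern, not the sequential stopping rule the paper proposes.

```python
import statistics

def cpk(sample, lsl, usl):
    """Process capability index:
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu = statistics.fmean(sample)
    sigma = statistics.stdev(sample)  # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)
```

A sequential procedure would update this estimate as observations arrive and stop as soon as the accumulated evidence decides the hypothesis, which is the source of the sample-size savings reported in the abstract.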
By:  Yongquan Cao (Indiana University); Grey Gordon (Indiana University) 
Abstract:  A calibration strategy tries to match target moments using a model's parameters. We propose tests for determining whether this is possible. The tests use moments at random parameter draws to assess whether the target moments are similar to the computed ones (evidence of existence) or appear to be outliers (evidence of nonexistence). Our experiments show the tests are effective at detecting both existence and nonexistence in a nonlinear model. Multiple calibration strategies can be quickly tested using just one set of simulated data. Code is provided. 
Keywords:  Calibration, GMM, Existence, Outlier, Data Mining 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2016004&r=ecm 
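The idea of the tests can be caricatured in a few lines: simulate moments at random parameter draws and ask whether the target moment looks like a member of that cloud or an outlier. The toy moment function and the simple range check below are illustrative assumptions, not the authors' test statistics.

```python
import random

def moment(theta):
    # hypothetical toy model: the moment implied by parameter theta
    return theta ** 2

def target_in_moment_cloud(target, n_draws=1000, seed=1):
    """Draw parameters at random, compute implied moments, and check whether
    the target moment falls inside the simulated cloud (a crude stand-in for
    the paper's outlier-based evidence of existence vs. non-existence)."""
    rng = random.Random(seed)
    moments = [moment(rng.uniform(0.0, 1.0)) for _ in range(n_draws)]
    return min(moments) <= target <= max(moments)
```

As the abstract notes, one set of simulated draws can be reused to screen many candidate calibration targets at once, since the cloud of moments does not depend on the target.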
By:  Jakir Hussain (Department of Economics, University of Ottawa, Ottawa, ON); JeanThomas Bernard (Department of Economics, University of Ottawa, Ottawa, ON) 
Abstract:  It is well-known that econometric productivity estimation using flexible functional forms often encounters violations of curvature conditions. However, the productivity literature does not provide any guidance on the selection of appropriate functional forms once they satisfy the theoretical regularity conditions. In this paper, we provide empirical evidence that imposing local curvature conditions on flexible functional forms affects total factor productivity (TFP) estimates in addition to the elasticity estimates. Moreover, we use this as a criterion for evaluating the performance of three widely used locally flexible cost functional forms, the translog (TL), the Generalized Leontief (GL), and the Normalized Quadratic (NQ), in providing TFP estimates. The results suggest that the NQ model performs better than the other two functional forms in providing TFP estimates. 
Keywords:  Technical change, Productivity, Flexible functional forms, Translog (TL), Generalized Leontief (GL), Normalized Quadratic (NQ), Cost function, Concavity 
JEL:  C22 F33 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:ott:wpaper:1612e&r=ecm 
By:  Tom Reynkens; Roel Verbelen; Jan Beirlant; Katrien Antonio 
Abstract:  In risk analysis, a global fit that appropriately captures the body and the tail of the distribution of losses is essential. Modeling the whole range of the losses using a standard distribution is usually very hard and often impossible due to the specific characteristics of the body and the tail of the loss distribution. A possible solution is to combine two distributions in a splicing model: a light-tailed distribution for the body, which covers light and moderate losses, and a heavy-tailed distribution for the tail, to capture large losses. We propose a splicing model with a mixed Erlang (ME) distribution for the body and a Pareto distribution for the tail. This combines the flexibility of the ME distribution with the ability of the Pareto distribution to model extreme values. We extend our splicing approach to censored and/or truncated data; relevant examples of such data can be found in financial risk analysis. We illustrate the flexibility of this splicing model using practical examples from risk measurement. 
Keywords:  censoring, composite model, expectationmaximization algorithm, risk measurement, tail modeling 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:ete:kbiper:549545&r=ecm 
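A spliced density of the kind described can be sketched with a truncated lognormal body, used here as a simplified stand-in for the paper's mixed Erlang component, below a threshold t and a Pareto tail above it. The weight w, threshold t, and tail index alpha are illustrative assumptions.

```python
import math

def lognorm_pdf(x, mu=0.0, sigma=1.0):
    """Lognormal density; zero at and below the origin."""
    if x <= 0:
        return 0.0
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2 * math.pi))

def lognorm_cdf(x, mu=0.0, sigma=1.0):
    """Lognormal distribution function via the error function."""
    if x <= 0:
        return 0.0
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def spliced_pdf(x, t=10.0, w=0.9, alpha=2.0):
    """Splicing density: weight w on the body (lognormal truncated at t)
    and weight 1 - w on a Pareto(alpha) tail with scale t."""
    if x <= t:
        return w * lognorm_pdf(x) / lognorm_cdf(t)
    return (1 - w) * alpha * t ** alpha / x ** (alpha + 1)
```

By construction the body integrates to w and the Pareto tail to 1 - w, so the splice is a proper density; the paper's EM-based fitting of the ME body and its censoring/truncation extensions sit on top of this basic structure.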
By:  AntolinDiaz, Juan; RubioRamírez, Juan Francisco 
Abstract:  This paper identifies structural vector autoregressions using narrative sign restrictions. Narrative sign restrictions constrain the structural shocks and the historical decomposition of the data around key historical events, ensuring that they agree with the established account of these episodes. Using models of the oil market and monetary policy, we show that narrative sign restrictions can be highly informative. In particular, we highlight that adding a small number of narrative sign restrictions, or sometimes even a single one, dramatically sharpens and even changes the inference of SVARs originally identified via the established practice of placing sign restrictions only on the impulse response functions. We see our approach as combining the appeal of narrative methods with the desire for basing inference on a few uncontroversial restrictions that popularized the use of sign restrictions. 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:11517&r=ecm 
By:  Acharya, Avidit (Stanford University); Blackwell, Matthew (Harvard University); Sen, Maya (Harvard University) 
Abstract:  Researchers seeking to establish causal relationships frequently control for variables on the purported causal pathway, checking whether the original treatment effect then disappears. Unfortunately, this common approach may lead to biased estimates. In this paper, we show that the bias can be avoided by focusing on a quantity of interest called the controlled direct effect. Under certain conditions, the controlled direct effect enables researchers to rule out competing explanations, an important objective for political scientists. To estimate the controlled direct effect without bias, we describe an easy-to-implement estimation strategy from the biostatistics literature. We extend this approach by deriving a consistent variance estimator and demonstrating how to conduct a sensitivity analysis. Two examples, one on ethnic fractionalization's effect on civil war and one on the impact of historical plough use on contemporary female political participation, illustrate the framework and methodology. 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:ecl:harjfk:15064&r=ecm 
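The two-stage "demediation" logic behind the estimation strategy can be sketched in the simplest possible case, binary treatment and mediator with no covariates (the actual method conditions on covariates and comes with a variance estimator and sensitivity analysis): first strip the mediator's effect from the outcome, then compare demediated means across treatment arms.

```python
def mean(xs):
    return sum(xs) / len(xs)

def controlled_direct_effect(y, a, m):
    """Two-stage demediation sketch for binary treatment a and mediator m.
    Stage 1: estimate the mediator effect and remove it from the outcome.
    Stage 2: difference in demediated means across treatment arms."""
    gamma = (mean([yi for yi, mi in zip(y, m) if mi == 1])
             - mean([yi for yi, mi in zip(y, m) if mi == 0]))
    y_tilde = [yi - gamma * mi for yi, mi in zip(y, m)]
    return (mean([yt for yt, ai in zip(y_tilde, a) if ai == 1])
            - mean([yt for yt, ai in zip(y_tilde, a) if ai == 0]))
```

Because the mediator's contribution is removed before the treatment comparison, the estimate is not biased by controlling for a post-treatment variable in a single regression, which is the pitfall the abstract warns against.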
By:  Broockman, David E. (Stanford University); Kalla, Joshua L. (University of CA, Berkeley); Sekhon, Jasjeet S. (University of CA, Berkeley) 
Abstract:  Social scientists increasingly wish to use field experiments to test theories. However, common experimental designs for studying the effects of treatments delivered in the field on individuals' attitudes are infeasible for most researchers and vulnerable to bias. We detail an alternative field experiment design exploiting a placebo control and multiple waves of panel surveys delivered online with multiple measures of outcomes. This design can make persuasion field experiments feasible by decreasing costs (often by nearly two orders of magnitude), allows experiments to test additional theories, and facilitates the evaluation of design assumptions. We then report an original application study, a field experiment implementing the design to evaluate a persuasive canvass targeting abortion attitudes. This study estimated a precise zero, suggesting the design can successfully evade social desirability bias. We conclude by discussing potential limitations and extensions. 
Date:  2016–04 
URL:  http://d.repec.org/n?u=RePEc:ecl:stabus:3402&r=ecm 