
NEP: New Economics Papers on Econometrics 
By:  Alessandro Casini; Pierre Perron 
Abstract:  Under the classical long-span asymptotic framework we develop a class of Generalized Laplace (GL) inference methods for the change-point dates in a linear time series regression model with multiple structural changes, as analyzed in, e.g., Bai and Perron (1998). The GL estimator is defined by an integration-based rather than optimization-based method and relies on the least-squares criterion function. It is interpreted as a classical (non-Bayesian) estimator, and the proposed inference methods retain a frequentist interpretation. Since inference about the change-point dates is a nonstandard statistical problem, Laplace's original insight of interpreting a certain transformation of a least-squares criterion function as a statistical belief over the parameter space provides a better approximation to the uncertainty in the data about the change-points than existing methods. Simulations show that the GL estimator is in general more precise than the OLS estimator. On the theoretical side, depending on some input (smoothing) parameter, the class of GL estimators exhibits a dual limiting distribution: the classical shrinkage asymptotic distribution of Bai and Perron (1998), or a Bayes-type asymptotic distribution. 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.10871&r=ecm 
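To make the integration-based idea in the abstract above concrete, here is a minimal numerical sketch (not the authors' estimator): for a single mean shift, each candidate break date is weighted by exp(-SSR/(2*scale)), and the GL-style estimate is the weighted average of dates rather than the SSR minimizer. The function name, trimming fraction, and scale heuristic are all illustrative assumptions.

```python
import numpy as np

def gl_break_date(y, x, trim=0.15, scale=None):
    """Laplace-style break-date estimate for y_t = x_t'b1 (t<=k),
    x_t'b2 (t>k) + e_t: weight each candidate date k by
    exp(-SSR(k)/(2*scale)) and integrate (average) over dates."""
    T = len(y)
    ks = np.arange(int(trim * T), int((1 - trim) * T))
    ssr = np.empty(len(ks))
    for i, k in enumerate(ks):
        s = 0.0
        for seg in (slice(0, k), slice(k, T)):   # pre- and post-break fits
            b, *_ = np.linalg.lstsq(x[seg], y[seg], rcond=None)
            e = y[seg] - x[seg] @ b
            s += e @ e
        ssr[i] = s
    if scale is None:
        scale = ssr.min() / T                    # rough error-variance proxy
    w = np.exp(-(ssr - ssr.min()) / (2.0 * scale))
    w /= w.sum()                                 # quasi-posterior over dates
    return float(ks @ w)                         # weighted-average date
```

The same weights could also be used to form quantile-based confidence sets for the break date, which is the kind of frequentist use of a quasi-posterior the abstract describes.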
By:  Peng, Liang; Yao, Qiwei 
Abstract:  When a conditional distribution has an infinite variance, commonly employed kernel smoothing methods such as local polynomial estimators for the conditional mean admit non-normal limiting distributions (Hall et al., 2002). This complicates the related inference, as conventional tests and confidence intervals based on asymptotic normality are no longer applicable and the standard bootstrap method often fails. By handling the middle part of the data nonparametrically and the tail parts parametrically based on extreme value theory, this paper proposes a new estimation method for conditional means, resulting in asymptotically normal estimators even when the conditional distribution has infinite variance. Consequently, the standard bootstrap method can be employed to construct, for example, confidence intervals regardless of the tail heaviness. The same idea can be applied to estimating the difference between a conditional mean and a conditional median, which is a useful measure in exploratory data analysis. 
Keywords:  Asymptotic normality; Conditional mean; Extreme value theory; Heavy tail 
JEL:  C1 
Date:  2017–08–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:73082&r=ecm 
By:  Giovanni Angelini (University of Bologna, Italy); Paolo Gorgi (VU Amsterdam, The Netherlands) 
Abstract:  This paper proposes a novel approach to introducing time-variation in the structural parameters of DSGE models. Structural parameters are allowed to evolve over time via an observation-driven updating equation. Estimation of the resulting DSGE model can be easily performed by maximum likelihood without the need for time-consuming simulation-based methods. An application to a DSGE model with time-varying volatility for structural shocks is presented. The results indicate a significant improvement in forecasting performance. 
Keywords:  DSGE models; score-driven models; time-varying parameters 
JEL:  C32 C5 
Date:  2018–03–30 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20180030&r=ecm 
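The observation-driven updating equation mentioned above is typically of the score-driven (GAS) form f_{t+1} = omega + beta*f_t + alpha*s_t, where s_t is a scaled score of the conditional likelihood. The sketch below applies this recursion to a time-varying log-variance, as a generic illustration of the mechanism rather than the paper's DSGE implementation; all parameter values are illustrative.

```python
import numpy as np

def gas_logvar_filter(y, omega, alpha, beta, f0=0.0):
    """Score-driven filter for a time-varying log-variance f_t:
    f_{t+1} = omega + beta*f_t + alpha*s_t, with scaled Gaussian
    score s_t = y_t^2 / exp(f_t) - 1 (zero in expectation when the
    filtered variance matches the true one)."""
    f = np.empty(len(y) + 1)
    f[0] = f0
    for t, yt in enumerate(y):
        s = yt ** 2 / np.exp(f[t]) - 1.0   # scaled score
        f[t + 1] = omega + beta * f[t] + alpha * s
    return f
```

Because the recursion is deterministic given the data, the likelihood can be evaluated in one pass, which is why maximum likelihood needs no simulation here.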
By:  Hirukawa, Masayuki; Prokhorov, Artem 
Abstract:  Economists often use matched samples, especially when dealing with earnings data where a number of missing observations need to be imputed. In this paper, we demonstrate that the ordinary least squares estimator of the linear regression model using matched samples is inconsistent and has a nonstandard convergence rate to its probability limit. If only a few variables are used to impute the missing data, then it is possible to correct for the bias. We propose two semiparametric bias-corrected estimators and explore their asymptotic properties. The estimators have an indirect-inference interpretation and attain the parametric convergence rate if the number of matching variables is no greater than three. Monte Carlo simulations confirm that the bias correction works very well in such cases. 
Keywords:  measurement error bias; matching estimation; linear regression; indirect inference; bias correction 
Date:  2017–03–16 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/18063&r=ecm 
By:  Li, Kunpeng 
Abstract:  Spatial panel data models are widely used in empirical studies. Existing theories of spatial models have so far largely confined the analysis to the assumption of parameter stability. This is unduly restrictive, since a large number of studies have well documented the presence of structural changes in the relationships among economic variables. This paper proposes and studies spatial panel data models with structural change. We consider the quasi maximum likelihood method to estimate the model. Both static and dynamic models are considered, under both large-$T$ and fixed-$T$ setups. We provide a relatively complete asymptotic theory for the maximum likelihood estimators, including consistency, convergence rates and limiting distributions of the regression coefficients, the timing of the structural change and the variance of the errors. We study hypothesis testing for the presence of structural change and propose three sup-type statistics. Monte Carlo simulation results are consistent with our theoretical results and show that the maximum likelihood estimators have good finite sample performance. 
Keywords:  Spatial panel data models, structural changes, hypothesis testing, asymptotic theory. 
JEL:  C31 C33 
Date:  2018–03–21 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:85388&r=ecm 
By:  Jonathan Roth 
Abstract:  The common practice in difference-in-differences (DiD) designs is to check for parallel trends prior to treatment assignment, yet typical estimation and inference do not account for the fact that this test has occurred. I analyze the properties of the traditional DiD estimator conditional on having passed (i.e., not rejected) the test for parallel pre-trends. When the DiD design is valid and the test for pre-trends confirms it, the typical DiD estimator is unbiased, but traditional standard errors are overly conservative. Additionally, there exists an alternative unbiased estimator that is more efficient than the traditional DiD estimator under parallel trends. However, when in population there is a non-zero pre-trend but we fail to reject the hypothesis of parallel pre-trends, the DiD estimator is generally biased relative to the population DiD coefficient. Moreover, if the trend is monotone, then under reasonable assumptions the conditioning exacerbates the bias relative to the true treatment effect. I propose new estimation and inference procedures that account for the test for parallel trends and compare their performance to that of the traditional estimator in a Monte Carlo simulation. 
Date:  2018–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1804.01208&r=ecm 
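The conditioning effect described above can be reproduced in a few lines. The stylized setup below (three periods, group-level gaps with a linear trend, true treatment effect of zero) is an illustrative simulation in the spirit of the abstract, not the paper's own design; all parameter values are assumptions.

```python
import numpy as np

def did_pretest_bias(delta, sigma=1.0, n_sims=20000, crit=1.96, seed=0):
    """Monte Carlo sketch: periods t = -1, 0, 1; treated-control gap
    diff_t = delta*t + noise; true treatment effect is zero.
    Returns the mean DiD estimate (i) over all draws and (ii) only over
    draws where the pre-trend test diff_0 - diff_{-1} is insignificant."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0, sigma, (n_sims, 3))          # noise at t=-1,0,1
    diff = delta * np.array([-1.0, 0.0, 1.0]) + eps  # group gaps
    pretrend = diff[:, 1] - diff[:, 0]               # tested quantity
    did = diff[:, 2] - diff[:, 1]                    # DiD estimate
    passed = np.abs(pretrend) < crit * sigma * np.sqrt(2)
    return did.mean(), did[passed].mean()
```

With a positive trend, draws that pass the test are those where the period-0 noise happened to offset the trend, which mechanically pushes the conditional DiD estimate further from the true effect of zero.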
By:  Bontemps, Christian 
Abstract:  This paper considers moment-based tests applied to estimated quantities. We propose a general class of transforms of moments to handle the parameter uncertainty problem. The construction requires only a linear correction that can be implemented in-sample and remains valid for some extended families of non-smooth moments. We re-emphasize the attractiveness of working with robust moments, which lead to testing procedures that do not depend on the estimator. Furthermore, no correction is needed when considering the implied test statistic in the out-of-sample case. We apply our methodology to various examples with an emphasis on the backtesting of value-at-risk forecasts. 
Keywords:  moment-based tests; parameter uncertainty; out-of-sample; discrete distributions; value-at-risk; backtesting 
JEL:  C12 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:ide:wpaper:32565&r=ecm 
By:  Carriero, Andrea (Queen Mary, University of London); Clark, Todd E. (Federal Reserve Bank of Cleveland); Marcellino, Massimiliano (Bocconi University, IGIER, and CEPR) 
Abstract:  We show that macroeconomic uncertainty can be considered exogenous when assessing its effects on the U.S. economy. By contrast, financial uncertainty can at least in part arise as an endogenous response to some macroeconomic developments, and overlooking this channel leads to distortions in the estimated effects of financial uncertainty shocks on the economy. We obtain these empirical findings with an econometric model that simultaneously allows for contemporaneous effects of both uncertainty shocks on economic variables and of economic shocks on uncertainty. While traditional econometric approaches do not allow us to identify both of these transmission channels simultaneously, we achieve identification by exploiting the heteroskedasticity of macroeconomic data. Methodologically, we develop a structural VAR with time-varying volatility in which one of the variables (the uncertainty measure) impacts both the mean and the variance of the other variables. We provide conditional posterior distributions for this model, which is a substantial extension of the popular leverage model of Jacquier, Polson, and Rossi (2004), and an MCMC algorithm for estimation. 
Keywords:  Uncertainty; Endogeneity; Identification; Stochastic Volatility; Bayesian Methods 
JEL:  C11 C32 D81 E32 
Date:  2018–03–29 
URL:  http://d.repec.org/n?u=RePEc:fip:fedcwp:1805&r=ecm 
By:  Timothy B. Armstrong; Michal Kolesár 
Abstract:  We consider estimation and inference on average treatment effects under unconfoundedness, conditional on the realizations of the treatment variable and covariates. We derive finite-sample optimal estimators and confidence intervals (CIs) under the assumption of normal errors when the conditional mean of the outcome variable is constrained only by nonparametric smoothness and/or shape restrictions. When the conditional mean is restricted to be Lipschitz with a large enough bound on the Lipschitz constant, we show that the optimal estimator reduces to a matching estimator with the number of matches set to one. In contrast to conventional CIs, our CIs use a larger critical value that explicitly takes into account the potential bias of the estimator. This is needed for correct coverage in finite samples and, in certain cases, asymptotically. We give conditions under which root-$n$ inference is impossible, and we provide versions of our CIs that are feasible and asymptotically valid with unknown error distribution, including in this non-regular case. We apply our results in a numerical illustration and in an application to the National Supported Work Demonstration. 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1712.04594&r=ecm 
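The single-match estimator referred to in the abstract above is easy to state. The sketch below is a generic one-nearest-neighbor matching estimator of the effect on the treated (the function name and the ATT focus are illustrative choices, not the paper's exact construction).

```python
import numpy as np

def match_one_att(y, d, x):
    """Nearest-neighbor matching with a single match (M = 1): each
    treated unit is paired with the control whose covariates are
    closest in Euclidean distance; the estimate averages the
    treated-minus-matched-control outcome differences."""
    y = np.asarray(y, float)
    d = np.asarray(d, bool)
    x = np.asarray(x, float).reshape(len(y), -1)
    yt, xt = y[d], x[d]          # treated outcomes / covariates
    yc, xc = y[~d], x[~d]        # control outcomes / covariates
    effects = []
    for yi, xi in zip(yt, xt):
        j = np.argmin(((xc - xi) ** 2).sum(axis=1))  # closest control
        effects.append(yi - yc[j])
    return float(np.mean(effects))
```

The abstract's point is that honest CIs for such an estimator must widen the critical value to cover its matching bias, rather than rely on the usual normal critical value.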
By:  Clarke, Damian 
Abstract:  Determining the sign and magnitude of the omitted variable bias (OVB) in regression models is generally challenging when multiple included and omitted variables are present. Here, I describe a convenient OVB formula for treatment effect models with potentially many included and omitted variables. I show that in these circumstances it is simple to infer the direction, and potentially the magnitude, of the bias. In the simplest setting, this OVB formula is based on mutually exclusive binary variables; however, I provide an extension that relaxes the requirement of mutual exclusivity and derives the bias in difference-in-differences-style models with an arbitrary number of included and excluded “treatment” indicators. 
Keywords:  Omitted variable bias; Ordinary Least Squares Regression; Treatment Effects; Difference-in-Differences. 
JEL:  C13 C21 C22 
Date:  2018–03–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:85236&r=ecm 
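As background for the abstract above, the textbook OVB formula it builds on is: in y = Xb + Zg + e, the short regression of y on X alone has coefficient b + (X'X)^{-1}X'Z g, i.e. the bias equals the coefficients from regressing the omitted variables on the included ones, times g. The sketch below verifies this identity numerically; it illustrates the classic formula, not the paper's extension.

```python
import numpy as np

def ovb(x_incl, x_omit, gamma):
    """Classic omitted-variable-bias term (X'X)^{-1} X'Z gamma:
    regress each omitted column on the included regressors and
    weight the resulting coefficients by gamma."""
    delta, *_ = np.linalg.lstsq(x_incl, x_omit, rcond=None)
    return delta @ np.atleast_1d(gamma)
```

The sign of the bias is then read off from the sign of the auxiliary coefficients and of gamma, which is the "direction of the bias" logic the abstract refers to.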
By:  Costa, Manon; Gadat, Sébastien; Gonnord, Pauline; Risser, Laurent 
Abstract:  In this paper we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability theory, this model has a direct application to biological data produced by flow cytometers. In these experiments, biologists measure the fluorescence emission of treated cells and compare it with the cells' natural emission to study the presence of specific molecules on the cells' surface. They observe a signal composed of noise (the natural fluorescence) plus an additional signal related to the quantity of the molecule present on the surface, if any. From a statistical point of view, we aim at inferring the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose an adaptive estimation procedure based on a previous deconvolution procedure introduced by [vEGS08, GvES11]. To estimate both the mixing parameter and the mixing density automatically, we use the Lepskii method, based on the optimal choice of a bandwidth from a bias-variance decomposition. We then derive concentration inequalities for our estimators and obtain convergence rates that are shown to be minimax optimal (up to log terms) in Sobolev classes. Finally, we apply our algorithm to simulated and real biological data. 
Keywords:  Mixture models; Atomic deconvolution; Adaptive kernel estimators; Inverse problems 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:32575&r=ecm 
By:  Florian Huber 
Abstract:  In this paper, we provide a parsimonious means to estimate panel VARs with stochastic volatility. We assume that coefficients associated with domestic lagged endogenous variables arise from a Gaussian mixture model. Shrinkage on the cluster size is introduced through suitable priors on the component weights, and cluster-relevant quantities are identified through novel shrinkage priors. To assess whether dynamic interdependencies between economies are needed, we moreover impose shrinkage priors on the coefficients related to other countries' endogenous variables. Finally, our model controls for static interdependencies by assuming that the reduced form shocks of the model feature a factor stochastic volatility structure. We assess the merits of the proposed approach using synthetic data as well as a real data application. In the empirical application, we forecast Eurozone unemployment rates and show that our proposed approach works well in terms of predictions. 
Date:  2018–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1804.01554&r=ecm 
By:  Alessandro Casini 
Abstract:  We develop a novel continuous-time asymptotic framework for inference on whether the predictive ability of a given forecast model remains stable over time. We formally define forecast instability from the economic forecaster's perspective and highlight that the time duration of the instability bears no relationship with the stable period. Our approach is applicable in forecasting environments involving low-frequency as well as high-frequency macroeconomic and financial variables. As the sampling interval between observations shrinks to zero, the sequence of forecast losses is approximated by a continuous-time stochastic process (i.e., an Ito semimartingale) possessing certain pathwise properties. We build a hypothesis testing problem based on the local properties of the continuous-time limit counterpart of the sequence of losses. The null distribution follows an extreme value distribution. While controlling the statistical size well, our class of test statistics features uniform power over the location of the forecast failure in the sample. The test statistics are designed to have power against general forms of instability and are robust to common forms of non-stationarity such as heteroskedasticity and serial correlation. The gains in power are substantial relative to extant methods, especially when the instability is short-lasting and when it occurs toward the tail of the sample. 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.10883&r=ecm 
By:  Alessandro Casini; Pierre Perron 
Abstract:  For a partial structural change in a linear regression model with a single break, we develop a continuous record asymptotic framework to build inference methods for the break date. We have T observations with a sampling frequency h over a fixed time horizon [0, N], and let T → ∞ with h → 0 while keeping the time span N fixed. We impose very mild regularity conditions on an underlying continuous-time model assumed to generate the data. We consider the least-squares estimate of the break date and establish its consistency and convergence rate. We provide a limit theory for shrinking magnitudes of shifts and locally increasing variances. The asymptotic distribution corresponds to the location of the extremum of a function of the quadratic variation of the regressors and of a centered Gaussian martingale process over a certain time interval. We can account for the asymmetric informational content provided by the pre- and post-break regimes and show how the location of the break and the shift magnitude are key ingredients in shaping the distribution. We consider a feasible version based on plug-in estimates, which provides a very good approximation to the finite sample distribution. We use the concept of Highest Density Region to construct confidence sets. Overall, our method is reliable and delivers accurate coverage probabilities and relatively short average lengths of the confidence sets. Importantly, it does so irrespective of the size of the break. 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.10881&r=ecm 
By:  YuChin Hsu; TaCheng Huang; Haiqing Xu 
Abstract:  Unobserved heterogeneous treatment effects have been emphasized in the policy evaluation literature. This paper proposes a nonparametric test for unobserved heterogeneous treatment effects in a general framework, allowing for self-selection into treatment. The proposed modified Kolmogorov-Smirnov-type test is consistent and simple to implement. Monte Carlo simulations show that our test performs well in finite samples. For illustration, we apply our test to study heterogeneous treatment effects of the Job Training Partnership Act on earnings and the impacts of fertility on family income. 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1803.07514&r=ecm 
By:  Skrobotov Anton (RANEPA) 
Abstract:  In this paper we investigate the bootstrap implementation of the likelihood ratio test for a unit root recently proposed by Jansson and Nielsen (2012). We demonstrate that the likelihood ratio test shows poor finite sample properties under strongly autocorrelated errors, i.e., if the autoregressive or moving average roots are close to 1. The size distortions in these cases are more pronounced than for the bootstrap M and ADF tests. We find that the bootstrap version of the likelihood ratio test (with autoregressive recolouring) performs better than the bootstrap M tests. Moreover, the bootstrap likelihood ratio test shows better finite sample properties than the bootstrap ADF test in some cases. 
Keywords:  likelihood ratio test, unit root test, bootstrap. 
JEL:  C12 C22 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:gai:wpaper:wpaper2018302&r=ecm 
By:  Harin, Alexander 
Abstract:  A forbidden zones theorem is proven in the present article: if some non-zero lower bound exists for the variance of a random variable whose support is located in a finite interval, then non-zero bounds, or forbidden zones, exist for its expectation near the boundaries of the interval. The article is motivated by the need for theoretical support for the practical analysis of the influence of noise that was performed for the purposes of behavioral economics, utility and prospect theories, decision and social sciences, and psychology. The main contributions of the present article are the mathematical support, the approach and the model developed for this analysis, and the successful uniform application of the model in more than one domain. In particular, the approach supposes that subjects decide as if there were some biases in the expectations. Possible general consequences and applications of the theorem for noise and biases of measurement data are considered in a preliminary way. 
Keywords:  probability; variance; noise; bias; utility theory; prospect theory; behavioral economics; decision sciences; measurement 
JEL:  C02 C1 D8 D81 
Date:  2018–03–30 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:85607&r=ecm 
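A concrete form of the forbidden-zones bound follows from the standard inequality Var(X) <= (E[X] - a)(b - E[X]) for X supported on [a, b]: if Var(X) >= s^2, then E[X] must stay at least s^2/(b - a) away from each endpoint. The sketch below states this width and checks it on random discrete distributions; it illustrates the bound, not necessarily the exact constants of the paper's theorem.

```python
import numpy as np

def forbidden_zone_width(a, b, var_lower):
    """If Var(X) >= var_lower for X supported on [a, b], then
    E[X] lies in [a + w, b - w] with w = var_lower / (b - a),
    since Var(X) <= (E[X] - a) * (b - E[X])."""
    return var_lower / (b - a)

def check_random_distributions(n_trials=200, seed=4):
    """Sample random 5-point distributions on [0, 1] and confirm that
    whenever the variance exceeds s2, the mean avoids the zones."""
    rng = np.random.default_rng(seed)
    s2 = 0.05
    w = forbidden_zone_width(0.0, 1.0, s2)
    for _ in range(n_trials):
        pts = rng.uniform(0, 1, 5)           # support points
        p = rng.dirichlet(np.ones(5))        # random probabilities
        mu = p @ pts
        var = p @ (pts - mu) ** 2
        if var >= s2:
            assert w <= mu <= 1 - w          # mean outside forbidden zones
    return w
```

The check passes because the underlying variance inequality (sometimes called the Bhatia-Davis inequality) holds for every distribution on a bounded interval.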
By:  Daniel Kinn 
Abstract:  In portfolio analysis, the traditional approach of replacing population moments with sample counterparts may lead to suboptimal portfolio choices. In this paper I show that selecting asset positions to maximize expected quadratic utility is equivalent to a machine learning (ML) problem in which the asset weights are chosen to minimize out-of-sample mean squared error. It follows that ML specifically targets estimation risk when choosing the asset weights, and that "off-the-shelf" ML algorithms obtain optimal portfolios that take parameter uncertainty into account. Linear regression is a special case of the proposed ML framework, equivalent to the traditional approach. Standard results from the machine learning literature may be used to derive conditions under which ML algorithms improve upon linear regression. Based on simulation studies and several datasets, I find that ML significantly reduces estimation risk compared to the traditional approach and to several shrinkage approaches proposed in the literature. 
Date:  2018–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1804.01764&r=ecm 
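One well-known regression view of portfolio choice (in the spirit of Britten-Jones, 1999) regresses a vector of ones on the return matrix without an intercept; adding a ridge penalty is then one simple "off-the-shelf ML" way to shrink the weights against estimation risk. The sketch below illustrates that general regression-to-portfolio idea only, and is not the paper's framework; the function and parameters are assumptions.

```python
import numpy as np

def ridge_portfolio(R, lam=0.0):
    """Regression view of portfolio weights: solve
    (R'R + lam*I) w = R'1 (regress a vector of ones on returns,
    no intercept) and normalize the weights to sum to one.
    lam = 0 recovers the plain-regression (plug-in style) weights;
    lam > 0 shrinks them, trading bias for estimation variance."""
    T, p = R.shape
    w = np.linalg.solve(R.T @ R + lam * np.eye(p), R.T @ np.ones(T))
    return w / w.sum()
```

Choosing lam by cross-validation on held-out squared error is exactly the kind of out-of-sample MSE targeting the abstract associates with ML.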
By:  Gualdani, Cristina 
Abstract:  The paper provides a framework for partially identifying the parameters governing agents’ preferences in a static game of network formation with interdependent link decisions, complete information, and transferable or non-transferable payoffs. The proposed methodology attenuates the computational difficulties arising at the inference stage (due to the huge number of moment inequalities characterising the sharp identified set and the impossibility of brute-force calculation of the integrals entering them) by decomposing the network formation game into local games which have a structure similar to entry games and are such that the network formation game is in equilibrium if and only if each local game is in equilibrium. As an empirical illustration of the developed procedure, the paper estimates firms’ incentives for having executives sitting on the boards of competitors, using Italian data. 
Keywords:  network formation; pure strategy Nash equilibrium; pairwise stability; multiple equilibria; partial identification; moment inequalities; local games; board interlocks 
JEL:  C1 C62 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:32550&r=ecm 
By:  Michel Fliess (AL.I.E.N., ALgèbre pour Identification & Estimation Numériques; LIX, Laboratoire d'informatique de l'École polytechnique, CNRS, École polytechnique); Cédric Join (AL.I.E.N.; CRAN, Centre de Recherche en Automatique de Nancy, Université de Lorraine, CNRS; NON-A, Non-Asymptotic estimation for online systems, Inria Lille - Nord Europe, CRIStAL, Université de Lille); Cyril Voyant (SPE, Sciences pour l'environnement, Université Pascal Paoli, CNRS; Centre hospitalier d'Ajaccio) 
Abstract:  Short-term forecasts and risk management for photovoltaic energy are studied via a new standpoint on time series: a result published by P. Cartier and Y. Perrin in 1995 permits, without any probabilistic and/or statistical assumption, an additive decomposition of a time series into its mean, or trend, and quick fluctuations around it. The forecasts are achieved by applying new estimation techniques and some extrapolation procedures where the classic concept of "seasonalities" is fundamental. The quick fluctuations make it easy to define prediction bands around the mean. Several convincing computer simulations with real data, for which the Gaussian probability distribution law is not satisfied, are provided and discussed. The concrete implementation of our setting requires neither tedious machine learning nor large historical data, contrary to many other viewpoints. 
Keywords:  mean, quick fluctuations, time series, prediction bands, short-term forecasts, solar energy, persistence, risk, volatility, normality tests 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal01736518&r=ecm 
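The decomposition described above (trend plus quick fluctuations, with bands built from the fluctuations) can be sketched with a plain moving average standing in for the paper's algebraic estimation techniques. Everything below, including the window length and the band multiplier k, is an illustrative assumption.

```python
import numpy as np

def trend_and_bands(y, window=21, k=2.0):
    """Additive decomposition sketch: a centered moving average as the
    'mean' (trend), the residual as the 'quick fluctuations', and
    prediction bands of half-width k times the fluctuations' standard
    deviation around the trend (edge values are pad-extended)."""
    kernel = np.ones(window) / window
    pad = window // 2
    ypad = np.pad(y, pad, mode="edge")          # handle the boundaries
    trend = np.convolve(ypad, kernel, mode="valid")
    fluct = y - trend                            # quick fluctuations
    half = k * fluct.std()
    return trend, trend - half, trend + half
```

Forecasts in this spirit extrapolate the trend (e.g., via the previous day's profile for a solar series) and carry the bands along with it.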
By:  Li, Weiming; Gao, Jing; Li, Kunpeng; Yao, Qiwei 
Abstract:  Volatility, represented in the form of conditional heteroscedasticity, plays an important role in controlling and forecasting risks in various financial operations including asset pricing, portfolio allocation, and hedging futures. However, modeling and forecasting multi-dimensional conditional heteroscedasticity are technically challenging. As the volatilities of many financial assets are often driven by a few common and latent factors, we propose in this paper a dimension reduction method to model a multivariate volatility process and to estimate a lower-dimensional space, to be called the volatility space, within which the dynamics of the multivariate volatility process is confined. The new method is simple to use, as technically it boils down to an eigenanalysis for a nonnegative definite matrix. Hence it is applicable to cases where the number of assets concerned is in the order of thousands (using an ordinary PC/laptop). On the other hand, the model has the capability to cater for complex conditional heteroscedasticity behavior for multi-dimensional processes. Some asymptotic properties of the new method are established. We further illustrate the new method using both simulated and real data examples. 
Keywords:  Eigenanalysis; latent factors; multidimensional volatility process; volatility space 
JEL:  C1 L81 
Date:  2016–10–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:68121&r=ecm 
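To illustrate the eigenanalysis idea (not the paper's exact construction), one simple variant builds a nonnegative definite matrix from lagged autocovariances of squared returns and takes its leading eigenvectors as a basis for the common volatility directions. The lag count, the use of squared returns, and the simulation design below are all illustrative assumptions.

```python
import numpy as np

def volatility_space(y, n_lags=5, n_factors=1):
    """Eigenanalysis sketch: form M = sum_k C_k C_k', where C_k is the
    lag-k sample autocovariance matrix of the (demeaned) squared
    returns, and return the leading eigenvectors of M as a basis for
    the lower-dimensional space driving common volatility dynamics."""
    s = y ** 2
    s = s - s.mean(axis=0)
    T, p = s.shape
    M = np.zeros((p, p))
    for k in range(1, n_lags + 1):
        C = s[k:].T @ s[:-k] / (T - k)   # lag-k autocovariance of s
        M += C @ C.T                     # symmetric, nonneg. definite
    vals, vecs = np.linalg.eigh(M)       # ascending eigenvalues
    return vecs[:, ::-1][:, :n_factors]  # leading eigenvectors
```

Because M is a p x p matrix however long the sample, the eigen-decomposition is what keeps the method feasible for thousands of assets.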
By:  Öberg, Stefan (Department of Economic History, School of Business, Economics and Law, Göteborg University) 
Abstract:  Instrumental variables based on twin births are a well-known and widespread method to find exogenous variation in the number of children when studying effects on siblings or parents. This paper argues that there are serious problems with all versions of these instruments. Many of these problems have arisen because insufficient care has been given to defining the estimated causal effect. This paper discusses this definition and then applies the potential outcomes framework to show that instrumental variables based on twin births violate the exclusion restriction, the independence assumption, and one part of the stable unit treatment value assumption. These violations, as well as the characteristics of the populations studied, have contributed to hiding any true effect of the number of children. It is time to stop using these instrumental variables and to return to these important questions using other methods. 
Keywords:  causal inference; natural experiments; local average treatment effect; complier average causal effect; Rubin’s causal model; quantity–quality tradeoff; family size 
JEL:  C21 C26 J13 
Date:  2018–04–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunhis:0023&r=ecm 