Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics
2018-04-16
Generalized Laplace Inference in Multiple Change-Points Models
http://d.repec.org/n?u=RePEc:arx:papers:1803.10871&r=ecm
Under the classical long-span asymptotic framework, we develop a class of Generalized Laplace (GL) inference methods for the change-point dates in a linear time series regression model with multiple structural changes, as analyzed in, e.g., Bai and Perron (1998). The GL estimator is defined by an integration-based rather than an optimization-based method and relies on the least-squares criterion function. It is interpreted as a classical (non-Bayesian) estimator, and the proposed inference methods retain a frequentist interpretation. Since inference about the change-point dates is a nonstandard statistical problem, Laplace's original insight of interpreting a certain transformation of a least-squares criterion function as a statistical belief over the parameter space provides a better approximation to the uncertainty about the change-points in the data than existing methods. Simulations show that the GL estimator is in general more precise than the OLS estimator. On the theoretical side, depending on an input (smoothing) parameter, the class of GL estimators exhibits a dual limiting distribution, namely the classical shrinkage asymptotic distribution of Bai and Perron (1998) or a Bayes-type asymptotic distribution.
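As a toy illustration of the integration-based construction (a sketch only: the mean-shift design, the smoothing parameter `gamma`, and the trimming are illustrative assumptions, not the paper's specification), candidate break dates can be weighted by a Laplace-type transform of the least-squares criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated mean-shift series: a single break at t = 60 out of T = 100.
T, true_break = 100, 60
y = np.concatenate([rng.normal(0.0, 1.0, true_break),
                    rng.normal(2.0, 1.0, T - true_break)])

def ssr(y, tau):
    """Least-squares criterion: SSR of a one-break mean-shift fit at tau."""
    a, b = y[:tau], y[tau:]
    return ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()

taus = np.arange(5, T - 5)                    # trim the sample edges
Q = np.array([ssr(y, t) for t in taus])

# Laplace-type quasi-posterior weights over candidate break dates;
# gamma plays the role of the input (smoothing) parameter.
gamma = 1.0
w = np.exp(-(Q - Q.min()) / (2.0 * gamma))
w /= w.sum()

gl_break = (taus * w).sum()                   # integration-based (GL) estimate
ols_break = taus[Q.argmin()]                  # optimization-based (OLS) estimate
print(gl_break, ols_break)                    # both should fall near t = 60
```

With a smaller `gamma` the weights concentrate on the criterion's minimizer and the GL estimate approaches the OLS break date; larger values average more broadly.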
Alessandro Casini
Pierre Perron
2018-03
Estimating conditional means with heavy tails
http://d.repec.org/n?u=RePEc:ehl:lserod:73082&r=ecm
When a conditional distribution has infinite variance, commonly employed kernel smoothing methods such as local polynomial estimators of the conditional mean admit non-normal limiting distributions (Hall et al., 2002). This complicates inference, as conventional tests and confidence intervals based on asymptotic normality are no longer applicable and the standard bootstrap method often fails. By modelling the middle part of the data nonparametrically and the tail parts parametrically based on extreme value theory, this paper proposes a new estimation method for conditional means that yields asymptotically normal estimators even when the conditional distribution has infinite variance. Consequently, the standard bootstrap method can be employed to construct, for example, confidence intervals regardless of tail heaviness. The same idea can be applied to estimating the difference between a conditional mean and a conditional median, a useful measure in exploratory data analysis.
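The middle/tail splitting idea is easiest to see in the simpler unconditional case. The sketch below (an illustration under the assumption of an exact Pareto tail; the number of tail order statistics `k` is an arbitrary choice) averages the bulk of the sample nonparametrically and replaces the tail average by a parametric expression built on a Hill estimate of the tail index:

```python
import numpy as np

rng = np.random.default_rng(4)

# Heavy-tailed sample: standard Pareto with tail index alpha = 1.5,
# so the variance is infinite but the mean alpha/(alpha - 1) = 3 exists.
alpha, n = 1.5, 100_000
x = (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha)

# Split at an upper threshold: treat the "middle" nonparametrically and
# the tail parametrically via a Hill estimate of the tail index.
k = 1000                                  # number of upper order statistics
xs = np.sort(x)
u = xs[-k]                                # tail threshold
hill = k / np.log(xs[-k:] / u).sum()      # Hill estimator of alpha
middle_mean = xs[:-k].mean()              # nonparametric part

# For a Pareto tail, E[X | X > u] = u * alpha / (alpha - 1).
tail_mean = u * hill / (hill - 1.0)
est = (1 - k / n) * middle_mean + (k / n) * tail_mean
print(est)                                # close to the true mean 3
```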
Peng, Liang
Yao, Qiwei
Asymptotic normality; Conditional mean; Extreme value theory; Heavy tail
2017-08-01
DSGE Models with Observation-Driven Time-Varying Parameters
http://d.repec.org/n?u=RePEc:tin:wpaper:20180030&r=ecm
This paper proposes a novel approach to introducing time variation in the structural parameters of DSGE models. Structural parameters are allowed to evolve over time via an observation-driven updating equation. The resulting DSGE model can be estimated by maximum likelihood without the need for time-consuming simulation-based methods. An application to a DSGE model with time-varying volatility of structural shocks is presented. The results indicate a significant improvement in forecasting performance.
Giovanni Angelini
Paolo Gorgi
DSGE models; score-driven models; time-varying parameters
2018-03-30
Consistent Estimation of Linear Regression Models Using Matched Data
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/18063&r=ecm
Economists often use matched samples, especially when dealing with earnings data where a number of missing observations need to be imputed. In this paper, we demonstrate that the ordinary least squares estimator of the linear regression model using matched samples is inconsistent and has a nonstandard convergence rate to its probability limit. If only a few variables are used to impute the missing data, then it is possible to correct for the bias. We propose two semiparametric bias-corrected estimators and explore their asymptotic properties. The estimators have an indirect-inference interpretation and they attain the parametric convergence rate if the number of matching variables is no greater than three. Monte Carlo simulations confirm that the bias correction works very well in such cases.
Hirukawa, Masayuki
Prokhorov, Artem
measurement error bias; matching estimation; linear regression; indirect inference; bias correction
2017-03-16
Spatial panel data models with structural change
http://d.repec.org/n?u=RePEc:pra:mprapa:85388&r=ecm
Spatial panel data models are widely used in empirical studies. Existing theory for spatial models has so far largely confined the analysis to the assumption of parameter stability. This is unduly restrictive, since a large number of studies have documented the presence of structural changes in the relationships between economic variables. This paper proposes and studies spatial panel data models with structural change, estimated by the quasi-maximum likelihood method. Both static and dynamic models are considered, under both large-$T$ and fixed-$T$ setups. We provide a relatively complete asymptotic theory for the maximum likelihood estimators, including consistency, convergence rates and limiting distributions for the regression coefficients, the timing of the structural change and the error variance. We also study hypothesis testing for the presence of structural change and propose three sup-type test statistics. Monte Carlo simulation results are consistent with the theory and show that the maximum likelihood estimators have good finite sample performance.
Li, Kunpeng
Spatial panel data models, structural changes, hypothesis testing, asymptotic theory.
2018-03-21
Should We Condition on the Test for Pre-trends in Difference-in-Difference Designs?
http://d.repec.org/n?u=RePEc:arx:papers:1804.01208&r=ecm
The common practice in difference-in-differences (DiD) designs is to check for parallel trends prior to treatment assignment, yet typical estimation and inference do not account for the fact that this test has occurred. I analyze the properties of the traditional DiD estimator conditional on having passed (i.e., not rejected) the test for parallel pre-trends. When the DiD design is valid and the test for pre-trends confirms it, the typical DiD estimator is unbiased, but traditional standard errors are overly conservative. Additionally, there exists an alternative unbiased estimator that is more efficient than the traditional DiD estimator under parallel trends. However, when there is a non-zero pre-trend in the population but we fail to reject the hypothesis of parallel pre-trends, the DiD estimator is generally biased relative to the population DiD coefficient. Moreover, if the trend is monotone, then under reasonable assumptions the bias from conditioning exacerbates the bias relative to the true treatment effect. I propose new estimation and inference procedures that account for the test for parallel trends, and compare their performance to that of the traditional estimator in a Monte Carlo simulation.
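The conditional-bias phenomenon is easy to reproduce in a stylized Monte Carlo (illustrative numbers, not the paper's design): with a true non-zero trend and a zero treatment effect, the DiD estimate is biased unconditionally, and conditioning on passing the pre-trends test makes the bias worse:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylized Monte Carlo: a non-zero linear trend in the treated group,
# zero treatment effect, and a pre-trends t-test applied before estimation.
n_sims, n, trend, sigma = 4000, 50, 0.3, 1.0
did_all, did_passed = [], []

for _ in range(n_sims):
    # Treated-minus-control mean differences in periods t = -1, 0 (pre), 1 (post),
    # each group mean based on n observations with standard deviation sigma.
    diff = rng.normal(trend * np.array([-1.0, 0.0, 1.0]),
                      sigma * np.sqrt(2.0 / n))
    # Pre-trends test: is the pre-period change in differences significant?
    stat = (diff[1] - diff[0]) / (sigma * np.sqrt(4.0 / n))
    did = diff[2] - diff[1]              # standard DiD estimate
    did_all.append(did)
    if abs(stat) < 1.96:                 # test "passed" (not rejected)
        did_passed.append(did)

# The unconditional bias equals the trend; conditioning on passing shifts
# the estimate further from the true (zero) treatment effect.
print(np.mean(did_all), np.mean(did_passed))
```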
Jonathan Roth
2018-04
Moment-based tests under parameter uncertainty
http://d.repec.org/n?u=RePEc:ide:wpaper:32565&r=ecm
This paper considers moment-based tests applied to estimated quantities. We propose a general class of transforms of moments to handle the parameter uncertainty problem. The construction requires only a linear correction that can be implemented in-sample and remains valid for some extended families of non-smooth moments. We reemphasize the attractiveness of working with robust moments, which lead to testing procedures that do not depend on the estimator. Furthermore, no correction is needed when considering the implied test statistic in the out-of-sample case. We apply our methodology to various examples with an emphasis on the backtesting of value-at-risk forecasts.
Bontemps, Christian
moment-based tests; parameter uncertainty; out-of-sample; discrete distributions; value-at-risk; backtesting
2018-03
Endogenous Uncertainty
http://d.repec.org/n?u=RePEc:fip:fedcwp:1805&r=ecm
We show that macroeconomic uncertainty can be considered as exogenous when assessing its effects on the U.S. economy. Instead, financial uncertainty can at least in part arise as an endogenous response to some macroeconomic developments, and overlooking this channel leads to distortions in the estimated effects of financial uncertainty shocks on the economy. We obtain these empirical findings with an econometric model that simultaneously allows for contemporaneous effects of both uncertainty shocks on economic variables and of economic shocks on uncertainty. While the traditional econometric approaches do not allow us to simultaneously identify both of these transmission channels, we achieve identification by exploiting the heteroskedasticity of macroeconomic data. Methodologically, we develop a structural VAR with time-varying volatility in which one of the variables (the uncertainty measure) impacts both the mean and the variance of the other variables. We provide conditional posterior distributions for this model, which is a substantial extension of the popular leverage model of Jacquier, Polson, and Rossi (2004), and provide an MCMC algorithm for estimation.
Carriero, Andrea
Clark, Todd E.
Marcellino, Massimiliano
Uncertainty; Endogeneity; Identification; Stochastic Volatility; Bayesian Methods
2018-03-29
Finite-Sample Optimal Estimation and Inference on Average Treatment Effects Under Unconfoundedness
http://d.repec.org/n?u=RePEc:arx:papers:1712.04594&r=ecm
We consider estimation and inference on average treatment effects under unconfoundedness conditional on the realizations of the treatment variable and covariates. We derive finite-sample optimal estimators and confidence intervals (CIs) under the assumption of normal errors when the conditional mean of the outcome variable is constrained only by nonparametric smoothness and/or shape restrictions. When the conditional mean is restricted to be Lipschitz with a large enough bound on the Lipschitz constant, we show that the optimal estimator reduces to a matching estimator with the number of matches set to one. In contrast to conventional CIs, our CIs use a larger critical value that explicitly takes into account the potential bias of the estimator. It is needed for correct coverage in finite samples and, in certain cases, asymptotically. We give conditions under which root-$n$ inference is impossible, and we provide versions of our CIs that are feasible and asymptotically valid with unknown error distribution, including in this non-regular case. We apply our results in a numerical illustration and in an application to the National Supported Work Demonstration.
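A generic single-match estimator (a sketch on simulated data, not the authors' finite-sample optimal construction) looks as follows: each unit's missing potential outcome is imputed from its nearest covariate neighbour in the other treatment arm:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: outcome depends smoothly on a scalar covariate x; the true
# treatment effect is tau = 1.
n, tau = 200, 1.0
x = rng.uniform(0.0, 1.0, n)
d = rng.integers(0, 2, n)
y = np.sin(2.0 * x) + tau * d + rng.normal(0.0, 0.3, n)

def matching_ate(x, d, y):
    """ATE estimate by matching with a single match (M = 1)."""
    x1, y1 = x[d == 1], y[d == 1]
    x0, y0 = x[d == 0], y[d == 0]
    # Impute each unit's missing potential outcome from the nearest
    # covariate neighbour in the other treatment arm.
    y0_hat = y0[np.abs(x1[:, None] - x0[None, :]).argmin(axis=1)]
    y1_hat = y1[np.abs(x0[:, None] - x1[None, :]).argmin(axis=1)]
    return np.concatenate([y1 - y0_hat, y1_hat - y0]).mean()

print(matching_ate(x, d, y))              # close to tau = 1
```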
Timothy B. Armstrong
Michal Kolesár
2017-12
A Convenient Omitted Variable Bias Formula for Treatment Effect Models
http://d.repec.org/n?u=RePEc:pra:mprapa:85236&r=ecm
Determining the sign and magnitude of the omitted variable bias (OVB) in regression models is generally challenging when multiple included and omitted variables are present. Here, I describe a convenient OVB formula for treatment effect models with potentially many included and omitted variables. I show that in these circumstances it is simple to infer the direction, and potentially the magnitude, of the bias. In the simplest setting this OVB is based on mutually exclusive binary variables; however, I provide an extension that relaxes the mutual exclusivity requirement and derive the bias in difference-in-differences-style models with an arbitrary number of included and excluded “treatment” indicators.
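The algebra behind such formulas can be checked numerically in the simplest one-omitted-regressor case (illustrative data, not the paper's multi-variable setting): the short-regression slope equals the long-regression slope plus the omitted variable's coefficient times the slope from the auxiliary regression of the omitted variable on the included one:

```python
import numpy as np

rng = np.random.default_rng(8)

# One included binary treatment d and one omitted variable z that is
# correlated with d; y depends on both.
n = 10_000
d = rng.integers(0, 2, n).astype(float)
z = 0.5 * d + rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * d + 3.0 * z + rng.normal(0.0, 1.0, n)

X_short = np.column_stack([np.ones(n), d])          # z omitted
X_long = np.column_stack([np.ones(n), d, z])        # z included
b_short = np.linalg.lstsq(X_short, y, rcond=None)[0]
b_long = np.linalg.lstsq(X_long, y, rcond=None)[0]
delta = np.linalg.lstsq(X_short, z, rcond=None)[0][1]   # slope of z on d

# Exact in-sample OVB identity: short slope = long slope + (coef on z) * delta.
print(b_short[1], b_long[1] + b_long[2] * delta)
```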
Clarke, Damian
omitted variable bias; ordinary least squares regression; treatment effects; difference-in-differences
2018-03-10
Cytometry inference through adaptive atomic deconvolution
http://d.repec.org/n?u=RePEc:tse:wpaper:32575&r=ecm
In this paper we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability theory, this model has a direct application to biological data produced by flow cytometers. In these experiments, biologists measure the fluorescence emission of treated cells and compare it with the cells' natural emission to study the presence of specific molecules on the cell surface. They observe a signal composed of noise (the natural fluorescence) plus, if present, an additional signal related to the quantity of the molecule on the surface. From a statistical point of view, we aim at inferring the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose an adaptive estimation procedure based on a deconvolution procedure introduced earlier by [vEGS08, GvES11]. To estimate both the mixing parameter and the mixing density automatically, we use the Lepskii method, based on an optimal choice of bandwidth via a bias-variance decomposition. We then derive concentration inequalities for our estimators and obtain convergence rates that are shown to be minimax optimal (up to log terms) over Sobolev classes. Finally, we apply our algorithm to simulated and real biological data.
Costa, Manon
Gadat, Sébastien
Gonnord, Pauline
Risser, Laurent
Mixture models; Atomic deconvolution; Adaptive kernel estimators; Inverse problems
2018-03
Dealing with cross-country heterogeneity in panel VARs using finite mixture models
http://d.repec.org/n?u=RePEc:arx:papers:1804.01554&r=ecm
In this paper, we provide a parsimonious means to estimate panel VARs with stochastic volatility. We assume that coefficients associated with domestic lagged endogenous variables arise from a Gaussian mixture model. Shrinkage on the cluster size is introduced through suitable priors on the component weights and cluster-relevant quantities are identified through novel shrinkage priors. To assess whether dynamic interdependencies between economies are needed, we moreover impose shrinkage priors on the coefficients related to other countries' endogenous variables. Finally, our model controls for static interdependencies by assuming that the reduced form shocks of the model feature a factor stochastic volatility structure. We assess the merits of the proposed approach by using synthetic data as well as a real data application. In the empirical application, we forecast Eurozone unemployment rates and show that our proposed approach works well in terms of predictions.
Florian Huber
2018-04
Tests for Forecast Instability and Forecast Failure under a Continuous Record Asymptotic Framework
http://d.repec.org/n?u=RePEc:arx:papers:1803.10883&r=ecm
We develop a novel continuous-time asymptotic framework for inference on whether the predictive ability of a given forecast model remains stable over time. We formally define forecast instability from the economic forecaster's perspective and highlight that the duration of the instability bears no necessary relationship to the length of the stable period. Our approach is applicable in forecasting environments involving low-frequency as well as high-frequency macroeconomic and financial variables. As the sampling interval between observations shrinks to zero, the sequence of forecast losses is approximated by a continuous-time stochastic process (i.e., an Ito semimartingale) possessing certain pathwise properties. We build a hypothesis testing problem based on the local properties of the continuous-time limit counterpart of the sequence of losses. The null distribution follows an extreme value distribution. While controlling statistical size well, our class of test statistics features uniform power over the location of the forecast failure in the sample. The test statistics are designed to have power against general forms of instability and are robust to common forms of non-stationarity such as heteroskedasticity and serial correlation. The gains in power are substantial relative to extant methods, especially when the instability is short-lasting and occurs toward the tail of the sample.
Alessandro Casini
2018-03
Continuous Record Asymptotics for Structural Change Models
http://d.repec.org/n?u=RePEc:arx:papers:1803.10881&r=ecm
For a partial structural change in a linear regression model with a single break, we develop a continuous record asymptotic framework to build inference methods for the break date. We have T observations with sampling frequency h over a fixed time horizon [0, N], and let T → ∞ with h → 0 while keeping the time span N fixed. We impose very mild regularity conditions on an underlying continuous-time model assumed to generate the data. We consider the least-squares estimate of the break date and establish its consistency and convergence rate. We provide a limit theory for shrinking magnitudes of shifts and locally increasing variances. The asymptotic distribution corresponds to the location of the extremum of a function of the quadratic variation of the regressors and of a centered Gaussian martingale process over a certain time interval. We can account for the asymmetric informational content provided by the pre- and post-break regimes and show how the location of the break and the shift magnitude are key ingredients in shaping the distribution. We consider a feasible version based on plug-in estimates, which provides a very good approximation to the finite sample distribution. We use the concept of Highest Density Region to construct confidence sets. Overall, our method is reliable and delivers accurate coverage probabilities and confidence sets with relatively short average length. Importantly, it does so irrespective of the size of the break.
Alessandro Casini
Pierre Perron
2018-03
Testing for unobserved heterogeneous treatment effects in a nonseparable model with endogenous selection
http://d.repec.org/n?u=RePEc:arx:papers:1803.07514&r=ecm
Unobserved heterogeneous treatment effects have been emphasized in the policy evaluation literature. This paper proposes a nonparametric test for unobserved heterogeneous treatment effects in a general framework, allowing for self-selection into treatment. The proposed modified Kolmogorov-Smirnov-type test is consistent and simple to implement. Monte Carlo simulations show that our test performs well in finite samples. For illustration, we apply our test to study heterogeneous treatment effects of the Job Training Partnership Act on earnings and the impact of fertility on family income.
Yu-Chin Hsu
Ta-Cheng Huang
Haiqing Xu
2018-03
On Bootstrap Implementation of Likelihood Ratio Test for a Unit Root
http://d.repec.org/n?u=RePEc:gai:wpaper:wpaper-2018-302&r=ecm
In this paper we investigate the bootstrap implementation of the likelihood ratio test for a unit root recently proposed by Jansson and Nielsen (2012). We demonstrate that the likelihood ratio test has poor finite sample properties under strongly autocorrelated errors, i.e., if the autoregressive or moving average roots are close to -1. The size distortions in these cases are more pronounced than for the bootstrap M and ADF tests. We find that the bootstrap version of the likelihood ratio test (with autoregressive recolouring) performs better than the bootstrap M tests. Moreover, the bootstrap likelihood ratio test shows better finite sample properties than the bootstrap ADF test in some cases.
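The recolouring idea can be sketched for a plain ADF statistic (an illustration only, not the Jansson-Nielsen likelihood ratio test; the lag orders and the MA design are arbitrary choices): fit an autoregression to the differences under the null, resample its residuals, refilter, and re-integrate:

```python
import numpy as np

rng = np.random.default_rng(5)

def adf_tstat(y, p=4):
    """t-statistic on y_{t-1} in an ADF regression with p lagged differences."""
    dy = np.diff(y)
    cols = [np.ones(len(dy) - p), y[p:-1]]
    for j in range(1, p + 1):
        cols.append(dy[p - j:-j])
    X = np.column_stack(cols)
    z = dy[p:]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    e = z - X @ beta
    s2 = e @ e / (len(z) - X.shape[1])
    V = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(V[1, 1])

# A unit-root process with a strongly negative MA root, where the
# asymptotic test is known to over-reject.
n = 200
eps = rng.normal(0.0, 1.0, n + 1)
y = np.cumsum(eps[1:] - 0.8 * eps[:-1])
t_obs = adf_tstat(y)

# Bootstrap with autoregressive "recolouring": fit an AR(q) to the
# differences, resample its residuals i.i.d., refilter, and re-integrate.
dy, q, B = np.diff(y), 4, 199
A = np.column_stack([dy[q - j:len(dy) - j] for j in range(1, q + 1)])
phi, *_ = np.linalg.lstsq(A, dy[q:], rcond=None)
resid = dy[q:] - A @ phi
boot = []
for _ in range(B):
    e = rng.choice(resid, size=n, replace=True)
    du = np.zeros(n)
    du[:q] = e[:q]                        # crude start-up values
    for t in range(q, n):
        du[t] = du[t - q:t][::-1] @ phi + e[t]
    boot.append(adf_tstat(np.cumsum(du)))
pval = np.mean(np.array(boot) <= t_obs)   # left-tailed test
print(round(pval, 2))
```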
Skrobotov, Anton
likelihood ratio test, unit root test, bootstrap.
2018
Forbidden zones and biases for the expectation of a random variable. Version 2
http://d.repec.org/n?u=RePEc:pra:mprapa:85607&r=ecm
A forbidden zones theorem is proven in the present article: if a non-zero lower bound exists for the variance of a random variable whose support is located in a finite interval, then non-zero bounds, or forbidden zones, exist for its expectation near the boundaries of the interval. The article is motivated by the need for theoretical support for the practical analysis of the influence of noise carried out in behavioral economics, utility and prospect theories, decision and social sciences, and psychology. The four main contributions of the present article are the mathematical support, the approach and the model that are developed for this analysis, and the successful uniform application of the model in more than one domain. In particular, the approach supposes that subjects decide as if there were some biases in the expectations. Possible general consequences and applications of the theorem for noise and for biases in measurement data are considered in a preliminary way.
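The flavour of the theorem follows from a standard fact: for support in [0, 1], the maximal variance at a given mean mu is mu(1 - mu), attained by a two-point distribution on {0, 1}. A minimal numerical sketch (the variance bound value is an illustrative choice):

```python
import numpy as np

# If Var(X) >= sigma2 and X is supported on [0, 1], then since the maximal
# variance at mean mu is mu * (1 - mu), we need mu * (1 - mu) >= sigma2.
# Solving mu * (1 - mu) = sigma2 gives the width of the forbidden zones.
sigma2 = 0.09
lo = (1.0 - np.sqrt(1.0 - 4.0 * sigma2)) / 2.0
hi = 1.0 - lo
print(lo, hi)   # the expectation cannot lie in [0, lo) or (hi, 1]
```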
Harin, Alexander
probability; variance; noise; bias; utility theory; prospect theory; behavioral economics; decision sciences; measurement
2018-03-30
Reducing Estimation Risk in Mean-Variance Portfolios with Machine Learning
http://d.repec.org/n?u=RePEc:arx:papers:1804.01764&r=ecm
In portfolio analysis, the traditional approach of replacing population moments with sample counterparts may lead to suboptimal portfolio choices. In this paper I show that selecting asset positions to maximize expected quadratic utility is equivalent to a machine learning (ML) problem in which the asset weights are chosen to minimize out-of-sample mean squared error. It follows that ML specifically targets estimation risk when choosing the asset weights, and that "off-the-shelf" ML algorithms obtain optimal portfolios taking parameter uncertainty into account. Linear regression is a special case of the proposed ML framework, equivalent to the traditional approach. Standard results from the machine learning literature can be used to derive conditions under which ML algorithms improve upon linear regression. Based on simulation studies and several datasets, I find that ML significantly reduces estimation risk compared to the traditional approach and to several shrinkage approaches proposed in the literature.
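One well-known regression formulation of mean-variance weights (Britten-Jones, 1999) makes the regression/ML analogy concrete. The sketch below is an illustration under that formulation, not the paper's exact construction, and the ridge penalty `lam` is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated returns for k = 5 assets over T = 200 periods.
T, k = 200, 5
mu = np.linspace(0.02, 0.06, k)
R = rng.normal(mu, 0.1, size=(T, k))

# Regress a vector of ones on returns with no intercept: the OLS
# coefficients are proportional to mean-variance portfolio weights.
ones = np.ones(T)
b_ols, *_ = np.linalg.lstsq(R, ones, rcond=None)
w_ols = b_ols / b_ols.sum()

# An off-the-shelf ridge penalty shrinks the coefficients, directly
# targeting estimation risk in the weights.
lam = 1.0
b_ridge = np.linalg.solve(R.T @ R + lam * np.eye(k), R.T @ ones)
w_ridge = b_ridge / b_ridge.sum()

print(np.round(w_ols, 3), np.round(w_ridge, 3))
```

The penalty `lam` would in practice be chosen by cross-validation, which is exactly where standard ML machinery enters.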
Daniel Kinn
2018-04
An Econometric Model of Network Formation with an Application to Board Interlocks between Firms
http://d.repec.org/n?u=RePEc:tse:wpaper:32550&r=ecm
The paper provides a framework for partially identifying the parameters governing agents’ preferences in a static game of network formation with interdependent link decisions, complete information, and transferable or non-transferable payoffs. The proposed methodology attenuates the computational difficulties arising at the inference stage, due to the huge number of moment inequalities characterising the sharp identified set and the impossibility of calculating by brute force the integrals entering them, by decomposing the network formation game into local games which have a structure similar to entry games and are such that the network formation game is in equilibrium if and only if each local game is in equilibrium. As an empirical illustration of the developed procedure, the paper estimates firms’ incentives for having executives sit on the boards of competitors, using Italian data.
Gualdani, Cristina
network formation; pure strategy Nash equilibrium; pairwise stability; multiple equilibria; partial identification; moment inequalities; local games; board interlocks
2018-03
Prediction bands for solar energy: New short-term time series forecasting techniques
http://d.repec.org/n?u=RePEc:hal:journl:hal-01736518&r=ecm
Short-term forecasting and risk management for photovoltaic energy are studied via a new standpoint on time series: a result published by P. Cartier and Y. Perrin in 1995 permits, without any probabilistic or statistical assumption, an additive decomposition of a time series into its mean, or trend, and quick fluctuations around it. The forecasts are achieved by applying new estimation techniques and extrapolation procedures in which the classic concept of "seasonalities" is fundamental. The quick fluctuations make it easy to define prediction bands around the mean. Several convincing computer simulations on real data, for which the Gaussian probability law is not satisfied, are provided and discussed. The concrete implementation of our setting requires neither tedious machine learning nor large historical data sets, contrary to many other viewpoints.
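A crude stand-in for the decomposition (a sketch: the moving-average trend, window length, and quantile levels are illustrative choices, not the Cartier-Perrin construction) shows how fluctuation quantiles yield distribution-free prediction bands:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "solar" series: daily seasonality plus heavy-tailed (non-Gaussian) noise.
t = np.arange(400)
y = np.clip(np.sin(2.0 * np.pi * t / 24), 0, None) + 0.1 * rng.standard_t(3, 400)

# Stand-in decomposition: a centred moving average as the trend, the
# residuals as the quick fluctuations.
w = 9
trend = np.convolve(y, np.ones(w) / w, mode="same")
fluct = y - trend

# Prediction band around the trend from empirical fluctuation quantiles,
# with no Gaussian assumption anywhere.
lo, hi = np.quantile(fluct, [0.05, 0.95])
coverage = np.mean((y >= trend + lo) & (y <= trend + hi))
print(round(coverage, 2))                 # roughly 0.90 by construction
```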
Michel Fliess
Cédric Join
Cyril Voyant
mean, quick fluctuations, time series, prediction bands, short-term forecasts, solar energy, persistence, risk, volatility, normality tests
2018
Modelling multivariate volatilities via latent common factors
http://d.repec.org/n?u=RePEc:ehl:lserod:68121&r=ecm
Volatility, represented in the form of conditional heteroscedasticity, plays an important role in controlling and forecasting risks in various financial operations including asset pricing, portfolio allocation, and hedging futures. However, modeling and forecasting multi-dimensional conditional heteroscedasticity are technically challenging. As the volatilities of many financial assets are often driven by a few common and latent factors, we propose in this paper a dimension reduction method to model a multivariate volatility process and to estimate a lower-dimensional space, to be called the volatility space, within which the dynamics of the multivariate volatility process is confined. The new method is simple to use, as technically it boils down to an eigenanalysis of a nonnegative definite matrix. Hence it is applicable to cases when the number of assets concerned is of the order of thousands (using an ordinary PC/laptop). On the other hand, the model has the capability to cater for complex conditional heteroscedasticity behavior for multi-dimensional processes. Some asymptotic properties for the new method are established. We further illustrate the new method using both simulated and real data examples.
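A toy version of the eigenanalysis (illustrative; the particular nonnegative definite matrix below is an assumption, not necessarily the authors' construction) recovers a one-dimensional volatility space from series whose volatilities share a single factor:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 3 return series whose volatilities are driven by one common
# latent factor h_t, with different loadings.
T = 500
h = np.exp(np.cumsum(rng.normal(0.0, 0.05, T)))   # persistent volatility factor
loadings = np.array([1.0, 0.5, 0.2])
y = rng.normal(0.0, 1.0, (T, 3)) * np.sqrt(h)[:, None] * loadings

# Build a nonnegative definite matrix from lagged autocovariances of the
# centred squared series; its leading eigenvectors span the volatility space.
s = y ** 2 - (y ** 2).mean(axis=0)
M = np.zeros((3, 3))
for lag in range(1, 6):
    A = s[lag:].T @ s[:-lag] / (T - lag)          # lag-k autocovariance
    M += A @ A.T                                  # keeps M nonnegative definite

eigvals = np.linalg.eigvalsh(M)                   # ascending order
print(eigvals[::-1])                              # the top eigenvalue should dominate
```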
Li, Weiming
Gao, Jing
Li, Kunpeng
Yao, Qiwei
Eigenanalysis; latent factors; multi-dimensional volatility process; volatility space
2016-10-01
Instrumental variables based on twin births are by definition not valid
http://d.repec.org/n?u=RePEc:hhs:gunhis:0023&r=ecm
Instrumental variables based on twin births are a well-known and widespread method to find exogenous variation in the number of children when studying effects on siblings or parents. This paper argues that there are serious problems with all versions of these instruments, many of which have arisen because insufficient care has been given to defining the estimated causal effect. The paper discusses this definition and then applies the potential outcomes framework to show that instrumental variables based on twin births violate the exclusion restriction, the independence assumption, and one part of the stable unit treatment value assumption. These violations, as well as the characteristics of the populations studied, have contributed to hiding any true effect of the number of children. It is time to stop using these instrumental variables and to return to these important questions using other methods.
Öberg, Stefan
causal inference; natural experiments; local average treatment effect; complier average causal effect; Rubin’s causal model; quantity–quality trade-off; family size
2018-04-01