
on Econometrics 
By:  YAMAZAKI, Daisuke; KUROZUMI, Eiji 
Abstract:  It is widely known that structural break tests based on the long-run variance estimator, which is estimated under the alternative, suffer from serious size distortion when the errors are serially correlated. In this paper, we propose bias-corrected tests for a shift in mean by correcting the bias of the long-run variance estimator up to O(1/T). Simulation results show that the proposed tests have good size and high power. 
Keywords:  structural change, long-run variance, bias correction 
JEL:  C12 C22 
Date:  2014–11–10 
URL:  http://d.repec.org/n?u=RePEc:hit:econdp:201416&r=ecm 
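For context, a minimal numpy sketch of the standard, uncorrected Bartlett-kernel long-run variance estimator whose O(1/T) bias the paper targets; the paper's correction terms and break-test statistics are not reproduced, and the function name and bandwidth choice are illustrative:

```python
import numpy as np

def long_run_variance(x, bandwidth):
    """Bartlett-kernel (Newey-West style) long-run variance estimator.

    This is the uncorrected baseline estimator; the paper's O(1/T)
    bias correction is not implemented here.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    u = x - x.mean()
    gamma0 = np.mean(u * u)          # sample variance (lag-0 autocovariance)
    lrv = gamma0
    for j in range(1, bandwidth + 1):
        gamma_j = np.sum(u[j:] * u[:-j]) / T   # lag-j autocovariance
        w = 1.0 - j / (bandwidth + 1.0)        # Bartlett weight
        lrv += 2.0 * w * gamma_j
    return lrv
```

Under serial correlation the autocovariance terms matter; the paper's point is that estimating them under the alternative biases tests for a mean shift.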
By:  Chen, Xi; Plotnikova, Tatiana 
Abstract:  A common problem in empirical production analysis at the firm level is that the initial values of capital are often missing in the data. Most empirical studies impute initial capital according to some ad hoc criteria based on a single arbitrary proxy. This paper evaluates the bias of production function estimations that is introduced when these traditional initial value approximations are used. We propose a generalized framework to deal with the missing initial capital problem by using multiple proxies where the choice of proxies is data-driven. We conduct a series of Monte Carlo experiments where the proposed method is tested against traditional approaches and apply the method to firm-level data. 
Keywords:  capital stock measurement, production function estimation, Monte Carlo simulation, nonlinear regression 
JEL:  C19 C81 D20 
Date:  2014–09–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:61154&r=ecm 
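As a point of reference, a sketch of the kind of ad hoc, single-proxy imputation the paper evaluates: the textbook steady-state rule K_0 = I_1 / (g + delta) combined with the perpetual inventory recursion. The paper's multi-proxy, data-driven method is not reproduced; all parameter values are illustrative.

```python
def impute_initial_capital(first_investment, growth, depreciation):
    """Single-proxy steady-state rule: K_0 = I_1 / (g + delta).

    An example of the traditional imputation whose bias the paper
    studies, not the paper's proposed multi-proxy estimator.
    """
    return first_investment / (growth + depreciation)

def perpetual_inventory(k0, investments, depreciation):
    """Roll the capital stock forward: K_t = (1 - delta) * K_{t-1} + I_t."""
    k = k0
    path = []
    for inv in investments:
        k = (1.0 - depreciation) * k + inv
        path.append(k)
    return path
```

Any error in K_0 propagates (with geometric decay) through the whole imputed capital series used in production function estimation.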
By:  Xiaohong Chen (Cowles Foundation, Yale University); Timothy M. Christensen (NYU) 
Abstract:  We show that spline and wavelet series regression estimators for weakly dependent regressors attain the optimal uniform (i.e., sup-norm) convergence rate (n/log n)^{-p/(2p+d)} of Stone (1982), where d is the number of regressors and p is the smoothness of the regression function. The optimal rate is achieved even for heavy-tailed martingale difference errors with finite (2 + (d/p))th absolute moment for d/p < 2. We also establish the asymptotic normality of t statistics for possibly nonlinear, irregular functionals of the conditional mean function under weak conditions. The results are proved by deriving a new exponential inequality for sums of weakly dependent random matrices, which is of independent interest. 
Keywords:  Nonparametric series regression, Optimal uniform convergence rates, Weak dependence, Random matrices, Splines, Wavelets, (Nonlinear) Irregular Functionals, Sieve t statistics 
JEL:  C12 C14 C32 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1976&r=ecm 
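To fix ideas, a toy series (sieve) least-squares regression. A plain polynomial power basis is used here for brevity, whereas the paper's results concern spline and wavelet bases; the function name is illustrative.

```python
import numpy as np

def sieve_regression(x, y, degree):
    """Series least squares with a polynomial power basis.

    A simple sieve for illustration; the paper's sup-norm rate results
    are for spline and wavelet series, not the power series used here.
    """
    basis = np.vander(np.asarray(x, float), degree + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(basis, np.asarray(y, float), rcond=None)
    # Return a callable estimate of the conditional mean function
    return lambda xnew: np.vander(np.asarray(xnew, float), degree + 1,
                                  increasing=True) @ coef
```

In the paper's setting the number of basis terms grows with the sample size at a rate tuned to the smoothness p and dimension d.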
By:  Costantini, Mauro (Department of Economics and Finance, Brunel University); Gunter, Ulrich (Department of Tourism and Service Management, MODUL University Vienna); Kunst, Robert M. (Department of Economics and Finance, Institute for Advanced Studies, Vienna and Department of Economics, University of Vienna) 
Abstract:  We explore the benefits of forecast combinations based on forecast-encompassing tests compared to simple averages and to Bates-Granger combinations. We also consider a new combination method that fuses test-based and Bates-Granger weighting. For a realistic simulation design, we generate multivariate time-series samples from a macroeconomic DSGE-VAR model. Results generally support Bates-Granger over uniform weighting, whereas benefits of test-based weights depend on the sample size and on the prediction horizon. In a corresponding application to real-world data, simple averaging performs best. Uniform averages may be the weighting scheme that is most robust to empirically observed irregularities. 
Keywords:  Combining forecasts, encompassing tests, model selection, time series, DSGE-VAR model 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:309&r=ecm 
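A minimal sketch of the diagonal Bates-Granger scheme referenced above: weights proportional to inverse mean squared forecast errors, ignoring forecast-error covariances. The test-based and fused schemes studied in the paper are not reproduced.

```python
import numpy as np

def bates_granger_weights(msfe):
    """Inverse-MSFE combination weights (diagonal Bates-Granger).

    Forecast-error covariances are ignored, as in the common
    simplified version of the scheme.
    """
    inv = 1.0 / np.asarray(msfe, dtype=float)
    return inv / inv.sum()

def combine(weights, forecasts):
    """Combined forecast as the weighted average of individual forecasts."""
    return float(np.asarray(weights, float) @ np.asarray(forecasts, float))
```

A model with half the MSFE of a competitor receives twice the weight; uniform weighting is recovered when all MSFEs are equal.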
By:  Ranjani Atukorala; Maxwell L. King; Sivagowry Sriananthakumar 
Abstract:  The Central Limit Theorem (CLT) is an important result in statistics and econometrics, and econometricians often rely on the CLT for inference in practice. Even though different conditions apply to different kinds of data, the CLT results are believed to be generally available for a range of situations. This paper illustrates the use of the Kullback-Leibler Information (KLI) measure to assess how close an approximating distribution is to a true distribution, in the context of investigating how different population distributions affect convergence in the CLT. For this purpose, three different nonparametric methods for estimating the KLI are proposed and investigated. The main findings of this paper are: 1) the distribution of sample means better approximates the normal distribution as the sample size increases, as expected; 2) for any fixed sample size, the distribution of means of samples from skewed distributions converges faster to the normal distribution as the kurtosis increases; 3) at least in the range of values of kurtosis considered, the distribution of means of small samples generated from symmetric distributions is well approximated by the normal distribution; and 4) among the nonparametric methods used, Vasicek's (1976) estimator seems to be the best for the purpose of assessing asymptotic approximations. Based on the results of the paper, recommendations are made on the minimum sample sizes required for an accurate normal approximation to the true distribution of sample means. 
Keywords:  Kullback-Leibler Information, Central Limit Theorem, skewness and kurtosis 
JEL:  C1 C2 C4 C5 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201429&r=ecm 
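A compact sketch of the Vasicek (1976) spacing estimator of differential entropy mentioned above, and of one way it can feed a KLI measure against a matching normal (the divergence from a density to the Gaussian with the same variance equals the Gaussian's entropy minus the density's entropy). The paper's specific implementation choices are not reproduced.

```python
import numpy as np

def vasicek_entropy(sample, m):
    """Vasicek (1976) spacing estimator of differential entropy.

    Order statistics outside the sample range are clamped to the
    sample minimum/maximum, as in the original estimator.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

def kli_to_normal(sample, m):
    """Estimated KL divergence from the sample's distribution to a normal
    with matching variance: exact normal entropy minus Vasicek estimate."""
    var = np.var(sample)
    return 0.5 * np.log(2.0 * np.pi * np.e * var) - vasicek_entropy(sample, m)
```

As the distribution of standardized sample means approaches normality, this KLI estimate should shrink toward zero, which is the convergence diagnostic the paper exploits.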
By:  Nikolay Arefiev (National Research University Higher School of Economics) 
Abstract:  For linear Gaussian simultaneous equations models with orthogonal structural shocks, I show that, if appropriate instruments are available, there exists a set of inclusion and exclusion restrictions sufficient for full identification, such that each identifying restriction in this set is testable. This result does not depend on whether the model is recursive or cyclical, although the causal representation of cyclical models is not unique. To prove this, I provide a reduced-form rank condition for the identification of simultaneous equations models, propose a graphical interpretation of the rank condition, provide graphical interpretations of various sufficient conditions for the identification of structural vector autoregressions, and formulate new conditional independence tests. 
Keywords:  Identification, instrumental variables, data-oriented identification, sparse structural models, structural vector autoregression, SVAR, simultaneous equations model, SEM, probabilistic graphical model, PGM. 
JEL:  C30 E31 E52 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:hig:wpaper:79/ec/2014&r=ecm 
By:  Timothy B. Armstrong (Cowles Foundation, Yale University) 
Abstract:  This note uses a simple example to show how moment inequality models used in the empirical economics literature lead to general minimax relative efficiency comparisons. The main point is that such models involve inference on a low dimensional parameter, which leads naturally to a definition of “distance” that, in full generality, would be arbitrary in minimax testing problems. This definition of distance is justified by the fact that it leads to a duality between minimaxity of confidence intervals and tests, which does not hold for other definitions of distance. Thus, the use of moment inequalities for inference in a low dimensional parametric model places additional structure on the testing problem, which leads to stronger conclusions regarding minimax relative efficiency than would otherwise be possible. 
Keywords:  Minimax, Relative efficiency, Moment inequalities 
JEL:  C12 C14 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1975&r=ecm 
By:  Frank Kleibergen; Zhaoguo Zhan (Tsinghua University) 
Abstract:  We construct the large-sample distributions of the OLS and GLS R^2's of the second-pass regression of the Fama-MacBeth (1973) two-pass procedure when the observed proxy factors are minorly correlated with the true unobserved factors. This implies an unexplained factor structure in the first-pass residuals and, consequently, a large estimation error in the estimated betas, which is spanned by the betas of the unexplained true factors. The average portfolio returns and the estimation error of the estimated betas are then both linear in the betas of the unobserved true factors, which leads to possibly large values of the OLS R^2 of the second-pass regression. These large values of the OLS R^2 are not indicative of the strength of the relationship. Our results call into question many empirical findings concerning the relationship between expected portfolio returns and (macro)economic factors. 
Date:  2014–12–16 
URL:  http://d.repec.org/n?u=RePEc:ame:wpaper:1405&r=ecm 
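For readers unfamiliar with the object being analyzed, a sketch of the second-pass OLS R^2: average portfolio returns regressed cross-sectionally on (already-estimated) betas. This shows only the mechanics of the statistic; the paper's large-sample distribution theory under weak proxy factors is not reproduced, and the inputs below are illustrative.

```python
import numpy as np

def second_pass_r2(mean_returns, betas):
    """OLS R^2 of the Fama-MacBeth second-pass cross-sectional regression
    of average portfolio returns on estimated betas (intercept included)."""
    mu = np.asarray(mean_returns, float)
    X = np.column_stack([np.ones(len(mu)), np.asarray(betas, float)])
    coef, *_ = np.linalg.lstsq(X, mu, rcond=None)
    resid = mu - X @ coef
    sst = (mu - mu.mean()) @ (mu - mu.mean())
    return 1.0 - resid @ resid / sst
```

The paper's warning is that when both average returns and the beta estimation error load on the same unobserved true factors, this R^2 can be large without the proxy factors being informative.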
By:  Arne Risa Hole (University of Sheffield); Hong Il Yoo (Durham University Business School, Durham University) 
Abstract:  The maximum simulated likelihood estimation of random parameter logit models is now commonplace in various areas of economics. Since these models have non-concave simulated likelihood functions with potentially many optima, the selection of "good" starting values is crucial for avoiding a false solution at an inferior optimum. But little guidance exists on how to obtain "good" starting values. We advance an estimation strategy which makes joint use of heuristic global search routines and conventional gradient-based algorithms. The central idea is to use heuristic routines to locate a starting point which is likely to be close to the global maximum, and then to use gradient-based algorithms to refine this point further to a local maximum which stands a good chance of being the global maximum. In the context of a random parameter logit model featuring both scale and coefficient heterogeneity (GMNL), we apply this strategy as well as the conventional strategy of starting from estimated special cases of the final model. The results from several empirical datasets suggest that the heuristically assisted strategy is often capable of finding a solution which is better than the best that we have found using the conventional strategy. The results also suggest, however, that the configuration of the heuristic routines that leads to the best solution is likely to vary somewhat from application to application. 
Keywords:  mixed logit, generalized multinomial logit, differential evolution, particle swarm optimization 
JEL:  C25 C61 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:shf:wpaper:2014021&r=ecm 
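A toy illustration of the two-stage idea: a global search stage to pick a promising starting point, then gradient-based refinement. A deterministic coarse grid stands in for the differential evolution and particle swarm heuristics of the paper, and a multimodal test function stands in for a non-concave simulated log-likelihood; everything here is illustrative, not the paper's estimator.

```python
import numpy as np

def rastrigin(x):
    """Multimodal test function (many local minima, one global minimum
    at x = 0), standing in for a non-concave simulated likelihood."""
    return x * x + 10.0 - 10.0 * np.cos(2.0 * np.pi * x)

def two_stage_minimize(f, grid, lr=0.001, steps=2000):
    """Stage 1: coarse global search over candidate points (a grid here,
    where the paper uses heuristics such as DE or PSO).
    Stage 2: gradient descent refinement from the best candidate,
    using a central-difference numerical gradient."""
    x = grid[np.argmin([f(g) for g in grid])]
    for _ in range(steps):
        h = 1e-6
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

best = two_stage_minimize(rastrigin, np.linspace(-5.0, 5.0, 201))
```

Starting the gradient stage from an arbitrary point (say x = 3) would converge to an inferior local minimum; the global stage is what places the refinement in the right basin.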
By:  Deuchert, Eva; Huber, Martin 
Abstract:  Many instrumental variable (IV) regressions include control variables to justify (conditional) independence of the instrument and the potential outcomes. The plausibility of conditional IV independence crucially depends on when the control variables are determined. This paper systematically works through different IV models and discusses the conditions under which conditional IV independence is satisfied when controlling for covariates measured (a) prior to the instrument, (b) after the treatment, or (c) both. To illustrate these identification issues, we consider an empirical application using the Vietnam War draft risk as an instrument either for veteran status or for education, to estimate the effects of these variables on labor market and health outcomes. 
Keywords:  Instrument, control variables, conditional independence, covariates 
JEL:  C26 J24 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2014:39&r=ecm 
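For reference, the baseline just-identified IV (Wald) estimator on which the models discussed above build. The paper's contribution concerns when covariates may validly be conditioned on, which this unconditional sketch deliberately omits; the data below are illustrative.

```python
import numpy as np

def iv_estimate(y, d, z):
    """Just-identified IV (Wald) estimator of the effect of d on y using
    instrument z: sample cov(z, y) / cov(z, d). No covariates are
    conditioned on, which is exactly the simplification the paper's
    analysis of control-variable timing goes beyond."""
    y, d, z = (np.asarray(v, float) for v in (y, d, z))
    zc = z - z.mean()
    return (zc @ (y - y.mean())) / (zc @ (d - d.mean()))
```

Adding controls amounts to partialling them out of y, d, and z first; the paper's point is that whether this restores instrument independence depends on when those controls were determined.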
By:  Chudik, Alexander (Federal Reserve Bank of Dallas); Grossman, Valerie (Federal Reserve Bank of Dallas); Pesaran, M. Hashem (University of Southern California) 
Abstract:  This paper derives new theoretical results for forecasting with Global VAR (GVAR) models. It is shown that the presence of a strong unobserved common factor can lead to an undetermined GVAR model. To solve this problem, we propose augmenting the GVAR with additional proxy equations for the strong factors and establish conditions under which forecasts from the augmented GVAR model (AugGVAR) uniformly converge in probability (as the panel dimensions N, T → ∞ such that N/T → κ for some 0 < κ < ∞). 
JEL:  C53 E37 
Date:  2014–11–01 
URL:  http://d.repec.org/n?u=RePEc:fip:feddgw:213&r=ecm 
By:  Tom Boot; Andreas Pick 
Abstract:  We derive optimal weights for Markov switching models by weighting observations such that forecasts are optimal in the MSFE sense. We provide analytic expressions of the weights conditional on the Markov states and conditional on state probabilities. This allows us to study the effect of uncertainty around states on forecasts. It emerges that, even in large samples, forecasting performance increases substantially when the construction of optimal weights takes uncertainty around states into account. Performance of the optimal weights is shown through simulations and an application to US GNP, where using optimal weights leads to significant reductions in MSFE. 
Keywords:  Markov switching models; forecasting; optimal weights; GNP forecasting 
JEL:  C25 C53 E37 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:dnb:dnbwpp:452&r=ecm 
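A minimal sketch of forecasting under state uncertainty in a Markov switching setting: the forecast is the state-conditional forecasts averaged with the (filtered) state probabilities. The paper's optimal observation weights, which additionally reweight past observations, are not reproduced here.

```python
import numpy as np

def state_weighted_forecast(state_probs, state_forecasts):
    """Forecast under state uncertainty: probability-weighted average of
    the state-conditional forecasts. This is the standard construction;
    the paper derives MSFE-optimal observation weights on top of it."""
    p = np.asarray(state_probs, float)
    f = np.asarray(state_forecasts, float)
    return float(p @ f)
```

The paper's finding is that accounting for this uncertainty when constructing the observation weights themselves, rather than conditioning on the most likely state, improves forecasts even in large samples.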
By:  Vogt, Erik (Federal Reserve Bank of New York) 
Abstract:  The illiquidity of long-maturity options has made it difficult to study the term structures of option spanning portfolios. This paper proposes a new estimation and inference framework for these option-implied term structures that addresses long-maturity illiquidity. By building a sieve estimator around the risk-neutral valuation equation, the framework theoretically justifies (fat-tailed) extrapolations beyond truncated strikes and between observed maturities while remaining nonparametric. New confidence intervals quantify the term structure estimation error. The framework is applied to estimating the term structure of the variance risk premium and finds that a short-run component dominates market excess return predictability. 
Keywords:  equity risk premium; finance; options; predictability; sieve M estimation; state-price density; term structures; variance risk premium; VIX 
JEL:  C12 C14 C58 G12 G13 G17 
Date:  2014–12–01 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:706&r=ecm 
By:  von der Lippe, Peter 
Abstract:  The New Stochastic Approach (NSA), unjustly, pretends to promote a better understanding of price index (PI) formulas by viewing them as regression coefficients. As prices in the NSA are assumed to be collected in a random sample (which is particularly at odds with official price statistics), PIs are random variables, so that not only a point estimate but also an interval estimate of a PI can be provided. However, this often-praised "main advantage" of the NSA is hardly of any use from a practical point of view. In the NSA, goodness of fit is confused with adequacy of a PI formula. The regression models are mostly far-fetched, stipulate restrictive and unrealistic assumptions, replicate only already-known PI formulas, and say nothing about the axioms satisfied or violated by a PI. 
Keywords:  Index theory, price index, Inflation measurement, regression, methodology of collecting macroeconomic data 
JEL:  C43 C82 E01 E31 
Date:  2014–12–14 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60839&r=ecm 
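A concrete instance of the replication the note criticizes: the Jevons index emerges as a regression coefficient, since the OLS estimate of b in log(p1/p0) = b + error is the mean log price relative, and exp(b) is the unweighted geometric-mean (Jevons) index. A minimal sketch, with illustrative prices:

```python
import numpy as np

def jevons_index(p0, p1):
    """Jevons price index recovered as a regression coefficient.

    Regressing log price relatives on a constant gives the sample mean
    of the log relatives as the OLS estimate; exponentiating yields the
    unweighted geometric-mean (Jevons) index, i.e. the NSA merely
    replicates an already-known formula.
    """
    log_rel = np.log(np.asarray(p1, float) / np.asarray(p0, float))
    b = log_rel.mean()  # OLS of log relatives on a constant = sample mean
    return np.exp(b)
```

This makes the note's complaint tangible: the regression view adds an interval estimate under sampling assumptions, but says nothing about which index axioms the resulting formula satisfies.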
By:  Krueger, Fabian (Heidelberg Institute for Theoretical Studies); Clark, Todd E. (Federal Reserve Bank of Cleveland); Ravazzolo, Francesco (Norges Bank and the BI Norwegian Business School) 
Abstract:  This paper shows entropic tilting to be a flexible and powerful tool for combining medium-term forecasts from BVARs with short-term forecasts from other sources (nowcasts from either surveys or other models). Tilting systematically improves the accuracy of both point and density forecasts, and tilting the BVAR forecasts based on nowcast means and variances yields slightly greater gains in density accuracy than does tilting based on the nowcast means alone. Hence entropic tilting can offer some benefits, more so for persistent variables than for not-persistent variables, for accurately estimating the uncertainty of multi-step forecasts that incorporate nowcast information. 
Keywords:  Forecasting; Prediction; Bayesian Analysis 
JEL:  C11 C53 E17 
Date:  2015–01–07 
URL:  http://d.repec.org/n?u=RePEc:fip:fedcwp:1439&r=ecm 
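A minimal sketch of entropic tilting for a single mean constraint: equal-weighted forecast draws are exponentially reweighted (the minimum-KL adjustment) so that the weighted mean matches a nowcast mean, with the tilting parameter found by bisection. Matching a nowcast variance as well, as in the paper, would add a second tilting function; the numbers below are illustrative.

```python
import numpy as np

def entropic_tilt(draws, target_mean, lam_lo=-50.0, lam_hi=50.0, iters=200):
    """Exponentially tilt equal-weighted draws so the weighted mean equals
    target_mean: w_i proportional to exp(lam * x_i). The tilted mean is
    increasing in lam, so lam is found by bisection. This is the
    mean-only case; the paper also tilts toward nowcast variances."""
    x = np.asarray(draws, float)

    def tilted(lam):
        w = np.exp(lam * (x - x.mean()))  # centring for numerical stability
        w /= w.sum()
        return w @ x, w

    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        m, w = tilted(lam)
        if m < target_mean:
            lam_lo = lam
        else:
            lam_hi = lam
    return w
```

The tilted weights define a predictive distribution as close as possible (in KL divergence) to the original BVAR draws while honoring the nowcast moment.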
By:  Pan, Li; Politis, Dimitris 
Keywords:  Social and Behavioral Sciences 
Date:  2014–12–18 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt7555757g&r=ecm 
By:  Joseph G. Altonji; Richard K. Mansfield 
Abstract:  We consider the classic problem of estimating group treatment effects when individuals sort based on observed and unobserved characteristics that affect the outcome. Using a standard choice model, we show that controlling for group averages of observed individual characteristics potentially absorbs all the across-group variation in unobservable individual characteristics. We use this insight to bound the treatment effect variance of school systems and associated neighborhoods for various outcomes. Across four datasets, our most conservative estimates indicate that a 90th versus 10th percentile school system increases the high school graduation probability by between 0.047 and 0.085 and increases the college enrollment probability by between 0.11 and 0.13. We also find large effects on adult earnings. We discuss a number of other applications of our methodology, including measurement of teacher value-added. 
JEL:  C20 I20 I24 R20 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:20781&r=ecm 
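A toy illustration of the control strategy described above: group averages of an observed individual characteristic enter as regressors alongside the individual characteristic and a group-level treatment. All groups, coefficients, and data are invented for illustration; this is not the paper's bounding procedure, only the regression it builds on.

```python
import numpy as np

# Three groups; x varies within groups, and the group average of x (xbar)
# is not collinear with the group-level treatment indicator.
treat = np.array([0., 0., 1., 1., 1., 1.])        # group-level treatment
x     = np.array([0., 2., 0., 2., 2., 4.])        # individual characteristic
xbar  = np.array([1., 1., 1., 1., 3., 3.])        # group mean of x
y     = 1.0 + 2.0 * treat + 3.0 * x + 4.0 * xbar  # noiseless outcome

# Including xbar alongside x controls for group-level sorting on
# characteristics, which is the role group averages play in the paper.
X = np.column_stack([np.ones(6), treat, x, xbar])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In this noiseless design the regression recovers the intercept, treatment effect, individual slope, and group-average slope exactly; with sorting on unobservables, the paper's result is that the group-average controls can absorb the across-group variation in those unobservables.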