nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒01‒14
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Improving the Finite Sample Performance of Tests for a Shift in Mean By YAMAZAKI, Daisuke; KUROZUMI, Eiji
  2. Retrieving initial capital distributions from panel data By Chen, Xi; Plotnikova, Tatiana
  3. Optimal Uniform Convergence Rates and Asymptotic Normality for Series Estimators under Weak Dependence and Weak Conditions By Xiaohong Chen; Timothy M. Christensen
  4. Forecast combinations in a DSGE-VAR lab By Costantini, Mauro; Gunter, Ulrich; Kunst, Robert M.
  5. Applications of Information Measures to Assess Convergence in the Central Limit Theorem By Ranjani Atukorala; Maxwell L. King; Sivagowry Sriananthakumar
  6. Structural Models With Testable Identification By Nikolay Arefiev
  7. A Note on Minimax Testing and Confidence Intervals in Moment Inequality Models By Timothy B. Armstrong
  8. Unexplained factors and their effects on second pass R-squared’s By Frank Kleibergen; Zhaoguo Zhan
  9. The use of heuristic optimization algorithms to facilitate maximum simulated likelihood estimation of random parameter logit models By Arne Risa Hole; Hong Il Yoo
  10. A cautionary tale about control variables in IV estimation By Deuchert, Eva; Huber, Martin
  11. A multi-country approach to forecasting output growth using PMIs By Chudik, Alexander; Grossman, Valerie; Pesaran, M. Hashem
  12. Optimal forecasts from Markov switching models By Tom Boot; Andreas Pick
  13. Option-implied term structures By Vogt, Erik
  14. The Stochastic Approach to Index Numbers: Needless and Useless By von der Lippe, Peter
  15. Using Entropic Tilting to Combine BVAR Forecasts with External Nowcasts By Krueger, Fabian; Clark, Todd E.; Ravazzolo, Francesco
  16. Bootstrap prediction intervals for Markov processes  By Pan, Li; Politis, Dimitris
  17. Group-Average Observables as Controls for Sorting on Unobservables When Estimating Group Treatment Effects: the Case of School and Neighborhood Effects By Joseph G. Altonji; Richard K. Mansfield

  1. By: YAMAZAKI, Daisuke; KUROZUMI, Eiji
    Abstract: It is widely known that structural break tests based on the long-run variance estimator, which is estimated under the alternative, suffer from serious size distortion when the errors are serially correlated. In this paper, we propose bias-corrected tests for a shift in mean by correcting the bias of the long-run variance estimator up to O(1/T). Simulation results show that the proposed tests have good size and high power.
    Keywords: structural change, long-run variance, bias correction
    JEL: C12 C22
    Date: 2014–11–10
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2014-16&r=ecm
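    A minimal illustration, in Python with NumPy only, of the baseline problem the paper addresses: a sup-Wald test for a single mean shift whose long-run variance is estimated from residuals under the alternative with a Bartlett kernel. The O(1/T) bias correction proposed in the paper is not reproduced here, and the bandwidth and trimming choices below are illustrative assumptions.

      # Sketch: sup-Wald test for a single shift in mean, with the long-run
      # variance estimated from residuals under the alternative (Bartlett kernel).
      # The O(1/T) bias correction proposed in the paper is NOT implemented here.
      import numpy as np

      def long_run_variance(u, bandwidth):
          """Bartlett-kernel (Newey-West) long-run variance of a zero-mean series."""
          T = len(u)
          lrv = u @ u / T
          for j in range(1, bandwidth + 1):
              w = 1.0 - j / (bandwidth + 1.0)
              lrv += 2.0 * w * (u[j:] @ u[:-j] / T)
          return lrv

      def sup_wald_mean_shift(y, trim=0.15, bandwidth=None):
          """Sup-Wald statistic for an unknown single break in the mean of y."""
          T = len(y)
          if bandwidth is None:
              bandwidth = int(np.floor(4 * (T / 100.0) ** (2.0 / 9.0)))
          stats = []
          for k in range(int(trim * T), int((1 - trim) * T)):
              mu1, mu2 = y[:k].mean(), y[k:].mean()
              u = np.concatenate([y[:k] - mu1, y[k:] - mu2])  # residuals under the alternative
              omega = long_run_variance(u, bandwidth)
              stats.append((mu1 - mu2) ** 2 / (omega * (1.0 / k + 1.0 / (T - k))))
          return max(stats)

      # Example: serially correlated errors with no break; the statistic tends to be
      # inflated relative to asymptotic critical values (the size distortion the paper targets).
      rng = np.random.default_rng(0)
      T = 200
      e = np.zeros(T)
      for t in range(1, T):
          e[t] = 0.7 * e[t - 1] + rng.standard_normal()
      print(sup_wald_mean_shift(e))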
  2. By: Chen, Xi; Plotnikova, Tatiana
    Abstract: A common problem in empirical production analysis at the firm level is that the initial values of capital are often missing from the data. Most empirical studies impute initial capital according to ad hoc criteria based on a single arbitrary proxy. This paper evaluates the bias introduced into production function estimates when these traditional initial-value approximations are used. We propose a generalized framework that deals with the missing initial capital problem by using multiple proxies, where the choice of proxies is data-driven. We conduct a series of Monte Carlo experiments in which the proposed method is tested against traditional approaches, and we apply the method to firm-level data.
    Keywords: capital stock measurement, production function estimation, Monte Carlo simulation, non-linear regression
    JEL: C19 C81 D20
    Date: 2014–09–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:61154&r=ecm
  3. By: Xiaohong Chen (Cowles Foundation, Yale University); Timothy M. Christensen (NYU)
    Abstract: We show that spline and wavelet series regression estimators for weakly dependent regressors attain the optimal uniform (i.e., sup-norm) convergence rate (n/log n)^{-p/(2p+d)} of Stone (1982), where d is the number of regressors and p is the smoothness of the regression function. The optimal rate is achieved even for heavy-tailed martingale difference errors with finite (2 + (d/p))th absolute moment for d/p < 2. We also establish the asymptotic normality of t statistics for possibly nonlinear, irregular functionals of the conditional mean function under weak conditions. The results are proved by deriving a new exponential inequality for sums of weakly dependent random matrices, which is of independent interest.
    Keywords: Nonparametric series regression, Optimal uniform convergence rates, Weak dependence, Random matrices, Splines, Wavelets, (Nonlinear) Irregular Functionals, Sieve t statistics
    JEL: C12 C14 C32
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1976&r=ecm
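    A minimal sketch of a spline series (sieve) regression estimator of a conditional mean with a weakly dependent regressor, using an ordinary truncated-power cubic spline basis and least squares; the number of knots and the simulated design are illustrative assumptions, not the paper's construction.

      # Sketch: a cubic spline series (sieve) regression estimator of E[y|x],
      # built from a truncated-power spline basis and ordinary least squares.
      import numpy as np

      def spline_basis(x, knots):
          """Cubic truncated-power spline basis evaluated at x."""
          cols = [np.ones_like(x), x, x**2, x**3]
          cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
          return np.column_stack(cols)

      def series_fit(x, y, n_knots=5):
          knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])  # interior knots
          B = spline_basis(x, knots)
          coef, *_ = np.linalg.lstsq(B, y, rcond=None)
          return lambda x_new: spline_basis(np.asarray(x_new, float), knots) @ coef

      # Example: weakly dependent AR(1) regressor, nonlinear conditional mean.
      rng = np.random.default_rng(1)
      n = 1000
      x = np.zeros(n)
      for t in range(1, n):
          x[t] = 0.5 * x[t - 1] + rng.standard_normal()
      y = np.sin(x) + 0.3 * rng.standard_normal(n)
      g_hat = series_fit(x, y)
      print(g_hat([-1.0, 0.0, 1.0]))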
  4. By: Costantini, Mauro (Department of Economics and Finance, Brunel University); Gunter, Ulrich (Department of Tourism and Service Management, MODUL University Vienna); Kunst, Robert M. (Department of Economics and Finance, Institute for Advanced Studies, Vienna and Department of Economics, University of Vienna)
    Abstract: We explore the benefits of forecast combinations based on forecast-encompassing tests compared to simple averages and to Bates-Granger combinations. We also consider a new combination method that fuses test-based and Bates-Granger weighting. For a realistic simulation design, we generate multivariate time-series samples from a macroeconomic DSGE-VAR model. Results generally support Bates-Granger over uniform weighting, whereas benefits of test-based weights depend on the sample size and on the prediction horizon. In a corresponding application to real-world data, simple averaging performs best. Uniform averages may be the weighting scheme that is most robust to empirically observed irregularities.
    Keywords: Combining forecasts, encompassing tests, model selection, time series, DSGE-VAR model
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:309&r=ecm
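    A small sketch of Bates-Granger (inverse-MSE) combination weights against a simple average, two of the weighting schemes compared in the paper; the encompassing-test-based weights proposed there are not reproduced, and the simulated forecasts are purely illustrative.

      # Sketch: Bates-Granger combination weights (inverse-MSE weighting) versus
      # a simple average of competing forecasts.
      import numpy as np

      def bates_granger_weights(errors):
          """errors: (T, m) matrix of past forecast errors of m models."""
          inv_mse = 1.0 / np.mean(errors ** 2, axis=0)
          return inv_mse / inv_mse.sum()

      # Example: combine two forecasts of y, one more accurate than the other.
      rng = np.random.default_rng(2)
      T = 100
      y = rng.standard_normal(T)
      f = np.column_stack([y + 0.5 * rng.standard_normal(T),   # model 1: more accurate
                           y + 1.0 * rng.standard_normal(T)])  # model 2: noisier
      w = bates_granger_weights(f - y[:, None])
      combined = f @ w
      uniform = f.mean(axis=1)
      print("BG weights:", w)
      print("MSFE (BG, uniform):", np.mean((combined - y) ** 2), np.mean((uniform - y) ** 2))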
  5. By: Ranjani Atukorala; Maxwell L. King; Sivagowry Sriananthakumar
    Abstract: The Central Limit Theorem (CLT) is an important result in statistics and econometrics and econometricians often rely on the CLT for inference in practice. Even though different conditions apply to different kinds of data, the CLT results are believed to be generally available for a range of situations. This paper illustrates the use of the Kullback-Leibler Information (KLI) measure to assess how close an approximating distribution is to a true distribution in the context of investigating how different population distributions affect convergence in the CLT. For this purpose, three different non-parametric methods for estimating the KLI are proposed and investigated. The main findings of this paper are 1) the distribution of the sample means better approximates the normal distribution as the sample size increases, as expected, 2) for any fixed sample size, the distribution of means of samples from skewed distributions converges faster to the normal distribution as the kurtosis increases, 3) at least in the range of values of kurtosis considered, the distribution of means of small samples generated from symmetric distributions is well approximated by the normal distribution, and 4) among the nonparametric methods used, Vasicek's (1976) estimator seems to be the best for the purpose of assessing asymptotic approximations. Based on the results of the paper, recommendations on minimum sample sizes required for an accurate normal approximation of the true distribution of sample means are made.
    Keywords: Kullback-Leibler Information, Central Limit Theorem, skewness and kurtosis
    JEL: C1 C2 C4 C5
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2014-29&r=ecm
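    A sketch of the idea, assuming Vasicek's (1976) spacing-based entropy estimator and measuring the KLI between the distribution of sample means and a normal fitted by mean and variance (so the KLI reduces to the negentropy); the spacing parameter and sample sizes are illustrative choices.

      # Sketch: Vasicek's (1976) entropy estimator used to measure the
      # Kullback-Leibler distance between the law of sample means and a normal
      # with the same mean and variance (i.e. the negentropy).
      import numpy as np

      def vasicek_entropy(x, m=None):
          x = np.sort(x)
          n = len(x)
          if m is None:
              m = int(np.sqrt(n))                       # illustrative spacing choice
          upper = np.minimum(np.arange(n) + m, n - 1)
          lower = np.maximum(np.arange(n) - m, 0)
          spacings = x[upper] - x[lower]
          return np.mean(np.log(n / (2.0 * m) * spacings))

      def kli_to_normal(x):
          """KLI between the empirical law of x and a fitted normal (negentropy)."""
          sigma2 = np.var(x)
          return -vasicek_entropy(x) + 0.5 * np.log(2 * np.pi * sigma2) + 0.5

      # Example: means of exponential draws approach normality as the sample size
      # grows, so the estimated KLI should shrink toward zero.
      rng = np.random.default_rng(3)
      for n in (5, 20, 100):
          means = rng.exponential(size=(20000, n)).mean(axis=1)
          print(n, kli_to_normal(means))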
  6. By: Nikolay Arefiev (National Research University Higher School of Economics)
    Abstract: For linear Gaussian simultaneous equations models with orthogonal structural shocks, I show that, if appropriate instruments are available, there exists a set of inclusion and exclusion restrictions sufficient for full identification, such that each identification restriction from this set is testable. This result does not depend on whether the model is recursive or cyclical, although the causal representation of cyclical models is not unique. To prove this, I provide a reduced form rank condition for the identification of simultaneous equations models, propose a graphical interpretation of the rank condition, provide graphical interpretations of various sufficient conditions for identification of structural vector autoregressions, and formulate new conditional independence tests.
    Keywords: Identification, instrumental variables, data-oriented identification, sparse structural models, structural vector autoregression, SVAR, simultaneous equations model, SEM, probabilistic graphical model, PGM
    JEL: C30 E31 E52
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:hig:wpaper:79/ec/2014&r=ecm
  7. By: Timothy B. Armstrong (Cowles Foundation, Yale University)
    Abstract: This note uses a simple example to show how moment inequality models used in the empirical economics literature lead to general minimax relative efficiency comparisons. The main point is that such models involve inference on a low dimensional parameter, which leads naturally to a definition of “distance” that, in full generality, would be arbitrary in minimax testing problems. This definition of distance is justified by the fact that it leads to a duality between minimaxity of confidence intervals and tests, which does not hold for other definitions of distance. Thus, the use of moment inequalities for inference in a low dimensional parametric model places additional structure on the testing problem, which leads to stronger conclusions regarding minimax relative efficiency than would otherwise be possible.
    Keywords: Minimax, Relative efficiency, Moment inequalities
    JEL: C12 C14
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1975&r=ecm
  8. By: Frank Kleibergen; Zhaoguo Zhan (Tsinghua University)
    Abstract: We construct the large sample distributions of the OLS and GLS R^2's of the second pass regression of the Fama-MacBeth (1973) two pass procedure when the observed proxy factors are minorly correlated with the true unobserved factors. This implies an unexplained factor structure in the first pass residuals and, consequently, a large estimation error in the estimated beta's which is spanned by the beta's of the unexplained true factors. The average portfolio returns and the estimation error of the estimated beta's are then both linear in the beta's of the unobserved true factors, which leads to possibly large values of the OLS R^2 of the second pass regression. These large values of the OLS R^2 are not indicative of the strength of the relationship. Our results question many empirical findings that concern the relationship between expected portfolio returns and (macro-)economic factors.
    Date: 2014–12–16
    URL: http://d.repec.org/n?u=RePEc:ame:wpaper:1405&r=ecm
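    A compact sketch of the Fama-MacBeth two-pass procedure whose second-pass OLS R^2 the paper studies, on simulated data; it illustrates the object under discussion, not the paper's asymptotic results, and the data generating process is purely illustrative.

      # Sketch: two-pass procedure. First pass: time-series regressions give
      # portfolio betas. Second pass: cross-sectional regression of average
      # returns on those betas gives the OLS R^2.
      import numpy as np

      rng = np.random.default_rng(4)
      T, N = 600, 25                               # time periods and test portfolios
      factor = rng.standard_normal(T)              # observed proxy factor
      beta_true = rng.uniform(0.5, 1.5, N)
      risk_premium = 0.5
      returns = (risk_premium * beta_true + np.outer(factor, beta_true)
                 + rng.standard_normal((T, N)))

      # First pass: time-series regression of each portfolio on the factor.
      X = np.column_stack([np.ones(T), factor])
      beta_hat = np.linalg.lstsq(X, returns, rcond=None)[0][1]      # estimated betas

      # Second pass: cross-sectional regression of average returns on estimated betas.
      avg_ret = returns.mean(axis=0)
      Z = np.column_stack([np.ones(N), beta_hat])
      gamma, *_ = np.linalg.lstsq(Z, avg_ret, rcond=None)
      fitted = Z @ gamma
      r2 = 1.0 - np.sum((avg_ret - fitted) ** 2) / np.sum((avg_ret - avg_ret.mean()) ** 2)
      print("second-pass OLS R^2:", r2)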
  9. By: Arne Risa Hole (University of Sheffield); Hong Il Yoo (Durham University Business School, Durham University)
    Abstract: The maximum simulated likelihood estimation of random parameter logit models is now commonplace in various areas of economics. Since these models have non-concave simulated likelihood functions with potentially many optima, the selection of "good" starting values is crucial for avoiding a false solution at an inferior optimum. But little guidance exists on how to obtain "good" starting values. We advance an estimation strategy which makes joint use of heuristic global search routines and conventional gradient-based algorithms. The central idea is to use heuristic routines to locate a starting point which is likely to be close to the global maximum, and then to use gradient-based algorithms to refine this point further to a local maximum which stands a good chance of being the global maximum. In the context of a random parameter logit model featuring both scale and coefficient heterogeneity (GMNL), we apply this strategy as well as the conventional strategy of starting from estimated special cases of the final model. The results from several empirical datasets suggest that the heuristically assisted strategy is often capable of finding a solution which is better than the best that we have found using the conventional strategy. The results also suggest, however, that the configuration of the heuristic routines that leads to the best solution is likely to vary somewhat from application to application.
    Keywords: mixed logit, generalized multinomial logit, differential evolution, particle swarm optimization
    JEL: C25 C61
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:shf:wpaper:2014021&r=ecm
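    A sketch of the two-stage strategy on a toy random parameter logit with one normally distributed coefficient: SciPy's differential evolution supplies starting values for the simulated log-likelihood, which BFGS then refines. The model, draw scheme, and tuning constants are illustrative assumptions, not the GMNL specification or the heuristics used in the paper.

      # Sketch: heuristic global search (differential evolution) to locate starting
      # values, followed by gradient-based refinement (BFGS), for the simulated
      # log-likelihood of a small mixed logit with one random coefficient.
      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      rng = np.random.default_rng(5)
      N, J, R = 500, 3, 100                       # people, alternatives, simulation draws
      X = rng.standard_normal((N, J))             # one attribute per alternative
      beta_true = 1.0 + 0.8 * rng.standard_normal(N)
      util = beta_true[:, None] * X + rng.gumbel(size=(N, J))
      choice = util.argmax(axis=1)

      draws = rng.standard_normal((N, R))         # simulation draws, fixed across evaluations

      def neg_sim_loglik(theta):
          b, s = theta[0], abs(theta[1])          # mean and std of the random coefficient
          beta = b + s * draws                    # (N, R) draws of beta_n
          v = beta[:, :, None] * X[:, None, :]    # (N, R, J) utilities
          p = np.exp(v - v.max(axis=2, keepdims=True))
          p /= p.sum(axis=2, keepdims=True)
          p_chosen = p[np.arange(N), :, choice].mean(axis=1)   # simulated choice probability
          return -np.sum(np.log(p_chosen + 1e-12))

      # Stage 1: heuristic global search over a wide box of parameter values.
      de = differential_evolution(neg_sim_loglik, bounds=[(-3, 3), (0.01, 3)],
                                  seed=0, maxiter=50, tol=1e-6)
      # Stage 2: gradient-based refinement from the heuristic solution.
      res = minimize(neg_sim_loglik, de.x, method="BFGS")
      print("heuristic start:", de.x, "refined solution:", res.x)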
  10. By: Deuchert, Eva; Huber, Martin
    Abstract: Many instrumental variable (IV) regressions include control variables to justify (conditional) independence of the instrument and the potential outcomes. The plausibility of conditional IV independence crucially depends on the timing when the control variables are determined. This paper systematically works through different IV models and discusses the (conditions for the) satisfaction of conditional IV independence when controlling for covariates measured (a) prior to the instrument, (b) after the treatment, or (c) both. To illustrate these identification issues, we consider an empirical application using the Vietnam War draft risk as instrument either for veteran status or education to estimate the effects of these variables on labor market and health outcomes.
    Keywords: Instrument, control variables, conditional independence, covariates
    JEL: C26 J24
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2014:39&r=ecm
  11. By: Chudik, Alexander (Federal Reserve Bank of Dallas); Grossman, Valerie (Federal Reserve Bank of Dallas); Pesaran, M. Hashem (University of Southern California)
    Abstract: This paper derives new theoretical results for forecasting with Global VAR (GVAR) models. It is shown that the presence of a strong unobserved common factor can lead to an undetermined GVAR model. To solve this problem, we propose augmenting the GVAR with additional proxy equations for the strong factors and establish conditions under which forecasts from the augmented GVAR model (AugGVAR) uniformly converge in probability (as the panel dimensions N, T → ∞ such that N/T → κ for some 0 < κ < ∞) to the infeasible optimal forecasts.
    JEL: C53 E37
    Date: 2014–11–01
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:213&r=ecm
  12. By: Tom Boot; Andreas Pick
    Abstract: We derive optimal weights for Markov switching models by weighting observations such that forecasts are optimal in the MSFE sense. We provide analytic expressions of the weights conditional on the Markov states and conditional on state probabilities. This allows us to study the effect of uncertainty around states on forecasts. It emerges that, even in large samples, forecasting performance increases substantially when the construction of optimal weights takes uncertainty around states into account. Performance of the optimal weights is shown through simulations and an application to US GNP, where using optimal weights leads to significant reductions in MSFE.
    Keywords: Markov switching models; forecasting; optimal weights; GNP forecasting
    JEL: C25 C53 E37
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:dnb:dnbwpp:452&r=ecm
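    For orientation, a sketch of the standard filtered forecast from a two-state Markov switching mean model (Hamilton filter), i.e. the state-probability-weighted mean that serves as the natural benchmark; the optimal observation weights derived in the paper are not reproduced, and the parameters are treated as known for illustration.

      # Sketch: one-step forecast from a two-state Markov switching mean model,
      # computed as the probability-weighted state mean after Hamilton filtering.
      import numpy as np
      from scipy.stats import norm

      def ms_forecast(y, mu, sigma, P):
          """One-step forecast E[y_{T+1}] from a 2-state MS mean model."""
          xi = np.array([0.5, 0.5])                  # filtered state probabilities
          for yt in y:
              pred = P.T @ xi                        # predicted state probabilities for t
              lik = norm.pdf(yt, loc=mu, scale=sigma)
              xi = pred * lik
              xi /= xi.sum()
          return (P.T @ xi) @ mu                     # probability-weighted mean for T+1

      # Example: simulate a switching mean and forecast the next observation.
      rng = np.random.default_rng(6)
      P = np.array([[0.95, 0.05], [0.10, 0.90]])     # transition matrix (rows = from-state)
      mu, sigma = np.array([1.0, -1.0]), 0.5
      s, y = 0, []
      for _ in range(300):
          s = rng.choice(2, p=P[s])
          y.append(mu[s] + sigma * rng.standard_normal())
      print("forecast:", ms_forecast(np.array(y), mu, sigma, P))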
  13. By: Vogt, Erik (Federal Reserve Bank of New York)
    Abstract: The illiquidity of long-maturity options has made it difficult to study the term structures of option spanning portfolios. This paper proposes a new estimation and inference framework for these option-implied term structures that addresses long-maturity illiquidity. By building a sieve estimator around the risk-neutral valuation equation, the framework theoretically justifies (fat-tailed) extrapolations beyond truncated strikes and between observed maturities while remaining nonparametric. New confidence intervals quantify the term structure estimation error. The framework is applied to estimating the term structure of the variance risk premium and finds that a short-run component dominates market excess return predictability.
    Keywords: equity risk premium; finance; options; predictability; sieve M estimation; state-price density; term structures; variance risk premium; VIX
    JEL: C12 C14 C58 G12 G13 G17
    Date: 2014–12–01
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:706&r=ecm
  14. By: von der Lippe, Peter
    Abstract: The New Stochastic Approach (NSA) claims, unjustly, to promote a better understanding of price index (PI) formulas by viewing them as regression coefficients. As prices in the NSA are assumed to be collected in a random sample (which is particularly at odds with official price statistics), PIs are random variables, so that not only a point estimate but also an interval estimate of a PI can be provided. However, this often-praised "main advantage" of the NSA is of little use from a practical point of view. In the NSA, goodness of fit is confused with the adequacy of a PI formula. The regression models are mostly far-fetched, rest on restrictive and unrealistic assumptions, merely replicate already known PI formulas, and say nothing about the axioms a PI satisfies or violates.
    Keywords: Index theory, price index, inflation measurement, regression, methodology of collecting macroeconomic data
    JEL: C43 C82 E01 E31
    Date: 2014–12–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:60839&r=ecm
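    A minimal sketch of the stochastic-approach idea the paper criticizes: regressing log price relatives on a constant returns the (log) Jevons index as the OLS coefficient, and the regression standard error supplies the advertised interval estimate. The prices below are made up for illustration.

      # Sketch: the price index as a regression coefficient. OLS of log price
      # relatives on a constant gives the log of the Jevons (unweighted
      # geometric mean) index, plus a standard error and interval estimate.
      import numpy as np
      from scipy import stats

      p0 = np.array([2.0, 5.0, 1.5, 8.0, 3.0])      # base-period prices (illustrative)
      p1 = np.array([2.2, 5.1, 1.8, 8.4, 2.9])      # current-period prices (illustrative)

      r = np.log(p1 / p0)                           # log price relatives
      n = len(r)
      log_index = r.mean()                          # OLS coefficient on a constant
      se = r.std(ddof=1) / np.sqrt(n)
      ci = log_index + np.array([-1, 1]) * stats.t.ppf(0.975, n - 1) * se

      print("Jevons index:", np.exp(log_index))
      print("95% interval:", np.exp(ci))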
  15. By: Krueger, Fabian (Heidelberg Institute for Theoretical Studies); Clark, Todd E. (Federal Reserve Bank of Cleveland); Ravazzolo, Francesco (Norges Bank and the BI Norwegian Business School)
    Abstract: This paper shows entropic tilting to be a flexible and powerful tool for combining medium-term forecasts from BVARs with short-term forecasts from other sources (nowcasts from either surveys or other models). Tilting systematically improves the accuracy of both point and density forecasts, and tilting the BVAR forecasts based on nowcast means and variances yields slightly greater gains in density accuracy than does just tilting based on the nowcast means. Hence entropic tilting can offer some benefits for accurately estimating the uncertainty of multi-step forecasts that incorporate nowcast information, more so for persistent than for non-persistent variables.
    Keywords: Forecasting; Prediction; Bayesian Analysis
    JEL: C11 C53 E17
    Date: 2015–01–07
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwp:1439&r=ecm
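    A sketch of entropic tilting in its standard form: predictive draws from a baseline model are reweighted, by minimizing the convex dual of the Kullback-Leibler problem, so that their mean and variance match an external nowcast. The draws, nowcast numbers, and function names are illustrative assumptions, not the paper's BVAR setup.

      # Sketch: entropic tilting of predictive draws toward a nowcast mean and
      # variance. The weights minimize KL divergence from the original draws
      # subject to the moment conditions, obtained via the convex dual.
      import numpy as np
      from scipy.optimize import minimize

      def entropic_tilt(draws, g, targets):
          """Weights w_i proportional to exp(gamma' g(x_i)) with sum_i w_i g(x_i) = targets."""
          G = np.column_stack([gk(draws) for gk in g]) - np.asarray(targets)
          dual = lambda gamma: np.log(np.mean(np.exp(G @ gamma)))
          gamma = minimize(dual, np.zeros(G.shape[1]), method="BFGS").x
          w = np.exp(G @ gamma)
          return w / w.sum()

      # Example: baseline draws for next-quarter growth, tilted to a hypothetical
      # survey nowcast with mean 2.0 and variance 0.25.
      rng = np.random.default_rng(7)
      draws = rng.normal(1.0, 1.0, size=5000)
      m, v = 2.0, 0.25
      w = entropic_tilt(draws, g=[lambda x: x, lambda x: x**2], targets=[m, v + m**2])

      print("tilted mean:", np.sum(w * draws))
      print("tilted var :", np.sum(w * draws**2) - np.sum(w * draws)**2)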
  16. By: Pan, Li; Politis, Dimitris
    Keywords: Social and Behavioral Sciences
    Date: 2014–12–18
    URL: http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt7555757g&r=ecm
  17. By: Joseph G. Altonji; Richard K. Mansfield
    Abstract: We consider the classic problem of estimating group treatment effects when individuals sort based on observed and unobserved characteristics that affect the outcome. Using a standard choice model, we show that controlling for group averages of observed individual characteristics potentially absorbs all the across-group variation in unobservable individual characteristics. We use this insight to bound the treatment effect variance of school systems and associated neighborhoods for various outcomes. Across four datasets, our most conservative estimates indicate that a 90th versus 10th percentile school system increases the high school graduation probability by between 0.047 and 0.085 and increases the college enrollment probability by between 0.11 and 0.13. We also find large effects on adult earnings. We discuss a number of other applications of our methodology, including measurement of teacher value-added.
    JEL: C20 I20 I24 R20
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:20781&r=ecm
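    A small simulation sketch of the identification idea: adding the group average of an observed individual characteristic to the outcome regression absorbs across-group variation in an unobservable driven by the same sorting, moving the estimated group treatment effect back toward its true value. Variable names and the data generating process are illustrative, not the paper's datasets.

      # Sketch: individuals sort into groups on an index that drives both an
      # observed characteristic x and an unobservable u. Controlling for the
      # group average of x absorbs the across-group variation in u.
      import numpy as np

      rng = np.random.default_rng(8)
      G, n = 300, 30                                  # groups (e.g. schools), pupils per group
      a = rng.standard_normal(G)                      # group sorting index
      treat = a + rng.standard_normal(G)              # group treatment, correlated with sorting
      x = a[:, None] + rng.standard_normal((G, n))    # observed individual characteristic
      u = a[:, None] + rng.standard_normal((G, n))    # unobserved characteristic, same sorting
      y = 1.0 * treat[:, None] + 1.0 * x + u + rng.standard_normal((G, n))   # true effect = 1

      def ols(X, y):
          return np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)[0]

      Y, Xi = y.ravel(), x.ravel()
      Ti = np.repeat(treat, n)
      Xbar = np.repeat(x.mean(axis=1), n)             # group average of the observed characteristic

      print("treatment effect, no group-mean control  :", ols(np.column_stack([Ti, Xi]), Y)[1])
      print("treatment effect, with group-mean control:", ols(np.column_stack([Ti, Xi, Xbar]), Y)[1])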

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.