nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒10‒20
24 papers chosen by
Sune Karlsson
Orebro University

  1. Optimal Bandwidth Selection for Nonparametric Conditional Distribution and Quantile Functions By Qi Li; Juan Lin; Jeffrey S. Racine
  2. Improved tests for spatial correlation By Robinson, Peter M.; Rossi, Francesca
  3. Detecting outliers in time series By Ardelean, Vlad
  4. "Nonparametric Identification and Estimation of the Number of Components in Multivariate Mixtures" By Hiroyuki Kasahara; Katsumi Shimotsu
  5. Variable selection in Cox regression models with varying coefficients By Toshio Honda; Wolfgang Karl Härdle
  6. Smooth Nonparametric Bernstein Vine Copulas By Gregor Weiß; Marcus Scheffer
  7. Let's Do It Again: Bagging Equity Premium Predictors By Eric Hillebrand; Tae-Hwy Lee; Marcelo C. Medeiros
  8. New Non-Linearity Test to Circumvent the Limitation of Volterra Expansion By Bai, Zhidong; Hui, Yongchang; Wong, Wing-Keung
  9. Lasso-type and Heuristic Strategies in Model Selection and Forecasting By Ivan Savin; Peter Winker
  10. Quasi-arithmetische Mittelwerte und Normalverteilung (Quasi-arithmetic Means and the Normal Distribution) By Klein, Ingo
  11. Effects of simultaneity on testing Granger-causality – a cautionary note about statistical problems and economic misinterpretations By Joachim Wilde
  12. Asymptotic Efficiency of Semiparametric Two-step GMM By Xiaohong Chen; Jinyong Hahn; Zhipeng Liao
  13. Does Output Gap, Labor's Share or Unemployment Rate Drive Inflation? By Lanne, Markku; Luoto, Jani
  14. On parameter estimation for critical affine processes By Matyas Barczy; Leif Doering; Zenghu Li; Gyula Pap
  15. Parametric Bootstrap Tests for Futures Price and Implied Volatility Biases with Application to Rating Livestock Margin Insurance for Dairy Cattle By Bozic, Marin; Newton, John; Thraen, Cameron S.; Gould, Brian W.
  16. Occurrence of long and short term asymmetry in stock market volatilities By Lönnbark, Carl
  17. How did Fukushima-Daiichi core meltdown change the probability of nuclear accidents? By Lina Escobar Rangel; François Lévêque
  18. Estimation of Dynamic Discrete Choice Models in Continuous Time By Peter Arcidiacono; Patrick Bayer; Jason R. Blevins; Paul B. Ellickson
  19. Asymmetry with respect to the memory in stock market volatilities By Lönnbark, Carl
  20. Econometric analysis of games with multiple equilibria By Aureo de Paula
  21. Measuring the Shadow Economy: Endogenous Switching Regression with Unobserved Separation By Lichard, Tomáš; Hanousek, Jan; Filer, Randall K.
  22. Modelling general dependence between commodity forward curves By Mikhail Zolotko; Ostap Okhrin
  23. Regime switches in the volatility and correlation of financial institutions By Kris Boudt; Jon Danielsson; Siem Jan Koopman; Andre Lucas
  24. How We Tend To Overestimate Powerlaw Tail Exponents By Nassim N. Taleb

  1. By: Qi Li; Juan Lin; Jeffrey S. Racine
    Abstract: We propose a data-driven least squares cross-validation method to optimally select smoothing parameters for the nonparametric estimation of conditional cumulative distribution functions and conditional quantile functions. We allow for general multivariate covariates that can be continuous, categorical, or a mix of the two. We provide asymptotic analysis, examine finite-sample properties via Monte Carlo simulation, and consider an application involving testing for first order stochastic dominance of children's health conditional on parental education and income.
    Date: 2012–10
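The least squares cross-validation idea in this paper can be sketched in a few lines. Everything below is illustrative, not the authors' implementation: the simulated data, the Gaussian kernel, the candidate-bandwidth grid, and the discretised squared-error criterion over a y-grid are all assumptions, and the paper's actual criterion and handling of mixed continuous/categorical covariates are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = x + rng.normal(size=n)          # y depends on x

def cdf_loo(h, y_grid):
    """Leave-one-out kernel estimate of F(y|x) at each sample point."""
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)   # Gaussian kernel
    np.fill_diagonal(k, 0.0)                                   # leave one out
    ind = (y[:, None] <= y_grid[None, :]).astype(float)        # 1{y_j <= y}
    num = k @ ind                   # row i: sum_{j != i} K_h(x_i - x_j) 1{y_j <= y}
    den = k.sum(axis=1, keepdims=True)
    return num / den

def cv_score(h):
    """Discretised squared-error CV criterion for bandwidth h."""
    y_grid = np.linspace(y.min(), y.max(), 30)
    F = cdf_loo(h, y_grid)
    emp = (y[:, None] <= y_grid[None, :]).astype(float)
    return np.mean((emp - F) ** 2)

bandwidths = np.linspace(0.1, 2.0, 20)
scores = [cv_score(h) for h in bandwidths]
h_star = bandwidths[int(np.argmin(scores))]
print(h_star)
```

The data-driven part is simply that `h_star` minimises an out-of-sample-style criterion rather than being picked by a rule of thumb.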
  2. By: Robinson, Peter M.; Rossi, Francesca
    Abstract: We consider testing the null hypothesis of no spatial autocorrelation against the alternative of first order spatial autoregression. A Wald test statistic has good first order asymptotic properties, but these may not be relevant in small or moderate-sized samples, especially as (depending on properties of the spatial weight matrix) the usual parametric rate of convergence may not be attained. We thus develop tests with more accurate size properties, by means of Edgeworth expansions and the bootstrap. The finite-sample performance of the tests is examined in Monte Carlo simulations.
    Keywords: Spatial Autocorrelation; Ordinary Least Squares; Hypothesis Testing; Edgeworth Expansion; Bootstrap
    JEL: C12 C21
    Date: 2012–06–22
  3. By: Ardelean, Vlad
    Abstract: In parametric time series analysis there is the implicit assumption of no aberrant observations, so-called outliers. Outliers are observations that seem to be inconsistent with the assumed model. When these observations are included to estimate the model parameters, the resulting estimates are biased. The fact that markets have been affected by shocks (e.g. the East Asian crisis, the dot-com bubble, the sub-prime mortgage crisis) makes the assumption that no outlier is present questionable. This paper addresses the problem of detecting outlying observations in time series. Outliers can be understood as a short transient change of the underlying parameters. Unfortunately, tests designed to detect structural breaks cannot be used to find outlying observations. To overcome this problem, a test normally used to detect structural breaks is modified. This test is based on the cumulative sum (CUSUM) of the squared observations. In comparison to a likelihood-ratio test, neither the underlying model nor the functional form of the outliers has to be specified. In a simulation study the finite sample behaviour of the proposed test is analysed. The simulation study shows that the test has reasonable power against a variety of alternatives. Moreover, to illustrate the behaviour of the proposed test we analyse the returns of the Volkswagen stock.
    Keywords: GARCH processes, Detection of outliers, CUSUM-type test
    Date: 2012
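A minimal CUSUM-of-squares statistic can be written down directly. This is an iid sketch, not the paper's GARCH-adapted test: the MAD-based scale normalisation, the simulated series, and the outlier size are all assumptions chosen for illustration (a robust scale is used so the outlier does not inflate the denominator and mask itself).

```python
import numpy as np

def cusum_sq_stat(x):
    """Max absolute CUSUM of squared observations, Brownian-bridge style scaling."""
    x2 = np.asarray(x) ** 2
    n = len(x2)
    s = np.cumsum(x2) - np.arange(1, n + 1) / n * x2.sum()
    # robust (MAD-based) scale so the outlier itself cannot mask the test
    mad = np.median(np.abs(x2 - np.median(x2)))
    return np.abs(s).max() / (1.4826 * mad * np.sqrt(n))

rng = np.random.default_rng(1)
clean = rng.normal(size=500)
contaminated = clean.copy()
contaminated[250] += 15.0          # one transient aberrant observation

print(cusum_sq_stat(clean), cusum_sq_stat(contaminated))
```

A single large observation shifts the cumulative sum of squares sharply at its position, so the contaminated series produces a much larger statistic than the clean one.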
  4. By: Hiroyuki Kasahara (Department of Economics, University of British Columbia); Katsumi Shimotsu (Faculty of Economics, University of Tokyo)
    Abstract: This article analyzes the identifiability of the number of components in k-variate, M-component finite mixture models in which each component distribution has independent marginals, including models in latent class analysis. Without making parametric assumptions on the component distributions, we investigate how one can identify the number of components from the distribution function of the observed data. When k ≥ 2, a lower bound on the number of components (M) is nonparametrically identifiable from the rank of a matrix constructed from the distribution function of the observed variables. Building on this identification condition, we develop a procedure to consistently estimate a lower bound on the number of components.
    Date: 2012–10
  5. By: Toshio Honda; Wolfgang Karl Härdle
    Abstract: We deal with two kinds of Cox regression models with varying coefficients. The coefficients vary with time in one model. In the other model, there is an important random variable called an index variable and the coefficients vary with the variable. In both models, we have p-dimensional covariates and p increases moderately. However, it is the case that only a small part of the covariates are relevant in these situations. We carry out variable selection and estimation of the coefficient functions by using the group SCAD-type estimator and the adaptive group Lasso estimator. We examine the theoretical properties of the estimators, especially the L2 convergence rate, the sparsity, and the oracle property. Simulation studies and a real data analysis show the performance of these new techniques.
    Keywords: Cox regression model, high-dimensional data, sparsity, oracle estimator, B-splines, group SCAD, adaptive group Lasso, L2 convergence rate
    JEL: C14 C24
    Date: 2012–10
  6. By: Gregor Weiß; Marcus Scheffer
    Abstract: We propose to use nonparametric Bernstein copulas as bivariate pair-copulas in high-dimensional vine models. The resulting smooth and nonparametric vine copulas completely obviate the error-prone need for choosing the pair-copulas from parametric copula families. By means of a simulation study and an empirical analysis of financial market data, we show that our proposed smooth nonparametric vine copula model is superior to competing parametric vine models calibrated via Akaike's Information Criterion.
    Date: 2012–10
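The bivariate building block — an empirical copula smoothed by Bernstein polynomials — can be sketched as follows. The simulated sample, the polynomial order m, and the rank-based pseudo-observations are illustrative assumptions; the paper embeds such nonparametric pair-copulas in a full vine model, which this sketch does not attempt.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(7)
n, m = 500, 10
# dependent sample: a common factor induces positive dependence
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)
u = (np.argsort(np.argsort(x)) + 1) / (n + 1)   # pseudo-observations (ranks)
v = (np.argsort(np.argsort(y)) + 1) / (n + 1)

def emp_copula(a, b):
    """Empirical copula at (a, b)."""
    return np.mean((u <= a) & (v <= b))

def bernstein_copula(a, b):
    """Smooth the empirical copula with Bernstein polynomial weights of order m."""
    pa = [comb(m, k) * a**k * (1 - a)**(m - k) for k in range(m + 1)]
    pb = [comb(m, l) * b**l * (1 - b)**(m - l) for l in range(m + 1)]
    return sum(pa[k] * pb[l] * emp_copula(k / m, l / m)
               for k in range(m + 1) for l in range(m + 1))

print(bernstein_copula(0.5, 0.5))   # above 0.25 under positive dependence
```

No parametric copula family is chosen anywhere: the dependence structure comes entirely from the smoothed empirical copula, which is the point of the approach.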
  7. By: Eric Hillebrand (Aarhus University and CREATES); Tae-Hwy Lee (University of California, Riverside); Marcelo C. Medeiros (Pontifical Catholic University of Rio de Janeiro)
    Abstract: The literature on excess return prediction has considered a wide array of estimation schemes, among them unrestricted and restricted regression coefficients. We consider bootstrap aggregation (bagging) to smooth parameter restrictions. Two types of restrictions are considered: positivity of the regression coefficient and positivity of the forecast. Bagging constrained estimators can have smaller asymptotic mean-squared prediction errors than forecasts from a restricted model without bagging. Monte Carlo simulations show that forecast gains can be achieved in realistic sample sizes for the stock return problem. In an empirical application using the data set of Campbell, J., and S. Thompson (2008): “Predicting the Equity Premium Out of Sample: Can Anything Beat the Historical Average?”, Review of Financial Studies 21, 1511-1531, we show that we can improve the forecast performance further by smoothing the restriction through bagging.
    Keywords: Constraints on predictive regression function, Bagging, Asymptotic MSE, Equity premium; Out-of-sample forecasting, Economic value functions.
    JEL: C5 E4 G1
    Date: 2012–09–30
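The mechanics of smoothing a parameter restriction by bagging can be sketched for the slope-positivity case. The data-generating process, the bootstrap count, and the evaluation point below are invented for illustration; the paper also treats forecast-positivity restrictions and derives asymptotic MSE comparisons that this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n, B = 100, 500
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)   # weak predictor, as in equity premium settings

def restricted_forecast(xs, ys, x_new):
    """OLS forecast with the slope truncated at zero (positivity restriction)."""
    b = np.cov(xs, ys, ddof=1)[0, 1] / xs.var(ddof=1)
    b = max(b, 0.0)
    return ys.mean() + b * (x_new - xs.mean())

x_new = 1.0
plain = restricted_forecast(x, y, x_new)

# bagging: average the restricted forecast over bootstrap resamples,
# which smooths the hard truncation of the slope at zero
idx = rng.integers(0, n, size=(B, n))
bagged = np.mean([restricted_forecast(x[i], y[i], x_new) for i in idx])
print(plain, bagged)
```

The hard `max(b, 0)` rule is a discontinuous function of the data; averaging it over bootstrap samples replaces the kink with a smooth shrinkage, which is where the MSE gains come from.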
  8. By: Bai, Zhidong; Hui, Yongchang; Wong, Wing-Keung
    Abstract: In this article we propose a quick, efficient, and easy method to detect whether a time series Yt possesses any nonlinear feature. The advantage of our proposed nonlinearity test is that one is not required to know the exact nonlinear features and the detailed nonlinear forms of Yt. Our proposed test can also be used to check whether the model, linear or nonlinear, hypothesized for the variable is appropriate, as long as the residuals of the model being used can be estimated. Our simulation results show that our proposed test is stable and powerful, while our illustration on Wolf's sunspot numbers is consistent with the findings from the existing literature.
    Keywords: linearity; nonlinearity; U-statistics; Volterra expansion
    JEL: C32 C14 C01
    Date: 2012–08–01
  9. By: Ivan Savin (DFG Research Training Program "The Economics of Innovative Change", Friedrich Schiller University Jena and Max Planck Institute of Economics); Peter Winker (Justus Liebig University Giessen, and Centre for European Economic Research, Mannheim)
    Abstract: Several approaches for subset recovery and improved forecasting accuracy have been proposed and studied. One way is to apply a regularization strategy and solve the model selection task as a continuous optimization problem. One of the most popular approaches in this research field is given by Lasso-type methods. An alternative approach is based on information criteria. In contrast to the Lasso, these methods also work well in the case of highly correlated predictors. However, this performance can be impaired by the merely asymptotic consistency of the information criteria. The resulting discrete optimization problems exhibit a high computational complexity. Therefore, a heuristic optimization approach (Genetic Algorithm) is applied. The two strategies are compared by means of a Monte Carlo simulation study together with an empirical application to leading business cycle indicators in Russia and Germany.
    Keywords: Adaptive Lasso, Elastic net, Forecasting, Genetic algorithms, Heuristic methods, Lasso, Model selection
    JEL: C51 C52 C53 C61 C63
    Date: 2012–10–11
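The heuristic side of the comparison — a genetic algorithm searching over variable-inclusion masks scored by an information criterion — can be sketched compactly. The design matrix, true coefficients, BIC as the criterion, and all GA tuning values (population size, truncation selection, mutation rate) are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 120, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[0, 3]] = [1.5, -2.0]    # true model uses x0 and x3 only
y = X @ beta + rng.normal(size=n)

def bic(mask):
    """BIC of the OLS fit using the columns flagged in mask."""
    k = int(mask.sum())
    Xm = X[:, mask.astype(bool)] if k else np.ones((n, 1))
    coef, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    rss = np.sum((y - Xm @ coef) ** 2)
    return n * np.log(rss / n) + (k + 1) * np.log(n)

# bare-bones GA: truncation selection, uniform crossover, bit-flip mutation
pop = rng.integers(0, 2, size=(30, p))
best_mask, best_val = None, np.inf
for _ in range(40):
    fit = np.array([bic(m) for m in pop])
    if fit.min() < best_val:                      # remember the best mask seen
        best_val, best_mask = fit.min(), pop[fit.argmin()].copy()
    parents = pop[np.argsort(fit)[:10]]           # lower BIC is fitter
    a = parents[rng.integers(0, 10, size=30)]
    b = parents[rng.integers(0, 10, size=30)]
    children = np.where(rng.random((30, p)) < 0.5, a, b)   # uniform crossover
    flip = rng.random((30, p)) < 0.05                       # mutation
    pop = np.where(flip, 1 - children, children)

print(best_mask, best_val)
```

The GA never relaxes the problem to a continuous one the way the Lasso does; it searches the discrete space of subsets directly, which is exactly the computational burden the abstract refers to.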
  10. By: Klein, Ingo
    Abstract: J.M. Keynes (1911) shows what distributions look like for which the arithmetic, the geometric and the harmonic mean are the most probable values. We propose a general class of distributions for which the quasi-arithmetic means are ML-estimators, such that these distributions can be transformed into a normal or a truncated normal distribution. As special cases we get, for example, the generalized logarithmic distributions introduced by Chen (1995).
    Keywords: ML-estimator, quasi-arithmetic mean, exponential family, generalized logarithmic distribution, inverse transformed normal distribution
    Date: 2012
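A quasi-arithmetic mean is M_g(x) = g^{-1}((1/n) Σ g(x_i)); the arithmetic, geometric and harmonic means correspond to g(t) = t, log t and 1/t respectively. A minimal sketch of this definition (the sample values are arbitrary):

```python
import math

def quasi_arithmetic_mean(xs, g, g_inv):
    """M_g(x) = g^{-1}( (1/n) * sum_i g(x_i) )"""
    return g_inv(sum(map(g, xs)) / len(xs))

data = [1.0, 2.0, 4.0]

arithmetic = quasi_arithmetic_mean(data, lambda t: t, lambda t: t)
geometric  = quasi_arithmetic_mean(data, math.log, math.exp)
harmonic   = quasi_arithmetic_mean(data, lambda t: 1 / t, lambda t: 1 / t)

print(arithmetic, geometric, harmonic)   # 7/3, 2.0, 12/7
```

The paper's contribution is the converse direction: characterising the distributions for which such an M_g is the ML-estimator of location.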
  11. By: Joachim Wilde (Universitaet Osnabrueck)
    Abstract: Interpreting Granger causality as economic causality implies that the underlying VAR model is a structural economic model. However, this is wrong if simultaneity occurs. The magnitude and stability of the possible errors are analysed in a simulation study. It is shown that economic misinterpretations of tests of Granger causality can occur with probability one for realistic parameter values. Furthermore, the power of the test can be rather low even with a sample size of T=50.
    Keywords: Granger causality, test, simultaneity, instantaneous causality
    JEL: C32
    Date: 2012–10–12
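For reference, the test under scrutiny can be sketched in its simplest one-lag form. The simulated bivariate system and the lag length are illustrative assumptions — and note the paper's warning: a significant statistic here says nothing about structural causality when simultaneity is present.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # x Granger-causes y

def granger_f(y, x, lags=1):
    """F-statistic of H0: lagged x does not help predict y (one-lag sketch)."""
    Y = y[lags:]
    Z_r = np.column_stack([np.ones(len(Y)), y[:-lags]])     # restricted model
    Z_u = np.column_stack([Z_r, x[:-lags]])                  # adds lagged x
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Z_r), rss(Z_u)
    df = len(Y) - Z_u.shape[1]
    return (rss_r - rss_u) / (rss_u / df)

print(granger_f(y, x), granger_f(x, y))   # large in the causal direction
```

The first statistic should be far above the 5% critical value of an F(1, df) distribution, the second typically not; the paper's point is that this pattern can be misleading once the innovations are contemporaneously correlated.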
  12. By: Xiaohong Chen (Cowles Foundation, Yale University); Jinyong Hahn (Dept. of Economics, UCLA); Zhipeng Liao (Dept. of Economics, UCLA)
    Abstract: In this note, we characterize the semiparametric efficiency bound for a class of semiparametric models in which the unknown nuisance functions are identified via nonparametric conditional moment restrictions with possibly non-nested or over-lapping conditioning sets, and the finite dimensional parameters are potentially over-identified via unconditional moment restrictions involving the nuisance functions. We discover a surprising result that semiparametric two-step optimally weighted GMM estimators achieve the efficiency bound, where the nuisance functions could be estimated via any consistent nonparametric procedures in the first step. Regardless of whether the efficiency bound has a closed form expression or not, we provide easy-to-compute sieve based optimal weight matrices that lead to asymptotically efficient two-step GMM estimators.
    Keywords: Overlapping Information Sets; Semiparametric Efficiency; Two-Step GMM
    JEL: C14 C31 C32
    Date: 2012–10
  13. By: Lanne, Markku; Luoto, Jani
    Abstract: We propose a new methodology for ranking in probability the commonly proposed drivers of inflation in the New Keynesian model. The approach is based on Bayesian model selection among restricted VAR models, each of which embodies only one or none of the candidate variables as the driver. Simulation experiments suggest that our procedure is superior to the previously used conventional pairwise Granger causality tests in detecting the true driver. Empirical results lend little support to labor share, output gap or unemployment rate as the driver of U.S. inflation.
    Keywords: Inflation; New Keynesian Phillips curve; Bayesian variable selection
    JEL: C32 C52 E31 C11
    Date: 2012
  14. By: Matyas Barczy; Leif Doering; Zenghu Li; Gyula Pap
    Abstract: First we provide a simple set of sufficient conditions for the weak convergence of scaled affine processes with state space $R_+ \times R^d$. We specialize our result to one-dimensional continuous state branching processes with immigration. As an application, we study the asymptotic behavior of least squares estimators of some parameters of a two-dimensional critical affine diffusion process.
    Date: 2012–10
  15. By: Bozic, Marin; Newton, John; Thraen, Cameron S.; Gould, Brian W.
    Abstract: A common approach in the literature, whether the investigation is about futures price risk premiums or biases in option-based implied volatility coefficients, is to use samples in which consecutive observations can be regarded as uncorrelated. That will be the case for non-overlapping forecast horizons constructed by either focusing on short time-to-maturity contracts or excluding some data. In this article we propose a parametric bootstrap procedure for uncovering futures and options biases in data characterized by overlapping horizons and correlated prediction errors. We apply our method to test hypotheses that futures prices are efficient and unbiased predictors of terminal prices, and that squared implied volatility, multiplied by time left to option expiry, is an unbiased predictor of terminal log-price variance. We apply the test to corn, soybean meal and Class III milk futures and options data for the period 2000-2011. We find evidence for downward bias in soybean meal futures, as well as downward volatility bias in Class III milk options. The importance of these results is illustrated with the example of premium determination for Livestock Gross Margin Insurance for Dairy Cattle (LGM-Dairy).
    Keywords: parametric bootstrap, risk premium, volatility bias, revenue insurance, LGM-Dairy, Demand and Price Analysis, Research Methods/Statistical Methods, Risk and Uncertainty
    Date: 2012–10
  16. By: Lönnbark, Carl (Department of Economics, Umeå University)
    Abstract: We introduce the notions of short and long term asymmetric effects in volatilities. By short term asymmetry we mean the conventional kind, i.e. the asymmetric response of current volatility to the most recent return shocks. However, there may also be asymmetries in the way the effect of past return shocks propagates over time. We refer to this as long term asymmetry. We propose a model that enables the study of such a feature. In an empirical application using stock market index data we find evidence of the joint presence of short and long term asymmetric effects.
    Keywords: Financial econometrics; GARCH; memory; nonlinear; risk prediction; time series
    JEL: C22 C51 C58 G15 G17
    Date: 2012–10–03
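The conventional short term asymmetry is the kind captured by, for example, a GJR-GARCH(1,1) update, sketched below with invented parameter values. The long term asymmetry the paper introduces — asymmetry in how shocks propagate over time — is precisely what this one-step recursion does not capture.

```python
def gjr_variance(prev_sigma2, shock, omega=0.02, alpha=0.05, gamma=0.1, beta=0.85):
    """GJR-GARCH(1,1) update: negative shocks get the extra gamma term."""
    return omega + (alpha + gamma * (shock < 0)) * shock ** 2 + beta * prev_sigma2

up   = gjr_variance(1.0, +2.0)
down = gjr_variance(1.0, -2.0)
print(up, down)   # 1.07 vs 1.47: the negative shock raises volatility more
```

Equal-sized shocks of opposite sign produce different next-period variances — the short term effect; the paper's model additionally lets their influence decay at different rates.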
  17. By: Lina Escobar Rangel (CERNA - Centre d'économie industrielle - Mines ParisTech); François Lévêque (CERNA - Centre d'économie industrielle - Mines ParisTech)
    Abstract: What increase in the probability of a nuclear accident does the Fukushima Dai-ichi event entail? Many models and approaches can be used to answer this question. Poisson regression as well as Bayesian updating are good candidates. However, they fail to address the issue properly because the independence assumption on which they are based is violated. We propose a Poisson Exponentially Weighted Moving Average (PEWMA) model, based on a state-space time series approach, to overcome this critical drawback. We find that the risk of a core meltdown accident for the next year in the world increased by a factor of ten owing to the new major accident that took place in Japan in 2011.
    Keywords: nuclear risk, nuclear safety, time series count data
    Date: 2012–10–08
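The core intuition — recent observations should dominate the estimated accident rate — can be sketched with an exponentially weighted update. The count series, smoothing weight, and initial rate below are invented for illustration; the actual PEWMA model is a full state-space specification with a Poisson measurement density, not this plain recursion.

```python
import numpy as np

# yearly counts of major accidents (illustrative data, not the paper's)
counts = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 3])   # spike in the final year

def ewma_rate(y, omega=0.8, lam0=0.5):
    """Exponentially weighted accident-rate path: recent years dominate."""
    lam, path = lam0, []
    for yt in y:
        lam = omega * lam + (1 - omega) * yt
        path.append(lam)
    return np.array(path)

rates = ewma_rate(counts)
print(rates[-2], rates[-1])   # rate before vs after the spike
```

A plain Poisson MLE would average the spike over the whole sample; the weighted update lets a single recent event move the estimated rate by an order of magnitude, which is the qualitative behaviour the paper reports.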
  18. By: Peter Arcidiacono; Patrick Bayer; Jason R. Blevins; Paul B. Ellickson
    Abstract: This paper provides a method for estimating large-scale dynamic discrete choice models within a continuous time framework. An advantage of our model is that state changes occur sequentially, rather than simultaneously, avoiding a substantial curse of dimensionality that arises in multi-agent settings. Eliminating this computational bottleneck is the key to providing a seamless link between estimating the model and performing post-estimation counterfactuals. While recently developed two-step estimation techniques have made it possible to estimate large-scale problems, solving for equilibria remains computationally challenging. By modeling decisions in continuous time, we are able to take advantage of the recent advances in estimation while preserving a tight link between estimation and policy experiments. We address the most commonly encountered situation in empirical work in which only discrete-time data are available and the actual sequence of events that occur between two points in time is unobserved. We apply our techniques to examine the effects of Walmart’s entry into the retail grocery industry, showing that even the threat of entry by Walmart has a substantial effect on market structure.
    JEL: C13 C35 L11 L13
    Date: 2012–10
  19. By: Lönnbark, Carl (Department of Economics, Umeå University)
    Abstract: The empirically most relevant stylized facts when it comes to modeling time varying financial volatility are the asymmetric response to return shocks and the long memory property. Until now, however, these have largely been modeled in isolation. To more flexibly capture asymmetry also with respect to the memory structure we introduce a new model and apply it to stock market index data. We find that, although the effect on volatility of negative return shocks is higher than for positive ones, the latter are more persistent and relatively quickly come to dominate negative ones.
    Keywords: Financial econometrics; GARCH; news impact; nonlinear; risk prediction; time series
    JEL: C12 C51 C58 G10 G15
    Date: 2012–10–03
  20. By: Aureo de Paula (Institute for Fiscal Studies and University of Pennsylvania)
    Abstract: This article reviews the recent literature on the econometric analysis of games where multiple solutions are possible. Multiplicity does not necessarily preclude the estimation of a particular model (and in certain cases even improves its identification), but ignoring it can lead to misspecifications. The survey starts with a general characterisation of structural models that highlights how multiplicity affects the classical paradigm. Because the information structure is an important guide to identification and estimation strategies, I discuss games of complete and incomplete information separately. Whereas many of the techniques discussed in the article can be transported across different information environments, some of them are specific to particular models. I also survey models of social interactions in a different section. I close with a brief discussion of post-estimation issues and research prospects.
    Keywords: Identification, multiplicity, games, social interactions
    Date: 2012–10
  21. By: Lichard, Tomáš (CERGE-EI); Hanousek, Jan (CERGE-EI); Filer, Randall K. (Hunter College/CUNY)
    Abstract: We develop an estimator of unreported income, perhaps due to tax evasion, that does not depend on as strict identifying assumptions as previous estimators based on microeconomic data. The standard identifying assumption that the self-employed underreport income whereas wage and salary workers do not is likely to fail in countries where employees are often paid under the table or engage in corrupt activities. Assuming that evading individuals have a higher consumption-income gap than non-evading ones, due to underreporting both to tax authorities and in surveys, an endogenous switching model with unknown sample separation enables the estimation of consumption-income gaps for both underreporting and truthful households. This avoids the need to identify non-evading and evading groups ex-ante. This methodology is applied to data from Czech and Slovak household budget surveys and shows that estimated evasion is substantially higher than found using previous methodologies.
    Keywords: shadow economy, switching regression, income-consumption gap
    JEL: C34 E01 H26 J39
    Date: 2012–10
  22. By: Mikhail Zolotko; Ostap Okhrin
    Abstract: This study proposes a novel framework for the joint modelling of commodity forward curves. Its key contribution is twofold. First, dynamic correlation models are applied in this context as part of the modelling scheme. Second, we introduce a family of dynamic conditional correlation models based on hierarchical Archimedean copulae (HAC DCC), which are flexible, but parsimonious instruments that capture a wide range of dynamic dependencies. The conducted analysis allows us to obtain precise out-of-sample forecasts of the distribution of the returns of various commodity futures portfolios. The Value-at-Risk analysis shows that HAC DCC models outperform other introduced benchmark models on a consistent basis.
    Keywords: commodity forward curves, multivariate GARCH, hierarchical Archimedean copula, Value-at-Risk
    JEL: C13 C53 Q40
    Date: 2012–10
  23. By: Kris Boudt (KU Leuven; Lessius; V.U. University Amsterdam); Jon Danielsson (London School of Economics); Siem Jan Koopman (V.U. University Amsterdam; Tinbergen Institute); Andre Lucas (V.U. University Amsterdam; Tinbergen Institute)
    Abstract: We propose a parsimonious regime switching model to characterize the dynamics in the volatilities and correlations of US deposit banks' stock returns over 1994-2011. A first innovative feature of the model is that the within-regime dynamics in the volatilities and correlation depend on the shape of the Student t innovations. Secondly, the across-regime dynamics in the transition probabilities of both volatilities and correlations are driven by macro-financial indicators such as the Saint Louis Financial Stability index, VIX or TED spread. We find strong evidence of time-variation in the regime switching probabilities and the within-regime volatility of most banks. The within-regime dynamics of the equicorrelation seem to be constant over the period.
    Date: 2012–10
  24. By: Nassim N. Taleb
    Abstract: In the presence of a layer of metaprobabilities (from uncertainty concerning the parameters), the asymptotic tail exponent corresponds to the lowest possible tail exponent regardless of its probability. The problem explains "Black Swan" effects, i.e., why measurements tend to chronically underestimate tail contributions, rather than merely deliver imprecise but unbiased estimates.
    Date: 2012–10
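The point can be illustrated with a Hill estimator applied to a two-point mixture over the tail exponent — an invented stand-in for a "layer of metaprobabilities". The mixture's asymptotic tail exponent is the lower value (1.5), so reporting the average of the candidate exponents (2.25) overestimates the exponent and hence understates tail contributions. The sample sizes, exponents, and tail fraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def hill(x, k):
    """Hill estimator of the tail exponent from the k largest observations."""
    xs = np.sort(x)[-k:]
    return 1.0 / np.mean(np.log(xs / xs[0]))

n = 20000
pareto = lambda a, m: (1 - rng.random(m)) ** (-1 / a)   # Pareto(alpha=a) draws
# parameter uncertainty as a mixture: alpha is 3 or 1.5 with equal weight
mixture = np.concatenate([pareto(3.0, n // 2), pareto(1.5, n // 2)])

a_single = hill(pareto(1.5, n), n // 20)
a_mix = hill(mixture, n // 20)
print(a_single, a_mix)   # the mixture's tail is governed by the lower alpha
```

The Hill estimate for the mixture sits near 1.5, far below the naive average 2.25: the fattest-tailed component dominates the extremes regardless of its mixing weight.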

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.