nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒02‒17
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Bias-corrected estimation in mildly explosive autoregressions By Kruse, Yves Robinson; Kaufmann, Hendrik
  2. Theory for a Multivariate Markov-Switching GARCH Model with an Application to Stock Markets By Haas, Markus; Liu, Ji-Chun
  3. The wrong skewness problem in stochastic frontier models: A new approach By Manner, Hans; Hafner, Christian; Simar, Leopold
  4. Testing heteroskedastic time series for normality By Demetrescu, Matei; Kruse, Robinson
  5. Fixed-b Asymptotics for t-Statistics in the Presence of Time-Varying Volatility By Hanck, Christoph; Demetrescu, Matei; Kruse, Robinson
  6. A Generalized Two-Part Model for Fractional Response Variables with Excess Zeros By Schwiebert, Jörg; Wagner, Joachim
  7. Likelihood based inference and prediction in spatio-temporal panel count models for urban crimes By Vogler, Jan; Liesenfeld, Roman; Richard, Jean-Francois
  8. Flexible Modeling of Binary Data Using the Log-Burr Link By Kaeding, Matthias
  9. Extended Yule-Walker identification of VARMA models with single- or mixed-frequency data By Zadrozny, Peter A.
  10. Signaling Crises: How to Get Good Out-of-Sample Performance Out of the Early Warning System By von Schweinitz, Gregor; Sarlin, Peter
  11. Structural Estimation of the Scoring Auction Model By NAKABAYASHI Jun; HIROSE Yohsuke
  12. Note on Higher-Order Statistics for the Pruned-State-Space of nonlinear DSGE models By Mutschler, Willi
  13. A new method for the correction of test scores manipulation By Santiago Pereda Fernández
  14. Tests of Non-Causality in a Frequency Band By Schreiber, Sven; Breitung, Jörg
  15. Measurement Error in Subjective Expectations and the Empirical Content of Economic Models By von Gaudecker, Hans-Martin; Drerup, Tilman; Enke, Benjamin

  1. By: Kruse, Yves Robinson; Kaufmann, Hendrik
    Abstract: This paper provides a comprehensive Monte Carlo comparison of different finite-sample bias-correction methods for autoregressive processes. We consider situations where the process is either mildly explosive or has a unit root. The case of highly persistent stationary processes is also studied. We compare the empirical performance of the plain OLS estimator with an OLS and a Cauchy estimator based on recursive demeaning, as well as an estimator based on second differencing. In addition, we consider three different approaches for bias-correction for the OLS estimator: (i) bootstrap, (ii) jackknife and (iii) indirect inference. The estimators are evaluated in terms of bias and root mean squared error (RMSE) in a variety of practically relevant settings. Our findings suggest that the indirect inference method clearly performs best in terms of RMSE for all considered orders of integration. If bias-correction abilities are solely considered, the jackknife works best for stationary and unit root processes. For the explosive case, the bootstrap and the indirect inference can be recommended. As an empirical application, we study Asian stock market overvaluation during bubbles and emphasize the importance of bias-correction for explosive series.
    JEL: C13 C22 G12
    Date: 2015
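The jackknife idea behind correction (ii) can be sketched in a few lines. The following is an illustrative implementation for a simple AR(1) without intercept, not the paper's exact procedure; the subsample split m = 2 and the simulated near-unit-root data are assumptions made purely for illustration.

```python
import numpy as np

def ols_ar1(y):
    """Plain OLS slope estimate for an AR(1) without intercept."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def jackknife_ar1(y, m=2):
    """Jackknife bias-corrected AR(1) estimate from m non-overlapping subsamples.

    The full-sample estimator has bias O(1/T) while each subsample of
    length T/m has bias O(m/T); the weights m/(m-1) and -1/(m-1) cancel
    the leading bias term.
    """
    rho_full = ols_ar1(y)
    rho_sub = np.mean([ols_ar1(s) for s in np.array_split(y, m)])
    return (m / (m - 1)) * rho_full - (1 / (m - 1)) * rho_sub

# Illustration on a simulated highly persistent AR(1)
rng = np.random.default_rng(0)
T, rho = 200, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()

print(ols_ar1(y), jackknife_ar1(y))
```

The OLS estimate is typically biased downward for persistent series; the jackknife combination pushes it back toward the true value.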
  2. By: Haas, Markus; Liu, Ji-Chun
    Abstract: We consider a multivariate Markov-switching GARCH model which allows for regime-specific volatility dynamics, leverage effects, and correlation structures. Stationarity conditions are derived, and consistency of the maximum likelihood estimator (MLE) is established under the assumption of Gaussian innovations. A Lagrange Multiplier (LM) test for correct specification of the correlation dynamics is devised, and a simple recursion for computing multi-step-ahead conditional covariance matrices is provided. The theory is illustrated with an application to global stock market and real estate equity returns. The empirical analysis highlights the importance of the conditional distribution in Markov-switching time series models. Specifications with Student's t innovations dominate their Gaussian counterparts both in- and out-of-sample. The dominating specification appears to be a two-regime Student's t process with correlations which are higher in the turbulent (high-volatility) regime.
    JEL: C32 C51 C58
    Date: 2015
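A stripped-down flavor of the regime-switching mechanism can be simulated in a few lines: two Markov regimes with constant within-regime volatility (no GARCH dynamics, leverage, or correlations, and all parameter values assumed for illustration).

```python
import numpy as np

# Transition matrix: P[i, j] = Prob(next regime = j | current regime = i);
# regime 0 is calm (low volatility), regime 1 turbulent (high volatility).
P = np.array([[0.98, 0.02],
              [0.05, 0.95]])
sigma = np.array([0.8, 2.5])   # regime-specific volatilities (assumed)

rng = np.random.default_rng(1)
T = 5000
s = np.zeros(T, dtype=int)     # regime path, starting in the calm regime
r = np.zeros(T)                # returns
for t in range(T):
    if t > 0:
        s[t] = rng.choice(2, p=P[s[t - 1]])
    r[t] = sigma[s[t]] * rng.standard_normal()

# Ergodic regime probabilities: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
```

Even this minimal version produces the fat-tailed, volatility-clustered returns that motivate the full multivariate model in the paper.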
  3. By: Manner, Hans; Hafner, Christian; Simar, Leopold
    Abstract: Stochastic frontier models are widely used to measure, e.g., technical efficiencies of firms. The classical stochastic frontier model often suffers from the empirical artefact that the residuals of the production function may have a positive skewness, whereas a negative one is expected under the model, which leads to estimated full efficiencies of all firms. We propose a new approach to the problem by generalizing the distribution used for the inefficiency variable. This generalized stochastic frontier model allows the sample data to have the wrong skewness while estimating well-defined and non-degenerate efficiency measures. We discuss the statistical properties of the model and propose a test for the symmetry of the error term (no inefficiency). We provide a simulation study to show that our model delivers estimators of efficiency with smaller bias than those of the classical model even if the population skewness has the correct sign. Finally, we apply the model to data on the U.S. textile industry for 1958-2005, and show that for a number of years our model suggests technical efficiencies well below the frontier, while the classical one estimates no inefficiency in those years.
    JEL: C13 C18 D24
    Date: 2015
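The "wrong skewness" diagnostic turns on the sign of the third moment of the OLS residuals, which is easy to check directly. A minimal sketch on simulated normal/half-normal frontier data (all parameter values assumed for illustration); under the classical model the residual skewness should come out negative:

```python
import numpy as np

def skewness(x):
    """Sample skewness (third standardized central moment)."""
    d = np.asarray(x, dtype=float) - np.mean(x)
    return np.mean(d**3) / np.mean(d**2) ** 1.5

rng = np.random.default_rng(2)
n = 10_000
x = rng.uniform(1, 10, n)
v = 0.3 * rng.standard_normal(n)            # symmetric two-sided noise
u = 0.8 * np.abs(rng.standard_normal(n))    # half-normal inefficiency, u >= 0
y = 1.0 + 0.5 * np.log(x) + v - u           # frontier minus inefficiency

# OLS residuals of log-output on log-input
X = np.column_stack([np.ones(n), np.log(x)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print(skewness(resid))   # negative here; a positive value in real data
                         # is the "wrong skewness" problem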
  4. By: Demetrescu, Matei; Kruse, Robinson
    Abstract: Normality testing is an evergreen topic in statistics, econometrics, and other disciplines. The paper focuses on testing economic time series for normality in a robust way, taking specific data features such as serial dependence and time-varying volatility into account. Here, we suggest tests based on raw moments of the probability integral transform of the standardized time series. The use of raw moments is advantageous as they are quite sensitive to deviations from the null other than asymmetry and excess kurtosis. To standardize the series, nonparametric estimators of the (time-varying) variance may be used, but the mean as a function of time has to be estimated parametrically. Short-run dynamics are taken into account using the Heteroskedasticity and Autocorrelation Robust [HAR] approach of Kiefer and Vogelsang (2005, ET). The effect of estimation uncertainty arising from estimated standardization is accounted for by providing a necessary modification. In a simulation study, we compare the suggested tests to a benchmark test by Bai and Ng (2005, JBES). The results show that the new tests perform well in terms of size (which is mainly due to the adopted fixed-b framework for long-run covariance estimation), but also in terms of power. An empirical application to G7 industrial production growth rates sheds further light on the empirical usefulness and limitations of the proposed tests.
    JEL: C22 C46 C52
    Date: 2015
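The core idea — under normality, raw moments of the probability integral transform should match those of a uniform variable, E[U^k] = 1/(k+1) — can be sketched as follows. The HAR/fixed-b corrections and the adjustment for estimated standardization in the paper are omitted here.

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(z):
    """Standard normal CDF, applied elementwise."""
    return np.array([0.5 * (1.0 + erf(x / sqrt(2.0))) for x in z])

def pit_moment_stats(y, orders=(1, 2, 3, 4)):
    """Raw moments of the probability integral transform of a standardized
    series, paired with their uniform benchmarks 1/(k+1).  A sketch only:
    serial dependence and estimation uncertainty are ignored."""
    z = (y - y.mean()) / y.std()
    u = phi_cdf(z)                     # uniform on (0,1) under normality
    return {k: (np.mean(u**k), 1.0 / (k + 1)) for k in orders}

rng = np.random.default_rng(3)
stats = pit_moment_stats(rng.standard_normal(2000))
for k, (m_hat, m0) in stats.items():
    print(k, m_hat, m0)
```

Large gaps between the sample moments and 1/(k+1) flag non-normality, including departures beyond skewness and kurtosis.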
  5. By: Hanck, Christoph; Demetrescu, Matei; Kruse, Robinson
    Abstract: The fixed-b asymptotic framework provides refinements in the use of heteroskedasticity and autocorrelation consistent variance estimators. We show however that the fixed-b limiting distributions of t-statistics are not pivotal when the variance of the underlying data generating process changes over time. To regain pivotal fixed-b inference under such time heteroskedasticity, we discuss three alternative approaches. We (1) employ the wild bootstrap (Cavaliere and Taylor, 2008, ET), (2) resort to time transformations (Cavaliere and Taylor, 2008, JTSA), and (3) suggest picking the suitable asymptotics according to the outcome of a heteroskedasticity test, since small-b asymptotics deliver standard limiting distributions irrespective of the so-called variance profile of the series. We quantify the degree of size distortions from using the standard fixed-b approach and compare the effectiveness of the corrections via simulations. We also provide an empirical application to excess returns.
    JEL: C12 C32 C15
    Date: 2015
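Approach (1), the wild bootstrap, can be sketched for a plain t-statistic on the mean; the paper's setting additionally involves fixed-b HAR statistics, which are not reproduced here. The point of the Rademacher multipliers is that each bootstrap draw preserves the variance profile of the observed series.

```python
import numpy as np

def t_stat(x):
    """Plain t-statistic for the null that the mean is zero."""
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

def wild_bootstrap_pvalue(x, B=999, rng=None):
    """Wild-bootstrap p-value with Rademacher weights (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    e = x - x.mean()                 # centered residuals under the null
    t_obs = t_stat(x)
    t_boot = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=len(x))   # Rademacher multipliers
        t_boot[b] = t_stat(e * w)    # e*w keeps each point's own variance
    return np.mean(np.abs(t_boot) >= abs(t_obs))

# Series with a late-sample volatility shift (a simple variance profile)
rng = np.random.default_rng(4)
x = rng.standard_normal(300)
x[200:] *= 3.0
p = wild_bootstrap_pvalue(x, rng=np.random.default_rng(5))
print(p)
```

Because the multipliers are applied pointwise, the bootstrap distribution of the statistic reflects the same heteroskedasticity as the data, which is what restores valid inference.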
  6. By: Schwiebert, Jörg; Wagner, Joachim
    Abstract: The fractional probit (or fractional logit) model is used when the outcome variable is a fractional response variable, i.e., a variable taking values between zero and one. In the case of excess zeros, the fractional probit model might not be the optimal modeling device since this model does not predict zeros. As a solution, the two-part model has been proposed, which assumes different processes for having a (non-)zero outcome and, conditionally on having a non-zero outcome, the actual outcome. However, the two-part model assumes independence of these processes. This paper proposes a generalization of the two-part model which allows for dependence of these processes and which also nests the two-part model as a special case. A simulation study indicates that the proposed estimator performs well in finite samples. Two empirical examples illustrate that the model proposed in this paper improves upon the fractional probit and two-part model in terms of model fit and also leads to different marginal effects.
    JEL: C25 C35 C51
    Date: 2015
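The dependence the paper allows for can be illustrated with a small simulated DGP in which correlated Gaussian shocks link the participation equation and the fractional outcome. All functional forms and parameter values below are assumptions for illustration, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
x = rng.standard_normal(n)

rho = 0.6   # correlation between the two processes (assumed)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

participate = (0.2 + 0.8 * x + e[:, 0]) > 0   # part 1: zero vs non-zero outcome
latent = 0.5 * x + e[:, 1]
y_frac = 1.0 / (1.0 + np.exp(-latent))        # fractional outcome in (0, 1)
y = np.where(participate, y_frac, 0.0)

# With rho != 0, the mean outcome among participants differs from the
# unconditional mean of the fractional part: a two-part model that assumes
# independence of the two processes is misspecified for such data.
m_cond = y[participate].mean()
m_uncond = y_frac.mean()
print(m_cond, m_uncond)
```

The gap between the two means is exactly the selection effect that the generalized model captures and the standard two-part model ignores.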
  7. By: Vogler, Jan; Liesenfeld, Roman; Richard, Jean-Francois
    Abstract: PRELIMINARY DRAFT. We discuss maximum likelihood (ML) analysis for panel count data models, in which the observed counts are linked via a measurement density to a latent Gaussian process with spatial as well as temporal dynamics and random effects. For likelihood evaluation requiring high-dimensional integration we rely upon Efficient Importance Sampling (EIS). The algorithm we develop extends existing EIS implementations by constructing importance sampling densities that closely approximate the nontrivial spatio-temporal correlation structure under dynamic spatial panel models. In order to make this high-dimensional approximation computationally feasible, our EIS implementation exploits the typical sparsity of spatial precision matrices in such a way that all the high-dimensional matrix operations it requires can be performed using computationally fast sparse matrix functions. We use the proposed sparse EIS-ML approach for an extensive empirical study analyzing the socio-demographic determinants and the space-time dynamics of urban crime in Pittsburgh, USA, between 2008 and 2013 for a panel of monthly crime rates at census-tract level.
    JEL: C15 C01 C23
    Date: 2015
  8. By: Kaeding, Matthias
    Abstract: Popular link functions often fit skewed binary data poorly. We propose the log-Burr link as a flexible alternative. The link nests the complementary log-log and logit links as special cases, determined by a shape parameter which can be estimated from the data. Shrinkage priors are used for the shape parameter; furthermore, the parameter is allowed to vary between subgroups for clustered data. Basis function expansions are used to model nonlinear effects. Inference is done in a fully Bayesian framework. Posterior simulation is done via the No-U-Turn sampler implemented in Stan, avoiding convergence problems of the Gibbs sampler and allowing for easy use of nonconjugate priors. Regression coefficients associated with basis functions are reparameterized as random effects to speed up convergence. The proposed methods and the effect of misspecification of the modeled DGP are investigated in a simulation study. The approach is applied to large-scale unemployment data.
    JEL: C10 C11 C63
    Date: 2015
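The nesting property is easy to verify numerically. Assuming the parameterization F(η) = 1 − (1 + c·e^η)^(−1/c) — the paper may use a different sign or scaling convention — c = 1 gives the logit response and the limit c → 0 gives the complementary log-log response:

```python
import numpy as np

def log_burr_cdf(eta, c):
    """Log-Burr response function F(eta) = 1 - (1 + c*exp(eta))**(-1/c).

    Assumed parameterization: c = 1 recovers the logit link and the
    limit c -> 0 recovers the complementary log-log link."""
    return 1.0 - (1.0 + c * np.exp(eta)) ** (-1.0 / c)

eta = np.linspace(-3, 3, 7)
logit = np.exp(eta) / (1.0 + np.exp(eta))
cloglog = 1.0 - np.exp(-np.exp(eta))

print(np.max(np.abs(log_burr_cdf(eta, 1.0) - logit)))      # logit special case
print(np.max(np.abs(log_burr_cdf(eta, 1e-8) - cloglog)))   # cloglog limit
```

Intermediate values of c interpolate between the two asymmetry patterns, which is what lets the data choose the link.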
  9. By: Zadrozny, Peter A.
    Abstract: Chen and Zadrozny (1998) developed the linear extended Yule-Walker (XYW) method for determining the parameters of a vector autoregressive (VAR) model with available covariances of mixed-frequency observations on the variables of the model. If the parameters are determined uniquely for available population covariances, then the VAR model is identified. The present paper extends the original XYW method to an extended XYW method for determining all ARMA parameters of a vector autoregressive moving-average (VARMA) model with available covariances of single- or mixed-frequency observations on the variables of the model. The paper proves that under conditions of stationarity, regularity, miniphaseness, controllability, observability, and diagonalizability on the parameters of the model, the parameters are determined uniquely with available population covariances of single- or mixed-frequency observations on the variables of the model, so that the VARMA model is identified with the single- or mixed-frequency covariances.
    Keywords: block-Vandermonde eigenvectors of block-companion state-transition matrix of state-space representation, matrix spectral factorization
    JEL: C32 C80
    Date: 2015
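The moment equations that the XYW method extends start from the classical Yule-Walker relation Γ(1) = AΓ(0) for a VAR(1). A minimal single-frequency sketch (not the paper's mixed-frequency extension; the true coefficient matrix is assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])    # true VAR(1) coefficient matrix (stable, assumed)

T = 20_000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)

# Yule-Walker: Gamma(1) = E[y_t y_{t-1}'] = A Gamma(0), so
# A = Gamma(1) Gamma(0)^{-1}, estimated from sample covariances.
yc = y - y.mean(axis=0)
Gamma0 = yc.T @ yc / T
Gamma1 = yc[1:].T @ yc[:-1] / (T - 1)
A_hat = Gamma1 @ np.linalg.inv(Gamma0)
print(A_hat)
```

The paper's contribution is to show when analogous covariance equations still pin down all VARMA parameters uniquely if some variables are only observed at a lower frequency.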
  10. By: von Schweinitz, Gregor; Sarlin, Peter
    Abstract: In past years, the most common approaches for deriving early-warning models have belonged to the family of binary-choice methods, which have been coupled with a separate loss function to optimize model signals based on policymakers' preferences. The evidence in this paper shows that early-warning models should not be used in this traditional way, as the optimization of thresholds produces an in-sample overfit at the expense of out-of-sample performance. Instead of ex-post threshold optimization based upon a loss function, policymakers' preferences should rather be directly included as weights in the estimation function. Doing this strongly improves the out-of-sample performance of early-warning systems.
    JEL: C35 C53 G01
    Date: 2015
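The "traditional" procedure the paper argues against — ex-post optimization of a signaling threshold under a preference-weighted loss — can be sketched as follows. The loss form and all numbers are assumptions for illustration.

```python
import numpy as np

def policy_loss(y, p, tau, mu=0.8):
    """Preference-weighted loss: mu weights missed crises (false negatives)
    against false alarms (false positives).  Sketch of the ex-post
    threshold optimization step criticized in the paper."""
    signal = p >= tau
    fn = 1.0 - signal[y == 1].mean()   # share of missed crises
    fp = signal[y == 0].mean()         # share of false alarms
    return mu * fn + (1.0 - mu) * fp

rng = np.random.default_rng(8)
n = 2000
y = rng.binomial(1, 0.1, n)                                  # crisis indicator
p = np.clip(0.1 + 0.5 * y + rng.normal(0, 0.2, n), 0, 1)     # fitted probabilities

# In-sample grid search for the loss-minimizing threshold
taus = np.linspace(0.01, 0.99, 99)
losses = [policy_loss(y, p, t) for t in taus]
tau_star = taus[int(np.argmin(losses))]
print(tau_star)
```

The paper's point is that this in-sample search overfits; moving the weights mu and 1 − mu into the estimation objective itself performs better out of sample.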
  11. By: NAKABAYASHI Jun; HIROSE Yohsuke
    Abstract: This paper offers an analytical framework for the scoring auction. We first characterize a symmetric monotone equilibrium in the scoring auction. We then propose a semiparametric procedure to identify the joint distribution of the bidder's multidimensional signal from scoring auction data. Our approach allows for a broad class of scoring rules in settings with multidimensional signals. Finally, using our analytical framework, we conduct an empirical experiment to estimate the impact of changes in auction formats and scoring rules. The data on scoring auctions are from public procurement auctions for construction projects in Japan.
    Date: 2016–02
  12. By: Mutschler, Willi
    Abstract: This note shows how to derive unconditional moments, cumulants and polyspectra of order higher than two for the pruned state-space of nonlinear DSGE models. Useful matrix tools and computational aspects are also discussed.
    JEL: C10 C51 E10
    Date: 2015
  13. By: Santiago Pereda Fernández (Banca d’Italia)
    Abstract: I propose a method to correct for test scores manipulation and apply it to a natural experiment in the Italian education system consisting of the random assignment of external monitors to classrooms. The empirical strategy is based on a likelihood approach, using nonlinear panel data methods to obtain clean estimates of cheating while controlling for unobserved heterogeneity. The likelihood of each classroom's scores is then used to correct them for cheating. Cheating is not associated with an increase in the correlation of the answers once we control for mean test scores. The method estimates manipulation to be more frequent in the South and Islands and among female students and immigrants in Italian tests. A simulation shows how manipulation reduces the accuracy of an exam in reflecting students' knowledge, and the correction proposed in this paper recovers about half of this loss.
    Keywords: cheating correction, copula, nonlinear panel data, test scores manipulation
    JEL: C23 C25 I28
    Date: 2016–01
  14. By: Schreiber, Sven; Breitung, Jörg
    Abstract: We extend the frequency-specific Granger-causality test of Breitung and Candelon (2006) to a more general null hypothesis that allows non-causality at unknown frequencies within an interval, instead of having to prespecify a single frequency. This setup corresponds better to most hypotheses that are typically analyzed in applied research and is easy to implement. We also discuss a test approach that departs from strict non-causality, given the impossibility of (non-trivial) non-causality over a continuum of frequencies. In an empirical application dealing with the dynamics of US temperatures and CO2 emissions we find that emissions cause temperature changes only at very low frequencies, with oscillation periods of more than 30 years.
    JEL: C32 Q54 C53
    Date: 2015
  15. By: von Gaudecker, Hans-Martin; Drerup, Tilman; Enke, Benjamin
    Abstract: While stock market expectations are among the most important primitives of portfolio choice models, their measurement has proved challenging for some respondents. We argue that the magnitude of measurement error in subjective expectations can be used as an indicator of the degree to which economic models of portfolio choice provide an adequate representation of individual decision processes. In order to explore this conjecture empirically, we estimate a semiparametric double index model on a dataset specifically collected for this purpose. Stock market participation reacts strongly to changes in model parameters for respondents at the lower end of the measurement error distribution; these effects are much less pronounced for individuals at the upper end. Our findings indicate that measurement error in subjective expectations provides useful information to uncover heterogeneity in choice behavior.
    JEL: C35 C51 G11
    Date: 2015

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.