nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒09‒26
ten papers chosen by
Sune Karlsson
Örebro University

  1. Residual-based Rank Specification Tests for AR-GARCH type models By Andreou, Elena; Werker, Bas J M
  2. Testing for Multiple Bubbles: Historical Episodes of Exuberance and Collapse in the S&P 500 By Peter C.B. Phillips; Shu-Ping Shi; Jun Yu
  3. Non-linear dependences in finance By Rémy Chicheportiche
  4. Testing the Statistical Significance of Microsimulation Results: Often Easier than You Think. A Technical Note By Tim Goedemé; Karel Van den Bosch; Lina Salanauskaite; Gerlinde Verbist
  5. Zero Lower Bound and Parameter Bias in an Estimated DSGE Model By Yasuo Hirose; Atsushi Inoue
  6. Estimating spatial panel models using unbalanced panels By Gordon Hughes
  7. "Measurement Errors and Statistics" (in Japanese) By Naoto Kunitomo
  8. Error and Inference: an outsider stand on a frequentist philosophy. By Robert, Christian P.
  9. A Monte Carlo analysis of multilevel binary logit model estimator performance By Stephen P. Jenkins
  10. The Use And Misuse Of Structural Equation Modeling In Management Research By Nebojša St. Davčik

  1. By: Andreou, Elena; Werker, Bas J M
    Abstract: This paper derives the asymptotic distribution for a number of rank-based and classical residual specification tests in AR-GARCH type models. We consider tests for the null hypotheses of no linear and quadratic serial residual autocorrelation, residual symmetry, and no structural breaks. For these tests we show that, generally, no size correction is needed in the asymptotic test distribution when applied to AR-GARCH type residuals obtained through QMLE estimation. To be precise, we give exact expressions for the limiting null distribution of the test statistics applied to residuals, and find that standard critical values often lead to conservative tests. For this result, we give simple sufficient conditions. Simulations show that our asymptotic approximations work well for a large number of AR-GARCH models and parameter values. We also show that the rank-based tests often, though not always, have superior power properties over the classical tests, even if they are conservative. We thereby provide a useful extension to the econometrician's toolkit. An empirical application illustrates the relevance of these tests to the AR-GARCH models for the weekly stock market return indices of some major and emerging countries.
    Keywords: conditional heteroskedasticity; linear and quadratic residual autocorrelation tests; model misspecification test; nonlinear time series; parameter constancy; residual symmetry tests
    JEL: C22 C32 C51 C52
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:9583&r=ecm
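    To give the flavour of the tests in item 1, here is a minimal sketch of one rank-based diagnostic: a van der Waerden-score test for lag-1 serial correlation in standardized residuals. It is a generic illustration, not the authors' exact statistic; the paper's point is precisely that tests of this kind can often be applied to QMLE residuals from AR-GARCH models with standard (possibly conservative) critical values.

      import numpy as np
      from scipy import stats

      def rank_autocorr_test(resid, lag=1):
          """Rank-based test for serial correlation at a given lag.

          Transforms residuals to van der Waerden (normal) scores and
          computes the lag-`lag` sample autocorrelation of the scores;
          under the null of no serial correlation, sqrt(n) times the
          autocorrelation is asymptotically standard normal."""
          n = len(resid)
          ranks = stats.rankdata(resid)
          scores = stats.norm.ppf(ranks / (n + 1))   # van der Waerden scores
          s = (scores - scores.mean()) / scores.std()
          rho = np.mean(s[:-lag] * s[lag:])          # lag-`lag` autocorrelation
          z = np.sqrt(n) * rho                       # ~ N(0,1) under H0
          pval = 2 * stats.norm.sf(abs(z))
          return z, pval

      # Illustration on white noise; replace with standardized AR-GARCH residuals.
      rng = np.random.default_rng(0)
      z, p = rank_autocorr_test(rng.standard_normal(500), lag=1)
      print(f"z = {z:.3f}, p = {p:.3f}")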
  2. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Shu-Ping Shi (Australian National University); Jun Yu (Singapore Management University)
    Abstract: Recent work on econometric detection mechanisms has shown the effectiveness of recursive procedures in identifying and dating financial bubbles. These procedures are useful as warning alerts in surveillance strategies conducted by central banks and fiscal regulators with real time data. Use of these methods over long historical periods presents a more serious econometric challenge due to the complexity of the nonlinear structure and break mechanisms that are inherent in multiple bubble phenomena within the same sample period. To meet this challenge the present paper develops a new recursive flexible window method that is better suited for practical implementation with long historical time series. The method is a generalized version of the sup ADF test of Phillips, Wu and Yu (2011, PWY) and delivers a consistent date-stamping strategy for the origination and termination of multiple bubbles. Simulations show that the test significantly improves discriminatory power and leads to distinct power gains when multiple bubbles occur. An empirical application of the methodology is conducted on S&P 500 stock market data over a long historical period from January 1871 to December 2010. The new approach successfully identifies the well-known historical episodes of exuberance and collapse over this period, whereas the strategy of PWY and a related CUSUM dating procedure locate far fewer episodes in the same sample range.
    Keywords: Date-stamping strategy, Flexible window, Generalized sup ADF test, Multiple bubbles, Rational bubble, Periodically collapsing bubbles, Sup ADF test
    JEL: C15 C22
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1914&r=ecm
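    The recursive flexible-window idea in item 2 can be sketched directly: for each end point, take the supremum of right-tailed ADF statistics over all admissible start points, and date-stamp bubbles by comparing the resulting sequence with simulated critical values. A minimal Python illustration using statsmodels follows; the Monte Carlo simulation of critical values and the authors' refinements are omitted, min_window corresponds to the minimal window r0, and sp500_prices in the usage comment is a placeholder array.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      def gsadf(y, min_window):
          """Generalized sup ADF: the sup of ADF t-statistics over all
          subsamples y[r1:r2] with r2 - r1 >= min_window.  Large positive
          values signal explosive (bubble) behaviour; critical values must
          be obtained by simulation, which is omitted here."""
          n = len(y)
          bsadf = []
          for r2 in range(min_window, n + 1):
              stats_r2 = [adfuller(y[r1:r2], maxlag=1, regression="c", autolag=None)[0]
                          for r1 in range(0, r2 - min_window + 1)]
              bsadf.append(max(stats_r2))       # backward sup ADF at end point r2
          return max(bsadf), np.array(bsadf)    # GSADF statistic, BSADF sequence

      # e.g. stat, seq = gsadf(np.log(sp500_prices), min_window=36)  # O(n^2) ADF fits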
  3. By: Rémy Chicheportiche
    Abstract: The thesis is composed of three parts. Part I introduces the mathematical and statistical tools that are relevant for the study of dependences, as well as statistical tests of goodness-of-fit for empirical probability distributions. I propose two extensions of the usual tests for the cases where dependence is present in the sample data and where observations have a fat-tailed distribution. The financial content of the thesis starts in Part II. There I present my studies of the "cross-sectional" dependences among the time series of daily stock returns, i.e. the instantaneous forces that link several stocks together and make them behave somewhat collectively rather than purely independently. A calibration of a new factor model is presented, together with a comparison to measurements on real data. Finally, Part III investigates the temporal dependences of single time series, using the same tools and measures of correlation. I propose two contributions to the study of the origin and description of "volatility clustering": one is a generalization of the ARCH-like feedback construction in which the returns are self-exciting, and the other is a more original description of self-dependences in terms of copulas. The latter can be formulated model-free and is not specific to financial time series. In fact, I also show how concepts such as recurrences, records, aftershocks and waiting times, which characterize the dynamics in a time series, can be written in the unifying framework of the copula.
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1309.5073&r=ecm
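    The model-free, copula-based view of temporal dependence in item 3 can be illustrated with the empirical "self-copula" of a series and its lag: rank-transform the observations to uniform margins and study the joint law of (u_t, u_{t-lag}). A small sketch under that reading of the abstract:

      import numpy as np
      from scipy.stats import rankdata

      def empirical_self_copula(x, lag=1):
          """Pseudo-observations (u_t, u_{t-lag}) on [0,1]^2, obtained by
          rank-transforming the series; departures from the independence
          copula C(u,v) = u*v reveal temporal dependence such as
          volatility clustering."""
          u = rankdata(x) / (len(x) + 1)
          return np.column_stack([u[lag:], u[:-lag]])

      rng = np.random.default_rng(1)
      pseudo = empirical_self_copula(rng.standard_normal(1000))
      # Estimate C(0.5, 0.5); it equals 0.25 under independence.
      c_half = np.mean((pseudo[:, 0] <= 0.5) & (pseudo[:, 1] <= 0.5))
      print(c_half)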
  4. By: Tim Goedemé; Karel Van den Bosch; Lina Salanauskaite; Gerlinde Verbist
    Abstract: In the microsimulation literature, it is still uncommon to test the statistical significance of results. In this note we argue that this situation is both undesirable and unnecessary. Provided the parameters used in the microsimulation are exogenous, as is often the case in static microsimulation of the first-order effects of policy changes, simple statistical tests can be sufficient. Moreover, standard routines have been developed that enable applied researchers to calculate the sampling variance of microsimulation results, even of relatively complex statistics such as relative poverty, inequality measures and indicators of polarization, while taking the sample design into account, with relative ease and a limited time investment. We stress that when comparing simulated and baseline variables, as well as when comparing two simulated variables, it is crucial to take account of the covariance between those variables. Due to this covariance, the mean difference between the variables can generally (though not always) be estimated with much greater precision than the means of the separate variables.
    Keywords: Statistical inference, significance tests, microsimulation, covariance, t-test, EUROMOD
    JEL: I32 D31 I38 C C1 C4 C6
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:hdl:improv:1310&r=ecm
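    The covariance point in item 4 is easy to verify numerically: baseline and reform incomes are computed for the same sample units, so the variance of the mean difference is Var(a) + Var(b) - 2 Cov(a, b), usually far smaller than the naive independent-samples variance. A minimal sketch with hypothetical data (complex survey design, which the note says must also be handled, is ignored here):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      baseline = rng.lognormal(10, 0.5, size=5000)             # hypothetical incomes
      reform = baseline * 1.02 + rng.normal(0, 50, size=5000)  # small simulated reform

      # Naive SE treats the two means as independent -- far too large here.
      se_naive = np.sqrt(baseline.var(ddof=1) / 5000 + reform.var(ddof=1) / 5000)

      # Paired SE works with per-unit differences, implicitly subtracting 2*Cov.
      d = reform - baseline
      se_paired = d.std(ddof=1) / np.sqrt(5000)

      t, p = stats.ttest_rel(reform, baseline)                 # paired t-test
      print(se_naive, se_paired, t, p)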
  5. By: Yasuo Hirose; Atsushi Inoue
    Abstract: This paper examines how and to what extent parameter estimates can be biased in a dynamic stochastic general equilibrium (DSGE) model that omits the zero lower bound constraint on the nominal interest rate. Our experiments show that most of the parameter estimates in a standard sticky-price DSGE model are not biased, although some biases are detected in the estimates of the monetary policy parameters and the steady-state real interest rate. Nevertheless, in our baseline experiment, these biases are so small that the estimated impulse response functions are quite similar to the true impulse response functions. However, as the probability of hitting the zero lower bound increases, the biases in the parameter estimates become larger and can therefore lead to substantial differences between the estimated and true impulse responses.
    Keywords: Zero lower bound, DSGE model, Parameter bias, Bayesian estimation
    JEL: C32 E30 E52
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2013-60&r=ecm
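    The mechanism behind item 5 can be mimicked in a stylized single-equation setting: if the observed policy rate is max(0, shadow rate) but a linear rule is fitted to the censored series, the slope estimate is attenuated. The paper itself works with a full DSGE model estimated by Bayesian methods; the sketch below only illustrates the censoring bias.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 2000
      inflation = rng.normal(2.0, 2.0, size=n)        # hypothetical inflation gap
      shadow = -1.0 + 1.5 * inflation + rng.normal(0, 1, size=n)
      observed = np.maximum(shadow, 0.0)              # zero lower bound censoring

      # OLS of the observed rate on inflation, ignoring the censoring.
      X = np.column_stack([np.ones(n), inflation])
      beta = np.linalg.lstsq(X, observed, rcond=None)[0]
      print(f"true slope 1.5, estimated slope {beta[1]:.2f}")  # attenuated toward 0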
  6. By: Gordon Hughes (University of Edinburgh)
    Abstract: Econometricians have begun to devote more attention to spatial interactions when carrying out applied econometric studies. In part, this is motivated by an explicit focus on spatial interactions in policy formulation or market behavior, but it may also reflect concern about the role of omitted variables that are, or may be, spatially correlated. The Stata user-written procedure xsmle has been designed to estimate a wide range of spatial panel models, including spatial autocorrelation, spatial Durbin, and spatial error models, using maximum likelihood methods. It relies upon the availability of balanced panel data with no missing observations. This requirement is stringent, but it arises from the fact that, in principle, the values of the dependent variable for any panel unit may depend upon the values of the dependent and independent variables for all the other panel units. Thus even a single missing data point may require that all data for a time period, panel unit, or variable be discarded.
    The presence of missing data is an endemic problem in many types of applied work, often because of the creation or disappearance of panel units. At the macro level, the number and composition of countries in Europe or local government units in the United Kingdom have changed substantially over the last three decades. In longitudinal household surveys, new households are created and old ones disappear all the time. Restricting the analysis to a subset of panel units that have remained stable over time is a form of sample selection whose consequences are uncertain and that may have statistical implications meriting additional investigation.
    The simplest mechanisms by which missing data may arise underpin the missing-at-random (MAR) assumption. When this assumption is appropriate, two approaches to estimation with missing data are possible. The first is simple or, preferably, multiple imputation, which involves the replacement of missing data by stochastic imputed values. The Stata procedure mi can be combined with xsmle to implement a variety of estimators that rely upon multiple imputation. While the combination of procedures is relatively simple to use, practical experience suggests that the results can be quite sensitive to the specification adopted for the imputation phase of the analysis. Hence, this is not a one-size-fits-all method of dealing with unbalanced panels: the analyst must give serious consideration to the way in which imputed values are generated.
    The second approach has been developed by Pfaffermayr. It exploits the spatial interactions in the model, which mean that the influence of the missing observations can be inferred from the values taken by non-missing observations. In effect, the missing observations are treated as latent variables whose distribution can be derived from the values of the non-missing data. This leads to a likelihood function that can be partitioned between missing and non-missing data and thus used to estimate the coefficients of the full model. The merit of the approach is that it takes explicit account of the spatial structure of the model. However, the procedure becomes computationally demanding when the proportion of missing observations is large, and, as one would expect, when too much data is missing the information provided by the spatial interactions is not sufficient to generate well-defined estimates of the structural coefficients.
    The missing-at-random assumption is crucial for both of these approaches, but it is not reasonable to rely upon it when dealing with the birth or death of distinct panel units. A third approach, based on methods used in the literature on statistical signal processing, relies upon reducing the spatial interactions to immediate neighbors. Intuitively, the basic unit of analysis becomes a block consisting of a central unit (the dependent variable) and its neighbors (the spatial interactions). Because spatial interactions are restricted to within-block effects, the population of blocks can vary over time and standard non-spatial panel methods can be applied. The presentation will describe and compare the three approaches to estimating spatial panel models as implemented in Stata as extensions to xsmle. It will be illustrated by analyses of i) state data on electricity consumption in the U.S. and ii) gridded historical data on temperature and precipitation to identify the effects of El Niño (ENSO) and other major weather oscillations.
    Date: 2013–09–16
    URL: http://d.repec.org/n?u=RePEc:boc:usug13:09&r=ecm
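    Item 6's first approach, multiple imputation followed by re-estimation, pools results by Rubin's rules whatever the software. The sketch below uses a plain OLS regression in place of a spatial panel estimator (mi and xsmle are Stata commands) purely to show the impute/fit/pool loop on simulated data.

      import numpy as np

      def pool_rubin(estimates, variances):
          """Rubin's rules: pooled estimate, and total variance =
          within-imputation variance + (1 + 1/M) * between-imputation variance."""
          est, var = np.asarray(estimates), np.asarray(variances)
          m = len(est)
          qbar = est.mean(axis=0)
          total = var.mean(axis=0) + (1 + 1 / m) * est.var(axis=0, ddof=1)
          return qbar, total

      rng = np.random.default_rng(3)
      x = rng.normal(size=200)
      y = 1.0 + 2.0 * x + rng.normal(size=200)
      y[rng.random(200) < 0.2] = np.nan                 # 20% missing at random
      X = np.column_stack([np.ones(200), x])

      ests, vars_ = [], []
      for _ in range(20):                               # M = 20 imputations
          yi = y.copy()
          miss = np.isnan(yi)
          # Crude stochastic regression imputation from the observed cases.
          b = np.linalg.lstsq(X[~miss], yi[~miss], rcond=None)[0]
          resid_sd = np.std(yi[~miss] - X[~miss] @ b, ddof=2)
          yi[miss] = X[miss] @ b + rng.normal(0, resid_sd, miss.sum())
          # Re-fit on the completed data; store the slope and its variance.
          bhat, res = np.linalg.lstsq(X, yi, rcond=None)[:2]
          s2 = res[0] / (200 - 2)
          ests.append(bhat[1])
          vars_.append(s2 * np.linalg.inv(X.T @ X)[1, 1])

      qbar, total = pool_rubin(ests, vars_)
      print(f"pooled slope {qbar:.3f}, total SE {np.sqrt(total):.3f}")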
  7. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo)
    Abstract: In this lecture we illustrate several measurement-error issues, and the statistical analyses they call for, as they arise in government statistics, econometrics, and financial econometrics. We argue that many of these statistical problems share common structures and methods, and that it would be beneficial for statisticians to consider the roles of measurement errors and their statistical analysis in the era of Big Data.
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:tky:jseres:2013cj248&r=ecm
  8. By: Robert, Christian P.
    Abstract: This note is an extended review of the book Error and Inference, edited by Deborah Mayo and Aris Spanos, covering their frequentist philosophical perspective on hypothesis testing and their criticisms of alternatives such as the Bayesian approach.
    Keywords: frequentist philosophy; criticisms of the Bayesian approach
    JEL: C11
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:ner:dauphi:urn:hdl:123456789/7849&r=ecm
  9. By: Stephen P. Jenkins (London School of Economics)
    Abstract: Social scientists are increasingly fitting multilevel models to datasets in which a large number of individuals (N ~ several thousands) are nested within each of a small number of countries (C ~ 25). The researchers are particularly interested in “country effects”, as summarized by either the coefficients on country-level predictors (or cross-level interactions) or the variance of the country-level random effects. Although questions have been raised about the potentially poor performance of estimators of these “country effects” when C is “small”, this issue appears not to be widely appreciated by many social science researchers. Using Monte Carlo analysis, I examine the performance of two estimators of a two-level model with a binary dependent variable, using a design in which C = 5(5)50 100 and N = 1000 for each country. The results point to i) the superior performance of adaptive quadrature estimators compared with PQL2 estimators, and ii) poor coverage of estimates of “country effects” in models in which C ~ 25, regardless of estimator. The analysis makes extensive use of xtmelogit and simulate, together with user-written commands such as runmlwin, parmby, and eclplot. Issues associated with extremely long runtimes are also discussed.
    Date: 2013–09–16
    URL: http://d.repec.org/n?u=RePEc:boc:usug13:04&r=ecm
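    A minimal version of the estimation problem in item 9 is a random-intercept logit fitted by non-adaptive Gauss-Hermite quadrature; xtmelogit's adaptive quadrature recentres the nodes and is more accurate. The sketch below simulates one dataset and fits it; a Monte Carlo study would wrap this in a replication loop and record bias and confidence-interval coverage. N is kept small here to limit underflow in the naive likelihood.

      import numpy as np
      from scipy.optimize import minimize

      def simulate(C, N, beta=0.5, gamma=0.5, sigma_u=1.0, rng=None):
          """Two-level binary logit DGP: N individuals in each of C countries,
          one individual-level covariate x, one country-level covariate z,
          and a country random intercept u_c ~ N(0, sigma_u^2)."""
          if rng is None:
              rng = np.random.default_rng()
          x = rng.normal(size=(C, N))
          z = rng.normal(size=C)
          u = rng.normal(0.0, sigma_u, size=C)
          eta = beta * x + (gamma * z + u)[:, None]
          y = (rng.random((C, N)) < 1.0 / (1.0 + np.exp(-eta))).astype(float)
          return y, x, z

      def negloglik(params, y, x, z, n_nodes=15):
          """Marginal log-likelihood via Gauss-Hermite quadrature over the
          country random intercept (non-adaptive, unlike xtmelogit)."""
          beta, gamma, log_sigma = params
          sigma = np.exp(log_sigma)
          t, w = np.polynomial.hermite.hermgauss(n_nodes)
          ll = 0.0
          for c in range(y.shape[0]):
              eta = beta * x[c][:, None] + gamma * z[c] + np.sqrt(2.0) * sigma * t
              p = 1.0 / (1.0 + np.exp(-np.clip(eta, -35, 35)))    # shape (N, n_nodes)
              lik = np.prod(np.where(y[c][:, None] == 1.0, p, 1.0 - p), axis=0)
              ll += np.log(np.dot(w, lik) / np.sqrt(np.pi) + 1e-300)  # underflow guard
          return -ll

      rng = np.random.default_rng(0)
      y, x, z = simulate(C=25, N=200, rng=rng)   # the talk's design uses N = 1000
      fit = minimize(negloglik, x0=np.zeros(3), args=(y, x, z), method="BFGS")
      print(fit.x)                               # beta, gamma, log(sigma_u)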
  10. By: Nebojša St. Davčik
    Abstract: Research practice in management is dominated by structural equation modeling, but almost exclusively, and often misguidedly, by covariance-based SEM. We adumbrate the theoretical foundations of, and offer guidance on, the two SEM streams: covariance-based SEM, also known as LISREL, covariance structure analysis, or latent variable analysis; and variance-based SEM, also known as component-based SEM or PLS. Our conceptual framework discusses the two streams through an analysis of theory, measurement model specification, sampling, and goodness-of-fit. We question the usefulness of the Cronbach's alpha research paradigm and discuss alternatives that are well established in social science but not well known in the management research community. We conclude with a discussion of open questions in management research practice that remain under-investigated and underutilized.
    Keywords: Structural equation modeling, covariance- and variance-based SEM, formative and reflective indicators, LISREL, PLS
    JEL: C18 C3 M0
    Date: 2013–09–13
    URL: http://d.repec.org/n?u=RePEc:isc:iscwp2:bruwp1307&r=ecm
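    For reference, the Cronbach's alpha whose research paradigm item 10 questions is straightforward to compute from raw item scores; the alternatives the authors point to (e.g. composite reliability) instead use the loadings of a fitted measurement model. A quick sketch on simulated indicators:

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, k_items) array of scores.
          alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(5)
      f = rng.normal(size=(300, 1))                    # common factor
      items = f + rng.normal(0, 1.0, size=(300, 4))    # four noisy indicators
      print(round(cronbach_alpha(items), 3))           # about 0.8 for this design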

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.