
on Econometrics 
By:  Andreou, Elena; Werker, Bas J M 
Abstract:  This paper derives the asymptotic distribution for a number of rank-based and classical residual specification tests in AR-GARCH-type models. We consider tests for the null hypotheses of no linear and quadratic serial residual autocorrelation, residual symmetry, and no structural breaks. For these tests we show that, generally, no size correction is needed in the asymptotic test distribution when applied to AR-GARCH-type residuals obtained through QMLE estimation. To be precise, we give exact expressions for the limiting null distribution of the test statistics applied to residuals, and find that standard critical values often lead to conservative tests. For this result, we give simple sufficient conditions. Simulations show that our asymptotic approximations work well for a large number of AR-GARCH models and parameter values. We also show that the rank-based tests often, though not always, have superior power properties over the classical tests, even if they are conservative. We thereby provide a useful extension to the econometrician's toolkit. An empirical application illustrates the relevance of these tests to the AR-GARCH models for the weekly stock market return indices of some major and emerging countries. 
Keywords:  conditional heteroskedasticity; linear and quadratic residual autocorrelation tests; model misspecification test; nonlinear time series; parameter constancy; residual symmetry tests 
JEL:  C22 C32 C51 C52 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:9583&r=ecm 
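A classical benchmark for the "no quadratic serial residual autocorrelation" null described above is a portmanteau statistic applied to (squared) standardized residuals. A minimal numpy sketch, with i.i.d. draws standing in for QMLE-standardized AR-GARCH residuals (the lag length m = 10 is illustrative, not taken from the paper):

```python
import numpy as np

def ljung_box(x, m):
    """Ljung-Box Q statistic on the first m sample autocorrelations of x."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    denom = np.sum(x**2)
    q = 0.0
    for k in range(1, m + 1):
        r_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k autocorrelation
        q += r_k**2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)       # stand-in for standardized residuals
q_lin = ljung_box(eps, 10)            # "linear" test: residual levels
q_quad = ljung_box(eps**2, 10)        # "quadratic" test: squared residuals
```

Under the i.i.d. null, both statistics are approximately chi-square with m degrees of freedom; the paper's point is that these standard critical values, applied to estimated residuals, often yield conservative tests.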
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Shu-Ping Shi (Australian National University); Jun Yu (Singapore Management University) 
Abstract:  Recent work on econometric detection mechanisms has shown the effectiveness of recursive procedures in identifying and dating financial bubbles. These procedures are useful as warning alerts in surveillance strategies conducted by central banks and fiscal regulators with real-time data. Use of these methods over long historical periods presents a more serious econometric challenge due to the complexity of the nonlinear structure and break mechanisms that are inherent in multiple bubble phenomena within the same sample period. To meet this challenge the present paper develops a new recursive flexible window method that is better suited for practical implementation with long historical time series. The method is a generalized version of the sup ADF test of Phillips, Wu and Yu (2011, PWY) and delivers a consistent date-stamping strategy for the origination and termination of multiple bubbles. Simulations show that the test significantly improves discriminatory power and leads to distinct power gains when multiple bubbles occur. An empirical application of the methodology is conducted on S&P 500 stock market data over a long historical period from January 1871 to December 2010. The new approach successfully identifies the well-known historical episodes of exuberance and collapse over this period, whereas the strategy of PWY and a related CUSUM dating procedure locate far fewer episodes in the same sample range. 
Keywords:  Date-stamping strategy, Flexible window, Generalized sup ADF test, Multiple bubbles, Rational bubble, Periodically collapsing bubbles, Sup ADF test 
JEL:  C15 C22 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1914&r=ecm 
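The generalized sup ADF idea, taking the supremum of ADF statistics over all sufficiently long sub-windows, can be sketched compactly. This is a simplified illustration (no lag augmentation, no simulated critical values), not the authors' implementation:

```python
import numpy as np

def adf_tstat(y):
    """t-statistic on y_{t-1} in the regression dy_t = a + b*y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def gsadf(y, r0):
    """Sup of ADF statistics over all windows y[r1:r2] of length >= r0."""
    n = len(y)
    return max(adf_tstat(y[r1:r2])
               for r2 in range(r0, n + 1)
               for r1 in range(0, r2 - r0 + 1))

# Simulated series with one explosive episode mid-sample
rng = np.random.default_rng(1)
y = np.empty(200); y[0] = 1.0
for t in range(1, 200):
    growth = 1.05 if 120 <= t < 170 else 1.0
    y[t] = growth * y[t - 1] + 0.05 * rng.standard_normal()
stat = gsadf(y, r0=40)   # large and positive when a bubble is present
```

Scanning both endpoints (rather than only expanding from the sample start, as in PWY's sup ADF) is what lets the statistic pick up a second bubble after an earlier collapse.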
By:  Rémy Chicheportiche 
Abstract:  The thesis is composed of three parts. Part I introduces the mathematical and statistical tools that are relevant for the study of dependences, as well as statistical goodness-of-fit tests for empirical probability distributions. I propose two extensions of the usual tests for when dependence is present in the sample data and when observations have a fat-tailed distribution. The financial content of the thesis starts in Part II. I present there my studies regarding the "cross-sectional" dependences among the time series of daily stock returns, i.e. the instantaneous forces that link several stocks together and make them behave somewhat collectively rather than purely independently. A calibration of a new factor model is presented here, together with a comparison to measurements on real data. Finally, Part III investigates the temporal dependences of single time series, using the same tools and measures of correlation. I propose two contributions to the study of the origin and description of "volatility clustering": one is a generalization of the ARCH-like feedback construction where the returns are self-exciting, and the other one is a more original description of self-dependences in terms of copulas. The latter can be formulated model-free and is not specific to financial time series. In fact, I also show here how concepts like recurrences, records, aftershocks and waiting times, which characterize the dynamics in a time series, can be written in the unifying framework of the copula. 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1309.5073&r=ecm 
By:  Tim Goedemé; Karel Van den Bosch; Lina Salanauskaite; Gerlinde Verbist 
Abstract:  In the microsimulation literature, it is still uncommon to test the statistical significance of results. In this note we argue that this situation is both undesirable and unnecessary. Provided the parameters used in the microsimulation are exogenous, as is often the case in static microsimulation of the first-order effects of policy changes, simple statistical tests can be sufficient. Moreover, standard routines have been developed which enable applied researchers to calculate the sampling variance of microsimulation results, while taking the sample design into account, even of relatively complex statistics such as relative poverty, inequality measures and indicators of polarization, with relative ease and a limited time investment. We stress that when comparing simulated and baseline variables, as well as when comparing two simulated variables, it is crucial to take account of the covariance between those variables. Due to this covariance, the mean difference between the variables can generally (though not always) be estimated with much greater precision than the means of the separate variables. 
Keywords:  Statistical inference, significance tests, microsimulation, covariance, t-test, EUROMOD 
JEL:  I32 D31 I38 C C1 C4 C6 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:hdl:improv:1310&r=ecm 
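The covariance point above can be illustrated with a toy simulation: the sampling variance of the mean reform effect is far smaller than the naive sum of the two variances. The income variables below are hypothetical, and survey weights and design effects, which the note emphasizes, are ignored:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
baseline = rng.lognormal(mean=10, sigma=0.5, size=n)   # baseline income
reform = baseline * 1.02 + rng.normal(0, 50, size=n)   # simulated reform income

diff = reform - baseline
# Naive variance of the mean difference: ignore the covariance
var_naive = (baseline.var(ddof=1) + reform.var(ddof=1)) / n
# Correct variance: Var(mean diff) = Var(b) + Var(r) - 2*Cov(b, r), all over n
cov = np.cov(baseline, reform, ddof=1)[0, 1]
var_correct = var_naive - 2 * cov / n   # equals diff.var(ddof=1) / n
se_naive, se_correct = np.sqrt(var_naive), np.sqrt(var_correct)
```

Because baseline and reform incomes are almost perfectly correlated here, the correct standard error is orders of magnitude smaller than the naive one, which is exactly why a paired comparison can detect small first-order policy effects.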
By:  Yasuo Hirose; Atsushi Inoue 
Abstract:  This paper examines how and to what extent parameter estimates can be biased in a dynamic stochastic general equilibrium (DSGE) model that omits the zero lower bound constraint on the nominal interest rate. Our experiments show that most of the parameter estimates in a standard sticky-price DSGE model are not biased although some biases are detected in the estimates of the monetary policy parameters and the steady-state real interest rate. Nevertheless, in our baseline experiment, these biases are so small that the estimated impulse response functions are quite similar to the true impulse response functions. However, as the probability of hitting the zero lower bound increases, the biases in the parameter estimates become larger and can therefore lead to substantial differences between the estimated and true impulse responses. 
Keywords:  Zero lower bound, DSGE model, Parameter bias, Bayesian estimation 
JEL:  C32 E30 E52 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201360&r=ecm 
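The mechanism behind the bias is censoring: the observed policy rate is the shadow rate truncated at zero. A stylized OLS illustration (not the paper's Bayesian DSGE estimation; all parameter values are made up) shows how ignoring the bound understates the policy response:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 5000
phi_pi, r_star = 1.5, 0.5                 # hypothetical Taylor-rule parameters
pi = rng.normal(0.0, 2.0, size=T)         # inflation gap
shadow = r_star + phi_pi * pi + rng.normal(0, 0.2, size=T)
rate = np.maximum(shadow, 0.0)            # zero lower bound on observed rate

X = np.column_stack([np.ones(T), pi])
b_shadow = np.linalg.lstsq(X, shadow, rcond=None)[0]   # recovers phi_pi
b_rate = np.linalg.lstsq(X, rate, rcond=None)[0]       # biased toward zero
```

The more often the bound binds (here, a large share of draws), the flatter the fitted response to inflation, mirroring the paper's finding that the bias grows with the probability of hitting the zero lower bound.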
By:  Gordon Hughes (University of Edinburgh) 
Abstract:  Econometricians have begun to devote more attention to spatial interactions when carrying out applied econometric studies. In part, this is motivated by an explicit focus on spatial interactions in policy formulation or market behavior, but it may also reflect concern about the role of omitted variables that are or may be spatially correlated. The Stata user-written procedure xsmle has been designed to estimate a wide range of spatial panel models, including spatial autocorrelation, spatial Durbin, and spatial error models using maximum likelihood methods. It relies upon the availability of balanced panel data with no missing observations. This requirement is stringent, but it arises from the fact that in principle, the values of the dependent variable for any panel unit may depend upon the values of the dependent and independent variables for all the other panel units. Thus, even a single missing data point may require that all data for a time period, panel unit, or variable be discarded. The presence of missing data is an endemic problem for many types of applied work, often because of the creation or disappearance of panel units. At the macro level, the number and composition of countries in Europe or local government units in the United Kingdom have changed substantially over the last three decades. In longitudinal household surveys, new households are created and old ones disappear all the time. Restricting the analysis to a subset of panel units that have remained stable over time is a form of sample selection whose consequences are uncertain and that may have statistical implications that merit additional investigation. The simplest mechanisms by which missing data may arise underpin the missing-at-random (MAR) assumption. When this is appropriate, it is possible to use two approaches to estimation with missing data. 
The first is either simple or, preferably, multiple imputation, which involves the replacement of missing data by stochastic imputed values. The Stata procedure mi can be combined with xsmle to implement a variety of estimates that rely upon multiple imputation. While the combination of procedures is relatively simple to estimate, practical experience suggests that the results can be quite sensitive to the specification that is adopted for the imputation phase of the analysis. Hence, this is not a one-size-fits-all method of dealing with unbalanced panels, because the analyst must give serious consideration to the way in which imputed values are generated. The second approach has been developed by Pfaffermayr. It relies upon the spatial interactions in the model, which means that the influence of the missing observations can be inferred from the values taken by non-missing observations. In effect, the missing observations are treated as latent variables whose distribution can be derived from the values of the non-missing data. This leads to a likelihood function that can be partitioned between missing and non-missing data and thus used to estimate the coefficients of the full model. The merit of the approach is that it takes explicit account of the spatial structure of the model. However, the procedure becomes computationally demanding if the proportion of missing observations is too large and, as one would expect, the information provided by the spatial interactions is not sufficient to generate well-defined estimates of the structural coefficients. The missing-at-random assumption is crucial for both of these approaches, but it is not reasonable to rely upon it when dealing with the birth or death of distinct panel units. A third approach, which is based on methods used in the literature on statistical signal processing, relies upon reducing the spatial interactions to immediate neighbors. 
Intuitively, the basic unit for the analysis becomes a block consisting of a central unit (the dependent variable) and its neighbors (the spatial interactions). Because spatial interactions are restricted to within-block effects, the population of blocks can vary over time and standard non-spatial panel methods can be applied. The presentation will describe and compare the three approaches to estimating spatial panel models as implemented in Stata as extensions to xsmle. It will be illustrated by analyses of i) state data on electricity consumption in the U.S. and ii) gridded historical data on temperature and precipitation to identify the effects of El Niño (ENSO) and other major weather oscillations. 
Date:  2013–09–16 
URL:  http://d.repec.org/n?u=RePEc:boc:usug13:09&r=ecm 
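The reason a single missing data point is so damaging in a spatial panel can be seen from the reduced form of a spatial-lag model, y = (I - rho*W)^(-1) * X*beta: every y_i depends on every unit's regressors. A small numpy sketch with an arbitrary row-standardized weight matrix (the error term is omitted for clarity):

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho, beta = 6, 0.4, 2.0
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)        # row-standardized spatial weights
x = rng.standard_normal(n)

# Spatial-lag model y = rho*W*y + x*beta  =>  y = (I - rho*W)^{-1} x*beta
A_inv = np.linalg.inv(np.eye(n) - rho * W)
y = A_inv @ (x * beta)

# Perturb a single unit's regressor: every element of y changes
x2 = x.copy()
x2[0] += 1.0
y2 = A_inv @ (x2 * beta)
```

Because the change in unit 0's data propagates to all units through (I - rho*W)^(-1), losing one observation contaminates the whole cross-section, which is the balanced-panel requirement described above.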
By:  Naoto Kunitomo (Faculty of Economics, University of Tokyo) 
Abstract:  In this lecture we illustrate several measurement error issues and their statistical analyses arising in Government Statistics, Econometrics and Financial Econometrics. We argue that there are common structures and methods in many statistical problems, and that it would be beneficial for many statisticians to think about the roles of measurement errors and their statistical analyses in the era of Big Data. 
Date:  2013–09 
URL:  http://d.repec.org/n?u=RePEc:tky:jseres:2013cj248&r=ecm 
By:  Robert, Christian P. 
Abstract:  This note is an extended review of the book Error and Inference, edited by Deborah Mayo and Aris Spanos, about their frequentist and philosophical perspective on hypothesis testing and on their criticisms of alternatives like the Bayesian approach. 
Keywords:  frequentist philosophy; criticisms of the Bayesian approach 
JEL:  C11 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:ner:dauphi:urn:hdl:123456789/7849&r=ecm 
By:  Stephen P. Jenkins (London School of Economics) 
Abstract:  Social scientists are increasingly fitting multilevel models to datasets in which a large number of individuals (N ~ several thousands) are nested within each of a small number of countries (C ~ 25). The researchers are particularly interested in "country effects", as summarized by either the coefficients on country-level predictors (or cross-level interactions) or the variance of the country-level random effects. Although questions have been raised about the potentially poor performance of estimators of these "country effects" when C is "small", this issue appears not to be widely appreciated by many social science researchers. Using Monte Carlo analysis, I examine the performance of two estimators of a binary-dependent two-level model using a design in which C = 5(5)50, 100 and N = 1000 for each country. The results point to i) the superior performance of adaptive quadrature estimators compared with PQL2 estimators, and ii) poor coverage of estimates of "country effects" in models in which C ~ 25, regardless of estimator. The analysis makes extensive use of xtmelogit and simulate and user-written commands such as runmlwin, parmby, and eclplot. Issues associated with having extremely long runtimes are also discussed. 
Date:  2013–09–16 
URL:  http://d.repec.org/n?u=RePEc:boc:usug13:04&r=ecm 
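Why coverage deteriorates when C is small can be shown with a much simpler analogue of the talk's design: estimating a between-"country" variance from C draws and wrapping it in a naive normal-approximation confidence interval. The simulation below is illustrative only and is unrelated to the xtmelogit runs in the presentation:

```python
import numpy as np

def coverage(C, reps=2000, sigma_u=1.0, seed=123):
    """Coverage of a naive 95% normal CI for the between-cluster variance."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        u = rng.normal(0, sigma_u, size=C)     # C "country effects"
        v = u.var(ddof=1)                      # estimate of sigma_u^2
        se = v * np.sqrt(2.0 / (C - 1))        # normal-approximation std. err.
        if v - 1.96 * se <= sigma_u**2 <= v + 1.96 * se:
            hits += 1
    return hits / reps

cov_small = coverage(C=10)    # well below the nominal 0.95
cov_large = coverage(C=200)   # close to 0.95
```

With few clusters, the variance estimate is strongly right-skewed (chi-square, not normal), so symmetric intervals undercover, the same qualitative phenomenon the talk documents for country-level random-effect variances at C ~ 25.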
By:  Nebojša St. Davčik 
Abstract:  The research practice in management research is dominantly based on structural equation modeling, but almost exclusively, and often misguidedly, on covariance-based SEM. We adumbrate theoretical foundations and guidance for the two SEM streams: covariance-based SEM, also known as LISREL, covariance structure analysis, latent variable analysis, etc.; and variance-based SEM, also known as component-based SEM, PLS, etc. Our conceptual framework discusses the two streams by analysis of theory, measurement model specification, sample and goodness-of-fit. We question the usefulness of the Cronbach's alpha research paradigm and discuss alternatives that are well-established in social science, but not well-known in the management research community. We conclude with discussion of some open questions in management research practice that remain under-investigated and unutilized. 
Keywords:  Structural equation modeling, covariance- and variance-based SEM, formative and reflective indicators, LISREL, PLS 
JEL:  C18 C3 M0 
Date:  2013–09–13 
URL:  http://d.repec.org/n?u=RePEc:isc:iscwp2:bruwp1307&r=ecm 
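The Cronbach's alpha paradigm questioned above is easy to state in code. A minimal sketch, with simulated reflective indicators of a single latent factor (all numbers hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_obs, k_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
latent = rng.standard_normal(500)
items = latent[:, None] + rng.normal(0, 0.8, size=(500, 4))  # 4 noisy indicators
alpha = cronbach_alpha(items)
```

Alpha rewards inter-item covariance relative to item variances, which is precisely why it presumes reflective (effect) indicators; for formative indicators, one of the distinctions the paper stresses, high alpha is neither expected nor meaningful.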