
on Econometrics 
By:  YAMAMOTO, Yohei 
Abstract:  Financial and macroeconomic time series often exhibit infrequent but large jumps. Such jumps may be considered outliers that are independent of the underlying data-generating processes and contaminate inference on the model. In this study, we investigate the effects of such jumps on asymptotic inference for large-dimensional common factor models. We first derive the upper bound on jump magnitudes under which standard asymptotic inference goes through. Second, we propose a jump-correction method based on a series-by-series outlier detection algorithm that does not account for the factor structure. This method regains standard asymptotic normality for the factor model unless outliers occur at common dates. Finally, we propose a test of whether the jumps at a common date are independent outliers or are of the factors. A Monte Carlo experiment confirms that the proposed jump-correction method recovers good finite-sample properties. The proposed test shows good size and power. Two small empirical applications illustrate the usefulness of the proposed methods. 
Keywords:  outliers, large-dimensional common factor models, principal components, jumps 
JEL:  C12 C38 
Date:  2015–07–02 
URL:  http://d.repec.org/n?u=RePEc:hit:econdp:201505&r=ecm 
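A minimal sketch of the series-by-series idea described above, using a median/MAD thresholding rule and neighbour interpolation; these particular choices are illustrative assumptions, not the paper's exact algorithm:

```python
import statistics

def detect_outliers(x, k=5.0):
    """Flag observations lying more than k robust standard deviations from
    the series median, using the median absolute deviation (MAD).
    A generic series-by-series rule that ignores any factor structure."""
    med = statistics.median(x)
    mad = statistics.median(abs(v - med) for v in x)
    scale = 1.4826 * mad if mad > 0 else 1e-12  # MAD -> sigma under Gaussian data
    return [i for i, v in enumerate(x) if abs(v - med) > k * scale]

def correct_jumps(x, k=5.0):
    """Replace flagged outliers by the average of the nearest
    non-flagged neighbours (simple interpolation)."""
    flags = set(detect_outliers(x, k))
    y = list(x)
    for i in flags:
        left = next((y[j] for j in range(i - 1, -1, -1) if j not in flags), None)
        right = next((y[j] for j in range(i + 1, len(y)) if j not in flags), None)
        vals = [v for v in (left, right) if v is not None]
        if vals:
            y[i] = sum(vals) / len(vals)
    return y
```

Applied series by series to a panel, this removes idiosyncratic jumps; as the abstract notes, outliers hitting many series at a common date are the case such a correction cannot settle on its own.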
By:  Koen Jochmans (Département d'économie); Geert Dhaene (KU Leuven) 
Abstract:  We derive bias-corrected least-squares estimators of panel vector autoregressions with fixed effects. The correction is straightforward to implement and yields an estimator that is asymptotically unbiased under asymptotics where the number of time series observations grows at the same rate as the number of cross-sectional observations. This makes the estimator well suited for most macroeconomic data sets. Simulation results show that the estimator yields substantial improvements over within-group least-squares estimation. We illustrate the bias correction in a study of the relation between the unemployment rate and the economic growth rate at the U.S. state level. 
Keywords:  bias correction, fixed effects, panel data, vector autoregression 
Date:  2015–07 
URL:  http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/4ect7tfnam9poo2tioundd7pb3&r=ecm 
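The fixed-effects (Nickell) bias the paper corrects can be seen in a scalar AR(1) toy version. The sketch below uses the well-known first-order approximation that the within-group estimator is biased by roughly -(1+rho)/(T-1); the paper's correction for panel VARs is more refined, so this is only an illustration of the mechanism:

```python
import random

def within_group_ar1(panel):
    """Within-group (fixed-effects) least-squares estimate of rho in
    y_it = a_i + rho * y_i,t-1 + e_it, after per-unit demeaning."""
    num = den = 0.0
    for y in panel:
        ybar_lag = sum(y[:-1]) / (len(y) - 1)
        ybar = sum(y[1:]) / (len(y) - 1)
        for t in range(1, len(y)):
            num += (y[t] - ybar) * (y[t - 1] - ybar_lag)
            den += (y[t - 1] - ybar_lag) ** 2
    return num / den

def bias_corrected(rho_hat, T):
    """Add back a plug-in estimate of the leading Nickell bias term
    -(1 + rho)/(T - 1). A simple approximation, not the paper's estimator."""
    return rho_hat + (1 + rho_hat) / (T - 1)

# Simulate a short, wide panel: large N, small T, true rho = 0.5.
random.seed(0)
N, T, rho = 500, 8, 0.5
panel = []
for _ in range(N):
    a = random.gauss(0, 1)           # unit fixed effect
    y = [a + random.gauss(0, 1)]
    for _ in range(T - 1):
        y.append(a * (1 - rho) + rho * y[-1] + random.gauss(0, 1))
    panel.append(y)

rho_w = within_group_ar1(panel)      # biased downward for small T
rho_c = bias_corrected(rho_w, T)     # pushed back toward the truth
```

The downward bias is substantial at T = 8 even with N = 500, which is why the abstract stresses the correction for macro panels with moderate time dimension.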
By:  Marian Vavra (National Bank of Slovakia, Research Department) 
Abstract:  This paper is concerned with the problem of testing for the equal forecast accuracy of competing models using a bootstrap-based Diebold-Mariano test statistic. The finite-sample properties of the test are assessed via Monte Carlo experiments. As an illustration, the forecast accuracy of the US Survey of Professional Forecasters is compared to that of an autoregressive model. The empirical results indicate that professionals beat AR models systematically only for a single economic variable: the unemployment rate. 
Keywords:  Forecast evaluation; Diebold-Mariano test; Sieve bootstrap 
JEL:  C12 C15 C32 C53 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:svk:wpaper:1034&r=ecm 
By:  Shi, W.; Kleijnen, J.P.C. (Tilburg University, Center For Economic Research) 
Abstract:  Sequential bifurcation (SB) is a very efficient and effective method for identifying the important factors (inputs) of simulation models with very many factors, provided the SB assumptions are valid. A variant of SB called multiresponse SB (MSB) can be applied to simulation models with multiple types of responses (outputs). The specific SB and MSB assumptions are: (i) a second-order polynomial per output is an adequate approximation (valid metamodel) of the implicit input/output function of the underlying simulation model; (ii) the directions (signs) of the first-order effects are known (so the first-order polynomial approximation per output is monotonic); (iii) heredity applies; i.e., if an input has no important first-order effect, then this input has no important second-order effects. To validate these three assumptions, we develop new methods. We compare these methods through Monte Carlo experiments and a case study. 
Keywords:  simulation; sensitivity analysis; Design of experiments; statistical analysis 
JEL:  C0 C1 C9 C15 C44 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:20917855af544d4da54b6235540f9bf7&r=ecm 
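A didactic sketch of the group-screening logic behind SB, assuming a monotone first-order model with known-positive effect signs (assumption (ii) above): the effect of switching a whole group of factors from low to high bounds the effects of its members, so unimportant groups are discarded in one simulation run and important ones are bisected. This is a simplification, not the paper's MSB variant:

```python
def sequential_bifurcation(simulate, n_factors, threshold):
    """Screen a monotone first-order simulation model: keep bisecting
    factor groups whose aggregate low-to-high effect exceeds `threshold`
    until the individual important factors remain."""
    def run(active):
        x = [1 if i in active else 0 for i in range(n_factors)]
        return simulate(x)

    base = run(set())                      # all factors at their low level
    important, queue = [], [list(range(n_factors))]
    while queue:
        group = queue.pop()
        effect = run(set(group)) - base    # aggregate effect of the group
        if effect <= threshold:
            continue                       # whole group discarded in one run
        if len(group) == 1:
            important.append(group[0])
        else:
            mid = len(group) // 2
            queue.extend([group[:mid], group[mid:]])
    return sorted(important)
```

The efficiency claim in the abstract is visible here: with two important factors out of ten, far fewer runs are needed than the ten of a one-at-a-time design, and the gap widens with "very many factors".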
By:  Carsten Schröder; Shlomo Yitzhaki 
Abstract:  Well-being (i.e., satisfaction, happiness) is a latent variable, impossible to observe directly. Hence, questionnaires ask people to grade their well-being in different life domains. The most common practice, comparing well-being by means of descriptive analysis or linear regressions, ignores that the underlying collected well-being information is ordinal. If the well-being function is ordinal, then monotonic transformations are allowed. We demonstrate that treating ordinal data with methods intended for cardinal data may give an incorrect impression of a robust result. In particular, we derive the conditions under which applying a cardinal method to an ordinal variable gives an illusory sense of robustness, while in fact the conclusion can be reversed by using an alternative cardinal assumption. The paper provides empirical applications. 
Keywords:  satisfaction, well-being, ordinal, cardinal, dominance 
JEL:  C18 C23 C25 I30 I31 I39 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:diw:diwsop:diw_sp772&r=ecm 
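The reversal the abstract warns about is easy to exhibit with hypothetical data: a monotone relabelling of the ordinal categories preserves every individual ranking yet flips a comparison of group means. The scores below are chosen only to make the point transparent:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Two groups report ordinal satisfaction on a 1-5 scale (hypothetical data).
group_a = [1, 1, 5]
group_b = [2, 3, 3]

# Treating the labels as cardinal, B looks more satisfied on average...
naive_gap = mean(group_b) - mean(group_a)            # 8/3 - 7/3 > 0

# ...but ordinal data only identify the ranking, so any strictly increasing
# relabelling is equally valid. Squaring the labels keeps every ranking
# intact yet reverses the comparison.
transformed_gap = (mean([v * v for v in group_b])
                   - mean([v * v for v in group_a]))  # 22/3 - 9 < 0
```

A conclusion that survives all monotone transformations (the dominance conditions the paper derives) is the only one the ordinal data genuinely support.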
By:  Siem Jan Koopman (VU University Amsterdam); Rutger Lit (VU University Amsterdam); Andre Lucas (VU University Amsterdam) 
Abstract:  We introduce a dynamic Skellam model that measures stochastic volatility from high-frequency tick-by-tick discrete stock price changes. The likelihood function for our model is analytically intractable and requires Monte Carlo integration methods for its numerical evaluation. The proposed methodology is applied to tick-by-tick data of four stocks traded on the New York Stock Exchange. We require fast simulation methods for likelihood evaluation since the number of observations per series per day varies from 1,000 to 10,000. Complexities in the intraday dynamics of volatility and in the frequency of trades without price impact require further non-trivial adjustments to the dynamic Skellam model. In-sample residual diagnostics and goodness-of-fit statistics show that the final model provides a good fit to the data. An extensive forecasting study of intraday volatility shows that the dynamic modified Skellam model provides accurate forecasts compared to alternative modeling approaches. 
Keywords:  non-Gaussian time series models; volatility models; importance sampling; numerical integration; high-frequency data; discrete price changes. 
JEL:  C22 C32 C58 
Date:  2015–07–01 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20150076&r=ecm 
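The static building block of the model above is the Skellam distribution: the law of the difference of two independent Poisson counts, which puts mass on positive and negative integer tick changes. A self-contained sketch of its pmf (the paper's dynamic, modified version lets the intensities vary over time):

```python
import math

def bessel_i(v, x, terms=60):
    """Modified Bessel function I_v(x) for integer v >= 0, via its
    power series (adequate for the small arguments used here)."""
    return sum((x / 2) ** (2 * m + v) / (math.factorial(m) * math.factorial(m + v))
               for m in range(terms))

def skellam_pmf(k, mu1, mu2):
    """P(X - Y = k) for independent X ~ Poisson(mu1), Y ~ Poisson(mu2):
    exp(-(mu1+mu2)) * (mu1/mu2)^(k/2) * I_|k|(2 sqrt(mu1 mu2))."""
    return (math.exp(-(mu1 + mu2)) * (mu1 / mu2) ** (k / 2)
            * bessel_i(abs(k), 2.0 * math.sqrt(mu1 * mu2)))
```

The mean of the distribution is mu1 - mu2 and the variance mu1 + mu2, which is how intensities map into drift and volatility of discrete price changes.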
By:  Yoann Potiron; Per Mykland 
Abstract:  When estimating the integrated covariation between two assets based on high-frequency data, simple assumptions are usually imposed on the relationship between the price processes and the observation times. In this paper, we introduce an endogenous two-dimensional model and show that it is more general than the existing endogenous models in the literature. In addition, we establish a central limit theorem for the Hayashi-Yoshida estimator in this general endogenous model in the case where prices follow pure-diffusion processes. 
Date:  2015–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1507.01033&r=ecm 
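The Hayashi-Yoshida estimator the paper studies needs no synchronization of the two observation grids: it sums products of increments over every pair of overlapping observation intervals. A direct sketch:

```python
def hayashi_yoshida(times_x, dx, times_y, dy):
    """Hayashi-Yoshida covariance estimator for asynchronously observed
    prices: sum dx_i * dy_j over all pairs of observation intervals that
    overlap in time. times_* are increasing observation times, and
    increment i of X spans (times_x[i], times_x[i+1]]."""
    cov = 0.0
    for i in range(len(dx)):
        for j in range(len(dy)):
            if (min(times_x[i + 1], times_y[j + 1])
                    > max(times_x[i], times_y[j])):   # intervals overlap
                cov += dx[i] * dy[j]
    return cov
```

On a common (synchronous) grid only the diagonal pairs overlap, so the estimator collapses to the usual realized covariance; the asynchronous case is where it avoids the bias of naive interpolation schemes.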
By:  Antoine Kornprobst (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS); Raphael Douady (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS) 
Abstract:  The aim of this work is to build financial crisis indicators based on market data time series. After choosing an optimal size for a rolling window, the market data is seen every trading day as a random matrix from which a covariance and a correlation matrix are obtained. Our indicators deal with the spectral properties of these covariance and correlation matrices. Our basic financial intuition is that correlation and volatility are like the heartbeat of the financial market: when correlations between asset prices increase or develop abnormal patterns, and when volatility starts to increase, a crisis event might be around the corner. Our indicators are mainly of two types. The first is based on the Hellinger distance, computed between the distribution of the eigenvalues of the empirical covariance matrix and the distribution of the eigenvalues of a reference covariance matrix. As reference distributions we use the theoretical Marchenko-Pastur distribution and, mainly, simulated ones obtained from a random matrix of the same size as the empirical rolling matrix, with Gaussian or Student-t coefficients and some simulated correlations. The idea behind this first type of indicator is that when the empirical distribution of the spectrum of the covariance matrix deviates from the reference in the sense of Hellinger, a crisis may be forthcoming. The second type of indicator is based on the study of the spectral radius and the trace of the covariance and correlation matrices, as a means to directly study the volatility and correlations inside the market. The idea behind the second type of indicator is that large eigenvalues are a sign of dynamic instability. 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs01169307&r=ecm 
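The two ingredients of the first indicator type are easy to write down: the Hellinger distance between binned eigenvalue distributions, and the Marchenko-Pastur density as the i.i.d. reference. A sketch (the binning, window size, and simulated-correlation references of the paper are not reproduced here):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions given as
    equal-length probability vectors (e.g. binned eigenvalue histograms).
    Equals 0 for identical distributions and 1 for disjoint supports."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc))

def marchenko_pastur_pdf(lam, q, sigma2=1.0):
    """Marchenko-Pastur eigenvalue density for the sample covariance matrix
    of i.i.d. data with variance sigma2 and aspect ratio
    q = n_assets / n_observations, 0 < q <= 1."""
    lo = sigma2 * (1 - math.sqrt(q)) ** 2
    hi = sigma2 * (1 + math.sqrt(q)) ** 2
    if lam <= lo or lam >= hi:
        return 0.0
    return math.sqrt((hi - lam) * (lam - lo)) / (2 * math.pi * sigma2 * q * lam)
```

A growing Hellinger distance between the rolling empirical spectrum and this reference is exactly the "deviation in the sense of Hellinger" the abstract uses as a warning signal.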
By:  Kleijnen, J.P.C. (Tilburg University, Center For Economic Research) 
Abstract:  This article reviews the design and analysis of simulation experiments. It focuses on analysis via either low-order polynomial regression or Kriging (also known as Gaussian process) metamodels. The type of metamodel determines the design of the experiment, which determines the input combinations of the simulation experiment. For example, a first-order polynomial metamodel requires a "resolution-III" design, whereas Kriging may use Latin hypercube sampling. Polynomials of first or second order require resolution III, IV, V, or "central composite" designs. Before applying either regression or Kriging, sequential bifurcation may be applied to screen a great many inputs. Optimization of the simulated system may use either a sequence of low-order polynomials known as response surface methodology (RSM) or Kriging models fitted through sequential designs, including efficient global optimization (EGO). The review includes robust optimization, which accounts for uncertain simulation inputs. 
Keywords:  robustness and sensitivity; simulation; metamodel; design; regression; Kriging 
JEL:  C0 C1 C9 C15 C44 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:c592e895165643c38c7ef3530b04af9c&r=ecm 
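Latin hypercube sampling, the space-filling design the review pairs with Kriging, is compact to implement: cut each axis into n equal strata and use an independent random permutation per axis so every stratum is hit exactly once in every dimension. A minimal sketch:

```python
import random

def latin_hypercube(n, d, seed=0):
    """Latin hypercube sample of n points in [0, 1)^d: one point per
    stratum in every dimension, with a random offset inside each stratum."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                      # which stratum each point gets
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Unlike a full grid, the number of runs n is chosen independently of the dimension d, which is what makes the design attractive for expensive simulation experiments.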
By:  Susan Athey; Dean Eckles; Guido W. Imbens 
Abstract:  We study the calculation of exact p-values for a large class of non-sharp null hypotheses about treatment effects in a setting with data from experiments involving members of a single connected network. The class includes null hypotheses that limit the effect of one unit's treatment status on another according to the distance between units; for example, the hypothesis might specify that the treatment status of immediate neighbors has no effect, or that units more than two edges away have no effect. We also consider hypotheses concerning the validity of sparsification of a network (for example, based on the strength of ties) and hypotheses restricting heterogeneity in peer effects (so that, for example, only the number or fraction treated among neighboring units matters). Our general approach is to define an artificial experiment, such that the null hypothesis that was not sharp for the original experiment is sharp for the artificial experiment, and such that the randomization analysis for the artificial experiment is validated by the design of the original experiment. 
JEL:  C01 C1 
Date:  2015–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:21313&r=ecm 
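The randomization machinery the paper builds on is the Fisher exact test for a sharp null: with outcomes fixed under the null, re-randomizing the treatment labels generates the exact null distribution of any statistic. The sketch below shows these mechanics for a simple no-effect null; the paper's contribution, constructing an artificial experiment so that a non-sharp null becomes sharp, is not reproduced here:

```python
import random

def randomization_p_value(outcomes, treated, stat, draws=2000, seed=0):
    """Monte Carlo approximation of the Fisher randomization p-value for a
    sharp null of no treatment effect, with the standard +1 correction so
    the p-value is valid at any number of draws."""
    rng = random.Random(seed)
    observed = stat(outcomes, treated)
    n, n_treated = len(outcomes), sum(treated)
    count = 0
    for _ in range(draws):
        idx = set(rng.sample(range(n), n_treated))   # re-randomized labels
        relabeled = [1 if i in idx else 0 for i in range(n)]
        if stat(outcomes, relabeled) >= observed:
            count += 1
    return (count + 1) / (draws + 1)

def diff_in_means(y, w):
    t = [v for v, x in zip(y, w) if x]
    c = [v for v, x in zip(y, w) if not x]
    return sum(t) / len(t) - sum(c) / len(c)
```

In the network setting, the statistic and the set of admissible re-randomizations are what change; the p-value calculation itself keeps this form.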