nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒07‒11
ten papers chosen by
Sune Karlsson
Örebro universitet

  1. Asymptotic Inference for Common Factor Models in the Presence of Jumps By YAMAMOTO, Yohei
  2. Bias-corrected estimation of panel vector autoregressions By Koen Jochmans; Geert Dhaene
  3. On a Bootstrap Test for Forecast Evaluations By Marian Vavra
  4. Validating the assumptions of sequential bifurcation in factor screening By Shi, W.; Kleijnen, J.P.C.
  5. Revisiting the Evidence for a Cardinal Treatment of Ordinal Variables By Carsten Schröder; Shlomo Yitzhaki
  6. Intraday Stochastic Volatility in Discrete Price Changes: the Dynamic Skellam Model By Siem Jan Koopman; Rutger Lit; Andre Lucas
  7. Estimation of integrated quadratic covariation between two assets with endogenous sampling times By Yoann Potiron; Per Mykland
  8. A Practical Approach to Financial Crisis Indicators Based on Random Matrices By Antoine Kornprobst; Raphael Douady
  9. Regression and kriging metamodels with their experimental designs in simulation: review By Kleijnen, J.P.C.
  10. Exact P-values for Network Interference By Susan Athey; Dean Eckles; Guido W. Imbens

  1. By: YAMAMOTO, Yohei
    Abstract: Financial and macroeconomic time-series data often exhibit infrequent but large jumps. Such jumps may be considered outliers that are independent of the underlying data-generating processes and that contaminate inference on their models. In this study, we investigate the effects of such jumps on asymptotic inference for large-dimensional common factor models. We first derive the upper bound on jump magnitudes under which standard asymptotic inference goes through. Second, we propose a jump-correction method based on a series-by-series outlier-detection algorithm that does not account for the factor structure. This method recovers standard asymptotic normality for the factor model unless outliers occur at common dates. Finally, we propose a test of whether the jumps at a common date are independent outliers or originate from the factors. A Monte Carlo experiment confirms that the proposed jump-correction method has good finite-sample properties, and the proposed test shows good size and power. Two small empirical applications illustrate the usefulness of the proposed methods.
    Keywords: outliers, large-dimensional common factor models, principal components, jumps
    JEL: C12 C38
    Date: 2015–07–02
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2015-05&r=ecm
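    Sketch: a minimal Python illustration of the series-by-series idea, assuming a simple median/MAD outlier rule and a standard principal-components step; the threshold k and the function names are illustrative, not the authors' exact algorithm.
      import numpy as np

      def jump_correct(X, k=5.0):
          """Series-by-series outlier correction that ignores the factor
          structure: points more than k robust SDs from the series median
          are replaced by the median."""
          Xc = X.copy()
          for i in range(X.shape[1]):
              x = X[:, i]
              med = np.median(x)
              mad = 1.4826 * np.median(np.abs(x - med))  # Gaussian-consistent scale
              Xc[np.abs(x - med) > k * mad, i] = med
          return Xc

      def pc_factors(X, r):
          """Principal-components factor estimates from a (T, N) panel."""
          X = X - X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          F = np.sqrt(X.shape[0]) * U[:, :r]   # estimated factors
          L = (X.T @ F) / X.shape[0]           # estimated loadings
          return F, L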
  2. By: Koen Jochmans (Département d'économie); Geert Dhaene (KU Leuven)
    Abstract: We derive bias-corrected least-squares estimators of panel vector autoregressions with fixed effects. The correction is straightforward to implement and yields an estimator that is asymptotically unbiased under asymptotics where the number of time series observations grows at the same rate as the number of cross-sectional observations. This makes the estimator well suited for most macroeconomic data sets. Simulation results show that the estimator yields substantial improvements over within-group least-squares estimation. We illustrate the bias correction in a study of the relation between the unemployment rate and the economic growth rate at the U.S. state level.
    Keywords: bias correction, fixed effects, panel data, vector autoregression
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/4ect7tfnam9poo2tioundd7pb3&r=ecm
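    Sketch: the Nickell bias of within-group estimation, and one generic remedy, a half-panel jackknife, in a simulated AR(1) panel; this illustrates the bias problem under assumed parameter values and is not necessarily the correction derived in the paper.
      import numpy as np

      def within_ar1(Y):
          """Within-group (fixed-effects) AR(1) slope from an (N, T) panel."""
          Yd = Y - Y.mean(axis=1, keepdims=True)   # sweep out unit effects
          return (Yd[:, 1:] * Yd[:, :-1]).sum() / (Yd[:, :-1] ** 2).sum()

      def half_panel_jackknife(Y):
          """Combine full- and half-panel estimates to remove the leading
          O(1/T) bias: 2*full minus the average of the two half panels."""
          half = Y.shape[1] // 2
          return 2.0 * within_ar1(Y) - 0.5 * (within_ar1(Y[:, :half])
                                              + within_ar1(Y[:, half:]))

      rng = np.random.default_rng(0)
      N, T, rho = 200, 20, 0.5
      alpha = rng.normal(size=N)
      Y = np.zeros((N, T))
      for t in range(1, T):
          Y[:, t] = alpha * (1 - rho) + rho * Y[:, t - 1] + rng.normal(size=N)
      print(within_ar1(Y), half_panel_jackknife(Y))  # jackknife typically closer to 0.5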
  3. By: Marian Vavra (National Bank of Slovakia, Research Department)
    Abstract: This paper is concerned with the problem of testing for the equal forecast accuracy of competing models using a bootstrap-based Diebold-Mariano test statistic. The finite-sample properties of the test are assessed via Monte Carlo experiments. As an illustration, the forecast accuracy of the US Survey of Professional Forecasters is compared to that of an autoregressive model. The empirical results indicate that professionals systematically beat AR models for only a single economic variable: the unemployment rate.
    Keywords: Forecast evaluation; Diebold-Mariano test; Sieve bootstrap
    JEL: C12 C15 C32 C53
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:svk:wpaper:1034&r=ecm
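    Sketch: the Diebold-Mariano statistic with a Bartlett long-run variance, plus a sieve-bootstrap p-value built by fitting an AR(p) to the loss differential; the AR order, lag rule, and burn-in are illustrative choices, not the paper's exact settings.
      import numpy as np

      def dm_stat(d):
          """DM statistic for a loss differential d_t, using a Bartlett
          long-run variance with truncation lag ~ T^(1/3)."""
          T, dbar = len(d), d.mean()
          u = d - dbar
          L = int(T ** (1 / 3))
          lrv = (u @ u) / T
          for l in range(1, L + 1):
              lrv += 2 * (1 - l / (L + 1)) * (u[l:] @ u[:-l]) / T
          return dbar / np.sqrt(lrv / T)

      def sieve_bootstrap_pvalue(d, p=2, B=999, seed=0):
          """Fit an AR(p) to the demeaned differential, resample residuals,
          and rebuild series under the null of equal forecast accuracy."""
          rng = np.random.default_rng(seed)
          T, u = len(d), d - d.mean()
          X = np.column_stack([u[p - j - 1:T - j - 1] for j in range(p)])
          phi, *_ = np.linalg.lstsq(X, u[p:], rcond=None)
          eps = u[p:] - X @ phi
          t0, exceed, burn = abs(dm_stat(d)), 0, 50
          for _ in range(B):
              e = rng.choice(eps, size=T + burn, replace=True)
              x = np.zeros(T + burn)
              for t in range(p, T + burn):
                  x[t] = phi @ x[t - p:t][::-1] + e[t]
              exceed += abs(dm_stat(x[burn:])) >= t0
          return (1 + exceed) / (B + 1)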
  4. By: Shi, W.; Kleijnen, J.P.C. (Tilburg University, Center For Economic Research)
    Abstract: Sequential bifurcation (SB) is a very efficient and effective method for identifying the important factors (inputs) of simulation models with very many factors, provided the SB assumptions are valid. A variant of SB called multiresponse SB (MSB) can be applied to simulation models with multiple types of responses (outputs). The specific SB and MSB assumptions are: (i) a second-order polynomial per output is an adequate approximation (valid metamodel) of the implicit input/output function of the underlying simulation model; (ii) the directions (signs) of the first-order effects are known (so the first-order polynomial approximation per output is monotonic); (iii) heredity applies; i.e., if an input has no important first-order effect, then this input has no important second-order effects. To validate these three assumptions, we develop new methods. We compare these methods through Monte Carlo experiments and a case study.
    Keywords: simulation; sensitivity analysis; design of experiments; statistical analysis
    JEL: C0 C1 C9 C15 C44
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:20917855-af54-4d4d-a54b-6235540f9bf7&r=ecm
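    Sketch: basic sequential bifurcation on a noise-free, first-order, monotone model (assumptions (i)-(ii) above, simplified): test the aggregated effect of a group of inputs and split the group only if it matters. Real SB uses a cumulative design and replication under noise; this is a stripped-down illustration.
      def sequential_bifurcation(f, k, threshold):
          """Identify important inputs of f, a function of k levels in
          {-1, +1}, assuming all first-order effects are >= 0 so that
          group effects cannot cancel."""
          def group_effect(lo, hi):
              # Output with inputs [lo, hi) at +1 and the rest at -1,
              # minus output with all inputs at -1: the aggregated effect.
              z_hi = [-1] * k
              for j in range(lo, hi):
                  z_hi[j] = +1
              return f(z_hi) - f([-1] * k)

          important, stack = [], [(0, k)]
          while stack:
              lo, hi = stack.pop()
              if group_effect(lo, hi) <= threshold:
                  continue                      # whole group unimportant
              if hi - lo == 1:
                  important.append(lo)          # isolated an important input
              else:
                  mid = (lo + hi) // 2          # bifurcate; test both halves
                  stack += [(lo, mid), (mid, hi)]
          return sorted(important)

      # Toy example: only inputs 3 and 17 matter out of 32.
      f = lambda z: 4.0 * z[3] + 2.5 * z[17]
      print(sequential_bifurcation(f, 32, threshold=1.0))  # -> [3, 17]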
  5. By: Carsten Schröder; Shlomo Yitzhaki
    Abstract: Well-being (i.e., satisfaction, happiness) is a latent variable that is impossible to observe directly. Hence, questionnaires ask people to grade their well-being in different life domains. The most common practice of comparing well-being by means of descriptive analysis or linear regressions ignores that the collected well-being information is ordinal. If the well-being function is ordinal, then monotonic transformations are allowed. We demonstrate that treating ordinal data with methods intended for cardinal data may give an incorrect impression of a robust result. In particular, we derive the conditions under which applying a cardinal method to an ordinal variable gives an illusory sense of robustness, when in fact the conclusion can be reversed under an alternative cardinal assumption. The paper provides empirical applications.
    Keywords: satisfaction, well-being, ordinal, cardinal, dominance
    JEL: C18 C23 C25 I30 I31 I39
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:diw:diwsop:diw_sp772&r=ecm
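    Sketch: a worked toy example of the reversal argument with made-up numbers (not from the paper). Both codings below are monotone, hence ordinally equivalent, yet they rank the two groups differently.
      import numpy as np

      A = np.array([1, 5, 5])   # polarized responses on a 1-5 scale
      B = np.array([4, 4, 4])   # uniformly moderate responses

      # Raw scale: B looks happier on average.
      print(A.mean(), B.mean())            # 3.67 < 4.0

      # A strictly increasing recoding of the same ordinal answers
      # reverses the ranking of the group means.
      g = lambda x: x ** 3
      print(g(A).mean(), g(B).mean())      # 83.67 > 64.0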
  6. By: Siem Jan Koopman (VU University Amsterdam); Rutger Lit (VU University Amsterdam); Andre Lucas (VU University Amsterdam)
    Abstract: We introduce a dynamic Skellam model that measures stochastic volatility from high-frequency tick-by-tick discrete stock price changes. The likelihood function for our model is analytically intractable and requires Monte Carlo integration methods for its numerical evaluation. The proposed methodology is applied to tick-by-tick data of four stocks traded on the New York Stock Exchange. We require fast simulation methods for likelihood evaluation since the number of observations per series per day varies from 1000 to 10,000. Complexities in the intraday dynamics of volatility and in the frequency of trades without price impact require further non-trivial adjustments to the dynamic Skellam model. In-sample residual diagnostics and goodness-of-fit statistics show that the final model provides a good fit to the data. An extensive forecasting study of intraday volatility shows that the dynamic modified Skellam model provides accurate forecasts compared to alternative modeling approaches.
    Keywords: non-Gaussian time series models; volatility models; importance sampling; numerical integration; high-frequency data; discrete price changes.
    JEL: C22 C32 C58
    Date: 2015–07–01
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20150076&r=ecm
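    Sketch: the Skellam building block in Python. A Skellam variable is the difference of two independent Poissons, so it lives on the integers, which makes it a natural model for tick-size price changes; the intraday intensity pattern below is a deterministic stand-in, whereas the paper's dynamic model makes the intensities stochastic.
      import numpy as np
      from scipy.stats import skellam

      mu_up, mu_down = 1.3, 1.1
      print(skellam.pmf([-2, -1, 0, 1, 2], mu_up, mu_down))

      # Simulate a day of discrete price changes with an illustrative
      # U-shaped intraday intensity (high activity at open and close).
      rng = np.random.default_rng(1)
      n = 1000
      s = np.linspace(0, 1, n)
      intensity = 0.5 + 8.0 * (s - 0.5) ** 2
      dp = rng.poisson(intensity) - rng.poisson(intensity)
      price = 100 + np.cumsum(dp)          # price path in ticks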
  7. By: Yoann Potiron; Per Mykland
    Abstract: When estimating integrated covariation between two assets based on high-frequency data, simple assumptions are usually imposed on the relationship between the price processes and the observation times. In this paper, we introduce an endogenous two-dimensional model and show that it is more general than the existing endogenous models in the literature. In addition, we establish a central limit theorem for the Hayashi-Yoshida estimator in this general endogenous model in the case where prices follow pure-diffusion processes.
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1507.01033&r=ecm
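    Sketch: the Hayashi-Yoshida estimator itself, which sums return cross-products over all pairs of overlapping observation intervals; the estimator is standard, while the synthetic data below are illustrative and ignore the endogenous-sampling asymptotics the paper develops.
      import numpy as np

      def hayashi_yoshida(tx, x, ty, y):
          """HY estimator of integrated covariation from two asynchronously
          observed log-price series x at times tx and y at times ty."""
          dx, dy = np.diff(x), np.diff(y)
          hy = 0.0
          for i in range(len(dx)):
              for j in range(len(dy)):
                  # do intervals (tx[i], tx[i+1]] and (ty[j], ty[j+1]] overlap?
                  if tx[i] < ty[j + 1] and ty[j] < tx[i + 1]:
                      hy += dx[i] * dy[j]
          return hy

      # Sanity check: synchronous, identical Brownian paths give ~1.0.
      t = np.linspace(0.0, 1.0, 501)
      rng = np.random.default_rng(2)
      w = np.cumsum(rng.normal(scale=np.sqrt(np.diff(t, prepend=0.0))))
      print(hayashi_yoshida(t, w, t, w))   # ~ integrated variance on [0, 1]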
  8. By: Antoine Kornprobst (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS); Raphael Douady (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS)
    Abstract: The aim of this work is to build financial crisis indicators based on market data time series. After choosing an optimal size for a rolling window, the market data are seen every trading day as a random matrix from which a covariance and a correlation matrix are obtained. Our indicators deal with the spectral properties of these covariance and correlation matrices. Our basic financial intuition is that correlation and volatility are like the heartbeat of the financial market: when correlations between asset prices increase or develop abnormal patterns, and when volatility starts to increase, a crisis event might be around the corner. Our indicators are mainly of two types. The first is based on the Hellinger distance, computed between the distribution of the eigenvalues of the empirical covariance matrix and the distribution of the eigenvalues of a reference covariance matrix. As the reference distribution we use the theoretical Marchenko-Pastur distribution and, mainly, simulated ones obtained from a random matrix of the same size as the empirical rolling matrix, composed of Gaussian or Student-t entries with some simulated correlations. The idea behind this first type of indicator is that when the empirical distribution of the spectrum of the covariance matrix deviates from the reference in the Hellinger sense, a crisis may be forthcoming. The second type of indicator is based on the study of the spectral radius and the trace of the covariance and correlation matrices as a means to directly study the volatility and correlations inside the market. The idea behind this second type is that large eigenvalues are a sign of dynamic instability.
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01169307&r=ecm
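    Sketch: the first indicator type in miniature, using the theoretical Marchenko-Pastur density as the reference; the window size, bin grid, and simulated returns are illustrative, not the paper's calibration.
      import numpy as np

      def mp_density(lam, q, sigma2=1.0):
          """Marchenko-Pastur eigenvalue density for aspect ratio q = N/T."""
          lo = sigma2 * (1 - np.sqrt(q)) ** 2
          hi = sigma2 * (1 + np.sqrt(q)) ** 2
          out = np.zeros_like(lam)
          inside = (lam > lo) & (lam < hi)
          out[inside] = np.sqrt((hi - lam[inside]) * (lam[inside] - lo)) / (
              2 * np.pi * sigma2 * q * lam[inside])
          return out

      def hellinger(p, q):
          """Hellinger distance between two binned distributions."""
          p, q = p / p.sum(), q / q.sum()
          return np.sqrt(1.0 - np.sqrt(p * q).sum())

      # One rolling window: spectrum of the empirical correlation matrix
      # versus the Marchenko-Pastur reference.
      rng = np.random.default_rng(3)
      T, N = 250, 50                       # window length, number of assets
      R = rng.normal(size=(T, N))          # stand-in for daily returns
      eig = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))
      edges = np.linspace(0.0, 4.0, 41)
      emp, _ = np.histogram(eig, bins=edges)
      mid = 0.5 * (edges[:-1] + edges[1:])
      print(hellinger(emp.astype(float), mp_density(mid, q=N / T)))  # small for pure noise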
  9. By: Kleijnen, J.P.C. (Tilburg University, Center For Economic Research)
    Abstract: This article reviews the design and analysis of simulation experiments. It focuses on analysis via either low-order polynomial regression or Kriging (also known as Gaussian process) metamodels. The type of metamodel determines the design of the experiment, which in turn determines the input combinations of the simulation experiment. For example, a first-order polynomial metamodel requires a "resolution-III" design, whereas Kriging may use Latin hypercube sampling. Polynomials of first or second order require resolution III, IV, V, or "central composite" designs. Before applying either regression or Kriging, sequential bifurcation may be applied to screen a great many inputs. Optimization of the simulated system may use either a sequence of low-order polynomials, known as response surface methodology (RSM), or Kriging models fitted through sequential designs, including efficient global optimization (EGO). The review includes robust optimization, which accounts for uncertain simulation inputs.
    Keywords: robustness and sensitivity; simulation; metamodel; design; regression; Kriging
    JEL: C0 C1 C9 C15 C44
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:c592e895-1656-43c3-8c7e-f3530b04af9c&r=ecm
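    Sketch: a Latin hypercube design combined with a Kriging (Gaussian process) metamodel, the pairing the review discusses; the toy simulation model and kernel length scale are illustrative assumptions.
      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Space-filling design: Latin hypercube sample in [0, 1]^2.
      sampler = qmc.LatinHypercube(d=2, seed=4)
      X = sampler.random(n=20)

      # Stand-in for an expensive simulation model.
      simulate = lambda x: np.sin(6 * x[:, 0]) + 0.5 * x[:, 1] ** 2
      y = simulate(X)

      # Kriging metamodel fitted to the design points; prediction comes
      # with an uncertainty estimate, which sequential designs exploit.
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)
      mean, sd = gp.predict(np.array([[0.25, 0.5]]), return_std=True)
      print(mean, sd)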
  10. By: Susan Athey; Dean Eckles; Guido W. Imbens
    Abstract: We study the calculation of exact p-values for a large class of non-sharp null hypotheses about treatment effects in a setting with data from experiments involving members of a single connected network. The class includes null hypotheses that limit the effect of one unit's treatment status on another according to the distance between units; for example, the hypothesis might specify that the treatment status of immediate neighbors has no effect, or that units more than two edges away have no effect. We also consider hypotheses concerning the validity of sparsification of a network (for example based on the strength of ties) and hypotheses restricting heterogeneity in peer effects (so that, for example, only the number or fraction treated among neighboring units matters). Our general approach is to define an artificial experiment, such that the null hypothesis that was not sharp for the original experiment is sharp for the artificial experiment, and such that the randomization analysis for the artificial experiment is validated by the design of the original experiment.
    JEL: C01 C1
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21313&r=ecm
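    Sketch: the basic Fisher randomization test that the paper's approach builds on, shown here under a sharp null with no network structure; the paper's contribution, constructing artificial experiments so that non-sharp interference hypotheses become sharp, is not reproduced here.
      import numpy as np

      def randomization_pvalue(y, w, stat, draws=999, seed=5):
          """Exact (one-sided) randomization p-value under a sharp null:
          outcomes y stay fixed while the treatment vector w is
          re-randomized and the statistic is recomputed each draw."""
          rng = np.random.default_rng(seed)
          t0 = stat(y, w)
          exceed = sum(stat(y, rng.permutation(w)) >= t0 for _ in range(draws))
          return (1 + exceed) / (1 + draws)

      # Difference in means as the test statistic.
      stat = lambda y, w: y[w == 1].mean() - y[w == 0].mean()
      rng = np.random.default_rng(6)
      w = rng.permutation(np.repeat([0, 1], 50))
      y = 0.4 * w + rng.normal(size=100)   # data with a true effect
      print(randomization_pvalue(y, w, stat))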

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.