nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒11‒20
ten papers chosen by
Sune Karlsson
Örebro universitet

  1. Almost Unbiased Variance Estimation in Simultaneous Equation Models By Phillip, Garry; Xu, Yongdeng
  2. Empirical characteristic functions-based estimation and distance correlation for locally stationary processes By Jentsch, Carsten; Leucht, Anne; Meyer, Marco; Beering, Carina
  3. Nonparametric density estimation and testing By Patrick Marsh
  4. Estimation of a Multiplicative Covariance Structure in the Large Dimensional Case By Hafner, C. M.; Linton, O.
  5. Crisis-Contingent Dynamics of Connectedness: An SVAR-Spatial-Network “Tripod” Model with Thresholds By Sun, Hang
  6. The Devil is in the Tails: Regression Discontinuity Design with Measurement Error in the Assignment Variable By Pei, Zhuan; Shen, Yi
  7. Multinomial VaR Backtests: A simple implicit approach to backtesting expected shortfall By Marie Kratz; Yen H. Lok; Alexander J McNeil
  8. High Frequency vs. Daily Resolution: the Economic Value of Forecasting Volatility Models By F. Lilla
  9. Empirical analysis of daily cash flow time series and its implications for forecasting By Francisco Salas-Molina; Juan A. Rodríguez-Aguilar; Joan Serrà; Francisco J. Martin
  10. Policy Analysis, Forediction, and Forecast Failure By Jennifer Castle; David Hendry

  1. By: Phillip, Garry (Cardiff Business School); Xu, Yongdeng (Cardiff Business School)
    Abstract: While a good deal of research in simultaneous equation models has examined the small-sample properties of coefficient estimators, there has not been a corresponding interest in the properties of estimators for the associated variances. In this paper we build on Kiviet and Phillips (2000) and explore the biases in variance estimators. This is done for the 2SLS and the MLIML estimators. The approximations to the bias are then used to develop less biased estimators whose properties are examined and compared in a number of simulation experiments. In addition, a bootstrap estimator is included which is found to perform especially well. The experiments also consider coverage probabilities/test sizes and test powers of the t-tests, where it is shown that tests based on 2SLS are generally oversized while test sizes based on MLIML are closer to nominal levels. In both cases test statistics based on the corrected variance estimates generally have higher power than standard procedures.
    Keywords: Simultaneous equation models, 2SLS and Fuller's estimators, Bias corrected variance estimation, Inference and bias corrected variance
    JEL: C12 C13 C26 C30
    Date: 2016–10
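    The bootstrap variance estimator the authors find to perform well is specific to their bias-correction setup; as a rough illustration of the general idea only, the sketch below computes a pairs-bootstrap variance for a plain 2SLS estimator on simulated data. The function names and the data-generating process are illustrative, not the paper's.

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares estimate of y = X*beta + u with instruments Z."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instrument space
    return np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)

def bootstrap_variance(y, X, Z, n_boot=200, seed=0):
    """Pairs-bootstrap variance of the 2SLS coefficients."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.array([tsls(y[idx], X[idx], Z[idx])
                      for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    return draws.var(axis=0, ddof=1)

# Simulated just-identified model: X is endogenous through the shared error e
rng = np.random.default_rng(1)
n = 500
Z = rng.standard_normal((n, 1))
e = rng.standard_normal(n)
X = (Z[:, 0] + e).reshape(-1, 1)
y = 2.0 * X[:, 0] + e + 0.1 * rng.standard_normal(n)
beta_hat = tsls(y, X, Z)             # close to the true coefficient 2.0
var_hat = bootstrap_variance(y, X, Z)
```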
  2. By: Jentsch, Carsten; Leucht, Anne; Meyer, Marco; Beering, Carina
    Abstract: In this paper, we propose a kernel-type estimator for the local characteristic function of locally stationary processes. Under weak moment conditions, we prove joint asymptotic normality for local empirical characteristic functions. For time-varying linear processes, we establish a central limit theorem under the assumption of finite absolute first moments of the process. Additionally, we prove weak convergence of the local empirical characteristic process. We apply our asymptotic results to parameter estimation. Furthermore, by extending the notion of distance correlation of Székely, Rizzo and Bakirov (2007) to locally stationary processes, we are able to provide asymptotic theory for local empirical distance correlations. Finally, we provide a simulation study on minimum distance estimation for α-stable distributions and illustrate the pairwise dependence structure over time of log returns of German stock prices via local empirical distance correlations.
    Keywords: empirical characteristic function , local stationarity , time series , stable distributions , (local) distance correlation , minimum distance estimation , process convergence , asymptotic theory
    Date: 2016
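    The basic localization idea is simple to sketch: weight the usual empirical characteristic function phi(u) = E[exp(i*u*X)] by a kernel around a rescaled time point. The following is a minimal illustration, not the authors' estimator; the Epanechnikov kernel and the weight normalization are assumptions.

```python
import numpy as np

def local_ecf(x, u, t0, bandwidth):
    """Kernel-weighted empirical characteristic function of a series x,
    localized around rescaled time t0 in (0, 1)."""
    T = len(x)
    times = np.arange(1, T + 1) / T                          # rescaled observation times
    z = (times - t0) / bandwidth
    w = np.where(np.abs(z) <= 1, 0.75 * (1.0 - z**2), 0.0)   # Epanechnikov kernel
    w = w / w.sum()                                          # weights sum to one
    return np.sum(w * np.exp(1j * u * x))

# Sanity check: for a constant series the local ECF is exactly exp(i*u*c)
val = local_ecf(np.full(200, 2.0), u=1.0, t0=0.5, bandwidth=0.2)
```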
  3. By: Patrick Marsh
    Abstract: A nonparametric likelihood ratio test for the exponential series density estimator is employed as a goodness-of-fit test in the presence of nuisance parameters. These tests are designed to address the perceived weaknesses of those based on the empirical distribution function, such as the Kolmogorov-Smirnov, Cramér-von Mises and Anderson-Darling tests, which are often criticized for not being asymptotically pivotal, having low power and offering no direction if the null hypothesis is rejected. The tests of this paper are instead proven to be asymptotically pivotal, and numerical experiments illustrate this. Further experiments suggest the tests are generally more powerful in a variety of testing problems, whether bootstrap critical values are used or not. Finally, in the event of rejection, the proposed procedures involve density estimators which can be used directly and accurately to estimate quantiles.
    Keywords: Goodness-of-fit, series density estimator, likelihood ratio, nuisance parameters, parametric bootstrap.
  4. By: Hafner, C. M.; Linton, O.
    Abstract: We propose a Kronecker product structure for large covariance or correlation matrices. One feature of this model is that it scales logarithmically with dimension in the sense that the number of free parameters increases logarithmically with the dimension of the matrix. We propose an estimation method of the parameters based on a log-linear property of the structure, and also a quasi-maximum likelihood estimation (QMLE) method. We establish the rate of convergence of the estimated parameters when the size of the matrix diverges. We also establish a central limit theorem (CLT) for our method. We derive the asymptotic distributions of the estimators of the parameters of the spectral distribution of the Kronecker product correlation matrix, of the extreme logarithmic eigenvalues of this matrix, and of the variance of the minimum variance portfolio formed using this matrix. We also develop tools of inference including a test for over-identification. We apply our methods to portfolio choice for S&P500 daily returns and compare with sample covariance-based methods and with the recent Fan, Liao, and Mincheva (2013) method.
    Keywords: Correlation Matrix, Kronecker Product, Matrix Logarithm, Multiarray data, Multi-trait multi-method, Portfolio Choice, Sparsity
    Date: 2016–11–09
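    The Kronecker structure itself is easy to illustrate. The sketch below (plain NumPy, not the authors' estimator) builds a correlation matrix from small correlation factors: a d = d1*...*dk dimensional matrix is parameterized by the entries of the factors, so the parameter count grows roughly like log(d) rather than d^2.

```python
import numpy as np

def kronecker_correlation(factors):
    """Correlation matrix built as the Kronecker product of small
    correlation matrices; the product of valid correlation matrices
    again has unit diagonal and is positive definite."""
    R = np.array([[1.0]])
    for f in factors:
        R = np.kron(R, f)
    return R

# Two 2x2 correlation factors yield a valid 4x4 correlation matrix
A = np.array([[1.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 1.0]])
R = kronecker_correlation([A, B])
```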
  5. By: Sun, Hang
    Abstract: In recent years a growing number of methodologies have been proposed to empirically measure the connectedness among financial entities. However, few of them capture the dynamics of the financial connectedness. This paper aims to fill this gap and proposes a novel and systematic model to portray not only the connectedness in a given regime but also its transitions across different regimes. The model is based on an improved version of a “tripod” model, which unifies structural vector auto-regressions (SVARs), spatial models, and network models under one framework. I introduce a transition mechanism into my model, thus making it possible to observe how the interconnections among financial entities vary with certain threshold variables. My model may be applied to various issues regarding the financial and non-financial interconnections among certain entities. As an illustration, I show how my model can shed some new light on the modeling of the recent Eurozone contagions. I reveal a clear causal map of the propagation of shocks among the stock markets in five selected Eurozone countries in both contagion and non-contagion periods, which are determined automatically. The unique roles played by each selected market under both the contagion and non-contagion regimes are efficiently summarized.
    Keywords: Econometrics, Financial Economics and Financial Management, International economics and trade
    JEL: C30 C59 G01 G15
    Date: 2016
  6. By: Pei, Zhuan (Cornell University); Shen, Yi (University of Waterloo)
    Abstract: Identification in a regression discontinuity (RD) design hinges on the discontinuity in the probability of treatment when a covariate (assignment variable) exceeds a known threshold. If the assignment variable is measured with error, however, the discontinuity in the first stage relationship between the probability of treatment and the observed mismeasured assignment variable may disappear. Therefore, the presence of measurement error in the assignment variable poses a challenge to treatment effect identification. This paper provides sufficient conditions for identification when only the mismeasured assignment variable, the treatment status and the outcome variable are observed. We prove identification separately for discrete and continuous assignment variables and study the properties of various estimation procedures. We illustrate the proposed methods in an empirical application, where we estimate Medicaid takeup and its crowdout effect on private health insurance coverage.
    Keywords: regression discontinuity design, measurement error
    JEL: C10 C18
    Date: 2016–10
  7. By: Marie Kratz; Yen H. Lok; Alexander J McNeil
    Abstract: Under the Fundamental Review of the Trading Book (FRTB), capital charges for the trading book are based on the coherent expected shortfall (ES) risk measure, which shows greater sensitivity to tail risk. In this paper it is argued that backtesting of expected shortfall - or the trading book model from which it is calculated - can be based on a simultaneous multinomial test of value-at-risk (VaR) exceptions at different levels, an idea supported by an approximation of ES in terms of multiple quantiles of a distribution proposed in Emmer et al. (2015). By comparing Pearson, Nass and likelihood-ratio tests (LRTs) for different numbers of VaR levels $N$ it is shown in a series of simulation experiments that multinomial tests with $N\geq 4$ are much more powerful at detecting misspecifications of trading book loss models than standard binomial exception tests corresponding to the case $N=1$. Each test has its merits: Pearson offers simplicity; Nass is robust in its size properties to the choice of $N$; the LRT is very powerful though slightly over-sized in small samples and more computationally burdensome. A traffic-light system for trading book models based on the multinomial test is proposed and the recommended procedure is applied to a real-data example spanning the 2008 financial crisis.
    Date: 2016–11
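    A minimal sketch of the Pearson variant of the idea: count observations falling between the model's VaR estimates at increasing levels and compare the cell counts with their multinomial expectations. This assumes i.i.d. losses and a fully specified model (so the chi-square statistic has N degrees of freedom); the function names and the normal loss model are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def multinomial_var_backtest(losses, var_levels, model_quantile):
    """Pearson chi-square test of VaR exceptions at several levels.

    losses: realized losses (positive = loss)
    var_levels: increasing confidence levels, e.g. [0.95, 0.975, 0.99, 0.995]
    model_quantile: function alpha -> model VaR at level alpha
    """
    alphas = np.asarray(var_levels)
    # Cell edges: -inf, VaR_{a1}, ..., VaR_{aN}, +inf
    edges = np.concatenate(([-np.inf], [model_quantile(a) for a in alphas], [np.inf]))
    observed, _ = np.histogram(losses, bins=edges)
    # Expected cell probabilities under a correctly specified model
    probs = np.diff(np.concatenate(([0.0], alphas, [1.0])))
    expected = probs * len(losses)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    dof = len(alphas)  # N+1 cells, probabilities fully specified
    return chi2, stats.chi2.sf(chi2, dof)

# Backtest a standard-normal loss model on simulated standard-normal losses
rng = np.random.default_rng(0)
losses = rng.standard_normal(1000)
chi2, pval = multinomial_var_backtest(
    losses, [0.95, 0.975, 0.99, 0.995], stats.norm.ppf)
```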
  8. By: F. Lilla
    Abstract: Forecasting-volatility models typically rely on either daily or high-frequency (HF) data, and the choice between these two categories is not obvious. In particular, HF data allow volatility to be treated as observable, but they suffer from many limitations: microstructure problems such as the discreteness of the data, the properties of the trading mechanism and the existence of the bid-ask spread. Moreover, these data are not always available and, even when they are, the asset’s liquidity may not be sufficient to allow for frequent transactions. This paper considers different variants of these two families of forecasting-volatility models, comparing their performance (in terms of Value at Risk, VaR) under the assumptions of jumping prices and leverage effects for volatility. Findings suggest that the GARJI model provides more accurate VaR measures for the S&P 500 index than RV models. Furthermore, the assumption of conditional normality is shown to be insufficient for obtaining accurate risk measures even when the jump contribution is included. More sophisticated models might address this issue, improving VaR results.
    JEL: C58 C53 C22 C01 C13
    Date: 2016–11
  9. By: Francisco Salas-Molina; Juan A. Rodríguez-Aguilar; Joan Serrà; Francisco J. Martin
    Abstract: Cash management models determine policies based either on the statistical properties of daily cash flow or on forecasts. Usual assumptions on the statistical properties of daily cash flow include normality, independence and stationarity. Surprisingly, little empirical evidence confirming these assumptions has been provided. In this work, we provide a comprehensive study on 54 real-world daily cash flow data sets, which we also make publicly available. Apart from the previous assumptions, we also consider linearity, meaning that cash flow is proportional to a particular explanatory variable, and we propose a new cross-validated test for time series non-linearity. We further analyze the implications of all aforementioned assumptions for forecasting, showing that: (i) the usual assumptions of normality, independence and stationarity rarely hold; (ii) non-linearity is often relevant for forecasting; and (iii) common data transformations such as outlier treatment and Box-Cox have little impact on linearity and normality. Our results highlight the utility of non-linear models as a justifiable alternative for time series forecasting.
    Date: 2016–11
  10. By: Jennifer Castle; David Hendry
    Abstract: Economic policy agencies accompany forecasts with narratives, a combination we call foredictions, often basing policy changes on developments envisaged. Forecast failure need not impugn a forecasting model, although it may, but almost inevitably entails forediction failure and invalidity of the associated policy. Most policy regime changes involve location shifts, which can induce forediction failure unless the policy variable is super exogenous in the policy model. We propose a step-indicator saturation test to check in advance for invariance to policy changes. Systematic forecast failure, or a lack of invariance, previously justified by narratives reveals such stories to be economic fiction.
    Keywords: Forediction; Invariance; Super exogeneity; Indicator saturation; Co-breaking; Autometrics
    JEL: C51 C22
    Date: 2016–10–24

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.