nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒03‒17
twenty-one papers chosen by
Sune Karlsson
Örebro universitet

  1. Unified M-Estimation of Fixed-Effects Spatial Dynamic Models with Short Panels By Yang Zhenlin
  2. Alternative Formulation of the Leverage Effect in a Stochastic Volatility Model with Asymmetric Heavy-Tailed Errors By Deschamps, P.
  3. Model selection and model averaging in nonparametric instrumental variables models By Liu, Chu-An; Tao, Jing
  4. Big data models of bank risk contagion By Paola Cerchiello; Paolo Giudici; Giancarlo Nicola
  5. On the Treatment of a Measurement Error Regression Model By TAKU YAMAMOTO
  6. A time-varying long run HEAVY model By BRAIONE, M.
  7. Identifying Collusion in English Auctions By Kaplan, Uma; Marmer, Vadim; Shneyerov, Artyom
  8. The issue of control in multivariate systems: A contribution of structural modelling By MOUCHART, M.; WUNSCH, G.; RUSSO, F.
  9. Quantile Cross-Spectral Measures of Dependence between Economic Variables By Jozef Baruník; Tobias Kley
  10. Nonparametric Instrumental Variable Methods for Dynamic Treatment Evaluation By van den Berg, Gerard J.; Bonev, Petyo; Mammen, Enno
  11. Modified-likelihood estimation of the β-model By Koen Jochmans
  12. Statistical Risk Models By Zura Kakushadze; Willie Yu
  13. A Competing Risks Model with Time-varying Heterogeneity and Simultaneous Failure By Ruixuan Liu
  14. Noise Fit, Estimation Error and a Sharpe Information Criterion By Dirk Paulsen; Jakob Söhl
  15. Efficient computation of adjusted p-values for resampling-based stepdown multiple testing By Joseph P. Romano; Michael Wolf
  16. Estimating and forecasting value-at-risk using the unbiased extreme value volatility estimator By Dilip Kumar
  17. Advances in multivariate back-testing for credit risk underestimation By Coppens, François; Mayer, Manuel; Millischer, Laurent; Resch, Florian; Sauer, Stephan; Schulze, Klaas
  18. The Value of A Statistical Life in Absence of Panel Data: What can we do? By Andrés Riquelme; Marcela Parada
  19. Semiparametric Generalized Long Memory Modelling of GCC Stock Market Returns: A Wavelet Approach By Heni Boubaker; Nadia Sghaier
  20. Testing for news and noise in non-stationary time series subject to multiple historical revisions By Hecq A.W.; Jacobs J.P.A.M.; Stamatogiannis M.
  21. Comparison of Methods for Estimating the Uncertainty of Value at Risk By Santiago Gamba Santamaría; Oscar Fernando Jaulín Méndez; Luis Fernando Melo Velandia; Carlos Andrés Quicazán Moreno

  1. By: Yang Zhenlin (Singapore Management University)
    Abstract: It is well known that quasi maximum likelihood (QML) estimation of dynamic panel data (DPD) models with short panels depends on the assumptions about the initial values, and a wrong treatment of them results in inconsistency and serious bias. The same issues apply to spatial DPD (SDPD) models with short panels. In this paper, a unified M-estimation method is proposed for estimating fixed-effects SDPD models containing three major types of spatial effects, namely spatial lag, spatial error and space-time lag. The method is free from the specification of the distribution of the initial observations and robust against nonnormality of the errors. Consistency and asymptotic normality of the proposed M-estimator are established. A martingale difference representation of the underlying estimating functions is developed, which leads to an initial-condition free estimate of the variance of the M-estimators. Monte Carlo results show that the proposed methods have excellent finite-sample performance.
    Keywords: Adjusted quasi score; Dynamic panels; Fixed effects; Initial-condition free estimation; Martingale difference; Spatial effects; Short panels.
    JEL: C10 C13 C21 C23 C15
    Date: 2015–12
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:14-2015&r=ecm
  2. By: Deschamps, P. (Université catholique de Louvain, CORE, Belgium)
    Abstract: This paper investigates three formulations of the leverage effect in a stochastic volatility model with a skewed and heavy-tailed observation distribution. The first formulation is the conventional one, where the observation and evolution errors are correlated. The second is a hierarchical one, where log-volatility depends on the past log-return multiplied by a time-varying latent coefficient. In the third formulation, this coefficient is replaced by a constant. The three models are compared with each other and with a GARCH formulation, using Bayes factors. MCMC estimation relies on a parametric proposal density estimated from the output of a particle smoother. The results, obtained with recent S&P500 and Swiss Market Index data, suggest that the last two leverage formulations strongly dominate the conventional one. The performance of the MCMC method is consistent across models and sample sizes, and its implementation only requires a very modest (and constant) number of filter and smoother particles. [A toy simulation of the three formulations follows this entry.]
    Keywords: Stochastic volatility models; Markov chain Monte Carlo; Particle methods; Generalized hyperbolic distribution; Bayesian analysis
    JEL: C11 C15 C22 C58
    Date: 2015–05–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2015020&r=ecm
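    A toy simulation of the three leverage formulations compared above, with Gaussian errors standing in for the paper's skewed heavy-tailed distribution; all parameter values (mu, phi, rho, delta and the return scale) are illustrative assumptions, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      T, mu, phi, s = 2000, -9.0, 0.97, 0.15   # illustrative log-vol parameters

      def simulate(form, rho=-0.6, delta=-10.0):
          # log-volatility h_t; log-returns y_t = exp(h_t / 2) * eps_t
          h, y = np.full(T, mu), np.empty(T)
          for t in range(T - 1):
              eps = rng.standard_normal()
              y[t] = np.exp(h[t] / 2) * eps
              if form == "correlated":
                  # (i) observation and evolution errors correlated
                  eta = rho * eps + np.sqrt(1 - rho**2) * rng.standard_normal()
                  h[t + 1] = mu + phi * (h[t] - mu) + s * eta
              else:
                  # (ii)/(iii) past log-return enters log-volatility, with a
                  # time-varying latent coefficient or a constant one
                  d = delta + (3.0 * rng.standard_normal() if form == "tv" else 0.0)
                  h[t + 1] = mu + phi * (h[t] - mu) + d * y[t] + s * rng.standard_normal()
          y[-1] = np.exp(h[-1] / 2) * rng.standard_normal()
          return y

      for form in ("correlated", "tv", "constant"):
          y = simulate(form)
          # leverage diagnostic: negative returns precede higher volatility
          print(form, round(float(np.corrcoef(y[:-1], np.abs(y[1:]))[0, 1]), 3))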
  3. By: Liu, Chu-An; Tao, Jing
    Abstract: This paper considers the problem of choosing the regularization parameter and the smoothing parameter in nonparametric instrumental variables estimation. We propose a simple Mallows’ Cp-type criterion to select these two parameters simultaneously. We show that the proposed selection criterion is optimal in the sense that the selected estimate asymptotically achieves the lowest possible mean squared error among all candidates. To account for model uncertainty, we introduce a new model averaging estimator for nonparametric instrumental variables regressions. We propose a Mallows criterion for the weight selection and demonstrate its asymptotic optimality. Monte Carlo simulations show that both selection and averaging methods generally achieve lower root mean squared error than other existing methods. The proposed methods are applied to two empirical examples: the class-size effect question and the Engel curve. [A sketch of a generic Mallows-type criterion follows this entry.]
    Keywords: Ill-posed inverse problem, Mallows criterion, Model averaging, Model selection, Nonparametric instrumental variables, Series estimation
    JEL: C14 C26 C52
    Date: 2016–02–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:69492&r=ecm
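    The flavor of the selection rule in item 3 can be seen in a simpler setting: a Mallows' Cp criterion for choosing the number of terms in an ordinary series regression. This sketch covers only the generic criterion; the paper's version treats the regularization and smoothing parameters of the ill-posed NPIV problem jointly, and the data-generating process and polynomial basis below are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 400
      x = rng.uniform(-1, 1, n)
      y = np.sin(np.pi * x) + 0.3 * rng.standard_normal(n)

      def fit(K):
          # series regression on the first K polynomial basis terms
          X = np.vander(x, K, increasing=True)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return X @ beta

      # pilot error-variance estimate from a deliberately generous model
      resid = y - fit(12)
      sigma2 = resid @ resid / (n - 12)

      def mallows_cp(K):
          # residual sum of squares plus the 2 * sigma^2 * K complexity penalty
          return np.sum((y - fit(K)) ** 2) + 2 * sigma2 * K

      best = min(range(2, 12), key=mallows_cp)
      print("Cp-selected number of series terms:", best)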
  4. By: Paola Cerchiello (Department of Economics and Management, University of Pavia); Paolo Giudici (Department of Economics and Management, University of Pavia); Giancarlo Nicola (Department of Economics and Management, University of Pavia)
    Abstract: A very important area of financial risk management is systemic risk modelling, which concerns the estimation of the interrelationships between financial institutions, with the aim of establishing which of them are more central and, therefore, more contagious/subject to contagion. The aim of this paper is to develop a systemic risk model which, differently from existing ones, employs not only the information contained in financial market prices, but also big data coming from financial tweets. From a methodological viewpoint, we propose a new framework, based on graphical models, that can estimate systemic risks with models based on two different sources, financial markets and financial tweets, and suggest a way to combine them using a Bayesian approach. From an applied viewpoint, we present the first systemic risk model based on big data, and show that such a model can shed further light on the interrelationships between financial institutions. This can help predict the level of returns of a bank, conditional on the others, for example when a shock occurs in another bank or arises exogenously.
    Keywords: Financial Risk Management, Graphical models, Systemic risks, Twitter data analysis
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0117&r=ecm
  5. By: TAKU YAMAMOTO (The Institute of Statistical Research)
    Abstract: The treatment of the measurement error regression model has been a focus of various studies, in particular in the analysis of economic time series data. The ordinary least squares (OLS) estimator of the measurement error model is known to be asymptotically biased (inconsistent). In the present paper, we propose a new approach to the problem: we elaborate the effects of temporal aggregation, i.e. aggregation over time, on the OLS estimator. We consider a simple regression model whose explanatory variable consists of a latent variable and a measurement error. As the key assumption, we assume that the latent explanatory variable is positively autocorrelated. Since most economic time series data are positively autocorrelated, this assumption is satisfied in many situations and is not restrictive. Further, the measurement error is assumed to be serially independent. We first show analytically that non-overlapping temporal aggregation of the model decreases the bias and the mean squared error (MSE) of the OLS estimator when the sample size is large. This result comes from the fact that temporal aggregation of the positively autocorrelated latent variable increases its variability faster than that of the non-autocorrelated measurement error; that is, the noise-signal ratio becomes smaller in the temporally aggregated model. The aggregation scheme can easily be generalized to the overlapping one. It should be noted, however, that temporal aggregation does not completely eliminate the bias (inconsistency). We therefore secondly propose a consistent estimator by suitably combining the original disaggregated estimator and the aggregated estimator. Thirdly, we conduct simulation experiments which show that the above analytical results remain valid in small samples. Finally, we apply the proposed consistent method in an empirical application to Japanese data, where it appears effective in decreasing the inconsistency caused by the measurement error. [A small simulation of the aggregation effect follows this entry.]
    Keywords: Regression Model, Measurement Error, Temporal Aggregation, Consistent Estimator
    JEL: C01 C13 C26
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:3305807&r=ecm
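    A small simulation of the mechanism described in item 5, under assumed parameter values: a positively autocorrelated latent regressor observed with white-noise measurement error, where non-overlapping temporal aggregation shrinks the attenuation bias of OLS because the signal's variance grows faster under summation than the noise's.

      import numpy as np

      rng = np.random.default_rng(2)
      T, beta, rho = 120_000, 1.0, 0.9

      # latent AR(1) regressor (positively autocorrelated) plus iid error
      e = rng.standard_normal(T)
      xstar = np.empty(T); xstar[0] = e[0]
      for t in range(1, T):
          xstar[t] = rho * xstar[t - 1] + e[t]
      x = xstar + rng.standard_normal(T)              # observed with error
      y = beta * xstar + 0.5 * rng.standard_normal(T)

      def ols_slope(xv, yv):
          xc, yc = xv - xv.mean(), yv - yv.mean()
          return float(xc @ yc / (xc @ xc))

      for m in (1, 3, 12):
          # non-overlapping aggregation over blocks of length m
          xa = x[: T - T % m].reshape(-1, m).sum(axis=1)
          ya = y[: T - T % m].reshape(-1, m).sum(axis=1)
          print(f"m = {m:2d}: OLS slope {ols_slope(xa, ya):.3f} (true beta = 1)")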
  6. By: BRAIONE, M. (Université catholique de Louvain, CORE, Belgium)
    Abstract: We propose a scalar variation of the multivariate HEAVY model of Noureldin et al. which allows for a time-varying long-run component in the specification of the daily conditional covariance matrix. Unlike the original model, which features a BEKK-type parameterization, ours allows for separate modeling of the conditional volatilities and the conditional correlation matrix, in a DCC fashion. Estimation is performed in one step by QML, and multi-step-ahead forecasting is feasible by applying the direct approach to the HEAVY-P equation. In an empirical application modeling and forecasting the conditional covariance matrix of a stock (BAC) and an index (S&P 500), we find that the new model statistically outperforms the original HEAVY model both in-sample and out-of-sample.
    Keywords: HEAVY model, Long term models, Mixed Data Sampling, Direct forecasting
    Date: 2016–02–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2016002&r=ecm
  7. By: Kaplan, Uma; Marmer, Vadim; Shneyerov, Artyom
    Abstract: We develop a fully nonparametric identification framework and a test of collusion in ascending bid auctions. Assuming efficient collusion, we show that the underlying distributions of values can be identified despite collusive behaviour when there is at least one bidder outside the cartel. We propose a nonparametric estimation procedure for the distributions of values and a bootstrap test of the null hypothesis of competitive behaviour against the alternative of collusion. Our framework allows for asymmetric bidders, and the test can be performed on individual bidders. The test is applied to the Guaranteed Investment Certificate auctions conducted by US municipalities over the Internet. Despite the fact that there have been allegations of collusion in this market, our test does not detect deviations from competition. A plausible explanation of this finding is that the Internet auction design involves very limited information disclosure.
    Keywords: English auctions, identification, collusion, nonparametric estimation
    JEL: C14
    Date: 2016–02–26
    URL: http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer-2016-3&r=ecm
  8. By: MOUCHART, M. (Université catholique de Louvain, CORE, Belgium); WUNSCH, G. (Université catholique de Louvain); RUSSO, F. (University of Amsterdam)
    Abstract: This paper builds upon Judea Pearl’s directed acyclic graphs approach to causality and the tradition of structural modelling in economics and social science. The paper re-examines the issue of control in complex systems with multiple causes and outcomes, from a specific perspective of structural modelling. It begins with three-variable saturated and unsaturated models, and then examines more complex systems, including models with colliders and latent confounders discussed by Pearl. In particular, focusing on the causes of an outcome, the paper proposes two simple rules for selecting the variables to be controlled for when studying the direct effect of a cause on an outcome of interest, or the total effect when dealing with multiple causal paths. The paper presents a model-building strategy that allows a statistical model to be considered structural. The challenge for the model builder amounts to developing an explanation through a recursive decomposition of the joint distribution of the variables that is congruent with background knowledge and stable with respect to specified changes of the environment.
    Keywords: Causality, Control, Structural Modelling, Recursive Decomposition, Total Effect, Direct Effect
    Date: 2015–06–30
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2015029&r=ecm
  9. By: Jozef Baruník; Tobias Kley
    Abstract: In this paper we introduce quantile cross-spectral analysis of multiple time series, which is designed to detect general dependence structures emerging in quantiles of the joint distribution in the frequency domain. We argue that this type of dependence is natural for economic time series but remains invisible when the traditional analysis is employed. To illustrate how such dependence structures can arise between variables in different parts of the joint distribution and across frequencies, we consider quantile vector autoregression processes. We define new estimators which capture the general dependence structure, provide a detailed analysis of their asymptotic properties and discuss how to conduct inference for a general class of possibly nonlinear processes. In an empirical illustration we examine one of the most prominent time series in economics and shed new light on the dependence of bivariate stock market returns. [A simplified quantile-coherency sketch follows this entry.]
    Date: 2015–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1510.06946&r=ecm
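    A simplified sketch of the building block behind such measures: a smoothed cross-periodogram of quantile-indicator series, whose modulus plays the role of a quantile coherency. The paper's estimators and inference theory are considerably more refined; the lower-tail-dependent data-generating process and the moving-average smoother here are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      T = 2048
      # two series whose dependence is concentrated in the lower tail
      z = rng.standard_normal(T)
      x = rng.standard_normal(T) + np.where(z < -1, z, 0.0)
      y = rng.standard_normal(T) + np.where(z < -1, z, 0.0)

      def quantile_coherency(x, y, tau, bandwidth=20):
          # indicator ("clipped") processes I(x_t <= q_tau) - tau
          ix = (x <= np.quantile(x, tau)).astype(float) - tau
          iy = (y <= np.quantile(y, tau)).astype(float) - tau
          fx, fy = np.fft.rfft(ix), np.fft.rfft(iy)
          k = np.ones(bandwidth) / bandwidth          # crude smoother
          sm = lambda s: np.convolve(s, k, mode="same")
          return sm(fx * np.conj(fy)) / np.sqrt(sm(np.abs(fx)**2) * sm(np.abs(fy)**2))

      for tau in (0.05, 0.5, 0.95):
          coh = quantile_coherency(x, y, tau)
          print(f"tau = {tau}: mean |coherency| at low frequencies:",
                round(float(np.abs(coh[1:50]).mean()), 3))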
  10. By: van den Berg, Gerard J. (University of Bristol); Bonev, Petyo (MINES ParisTech); Mammen, Enno (Heidelberg University)
    Abstract: We develop a nonparametric instrumental variable approach for the estimation of average treatment effects on hazard rates and conditional survival probabilities, without model structure. We derive constructive identification proofs for average treatment effects under noncompliance and dynamic selection, exploiting instrumental variation taking place during ongoing spells. We derive asymptotic distributions of the corresponding estimators. This includes a detailed examination of noncompliance in a dynamic context. In an empirical application, we evaluate the French labor market policy reform PARE, which abolished the dependence of unemployment insurance benefits on elapsed unemployment duration and simultaneously introduced additional active labor market policy measures. The estimated effect of the reform on the survival function of unemployment duration is positive and significant. Neglecting selectivity leads to an underestimation of the effects in absolute terms.
    Keywords: hazard rate, duration variable, treatment effects, survival function, noncompliance, regression discontinuity design, unemployment, labor market policy reform, active labor market policy, unemployment benefits
    JEL: C14 C41 J64 J65
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp9782&r=ecm
  11. By: Koen Jochmans (Département d'économie)
    Abstract: We consider point estimation and inference based on modifications of the profile likelihood in models for dyadic interactions between agents featuring n agent-specific parameters. This setup covers the β-model of network formation and generalizations thereof. The maximum-likelihood estimator of such models has bias and standard deviation of O(n^-1) and so is asymptotically biased. Estimation based on modified likelihoods leads to estimators that are asymptotically unbiased and likelihood-ratio tests that exhibit correct size. We apply the modifications to versions of the β-model for network formation and of the Bradley-Terry model for paired comparisons.
    Keywords: asymptotic bias, β-model, Bradley-Terry model, fixed effects, modified profile likelihood, paired comparisons, matching, network formation, undirected random graph
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/5l8aj0dpmg9pbahnt8a4k2fcrh&r=ecm
  12. By: Zura Kakushadze; Willie Yu
    Abstract: We give complete algorithms and source code for constructing statistical risk models, including methods for fixing the number of risk factors. One such method is based on eRank (effective rank) and yields results similar to (and further validates) the method set forth in an earlier paper by one of us. We also give a complete algorithm and source code for computing eigenvectors and eigenvalues of a sample covariance matrix, which requires i) no costly iterations and ii) a number of operations linear in the number of returns. The presentation is intended to be pedagogical and oriented toward practical applications. [A sketch of the eRank computation follows this entry.]
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1602.08070&r=ecm
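    One common definition of eRank, mentioned in item 12 as a device for fixing the number of risk factors, is the exponential of the Shannon entropy of the normalized eigenvalue spectrum. A sketch under that definition (the paper may differ in details such as rounding conventions); the factor structure of the simulated returns is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(4)
      N, T, K = 50, 500, 5
      # N "returns" over T periods driven by K common factors plus noise
      B = rng.standard_normal((N, K))
      R = B @ rng.standard_normal((K, T)) + 0.5 * rng.standard_normal((N, T))
      C = np.cov(R)

      def erank(cov):
          # exp of the Shannon entropy of the normalized eigenvalues
          lam = np.linalg.eigvalsh(cov)
          lam = lam[lam > 0]
          p = lam / lam.sum()
          return float(np.exp(-(p * np.log(p)).sum()))

      print("eRank of sample covariance:", round(erank(C), 2),
            "(strong factors in the DGP: 5)")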
  13. By: Ruixuan Liu
    Abstract: This paper proposes a new bivariate competing risks model that specifies each marginal duration as the first time a Lévy subordinator crosses a random threshold. Our specification is a natural variant of the competing risks version of the mixed proportional hazards model, but it allows time-varying heterogeneity and simultaneous termination of both durations. When the structural multiplicative effects from covariates are of linear-index form, we identify these effects, the baseline hazard function, and the characteristics of the latent Lévy subordinators with competing risks data. A semiparametric estimator of the finite-dimensional parameter in our model is developed based on a certain average derivative estimation.
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:emo:wp2003:1603&r=ecm
  14. By: Dirk Paulsen; Jakob Söhl
    Abstract: When optimizing the Sharpe ratio over a k-dimensional parameter space, the in-sample Sharpe ratio thus obtained tends to be higher than what will be captured out-of-sample, for two reasons: first, the estimated parameter will be skewed towards the noise in the in-sample data (noise fitting); second, the estimated parameter will deviate from the optimal parameter (estimation error). This article derives a simple correction for both. Selecting the model with the highest corrected Sharpe ratio selects the model with the highest expected out-of-sample Sharpe ratio, in the same way as selection by the Akaike Information Criterion does for the log-likelihood as a measure of fit. [A simulation of the noise-fitting effect follows this entry.]
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1602.06186&r=ecm
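    The noise-fitting half of the argument is easy to reproduce: pick the best of k candidate strategies in-sample when all of them have a true Sharpe ratio of zero, and the selected in-sample Sharpe ratio is biased upward while the out-of-sample one is not. This sketch does not implement the paper's correction, only the phenomenon it corrects; k, T and the return scale are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(5)
      T, k = 250, 20          # one year of daily returns, 20 candidates

      def sharpe(r):
          # annualized Sharpe ratio of a daily return series
          return float(r.mean() / r.std(ddof=1) * np.sqrt(252))

      # k strategies with identical true (zero) Sharpe ratios
      ins = 0.01 * rng.standard_normal((k, T))
      oos = 0.01 * rng.standard_normal((k, T))

      best = max(range(k), key=lambda i: sharpe(ins[i]))
      print("in-sample Sharpe of selected model :", round(sharpe(ins[best]), 2))
      print("out-of-sample Sharpe, same model   :", round(sharpe(oos[best]), 2))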
  15. By: Joseph P. Romano; Michael Wolf
    Abstract: There has been recent interest in reporting p-values adjusted for the resampling-based stepdown multiple testing procedures proposed in Romano and Wolf (2005a,b). The original papers only describe how to carry out multiple testing at a fixed significance level. Computing adjusted p-values instead in an efficient manner is not entirely trivial. Therefore, this paper fills an apparent gap by detailing such an algorithm. [A compact sketch of the computation follows this entry.]
    Keywords: Adjusted p-values, multiple testing, resampling, stepdown procedure
    JEL: C12
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:219&r=ecm
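    A compact version of the adjusted-p-value computation for a max-statistic stepdown procedure, in the spirit of the algorithm the paper details (which also addresses efficiency and studentization). Here t_boot is a placeholder for properly centered resampled statistics under the null, and the toy inputs are assumptions.

      import numpy as np

      def stepdown_adjusted_pvalues(t_obs, t_boot):
          # t_obs : (S,) observed statistics (larger = more significant)
          # t_boot: (B, S) centered resampled statistics under the null
          S = len(t_obs)
          order = np.argsort(t_obs)[::-1]             # most significant first
          p_adj = np.empty(S)
          for j in range(S):
              rest = order[j:]                        # hypotheses still in play
              max_boot = t_boot[:, rest].max(axis=1)  # stepdown max statistic
              p = (1 + np.sum(max_boot >= t_obs[order[j]])) / (1 + len(t_boot))
              # enforce monotonicity of adjusted p-values down the steps
              p_adj[order[j]] = max(p, p_adj[order[j - 1]]) if j else p
          return p_adj

      rng = np.random.default_rng(6)
      t_obs = np.array([3.1, 2.2, 0.4, 1.9])
      t_boot = rng.standard_normal((5000, 4))
      print(np.round(stepdown_adjusted_pvalues(t_obs, t_boot), 3))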
  16. By: Dilip Kumar (Indian Institute of Management Kashipur)
    Abstract: We provide a framework based on the unbiased extreme value volatility estimator (namely, the AddRS estimator) to compute and predict long-position and short-position VaR, henceforth referred to as the ARFIMA-AddRS-SKST model. We evaluate its VaR forecasting performance using the unconditional and conditional coverage tests for long and short positions on four global indices (S&P 500, CAC 40, IBOVESPA and S&P CNX Nifty) and compare the results with those of a set of alternative models. Our findings indicate that the ARFIMA-AddRS-SKST model outperforms the alternatives in predicting long- and short-position VaR. Finally, we examine the economic significance of the proposed framework using Lopez's loss function approach, so as to identify the model with the least monetary loss. The VaR forecasts based on the ARFIMA-AddRS-SKST model yield the least total loss for various x% long and short positions, which supports the superior properties of the proposed framework in forecasting VaR more accurately. [A sketch of the unconditional coverage test follows this entry.]
    Keywords: Extreme value volatility estimator; Value-at-risk; Skewed Student t distribution; Risk management.
    JEL: C22 C53
    URL: http://d.repec.org/n?u=RePEc:sek:iefpro:3205528&r=ecm
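    The unconditional coverage test referred to in item 16 is commonly Kupiec's likelihood-ratio test; a sketch for a long position, with a constant Gaussian VaR as a stand-in forecast (the paper's forecasts come from the ARFIMA-AddRS-SKST model, and the simulated returns are an assumption).

      import numpy as np
      from scipy.stats import chi2

      def kupiec_uc_pvalue(returns, var_forecasts, alpha=0.01):
          # H0: the VaR violation rate equals alpha
          hits = returns < -var_forecasts             # VaR breaches
          n, x = len(hits), int(hits.sum())
          if x in (0, n):
              return float("nan")                     # LR degenerate
          pihat = x / n
          lr = -2 * (x * np.log(alpha / pihat)
                     + (n - x) * np.log((1 - alpha) / (1 - pihat)))
          return float(1 - chi2.cdf(lr, df=1))

      rng = np.random.default_rng(7)
      r = 0.01 * rng.standard_normal(1000)
      var99 = np.full(1000, 0.01 * 2.326)             # normal 99% quantile
      print("Kupiec unconditional coverage p-value:",
            round(kupiec_uc_pvalue(r, var99), 3))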
  17. By: Coppens, François; Mayer, Manuel; Millischer, Laurent; Resch, Florian; Sauer, Stephan; Schulze, Klaas
    Abstract: When back-testing the calibration quality of rating systems, two-sided statistical tests can detect over- and underestimation of credit risk. Some users, though, such as risk-averse investors and regulators, are primarily interested in the underestimation of risk only, and thus require one-sided tests. The established one-sided tests are multiple tests, which assess each rating class of the rating system separately and then combine the results into an overall assessment. However, these multiple tests may fail to detect underperformance of the whole rating system. Aiming to improve the overall assessment of rating systems, this paper presents a set of one-sided tests which assess the performance of all rating classes jointly. These joint tests build on the method of Sterne [1954] for ranking possible outcomes by probability, which allows back-testing to be extended to a setting of multiple rating classes. The new joint tests are compared to the most established one-sided multiple test and are shown to outperform this benchmark in terms of power and size of the acceptance region. [A sketch of the benchmark multiple test follows this entry.]
    Keywords: back-testing, credit ratings, one-sided, probability of default
    JEL: C12 C52 G21 G24
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20161885&r=ecm
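    For context, the established benchmark that the abstract contrasts with can be sketched as a one-sided binomial test per rating class, combined across classes (here with a Bonferroni correction; the exact benchmark, and the paper's joint Sterne-based tests, differ). All obligor counts, default counts and PDs below are made up.

      import numpy as np
      from scipy.stats import binom

      def onesided_multiple_backtest(n_obligors, n_defaults, pd_forecast, level=0.05):
          # per class, H0: true PD <= forecast PD, rejected for too many defaults
          pvals = [binom.sf(d - 1, n, p)              # P(X >= d) under H0
                   for n, d, p in zip(n_obligors, n_defaults, pd_forecast)]
          reject = [p < level / len(pvals) for p in pvals]
          return np.round(pvals, 4), reject

      n  = [1000, 800, 500]        # obligors per rating class (illustrative)
      d  = [5, 14, 30]             # observed defaults
      pd = [0.004, 0.010, 0.040]   # forecast probabilities of default
      print(onesided_multiple_backtest(n, d, pd))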
  18. By: Andrés Riquelme; Marcela Parada
    Abstract: In this paper I show how reliable estimates of the Value of a Statistical Life (VSL) can be obtained from cross-sectional data using Garen's instrumental variable (IV) approach. The widening of the confidence intervals due to the IV setup can be reduced by a factor of 3 by using a proxy for risk attitude. To gauge the precision of the cross-sectional VSL estimates, I estimate the VSL using Chilean panel data and use it as a benchmark for different cross-sectional specifications. The use of the proxy eliminates the need for hard-to-find instruments for the job risk level and narrows the confidence intervals for workers in the Chilean labor market for the year 2009. [A toy control-function sketch follows this entry.]
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1603.00568&r=ecm
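    A toy control-function estimator in the spirit of Garen's IV approach as described in item 18: first-stage residuals (and their interaction with the endogenous risk variable) enter the wage equation to absorb the endogeneity of job risk. The data-generating process, the instrument, and the homogeneous-effects simplification are all assumptions; the actual specification is in the paper.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 5000
      z = rng.standard_normal(n)            # instrument for job risk
      a = rng.standard_normal(n)            # unobserved risk attitude
      risk = 0.8 * z - 0.5 * a + rng.standard_normal(n)
      wage = 2.0 + 0.3 * risk + 0.6 * a + rng.standard_normal(n)

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      one = np.ones(n)
      # first stage: job risk on the instrument; keep the residual
      Z1 = np.column_stack([one, z])
      vhat = risk - Z1 @ ols(Z1, risk)
      # control function: residual and residual-x-risk enter the wage equation
      X_cf = np.column_stack([one, risk, vhat, vhat * risk])
      print("naive OLS risk coefficient  :",
            round(float(ols(np.column_stack([one, risk]), wage)[1]), 3))
      print("control-function coefficient:",
            round(float(ols(X_cf, wage)[1]), 3), "(true value 0.3)")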
  19. By: Heni Boubaker; Nadia Sghaier
    Date: 2016–02–18
    URL: http://d.repec.org/n?u=RePEc:ipg:wpaper:2014-66&r=ecm
  20. By: Hecq A.W.; Jacobs J.P.A.M.; Stamatogiannis M. (GSBE)
    Abstract: Before being considered definitive, data produced by statistical agencies undergo a recurrent revision process resulting in different releases of the same phenomenon. The collection of all these vintages is referred to as a real-time data set. Economists and econometricians have realized the importance of this type of information for economic modeling and forecasting. This paper focuses on testing non-stationary data for forecastability, i.e., whether revisions reduce noise or are news. To deal with historical revisions, which affect the whole vintage of a time series due to redefinitions, methodological innovations, etc., we employ the recently developed impulse indicator saturation approach, which involves potentially adding an indicator dummy for each observation to the model. We illustrate our procedures with the U.S. Real Gross National Product series from ALFRED and find that revisions to this series neither reduce noise nor can be considered news. [A toy news/noise regression follows this entry.]
    Keywords: Multiple or Simultaneous Equation Models: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Methodology for Collecting, Estimating, and Organizing Macroeconomic Data; Data Access; Measurement and Data on National Income and Product Accounts and Wealth; Environmental Accounts;
    JEL: C32 C82 E01
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:unm:umagsb:2016004&r=ecm
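    The news/noise distinction can be illustrated with the classic regression of the revision on the first release: under noise the revision is predictable from the release, under news it is not. This is a deliberate simplification of item 20, which deals with non-stationary data, multiple historical revisions and impulse indicator saturation; the stationary AR(1) "truth" and Gaussian errors are assumptions.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(9)
      T, rho = 400, 0.8
      truth = np.empty(T); truth[0] = rng.standard_normal()
      for t in range(1, T):
          truth[t] = rho * truth[t - 1] + rng.standard_normal()

      # noise: first release = truth + measurement error, revision removes it
      # news : first release is efficient, the revision adds new information
      cases = {"noise": (truth + rng.standard_normal(T), truth),
               "news":  (truth, truth + rng.standard_normal(T))}

      def slope_test(x, y):
          # OLS slope of y on x with its asymptotic two-sided p-value
          X = np.column_stack([np.ones_like(x), x])
          b = np.linalg.lstsq(X, y, rcond=None)[0]
          res = y - X @ b
          se = np.sqrt((res @ res / (T - 2)) * np.linalg.inv(X.T @ X)[1, 1])
          return b[1], 2 * norm.sf(abs(b[1] / se))

      for name, (first, final) in cases.items():
          b, p = slope_test(first, final - first)
          print(f"{name:5s}: revision-on-release slope {b:+.3f} (p = {p:.3f})")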
  21. By: Santiago Gamba Santamaría; Oscar Fernando Jaulín Méndez; Luis Fernando Melo Velandia; Carlos Andrés Quicazán Moreno
    Abstract: Value at Risk (VaR) is a market risk measure widely used by risk managers and market regulatory authorities. A variety of methodologies for the estimation of VaR have been proposed in the literature, but few of them address the distribution of the estimator or its confidence intervals. This paper compares different methodologies for computing such intervals. Several methods, based on asymptotic normality, extreme value theory and the subsample bootstrap, are used. Using Monte Carlo simulations, we find that these approaches are only valid for high quantiles: coverage rates are good for VaR(99%) but poor for VaR(95%) and VaR(90%). The results are confirmed by an empirical application to the stock market index returns of the G7 countries. [A percentile-bootstrap sketch follows this entry.]
    Keywords: Value at Risk, confidence intervals, data tilting, subsample bootstrap.
    JEL: C51 C52 C53 G32
    Date: 2016–02–24
    URL: http://d.repec.org/n?u=RePEc:col:000094:014263&r=ecm
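    One of the simplest interval methods of the kind compared in item 21: a percentile bootstrap for the empirical VaR quantile (the paper also studies intervals based on asymptotic normality, extreme value theory and the subsample bootstrap). The Student-t return sample and all tuning constants are assumptions.

      import numpy as np

      rng = np.random.default_rng(10)
      returns = 0.01 * rng.standard_t(df=5, size=1000)

      def var_estimate(r, level=0.99):
          # VaR as the negative lower-tail empirical quantile
          return -np.quantile(r, 1 - level)

      def bootstrap_ci(r, level=0.99, B=2000, conf=0.95):
          # iid percentile bootstrap for the VaR point estimate
          stats = np.array([var_estimate(rng.choice(r, len(r)), level)
                            for _ in range(B)])
          lo, hi = np.quantile(stats, [(1 - conf) / 2, (1 + conf) / 2])
          return var_estimate(r, level), lo, hi

      point, lo, hi = bootstrap_ci(returns)
      print(f"VaR(99%) = {point:.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")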

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.