nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒02‒23
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Periodic autoregressive stochastic volatility By Aknouche, Abdelhakim
  2. Difference-in-Differences Inference with Few Treated Clusters By James G. MacKinnon; Matthew D. Webb
  3. Estimation bias due to duplicated observations: a Monte Carlo simulation By Sarracino, Francesco; Mikucka, Malgorzata
  4. Blue Chip Italian Bank Stocks: Chain Graph models for VAR and MARCH parameters shrinking By Andrea Pierini
  5. Sparse Kalman Filtering Approaches to Covariance Estimation from High Frequency Data in the Presence of Jumps By Michael Ho; Jack Xin
  6. A Simple Estimation of Bid-Ask Spreads from Daily Close, High, and Low Prices By Abdi, Farshid; Ranaldo, Angelo
  7. Doubly Robust Uniform Confidence Band for the Conditional Average Treatment Effect Function By Sokbae Lee; Ryo Okui; Yoon-Jae Whang
  8. Credit risk stress testing and copulas: Is the Gaussian copula better than its reputation? By Koziol, Philipp; Schell, Carmen; Eckhardt, Meik
  9. ABC and Hamiltonian Monte-Carlo methods in COGARCH models By J. Miguel Marín; M. T. Rodríguez-Bernal; E. Romero
  10. Identification and Estimation of Risk Aversion in First Price Auctions With Unobserved Auction Heterogeneity By Grundl, Serafin J.; Zhu, Yu
  11. Neighborhood Effects on the Propensity Score Matching By Marusca De Castris; Guido Pellegrini
  12. Estimation of DSGE models: Maximum Likelihood vs. Bayesian methods By Mickelsson, Glenn
  13. GenSVM: A Generalized Multiclass Support Vector Machine By van den Burg, G.J.J.; Groenen, P.J.F.
  14. Filterbased Stochastic Volatility in Continuous-Time Hidden Markov Models By Vikram Krishnamurthy; Elisabeth Leoff; Jörn Sass
  15. Value-at-Risk and backtesting with the APARCH model and the standardized Pearson type IV distribution By Stavros Stavroyiannis

  1. By: Aknouche, Abdelhakim
    Abstract: This paper proposes a stochastic volatility model (PAR-SV) in which the log-volatility follows a first-order periodic autoregression. The model aims to represent time series whose volatility displays a stochastic periodic dynamic structure, and may thus be seen as an alternative to the familiar periodic GARCH process. The probabilistic structure of the proposed PAR-SV model, such as periodic stationarity and the autocovariance structure, is first studied. Parameter estimation is then examined through the quasi-maximum likelihood (QML) method, where the likelihood is evaluated using the prediction error decomposition approach and Kalman filtering. In addition, a Bayesian MCMC method is considered, in which the posteriors are obtained from conjugate priors using the Gibbs sampler and the augmented volatilities are sampled via the Griddy Gibbs technique in a single-move way. As a by-product, period selection for the PAR-SV is carried out using the (conditional) Deviance Information Criterion (DIC). A simulation study assesses the performance of the QML and Bayesian Griddy Gibbs estimates. Applications of Bayesian PAR-SV modeling to daily, quarterly and monthly S&P 500 returns are considered.
    Keywords: Periodic stochastic volatility, periodic autoregression, QML via prediction error decomposition and Kalman filtering, Bayesian Griddy Gibbs sampler, single-move approach, DIC.
    JEL: C11 C15 C51 C58
    Date: 2013–06–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:69571&r=ecm
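    The model class is easy to state concretely. As a companion to the abstract above, here is a minimal Python sketch (ours, not the authors' code; parameter values are purely illustrative) that simulates returns whose log-volatility follows a first-order autoregression with coefficients cycling with period S:

      import numpy as np

      def simulate_par_sv(T, phi, omega, sigma_eta, seed=0):
          """Simulate y_t = exp(h_t / 2) * eps_t, where the log-volatility
          follows a periodic AR(1): h_t = omega[s] + phi[s] * h_{t-1} + eta_t,
          and the season s = t mod S cycles with period S = len(phi)."""
          rng = np.random.default_rng(seed)
          S = len(phi)
          h = np.zeros(T)
          y = np.zeros(T)
          for t in range(1, T):
              s = t % S
              h[t] = omega[s] + phi[s] * h[t - 1] + sigma_eta[s] * rng.standard_normal()
              y[t] = np.exp(h[t] / 2.0) * rng.standard_normal()
          return y, h

      # Illustrative "weekly" periodicity (S = 5) with season-specific persistence.
      y, h = simulate_par_sv(T=1000,
                             phi=np.array([0.95, 0.90, 0.85, 0.90, 0.95]),
                             omega=np.array([-0.5, -0.4, -0.3, -0.4, -0.5]),
                             sigma_eta=np.array([0.20, 0.25, 0.30, 0.25, 0.20]))

    Periodic stationarity requires the product of the phi[s] over a full period to be smaller than one in absolute value, which the illustrative values above satisfy.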
  2. By: James G. MacKinnon (Queen's University); Matthew D. Webb (Carleton University)
    Abstract: Inference using difference-in-differences with clustered data requires care. Previous research has shown that t tests based on a cluster-robust variance estimator (CRVE) severely over-reject when there are few treated clusters, that different variants of the wild cluster bootstrap can over-reject or under-reject severely, and that procedures based on randomization show promise. We demonstrate that randomization inference (RI) procedures based on estimated coefficients, such as the one proposed by Conley and Taber (2011), fail whenever the treated clusters are atypical. We propose an RI procedure based on t statistics which fails only when the treated clusters are atypical and few in number. We also propose a bootstrap-based alternative to randomization inference, which mitigates the discrete nature of RI P values when the number of clusters is small.
    Keywords: CRVE, grouped data, clustered data, panel data, randomization inference, difference-in-differences, wild cluster bootstrap
    JEL: C12 C21
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1355&r=ecm
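    The randomization-inference idea is simple to demonstrate. The toy sketch below (ours; it mimics the generic coefficient-based RI recipe that the paper shows can fail, not the authors' t-statistic proposal) re-assigns the "treated" label to each control cluster in turn and compares placebo difference-in-differences statistics with the actual one:

      import numpy as np

      def ri_pvalue(y, cluster, period, treated_cluster):
          """Randomization-inference p-value for a toy two-period DiD with one
          treated cluster: the actual statistic is compared with the placebo
          distribution obtained by treating each other cluster instead."""
          def did_stat(c):
              own = cluster == c
              delta_c = y[own & (period == 1)].mean() - y[own & (period == 0)].mean()
              delta_r = y[~own & (period == 1)].mean() - y[~own & (period == 0)].mean()
              return delta_c - delta_r
          actual = did_stat(treated_cluster)
          draws = np.array([did_stat(c) for c in np.unique(cluster)
                            if c != treated_cluster])
          return np.mean(np.abs(draws) >= np.abs(actual))

      rng = np.random.default_rng(1)
      G, n = 20, 50                                 # 20 clusters, 50 obs per period
      cluster = np.repeat(np.arange(G), 2 * n)
      period = np.tile(np.repeat([0, 1], n), G)
      y = 0.5 * ((cluster == 0) & (period == 1)) + rng.normal(size=2 * n * G)
      print(ri_pvalue(y, cluster, period, treated_cluster=0))

    With only 19 placebo assignments the p-value can only take values k/19, which illustrates the discreteness of RI P values that the proposed bootstrap-based alternative is designed to mitigate.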
  3. By: Sarracino, Francesco; Mikucka, Malgorzata
    Abstract: This paper assesses how duplicate records affect the results of regression analysis of survey data and compares the effectiveness of five solutions for minimizing the risk of obtaining biased estimates. Results show that duplicate records create a considerable risk of biased estimates. The chance of obtaining unbiased estimates in the presence of a single sextuplet of identical observations is 41.6%. If the dataset contains about 10% duplicated observations, the probability of obtaining unbiased estimates falls to nearly 11%. Weighting the duplicated cases by the inverse of their multiplicity minimizes the bias when multiple doublets are present in the data. Our results demonstrate the risks of using data containing non-unique observations and call for further research on strategies for analyzing affected data.
    Keywords: duplicated observations, estimation bias, Monte Carlo simulation, inference
    JEL: C13 C18 C21 C81
    Date: 2016–01–26
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:69064&r=ecm
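    The recommended correction is mechanically simple: give each record a weight equal to the inverse of its multiplicity, so that every underlying observation carries total weight one. A minimal sketch with made-up data (ours, not the authors' simulation design):

      import numpy as np

      rng = np.random.default_rng(42)
      n = 200
      x = rng.normal(size=n)
      y = 1.0 + 2.0 * x + rng.normal(size=n)

      dup = rng.choice(n, size=n // 10, replace=False)   # duplicate 10% of rows
      x_d = np.concatenate([x, x[dup]])
      y_d = np.concatenate([y, y[dup]])

      mult = np.ones(n)
      mult[dup] = 2.0                                    # duplicated rows appear twice
      w = np.concatenate([1.0 / mult, 1.0 / mult[dup]])  # inverse-multiplicity weights

      X = np.column_stack([np.ones_like(x_d), x_d])
      beta_naive = np.linalg.lstsq(X, y_d, rcond=None)[0]       # ignores duplication
      sw = np.sqrt(w)
      beta_wls = np.linalg.lstsq(X * sw[:, None], y_d * sw, rcond=None)[0]
      print(beta_naive, beta_wls)

    Because each duplicated pair carries weight 1/2 + 1/2 = 1, the weighted fit is numerically identical to ordinary least squares on the de-duplicated sample.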
  4. By: Andrea Pierini
    Abstract: In this paper Chain Graph models are applied to a multivariate time series of blue chip Italian bank stock returns in order to construct graphs with minimum BIC within the class of decomposable graphs, which have the desirable property of closed-form estimation. First, a chain graph is built for the present and past values of the time series in order to reduce the number of parameters in a VAR(1) model, setting to zero the parameters corresponding to non-edges in the graph. Another chain graph is then built for the present and past values of the squared residuals of the estimated model, and an MARCH(1) model is constructed by restricting to zero the parameters not indicated by this graph. In this way a large reduction in the number of parameters is achieved, using richer multivariate modelling only where necessary. The parameter shrinkage does not worsen the return and standard deviation forecasts, while improving the efficiency of the estimates. The approach is appealing in that chain graph methodology can identify the past-to-present relationships needed to model multivariate time series.
    Keywords: Chain Graph Model, VAR, MARCH, Blue Chip Bank returns.
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:rtr:wpaper:0205&r=ecm
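    The parameter-shrinking step is straightforward once the graphs are in hand: coefficients attached to non-edges are fixed at zero and the rest are estimated. A hypothetical sketch of that step for the VAR(1) part (ours; the BIC-driven search over decomposable chain graphs is not reproduced here):

      import numpy as np

      def restricted_var1(Y, edges):
          """Fit a VAR(1) equation by equation with zero restrictions: the
          coefficient of past variable j in the equation for variable i is
          estimated only if (j, i) is an edge, and set to zero otherwise.
          Y is a (T, k) array; edges is a set of (j, i) pairs."""
          T, k = Y.shape
          X, Z = Y[:-1], Y[1:]
          A = np.zeros((k, k))
          for i in range(k):
              parents = [j for j in range(k) if (j, i) in edges]
              if parents:
                  coef, *_ = np.linalg.lstsq(X[:, parents], Z[:, i], rcond=None)
                  A[i, parents] = coef
          return A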
  5. By: Michael Ho; Jack Xin
    Abstract: Estimation of the covariance matrix of asset returns from high frequency data is complicated by asynchronous returns, market microstructure noise and jumps. One technique for addressing both asynchronous returns and market microstructure noise is the Kalman-EM (KEM) algorithm. However, the KEM approach assumes log-normal prices and does not address jumps in the return process, which can corrupt estimation of the covariance matrix. In this paper we extend the KEM algorithm to price models that include jumps, proposing two sparse Kalman filtering approaches. In the first approach we develop a Kalman Expectation Conditional Maximization (KECM) algorithm that determines the unknown covariance while also detecting the jumps; for this algorithm we consider Laplace and spike-and-slab jump models, both of which promote sparse estimates of the jumps. In the second approach we take a Bayesian route and use Gibbs sampling to sample from the posterior distribution of the covariance matrix under the spike-and-slab jump model. Numerical results using simulated data show that each of these approaches provides improved covariance estimation relative to the KEM method in a variety of settings where jumps occur.
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1602.02185&r=ecm
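    The role of the sparsity-promoting jump models can be caricatured in one dimension. In the stylized sketch below (ours; the paper's KECM algorithm is multivariate and iterates full expectation and conditional-maximization steps), each innovation is soft-thresholded, which is the MAP jump estimate under a Laplace prior:

      import numpy as np

      def soft_threshold(z, lam):
          """Shrink small innovations to exactly zero and large ones toward
          zero by lam: the Laplace-prior MAP estimate of a sparse jump."""
          return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

      def kalman_with_jumps(y, q, r, lam):
          """Scalar local-level Kalman filter in which each observation may
          contain a sparse jump, estimated by soft-thresholding."""
          m, p = y[0], r
          means, jumps = [m], [0.0]
          for obs in y[1:]:
              p_pred = p + q                        # predict
              innov = obs - m
              jump = soft_threshold(innov, lam)     # sparse jump estimate
              k = p_pred / (p_pred + r)             # Kalman gain
              m = m + k * (innov - jump)            # update with jump removed
              p = (1.0 - k) * p_pred
              means.append(m)
              jumps.append(jump)
          return np.array(means), np.array(jumps)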
  6. By: Abdi, Farshid; Ranaldo, Angelo
    Abstract: Using readily available data on daily close, high, and low prices, we develop a straightforward method to estimate the bid-ask spread. Compared with other spread estimators, our method is simpler and has an intuitive closed-form solution without the need for further approximations. We test our method numerically and empirically using the Trade and Quotes (TAQ) data. Assessed against other daily estimates, our estimator generally provides the highest cross-sectional and average time-series correlation with the TAQ effective spread benchmark, as well as the smallest prediction errors. To illustrate some potential applications, we show that our estimator improves the measurement of systematic liquidity risk and commonality in liquidity.
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:usg:sfwpfi:2016:04&r=ecm
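    One closed-form estimator in this spirit uses the log close c_t and the mid-range eta_t (the average of the log high and log low): the squared spread is proportional to the mean of (c_t - eta_t)(c_t - eta_{t+1}). The sketch below is our paraphrase of that idea, not necessarily the paper's exact published formula:

      import numpy as np

      def close_high_low_spread(close, high, low):
          """Spread estimate from daily close, high, and low prices: the
          close-minus-midrange autocovariance identifies the squared spread;
          negative estimates are truncated at zero."""
          c = np.log(close)
          eta = (np.log(high) + np.log(low)) / 2.0
          s2 = 4.0 * np.mean((c[:-1] - eta[:-1]) * (c[:-1] - eta[1:]))
          return np.sqrt(max(s2, 0.0))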
  7. By: Sokbae Lee (Seoul National University, Institute for Fiscal Studies); Ryo Okui (Kyoto University, VU University Amsterdam); Yoon-Jae Whang (Seoul National University)
    Abstract: In this paper, we propose a doubly robust method to present the heterogeneity of the average treatment effect with respect to observed covariates of interest. We consider a situation where a large number of covariates are needed for identifying the average treatment effect but the covariates of interest for analyzing heterogeneity are of much lower dimension. Our proposed estimator is doubly robust and avoids the curse of dimensionality. We propose a uniform confidence band that is easy to compute, and we illustrate its usefulness via Monte Carlo experiments and an application to the effects of smoking on birth weights.
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:931&r=ecm
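    The doubly robust construction behind such estimators is compact enough to state in code. A minimal sketch of the standard AIPW pseudo-outcome (ours; the paper's specific band construction is not reproduced):

      import numpy as np

      def aipw_pseudo_outcome(y, d, e_hat, mu1_hat, mu0_hat):
          """Doubly robust (AIPW) pseudo-outcome: its conditional mean given a
          low-dimensional covariate of interest equals the conditional average
          treatment effect along that covariate, provided either the propensity
          score e_hat or the outcome regressions mu1_hat, mu0_hat (all numpy
          arrays of fitted values) are correctly specified."""
          return (mu1_hat - mu0_hat
                  + d * (y - mu1_hat) / e_hat
                  - (1.0 - d) * (y - mu0_hat) / (1.0 - e_hat))

    Regressing this pseudo-outcome on the covariate of interest estimates the conditional average treatment effect function; a uniform confidence band is then built around that regression.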
  8. By: Koziol, Philipp; Schell, Carmen; Eckhardt, Meik
    Abstract: In the last decade, stress tests have become indispensable in bank risk management, which has led to significantly increased requirements for stress tests for banks and regulators. Although the complexity of stress testing frameworks has increased considerably over the last few years, the majority of credit risk models (e.g. Merton (1974), CreditMetrics, KMV) still rely on Gaussian copulas. This paper complements the finance literature by providing new insights into the impact of different copulas in stress test applications, using supervisory data on 17 large German banks. Our findings imply that the use of a Gaussian copula in credit risk stress testing should not by default be dismissed in favor of a heavy-tailed copula, as is widely recommended in the finance literature. The Gaussian copula would be the appropriate choice for estimating high stress effects under extreme scenarios, while heavy-tailed copulas like the Clayton or the t copula are recommended for less severe scenarios. Furthermore, the paper provides clear advice for designing a credit risk stress test.
    Keywords: credit risk, top-down stress tests, copulas, macroeconomic scenario
    JEL: G21 G33 C13 C15
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:462015&r=ecm
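    The practical difference between the copulas is tail dependence: at the same correlation, the t copula produces joint extremes far more often than the Gaussian. A small simulation makes the point (ours, with illustrative parameters):

      import numpy as np
      from scipy.stats import norm, t

      def gaussian_copula(n, rho, rng):
          """Uniform-margin draws from a bivariate Gaussian copula."""
          z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
          return norm.cdf(z)

      def t_copula(n, rho, df, rng):
          """Bivariate t copula: correlated normals divided by a shared
          chi-square scale, then mapped through the t CDF."""
          z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
          w = rng.chisquare(df, size=n) / df
          return t.cdf(z / np.sqrt(w)[:, None], df)

      rng = np.random.default_rng(0)
      for name, u in [("Gaussian", gaussian_copula(10**5, 0.5, rng)),
                      ("t(3)    ", t_copula(10**5, 0.5, 3, rng))]:
          print(name, "P(both in worst 1%) =",
                np.mean((u[:, 0] > 0.99) & (u[:, 1] > 0.99)))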
  9. By: J. Miguel Marín; M. T. Rodríguez-Bernal; E. Romero
    Abstract: The analysis of financial series, allowing for calendar effects and unequally spaced observation times in continuous time, can be carried out by means of COGARCH models based on Lévy processes. To estimate the COGARCH model parameters, we propose two different Bayesian approaches. First, we suggest using a Hamiltonian Monte Carlo (HMC) algorithm that improves on the performance of standard MCMC methods. Second, we introduce an Approximate Bayesian Computation (ABC) methodology, which makes it possible to work with analytically infeasible or computationally expensive likelihoods. After a simulation and comparison study of both methods, HMC and ABC, we apply them to model the behaviour of some NASDAQ time series and discuss the results.
    Keywords: Approximate Bayesian Computation methods (ABC), Bayesian inference, COGARCH model, Continuous-time GARCH process, Hamiltonian Monte Carlo methods (HMC), Lévy process
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1601&r=ecm
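    The ABC side of the paper rests on a generic recipe that needs only the ability to simulate from the model. A minimal rejection-ABC sketch (ours; the summary statistics and tolerance appropriate for COGARCH would differ):

      import numpy as np

      def abc_rejection(s_obs, prior_sampler, simulate, stats, n_draws, eps):
          """Rejection ABC: draw parameters from the prior, simulate data, and
          keep draws whose summary statistics land within eps of the observed
          ones -- no likelihood evaluation required."""
          s_obs = np.asarray(s_obs)
          kept = []
          for _ in range(n_draws):
              theta = prior_sampler()
              if np.linalg.norm(np.asarray(stats(simulate(theta))) - s_obs) < eps:
                  kept.append(theta)
          return np.array(kept)

      # Toy check: recover the mean of a normal sample.
      rng = np.random.default_rng(0)
      x_obs = rng.normal(2.0, 1.0, size=200)
      post = abc_rejection(s_obs=[x_obs.mean(), x_obs.std()],
                           prior_sampler=lambda: rng.uniform(-5, 5),
                           simulate=lambda th: rng.normal(th, 1.0, size=200),
                           stats=lambda x: [x.mean(), x.std()],
                           n_draws=20000, eps=0.2)
      print(post.mean(), post.size)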
  10. By: Grundl, Serafin J. (Board of Governors of the Federal Reserve System (U.S.)); Zhu, Yu (University of Leicester)
    Abstract: We extend the point-identification result in Guerre, Perrigne, and Vuong (2009) to environments with one-dimensional unobserved auction heterogeneity. We also show a robustness result for the case where the exclusion restriction used for point identification is violated: we provide conditions ensuring that the primitives recovered under the violated exclusion restriction still bound the true primitives. We propose a new sieve maximum likelihood estimator, show its consistency, and illustrate its finite sample performance in a Monte Carlo experiment. We investigate the bias in risk aversion estimates when unobserved auction heterogeneity is ignored and explain why the sign of the bias depends on the correlation between the number of bidders and the unobserved auction heterogeneity. In an application to USFS timber auctions we find that the bidders are risk neutral, but we would reject risk neutrality without accounting for unobserved auction heterogeneity.
    Keywords: Estimation; First Price Auction; Identification; Risk Aversion; Unobserved Heterogeneity
    Date: 2015–09–29
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2015-89&r=ecm
  11. By: Marusca De Castris (Roma Tre, University of Rome); Guido Pellegrini (Sapienza, University of Rome)
    Abstract: The focus of our paper is the identification of the regional effects of industrial subsidies when the presence of subsidized firms is spatially correlated. In this case the stable unit treatment value assumption (SUTVA) of the Rubin model is not valid, and appropriate econometric methods are needed to estimate the policy impact consistently in the presence of spatial dependence. We propose a new methodology for estimating the unbiased “net” effect of the subsidy, based on a novel “spatial propensity score matching” technique that compares treated and untreated units affected by similar spillover effects of the treatment. We offer different econometric approaches, in which the “spatial” propensity score is estimated by standard or spatial probit models; robustness tests are also implemented, using different instrumental variable spatial models applied to a probit model. We test the model in an empirical application, based on a dataset with information on incentives to private capital accumulation under Law 488/92, mainly devoted to SMEs, and on Planning Contracts, created for large projects in Italy. The analysis is carried out at a disaggregated territorial level, using the grid of local labour systems. The results show a direct effect of subsidies on subsidized firms. The sign of the impact is generally positive, the output effect outweighing the substitution effect. Comparing the standard and the “spatial” estimates, we observe a positive but small crowding-out effect across firms in the same area and across neighbouring areas, mostly in the labour market. However, owing to the small sample, the difference between the impacts estimated by the standard and the “spatial” approaches is not statistically significant.
    Keywords: spatial propensity score, policy evaluation, propensity score matching, spatial analysis
    JEL: R12 R23 C21
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:rcr:wpaper:05_15&r=ecm
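    The mechanics of a “spatial” propensity score — augmenting the score model with a neighbourhood-exposure covariate before matching — can be sketched compactly. This is our schematic, not the authors' estimator (they use probit and spatial probit specifications; a logit stands in here for brevity, and W is assumed to be a row-standardized spatial weights matrix):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def spatial_psm_att(y, d, X, W):
          """Nearest-neighbour matching on a propensity score that includes
          neighbours' treatment exposure W @ d as an extra covariate.
          y: outcomes, d: 0/1 treatment, X: covariates, W: spatial weights."""
          X_aug = np.column_stack([X, W @ d])       # add neighbourhood exposure
          ps = LogisticRegression().fit(X_aug, d).predict_proba(X_aug)[:, 1]
          treated = np.where(d == 1)[0]
          control = np.where(d == 0)[0]
          nearest = control[np.abs(ps[treated][:, None]
                                   - ps[control][None, :]).argmin(axis=1)]
          return np.mean(y[treated] - y[nearest])   # matched ATT estimate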
  12. By: Mickelsson, Glenn (Department of Economics)
    Abstract: DSGE models are typically estimated using Bayesian methods, but a researcher may want to estimate a DSGE model with full information maximum likelihood (FIML) so as to avoid the use of prior distributions. A very robust algorithm is then needed to find the global maximum within the relevant parameter space. I suggest such an algorithm and show that it is possible to estimate the model of Smets and Wouters (2007) using FIML. Inference is carried out using stochastic bootstrapping techniques. Several FIML estimates turn out to be significantly different from the Bayesian estimates, and the reasons behind these differences are analyzed.
    Keywords: Bayesian methods; Maximum likelihood; Business cycles; Estimation of DSGE models
    JEL: C11 E32 E37
    Date: 2015–12–22
    URL: http://d.repec.org/n?u=RePEc:hhs:uunewp:2015_006&r=ecm
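    For a linearized DSGE model, the FIML objective is the Gaussian likelihood of a state-space representation, evaluated with the Kalman filter via the prediction error decomposition. A self-contained sketch of that building block (ours; the paper's contribution is the robust global optimizer wrapped around it):

      import numpy as np

      def kalman_loglik(y, A, C, Q, R):
          """Log-likelihood of x_t = A x_{t-1} + w_t, y_t = C x_t + v_t with
          w ~ N(0, Q), v ~ N(0, R). Maximizing this over the structural
          parameters that map into (A, C, Q, R) is the FIML step."""
          n = A.shape[0]
          x = np.zeros(n)
          P = 10.0 * np.eye(n)            # loose initial prior, for simplicity
          ll = 0.0
          for obs in y:                   # y is a (T, m) array of observables
              x, P = A @ x, A @ P @ A.T + Q                  # predict
              S = C @ P @ C.T + R                            # innovation variance
              innov = obs - C @ x
              ll -= 0.5 * (np.log(np.linalg.det(2 * np.pi * S))
                           + innov @ np.linalg.solve(S, innov))
              K = P @ C.T @ np.linalg.inv(S)                 # Kalman gain
              x, P = x + K @ innov, (np.eye(n) - K @ C) @ P  # update
          return ll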
  13. By: van den Burg, G.J.J.; Groenen, P.J.F.
    Abstract: Traditional extensions of the binary support vector machine (SVM) to multiclass problems are either heuristics or require solving a large dual optimization problem. Here, a generalized multiclass SVM called GenSVM is proposed, which can be used for classification problems where the number of classes K is larger than or equal to 2. In the proposed method, classification boundaries are constructed in a (K - 1)-dimensional space. The method is based on a convex loss function, which is flexible due to several different weightings. An iterative majorization algorithm is derived that solves the optimization problem without the need for a dual formulation. The method is compared to seven other multiclass SVM approaches on a large number of datasets. These comparisons show that the proposed method is competitive with existing methods in both predictive accuracy and training time, and that it significantly outperforms several existing methods on these criteria.
    Keywords: Support Vector Machines (SVMs), Multiclass Classification, Iterative Majorization, MM Algorithm, Classifier Comparison
    Date: 2014–12–18
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:77638&r=ecm
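    The geometric idea — placing the K classes at the vertices of a regular simplex in a (K - 1)-dimensional space — is easy to make concrete. A small sketch of one standard simplex construction (ours, not necessarily the paper's exact coordinates):

      import numpy as np

      def simplex_vertices(K):
          """K equidistant class vertices in (K - 1) dimensions: centre the K
          standard basis vectors of R^K and express them in an orthonormal
          basis of the (K - 1)-dimensional subspace they span."""
          E = np.eye(K) - 1.0 / K               # centred basis vectors
          _, _, Vt = np.linalg.svd(E)
          return E @ Vt[:K - 1].T               # K points in R^(K - 1)

      V = simplex_vertices(4)
      # All pairwise distances are equal, so every class is treated symmetrically.
      print(np.round(np.linalg.norm(V[:, None] - V[None, :], axis=2), 3))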
  14. By: Vikram Krishnamurthy; Elisabeth Leoff; Jörn Sass
    Abstract: Regime-switching models, in particular hidden Markov models (HMMs) in which the switching is driven by an unobservable Markov chain, are widely used in financial applications due to their tractability and good econometric properties. In this work we consider HMMs in continuous time with both constant and switching volatility. In the continuous-time model with switching volatility the underlying Markov chain could in theory be observed through this stochastic volatility, so no estimation (filtering) of it is needed, whereas in the discretized model, or in the model with constant volatility, one has to filter for the underlying Markov chain. The motivation for continuous-time models is that they allow explicit computations in finance. To obtain a realistic continuous-time model with an unobservable Markov chain and good econometric properties, we introduce a regime-switching model in which the volatility depends on the filter for the underlying chain, and we state the filtering equations. We prove an approximation result for a fixed information filtration and further motivate the model through social learning arguments. We analyze its relation to the switching volatility model and present a convergence result for the discretized model. Finally, we illustrate its econometric properties with numerical simulations.
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1602.05323&r=ecm
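    The defining feature — volatility that depends on the filter itself — can be shown in a discretized caricature. The sketch below (ours; the paper works in continuous time and proves the corresponding approximation results) runs a two-state HMM filter in which the likelihood's volatility is a function of the current filter probabilities:

      import numpy as np

      def filter_based_vol_path(r, mu, Pi, sigma_of_p, dt):
          """Discretized filter for a two-state hidden Markov drift mu, given
          returns r over intervals of length dt. The volatility entering the
          Gaussian likelihood is sigma_of_p(p), a function of the filter p."""
          p = np.full(2, 0.5)             # filter: P(state i | data so far)
          path = [p.copy()]
          for obs in r:
              sig = sigma_of_p(p)         # filter-based volatility
              like = np.exp(-0.5 * ((obs - mu * dt) / (sig * np.sqrt(dt))) ** 2)
              p = Pi.T @ (p * like)       # Bayes update, then propagate
              p /= p.sum()
              path.append(p.copy())
          return np.array(path)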
  15. By: Stavros Stavroyiannis
    Abstract: We examine the performance of the Asymmetric Power ARCH (APARCH) model when the residuals follow the standardized Pearson type IV distribution. The model is tested with a variety of loss functions, and its adequacy is examined via several statistical tests and risk measures. The results indicate that the APARCH model with the standardized Pearson type IV distribution is accurate from a general financial risk modeling perspective, providing the financial analyst with an additional skewed distribution to incorporate into risk management tools.
    Date: 2016–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1602.05749&r=ecm
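    A backtest of this kind typically starts from the violation count. One standard ingredient of such exercises (not specific to this paper) is the Kupiec proportion-of-failures test, sketched below:

      import numpy as np
      from scipy.stats import chi2

      def kupiec_pof(x, n, alpha):
          """Kupiec LR test: are x observed VaR violations in n days consistent
          with the promised violation probability alpha? Returns the LR
          statistic and its chi-square(1) p-value."""
          if x == 0:
              lr = -2.0 * n * np.log1p(-alpha)      # boundary case pi_hat = 0
          elif x == n:
              lr = -2.0 * n * np.log(alpha)         # boundary case pi_hat = 1
          else:
              pi_hat = x / n
              def ll(p):
                  return x * np.log(p) + (n - x) * np.log1p(-p)
              lr = -2.0 * (ll(alpha) - ll(pi_hat))
          return lr, chi2.sf(lr, df=1)

      print(kupiec_pof(x=9, n=250, alpha=0.01))     # too many 1% violations?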

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.