Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics, 2014-12-13, edited by Sune Karlsson

Approximate Bayesian Computation in State Space Models
http://d.repec.org/n?u=RePEc:msh:ebswps:2014-20&r=ecm
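The matching step at the heart of ABC, as summarized in the abstract below, can be illustrated with a toy rejection sampler for a Gaussian mean. This is a minimal sketch only: the plain sample-mean summary and flat-ish prior are illustrative assumptions, not the paper's auxiliary-model summaries or state space setting.

```python
# Toy rejection ABC: draw theta from the prior, simulate data, and keep
# draws whose summary statistic (here the sample mean) is close to the
# observed one. Illustrative only; not the paper's auxiliary-model method.
import numpy as np

rng = np.random.default_rng(0)
n = 200
theta_true = 1.5
observed = rng.normal(theta_true, 1.0, size=n)
s_obs = observed.mean()                          # observed summary statistic

def abc_rejection(s_obs, n_draws=20000, tol=0.05):
    thetas = rng.normal(0.0, 5.0, size=n_draws)  # draws from a diffuse prior
    accepted = []
    for th in thetas:
        sim = rng.normal(th, 1.0, size=n)        # simulate from the model
        if abs(sim.mean() - s_obs) < tol:        # match summaries
            accepted.append(th)
    return np.array(accepted)

post = abc_rejection(s_obs)
print(round(post.mean(), 2))   # approximate posterior mean, near theta_true
```

Because the sample mean is sufficient for a Gaussian location parameter, the accepted draws approximate the exact posterior as the tolerance shrinks; with insufficient statistics one only obtains an approximation, which motivates the paper's search for asymptotic sufficiency.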
A new approach to inference in state space models is proposed, based on approximate Bayesian computation (ABC). ABC avoids evaluation of the likelihood function by matching observed summary statistics with statistics computed from data simulated from the true process; exact inference is feasible only if the statistics are sufficient. With finite-sample sufficiency unattainable in the state space setting, we seek asymptotic sufficiency via the maximum likelihood estimator (MLE) of the parameters of an auxiliary model. We prove that this auxiliary model-based approach achieves Bayesian consistency, and that, in a precise limiting sense, the proximity to (asymptotic) sufficiency yielded by the MLE is replicated by the score. In multiple-parameter settings, a separate treatment of scalar parameters, based on integrated likelihood techniques, is advocated as a way of avoiding the curse of dimensionality. Some attention is given to a structure in which the state variable is driven by a continuous-time process, with exact inference typically infeasible in this case as a result of intractable transitions. The ABC method is demonstrated using the unscented Kalman filter as a fast and simple way of producing an approximation in this setting, with a stochastic volatility model for financial returns used for illustration.
Gael M. Martin, Brendan P.M. McCabe, Worapree Maneesoonthorn, Christian P. Robert (2014)
Keywords: likelihood-free methods, latent diffusion models, linear Gaussian state space models, asymptotic sufficiency, unscented Kalman filter, stochastic volatility

Choice of Spectral Density Estimator in Ng-Perron Test: Comparative Analysis
http://d.repec.org/n?u=RePEc:pra:mprapa:59973&r=ecm
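The flavor of the size experiment described in the abstract below can be sketched with a much simpler statistic: an unaugmented Dickey-Fuller t-test (a stand-in, not the Ng-Perron statistic), whose well-documented over-rejection under negative MA errors is the problem the paper's spectral density estimators try to address. The critical value, sample size, and replication count are illustrative assumptions.

```python
# Monte Carlo size experiment: simulate a unit root DGP with MA(1) errors
# and record how often a simple Dickey-Fuller t-test rejects at the 5%
# asymptotic critical value (-1.95). Illustrative stand-in for Ng-Perron.
import numpy as np

rng = np.random.default_rng(1)

def df_tstat(y):
    """OLS t-statistic for rho in  dy_t = rho * y_{t-1} + e_t  (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

def rejection_rate(theta, T=200, reps=1000, crit=-1.95):
    count = 0
    for _ in range(reps):
        e = rng.standard_normal(T + 1)
        u = e[1:] + theta * e[:-1]          # MA(1) innovations
        y = np.cumsum(u)                    # unit root DGP
        if df_tstat(y) < crit:
            count += 1
    return count / reps

r_pos = rejection_rate(0.5)     # positive MA: size stays modest
r_neg = rejection_rate(-0.8)    # negative MA: severe over-rejection
print(r_pos, r_neg)
```

Replacing the test statistic with the Ng-Perron statistic, and varying the spectral density estimator used inside it, turns this skeleton into the paper's comparison.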
Ng and Perron (2001) designed a unit root test which incorporates the properties of the DF-GLS and Phillips-Perron tests. Ng and Perron claim that the test performs exceptionally well, especially in the presence of a negative moving average. However, the performance of the test depends heavily on the choice of spectral density estimator used in its construction. Various estimators of the spectral density are available in the literature, with a crucial impact on the output of the test; however, there is no clarity on which of these estimators gives optimal size and power properties. This study aims to evaluate the performance of Ng-Perron for different choices of spectral density estimator in the presence of negative and positive moving averages, using Monte Carlo simulations. The results for large samples show that: (a) in the presence of a positive moving average, the test with a kernel-based estimator gives good effective power and no size distortion; (b) in the presence of a negative moving average, the autoregressive estimator gives better effective power; however, huge size distortion is observed in several specifications of the data generating process.
Malik, Muhammad Irfan and Rehman, Atiq-ur- (2014-11-17)
Keywords: Ng-Perron test, Monte Carlo, spectral density, unit root testing

Local Polynomial Order in Regression Discontinuity Designs
http://d.repec.org/n?u=RePEc:brd:wpaper:81&r=ecm
The local linear estimator has become the standard in the regression discontinuity design literature, but we argue that it should not always dominate other local polynomial estimators in empirical studies. We show that the local linear estimator in the data generating processes (DGPs) based on two well-known empirical examples does not always have the lowest (asymptotic) mean squared error (MSE). Therefore, we advocate for a more flexible view towards the choice of the polynomial order, p, and suggest two complementary approaches for picking p: comparing the MSE of alternative estimators from Monte Carlo simulations based on an approximating DGP, and comparing the estimated asymptotic MSE using actual data.
David Card, Zhuan Pei, David S. Lee, Andrea Weber (2014-10)
Keywords: Regression Discontinuity Design; Regression Kink Design; Local Polynomial Estimation; Polynomial Order

Optimal Rank Tests for Symmetry against Edgeworth-Type Alternatives
http://d.repec.org/n?u=RePEc:eca:wpaper:2013/177105&r=ecm
Delphine Cassart, Marc Hallin, Davy Paindaveine (2014-11)
Keywords: test of symmetry; skewness; Edgeworth expansion; local asymptotic normality; signed rank test

A Multivariate Model for Multinomial Choices
http://d.repec.org/n?u=RePEc:ems:eureir:77168&r=ecm
Multinomial choices of individuals are likely to be correlated. Nonetheless, econometric models for this phenomenon are scarce. A problem of multivariate multinomial choice models is that the number of potential outcomes can become very large, which makes parameter interpretation and inference difficult. We propose a novel Multivariate Multinomial Logit specification, where (i) the number of parameters stays limited; (ii) there is a clear interpretation of the parameters in terms of odds ratios; (iii) zero restrictions on parameters result in independence between the multinomial choices; and (iv) parameter inference is feasible using a composite likelihood approach even if the multivariate dimension is large. Finally, these nice properties are also valid in a fixed-effects panel version of the model.
Bel, K. and Paap, R. (2014-10-13)
Keywords: discrete choices, multivariate analysis, multinomial logit, composite likelihood

"Tests for Covariance Matrices in High Dimension with Less Sample Size"
http://d.repec.org/n?u=RePEc:tky:fseres:2014cf933&r=ecm
In this article, we propose tests for covariance matrices of high dimension with fewer observations than the dimension, for a general class of distributions with positive definite covariance matrices. In the one-sample case, tests are proposed for sphericity and for the hypothesis that the covariance matrix Σ is the identity matrix, by providing an unbiased estimator of tr[Σ²] under the general model which requires no more computing time than the one available in the literature for the normal model. In the two-sample case, tests for the equality of two covariance matrices are given. The asymptotic distributions of the proposed tests in the one-sample case are derived under the assumption that the sample size N = O(p^δ), 1/2 < δ < 1, where p is the dimension of the random vector, and O(p^δ) means that N/p goes to zero as N and p go to infinity. Similar assumptions are made in the two-sample case.
Muni S. Srivastava, Hirokazu Yanagihara, Tatsuya Kubokawa (2014-06)

"On Predictive Density Estimation for Location Families under Integrated L2 and L1 Losses"
http://d.repec.org/n?u=RePEc:tky:fseres:2014cf935&r=ecm
Our investigation concerns the estimation of predictive densities and a study of efficiency, as measured by the frequentist risk of such predictive densities, under integrated L2 and L1 losses. Our findings relate to a p-variate spherically symmetric observable X ∼ pX(||x − μ||²) and the objective of estimating the density of Y ∼ qY(||y − μ||²) based on X. For L2 loss, we describe Bayes estimation, minimum risk equivariant (MRE) estimation, and minimax estimation. We focus on the risk performance of the benchmark minimum risk equivariant estimator, plug-in estimators, and plug-in type estimators with expanded scale. For the multivariate normal case, we make use of a duality result with a point estimation problem bringing into play reflected normal loss. In three or more dimensions (i.e., p ≥ 3), we show that the MRE estimator is inadmissible under L2 loss and provide dominating estimators. This brings into play Stein-type results for estimating a multivariate normal mean with a loss which is a concave and increasing function of ||δ − μ||². We also study the phenomenon of improvement on the plug-in density estimator of the form qY(||y − aX||²), 0 < a ≤ 1, by a subclass of scale expansions c^(−p) qY(||(y − aX)/c||²) with c > 1, showing in some cases, inevitably for large enough p, that all choices c > 1 are dominating estimators. Extensions are obtained for scale mixtures of normals, including a general inadmissibility result for the MRE estimator when p ≥ 3. Finally, we describe and expand on analogous plug-in dominance results for spherically symmetric distributions with p ≥ 4 under L1 loss.
Tatsuya Kubokawa, Éric Marchand, William E. Strawderman (2014-07)

Dealing with unobservable common trends in small samples: a panel cointegration approach
http://d.repec.org/n?u=RePEc:sas:wpaper:20145&r=ecm
Non-stationary panel models allowing for unobservable common trends have recently become very popular. However, standard methods, which are based on factor extraction or on models augmented with cross-section averages, require large sample sizes, not always available in practice. In these cases we propose the simple and robust alternative of augmenting the panel regression with common time dummies. The underlying assumption of additive effects can be tested by means of a panel cointegration test, with no need to estimate a general interactive effects model. An application to modelling labour productivity growth in the four major European economies (France, Germany, Italy and the UK) illustrates the method.
Francesca Di Iorio, Stefano Fachin (2014-11)
Keywords: common trends, panel cointegration, TFP

A Performance Comparison of Large-n Factor Estimators
http://d.repec.org/n?u=RePEc:may:mayecw:n255-14.pdf&r=ecm
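The asymptotic-principal-components idea behind the estimators compared in the abstract below can be sketched in a few lines: with many assets n and few periods T, factor returns are recovered from the top eigenvectors of the T x T cross-product matrix of returns. The sizes, noise level, and homoskedastic errors are illustrative assumptions; the paper's heteroskedasticity and missing-data designs are not reproduced.

```python
# Minimal sketch of large-n factor extraction by principal components:
# estimate factor returns from eigenvectors of R R' / n (T x T matrix).
import numpy as np

rng = np.random.default_rng(2)
n, T, k = 500, 60, 2                      # assets, periods, factors
F = rng.standard_normal((T, k))           # true factor returns
B = rng.standard_normal((n, k))           # factor loadings
R = F @ B.T + 0.5 * rng.standard_normal((T, n))   # returns panel, T x n

eigval, eigvec = np.linalg.eigh(R @ R.T / n)
F_hat = eigvec[:, -k:] * np.sqrt(T)       # top-k eigenvectors as factors

# Estimated factors span the true ones only up to rotation, so assess
# accuracy by regressing F on F_hat and checking the (uncentered) R^2.
coef, res, *_ = np.linalg.lstsq(F_hat, F, rcond=None)
r2 = 1 - res.sum() / (F ** 2).sum()
print(round(r2, 2))
```

With n large relative to T, this recovers the factor space accurately even though the idiosyncratic noise is never modeled explicitly, which is why the estimators in the paper remain competitive when they ignore the heteroskedasticity in the data.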
This paper uses simulations to evaluate the performance of various methods for estimating factor returns in an approximate factor model when the cross-sectional sample (n) is large relative to the time-series sample (T). We study the performance of the estimators under a variety of alternative specifications of the underlying factor structure. We find that 1) all of the estimators perform well, even when they do not accommodate the form of heteroskedasticity present in the data; 2) for the sample sizes considered here, accommodating heteroskedasticity does not deteriorate performance much when simple forms of heteroskedasticity are present; 3) estimators that handle missing data by substituting fitted returns from the factor model converge to the true factors more slowly than the other estimators.
Gregory Connor, Zhuo Chen, Robert A. Korajczyk (2014)

Group Interaction in Research and the Use of General Nesting Spatial Models
http://d.repec.org/n?u=RePEc:knz:dpteco:1419&r=ecm
This paper tests the feasibility and empirical implications of a spatial econometric model with a full set of interaction effects and a weight matrix defined as an equally weighted group interaction matrix, applied to the research productivity of individuals. We also elaborate two extensions of this model, namely with group fixed effects and with heteroskedasticity. In our setting the model with a full set of interaction effects is overparameterised: only the SDM and SDEM specifications produce acceptable results. They imply comparable spillover effects, but by applying a Bayesian approach taken from LeSage (2014), we are able to show that the SDEM specification is more appropriate and thus that colleague interaction effects work through observed and unobserved exogenous characteristics common to researchers within a group.
Peter Burridge, J. Paul Elhorst, Katarina Zigova (2014-09-17)
Keywords: spatial econometrics, identification, heteroskedasticity, group fixed effects, interaction effects, research productivity

…and the Cross-Section of Expected Returns
http://d.repec.org/n?u=RePEc:nbr:nberwo:20592&r=ecm
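The thrust of the argument in the abstract below can be seen with a Bonferroni back-of-the-envelope calculation: with hundreds of factors tried, holding the family-wise error rate at 5% pushes the two-sided t-cutoff well past the usual 2.0. Bonferroni is a cruder device than the paper's correlation-adjusted multiple-testing framework; the count of 300 tried factors is an illustrative assumption.

```python
# Bonferroni-adjusted t-cutoff: with M independent tests at overall level
# alpha, each test needs |t| > Phi^{-1}(1 - alpha / (2 M)) to reject.
from statistics import NormalDist

def bonferroni_t_cutoff(n_tests, alpha=0.05):
    return NormalDist().inv_cdf(1 - alpha / (2 * n_tests))

print(round(bonferroni_t_cutoff(1), 2))     # 1.96, the usual single-test hurdle
print(round(bonferroni_t_cutoff(300), 2))   # well above 3 after 300 tried factors
```

Bonferroni ignores correlation among tests and so overstates the hurdle somewhat; the paper's framework, which allows for correlated tests and missing data, still lands above a t-ratio of 3.0.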
Hundreds of papers and hundreds of factors attempt to explain the cross-section of expected returns. Given this extensive data mining, it does not make any economic or statistical sense to use the usual significance criteria for a newly discovered factor, e.g., a t-ratio greater than 2.0. However, what hurdle should be used for current research? Our paper introduces a multiple testing framework and provides a time series of historical significance cutoffs from the first empirical tests in 1967 to today. Our new method allows for correlation among the tests as well as missing data. We also project forward 20 years assuming the rate of factor production remains similar to the experience of the last few years. The estimation of our model suggests that a newly discovered factor needs to clear a much higher hurdle, with a t-ratio greater than 3.0. Echoing a recent disturbing conclusion in the medical literature, we argue that most claimed research findings in financial economics are likely false.
Campbell R. Harvey, Yan Liu, Heqing Zhu (2014-10)

Multi-curve HJM modelling for risk management
http://d.repec.org/n?u=RePEc:arx:papers:1411.3977&r=ecm
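The dimension-reduction step mentioned in the abstract below can be sketched as principal component analysis on daily changes of a term-structure panel: keep the few components that explain most of the covariance. The data here are synthetic (two latent shocks, a level and a slope, plus small noise) and the bucket count is an illustrative assumption; the paper's VAR(1) approximation of the HJM dynamics is not reproduced.

```python
# PCA on daily curve changes: eigendecompose the sample covariance of a
# T x m panel of changes and measure the variance share of the top factors.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_buckets = 500, 10
# Synthetic daily curve changes driven by two latent shocks: level and slope.
level = rng.standard_normal((n_days, 1)) @ np.ones((1, n_buckets))
slope = rng.standard_normal((n_days, 1)) @ np.linspace(-1, 1, n_buckets)[None, :]
dcurve = level + slope + 0.1 * rng.standard_normal((n_days, n_buckets))

cov = np.cov(dcurve, rowvar=False)
eigval = np.linalg.eigvalsh(cov)[::-1]          # eigenvalues, descending
explained = eigval[:2].sum() / eigval.sum()     # share held by top 2 components
print(round(explained, 3))
```

When a couple of components carry nearly all the covariance, as here, the volatility-correlation structure of the curve dynamics can be simulated and stressed in a low-dimensional factor space.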
We present an HJM approach to the projection of multiple yield curves developed to capture the volatility content of historical term structures for risk management purposes. Since we observe the empirical data at daily frequency and only for a finite number of time-to-maturity buckets, we propose a modelling framework which is inherently discrete. In particular, we show how to approximate the HJM continuous-time description of the multi-curve dynamics by a Vector Autoregressive process of order one. The resulting dynamics lends itself to a feasible estimation of the model's volatility-correlation structure. Then, resorting to Principal Component Analysis, we further simplify the dynamics, reducing the number of covariance components. Applying the constant volatility version of our model to a sample of curves from the Euro area, we demonstrate its forecasting ability through an out-of-sample test.
Chiara Sabelli, Michele Pioppi, Luca Sitzia, Giacomo Bormetti (2014-11)

An Application of Kernel Density Estimation via Diffusion to Group Yield Insurance
http://d.repec.org/n?u=RePEc:ags:aaea14:170173&r=ecm
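The rate-making step described in the abstract below can be sketched as follows: estimate the yield density nonparametrically, then set the actuarially fair premium rate to the expected loss under a coverage guarantee, per unit of guarantee. A plain Gaussian kernel with a rule-of-thumb bandwidth stands in for the paper's diffusion estimator, and all numbers (yield distribution, 90% coverage level) are synthetic and illustrative.

```python
# Fair premium rate for area yield insurance: expected shortfall below the
# guarantee, integrated against a kernel density estimate of the yield.
import numpy as np

rng = np.random.default_rng(4)
yields = rng.normal(150.0, 20.0, size=300)       # synthetic county yields
guarantee = 0.9 * yields.mean()                  # 90% coverage level

# Gaussian KDE with Silverman's rule-of-thumb bandwidth (not the paper's
# diffusion estimator).
h = 1.06 * yields.std() * len(yields) ** (-1 / 5)
grid = np.linspace(yields.min() - 3 * h, yields.max() + 3 * h, 2000)
dens = np.exp(-0.5 * ((grid[:, None] - yields[None, :]) / h) ** 2).sum(axis=1)
dens /= len(yields) * h * np.sqrt(2 * np.pi)

# Expected loss per unit of guarantee, by Riemann sum over the grid.
loss = np.maximum(guarantee - grid, 0.0)
premium_rate = float((loss * dens).sum() * (grid[1] - grid[0]) / guarantee)
print(round(premium_rate, 3))
```

Swapping the kernel estimator for the diffusion estimator changes the tail behavior of the fitted density, and hence the premium rate, which is the sensitivity the paper documents.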
The recent priority given to Federal Crop Insurance as an agricultural policy instrument has increased the importance of rate-making procedures. Actuarial soundness requires rates that are actuarially fair: the premium is set equal to expected loss. Formation of this expectation depends, in the case of group or area yield insurance, on precise estimation of the probability density of the crop yield in question. This paper applies kernel density estimation via diffusion to the estimation of crop yield probability densities and determines the ensuing premium rates. The diffusion estimator improves on existing methods by providing a cogent answer to some of the issues that plague both parametric and nonparametric techniques. Application shows that premium rates can vary significantly depending on underlying distributional assumptions; from a practical point of view there is value to be had in proper specification.
Ramsey, Ford (2014)
Keywords: crop insurance, yield distributions, density estimation via diffusion, nonparametric density estimation, Agricultural and Food Policy, Research Methods/Statistical Methods, Risk and Uncertainty, C520, Q180, C140

A Bayesian Latent Variable Mixture Model for Filtering Firm Profit Rate
http://d.repec.org/n?u=RePEc:epa:cepawp:2014-1&r=ecm
By using Bayesian Markov chain Monte Carlo methods we select the proper subset of competitive firms and find striking evidence for Laplace-shaped firm profit rate distributions. Our approach enables us to extract more information from data than previous research. We filter US firm-level data into signal and noise distributions by Gibbs-sampling from a latent variable mixture distribution, extracting a sharply peaked, negatively skewed Laplace-type profit rate distribution. A Bayesian change point analysis yields the subset of large firms with symmetric and stationary Laplace distributed profit rates, adding to the evidence for statistical equilibrium at the economy-wide and sectoral levels.
Gregor Semieniuk, Ellis Scharfenaker (2014-02)
Keywords: firm competition, Laplace distribution, Gibbs sampler, profit rate, statistical equilibrium
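A stripped-down version of the signal-noise filtering idea runs as follows: posit a two-component mixture and let a Gibbs sampler alternate between the latent component labels and the component means. Gaussian components with known, fixed scales and equal weights stand in for the paper's Laplace-type signal and diffuse noise distributions; all numbers are synthetic.

```python
# Gibbs sampler for a two-component latent variable mixture: alternate
# between (1) labels given means and (2) means given labels.
import numpy as np

rng = np.random.default_rng(5)
# Synthetic profit rates: a sharp "signal" component and a diffuse "noise" one.
x = np.concatenate([rng.normal(0.1, 0.05, 700), rng.normal(0.0, 0.5, 300)])
sig = np.array([0.05, 0.5])            # known component scales (assumption)
mu = np.array([0.0, 0.0])              # initial component means
w = np.array([0.5, 0.5])               # fixed equal weights, for simplicity

for _ in range(200):                   # Gibbs sweeps
    # (1) sample labels given means: posterior probability of component 1.
    d0 = w[0] / sig[0] * np.exp(-0.5 * ((x - mu[0]) / sig[0]) ** 2)
    d1 = w[1] / sig[1] * np.exp(-0.5 * ((x - mu[1]) / sig[1]) ** 2)
    z = rng.random(x.size) < d1 / (d0 + d1)    # True -> noise component
    # (2) sample means given labels (flat prior -> normal posterior).
    for k, mask in enumerate([~z, z]):
        n_k = max(mask.sum(), 1)
        center = x[mask].mean() if mask.any() else 0.0
        mu[k] = rng.normal(center, sig[k] / np.sqrt(n_k))

print(mu.round(2))   # the sharp component's mean should sit near 0.1
```

In the paper the same alternation separates a sharply peaked Laplace-type signal from diffuse noise in firm-level data; here the fixed scales and weights keep the sketch short, whereas a full treatment would sample those as well.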