Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
2015-07-18
Sune Karlsson

Sieve Semiparametric Two-Step GMM under Weak Dependence
http://d.repec.org/n?u=RePEc:cwl:cwldpp:2012&r=ecm
This paper considers semiparametric two-step GMM estimation and inference with weakly dependent data, where unknown nuisance functions are estimated via sieve extremum estimation in the first step. We show that although the asymptotic variance of the second-step GMM estimator may not have a closed-form expression, it can be well approximated by sieve variances that have simple closed-form expressions. We present consistent or robust variance estimation, Wald tests and Hansen's (1982) over-identification tests for the second-step GMM that properly reflect the first-step estimated functions and the weak dependence of the data. Our sieve semiparametric two-step GMM inference procedures are shown to be numerically equivalent to the ones computed as if the first step were parametric. A new consistent random-perturbation estimator of the derivative of the expectation of the non-smooth moment function is also provided.
By: Xiaohong Chen, Zhipeng Liao
Date: 2015-07
Keywords: Sieve two-step GMM, Weakly dependent data, Auto-correlation robust inference, Semiparametric over-identification test, Numerical equivalence, Random perturbation derivative estimator

High-Dimensional Copula-Based Distributions with Mixed Frequency Data
http://d.repec.org/n?u=RePEc:fip:fedgfe:2015-50&r=ecm
This paper proposes a new model for high-dimensional distributions of asset returns that utilizes mixed frequency data and copulas. The dependence between returns is decomposed into linear and nonlinear components, enabling the use of high frequency data to accurately forecast linear dependence, and a new class of copulas designed to capture nonlinear dependence among the resulting uncorrelated, low frequency, residuals. Estimation of the new class of copulas is conducted using composite likelihood, facilitating applications involving hundreds of variables. In- and out-of-sample tests confirm the superiority of the proposed models applied to daily returns on constituents of the S&P 100 index.
By: Dong Hwan Oh, Andrew J. Patton
Date: 2015-05-19
Keywords: Composite likelihood; forecasting; high frequency data; nonlinear dependence

Inference in Near Singular Regression
http://d.repec.org/n?u=RePEc:cwl:cwldpp:2009&r=ecm
This paper considers stationary regression models with near-collinear regressors. Limit theory is developed for regression estimates and test statistics in cases where the signal matrix is nearly singular in finite samples and is asymptotically degenerate. Examples include models that involve evaporating trends in the regressors that arise in conditions such as growth convergence. Structural equation models are also considered and limit theory is derived for the corresponding instrumental variable estimator, Wald test statistic, and overidentification test when the regressors are endogenous.
By: Peter C. B. Phillips
Date: 2015-07
Keywords: Endogeneity, Instrumental variable, Overidentification test, Regression, Singular Signal Matrix, Structural equation

Confidence sets for the date of a break in level and trend when the order of integration is unknown
http://d.repec.org/n?u=RePEc:not:notgts:14/04&r=ecm
We propose methods for constructing confidence sets for the timing of a break in level and/or trend that have asymptotically correct coverage for both I(0) and I(1) processes. These are based on inverting a sequence of tests for the break location, evaluated across all possible break dates. We separately derive locally best invariant tests for the I(0) and I(1) cases; under their respective assumptions, the resulting confidence sets provide correct asymptotic coverage regardless of the magnitude of the break. We suggest use of a pre-test procedure to select between the I(0)- and I(1)-based confidence sets, and Monte Carlo evidence demonstrates that our recommended procedure achieves good finite sample properties in terms of coverage and length across both I(0) and I(1) environments. An application using US macroeconomic data is provided which further evinces the value of these procedures.
By: David Harvey, Stephen Leybourne
Keywords: Level break; Trend break; Stationary; Unit root; Locally best invariant test; Confidence sets
JEL classification: C22

Inference Based on Many Conditional Moment Inequalities
http://d.repec.org/n?u=RePEc:cwl:cwldpp:2010&r=ecm
In this paper, we construct confidence sets for models defined by many conditional moment inequalities/equalities. The conditional moment restrictions in the models can be finite, countably infinite, or uncountably infinite. To deal with the complication brought about by the vast number of moment restrictions, we exploit the manageability (Pollard (1990)) of the class of moment functions. We verify the manageability condition in five examples from the recent partial identification literature. The proposed confidence sets are shown to have correct asymptotic size in a uniform sense and to exclude parameter values outside the identified set with probability approaching one. Monte Carlo experiments for a conditional stochastic dominance example and a random-coefficients binary-outcome example support the theoretical results.
By: Donald W. K. Andrews, Xiaoxia Shi
Date: 2015-07
Keywords: Asymptotic size, Conditional moment inequalities, Confidence set, Many moments, Multiple equilibria, Partial identification, Random coefficients, Stochastic dominance, Test

In-Sample Confidence Bands and Out-of-Sample Forecast Bands for Time-Varying Parameters in Observation Driven Models
http://d.repec.org/n?u=RePEc:tin:wpaper:20150083&r=ecm
We study the performance of alternative methods for calculating in-sample confidence and out-of-sample forecast bands for time-varying parameters. The in-sample bands reflect parameter uncertainty only. The out-of-sample bands reflect both parameter uncertainty and innovation uncertainty. The bands are applicable to a large class of observation driven models and a wide range of estimation procedures. A Monte Carlo study is conducted for time-varying parameter models such as generalized autoregressive conditional heteroskedasticity and autoregressive conditional duration models. Our results show clear differences between the actual coverage provided by the different methods. We illustrate our findings in a volatility analysis for monthly Standard & Poor’s 500 index returns.
By: Francisco Blasques, Siem Jan Koopman, Katarzyna Łasak, André Lucas
Date: 2015-07-09
Keywords: autoregressive conditional duration; delta-method; generalized autoregressive

Modelling Dependence in High Dimensions with Factor Copulas
http://d.repec.org/n?u=RePEc:fip:fedgfe:2015-51&r=ecm
This paper presents flexible new models for the dependence structure, or copula, of economic variables based on a latent factor structure. The proposed models are particularly attractive for relatively high dimensional applications, involving fifty or more variables, and can be combined with semiparametric marginal distributions to obtain flexible multivariate distributions. Factor copulas generally lack a closed-form density, but we obtain analytical results for the implied tail dependence using extreme value theory, and we verify that simulation-based estimation using rank statistics is reliable even in high dimensions. We consider "scree" plots to aid the choice of the number of factors in the model. The model is applied to daily returns on all 100 constituents of the S&P 100 index, and we find significant evidence of tail dependence, heterogeneous dependence, and asymmetric dependence, with dependence being stronger in crashes than in booms. We also show that factor copula models provide superior estimates of some measures of systemic risk.
By: Dong Hwan Oh, Andrew J. Patton
Date: 2015-05-18
Keywords: Copulas; correlation; dependence; systemic risk; tail dependence

Shape Regressions
http://d.repec.org/n?u=RePEc:geo:guwopa:gueconwpa~15-15-06&r=ecm
Learning about the shape of a probability distribution, not just about its location or dispersion, is often an important goal of empirical analysis. Given a continuous random variable Y and a random vector X defined on the same probability space, the conditional distribution function (CDF) and the conditional quantile function (CQF) offer two equivalent ways of describing the shape of the conditional distribution of Y given X. To these equivalent representations correspond two alternative approaches to shape regression. One approach, distribution regression, is based on direct estimation of the CDF; the other approach, quantile regression, is instead based on direct estimation of the CQF. Since the CDF and the CQF are generalized inverses of each other, indirect estimates of the CQF and the CDF may be obtained by taking the generalized inverse of the direct estimates obtained from either approach, possibly after rearranging to guarantee monotonicity of estimated CDFs and CQFs. The equivalence between the two approaches holds for standard nonparametric estimators in the unconditional case. In the conditional case, when modeling assumptions are introduced to avoid curse-of-dimensionality problems, this equivalence is generally lost, as a convenient parametric model for the CDF need not imply a convenient parametric model for the CQF, and vice versa. Despite the vast literature on the quantile regression approach, and the recent attention to the distribution regression approach, no systematic comparison of the two has been carried out yet. Our paper fills in this gap by comparing the asymptotic properties of estimators obtained from the two approaches, both when the assumed parametric models on which they are based are correctly specified and when they are not.
By: Franco Peracchi, Samantha Leorato
Date: 2015-07-10
Keywords: Distribution regression; quantile regression; functional delta-method; non-separable models; influence function

Misspecification Testing in GARCH-MIDAS Models
http://d.repec.org/n?u=RePEc:awi:wpaper:0597&r=ecm
We develop a misspecification test for the multiplicative two-component GARCH-MIDAS model suggested in Engle et al. (2013). In the GARCH-MIDAS model a short-term unit variance GARCH component fluctuates around a smoothly time-varying long-term component which is driven by the dynamics of an explanatory variable. We suggest a Lagrange Multiplier statistic for testing the null hypothesis that the variable has no explanatory power. Hence, under the null hypothesis the long-term component is constant and the GARCH-MIDAS reduces to the simple GARCH model. We derive the asymptotic theory for our test statistic and investigate its finite sample properties by Monte Carlo simulation. The usefulness of our procedure is illustrated by an empirical application to S&P 500 return data.
By: Christian Conrad, Melanie Schienle
Date: 2015-07-09
Keywords: Volatility Component Models; LM test; Long-term Volatility

Identification of Nonparametric Simultaneous Equations Models with a Residual Index Structure
http://d.repec.org/n?u=RePEc:cwl:cwldpp:2008&r=ecm
We present new results on the identifiability of a class of nonseparable nonparametric simultaneous equations models introduced by Matzkin (2008). These models combine exclusion restrictions with a requirement that each structural error enter through a "residual index." Our identification results encompass a variety of special cases allowing tradeoffs between the exogenous variation required of instruments and restrictions on the joint density of structural errors. Among these special cases are results avoiding any density restriction and results allowing instruments with arbitrarily small support.
By: Steven T. Berry, Philip A. Haile
Date: 2015-07
Keywords: Simultaneous equations, Nonseparable models, Nonparametric identification

Quantile Cointegration in the Autoregressive Distributed-Lag Modelling Framework
http://d.repec.org/n?u=RePEc:yon:wpaper:2015rwp-80&r=ecm
Xiao (2009) develops a novel estimation technique for quantile cointegrated time series by extending Phillips and Hansen's (1990) semiparametric approach and Saikkonen's (1991) parametrically augmented approach. This paper extends Pesaran and Shin's (1998) autoregressive distributed-lag approach into quantile regression by jointly analysing short-run dynamics and long-run cointegrating relationships across a range of quantiles. We derive the asymptotic theory and provide a general package in which the model can be estimated and tested within and across quantiles. We further affirm our theoretical results by Monte Carlo simulations. The main utilities of this analysis are demonstrated through an empirical application to dividend policy in the U.S.
By: Jin Seo Cho, Tae-Hwan Kim, Yongcheol Shin
Date: 2015-06
Keywords: QARDL, Quantile Regression, Long-run Cointegrating Relationship, Dividend Smoothing, Time-varying Rolling Estimation

Backtesting Strategies Based on Multiple Signals
http://d.repec.org/n?u=RePEc:nbr:nberwo:21329&r=ecm
Strategies selected by combining multiple signals suffer severe overfitting biases, because underlying signals are typically signed such that each predicts positive in-sample returns. “Highly significant” backtested performance is easy to generate by selecting stocks on the basis of combinations of randomly generated signals, which by construction have no true power. This paper analyzes t-statistic distributions for multi-signal strategies, both empirically and theoretically, to determine appropriate critical values, which can be several times standard levels. Overfitting bias also severely exacerbates the multiple testing bias that arises when investigators consider more results than they present. Combining the best k out of n candidate signals yields a bias almost as large as that obtained by selecting the single best of n^k candidate signals.
By: Robert Novy-Marx
Date: 2015-07

Estimating the Competitive Storage Model with Trending Commodity Prices
http://d.repec.org/n?u=RePEc:drm:wpaper:2015-15&r=ecm
We present a method to estimate jointly the parameters of a standard commodity storage model and the parameters characterizing the trend in commodity prices. This procedure allows the influence of a possible trend to be removed without restricting the model specification, and allows model and trend selection based on statistical criteria. The trend is modeled deterministically using linear or cubic spline functions of time. The results show that storage models with trend are always preferred to models without trend. They yield more plausible estimates of the structural parameters, with storage costs and demand elasticities that are more consistent with the literature. They imply occasional stockouts, whereas without trend the estimated models predict no stockouts over the sample period for most commodities. Moreover, accounting for a trend in the estimation implies price moments closer to those observed in commodity prices. Our results support the empirical relevance of the speculative storage model, and show that storage model estimations should not neglect the possibility of long-run price trends.
By: Christophe Gouel, Nicolas Legrand
Date: 2015
Keywords: Commodity prices, non-linear dynamic models, storage, structural estimation, trend

Hybrid scheme for Brownian semistationary processes
http://d.repec.org/n?u=RePEc:arx:papers:1507.03004&r=ecm
We introduce a simulation scheme for Brownian semistationary processes, which is based on discretizing the stochastic integral representation of the process in the time domain. We assume that the kernel function of the process is regularly varying at zero. The novel feature of the scheme is to approximate the kernel function by a power function near zero and by a step function elsewhere. The resulting approximation of the process is a combination of Wiener integrals of the power function and a Riemann sum, which is why we call this method a hybrid scheme. Our main theoretical result describes the asymptotics of the mean square error of the hybrid scheme and we observe that the scheme leads to a substantial improvement of accuracy compared to the ordinary forward Riemann-sum scheme, while having the same computational complexity. We exemplify the use of the hybrid scheme by two numerical experiments, where we examine the finite-sample properties of an estimator of the roughness parameter of a Brownian semistationary process and study Monte Carlo option pricing in the rough Bergomi model of Bayer et al. (2015), respectively.
By: Mikkel Bennedsen, Asger Lunde, Mikko S. Pakkanen
Date: 2015-07

Statistical Inference and Efficient Portfolio Investment Performance
http://d.repec.org/n?u=RePEc:lbo:lbowps:2015_01&r=ecm
The original Morey and Morey (1999) paper was the first to explicitly link the efficient theoretical frontier of the Markowitz portfolio balance model to the concept of the efficient empirical frontier in data envelopment analysis. The contribution of this paper is to extend the application of this linked research strategy to incorporate both sampling error addressed through bootstrapping and contextual explanation of the efficiency results through statistically robust second stage analysis. This paper first applies the procedures in Morey and Morey (1999) to a new modern data set comprising a multi-year sample of investment funds and then utilises Simar-Wilson (2008) bootstrapping algorithms to develop statistical inference and confidence intervals for the indexes of efficient investment fund performance. For the second stage analysis, robust-OLS regression, Tobit models and Papke-Wooldridge (PW) models are conducted and compared to evaluate contextual variables affecting the performance of investment funds.
By: Shibo Liu, Tom Weyman-Jones, Karligash Glass
Date: 2015-01
Keywords: nonlinear-DEA, portfolios, bootstrapping, second stage DEA

Persistence, Signal-Noise Pattern and Heterogeneity in Panel Data: With an Application to the Impact of Foreign Direct Investment on GDP
http://d.repec.org/n?u=RePEc:hhs:osloec:2015_004&r=ecm
GMM estimation of autoregressive panel data equations with error-ridden variables is considered for the case where the noise has memory. The impact of variation in the memory length, in the signal and noise spread, and in the degree of individual heterogeneity is discussed with respect to finite sample bias, using Monte Carlo simulations. Also explored are the impacts of the strength of autocorrelation and of the size of the IV set. GMM procedures using IVs in differences on equations in levels in general perform better in small samples than procedures using IVs in levels on equations in differences. A case study of the impact of Foreign Direct Investment (FDI) on GDP, inter alia contrasting the manufacturing and the service sector, based on country panel data supplements the simulation results.
By: Erik Biørn, Xuehui Han
Date: 2015-02-12
Keywords: Panel data; Measurement error; ARMA; GMM; Error memory; Monte Carlo; Foreign Direct Investment; Economic development; Country panel

Causal transmission in reduced-form models
http://d.repec.org/n?u=RePEc:nuf:econwp:1507&r=ecm
We propose a method to explore the causal transmission of a catalyst variable through two endogenous variables of interest. The method is based on the reduced-form system formed from the conditional distribution of the two endogenous variables given the catalyst. The method combines elements from instrumental variable analysis and Cholesky decomposition of structural vector autoregressions. We give conditions for uniqueness of the causal transmission.
By: Vassili Bazinas, Bent Nielsen
Date: 2015-07-13

Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of Small Numbers
http://d.repec.org/n?u=RePEc:igi:igierp:552&r=ecm
We find a subtle but substantial bias in a standard measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data. The mechanism is driven by a form of selection bias, which leads to an underestimate of the true conditional probability of a given outcome when conditioning on prior outcomes of the same kind. The biased measure has been used prominently in the literature that investigates incorrect beliefs in sequential decision making, most notably the Gambler’s Fallacy and the Hot Hand Fallacy. Upon correcting for the bias, the conclusions of some prominent studies in the literature are reversed. The bias also provides a structural explanation of why the belief in the law of small numbers persists, as repeated experience with finite sequences can only reinforce these beliefs, on average.
By: Joshua B. Miller, Adam Sanjurjo
Date: 2015
JEL classification: C12; C14; C18; C19; C91; D03; G02
Keywords: Law of Small Numbers; Alternation Bias; Negative Recency Bias; Gambler’s Fallacy; Hot Hand Fallacy; Hot Hand Effect; Sequential Decision Making; Sequential Data; Selection Bias; Finite Sample Bias; Small Sample Bias

Measuring spot variance spillovers when (co)variances are time-varying – the case of multivariate GARCH models
http://d.repec.org/n?u=RePEc:usg:econwp:2015:17&r=ecm
In highly integrated markets, news spreads at a fast pace and bedevils risk monitoring and optimal asset allocation. We therefore propose global and disaggregated measures of variance transmission that allow one to assess spillovers locally in time. Key to our approach is the vector ARMA representation of the second-order dynamics of the popular BEKK model. In an empirical application to a four-dimensional system of US asset classes - equity, fixed income, foreign exchange and commodities - we illustrate the second-order transmissions at various levels of (dis)aggregation. Moreover, we demonstrate that the proposed spillover indices are informative on the value-at-risk violations of portfolios composed of the considered asset classes.
By: Matthias R. Fengler, Helmut Herwartz
Date: 2015-07
Keywords: Multivariate GARCH, spillover index, value-at-risk, variance spillovers, variance decomposition
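Editor's note: the last abstract evaluates spillover indices against value-at-risk violations. As a generic illustration of how such violation counts are commonly assessed — this is the standard Kupiec (1995) proportion-of-failures test, not the authors' procedure, and the example numbers are invented — a minimal sketch in Python:

```python
import math

def kupiec_pof(violations: int, n_obs: int, p: float) -> tuple[float, float]:
    """Kupiec (1995) proportion-of-failures test for VaR backtests.

    H0: the violation rate equals the nominal VaR level p.
    Returns (LR statistic, p-value from a chi-squared(1) distribution).
    Requires 0 < violations < n_obs.
    """
    x = violations
    rate = x / n_obs
    # Binomial log-likelihood under H0 (rate p) and under the MLE (rate x / n_obs)
    ll_null = (n_obs - x) * math.log(1.0 - p) + x * math.log(p)
    ll_mle = (n_obs - x) * math.log(1.0 - rate) + x * math.log(rate)
    lr = max(-2.0 * (ll_null - ll_mle), 0.0)
    # Survival function of a chi-squared(1) variable: P(X > c) = erfc(sqrt(c / 2))
    p_value = math.erfc(math.sqrt(lr / 2.0))
    return lr, p_value

# Hypothetical backtest: a 1% VaR over 500 trading days with 11 violations
# (5 expected). The test rejects correct unconditional coverage at the 5% level.
lr, p_value = kupiec_pof(11, 500, 0.01)
```

This unconditional coverage test ignores clustering of violations; a Christoffersen-type independence test would be the natural complement when checking whether violation clusters line up with spillover episodes.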