on Econometrics
By: | Ulrich Hounyo (Oxford-Man Institute, University of Oxford, and Aarhus University and CREATES) |
Abstract: | We propose a bootstrap method for estimating the distribution (and functionals of it, such as the variance) of various integrated covariance matrix estimators. In particular, we first adapt the wild blocks of blocks bootstrap method suggested for the pre-averaged realized volatility estimator to a general class of estimators of integrated covolatility. We then show the first-order asymptotic validity of this method in the multivariate context in the potential presence of jumps, dependent microstructure noise, and irregularly spaced and non-synchronous data. Due to our focus on nonstudentized statistics, our results justify using the bootstrap to estimate the covariance matrix of a broad class of covolatility estimators. The bootstrap variance estimator is positive semi-definite by construction, an appealing feature that is not always shared by existing variance estimators of integrated covariance. As an application of our results, we also consider the bootstrap for regression coefficients. We show that the wild blocks of blocks bootstrap, appropriately centered, is able to mimic both the dependence and the heterogeneity of the scores, thus justifying the construction of bootstrap percentile intervals as well as variance estimates in this context. This contrasts with the traditional pairs bootstrap, which is not able to mimic the score heterogeneity even in the simple case where no microstructure noise is present. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves on the finite sample properties of the existing first-order asymptotic theory. We illustrate its practical use on high-frequency equity data. |
Keywords: | High-frequency data, market microstructure noise, non-synchronous data, jumps, realized measures, integrated covariance, wild bootstrap, block bootstrap |
JEL: | C15 C22 C58 |
Date: | 2014–10–07 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2014-35&r=ecm |
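The variance-estimation idea in the abstract above can be conveyed with a deliberately simplified sketch: squared returns are aggregated in blocks, each block is rescaled by an external two-point wild weight with mean one and variance one, and the variance of the resampled statistic serves as the bootstrap variance estimate, which is nonnegative by construction. This is only a schematic illustration of a wild blocks-type resampling scheme, not the paper's pre-averaged estimator; the block length, weight distribution, and input returns are illustrative choices.

```python
import random
import statistics

def realized_variance(returns):
    # Realized variance: sum of squared high-frequency returns.
    return sum(r * r for r in returns)

def wild_block_bootstrap_var(returns, block_len=5, n_boot=500, seed=0):
    """Schematic wild blocks bootstrap: each block's contribution to the
    realized variance is rescaled by an external random weight with mean 1
    and variance 1; the dispersion of the resampled statistics estimates
    the variance of the realized measure."""
    rng = random.Random(seed)
    blocks = [returns[i:i + block_len] for i in range(0, len(returns), block_len)]
    stats = []
    for _ in range(n_boot):
        rv = 0.0
        for b in blocks:
            # Two-point wild weight taking values 0 or 2 (mean 1, variance 1).
            w = 2.0 if rng.random() < 0.5 else 0.0
            rv += w * sum(r * r for r in b)
        stats.append(rv)
    return statistics.variance(stats)
```

The block structure preserves local dependence (e.g. from dependent noise), while the external weights inject the randomness, mirroring the wild bootstrap logic.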
By: | Fabio A. Miessi Sanches; Daniel Silva Junior; Sorawoot Srisuma |
Abstract: | The estimation of dynamic games is known to be a numerically challenging task. In this paper we propose an alternative class of asymptotic least squares estimators to Pesendorfer and Schmidt-Dengler's (2008), which includes several well-known estimators in the literature as special cases. Our estimator can be substantially easier to compute. In the leading case with a linear payoff specification, our estimator has a familiar OLS/GLS closed form that does not require any optimization. When payoffs have a partially linear form, we propose a sequential estimator where the parameters in the nonlinear term can be estimated independently of the linear components; the latter can then be obtained in closed form. We show that the class of estimators we propose and Pesendorfer and Schmidt-Dengler's are in fact asymptotically equivalent. Hence, there is no theoretical cost in reducing the computational burden. Our estimator appears to perform well in a simple Monte Carlo experiment. |
Keywords: | Closed-form Estimation; Dynamic Discrete Choice; Markovian Games |
JEL: | C14 C25 C61 |
Date: | 2014–10–16 |
URL: | http://d.repec.org/n?u=RePEc:spa:wpaper:2014wpecon19&r=ecm |
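The closed-form flavour of the estimator in the linear-payoff case can be illustrated with a generic least-squares sketch: once the objective is linear in the parameters, the estimate solves the normal equations directly, with no numerical optimization. The two-regressor design below is a hypothetical illustration, not the paper's dynamic-game moment conditions.

```python
def ols_closed_form(X, y):
    """Solve the normal equations (X'X) b = X'y by hand for a two-column
    design matrix; illustrates that a linear specification yields the
    estimate in closed form, with no optimizer involved."""
    # Accumulate the entries of X'X and X'y.
    a11 = sum(x[0] * x[0] for x in X)
    a12 = sum(x[0] * x[1] for x in X)
    a22 = sum(x[1] * x[1] for x in X)
    b1 = sum(x[0] * yi for x, yi in zip(X, y))
    b2 = sum(x[1] * yi for x, yi in zip(X, y))
    # Invert the 2x2 system explicitly (Cramer's rule).
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

A GLS variant would simply replace the unweighted sums with weighted ones; the closed-form character is unchanged.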
By: | Marc Hallin; Davide La Vecchia |
Abstract: | We define rank-based estimators (R-estimators) for semiparametric time series models in which the conditional location and scale depend on a Euclidean parameter, while the innovation density is an infinite-dimensional nuisance. Applications include linear and nonlinear models, featuring either homo- or heteroskedasticity (e.g. AR-ARCH and discretely observed diffusions with jumps). We show how to construct easy-to-implement R-estimators, which achieve semiparametric efficiency at some predetermined reference density while preserving root-n consistency, irrespective of the actual density. Numerical examples illustrate the good performance of the proposed estimators. An empirical analysis of the log-return and log-transformed two-scale realized volatility concludes the paper. |
Keywords: | conditional heteroskedasticity; distribution-freeness; forecasting; Lévy processes; one-step R-estimators |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/176572&r=ecm |
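As background for the rank-based idea, the classical Hodges-Lehmann location estimator, the median of all pairwise Walsh averages, is perhaps the simplest example of an R-estimator: it is root-n consistent without assuming any particular innovation density. The paper's one-step R-estimators for conditional location-scale models are substantially more involved; this is only a minimal illustration of the general principle.

```python
import statistics

def hodges_lehmann(sample):
    """Classical Hodges-Lehmann estimator of location: the median of all
    pairwise Walsh averages (x_i + x_j)/2 with i <= j. Rank-based, hence
    robust to heavy-tailed observations."""
    walsh = [(sample[i] + sample[j]) / 2.0
             for i in range(len(sample))
             for j in range(i, len(sample))]
    return statistics.median(walsh)
```

For symmetric data it tracks the mean; a single outlier barely moves it, unlike the sample mean.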
By: | Giacomo Sbrana (NEOMA Business School); Andrea Silvestrini (Bank of Italy, Economic Research Department) |
Abstract: | Exponential smoothing models are an important prediction tool in macroeconomics, finance and business. This paper presents the analytical forecasting properties of the random coefficient exponential smoothing model in the multiple source of error framework. The random coefficient state-space representation allows for switching between simple exponential smoothing and the local linear trend. Therefore, it is possible to control, in a flexible manner, the randomly changing dynamic behaviour of the time series. The paper establishes the algebraic mapping between the state-space parameters and the implied reduced-form ARIMA parameters. In addition, it shows that parametric mapping surmounts the difficulties that are likely to emerge in a direct estimation of the random coefficient state-space model. Finally, it presents an empirical application comparing the forecast accuracy of the suggested model vis-à-vis other benchmark models, both in the ARIMA and in the exponential smoothing class. Using time series on wholesalers' inventories in the USA, the out-of-sample results show that the reduced form of the random coefficient exponential smoothing model tends to be superior to its competitors. |
Keywords: | exponential smoothing, ARIMA, inventory, forecasting |
Date: | 2014–07 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wptemi:td_971_14&r=ecm |
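The two regimes the random-coefficient model switches between, simple exponential smoothing and the local linear trend, have textbook recursions that are easy to state. The sketch below implements both forecast rules; the smoothing constants are illustrative, and the algebraic mapping to reduced-form ARIMA parameters studied in the paper (for instance, simple exponential smoothing corresponds to an ARIMA(0,1,1)) is not reproduced here.

```python
def ses_forecast(series, alpha):
    """Simple exponential smoothing: level update
    l_t = alpha * y_t + (1 - alpha) * l_{t-1};
    every h-step forecast is flat at the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def local_linear_trend_forecast(series, alpha, beta, h=1):
    """Holt's local linear trend: a smoothed slope b_t is added to the
    level recursion, so the h-step forecast is l_T + h * b_T."""
    level, slope = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + slope)
        slope = beta * (level - prev) + (1 - beta) * slope
    return level + h * slope
```

On a perfectly linear series the trend recursion extrapolates the line exactly, while simple exponential smoothing can only lag behind it, which is why switching between the two regimes is useful.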
By: | Timothy B. Armstrong (Cowles Foundation, Yale University) |
Abstract: | This paper derives asymptotic power functions for Cramer-von Mises (CvM) style tests for conditional moment inequality models in the set identified case. Combined with power results for Kolmogorov-Smirnov (KS) tests, these results can be used to choose the optimal test statistic, weighting function and, for tests based on kernel estimates, kernel bandwidth. The results show that KS tests are preferred to CvM tests, and that a truncated variance weighting is preferred to bounded weightings, under a minimax criterion and for a class of alternatives that arises naturally in these models. The results also provide insight into how moment selection and the choice of instruments affect power. Such considerations have a large effect on power for instrument-based approaches when a CvM statistic or an unweighted KS statistic is used, and relatively little effect on power with optimally weighted KS tests. |
Keywords: | Moment inequalities, Relative efficiency |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1960&r=ecm |
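The contrast between the two families of statistics can be made concrete on a vector of estimated moment violations: a KS-style statistic takes the largest positive violation (a sup criterion), while a CvM-style statistic averages squared positive violations (an integrated criterion), so a single large violation that dominates the KS statistic is diluted by the CvM averaging. A toy sketch, ignoring the weighting and studentization that the paper's power analysis is actually about:

```python
def ks_statistic(violations):
    """KS-style statistic: the largest positive moment violation
    (inequalities are taken as E[m] >= 0, so positive values violate)."""
    return max(max(v, 0.0) for v in violations)

def cvm_statistic(violations):
    """CvM-style statistic: the average squared positive violation,
    which spreads the evidence across all moments."""
    return sum(max(v, 0.0) ** 2 for v in violations) / len(violations)
```

With many moments and one sharp violation, the KS statistic stays at the size of that violation while the CvM statistic shrinks toward zero, which is the rough intuition behind the power comparison.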
By: | Nikolay Iskrev |
Abstract: | In a recent article, Canova et al. (2014) study the optimal choice of variables to use in the estimation of a simplified version of the Smets and Wouters (2007) model. In this comment, I examine their conclusions by applying a different methodology to the same model. The results call into question most of Canova et al.'s (2014) findings. |
Keywords: | DSGE models, Observables, Identification, Information matrix, Cramer-Rao lower bound |
JEL: | C32 C51 C52 E32 |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:cpm:dynare:041&r=ecm |
By: | van de Velden, M.; Iodice D' Enza, A.; Palumbo, F. |
Abstract: | A new method is proposed that combines dimension reduction and cluster analysis for categorical data. A least-squares objective function is formulated that approximates the cluster-by-variables cross-tabulation. Individual observations are assigned to clusters in such a way that the distributions over the categorical variables for the different clusters are optimally separated. In a unified framework, a brief review of alternative methods is provided and the performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with cluster analysis based on the full-dimensional data. Our results show that the joint dimension reduction and clustering methods outperform full-dimensional clustering, both with respect to the retrieval of the true underlying cluster structure and with respect to internal cluster validity measures. The differences increase when more variables are involved and in the presence of noise variables. |
Keywords: | Correspondence analysis, cluster analysis, dimension reduction, categorical variables |
Date: | 2014–10–01 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureir:77010&r=ecm |
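The goal described above, separating the category distributions of the different clusters, can be illustrated with a toy score: cross-tabulate cluster labels against a categorical variable, row-normalise, and sum squared differences between the cluster profiles. This is only a schematic stand-in for the paper's least-squares criterion, with hypothetical data; the dimension-reduction step is omitted entirely.

```python
from collections import Counter, defaultdict

def cluster_category_table(labels, categories):
    """Cross-tabulate cluster labels against one categorical variable and
    row-normalise, giving each cluster's distribution over the categories."""
    counts = defaultdict(Counter)
    for lab, cat in zip(labels, categories):
        counts[lab][cat] += 1
    cats = sorted(set(categories))
    return {lab: [c[k] / sum(c.values()) for k in cats]
            for lab, c in counts.items()}

def separation(table):
    """Toy separation score: sum of squared differences between all pairs
    of cluster profiles; larger means better-separated clusters."""
    labs = list(table)
    return sum(
        (a - b) ** 2
        for i in range(len(labs))
        for j in range(i + 1, len(labs))
        for a, b in zip(table[labs[i]], table[labs[j]])
    )
```

Perfectly separated clusters (each cluster concentrated on its own categories) maximise this score; identical profiles give zero.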
By: | Sylvia Kaufmann (Study Center Gerzensee) |
Abstract: | Two Bayesian sampling schemes are outlined to estimate a K-state Markov switching model with time-varying transition probabilities. Data augmentation for the multinomial logit model of the transition probabilities is alternatively based on a random utility and a difference in random utility extension. We propose a definition of the relevant threshold level of the covariate driving the transition distribution, at which the transition distributions are balanced across states. Identification issues are addressed with random permutation sampling. In terms of efficiency, the extension to the difference in random utility specification in combination with random permutation sampling performs best. We apply the method to estimate a regime-dependent two-pillar Phillips curve for the euro area, in which lagged credit growth determines the transition distribution of the states. |
Date: | 2014–02 |
URL: | http://d.repec.org/n?u=RePEc:szg:worpap:1404&r=ecm |
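The notion of a balancing threshold can be illustrated in a two-state setting: with logistic (rather than the paper's K-state multinomial logit) transition probabilities driven by a covariate, the threshold is the covariate level at which the probability of leaving state 1 equals the probability of leaving state 2. The coefficients below are hypothetical, and the threshold is found by bisection:

```python
import math

def transition_prob(x, a, b):
    """Logistic transition probability p(x) = 1 / (1 + exp(-(a + b*x)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

def balancing_threshold(a12, b12, a21, b21, lo=-10.0, hi=10.0):
    """Bisection for the covariate level x* at which the probability of
    leaving state 1 equals the probability of leaving state 2, i.e. the
    transition distributions are balanced across the two states."""
    f = lambda x: transition_prob(x, a12, b12) - transition_prob(x, a21, b21)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid   # sign change in [lo, mid]: keep the left half
        else:
            lo = mid   # otherwise the root lies in [mid, hi]
    return 0.5 * (lo + hi)
```

For instance, with exit-from-state-1 logit index x and exit-from-state-2 index 1 - x, the two exit probabilities cross at x* = 0.5 by symmetry.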