nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒09‒03
twenty-two papers chosen by
Sune Karlsson
Orebro University

  1. Fully Modified Narrow-Band Least Squares Estimation of Weak Fractional Cointegration By Morten Ørregaard Nielsen; Per Frederiksen
  2. Semi-Nonparametric Estimation and Misspecification Testing of Diffusion Models By Dennis Kristensen
  3. Simple simulation of diffusion bridges with application to likelihood inference for diffusions By Mogens Bladt; Michael Sørensen
  4. "On Properties of Separating Information Maximum Likelihood Estimation of Realized Volatility and Covariance with Micro-Market Noise" By Naoto Kunitomo; Seisho Sato
  5. Maximum likelihood estimation for integrated diffusion processes By Fernando Baltazar-Larios; Michael Sørensen
  6. The Role of Realized Ex-post Covariance Measures and Dynamic Model Choice on the Quality of Covariance Forecasts By Rasmus Tangsgaard Varneskov; Valeri Voev
  7. Linearity Testing in Time-Varying Smooth Transition Autoregressive Models under Unknown Degree of Persistency By Robinson Kruse; Rickard Sandberg
  8. Characterizing economic trends by Bayesian stochastic model specification search By Stefano Grassi; Tommaso Proietti
  9. Long memory and changing persistence By Robinson Kruse; Philipp Sibbertsen
  10. Random Coefficient Logit Model for Large Datasets By Hernández-Mireles, C.; Fok, D.
  11. Jump-robust volatility estimation using nearest neighbor truncation By Torben G. Andersen; Dobrislav Dobrev; Ernst Schaumburg
  12. A note on identification patterns in DSGE models By Michal Andrle
  13. The Role of Dynamic Specification in Forecasting Volatility in the Presence of Jumps and Noisy High-Frequency Data By Rasmus Tangsgaard Varneskov
  14. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models By Jeroen V.K. Rombouts; Lars Stentoft
  15. Asymptotic normality of the QMLE in the level-effect ARCH model By Christian M. Dahl; Emma M. Iglesias
  16. Predictable return distributions By Thomas Q. Pedersen
  17. On Statistical Inference for Inequality Measures Calculated from Complex Survey Data By Judith A. Clarke; Nilanjana Roy
  18. A comparative study of the Lasso-type and heuristic model selection methods By Ivan Savin
  19. Sign Restrictions in Structural Vector Autoregressions: A Critical Review By Renee Fry; Adrian Pagan
  20. Sensitivity Analysis of SAR Estimators: A Simulation Study By Shuangzhe Liu; Wolfgang Polasek; Richard Sellner
  21. Imposing parsimony in cross-country growth regressions By Marek Jarociński
  22. Is Economic Recovery a Myth? Robust Estimation of Impulse Responses By Coen N. Teulings; Nick Zubanov

  1. By: Morten Ørregaard Nielsen (Queen's University and CREATES); Per Frederiksen (Nordea Markets)
    Abstract: We consider estimation of the cointegrating relation in the weak fractional cointegration model, where the strength of the cointegrating relation (difference in memory parameters) is less than one-half. A special case is the stationary fractional cointegration model, which has found important application recently, especially in financial economics. Previous research on this model has considered a semiparametric narrow-band least squares (NBLS) estimator in the frequency domain, but in the stationary case its asymptotic distribution has been derived only under a condition of non-coherence between regressors and errors at the zero frequency. We show that in the absence of this condition, the NBLS estimator is asymptotically biased, and also that the bias can be consistently estimated. Consequently, we introduce a fully modified NBLS estimator which eliminates the bias, and indeed enjoys a faster rate of convergence than NBLS in general. We also show that local Whittle estimation of the integration order of the errors can be conducted consistently based on NBLS residuals, but the estimator has the same asymptotic distribution as if the errors were observed only under the condition of non-coherence. Furthermore, compared to much previous research, the development of the asymptotic distribution theory is based on a different spectral density representation, which is relevant for multivariate fractionally integrated processes, and the use of this representation is shown to result in lower asymptotic bias and variance of the narrow-band estimators. We present simulation evidence and a series of empirical illustrations to demonstrate the feasibility and empirical relevance of our methodology.
    Keywords: Fractional cointegration, frequency domain, fully modified estimation, long memory, semiparametric.
    JEL: C22
    Date: 2010–05–12
  2. By: Dennis Kristensen (Dep. of Economics, Columbia University and CREATES)
    Abstract: Novel transition-based misspecification tests of semiparametric and fully parametric univariate diffusion models based on the estimators developed in Kristensen (Journal of Econometrics, 2010) are proposed. It is demonstrated that transition-based tests in general lack power in detecting local departures from the null since they integrate out local features of the drift and volatility. As a solution to this, tests that directly compare drift and volatility estimators under the relevant null and alternative are also developed which exhibit better power against local alternatives.
    Keywords: Diffusion process, kernel estimation, nonparametric, specification testing, semiparametric, transition density
    JEL: C12 C13 C14 C22
    Date: 2010–08–01
  3. By: Mogens Bladt (Universidad Nacional Autónoma de México); Michael Sørensen (University of Copenhagen and CREATES)
    Abstract: With a view to likelihood inference for discretely observed diffusion type models, we propose a simple method of simulating approximations to diffusion bridges. The method is applicable to all one-dimensional diffusion processes and has the advantage that simple simulation methods like the Euler scheme can be applied to bridge simulation. Another advantage over other bridge simulation methods is that the proposed method works well when the diffusion bridge is defined in a long interval because the computational complexity of the method is linear in the length of the interval. In a simulation study we investigate the accuracy and efficiency of the new method and compare it to exact simulation methods. In the study the method provides a very good approximation to the distribution of a diffusion bridge for bridges that are likely to occur in applications to likelihood inference. To illustrate the usefulness of the new method, we present an EM-algorithm for a discretely observed diffusion process. We demonstrate how this estimation method simplifies for exponential families of diffusions and very briefly consider Bayesian inference.
    Keywords: Bayesian inference, diffusion bridge, discretely sampled diffusions, EM-algorithm, Euler scheme, likelihood inference, time-reversion
    JEL: C22 C15
    Date: 2010–08–05
  4. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo); Seisho Sato (Institute of Statistical Mathematics)
    Abstract: For estimating realized volatility and covariance from high-frequency data in the possible presence of micro-market noise, Kunitomo and Sato (2008a, 2008b, 2010a, 2010b) introduced the Separating Information Maximum Likelihood (SIML) method. The resulting estimator is simple and can be represented as a specific quadratic form of returns. We show that the SIML estimator has reasonable asymptotic properties: it is consistent and asymptotically normal (with stable convergence in the general case) as the sample size grows, under general conditions that include some non-Gaussian processes and some volatility models. Based on simulations, we find that the SIML estimator has reasonable finite-sample properties and thus would be useful in practice. The SIML estimator is asymptotically robust in the sense that it remains consistent when the noise terms are weakly dependent and endogenously correlated with the efficient market price process. We also apply our method to an analysis of Nikkei-225 Futures, a contract written on the major stock index in the Japanese financial sector.
    Date: 2010–08
  5. By: Fernando Baltazar-Larios (Universidad Nacional Autónoma de México); Michael Sørensen (University of Copenhagen and CREATES)
    Abstract: We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data are a discrete-time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observation. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
    Keywords: Diffusion bridge, discretely sampled diffusions, EM-algorithm, likelihood inference, measurement error, stochastic differential equation, stochastic volatility.
    JEL: C22 C51
    Date: 2010–08–05
  6. By: Rasmus Tangsgaard Varneskov (School of Economics and Management, Aarhus University and CREATES); Valeri Voev (School of Economics and Management, Aarhus University and CREATES)
    Abstract: Recently, consistent measures of the ex-post covariation of financial assets based on noisy high-frequency data have been proposed. A related strand of literature focuses on dynamic models and covariance forecasting for covariance measures based on high-frequency data. The aim of this paper is to investigate whether more sophisticated estimation approaches lead to more precise covariance forecasts, both in a statistical precision sense and in terms of economic value. A further issue we address is the relative importance of the quality of the realized measure as an input in a given forecasting model vs. the model’s dynamic specification. The main finding is that the largest gains result from switching from daily to high-frequency data. Further gains are achieved if a simple sparse-sampling covariance measure is replaced with a more efficient and noise-robust estimator.
    Keywords: Forecast evaluation, Volatility forecasting, Portfolio optimization, Mean-variance analysis.
    JEL: C32 C53 G11
    Date: 2010–08–26
  7. By: Robinson Kruse (School of Economics and Management, Aarhus University and CREATES); Rickard Sandberg (Department of Economic Statistics, Stockholm School of Economics)
    Abstract: Building upon the work of Vogelsang (1998) and Harvey and Leybourne (2007) we derive tests that are invariant to the order of integration when the null hypothesis of linearity is tested in time-varying smooth transition models. As heteroscedasticity may lead to spurious rejections of the null hypothesis, a White correction is also considered. The asymptotic properties of the tests are studied. Our Monte Carlo simulations suggest that the newly proposed tests exhibit good size and competitive power properties. An empirical application to US inflation data from the Post-Bretton Woods period underlines the empirical usefulness of our tests.
    Keywords: Linearity testing, Linear I(0) and I(1) models, Non-linear I(0) and I(1) models, White correction.
    JEL: C22
    Date: 2010–07–26
  8. By: Stefano Grassi; Tommaso Proietti
    Abstract: We apply a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. We illustrate that the methodology can be quite successfully applied to discriminate between stochastic and deterministic trends. In particular, we formulate autoregressive models with stochastic trend components and decide whether specific features of the series, i.e. the underlying level and/or the rate of drift, are fixed or evolving.
    Keywords: Bayesian model selection; stationarity; unit roots; stochastic trends; variable selection.
    JEL: E32 C52 C22
    Date: 2010–08–25
  9. By: Robinson Kruse (School of Economics and Management, Aarhus University and CREATES); Philipp Sibbertsen (Leibniz University Hannover, School of Economics and Management, Institute of Statistics)
    Abstract: We study the empirical behaviour of semi-parametric log-periodogram estimation for long memory models when the true process exhibits a change in persistence. Simulation results confirm theoretical arguments which suggest that evidence for long memory is likely to be found. A recently proposed test by Sibbertsen and Kruse (2009) is shown to exhibit noticeable power to discriminate between long memory and a structural change in autoregressive parameters.
    Keywords: Long memory, changing persistence, structural break, semi-parametric estimation
    JEL: C12 C22
    Date: 2010–08–01
  10. By: Hernández-Mireles, C.; Fok, D.
    Abstract: We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets and relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al. [Jiang, Renna, Manchanda, Puneet and Peter Rossi, 2009. Journal of Econometrics 149 (2) 136-148] and extend their method in four directions. First, we reduce the dimensionality of the covariance matrix of the random effects by using a factor structure. The dimension reduction can be substantial depending on the number of common factors and the number of products. Second, we parametrize the covariance matrix in terms of correlations and standard deviations, like Barnard et al. [Barnard, John, McCulloch, Robert and Xiao-Li Meng, 2000. Statistica Sinica 10 1281-1311], and we present a Metropolis sampling scheme based on this specification. Third, we allow for long-term trends in preferences using time-varying common factors. Inference on these factors is obtained using a simulation smoother for state space time series. Finally, we consider an attractive combination of priors applied to each market and globally to all markets to speed up computation time. The main advantage of this prior specification is that it lets us estimate the random coefficients based on all available data. We study both simulated data and a real dataset containing several markets, each consisting of 30 to 60 products, and our method proves to be promising with immediate practical applicability.
    Keywords: random coefficient logit; aggregate share models; Bayesian analysis
    Date: 2010–05–31
  11. By: Torben G. Andersen; Dobrislav Dobrev; Ernst Schaumburg
    Abstract: We propose two new jump-robust estimators of integrated variance based on high-frequency return observations. These MinRV and MedRV estimators provide an attractive alternative to the prevailing bipower and multipower variation measures. Specifically, the MedRV estimator has better theoretical efficiency properties than the tripower variation measure and displays better finite-sample robustness to both jumps and the occurrence of “zero” returns in the sample. Unlike the bipower variation measure, the new estimators allow for the development of an asymptotic limit theory in the presence of jumps. Finally, they retain the local nature associated with the low-order multipower variation measures. This proves essential for alleviating finite sample biases arising from the pronounced intraday volatility pattern that afflicts alternative jump-robust estimators based on longer blocks of returns. An empirical investigation of the Dow Jones 30 stocks and an extensive simulation study corroborate the robustness and efficiency properties of the new estimators.
    Keywords: Stocks - Rate of return; Stock market; Stock - Prices
    Date: 2010
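The MinRV and MedRV constructions are simple enough to sketch directly. The scaling constants below follow the form in which these estimators are commonly stated; treat the snippet as an illustration under that assumption, not as the authors' implementation. The simulated return series and jump size are invented for the example:

```python
import numpy as np

def min_rv(r):
    """MinRV: scaled sum of squared minima over adjacent pairs of |returns|."""
    r = np.abs(np.asarray(r, dtype=float))
    n = r.size
    pair_min = np.minimum(r[:-1], r[1:])
    return np.pi / (np.pi - 2.0) * n / (n - 1.0) * np.sum(pair_min ** 2)

def med_rv(r):
    """MedRV: scaled sum of squared medians over rolling triples of |returns|."""
    r = np.abs(np.asarray(r, dtype=float))
    n = r.size
    triples = np.column_stack([r[:-2], r[1:-1], r[2:]])
    med = np.median(triples, axis=1)
    return np.pi / (6.0 - 4.0 * np.sqrt(3.0) + np.pi) * n / (n - 2.0) * np.sum(med ** 2)

# Diffusive returns with one large jump: the jump inflates realized variance,
# but the min/med truncation largely screens it out.
rng = np.random.default_rng(0)
n = 1000
r = rng.normal(0.0, 1.0 / np.sqrt(n), n)   # integrated variance = 1 in expectation
r_jump = r.copy()
r_jump[500] += 0.5                          # a single jump
print(np.sum(r_jump ** 2), min_rv(r_jump), med_rv(r_jump))
```

Because a single jump return is the maximum of every pair or triple it enters, it never survives the min/median operation, which is the jump-robustness the abstract describes.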
  12. By: Michal Andrle (Czech National Bank, Monetary and Statistics Dept., Macroeconomic Forecasting Division.)
    Abstract: This paper comments on selected aspects of identification issues in DSGE models. It suggests the singular value decomposition (SVD) as a useful tool for detecting local weak identification and non-identification. This decomposition is useful for checking rank conditions of identification and identification strength, and it also offers parameter-space ‘identification patterns’. Compared with other identification methods, the singular value decomposition is particularly easy to apply and offers an intuitive interpretation. We suggest a simple algorithm for analyzing identification and an algorithm for finding the most identifiable set of parameters. We also demonstrate that bivariate and multiple correlation coefficients of parameters provide only a limited check of identification problems. JEL Classification: F31, F41.
    Keywords: DSGE, identification, information matrix, rank, singular value decomposition.
    Date: 2010–08
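The SVD-based check can be illustrated generically: take the Jacobian of model-implied moments with respect to the parameters; near-zero singular values flag weak or non-identification, and the corresponding right singular vectors give the problematic parameter-space directions. The toy Jacobian below is invented for the illustration and is not a DSGE model:

```python
import numpy as np

def identification_patterns(jacobian, tol=1e-8):
    """SVD-based local identification check.

    `jacobian`: derivatives of model-implied moments w.r.t. parameters.
    Returns singular values, right singular vectors (parameter-space
    directions), and the numerical rank; directions paired with
    near-zero singular values are locally unidentified.
    """
    u, s, vt = np.linalg.svd(np.asarray(jacobian, dtype=float))
    rank = int(np.sum(s > tol * s.max()))
    return s, vt.T, rank

# Toy example: three parameters, but the moments depend only on the sum of
# the first two, so the direction (1, -1, 0)/sqrt(2) is not identified.
J = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.0, 1.0],
              [0.5, 0.5, 3.0]])
s, v, rank = identification_patterns(J)
print(s, rank)
print(v[:, -1])   # direction associated with the smallest singular value
```

The printed direction concentrates on the first two parameters with opposite signs: only their sum is pinned down by the moments, which is exactly the kind of ‘identification pattern’ the abstract refers to.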
  13. By: Rasmus Tangsgaard Varneskov (School of Economics and Management, Aarhus University and CREATES)
    Abstract: This paper considers the performance of different long-memory dynamic models when forecasting volatility in the stock market using implied volatility as an exogenous variable in the information set. Observed volatility is separated into its continuous and jump components in a framework that allows for consistent estimation in the presence of market microstructure noise. A comparison between a class of HAR and ARFIMA models is facilitated on the basis of out-of-sample forecasting performance. Implied volatility conveys incremental information about future volatility in both specifications, improving performance both in- and out-of-sample for all models. Furthermore, the ARFIMA class of models dominates the HAR specifications in terms of out-of-sample performance both with and without implied volatility in the information set. A vectorized ARFIMA (vecARFIMA) model is introduced to control for possible endogeneity issues. This model is compared to a vecHAR specification, reinforcing the results from the single equation framework.
    Keywords: ARFIMA, HAR, Implied Volatility, Jumps, Market Microstructure Noise, VecARFIMA, Volatility Forecasting
    JEL: C14 C22 C32 C53 G10
    Date: 2010–08–19
  14. By: Jeroen V.K. Rombouts (Institute of Applied Economics at HEC Montréal, CIRANO, CIRPEE, Université catholique de Louvain (CORE)); Lars Stentoft (Department of Finance at HEC Montréal, CIRANO, CIRPEE and CREATES)
    Abstract: This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%.
    Keywords: Asymmetric heteroskedastic models, finite mixture models, option pricing, out-of-sample prediction, statistical fit
    JEL: C11 C15 C22 G13
    Date: 2010–08–24
  15. By: Christian M. Dahl (Department of Business and Economics, University of Southern Denmark and CREATES); Emma M. Iglesias (Department of Economics, Michigan State University)
    Abstract: In this paper, consistency and asymptotic normality of the quasi maximum likelihood estimator in the level-effect ARCH model of Chan, Karolyi, Longstaff and Sanders (1992) are established. We consider explicitly the case where the parameters of the conditional heteroskedastic process are in the stationary region and discuss carefully how the results can be extended to the region where the conditional heteroskedastic process is nonstationary. The results illustrate that Jensen and Rahbek's (2004a, 2004b) approach can be extended further than to traditional ARCH and GARCH models.
    Keywords: Level-effect ARCH, QMLE, Asymptotics, Stationarity, Nonstationarity.
    JEL: C12 C13 C22
    Date: 2010–08–25
  16. By: Thomas Q. Pedersen (School of Economics and Management, Aarhus University and CREATES)
    Abstract: This paper provides detailed insights into predictability of the entire stock and bond return distribution through the use of quantile regression. This allows us to examine specific parts of the return distribution such as the tails or the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine stock and bond return distributions individually, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that certain parts of the return distributions are predictable as a function of economic state variables. The results are, however, very different for stocks and bonds. The state variables primarily predict only location shifts in the stock return distribution, while they also predict changes in higher-order moments in the bond return distribution. Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing an assumption of lognormally distributed returns.
    Keywords: Return predictability, return distribution, quantile regression, multivariate model, out-of-sample forecast, portfolio choice
    JEL: C21 C31 G11 G12
    Date: 2010–07–01
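Quantile regression itself can be sketched via its standard linear-programming formulation: minimize the asymmetrically weighted absolute residuals. This is a generic illustration rather than the paper's specification, and the location-scale data-generating process is invented for the example:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Linear quantile regression via the standard LP formulation:
    min over (beta, u+, u-) of tau*sum(u+) + (1-tau)*sum(u-)
    subject to X @ beta + u+ - u- = y, with u+, u- >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Location-scale model: both the center and the spread of y shift with x,
# so the slope differs across quantiles (true slope at quantile tau is
# 2 + z_tau, with z_tau the standard normal quantile).
rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + (0.1 + 1.0 * x) * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
b10 = quantile_regression(X, y, 0.10)
b50 = quantile_regression(X, y, 0.50)
b90 = quantile_regression(X, y, 0.90)
print(b10, b50, b90)
```

The fitted slope rises with the quantile level, i.e. the state variable here predicts not just a location shift but a change in the spread of the distribution, which is the kind of effect the abstract describes for bond returns.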
  17. By: Judith A. Clarke (Department of Economics, University of Victoria); Nilanjana Roy (Department of Economics, University of Victoria)
    Abstract: We examine inference for Generalized Entropy and Atkinson inequality measures with complex survey data, using Wald statistics with variance-covariance matrices estimated by a linearization approximation method. Tests of the equality of two or more inequality measures, including sub-group decomposition indices and group shares, are covered. We illustrate with Indian data from three surveys, examining pre-school children’s height, an anthropometric measure that can indicate long-term malnutrition. Sampling involved an urban/rural stratification with clustering before selection of households. We compare the linearization complex survey outcomes with those from an incorrect independently and identically distributed (iid) assumption and a bootstrap that accounts for the survey design. For our samples, the results from the easy-to-implement linearization method and the more computationally burdensome bootstrap are in close agreement. This finding is of interest to applied researchers, as bootstrapping is currently the most commonly used method for undertaking statistical inference in this literature.
    Keywords: Complex survey, inequality, generalized entropy, Atkinson, decomposition, linearization
    JEL: C12 C42 D31
    Date: 2010–08–23
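The point estimates underlying the inference problem are straightforward once sampling weights are carried along. The sketch below computes weighted Generalized Entropy and Atkinson indices from their textbook definitions; the linearized variances for the complex design, which are the paper's actual contribution, are not attempted here:

```python
import numpy as np

def gen_entropy(y, w=None, alpha=2.0):
    """Weighted generalized entropy index GE(alpha), for alpha not in {0, 1}."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    w = w / w.sum()
    mu = np.sum(w * y)
    return (np.sum(w * (y / mu) ** alpha) - 1.0) / (alpha * (alpha - 1.0))

def atkinson(y, w=None, eps=0.5):
    """Weighted Atkinson inequality index, for eps != 1: one minus the
    equally-distributed-equivalent level relative to the mean."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    w = w / w.sum()
    mu = np.sum(w * y)
    ede = np.sum(w * (y / mu) ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - ede

# Perfect equality gives zero for both; a skewed distribution gives
# positive GE and an Atkinson index strictly between 0 and 1.
rng = np.random.default_rng(2)
y_equal = np.full(1000, 5.0)
y_skewed = rng.lognormal(mean=1.0, sigma=0.8, size=1000)
print(gen_entropy(y_equal), atkinson(y_equal))
print(gen_entropy(y_skewed), atkinson(y_skewed))
```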
  18. By: Ivan Savin
    Abstract: This study presents a first comparative analysis of Lasso-type (Lasso, adaptive Lasso, elastic net) and heuristic subset selection methods. Although the Lasso has shown success in many situations, it has some limitations. In particular, inconsistent results are obtained for pairwise strongly correlated predictors. An alternative to the Lasso is model selection based on information criteria (IC), which remains consistent in that situation. However, these criteria are hard to optimize due to a discrete search space. To overcome this problem, an optimization heuristic (a genetic algorithm) is applied. Monte Carlo simulation results are reported to illustrate the performance of the methods.
    Keywords: Model selection, Lasso, adaptive Lasso, elastic net, heuristic methods, genetic algorithms
    Date: 2010–08–24
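A toy version of the heuristic approach — a genetic algorithm searching binary inclusion vectors for the model minimizing BIC — can be sketched as follows. Population size, mutation rate, and the data-generating process are arbitrary choices for the illustration, not the paper's settings:

```python
import numpy as np

def bic(X, y, mask):
    """BIC of an OLS fit (no intercept) using the regressors in `mask`."""
    n = len(y)
    k = int(mask.sum())
    if k == 0:
        return n * np.log(np.sum(y ** 2) / n)
    Xs = X[:, mask.astype(bool)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def ga_select(X, y, pop=40, gens=60, pmut=0.1, seed=0):
    """Tiny genetic algorithm over inclusion vectors minimizing BIC."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    popu = rng.integers(0, 2, size=(pop, p))
    for _ in range(gens):
        fit = np.array([bic(X, y, m) for m in popu])
        popu = popu[np.argsort(fit)]
        elite = popu[: pop // 2]                                   # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        cross = rng.integers(0, 2, size=(pop - len(elite), p))
        children = np.where(cross == 1, parents[:, 0], parents[:, 1])  # uniform crossover
        flips = rng.random(children.shape) < pmut                      # mutation
        children = np.where(flips, 1 - children, children)
        popu = np.vstack([elite, children])
    fit = np.array([bic(X, y, m) for m in popu])
    return popu[np.argmin(fit)]

# Ten candidate regressors, only the first three matter.
rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=n)
best = ga_select(X, y)
print(best)
```

The GA sidesteps the discrete search space the abstract mentions: it never enumerates all 2^p subsets, yet the BIC-best members of the population converge on the relevant regressors.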
  19. By: Renee Fry; Adrian Pagan
    Abstract: Structural Vector Autoregressions (SVARs) have become one of the major ways of extracting information about the macro economy. One might cite three major uses of them in macro-econometric research. 1. For quantifying impulse responses to macroeconomic shocks. 2. For measuring the degree of uncertainty about the impulse responses or other quantities formed from them. 3. For deciding on the contribution of different shocks to fluctuations and forecast errors through variance decompositions. To determine this information a VAR is first fitted to summarize the data and then a structural VAR (SVAR) is proposed whose structural equation errors are taken to be the economic shocks. The parameters of these structural equations are then estimated by utilizing the information in the VAR. The VAR is a reduced form which summarizes the data; the SVAR provides an interpretation of the data. As for any set of structural equations, recovery of the structural equation parameters (shocks) requires the use of identification restrictions that reduce the number of "free" parameters in the structural equations to the number that can be recovered from the information in the reduced form.
    Date: 2010–07
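The sign-restriction approach the review critiques can be sketched in a few lines: draw orthogonal rotations of a Cholesky factor of the reduced-form covariance and keep the draws whose impact responses satisfy the restrictions. The bivariate covariance matrix and the single restriction below are invented for the illustration:

```python
import numpy as np

def sign_restricted_impacts(sigma, checks, ndraw=2000, seed=0):
    """Draw candidate structural impact matrices B with B @ B.T = sigma
    and keep those whose impact responses pass the sign checks."""
    rng = np.random.default_rng(seed)
    P = np.linalg.cholesky(sigma)
    n = sigma.shape[0]
    kept = []
    for _ in range(ndraw):
        q, r = np.linalg.qr(rng.normal(size=(n, n)))
        q = q @ np.diag(np.sign(np.diag(r)))     # normalize the rotation draw
        B = P @ q
        # sign-normalize each shock so the first row is non-negative
        B = B @ np.diag(np.where(B[0] >= 0, 1.0, -1.0))
        if checks(B):
            kept.append(B)
    return kept

# Reduced-form covariance plus one restriction: shock 1 moves both
# variables in the same direction on impact ("demand-like").
sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
same_sign = lambda B: B[1, 0] > 0    # B[0, 0] >= 0 holds by normalization
kept = sign_restricted_impacts(sigma, same_sign)
print(len(kept), "accepted draws out of 2000")
```

Every accepted B reproduces the same reduced-form covariance, so the restriction identifies a set of impact matrices rather than a point — which is precisely the source of the interpretation issues the review discusses.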
  20. By: Shuangzhe Liu (University of Canberra, Australia); Wolfgang Polasek (Institute for Advanced Studies (IHS), Austria; The Rimini Centre for Economic Analysis (RCEA), Italy); Richard Sellner (Institute for Advanced Studies (IHS), Austria)
    Abstract: Spatial autoregressive models come with a variety of estimators, and it is interesting and useful to compare the estimators by location and covariance properties. In this paper, we first study the local sensitivity behavior of the main least squares estimator by using matrix derivatives. We then calculate the Taylor approximation of the least squares estimator in the SAR model up to second order. We also compare the estimators of the spatial autoregression (SAR) model in terms of the covariance structure of the least squares estimators and make efficiency comparisons using Kantorovich inequalities. Finally, we demonstrate our approach with an example for GDP and employment in 239 European NUTS2 regions. We find quite good approximation behavior of the SAR estimator in the neighborhood of ρ = 0, i.e. for small spatial correlation.
    Keywords: Spatial autoregressive models, least-squares estimators, Taylor approximations, Kantorovich inequality
    Date: 2010–01
  21. By: Marek Jarociński (European Central Bank, DG-Research, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: The number of variables related to long-run economic growth is large compared with the number of countries. Bayesian model averaging is often used to impose parsimony in the cross-country growth regression. The underlying prior is that many of the considered variables need to be excluded from the model. This paper, instead, advocates priors that impose parsimony without excluding variables. The resulting models fit the data better and are more robust to revisions of income data. The positive relationship between measures of trade openness and growth is much stronger than found in the literature. JEL Classification: C20, C52, O40, O47.
    Keywords: Economic Growth, Bayesian Model Averaging, Adaptive Ridge Regression, Measurement Error.
    Date: 2010–08
  22. By: Coen N. Teulings (CPB, The Hague, and University of Amsterdam); Nick Zubanov (CPB, The Hague)
    Abstract: There is a lively debate on the persistence of the current banking crisis' impact on GDP. Impulse Response Functions (IRF) estimated by Cerra and Saxena (2008) suggest that the effects of earlier crises were long-lasting. We show that standard estimates of IRFs are highly sensitive to misspecification of the underlying data generation process. Direct estimation of IRFs by a methodology similar to Jorda's (2005) local projection method is robust to misspecifications of the data generation process but yields biased estimates when country fixed effects are added. We propose a simple method to deal with this bias, which we apply to panel data from 99 countries for the period 1974-2001. Our estimates suggest that an average banking crisis leads to an output loss of around 10 percent with little sign of recovery. GDP losses from banking crises are more severe for African countries and economies in transition.
    Keywords: banking crisis; impulse response; panel data
    JEL: E27 C53
    Date: 2010–04–13
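The local projection idea is easy to sketch: for each horizon h, regress the outcome at t+h on the impulse variable at t (plus lagged controls) and read the impulse response off the coefficient, with no single dynamic model imposed across horizons. The AR(1) example below, where the true response is rho**h, is an invented illustration of the method rather than the authors' panel setup:

```python
import numpy as np

def local_projection_irf(y, x, hmax, nlags=1):
    """Jorda-style local projections: for each horizon h, regress y_{t+h}
    on the impulse variable x_t plus `nlags` lags of y as controls; the
    coefficient on x_t is the impulse response at horizon h."""
    irf = []
    for h in range(hmax + 1):
        rows, targ = [], []
        for t in range(nlags, len(y) - h):
            rows.append([1.0, x[t]] + [y[t - l] for l in range(1, nlags + 1)])
            targ.append(y[t + h])
        coef, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targ), rcond=None)
        irf.append(coef[1])
    return np.array(irf)

# AR(1) data with an observed shock series: the true IRF is rho**h.
rng = np.random.default_rng(4)
rho, n = 0.8, 5000
eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + eps[t]
irf = local_projection_irf(y, eps, hmax=5)
print(irf)
```

Because each horizon gets its own regression, a misspecified short-run dynamic model cannot contaminate the long-horizon estimates — the robustness property the abstract emphasizes (the fixed-effects bias the authors correct arises only in the panel version, which this univariate sketch does not cover).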

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.