nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒02‒05
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. Testing for unit roots in the presence of a possible break in trend and non-stationary volatility By Giuseppe Cavaliere; David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor
  2. "Selection of Variables in Multivariate Regression Models for Large Dimensions" By Muni S. Srivastava; Tatsuya Kubokawa
  3. Asymptotic equivalence and sufficiency for volatility estimation under microstructure noise By Markus Reiß
  4. Automatic Selection for Non-linear Models By Jennifer L. Castle; David F. Hendry
  5. Euroization in Central, Eastern and Southeastern Europe – New Evidence On Its Extent and Some Evidence On Its Causes. By Jesús Crespo Cuaresma; Martin Feldkircher
  6. Model Selection when there are Multiple Breaks By Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry
  7. Catching Growth Determinants with the Adaptive Lasso By Ulrike Schneider; Martin Wagner
  8. Learning and filtering via simulation: smoothly jittered particle filters By Thomas Flury; Neil Shephard
  9. Functional Forms in Discrete/Continuous Choice Models with General Corner Solution. By Felipe Vásquez; Michael Hanemann
  11. Segmentation algorithm for non-stationary compound Poisson processes By Bence Toth; Fabrizio Lillo; J. Doyne Farmer
  12. Econometric Modelling of Changing Time Series By David F. Hendry; Grayham E. Mizon
  13. Forecasting with Equilibrium-correction Models during Structural Breaks By Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry
  14. Point Processes Modeling of Time Series Exhibiting Power-Law Statistics By B. Kaulakys; M. Alaburda; V. Gontis
  15. Understanding Interactions in Social Networks and Committees By Bhattacharjee, A.; Holly, S.
  16. Spurious Regressions of Stable AR(p) Processes with Structural Breaks By Ba Chu; Roman Kozhan
  17. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise By Ingmar Nolte; Valeri Voev
  18. Structural Interactions in Spatial Panels By Bhattacharjee, A.; Holly, S.

  1. By: Giuseppe Cavaliere; David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor
    Abstract: In this paper we analyse the impact of non-stationary volatility on the recently developed unit root tests which allow for a possible break in trend occurring at an unknown point in the sample, considered in Harris, Harvey, Leybourne and Taylor (2009) [HHLT]. HHLT's analysis hinges on a new break fraction estimator which, when a break in trend occurs, is consistent for the true break fraction at rate Op(T^-1). Unlike other available estimators, however, when there is no trend break HHLT's estimator converges to zero at rate Op(T^-1/2). In their analysis HHLT assume the shocks to follow a linear process driven by IID innovations. Our first contribution is to show that HHLT's break fraction estimator retains the same consistency properties as demonstrated by HHLT for the IID case when the innovations display non-stationary behaviour of a quite general form, including, for example, the case of a single break in the volatility of the innovations which may or may not occur at the same time as a break in trend. However, as we subsequently demonstrate, the limiting null distributions of unit root statistics based on this estimator are not pivotal in the presence of non-stationary volatility. Associated Monte Carlo evidence is presented to quantify the impact of a one-time change in volatility on both the asymptotic and finite sample behaviour of such tests. A solution to the identified inference problem is then provided by considering wild bootstrap-based implementations of the HHLT tests, using the trend break estimator from the original sample data. The proposed bootstrap method does not require the practitioner to specify a parametric model for volatility, and is shown to perform very well in practice across a range of models.
    Keywords: Unit root tests; quasi difference de-trending; trend break; non-stationary volatility; wild bootstrap
    Date: 2009–12
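    The wild bootstrap device at the heart of the proposed tests replicates a series' volatility pattern by flipping the signs of residuals at random. A minimal sketch of that resampling step follows (illustrative only: the function name and toy data are ours, and the paper's full procedure additionally involves quasi-difference de-trending and the trend break estimator):

```python
import numpy as np

def wild_bootstrap_samples(residuals, n_boot, seed=None):
    """Draw wild-bootstrap residual samples: each residual is multiplied
    by an independent Rademacher (+1/-1) sign, which preserves the
    (possibly non-stationary) volatility profile of the original series."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(n_boot, len(residuals)))
    return signs * residuals  # shape (n_boot, T)

# toy residuals with a one-time volatility break at mid-sample
rng = np.random.default_rng(0)
e = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(0.0, 3.0, 50)])
e_star = wild_bootstrap_samples(e, n_boot=999, seed=1)
```

    Because each bootstrap sample keeps the absolute value of every residual, any break in volatility is reproduced automatically, which is why no parametric volatility model is needed.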
  2. By: Muni S. Srivastava (Department of Statistics, University of Toronto); Tatsuya Kubokawa (Faculty of Economics, University of Tokyo)
    Abstract: The Akaike information criterion, AIC, and Mallows' Cp statistic have been proposed for selecting a smaller number of regressor variables in multivariate regression models with a fully unknown covariance matrix. Both criteria are, however, based on the implicit assumption that the sample size is substantially larger than the dimension of the covariance matrix. To obtain a stable estimator of the covariance matrix, it is required that the dimension of the covariance matrix be much smaller than the sample size. When the dimension is close to the sample size, it is necessary to use ridge-type estimators for the covariance matrix. In this paper, we use a ridge-type estimator of the covariance matrix and obtain the modified AIC and modified Cp statistic under the asymptotic theory in which both the sample size and the dimension go to infinity. It is numerically shown that these modified procedures perform very well in the sense of selecting the true model in large dimensional cases.
    Date: 2010–01
  3. By: Markus Reiß
    Abstract: The basic model for high-frequency data in finance is considered, where an efficient price process is observed under microstructure noise. It is shown that this nonparametric model is in Le Cam's sense asymptotically equivalent to a Gaussian shift experiment in terms of the square root of the volatility function $\sigma$. As an application, simple rate-optimal estimators of the volatility and efficient estimators of the integrated volatility are constructed.
    Date: 2010–01
  4. By: Jennifer L. Castle; David F. Hendry
    Abstract: Our strategy for automatic selection in potentially non-linear processes is: test for non-linearity in the unrestricted linear formulation; if that test rejects, specify a general model using polynomials, to be simplified to a minimal congruent representation; finally, select by encompassing tests of specific non-linear forms against the selected model. Non-linearity poses many problems: extreme observations leading to non-normal (fat-tailed) distributions; collinearity between non-linear functions; usually more variables than observations when approximating the non-linearity; and excess retention of irrelevant variables; but solutions are proposed. A returns-to-education empirical application demonstrates the feasibility of the non-linear automatic model selection algorithm Autometrics.
    Keywords: Econometric methodology, Model selection, Autometrics, Non-linearity, Outlier, Returns to education
    JEL: C51 C22 C87
    Date: 2010
  5. By: Jesús Crespo Cuaresma (Department of Economics, University of Innsbruck; Universitätstrasse 15, 6020 Innsbruck, Austria); Martin Feldkircher (Oesterreichische Nationalbank, Foreign Research Division, Otto-Wagner-Platz 3, POB 61, A-1011 Vienna)
    Abstract: In this paper we put forward a Bayesian Model Averaging method dealing with model uncertainty in the presence of potential spatial autocorrelation. The method uses spatial filtering in order to account for different types of spatial links. We contribute to existing methods that handle spatial dependence among observations by explicitly taking care of uncertainty stemming from the choice of a particular spatial structure. Our method is applied to estimate the conditional speed of income convergence across 255 NUTS-2 European regions for the period from 1995 to 2005. We show that the choice of a spatial weight matrix - and in particular the choice of a class thereof - can have an important effect on the estimates of the parameters attached to the model covariates. We also show that estimates of the speed of income convergence across European regions depend strongly on the form of the spatial patterns which are assumed to underlie the dataset. When we take into account this dimension of model uncertainty, the posterior distribution of the speed of convergence parameter has a large probability mass around a rate of convergence of 1%, approximately half of the value which is usually reported in the literature.
    Keywords: Dollarization, Model uncertainty, spatial filtering, determinants of economic growth, European regions
    JEL: C11 C15 C21 R11 O52
    Date: 2010–01–11
  6. By: Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry
    Abstract: We consider model selection when there is uncertainty over the choice of variables and the occurrence and timing of multiple location shifts. General-to-simple selection is extended using Autometrics by adding an impulse indicator for every observation to the set of candidate regressors (see Hendry, Johansen and Santos, 2008, and Johansen and Nielsen, 2009). We apply that approach to a fat-tailed distribution and processes with breaks: Monte Carlo experiments show its capability of detecting up to 20 shifts in 100 observations, while jointly selecting variables. An illustration to U.S. real interest rates compares impulse-indicator saturation with the procedure in Bai and Perron (1998).
    Keywords: Impulse-indicator saturation, Location shifts, Model selection, Autometrics
    JEL: C51 C22
    Date: 2010
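    Impulse-indicator saturation adds a dummy variable for every observation and lets the selection algorithm decide which to retain. A minimal split-half sketch of the idea follows (a deliberate simplification of the Autometrics implementation; the function name, t-ratio rule and toy data are ours):

```python
import numpy as np

def iis_outliers(y, X, t_crit=2.58):
    """Split-half impulse-indicator saturation sketch: saturate one half
    of the sample at a time with an impulse dummy per observation, keep
    dummies whose |t-ratio| exceeds t_crit, and return the union of the
    flagged observation indices."""
    T = len(y)
    flagged = []
    for idx in (np.arange(T // 2), np.arange(T // 2, T)):
        D = np.zeros((T, len(idx)))
        D[idx, np.arange(len(idx))] = 1.0   # one impulse dummy per obs
        Z = np.column_stack([X, D])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        s2 = resid @ resid / (T - Z.shape[1])
        se = np.sqrt(s2 * np.diag(np.linalg.inv(Z.T @ Z)))
        t = beta / se
        k = X.shape[1]  # dummy coefficients start after the regressors
        flagged += [int(i) for i, tv in zip(idx, t[k:]) if abs(tv) > t_crit]
    return sorted(flagged)

# toy example: constant-mean series with a single additive outlier
rng = np.random.default_rng(42)
y = rng.normal(0.0, 1.0, 100)
y[30] += 10.0
X = np.ones((100, 1))
out = iis_outliers(y, X)
```

    Saturating half the sample at a time keeps the regression identified even though there are as many candidate dummies as observations in total.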
  7. By: Ulrike Schneider; Martin Wagner
    Abstract: This paper uses the adaptive Lasso estimator to determine the variables important for economic growth. The adaptive Lasso estimator is a computationally very simple procedure that can perform at the same time model selection and consistent parameter estimation. The methodology is applied to three data sets, the data used in Sala-i-Martin et al. (2004), in Fernandez et al. (2001) and a data set for the regions in the European Union. The results for the former two data sets are similar in several respects to those found in the published papers, yet are obtained at a negligible fraction of computational cost. Furthermore, the results for the European regional data highlight the importance of human capital for economic growth.
    Keywords: adaptive Lasso, economic convergence, growth regressions, model selection
    JEL: C31 C52 O11 O18 O47
    Date: 2009–06
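    The computational simplicity the authors emphasise comes from the two-step structure of the adaptive Lasso: first-stage OLS coefficients supply data-driven penalty weights, which a column rescaling absorbs into an ordinary Lasso problem. A hedged illustration (the coordinate-descent solver and all tuning values are ours, not the paper's):

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Plain Lasso via cyclic coordinate descent for the objective
    (1/2n)||y - Xb||^2 + lam * sum(|b_j|)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    r = y - X @ beta                       # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]         # remove j's contribution
            beta[j] = soft_threshold(X[:, j] @ r, lam * n) / col_ss[j]
            r -= X[:, j] * beta[j]
    return beta

def adaptive_lasso(X, y, lam, gamma=1.0):
    """Two-step adaptive Lasso: first-stage OLS gives penalty weights
    1/|b_ols|^gamma, absorbed by rescaling the columns of X."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = np.abs(beta_ols) ** gamma + 1e-12  # larger first stage => lighter penalty
    beta_scaled = lasso_cd(X * w, y, lam)  # standard Lasso on rescaled columns
    return beta_scaled * w                 # map back to the original scale

# toy growth-regression-style example with a sparse true coefficient vector
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 8))
beta_true = np.array([3.0, 0, 0, 1.5, 0, 0, 2.0, 0])
y = X @ beta_true + rng.normal(0.0, 0.5, 200)
b = adaptive_lasso(X, y, lam=0.1)
```

    The rescaling trick works because substituting b_j = w_j * c_j turns the weighted l1 penalty into an unweighted one, so any off-the-shelf Lasso solver performs both selection and estimation in one pass.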
  8. By: Thomas Flury; Neil Shephard
    Abstract: A key ingredient of many particle filters is the use of the sampling importance resampling algorithm (SIR), which transforms a sample of weighted draws from a prior distribution into equally weighted draws from a posterior distribution. We give a novel analysis of the SIR algorithm and analyse the jittered generalisation of SIR, showing that existing implementations of jittering lead to markedly inferior behaviour relative to the base SIR algorithm. We show how jittering can be designed to improve the performance of the SIR algorithm. We illustrate its performance in practice in the context of three filtering problems.
    Keywords: Importance sampling, Particle filter, Random numbers, Sampling importance resampling, State space models
    JEL: C14 C32
    Date: 2009
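    The base SIR step the abstract refers to is easy to state in code. A minimal sketch follows (the plain Gaussian jitter option here is a common simplification for illustration, not the smooth jittering the paper develops):

```python
import numpy as np

def sir_resample(particles, log_weights, jitter_scale=0.0, seed=None):
    """Sampling importance resampling: turn weighted prior draws into
    (approximately) equally weighted posterior draws by multinomial
    resampling; optionally add small Gaussian jitter to the resampled
    particles to mitigate sample impoverishment."""
    rng = np.random.default_rng(seed)
    w = np.exp(log_weights - np.max(log_weights))  # stabilised weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    out = particles[idx].astype(float)
    if jitter_scale > 0.0:
        out = out + rng.normal(0.0, jitter_scale, size=out.shape)
    return out

# degenerate example: almost all posterior weight on the second particle
particles = np.array([0.0, 1.0, 2.0])
log_w = np.array([-50.0, 0.0, -50.0])
draws = sir_resample(particles, log_w, seed=3)
```

    The degenerate example shows the problem jittering targets: after resampling, nearly every draw is a copy of the single high-weight particle.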
  9. By: Felipe Vásquez (Departamento de Economía, Universidad de Concepción.); Michael Hanemann (Department of Agricultural and Resource Economics,University of California, Berkeley)
    Abstract: In this paper we present a new utility model that serves as the basis for modeling discrete/continuous consumer choices with a general corner solution. The new model involves a more flexible representation of preferences than has been used in the previous literature and, unlike most of this literature, it is not additively separable. This functional form can handle richer substitution patterns, such as complementarity as well as substitution among goods. We focus in part on the Quadratic Box-Cox utility function and examine its properties from both theoretical and empirical perspectives. We identify the significance of the various parameters of the utility function, and demonstrate an estimation strategy that can be applied to demand systems involving both small and large numbers of commodities.
    Date: 2009
  10. By: Antonios Antypas (Department of Banking and Financial Management, University of Piraeus); Phoebe Koundouri (Dept. of International and European Economic Studies, Athens University of Economics and Business); Nikolaos Kourogenis (Department of Banking and Financial Management, University of Piraeus.)
    Abstract: This paper aims at reconciling two apparently contradictory empirical regularities of financial returns, namely the fact that the empirical distribution of returns tends to normality as the frequency of observation decreases (aggregational Gaussianity), combined with the fact that the conditional variance of high frequency returns seems to have a unit root, in which case the unconditional variance is infinite. We show that aggregational Gaussianity and infinite variance can coexist, provided that all the moments of the unconditional distribution whose order is less than two exist. The latter characterises the case of Integrated GARCH (IGARCH) processes. Finally, we discuss testing for aggregational Gaussianity under barely infinite variance.
    Keywords: aggregational Gaussianity, infinite variance, IGARCH, crop prices
    JEL: C10 G12 Q14
    Date: 2010–01–23
  11. By: Bence Toth; Fabrizio Lillo; J. Doyne Farmer
    Abstract: We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of the time series. The process is composed of consecutive patches of variable length, each patch being described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al., Phys. Rev. Lett. 87, 168105 (2001). We show that the new algorithm outperforms the original one for regime switching compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.
    Date: 2010–01
  12. By: David F. Hendry; Grayham E. Mizon
    Abstract: We model expenditure on food in the USA, using an extended time series. Even when a theory is essentially ‘correct’, it can manifest serious mis-specification if just fitted to data, ignoring its observed characteristics and major external events such as wars, recessions and policy changes. When the same theory is embedded in a general framework embracing dynamics and structural breaks, it performs well even over an extended data period, as shown using Autometrics with impulse-indicator saturation. Although this particular illustration involves a simple theory, the point made is generic, and applies no matter how sophisticated the theory.
    Keywords: Econometric modelling, Food expenditure, Structural breaks, Impulse-indicator saturation, Autometrics
    JEL: C51 C22
    Date: 2010
  13. By: Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry
    Abstract: When location shifts occur, cointegration-based equilibrium-correction models (EqCMs) face forecasting problems. We consider alleviating such forecast failure by updating, intercept corrections, differencing, and estimating the future progress of an ‘internal’ break. Updating leads to a loss of cointegration when an EqCM suffers an equilibrium-mean shift, but helps when collinearities are changed by an ‘external’ break with the EqCM staying constant. Both mechanistic corrections help compared to retaining a pre-break estimated model, but an estimated model of the break process could outperform. We apply the approaches to EqCMs for UK M1, compared with updating a learning function as the break evolves.
    Keywords: Cointegration, Equilibrium-correction, Forecasting, Location shifts, Collinearity, M1
    JEL: C1 C53
    Date: 2010
  14. By: B. Kaulakys; M. Alaburda; V. Gontis
    Abstract: We consider stochastic point processes generating time series exhibiting power laws of spectrum and distribution density (Phys. Rev. E 71, 051105 (2005)) and apply them for modeling the trading activity in the financial markets and for the frequencies of word occurrences in the language.
    Date: 2010–01
  15. By: Bhattacharjee, A.; Holly, S.
    Abstract: While much of the literature on cross section dependence has focused mainly on estimation of the regression coefficients in the underlying model, estimation of and inference on the magnitude and strength of spill-overs and interactions have been largely ignored. At the same time, such inferences are important in many applications, not least because the interactions have structural interpretations that help explain their strength. In this paper we propose GMM methods designed to uncover underlying (hidden) interactions in social networks and committees. Special attention is paid to the interval censored regression model. Our methods are applied to a study of committee decision making within the Bank of England’s monetary policy committee.
    Keywords: Committee decision making; Social networks; Cross section and spatial interaction; Generalised method of moments; Censored regression model; Expectation-Maximisation Algorithm; Monetary policy; Interest rates
    JEL: D71 D85 E43 E52 C31 C34
    Date: 2010–01–22
  16. By: Ba Chu; Roman Kozhan
    Date: 2009
  17. By: Ingmar Nolte; Valeri Voev
    Date: 2009
  18. By: Bhattacharjee, A.; Holly, S.
    Abstract: Traditionally, research has been devoted almost exclusively to estimation of underlying structural models without adequate attention to the drivers of diffusion and interaction across cross section and spatial units. We review some new methodologies in this emerging area and demonstrate their use in measurement and inferences on cross section and spatial interactions. Limitations and potential enhancements of the existing methods are discussed, and several directions for new research are highlighted.
    Keywords: Cross Sectional and Spatial Dependence; Spatial Weights Matrix; Interactions and Diffusion; Monetary Policy Committee; Generalised Method of Moments
    JEL: E42 E43 E50 E58
    Date: 2010–01–22

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.