nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒05‒17
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. Exact Maximum Likelihood estimation for the BL-GARCH model under elliptical distributed innovations. By Abdou Kâ Diongue; Dominique Guegan; Rodney C. Wolff
  2. A Model for Multivariate Non-negative Valued Processes in Financial Econometrics By Fabrizio Cipollini; Robert F. Engle; Giampiero M. Gallo
  3. A non-parametric method to nowcast the Euro Area IPI. By Laurent Ferrara; Thomas Raffinot
  4. Comparison of Volatility Measures: a Risk Management Perspective By Christian T. Brownlees; Giampiero Gallo
  5. Non-stationarity and meta-distribution. By Dominique Guegan
  6. Inverse Probability Tilting and Missing Data Problems By Daniel Egel; Bryan S. Graham; Cristine Campos de Xavier Pinto
  7. A Nonlinear Unit Root Test in the Presence of an Unknown Break By Stephan Popp
  8. Wavelets unit root test vs DF test: A further investigation based on Monte Carlo experiments. By Ibrahim Ahamada; Philippe Jolivaldt
  9. Volatility Forecasting Using Explanatory Variables and Focused Selection Criteria By Christian T. Brownlees; Giampiero Gallo
  10. Estimating Fundamental Cross-Section Dispersion from Fixed Event Forecasts By Jonas Dovern; Ulrich Fritsche
  11. A new unit root test against ESTAR based on a class of modified statistics By Kruse, Robinson
  12. Business surveys modelling with seasonal-cyclical long memory models. By Laurent Ferrara; Dominique Guegan
  13. Identification of Treatment Effects Using Control Functions in Models with Continuous, Endogenous Treatment and Heterogeneous Effects By Jean-Pierre Florens; James J. Heckman; Costas Meghir; Edward J. Vytlacil
  14. Dartboard Tests for the Location Quotient By Paulo Guimarães; Octávio Figueiredo; Douglas Woodward
  15. "Statistical Matching Using Propensity Scores Theory and Application to the Levy Institute Measure of Economic Wellbeing" By Hyunsub Kum; Thomas Masterson

  1. By: Abdou Kâ Diongue (Université Gaston Berger - Sénégal); Dominique Guegan (Centre d'Economie de la Sorbonne et Paris School of Economics); Rodney C. Wolff (School of Mathematical Sciences, QUT - Brisbane)
    Abstract: In this paper, we discuss the class of Bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; thus we examine the BL-GARCH model in a general setting under some non-Normal distributions. We investigate some probabilistic properties of this model and we propose and implement a maximum likelihood estimation (MLE) methodology. To evaluate the small-sample performance of this method for the various models, a Monte Carlo study is conducted. Finally, within-sample estimation properties are studied using S&P 500 daily returns, where the features of interest manifest as volatility clustering and leverage effects.
    Keywords: BL-GARCH process, elliptical distribution, leverage effects, maximum likelihood, Monte Carlo method, volatility clustering.
    Date: 2008–04
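For readers who want to experiment, the estimation idea can be sketched in a few lines; the following is a minimal Gaussian-innovation BL-GARCH(1,1) fit (the paper treats general elliptical innovations), with invented parameter values and a simulated series:

```python
import numpy as np
from scipy.optimize import minimize

def blgarch_nll(params, eps):
    """Negative Gaussian log-likelihood of a BL-GARCH(1,1):
    h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1} + gamma*sqrt(h_{t-1})*eps_{t-1},
    where the gamma term captures the leverage effect."""
    omega, alpha, beta, gamma = params
    h = np.empty_like(eps)
    h[0] = eps.var()
    for t in range(1, len(eps)):
        h[t] = (omega + alpha * eps[t-1]**2 + beta * h[t-1]
                + gamma * np.sqrt(h[t-1]) * eps[t-1])
        if h[t] <= 0:                      # penalize invalid variance paths
            return 1e10
    return 0.5 * np.sum(np.log(2 * np.pi * h) + eps**2 / h)

# Simulate from the model (illustrative parameters) and fit by MLE
rng = np.random.default_rng(0)
omega, alpha, beta, gamma, T = 0.1, 0.05, 0.85, -0.1, 1000
eps, h = np.empty(T), np.empty(T)
h[0] = omega / (1 - alpha - beta)
eps[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = (omega + alpha * eps[t-1]**2 + beta * h[t-1]
            + gamma * np.sqrt(h[t-1]) * eps[t-1])
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()

x0 = np.array([0.05, 0.05, 0.80, 0.0])    # starting values
res = minimize(blgarch_nll, x0, args=(eps,), method="Nelder-Mead",
               options={"maxiter": 2000})
```

The chosen parameters satisfy the positivity condition gamma^2 <= 4*alpha*beta, so the simulated variance path stays positive.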
  2. By: Fabrizio Cipollini (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti"); Robert F. Engle (Department of Finance, Stern School of Business, New York University); Giampiero M. Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti")
    Abstract: The Multiplicative Error Model introduced by Engle (2002) for non-negative valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support. In this paper we propose a multivariate extension of such a model, taking into consideration the possibility that the components of the vector innovation process may be contemporaneously correlated. The estimation procedure is hindered by the lack of probability density functions for multivariate non-negative valued random variables. We suggest the use of copula functions to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes. We illustrate the feasibility of the procedure and the gains over the equation-by-equation approach using a four-variable fully interdependent model with different volatility measures.
    Keywords: Volatility, Copula functions, Forecasting, GARCH, MEM.
    JEL: C22 C51 C52 C53
    Date: 2007–12
  3. By: Laurent Ferrara (Banque de France et Centre d'Economie de la Sorbonne); Thomas Raffinot (CPR Asset Management)
    Abstract: Non-parametric methods have empirically proved to be of great interest in the statistical literature for forecasting stationary time series, but very few applications have been proposed in the econometrics literature. In this paper, our aim is to test whether non-parametric statistical procedures based on a kernel method can improve on classical linear models in nowcasting the Euro area manufacturing industrial production index (IPI), using business surveys released by the European Commission. Moreover, we consider a methodology based on bootstrap replications to estimate confidence intervals for the nowcasts.
    Keywords: Non-parametric, kernel, nowcasting, bootstrap, Euro area IPI.
    JEL: C22 C51 E66
    Date: 2008–04
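As an illustration of this kind of procedure, here is a minimal Nadaraya-Watson kernel nowcast with a pairs-bootstrap confidence interval; the data, bandwidth, and regression link below are invented for the example and are not the paper's specification:

```python
import numpy as np

def nw_nowcast(x_train, y_train, x_new, h):
    """Nadaraya-Watson estimate of E[y | x = x_new] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_new - x_train) / h) ** 2)
    return np.sum(w * y_train) / np.sum(w)

def bootstrap_ci(x_train, y_train, x_new, h, B=500, level=0.90, seed=0):
    """Pairs-bootstrap confidence interval for the kernel nowcast."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    boot = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)        # resample (x, y) pairs
        boot[b] = nw_nowcast(x_train[idx], y_train[idx], x_new, h)
    return np.quantile(boot, [(1 - level) / 2, 1 - (1 - level) / 2])

# Toy example: survey balance (x) vs IPI growth (y), nonlinear link
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 200)
y = 0.5 * x + 0.2 * x**2 + rng.normal(0, 0.3, 200)
point = nw_nowcast(x, y, 1.0, h=0.3)       # nowcast at survey reading x = 1
lo, hi = bootstrap_ci(x, y, 1.0, h=0.3)
```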
  4. By: Christian T. Brownlees (Università degli Studi di Firenze, Dipartimento di Statistica); Giampiero Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti")
    Abstract: In this paper we address the issue of forecasting Value–at–Risk (VaR) using different volatility measures: realized volatility, bipower realized volatility, two scales realized volatility, realized kernel, as well as the daily range. We propose a dynamic model with a flexible trend specification, coupled with a penalized maximum likelihood estimation strategy: the P-Spline Multiplicative Error Model. Exploiting ultra-high-frequency data (UHFD) volatility measures, VaR predictive ability is considerably improved relative to a baseline GARCH, but not so relative to the range; there are relevant gains from modeling volatility trends and from using realized kernels that are robust to dependent microstructure noise.
    Keywords: Volatility Measures, VaR Forecasting, GARCH, MEM, P-Spline.
    JEL: C22 C51 C52 C53
    Date: 2008–02
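Simple versions of three of the volatility measures being compared can be written down directly (the realized kernel and two-scales estimators are omitted; the data and parameters below are invented):

```python
import numpy as np

def realized_variance(r):
    """Realized variance: sum of squared intraday returns."""
    return np.sum(r ** 2)

def bipower_variation(r):
    """Bipower variation, (pi/2) * sum |r_t||r_{t-1}|, robust to jumps."""
    a = np.abs(r)
    return (np.pi / 2) * np.sum(a[1:] * a[:-1])

def parkinson_variance(high, low):
    """Parkinson range-based daily variance estimate from high/low prices."""
    return np.log(high / low) ** 2 / (4 * np.log(2))

# Toy day: 78 five-minute returns with 1% daily volatility (variance 1e-4)
rng = np.random.default_rng(3)
r = rng.normal(0, 0.01 / np.sqrt(78), 78)
rv, bv = realized_variance(r), bipower_variation(r)
pk = parkinson_variance(high=101.2, low=99.5)
```

In the absence of jumps, realized variance and bipower variation estimate the same integrated variance, so they should be close; a large gap between them signals a jump.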
  5. By: Dominique Guegan (Centre d'Economie de la Sorbonne et Paris School of Economics)
    Abstract: In this paper we deal with the problem of non-stationarity encountered in many data sets, mainly in the financial and economic domains, arising from the presence of multiple seasonalities, jumps, volatility, distortion, aggregation, etc. The existence of non-stationarity induces spurious behavior in estimated statistics as soon as we work with finite samples. We illustrate this fact using Markov switching processes, Stopbreak models and SETAR processes. Thus, working within a theoretical framework based on the existence of an invariant measure for the whole sample is not satisfactory. Empirically, alternative strategies have been developed that introduce dynamics into the modelling, mainly through the parameters, using rolling windows. A specific framework has not yet been proposed to study such non-invariant data sets, and the question is difficult. Here we open a discussion of this topic, proposing the concept of a meta-distribution, which can be used to improve risk management strategies or forecasts.
    Keywords: Non-stationarity, switching processes, SETAR processes, jumps, forecast, risk management, copula, probability distribution function.
    JEL: C32 C51 G12
    Date: 2008–03
  6. By: Daniel Egel; Bryan S. Graham; Cristine Campos de Xavier Pinto
    Abstract: This paper outlines a new minimum empirical discrepancy (MD) estimator for missing data, sample combination and related problems: inverse probability tilting (IPT). Covered examples include estimation of the average treatment effect (ATE), the average treatment effect on the treated (ATT) and the two sample instrumental variables (TSIV) model. The proposed estimator attains the semiparametric efficiency bound under two auxiliary parametric restrictions (local efficiency), but is consistent so long as one or the other holds (double robustness). A novel feature of IPT is its 'exact balancing' property: after reweighting, sample moments of always-observed covariates in the complete-case subsample equal their corresponding (unweighted) full sample means. We also show how prior restrictions on the marginal distribution of always-observed covariates can be efficiently incorporated into our procedure. We use our methods, and compare them to several alternatives, in an evaluation of the National Supported Work (NSW) demonstration using 'non-experimental' comparison groups drawn from the Panel Study of Income Dynamics (PSID) and the Current Population Survey (CPS) as in LaLonde (1986) and Dehejia and Wahba (1999). We explore the small sample properties of IPT in a Monte Carlo study. IPT performs well, relative to several alternative estimators, across a variety of data generating processes.
    JEL: C14 C21 C23
    Date: 2008–05
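The "exact balancing" property can be illustrated with a simple exponential-tilting sketch: reweight the complete cases so their weighted covariate means exactly match the full-sample means. This is not the paper's IPT estimator (which is a minimum empirical discrepancy procedure), and the missingness mechanism below is invented:

```python
import numpy as np
from scipy.optimize import minimize

def tilt_weights(X_complete, target_mean):
    """Exponential-tilting weights on the complete cases chosen so that
    their weighted covariate means equal target_mean exactly."""
    Xc = X_complete - target_mean            # center at the balancing target
    def dual(lam):                           # convex dual of the tilting problem
        return np.log(np.mean(np.exp(Xc @ lam)))
    lam = minimize(dual, np.zeros(Xc.shape[1]), method="BFGS").x
    w = np.exp(Xc @ lam)
    return w / w.sum()

# Full sample with covariate-dependent missingness (invented mechanism)
rng = np.random.default_rng(4)
X = rng.normal(0, 1, (500, 2))
p_obs = 1 / (1 + np.exp(-(0.5 + X[:, 0])))   # observation probability
obs = rng.random(500) < p_obs
w = tilt_weights(X[obs], X.mean(axis=0))
balanced_mean = w @ X[obs]                    # ~ full-sample mean by construction
```

The gradient of the dual objective is precisely the weighted mean of the centered covariates, so driving it to zero enforces the balance condition.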
  7. By: Stephan Popp
    Abstract: The Perron test is the most commonly applied procedure for testing for a unit root in the presence of a structural break of unknown timing in the trend function. Deriving the Perron-type test regression from an unobserved components model, it is shown that the test regression is in fact nonlinear in its coefficients. Taking account of this nonlinearity leads to a test with properties otherwise associated only with Schmidt-Phillips LM-type unit root tests.
    Keywords: Unit root tests, nonlinear regression, structural breaks, innovational outliers
    JEL: C12 C22
    Date: 2008–05
  8. By: Ibrahim Ahamada (Centre d'Economie de la Sorbonne et Paris School of Economics); Philippe Jolivaldt (Centre d'Economie de la Sorbonne et Paris School of Economics)
    Abstract: A unit root test based on wavelet theory has recently been proposed (Gençay and Fan, 2007). While the new test is supposed to be robust to the initial value, we bring out, by contrast, significant effects of the initial value on both size and power. We also find that the wavelet unit root test and the ADF test are equally efficient once the data are corrected for the initial value. Our approach is based on Monte Carlo experiments.
    Keywords: Unit root tests, wavelets, Monte Carlo experiments, size-power curve.
    JEL: C12 C15 C16 C22
    Date: 2008–03
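The flavour of such a Monte Carlo size experiment can be sketched with a plain (no-constant) Dickey-Fuller test; the wavelet-based statistic of Gençay and Fan is not reproduced here, and the initial value and 5% critical value (-1.95 for the no-constant case) are illustrative:

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in Delta y_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

def mc_size(T=100, reps=2000, y0=5.0, crit=-1.95, demean_init=False, seed=0):
    """Rejection frequency of the DF test under a pure random-walk null,
    optionally removing the initial value first."""
    rng = np.random.default_rng(seed)
    rej = 0
    for _ in range(reps):
        y = y0 + np.cumsum(rng.standard_normal(T))
        if demean_init:
            y = y - y[0]                   # correct for the initial value
        if df_tstat(y) < crit:
            rej += 1
    return rej / reps

size_raw = mc_size(demean_init=False)      # size distorted by y0
size_corr = mc_size(demean_init=True)      # close to the nominal 5%
```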
  9. By: Christian T. Brownlees (Università degli Studi di Firenze, Dipartimento di Statistica); Giampiero Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti")
    Abstract: This paper assesses the performance of volatility forecasting using focused selection and combination strategies to include relevant explanatory variables in the forecasting model. The focused selection/combination strategies consist of picking the model that minimizes the estimated risk (e.g. MSE) of a given smooth function of the parameters of interest to the forecaster. The proposed focused methods are compared with other strategies, including the well-established AIC and BIC. The methodology is applied to a daily recursive 1-step-ahead Value-at-Risk (VaR) forecasting exercise for 4 widely traded New York Stock Exchange stocks. Results show that VaR forecasts can be significantly improved by using focused forecast strategies for the selection of relevant exogenous information. The set of explanatory variables that helps improve prediction is stock dependent. Traditional information criteria do not appear to be helpful in suggesting the inclusion of explanatory variables that actually improve prediction significantly. In line with recent theoretical findings, the predictive performance of the BIC appears to be modest.
    Keywords: Forecasting, Shrinkage Estimation, FIC, MEM, GARCH, ACD
    JEL: C22 C51 C53
    Date: 2007–05
  10. By: Jonas Dovern; Ulrich Fritsche
    Abstract: A couple of recent papers have shifted the focus towards disagreement among professional forecasters. When dealing with survey data that is sampled at a higher-than-annual frequency and includes only fixed event forecasts, e.g. expectations of average annual growth rates, measures of disagreement across forecasters are naturally distorted by a component that mainly reflects the time-varying forecast horizon. We use data from the Survey of Professional Forecasters, which reports both fixed event and fixed horizon forecasts, to evaluate different methods for extracting the "fundamental" component of disagreement. Based on our results, we suggest two methods for estimating dispersion measures from panels of fixed event forecasts: a moving average transformation of the underlying forecasts, and estimation with constant forecast-horizon effects. Both models are easy to handle and deliver equally well-performing results, which show a surprisingly high correlation (up to 0.94) with the true dispersion.
    Keywords: Survey data, dispersion, disagreement, fixed event forecasts
    JEL: C22 C32 E37
    Date: 2008
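A minimal version of the moving-average transformation might look like this; the weighting scheme and the toy forecaster panel are illustrative, not the paper's exact specification:

```python
import numpy as np

def fixed_horizon_proxy(f_current, f_next, month):
    """Approximate a fixed 12-month-ahead forecast by weighting the two
    fixed-event (calendar-year) forecasts by the number of months each
    year still contributes to the forecast horizon."""
    w = (13 - month) / 12.0        # remaining months of the current year
    return w * np.asarray(f_current) + (1 - w) * np.asarray(f_next)

def dispersion(f_current, f_next, month):
    """Cross-forecaster standard deviation of the transformed forecasts."""
    return np.std(fixed_horizon_proxy(f_current, f_next, month), ddof=1)

# Five forecasters surveyed in August (month 8): current- and next-year forecasts
f_cur = [1.8, 2.0, 2.1, 1.9, 2.2]
f_nxt = [1.5, 2.5, 2.0, 1.7, 2.3]
d_aug = dispersion(f_cur, f_nxt, month=8)
```

In January the weight on the current-year forecast is one, so the transformation reduces to the raw fixed-event forecast, as it should.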
  11. By: Kruse, Robinson
    Abstract: This paper proposes a new unit root test against a non-linear exponential smooth transition autoregressive (ESTAR) model. The new test is built upon the non-standard testing approach of Abadir and Distaso (2007), who introduce a class of modified statistics for testing joint hypotheses when one of the alternatives is one-sided. In a Monte Carlo study, the popular Dickey-Fuller-type test proposed by Kapetanios et al. (2003) is compared with the new test. The results suggest that the new test is generally superior in terms of power. An application to a real effective exchange rate underlines its usefulness.
    Keywords: Unit root test, Nonlinearities, Smooth transition
    JEL: C12 C22 F31
    Date: 2008–04
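The Dickey-Fuller-type ESTAR test of Kapetanios et al. against which the new statistic is compared can be sketched as follows (the modified Abadir-Distaso statistics themselves are not reproduced; the data are simulated):

```python
import numpy as np

def kss_tstat(y):
    """Kapetanios-Shin-Snell (2003) statistic: t-ratio on delta in
    Delta y_t = delta * y_{t-1}^3 + e_t, computed on demeaned data.
    Under the unit root null delta = 0; under the ESTAR alternative
    delta < 0, so the test rejects for large negative values."""
    y = y - y.mean()
    dy, x = np.diff(y), y[:-1] ** 3
    delta = (x @ dy) / (x @ x)
    resid = dy - delta * x
    s2 = (resid @ resid) / (len(dy) - 1)
    return delta / np.sqrt(s2 / (x @ x))

rng = np.random.default_rng(5)
stat_rw = kss_tstat(np.cumsum(rng.standard_normal(500)))   # unit root null
ar = np.empty(500)
ar[0] = 0.0
for t in range(1, 500):
    ar[t] = 0.3 * ar[t-1] + rng.standard_normal()          # stationary series
stat_ar = kss_tstat(ar)
```

For the demeaned case the 5% critical value tabulated by Kapetanios et al. is about -2.93; the stationary series should reject decisively.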
  12. By: Laurent Ferrara (Banque de France et Centre d'Economie de la Sorbonne); Dominique Guegan (Centre d'Economie de la Sorbonne et Paris School of Economics)
    Abstract: Business surveys are an important element in the analysis of the short-term economic situation because of the timeliness and nature of the information they convey. In particular, surveys often enter econometric models to provide an early assessment of the current state of the economy, which is of great interest to policy-makers. In this paper, we focus on non-seasonally adjusted business surveys released by the European Commission. We introduce an innovative way of modelling those series, taking the persistence of the seasonal roots into account through seasonal-cyclical long memory models. We empirically show that such models produce more accurate forecasts than classical seasonal linear models.
    Keywords: Euro area, nowcasting, business surveys, seasonal, long memory.
    JEL: C22 C53 E32
    Date: 2008–05
  13. By: Jean-Pierre Florens; James J. Heckman; Costas Meghir; Edward J. Vytlacil
    Abstract: We use the control function approach to identify the average treatment effect and the effect of treatment on the treated in models with a continuous endogenous regressor whose impact is heterogeneous. We assume a stochastic polynomial restriction on the form of the heterogeneity but, unlike alternative nonparametric control function approaches, our approach does not require large support assumptions.
    JEL: C21 C31
    Date: 2008–05
  14. By: Paulo Guimarães (University of South Carolina and CEMPRE); Octávio Figueiredo (Universidade do Porto and CEMPRE); Douglas Woodward (University of South Carolina)
    Abstract: In this paper we reinterpret the location quotient, the commonly employed measure of regional industrial agglomeration, as an estimator derived from Ellison and Glaeser’s (1997) dartboard framework. This approach provides a theoretical foundation on which to build statistical tests for the measure. With a simple application, we show that these tests provide valuable information about the accuracy of the location quotient. The tests are relatively easy to implement using regional employment and establishment data.
    Keywords: Dartboard Location Model, Location Quotient, Statistical Tests
    JEL: R10 R12 C12
    Date: 2008–04
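The location quotient, together with a simulation version of a dartboard-style significance test, can be sketched as follows. The binomial null below (each of the region's jobs lands in industry i with the industry's national share as probability) is an illustrative simplification; the paper's tests are built on establishment-level data in the Ellison-Glaeser framework, and all numbers are invented:

```python
import numpy as np

def location_quotient(e_ri, e_r, E_i, E):
    """LQ = (e_ri/e_r) / (E_i/E): the region's employment share in
    industry i relative to the industry's national share."""
    return (e_ri / e_r) / (E_i / E)

def dartboard_pvalue(e_ri, e_r, E_i, E, reps=5000, seed=0):
    """Simulation p-value for LQ > 1 under the dartboard null:
    regional industry employment is Binomial(e_r, E_i/E)."""
    rng = np.random.default_rng(seed)
    sim = rng.binomial(e_r, E_i / E, reps)
    return np.mean(sim >= e_ri)

# Region: 10,000 jobs, 500 in industry i; nation: 1,000,000 jobs, 30,000 in i
lq = location_quotient(500, 10_000, 30_000, 1_000_000)
p = dartboard_pvalue(500, 10_000, 30_000, 1_000_000)
```

Here the LQ is 5/3, and since the null expects only 300 of the region's jobs in the industry, the simulated p-value indicates significant localization.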
  15. By: Hyunsub Kum; Thomas Masterson
    Abstract: This paper summarizes the background, type, logic, and working procedure of the statistical matching used in the Levy Institute Measure of Economic Wellbeing (LIMEW) project to combine the various data sets used to produce the synthetic data set with which the LIMEW is constructed. We use the match between the 2001 Survey of Consumer Finances and the Annual Demographic Survey of the Current Population Survey to demonstrate the procedure and results of the matching. Challenges facing the use of this technique, such as the distribution of weights, are discussed in the conclusion.
    Date: 2008–05
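A stripped-down version of propensity-score statistical matching (a logistic file-membership score on the pooled records, then nearest-neighbour transfer of donor records) might look like this; the files and covariates are invented, and the LIMEW matching is considerably more elaborate:

```python
import numpy as np
from scipy.optimize import minimize

def propensity_index(X, d):
    """Logistic regression by maximum likelihood: returns the fitted
    index X1 @ b, where P(d=1 | X) = 1 / (1 + exp(-X1 @ b))."""
    X1 = np.column_stack([np.ones(len(X)), X])
    def nll(b):
        z = X1 @ b
        return np.sum(np.logaddexp(0.0, z) - d * z)   # stable log(1 + e^z)
    b = minimize(nll, np.zeros(X1.shape[1]), method="BFGS").x
    return X1 @ b

def nearest_neighbor_match(score_recipient, score_donor):
    """Index of the closest donor record for each recipient record."""
    return np.array([np.abs(score_donor - s).argmin() for s in score_recipient])

# Recipient file (CPS-like) and donor file (SCF-like) with shared covariates
rng = np.random.default_rng(6)
Xr = rng.normal(0.0, 1.0, (100, 2))
Xd = rng.normal(0.3, 1.0, (300, 2))
X = np.vstack([Xr, Xd])
d = np.r_[np.ones(100), np.zeros(300)]
score = propensity_index(X, d)
match = nearest_neighbor_match(score[:100], score[100:])  # donor row per recipient
```

Matching on the one-dimensional score rather than on all covariates is what makes the record transfer tractable when the shared covariate set is large.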

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.