nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒09‒11
seventeen papers chosen by
Sune Karlsson
Orebro University

  1. Simulation-based finite-sample linearity test against smooth transition models By González, Andrés; Teräsvirta, Timo
  2. Weighted Average Power Similar Tests for Structural Change for the Gaussian Linear Regression Model By Giovanni Forchini
  4. Sequential Procedure for Testing Unit Roots in the Presence of Structural Break in Time Series Data By Shrestha, Min B.; Chowdhury, Khorshed
  5. ANALYTIC AND BOOTSTRAP APPROXIMATIONS OF PREDICTION ERRORS UNDER A MULTIVARIATE FAY-HERRIOT MODEL By Wenceslao Gonzalez-Manteiga; Maria J. Lombardia; Isabel Molina; Domingo Morales; Laureano Santamaria
  6. Robust Estimation of Multiple Regression Model with asymmetric innovations and Its Applicability on Asset Pricing Model By Wing-Keung Wong; Guorui Bian
  7. Panel Smooth Transition Regression Models By González, Andrés; Teräsvirta, Timo; van Dijk, Dick
  8. Panel Data Econometrics: Modelling and Estimation By Hübler, Olaf
  9. Unit Roots and Cointegration in Panels By Jörg Breitung; M. Hashem Pesaran
  10. Standard errors of marginal effects in the heteroskedastic probit model By Cornelißen, Thomas
  12. SYNCHRONIZATION OF CYCLES By Don Harding; Adrian Pagan
  13. A necessary and sufficient condition for the strict stationarity of a family of GARCH processes By Meitz, Mika
  14. Do Macro Variables, Asset Markets or Surveys Forecast Inflation Better? By Andrew Ang; Geert Bekaert; Min Wei
  15. Bootstrapping Hedonic Price Indices: Experience From Used Cars Data By Michael Beer
  16. Inflation Dynamics and the New Keynesian Phillips Curve: an Identification Robust Econometric Analysis By Jean-Marie Dufour; Lynda Khalaf; Maral Kichian

  1. By: González, Andrés (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (Dept. of Economic Statistics, Stockholm School of Economics)
    Abstract: In this paper we use Monte Carlo testing techniques for testing linearity against smooth transition models. The Monte Carlo approach allows us to introduce a new test that differs from the tests existing in the literature in two respects. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Second, the test is not based on an auxiliary regression obtained by replacing the model under the alternative with approximations based on a Taylor expansion. We also apply Monte Carlo testing methods to size-correct the test proposed by Luukkonen, Saikkonen and Teräsvirta (1988). Simulated annealing is used in computing values of the test statistics. The results show that the power loss implied by the auxiliary-regression-based test is nonexistent compared to a supremum-based test but is more substantial when compared to the other three tests under consideration.
    Keywords: Exact test; Monte Carlo test; Sequential Monte Carlo test; Nonlinear modelling; Panel smooth transition regression
    JEL: C12 C15 C52
    Date: 2005–08–17
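    The exact Monte Carlo testing idea described in the abstract can be sketched generically. The sketch below is an assumption-laden illustration, not the paper's STR test: it only requires a statistic that can be simulated under the null (the `simulate_null_stat` callable and the chi-squared example are hypothetical stand-ins).

```python
import numpy as np

def monte_carlo_pvalue(stat_obs, simulate_null_stat, n_rep=99, rng=None):
    # Monte Carlo (randomization) p-value: with n_rep independent null
    # draws, rejecting when p <= alpha yields a test whose size is at most
    # alpha -- exact when alpha*(n_rep+1) is an integer and the null
    # distribution of the statistic is continuous and nuisance-free.
    rng = np.random.default_rng(rng)
    sims = np.array([simulate_null_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# Hypothetical illustration: a statistic that is chi-squared(2) under the null.
p = monte_carlo_pvalue(5.99, lambda r: np.sum(r.standard_normal(2) ** 2),
                       n_rep=999, rng=0)
```

    Because the p-value is computed from draws of the statistic itself, no Taylor-expansion auxiliary regression is needed; this is what makes the test exact rather than asymptotic.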
  2. By: Giovanni Forchini
    Abstract: The average exponential tests for structural change of Andrews and Ploberger (Econometrica, 62, 1994) and Andrews, Lee and Ploberger (Journal of Econometrics 70, 1996) and modifications thereof maximize a weighted average power which incorporates specific weighting functions in order to make the resulting test statistics simple. Generalizations of these tests involve the numerical evaluation of (potentially) complicated integrals. In this paper we suggest a uniform Laplace approximation to evaluate weighted average power test statistics for which a simple closed form does not exist. We also show that a modification of the avg-F test is optimal under a very large class of weighting functions and can be written as a ratio of quadratic forms. Finally, we discuss how the computational burden of averaging over all possible change-points can be addressed.
    Keywords: Linear Regression Model, Similar Tests, Invariant Tests, Structural Change, Weighted Average Power Tests, Laplace Approximation, Uniform Laplace Approximation.
    JEL: C12 C21 C22
    Date: 2005–08
  3. By: Marco Avarucci; Domenico Marinucci
    Abstract: In this paper we consider polynomial cointegrating relationships among stationary processes with long range dependence. We express the regression functions in terms of Hermite polynomials and we consider a form of spectral regression around frequency zero. For these estimates, we establish consistency by means of a more general result on continuously averaged estimates of the spectral density matrix at frequency zero.
    Date: 2005–09
  4. By: Shrestha, Min B. (University of Wollongong); Chowdhury, Khorshed (University of Wollongong)
    Abstract: Testing for unit roots has special significance for both economic theory and the interpretation of estimation results. As several methods are available, researchers face a method selection problem when conducting unit root tests on time series data in the presence of a structural break. This paper proposes a sequential search procedure to determine the best test method for each time series. Different test methods or models may be appropriate for different time series; therefore, instead of applying one particular test method to all the time series under consideration, we recommend selecting a set of mixed methods to obtain better results.
    Keywords: Time Series, Stationarity, Unit Root Test, Structural Break, Sequential Procedure
    Date: 2005
  5. By: Wenceslao Gonzalez-Manteiga; Maria J. Lombardia; Isabel Molina; Domingo Morales; Laureano Santamaria
    Abstract: A multivariate Fay-Herriot model is used to aid the prediction of small area parameters of dependent variables with sample data aggregated to area level. The empirical best linear unbiased predictor of the parameter vector is used, and an approximation of the elements of the mean cross product error matrix is obtained by extending the results of Prasad and Rao (1990) to the multiparameter case. Three different bootstrap approximations of those elements are introduced, and a simulation study is carried out to compare the efficiency of all the approximations presented, including a comparison under lack of normality. Further, the number of replications needed for the bootstrap procedures to stabilize is studied.
    Date: 2005–09
  6. By: Wing-Keung Wong (National University of Singapore); Guorui Bian (East China Normal University, China)
    Abstract: In this paper, we first develop modified maximum likelihood (MML) estimators for the multiple regression coefficients in a linear model whose underlying distribution is assumed to be symmetric, a member of the Student's t family. We obtain the estimators in closed form and derive their asymptotic properties. In addition, we demonstrate that the MML estimators are more appropriate for estimating the parameters of the Capital Asset Pricing Model by comparing their performance with that of the least squares estimators (LSE) on the monthly returns of US portfolios. Our empirical study reveals that the MML estimators are more efficient than the LSE in terms of the relative efficiency of the one-step-ahead forecast mean square error in small samples.
    Keywords: Maximum likelihood estimators, Modified maximum likelihood estimators, Student’s t family, Capital Asset Pricing Model, Robustness
    JEL: C1 C2 G1
    Date: 2005–05
  7. By: González, Andrés (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (Dept. of Economic Statistics, Stockholm School of Economics); van Dijk, Dick (Econometric Institute, Erasmus University Rotterdam)
    Abstract: We develop a non-dynamic panel smooth transition regression model with fixed individual effects. The model is useful for describing heterogeneous panels, with regression coefficients that vary across individuals and over time. Heterogeneity is allowed for by assuming that these coefficients are continuous, bounded functions of an observable variable and fluctuate between a limited number (often two) of “extreme regimes”. The model can be viewed as a generalization of the threshold panel model of Hansen (1999). We extend the modelling strategy for univariate smooth transition regression models to the panel context. This comprises model specification based on homogeneity tests, parameter estimation, and diagnostic checking, including tests for parameter constancy and no remaining nonlinearity. The new model is applied to describe firms' investment decisions in the presence of capital market imperfections.
    Keywords: financial constraints; heterogeneous panel; investment; misspecification test; nonlinear modelling; panel data; smooth transition model
    JEL: C12 C23 C52 G31 G32
    Date: 2005–08–17
  8. By: Hübler, Olaf
    Abstract: This paper presents a survey of panel data methods in which I emphasize new developments. In particular, linear multilevel models, including a new variant, are discussed. Furthermore, nonlinear, nonparametric and semiparametric models are analyzed. In contrast to linear models, no unified methods exist for nonlinear approaches. In this case, fixed effects models (FEM) are dominated by conditional maximum likelihood (CML) methods. Under random effects model (REM) assumptions it is often possible to use the ML method directly. GMM and simulated estimators exist as alternatives. If the nonlinear function is not known exactly, nonparametric or semiparametric methods should be preferred.
    Keywords: panel data, linear multilevel, nonlinear, non- and semiparametric models
    JEL: C14 C23 C24 C25
    Date: 2005–08
  9. By: Jörg Breitung; M. Hashem Pesaran
    Abstract: This paper provides a review of the literature on unit roots and cointegration in panels where both the time dimension (T) and the cross section dimension (N) are relatively large. It distinguishes between the first-generation tests, developed on the assumption of cross section independence, and the second-generation tests, which allow, in a variety of forms and degrees, for the dependence that might prevail across the different units in the panel. In the analysis of cointegration, the hypothesis testing and estimation problems are further complicated by the possibility of cross section cointegration, which could arise if the unit roots in the different cross section units are due to common random walk components.
    Keywords: Panel Unit Roots, Panel Cointegration, Cross Section Dependence, Common Effects
    JEL: C12 C15 C22 C23
    Date: 2005–08
  10. By: Cornelißen, Thomas
    Abstract: In non-linear regression models, such as the heteroskedastic probit model, coefficients cannot be interpreted as marginal effects. Marginal effects can be computed as a non-linear combination of the regression coefficients. The standard errors of the marginal effects needed for inference and hypothesis testing have to be derived by approximation, using methods such as the delta method. This paper applies the delta method to derive analytically the standard errors of marginal effects in a heteroskedastic probit model. The computation is implemented as a Stata ado-file called mehetprob, which can be downloaded from the internet. This makes it possible to compute marginal effects at means and their standard errors in a heteroskedastic probit model faster than by the numerical calculation implemented in the mfx routine currently available in Stata for that purpose.
    Keywords: heteroskedastic probit model, marginal effects, Stata
    JEL: C25 C87
    Date: 2005–08
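    The delta-method computation the abstract refers to can be sketched as follows. This is not the mehetprob ado-file's analytic derivation: the sketch below assumes the heteroskedastic probit form P(y=1|x,z) = Φ(x'β / exp(z'γ)) and uses a numerical gradient in place of the analytic one; the function name and inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def het_probit_me_and_se(beta, gamma, V, xbar, zbar, j, eps=1e-6):
    # Marginal effect of regressor j at the sample means in a
    # heteroskedastic probit, plus a delta-method standard error:
    # se = sqrt(g' V g), where g is the gradient of the marginal effect
    # with respect to the stacked parameters theta = (beta, gamma) and
    # V is their estimated covariance matrix.
    def me(theta):
        b, g = theta[:len(beta)], theta[len(beta):]
        scale = np.exp(zbar @ g)
        return norm.pdf((xbar @ b) / scale) * b[j] / scale

    theta = np.concatenate([beta, gamma])
    # Central-difference numerical gradient (the paper derives it analytically).
    grad = np.array([(me(theta + eps * e) - me(theta - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta))])
    return me(theta), np.sqrt(grad @ V @ grad)
```

    The same recipe applies to any smooth function of the estimated parameters; only the `me` function changes.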
  11. By: Heather M. Anderson; Chin Nam Low; Ralph Snyder
    Abstract: A well known property of the Beveridge Nelson decomposition is that the innovations in the permanent and transitory components are perfectly correlated. We use a single source of error state space model to exploit this property and perform a Beveridge Nelson decomposition. The single source of error state space approach to the decomposition is computationally simple and it incorporates the direct estimation of the long-run multiplier.
    JEL: C22 C51 E32
    Date: 2005–05
  12. By: Don Harding; Adrian Pagan
    Abstract: Many interesting issues are posed by synchronisation of cycles. In this paper we define synchronisation and show how the degree of synchronisation can be measured. We propose heteroscedasticity and serial correlation robust tests of the hypotheses that cycles are either unsynchronised or perfectly synchronised. Tests of synchronisation are performed using data on industrial production, on monthly stock indices and on series that are used to construct the reference cycle for the United States. An algorithm is developed to extract a common cycle. It is used to extract the reference cycle for the United States and common cycles in stock prices and European industrial production.
    JEL: C12 C14 C22 C33 E32
    Date: 2004–07
  13. By: Meitz, Mika (Dept. of Economic Statistics, Stockholm School of Economics)
    Abstract: We consider a family of GARCH(1,1) processes introduced in He and Teräsvirta (1999a). This family contains various popular GARCH models as special cases. A necessary and sufficient condition for the existence of a strictly stationary solution is given.
    Keywords: GARCH; strict stationarity; Lyapunov exponent
    JEL: C22
    Date: 2005–07–23
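    For intuition, the strict-stationarity condition in the standard GARCH(1,1) special case is Nelson's (1990) requirement that E[log(α z² + β)] < 0 for the innovation z; the paper's contribution is the analogous necessary and sufficient condition for the wider He-Teräsvirta family. A minimal Monte Carlo check of the standard-case condition, assuming Gaussian innovations:

```python
import numpy as np

def garch11_lyapunov(alpha, beta, n=200_000, rng=None):
    # Monte Carlo estimate of E[log(alpha*z^2 + beta)], z ~ N(0,1).
    # For the standard GARCH(1,1), negativity of this quantity is
    # necessary and sufficient for a strictly stationary solution
    # (Nelson 1990); note it holds even in the IGARCH case alpha+beta=1.
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n)
    return np.mean(np.log(alpha * z ** 2 + beta))
```

    For example, typical estimates such as (α, β) = (0.1, 0.85) satisfy the condition comfortably, while an explosive parameterization like (5.0, 1.0) violates it.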
  14. By: Andrew Ang; Geert Bekaert; Min Wei
    Abstract: Surveys do! We examine the forecasting power of four alternative methods of forecasting U.S. inflation out-of-sample: time series ARIMA models; regressions using real activity measures motivated from the Phillips curve; term structure models that include linear, non-linear, and arbitrage-free specifications; and survey-based measures. We also investigate several optimal methods of combining forecasts. Our results show that surveys outperform the other forecasting methods and that the term structure specifications perform relatively poorly. We find little evidence that combining forecasts using means or medians, or using optimal weights with prior information produces superior forecasts to survey information alone. When combining forecasts, the data consistently places the highest weights on survey information.
    JEL: E31 E37 E43 E44
    Date: 2005–08
  15. By: Michael Beer (Department of Quantitative Economics)
    Abstract: Every hedonic price index is an estimate of an unknown economic parameter. It depends, in practice, on one or more random samples of prices and characteristics of a certain good. Bootstrap resampling methods provide a tool for quantifying estimation errors. Following some general reflections on hedonic elementary price indices, this paper proposes a case-based and a model-based bootstrap approach for estimating confidence intervals for hedonic price indices. Empirical results are obtained for a data set on used cars in Switzerland. A semi-logarithmic model is fit to monthly samples serving as the input to different index formulae. Finally, bootstrap confidence intervals are estimated for Jevons-type hedonic elementary price indices.
    Keywords: hedonic regression; hedonic price indices; bootstrap methods; confidence intervals; used cars
    JEL: C43 C15 E31 L62
    Date: 2005–07–22
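    The case-based bootstrap the abstract describes can be sketched for a time-dummy variant of the semi-logarithmic hedonic model. This is an illustrative assumption, not necessarily the paper's exact index formula: the sketch fits ln p = a + d·period + x'b by OLS on two pooled monthly samples, reports exp(d) as the index, and resamples cases within each period for a percentile confidence interval.

```python
import numpy as np

def jevons_hedonic_ci(X0, p0, X1, p1, n_boot=499, alpha=0.05, rng=None):
    # Case-based bootstrap CI for a time-dummy hedonic price index.
    # X0, X1: characteristics matrices for the base and current period;
    # p0, p1: corresponding price vectors.
    rng = np.random.default_rng(rng)

    def index(X0, p0, X1, p1):
        n0, n1 = len(p0), len(p1)
        # Pooled regressors: intercept, period dummy, characteristics.
        Z = np.column_stack([np.ones(n0 + n1),
                             np.r_[np.zeros(n0), np.ones(n1)],
                             np.vstack([X0, X1])])
        coef, *_ = np.linalg.lstsq(Z, np.log(np.r_[p0, p1]), rcond=None)
        return np.exp(coef[1])  # index = exp(period-dummy coefficient)

    est = index(X0, p0, X1, p1)
    boot = []
    for _ in range(n_boot):
        i0 = rng.integers(0, len(p0), len(p0))  # resample cases per period
        i1 = rng.integers(0, len(p1), len(p1))
        boot.append(index(X0[i0], p0[i0], X1[i1], p1[i1]))
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return est, (lo, hi)
```

    The model-based variant the paper also proposes would instead resample regression residuals and rebuild prices from the fitted model, keeping the characteristics fixed.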
  16. By: Jean-Marie Dufour; Lynda Khalaf; Maral Kichian
    Abstract: In this paper, we use identification-robust methods to assess the empirical adequacy of a New Keynesian Phillips Curve (NKPC) equation. We focus on Gali and Gertler's (1999) specification, on both U.S. and Canadian data. Two variants of the model are studied: one based on a rational expectations assumption, and a modification of the latter which consists in using survey data on inflation expectations. The results based on these two specifications exhibit sharp differences concerning: (i) identification difficulties, (ii) backward-looking behavior, and (iii) the frequency of price adjustments. Overall, we find that there is some support for the hybrid NKPC for the U.S., whereas the model is not suited to Canada. Our findings underscore the need to employ identification-robust inference methods in the estimation of expectations-based dynamic macroeconomic relations.
    Keywords: identification robust inference, inflation dynamics, macroeconomics, New Keynesian Phillips Curve, optimal instruments, weak instruments
    JEL: C12 C13 C3 C52 E3 E31 E5
    Date: 2005–08–01
  17. By: Graham Elliott; Ivana Komunjer; Allan Timmermann
    Abstract: Empirical studies using survey data on expectations have frequently observed that forecasts are biased and have concluded that agents are not rational. We establish that existing rationality tests are not robust to even small deviations from symmetric loss and hence have little ability to tell whether the forecaster is irrational or the loss function is asymmetric. We quantify the exact trade-off between forecast inefficiency and asymmetric loss leading to identical outcomes of standard rationality tests and explore new and more general methods for testing forecast rationality jointly with flexible families of loss functions that embed quadratic loss as a special case. An empirical application to survey data on forecasts of nominal output growth demonstrates the empirical significance of our results and finds that rejections of rationality may largely have been driven by the assumption of symmetric loss.
    Date: 2005–05

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.