nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒04‒04
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. Empirical Likelihood Block Bootstrapping By Allen, Jason; Gregory, Allan W.; Shimotsu, Katsumi
  2. Identification problems in ESTAR models and a new model By Donauer, Stefanie; Heinen, Florian; Sibbertsen, Philipp
  3. Finite Sample Nonparametric Tests for Linear Regressions By Karl Schlag; Olivier Gossner
  4. Bootstrap Union Tests for Unit Roots in the Presence of Nonstationary Volatility By Smeekes Stephan; Taylor A. M. Robert
  5. On the fixed-effects vector decomposition By Breusch, Trevor; Ward, Michael B; Nguyen, Hoa; Kompas, Tom
  6. Fixed, Random, or Something in Between? – A Variant of HAUSMAN’s Specification Test for Panel Data Estimators By Manuel Frondel; Colin Vance
  7. On the Advantages of Disaggregated Data: Insights from Forecasting the U.S. Economy in a Data-Rich Environment By Nikita Perevalov; Philipp Maier
  8. Testing a Conditional Form of Exogeneity By Halbert White; Karim Chalak
  9. GME versus OLS - Which is the best to estimate utility functions? By Cesaltina Pires; Andreia Dionisio; Luís Coelho
  10. Volatility models with innovations from new maximum entropy densities at work By Fischer, Matthias; Gao, Yang; Herrmann, Klaus
  11. Modelling ‘crime-proneness’. A comparison of models for repeated count outcomes By Torbjørn Skardhamar, Tore Schweder and Simen Gan Schweder
  12. The Dynamics of Brand Equity: A Hedonic Regression Approach to the Laser Printer Market By Ludwig von Auer; Mark Trede
  13. Key Moments in the Rouwenhorst Method By Damba Lkhagvasuren
  14. Testing non-linear dependence in the hedge fund industry By Javier Mencía
  15. Trend Estimation By Proietti, Tommaso
  16. "Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments" By Philip Hans Franses; Michael McAleer; Rianne Legerstee
  17. Econometrics and Decision Making: Effects of Presentation Mode By Robin Hogarth; Emre Soyer
  18. Forecasting with a CGE model: does it work? By Peter B. Dixon; Maureen T. Rimmer

  1. By: Allen, Jason; Gregory, Allan W.; Shimotsu, Katsumi
    Abstract: Monte Carlo evidence has made it clear that asymptotic tests based on generalized method of moments (GMM) estimation have disappointing size properties. The problem is exacerbated when the moment conditions are serially correlated. Several block bootstrap techniques have been proposed to correct the problem, including Hall and Horowitz (1996) and Inoue and Shintani (2006). We propose an empirical likelihood block bootstrap procedure to improve inference in models characterized by nonlinear moment conditions that are serially correlated, possibly of infinite order. Combining the ideas of Kitamura (1997) and Brown and Newey (2002), the parameters of a model are first estimated by GMM and then used to compute the empirical likelihood probability weights of the blocks of moment conditions. The probability weights serve as the multinomial distribution used in resampling. The first-order asymptotic validity of the proposed procedure is proven, and a series of Monte Carlo experiments shows that it can improve test size relative to conventional block bootstrapping.
    Keywords: generalized method of moments, empirical likelihood, block bootstrap
    JEL: C14 C22
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2010-01&r=ecm
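    To fix ideas, the following minimal Python sketch (illustrative only, not the authors' code) implements the resampling scheme described in the abstract: empirical likelihood weights are computed for non-overlapping blocks of the moment conditions evaluated at a first-stage GMM estimate, and blocks are then redrawn multinomially with those probabilities. The function names and the clipping safeguard in the dual problem are assumptions of the sketch.

      import numpy as np
      from scipy.optimize import minimize

      def el_block_weights(block_means):
          """Empirical likelihood weights for B blocks of moment conditions.
          block_means: (B, m) array of within-block averages of the moments,
          evaluated at the first-stage GMM estimate."""
          B, m = block_means.shape
          def neg_log_el(lam):
              arg = np.clip(1.0 + block_means @ lam, 1e-10, None)  # stay inside log's domain
              return -np.sum(np.log(arg))
          lam = minimize(neg_log_el, np.zeros(m), method="BFGS").x  # convex dual problem
          p = 1.0 / (B * (1.0 + block_means @ lam))
          return p / p.sum()

      def el_block_bootstrap(moments, block_len, n_boot, seed=0):
          """Redraw non-overlapping blocks of moments with EL probabilities."""
          rng = np.random.default_rng(seed)
          n = moments.shape[0]
          blocks = [moments[i:i + block_len]
                    for i in range(0, n - block_len + 1, block_len)]
          weights = el_block_weights(np.array([b.mean(axis=0) for b in blocks]))
          resamples = []
          for _ in range(n_boot):
              idx = rng.choice(len(blocks), size=len(blocks), p=weights)  # multinomial draw
              resamples.append(np.concatenate([blocks[i] for i in idx]))
          return resamples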
  2. By: Donauer, Stefanie; Heinen, Florian; Sibbertsen, Philipp
    Abstract: In ESTAR models it is usually difficult to obtain reliable parameter estimates, as can be observed in the literature. We show that the phenomenon of strongly biased estimators is a consequence of the so-called identification problem, the problem of properly distinguishing the transition function under extreme parameter combinations. This happens in particular for either very small or very large values of the error term variance. Furthermore, we introduce a new alternative model - the T-STAR model - which has properties similar to those of the ESTAR model but reduces the effects of the identification problem. We also derive a linearity test and a unit root test for this model.
    Keywords: Nonlinearities, Smooth transition, Linearity testing, Unit root testing, Real exchange rates
    JEL: C12 C22 C52
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-444&r=ecm
  3. By: Karl Schlag; Olivier Gossner
    Abstract: We introduce several exact nonparametric tests for finite sample multivariate linear regressions and compare their powers. This fills an important gap in the literature, where the only known nonparametric tests are either asymptotic or assume only one covariate.
    Keywords: Exact, linear regression, nonparametric.
    JEL: C14 C12 C20
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1212&r=ecm
  4. By: Smeekes Stephan; Taylor A. M. Robert (METEOR)
    Abstract: We provide a joint treatment of three major issues that surround testing for a unit root in practice: uncertainty as to whether or not a linear deterministic trend is present in the data, uncertainty as to whether the initial condition of the process is (asymptotically) negligible or not, and the possible presence of nonstationary volatility in the data. Harvey, Leybourne and Taylor (2010, Journal of Econometrics, forthcoming) propose decision rules based on a four-way union of rejections of QD and OLS detrended tests, both with and without allowing for a linear trend, to deal with the first two problems. However, in the presence of nonstationary volatility these test statistics have limit distributions which depend on the form of the volatility process, making tests based on the standard asymptotic critical values invalid. We construct bootstrap versions of the four-way union of rejections test, which, by employing the wild bootstrap, are shown to be asymptotically valid in the presence of nonstationary volatility. These bootstrap union tests therefore allow for a joint treatment of all three of the aforementioned problems.
    Keywords: econometrics;
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2010015&r=ecm
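    A rough sketch of the bootstrap step described above: first differences of the series (the residuals under the unit root null) are resampled with Rademacher wild weights, which preserves the volatility pattern, and a union statistic is recomputed on each bootstrap sample. The user-supplied function four_df_stats and the simple min-of-scaled-statistics union rule are hypothetical simplifications, not the authors' exact procedure.

      import numpy as np

      def wild_bootstrap_union_pvalue(y, four_df_stats, scale_cvs, n_boot=999, seed=0):
          """Left-tailed bootstrap p-value for a union-of-rejections statistic.
          four_df_stats(y) is assumed to return the four detrended DF-type statistics
          (QD/OLS detrended, with and without trend); scale_cvs are benchmark critical
          values used only to put the statistics on a comparable scale."""
          rng = np.random.default_rng(seed)
          union = np.min(np.asarray(four_df_stats(y)) / np.abs(scale_cvs))
          dy = np.diff(y)                                  # residuals under the unit root null
          boot = np.empty(n_boot)
          for b in range(n_boot):
              w = rng.choice([-1.0, 1.0], size=dy.shape)   # Rademacher wild weights
              y_star = y[0] + np.concatenate([[0.0], np.cumsum(w * dy)])
              boot[b] = np.min(np.asarray(four_df_stats(y_star)) / np.abs(scale_cvs))
          return np.mean(boot <= union)                    # reject for small p-values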
  5. By: Breusch, Trevor; Ward, Michael B; Nguyen, Hoa; Kompas, Tom
    Abstract: This paper analyses the properties of the fixed-effects vector decomposition estimator, an emerging and popular technique for estimating time-invariant variables in panel data models with unit effects. This estimator was initially motivated on heuristic grounds, and advocated on the strength of favorable Monte Carlo results, but with no formal analysis. We show that the three-stage procedure of this decomposition is equivalent to a standard instrumental variables approach, for a specific set of instruments. The instrumental variables representation facilitates the present formal analysis which finds: (1) The estimator reproduces exactly classical fixed-effects estimates for time-varying variables. (2) The standard errors recommended for this estimator are too small for both time-varying and time-invariant variables. (3) The estimator is inconsistent when the time-invariant variables are endogenous. (4) The reported sampling properties in the original Monte Carlo evidence are incorrect. (5) We recommend an alternative shrinkage estimator that has superior risk properties to the decomposition estimator, unless the endogeneity problem is known to be small or no relevant instruments exist.
    Keywords: panel data models; fixed-effects vector decomposition; instrumental variables; inconsistent estimator; incorrect standard errors; improved shrinkage estimator
    JEL: C23 C33
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:21452&r=ecm
  6. By: Manuel Frondel; Colin Vance
    Abstract: This paper proposes a variant of the classical HAUSMAN specification test commonly employed to decide whether the estimation of a random-effects model is a viable alternative to estimating fixed effects. Whereas the classical test probes the equality of fixed- and random-effects estimates, the proposed variant focuses on the equality of between-groups and fixed-effects coefficients. While both test procedures lead to the same conclusions, the panel model specification underlying our testing strategy facilitates the simultaneous estimation of the fixed- and between-groups effects. As a consequence, we are able to examine the equality of the whole range of coefficients as well as that of individual variables. The usefulness of the test is illustrated using a panel of household travel data for Germany.
    Keywords: Specification tests, fuel price elasticity
    JEL: C12
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rwi:repape:0160&r=ecm
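    A minimal sketch of one way such a comparison can be implemented, assuming a Mundlak-type specification in which the outcome is regressed on both the group-demeaned regressors and the group means, with equality of the two coefficient blocks then checked by a Wald test; the variable names and the cluster-robust covariance are assumptions of the sketch, not necessarily the authors' choices.

      import pandas as pd
      import statsmodels.api as sm

      def within_between_test(df, yvar, xvars, group):
          """Regress y on group-demeaned X and group means of X, then test equality
          of the 'within' and 'between' coefficients variable by variable."""
          means = df.groupby(group)[xvars].transform("mean")
          X = pd.concat([(df[xvars] - means).add_suffix("_within"),
                         means.add_suffix("_between")], axis=1)
          X = sm.add_constant(X)
          res = sm.OLS(df[yvar], X).fit(cov_type="cluster",
                                        cov_kwds={"groups": df[group]})
          hypotheses = ", ".join(f"{v}_within = {v}_between" for v in xvars)
          return res.wald_test(hypotheses)   # joint test; both coefficient sets estimated at once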
  7. By: Nikita Perevalov; Philipp Maier
    Abstract: The good forecasting performance of factor models has been well documented in the literature. While many studies focus on a very limited set of variables (typically GDP and inflation), this study evaluates forecasting performance at disaggregated levels to examine the source of the improved forecasting accuracy, relative to a simple autoregressive model. We use the latest revision of over 100 U.S. time series over the period 1974-2009 (monthly and quarterly data). We employ restrictions derived from national accounting identities to derive jointly consistent forecasts for the different components of U.S. GDP. In line with previous studies, we find that our factor model yields vastly improved forecasts for U.S. GDP, relative to simple autoregressive benchmark models, but we also conclude that the gains in terms of forecasting accuracy differ substantially between GDP components. As a rule of thumb, the largest improvements in terms of forecasting accuracy are found for relatively more volatile series, with the greatest gains coming from improvements of the forecasts for investment and trade. Consumption forecasts, in contrast, perform only marginally better than a simple AR benchmark model. In addition, we show that for most GDP components, an unrestricted, direct forecast outperforms forecasts subject to national accounting identity restrictions. In contrast, GDP itself is best forecasted as the sum of individual forecasts for GDP components, but the improvement over a direct, unconstrained factor forecast is small.
    Keywords: Econometric and statistical methods; International topics
    JEL: C50 C53 E37 E47
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:10-10&r=ecm
  8. By: Halbert White (University of California-San Diego); Karim Chalak (Boston College)
    Abstract: We give two new approaches to testing a conditional form of exogeneity. This condition ensures unconfoundedness and identification of effects of interest in structural systems. As these approaches do not rely on the absence of causal effects of treatment under the null, they complement earlier methods of Rosenbaum (1987) and Heckman and Hotz (1989).
    Keywords: Causality, Conditional Exogeneity, Nonparametric Test, Treatment Effect, Unconfoundedness
    JEL: C12 C14 C21 C31
    Date: 2010–03–24
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:733&r=ecm
  9. By: Cesaltina Pires (Departamento de Gestão, Universidade de Evora and CEFAGE-UE); Andreia Dionisio (Departamento de Gestão, Universidade de Evora and CEFAGE-UE); Luís Coelho (Departamento de Gestão, Universidade de Evora and CEFAGE-UE)
    Abstract: This paper estimates von Neumann and Morgenstern utility functions, comparing the generalized maximum entropy (GME) estimator with OLS, using data obtained by utility elicitation methods. It thus provides a comparison of the performance of the two estimators in a real-data, small-sample setup. The results confirm those obtained for small samples through Monte Carlo simulations. The difference between the two estimators is small and it decreases as the width of the parameter support vector increases. Moreover, the GME estimator is more precise than the OLS one. Overall, the results suggest that GME is an interesting alternative to OLS in the estimation of utility functions when data are generated by utility elicitation methods.
    Keywords: Generalized maximum entropy; Maximum entropy principle; von Neumann and Morgenstern utility; Utility elicitation.
    JEL: C13 C14 C49 D81
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:cfe:wpcefa:2010_02&r=ecm
  10. By: Fischer, Matthias; Gao, Yang; Herrmann, Klaus
    Abstract: Generalized autoregressive conditional heteroskedasticity (GARCH) processes have become very popular as models for financial return data because they are able to capture volatility clustering as well as the leptokurtic unconditional distributions which result from the assumption of conditionally normal error distributions. In contrast, Bollerslev (1987) and several follow-ups provided evidence that starting with leptokurtic and possibly skewed (conditional) error distributions achieves better results. Parallel to these flexible but to some extent arbitrarily chosen parametric distributions, recent years saw a rise in suggestions for maximum entropy distributions (e.g. Rockinger and Jondeau, 2002, Park and Bera, 2009 or Fischer and Herrmann, 2010). Within this contribution we provide a comprehensive comparison between the different ME densities and their parametric competitors within different generalized GARCH models such as APARCH and GJR-GARCH.
    Keywords: GARCH, APARCH, Entropy density, Skewness, Kurtosis
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:032010&r=ecm
  11. By: Torbjørn Skardhamar, Tore Schweder and Simen Gan Schweder (Statistics Norway)
    Abstract: In the criminal career literature, the individual-level age-crime relationship is commonly modelled using generalized linear mixed models, where between-individual heterogeneity is handled by specifying random effect(s) with some distribution. It is common to specify either a normal or a discrete distribution for the random effects. However, there are other options, and the choice of specification may have a substantial effect on the results. In this article, we compare how various methods perform on Norwegian longitudinal data on registered crimes. We also present an approach that might be new to criminologists: the Poisson-gamma regression model. This model is interpretable, parsimonious, and quick to compute. For our data, the distributional assumptions do not have a dramatic effect on the substantive interpretation. In criminology, the mixture distribution is also of theoretical interest in its own right, and we conclude that a gamma distribution is reasonable. We emphasize the importance of comparing multiple methods in any setting where the distributional assumptions are uncertain.
    Keywords: criminal careers; repeated count data; random effects; Poisson-gamma regression; comparing methods
    JEL: C02 C23 K40
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:611&r=ecm
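    In a single cross section, the Poisson-gamma mixture reduces to negative binomial regression, which the short simulation below illustrates with statsmodels (simulated data and hypothetical covariates; the panel version with a gamma frailty shared across an individual's repeated counts is more involved and is not shown).

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 2000
      age = rng.uniform(15, 60, n)
      X = sm.add_constant(np.column_stack([age, age ** 2]))
      frailty = rng.gamma(shape=2.0, scale=0.5, size=n)        # gamma heterogeneity with mean one
      mu = np.exp(-1.0 + 0.15 * age - 0.002 * age ** 2) * frailty
      y = rng.poisson(mu)                                      # Poisson counts given the frailty
      res = sm.NegativeBinomial(y, X).fit(disp=False)          # marginal model is negative binomial
      print(res.summary())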
  12. By: Ludwig von Auer; Mark Trede
    Abstract: The authors develop a dynamic approach to measuring the evolution of comparative brand premium, an important component of brand equity. A comparative brand premium is defined as the pairwise price difference between two products being identical in every respect but brand. The model is based on hedonic regressions and grounded in economic theory. In contrast to existing approaches, the authors explicitly take into account and model the dynamics of the brand premia. By exploiting the premia’s intertemporal dependence structure, the Bayesian estimation method produces more accurate estimators of the time paths of the brand premia than other methods. In addition, the authors present a novel yet straightforward way to construct confidence bands that cover the entire time series of brand premia with high probability. The data required for estimation are readily available, cheap, and observable on the market under investigation. The authors apply the dynamic hedonic regression to a large and detailed data set about laser printers gathered on a monthly basis over a four-year period. It transpires that, in general, the estimated brand premia change only gradually from period to period. Nevertheless, the method can diagnose sudden downturns of a comparative brand premium. The authors’ dynamic hedonic regression approach facilitates the practical evaluation of brand management.
    Keywords: brand equity, price premium, hedonic regression, Bayesian estimation, dynamic linear model
    JEL: C23 L11
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:1210&r=ecm
  13. By: Damba Lkhagvasuren (Concordia University)
    Abstract: Recent work by Galindev and Lkhagvasuren (2009) shows that, for highly persistent autoregressive processes, Rouwenhorst's (1995) method outperforms other existing methods in the scalar case along key lower-order moments. Although lower-order moments are sufficient to evaluate the few existing methods, it is important to understand how the method performs along other moments of the underlying process. This note calculates the key higher-order moments of the process generated by the Rouwenhorst method for its unrestricted, non-symmetric case. The results can also be useful in discretizing autoregressive processes whose higher-order moments differ from those of the normal distribution.
    Keywords: the Rouwenhorst method, Finite-State Markov Chain Approximation, AR(1) shocks
    JEL: C60
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:crd:wpaper:09010&r=ecm
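    For reference, the standard symmetric Rouwenhorst (1995) discretization of an AR(1) can be sketched as follows; the paper's unrestricted, non-symmetric case is not reproduced here.

      import numpy as np

      def rouwenhorst(n, rho, sigma):
          """n-state Markov chain approximating y_t = rho*y_{t-1} + eps_t, eps_t ~ N(0, sigma^2)."""
          p = (1.0 + rho) / 2.0
          P = np.array([[p, 1.0 - p], [1.0 - p, p]])
          for m in range(3, n + 1):                       # build the transition matrix recursively
              Pm = np.zeros((m, m))
              Pm[:-1, :-1] += p * P
              Pm[:-1, 1:] += (1.0 - p) * P
              Pm[1:, :-1] += (1.0 - p) * P
              Pm[1:, 1:] += p * P
              Pm[1:-1, :] /= 2.0                          # interior rows are added twice
              P = Pm
          sigma_y = sigma / np.sqrt(1.0 - rho ** 2)       # unconditional standard deviation
          psi = np.sqrt(n - 1) * sigma_y
          return np.linspace(-psi, psi, n), P             # grid points and transition matrix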
  14. By: Javier Mencía (Banco de España)
    Abstract: This paper proposes a parsimonious approach to test for non-linear dependence of the conditional mean and variance of hedge fund returns on several market factors. My approach introduces non-linear dependence by means of empirically relevant polynomial functions of the factors. For comparison purposes, I also consider multifactor extensions of tests based on piecewise linear alternatives. I apply these tests to a database of monthly returns on 1,071 hedge funds. I find that non-linear dependence in the mean is highly sensitive to the factors that I consider. However, I obtain much stronger evidence of non-linear dependence in the conditional variance.
    Keywords: Generalised Hyperbolic Distribution, Correlation, Asymmetry, Multifactor Models
    JEL: C12 G11 C32 C22
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1007&r=ecm
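    A generic illustration of a test for non-linear mean dependence in this spirit: polynomial terms in the factors are added to a linear factor regression and tested for joint significance. The function and its HAC covariance choice are assumptions of the sketch, not the paper's exact test (which also treats the conditional variance).

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def polynomial_mean_test(returns, factors, degree=2):
          """Wald test that all polynomial (order >= 2) factor terms have zero coefficients.
          returns: Series of fund returns; factors: DataFrame of factor returns."""
          poly = pd.concat({f"{c}_pow{d}": factors[c] ** d
                            for c in factors.columns
                            for d in range(2, degree + 1)}, axis=1)
          X = sm.add_constant(pd.concat([factors, poly], axis=1))
          res = sm.OLS(returns, X).fit(cov_type="HAC", cov_kwds={"maxlags": 3})
          R = np.zeros((poly.shape[1], X.shape[1]))       # restriction matrix picking the poly terms
          for i, c in enumerate(poly.columns):
              R[i, X.columns.get_loc(c)] = 1.0
          return res.wald_test(R)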
  15. By: Proietti, Tommaso
    Abstract: Trend estimation deals with the characterization of the underlying, or long-run, evolution of a time series. Despite being a pervasive theme in time series analysis since its inception, it still raises considerable controversy. The difficulties, or rather the challenges, lie in identifying the sources of the trend dynamics and in defining the time horizon that constitutes the long run. The prevalent view in the literature considers the trend as a genuinely latent component, i.e. as the component of the evolution of a series that is persistent and cannot be ascribed to observable factors. As a matter of fact, the univariate approaches reviewed here assume that the trend is either a deterministic or a random function of time.
    Keywords: Time series models; unobserved components.
    JEL: C10 C22
    Date: 2010–03–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:21607&r=ecm
  16. By: Philip Hans Franses (Erasmus School of Economics, Erasmus University Rotterdam); Michael McAleer (Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute); Rianne Legerstee (Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute)
    Abstract: Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.
    Date: 2010–03
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2010cf729&r=ecm
  17. By: Robin Hogarth; Emre Soyer
    Abstract: Much of empirical economics involves regression analysis. However, does the presentation of results affect economists’ ability to make inferences for decision making purposes? In a survey, 257 academic economists were asked to make probabilistic inferences on the basis of the outputs of a regression analysis presented in a standard format. Questions concerned the distribution of the dependent variable conditional on known values of the independent variable. However, many respondents underestimated uncertainty by failing to take into account the standard deviation of the estimated residuals. The addition of graphs did not substantially improve inferences. On the other hand, when only graphs were provided (i.e., with no statistics), respondents were substantially more accurate. We discuss implications for improving practice in reporting results of regression analyses.
    Keywords: Regression analysis; presentation formats; probabilistic predictions; graphs.
    JEL: C01 C20 C53 Y10
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1204&r=ecm
  18. By: Peter B. Dixon; Maureen T. Rimmer
    Abstract: Computable general equilibrium models can be used to generate detailed forecasts of output growth for commodities/industries. The main objective is to provide realistic baselines from which to calculate the effects of policy changes. In this paper, we assess a CGE forecasting method that has been applied in policy analyses in the U.S. and Australia. Using data available up to 1998, we apply the method with the USAGE model to generate "genuine forecasts" for 500 U.S. commodities/industries for the period 1998 to 2005. We then compare these forecasts with actual outcomes and with alternate forecasts derived as extrapolated trends from 1992 to 1998.
    Keywords: CGE validation; forecasting; U.S. CGE
    JEL: C68 E37 F14
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-197&r=ecm

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.