nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒07‒30
twenty-two papers chosen by
Sune Karlsson
Örebro University

  1. The ‘Pre-Eminence of Theory’ versus the ‘General-to-Specific’ Cointegrated VAR Perspectives in Macro-Econometric Modeling By Spanos, Aris
  2. Fully Modified Narrow-Band Least Squares Estimation of Stationary Fractional Cointegration By Morten Ørregaard Nielsen; Per Frederiksen
  3. Nonparametric Cointegration Analysis of Fractional Systems With Unknown Integration Orders By Morten Ørregaard Nielsen
  4. "Improving the LIML Estimation with Many Instruments and Persistent Heteroscedasticity" By Naoto Kunitomo
  5. Estimating Derivatives in Nonseparable Models with Limited Dependent Variables By Joseph G. Altonji; Hidehiko Ichimura; Taisuke Otsu
  6. The Finite-Sample Effects of VAR Dimensions on OLS Bias, OLS Variance, and Minimum MSE Estimators By Steve Lawford; Michalis P. Stamatogiannis
  7. Optimal Linear Filtering, Smoothing and Trend Extraction for Processes with Unit Roots and Cointegration By Dimitrios D. Thomakos
  8. Nonlinear Cointegration Analysis and the Environmental Kuznets Curve By Hong, Seung Hyun; Wagner, Martin
  9. The Empirical Properties of Some Popular Estimators of Long Memory Processes By Jennifer Brown; Les Oxley; William Rea; Marco Reale
  10. Characteristic function estimation of non-Gaussian Ornstein-Uhlenbeck processes. By Emanuele Taufer
  11. Recurrent Support Vector Regression for a Nonlinear ARMA Model with Applications to Forecasting Financial Returns By Shiyi Chen; Kiho Jeong; Wolfgang K. Härdle
  12. Bayesian Forecasting using Stochastic Search Variable Selection in a VAR Subject to Breaks By Gary Koop; Markus Jochmann; Rodney W. Strachan
  13. Optimal designs for both model discrimination and parameter estimation By Chiara Tommasi
  14. Marginal and Interaction Effects in Ordered Response Models By Mallick, Debdulal
  15. Agglomeration within and between regions: Two econometric based indicators By Valter Di Giacinto; Marcello Pagnini
  16. Panel estimation of state dependent adjustment when the target is unobserved By Kalckreuth, Ulf von
  17. Matching for Causal Inference Without Balance Checking By Stefano Iacus; Gary King; Giuseppe Porro
  18. Spanning Tests in Return and Stochastic Discount Factor Mean-Variance Frontiers: A Unifying Approach By Francisco Peñaranda; Enrique Sentana
  19. A two-step procedure to analyse users' satisfaction By Pieralda Ferrari; Laura Pagani; Carlo Fiorio
  20. Nelson-Plosser revisited: the ACF approach By Karim M. Abadir; Gabriel Talmain; Giovanni Caggiano
  21. Optimal HP filtering for South Africa By Leon du Toit
  22. Using propensity score methods to analyse individual patient-level cost-effectiveness data from observational studies By Manca, A; Austin, P. C

  1. By: Spanos, Aris
    Abstract: The primary aim of the paper is to place current methodological discussions on empirical modeling contrasting the ‘theory first’ versus the ‘data first’ perspectives in the context of a broader methodological framework with a view to constructively appraise them. In particular, the paper focuses on Colander’s argument in his paper “Economists, Incentives, Judgement and Empirical Work” relating to the two different perspectives in Europe and the US that are currently dominating empirical macro-econometric modeling and delves deeper into their methodological/philosophical foundations. It is argued that the key to establishing a constructive dialogue between them is provided by a better understanding of the role of data in modern statistical inference, and how that relates to the centuries-old issue of the realisticness of economic theories.
    Keywords: Econometric methodology, ‘general-to-specific’, pre-eminence of theory, VAR, statistical adequacy, realisticness of theory, statistical model
    JEL: B4 C1 C3
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwedp:7336&r=ecm
  2. By: Morten Ørregaard Nielsen (Queen's University and CREATES); Per Frederiksen (Nordea Markets)
    Abstract: We consider estimation of the cointegrating relation in the stationary fractional cointegration model. This model has found important application recently, especially in financial economics. Previous research has considered a semiparametric narrow-band least squares (NBLS) estimator in the frequency domain, often under a condition of non-coherence between regressors and errors at the zero frequency. We show that in the absence of this condition, the NBLS estimator is asymptotically biased, and also that the bias can be consistently estimated. Consequently, we introduce a fully modified NBLS estimator which eliminates the bias while still having the same asymptotic variance as the NBLS estimator. We also show that local Whittle estimation of the integration order of the errors can be conducted consistently on the residuals from NBLS regression, whereas the estimator only has the same asymptotic distribution as if the errors were observed under the condition of non-coherence. Furthermore, compared to much previous research, the development of the asymptotic distribution theory is based on a different spectral density representation, which is relevant for multivariate fractionally integrated processes, and the use of this representation is shown to reduce both the asymptotic bias and variance of the narrow-band estimators. We also present simulation evidence and a series of empirical illustrations to demonstrate the feasibility and empirical relevance of our proposed methodology.
    Keywords: Fractional cointegration, frequency domain, fully modified estimation, long memory, semiparametric
    JEL: C22
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1171&r=ecm
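    For intuition, a minimal Python sketch of the basic (unmodified) narrow-band least squares idea follows, assuming a bivariate series and a user-chosen bandwidth m; the paper's fully modified bias correction is not implemented here.

      import numpy as np

      def nbls(y, x, m):
          """Narrow-band least squares: regress y on x using only the first
          m Fourier frequencies near zero (a sketch of the basic estimator;
          the paper's fully modified correction is not implemented)."""
          wy = np.fft.fft(y - y.mean())          # discrete Fourier transforms
          wx = np.fft.fft(x - x.mean())
          j = np.arange(1, m + 1)                # narrow band around frequency zero
          ixy = np.real(wx[j] * np.conj(wy[j]))  # cross-periodogram (real part)
          ixx = np.abs(wx[j]) ** 2               # periodogram of the regressor
          return ixy.sum() / ixx.sum()

      # Toy check: y = 0.5*x + noise, so the estimate should be near 0.5.
      rng = np.random.default_rng(0)
      x = rng.standard_normal(1000)
      y = 0.5 * x + rng.standard_normal(1000)
      print(nbls(y, x, m=30))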
  3. By: Morten Ørregaard Nielsen (Queen's University and CREATES)
    Abstract: In this paper a nonparametric variance ratio testing approach is proposed for determining the number of cointegrating relations in fractionally integrated systems. The test statistic is easily calculated without prior knowledge of the integration order of the data or of the strength of the cointegrating relations. Since the test is nonparametric, it does not require the specification of a particular model and is invariant to short-run dynamics. Nor does it require the choice of any lag length or bandwidth parameters, which change the test statistic without being reflected in the asymptotic distribution. Furthermore, a consistent estimate of the cointegration space can be obtained as part of the procedure. The asymptotic distribution theory for the proposed test is non-standard but easily tabulated. Monte Carlo simulations demonstrate excellent finite sample properties, even rivaling those of well-specified parametric tests. The proposed methodology is applied to the term structure of interest rates, and contrary to (fractional and integer-based) parametric approaches, evidence in favor of the expectations hypothesis is found using the nonparametric approach.
    Keywords: cointegration rank, cointegration space, fractional integration and cointegration, interest rates, long memory, nonparametric, term structure, variance ratio
    JEL: C32
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1174&r=ecm
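    A hedged univariate sketch of the variance-ratio idea (closer to Breitung's nonparametric unit root statistic than to the paper's multivariate fractional version, which instead solves a generalized eigenvalue problem between the two outer-product matrices):

      import numpy as np

      def variance_ratio(z):
          """Compare the partial sums of z with z itself; small values point
          towards stationarity. No lag length or bandwidth is needed, which
          is the attraction of the nonparametric approach."""
          z = z - z.mean()
          s = np.cumsum(z)                       # partial-sum (integrated) process
          t = len(z)
          return (s @ s) / (t ** 2 * (z @ z))    # T^-2 * sum S_t^2 / sum z_t^2

      rng = np.random.default_rng(1)
      print(variance_ratio(rng.standard_normal(500)))             # stationary: small
      print(variance_ratio(np.cumsum(rng.standard_normal(500))))  # unit root: larger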
  4. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo)
    Abstract: We consider the estimation of coefficients of a structural equation with many instrumental variables in a simultaneous equation system. We propose a class of modifications of the limited information maximum likelihood (LIML) estimator, the MLIML estimator, for improving its asymptotic properties as well as its small sample properties with many instruments and persistent heteroscedasticity. We show that the MLIML estimator improves on the LIML estimator, and we relate a particular MLIML estimator to the HLIM (or JLIML) estimation. We also give a set of sufficient conditions for asymptotic optimality when the number of instruments is large with persistent heteroscedasticity. Our method can be extended to the generalized LIML (GLIML) estimation.
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2008cf576&r=ecm
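    As background, a compact sketch of the standard (unmodified) LIML estimator that the paper's MLIML class modifies, assuming no included exogenous regressors and demeaned data:

      import numpy as np

      def liml(y, X, Z):
          """Standard LIML for y = X*beta + u with instrument matrix Z,
          computed as a k-class estimator with k equal to the smallest
          eigenvalue of (W' Mz W)^{-1} (W' W), where W = [y X]."""
          n = len(y)
          Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto instruments
          Mz = np.eye(n) - Pz
          W = np.column_stack([y, X])
          kappa = np.linalg.eigvals(
              np.linalg.solve(W.T @ Mz @ W, W.T @ W)).real.min()
          A = X.T @ X - kappa * (X.T @ Mz @ X)     # k-class formula, k = kappa
          b = X.T @ y - kappa * (X.T @ Mz @ y)
          return np.linalg.solve(A, b)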
  5. By: Joseph G. Altonji (Dept. of Economics, Yale University); Hidehiko Ichimura (University of Tokyo); Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: We present a simple way to estimate the effects of changes in a vector of observable variables X on a limited dependent variable Y when Y is a general nonseparable function of X and unobservables. We treat models in which Y is censored from above or below or potentially from both. The basic idea is to first estimate the derivative of the conditional mean of Y given X at x with respect to x on the uncensored sample without correcting for the effect of changes in x induced on the censored population. We then correct the derivative for the effects of the selection bias. We propose nonparametric and semiparametric estimators for the derivative. As extensions, we discuss the cases of discrete regressors, measurement error in dependent variables, and endogenous regressors in a cross section and panel data context.
    Keywords: Censored regression, Nonseparable models, Endogenous regressors, Tobit, Extreme quantiles
    JEL: C1 C14 C23 C24
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1668&r=ecm
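    A sketch of the first step only, local linear estimation of the derivative of E[Y|X=x] on the uncensored sample; the selection-bias correction that is the paper's contribution is not reproduced here.

      import numpy as np

      def local_linear_slope(x0, x, y, h):
          """Local linear estimate of dE[Y|X=x]/dx at x0 with a Gaussian
          kernel of bandwidth h (run on uncensored observations only)."""
          w = np.exp(-0.5 * ((x - x0) / h) ** 2)    # kernel weights
          Xd = np.column_stack([np.ones_like(x), x - x0])
          beta = np.linalg.solve(Xd.T @ (w[:, None] * Xd), Xd.T @ (w * y))
          return beta[1]                             # the local slope

      rng = np.random.default_rng(2)
      x = rng.uniform(-2, 2, 2000)
      y = np.sin(x) + 0.1 * rng.standard_normal(2000)
      keep = y > 0.0                                 # censoring from below
      # Uncorrected first-step slope at 0 (true derivative cos(0) = 1); the
      # selection bias visible here is what the paper's second step removes.
      print(local_linear_slope(0.0, x[keep], y[keep], h=0.3))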
  6. By: Steve Lawford (ENAC, France, University of Nottingham, UK, Philips College, Cyprus and The Rimini Centre for Economic Analysis, Italy); Michalis P. Stamatogiannis
    Abstract: Vector autoregressions (VARs) are important tools in time series analysis. However, relatively little is known about the finite-sample behaviour of parameter estimators. We address this issue by investigating ordinary least squares (OLS) estimators given a data generating process that is a purely nonstationary first-order VAR. Specifically, we use Monte Carlo simulation and numerical optimization to derive response surfaces for OLS bias and variance, in terms of VAR dimensions, given correct specification and several types of over-parameterization of the model: we include a constant, and a constant and trend, and introduce excess lags. We then examine the correction factors that are required for the least squares estimator to attain minimum mean squared error (MSE). Our results improve and extend one of the main finite-sample multivariate analytical bias results of Abadir, Hadri and Tzavalis (Econometrica 67 (1999) 163), generalize the univariate variance and MSE findings of Abadir (Economics Letters 47 (1995) 263) to the multivariate setting, and complement various asymptotic studies.
    Keywords: Finite-sample bias, Monte Carlo simulation, nonstationary time series, response surfaces, vector autoregression.
    JEL: C15 C22 C32
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:13-08&r=ecm
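    A minimal Monte Carlo of the kind underlying the paper's response surfaces: the bias of OLS in a purely nonstationary first-order VAR, with illustrative dimension and sample-size choices.

      import numpy as np

      def ols_bias(k, T, reps=2000, seed=0):
          """Average bias of the OLS estimator of A in y_t = A y_{t-1} + e_t
          when A = I_k (a k-dimensional random walk, correctly specified,
          no deterministics)."""
          rng = np.random.default_rng(seed)
          bias = np.zeros((k, k))
          for _ in range(reps):
              y = np.cumsum(rng.standard_normal((T, k)), axis=0)
              Y0, Y1 = y[:-1], y[1:]
              A_hat = np.linalg.solve(Y0.T @ Y0, Y0.T @ Y1).T
              bias += A_hat - np.eye(k)
          return bias / reps

      print(ols_bias(k=2, T=100))   # diagonal entries are negative, O(1/T)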
  7. By: Dimitrios D. Thomakos (University of Peloponnese, Greece and The Rimini Centre for Economic Analysis)
    Abstract: In this paper I propose a novel optimal linear filter for smoothing, trend and signal extraction for time series with a unit root. The filter is based on the Singular Spectrum Analysis (SSA) methodology, takes the form of a particular moving average and is different from other linear filters that have been used in the existing literature. To the best of my knowledge this is the first time that moving average smoothing is given an optimality justification for use with unit root processes. The frequency response function of the filter is examined and a new method for selecting the degree of smoothing is suggested. I also show that the filter can be used for successfully extracting a unit root signal from stationary noise. The proposed methodology can be extended to deal with two cointegrated series, and I show how to estimate the cointegrating coefficient using SSA and how to extract the common stochastic trend component. A simulation study explores some of the characteristics of the filter for signal extraction, trend prediction and cointegration estimation for univariate and bivariate series. The practical usefulness of the method is illustrated using data for the US real GDP and two financial time series.
    Keywords: cointegration, forecasting, linear filtering, singular spectrum analysis, smoothing, trend extraction and prediction, unit root.
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:14-08&r=ecm
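    A generic basic-SSA smoother to fix ideas (embed, truncate the SVD, average anti-diagonals); the paper's specific optimal filter and its cointegration extensions are not reproduced.

      import numpy as np

      def ssa_smooth(y, L, r):
          """Basic singular spectrum analysis: embed y into an L x K
          trajectory matrix, keep the r leading singular components, and
          hankelize back to a series."""
          n = len(y)
          K = n - L + 1
          X = np.column_stack([y[i:i + L] for i in range(K)])
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          Xr = (U[:, :r] * s[:r]) @ Vt[:r]           # rank-r approximation
          out = np.zeros(n)
          cnt = np.zeros(n)
          for j in range(K):                          # average anti-diagonals
              out[j:j + L] += Xr[:, j]
              cnt[j:j + L] += 1
          return out / cnt

      rng = np.random.default_rng(3)
      signal = np.cumsum(rng.standard_normal(300))    # unit-root signal
      noisy = signal + rng.standard_normal(300)
      print(np.corrcoef(ssa_smooth(noisy, L=40, r=2), signal)[0, 1])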
  8. By: Hong, Seung Hyun (Department of Economics, Concordia University, Montreal, Quebec, Canada); Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: Recent years have seen a growing literature on the environmental Kuznets curve (EKC) that resorts in large part to cointegration techniques. The EKC literature has failed to acknowledge that such regressions involve unit root nonstationary regressors and their integer powers (e.g. GDP and GDP squared) and therefore behave differently from linear cointegrating regressions. Here we provide the necessary tools for EKC analysis by deriving estimation and testing theory for cointegrating equations including stationary regressors, deterministic regressors, unit root nonstationary regressors and their integer powers. We consider fully modified OLS estimation, specification tests based on augmented and auxiliary regressions, as well as a sub-sample KPSS type cointegration test. We present simulation results illustrating the performance of the estimators and tests. In the empirical application for CO2 and SO2 emissions for 19 early industrialized countries over the period 1870-2000, we find evidence for an EKC in roughly half of the countries.
    Keywords: Integrated process, Nonlinear transformation, Fully modified estimation, Nonlinear cointegration analysis, Environmental Kuznets curve
    JEL: C12 C13 Q20
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:224&r=ecm
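    The shape of the regression at issue can be fixed with a short sketch: log emissions on log GDP and its square, with the implied turning point. Plain OLS on hypothetical simulated data is shown; the paper's point is that valid inference requires the fully modified estimator and tests it develops.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      gdp = np.cumsum(0.02 + 0.05 * rng.standard_normal(130))  # unit-root regressor
      em = 2.0 + 1.5 * gdp - 0.08 * gdp**2 + 0.2 * rng.standard_normal(130)

      X = sm.add_constant(np.column_stack([gdp, gdp**2]))
      res = sm.OLS(em, X).fit()              # plain OLS, for illustration only
      b1, b2 = res.params[1], res.params[2]
      print("turning point (log GDP):", -b1 / (2 * b2))  # inverted U if b2 < 0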
  9. By: Jennifer Brown; Les Oxley (University of Canterbury); William Rea; Marco Reale
    Abstract: We present the results of a simulation study into the properties of 12 different estimators of the Hurst parameter, H, or the fractional integration parameter, d, in long memory time series. We compare and contrast their performance on simulated Fractional Gaussian Noises and fractionally integrated series with lengths between 100 and 10,000 data points and H values between 0.55 and 0.90 or d values between 0.05 and 0.40. We apply all 12 estimators to the Campito Mountain data and estimate the accuracy of their estimates using the Beran goodness-of-fit test for long memory time series.
    Keywords: Strong dependence; global dependence; long range dependence; Hurst parameter estimators
    JEL: C13 C22
    Date: 2008–06–26
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:08/13&r=ecm
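    One of the classical estimators of the kind compared in the paper, the aggregated-variance estimator of H, fits in a few lines (a sketch, not the authors' implementation):

      import numpy as np

      def hurst_aggvar(x, block_sizes):
          """Aggregated-variance estimator: the variance of block means of a
          long-memory series scales like m^(2H-2), so H is recovered from a
          log-log regression of block-mean variances on block sizes."""
          logm, logv = [], []
          for m in block_sizes:
              k = len(x) // m
              means = x[:k * m].reshape(k, m).mean(axis=1)
              logm.append(np.log(m))
              logv.append(np.log(means.var()))
          slope = np.polyfit(logm, logv, 1)[0]
          return 1 + slope / 2                  # slope = 2H - 2

      rng = np.random.default_rng(5)
      print(hurst_aggvar(rng.standard_normal(10000), [10, 20, 50, 100, 200]))  # ~0.5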
  10. By: Emanuele Taufer (DISA, Faculty of Economics, Trento University)
    Abstract: Continuous non-Gaussian stationary processes of the OU-type are becoming increasingly popular given their flexibility in modelling stylized features of financial series such as asymmetry, heavy tails and jumps. The use of non-Gaussian marginal distributions makes likelihood analysis of these processes unfeasible for virtually all cases of interest. This paper exploits the self-decomposability of the marginal laws of OU processes to provide explicit expressions of the characteristic function which can be applied to several models as well as to develop efficient estimation techniques based on the empirical characteristic function. Extensions to OU-based stochastic volatility models are provided.
    Keywords: Ornstein-Uhlenbeck process; Lévy process; self-decomposable distribution; characteristic function; estimation
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:trt:disawp:0805&r=ecm
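    The estimation idea can be sketched generically: match the empirical characteristic function to a model characteristic function over a grid of arguments. The paper supplies the explicit characteristic functions for non-Gaussian OU marginals; a Gaussian toy model stands in for them below.

      import numpy as np
      from scipy.optimize import minimize

      def ecf(x, t):
          """Empirical characteristic function of the sample x on the grid t."""
          return np.exp(1j * np.outer(t, x)).mean(axis=1)

      def cf_fit(x, model_cf, theta0, t):
          """Estimate theta by least-squares matching of model_cf(t, theta)
          to the empirical characteristic function."""
          e = ecf(x, t)
          obj = lambda th: np.sum(np.abs(model_cf(t, th) - e) ** 2)
          return minimize(obj, theta0, method="Nelder-Mead").x

      # Toy check with the Gaussian CF exp(i*t*mu - 0.5*sigma^2*t^2).
      gauss_cf = lambda t, th: np.exp(1j * t * th[0] - 0.5 * th[1] ** 2 * t ** 2)
      rng = np.random.default_rng(6)
      x = rng.normal(1.0, 2.0, 5000)
      print(cf_fit(x, gauss_cf, theta0=[0.0, 1.0], t=np.linspace(0.1, 2.0, 20)))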
  11. By: Shiyi Chen; Kiho Jeong; Wolfgang K. Härdle
    Abstract: Motivated by recurrent Neural Networks, this paper proposes a recurrent Support Vector Regression (SVR) procedure to forecast both data simulated from a nonlinear ARMA model and real financial-return data. The forecasting ability of the recurrent SVR is compared with three competing methods: MLE, recurrent MLP and feedforward SVR. In theory, MLE and MLP focus only on in-sample fit, whereas SVR balances in-sample fit against out-of-sample forecasting performance, which endows it with an excellent forecasting ability. This is confirmed by the evidence from the simulated and real data based on two forecasting accuracy evaluation metrics (NMSE and sign). That is, for one-step-ahead forecasting, the recurrent SVR is consistently better than the MLE and the recurrent MLP in forecasting both the magnitude and turning points, and clearly improves on the usual feedforward SVR.
    Keywords: Recurrent Support Vector Regression; MLE; recurrent MLP; nonlinear ARMA; financial forecasting
    JEL: C45 F37 F47
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-051&r=ecm
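    A feedforward-SVR baseline on lagged values of a simulated nonlinear AR series, using scikit-learn; the paper's recurrent SVR additionally feeds lagged residuals back in as inputs (the ARMA structure), which is not reproduced here.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(7)
      y = np.zeros(600)
      for t in range(1, 600):                 # simulated nonlinear AR(1)
          y[t] = 0.8 * np.tanh(y[t - 1]) + 0.1 * rng.standard_normal()

      p = 3                                   # number of lags as inputs
      X = np.column_stack([y[p - i - 1:-i - 1] for i in range(p)])
      target = y[p:]
      model = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X[:-100], target[:-100])
      pred = model.predict(X[-100:])          # one-step-ahead forecasts
      print("out-of-sample MSE:", np.mean((pred - target[-100:]) ** 2))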
  12. By: Gary Koop (University of Strathclyde, Glasgow, UK and The Rimini Centre for Economic Analysis, Italy); Markus Jochmann (University of Strathclyde, Glasgow, UK and The Rimini Centre for Economic Analysis, Italy); Rodney W. Strachan (University of Queensland, Australia and The Rimini Centre for Economic Analysis, Italy)
    Abstract: This paper builds a model which has two extensions over a standard VAR. The first of these is stochastic search variable selection, an automatic model selection device that allows coefficients in a possibly over-parameterized VAR to be set to zero. The second allows for an unknown number of structural breaks in the VAR parameters. We investigate the in-sample and forecasting performance of our model in an application involving a commonly-used US macroeconomic data set. We find that, in-sample, these extensions clearly are warranted. In a recursive forecasting exercise, we find moderate improvements over a standard VAR, although most of these improvements are due to the use of stochastic search variable selection rather than the inclusion of breaks.
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:19-08&r=ecm
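    The core of the stochastic search variable selection device fits in a few lines: each coefficient carries an inclusion indicator drawn from a spike-and-slab mixture. A sketch of that single Gibbs step, with illustrative prior values, not the paper's full VAR-with-breaks model:

      import numpy as np
      from scipy.stats import norm

      def draw_gamma(beta_j, tau0=0.01, tau1=10.0, p=0.5,
                     rng=np.random.default_rng()):
          """One SSVS Gibbs step: draw the indicator gamma_j given beta_j under
          beta_j ~ (1 - gamma_j) N(0, tau0^2) + gamma_j N(0, tau1^2)."""
          p1 = p * norm.pdf(beta_j, 0, tau1)         # 'slab': variable is in
          p0 = (1 - p) * norm.pdf(beta_j, 0, tau0)   # 'spike': shrunk to zero
          return rng.random() < p1 / (p0 + p1)

      print(draw_gamma(0.9))     # sizeable coefficient: almost surely included
      print(draw_gamma(0.001))   # tiny coefficient: usually assigned the spike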
  13. By: Chiara Tommasi (University of Milano)
    Abstract: The KL-optimality criterion has been recently proposed to discriminate between any two statistical models. However, designs which are optimal for model discrimination may be inadequate for parameter estimation. In this paper, the DKL-optimality criterion is proposed which is useful for the dual problem of model discrimination and parameter estimation. An equivalence theorem and a stopping rule for the corresponding iterative algorithms are provided. A pharmacokinetics application is given to show the good properties of a DKL-optimum design.
    Keywords: D-optimality, T-optimality, Kullback-Leibler distance, KL-optimality
    Date: 2008–05–05
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1071&r=ecm
  14. By: Mallick, Debdulal
    Abstract: In discrete choice models the marginal effect of a variable of interest that is interacted with another variable differs from the marginal effect of a variable that is not interacted with any variable. The magnitude of the interaction effect is also not equal to the marginal effect of the interaction term. I present consistent estimators of both marginal and interaction effects in ordered response models. This procedure is general and can easily be extended to other discrete choice models.
    Keywords: Marginal effect; interaction effect; ordered probit
    JEL: C12 C25
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:9617&r=ecm
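    The distinction is easy to see numerically in an ordered probit with index b1*x1 + b2*x2 + b3*x1*x2 (illustrative parameter values, not estimates): the marginal effect of x1 carries the term (b1 + b3*x2), and the interaction effect is a cross-partial, not the coefficient on x1*x2.

      import numpy as np
      from scipy.stats import norm

      b1, b2, b3 = 0.5, 0.3, 0.2
      c2 = 1.0                                      # upper cutpoint

      def me_x1(x1, x2):
          """Marginal effect of x1 on P(y = top category), where
          P(y = top) = 1 - Phi(c2 - x'b): phi(c2 - x'b) * (b1 + b3*x2),
          not b1 alone, since x1 also enters through the interaction."""
          return norm.pdf(c2 - (b1 * x1 + b2 * x2 + b3 * x1 * x2)) * (b1 + b3 * x2)

      def interaction_effect(x1, x2, h=1e-5):
          """Interaction effect = cross-partial d2P/dx1dx2, which differs from
          the 'marginal effect of the interaction term'."""
          return (me_x1(x1, x2 + h) - me_x1(x1, x2 - h)) / (2 * h)

      print(me_x1(0.0, 1.0), interaction_effect(0.0, 1.0))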
  15. By: Valter Di Giacinto (Bank of Italy, L'Aquila Branch, Economic Research Unit); Marcello Pagnini (Bank of Italy, Bologna Branch, Economic Research Unit)
    Abstract: We propose two indexes to measure the agglomeration forces acting within and between different regions. Unlike the existing measures of agglomeration, our model-based indexes allow for simultaneous treatment of both aspects. Local plant diffusion in a given industry is modelled as a spatial error components process (SEC). Maximum likelihood inference on model parameters is dealt with, including the problem of data censoring. The statistical properties of standard agglomeration indexes in the data environment provided by our SEC model are then treated. Finally, our methodology is applied to Italian census data for both manufacturing and service industries.
    Keywords: agglomeration, spatial autocorrelation, spatial error components model
    JEL: R12 L70 C19
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_674_08&r=ecm
  16. By: Kalckreuth, Ulf von
    Abstract: Understanding adjustment processes has become central in economics. Empirical analysis is fraught with the problem that the target is usually unobserved. This paper develops, simulates and applies GMM methods for estimating dynamic adjustment models in a panel data context with partially unobserved targets and endogenous, time-varying persistence. In this setup, the standard first difference GMM procedure fails. I propose three estimation strategies. One is based on quasi-differencing, and it leads to two different, but related sets of moment conditions. The second is characterised by a state-dependent filter, while the third is an adaptation of the GMM level estimator. Economic adjustment processes at the micro level are inherently difficult to estimate, since the target of adjustment can typically be observed only imperfectly. This paper examines state-dependent economic adjustment processes, as they arise under time-varying constraints such as financial restrictions. The problem of latent target levels is addressed here using panel information and an error-components approach. The standard methods for dynamic panel models, as developed by Anderson and Hsiao (1982), Arellano and Bond (1991), Arellano and Bover (1995) and Blundell and Bond (1998), are not applicable to this case, since they presuppose time-invariant linear dynamics. The paper shows how the GMM methodology can be generalised to economic adjustment processes in which the target is partially unobserved and the nonlinearity takes the form of discrete regimes. This is not trivial, because the unknown and time-varying adjustment coefficient interacts with the equally unknown individual error term. But the payoff is substantial, because a range of well-known procedures and standard tests can be brought to bear on the problem of economic adjustment. The estimation methods described here can help to treat a large number of economic questions more adequately than has hitherto been possible. Examples include the dynamics of the demand for labour and capital, of price setting, and of the adjustment of the financial structure of firms and banks.
    Keywords: Dynamic panel data models, economic adjustment
    JEL: C15 C23 D21
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:7337&r=ecm
  17. By: Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT); Gary King (Institute for Quantitative Social Science, Harvard University); Giuseppe Porro (Department of Economics and Statistics, University of Trieste)
    Abstract: We address a major discrepancy in matching methods for causal inference in observational data. Since these data are typically plentiful, the goal of matching is to reduce bias and only secondarily to keep variance low. However, most matching methods seem designed for the opposite problem, guaranteeing sample size ex ante but limiting bias by controlling for covariates through reductions in the imbalance between treated and control groups only ex post and only sometimes. (The resulting practical difficulty may explain why many published applications do not check whether imbalance was reduced and so may not even be decreasing bias.) We introduce a new class of "Monotonic Imbalance Bounding" (MIB) matching methods that enables one to choose a fixed level of maximum imbalance, or to reduce maximum imbalance for one variable without changing it for the others. We then discuss a specific MIB method called "Coarsened Exact Matching" (CEM) which, unlike most existing approaches, also explicitly bounds through ex ante user choice both the degree of model dependence and the causal effect estimation error, eliminates the need for a separate procedure to restrict data to common support, meets the congruence principle, is approximately invariant to measurement error, works well with modern methods of imputation for missing data, is computationally efficient even with massive data sets, and is easy to understand and use. This method can improve causal inferences in a wide range of applications, and may be preferred for simplicity of use even when it is possible to design superior methods for particular problems. We also make available open source software which implements all our suggestions.
    Keywords: causal inferences, matching, treatment effect estimation
    Date: 2008–06–29
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1073&r=ecm
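    The core of coarsened exact matching fits in a few lines of pandas (a sketch of the idea under an equal-width binning assumption; the authors' software adds weights and many refinements):

      import numpy as np
      import pandas as pd

      def cem(df, treat, covars, bins=5):
          """Coarsen each covariate into bins, form strata from the joint bin
          signature, and keep only strata containing both treated and
          control units."""
          coarse = df[covars].apply(lambda c: pd.cut(c, bins, labels=False))
          key = coarse.astype(str).agg("-".join, axis=1)
          ok = key.groupby(key).transform(
              lambda g: df.loc[g.index, treat].nunique() == 2)
          return df[ok]

      rng = np.random.default_rng(8)
      df = pd.DataFrame({"age": rng.uniform(20, 60, 500),
                         "income": rng.normal(50, 10, 500),
                         "treated": rng.integers(0, 2, 500)})
      matched = cem(df, "treated", ["age", "income"])
      print(len(matched), "of", len(df), "units retained")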
  18. By: Francisco Peñaranda; Enrique Sentana
    Abstract: We propose new spanning tests that assess if the economically meaningful cost and mean representing portfolios are shared by the initial and additional assets. We show that our proposed tests are asymptotically equivalent to existing ones under local alternatives, and analyse their asymptotic relative efficiency. We extend optimal GMM inference to deal with singularities arising in some spanning tests, and show that our tests generalise naturally to situations in which we consider all active portfolio strategies. Finally, we apply our tests to strategies involving size and book-to-market sorted stock portfolios whose weights depend on the state of the credit cycle.
    Keywords: Asset Pricing, Asymptotic Slopes, Dynamic Portfolio Strategies, GMM, Representing portfolios, Singular Covariance Matrix
    JEL: G11 G12 C12 C13
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1101&r=ecm
  19. By: Pieralda Ferrari (University of Milan); Laura Pagani (University of Udine); Carlo Fiorio (University of Milan)
    Abstract: In this paper an integrated use of Nonlinear Principal Component Analysis (NLPCA) and Multilevel Models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable that depends on individual and contextual covariates. NLPCA is used to measure the level of a latent variable and MLM are adopted for detecting individual and environmental determinants of the level. By using the Eurobarometer survey data, this approach is applied to analyse the European users' satisfaction with services of general interest after the recent privatisation and liberalisation policies.
    Keywords: Nonlinear Principal Component Analysis, Multilevel Models, satisfaction, privatisation and liberalisation policies
    Date: 2008–03–17
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1068&r=ecm
  20. By: Karim M. Abadir (Imperial College London, London, UK and The Rimini Centre for Economic Analysis, Italy); Gabriel Talmain (University of Glasgow, Glasgow, UK); Giovanni Caggiano (University of Padua, Italy)
    Abstract: We detect a new stylized fact about the common dynamics of macroeconomic and financial aggregates. The rate of decay of the memory of these series is depicted by their Auto-Correlation Functions (ACFs). They all share a common four-parameter functional form that we derive from the dynamics of an RBC model with heterogeneous firms. We find that, not only does our formula fit the data better than the ACFs that arise from autoregressive models, but it also yields the correct shape of the ACF. This can help policymakers understand better the lags with which an economy evolves, and the onset of its turning points.
    JEL: E32 E52 E63
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:18-08&r=ecm
  21. By: Leon du Toit (Department of Economics, Stellenbosch University)
    Abstract: Among the various methods used to identify the business cycle from aggregate data, the Hodrick-Prescott filter has become an industry standard – it ‘identifies’ the business cycle by removing low-frequency information, thereby smoothing the data. Since the filter’s inception in 1980, the value of the smoothing constant for quarterly data has been set at a ‘default’ of 1600, following the suggestion of Hodrick and Prescott (1980). This paper argues that this ‘default value’ is inappropriate due to its ad hoc nature and problematic underlying assumptions. Instead this paper uses the method of optimal filtering, developed by Pedersen (1998, 2001, and 2002), to determine the optimal value of the smoothing constant for South Africa. The optimal smoothing constant is that value which least distorts the frequency information of the time series. The result depends on both the censoring rule for the duration of the business cycles and the structure of the economy. The paper raises a number of important issues concerning the practical use of the HP filter, and provides an easily replicable method in the form of MATLAB code.
    Keywords: Hodrick-Prescott filter, Spectral analysis, Ideal filtering, Optimal filtering, Distortionary filtering, Business cycles, MATLAB
    JEL: C22 E32
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:sza:wpaper:wpapers55&r=ecm
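    For reference, the filter itself is one call in statsmodels; lamb=1600 below is exactly the conventional default that the paper argues is ad hoc, and Pedersen's optimally chosen value would replace it (hypothetical simulated data):

      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      rng = np.random.default_rng(9)
      log_gdp = np.cumsum(0.005 + 0.01 * rng.standard_normal(160))  # quarterly
      cycle, trend = hpfilter(log_gdp, lamb=1600)  # replace 1600 with the
      print(cycle[:4])                             # optimally chosen lambda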
  22. By: Manca, A; Austin, P. C
    Abstract: The methodology relating to the statistical analysis of individual patient-level cost-effectiveness data collected alongside randomised controlled trials (RCTs) has evolved dramatically in the last ten years. This body of techniques has been developed and applied mainly in the context of the randomised clinical trial design. There are, however, many situations in which a trial is neither the most suitable nor the most efficient vehicle for the evaluation. This paper provides a tutorial-like discussion of the ways in which propensity score methods could be used to assist in the analysis of observational individual patient-level cost-effectiveness data. As a motivating example, we assessed the cost-effectiveness of CABG versus PTCA, one year post procedure, in a cohort of individuals who received the intervention within 365 days of their index admission for AMI. The data used for this paper were obtained from the Ontario Myocardial Infarction Database (OMID), linking these with data from the Canadian Institute for Health Information (CIHI), the Ontario Health Insurance Plan (OHIP), the Ontario Drug Benefit (ODB) program, and the Ontario Registered Persons Database (RPDB). We discuss three ways in which the propensity score can be used to control for confounding in the estimation of average cost-effectiveness, and provide syntax code for both propensity score matching and cost-effectiveness modelling.
    Keywords: Cost, cost-effectiveness, propensity score, revascularisation, statistical methods
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:yor:hectdg:08/20&r=ecm
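    A minimal sketch of one of the three approaches: estimate the propensity score by logistic regression, do 1:1 nearest-neighbour matching, and compare mean costs and effects across matched pairs. All variable names (cost, qaly, x1, x2) and the simulated data are hypothetical.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(10)
      n = 1000
      df = pd.DataFrame({"x1": rng.standard_normal(n),
                         "x2": rng.standard_normal(n)})
      df["treated"] = (rng.random(n) < 1 / (1 + np.exp(-df.x1))).astype(int)
      df["cost"] = 5000 + 2000 * df.treated + 500 * df.x1 + rng.normal(0, 300, n)
      df["qaly"] = 0.7 + 0.05 * df.treated + 0.02 * df.x2 + rng.normal(0, 0.05, n)

      lr = LogisticRegression().fit(df[["x1", "x2"]], df.treated)
      ps = lr.predict_proba(df[["x1", "x2"]])[:, 1]  # estimated propensity score
      tr, co = df[df.treated == 1], df[df.treated == 0]
      # 1:1 nearest-neighbour matching on the score, with replacement
      idx = np.abs(ps[co.index][None, :] - ps[tr.index][:, None]).argmin(axis=1)
      d_cost = (tr.cost.values - co.cost.values[idx]).mean()
      d_qaly = (tr.qaly.values - co.qaly.values[idx]).mean()
      print("incremental cost per QALY:", d_cost / d_qaly)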

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.