nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒02‒27
fourteen papers chosen by
Sune Karlsson
Orebro University

  1. Forecasting Realized Volatility Using A Nonnegative Semiparametric Model By Daniel PREVE; Anders ERIKSSON; Jun YU
  2. Adaptive hybrid Metropolis-Hastings samplers for DSGE models By Strid, Ingvar; Giordani, Paolo; Kohn, Robert
  3. Modeling Sample Selection for Durations with Time-Varying Covariates, with an Application to the Duration of Exchange Rate Regimes By Boehmke, Frederick J.; Meissner, Christopher M.
  4. Sample Survey Calibration: An Information-theoretic Perspective By Martin Wittenberg
  5. Econometric Analysis of Continuous Time Models: A Survey of Peter Phillips' Work and Some New Results By Jun YU
  6. Time varying Hierarchical Archimedean Copulae By Wolfgang Karl Härdle; Ostap Okhrin; Yarema Okhrin
  7. Minimum Wages and Employment: Reconsidering the Use of a Time-Series Approach as an Evaluation Tool By Lee, Wang-Sheng; Suardi, Sandy
  8. Survival Analysis in LGD Modeling By Jiří Witzany; Michal Rychnovský; Pavel Charamza
  9. "Panel Data Analysis of Japanese Residential Water Demand Using a Discrete/Continuous Choice Approach" By Koji Miyawaki; Yasuhiro Omori; Akira Hibiki
  10. Finite-Sample Bias and Inconsistency in the Estimation of Poverty Maps By Jesse Naidoo
  11. Monetary Policy and Identification in SVAR Models: A Data Oriented Perspective By Fragetta, Matteo
  12. Estimating distributions of potential outcomes using local instrumental variables with an application to changes in college enrollment and wage inequality. By Carneiro, P.; Lee, S.
  13. Multiple imputation of missing values in the wave 2007 of the IAB Establishment Panel By Drechsler, Jörg
  14. Evaluating the Impact of Health Programmes By Justine Burns; Malcolm Keswell; Rebecca Thornton

  1. By: Daniel PREVE (School of Economics, Singapore Management University); Anders ERIKSSON (Department of Information Science/Statistics, University of Uppsala); Jun YU (School of Economics, Singapore Management University)
    Abstract: This paper introduces a parsimonious and yet flexible nonnegative semiparametric model to forecast financial volatility. The new model extends the linear nonnegative autoregressive model of Barndorff-Nielsen & Shephard (2001) and Nielsen & Shephard (2003) by way of a power transformation. It is semiparametric in the sense that the dependency structure and distributional form of its error component are left unspecified. The statistical properties of the model are discussed and a novel estimation method is proposed. Simulation studies validate the new estimation method and suggest that it works reasonably well in finite samples. The out-of-sample performance of the proposed model is evaluated against a number of standard methods, using data on S&P 500 monthly realized volatilities. The competing models include the exponential smoothing method, a linear AR(1) model, a log-linear AR(1) model, and two long-memory ARFIMA models. Various loss functions are utilized to evaluate the predictive accuracy of the alternative methods. It is found that the new model generally produces highly competitive forecasts.
    Keywords: Autoregression, nonlinear/non-Gaussian time series, realized volatility, semiparametric model, volatility forecast.
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:22-2009&r=ecm
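    Illustration: a minimal simulation sketch of a power-transformed nonnegative AR(1) in the spirit of the model described above. The recursion v_t = (phi*v_{t-1}^lam + u_t)^(1/lam), the exponential errors, and all parameter values are illustrative assumptions, not the authors' specification or estimator.
```python
# Illustrative sketch only: a power-transformed nonnegative AR(1) in the spirit
# of the paper. Parameterization and error distribution are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_power_ar1(n, phi=0.6, lam=0.5, scale=0.1, v0=0.2):
    """Simulate v_t = (phi * v_{t-1}^lam + u_t)^(1/lam) with u_t >= 0 (exponential)."""
    v = np.empty(n)
    v[0] = v0
    for t in range(1, n):
        u = rng.exponential(scale)                          # nonnegative error; its
        v[t] = (phi * v[t - 1] ** lam + u) ** (1.0 / lam)   # distribution is left unspecified
    return v

v = simulate_power_ar1(1_000)

# Naive one-step forecast from the conditional location implied by the recursion
# (plugging in the error mean); a stand-in for the paper's estimation method.
phi, lam, scale = 0.6, 0.5, 0.1
forecast = (phi * v[-1] ** lam + scale) ** (1.0 / lam)
print(f"last value {v[-1]:.4f}, one-step forecast {forecast:.4f}")
```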
  2. By: Strid, Ingvar (Dept. of Economic Statistics, Stockholm School of Economics); Giordani, Paolo (Research division, Sveriges Riksbank); Kohn, Robert (Australian School of Business, University of New South Wales)
    Abstract: Bayesian inference for DSGE models is typically carried out by single block random walk Metropolis, involving very high computing costs. This paper combines two features, adaptive independent Metropolis-Hastings and parallelisation, to achieve large computational gains in DSGE model estimation. The history of the draws is used to continuously improve a t-copula proposal distribution, and an adaptive random walk step is inserted at predetermined intervals to escape difficult points. In linear estimation applications to a medium scale (23 parameters) and a large scale (51 parameters) DSGE model, the computing time per independent draw is reduced by 85% and 65-75% respectively. In a stylised nonlinear estimation example (13 parameters) the reduction is 80%. The sampler is also better suited to parallelisation than random walk Metropolis or blocking strategies, so that the effective computational gains, i.e. the reduction in wall-clock time per independent equivalent draw, can potentially be much larger.
    Keywords: Markov Chain Monte Carlo (MCMC); Adaptive Metropolis-Hastings; Parallel algorithm; DSGE model; Copula
    JEL: C11 C63
    Date: 2010–02–14
    URL: http://d.repec.org/n?u=RePEc:hhs:hastef:0724&r=ecm
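    Illustration: a toy adaptive independence Metropolis-Hastings loop on a simple two-dimensional target. The paper's sampler uses a t-copula proposal refitted from the draw history with interleaved adaptive random-walk steps; the sketch below fits a plain Gaussian proposal instead, so the target, proposal, and tuning choices are all assumptions made for illustration.
```python
# Toy adaptive independence Metropolis-Hastings: the proposal is periodically
# refitted from the draw history. Simplified stand-in, not the paper's algorithm.
import numpy as np
from scipy.stats import multivariate_normal

np.random.seed(1)

def log_target(x):
    # stand-in posterior: a correlated bivariate normal
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return multivariate_normal(mean=np.zeros(2), cov=cov).logpdf(x)

n_iter, adapt_every = 5_000, 500
x = np.zeros(2)
draws = [x]
mu, cov = np.zeros(2), np.eye(2) * 4.0            # initial (deliberately poor) proposal

for i in range(1, n_iter):
    prop = multivariate_normal(mean=mu, cov=cov)
    x_new = prop.rvs()
    # independence-MH acceptance ratio: pi(x') q(x) / (pi(x) q(x'))
    log_alpha = (log_target(x_new) + prop.logpdf(x)) - (log_target(x) + prop.logpdf(x_new))
    if np.log(np.random.uniform()) < log_alpha:
        x = x_new
    draws.append(x)
    if i % adapt_every == 0:                       # refit the proposal from the history
        hist = np.array(draws)
        mu, cov = hist.mean(axis=0), np.cov(hist.T) + 1e-6 * np.eye(2)

print("posterior mean estimate:", np.array(draws)[1_000:].mean(axis=0))
```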
  3. By: Boehmke, Frederick J. (University of Iowa); Meissner, Christopher M. (University of California, Davis and NBER)
    Abstract: We extend existing estimators for duration data that suffer from non-random sample selection to allow for time-varying covariates. Rather than a continuous-time duration model, we propose a discrete-time alternative that models the effects of sample selection at the time of selection across all subsequent years of the resulting spell. Properties of the estimator are compared to those of a naive discrete duration model through Monte Carlo analysis and indicate that our estimator outperforms the naive model when selection is non-trivial. We then apply this estimator to the question of the duration of monetary regimes and find evidence that ignoring selection into pegs leads to faulty inferences.
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:ecl:ucdeco:09-22&r=ecm
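    Illustration: the naive benchmark mentioned above can be written as a logit on person-period data with a time-varying covariate. The sketch below fits only that naive discrete-time hazard on simulated spells; the selection-corrected estimator itself is not reproduced, and all data-generating choices are assumptions.
```python
# Naive discrete-time duration model (person-period logit with a time-varying
# covariate), the benchmark the selection-corrected estimator improves on.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
rows = []
for i in range(500):                               # 500 spells, followed at most 10 periods
    for t in range(1, 11):
        x = rng.normal()                           # time-varying covariate
        p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * x)))  # hazard of the spell ending this period
        end = rng.uniform() < p
        rows.append({"id": i, "t": t, "x": x, "end": int(end)})
        if end:
            break

df = pd.DataFrame(rows)
X = sm.add_constant(df[["x", "t"]])                # baseline hazard crudely linear in t
fit = sm.Logit(df["end"], X).fit(disp=0)
print(fit.params)                                  # naive estimates, ignoring any selection
```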
  4. By: Martin Wittenberg (School of Economics, University of Cape Town)
    Abstract: We show that the pseudo empirical maximum likelihood estimator can be recast as a calibration estimator. The process of estimating the probabilities pk of the distribution function can also be done in a maximum entropy framework. We suggest that a minimum cross-entropy estimator has attractive theoretical properties. A Monte Carlo simulation suggests that this estimator outperforms the PEMLE and the Horvitz-Thompson estimator. This is a joint SALDRU/DataFirst Working Paper as part of the Mellon Data Quality Project. For more information about the project visit www.datafirst.uct.ac.za.
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:ldr:wpaper::41&r=ecm
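    Illustration: a compact sketch of calibration by exponential tilting, one way to solve the minimum cross-entropy problem referred to above: choose weights w_k = d_k*exp(x_k'lambda) so that weighted sample totals match known population totals. The design weights, auxiliaries, and totals below are made up.
```python
# Minimum cross-entropy (exponential tilting / raking) calibration sketch:
# find weights w_k = d_k * exp(x_k' lam) whose weighted totals hit known
# population totals. Design weights, auxiliaries and totals are invented.
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(3)
n = 200
d = np.full(n, 50.0)                                          # design (inverse-inclusion) weights
X = np.column_stack([np.ones(n), rng.normal(1.0, 0.3, n)])    # auxiliaries: 1, x
totals = np.array([10_000.0, 10_500.0])                       # known population totals for (1, x)

def calib_gap(lam):
    w = d * np.exp(X @ lam)
    return X.T @ w - totals                                   # calibration equations

sol = root(calib_gap, x0=np.zeros(2))
w = d * np.exp(X @ sol.x)
print("totals matched:", np.allclose(X.T @ w, totals))
print("weight range:", w.min(), w.max())
```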
  5. By: Jun YU (School of Economics, Singapore Management University)
    Abstract: Econometric analysis of continuous time models has drawn the attention of Peter Phillips for nearly 40 years, resulting in many important publications by him. In these publications he has dealt with a wide range of continuous time models and econometric problems, from univariate equations to systems of equations, from asymptotic theory to finite sample issues, from parametric models to nonparametric models, from identification problems to estimation and inference problems, from stationary models to nonstationary and nearly nonstationary models. This paper provides an overview of Peter Phillips' contributions in the continuous time econometrics literature. We review the problems that have been tackled by him, outline the main techniques suggested by him, and discuss the main results obtained by him. Based on his early work, we compare the performance of two asymptotic distributions in a simple setup. Results indicate that the in-fill asymptotics significantly outperforms the long-span asymptotics.
    JEL: C22 C32
    Date: 2009–11
    URL: http://d.repec.org/n?u=RePEc:siu:wpaper:21-2009&r=ecm
  6. By: Wolfgang Karl Härdle; Ostap Okhrin; Yarema Okhrin
    Abstract: There is increasing demand for models of time-varying and non-Gaussian dependencies for multivariate time series. Available models suffer from the curse of dimensionality or from restrictive assumptions on the parameters and the distribution. A promising class of models is the hierarchical Archimedean copulae (HAC), which allow for non-exchangeable and non-Gaussian dependency structures with a small number of parameters. In this paper we develop a novel adaptive technique for estimating the parameters and the structure of HAC for time series. The approach relies on a local change point detection procedure and a locally constant HAC approximation. Typical applications are in finance and, more recently, in the spatial analysis of weather parameters. We analyse the time-varying dependency structure of stock indices and exchange rates. We find that for stock indices the copula parameter changes dynamically but the hierarchical structure is constant over time. Interestingly, in our exchange rate example both structure and parameters vary dynamically.
    Keywords: copula, multivariate distribution, Archimedean copula, adaptive estimation
    JEL: C13 C14 C50
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010-018&r=ecm
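    Illustration: a much simplified stand-in for time-varying Archimedean dependence (not the authors' adaptive local change-point procedure): track a single Clayton copula parameter over rolling windows by inverting Kendall's tau, theta = 2*tau/(1-tau). The simulated series and window choices are assumptions.
```python
# Rolling-window Clayton copula parameter via Kendall's tau inversion,
# theta = 2*tau / (1 - tau). A crude stand-in for adaptive local change-point
# estimation of hierarchical Archimedean copulae.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(4)
# toy bivariate series whose dependence strengthens halfway through the sample
n = 1_000
z = rng.normal(size=(n, 2))
rho = np.where(np.arange(n) < n // 2, 0.2, 0.7)
x = z[:, 0]
y = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]

window = 250
thetas = []
for start in range(0, n - window + 1, 50):
    tau, _ = kendalltau(x[start:start + window], y[start:start + window])
    thetas.append(2 * tau / (1 - tau))            # Clayton parameter implied by tau
print(np.round(thetas, 2))                        # dependence parameter drifts upward
```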
  7. By: Lee, Wang-Sheng (RMIT University); Suardi, Sandy (La Trobe University)
    Abstract: The time-series approach used in the minimum wage literature essentially aims to estimate a treatment effect of increasing the minimum wage. In this paper, we employ a novel approach based on aggregate time-series data that allows us to determine if minimum wage changes have significant effects on employment. This involves the use of tests for structural breaks as a device for identifying discontinuities in the data which potentially represent treatment effects. In an application based on Australian data, the tentative conclusion is that the introduction of minimum wage legislation in Australia in 1997 and subsequent minimum wage increases appear not to have had any significant negative employment effects for teenagers.
    Keywords: minimum wage, teenage employment, structural break
    JEL: C22 J3
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4748&r=ecm
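    Illustration: a bare-bones sup-F (Quandt-Andrews type) statistic for a single mean shift, the kind of structural-break device described above. The simulated series, trimming fraction, and omission of critical values are illustrative simplifications.
```python
# Sup-F statistic for a single break in the mean of a series, scanning all
# candidate break dates in the trimmed interior. Series and trimming are
# illustrative; asymptotic critical values (Andrews, 1993) are not included.
import numpy as np

rng = np.random.default_rng(5)
n = 200
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.0, 1.0, 80)])   # break at t=120

def sup_f_mean_break(y, trim=0.15):
    n = len(y)
    ssr0 = np.sum((y - y.mean()) ** 2)            # restricted model: constant mean
    best_f, best_t = -np.inf, None
    for t in range(int(trim * n), int((1 - trim) * n)):
        ssr1 = np.sum((y[:t] - y[:t].mean()) ** 2) + np.sum((y[t:] - y[t:].mean()) ** 2)
        f = (ssr0 - ssr1) / (ssr1 / (n - 2))      # F-statistic for a break at date t
        if f > best_f:
            best_f, best_t = f, t
    return best_f, best_t

f_stat, break_date = sup_f_mean_break(y)
print(f"sup-F = {f_stat:.1f} at t = {break_date}")
```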
  8. By: Jiří Witzany (University of Economics, Prague, Czech Republic); Michal Rychnovský (University of Economics, Prague, Czech Republic); Pavel Charamza (University of Economics, Prague, Czech Republic)
    Abstract: The paper proposes an application of the survival time analysis methodology to estimations of the Loss Given Default (LGD) parameter. The main advantage of the survival analysis approach compared to classical regression methods is that it allows exploiting partial recovery data. The model is also modified in order to improve performance of the appropriate goodness of fit measures. The empirical testing shows that the Cox proportional model applied to LGD modeling performs better than the linear and logistic regressions. In addition, a significant improvement is achieved with the modified “pseudo” Cox LGD model.
    Keywords: credit risk, recovery rate, loss given default, correlation, regulatory capital
    JEL: G21 G28 C14
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2010_02&r=ecm
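    Illustration: a minimal Cox proportional hazards fit on synthetic recovery data, assuming the Python lifelines package is available. It only shows how incomplete (partial) recoveries enter as censored observations; it is not the paper's modified "pseudo" Cox LGD model.
```python
# Cox proportional hazards on synthetic recovery durations, with incomplete
# workouts entering as censored observations. Requires the lifelines package;
# data and covariates are invented for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 500
collateral = rng.binomial(1, 0.4, n)                 # covariate: collateralised loan
true_scale = np.exp(1.0 - 0.7 * collateral)
time_to_recovery = rng.exponential(true_scale)       # months until full recovery
censor_time = rng.uniform(0.5, 4.0, n)               # end of the observation window
observed = (time_to_recovery <= censor_time).astype(int)
duration = np.minimum(time_to_recovery, censor_time)

df = pd.DataFrame({"duration": duration, "recovered": observed, "collateral": collateral})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="recovered")
cph.print_summary()                                  # collateral should speed recovery
```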
  9. By: Koji Miyawaki (National Institute for Environmental Studies); Yasuhiro Omori (Faculty of Economics, University of Tokyo); Akira Hibiki (National Institute for Environmental Studies)
    Abstract: Block rate pricing is often applied to income taxation, telecommunication services, and brand marketing in addition to its best-known application in public utility services. Under block rate pricing, consumers face piecewise-linear budget constraints. A discrete/continuous choice approach is usually used to account for piecewise-linear budget constraints and price endogeneity in demand. A recent study proposed a methodology to incorporate a separability condition that previous studies ignore, by implementing a Markov chain Monte Carlo simulation based on a hierarchical Bayesian approach. To extend this approach to panel data, our study proposes a Bayesian hierarchical model incorporating individual effects. The random coefficients model shows that the price and income elasticities are estimated to be negative and positive, respectively, and that the coefficients on the number of household members and the number of rooms per household are estimated to be positive. Furthermore, the AR(1) error component model suggests that Japanese residential water demand does not exhibit serial correlation.
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2010cf717&r=ecm
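    Illustration: the piecewise-linear budget constraint can be made concrete with a small helper that, for an increasing block-rate tariff, returns the marginal price and virtual income on each segment, the ingredients of a discrete/continuous choice likelihood. The tariff schedule and income below are made up.
```python
# Marginal price and virtual income on each segment of an increasing block-rate
# tariff: the linearized budget constraints underlying the discrete/continuous
# choice approach. Thresholds, prices and income are illustrative only.
thresholds = [0.0, 10.0, 20.0]          # block upper bounds (last block unbounded), m3
prices = [1.0, 1.5, 2.2]                # marginal price within each block
income = 100.0

def segments(income, thresholds, prices):
    """Return (marginal price, virtual income) for each block of the tariff."""
    out = []
    for j, p_j in enumerate(prices):
        virtual = income
        for k in range(j):              # compensate for cheaper infra-marginal blocks
            width = thresholds[k + 1] - thresholds[k]
            virtual += (p_j - prices[k]) * width
        out.append((p_j, virtual))
    return out

for j, (p, y_v) in enumerate(segments(income, thresholds, prices), start=1):
    print(f"block {j}: marginal price {p:.2f}, virtual income {y_v:.2f}")
```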
  10. By: Jesse Naidoo
    Abstract: I argue that the estimation technique introduced by Elbers, Lanjouw and Lanjouw, widely used in the poverty mapping literature, is highly sensitive to specification, severely biased in finite samples, and almost certain to fail to estimate the poverty headcount consistently. First, I show that the specification of the first-stage model of household expenditure strongly influences the estimated headcount; the range of obtainable estimates is on the order of 20% for many districts, and is as high as 48% for some areas. Further, some specifications imply province-level headcounts which diverge from the direct estimates by as many as six standard deviations. Second, I construct bootstrap confidence intervals for the difference between the estimates under alternative specifications, which show that (at a 2% level of significance) finite-sample bias is present in more than 42% of districts even in the best-performing regions. I calculate approximate lower bounds for the bias; I find it to be on the order of 3% for most areas, but the lower bounds range as high as 19.6% in some provinces. Finally, I argue that consistent estimation of the first-stage model is necessary for consistent second-stage imputations, and I decompose the difference between the true and estimated headcount into a sampling component and a specification component, the latter of which is asymptotically persistent. Given these results, it appears that the poverty maps estimated by this technique reflect primarily the arbitrary and unexamined methodological choices of their authors rather than robust features of the data.
    Date: 2009–09
    URL: http://d.repec.org/n?u=RePEc:ldr:wpaper:36&r=ecm
  11. By: Fragetta, Matteo
    Abstract: There is an ongoing debate on how to identify monetary policy shocks in SVAR models. Graphical modelling exploits the statistical properties of the data for identification and offers a data-based tool to shed light on the issue. The information set of the monetary authorities, which is essential for the identification of the monetary policy shock, seems to depend on the availability of data at a higher frequency than the policy instrument.
    Keywords: Monetary Policy; SVAR; Graphical Modelling
    JEL: C32 E50
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:20616&r=ecm
  12. By: Carneiro, P.; Lee, S.
    Abstract: This paper extends the method of local instrumental variables developed by Heckman and Vytlacil [Heckman, J., Vytlacil, E., 2005. Structural equations, treatment effects, and econometric policy evaluation. Econometrica 73(3), 669–738] to the estimation of not only means, but also distributions of potential outcomes. The newly developed method is illustrated by applying it to changes in college enrollment and wage inequality using data from the National Longitudinal Survey of Youth of 1979. Increases in college enrollment cause changes in the distribution of ability among college and high school graduates. This paper estimates a semiparametric selection model of schooling and wages to show that, for fixed skill prices, a 14% increase in college participation (analogous to the increase observed in the 1980s), reduces the college premium by 12% and increases the 90–10 percentile ratio among college graduates by 2%.
    Date: 2009–04
    URL: http://d.repec.org/n?u=RePEc:ner:ucllon:http://eprints.ucl.ac.uk/16157/&r=ecm
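    Illustration: a heavily simplified local-instrumental-variables sketch: estimate the propensity score P(Z), regress the outcome on a flexible function of it, and read the marginal treatment effect off the derivative. The simulated data and the cubic polynomial outcome equation are assumptions; the paper's semiparametric selection model is considerably richer.
```python
# Local IV sketch: MTE(p) as the derivative of E[Y | P(Z) = p] with respect to p,
# here using a cubic polynomial in the estimated propensity score. Data generation
# and functional forms are illustrative, not the paper's semiparametric model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5_000
z = rng.normal(size=n)                               # instrument (e.g. a cost shifter)
v = rng.normal(size=n)
d = (0.8 * z - v > 0).astype(float)                  # treatment choice (college)
y = 1.0 + d * (0.5 - 0.4 * v) + rng.normal(scale=0.5, size=n)   # heterogeneous gains

# first stage: propensity score
p_hat = sm.Logit(d, sm.add_constant(z)).fit(disp=0).predict(sm.add_constant(z))

# outcome on a polynomial in p_hat, then differentiate analytically
P = np.column_stack([np.ones(n), p_hat, p_hat**2, p_hat**3])
beta = np.linalg.lstsq(P, y, rcond=None)[0]
grid = np.linspace(0.1, 0.9, 9)
mte = beta[1] + 2 * beta[2] * grid + 3 * beta[3] * grid**2
print(np.round(mte, 2))                              # MTE falls as resistance to treatment rises
```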
  13. By: Drechsler, Jörg (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany])
    Abstract: "The basic concept of multiple imputation is straightforward and easy to understand, but the application to real data imposes many implementation problems. To define useful imputation models for a dataset that consists of categorical and of continuous variables with distributions that are anything but normal, contains skip patterns and all sorts of logical constraints is a challenging task. In this paper, we review different approaches to handle these problems and illustrate their successful implementation for a complex imputation project at the German Institute for Employment Research (IAB): The imputation of missing values in one wave of the IAB Establishment Panel." (author's abstract, IAB-Doku) ((en))
    Keywords: missing data techniques, data quality, IAB-Betriebspanel, statistical methods
    JEL: C52 C81
    Date: 2010–02–16
    URL: http://d.repec.org/n?u=RePEc:iab:iabdpa:201006&r=ecm
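    Illustration: the basic multiple-imputation workflow (impute m times, analyse each completed dataset, pool with Rubin's rules), sketched with scikit-learn's IterativeImputer on synthetic data. The establishment-panel complications discussed above (skip patterns, logical constraints, mixed variable types) are precisely what this simple sketch does not handle.
```python
# Bare-bones multiple imputation with chained equations and Rubin's rules for
# the mean of one variable. Data are synthetic; none of the skip patterns or
# logical constraints discussed in the paper are handled here.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(8)
n = 1_000
x1 = rng.normal(10, 2, n)
x2 = 0.5 * x1 + rng.normal(0, 1, n)
x2[rng.uniform(size=n) < 0.3] = np.nan                      # 30% missing on x2
data = np.column_stack([x1, x2])

m = 5
means, variances = [], []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    completed = imp.fit_transform(data)
    means.append(completed[:, 1].mean())
    variances.append(completed[:, 1].var(ddof=1) / n)

# Rubin's rules: total variance = within + (1 + 1/m) * between
q_bar = np.mean(means)
within = np.mean(variances)
between = np.var(means, ddof=1)
total_var = within + (1 + 1 / m) * between
print(f"pooled mean {q_bar:.3f}, pooled s.e. {np.sqrt(total_var):.3f}")
```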
  14. By: Justine Burns; Malcolm Keswell; Rebecca Thornton
    Abstract: This paper has two broad objectives. The first is methodological and deals with some of the more pertinent estimation issues one should be aware of when studying the impact of health status on economic outcomes. We discuss some alternatives for constructing counterfactuals when designing health program evaluations, such as randomization, matching and instrumental variables. Our second objective is to present a review of the existing evidence on the impact of health interventions on individual welfare.
    Date: 2009–10
    URL: http://d.repec.org/n?u=RePEc:ldr:wpaper:40&r=ecm
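    Illustration: one of the counterfactual-construction strategies discussed above (matching), sketched as one-to-one nearest-neighbour matching on an estimated propensity score using synthetic data. The data, covariates, and logit specification are illustrative assumptions.
```python
# One-to-one nearest-neighbour propensity score matching on synthetic data,
# illustrating one way of constructing counterfactuals discussed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(9)
n = 2_000
x = rng.normal(size=(n, 2))                        # observed confounders
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
d = rng.uniform(size=n) < p_treat                  # treatment (e.g. a health programme)
y = 1.0 * d + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)   # true effect = 1.0

score = LogisticRegression().fit(x, d).predict_proba(x)[:, 1].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(score[~d])
_, idx = nn.kneighbors(score[d])                   # match each treated unit to a control
att = np.mean(y[d] - y[~d][idx[:, 0]])
print(f"matching estimate of the ATT: {att:.2f}")
```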

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.