nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒12‒01
seventeen papers chosen by
Sune Karlsson
Orebro University

  1. Sieve Nonparametric Likelihood Methods for Unit Root Tests By Francesco Bravo
  2. Empirical likelihood confidence intervals for the mean of a long-range dependent process By Nordman, Dan; Sibbertsen, Philipp; Lahiri, Soumendra N.
  3. Approximately normal tests for equal predictive accuracy in nested models By Todd E. Clark; Kenneth D. West
  4. Estimation and Testing for Varying Coefficients in Additive Models with Marginal Integration By Lijian Yang; Byeong U. Park; Lan Xue; Wolfgang Härdle
  5. Forecasting the yield curve in a data-rich environment - a no-arbitrage factor-augmented VAR approach By Emanuel Mönch
  6. Bayesian Estimation of Dynamic Discrete Choice Models By Susumu Imai; Neelam Jain
  7. Alternative procedures for estimating vector autoregressions identified with long-run restrictions By Lawrence J. Christiano; Martin Eichenbaum; Robert J. Vigfusson
  8. Testing for Long Memory and Nonlinear Time Series: A Demand for Money Study By Derek Bond; Michael J. Harrison; Edward J. O'Brien
  9. The Log of Gravity By Joao Santos Silva; Silvana Tenreyro
  10. Dynamics of State Price Densities By Wolfgang Härdle; Zdenek Hlavka
  11. Predicting Bankruptcy with Support Vector Machines By Wolfgang Härdle; Rouslan A. Moro; Dorothea Schäfer
  12. Business failure prediction: simple-intuitive models versus statistical models By H. OOGHE; C. SPAENJERS; P. VANDERMOERE
  13. A Non-Gaussian Panel Time Series Model for Estimating and Decomposing Default Risk By Siem Jan Koopman; André Lucas; Robert J. Daniels
  14. Controlling starting-point bias in double-bounded contingent valuation surveys. By Emmanuel Flachaire; Guillaume Hollard
  15. Vector Autoregressions and Reduced Form Representations of DSGE Models By Federico Ravenna
  16. Assessing the Usefulness of Structural Vector Autoregressions By Lawrence Christiano; Martin Eichenbaum
  17. Nonparametric Productivity Analysis By Wolfgang Härdle; Seok-Oh Jeong

  1. By: Francesco Bravo
    Abstract: This paper develops a new test for a unit root in autoregressive models with serially correlated errors. The test is based on the ``empirical'' Cressie-Read statistic and uses a sieve approximation to eliminate the bias in the asymptotic distribution of the test due to the presence of serial correlation. The paper derives the asymptotic distributions of the sieve empirical Cressie-Read statistic under the null hypothesis of a unit root and under a local-to-unity alternative hypothesis. The paper uses a Monte Carlo study to assess the finite sample properties of two well-known members of the proposed class of test statistics: the empirical likelihood ratio and the Kullback-Leibler distance statistic. The simulation results suggest that these two statistics have, in general, similar size and in most cases better power properties than those of standard Augmented Dickey-Fuller tests for a unit root. The paper also analyses the finite sample properties of a sieve bootstrap version of the (square of the) standard Augmented Dickey-Fuller test for a unit root. The simulations indicate that the bootstrap almost completely solves the size distortion problem, yet produces a test statistic with considerably less power than either the empirical likelihood or the Kullback-Leibler distance statistic.
    Keywords: Autoregressive approximation; bootstrap; empirical Cressie-Read statistic; generalized empirical likelihood; linear process; unit root test.
    URL: http://d.repec.org/n?u=RePEc:yor:yorken:05/33&r=ecm
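    Code sketch: The sieve empirical Cressie-Read statistic is not available in standard software, so as a point of reference here is a minimal sketch of the Augmented Dickey-Fuller benchmark that the paper's simulations compare against, using statsmodels; the data-generating process and lag selection rule are illustrative choices, not the paper's design.
        # Simulate an AR(1) near a unit root with serially correlated (MA(1)) errors,
        # then run the standard ADF test used as a benchmark in the paper's Monte Carlo.
        # This is NOT the sieve empirical Cressie-Read test itself.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        n, rho = 500, 0.98
        e = rng.standard_normal(n + 1)
        u = e[1:] + 0.5 * e[:-1]              # MA(1) errors induce serial correlation
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = rho * y[t - 1] + u[t]

        # The AIC-chosen lag length plays a role loosely analogous to the sieve order.
        stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, autolag="AIC")
        print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}, lags used: {usedlag}")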
  2. By: Nordman, Dan; Sibbertsen, Philipp; Lahiri, Soumendra N.
    Abstract: This paper considers blockwise empirical likelihood for real-valued linear time processes which may exhibit either short- or long-range dependence. Empirical likelihood approaches intended for weakly dependent time series can fail in the presence of strong dependence. However, a modified blockwise method is proposed for confidence interval estimation of the process mean, which is valid for various dependence structures including long-range dependence. The finite-sample performance of the method is evaluated through a simulation study and compared to other confidence interval procedures involving subsampling or normal approximations.
    Keywords: blocking, confidence interval, empirical likelihood, FARIMA, long-range dependence
    JEL: C13 C22
    Date: 2005–11
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-327&r=ecm
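    Code sketch: A rough illustration of the blockwise empirical likelihood ratio for the mean that the paper builds on; this sketch uses the usual short-range-dependence construction with non-overlapping block means and does not reproduce the paper's modification (block and scaling choices) needed for long-range dependent series. The block length and function names are assumptions.
        import numpy as np
        from scipy.optimize import brentq

        def block_el_logratio(x, mu0, block_len):
            """-2 log empirical likelihood ratio for H0: E[X] = mu0,
            built from non-overlapping block means."""
            n_blocks = len(x) // block_len
            blocks = x[: n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)
            z = blocks - mu0
            if z.min() >= 0 or z.max() <= 0:
                return np.inf                      # mu0 lies outside the hull of block means
            # Solve the empirical likelihood first-order condition for the multiplier.
            lo = (-1 + 1e-8) / z.max()
            hi = (1 - 1e-8) / (-z.min())
            lam = brentq(lambda l: np.mean(z / (1 + l * z)), lo, hi)
            return 2 * np.sum(np.log(1 + lam * z))

        x = np.random.default_rng(1).standard_normal(1000)
        print(block_el_logratio(x, 0.0, block_len=20))  # ~ chi-square(1) under weak dependence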
  3. By: Todd E. Clark; Kenneth D. West
    Abstract: Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure.
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp05-05&r=ecm
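    Code sketch: The adjustment described above has a simple sample counterpart (the Clark-West adjusted MSPE difference): subtract the squared difference between the two models' forecasts from the larger model's squared error before comparing losses. The sketch below assumes one-step-ahead forecasts with a simple standard error; multi-step forecasts would call for a HAC variance instead. Function and variable names are illustrative.
        import numpy as np
        from scipy import stats

        def cw_adjusted_test(y, f_small, f_large):
            """One-sided test that the adjusted MSPE difference favours the larger model."""
            e1 = y - f_small                   # forecast errors of the parsimonious (null) model
            e2 = y - f_large                   # forecast errors of the larger (nesting) model
            adj = (f_small - f_large) ** 2     # estimation noise removed from the larger model's MSPE
            f = e1 ** 2 - (e2 ** 2 - adj)      # adjusted loss differential
            tstat = f.mean() / (f.std(ddof=1) / np.sqrt(len(f)))
            pvalue = 1 - stats.norm.cdf(tstat) # standard normal critical values, as the paper recommends
            return tstat, pvalue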
  4. By: Lijian Yang; Byeong U. Park; Lan Xue; Wolfgang Härdle
    Abstract: We propose marginal integration estimation and testing methods for the coefficients of a varying coefficient multivariate regression model. Asymptotic distribution theory is developed for the estimation method, which enjoys the same rate of convergence as univariate function estimation. For the test statistic, asymptotic normal theory is established. These theoretical results are derived under the fairly general conditions of absolute regularity (beta-mixing). Application of the test procedure to the West German real GNP data reveals that a partially linear varying coefficient model provides the most parsimonious fit to the data dynamics, a fact that is also confirmed by residual diagnostics.
    Keywords: adaptive volatility estimation, generalized hyperbolic distribution, value at risk, risk management
    JEL: C13 C14 C40
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-047&r=ecm
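    Code sketch: A rough sketch of the marginal integration idea behind the estimator above: fit the full regression surface nonparametrically, then average over the nuisance direction to recover one additive component (up to a constant). The Nadaraya-Watson smoother, bandwidth, and simulated data are illustrative assumptions, not the paper's estimator or its asymptotic refinements.
        import numpy as np

        def nw_surface(x1, x2, y, g1, g2, h=0.3):
            """Bivariate Nadaraya-Watson estimate of E[y | x1=g1, x2=g2]."""
            k = np.exp(-0.5 * (((x1 - g1) / h) ** 2 + ((x2 - g2) / h) ** 2))
            return np.sum(k * y) / np.sum(k)

        def marginal_integration(x1, x2, y, grid, h=0.3):
            """Average the fitted surface over the observed x2 values at each grid point."""
            return np.array([
                np.mean([nw_surface(x1, x2, y, g, x2j, h) for x2j in x2])
                for g in grid
            ])

        rng = np.random.default_rng(1)
        n = 300
        x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
        y = np.sin(np.pi * x1) + x2 ** 2 + 0.2 * rng.standard_normal(n)
        grid = np.linspace(-0.9, 0.9, 10)
        print(marginal_integration(x1, x2, y, grid))   # roughly sin(pi * grid) plus a constant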
  5. By: Emanuel Mönch (Humboldt University, School of Business and Economics, Institute of Economic Policy, Spandauer Str. 1, 10178 Berlin, Germany)
    Abstract: This paper suggests a term structure model which parsimoniously exploits a broad macroeconomic information set. The model does not incorporate latent yield curve factors, but instead uses the common components of a large number of macroeconomic variables and the short rate as explanatory factors. More precisely, an affine term structure model with parameter restrictions implied by no-arbitrage is added to a Factor-Augmented Vector Autoregression (FAVAR). The model is found to strongly outperform several benchmark models in out-of-sample yield forecasts, reducing root mean squared forecast errors relative to the random walk by up to 50% for short maturities and by around 20% for long maturities.
    Keywords: Affine term structure models, Yield curve, Dynamic factor models, FAVAR.
    JEL: C13 C32 E43 E44 E52
    Date: 2005–11
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20050544&r=ecm
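    Code sketch: An illustrative first step of the FAVAR construction described above: extract principal-component factors from a large (standardized) macro panel, append the short rate, and fit a VAR on the resulting state vector. The no-arbitrage affine mapping from factors to yields is not implemented here, and the data, factor count, and lag choice are placeholders.
        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(2)
        T, N = 200, 50
        panel = rng.standard_normal((T, N))            # stand-in for a large macro panel
        short_rate = rng.standard_normal(T)            # stand-in for the short rate

        panel_std = (panel - panel.mean(0)) / panel.std(0)
        u, s, vt = np.linalg.svd(panel_std, full_matrices=False)
        factors = u[:, :3] * s[:3]                     # first three principal components

        state = np.column_stack([factors, short_rate])
        favar = VAR(state).fit(maxlags=2, ic="aic")    # dynamics of the FAVAR state vector
        print(favar.summary())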
  6. By: Susumu Imai; Neelam Jain (Economics, Northern Illinois University)
    Keywords: Structural estimation, Dynamic programming, MCMC
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:red:sed005:432&r=ecm
  7. By: Lawrence J. Christiano; Martin Eichenbaum; Robert J. Vigfusson
    Abstract: We show that the standard procedure for estimating long-run identified vector autoregressions uses a particular estimator of the zero-frequency spectral density matrix of the data. We develop alternatives to the standard procedure and evaluate the properties of these alternative procedures using Monte Carlo experiments in which data are generated from estimated real business cycle models. We focus on the properties of estimated impulse response functions. In our examples, the alternative procedures have better small sample properties than the standard procedure, with smaller bias, smaller mean square error and better coverage rates for estimated confidence intervals.
    Keywords: Vector analysis; Vector autoregression; Econometric models
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:fip:fedgif:842&r=ecm
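    Code sketch: For concreteness, here is a minimal sketch of the "standard procedure" the paper takes as its starting point: a bivariate VAR identified with a long-run (Blanchard-Quah style) restriction, where the long-run covariance is the VAR-implied estimate of the zero-frequency spectral density. The simulated data are placeholders, and none of the paper's alternative estimators or RBC experiments are reproduced.
        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(3)
        data = rng.standard_normal((400, 2))           # placeholder for, e.g., productivity growth and hours

        res = VAR(data).fit(maxlags=4)
        A1 = np.eye(2) - sum(res.coefs)                # I - A(1), the sum of the lag coefficient matrices
        sigma = res.sigma_u                            # residual covariance matrix
        A1_inv = np.linalg.inv(A1)
        lr_cov = A1_inv @ sigma @ A1_inv.T             # VAR-implied long-run covariance (2*pi times f(0))
        C1 = np.linalg.cholesky(lr_cov)                # lower-triangular long-run impact matrix
        B0 = A1 @ C1                                   # impact matrix of the identified structural shocks
        print(B0)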
  8. By: Derek Bond (University of Ulster); Michael J. Harrison (Department of Economics, Trinity College); Edward J. O'Brien (Department of Economics, Trinity College and Central Bank of Ireland)
    Abstract: This paper draws attention to the limitations of the standard unit root/cointegration approach to economic and financial modelling, and to some of the alternatives based on the idea of fractional integration, long memory models, and the random field regression approach to nonlinearity. Following brief explanations of fractional integration and random field regression, and the methods of applying them, selected techniques are applied to a demand for money dataset. Comparisons of the results from this illustrative case study are presented, and conclusions are drawn that should aid practitioners in applied time-series econometrics.
    JEL: C22 C52 E41
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:tcd:tcduee:tep20021&r=ecm
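    Code sketch: A small illustration of fractional differencing, the building block of the fractionally integrated (long memory) alternatives discussed above; the memory parameter and the series are placeholders, and the random field regression part of the paper is not sketched.
        import numpy as np

        def frac_diff(x, d):
            """Apply the fractional difference operator (1 - L)^d to a series."""
            w = np.zeros(len(x))
            w[0] = 1.0
            for k in range(1, len(x)):
                w[k] = w[k - 1] * (k - 1 - d) / k      # binomial expansion weights of (1 - L)^d
            return np.array([np.dot(w[: t + 1][::-1], x[: t + 1]) for t in range(len(x))])

        x = np.random.default_rng(7).standard_normal(500).cumsum()
        print(frac_diff(x, d=0.4)[:5])                  # partially "whitens" a persistent series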
  9. By: Joao Santos Silva; Silvana Tenreyro
    Abstract: Although economists have long been aware of Jensen's inequality, many econometric applications have neglected an important implication of it: the standard practice of interpreting the parameters of log-linearized models estimated by ordinary least squares as elasticities can be highly misleading in the presence of heteroskedasticity. This paper explains why this problem arises and proposes an appropriate estimator. Our criticism of conventional practices and the solution we propose extend to a broad range of economic applications where the equation under study is log-linearized. We develop the argument using one particular illustration, the gravity equation for trade, and apply the proposed technique to provide new estimates of this equation. We find significant differences between estimates obtained with the proposed estimator and those obtained with the traditional method. These discrepancies persist even when the gravity equation takes into account multilateral resistance terms or fixed effects.
    Keywords: Elasticities, Gravity equation, Heteroskedasticity, Jensen's inequality, Poisson regression, Preferential-trade agreements
    JEL: C13 C21 F10 F11 F12 F15
    Date: 2005–07
    URL: http://d.repec.org/n?u=RePEc:cep:cepdps:dp0701&r=ecm
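    Code sketch: A minimal simulation contrasting the two estimators discussed above: OLS on the log-linearized equation versus Poisson pseudo-maximum-likelihood (PPML) on levels. The error is multiplicative with mean one but with dispersion that depends on the regressor, so the log-linear regression is biased while PPML is not; the data-generating process and names are illustrative, not the paper's gravity dataset.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 5000
        x = rng.uniform(0.0, 2.0, n)
        mu = np.exp(1.0 + 0.5 * x)                      # true coefficient of interest: 0.5
        s = 0.3 + 0.5 * x                               # error dispersion increases with x
        eta = rng.lognormal(mean=-0.5 * s**2, sigma=s)  # E[eta | x] = 1, but E[log eta | x] varies with x
        y = mu * eta

        X = sm.add_constant(x)
        ols_logs = sm.OLS(np.log(y), X).fit()                     # conventional log-linearized OLS
        ppml = sm.GLM(y, X, family=sm.families.Poisson()).fit()   # Poisson pseudo-ML on levels

        print("OLS-on-logs slope:", round(ols_logs.params[1], 3),
              " PPML slope:", round(ppml.params[1], 3))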
  10. By: Wolfgang Härdle; Zdenek Hlavka
    Abstract: State price densities (SPD) are an important element in applied quantitative finance. In a Black-Scholes model they are lognormal distributions with a constant volatility parameter. In practice volatility changes and the distribution deviates from log-normality. We estimate SPDs using EUREX option data on the DAX index via a nonparametric estimator of the second derivative of the (European) call price function. The estimator is constrained so as to satisfy no-arbitrage conditions and corrects for the intraday covariance structure. Given a low-dimensional representation of this SPD, we study its dynamics for the years 1995–2003. We calculate a prediction corridor for the DAX for a 45-day forecast horizon. The proposed algorithm is simple; it allows calculation of future volatility and can be applied to hedging exotic options.
    Keywords: option pricing, state price density estimation, nonlinear least squares, confidence intervals
    JEL: C13 C14 G12
    Date: 2005–04
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-021&r=ecm
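    Code sketch: A back-of-the-envelope version of the relation the estimator above exploits (Breeden-Litzenberger): the state price density is the discounted second derivative of the call price with respect to the strike. Black-Scholes prices stand in for the smoothed EUREX/DAX call price function, and all parameter values are illustrative.
        import numpy as np
        from scipy.stats import norm

        def bs_call(S, K, r, sigma, tau):
            d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
            d2 = d1 - sigma * np.sqrt(tau)
            return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

        S, r, sigma, tau = 4000.0, 0.03, 0.2, 45 / 365   # spot, rate, vol, 45-day horizon
        K = np.linspace(3000, 5000, 401)
        dK = K[1] - K[0]
        C = bs_call(S, K, r, sigma, tau)

        # SPD = exp(r*tau) * d^2 C / dK^2, here by central finite differences.
        spd = np.exp(r * tau) * np.gradient(np.gradient(C, dK), dK)
        print(spd.sum() * dK)                            # integrates to roughly one over this strike range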
  11. By: Wolfgang Härdle; Rouslan A. Moro; Dorothea Schäfer
    Abstract: The purpose of this work is to introduce one of the most promising recently developed statistical techniques – the support vector machine (SVM) – to corporate bankruptcy analysis. An SVM is implemented to analyse predictors such as financial ratios, and a method of adapting it to default probability estimation is proposed. A survey of methods applied in practice is given. This work shows that support vector machines are capable of extracting useful information from financial data, although extensive data sets are required in order to fully utilize their classification power.
    Keywords: support vector machine, classification method, statistical learning theory, electric load prediction, optical character recognition, predicting bankruptcy, risk classification
    JEL: C40 G10
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-009&r=ecm
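    Code sketch: A minimal SVM classifier on financial-ratio predictors, in the spirit of the approach described above; the simulated data, choice of ratios, kernel, and tuning are placeholders, not the paper's dataset or its default-probability mapping.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n = 400
        ratios = rng.standard_normal((n, 3))             # e.g. leverage, profitability, liquidity
        default = (ratios[:, 0] - ratios[:, 1] + 0.5 * rng.standard_normal(n) > 1.0).astype(int)

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(model, ratios, default, cv=5)
        print("cross-validated accuracy:", scores.mean().round(3))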
  12. By: H. OOGHE; C. SPAENJERS; P. VANDERMOERE
    Abstract: We give an overview of the shortcomings of the most frequently used statistical techniques in failure prediction modelling. The statistical procedures that underpin the selection of variables and the determination of coefficients often lead to ‘overfitting’. We also see that the ‘expected signs’ of variables are sometimes neglected and that an underlying theoretical framework is mostly absent. Based on the current knowledge of failing firms, we construct a new type of failure prediction model, namely ‘simple-intuitive models’, in which eight variables are first logit-transformed and then equally weighted. These models are tested on two broad validation samples (1 year prior to failure and 3 years prior to failure) of Belgian companies. The performance results of the best simple-intuitive model are comparable to those of less transparent and more complex statistical models.
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:05/338&r=ecm
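    Code sketch: A guess at the mechanics of the ‘simple-intuitive model’ described above, assuming that "logit-transformed" means passing each (sign-adjusted, standardized) ratio through the logistic function before equal weighting; the interpretation, variable orientation, and names are assumptions, not the authors' exact specification.
        import numpy as np

        def simple_intuitive_score(ratios):
            """ratios: (n_firms, 8) array, oriented so that higher values mean higher failure risk."""
            z = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0)   # standardize each variable
            logits = 1.0 / (1.0 + np.exp(-z))                         # logistic ("logit") transform
            return logits.mean(axis=1)                                # equal weights across the eight variables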
  13. By: Siem Jan Koopman; André Lucas; Robert J. Daniels
    Abstract: We model 1981–2002 annual default frequencies for a panel of US firms in different rating and age classes from the Standard and Poor's database. The data are decomposed into a systematic and a firm-specific risk component, where the systematic component reflects general economic conditions and the default climate. We have to cope with (i) the shared exposure of each age cohort and rating class to the same systematic risk factor; (ii) strongly non-Gaussian features of the individual time series; (iii) possible dynamics of an unobserved common risk factor; (iv) default probabilities that change with the age of the rating; and (v) missing observations. We propose a non-Gaussian multivariate state space model that deals with all of these issues simultaneously. The model is estimated using importance sampling techniques that have been extended to a multivariate setting. We show in a simulation study that such a multivariate approach improves the performance of the importance sampler.
    Keywords: credit risk; multivariate unobserved component models; importance sampling; non-Gaussian state space models.
    JEL: C32 G21
    Date: 2005–11
    URL: http://d.repec.org/n?u=RePEc:dnb:dnbwpp:055&r=ecm
  14. By: Emmanuel Flachaire (EUREQua); Guillaume Hollard (OEP - Université de Marne-la-Vallée)
    Abstract: In this paper, we study starting-point bias in double-bounded contingent valuation surveys. This phenomenon arises in applications that use multiple valuation questions: responses to follow-up valuation questions may be influenced by the bid proposed in the initial valuation question. Previous research has sought to control for this effect, but finds that, relative to a single dichotomous choice question, the efficiency gains are lost once undesirable response effects are controlled for. Contrary to these results, we propose a way to control for starting-point bias in double-bounded questions while preserving the gains in efficiency.
    Keywords: Starting-point bias, contingent valuation.
    JEL: Q26 C81 C93
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:mse:wpsorb:v05076&r=ecm
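    Code sketch: For context, a sketch of the baseline double-bounded interval likelihood that such surveys rely on, without the paper's starting-point-bias correction; willingness to pay is assumed normal, and the bids, names, and simulated answers are illustrative.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def neg_loglik(theta, bid1, bid2, ans1, ans2):
            """theta = (mu, log_sigma); ans1/ans2 are 0/1 answers to the first and follow-up bids."""
            mu, sigma = theta[0], np.exp(theta[1])
            lower = np.where(ans2 == 1, bid2, np.where(ans1 == 1, bid1, -np.inf))
            upper = np.where(ans2 == 0, bid2, np.where(ans1 == 0, bid1, np.inf))
            p = norm.cdf(upper, mu, sigma) - norm.cdf(lower, mu, sigma)
            return -np.sum(np.log(np.clip(p, 1e-12, None)))

        rng = np.random.default_rng(6)
        n = 500
        wtp = rng.normal(30.0, 10.0, n)                   # latent willingness to pay
        bid1 = rng.choice([20.0, 30.0, 40.0], n)
        ans1 = (wtp >= bid1).astype(int)
        bid2 = np.where(ans1 == 1, 2 * bid1, 0.5 * bid1)  # follow-up bid: doubled or halved
        ans2 = (wtp >= bid2).astype(int)
        fit = minimize(neg_loglik, x0=np.array([20.0, np.log(5.0)]),
                       args=(bid1, bid2, ans1, ans2))
        print("estimated mean WTP:", round(fit.x[0], 2))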
  15. By: Federico Ravenna
    Keywords: Vector Autoregression; Dynamic Stochastic General Equilibrium Model; Kalman Filter; Business Cycle Shocks
    JEL: C13 C22 E32
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:red:sed005:841&r=ecm
  16. By: Lawrence Christiano; Martin Eichenbaum
    JEL: E32 C15 C52
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:red:sed005:902&r=ecm
  17. By: Wolfgang Härdle; Seok-Oh Jeong
    Abstract: How can we measure and compare the relative performance of production units? If input and output variables are one-dimensional, then the simplest way is to compute efficiency by calculating and comparing the ratio of output to input for each production unit. This idea is inappropriate, though, when multiple inputs or multiple outputs are observed. Consider a bank, for example, with three branches A, B, and C. The branches take the number of staff as the input, and measure outputs such as the number of transactions on personal and business accounts. Assume that the following statistics are observed: Branch A: 60000 personal transactions, 50000 business transactions, 25 people on staff; Branch B: 50000 personal transactions, 25000 business transactions, 15 people on staff; Branch C: 45000 personal transactions, 15000 business transactions, 10 people on staff. We observe that Branch C performed best in terms of personal transactions per staff member, whereas Branch A has the highest ratio of business transactions per staff member. By contrast, Branch B performed better than Branch A in terms of personal transactions per staff member, and better than Branch C in terms of business transactions per staff member. How can we compare these business units in a fair way? Moreover, can we possibly create a virtual branch that reflects the input/output mechanism and thus creates a scale for the real branches? Productivity analysis provides a systematic approach to these problems. We review the basic concepts of productivity analysis and two popular methods, DEA and FDH, which are presented in Sections 12.1 and 12.2, respectively. Sections 12.3 and 12.4 contain illustrative examples with real data.
    Keywords: relative performance, production units, productivity analysis, Data Envelopment Analysis, DEA, Free Disposal Hull, FDH, insurance agencies, manufacturing industry
    JEL: C14 D20
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-013&r=ecm
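    Code sketch: Building on the branch example above, a small sketch of the input-oriented CCR DEA model in multiplier form, solved as a linear program for each branch; the formulation is the textbook one, and this is an illustration rather than the chapter's own code, using only the three branches quoted in the abstract.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[25.0], [15.0], [10.0]])             # input: staff
        Y = np.array([[60000.0, 50000.0],                  # outputs: personal, business transactions
                      [50000.0, 25000.0],
                      [45000.0, 15000.0]])

        for o, name in enumerate("ABC"):
            # Variables (u1, u2, v): maximize u.Y[o] subject to v.X[o] = 1 and u.Y[j] - v.X[j] <= 0.
            c = np.concatenate([-Y[o], np.zeros(1)])       # minimize the negative weighted output
            A_ub = np.hstack([Y, -X])                      # every branch's weighted efficiency capped at 1
            b_ub = np.zeros(3)
            A_eq = np.concatenate([np.zeros(2), X[o]]).reshape(1, -1)
            b_eq = np.array([1.0])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
            print(f"Branch {name}: DEA efficiency = {-res.fun:.3f}")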

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.