
on Econometrics 
By:  Francesco Bravo 
Abstract:  This paper develops a new test for a unit root in autoregressive models with serially correlated errors. The test is based on the ``empirical'' Cressie-Read statistic and uses a sieve approximation to eliminate the bias in the asymptotic distribution of the test due to the presence of serial correlation. The paper derives the asymptotic distributions of the sieve empirical Cressie-Read statistic under the null hypothesis of a unit root and under a local-to-unity alternative hypothesis. The paper uses a Monte Carlo study to assess the finite sample properties of two well-known members of the proposed test statistic: the empirical likelihood ratio and the Kullback-Leibler distance statistic. The results of the simulations seem to suggest that these two statistics have, in general, similar size and in most cases better power properties than those of standard Augmented Dickey-Fuller tests of a unit root. The paper also analyses the finite sample properties of a sieve bootstrap version of the square of the standard Augmented Dickey-Fuller test for a unit root. The results of the simulations seem to indicate that the bootstrap does solve the size distortion problem almost completely, yet at the same time produces a test statistic that has considerably less power than either the empirical likelihood ratio or the Kullback-Leibler distance statistic. 
Keywords:  Autoregressive approximation; bootstrap; empirical Cressie-Read statistic; generalized empirical likelihood; linear process; unit root test. 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:05/33&r=ecm 
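The Augmented Dickey-Fuller benchmark that the abstract compares against can be sketched in a few lines. This is a minimal illustration of the ADF t-statistic, not the paper's sieve empirical Cressie-Read procedure; the simulated series, lag order, and seed below are all hypothetical choices for the demo.

```python
import random

def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

def adf_t(y, lags=1):
    """t-statistic on rho in: dy_t = a + rho*y_{t-1} + sum_j phi_j*dy_{t-j} + e_t.
    A large negative value is evidence against the unit root null (rho = 0)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    X = [[1.0, y[t]] + [dy[t - j] for j in range(1, lags + 1)]
         for t in range(lags, len(dy))]
    z = dy[lags:]
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * zt for r, zt in zip(X, z)) for i in range(k)]
    beta = solve(XtX, Xty)
    resid = [zt - sum(bi * xi for bi, xi in zip(beta, r)) for r, zt in zip(X, z)]
    s2 = sum(e * e for e in resid) / (len(z) - k)
    e1 = [0.0] * k
    e1[1] = 1.0
    inv_col = solve(XtX, e1)          # second column of (X'X)^{-1}
    return beta[1] / (s2 * inv_col[1]) ** 0.5

# Demo on a simulated stationary AR(1) series, where the unit root
# should be rejected: the t-statistic comes out large and negative.
random.seed(0)
y = [0.0]
for _ in range(300):
    y.append(0.5 * y[-1] + random.gauss(0, 1))
print(adf_t(y, lags=1))
```

The statistic must be compared against Dickey-Fuller (not standard normal) critical values, which is precisely the kind of nonstandard limit theory the paper's sieve approach also has to deal with.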
By:  Nordman, Dan; Sibbertsen, Philipp; Lahiri, Soumendra N. 
Abstract:  This paper considers blockwise empirical likelihood for real-valued linear time processes which may exhibit either short- or long-range dependence. Empirical likelihood approaches intended for weakly dependent time series can fail in the presence of strong dependence. However, a modified blockwise method is proposed for confidence interval estimation of the process mean, which is valid for various dependence structures including long-range dependence. The finite-sample performance of the method is evaluated through a simulation study and compared to other confidence interval procedures involving subsampling or normal approximations. 
Keywords:  blocking, confidence interval, empirical likelihood, FARIMA, long-range dependence 
JEL:  C13 C22 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp327&r=ecm 
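The core computation behind a blockwise method can be sketched as ordinary empirical likelihood for the mean applied to non-overlapping block means. This is a hedged illustration only: the paper's modification for long-range dependence, its block-length rule, and its limit calibration are not reproduced here.

```python
import math

def block_means(x, block_len):
    """Means of consecutive non-overlapping blocks of length block_len."""
    return [sum(x[i:i + block_len]) / block_len
            for i in range(0, len(x) - block_len + 1, block_len)]

def el_log_ratio(z, mu, tol=1e-12):
    """-2 log empirical likelihood ratio for the mean of z at value mu.
    Solves sum((z_i - mu) / (1 + lam*(z_i - mu))) = 0 for lam by bisection;
    the profile weights are p_i = 1 / (n * (1 + lam*(z_i - mu)))."""
    d = [zi - mu for zi in z]
    if min(d) >= 0 or max(d) <= 0:
        return float("inf")           # mu outside the convex hull of z
    # feasibility 1 + lam*d_i > 0 confines lam to the open interval below
    lo = -1.0 / max(d) + tol
    hi = -1.0 / min(d) - tol

    def g(lam):                       # strictly decreasing in lam
        return sum(di / (1.0 + lam * di) for di in d)

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)
```

A confidence interval for the process mean collects the values of `mu` for which `el_log_ratio(block_means(x, l), mu)` stays below the appropriate critical value; under weak dependence that calibration is chi-squared, while under strong dependence it is exactly what the paper modifies.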
By:  Todd E. Clark; Kenneth D. West 
Abstract:  Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure. 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp0505&r=ecm 
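The adjustment the abstract describes has a compact form: subtract from the larger model's squared error the squared difference of the two forecasts, then test whether the mean of the adjusted loss differential is zero with a standard t-ratio. This sketch follows that description with hypothetical inputs; consult Clark and West for the exact procedure and its asymptotics.

```python
def cw_adjusted_t(e1, e2, yhat1, yhat2):
    """MSPE-adjusted t-statistic for nested forecast comparison.
    e1, yhat1: forecast errors and forecasts of the parsimonious null model;
    e2, yhat2: those of the larger nesting model.
    f_t = e1_t^2 - (e2_t^2 - (yhat1_t - yhat2_t)^2); the test of mean(f) = 0
    against mean(f) > 0 uses standard normal critical values."""
    f = [a * a - (b * b - (p - q) ** 2)
         for a, b, p, q in zip(e1, e2, yhat1, yhat2)]
    n = len(f)
    fbar = sum(f) / n
    var = sum((fi - fbar) ** 2 for fi in f) / (n - 1)   # sample variance
    return fbar / (var / n) ** 0.5
```

The subtracted term is exactly the noise that estimating the larger model's zero-valued parameters injects into its forecasts, which is why the unadjusted MSPE comparison is biased toward the null model.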
By:  Lijian Yang; Byeong U. Park; Lan Xue; Wolfgang Härdle 
Abstract:  We propose marginal integration estimation and testing methods for the coefficients of a varying-coefficient multivariate regression model. Asymptotic distribution theory is developed for the estimation method, which enjoys the same rate of convergence as univariate function estimation. For the test statistic, asymptotic normal theory is established. These theoretical results are derived under the fairly general conditions of absolute regularity (beta-mixing). Application of the test procedure to the West German real GNP data reveals that a partially linear varying-coefficient model provides the most parsimonious fit to the data dynamics, a fact that is also confirmed by residual diagnostics. 
Keywords:  adaptive volatility estimation, generalized hyperbolic distribution, value at risk, risk management 
JEL:  C13 C14 C40 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005047&r=ecm 
By:  Emanuel Mönch (Humboldt University, School of Business and Economics, Institute of Economic Policy, Spandauer Str. 1, 10178 Berlin, Germany) 
Abstract:  This paper suggests a term structure model which parsimoniously exploits a broad macroeconomic information set. The model does not incorporate latent yield curve factors, but instead uses the common components of a large number of macroeconomic variables and the short rate as explanatory factors. Specifically, an affine term structure model with parameter restrictions implied by no-arbitrage is added to a Factor-Augmented Vector Autoregression (FAVAR). The model is found to strongly outperform different benchmark models in out-of-sample yield forecasts, reducing root mean squared forecast errors relative to the random walk by up to 50% for short maturities and around 20% for long maturities. 
Keywords:  Affine term structure models, Yield curve, Dynamic factor models, FAVAR. 
JEL:  C13 C32 E43 E44 E52 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20050544&r=ecm 
By:  Susumu Imai; Neelam Jain (Economics Northern Illinois University) 
Keywords:  Structural estimation, Dynamic programming, MCMC 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:red:sed005:432&r=ecm 
By:  Lawrence J. Christiano; Martin Eichenbaum; Robert J. Vigfusson 
Abstract:  We show that the standard procedure for estimating long-run identified vector autoregressions uses a particular estimator of the zero-frequency spectral density matrix of the data. We develop alternatives to the standard procedure and evaluate the properties of these alternative procedures using Monte Carlo experiments in which data are generated from estimated real business cycle models. We focus on the properties of estimated impulse response functions. In our examples, the alternative procedures have better small-sample properties than the standard procedure, with smaller bias, smaller mean squared error and better coverage rates for estimated confidence intervals. 
Keywords:  Vector analysis; Vector autoregression; Econometric models 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:842&r=ecm 
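For a scalar series, the zero-frequency spectral density (times 2*pi) is the long-run variance, and one standard estimator of it is the Bartlett-kernel (Newey-West) weighted sum of autocovariances. The sketch below is that textbook estimator, not any of the paper's specific alternatives; the bandwidth choice is left to the caller.

```python
def long_run_variance(x, bandwidth):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance of x,
    i.e. 2*pi times the spectral density at frequency zero:
    gamma(0) + 2 * sum_{j=1}^{bandwidth} (1 - j/(bandwidth+1)) * gamma(j)."""
    n = len(x)
    m = sum(x) / n

    def acov(j):
        # sample autocovariance at lag j (divided by n, the usual convention)
        return sum((x[t] - m) * (x[t - j] - m) for t in range(j, n)) / n

    return acov(0) + 2.0 * sum((1.0 - j / (bandwidth + 1.0)) * acov(j)
                               for j in range(1, bandwidth + 1))
```

With `bandwidth=0` this collapses to the sample variance; the paper's point is that the choice among such estimators materially affects long-run identified impulse responses.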
By:  Derek Bond (University of Ulster); Michael J. Harrison (Department of Economics, Trinity College); Edward J. O'Brien (Department of Economics, Trinity College and Central Bank of Ireland) 
Abstract:  This paper draws attention to the limitations of the standard unit root/cointegration approach to economic and financial modelling, and to some of the alternatives based on the ideas of fractional integration, long memory models, and the random field regression approach to nonlinearity. Following brief explanations of fractional integration and random field regression, and the methods of applying them, selected techniques are applied to a demand for money dataset. Comparisons of the results from this illustrative case study are presented, and conclusions are drawn that should aid practitioners in applied time-series econometrics. 
JEL:  C22 C52 E41 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:tcd:tcduee:tep20021&r=ecm 
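Fractional integration rests on the binomial expansion of the differencing operator (1 - L)^d, whose coefficients follow the simple recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k. The sketch below computes those weights and applies the truncated filter; it is a generic illustration, not the specific estimation methods surveyed in the paper.

```python
def frac_diff_weights(d, k_max):
    """Coefficients pi_k of (1 - L)^d = sum_k pi_k L^k, via the recursion
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, k_max + 1):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(x, d):
    """Apply the truncated (1 - L)^d filter to the series x."""
    w = frac_diff_weights(d, len(x) - 1)
    return [sum(w[k] * x[t - k] for k in range(t + 1)) for t in range(len(x))]
```

For integer d the recursion recovers ordinary differencing (d = 1 gives weights 1, -1, 0, 0, ...); for fractional d the weights decay hyperbolically, which is the source of long memory.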
By:  Joao Santos Silva; Silvana Tenreyro 
Abstract:  Although economists have long been aware of Jensen's inequality, many econometric applications have neglected an important implication of it: the standard practice of interpreting the parameters of log-linearized models estimated by ordinary least squares as elasticities can be highly misleading in the presence of heteroskedasticity. This paper explains why this problem arises and proposes an appropriate estimator. Our criticism of conventional practices and the solution we propose extend to a broad range of economic applications where the equation under study is log-linearized. We develop the argument using one particular illustration, the gravity equation for trade, and apply the proposed technique to provide new estimates of this equation. We find significant differences between estimates obtained with the proposed estimator and those obtained with the traditional method. These discrepancies persist even when the gravity equation takes into account multilateral resistance terms or fixed effects. 
Keywords:  Elasticities, Gravity equation, Heteroskedasticity, Jensen's inequality, Poisson regression, Preferential trade agreements 
JEL:  C13 C21 F10 F11 F12 F15 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:cep:cepdps:dp0701&r=ecm 
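The estimator the paper advocates is Poisson pseudo-maximum-likelihood (PPML) on the multiplicative model, here sketched for a single hypothetical regressor by Newton's method on the Poisson score. A real gravity application would include many covariates or fixed effects and robust standard errors; none of that is attempted here.

```python
import math

def ppml(x, y, iters=100):
    """Poisson pseudo-maximum-likelihood for y_i = exp(b0 + b1*x_i) + error.
    Newton-Raphson on the score X'(y - mu) with information X'diag(mu)X,
    solved by hand for the 2x2 case."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        g0 = sum(yi - mi for yi, mi in zip(y, mu))                  # score wrt b0
        g1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))    # score wrt b1
        h00 = sum(mu)
        h01 = sum(xi * mi for xi, mi in zip(x, mu))
        h11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1
```

Because only the conditional mean needs to be correctly specified, the dependent variable need not be a count, and the slope estimate is the elasticity directly, with no log transformation and hence no Jensen's-inequality bias.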
By:  Wolfgang Härdle; Zdenek Hlavka 
Abstract:  State price densities (SPD) are an important element in applied quantitative finance. In a Black-Scholes model they are lognormal distributions with constant volatility parameter. In practice, volatility changes and the distribution deviates from lognormality. We estimate SPDs using EUREX option data on the DAX index via a nonparametric estimator of the second derivative of the (European) call price function. The estimator is constrained so as to satisfy no-arbitrage constraints and corrects for the intraday covariance structure. Given a low-dimensional representation of this SPD, we study its dynamics over the years 1995–2003. We calculate a prediction corridor for the DAX for a 45-day forecast. The proposed algorithm is simple, allows calculation of future volatility, and can be applied to hedging exotic options. 
Keywords:  option pricing, state price density estimation, nonlinear least squares, confidence intervals 
JEL:  C13 C14 G12 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005021&r=ecm 
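The identity behind the estimation strategy is the Breeden-Litzenberger relation: the SPD is the discounted second strike-derivative of the call price, q(K) = exp(r*tau) * d^2C/dK^2. The sketch below evaluates it by a central finite difference on a toy price function; the paper's constrained nonparametric estimator with intraday-covariance corrections is far more involved.

```python
import math

def spd_estimate(call, strike, rate, tau, h=0.01):
    """State price density via the Breeden-Litzenberger relation
    q(K) = exp(r*tau) * d^2 C / dK^2, with a central finite difference
    approximating the second strike-derivative of the call price."""
    second = (call(strike + h) - 2.0 * call(strike) + call(strike - h)) / (h * h)
    return math.exp(rate * tau) * second
```

In practice `call` would be a smoothed, no-arbitrage-constrained fit to observed option prices; differentiating raw quotes twice amplifies noise, which is exactly why a constrained estimator is needed.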
By:  Wolfgang Härdle; Rouslan A. Moro; Dorothea Schäfer 
Abstract:  The purpose of this work is to introduce one of the most promising among recently developed statistical techniques – the support vector machine (SVM) – to corporate bankruptcy analysis. An SVM is implemented for analysing such predictors as financial ratios. A method of adapting it to default probability estimation is proposed. A survey of practically applied methods is given. This work shows that support vector machines are capable of extracting useful information from financial data, although extensive data sets are required in order to fully utilize their classification power. 
Keywords:  support vector machine, classification method, statistical learning theory, electric load prediction, optical character recognition, predicting bankruptcy, risk classification 
JEL:  C40 G10 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005009&r=ecm 
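A linear SVM can be sketched as minimizing the regularized hinge loss by subgradient descent (in the spirit of the Pegasos algorithm). The toy one-dimensional sample below is hypothetical; a real bankruptcy application would use a dedicated solver, kernels, and many financial-ratio features.

```python
def train_linear_svm(points, labels, lam=0.01, step=0.1, epochs=200):
    """Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * (w . x))) by full-batch
    subgradient descent. Each point includes a constant 1.0 feature so the
    last weight plays the role of the bias term."""
    w = [0.0] * len(points[0])
    for _ in range(epochs):
        grad = [lam * wi for wi in w]
        for x, y in zip(points, labels):
            if y * sum(wi * xi for wi, xi in zip(w, x)) < 1.0:
                # margin violated: subgradient of the hinge term is -y * x
                for i, xi in enumerate(x):
                    grad[i] -= y * xi / len(points)
        w = [wi - step * gi for wi, gi in zip(w, grad)]
    return w
```

The hinge loss only penalizes points inside the margin, which is what gives SVMs their sparse, support-vector-driven decision boundary; mapping the margin distance to a default probability, as the paper proposes, requires an additional calibration step.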
By:  H. OOGHE; C. SPAENJERS; P. VANDERMOERE 
Abstract:  We give an overview of the shortcomings of the most frequently used statistical techniques in failure prediction modelling. The statistical procedures that underpin the selection of variables and the determination of coefficients often lead to ‘overfitting’. We also see that the ‘expected signs’ of variables are sometimes neglected and that an underlying theoretical framework mostly does not exist. Based on the current knowledge of failing firms, we construct a new type of failure prediction model, namely ‘simple-intuitive models’. In these models, eight variables are first logit-transformed and then equally weighted. These models are tested on two broad validation samples (1 year prior to failure and 3 years prior to failure) of Belgian companies. The performance results of the best simple-intuitive model are comparable to those of less transparent and more complex statistical models. 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:05/338&r=ecm 
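The scoring scheme described (logit-transform each variable, then weight equally) is simple enough to sketch directly. The inputs below are hypothetical; the paper's actual eight variables, their signing, and any scaling are not reproduced.

```python
import math

def logistic(v):
    """Standard logistic function, mapping the real line into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def simple_intuitive_score(ratios):
    """Equal-weight average of logit-transformed inputs. Each ratio is
    assumed pre-signed so that larger values indicate higher failure risk
    (an assumption of this sketch, matching the 'expected signs' idea)."""
    return sum(logistic(r) for r in ratios) / len(ratios)
```

Because every variable passes through the same bounded transform and carries the same weight, no coefficients are estimated at all, which is precisely how such a model avoids the overfitting the abstract criticizes.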
By:  Siem Jan Koopman; André Lucas; Robert J. Daniels 
Abstract:  We model 1981–2002 annual default frequencies for a panel of US firms in different rating and age classes from the Standard and Poor's database. The data is decomposed into a systematic and a firm-specific risk component, where the systematic component reflects the general economic conditions and default climate. We have to cope with (i) the shared exposure of each age cohort and rating class to the same systematic risk factor; (ii) strongly non-Gaussian features of the individual time series; (iii) possible dynamics of an unobserved common risk factor; (iv) changing default probabilities over the age of the rating; and (v) missing observations. We propose a non-Gaussian multivariate state space model that deals with all of these issues simultaneously. The model is estimated using importance sampling techniques that have been modified to a multivariate setting. We show in a simulation study that such a multivariate approach improves the performance of the importance sampler. 
Keywords:  credit risk; multivariate unobserved component models; importance sampling; non-Gaussian state space models. 
JEL:  C32 G21 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:dnb:dnbwpp:055&r=ecm 
By:  Emmanuel Flachaire (EUREQua); Guillaume Hollard (OEP - Université de Marne-la-Vallée) 
Abstract:  In this paper, we study starting point bias in double-bounded contingent valuation surveys. This phenomenon arises in applications that use multiple valuation questions: responses to follow-up valuation questions may be influenced by the bid proposed in the initial valuation question. Previous research has sought to control for such an effect, but finds that the efficiency gains over a single dichotomous choice question are lost once these undesirable response effects are controlled for. Contrary to these results, we propose a way to control for starting point bias in double-bounded questions while retaining gains in efficiency. 
Keywords:  Starting-point bias, contingent valuation. 
JEL:  Q26 C81 C93 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:mse:wpsorb:v05076&r=ecm 
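In the double-bounded format, the yes/no answers to the first bid and its follow-up pin willingness to pay into one of four intervals, whose probabilities are differences of the WTP distribution function. The sketch below uses a logistic WTP distribution and hypothetical bids; it shows the likelihood contributions only, not the paper's correction for starting point bias.

```python
import math

def F(v, mean=0.0, scale=1.0):
    """Logistic CDF for willingness to pay (a distributional assumption)."""
    return 1.0 / (1.0 + math.exp(-(v - mean) / scale))

def double_bounded_probs(bid, bid_up, bid_down, mean=0.0, scale=1.0):
    """Outcome probabilities for first bid `bid`, follow-up bid `bid_up`
    after a 'yes' and `bid_down` after a 'no' (bid_down < bid < bid_up)."""
    return {
        "yes,yes": 1.0 - F(bid_up, mean, scale),
        "yes,no":  F(bid_up, mean, scale) - F(bid, mean, scale),
        "no,yes":  F(bid, mean, scale) - F(bid_down, mean, scale),
        "no,no":   F(bid_down, mean, scale),
    }
```

The log-likelihood sums the log of the relevant cell probability over respondents; starting point bias enters when the answer to the follow-up question depends on the first bid itself, breaking the single-distribution interval structure above.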
By:  Federico Ravenna 
Keywords:  Vector Autoregression; Dynamic Stochastic General Equilibrium Model; Kalman Filter; Business Cycle Shocks 
JEL:  C13 C22 E32 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:red:sed005:841&r=ecm 
By:  Lawrence Christiano; Martin Eichenbaum 
JEL:  E32 C15 C52 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:red:sed005:902&r=ecm 
By:  Wolfgang Härdle; SeokOh Jeong 
Abstract:  How can we measure and compare the relative performance of production units? If input and output variables are one-dimensional, then the simplest way is to compute efficiency by calculating and comparing the ratio of output to input for each production unit. This idea is inappropriate, though, when multiple inputs or multiple outputs are observed. Consider a bank, for example, with three branches A, B, and C. The branches take the number of staff as the input, and measure outputs such as the number of transactions on personal and business accounts. Assume that the following statistics are observed: 
Branch A: 60000 personal transactions, 50000 business transactions, 25 people on staff; 
Branch B: 50000 personal transactions, 25000 business transactions, 15 people on staff; 
Branch C: 45000 personal transactions, 15000 business transactions, 10 people on staff. 
We observe that Branch C performed best in terms of personal transactions per staff, whereas Branch A has the highest ratio of business transactions per staff. By contrast, Branch B performed better than Branch A in terms of personal transactions per staff, and better than Branch C in terms of business transactions per staff. How can we compare these business units in a fair way? Moreover, can we possibly create a virtual branch that reflects the input/output mechanism and thus creates a scale for the real branches? Productivity analysis provides a systematic approach to these problems. We review the basic concepts of productivity analysis and two popular methods, DEA and FDH, which are given in Sections 12.1 and 12.2, respectively. Sections 12.3 and 12.4 contain illustrative examples with real data. 
Keywords:  relative performance, production units, productivity analysis, Data Envelopment Analysis, DEA, Free Disposal Hull, FDH, insurance agencies, manufacturing industry 
JEL:  C14 D20 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005013&r=ecm 
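The worked branch comparison in the abstract can be reproduced directly from its numbers. This sketch only computes the single-ratio comparisons the abstract discusses; DEA itself goes further and solves a linear program per production unit, which is not attempted here.

```python
branches = {
    "A": {"staff": 25, "personal": 60000, "business": 50000},
    "B": {"staff": 15, "personal": 50000, "business": 25000},
    "C": {"staff": 10, "personal": 45000, "business": 15000},
}

# output-per-staff ratios for each branch and each output
ratios = {name: {out: b[out] / b["staff"] for out in ("personal", "business")}
          for name, b in branches.items()}

best_personal = max(ratios, key=lambda n: ratios[n]["personal"])   # "C" (4500)
best_business = max(ratios, key=lambda n: ratios[n]["business"])   # "A" (2000)
```

The ratios confirm every claim in the abstract: C leads on personal transactions per staff, A on business transactions per staff, and B beats A on the former while beating C on the latter, so no branch dominates on both criteria, which is exactly why a multi-output method like DEA or FDH is needed.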