
By:  Sainan Jin (Guanghua School of Management, Peking University); Peter C.B. Phillips (Cowles Foundation, Yale University, University of Auckland and University of York); Yixiao Sun (Dept. of Economics, University of California, San Diego) 
Abstract:  A new approach to robust testing in cointegrated systems is proposed using nonparametric HAC estimators without truncation. While such HAC estimates are inconsistent, they still produce asymptotically pivotal tests and, as in conventional regression settings, can improve testing and inference. The present contribution makes use of steep origin kernels which are obtained by exponentiating traditional quadratic kernels. Simulations indicate that tests based on these methods have improved size properties relative to conventional tests and better power properties than other tests that use Bartlett or other traditional kernels with no truncation. 
Keywords:  Cointegration, HAC estimation, long-run covariance matrix, robust inference, steep origin kernel, fully modified estimation 
JEL:  C12 C14 C22 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1538&r=ecm 
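The untruncated-HAC idea above can be illustrated with a minimal Python sketch: all sample autocovariances are weighted by an exponentiated quadratic kernel evaluated at j/T. The base kernel (1 - x^2), the exponent rho = 16, and the AR(1) example are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def hac_steep_origin(u, rho=16):
    """Long-run variance estimate with no truncation: every lag j is
    weighted by the exponentiated quadratic kernel (1 - (j/T)^2)^rho.
    Illustrative sketch; the base kernel and rho are assumptions."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    u = u - u.mean()
    omega = u @ u / T                      # lag-0 autocovariance
    for j in range(1, T):                  # all lags, no bandwidth cutoff
        w = (1.0 - (j / T) ** 2) ** rho    # steep origin kernel weight
        gamma_j = u[j:] @ u[:-j] / T       # sample autocovariance at lag j
        omega += 2.0 * w * gamma_j
    return omega

# AR(1) example with coefficient 0.5; true long-run variance is 4
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
u = np.empty(500)
u[0] = e[0]
for t in range(1, 500):
    u[t] = 0.5 * u[t - 1] + e[t]
print(hac_steep_origin(u))
```

As the paper notes, such an estimate is inconsistent, but the resulting test statistics remain asymptotically pivotal.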
By:  Ekrem Kilic (Istanbul Bilgi University) 
Abstract:  Testing the distribution of a random sample can indeed be considered a goodness-of-fit problem. If we use the nonparametric density estimate of the sample as a consistent estimate of the exact distribution, the problem reduces, more specifically, to the distance between two functions. This paper examines distribution testing from this point of view and suggests a nonparametric procedure. Although the procedure is applicable to all distributions, the paper emphasizes the normality test. The critical values for this normality test are generated using Monte Carlo techniques. 
Keywords:  distribution testing, normality, Monte Carlo simulation 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–10–29 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0510006&r=ecm 
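A sketch of this kind of test in Python: compare a kernel density estimate with the fitted normal density via an integrated squared distance, and simulate critical values under the null. The Gaussian kernel, Silverman bandwidth, and grid are assumptions; the paper's exact statistic may differ.

```python
import numpy as np

def kde(x, grid, h):
    # Gaussian kernel density estimate evaluated on a grid
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

def normality_stat(x):
    """Integrated squared distance between a KDE of x and the normal
    density with x's mean and std. Illustrative statistic only."""
    x = np.asarray(x, float)
    n = len(x)
    s = x.std(ddof=1)
    h = 1.06 * s * n ** (-0.2)                       # Silverman's rule
    grid = np.linspace(x.mean() - 4 * s, x.mean() + 4 * s, 200)
    f_hat = kde(x, grid, h)
    phi = np.exp(-0.5 * ((grid - x.mean()) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return ((f_hat - phi) ** 2).sum() * (grid[1] - grid[0])

rng = np.random.default_rng(1)
# Monte Carlo critical value under the normal null (n = 100)
null_stats = [normality_stat(rng.standard_normal(100)) for _ in range(200)]
crit = np.quantile(null_stats, 0.95)
stat = normality_stat(rng.exponential(size=100))     # skewed alternative
print(stat > crit)
```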
By:  Guglielmo Maria Caporale; Luis A. Gil-Alana 
Abstract:  In this paper we use a statistical procedure which is appropriate to test for deterministic and stochastic (stationary and nonstationary) cycles in macroeconomic time series. These tests have standard null and local limit distributions and are easy to apply to raw time series. Monte Carlo evidence shows that they perform relatively well in the case of functional misspecification in the cyclical structure of the series. As an example, we use this approach to test for the presence of cycles in US real GDP. 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:bru:bruedp:0511&r=ecm 
By:  Ekrem Kilic (Marmara University) 
Abstract:  Volatility of financial markets is an important topic for academics, policy makers and market participants. In this study I first summarize several specifications for the conditional variance and define some methods for combining these specifications. Then, assuming that squared returns are the benchmark estimate of the actual volatility of the day, I compare all of the models with respect to how efficiently they mimic the realized volatility. At the same time I use a VaR approach to compare these forecasts. With the help of these analyses I examine whether a combination of the forecasts can outperform the single models. 
Keywords:  volatility, arch, garch, combination, VaR 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–10–29 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0510007&r=ecm 
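The evaluation logic (squared returns as the volatility benchmark, plus a forecast combination) can be sketched as follows. The two forecasters here, a RiskMetrics-style EWMA and a rolling-window variance with an equal-weight combination, are illustrative stand-ins, not the specifications the study compares.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
# Simulate returns with a GARCH(1,1)-style conditional variance
h = np.empty(T); r = np.empty(T)
h[0] = 1.0
for t in range(T):
    r[t] = np.sqrt(h[t]) * rng.standard_normal()
    if t + 1 < T:
        h[t + 1] = 0.05 + 0.10 * r[t] ** 2 + 0.85 * h[t]

# Two one-step-ahead variance forecasts (illustrative choices)
ewma = np.empty(T); ewma[0] = r[:50].var()
for t in range(1, T):
    ewma[t] = 0.94 * ewma[t - 1] + 0.06 * r[t - 1] ** 2   # RiskMetrics EWMA
roll = np.array([r[max(0, t - 50):t].var() if t > 1 else ewma[0]
                 for t in range(T)])                       # 50-day window
combo = 0.5 * (ewma + roll)                                # equal weights

# Squared returns as the (noisy) benchmark for realized variance
mse = {name: np.mean((r ** 2 - f) ** 2)
       for name, f in [("ewma", ewma), ("rolling", roll), ("combined", combo)]}
print(mse)
```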
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University); Georges Bresson (ERMES (CNRS), Université Panthéon-Assas Paris II, 12 place du Panthéon, 75230 Paris Cedex 05, France); Alain Pirotte (ERMES (CNRS), Université Panthéon-Assas Paris II, 12 place du Panthéon, 75230 Paris Cedex 05, France) 
Abstract:  This paper considers a general heteroskedastic error component model using panel data, and derives a joint LM test for homoskedasticity against the alternative of heteroskedasticity in both error components. It contrasts this joint LM test with marginal LM tests that ignore the heteroskedasticity in one of the error components. Monte Carlo results show that misleading inference can occur when using marginal rather than joint tests when heteroskedasticity is present in both components. 
Keywords:  panel data, heteroskedasticity, Lagrange multiplier tests, error components, Monte Carlo simulations 
JEL:  C23 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:72&r=ecm 
By:  Lan Zhang; Per A. Mykland; Yacine Aït-Sahalia 
Abstract:  This paper shows that the asymptotic normal approximation is often insufficiently accurate for volatility estimators based on high-frequency data. To remedy this, we compute Edgeworth expansions for such estimators. Unlike the usual expansions, we have found that in order to obtain meaningful terms, one needs to let the size of the noise go to zero asymptotically. The results have applications to Cornish-Fisher inversion and bootstrapping. 
JEL:  C13 C14 C15 C22 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberte:0319&r=ecm 
By:  José T.A.S. Ferreira (Endeavour Capital Management); Miguel A. Juárez (University of Warwick); Mark F.J. Steel (University of Warwick) 
Abstract:  We introduce a new class of distributions to model directional data, based on hyperspherical log-splines. The class is very flexible and can be used to model data that exhibit features that cannot be accommodated by typical parametric distributions, such as asymmetries and multimodality. The distributions are defined on hyperspheres of any dimension and thus include the most common circular and spherical cases. Due to the flexibility of hyperspherical log-splines, the distributions can approximate well the distribution of any phenomenon and are as smooth as desired. We propose a Bayesian setup for conducting inference with directional log-spline distributions, paying particular attention to the prior specification and to matching the priors of the log-spline model and of a model constructed through a mixture of von Mises distributions. We compare both models in the context of three data sets: generated data on the circle, a circular application concerning the movement of turtles, and a spherical application on the arrival direction of cosmic rays. 
Keywords:  Directional distributions, hyperspherical splines, mixture of distributions, prior matching, von Mises distributions 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–11–01 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511001&r=ecm 
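The von Mises mixture used as the comparison model above is easy to sketch on the circle. The weights, locations, and concentrations below are arbitrary illustrative values; the bimodal shape shows the kind of multimodality a single parametric distribution cannot capture.

```python
import numpy as np

def vonmises_pdf(theta, mu, kappa):
    # von Mises density on the circle; np.i0 is the modified Bessel I0
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

def mixture_pdf(theta, weights, mus, kappas):
    # Finite mixture of von Mises components
    return sum(w * vonmises_pdf(theta, m, k)
               for w, m, k in zip(weights, mus, kappas))

theta = np.linspace(-np.pi, np.pi, 1000)
# Bimodal example: two components with different locations/concentrations
dens = mixture_pdf(theta, [0.6, 0.4], [0.0, 2.5], [4.0, 8.0])
total = dens.sum() * (theta[1] - theta[0])  # should be close to 1
print(total)
```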
By:  Jan Ondrich (Center for Policy Research, Maxwell School, Syracuse University) 
Abstract:  In survival analysis, Cox's name is associated with the partial likelihood technique that allows consistent estimation of proportional hazard scale parameters without specifying a duration dependence baseline. In discrete choice analysis, McFadden's name is associated with the generalized extreme-value (GEV) class of logistic choice models that relax the independence of irrelevant alternatives assumption. This paper shows that the mixed class of proportional hazard specifications allowing consistent estimation of scale and mixing parameters using partial likelihood is isomorphic to the GEV class. Independent censoring is allowed, and I discuss approximations to the partial likelihood in the presence of ties. Finally, the partial likelihood score vector can be used to construct log-rank tests that do not require the independence of the observations involved. 
Keywords:  proportional hazard, random effects, partial likelihood, GEV class 
JEL:  C14 C41 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:68&r=ecm 
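As background for the partial likelihood machinery discussed above, here is the standard Cox partial log-likelihood for a single covariate with no ties and independent censoring, maximized by a crude grid search. The simulated design and the grid estimator are assumptions for illustration; the paper's mixed/GEV extension is not implemented here.

```python
import numpy as np

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood (no ties): at each failure time,
    x_i*beta minus the log of sum(exp(x_j*beta)) over the risk set."""
    order = np.argsort(times)
    times, events, x = times[order], events[order], x[order]
    eta = x * beta
    ll = 0.0
    for i in range(len(times)):
        if events[i]:
            ll += eta[i] - np.log(np.exp(eta[i:]).sum())  # eta[i:]: risk set
    return ll

rng = np.random.default_rng(3)
x = rng.standard_normal(200)
t = rng.exponential(scale=np.exp(-0.8 * x))    # hazard exp(0.8*x): beta = 0.8
c = rng.exponential(scale=2.0, size=200)       # independent censoring times
times = np.minimum(t, c)
events = (t <= c).astype(int)

# Crude grid search for the partial-likelihood maximizer
grid = np.linspace(-2, 2, 81)
beta_hat = grid[np.argmax([cox_partial_loglik(b, times, events, x)
                           for b in grid])]
print(beta_hat)
```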
By:  Myeong-Su Yun (Tulane University and IZA Bonn) 
Abstract:  This paper joins discussions on normalized regression and decomposition equations in devising a simple and general algorithm for obtaining the normalized regression and applying it to the Oaxaca decomposition. This resolves the invariance problem in the detailed Oaxaca decomposition. An algorithm to calculate an asymptotic covariance matrix for estimates in the normalized regression for hypothesis testing is also derived. We extend these algorithms to nonlinear equations where the underlying equation is linear and decompose differences in the first moment. 
Keywords:  detailed decomposition, invariance, identification, characteristics effect, coefficients effect, normalized regression 
JEL:  C20 J70 
Date:  2005–10 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp1822&r=ecm 
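For readers unfamiliar with the decomposition being normalized, here is the standard twofold Oaxaca decomposition of a mean gap into characteristics and coefficients effects, using group A's coefficients as the reference. The simulated data are illustrative; the paper's normalization of categorical-variable coding, which resolves the invariance problem, is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
# Two groups differing in both characteristics and coefficients
Xa = np.column_stack([np.ones(n), rng.normal(12, 2, n)])   # e.g. schooling
Xb = np.column_stack([np.ones(n), rng.normal(11, 2, n)])
ya = Xa @ np.array([1.0, 0.10]) + rng.normal(0, 0.5, n)
yb = Xb @ np.array([0.8, 0.08]) + rng.normal(0, 0.5, n)

ba, *_ = np.linalg.lstsq(Xa, ya, rcond=None)               # OLS, group A
bb, *_ = np.linalg.lstsq(Xb, yb, rcond=None)               # OLS, group B

# Twofold decomposition with group A's coefficients as reference:
# ybar_a - ybar_b = (Xbar_a - Xbar_b)' ba  +  Xbar_b' (ba - bb)
gap = ya.mean() - yb.mean()
characteristics = (Xa.mean(0) - Xb.mean(0)) @ ba   # explained part
coefficients = Xb.mean(0) @ (ba - bb)              # unexplained part
print(gap, characteristics + coefficients)         # the two sides agree
```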
By:  Christopher Chambers; Takashi Hayashi 
Date:  2005–11–02 
URL:  http://d.repec.org/n?u=RePEc:cla:najeco:784828000000000529&r=ecm 
By:  Guglielmo Maria Caporale; Luis A. Gil-Alana 
Abstract:  This paper proposes a model of the US unemployment rate which accounts for both its asymmetry and its long memory. Our approach introduces fractional integration and nonlinearities simultaneously into the same framework, using a Lagrange Multiplier procedure with a standard null limit distribution. The empirical results suggest that the US unemployment rate can be specified in terms of a fractionally integrated process, which interacts with some nonlinear functions of labour demand variables such as real oil prices and real interest rates. We also find evidence of a long-memory component. Our results are consistent with a hysteresis model with path dependency rather than a NAIRU model with an underlying unemployment equilibrium rate, thereby giving support to more activist stabilisation policies. However, any suitable model should also include business cycle asymmetries, with implications for both forecasting and policymaking. 
Date:  2005–09 
URL:  http://d.repec.org/n?u=RePEc:bru:bruedp:0517&r=ecm 