nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒11‒05
eleven papers chosen by
Sune Karlsson
Örebro University

  1. A New Approach to Robust Inference in Cointegration By Sainan Jin; Peter C.B. Phillips; Yixiao Sun
  2. A Nonparametric Way of Distribution Testing By Ekrem Kilic
  3. Testing for Deterministic and Stochastic Cycles in Macroeconomic Time Series By Guglielmo Maria Caporale; Luis A. Gil-Alana
  4. Forecasting Volatility of Turkish Markets: A Comparison of Thin and Thick Models By Ekrem Kilic
  5. Joint LM Test for Homoskedasticity in a One-Way error Component Model By Badi H. Baltagi; Georges Bresson; Alain Pirotte
  6. Edgeworth Expansions for Realized Volatility and Related Estimators By Lan Zhang; Per A. Mykland; Yacine Ait-Sahalia
  7. Directional Log-spline Distributions By José T.A.S. Ferreira; Miguel A. Juárez; Mark F.J. Steel
  8. Cox-McFadden Partial and Marginal Likelihoods for the Proportional Hazard Model with Random Effects By Jan Ondrich
  9. Normalized Equation and Decomposition Analysis: Computation and Inference By Myeong-Su Yun
  10. Bayesian Consistent Prior Selection By Christopher Chambers; Takashi Hayashi
  11. Non-linearities and Fractional Integration in the US Unemployment Rate By Guglielmo Maria Caporale; Luis A. Gil-Alana

  1. By: Sainan Jin (Guanghua School of Management, Peking University); Peter C.B. Phillips (Cowles Foundation, Yale University, University of Auckland and University of York); Yixiao Sun (Dept. of Economics, University of California, San Diego)
    Abstract: A new approach to robust testing in cointegrated systems is proposed using nonparametric HAC estimators without truncation. While such HAC estimates are inconsistent, they still produce asymptotically pivotal tests and, as in conventional regression settings, can improve testing and inference. The present contribution makes use of steep origin kernels which are obtained by exponentiating traditional quadratic kernels. Simulations indicate that tests based on these methods have improved size properties relative to conventional tests and better power properties than other tests that use Bartlett or other traditional kernels with no truncation.
    Keywords: Cointegration, HAC estimation, long-run covariance matrix, robust inference, steep origin kernel, fully modified estimation
    JEL: C12 C14 C22
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1538&r=ecm
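    As a rough illustration of the kernel construction described above (a sketch of the untruncated long-run variance estimate, not the paper's full fully modified testing procedure), the following exponentiates a simple quadratic mother kernel k(x) = 1 - x^2 to produce a steep origin kernel; the mother kernel and the exponent rho are assumptions made for illustration.

```python
import numpy as np

def lrv_steep_origin(u, rho=16):
    """Untruncated long-run variance estimate built from a 'steep origin'
    kernel obtained by exponentiating a quadratic mother kernel.
    k(x) = 1 - x**2 and rho = 16 are illustrative assumptions."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    T = len(u)
    omega = u @ u / T                      # lag-0 autocovariance
    for j in range(1, T):                  # all lags enter: no truncation
        w = (1.0 - (j / T) ** 2) ** rho    # exponentiated kernel weight
        omega += 2.0 * w * (u[j:] @ u[:-j]) / T
    return omega
```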
  2. By: Ekrem Kilic (Istanbul Bilgi University)
    Abstract: Testing the distribution of a random sample can indeed be considered a goodness-of-fit problem. If we use a nonparametric density estimate of the sample as a consistent estimate of the exact distribution, the problem reduces, more specifically, to the distance between two functions. This paper examines distribution testing from this point of view and suggests a nonparametric procedure. Although the procedure is applicable to all distributions, the paper emphasizes the normality test. The critical values for this normality test are generated using Monte Carlo techniques.
    Keywords: distribution testing, normality, monte carlo simulation
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–10–29
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0510006&r=ecm
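    A minimal sketch of the idea in the abstract, assuming a Gaussian kernel estimator, a Silverman-type bandwidth and an L2 distance (the author's exact statistic may differ): estimate the density of the standardized sample, measure its distance to the standard normal density, and simulate critical values under the null.

```python
import numpy as np
from scipy.stats import norm

def l2_stat(x, grid_size=512):
    """L2 distance between a Gaussian kernel density estimate of the
    standardized sample and the standard normal density."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std(ddof=1)          # standardize the sample
    n = len(x)
    h = 1.06 * n ** (-1 / 5)                    # Silverman-type bandwidth
    g = np.linspace(-5, 5, grid_size)
    kde = norm.pdf((g[:, None] - x[None, :]) / h).mean(axis=1) / h
    return np.sum((kde - norm.pdf(g)) ** 2) * (g[1] - g[0])

def mc_critical_value(n, alpha=0.05, reps=2000, seed=0):
    """Monte Carlo critical value of the statistic under the normal null."""
    rng = np.random.default_rng(seed)
    stats = [l2_stat(rng.standard_normal(n)) for _ in range(reps)]
    return np.quantile(stats, 1 - alpha)
```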
  3. By: Guglielmo Maria Caporale; Luis A. Gil-Alana
    Abstract: In this paper we use a statistical procedure which is appropriate to test for deterministic and stochastic (stationary and nonstationary) cycles in macroeconomic time series. These tests have standard null and local limit distributions and are easy to apply to raw time series. Monte Carlo evidence shows that they perform relatively well in the case of functional misspecification in the cyclical structure of the series. As an example, we use this approach to test for the presence of cycles in US real GDP.
    Date: 2005–06
    URL: http://d.repec.org/n?u=RePEc:bru:bruedp:05-11&r=ecm
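    The stochastic-cycle alternative that tests of this kind consider can be pictured with a Gegenbauer (cyclically fractionally integrated) process. The sketch below simulates one through the standard Gegenbauer-coefficient recursion; the cycle frequency, memory parameter, and truncation length are illustrative assumptions, and this is not the authors' test statistic.

```python
import numpy as np

def gegenbauer_cycle(T, d=0.3, omega=np.pi / 8, n_lags=500, seed=0):
    """Simulate y_t = (1 - 2*cos(omega)*L + L**2)**(-d) e_t, a stochastic
    cycle with a spectral pole at frequency omega, using the Gegenbauer
    polynomial recursion for the (truncated) MA coefficients."""
    u = np.cos(omega)
    psi = np.empty(n_lags)
    psi[0] = 1.0
    psi[1] = 2.0 * d * u
    for j in range(2, n_lags):
        psi[j] = (2 * u * (j + d - 1) * psi[j - 1]
                  - (j + 2 * d - 2) * psi[j - 2]) / j
    e = np.random.default_rng(seed).standard_normal(T + n_lags)
    return np.convolve(e, psi)[n_lags:n_lags + T]   # drop the burn-in
```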
  4. By: Ekrem Kilic (Marmara University)
    Abstract: The volatility of financial markets is an important topic for academics, policy makers and market participants. In this study I first summarize several specifications for the conditional variance and define some methods for combining these specifications. Then, taking squared returns as the benchmark estimate for the actual volatility of the day, I compare all of the models with respect to how efficiently they mimic realized volatility. I also use a VaR approach to compare these forecasts. These analyses allow me to examine whether a combination of the forecasts can outperform the single models.
    Keywords: volatility, arch, garch, combination, VaR
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–10–29
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0510007&r=ecm
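    The evaluation step described in the abstract, with squared returns as the benchmark for actual volatility and an equally weighted "thick" combination alongside the single "thin" models, can be sketched as follows; the equal weights and the MSE loss are assumptions for illustration.

```python
import numpy as np

def compare_vol_forecasts(returns, forecasts):
    """Rank variance forecasts against squared returns (the benchmark
    volatility proxy) and add an equal-weight 'thick model' combination.
    `forecasts` maps model names to arrays of one-step variance forecasts."""
    target = np.asarray(returns, dtype=float) ** 2   # squared-return proxy
    thick = np.mean([np.asarray(f) for f in forecasts.values()], axis=0)
    candidates = {**forecasts, "thick (equal-weight)": thick}
    return {name: float(np.mean((np.asarray(f) - target) ** 2))
            for name, f in candidates.items()}       # MSE per model
```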
  5. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University); Georges Bresson (ERMES (CNRS), Université Panthéon-Assas Paris II, 12 place du Panthéon, 75230 Paris Cedex 05, France); Alain Pirotte (ERMES (CNRS), Université Panthéon-Assas Paris II, 12 place du Panthéon, 75230 Paris Cedex 05, France)
    Abstract: This paper considers a general heteroskedastic error component model using panel data, and derives a joint LM test for homoskedasticity against the alternative of heteroskedasticity in both error components. It contrasts this joint LM test with marginal LM tests that ignore the heteroskedasticity in one of the error components. Monte Carlo results show that misleading inference can occur when using marginal rather than joint tests when heteroskedasticity is present in both components.
    Keywords: panel data, heteroskedasticity, Lagrange multiplier tests, error components, Monte Carlo simulations
    JEL: C23
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:72&r=ecm
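    The alternative the joint test is directed against, heteroskedasticity in both the individual effect and the idiosyncratic error, can be made concrete with a small simulation; the exponential skedastic functions below are an illustrative assumption, not necessarily the paper's specification.

```python
import numpy as np

def simulate_heteroskedastic_panel(N=100, T=5, beta=1.0,
                                   theta_mu=0.5, theta_nu=0.5, seed=0):
    """Simulate y_it = beta*x_it + mu_i + nu_it where the variances of
    both error components scale with exp(theta * z)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((N, T))
    z_i = rng.standard_normal(N)                 # individual-level driver
    z_it = rng.standard_normal((N, T))           # observation-level driver
    mu = rng.standard_normal(N) * np.exp(theta_mu * z_i / 2)
    nu = rng.standard_normal((N, T)) * np.exp(theta_nu * z_it / 2)
    return beta * x + mu[:, None] + nu, x
```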
  6. By: Lan Zhang; Per A. Mykland; Yacine Ait-Sahalia
    Abstract: This paper shows that the asymptotic normal approximation is often insufficiently accurate for volatility estimators based on high frequency data. To remedy this, we compute Edgeworth expansions for such estimators. Unlike the usual expansions, we have found that in order to obtain meaningful terms, one needs to let the size of the noise go to zero asymptotically. The results have application to Cornish-Fisher inversion and bootstrapping.
    JEL: C13 C14 C15 C22
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberte:0319&r=ecm
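    The Cornish-Fisher inversion mentioned at the end of the abstract adjusts a normal quantile using higher-order cumulants. The sketch below is the textbook fourth-order formula, not the paper's estimator-specific expansion.

```python
from scipy.stats import norm

def cornish_fisher_quantile(alpha, skew, ekurt):
    """Fourth-order Cornish-Fisher adjustment of the standard normal
    quantile, given skewness and excess kurtosis."""
    z = norm.ppf(alpha)
    return (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * ekurt / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
```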
  7. By: José T.A.S. Ferreira (Endeavour Capital Management); Miguel A. Juárez (University of Warwick); Mark F.J. Steel (University of Warwick)
    Abstract: We introduce a new class of distributions to model directional data, based on hyperspherical log-splines. The class is very flexible and can be used to model data that exhibit features that cannot be accommodated by typical parametric distributions, such as asymmetries and multimodality. The distributions are defined on hyperspheres of any dimension and thus include the most common circular and spherical cases. Due to the flexibility of hyperspherical log-splines, the distributions can approximate the distribution of any phenomenon well and are as smooth as desired. We propose a Bayesian setup for conducting inference with directional log-spline distributions, in which we pay particular attention to the prior specification and to matching the priors of the log-splines model and of the model constructed through a mixture of von Mises distributions. We compare both models in the context of three data sets: generated data on the circle, a circular application concerning the movement of turtles, and a spherical application on the arrival direction of cosmic rays.
    Keywords: Directional distributions, hyperspherical splines, mixture of distributions, prior matching, von Mises distributions
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–11–01
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511001&r=ecm
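    The benchmark model in the abstract, a finite mixture of von Mises distributions on the circle, has a simple closed-form density; the two-component parameter values in the comment below are an illustrative assumption.

```python
import numpy as np
from scipy.stats import vonmises

def vonmises_mixture_pdf(theta, weights, mus, kappas):
    """Density of a finite von Mises mixture at angles theta."""
    theta = np.asarray(theta, dtype=float)
    return sum(w * vonmises.pdf(theta, kappa, loc=mu)
               for w, mu, kappa in zip(weights, mus, kappas))

# Example: a bimodal circular density with illustrative parameters.
# grid = np.linspace(-np.pi, np.pi, 200)
# vonmises_mixture_pdf(grid, [0.6, 0.4], [0.0, np.pi], [4.0, 2.0])
```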
  8. By: Jan Ondrich (Center for Policy Research, Maxwell School, Syracuse University)
    Abstract: In survival analysis, Cox's name is associated with the partial likelihood technique that allows consistent estimation of proportional hazard scale parameters without specifying a duration dependence baseline. In discrete choice analysis, McFadden's name is associated with the generalized extreme-value (GEV) class of logistic choice models that relax the independence of irrelevant alternatives assumption. This paper shows that the mixed class of proportional hazard specifications allowing consistent estimation of scale and mixing parameters using partial likelihood is isomorphic to the GEV class. Independent censoring is allowed and I discuss approximations to the partial likelihood in the presence of ties. Finally, the partial likelihood score vector can be used to construct log-rank tests that do not require the independence of observations involved.
    Keywords: proportional hazard, random effects, partial likelihood, GEV class
    JEL: C14 C41
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:68&r=ecm
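    For reference, the Cox partial likelihood that the paper generalizes has a compact form: each observed failure contributes the ratio of its hazard score to the sum of scores over the risk set. A minimal log partial likelihood, assuming no ties and no random effects, is sketched below.

```python
import numpy as np

def cox_log_partial_likelihood(beta, X, time, event):
    """Log partial likelihood of the proportional hazard model.
    `event` is 1 for an observed failure, 0 for a censored spell."""
    eta = X @ beta                                   # linear predictor
    ll = 0.0
    for i in np.where(event == 1)[0]:                # failures only
        at_risk = time >= time[i]                    # risk set at t_i
        ll += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return ll
```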
  9. By: Myeong-Su Yun (Tulane University and IZA Bonn)
    Abstract: This paper joins discussions on normalized regression and decomposition equations in devising a simple and general algorithm for obtaining the normalized regression and applying it to the Oaxaca decomposition. This resolves the invariance problem in the detailed Oaxaca decomposition. An algorithm to calculate an asymptotic covariance matrix for estimates in the normalized regression for hypothesis testing is also derived. We extend these algorithms to non-linear equations where the underlying equation is linear and decompose differences in the first moment.
    Keywords: detailed decomposition, invariance, identification, characteristics effect, coefficients effect, normalized regression
    JEL: C20 J70
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp1822&r=ecm
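    The normalization at the heart of the paper can be illustrated for a single categorical regressor: make the omitted category explicit with a zero coefficient, then sweep the mean of the category coefficients into the intercept so that they sum to zero. A detailed decomposition computed from such coefficients no longer depends on which category was omitted. A sketch under those assumptions:

```python
import numpy as np

def normalize_dummies(intercept, dummy_coefs):
    """Normalize the coefficients of one set of dummy variables: include
    the omitted category with coefficient 0, recenter the coefficients to
    sum to zero, and absorb their mean into the intercept."""
    b = np.append(0.0, np.asarray(dummy_coefs, dtype=float))
    shift = b.mean()
    return intercept + shift, b - shift   # new intercept, normalized coefs
```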
  10. By: Christopher Chambers; Takashi Hayashi
    Date: 2005–11–02
    URL: http://d.repec.org/n?u=RePEc:cla:najeco:784828000000000529&r=ecm
  11. By: Guglielmo Maria Caporale; Luis A. Gil-Alana
    Abstract: This paper proposes a model of the US unemployment rate which accounts for both its asymmetry and its long memory. Our approach introduces fractional integration and nonlinearities simultaneously into the same framework, using a Lagrange Multiplier procedure with a standard null limit distribution. The empirical results suggest that the US unemployment rate can be specified in terms of a fractionally integrated process, which interacts with some non-linear functions of labour demand variables such as real oil prices and real interest rates. We also find evidence of a long-memory component. Our results are consistent with a hysteresis model with path dependency rather than a NAIRU model with an underlying unemployment equilibrium rate, thereby giving support to more activist stabilisation policies. However, any suitable model should also include business cycle asymmetries, with implications for both forecasting and policy-making.
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:bru:bruedp:05-17&r=ecm
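    The fractional integration component of such a model can be made concrete through the expansion of the filter (1 - L)**d, whose coefficients follow a simple recursion; the sketch below applies the filter to a series, truncating the expansion at the sample length.

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference filter (1 - L)**d using the
    recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return np.array([pi[:t + 1] @ x[t::-1] for t in range(n)])
```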

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.