nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒09‒19
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. A bayesian approach to model-based clustering for panel probit models By Aßmann, Christian; Boysen-Hogrefe, Jens
  2. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis By Michael Jansson; Morten Ørregaard Nielsen
  3. "Testing the Box-Cox Parameter in an Integrated Process" By Jian Huang; Masahito Kobayashi; Michael McAleer
  4. Testing Parameter Stability in Quantile Models: An Application to the U.S. Inflation Process By Dong Jin Lee
  5. Efficient likelihood evaluation of state-space representations By DeJong, David N.; Dharmarajan, Hariharan; Liesenfeld, Roman; Moura, Guilherme V.; Richard, Jean-François
  6. Local Whittle estimation of multivariate fractionally integrated processes By Frank S. Nielsen
  7. Combining Forecasts Based on Multiple Encompassing Tests in a Macroeconomic Core System By Costantini, Mauro; Kunst, Robert M.
  8. "Dynamic Conditional Correlations for Asymmetric Processes" By Manabu Asai; Michael McAleer
  9. Forecasting economy with Bayesian autoregressive distributed lag model: choosing optimal prior in economic downturn By Bušs, Ginters
  10. "A Trinomial Test for Paired Data When There are Many Ties" By Guorui Bian; Michael McAleer; Wing-Keung Wong
  11. Dynamic Factor Models with Smooth Loadings for Analyzing the Term Structure of Interest Rates By Borus Jungbacker; Siem Jan Koopman; Michel van der Wel
  12. Normal versus Noncentral Chi-square Asymptotics of Misspecified Models By Chun, So Yeon; Shapiro, Alexander
  13. Uniform Bias Study and Bahadur Representation for Local Polynomial Estimators of the Conditional Quantile Function By Emmanuel Guerre; Camille Sabbah
  14. "Alternative Asymmetric Stochastic Volatility Models" By Manabu Asai; Michael McAleer
  15. "Non-Classical Measurement Error in Long-Term Retrospective Recall Surveys" By John Gibson; Bonggeun Kim
  16. "Asymmetry and Leverage in Realized Volatility" By Manabu Asai; Michael McAleer; Marcelo C. Medeiros

  1. By: Aßmann, Christian; Boysen-Hogrefe, Jens
    Abstract: Consideration of latent heterogeneity is of special importance in nonlinear models for correctly gauging the effect of explanatory variables on the dependent variable. This paper adopts the stratified model-based clustering approach for modeling latent heterogeneity in panel probit models. Within a Bayesian framework, an estimation algorithm dealing with the inherent label-switching problem is provided. Determination of the number of clusters is based on the marginal likelihood and out-of-sample criteria. The ability to decide on the correct number of clusters is assessed within a simulation study, indicating high accuracy for both approaches. Different concepts of marginal effects, incorporating latent heterogeneity to different degrees, arise within the considered model setup and are directly at hand within Bayesian estimation via MCMC methodology. An empirical illustration of the developed methodology indicates that consideration of latent heterogeneity via latent clusters provides the preferred model specification compared to pooled and random coefficient specifications.
    Keywords: Bayesian Estimation, MCMC Methods, Panel Probit Model, Mixture Modelling
    JEL: C11 C23 C25
    Date: 2009
  2. By: Michael Jansson (UC Berkeley and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES)
    Abstract: Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.
    Keywords: Likelihood Ratio Test, Unit Root Hypothesis
    JEL: C12 C22
    Date: 2009–08–31
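    For intuition, the AR(1) likelihood ratio statistic at the heart of this approach can be sketched in a few lines. This is an illustrative simplification (zero intercept, Gaussian errors, none of the paper's local-asymptotics or power-envelope machinery), not the authors' test:

```python
import numpy as np

def ar1_lr_unit_root(y):
    """Likelihood ratio statistic for H0: rho = 1 in a Gaussian AR(1)
    with zero intercept (an illustrative sketch, not the paper's test)."""
    y0, y1 = y[:-1], y[1:]
    # Unrestricted MLE of rho and the implied innovation variance
    rho_hat = np.dot(y0, y1) / np.dot(y0, y0)
    sig2_u = np.mean((y1 - rho_hat * y0) ** 2)
    # Restricted innovation variance under the unit root (rho = 1)
    sig2_r = np.mean((y1 - y0) ** 2)
    T = len(y1)
    return T * (np.log(sig2_r) - np.log(sig2_u))

rng = np.random.default_rng(0)
# Under H0 (a random walk) the statistic has a nonstandard distribution;
# under a stationary alternative it grows with the sample size.
walk = np.cumsum(rng.standard_normal(500))
stat_h0 = ar1_lr_unit_root(walk)
stationary = rng.standard_normal(501)
for t in range(1, 501):
    stationary[t] = 0.5 * stationary[t - 1] + rng.standard_normal()
stat_h1 = ar1_lr_unit_root(stationary)
```

    Because the unrestricted variance estimate can never exceed the restricted one, the statistic is nonnegative by construction, and it is far larger for the stationary series than for the random walk.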
  3. By: Jian Huang (Guangdong University of Finance); Masahito Kobayashi (Faculty of Economics, Yokohama National University); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo)
    Abstract: This paper analyses the constant elasticity of volatility (CEV) model suggested by [6]. The CEV model without mean reversion is shown to be the inverse Box-Cox transformation of integrated processes asymptotically. It is demonstrated that the maximum likelihood estimator of the power parameter has a nonstandard asymptotic distribution, which is expressed as an integral of Brownian motions, when the data generating process is not mean reverting. However, it is shown that the t-ratio follows a standard normal distribution asymptotically, so that the use of the conventional t-test in analyzing the power parameter of the CEV model is justified even if there is no mean reversion, as is often the case in empirical research. The model may be applied to ultra-high-frequency data.
    Date: 2009–09
  4. By: Dong Jin Lee (University of Connecticut)
    Abstract: This paper considers parameter instability tests in conditional quantile models. I suggest tests for quantile parameter instability based on the asymptotically optimal tests of Lee (2008), in both parametric and semiparametric set-ups. In parametric models, Komunjer (2005)'s tick-exponential family of distributions is used as the underlying distribution, so that the test has asymptotically correct size even when the error distribution is misspecified. I apply the test statistic to various quantile models of the U.S. inflation process, such as the Phillips curve, the P-star model, and autoregressive models. The test results show evidence of parameter instability at most quantile levels for all models. The semiparametric test rejects stability even in the more recent period of moderate economic volatility. The Phillips curve and autoregressive models show asymmetric test results across quantile levels, implying an asymmetric response of inflation to economic shocks.
    Keywords: Quantile Model, optimal test, parameter instability, Phillips curve, inflation
    JEL: C12 C22 E31
    Date: 2009–02
  5. By: DeJong, David N.; Dharmarajan, Hariharan; Liesenfeld, Roman; Moura, Guilherme V.; Richard, Jean-François
    Abstract: We develop a numerical procedure that facilitates efficient likelihood evaluation in applications involving non-linear and non-Gaussian state-space models. The procedure approximates necessary integrals using continuous approximations of target densities. Construction is achieved via efficient importance sampling, and approximating densities are adapted to fully incorporate current information. We illustrate our procedure in applications to dynamic stochastic general equilibrium models.
    Keywords: particle filter, adaptation, efficient importance sampling, kernel density approximation, dynamic stochastic general equilibrium model
    Date: 2009
  6. By: Frank S. Nielsen (Aarhus University and CREATES)
    Abstract: This paper derives a semiparametric estimator of multivariate fractionally integrated processes covering both stationary and non-stationary values of d. We utilize the notion of the extended discrete Fourier transform and periodogram to extend the multivariate local Whittle estimator of Shimotsu (2007) to cover non-stationary values of d. We show consistency and asymptotic normality for d between -1/2 and infinity. A simulation study illustrates the performance of the proposed estimator for relevant sample sizes. The estimator is further justified through an empirical analysis of log spot exchange rates. We find that the log spot exchange rates of Germany, the United Kingdom, Japan, Canada, France, Italy, and Switzerland against the US Dollar for the period January 1974 until December 2001 are well described as I(1) processes.
    Keywords: fractional integration, local Whittle, long memory, multivariate semiparametric estimation, exchange rates.
    JEL: C14 C32
    Date: 2009–09–08
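    For intuition, the univariate, stationary-case local Whittle objective that this estimator extends can be sketched as follows. The bandwidth m = n^0.65 and the white-noise example (true d = 0) are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Univariate local Whittle estimate of the memory parameter d
    (a simplified stationary-case sketch, not the paper's extended
    multivariate estimator)."""
    n = len(x)
    # Periodogram at the first m Fourier frequencies
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    fx = np.fft.fft(x - x.mean())
    I = (np.abs(fx[1:m + 1]) ** 2) / (2.0 * np.pi * n)
    def R(d):
        # Local Whittle objective: log of the averaged rescaled
        # periodogram minus 2 d times the mean log frequency
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(1)
x = rng.standard_normal(2048)                # white noise: true d = 0
d_hat = local_whittle_d(x, m=int(2048 ** 0.65))
```

    With roughly 140 frequencies the asymptotic standard error 1/(2*sqrt(m)) is about 0.04, so the estimate for white noise should sit close to zero.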
  7. By: Costantini, Mauro (Department of Economics, University of Vienna BWZ, Vienna, Austria); Kunst, Robert M. (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria, and Department of Economics, University of Vienna, Vienna, Austria)
    Abstract: We investigate whether and to what extent multiple encompassing tests may help determine weights for forecast averaging in a standard vector autoregressive setting. To this end we consider a new test-based procedure, which assigns non-zero weights to candidate models that add information not covered by other models. The potential benefits of this procedure are explored in extensive Monte Carlo simulations using realistic designs that are adapted to U.K. and to French macroeconomic data. The real economic growth rates of these two countries serve as the target series to be predicted. Generally, we find that the test-based averaging of forecasts yields a performance that is comparable to a simple uniform weighting of individual models. In one of our role-model economies, test-based averaging achieves some advantages in small samples. In larger samples, pure prediction models outperform forecast averages.
    Keywords: Combining forecasts, encompassing tests, model selection, time series
    JEL: C32 C53
    Date: 2009–09
  8. By: Manabu Asai (Faculty of Economics, Soka University); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo)
    Abstract: The paper develops two Dynamic Conditional Correlation (DCC) models, namely the Wishart DCC (WDCC) model and the Matrix-Exponential Conditional Correlation (MECC) model. The paper applies the WDCC approach to the exponential GARCH (EGARCH) and GJR models to propose asymmetric DCC models. We use the standardized multivariate t-distribution to accommodate heavy-tailed errors. The paper presents an empirical example using the trivariate data of the Nikkei 225, Hang Seng and Straits Times Indices for estimating and forecasting the WDCC-EGARCH and WDCC-GJR models, and compares the performance with the asymmetric BEKK model. The empirical results show that AIC and BIC favour the WDCC-EGARCH model over the WDCC-GJR and asymmetric BEKK models. Moreover, the empirical results indicate that the WDCC-EGARCH-t model produces reasonable VaR threshold forecasts, which are very close to the nominal 1% to 3% values.
    Date: 2009–08
  9. By: Bušs, Ginters
    Abstract: Bayesian inference requires an analyst to set priors. Setting the right prior is crucial for precise forecasts. This paper analyzes how the optimal prior changes when an economy is hit by a recession. For this task, an autoregressive distributed lag (ADL) model is chosen. The results show that a sharp economic slowdown changes the optimal prior in two directions. First, it changes the structure of the optimal weight prior, setting a smaller weight on the lagged dependent variable compared to variables containing more recent information. Second, the greater uncertainty brought by a rapid economic downturn requires more room for coefficient variation, which is set by the overall tightness parameter. It is shown that the optimal overall tightness parameter may increase to such an extent that the Bayesian ADL becomes equivalent to the frequentist ADL.
    Keywords: Forecasting; Bayesian inference; Bayesian autoregressive distributed lag model; optimal prior; Litterman prior; business cycle; mixed estimation; grid search
    JEL: C52 C11 N14 C32 C13 C53 E17 C15 C22
    Date: 2009–09–13
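    The mixed-estimation idea behind a Litterman-style prior can be illustrated with dummy observations. This Theil-Goldberger sketch is a simplification of the paper's ADL setting, and the prior means and standard deviations below are made up for illustration:

```python
import numpy as np

def mixed_estimation(X, y, prior_mean, prior_sd):
    """Theil-Goldberger mixed estimation: stack artificial 'prior
    observations' onto the data, so each coefficient is shrunk toward
    its prior mean with tightness 1/prior_sd (an illustrative sketch)."""
    # One dummy row per coefficient: (1/sd_i) * beta_i = mean_i / sd_i
    Xp = np.diag(1.0 / np.asarray(prior_sd, dtype=float))
    yp = np.asarray(prior_mean, dtype=float) / np.asarray(prior_sd, dtype=float)
    Xa = np.vstack([X, Xp])
    ya = np.concatenate([y, yp])
    beta, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))
y = X @ np.array([1.0, 0.0]) + 0.5 * rng.standard_normal(200)
# A loose prior reproduces OLS; a very tight prior pins the
# coefficients at the prior mean (here, zero).
loose = mixed_estimation(X, y, [0.0, 0.0], [10.0, 10.0])
tight = mixed_estimation(X, y, [0.0, 0.0], [0.001, 0.001])
```

    Tightening the prior standard deviations is the sketch's analogue of lowering the overall tightness parameter: the data are increasingly overruled by the prior.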
  10. By: Guorui Bian (Department of Statistics, East China Normal University); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo); Wing-Keung Wong (Department of Economics and Institute for Computational Mathematics, Hong Kong Baptist University)
    Abstract: This paper develops a new test, the trinomial test, for pairwise ordinal data samples to improve the power of the sign test by modifying its treatment of zero differences between observations, thereby increasing the use of sample information. Simulations demonstrate the power superiority of the proposed trinomial test statistic over the sign test in small samples in the presence of tied observations. We also show that the proposed trinomial test has substantially higher power than the sign test in large samples with tied observations, as the sign test ignores the information in observations resulting in ties.
    Date: 2009–09
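    One plausible reading of the trinomial construction, treating ties as a third multinomial category with an estimated tie probability rather than discarding them as the sign test does, can be sketched as an exact test. This is an illustrative reconstruction, not the authors' procedure, and the paired data are invented:

```python
import numpy as np
from math import comb

def trinomial_test(x, y):
    """Exact two-sided test for paired data with ties: ties enter
    through an estimated tie probability instead of being dropped
    (an illustrative reconstruction, not the paper's test)."""
    d = np.sign(np.asarray(x) - np.asarray(y))
    n = len(d)
    n_pos, n_neg = int((d > 0).sum()), int((d < 0).sum())
    p0 = (d == 0).sum() / n              # estimated tie probability
    p = (1.0 - p0) / 2.0                 # equal +/- probability under H0
    obs = abs(n_pos - n_neg)
    # Sum trinomial probabilities of all outcomes at least as extreme
    pval = 0.0
    for i in range(n + 1):               # i = number of '+' differences
        for j in range(n + 1 - i):       # j = number of '-' differences
            if abs(i - j) >= obs:
                pval += (comb(n, i) * comb(n - i, j)
                         * p ** (i + j) * p0 ** (n - i - j))
    return pval

x = [5, 4, 6, 7, 5, 8, 6, 7, 5, 6, 7, 4, 5, 5, 6]
y = [3, 4, 4, 5, 5, 6, 5, 5, 4, 6, 5, 4, 3, 5, 4]
pval = trinomial_test(x, y)             # many '+' differences: small p-value
pval_ties = trinomial_test(x, x)        # all ties: no evidence at all
```

    The sign test would discard the five tied pairs above; here they shrink the common +/- probability and thereby sharpen the tail calculation.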
  11. By: Borus Jungbacker (Department of Econometrics, VU University Amsterdam); Siem Jan Koopman (Department of Econometrics, VU University Amsterdam and Tinbergen Institute); Michel van der Wel (Erasmus School of Economics, Tinbergen Institute, ERIM Rotterdam, and CREATES, Aarhus University)
    Abstract: We propose a new approach to the modelling of the term structure of interest rates. We consider the general dynamic factor model and show how to impose smoothness restrictions on the factor loadings. We further present a statistical procedure based on Wald tests that can be used to find a suitable set of such restrictions. We present these developments in the context of term structure models, but they are also applicable in other settings. We perform an empirical study using a data set of unsmoothed Fama-Bliss zero yields for US treasuries of different maturities. The general dynamic factor model with and without smooth loadings is considered in this study together with models that are associated with Nelson-Siegel and arbitrage-free frameworks. These existing models can be regarded as special cases of the dynamic factor model with restrictions on the model parameters. For all model candidates, we consider both stationary and nonstationary autoregressive processes (with different numbers of lags) for the latent factors. Finally, we perform statistical hypothesis tests to verify whether the restrictions imposed by the models are supported by the data. Our main conclusion is that smoothness restrictions can be imposed on the loadings of dynamic factor models for the term structure of US interest rates but that the restrictions implied by a number of popular term structure models are rejected.
    Keywords: Fama-Bliss data set, Kalman filter, Maximum likelihood, Yield curve
    JEL: C32 C51 E43
    Date: 2009–09–08
  12. By: Chun, So Yeon; Shapiro, Alexander
    Abstract: The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling (SEM). Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main goal of this paper is to evaluate the validity of employing these distributions in practice. Monte Carlo simulation results indicate that the noncentral chi-square distribution describes the behavior of the LR test statistic well under small, moderate and even severe misspecifications regardless of the sample size (as long as it is sufficiently large), while the normal distribution, with a bias correction, gives a slightly better approximation for very severe misspecifications. However, neither the noncentral chi-square distribution nor the theoretical normal distributions give a reasonable approximation of the LR test statistic under extremely severe misspecifications. Of course, extremely misspecified models are not of much practical interest.
    Keywords: Model misspecification; covariance structure analysis; maximum likelihood; generalized least squares; discrepancy function; noncentral chi-square distribution; normal distribution; factor analysis
    JEL: C52 C12 C15
    Date: 2009–09
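    The two competing reference distributions can be compared directly. The degrees of freedom and noncentrality below are illustrative choices, not the paper's simulation design; the normal approximation matches the first two moments of the noncentral chi-square:

```python
import numpy as np
from scipy import stats

# Noncentral chi-square with df degrees of freedom and noncentrality
# delta, versus its moment-matched normal approximation
# N(df + delta, 2 * (df + 2 * delta)). Illustrative parameter values.
df, delta = 10, 50.0
grid = np.linspace(20, 120, 101)
ncx2_cdf = stats.ncx2.cdf(grid, df, delta)
norm_cdf = stats.norm.cdf(grid, loc=df + delta,
                          scale=np.sqrt(2 * (df + 2 * delta)))
# Largest CDF discrepancy: the residual skewness of the noncentral
# chi-square that the symmetric normal cannot capture
max_gap = np.max(np.abs(ncx2_cdf - norm_cdf))
```

    Even with a sizeable noncentrality the two CDFs differ by only a percent or so, which is the sense in which either family can serve as a reference distribution over much of the misspecification range.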
  13. By: Emmanuel Guerre (Queen Mary, University of London); Camille Sabbah (Université Pierre et Marie Curie, Paris)
    Abstract: This paper investigates the bias and the Bahadur representation of a local polynomial estimator of the conditional quantile function and its derivatives. The bias and Bahadur remainder term are studied uniformly with respect to the quantile level, the covariates and the smoothing parameter. The order of the local polynomial estimator can be higher than the differentiability order of the conditional quantile function. Applications of the results deal with global optimal consistency rates of the local polynomial quantile estimator, performance of random bandwidths and estimation of the conditional quantile density function. The latter allows one to obtain a simple estimator of the conditional quantile function of the private values in first-price sealed-bid auctions under the independent private values paradigm and risk neutrality.
    Keywords: Bahadur representation, Conditional quantile function, Local polynomial estimation, Econometrics of auctions
    JEL: C14 C21
    Date: 2009–09
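    The local polynomial quantile idea (here only local linear, with a Gaussian kernel and an illustrative bandwidth) can be sketched as a kernel-weighted check-loss minimization; the data-generating process is invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

def local_linear_quantile(x, y, x0, tau, h):
    """Local linear estimator of the tau-th conditional quantile at x0:
    minimize the kernel-weighted check loss over an intercept and slope
    (a sketch of the basic idea, not the paper's higher-order version)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
    def check_loss(theta):
        a, b = theta
        u = y - a - b * (x - x0)
        # rho_tau(u) = u * (tau - 1{u < 0}), the quantile check function
        return np.sum(w * u * (tau - (u < 0)))
    res = minimize(check_loss, x0=[np.median(y), 0.0], method="Nelder-Mead")
    return res.x[0]                              # intercept = quantile at x0

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 800)
y = 2.0 * x + rng.standard_normal(800)           # conditional median = 2 x
q_hat = local_linear_quantile(x, y, x0=0.5, tau=0.5, h=0.1)
```

    Raising the polynomial order simply adds further slope terms in (x - x0) to the local fit; the paper's results cover how bias and the Bahadur remainder behave uniformly as the bandwidth h and the quantile level tau vary.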
  14. By: Manabu Asai (Faculty of Economics, Soka University); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo)
    Abstract: The stochastic volatility model usually incorporates asymmetric effects by introducing a negative correlation between the innovations in returns and volatility. In this paper, we propose a new asymmetric stochastic volatility model, based on the leverage and size effects. The model is a generalization of the exponential GARCH (EGARCH) model of Nelson (1991). We consider categories of asymmetric effects, which describe the differences among the asymmetric effect of the EGARCH model, the threshold effects indicator function of Glosten, Jagannathan and Runkle (1993), and the negative correlation between the innovations in returns and volatility. The new model is estimated by the efficient importance sampling method of Liesenfeld and Richard (2003), and the finite sample properties of the estimator are investigated using numerical simulations. Four financial time series are used to estimate the alternative asymmetric SV models, with empirical asymmetric effects found to be statistically significant in each case. The empirical results for S&P 500 and Yen/USD returns indicate that the leverage and size effects are significant, supporting the general model. For TOPIX and USD/AUD returns, the size effect is insignificant, favoring the negative correlation between the innovations in returns and volatility. We also consider the standardized t distribution for capturing tail behavior. The results for Yen/USD returns show that the model is correctly specified, while the results for the three other data sets suggest there is scope for improvement.
    Date: 2009–08
  15. By: John Gibson (Department of Economics, University of Waikato); Bonggeun Kim (Department of Economics, Seoul National University)
    Abstract: Applied microeconomic researchers are beginning to use long-term retrospective survey data in settings where conventional longitudinal survey data are unavailable. However, inaccurate long-term recall could induce non-classical measurement error, for which conventional statistical corrections are less effective. In this paper, we use the unique Panel Study of Income Dynamics Validation Study to assess the accuracy of long-term retrospective recall data. We find underreporting of transitory variation, which creates a non-classical measurement error problem.
    Date: 2009–09
  16. By: Manabu Asai (Faculty of Economics, Soka University); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo); Marcelo C. Medeiros (Department of Economics Pontifical Catholic University of Rio de Janeiro)
    Abstract: A wide variety of conditional and stochastic variance models has been used to estimate latent volatility (or risk). In both the conditional and stochastic volatility literature, there has been some confusion between the definitions of asymmetry and leverage. In this paper, we first show the relationship among conditional, stochastic, integrated and realized volatilities. Then we develop a new asymmetric volatility model, which takes account of small and large, and positive and negative, shocks. Using the new specification, we examine alternative volatility models that have recently been developed and estimated in order to understand the differences and similarities in the definitions of asymmetry and leverage. We extend the new specification to realized volatility by taking account of measurement errors. As an empirical example, we apply the new model to the realized volatility of Standard and Poor's 500 Composite Index using Efficient Importance Sampling to show that the new specification of asymmetry significantly improves the goodness of fit, and that the out-of-sample forecasts and VaR thresholds are satisfactory.
    Date: 2009–08

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.