
on Econometric Time Series 
By:  Zongwu Cai (Department of Mathematics & Statistics, University of North Carolina at Charlotte; Fujian Key Laboratory of Statistical Sciences, Xiamen University); Zhijie Xiao (Boston College) 
Abstract:  We study quantile regression estimation for dynamic models with partially varying coefficients, so that the values of some coefficients may be functions of informative covariates. Estimators of both the parametric and the nonparametric functional coefficients are proposed. In particular, we propose a three-stage semiparametric procedure. Both consistency and asymptotic normality of the proposed estimators are derived. We demonstrate that the parametric estimators are root-n consistent and that the estimation of the functional coefficients has the oracle property. In addition, the efficiency of parameter estimation is discussed and a simple efficient estimator is proposed. A simple and easily implemented test for the hypothesis of varying coefficients is proposed. A Monte Carlo experiment is conducted to evaluate the performance of the proposed estimators. 
Keywords:  Efficiency; nonlinear time series; partially linear; partially varying coefficients; quantile regression; semiparametric 
Date:  2010–11–22 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:761&r=ets 
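At the core of any quantile regression procedure, including the semiparametric one summarized above, is the Koenker–Bassett check loss. The sketch below (plain NumPy, not the authors' three-stage estimator) illustrates that minimizing the average check loss recovers a sample quantile:

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)

# The tau-th sample quantile minimizes the average check loss;
# a coarse grid search over candidate values illustrates the idea.
grid = np.linspace(-3, 3, 601)
losses = [check_loss(y - q, 0.75).mean() for q in grid]
q_hat = grid[int(np.argmin(losses))]
```

In the paper this loss is minimized jointly over parametric coefficients and locally over the functional coefficients; the grid search here only conveys the objective being optimized.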
By:  Anne B. Koehler; Ralph D. Snyder; J. Keith Ord; Adrian Beaumont 
Abstract:  Compositional time series are formed from measurements of proportions that sum to one in each period of time. We might be interested in forecasting the proportion of home loans that have adjustable rates, the proportion of nonagricultural jobs in manufacturing, the proportion of a rock's geochemical composition that is a specific oxide, or the proportion of an election betting market choosing a particular candidate. A problem may involve many related time series of proportions. There could be several categories of nonagricultural jobs or several oxides in the geochemical composition of a rock that are of interest. In this paper we provide a statistical framework for forecasting these special kinds of time series. We build on the innovations state space framework underpinning the widely used methods of exponential smoothing. We couple this with a generalized logistic transformation to convert the measurements from the unit interval to the entire real line. The approach is illustrated with two applications: the proportion of new home loans in the U.S. that have adjustable rates; and four probabilities for specified candidates winning the 2008 Democratic presidential nomination. 
Keywords:  compositional time series, innovations state space models, exponential smoothing, forecasting proportions 
JEL:  C22 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201020&r=ets 
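The paper maps proportions from the unit simplex to the real line with a generalized logistic transformation before applying innovations state space models. The additive log-ratio transform below is one standard mapping of this kind, shown only as an illustrative sketch (the paper's exact transformation may differ):

```python
import numpy as np

def alr(p):
    """Additive log-ratio: map K proportions (summing to 1) to R^(K-1)."""
    p = np.asarray(p, dtype=float)
    return np.log(p[:-1] / p[-1])

def alr_inv(z):
    """Inverse transform: recover proportions on the unit simplex."""
    e = np.exp(np.append(z, 0.0))
    return e / e.sum()

p = np.array([0.5, 0.3, 0.2])   # e.g. shares of three job categories
z = alr(p)                      # unconstrained values, suitable for Gaussian state space models
p_back = alr_inv(z)             # forecasts are transformed back to proportions
```

Forecasting is then done on the unconstrained series z, with the inverse transform guaranteeing that forecasts remain valid proportions.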
By:  Manabu Asai (Faculty of Economics, Soka University); Michael McAleer (Erasmus University Rotterdam, Tinbergen Institute, The Netherlands, and Institute of Economic Research, Kyoto University) 
Abstract:  The stochastic volatility model usually incorporates asymmetric effects by introducing a negative correlation between the innovations in returns and volatility. In this paper, we propose a new asymmetric stochastic volatility model based on the leverage and size effects. The model is a generalization of the exponential GARCH (EGARCH) model of Nelson (1991). We consider categories of asymmetric effects, which describe the differences among the asymmetric effect of the EGARCH model, the threshold effect indicator function of Glosten, Jagannathan and Runkle (1992), and the negative correlation between the innovations in returns and volatility. The new model is estimated by the efficient importance sampling method of Liesenfeld and Richard (2003), and the finite sample properties of the estimator are investigated using numerical simulations. Four financial time series are used to estimate the alternative asymmetric SV models, with empirical asymmetric effects found to be statistically significant in each case. The empirical results for S&P 500 and Yen/USD returns indicate that the leverage and size effects are significant, supporting the general model. For TOPIX and USD/AUD returns, the size effect is insignificant, favoring the negative correlation between the innovations in returns and volatility. We also consider the standardized t distribution for capturing tail behavior. The results for Yen/USD returns show that the model is correctly specified, while the results for the three other data sets suggest there is scope for improvement. 
Keywords:  Stochastic volatility, asymmetric effects, leverage, threshold, indicator function, importance sampling, numerical simulations. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:kyo:wpaper:739&r=ets 
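The distinction between size and leverage effects in the abstract above can be illustrated with an EGARCH-style news impact recursion, where the |z| term captures the size effect and the signed z term the leverage effect. Parameter values here are hypothetical, not estimates from the paper:

```python
import numpy as np

# Illustrative EGARCH-type log-variance recursion (Nelson 1991 form):
#   log s2_{t+1} = omega + beta * log s2_t + alpha * (|z_t| - E|z|) + gamma * z_t
# alpha: size effect; gamma < 0: leverage effect. Values below are hypothetical.
omega, beta, alpha, gamma = -0.1, 0.95, 0.2, -0.1

def log_var_next(log_var, z):
    # E|z| = sqrt(2/pi) for a standard normal shock
    return omega + beta * log_var + alpha * (abs(z) - np.sqrt(2 / np.pi)) + gamma * z

# With gamma < 0, a negative return shock raises next-period volatility
# more than an equally sized positive shock (the leverage effect).
up = log_var_next(0.0, 1.0)
down = log_var_next(0.0, -1.0)
```

The models compared in the paper differ precisely in how they combine such size terms, threshold indicators, and return–volatility correlation.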
By:  D M NACHANE 
Abstract:  The aim of this paper is to take stock of the important recent contributions to spectral analysis, especially as they apply to nonstationary processes. Nonstationary processes are particularly relevant in the empirical sciences, where most phenomena exhibit pronounced departures from stationarity. 
Keywords:  spectral analysis, nonstationarity, empirical sciences, time series 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:ess:wpaper:id:3191&r=ets 
By:  Jun Yu (School of Economics, Singapore Management University) 
Abstract:  This chapter overviews some recent advances in simulation-based methods for estimating financial time series models that are widely used in financial economics. The simulation-based methods have proven to be particularly useful when the likelihood function and moments do not have tractable forms, and hence the maximum likelihood (ML) method and the generalized method of moments (GMM) are difficult to use. They are also capable of improving the finite sample performance of the traditional methods. Both frequentist and Bayesian simulation-based methods are reviewed. Frequentist simulation-based methods cover various forms of simulated maximum likelihood (SML), the simulated generalized method of moments (SGMM), the efficient method of moments (EMM), and the indirect inference (II) method. Bayesian simulation-based methods cover various MCMC algorithms. Each simulation-based method is discussed in the context of a specific financial time series model as a motivating example. Empirical applications, based on real exchange rates, interest rates and equity data, illustrate how the simulation-based methods are implemented. In particular, SML is applied to a discrete-time stochastic volatility model, EMM to a continuous-time stochastic volatility model, MCMC to a credit risk model, and the II method to a term structure model. 
Keywords:  Generalized method of moments, Maximum likelihood, MCMC, Indirect inference, Credit risk, Stock price, Exchange rate, Interest rate. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:192010&r=ets 
By:  Yong Li (Business School, Sun Yat-Sen University); Jun Yu (School of Economics, Singapore Management University) 
Abstract:  A new posterior odds analysis is proposed to test for a unit root in volatility dynamics in the context of stochastic volatility models. This analysis extends the Bayesian unit root test of So and Li (1999, Journal of Business & Economic Statistics) in two important ways. First, a numerically more stable algorithm is introduced to compute the Bayes factor, taking into account the special structure of the competing models. Owing to its numerical stability, the algorithm overcomes the problem of diverging “size” in the marginal likelihood approach. Second, to improve the “power” of the unit root test, a mixed prior specification with random weights is employed. It is shown that the posterior odds ratio is a by-product of Bayesian estimation and can be easily computed by MCMC methods. A simulation study examines the “size” and “power” performances of the new method. An empirical study, based on time series data covering the subprime crisis, reveals some interesting results. 
Keywords:  Bayes factor; Mixed Prior; Markov Chain Monte Carlo; Posterior odds ratio; Stochastic volatility models; Unit root testing. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:212010&r=ets 
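The posterior odds analysis described above combines a Bayes factor (the ratio of marginal likelihoods, which the paper computes from MCMC output) with prior odds. A minimal numeric sketch, using hypothetical log marginal likelihoods:

```python
import numpy as np

# Posterior odds in favour of model M1 over M0:
#   posterior odds = Bayes factor * prior odds,
# where the Bayes factor is the ratio of marginal likelihoods.
# The log marginal likelihoods below are hypothetical placeholders;
# in practice each is estimated from MCMC draws.
log_ml_m1 = -1040.2   # e.g. stationary volatility model
log_ml_m0 = -1043.7   # e.g. unit root volatility model
prior_odds = 1.0      # equal prior weight on the two models

log_bf = log_ml_m1 - log_ml_m0          # work in logs for numerical stability
posterior_odds = np.exp(log_bf) * prior_odds
posterior_prob_m1 = posterior_odds / (1 + posterior_odds)
```

Working with log marginal likelihoods, as above, is one reason numerical stability matters: small errors in the marginal likelihood estimates are exponentiated into the odds.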
By:  Qiankun Zhou (School of Economics, Singapore Management University); Jun Yu (School of Economics, Singapore Management University) 
Abstract:  The asymptotic distributions of the least squares estimator of the mean reversion parameter (κ) are developed in a general class of diffusion models under three sampling schemes, namely, long-span, infill, and the combination of long-span and infill. The models have an affine structure in the drift function, but allow for nonlinearity in the diffusion function. The limiting distributions are quite different under the alternative sampling schemes. In particular, the infill limiting distribution is nonstandard and depends on the initial condition and the time span, whereas the other two are Gaussian. Moreover, while the other two distributions are discontinuous at κ = 0, the infill distribution is continuous in κ. This property provides an answer to the Bayesian criticism of unit root asymptotics. Monte Carlo simulations suggest that the infill asymptotic distribution provides a more accurate approximation to the finite sample distribution than the other two distributions in empirically realistic settings. The empirical application using the U.S. Federal funds rate highlights the difference in statistical inference based on the alternative asymptotic distributions and suggests strong evidence of a unit root in the data. 
Keywords:  Vasicek Model, One-factor Model, Mean Reversion, Infill Asymptotics, Long-span Asymptotics, Unit Root Test 
JEL:  C12 C22 G12 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:202010&r=ets 
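For intuition on the mean reversion parameter κ studied above, the sketch below simulates an exactly discretized Vasicek process and recovers κ from the least squares AR(1) slope via φ = exp(−κΔ). This is only an illustrative long-span estimator, not the paper's asymptotic analysis:

```python
import numpy as np

# Vasicek model: dX = kappa * (mu - X) dt + sigma dW.
# Its exact discretization at step delta is an AR(1):
#   X_{t+1} = mu + phi * (X_t - mu) + e_t,  phi = exp(-kappa * delta).
kappa, mu, sigma, delta, n = 2.0, 0.05, 0.1, 1 / 12, 30_000
phi = np.exp(-kappa * delta)
sd = sigma * np.sqrt((1 - phi**2) / (2 * kappa))  # stationary innovation s.d.

rng = np.random.default_rng(1)
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + sd * rng.normal()

# Least squares AR(1) fit x_{t+1} = a + b * x_t, then invert b = exp(-kappa * delta)
b, a = np.polyfit(x[:-1], x[1:], 1)
kappa_hat = -np.log(b) / delta
```

The long simulated span makes the Gaussian long-span asymptotics accurate here; the paper's point is that with a short span the nonstandard infill distribution approximates the finite sample behavior of this estimator better.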
By:  Tore Selland Kleppe (Department of Mathematics, University of Bergen); Jun Yu (School of Economics, Singapore Management University); Hans J. Skaug (Department of Mathematics, University of Bergen) 
Abstract:  A new algorithm is developed to provide a simulated maximum likelihood estimation of the GARCH diffusion model of Nelson (1990) based on return data only. The method combines two accurate approximation procedures, namely, the polynomial expansion of Aït-Sahalia (2008) to approximate the transition probability density of return and volatility, and the Efficient Importance Sampler (EIS) of Richard and Zhang (2007) to integrate out the volatility. The first and second order terms in the polynomial expansion are used to generate a baseline importance density for an EIS algorithm. The higher order terms are included when evaluating the importance weights. Monte Carlo experiments show that the new method works well and the discretization error is well controlled by the polynomial expansion. In the empirical application, we fit the GARCH diffusion to equity data, perform diagnostics on the model fit, and test the finiteness of the importance weights. 
Keywords:  Efficient importance sampling; GARCH diffusion model; Simulated maximum likelihood; Stochastic volatility 
JEL:  C11 C15 G12 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:132010&r=ets 
By:  Zhenlin Yang (School of Economics, Singapore Management University) 
Abstract:  The bias issue arising from maximum likelihood estimation of the spatial autoregressive (SAR) model is further investigated under a broader setup than that in Bao and Ullah (2007a). A major difficulty in analytically evaluating the expectations of ratios of quadratic forms is overcome by a simple bootstrap procedure. With that, the corrections to the bias and variance of the spatial estimator can easily be made up to third order, and once this is done, the estimators of the other model parameters become nearly unbiased. Compared with the analytical approach, the new approach is much simpler and can easily be extended to other models with a similar structure. Extensive Monte Carlo results show that the new approach performs excellently in general. 
Keywords:  Third-order bias; Third-order variance; Bootstrap; Concentrated estimating equation; Monte Carlo; Quasi-MLE; Spatial layout. 
JEL:  C10 C21 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:122010&r=ets 
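The paper bootstraps expectations of ratios of quadratic forms inside a concentrated estimating equation; the generic bootstrap bias-correction idea it builds on can be sketched as follows, here applied to the variance MLE as a deliberately simple stand-in for the spatial estimator:

```python
import numpy as np

def bias_corrected(estimator, sample, n_boot=2000, rng=None):
    """First-order bootstrap bias correction: theta_bc = 2 * theta_hat - mean(bootstrap estimates)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    theta_hat = estimator(sample)
    boot = [estimator(rng.choice(sample, size=sample.size, replace=True))
            for _ in range(n_boot)]
    # The bootstrap estimates the bias as mean(boot) - theta_hat;
    # subtracting it from theta_hat gives 2 * theta_hat - mean(boot).
    return 2 * theta_hat - np.mean(boot)

# Example: the variance MLE mean((x - xbar)^2) is biased downward by the
# factor (n - 1) / n; the correction pushes it toward the unbiased value.
rng = np.random.default_rng(2)
x = rng.normal(size=30)
var_mle = lambda s: np.mean((s - s.mean()) ** 2)
v_bc = bias_corrected(var_mle, x, rng=rng)
```

The paper's contribution is to apply this idea inside the concentrated estimating equation, which carries the correction up to third order; the one-step correction above conveys only the basic mechanism.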
By:  John Gibson (University of Waikato); Bonggeun Kim (Seoul National University); Susan Olivia (Monash University) 
Abstract:  Standard error corrections for clustered samples impose untested restrictions on spatial correlations. Our example shows that these restrictions are too conservative compared with a spatial error model that exploits information on the exact locations of observations, so that inference errors result when cluster corrections are used. 
Keywords:  clustered samples; GPS; spatial correlation 
JEL:  C31 C81 
Date:  2011–08–18 
URL:  http://d.repec.org/n?u=RePEc:wai:econwp:10/07&r=ets 