
New Economics Papers on Econometrics 
By:  Jan G. de Gooijer (University of Amsterdam); Ao Yuan (Howard University, Washington) 
Abstract:  Often socioeconomic variables are measured on a discrete scale or rounded to protect confidentiality. Nevertheless, when exploring the effect of a relevant covariate on the whole outcome distribution of a discrete response variable, virtually all common quantile regression methods require the distribution of the covariate to be continuous. This paper departs from this basic requirement by presenting an algorithm for nonparametric estimation of conditional quantiles when both the response variable and the covariate are discretely distributed. Moreover, we allow the variables of interest to be pairwise correlated. For computational efficiency, we aggregate the data into smaller subsets by a binning operation, and make inference on the resulting pre-binned data. Specifically, we propose two kernel-based binned conditional quantile estimators, one for untransformed discrete response data and one for rank-transformed response data. We establish asymptotic properties of both estimators. A practical procedure for jointly selecting bandwidth and bin-width parameters is also presented. Simulation results show excellent estimation accuracy in terms of bias, mean squared error, and confidence interval coverage. Typically, pre-binning the data leads to considerable computational savings when large datasets are under study, as compared to direct (un)conditional quantile kernel estimation of multivariate data. With this in mind, we illustrate the proposed methodology with an application to a large real dataset concerning US hospital patients with congestive heart failure. 
Keywords:  Binning; Bootstrap; Confidence interval; Jittering; Nonparametric 
JEL:  C14 
Date:  2011–01–17 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110011&r=ecm 
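A minimal sketch may help fix the binning-plus-kernel intuition. Everything below (the Gaussian kernel, the bandwidth, the function name, the simulated data) is my own illustrative assumption, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 10, size=5000)        # discrete covariate
y = rng.poisson(lam=2 + 0.5 * x)          # discrete response, correlated with x

def binned_conditional_quantile(x, y, x0, tau, h=1.5):
    """Kernel-weighted tau-quantile of y given x near x0, from pre-binned counts."""
    xs, ys = np.unique(x), np.unique(y)
    counts = np.zeros((xs.size, ys.size))
    for i, xv in enumerate(xs):           # aggregate data into (x, y) cells once
        for j, yv in enumerate(ys):
            counts[i, j] = np.sum((x == xv) & (y == yv))
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)      # Gaussian kernel weights in x
    cell_w = (w[:, None] * counts).sum(axis=0)   # total weight per y value
    cdf = np.cumsum(cell_w) / cell_w.sum()
    return ys[np.searchsorted(cdf, tau)]         # weighted sample quantile

q = binned_conditional_quantile(x, y, x0=4, tau=0.5)   # conditional median near x=4
```

Once the counts table is built, every subsequent quantile evaluation touches only the small grid of cells rather than the raw observations, which is the computational point the abstract makes.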
By:  Peter Exterkate (Erasmus University Rotterdam); Patrick J.F. Groenen (Erasmus University Rotterdam); Christiaan Heij (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam) 
Abstract:  This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predictive regression model is based on a shrinkage estimator to avoid overfitting. We extend the kernel ridge regression methodology to enable its use for economic time-series forecasting, by including lags of the dependent variable or other individual variables as predictors, as is typically desired in macroeconomic and financial applications. Monte Carlo simulations as well as an empirical application to various key measures of real economic activity confirm that kernel ridge regression can produce more accurate forecasts than traditional linear methods for dealing with many predictors based on principal component regression. 
Keywords:  High dimensionality; nonlinear forecasting; ridge regression; kernel methods 
JEL:  C53 C63 E27 
Date:  2011–01–11 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110007&r=ecm 
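Kernel ridge regression itself is standard enough to sketch. The following is a generic Gaussian-kernel version (hyperparameter values, names, and the simulated data are illustrative, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))                   # many predictors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

def krr_fit_predict(X, y, X_new, gamma=0.05, alpha=0.1):
    """Gaussian-kernel ridge regression via the dual solution (K + alpha*I)^(-1) y."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)               # implicit high-dimensional mapping
    K = kernel(X, X)
    coef = np.linalg.solve(K + alpha * np.eye(len(y)), y)   # ridge shrinkage
    return kernel(X_new, X) @ coef

y_hat = krr_fit_predict(X, y, X)                 # in-sample fit
```

The "kernel trick" means the high-dimensional mapping is never formed explicitly: only the n-by-n kernel matrix is needed, and the ridge penalty alpha provides the shrinkage the abstract mentions.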
By:  Bonsoo Koo; Oliver Linton 
Abstract:  This paper proposes a class of locally stationary diffusion processes. The model has a time-varying but locally linear drift and a volatility coefficient that is allowed to vary over time and space. We propose estimators of all the unknown quantities based on long-span data. Our estimation method makes use of the local stationarity. We establish asymptotic theory for the proposed estimators as the time span increases. We apply this method to real financial data to illustrate the validity of our model. Finally, we present a simulation study to assess the finite-sample performance of the proposed estimators. 
Keywords:  diffusion processes, local stationarity, term structure dynamics, density matching, option pricing. 
JEL:  C14 C32 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2010/551&r=ecm 
By:  Jacob Schwartz; David E. Giles (Department of Economics, University of Victoria) 
Abstract:  We investigate the small-sample quality of the maximum likelihood estimators (MLEs) of the parameters of the zero-inflated Poisson distribution. The finite-sample biases are determined to O(n^-1) using an analytic bias reduction methodology based on the work of Cox and Snell (1968) and Cordeiro and Klein (1994). Monte Carlo simulations show that the MLEs have very small percentage biases for this distribution, but the analytic bias reduction methods essentially eliminate the bias without adversely affecting the mean squared errors of the estimators. The analytic adjustment compares favourably with the parametric bootstrap bias-corrected estimator, in terms of bias reduction itself, as well as with respect to mean squared error and Pitman's nearness measure. 
Keywords:  Zero-inflated Poisson, bias reduction, maximum likelihood estimation, bootstrap 
JEL:  C13 C16 C25 C46 
Date:  2011–02–15 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:1102&r=ecm 
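The parametric bootstrap bias correction the abstract compares against can be sketched generically (the analytic Cox-Snell adjustment involves model-specific algebra not reproduced here; all names, starting values, and simulation settings below are my own illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(2)

def rzip(n, pi, lam):
    """Draw n observations from a zero-inflated Poisson(pi, lam)."""
    return np.where(rng.random(n) < pi, 0, rng.poisson(lam, n))

def zip_negloglik(theta, y):
    pi, lam = theta
    p0 = pi + (1 - pi) * np.exp(-lam)            # P(Y = 0)
    ll = np.where(y == 0, np.log(p0),
                  np.log1p(-pi) + poisson.logpmf(y, lam))
    return -ll.sum()

def zip_mle(y):
    res = minimize(zip_negloglik, x0=[0.3, max(y.mean(), 0.5)], args=(y,),
                   bounds=[(1e-4, 1 - 1e-4), (1e-4, None)])
    return res.x

y = rzip(200, pi=0.3, lam=2.0)
theta_hat = zip_mle(y)

# Parametric bootstrap bias correction: 2*theta_hat - mean of bootstrap MLEs
boot = np.array([zip_mle(rzip(len(y), *theta_hat)) for _ in range(50)])
theta_bc = 2 * theta_hat - boot.mean(axis=0)
```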
By:  Sjoerd van den Hauwe (Erasmus University Rotterdam); Richard Paap (Erasmus University Rotterdam); Dick J.C. van Dijk (Erasmus University Rotterdam) 
Abstract:  We propose a new approach to deal with structural breaks in time series models. The key contribution is an alternative dynamic stochastic specification for the model parameters which describes potential breaks. After a break, new parameter values are generated from a so-called baseline prior distribution. Modeling boils down to the choice of a parametric likelihood specification and a baseline prior with the proper support for the parameters. The approach accounts in a natural way for potential out-of-sample breaks, where the number of breaks is stochastic. Posterior inference involves simple computations that are less demanding than existing methods. The approach is illustrated on nonlinear discrete time series models and models with restrictions on the parameter space. 
Keywords:  Structural breaks; Bayesian analysis; forecasting; MCMC methods; nonlinear time series 
JEL:  C11 C22 C51 C53 C63 
Date:  2011–02–08 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110023&r=ecm 
By:  Toshio Honda 
Abstract:  We consider nonparametric estimation of the conditional qth quantile for stationary time series. We deal with stationary time series with strong time dependence and heavy tails under the setting of random design. We estimate the conditional qth quantile by local linear regression and investigate the asymptotic properties. It is shown that the asymptotic properties are affected by both the time dependence and the tail index of the errors. The results of a small simulation study are also given. 
Keywords:  conditional quantile, random design, check function, local linear regression, stable distribution, linear process, long-range dependence, martingale central limit theorem 
Date:  2010–12 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10157&r=ecm 
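Local linear quantile estimation with the check function is a textbook construction and can be sketched as follows (the kernel, bandwidth, optimizer, and simulated data are my own illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_t(df=3, size=400)  # heavy-tailed errors

def check(u, q):
    """The check (pinball) function rho_q(u) = u * (q - 1{u < 0})."""
    return u * (q - (u < 0))

def local_linear_quantile(x, y, x0, q=0.5, h=0.1):
    """Minimise the kernel-weighted check loss over a local linear fit at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    def obj(beta):
        a, b = beta
        return np.sum(w * check(y - a - b * (x - x0), q))
    return minimize(obj, x0=[np.median(y), 0.0], method="Nelder-Mead").x[0]

q50 = local_linear_quantile(x, y, x0=0.5)   # true conditional median is 0 here
```

The intercept of the weighted check-loss fit estimates the conditional q-th quantile at x0; the local slope removes first-order bias relative to a local constant fit.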
By:  Monica Billio (University Ca'Foscari di Venezia); Roberto Casarin (University Ca'Foscari di Venezia); Francesco Ravazzolo (Norges Bank); Herman K. van Dijk (Erasmus University Rotterdam) 
Abstract:  Using a Bayesian framework, this paper provides a multivariate combination approach to prediction based on a distributional state space representation of predictive densities from alternative models. In the proposed approach the model set can be incomplete. Several multivariate time-varying combination strategies are introduced. In particular, weight dynamics driven by the past performance of the predictive densities are considered, as well as the use of learning mechanisms. The approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of US macroeconomic time series and of surveys of stock market prices. 
Keywords:  Density Forecast Combination; Survey Forecast; Bayesian Filtering; Sequential Monte Carlo 
JEL:  C11 C15 C53 E37 
Date:  2011–01–06 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110003&r=ecm 
By:  Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and University College London); Konrad Smolinski (Institute for Fiscal Studies) 
Abstract:  This paper studies identification of latent utility functions in multiple discrete choice models in which there may be endogenous explanatory variables, that is, explanatory variables that are not restricted to be distributed independently of the unobserved determinants of latent utilities. The model does not employ large support, special regressor or control function restrictions; indeed, it is silent about the process delivering values of endogenous explanatory variables and in this respect it is incomplete. Instead, the model employs instrumental variable restrictions requiring the existence of instrumental variables which are excluded from latent utilities and distributed independently of the unobserved components of utilities. We show that the model delivers set, not point, identification of the latent utility functions and we characterize sharp bounds on those functions. We develop easy-to-compute outer regions which in parametric models require little more calculation than what is involved in a conventional maximum likelihood analysis. The results are illustrated using a model which is essentially the parametric conditional logit model of McFadden (1974) but with potentially endogenous explanatory variables and instrumental variable restrictions. The method employed has wide applicability and for the first time brings instrumental variable methods to bear on structural models in which there are multiple unobservables in a structural equation. 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:06/11&r=ecm 
By:  Drew Creal (University of Chicago, Booth School of Business); Siem Jan Koopman (VU University Amsterdam); André Lucas (VU University Amsterdam) 
Abstract:  We propose a new class of observation-driven time-varying parameter models for dynamic volatilities and correlations to handle time series from heavy-tailed distributions. The model adopts generalized autoregressive score dynamics to obtain a time-varying covariance matrix of the multivariate Student's t distribution. The key novelty of our proposed model concerns the weighting of lagged squared innovations for the estimation of future correlations and volatilities. When we account for heavy tails of distributions, we obtain estimates that are more robust to large innovations. The model also admits a representation as a time-varying heavy-tailed copula, which is particularly useful if the interest focuses on dependence structures. We provide an empirical illustration for a panel of daily global equity returns. 
Keywords:  dynamic dependence; multivariate Student's t distribution; copula 
JEL:  C10 C22 C32 C51 
Date:  2010–03–16 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100032&r=ecm 
By:  Matteo Pelagatti (Department of Statistics, Università degli Studi di MilanoBicocca); Pranab Sen (Department of Statistics and Operations Research, University of North Carolina at Chapel Hill) 
Abstract:  We propose a rank test of the null hypothesis of short memory stationarity, possibly after linear detrending. For the level-stationarity hypothesis, the test statistic we propose is a modified version of the popular KPSS statistic, in which ranks substitute for the original observations. We prove that the rank KPSS statistic shares the same limiting distribution as the standard KPSS statistic under the null and diverges under I(1) alternatives. For the trend-stationarity hypothesis, we apply the same rank KPSS statistic to the residuals of a Theil-Sen regression on a linear trend. We derive the asymptotic distribution of the Theil-Sen estimator under short memory errors and prove that the Theil-Sen detrended rank KPSS statistic shares the same weak limit as the least-squares detrended KPSS. We study the asymptotic relative efficiency of our test compared to the KPSS and prove that it may have unbounded efficiency gains under fat-tailed distributions, compensated by very moderate efficiency losses under thin-tailed distributions. For this and other reasons discussed in the body of the article, our rank KPSS test turns out to be an irresistible competitor of the KPSS for most real-world economic and financial applications. The weak convergence results and asymptotic representations proved in this article may have an interest of their own, as they extend to ranks analogous results widely used in unit-root econometrics. 
Keywords:  Stationarity test, Unit roots, Robustness, Rank statistics, Theil-Sen estimator, Asymptotic efficiency 
JEL:  C12 C14 C22 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:mis:wpaper:20110201&r=ecm 
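The level-stationarity version of the rank KPSS idea, ranks plugged into the usual KPSS recipe, can be sketched as follows (the Bartlett-kernel long-run variance, lag choice, and simulated series are generic assumptions, not the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

def rank_kpss(y, lags=4):
    """KPSS level-stationarity statistic computed on the ranks of y."""
    r = np.argsort(np.argsort(y)).astype(float)   # ranks 0..n-1
    e = r - r.mean()                              # demeaned ranks
    n = len(e)
    s = np.cumsum(e)                              # partial sums
    # Bartlett-kernel (Newey-West) long-run variance of the ranks
    lrv = np.sum(e ** 2) / n
    for k in range(1, lags + 1):
        w = 1.0 - k / (lags + 1)
        lrv += 2.0 * w * np.sum(e[k:] * e[:-k]) / n
    return np.sum(s ** 2) / (n ** 2 * lrv)

stat_stationary = rank_kpss(rng.normal(size=500))              # short-memory series
stat_random_walk = rank_kpss(np.cumsum(rng.normal(size=500)))  # I(1) series
```

Because only the ranks enter, the statistic is invariant to monotone transformations and outliers in the levels, which is the source of the robustness gains the abstract highlights.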
By:  Oliver Linton; Sorawoot Srisuma 
Abstract:  We propose a general two-step estimation method for the structural parameters of popular semiparametric Markovian discrete choice models that include a class of Markovian games and allow for a continuous observable state space. The estimation procedure is simple as it directly generalizes the computationally attractive methodology of Pesendorfer and Schmidt-Dengler (2008), which assumed finite observable states. This extension is nontrivial as the value functions, to be estimated nonparametrically in the first stage, are defined recursively in a nonlinear functional equation. Utilizing structural assumptions, we show how to consistently estimate the infinite-dimensional parameters as the solution to some type II integral equations, the solving of which is a well-posed problem. We provide a sufficient set of primitives to obtain root-T consistent estimators for the finite-dimensional structural parameters and the distribution theory for the value functions in a time series framework. 
Keywords:  Discrete Markov Decision Models, Kernel Smoothing, Markovian Games, Semiparametric Estimation, Well-Posed Inverse Problem. 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2010/550&r=ecm 
By:  David Ardia (University of Fribourg, aeris CAPITAL AG, Switzerland); Nalan Basturk (Erasmus University Rotterdam); Lennart Hoogerheide (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam) 
Abstract:  Strategic choices for efficient and accurate evaluation of marginal likelihoods by means of Monte Carlo simulation methods are studied for the case of highly non-elliptical posterior distributions. A comparative analysis is presented of possible advantages and limitations of different simulation techniques; of possible choices of candidate distributions and choices of target or warped target distributions; and finally of numerical standard errors. The importance of a robust and flexible estimation strategy is demonstrated where the complete posterior distribution is explored. Given an appropriately yet quickly tuned adaptive candidate, straightforward importance sampling provides a computationally efficient estimator of the marginal likelihood (and a reliable and easily computed corresponding numerical standard error) in the cases investigated in this paper, which include a nonlinear regression model and a mixture GARCH model. Warping the posterior density can lead to a further gain in efficiency, but it is more important that the posterior kernel is appropriately wrapped by the candidate distribution than that it is warped. 
Keywords:  marginal likelihood; Bayes factor; importance sampling; bridge sampling; adaptive mixture of Student-t distributions 
JEL:  C11 C15 C52 
Date:  2010–06–21 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100059&r=ecm 
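Plain importance sampling for a marginal likelihood, the baseline the paper builds on, can be illustrated in a conjugate toy model where the answer is known in closed form (the Student-t candidate, its tuning, and the data below are illustrative assumptions of mine):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
y = rng.normal(loc=1.0, scale=1.0, size=50)      # data with known unit variance

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1).
n, ybar = len(y), y.mean()
post_var = 1.0 / (n + 1)                         # conjugate posterior moments
post_mean = n * ybar * post_var

# Candidate: a Student-t roughly matched to the posterior ("tuned" by assumption)
q = stats.t(df=5, loc=post_mean, scale=1.2 * np.sqrt(post_var))
draws = q.rvs(size=20000, random_state=rng)

log_w = (stats.norm.logpdf(y[:, None], draws, 1).sum(axis=0)  # log-likelihood
         + stats.norm.logpdf(draws, 0, 1)                     # log-prior
         - q.logpdf(draws))                                   # minus log-candidate
m = log_w.max()
log_ml_is = m + np.log(np.mean(np.exp(log_w - m)))   # IS estimate of log p(y)

# Closed-form log marginal likelihood for this conjugate model, for comparison
log_ml_exact = (-(n / 2) * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                - 0.5 * np.sum(y ** 2) + (n * ybar) ** 2 / (2 * (n + 1)))
```

The heavier-tailed candidate "wraps" the posterior kernel, keeping the importance weights bounded; that wrapping requirement is exactly the point made in the abstract's final sentence.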
By:  Jan G. de Gooijer (University of Amsterdam, the Netherlands); Ao Yuan (Howard University, Washington DC, USA) 
Abstract:  Item response theory is one of the modern test theories with applications in educational and psychological testing. Recent developments made it possible to characterize some desired properties in terms of a collection of manifest ones, so that hypothesis tests on these traits can, in principle, be performed. But the existing test methodology is based on asymptotic approximation, which is impractical in most applications since the required sample sizes are often unrealistically huge. To overcome this problem, a class of tests is proposed for making exact statistical inference about four manifest properties: covariances given the sum are nonpositive (CSN), manifest monotonicity (MM), conditional association (CA), and vanishing conditional dependence (VCD). One major advantage is that these exact tests do not require large sample sizes. As a result, tests for CSN and MM can be routinely performed in empirical studies. For testing CA and VCD, the exact methods are still impractical in most applications, due to the unusually large number of parameters to be tested. However, exact methods are still derived for them as an exploration toward practicality. Some numerical examples with applications of the exact tests for CSN and MM are provided. 
Keywords:  Conditional distribution; Exact test; Monte Carlo; Markov chain Monte Carlo 
JEL:  C12 C15 
Date:  2010–04–26 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100044&r=ecm 
By:  Degui Li; Oliver Linton; Zudi Lu 
Abstract:  Local linear fitting is a popular nonparametric method in nonlinear statistical and econometric modelling. Lu and Linton (2007) established the pointwise asymptotic distribution (central limit theorem) for the local linear estimator of a nonparametric regression function under the condition of near epoch dependence. We further investigate the uniform consistency of this estimator. The uniform strong and weak consistencies with convergence rates for local linear fitting are established under mild conditions. Furthermore, general results on uniform convergence rates for nonparametric kernel-based estimators are provided. Applications of our results to conditional variance function estimation and some economic time series models are also discussed. The results of this paper will be of wide potential interest in time series semiparametric modelling. 
Keywords:  local linear fitting, near epoch dependence, convergence rates, uniform consistency. 
JEL:  C13 C14 C22 
Date:  2010–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2010/549&r=ecm 
By:  Lennart Hoogerheide (Erasmus University Rotterdam); Anne Opschoor (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam) 
Abstract:  A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling and Expectation Maximization (MitISEM). We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously. This sequential approach can be combined with a tempering approach, which facilitates the simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach for importance sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the model's mixture regimes' parameters. Third, we propose a partial MitISEM approach, which aims at approximating the marginal and conditional posterior distributions of subsets of model parameters, rather than the joint. This division can substantially reduce the dimension of the approximation problem. 
Keywords:  mixture of Student-t distributions; importance sampling; Kullback-Leibler divergence; Expectation Maximization; Metropolis-Hastings algorithm; predictive likelihoods; mixture GARCH models; Value at Risk 
JEL:  C11 C15 C22 
Date:  2011–01–06 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110004&r=ecm 
By:  Olfa Zaafrane; Anouar Ben Mabrouk 
Abstract:  In the present paper, a fuzzy-logic-based method is combined with wavelet decomposition to develop a step-by-step dynamic hybrid model for the estimation of financial time series. Empirical tests of fuzzy regression, of wavelet decomposition, and of the new hybrid model are conducted on the well-known S&P 500 index financial time series. The empirical tests demonstrate the efficiency of the hybrid model. 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1102.3702&r=ecm 
By:  Hecq Alain; Laurent Sébastien; Palm Franz (METEOR) 
Abstract:  First, we investigate the minimal univariate representation of some well-known n-dimensional conditional volatility models. Simple systems (e.g. a VEC(0,1)) for the joint behaviour of several variables imply individual processes with a lot of persistence in the form of long-order lags. We show that in the presence of factors, parsimonious univariate representations (e.g. GARCH(1,1)) can result from large multivariate models generating the conditional variances and conditional correlations. Second, we propose an approach to use empirical results for these univariate processes in the analysis of the underlying multivariate, possibly high-dimensional, GARCH process. We use reduced rank procedures to discriminate between a system with seemingly unrelated assets (e.g. a diagonal model) and a set of series with few common sources of volatility. Among the analyzed procedures, the canonical correlation test statistic on logs of squared returns proposed by Engle and Marcucci (2006) has quite good properties even in the case of falsely omitted cross-moments. Out of 30 returns from the NYSE, six returns are shown to display a parsimonious GARCH(1,1) model for their conditional variance. We do not reject the hypothesis that a single common volatility factor drives these six series. 
Keywords:  financial economics and financial management 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2011011&r=ecm 
By:  Charles S. Bos (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam) 
Abstract:  Many seasonal macroeconomic time series are subject to changes in their means and variances over a long time horizon. In this paper we propose a general treatment for the modelling of time-varying features in economic time series. We show that time series models with mean and variance functions depending on dynamic stochastic processes can be sufficiently robust against changes in their dynamic properties. We further show that the implementation of the treatment is relatively straightforward. An illustration is given for monthly U.S. Industrial Production. The empirical results, including estimates of time-varying means and variances, are discussed in detail. 
Keywords:  Common stochastic variance; Kalman filter; State space model; unobserved components time series model 
JEL:  C22 C51 C53 E23 
Date:  2010–02–03 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100017&r=ecm 
By:  Shiko Maruyama (School of Economics, University of New South Wales) 
Abstract:  I study the estimation of finite sequential games with perfect information. The major challenge in estimation is the computation of high-dimensional truncated integration whose domain is complicated by strategic interaction. I show that this complication resolves when unobserved off-the-equilibrium-path strategies are controlled for. Separately evaluating the likelihood contribution of each subgame perfect strategy profile that rationalizes the observed outcome allows the use of the GHK simulator, the most widely used importance-sampling probit simulator. Monte Carlo experiments demonstrate the performance and robustness of the proposed method, and confirm that misspecification of the decision order leads to underestimation of the strategic effect. 
Keywords:  Inference In Discrete Games; Sequential Games; Monte Carlo Integration; GHK Simulator; Subgame Perfection; Perfect Information 
JEL:  C35 C63 C72 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:201022&r=ecm 
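The GHK simulator referenced in the abstract is well documented elsewhere; a generic sketch for an orthant probability looks like this (the function name, draw count, and example covariance are illustrative choices of mine):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def ghk_prob(b, Sigma, n_draws=5000):
    """GHK estimate of the orthant probability P(Z < b) for Z ~ N(0, Sigma)."""
    L = np.linalg.cholesky(Sigma)
    d = len(b)
    prob = np.ones(n_draws)
    eta = np.zeros((d, n_draws))
    for j in range(d):
        # truncation point for component j, given the earlier truncated draws
        upper = (b[j] - L[j, :j] @ eta[:j]) / L[j, j]
        cdf_up = norm.cdf(upper)
        prob *= cdf_up                      # sequential conditioning weight
        u = rng.random(n_draws)
        eta[j] = norm.ppf(u * cdf_up)       # truncated standard normal draw
    return prob.mean()

Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
p = ghk_prob(np.array([0.0, 0.0]), Sigma)   # true value is 1/3 for rho = 0.5
```

The recursion draws each Cholesky factor conditionally within its feasible region, so every draw contributes a smooth positive weight; that importance-sampling structure is what the paper exploits once off-path strategies are handled.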
By:  Tatiana Komorova; Thomas Severini; Elie Tamer 
Abstract:  We introduce a notion of median uncorrelation that is a natural extension of mean (linear) uncorrelation. A scalar random variable Y is median uncorrelated with a k-dimensional random vector X if and only if the slope from an LAD regression of Y on X is zero. Using this simple definition, we characterize properties of median uncorrelated random variables, and introduce a notion of multivariate median uncorrelation. We provide measures of median uncorrelation that are similar to the linear correlation coefficient and the coefficient of determination. We also extend this median uncorrelation to other loss functions. As two stage least squares exploits mean uncorrelation between an instrument vector and the error to derive consistent estimators for parameters in linear regressions with endogenous regressors, the main result of this paper shows how a median uncorrelation assumption between an instrument vector and the error can similarly be used to derive consistent estimators in these linear models with endogenous regressors. We also show how median uncorrelation can be used in linear panel models with quantile restrictions and in linear models with measurement errors. 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2010/552&r=ecm 
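The defining property, a zero LAD slope, is easy to illustrate numerically (the heteroskedastic Cauchy example and all names below are my own construction, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = rng.normal(size=2000)
# The conditional median of y given x is 0 for every x (median-uncorrelated),
# even though the scale of y depends strongly on x and its mean is undefined.
y = rng.standard_cauchy(2000) * (1 + x ** 2)

def lad_slope(y, x):
    """Slope from an LAD (median) regression of y on x with an intercept."""
    Z = np.column_stack([np.ones(len(y)), x])
    res = minimize(lambda b: np.abs(y - Z @ b).mean(),
                   np.zeros(2), method="Nelder-Mead")
    return res.x[1]

b_lad = lad_slope(y, x)    # close to zero: y is median-uncorrelated with x
```

An OLS slope on these data would be dominated by the Cauchy tails, while the LAD slope recovers the zero that the median-uncorrelation definition predicts.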
By:  Pesaran, Hashem (University of Cambridge); Chudik, Alexander (University of Cambridge) 
Abstract:  This paper considers the problem of aggregation in the case of large linear dynamic panels, where each micro unit is potentially related to all other micro units, and where micro innovations are allowed to be cross sectionally dependent. Following Pesaran (2003), an optimal aggregate function is derived, and the limiting behavior of the aggregation error is investigated as N (the number of cross section units) increases. Certain distributional features of micro parameters are also identified from the aggregate function. The paper then establishes Granger's (1980) conjecture regarding the long memory properties of aggregate variables from 'a very large scale dynamic, econometric model', and considers the time profiles of the effects of macro and micro shocks on the aggregate and disaggregate variables. Some of these findings are illustrated in Monte Carlo experiments, where we also study the estimation of the aggregate effects of micro and macro shocks. The paper concludes with an empirical application to consumer price inflation in Germany, France and Italy, and reexamines the extent to which ‘observed’ inflation persistence at the aggregate level is due to aggregation and/or common unobserved factors. Our findings suggest that dynamic heterogeneity as well as persistent common factors are needed for explaining the observed persistence of the aggregate inflation. 
Keywords:  aggregation, large dynamic panels, long memory, weak and strong cross section dependence, VAR models, impulse responses, factor models, inflation persistence 
JEL:  C43 E31 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5478&r=ecm 
By:  Gengenbach Christian; Hecq Alain; Urbain JeanPierre (METEOR) 
Abstract:  With the development of real-time databases, N vintages are available for T observations instead of a single realization of the time series process. Although using panel unit root tests with the aim of gaining efficiency seems an obvious step, the empirical and simulation results presented in this paper strongly temper this intuition. 
Keywords:  macroeconomics 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2011012&r=ecm 
By:  Rob J Hyndman; Heather Booth; Farah Yasmeen 
Abstract:  When independence is assumed, forecasts of mortality for subpopulations are almost always divergent in the long term. We propose a method for non-divergent or coherent forecasting of mortality rates for two or more subpopulations, based on functional principal components models of simple and interpretable functions of rates. The product-ratio functional forecasting method models and forecasts the geometric mean of subpopulation rates and the ratio of subpopulation rates to product rates. Coherence is imposed by constraining the forecast ratio function through stationary time series models. The method is applied to sex-specific data for Sweden and state-specific data for Australia. Based on out-of-sample forecasts, the coherent forecasts are at least as accurate in overall terms as comparable independent forecasts, and forecast accuracy is homogenised across subpopulations. 
Keywords:  Mortality forecasting, coherent forecasts, functional data, Lee-Carter method, life expectancy, mortality, age pattern of mortality, sex-ratio 
JEL:  J11 C53 C14 
Date:  2011–02–04 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20111&r=ecm 
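The product-ratio decomposition itself is simple algebra: two subpopulation rates factor exactly into a geometric-mean ("product") component and a square-root-ratio component, and each component is forecast separately. A sketch with synthetic rates (the data are made up; only the decomposition identity comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(9)
ages = 5
m_f = np.exp(rng.normal(-4.0, 0.2, size=ages))   # synthetic female mortality rates
m_m = np.exp(rng.normal(-3.8, 0.2, size=ages))   # synthetic male mortality rates

p = np.sqrt(m_f * m_m)    # "product": geometric mean of the two rates
r = np.sqrt(m_f / m_m)    # "ratio": square root of the rate ratio

# The original rates are recovered exactly from the two components:
m_f_rec = p * r
m_m_rec = p / r
```

Coherence follows because the ratio component r is forecast with a stationary model: it cannot drift, so the sub-population forecasts cannot diverge.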
By:  Irma Hindrayanto (VU University Amsterdam); John A.D. Aston (University of Warwick, UK); Siem Jan Koopman (VU University Amsterdam); Marius Ooms (VU University Amsterdam) 
Abstract:  The basic structural time series model has been designed for the modelling and forecasting of seasonal economic time series. In this paper we explore a generalisation of the basic structural time series model in which the time-varying trigonometric terms associated with different seasonal frequencies have different variances for their disturbances. The contribution of the paper is twofold. The first aim is to investigate the dynamic properties of this frequency-specific basic structural model. The second aim is to relate the model to a comparable generalised version of the Airline model developed at the U.S. Census Bureau. By adopting a quadratic distance metric based on the restricted reduced-form moving-average representation of the models, we conclude that the generalised models have properties that are close to each other compared to their default counterparts. In some settings, the distance between the models is almost zero, so that the models can be regarded as observationally equivalent. An extensive empirical study on disaggregated monthly shipment and foreign trade series illustrates the improvements of the frequency-specific extension and investigates the relations between the two classes of models. 
Keywords:  Frequency-specific model; Kalman filter; model-based seasonal adjustment; unobserved components time series model. 
JEL:  C22 C52 
Date:  2010–02–04 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100018&r=ecm 
By:  Juan Manuel Julio 
Abstract:  Preliminary and delayed Colombian GDP reports are replaced with optimal in-sample nowcasts of “true” GDP figures derived from a model for data revisions. The new GDP time series is augmented with optimal out-of-sample forecasts and backcasts of the “true” GDP figures derived from the same model. The trend-cycle component of the augmented GDP series is then filtered. The resulting gap is more resistant than the ordinary HP filter to the end-of-sample optimal filtering problem, as well as to GDP revisions and delays. The short-term noise of the final output gap estimate is also reduced. Adjusting for data revisions and delays reduces the uncertainty of estimated gaps. The extended and further-extended HP estimates of the output gap show an impressive efficiency gain with respect to the ordinary HP gap, 43% and 47% respectively, on average. The new extension increases efficiency by 7.4%, on average, with respect to the extended HP estimates. These results constitute a benchmark for future work on real-time estimation of the output gap under GDP revisions and delays in Colombia. 
Date:  2011–02–15 
URL:  http://d.repec.org/n?u=RePEc:col:000094:007956&r=ecm 
By:  Tim Salimans (Erasmus University Rotterdam) 
Abstract:  Regression analyses of cross-country economic growth data are complicated by two main forms of model uncertainty: the uncertainty in selecting explanatory variables and the uncertainty in specifying the functional form of the regression function. Most discussions in the literature address these problems independently, yet a joint treatment is essential. We perform this joint treatment by extending the linear model to allow for multiple-regime parameter heterogeneity of the type suggested by new growth theory, while addressing the variable selection problem by means of Bayesian model averaging. Controlling for variable selection uncertainty, we confirm the evidence in favor of new growth theory presented in several earlier studies. However, controlling for functional form uncertainty, we find that the effects of many of the explanatory variables identified in the literature are not robust across countries and variable selections. 
Keywords:  growth regression; variable selection; model uncertainty; model averaging; semiparametric Bayes; MCMC 
JEL:  C11 C14 C15 O40 O57 
Date:  2011–01–18 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110012&r=ecm 
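Bayesian model averaging over regressor subsets can be sketched with the standard BIC approximation to the marginal likelihood. Everything below is an illustrative stand-in (the paper's actual implementation is a semiparametric MCMC scheme): each candidate model's posterior weight is proportional to exp(-BIC/2) under a flat model prior, and each regressor's inclusion probability is the total weight of the models containing it.

```python
import numpy as np
from itertools import combinations

def bma_inclusion(y, X, names):
    """Posterior inclusion probabilities from Bayesian model averaging
    over all subsets of the columns of X, with each model's posterior
    weight approximated by exp(-BIC/2) (flat model prior)."""
    n, k = X.shape
    log_w, models = [], []
    for r in range(k + 1):
        for cols in combinations(range(k), r):
            Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(np.sum((y - Z @ beta) ** 2))
            bic = n * np.log(rss / n) + Z.shape[1] * np.log(n)
            log_w.append(-0.5 * bic)
            models.append(set(cols))
    w = np.exp(np.array(log_w) - max(log_w))
    w /= w.sum()
    return {names[j]: float(sum(wi for wi, m in zip(w, models) if j in m))
            for j in range(k)}
```

With a strong true regressor and an irrelevant one, the former's inclusion probability should be near one and clearly exceed the latter's.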
By:  Hecq Alain; Laurent Sébastien; Palm Franz (METEOR) 
Abstract:  Using a reduced rank regression framework as well as information criteria, we investigate the presence of commonalities in the intraday periodicity, a dominant feature in the return volatility of most intraday financial time series. We find that the test has little size distortion and reasonable power, even in the presence of jumps. We also find that only three factors are needed to describe the intraday periodicity of thirty US asset returns sampled at the 5-minute frequency. Interestingly, we find that for most series the models imposing these commonalities deliver better forecasts of the conditional intraday variance than those in which the intraday periodicity is estimated for each asset separately. 
Keywords:  financial economics and financial management 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2011010&r=ecm 
By:  Patrick M. Kline; Andres Santos 
Abstract:  We examine the higher-order properties of the wild bootstrap (Wu, 1986) in a linear regression model with stochastic regressors. We find that the ability of the wild bootstrap to provide a higher-order refinement is contingent upon whether the errors are mean independent of the regressors or merely uncorrelated. In the latter case, the wild bootstrap may fail to match some of the terms in an Edgeworth expansion of the full-sample test statistic, potentially leading to only a partial refinement (Liu and Singh, 1987). To assess the practical implications of this result, we conduct a Monte Carlo study contrasting the performance of the wild bootstrap with the traditional nonparametric bootstrap. 
JEL:  C12 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:16793&r=ecm 
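The wild bootstrap studied here resamples residuals multiplied by independent sign flips, which preserves each observation's error variance and hence any heteroskedasticity. A minimal null-imposed version for testing a zero slope with Rademacher weights (a generic sketch, not the authors' Monte Carlo design):

```python
import numpy as np

def wild_bootstrap_pvalue(y, x, reps=499, seed=0):
    """Wild bootstrap p-value for H0: slope = 0 in y = a + b*x + e.
    The null is imposed, and demeaned residuals are perturbed with
    Rademacher weights w_i in {-1, +1}, preserving each observation's
    error variance (Wu, 1986)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc = x - x.mean()
    slope = xc @ (y - y.mean()) / (xc @ xc)  # OLS slope estimate
    resid0 = y - y.mean()                    # residuals under the null
    exceed = 0
    for _ in range(reps):
        ystar = y.mean() + resid0 * rng.choice([-1.0, 1.0], size=len(y))
        slope_star = xc @ (ystar - ystar.mean()) / (xc @ xc)
        exceed += abs(slope_star) >= abs(slope)
    return (exceed + 1) / (reps + 1)
```

When the data carry a strong true slope, the bootstrap distribution generated under the null rarely exceeds the observed statistic and the p-value is small.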
By:  Christian Bach (Aarhus University, School of Economics and Management and CREATES); Bent Jesper Christensen (Aarhus University, School of Economics and Management and CREATES) 
Abstract:  We include simultaneously both realized volatility measures based on high-frequency asset returns and implied volatilities backed out of individual traded at-the-money option prices in a state space approach to the analysis of true underlying volatility. We model integrated volatility as a latent first-order Markov process and show that our model is closely related to the CEV and Barndorff-Nielsen & Shephard (2001) models for local volatility. We show that if measurement noise in the observable volatility proxies is not accounted for, then the estimated autoregressive parameter in the latent process is downward biased. Implied volatility performs better than any of the alternative realized measures when forecasting future integrated volatility. The results are largely similar across the stock market (S&P 500), bond market (30-year U.S. T-bond), and foreign currency exchange market ($/£). 
Keywords:  Autoregression, bipower variation, high-frequency data, implied volatility, integrated volatility, Kalman filter, moving average, option prices, realized volatility, state space model, stochastic volatility. 
JEL:  C32 G13 G14 
Date:  2011–02–11 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201061&r=ecm 
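The downward bias in the autoregressive parameter when proxy noise is ignored is the classical errors-in-variables attenuation, and a small simulation makes the point (illustrative only; the paper's state space model handles the noise formally via the Kalman filter):

```python
import numpy as np

def ar1_attenuation_demo(phi=0.9, sigma_e=1.0, sigma_u=1.0, n=20000, seed=0):
    """Latent AR(1) state x_t = phi*x_{t-1} + e_t observed through a
    noisy proxy y_t = x_t + u_t. Regressing y_t on y_{t-1} recovers
    phi shrunk by the signal-to-total variance ratio, the classical
    errors-in-variables attenuation."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + sigma_e * rng.standard_normal()
    y = x + sigma_u * rng.standard_normal(n)
    phi_ols = np.cov(y[1:], y[:-1], ddof=0)[0, 1] / np.var(y[:-1])
    var_x = sigma_e ** 2 / (1.0 - phi ** 2)
    phi_attenuated = phi * var_x / (var_x + sigma_u ** 2)
    return phi_ols, phi_attenuated
```

With the default parameters the naive estimate lands near phi * var(x) / (var(x) + var(u)) ≈ 0.76 rather than the true 0.9.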
By:  Marine Carrasco; Rachidi Kotchoni 
Abstract:  A shrinkage estimator of the integrated volatility is derived within a semiparametric moving average microstructure noise model specified at the highest frequency. The order of the moving average is allowed to increase with the sampling frequency, which implies that the first-order autocorrelation of the noise converges to one as the sampling frequency goes to infinity. Estimators are derived for the identifiable parameters of the model, and their good properties are confirmed in simulation. The results of an empirical application with stocks listed in the DJI suggest that the order of the moving average model postulated for the noise typically increases more slowly than the square root of the sampling frequency. 
Keywords:  Integrated volatility, method of moments, microstructure noise, realized kernel, shrinkage. 
Date:  2011–02–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2011s29&r=ecm 
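The effect of microstructure noise on realized variance is easy to see in a volatility-signature experiment: with i.i.d. noise, realized variance computed from every tick is inflated by roughly 2n times the noise variance, while coarser sampling shrinks the inflation at the cost of using fewer returns. A generic sketch with illustrative parameter values (not the paper's shrinkage estimator):

```python
import numpy as np

def realized_variance(log_prices, step=1):
    """Sum of squared returns of log prices sampled every `step` ticks."""
    r = np.diff(log_prices[::step])
    return float(np.sum(r ** 2))

def simulate_noisy_prices(n=10000, sigma=0.001, noise=0.005, seed=0):
    """Efficient log price = Gaussian random walk (std `sigma` per
    tick); the observed price adds i.i.d. microstructure noise."""
    rng = np.random.default_rng(seed)
    x = np.cumsum(sigma * rng.standard_normal(n))
    return x + noise * rng.standard_normal(n)
```

At tick frequency the expected realized variance is approximately the integrated variance n*sigma^2 plus 2*n*noise^2, which dwarfs the coarse-sampled estimate when the noise is sizable.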
By:  Cem Cakmakli (Erasmus University Rotterdam); Richard Paap (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam) 
Abstract:  This paper develops a Markov-switching vector autoregressive model that allows for imperfect synchronization of cyclical regimes in multiple variables, due to phase shifts of a single common cycle. The model has three key features: (i) the amount of phase shift can be different across regimes (as well as across variables), (ii) it allows the cycle to consist of any number of regimes J ≥ 2, and (iii) it allows for regime-dependent volatilities and correlations. In an empirical application to monthly returns on size-based stock portfolios, a three-regime model with asymmetric phase shifts and regime-dependent heteroscedasticity is found to characterize the joint distribution of returns most adequately. While large- and small-cap portfolios switch contemporaneously into boom and crash regimes, the large-cap portfolio leads the small-cap portfolio by a month for switches to a moderate regime. 
Keywords:  imperfect synchronization; phase shifts; regime-switching models; Bayesian analysis 
JEL:  C11 C32 C51 C52 
Date:  2011–01–06 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110002&r=ecm 
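Regime probabilities in Markov-switching models of this kind are tracked recursively with the Hamilton filter, which combines one-step-ahead regime probabilities from the transition matrix with the likelihood of each new observation. A minimal univariate Gaussian sketch (the paper's model is a multivariate VAR with phase shifts; this shows only the basic filtering step):

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P, p0):
    """Filtered regime probabilities for a J-regime Markov-switching
    model with y_t ~ N(mu[j], sigma[j]^2) in regime j and transition
    matrix P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
    T, J = len(y), len(mu)
    probs = np.zeros((T, J))
    p = np.asarray(p0, float)
    for t in range(T):
        pred = P.T @ p                                    # predict regime
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / sigma
        p = pred * dens
        p /= p.sum()                                      # Bayes update
        probs[t] = p
    return probs
```

With well-separated regime means the filter locks onto the active regime almost immediately after each switch.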
By:  Pudney S (Institute for Social and Economic Research) 
Abstract:  Factor rotation is widely used to interpret the estimated factor loadings from latent variable models. Rotation methods embody a priori concepts of `complexity' of factor structures, which they seek to minimise. Surprisingly, it is rare for researchers to exploit one of the most common and powerful sources of a priori information: nonnegativity of factor loadings. This paper develops a method of incorporating sign restrictions in factor rotation, exploiting a recently developed test for multiple inequality constraints. An application to the measurement of disability demonstrates the feasibility of the method and the power of nonnegativity restrictions. 
Date:  2011–02–14 
URL:  http://d.repec.org/n?u=RePEc:ese:iserwp:201105&r=ecm 
By:  Rodney W. Strachan (Australian National University, Australia); Herman K. van Dijk (Erasmus University Rotterdam, the Netherlands) 
Abstract:  Divergent priors are improper when defined on unbounded supports. Bartlett's paradox has been taken to imply that using improper priors results in ill-defined Bayes factors, preventing model comparison by posterior probabilities. However, many improper priors have attractive properties that econometricians may wish to access while still conducting model comparison. We present a method of computing well-defined Bayes factors with divergent priors by setting rules on the rate of diffusion of prior certainty. The method is exact; no approximations are used. As a further result, we demonstrate that exceptions to Bartlett's paradox exist; that is, we show it is possible to construct improper priors that result in well-defined Bayes factors. One important improper prior, the shrinkage prior due to Stein (1956), is one such example. This example highlights pathologies with the resulting Bayes factors in such cases, and a simple solution to this problem is presented. A simple Monte Carlo experiment demonstrates the applicability of the approach developed in this paper. 
Keywords:  Improper prior; Bayes factor; marginal likelihood; Shrinkage prior; Measure 
JEL:  C11 C52 C15 C32 
Date:  2011–01–10 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110006&r=ecm 
By:  Vincent van den Berg (VU University Amsterdam); Eric Kroes (VU University Amsterdam); Erik T. Verhoef (VU University Amsterdam) 
Abstract:  It is a common finding in empirical discrete choice studies that the estimated mean relative values of the coefficients (i.e. WTPs) from multinomial logit (MNL) estimations differ from those calculated using mixed logit estimations, where the mixed logit has the better statistical fit. However, it is less clear under exactly what circumstances such differences arise, whether they are important, and whether they can be seen as biases in the WTP estimates from MNL. We use datasets created by Monte Carlo simulation to test, in a controlled environment, the effects of the different possible sources of bias on the accuracy of WTPs estimated by MNL. Consistent with earlier research, we find that random unobserved heterogeneity in the marginal utilities does not in itself bias the MNL estimates. Furthermore, whether or not the unobserved heterogeneity is symmetrically shaped also does not affect the accuracy of the WTP estimates of MNL. However, we find that if two heterogeneous marginal utilities are correlated, then the WTPs from MNL may be biased: if the correlation between the marginal utilities is negative, the bias in the MNL estimate is negative, whereas if the correlation is positive the bias is positive. 
Keywords:  Discrete Choice; Biases in WTPs; Multinomial Logit; Correlated Heterogeneous Marginal Utilities 
JEL:  C13 C25 
Date:  2010–01–28 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20100014&r=ecm 
By:  Lorenzo Masiero (Institute for Economic Research (IRE), Faculty of Economics, University of Lugano, Switzerland); John M. Rose (Institute of Transport and Logistics Studies (ITLS), Faculty of Economics and Business, The University of Sydney, Australia) 
Abstract:  Within the discrete choice modelling literature, there has been growing interest in including reference alternatives within stated choice survey tasks. Recent studies have investigated asymmetric utility specifications by estimating discrete choice models with different parameters for gains and losses relative to the values of the reference attributes. This paper analyses asymmetric discrete choice models by comparing specifications expressed as deviations from the reference point with specifications expressed in absolute values. The results suggest that the selection of the appropriate asymmetric model specification should be based on the type of stated choice experiment. 
Keywords:  stated choice experiments, reference alternative, preference asymmetry, willingness to pay 
JEL:  C25 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:lug:wpaper:1104&r=ecm 
By:  Wasel Shadat; Chris Orme 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1109&r=ecm 
By:  Jef Boeckx (National Bank of Belgium, Research Department) 
Abstract:  I propose a discrete choice method for estimating monetary policy reaction functions based on research by Hu and Phillips (2004). This method distinguishes between determining the underlying desired rate, which drives policy rate changes, and actually implementing interest rate changes. The method is applied to ECB rate setting between 1999 and 2010 by estimating a forward-looking Taylor rule on a monthly basis, using real-time data drawn from the Survey of Professional Forecasters. All parameters are estimated significantly and with the expected sign. Including the period of financial turmoil in the sample delivers a less aggressive policy rule, as the ECB was constrained by the lower bound on nominal interest rates. The ECB's non-standard measures helped to circumvent that constraint on monetary policy, however. For the pre-turmoil sample, the discrete choice model's estimated desired policy rate is more aggressive and less gradual than least squares estimates of the same rule specification. This is because the discrete choice model accounts for the fact that central banks change interest rates by discrete amounts. An advantage of using discrete choice models is that probabilities are attached to the different possible outcomes of every interest rate setting meeting. These probabilities correlate fairly well with those derived from surveys among commercial bank economists. 
Keywords:  monetary policy reaction functions, discrete choice models, interest rate setting, ECB 
JEL:  C25 E52 E58 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:nbb:reswpp:201102210&r=ecm 
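The distinction between a continuous desired rate and discrete implemented changes maps naturally into ordered-choice probabilities: the bank moves only when the desired change crosses a threshold. The sketch below is purely illustrative (the threshold, shock scale, and 25bp grid are assumptions, not the paper's estimates):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def meeting_probabilities(desired_change, threshold=0.125, sigma=0.10):
    """Probabilities of a -25bp cut, no change, or +25bp hike at one
    meeting, given the desired rate change (in percentage points).
    The bank moves only when desired change plus a N(0, sigma^2)
    shock crosses +/- threshold; all values here are illustrative."""
    p_cut = norm_cdf((-threshold - desired_change) / sigma)
    p_hike = 1.0 - norm_cdf((threshold - desired_change) / sigma)
    return {"cut": p_cut, "hold": 1.0 - p_cut - p_hike, "hike": p_hike}
```

A desired change of zero gives symmetric cut and hike probabilities with "hold" most likely, while a desired change well above the threshold makes a hike nearly certain.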
By:  Das, J.W.M. (Tilburg University); Toepoel, V. (Tilburg University); Soest, A.H.O. van (Tilburg University) 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ner:tilbur:urn:nbn:nl:ui:124501005&r=ecm 
By:  Hubbard, Timothy P.; Li, Tong; Paarsch, Harry J. 
Abstract:  Within the affiliated private-values paradigm, we develop a tractable empirical model of equilibrium behaviour at first-price, sealed-bid auctions. The model is nonparametrically identified, but the rate of convergence in estimation is slow when the number of bidders is even moderately large, so we develop a semiparametric estimation strategy, focusing on the Archimedean family of copulae and implementing this framework using particular members: the Clayton, Frank, and Gumbel copulae. We apply our framework to data from low-price, sealed-bid auctions used by the Michigan Department of Transportation to procure road-resurfacing services, rejecting the hypothesis of independence and finding significant (and high) affiliation in cost signals. 
Keywords:  first-price, sealed-bid auctions, copulae, affiliation 
JEL:  C20 D44 L1 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:hit:hitcei:201010&r=ecm 
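Positive dependence of the kind found in the cost signals can be illustrated by simulating one of the copulae the authors use. The Clayton copula has a simple mixture representation (Marshall-Olkin algorithm), and its Kendall's tau equals theta / (theta + 2), so strong affiliation corresponds to large theta. A generic sketch, not the authors' estimator:

```python
import numpy as np

def sample_clayton(theta, n, d=2, seed=0):
    """Draw n samples from a d-dimensional Clayton copula via the
    Marshall-Olkin algorithm: U_j = (1 + E_j / V)^(-1/theta) with
    V ~ Gamma(1/theta) and E_j ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    v = rng.gamma(1.0 / theta, size=(n, 1))
    e = rng.exponential(size=(n, d))
    return (1.0 + e / v) ** (-1.0 / theta)

def kendall_tau(u, v):
    """Sample Kendall's tau; for a Clayton copula the population
    value is theta / (theta + 2)."""
    n = len(u)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(u[i] - u[i + 1:]) * np.sign(v[i] - v[i + 1:]))
    return 2.0 * s / (n * (n - 1))
```

For theta = 2 the population Kendall's tau is 0.5, and the sample statistic from a few thousand draws lands close to it.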