
on Econometrics 
By:  Hirukawa, Masayuki; Prokhorov, Artem 
Abstract:  Economists often use matched samples, especially when dealing with earnings data where a number of missing observations need to be imputed. In this paper, we demonstrate that the ordinary least squares estimator of the linear regression model using matched samples is inconsistent and has a nonstandard convergence rate to its probability limit. If only a few variables are used to impute the missing data, then it is possible to correct for the bias. We propose two semiparametric bias-corrected estimators and explore their asymptotic properties. The estimators have an indirect-inference interpretation, and their convergence rates depend on the number of variables used in matching. We can attain the parametric convergence rate if that number is no greater than three. Monte Carlo simulations confirm that the bias correction works very well in such cases. 
Keywords:  measurement error bias; matching estimation; linear regression; indirect inference; differencing; bias correction 
JEL:  C13 C14 C31 
Date:  2014–05 
URL:  http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/11431&r=ecm 
By:  Pawel Janus (UBS Global Asset Management, the Netherlands); André Lucas (VU University Amsterdam); and Anne Opschoor (VU University Amsterdam, the Netherlands) 
Abstract:  We develop a new model for the multivariate covariance matrix dynamics based on daily return observations and daily realized covariance matrix kernels based on intraday data. Both types of data may be fat-tailed. We account for this by assuming a matrix-F distribution for the realized kernels, and a multivariate Student’s t distribution for the returns. Using generalized autoregressive score dynamics for the unobserved true covariance matrix, our approach automatically corrects for the effect of outliers and incidentally large observations, both in returns and in covariances. Moreover, by an appropriate choice of scaling of the conditional score function, we are able to retain a convenient matrix formulation for the dynamic updates of the covariance matrix. This makes the model highly computationally efficient. We show how the model performs in a controlled simulation setting as well as for empirical data. In our empirical application, we study daily returns and realized kernels from 15 equities over the period 2001–2012 and find that the new model statistically outperforms (recently developed) multivariate volatility models, both in-sample and out-of-sample. We also comment on the possibility of using composite likelihood methods for estimation if desired. 
Keywords:  realized covariance matrices, heavy tails, (degenerate) matrix-F distribution, generalized autoregressive score (GAS) dynamics 
JEL:  C32 C58 
Date:  2014–06–19 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20140073&r=ecm 
By:  Jentsch, Carsten; Paparoditis, Efstathios; Politis, Dimitris N. 
Abstract:  We develop some asymptotic theory for applications of block bootstrap resampling schemes to multivariate integrated and cointegrated time series. It is proved that a multivariate, continuous-path block bootstrap scheme applied to a full-rank integrated process succeeds in estimating consistently the distribution of the least squares estimators in both the regression and the spurious regression case. Furthermore, it is shown that the same block resampling scheme does not succeed in estimating the distribution of the parameter estimators in the case of cointegrated time series. For this situation, a modified block resampling scheme, the so-called residual-based block bootstrap, is investigated and its validity for approximating the distribution of the regression parameters is established. The performance of the proposed block bootstrap procedures is illustrated in a short simulation study. 
Keywords:  Block bootstrap, bootstrap consistency, spurious regression, functional limit theorem, continuous-path block bootstrap, model-based block bootstrap 
JEL:  C15 C32 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:mnh:wpaper:36668&r=ecm 
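For readers unfamiliar with block resampling, the basic idea can be sketched as follows. This is a minimal plain moving-block bootstrap in Python, not the continuous-path or residual-based variants studied in the paper; all function and variable names are illustrative:

```python
import numpy as np

def moving_block_bootstrap(x, block_len, seed=None):
    """Resample a series by concatenating randomly chosen overlapping
    blocks of length block_len, then truncating back to len(x)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = -(-n // block_len)  # ceil(n / block_len)
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(200))         # a random-walk (I(1)) path
x_star = moving_block_bootstrap(x, block_len=20, seed=1)
```

Resampling blocks rather than single observations is what preserves the short-run dependence within each block; the paper's contribution concerns how such schemes behave when the series is integrated or cointegrated.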
By:  Abdelhamid Ouakasse; Guy Melard 
Abstract:  Recursive estimation methods for time series models usually make use of recurrences for the vector of parameters, the model error and its derivatives with respect to the parameters, plus a recurrence for the Hessian of the model error. An alternative method is proposed in the case of an autoregressive moving average model, where the Hessian is not updated but is replaced, at each time, by the inverse of the Fisher information matrix evaluated at the current parameter. The asymptotic properties, consistency and asymptotic normality, of the new estimator are obtained. Monte Carlo experiments indicate that the estimates may converge faster to the true values of the parameters than when the Hessian is updated. The paper is illustrated by an example on forecasting the speed of wind. 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/13844&r=ecm 
By:  Jentsch, Carsten; Leucht, Anne 
Abstract:  Sample quantiles are consistent estimators for the true quantile and satisfy central limit theorems (CLTs) if the underlying distribution is continuous. If the distribution is discrete, the situation is much more delicate. In this case, sample quantiles are known to be not even consistent in general for the population quantiles. In a motivating example, we show that Efron’s bootstrap does not consistently mimic the distribution of sample quantiles even in the discrete independent and identically distributed (i.i.d.) data case. To overcome this bootstrap inconsistency, we provide two different and complementing strategies. In the first part of this paper, we prove that m-out-of-n-type bootstraps do consistently mimic the distribution of sample quantiles in the discrete data case. As the corresponding bootstrap confidence intervals tend to be conservative due to the discreteness of the true distribution, we propose randomization techniques to construct bootstrap confidence sets of asymptotically correct size. In the second part, we consider a continuous modification of the cumulative distribution function and make use of mid-quantiles studied in Ma, Genton and Parzen (2011). Contrary to ordinary quantiles and due to continuity, mid-quantiles lose their discrete nature and can be estimated consistently. Moreover, Ma, Genton and Parzen (2011) proved (non-)central limit theorems for i.i.d. data, which we generalize to the time series case. However, as the mid-quantile function fails to be differentiable, classical i.i.d. or block bootstrap methods do not lead to completely satisfactory results and m-out-of-n variants are required here as well. The finite sample performances of both approaches are illustrated in a simulation study by comparing coverage rates of bootstrap confidence intervals. 
Keywords:  Bootstrap inconsistency, Count processes, Mid-distribution function, m-out-of-n bootstrap, Integer-valued processes 
JEL:  C13 C15 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:mnh:wpaper:36588&r=ecm 
By:  Elbers, Chris; van der Weide, Roy 
Abstract:  This paper proposes a method for estimating distribution functions that are associated with the nested errors in linear mixed models. The estimator incorporates Empirical Bayes prediction while making minimal assumptions about the shape of the error distributions. The application presented in this paper is the small area estimation of poverty and inequality, although this is by no means the only application. Monte Carlo simulations show that estimates of poverty and inequality can be severely biased when the non-normality of the errors is ignored. The bias can be as high as 2 to 3 percent on a poverty rate of 20 to 30 percent. Most of this bias is resolved when using the proposed estimator. The approach is applicable to both survey-to-census and survey-to-survey prediction. 
Keywords:  Statistical & Mathematical Sciences, Econometrics, Achieving Shared Growth, Inequality, Economic Theory & Research 
Date:  2014–07–01 
URL:  http://d.repec.org/n?u=RePEc:wbk:wbrwps:6962&r=ecm 
By:  Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam, the Netherlands); and André Lucas (VU University Amsterdam, the Netherlands, and Aarhus University, Denmark) 
Abstract:  The strong consistency and asymptotic normality of the maximum likelihood estimator in observation-driven models usually requires the study of the model both as a filter for the time-varying parameter and as a data generating process (DGP) for observed data. The probabilistic properties of the filter can be substantially different from those of the DGP. This difference is particularly relevant for recently developed time-varying parameter models. We establish new conditions under which the dynamic properties of the true time-varying parameter as well as of its filtered counterpart are both well-behaved, requiring the verification of only one rather than two sets of conditions. In particular, we formulate conditions under which the (local) invertibility of the model follows directly from the stable behavior of the true time-varying parameter. We use these results to prove the local strong consistency and asymptotic normality of the maximum likelihood estimator. To illustrate the results, we apply the theory to a number of empirically relevant models. 
Keywords:  Observation-driven models; stochastic recurrence equations; contraction conditions; invertibility; stationarity; ergodicity; generalized autoregressive score models 
JEL:  C13 C22 C12 
Date:  2014–06–20 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20140074&r=ecm 
By:  Doukhan, Paul; Lang, Gabriel; Leucht, Anne; Neumann, Michael H. 
Abstract:  In this paper, we propose a model-free bootstrap method for the empirical process under absolute regularity. More precisely, consistency of an adapted version of the so-called dependent wild bootstrap, which was introduced by Shao (2010) and is very easy to implement, is proved under minimal conditions on the tuning parameter of the procedure. We apply our results to construct confidence intervals for unknown parameters and to approximate critical values for statistical tests. A simulation study shows that our method is competitive with standard block bootstrap methods in finite samples. 
Keywords:  Absolute regularity, bootstrap, empirical process, time series, V-statistics, quantiles, Kolmogorov-Smirnov test 
JEL:  C 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:mnh:wpaper:35246&r=ecm 
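The dependent wild bootstrap generates pseudo-series by multiplying the centered observations by a smooth, dependent multiplier process whose dependence range is governed by a bandwidth. The sketch below uses Gaussian AR(1) multipliers as one convenient choice of multiplier process; it is an illustrative sketch, not the authors' adapted version, and all names are made up:

```python
import numpy as np

def dependent_wild_bootstrap(x, l, seed=None):
    """One dependent-wild-bootstrap pseudo-series: centered observations
    are multiplied by a stationary N(0,1) multiplier series whose
    autocorrelation decays over roughly l lags (AR(1) construction)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    rho = np.exp(-1.0 / l)                   # lag-1 correlation of multipliers
    w = np.empty(n)
    w[0] = rng.standard_normal()
    for t in range(1, n):
        w[t] = rho * w[t - 1] + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    xbar = x.mean()
    return xbar + (x - xbar) * w

x = np.sin(np.arange(300) / 10.0) + np.random.default_rng(0).standard_normal(300)
x_star = dependent_wild_bootstrap(x, l=7, seed=1)
```

The single tuning parameter l (the multiplier bandwidth) is exactly the quantity on which the paper imposes its minimal conditions.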
By:  Giorgio Calzolari (University of Florence); Laura Magazzini (Department of Economics (University of Verona)) 
Abstract:  Within the framework of dynamic panel data models with mean stationarity, one additional moment condition may remarkably increase the efficiency of the system GMM estimator. This additional condition is essentially a condition of “homoskedasticity” of the individual effects; it is “implicitly satisfied” in all the Monte Carlo simulations on dynamic panel data models available in the literature (including the experiments with heteroskedasticity, which is always confined to the idiosyncratic errors), but not “explicitly” exploited. Monte Carlo experiments show remarkable efficiency improvements when the distribution of the individual effects, and thus of the initial observations yi0, is skewed, thus covering cases that are very important in economic applications, involving variables like individual wages, firm sizes, numbers of employees, etc. 
Keywords:  panel data, dynamic model, GMM estimation, mean stationarity, skewed individual effects 
JEL:  C23 C13 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:ver:wpaper:12/2014&r=ecm 
By:  Siem Jan Koopman; and Geert Mesters (VU University Amsterdam) 
Abstract:  We consider the dynamic factor model where the loading matrix, the dynamic factors and the disturbances are treated as latent stochastic processes. We present empirical Bayes methods that enable the efficient shrinkagebased estimation of the loadings and the factors. We show that our estimates have lower quadratic loss compared to the standard maximum likelihood estimates. We investigate the methods in a Monte Carlo study where we document the finite sample properties. Finally, we present and discuss the results of an empirical study concerning the forecasting of U.S. macroeconomic time series using our empirical Bayes methods. 
Keywords:  Importance sampling, Kalman filtering, Likelihoodbased analysis, Posterior modes, RaoBlackwellization, Shrinkage 
JEL:  C32 C43 
Date:  2014–05–23 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20140061&r=ecm 
By:  Bertille Antoine (Simon Fraser University); Otilia Boldea (Tilburg University) 
Abstract:  In the last two decades, there has been a lot of empirical evidence suggesting that many macroeconometric and financial models (e.g. for inflation, interest rates, or exchange rates) are subject to both parameter instability and identification problems. In this paper, we address both issues in a unified framework, and provide a comprehensive treatment of the link between them. Changes in identification strength provide an additional source of information that is used to improve estimation. More generally, we show that detecting and locating changes in instrument strength is essential for efficient asymptotic inference, and we provide a stepbystep guide for practitioners. In our simulation studies, our global inference procedures show very good size and power properties. 
Keywords:  GMM; Identification; Weak instruments; Break point; Change in identification strength 
JEL:  C13 C22 C26 C36 C51 
Date:  2014–06 
URL:  http://d.repec.org/n?u=RePEc:sfu:sfudps:dp1403&r=ecm 
By:  Proietti, Tommaso 
Abstract:  Extracting and forecasting the volatility of financial markets is an important empirical problem. Time series of realized volatility or other volatility proxies, such as squared returns, display long range dependence. Exponential smoothing (ES) is a very popular and successful forecasting and signal extraction scheme, but it can be suboptimal for long memory time series. This paper discusses possible long memory extensions of ES and implements a generalization based on a fractional equal-root integrated moving average (FerIMA) model, proposed originally by Hosking in his seminal 1981 article on fractional differencing. We provide a decomposition of the process into the sum of fractional noise processes with decreasing orders of integration, encompassing simple and double exponential smoothing, and introduce a low-pass real-time filter arising in the long memory case. Signal extraction and prediction depend on two parameters: the memory (fractional integration) parameter and a mean reversion parameter. They can be estimated by pseudo maximum likelihood in the frequency domain. We then address the prediction of volatility by a FerIMA model and carry out a recursive forecasting experiment, which proves that the proposed generalized exponential smoothing predictor improves significantly upon commonly used methods for forecasting realized volatility. 
Keywords:  Realized volatility; Signal extraction; Permanent-transitory decomposition; Fractional equal-root IMA model 
JEL:  C22 C53 G17 
Date:  2014–07–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:57230&r=ecm 
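The fractional differencing operator behind such models expands as (1 - L)^d with coefficients that obey a simple one-step recursion. A minimal sketch of those weights (an illustration of fractional differencing generally, not of the paper's FerIMA estimator):

```python
def frac_diff_weights(d, k):
    """First k+1 coefficients of the binomial expansion of (1 - L)**d:
    w_0 = 1 and w_j = w_{j-1} * (j - 1 - d) / j."""
    w = [1.0]
    for j in range(1, k + 1):
        w.append(w[-1] * (j - 1 - d) / j)
    return w

w = frac_diff_weights(0.4, 5)
# w[1] = -0.4 and w[2] = -0.12; for 0 < d < 1 the weights decay
# hyperbolically rather than geometrically, the signature of long memory.
```

The slow hyperbolic decay of these weights is what distinguishes fractionally integrated processes from the geometric memory of ordinary ARMA smoothers such as ES.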
By:  Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam, the Netherlands, and CREATES, Aarhus University, Denmark); André Lucas (VU University Amsterdam) 
Abstract:  We investigate the information theoretic optimality properties of the score function of the predictive likelihood as a device to update parameters in observation-driven time-varying parameter models. The results provide a new theoretical justification for the class of generalized autoregressive score models, which covers the GARCH model as a special case. Our main contribution is to show that only parameter updates based on the score always reduce the local Kullback-Leibler divergence between the true conditional density and the model-implied conditional density. This result holds irrespective of the severity of model misspecification. We also show that the use of the score leads to a considerably smaller global Kullback-Leibler divergence in empirically relevant settings. We illustrate the theory with an application to time-varying volatility models. We show that the reduction in Kullback-Leibler divergence across a range of different settings can be substantial in comparison to updates based on, for example, squared lagged observations. 
Keywords:  generalized autoregressive models, information theory, optimality, Kullback-Leibler distance, volatility models 
JEL:  C12 C22 
Date:  2014–04–11 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20140046&r=ecm 
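A score-driven volatility update of the kind discussed here is easy to sketch. For a Gaussian measurement density and log-variance parameterization, the score with inverse-Fisher-information scaling reduces to y_t^2/exp(f_t) - 1. The code is a generic GAS(1,1)-style sketch under those assumptions, not the paper's specification; parameter values are illustrative:

```python
import numpy as np

def gas_log_variance_filter(y, omega, alpha, beta, f0=0.0):
    """Score-driven filter for f_t = log(sigma_t^2). The scaled score
    s_t = y_t**2 / exp(f_t) - 1 pushes the variance up after large
    shocks and pulls it down in quiet periods."""
    f = np.empty(len(y))
    f[0] = f0
    for t in range(len(y) - 1):
        s_t = y[t] ** 2 / np.exp(f[t]) - 1.0
        f[t + 1] = omega + alpha * s_t + beta * f[t]
    return f

y = np.random.default_rng(0).standard_normal(500)
f = gas_log_variance_filter(y, omega=0.0, alpha=0.05, beta=0.95)
```

Replacing s_t with, say, the raw squared lagged observation gives the alternative updates the paper compares against in Kullback-Leibler terms.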
By:  Giovanni Forchini (University of Surrey) 
Abstract:  This paper examines observational equivalence in a class of nonparametric structural equations models under weaker conditions than those currently available in the literature. It allows for several endogenous variables, does not impose differentiability or continuity of the equations which are part of the structure, and allows the unobserved errors to depend on the exogenous variables. The usefulness of the main result is illustrated by deriving observational equivalence conditions for some models including nonparametric simultaneous equations models, additive errors models, multivariate triangular models, etc. Some of these yield well known results as special cases. 
Date:  2014–06 
URL:  http://d.repec.org/n?u=RePEc:sur:surrec:0114&r=ecm 
By:  Niels Haldrup (Aarhus University and CREATES); Robinson Kruse (Leibniz University Hannover and CREATES) 
Abstract:  Fractionally integrated processes have become a standard class of models to describe the long memory features of economic and financial time series data. However, it has been demonstrated in numerous studies that structural break processes and nonlinear features can often be confused as being long memory. The question naturally arises whether it is possible empirically to determine the source of long memory as being genuinely long memory in the form of a fractionally integrated process or whether the long range dependence is of a different nature. In this paper we suggest a testing procedure that helps discriminate between such processes. The idea is based on the feature that nonlinear transformations of stationary fractionally integrated Gaussian processes decrease the order of memory in a specific way which is determined by the Hermite rank of the transformation. In principle, a nonlinear transformation of the series can make the series short memory I(0). We suggest using the Wald test of Shimotsu (2007) to test the null hypothesis that a vector time series of properly transformed variables is I(0). Our testing procedure is designed such that even nonstationary fractionally integrated processes are permitted under the null hypothesis. The test is shown to have good size and to be robust against certain types of deviations from Gaussianity. The test is also shown to be consistent against a broad class of processes that are nonfractional but still exhibit (spurious) long memory. In particular, the test is shown to have excellent power against a class of stationary and nonstationary random level shift models as well as Markov switching GARCH processes where the break and transition probabilities are allowed to be time varying. 
Keywords:  Long memory, fractional integration, nonlinear models, structural breaks, random level shifts, Hermite polynomials, realized volatility, inflation 
JEL:  C12 C2 C22 
Date:  2014–06–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201419&r=ecm 
By:  Arun G. Chandrasekhar; Matthew O. Jackson 
Abstract:  We define a general class of network formation models, Statistical Exponential Random Graph Models (SERGMs), that nest standard exponential random graph models (ERGMs) as a special case. We provide the first general results on when the parameters of these models (including ERGMs), estimated from the observation of a single network, are consistent (i.e., become accurate as the number of nodes grows). Next, addressing the problem that standard techniques of estimating ERGMs have been shown to have exponentially slow mixing times for many specifications, we show that by reformulating network formation as a distribution over the space of sufficient statistics instead of the space of networks, the size of the space of estimation can be greatly reduced, making estimation practical and easy. We also develop a related, but distinct, class of models that we call subgraph generation models (SUGMs) that are useful for modeling sparse networks and whose parameter estimates are also directly and easily estimable, consistent, and asymptotically normally distributed. Finally, we show how choice-based (strategic) network formation models can be written as SERGMs and SUGMs, and apply our models and techniques to network data from rural Indian villages. 
JEL:  C01 C51 D85 Z13 
Date:  2014–06 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:20276&r=ecm 
By:  Desire Kedagni; Ismael Mourifie 
Abstract:  This note discusses partial identification in a nonparametric triangular system with discrete endogenous regressors and nonseparable errors. Recently, [Jun, Pinkse and Xu (2011, JPX). Tighter Bounds in Triangular Systems. Journal of Econometrics 161(2), 122-128] provided bounds on the structural function evaluated at particular values using exclusion, exogeneity and rank conditions. We propose a simple idea that often allows us to improve the JPX bounds without invoking a new set of assumptions. Moreover, we show how our idea can be used to tighten existing bounds on the structural function in more general triangular systems. 
Keywords:  Nonparametric triangular systems; Partial identification; Instrumental variables; Rank conditions 
JEL:  C14 C30 C31 
Date:  2014–07–07 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa515&r=ecm 
By:  Karapanagiotidis, Paul 
Abstract:  Theory suggests that physical commodity prices may exhibit nonlinear features such as bubbles and various types of asymmetries. This paper investigates these claims empirically by introducing a new time series model apt to capture such features. The data set is composed of 25 individual, continuous contract, commodity futures price series, representative of a number of industry sectors including softs, precious metals, energy, and livestock. It is shown that the linear causal ARMA model with Gaussian innovations is unable to adequately account for the features of the data. In the purely descriptive time series literature, often a threshold autoregression (TAR) is employed to model cycles or asymmetries. Rather than take this approach, we suggest a novel process which is able to accommodate both bubbles and asymmetries in a flexible way. This process is composed of both causal and noncausal components and is formalized as the mixed causal/noncausal autoregressive model of order (r, s). Estimating the mixed causal/noncausal model with leptokurtic errors, by an approximated maximum likelihood method, results in dramatically improved model fit according to the Akaike information criterion. Comparisons of the estimated unconditional distributions of both the purely causal and mixed models also suggest that the mixed causal/noncausal model is more representative of the data according to the KullbackLeibler measure. Moreover, these estimation results demonstrate that allowing for such leptokurtic errors permits identification of various types of asymmetries. Finally, a strategy for computing the multiple steps ahead forecast of the conditional distribution is discussed. 
Keywords:  commodity futures, mixed causal/noncausal model, nonlinear dynamic models, speculative bubble 
JEL:  C22 C51 C52 C58 
Date:  2014–06–22 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:56805&r=ecm 
By:  Vallejos, Catalina; Steel, Mark F. J. 
Abstract:  The aim of this paper is to model the length of registration at university and its associated academic outcome for undergraduate students at the Pontificia Universidad Católica de Chile. Survival time is defined as the time until the end of the enrollment period, which can relate to different reasons (graduation or two types of dropout) that are driven by different processes. Hence, a competing risks model is employed for the analysis. The issue of separation of the outcomes (which precludes maximum likelihood estimation) is handled through the use of Bayesian inference with an appropriately chosen prior. We are interested in identifying important determinants of university outcomes and the associated model uncertainty is formally addressed through Bayesian model averaging. The methodology introduced for modelling university outcomes is applied to three selected degree programmes, which are particularly affected by dropout and late graduation. 
Keywords:  Bayesian model averaging; Competing risks; Outcomes separation; Proportional Odds model; University dropout 
JEL:  C1 C11 C41 I23 
Date:  2014–05–26 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:57185&r=ecm 
By:  Marco Bazzi (University of Padova, Italy); Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); and Andre Lucas (VU University Amsterdam, the Netherlands) 
Abstract:  We propose a new Markov switching model with time-varying probabilities for the transitions. The novelty of our model is that the transition probabilities evolve over time by means of an observation-driven model. The innovation of the time-varying probability is generated by the score of the predictive likelihood function. We show how the model dynamics can be readily interpreted. We investigate the performance of the model in a Monte Carlo study and show that the model is successful in estimating a range of different dynamic patterns for unobserved regime switching probabilities. We also illustrate the new methodology in an empirical setting by studying the dynamic mean and variance behavior of U.S. Industrial Production growth. We find empirical evidence of changes in the regime switching probabilities, with more persistence for high volatility regimes in the earlier part of the sample, and more persistence for low volatility regimes in the later part of the sample. 
Keywords:  Hidden Markov Models; observation driven models; generalized autoregressive score dynamics 
JEL:  C22 C32 
Date:  2014–06–17 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20140072&r=ecm 
By:  Rubio, Francisco Javier; Steel, Mark F. J. 
Abstract:  We introduce the family of univariate double two-piece distributions, obtained by using a density-based transformation of unimodal symmetric continuous distributions with a shape parameter. The resulting distributions contain five interpretable parameters that control the mode, as well as the scale and shape in each direction. Four-parameter subfamilies of this class of distributions that capture different types of asymmetry are presented. We propose interpretable scale- and location-invariant benchmark priors and derive conditions for the existence of the corresponding posterior distribution. The prior structures used allow for meaningful comparisons through Bayes factors within flexible families of distributions. These distributions are applied to models in finance, internet traffic data, and medicine, comparing them with appropriate competitors. 
Keywords:  model comparison; posterior existence; prior elicitation; scale mixtures of normals; unimodal continuous distributions 
JEL:  C11 C16 
Date:  2014–06–30 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:57102&r=ecm 
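The two-piece construction is easy to see in the simplest case: take a symmetric base density and give it a different scale on each side of the mode. The sketch below does this for the normal base (a standard two-piece normal, a special case of the family studied here, not the paper's full five-parameter class):

```python
import numpy as np

def two_piece_normal_pdf(x, mu, s1, s2):
    """Density with mode mu, scale s1 to the left of the mode and s2 to
    the right, glued continuously at mu; the constant c makes it
    integrate to one."""
    c = 2.0 / (np.sqrt(2.0 * np.pi) * (s1 + s2))
    z = np.where(x < mu, (x - mu) / s1, (x - mu) / s2)
    return c * np.exp(-0.5 * z ** 2)

grid = np.linspace(-15.0, 15.0, 20001)
pdf = two_piece_normal_pdf(grid, mu=1.0, s1=2.0, s2=0.8)
area = pdf.sum() * (grid[1] - grid[0])     # Riemann-sum check, ~ 1.0
```

With s1 > s2 the left tail is heavier than the right, which is the kind of directional asymmetry the four- and five-parameter subfamilies generalize with additional shape parameters.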
By:  Ismael Mourifie; Yuanyuan Wan 
Abstract:  In this paper, we discuss the key conditions for the identification and estimation of the local average treatment effect (LATE, Imbens and Angrist, 1994): the valid instrument assumption (LI) and the monotonicity assumption (LM). We show that the joint assumptions of LI and LM have a testable implication that can be summarized by a sign restriction defined by a set of intersection bounds. We propose an easy-to-implement testing procedure that can be analyzed in the framework of Chernozhukov, Lee, and Rosen (2013) and implemented using the Stata package of Chernozhukov, Kim, Lee, and Rosen (2013). We apply the proposed tests to the “draft eligibility” instrument in Angrist (1991), the “college proximity” instrument in Card (1993) and the “same sex” instrument in Angrist and Evans (1998). 
Keywords:  LATE, Instrumental Variables, hypothesis testing, intersection bounds, conditionally more compliers 
JEL:  C12 C15 C21 
Date:  2014–07–07 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa514&r=ecm 
By:  Katarzyna Maciejowska; Jakub Nowotarski; Rafal Weron 
Abstract:  We examine possible accuracy gains from using factor models, quantile regression and forecast averaging for computing interval forecasts of electricity spot prices. We extend the Quantile Regression Averaging (QRA) approach of Nowotarski and Weron (2014) and use principal component analysis to automate the selection process from among a large set of individual forecasting models available for averaging. We show that the resulting Factor Quantile Regression Averaging (FQRA) approach performs very well for price (and load) data from the British power market. In terms of unconditional coverage, conditional coverage and the Winkler score, we find the FQRAimplied prediction intervals to be more accurate than those of the benchmark ARX model and the QRA approach. 
Keywords:  Probabilistic forecasting; Prediction interval; Quantile regression; Factor model; Forecast combination; Electricity spot price 
JEL:  C22 C32 C38 C53 Q47 
Date:  2014–06–30 
URL:  http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1409&r=ecm 
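The core of quantile regression averaging is to combine point forecasts with weights chosen to minimize the quantile (pinball) loss at the target probability level. The sketch below does this for two forecasts by crude grid search standing in for a full quantile regression; it is an illustration of the QRA idea, not the FQRA procedure, and all data and names are synthetic:

```python
import numpy as np

def pinball_loss(y, y_hat, q):
    """Average quantile (pinball) loss at probability level q."""
    u = y - y_hat
    return np.mean(np.maximum(q * u, (q - 1.0) * u))

def qra_two_forecasts(y, f1, f2, q, grid=None):
    """Choose weights (w1, w2) for the combination w1*f1 + w2*f2 that
    minimize in-sample pinball loss at level q."""
    grid = np.linspace(0.0, 1.5, 31) if grid is None else grid
    best_w, best_loss = (0.0, 0.0), np.inf
    for w1 in grid:
        for w2 in grid:
            loss = pinball_loss(y, w1 * f1 + w2 * f2, q)
            if loss < best_loss:
                best_w, best_loss = (w1, w2), loss
    return best_w, best_loss

rng = np.random.default_rng(0)
f1 = rng.uniform(20.0, 60.0, size=200)       # two competing point forecasts
f2 = f1 + rng.normal(0.0, 5.0, size=200)
y = f1 + rng.normal(0.0, 2.0, size=200)      # "observed" spot prices
(w1, w2), loss = qra_two_forecasts(y, f1, f2, q=0.9)
```

Fitting such weights separately at several levels q yields a full set of prediction intervals; the factor step in FQRA replaces the individual forecasts with principal components of a large forecast pool.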
By:  Aiste Ruseckaite (Erasmus University Rotterdam); Peter Goos (Universiteit Antwerpen, Belgium); Dennis Fok (Erasmus University Rotterdam) 
Abstract:  Consumer products and services can often be described as mixtures of ingredients. Examples are the mixture of ingredients in a cocktail and the mixture of different components of waiting time (e.g., in-vehicle and out-of-vehicle travel time) in a transportation setting. Choice experiments may help to determine how the respondents' choice of a product or service is affected by the combination of ingredients. In such studies, individuals are confronted with sets of hypothetical products or services and they are asked to choose the most preferred product or service from each set. However, there are no studies on the optimal design of choice experiments involving mixtures. We propose a method for generating an optimal design for such choice experiments. To this end, we first introduce mixture models in the choice context and next present an algorithm to construct optimal experimental designs, assuming the multinomial logit model is used to analyze the choice data. To overcome the problem that the optimal designs depend on the unknown parameter values, we adopt a Bayesian D-optimal design approach. We also consider locally D-optimal designs and compare the performance of the resulting designs to those produced by a utility-neutral (UN) approach in which designs are based on the assumption that individuals are indifferent between all choice alternatives. We demonstrate that our designs are quite different and in general perform better than the UN designs. 
Keywords:  Bayesian design, Choice experiments, D-optimality, Experimental design, Mixture coordinate-exchange algorithm, Mixture experiment, Multinomial logit model, Optimal design 
JEL:  C01 C10 C25 C61 C83 C90 C99 
Date:  2014–05–09 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20140057&r=ecm 