Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics, 2014-07-13, edited by Sune Karlsson

Consistent Estimation of Linear Regression Models Using Matched Data
http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/11431&r=ecm
Economists often use matched samples, especially when dealing with earnings data where a number of missing observations need to be imputed. In this paper, we demonstrate that the ordinary least squares estimator of the linear regression model using matched samples is inconsistent and has a non-standard convergence rate to its probability limit. If only a few variables are used to impute the missing data, then it is possible to correct for the bias. We propose two semi-parametric bias-corrected estimators and explore their asymptotic properties. The estimators have an indirect-inference interpretation and their convergence rates depend on the number of variables used in matching. We can attain the parametric convergence rate if that number is no greater than three. Monte Carlo simulations confirm that the bias correction works very well in such cases.
Keywords: bias correction; differencing; indirect inference; linear regression; matching estimation; measurement error bias. JEL Classification Codes: C13; C14; C31.
Hirukawa, Masayuki; Prokhorov, Artem. 2014-05.

New HEAVY Models for Fat-Tailed Returns and Realized Covariance Kernels
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140073&r=ecm
We develop a new model for multivariate covariance matrix dynamics based on daily return observations and daily realized covariance matrix kernels based on intraday data. Both types of data may be fat-tailed. We account for this by assuming a matrix-F distribution for the realized kernels, and a multivariate Student's t distribution for the returns. Using generalized autoregressive score dynamics for the unobserved true covariance matrix, our approach automatically corrects for the effect of outliers and incidentally large observations, both in returns and in covariances. Moreover, by an appropriate choice of scaling of the conditional score function we are able to retain a convenient matrix formulation for the dynamic updates of the covariance matrix. This makes the model highly computationally efficient. We show how the model performs in a controlled simulation setting as well as for empirical data. In our empirical application, we study daily returns and realized kernels from 15 equities over the period 2001-2012 and find that the new model statistically outperforms (recently developed) multivariate volatility models, both in-sample and out-of-sample. We also comment on the possibility of using composite likelihood methods for estimation if desired.
Pawel Janus; André Lucas; Anne Opschoor. 2014-06-19.
Keywords: realized covariance matrices; heavy tails; (degenerate) matrix-F distribution; generalized autoregressive score (GAS) dynamics

Block Bootstrap Theory for Multivariate Integrated and Cointegrated Processes
http://d.repec.org/n?u=RePEc:mnh:wpaper:36668&r=ecm
We develop asymptotic theory for applications of block bootstrap resampling schemes to multivariate integrated and cointegrated time series. It is proved that a multivariate, continuous-path block bootstrap scheme applied to a full-rank integrated process succeeds in consistently estimating the distribution of the least squares estimators in both the regression and the spurious regression case. Furthermore, it is shown that the same block resampling scheme does not succeed in estimating the distribution of the parameter estimators in the case of cointegrated time series. For this situation, a modified block resampling scheme, the so-called residual-based block bootstrap, is investigated and its validity for approximating the distribution of the regression parameters is established. The performance of the proposed block bootstrap procedures is illustrated in a short simulation study.
Jentsch, Carsten; Paparoditis, Efstathios; Politis, Dimitris N. 2014.
Keywords: block bootstrap; bootstrap consistency; spurious regression; functional limit theorem; continuous-path block bootstrap; model-based block bootstrap

On-line estimation of ARMA models using Fisher-scoring
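As background for the Jentsch, Paparoditis and Politis entry above, the basic moving-block resampling scheme on which such theory builds can be sketched in a few lines. This is a generic illustration only, not the continuous-path or residual-based variants the paper actually analyzes, and the function and parameter names are our own:

```python
import random

def moving_block_bootstrap(series, block_length, rng=None):
    """Resample a time series by concatenating randomly chosen
    overlapping blocks of consecutive observations, preserving
    short-range dependence within each block."""
    rng = rng or random.Random(0)
    n = len(series)
    num_starts = n - block_length + 1  # admissible block start points
    resample = []
    while len(resample) < n:
        start = rng.randrange(num_starts)
        resample.extend(series[start:start + block_length])
    return resample[:n]  # trim to the original length

# Toy data: an AR(1)-type sequence with persistence 0.7.
rng = random.Random(1)
x = [0.0]
for _ in range(99):
    x.append(0.7 * x[-1] + rng.gauss(0.0, 1.0))
xb = moving_block_bootstrap(x, block_length=10)
```

The block length is the key tuning parameter: it must grow with the sample size for the scheme to capture the dependence structure, which is exactly where the asymptotic theory comes in.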
http://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/13844&r=ecm
Recursive estimation methods for time series models usually make use of recurrences for the vector of parameters, the model error and its derivatives with respect to the parameters, plus a recurrence for the Hessian of the model error. An alternative method is proposed in the case of an autoregressive-moving average model, where the Hessian is not updated but is replaced, at each time, by the inverse of the Fisher information matrix evaluated at the current parameter. The asymptotic properties, consistency and asymptotic normality, of the new estimator are obtained. Monte Carlo experiments indicate that the estimates may converge faster to the true values of the parameters than when the Hessian is updated. The paper is illustrated by an example on forecasting the speed of wind.
Abdelhamid Ouakasse; Guy Melard. 2014.

Bootstrapping Sample Quantiles of Discrete Data
http://d.repec.org/n?u=RePEc:mnh:wpaper:36588&r=ecm
Sample quantiles are consistent estimators for the true quantile and satisfy central limit theorems (CLTs) if the underlying distribution is continuous. If the distribution is discrete, the situation is much more delicate. In this case, sample quantiles are known to be not even consistent in general for the population quantiles. In a motivating example, we show that Efron’s bootstrap does not consistently mimic the distribution of sample quantiles even in the discrete independent and identically distributed (i.i.d.) data case. To overcome this bootstrap inconsistency, we provide two different and complementing strategies. In the first part of this paper, we prove that m-out-of-n-type bootstraps do consistently mimic the distribution of sample quantiles in the discrete data case. As the corresponding bootstrap confidence intervals tend to be conservative due to the discreteness of the true distribution, we propose randomization techniques to construct bootstrap confidence sets of asymptotically correct size. In the second part, we consider a continuous modification of the cumulative distribution function and make use of mid-quantiles studied in Ma, Genton and Parzen (2011). Contrary to ordinary quantiles and due to continuity, mid-quantiles lose their discrete nature and can be estimated consistently. Moreover, Ma, Genton and Parzen (2011) proved (non-)central limit theorems for i.i.d. data, which we generalize to the time series case. However, as the mid-quantile function fails to be differentiable, classical i.i.d. or block bootstrap methods do not lead to completely satisfactory results and m-out-of-n variants are required here as well. 
The finite sample performances of both approaches are illustrated in a simulation study by comparing coverage rates of bootstrap confidence intervals.
Jentsch, Carsten; Leucht, Anne. 2014.
Keywords: bootstrap inconsistency; count processes; mid-distribution function; m-out-of-n bootstrap; integer-valued processes

Estimation of normal mixtures in a nested error model with an application to small area estimation of poverty and inequality
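To make the m-out-of-n idea from the Jentsch and Leucht entry above concrete, here is a minimal i.i.d. sketch: instead of resampling all n observations, each bootstrap replicate draws only m < n of them. The paper's randomization refinements and time series extensions are not shown, and the rate m = n^(2/3) is purely illustrative:

```python
import random

def empirical_quantile(data, q):
    """Order-statistic estimator: an element at the q-th position
    of the sorted sample."""
    s = sorted(data)
    k = min(len(s) - 1, int(q * len(s)))
    return s[k]

def m_out_of_n_bootstrap(data, q, m, reps=2000, rng=None):
    """Draw `reps` resamples of size m < n with replacement and
    return the bootstrap draws of the q-quantile."""
    rng = rng or random.Random(0)
    return [empirical_quantile([rng.choice(data) for _ in range(m)], q)
            for _ in range(reps)]

# Discrete (count) data, n = 100; choose m = n^(2/3) for illustration.
counts = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5] * 10
m = int(len(counts) ** (2 / 3))
draws = m_out_of_n_bootstrap(counts, q=0.5, m=m)
```

The spread of `draws` approximates the sampling distribution of the median; with discrete data, the draws concentrate on a few integer values, which is exactly the source of the conservativeness the paper addresses with randomization.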
http://d.repec.org/n?u=RePEc:wbk:wbrwps:6962&r=ecm
This paper proposes a method for estimating the distribution functions associated with the nested errors in linear mixed models. The estimator incorporates Empirical Bayes prediction while making minimal assumptions about the shape of the error distributions. The application presented in this paper is the small area estimation of poverty and inequality, although this is by no means the only application. Monte Carlo simulations show that estimates of poverty and inequality can be severely biased when the non-normality of the errors is ignored. The bias can be as high as 2 to 3 percent on a poverty rate of 20 to 30 percent. Most of this bias is resolved when using the proposed estimator. The approach is applicable to both survey-to-census and survey-to-survey prediction.
Elbers, Chris; van der Weide, Roy. 2014-07-01.
Topics: Statistical & Mathematical Sciences; Econometrics; Achieving Shared Growth; Inequality; Economic Theory & Research

Maximum Likelihood Estimation for Correctly Specified Generalized Autoregressive Score Models: Feedback Effects, Contraction Conditions and Asymptotic Properties
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140074&r=ecm
The strong consistency and asymptotic normality of the maximum likelihood estimator in observation-driven models usually requires the study of the model both as a filter for the time-varying parameter and as a data generating process (DGP) for observed data. The probabilistic properties of the filter can be substantially different from those of the DGP. This difference is particularly relevant for recently developed time-varying parameter models. We establish new conditions under which the dynamic properties of the true time-varying parameter, as well as of its filtered counterpart, are both well behaved, requiring the verification of only one rather than two sets of conditions. In particular, we formulate conditions under which the (local) invertibility of the model follows directly from the stable behavior of the true time-varying parameter. We use these results to prove the local strong consistency and asymptotic normality of the maximum likelihood estimator. To illustrate the results, we apply the theory to a number of empirically relevant models.
Francisco Blasques; Siem Jan Koopman; André Lucas. 2014-06-20.
Keywords: observation-driven models; stochastic recurrence equations; contraction conditions; invertibility; stationarity; ergodicity; generalized autoregressive score models

Dependent wild bootstrap for the empirical process
http://d.repec.org/n?u=RePEc:mnh:wpaper:35246&r=ecm
In this paper, we propose a model-free bootstrap method for the empirical process under absolute regularity. More precisely, consistency of an adapted version of the so-called dependent wild bootstrap, which was introduced by Shao (2010) and is very easy to implement, is proved under minimal conditions on the tuning parameter of the procedure. We apply our results to construct confidence intervals for unknown parameters and to approximate critical values for statistical tests. A simulation study shows that our method is competitive with standard block bootstrap methods in finite samples.
Doukhan, Paul; Lang, Gabriel; Leucht, Anne; Neumann, Michael H. 2014.
Keywords: absolute regularity; bootstrap; empirical process; time series; V-statistics; quantiles; Kolmogorov-Smirnov test

Improving GMM efficiency in dynamic models for panel data with mean stationarity
http://d.repec.org/n?u=RePEc:ver:wpaper:12/2014&r=ecm
Within the framework of dynamic panel data models with mean stationarity, one additional moment condition may remarkably increase the efficiency of the system GMM estimator. This additional condition is essentially a condition of "homoskedasticity" of the individual effects; it is "implicitly satisfied" in all the Monte Carlo simulations on dynamic panel data models available in the literature (including the experiments with heteroskedasticity, which is always confined to the idiosyncratic errors), but not "explicitly" exploited. Monte Carlo experiments show remarkable efficiency improvements when the distribution of the individual effects, and thus of y_i0, is skewed, thus covering cases that are very important in economic applications, involving variables like individual wages, firm sizes, numbers of employees, etc.
Giorgio Calzolari; Laura Magazzini. 2014-07.
Keywords: panel data; dynamic model; GMM estimation; mean stationarity; skewed individual effects

Empirical Bayes Methods for Dynamic Factor Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140061&r=ecm
We consider the dynamic factor model where the loading matrix, the dynamic factors and the disturbances are treated as latent stochastic processes. We present empirical Bayes methods that enable the efficient shrinkage-based estimation of the loadings and the factors. We show that our estimates have lower quadratic loss compared to the standard maximum likelihood estimates. We investigate the methods in a Monte Carlo study where we document the finite sample properties. Finally, we present and discuss the results of an empirical study concerning the forecasting of U.S. macroeconomic time series using our empirical Bayes methods.
Siem Jan Koopman; Geert Mesters. 2014-05-23.
Keywords: importance sampling; Kalman filtering; likelihood-based analysis; posterior modes; Rao-Blackwellization; shrinkage

Efficient Inference with Time-Varying Identification Strength
http://d.repec.org/n?u=RePEc:sfu:sfudps:dp14-03&r=ecm
In the last two decades, there has been a lot of empirical evidence suggesting that many macroeconometric and financial models (e.g. for inflation, interest rates, or exchange rates) are subject to both parameter instability and identification problems. In this paper, we address both issues in a unified framework, and provide a comprehensive treatment of the link between them. Changes in identification strength provide an additional source of information that is used to improve estimation. More generally, we show that detecting and locating changes in instrument strength is essential for efficient asymptotic inference, and we provide a step-by-step guide for practitioners. In our simulation studies, our global inference procedures show very good size and power properties.
Bertille Antoine; Otilia Boldea. 2014-06.
Keywords: GMM; identification; weak instruments; break point; change in identification strength

Exponential Smoothing, Long Memory and Volatility Prediction
http://d.repec.org/n?u=RePEc:pra:mprapa:57230&r=ecm
Extracting and forecasting the volatility of financial markets is an important empirical problem. Time series of realized volatility or other volatility proxies, such as squared returns, display long-range dependence. Exponential smoothing (ES) is a very popular and successful forecasting and signal extraction scheme, but it can be suboptimal for long memory time series. This paper discusses possible long memory extensions of ES and implements a generalization based on a fractional equal-root integrated moving average (FerIMA) model, proposed originally by Hosking in his seminal 1981 article on fractional differencing. We provide a decomposition of the process into the sum of fractional noise processes with decreasing orders of integration, encompassing simple and double exponential smoothing, and introduce a lowpass real-time filter arising in the long memory case. Signal extraction and prediction depend on two parameters: the memory (fractional integration) parameter and a mean reversion parameter. They can be estimated by pseudo maximum likelihood in the frequency domain. We then address the prediction of volatility by a FerIMA model and carry out a recursive forecasting experiment, which shows that the proposed generalized exponential smoothing predictor improves significantly upon commonly used methods for forecasting realized volatility.
Proietti, Tommaso. 2014-07-10.
Keywords: realized volatility; signal extraction; permanent-transitory decomposition; fractional equal-root IMA model

Information Theoretic Optimality of Observation Driven Time Series Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140046&r=ecm
We investigate the information theoretic optimality properties of the score function of the predictive likelihood as a device to update parameters in observation driven time-varying parameter models. The results provide a new theoretical justification for the class of generalized autoregressive score models, which covers the GARCH model as a special case. Our main contribution is to show that only parameter updates based on the score always reduce the local Kullback-Leibler divergence between the true conditional density and the model-implied conditional density. This result holds irrespective of the severity of model misspecification. We also show that the use of the score leads to a considerably smaller global Kullback-Leibler divergence in empirically relevant settings. We illustrate the theory with an application to time-varying volatility models. We show that the reduction in Kullback-Leibler divergence across a range of different settings can be substantial in comparison to updates based on, for example, squared lagged observations.
Francisco Blasques; Siem Jan Koopman; André Lucas. 2014-04-11.
Keywords: generalized autoregressive models; information theory; optimality; Kullback-Leibler distance; volatility models

A General Result on Observational Equivalence in a Class of Nonparametric Structural Equations Models
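A stylized example of the score-based updates discussed in the Blasques, Koopman and Lucas entry above: under a Student-t observation density, the score-driven variance recursion downweights outliers through a weight w_t, unlike a squared-observation (GARCH-type) update. This sketch absorbs the information-matrix scaling constants into the parameters, so the exact recursions in the papers differ; names and parameter values are ours:

```python
def score_driven_variance(y, omega, a, b, nu):
    """Variance recursion driven by a (rescaled) Student-t score:
        f_{t+1} = omega + a * (w_t * y_t^2 - f_t) + b * f_t,
    where w_t = (nu + 1) / (nu - 2 + y_t^2 / f_t) shrinks the impact
    of outlying observations; as nu -> infinity, w_t -> 1 and the
    update approaches a GARCH-type recursion."""
    f = [omega / (1 - b)]  # start at a rough unconditional level
    for t in range(len(y) - 1):
        w = (nu + 1) / (nu - 2 + y[t] ** 2 / f[t])
        f.append(omega + a * (w * y[t] ** 2 - f[t]) + b * f[t])
    return f

returns = [0.5, -1.0, 4.0, 0.2, -0.1]  # 4.0 plays the role of an outlier
f = score_driven_variance(returns, omega=0.1, a=0.05, b=0.9, nu=5.0)
```

The weight w_t is largest for small standardized observations and decays for large ones, which is the mechanism behind the robustness-to-outliers claims in these abstracts.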
http://d.repec.org/n?u=RePEc:sur:surrec:0114&r=ecm
This paper examines observational equivalence in a class of nonparametric structural equations models under weaker conditions than those currently available in the literature. It allows for several endogenous variables, does not impose differentiability or continuity of the equations which are part of the structure, and allows the unobserved errors to depend on the exogenous variables. The usefulness of the main result is illustrated by deriving observational equivalence conditions for several models, including nonparametric simultaneous equations models, additive errors models, and multivariate triangular models. Some of these yield well-known results as special cases.
Giovanni Forchini. 2014-06.

Discriminating between fractional integration and spurious long memory
http://d.repec.org/n?u=RePEc:aah:create:2014-19&r=ecm
Fractionally integrated processes have become a standard class of models to describe the long memory features of economic and financial time series data. However, it has been demonstrated in numerous studies that structural break processes and non-linear features can often be mistaken for long memory. The question naturally arises whether it is possible empirically to determine whether the source of long memory is genuine, in the form of a fractionally integrated process, or whether the long-range dependence is of a different nature. In this paper we suggest a testing procedure that helps discriminate between such processes. The idea is based on the feature that non-linear transformations of stationary fractionally integrated Gaussian processes decrease the order of memory in a specific way which is determined by the Hermite rank of the transformation. In principle, a non-linear transformation of the series can make the series short memory, I(0). We suggest using the Wald test of Shimotsu (2007) to test the null hypothesis that a vector time series of properly transformed variables is I(0). Our testing procedure is designed such that even non-stationary fractionally integrated processes are permitted under the null hypothesis. The test is shown to have good size and to be robust against certain types of deviations from Gaussianity. The test is also shown to be consistent against a broad class of processes that are non-fractional but still exhibit (spurious) long memory. In particular, the test is shown to have excellent power against a class of stationary and non-stationary random level shift models as well as Markov switching GARCH processes where the break and transition probabilities are allowed to be time varying.
Niels Haldrup; Robinson Kruse. 2014-06-26.
Keywords: long memory; fractional integration; non-linear models; structural breaks; random level shifts; Hermite polynomials; realized volatility; inflation

Tractable and Consistent Random Graph Models
http://d.repec.org/n?u=RePEc:nbr:nberwo:20276&r=ecm
We define a general class of network formation models, statistical exponential random graph models (SERGMs), that nest standard exponential random graph models (ERGMs) as a special case. We provide the first general results on when these models' parameters (including ERGMs'), estimated from the observation of a single network, are consistent (i.e., become accurate as the number of nodes grows). Next, addressing the problem that standard techniques for estimating ERGMs have been shown to have exponentially slow mixing times for many specifications, we show that by reformulating network formation as a distribution over the space of sufficient statistics instead of the space of networks, the size of the space of estimation can be greatly reduced, making estimation practical and easy. We also develop a related, but distinct, class of models that we call subgraph generation models (SUGMs) that are useful for modeling sparse networks and whose parameter estimates are also directly and easily estimable, consistent, and asymptotically normally distributed. Finally, we show how choice-based (strategic) network formation models can be written as SERGMs and SUGMs, and apply our models and techniques to network data from rural Indian villages.
Arun G. Chandrasekhar; Matthew O. Jackson. 2014-06.

Tightening Bounds in Triangular Systems
http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-515&r=ecm
This note discusses partial identification in a nonparametric triangular system with discrete endogenous regressors and nonseparable errors. Recently, Jun, Pinkse and Xu (2011, JPX; Tighter Bounds in Triangular Systems, Journal of Econometrics 161(2), 122-128) provided bounds on the structural function evaluated at particular values using exclusion, exogeneity and rank conditions. We propose a simple idea that often allows the JPX bounds to be improved without invoking a new set of assumptions. Moreover, we show how our idea can be used to tighten existing bounds on the structural function in more general triangular systems.
Desire Kedagni; Ismael Mourifie. 2014-07-07.
Keywords: nonparametric triangular systems; partial identification; instrumental variables; rank conditions

Dynamic modeling of commodity futures prices
http://d.repec.org/n?u=RePEc:pra:mprapa:56805&r=ecm
Theory suggests that physical commodity prices may exhibit nonlinear features such as bubbles and various types of asymmetries. This paper investigates these claims empirically by introducing a new time series model apt to capture such features. The data set is composed of 25 individual, continuous-contract commodity futures price series, representative of a number of industry sectors including softs, precious metals, energy, and livestock. It is shown that the linear causal ARMA model with Gaussian innovations is unable to adequately account for the features of the data. In the purely descriptive time series literature, a threshold autoregression (TAR) is often employed to model cycles or asymmetries. Rather than take this approach, we suggest a novel process which is able to accommodate both bubbles and asymmetries in a flexible way. This process is composed of both causal and noncausal components and is formalized as the mixed causal/noncausal autoregressive model of order (r, s). Estimating the mixed causal/noncausal model with leptokurtic errors, by an approximate maximum likelihood method, results in dramatically improved model fit according to the Akaike information criterion. Comparisons of the estimated unconditional distributions of both the purely causal and mixed models also suggest that the mixed causal/noncausal model is more representative of the data according to the Kullback-Leibler measure. Moreover, these estimation results demonstrate that allowing for such leptokurtic errors permits identification of various types of asymmetries. Finally, a strategy for computing the multiple-steps-ahead forecast of the conditional distribution is discussed.
Karapanagiotidis, Paul. 2014-06-22.
Keywords: commodity futures; mixed causal/noncausal model; nonlinear dynamic models; speculative bubble

Bayesian Survival Modelling of University Outcomes
http://d.repec.org/n?u=RePEc:pra:mprapa:57185&r=ecm
The aim of this paper is to model the length of registration at university and its associated academic outcome for undergraduate students at the Pontificia Universidad Católica de Chile. Survival time is defined as the time until the end of the enrollment period, which can relate to different reasons - graduation or two types of dropout - that are driven by different processes. Hence, a competing risks model is employed for the analysis. The issue of separation of the outcomes (which precludes maximum likelihood estimation) is handled through the use of Bayesian inference with an appropriately chosen prior. We are interested in identifying important determinants of university outcomes, and the associated model uncertainty is formally addressed through Bayesian model averaging. The methodology introduced for modelling university outcomes is applied to three selected degree programmes, which are particularly affected by dropout and late graduation.
Vallejos, Catalina; Steel, Mark F. J. 2014-05-26.
Keywords: Bayesian model averaging; competing risks; outcomes separation; proportional odds model; university dropout

Time Varying Transition Probabilities for Markov Regime Switching Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140072&r=ecm
We propose a new Markov switching model with time-varying probabilities for the transitions. The novelty of our model is that the transition probabilities evolve over time by means of an observation-driven model. The innovation of the time-varying probability is generated by the score of the predictive likelihood function. We show how the model dynamics can be readily interpreted. We investigate the performance of the model in a Monte Carlo study and show that the model is successful in estimating a range of different dynamic patterns for unobserved regime switching probabilities. We also illustrate the new methodology in an empirical setting by studying the dynamic mean and variance behavior of U.S. Industrial Production growth. We find empirical evidence of changes in the regime switching probabilities, with more persistence for high-volatility regimes in the earlier part of the sample, and more persistence for low-volatility regimes in the later part of the sample.
Marco Bazzi; Francisco Blasques; Siem Jan Koopman; André Lucas. 2014-06-17.
Keywords: hidden Markov models; observation-driven models; generalized autoregressive score dynamics

Bayesian modelling of skewness and kurtosis with two-piece scale and shape transformations
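To illustrate what time-varying transition probabilities mean in the Bazzi et al. entry above, here is a toy two-state simulation in which the probability of staying in the current regime is a logistic function of an exogenous signal. In the paper the driver is instead the score of the predictive likelihood; the link function, names, and constants here are our illustrative choices:

```python
import math
import random

def simulate_tv_markov(signals, c_stay=2.0, rng=None):
    """Simulate a two-state regime path where the staying probability
    at each step is logistic in a time-varying signal:
        p_stay_t = 1 / (1 + exp(-(c_stay + signal_t)))."""
    rng = rng or random.Random(0)
    state, path = 0, [0]
    for x in signals:
        p_stay = 1.0 / (1.0 + math.exp(-(c_stay + x)))
        if rng.random() > p_stay:
            state = 1 - state  # switch regime
        path.append(state)
    return path

# A slowly oscillating signal makes regimes alternately sticky and fragile.
path = simulate_tv_markov([math.sin(t / 3.0) for t in range(100)])
```

Raising the signal raises persistence, mirroring the paper's empirical finding that regime persistence itself changes over the sample.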
http://d.repec.org/n?u=RePEc:pra:mprapa:57102&r=ecm
We introduce the family of univariate double two-piece distributions, obtained by using a density-based transformation of unimodal symmetric continuous distributions with a shape parameter. The resulting distributions contain five interpretable parameters that control the mode, as well as the scale and shape in each direction. Four-parameter subfamilies of this class of distributions that capture different types of asymmetry are presented. We propose interpretable scale- and location-invariant benchmark priors and derive conditions for the existence of the corresponding posterior distribution. The prior structures used allow for meaningful comparisons through Bayes factors within flexible families of distributions. These distributions are applied to models in finance, internet traffic data, and medicine, and compared with appropriate competitors.
Rubio, Francisco Javier; Steel, Mark F. J. 2014-06-30.
Keywords: model comparison; posterior existence; prior elicitation; scale mixtures of normals; unimodal continuous distributions

Testing Local Average Treatment Effect Assumptions
http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-514&r=ecm
In this paper, we discuss the key conditions for the identification and estimation of the local average treatment effect (LATE; Imbens and Angrist, 1994): the valid instrument assumption (LI) and the monotonicity assumption (LM). We show that the joint assumptions of LI and LM have a testable implication that can be summarized by a sign restriction defined by a set of intersection bounds. We propose an easy-to-implement testing procedure that can be analyzed in the framework of Chernozhukov, Lee, and Rosen (2013) and implemented using the Stata package of Chernozhukov, Kim, Lee, and Rosen (2013). We apply the proposed tests to the "draft eligibility" instrument in Angrist (1991), the "college proximity" instrument in Card (1993) and the "same sex" instrument in Angrist and Evans (1998).
Ismael Mourifie; Yuanyuan Wan. 2014-07-07.
Keywords: LATE; instrumental variables; hypothesis testing; intersection bounds; conditionally more compliers

Probabilistic forecasting of electricity spot prices using Factor Quantile Regression Averaging
http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1409&r=ecm
We examine possible accuracy gains from using factor models, quantile regression and forecast averaging to compute interval forecasts of electricity spot prices. We extend the Quantile Regression Averaging (QRA) approach of Nowotarski and Weron (2014) and use principal component analysis to automate the selection process from among a large set of individual forecasting models available for averaging. We show that the resulting Factor Quantile Regression Averaging (FQRA) approach performs very well for price (and load) data from the British power market. In terms of unconditional coverage, conditional coverage and the Winkler score, we find the FQRA-implied prediction intervals to be more accurate than those of the benchmark ARX model and the QRA approach.
Katarzyna Maciejowska; Jakub Nowotarski; Rafal Weron. 2014-06-30.
Keywords: probabilistic forecasting; prediction interval; quantile regression; factor model; forecast combination; electricity spot price

Bayesian D-Optimal Choice Designs for Mixtures
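The Winkler score used for interval evaluation in the Maciejowska, Nowotarski and Weron entry above is easy to state: for a central (1 − alpha) prediction interval it equals the interval width, plus a penalty of 2/alpha times the distance by which the observation misses the interval. A minimal implementation (our own, for illustration):

```python
def winkler_score(lower, upper, x, alpha):
    """Winkler score for a central (1 - alpha) prediction interval
    [lower, upper] and realized value x: width plus, if x falls
    outside, a penalty scaled by 2 / alpha.  Lower is better."""
    score = upper - lower
    if x < lower:
        score += (2.0 / alpha) * (lower - x)
    elif x > upper:
        score += (2.0 / alpha) * (x - upper)
    return score

# A covered observation costs only the width; misses are penalized.
covered = winkler_score(0.0, 10.0, 5.0, alpha=0.5)
missed = winkler_score(0.0, 10.0, 12.0, alpha=0.5)
```

Averaging the score over a test set rewards intervals that are simultaneously narrow and well calibrated, which is why it complements plain coverage rates in the paper's comparison.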
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140057&r=ecm
Consumer products and services can often be described as mixtures of ingredients. Examples are the mixture of ingredients in a cocktail and the mixture of different components of waiting time (e.g., in-vehicle and out-of-vehicle travel time) in a transportation setting. Choice experiments may help to determine how the respondents' choice of a product or service is affected by the combination of ingredients. In such studies, individuals are confronted with sets of hypothetical products or services and they are asked to choose the most preferred product or service from each set. However, there are no studies on the optimal design of choice experiments involving mixtures. We propose a method for generating an optimal design for such choice experiments. To this end, we first introduce mixture models in the choice context and next present an algorithm to construct optimal experimental designs, assuming the multinomial logit model is used to analyze the choice data. To overcome the problem that the optimal designs depend on the unknown parameter values, we adopt a Bayesian D-optimal design approach. We also consider locally D-optimal designs and compare the performance of the resulting designs to those produced by a utility-neutral (UN) approach in which designs are based on the assumption that individuals are indifferent between all choice alternatives. We demonstrate that our designs are quite different and in general perform better than the UN designs.
Aiste Ruseckaite; Peter Goos; Dennis Fok. 2014-05-09.
Keywords: Bayesian design; choice experiments; D-optimality; experimental design; mixture coordinate-exchange algorithm; mixture experiment; multinomial logit model; optimal design