nep-dcm New Economics Papers
on Discrete Choice Models
Issue of 2006‒06‒03
fifteen papers chosen by
Philip Yu
Hong Kong University

  1. Testing for Hypothetical Bias in Contingent Valuation Using a Latent Choice Multinomial Logit Model By Steven B. Caudill; Peter A. Groothuis; John C. Whitehead
  2. Consumer Preferences for Mass Customization By Dellaert, B.G.C.; Stremersch, S.
  3. Assessing management options for weed control with demanders and non-demanders in a choice experiment By Carlsson, Fredrik; Kataria, Mitesh
  4. Random Covariance Heterogeneity in Discrete Choice Models By Stephane Hess; Denis Bolduc; John Polak
  5. An integrated Land use – Transportation model for Paris area By André De Palma; Kiarash Motamedi; Nathalie Picard; Dany Nguyen Luong
  6. The airport Network and Catchment area Competition Model - A comprehensive airport demand forecasting system using a partially observed database By Eric Kroes; Abigail Lierens; Marco Kouwenhoven
  7. Modelling departure time and mode choice By Andrew Daly; Stephane Hess; Geoff Hyman; John Polak; Charlene Rohr
  8. Valuation of uncertainty in travel time and arrival time - some findings from a choice experiment By Dirk Van Amelsfort; Michiel Bliemer
  9. A cost-benefit analysis of tunnel investment and tolling alternatives in Antwerp By André De Palma; Robin Lindsey; Stef Proost; Saskia Van der Loo
  10. FURTHER EXPOSITION OF THE VALUE OF RELIABILITY By Richard Batley
  11. Public Preferences for Land uses’ changes - valuing urban regeneration projects at the Venice Arsenale By Patrizia Riganti; Anna Alberini; Alberto Longo
  12. The Effect of Income on Positive and Negative Subjective Well-Being By Stefan Boes; Rainer Winkelmann
  13. Networking Off Madison Avenue By J. Vernon Henderson; Mohammad Arzaghi
  14. Quality Sorting and Networking: Evidence from the Advertising Agency Industry By Mohammad Arzaghi
  15. On Ehrhart Polynomials and Probability Calculations in Voting Theory By Dominique Lepelley (CERESUR – University of la Reunion); Ahmed Louichi (CREM – CNRS); Hatem Smaoui (CREM – CNRS)

  1. By: Steven B. Caudill; Peter A. Groothuis; John C. Whitehead
    Abstract: The most persistently troubling empirical result in the contingent valuation method literature is the tendency for hypothetical willingness to pay to overestimate real willingness to pay. We suggest a new approach to test and correct for hypothetical bias using a latent choice multinomial logit (LCMNL) model. To develop this model, we extend Dempster, Laird, and Rubin’s (1977) work on the EM algorithm to the estimation of a multinomial logit model with missing information on categorical membership. Using data on both the quality of water in the Catawba River in North Carolina and the preservation of Saginaw wetlands in Michigan, we find two types of “yes” responders in both data sets. We suggest that one group of “yes” responders consists of yea-sayers who suffer from hypothetical bias: they answer yes to the hypothetical question but would not pay the bid amount if it were real. The second group does not suffer from hypothetical bias and would pay the bid amount if it were real.
    Keywords: C25, P230, Q51
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:apl:wpaper:06-09&r=dcm
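    A schematic of the latent choice structure described in item 1 (a binary-response simplification with illustrative notation, not the authors' exact specification): the probability of a "yes" is a two-class mixture, and the EM algorithm alternates between posterior class weights and class-specific logit fits,
      P(\text{yes} \mid x_i) = \pi\,\Lambda(x_i'\beta_1) + (1-\pi)\,\Lambda(x_i'\beta_2), \qquad \Lambda(z) = e^{z}/(1+e^{z}),
      \hat{w}_{i1} = \frac{\pi\,P_1(y_i \mid x_i)}{\pi\,P_1(y_i \mid x_i) + (1-\pi)\,P_2(y_i \mid x_i)} \quad \text{(E-step)},
      \text{M-step: maximize } \sum_i \sum_k \hat{w}_{ik} \log P_k(y_i \mid x_i) \text{ over each } \beta_k, \text{ then set } \pi = N^{-1}\sum_i \hat{w}_{i1}.
    Class 1 plays the role of the yea-sayer segment and class 2 the segment that would actually pay; class membership is the missing categorical information that the EM algorithm handles.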
  2. By: Dellaert, B.G.C.; Stremersch, S. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University)
    Abstract: Increasingly, firms adopt mass customization, which allows consumers to customize products by self-selecting their most preferred composition of the product for a predefined set of modules. For example, PC vendors such as Dell allow customers to customize their PC by choosing the type of processor, memory size, monitor, etc. However, how such firms configure the mass customization process determines the utility a consumer may obtain or the complexity a consumer may face in the mass customization task. Mass customization configurations may differ in four important ways; we take the example of the personal computer industry. First, a firm may offer few or many product modules that can be mass customized (e.g., only allow consumers to customize memory and processor of a PC or allow consumers to customize any module of the PC) and few or many levels among which to choose per mass customizable module (e.g., for mass customization of the processor, only two or many more processing speeds are available). Second, a firm may offer the consumer a choice only between very similar module levels (e.g., a 17-inch or 18-inch screen) or between very different module levels (e.g., a 15-inch or 21-inch screen). Third, a firm may individually price the modules within a mass customization configuration (e.g., showing the price of the different processors the consumer may choose from) along with pricing the total product, or the firm may show only the total product price (e.g., the price of the different processors is not shown, but only the computer’s total price is shown). Fourth, the firm may show a default version (e.g., for the processor, the configuration contains a pre-selected processing speed, which may be a high-end or low-end processor), which consumers may then customize, or the firm may not show a default version and let consumers start from scratch in composing the product. The authors find that the choices that firms make in configuring the mass customization process affect the product utility consumers can achieve in mass customization. The reason is that the mass customization configuration affects how closely the consumer may approach his or her ideal product by mass customizing. Mass customization configurations also affect consumers’ perception of the complexity of mass customization, as they affect how many cognitive steps a consumer needs to make in the decision process. Both product utility and complexity in the end determine the utility consumers derive from using a certain mass customization configuration, which in turn will determine the main outcome variables for marketers, such as total product sales, satisfaction with the product and the firm, referral behavior and loyalty. The study offers good news for those who wish to provide many mass customization options to consumers, because we find that, within the rather large range of modules and module levels we manipulated in this study, consumers did not perceive significant increases in complexity, while they were indeed able to achieve higher product utility. Second, our results imply that firms, when increasing the number of module levels, should typically offer consumers more additional options in the most popular range of a module and fewer additional options at the extremes. Third, pricing should preferably be presented only at the total product level, rather than at the module and product level. We find that this approach reduces complexity and increases product utility. 
Fourth, firms should offer a default version that consumers can use as a starting point for mass customization, as doing so minimizes the complexity to consumers. The best default version to start out with is a base default version, because this type of default version allows the consumer to most closely approach his or her ideal product. The reason is that consumers, when presented with an advanced default, may buy a product that is more advanced than they actually need. We also found that expert consumers are ideal targets for mass customization offerings. Compared to novice consumers, expert consumers experience lower complexity in mass customization, and complexity has a less negative influence on the product utility they obtain in the mass customization process. In general, reducing complexity in the mass customization configuration is a promising strategy for firms, as it not only increases the utility of the entire process for consumers, but also allows them to compose products that more closely fit their ideal product.
    Keywords: mass customization;consumer choice;complexity;utility;PC buying;mass customized products;customization;
    Date: 2004–11–17
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:30001946&r=dcm
  3. By: Carlsson, Fredrik (Department of Economics, School of Business, Economics and Law, Göteborg University); Kataria, Mitesh (Department of Economics, Swedish University of Agricultural Sciences)
    Abstract: The yellow floating heart is a water weed causing nuisance problems in Swedish watercourses. An economic analysis is required in which various management options are considered. The benefits of a management program are to a large extent recreational. Using a choice experiment we estimate the benefits of a weed management program and perform a cost-benefit analysis of different management programs. In order to separate those who have a demand for a program from those who do not, we introduce a way to distinguish demanders from non-demanders in the choice experiments. The advantage of our suggested approach is that we can more clearly distinguish between conditional and unconditional willingness to pay. In the empirical study we find that a share of the respondents are non-demanders. The demanders’ willingness to pay still justifies cutting the weed in certain places in the lake, given that we use a simple cost-benefit rule.
    Keywords: Choice experiments; invasive species; non-demanders; bivariate probit
    JEL: Q25 Q26 Q51
    Date: 2006–05–30
    URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0208&r=dcm
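    One minimal way to express the distinction between conditional and unconditional willingness to pay emphasised in item 3 (a sketch with illustrative notation, not the paper's estimator):
      E[\mathrm{WTP}] = \Pr(\text{demander}) \cdot E[\mathrm{WTP} \mid \text{demander}] + \Pr(\text{non-demander}) \cdot 0,
    so the unconditional (whole-sample) mean willingness to pay is the demanders' conditional mean scaled by the estimated share of demanders; treating non-demanders as demanders with zero willingness to pay would conflate the two quantities.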
  4. By: Stephane Hess; Denis Bolduc; John Polak
    Abstract: The area of discrete choice modelling has developed rapidly in recent years. In particular, continuing refinements of the Generalised Extreme Value (GEV) model family have permitted the representation of increasingly complex patterns of substitution and parallel advances in estimation capability have led to the increased use of model forms requiring simulation in estimation and application. One model form especially, namely the Mixed Multinomial Logit (MMNL) model, is being used ever more widely. Aside from allowing for random variations in tastes across decision-makers in a Random Coefficients Logit (RCL) framework, this model additionally allows for the representation of inter-alternative correlation as well as heteroscedasticity in an Error Components Logit (ECL) framework, enabling the model to approximate any Random Utility model arbitrarily closely. While the various developments discussed above have led to gradual gains in modelling flexibility, little effort has gone into the development of model forms allowing for a representation of heterogeneity across respondents in the correlation structure in place between alternatives. Such correlation heterogeneity is however possibly a crucial factor in the variation of choice-making behaviour across decision-makers, given the potential presence of individual-specific terms in the unobserved part of utility of multiple alternatives. To the authors' knowledge, there has so far only been one application of a model allowing for such heterogeneity, by Bhat (1997). In this Covariance NL model, the logsum parameters themselves are a function of socio-demographic attributes of the decision-makers, such that the correlation heterogeneity is explained with the help of these attributes. While the results by Bhat show the presence of statistically significant levels of covariance heterogeneity, the improvements in terms of model performance are almost negligible. While it is possible to interpret this as a lack of covariance heterogeneity in the data, another explanation is possible. It is clearly imaginable that a major part of the covariance heterogeneity cannot be explained in a deterministic fashion, either due to data limitations, or because of the presence of actual random variation, in a situation analogous to the case of random taste heterogeneity that cannot be explained in a deterministic fashion. In this paper, we propose two different ways of modelling such random variations in the correlation structure across individuals. The first approach is based on the use of an underlying GEV structure, while the second approach consists of an extension of the ECL model. In the former approach, the choice probabilities are given by integration of underlying GEV choice probabilities, such as Nested Logit, over the assumed distribution of the structural parameters. In the most basic specification, the structural parameters are specified as simple random variables, where appropriate choices of statistical distributions and/or mathematical transforms guarantee that the resulting structural parameters fall into the permissible range of values. Several extensions are then discussed in the paper that allow for a mixture of random and deterministic variations in the correlation structure. 
In an ECL model, correlation across alternatives is introduced with the help of normally distributed error-terms with a mean of zero that are shared by alternatives that are closer substitutes for each other, with the extent of correlation being determined by the estimates of the standard deviations of the error-components. The extension of this model to a structure allowing for random covariance heterogeneity is again divided into two parts. In the first approach, correlation is assumed to vary purely randomly; this is obtained through simple integration over the distribution of the standard deviations of the error-terms, superseding the integration over the distribution of the error-components with a specific draw for the standard deviations. The second extension is similar to the one used in the GEV case, with the standard deviations being composed of a deterministic term and a random term, either as a pure deviation, or in the form of random coefficients in the parameterisation of the distribution of the standard deviations. We next show that our Covariance GEV (CGEV) model generalises all existing GEV model structures, while the Covariance ECL (CECL) model can theoretically approximate all RUM models arbitrarily closely. Although this also means that the CECL model can closely replicate the behaviour of the CGEV model, there are some differences between the two models, which can be related to the differences in the underlying error-structure of the base models (GEV vs ECL). The CECL model has the advantage of implicitly allowing for heteroscedasticity, although this is also possible with the CGEV model, by adding appropriate error-components, leading to an EC-CGEV model. In terms of estimation, the CECL model has a run-time advantage for basic nesting structures, when the number of error-components, and hence dimensions of integration, is low enough not to counteract the gains made by being based on a more straightforward integrand (MNL vs advanced GEV). However, in more complicated structures, this advantage disappears, in a situation that is analogous to the case of Mixed GEV models compared to ECL models. A final disadvantage of the CECL model structure comes in the form of an additional set of identification conditions. The paper presents applications of these model structures to both cross-sectional and panel datasets from the field of travel behaviour analysis. The applications illustrate the gains in model performance that can be obtained with our proposed structures when compared to models governed by a homogeneous covariance structure assumption. As expected, the gains in performance are more important in the case of data with repeated observations for the same individual, where the notion of individual-specific substitution patterns applies more directly. The applications also confirm the slight differences between the CGEV and CECL models discussed above. The paper concludes with a discussion of how the two structures can be extended to allow for random taste heterogeneity. The resulting models thus allow for random variations in choice behaviour both in the evaluation of measured attributes and in the correlation across alternatives in the unobserved utility terms. This further increases the flexibility of the two model structures, and their potential for analysing complex behaviour in transport and other areas of research.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p375&r=dcm
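    As a stylized rendering of the first (Covariance GEV) approach sketched in item 4, assuming a two-level nested logit base model and a logistic transform that keeps the structural parameter in its permissible range (both choices are illustrative, not necessarily the authors'):
      P_i = \int P_i^{\mathrm{NL}}(\lambda)\, f(\lambda)\, d\lambda, \qquad \lambda = \bigl(1 + \exp(-(\mu + \sigma\eta))\bigr)^{-1}, \quad \eta \sim N(0,1),
    where P_i^{NL}(\lambda) is the nested logit probability with logsum (structural) parameter \lambda, \mu may itself be a function of socio-demographic attributes (so that \sigma = 0 recovers the deterministic covariance heterogeneity of Bhat's Covariance NL model), and the integral is evaluated by simulation over draws of \eta.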
  5. By: André De Palma; Kiarash Motamedi; Nathalie Picard; Dany Nguyen Luong
    Abstract: There is a new and growing interest in the development and use of integrated land use and transport planning models in France. In this paper, we describe the steps of a current project which aims to integrate UrbanSim, a flexible land use model, and METROPOLIS, a dynamic traffic model, and to apply this integrated model to the Paris region. We briefly present the two models and the common architecture, then describe the tedious but crucial step of collecting input data and calibration data for the study area. The Paris region is one of the most important metropolises in the world: 12,000 km2, 11 million inhabitants and 5 million jobs. Most interactions between land use dynamics and transportation dynamics are taken into account in the short, medium and long term. All of this constitutes pioneering and innovative work for a region where urban planning and fiscal policies are very important. UrbanSim is a land use model developed at the University of Washington (USA). It is based mainly on three logit models (household and job location choice and development type choice models) and a hedonic regression model (land price model). The data structure is based on a large grid which partitions the whole Paris region into 50,000 square cells of 500 meters a side. This high level of spatial resolution is quite original in France but requires a huge amount of data and spatial analysis, which we have performed using GIS tools. METROPOLIS is a dynamic transportation model developed at the University of Cergy-Pontoise (France). It provides the user surplus as the measure of accessibility. This measure takes into account the time-dependent congestion of the transportation system. METROPOLIS can also differentiate users by their value of time, desired arrival time and other behavioral parameters. The road network contains more than 16,000 links; the transit network contains about 4,000 links. An architecture has been designed to integrate these two models within a coherent framework. A prototype interface has been developed which allows input and output data to be exchanged in an automatic feedback process. We use different sources to build the input database: the general census, a numerical land use database (covering 400,000 parcels classified into 83 different types), the regional travel survey, the notary database of real-estate transactions, local land use plans, commercial and office floor-area data, income tax files, … Since none of these sources is perfect, we had to develop innovative methods to perform data fusion and build mixed databases. For example, we locate the 11,000 households of the travel survey in the grid, or we associate the income attribute from the tax files with the household attributes in the general census. The second database concerns the calibration data. For each of the four models of UrbanSim, we have developed a significant sample of individual observations from four sources: the general census, the travel survey, the land use evolution database and the notary database of real-estate transactions. These files will be used to estimate the models with econometric software. We chose 1990–1999 as the calibration period. We plan to complete the project by the end of 2005.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p421&r=dcm
  6. By: Eric Kroes; Abigail Lierens; Marco Kouwenhoven
    Abstract: For airport capacity planning, long-term forecasts of aircraft movements are required. The classical approach to generate such forecasts has been the use of time series data together with econometric models to extrapolate observed patterns of growth into the future. More recently, the dramatically increased competition between airports, airlines and alliances on the one hand, and serious capacity problems on the other, have made this approach no longer adequate. Airport demand forecasts now need to focus heavily on the many competitive elements in addition to the growth element. In our paper we describe a comprehensive, pragmatic air demand model system that has been implemented for Amsterdam’s Schiphol Airport. This model, called the Airport Network and Catchment area Competition Model (ACCM), provides forecasts of future air passenger volumes and aircraft movements, explicitly taking into account the choices of air passengers among competing airports in Europe. The model uses a straightforward nested logit structure to represent choices of air passengers among alternative departure airports, transport modes to the airport, airlines/alliances/low cost carriers, types of flight (direct versus transfer), air routes, and main modes of transport (for those distances where car and high-speed train may be an alternative option). Target year passenger forecasts are obtained by taking observed base year passenger numbers and applying two factors to these: (1) first, a growth factor, to express the global impact of key drivers of passenger demand growth such as population size, income and trade volume; (2) second, a market share ratio factor, to express the increase (or decline) in attractiveness of the airport due to anticipated changes in its air network and landside accessibility, relative to other (competing) airports. The target year passenger forecasts are then converted into aircraft movements to assess whether or not the available runway capacity is adequate. Key inputs to the model are databases describing, for the base year and target year, the level of service (travel times, costs, service frequencies) of the land-side accessibility of all departure airports considered, and the air-side networks of all departure and hub airports considered. The air-side networks (supply) are derived from a detailed OAG-based flight simulation model developed elsewhere. A particular characteristic of the ACCM implementation for Schiphol Airport is that it had to be developed using only a partial data set describing existing demand: although detailed OD information was available for air passengers using Schiphol Airport in 2003, no such data was available for other airports or other transport modes. As a consequence a synthetic modelling approach was adopted, where the unobserved passenger segments for the base year were synthesised using market share ratios between unobserved and observed segment forecasts for the base year, together with the observed base year passenger volumes. This process is elegant and appealing in principle, but is not without a number of problems when applied in a real case. In the paper we will first set out the objectives of the ACCM as it was developed, and the operational and practical constraints that were imposed. Then we will describe how the ACCM fits with model developments in the literature, and sketch the overall structure that was adopted. 
The following sections will describe the modelled alternatives and the utility structures, and the level-of-service databases used for land-side and air-side networks for the base year and target year. Then we will describe in some detail how we dealt with the partial data issue: the procedure to generate non-observed base year data, the validation, the problems encountered, and the solutions chosen. Finally, we shall show a number of the results obtained (subject to permission by the Dutch Ministry of Transport), and provide some conclusions and recommendations for further application of the methodology.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p521&r=dcm
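    In compact form, the two-factor forecasting step described in item 6 can be written as (symbols chosen here for illustration):
      Q_a^{\mathrm{target}} = Q_a^{\mathrm{base}} \times g \times \frac{s_a^{\mathrm{target}}}{s_a^{\mathrm{base}}},
    where Q_a is the passenger volume for airport (or segment) a, g is the global growth factor reflecting population, income and trade, and s_a is the market share predicted by the nested logit model, so the ratio captures the change in the airport's relative attractiveness due to its air network and landside accessibility.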
  7. By: Andrew Daly; Stephane Hess; Geoff Hyman; John Polak; Charlene Rohr
    Abstract: As a result of increasing road congestion and road pricing, modelling the temporal response of travellers to transport policy interventions has rapidly emerged as a major issue in many practical transport planning studies. A substantial body of research is therefore being carried out to understand the complexities involved in modelling time of day choice. The resulting models are contributing substantially to our understanding of how travellers make time-of-day decisions (Hess et al, 2004; de Jong et al, 2003). These models, however, tend to be far too complex and far too data intensive to be of use in large-scale modelling forecasting systems, where socio-economic detail is limited and detailed scheduling information is rarely available. Moreover, model systems making use of some of the latest analytical structures, such as Mixed Logit, are generally inapplicable in practical planning, since they rely on computer-intensive simulation in application as well as in estimation. The aim of this paper, therefore, is to describe the development of time-period choice models which are suitable for application in large-scale modelling forecasting systems. Large-scale practical planning models often rely on systems of nested logit models, which can incorporate many of the most important interactions that are present in the complex models but which have low enough run-times to allow them to be used for practical planning. In these systems, temporal choice is represented as the choice among a finite set of discrete alternatives, represented by mutually exclusive time-periods that are obtained by aggregation of the actual observed continuous time values. The issues that face modellers are then: how should the time periods be defined, and in particular how long should they be? How should the choices of time periods be related to each other, e.g. is the elasticity for shorter shifts greater than for longer shifts? And how should time period choice be placed in the model system relative to other choices, such as that of the mode of travel? These questions cannot be answered on a purely theoretical basis but require the analysis of empirical data. However, there is not a great deal of data available on the relevant choices. The time period models described in the paper are developed from three related stated preference (SP) studies undertaken over the past decade in the United Kingdom and the Netherlands. Because of the complications involved with using advanced models in large-scale modelling forecasting systems, the model structures are limited to nested logit models. Two different tree structures are explored in the analysis, nesting mode above time period choice or time period choice above mode. The analysis examines how these structures differ by data set, purpose of travel and time period specification. Three time period specifications were tested, dividing the 24-hour day into: twenty-four 1-hour periods; five coarse time-periods; or sixteen 15-minute morning-peak periods together with two coarse pre-peak and post-peak periods. In each case, the time periods are used to define both the outbound and the return trip timings. The analysis shows that, with a few exceptions, the nested models outperform the basic Multinomial Logit structures, which operate under the assumption of equal substitution patterns across alternatives. 
With a single exception, the nested models in turn show higher substitution between alternative time periods than between alternative modes, showing that, for all the time period lengths studied, travellers are more sensitive to transport levels of service in their choice of departure time than in their choice of mode. The advantages of the nesting structures are especially pronounced in the 1-hour and 15-minute models, while, in the coarse time-period models, the MNL model often remains the preferred structure; this is a clear effect of the broader time-periods, and the consequently lower substitution between time-periods.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p688&r=dcm
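    A minimal sketch of the "mode above time period" nesting structure discussed in item 7; the alternatives, utilities and nesting parameter below are invented for illustration and are not estimates from the study.
      import math

      def nested_logit_probs(v, theta):
          """v maps mode -> {time period -> deterministic utility};
          theta is the nesting (logsum) parameter for periods within a mode, 0 < theta <= 1."""
          logsum = {}
          for mode, periods in v.items():
              # theta * log-sum of exp(V/theta): expected maximum utility of the nest
              logsum[mode] = theta * math.log(sum(math.exp(u / theta) for u in periods.values()))
          denom = sum(math.exp(ls) for ls in logsum.values())
          probs = {}
          for mode, periods in v.items():
              p_mode = math.exp(logsum[mode]) / denom          # marginal mode probability
              within = sum(math.exp(u / theta) for u in periods.values())
              for period, u in periods.items():
                  probs[(mode, period)] = p_mode * math.exp(u / theta) / within
          return probs

      # Hypothetical utilities: two modes, three coarse periods.
      utilities = {"car":     {"am_peak": -1.0, "inter_peak": -1.4, "pm_peak": -1.2},
                   "transit": {"am_peak": -1.3, "inter_peak": -1.5, "pm_peak": -1.6}}
      for alt, p in sorted(nested_logit_probs(utilities, theta=0.5).items()):
          print(alt, round(p, 3))
    A nesting parameter below one implies stronger substitution between time periods within a mode than between modes, which is the pattern the entry reports for the 1-hour and 15-minute specifications.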
  8. By: Dirk Van Amelsfort; Michiel Bliemer
    Abstract: We are developing a dynamic modeling framework in which we can evaluate the effects of different road pricing measures on individual choice behavior as well as at the network level. Important parts of this framework are the various choice models which forecast the route, departure time and mode choice behavior of travelers under road pricing in the Netherlands. In this paper we discuss the setup of the experiment in detail and present our findings on dealing with uncertainty, travel time and schedule delays in the utility functions. To develop the desired choice models a stated choice experiment was conducted. In this experiment respondents were presented with four alternatives, which can be described as follows: Alternative A: pay for preferred travel conditions. Alternative B: adjust arrival time and pay less. Alternative C: adjust route and pay less. Alternative D: adjust mode to avoid paying the charge. The four alternatives differ mainly in price, travel time, time of departure/arrival and mode, and are based on the respondents’ current morning commute characteristics. The travel time in the experiment is based on the free-flow travel time for the home-to-work trip reported by the respondent, and the reported trip length. We calculate the travel time by assuming that a certain part of the trip length is in free-flow conditions, yielding a free-flow and a congested part of travel time. Adding the free-flow travel time and the congested travel time gives the total minimum travel time for the trip. It is a minimum travel time because we add an uncertainty margin to it, creating the maximum travel time. The level of uncertainty we introduced between minimum and maximum travel time was based on the difference between the reported average and free-flow travel times. In simpler words than used here, we told respondents that the actual travel time for the trip was unknown, but that every travel time between the minimum and maximum had an equal chance of occurring. As a consequence of introducing uncertainty in travel time, the arrival time receives the same margin. Using the data from the experiment we estimated choice models following the schedule delay framework of Vickrey (1969) and Small (1987), assigning penalties to shifts from the preferred time of departure/arrival to earlier or later times. In the models we used the minimum travel time and the expected travel time (the average of minimum and maximum). Using the expected travel time already incorporates some of the uncertainty (half) in the travel time attribute, making a separate uncertainty attribute in the utility function insignificant. The parameter values and values of time based on the minimum or the expected travel time do not differ. Initially, we looked at schedule delays only from an arrival time perspective. Here we also distinguished between schedule delays based on the minimum arrival time and the expected arrival time (the average of minimum and maximum). Again, when using expected schedule delays the uncertainty is included in the schedule delays, and a separate uncertainty attribute in the utility function is not significant. There is another issue involved when looking at the preferred arrival time of the respondents; there are three cases to take into account: 1. If the minimum and maximum arrival times are both earlier than the preferred arrival time, we are certain about a schedule delay early situation (based on minimum or expected schedule delays). 
2. If the minimum and maximum arrival times are both later than the preferred arrival time, we are certain about a schedule delay late situation (based on minimum or expected schedule delays). 3. The scheduling situation is undetermined when the preferred arrival time is between the minimum and maximum arrival time. In this case we use an expected schedule delay, assuming a uniform distribution of arrival times between the minimum and maximum arrival time. Parameter values for both situations are very different, and results from the minimum arrival time approach are more in line with expectations. There is a choice to take uncertainty into account in the utility function through the expected travel time, through the expected schedule delays, or as a separate attribute. In the paper we discuss the effects of the different approaches. We extended our models to also include schedule delays based on the preferred departure time. In the departure time scheduling components uncertainty is not included. Results show that the departure schedule delay late term is significant and substantial, together with significant arrival schedule delay early and late terms. A further extension of the model takes into account the amount of flexibility in departure and arrival times for each respondent. The results will be included in this paper.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p721&r=dcm
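    A small sketch of the expected schedule delay calculation for the three cases listed in item 8, assuming (as in the entry) that the arrival time is uniformly distributed between the minimum and maximum arrival time; the function name and example values are invented for illustration.
      def expected_schedule_delays(t_min, t_max, pat):
          """Expected schedule delay early and late (in minutes), with the arrival
          time assumed uniform on [t_min, t_max] and pat the preferred arrival time."""
          if t_max <= pat:                       # case 1: arrival is certainly early
              return pat - 0.5 * (t_min + t_max), 0.0
          if t_min >= pat:                       # case 2: arrival is certainly late
              return 0.0, 0.5 * (t_min + t_max) - pat
          width = t_max - t_min                  # case 3: undetermined scheduling situation
          p_early = (pat - t_min) / width
          p_late = (t_max - pat) / width
          expected_early = p_early * 0.5 * (pat - t_min)   # P(early) x mean earliness given early
          expected_late = p_late * 0.5 * (t_max - pat)     # P(late)  x mean lateness given late
          return expected_early, expected_late

      # Hypothetical trip, times in minutes after midnight: arrival between
      # 8:40 and 9:10 with a preferred arrival time of 9:00.
      print(expected_schedule_delays(t_min=520, t_max=550, pat=540))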
  9. By: André De Palma; Robin Lindsey; Stef Proost; Saskia Van der Loo
    Abstract: This paper presents and illustrates a comprehensive and operational model for assessing transport pricing and investment policies and regulatory regimes. The approach encompasses intra-modal as well as inter-modal competition, and could be used either by private operators or by the legislator for the purpose of evaluating market conduct. The model combines elements of contract theory, public economics, political economy, transportation economics and game theory. It incorporates a CES-based discrete-choice framework in which user charges and infrastructure investments are endogenously determined for two competing alternatives (air, rail or two parallel roads) that may be used for transportation of passengers and/or freight. The model includes separate modules for demand, supply, equilibrium and the regulatory framework. The demand module for passenger transport features a CES decision tree with three levels: choice between transport and consumption of a composite commodity, choice between peak and off-peak periods, and choice between the two transport alternatives. Elasticities of substitution at each level are parametrically given. Passengers can be segmented into classes that differ with respect to their travel preferences, incomes and costs of travel time. The demand module for freight transport also features three levels. The first level encompasses choice between transport and other production inputs, and the second and third levels are the same as for passenger transport. Freight transport can be segmented into local and transit traffic. The supply module specifies for each transport alternative travel time as a function of traffic volume and a rule for infrastructure maintenance. Operating, maintenance and investment costs are allowed to depend on the contractual form. Given the demand and supply functions, the equilibrium module computes a fixed-point solution in terms of prices and levels of congestion. Finally, the exogenous regulatory framework stipulates for each alternative the objective functions of the operators and infrastructure managers (public or private objectives), the nature of competition, procurement policies, the cost of capital, and the source and use of transport tax revenues. Possible market structures include: no tolls (free access), exogenous tolls, marginal social cost pricing, private duopoly and mixed oligopoly. Public decisions can be made either by local or central governments that may attach different welfare-distributional weights to agents (e.g. low-income vs. high-income passengers, or local vs. transit freight traffic) as well as different weights to air pollution and other (non-congestion) external transport costs. Primary outputs from the model are equilibrium prices, transport volumes, travel times, cost efficiency of operations, toll revenues and financial balances, travellers’ surplus and social welfare. In the final section of the paper the methodology is illustrated with an example of competition in the market for long-distance passenger travel between high-speed rail and air. A simple procedure allows the calibration of the parameters when aggregate data are available. The model is used to evaluate policies (pricing, investment, taxes, inter alia).
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p732&r=dcm
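    As a reminder of the functional form implied by the CES decision tree described in item 9 (one level shown, with illustrative notation):
      X = \bigl[\alpha_1 x_1^{\rho} + \alpha_2 x_2^{\rho}\bigr]^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho},
    where x_1 and x_2 are the volumes on the two competing alternatives (e.g. rail and air), \alpha_1 and \alpha_2 are share parameters, and \sigma is the parametrically given elasticity of substitution at that level; analogous aggregators govern the peak/off-peak choice and the split between transport and the composite commodity (or other production inputs for freight).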
  10. By: Richard Batley
    Abstract: As the demands placed on transport systems have increased relative to extensions in supply, problems of network unreliability have become ever more prevalent. The response of some transport users has been to accommodate expectations of unreliability in their decision-making, particularly through their trip scheduling. In the analysis of trip scheduling, Small’s (1982) approach has received considerable support. Small extends the microeconomic theory of time allocation (e.g. Becker, 1965; De Serpa, 1971), accounting for scheduling constraints in the specification of both utility and its associated constraints. Small makes the theory operational by means of the random utility model (RUM). This involves a process of converting the continuous departure time variable into discrete departure time segments, specifying the utility of each departure time segment as a function of several components (specifically journey time, schedule delay and the penalty of late arrival), and adopting particular distributional assumptions concerning the random error terms of contiguous departure time segments (whilst his 1982 paper assumes IID, Small’s 1987 paper considers a more complex pattern of covariance). A fundamental limitation of Small’s approach is that individuals make choices under certainty, an assumption that is clearly unrealistic in the context of urban travel choice. The response of microeconomic theory to such a challenge is to reformulate the objective problem from the maximisation of utility to one of maximising expected utility, with particular reference to the works of von Neumann & Morgenstern (1947) and Savage (1954). Bates et al. (2001) apply this extension to departure time choice, but specify choice as being over continuous time; the latter carries the advantage of simplifying some of the calculations of optimal departure time. Moreover, Bates et al. offer an account of departure time choice under uncertainty, but retain a deterministic representation. Batley & Daly (2004) develop ideas further by reconciling the analyses of Small (1982) and Bates et al. Drawing on early contributions to the RUM literature by Marschak et al. (1963), Batley and Daly propose a probabilistic model of departure time choice under uncertainty, based on an objective function of random expected utility maximisation. Despite this progression in the generality and sophistication of methods, significant challenges to the normative validity of RUM and transport network models remain. Of increasing prominence in transport research is the conjecture that expected utility maximisation may represent an inappropriate objective of choice under uncertainty. Significant evidence for this conjecture exists, and a variety of alternative objectives have been proposed instead; Kahneman & Tversky (2000) offer a useful compendium of such papers. With regard to these alternatives, Kahneman & Tversky’s (1979) own Prospect Theory commands considerable support as a theoretical panacea for choice under uncertainty. This theory distinguishes between two phases in the choice process - editing and evaluation. Editing may involve several stages, so-called ‘coding’, ‘combination’, ‘cancellation’, ‘simplification’ and ‘rejection of dominated alternatives’. Evaluation involves a value function that is defined on deviations from some reference point, and is characterised by concavity for gains and convexity for losses, with the function being steeper for losses than for gains. 
The present paper begins by formalising the earlier ideas of Batley and Daly (2004); the paper thus presents a theoretical exposition of a random expected utility model of departure time choice. The workings of the model are then illustrated by means of a numerical example. The scope of the analysis is subsequently widened to consider the possibility of divergence from the objective of expected utility maximisation. An interesting feature of this discussion is the consideration of the relationship between Prospect Theory and a generalised representation of the random expected utility model. In considering this relationship, the paper draws on Batley & Daly’s (2003) investigation of the equivalence between RUM and elimination-by-aspects (Tversky, 1972); the latter representing one example of a possible ‘editing’ model within Prospect Theory. Again, the extended model is illustrated by example.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p750&r=dcm
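    For reference, the Kahneman and Tversky value function referred to in item 10 is commonly written as (the parameter restrictions shown are the conventional ones, not results from this paper):
      v(x) = \begin{cases} x^{\alpha} & \text{if } x \ge 0, \\ -\lambda(-x)^{\beta} & \text{if } x < 0, \end{cases} \qquad 0 < \alpha, \beta \le 1, \; \lambda > 1,
    with x measured as a deviation from the reference point; concavity for gains, convexity for losses and \lambda > 1 (loss aversion) give the shape described in the abstract.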
  11. By: Patrizia Riganti; Anna Alberini; Alberto Longo
    Abstract: This paper discusses the results of a conjoint analysis study developed to assess alternative land uses for an important part of the city of Venice: its Arsenale. The aim of the study is to illustrate the potential of stated preference techniques for placing a value on redevelopment and reuse alternatives for an underutilized site with high historical, cultural and architectural significance. Very few studies have used conjoint choice to assess public preferences for alternative land uses in an ex-ante framework, i.e. masterplans. For our study, we wanted to concentrate on a “city of art,” where the relationship between cultural heritage resources management and city development is more critical. Venice was an obvious choice, given the national and international relevance of its heritage. The Arsenale is one of the few places in Venice that has the potential for a real transformation of its uses, with important impacts on both residents and visitors. Moreover, the Arsenale plays a strong symbolic role: it was the place where the strength and power of the Serenissima was built. The City Council of Venice has recently resolved that the Arsenale is an inalienable heritage of the city of Venice. In recent years, the importance of the Arsenale has resulted in a heated debate on its possible new uses. Many architectural proposals have been submitted through international competitions. These proposals—whether submitted in the past or currently under consideration—have shown that there may be a conflict between different possible land uses and the transformation allowed by the existing architectural structures. We surveyed individuals in Venice, asking respondents to engage in conjoint choice tasks, and gathered 168 usable observations. Members of the general public were intercepted at the Multimedia Library at Palazzo Querini Stampalia/FEEM and asked to indicate which option they preferred among hypothetical—but realistic—redevelopment projects of the Arsenale historic site. Each project was described by a vector of attributes, such as land use, use of basins and waterways, architectural features, access, employment implied by the reuse, and cost. The responses to these choice tasks were used to infer the rate at which respondents trade off land uses, aesthetic features, and costs, and hence to derive the value of marginal changes in the attributes, and the value of a proposed policy package. The Venice Arsenale is owned by the Italian government and is currently used by the Italian Navy. The Arsenale site accounts for about 15 percent of the area of the city of Venice (about 45 hectares), and is located in the Castello district. Tradition has it that doge Ordelafo Falier founded the Arsenale—a shipbuilding yard—in 1104. In 1340 the “Darsena Nuova” was created, which marked the birth of the Arsenale Nuovo and of the Corderie building. Further expansion started in 1473, covering an area of 26 hectares. This phase lasted more than 100 years, resulting in the construction of the New Corderie building, among others, in 1591. In its heyday, the Arsenale employed roughly 20,000 workers in an assembly-line fashion and produced one ship a day. After the navy largely withdrew from the complex over 40 years ago, the Arsenale suffered from abandonment and under-use. The Arsenale is, therefore, one of the few places in Venice that has the potential for a real transformation of its uses. 
In this paper we investigate how the development of the Arsenale site, involving alternative land uses, may influence the welfare of the residents of the historical city center of Venice. Starting from the evidence of our survey in Venice, the paper broadens its scope to discuss ways of improving the management of cultural heritage cities, focusing on new forms of involvement and public participation based on the elicitation of public preferences. We debate the issues related to city governance and the need for an appropriate level of democratic participation. An integrated approach, capable of bridging the practice of economic valuation, urban design, conservation of the built environment, and decision-making support systems, is analysed here.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p756&r=dcm
  12. By: Stefan Boes (Socioeconomic Institute, University of Zurich); Rainer Winkelmann (Socioeconomic Institute, University of Zurich)
    Abstract: Increasing evidence from the empirical economic and psychological literature suggests that positive and negative well-being are more than opposite ends of the same phenomenon. Two separate measures of the dependent variable may be needed when analyzing the determinants of subjective well-being. We argue that this conclusion reflects in part the use of too restrictive econometric models. A flexible multiple-index ordered probit panel data model with varying thresholds can identify response asymmetries in single-item measures of subjective well-being. An application to data from the German Socio-Economic Panel for 1984-2004 shows that income has only a minor effect on positive subjective well-being but a large effect on negative well-being.
    Keywords: generalized ordered probit model, marginal probability effects, random and fixed effects, life-satisfaction
    JEL: I31 D12 C23
    Date: 2006–05
    URL: http://d.repec.org/n?u=RePEc:soz:wpaper:0605&r=dcm
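    A compact cross-sectional statement of the varying-thresholds idea behind the model in item 12 (panel random and fixed effects omitted; notation is illustrative):
      \Pr(y_i = j \mid x_i) = \Phi\bigl(\kappa_j - x_i'\beta_j\bigr) - \Phi\bigl(\kappa_{j-1} - x_i'\beta_{j-1}\bigr),
    where the coefficient vector \beta_j (and hence the effective threshold) is allowed to differ across response categories j; this is what lets a covariate such as income shift the probabilities of low ("negative") and high ("positive") well-being responses asymmetrically, and the standard ordered probit is recovered under the restriction \beta_j = \beta for all j.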
  13. By: J. Vernon Henderson; Mohammad Arzaghi
    Abstract: This paper examines the effect on productivity of having more nearby advertising agency neighbors and hence better opportunities for meetings and exchange within Manhattan. We will show that there is extremely rapid spatial decay in the benefits of having more nearby neighbors even in the close quarters of southern Manhattan, a finding that is new to the empirical literature and indicates that our understanding of scale externalities is still very limited. The finding indicates that having a high density of commercial establishments is important in enhancing local productivity, an issue in Lucas and Rossi-Hansberg (2002), where within-business-district spatial decay of spillovers plays a key role. We will argue also that in Manhattan advertising agencies trade off the higher rent costs of being in bigger clusters nearer “centers of action” against the lower rent costs of operating on the “fringes” away from high concentrations of other agencies. Introducing the idea of trade-offs immediately suggests that heterogeneity is involved. We will show that higher quality agencies are the ones willing to pay more rent to locate in larger clusters, specifically because they benefit more from networking. While all this is an exploration of neighborhood and networking externalities, the findings relate to the economic anatomy of large metro areas like New York: the nature of their buzz.
    Keywords: Advertising, Agglomeration, Business Services, Discrete Choice, Knowledge Spillovers, Learning, Location Decision, Poisson Regression, Nested Logit
    JEL: D82 D83 D85 L25 L84 M37 R12 R30
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:cen:wpaper:05-15&r=dcm
  14. By: Mohammad Arzaghi
    Abstract: This paper provides a model of knowledge sharing and networking among single unit advertising agencies and investigates the implications of this model in the presence of heterogeneity in agencies’ quality. In a stylized screening model, we show that, under a modest set of assumptions, the separation outcome is a Pareto-undominated Nash equilibrium. That is, high quality agencies locate themselves in a high wage and rent area to sift out low quality agencies and guarantee their network quality. We identify a necessary condition for the separating equilibrium to exist and to reject the pooling equilibrium even in the presence of agglomeration economies from networking. We derive the maximum profit of an agency and show the condition has a directly testable implication in the empirical specification of the agency’s profit function. We use a sample of movers—existing agencies that relocate among urban areas—in order to extract a predetermined measure of their quality prior to relocation. We estimate the parameters of the profit function, using the Census confidential establishment-level data, and show that the necessary condition for separation is met and that there is strong separation and sorting on quality among agencies in their location decisions.
    Keywords: Advertising, Agglomeration, Industrial Concentration, Business Services, Discrete Choice, Knowledge Spillovers, Learning, Location Decision, Poisson Regression, Nested Logit, Screening, Separating Equilibrium, Sorting
    JEL: D82 D83 D85 L25 L84 M37 R12 R30
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:cen:wpaper:05-16&r=dcm
  15. By: Dominique Lepelley (CERESUR – University of la Reunion); Ahmed Louichi (CREM – CNRS); Hatem Smaoui (CREM – CNRS)
    Abstract: In voting theory, analyzing how frequently an event (e.g. a voting paradox) occurs is, under some specific but widely used assumptions, equivalent to computing the exact number of integer solutions in a system of linear constraints. Recently, some algorithms for computing this number have been proposed in the social choice literature by Huang and Chua [17] and by Gehrlein ([12, 14]). The purpose of this paper is threefold. Firstly, we want to do justice to Eugène Ehrhart, who, more than forty years ago, discovered the theoretical foundations of the above-mentioned algorithms. Secondly, we present some efficient algorithms that have been recently developed by computer scientists, independently of voting theorists. Thirdly, we illustrate the use of these algorithms by providing some original results in voting theory.
    Keywords: voting rules, manipulability, polytopes, lattice points, algorithms.
    JEL: D70 D71
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:tut:cremwp:200610&r=dcm
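    To fix ideas about the object named in the title of item 15 (a textbook example, not a result of the paper): for an integral convex polytope P in dimension d, the number of lattice points in the dilation tP is a polynomial in t of degree d (a quasi-polynomial when P has rational, non-integral vertices),
      L_P(t) = \#\bigl(tP \cap \mathbb{Z}^d\bigr); \qquad \text{for the unit square } P = [0,1]^2, \; L_P(t) = (t+1)^2 = t^2 + 2t + 1.
    In voting applications the dilation parameter typically scales with the number of voters, and, roughly speaking, probabilities of events such as voting paradoxes are obtained as ratios of such counting (quasi-)polynomials.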

This nep-dcm issue is ©2006 by Philip Yu. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.