on Discrete Choice Models |
By: | Justyna Tanas |
Abstract: | This article aims to determine buyers' revealed preferences in the secondary housing market in Warsaw. The study was conducted on data on transactions of residential premises concluded on the secondary market in Warsaw in 2016-2020. These data were supplemented with information contained in land and mortgage registers (section II - ownership) and in the real estate cadastre, and with Google Street View. After a tidying-up exercise, a database of over 35,000 residential property transactions was created. The vast majority of studies of buyers' preferences in the residential market in Poland conducted in recent years have covered local markets, usually in the largest cities. Most of the time, these were surveys of stated preferences conducted with a questionnaire, usually targeted surveys that did not ensure the representativeness of the sample. The unique database built this way allows the preferences of different buyers (e.g., young people, seniors, singles, married couples, etc.) to be identified in a way that was previously impossible due to the lack of such databases. |
Keywords: | buyers' preferences; housing market; revealed vs. stated preferences |
JEL: | R3 |
Date: | 2024–01–01 |
URL: | https://d.repec.org/n?u=RePEc:arz:wpaper:eres2024-216 |
By: | Duffy, Sean; Smith, John |
Abstract: | Standard random utility models can account for stochastic choice. However, a common implication is that the realized utilities are equal with probability zero. This knife-edge aspect implies that indifference is thin, because arbitrarily small changes in utility will break indifference. Semiorders can represent preferences where indifference is thick, but choice is then not random. We design an incentivized binary line-length judgment experiment to better understand how indifference can be both thick and random. In the 2-choice treatment, subjects select one of the lines. In the 3-choice treatment, subjects select one of the lines or can express indifference, which directs the computer to "flip a coin" to decide. In every trial, there is a longer line, and subjects were told this fact. For each of our line pairs, subjects make 5 decisions in the 2-choice treatment and 5 decisions in the 3-choice treatment. In the line pair with the smallest length difference, 49.7% of 2-choice treatment trials are optimal. For this line pair in the 3-choice treatment, only 1 out of 113 subjects selected indifference on all 5 available trials. There are well-known predictions that optimal choices will have shorter response times than suboptimal choices (Fudenberg, Strack, and Strzalecki, 2018), and we find evidence of this in our dataset. However, little seems to be known about response times and indifference. In the 3-choice treatment, we find that indifference choices have longer response times than suboptimal choices. We find that indifference choices are associated with risk aversion and with a measure of beliefs about the favorability of the coin flip. We do not find that indifference choices become more likely across trials; however, we find that the likelihood of selecting the longer line--in both the 2-choice and 3-choice treatments--is decreasing across trials. We hope that the results of our experiment can help inform models of choice where indifference is both thick and random. (A stylized simulation of the response-time prediction follows this entry.) |
Keywords: | choice theory, judgment, indifference, memory, search |
JEL: | C91 D03 |
Date: | 2024–09–20 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:122165 |
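The response-time prediction cited above can be illustrated with a generic sequential-sampling (drift-diffusion) simulation. This is a minimal sketch, not the Fudenberg, Strack, and Strzalecki (2018) model itself; the drift mixture, threshold, and noise values are hypothetical, chosen only to show why optimal choices tend to be faster when difficulty varies across trials.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift, threshold=1.0, dt=0.01, sigma=1.0):
    """Random walk to +/- threshold; returns (optimal_choice, response_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t  # upper boundary = longer line = optimal choice

# Difficulty varies across trials (a mixture of drift rates). That is what
# makes suboptimal choices slower on average: errors come mostly from
# low-drift (hard) trials, and hard trials take longer to resolve.
results = [simulate_trial(rng.choice([0.2, 1.0, 2.0])) for _ in range(5000)]
optimal_rt = [rt for ok, rt in results if ok]
suboptimal_rt = [rt for ok, rt in results if not ok]
print(f"mean RT, optimal choices:    {np.mean(optimal_rt):.2f}")
print(f"mean RT, suboptimal choices: {np.mean(suboptimal_rt):.2f}")
```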
By: | Xing, Yan; Pike, Susan; Waechter, Maxwell; DeLeon, Graham; Lipatova, Liubov; Handy, Susan; Wang, Yunshi |
Abstract: | Transportation-disadvantaged populations often face significant challenges in meeting their basic travel needs. Microtransit, a technology-enabled transit mobility solution, has the potential to address these issues by providing on-demand, affordable, and flexible services with multi-passenger vehicles. The ways in which microtransit supports underserved populations and the factors influencing its adoption are not well studied, however. This research examines SmaRT Ride, a microtransit pilot program in the Sacramento, California, area operated by Sacramento Regional Transit. The project evaluates a broad range of factors influencing microtransit adoption and travel behavior among underserved populations using original revealed choice survey data collected from February to May 2024 with online and intercept surveys. A descriptive analysis revealed that SmaRT Ride has improved transportation access for these communities, complements the transit system by connecting to fixed-route transit, and offers a cost-effective alternative to other transportation modes. A binary logistic regression was employed to explore differences between microtransit users and microtransit-aware non-users. The results indicate that homeownership, employment status, frequency of public transit use, and attitude toward transit significantly affect microtransit use. Homeowners are more likely to use microtransit, while households without employed members are less likely. In contrast, part-time employees show a higher inclination to use microtransit. Regular public transit users are also more likely to incorporate microtransit into their routines, with a positive attitude toward public transit further increasing the likelihood of its use. The nuanced understanding of microtransit adoption presented here can inform targeted strategies to promote its use among transportation-disadvantaged groups. The results suggest that integrating microtransit with existing transit, outreach programs, discounted or free access, extended service hours, and support for homeownership and affordable housing in transit-rich areas can encourage microtransit adoption by low-income and/or underserved individuals. (A stylized logistic-regression sketch follows this entry.) |
Keywords: | Social and Behavioral Sciences, underserved populations, SmaRT Ride, microtransit adoption, transportation access |
Date: | 2024–10–01 |
URL: | https://d.repec.org/n?u=RePEc:cdl:itsdav:qt9863j1fz |
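A minimal sketch of the kind of binary logistic regression the abstract describes, using statsmodels on synthetic data. The variable names, coefficients, and data are hypothetical stand-ins for the SmaRT Ride survey measures, with signs chosen only to mirror the reported findings.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Hypothetical covariates mirroring those named in the abstract.
df = pd.DataFrame({
    "homeowner": rng.integers(0, 2, n),
    "no_employed_member": rng.integers(0, 2, n),
    "part_time": rng.integers(0, 2, n),
    "transit_use_freq": rng.integers(0, 5, n),   # transit trips per week
    "transit_attitude": rng.normal(0, 1, n),     # attitude scale score
})
# Synthetic outcome generated to be consistent with the reported signs.
logit = (0.6 * df.homeowner - 0.7 * df.no_employed_member
         + 0.5 * df.part_time + 0.3 * df.transit_use_freq
         + 0.4 * df.transit_attitude - 1.0)
df["uses_microtransit"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df.drop(columns="uses_microtransit"))
model = sm.Logit(df["uses_microtransit"], X).fit(disp=0)
print(model.summary())
```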
By: | Jonathan Chapman; Erik Snowberg; Stephanie W. Wang; Colin Camerer |
Abstract: | We introduce DOSE (Dynamically Optimized Sequential Experimentation) to elicit preference parameters. DOSE starts with a model of preferences and a prior over the parameters of that model, then dynamically chooses a customized question sequence for each participant according to an experimenter-selected information criterion. After each question, the prior is updated, and the posterior is used to select the next, informationally optimal, question. Simulations show that DOSE produces parameter estimates that are approximately twice as accurate as those from established elicitation methods. DOSE estimates of individual-level risk and time preferences are also more accurate, more stable over time, and faster to administer in a large representative, incentivized survey of the U.S. population (N = 2,000). By reducing measurement error, DOSE identifies a stronger relationship between risk aversion and cognitive ability than other elicitation techniques. DOSE thus provides a flexible procedure that facilitates the collection of incentivized preference measures in the field. (A minimal sketch of the adaptive question-selection loop follows this entry.) |
Keywords: | preference elicitation, risk preferences, time preferences, dynamic experiments, cognitive ability, preference stability |
JEL: | C81 C90 D03 D81 D90 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_11361 |
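A minimal sketch of the DOSE idea: maintain a posterior over a preference parameter and repeatedly ask the question with the largest expected information gain. The CRRA utility, the logistic response model, and the question set below are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid posterior over a CRRA risk-aversion parameter rho in (0, 1).
grid = np.linspace(0.05, 0.95, 100)
posterior = np.full(grid.size, 1 / grid.size)   # uniform prior

def u(x, rho):
    return x ** (1 - rho) / (1 - rho)

def p_safe(c, rho, noise=1.0):
    """Logistic probability of choosing sure amount c over a 50/50
    lottery paying 10 or 2 (all values hypothetical)."""
    eu_lottery = 0.5 * (u(10.0, rho) + u(2.0, rho))
    return 1 / (1 + np.exp(-(u(c, rho) - eu_lottery) / noise))

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def expected_info_gain(c):
    ps = p_safe(c, grid)                  # P(safe | rho) on the grid
    p_resp = posterior @ ps               # predictive P(safe)
    post_safe = posterior * ps / p_resp
    post_risky = posterior * (1 - ps) / (1 - p_resp)
    return entropy(posterior) - (p_resp * entropy(post_safe)
                                 + (1 - p_resp) * entropy(post_risky))

questions = np.linspace(2.5, 9.5, 29)     # candidate sure amounts
for _ in range(10):
    c = max(questions, key=expected_info_gain)   # most informative question
    safe = rng.random() < p_safe(c, 0.7)         # simulated subject, rho = 0.7
    likelihood = p_safe(c, grid) if safe else 1 - p_safe(c, grid)
    posterior = posterior * likelihood
    posterior /= posterior.sum()                 # Bayesian update

print("posterior mean rho:", (grid * posterior).sum())
```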
By: | J. Aislinn Bohren (University of Pennsylvania); Josh Hascher (University of Chicago); Alex Imas (University of Chicago); Michael Ungeheuer (Aalto University); Martin Weber (University of Mannheim) |
Abstract: | We propose a framework where perceptions of uncertainty are driven by the interaction between cognitive constraints and the way that people learn about uncertainty—whether information is presented sequentially or simultaneously. People can learn about uncertainty by observing the distribution of outcomes all at once (e.g., seeing a stock return distribution) or by sampling outcomes from the relevant distribution sequentially (e.g., experiencing a series of stock returns). Limited attention leads to the overweighting of unlikely but salient events—the dominant force when learning from simultaneous information—whereas imperfect recall leads to the underweighting of such events—the dominant force when learning sequentially. A series of studies show that, when learning from simultaneous information, people are overoptimistic about and are attracted to assets that mostly underperform but sporadically exhibit large outperformance. However, they overwhelmingly select more consistently outperforming assets when learning the same information sequentially, and this is reflected in beliefs. The entire 40-percentage-point preference reversal appears to be driven by limited attention and memory; manipulating these factors completely eliminates the effect of the learning environment on choices and beliefs, and can even reverse it. Our results have implications for the design of policy and the recovery of preferences from choice data. |
Keywords: | Choice Under Risk, Bounded Rationality, Perceptions of Uncertainty, Information, Beliefs, Attention, Memory, Description-Experience Gap |
Date: | 2024–05–01 |
URL: | https://d.repec.org/n?u=RePEc:pen:papers:24-031 |
By: | Robert F. Phillips; Benjamin D. Williams |
Abstract: | We study the interactive effects (IE) model as an extension of the conventional additive effects (AE) model. For the AE model, the fixed effects estimator can be obtained by applying least squares to a regression that adds a linear projection of the fixed effect on the explanatory variables (Mundlak, 1978; Chamberlain, 1984). In this paper, we develop a novel estimator -- the projection-based IE (PIE) estimator -- for the IE model that is based on a similar approach. We show that, for the IE model, fixed effects estimators that have appeared in the literature are not equivalent to our PIE estimator, though both can be expressed as a generalized within estimator. Unlike the fixed effects estimators for the IE model, the PIE estimator is consistent for a fixed number of time periods with no restrictions on serial correlation or conditional heteroskedasticity in the errors. We also derive a statistic for testing the consistency of the two-way fixed effects estimator in the possible presence of interactive effects. Moreover, although the PIE estimator is the solution to a high-dimensional nonlinear least squares problem, we show that it can be computed by iterating between two steps, both of which have simple analytical solutions. The computational simplicity is an important advantage relative to other strategies that have been proposed for estimating the IE model for short panels. Finally, we compare the finite sample performance of IE estimators through simulations. (A generic two-step iteration for the IE model is sketched after this entry.) |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.12709 |
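The abstract does not spell out the two analytically solvable steps of the PIE estimator, so the sketch below shows the generic alternating scheme for the interactive effects model y_it = x_it'beta + lambda_i'f_t + e_it (in the spirit of Bai, 2009), simply to fix ideas about iterating between a factor step and a regression step. It is not the authors' PIE estimator.

```python
import numpy as np

def ie_als(Y, X, r, n_iter=100):
    """Alternating estimation of the IE model. Y: (N, T); X: (N, T, K);
    r: number of factors. Returns the slope estimate beta."""
    N, T = Y.shape
    K = X.shape[2]
    beta = np.zeros(K)
    Xmat = X.reshape(N * T, K)
    for _ in range(n_iter):
        # Step 1: given beta, extract factors from residuals via PCA/SVD.
        R = Y - (Xmat @ beta).reshape(N, T)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        F = Vt[:r].T * np.sqrt(T)          # (T, r) factors, F'F/T = I
        Lam = R @ F / T                    # (N, r) least-squares loadings
        # Step 2: given factors and loadings, update beta by OLS.
        Z = (Y - Lam @ F.T).reshape(N * T)
        beta = np.linalg.lstsq(Xmat, Z, rcond=None)[0]
    return beta

# Tiny synthetic check with one regressor and one factor.
rng = np.random.default_rng(2)
N, T, K, r = 100, 10, 1, 1
X = rng.normal(size=(N, T, K))
lam, f = rng.normal(size=(N, r)), rng.normal(size=(T, r))
Y = 1.5 * X[:, :, 0] + lam @ f.T + 0.5 * rng.normal(size=(N, T))
print("beta_hat:", ie_als(Y, X, r))  # should be near the true value 1.5
```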
By: | Robert Lasser; Fabian Hollinetz |
Abstract: | In the realm of Automated Valuation Models (AVM) for real estate, incorporating nuanced features can significantly enhance the accuracy of property valuation. We introduce a novel feature in our AVM framework aimed at capturing the impact of heating energy demand on the market value of real estate properties. Leveraging a combination of machine learning techniques and statistical modeling, our approach involves two key steps. First, utilizing a robust dataset of real estate transactions, we employ XGBoost models to predict heating energy demand for properties lacking such information. This imputation process enables us to generate comprehensive estimates of heating energy demand across a diverse range of properties. Second, we integrate tensor interaction effects within Generalized Additive Models (GAM) to analyze the relationship between heating energy demand and property value, considering crucial factors such as the construction year of the real estate objects. By incorporating tensor interaction effects, we are able to capture complex nonlinear relationships and interactions, allowing for a more nuanced understanding of how heating energy demand influences property valuation over time. Through the implementation of this advanced feature, our AVM framework offers real estate practitioners and stakeholders a more comprehensive tool for accurately assessing property values. This research contributes to the evolving landscape of real estate valuation methodologies, demonstrating the efficacy of combining machine learning with statistical modeling techniques to capture multifaceted influences on property value. (A two-step sketch of this pipeline follows this entry.) |
Keywords: | Automated Valuation Models (AVM); Heating Energy Demand; Machine Learning; Real Estate Valuation |
JEL: | R3 |
Date: | 2024–01–01 |
URL: | https://d.repec.org/n?u=RePEc:arz:wpaper:eres2024-196 |
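A compressed sketch of the two-step pipeline on synthetic data: XGBoost imputes missing heating energy demand, then a GAM with a tensor interaction (here via pyGAM's te()) relates demand and construction year to price. All variables and data-generating values are hypothetical illustrations of the described approach, not the authors' implementation.

```python
import numpy as np
from xgboost import XGBRegressor
from pygam import LinearGAM, s, te

rng = np.random.default_rng(3)
n = 2000

# Hypothetical property data: construction year, floor area, and heating
# energy demand (kWh/m2a), the latter missing for some objects.
year = rng.integers(1900, 2023, n).astype(float)
area = rng.uniform(40, 200, n)
hed = 250 - 0.09 * (year - 1900) + rng.normal(0, 15, n)
missing = rng.random(n) < 0.4
price = (3000 + 8 * area - 2.5 * hed
         + 0.002 * hed * (year - 1960) + rng.normal(0, 100, n))

# Step 1: impute missing heating energy demand with XGBoost.
features = np.column_stack([year, area])
imputer = XGBRegressor(n_estimators=200, max_depth=4)
imputer.fit(features[~missing], hed[~missing])
hed_full = hed.copy()
hed_full[missing] = imputer.predict(features[missing])

# Step 2: GAM with a tensor interaction between heating energy demand
# and construction year, plus a univariate smooth in floor area.
X = np.column_stack([hed_full, year, area])
gam = LinearGAM(te(0, 1) + s(2)).fit(X, price)
gam.summary()
```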
By: | Hammond, Peter J (University of Warwick) |
Abstract: | A decision-making agent is usually assumed to be Bayesian rational, or to maximize subjective expected utility, in the context of a completely and correctly specified decision model. Following the discussion in Hammond (2007) of Schumpeter's (1911, 1934) concept of entrepreneurship, and of Shackle's (1953) concept of potential surprise, this paper considers enlivened decision trees whose growth over time cannot be accurately modelled in full detail. An enlivened decision tree involves more severe limitations than model mis-specification, unforeseen contingencies, or unawareness, all of which are typically modelled with reference to a universal state space large enough to encompass any decision model that an agent may consider. We consider three motivating examples based on: (i) Homer's classic tale of Odysseus and the Sirens; (ii) a two-period linear-quadratic model of portfolio choice; (iii) the game of Chess. Though our novel framework transcends standard notions of risk or uncertainty, a form of Bayesian rationality is still possible. Instead of subjective probabilities of different models of a classical finite decision tree, we show that Bayesian rationality and continuity imply subjective expected utility maximization when some terminal nodes have attached real-valued subjective evaluations instead of consequences. Moreover, subjective evaluations lie behind, for example, the kind of Monte Carlo tree search algorithm that has been used by some powerful chess-playing software packages. (A minimal Monte Carlo tree search sketch follows this entry.) |
Keywords: | Prerationality; consequentialist decision theory; entrepreneurship; potential surprise; enlivened decision trees; subjective evaluation of continuation subtrees; Monte Carlo tree search |
JEL: | D81 D91 D11 D63
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:wrk:wcreta:89 |
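To make the closing remark concrete, here is a minimal UCT-style Monte Carlo tree search on a toy take-away game. The node value estimates play the role of subjective evaluations of continuation subtrees: refined by simulation rather than derived from fully specified consequences. The game and parameters are illustrative.

```python
import math, random

# Toy take-away game: players alternately remove 1 or 2 stones from a
# pile; whoever takes the last stone wins. Optimal play leaves the
# opponent a pile that is a multiple of 3.

class Node:
    def __init__(self, pile, player, parent=None):
        self.pile, self.player, self.parent = pile, player, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def rollout(pile, player):
    """Random play-out; returns the winner (who takes the last stone)."""
    while pile > 0:
        pile -= random.choice([m for m in (1, 2) if m <= pile])
        player = 1 - player
    return 1 - player  # the mover who just emptied the pile won

def mcts(root, n_sims=2000):
    for _ in range(n_sims):
        node = root
        # Selection: descend by UCB1 while the node is fully expanded.
        while node.pile > 0 and len(node.children) == len(
                [m for m in (1, 2) if m <= node.pile]):
            node = max(node.children.values(),
                       key=lambda c: c.value / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # Expansion: add one untried move.
        if node.pile > 0:
            moves = [m for m in (1, 2)
                     if m <= node.pile and m not in node.children]
            m = random.choice(moves)
            node.children[m] = Node(node.pile - m, 1 - node.player, node)
            node = node.children[m]
        # Simulation and backpropagation: each node accumulates wins for
        # the player who moved into it (the parent's mover).
        winner = rollout(node.pile, node.player)
        while node:
            node.visits += 1
            node.value += (winner != node.player)
            node = node.parent

random.seed(4)
root = Node(pile=7, player=0)
mcts(root)
best = max(root.children, key=lambda m: root.children[m].visits)
print("best move from a pile of 7:", best)  # taking 1 leaves a multiple of 3
```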
By: | Paul Hufe (University of Bristol); Daniel Weishaar (LMU Munich) |
Abstract: | The measurement of preferences often relies on surveys in which individuals evaluate hypothetical scenarios. This paper proposes and validates a novel factorial survey tool to measure fairness preferences. We specifically examine whether a non-incentivized survey captures the same distributional preferences as an impartial spectator design, where choices may apply to a real person. In contrast to prior studies, our design involves high stakes, with respondents determining a real person's monthly earnings, ranging from $500 to $5,700. We find that the non-incentivized survey module yields results nearly identical to those of the incentivized experiment and recovers fairness preferences that are stable over time. Furthermore, we show that most respondents adopt intermediate fairness positions, with fewer exhibiting strictly egalitarian or libertarian preferences. These findings suggest that high-stakes incentives do not significantly impact the measurement of fairness preferences and that non-incentivized survey questions covering realistic scenarios offer valuable insights into the nature of these preferences. |
Keywords: | Fairness preferences; Survey experiment; Vignette studies |
JEL: | C90 D63 I39 |
Date: | 2024–11–01 |
URL: | https://d.repec.org/n?u=RePEc:rco:dpaper:515 |
By: | Roman Belavkin; Panos Pardalos; Jose Principe |
Abstract: | Advances in machine learning techniques have led to practical solutions in many areas of science, engineering, medicine, and finance. The wide choice of algorithms, implementations, and libraries creates a further challenge: selecting the right algorithm and tuning its parameters to achieve optimal or satisfactory performance in a specific application. Here we show how the value of information, V(I), can be used in this task to guide the algorithm choice and parameter tuning process. After estimating the amount of Shannon mutual information between the predictor and response variables, V(I) defines a theoretical upper bound on the performance of any algorithm. The inverse function I(V) defines the lower frontier: the minimum amount of information required to achieve a desired level of performance. In this paper, we illustrate the value of information for mean-square error minimization and apply it to forecasts of cryptocurrency log-returns. (An illustrative information-based bound on mean-square error is sketched after this entry.) |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.01831 |
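For the mean-square error case, one standard instance of such a bound: if the response Y has a Gaussian marginal, any estimator satisfies MSE >= Var(Y) * exp(-2 I(X;Y)), with I measured in nats. The sketch below estimates I with scikit-learn on synthetic data and evaluates that floor; it illustrates the idea, not the authors' estimator, and the Gaussian assumption and data are ours.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(5)
n = 5000

# Synthetic predictor/response pair standing in for features and
# cryptocurrency log-returns (purely illustrative data).
x = rng.normal(size=(n, 1))
y = 0.8 * x[:, 0] + 0.6 * rng.normal(size=n)

# Estimate Shannon mutual information I(X; Y) in nats.
I = mutual_info_regression(x, y)[0]

# Entropy-based floor on any estimator's mean-square error,
# valid when Y has a Gaussian marginal: MSE >= Var(Y) * exp(-2 I).
mse_floor = np.var(y) * np.exp(-2 * I)
print(f"I(X;Y) ~= {I:.3f} nats, MSE lower bound ~= {mse_floor:.3f}")

# Compare with the best linear predictor's achieved MSE.
beta = np.cov(x[:, 0], y)[0, 1] / np.var(x[:, 0])
mse_ols = np.mean((y - beta * x[:, 0]) ** 2)
print(f"OLS MSE ~= {mse_ols:.3f}")
```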
By: | Li, Jiangyan; Fairley, Kim; Fenneman, Achiel |
Abstract: | The Ellsberg urn is conventionally used in experiments to measure ambiguity attitudes, yet there is no uniformity in the method for producing Ellsberg urns, which complicates the comparability of results across studies. By surveying 69 experimental studies, we distill four different methods of ambiguity production—Ellsberg urns that are produced by (i) the experimenter, (ii) another random participant, (iii) compound risk lotteries, and (iv) compound risk derived from random numbers in nature. In an experiment, we then assess participants' ambiguity attitudes under each production method and detect no statistically significant differences among them. However, a notable proportion of preference inconsistency is observed when compound risk lotteries are used for ambiguity generation. Overall, our findings suggest interchangeability among the four production methods in future laboratory experiments. Nevertheless, we suggest employing method (i), as it is the simplest and most straightforward production method. |
Keywords: | Ambiguity, ambiguity aversion, likelihood insensitivity, uncertainty, Ellsberg, experiment |
JEL: | C90 D80 |
Date: | 2024–09–06 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:122336 |
By: | Zhaonan Qu; Yongchan Kwon |
Abstract: | Instrumental variables (IV) estimation is a fundamental method in econometrics and statistics for estimating causal effects in the presence of unobserved confounding. However, challenges such as untestable model assumptions and poor finite sample properties have undermined its reliability in practice. Viewing common issues in IV estimation as distributional uncertainties, we propose DRIVE, a distributionally robust framework of the classical IV estimation method. When the ambiguity set is based on a Wasserstein distance, DRIVE minimizes a square root ridge regularized variant of the two stage least squares (TSLS) objective. We develop a novel asymptotic theory for this regularized regression estimator based on the square root ridge, showing that it achieves consistency without requiring the regularization parameter to vanish. This result follows from a fundamental property of the square root ridge, which we call "delayed shrinkage". This novel property, which also holds for a class of generalized method of moments (GMM) estimators, ensures that the estimator is robust to distributional uncertainties that persist in large samples. We further derive the asymptotic distribution of Wasserstein DRIVE and propose data-driven procedures to select the regularization parameter based on theoretical results. Simulation studies confirm the superior finite sample performance of Wasserstein DRIVE. Thanks to its regularization and robustness properties, Wasserstein DRIVE could be preferable in practice, particularly when the practitioner is uncertain about model assumptions or distributional shifts in data. (One plausible reading of the regularized objective is sketched after this entry.) |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.15634 |
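One plausible reading of a square root ridge regularized TSLS objective is min over beta of ||P_Z(y - X beta)||_2 / sqrt(n) + rho * ||beta||_2, where P_Z projects onto the instruments. The sketch below solves this numerically on a synthetic IV design; the exact penalty form and the choice of rho are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 1000

# Simple linear IV design: one endogenous regressor, two instruments.
Z = rng.normal(size=(n, 2))
u = rng.normal(size=n)                        # unobserved confounder
x = Z @ np.array([1.0, 0.5]) + u + rng.normal(size=n)
y = 2.0 * x + u + rng.normal(size=n)          # true beta = 2.0
X = x.reshape(-1, 1)

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)         # projection onto instruments

def drive_objective(beta, rho=0.1):
    """Square-root (unsquared) projected residual norm plus ridge penalty."""
    r = P @ (y - X @ beta)
    return np.linalg.norm(r) / np.sqrt(n) + rho * np.linalg.norm(beta)

beta_drive = minimize(drive_objective, x0=np.array([0.5]),
                      method="Nelder-Mead").x
beta_tsls = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)
print("TSLS:", beta_tsls, " DRIVE-style:", beta_drive)
```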
By: | Pei Kuang; Michael Weber; Shihan Xie |
Abstract: | We conduct a survey experiment with a large, politically representative sample of U.S. consumers (5,205 participants) to study how perceptions of the U.S. Federal Reserve's (Fed) political stance shape macroeconomic expectations and trust in the Fed. The public is divided on the Fed's political leaning: most Republican-leaning consumers believe the Fed favors Democrats, whereas most Democrat-leaning consumers perceive the Fed as favoring Republicans. Consumers who perceive the Fed as aligned with their political affiliations tend to (1) have a more positive outlook on current and future economic conditions and express higher trust in the institution, (2) show greater willingness to pay for, and are more likely to receive, Fed communications, and (3) assign significantly more weight to Fed communications when updating their inflation expectations. Strong in-group favoritism generally amplifies these effects. Finally, if Trump were elected U.S. president, consumers would overwhelmingly view the Fed as favoring Republicans. The proportion of consumers viewing the Fed as an in-group would remain stable, but its composition would shift: fewer Democrat-leaning consumers would see the Fed as an in-group, whereas more Republican-leaning consumers would come to do so. Likewise, overall public trust in the Fed would remain steady, but trust among Democrat-leaning consumers would decline significantly, whereas it would rise among Republican-leaning consumers. |
JEL: | D72 D83 D84 E31 E7 |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33071 |
By: | Andrew Dillon; Nicoló Tomaselli |
Abstract: | Making markets is central to theories of development. In a randomized controlled trial, we vary an agricultural input market's organization to test whether time-inconsistent preferences, hard or soft commitments, and liquidity are constraints to market formation. The results show that markets organized earlier raise market sales, consistent with farmers' measured time-inconsistent preferences. Liquidity in later spot markets is a substitute for earlier market timing. Farmers' demand is relatively inelastic to deposit levels in forward contracts. The experiment also directly tests the separability hypothesis; we find that creating input markets alone does not lead to welfare improvements. |
Keywords: | agriculture, market formation, welfare improvements, randomized controlled trial, development, farmers |
JEL: | Q12 L10 G21 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:frz:wpaper:wp2024_18.rdf |