Description
Decision Theory, including its applications and closely related topics, is a deep and active area of research. It encompasses theories of how people make, or should make, decisions, often in the face of uncertainty. Mathematically, such theories typically connect preferences or choices with functional representations, and analyze or apply those functionals as models of behavior. At this workshop, invited scholars will present their recent work and engage in discussion with the audience.
Organizer
Speakers
Schedule
Speaker: Mira Frick (Yale University)
Joint work with Ryota Iijima and Yuhta Ishii.
We provide a systematic approach to compare different belief-updating rules under ambiguity, by characterizing their learning efficiency. We consider a decision-maker (DM) with maxmin expected utility preferences who observes many signals about an unknown state of the world, and then solves a decision problem based on her updated beliefs. Capturing signal ambiguity, the DM perceives a set of possible signal structures. A belief-updating rule maps sequences of signals to sets of posteriors about the state, and the DM chooses optimally based on her worst-case posterior. We measure the learning efficiency of each updating rule by considering the DM’s induced worst-case expected payoff, evaluated from an ex-ante perspective. Thus, updating rules with higher learning efficiency can be viewed as displaying less dynamic inconsistency. We provide a simple characterization of the learning efficiency of each updating rule. This has the following main implications. First, in stationary environments (i.e., when signal draws are conditionally i.i.d.), we show that learning efficiency is maximal if (and, in a sense, only if) the DM uses maximum-likelihood updating; in contrast, the widely used full-Bayesian updating rule is generically (potentially highly) inefficient. Second, in non-stationary environments (i.e., when signal structures can vary over time), we show that learning efficiency is maximal if (and, in a sense, only if) the DM uses a maximum-likelihood updating rule that (mis)perceives the environment to be stationary.
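The contrast between the two updating rules in the abstract can be sketched numerically. This is a hypothetical two-state, binary-signal example, not the paper's model: the prior, the set of signal structures, and the signal sequence are all illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: two states, binary signals. Ambiguity is captured
# by a set of possible signal structures, each giving P(signal = 1 | state).
prior = np.array([0.5, 0.5])
structures = [np.array([0.6, 0.4]), np.array([0.8, 0.2])]  # assumed set
signals = [1, 1, 0, 1]

def update(prior, struct, signals):
    """Bayesian posterior and marginal likelihood for one signal structure."""
    lik = np.ones_like(prior)
    for s in signals:
        lik = lik * (struct if s == 1 else 1 - struct)
    marginal = float(lik @ prior)
    return prior * lik / marginal, marginal

# Full-Bayesian updating: keep the posterior under EVERY structure in the set.
full_bayes = [update(prior, q, signals)[0] for q in structures]

# Maximum-likelihood updating: keep only posteriors under structures that
# maximize the likelihood of the observed signal sequence.
marginals = [update(prior, q, signals)[1] for q in structures]
ml_set = [update(prior, q, signals)[0]
          for q, m in zip(structures, marginals)
          if np.isclose(m, max(marginals))]
```

Here maximum-likelihood updating discards the structure that fits the observed sequence poorly, so the DM's worst case is taken over a smaller set of posteriors than under full-Bayesian updating.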
Speaker: Fabio Maccheroni (Bocconi University)
We introduce an algorithmic decision process for multialternative choice, the Neural Metropolis Algorithm, that combines binary comparisons and Markovian exploration. We show that a stochastic version of transitivity, a basic tenet of rationality, makes this algorithm value-based. In so doing, we extend to decision procedures some classic stochastic choice notions.
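A minimal sketch in the spirit of the abstract: Markovian exploration over alternatives driven by noisy binary comparisons. The alternatives, latent values, and logistic comparison model below are assumptions for illustration, not the paper's specification of the Neural Metropolis Algorithm.

```python
import math
import random

# Assumed latent values of the alternatives (illustrative numbers).
values = {"a": 1.0, "b": 2.0, "c": 3.5}
alts = list(values)

def prefers(x, y, beta=2.0):
    """Stochastic binary comparison: x beats y with logistic probability
    in the value difference (an assumed comparison model)."""
    p = 1.0 / (1.0 + math.exp(-beta * (values[x] - values[y])))
    return random.random() < p

def metropolis_choice(steps=300, seed=0):
    """Markovian exploration: propose a uniform candidate each step and
    move to it whenever the binary comparison favors it."""
    random.seed(seed)
    current = random.choice(alts)
    for _ in range(steps):
        candidate = random.choice(alts)
        if prefers(candidate, current):
            current = candidate
    return current
```

Under this logistic comparison rule the process satisfies detailed balance with a stationary distribution proportional to exp(beta * value), so the long-run choice concentrates on high-value alternatives, illustrating the sense in which stochastic transitivity makes such a procedure value-based.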
Speaker: Shaowei Ke (University of Michigan)
We study a decision maker’s learning behavior when she receives recommendations from black boxes, i.e., the decision maker does not understand how the recommendations are generated. We introduce three types of axioms to be imposed on the decision maker’s learning rule: (weak and strong) monotonicity, (weak and strong) regularity, and partial obedience. We show that strong monotonicity and weak regularity characterize the contraction rule, which has two parameters that map each recommendation to a recommended belief and to the trustworthiness of the recommendation. The decision maker’s posterior is formed by mixing her prior with the recommended belief, weighted by the trustworthiness measure. We show that under weak monotonicity, partial obedience, and strong regularity, the learning rule must feature a form of conservatism. However, no contraction rule is conservative; i.e., there does not exist any learning rule that satisfies strong monotonicity, partial obedience, and strong regularity simultaneously.
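The mixing step of the contraction rule described above can be written in a few lines. The prior, recommended belief, and trustworthiness weight here are hypothetical numbers chosen for illustration.

```python
import numpy as np

def contraction_update(prior, recommended_belief, trust):
    """Contraction rule as described in the abstract: the posterior is the
    prior mixed with the recommended belief, weighted by trustworthiness.
    `trust` in [0, 1]; trust = 0 ignores the recommendation, trust = 1
    adopts the recommended belief outright."""
    prior = np.asarray(prior, dtype=float)
    q = np.asarray(recommended_belief, dtype=float)
    return trust * q + (1 - trust) * prior

# Illustrative numbers (not from the paper).
prior = [0.5, 0.3, 0.2]
post = contraction_update(prior, [0.9, 0.05, 0.05], trust=0.6)
```

Since the posterior is a convex combination of two probability vectors, it is automatically a probability vector itself.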
Speaker: Sarah Auster (University of Bonn)
Speaker: Jean-Marc Tallon (Paris School of Economics)
Speaker: Jacob Sagi (University of North Carolina Chapel Hill)
Speaker: David Dillenberger (University of Pennsylvania)
Speaker: Eran Hanany (Tel Aviv University)
Speaker: Christopher Chambers (Georgetown University)
Speaker: Itzhak Gilboa (Tel Aviv University)
Speaker: Luca Rigotti (University of Pittsburgh)
Speaker: Marciano Siniscalchi (Northwestern University)
Speaker: David Ahn (Washington University in St. Louis)
Related decisions are often observed in isolation, without direct measurement of correlation in beliefs across state spaces or complementarity in tastes across prize spaces. We introduce a novel model of two decision problems with distinct states and prizes, which we call small worlds, without observing bets contingent on the realizations of both worlds. We characterize an appropriate version of subjective expected utility, where choices are made as if there is a joint distribution over the product of the state spaces and a joint utility index over pairs of prizes from both prize spaces. Turning to identification, the joint utility index over pairs of prizes and the marginal belief over each small world are identified, but the uniqueness of the joint distribution is more subtle. If the utility index is separable across prize spaces, then the correlation across state spaces is unidentified; but if there is any complementarity across prizes, then the joint distribution is exactly identified.
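The identification claim can be checked on a toy example. All utilities, beliefs, and acts below are hypothetical numbers chosen for illustration: two joint beliefs share the same marginals but differ in correlation, and we compare the expected utility of an act pair with and without a complementarity term.

```python
import numpy as np

# Assumed utility indices over prizes in each small world.
u1 = np.array([0.0, 1.0])   # world-1 prizes
u2 = np.array([0.0, 2.0])   # world-2 prizes

# Two joint beliefs over the 2x2 product state space with identical
# marginals but different correlation across the two small worlds.
independent = np.outer([0.5, 0.5], [0.5, 0.5])
correlated = np.array([[0.4, 0.1], [0.1, 0.4]])

# An act pair: the prize index received in each state of each world.
act1 = np.array([0, 1])
act2 = np.array([1, 0])

def eu(joint, separable=True):
    """Expected utility of the act pair under a joint belief."""
    u = u1[act1][:, None] + u2[act2][None, :]       # separable utility
    if not separable:
        u = u + np.outer(act1, act2)                # add a complementarity term
    return float((joint * u).sum())

# Separable utility: only the marginals matter, so the two joint beliefs
# induce the same evaluation and correlation cannot be identified.
eu_ind, eu_cor = eu(independent), eu(correlated)

# With complementarity, the same two joint beliefs are now distinguished.
eu_ind_c, eu_cor_c = eu(independent, separable=False), eu(correlated, separable=False)
```

With separable utility the expected utility decomposes into a sum of marginal expectations, which is why choices pin down the marginals but not the correlation; any cross-term breaks that decomposition.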
Speaker: Simone Cerreia-Vioglio (Bocconi University)
Videos
Time-Constrained Sequential Neural Decision Procedures in Multialternative Choice Problems
May 2, 2022