Social and Information Sciences Laboratory (SISL) Seminar
Mislearning from Censored Data: The Gambler's Fallacy in Optimal-Stopping Problems
Abstract: I study endogenous learning dynamics for people who expect systematic reversals from random sequences — the "gambler's fallacy." Biased agents face an optimal-stopping problem, such as managers conducting sequential interviews. They are uncertain about the underlying distribution (e.g., the talent distribution in the labor pool) and must learn its parameters from previous agents' histories. Agents stop when early draws are deemed "good enough," so predecessors' histories contain negative streaks but not positive streaks. Since biased learners understate the likelihood of consecutive below-average draws, histories induce pessimistic beliefs about the distribution's mean. When early agents decrease their acceptance thresholds due to pessimism, later learners become even more surprised by the lack of positive reversals in their predecessors' histories, leading to even more pessimistic inferences and even lower acceptance thresholds — a positive-feedback loop. Agents who are additionally uncertain about the distribution's variance believe in fictitious variation (exaggerated variance) to an extent that depends on the severity of data censoring. When payoffs are convex in the draws (e.g., managers can hire previously rejected interviewees), variance uncertainty provides another channel of positive feedback between past and future thresholds.
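The censoring mechanism in the abstract can be illustrated with a minimal simulation sketch (not from the paper; all function names, parameter values, and the Gaussian draw distribution are illustrative assumptions): each agent stops at the first draw clearing an acceptance threshold, so the draws that survive into predecessors' histories before the stopping draw are truncated below the threshold and average well below the true mean.

```python
import random
import statistics

def run_agent(mu, sigma, threshold, rng, max_draws=100):
    """One agent draws sequentially and stops at the first draw that
    clears the acceptance threshold; returns the full history.
    (Illustrative sketch, not the paper's model.)"""
    history = []
    for _ in range(max_draws):
        x = rng.gauss(mu, sigma)
        history.append(x)
        if x >= threshold:
            break
    return history

def mean_pre_acceptance_draw(mu=0.0, sigma=1.0, threshold=0.5,
                             n_agents=10_000, seed=0):
    """Average of the draws observed before each agent's stopping draw.
    These are exactly the censored observations a successor sees as
    'negative streaks': every one of them fell short of the threshold."""
    rng = random.Random(seed)
    rejected = []
    for _ in range(n_agents):
        hist = run_agent(mu, sigma, threshold, rng)
        rejected.extend(hist[:-1])  # drop the final (accepted) draw
    return statistics.fmean(rejected) if rejected else float("nan")

# With true mean 0, the pre-acceptance draws average well below 0:
# the histories are censored against positive streaks, which is the
# raw material for the biased learners' pessimistic inference.
print(mean_pre_acceptance_draw())
```

A gambler's-fallacy learner viewing these runs of below-threshold draws expects reversals that the censoring has removed, and so infers an overly pessimistic mean — the first step of the feedback loop described above.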
Contact: Mary Martin at 626-395-4571 email@example.com