I have been leading the Rationality Enhancement Group at the MPI for Intelligent Systems in Tübingen since July 2018. My mission is to develop a scientific foundation and practical tools for empowering people to choose and successfully pursue their ideal self and to make valuable contributions to society.
Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence
Falk Lieder was awarded the 2020 Glushko Prize for his 2018 Ph.D. thesis, “Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence”. This recorded talk was prepared for the virtual CogSci 2020 conference. In it, he presents the main ideas behind the research in his thesis.
People's limited attentional resources are challenged by the high prevalence of potential distractors in daily life. How well people cope with the resulting demands is moderated by individual attention-control abilities, which have been shown to mitigate potentially harmful effects (Wirzberger & Rey, 2018).
Values are long-term, high-level motivational guides that people can use to organize and energize their lives, from daily pursuits, to interpersonal relationships, to career goals, and so on. The content of people’s values is associated with the way they experience their day-to-day lives (Prentice, 2015) and crucial periods of life ...
In this project, we investigate to what extent seemingly irrational planning decisions are a consequence of how people individually experience the costs and benefits of deliberate decision-making. We start from the empirically grounded assumptio...
Goal-setting can be a powerful strategy for inciting action and increasing productivity (Locke & Latham, 1990), and meaningful goals can be a sustainable source of happiness (Niemiec, Ryan, & Deci, 2009). But this o...
Building on insights into how people learn how to think and how to decide (Lieder & Griffiths, 2017; Krueger, Lieder, & Griffiths, 2017; Lieder, Shenhav, Musslick, & Griffiths, 2018) and novel methods for discovering optimal cognitive str...
In this project, we are developing a computational-level theory of human goal pursuit based on the principle of resource rationality. Our theory assumes that our limited attention and planning ability constrain our capacity to pursue goals. We formal...
In Proceedings of the 37th Annual Conference of the Cognitive Science Society, 2015 (inproceedings)
The human mind appears to be equipped with a toolbox full of cognitive strategies, but how do people decide when to use which strategy? We leverage rational metareasoning to derive a rational solution to this problem and apply it to decision making under uncertainty. The resulting theory reconciles the two poles of the debate about human rationality by proposing that people gradually learn to make rational use of fallible heuristics. We evaluate this theory against empirical data and existing accounts of strategy selection (i.e. SSL and RELACS). Our results suggest that while SSL and RELACS can explain people's ability to adapt to homogeneous environments in which all decision problems are of the same type, rational metareasoning can additionally explain people's ability to adapt to heterogeneous environments and flexibly switch strategies from one decision to the next.
Lieder, F., Sim, Z. L., Hu, J. C., Griffiths, T. L., Xu, F.
In Proceedings of the 37th Annual Conference of the Cognitive Science Society, 2015 (inproceedings)
Adults and children rely heavily on other people’s testimony. However, domains of knowledge where there is no consensus on the truth are likely to result in conflicting testimonies. Previous research has demonstrated that in these cases, learners look towards the majority opinion to make decisions. However, it remains unclear how learners evaluate social information, given that considering either the overall valence, or the number of testimonies, or both may lead to different conclusions. We therefore formalized several social learning strategies and compared them to the performance of adults and children. We find that children use different strategies than adults. This suggests that the development of social learning may involve the acquisition of cognitive strategies.
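As a rough illustration of what formalizing such social learning strategies can look like, the sketch below contrasts a majority-count strategy with a summed-confidence ("valence") strategy on the same conflicting testimonies. The representation of a testimony and both strategy definitions are hypothetical simplifications for exposition, not the models fitted in the paper.

```python
# Illustrative formalizations of social learning strategies for
# conflicting testimonies. Each testimony is (claim, strength), where
# claim is "A" or "B" and strength is a confidence weight in (0, 1].
# These definitions are simplifying assumptions, not the paper's models.

def follow_majority(testimonies):
    """Choose the claim endorsed by the greater NUMBER of informants."""
    counts = {}
    for claim, _ in testimonies:
        counts[claim] = counts.get(claim, 0) + 1
    return max(counts, key=counts.get)

def follow_total_valence(testimonies):
    """Choose the claim with the greater SUMMED confidence (valence)."""
    totals = {}
    for claim, strength in testimonies:
        totals[claim] = totals.get(claim, 0.0) + strength
    return max(totals, key=totals.get)

# Three weak endorsements of A vs. one strong endorsement of B:
testimony = [("A", 0.2), ("A", 0.2), ("A", 0.2), ("B", 0.9)]
print(follow_majority(testimony))       # "A": more informants
print(follow_total_valence(testimony))  # "B": higher summed confidence
```

The point of the contrast is the one made in the abstract: the same set of testimonies can lead to different conclusions depending on whether a learner weighs the number of testimonies, their overall valence, or both.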
The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2015 (article)
Humans possess a repertoire of decision strategies. This raises the question of how we decide how to decide. Behavioral experiments suggest that the answer includes metacognitive reinforcement learning: rewards reinforce not only our behavior but also the cognitive processes that lead to it. Previous theories of strategy selection, namely SSL and RELACS, assumed that model-free reinforcement learning identifies the cognitive strategy that works best on average across all problems in the environment. Here we explore the alternative: model-based reinforcement learning about how the differential effectiveness of cognitive strategies depends on the features of individual problems. Our theory posits that people learn a predictive model of each strategy’s accuracy and execution time and choose strategies according to their predicted speed-accuracy tradeoff for the problem to be solved. We evaluate our theory against previous accounts by fitting published data on multi-attribute decision making, conducting a novel experiment, and demonstrating that our theory can account for people’s adaptive flexibility in risky choice. We find that while SSL and RELACS are sufficient to explain people’s ability to adapt to a homogeneous environment in which all decision problems are of the same type, model-based strategy selection learning can also explain people’s ability to adapt to heterogeneous environments and flexibly switch to a different decision strategy when the situation changes.
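The core mechanism, stripped to its bones, can be sketched as follows. This is a minimal illustration under strong simplifying assumptions (linear predictors over problem features, a fixed opportunity cost of time, delta-rule learning); it is not the model evaluated in the paper.

```python
import numpy as np

# Minimal sketch of model-based strategy selection learning: for each
# strategy, learn predictors of its reward and execution time from
# problem features, then pick the strategy with the best predicted
# speed-accuracy tradeoff. The linear form, delta-rule updates, and
# opportunity cost are illustrative assumptions, not the paper's model.

class StrategySelector:
    def __init__(self, n_strategies, n_features, opportunity_cost=0.1, lr=0.1):
        self.w_reward = np.zeros((n_strategies, n_features))
        self.w_time = np.zeros((n_strategies, n_features))
        self.cost = opportunity_cost  # value of time (reward per second)
        self.lr = lr

    def choose(self, features):
        r_hat = self.w_reward @ features  # predicted reward per strategy
        t_hat = self.w_time @ features    # predicted execution time
        # Best predicted speed-accuracy tradeoff for THIS problem:
        return int(np.argmax(r_hat - self.cost * t_hat))

    def update(self, strategy, features, reward, exec_time):
        # Delta-rule updates of the predictive model of the used strategy.
        r_err = reward - self.w_reward[strategy] @ features
        t_err = exec_time - self.w_time[strategy] @ features
        self.w_reward[strategy] += self.lr * r_err * features
        self.w_time[strategy] += self.lr * t_err * features

# After experiencing that strategy 0 is accurate but slow and strategy 1
# slightly less accurate but much faster, the selector prefers strategy 1.
sel = StrategySelector(n_strategies=2, n_features=1)
f = np.array([1.0])  # a single bias feature, for simplicity
for _ in range(200):
    sel.update(0, f, reward=1.0, exec_time=10.0)
    sel.update(1, f, reward=0.8, exec_time=1.0)
print(sel.choose(f))  # 1: 0.8 - 0.1*1 beats 1.0 - 0.1*10
```

Because the predictions are conditioned on problem features, such a learner can switch strategies from one decision to the next in a heterogeneous environment — the property that distinguishes this account from SSL and RELACS in the abstract.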
Lieder, F., Plunkett, D., Hamrick, J. B., Russell, S. J., Hay, N. J., Griffiths, T. L.
In Advances in Neural Information Processing Systems 27, 2014 (inproceedings)
Selecting the right algorithm is an important problem in computer science, because the algorithm often has to exploit the structure of the input to be efficient. The human mind faces the same challenge. Therefore, solutions to the algorithm selection problem can inspire models of human strategy selection and vice versa. Here, we view the algorithm selection problem as a special case of metareasoning and derive a solution that outperforms existing methods in sorting algorithm selection. We apply our theory to model how people choose between cognitive strategies and test its prediction in a behavioral experiment. We find that people quickly learn to adaptively choose between cognitive strategies. People's choices in our experiment are consistent with our model but inconsistent with previous theories of human strategy selection. Rational metareasoning appears to be a promising framework for reverse-engineering how people choose among cognitive strategies and translating the results into better solutions to the algorithm selection problem.
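To make the analogy concrete, here is a toy algorithm selector that learns per-algorithm runtime estimates conditioned on a coarse input feature and then picks the algorithm with the lowest predicted runtime. The feature, the candidate algorithms, and the running-average learner are illustrative stand-ins; the paper's selector is a rational metareasoner, not this heuristic.

```python
# Toy algorithm selection by predicted runtime: learn a running-average
# runtime per (algorithm, input feature), then pick the algorithm with
# the lowest prediction. The feature and learner are illustrative
# simplifications, not the metareasoning model from the paper.

def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

def builtin_sort(xs):  # Timsort stands in for a divide-and-conquer sorter
    return sorted(xs)

ALGORITHMS = [insertion_sort, builtin_sort]

def nearly_sorted(xs):
    """Coarse input feature: are >90% of adjacent pairs already in order?"""
    in_order = sum(a <= b for a, b in zip(xs, xs[1:]))
    return in_order / max(len(xs) - 1, 1) > 0.9

class AlgorithmSelector:
    def __init__(self):
        self.mean = {}   # predicted runtime per (algorithm, feature value)
        self.count = {}

    def choose(self, xs):
        f = nearly_sorted(xs)
        # Try each algorithm once per feature value, then exploit.
        for i in range(len(ALGORITHMS)):
            if (i, f) not in self.mean:
                return i
        return min(range(len(ALGORITHMS)), key=lambda i: self.mean[(i, f)])

    def observe(self, i, xs, runtime):
        f = nearly_sorted(xs)
        c = self.count.get((i, f), 0)
        m = self.mean.get((i, f), 0.0)
        self.count[(i, f)] = c + 1
        self.mean[(i, f)] = m + (runtime - m) / (c + 1)

sel = AlgorithmSelector()
data = list(range(100))                 # nearly sorted input
print(sel.choose(data))                 # 0: explore insertion_sort first
sel.observe(0, data, runtime=0.001)
print(sel.choose(data))                 # 1: explore builtin_sort next
sel.observe(1, data, runtime=0.005)
print(sel.choose(data))                 # 0: exploit lower predicted runtime
```

The design choice this sketch highlights is the one the abstract turns on: the selector exploits structure in the *input* (here, presortedness) rather than committing to a single algorithm that is best on average.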
In Computational and Systems Neuroscience (Cosyne), pages: 112, 2013 (inproceedings)
Learned helplessness experiments involving controllable vs. uncontrollable stressors have shown that the perceived ability to control events has profound consequences for decision making. Normative models of decision making, however, do not naturally incorporate knowledge about controllability, and previous approaches to incorporating it have led to solutions with biologically implausible computational demands [1,2]. Intuitively, controllability bounds the differential rewards for choosing one strategy over another, and therefore believing that the environment is uncontrollable should reduce one’s willingness to invest time and effort into choosing between options. Here, we offer a normative, resource-rational account of the role of controllability in trading mental effort for expected gain. In this view, the brain not only faces the task of solving Markov decision problems (MDPs), but it also has to optimally allocate its finite computational resources to solve them efficiently. This joint problem can itself be cast as an MDP, and its optimal solution respects computational constraints by design.
We start with an analytic characterisation of the influence of controllability on the use of computational resources. We then replicate previous results on the effects of controllability on the differential value of exploration vs. exploitation, showing that these are also seen in a cognitively plausible regime of computational complexity. Third, we find that controllability makes computation valuable, so that it is worth investing more mental effort the higher the subjective controllability. Fourth, we show that in this model the perceived lack of control (helplessness) replicates empirical findings whereby patients with major depressive disorder are less likely to repeat a choice that led to a reward, or to avoid a choice that led to a loss. Finally, the model makes empirically testable predictions about the relationship between reaction time and helplessness.
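The intuition that controllability bounds the value of computation can be illustrated with a toy model: if the chosen action is only executed with probability p (the agent's controllability) and deliberation has a fixed per-step cost, the optimal amount of deliberation grows with p. The payoff function, cost, and diminishing-returns curve below are arbitrary illustrative assumptions, not the meta-MDP analyzed in the abstract.

```python
import math

# Toy illustration: with probability p ("controllability") the chosen
# action takes effect, otherwise the outcome is random, so the expected
# gain from better decisions -- and hence the worthwhile amount of
# deliberation -- scales with p. All functional forms are illustrative.

def expected_net_value(n_computations, controllability, cost=0.01):
    # Assume decision quality improves with diminishing returns in the
    # number of computations (the curve 1 - exp(-n/4) is arbitrary),
    # but only pays off when the agent is actually in control.
    decision_quality = 1.0 - math.exp(-n_computations / 4.0)
    return controllability * decision_quality - cost * n_computations

def optimal_effort(controllability, max_n=50):
    return max(range(max_n + 1),
               key=lambda n: expected_net_value(n, controllability))

print(optimal_effort(0.9))  # high subjective control: deliberate more
print(optimal_effort(0.1))  # low control (helplessness): deliberate less
print(optimal_effort(0.0))  # no control: deliberation is never worth it
```

Even in this stripped-down form, the qualitative prediction of the abstract comes out: lower perceived controllability makes computation less valuable, shrinking the optimal investment of mental effort toward zero.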
Advances in Neural Information Processing Systems 25, pages: 2699-2707, 2012 (article)
Bayesian inference provides a unifying framework for addressing problems in machine learning, artificial intelligence, and robotics, as well as the problems facing the human mind. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind's inference algorithm(s). We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e. during the period known as burn-in. Therefore the strategy that is optimal subject to the mind's bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model's quantitative predictions are tested against published data on anchoring in numerical estimation tasks. Our theoretical and empirical results suggest that the anchoring bias is consistent with approximate Bayesian inference.
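The burn-in argument can be demonstrated in a few lines: a random-walk Metropolis-Hastings sampler started at an anchor and stopped early yields estimates biased toward that anchor, while a long run converges to the posterior. The Gaussian target, proposal width, and anchor value below are illustrative choices, not the fitted model from the paper.

```python
import math
import random

# Minimal sketch of anchoring as truncated Metropolis-Hastings: stopping
# the chain before burn-in completes leaves the estimate biased toward
# its starting point (the anchor). All particulars are illustrative.

def metropolis_hastings(log_target, start, n_iter, proposal_sd=1.0, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    x = start
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, proposal_sd)
        log_ratio = log_target(proposal) - log_target(x)
        # Accept with probability min(1, target(proposal)/target(x)).
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal
    return x  # the chain's current state serves as the estimate

# Standard-normal "posterior"; the anchor lies far from its mean of 0.
log_normal = lambda x: -0.5 * x * x
anchor = 100.0

few = metropolis_hastings(log_normal, anchor, n_iter=5)     # near anchor
many = metropolis_hastings(log_normal, anchor, n_iter=5000) # near 0
print(few, many)
```

With only a handful of iterations the estimate barely adjusts away from the anchor; with thousands it reaches the bulk of the posterior — the anchoring-and-adjustment pattern the abstract derives as a rational consequence of bounded processing time.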
Zeitschrift für Sozialpsychologie, 37(4):245-258, Verlag Hans Huber, 2006 (article)
The present study analyzed the effectiveness of fear-inducing warning labels among adolescent smokers. 336 smokers (mean age: 15 years) were presented with written or graphic warning labels on cigarette packs (experimental conditions; n = 96, n = 119) or received no warning labels (control condition; n = 94). Afterwards, the model factors of the revised protection motivation model (Arthur & Quester, 2004) were assessed. The results support the hypothesis that the factors “severity of harm” and “probability of harm” influence the behavioral likelihood of smoking fewer or lighter cigarettes, mediated by “fear”. The behavioral likelihood, however, was not affected by the three experimental conditions. Nor could the factors “response efficacy” and “self-efficacy” be confirmed as moderators of the relationship between fear and behavioral likelihood.
Our goal is to understand the principles of perception, action, and learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.