A Theory of Cognitive and Algorithmic Decision Making
Addressing the Problem
How can we leverage the mathematical science of algorithmic decision-making to enhance the abilities of the intuitive human cognitive decision system? Can the mathematical framework of engineering decision theory capture the salient characteristics of human decision behavior that do not fit unrealistic models of human rationality? Can our understanding of social and behavioral influences on human decision-making improve existing and future algorithmic decision systems that interact with people?
Answers to these questions will have deep implications in a variety of fields, such as data-driven and machine-aided decision-making systems; economic and financial forecasting that incorporates into algorithmic data analytics an understanding of the cognitive biases of investors and consumers; marketing and politics in the era of social networks; high-performance computer support for first responders and emergency operations combining statistical analysis based on machine learning with “soft” inputs provided by experts; and data representation and visualization for enhancing human decision-making.
Leveraging existing strengths at Illinois, the CADM research team is pursuing the next generation of human-machine (cognitive-algorithmic) decision systems by pairing engineering decision theory and machine learning with the emerging science of social networks and human decision-making.
There is a fundamental difference between the science of decision-making in engineering and the manner in which cognitive and social psychologists model and attempt to understand human decision-making. In engineering, decision systems are implemented in hardware and are designed through an optimization or game-theoretic framework in which an optimal policy attempts to maximize well-defined gains or minimize well-defined losses. These systems are not subject to the numerous cognitive biases, documented by behavioral economists, that can plague human decision-makers.
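The engineering framing above can be sketched in a few lines: given actions with known outcome probabilities and payoffs, an optimal policy simply selects the action that maximizes expected utility. The action names and numbers below are illustrative assumptions, not data from the project.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, payoff) pairs for one action."""
    return sum(p * payoff for p, payoff in outcomes)

def optimal_policy(actions):
    """actions: dict mapping action name -> list of (probability, payoff).

    Returns the action with the highest expected utility -- the
    'well-defined gains' criterion of engineering decision theory.
    """
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Illustrative choice between a sure payoff and a risky gamble.
actions = {
    "sure_thing": [(1.0, 450.0)],
    "gamble":     [(0.5, 1000.0), (0.5, 0.0)],
}
print(optimal_policy(actions))  # prints "gamble" (expected value 500 > 450)
```

Any richer objective (risk penalties, game-theoretic payoffs) slots into the same pattern by replacing the `expected_utility` criterion.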
Humans can be exceptionally talented decision-makers in repeatable situations with immediate feedback and a controlled environment, such as in the game of chess. However, humans are highly fallible in many other contexts, where biases, framing, and anchoring effects can give rise to systematically irrational behavior. While the algorithmic approach is immune to such cognitive pitfalls, it also lacks the creativity and imagination of its human counterpart.
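To make the framing effect concrete, the following sketch uses a value function in the style of Kahneman and Tversky's prospect theory (concave for gains, convex and steeper for losses). It is an illustration of the phenomenon, not the project's model, and the parameter values (0.88, 2.25) are the commonly cited estimates rather than fitted quantities. Within each frame the two prospects have identical expected monetary value, yet the modeled agent prefers the sure thing when outcomes are framed as gains and the gamble when the same stakes are framed as losses.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory-style value function: concave for gains,
    convex and steeper for losses (loss aversion factor lam)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def subjective_utility(prospect):
    """prospect: list of (probability, outcome) pairs."""
    return sum(p * value(x) for p, x in prospect)

# Gain frame: sure gain of 500 vs. a 50/50 gamble for 1000 or nothing.
sure_gain   = [(1.0, 500.0)]
gamble_gain = [(0.5, 1000.0), (0.5, 0.0)]

# Loss frame: sure loss of 500 vs. a 50/50 gamble to lose 1000 or nothing.
sure_loss   = [(1.0, -500.0)]
gamble_loss = [(0.5, -1000.0), (0.5, 0.0)]

# Expected monetary values tie within each frame (500 vs. 500; -500 vs. -500),
# but the value function reverses the preference across frames.
print(subjective_utility(sure_gain) > subjective_utility(gamble_gain))  # True
print(subjective_utility(sure_loss) < subjective_utility(gamble_loss))  # True
```

An expected-value maximizer is indifferent in both frames; the reversal above is exactly the kind of systematic departure from "rational" behavior that the engineering framework alone does not capture.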
The merits of algorithmic decision theory are well-known and have given rise to a century of development of decision science spanning mathematics, economics, and engineering. Similarly, the study of human decision-making has deep roots in psychology, and its link to economics dates to at least the 18th century. It was only in the latter half of the 20th century that the field of behavioral economics began to relate cognitive, social, and emotional factors to the decisions of economic agents. Until now, the two approaches to modeling, understanding, and developing systems for decision-making have not been rigorously and jointly examined by a multidisciplinary research team.
Basic research conducted by the project team has the potential to contribute knowledge and technology to actual decision systems deployed in real-world settings. At this moment, new collaborative decision systems are being implemented at companies like Google, Yahoo, and Facebook, yet there remains a considerable gap between these implementations and our understanding of their operation.
As emerging collaborative decision systems like these are often proprietary, researchers have had limited opportunity to learn from their data, and the public has had no chance to consider their possible consequences. This SRI provides an opportunity to apply a rigorous program of research on decision theory from the perspective of engineering to such real-world systems. This work has made strides toward a deeper understanding of these transformative new collaborative decision systems that are already in wide use.