Artificial Intelligence: Thinking, fast(er) and slow
Date: 2023-04-25
Author: John Brennan
Source: https://johnbrennan.xyz/essay/thinking-faster-and-slow

Before artificial intelligence there was intelligence. Wisdom is knowledge combined with discernment. Intelligence is our capacity to learn, to apply knowledge to manipulate our environment, and to think abstractly.

In his book Thinking, Fast and Slow, Daniel Kahneman popularized the modern understanding of human thinking as the by-product of two "systems" -- System 1 and System 2 (originally so named by the psychologists Keith Stanovich and Richard West). As Kahneman and his research partner Amos Tversky explained, System 1 responds automatically and quickly: it takes little to no effort for your sight, hearing, or smell to tell you a house is on fire, and there is little to no sense of voluntary control in using it. System 2, on the other hand, "allocates attention to the effortful mental activities" and enables tasks like complex computation. We feel much more agency and choice over when we allocate our energies to concentration.

These systems are electrochemical machines (that we barely understand) storing experiences and information in associations that form "the model" of what a brain thinks normal is. The models are tuned to what feels like normal temperatures, altitude, air pressure, length of day, length of season, things that are safe, things that are dangerous, faces we know, names we remember, and people we trust. According to Kahneman, the main function of System 1 is to maintain and update a model of your personal world and what is normal in it. New, novel, abnormal information leads to surprise. Surprise triggers both a System 1 and a System 2 reaction.
The System 1 reaction is what we call fight or flight: our sympathetic nervous system automatically changes our heart rate, blood pressure, breathing rate, the dilation of our pupils, and the priority of energy to other systems in the body. We also now know, based on a 2022 study at MIT, that Systems 1 and 2 work together to release the neurotransmitter noradrenaline, which encodes the new association into our model of normal. For example, your first car crash is deeply disorienting; your second or third does not trigger the same initial feeling of cognitive helplessness.

Artificial intelligence is the deliberate effort by humans to build what I think of as System 3. System 3 is us thinking through in advance: a) what we expect normal will be, b) the base-rate cases and alternatives for that normal situation, c) what we think the edge cases will be, and d) then crafting the algorithms that best represent our System 1 model. It also needs instructions for how that model should react under those conditions and for when it should stop and hand off its actions to a set of System 2-mimicking algorithms.

What are algorithms? Webster's reminds us they are procedures for solving a mathematical problem in a finite number of steps, frequently involving repetition of an operation. The algorithms that make up a System 3 implementation need data, software, models, and interfaces constrained to specific tasks and contexts. For example, my car has three algorithms I rely upon. First, it gives me visual and audio indicators when I am trying to change lanes and another vehicle is in my blind spot. Second, it gives me similar alerts when conditions indicate I am closing too quickly on an object in front of me. Finally, it will even activate the brakes if the probability of a collision exceeds a pre-established threshold. "Smart" phones have algorithms such as auto-fill of contact information, which parse the text of messages to find phone numbers and email addresses.
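The contact auto-fill behavior can be approximated in a few lines. This is a minimal sketch, not how any particular phone actually implements it; the two patterns below are simplified assumptions, and real parsers handle many more phone and address formats.

```python
import re

# Simplified patterns -- real contact parsers handle far more formats.
PHONE_RE = re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_contact_info(message: str) -> dict:
    """Scan free text for phone numbers and email addresses."""
    return {
        "phones": PHONE_RE.findall(message),
        "emails": EMAIL_RE.findall(message),
    }

info = extract_contact_info("Call me at (555) 867-5309 or write to jenny@example.com")
print(info["phones"])  # ['(555) 867-5309']
print(info["emails"])  # ['jenny@example.com']
```

The same structure -- a constrained model of "normal" input plus a narrow extraction task -- is what makes such an algorithm trustworthy in its context and useless outside of it.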
[Image credit: Chris Auffenberg]

Perhaps the most medically intrusive autonomous system making life-or-death decisions today is the collection of algorithms in an automated external defibrillator. We trust this machine to detect heart rhythms, assess for ventricular fibrillation, and then autonomously apply a set voltage for a specific duration to restore a normal rhythm. All of this can happen without our consent if a bystander elects to intervene during an emergency with this life-saving technology.

Perhaps in the fullness of time a series of reliable System 3 implementations could resemble a nascent kind of artificial general intelligence. The pace at which we approach this potential moment is accelerating, as it seems Wright's Law also works on computers and storage. In 1936 Theodore Wright observed that for every doubling of the cumulative volume of airplanes produced, costs fell by a roughly constant percentage. That process has continued for computers: GPUs and TPUs have made large math problems fast and affordable. As a result we are moving from performing aperiodic statistical analysis of samples to near-real-time models based on data approximating the entire population for a problem.

As others have observed, AlphaGo and ChatGPT are Sputnik-like moments for this generation. Sputnik ushered in a space race that cost more than $300 billion over twelve years (current dollars, figure below), i.e., the budget for the US to get to the Moon. We see this historical pattern, but we do not have a specific moon landing in mind right now. What is more likely to happen is a Cambrian Explosion of new efforts in all domains and by all actors. The technologies involved are vastly different in their data sources and applications, e.g., natural language processing, classifiers, generative pre-trained transformers, deterministic and non-deterministic reasoning for autonomy, and computer vision.
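Wright's Law is a simple power law: the unit cost after n cumulative units is cost(n) = cost(1) * n^b, where b = log2(1 - learning_rate). A quick sketch, using the roughly 20 percent learning rate commonly cited from Wright's aircraft data (the rate and the $100 first-unit cost are illustrative assumptions, not figures from this essay):

```python
import math

def wright_cost(first_unit_cost: float, n: int, learning_rate: float = 0.20) -> float:
    """Unit cost of the n-th unit under Wright's Law: each doubling of
    cumulative production cuts unit cost by `learning_rate`."""
    b = math.log2(1.0 - learning_rate)  # negative elasticity exponent
    return first_unit_cost * n ** b

# Each doubling of cumulative volume multiplies unit cost by 0.8:
for n in (1, 2, 4, 8):
    print(n, round(wright_cost(100.0, n), 2))
```

Running this shows costs falling 100 -> 80 -> 64 -> 51.2 across three doublings, which is why sustained production growth compounds into dramatic affordability gains for chips and storage.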
[Figure: space race spending; data from The Guardian]

As organizations contemplate building System 3 capabilities, they must first contend with the limitations and frailties of Systems 1 and 2. Experimentation, test and evaluation, and peer review will be useful practices for identifying, and then correcting for, the range of ways we can hard-code the faulty heuristics, biases, and other illusions of understanding and validity that are the foundations of overconfidence in human experts. As we are reminded by C-3PO, the trusty droid responsible for human-cyborg relations, even System 3 will still need our assistance: "Sometimes, I just don't understand human behavior."