Russ Roberts: We’re going to base our conversation today loosely on a recent article you wrote, “Theory Is All You Need: AI, Human Cognition, and Decision-Making,” co-written with Matthias Holweg of Oxford.
Now, you write in the beginning–the Abstract of the paper–that many people believe, quote,
due to human bias and bounded rationality–humans should (or will soon) be replaced by AI in situations involving high-level cognition and strategic decision making.
Endquote.
You disagree with that, pretty clearly.
And I want to start to get at that. I want to start with a seemingly strange question. Is the brain a computer? If it is, we’re in trouble. So, I know your answer–the answer is: It’s not quite. Or not at all. So, how do you understand the brain?
Teppo Felin: Well, that’s a great question. I mean, I think the computer has been a pervasive metaphor since the 1950s, from kind of the onset of artificial intelligence [AI].
So, in the 1950s, there’s this famous kind of inaugural meeting of the pioneers of artificial intelligence [AI]: Herbert Simon and Minsky and Newell, and many others were involved. But, basically, in their proposal for that meeting–and I think it was 1956–they said, ‘We want to understand how computers think or how the human mind thinks.’ And, they argued that this could be replicated by computers, essentially. And now, 50, 60 years later, we essentially have all kinds of models that build on this computational model. So, evolutionary psychology by Cosmides and Tooby, predictive processing by people like Friston. And, certainly, the neural networks and connectionist models are all essentially trying to do that. They’re trying to model the brain as a computer.
And, I’m not so sure that it is. And I think we’ll get at those issues. I think there are aspects of this that are absolutely brilliant and insightful; and what large language models and other forms of AI are doing is remarkable. I use all these tools. But, I’m not sure that we’re actually modeling the human brain, necessarily. I think something else is going on, and that’s what the paper with Matthias is getting at.
Russ Roberts: I always find it interesting that human beings, in our pitiful command of the world around us, have often, throughout human history, taken the most advanced device that we can create and assumed that the brain is like that. Until we create a better device.
Now, it’s possible–I don’t know anything about quantum computing–but it’s possible that we will create different computing devices that will become the new metaphor for what the human brain is. And, fundamentally, I think the attraction of this analogy is that: Well, the brain has electricity in it and it has neurons that switch on and off, and therefore it’s something like a giant computing machine.
What’s clear to you–and what I learned from your paper and I think is utterly fascinating–is that what we call thinking as human beings is not the same as what we have programmed computers to do, at least with large language models. And that forces us–which I think is beautiful–to think about what it is that we actually do when we do what we call thinking. There are things we do that are a lot like large language models, in which case it is a somewhat useful analogy. But it’s also clear to you, I think, and now to me, that that is not the same thing. Do I have that right?
Teppo Felin: Yeah. I mean, the whole of what’s happening in AI has had me–has had us–wrestling with what it is that the mind does. I mean, this is an area that I’ve focused on my whole career–cognition and rationality and things like that.
But, Matthias and I were teaching an AI class and wrestling with this question of the differences between humans and computers. And, if you take something like a large language model [LLM], I mean, how it’s trained is–it’s remarkable. And so, you have a large language model: my understanding is that the most recent ones are pre-trained with something like 13 trillion words–or, they’re called tokens–which is a tremendous amount of text. Right? So, this is scraped from the Internet: it’s the works of Shakespeare and it’s Wikipedia and it’s Reddit. It’s all kinds of things.
And, if you think about what the inputs of human pre-training are, it’s not 13 trillion words. Right? I mean, these large language models get this training within weeks or months. And a human–and we have a sort of back-of-the-envelope calculation, looking at some of the literature with infants and children–but they encounter maybe, I don’t know, 15-, 17,000 words a day through parents speaking to them or maybe reading or watching TV or media and things like that. And, for a human to actually replicate that 13 trillion words, it would take hundreds of years. Right? And so, we’re clearly doing something different. We’re not just being fed inputs: we’re not this empty vessel, a bucket that things get poured into, which is what the large language models are.
And then, in terms of outputs, it’s remarkably different as well.
And so, the model is trained with all of these inputs–13 trillion tokens–and then it’s a stochastic process of kind of drawing or sampling from that to give us fluent text. And that text–I mean, when I saw those first models, it’s remarkable. It’s fluent. It’s good. It surprised me.
But, as we wrestle with what it is, it’s very good at predicting the next word. Right? And so, it’s good at that.
And, in terms of the level of knowledge that it’s giving us, the way that we try to summarize it is: it’s kind of Wikipedia-level knowledge, in some sense. So, it could give you an indefinite number of Wikipedia articles, beautifully written, about Russ Roberts or about EconTalk or about the Civil War or about Hitler or whatever it is. And so, it could give you an indefinite number of such articles by sort of combinatorially pulling together text that isn’t plagiarized from some existing source, but rather is stochastically drawn from its ability to give you really coherent sentences.
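[For concreteness, here is a minimal sketch of what ‘stochastically drawing’ the next word can mean. It is only a toy bigram counter over a made-up scrap of text–real large language models learn neural networks over trillions of tokens–but it illustrates the basic point that such a sampler can only re-combine patterns already present in its training text:]

```python
import random
from collections import defaultdict

# Toy 'training corpus': count which word follows which, then sample the next
# word in proportion to those counts. This is a stand-in illustration, not how
# an actual large language model is implemented.
corpus = ("i like new york in june how about you "
          "i like new york in june how about you").split()

follow_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def sample_next(word: str) -> str:
    """Stochastically pick a next word, weighted by how often it followed `word`."""
    candidates = follow_counts[word]  # only words seen in the corpus have candidates
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

print(sample_next("about"))  # prints 'you'--it can only echo what it has seen
```

[The same limitation is what the Galileo thought experiment later in the conversation turns on: a sampler like this has no way to produce a claim that was rare or absent in the text it was trained on.]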
But, as humans, we’re doing something completely different. And, of course, our inputs aren’t just verbal–they’re multimodal. It’s not just that our parents speak to us and we listen to radio or TV or what have you. We’re also visually seeing things. We’re taking things in through different modalities, through people pointing at things, and so forth.
And, the data that we get–our pre-training as humans–is degenerate in some sense. It’s not–you know, if you look at verbal language versus written language, which is carefully crafted and thought out, they’re just very different beasts, different entities.
And so, I think that there’s fundamentally something different going on. And, I think that analogy holds for a little bit, and it’s an analogy that’s been around forever. Alan Turing started out talking about infants–‘Oh, we could train the computer just like we do an infant’–but I think it’s an analogy that quickly breaks down because there’s something else going on. And, again, issues that we’ll get to.
Russ Roberts: Yeah, so I alluded to this I think briefly, recently. My 20-month-old granddaughter has begun to learn the lyrics to the song “How About You?” which is a song written by Burton Lane with lyrics by Ralph Freed. It came out in 1941. So, the first line of that song is, [singing]:
I like New York in June. How about you?
So, when you first–I’ve sung it to my granddaughter, probably, I don’t know, 100 times. So, eventually, I leave off the last word. I say, [singing]:
I like New York in June. How about ____?
and she, correctly, fills in ‘you.’ It probably isn’t exactly ‘you,’ but it’s close enough that I recognize it and I give it a check mark. She will sometimes be able to finish the last three words. I’ll say, [singing],
I like New York in June. ______?
She’ll go ‘How about yyy?’–something that sounds vaguely like ‘How about you?’
Now, I’ve had kids–I have four of them–and I think I sang it to all of them when they were little, including the father of this granddaughter. And, some of them, very charmingly, when I would sing, [singing]:
I like New York in June. How about ____?
would fill in, instead of ‘you,’ ‘Me.’ Because, I’m singing it to them and they recognize that ‘you’ is ‘me’ when I’m pointing at them. And that’s a very deep, advanced step.
Russ Roberts: But, that’s about it. They are, as you say–those infants, all infants–absorbing an immense amount of aural–A-U-R-A-L–material from speaking or radio or TV or screens. They are looking at the world around them, and somehow they’re putting it together, where eventually they come up with their own frequent requests for things that float their boat.
And, we don’t fully understand that process, obviously. But, at the beginning, she is very much like a stochastic process. Actually, it’s not stochastic. She’s primitive. She can’t really imagine a different word than ‘you’ at the end of that sentence, other than ‘me.’ She would never say, ‘How about chicken?’ She would only say ‘you’ or ‘me.’ And, that’s it. There’s no creativity there.
So, on the surface, we are doing, as humans, a much more primitive version of what a large language model is able to do.
But I think that misses the point–that’s what I’ve learned from your paper. It misses the point because–it’s kind of obvious, but it doesn’t seem to have caught on–putting together sentences, which is what a large language model by definition does, is not the only aspect of what we mean by thinking.
And I think, as you point out, there’s an incredible push to use AI–large language models [LLMs] and, presumably, eventually other models of artificial intelligence–to help us make, quote, “rational decisions.”
So, talk about why that’s kind of a fool’s game. Because, it seems like a good idea. We talked recently on the program–it hasn’t aired yet; Teppo, you haven’t heard it, but listeners will have by the time this airs–about biases in large language models. And, by that we’re usually talking about political biases, ideological biases, things that have been programmed into the algorithms. But, when we talk about biases generally with human beings, we’re talking about all kinds of struggles that we have as human beings to make, quote, “rational decisions.” And, the idea would be that an algorithm would do a better job. But, you disagree. Why?
Teppo Felin: Yeah. I think we’ve spent sort of inordinate amounts of journal pages and experiments and time highlighting–in fact, I teach these things to my students–the ways in which human decision-making goes wrong. And so, there’s confirmation bias and escalation of commitment. I don’t know. If you go onto Wikipedia, there’s a list of cognitive biases there, and I think it’s 185-plus. And so, it’s a long list. But it’s still surprising to me–so, we’ve got this long list–and as a result, now there are a number of books that say: Because we’re so biased, eventually we should just–or not even eventually, like, now–we should just move to letting algorithms make decisions for us, basically.
And, I’m not opposed to that in some situations. I’m guessing the algorithms in some kind-of-routine settings can be fantastic. They can solve all kinds of problems, and I think those things will happen.
But, I’m leery of it in the sense that I actually think that biases are not a bug but–to use this trope–a feature. And so, there are many situations in our lives where we do things that look irrational, but turn out to be rational. And so, in the paper–just to really make this salient and clear–we try to highlight extreme situations of this.
So, one example I’ll give you quickly is: So, suppose we did this thought experiment where we had a large language model in 1633, and that large language model was fed all the text–the scientific text–that had been written to that point. So, it included all the works of Plato and Socrates. Anyway, it had all that work. And, the people who were judging Galileo–the scientific community–they said, ‘Okay, we’ve got this great tool that can help us search knowledge. We’ve got all of knowledge encapsulated in this large language model. So we’re going to ask it: We’ve got this fellow, Galileo, who’s got this crazy idea that the sun is at the center of the universe and the Earth actually goes around the sun,’ right?
Russ Roberts: The solar system.
Teppo Felin: Yeah, yeah, exactly. Yeah. And, if you asked it that, it would only parrot back–in terms of the words it had seen–the most frequent statements: statements about the Earth being stationary, right?, and the Sun going around the Earth. And, those statements are far more frequent than anybody making statements about a heliocentric view. Right? And so, it can only parrot back what it has most frequently seen in terms of the word structures that it has encountered in the past. And so, it has no forward-looking mechanism for anticipating new data and new ways of seeing things.
And, again, everything that Galileo did looked to be almost an instance of confirmation bias, because you go outside and our common conception says, ‘Well, the Earth is clearly not moving’–when in fact it’s moving 67,000 miles per hour, or whatever it is, roughly in that ballpark. And, you could seemingly verify that it’s stationary, and you could verify it with big data, by lots of people going outside and saying, ‘Nope, not moving over here; not moving over here.’ And, we could all watch the sun go around. And so, common intuition and data would tell us something that actually isn’t true.
And so, I think that there’s something unique and important about having beliefs and having theories. And, I think–Galileo for me is kind of a microcosm of even our individual lives in terms of how we encounter the world, how things that are in our head structure what becomes salient and visible to us, and what becomes important.
And so, I think that we’ve oversimplified things by saying, ‘Okay, we should just get rid of these biases,’ because we have instances where, yes, biases lead to bad outcomes, but also where things that look to be biased actually were right in retrospect.
Russ Roberts: Well, I think that’s a clever example. And, an AI proponent–or to be more disparaging, a hypester–would say, ‘Okay, of course; obviously new knowledge has to be produced and AI hasn’t done that yet; but actually, it will, because it has all the facts, increasingly’–and we didn’t have very many in Galileo’s day, so now we have more–‘and, eventually, it will develop its own hypotheses of how the world works.’
Russ Roberts: But, I think what’s clever about your paper and that example is that it gets to something profound and quite deep about how we think and what thinking is. And, I think to help us draw that out, let’s talk about another example you give, which is the Wright Brothers. So, two seemingly intelligent bicycle repair people. In what year? What are we in–1900, 1918?
Teppo Felin: Yeah. They started out in 1896 or so. So, yeah.
Russ Roberts: So, they say, ‘There’s never been human flight, but we think it’s possible.’ And, obviously, the largest language model of its day–now, in 1896, there’s much more information than in 1633; we know much more about the universe–but it, too, would reject the claims of the Wright Brothers. And, that’s not what’s interesting. I mean, it’s kind of interesting. I like that. But, it’s more interesting as to why it’s going to reject it and why the Wright Brothers got it right. Pardon the bad pun. So, talk about that and why the Wright brothers took flight.
Teppo Felin: Yeah, so I kind of like the thought experiment of, say, I was–so, I actually worked in venture capital in the 1990s before I got a Ph.D. and moved into academia. But, say the Wright Brothers came to me and said they needed some funding for their venture. Right? And so, I, as a data-driven and evidence-based decision maker, would say, ‘Okay, well, let’s look at the evidence.’ So, okay, so far nobody’s flown. And, there are actually pretty careful records kept about attempts. And so, there was a fellow named Otto Lilienthal who was an aviation pioneer in Germany. And, what did the data say about him? I think it was in 1896. He died attempting flight. Right?
So, that’s a data point, and a pretty severe one that would tell you that you should probably update your beliefs and say flight isn’t possible.
And so, then you might go to the science: ‘Okay, we’ve got great scientists like Lord Kelvin.’ He’s the President of the Royal Society; and we ask him, and he says, ‘It’s impossible. I’ve done the analysis. It’s impossible.’ We talk to mathematicians like Simon Newcomb–he’s at Johns Hopkins–and he actually wrote pretty strong articles saying that this is not possible. This is an astronomer and a mathematician, one of the top people at the time.
And so, people might casually point to data that supports the plausibility of this and say, ‘Well, look, birds fly.’ But, there was a professor at the time–UC Berkeley [University of California, Berkeley] was relatively new then, and he was actually one of its first professors–named Joseph LeConte. And, he wrote this article; and it’s actually fascinating. He said, ‘Okay, I know that people are pointing to birds as the data for why we might fly.’ And, he did this analysis. He said, ‘Okay, let’s look at birds in flight. We have little birds that fly and big birds that don’t fly.’ Okay? And then there’s somewhere in the middle, and he says, ‘Look at turkeys and condors. They barely can get off the ground.’ And so, he said that there’s a 50-pound weight limit, basically.
And that’s the data, right? And so, here we have a serious person, who became the President of the American Association for the Advancement of Science, making this claim that this isn’t possible.
And then, on the other hand, you have two people who haven’t finished high school, bicycle mechanics, who say, ‘Well, we’re going to ignore this data because we think that it’s possible.’
And, it’s actually remarkable. I did look at the archive. The Smithsonian has a fantastic resource of all of their correspondence–the Wright Brothers’ correspondence with various people across the globe, trying to get data and information and so forth. But they said, ‘Okay, we’re going to ignore this. And, we still have this belief that this is a plausible thing–that human, heavier-than-air, powered flight,’ as it was called, ‘is possible.’
But, it’s not a belief that’s just sort of pie in the sky. Their thinking–getting back to that theme of thinking–involved problem solving. They said, ‘Well, what are the problems that we need to solve in order for flight to become a reality?’ And, they winnowed it down to three that they felt were critical: lift, propulsion, and steering being the central problems that they needed to solve in order to enable flight to happen. Right?
And, again, this is going against really high-level arguments by folks in science. And they feel like solving those problems will enable them to create flight.
And, I think this is–again, it’s an extreme case and it’s a story we can tell in retrospect, but I still think that it’s a microcosm of what humans do. One of our kind of superpowers–but also one of our faults–is that we can ignore the data and say, ‘No, we think that we can actually create solutions and solve problems in a way that will enable us to create this value.’
I’m at a business school, and so I’m extremely interested in this: how is it that I assess something that’s new and novel, something forward-looking rather than retrospective? And, I think that’s an area that we need to study and understand rather than just saying, ‘Well, beliefs.’
I don’t know. Pinker in his recent book, Rationality, has this great quote, ‘I don’t believe in anything you have to believe in.’ And so, there’s this kind of rational mindset that says, we don’t really need beliefs. What we need is just knowledge. Like, you believe in–
Russ Roberts: Just facts.
Teppo Felin: Just the facts. Like, we just believe things because we have the evidence.
But, if you use this mechanism to try to understand the Wright Brothers, you don’t get very far. Right? Because they believed in things that were sort of unbelievable at the time, in a sense.
But, like I said, it wasn’t, again, pie in the sky. It was: ‘Okay, there’s a certain set of problems that we need to solve.’ And, I think that’s what humans–and life in general–do: we engage in this problem-solving where we figure out what the right data, experiments, and variables are. And, I think that happens even in our daily lives, rather than this kind of very rational, ‘Okay, here’s the evidence; let’s array it; and here’s what I should believe accordingly.’ So.
Russ Roberts: No, I love that, because as you point out, they needed a theory. They believed in a theory. The theory was not anti-science. It was just not consistent with any data that had been generated and were available at the time–that is, data within the existing range of weight, propulsion, lift, and so on.
But, they had a theory. The theory happened to be correct.
The data that they had available to them could not be brought to bear on the theory. To the extent it could, it was discouraging, but it was not decisive. And, the theory encouraged them to find other data–data that didn’t exist yet. And, that is the deepest part of this, I think. [More to come, 26:14]