Then Do Better


Leigh Caldwell: cognitive economics, power of stories, how the mind consumes dreams and plans future actions | Podcast

Leigh Caldwell is a cognitive economist. Leigh has done excellent work around the psychology of pricing and exploring how people consume intangible products with their mind. He has founded several software companies and is co-founder of the Irrational Agency.

We chatted about Leigh graduating from university at 18 and what attracted him to the internet; why he wanted to start companies and what led him to the path of psychology, behavioural economics and ultimately cognitive economics.

How the question of “Why do we get so much of what's important to us from what is manufactured inside our heads?” inspired Leigh to understand more about the brain.

Leigh discusses his ideas on why the human brain might have developed the mental tools that we have. We explore the idea of mental simulation, what the brain may be incentivised to do and how the brain may solve the challenge of planning for future action and deferred gratification.

I believe that some of these mental tools arose, not quite for communication, but at a slightly earlier stage, which is the idea of planning. When… we talk about this idea of the future self. You may take an action now that is intended to benefit your future self. …it's not just humans. The stereotype - the cliché - is the squirrel that buries the acorn instead of eating it so that its future self will have an acorn. Now, I don't believe that your future self can motivate you today. That would be time travel. Your future self doesn't exist yet. Your future self cannot tell you what to do or influence you because it won't be there for a while.

However, clearly there is an evolutionary advantage in being able to act for your future self. There obviously is, because if you only act for your present self, you can squander a lot of resources. Your future self will have less chance of surviving and therefore of passing on your genes. So I think evolution made this accidental discovery that if you can imagine the utility of a future consumption occasion, then you'll get utility from that right now. It's essentially a way of your brain getting the payoff now for an action that will have a future benefit. I believe that that-- In the case of the squirrel, it might be kind of coded in as an instinct. The squirrel may or may not be using an imaginative process, I don't know.

But there is this idea of mental simulation that we have observed in lots of animals; not just humans. It's the idea that we can replay or pre-play an experience and essentially feel again what it felt like. So you'll see it when you can see a mouse, for example, that is sleeping. If it's been trained to follow a maze, you can see the same kind of place cells that are activated when it's in the maze, you can see them activating again while it's sleeping… And what it's probably doing is replaying that experience so it can make synaptic connections that will make it more efficient to follow the maze next time, or it's kind of calibrating, "What if I went that way through the maze, what would that be like?" And it's kind of planning out some possible routes. So essentially, what I think happens there is when the mouse is at a certain point in the maze where the cheese is, it gets some reward so that generates dopamine in the brain and so on. But when the mouse imagines being at that point in the maze, I think it also gets some reward. So the brain is doing this clever job of being able to reward you for imagining something good. And essentially for planning out the chain of events that would get you there so that you could take an action now that will follow those reward breadcrumbs and lead you there.

So essentially, in order to solve the problem of future action of planning and deferred gratification, the brain has evolved a synthetic gratification or a synthetic reward that happens when you imagine things. And I think that, although probably the evolutionary function of it was to allow us to conserve resources and spread consumption over time, the side effect of it is that we can get rewards for not consuming anything. In a sense, what we're doing, I think, with art and with daydreaming is tricking ourselves into having this reward, even when we may never get the thing that originally it was associated with.


Leigh discusses the idea of discounting the future, the challenge of long causality chains in our scenario thinking, and how the ease of imagination may impact how we think of future actions.

Leigh explains his model exploring agency and choice, and the difficulties and limitations of models.

We discuss the power of stories, why narrative might work and the possible process of conditioning and deconditioning to story narratives. Why once a myth or pattern is embedded, it is so difficult to work around.

Leigh gives me some free consulting on how to price and sell sustainable investing products, how to use surveys, and how a small company could do it themselves to an extent. We discuss utility theory and nudge economics.

We play over-rated/under-rated on: nudging, carbon tax, carbon labels, being a generalist, deliberative democracy, and Scottish Independence.

We end on current projects and Leigh’s life advice.

“...storytelling is your voice going out there and hopefully having as big an impact as you want. But story hearing is the other side of it. We should be listening to the stories of all the people around us, and the many people whose stories are not heard in society. By hearing those stories and understanding what underlies them, then we will be able to figure out what the right story is that we might then want to tell…”



Transcript (only lightly edited, expect typos etc.)

Ben: Hey everyone. I am super excited to be speaking to Leigh Caldwell. Leigh is a cognitive economist. He's done some excellent work around the psychology of pricing. He's also explored how people consume intangible products with their mind. He has founded several software companies and is the co-founder of the Irrational Agency. Leigh, welcome.

Leigh (00:59):

Hi. Lovely to see you, Ben.

Ben (01:01):

So I'm going to start us back a little bit in time, not too far back to the 1990s and Scotland.

Leigh (01:09):

Right. Well, what happened in Scotland in the 1990s?

Ben (01:13):

Well, you were at university there. How did you find that? So Maths and Physics, although you've ended up in kind of more fuzzy and economist type of work rather than pure maths and algebra.

Leigh (01:25):

Yes. Although actually, the word fuzzy is interesting because one of the things that I was exploring in my Maths degree was Fuzzy Logic, or what is now called Fuzzy Logic, Fuzzy Set Theory. And in a sense, some of my current work has kind of looped around to that. I was at university then. I went to university a bit early and so I graduated when I was 18-- This was 1994, then decided this internet thing sounds good. I think we just had-- Netscape had just floated and made people billionaires and there was obviously a lot of excitement. I thought, "You know what? I can put all the academic study on hold for a little while. I'll come back to the Maths. And in the meantime, I will start some kind of .com business, build websites, become a software developer, and become a billionaire." Haven't quite gotten there yet. It's still in the plans.

Ben (02:28):

Still working on the Bill Gates thing. Was there anything about being, I guess, around Glasgow or being young, which made a difference? So I went to university when I was 17 and finished at 20. More or less, apart from the first term of not really drinking and therefore perhaps not drinking as much as a lot of my peers, probably not too much difference for me. But I didn't know whether the youth element was something different, or perhaps also being in Scotland in the 1990s set you on this path, or is it just randomosity?

Leigh (03:03):

I mean, I think my experience was like yours. The first year I was finding my place. But to be honest, everyone's finding their place the first year in uni. And so I probably didn't have a hugely different experience. To be honest, it was more being in the science end of the campus and not the arts and humanities end of the campus that probably made more difference. Me and all my classmates hung out in the games room playing pinball instead of going on dates with people. But that's how it is when you're in science and engineering, to be honest. So that was the only real impact. The fact that I went early was because I just had a very mathematical mind and I'd always been obsessed with those kinds of things.

I'd been reading recreational maths books. I don't know if any of your listeners will remember Martin Gardner, who was an author of maths books that were just really good at stimulating someone's intellectual interest in those things. So, I suppose I had led the life of the mind, and that kind of brought me to that place early. That's always been something that's a theme of my work ever since. And even when I went into a much more practical kind of commercial environment, I always had that attitude of trying to theoretically reason out what was the new idea or the best way to do this. In some senses, that could be a disadvantage. So I was trying to sell to clients, but I was trying to do it in a very rational way and building mathematical models of how clients should buy products and so forth. That obviously doesn't really work, but that's kind of what led me.

Ben (05:01):

Don't follow the model.

Leigh (05:02):

That probably led me then to become interested in psychology, and in behavioural economics, and then ultimately cognitive economics, and that idea of trying to understand why people didn't do what they theoretically should. But then rather than abandoning the idea of theory, it was to adapt the theory and say, “Right. There's still a theory. You can still build a model of this. You can still figure out why and how people buy things. It's just not the model that we've been taught. It's not the rational Adam Smith economics model. There's something else going on here, but you can still work it out.” That's what I then decided to do after about 10 years of building software: to go and figure out what is actually happening inside people's brains and model that instead.

Ben (05:57):

Do you miss any of that pure maths type of work? Although I guess you have come back to it, like you say, full circle. I think of these people who go off to do the academia and all of that, and there's a tiny bit of me which thinks that, "Maybe I should have done that more than real world." And then I come back into the real world and say, "Oh, no, actually trying to solve things in the real world is definitely where I found it." Whereas you seem to still do quite well with one foot in both camps to some degree; going along to economics conferences, still publishing papers and doing that work. And then also then the applied side, which I'm sure we'll come to, about using story, thinking about systems theory, about how people actually work as agents in the world and all of that.

Leigh (06:43):

There are definitely times that I think I would love to just lock myself away in a room for a year and just work on theoretical constructs and building things. I don't really know what it would be like. When I was at university, Fermat's Last Theorem hadn't been solved, and that was kind of a classic thing that you would love to just be a pure theoretician and think about. Andrew Wiles, who did solve it, basically did lock himself away for almost seven years to work on it and not speak to anyone. But that was also his job. He was a professor of Mathematics. So I came to a realization at maybe around the age of 30 that it was a little bit self-indulgent to want to go and just work on theory, and not think about the real world.

Those actually are pretty important problems in the real world; there's the idea of both working on those problems and doing something real. But also I think there was even an aspect within mathematics of, “Is it a little bit just competitive to say can I solve Fermat's Last Theorem before anyone else can?” Or is it more meaningful to say, "Well, if I can invent something new, I'm not going to win against anyone, I'm not going to beat anyone, but I might actually make a bigger contribution by coming up with something." And then in the end, although cognitive economics is not just kind of my invention, I think I have made some advances in creating new work in that area. So maybe that will have a bigger impact than just working on some obscure number theory.

Ben (08:38):

I was reading about a recent Fields Medal winner, so this is a great prize in mathematics. His mastery came about, he would claim, by a kind of search for beauty. So that is one take on it. But I think he would admit this is theoretical, very head in the clouds. But he's come to it because he found maths beautiful and wants to find that beauty in maths, which I actually see to a certain extent. And I think a lot of non-mathematicians don't see that. But I was kind of interested in that academic work that you continue to put out. I think you're doing something on Goodhart's law. You've done things on this System 3. What are you most proud of? Or maybe what's most misunderstood about that? Versus say, even behavioural economists or mainstream economists, what don't they understand about this kind of area that you are working with? And if you could explain it to them, that could pique their interest in saying, "We need to take this area more seriously."

Leigh (09:38):

What first got me interested in this world was the question of, “Why do we get so much of what's important to us from what is manufactured inside our heads?” So, there's a great essay by Thomas Schelling, who was a Nobel Prize winner in economics, called "The Mind as a Consuming Organ." And that says it very well. We get so much utility from things that are purely internal, or at least the internal process that might be triggered by something external. So art is a great example. You watch a movie, you read a book, you see a piece of art, and you haven't consumed anything material. It has not given you any object, but just some images and some words have triggered this world building in your head and you get so much out of that.

So it creates emotions. It creates experiences that are joyful and sad, but there's a lot of our experience of life that is happening just inside the brain. It might not be externally triggered. It could be daydreaming. It could be the beauty of mathematics. You're working on something in your head. I think all mathematicians do recognize that beauty and the aesthetic pleasure that we get from working on something like that. That's just the same I suspect as for people who get that beauty from art and from literature. I wanted to understand what is going on in there. “Are there trade-offs? We still get pleasure from things outside: you still eat food, you still own a car, you have lots of external utility.”

And so are those things complementary? Where does the internal one come from? Why in a sense don't we just continue to manufacture internal utility? If that's so good, why don't we just have more of that? Presumably, there's no scarcity. It's not like you have to spend money to have a nice daydream. So rather than worrying about my budget of whether to buy a coffee or what car to get, should I just daydream more? There's a tradition in philosophy and psychology that does kind of say that. And maybe if you're very enlightened and you've studied, if you've done a lot of meditation, then maybe you do have a lot of control over creating the mental state that gives you the most satisfaction or happiness.

But most of us don't. Most of us use triggers that are partly external to create those mental states. So I basically wanted to understand that whole process; how it happens, and what are the trade-offs? How do we get more of it? How can we have a better experience of life? I thought the tools of economics are really optimized for trying to answer that question, but for material goods. We look at, “How do we optimize our consumption bundle to create the maximum utility given the budget that we have?” Well, there must be some way to work on that, and say how do we optimize our mental consumption experience given whatever constraints we're under? Maybe we've got attention constraints, or there's only a certain amount of time we have, or it uses a certain amount of energy to daydream and to create a nice image.

So those were the dilemmas that led me into this question. Then I started to try and work almost from first principles to see what is actually happening. What is it that is going on in the brain when we're daydreaming or when we're imagining, and why does it push our utility buttons? From an evolutionary point of view, you'd expect that utility buttons are there to motivate you to do things that ensure your survival and your reproduction. That's what fundamentally we think is supposed to be happening. So why can I still push those utility buttons in my own head? Wouldn't that make me less likely to survive or pass on my genes? So I kind of started there thinking, "What's that? How do you resolve that paradox of creating pleasure for yourself that doesn't seem to ensure our survival?"

Ben (14:40):

That's really interesting. And that's the mathematical side of you taking it all the way back to first principles in trying to figure that out. That makes me reflect on a couple of things. One is at the high end: it is an explanation for people who are particularly good at mindfulness, or Zen monks, or things like that, who seem to have extraordinary contentedness in the face of what most people would find extreme. But they definitely don't seem to suffer. In fact, they would go the other way. And then the other thing which it causes me to think about is-- I guess what I've heard the term is inter-subjective myth, or this thing that humans find valuable because only humans find it valuable. So I have this joke like, "The dog doesn't care or the parrot doesn't care, right. It's only humans who care," which actually covers a vast quantity of human things like human laws, human money, human myth, religion. That's kind of why I call it 'myth' a little bit.

Some of that seems to be really effective because other humans also value it. But to your point, there's actually some of it which we self-construct-- In fact, quite a lot of it that we seem to self-construct. And I recall listening to some evolutionary psychologist type people on the social cohesion part. The idea that if you're going to hunt the woolly mammoth or do a lot of things, that actually communication and this social cohesion thing might be one reason why it's developed. Although quite difficult to get the counterfactual. So thinking about that-- I guess that's two things. So one is this idea of thinking about the future and how we think about our future, and the other is this counterfactual; how do we think about different worlds, the multiverse, but at least counterfactuals in our own lives. It seems to me that this set of tools is one way of looking at these kinds of things. So how would you use it for those types of ideas?

Leigh (16:40):

Yeah, I think that is exactly the space that we need to explore to figure that out. I believe that some of these mental tools arose, not quite for communication, but at a slightly earlier stage, which is the idea of planning. So if you think about the... We talk about this idea of the future self. You may take an action now that is intended to benefit your future self. And it's not just humans. The stereotype, the cliché is the squirrel that buries the acorn instead of eating it so that its future self will have an acorn. Now, I don't believe that your future self can motivate you today. That would be time travel. Your future self doesn't exist yet. Your future self cannot tell you what to do or influence you because it won't be there for a while.

However, clearly there is an evolutionary advantage in being able to act for your future self. There obviously is, because if you only act for your present self, you can squander a lot of resources. Your future self will have less chance of surviving and therefore of passing on your genes. So I think evolution made this accidental discovery that if you can imagine the utility of a future consumption occasion, then you'll get utility from that right now. It's essentially a way of your brain getting the payoff now for an action that will have a future benefit. I believe that that-- In the case of the squirrel, it might be kind of coded in as an instinct. The squirrel may or may not be using an imaginative process, I don't know.

But there is this idea of mental simulation that we have observed in lots of animals; not just humans. It's the idea that we can replay or pre-play an experience and essentially feel again what it felt like. So you'll see it when you can see a mouse, for example, that is sleeping. If it's been trained to follow a maze, you can see the same kind of place cells that are activated when it's in the maze, you can see them activating again while it's sleeping. And what it's probably doing is replaying that experience so it can make synaptic connections that will make it more efficient to follow the maze next time, or it's kind of calibrating, "What if I went that way through the maze, what would that be like?" And it's kind of planning out some possible routes. So essentially, what I think happens there is when the mouse is at a certain point in the maze where the cheese is, it gets some reward so that generates dopamine in the brain and so on. But when the mouse imagines being at that point in the maze, I think it also gets some reward. So the brain is doing this clever job of being able to reward you for imagining something good. And essentially for planning out the chain of events that would get you there so that you could take an action now that will follow those reward breadcrumbs and lead you there.

So essentially, in order to solve the problem of future action of planning and deferred gratification, the brain has evolved a synthetic gratification or a synthetic reward that happens when you imagine things. And I think that, although probably the evolutionary function of it was to allow us to conserve resources and spread consumption over time, the side effect of it is that we can get rewards for not consuming anything. In a sense, what we're doing, I think, with art and with daydreaming is tricking ourselves into having this reward, even when we may never get the thing that originally it was associated with.
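
A minimal sketch of how that pre-play mechanism could work computationally -- my construction for illustration, not Leigh's model: imagined replays propagate a discounted copy of the end reward back along the route, leaving breadcrumb values at earlier states that make the first step attractive now.

```python
# Toy sketch (illustrative, not Leigh's model) of "pre-play": imagined
# replays propagate a discounted copy of the end reward back along the
# route, leaving breadcrumb values at earlier states.

GAMMA = 0.9  # per-step discount on imagined reward
route = ["start", "corner", "junction", "cheese"]
value = {state: 0.0 for state in route}
value["cheese"] = 1.0  # the real reward lives only at the end

for _ in range(5):  # a few imagined replays of the route
    for i in reversed(range(len(route) - 1)):
        # Each state inherits a discounted share of its successor's value.
        value[route[i]] = max(value[route[i]], GAMMA * value[route[i + 1]])

print(value)
# approximately: start 0.729, corner 0.81, junction 0.9, cheese 1.0
```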

Ben (21:20):

That's fascinating. I've never heard it explained so clearly about essentially a construct of future self or other worlds being reinforcing and using that mechanism and the evolutionary roots of it. That makes me reflect on three things adjacently: one higher level, one adjacent, and one maybe more practical. One is, it seems to me that that maybe highlights a possible key function for sleep, which I don't know whether we'll ever be able to really parse out. I don't think our technology is there, but it seems to me that would offer an explanation for why sleep is quite well conserved in animals and things. If this is somehow a process for helping us plan, either with fantastical selves or future selves, whether that's growing synapses and things, and there's some evidence there is in memory. Or there's this other thing which is much more nebulous to us because we can't see inside of it.

Leigh (22:15):

Yeah. I think that's probably correct. I've looked only a little bit into sleep theory and I think the evidence seems to be consistent with that. The idea that we're either calibrating, or planning, or just reinforcing particular pathways and probably… I guess in a way you need the sandbox of sleep where you're not in the sensory world, but you're still able to run those processes without actually taking any action.

Ben (22:49):

Exactly. And you're either hardening the kind of patterns or synapses or whatever that you need, or running them through when you're not having to take everything else in. Then the higher level left field question, which just occurred to me, is I hadn't considered the philosophical roots of some of this, which now seem to me intriguing and quite important. So I guess from a moral or maybe a moral philosophy point of view, does this mean we should actually potentially put more weight onto our future selves? There's also this idea of how importantly we should treat future generations, which actually comes into climate and sustainability and everything else. But there is some notion that maybe we don't pay enough attention. And if we did weight... For instance, if you weighted your future self almost as important as your today self, then at least in economic thinking your discount rate is whatever it is, and you do almost everything for that future self. It seems to me this is a mechanism of thinking around about that, but that actually if we already do it, there is some weight to saying that maybe we should think about it and do it more. Have you thought about it from a moral philosophy point of view? Does that make any sense?

Leigh (24:03):

You could almost argue the other way. You could say if we are already taking into account our future self intuitively in our present actions, then we shouldn't need to artificially increase the weight that we put onto it.

Ben (24:18):

If we're doing it well, I guess.

Leigh (24:20):

Yeah. Absolutely. I think there's really important work to be done on figuring out how well we do that. I think one of the things that comes from this is that maybe, when we do think about our future self, we're not discounting by time. So in economics, you would expect there to be a certain discount rate for each year or for a fixed period of time. But we may be instead discounting by ease of imagination and by causality. So if I can imagine a certain version of my future self more easily than another one, then maybe I'm more likely to act for that. So for example, in a way we're trained to think about our retiring self. The person who's going to collect the pension in 20 or 30 years.

And because we're quite-- We've got the tools to visualize that. You can sort of imagine me retiring. What does that look like? Which boat will I want to buy? Or what house will I want to live in? Well, at least personally, I've got some pension savings that I think are reasonably good. And I've got a pathway there, so I think that particular future self is probably well served. But the future self that is 15 years earlier? I don't really have a good picture of that. Right now, I don't have children, but maybe I'll have children. It's very hard to imagine how that will change my life. I don't know if I'll be doing the same kind of work or living in the same country.

So 10 years from now, that future self is probably not getting a very good service from me, whereas my future self when I'm 70 and retired maybe is. So I think there's visualization, but then there's also causality. So I think that what's happening when we are replaying or pre-playing these experiences is we're following a causal chain. So it's a little bit like the mouse going through the maze. You can only go from A to B, and then B to C. You can't just go straight from A to F. So the mouse has to go through the maze in the right order. We have a more abstract, but still a network of causality that is coded into our brain. We believe, because we've experienced it, that when I'm hungry, I can eat a piece of food and that food will make me less hungry. It will give me some energy. When I'm thirsty, I'll drink something. If I turn the engine on in the car, it will go. There are these causal beliefs that we have. And when we are mentally simulating a possible future, we're following a chain of those causal beliefs. Those were quite physical ones, but of course, there are lots more, like: if I'm a loving partner, then my relationship will go better and I'll be happier. So you're following a kind of causal network of events and I think that that's how the discounting works. Something that's very close in causal terms, that I can imagine myself doing now and that I will get the effect of right away, will be discounted by, say, 5%. And something that takes like three or 10 causal steps will have that 5% discount compounded over each of those steps.

So I think how causally distant our actions are might play into how much we discount, and that may not be what we want because our perception of causality is quite subjective. I can think of a certain causal chain to get to a certain point. But you might think of a shorter one because you have thought about it in a smarter way than me, so you would causally discount it less. So we end up with a conflict. And actually, I think this is why stories are so powerful, because stories show us the causal chain and they may allow us to shortcut the causal chain. So rather than having to follow through all the 20 steps in the story, I might know, "Well, if you start on the quest and you've got the guide, then you're going to win the treasure." A story can accelerate your long chain of causal reasoning, which makes it easier to get to the conclusion.
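
To make that discounting idea concrete, here is a minimal sketch -- my construction, since no equation is given in the conversation -- where the discount compounds with the number of causal links rather than with calendar time:

```python
# Toy sketch of discounting by causal steps rather than by calendar
# time: each causal link applies a fixed per-step discount to the
# imagined reward.

def causal_discounted_value(reward: float, steps: int,
                            per_step: float = 0.95) -> float:
    """Present value of an imagined reward sitting `steps` causal
    links away, with a 5% discount applied per link."""
    return reward * per_step ** steps

# One link away: barely discounted. Ten links away: heavily discounted,
# however little calendar time those steps might actually take.
print(causal_discounted_value(100, 1))   # 95.0
print(causal_discounted_value(100, 10))  # ~59.87
```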

Ben (28:59):

That's fascinating. That also makes me think of a couple of things. One is the way that you have slotted associative learning techniques, so Pavlovian and that, into it. And then this idea of the fact that we follow our own causality: we see something in the world, we make that associative learning and that's a learning that we do. It also explains why generally we're so bad with randomosity, to the extent that we can be objective. It does seem like so much more is random than we might want to believe or expect, and we find that really difficult to cope with. So we either give it stories or narratives. We have spirit trees or whatever, because that's much less effortful seemingly for our system. That's why we find it very hard to explain a lot of the big blobs which don't seem to follow the kind of causal logic that some people might follow, but actually it's because people are following other, different causal paths or other things which make sense. And it also seems to me that makes sense of this kind of effortful idea, that it takes a lot of effort for a big causal chain. In fact, this is perhaps why we're so bad at these leaps of imagination, which are either slightly random, or it just seems so hard to find what the causal chain would be, but they actually just obviously happen, because once you see them happen you can't pretend that they didn't happen.

Leigh (30:25)

Or, I suppose because if the causal chain actually has many inputs, so something like climate change, so many different things are going to have to happen to reduce carbon emissions that it's quite hard to reason about it causally and at least in an intuitive way. You can logically work out well, if all these people reduce consumption or if all these laws pass or these inventions are made, then it will have this impact. But, really my causal intuition is designed to reason for me doing something and something happening and that kind of falls down. So you do have to look at it on a social basis. I think there are probably tools that we have to do that almost like in social projection; that if everyone does what I'm doing, then it will have an effect. And you could probably almost imagine society as an agent. If you can model the behaviour that you want society to follow, then you could perhaps empathically project that. If other people do the same thing, then it will have a causal effect. So there are probably ways that we can do that.

Ben (31:45):

Yeah. I guess that would be the pattern for, for instance, slavery emancipation or women's votes. There was a society agent modelling thing. You couldn't really imagine it. Economics, at least at the time, probably hard to model, or not even within the models. But you could maybe project the kind of society you would want, and therefore that's why you had these stories and things like that. I seem to recall you did some work on your own model of sort of AI or agent-based modelling and how that might look in society, or at least in certain worlds or certain world building. Would that go into that, and have you taken that any further?

Leigh (32:23):

I've played around a little bit with that. That was a model called heuristica, where there are certain problems that are quite hard to work out mathematically. There is a thing called a closed-form solution, which is what economists like to find because they're neat. It's all of that mathematical beauty and elegance. You can just have a single equation that tells you if you increase the money supply, inflation will increase, something like that. A lot of problems are very hard to find a simple equation like that for. So instead, what you can do is build a model. You can say, "Well, on an individual basis, I can to some extent predict how an individual person will behave. Or even lower level, you can predict how a particular impulse or preference within a person's brain might play out."

So you can predict that a certain desire will incentivize a certain action to be followed. So if you build a few of those impulses into a model of how a single person behaves, and then you assemble in a piece of software a hundred people or a thousand people, and then say, “Well, and how do they interact with each other?” You can kind of build up from first-principle foundations that are simple enough to model in a somewhat accurate way to say, "Well, how do they aggregate when there are thousands of people or millions of people?" So heuristica was a model to say, “How can certain low-level beliefs or behaviours impact social outcomes, initially the gender pay gap?”

So basically exploring how certain beliefs come from the pattern matching that people might do, which is a very natural thing to do. Let's say that you're starting in the 1950s and you look around and you think, "Well, I'm an economist. I believe that people's pay correlates with their productivity," broadly speaking. So I look around and I see that white men are paid a lot more than black women, so, well, probably they're more productive. And so even if you then don't actually believe that, if you don't have an explicit prejudice, then you will extrapolate from that pattern and society will lock in those prejudices. Even when people don't believe them anymore, the effects will still persist and so you still end up with a gender pay gap.
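
A toy sketch of that lock-in mechanism -- my construction for illustration, not the actual heuristica code: employers anchor pay on the average pay they observe for each group, so an initial gap persists across generations even with equal true productivity and no explicit prejudice.

```python
import random

# Illustrative lock-in: pay is anchored on observed group averages
# (pattern matching), not on true productivity, which is equal here.

random.seed(0)
initial_pay = {"A": 1.0, "B": 0.7}  # historical gap; equal true productivity
pay = {group: [p0] * 100 for group, p0 in initial_pay.items()}

for generation in range(20):
    observed = {group: sum(p) / len(p) for group, p in pay.items()}
    for group in pay:
        # New pay offers cluster around the observed group average plus
        # noise -- extrapolating the pattern, not explicit prejudice.
        pay[group] = [random.gauss(observed[group], 0.05) for _ in range(100)]

print({group: round(sum(p) / len(p), 2) for group, p in pay.items()})
# The gap persists: roughly {'A': 1.0, 'B': 0.7} after 20 generations.
```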

So that was an example model. In the beginning of the pandemic, I mean, everyone was building models and I thought, “Well, actually, it's kind of the same challenge.” You had this SIR model, which is Susceptible, Infected, and Recovered, and it's a closed-form model. The goal of that is to solve the differential equations so that you can predict exactly how many people will be infected and so forth. But with that closed-form model, there are so many things that actually happen in the real world that mean the model cannot be as simple as that. And so it could be things like, "Well, how many people does one person bump into or encounter in a given day? How is that affected by lockdown, or how is it affected by different types of lockdown policy that you could explore?"

So I built a model along the same lines as heuristica to say, "Given some of those base assumptions, what can you predict about the progress of this disease?" So I had an ability to pull a few levers and say, "Well, if we closed down public transport, how would that change things?” And come up with a few policy conclusions. Unfortunately, even on those simple first principles assumptions you'd still need a lot of data to be able to calibrate it accurately, and like with many complex systems, the outcome that you predict is so sensitive to the initial assumptions that I couldn't meaningfully make predictions. But, I guess what I could predict is that everyone else's predictions were not meaningful.
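
For reference, the closed-form side he contrasts with looks like this: the standard SIR equations with a crude Euler integration. Treating a lockdown lever as a simple scaling of the contact rate beta is my illustrative assumption, not a description of his model.

```python
# Standard SIR compartment model with a simple Euler step. Scaling the
# contact rate beta to represent a lockdown lever is an illustrative
# assumption, not a description of Leigh's actual model.

def sir(S, I, R, beta, gamma, days, dt=0.1):
    for _ in range(int(days / dt)):
        N = S + I + R
        new_inf = beta * S * I / N * dt  # contacts drive new infections
        new_rec = gamma * I * dt         # infected recover at rate gamma
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return round(S), round(I), round(R)

# Baseline vs. halved contact rate (e.g., closing public transport):
print(sir(990, 10, 0, beta=0.3, gamma=0.1, days=120))
print(sir(990, 10, 0, beta=0.15, gamma=0.1, days=120))
```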

Ben (36:55):

My model shows that actually no one else's model is going to work there. That reminds me of something: last year I had a chat with economist Diane Coyle. She said in her last book that economists don't quite understand that their own views make an impact on the model. This is a kind of meta view. So this is exactly your point. If you have these embedded assumptions, what you say and how you think about the world, particularly on these things which are human invented, like money, things that really only matter to humans; then actually you are influencing it much more than you think. Therefore this myth of the neutral economist is potentially a harmful myth, because you kind of believe yourself to be neutral, but your commenting on it definitely acts on the system.

Leigh (37:47):

Yeah. And I think more generally, there's a lot to be figured out about transmission of-- as you say, the inter-subjective myth. It might be that there's an individual level that you can see. “This is how I'm affected by the myth, but then how do I affect other people by spreading the myth or by... Do these things reinforce while I'm sleeping and therefore do I then pass them on more to other people? Does that amplify it?” There's a lot of sociology to be done as well as economics to see how these are spread. There's a bit of work in this field called narrative economics, and I've mentioned storytelling. There are people studying this from a couple of different angles, including Robert Shiller, whose Animal Spirits book with Akerlof you might know. He's looking at narrative economics.

The question there is, "How do economic narratives spread?" So there might be a narrative… In the Great Depression there were narratives about how companies were causing the problem by trying to reduce the wages they were paying to people, and that was reducing demand, or other narratives that might conflict with that. They looked at how those narratives spread, how fast those stories spread, how they can take hold, and what the economic outcomes would be. Like, how the story causes the recession or gets you out of the recession. People do have this intuition that in a recession, the only thing you have to fear is fear itself. If you can tell people the story that it's over, then it will be over.

There's definitely some kind of element of truth in that. But there's a lot more subtlety. How can you actually persuade people of a story? How do they spread? I think one of the key things is understanding the elements of that narrative. Rather than saying a narrative is just like a sentence that floats around the world. If you say a narrative is made up of these causal steps, then you might be able to say, “Well, can we in a Pavlovian way condition people to believe the individual steps in the narrative and therefore they will adopt the whole story more quickly?”

Ben (40:10):

Interesting. And does that unpack the other way? So if you wanted to convince anti-vaxxers that maybe they had the wrong narrative or the wrong causal steps, can you unpick it, or do you need to catch them before they've gone down that route and put them on a different story?

Leigh (40:29):

Yeah. That's a great question.

Ben (40:34):

And if we have the answer, we have solved global public health policy at the same time.

Leigh (40:38):

Interestingly, there is work on de-conditioning, again at the kind of Pavlovian animal-experiments level. If you've conditioned an animal to think that the cheese is at this point in the maze, how do you decondition them? So do you put them in the maze again enough times without cheese that they start to realize, “Okay, well, I've gone there four times and it wasn't there so I'm going to change my assumption.” There's actually an equation for how the brain, when it makes a prediction-- So it predicts the cheese is around this corner and it goes, and it's not there; it expects a certain level of reward and it gets less reward. That changes the synaptic strength of the connection, so that over enough time… You never totally extinguish it, so the brain has always got these traces of almost every experience you've ever had. So it'll never go away completely.
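
The equation alluded to here is usually written as a prediction-error update; the sketch below is a generic textbook version (Rescorla-Wagner style), not a claim about Leigh's exact formulation.

```python
# Generic prediction-error learning rule (Rescorla-Wagner style), as a
# sketch of the kind of equation alluded to above.

def update(value: float, reward: float, alpha: float = 0.3) -> float:
    """Nudge the predicted value toward the received reward by the
    prediction error, scaled by the learning rate alpha."""
    return value + alpha * (reward - value)

v = 1.0  # conditioned expectation: "the cheese is around this corner"

for trial in range(10):        # ten visits with no cheese: extinction
    v = update(v, reward=0.0)
print(round(v, 3))             # ~0.028: small, but never exactly zero

v = update(v, reward=1.0)      # one rewarded visit...
print(round(v, 3))             # ~0.32: the expectation snaps back up
```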

Ben (41:37):

So you can reactivate...

Leigh (41:38):

Yeah. If you send it there 10 times and it doesn't see the cheese, and then one time the cheese is there, it hits right back. You talked about randomosity; this is one of the challenges with these narratives. You can see 10 people, a hundred people get their vaccine and nothing happens. It protects them, they don't get COVID. And then one time you see someone with a vaccine that did get COVID, or one time someone gets a vaccine and the next day they get ill. Now, it's probably a coincidence. But once you've got the story, then it could be reactivated by almost anything. It's very, very hard to...

Ben (42:19):

And it's very salient. This is why we think in ancient cultures, you saw bad weather and the tree fell down. Why isn't that a tree spirit? Actually, you know what? It could well be a tree spirit. There's something to it. The internet today would be viewed as magic by people even a hundred years ago. So I could see that.

Leigh (42:39):

It may be that we have to construct other stories that could just go around it altogether and do something new. I don't really have a great answer to that.

Ben (42:53):

So once a myth is embedded, whether by chance or somehow, it's actually quite difficult to get around. And it's interesting; that's completely adjacent. That reminds me of a lot of the work which is done, for instance, with addiction or even schizophrenia, where you can show that actually sometimes you can do a lot better for schizophrenic symptoms or even some addiction. But if you are put back into a similar environment, or you have an accidental thing in the world which triggers whatever the pathways were where you had those problems, you relapse, and there's very little it seems that you can do about it. So it's either accidental, environmental, or because of life you have to go back to where you were -- home environment or some other aspect -- and it's immediately triggering. There's seemingly very little that we can do against that, because it seems to be a pattern which was so well encoded that you can, like you say, diminish it, and actually you're living quite well with that. But then anything which reactivates it, and essentially you have this relapse.

Leigh (43:56):

Yeah. And I don't know whether we will ever have the ability to get to the individual synapses and look at, “Could you actually even remove a synapse or something with super advanced brain surgery to break those connections? Or are we never going to be able to get an exact picture?” I don't know.

Ben (44:17):

Yeah. I mean, my intuition is that there seems to be too much redundancy in the brain, that we hide it in other shadow patterns and other things, which may be not quite complete. But actually we couldn't reconstruct it because we've needed it so well conserved.

Leigh (44:30):

Yeah. One of the people whose work I've built on is George Ainslie, who wrote a book called “Picoeconomics,” which is down from macro, to micro, to pico. That was sort of an early version in the nineties of what has become cognitive economics. He was an addiction specialist. He was a psychiatrist working in the Veterans Administration in the US on addiction. This idea of self-reward, understanding how you generate reward inside your head -- he looked at that in the context of addiction. So it is very closely connected.

Ben (45:09):

Great. So putting this all together in an applied way, time for some free consulting. So if I have a sustainable investing product, or maybe just some sort of product, and I'm thinking of the service, how to price it, how customers think about it, it seems to me that what you are saying is-- So there are logical causal pathways and you're probably already doing that in your marketing and your pricing thinking. But there seems to be something about needing to know the stories that your customers are telling themselves around your products or competing products. But also, how would I then go about developing the story which will make it less effortful for them to buy? And then how does that-- I guess this is your earlier work, but then how does that relate to how I should think about pricing it? Because you get these sometimes counterintuitive things: for instance, luxury goods, you should actually just price them up and you do better, these types of things, because of the stories that customers tell themselves. So I've got some sustainable investing product. How should I think about selling and pricing it?

Leigh (46:21):

Good. It's good that you asked this. I've been working with a big fund manager on their ESG recently and thinking about just this stuff. So, yes, you want to map out the stories that they already have. And more than the stories, you want to try and get down to the individual causal connections. In my commercial work, this is one of the things we do. We might interview a thousand people-- Well not interview, but like do an online survey of a thousand people, get them to tell us stories about their investing experience or what they think about investing, and then we'll also measure the elements of those stories. We'll measure the associations that they have, the individual steps in that causal graph.

I think the best thing you can do is look for the opportunities to create new connections in that graph. So like we were saying a minute ago, it's very hard to overthrow the ones that are already there, but you can make new ones much more easily. So you could tell a new story that says, “Actually, there's a great pathway to reducing carbon emissions by finding the companies that are the most culturally pro-sustainability, and finding the best marginal investment to intensify that company's culture.” So whatever that story might be, then turn it into a three or four step process and then really emphasize that as a fund, we are going to follow this four-step process. Tell people what it is, show them the four steps, show how each step leads to the next one, show them that there's a reward to get at each of those four steps.

Turn them into a mouse in the mental maze so that they follow the cheese down those four steps, and make sure all the cheese they're going to get isn't at the end. They're going to get a mental reward by saying, "Oh yeah, we got an activist onto the board of that company." Fine. They might not have changed any policies yet, but that's still a great process, et cetera. So maybe that would be an example of how to do it. Hopefully, they might have three of those steps already encoded in their brain and you only have to make the fourth one, but then reinforce the four-step pathway in your marketing.

Ben (49:14):

And if you're a start-up or a smaller company which might not be able to afford a thousand-person survey or the like, can you simply do this via intuition, by speaking to just two or three friends or customers about the actual journey, or just even imagining it, or does that not work so well?

Leigh (49:31):

I mean, I think that's like a low-cost substitute, absolutely. A lot of this, once you think about it in a certain way, then you can start to see how it works. In your own mind, you might be able to identify some of those stories. If they're in your mind, there's a reasonable chance that they're in someone else's mind as well. And if you are a small company, you don't necessarily need to build a model of how every single investor or consumer out there is thinking. You just need to find a story that enough people believe in to probably make yourself a market. It's different if you're Vanguard or BlackRock; basically, there's no point in you finding a niche thing that 1% of people will care about. You need to map out the whole landscape and figure out how can we speak to 60% of the market. But if you're new and you're happy with 1% then, absolutely. So intuitive market research, conversations and interviews, or small-scale research can be very effective.

Ben (50:39):

But I guess you can do this for big global brands. So if you're trying to sell global toothpaste, is there something global as a story narrative, or does it only work in Portugal or Brazil? Or you can take something with a narrative which makes sense to the Portuguese but might not make sense globally, so you can segment your market and think about it that way.

Leigh (50:58):

Yeah. That's the sort of thing that we do. One thing actually that we're trying to be able to do is to make some of that research accessible to smaller companies. So if you think about toothpaste, there are certain things that are specific to toothpaste, but they're really built on deeper cultural stories and behavioural stories that are actually very general, to do with our daily habits, our diet, or our attitudes to health. There's a lot that you could gather that is very universally applicable. So you could collect a lot of data, and then even if you're a toothpaste company, fine, you get the toothpaste veneer on the front. But if you're a snack company, or if you're a restaurant, or if you're a doctor's surgery, you can still get a lot of insights from the 80% that's general. In my company, I'd like to be able to have some of that general research and offer it as a subscription to smaller companies that can just tap into the general insights, even if they can't afford to commission the bespoke research. So we're starting to get towards that. We're not quite there yet, but I think that would be a nice way to democratize some of these insights.

Ben (52:29):

Does it then make sense of the decisions that consumers make-- or even businesses, anyone that makes decisions-- which are not necessarily what an economist would think of as rational? We seem to make a lot of these decisions. Do essentially these causalities and these other stories explain that, this idea of the mental utility that we get? Does that explain a lot of this other blob? And so, being able to explain that whenever-- and it seems that most of our everyday decisions are influenced by that. I guess the other side observation is that what we say, or in general what average humans say in surveys, is often very different from what we actually do in actions. One of the great things about all of this clicky technology internet stuff is that people then follow the actions and they say, "Okay, we build all of these models on that; this is actions rather than surveys." But it seems to me that there is some understanding around that if you can understand the chain of causality which is happening in the blob. To your point, it's not financial money economic rationality, but it actually is the same sort of tools and models but with a different chain of causality, and so a different utility or value for that.

Leigh (53:47):

Yeah. Even in conventional economics, money is just a kind of construct that sits within that. It's really about optimizing happiness. And it just happens that money is much more measurable, so lots of conventional economics works on using that as a proxy. One of the motivations I had for getting into this work was there's a whole bunch of behavioural science insights about exactly what you just said. People don't do what's supposedly rational. But they're very fragmented insights. It's like, okay, you can look at this particular behaviour and say maybe that's caused by status quo bias, and this other one maybe is caused by anchoring, and this other one is caused by prospect theory and loss aversion. Famously, you look on Wikipedia at cognitive biases and there's a list of 150 different ones. There's not much of a theory to explain anything. It's just like, "Oh, I saw a thing and I put a label on it."

Ben (54:55):

Associative learning.

Leigh (54:57):

Yeah. But I felt that there have to be at least some more general principles, and probably you're never going to explain every single thing. But at least we should have an ambition for our theories that they can explain more phenomena with fewer constructs, rather than having to just assume 150 different random patchwork phenomena. There's definitely a debate about that within behavioural science. Richard Thaler of Nudge fame says, “We'll never have any theory of everything. We'll never have these kinds of universal theories. We just have to accept the patchwork.” But I don't really agree. And it's not only me versus Richard Thaler, of course. There are a lot of people like George Loewenstein, who's another kind of big thinker in this area, who I think is much more sympathetic to the idea of finding these kinds of cognitive theories.

I guess people who come from cognitive science are more amenable to this than people who come from behavioural science, because that is about finding common patterns. And actually, I think with people in AI-- there's definitely an overlap between AI and cognitive science, kind of where AI first came from as a field. There's a growing overlap between AI and neuroscience, people who are looking at the lessons that can go in each direction. I think these causal models could be quite powerful in AI. Look at, for example, the explanation problem in AI: currently you have these black boxes in neural networks that you can train, but then you don't know what's happening. And actually, instead of just training it blindly, if you say, “Let's look at building a causal structure underneath that machine learning,” then I think there's the possibility to develop AI that can learn on smaller amounts of data, and AI where we can actually explain what's happening better than the current approaches.

Ben (57:04):

That's interesting. Relatively simple algorithms with a lot of data power seem to have these interesting emergent properties. But like you say, we're not quite sure how that's happened. So that's no better than our understanding of the brain: we've sort of recreated the brain, but we don't understand it. And then there's this rogue AI alignment problem. Maybe one last thought on this area which has just occurred to me. Again, this is probably more adjacent to philosophy, but do you think the so-called transitivity or intransitivity problem has anything to say on this? So this is the idea: in maths, three is bigger than two, which is bigger than one, means three is always bigger than one. So in mathematical language, that's true. So you can find a lot of things where, if A is bigger than B, and B is bigger than C, you are saying A is bigger than C. But it seems that for a lot of human moral choices in particular, that sort of transitivity doesn't seem to hold because of the way that humans value things.

There are lots of cases where you think, "Well, actually, yeah. Most humans would decide this or that." One example in healthcare economics which doesn't follow standard money or life utility, for instance, is the trade-off that society seems to make for premature babies. So premature babies cost a lot to save-- somewhere often between a few hundred thousand and maybe a million pounds. But when you ask people, they tend to say that that is worth saving. Even though it might cost 20,000 pounds for diabetic patients. So if you were just talking about the kind of money-life aspect, then nothing's perfect, but it just seems to be an order of magnitude out. It's quite consistent that generally people in society say, "Yes." You can ask that and you get different story narratives, but they seem to have the same sort of blobs. And so some of utility theory, at least as classically understood by economists, slightly fails on that because you don't have the same transitivity in utility.
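
A toy illustration of how such intransitivity can arise -- my construction, not from the conversation: if each pairwise choice is settled by majority rule over several valued attributes, preferences can cycle even though every individual comparison feels reasonable.

```python
# Toy illustration of intransitive preferences: each option scores on
# three valued attributes, and a pairwise choice goes to whichever
# option wins on more attributes (a Condorcet-style comparison).

scores = {"A": (3, 1, 2), "B": (2, 3, 1), "C": (1, 2, 3)}

def prefers(x: str, y: str) -> bool:
    wins = sum(a > b for a, b in zip(scores[x], scores[y]))
    return wins > len(scores[x]) / 2

# A beats B, B beats C, and yet C beats A: unlike "bigger than" for
# numbers, the preference relation does not chain.
for pair in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(pair, prefers(*pair))  # all True
```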

Leigh (59:08):

So just to clarify. If you ask people if you could either save 50 diabetic people versus one premature baby, do they tend to go for the 50 people, do you think? If you put it in those terms.

Ben (59:22):

No. So they tend to go for the baby. Well, it depends on exactly how you ask it.

Leigh (59:25):

Yeah. Because if you ask it in monetary terms, yes. I can absolutely get that if you ask it in monetary terms, people will say, "Of course I will spend a million pounds to save the baby." But if you translate it into the other terms… Because I think that's what the intransitivity problem tends to be, or it's often expressed as the inconsistency of people's beliefs.

Ben (59:44):

Yeah, that's true. And also, sometimes you don't actually give them the money value at all. In fact, that's why, interestingly, doctors tend to save the diabetics. Doctors and economists are the one group...

Leigh (59:58):

Yeah. They're trained to be rational in a certain way.

Ben (59:59):

Yeah. They tend to say, "No, we'll go for the diabetics." Whereas if you ask the general population (and it does change a little bit the more information you give them), they never go to where the economists and the healthcare people go. That's just one example, but you can come up with a whole bunch of cases where you get this inconsistency. It seems that society in general has a preference like this, and it strikes me that your theory is a partial explanation for it.

Leigh (1:00:30):

Yeah. I think the closest match, the thing in this causal theory that could explain that, is people's horizon: what is people's causal horizon? As we said, these causal chains can be very long; there could be 20 steps in the chain. In many cases you're not going to follow all 20 steps, you just follow the first three. In this example, the causal chain could be: you spend a million pounds, it buys this equipment and these drugs, and that saves the baby. Three steps, and you've saved the baby. The next step in that causal chain is that the million pounds came from somewhere.

So the million pounds that was spent is not available anymore to buy insulin or something else. Therefore, if you follow it down far enough, eventually 20 people's lives are shortened because they didn't have access to those drugs. The full length of the causal chain is that you have chosen to save one person at the expense of 20 people. But most people don't follow it down that far. They stop, both because of the cognitive effort and because of the emotional reward of saving the baby. Of course it's really important to save the baby, but it's also very rewarding to people, because it taps into certain underlying instinctive triggers that we have, and also some that are learned in this world of causal reward.
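
As a rough illustration of that horizon effect (my own toy numbers, not Leigh's model), the sketch below shows how the same causal chain can look strongly positive when only the first three steps are simulated, and negative when the full chain is followed:

```python
# Toy model of a "causal horizon": per-step rewards along the chain
# Leigh describes. Steps 1-3 cover spending the money, buying the
# equipment, and saving the baby (a large emotional reward); the later
# steps are the downstream lives shortened because the money is gone.
# All numbers are invented for illustration.

chain = [0.0, 0.0, +10.0] + [-1.0] * 20

def simulated_value(rewards, horizon):
    """Sum the rewards an agent 'pre-plays' within its causal horizon."""
    return sum(rewards[:horizon])

print(simulated_value(chain, horizon=3))   # +10.0: stop early, save the baby
print(simulated_value(chain, horizon=23))  # -10.0: the full chain looks worse
```

The decision flips purely as a function of how far down the chain the mental simulation runs, which is the point about cognitive effort and early emotional reward.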

Ben (1:02:25):

We're assigning more utility to a young life saved than to something else. Also, that's a good explanation for why second-, third-, and fourth-order thinking is just so much harder for us.

Leigh (1:02:36):

Yeah.

Ben (1:02:37):

Great. So does that mean you're a fan of, say, nudging ideas, or nudge economics, or the nudge unit in government, or could that actually backfire? I guess the adjacent question is this: it seems to me that cognitive economics, or stories, is a kind of neutral tool, because you can use stories for what I would consider bad or evil outcomes as well as for good ones. Does that worry you? In fact, I'll give an example I recently came across. This is from a podcast I was listening to with Jon Ronson, who argued that one of the causal factors for why evangelicals take the position that they do on abortion was an art documentary made in the 1970s. Before the 1970s, evangelicals actually weren't very bothered; if you asked in surveys, they were not bothered. Then essentially this story changed things, which, now that I understand it having had this conversation, makes a lot of sense. It also seems very difficult to roll back, because the story is very easily activated and now has such high utility in their own myth-making. It doesn't seem to me that it would ever roll back.

So if you're on the other side of that debate, you'd probably see that as a tool used for something you don't think is very good. Equally, you can see that the same kind of tool was used to gain women's rights and other things. So do you worry about that? I guess this is the same worry people have about nudging: when you nudge, we don't know whether it's nudging for good or for bad. That's just suddenly occurred to me.

Leigh (1:04:30):

Yeah. It's not quite the same as nudging, but it does raise a similar philosophical question. The distinction is that nudging largely tries to speak to system 1, whereas I talk about system 3. In a way, system 1 is a single Pavlovian connection, one of those A-to-B links, and system 3 is where you assemble the whole set of links into a bigger landscape and can do mental simulation over it. So I guess I would say nudging works on system 1, cognitive economics on system 3. They have their different tools, but the same philosophical dilemma applies; the same dilemma as with any tool, as with nuclear power, or the famous Einstein quote about wishing he had been a locksmith, whether it's true or not. I definitely don't take the view that as a scientist you just try to discover something and don't care about how it's used. But of course, you also can't totally control how something is used.

If I publish a paper talking about this and it gets out there, should I be saying, "No, I'm not going to publish the paper, I'm not going to try to increase the sum of human knowledge and understanding, because I'm worried it might be misused"? I think the better thing is to first try to do good in your own life and use it in good ways in your work. But also, do you fundamentally believe that human progress is a good thing and is on a generally positive path? In which case, you would hope that most discoveries you make will broadly push us along that path in the right way. Yes, a discovery might have a few negative effects alongside the positive ones, but you'd have to hope it's a net positive. That art documentary was obviously made without any input from this theory, and it happened anyway. So these things can happen anyway.

Ben (1:06:57):

He completely regrets doing the documentary now as well.

Leigh (1:06:59):

Interesting. Right. Yeah.

Ben (1:07:00):

Something to think about.

Leigh (1:07:02):

But maybe by giving ourselves this wider set of tools, these more powerful tools, we might be able to paint our way around that and find a new story that can change it. I don't know for sure. Maybe in the end humans are going to discover things that we'll end up using to destroy ourselves. But I think that unfortunately, or maybe not unfortunately, there's no stopping us discovering things.

Ben (1:07:36):

So I'm going to take it to a meta level. Having now listened to your whole thesis, what we should do, because we're influencing the system, is paint this positive picture of human progress. Because if everyone believes in the positive picture of human progress, they will use the tools for more good things than bad. So that's what we have to do with our own meta-stories; otherwise we will fall to the dark side of the force, as it were. That's quite a good segue into overrated, underrated. You can pass, and we can do quick ones, or you can add a little commentary. The first one is nudging, or nudge as an idea; overrated or underrated?

Leigh (1:08:22):

I'm so immersed in the debate that there's so many arguments on both sides...

Ben (1:08:28):

Correctly rated. Is that what you end up with?

Leigh (1:08:30):

I think it is a little bit overrated in how powerful it is, and also overrated in how evil and manipulative it can be, because given that it's not actually that powerful, it can't be that bad either. It's like if Darth Vader didn't have a lightsabre and couldn't use the force, there would be limited damage he could do. So yeah, on balance, it's probably a little bit underrated, because there's been slightly too much backlash against it, and it's not that bad.

Ben (1:09:06):

Okay. Fair enough. Carbon tax, or I guess externality taxes in general. But carbon tax, or a carbon price; that whole area.

Leigh (1:09:14):

So a carbon tax is underrated by the general public, but overrated by economists. Here is why: there is a self-binding problem. Governments cannot credibly commit to keeping a carbon tax in place. Once a carbon tax is there, you cannot guarantee that it'll still be there later, and many of the things economists say about a carbon tax rely on it being there for the long term. If I knew for sure that we'd pay a carbon tax for 50 years, then yes, I could make all sorts of investments; people's behaviour would be correctly incentivized to buy the low-carbon option, and people would invent things because they know the tax is going to be there. But nobody can guarantee that. We only have to look at the objections to fuel tax right now, in the inflationary environment, to see that it would be hard to make those long-term investments.

So in practice, governments instead say, "Okay, I can't put in place a carbon tax that will make everyone build solar panels. What I can do instead is incentivize people to build solar panels." You can incentivize the behaviour you want directly instead of indirectly. It's not as efficient; economists are right that a carbon tax is a much more efficient way to do it, because then people make the right choices everywhere. Instead of having a meat-free Monday, everyone eats, say, 20% less meat, which is actually better than just skipping one day a week. All of the behaviours that would be efficient and a good outcome in theory would be incentivized by a carbon tax. But there's a lot to be said for governments picking the things to incentivize, because those are the things you can actually make happen, instead of relying on a long-term tax stretching away into the future that the next government might just reverse.

Ben (1:11:37):

Yeah. So overrated by economists, for sure; it seems to me they overrate most of their models. Carbon pricing or carbon labels: are they overrated or underrated as ideas? [Inaudible 1:11:51]

Leigh (1:11:54):

Yeah. So how do you distinguish carbon pricing from carbon tax?

Ben (1:11:59):

With a price, you don't necessarily do anything with it, but you give the information out. This is the idea that for a lot of goods and services, we don't actually know where the carbon is. I guess it's like food labelling: once you knew a product had a certain level of ingredients, maybe that could do something. Actually, there's some evidence that food labels don't do what you want them to do. For carbon labels, you need a price, or at least to measure the carbon, before you can even make a label. So it's in that same bucket. It's actively debated: some people think it's overrated, some people think it's underrated.

Leigh (1:12:37):

I would say underrated, because what it does is shorten the causal chain. People who might be motivated to reduce carbon find it really hard to take the action today when there are 20 steps in the chain. But if you can look at every product you buy and say, "Oh, well, that one's got a third of the carbon of that one," then sure, people will get their mental reward from buying the low-carbon thing if they know about it. If they don't know about it, there's no mechanism for them to get that reward. So yeah, I think that could actually be quite effective. Of course, it relies on people's motivation being the right one, but at least for those who have that motivation, it works.

One of the things we found in recent research is that everyone likes recycling. Everyone thinks recycling is the solution, because you get a mental reward: every time you throw the bottle in the bin, you get your little bit of dopamine. There's not really a mechanism like that for carbon, and so people don't engage with carbon-positive actions. But imagine a label that would show them, or an app that could count it up. Instead of counting how many Avios or Tier Points I've got on British Airways, which is a terrible thing that I do far too much of, I could score how many kilos of carbon I've saved today by my choices, have that in the app, and build up my elite status in carbon saving. That could be very motivating.
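
A toy sketch of the feedback loop Leigh imagines is below. The app, the tier names, and the thresholds are all hypothetical, invented here to show how an immediate score could stand in for a long causal chain:

```python
# Hypothetical carbon-saving tally: immediate feedback and status tiers
# in place of a 20-step causal chain. Thresholds and tiers are invented.

TIERS = [(0, "Bronze"), (50, "Silver"), (200, "Gold")]

class CarbonTally:
    def __init__(self):
        self.kg_saved = 0.0

    def log_choice(self, description, kg_saved):
        """Record a lower-carbon choice and give an immediate reward cue."""
        self.kg_saved += kg_saved
        print(f"{description}: +{kg_saved} kg CO2 saved "
              f"(total {self.kg_saved} kg, status {self.status()})")

    def status(self):
        """Return the highest tier threshold reached so far."""
        tier = TIERS[0][1]
        for threshold, name in TIERS:
            if self.kg_saved >= threshold:
                tier = name
        return tier

tally = CarbonTally()
tally.log_choice("train instead of short-haul flight", 60.0)  # reaches Silver
tally.log_choice("meat-free day", 4.0)
```

The design point is exactly the recycling one: the reward arrives at the moment of the choice, not 20 causal steps later.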

Ben (1:14:10):

That ties into the conversation I had with a philosopher about the philosophy of games, gamification, agency and all of that. Also, that is another explanation for why we're so fond of paper straws, even though they're actually not that effective.

Leigh (1:14:30):

Yeah. It's a symbol.

Ben (1:14:31):

Yes. It's a symbol because of that causal chain. That makes a lot of sense. I guess that's what we mean when we talk about signalling, whether it's virtue signalling or not.

Leigh (1:14:40):

Well, yes; self-signalling. I don't read "Marginal Revolution" so much anymore, but Tyler Cowen did have a piece about self-signalling a few years ago. External signalling can be very rational: "I'm going to signal to customers or potential partners a certain thing about myself." But self-signalling doesn't fit that explanation, yet it can still be very rewarding to send a signal to myself by having the paper straw.

Ben (1:15:16):

Yeah. Okay. And then, overrated or underrated: being a generalist. I guess this is being a generalist versus being a specialist. There are some books out there now, like "Range" and some others, suggesting that maybe general knowledge is being underrated, but then some people say, "Well, you need specialist knowledge."

Leigh (1:15:37):

Interesting. Obviously, the world needs both. But personally I've really gained a lot from being a generalist. A lot of the thoughts I have on this come from trying to pull together wildly divergent areas: sustainability; being able to motivate yourself to do something in the future; empathy, being able to think about how other people think; the arts, and the fact that we get value from consuming art; mathematics; software. I've had so many influences in my career and thinking, and I really enjoy trying to bring those all together and synthesize models. In a sense, maybe I'm trying to distil the general into the specific by creating this very particular way of thinking about cognitive economics. But I enjoy the general, so I'll go for underrated.

Ben (1:16:36):

Yeah. Okay. And then, deliberative democracy or other ways of doing decision making.

Leigh (1:16:48):

Maybe underrated in their potential. I would really like to think that deliberation, having a conversation with other people about the stories we want to build and about creating for ourselves the right motivations for action, could be very powerful. I don't know whether we've found the right mechanisms for it yet, but in principle I think it could be really good. Then again, I also think we need to understand the value of cognitive laziness and the fact that we can't think about everything; we do need to be able to delegate certain things. One way we do that, again, is in these reward-learning ideas. We delegate the problem of thinking through the whole causal chain to the proxy we've come up with: "I know that somewhere around there there's going to be the cheese, so I just think about the next turn in the maze, instead of thinking through the whole maze." We need to be able to shortcut and not think about everything in the world. So there's definitely a place for representative democracy there as well. Maybe what I can do is let a few thousand people do the deliberative democracy for me, and I'll rely on them.

Ben (1:18:28):

Yeah, I think that's an idea. So are you a fan of the idea that we can only make a certain number of cognitive decisions a day? You see this with men who only have to go into the office in a shirt and trousers; they don't have to deliberate, so there's seemingly less cognitive load and fewer decisions to make. There's some evidence that maybe you're then less exhausted, or can turn your mind to other things. Do you think that's true?

Leigh (1:18:55):

Yeah. I think we need to train ourselves not to worry about certain things and to focus our minds where they can be most productive. And yes, there obviously are social barriers in the way, like the example you've just mentioned. But the more we can let things go, the more we'll be able to focus on what matters.

Ben (1:19:25):

Great. Then the last one on this, which I guess flows on from democracy: Scottish independence; overrated, underrated, or do you want to pass?

Leigh (1:19:35):

Overrated. I've always been opposed to Scottish nationalism. The only thing that has started to make me sympathetic to it is obviously Brexit, the fact that Scotland is more pro-European, and that the current instantiation of the Scottish government seems to be quite liberal in a lot of ways I agree with. But I say that through gritted teeth, because throughout my whole life I have thought that Scottish nationalism was a barely disguised form of racism against English people. I now think that's probably overstating the case, but I'm still going to say that Scottish independence is overrated.

Ben (1:20:20):

Great. So, our last couple of questions. What are your current projects or future thinking? What are you up to at the moment?

Leigh (1:20:28):

A couple of things. You mentioned Goodhart's law. I've just written a paper with three very smart and inspiring co-authors about Goodhart's law. Traditionally stated, Goodhart's law says that a metric that becomes a target is no longer a good metric, which basically means that if you have something that's a good proxy for what you really want, and you start incentivizing the proxy, then it won't be a good proxy anymore. For example, you might know that the number of phone calls your sales people make correlates with the number of deals you eventually close and the amount of revenue. But as soon as you start incentivizing your sales people to make more phone calls, that correlation breaks down: they'll just start making loads of useless phone calls, and you won't get the results anymore.

So we discovered in this paper that the same principle applies throughout evolution, in biology, in neuroscience and the brain, in machine learning, and in economics. We developed a general model of what we call proxy divergence, which essentially says: any time you have a goal that you can't directly pursue, and you pursue some proxy for the goal instead, the proxy and the goal will start to drift apart. We've just submitted that to Behavioral and Brain Sciences and are waiting for a decision, but hopefully it will be published one way or another soon. That's the academic side.
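
To illustrate the phone-calls example concretely, here is a small simulation; it is my own toy model with invented numbers, not the paper's formal framework. Call volume is a good proxy for revenue until it becomes the incentivized target, at which point the correlation collapses:

```python
# Toy simulation of proxy divergence (Goodhart's law). All numbers are
# invented. Requires Python 3.10+ for statistics.correlation.

import random
from statistics import correlation

random.seed(0)

def simulate(n, incentivize_calls):
    """Return the correlation between calls made and revenue closed."""
    calls, revenue = [], []
    for _ in range(n):
        if not incentivize_calls:
            effort = random.random()                   # genuine selling effort
            calls.append(20 + 80 * effort)             # effort drives calls...
            revenue.append(1000 * effort + random.gauss(0, 50))  # ...and revenue
        else:
            calls.append(random.uniform(150, 300))     # call count is gamed
            revenue.append(300 * random.random())      # decoupled from calls
    return correlation(calls, revenue)

print(simulate(500, incentivize_calls=False))  # near +1: calls are a good proxy
print(simulate(500, incentivize_calls=True))   # near 0: the proxy has diverged
```

The mechanism is the one Leigh describes: once the reward attaches to the proxy rather than to the underlying effort, the statistical link that justified the proxy in the first place disappears.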

The other thing I spend my time on is Irrational Agency and the market research world. That's taking a lot of these ideas from cognitive economics about understanding narratives and applying them commercially: helping companies figure out the narratives their customers already believe in, and how they can change those narratives. We've been doing quite a bit on sustainability recently. You obviously have this dilemma in the commercial world between companies that genuinely want to help their customers behave more sustainably, and companies that just want to tell a good story that people will believe about how sustainable they are. Hopefully, we've stayed mostly on the right side of that line.

Ben (1:22:51):

Yeah, that whole idea of greenwashing. Your academic work has made me think. It's adjacent to, I guess, Gödel's theorem: the idea that if you can describe a world with a certain number of rules, X rules, then there's always something in the world which is true but would require an extra, different rule to describe, and so you can go on and on. There seems to be a pattern there: whenever you capture something, something else escapes. The second thing I reflect on is that there seem to be a couple of really big things which you can only pursue indirectly. One of them is happiness. You can't easily say "I'm going to be 20% happier" (although maybe Zen monks are different); you pursue it by proxies. But if you pursue the proxy too hard... I don't know.

The other one is this somewhat mythical thing of shareholder value, or the stock price. There might be some short-term financial engineering you can do, but typically you can't say, "Oh, I want that to be up 10% tomorrow, or even in three months' time." You have to get there by proxies, which usually means adding a valuable product or service to the world, something like that. And that seems quite complex. But it strikes me that if there is something more to that, it could be quite interesting. The metric point is definitely true. There's a lot of debate on sustainability, or ESG (Environmental, Social and Governance) metrics, but there's pushback because where we've seen them put in place, they don't do what we want.

Health and safety is another classic one. You put a health and safety goal in, saying, "Well, hit the metric and obviously you'll have fewer accidents." And then all the second-order effects kick in: your middle managers get afraid to report, other behaviours shift, and the true level of health and safety doesn't do what you thought it would. So you somehow have to keep monitoring it, because you need to know what's going on, but you normally need to incentivize it through a cultural element, because policy doesn't really save you. No one wants to get injured; no one in your workforce thinks accidents are a good thing. You need to tackle it in a very indirect way, while still monitoring it. So you think, "Well, if I monitor it, should I incentivize it?" Actually, no; you have to incentivize all this other behavioural stuff. If the metric goes into your scorecard, you get the wrong effect, and that's so hard to encompass. Particularly if you're on the outside asking, "What am I meant to hold you to account for?", because you can never pin down a thing like culture; it's way off there, and we can't measure it directly.

Leigh (1:25:33):

Yeah. Trying to understand some of the causal structure that sits underneath that kind of slippage between the metric and the actual goal is some of what we've tried to work out in the paper, though the full solution may not be in that paper. But I think we might be able to start figuring out: in that complicated causal graph, where should you put pressure? Where should you push the buttons? If not on the actual metric itself, then maybe underneath it you can see, "Here are the things that lead to the metric, here are the things that lead to the goal, here are the buttons we can push instead." Culture is one of those great cognitive shortcuts. It lets you say, "I cannot predict the 20 levels of outcome of pushing this particular button, but at some higher level I know that by enhancing the reward people get for following a certain culture, on average they're more likely to do the right thing."

I love Gödel's theorem. It's one of my absolute favourite parts of mathematics, and I think there's lots of scope to look at whether you can prove that these contradictions, these loopholes, will always exist within a system. I think they probably will. But we also need to remember that Gödel's theorem is a bit of a 'gotcha.' It's tempting to conclude, "Oh well, mathematics can never be consistent and complete; there's no point doing it." That is not true. The real lesson is: yes, there is always a loophole, but it doesn't mean that every single thing you do is going to be undermined by one. There are lots of things you can do in practice that aren't really affected by the loopholes.

Ben (1:27:37):

Yeah. I feel the same about Arrow's theorem, which is around about voting.

Leigh (1:27:42):

Yeah. Actually, I think Arrow's theorem and Gödel's theorem are, under the surface, in a way the same thing; in the same way that Turing's halting problem is the same thing under the surface.

Ben (1:27:56):

Yeah. Well, yes, voting is never going to be perfect, but that doesn't mean we can't get a lot done. And voting should evolve: if we're interested in deliberative democracy and things like that, maybe we can make something better, and that seems true. Last one: do you have any thoughts or advice for listeners out there? Maybe younger people thinking about a career or wanting to do start-ups, or anyone interested in cognitive economics, or storytelling, or anything you'd like to share really?

Leigh (1:28:34):

Am I now so old and wise that I can be giving young people advice? Although cognitive economics itself is still quite a small niche field, without loads of jobs and posts out there, I do think it will grow. If you're someone studying economics, learn a little bit about it and keep an eye on it, because I think it has the potential to be more important and more recognized in the future. The storytelling world, the world of stories, is huge right now; so many people are recognizing the power of narrative, and there are loads of opportunities there. But I like to turn it around slightly: storytelling is one thing, your voice going out there and hopefully having as big an impact as you want. Story hearing is the other side of it. We should be listening to the stories of all the people around us, and of the many people whose stories are not heard in society. By hearing those stories and understanding what underlies them, we'll be able to figure out what the right story is that we might then want to tell.

Ben (1:29:53):

Excellent. Listen to the less heard stories. I think that's a great note to end on. So Leigh, thank you very much.