Gerd Gigerenzer on How to Stay Smart in a Smart World
Aug 1 2022

IBM's super-computer Watson was a runaway success on Jeopardy! But it wasn't nearly as good at diagnosing cancer. This came as no surprise to Max Planck Institute psychologist Gerd Gigerenzer, who argues that when it comes to life-and-death decisions, we'll always need real, not artificial, brains. Listen as the author of How to Stay Smart in a Smart World tells EconTalk host Russ Roberts why computers aren't nearly as smart as we think. But, Gigerenzer says, human beings need to get smarter in order to avoid being manipulated by people who use AI for their own ends.

Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Nick Ronalds
Aug 1 2022 at 2:55pm

The guest illustrates beautifully the Nirvana fallacy: “invest in education” to overcome the delusions and biases that cause people to “sleepwalk” into dependence on the Tech giants. In the real world, more education dollars–and there are already a lot of them–are unlikely to go toward any sort of “optimal” education model but into teacher salaries and benefits. The reason is simple. The NEA, the teachers’ union, is one of the most powerful constituencies of the Democratic party and of the pre-college educational establishment. As Russ pointed out, high-school stats courses aren’t great. It’s easy to imagine a reformed education model with better priorities and thoughtful teachers, but how do you make it happen when the educational establishment is a quasi-monopoly run by the teachers’ unions, whose priority is salaries and benefits for members, with a little woke ideology thrown in on the side?

Nick Ronalds
Aug 1 2022 at 11:09pm

Sorry if my post above sounds like a rant. It’s just that solutions are so easy to imagine; actual policies that come out of the sausage factory never do what they were supposed to. We can probably do much more good by focusing on undoing bad policies than trying to push through one more great idea.

David Zetland
Aug 1 2022 at 3:53pm

Interesting conversation. I think the “free coffee shop” analogy is excellent, but I want to add that there’s even more danger from the manipulations of social media.

Besides selling your personal info, location data, and app use (as mentioned), they ALSO

(a) show you whatever the algorithm profits from — including propaganda, which (given the Snowden leaks) includes both the US and China.

(b) sell (in the US) into a voracious data market, where DMV records, loyalty-card data, hacks, and lots of HR data are diced and sliced into incredible profiles. These are now being used for “pig butchering” and spear-phishing scams on Americans. Stalkers, spies, and salespeople are weaponizing this data.

It’s an expensive “free” coffee.

Shalom Freedman
Aug 2 2022 at 1:02am

The ideas that AI is not about to replace us, that it cannot do and will not be able to do many things that humans can, that it lacks ‘common sense’ and needs stable realities to operate and cannot deal with the uncertainties of reality, certainly appeal to many of us not looking forward to our much-predicted move into second-fiddle status. But it seems to me this argument, or series of arguments, could be made in a stronger way. After all, it was Einstein who said ‘common sense is that layer of prejudice deposited in the mind before the age of eighteen years,’ and anyone involved in work of creation and discovery knows ‘common sense’ is not enough to do the job. Nor would I be certain that ‘stories involving causation,’ another suggested exclusively human ability, cannot in one way or another be taught to AI. It seems in any case that the kind of simplistic thinking of the ‘loading brains into computers’ people is complemented here by arguments which do not really show why AI will not be able to engage in any mental activity we can find some formula for duplicating. And this when there is the understanding that we ourselves do not really understand the full complexity of how human beings think and feel and act.
In other words, I would want to say that if we stand with humility before the uncertainties of the future, we must be open to the possibility that humanity will one day feel that ‘machine intelligences’ do important work we will never be able to do. (Perhaps above all exploring, and in some way inhabiting, the distant spaces of the universe.) And we must be open, too, to the possibility and threat of humanity seeking to create all kinds of hybrid intelligences in which AI plays a central part. It would be wonderful if we could preclude all scenarios for our ceasing to be the central species of the reality we know, but it seems to me this is not now possible.

Ajax4Hire
Aug 2 2022 at 9:40am

I disagree.

Almost ready to agree with the closing solutions till…“Remove dis-information”. That is a fool’s errand, for there will always be dis-information.

The goal should ALWAYS be to RECOGNIZE dis-information.  Truth is easy, truth can stand on its own.  It takes a Smart person to peer into the jungle of dis-information to see the slinking tiger.

Purposeful dis-information is called camouflage. We all use it to hide truth. From lifts in our shoes and vertical lines in shirts to the full range of advertising, it is all camouflage.

Teach children to recognize the camouflage, the dis-information, and you will have successful adults.

Roger D Barris
Aug 2 2022 at 11:25am

Interesting discussion.

I was in particular struck by the discussion of the business model of Big Tech firms: free services in return for data. When I ran for office, I had a simple solution for this issue: make it clear that your data is your property. You can choose to “sell it” in return for free services, but that should be an explicit choice. Alternatively, you can agree to pay for the services and maintain your data privacy. (I think that this model would also facilitate the movement of data between apps, thereby increasing the level of competition and reducing the anti-competitive results of network effects.)

We know as economists that the market does not function when property rights are ambiguous – this is the entire issue of externalities and tragedies of the commons. I think that this is true in the realm of data. This is a vast area of ambiguous property rights.

David Brisco
Aug 2 2022 at 12:37pm

I believe that one of the causes of our fall into the disinformation trap is the virtual atmosphere of lies in which we live. How much of the innocuous/inane advertising to which we are all constantly exposed is ‘true’? Most at least hints at, if not shouts, untruths or ‘near’ truths. I personally deal with three food markets: Farmers Pick, The Farm Boy, and The Country Grocer. None as far as I know has any connection to the ‘country’ or to a ‘farm’. This is probably true for most, if not all, mainstream advertising. We are always being conditioned to accept or ignore untruth.

Jason Stone
Aug 4 2022 at 1:07pm

At almost 20 min: I think the dichotomy of personal information types is wrong. What was described was two versions of the same thing: What book do I want now? Do I need some anti-depression help? Do I need prenatal meds? What exercise should I do? I think those are all good uses. More information on both sides aids economic exchange, which is usually “Non-Zero”.
The other type is what I am more concerned about, and I’m surprised it wasn’t covered here: What issue am I most likely to get riled up by, and what types of info am I likely to take for truth? Police killing blacks? Immigrants invading? Deviants raping? Corporate greed? It’s politicians and other “interests” that get this data and feed videos and memes back to us with some spin. I guess one could argue that this also, ideally, gets at what issues we want politicians to try to solve. But….

Where have you discussed this?

Chris
Aug 5 2022 at 8:05am

What I see in Germany is that the number of people who think a Social Credit System would be a good idea in Germany is increasing. In 2018, it was 10%. And now, 2022, it is 20%.

This was a depressing statistic.

AtlasShrugged69
Aug 5 2022 at 3:21pm

“So, it’s certainly that we are in a situation today where a few tech companies, and mostly a few relatively young white males who are immensely rich, shape the emotions, the values, and also control the time of almost everyone else. And, that worries me.” -Gigerenzer

What a disgusting thing to say. I’m curious: if most large tech CEOs were old black women who were immensely rich, would that make their motives inherently good? Is it their wealth that allows you to determine their motive? Or their age, sex, and race? I wonder how Sundar Pichai (CEO of Google, Indian), Parag Agrawal (current CEO of Twitter, Indian), or Elon Musk (CEO of Tesla, South African) would react to your characterization. Next time try defining their motives by the things they say and do rather than their immutable biologic traits. Each of them CLEARLY has very different ideas about how to most effectively use Social Media (compare Zuckerberg’s implementation of ‘fact-checking’ harmful information vs. Musk’s promise that Twitter will be a beacon of free speech, open for anyone to say almost anything they want).

The coffee shop example is a horrible analogy; a much better analogy would be smoking cigarettes. These harm the user, have negative externalities in certain situations, and have been legislated to death in various ways to curb demand, all to protect us from our own stupidity (and of course ‘The Children!’). If you are on Social Media in 2022 unaware that you are the product, and that it is harming you (monopolizing your time, showing you provocative posts), then you are equivalent to someone taking up smoking cigarettes in the year 2000 unaware of the negative health risks. It is not the job of the government to act as our babysitter and protect us from the ‘worrisome’ motives of Mark Zuckerberg. At MOST, the government should warn users of the dangers of using Social Media, and maybe enact minimum age restrictions for using Social Media. I think privacy values are already being promoted extensively by various non-governmental groups and individuals in society (parents who limit the time their kids can be on technology, alternative search engines like DuckDuckGo and Neeva, the Signal messaging service, etc.), but that’s never stopped the government from getting involved in anything else!

The partnership between governments and Social Media for criminal prosecutions and/or social credit is slightly more worrisome, but only because of the immense power the state wields. I would guess if Facebook or Google start working with governments to prosecute individuals (for crimes other than leaking thousands of CIA Documents), demand for their services will shrink VERY quickly.

The best thing to come out of this episode was hearing 3 of Russ’ 10 favorite films. I would love to hear the other 7. Are Gladiator and The Count of Monte Cristo on the list?

nathan
Aug 7 2022 at 9:26am

Gerd Gigerenzer was fascinating as always. I liked the free coffee shop metaphor.

Here is where I quibble, and perhaps echo points made by other commenters.

Surveys about whether people are willing to pay or not for freedom from surveillance are beside the point. The idea that we should have the chance to opt out is backwards; that they have the right to use our data like this is unacceptable. Their business model is unacceptable. That innovation today only seems to come in coding and in figuring out how to enslave ourselves and our children to spend more time on devices is unacceptable. Whether you take a paternalistic view of government or a more Gadsden-type view of things, either way, the idea that we are the products on Old MacZuckerberg’s farm is not acceptable. 25 years ago the GOP wanted to stop internet porn and Democrats wanted to stop Microsoft’s advantage-taking. Today all this, and much much more, is OK, and we are seeing all the negative effects we were warned about then.

As well, the idea that we can train people to think for themselves using some module created by an academic is embarrassing. Academia today is not the home of “how to think.” Rather it is the home of cancel culture and Maoist conformity. I know; I worked at a university for 10 years, and a “conservative” one at that.

The fact that all the “serious”  people talk about combating and banning misinformation instead of  teaching people how to evaluate it is just another example of how far we are from the values that built our world, values that  were quite present only a few decades ago.

I don’t see a fix, as too many of the ideas that the “serious” people defend are too silly to survive any rigorous discourse; so we are somewhat stuck, as “serious” people won’t let them be scrutinized.

It does seem that some people learned to think in the education of 100 years ago in the USA or England, but I don’t know why exactly reading the classics should do that to you. I suspect it has more to do with their willingness to tolerate discussion and extreme opinions than with the fact that they read the “Greats.”

Luke J
Aug 7 2022 at 12:51pm

The amount of outrage and partisan zeal is not so much a result, in my mind, of the stakes being higher. It’s a result of the fact that what we consume online, and how we consume it, is different from how it used to be.

That’s food for thought. Thank you for the interesting conversation.

Gregg Tavares
Aug 9 2022 at 7:21pm

Like others, I also had issues with this episode.

So, it’s certainly that we are in a situation today where a few tech companies, and mostly a few relatively young white males who are immensely rich, shape the emotions, the values, and also control the time of almost everyone else

The CEO of Google, Sundar Pichai, is Indian. The president of Microsoft, Satya Nadella, is Indian. The CEO of Twitter, Parag Agrawal, is Indian American. The CEO of Apple is LGBTQ. Can we stop with the “it’s the fault of white people”?

Governments spend billions for tablets and whiteboards for technology in schools. They spend almost nothing on making teachers smart and pupils smart.

I’m 100% sure the people who approved the tablets and whiteboards believe that purchasing them was a step in making teachers smart and pupils smart. This felt particularly Luddite. A tablet provides access to the entire world. Watch a kid in 2022 learn anything and everything they want to know just by Googling it. Is it guaranteed to make them smarter in the way Gigerenzer wishes? No. But neither is anything else. There was no advice here, just an irrelevant rant on tech.

Google gets 80% of the revenue from advertisement. Facebook, 97%. And, that makes the customer–the user–no longer the customer.

The same is true for newspapers and television. Why the panic now? Further, unlike newspapers and television, neither Google nor Facebook is the source of the info on their systems. They don’t have newscasters and journalists spinning events to lead people to their POV. Yes, their algorithms have the potential to have a similar effect, but neither of them has even remotely the level of spin of other ad-based media.

We need to get rid of the dis-information part.

As if we could snap our fingers and make this happen. Neither Facebook nor Google nor Twitter is in a position to snap its fingers and censor all dis-information. Nor would we want them to, since we’ll all disagree about what’s true info and what’s false.



AUDIO TRANSCRIPT
0:37

Intro. [Recording date: July 8, 2022.]

Russ Roberts: Today is July 8th, 2022. My guest is Gerd Gigerenzer. Gerd was last here in December of 2019, talking about his book Gut Feelings. His newest book is our topic for today: How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms. Gerd, welcome back to EconTalk.

Gerd Gigerenzer: I'm glad to be back and to talk to you again.

Russ Roberts: My pleasure.

1:03

Russ Roberts: You write a lot about artificial intelligence and you say at one point that AI--artificial intelligence--lacks common sense. Explain.

Gerd Gigerenzer: Yeah. Common sense has been underestimated in psychology, in philosophy, everywhere else. It's a great contribution of AI to realize how difficult common sense is to model.

So, what that means is that, for instance, AlphaZero can beat every human in chess and Go, but it doesn't know that there is a game that's called chess or Go. A deep neural network, in order to learn to distinguish pictures of, say, school buses from other objects on the street, needs 10,000 pictures of school buses in order to learn that.

If you have a four-year-old and point to a school bus, you may have to point another time, and then the kid has gotten it. It has a concept of a school bus.

So, what I'm saying: artificial intelligence, as in deep neural networks, has a very different kind of intelligence that does not much resemble human intelligence. Basically, understand that deep neural networks are statistical machines that can do a very powerful search for correlations. That's not the greatest ability of the human mind. We are strong in causal stories. We invent them; we look for them.

A little child just asks, 'Why? Why? Why? Why do I have to eat broccoli? Why are the neighbors so much richer than we are?' It wants causal stories.

Another aspect of human intelligence is intuitive psychology. How can a deep neural network know about these things?

And, finally, there's intuitive physics. Already, children understand that an object that disappears behind a screen is not gone. How does a neural network know that? It's very difficult. It's a big challenge to get common sense into neural networks.

3:50

Russ Roberts: So, a big issue in computer science--we've talked about it many times on this program over the years--is that: Is the brain a computer? Is the computer a brain? They both have electricity. They both have on/off switches.

There is a tendency in human thought, which is utterly fascinating and I think underappreciated, that we tend to use whatever is the most advanced technology as our model for how the brain works. It used to be a clock. It was other things in the past. Now, of course, it's a computer. And, there is a presumption that when a computer learns to recognize the school bus, it's mimicking the brain. But, as you point out, it's not mimicking the brain.

Gerd Gigerenzer: No.

Russ Roberts: But, there may be some things that we call artificial intelligence that are brain-like and others that are not. What are your thoughts on the limits of that process? There's a lot of nirvana, utopian thinking about what computers will be capable of in the coming years. Are you skeptical of those promises?

Gerd Gigerenzer: There's certainly a lot of marketing hype out there. When IBM [International Business Machines] had this great success with Watson in the game Jeopardy!, I was amazed. Everyone was amazed. But, it's a game--again, a well-defined structure. And even the rules of Jeopardy! had to be adapted to the capabilities of Watson. Then, the CEO [Chief Executive Officer], Ginni Rometty, announced, 'Now, it's the Moonshot.' We are not going to the moon, but to healthcare--not because Watson knew anything about healthcare; because there was the money. And then, naive heads of clinics bought the advice of Watson.

Watson Oncology was the first thing for cancer treatment, only to find out that some of the recommendations were dangerous, even deadly. And then, IBM clarified that Watson is at the level of a first-year medical student.

Here we have an example of a general principle: If the world is stable, like a game, then algorithms will most likely beat us--perform much better. But, if there's lots of uncertainty, as in cancer treatment or investment, then you need to be very cautious. The claims are probably overstated--in that case, by the PR [Public Relations] Department of, yeah, of IBM.

Russ Roberts: But, isn't the hope that, 'Okay, Watson today is a first-year medical student, but give it enough data, it'll become a second-year medical student. And in a few years, it'll be the best doctor in the world.' And we can all go to it for diagnosis. We'll just do a body scan, or our smart watch will tell Watson something about our heartbeat, etc. It will be able to do anything better than any doctor. And you won't have to wait in line, because it can do this instantly.

Gerd Gigerenzer: That's rhetoric. If you read Harari, or many other prophets of AI, that's what they preach.

Now, I have studied psychology and statistics, and I know what a statistical machine can do.

A deep neural network is about correlations, and it's a powerful version of a non-linear multiple regression, or a discriminant analysis. Nobody has ever talked about multiple regressions as intelligence. They can do something else. We should not let ourselves be bluffed into the story of super-intelligence.
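A minimal sketch of what that claim looks like in code (mine as editor of this transcript, not from the book or the episode): a tiny feed-forward network is just nested non-linear regression, y = W2·tanh(W1·x + b1) + b2. The weights below are random placeholders rather than trained values; only the functional form matters here.

```python
# A feed-forward network as nested non-linear regression.
# Weights are random stand-ins, not trained values.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # hidden layer: 8 units, 2 inputs
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # output layer: 1 unit

def predict(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(W1 @ x + b1)   # non-linear basis functions
    return W2 @ hidden + b2         # a linear combination of them, as in regression

print(predict(np.array([0.5, -1.0])))
```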

So, what the real prospect is, deep neural networks can do something that we cannot do. And we can do something that they cannot do.

We should, if we want to invest into better AI, smarter AI, we also should invest in smarter people. That's what we really need.

So, smarter doctors, more experts who can tell the difference, and not wasting lots of money on projects like IBM's oncology that don't work. IBM also pitched Watson to bankers for investment. If Watson could invest--if it were the great investor--then IBM wouldn't be in the financial trouble it is.

8:54

Russ Roberts: What I love about that insight is focusing on what distinguishes where artificial intelligence, or at least computers at this stage, can be extremely powerful versus not. And that's stability.

There's a more general principle--and I think it's in your book, but certainly, it's in your other book or in other people's books, which is--fundamentally, when we're looking at correlations in big data, we're presuming that the past will tell us what the future will be like. And sometimes, it can: because it's stable.

The environment is stable enough that whatever were the patterns that were revealed in the past, those patterns will persist in the future.

But in most human environments they don't. And so, on the promise of big data--well, I like two things here.

Former, excuse me, past EconTalk guest, Ed Leamer, likes to say, 'We are storytelling, pattern-seeking animals.'

And, we are good at patterns and causation, and sometimes they're correct; but the computer doesn't have any common sense to examine whether a correlation is just a correlation or a causation.

Gerd Gigerenzer: So, the general point is--so, I've been studying simple heuristics that make us smart. And, simple heuristics--it's like, you probably know the story of Harry Markowitz, who got his Nobel Prize for an optimization model that tells you how to diversify your money into N assets.

But, when he himself invested his own money for the time after retirement, did he use his Nobel Prize-winning optimization method? No, he didn't. He used a simple heuristic.

A heuristic is a rule of thumb, and this one is called: invest equally. It's called one over N. N is the number of assets or options. If you have two, 50/50. If you have three, a third each. That's a heuristic.

And, in a world of calculable risk--as it's called in decision theory; that's a stable world, yeah?--that would be stupid.

But, in the real world of finance, studies have shown it often outperforms Markowitz optimization, including modern Bayesian variants of it.

The general lesson is: There's a difference between stable worlds and uncertainty, unstable worlds.

And, particularly, if the future is not like the past, then Big Data doesn't help you. And in finance, with Markowitz optimization, you need lots of data to estimate all these parameters. The heuristic, 1 over N, needs no data on the past. It's the opposite of Big Data.
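A minimal simulation sketch of that comparison, under invented return statistics (this illustrates the estimation-error mechanism, not the specific studies Gigerenzer cites):

```python
# Compare 1/N with plug-in Markowitz weights when parameters must be
# estimated from limited, noisy data. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_train, n_test = 10, 120, 120      # 10 assets, 10 years of monthly data each way

true_mu = np.full(n_assets, 0.005)            # equal true means (assumed)
cov = 0.002 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))
returns = rng.multivariate_normal(true_mu, cov, n_train + n_test)
train, test = returns[:n_train], returns[n_train:]

# Markowitz-style weights estimated from the training window (plug-in rule).
mu_hat = train.mean(axis=0)
w_mv = np.linalg.inv(np.cov(train, rowvar=False)) @ mu_hat
w_mv /= w_mv.sum()

w_1n = np.full(n_assets, 1.0 / n_assets)      # the 1/N heuristic: needs no past data

for name, w in [("Markowitz (estimated)", w_mv), ("1/N heuristic", w_1n)]:
    r = test @ w
    print(f"{name}: out-of-sample Sharpe = {r.mean() / r.std():.3f}")
```

Because the true means are set equal here, 1/N is in fact the optimal portfolio, and the estimated weights can only add noise--which is the point about estimation under uncertainty.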

Russ Roberts: Well, except for the problem, you've got to figure out N. N doesn't come--the number of assets is not given.

Gerd Gigerenzer: That's true.

Russ Roberts: That's another problem.

Gerd Gigerenzer: Yeah. But, that's the same thing for Markowitz optimization.

Russ Roberts: Yeah. For sure. For sure.

12:16

Russ Roberts: Now, you are a strong and I think eloquent promoter of human abilities and a counterweight to the view that we're going to be dominated by machines. They're going to take over, because they'll be able to do everything--everything, everything. So, we're kind of remarkable: our brains are really amazing. And yet at the same time, there's a paradox in your book, which is that you're very worried about the ability of tech companies to use Big Data to manipulate us. How do you resolve that paradox?

Gerd Gigerenzer: So, the statement that you made before is just right on the point. So, it's not about AI by itself. It's about the people behind AI and their motives.

We usually talk about whether AI will be omniscient, or AI will be just an assistant tool; but we need to talk about those behind it. What's the motive? So, that is what really worries me.

So, it's certainly that we are in a situation today where a few tech companies, and mostly a few relatively young white males who are immensely rich, shape the emotions, the values, and also control the time of almost everyone else. And, that worries me.

You are a free market person. And I, also, am a person who tries to believe in people's abilities. But we need to be aware that the opposite won't happen.

So, and here, one thing that we might think about--how to improve the situation--is the following: Google gets 80% of the revenue from advertisement. Facebook, 97%. And, that makes the customer--the user--no longer the customer.

So, in the book, How to Stay Smart, I use an analogy, the free coffee house. Imagine in your hometown, there is a coffee house that offers free coffee. Soon, everyone--all the other coffee houses--will be bankrupt.

We all go there and enjoy our time, but the tables are bugged, and on the walls are video cameras, which record everything we say, to whom we talk, and when we do this; and that will be analyzed. And, in the coffee house, there are people--salespeople--who interrupt us all the time in order to make us buy personalized products.

The sales people are the customers of this coffee house. We, who enjoy our coffee--we are the product being sold: precisely, our time, our attention.

So that's roughly how the business model of Facebook and others functions.

And it also gives us an idea about a solution. Namely: Why don't we want to have real coffee houses back, where we can pay rather than being the product?

16:14

Russ Roberts: The problem with that--and by the way, you know, I started off--I'm sure long-time listeners can go back to my earliest episodes on this topic, where I was extremely skeptical and less worried, not worried at all; to a little bit worried; to now, today, I'm somewhat worried.

And, listeners will recognize my metaphor of the repair person who comes to your house to fix your washing machine. Does it for free. But, while he is there, he takes a lot of photographs of what you bought and what's on your shelves and says, 'Oh, by the way, you don't mind if I use these to sell to my friends? Because they want to know what you buy and what you're interested in. What books are on your shelf and the receipt that you have here for this product you bought.'

And, there is something creepy about it. The creepy part about it for me is that most people don't think about it. They don't realize that there are cameras in the coffee house. They don't realize that everything they say is being recorded, and who they're talking to, and what the topics are, and so on.

On the other hand, you could argue, and sometimes I argue like this, because it's interesting and it may be true, 'Okay. So those sales people interrupt my conversation every once in a while. They don't literally shut me up. They just hold an ad next to my friend's head and distract me from--you and I having this conversation in the coffee house.

And I find that somewhat annoying. But actually, it's kind of useful, because sometimes it's something I actually want, because they know a lot about me. And, the coffee is free.

So, you're telling me, I need to go to this coffee house over here where I don't get interrupted. Okay. That's nice. But, the coffee is $5 a cup. What's scary about it, to you?

Gerd Gigerenzer: Okay.

Russ Roberts: I think there is stuff to be scared about. I'm playing a little bit of rhetoric here now. But I'm increasingly scared, so take a shot.

Gerd Gigerenzer: So, there are two kinds of personal information that need to be distinguished. The one is, like, collecting information about what books you buy and recommending you other books.

The other thing is to collect all the information about you that one can, including whether you're depressed today, whether you are pregnant, whether you have had a heart failure or have cancer, and use that information to target you in the right moment with the right advertisement.

So, that's the part that we do not need.

And also, in some countries--like, so I'm living in Berlin. And, East Germany had had the Stasi.

Russ Roberts: The secret police.

Gerd Gigerenzer: If the Stasi had had these methods, they would have been over-enthusiastic.

So, we see something similar in China and other countries.

And, the final point I want to make is, what people underestimate is how closely tech companies are interrelated with governments. So, they say, 'Oh, oh, it doesn't matter whether Zuckerberg knows what I'm doing, but the government doesn't know.' Uh-Uh. Snowden has, a few years ago, shown how close the connection in the United States is. In the United Kingdom, there's Karma Police. And, many countries.

So, then, the--let me make another point. What would it cost us to get freedom and privacy back? So I made a little calculation.

If you take Facebook--now the Meta Corporation--and you would reimburse Zuckerberg for his entire revenue, it would be about $2 per month. Per person. That's all.

And for those countries that cannot afford the $2, then we pay $4.

And that would solve the problem. So there is a solution for that, if you want to go there. The question is how we get there.
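The arithmetic behind a figure like that is easy to check. A back-of-envelope sketch, with rough revenue and user numbers assumed for illustration (they are not figures from the episode):

```python
# Rough check of the "about $2 per person per month" claim.
# Both inputs are assumed, illustrative figures.
annual_ad_revenue = 70e9        # assumed ad revenue, USD per year
monthly_active_users = 2.9e9    # assumed users worldwide

per_user_per_month = annual_ad_revenue / monthly_active_users / 12
print(f"${per_user_per_month:.2f} per user per month")  # about $2
```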

20:44

Russ Roberts: The reason--I'm a little bit skeptical of that. The reason is that I pay $5 a month for an app that is helping me with my Hebrew. I pay $5 a month for a lot of things, by the way. There are a lot of Substack and Patreon accounts I pay $5 a month to. So, I have a lot of those--$60 a year each. Somebody has decided that $4.99 is pretty easy for people to swallow. And, by the way, when it says '$60 a year,' sometimes I go, 'Oh, that's a lot,' but '$4.99 a month'--that's nothing.

Gerd Gigerenzer: Yeah.

Russ Roberts: Anyway, I have a bunch of those. And, you're suggesting that Zuckerberg could have twice the money, twice the revenue he has now, if he would charge people $4 a month, instead of $2 a month.

Now, economics predicts that, in general, when you make people pay $4 for something that they used to get for free, you won't have 2 billion users. You're going to have fewer. But, as long as you still have a billion, you're suggesting that going to $4 a month would have such an enormous effect on their user base that Zuckerberg won't do it voluntarily: we'd have to impose something through regulation.

Gerd Gigerenzer: Yeah. That's the problem, yeah, how to get there. And I see the problem. But I'm just saying that it wouldn't be--in terms of the contribution of individuals to get their privacy back--it wouldn't be much. It's just a coffee.

Russ Roberts: But, most people don't care.

Gerd Gigerenzer: Yeah. That's the problem.

Russ Roberts: Would you argue--are you arguing that they should care? I think you're arguing they should care, because they don't realize--I think a lot of people don't realize what they're actually being surveilled about, how widespread it is. But, you're also arguing that even if they knew--and I think many people do know, we'll talk about that maybe in a minute--they go, 'Eh, what's the big deal? I get a lot of products that I'm interested in. It's actually pretty good.'

They don't realize, potentially, that the products that they see in their search engine aren't really the ones that they want. They're the ones that the real customers line up--and we recently had a conversation about this with the head of Neeva, who used to work for Google Ads.

So, let me try a different way to get at a better situation, see what you think. This is the way an economist might think about it. So, that coffee shop, the real problem is there's only one free coffee shop and everybody is in that coffee shop.

So, when I'm on Twitter, I've got my followers. I've accumulated them over years. I go to the new coffee shop that competes with it: it's empty. It's very hard for a new coffee shop to start up, because what we'd like there to be--one way to think about how to improve the situation is--there's a lot of coffee shops.

And one of the coffee shops--the coffee is free, so are the pastries, it's fantastic quality. The problem is that they force you to give blood, when you come in. They do an MRI (Magnetic Resonance Imaging). They know everything--like you say, they know all your mental states. It's very invasive.

But, there's another coffee shop down the street where it's not free. It might be a subscription model: Once you are in the coffee shop, you can have as much coffee as you like. There's a third coffee shop, where you pay by the cup, because some people don't drink so much that it's worth paying the full subscription.

The problem is I can't find my friends in those coffee shops.

What I would suggest, for those people like you, and maybe me, who are worried about surveillance and government intervention against us--tyranny--let's find a way that I can port my friends to a different coffee shop without having to start from scratch.

Possible?

Gerd Gigerenzer: I mean, humans have imagination and we could find a way to get there. It's just, we also need people who want that.

And, as you hinted before, there's the so-called Privacy Paradox, which is: that, in many countries, people say that their greatest concern about the digital life is that they don't know where the data is going and what's done with that.

If that's the greatest concern, then you would expect that they would be willing to pay something. That's the economic view. You pay something for that.

But, then, so in Germany--Germany is a good case. Because in Germany, we had the East German Stasi. We had another history before that--the Nazis, who would have enjoyed such a surveillance system.

And, so Germans would be a good candidate for a people who are worried about their privacy and would be willing to pay.

That's what I thought.

So, I have done, now, three surveys since 2018, the last one this year. So, representative sample of all Germans over 18. And asked them the question: 'How much would you be willing to pay for all social media if you could keep your data?'

So, we are talking about the data about whether you are depressed, whether you're pregnant, and all those things that they really don't need.

And, so: 'How much are you willing to pay to get your privacy back?'

Seventy-five percent--75%--of Germans said 'Nothing.' Not a single Euro.

And, the others were willing to pay something, yeah.

So, if you have that situation--where people say, 'Oh, my greatest worry is about my data'; at the same time, 'No, I'm not paying anything for that,' then that's called the Privacy Paradox.

26:52

Russ Roberts: So, I found that fascinating. I want to give you what came to my mind and let you react to it. So, at night, when I get ready for bed, I close the curtains. I don't want people looking at me as I get ready for bed.

I suppose I would feel differently if there was a camera taking a photo of my pre-sleep preparations. If no one could see my face, and it went out into the Internet, and no one could identify me. And, it was just, my body; but, it's not obviously mine. And the only people who look at it are machines that, say, 'Wow, he's fatter than I would've thought. Let's send him some ads for weight loss.' Or, 'Let's send them some ads for books about exercise or dieting.'

Again, I might be excited to get those. It might be wonderful. The real problem would be that the books they send me are really bad books. But, people have paid to get those ads in front of me, and I'm stuck looking at those, and I don't realize that, and so on.

I think the Privacy Paradox is the fact that, when you tell me that my data is available on the web, I think, 'Well, but no one person is really looking at it.' They can: there are individuals who could look at it.

Gerd Gigerenzer: Right.

Russ Roberts: But, so, Mark Zuckerberg does have my data. Not mine, so much. I'm rarely on Facebook. But, Facebook Users'.

But, I assume he doesn't spend each night going through it, going, 'Wow, I can't believe how fat he is.'

So, you know--I don't have a smart scale. But if you had a smart scale, they'd really know exactly how much I weigh. And they'd know that my shoes were artificially making me look 5'7" instead of 5'6", and so on.

But, I think most of us assume that it's anonymous, more or less.

And, that's the problem: is that it doesn't have to be and we kind of ignore the possibility that it might not be anonymous, really.

Gerd Gigerenzer: I see people sleepwalking into surveillance. So, for instance, in the studies we have done, most people are not aware that a smart TV may record every personal conversation people have in front of it, whether it's in the living room or in the bedroom. At least in the German data, 85% are not aware of that. It can be found somewhere in the end-user terms, but who is reading these things?

Also, for instance, think about how there's already surveillance in a child's life. So, remember Mattel's Barbie? The first Barbie was modeled after a cartoon character from a German tabloid, the Bild-Zeitung, and it had totally unrealistic long legs and a tailored figure. The result was that quite a few little girls found their own body not right. In 1998, the second version of Barbie could talk briefly--utter sentences like, 'Math is hard. Let's go shopping.'

So, the little girls got a second message: they're not up to math; they are consumers. And, the 2015 generation, called Hello Barbie--which got the Big Brother Award--can actually hold a conversation with the little girl. But, the little girl doesn't know that all the hopes and fears and anxieties she entrusts to the Barbie doll are recorded and sent off to third parties, to be analyzed by algorithms for advertisement purposes.

And also, the parents can buy the record on a daily or weekly basis to spy on their child.

Now, two things may happen, Russ. One is the obvious, that maybe when the little girl is a little bit older, then she will find out, and trust is gone in her beloved Barbie doll and also maybe in her parents.

But, what I think is the even deeper consequence is: the little girl may not lose trust. The little girl may think that being surveilled, even secretly, that's how life is.

And so, here is another dimension that the potential of algorithms for surveillance changes our own values. We are no longer concerned so much about privacy. We still say we are concerned, but not really. And then, we'll get a new generation of people.

32:42

Russ Roberts: Yeah; I don't know how--I mean, it sounds horrible. I think, tied to government, it's really--authoritarian government--it's terrifying, potentially. I do think the smart TV is a great example of this privacy paradox, the way I'm thinking about it, which is, 'Okay, it hears what I say in the bedroom; but it doesn't know it's really me. No one's paying attention; it's just an algorithm that analyzes it.' I think, first of all, that's today. And, I think that it is a very dangerous thing for all kinds of obvious reasons.

I think the other thing, though, is--the movie Minority Report, which is in my top 10 movies alongside The Lives of Others, which is about the Stasi, by the way. I'll just throw that in as a bonus. But, Minority Report was very prescient. A lot of it is about the dangers of smart technology, artificial intelligence used to predict guilt before a crime. It has this idea of precognition--that it knows what you're going to do because it has enough information about you to forecast.

And of course, your point, which is deep and true, is that we're human beings. We're not chess boards.

But, one of the things about that movie is that those kinds of movies usually rely on the fact that there's some corner of existence that's still private. In that movie, there's a sequence where the hero is able to do something outside of the surveillance world. There's an underground, there's a corner, there's a place.

And, the reality, though, is that if that world were here, there would be no such place. And, I think that's the world we ought to be worried about.

In the movie, there's a corner because otherwise the plot's not going to work, and it's not interesting, and the hero is going to get killed, and that's the end of the story. But, in real life, you really don't want everything to be Barbie--I don't think--listening to you, recording, and someone else knowing everything about you that you're unaware of. It seems horrible.

Gerd Gigerenzer: Yeah. So, the Minority Report also illustrates another twist--so, predicting whether someone will commit another crime is a situation of high uncertainty where algorithms are actually not good.

So, we know that recidivism predictions can be done by just simple heuristics with two or three variables, as well as the secret COMPAS [Correctional Offender Management Profiling for Alternative Sanctions] or other algorithms can. The two or three variables are: previous offenses and age. Maybe a third one, gender. Okay. That is something which I find very interesting, because of those sides: the enthusiasts who tell us that soon there will be a super-intelligence to which we can upload our brains--who, for whatever reason, want our brains uploaded into the super-intelligence. That's a Californian dream of eternal life.
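For illustration only, a fast-and-frugal rule of the kind he describes--two or three variables predicting recidivism--might look like the sketch below. The variables follow his list, but the thresholds are invented, not taken from COMPAS or any validated instrument:

```python
# Hypothetical two-variable recidivism heuristic. The thresholds are made up
# for illustration and carry no empirical validity.
def flag_high_risk(prior_offenses: int, age: int) -> bool:
    """Flag as high risk given many priors, or some priors at a young age."""
    return prior_offenses >= 3 or (prior_offenses >= 1 and age < 23)

print(flag_high_risk(prior_offenses=4, age=35))   # True
print(flag_high_risk(prior_offenses=0, age=20))   # False
```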

And also, the other side--say, this great book by Shoshana Zuboff. Both sides assume that the algorithms would be perfect. And that's only true in a stable world. In astronomy, they will be very useful. But, that's not the case here; and I don't see a way to get there. They will improve a little bit, but they will not get there.

And then we have a situation where we are--our behavior is predicted and controlled by algorithms, which are actually not very good.

But still, we submit to the recommendations; and on YouTube today, some 75% of all videos being watched are no longer chosen by the viewers. They are recommended by an algorithm or just auto-played.

And that's why I think an important partial solution is: Make people smart. Open their eyes and make them think about what's happening.

37:25

Russ Roberts: Yeah; I've always liked that solution, which you could call more information, raise awareness. A simple way to describe it, it's called education. I spent a good chunk of my life thinking about, say, confirmation bias and similar problems. And, when you make people aware of it, it's pretty cool. It's a good thing to be aware of, that you're easily fooled.

I think you quoted Richard Feynman: 'The first principle is not to fool yourself, and you're the easiest person to fool.' So, the more we make people aware of that, you think it'd make a better world.

I've become a little bit skeptical of people's desire for truth. I think they like comfort more than they like truth.

And so, the education, which is--here I am, I'm a president of a college and I run a weekly podcast that tries to educate people--but it's a quixotic mission, I'm afraid. It may not be the road to real success there.

But, I would say it's the only road I want to go down--and I think it's the right road--is to encourage people to be aware of these things and to be more sensitive to them.

Gerd Gigerenzer: Yeah. I share the two sides, but I think it's an obligation to be optimistic and do something. And one can really point to blind spots. For instance, the most recent international PISA [Programme for International Student Assessment] study, so it looks at all the OECD [Organization for Economic Cooperation and Development] countries and tests the 15-year-olds.

Russ Roberts: In math, right?

Gerd Gigerenzer: In math, in language, in the sciences; and this time, they also had a component about digital understanding. To make it short, 90% of 15-year-olds--that is, the digital natives--do not know how to tell facts from fakes.

So, there is something to be done. Governments spend billions for tablets and whiteboards for technology in schools. They spend almost nothing on making teachers smart and pupils smart. And, with smart, I mean that they understand these concepts.

There is a group at Stanford, in the education department, and they have looked at slightly older kids--so, undergraduates. They gave them websites, real websites, and asked them whether they're trustworthy and how to find out. What one should do is read a little bit, then immediately go into About Us, then leave the page and find out who is behind it. Ninety-seven percent of these young people at Stanford don't know that. That's called lateral reading. They still read a website as in the pre-digital age, from beginning to end, and then judge whether it looks cool; and then it's trustworthy.

Russ Roberts: Or whether it confirms their worldview and makes them feel good about themselves.

Yeah--a lot of our conversation reminds me of The Truman Show, which was also a very prescient film, where a Big Brother figure played by Ed Harris manipulates Jim Carrey's character. It's a phenomenal movie.

Slight spoiler alert: At the end of that film, I think it captures the way most people feel about this. Up to this point, Truman hasn't realized he's on a TV show. Some of the people who have been watching the TV show then just go, 'Oh, it's over,' and switch the channel. A human life has been manipulated. It's a grotesque and powerful moment for this person in this world; and the people watching are, like, 'Okay, I'll switch. What else is on?'

It's deliberately--I think--an anticlimax. I mean, it's a phenomenal film--one of my favorite movies.

But, I think you're pointing out something really profound, which is: if you don't think about how the world works--if you don't know how the world works--you will be the product. If you don't know who the sucker is at the poker table, it's you. And, most of us are the suckers at the poker table--that is really what you're saying here. It's a little bit depressing.

Gerd Gigerenzer: Yeah. And, there may be a development going in that direction. For instance, we have had in democracies now quite a number of presidents who seem to have been elected because of their entertainment value. So, Boris Johnson is just leaving at this moment. Donald Trump already left, but may come back. The idea of education--of understanding the world, having control over the world, and also over oneself--seems to be fading. And, we need to steer against that. And we can do something. We can start in the schools and open eyes. In the same way, we can teach risk literacy in general. It's still not happening in schools, except in Finland.

43:30

Russ Roberts: The problem is that--thinking about risk is a perfect example of the challenge, which is--you know, late in my life, I've become very aware of how complex uncertainty and risk are. Which is ironic: I'm an economist, trained in statistics, econometrics, and so on. And I think people are starting to realize: 'Yeah, most people don't really understand risk and they don't understand probability. So, what we need to do is introduce statistics into the high school curriculum.'

So they have. And it's horrible. It's mostly horrible. It's cookbook, teaching people how to calculate means and medians--things you can test on an exam: what's the standard deviation?

The subtle, deep, common-sense ideas of how to think about the fact that the world is unpredictable, there's not a curriculum for that. That's the challenge, I think: is creating educational material that would help open people's eyes.

Gerd Gigerenzer: A long time ago, I published a book called Calculated Risks. In the United Kingdom, it's called Reckoning with Risk--the American publisher didn't want the title Reckoning with Risk, because, I think, no American would read a book with 'reckoning' in the title--anyhow, it gives such a recipe. One could teach that. I'm all for teaching statistical thinking. Not probability theory.

Russ Roberts: Exactly. Yes.

Gerd Gigerenzer: It's deadly. It enters here, exits immediately there. And, it's not done. It could be done with examples that open your eyes.

So, for instance, even--God knows--teenagers can learn about how to evaluate a positive COVID [Coronavirus Disease] test. What does it mean? How do you do that? And, what are the uncertainties about the hit rate, the false alarm rates, and so on? Just think in these terms.
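Here is a worked version of that exercise in the natural-frequency style Gigerenzer advocates; the prevalence, hit rate, and false-alarm rate are made-up round numbers for illustration:

```python
# Evaluating a positive test with natural frequencies.
# All three rates below are illustrative assumptions.
population  = 10_000
prevalence  = 0.01    # 1% actually infected
sensitivity = 0.95    # hit rate: P(test positive | infected)
false_alarm = 0.02    # P(test positive | not infected)

infected  = population * prevalence                   # 100 people
true_pos  = infected * sensitivity                    # 95 of them test positive
false_pos = (population - infected) * false_alarm     # 198 healthy people test positive

ppv = true_pos / (true_pos + false_pos)               # positive predictive value
print(f"P(infected | positive test) = {ppv:.0%}")     # roughly 32%
```

Even with a 95% hit rate, most positives here come from the much larger healthy group--the kind of result Gigerenzer reports doctors routinely misjudge.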

At the moment, I'm giving, every week, an online or in-person training to doctors in clinics, covering basically what they should have learned. The average doctor does not understand what a sensitivity is, what a positive predictive value is, or a false positive rate. Ask your own doctor.

And so, then they cannot read an article in their own field to evaluate: how good is cancer screening? Take a group of lung cancer specialists: if you think they understand, a few do, but the majority, no.

So, I just say this in order to say there are so many blind spots which we could fill, actually. And then we can see there is certainly a kind of attitude among some people who just don't want to know. But we have to do our homework first.

46:49

Russ Roberts: One of my favorite lessons from Nassim Taleb is that when you're at the casino and you're trying to figure out the game of roulette, you don't ask the carpenter who built the roulette wheel, on the grounds that, 'Well, who could know more about it than the person who built it?' He knows nothing about probability. He knows about wood. He knows about circles, spheres, and other things. And I think there is a trust in the medical profession--often deserved, but often not deserved--because there's uncertainty in many medical diagnoses and in treatments.

And COVID has been a fantastic example of this. We recently had Vinay Prasad on the program. It hasn't aired yet, but it will have aired by the time this conversation is out.

And he points out that for young men in particular, particularly those taking the Moderna vaccine, the odds of myocarditis--inflammation of the heart muscle, which is not a pleasant, not a healthy thing--are about one in 3,000.

Now, is that a big number or a small number? I think if you ask people on the street, they would not answer that question and maybe not know how to think about it. But you'd certainly want to know something about that when you're asking a, say, 23-year-old male to get a vaccine. Who, by the way, if he gets COVID, has very little risk of death.

Now you could argue he should worry about it because he could infect other people. Yes, of course.

But, just that simple fact of thinking about that trade-off between his own health with the vaccine/his own health without it--the fact that most people don't know how to think about that--in fact, their natural thought is--and this is a heuristic by the way: 'Well, a vaccine's better than not.' End of story.

Or the other side, the other heuristic: 'A vaccine could have side effects. Not taking it.'

So, I think it would be good for the world if we spent more time thinking about these complexities, which don't come so easily to us.

Gerd Gigerenzer: Yeah. And, we would need policy makers who understand that point. I can tell you stories--I have been talking with the CEOs of major hospitals in Europe, showing them data about their own medical students, at the end of their training, not understanding numbers. Most of them just close their eyes and think education is the least interesting thing to do. It's more about publishing and clinical work. And--as one CEO explained to me--those who aren't good at publishing and good at clinical work, they care about education. That's the thinking. And of course, those who publish don't understand their numbers, either.

Russ Roberts: Your argument--which you alluded to here, and make I think very persuasively in the book--is that we spend a lot of time teaching people how to use technology, right? Especially the older people, who are not as comfortable with it or actually think you should read the manual. Most people--kids, young people--easily grab the thing: they start playing with it, they figure it out in five minutes. So, we spend a lot of time on that; we don't spend any time thinking about what this does to us. And, it might be okay, but you should think about that. It's weird that we don't.

Gerd Gigerenzer: Yeah. So, particularly in the digital age, one thing becomes clear: that people should think a little bit more.

So, I start out with a chapter on online dating; and there is a famous online agency that advertises its success by saying, 'Every 11 minutes, a single falls in love through the agency.' Isn't that great? So, you pay a few hundred euros and wait 11 minutes.

Russ Roberts: That's it!

Gerd Gigerenzer: That's it. And it's a successful advertisement; it has run for years. And, if people would just think for a moment and make a little calculation: So, every 11 minutes, a single falls in love. Let's forget that we need two singles and they need to fall in love with one another--that doesn't matter. Just: every 11 minutes, a single. In an hour, that's about six who fall in love. In a day, it's 144. And in a year, it's roughly 50,000. If the online agency has 1 million customers, then in a year, 5%--50,000--fall in love.

You can expect that you may pay for 10 years; then you have a 50/50 chance, roughly, if you are still paying and waiting, of falling in love.

So, this is a simple exercise that everyone could do, although almost nobody does it. And, the actual studies about the success of online dating confirm that. It's in the order of 5% per year who find a good partner and 95% that just pay.
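That back-of-envelope calculation is short enough to script; the one-million-customer base is the hypothetical figure from the conversation:

```python
# 'Every 11 minutes, a single falls in love' -- what does that add up to?
minutes_per_year = 365 * 24 * 60          # 525,600
singles_per_year = minutes_per_year / 11  # one every 11 minutes
customers = 1_000_000                     # hypothetical customer base

print(round(singles_per_year))                          # about 47,800 -- call it 50,000
print(f"{singles_per_year / customers:.0%} per year")   # about 5% of customers
```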

So, I observe in many areas that people haven't been taught to think. They're just impressed by these numbers. In the digital age, we need smarter people. We need to invest in people, not lean back and do everything your Alexa tells you.

53:08

Russ Roberts: Well, there's a word that gets thrown around a lot, which is numeracy--which means literacy in the area of numbers. You just gave a simple example of where a little bit of numeracy would go a long way but--and you left out the opportunity cost. I could take those few hundred dollars and maybe go sit in a bar or a cafe or somewhere, buy a bottle of champagne, and then sit on a street corner every night, and maybe I'd do a lot better.

But, I think it comes back to a very deep question that I don't have an answer to. Let's just take math education as an example. Math education--I know it best in the United States--is designed to get you ready for calculus, eventually. Now, the fact that most people won't study calculus, and that most of those who do won't understand it very well, is ignored. 'We need to build the building blocks.' Some of the building blocks along the way are going to be useful: geometry, algebra, trigonometry, pre-calculus, and so on.

But, the goal is to head us toward both the SATs [originally Scholastic Aptitude Tests, later Scholastic Assessment Tests]--college entrance exams--and Advanced Placement exams, to get me ready for college and give me early college credit. And, very little of that is about thinking. A good calculus class--my wife is a calculus teacher--she actually, I think, taught people how to think quite a bit. But much of it is taught as a cookbook: a set of recipes for getting answers on exams. It's not designed to give deep insight.

And I think that's true of a lot of what we teach in America. Certainly, what we're trying to do here at Shalem College in Jerusalem is very different. We're trying to teach people how to think, and give them the tools for exploring the world.

Most formal education doesn't do that. Most formal education is designed to give people a strategy for taking an exam. And most of life is not an exam--very little of it is. So, it's an aptitude test, or a sitzfleisch test: proof that you sat long enough.

But, the idea of teaching real numeracy, real statistical understanding about uncertainty, that just isn't done. And it's a tragedy.

Gerd Gigerenzer: Yeah. You're totally right. One might consider dropping exams and finding a way to motivate young people so they do it for intrinsic reasons--because they understand, and get stronger.

So, when I learned math, it was like that. I was given a solution--a formula--and then some exercises to find the right parameters to put into it. This was mindless math.

And, good math would be to give pupils a problem they have no solution for. Give them a problem, let them make errors, find a way, and discuss it among each other to solve the problem.

So, that is thinking.

And, that also creates a different error culture--one in which people understand that making errors is a positive thing.

Not making errors is just mechanical. Any intelligent system makes errors.

And, we could implement that starting in school and continuing into university, channeling it toward imaginative problem-solving. What we suffer from in large corporations is negative error cultures: nobody dares to admit an error, and many just don't make decisions anymore, to avoid any errors. And we pay dearly for that.

57:17

Russ Roberts: So of course, the challenge is that if you go to parents and say, 'I'm going to give you a choice between two schools. This school is going to be a cookbook school: recipes, etc. When they're done, they're going to take the SAT, and they're going to get into a good college. And that college, of course, is just another place of credentialing. You're not going to learn so much, but your child will prove that they have discipline. And they'll learn something, maybe, if they pick the right field. But, at the end, they'll get a good job. And, this other school, ehhh--it's uncertain what's going to happen. They're not going to have a formal measure of their success. They'll just be smarter. They'll just know how to think.'

Very similar to the challenge that I have here with a liberal arts college. People say, 'Well, they study philosophy, history, and literature. What's that good for?' I say, 'Thinking, listening, reading, asking deep questions, appreciating that questions don't always have an answer and how to cope with that.' And, even better, 'Understanding when someone else is talking nonsense: You'll have a way of appreciating that. You won't be fooled again.'

But that's a harder sell, because it's uncertain. It's kind of ironic, right?

Gerd Gigerenzer: But, some countries do better. I mentioned Finland before. In the international comparisons of 15-year-olds, Finland and Japan typically finish at the top, although they get there in very different ways. The Finnish system is not geared to exams--on the contrary. But they do well on the exams.

So, it's not a trade-off, in a sense. But the future--also going back to surveillance--the future of a democracy lies in people who think, who want to think, and who don't just follow some message.

So, the Chinese Social Credit System--which we haven't talked about--is one way to go: towards, basically, total control of the citizens, in which every citizen has a Social Credit Score. It's like a FICO [Fair Isaac Corporation] score, but now for everything that can be measured, including your political and social [?].

And that's very similar to what you just complained about: an educational system geared toward a number at the end.

Russ Roberts: Yep, yeah.

Gerd Gigerenzer: That you have a certain score. That's all that would matter.

Russ Roberts: A scalar. That's what matters, yeah.

Gerd Gigerenzer: Most Westerners feel real revulsion when they hear about the Chinese Social Credit System. But it's the same spirit. You just put a number on people; and then, if the number is public--and that's the very idea--you immediately know whom you can trust or not. Wouldn't that be convenient?

Russ Roberts: Kind of like finding out who your best romantic partner would be. A 97 out of 100: Wow, that'll be great. I'll be happy every day. Well, maybe 97 days out of 100, anyway.

Gerd Gigerenzer: That's actually happening: in advertisements for finding a partner in China, quite a few people list their Social Credit Score.

Russ Roberts: Fascinating.

Gerd Gigerenzer: There is another world out there that I do not want, but I can understand that people may later think it is a great option. It's a world where we are all surveilled, predicted, and controlled; where the good guys--good guys as defined by a government--get goodies. Like in China: in a hospital, you're treated first if you have a high score. Those with a lower score have to wait. And those with the lowest scores get punished. So, in the last few years, there were 10,000 Chinese who were not allowed to purchase plane tickets because of low scores. Others were not--

Russ Roberts: Because they were caught jaywalking, or raising their voice in a restaurant, or whatever is the--

Gerd Gigerenzer: Whatever it is, yeah. You get good points if you visit your old parents, and you get negative points if you enter a search term like 'Dalai Lama,' no? But, it is an alternative. And, as far as we know, many people in China find this a good system.

What I see in Germany is that the number of people who think a Social Credit System would be a good idea in Germany is increasing. In 2018, it was 10%. Now, in 2022, it is 20%. What do you think: Is it higher among the young or among the old?

Russ Roberts: Young.

Gerd Gigerenzer: Yes. Among the young, it's 28%. There's one group I found striking: German civil servants--the people who have lifetime tenure working for the government--37% of them think it would be a good idea. They probably believe they're on the right side anyhow: they are obedient to the government, so why not collect a few goodies?

1:03:21

Russ Roberts: Well, let's close with something about democracy. You took a cheap shot at Boris Johnson and Donald Trump; and they certainly do have entertainment value--both of them had a clown-like, performative aspect to their success. But they also tapped into something that I think was real. You can agree or disagree with it, but they certainly aren't merely entertaining. They actually, I think, were very skilled politically in understanding that there was a market opportunity--a political-market opportunity: that some people were being ignored, felt they were being ignored, and it was important to them that someone heard them.

And, all of what you've written in this book is deeply alarming in an authoritarian state, but in a way, it's even more alarming in a democracy.

If people are not thoughtful about the information they consume; if they're not thoughtful about how they might be manipulated by advertisers in the political arena; if they're overconfident, as a result, in their understanding of how the world works; and if the information stream they consume--which we haven't really talked about--is manipulated in all kinds of ways they're not aware of: if we don't educate people about all that, democracy is going to have a very tough time thriving, because it's going to be easily manipulated.

And in some sense, it already is. The amount of outrage and partisan zeal is not so much a result, in my mind, of the stakes being higher. It's a result of the fact that what we consume online, and how we consume it, is different from how it used to be. And, it seems like a very bad future.

Gerd Gigerenzer: But it's not destiny. We can change that.

So, for instance, if we invest not only in tablets and whiteboards, but in making young people smart--in opening their eyes--that can be done. And we know how it's done: Teach them lateral reading; teach them click discipline--don't click on the first entry, and understand why. These things can be done.

We can change university education, and there are also economists who can do their share. There are these fascinating studies with eBay by a Californian economist, Steve Tadelis, who ran experiments and found out that a certain kind of keyword brand advertising actually produced a negative return for eBay--not the promised $12 for every $1 [?].

So, if you do that, you can find out--and it should be in the interest of companies to find out--whether their personalized-advertising investments really pay. So, there are lots of things to do to create a better world: a world where the Internet is more like what it was once meant to be--an aid, a tool to get information, to get reliable information, and to understand where you can find reliable information.
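To make the logic of such an experiment concrete, here is a minimal sketch of the kind of calculation involved. The regions, revenue figures, and ad-spend number below are hypothetical illustrations, not data from the actual eBay study:

```python
# Minimal sketch of an ad-pause experiment (all numbers hypothetical).
# Treatment regions: brand-keyword ads paused. Control regions: ads kept on.

control   = [("A", 102.0), ("B", 98.5), ("C", 101.0)]   # (region, weekly revenue), ads on
treatment = [("D", 100.5), ("E", 97.0), ("F", 99.5)]    # (region, weekly revenue), ads paused

weekly_ad_spend = 2.0  # what the ads cost per control region per week

def mean(rows):
    return sum(revenue for _, revenue in rows) / len(rows)

# Incremental revenue attributable to the ads: regions with ads
# versus comparable regions without them.
lift = mean(control) - mean(treatment)

# Revenue per advertising dollar. If most ad clicks come from people
# who would have bought anyway, the lift is small and this falls below 1.
revenue_per_ad_dollar = lift / weekly_ad_spend
print(f"Incremental revenue per region-week: {lift:.2f}")
print(f"Revenue per ad dollar: {revenue_per_ad_dollar:.2f}")  # < 1.0: the ads lose money
```

The control group is the whole point of the exercise: without the paused-ads comparison, a company only sees clicks followed by purchases and can easily credit the ads for sales that would have happened anyway.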

And, instead, it's now both: a host of information and of disinformation. We need to get rid of the disinformation part. And, we could think more about the deep reasons--the business models--instead of just fining Facebook one more time for a violation; it can't be different if they keep the same business model. And, most importantly--and I think we can do it--invest in people. Make people smart. If we want to maintain a democracy, it only works with people who think, who think critically, and who want to live a life of self-determination and judgment.

Russ Roberts: My guest today has been Gerd Gigerenzer. His book is How to Stay Smart in a Smart World. Gerd, thanks for being part of EconTalk.

Gerd Gigerenzer: It was great to talk to you again.