David Epstein on Mastery, Specialization, and Range
May 27 2019

Journalist and author David Epstein talks about his book Range with EconTalk host Russ Roberts. Epstein explores the costs of specialization and the value of breadth in helping to create mastery in our careers and in life. What are the best backgrounds for solving problems? Can mastery be achieved without specialization at a young age? What experiences and knowledge best prepare people to cope with unexpected situations? This is a wide-ranging conversation that includes discussion of chess, the Challenger tragedy, sports, farming in obscure Soviet provinces after the revolution, the Flynn effect, and why firefighters sometimes fail to outrun forest fires.

RELATED EPISODE
David Epstein on the Sports Gene
David Epstein, writer for ProPublica and author of The Sports Gene, talks with EconTalk host Russ Roberts about the book. Epstein discusses a number of the ideas in the book including what we have learned about the nature vs. nurture...
RELATED EPISODE
Bill James on Baseball, Facts, and the Rules of the Game
Baseball stats guru and author Bill James talks with EconTalk host Russ Roberts about the challenges of understanding complexity in baseball and elsewhere. James reflects on the lessons he has learned as a long-time student of data and the role...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Schepp
May 27 2019 at 9:21am

Superb, great subject, knowledgeable guest and a spirit of exploration.

As an engineer, much of what I do is standardized and follows procedures, which are critical to functioning. It is impossible to derive everything from first principles on each task. It is also critical that the construction plans and contract documents be formed in a manner that delivers clarity to the parties to the construction contract. But in almost every project, some amount of non-standard or non-procedural work has to be implemented to complete the project.

Looking at my own field, it is the specific training in being precise and following procedures that makes abstract elements like project selection and trial and error so difficult. There is one thing that the podcast did not cover that I think is important: standards and procedures are valuable tools. It is a feature, not a bug, that it is difficult mentally and emotionally to vary from them, because of their value. So the contrarian risks greatly when not conforming, for potential high-risk gains or losses, as the market dictates.

Rob Steele
May 27 2019 at 1:21pm

Fantastic!

Nick
May 27 2019 at 2:29pm

One hour forty-one minutes! Is this the longest EconTalk in your history? I’m a fan, by the way; thanks for all the hard work.

Mike Marrs
May 27 2019 at 4:18pm

Simply a great episode. Saying this is an economics-focused blog does not do it justice; it’s so much more. I appreciate the economics and the critical analysis, but what helps me the most are these types of “human-oriented” episodes. Specifically, this episode has made me aware of an issue I have had with my mother, and now I have to go back and apologize to her. I thought she was just being a difficult 74-year-old senior, but now I realize that she does not think abstractly. She grew up on a subsistence farm and then worked 20 years in a factory. What I thought were naive and child-like questions make more sense to me. Thanks Russ; now I feel like I was insensitive snapping at her yesterday. 🙂

Phil Langton
May 28 2019 at 4:20am

This was a fabulous conversation – my cycle to work flew by.

I was especially interested by the discussion of the wildland firefighters because I’d read something about this in an article called ‘Practical wisdom and organizations,’ in which Barry Schwartz describes what happened when the training of wildland firefighters was augmented from just four ‘survival guidelines’ to a list of very nearly 50 items (cases). He writes: ‘…teaching the firefighters these detailed lists was a factor in decreasing the survival rates. The original short list was a general guide. The firefighters could easily remember it, but they knew it needed to be interpreted, modified, and embellished based on circumstance. And they knew that experience would teach them how to do the modifying and embellishing. As a result, they were open to being taught by experience. The very shortness of the list gave the firefighters tacit permission—even encouragement—to improvise in the face of unexpected events. Weick found that the longer the checklists for the wildland firefighters became, the more improvisation was shut down.’

It seems that detail in the wrong place, or at the wrong level, flatters to deceive.  Detailed instruction hampers critical thinking and adaptation because the mind is fully occupied in trying to select ‘the’ best option to apply.

The reference to Weick is the original study of firefighter training:
K.E. Weick (2001). Tool retention and fatalities in wildland fire settings: Conceptualizing the naturalistic. In G. Klein and E. Salas (Eds.), Naturalistic Decision Making (pp. 321-336). Hillsdale, NJ: Erlbaum.

Steve Wood
May 28 2019 at 10:00am

One of your most stimulating – and, I daresay, most important – interviews ever. It balances the mind-expanding and the practical.

Michael Kleyn
May 29 2019 at 1:47am

Coming from an undergrad, this was a great listen. In fact, I’ve been listening to EconTalk for a little over a year now, and this was the one that brought me to the website to get the transcript. It could be that I would have done this eventually regardless, but for whatever it’s worth — this was the episode to do it. Thanks Russ!

SaveyourSelf
May 29 2019 at 7:50am

Loved nearly every minute. I especially appreciated the plug for trial and error. Thanks Russ and David. Superb interview.

Robert Wiblin
May 29 2019 at 12:26pm

This was a consistently outstanding episode. I’d be excited for episodes to go over 60 minutes more often!

Todd Kreider
May 29 2019 at 3:55pm

Epstein said:

Then you go over to something like cancer research, where IBM’s [International Business Machine] Watson has been such a disaster that AI [Artificial Intelligence] researchers I talked to were worried that it would taint the reputation of AI in health care because it had so underperformed.

I’d say it is an exaggeration to call Watson a “disaster,” although IBM oversold it in terms of the timeline. Since 2013, other A.I. firms have done much better, as their systems have been more flexible.

I’m not sure which worried A.I. researchers Epstein talked to, but in early 2017 an A.I. system outperformed cardiologists in accurately reading EKGs, and a year after that A.I. outperformed cardiologists in detecting arrhythmia.

In early 2018, an A.I. system outperformed 58 dermatologists in detecting skin cancer; Google’s Deepmind did better than ophthalmologists at detecting eye disease; and a few weeks ago, Google’s system outperformed six radiologists in detecting lung disease.

Kevin
May 30 2019 at 9:54am

Great discussion. I learned a lot and there was a lot to think about. I was really impressed by the idea that introspection is limited vs. experience, where (a little bit like a price in an economy) the information about ourselves simply cannot be learned or figured out without exposure to the world and new opportunities.

There is research on protocols and outcomes done on anesthesiologists (doctors who put people to sleep for surgery). A large number of anesthesiologists were randomly given many made-up test cases to review, with details blinded. The cases had small changes to them – in some cases there were mistakes of varying degrees, but the outcomes in the cases were nearly random. If the outcome was good, even when protocol was violated, the case was on average judged OK. When the outcome was bad (the patient died), even when protocol was followed, the case was judged as bad, and the reviewing blinded doctors identified things that could have been done differently. This was even more true if a child died – even if the physicians had been perfect in every regard in the mock report, the reviewers on average judged it as a failure. My conclusion is that it is hard to learn much from the NASA disasters, because a bad outcome induces a desire to create a narrative in which it could have been avoided and we could have had control. We over-read the evidence and have a difficult time assessing the situation as normal. It may be possible to identify mistakes, but sometimes no rational group would have made changes, and in high-risk fields there will be failures that could never have been anticipated or avoided.

I would like to learn more about the firefighters, and would especially like the perspective of some forest firefighters, because that story sounds fake, like something a sociologist manufactured from scarce data. Maybe it’s real, but I would love to see the evidence that induced the conclusions. Searching the web, every reference is some management- or leadership-style reference, suggesting the original author got exactly the intent he was aiming for; but in this instance I would very much like to review the data for alternative explanations (firefighters found with their tools faced faster fires or had little time to react vs. those who dropped tools and ran).

There is a discussion of a Soviet scientist reporting on the cognitive changes induced by collectivization. Nothing at all suspicious about a communist country producing research which shows thinking advanced with collectivization – an idea perfectly aligned with communist ideas of the mind – during Stalin’s reign. Am I the only one who finds that the slightest bit suspicious? I think that is an idea that requires replication in a few more studies across cultures before I would pay it the least bit of attention.

AI in medicine is performing exactly as we would expect, as Todd Kreider points out – it is beating humans at pattern recognition. Breast radiologists have been working together with AI for some time on mammograms, and we see what has been seen in chess and nearly everywhere else: AI beats doctor at pattern recognition; doctor plus AI beats AI. I expect this will take hold in nearly every field of medicine. A dermatologist at the end of his career may have seen a million skin cancers; the AI can be trained on 10 million in a week. Doctors should rejoice – together with AI they are going to get a whole lot better at diagnosis and decision making.

Joseph Coco
May 30 2019 at 12:46pm

Like most episodes, this was a great one. I work in the medical industry, which I’m sure the book pulls some examples from, but I’ve been happy to hear lately that there’s been a push to explicitly mention in medical expert systems when it’s most critical for doctors to be leaning on their experience to make decisions rather than ruthlessly following protocol.

Fundamentally, I feel like this is a third tier of ensuring quality. The first being quality control, of making sure your product is right. The second being quality assurance, making sure your process for producing the product is right. The third being quality _____ where you make sure the process is generalizable enough to escape the process in the right circumstances.

Todd Kreider
May 30 2019 at 1:49pm

[AI] beats doctor at pattern recognition; doctor plus AI beats AI. I expect this will take hold in nearly every field of medicine. … Doctors should rejoice – together with AI they are going to get a whole lot better at diagnosis and decision making.

Several years ago, economist Tyler Cowen, an excellent chess player, made a big deal about “A.I. plus human chess player” pairs doing better than A.I. alone. I thought then that it wouldn’t last more than a decade, and it has in fact ended. The A.I. now always beats the human.

This will be true of health specialists as well. Machine learning pioneer Geoff Hinton said in 2016:

“Let me say a few things that are obvious. I think if you work as a radiologist, you’re like the coyote that’s already over the edge of a cliff but hasn’t yet looked down so doesn’t realize there’s no ground underneath him. People should just stop training radiologists now. It’s just completely obvious that within just five years [2021] deep learning is going to do better than radiologists because it will be able to get a lot more experience. It might be ten years, but we’ve got plenty of radiologists already. I said this at a hospital, and it didn’t go down too well.”

Doctors and nurses will also be in less demand.

I agree with Kevin that the firefighter lesson doesn’t seem right, for a few reasons.

Karl Weick is an “organizational psychologist,” not a sociologist, who came up with the term “mindfulness.”

Kevin
May 30 2019 at 4:17pm

Thanks for the correction – I had not looked into the updated chess data. So maybe it will simply go that way: AI will just replace the diagnostic efforts of physicians.

However, Geoff Hinton is still wrong about the timeline, because diagnosis is only one limited part of radiology (I am not a radiologist) and the legal hurdles could take decades to work through anyway. But the reading of images is the one part that is well suited to deep learning algorithms. If we eventually double or triple the productivity of radiologists/dermatologists/everyone so that we need fewer, that will be amazing. Ironically, with less law to wade through, poorer nations may move to AI-driven medicine first: if your cell phone can do it, why hire expensive doctors?

Todd Kreider
May 31 2019 at 1:23am

However, Geoff Hinton is still wrong about the timeline, because diagnosis is only one limited part of radiology (I am not a radiologist) and the legal hurdles could take decades to work through anyway.

I think Hinton is dead on. Medical A.I. will keep improving, and specialists as we know them today will be out by 2030. Part of the reason is what Kevin wrote at the end, but why limit that to poorer countries?

The U.S. isn’t going to become a has-been country stuck in 2023 due to 25 years of legal battles while China, Japan, India, and other countries not subject to American law race ahead with A.I., stem cells, etc.

Richard Bustamante
Jun 2 2019 at 12:41pm

I enjoyed this podcast immensely and listened to it twice. However, I do have one technical correction: Susan Polgar achieved the Women’s World Championship but has never been regarded as the greatest female player – that honor, without question, goes to her sister Judith. Judith reached a peak rating of 2735 in 2005 and was ranked 8th in the world, whereas Susan’s current rating is 2577 – 373rd in the world.

It’s a little confusing because FIDE recognizes a Women’s World Championship but also allows the women who qualify to participate in the overall World Championship. The Polgar sisters tried to avoid female-only events because their father thought they were harmful to their chess development, and after 1990 Judith totally eschewed women’s events, whereas Susan (obviously) did not.

Floccina
Jun 3 2019 at 10:38am

I have a suspicion that the idea that in sports early generalization appears better than early specialization is just due to the fact that athletes are much more born than made.

Look at an athlete like Giannis Antetokounmpo: he would surely be a little better had he been born in the USA, but he started basketball quite late and is still one of the best in the world, mostly because of inborn talent. Had Roger Federer specialized earlier, he might have been an even better tennis player.

JK Brown
Jun 3 2019 at 5:04pm

I had an experience in the conflict of abstract and practical thinking years ago. I and one other person were taking a certification course in the Global Maritime Distress and Safety System (GMDSS), a post-radio-operator international plan for handling ships in distress. I have a BS in Physics, and the other individual was a captain of small coastal vessels looking to upgrade his license. The instructor was a former radio operator. The course required you to learn already-outdated processes even though the system was not fully implemented yet: devised in the ’70s by international agreement, it had not adapted to the technology changes of the ’00s.

One specific instance: you had to follow a formal call-response procedure. The instructor calls Mayday; you respond, and both of you negotiate who will come to the rescue. Total BS for reality, but required to “pass the test.” The captain just couldn’t get on board with that. When hit with a “Mayday,” even in a classroom, his response, marked as a fail, was “This is ship Whatever, we are on the way.” He thought the “proper” procedure was BS. I probably should have tried harder to get him to realize that this class was just a requirement and you had to play the game to pass, knowing it was all BS in the face of a real emergency. I did well because I knew to shut down my thinking and meet the requirements of the class. You can sort through later to find the useful information in the content. Abstract thinking is good, but much of what you learn in school is accepting that it is BS with no relation to how one deals with the real world.

This was 15 years or so ago, I hope they’ve fixed the GMDSS.

Joshua
Jun 3 2019 at 7:30pm

Correction on Pearl Harbor comment:

Russ mentioned that “they” should have known about the attack ahead of time based on the intercepted messages from the Japanese. This is a common misconception because after the attack, the men in charge, Kimmel and Short, were blamed for not being prepared and judged guilty of dereliction of duty by a commission appointed by Roosevelt.

However, Kimmel and Short demanded a court-martial and later got their day in military court, where it was uncovered that “they” (bureaucrats in Washington, including the president) actually had full knowledge of the attack ahead of time but didn’t relay any of the information to Kimmel or Short or anyone else at Pearl Harbor. The only action taken was a “training exercise” the morning of the attack that involved our newest and best ships, including all our aircraft carriers. We had very direct warning messages from 3 ambassadors, a general stationed overseas, and a Soviet double agent, and had deciphered the Japanese naval code so we could interpret all the intercepted messages. Needless to say, Kimmel and Short were exonerated, but the results of the trial were classified until later in the war.

There is much more to the story but the point is that it isn’t rumor or conspiracy theory that some may have had prior knowledge of the attack. It was officially confirmed in a government court. I’m sure you’d like to confirm this information yourself but after you do, it would be prudent and greatly appreciated to issue an official correction for accuracy and to not propagate the false narrative.

Beck
Jun 6 2019 at 9:59am

I’m having trouble understanding this point:

I’m just thinking of Nassim Taleb, who has taught me that expected value is the wrong way to think about rational decision making. And, often–and you don’t want to just look at the odds of a bad outcome. You want to look at what would be the consequence of that bad outcome. And that’s really hard for us to do. We often just say, ‘Oh, it’s unlikely. So I don’t have to worry about it.’ But, if it’s unlikely and it means ruin, you want to stay really far away from that.

Doesn’t expected value already take into account the consequence? Expected value is probability * value, not just probability. So, the expected value of something extremely damaging but rare could still be very negative.

Also, how do we operationalize this? There is a small chance of a life-ending accident every time I get into a car. At the civilizational level, we cannot afford to pour a substantial chunk of our resources into asteroid mitigation even if that is an existential threat. As individuals and as civilizations we constantly have to make decisions where ruin is a small likelihood but possible outcome.
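
To make Taleb’s point concrete for myself, I tried a toy simulation (all numbers invented for illustration): a bet with a clearly positive expected value per round that nonetheless ruins almost everyone who takes it repeatedly, because ruin is absorbing and expectation averages over parallel worlds rather than over one person’s single path.

import random

def run_path(rounds=200, seed=0):
    # One gambler repeatedly takes a bet that gains 5% with probability 0.99
    # and loses everything with probability 0.01. Ruin is absorbing: no recovery.
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        if rng.random() < 0.01:
            return 0.0
        wealth *= 1.05
    return wealth

paths = [run_path(seed=i) for i in range(100_000)]
ruined = sum(1 for w in paths if w == 0.0) / len(paths)

# Per-round expected value is 0.99 * 1.05 ≈ 1.04 > 1, so the ensemble average grows:
# the sample mean is held up by the ~13% of paths that compound to roughly 17,000x.
print(f"share of paths ruined: {ruined:.1%}")                # ≈ 1 - 0.99**200 ≈ 87%
print(f"mean final wealth: {sum(paths) / len(paths):,.0f}")  # ≈ 1.0395**200 ≈ 2,300

The expected value really is positive and large, yet almost nine out of ten paths end at zero. An individual who only gets to live one path cares about that, which I take to be the gap between expected value and ruin that Taleb is pointing at; it still leaves my second question about operationalizing it.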

Slaven
Jun 7 2019 at 3:34am

I’ve been a listener for a long time. This was – in my opinion – one of the best talks. A very interesting guest and excellent interviewing. Thank you, Russ, and keep up the good work.

Mark
Jun 10 2019 at 2:33pm

Congratulations Mr. Roberts…you just hosted a perfect podcast. Enjoying the book Mr. Epstein.

John Alsdorf
Jun 14 2019 at 10:15pm

A fascinating interview (most EconTalks are, but this one stands out). I’ll be listening again. I wanted to comment on the Challenger and the notion that the data weren’t present. Edward Tufte, in one of his several books on the visual display of quantitative information, makes a convincing case that the necessary data were present, but organized in such a way as to obscure the cause-and-effect relationship between temperature and blow-bys. The incidents were arrayed chronologically, so the clear correlation with temperature was obscured. Had the engineers organized the data (as Tufte does in one of the books – I’ll have to dig it out when I get back home from my vacation weekend) with temperature on one axis and experiences of O-ring failure on the other, the cause-effect connection would have been clearly made. You might want to get Tufte on for a future podcast; his work on visual arrangements of data is fascinating too.

John Alsdorf
Jun 16 2019 at 9:47am

Here’s a link to a web page discussing Tufte’s re-examination of the Challenger disaster, showing how the same data, arrayed differently, would have painted a clearer picture.

https://www.asktog.com/books/challengerExerpt.html
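
The re-arrangement is easy to reproduce. Here is a minimal matplotlib sketch with invented numbers (deliberately not the real flight record) contrasting the two orderings Tufte discusses: the same points that look like noise when plotted by flight number form a visible low-temperature cluster when plotted against temperature.

import matplotlib.pyplot as plt

# Invented (launch temperature in °F, O-ring distress count) pairs, for illustration only.
flights = [(66, 0), (70, 1), (69, 0), (68, 0), (67, 0), (72, 0), (73, 0),
           (70, 0), (57, 1), (63, 1), (70, 1), (78, 0), (67, 0), (53, 3),
           (67, 0), (75, 2), (70, 0), (81, 0), (76, 0), (79, 0), (75, 0)]

fig, (ax_chrono, ax_temp) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# How the engineers arrayed it: chronologically, so the temperature signal is buried.
ax_chrono.plot(range(1, len(flights) + 1), [d for _, d in flights], "o")
ax_chrono.set(xlabel="flight number (chronological)", ylabel="O-ring distress incidents")

# Tufte's version: temperature on one axis, O-ring distress on the other.
ax_temp.plot([t for t, _ in flights], [d for _, d in flights], "o")
ax_temp.set(xlabel="joint temperature at launch (°F)")

plt.tight_layout()
plt.show()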

Comments are closed.


DELVE DEEPER

This week's guest:

This week's focus:

Additional ideas and people mentioned in this podcast episode:

A few more readings and background resources:

A few more EconTalk podcast episodes:


* As an Amazon Associate, Econlib earns from qualifying purchases.


AUDIO TRANSCRIPT
Podcast Episode Highlights
0:33

Intro. [Recording date: May 1, 2019.]

Russ Roberts: My guest is David Epstein.... His latest book and the topic of today's conversation is Range: Why Generalists Triumph in a Specialized World.... Your book opens with a little fable of--you could call it Tiger versus Roger. What's that fable about?

David Epstein: Yeah, what I call the 'Roger vs. Tiger problem.' So, basically, Tiger Woods, I think, is the epitome of early specialization, and sort of--his story--he started, he was able to walk very early, at about 6 months old there are pictures of him balancing on his father's palm, and he started swinging a golf club not long after that. And that story of his precocity and early specialization in golf became sort of the core of at least a half-dozen best-selling books, most of which argued that this was just a model that you should think about for anything you want to get good at: this early head start in specialization and technical, what's called, deliberate practice. And so that's really the Tiger model. And, what I wanted to do was see whether that is indeed appropriate for extrapolating, and whether it's the typical route to success. And I found--I looked at other models and found what's actually more normal is what I call the Roger models. So, Roger Federer, whose development story is not nearly as well known as Tiger Woods's, is much more normal. What happened was, he played a whole bunch of sports when he was a kid; his mother forced him to continue playing basketball and soccer, instead of specializing in tennis, after his peers were already specializing. In fact, when he got good enough to get bumped up a level to play with older kids, he declined because he liked talking about WWE (World Wrestling Entertainment) after practice with his friends. And, when he finally got good enough to be interviewed by a local paper and was asked what he would purchase with a theoretical first check if he ever became a tennis pro, he said, 'A Mercedes.' And his mother was appalled, and asked the reporter if she could listen to the recording. And the reporter obliged; and it turned out he had said, 'Mehr CDs,' in Swiss-German: he just wanted more CDs (Compact Disks). So, she was much more content with that. But, essentially, Roger Federer was years behind his peers in focusing only on tennis. And obviously he turned out okay. And so, I sort of conceived this as the Roger versus Tiger problem, asking which developmental model is more typical and which one is better used to extrapolate to other domains.

Russ Roberts: His mother, if I remember correctly from the book, was a tennis coach. Which was even crazier.

David Epstein: Oh, yeah; she refused to coach him because she said, 'It wouldn't be any fun for me because he liked to return a ball normally,' basically. Which, of course, is actually--you exactly want that, that sort of variable movement in development--but it wouldn't be fun for an adult as much.

Russ Roberts: And, for people who don't know much about Tiger Woods, he was golf 24/7 from a very young age. At least, it seems that way. Driven by his father to excel.

David Epstein: That's kind of the story. Although, I think we need a correction for the public narrative that he was driven by his father. I, sort of in the course of this book examined the Tiger and the Mozart narratives, because they are so central to so many books that argue for early specialization. And neither one is how they have been portrayed. In both cases, the fathers were responding to the children, not the reverse. So, there's no evidence that you can engineer these performers. And Tiger, in 2000, said himself that 'My father has never asked me to play golf. Never.' It's the child's desire to play that matters, not the parents' desire for the child to play. And he said that because he wanted to sort of correct the record. And I found something very similar going through letters from Mozart's childhood, where his--he wanted to play with a group of musicians that came over to play with his father. And Mozart's father said--Mozart wanted to play a second violin role when he was a little kid--and Mozart's father said, 'You had no lessons. Go away. You can't play.' And Mozart said, 'You don't need lessons for second violin.' And so, finally he starts crying. And another musician says, 'I'll go play with him in the other room.' And then they hear the playing from the other room and are sort of awestruck. And the letter that this musician left says, like, 'Young Wolfgang was emboldened by our applause to say that he could play the first violin also.' Which he then did with totally irregular positioning. So, he hadn't learned the actual fingering. So, he just played--but he was able to play it with his improvised fingerings. So, those cases do occur. But, the ones that have been portrayed as parent-manufactured--it's not really the case.

Russ Roberts: Have you read Open, by Andre Agassi?

David Epstein: I have, yeah.

Russ Roberts: So, that's a parent-driven one. At least, it appears to be. I recommend that book. I was--I'm not a big tennis fan; I'm somewhat of a tennis fan. And I was really impressed and inspired and touched by his story. It gives you an insight into the psyche of a competitor. He's very open about his failures and successes and fears. And, his dislike of his father at times for pushing him relentlessly.

David Epstein: Yeah; no; I agree. It's a great book.

Russ Roberts: Now, so--well, whether it's the parent or the child, your point with Roger is that his mother--and I assume his father--pushed him to diversify away from one thing. And Roger was happy to do that. You are implying that--really two things are central to your book. One is: head starts are over-rated; and specialization is over-rated. And secondly, a stronger claim, I think, is that they are not just over-rated: they are less than helpful. The diversity that Federer had with soccer and other sports made him a better tennis player.

David Epstein: And, I should say, I think there are as many ways to attain elite performance as there are people. So I think there are--sometimes people become elites with suboptimal development. And sometimes people with optimal development don't become elites. So, I think there's no perfect path. But that a huge body of evidence across different sports now shows that the typical path is an early sampling period, where you do a large variety of sports. You gain a breadth of general skills. You learn about your own abilities, your own interests. And you delay specialization. Now, whether or not that--because I also make the argument that golf is a particularly bad model of most other things that people want to learn. Whether or not that early, that trend holds for golf is unclear. I think there is a dearth of research on the best development in golf. So, it is possible that early specialization in golf does work. I think the jury is out. But, for most of the other sports that are more dynamic, involve anticipatory skills where you are judging what other people are going to do, the early sampling, delayed specialization is the pattern. And initially I thought that was going to be purely a selection effect. That it was going to be just that better athletes could play more sports until later, so they did. And then I started, as this question became more important in the sports world, started to see these studies where, say, German researchers would take soccer players matched for skill at a certain age, follow them longitudinally for 7 years or for several years, and see what they did and who would improve more later on. And what they would see is the people who were diversifying their activities would actually end up improving more even if they were sort of slightly behind at a certain point because of diversification.

8:59

Russ Roberts: Now, I should make it clear that your book opens with the Tiger/Roger story; but this is not a book about sports. It's a book about, you could say, everything. It's about math, it's about chess; it's about music, art, decision-making, career generally. So, we're opening with some sports conversation, but your book is really--in many ways, it's a warning not to generalize from golf, say, or chess. And so, tell the story of the chess family, and what you learned from that, and the terms 'wicked' and 'kind.'

David Epstein: So, this is--the Polgar family is another famous story in books that are concerned with the development of expertise, basically, from childhood. And in this case, in an individual named Lazlo Polgar, who had studied--Hungarian man whose family, basically his entire family had been wiped out in the Holocaust. And he was determined to have a remarkable family. And had studied the lives of people who went on to greatness, basically. And decided that he could manufacture greatness and that he would do so by experimenting with his own children. And, he decided, at the time--

Russ Roberts: Kind of an ambiguous moral strategy. But, okay.

David Epstein: Um--yeah. I mean, that's fair, but that's a different discussion.

Russ Roberts: I guess the word 'experimental' there is a little bit loaded. But, go ahead.

David Epstein: Yeah. That's totally fair. Maybe 'experiment' isn't the best--I mean, I think his feeling was that he was just going to make them really good at something. So, this--you know, was it an experiment? I don't know. I guess everyone can do what they want with their own kids. But I do realize that is kind of a weird kind of way to frame it; so thanks for making that interjection. But, in any case, he wanted to see if he could make his kids really good at something. And beyond just making his kids good at something, the idea was to prove that you could do this with basically any kid, in anything. Right? Any kid, and not just in chess. And so, in the early 1970s--he had his first daughter in 1969--in the early 1970s chess was really, really important. Right? It was sort of like a Cold War proxy, more or less. And, so for a number of reasons including that popularity and also the fact that his first daughter Susan might have shown a little bit of interest in a chess board when she was very little--and also the fact that chess has a very clear rating system that rates a player according to other players in the world, so you can really track someone's progress in a very rigorous way--he decided to base the--well, now I don't want to call it an experiment, but the project on chess. And, that he would specialize his daughter in very technical training in chess, very, very early. And so, when she was--he started training her hours a day; and, you know, before the first year, like within 8 months of training, at 4 years old, she went to a smoky chess club in Budapest, and, with her legs dangling from the chair, beat an adult man--who stormed off. And from there she just got better and better. And became the greatest female player in the world. I should say--she also was way ahead of her peers in other areas, like math and things like that, so she was rather exceptional in a number of areas. But, Susan Polgar became the greatest female player in the world, qualified for what was then called the Men's World Championships but wasn't allowed to play. And the rules were eventually changed because of her achievements. And, she had two sisters that were part of the project as well, one of whom went on to become ranked, at a certain point, 8th in the world, which was the highest ranking a woman had ever attained; and the other became an International Master, didn't quite make it to Grandmaster status. But the point was, with this early approach of giving a head start and highly technical training and very focused on chess training, Lazlo Polgar showed that he could make his daughters into world class chess players. And, for him, the goal was to show by extension that any kid could be made into a champion in anything, essentially.

Russ Roberts: And, you haven't mentioned it, but the claim in the books that you alluded to earlier is that if you just practice enough and specialize and focus, you can do anything at a very high level. And the magical number is 10,000 hours, supposedly--that 10,000 hours of practice can lead to greatness. And I assume the Polgars got more than 10,000. It would have been a more--if it was an actual experiment, he should have made one of them a swimmer and one of them a chef. But, to get a little diversity--

David Epstein: Or he should have randomly selected some kids. Which was his next plan. He had--there was a wealthy individual who was ready to sort of back him adopting some kids. Because he's a kind of a brilliant guy, right? And his daughters showed some brilliance beyond even chess. So it wasn't exactly a random sample. But that's a separate issue.

Russ Roberts: Yeah. Yeah.

14:21

Russ Roberts: So, so, what's wrong with concluding that? Why--especially with chess? Why is chess perhaps not the best way to think about this? And you use the idea of wicked and kind. I really like that.

David Epstein: Yeah. So chess is an endeavor where early specialization is very important. Right? And, even though my book talks about areas where we overvalue specialists and undervalue generalists, it is domain dependent. And chess is an area where it's important--like, if you haven't started technical training by the age of 12, your chance of ever reaching, I think, International Master status, which is a step down from Grandmaster status, drops from like 1 in 4 to, like, 1 in 55. Or something like that. You have to be studying patterns, or so-called tactics, which is a short combination of moves that give you an immediate advantage on the board. It's based on pattern recognition. And the reason why there's been this explosion of young chess masters--there's something like, I don't know, two dozen or something like that Grandmasters, ever, under the age of, like 17. Something like that. And, the oldest one is like my age now, because this is a phenomenon that grew out of the availability of computer chess. Where at a much younger age you can study many, many, many more patterns. So, Lazlo Polgar gave his daughters a head start on this because he clipped 200,000 different game reports and essentially allowed them to study patterns. Now you can do that on the computer. And so that's caused a lot more young Grandmasters, because this instinctive pattern recognition on the chessboard is so incredibly important. As Susan Polgar said, 'Tactics,' which is essentially recognizing patterns, recurring patterns, 'is 99% of chess.' Basically. And, that pattern recognition--so, you mentioned the kind and wicked environments. The way that chess works makes it what's called a kind learning environment. So, these are terms used by psychologist Robin Hogarth. And what a kind learning environment is, is one where patterns recur; ideally a situation is constrained--so, a chessboard with very rigid rules and a literal board is very constrained; and, importantly, every time you do something you get feedback that is totally obvious--all the information is available, the feedback is quick, and it is 100% accurate. Right? And this is chess. And this is golf. You do something: all the information is available; you see the consequences. The consequences are completely immediate and accurate. And you adjust accordingly. And in these kinds of kind learning environments, if you are cognitively engaged you get better just by doing the activity. On the opposite end of the spectrum are wicked learning environments. And this is a spectrum, from kind to wicked. Wicked learning environments: often some information is hidden. Even when it isn't, feedback may be delayed. It may be infrequent. It may be nonexistent. And it may be partly accurate, or inaccurate in many of the cases. So, the most wicked learning environments will reinforce the wrong types of behavior. So, one of the examples that Hogarth talks about is a famous physician, a famous diagnostician, who became very prominent because he could accurately tell that somebody was going to get typhoid--like, weeks before they had any symptoms whatsoever. And, the way he would do that was by palpating their tongue with his hands. And over and over again, he would amazingly say, 'This person is going to get typhoid.' Without a single[?] system--symptom. And as one of his--it turned out, as one of his colleagues later said, he was a more prolific carrier of typhoid than Typhoid Mary. Because he was infecting--giving people typhoid, by feeling around their tongues from one typhoid patient to another.

Russ Roberts: [?]

David Epstein: And so, in that case, the feedback of his successes taught him the wrong lesson. Now, that's a very extreme case. Most learning environments are not that wicked. But, most learning environments are not nearly as kind as chess and golf, either. And most of the areas that most of us work in do not have just built-in rules and recurring patterns that we can rely on, or built-in feedback that is always immediate, automatically comes as complete and fully accurate. So, in that sense, things like golf and chess are poor models for extrapolating what most people want to learn. And, in fact, one of the reasons chess is so easy--relatively speaking--easy to automate is because it's such a kind learning environment. So there's this huge store of data, very constrained situations, repeating patterns. So, the kinder a learning environment is, the more amenable it is to both specialization and to being automated.

19:15

Russ Roberts: I'm just going to add a few thoughts outside the scope of your book about that distinction, because I think it's extremely important and powerful in our modern obsession with quantifying everything. In what you're calling a wicked environment, a lot of things that are important can't be quantified, I would argue, and so they get ignored. Which is extremely costly. I can't help but think about Hayek's Nobel Prize address, "The Pretence of Knowledge." He said you understand some of the relationships, perhaps, in a wicked environment; but you can't understand all of them. You can't quantify all of them. You don't understand all the feedback loops. You don't understand the unintended consequences of action. And you are misled about what works and what doesn't work. And, when I say that--and I mention my skepticism about science in general being applied to social phenomena, people say--first of all they say, 'You're anti-science.' I'm not. I'm pro good science; I'm anti bad science. But the other, you know, area that I think people tend to--they tend to misjudge the effectiveness of science, because there are some kind learning environments where we make tremendous progress. So, chess is an example. I think about baseball, where people applied statistics and analysis. Bill James was the pioneer of this in his--when I asked him on EconTalk if he felt we had pretty much figured everything out, he said, 'Oh my gosh, no. There are a lot of things we don't understand; they are a thousand-fold more than what we do understand.' Hope I'm getting that ratio right. It was a figure of speech; it was not a precise measure. But, the point is that baseball is a really kind environment. You can get really close to what the value of a walk is. And if you don't pay attention to walks, and then you realize they matter, you get a better understanding. I think our understanding of the economy isn't like baseball. And we want it to be.

David Epstein: You brought up a couple of great points there. And, going to Bill James--sort of three great points I want to glance off of really quickly that you brought up. With Bill James, I think it's pretty clear to people who work within sports analytics, that there has been a much greater impact of analytics in baseball than within, say, soccer. It's because of the way baseball works, with these discrete outcomes and one-on-one interactions and all this stuff you can measure. So, it's much kinder than even other sports--

Russ Roberts: Yep. Good point.

David Epstein: which are still kind in the spectrum of the world. And, what Robin Hogarth--so, he called--you talk about the economy--he said, you know: I differentiate golf and tennis, where tennis is more dynamic and involves sometimes teammates but also human interactions and things like that. I'm anticipating what the other person is doing. But it's still on the kind end of the spectrum compared to most activities. Whereas, Hogarth said, 'What you're doing in the wider world is playing Martian tennis, where you can see that people are playing a game but nobody has told you what the rules are. You have to deduce them by yourself. And they can change at any moment without notice.' And that's what we're usually faced. And I think that shows up--this kind/wicked spectrum shows up in our ideas about things that we can easily automate. Or apply analytics to. So, if you look at chess, the chess app, the free chess app on your iPhone can beat Kasparov now. Right? It doesn't take a so-called super-computer any more. So, we've made exponential progress in chess. Absolutely. In a very constrained but slightly less predictable area of driving--self-driving cars have made huge progress, but there are still some serious challenges, even though that's an area that's governed by repeating behaviors and regulations and all of those things. So, that's kind of the middle of the spectrum. Then you go over to something like cancer research, where IBM's [International Business Machine] Watson has been such a disaster that AI [Artificial Intelligence] researchers I talked to were worried that it would taint the reputation of AI in health care because it had so underperformed. And as one of the oncologists I talked to told me, where[?] Watson won Jeopardy, he said, 'The difference between Jeopardy and cancer research is we know all the answers to Jeopardy.' And so, I like to think of that spectrum from chess to self-driving cars to the real, open-ended questions that don't have these recurring patterns; and that you have to find ways to learn other than just doing the activity and expecting automatic feedback.

23:47

Russ Roberts: So, let's get practical for a minute. And a lot of listeners to EconTalk are in their twenties and thirties. Some of you out there are in your first career, your first job, your first part of your career. Maybe you like it; maybe you love it. Maybe you don't, and you're worried that if you quit, you're going to fall behind. And one of the lessons of this book--I hope it's correct, David--but one of the lessons is that quitting is okay. And, I think we're often afraid--I know I was, felt this at various times in my career--that, you know, that if I step off this treadmill, or this escalator, or get off the elevator at the third floor before I get to the higher floors, I'm going to have to use the stairs the rest of the way and I'll never catch up. So, what do we know? It's one thing to say Roger Federer got a late start in tennis. I'm probably not a lot like Roger Federer. So, for an average person who is not extraordinary, what do we know about our ability to "catch up"? Or to make up lost ground? Or to just thrive? You know, we don't need to be at the top. To just thrive would be just great.

David Epstein: Right. I think there are a couple of different ways to approach that. So, if we look at some of the sort of concrete evidence--let's say of like Head Start programs in academics--one of the things that I think is now clear based on research that's gathered up about, you know, looked at now 70 different programs that try to give kids a jump on academics, is there are some good social outcomes; but there's also what's called a 'ubiquitous fade-out effect' of their actual academic skills. And, one of the reasons that actually happens is because the easiest way to give someone an apparent head start is to teach them so-called 'closed skills'--like, basically, just procedures for doing something that work really well--

Russ Roberts: Like, on tests.

David Epstein: Right. And the problem is, everyone's going to learn that stuff anyway; and you're no good at applying it to new situations. So, it's not that they get worse. It's just that everyone else catches up and you've learned these very narrowly-constrained skills. So, there's that. In fact, I would say--I don't want to get too far off topic here, but I thought one of the coolest studies in the book and most surprising to me was the one at the U.S. Air Force Academy, where the Air Force Academy--students come in; they have to take a certain sequence of math courses. And they are randomized to professors; and then they are re-randomized the next class; and re-randomized again. And they all have to take the same test; and it's created by multiple professors and standardized and all this stuff. And, what the study found was that the professors in Calculus I. who teach the most narrow skill set have the students who do the best on their test; and it systematically undermines them in future classes. So, they underperform going forward. Because, what you actually want to do is teach them how to connect concepts and this much, sort of, broader knowledge. It makes them frustrated and may make them not do as well on the tests. But it sets them up for future learning. That study is just amazing. So, it's one example where, the kids who rate their professors really highly because they did well in their class contemporaneously--because they were given the skills they needed right now for the test--are systematically undermined and under-perform in future classes. So, it was interesting to see that the teachers who were rated the best by students were the ones who undermined those students for their future learning. So, that's one example of getting what appears to be a head start but what undermines future development. And I think that's sort of a decent analogy for other areas of life. And this isn't to say that people shouldn't specialize at all. I think one of the [?] a theme of this book, it's not that specialization is bad. It's that society has overvalued specialists and undervalued generalists. And overvalued the early specialization pathway, and undervalued the sampling period and delayed specialization pathway. And I think that shows up more broadly in other areas of work. So, in one part of the book, I discuss the match-quality--the degree of fit between a person's abilities and their interests and the work that they do. And, one of the things that shows up in that kind of research is that people get information signals from trying certain work. And, they learn about themselves: they learn how good they are and if they like it. And, when they use that information and quit, they tend to have faster growth rates in whatever they are doing next because they've learned something about themselves. And that's--maybe they learned that they weren't good at doing what they learned before, they're better at something else, or they're more interested in something else, so there are better opportunities elsewhere. But, maybe, paradoxically, if we think about the received wisdom, the quitters end up as the faster growers. And also, often, happier. There's a Freakonomics--so, Freakonomics used to have these Freakonomics Experiments home page that I discuss in the book, where thousands of people--Steven Levitt, the so-called Freakonomics economist--leveraged his readership to get thousands of people to flip a digital coin to make important life-decisions. 
Those could be from getting a tattoo to having a kid to changing jobs. Right? Changing jobs was the most common one. And, nobody had to change jobs if they flipped heads. But they could. And it turned out there was a causal effect of the digital coin flip on the decision people actually made: and, people who followed the coin flip and changed jobs were happier down the line when he checked back in with them than those who had gotten the flip that said they should stick with their job and had done that. So, there was a causal effect of changing, based on the coin flip, on happiness. So, I think there was--I thought--Tyler Cowen talked a little about this study and said, basically his advice was, 'If we're thinking about quitting, maybe we should.' I think that was his exact quote.

29:54

Russ Roberts: Yeah; I don't really like that. For a bunch of reasons, one of which is, 'Boy, the selection issues there: who chooses to come to the page, who chooses to flip, what they hit'--you'd really want a lot more information.

David Epstein: But you should look at his analysis where he tries to establish causality between the coin flip and the subsequent outcome. So, there's definitely selection for who comes to the page. But I would highly recommend diving into the methodology of that study, too, if you are--

Russ Roberts: I will, but it's more than just who comes to the page. It's who thinks it's a good idea to flip a coin to make a life decision: some people are in--you could imagine someone who is in desperate anxiety because they can't decide what to do; someone who finds it amusing. I mean, there's just such an enormous range of emotional things there. But, to give the study its due, it does remind me a little bit of the placebo effect. It's like, if I can convince myself that I've done this in a certain, particular way, I'll be happier. It's like a form of self-therapy maybe. I don't know.

David Epstein: Maybe.

Russ Roberts: I do want to add--

David Epstein: But, you know, there is other work I discuss in there, too, on things like teacher turnover, which is just this horrible, denigrated thing among school systems; it shows that when teachers move, they actually perform better. And so, you know, we don't like teacher turnover because it's an administrative headache. But, in fact, I think the evidence is that those teachers are responding to match-quality information, finding a better fit for themselves. And it's not based on moving to schools with better students. And they actually do a better job of boosting student performance after they move. So, I think we should be careful about constraining those kinds of movements. And you tell me, but I think we want low friction in that sort of talent market, basically. And that's not really what we have.

Russ Roberts: Oh, I agree. And I also think--to, just to [?] what I said a minute ago: I think change is a really powerful--just change, period. I think, you know, when I moved and came to George Mason, my productivity jumped dramatically. For a hundred reasons. But part of it was just that I was in a different place. I remember when key people left institutions I worked in, I thought, 'Oh, my gosh. That's going to be such a blow.' And other people stepped up. And other people knew people who came in. And there were--just gave people things to think about that they hadn't thought about before. So, you know, when people ask me, 'Should I take this job?' one of the things I always ask is: Do you feel like you are in a rut? And a rut can be a really comfortable place. I think about it more like a hammock. I remember a friend of mine who asked me for advice on this. And, his--he was in an incredible job. It paid well. Very low expectations. Lots of leisure on the job, outside the job. And he was very happy. It was very satisfying. It was a good job. And he did it well. And, he had a new opportunity that came into his life that was much harder. It wasn't going to pay a lot more in real terms, after cost-of-living changes were taken into account. And, I just asked him if he felt challenged in his current job. Did he feel alive? Was it ever exhilarating? And the answer was No. And--I didn't tell him what to do, but I encouraged him to consider the new job; he took it. And I understand that's a data point of 1, a sample of 1. But, his life changed, in all kinds of mostly good ways. So I think change--I wouldn't tell people, every 5 years change. That's a little bit like term limits. We understand why term limits in politics might be good. It just, it seems absurd to say that after you've stayed in Congress a certain number of years, you should quit, after you've learned so much and you have better props[?] at the job. But we understand that sometimes just leaving is a good thing. People get into a rut. They get into--expectations fall. It gets harder and harder to fire people. And you know that. And so, you don't work as hard. And so, I don't know--I think there's a lot to be said for just changing now and then. So, I'm a big fan of that.

David Epstein: Wait. Wait. Wait. That brought up[?] some great points. Sorry--can I just--because I thought those were fantastic points--

Russ Roberts: Sure.

David Epstein: that go to a couple of things: The idea that just a change might be useful. At first--I love that hammock; I'm totally going to use that in all trivia to you[?]--that, we say people get into a rut when really what they're doing, more often--like, a rut saying maybe they can't produce anything. No: it's their getting into a hammock, which is they are producing the same stuff, basically. They are not getting off that plateau because it's comfortable.

Russ Roberts: And it's pleasant. It's not like Charlie Chaplin, Modern Times, tightening the same braking[?] nut over and over again. That's a rut.

David Epstein: It really reminds me of when I used to--you know, for my last book, when I was looking through literature on speed typing, actually. So, it turns out, what most people do is, we get to a certain speed of typing that's like fast enough and there's nothing pushing us beyond that; and we settle into it. When in fact you could get much, much faster. But what you have to do is basically set a metronome at a little bit faster than you go now; ignore the mistakes; just go at that speed. And you take it up little by little, and you can like double your typing speed. But that's not our natural orientation, right? It's to get to a certain place and then sit in the hammock. And, gosh, now I'm mad I didn't use that hammock in my book. But it reminds me of one of my favorite--this might be related--one of my favorite phrases that stuck in my head in the book was from Herminia Ibarra, professor of organizational behavior, which is, 'We learn who we are in practice, not in theory.' And I think there's a huge industry of, like, self-help and personality tests that either explicitly or implicitly want to convince us that we can just take that test and introspect, and know what's best for ourselves. But, in fact, our insight into ourselves is constrained by our roster of experiences. And so, the only way to find out what else is out there and what might fit better is to try some stuff. And while experimentation seems like it might be a waste of time, or it might be scary, that some of the people I think I write about in the book end up sort of, in fact, being generalists because what they were trying to do is zigzag till they could kind of triangulate the best spot for themselves. And they end up having a lot of different experiences because that's how they get to know themselves and their skills. It wasn't that they were just trying to be broad. But their just changing things, this experimentation really teaches you about who you are in practice, because we're not as good at introspecting that as we think. And it makes me think--and I wonder--I was just reading some research by LinkedIn's chief economist that showed one of the main predictors other than going to a Top-5 MBA [Masters of Business Administration] program--whether that was because of the school or the student selection, who knows?--but, of becoming an executive when they looked at a half million members was the number of different job functions that someone had worked across in the industry. And I wonder--maybe that's because--I think the chief economist suggests it's probably because those people get a well-rounded view of the industry, which could be. But I also wonder if some of that is they are going through this form of personal experimentation where they learn what's possible; they learn what they are good at and other interests; and maybe they are able to find a place where they fit. And I think we have to realize we can't just do that without experimenting. We can't just, you know, introspect everything about ourselves. Like, that would seem crazy when we were younger, to think that we could just like sit around and introspect and know everything about ourselves, without trying things.

Russ Roberts: That is so deep. I mean, it seems obvious.

37:36

Russ Roberts: But, one of the things that we haven't talked about is the credible fear of change. Eric Hoffer wrote a beautiful little book--I think it's called The Ordeal of Change; I recommend it--about just how hard change is. And in his case--it's just an amazing story. I think he was blind until he was an adolescent or a teenager; he couldn't read, I know for sure. And at some point he gets access to books--he has very little formal education--and he just becomes this voracious reader. But a good chunk of his life, or at least part of his life, he's a migrant farm worker. And he agrees to move from picking peas to picking something else. And he talks about how scary it was, because he was afraid he wasn't going to be good at this different kind of vegetable. And, that's a trivial example, but--

David Epstein: That's an awesome example--

Russ Roberts: the book's about the ordeal of change generally, not just a change of vegetables. But, change is scary. And most of the time that hammock is attractive, not just because it's fairly pleasant to rock back and forth in it: you worry that the other option is going to be a really hard chair you can't ever get comfortable in. And so I think it's very difficult for people to change. And one of the themes of your book, which I love, and which you are emphasizing now, is that it's not just that, 'Oh, you might like this more.' You need to try a lot of stuff. You can't figure it out in advance. You need to explore. Unless, you know--some of us are lucky, or not lucky, I don't know what the right word is--and find something early on. And others 'flounder.' But that's a feature, not a bug.

David Epstein: Absolutely. And again, there's no single pathway. Some people will find something early on that's a great fit for them, and that's great. But it reminds me of a psychological finding I mention in the book called the 'end of history' illusion. That's the idea that we all recognize that we have changed a lot in the past, but think that we won't change so much in the future. It leads to some really funny findings. Like, if you ask people how much they would pay to see their current favorite band 10 years from now, the average answer is $129. But if you ask how much they would pay today to see their favorite band from 10 years ago, the answer is $80. Right? Because we really underestimate how much we change. And that includes personality traits: the correlation for an individual personality trait from the teen years to middle age is usually, like, in the 0.2 to 0.3 range--so, low to moderate. There are certainly traces of who you were that are still distinguishable, but you are a very different person. And we underestimate that change. I think, between that and our inability to predict the world we're going to live in, we're basically facing this task of trying to decide how to behave for a future you whom you don't yet know, in a world you can't yet conceive. And the idea that most of us can do that really well, like, a priori, without trying some things, is a very limiting notion, basically.

Russ Roberts: And the idea that you can get better at it by sitting in your armchair and pondering it, or reading a book that's going to help you figure it out, is probably an illusion. It's really an important point. You have a great line in there--I can't remember which of the people you write about it comes from--about the idea of flirting with your possible self: because you don't know who you want to be. And you want to just kind of hang out for a little bit. Go on a date. Imagine it. Try it. Do a little of it. And trial and error, just as a general point, is grossly underrated.

David Epstein: That's right. And the thing is, there's so much lip service to the error part. But who really supports that in practice? 'Oh, failure is so important.' But I don't see anyone's boss saying, 'Yeah, this was an important failure for you.' Or at least that's never happened to me. So we give the lip service to it, but what about in practice? And you mention dating, which I especially like, because I like to think of careers that way. Career-wise, we incentivize people to get married to their high school sweetheart. If we thought about careers the way we think about dating, nobody would settle down that quickly--or very few people would. It might seem like a great idea to marry your high school sweetheart; I thought it was a great idea at the time. But, having more experience in the world, in retrospect it looks like a really bad idea. So I think that's an important thing to keep in mind.

42:17

Russ Roberts: Now, before we go on--and I want to move on to a couple of different things--when you mention this issue about learning, and what I understand as narrow techniques that work for one particular thing but aren't as generalizable, and you note that, at the time, the broader approaches are frustrating: I want to share my favorite course evaluation from when I used to teach in the classroom. So, I got a 1 from this student, on a scale of--

David Epstein: out of?

Russ Roberts: 5 was good. 1 was bad. And a 1 is really demoralizing. So, I look at it: What does the student say? 'This course was very unfair. Professor Roberts expected us to apply the material to things we had never seen before.'

David Epstein: That's like--that's the whole trick of learning.

Russ Roberts: Right. But I do think--and I want to use that as a segue to the next topic, which is: our ability--and you write about this a lot, and it's complicated--our ability to use analogies and patterns, not in the kind world of chess but in the wicked world of complex problems, and how powerful that is but also how limiting it can be. And I want to tell you a story, and then I'll let you use the story as a way to riff on the ways you write about this in the book. I met a CEO [Chief Executive Officer] once, of a very large company. The company had gone bankrupt. And this was, of course, humiliating to this person. And I don't know why he confided in me. I didn't know him well; I think it was the first time I had ever met him. It was in my office--I was working in a business school. And he was almost talking out loud to himself. And he'd gone to Harvard--one of those Top-5 MBA programs, which produce a lot of CEOs. And he said, 'I made a mistake,' when he was talking about why they went bankrupt. I didn't ask him why; he just free-associated, and he said, 'I made a mistake. I applied the wrong case.' And he told me what the cases were. He thought it was going to be like this one case, but it turned out to be like this other one. And it bit him in the rear end, and the company went bankrupt. And I thought, 'What an extraordinary illustration of the challenge of the case study approach.' But, in a way, that's kind of like life. We see things, and we say, 'Oh, that's like such-and-such.' And so I think, 'Oh, I'll use that tool to solve this problem.' Turns out it's the wrong tool.

David Epstein: Right. Right. And I think that is really common among executives--among all decision makers, really. And in many cases--so, I have a chapter about analogical thinking, which I think is what you are referring to--

Russ Roberts: yep--

David Epstein: and, in many cases--the case like that executive, where he thinks of maybe the most dramatic example, or the one that on the surface is the most similar, and that's what he uses as a model of the problem he has to solve: in many cases that actually does work for us, in a kind world. Right? If I fixed my drain in one apartment and a somewhat similar drain gets clogged in another place, I'm going to use the same techniques, and that's going to work. You have a million interactions every day where things are not exactly the same, but an analogy to a pretty similar situation works really, really well. And that's kind of how we get through life. And we don't really have a problem with that. The tricky part is when we use that same instinct to default to a single analogy--either the most dramatic or the most surface-similar--when the problems are much more complex. And I think this can end up leading to a really constrained view--what Daniel Kahneman called the Inside View. Where you have your problem, and you get obsessed with the particular details of your problem. If you use an analogy, it's going to be a single analogy, and you're going to try to match up details of those problems and similarities. And, essentially, you end up having this very narrow view of your own problem. And if you use any analogies at all, it will probably be a single one--the first analogy that came to mind, irrespective of whether it's useful or not, and probably because it has some of what I call surface similarities. What you want to do instead is get out of that mindset and create a so-called reference class of analogies. You want to generate a whole bunch of analogies and then think about what usually happens, on the broad scale, instead of focusing on the particular details. So, Kahneman tells a personal story about this, where he and a group were charged with making a decision-making curriculum for a school system. And they had a year of meetings--

Russ Roberts: incredible story--

David Epstein: and, at the end of that, they decided, 'Okay, let's have a meeting to talk about how much longer we think it's going to take for us to finish this curriculum.' And they take votes; and everyone votes between a year and two years from now. So, the entire range of guesses is 1-2 years. And then Kahneman realizes there's this guy named Seymour who has seen this process play out with a whole bunch of other teams. So he asks Seymour--who, again, had just predicted no more than 2 years, like 5 minutes ago--'How did it work with these other teams?' And Seymour says, 'Gosh, I never really thought about that. But, come to think of it, none of them finished in less than 7 years. And a lot of them never finished.' And the group says, 'Wow.' And then they discuss their unique personalities and their unique assets and say, 'Well, that won't be us.' And they stick with their 1-2 year prediction. Eight years later, they finish. Kahneman is not even on the team, or living in the country, any more. And the school system doesn't want the curriculum any more. Right? So, instead of focusing on their unique assets and how skilled they were, or on any single analogy, what they should have done is forget about all those little details, forget about what they think their unique skills are, and gather as many previous cases as they possibly could. Because, while those aren't exactly the same, most events aren't completely unique. This is what's called reference class forecasting, and it forces you, in a way, to think a little bit like a statistician: you look at what normally happens instead of getting distracted by all the little details of your particular case. And one of the interesting findings in this chapter, to me, was that analogical thinking can be very powerful for problem-solving. But if you use a single analogy, it's not good at all. You have to use a bunch of analogies. You have to generate this whole reference class of analogies and see what usually happens. If you are trying to generate ideas, having more analogies leads you to generate more ideas for a problem. Or, if you are trying to predict how your situation is going to unfold, you want more analogies, to see what usually happens. But our instinct is very strongly to focus on the internal details of a problem and, if anything, to use a single analogy. And, unfortunately, that is basically the exact wrong way to go about it, whether you are trying to generate ideas or trying to predict what's coming.

49:59

Russ Roberts: The other part of it, which I want to get your thoughts on--and I don't think you talk about this in the book explicitly, although I think it's under the surface--it's not just analogies. I like the idea of thinking, as you do in the book, in terms of tools. And, classic line I've said here before, and I think it's deep even though it's a cliché: if you only have a hammer, everything looks like a nail. And I think after a while we forget that we have a hammer. So, one of the problems is that we are used to having a hammer. It's what we've got. We really like swinging the hammer, too, after a while. And so it's not just that it's what you're used to, or that you pick this analogy. It's that you get into the habit of using the same analogy over and over, because it's served you very well. And, as you say, it often does: a particular analogy in the right kind of setting, or the right kind of tool in the right kind of setting, is extraordinary. And, just to contribute a trivial example: as an economist, I always like to say incentives matter. So, the coach of the Villanova basketball team--I just read this morning; I don't know if it's true--allegedly turned down an enormous offer to leave Villanova and go to UCLA (University of California, Los Angeles). 'Well, that's irrational,' would say someone who only uses the hammer of financial incentives and doesn't understand the role of nonmonetary factors--or other cultural factors: maybe he loves Villanova; who knows what the reason is. But an economist's first thought might be, 'Oh, he'll take that job because it's a much higher salary.' And that's just one example of where a tool could lead you astray. But this idea that we become attached to our tools in an emotional, almost needy way is illustrated in your book by what, to me, is just an unforgettable, extraordinary example: firefighters who die because they get overtaken by fires. Talk about that.

David Epstein: So, these are wilderness firefighters, particularly so-called 'Hotshots' and 'Smokejumpers,' who go into forest fires--either hike in or parachute in. And they have to dig trenches, usually, and clear fuel to try to contain forest fires. And a sociologist named Karl Weick made an unusual finding when he looked at them. These are incredibly skilled performers; but once in a while there's a disaster and a bunch of them die, when something unexpected happens. And what he noticed over and over again in those scenarios is that they would die with their tools--in some cases carrying a hundred pounds or more, axes and things like that; heavy tools--while they were trying to run away from a fire. And on rare occasions one of them would drop their tools and would survive, because they would be able to run effectively away from the fire. And you see in the testimony of those survivors, they'll say, 'The fire was closing on me, and I decided I had to put down my axe. And I thought: Man, I'm crazy! I can't believe I'm letting go of my axe.' So they'll, like, look for a place to dig a hole really quickly and bury the axe to protect it. Meanwhile, the axe has no use any more. All they can do is flee from the fire for their lives. And most of them never drop those tools, even when they are ordered to. They die with the tools still on their backs, encumbering their running, when they could have gotten away if they had dropped the tools. And Weick saw this as an allegory for that kind of when-all-you-have-is-a-hammer problem: the tools become so central to the professional identity of the practitioner, and so bound up with their feeling of competence, that they essentially no more realize the tools are separate from themselves and their practice than they would their own arms. And so the thought of dropping those tools never even really occurs to them as a way to adapt to an unfamiliar problem--even though it's the one thing that would save their life. So their bodies are found still with their tools. And he used that as an allegory for what happens in different disciplines, where--whether it's a real physical tool or just some common procedure--people get so attached that they don't even really realize it's something they can drop, or can change. And that's fine as long as they face the same situation over and over. But when they face an unfamiliar situation--or, Russ, like your student said, it's unfair because you expect them to apply the material to a new situation: well, that's kind of life, isn't it? You have to apply the material to a new situation. They don't realize that they can use these tools in any different way, or drop them entirely. And so, in all these domains that Weick studied--like commercial airplane accidents--the vast majority of the time the problem is that when all the signals show the crew is facing a unique problem, they stick to their familiar procedures anyway, and to their initial plan, until it's way too late. There's this lack of ability to realize that you can deviate. You are almost stuck in this pattern. You are in your procedural hammock, basically--to use our earlier terminology; and it's really hard to get out of it.

Russ Roberts: It's such a deep thing, actually. I didn't appreciate it enough when I was reading the book, and you are making me realize it now: the emphasis on procedure--which is really important, usually, because it prevents emotional mistakes, and it prevents spontaneity that in life-or-death situations is extremely risky--becomes the thing that kills you. Those tools which save your life become the thing that costs you your life. And, you know, you talk a lot in the book about how many solutions to problems come from the non-specialist, and how often that fresh way of looking at things helps--the generalist approach rather than the specialist approach. It almost doesn't pass the sniff test. Like, 'How could a non-chemist solve a chemistry problem? It's impossible.' And the reason is: the chemistry people are just hammering that nail over and over and over again, whatever it is. And somebody comes along and says, 'Let's try a screwdriver. That's not a nail. You are doing the wrong thing.'

56:51

Russ Roberts: But, it comes out most vividly in the book with the NASA (National Aeronautics and Space Administration) example. Here are these engineers--you talk at length about the Challenger tragedy--who are so smart, and they understand so much. But they are paralyzed when something doesn't fit into the procedure.

David Epstein: That's right. And not just the Challenger. I mean, in the Challenger case, they had these incredible procedures that had worked really, really well. I mean, they did an amazing job--

Russ Roberts: Doing something that's inconceivable.

David Epstein: Absolutely.

Russ Roberts: Sending people out to space and catching them when they come back.

David Epstein: Absolutely. And, until the Challenger, I think they had never lost anyone in space, or returning from space--I guess the Challenger wasn't in space, either--well, anyway, they had had an accident on the launch pad before. And what happened in the Challenger case was that they faced something unfamiliar in terms of the temperature they were going to have at launch--

Russ Roberts: You're on to the Challenger, now.

David Epstein: Yeah--the Challenger. And, to make a long story short, there were a small number of engineers who recognized that they were in an unfamiliar situation, and raised their voices and said, 'We might have a problem here'--one engineer in particular. And he was asked to quantify the problem. And NASA, the organization, had this mantra hanging in the mission room: 'In God we trust. All others bring data.' Right? So I think that sort of set the tone. You could see this after the accident, in transcripts of testimony from engineers: they would say things like, 'If I didn't have data, I didn't have a right to have an opinion, basically.' And I understand that: you want a rigorous data culture.

Russ Roberts: Generally, it's a good rule.

David Epstein: Exactly. At the same time, in this particular case, they did not have the data they needed to make the decision. And so, when this particular engineer argued for a last-second delay of the launch, he was asked to make the quantitative case: Why does he think the O-rings are going to fail at this temperature? Show the data points. And the fact was that they didn't have the right data points. His data was primarily based on two photographs showing that, at different temperatures, some burning-hot gas had gotten past a seal; and at the colder temperature it looked much worse than at the warmer temperature. But one of those was from one of the warmest launches they had ever done, and the other was from the coldest launch they had ever done--so the two spanned almost the entire range of temperatures they had launched at. And his bosses basically looked at this like, 'Well, you have one really warm one and one really cold one. So that's no correlation.' And what he was saying was, 'I think this qualitative data--these pictures--is telling a story.' And that was rejected. It was essentially deemed inadmissible evidence, because it wasn't a quantitative story, and their procedure called for strict quantitative criteria.

Russ Roberts: It was an anecdote, actually. The equivalent of an anecdote.

David Epstein: That's right. It was an anecdote. And it was a hunch. And, you know, later, when the NASA managers testified in front of the Rogers Commission that was investigating this--Richard Feynman was on it--they made this argument that the engineers didn't have a good quantitative case. And Feynman said, 'When you don't have data, you have to use reason. And they were giving you reasons.' And he goes on with this explanation that, when the data isn't there, you have to find another way to make decisions, and not just stick to, 'Well, our process says you either have the data or you can't change the decision.' You have to recognize that you are outside of the normal bounds and say, 'In that case, we have to apply different criteria.' And they didn't. So, in this case, like the Hotshots, their tools weren't axes and hammers. Their tools were these procedures that called for very specific types of quantitative data--procedures they were not willing to drop, even though the data required to really make the decision didn't exist. And yet they still had to make a decision. So they just continued with the launch. And we know what happened. And, in fact, their next disaster, the loss of the Columbia, was so culturally similar that the investigation board deemed NASA 'not a learning organization,' because they had not learned from the Challenger. In that case, there were, again, a small number of engineers who were concerned that Columbia had been damaged, and asked the Department of Defense for high-res [high-resolution] photos of a portion of the shuttle they thought was damaged. And this was outside the normal procedure--again, this hallowed procedure that's kind of the toolset for NASA decision-making. And their superiors found out, went to the Department of Defense, apologized for contact outside of normal channels, and said it wouldn't happen again. And then the shuttle broke apart on re-entry. So, in both cases, it was this strict adherence to a procedure that works really well when they are within the bounds of their experience, and that was disastrous when the information they needed to make a decision was not available and the quantitative case couldn't be made.

1:01:54

Russ Roberts: I can't help but wonder if, on virtually every launch, there were people who said, 'Wait a minute.' And most of the time, when they were ignored, it turned out okay. You know, after many terrorist attacks, there's always some report that we had some information--if we'd only acted on it. Or, a better example would be Pearl Harbor. In a world of infinite time and infinite resources, Pearl Harbor shouldn't have been a surprise: of the zillions of telegrams that were intercepted at the time, and decrypted, there was some suggestion. So of course, afterwards, there were people who said we should have been aware of this risk. Do you think that's the case in this NASA culture story? I mean, it's an incredibly powerful story. Even if it's not 100% as straightforward as it sounds, it's still a very useful thing to keep in mind about relentlessly using tools all the time. But, do you think there are other times when people raised those issues and just got ignored, and it turned out okay?

David Epstein: Yeah. I mean, I think you nailed that right. So, one of the guys I spent a lot of time interviewing for that chapter was Allan McDonald, who was the head of the rocket booster program for NASA's contractor, Morton Thiokol. He was on the famous conference call where they decided to go ahead with the launch. And one of the things he said was, 'If we had effectively delayed the launch without the proper quantitative data, the feeling probably would have been: it would have been fine; we should have gone ahead'--and the people who stopped it would have been 'Chicken Littles,' to use his language. It wouldn't have been deemed a great decision. Right? Because we don't know the counterfactuals. And the Challenger--I think that one was probably more unique, because the temperature was so far outside their normal bounds; that specific instance might not have gone the same way. But I'm sure, at a lesser magnitude, those things were happening constantly. And that's why, in that chapter, I also talk about a commander of para-rescue jumpers in the Air Force, who has to make decisions in Afghanistan with very, very little information. So, there's an explosion in a caravan, and his para-rescue men have to go and essentially rescue an unknown number of injured soldiers. And they don't know what situation they are getting into. And it turns out that he makes a very difficult decision that turns out really well, and everyone survives. But he was so adamant that I'd better include a quote where he says, 'Maybe it was luck. That decision could have turned out differently. And then, even if I had used the right procedure, it would have been deemed a bad decision, because I would have had to go explain it to 10 families.' And I think that was an important thing to include. Because--even in something as simple as blackjack: if you play perfectly, you win, whatever, dozens more hands in a thousand, or something like that. And we don't get that many takes in most of the things we do. So I think we have to be really conscious of the fact that a good outcome doesn't always mean that we used a good process. And vice versa. And I'm really curious what would have happened if they had not launched Challenger. Because I think, internally, there would have been a lot of feeling that people were being overly cautious. And we're talking about the space program. I think I used this quote where Mary Shafer, a former NASA engineer, said, 'Perfect safety is for people who don't have the balls to live in the real world.' That kind of became a famous quote. Because you can't have perfect safety. You have to take some risk. And so what I think that chapter is about is trying to calibrate between two different types of errors: errors of mindless conformity, where you follow the procedure and use the tools no matter what; and errors of reckless deviance, where you are never following procedures and always ad libbing and improvising. And what I tried to get at in that chapter was: How do we diversify the tools of an organization so that we strike a balance between those errors of deviation and errors of conformity?

1:06:32

Russ Roberts: I'm just thinking of Nassim Taleb, who has taught me that expected value is the wrong way to think about rational decision making. You don't want to just look at the odds of a bad outcome; you want to look at what the consequence of that bad outcome would be. And that's really hard for us to do. We often just say, 'Oh, it's unlikely. So I don't have to worry about it.' But if it's unlikely and it means ruin, you want to stay really far away from it. You want to be, as he says, Antifragile. And it's a fascinating part of human experience: we struggle to deal with uncertainty. When I talk with people about this, they'll say things like, 'That was a bad decision.' And their reason for thinking that is that it didn't turn out well. That's a bad way to think about the feedback between your choices and your outcomes. And I don't want to overstate what I said earlier about the importance of change and the power of change. Someone listening might say, 'Oh. Great! I'm going to quit my job tomorrow.' And it might turn out really badly, and you might decide that I gave really bad advice. In a way, it's like blackjack: you'd have to quit your job a thousand times, so that the dozen extra times you win outweigh the losses and make it a valuable thing--and we don't live long enough. And that story of the Afghanistan commander--I'm not going to go into the details, but it's so powerful in the book--makes you realize, well, one thing: that you have a really easy life, and the decisions you make and worry about are trivial compared to what he had to deal with. It's hard to figure it out.

David Epstein: When I was interviewing him--he's a very stoic guy. And when he talked about delivering this decision--essentially the major part of his decision was that he was not going to accompany his men on this rescue mission, because they were space-constrained and he was guessing how many patients they would have to deal with. And some of his men sort of rebelled at that, or even suggested that he was afraid. And he broke into tears when I was interviewing him about this--which was totally unexpected to me--saying that leadership is hard. And the way I use that in the book is to suggest that this incredibly strong cohesion culture made sure that he would not deviate recklessly from normal procedures. But at the same time, he had enough autonomy and outcome accountability that he was willing to deviate and ad lib if he thought it was tremendously important--but the bar was really, really high. And I think that's kind of the best we can do: try to set up forces that cause people not to conform mindlessly, and not to deviate all the time and ignore standard procedures. And then hope that, over a large number of people, that gives us a blackjack advantage, even when individual decisions go wrong.

Russ Roberts: Well, I assume that commander--I assume he was afraid--

David Epstein: Oh yeah--

Russ Roberts: that he knew he was afraid. And he probably hated the idea that by making that call he was being selfish. And that's an unbelievable dilemma, right, where your brain's telling you, 'Go, because that's the procedure; that's the "right thing to do."' For some reason--maybe it was fear--he imagined that it might be a good idea to stay behind. And it turned out great--that time, that one time, as you point out. But being aware that he may have come up with that solution partly out of fear, especially since it was a particularly unknown set of unknowns, probably haunts him. Terribly.

David Epstein: I don't think--personally, I don't think he was afraid of dying. Because he had gone on many of those so-called Category Alphas--those very dangerous situations with lots of injuries--before. I think he was afraid of having to make a second trip back there if they didn't have enough room for patients. But what he said was that a worse outcome for him than dying would have been, if something went really wrong, to watch his whole team die, and then have to explain that. And I think for those guys, that's a fate worse than dying.

Russ Roberts: Yeah; and of course, I know nothing about this individual; I'm really working from the fictionalized version of the story and playing it for educational purposes.

1:11:34

Russ Roberts: Can we shift gears? I want to talk about something in the book that, in a way, precedes what we've been talking about, which is problem-solving generally. I'd like you to talk about the Flynn effect, and the testing that a man named Alexander Luria did of pre-modern people and IQ [Intelligence Quotient]. Because it illuminated a lot of things for me that relate to past episodes of EconTalk and questions of how to think about the world. And I'd love for you to share that.

David Epstein: So, in short, the Flynn effect is the name for the rising scores on IQ tests around the world in the 20th century, at a steady rate of about 3 points per decade--basically the whole curve just shifting over. It's not particularly concentrated in one part of the curve, and it's not concentrated in a particular area of the world. It is, however, most extreme on the more abstract sections of tests, or on the more abstract tests. So, there's a test called Raven's Progressive Matrices that was created to be, if you want to say, the end-all of cognitive tests: it wasn't based on anything that you had learned in school or studied in the world. You just get these abstract patterns, and one is missing; and you have to deduce the rules from the patterns and fill in the missing pattern. And so this was supposed to be the test that, should Martians alight on Earth, would be able to determine how clever they were--because it wouldn't require any sort of cultural background. And what James Flynn found was that not only was that not the case, but, in fact, the biggest gains in scores over time were specifically on Raven's Progressive Matrices. Each generation did better than the last, to the point where our great-grandparents would look as if they were impaired compared to us today. These tests are always normed so that the mean score is 100; but in terms of the actual number of questions they got right, they would look impaired.

Russ Roberts: And my general impression is that my great-grandparents were no smarter, no stupider than I am, in terms of raw ability.

David Epstein: Right. But you are much more equipped for that kind of like--

Russ Roberts: for that task--

David Epstein: pattern recognition--right, those sorts of abstractions. And if you look at improvements in scores on material that's more related to what people learn in school, they have barely budged, if they've budged at all. And in cases where they've budged on vocabulary, it's largely come on abstract words--things like 'law' or 'pledged' or 'citizen'--as opposed to much more concrete nouns. So, the Flynn effect is the name, broadly, for this increase in IQ scores. But an interesting facet of it is that it's more apparent on the more abstract tests. And so, to Alexander Luria--you mentioned him. Luria was a brilliant young Russian psychologist who, in 1931, saw an opportunity. This was a time when the Soviet government was forcing agricultural land into large collective farms and pushing industrial development--socializing agricultural land. And Luria saw the possibility of a natural experiment here, where he said, 'Okay. I'm going to go out to these very remote areas of what is now Uzbekistan and see: will going through this shift from subsistence farming and herding to collective agricultural work, vocational training, and some other sorts of school opportunities change the way that people think? Will it change their habits of mind?' And when he went out there, he learned the local language and everything; and he brought a team of psychologists. There were some areas that were so remote they were still untouched, and some areas that had gone through various degrees of transformation from subsistence farming and herding to collective farming. And so he studied people in both conditions. And what he found was that the so-called pre-modern people, who were subsistence farmers or herders, were very constrained--and I don't mean that in a way to denigrate them--but their habits of mind were very constrained to their exact experiences. He would ask them questions, and they could only answer for things that they had directly experienced. Whereas, the greater the dose of modernity they had had--whether that was some exposure to school, or vocational training, or even just collective farming--the more they could start to abstract, make generalizations, use formal logic, and answer questions about things they had never experienced. And this showed up in really basic things. Like, the people in the more pre-modern condition, if you gave them a circle and a dotted circle and asked them to make groups of shapes, they wouldn't put the circle and the dotted circle together. Because they would say, 'Well, one of these is a coin and one of them is a watch. And you obviously can't put those two things together.' Whereas people who had had some dose of collective work, or some school, even if they didn't know the names of the shapes, would be able to see that they had abstract qualities in common, and they would group the circles together. And those were some of the most basic examples. But it went all the way up to much more important abstractions, where the people who had had a dose of modernity were much more able to transfer their knowledge to unusual situations. And that's not to say that one way is better than the other. It's just that one is much more adapted to the kind of transfer of knowledge that we need on a daily basis.

1:17:45

Russ Roberts: The one I found so striking was when three adults and a child were shown, and the question was: Which one is different? What doesn't belong here? And supposedly the person couldn't answer it. And they said, 'But don't you see that the child doesn't belong?' 'No,' says the respondent, 'the adults are working; they need the child to help them get stuff and to run errands.' So, you can't take the child out. And, you know, there's something child-like about that way of thinking, almost. They had a similar example--you tell the story of: there's a hammer, an axe, a saw, and a log. Which one doesn't belong? And they can't figure it out. Because, he says, 'Why would you throw out the log? Then what's the use of the saw?'

David Epstein: Right. Right. Because then the saw has no use. We can see that three are tools and one's a log; but they would say, 'Well, the hatchet works because you can use it to cut the log. The knife isn't as useful, but you could hammer it into the log with the hammer.' It's all this very practical kind of thinking. And again, it's not worse. It's just adapted to a different kind of situation. And that was this repeated pattern that Luria kept seeing. He had this one question--sort of a logic puzzle--where he would say, 'Cotton grows well where it's hot and dry. England is cold and damp. Can cotton grow there or not?' And sometimes the farmers, because they had direct experience growing cotton, would resist answering this question. They would say, 'I've never been to England. I can't tell you.' And the psychologist would say, 'But I just told you it's cold and damp. And, as you know, cotton grows well where it's hot and dry.' And they'd say, 'Well, I've never been to England, so I can't tell you.' And if you said, 'Well, but what do my words imply? If the place is cold and damp, will it grow there?', they would finally say, 'Okay, it's not going to grow there--well, if it's cold and damp.' But then you would ask a separate, very similar logic puzzle with different details, something like: 'In the far North, where there's always snow, all bears are white.' Novaya Zemlya is the example used--it's in the far north, and there's always snow. 'What color are the bears there?' And they would absolutely refuse to answer. No amount of pushing would get them to answer. They would say, 'Your words could only be answered by someone who has been there.' Even though they had previously, with pushing, answered the question about England, because they had experience growing cotton. With the bears, they would absolutely refuse to take knowledge and transfer it to another domain. They would say, 'How could anyone know? You would have to ask someone who has seen it.' And we take for granted the fact that we do this kind of knowledge transfer all the time. We are able to use knowledge that relates to things we have never directly experienced.

Russ Roberts: So, Iain McGilchrist, in his book The Master and His Emissary--he was on EconTalk a while back--talks about the right hemisphere of the brain and the left hemisphere. The left hemisphere is analytical. It's precise. It tells itself stories all the time, filling in the blanks. It's really good at patterns. And it's a little bit reckless because of that: it oversimplifies; it over-estimates its ability to make the world conform to the things it sees. The right side of the brain is holistic, connected, etc., etc. And McGilchrist has a lot to say about that; it's a fascinating book and, I hope, that was a good conversation. But I couldn't help thinking that your Luria examples are perfect for this distinction. The inability and unwillingness--both; it's two things--of an Uzbek farmer to weigh in on cotton-growing in England strikes me as incredibly wise. You know, the 800-SAT student [Scholastic Aptitude Test student] nails it: 'Oh, yeah. No cotton in England.' And, 'Polar bears are white in Novaya-wherever.' But those farmers had a rich--I don't think of it as their being adapted only to their experience. I see them taking a much more connected view of the universe, of what we encounter, of what we perceive. And the three adults and the child are just such a perfect example of it. It's a silly question. And it reminds me a little bit of the Trolley Problem--you know, these sorts of abstract moral dilemmas: one person will die if you switch the track the train is on, but otherwise 5 people die. This is something that almost no human being has ever actually had to do. And there are different versions of it. There's a horrible version: there's a fat guy on a bridge; you can push him over the bridge and stop the train. Would you do that, to save the five lives versus the one? Would you be that active? I remember telling my adolescent son, and he said, 'Well, what if the guy is bigger than me? What if he pushes back?' And there's a temptation to say, 'Well, that's a stupid answer. You don't understand. That's not the point of the problem.' But that is the point. That's what life is like. Life is complicated. It's never that simple. Right? 'Oh yeah, I know, we're just trying to abstract from that.' But those farmers understand that that abstraction is risky. And it struck me that this nuance--analogy, case study, transferring insights from one field to another--is one of the most powerful things that human beings can do. It was unbelievably important in bringing us to the modern world. And at the same time, it's dangerous, and you have to understand its limitations. And those farmers--they are in the wicked world 99% of the time. And the SAT kid is in the kind world all the time, and thinks that everything is straightforward: it's all connected, it's all linear and mathematical, and I can solve for x and y with a simultaneous set of equations. And that's not the way life works--

David Epstein: I think it's--

Russ Roberts: unless it does--

David Epstein: I think that's a great point. I think it gets again to your 1-star review, where your student said, 'This isn't fair. You are asking us to apply the knowledge to things we've never seen.' Right? And that's kind of what some of the pre-modern farmers were saying: 'I'm not going to answer this question. You are asking me to apply knowledge to things I've never seen.' It's an essential thing for us to do, but it can also be a dangerous thing for us to do. And I think it's important to recognize when we are doing it. Flynn himself--and this is a little bit of an aside--told me a story from the time of the Montgomery bus boycott. His father, he said, was very much a pre-Flynn-effect man--he would not have been, in Flynn's estimation, as far along on the rising curve; 'a man very much grounded in the literal' was how he put it. He said that his Dad made some derogatory comment about the boycott. And Flynn said, 'Well, how would you feel if you woke up tomorrow and you were black?' And Flynn told me that his father said, 'That's the most ridiculous thing I've ever heard. Who do you know who ever woke up black?' Right? And so it's, like, well--

Russ Roberts: Heh, heh, heh. Well, I'm laughing. But it's tragic--and it's an unbelievably powerful counterexample to my story of my son and pushing the man off the bridge.

David Epstein: But I think that's important. Because--and I think you are identifying this--these aren't zero-sum things. Right? It's a power and it's a danger. And I think that's the really important thing to recognize. And it goes along with that issue of tools: this kind of knowledge transfer and abstraction is something we use without even knowing it, on a daily basis. And I think we would be better off to recognize it. We are going to wield its power one way or the other; but we are sometimes blind to what's really going on. And we would be better off if we thought about our own thinking, in order to limit our errors a little bit.

Russ Roberts: Yeah. I'm interested in mindfulness and meditation. And one of the things I'm thinking about right now is how to be mindful about our mindfulness. It's not easy to do. It takes a very high level of self-awareness to realize you are applying a tool that you have used a thousand times, and maybe the 1,001st time is not the time to use it. That's just--it's very powerful.

1:26:47

Russ Roberts: Well, I want to close with two things. First, I want to say something about Adam Smith, because we are talking about specialization, and the range of skills and tools we bring to a problem. You know, Smith saw--correctly, I believe--in the Wealth of Nations in 1776, despite the relatively small amount of growth he was experiencing in the world at that time, that specialization combined with trade--and I don't mean just international trade, but exchange generally between people in market settings--is the great engine of growth, and the great engine of the transformation of the standard of living of the world. And it continues to work that way. The idea that you don't have to do everything for yourself, the idea that you can rely on others, is one of the deepest ideas in economics. And I want to let you talk a little bit about that. I often point out: take the thousand most talented people in the world, and put them in a place that's rich with resources--an island. They are going to be very poor. I don't care how talented they are, how smart they are. You can pick them; you can pick them for a range of skills. There is just not enough scope for specialization among a thousand people to have a modern standard of living. And the reason that you and I can have a conversation across Skype and have something that exists called EconTalk is because we live in a world of 7 billion people. And we interact with hundreds of millions of them indirectly through exchange and trade. And that allows me to specialize as a podcaster and you as a writer. And that's just not imaginable 500 years ago. It's not imaginable, really, 200 years ago. And I often use the example of a pediatric oncologist--I'm sure there are specialties within pediatric oncology. And most of the time that's a really good thing. You can't be a great pediatric oncologist as a hobby, is my guess. So, talk about that balance. Specialization is, to some degree, necessary for our modern standard of living. But I'd say the theme of your book is that it can go too far, as you've said earlier. So, talk about that.

David Epstein: And also--yeah, right: here's the economist podcaster interviewing the former sports writer with the geology master's. And Adam Smith--of course, I learned how much personal range he had from your writing. I had no idea he wrote about happiness and things like that. I think there's an issue of semantics, for one. And this is one I'm going to have to harp on as I discuss my book: what it means to be a generalist in one era is not the same as what it means to be a generalist in another era. There's been, like, a Flynn effect of specialization, right? The background itself has changed. One of the reasons I made sure to include a number of scientists and doctors in the book--not just quoting them on a topic, but actually talking about their own careers--is that, for most people, from the outside, they may look like the epitome of specialization. A scientist, right? And I thought about that because I was a science grad student. If, from the outside, this is the epitome of specialization, then what does it mean to be broader than you have to be? To have range, to use my own terminology. So some of what's in the book is trying to get at what that even means--to expand your breadth when you don't really have to. And many practitioners today whom I would think of as being broad are still more specialized than someone was hundreds of years ago.

Russ Roberts: For sure.

David Epstein: For sure. So, I think it's very much context-dependent, what it means to have more breadth today. And I think there's some evidence--I go through some of this patent research in the book--that, at least within the 20th century and beyond, there's an increasing importance of, and opportunity for, generalists. Andy Ouderkirk in the book--this inventor who won R&D Magazine's Innovator of the Year and then decided to study inventors--characterizes this by looking at millions of patents. You see people who kind of drill down into a certain area more and more and more, and others who work across a large range of technological classes. And he sees that the relative importance of these different types of individuals' breakthroughs changes over time: in and around World War II [WWII], the contributions of the deep specialists sort of peaked. And it ebbs and flows. And right now, it's declining. He doesn't know for sure why. But he thinks some of the reason is that there is so much knowledge out there, and communication technology allows it to be so effectively transmitted, that there are way more opportunities--and more likely successful opportunities--to recombine well-characterized knowledge that is already out there in new ways than to push the cutting edge just a little bit further. So, even within these very technical domains, I tried to examine what it means to be a generalist within that given context--as opposed to being a dilettante, someone who is not particularly interested in or good at anything, which is what I really want to differentiate from a generalist. You know, I think this gets to something I think I've heard you talk about before--maybe not. When we think of technological transformation, you can think of Robert Gordon, who says the biggest innovations are behind us: we've made enormous strides, and now technological progress is slowing down. And I think maybe that discounts some of the more serious applications of communication technology, which communicates the results of specialists. We so desperately need hyper-specialists; but their contributions can now be more broadly and quickly disseminated, which provides a lot more opportunities for people who are broader than the specialists. And I think that's why things like Innocentive--which was set up so that, like, random outside solvers could tackle the problems that have stumped pharmaceutical companies--work--

Russ Roberts: For large amounts of money. Describe that site, quickly.

David Epstein: So, it was started by a VP [Vice President] of Research at Lilly [Eli Lilly and Co.]. At first, Lilly would post problems that had stumped their chemists. And so many of them were solved by, just, like, random people outside, coming from other disciplines and bringing some totally separate knowledge, that this VP turned it into its own separate company that helps other companies post problems they have gotten stuck on, for outside solvers. And, I think you mentioned earlier a problem that stumped NASA [National Aeronautics and Space Administration] for 30 years--it got solved in 6 months by somebody like a retired cell-phone engineer who was living on a farm in New Hampshire and just brought a totally different approach to it, and was, like, 'I can't believe you guys didn't think of this.' We don't divide up study into disciplines because that's how the world is. We do it because it's easy for us to categorize; and we try, afterward, to put the world back together to understand it in its complexity. And I think, as the disciplinary boxes get smaller and smaller, more often the knowledge that people need for their problem is outside of that box. So it's important to have those specialists; but it's also more important to engage people from outside, and people who are broader. And I think we've really seen that in medicine, where specialization has been inevitable--and fruitful--and also incredibly problematic in ways. A cardiologist used to be highly specialized. Now a specialized cardiologist is someone who might only study cardiac valves--the little flaps that let blood in and out. The electricity of the heart, the rest of the heart muscle: totally out of their purview. And what happens in that case is that everyone works on what are called 'surrogate markers.' Someone might have a problem, and that cardiologist might fix the problem with a valve. But what you really care about is whether that person is going to have a heart attack, stroke, or die. And what we find in many cases is that a specialist affects the surrogate marker, and so everything looks great; and then the person just dies with a better heart valve, or has a heart attack or stroke at the same rate. Or we regulate blood pressure, and what you get is people dying at the exact same rate with great blood pressure. Because everyone is working with surrogate markers. So I think that's inevitable, and useful, and also very problematic. And we need to recognize both sides of that.

1:35:49

Russ Roberts: So, normally at this point I would say, 'Thanks for being part of EconTalk.' But I'm going to add a personal note here. And we're way over normal time--I don't care; I hope listeners are enjoying this. I am. I want to let listeners know that yesterday you and I, David, tried to record this episode. You live in the Washington, D.C. area; I live in the D.C. area; and I thought, rather than record over Skype as I mostly do, since you live fairly nearby, we'd do it face to face. And we had technical problems in making that recording, and it didn't happen. I dragged you down to my office in downtown D.C. and wasted your time. And I was embarrassed that it didn't work out well. And I offered to take you to lunch--partly because I thought we'd have a nice conversation, but partly just because I felt bad. So we went to lunch, and we had a great conversation, I thought, at lunch. I was a little uneasy about it, because I thought, 'Well, we ended up[?] talking about stuff we were going to talk about in the actual interview; and sometimes it will sound stale if we've already talked about it.' But I said, 'Oh, well, whatever.' So we had that conversation. And I am confident--although I have some confirmation bias here, so I have to be careful--I am confident this is a dramatically better conversation than we would have had yesterday face to face, where--we'd never met before. That's important. We'd never met each other face to face. We'd only had our conversation over Skype in 2013 about your first book. And because we met, I think this conversation today--a day later--is much better. So, I think it's a small example of failure turning into something really excellent. So, I want to thank you for your patience yesterday and your patience today. But there was a benefit from that trial and error, I think.

David Epstein: Oh, that's great to hear. And I didn't know you were embarrassed; you absolutely shouldn't have been, because it was out of your control. So, not your fault. And then you bought me lunch. And I ended up with, like, a half dozen new things on my reading list--which, since I have no idea what I'm going to do next, is actually tremendously important for me: to get suggestions to read things that I, you know, wouldn't otherwise come into contact with. And it's one of the perks of my job, to get to scrutinize my own ideas and other ideas with people like you. So, there is absolutely nothing to feel embarrassed about. You know, you can't control the ventilation system in that building. And it was a pleasure. So, I didn't expect it; but I really enjoyed the conversation.

Russ Roberts: So, let's close with a tougher personal question. This experience of writing this book--which took you a while: it's a lot of digging and interviewing and thinking and reading and then writing--how has it changed your own perception of yourself, and how you see your own career?

David Epstein: For one, I don't--I have no idea what I'm going to do next--

Russ Roberts: It's scary--

David Epstein: --like when I recently went to MIT's [Massachusetts Institute of Technology] Sloan Sports Analytics Conference--but not nearly as scared as I would have been. So, I have career-changed a lot. And I've constantly been told that it's a bad idea: that I will get behind. And again--this is for sure some of my own confirmation bias, right?--but I now think that this changing, this experimenting--which in many cases did put me temporarily behind--meant that my growth rates were very quick. Ultimately what I did was accrue, sort of, a group of skills, where I may not be the very best at any one of them; but I kept sort of zigzagging from one area to another, where I could be pretty good at a whole bunch of different things. Which ended up with me being able to compete on my own ground. So I'm not in zero-sum competition with anybody else for whatever beat I'm writing about. And that was the same thing at Sports Illustrated: I ended up writing, you know, a book that found an audience because my science background was the most useful thing there. And it meant I wasn't waiting in line to be the next NFL [National Football League] beat reporter. So I think traveling this zigzagging journey has led me to have a toolbox where I might not have the absolutely sharpest tool for any one of those things, but I end up competing in a place where I'm the only one. And so, I'm just competing against myself. And if I can do something interesting, then I can have some good outcomes. And I'm now confident that--you know, I've read quotes just like this from Christopher Nolan, the director, and Erik Larson, the writer, where they say, 'Between projects, I just have to read with no apparent purpose, to find my next project.' And I used to criticize myself for that, thinking it was inefficient. Now I think it's actually what gives me this expansive personal search function, where I might come up with projects that others don't. So I feel more comfortable and emboldened in the fact that I am proactively not going to look for another job quite yet--just go back to letting my mind roam, and hope to alight on that next project. Where I used to think, every time I did a project, 'I'll never find another good one,' now I feel much more confident that my meandering actually will continually lead to new projects.

Russ Roberts: My guest today has been David Epstein. His book is Range. David, thanks for being part of EconTalk.

David Epstein: It's a pleasure. And thanks for challenging some of my ideas--it really helps me sharpen my own thinking. And a lot of my ideas should be critiqued. I appreciate that, and the way you do it.