Harford on Adapt and the Virtues of Failure
May 23 2011

Tim Harford, author and journalist, talks with EconTalk host Russ Roberts about Adapt, Harford's book on the virtues of failure and the trial and error process. Harford argues that success is more likely when there is experimentation and trial and error followed by adapting, rather than following a top-down, ex ante plan driven by expertise. The conversation looks at what war can teach us about information, knowledge, and planning, the challenge of admitting mistakes, and the implications of trial and error for our daily lives.

Explore the audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Tyler Kleinow
May 23 2011 at 11:52am

Hey Russ, enjoyed the podcast as always, just wanted to let you know that the audio for the phone was exponentially better than normal, so whatever change you did worked. Trial and error has found you a good way to record phone conversations!

Matthew Munoz
May 23 2011 at 11:52am

I very much enjoyed this podcast, but I must point out that the Steve Jobs example is not entirely accurate. While he has probably made a great many mistakes (hockey puck mouse, etc.), the Newton was not one of them. He left Apple in 1985, before it began development (1987), and development ended shortly after his return in 1996.

David B. Collum
May 23 2011 at 3:19pm

Great talk. Gotta run soon, but had to say the following: I think Greenspan’s mistakes were serial and egregious, so I would put him in the serial-loser category. My opinion was formulated by the beginning of 2000, so this is not revisionist history. He compounded his policy mistakes by being a cheerleader for egregious behavior (proliferation of derivatives despite Brooksley Born’s best efforts; enthusiastically promoting the housing-ATM behavior; declaring adjustable-rate mortgages a good idea a few months before starting to hike rates). I haven’t cut him a whiff of slack in the past, and I won’t in the present. I have some positives to add when I return.

JPIrving
May 23 2011 at 3:22pm

Still listening: I can hear Russ breathing while Harford is talking in the opening minutes. Not a disaster.

Steve
May 23 2011 at 3:29pm

Well, the sound quality is definitely “better,” there is less distortion and less white noise, especially from the interviewee. Unfortunately, the most obvious effect from this is that Russ’s breathing becomes noticeable. You are going to have to do a better job filtering out incidental studio noise now… Promising experiment though.

Grant Gould
May 23 2011 at 4:16pm

I’ll second the above about the audio quality — filter the breathing and maybe filter treble a bit to blunt the sibilants and you’ll have it.

As an interesting sidenote: When the serial litigant software company SCO sued everyone under the sun to try to kill Linux, one of the (many) things they tried to claim was an intellectual property interest in “negative knowledge” — knowledge of what didn’t work. They claimed that other people who didn’t repeat their failures were in effect stealing from them.

The courts didn’t buy it in the end (and thank goodness for that!). But it will be interesting to see if the increasing recognition of the value of failure leads to greater attempts to propertize and monetize it. An anticommons on failure would be a great setting for a Brazil-esque dystopian novel but not really a good place for anyone to live.

John Price
May 23 2011 at 6:30pm

Excellent podcast as usual, but I also feel obliged to point out that Dr. Roberts’s breathing is audible during the entire podcast. To me, it was very distracting, and for a very brief moment I even considered stopping the podcast. Then again, it’s EconTalk! I suppose all the noise can be filtered out.

W.E. Heasley
May 24 2011 at 12:09am

The audio was much, much improved. Moreover, I have proof of the audio improvement!

Toward the end of the podcast there was a reference to Don Boudreaux. My basset hound suddenly came over to the computer. Why? Then it dawned on me: he heard his name, Boudrou-the-Basset-Hound!

There ya go!

Pietro Poggi-Corradini
May 24 2011 at 3:48am

About the effect of technology in the battle between bottom-up vs. top-down, one of the greatest top-down improvements is the Google search engine. On the other hand, the Google search relies on an “algorithm”, an impersonal, automatic, top-down routine. In a sense, Page Rank is the opposite of ‘discretion’ from the top. In other words, maybe technology helps the top-down approach but only in the direction of general, bulk, non-discretional, algorithmic, improvements.

GeroXC
May 24 2011 at 6:08am

Hej Russ!

I appreciate your new audio; it makes me, as a non-native English listener, understand the dialogue better.

A “but” [sorry]: your breathing sounds terrible, not just too loud.
I’d suggest you invest a bit more time in your cardio workout.

Gero

David
May 24 2011 at 11:23am

Roberts’ wheezing + annoying British accent + anti-Americanism implicit in selection of examples = turned it off after 20 minutes.

Lauren Landsburg (Econlib Editor)
May 24 2011 at 12:42pm

Thanks, everyone, for your thoughts about the sound quality. We hear you and will keep it in mind for the future.

If you have additional comments about the sound quality, please remember that you should address them to Russ at mail@econtalk.org. That is the best way to register your thoughts about the sound quality.

Any future comments here on the EconTalk website should address the content of the podcast.

Thanks for your interest! We look forward to your comments, here about the content of the podcast, or via email about the sound quality.

ric caselli
May 24 2011 at 2:51pm

Great podcast with many interesting points (I just ordered the book from Amazon). However, I would say that the subprime loan crisis is an example of top-down failure, since it has more to do with insurance 101 (spreading risk while discounting the possibility of price moves across the whole market – during times of falling wages) than with a lack of judgement at the bottom, where most market operators “correctly” responded to individual incentives.

Thank You and keep up the good work!!

hp
May 24 2011 at 8:47pm

There is a subtlety that was not fully articulated in the podcast. The conflict really isn’t between top down and bottom up. Very few things work without some level of organization. The real question is how much is needed. This is not something that has an obvious answer, which is where markets come in.

Luke
May 24 2011 at 8:56pm

on the mention of breathing noises from russ:

from a recording technologist’s standpoint, i find it interesting that russ’ breathing was so much more noticeable than tim’s; it’s a drastic difference. i suppose it depends upon the point at which the recording device is patched in to receive the signal.

i would just suggest a mindfulness of “mic technique” or distance and angle of the microphone relative to nose and mouth.

not knowing anything about the existing equipment, its location, or the format, a simple separation of mic and headphones would allow for the physical flexibility to manage this to great effect.

Albertus
May 25 2011 at 12:26pm

I like the slip, mistake, violation organization. It made me recall the notion of a “capture error” (a kind of slip) when you follow habit instead of an out-of-the-ordinary goal (you went straight home, not to the store). There might be practical cognitive tools there.

But, and to the point: would a gamble fit in that hierarchy? Businesses and scientists gamble a lot; politicians less so.

Very informative; thanks Russ and Tim.

TGGP
May 26 2011 at 1:41am

Tim has always been a good communicator, looking forward to this.

Now that Robert Clower is gone, is Russ going to interview Axel Leijonhufvud and get the Post-Walrasian take while there are still any expositors around? Axel published something about the crash on voxeu not too many years ago, so he may be interested. I’ve never heard a good layman-friendly explanation of what the disequilibrium folks are on about and how that differs from the mainstream view.

Ray
May 26 2011 at 9:11pm

Enjoyed it immensely.

The US Army is an odd duck to use for this topic, however. The Marine Corps would be a nice study of contrast in schools of thought in command structure.

The Marines have a manual called “Warfighting” that is essentially a statement of their philosophy, meant as a guidebook of sorts for their officers. They emphasize the need for decentralized control and the ability of subordinates to make informed, on-the-fly decisions.

Beyond that, looking up maneuver warfare should turn up quite a number of reading sources.

But the Army is the poster child for top-down thinking and everything that is bad about it.

emerich
May 27 2011 at 10:47am

So enjoyable and thought provoking that I immediately got the book on my Kindle. Harford is an entertaining and thought-provoking writer. I do, however, think he’s taking the “pure trial and error” hypothesis a bit too far. Let’s not forget that profits, or their absence, provide constant feedback. At a minimum, the profit motive is needed to tickle the itch to experiment. Second, the power of random experimentation to create order and complexity over eons is hardly an argument for random decision-making at an individual, managerial level, as you both implied in the podcast and as Harford suggests in Chapter One. In evolution, a creature is consistent over its lifetime. Would evolution work if creatures evolved randomly every few moments over the course of their lifetimes? How could it, since the fitness of any particular model couldn’t be tested?

O.O
May 27 2011 at 4:17pm

[Comment removed for supplying false email address. Email webmaster@econlib.org to request restoring your comment privileges. A valid email address is required to post comments on EconLog and EconTalk.–Econlib Ed.]

Gordon
May 29 2011 at 7:05am

About the discussion on politicians near the end:

People need stability in order to plan their lives. Politicians changing their minds –breaking promises– makes it harder for people to plan.

Rick H
May 30 2011 at 10:36am

Russ was panting like a rabid dog and it was quite annoying; his occasional typing could also be heard. Listening to the podcast with Harford was like getting an obscene phone call with a deranged degenerate breathing heavily on the line.

Whatever Russ is doing to work himself into such a state is none of my business, but I’d recommend that he exercise some self-restraint and wait until the podcast is over before engaging in whatever it is that makes him so aroused.

Benet Davies
Jun 13 2011 at 2:38am

Really interesting podcast.

What a treat to hear about Allende’s cybernetic project. I’d seen this website some time ago http://www.cybersyn.cl/ingles/cybersyn/index.html but had never really heard anyone talk about it before.

Despite what anyone thinks of central planning, sitting in the Ops Room in the swivel chair would be a fascinating day at the Museum of Economic Thought aka EconLand (patent pending).

jon
Jun 14 2011 at 12:57am

Another excellent podcast. I just wanted to add my pedantic defense of Steve Jobs: not only was the Newton not his project, it was the baby of John Sculley, Jobs’ replacement and nemesis. One of Steve’s first actions upon rejoining Apple was to kill the project; some say out of spite for Sculley. So while Jobs has had his own share of idea failures, I’d hate to see that particular example used against him.



DELVE DEEPER
About this week's guest, and about ideas and people mentioned in this podcast.

AUDIO TRANSCRIPT
0:36 Intro. [Recording date: May 6, 2011.] Before we begin, we are trying something a little different with technology. Let us know what you think of the sound quality of today's podcast by dropping us an email at mail@econtalk.org. What is the connection between success and failure that you trace out in the book? Quite simply that in a complex world--and I begin the book by trying to explain how complex the world really is, this amazing society that we've created, this complex economy we've created--there's a lot of failure. And that needn't necessarily be anything to worry about. Very successful economies have high levels of failure; successful organizations have high levels of failure. But, we can't really get away from this failure. So, a key skill is being able to manage it, respond to it, and learn from it. I began by examining the capability of markets to do that; and that's something you and your listeners will appreciate. But, I was also interested in whether other organizations could do the trick; what the political obstacles to learning from failure were; and even what the individual obstacles were: why, although I'm not a psychologist and it's mostly an economics book, I did want to ask why is it that you and I struggle to respond constructively? Why does failure hurt us so much? Most of us never fail, so it's easy. Ahem. It is a fascinating question, how difficult it is psychologically for us to accept failure, and I want to come to some of the psychological issues at the end. You emphasize at various points in the book F. A. Hayek's understanding of local knowledge and what he called the particular circumstances of time and space. Why are those important? What's important about local knowledge? Where this emerged--and I don't think it's what Hayek was originally thinking of although he might well agree--was actually when I was looking at the experience of the U.S. army in Iraq. It emerged in a tank battle of all places. This was the first Gulf War. And there is a group of nine American tanks called Eagle Troop, and they are barreling along the desert in the middle of a sandstorm. And in some ways the first Gulf War was a triumph for the planners' view of how a war should be. So you have this wonderful conceit of the guy at the top with all these big screens, satellite images, video footage from planes; he can see everything, he's got battlefield maps; he can make decisions about the whole governance of this war from an air-conditioned tent in Qatar or in Washington, D.C. And the experience of this group of tanks--maybe it's not quite that simple. Maybe you can't always rely on centralized control even in a war, even with that level of technology. Because these tanks came over the crest of a sand dune in the midst of a sandstorm; they had no satellite cover, no air cover, and they ran into nearly 100 tanks and armored personnel carriers, dug in, defended emplacements; and they were Saddam Hussein's Republican Guard--the elite guys. And how many tanks did the Americans have? Nine tanks, so they were outnumbered 10 to 1. The captain of Eagle Troop is the man with the specific knowledge of time and space. He's the only guy who can make the decision. He can't call back to base, he can't call any support; he can't say: What does our map say? Because, it's all come down.
He's got to make the decision, and incredibly quickly, using the local information he has, evaluating the disposition of the enemy troops, the disposition of his own men, what the chances are; and he made an instant decision. He said: If we try and turn around, we are all going to die. We've got to attack. We're surprised, but they are, too. He yelled out: Fire sabot, telling his gunner to use anti-tank munitions. And they instantly destroyed an Iraqi tank. Three seconds later, they destroyed another. And another. And the other 8 tanks came over the ridge; they all opened up, and very quickly they'd won the battle. That sounds a bit like Tom Clancy; and in fact it was written up by Tom Clancy; and it became a celebrated battle which was supposed to indicate the superiority of the U.S. army and strategy. And in some ways it did. But the captain who experienced that situation came away with a different conclusion. He said: Look, we all nearly died, and the reason we didn't die was because I had to make a split-second decision. In other words, it didn't work the way it was supposed to work. It didn't work in a top down way. I had to make the decision. And he campaigned--there are a lot of really thoughtful men and women in the U.S. military--in papers saying you need to give more attention to the training of, and responsibility given to, the troops on the ground, whether it's ordinary soldiers, captains, colonels. Later on, that became hugely important. He did two things. First of all he wrote a very respected history of the war in Vietnam, analyzing the failures of leadership, both political and military, and looking at some of the groupthink, the way very dubious forecasts were made and the truth wasn't conveyed to the President, and the President didn't really want to hear the truth; and there were a lot of yes-men around. Blistering account, widely read in the army. And then he did something even more important: this is in the spring of 2005. He was a colonel now, in Iraq, Colonel H. R. McMaster. He was responsible for U.S. operations near a city called Tal Afar. What he did was pioneer a new way of dealing with insurgents that was very responsive to what was going on on the ground, very intensive, very difficult, risky for his men, but also risky for his career because at the same time this was going on, the top brass, Donald Rumsfeld, didn't even want to hear the word insurgent. Literally. Famous press conference a few months after the Tal Afar campaign started, when McMaster was developing these techniques, just after Thanksgiving weekend, 2005--Donald Rumsfeld and Peter Pace, Chairman of the Joint Chiefs of Staff at the time, talking to the Pentagon Press Corps. And Rumsfeld, going through these bizarre, Orwellian contortions to avoid the word insurgent. A journalist even called him on it: Why are you not using the word, sir? Well, I've had an epiphany over Thanksgiving and I've realized these men don't deserve to be called insurgents. Then Peter Pace, the most senior general in the U.S. military, is sort of trying to talk about the problems in Iraq and apologizing to Rumsfeld: Sir, I can't really think of a better word than insurgent right now. Bizarre. If it was just a press conference, wouldn't matter, but actually that was rife throughout the U.S. military. Senior officers were saying: Don't use the "i" word--it's not an insurgency. Could have been straight out of 1984. Very top down, trying to control even the language. It meant the strategy was just failing.
And more important than the fact that the strategy was failing--because strategies do fail a lot--was that there was no way to change the strategy. You couldn't even talk about what was going wrong. And that was something that H. R. McMaster pioneered. He wasn't the only one; there were several brave colonels who put their careers on the line doing this. But I think he was one of the first. Something a British general told me while I was researching the book, which I think really struck home; and I think Hayek would recognize the sentiment: We always implement lessons learned on the front line, because lives are lost or saved by how quickly you do that. But we very rarely implement lessons learned at the top, because there is no pressure, no incentive to do so.
10:12 One of the things I found so fascinating about the use of the war example is that yes, people on the ground have more knowledge--that sandstorm is a great example; Hayek talks a lot about how local knowledge in the heads of individuals is often difficult to get out of their heads; they don't know what that knowledge is until they have to use it. How do you get it to a central authority who would then make some decision and pass it down? The time and knowledge burden there--he called it the Knowledge Problem--is so large, it's basically an impossible problem that is not helped by having lots of computers. But you pointed to I think a separate problem, which is that the experts often struggle to come to the best decisions even with the limited knowledge they have because of groupthink, ego, the exercise of power. The war in Vietnam, great example that McMaster chronicled, and similarly here in Iraq--it's not just that they don't have the right information; they're stuck in these grooves and ruts that they can't seem to get out of. Absolutely true. There are various psychological processes at work. You mentioned groupthink. The classic study of groupthink is by Irving Janis: study of the Bay of Pigs fiasco, where--I forget the exact numbers, but the basic conceit was the American administration of Kennedy was going to sponsor a couple of thousand untrained rebels and drop them in Cuba; and they were going to defeat a standing army of a quarter of a million well-trained, experienced guys fighting on their own ground; oh, and by the way, no one would ever figure out the United States was involved. On the face of it, just crazy; no one could ever convince himself that this was going to work. What Janis's study shows is that it's not that Kennedy was a Stalinesque figure who was trying to suppress dissent. He was trying to get people to tell him what they thought. But there were just social pressures. People didn't want to make a fuss. Kennedy's advisers would look around the room and everybody else seemed to think it was a good idea, so they didn't want to kick up a fuss. It's also risky. If it's a success, you are the guy who said you should be worried about it. You look like a fool. The safe, low-risk strategy is to agree with everybody else in the room. Then no matter what happens you are no worse off than anybody else. Kennedy later on, when trying to think about the Cuban missile crisis, actually split his expert advisers up and forced them to have separate discussions, and would occasionally bring them back together. He would also bring in experts from outside to deliberately shake things up, because he realized that just asking people for their opinion, even if it was done very honestly, wouldn't necessarily work. Another thing is people get very attached to their role in a hierarchical organization. Amazing statement that I mention in the book, quoted by John Nagl, now at a Washington security think tank. He was a counter-insurgency expert in Iraq. Quote is from a senior army officer during Vietnam: I'll be damned if I'm going to ruin this army just to win the war. So: If we adapt to the war we are fighting, we are going to spoil everything. All worked out, all neat, all our training programs and chains of command--you are going to ruin everything just to win the war in Vietnam? Crazy.
When you pluck the quote out of context, sounds insane, but I think we all know people who behave like that when they are facing a business problem or a policy problem: I could solve this problem or I could save the current structure of my organization. That's a lot safer. Especially if I'm a senior person in the organization. Also thought about, when reading that section, the accounts of the Battle of Gettysburg. I've not read a lot of military history, but I've read a little about Gettysburg, and you are just struck--and I think this would be true of any large battle--by how much ignorance there is. And the great generals aren't the ones who sit there in the room and push the pieces around like pawns on a chess board. They are the ones who teach their underlings how to freelance and deal with uncertainty, like the story you told about McMaster in Iraq. General Lee, at Gettysburg, story goes that he really missed Stonewall Jackson because he trusted him. Here was a guy who he'd seen freelance, knew what he did that was right; and I think one of the reasons Lee failed at Gettysburg was that he didn't trust the guys he had on the ground; and he was waiting for information; and he was horribly misinformed--inevitably. The fog of war is one of the most dramatic examples of the challenge of top down versus bottom up. Absolutely right.
15:51 Interesting to reflect on how changing technology changes this balance. We naturally assume that more and better communications, better computers, more information help the guy at the center. This is something, a fantasy, that goes back to Salvador Allende, in Chile. Tell that story. That's an amazing story that I actually didn't know until very recently; and I didn't know the details. Allende hooked up with a British cybernetic theorist called Stafford Beer. Allende was a democratically elected Marxist leader in Chile in the early 1970s. Fair to say he had to deal with a lot of economic hostility from the United States; there was possible sabotage; a lot of strikes. He had a lot of problems that were not necessarily his fault. But nevertheless, his strategy for dealing with Chile's economic situation was basically a technological love affair. He was going to get really amazing computers; and Stafford Beer was going to plug these computers in to the Chilean economy. These computers were to process all this information, and literally at 5:00 every day, they would print out a report; the report would go to Allende; and he would be able to give directions. It was Telex machines--people would Telex in all this information about what was going on in their factories, and these big computers would process the information. The most famous visual remnant of this was a control room that looks like something out of Star Trek--it's got swivel chairs with buttons and screens. It was actually never operational; they never plugged it in. But it became a symbol of this project, called Cybersyn, both to people who wanted to mock it and to its proponents. The funny thing about Allende's project was that the type of supercomputer he was using was a Burroughs 3500, and my father actually worked all his life for Burroughs, which later merged and became Unisys. At the beginning of his working career, which was 1969, he was actually working on this kind of computer. So I was able to sit down with my Dad and have a great conversation about what these computers could and couldn't actually do. Of course, in 1970--he said it was not even a supercomputer by the standards of 1970. It was a good, solid computer that you put in the back office of a bank and it would sort out people's accounts; very reliable; hard drives literally the size of washing machines. He said you'd get a physical workout just moving this thing around. So much less powerful than a modern iPhone. Not even funny how much less powerful. It did not work. There was no reason for people, irrespective of the processing power, to Telex in the truth if the truth was not convenient to them. And even if they did want to Telex in the truth, it could be that the bandwidth available doesn't really convey what's important--the opportunities or the problems that are really going on for a particular factory. The only time that Cybersyn really worked was during the strikes, when basic information was really simple: Hey, these guys have cut down, shut down our factory; we can't produce anything. That sort of information Cybersyn could process. But for the more subtle information needed to run an economy, it was a complete failure. This is a dream that goes back to the 1920s, before that. There was this famous debate, called the calculation debate, in economics over whether you could centrally plan an economy. This was before they had computers.
They were imagining the idea that if you had enough information and you could process it, you could assign the right prices to the products and the right quantities to the factories; and in theory you could do even better than the market. But of course, as Hayek and Mises argued, it actually did dramatically worse. They won that intellectual debate, more or less. Boettke podcast over whether the opponents conceded defeat or not. Basically, the world has realized that computers aren't smart enough. It's not a question of processing a lot of information; it's information that's subtle; it's the particular circumstances of time and place where the next supply increase can come from, how to get it the cheapest way available. All that information can never easily be put into a computer or answered in a survey. Really a utopian fantasy. Very interesting to look at the empirics on this--what's the evidence? As technology has progressed beyond anything that Hayek or his intellectual opponents could have dreamed of, has that changed things? How are people actually using it? On the battlefield, there are no prices. It is somewhat helpful for the guy in central command to know what's going on. There are things that could be coordinated. Absolutely. Especially the sorts of one-off lightning strikes that began the war in Iraq. That sort of thing--the "shock and awe"--you can do very well. You don't have a price mechanism; you probably don't want to delegate authority completely to the people on the ground. You want to coordinate that from the top down. But many problems, including the counter-insurgency problem, trying to deal with the local population, get them to cooperate, protect them from terrorists and insurgents, are very hard to coordinate centrally, and you have to trust the guys on the ground. Of course, the presence of things like email and newsgroups to some extent empowers the central command; but it also hugely empowers the guys on the ground. They can swap information on bulletin boards about the latest improvised explosive devices, roadside bombs; what are the latest tricks? What's going on in your area? Brilliant PowerPoint presentation called "How to Win the War in Al Anbar by CPT Trav". Al Anbar was a particularly troublesome province that became the focus of the surge; and Captain Trav, with these stick figures, explained how to talk to people, how to treat people, what the basic, fundamental problem was with the insurgency. And this PowerPoint presentation was just passed around; and it was really easy to do so. It didn't have to be printed off somewhere in Colorado and then shipped out to Iraq. It was passed around. These people were able to communicate with each other.
23:19 One of the things I found interesting about writing the book was I had never really thought about these military problems before--I'd always been an economist. It is striking how important the decisions these men had to make were, and the pressure they were under and the risks they were taking. Captain Trav himself--Captain Travis Patriquin--died just before Christmas in 2006 and left behind a widow and three children. The thing I'll never forget is reading that all the local sheiks came to his funeral, because he was so respected; he really understood how things worked in that province. He'd formed all these alliances and the locals respected him. It's a little different from writing about economics. But I do try and draw these economic lessons. One of the things I wanted to do was say: What happens when technology bursts onto the scene in an organization? There's some great work by Erik Brynjolfsson of MIT showing that--this is about 10 years old now--there is a really strong connection between new technology and decentralization of responsibility. They go hand in hand. You can either keep your old technology and your old centralization in an organization, or you can decentralize responsibility, give people much more flexibility and also introduce that new technology; and that will work really well. But you need to do one or the other. You can't decentralize without the technology and you can't introduce the technology profitably without decentralizing. Interesting to reflect on. It's not obvious that new technology is a force for centralization. I can't avoid noting--I'm doing some light reading, reading a 2,208-page report on the Lehman bankruptcy--a delightful bedtime experience. It's an audit by the government on what went wrong. The author of the report just mentions in the beginning that the universe of Lehman email and stored documents is estimated at 3 petabytes of data. Roughly the equivalent of 350 billion pages. As you talk about it, you think about how technology allows decentralization, allows the sharing of information--you'd think it would allow the top down to be more effective. But of course, when you have 350 billion pages, you have no chance of doing anything from the top down. One of the fascinating things is how technology has made it easier to do bottom up; and how the top down, the people who run an organization, whether it's the military or a corporation, have to put in place some kind of structure for the bottom up to emerge and also to trust it. I do want to make it clear--and this runs through your book--when we talk about bottom up, we don't mean unplanned. It doesn't mean that everything is random, everybody is freelancing. It means that the knowledge and the use of that knowledge is coming from the bottom. Of course, in any organization--the military, a corporation, or public policy--there's lots of planning that goes on. Talking about Lehman Brothers--one of the chapters that I most enjoyed writing and really learned a lot from was thinking about the financial crisis--a huge challenge for economists and economics. In this particular case I found myself being drawn to a literature I hadn't explored before on industrial accidents.
This is a literature that economists have nothing to say about; it's dominated by engineers, psychologists, and a couple of sociologists who are doing great work on how complex systems go wrong, and looking at the Challenger shuttle disaster, Three Mile Island, Chernobyl; more recently Fukushima, Deepwater Horizon, and a really terrible oil rig disaster in the United Kingdom called Piper Alpha. Piper Alpha is what got me interested actually, because it triggered a small financial crisis in the British insurance industry. So, I thought: Hey, I'll write about that financial crisis--mini-financial crisis--because it will be a way into the big financial crisis, and I'd better write about what happened on Piper Alpha. The more I read about what happened on Piper Alpha, the more I realized this is a complex system, and the people who are trying to understand why it went wrong are producing amazing stuff that's absolutely relevant to the financial crisis. And part of it is ensuring that people who are in a position to see what's happening on the ground actually have the incentives and the motivation and the right monitoring to do the right thing. Because there are different ways things can go wrong. The psychologist Jens Rasmussen talks about three kinds of error: slips, mistakes, and violations. So, a slip is: you just do something you immediately realize wasn't what you meant to do--pushed the wrong button, locked yourself out of your house, forgot your car keys. Mistakes are things you do because your view of the world is wrong. So, you took out a subprime mortgage and bought a house because you thought house prices would continue to rise and you would be able to remortgage your house. Then there's a violation--something you know is against the rules but you did it anyway, for whatever reason. So, maybe you falsified your income. Violations take place at oil rigs, at nuclear power stations, on Main Street, and on Wall Street. One of the things that interested me, and this is about trust--we devote a lot of attention to distinguishing between mistakes and violations, because with violations, someone should go to jail. Mistakes are human. But there is something they have in common--they can both just sit there and come back to bite you days, weeks, even years after the original mistake. And that happens in industrial accidents--if somebody leaves a valve open or there's a safety system that's disabled and you don't find out about it till it's a crisis point. That also happened in the financial crisis. People were investing in instruments that were supposed to be safety systems, insurance; regulators had approved them, top management had approved them; if something went wrong, don't worry; it was all covered, all hedged, insured; and then when it comes to the crunch, suddenly you realize the safety system doesn't work. And whether it's a mistake or a violation maybe doesn't matter so much as: How do we spot that an error has been made in the system and uncover it before the critical moment?
30:50 It's an interesting analogy. When you look at some of these stories, and I've read some as well before I saw them in your book, what you notice is that the incentives get changed by the safety systems. So, a lot of times in industrial accidents, people aren't so worried when they see something go wrong because they know there's a backup. What they don't realize is it's Tuesday, and on Tuesday, between 3 and 4 o'clock the backup is shut down for evaluation. You give these examples in the book of remarkably strange things happening in sequence that you wouldn't have expected. But of course, every once in a while you get a black swan. They all line up and you get a catastrophe. For me, what's interesting is the incentives for care. Both slips and mistakes are part of being human. Inevitably, you make slips and inevitably you make mistakes. But what you want to do is give people some incentive to avoid them. If the incentives aren't there, then you are going to see a lot more of both. What I wonder--and listeners know this is my take on the financial crisis--when you have a system where people think they are going to be potentially bailed out, their desire and intensity to focus on the potential for mistakes and for slips is smaller. In particular you are going to say: well, value at risk doesn't work so well as a monitor, but it's not so bad if it fails. Or yes, I'm insured; but what if the insurer fails? That's a thing that every savvy person would worry about; and these savvy people knew that everybody was insured with AIG and other firms like it; so it's strange that they went to bed sleeping well at night. That's the challenge as to why they weren't more vigilant about the potential for catastrophe. Maybe it's just a mistake; maybe a slip; or maybe the incentives were wrong. I think it's a bit of both. I don't fully buy the idea that all of this stuff was extreme laziness or willful fraud because everybody knew the government would bail them out. I don't think it was that gross. But I do accept that clearly weighing on people's minds was the likelihood of a government bailout, particularly for people who were lending money to banks; and that was a problem. To go back to the parallel with nuclear accidents, you could say: Well, we don't need any government or industry oversight of nuclear power plants, because who is the person who is going to suffer most of all if there is a nuclear power plant accident? First of all, it's going to be the operators; they are there, in the front line; they are the ones who are going to get the dose of radiation. And financially who is going to suffer? It's going to be the power company that owns the facility. So you could say the incentives are pretty good. I think you still want systems that help out. The problem sometimes is that regulators get captured; they end up supporting industry rather than properly regulating it. So, you are always on the lookout for systems that work. I talked to people in the nuclear industry; what I was struck by is they said: We've actually got a lot of peer oversight--a very powerful force, potentially. We go and visit each other's power stations; we keep an eye on each other. And that's something that I think was a very weak force in banking. Maybe it's hard to see how it could be operationalized. But they inspect each other's power stations and issue reports, learn from each other. You can just see. Nobody in the nuclear industry wants a nuclear power plant to blow up. They've got a strong incentive to keep an eye on each other.
So, I found that peer monitoring idea interesting and useful, but hard to make operational. I think in the banking case, you really didn't want them visiting each other, because a lot of the time they could pick up bad ideas. There are risks, clearly. Arnold Kling, who blogs at EconLog, makes an interesting distinction between easy to fix versus hard to break. He argues that we have a tendency to try to find systems that will never fail. He says what we ought to really be doing is finding systems that, maybe they fail occasionally, but when they do fail they are easy to fix--the costs are small. Instead we've gravitated toward the perfect regulation; we want to reduce the odds of any kind of failure to zero--which means when there is one it's absolutely horrific. I think Arnold is absolutely right. There's an insight from Nassim Taleb on this, which is that these rare catastrophes are very rare--it's hard to say how rare they really are--statistically it's almost impossible to measure them because they are rare; and so going for making the catastrophe rare, but if it happens it's going to be really, really big, is not a great strategy. I think Arnold's right. The way an engineer would think of this is you want a system that is robust. If part of the system breaks, that can be contained. You don't want anything to be too big to fail. That's something we really need in our banking system. Whether that comes from market discipline or from regulators, we need to figure this out. It's not a simple problem. I don't think it's a simple problem to create a banking system where we can let banks fail with impunity, but I feel that's got to be the way forward. In the United Kingdom, the Banking Commission, which is run by Sir John Vickers, has definitely pushed toward making banks more modular, so you can take a chunk out of a bank and rescue it. You can break banks up and isolate failure so they don't become systemic problems. I don't know how successful they'll be, but that's the way they are thinking. The problem is that the bankers push back. They don't like that.
37:24 We're going to skip over a couple of issues you explore in the book, which are the use of randomized experiments in fighting third-world poverty; you have some interesting things to say about the difficulties of figuring out how to live an environmentally correct life. Basically what you explore in both of those is that interconnections often make it difficult for any one individual to see what's going on. More general issues. What are the lessons of your book for our personal lives? The obvious lesson is almost a cliche; interesting how that turned out: Learn from your mistakes. We always say that. How come we keep saying that and it turns out to be so hard to do? If you wanted to generalize what I say in the book about how complex problems are solved, what sort of organizations or markets, what sort of systems solve complex problems, it's basically three things. One--this ties into what you were saying about Arnold Kling--is, you should try a bunch of stuff, a variety of things; everyone can't be doing the same thing. Next, failures will be common. Needs to be okay for some things to fail. Can't be a nuclear meltdown or a virus or financial crisis that brings the system down. The failure has to be survivable. So, number 1, diversity; number 2, survivability. Number 3 is you have to be able to tell the difference between a success and a failure. Not so obvious. It was a problem in the financial crisis when bankers had accounting profits but they weren't real economic profits, because there were risks embodied that couldn't be measured. And governments have a real problem there, telling the difference between success and failure. Politicians have weak incentives to measure whether their policies are succeeding or failing; that's often inconvenient knowledge. Diversity, survivability, and distinguishing between success and failure. If you've got those you have a very adaptive system that will very quickly try out new stuff, will cope with its failures, and will pick out its successes and replicate its successes. So, then the question is: Can you do that in your personal life? It's not so easy. Think about the diversity, first of all. You're clearly just one person. You can try out a bunch of stuff, but it's easy to lose focus. Most people do tend to get stuck in a bit of a rut, don't try out new ideas as much as they should. I reflected on my time as a university student. I had such a great time. Not everyone enjoys university, but many do. One of the reasons is you are experimenting with everything. You've got new friends, new place; experimenting with ideas; you may be experimenting with things you shouldn't be experimenting with. A whole bunch of different stuff. Political ideologies, different hobbies. So exciting, yet so safe, because none of those things are going to really cause you a problem. In the end you focus, prepare for your exams; then you are through that. Very safe space; that's why it's so thrilling. But we really struggle to do that later in life. We get very conservative. The other thing that's difficult about having a diverse bunch of experiments in our lives, different projects, different things we like to do that may be creative--huge internet hits, rap video, unexpected experiments. Thank you. We all know about it; it's great. But these experiments, we know they will likely not work out. Most new things do not work out; and we are too afraid of failures. That's an insight that comes from Behavioral Economics.
One of the first discoveries of the Behavioral Economics literature, from Daniel Kahneman and Amos Tversky--loss aversion, the idea that taking a loss is disproportionately painful. Makes it really hard to just try out these new things: I'm going to ask that person out that I like on a date--the fear of rejection. Why not? The upside would be great. The downside is she says no, he says no--not such a big deal, but it's very painful. We don't like to take chances. And then the third thing, determining the difference between success and failure, that's also very hard. There's a huge amount of work on denial and also on unhelpful responses to failure, what a poker player would call going on tilt. Essentially, you double down, you chase your losses. Different ways in which we can either respond in a bad way to failure or we can deny failures that have happened. It means that this advice, adapt, experiment, learn, grow, is really good advice but it's painfully difficult to take. I've been teaching a seminar this semester on Adam Smith and reading the Theory of Moral Sentiments, something we did a 6-part podcast on with Dan Klein, as some of you may remember. Talking about the difficulties of accepting failure, I am reminded of the quote: "He is a bold surgeon, they say, whose hand does not tremble when he performs an operation upon his own person. He is also equally bold who does not hesitate to pull up the mysterious veil of self-delusion which covers from his view the deformities of his own conduct." [Theory of Moral Sentiments, par. III.1.91 --Econlib Ed.] We have this very human impulse to gloss over our mistakes, to see them as successes; and I think part of growing up is learning to live with them and admit them and say "I don't know," or "I made a mistake." One of the things that just fascinates me about the political world versus the business world--first of all, in the business world, mistakes are measured. Your company goes broke; you can't say it's a success. One of the fascinating things about American culture--supposedly they say, in Silicon Valley: You are not a success till you've failed two or three times. And people expect that. They give you a second chance. That is a very wonderful thing. In politics, it seems that the cost of admitting a failure must be close to infinite, because both George Bush and Barack Obama, just to pick two recent examples, when asked if they'd made any mistakes, don't seem to find any coming to mind. Those of us in the electorate can list dozens. But the men themselves? So it must be either they are totally self-deluded--which could be--or they feel that by admitting even the smallest mistake, they open the door to negative ads, criticism, history. I find it interesting; I think it's more than that. I think it's not so much strategic as pathological; it may be necessary in some circumstances, though not in Presidents, for people to go forward at various times in human history. Absolutely. Partly it's down to the voters. One of the memorable choices in U.S. political history--I'm biased, I was living in Washington, D.C. at the time--was John Kerry versus George Bush; the defining image of that campaign was that George Bush knew his mind and made decisions, and Kerry was a flip-flopper. And that was how the campaign was decided. Now, Kerry's defenders said that's not fair: he's not a flip-flopper. He is decisive. Nobody defended Kerry by saying: Hey, what's wrong with changing your mind? The world is a complicated place.
Lots of people were in support of the war in Iraq, for example, and later saw it wasn't going well and changed their minds. Oh, well, in retrospect this has been a mistake. Some people always thought it was a good idea, some that it was a bad idea. But a lot of perfectly reasonable people changed their minds halfway through. That's an indication of sanity. Yet politically very toxic. In the United Kingdom, our two most successful politicians, Tony Blair and Margaret Thatcher, and here are the money quotes. Tony Blair: I don't have a reverse gear. Margaret Thatcher: You turn if you want to; the lady's not for turning. We elected them, three times for Blair, three times for Thatcher. Most successful Prime Ministers in British history. We don't like people who change their minds.
47:00 The irony is, though, of course, that people we think of as very principled often break their principles relentlessly; but they are able to frame it as if they were principled. I think Ronald Reagan has this great reputation as this great free market capitalist. The effective quotas on Japanese cars; he put up with lots of increases in the size of government; he decided, prudently, that he wouldn't win, I guess. But we somehow think he was this staunch defender of principle. He changed his mind in Lebanon. Oh, he would never withdraw; he's so strong. But he has this image of being the strong person. Being a pragmatist. Strange thing, but regarded as a strength. What you just said about Ronald Reagan changing his mind would be regarded as a criticism; whereas in some cases maybe he was absolutely right. Interesting to reflect on what that does to the incentives for politicians. Fascinating contrast between both markets and also the scientific method. In the case of both, basically one success makes up for a lot of failures. A whole bunch of failed theories--and one theory that works; because there is a selection process, we start to believe the one theory that works, the one that has the better evidence behind it--the same with markets. A whole bunch of stores go out of business. That's sad for their owners, their employees, but people pick themselves up again, find jobs again. The one successful business can make a lot of customers happy. Asymmetric. We cope with lots of small failures so long as we have big successes, both in science and in the markets. In politics, completely the other way around. Doesn't matter how many successes you've got; one embarrassing failure and your career is over. And politicians know that, and so it gives them totally the wrong incentives in terms of trying a bunch of diverse things, rigorously measuring what's working. For sure, something won't work. That'll be used against you. We've only got ourselves to blame as voters. Why do we do that? Let's think about the business world. Steve Jobs, one of the most revered and respected entrepreneurs of American history--he had a lot of failures. One example: the Newton, which was a Palm-type device, a lot bigger than your palm, but a portable device that was supposed to help you run your life. It was a total flop. Suppose that had come along later. Suppose that was his last device and all we had was the iPhone. You'd say it was a mixed bag, not everything he tried worked; but he was great--he created a lot of value, a lot of jobs, and on net, he was a wonderfully successful entrepreneur. But Alan Greenspan appears to be going down in history as a loser. He was a genius and then all of a sudden, because of his interest rate policies of 2002-2004, and perhaps--I don't agree with this--his failure to regulate subprime. I'm more willing to focus on his willingness to bail out creditors in the 1995 Mexican rescue, which he supported in Congress. So he's a flawed man. But he's not a mixed bag any more--he's a loser. He's not just a flawed genius; he's a failure. You say it's voters. Why is it that one mistake like that--is it because we don't have the choice to opt out that we have? I didn't buy a Newton. So, I don't have any anger toward Steve Jobs about the Newton. But maybe because I had to live under the regulatory policies of the Fed, I resent it; and about a President who sends our children off to die in a bad war, we don't say: Well, he made some mistakes. Maybe we are a lot more intolerant, and maybe rationally so.
There's a lot more at stake. I don't know. Don't have a good answer. Maybe one of the things is that politics is an adversarial system; business, less so. You didn't have Bill Gates running attack ads saying Steve Jobs is a loser, he produced the Newton, it's a terrible product. Doesn't happen. You certainly don't have Bill Gates now dragging up the Newton saying he created this terrible product. Partly that it's not an adversarial system, and partly that that just seems crazy. Who would pay any attention to that? But in politics, for some reason, people do. You've got people who are highly incentivized to point to any flaws and keep talking about them. It seems to have some effect at the polls. I think you, unintentionally perhaps, highlighted the real difference. I think the adversarial thing is part of it, though I think there were some pretty clever adversarial ads by Apple against the PC at various times in its history; and even recently. But the other thing that comes to mind even as you are talking about it is the difficulty of assessing success and failure. So, there are some political things that are undoubtedly failures; some that are undoubtedly successes. But most of them are vague. So, proponents of a particular party, a particular candidate, will say this was a success; others will say it was a failure. I think it's the measurement problem. You can paint someone as a loser from something they did long ago. Maybe it wasn't even a failure. But in politics, it's not like business, where you can say they went out of business, they stopped making it. It would be bizarre for Steve Jobs to hold up the Newton as his success; yet how many politicians hold up something they did as a success that was actually a failure? They can frame it a certain way, because people don't consume it the way they consume a consumer device. That's another part of the problem. If I go out and buy a Newton or an iPad, you can bet I'll be paying attention as to whether it works. But as a voter? We had an election yesterday; I voted, I did my civic duty; I thought about the issues a bit; but I didn't really think my vote was going to make a difference. So, I was kind of more paying attention to stuff that actually made a difference to my life. I think that's how most voters think. They don't examine every aspect of the small print the way they would if they were buying a car. Because the car really matters. Bryan Caplan has talked about that and we've talked to him. Don Boudreaux writes about it a great deal as well.
54:31 We've talked about the lessons for our personal lives; tried to talk about some of the policy issues, about what your book is really about--which is trial and error. About the virtues of trial and error. Want to say anything in closing about policy and how we might get more trial and error? Voters are part of the problem. Psychology is part of the problem. Anything else we might think about to let people try more stuff? I think it is partly about changing the intellectual climate and getting people to accept that it's okay to fail as long as you are failing in the right way--as long as you are generating information; okay to experiment. This is really important. At the moment I think we have political systems that just don't take evidence very seriously. Even people who seem to be serious-minded, the Washington think-tankers. Some of them are interested in really high-quality evidence, the kind that epidemiologists would take seriously. But often it's a bit hand-waving, vague. Partly that's the nature of the beast. Hard to know exactly how the economy is doing. The closing thought for me is really going back to how I started the book. I started the book in almost the same way I started my book The Undercover Economist, which is with a riff off Leonard Read's "I, Pencil." In the beginning of the book I talk about a toaster--a design student who decided he was going to try to make a toaster all by himself. Anybody who has read Leonard Read's "I, Pencil" knows it's not very easy. It was impossible. He made this absurd-looking thing; actually quite a funny story, negotiating with British Petroleum [BP] about flying out to an oil rig; he was going to make plastic from potato starch, destroyed various things, microwaves, leaf blowers. Had a lot of adventures trying to make this toaster. Looking at the complexity of that product, first I leapt to the Leonard Read, pro-market solution, which is: Wow, the economy is unbelievably complex. It's this distributed system. Everybody's working together toward a common goal without ever actually knowing what that common goal is; and nobody working alone could ever produce these miracles. And I really believe that. But where I started with Adapt was just to say: But hang on. Are we actually happy with the way the world is, or have we got some problems? And if we've got some problems, it's going to mean making some changes to that incredibly complex system. And the more complex and amazing you think the system is, the more respect you give the system. The more challenging this project of actually trying to solve problems in a modern economy becomes. So, it was with that humbling thought that I embarked on trying to write the book and explore all of these things. Been amazingly good to write.