Superforecasting: The Art and Science of Prediction - Download Free eBook
Hot Best Seller

Superforecasting: The Art and Science of Prediction

Availability: Ready to download

A New York Times Bestseller. An Economist Best Book of 2015. "The most important book on decision making since Daniel Kahneman's Thinking, Fast and Slow." —Jason Zweig, The Wall Street Journal

Everyone would benefit from seeing further into the future, whether buying stocks, crafting policy, launching a new product, or simply planning the week’s meals. Unfortunately, people tend to be terrible forecasters. As Wharton professor Philip Tetlock showed in a landmark 2005 study, even experts’ predictions are only slightly better than chance. However, an important and underreported conclusion of that study was that some experts do have real foresight, and Tetlock has spent the past decade trying to figure out why. What makes some people so good? And can this talent be taught?

In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people—including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer—who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They’ve beaten other benchmarks, competitors, and prediction markets. They’ve even beaten the collective judgment of intelligence analysts with access to classified information. They are "superforecasters."

In this groundbreaking and accessible book, Tetlock and Gardner show us how we can learn from this elite group. Weaving together stories of forecasting successes (the raid on Osama bin Laden’s compound) and failures (the Bay of Pigs) and interviews with a range of high-level decision makers, from David Petraeus to Robert Rubin, they show that good forecasting doesn’t require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course. Superforecasting offers the first demonstrably effective way to improve our ability to predict the future—whether in business, finance, politics, international affairs, or daily life—and is destined to become a modern classic.




30 reviews for Superforecasting: The Art and Science of Prediction

  1. 5 out of 5

    Yannick Serres

    During the first hundred pages, I was sure I would give the book a perfect score. It totally caught my attention and made me want more and more. The book made me feel like it had been written for me, someone who doesn't know much about predictions and forecasts, but feels like he could be good at it. Then, past the halfway point, you get a little bored because it always comes back to the same thing: use numbers to make your predictions within a well-established timeframe, always question your predictions till the time runs out, learn from the past, and see beyond your narrow cone of vision. This book is very interesting and worth giving a shot. It's a good mix of science and history, but you still feel like you're reading a novel. I was expecting nothing from this book and had quite a lot of fun reading it. I've been positively surprised and hope you'll be too. I have to thank Philip E. Tetlock and Random House of Canada for this book, which I received through a Goodreads giveaway.

  2. 5 out of 5

    David

    Philip Tetlock is a professor at the University of Pennsylvania. He is a co-leader of the Good Judgment Project, a long-term forecasting study. It is a fascinating project whose purpose is to improve the accuracy of forecasts. You can learn more about the project on the Good Judgment website. In this book you can learn the basics of how to make accurate forecasts in the face of uncertainty and incomplete facts. An amazing tournament was held, which pitted amateur volunteers in the Good Judgment Project against the best analysts in the intelligence community, in a competition sponsored by IARPA (the Intelligence Advanced Research Projects Activity). The amateurs with the best records for accuracy are termed "superforecasters". They performed 30% better than the professional analysts, who had access to classified information. This was not a simple tournament. It was held over a long period of time, enough time to allow a good amount of research, thinking, and discussion among team members. It involved hundreds of questions. These questions were asked in a precise, quantitative way, with definite time frames. And besides giving predictions, players in the tournament estimated their confidence levels in each of their predictions. Their forecasts, along with their estimated confidence levels, went into the final scores. So, what are the qualities of a good superforecaster? Perhaps the dominant trait is active open-mindedness. They do not hold onto beliefs when evidence is brought against them. They all have an intellectual humility; they realize that reality is very complex. Superforecasters are almost all highly numerate people. They do not use sophisticated mathematical models, but they understand probability and confidence levels. Superforecasters intuitively apply Bayes' theorem, without explicitly using the formula quantitatively. They care about their reputations, but their self-esteem stakes are lower than those of career CIA analysts and reputable pundits. So, when new evidence develops, they are more likely to update their forecasts. Superforecasters update their forecasts often, in small increments of probability. The book discusses the movie Zero Dark Thirty, about the military assault on the compound in Pakistan where Osama bin Laden was hiding. The character based on Leon Panetta railed against all the different opinions of the intelligence analysts. But the real Leon Panetta understood the differences in opinion, and welcomed them. He understood that analysts do not all think alike; they have diverse perspectives, and this helps to make the "wisdom of the crowd" more accurate overall. It was found that teams score 23% better than individuals. The book dispels the myth that during World War II, German soldiers unquestioningly followed orders, while Americans took the initiative and improvised. The truth, especially in the early phases of the war, was often exactly the opposite. The Germans followed a philosophy that military orders should tell leaders what to do, but not how to do it.
    American leaders were given very detailed orders that removed initiative, creativity, and improvisation. The author deliberately chose this example to make us squirm. One should always keep in mind that even an evil, vicious, immoral enemy can be competent. Never underestimate your adversary. This is difficult in practice; even superforecasters can conflate facts and values. Nowadays, the military has radically changed: it encourages initiative and improvisation. However, corporations are much more focused on command and control. Their hierarchical structure tends to micro-manage. In fact, some corporations have hired ex-military officers to advise company executives to worry less about status, and instead to empower their employees. An appendix at the end of the book lists the Ten Commandments for superforecasting. These are useful generalities for successful forecasting. But even here, the authors are intellectually humble; their last commandment is not to always treat the commandments as commandments! This is a fascinating, engaging book, about a subject I had never thought much about. The book is easy reading, filled with lots of anecdotes and interesting examples. The authors rely quite a bit on the wisdom of behavioral economists Daniel Kahneman and Amos Tversky. They have given a lot of thought to the subject of forecasting, and it really shows.

  3. 4 out of 5

    Anton

    5⭐️ - What a great book! It will definitely appeal to fans of Thinking, Fast and Slow, Predictably Irrational: The Hidden Forces That Shape Our Decisions, and The Black Swan: The Impact of the Highly Improbable. Thought-provoking and full of very perceptive observations. But I would particularly like to commend the authors for how well this book is written. This is an example of non-fiction at its best. There is definitely a research and background science overview, but each chapter is a proper story as well. Philip E. Tetlock and/or his co-author (not sure who should take the credit) are superb storytellers! It was not only insightful but genuinely enjoyable to read this book. I usually read several books simultaneously: one or two non-fiction titles and a bunch of fiction stories. But last week 'Superforecasting' monopolised my reading time. And it is particularly telling how well it managed to trample the competition from its fiction 'rivals'. It goes straight to my absolute-best-non-fiction shelf. I recommend it strongly to all who are curious about the psychology of decision making and the ability of our minds to cope with uncertainty.

  4. 5 out of 5

    Elizabeth Theiss

    When it comes to forecasting, most pundits and professionals do little better than chimps with dartboards, according to Philip Tetlock, who ought to know because he has spent a good deal of his life keeping track. Tetlock has partnered with Dan Gardner, an excellent science journalist, to write this engaging book about the 2 percent of forecasters who manage to consistently outperform their peers. Oddly, consumers of forecasts generally do not require evidence of accuracy. Few television networks or websites score the accuracy of forecasts. Years ago, as a stockbroker, I gave very little weight to the forecasts of my firm's experts; the stocks they recommended were as likely to go down as they were to go up. Today, as an occasional television pundit, I'm often asked to forecast electoral outcomes, so I was very curious about Tetlock's 2 percent who managed "superforecasting." "How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances," according to Tetlock and Gardner. It makes no sense to try to predict the economy ten years from now, for example. But he wanted to understand how the best forecasters manage to maintain accuracy over the course of many predictions. In order to find out, he launched the Good Judgment Project, which involved 2,800 volunteer forecasters who worked on a series of prediction problems over several years. After the first year, he identified the best forecasters and put them on teams to answer questions like whether Arafat was poisoned by polonium, whether WMDs were in Iraq, and whether Osama bin Laden was in Abbottabad. His findings shed light on the kind of evidence-based, probabilistic, logical thought processes that go into the best predictions. A section on groupthink is nicely illustrated by the Bay of Pigs disaster; the ability of JFK's team to learn from their mistakes is demonstrated by the same group's more skillful response to the Cuban missile crisis. Written in an engaging and accessible style, Superforecasting illustrates every concept with a good story, often featuring national surprises like 9/11 and the lack of WMDs in Iraq, with explanations of why forecasters missed what looks obvious in hindsight. Ultimately, this is a book about critical thinking that challenges the reader to bring more rigor to his or her own thought processes. Tetlock and Gardner have made a valuable contribution to a world of internet factoids and snap judgments.

  5. 5 out of 5

    Michael

    Philip E. Tetlock feels a bit too polite. Sometimes it seems he is excusing wrong predictions by finding weasel words in them or interpreting them kindly instead of using the intended assertion. Just say it: Thomas Friedman is a bad forecaster. Instead of reading this book I recommend reading the books he references: Thinking, Fast and Slow; The Black Swan: The Impact of the Highly Improbable; and The Signal and the Noise: Why So Many Predictions Fail - But Some Don't. This book feels like a (superficial) summary of the aforementioned books and an attempt to combine them. "The central lessons of “Superforecasting” can be distilled into a handful of directives. Base predictions on data and logic, and try to eliminate personal bias. Keep track of records so that you know how accurate you (and others) are. Think in terms of probabilities and recognize that everything is uncertain. Unpack a question into its component parts, distinguishing between what is known and unknown, and scrutinizing your assumptions." (New York Times review)

  6. 5 out of 5

    John Kaufmann

    This book was solid, though perhaps not quite as good as I hoped/expected. It was interesting reading, full of interesting stories and examples. The author doesn't prescribe a particular method - superforecasting, it appears, is more about a toolbox or set of guidelines that must be used and adapted based on the particular circumstances. As a result, at times I felt the author's thread was being lost or scattered; however, upon reflection I realized that was part of the nature of making predictions. His guidelines are clear and should be helpful, even if they cannot provide a method for correct predictions 100% of the time. One critique I had was that the author didn't provide any statistical evidence of why the people he identified as superforecasters were good as opposed to lucky. I continued to think some of the examples he gave were based on luck, not necessarily skill - the author distilled a lesson that contributed to the success, but I would have had more confidence that his conclusion represented the reason for the superforecasters' success if he had provided more statistical evidence to support that conclusion. Nonetheless, his conclusions/guidelines appear sound, and I plan on using them.

  7. 4 out of 5

    Andy

    This book features some interesting trivia about "Super-forecasters" but when it comes to explaining evidence-based practice, it was Super-disappointing. It starts off well with a discussion of Archie Cochrane and evidence-based medicine (EBM), but then it bizarrely ignores the core concepts of EBM.
    - In EBM, you look up what works and then use that info to help people instead of killing them. But when Tetlock talks about social philanthropy he implies that it's evidence-based as long as you rigorously evaluate what you're doing. NO! If your doctor gives you arsenic instead of antibiotics for your bacterial infection, that's not OK even if he does lots of lab tests afterwards to see how you're progressing.
    - In EBM, you focus on the best available evidence. There's a difference between what some drug rep told you vs. the conclusions of a randomized clinical trial. But when Tetlock reviews the Iraq War fiasco, he argues there was a really big pile of evidence so it made sense to go to war. He doesn't seem to get that a really big pile of crap is still just crap. Some elements of the pro-war narrative were known to be bogus before the war. Others turned out to be bogus after (Curveball, etc.), so those were not investigated and confirmed as solid evidence beforehand either.
    - In EBM, the point of a diagnostic test is to get a predictive value. This number tells you how likely a test result is to be true, based on its track record (see the sketch after this review). Instead, Tetlock praises his forecasters for making up percentages that reflect their subjective degree of certitude. And he calls those "probabilities", but that is very misleading, because in science a probability is something like the chance of drawing a royal flush in poker, i.e. it's an objectively calculated number based on reality.
    - In EBM, the big issue is whether the treatment works for the main relevant outcome. So for the Iraq War example, the question for the CIA was whether an invasion would A) spread democracy in the Middle East after preventing an imminent nuclear attack on the USA, or B) not prevent anything (because there was no secret new nukes program) and increase regional chaos as well as global terrorism (think ISIS). This decision tree is absent from the book, and that omission violates Tetlock's own rule about asking the meaningful hard question.
    This book has good content on cognitive biases. But I would recommend going directly to the source on that topic.
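
    [Editor's note: a rough illustration of the "predictive value" this review contrasts with subjective certitude. The function and the test numbers are hypothetical, not from the book or the review.]

        # Positive predictive value: P(condition | positive test), computed from the
        # test's track record via Bayes' rule. All numbers here are hypothetical.

        def positive_predictive_value(sensitivity, specificity, prevalence):
            """How likely a positive result is to be true, given the test's history."""
            true_positives = sensitivity * prevalence
            false_positives = (1.0 - specificity) * (1.0 - prevalence)
            return true_positives / (true_positives + false_positives)

        # A 95%-sensitive, 90%-specific test for a condition affecting 2% of patients:
        print(round(positive_predictive_value(0.95, 0.90, 0.02), 2))  # 0.16

    Even a seemingly accurate test yields mostly false alarms when the condition is rare, which is exactly the track-record arithmetic the review says is missing from subjective certitude.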

  8. 5 out of 5

    Pavlo Illiashenko

    Harry Truman famously said: "Give me a one-handed economist! All my economists say, 'on the one hand... on the other.'" Philip Tetlock combines three major findings from different areas of research: 1) People don't like experts who are context-specific and cannot provide us with clear, simple answers regarding complex phenomena in a probabilistic world. People don't like it when an expert sounds less than 100% confident. They reason that confidence represents skill. 2) Experts who adopt the publicly acceptable role of hedgehogs (ideologically narrow-minded) and/or express ideas with 100% certainty are wrong on most things most of the time. The general public is fooled by hindsight bias (on the part of experts) and a lack of accountability. 3) We live in a nonlinear, complex, probabilistic world; thus, we need to shape our thinking accordingly. Those who do it ("foxes", who, compared to "hedgehogs", can think non-simplistically) become much better experts in their own field and better forecasters in general. I guess nobody with sufficient IQ or relevant experience will find any new and surprising ideas in this book. However, the story is interesting in itself, and many of Tetlock's arguments and examples can be borrowed for further discussions with real people in real-life settings.

  9. 5 out of 5

    Tony

    I'm giving this a 4 even though I didn't complete it. It's very well written and structured, but I just decided halfway through that the subject wasn't for me. Some exceptional real-world examples though!

  10. 5 out of 5

    Maru Kun

    Troubled to find I own, as yet unread, a book recommended by Dominic Cummings. Now I’ve lost all desire to read it and have to put it on the “interesting-but-dubious shelf” and wonder whether or not I’m a weirdo who can’t tell the difference between real science and pseudoscience. The book gets a mention from Cummings in this video: Tory backlash as Boris Johnson's rogue No10 chief Dominic Cummings refuses to condemn ousted 'superforecaster' Andrew Sabisky who 'posted vile Reddit comments defending rape and incest fantasies'

  11. 4 out of 5

    Michal Mironov

    I usually rank my favorite books on a line between „extremely readable“ and „very useful“. This one is probably among my Top 3 most useful books ever. The other two are Kahneman's “Thinking, Fast and Slow” and Taleb's “Black Swan”. You don't have to agree with the author on everything, but you will still get dozens of truly important facts that can fundamentally affect your life. Don't be misled by the title – you really have to read this book even if you don't have the ambition to predict stock prices or revolutions in the Arab world. Whether we like it or not, we are all forecasters - making important life decisions such as changing career path or choosing a partner - based on dubious, personal forecasts. This book will show you how to dramatically improve those forecasts based on the data and experience of the most successful forecasters. You'll be surprised that those experts usually aren't CIA analysts or skilled journalists, but ordinary intelligent people who know how to avoid the most common biases, suppress their ego, and systematically assess things from different angles. We will never be able to make perfect predictions, but at least we can learn from the very best.

  12. 5 out of 5

    Paul Phillips

    Really good and well-thought-out ideas, particularly relevant to anyone who has any sort of forecasting responsibilities in their work. I think this is a must-read for economists. My only quarrel is that the beginning is a lot more punchy and the end kind of drags.

  13. 5 out of 5

    Leland Beaumont

    Summarizing 20 years of research on forecasting accuracy conducted from 1984 through 2004, Philip Tetlock concluded that “the average expert was roughly as accurate as a dart-throwing chimpanzee.” More worrisome is the inverse correlation between fame and accuracy—the more famous a forecasting expert was, the less accurate he was. This book describes what was learned as Tetlock set out to improve forecasting accuracy with the Good Judgment Project. Largely in response to colossal US intelligence errors, the Intelligence Advanced Research Projects Activity (IARPA) was created in 2006. The goal was to fund cutting-edge research with the potential to make the intelligence community smarter and more effective. Acting on the recommendations of a research report, IARPA sponsored a massive tournament to see who could invent the best methods of making the sorts of forecasts that intelligence analysts make every day. This tournament provided the experimental basis for rigorously testing the effectiveness of many diverse approaches to forecasting. And learn they did! Thousands of ordinary citizen volunteers applied, approximately 3,200 were invited to participate, and 2,800 eventually joined the project. “Over four years, nearly five hundred questions about international affairs were asked of thousands of Good Judgment Project’s forecasters, generating well over one million judgments about the future.” Because fuzzy thinking can never be proven wrong, questions and forecasts were specific enough that the correctness of each forecast could be clearly judged. These results were used to compute a Brier score—a quantitative assessment of the accuracy of each forecast—for each forecaster. In the first year 58 forecasters scored extraordinarily well; they outperformed regular forecasters in the tournament by 60%. Remarkably, these amateur superforecasters “performed about 30 percent better than the average for intelligence community analysts who could read intercepts and other secret data.” This is not just luck; the superforecasters as a whole increased their lead over all other forecasters in subsequent years. Superforecasters share several traits that set them apart, but more importantly they use many techniques that we can all learn. Superforecasters have above-average intelligence, are numerically literate, pay attention to emerging world events, and continually learn from their successes and failures. But perhaps more importantly, they approach forecasting problems with a particular philosophic outlook, thinking style, and set of methods, combined with a growth mindset and grit. The specific skills they apply can be taught and learned by anyone who wants to improve their forecasting accuracy. This is an important book. Forecasting accuracy matters, and the track record has been miserable. Public policy, diplomacy, military action, and financial decisions often depend on forecast accuracy. Getting it wrong, as so often happens, is very costly.
    The detailed results presented in this book can improve intelligence forecasts, economic forecasts, and other consequential forecasts if we are willing to learn from them. This is as close to a page-turner as a nonfiction book can get. The book is well-written and clearly presented. The many rigorous arguments presented throughout the book are remarkably accessible. Sophisticated quantitative reasoning is well presented using examples, diagrams, and only a bare minimum of elementary mathematical formulas. Representative evidence from the tournament results supports the clearly argued conclusions. Personal accounts of individual superforecasters add interest and help create an entertaining narrative. An appendix summarizes “Ten Commandments for Aspiring Superforecasters”. Extensive notes allow further investigation; however, the advance reader edition lacks an index. Applying the insights presented in this book can help anyone evaluate and improve forecast accuracy. “Evidence-based policy is a movement modeled on evidence-based medicine.” The book ends with simple advice and a call to action: “All we have to do is get serious about keeping score.”

  14. 5 out of 5

    Frank

    PT's Superforecasting correctly remarks upon the notable failure to track the performance of people engaged in predicting the outcome of political events. This lack of accountability has led to a situation where punditry amounts to little more than entertainment: extreme positions offered with superficial, one-sided reasoning, aimed mainly at flattering the listeners' visceral prejudices. One problem is that expressed positions are deliberately vague. This makes it easy for the pundit to later requalify his position to conform with the eventual outcome. For example: a pundit claims quantitative easing will lead to inflation. When consumer inflation doesn't appear, he can claim that 1) it will, given enough time, 2) in fact, there is inflation in stock prices, or 3) he never said how much inflation. Thus, the first task in assessing performance is to require statements with clearly defined, easily measurable criteria. Once this was done, PT began a series of experiments, testing which personality characteristics and process variables led to good prediction outcomes, both for individuals and groups. Key attributes include independence from ideology, an openness to consider a variety of sources and points of view, and a willingness to change one's mind. Native intelligence, numeracy, and practical experience with logical thinking all correlate positively with prediction accuracy, at least to a point. But moderately intelligent and diligent individuals can often surpass the super bright, who sometimes show a tendency to be blinded by their own rhetoric. And some "superforecasters" consistently outperform professionals with access to privileged information. The chapter on how to get a group to function well together is especially applicable to business management. PT wrote his book in a middlebrow style, and anyone already familiar with basic psychology writing, e.g. from D. Kahneman, will often feel annoyed by his long and overly folksy explanations. Indeed, while it has good things to say about applied epistemology, it isn't necessary to read all 200 pages. A good alternative starting point would be to consult Evan Gaensbauer's review at the LessWrong website: https://www.lesswrong.com/posts/dvYeS....

  15. 4 out of 5

    Andrew

    Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock, is a book about the art and science of statistical prediction and its everyday uses. Except it isn't really; that is just what they are selling it as. The book starts off really strong, analyzing skepticism, illusions of statistical knowledge, and various types of bias. However, the majority of the book focuses on a US government intelligence tournament sponsored by IARPA, designed to use everyday citizens to make statistical predictions about real-life events. This book really threw me off, after such a strong start. I was expecting a book on forecasting: its design, uses, and various techniques. And in some ways Tetlock delivers. However, I did not expect a book on US foreign policy, or a comparison of the fictional version of CIA director Leon Panetta from Zero Dark Thirty with the real one. It was all a little bit mind-boggling, and not in a statistical way. The book just doesn't hold my interest. Frankly, I could continue to criticize; suffice it to say that a book on a US intelligence program should probably be labeled a bit better than this. The title suggests a cut-and-dried analysis of forecasting, but the book delivers propaganda, US political criticism, and so on, as opposed to interesting information on a form of statistical prediction. I would recommend a pass on this one if you are not interested in the addition of fairly watered-down US political theory. If you are, however, the book may be of interest to you. I was more disappointed than anything, and hope to read a book actually focusing on forecasting in the near future.

  16. 5 out of 5

    Asif

    Possibly the best book I read in 2015. Couldn't have read it at a better time as the year nears an end. I could relate to a lot of things, as I work as an equity analyst trying to do the seemingly impossible thing of forecasting stock prices. In particular, the examples of how superforecasters go about doing their jobs were pretty inspiring. Examples of taking the outside view, creating a tree of various outcomes, and breaking down that tree into branches are something I could benefit from. As an analyst I am certain of only one thing: my estimates for earnings and target prices for stocks will be wrong 99% of the time. However, I try to make sure that I am not missing the big picture, and when there is a screaming buy or sell caused by changing fundamentals or market overreactions I do not want to miss that. My earnings estimates and target prices are actually much less important. What is true, however, is that small bits and pieces of information, including quarterly earnings, do play a role in getting to a high-conviction big-picture story.

  17. 5 out of 5

    Frank Ruscica

    Just finished reading an advance copy. The signal-to-noise ratio of this book: maximum.

  18. 5 out of 5

    Romanas

    As Hume noted, there’s no rational basis for believing that the sun will rise tomorrow. Yet our brain wants to have it simple – we believe that the future will be like the past. The problem here is that the truth of that belief is not self-evident, and there are always numerous possibilities that the future will change. It means that our causal reasoning can’t be justified rationally, and thus there’s no rational basis for believing that the sun will rise tomorrow. However, virtually nobody would claim there will be no sunrise tomorrow (at least in Stockholm we haven’t seen it during the last three weeks). This is human nature – to believe that certain things will happen. To predict and forecast is in human nature too. Some people claim they are very good at it. The author takes this on, gives an excellent overview and analysis of the state of affairs in the prediction and forecasting world, and introduces some big achievers in this area – superforecasters. Regarding superforecasters: obviously, they exist. They exist just like Warren Buffett, like the guys winning the lottery, like some personas becoming presidents of the US, etc., i.e. the rare ones who reach extremely improbable heights. Is it pure skill, or is it a product of skill and the result of the randomness filter, aka luck? Forecasting of the events emerging from social formations and human-made systems is virtually impossible. Assuming free will exists, it makes our world extremely complex, and complicated. There’s a possibility that supercomputers and AI will eventually greatly improve forecasting of natural phenomena, like weather, earthquakes, flooding and the like. But then again, humans are in the loop, and most likely have the capacity to significantly alter the natural processes. This makes the whole human-altered natural system practically unpredictable. To become a superforecaster one needs to constantly calibrate and verify the underlying models. As the author points out, it is relatively easy in some areas, because some events forecasted have a high frequency, like weather forecasts, where one can verify the forecast on a daily basis and improve the models. Rare processes are virtually impossible to calibrate, just because they are rare and there is not enough data to calibrate the prediction model, be it a mental or a computer model. That’s why the forecasting of the events residing in fat tails, the extremes, Black Swans, as Taleb calls them, is epistemically impossible. I took on this book with the premise that listening to anyone who thinks they can predict the future is a big waste of time. The author mapped out what abilities a superforecaster usually possesses. Among the strengths of a good forecaster are open-mindedness and the ability to update one’s beliefs based on new evidence. I think I have those dispositions, and this book actually shifted me some degrees toward the optimistic sceptic.
    My curiosity about how groups can do some valuable forecasting has increased. Sometimes it can be very dangerous to base one’s activities on predictive grounds. Good reactive capabilities, like risk management, are essential to be able to cope with whatever the future will bring. Someone said that planning is useless, but it is indispensable. And many say we should live in the moment. Still, it’s hard and maybe even not so wise not to put down some ideas on how the future would unfold. Maybe planning is an illusion, but like many illusions it might give some existential comfort. Very soon we will close the books, raise a glass and try to look into the new year with the hope that it will be at least a better year than the previous one. We will certainly forecast many things we believe will happen next year. No one could have said it better about this kind of game than Lin Wells (just change 2010 to 2019) -- “All of which is to say that I’m not sure what 2010 will look like, but I’m sure that it will be very little like we expect, so we should plan accordingly”. This was my 100th book this year. I had forecasted this to happen, and it did! On this forecast my Brier score is 0 – meaning: perfection! On that note, to my followers and friends: warmest thoughts and best wishes for a wonderful holiday and a very happy new year!!!

  19. 5 out of 5

    Allen Adams

    http://www.themaineedge.com/style/fut... Ever since mankind grasped the concept of time, we have been trying to predict the future. Whole cottage industries have sprung up around the process of prediction. Knowing what is coming next is a need that borders on the obsessive within our culture. But is it even possible to predict what has yet to happen? According to “Superforecasting: The Art and Science of Prediction”, the answer is yes… sort of. Social scientist Philip Tetlock and journalist Dan Gardner have teamed up to offer a treatise on the nature of prognostication. Not only do they discuss the many pitfalls of prediction, but they also offer up some thoughts on that small percentage of the population who, for a variety of reasons, are very, VERY good at it. Tetlock has spent decades researching the power of prediction. Basically, people are pretty terrible at it. He himself uses the oft-offered analogy of a chimpanzee throwing darts – the implication is that random chance is at least as good at predicting future outcomes as the average forecaster. Even the well-known pundits, the newspaper columnists and talking heads – even they struggle to outperform the proverbial dart-tossing simian. But over the course of Tetlock’s years of study by way of his ongoing Good Judgment Project, he uncovered an astonishing truth. Yes, most people have no real notion of how to predict the outcome of future events. However, there are some who can outperform the chimp. They can outperform the famous names. They can outperform think tanks and universities and national intelligence agencies and algorithms. Tetlock calls these people “superforecasters.” These superforecasters were among the tens of thousands who volunteered to be a part of the Good Judgment Project. They were part of a massive, government-funded forecasting tournament. These people – folks from all walks of life, filmmakers and retirees and ballroom dancers and you name it – were asked to predict the outcome of future events. And so they did. Some soon separated themselves from the pack, offering predictions about a vast and varied assemblage of global events with a degree of unmatched accuracy. In “Superforecasting,” we get a chance to look a little closer at some of these remarkably gifted individuals. Tetlock offers analysis of some past predictions that were successful and others that were failures. We also get insight from prominent figures in the intelligence community and from people ensconced in the public and private sectors alike. And as Tetlock and company dig deeper, it becomes clear that the key to forecasting accuracy – to becoming a superforecaster – isn’t about computing power or complex algorithms or secret formulas. Instead, it’s about a mindset, a desire to devote one’s intellectual powers to a flexibility of thought.
    The accuracy of these superforecasters springs from their ability to understand probability and to work as a team, as well as an acceptance of the possibility of error and a willingness to change one’s mind. There’s something inherently fascinating about predicting the future. One might think that Tetlock’s findings are a bit complex – and they undoubtedly are. However, what he and co-author Gardner have done is condense the myriad complications of his decades of research into something digestible. A wealth of information has been distilled into a compelling and fascinating work. It’s not quite pop science – it’s a bit denser than that – but it’s still perfectly comprehensible to the layman. In essence, this book gives us a clear and demonstrable way to improve the way we predict the future. “Superforecasting” is a fascinating and compelling exploration of something to which many of us may not have given much thought. It’s not all haphazard chance – there are actually ways to improve your ability to predict the future, some of which are laid out right here for you. If nothing else, you’ll never look at a pundit’s prediction the same way again.

  20. 4 out of 5

    Mehrsa

    Really interesting. The book shows some common misconceptions about forecasting - well, statistics really. I'm often surprised by how people, including me, misinterpret data. The book also showed what it takes to be an excellent forecaster. Basically, it requires the same skills as anything else: pay attention, evaluate yourself, know your blind spots, be humble, practice.

  21. 4 out of 5

    Sabin

    It sucks when an audiobook is penned by two people but you hear a lot of “I” and “me”. After a little bit of background checking, apparently the “I” and “me” guy is Tetlock, the scientist, while Gardner is just here for the ride. And also because he’s a journalist and because he can write. But maybe I’m wrong. Anyway, the end result is worth it. It’s a very detailed account of two forecasting tournaments, which aim to find out if people are better than chance at predicting the future. Short answer: for some people yes, but on average no. Which means that, while some are better than average, some people are actually worse at predicting the future than the flip of a coin, or Tetlock’s infamous “dart-throwing monkey”. Aside from describing the experiments and drawing conclusions from them, which by themselves would have made this book worth every minute spent reading it, the authors also discuss other experiments and connect their findings to other theories proposed by psychologists Daniel Kahneman and Amos Tversky, essayist of “Black Swan” fame Nassim Nicholas Taleb, and a few other theorists, with both civilian and military backgrounds. The authors focus on predictions about international events and especially on their correctness. The tournaments aim to find people who can make very good predictions (a lot better than the monkey) and find out how they go about achieving such good scores. Their conclusions are, as always, common sense, if you stop to think about it. But their insight also helps avoid the pitfalls we may encounter along the way. While not many of us will be called upon to predict whether, upon examination, Yasser Arafat’s body contains traces of polonium, or whether North Korea will develop nuclear weapons within a given timeframe, these experiments point out judgement flaws inherent in our human nature and make us aware of our own mistakes. Unlike Kahneman’s Thinking, Fast and Slow, this book’s contents are perhaps less directly relevant to everyone. It seems to apply more to people who are in the business of forecasting, like economists, financial analysts and stock brokers. And I say this also because it’s very practical and gives a lot of details on the methods one can use to achieve better forecasting results. What is actually relevant to everyone is their description of the mindset required for good decision-making and how someone should go about weighing the consequences of important decisions before making them. Of course, going with your gut feelings is one way of making a decision, and, apparently, if you already have experience in that situation, a gut decision is already a lot better than random. But a better way, even if your gut tells you a course of action is good, and you have enough time to analyse the issue, is to check your internal point of view against an external one, which can be quantified, and then adjust your initial estimate accordingly. Actually you get a much better explanation and also a lot of examples in the book, and they do, in fact, make sense.
    It is one of the better books I read this year, and I found it pleasant to be reminded of some of the concepts discussed previously by Kahneman. Forecasting is an integral part of our humanity, and this book can also help us understand ourselves a bit better. One more thing: listening to it was a delightful experience. The performance, except for a few German names, was admirable, while the pace and the examples kept everything interesting.

  22. 5 out of 5

    Ragnar

    The point Philip Tetlock and Dan Gardner are trying to make is that in the field of forecasting we seldom measure the accuracy of a prediction retrospectively. This applies especially to talking heads in the media giving vague opinions, often with no timeframes, about trends in the stock market, the crisis in Syria, the results of the next elections, etc. The unfortunate thing is that the same happens, or happened previously, in the national intelligence services as well. No scores were kept on the accuracy of the forecasts, therefore it was impossible to improve as well. To start validating forecasts, Philip Tetlock ran an experiment - the Good Judgment Project - a competition for ordinary folks to forecast the future, in essence a crowdsourcing project. It was a success, a revelation for prediction science, as it turned out the results were ~40% better than the set benchmark. The accuracy of those probabilistic predictions was measured with the Brier score, a validation method where the set of possible outcomes can be either binary or categorical and must be mutually exclusive. An example: suppose that one is forecasting the probability P that it will rain on a given day. Then the Brier score is calculated as follows (a small code sketch follows this review):
    - If the forecast is 100% (P = 1) and it rains, the Brier score is 0, the best score achievable.
    - If the forecast is 100% and it does not rain, the Brier score is 1, the worst score achievable.
    - If the forecast is 70% (P = 0.70) and it rains, the Brier score is (0.70 − 1)² = 0.09.
    - If the forecast is 30% (P = 0.30) and it rains, the Brier score is (0.30 − 1)² = 0.49.
    - If the forecast is 50% (P = 0.50), the Brier score is (0.50 − 1)² = (0.50 − 0)² = 0.25, regardless of whether it rains.
    By using the Brier score, Tetlock managed to find, among thousands of forecasters, the so-called superforecasters who had exceptional results. By interviewing and observing them he noted that the superforecasters had the following characteristics:
    - Cautious: Nothing is certain.
    - Humble: Reality is infinitely complex.
    - Nondeterministic: What happens is not meant to be and does not have to happen.
    - Actively open-minded: Beliefs are hypotheses to be tested, not treasures to be protected.
    - Intelligent and knowledgeable, with a need for cognition: Intellectually curious, enjoy puzzles and mental challenges.
    - Reflective: Introspective and self-critical.
    - Numerate: Comfortable with numbers.
    In their methods of forecasting they tend to be:
    - Pragmatic: Not wedded to any idea or agenda.
    - Analytical: Capable of stepping back from the tip-of-your-nose perspective and considering other views.
    - Dragonfly-eyed: Value diverse views and synthesize them into their own.
    - Probabilistic: Judge using many grades of maybe.
    - Thoughtful updaters: When facts change, they change their minds.
    - Good intuitive psychologists: Aware of the value of checking thinking for cognitive and emotional biases.
    In their work ethic, they tend to have:
    - A growth mindset: Believe it's possible to get better.
    - Grit: Determined to keep at it however long it takes.
    Best quotes:
    - Magnus Carlsen – "I often can't explain a move; it just feels right. Usually I know what I will do in 10 seconds; the rest is double-checking."
    - Unknown – "Not everything that counts can be counted, and not everything that can be counted counts."
    - Galen – "All who drink of this remedy recover in a short time, except those whom it does not help, who all die."
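
    [Editor's note: a minimal sketch of the binary Brier scoring worked through in the rain example above. The function names and the sample track record are illustrative, not from the book.]

        # Binary Brier score from the rain example: score = (p - outcome)^2,
        # where outcome is 1 if the event happened, else 0. Lower is better;
        # a constant 50/50 guesser always earns 0.25.

        def brier_score(p, occurred):
            """Squared error between a probability forecast and the actual outcome."""
            outcome = 1.0 if occurred else 0.0
            return (p - outcome) ** 2

        def mean_brier(forecasts):
            """A forecaster's overall score: the average over (p, occurred) pairs."""
            return sum(brier_score(p, o) for p, o in forecasts) / len(forecasts)

        # The worked cases from the review:
        print(brier_score(1.00, True))    # 0.0  - perfect forecast
        print(brier_score(1.00, False))   # 1.0  - worst possible
        print(brier_score(0.70, True))    # ~0.09
        print(brier_score(0.30, True))    # ~0.49
        print(brier_score(0.50, True))    # 0.25 - same whether it rains or not

        # A hypothetical track record over four resolved questions:
        print(mean_brier([(0.9, True), (0.2, False), (0.7, True), (0.6, False)]))  # ~0.125

    Averaging over many resolved forecasts is what makes the score a track record rather than a one-off grade, which is the accountability the review says pundits and intelligence services lacked.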

  23. 4 out of 5

    Gabriele

    Many of my friends were recommending this book to me and now that I've read it, I regret not doing it sooner. Do yourself a favor and read it asap!

  24. 5 out of 5

    Katrina Stevenson

    I truly think I learned a lot about how to forecast more accurately, and what to be aware of when listening to the forecasts of others. It definitely expanded my critical eye when it comes to experts, claims of fact, etc.

  25. 5 out of 5

    Morgan Blackledge

I don’t have a lot to say about this book, other than it’s good and the author Phil Tetlock is an extremely well regarded social scientist. My uncharacteristic lack of verbiage is not necessarily a slam on this book. It’s more a reflection of how dang overloaded I am at present with work and school. It all has to get done. But in the meantime, my goodreads output has jumped the shark, so to speak. That being said: Superforecasting documents Tetlock’s work studying individuals and teams that have extraordinary (i.e. consistently better than chance over the long term) track records for predicting important events. As it turns out, these individuals are not necessarily experts in the field they make predictions in. They aren’t necessarily smarter or better at math. They are certainly not psychic or particularly intuitive. Rather, they are meticulous critical thinkers who understand logic and probability, and use a Bayesian methodology for incrementally adjusting their predictions as the data rolls in. It’s like Google Maps’ estimated time of arrival (ETA): it’s shockingly accurate because it continuously compares its initial estimate (based on previous data) to real progress and modulates your ETA up or down. The underlying rule is Bayes’ theorem: the updated (posterior) probability of a hypothesis equals the prior probability multiplied by the likelihood of the new evidence given the hypothesis, divided by the overall probability of that evidence, P(H|E) = P(E|H) × P(H) / P(E). So: it’s at once really neat and, like so many findings in social science, kind of a no-duh when you consider what is being said, i.e. I can tell you what’s about to happen if I am allowed to continuously change my prediction the closer we get to the event. But it’s not exactly that simple, because every datahead can run that same methodology, yet not all of them are superforecasters. So read the book. But that’s it in a nutshell. (A short code sketch of this kind of updating follows this review.)
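Here is a minimal sketch of the incremental Bayesian updating described above, in Python; the forecast question and all the probabilities are invented for illustration:

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where P(E) is
        computed over both cases (hypothesis true or false)."""
        p_evidence = (p_evidence_if_true * prior
                      + p_evidence_if_false * (1 - prior))
        return p_evidence_if_true * prior / p_evidence

    p = 0.50                         # initial estimate: "Candidate X wins"
    p = bayes_update(p, 0.70, 0.40)  # a favorable poll arrives -> ~0.64
    p = bayes_update(p, 0.30, 0.60)  # a scandal breaks         -> ~0.47
    print(round(p, 2))               # 0.47

Each new piece of evidence nudges the estimate rather than replacing it, which is exactly the small, frequent updating style the superforecasters favor.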

  26. 4 out of 5

    Aloke

Social scientist Philip Tetlock talks about his multi-year experiment to measure and identify ways to improve our forecasts. A lot of his findings appear to be common sense: many forecasts are very nebulous and difficult to disprove, so we can't really tell if we are improving; big problems can be forecast by breaking them into more manageable pieces ("Fermi-izing" in his terminology; see the short sketch after this review); our judgement is affected by various biases; etc. The contribution is that Tetlock has brought these ideas together in one place and really proved that applying them can produce forecasts that are significantly better than other approaches. The book is also really well written (it is co-written with the author Dan Gardner) and thought-provoking enough to elevate it far above the average pop-sci tome. The anecdotes are well chosen and persuasive, but Tetlock stays humble and avoids sweeping pronouncements: he repeatedly warns against false dichotomies. He isn't interested in absolute certainty. He pokes holes in his own biases (e.g. against forecasters like Thomas Friedman) and he frankly discusses criticisms of his research by Taleb and Kahneman.
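As an illustration of the Fermi-izing idea, here is the classic piano-tuner estimate in Python (my example, not one from the book); every factor is a rough, defensible guess, and the decomposition, not the individual numbers, is the point:

    # Fermi estimate: how many piano tuners work in Chicago?
    population = 2_700_000            # people in Chicago
    people_per_household = 2
    piano_ownership = 0.05            # ~1 in 20 households owns a piano
    tunings_per_piano_per_year = 1
    tunings_per_tuner_per_year = 2 * 5 * 50   # 2/day, 5 days/wk, 50 wks/yr

    pianos = population / people_per_household * piano_ownership
    tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
    print(round(tuners))              # ~135: right order of magnitude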

  27. 4 out of 5

    A Man Called Ove

I first heard of this book on CNN's GPS podcast, but the name "Superforecasting" reminded me of "Super-freakonomics", which in turn reminded me of dubious smartass hindsights and caused me to ignore the recommendation. Tetlock was cited again by Steven Pinker in his book "Enlightenment Now", and that finally got me to pick it up. Can you really forecast geopolitical events? Surprisingly, yes. Do you need a special ability to be a "super-forecaster"? Not really. What then do you need? The book describes the methods used by super-forecasters and in doing so describes a number of systemic biases in our thinking. Also, there are many relevant examples, and except for a couple of complex equations which can be ignored, the author makes his points really well. This was a fun, fast read that was also satisfying. To the author's credit, he has finally made me pick up Thinking, Fast and Slow, which I already think will be life-changing as far as books and ideas can be.

  28. 4 out of 5

    Nick Pascucci

    This is really a book on epistemology: How do you manage uncertainty in a world with tons of it? How can we update our beliefs while taking into account the usefulness of new evidence? In the forecasting arena these questions and their answers are key to producing accurate predictions. I would recommend this book to anyone interested in the art of Bayesian epistemology and probabilistic thinking, and doubly so if one has a predilection for prediction.

  29. 4 out of 5

    Michael Burnam-Fink

Prediction is hard, especially about the future. And despite the importance people and organizations place on having a clear view of the future, we're not very good at prediction. The authors, Tetlock and Gardner, argue that the state of prediction is similar to the state of medicine before randomized clinical trials: sometimes forecasters are right, but mostly they're wrong, and there's no way to separate the potentially useful treatments from quackish nonsense. But there might be a better way. Superforecasting is a write-up of the authors' Good Judgement Project, an IARPA (Intelligence Advanced Research Projects Activity) effort to systematically study predictions. The Good Judgement Project smoked the competition, including in-agency experts and prediction markets. Tetlock and Gardner used their background in psychology to find out what made their top 2%, the titular superforecasters, tick better than everyone else. The first finding is that most of us are astonishingly bad at prediction. Generally, people have three settings for probability: impossible, certain, and maybe. Kahneman's System 1, the intuitive rush to judgment, is terrible at complex problems. Perhaps the biggest step is to slow down and engage System 2, the rational and logical side of the mind. Beyond that, people with ideological blinders are lousy predictors. If everything has to fit into Marxist dialectics, or the immortal science of Friedman-Hayek Thought, you'll overlook evidence that contradicts the theory. The Good Judgement Project provides some necessary structure. Instead of weasel words ("most of", "highly likely") and vague timelines ("in the near future"), participants are given clear factual statements with a definite endpoint. There are thousands of predictions, and participants are allowed to update their initial assessments as they do more research. Superforecasters are adept at seeing a problem in numerous ways, rather than focusing on grand theories. They tend to be comfortable with statistics. Participants in the Good Judgement Project self-selected as more intelligent and better educated than the population at large, but superforecasters aren't notably smarter or more credentialed than their less accurate peers. Instead, they have a bulldog tendency toward research, an ability to question their own assumptions, update beliefs, and think inside-out and outside-in. Superforecasting is a fascinating and very useful book for anyone who is thinking about the future.

  30. 4 out of 5

    Graeme Newell

I really loved this book. The author ran a multi-year study for the government agency IARPA (Intelligence Advanced Research Projects Activity) attempting to find ways to make predictions of future world events more accurate. The agency recruited thousands and thousands of ordinary people, then asked them to predict events of major importance. Examples included:
• Political leaders that might fall from power
• The prices of important commodities such as oil, precious metals and food stocks
• Major economic occurrences such as recessions and stock market growth
Dozens and dozens of "super forecasters" quickly rose to the top prediction ranks. The researchers then conducted a long-term study of the specific techniques these prediction geniuses used to create their startlingly accurate prognostications. Tetlock's book skillfully explains the shared best practices used by these prescient champions. Predicting world events, and even important events in personal life, is more achievable than you might think, but you must learn to minimize the sabotaging innate biases that taint the average person's prediction abilities. Over thousands of years of evolution, our human judgments were optimized for survival in small hunter-gatherer tribes. Unfortunately, humanity has not had time to evolve a brain that's great at solving problems in modern society. Our brains are fantastic at making decisions that keep us from starving to death, but pretty terrible at predicting modern problems such as currency fluctuations. Monetary policy just never seemed to be a major discussion point around the Homo sapiens campfire. Tetlock shows just how remarkably badly our out-of-date brains predict future events. But there is hope! Tetlock dives into the simple yet effective techniques shared by these superstar predictors, then shows how we mere mortals can put these same practices to work on our own personal problems. Tetlock's writing is delightfully flowing, and he weaves a fascinating story about a big government program that brought about a revelation in the intelligence gathering process. It is an inspirational story about how everyday people who practice some very simple mental disciplines were able to bring about some major good in the world.
