For the psychology professor Philip Tetlock, the hunt for Osama bin Laden is a classic example of the shortcomings of the intelligence agencies. When Barack Obama gave the green light for that operation four years ago, he knew he was making one of the most difficult decisions of his life, one that would not only mean life or death for those involved but also sway the course of history and help determine his legacy. The estimates the intelligence agencies offered him were inconclusive: some put the likelihood of success at 40%, others at 80%. In Zero Dark Thirty, the movie based on the operation, the CIA agent Maya insists she is 100% certain of success. In reality, Obama judged the chances to be fifty-fifty and gave the green light against the advice of his secretary of defense.
In Tetlock's view, such imprecision presents an unacceptable risk. Forecasts claiming complete certainty are, of course, unscientific. But Tetlock argues that a historic decision must not rest on imprecise reports. Obama may have enjoyed luck on a historic scale when his special task force found and killed bin Laden, but Tetlock insists that the work of the intelligence agencies must change, and change fundamentally.
Tetlock has worked on precisely this problem since the 1980s. For four years now he has pursued research at the University of Pennsylvania at the behest of the Intelligence Advanced Research Projects Activity (IARPA), which the NSA and the CIA, together with fourteen other American intelligence agencies, established in 2006 to develop new methods for intelligence work in the post-9/11 era. Among IARPA's divisions are the Office for Anticipating Surprise, the Office of Smart Collection, and the Office of Incisive Analysis.
Psychologists' "forecasting tournaments" capture the interest of the NSA and the CIA
This past weekend Tetlock met with twenty scientists and engineers at a vineyard north of San Francisco. Two European journalists were invited; otherwise, the meeting was closed to the public. Tetlock wanted to discuss the results of his Good Judgment Project, the culmination of decades of research. The scientists debate the project under ideal circumstances: sheltered from the summer heat in the cool living room of a stately Victorian house. With palms in the garden, a front porch, and wainscoting, the house exudes colonial splendor. The air is redolent of the rose beds in front of the windows and the precious woods of the furniture. The host is John Brockman of the Edge Foundation, Inc. (http://edge.org), the country's foremost network for debates of this kind. That explains the presence of such intellectual heavyweights as the Nobel laureate in economics Daniel Kahneman, the political scientist and National Medal of Science winner Robert Axelrod, the political scientist Margaret Levi, and Google vice president Salar Kamangar. It isn't easy to hold one's own in such a group. Kahneman in particular, the cleverest of them all, is skeptical.
During the lunch break the time finally seems right to pose the question: is it ethically admissible for scientists to work for an institution like IARPA? It soon becomes clear that only a European journalist is apt to find that problematic. Many of those present, it turns out, have also worked for the Defense Advanced Research Projects Agency (DARPA), the research arm of the Department of Defense and the model for IARPA, which is run by the Office of the Director of National Intelligence.
According to Peter Lee, now head of research at Microsoft, IARPA is only the best-known attempt to imitate the success of DARPA. The latter was founded in 1958, after the Soviet launch of Sputnik, as the Advanced Research Projects Agency (ARPA), in order to give the U.S. an edge in the technological race between the superpowers. Its successes have made scientific history: this is where the rockets that later flew to the moon were developed, as well as the first version of the Internet, GPS, the first drones, and the first self-driving cars. For American scientists, Lee says, DARPA has always been the agency of unlimited possibilities.
The Nobel laureate Daniel Kahneman also worked for the military, though in Israel: after the shock of the Yom Kippur War of 1973 he established a military prediction team. That is why Tetlock's work interests him; it offers a chance to introduce scientific standards into intelligence work. Until now, he says, intelligence agencies have drafted their reports as essays, and one could hardly ask for a less precise approach.
At the end of the weekend Philip Tetlock explains once more, in detail, why the work of the "superforecasters" also has civilian applications, as the Internet and self-driving cars once did. Over the long term, he says, their methods and mindset could take the sting out of public discourse. If the traditional opinion leaders, the commentators and columnists, had to face "forecasting tournaments," he believes their supposed expertise would soon be exposed as mere opinion. In any case, he finds, an analytical approach would be an improvement.
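The article does not spell out how such a tournament decides who forecasts well, but the standard yardstick in Tetlock's project is the Brier score: the squared error between a probability forecast and the eventual yes-or-no outcome. A minimal sketch, with hypothetical forecasters and numbers:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0.0-1.0) and
    the realized outcome (0 or 1). Lower is better; 0.0 is perfect."""
    return (forecast - outcome) ** 2

def tournament_score(forecasts, outcomes):
    """A forecaster's mean Brier score across all tournament questions."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Hypothetical example: three questions, all of which came true (outcome = 1).
# A confident, well-calibrated forecaster beats one who always hedges at 50%.
confident = tournament_score([0.9, 0.8, 0.95], [1, 1, 1])  # ~0.0175
hedger = tournament_score([0.5, 0.5, 0.5], [1, 1, 1])      # 0.25
```

The appeal of such a rule is that it punishes both overconfidence and vagueness: a pundit who always answers "it could go either way" scores no better than a coin flip, which is exactly the exposure of "supposed expertise" Tetlock has in mind.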
Daniel Kahneman, too, is convinced in the end. If "superforecasting" can be taught, he says, it will gain a foothold in the scientific community. He sees only one obstacle: it is a long way from scientific relevance to political relevance. A very long way. Tetlock's methods, he believes, will only prove effective once they improve the quality of political decisions. Yet perhaps that isn't so essential. Perhaps it is already an important step that science is forging the ideal of a post-ideological era in which reason can prevail.