
Bestselling author Nate Silver speaking at the ULI Spring Meeting in Philadelphia.

Big data—the collection of enormous sets of information and their analysis for patterns and insights to aid decision making—is a hot trend in fields ranging from health care to sports to real estate.

But at the closing session of ULI’s Spring Meeting in Philadelphia, one of the nation’s most prominent statistical seers gave what might seem like a surprising warning: a mountain of numbers and facts can just as easily lead you to the wrong conclusion as to the right one.

“When you have more data, you have more opportunities to be wrong,” said Nate Silver, founder and editor-in-chief of FiveThirtyEight, a website devoted to applying sophisticated statistical analysis to a variety of subjects.

Silver, the author of the New York Times bestseller The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t, first gained fame for developing a mathematical model that correctly called the results in 49 of 50 states in the 2008 presidential election. In 2012, he did even better, picking the winner in all 50 states.

With that sort of record, Silver might be expected to tout the supremacy of number crunching over mere observation, experience, and instinct. Instead, he cautioned his audience that those old-fashioned aids to decision making not only are still viable, but actually are essential to getting the correct answers from statistical modeling.

He also pointed out the importance of developing a good theory—a reasoned explanation for a phenomenon, whether it is a baseball star’s hitting proficiency or the performance of real estate investments—and using data to evaluate its predictive value, rather than simply making predictions based on numbers.

Silver sought to debunk the notion that “once you have enough variables and enough computers, you don’t need to worry about theory and the scientific method, and that you just need to find correlations in data raining out of the cloud.” To the contrary, he said, that approach can be a recipe for disaster. “If you have false theories, you see the data in the wrong way.”

Silver cited the 2008 financial meltdown, in which investment banks and regulators failed to foresee that trillions of dollars in mortgage-backed securities would go into default, because the possibility of such a massive catastrophe seemed too remote.

But another, perhaps even more glaring example was Japan’s lack of preparedness for a 9.0 earthquake in March 2011, which triggered a massive tsunami that crippled the Fukushima Daiichi nuclear plant and caused the meltdown of three reactors. In that instance, earthquake scientists had failed to anticipate the possibility that a quake that size could occur because they relied on relatively recent seismic records, Silver said. “Using 90 years of data to predict a 300-year event” won’t work, he said.

It also can be misleading to give too much weight to individual data points, Silver noted, such as the latest poll in a presidential race, rather than looking at the pattern from multiple points over time. In the 2012 presidential race, for example, looking at polls in isolation gave a misleading picture of the contest between President Obama and challenger Mitt Romney. “Data is messy,” Silver explained. But when the results of many polls were combined, they plotted pretty much a straight line in which Obama maintained a consistent lead, except for a swing of a point or two at the end.
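The noise-canceling effect of combining many polls can be sketched with a short simulation. The numbers below are illustrative assumptions, not actual 2012 polling data: a candidate holds a steady 2-point lead, and each individual poll misses that true value by a typical sampling error of 3 points.

```python
import random

random.seed(42)

TRUE_LEAD = 2.0   # assumed underlying lead, in percentage points
POLL_ERROR = 3.0  # assumed typical error of any single poll

# Simulate 500 individual polls: each is the true lead plus random noise.
polls = [random.gauss(TRUE_LEAD, POLL_ERROR) for _ in range(500)]

# Any single poll can be far off -- it may even show the leader behind.
print("one poll in isolation:", round(polls[0], 1))

# Averaging many polls cancels most of the noise, recovering the
# roughly straight line Silver describes.
average = sum(polls) / len(polls)
print("average of 500 polls:", round(average, 1))
```

The same spread of single-poll results also illustrates Silver's cherry-picking point: with 500 noisy polls to choose from, some will show almost any story you want.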

Silver chided cable TV news commentators for chronically misusing polling data. “They want to create a narrative of major swings back and forth,” he said. That way, “every minor gaffe becomes an opportunity for a new story to be written.” In reality, the electorate doesn’t change direction so rapidly. “People are fairly sensible, and it ends up being fairly stable,” he said. “Most people aren’t watching cable news.”

That problem actually has been amplified by increased polling, he said. “If you have 500 polls instead of five, you can cherry-pick the ones you like.”

To avoid drawing wrong conclusions from data, people should remember that any prediction comes with a degree of uncertainty, Silver told those attending. For that reason, he prefers to publish “probabilistic” forecasts. Instead of claiming to know exactly what will happen, Silver runs his model enough times to provide a range of possible outcomes and the likelihood of a given result.
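A probabilistic forecast of the kind Silver describes can be sketched as a simple Monte Carlo simulation: run the model many times with random variation and report a probability and a range rather than a single point prediction. The lead and uncertainty figures below are illustrative assumptions, not FiveThirtyEight's actual model.

```python
import random

random.seed(0)

POLLING_LEAD = 2.0  # assumed candidate lead, percentage points
UNCERTAINTY = 3.0   # assumed combined polling and modeling error

# Run the "model" many times, each run drawing one plausible outcome.
RUNS = 10_000
outcomes = [random.gauss(POLLING_LEAD, UNCERTAINTY) for _ in range(RUNS)]

# Report the likelihood of a given result...
win_probability = sum(margin > 0 for margin in outcomes) / RUNS

# ...and a range of possible outcomes (middle 90 percent of runs).
ranked = sorted(outcomes)
low, high = ranked[RUNS // 20], ranked[-RUNS // 20]

print(f"chance of winning: {win_probability:.0%}")
print(f"90% of outcomes between {low:+.1f} and {high:+.1f} points")
```

The output makes the uncertainty explicit: instead of "the candidate will win by 2," the forecast says how often the candidate wins across thousands of simulated runs, and how wide the plausible margins are.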

He also urged listeners to develop an awareness of their own biases and how they might affect the interpretation of data. He cited research in which identical résumés bearing male and female names were sent to hiring managers, who tended to interpret the same information differently because of gender. “The male name is more likely to be called back, especially in male-dominated industries,” Silver said. Worse yet, “people who say they have no gender bias actually are more likely to have bias.”

And most important, perhaps, Silver extolled the importance of using “gut instinct” as a check against wrong interpretations of data. As an example, he described an Uber ride in Manhattan in which he got to his destination 40 minutes late because the driver insisted on following his GPS device’s instructions, which showed that the quickest route with the least traffic was on a street that cut through Central Park. “There was no traffic because the road was closed except during rush hour,” Silver explained.

It had never occurred to the driver how unlikely it was that “all 8 million New Yorkers are too dumb to know about this shortcut, but ‘my GPS found it,’” Silver said.