The photo at right was one of several that made the rounds on the Internet as Hurricane Sandy lashed the east coast on Monday and Tuesday of this week. It’s a powerful image. It’s also completely bogus: a two-year-old Photoshop mashup that took on new significance when no one had a clear picture of what was happening on the Atlantic seaboard. It was one of many false reports that circulated on social networks during the storm. And although the increasingly Twitter-dependent mainstream media didn’t circulate this photo, they reported their share of falsehoods.
We personally heard the CNN report of three feet of water in the New York Stock Exchange. In fact, live security camera feeds showed that the floor was dry. We also heard media reports that Con Edison had shut off power to all of Manhattan. Also not true. The Detroit Free Press rounds up some of the prominent rumors here.
Are these deceptions proof that citizen journalism sucks, that the ability to reach a global audience tempts people to spread falsehoods and make mischief?
We don’t think so. While social networks spread a lot of rumors during the storm, that’s nothing unique to the Web 2.0 age. Disasters always spawn speculation. Remember the reports of planes flying into buildings in Chicago and San Francisco on 9/11? The difference today is the speed at which falsehoods spread. But another important difference is the speed at which they’re dispelled.
We like John Herrman’s analysis on BuzzFeed. He notes that Twitter users were just as quick to disabuse each other of storm-related misinformation as to spread it in the first place. “Twitter is a fact-processing machine on a grand scale, propagating then destroying rumors at a neck-snapping pace,” he writes. “To dwell on the obnoxiousness of the noise is to miss the result: that we end up with more facts, sooner, with less ambiguity.”
Sites like Snopes.com and Wikipedia are effective at sifting fact from fiction. Although neither is under the same time pressure as CNN, in the long run they get it right. Electronic media are always under the gun during a news event, and have always been susceptible to reporting bad information. To their credit, the news networks are usually good about qualifying unconfirmed information as just that. Any experienced reader of blogs or social networks knows that fantastical claims shouldn’t be taken at face value. New media even have some fact-checking features built in. For example, The New York Times used geo-location to verify that eyewitness tweets were in fact from people who might reasonably be assumed to be eyewitnesses.
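The Times hasn’t published the details of how that geo-location check worked, but the general idea is simple enough to sketch. Here’s a minimal, hypothetical version in Python: treat the storm zone as a bounding box and accept only tweets whose geotag falls inside it. The coordinates, the `SANDY_ZONE` box, and the tweet format are all illustrative assumptions, not the Times’ actual pipeline.

```python
# Illustrative bounding box around the New York City area (lat/lon).
# A real newsroom tool would use a more precise geographic polygon.
SANDY_ZONE = {"lat_min": 40.4, "lat_max": 41.0,
              "lon_min": -74.3, "lon_max": -73.6}

def plausibly_eyewitness(tweet):
    """Return True if a tweet's geotag falls inside the affected zone.

    `tweet` is assumed to be a dict with an optional "coordinates"
    key holding a (latitude, longitude) pair.
    """
    coords = tweet.get("coordinates")
    if coords is None:  # no geotag: the claim can't be verified either way
        return False
    lat, lon = coords
    return (SANDY_ZONE["lat_min"] <= lat <= SANDY_ZONE["lat_max"] and
            SANDY_ZONE["lon_min"] <= lon <= SANDY_ZONE["lon_max"])

print(plausibly_eyewitness({"coordinates": (40.71, -74.01)}))  # lower Manhattan → True
print(plausibly_eyewitness({"coordinates": (41.88, -87.63)}))  # Chicago → False
```

Note that this only establishes plausibility, not truth: a geotag inside the zone means the author *could* be an eyewitness, which is exactly the hedge the Times applied.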
We think more information is always better than less, even if some of it is bad. As layoffs continue to hack away at mainstream media, those outlets continue to turn to citizens as front-line news sources. We don’t see that changing anytime soon. Rather, the tools for spotting bad information will mature and our bullshit detectors will become more refined.
Anyone watching the #Sandy or #Frankenstorm hashtags on Monday and Tuesday read amazing stories from people who were taking the storm head-on. Mobile social networks continue to deliver information from blacked-out areas that would otherwise have no outlet. The fact that some of that information is bad is the price we pay for having a First Amendment.
This entry was posted on Friday, November 2nd, 2012 at 8:21 am and is filed under Citizen Journalism, Future of Journalism, Journalism.