Social media rumors can sow unhelpful, and potentially very dangerous, havoc amid major crises, and governments need to create emergency response systems to swiftly dispel misinformation online, according to a new study, “Community Intelligence and Social Media Services: A Rumor Theoretic Analysis of Tweets During Social Crises,” published in the journal Management Information Systems Quarterly.
The study focused on the spread of rumors following several major emergencies of the social media age, including the 2008 terrorist attacks in India, the 2009-2010 Toyota recalls over faulty accelerator pedals, and a 2012 mass shooting that killed five people in Seattle. In each case, social media spread a variety of falsehoods: after the Indian terrorist attacks, for example, police were overwhelmed by social media reports of additional attacks that weren’t actually happening, and the BBC admitted to reporting incorrect information based on Twitter accounts.
Among other observations, the authors note that Twitter is now the favorite platform for eyewitness accounts of major events including disasters, terrorist attacks, and social upheavals, but those accounts can also include many erroneous or exaggerated reports. This is especially dangerous, the authors add, because people often turn to social media like Twitter to find out what is happening in their own vicinity, whereas they consult traditional media for the overall situation. In that context, alarming but untrue reports suggesting that users themselves may be in danger could obviously trigger mass panic.
Lead author Onook Oh, of the Warwick Business School, stated: “Emergency response teams need to put in place prompt emergency communication systems to refute the misinformation and provide citizens with timely, localized and correct information through multiple communication channels such as website links, social network websites, RSS, email, text message, radio, TV or retweets.” Furthermore, “In cases of community disasters, emergency responders need to make extra effort to distribute reliable information and, at the same time, control collective anxiety in the community to suppress the spreading of unintended rumor information. This includes the setting up of an ‘emergency communication center’ in the local community who would monitor social media very closely and respond rapidly to unverified and incorrect rumor information.”
The study amplifies earlier warnings from institutions including the World Economic Forum. In January 2013, the WEF report “Global Risks 2013” warned that the deliberate or accidental spread of misinformation, which it termed “digital wildfires,” could result in mass stock sell-offs as well as even more serious consequences, such as disorganized, panicked mass evacuations resulting in thousands of deaths.
Indeed, in August 2012 a false Twitter rumor that Syrian dictator Bashar al-Assad had been killed or injured drove crude oil futures on the New York Mercantile Exchange from $90.82 to $91.99. That same month, tens of thousands of people fled the northeastern Indian state of Assam after rumors of impending communal violence circulated on social media. And during Hurricane Sandy, a Twitter user, @ComfortablySmug, posted alarming misinformation that was picked up by mainstream news outlets.
On the positive side, technology may offer tools to combat social media rumors. In February I wrote about a “social media lie detector” under development at Britain’s University of Sheffield, where researchers set out to create an algorithm that can automatically analyze pieces of information in real time to determine whether they are true or false. The development team said this could allow journalists, governments, emergency services, health agencies and the private sector to respond more effectively to claims on social media, especially during emergencies such as civil disorder and epidemics, and around important events like elections.
The system will automatically categorize sources to assess their authority, distinguishing news outlets, journalists, experts, supposed eyewitnesses, members of the public and automated ‘bots’. Its algorithm will then examine an account’s history and background to determine whether it was created purely to spread false information. Finally, it will search for corroborating (or contradictory) information and analyze the structure of conversations across social networks to reach a final judgment of truth or falsehood.
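To make the general idea concrete, here is a deliberately simplified sketch of that kind of scoring pipeline. This is not the Sheffield team’s actual algorithm: the source categories, weights, thresholds, and field names below are all invented for illustration, showing only how source authority, account history, and corroboration might be combined into a single credibility score.

```python
# Toy rumor-credibility scorer. All categories, weights, and
# thresholds are invented for illustration; the real Sheffield
# system is far more sophisticated.

from dataclasses import dataclass

# Invented authority weights per source category
SOURCE_WEIGHTS = {
    "news_outlet": 0.9,
    "journalist": 0.8,
    "expert": 0.8,
    "eyewitness": 0.5,
    "public": 0.3,
    "bot": 0.0,
}

@dataclass
class Account:
    category: str   # one of the SOURCE_WEIGHTS keys
    age_days: int   # very new accounts are treated as suspect
    followers: int

@dataclass
class Claim:
    account: Account
    corroborating: int  # independent posts supporting the claim
    contradicting: int  # independent posts disputing it

def credibility(claim: Claim) -> float:
    """Combine source authority, account history, and corroboration
    into a rough 0..1 credibility score."""
    score = SOURCE_WEIGHTS.get(claim.account.category, 0.3)
    # Penalize brand-new accounts, which may exist only to spread rumors
    if claim.account.age_days < 7:
        score *= 0.5
    # Blend the source score with the balance of corroboration
    total = claim.corroborating + claim.contradicting
    if total:
        score = 0.5 * score + 0.5 * (claim.corroborating / total)
    return round(score, 3)

if __name__ == "__main__":
    rumor = Claim(Account("public", age_days=2, followers=10),
                  corroborating=1, contradicting=9)
    report = Claim(Account("news_outlet", age_days=3000, followers=50000),
                   corroborating=8, contradicting=1)
    print(credibility(rumor), credibility(report))  # rumor scores far lower
```

A production system would of course learn such weights from labeled data and add the conversation-structure analysis described above, but even this toy version shows why a brand-new anonymous account with many contradicting replies ends up near the bottom of the scale.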
Good point, as I experienced firsthand. In the days after Hurricane Irene devastated our rural communities in Ulster County and eastern Delaware County, NY, I took part in rescue and relief efforts. People meant well when they announced the availability of supplies, emergency shelter, medical attention and so on via Twitter, but the information often contained errors, contradicted official but slower-circulating announcements, or was missing details. That added confusion and delay when matching people in need to assistance. We largely solved the problem by verifying facts first, then designating a few people or locations as the central sources for sharing via Twitter and Facebook. By about day two or three of the relief effort, we had this part of the program working fairly well.