“Research has shown that the downside of powerful, centralized networks is their susceptibility to being subverted and exploited,” writes The Wall Street Journal’s Christopher Mims in a fascinating analysis of why social networks, which were supposed to challenge hierarchy, have reinforced it instead.
Delving into network theory, Mims explains why networks that start out with flat, distributed power structures ultimately become vertical hierarchies. That was true in the Bolshevik revolution of 1917, when a circle of insiders around Joseph Stalin created a hierarchy within the supposedly distributed network of citizens who had overthrown the Czar.
It was also true in the 16th century, when the printing press and Martin Luther’s vernacular versions of the Bible, rather than democratizing access to information, led to nearly 200 years of civil war. The impact of the internet has often been compared to that of Gutenberg’s invention.
“Even when networks aren’t architected for this kind of control, they tend to organize themselves in ways that lead to disproportionate influence by a handful of their members,” Mims writes. “When any new person or entity joins a network, it is likely to attach to the most visible hubs, making them even more influential.”
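The dynamic Mims describes — newcomers attaching to the most visible hubs — is known in network theory as preferential attachment. A minimal simulation (illustrative only; the node count and seed are arbitrary choices, not figures from the article) shows how a tiny minority of hubs comes to hold a wildly disproportionate share of connections:

```python
import random

def simulate_preferential_attachment(num_nodes, seed=0):
    """Grow a network where each new node links to an existing node
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    degrees = [1, 1]  # start with two connected nodes
    for _ in range(num_nodes - 2):
        # Pick an attachment target weighted by degree: the "rich get richer."
        target = rng.choices(range(len(degrees)), weights=degrees)[0]
        degrees[target] += 1
        degrees.append(1)  # the newcomer arrives with a single link
    return degrees

degrees = simulate_preferential_attachment(10_000)
top_10 = sorted(degrees, reverse=True)[:10]
share = sum(top_10) / sum(degrees)
print(f"Top 10 of 10,000 nodes hold {share:.1%} of all connections")
```

Even though every node joins by the same rule, the earliest and best-connected nodes snowball into hubs — the flat network organizes itself into a hierarchy.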
Facebook magnified this effect by designing its algorithms to optimize for engagement rather than for truth. Russia understood this and exploited it brilliantly to sow confusion and misinformation during the 2016 election.
ProPublica is fighting fire with fire. Co.Design reports on the work that a team at the nonprofit news organization has been doing to employ the tools of big data to see whether companies like Amazon and Facebook are living up to their own policies.
The team crowdsourced the process of identifying people who felt their free-speech rights had been violated by Facebook, or who had been denied information because of some arbitrary decision. Facebook publishes its censorship rules, but verifying compliance is nearly impossible. That is what the big data team at ProPublica figured out a way to do: it used a Facebook Messenger survey to gather input from the crowd and then combed through the most puzzling cases by hand. In the end, Facebook admitted that it had failed to follow its own policies in 22 of the examples members brought forward.
The ProPublica team’s next step will be to investigate how political ads work by using a browser plug-in that scrapes Facebook ads and analyzes them using machine learning. The team has already published some of its initial findings, including the fact that many political ads don’t carry the required disclaimers or candidate endorsements.
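One piece of that analysis — checking whether a political ad carries the required “Paid for by” disclaimer — can be sketched in a few lines. ProPublica’s actual pipeline, data fields, and classifier are not public here; everything below (the `is_political` flag, the sample ads, the regex) is a hypothetical illustration of the compliance check, not their code:

```python
import re

# The required disclosure line; real election rules are more nuanced than
# a single regex, so treat this as a toy approximation.
DISCLAIMER = re.compile(r"paid for by", re.IGNORECASE)

def missing_disclaimer(ads):
    """Return the political ads whose text lacks a 'Paid for by' line."""
    return [ad for ad in ads
            if ad["is_political"] and not DISCLAIMER.search(ad["text"])]

# Made-up sample ads standing in for scraped data.
sample_ads = [
    {"text": "Vote Smith for Senate. Paid for by Smith for Senate.",
     "is_political": True},
    {"text": "Vote Jones on Tuesday!", "is_political": True},
    {"text": "50% off running shoes this weekend.", "is_political": False},
]

flagged = missing_disclaimer(sample_ads)
print(len(flagged), "political ad(s) missing a disclaimer")
```

In ProPublica’s version, the machine-learning step does the work the hand-set `is_political` flag does here: deciding which scraped ads are political in the first place, so the disclaimer check can be run at scale.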
Image: Wikimedia Commons
This entry was posted on Tuesday, February 20th, 2018 at 12:13 pm and is filed under Facebook.