
Often dismissed by younger generations as a network primarily for older users, many of whom use it only as a tool to keep in contact with others, Facebook has taken the spotlight in recent political affairs, often for the wrong reasons. The social network, founded in 2004 by Mark Zuckerberg, has been accused of acting as an echo chamber for groups spreading false information around the web. Though many fear that false news has influenced recent political events, such as the US election, no conclusive evidence has been presented to suggest that it decisively changed any outcome. Its impact can certainly be felt, however; many of these fake stories have received far more engagement on social media platforms than their truthful counterparts.

Not only that, but the addition of more ways to share life events on Facebook, such as Facebook Live, has given anyone with a message a platform to reach others. Unfortunately, this includes individuals looking to spread hate speech. While Facebook has a reporting system in place for cracking down on problematic content, its response is often slow.

This has led to incidents in which graphic or inappropriate content has remained on the platform long enough to go viral. Take, for instance, the murder of Robert Godwin Sr. in April, in which a man uploaded a video stating his intent to kill, followed several minutes later by footage of the act itself and five minutes of live video afterwards. Two hours after the event, the account had been reported and suspended, but the damage had already been done.

All of these concerns raise an important question: to what extent should technology companies, particularly social media platforms, police content on the tools they create? The difficulty lies in finding a balance between removing problematic content and censoring free speech. Unfortunately, there is no single way to address the myriad issues with Facebook’s content, so the company is experimenting with several tactics.

One of the most prominent was the company’s recent release of a list of tips intended to help users educate themselves and spot false news stories. The list includes advice to examine web addresses, investigate the outlet running the story, and check other sources. In the UK, ahead of an upcoming general election, Facebook ran the list as a print advertisement in several prominent newspapers.

With the list comes a promise from Facebook to limit the spread of these stories. To that end, the company has worked to target both the stories themselves and the accounts set up for the sole purpose of propagating them across the network. In particular, Facebook’s algorithms are on the lookout for repeatedly posted content and spikes in messaging activity.
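To illustrate the kind of signals such a system might look for, here is a minimal sketch in Python of duplicate-content and posting-spike detection. The thresholds, function names, and hashing approach are illustrative assumptions, not Facebook’s actual implementation.

```python
import hashlib
import time
from collections import defaultdict, deque

# Illustrative thresholds; the real limits are not public.
DUPLICATE_LIMIT = 5        # identical posts allowed from one account
SPIKE_WINDOW_SECONDS = 60  # window for measuring posting bursts
SPIKE_LIMIT = 20           # posts per window before an account is flagged

post_counts = defaultdict(int)     # (account id, content hash) -> times seen
recent_posts = defaultdict(deque)  # account id -> timestamps of recent posts

def normalize(text):
    """Collapse case and whitespace so trivially edited copies hash the same."""
    return " ".join(text.lower().split())

def is_suspicious(account_id, text):
    """Return True if a post looks like spam-style amplification."""
    now = time.time()
    digest = hashlib.sha256(normalize(text).encode()).hexdigest()

    # Repeatedly posted content: the same normalized text from the same account.
    post_counts[(account_id, digest)] += 1
    if post_counts[(account_id, digest)] > DUPLICATE_LIMIT:
        return True

    # Messaging spike: too many posts from one account in a short window.
    window = recent_posts[account_id]
    window.append(now)
    while window and now - window[0] > SPIKE_WINDOW_SECONDS:
        window.popleft()
    return len(window) > SPIKE_LIMIT
```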

Facebook has also used artificial intelligence to preemptively censor obscene photos and to scan posts for content indicating that the poster may harm themselves or somebody around them. However, its AI is not yet capable of sifting through millions of posts and accurately picking out the ones that might be a problem. For that, a more human touch is necessary.
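A drastically simplified picture of why human review is still needed: a naive automated pre-screen can only route posts that match crude patterns to a person, and it misses context and intent entirely. The patterns and names below are hypothetical, not anything Facebook has disclosed.

```python
import re

# Hypothetical patterns; a production system would use a trained model, not a keyword list.
AT_RISK_PATTERNS = [
    re.compile(r"\b(?:want|going) to (?:hurt|kill) myself\b", re.IGNORECASE),
    re.compile(r"\bno reason to (?:live|go on)\b", re.IGNORECASE),
]

def flag_for_human_review(post_text):
    """Escalate a post to a human reviewer if it matches any at-risk pattern.
    This sketch only escalates; judging intent is left to the reviewer."""
    return any(pattern.search(post_text) for pattern in AT_RISK_PATTERNS)
```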

In order to help monitor Facebook activity and increase response time to problem content, Facebook is hiring 3,000 individuals to work on the company’s community operations team. The company has declined to give specifics about where the hires will be based and whether they will be employees or contractors. This is a huge announcement for Facebook, and Mark Zuckerberg has stated that the purpose of this influx of labor is to “help [them] get better at removing things [they] don’t allow on Facebook like hate speech and child exploitation.”

Zuckerberg went on to state his intention of working with law enforcement to respond to situations in which an individual might bring harm to themselves or others. This process isn’t completely reliant on the trained eyes of professionals; existing patterns in content allow posts to be automatically flagged for review by Facebook workers. Some reviewers with expertise in certain languages and cultures may field content specific to their knowledge base. The idea is that human reviewers can determine intent and meaning in a way that AI cannot. Machine learning is still a developing field, and it will take much more time for anything Facebook produces to match the judgment of a person.
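The routing of flagged posts to reviewers with the right background could look something like the sketch below, where posts are placed in a queue matched to their detected language. The queue names and the language-detection step are assumptions made purely for illustration.

```python
from collections import defaultdict

# Hypothetical specialist pools keyed by ISO language code.
SPECIALIST_QUEUES = {"en": "english_reviewers", "de": "german_reviewers", "ar": "arabic_reviewers"}
review_queues = defaultdict(list)

def route_flagged_post(post_id, detected_language):
    """Send a flagged post to reviewers familiar with its language and cultural context,
    falling back to a general queue when no specialist pool exists."""
    queue = SPECIALIST_QUEUES.get(detected_language, "general_reviewers")
    review_queues[queue].append(post_id)
    return queue
```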