A little over a year ago, the term ‘fake news’ didn’t really mean much to anyone. Now it’s often cited as one of the greatest threats to modern democracy and free debate. Collins Dictionary even crowned it their 2017 Word of the Year, and it isn’t even a single word. There’s no hiding from the fact that fake news is big news.
From people inventing restaurants that top the charts on TripAdvisor, to false political stories designed to provoke and alter election outcomes, we live in an age that’s governed by information – or, rather, misinformation.
Sorting the wheat from the chaff
Before Facebook and X, there was the coffee house. Back in the 17th century, these watering holes became forums for political rumour and discourse, and Charles II even issued a proclamation to halt the spreading of ‘false news’. More than three centuries later, much like Charles II, we’re all facing some very tough decisions about how to clamp down on rumour and slander. But instead of policing coffee shops, we now have to police the World Wide Web.
Or do we?
Freedom of speech is the very cornerstone of democracy, and who is to say what is fake and what is not? Who should be the Editor-in-Chief of the Internet? Under normal circumstances, we’d warn against any kind of filtering of information, but these are unique times. Public opinion can be swayed dramatically by the sharing of fake news, and elections can be won or lost based on which side is willing to troll the hardest.
It’s clear that something needs to be done, but who should we hand the keys to?
With social media comes social responsibility
It probably won’t surprise you to learn that nearly half of all fake news is spread through social media. Social giants such as Facebook and X have come under increasing pressure to do something about it, but how are they supposed to moderate billions of posts every day? What’s more, how do they do it in a way that doesn’t infringe on our privacy or freedom of speech? Over time, such a system could be corrupted and become the very problem it set out to solve.
Nevertheless, the social behemoths are aware that they have a role to play and many of them are trialling various ideas at this very moment. Below are just a few of them.
YouTube to enforce stricter advertising rules
Advertising on YouTube has always been easy and relatively unmoderated. However, Google have pledged to make sure that every single video in their premium advertising programme is manually reviewed, to reassure advertisers that their ads won’t appear alongside inappropriate content. It doesn’t tackle fake news directly, but it does give businesses more control over the kind of content they want to be associated with. Meanwhile, ‘fake news’ propagators will find it harder to monetise their content, and the thresholds for getting into advertiser programmes will be far higher.
Facebook offer context to fight fake news
This is an interesting one. Rather than decide what’s fake and what’s not and put themselves in the firing line, Facebook are starting to surface more information about the publisher alongside each article. According to a recent blog post, users will start seeing an ‘i’ icon near the headline of an article, which gives them information about the publisher’s Facebook and Wikipedia pages, as well as a ‘related articles’ feed and stats on how other users are sharing the story.
This is a good move. Facebook aren’t trying to play editor here; they’re simply giving users more information and more power to make up their own minds.
Snapchat uses machine learning to pull out the weeds
Snapchat may have started off as a lowly video-sharing app, but since it added the ‘stories’ feature and allowed publishers to post content, it’s become a hot destination for fake news propagators. Their approach to rooting out misinformation is more pragmatic than most: they’re going to use machine learning algorithms to gather data on what people enjoy reading and serve up more of it. On the one hand, this means users get useful and relevant content instead of simply what’s ‘voted up’ through the ranks; on the other, it could turn the platform into something of an echo chamber and limit its usefulness. It’s similar to what Netflix does, but one is an entertainment streaming service and the other is effectively a news outlet.
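Snapchat haven’t published the details of how their algorithms work, but the underlying idea, ranking content by how well it matches a user’s reading history rather than by raw share counts, can be illustrated with a toy sketch. The snippet below is entirely hypothetical (invented article titles, a deliberately simple TF-IDF similarity model, not Snapchat’s actual system), but it shows both the appeal and the echo-chamber risk: only content that resembles what you already read scores highly.

```python
# Toy sketch (not Snapchat's system): rank candidate articles by similarity
# to content a user has previously engaged with, using TF-IDF text features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical engagement history: articles this user read and enjoyed
liked_articles = [
    "local council approves new cycling infrastructure budget",
    "city transport plan focuses on cycling and public transit",
]

# New candidate articles the platform could serve next
candidates = [
    "ten shocking secrets doctors don't want you to know",  # clickbait-style
    "regional government expands bike lane funding",        # close to history
    "celebrity feud erupts over awards ceremony seating",
]

# Build a shared vocabulary, then vectorise history and candidates together
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(liked_articles + candidates)
history_vecs = matrix[: len(liked_articles)]
candidate_vecs = matrix[len(liked_articles):]

# Score each candidate by its best similarity to anything the user engaged with
scores = cosine_similarity(candidate_vecs, history_vecs).max(axis=1)

# Serve candidates in order of predicted interest rather than raw virality
for score, title in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {title}")
```

Run it and the bike-lane story comes out on top while the clickbait sinks, which is the point, but notice that anything outside the user’s existing interests scores zero too, and that is exactly how an echo chamber starts.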
The impact of fake news on brands and their loyal customers
Fake news is a problem, and one we’re yet to solve. While the social media giants are taking steps to police content and educate their users, there will be inevitable blowback on businesses and brands trying to market themselves online. Advertisers will need to be more selective about where they advertise, and users will take longer to trust news stories – even from brands they already have a relationship with.
Clarity and transparency will be invaluable moving forward, and all businesses should be careful when it comes to positioning their own content and sharing it online. Don’t hit share mindlessly, and don’t retweet without first checking your sources. In other words, don’t do a Trump.