In a Post Truth Era
We live in an age where fake news is prevalent and deep-fake generation is on the rise. For this reason, we need to find new and effective ways to design against misinformation. Upcoming events such as the 2020 US election highlight the growing importance of this way of design thinking.
In 2016, the effect fake news can have on society became apparent with the outcome of that year's presidential election and the inauguration of President Trump. That a country's leader could be elected on votes swayed by the spread of misinformation is truly frightening, and it highlights the potential dangers of fake news.
Today, companies around the world are investing in ways to keep 'true' information flowing through the internet. This is being approached by first mitigating the creation of fake news, then defining the design criteria that distinguish real information.
Platform Crackdown on Fake News
From businesses earning ad revenue on 'fake news' to online trolls wanting to spread hate, news can now be generated from virtually anywhere via social media and news platforms. These platforms grant users the ability to inform millions of people, and as a result they also have the power to shape the mentality of society. In this connected world, developers must implement safeguards that protect audiences from misinformation and ensure truthful information is being shared. Let's take a deeper dive into how some companies are designing ways to crack down on the spread of fake news.
On 5 February, Twitter announced a new rule for its users: they cannot 'deceptively share synthetic or manipulated media that is likely to cause harm'. As part of Twitter's fight against fake news, the rule aims to inform its audience more effectively; manipulated media will now appear flagged with a 'manipulated media' tag.
In addition, users will be alerted about some of their actions in the app, receiving warnings when they share or like Tweets containing 'manipulated media'.
Facebook is also cracking down on fake news following the recent controversy around manipulated videos on its platform. In a recent statement, Facebook said it would remove content that has been edited "in ways that aren't apparent to an average person and would likely mislead someone."
Adobe has partnered with the University of California, Berkeley to develop a new tool that detects facial manipulation in images. The company stated in a blog post that this effort aims "to help verify the authenticity of digital media created with [their] products and to identify and discourage misuse."
Instagram has already designed a way to reduce the spread of fake news: it now works alongside third-party fact-checkers around the world. As on Twitter, identified posts are labelled with 'false information' tags and become harder to find via Instagram's search tools.
In our post-truth world, where digital information has become harder to trust, companies have designed different ways to moderate and authenticate information. These methods range from content removal to fact-checking to software detection. Whilst each approach is a positive step towards reducing misinformation, tougher questions remain, such as what information can be considered 'true' in the first place.
It will be interesting to observe how online content is moderated in 2020.