Not everything you see online is true, and nowhere is that more evident than on social media platforms such as Facebook. Users share millions of photos, videos, and other pieces of content on Facebook every day. At that volume, vetting the quality of uploaded content is a daunting task. As a result, Facebook is rife with deepfakes and other forms of manipulated media. With the 2020 US presidential election on the horizon, the company now wants to change that.
Forms of content manipulation
Content manipulation on the internet is far more common than we think. Most of it is harmless and can include activities such as improving the quality of a photo or video. But some people actively work to mislead others by manipulating content to make it appear to be something it isn't. They use a variety of tools, ranging from simple photo-editing apps to complex deep learning techniques. Use of the latter is now on the rise: people use AI tools to create fake videos designed to misguide the public, commonly known as "deepfakes". While deepfakes aren't yet widespread, the rate at which they are growing is alarming.
Facebook and deepfakes
Facebook wants to keep such unethical activities in check. It has begun its own investigations into deceptive behavior on the platform, starting with taking down fake accounts. It will also partner with various authorities to expose the people behind such misleading acts. Facebook is already discussing its policy development and manipulation-detection tools with more than 50 global experts.
[Embedded Instagram post: 'I wish I could…' (2019), an AI-generated deepfake video in which Mark Zuckerberg, founder of @facebook, reveals the 'truth' about privacy. The work is part of 'Spectre', a series by Bill Posters and @danyelhau exploring the digital influence industry, technology, and democracy.]
A snippet from an official blog post by Monika Bickert, the company's Vice President of Global Policy Management, reads:
we will remove misleading manipulated media if it meets the following criteria:
- It is edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
However, the policy does not apply to parody or satire.
Facebook’s Community Standards
Facebook will also remove content that violates the company's Community Standards. Content that doesn't qualify for removal under these standards can still be reviewed by an independent panel of over 50 fact-checkers. If the panel rates a piece of content false, Facebook will reduce its distribution in News Feed and warn people when they see or try to share it.
Deepfake Detection Challenge
In an attempt to battle deepfakes, Facebook launched the Deepfake Detection Challenge last September. The challenge invites people from around the world to develop systems capable of detecting deepfakes and other forms of manipulated media, backed by $10 million in grants.