Massively popular photo-sharing social media platform Instagram is the latest front in the perennial fight against fake news. With deepfakes and doctored images making the uphill battle to tell truth from forgery ever steeper, top technology companies have made a spate of announcements over the past few years.
Instagram’s new initiative blurs out flagged images behind the cautionary message “False Information/Reviewed by independent fact-checkers”, followed by the option to “See Why” or to ignore the warning and “See Post” anyway.
Tapping “See Why” reveals the independent fact-checker, the conclusion of the fact check, and more information on why the image was flagged.
This new Instagram policy was first reported by PetaPixel via photographer Toby Harriman. The warning appears to be in the initial phases of its roll-out and is only visible to some users in some regions; Singapore-based VR Zone staff were unable to replicate it.
It seems likely that this initiative could roll out to other Facebook-owned properties, including Facebook itself and even Messenger. It is also unknown whether the policy will eventually hide Photoshop-based digital art and other hyperrealistic art built on manipulated photographs.
Instagram isn’t alone in this stand against misinformation. At Adobe MAX 2019, the software behemoth announced the Content Authenticity Initiative alongside short-form social media platform Twitter and news publication The New York Times to combat fake news. The partnership is intended to boost viewer confidence in published material by giving sources the opportunity to be verified.
Also showcased at Adobe’s signature Sneaks session was an in-development tool that can detect software-based manipulation of photos. Named Project About Face, it examines images at the pixel level to detect anomalies and produces a percentage score indicating the probability that the photo has been doctored.