Could deepfake technology impact sustainability efforts?
What’s happening? Microsoft’s M12 venture capital arm has invested in photo and video verification company Truepic to tackle the growing threat of deepfake images, leading Truepic’s Series B funding round. The company’s digital inspection platform, Vision, collates high-integrity data for each file it checks, analyses the file for signs of manipulation, applies cryptographic hashing and seals the result into a chain of custody through a stress-testing process. Any sign of manipulation means the image is flagged. Truepic also offers companies a library of verified images. (techradar)
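The idea of hashing a file and sealing the result into a chain of custody can be illustrated with a minimal sketch. This is an assumption-laden toy, not Truepic’s actual scheme (which is not public): it uses SHA-256 and links each entry to the previous one, so altering any earlier file breaks every later link.

```python
import hashlib

GENESIS = "0" * 64  # placeholder root for an empty chain (illustrative choice)

def file_fingerprint(data: bytes) -> str:
    """Cryptographic hash of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def seal_into_chain(chain: list[str], data: bytes) -> str:
    """Append a new custody entry that binds this file's hash
    to the previous entry, so later tampering is detectable."""
    prev = chain[-1] if chain else GENESIS
    entry = hashlib.sha256((prev + file_fingerprint(data)).encode()).hexdigest()
    chain.append(entry)
    return entry
```

Because each entry depends on the one before it, re-verifying the chain against the stored files exposes any single altered image.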
Why does this matter? Reports suggest that deepfake technology is no longer being used just to impersonate famous faces but is also being programmed to alter satellite-based geographical images.
Satellites are increasingly being deployed to provide verification for businesses and governments, particularly for environmental purposes, allowing entities to monitor issues ranging from palm oil plantations to methane leaks and even forced labour operations. The ability to manipulate these images could therefore undermine global efforts to validate sustainable practices.
As climate misinformation continues to surge online, the inclusion of false images could also be used to strengthen denial and peddle accusations of “fake news”. What better way to claim a climate event wasn’t as severe or that it didn’t happen than by generating a faked satellite image?
Spotting a deepfake – Both Microsoft and Facebook are developing solutions for detecting deepfake manipulation by checking whether an image’s digital fingerprint has been altered. For faces, AI’s inability to correctly render circular pupils can also expose a fake, as the programme often produces jagged edges within the eyes.
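The fingerprint-comparison idea reduces to a simple check: recompute the hash of the file you received and compare it with the fingerprint recorded at capture time. A minimal sketch, again assuming SHA-256 rather than any vendor’s proprietary fingerprint:

```python
import hashlib

def is_tampered(recorded_fingerprint: str, current_bytes: bytes) -> bool:
    """True if the file's current hash no longer matches the
    fingerprint recorded when the image was originally verified."""
    return hashlib.sha256(current_bytes).hexdigest() != recorded_fingerprint
```

Note this only proves the bytes changed, not how: a single recompression flips the fingerprint just as surely as a deepfake edit, which is why real systems pair hashing with manipulation analysis.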
Can deepfakes be used positively? There are potentially some areas where deepfakes could contribute in a beneficial way. In health care, the accumulation of patient data, for example, is being considered as a foundation for designing virtual patients in which AI could be applied to test new methods for diagnosing or monitoring diseases.
Alternatively, EY has enlisted UK start-up Synthesia to create virtual doubles of employees to invigorate corporate pitches and presentations. EY claims the AI makes client meetings more engaging, and it allows employees who look after several clients to send personalised videos despite time pressures.
Worth noting – It’s evident deepfakes are no longer just a gimmick. Not to be too alarming, but computer researchers at UCL have identified deepfake technology as the AI-based crime with the potential to cause the most harm over the next 15 years.
If more techniques can be developed to help identify an AI-generated image, then the spread of misinformation could be limited. Unfortunately, not everyone waits for the facts and so the damage could almost be done as soon as the deepfake emerges online.