The rise of deep fakes in crypto crime

Tara Annison
6 min read · Jan 26, 2024

Sumsub and CryptoUK's "State of Verification and Monitoring in the Crypto Industry 2023" report studied over 800,000 fraud events and found that 70% of companies noted an increased use of deep fakes by crypto fraudsters. In fact, deep fakes were named the threat technology of the year.

From a digital identity perspective, deep fakes are the criminal upgrade to the rubber masks and photo-editing software previously used to fool verification processes during customer onboarding. AI is now being put to work generating fake profiles, passing liveness checks at onboarding, duping employers in interviews and fooling legacy KYC platforms which haven't kept pace with the speed of innovation.

The report outlined how AI and deep fakes are being used to create customer profiles, quoting one respondent: "I have seen 2 profiles which were made with the help of AI. An additional verification (manual) is required. The AI made all the smurfing in a way that it was difficult to recognise by software. The human effort should be very attentive to details. I used Google reverse image and tineye to detect the inappropriate/fraudulent act."

One important element within this case study is the use of AI to create profiles which can run undetected by existing fraud detection software — it’s the robots vs the robots!
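For the curious, here's a minimal sketch of what the automated side of that fight can look like. It uses the open-source ImageHash library to flag onboarding photos that are perceptually close to previously flagged images, mirroring the respondent's reverse-image-search workflow; the file paths and distance threshold are illustrative, not from the report:

```python
# pip install pillow imagehash
# A minimal sketch: flag onboarding photos that are near-duplicates of
# images already seen (e.g. known AI-generated faces), using a perceptual
# hash. This mirrors the manual reverse-image-search workflow described
# above, not any specific vendor's detection pipeline.
from PIL import Image
import imagehash

# Hypothetical corpus of previously flagged profile photos.
KNOWN_FAKE_HASHES = {
    imagehash.phash(Image.open(path))
    for path in ["flagged/fake_profile_1.png", "flagged/fake_profile_2.png"]
}

def looks_like_known_fake(photo_path: str, max_distance: int = 8) -> bool:
    """Return True if the photo is perceptually close to a flagged image."""
    candidate = imagehash.phash(Image.open(photo_path))
    # Hamming distance between 64-bit pHashes; a small distance means the
    # image survived resizing/re-compression but is essentially the same.
    return any(candidate - known <= max_distance for known in KNOWN_FAKE_HASHES)

if looks_like_known_fake("onboarding/new_customer.png"):
    print("Escalate to manual review")
```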

However, back in August 2023 the Google DeepMind team introduced SynthID, a way to invisibly watermark AI images so that even editing the image doesn't destroy the watermark. The idea is that AI detection tools can then scan the pixels of an image to spot the watermark and confirm it is AI-created. SynthID was rolled out in various Google products, but the team have been tight-lipped about its inner workings and deployment, likely to avoid tipping off malicious actors who would look for ways around the watermark. They hope it can become an internet standard and a way to fight AI-created imagery being used to dupe the public and spread misinformation.
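Since Google hasn't published how SynthID actually works, the following is only a toy illustration of the general idea of pixel-level watermarking: a naive least-significant-bit embed. Unlike SynthID, this wouldn't survive cropping or re-compression, which is exactly the hard problem SynthID claims to solve:

```python
# A toy illustration of pixel-level watermarking (NOT how SynthID works:
# its design is unpublished, and unlike this naive least-significant-bit
# scheme it reportedly survives edits such as cropping and re-compression).
import numpy as np
from PIL import Image

WATERMARK = 0b1  # 1-bit "this is AI-generated" flag per pixel

def embed(img: Image.Image) -> Image.Image:
    """Set the least significant bit of every red-channel value."""
    pixels = np.array(img.convert("RGB"))
    pixels[..., 0] = (pixels[..., 0] & 0xFE) | WATERMARK
    return Image.fromarray(pixels)

def detect(img: Image.Image) -> bool:
    """Report whether nearly all red-channel LSBs carry the flag."""
    pixels = np.array(img.convert("RGB"))
    return (pixels[..., 0] & 1).mean() > 0.95

# Hypothetical filenames; PNG is used because lossless saving preserves LSBs.
marked = embed(Image.open("generated.png"))
marked.save("generated_marked.png")
print(detect(Image.open("generated_marked.png")))  # True
```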

In line with this, there are also tools such as GPTZero, which help identify whether text was written by an AI or a human. However, OpenAI's own effort to bring a detection tool to market was shut down just six months after its announcement due to the low accuracy of its results.
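GPTZero's scoring is proprietary, but one signal it has publicly described is perplexity: how predictable a language model finds the text. Here's a rough sketch of that idea using GPT-2 via the Hugging Face transformers library; the thresholding question is deliberately left open, since calibrating it is exactly where these tools struggle:

```python
# pip install torch transformers
# A rough sketch of perplexity-based detection (one ingredient GPTZero has
# publicly described, not its actual implementation): text a language model
# finds "too predictable" is weak evidence of machine authorship.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return its own next-token loss.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Lower perplexity = more predictable text. Any cut-off you pick here is
# illustrative, not calibrated, which is why accuracy is so hard to get right.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```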

Being able to spot AI-generated video and audio content is also proving a challenge, with UCL research showing that participants were wrong over 25% of the time when trying to detect deep-faked audio. You can test this out yourself at https://deepfake-demo.aisec.fraunhofer.de/ (I was MUCH worse than the robot!). There are also recent examples, such as the deep fake videos of President Zelensky and Bella Hadid, which show how convincing the deep fake versions can look.

So how could these deep fake videos be used by illicit actors?

Identity Fraud

As noted at the top of this piece, deep fakes are being used to create fake profiles for service onboarding. These can then be used to launder money or commit other illicit acts, and they leave a dead end for investigators, since the real person behind the identity has no connection to the impersonator and likely no knowledge of their actions. In the world of crypto, where criminals often use exchanges to cash out after illicit activity, they may use deep fake profiles to onboard and try to evade investigation after the event.

Misinformation

With both the UK and the US heading into election season, it's likely that deep fake videos and AI-created information will be spread by actors seeking to disrupt the fair running of democratic processes and spread fake news. We may see videos of politicians saying things they never actually said, fake polling results and other voter-swaying content. We may even see politicians claim they didn't say something they actually did, with the rise of deep fakes casting doubt on reality itself. In the cryptosphere, it feels like only a matter of time before we see deep fakes of Gary Gensler making crypto regulatory comments, and I've seen my fair share of Elon Musk deep fakes promoting various crypto projects. Such videos could move markets and be used to dupe investors.

Phishing

Faked voice clips may be used to extort money from victims or phish employees. I've had countless emails and even WhatsApp messages from people claiming to be senior management at companies I've worked for, all trying to get me to buy gift cards or send them funds. These are unconvincing even with the use of their photo or a spoofed email address; however, with voice-mimicking software it could become much harder to tell real from fake if it genuinely sounds like them. One scary case study of AI voice mimicking came in April 2023, when a mother received a voice note supposedly from her daughter, who was in trouble and needed money sent to her. Only it wasn't actually her daughter but an AI-generated voice note intended to fool (https://www.youtube.com/watch?v=djSCxz_QfIE). It's likely we'll see an increasing number of scary cases like this where deep fake audio and video are used, especially in the crypto realm, where gaining access via employees could result in the loss of private keys and client funds.

Privacy Violations

In October 2023, news reports outlined how 20 girls from the Spanish town of Almendralejo had seen deep fake naked photos of themselves spread online. The faked images had been created in part from the girls' social media photos (in which they were clothed) and sparked discussion about which online or real-world laws had actually been broken. Similar tactics could be used by crypto criminals to extort crypto from victims, an upgrade on the classic sextortion scam, which usually arrives as an email claiming that software on your device has recorded you watching explicit material and that a video of this will be sent to your address book. In a deep fake world, however, no recording software is required: a deep fake video of you saying or doing anything could be used to blackmail you into sending crypto.

So how can we better protect against deep fakes?

Continued innovation in detection tools will surely play the most significant role; however, individuals can also stay alert when assessing information, videos and images. Hands and eyes often appear distorted in AI-created imagery and video, and many AI detection companies recommend asking people to turn to a side profile on live video calls, as AI-generated video often has a blind spot there (a simple version of this check is sketched below).
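As a rough illustration of that side-profile heuristic, here's a sketch using OpenCV's bundled Haar cascades. Real liveness products use far more robust models, so treat this as the shape of the challenge/response idea rather than a working defence:

```python
# pip install opencv-python
# A minimal sketch of the "turn to the side" liveness heuristic using
# OpenCV's stock Haar cascades. Production liveness checks use much more
# robust models; this only illustrates the challenge/response idea.
import cv2

frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

def passed_side_profile_challenge(frame_front, frame_side) -> bool:
    """Expect a frontal face in the first frame and a profile in the second."""
    g1 = cv2.cvtColor(frame_front, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_side, cv2.COLOR_BGR2GRAY)
    saw_front = len(frontal.detectMultiScale(g1, 1.1, 5)) > 0
    saw_side = len(profile.detectMultiScale(g2, 1.1, 5)) > 0
    # A generated face that never resolves into a clean profile view fails
    # the challenge and should be escalated to manual review.
    return saw_front and saw_side

# Hypothetical usage with two captured frames:
# ok = passed_side_profile_challenge(cv2.imread("front.jpg"), cv2.imread("side.jpg"))
```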

You can test your own skill at spotting deep fakes, and contribute to research by the Kellogg School of Management at Northwestern University, at: https://detectfakes.kellogg.northwestern.edu/

There is also innovation happening at the hardware level, with Sony recently announcing a digital "birth certificate" for images captured on its devices. This confirms the origin of the content and can help prove authenticity, both for photographs being sold and for images used for identification purposes which may have been altered from the original. You can read more about the announcement here: https://techcrunch.com/2024/01/08/sony-digital-birth-certificate/
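Sony hasn't published the implementation details, but conceptually such a birth certificate works like a digital signature issued at capture time: the device signs the image bytes, and anyone can later verify the signature against a trusted public key. A minimal sketch of that concept with the Python cryptography library follows; the keys, filenames and workflow are hypothetical, not Sony's actual scheme or the related C2PA standard:

```python
# pip install cryptography
# A conceptual sketch of an in-camera "birth certificate": the device signs
# the image bytes at capture, and a verifier later checks the signature
# against the device's public key. All names here are hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Inside the camera: a device-specific signing key.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

image_bytes = open("photo.jpg", "rb").read()
signature = device_key.sign(image_bytes)  # issued at capture time

# Later, a verifier checks the image hasn't been altered since capture.
try:
    public_key.verify(signature, image_bytes)
    print("Authentic: matches the capture-time signature")
except InvalidSignature:
    print("Altered since capture, or not from this device")
```

Any edit to the image bytes, however small, invalidates the signature, which is what makes the approach useful both for photo marketplaces and for KYC platforms checking whether an ID photo has been tampered with.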

Originally published at https://www.linkedin.com.
