Deepfake fraud: protecting businesses with orchestrated identity verification
Security is about risk management, but managing that risk is a challenge. Now that deepfake identity is a reality, how can a business protect itself from deepfake fraud while keeping its customer experience excellent?
Deepfake fraud
Deepfake technology, powered by GenAI, adds a new dimension to synthetic identity. Even before AI-enhanced fake ID, synthetic identity fraud was costing banking dearly, around $6 billion in 2023. Research shows that over half a million video and voice deepfakes were shared on social media sites in 2023. A new era of automated synthetic identity is afoot. This proliferation of deepfakes has been driven by ‘cheapfakes’: for just a few dollars, anyone can spin up a convincing AI-faked voice or video. Deepfakes are a fraudster’s dream, usable for scams from sextortion to fake reviews and, of course, fake identity credentials. The last of these will bring chaos to online life unless systems are implemented to mitigate the risk of deepfake ID.
How are deepfakes used to trick identity-based services?
Identity verification is typically a static proposition. For example, if you wish to use government services, you will likely be asked to provide a passport or driver’s license when setting up an account. Banks too, particularly neobanks, use remote identity verification, which has become standard since Covid-19 made face-to-face (F2F) identity checks challenging. These remote checks revolve around identity documents, sometimes supplemented by checks against credit reference agency (CRA) databases. Liveness checks are sometimes requested as well, but deepfakes have put paid to this additional safeguard. “Liveness bypass” uses face spoofing to trick facial recognition, hijacking the camera feed and injecting deepfake video. Alternatively, fraudsters compromise the server and modify or swap stored biometric data: fraudsters are masters of manipulating both human and digital targets. If there is a way around a barrier, they will find it.
Next-gen automated ID fraud exploits the rise of remote, document-based ID checks. GenAI service sites such as OnlyFake offer Fraud-as-a-Service in an effort to “democratise fraud.” An investigation by reporter Joseph Cox of 404 Media demonstrated how cheaply, quickly, and easily sites like OnlyFake can create the spoof ID documents needed to set up online accounts.
The automation of synthetic identity fraud is escalating the war of attrition between fraudsters and the rest of the online world. Counteracting AI-enabled fraud, however, is not easy. Many say to fight fire with fire, turning AI on itself with AI-enabled anti-fraud solutions, but this is only part of the answer. They say it takes a village to raise a child; well, it takes a connected world of solutions to put deepfake ID back in its box. Step-up, or risk-based, verification is a way to meet AI-enabled fraud head-on while maintaining a consumer-driven identity system.
How can orchestrated identity verification help reduce the threat of deepfake identity?
Orchestrated identity verification uses risk-based verification (RBV), analogous to risk-based authentication. In the latter case, rules drive the level of authentication required to access an account. For example, if you log in from an unrecognised location, a risk-based approach may ask for an additional credential before login can proceed. Other risk-based sign-in methods may involve behavioural biometrics. RBV works the same way, but orchestration takes it further, helping to balance security and usability.
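As a minimal sketch of the idea, the rule below decides which credentials to request for a login attempt. The signal names, thresholds, and factor names are illustrative assumptions, not a real product API:

```python
# Minimal sketch of a risk-based authentication rule.
# Signal names, thresholds, and factors are illustrative only.

def authentication_requirements(login: dict) -> list[str]:
    """Decide which credentials to request for this login attempt."""
    required = ["password"]

    # Unrecognised location: step up with an additional credential.
    if login.get("location") not in login.get("known_locations", []):
        required.append("one_time_passcode")

    # A low behavioural-biometrics score adds another factor.
    if login.get("behaviour_score", 1.0) < 0.6:
        required.append("face_match")

    return required

# A login from an unrecognised location triggers a step-up.
print(authentication_requirements({
    "location": "Lisbon",
    "known_locations": ["London"],
    "behaviour_score": 0.8,
}))  # -> ['password', 'one_time_passcode']
```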
Identity verification processes follow a similar pattern of risk-based decisioning. Orchestrated risk-based verification (oRBV) is rules-driven; verification decisions occur at the point of registration or during a re-verification event after an account has been created. Rules determine the levels of verification required to ensure both that the individual is who they say they are and that the checks meet the needs of the resource being accessed. If the system recognises a suspicious verification event, say a deepfake is spotted, a rule will initiate further verification checks; it may even require the person to complete their identity checks via F2F or vouching. The key to using orchestrated risk-based verification is to ensure it is flexible and dynamic in its execution.
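A hedged sketch of such a rule, in the same illustrative style as above (the ladder of checks and the signal names are assumptions, not a prescribed sequence):

```python
# Illustrative oRBV step-up rule: verification signals determine the next
# check in the journey. Check names and the ladder itself are hypothetical.

VERIFICATION_LADDER = [
    "document_check",   # ID document capture and inspection
    "cra_check",        # credit reference agency lookup
    "liveness_check",   # anti-spoofing liveness test
]

def next_verification_step(signals: dict, completed: list[str]) -> str | None:
    # A suspected deepfake escalates straight to out-of-band verification.
    if signals.get("deepfake_suspected"):
        return "vouching_or_f2f"
    # Otherwise request the next outstanding check on the ladder.
    for step in VERIFICATION_LADDER:
        if step not in completed:
            return step
    return None  # all checks passed: the individual is verified

# A clean registration proceeds up the ladder...
print(next_verification_step({}, ["document_check"]))  # -> 'cra_check'
# ...while a deepfake signal forces an out-of-band check.
print(next_verification_step({"deepfake_suspected": True}, []))  # -> 'vouching_or_f2f'
```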
Orchestrated risk-based verification (oRBV) is an ideal way to mitigate the impact of deepfakes on identity services. However, it cannot be static; it must use rules to modify user journeys dynamically. The dynamic nature of an identity orchestration and decisioning engine (ODE) provides the scope needed to handle the variety of verification required by the diversity of individuals creating online accounts. Consumers and citizens must not be penalised by the need to mitigate deepfake IDs. Instead, the service must offer individuals verification choices that maintain a great customer experience while preventing the use of fake identity documents. In practice, this means designing a service that is a system. When designing identity-related user journeys, systematic thinking is the only way to build cyber-resilience into a service: verifying each registration or resource access event using a risk-based approach. Using orchestration, a system can call out to the tools in the anti-deepfake armoury, such as deepfake detectors and AI-enabled KYC and AML. But orchestration provides more than this: it ensures that the human experience of the service is optimised, adjusting to the needs of the service as well as the individual and balancing security and usability.
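One way to picture an ODE is as a loop that dispatches pluggable checks and re-evaluates its rules after each result. The sketch below assumes hypothetical check functions; in practice each entry would wrap a third-party deepfake detector or a KYC/AML provider:

```python
from typing import Callable

# A check takes the verification event and returns its findings. In a real
# deployment each of these would call an external provider API, such as a
# deepfake detector or a KYC/AML service.
Check = Callable[[dict], dict]

def orchestrate(event: dict,
                checks: dict[str, Check],
                decide: Callable[[dict], str | None]) -> dict:
    """Run the checks chosen by the decisioning rules until none remain.

    `decide` inspects the event (including results so far) and names the
    next check to run, returning None once the journey is complete.
    """
    event.setdefault("results", {})
    while (check_name := decide(event)) is not None:
        event["results"][check_name] = checks[check_name](event)
    return event
```

Because `decide` runs again after every check, a deepfake flagged mid-journey can immediately reroute the user, which is exactly the dynamism the engine needs.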
How to use rules to spot a deepfake and stop it in its tracks
Deepfakes are not yet perfect, and spotting one can be done in real time. Mismatched colouring between composited images and irregular shadowing can be tell-tale signs of deepfakery. A growing number of solutions on the market will help identify deepfake identity documents, but is spotting deepfakes enough? A system must mitigate the overall impact of deepfake ID and be easy to deploy and maintain, all while preserving great customer service.
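As a toy illustration of the colour-consistency idea only (real detectors use trained models and many more signals; this heuristic and its threshold are assumptions):

```python
from PIL import Image, ImageStat

def colour_mismatch(portrait_region: Image.Image,
                    document_region: Image.Image,
                    threshold: float = 25.0) -> bool:
    """Flag a crude colour-statistics mismatch between two image regions.

    A toy heuristic only: production deepfake detectors rely on trained
    models, texture and shadow analysis, and document forensics.
    """
    mean_a = ImageStat.Stat(portrait_region.convert("RGB")).mean
    mean_b = ImageStat.Stat(document_region.convert("RGB")).mean
    # A large per-channel difference in mean colour can hint that the
    # portrait was pasted onto the document from another source.
    return max(abs(a - b) for a, b in zip(mean_a, mean_b)) > threshold
```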
Hardened, robust, and usable identity-enabled services come down to offering choices (a configuration sketch follows the list):
- Choices in deepfake detection solutions
- Choices in anti-fraud checks
- Choices in KYC and other identity checks
- Choices in the use of vouching and other out-of-band (OOB) channels
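Here is what these choices might look like as orchestration configuration; every provider name below is hypothetical:

```python
# Hypothetical orchestration configuration: each category offers a choice of
# interchangeable providers, so journeys can adapt per individual and per risk.
VERIFICATION_CHOICES = {
    "deepfake_detection": ["detector_a", "detector_b"],
    "anti_fraud": ["device_fingerprinting", "velocity_checks"],
    "kyc_and_identity": ["document_plus_cra", "bank_account_match"],
    "out_of_band": ["vouching", "f2f_appointment"],
}
```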
Identity orchestration and decisioning prevent synthetic identity account creation. When an account registration presents suspicious credentials, the system modifies the user journey to request further verification, even F2F checks, stepping verification up, or down, as the use case requires. This will stop even the most ardent fraudster from creating an account. The elegance of orchestration and decisioning is the means to develop robust, usable verification journeys while stopping automated deepfake IDs.
Contact Avoco to find out how to build anti-deepfake services.