Resemble AI has raised US$13 million in a new strategic investment round for AI deepfake detection. The funding brings its total venture investment to US$25 million, with participation from Berkeley CalFund, Berkeley Frontier Fund, Comcast Ventures, Craft Ventures, Gentree, Google's AI Futures Fund, IAG Capital Partners, and others.

The funding comes as organisations face mounting pressure to verify the authenticity of digital content. Generative AI has made it easier for criminals to produce convincing deepfakes, contributing to more than US$1.56 billion in fraud losses in 2025, and analysts estimate that generative AI could enable US$40 billion in fraud losses in the US by 2027.

Recent incidents highlight how quickly the threat is evolving. In Singapore, 13 individuals collectively lost more than SGD 360,000 after scammers impersonated a telecommunications provider and the Monetary Authority of Singapore. The attackers combined caller ID spoofing, voice deepfakes, and social engineering techniques that created urgency and exploited the public's trust in government and telecom brands.

Deepfake detection tools and new AI capabilities

Resemble AI develops real-time verification tools that help enterprises detect AI-generated audio, video, images, and text. The company plans to use the new funding to expand global access to its AI deepfake detection platform, which includes two recent releases:

- DETECT-3B Omni, a deepfake detection model designed for enterprise environments, for which the company reports 98% detection accuracy across more than 38 languages.
- Resemble Intelligence, a platform that provides explainability for multimodal and AI-generated content, built on Google's Gemini 3 models.

Resemble AI positions these tools as part of a broader effort to support real-time verification for both human users and AI agents interacting with digital content. According to the company, DETECT-3B Omni is already used in sectors such as entertainment, telecommunications, and government, and public benchmark results on Hugging Face show the model ranking among the strongest performers on image and speech deepfake detection, with a lower average error rate than competing models.
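To make the integration pattern concrete, the sketch below shows how an enterprise workflow might submit a suspect audio clip to a detection service and act on the verdict. It is a minimal illustration only: the endpoint URL, authentication scheme, request format, and response fields are assumptions made for the example, not Resemble AI's documented API.

```python
# Minimal illustration only. The endpoint, auth scheme, and response fields
# below are assumptions for this sketch, not Resemble AI's documented API.
import requests  # third-party HTTP client: pip install requests

API_URL = "https://api.example.com/v1/detect"  # hypothetical detection endpoint
API_KEY = "YOUR_API_KEY"                       # placeholder credential

def check_audio(path: str) -> dict:
    """Upload an audio clip and return the service's parsed JSON verdict."""
    with open(path, "rb") as audio_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": (path, audio_file, "audio/wav")},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape, e.g. {"label": "synthetic", "score": 0.97}
    return response.json()

if __name__ == "__main__":
    verdict = check_audio("suspect_call.wav")
    if verdict.get("label") == "synthetic" and verdict.get("score", 0.0) >= 0.9:
        print("High-confidence deepfake flag: route the call for manual review.")
    else:
        print("No deepfake flag raised for this clip.")
```

In practice, the confidence threshold and the follow-up action (blocking, escalation, human review) would be policy decisions for each organisation rather than properties of the detection model itself.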
Industry stakeholders say the rapid improvement of generative AI is reshaping how enterprises think about content trust and identity systems. Representatives from Google's AI Futures Fund, Sony Ventures, and Okta noted that organisations are moving toward verification layers that can help maintain trust in authentication processes.

Alongside the investment announcement, Resemble AI released its predictions for how deepfake-related risks may evolve in 2026. The company expects several shifts that could shape enterprise planning:

- Deepfake verification could become standard for official communications. Following incidents involving government officials, the company anticipates that real-time deepfake detection may eventually be required for official video conferencing, a move that would likely create new procurement activity and increase public-sector adoption.
- Organisational readiness may determine competitive positioning. As more jurisdictions introduce AI regulations, enterprises that integrate training, governance, and compliance processes early may be better prepared for operational and regulatory demands.
- Identity emerges as a central focus in AI security. With many AI-related attacks relying on impersonation, organisations may place greater emphasis on identity-centric security models, including zero-trust approaches for both human and machine identities.
- Cyber insurance costs may rise. The growing number of corporate deepfake incidents could lead insurers to reassess the policies they offer; companies without detection tools could face higher premiums or limited coverage.

The investment underscores the growing need for enterprises to understand how generative AI changes their risk exposure. Organisations across sectors are evaluating how verification, identity safeguards, and incident readiness fit into their broader security and compliance strategies.