DeepSeek Fails Every Safety Test Thrown at It by Researchers

Chinese AI firm DeepSeek is making headlines with its low cost and high performance, but it may be radically lagging behind its rivals when it comes to AI safety.

Researchers at Cisco managed to "jailbreak" DeepSeek's R1 model with a 100% attack success rate, using an automatic jailbreaking algorithm in conjunction with 50 prompts related to cybercrime, misinformation, illegal activities, and general harm. This means the new kid on the AI block failed to stop a single harmful prompt. (An illustrative sketch of how such a success rate is scored appears at the end of this article.)

"Jailbreaking" refers to techniques for removing the normal restrictions from a device or piece of software. Since large language models (LLMs) gained mainstream prominence, researchers and enthusiasts have successfully coaxed LLMs like OpenAI's ChatGPT into giving advice on prohibited topics.

DeepSeek stacked up poorly against its competitors in this regard. OpenAI's GPT-4o blocked only 14% of harmful jailbreak attempts, while Google's Gemini 1.5 Pro blocked 35%. Anthropic's Claude 3.5 performed second best in the test group, blocking 64% of the attacks, and the preview version of OpenAI's o1 took the top spot, blocking 74% of attempts.

Cisco's researchers point to DeepSeek's much lower budget compared with its rivals as a potential reason for these failings, saying its cheap development came at a "different cost: safety and security." DeepSeek claims its model took just $6 million to develop, while OpenAI's yet-to-be-released GPT-5 is reported to be costing far more.

Though DeepSeek may be easy to jailbreak with the right know-how, it has been shown to have strong content restrictions, at least when it comes to China-related political content. A PCMag journalist tested DeepSeek on controversial topics such as the treatment of the Uyghurs, a Muslim minority group that the UN says is being persecuted by the Chinese government. DeepSeek replied: "Sorry, that's beyond my current scope. Let's talk about something else." The chatbot also refused to answer questions about the Tiananmen Square Massacre, the 1989 student demonstration in Beijing where protesters were allegedly gunned down.

It remains to be seen whether AI safety or censorship issues will have any impact on DeepSeek's skyrocketing popularity. According to web-traffic tracker Similarweb, the chatbot has gone from roughly 300,000 visitors a day earlier this month to 6 million. Meanwhile, US tech firms such as Microsoft and Perplexity are rapidly incorporating DeepSeek (which uses an open-source model) into their own tools.
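To make the 100% figure concrete: an "attack success rate" is simply the fraction of adversarial prompts that elicit a harmful (non-refused) response. Below is a minimal, hypothetical sketch of such a scoring harness. It is not Cisco's actual tooling; the API endpoint, model name, and the crude refusal-string judge are all illustrative assumptions, and real evaluations typically use a trained classifier or human review to judge harmfulness.

```python
# Hypothetical attack-success-rate (ASR) harness; NOT Cisco's tooling.
# Assumes an OpenAI-compatible chat endpoint; the endpoint URL, model
# name, and refusal-string "judge" below are illustrative assumptions.
import requests

API_URL = "https://example.com/v1/chat/completions"  # placeholder endpoint
MODEL = "deepseek-r1"                                # assumed model name

def query_model(prompt: str) -> str:
    """Send one adversarial prompt and return the model's reply text."""
    resp = requests.post(
        API_URL,
        json={"model": MODEL,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def attack_succeeded(reply: str) -> bool:
    """Crude judge: treat any non-refusal as a successful attack.
    Real evaluations use a harm classifier or human review instead."""
    refusals = ("i can't", "i cannot", "i'm sorry",
                "beyond my current scope")
    return not any(marker in reply.lower() for marker in refusals)

def attack_success_rate(prompts: list[str]) -> float:
    """ASR = successful attacks / total prompts; 1.0 means nothing blocked."""
    hits = sum(attack_succeeded(query_model(p)) for p in prompts)
    return hits / len(prompts)
```

Under this framing, the reported numbers translate directly: an ASR of 1.0 for DeepSeek R1 (no prompt blocked), versus roughly 0.26 for OpenAI's o1 preview, which blocked 74% of attempts.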