Pelican Press · Posted July 2, 2024

Anthropic aims to fix one of the biggest problems in AI

Hot on the heels of the announcement that its Claude 3.5 Sonnet large language model beat out other leading models, including GPT-4o, AI startup Anthropic announced Monday that it plans to launch a new program to fund the development of independent, third-party benchmark tests against which to evaluate its upcoming models. Per a blog post, the company is willing to pay third-party developers to create benchmarks that can “effectively measure advanced capabilities in AI models.”

“Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” Anthropic wrote in the blog post. “Developing high-quality, safety-relevant evaluations remains challenging, and the demand is outpacing the supply.”

The company wants submitted benchmarks to help measure the relative “safety level” of an AI based on a number of factors, including how well it resists attempts to coerce harmful responses in areas such as cybersecurity; chemical, biological, radiological, and nuclear (CBRN) threats; and misalignment, social manipulation, and other national security risks.

Anthropic is also looking for benchmarks to help evaluate models’ advanced capabilities and is willing to fund the “development of tens of thousands of new evaluation questions and end-to-end tasks that would challenge even graduate students” — essentially testing a model’s ability to synthesize knowledge from a variety of sources, to refuse cleverly worded malicious user requests, and to respond in multiple languages.
Anthropic is looking for “sufficiently difficult,” high-volume tasks that can involve as many as “thousands” of testers across a diverse set of test formats, helping the company inform its “realistic and safety-relevant” threat modeling efforts. Interested developers are welcome to submit proposals to the company, which plans to evaluate them on a rolling basis.