Posted April 23, 2024 by Pelican Press

The world’s leading AI companies pledge to protect the safety of children online

Leading artificial intelligence companies including OpenAI, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech.

The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as generative AI unfolds.” The goal of the initiative is to prevent the creation of sexually explicit material involving children and to take it off social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

On Tuesday, Thorn and All Tech Is Human released a paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse” that outlines strategies and lays out recommendations for companies that build AI tools, search engines, social media platforms, hosting companies and developers to take steps to prevent generative AI from being used to harm children.

One of the recommendations, for instance, asks companies to choose the data sets used to train AI models carefully and to avoid data sets that contain not only instances of CSAM but also adult sexual content altogether, because of generative AI’s propensity to combine the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, thus creating new AI-generated child sexual abuse material online. A flood of AI-generated CSAM, according to the paper, will make identifying genuine victims of child sexual abuse more difficult by increasing the “haystack problem,” a reference to the amount of content that law enforcement agencies must currently sift through.

“This project was intended to make abundantly clear that you don’t need to throw up your hands,” Thorn’s vice president of data science Rebecca Portnoff told the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”

Some companies, Portnoff said, had already agreed to separate images, video and audio that involve children from data sets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method isn’t foolproof, since watermarks and metadata can be easily removed.