Five prominent Senate Democrats have sent a letter to OpenAI CEO Sam Altman, seeking clarity on the company's safety and employment practices. The letter – signed by Senators Brian Schatz, Ben Ray Luján, Peter Welch, Mark R. Warner, and Angus S. King, Jr. – comes in response to recent reports questioning OpenAI's commitment to its stated goals of safe and responsible AI development.

The senators emphasise the importance of AI safety for national economic competitiveness and geopolitical standing. They note OpenAI's partnerships with the US government and national security agencies to develop cybersecurity tools, underscoring the critical nature of secure AI systems.

"National and economic security are among the most important responsibilities of the United States Government, and unsecure or otherwise vulnerable AI systems are not acceptable," the letter states.

The lawmakers have requested detailed information on several key areas by 13 August 2024. These include:

- OpenAI's commitment to dedicating 20% of its computing resources to AI safety research.
- The company's stance on non-disparagement agreements for current and former employees.
- Procedures for employees to raise cybersecurity and safety concerns.
- Security protocols to prevent theft of AI models, research, or intellectual property.
- OpenAI's adherence to its own Supplier Code of Conduct regarding non-retaliation policies and whistleblower channels.
- Plans for independent expert testing and assessment of OpenAI's systems pre-release.
- Commitment to making future foundation models available to US Government agencies for pre-deployment testing.
- Post-release monitoring practices and learnings from deployed models.
- Plans for public release of retrospective impact assessments on deployed models.
- Documentation on meeting voluntary safety and security commitments to the Biden-Harris administration.

The senators' inquiry touches on recent controversies surrounding OpenAI, including reports of internal disputes over safety practices and alleged cybersecurity breaches. They specifically ask whether OpenAI will "commit to removing any other provisions from employment agreements that could be used to penalise employees who publicly raise concerns about company practices."

This congressional scrutiny comes at a time of increasing calls for AI regulation and safety measures. The letter references the voluntary commitments made by leading AI companies to the White House last year, framing them as "an important step towards building this trust" in AI safety and security.

"This morning five senate Democrats sent a letter to Sam Altman with twelve questions. This was question nine: 'Will OpenAI commit to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?'" — Andrew Curran (@AndrewCurran_)

Kamala Harris may be the next US president following the election later this year. At the AI Safety Summit in the UK last year, Harris said: "Let us be clear, there are additional threats that also demand our action.
Threats that are currently causing harm, and which to many people also feel existential… when people around the world cannot discern fact from fiction because of a flood of AI-enabled myths and disinformation."

Chelsea Alves, a consultant, commented: "Kamala Harris' approach to AI and big tech regulation is both timely and critical as she steps into the presidential race. Her policies could set new standards for how we navigate the complexities of modern technology and individual privacy."

The response from OpenAI to these inquiries could have significant implications for the future of AI governance and the relationship between tech companies and government oversight bodies.