First US safety bill for AI vetoed
The legislation would have required tech firms to test artificial intelligence models before release
California Governor Gavin Newsom has vetoed a landmark bill on artificial intelligence that would have established the first safety measures for the industry in the US. The bill, known as California Senate Bill 1047, or SB 1047, was aimed at reducing potential risks created by AI.
The proposed regulation would have obliged tech companies with powerful AI models to subject them to safety testing before releasing them to the public, as well as publicly disclosing the models’ safety protocols. This would have been done in order to prevent the models from being manipulated into causing harm, such as hacking strategically important infrastructure.
In a message accompanying the veto on Sunday, the governor said that while the proposal was “well-intentioned,” it wrongly focused on the “most expensive and large-scale” AI models, while “smaller, specialized models” could potentially cause more harm. Newsom also argued that the bill does not take into account the environment in which an AI system is deployed, or whether it involves critical decision-making or the use of sensitive data.
“Instead, the bill applies stringent standards to even the most basic functions… I do not believe this is the best approach to protecting the public from real threats posed by the technology,” the governor stated. Newsom stressed that he agrees the industry must be regulated, but called for more “informed” initiatives based on “empirical trajectory analysis of AI systems and capabilities.”
“Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself… Given the stakes – protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good – we must get this right,” he concluded.
As California governor, Newsom is seen as playing an important role in the nascent AI regulation process. According to his office’s figures, the state is home to 32 of the world’s “50 leading AI companies.”
The bill’s author, state Senator Scott Wiener, called the veto “a setback” for those who “believe in oversight of massive corporations that are making critical decisions” affecting public safety. He pledged to continue working on the legislation.
The bill had drawn mixed reactions from tech firms, researchers, and lawmakers. While some viewed it as paving the way towards nationwide regulation of the industry, others argued that it could stifle the development of AI. Former US House Speaker Nancy Pelosi branded the proposal “well-intentioned but ill informed.”
Meanwhile, scores of employees at several leading AI firms, including OpenAI, Anthropic, and Google’s DeepMind, supported the bill because it added whistleblower protections for those who speak up about risks in the AI models their companies are developing.
This article was originally published by RT at RT World News – (https://www.rt.com/news/604939-us-ai-safety-bill-veto/?utm_source=rss&utm_medium=rss&utm_campaign=RSS).