The top companies developing artificial intelligence (AI) tools and products have voluntarily committed to a series of safeguards intended to protect users from risks posed by the technology. These companies include Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.
According to a note by the White House, the Biden-Harris Administration has “secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.”
The note also said that the companies have chosen to undertake these commitments immediately, and that the commitments underscore three principles fundamental to responsible AI development: safety, security, and trust.
What are the commitments?
The commitments by the tech giants are broadly divided under these three principles. First, the companies have committed to internal and external security testing of their AI systems before release, carried out in part by independent experts. Second, the companies will share information on managing AI risks across the industry and with governments, civil society and academia.
These seven tech companies will also invest in cybersecurity and facilitate third-party discovery and reporting of vulnerabilities in their AI systems. They have further committed to developing and deploying advanced AI systems to help address societal challenges, and to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
Tech CEOs take on AI development
Google and OpenAI, among others, have already been promoting responsible development of the technology. Google CEO Sundar Pichai has spoken about it extensively in public forums and interviews, while OpenAI chief executive Sam Altman recently concluded a global tour that took him to multiple countries, including India, to talk about the need for responsible AI.
In June this year, Apple CEO Tim Cook also opened up about the potential and dangers that AI poses to humanity. He said that large language models (LLMs) show “great promise” but also the potential for “things like bias, things like misinformation [and] maybe worse in some cases.”
Emphasising the need for regulation and guardrails, Cook said, “If you look down the road, then it’s so powerful that companies have to employ their own ethical decisions.”