As artificial intelligence (AI) gathers steam and makes its presence felt more than ever, concerns about the security risks associated with it are growing. Google, a major player in the development of next-generation AI tools, has emphasised adopting a cautious approach towards AI. In a blog post, the company has now, for the first time, revealed that it has a group of ethical hackers working to make AI safe. Called the Red Team, it was first formed almost a decade ago, Google said.
Who is part of Google’s Red Team?
In the blog post, Daniel Fabian, head of Google Red Teams, said the team consists of hackers who simulate a variety of adversaries, ranging from nation states and well-known Advanced Persistent Threat (APT) groups to hacktivists, individual criminals and even malicious insiders. “The term came from the military, and described activities where a designated team would play an adversarial role (the “Red Team”) against the “home” team,” Fabian noted.
He also said that the AI Red Team is closely aligned with traditional red teams, but has the AI subject-matter expertise needed to carry out complex technical attacks on AI systems. Google maintains similar red teams for its other products and services.
What does the Red Team do?
The primary job of Google’s AI Red Team is to take relevant research and adapt it to work against real products and features that use AI, in order to learn about the impact of such attacks. “Exercises can raise findings across security, privacy, and abuse disciplines, depending on where and how the technology is deployed,” explained Fabian.
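Google’s post does not detail specific techniques, but a classic example of adapting published research into a practical test is the Fast Gradient Sign Method (FGSM) from Goodfellow et al. (2015), which perturbs an input to try to flip a classifier’s prediction. The sketch below is purely illustrative and not Google’s tooling; it assumes a hypothetical PyTorch image classifier, and the names `model`, `image`, and `label` are placeholders.

```python
# Illustrative sketch of a research-derived attack a red team might adapt:
# FGSM, which nudges an input in the direction that increases the model's loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    `model` is any classifier producing logits; `epsilon` bounds the
    per-pixel perturbation size (hypothetical default).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient, then clamp to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A red-team exercise in this vein would compare the model’s predictions on `image` and on the perturbed output to gauge how easily the deployed system can be fooled.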
How effective has Google’s Red Team been?
Quite successful, according to Fabian: “Red team engagements, for example, have highlighted potential vulnerabilities and weaknesses, which helped anticipate some of the attacks we now see on AI systems.” He added that attacks on AI systems quickly become complex and benefit from AI subject-matter expertise.