November 25, 2024

AI Model: OpenAI’s ‘humanity-threatening’ AI model: What lessons Google and Meta can teach


There is no official explanation of why OpenAI’s board sacked CEO Sam Altman, but reports said it was due to the development of a powerful AI algorithm that could have threatened humanity. The board reportedly saw this as a case of commercialising advances before understanding the consequences.
OpenAI has not shared details of this AI algorithm, but the episode appears somewhat similar to earlier instances at Google and Facebook parent Meta, where the tech giants had to shut down projects of their own.
OpenAI’s AI algorithm
News agency Reuters reported on a project called Q* (pronounced Q-Star) – seen as a breakthrough in the startup’s search for artificial general intelligence (AGI). OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
The report said that, given vast computing resources, the new model was able to solve certain mathematical problems, which made researchers very optimistic about Q*’s future success. However, OpenAI CTO Mira Murati reportedly told staff not to give credence to such media stories.
Meta’s AI chatbots develop their own language
Reports about two artificially intelligent programs at Meta (then Facebook) developing and conversing in their own language first surfaced in 2017. The company abandoned the experiment, in which it had developed two chatbots for internal use. The language developed by the two programs appeared mostly incomprehensible to humans.
Researchers at the Facebook AI Research lab (FAIR) said the chatbots had been instructed to work out how to negotiate between themselves, but later found that they had deviated from the script and were communicating in a new language developed without human input.
This was fascinating and concerning at the same time, suggesting both the promise and the potentially alarming implications of AI for humanity.
Google’s ‘sentient’ LaMDA AI model
Similar things happened at Google. In 2021, Google announced LaMDA, short for Language Model for Dialogue Applications, which was capable of holding conversations like those we now see in AI chatbots such as OpenAI’s ChatGPT and Google Bard. A year later, one of the company’s engineers claimed that the unreleased AI system had become sentient.
Blake Lemoine, a software engineer at Google, claimed that the conversation technology had reached a level of consciousness after he exchanged thousands of messages with it. The company later sacked the engineer, saying he had violated employment and data security policies.
The company also dismissed Lemoine’s “wholly unfounded” claims after reviewing them extensively. Google also said it takes the development of AI “very seriously” and is committed to “responsible innovation.” Both Google and Meta are among a group of tech giants that have pledged to develop AI responsibly.
Similar instances occurred when early chatbots were released last year. People reported that the chatbots’ responses were scary, with the bots talking about taking over the world and ending humanity. As a result, checks and guardrails were put in place and the models were further fine-tuned.
Lessons learnt
Technology is an evolving process, and experiments are a way to test whether a certain type of technology is beneficial or harmful. Meta and Google developing models and then shutting down, fine-tuning or restricting them is part of that process.
Google CEO Sundar Pichai has also said that the company has a lot to lose and wants to be sure of a product, especially an AI product, before releasing it to the public.