NEW DELHI: US tech giant Google said it is committed to checking the menace of malicious AI and deepfake content, and will work with the government on the matter.
“We appreciate the opportunity to meet with the government, jointly with industry stakeholders, to discuss the need to develop safeguards around synthetic content and share how we are building tools and guardrails to address associated risks. We are committed to continuing this collaboration and sharing our expertise and knowledge towards our collective goal of responsible AI development,” Google said in a statement after a meeting of industry representatives with the government.
Google has been investing in tools like watermarking and synthetic audio detection to help people safely identify content online.
“At Google, we are developing AI in a way that maximizes the positive benefits to society while addressing the challenges, guided by our AI Principles. We’re also building tools and guardrails to help prevent the misuse of technology, while enabling people to better evaluate online information. We’ll continue to apply high standards for information quality and safety drawing on our deep experience, and in collaboration with experts around the world. We have long-standing, robust policies, technology, and systems to identify and remove harmful content across our products and platforms. We are applying this same ethos and approach as we launch new products powered by Generative AI technology,” the company further said.
It is understood that over the coming months, YouTube will introduce updates that will inform viewers when the content they’re seeing is synthetic.
It will require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.