Artificial intelligence has gone beyond being a buzzword and is slowly spreading its tentacles across the tech industry around the world. While the advantages of AI are immense, so are the risks. That's why China is planning strict curbs on how AI is used, according to a report by the South China Morning Post. As per the report, Chinese authorities are looking to strike a balance between the risks and benefits of AI.
China has published new draft guidance that looks at improving two aspects: how training data is used and whether it is secure, and the security of the large language models (LLMs) that power generative AI tools like ChatGPT or Baidu's Ernie Bot.
What China wants to do
The authorities in China want to ensure that any data used to train AI models is free of copyright infringement and does not breach personal data protections. According to the report, all training data should be "processed by authorised data labellers and reviewers to pass security checks first." In other words, the data should pass through human eyes before it is used to generate content artificially.
Also, when developers build their LLMs, they "should be based on foundational models filed with and licensed by authorities." Again, this will ensure that the authorities know what information is being fed to the models used for generative AI.
No ‘illegal content’
China's definition of illegal content is rather broad and includes sensitive political information; questions about Taiwan, for instance, count as illegal content. Fake news, the promotion of superstition and pornography also fall under this definition. The draft proposal would block any AI training data containing more than "5% of illegal content", as such information is considered dangerous under China's cybersecurity laws.
China has tabled the draft proposal and is seeking public feedback until October 25.