Chinese authorities are tightening the screws on artificial intelligence (AI) companies in the country, the latest in a long line of developments being a requirement for security evaluations before commercial launch.
According to proposed regulations released by China’s National Information Security Standardization Committee, AI developers will bear the burden of evaluating the data sources used to train their models. The incoming rules blacklist certain data sources for training generative AI models, with firms risking steep fines for violations.
The blacklisted data sources include those containing information promoting violence and terrorism or running contrary to the socialist values of the Chinese state. AI developers will be precluded from training their models on sources containing “more than 5% of illegal and harmful information,” although the metrics for measuring the data remain unclear.
Per the rules, AI developers are also barred from using censored materials on the Chinese internet to train their models. Moving forward, Chinese AI developers are required to submit a detailed security report of their generative AI systems to authorities before a mainstream launch.
On obtaining training data, the rules order firms to respect content creators’ intellectual property (IP) rights and seek consent before collection. If biometric data is to be used in training models, AI developers must obtain permission from the affected individuals in a clear and concise manner.
The new guidelines come after a deluge of generative AI products flooded the Chinese market over the last 60 days. One report placed the number of AI products at over 70, with several local technology firms like Baidu and Alibaba competing for market dominance.
Chinese firms began rolling out their AI services following the launch of interim measures to regulate developers in mid-August. The rules, hailed for their innovative stance, impose several guardrails, including a requirement for clear labeling of AI-generated content and a ban on discriminatory content, while placing additional liability on AI developers.
“Providers shall legally assume the responsibility of network information content producers and fulfill network information security obligations,” read a clause in the August rules.
China’s stance on emerging technology
China has always eyed emerging technologies with much suspicion, moving to flex its regulatory muscles over fledgling industries. In 2021, the regulatory hammer fell on its growing Web3 industry, effectively banning all virtual currency activities in the country.
With AI, China’s Network Security Bureau revealed a blueprint to assist the government in cracking down on misuse of the technology. An executive of the bureau confirmed a spike in crimes facilitated by AI and digital currencies, vowing to increase enforcement action over the sector.
“We have been monitoring the use of ChatGPT, cloud computing, blockchain, deep fake AI, and other new emerging technologies, new applications, and new organizations in these incidents,” said the executive. “We will strike hard at such methods as we research their use.”