Global: China’s Generative AI Regulations Go Live Next Month

China is a famously hands-on nation when it comes to controlling information within its borders.

Still, that hasn’t stopped some of the world’s largest technology companies from operating there in compliance with Beijing’s rules around internet access and the spread of certain media, nor has it kept the country’s marketplace from giving rise to its own breed of tech giants, including Alibaba, TikTok, Huawei, JD.com, Tencent and more.

After all, as one of the world’s leading economies with a population of more than a billion, China is an attractive place to do business — particularly for tech platforms explicitly built to monetize scalability.

Now, China has become one of the world’s first major market economies to regulate generative artificial intelligence (AI) with a new set of rules that will go into effect on Aug. 15.

Western firms will need to comply with the interim set of guidelines from the country’s top internet watchdog, the Cyberspace Administration of China (CAC), if they want to operate and do business within the nation’s borders.

They will have a short runway to do so, as the rules were announced only at the end of last week (July 13).

This, as Elon Musk repeatedly referred to China as being “on team humanity” during a live chat Friday (July 14) introducing his new AI company, xAI.

China Takes a State-Centric Approach to Regulating AI 

The CAC statement underscored Beijing’s firm requirement that content generated by generative AI be in line with the nation’s core socialist values, while noting that the rules apply only to content accessible by, or provided to, the domestic public.

“The provisions of these Measures shall not apply to the research and development and application of generative artificial intelligence technology by industry organizations, enterprises, educational and scientific research institutions, public cultural institutions, relevant professional institutions, etc., and do not provide generative artificial intelligence services to the domestic public,” the CAC statement reads.

It added that AI-produced content “shall not … subvert the state power, overthrow the socialist system, endanger national security and interests, damage the image of the country, incite division of the country, undermine national unity and social stability, promote terrorism and extremism, promote national hatred, ethnic discrimination, violence, [or promote] false and harmful information.”

The freshly unveiled rules will be overseen by seven national agencies, including the cyberspace watchdog, the National Development and Reform Commission, and the ministries of education, science and technology, radio and television, and industry and information technology.

Only firms planning to offer services to the Chinese domestic public need to submit security assessments, per the CAC statement, which suggests that firms working on enterprise-facing products or those intended for use by overseas users would be given more leeway, at least for now.

Indeed, most Chinese tech companies — including Baidu, Alibaba and JD.com — have so far focused primarily on developing AI applications designed solely for industrial and enterprise use.

JD.com unveiled a new enterprise-facing LLM (large language model) called ChatRhino on the same day as the CAC’s statement.

Sharpening the Spear Tip of Innovation 

While China leads the world in AI-driven surveillance and facial recognition technology, the country lags behind other nations in developing cutting-edge generative AI systems due to its censorship rules that limit the data that can be used to train foundation models.

That could be why the latest statement from the CAC walks back some of the more draconian information control elements seen in an earlier version of the proposed rules, reframing the guardrails to better support domestic development of the AI industry.

Last week’s guidelines join two previous regulatory frameworks from Beijing focused on enacting rules of the road for AI and its algorithmic foundations — including 2021 regulations around recommendation algorithms and a set of 2022 rules for deep synthesis (deepfakes).

Information control remains a central goal of all three measures, which require AI developers to file with China’s algorithm registry, a newly built government repository that gathers information on how algorithms are trained, and to pass a security self-assessment.

As PYMNTS has covered, observers believe that the U.S. and EU’s shared concern over China’s growing global digital influence could potentially lead to closer cooperation across the Atlantic.

In previous discussions with PYMNTS, industry insiders have compared the purpose of AI regulation in the West to both a car’s airbags and brakes and the role of a restaurant health inspector.

Until global regulation does appear, “Pandora’s box has been opened. AI is really powerful … The tail wags the dog now: Things happen online first, and then trickle down to real life,” Wasim Khaled, CEO and co-founder of intelligence platform Blackbird.AI, told PYMNTS.
