Chinese experts back call by Musk, others for pause in AI race

A group of mainland Chinese and Hong Kong artificial intelligence (AI) experts have joined calls by some global tech veterans to pause development of AI technologies more advanced than GPT-4, warning that the current rate of progress is “too fast”.

The Future of Life Institute (FLI), an organisation that studies technological risks to human society, drafted an open letter last month that counts Tesla’s Elon Musk, Apple co-founder Steve Wozniak and historian Yuval Harari among its thousands of signatories. It says the current AI race is dangerous and calls for the creation of independent regulators to ensure future systems are safe to deploy.

Although some practitioners have criticised the letter for sowing fear about the future of AI, several experts based in mainland China and Hong Kong have expressed support, saying it is important to address concerns about ChatGPT, the AI-powered chatbot developed by Microsoft-backed OpenAI that uses the GPT-4 large language model (LLM).

Long before the November launch of ChatGPT, there was open debate about whether AI would one day outsmart human beings.

Cai Hengjin, a professor at the Artificial Intelligence Research Institute at Wuhan University, said the advent of ChatGPT has smashed the arguments of those who thought this could never happen.

“One measurement is how fast and powerful it [AI] would grow to the extent beyond our imagination. Some people thought it would grow slowly and we still have decades or even hundreds of years [left],” said Cai in an interview with the Post on Sunday. “But that’s not the case … we only have a couple of years – because our [AI] advancement is just too fast.”

Cai’s concerns are shared by many other Chinese signatories of the FLI’s open letter.

Zhang Yizhe, an associate professor at the Nanjing University of Science and Technology, and Amazon veteran Zhao Yaxiong, chief executive of cloud-based software-as-a-service start-up Tricorder Observability, have both voiced concerns about the security challenges AI could bring and have called for more scrutiny.

“I signed the [FLI] letter to support the AI-related security issues raised, hoping there will be a better system to coordinate LLM development activities across big companies,” said Zhang, although he pointed out that he does not agree with everything in the letter.

“GPT-4 and bigger models will unleash a productivity upgrade and lead to the loss of jobs for many. How to safely, reasonably, and ethically develop and use super large language models needs our attention,” said Zhang.

Tricorder’s Zhao called the recent trend of having ChatGPT grade research papers on AI chatbot technologies a dangerous sign, saying it shows that humans have already begun letting AI judge their work.

Alfonso Ngan, associate dean of engineering (research) at the University of Hong Kong, who also signed the letter, said the university will ask students not to use the technology for one semester, giving the institution time to develop an agreed position on how it can be appropriately applied in an academic environment.

“We’re not banning this forever,” said Ngan, adding that powerful AIs like ChatGPT pose a conundrum for the education sector.

“On the one hand, we want our students to be exposed [to AI tech] to learn, and [we even] need to teach such technologies,” said Ngan. “But on the other hand, the education sector needs time to adjust operations, especially in relation to assessment, homework and assignments.”

Ngan and other signatories, including Tricorder’s Zhao and Chen Yongli, founder of Internet of Things start-up Edgenesis, said ChatGPT’s power cannot be taken lightly.

“You can’t simply pray that when AI grows smarter and stronger, it will still have compassion for us humans, that it will still be rational and only inherit good and not humanity’s bad nature,” said Cai, noting that ChatGPT has been trained exclusively on human language materials.

“We could make use of the time we still have to evolve with it side by side in the metaverse, and coach it to be good,” said Cai. South China Morning Post
