Pause AI systems; they are a risk to society

Artificial intelligence experts, industry leaders and researchers are calling on AI developers to hit the pause button on training any models more powerful than the latest iteration behind OpenAI’s ChatGPT.

More than 1,100 people in the industry signed a petition calling for labs to stop training powerful AI systems for at least six months to allow for the development of shared safety protocols.

Prominent figures in the tech community, including Elon Musk and Apple co-founder Steve Wozniak, were listed among the signatories.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” said an open letter published on the Future of Life Institute website. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The call comes after the launch of a series of AI projects in the last several months that convincingly perform human tasks such as writing emails and creating art. Microsoft-backed OpenAI released GPT-4 this month, a major upgrade of its AI-powered chatbot that is capable of telling jokes and passing tests such as the bar exam. Google and Microsoft are among the firms building AI into their products, and Morgan Stanley has been using GPT-4 to create a chatbot for its wealth advisers.

Developers should work with policy makers to create new AI governance systems and oversight bodies, according to the letter. It called on governments to intervene in the development of AI if major players did not imminently agree to a public, verifiable pause.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” it said.

Yoshua Bengio, founder of the AI research institute Mila, signed the petition, and Emad Mostaque, founder and CEO of Stability AI, confirmed to Bloomberg that he had signed.

There were no signatories from OpenAI.

The Future of Life Institute is a nonprofit that seeks to mitigate risks associated with powerful technologies and counts the Musk Foundation as its biggest contributor.

The AI boom has also created a new role for which a computer engineering degree is optional. They are called "prompt engineers": people who spend their days coaxing AI systems to produce better results and helping companies train their workforces to harness the tools.

More than a dozen large language models, or LLMs, have been created by companies including Alphabet, OpenAI, and Meta Platforms.

As the technology proliferates, many companies are finding they need someone to add rigor to their results.

“It’s like an AI whisperer,” says Albert Phelps, a prompt engineer at Mudano, part of consultancy firm Accenture. “You’ll find prompt engineers come from a history, philosophy, or English language background, because it’s wordplay. You’re trying to distill the essence or meaning of something into a limited number of words.”

Firms like Anthropic, a Google-backed startup, are advertising salaries up to $335,000 for a “Prompt Engineer and Librarian.” Automated document reviewer Klarity is offering $230,000 for a machine learning engineer who can “prompt and understand how to produce the best output” from AI tools.

Google, TikTok and Netflix have been driving salaries higher, but the role is becoming mainstream among bigger firms. (Bloomberg)

Copyright © 2024 Communications Today