AI is moving fast enough to break things. Sound familiar?

In January 2015, the newly formed—and grandly named—Future of Life Institute (FLI) invited experts in artificial intelligence to spend a long weekend in San Juan, Puerto Rico. The result was a group photo, a written set of research priorities for the field and an open letter about how to tailor AI research for maximum human benefit. The tone of these documents was predominantly upbeat. Among the potential challenges FLI anticipated was a scenario in which autonomous vehicles reduced the 40,000 annual US automobile fatalities by half, generating not “20,000 thank-you notes, but 20,000 lawsuits.” The letter acknowledged it was hard to predict what AI’s exact impact on human civilization would be—it laid out some potentially disruptive consequences—but also noted that “the eradication of disease and poverty are not unfathomable.”

The open letter FLI published on March 29 was, well, different. The group warned that AI labs were engaging in “an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for an immediate pause on the most advanced AI research and attracted thousands of signatures—including those of many prominent figures in the field—as well as a round of mainstream press coverage.

For anyone trying to wrap their heads around the current freakout over AI, the letter was instructive on multiple levels. It’s a vivid example of how the conversations about new technologies can shift with jarring speed from wide-eyed optimism to deep pessimism. The vibe at the 2015 Puerto Rico event was positive and collegial, says Anthony Aguirre, FLI’s vice president and secretary of its board. He also helped draft the recent letter, inspired by what he argues is a distressing turn in the development of the technology. “What there wasn’t then was giant companies competing with one another,” he says.

Looking back, the risk that self-interested technology companies would come to dominate the field seems obvious. But that concern isn’t reflected anywhere in the documents from 2015. Also absent was any mention of the industrial-scale dissemination of misinformation, an issue that many tech experts now see as one of the most frightening consequences of powerful chatbots in the near term.

Then there was the reaction to last month’s letter. Predictably, leading AI companies such as OpenAI, Google, Meta Platforms and Microsoft gave no indication that it would lead them to change their practices. FLI also faced blowback from many prominent AI experts, partially because of its association with the polarizing effective altruism movement and Elon Musk, a donor and adviser known for his myriad conflicts of interest and attention-seeking antics.

Aside from any intra-Silicon Valley squabbles, critics say FLI was doing damage not by voicing concerns, but by focusing on the wrong ones. There’s an unmistakable tinge of existential threat in FLI’s letter, which explicitly raises the prospect of humans losing control of the civilization we’ve built. Fear of computer superintelligence is a long-standing topic within tech circles—but so is the tendency to vastly overstate the capabilities of whatever technology is the subject of the latest hype cycle (see also: virtual reality, voice assistants, augmented reality, the blockchain, mixed reality, and the internet of things, to name a few).

Predicting that autonomous vehicles could halve traffic fatalities and warning that AI could end human civilization seem to reside on opposite ends of the techno-utopian spectrum. But both actually promote the view that what Silicon Valley is building is far more powerful than laypeople understand. Doing this diverts attention from less sensational conversations and undermines attempts to address the more realistic problems, says Aleksander Madry, faculty co-lead of the Massachusetts Institute of Technology’s AI Policy Forum. “It’s really counterproductive,” he says of FLI’s letter. “It will change nothing, but we’ll have to wait for it to subside to get back to serious concerns.”

The leading commercial labs working on AI have been making major announcements in rapid succession. OpenAI released ChatGPT less than six months ago and followed with GPT-4, which performs better on many measures but whose inner workings are largely a mystery to people outside the company. Its technology is powering a series of products released by Microsoft Corp., OpenAI’s biggest investor, some of which have done unsettling things, such as professing love for human users. Google rushed out a competing chatbot-powered search tool, Bard. Meta Platforms Inc. recently made one of its AI models available to researchers who agreed to certain parameters—and then the code quickly showed up for download elsewhere on the web.

“In a sense we’re already in the worst-of-both-worlds scenario,” says Arvind Narayanan, a professor of computer science at Princeton University. The best AI models are controlled by a few companies, he says, “while slightly older ones are widely available and can even run on smartphones.” He says he’s less concerned about bad actors getting their hands on AI models than AI development happening behind the closed doors of corporate research labs.

OpenAI, despite its name, takes essentially the opposite view. After its initial formation in 2015 as a nonprofit that would produce and share AI research, it added a for-profit arm in 2019 (albeit one that caps the potential profits its investors can realize). Since then it’s become a leading proponent of the need to keep AI technology closely guarded, lest bad actors abuse it.

In blog posts, OpenAI has said it can anticipate a future in which it submits its models for independent review or even agrees to limit its technology in key ways. But it hasn’t said how it would decide to do this. For now it argues that the way to minimize the damage its technology can cause is to limit the level of access its partners have to its most advanced tools, governing their use through licensing agreements. The controls on older and less powerful tools don’t necessarily have to be as strong, says Greg Brockman, an OpenAI co-founder who’s now its president and chairman. “You want to have some gap so that we have some breathing room to really focus on safety and get that right,” he says.

It’s hard not to notice how well this stance dovetails with OpenAI’s commercial interests—a company executive has said publicly that competitive considerations also play into its view on what to make public. Some academic researchers complain that OpenAI’s decision to withhold access to its core technology makes AI more dangerous by hindering disinterested research. A company spokesperson says it works with independent researchers, and went through a six-month vetting process before releasing the latest version of its model.

OpenAI’s rivals question its approach to the big questions surrounding AI. “Speaking as a citizen, I always get a little bit quizzical when the people saying ‘This is too dangerous’ are the people who have the knowledge,” says Joelle Pineau, vice president for AI research at Meta and a professor at McGill University. Meta allows researchers access to versions of its AI models, saying it hopes outsiders can probe them for implicit biases and other shortcomings.

The drawbacks of Meta’s approach are already becoming clear. In late February the company gave researchers access to a large language model called LLaMA—a technology similar to the one that powers ChatGPT. Researchers at Stanford University soon said they’d used the model as the basis for a project of their own that approximated advanced AI systems with about $600 of investment. Pineau says she hasn’t assessed how well Stanford’s system works, though she says such research is in line with Meta’s goals.

But Meta’s openness, by definition, came with less control over what happened with LLaMA. It took about a week for the model to show up for download on 4chan, one of the message boards of choice for internet trolls. “We’re not thrilled about the leak,” Pineau says.

There may never be a definitive answer about whether OpenAI or Meta has the right idea—the debate is only the latest version of one of Silicon Valley’s oldest fights. But their divergent paths do highlight how the decisions about putting safeguards on AI are being made entirely by executives at a few large companies.

In other industries, the release of potentially dangerous products comes only after private actors have satisfied public agencies that they’re safe. In a March 20 blog post, the Federal Trade Commission warned technologists that it “has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.” Ten days later the Center for AI & Digital Policy, an advocacy group, filed a complaint with the commission, asking it to halt OpenAI’s work on GPT-4.

Being able to build something but refraining from doing so isn’t a novel idea. But it pushes against Silicon Valley’s enduring impulse to move fast and break things. While AI is far different from social media, many of the players involved in this gold rush were around for that one, too. Those services were deeply entrenched by the time policymakers began trying to respond in earnest, and those efforts have arguably achieved very little. In 2015 it still seemed like there was lots of time to deal with whatever AI would bring. That seems less true today.

Bloomberg
