AI is rewriting the rules of creativity. Should it be stopped?

For the first time in human history, we can give machines a simple written or spoken prompt and they will produce original creative artefacts – poetry, prose, illustration, music – with infinite variation. With disarming ease, we can hitch our imagination to computers, and they can do all the heavy lifting to turn ideas into art.

This machined artistry is essentially mindless – a dizzying feat of predictive, data-driven misdirection, a kind of hallucination – but the trickery works, and it is about to saturate every aspect of our lives.

The faux intelligence of these new artificial intelligence (AI) systems, called large language models (LLMs), appears to be benign and assistive, and the marvels of manufactured creativity will become mundane, as our AI-assisted dreams take their place alongside our likes and preferences in the vast data mines of cyberspace.

The creative powers of machine learning have appeared with blinding speed, and have beggared belief and divided opinion in roughly equal measure.

If you want an illustration of pretty much anything you can imagine and you possess no artistic gifts, you can summon a bespoke visual gallery as easily as ordering a meal from a food delivery app, and considerably faster.

A simple prompt, which can be fine-tuned to satisfy the demands of your imagination, will produce digital art that was once the domain of exceptional human talent.

Images created by Baidu’s ERNIE-ViLG, OpenAI’s DALL·E 2, and Stability AI’s Stable Diffusion, among other systems, have already flooded the meme-sphere, and the dam of amazement has barely cracked.
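To show just how thin that prompt-to-picture interface is, here is a minimal sketch of text-to-image generation – assuming the open-source diffusers library and a publicly released Stable Diffusion checkpoint; the model name, prompt and settings are illustrative, not the pipeline behind any particular product:

```python
# A minimal prompt-to-image sketch with an open Stable Diffusion
# checkpoint via Hugging Face's diffusers library (illustrative only).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a publicly released checkpoint
    torch_dtype=torch.float16,
).to("cuda")                            # assumes a CUDA-capable GPU

prompt = "a panda on a bicycle painted in the style of Francis Bacon"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("panda.png")
```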

Writing is going the same way, whether that’s prompt-generated verse in the style of well-known poets, detailed magazine articles on any suggested topic, or complete novels. Tools for AI-generated music are also starting to appear: an app called Mubert, built on LLM technology, can “instantly, easily, perfectly” create any prompted tune, royalty free, in pretty much any style – without musicians.

With roots in cybernetics (defined by mathematician Norbert Wiener in the 1940s as “the science of control and communications in the animal and the machine”), LLMs turned heads in 2017 with the publication by Google researchers of a paper titled “Attention Is All You Need”.

It was a calling card for the Transformer: the driving force of LLMs. Within the AI community, the Transformer was a huge unlock for natural language processing, which allows a computer program to understand human language as it is spoken or written – and it precipitated a Dr Dolittle moment in the interaction of humans with their machines.

OpenAI, a company co-founded by Elon Musk, was quick to develop Transformer technology, and currently runs a very large language model called GPT-3 (Generative Pre-trained Transformer, third generation), which has created considerable buzz with its creative prowess.

“These language models have performed almost as well as humans in comprehension of text. It’s really profound,” says writer/entrepreneur James Yu, co-founder of Sudowrite, a writing app built on the bones of GPT-3.

“The entire goal – given a passage of text – is to output the next paragraph or so, such that we would perceive the entire passage as a cohesive whole written by one author. It’s just pattern recognition, but I think it does go beyond the concept of autocomplete.”

Essentially, all LLMs are “trained” (in the language of their master-creators, as if they are mythical beasts) on the vast swathes of digital information found in repository sources such as Wikipedia and the web archive Common Crawl.

They can then be instructed to predict what might come next in any suggested sequence. Such is their finesse, power and ability to process language that their “outputs” appear novel and original, glistening with the hallmarks of human imagination.
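As a concrete illustration of that predict-what-comes-next loop, here is a hedged sketch using the small, openly downloadable GPT-2 model through the Hugging Face transformers library (the prompt is invented; larger models such as GPT-3 work the same way but sit behind an API):

```python
# Next-sequence prediction in miniature: GPT-2 continues a prompt,
# one sampled token at a time (transformers library, illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The old lighthouse keeper looked out to sea and"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,        # sample from the distribution, not argmax
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```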

“We have a slightly special case with large language models because basically no one thought they were going to work,” says Henry Shevlin, senior researcher at the Leverhulme Centre for the Future of Intelligence, at Cambridge University, in Britain.

“These things have sort of sprung into being, Athena-like, and most of the general public has no clue about them or their capabilities.” In Greek mythology, Athena – the goddess of war, handicraft and practical reason – emerged fully grown from the forehead of her father, Zeus.

“Sometimes,” continues Shevlin, “we have a decade or so of seeing something on the horizon and we have that time to psychologically prepare for it. The speed of this technology means we haven’t done the usual amount of assessing how this is going to affect our society.

“I remember as a teenager the number of times I thought cancer had been cured and fusion had been discovered – it’s easy to get into a kind of cynicism where you think, ‘Well, nothing ever really happens.’ Right now stuff really is happening insanely fast in AI.”

Inspired by (but far from exact replicas of) the human brain, LLMs are mathematical functions known as neural networks. Their power is measured in parameters. Generally speaking, the more parameters a model has the better it appears to work – and the way new capabilities surface as models scale up has been described as “emergence”.
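A “parameter” here is nothing more mysterious than a learned numerical weight, and counting them is a one-liner. A toy sketch in PyTorch, with layer sizes invented purely for illustration:

```python
# Counting parameters (learned weights) in a toy two-layer network.
import torch.nn as nn

toy = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
n_params = sum(p.numel() for p in toy.parameters())
print(f"{n_params:,} parameters")   # ~2.1 million; GPT-3 has 175 billion
```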

Some have speculated that, by flexing their parameters, LLMs can satisfy the requirements of the legendary Turing test (aka the Imitation Game), proposed by AI pioneer Alan Turing as a test of human-level machine intelligence.

Most experts agree that a void exists in LLMs where consciousness is presumed, but even within the specialist AI community their perceived cleverness has created quite a stir. Dr Tim Scarfe, host of the AI podcast Machine Learning Street Talk, recently noted: “It’s like the more intelligent you are, the more you can delude yourself that there’s something magical going on.”

The phrase “stochastic parrots” – in other words, copiers based on probability – was coined by former members of Google’s Ethical AI team to describe the fundamental hollowness of LLM technology. The debate around an uncanny appearance of consciousness in LLMs continues to thicken, simply because their outputs are so spectacular.
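The “stochastic” half of the phrase is easy to demystify: at each step the model converts its scores into probabilities and draws the next word at random from them. A toy sketch, with words and scores invented for the example:

```python
# "Stochastic" in miniature: turn made-up model scores into
# probabilities (softmax) and sample the next word from them.
import numpy as np

words = ["sea", "sky", "storm", "silence"]
logits = np.array([2.0, 1.2, 0.4, -0.5])        # invented model scores
probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> probabilities

rng = np.random.default_rng()
print(rng.choice(words, p=probs))   # a different pick on different runs
```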

“Large Language Models can do all this stuff that humans can do to a reasonable degree of competency, despite not having the same kind of mechanisms of understanding and empathy that we do,” says Shevlin. “These systems can write haiku – and there are no lights on, on the inside.

“The idea that you could get Turing-test levels of performance just by making the models bigger and bigger and bigger was something that took almost everyone in the AI and machine-learning world by surprise.”

Sudowrite founder Yu confesses to jumping up and down with excitement when he first started experimenting with GPT-3 and its predecessor, GPT-2, but is careful to curb his enthusiasm: “We’re still in that hype part of the curve because we’re not quite sure yet what to make of it. I think there is an aspect of overhyping that is related to the act of ‘understanding’: the jury is still out on that.

“Does [an LLM] really understand what love is, just because it has read all this poetry and all these classic novels? I’m definitely more pragmatic in the sense that I see it as a tool – but it does feel magical. It is the first time that this has really happened, that these systems have gotten so good.”

The names of LLMs form an alphabet soup of acronyms. There’s BART, BERT, RoBERTa, PaLM, Gato and ZeRO-Infinity. Google’s LaMDA has 137 billion parameters; GPT-3 has 175 billion; Huawei’s PanGu-Alpha – trained on Chinese-language e-books, encyclopaedias, social media and web pages – has 200 billion; and Microsoft’s Megatron-Turing NLG has 530 billion.

The super-Zeus, alpha-grand-daddy of the LLM menagerie is Wu Dao 2.0, at the Beijing Academy of Artificial Intelligence. With 1.75 trillion parameters, Wu Dao 2.0 has been manacled in the imagination as the most fearsome dragon in the largest AI dungeon, and is especially good at generating modern versions of classical Chinese poetry.

In 2021, it spawned a “child”, a “student” called Hua Zhibing, a creative wraith who can make art and music, dance, and “learn continuously over time” at Tsinghua University. Her college enrolment marked one small step for a simulated student, one giant leap for virtual humankind.

“You need governments – or you need corporations with the GDP of governments – to create these models,” says Jathan Sadowski, senior research fellow in the Emerging Technologies Research Lab at Monash University in Melbourne, Australia.

“The reason the Beijing Academy of Artificial Intelligence has the largest one is because they have access to gigantic supercomputers that are dedicated to creating and running these models. The microchip industry needed to create these ultra-powerful supercomputers is one of the main geopolitical battlegrounds right now between the US, Europe and China.”

New apps powered by LLMs are launching on a weekly basis, and the range of potential uses continues to expand. In addition to art, Jasper AI automatically generates marketing copy, and pretty much any other kind of short-form content, on any subject; Meta’s Make-A-Video does precisely what you think it does, from any simple prompt you can imagine; and OpenAI’s Codex generates working computer code from commands written in English.
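As a flavour of the Codex workflow, here is a hedged sketch using the OpenAI Python client of the period (the model name follows OpenAI’s published Codex naming; the API key and the prompt are assumptions for illustration):

```python
# English in, code out: a sketch of a Codex completion request
# (legacy OpenAI Python client; assumes an OPENAI_API_KEY is set).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="code-davinci-002",   # a Codex code-generation model
    prompt="# Python\n# Write a function that reverses a string\n",
    max_tokens=64,
    temperature=0,
)
print(response.choices[0].text)
```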

LLMs can be used to generate colour palettes from natural language, summarise meeting notes and academic papers, design games and upgrade chatbots with human-like realism.

And the powers of LLMs are not limited to artistic pursuits: they are also being set to work on drug discovery and legal analysis. A massive expansion of use cases for LLMs over the coming months looks certain, and with it a sharp increase in concerns about the potential downsides.

On the artistic front, this starburst of computer-assisted creativity may seem like a highly attractive proposition, but there is a broad range of Promethean kickbacks to consider.

“A lot of [writers] are very reticent of this type of technology,” says Yu, recalling the early developmental days of Sudowrite, in partnership with OpenAI. “It usually rings alarm bells of dystopia and taking over jobs. We wanted to make sure that this was paired with craft, day in and day out, and not used as a ‘replacer’.

“We started with that seed: what could a highly interactive tool for ideas in your writing look like? It’s collaborative: an assistive technology for writers. We put in a bunch of our own layers there, specifically tailoring GPT-3 to the needs of creative writers.”

Yu has revived the mythological centaur – part man, part horse – as a symbol of human-machine collaboration: “The horse legs help us to run faster. As long as we’re able to steer and control that I think we’re in good shape. The problem is when we become the butt-end. I would be very sad if AI created everything.

“I want humans to still create things in the future: being the prime mover is very important for society. I view the things that are coming out of Sudowrite and these large language models as ‘found media’ – as if I had found it on the floor and I should pay attention to it, almost like a listening partner. What I’m hoping is that these machines will allow more people to be able to create.”

Few artists are better placed to reflect on the possibilities and pitfalls of creative interaction with machines than Karl Bartos, a member of pioneering German electronic band Kraftwerk from 1975 to 1990.

During that time he and his bandmates defined the pop-cyborg aesthetic and made critically lauded albums including The Man-Machine (1978) and Computer World (1981). For Kraftwerk, the metaphor of the hybrid human was central and rooted in the European romanticism of musical boxes and clocks.

“When the computer came in we became a musical box,” says Bartos, whose fascinating memoir, The Sound of the Machine, was published this year.

“We became an operating system and a program. Our music was part artificial, but also played by hand: most of it actually was played by hand. But at the time, when we declared ‘we are the man-machine’, it was so new it took some years really to get the idea across. I think the man-machine was a good metaphor. But then we dropped the man, and in the end we split.”

Bartos offers a cautionary perspective on the arrival of LLMs. “What Kraftwerk experienced in the 1980s in the field of music was exactly what’s happening now, all over the world. When the computer came in, our manifesto was just copy and paste.

“This is exactly the thing that a Generative Pre-trained Transformer does. It’s the same concept. And if you say copy and paste will exchange or replace the human brain’s creativity, I say you have completely lost the foot on the ground.”

It all depends how you define creativity, he says. “Artificial intelligence is just like an advertising slogan. I would rather call it ‘deep learning’. You can of course use an algorithm: if you feed it with everything Johann Sebastian Bach has written, it comes up with a counterpoint like him. But creativity is really to see more than the end of your nose.

“I would want to see computer software which will expand the expression of art – [not] remix a thought which has been done before. I don’t think it’s really a matter of what could be creative in the future. I think it’s just a business model. This whole artificial intelligence thing, it’s a commercial bubble. The future becomes what can be sold.”

There is no doubt that the commercial imperatives of big tech will be a significant factor in the evolution of LLMs, and considering the glaring precedent of fractured and easily corruptible social media networks, the spectre of catastrophic failures in LLMs is very real.

If the data on which an LLM is trained contains bias, those same fault lines will reappear in the outputs, and some developers are careful to signal their awareness of the problem even as the tide of new AI products becomes increasingly irresistible. Google rolled out generative text-to-art system Imagen to a limited test audience with an acknowledgement of the risk that it has “encoded harmful stereotypes”.

Untruthfulness is baked into LLM architecture: that is one of the reasons it tends to excel at creative writing. The adage that facts should never get in the way of a good story rings as true for LLMs as it does for bestselling (human) authors of fiction.

It wouldn’t be controversial to suggest that “alternative facts”, perfectly suited to storytelling and second nature to LLMs, can become toxic in the real world. A disclaimer on Character.AI, an app based on LLMs that “is bringing to life the science-fiction dream of open-ended conversations and collaborations with computers”, candidly warns that a “hallucinating supercomputer is not a source of reliable information”.

Former Google CEO Eric Schmidt noted at a recent conference in Singapore that if disinformation becomes heavily automated by AI, “we collectively end up with nothing but anxiety”.

There is also plagiarism. Any original artwork, writing or music produced by LLMs will have its origins – often easily identified – in existing works. Should the authors of those works be compensated? Can the person who wrote the generative prompt lay any claim to ownership of the output?

“I think this is going to come to a head in the courts,” says Yu. “It hasn’t yet. It’s still kind of a grey area. If, for example, you put in the words ‘Call me Ishmael’, GPT-3 will happily reproduce Moby-Dick. But if you are giving original content to a large language model, it is exceedingly unlikely that it would plagiarise word for word for its output. We have not encountered any instances of that.”

Environmentally, LLMs generate heavy footprints, such is the immensity of computing power they require. A 2019 academic paper from the University of Massachusetts outlines the “substantial energy consumption” of neural networks in relation to natural language processing. It is a problem that concerns Bartos.

“In the early science-fiction literature, they had so many robots trying to kill human beings, like gangsters,” he says. “But what will kill us is that we will build more and more computers and need more and more energy. This will kill us. Not robots.”

In popular culture, sci-fi considerations of dangerous AI have tended to take physical shape – but the massed ranks of LLM parameters don’t appear as an army of shiny red-eyed cyborgs determined to turn us into sushi.

We used to be unnerved by the uncanny valley: that feeling of instinctive suspicion when faced with something in the physical world that is almost, but definitely not, human. Now, the uncanny valley has been subsumed into the landscape of our dreams, and once we have allied ourselves with LLMs, it may be harder to tell where we end and it begins.

For now, the technology is showing itself as a bamboozling sleight of hand, weighted with immense power. Our reaction is often an adrenaline boost of wonderment followed by an acceptance tinged with sadness, when we realise that “imaginative” machines have forever altered the sense of our own humanity.

“A lot of how the tech sector acts is largely based on a kind of continual normalisation,” says Sadowski. “That sense of initial wonder and then melancholy is a very interesting emotional roller coaster.

“What it ultimately shows is that there’s a kind of forced acquiescence. It’s a sense that we can’t do anything about it: apathy as a self-defence mechanism. I see this a lot with the debate around privacy, which we don’t really talk about any more because everyone has generally just come to the conclusion that privacy is dead.”

The meme phase of LLMs has given us a carnival of whimsy – ask for an image of “a panda on a bicycle painted in the style of Francis Bacon” and the generative art machines will deliver – and it is easy to be tech-struck by the multiverse of creative possibilities.

LLM evangelists speak not just of gifting artistic talent to the masses, democratising creativity, but also of “finding the language of humanity” through the machines. There is talk of an AI-driven Cambrian explosion of creativity, to surpass that which followed the arrival of the internet in 1994 and the migration to mobile in 2008. Lurking on the sidelines, however, is a darkening shadow.

“Things like [generative art app] Stable Diffusion have the potential to give incredible boosts to our creativity and artistic output but we are definitely going to see some industries scale down,” says Shevlin. “There’s going to be massively reduced demand for human artists.”

There has already been a backlash to creative AI in Japan, where the rallying cry “No AI Learning” accompanied outbursts of online hostility when the works of recently deceased South Korean artist Kim Jung-gi (aka SuperAni) were given the generative LLM treatment.

Some artists were angered that a cherished legacy could so quickly and easily be dismembered and exploited. Others pointed out that Kim himself spoke approvingly of the potential for AI art technologies to “make our lives more diverse and interesting”.

It is noteworthy that stock image provider Getty Images has taken a stance of solidarity with human creatives and banned AI-generated content while competitor Shutterstock has partnered with OpenAI and DALL·E 2.

Battle lines are being drawn.

“The rubber will really hit the road, not when consumers make a decision to use these products, but when somebody else makes that decision for us,” says Sadowski, citing the possibility that journalists will have no choice but to accept writing assistance from an LLM because, for example, “data show that you are able to write three times faster because of it”.

Attention spans have already been concussed by an excess of content, to the point where much online storytelling is reduced to efficient lists of bullet points tailor-made for the TL;DR (“too long; didn’t read”) generation. LLMs are, therefore, also TL;DR machines: they can spit out summary journalism for breakfast.

Tellingly, when asked to generate an article about job displacement (for Blue Prism, a company specialising in workplace automation), GPT-3 offered the following opinion: “It’s not just manual and clerical labour that will be automated, but also cognitive jobs. This means that even professionals like lawyers or economists might find themselves out of a job because they can no longer compete with AI-powered systems which are better at their jobs than they could ever hope to be.”

That is the machine talking, in its fictive way – music to the ears of techno-utopians who hope to shape a future in which AI does all the work, but rather concerning for anyone who depends on a “cognitive” job.

Attitudes to the integration of AI into society tend to vary by geography. A 2020 study by Oxford University found that enthusiasm for AI in China was markedly different to the rest of the world. “Only 9 per cent of respondents in China believe AI will be mostly harmful, with 59 per cent of respondents saying that AI will mostly be beneficial.

“Scepticism about AI is highest in the American continents, as both Northern and Latin American countries generally have at least 40 per cent of their population believing that AI will be harmful. High levels of scepticism can be found in some countries in Europe.”

We should be careful here, says Shevlin, to avoid lazy cultural stereotyping. “Equally, I think, it would be myopic not to recognise there are significant cultural differences that may have a big role in affecting how different cultures respond to these forms of AI that seem less like tools and more like colleagues or friends.”

Generational attitudes to LLMs are also likely to become more pronounced over time, says Yu. “When my [seven-year-old son] sees DALL·E and we’ve been playing for about 30 minutes on it, he says, ‘Daddy I’m bored.’ And that really hit me because it made me think, wow, this is the default state of the world for him.

“He’s going to think, ‘Oh yeah, of course computers can do creative writing and paint for me.’ It’s mind-blowing to me that when he is going to be an adult, how he treats these tools will be radically different than me.”

According to Shevlin, that difference could become a generational schism: “There’s a very good likelihood that children will grow into adults who treat AI systems as if they were people. Suggesting to them that these systems might not be conscious could seem incredibly bigoted and retrograde – and that could be something our children hate us for.”

Shevlin has been exploring the connections between social AI (broadly, any AI system designed to interact with humans) and anthropomorphism through the lens of chatbots, in particular the GPT-3-powered Replika. “I was astonished everyone was in love with their Replikas, unironically saying things like, ‘My Replika understands me so well, I feel so loved and seen.’

“As large language models continue to improve, social AI is going to become more commonplace and the reason they work is because we are relentless anthropomorphisers as a species: we love to attribute consciousness and mental states to everything.

“Two years ago I started giving this [social AI] lecture, and I think I sounded to some people a bit like a kook, saying: ‘Your children’s best friends are going to be AIs.’ But in the wake of a lot of the stuff that’s happened [with LLMs] in the last two years, it seems a bit less kooky now.”

Shevlin’s main goal is to start mapping some of the risks, pitfalls and effects of social AI. “We are right now with social AI where we were with social networking in about the year 2000. And if you’d said back then that this stuff is going to decide elections, turn families against one another and so forth, you’d have seemed crazy. But I think we’re at a similar point with social AI now and the technology that powers it is improving at an astonishing rate.”

The future pros and cons, he speculates, could be equally profound. “There are lots of potential really positive uses of this stuff and some quite scary negative ones. The pessimistic version would be that we’ll spend less time talking to each other and far more time interacting with these systems that are completely empty on the inside. No real emotions, just this ersatz simulacrum of real human feeling. So we all get into this collective delusion, and real human relationships will wither.

“A more optimistic read would be that it would allow us to explore all sorts of social interactions that we wouldn’t otherwise have. I could set up a large language model with the personality of Stephen Hawking or Richard Dawkins or some other great scientist, to chat to them.”

Even though LLMs are not sentient, it seems likely that more of us will believe they are, as the technology improves over time. Even if we don’t fully buy into machine consciousness, it won’t really matter: magic is enjoyable even if you know how the trick is done.

LLMs are in this sense the computational equivalent of magician David Copperfield levitating over the Grand Canyon – if we can’t see the wires, we’re happy to marvel at the effect.

“The AI doesn’t need to be perfect in its linguistic capabilities in order to get us to quite literally and sincerely attribute to it all sorts of mental states,” says Shevlin, who likens the intelligence of LLMs to the condition of aphantasia, which describes people who have zero mental imagery.

“So if you ask them to imagine what their living room looks like, or what books are on the shelf, they won’t be able to create a picture in their head. And yet aphantasics can do most of the same things that people with normal mental imagery can do.

“That’s just an analogy for the broader feeling I have of interacting with large language models: how much they can do – that we rely on consciousness, understanding, emotion to do – without any of those things.”

Yu admits he has wrestled with questions raised by the emotive abilities of LLMs, in light of his guess that a machine-author will probably land on The New York Times bestseller list in the not too distant future.

“If it produces an emotional response in you then does it matter what the source is? I think it’s more important that we are reading closely – if we lose that, we could basically lose our humanity. I think of AI as alien intelligence.

“Hollywood and a lot of sci-fi stories anthropomorphise AIs, which makes sense, but they’re not like us. I think that gets to the heart of it. If this alien intelligence can understand humans so well as to be able to reproduce resonant emotions in us, then are we not unique?”

For Yu, the existential implications of that question could be offset by the liberating effects of our creative interaction with LLMs. “One potential outcome is that there will be about a million GPT-3s blossoming, and artists will basically cultivate their own neural network – their voice in the world.

“It’s still so early in the first inning of [this] technology. The next step is full customisation of these models by the artists themselves. I think the narrative will shift at that point. Now we’re still in the meme phase, which is very distracting.

“The next wave of integration is putting the pieces together in a way that actually feels like Star Trek, when you can essentially speak to the machine and it just does all these things.”

The transition to a more sophisticated level of machine collaboration, adds Yu, “will be messy”. Shevlin thinks we should take steps to minimise the disorientation we are going to feel as LLM technology starts to make its way into our professional and social lives.

“I think you’re going to be less discombobulated if you have at least some basic grounding and familiarity with the systems that are coming along. I’m not suggesting everyone go out and become a machine learning expert, but this is an area where we are moving exceptionally fast and there’s additional value in being very well informed.”

Sadowski advocates for a more proactive reaction, reclaiming Luddism – the 19th-century anti-industrial protest movement – for the generative age.

“Luddism has become this kind of derogatory term, often used as a synonym for primitivism, a fear of technology – a kind of technophobia versus the dominant cultural technophilia.

“But the Luddites were one of the only groups to think about technology in the present tense. And that doesn’t just mean thinking about the supposedly wonderful utopian visions but instead to understand technology as a thing that exists currently in our life.

“A Luddite approach would be to prioritise socially beneficial things as the goal of these technologies. I don’t take for granted that these things are wonders, or that these things are progress, or that these things are going to improve our lives. They have a lot of potential to change society in profound ways and we should have a say in that. Luddism is really about democratising innovation.”

Bartos also rejects the equation of growth with progress: “People think the concept of growth is progress: I think that’s wrong. Things like ‘generative pre-trained transformer number three’ will be sold in the entertainment industry: maybe it will pour out a thousand movie scripts a month or two million chorales by Bach. That’s fine. But who needs it, really?

“I can’t imagine a world going back to a hundred years ago – I’m using technology all the time. I have computers, I’m not against technology. But you know the most important thing about working with a computer? You have to remember where the button is to switch it off.” SCMP
