Could a bot have written this intro? The hype around AI has pushed that question to the center of public discourse. Conversational bots like ChatGPT and image generators like DALL-E — both built by OpenAI — are popping up everywhere.
Despite some case studies optimistic about their potential, current models are limited and the results they produce are deeply flawed. But it doesn't seem to matter that the technology isn't ready for prime time. The models just need to tell a compelling story to the humans writing the checks — and they are doing exactly that.
Microsoft, which built its own Bing chatbot, has invested $13 billion in OpenAI. Early-stage ventures poured $2.2 billion into generative AI last year alone, and this year Salesforce announced a $250 million fund to invest in the space. Headlines and institutions are declaring that AI is the future of work, especially as a replacement for writers and artists. The fact that these breathless predictions are outpacing the quality of the technology itself says a lot about our cultural moment — and about our long-standing tendency to devalue creative work.
The supposed promise of AI's future is efficient, abundant content. Office workers can now generate entire presentations with a prompt or a click. Creative agencies are using image generators to mock up concepts for clients. Even literary magazines report being flooded with AI-generated submissions, and yes, editors are publishing AI-generated articles and illustrations.
But AI models have been shown time and again to perpetuate biases, misread cultural context, and prioritize sounding convincing over telling the truth. They are trained on datasets built from the creative work of humans, an approach that could be labeled plagiarism or data mining. And the models that drive ChatGPT and DALL-E are black boxes, so the provenance of that data cannot be technically traced.
Today these and other models require humans (with their own biases) to train them toward "good" results and then to check their work at the end. Because the tools are built for pattern matching, their output is often repetitive and thin — an aesthetic of likeness rather than invention.
The impetus to replace human workers, then, does not come from the technology's unbeatable capabilities. It stems from years in which companies large and small — especially in publishing, technology, and media — tightened the screws on creative work so they could spend less and less on the people who do it.
In today's economic downturn, even tech companies are cutting costs through mass layoffs (including cutting AI ethics teams) while funding and selling AI tools. But the situation is more dire still for writers, artists, and musicians, who have struggled to make a living for far longer.
Pay for writers, editors, and illustrators in this country has stagnated over the past two decades. Some nations have begun to treat art as a public good: Ireland is experimenting with paying artists to make art, and other countries subsidize public access to the arts. But in the US, public funding for the arts is embarrassingly low compared with other wealthy Western nations, and it dropped further during the pandemic. Many artists have to migrate from one social media platform to another just to build an audience for their work and earn an income.
Meanwhile, ubiquitous streaming subscriptions and algorithmic feeds, with their laser focus on maximizing engagement, have flattened creativity into an endless scroll of mediocre, repetitive styles — an untenable model for original work.
Automation is the next chapter in this story of ever-cheaper content. How much lower can the value of art go? Why pay creative workers even minimum wage when you can program machines to churn out interchangeable units of content?
Well, because these models are no substitute for human creative work. If we want to break out of repetitive molds, fight to unravel prejudices, and build new possibilities, the work must begin with human beings.
The danger of reducing creative work to widgets for outsourcing is that we lose the reflection and iteration steps that produce new connections. The large language models behind chatbots are designed to provide a single, authoritative answer, contracting the world to the bounds of the information they have already ingested.
The human brain, by contrast, has a unique recursive capacity that allows us to interpret ideas beyond any set of rules. Every step of the creative process — no matter how slow, small, or boring — is an expansive act, carrying a concept somewhere new and imagining a wider world than the one that exists today.
An AI takeover is not inevitable, whatever some business and technology leaders say. This isn't the first cycle of tech hype, and some regulators, unions, and artists are already pushing back. Following the cryptocurrency meltdown, the Federal Trade Commission established an Office of Technology to support enforcement in emerging technology areas, and the agency has released several public notices warning that false claims about products' AI capabilities will be challenged.
The Writers Guild of America, which is preparing to strike, has proposed protections and standards around the use of AI in screenwriting. SAG-AFTRA, the union representing film, television, and radio performers, has said that if studios want to use AI to simulate actors' performances, they will have to negotiate with the union. Some researchers are building tools to protect visual artists' work from being absorbed into image-generation models, and others have released open-source systems to surface biases in AI models.
But the broader call to action is a cultural one: recognizing that creative work is not just a commodity or content, but a necessary and highly skilled practice that deserves robust funding and support. Creativity is how meaning is constructed in a culture. That is a task that cannot be done by machines — and should not be controlled by the companies that make them. A bot might be able to write a quick ending to this story, but we have to ask ourselves: whose voices would we be losing?
Rebecca Ackermann has written about technology and culture for MIT Technology Review, Slate, and elsewhere. Previously, she worked as a designer at tech companies including Google and NerdWallet.