Last month, it was reported that OpenAI — the company behind the generative AI chatbot ChatGPT — was on track to make $1 billion in revenue this year from user subscriptions and by licensing its technology to other companies. ChatGPT was introduced into the world only last November, and its explosive growth shows the demand for AI-powered chatbots and their potential to transform lives.
But pressing ethical issues still need to be addressed. Chief among them: What do AI companies owe to the creators whose work informs their chatbots?
Generative AI chatbots, which can conduct human-like conversations and generate unique written responses to prompts from users, are taught to communicate using writing samples from hundreds of thousands of writers whose work is available on the web. Even though the written work is publicly available, much of it is protected under copyright law.
According to OpenAI, ChatGPT doesn’t produce plagiarized outputs — it learns from the ideas in the books and articles on which it is trained and generates unique content. That OpenAI could become a billion-dollar corporation by using human writers’ work to train its artificial intelligence machine does not sit well with many creators. And some writers are asking for compensation for their contributions, as well as ways to opt out of having their data used in this way.
Writers are concerned for a couple of reasons, according to Lila Shroff, a tech ethics and policy fellow at Stanford University. They feel exploited because their painstaking intellectual labor has been used without their permission. And because AI is trained on their data, it could eventually replace them by automating the writing process. One of the main demands of the Hollywood writers’ strike has been for a contract that protects writers from losing their jobs to artificial intelligence.
While Congress is alarmed by AI’s unconstrained, meteoric rise in the last year, lawmakers find it hard to regulate. Federal authorities have been slow to keep up because AI is such a new technology and is advancing at a breakneck pace. Lawmakers are also ill-equipped to deal with the AI boom because most do not have a technical background and struggle to understand the mechanisms by which AI technologies work. But it is clear there need to be laws and regulations to ensure that technological advancement takes shape in a way that is fair.
Although AI chatbots are not good enough yet to completely eliminate the need for writers, they are well on their way. If you ask ChatGPT to generate an article in the “voice” of any writer, it can do so convincingly and will even send back pointers on why it believes its output is similar to the style of the person being requested. The list of writers it can mimic includes not only acclaimed authors, such as Margaret Atwood and Zadie Smith, but also lesser-known bloggers and emerging voices who have published material on the web — even members of The Times’ editorial board.
I experimented with this by plugging in the names of friends and colleagues and asking ChatGPT to generate short essays in their “voice” based on whatever it knew about them. When I showed the subjects the results, they were impressed and creeped out. Editorial writer Carla Hall, on seeing a ChatGPT-generated article in her voice and under her byline, remarked: “This is kinda wild! I started reading it thinking, ‘Wait, what? When did I write this?’ But then I thought, ‘No, this doesn’t sound like anything I’ve written.’ And yet, I could have written the lead sentence.”
How does an AI seem to know them so well? Will this be part of a writer’s life in the 21st century — knowing that AI will learn your voice and continue to generate material in it for all posterity? Or, as Atwood put it, will writers stand to be, in a manner of speaking, “murdered by their [AI] replicas”?
At least two groups of authors, one including Mona Awad and Paul Tremblay and another including comedian Sarah Silverman, have filed copyright infringement lawsuits against OpenAI. Silverman’s group alleges that OpenAI fed ChatGPT copyrighted books to “profit richly” from them. Each group seeks nearly $1 billion in damages.
While technological progress is something to celebrate, we need to ensure that this technology is built with guardrails that protect both its users and the creators of the content the AI was trained on.
To see what ChatGPT had to say about this, I pasted the text of my essay into it and asked the chatbot to respond to my argument in its own words. Here is what it had to say:
“The growth and success of AI, like ChatGPT demonstrate the boundless potential of technology, but it also underscores the importance of ensuring that the rights and efforts of creators are respected and compensated. In our quest for progress, it’s crucial that we don’t leave our ethical compass behind. There’s a need for a balanced dialogue between AI developers and the creators who unintentionally fuel them, ensuring that the benefits of this technology are widely shared without trampling on individual rights.”