In the beginning, the chatbots and their ilk fed on the human-made internet. Various generative-AI models of the sort that power ChatGPT got their start by devouring data from sites including Wikipedia, Getty, and Scribd. They consumed text, images, and other content, learning, through algorithmic digestion, their flavors and textures, which ingredients go well together and which do not, in order to concoct their own art and writing. But this feast only whetted their appetite.
Generative AI is utterly reliant on the sustenance it gets from the web: Computers mime intelligence by processing almost unfathomable amounts of data and deriving patterns from them. ChatGPT can write a passable high-school essay because it has read libraries’ worth of digitized books and articles, while DALL-E 2 can produce Picasso-esque images because it has analyzed something like the entire trajectory of art history. The more data they train on, the smarter they appear.
Eventually, these programs will have ingested almost every human-made bit of digital material. And they are already being used to engorge the web with their own machine-made content, which will only continue to proliferate—across TikTok and Instagram, on the sites of media outlets and retailers, and even in academic experiments. To develop ever more advanced AI products, Big Tech might have no choice but to feed its programs AI-generated content, or just might not be able to sift human fodder from the synthetic—a potentially disastrous change in diet for both the models and the internet, according to researchers.
[Read: AI doomerism is a decoy]
The problem with using AI output to train future AI is straightforward. Despite stunning advances, chatbots and other generative tools such as the image-making Midjourney and Stable Diffusion remain sometimes shockingly dysfunctional—their outputs filled with biases, falsehoods, and absurdities. “Those mistakes will migrate into” future iterations of the programs, Ilia Shumailov, a machine-learning researcher at Oxford University, told me. “If you imagine this happening over and over again, you will amplify errors over time.” In a recent study on this phenomenon, which has not been peer-reviewed, Shumailov and his co-authors describe the conclusion of those amplified errors as model collapse: “a degenerative process whereby, over time, models forget,” almost as if they were growing senile. (The authors originally called the phenomenon “model dementia,” but renamed it after receiving criticism for trivializing human dementia.)
Generative AI produces outputs that, based on its training data, are most probable. (For instance, ChatGPT will predict that, in a greeting, doing? is likely to follow how are you.) That means events that seem to be less probable, whether because of flaws in an algorithm or a training sample that doesn’t adequately reflect the real world—unconventional word choices, strange shapes, images of people with darker skin (melanin is often scant in image datasets)—will not show up as much in the model’s outputs, or will show up with deep flaws. Each successive AI trained on past AI would lose information on improbable events and compound those errors, Aditi Raghunathan, a computer scientist at Carnegie Mellon University, told me. You are what you eat.
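That pruning of the improbable can be seen in a toy simulation. The sketch below is my own illustration, not the researchers’ code: it repeatedly fits a word-frequency model to samples drawn from the previous model, and the rare “words”—stand-ins for unconventional phrasings or underrepresented faces—tend to vanish within a handful of generations. The vocabulary, probabilities, and sample size are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "a", "quixotic", "zyzzyva"]
# The last two "words" are rare, standing in for unconventional phrasings
# or underrepresented images. All probabilities here are made up.
true_probs = np.array([0.30, 0.25, 0.20, 0.15, 0.08, 0.015, 0.005])

probs = true_probs.copy()
for generation in range(1, 11):
    # "Train" the next model: estimate word frequencies from a finite sample
    # produced by the current model, then sample from that estimate next time.
    sample = rng.choice(len(vocab), size=200, p=probs)
    counts = np.bincount(sample, minlength=len(vocab))
    probs = counts / counts.sum()
    survivors = [w for w, p in zip(vocab, probs) if p > 0]
    print(f"gen {generation}: {len(survivors)} words survive -> {survivors}")
```

Once a word’s estimated probability hits zero, no later generation can recover it—the information is simply gone from the food chain.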
Recursive training could magnify bias and error, as previous research also suggests—chatbots trained on the writings of a racist chatbot, such as early versions of ChatGPT that racially profiled Muslim men as “terrorists,” would only become more prejudiced. And if taken to an extreme, such recursion would also degrade an AI model’s most basic functions. As each generation of AI misunderstands or forgets underrepresented concepts, it will become overconfident about what it does know. Eventually, what the machine deems “probable” will begin to look incoherent to humans, Nicolas Papernot, a computer scientist at the University of Toronto and one of Shumailov’s co-authors, told me.
The study tested how model collapse would play out in various AI programs—think GPT-2 trained on the outputs of GPT-1, GPT-3 on the outputs of GPT-2, GPT-4 on the outputs of GPT-3, and so on, until the nth generation. A model that started out producing a grid of numbers displayed an array of blurry zeroes after 20 generations; a model meant to sort data into two groups eventually lost the ability to distinguish between them at all, producing a single dot after 2,000 generations. The study provides a “nice, concrete way of demonstrating what happens” with such a data feedback loop, Raghunathan, who was not involved with the research, said. The AIs gobbled up one another’s outputs, and in turn one another, a sort of recursive cannibalism that left nothing of use or substance behind—these are not Shakespeare’s anthropophagi, or human-eaters, so much as mechanophagi of Silicon Valley’s design.
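To get a feel for how quickly such recursion degrades a model, consider a much simpler stand-in than GPT—a back-of-the-envelope sketch of my own, not the study’s actual setup. Each generation fits a bell curve to a finite batch of samples drawn from the previous generation’s bell curve; run long enough, the estimated spread collapses toward zero, much as the clustering model in the study shrank to a single dot.

```python
import numpy as np

rng = np.random.default_rng(42)

mean, std = 0.0, 1.0    # generation 0: the "real" data distribution
n_samples = 100         # finite training set per generation (an arbitrary choice)

for generation in range(1, 2001):
    data = rng.normal(mean, std, size=n_samples)  # sample from the current model
    mean, std = data.mean(), data.std()           # fit the next model to those samples
    if generation in (1, 10, 100, 1000, 2000):
        print(f"gen {generation:4d}: mean={mean:+.3f}, std={std:.3f}")
```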
The language model they tested, too, completely broke down. The program at first fluently finished a sentence about English Gothic architecture, but after nine generations of learning from AI-generated data, it responded to the same prompt by spewing gibberish: “architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.” For a machine to create a functional map of a language and its meanings, it must plot every possible word, regardless of how common it is. “In language, you have to model the distribution of all possible words that may make up a sentence,” Papernot said. “Because there is a failure [to do so] over multiple generations of models, it converges to outputting nonsensical sequences.”
In other words, the programs could only spit back out a meaningless average—like a cassette that, after being copied enough times on a tape deck, sounds like static. As the science-fiction author Ted Chiang has written, if ChatGPT is a condensed version of the internet, akin to how a JPEG file compresses a photograph, then training future chatbots on ChatGPT’s output is “the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.”
The risk of eventual model collapse does not mean the technology is worthless or fated to poison itself. Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the National AI Institute for Foundations of Machine Learning, which is sponsored by the National Science Foundation, pointed to privacy and copyright concerns as potential reasons to train AI on synthetic data. Consider medical applications: Using real patients’ medical information to train AI raises huge privacy concerns that using representative synthetic records could bypass—say, by taking a collection of people’s records and using a computer program to generate a new dataset that, in the aggregate, contains the same information. To take another example, limited training material is available in rare languages, but a machine-learning program could produce permutations of what is available to augment the dataset.
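As a rough illustration of the synthetic-records idea—the numbers and the simple Gaussian model below are placeholders I invented, not anything a hospital would actually deploy—one could fit a statistical model to real records and release only draws from it:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "records": columns might stand for age, blood pressure, cholesterol.
# Every number below is fabricated for illustration.
real = np.array([
    [34.0, 118.0, 180.0],
    [58.0, 135.0, 220.0],
    [45.0, 126.0, 199.0],
    [67.0, 142.0, 240.0],
    [29.0, 110.0, 170.0],
])

mu = real.mean(axis=0)           # aggregate statistics of the real data
cov = np.cov(real, rowvar=False)

# Release only synthetic draws from the fitted model, never the originals.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print("real means:     ", np.round(mu, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```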
[Read: ChatGPT is already obsolete]
The potential for AI-generated data to result in model collapse, then, emphasizes the need to curate training datasets. “Filtering is a whole research area right now,” Dimakis told me. “And we see it has a huge impact on the quality of the models”—given enough data, a program trained on a smaller amount of high-quality inputs can outperform a bloated one. Just as synthetic data aren’t inherently bad, “human-generated data is not a gold standard,” Shumailov said. “We need data that represents the underlying distribution well.” Human and machine outputs are just as likely to be misaligned with reality (many existing discriminatory AI products were trained on human creations). Researchers could potentially curate AI-generated data to alleviate bias and other problems by training their models on more representative data. Using AI to generate text or images that counterbalance prejudice in existing datasets and computer programs, for instance, could provide a way to “potentially debias systems by using this controlled generation of data,” Raghunathan said.
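What filtering looks like in practice varies widely, but a minimal sketch might score candidate training texts and keep only the highest-quality fraction. The scorer below is a crude heuristic of my own devising—real pipelines use trained classifiers or perplexity under a reference model—so treat it as illustrative only.

```python
def quality_score(text: str) -> float:
    """Crude, made-up heuristic: reward varied vocabulary and non-trivial length."""
    words = text.split()
    if not words:
        return 0.0
    unique_ratio = len(set(words)) / len(words)   # penalize repetitive junk
    length_bonus = min(len(words) / 20.0, 1.0)    # favor longer, fuller sentences
    return unique_ratio * length_bonus

def filter_corpus(texts, keep_fraction=0.5):
    """Keep only the top-scoring fraction of candidate training texts."""
    ranked = sorted(texts, key=quality_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

corpus = [
    "Gothic architecture flourished in Europe during the High Middle Ages.",
    "jackrabbits jackrabbits jackrabbits jackrabbits jackrabbits",
    "The cathedral's pointed arches carry weight more efficiently than rounded ones.",
    "blue @-@ tailed red @-@ tailed yellow @-",
]
print(filter_corpus(corpus))
```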
A model that had collapsed as dramatically as the ones Shumailov and Papernot documented would never be released as a product, anyway. Of greater concern is the compounding of smaller, hard-to-detect biases and misperceptions—especially as machine-made content becomes harder, if not impossible, to distinguish from human creations. “I think the danger is really more when you train on the synthetic data and as a result have some flaws that are so subtle that our current evaluation pipelines do not capture them,” Raghunathan said. Gender bias in a résumé-screening tool, for instance, could in a subsequent generation of the program morph into more insidious forms. The chatbots might not eat themselves so much as leach undetectable traces of cybernetic lead that accumulate across the internet with time, poisoning not just their own food and water supply, but humanity’s.