AI models show “collapse” if repeatedly trained on the outputs of other neural networks. This means losing knowledge about the rare events in the modelled probability distribution, misjudging the probability of others, and thus becoming dull, stereotypical, simple, and wrong. The worry is that only the first generation of language models could have been trained on the human-produced text corpus of the internet. Now chatbots and other large language models produce so much text on the internet that the next generations of models might mostly be trained on their output. Analogous problems arise with AI image and video generators whose output becomes training material for the next models. Imagine generation after generation of image generators mostly trained on the output of previous generations of image generators. They would learn and amplify the mannerisms of these models, and incorporate their errors without check. Surely, the number of fingers on hands in generated images would go from mostly five to perhaps eight, ten, or twelve, with more and more variability in each generation of models. Eyes would move around on heads, while the style would become more and more same-y.
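The core mechanism can be shown with a toy sketch (my own construction, not taken from any particular collapse paper): treat a “model” as nothing more than the word frequencies of its training corpus, and train each generation only on text sampled from the previous generation’s output. The corpus, words, and numbers are arbitrary illustrations.

```python
import random

def next_generation(corpus, n):
    # "Train" on the corpus by fitting its empirical word frequencies,
    # then emit the next generation's corpus of n tokens sampled from
    # that fit. Sampling with replacement from the token list is
    # exactly sampling from the empirical distribution.
    return random.choices(corpus, k=n)

random.seed(42)
# Generation 0: "human" text with one rare word ("zephyr", 1% of tokens).
corpus = ["the"] * 60 + ["cat"] * 39 + ["zephyr"]

vocab_sizes = [len(set(corpus))]
for _ in range(40):
    corpus = next_generation(corpus, 100)
    vocab_sizes.append(len(set(corpus)))

print("vocabulary size per generation:", vocab_sizes)
```

Because each generation can only emit words the previous one contained, the vocabulary size never grows: a rare word, once missed in sampling, is lost for good, while the frequencies of the survivors drift. That is the loss of rare events and the misjudged probabilities described above, in miniature.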
Neural networks are meant to emulate human behaviour, and they do it well. So artificial and biological neural networks clearly share some behaviours. Why then do they not share this one? Why doesn’t humanity collapse, even though each generation of the neural networks in our heads, the networks that we are, has been trained on the previous generation’s output for as long as language and writing have existed?
The answer is that it partially does. It does in areas where the particulars of behaviour don’t matter, such as language. Symbols, within some constraints of practicability, we are free to choose. And indeed, language and customs change between generations, evolving into something new all the time. Over generations, language without active policing changes so much that the relationship is hard to recognise after a while. Standard French vs Creole, or medieval English vs modern English, are examples. If there are geographical boundaries, the changes evolve in various directions with vastly different outcomes, as demonstrated by the Indo-European language family, which includes Latin, English, and Sanskrit, all derived from the same origin within a few thousand years of separate evolution. Some of the changes are clear improvements that simplify use and learning, such as dispensing with grammatical genders. Others introduce unnecessary complexity. Overall, from cursory observation, the total difficulty seems to be kept roughly constant, perhaps at a level that can be acquired by humans during a usual childhood.
The constraints on change seem to be practicality and usability. There is a set of contents that speakers want to convey and for which they need efficient means. Words and sentences should be not too long, easy to form, and easy to distinguish for the users. Examples of content that humans might want to express elegantly are future tense and potentials, such as “I will go” and “I might go,” as opposed to “I go.” A pidgin, as a simplified derivative of a language such as English or French, lost through incomplete and careless transmission the originating language’s specific ways of expressing such concepts, and could express only the more basic concepts elegantly. However, the next generation of users, the children of the original pidgin speakers, came up with new ways of expressing the more complex concepts. These new ways have nothing to do with the original ways, but are just as powerful, while surely being as ad hoc as the original means were at the time of their invention. So the needs of speakers are relatively fixed, being a constraint on language evolution, while the means to reach those aims are largely free.
In this sense we might expect new AI languages to develop, but they will only be useful if we apply the constraints that efficient communication requires. Then they might evolve into something better than humans have produced thus far. Given that humans feel strongly about their own languages, though, they might not want to learn these new ways of expression.
The second way in which human neural networks avoid becoming maladaptive is by training on external experience, which does not depend on other neural networks’ models of reality. It is important to honour and make one’s own experiences, and not only with humans, but with nature itself. This is playing and exploring and climbing for a child. It is observing nature and experimenting with real things for a learner. Man-made models and learning apps are no substitute, because those are products of other neural networks, and over a few generations of models they will degrade. We have seen that with the state of knowledge about human anatomy from late antiquity to the Renaissance. In that period, dissections of humans were widely banned, and even animal dissections were seen as dispensable, since medical professionals already had the ultimate source: Galen’s writings on anatomy. Every new generation learned from these texts, and from derivative texts that others produced without checking against reality. What they did not know was that Galen had already got much of his “knowledge” from dissections of animals, not humans. So following generations learned wrong concepts that were a mix of animal and human anatomy. This faulty knowledge degraded further over the following centuries, through misunderstandings perpetuated and expanded by generations of scholars, through manual text and image reproductions becoming corrupted over generations of book copies, and through unchecked mixing with the fanciful and superstitious, such as astrology. After centuries of learning merely from neural network outputs, when people began to look at actual human bodies again in the Renaissance, a bizarre thing happened. The dissection master would show an organ from a human body, say a kidney, while the professor would read from Galen and make tall claims, such as that the kidney has lobes (dog kidneys have lobes; human kidneys have not).
But students and everyone else would somehow still believe they saw lobes on the human kidney in front of them, because the text said so. Or at least they would claim to see them, because there was an exam to be passed. This indeed was a time when neural network outputs were valued more highly than one’s own observations. Only with the founder of modern anatomy, Andreas Vesalius, and his book De Humani Corporis Fabrica did real observation start to win, against much resistance, as the primary source of learning about human anatomy. Only after this could the development of today’s mostly helpful modern medicine begin. It seems therefore counterproductive if “modern” medical schools believe the direct experience of cadaveric dissection can be replaced with instruction using textbooks and models because those are easier and cheaper.
Similar things seem to have happened in rocketry, where the early experimentation of Goddard, the Verein für Raumschiffahrt, and the Jet Propulsion Laboratory was replaced with sterile computer work that made space rocket development slow, unreliable, and expensive. Lately, the renewed direct experiments of SpaceX, the company that builds rockets and watches how they fail in testing, have again proved superior. They have managed to outclass all other competitors, who are now in terminal decline. In conclusion, although it may seem cruel or wasteful, dissecting a frog or a human body, going to the chemistry lab, and building a rocket and launching it cannot be substituted for long by plastic models, 3D models, simulations, or animations.
Without any possibility of grounding in experience, areas of human discourse are known to fragment and collapse quickly. Examples are religions that reject one’s own observation (“hubris”) and rely mostly on texts, and on opinions of others based on these texts. Some straightforward examples are Christianity and Judaism, with all the commentary written on the basis of previous texts, and with minimal perceived need for grounding in the outside world or for usefulness. Within a few generations, practices based on these commentaries fragmented into a variety of denominations such as Catholic, Orthodox, Protestant, Methodist, … or Progressive, Hasidic, …. Sometimes the differences are seen as large enough to warrant violence against each other.
Another example is those parts of the humanities that evolved into only tangentially useful models of reality, such as postmodernism or some currents of moral philosophy. The latter sometimes fully rejects any concern with the outside world and is concerned only with tradition and introspection.
The law must serve as a last example. Outside of times of revolution and existential crisis, legal discourse prefers to be only tenuously connected to reality, and rather uses the transmitted products of neural networks. The latter are almost always preferred for training and as input for transformation. The jargon calls these preferences for model outputs “precedent,” “scholarly opinion,” “morality,” “common practice,” and “fiat law.” The economic analysis of law and legal anthropology, as observation-based correctives, could contribute hugely to keeping legal models useful, but they are marginal in the formation and transmission of law.
Outside of textual discourses, the visual arts are not immune to model collapse either. Byzantine iconography, the making of devotional pictures of Orthodox holy topics, became wholly stereotypical and unchanging within a relatively short time. Seeing a large collection of Orthodox icons can be a disconcerting experience because of the sameness. Without a fine eye for detail it is hard even to guess the century in which a picture was made. A 20th-century mainstream icon is hardly different from a well-preserved 13th-century image of the same topic. The newer images faithfully replicate every feature of the older images, even stereotypical mannerisms such as a questionable realism in the depiction of hands, which surely must have originated in limited skill early in the history of these images. This is an example of learning purely from a limited and derivative set of training material, with the sole aim of emulating it faithfully. A different approach would be to take a higher principle, such as emulating nature well (some ancient Greek painting), emulating well while also evoking certain emotions (e.g. Italian landscape painting or French academic painting), delighting in ways not seen before (e.g. pointillism), or discussing injustice in a strongly appealing way (some activist art of the 20th century). All these approaches, however, required input beyond the output of previous generations of neural networks.
While music clearly also shows model collapse and renewal, and living in a bubble produces odd political obsessions, I would like to discuss the positive side of model collapse as well. Sometimes living in a bubble can produce wonderful flourishes, and can be great fun. The high artifice of courtly life in Renaissance Italy or at Versailles was formed through amplification of the odd or pleasurable features of previous neural network outputs, without much regard for the outside world. These courts could afford the disconnect from worldly needs. Undoubtedly those were great times for the arts and entertainment, grotesque as they may have been. It is therefore not the case that more outside input is always better. If the outside world is too grim or dysfunctional, being judicious about what is allowed into the model is of benefit. The same holds if the aim is the grotesque. Nonetheless, while model collapse can produce great art and lots of fun for a while, without external input it all inevitably becomes too stale and stereotypical eventually. Then input from the outside world is urgently needed, as the resulting models of these times go out of fashion and live on as historical curiosities.
In conclusion, thanks to observing human-like behaviour in neural networks, we have a mirror onto our own behaviour. We can study what works and what does not. We can learn a lot about how to keep the human neural network collective in a useful state. The first step, I suppose, is seeing the need to do so, so that fields of human endeavour do not end up like the odd couple who spend their lives without speaking to anyone else, and whose conversations become dull and barren.
For artificial models, the usual solution is probably grounding in reality, for example using real photos and videos, which is already happening. Ultimately necessary is choosing aims that go beyond faithful or pleasant replication of past results. Most aims we might choose, I suspect, will require real curiosity, play, and experimenting on the part of the models, which means the ability of the models to collect their own sensor data, to reject some of the output of earlier generations (teenage rebellion?), and to manipulate reality to see what happens. The only exceptions might be fields of pure and rigorous thought, such as mathematics, which has its own means of purification from the absurd, mainly by demanding internal consistency according to fixed, unambiguous rules. But even then some taste is required to choose which questions are worth pursuing.