Being creative requires taking risks

When children learn to draw, they tend to make more and more interesting images for several years until around age five, when they learn to be boring. The multicolored hedgehogs with 47 legs give way to a series of established forms and colors, like stick figures, pastel green grass, and houses with triangular roofs. The wild diversity is gone. From now on, it’s crude, habitual symbolism. Most people never relearn how to draw anything interesting again.

This tends to happen in all domains of our lives. We figure out how to do things “well enough” and then get stuck.

One way to think about this is by analogy to what machine learning researchers call mode collapse. Mode collapse is when a generative model (most notoriously GANs) stops producing diverse outputs and instead obsessively reproduces a small subset of patterns that reliably fool the discriminator. We’ve seen this happen with language models. The early models, up until about 2020, were deranged but could write spectacularly surprising prose from time to time. Now the models are much smarter, but they all write in that uncanny AI voice. “And honestly? That isn’t just sad—it’s stylistic trauma.” The wide space of potential ways of thinking and writing has collapsed into a narrow mode. I’ve gathered that Roon has been working on improving writing quality at OpenAI, but so far there hasn’t, in my opinion, been much progress on reintroducing novelty and diversity into the prose.

My probably partly false understanding of what’s going on here is that the models get rewarded for outputting certain tokens, and once they get smart enough, they learn that they are more likely to be rewarded if they stay inside a small region of the space of possible ways of writing. Over millions of training cycles, they come to associate leaving that region with loss of reward.
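This reward-chasing dynamic is easy to reproduce in miniature. The sketch below is my own toy construction, not anything from the article or from how frontier models are actually trained: a policy over ten hypothetical "writing styles" is updated with a simple REINFORCE-style rule, and only two styles ever get rewarded. Starting from a uniform (maximally diverse) policy, the distribution concentrates on the rewarded styles and its entropy drops, which is the collapse in microcosm.

```python
import math
import random

# Toy illustration of reward-driven mode collapse (my construction):
# a categorical policy over ten "styles", of which only styles 0 and 1
# are rewarded, mimicking a reward signal that favors one narrow voice.

random.seed(0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def sample(p):
    r, acc = random.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r < acc:
            return i
    return len(p) - 1

logits = [0.0] * 10  # uniform policy: maximum diversity
lr = 0.5
start_entropy = entropy(softmax(logits))

for step in range(2000):
    p = softmax(logits)
    a = sample(p)
    reward = 1.0 if a in (0, 1) else 0.0
    # REINFORCE with a 0.5 baseline: raise the log-probability of
    # rewarded styles, lower it for unrewarded ones.
    for i in range(len(logits)):
        grad = (1.0 if i == a else 0.0) - p[i]
        logits[i] += lr * (reward - 0.5) * grad

end_entropy = entropy(softmax(logits))
print(f"entropy before: {start_entropy:.2f}, after: {end_entropy:.2f}")
```

Nothing here is specific to language models, and real RLHF is far more elaborate, but the qualitative outcome is the same: a policy that starts out exploring the whole space ends up parked on the few outputs that reliably pay off.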

I guess something similar happens with human drawings. Once you learn that grass is “supposed” to be green, it becomes almost embarrassing to make it blue (even though real grass often is blue, as good painters learn when they start to pay closer attention to reality). Once we’ve learned that grass “is” green, we often can’t even see that it actually looks blue in a certain light (and red in another), until someone points it out to us—

Unless we actively push against it, it seems like

...