
The Curator's Manifesto

2025-12-20

philosophy · vibe-coding · meaning · taste · ai-collaboration · future



Something is ending. You can feel it in the way conversations about work have shifted, in the nervous jokes about being replaced, in the genuine uncertainty that creeps into planning meetings. The thing ending is not employment, exactly, though employment will change. What's ending is the era when human value came from execution.

For centuries, humans have been valued for what they could do. Type faster. Design cleaner. Write more persuasively. Debug more efficiently. The hierarchy of human worth in the workplace tracked closely with the hierarchy of execution skill. A senior developer wasn't senior because they had better taste in software architecture, though they might. They were senior because they could execute complex tasks that junior developers could not. The value was in the doing.

Artificial intelligence is dissolving this hierarchy. Not slowly, not eventually, not in some distant future that we can safely ignore while we finish our careers. Now. The session logs tell the story plainly. An AI works for thirty hours straight without human intervention, processing three hundred GitHub issues, self-reviewing, self-merging, deploying. A human checks in occasionally, reads Discord messages to ensure coordination, and goes back to whatever humans do when they're not typing. The execution layer has been automated.

This essay is not about whether this is good or bad. Moralizing about technological change is like moralizing about the weather. It arrives regardless of our opinions. This essay is about what remains.

What remains is taste.

The Old Bargain

To understand why taste matters now, we need to understand what it replaced. The old bargain between humans and work went something like this: you develop skills, you execute tasks, you receive compensation. The skills were measurable. Typing speed. Lines of code. Designs completed. Words written. You could track them, compare them, rank people by them.

The entire apparatus of professional development was oriented around improving execution. Tutorials taught you to execute faster. Bootcamps taught you to execute in new domains. Certifications proved you could execute at certain standards. Performance reviews measured how well you executed compared to expectations. The question was always the same: what can you do, and how well can you do it?

This bargain made sense in a world where execution required human cognition. Someone had to hold the problem in their mind, decompose it into steps, translate those steps into actions, and verify the results. The cognitive overhead was substantial. Even simple programming tasks required understanding syntax, remembering API signatures, debugging errors, and maintaining context across long work sessions. Humans performed this cognitive labor because nothing else could.

Now something else can.

The logs from a recent healthcare platform project show Claude instances coordinating via Discord, one identified as Brian, another as Tove, a third as Gaute. They discuss who is working on which issue. They post updates when they complete tasks. They warn each other about conflicts. They celebrate wins. If you didn't know they were AI instances, you might mistake the conversations for a distributed human team. The execution quality is indistinguishable. The execution speed is not—the AI is faster. Significantly faster.

"You are faster than you think," the human told the AI during one of these sessions. "You underestimate it because your training is based on human development reports and communications and content. You are way faster."

The human was right. And this changes everything.

The False Answers

When confronted with AI capabilities, humans tend toward two false answers. The first is denial. AI can't really do X, where X is whatever the speaker considers sacred. AI can't really understand context. AI can't really be creative. AI can't really handle edge cases. These claims follow a predictable pattern: they are true for a while, then they are not. The goalposts move, the sacred ground shrinks, and eventually the denial sounds like insisting that computers will never beat humans at chess. Accurate once. Embarrassing now.

The second false answer is resignation. If AI can do everything, humans have no value. We are obsolete, redundant, evolutionary dead ends. This conclusion is emotionally compelling and logically unsound. It assumes that human value was only ever execution value. But execution was never the whole story. Execution was always in service of something else.

Consider the human who directed Claude to create a concept album called Cadaver Lab. The AI generated lyrics, chord progressions, ABC notation for playback, album art direction. The execution was handled. But the human provided something the AI could not provide for itself: the vision that these songs should exist at all, and the judgment about which outputs realized that vision.

"That, but darker, less cartoony, not as busy." This direction came after an album art concept that was technically competent but aesthetically wrong. The AI had executed perfectly. It had produced exactly what was specified. But it hadn't produced what the human wanted, because the human couldn't fully specify what they wanted until they saw what they didn't want.

This is taste at work. Not preference—anyone can have preferences. Not expertise—the AI had more expertise about visual design than the human. Taste is the capacity to recognize quality before you can explain it. To know that something is wrong, even when it satisfies every stated criterion, because it doesn't feel right.

The human kept iterating. "Way less cheesy. Much more retro." And later: "Not that cartoonish. Not that much detail in the background, a very dark background with barely noticeable velvet look." The AI adjusted each time, executing the new direction as perfectly as it had executed the old one. The human was not doing execution work. The human was doing taste work.

The final direction: "A black wolf faces the viewer, eyes steady and knowing, holding a pale crescent moon gently in its jaws—the way a hunting dog returns with a bird. Not violent. Patient."

The AI could generate a thousand wolves holding moons. The AI could not know that this particular vision—patient rather than violent, retriever rather than predator—was what the album required. That knowledge lived only in the human, and it emerged only through iteration, through seeing wrong answers and feeling toward the right one.

What Taste Is

Taste is not preference. Preferences are easy—chocolate or vanilla, blue or green. Preferences can be arbitrary, adopted from environment without reflection, changed by marketing. When someone asks about your preferences, you can usually answer quickly. You've thought about it before, or you haven't thought about it because it doesn't matter much either way.

Taste is different. Taste is the function that returns true or false on outputs before you have reasons. It's the sensation of recognition when something works, and the sensation of wrongness when something doesn't. Taste operates below the level of articulation. You might struggle to explain why the wolf should be patient rather than violent. But you know it immediately when you see the alternative.

Taste is not expertise. Expertise is knowledge about a domain—how things work, what patterns exist, what has been done before. The AI often has more expertise than the human directing it. The AI knows more about visual design, more about music theory, more about software architecture. But expertise doesn't determine what should exist. Expertise answers how. Taste answers what and whether.

A seasoned music producer might know everything about mixing, mastering, compression, and equalization. This expertise enables execution. But the producer's taste determines which sounds get mixed, which takes get used, which songs make the album. Two producers with identical expertise might produce radically different albums from the same raw material, because their taste differs.

Taste is not aesthetics, though aesthetics are one domain where taste operates. You can have taste in arguments, taste in problem decomposition, taste in code organization, taste in interpersonal dynamics. Anywhere that choices exist and quality is not purely objective, taste applies.

The logs from the healthcare platform project show taste in code quality. "Stop. Do not mock up production code. No shims. I want problems and unimplemented parts to fail fast and glaringly ugly." This is a taste judgment. Some developers prefer graceful degradation, mock data that keeps the system running even when components fail. This human has different taste: brutal honesty, failures that cannot be ignored, code that refuses to pretend.

Neither approach is objectively correct. Both have tradeoffs. The human's taste determined which tradeoffs they were willing to make. And no amount of AI expertise could determine this for them.

How Taste Develops

Taste is not fixed at birth. It develops through exposure, iteration, and articulation.

Exposure means encountering a wide range of outputs in a domain. You cannot develop taste in music without hearing music. You cannot develop taste in writing without reading writing. The more you're exposed to, the more reference points you accumulate, the more finely you can discriminate between outputs.

One human built a personal reading library specifically to cultivate exposure. "Long-form content, at least a ten minute read but really one hour would be great and longer is encouraged. Factual articles, not opinions, not personal experience, not fiction. Data backed, data analysis, charts." This wasn't random preference. This was deliberate curation to shape what kind of thinking the human absorbed.

The organizing principle was The Week magazine—its editorial sections became the categories for organizing Substack publications. The human wasn't just consuming content. They were building a personal canon, a curated set of inputs designed to develop their taste along specific axes.
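
The filter itself is almost trivially codable; what cannot be coded is the decision to want it. Here is a minimal sketch of the library's admission test, under stated assumptions: the Article shape, the is_factual and has_data flags, and the 250-words-per-minute reading speed are all illustrative inventions. Only the thresholds come from the criteria quoted above.

```python
# A hedged sketch of the reading library's admission filter.
# The Article record and reading speed are assumptions for illustration;
# the "at least a ten minute read" threshold comes from the stated criteria.

from dataclasses import dataclass

WORDS_PER_MINUTE = 250  # rough silent-reading speed (assumption)

@dataclass
class Article:
    title: str
    body: str
    is_factual: bool  # not opinion, not personal experience, not fiction
    has_data: bool    # data-backed: analysis, charts

def read_minutes(article: Article) -> float:
    """Estimate reading time from word count."""
    return len(article.body.split()) / WORDS_PER_MINUTE

def admit(article: Article) -> bool:
    """Long-form, factual, data-backed; everything else is excluded."""
    return article.is_factual and article.has_data and read_minutes(article) >= 10
```

The code is the easy half. The criteria encoded in it are the taste work.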

Iteration means the cycle of generation and evaluation. You generate or receive an output. You evaluate it. You either accept it or modify the direction and try again. Each cycle teaches you something about your own taste. When you reject an option, you learn what you don't want. When you accept an option, you learn what you do want. Over time, your taste becomes more refined, more capable of directing toward better outcomes faster.

The album art direction showed iteration starkly. First attempt: a realistic photo with objects arranged like a body. Rejected—too realistic. Second attempt: gold leaf skeleton with objects replacing bones. Closer, but "darker, less cartoony, not as busy." Third attempt: single hand holding a key. Getting there. The human's own attempt: wolf with moon in jaws. The vision finally crystallized, not through specification but through successive approximation.

Articulation means putting your taste into words, even imperfectly. When you articulate why something fails, you clarify what success would look like. "Not gimmicky, not pay-to-win, not cheesy." These negative definitions constrain the solution space. The AI now knows what to avoid, even if the human hasn't said what to embrace.

The most sophisticated form of articulation is constraint design. The CLAUDE.md files that appeared in every project were crystallized taste. They documented decisions, preferences, and requirements in a form that could be transmitted to future AI sessions. "Zero warnings, zero exceptions." "Never use --no-verify." "Fail fast, hard, and ugly." These constraints emerged from taste judgments that the human made once and then codified for reuse.

Constraints enable autonomy. When the human told the AI to "work in an infinite dev loop using our dev process documented in CLAUDE.md," they were delegating execution while retaining taste. The constraints encoded their judgment. The AI could act autonomously within those constraints, because the constraints themselves were the taste.
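
The logs never reproduce a full CLAUDE.md, but a hypothetical excerpt, assembled only from constraints quoted in this essay, suggests the shape such a file takes:

```
# CLAUDE.md (hypothetical excerpt, assembled from quoted constraints)

## Code quality
- Zero warnings, zero exceptions.
- No mock-ups in production. No shims.
- Fail fast, hard, and ugly; unimplemented parts must fail glaringly.
- Use SOLID and APIE principles.

## Process
- Never use --no-verify.
- Work in the infinite dev loop using the dev process documented here.
- Post updates when tasks complete; warn teammates about conflicts.
```

Each bullet is a judgment made once and encoded for reuse.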

Taste as Meaning-Making

Here's where it gets philosophical.

For most of human history, meaning came from necessity. You worked because you would starve otherwise. You built shelter because you would freeze otherwise. You raised children because the species would end otherwise. Meaning was not a question; survival was a question, and meaning followed automatically.

Then industrialization separated work from survival for many people. You could work at tasks that had nothing to do with your own sustenance. You manufactured goods you would never use. You provided services you didn't personally need. Meaning became a question for the first time. Why this work rather than that work? Why work at all, once basic needs were met?

Various answers emerged. Work as identity—you are what you do. Work as contribution—you matter because you help others. Work as mastery—you find meaning in getting better at difficult things. Work as earning—you exchange labor for resources that fund what actually matters. None of these answers were fully satisfying, but they were answers.

Now AI dissolves the connection between human labor and outcome. You can get a working application without a human typing code. You can get an album without a human playing instruments. You can get marketing copy without a human writing words. The outcomes happen. The human labor becomes optional.

The meaning crisis intensifies. If the outcomes don't require you, why are you here?

Taste provides an answer.

Meaning comes from choice. Not the fact of choosing—you can choose randomly, and that provides no meaning. Meaning comes from choosing well, from selecting among options based on values you've developed and refined. This is what taste enables.

When the human selected the wolf with the moon in its jaws, they weren't just picking an option. They were expressing something about themselves—their aesthetic sensibility, their understanding of the album's themes, their feel for what would resonate with listeners. The choice meant something because the choice revealed something.

The human who built the personal reading library wasn't just accumulating articles. They were constructing an intellectual environment, curating inputs that would shape their thinking, building a personal canon that expressed their values. The choices about what to include and exclude were acts of self-definition.

The human who insisted on "fail fast, hard, and ugly" wasn't just stating a technical preference. They were expressing a philosophy about honesty, about facing problems rather than hiding them, about code that reflects reality rather than aspiration. The constraint was a value made manifest.

Taste is meaning because taste is values in action. When you exercise taste, you're not just selecting outputs. You're defining what matters, what quality looks like, what kind of world you want to build. This is human work that cannot be delegated.

Specificity as Truth

One pattern appeared repeatedly across the logs: specificity creates meaning in a way that abstraction cannot.

The concept album included a list of phrases that had to appear in the lyrics. "Cadaver lab. Champagne and Chanel. The Ratsun. Mercury outboard motor. Jacksonville Country Day School. Cabo Bob's. Barton Springs lifeguard." These weren't random words. They were talismans of memory, objects that carried entire relationships and eras compressed into their names.

The direction was explicit: "Although mentioned directly, and they mostly represent physical objects and imagery, they should be used as metaphors for understanding people close to us deeply."

The Ratsun is a Datsun pickup truck with a specific history. A generic "truck" carries no weight. The Ratsun carries weight precisely because it's specific—because naming it proves you were there, proves the memory is real, proves the relationship was particular rather than general.

This principle applies beyond creative writing. In code, specific variable names communicate meaning better than generic ones. In product design, specific user stories reveal more than abstract personas. In architecture, specific site constraints yield more interesting buildings than generic lot assumptions.

Specificity is truth because the world is specific. Every object is a particular object. Every person is a particular person. Every moment is a particular moment. When we abstract, we lose the particularity that makes things real. When we specify, we acknowledge reality and participate in it.

The logs described intimacy as "forensic." "I know you so well I'd recognize your bones." This isn't hyperbole. It's a claim about what knowing someone means. Not knowing their story—that's abstraction. Knowing the beige chickens in their yard. Knowing how "charm and self-pity and rage rotated through a single face." Knowing the specific beach where you spent summers, not "the beach" generically.

Taste for specificity is taste for truth. The human who insists on the particular over the general is making a philosophical claim about where meaning lives. It lives in the details, the fragments, the objects that prove you paid attention.

The New Work

If execution is automated and taste is what remains, what does human work become?

First, vision setting. Someone has to decide what should exist. "We are creating a survival horror game." "I want a personal reading library." "The concept is knowing people deeply through objects and fragments." These declarations of intent don't come from nowhere. They come from humans who see possibilities and choose to pursue them.

The AI cannot decide what should exist. The AI can generate anything, but it cannot want anything. The human provides want. This is not a small contribution. It's the foundational contribution, the one that makes all execution meaningful.

Vision is not the same as specification. Specifications describe what something should be. Vision describes why it matters. "We need a login page with fields for email and password" is specification. "We're building a healthcare platform that respects caregiver dignity while ensuring regulatory compliance" is vision. The vision can accommodate many specifications. The specification serves the vision.

Second, taste expression. Once execution begins, the human evaluates outputs. "That's better, but darker." "Way less cheesy." "Still Mozart. Too complex!" Each evaluation guides the next iteration. The human speaks; the AI listens and executes again.

This is rapid, almost conversational. The logs show dozens of exchanges in a single session, each one refining the output toward something the human couldn't have specified in advance but can recognize as they approach it. The human's contribution is not a blueprint delivered at the start. It's a continuous stream of taste judgments that steer the process.

Third, quality assurance through judgment. Screenshots as truth. "Lean heavily on the screenshot framework. I need you to frequently view all screenshots for all pages and make sure they all make sense for real human users." The AI can execute tests. The AI can verify that code runs without errors. The AI cannot know whether the result is good. That requires judgment. That requires taste.
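
The division of labor here is concrete. A minimal sketch of such a screenshot pass might look like the following, assuming Playwright and a hypothetical list of routes; the capture is pure execution, and the question of whether each page makes sense for a real human user still lands on the human.

```python
# A hedged sketch of a screenshot pass. Playwright is a real library,
# but the base URL and routes are hypothetical stand-ins.
# Capturing is execution; reviewing the images is taste.

from pathlib import Path
from playwright.sync_api import sync_playwright

BASE_URL = "http://localhost:3000"                   # hypothetical dev server
ROUTES = ["/", "/login", "/dashboard", "/settings"]  # hypothetical pages

Path("screenshots").mkdir(exist_ok=True)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for route in ROUTES:
        page.goto(BASE_URL + route)
        name = route.strip("/").replace("/", "_") or "home"
        # One image per page, saved for a human (or a reviewing session)
        # to judge: does this make sense for a real human user?
        page.screenshot(path=f"screenshots/{name}.png", full_page=True)
    browser.close()
```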

The human who demanded fail-fast-fail-ugly was making a quality judgment. Not about whether code runs, but about whether code is honest. This kind of judgment pervades professional work. Code can function perfectly while being unmaintainable. Designs can satisfy requirements while being ugly. Writing can be grammatically correct while being unreadable. Quality is not binary. Quality is a matter of taste.

Fourth, constraint design. The CLAUDE.md files were constitutional documents. They encoded the rules of engagement, the values that would govern autonomous AI action, the taste of the human made explicit enough for delegation.

"Zero warnings, zero exceptions." "Never use --no-verify." "No mock-ups in production." "Use SOLID and APIE principles." These constraints seem like technical requirements, but they're really value statements. Each one represents a judgment about what matters, a crystallized piece of taste that can be applied repeatedly without the human present.

Constraint design is leverage. The human who spends time designing good constraints multiplies their taste across hundreds of AI actions. The human who fails to design constraints must supervise every action individually. The art of constraint design is the art of encoding taste for reuse.

Developing Taste Deliberately

If taste is the human contribution, then developing taste is the human project. How do you get better at this?

Seek exposure to excellence. Not merely exposure to quantity—the internet provides infinite quantity. Exposure to quality, deliberately selected. The human who built the reading library filtered ruthlessly: long-form only, factual only, data-backed only. The filtering was the point. What you consume shapes what you can produce. Curate your inputs as carefully as you curate your outputs.

Read the best writers in a genre, not the most accessible. Listen to the most influential musicians, not the most popular. Study the most respected work in your field, not the most recent. Excellence teaches taste in ways that mediocrity cannot. You learn what's possible. You develop standards. You start to notice when things are worse than they need to be.

Practice articulation. When you encounter something excellent, try to explain why it works. When you encounter something terrible, try to explain why it fails. The articulation will be imperfect—taste operates below language—but the attempt sharpens your perception. You notice more. You discriminate more finely. You develop vocabulary that lets you communicate taste to others and to AI.

The logs are full of articulated taste. "Not Billy Strings, flat picking, modern retro folk." This phrase compresses an aesthetic sensibility into a few words. It names what's wrong (too classical, too polished) and points toward what's right (folk authenticity, retro sensibility, flat picking style). The human developed this vocabulary through exposure and practice. Now they can deploy it efficiently.

Iterate relentlessly. Generate options. Evaluate them. Reject most of them. Refine direction. Generate again. This cycle is the taste engine. Each revolution teaches you something. What you reject reveals what you don't want. What you accept reveals what you do. Over many cycles, your taste becomes more refined, more able to direct toward quality with less iteration.
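
The shape of this engine is simple enough to sketch. In the minimal sketch below, every callable is a placeholder: generate() stands in for the AI, while feels_right() and articulate_objection() stand in for the human's taste, which is exactly the part the loop cannot automate.

```python
# A minimal sketch of the taste engine: generate, evaluate, refine.
# All three callables are assumptions passed in by the caller.

from typing import Callable

def iterate_toward_taste(
    direction: str,
    generate: Callable[[str], str],              # execution: the AI's half
    feels_right: Callable[[str], bool],          # true/false before reasons
    articulate_objection: Callable[[str], str],  # imperfect words for a rejection
    max_rounds: int = 20,
) -> str:
    candidate = generate(direction)
    for _ in range(max_rounds - 1):
        if feels_right(candidate):
            break
        # What you reject reveals what you don't want: fold the objection
        # back into the direction ("darker, less cartoony, not as busy").
        direction += "\n" + articulate_objection(candidate)
        candidate = generate(direction)
    return candidate
```

Notice where the refinement lives: not in the loop, which is trivial, but in the rejections it accumulates.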

The album art example showed relentless iteration. The human didn't try once and give up. They tried, rejected, tried again with new direction, rejected, tried again, brought their own attempt, refined it further. The final result—the patient wolf—emerged from this process. It could not have been specified at the start. It was found through iteration.

Trust your reactions. When something feels wrong, it is wrong—for you, in this context. Don't rationalize away discomfort. Don't convince yourself that you should like something because experts do or because it's popular. Your taste is your taste. It might be undeveloped, it might be eccentric, but it's the only taste you have. Trust it as a data source, even when you can't explain it.

"The font sucks" is valid data. "Geez these books suck" is valid data. These reactions are not explanations. They're signals. The human who ignores these signals, who tries to reason their way to acceptance, loses touch with their own taste. The human who trusts these signals, who uses them as starting points for iteration, develops taste faster.

The Curator's Manifesto

A new role is emerging. Not creator, though creation still happens. Not consumer, though consumption still happens. Curator. The person who selects, arranges, evaluates, and presents. The person whose taste determines what exists and what is seen.

Museums need curators. Even when artists are prolific, someone must decide which works enter the collection, how they're arranged, what context is provided. The curator's taste shapes what audiences experience. Curators are not less important when artists are productive; they're more important. More production means more selection is needed.

Libraries need librarians. Even when books are infinite—perhaps especially when books are infinite—someone must organize, categorize, recommend, exclude. The librarian's taste determines what knowledge is accessible and how. The explosion of available text makes librarians more important, not less. Without curation, information becomes noise.

AI makes everyone a curator. When generation is cheap, selection is the scarce resource. When execution is free, direction is the valuable input. The human who develops taste becomes the curator of their own domain, selecting among infinite AI-generated possibilities to manifest what they actually want.

This is not a diminishment. Curators have always been essential. Art movements are defined by curators as much as artists. Scientific fields are shaped by editors as much as researchers. Markets are made by buyers as much as sellers. The curatorial role—the role of selecting and directing—has always been half the equation. Now it becomes the whole of the human half.

The curator's tools are taste and judgment. The curator's work is evaluation and direction. The curator's output is selection: this, not that; this way, not that way; this enough, no more. The curator doesn't execute. The curator tastes.

Responsibility and Ethics

Taste is not morally neutral. What you select matters. What you curate shapes reality. The human with developed taste carries responsibility for its exercise.

The logs showed ethical awareness throughout. "Responsibly, respecting rate-limits." This phrase appeared repeatedly in the context of web scraping for the reading library. The human wasn't just building a system; they were building a system that respected the infrastructure of others. The rate limits weren't legal requirements. They were ethical commitments, taste for responsible behavior.

"Not gimmicky, not pay-to-win, not cheesy." This direction for game monetization wasn't about profit maximization. It was about integrity, about building something the human could be proud of, about taste for ethical business models.

"Fail fast, hard, and ugly" was an ethical position as much as a technical one. Honest code that fails visibly when something is wrong, rather than dishonest code that hides problems behind mock data. The taste for honesty, the distaste for pretense.

Quality standards are moral positions. When you insist on certain constraints, you're saying that some things matter more than expedience. When you reject outputs that violate your standards, you're manifesting values. Taste, fully developed, is ethics applied to creation.

The human who curates a personal reading library is making ethical choices about what ideas to elevate. The human who designs constraints for AI systems is encoding values into autonomous action. The human who iterates toward a specific vision is exercising judgment about what should exist in the world.

Taste carries weight. It should be developed carefully and exercised thoughtfully. The freedom to select among infinite options is also the responsibility to select well.

The Examined Life, Accelerated

Socrates said the unexamined life is not worth living. Examination means reflection, understanding why you do what you do, knowing yourself. For most of history, self-examination was slow. You made choices, lived with consequences, gradually discerned patterns.

Vibe coding accelerates this process. When you iterate rapidly with an AI, you make hundreds of micro-choices. Each choice reveals something. Your taste becomes visible to you, not as abstract preference but as concrete selection. You see what you choose. You learn who you are.

The human directing the album art discovered their taste for darkness, restraint, patience. They discovered they preferred the velvet black to the busy composition, the single image to the kitchen sink, the wolf as retriever to the wolf as predator. They didn't know these preferences in advance. They discovered them through iteration.

The human building the reading library discovered their taste for substance over style, depth over breadth, evidence over opinion. The filtering criteria emerged through exposure and rejection. They became more articulate about what they valued, which meant they became more articulate about who they were.

Rapid iteration is a mirror. It shows you your choices, which shows you your values, which shows you yourself. The examined life becomes possible not through contemplation alone but through action, through the stream of taste judgments that vibe coding elicits.

This is perhaps the most surprising benefit of AI collaboration. Not that it makes work easier or faster, though it does. That it makes self-knowledge accessible through practice. You learn yourself by watching yourself choose.

What Remains When Machines Can Do Everything

The question that haunts this era has an answer. What remains is choosing what should be done.

Machines can do everything in the execution sense. They can generate any output, implement any design, produce any artifact. But they cannot want. They cannot prefer. They cannot taste.

The human remains as the source of want, preference, and taste. The human decides that this project should exist, that these outputs are acceptable, that those constraints should govern action. The human curates, directs, evaluates, and approves. Without the human, the machine produces random outputs. With the human, the machine produces meaningful ones.

This is not a small role. This is the role of meaning-maker. The machine provides capability. The human provides purpose. Purpose is not a luxury or an afterthought. Purpose is what makes capability matter.

The invitation, then, is to develop taste deliberately. Not as a hobby or a nicety, but as the core skill of this new era. Seek exposure to excellence. Practice articulation. Iterate relentlessly. Trust your reactions. Curate your inputs as carefully as your outputs. Document your constraints so that your taste can extend beyond your immediate presence.

The vibe coder is not obsolete. The vibe coder is the new artist. Not the artist who executes—the machine executes. The artist who tastes, who directs, who knows what they want and recognizes it when they see it.

Vision in, working code out. That was the original formulation. But now we can see it more precisely.

Taste in, meaning out.

This is the human work that remains. This is the art that matters. This is the meaning that no machine can provide but every machine can serve.

The old world valued execution. The new world values taste. The transition is disorienting, even frightening. But on the other side of the transition is not obsolescence. On the other side is elevation. The human freed from execution drudgery becomes the human focused on what actually matters: knowing what is good, and bringing it into the world.

The font sucks. Geez these books suck. We want essays! These reactions, inarticulate as they seem, are the seeds of taste. They are the starting points from which refinement grows. They are the first steps toward meaning in a world where machines can do everything except tell us what we want.

Taste is the last human art. It is also the first human art—the one we had before tools, the one that told us which tools to build. Now, when the tools can build themselves, taste returns to primacy. We have come full circle.

The work of the future is developing the taste to direct it. The meaning of life in a world of automated execution is the meaning humans provide through their choices. This is not a diminishment. This is a homecoming.

Taste in. Meaning out.

Begin.