When Claude Suggested Brian Eno: Building Art You Can't Control

In Tech | 12 minute read | by Constantin Gonzalez
A watercolor illustration of a solitary figure in dark clothing standing on a vast, layered desert landscape. The scene features rolling sand dunes in warm tones of beige, tan, and gold, with deep burgundy and rust-colored bands flowing across the foreground. A golden sun or citrus slice appears in the upper left corner. The composition conveys a sense of isolation and contemplation in an arid, minimalist environment.

On constrained serendipity, learning by doing, and whether the system is the art.

I’m chatting with Claude, kicking around ideas on building a tool that generates AI images, but without the prompting.

I like to start these chats by brain-dumping ideas into it. In this case it’s about auto-generating prompts for an AI image generator from random lists of topics and concepts, some guardrails to keep things positive, and maybe a temporal thing to connect each image to the moment it was created.

And then I type: “Let’s invite some creative minds into this conversation. I’ll pick Seth Godin. Claude, you pick someone else who could add value.”

Claude suggests Brian Eno.

Of course it does!

This is the technique I wrote about recently: inviting virtual versions of people you admire into your AI conversations. But here’s the meta-moment that makes it even more interesting: Claude picked its own expert. And it made a great choice!

Brian Eno, the godfather of generative music and Oblique Strategies (which Seth recently transported into the AI age), the guy who literally invented systems that create art through constrained randomness. Of course that’s who you’d want when designing a tool about surrendering creative control.

Virtual Brian asks: “How much control versus serendipity do we want?”

Suddenly, we’re not just building a tool. We’re designing a philosophical stance about AI and creativity.

The original problem that opened a rabbit hole

Constantin, wearing over-ear headphones and a dark hoodie, yawning during a video call, sitting at a desk with a generic, uninspiring stock video background behind him, including a monitor, and a potted green plant on the desk near a window.
I don’t like boring video call backgrounds

When I’m on a video call, I love using AI-generated images as backgrounds. They’re much more interesting than stock photos of fake offices with suspiciously perfect plants (mine are usually closer to a near-death experience). Just yesterday, someone on a call complimented my background picture, so here’s the story of where it came from.

But generating the images isn’t the problem. The real problem is that I hate spending 20 minutes crafting the perfect image generator prompt just to get one background image.

What if there were a system that generated unique, inspiring images without any prompting at all? Not random chaos. Curated randomness instead: bounded by intention, but free within those bounds.

Brian (through Claude) helped me figure it out: this isn’t about automation, it’s about designing the possibility space. The art isn’t in the output. The art is in the system that generates it.

We agreed on five dimensions that would be randomly selected:

  • Feeling/mood
  • Environment
  • Concept
  • Time era
  • Visual style

Each dimension has a carefully curated list, including emotions like “anxious” and “melancholic” but excluding “hate” and “disgust.” Not because I wanted to pretend negative emotions don’t exist, but because I wanted the system to generate images that inspire rather than disturb.

This curation is an artistic choice.
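
If you’re curious what that looks like in code, here’s a minimal sketch of the selection step. The lists below are short, illustrative stand-ins, not the project’s actual curated word lists:

```python
import random

# Illustrative excerpts; the real lists hold 40 feelings, 45 environments,
# 55 concepts, 12 time eras, and 59 visual styles.
DIMENSIONS = {
    "feeling":      ["anxious", "melancholic", "serene", "curious", "hopeful"],
    "environment":  ["desert", "alpine lake", "old library", "neon city"],
    "concept":      ["impermanence", "emergence", "solitude", "renewal"],
    "time_era":     ["antiquity", "art deco", "near future"],
    "visual_style": ["watercolor", "ukiyo-e", "low-poly 3D", "oil painting"],
}

def pick_dimensions(rng=None):
    """Pick one value per dimension; the curation lives in the lists themselves."""
    rng = rng or random.Random()
    return {name: rng.choice(values) for name, values in DIMENSIONS.items()}

print(pick_dimensions())
# e.g. {'feeling': 'serene', 'environment': 'desert', 'concept': 'emergence', ...}
```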

We added temporal anchoring: the current date, season, and time of day would be fed into the prompt generator. Not as numbers, but as human concepts: “late afternoon, Monday, autumn.” Each image would be connected to the specific moment it was created.

We added a randomly selected fortune cookie into the mix as well: random wisdom to spark unexpected directions.

And we made sure every generation would be unrepeatable. Same inputs would never align again. Time passes. Random numbers change. Your mood shifts.
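
Here’s a rough sketch of how the temporal anchoring and the fortune cookie could work, assuming a simple northern-hemisphere season mapping and a made-up fortune list:

```python
import random
from datetime import datetime

# Northern-hemisphere mapping from month to season (an assumption for this sketch).
SEASONS = {12: "winter", 1: "winter", 2: "winter",
           3: "spring", 4: "spring", 5: "spring",
           6: "summer", 7: "summer", 8: "summer",
           9: "autumn", 10: "autumn", 11: "autumn"}

# Illustrative stand-ins for the fortune cookie list.
FORTUNES = [
    "The obstacle is the path.",
    "A gentle answer opens closed doors.",
]

def temporal_anchor(now=None):
    """Describe the current moment in human concepts, not numbers."""
    now = now or datetime.now()
    if now.hour < 6 or now.hour >= 21:
        daypart = "night"
    elif now.hour < 12:
        daypart = "morning"
    elif now.hour < 17:
        daypart = "afternoon"
    else:
        daypart = "evening"
    return f"{daypart}, {now.strftime('%A')}, {SEASONS[now.month]}"

print(temporal_anchor(), "|", random.choice(FORTUNES))
# e.g. "afternoon, Monday, autumn | The obstacle is the path."
```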

We called it Constrained Serendipity.

By the end of the conversation, I knew exactly what to build.

Because here’s the thing: you only learn what you do, and I was excited to explore the world of generative art, a mixture of Eno’s generative music and today’s generative AI.

Building in public (sort of)

I’m not a real software developer. I’m an Expert Generalist, 27 years in tech, but as a solutions architect, not a software development engineer. I code pet projects for fun, but I don’t ship production software.

Which turned this into another learning opportunity for me.

Starting in the Claude desktop app, we designed the spec and iterated on the design document together. Claude and I went back and forth on architecture, word lists, prompt engineering strategy, the whole technical stack. When we had a solid spec, I moved to Zed, my favorite code editor, and used Claude Code to implement the initial version.

Then Anthropic gave all Claude Pro users $250 in credits to try their new Claude Code on the web feature. Nice!

This gave me an opportunity to embrace the latest in modern software development: managing the product, its architecture and features, rather than typing lines of code. It’s all about managing ideas now.

I also wanted to get more GitHub-based workflow hours under my belt, since that’s the only workflow Claude Code on the web supports: I opened GitHub issues for new features, bugs, and improvements whenever they occurred to me. Then I’d pull out my phone (yes, my phone!), open Claude Code, and say: “Look at these issues. Help me prioritize them. Which ones matter most?”

We’d discuss. Debate. Decide. Then: “Let’s work on issue #7 now.”

Claude Code would clarify the details with me, write the code, explain the changes, and submit a PR. I’d review it on my phone, approve it, merge it. Done.

That’s real software development on a mobile device! While walking Elvis, our dog, waiting for coffee, or on the bus. The AI wasn’t replacing my thinking, it was amplifying it. Steve Jobs’ promise of a “bicycle for the mind” has become reality.

This connects to what Jeremy Utley talks about in design thinking: AI helps unblock creative processes. Not by doing the work for you, but by removing the friction that prevents you from doing it yourself.

People might argue this is “too much AI, too little Constantin.” I’d argue the opposite.

Using AI is a bit like walking into a clothing store with a personal stylist. They show you options, explain what works, help you try things on. But you still pick the pieces that are “you.” You’re not forced to knit everything from scratch—but the final outfit is absolutely yours.

So, my latest pet project is called “The Serendipity Engine”. Every decision about which emotions to include, how temporal anchoring should work, which constraints matter, the flow, the user experience, those are my artistic choices. Claude Code just helped me manifest them faster than I could have alone.

And as a side effect, I have learned how to manage PRs, issues, and development workflows. You only learn what you do. And you can only teach what you have experienced yourself.

The Art Question

So here’s the interesting question that’s been lurking since the beginning: is this art?

Not the images—those are AI-generated, and people have strong opinions about whether that counts. I’m asking about the system itself. Is the Serendipity Engine art?

I think it is. And I think the answer matters more than you might expect.

Embedded YouTube video: demoing the Serendipity Engine and exploring the art question.

Asking “Is this art?” in this video is not a rhetorical question. I genuinely want to know what people think. Because if we can’t expand our definition of art to include thoughtfully designed systems, we’re going to struggle with AI creativity.

Claude educated me about Sol LeWitt’s wall drawings. LeWitt wrote instructions like: “Draw 10,000 lines, 12 inches long, in random directions” and other people executed them. The art wasn’t in the execution. The art was in designing the constraint system that made execution possible.

Or Brian Eno’s generative music. The music is different every time you play it, but Eno is still the artist. Because he designed the possibility space: the rules, the interactions, the probabilities that create emergence.

The Serendipity Engine works the same way.

It includes 40 curated feelings, 45 environments, 55 concepts, 12 time eras, and 59 visual styles. That’s over 70 million possible combinations before you even add temporal anchoring or user input. Every choice, which emotions to include, how to translate time into human concepts, the inclusion of fortune cookies, those are creative decisions.
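
If you want to check that number, the raw combinatorics of the five lists work out like this:

```python
# 40 feelings x 45 environments x 55 concepts x 12 time eras x 59 visual styles
print(40 * 45 * 55 * 12 * 59)  # 70092000 combinations, before temporal anchoring
```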

The images the system generates are artifacts. Beautiful, unique, unrepeatable artifacts. But the art is in the system that makes them possible.

Here’s why I think this matters:

First, it gives people permission to be artists even if they can’t paint. You don’t need to wield a paintbrush, master Photoshop, or spend years learning composition. If you have a vision and can design constraints, if you can curate a possibility space, you can create art.

Second, it’s about surrendering control. In the age of infinite tweaking, infinite undos, infinite prompts until you get it perfect—there’s something radical about building a system that surprises you. That creates things you couldn’t have imagined. That respects the role of chance. That creates value through the scarcity of each output being unique and personal.

Third, it expands what we mean by “art” in the AI age. If we only count the output, we’re going to have endless debates about whether AI art is “real.” But if we recognize that designing the system is an artistic act, suddenly the conversation shifts. It’s not about the paintbrush. It’s about what you choose to paint.

Some people won’t agree. They’ll say this is engineering, not art. That randomness isn’t creativity. That AI-generated images can’t be art because there’s no “hand of the artist.” In fact, over a year ago, I posted a LinkedIn update about art, choices, and Shannon entropy, and it has drawn more than 90 comments so far. I didn’t expect that!

I’m genuinely curious what you think.

There’s another beautiful aspect to this: the recursion. I used AI (Claude) to help me design a system (Serendipity Engine) that uses AI (Claude Haiku and Nova Canvas) to create images. And now I’m using AI (Claude again) to help me write about it. Some people are afraid about AI “poisoning” its own training data, as more and more AI-generated content is published.

But in this case, there’s a human in the loop. At every step, I’m the curator. The decision-maker. The artist designing the possibility space.

AI isn’t replacing my thinking. It’s helping me unearth more of the “Constantin” that would otherwise be harder to find, realize, or articulate.

And that might be the most important artistic choice of all: choosing to collaborate rather than control.

What I Learned

Here’s what this project taught me:

On software development: I now have some muscle memory for GitHub issues and pull requests, and am using this approach more and more often. I understand why and how developers work this way. Not bureaucracy, clarity. Each issue is a container for a decision. Each PR is a conversation about change. The process makes collaboration possible at scale.

On art and creativity: I understand conceptual art differently now. It’s not about the object, it’s about the idea the object represents. Sol LeWitt, Brian Eno, even Duchamp with his readymades. They were all asking: what if the artistic act is in the framing, the system, the constraint design?

On AI collaboration: I have a better sense of how AI actually behaves when you work with it daily. It’s not magic. It’s not sentient. But it’s also not autocomplete. It’s a thinking partner that helps you navigate possibility space faster than you could alone.

And most importantly: I had fun.

That’s the real secret of learning. Not the grind. Not the discipline. Not forcing yourself through boring textbooks because someday it might be useful.

Fun is the most underrated learning accelerant.

I built this because I wanted to. Because the conversation with Claude and virtual Brian was energizing. Because seeing each new image appear felt like opening a gift. Because it allowed me to explore an artistic side I didn’t know I had.

If it’s not fun, you’re doing it wrong.

Your Turn

Check out the companion YouTube video above where I demo the Serendipity Engine and explore the art question in some depth.

And here’s an example wall of 16 images the system generated:

A diverse collage of 16 vibrant digital artworks arranged in a 4x4 grid, featuring: ornate golden mandalas with purple gems, desert landscapes with towering rock formations, figures standing before glowing crystal portals and icy caverns, colorful geometric abstract patterns in magenta and orange, serene mountain and lakeside scenes, traditional Asian architecture with autumn leaves, graffiti-covered storefronts, flowing abstract water designs with golden swirls, playful illustration of children in clouds, grand library interiors with arched windows, minimalist line drawings of landscapes with animals, and cosmic nebula scenes with bright energy centers. The collection showcases a mix of digital art styles including surrealism, abstract design, landscape art, and whimsical illustrations predominantly featuring warm golden tones, vibrant purples, and cool blues.
16 art pieces (or not), generated with the Serendipity Engine

The code is open source on GitHub. You’ll need an AWS account with access to Amazon Bedrock (Anthropic Claude Haiku and Amazon Nova Canvas), but once you’re set up, you can generate as many images as you want, at less than 5 cents per image. Each one unique. Each one unrepeatable.
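
If you’d like to see the shape of the pipeline before cloning the repo, here’s a minimal sketch of how the two Bedrock calls could be wired together. It’s not the project’s actual code: the model IDs are assumptions you should verify for your region, and the request bodies follow the published Bedrock formats for Claude’s Messages API and Nova Canvas text-to-image.

```python
import base64
import json

import boto3  # needs an AWS account with Bedrock model access enabled

# Model IDs are assumptions; check the Bedrock model catalog for your region.
HAIKU_ID = "anthropic.claude-3-haiku-20240307-v1:0"
CANVAS_ID = "amazon.nova-canvas-v1:0"

bedrock = boto3.client("bedrock-runtime")

def generate_background(assembled_idea, out_path="background.png"):
    """Ask Claude Haiku to write the image prompt, then render it with Nova Canvas."""
    # Step 1: turn the selected dimensions, temporal anchor, and fortune into a prompt.
    haiku_body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{
            "role": "user",
            "content": f"Write one vivid, concise image prompt for: {assembled_idea}",
        }],
    }
    resp = bedrock.invoke_model(modelId=HAIKU_ID, body=json.dumps(haiku_body))
    prompt = json.loads(resp["body"].read())["content"][0]["text"]

    # Step 2: render the prompt with Nova Canvas and save the decoded image.
    canvas_body = {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {"numberOfImages": 1},
    }
    resp = bedrock.invoke_model(modelId=CANVAS_ID, body=json.dumps(canvas_body))
    image_b64 = json.loads(resp["body"].read())["images"][0]
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(image_b64))
```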

Now I want to hear from you: Is the Serendipity Engine art?

Not the images—the system. The curated constraints. The designed possibility space. The choice to surrender control and let something unexpected emerge.

Comment wherever you like: link to this post on your favorite social network and add your opinion. I’m on LinkedIn, Bluesky, Mastodon, GitHub, and YouTube. Or send me an email.

Tell me which of the 16 images speaks to you and why.

And if you try out this tool, I’d love to hear about it.