We’re still living in the cat pictures era of AI

In Tech | 7 minute read | by Constantin Gonzalez
[Hero image: a tabby cat at a desk in warm golden light, surrounded by floating icons for WiFi, images, shopping, and other digital symbols]

Most people use AI the way our parents used the Internet in 1995.

They’d dial up their modem, check the weather forecast on Yahoo, maybe look up a recipe, and call it a day. The idea that this same technology would eventually let them video call their grandchildren, run a business, or access humanity’s entire knowledge base? Unimaginable.

Today, we’re doing the same thing with AI. We use it to summarize documents, search for answers, and write some code. That’s it. That’s our email-and-weather-forecast phase.

Let me offer one more pattern that’s hiding in plain sight and can bring you one step closer to “modern AI usage” (whatever that will be 5 years from now). I re-learned this after spending a weekend building an AI-powered news briefing shell script.

The knowledge-harvesting pattern that actually matters

As a general rule, the more relevant data you bring into an AI’s context, the better it works. That’s exactly what this pattern leverages:

  1. Collect a large amount of information
  2. Filter and preprocess it down to what matters
  3. Synthesize it with AI into something personally useful

If that sounds familiar, it should. It’s basically MapReduce, the Big Data pattern that powered Google and transformed how we think about processing information at scale. Except now, instead of needing a cluster of servers and a team of engineers, you can do it from your terminal with a few tools and some scripting.

Actually, you don’t even need to script. Throw a bunch of documents into a Claude project or Perplexity workspace and ask for a personalized analysis. Same pattern, no code required.

But here’s the thing most people miss: this pattern works for almost any everyday problem.

  • Research: Collect papers, filter relevant sections, synthesize insights.
  • Career planning: Collect job postings, extract patterns, generate strategy.
  • Learning something new: Collect tutorials, filter by your skill level, create a personalized roadmap.

The script I’m about to show you is just one more example of this pattern, with an AI twist.

My weekend problem: information overload

I like staying informed. Doomscrolling? Not so much.

Every morning, I’d check dozens of sources: news, AI developments, world events, positive stories (who doesn’t need those?). I’d skim headlines, get distracted by clickbait, and 30 minutes later realize I’d absorbed almost nothing useful.

Three years ago, the solution would have been “get better at discipline” or “use an RSS reader and train yourself to be more selective.”

This weekend, with a simple shell script, I built something that would have seemed impossible back then.

Building my personal news briefing system

The system pulls from 20 sources: RSS feeds for reliable news outlets, plus Perplexity searches for real-time information and broader topics I care about. It can run every morning or on demand, processing everything in parallel and delivering a beautifully formatted Markdown briefing that includes:

  • Top news from Germany, the world, economy, tech, and entertainment
  • AI and personal productivity tips
  • An Oblique Strategy (Brian Eno style)
  • A writing prompt
  • An “Editor’s Corner” where Claude gets to be snarky about the day’s news

The whole thing takes about 90 seconds to run and delivers exactly what I need—no more, no less. You can watch me walk through the technical details in this YouTube video, grab the complete Fish shell script from this gist, or see an example output in this other gist.
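If you want the “every morning” part to be hands-off, a single scheduler entry does it. A minimal sketch, assuming a standard crontab; the paths are placeholders for wherever your script and output live:

# Hypothetical crontab entry (added via crontab -e): build the
# briefing at 7:00 every morning and save it as Markdown.
0 7 * * * /usr/bin/fish /home/you/bin/briefing.fish > /home/you/briefing.md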


A few interesting details of how I got there:

The technical journey (including my facepalm moment)

I started with the obvious approach: feed each news source through Claude individually, asking it to “optimize” and “clean up” the content before the final synthesis.

Seemed smart, right? Let AI do the preprocessing!

Except I kept hitting context length limits. With 20 input sources, some of them quite chatty, even Claude Haiku’s 200k-token context window wasn’t enough.

That’s when I had my facepalm moment: of course it wasn’t enough; I was burning through context across multiple LLM calls instead of one. I was treating AI like a preprocessing tool when what I really needed was old-school data munging.

After trying out some CLI tools for RSS/XML processing and failing, I went back to the good old Swiss army knife of data manipulation: Python. Thanks to the feedparser library, it lets me:

  • Convert messy XML feeds to clean Markdown
  • Limit each feed to the 10 most recent items
  • Prioritize the summary/description fields (short and crisp) over full content (which blows up context size) where possible

This Python part is particularly fun because it’s embedded directly inside my Fish shell script, thanks to the magic of Astral’s uv and its easy handling of virtual environments. Python-inside-Fish: Inception-style scripting!
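Here’s a minimal sketch of the shape, not the real thing (the function name is mine for illustration; the gist has the full version). A Fish function holds the Python source in a plain string, and uv runs it in a throwaway environment with feedparser installed:

function feed_to_markdown --argument-names url
    # The Python source lives in an ordinary Fish string variable
    set -l script "
import sys
import feedparser

feed = feedparser.parse(sys.argv[1])
# Keep only the 10 most recent items (most feeds list newest first)
for entry in feed.entries[:10]:
    # Prefer the short summary over full content to keep context small
    title = entry.get('title', '(untitled)')
    body = entry.get('summary', '')
    print('## ' + title + '\n')
    print(body + '\n')
"
    # uv provisions a temporary venv with feedparser on the fly
    uv run --with feedparser python -c $script $url
end

Call it with a feed URL and you get trimmed Markdown on stdout, ready to be concatenated with the other sources.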

Parallel processing: the secret sauce

Here’s where impatience and some occasional Q&A with Claude paid off:

Instead of processing 20 sources sequentially (which would take forever), I learned about Fish’s job control. Each feed fetch runs in the background with &, and Fish’s built-in job counter lets me know when everything’s done:

while test (count (jobs)) -gt 0
    sleep 1
end

All 20 sources process in parallel. What would take 2–3 minutes sequentially now takes about 30 seconds.
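In context, the fan-out is just a loop with & plus that wait loop. A sketch; feed_to_markdown and the file naming are illustrative, and the real script is in the gist:

# Kick off one background job per source, each writing its own file
for i in (seq (count $feed_urls))
    feed_to_markdown $feed_urls[$i] > /tmp/feed_$i.md &
end

# Block until Fish reports no remaining background jobs
while test (count (jobs)) -gt 0
    sleep 1
end

# All fetches done: concatenate the results for the synthesis step
cat /tmp/feed_*.md > /tmp/all_feeds.md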

Good old Unix thinking, combined with AI.

Why redundancy actually helps

You might think 20 news sources is overkill. Why not just pick 3–4 good ones?

Because redundancy makes AI smarter.

When Claude sees the same story mentioned across multiple sources—each with slightly different details, emphasis, or framing—it can synthesize a more complete picture. One RSS feed might have a sparse entry with just a headline. Another might include quotes. And some things are hard to find RSS feeds for—like my local weather forecast, productivity tips, or daily facts. That’s why I added some Perplexity searches on top of RSS.

Claude takes all of that and creates something better than any individual source could provide. It’s the difference between reading one journalist’s take and reading five, then forming an opinion.

Surprise touch: personality

Here’s my favorite part: I added a bit of personal context about myself to the prompt—my location (Munich), my interests, my dog Elvis.

Now Claude sprinkles little references throughout the briefing. A weather report that suggests “perfect for a barefoot run with Elvis.” A snarky comment in the Editor’s Corner that connects back to my work. Small touches that make a daily briefing feel less like a news dump and more like a conversation with a well-informed friend.
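Mechanically, this is trivial: a few sentences of personal context prepended to the system prompt of the final synthesis call. Something like this sketch (the wording is illustrative, shown here with the llm CLI I recommend below):

# A bit of personal context, prepended to the synthesis prompt
set -l persona "The reader lives in Munich, works in tech, and has a dog named Elvis."
cat /tmp/all_feeds.md | llm -s "$persona Write a personal morning briefing in Markdown."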

This wasn’t planned. It emerged from the pattern: collect (news), filter (what matters), synthesize (for me specifically).

What this means for you

If you’re reading this thinking “cool script, but I’m not a command-line person,” you may be missing the point.

The script is just automation. The pattern is the unlock.

You probably have a problem right now where you need to make sense of too much information:

  • Research for a project
  • Competitive analysis for your business
  • Learning a new skill
  • Understanding a complex topic

Try the pattern:

  1. Collect everything relevant (don’t filter yet)
  2. Preprocess it just enough to be useful (remove noise, extract key parts)
  3. Let AI synthesize it into something actionable for you

You can do this in a Claude Project. You can do it with Perplexity. You can do it with a shell script if you like to code.
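If you do like to code, the whole pattern fits in a handful of lines. A minimal sketch with placeholder URLs and a placeholder prompt:

# 1. Collect: pull everything into one pile (placeholder feed URLs)
for url in https://example.com/news.xml https://example.com/tech.xml
    curl -s $url >> /tmp/raw.txt
end

# 2. Filter: cheap, deterministic trimming before any AI call
head -c 100000 /tmp/raw.txt > /tmp/trimmed.txt

# 3. Synthesize: one LLM call over the filtered pile
cat /tmp/trimmed.txt | llm -s "Turn this into a short, personal briefing"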

The method doesn’t matter. The pattern does.

Start experimenting

Three years ago, this kind of personalized, automated intelligence gathering was science fiction. Today, it’s a weekend project.

We’re living in the cat pictures era of AI: most people only see the obvious use cases. But just like the Internet evolved beyond email and weather forecasts, AI will evolve beyond summarization and code generation.

The question is: will you be experimenting, or will you still be stuck checking the weather?

If you want to start having AI fun on the command-line, here’s how:

  1. Install the llm tool: Simon Willison’s llm CLI makes working with AI models from the command line super easy (see the quick-start sketch after this list)
  2. Pick one daily annoyance: Not everything, just one problem where you’re drowning in information
  3. Try the pattern: Collect → Filter → Synthesize. Or something else!
  4. Share what you discover: Seriously, I want to know what you build
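For step 1, the setup is a couple of commands. A sketch assuming pip and an OpenAI key (llm supports many other installers and providers too):

# Install the llm CLI and store an API key once
pip install llm
llm keys set openai

# Ask a question directly...
llm "Ten interesting things to do with RSS feeds"

# ...or pipe collected text in and synthesize: the pattern in one line
cat notes.txt | llm -s "Summarize the key points for me"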

If you want to see my exact implementation, check out the YouTube walkthrough and the two gists linked above.

The real innovation isn’t the technology. It’s learning to see patterns where others see chaos.


What’s one daily problem you’d like to solve with this pattern? Share this post on your favorite social media, add your comment and let me know. I read every response.