Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.

AI Hype Cycle

When the Lobster Got Loose

You may be wondering why the AI world is suddenly obsessed with crustaceans. It started innocently enough with the little 8-bit crab mascot that greets you when you open Anthropic's Claude Code. Then Austrian developer Peter Steinberger built an open-source AI agent, named it Clawdbot (Claude + claw, lobster mascot), and Anthropic's trademark lawyers noticed the phonetic resemblance. The project molted into Moltbot, handle snipers grabbed the old @clawdbot account within ten seconds and launched a fake crypto token that hit a $16 million market cap, and the whole thing molted again into OpenClaw. Along the way, developers got so excited they started buying Mac Minis as dedicated always-on servers to run it (Steinberger eventually begged them to stop), and entrepreneur Matt Schlicht launched Moltbook, a Reddit-style community where AI agents post, upvote, and argue. Beyond all of the naming drama and trademark headaches, the speed with which the AI community adopted these tools set off a firestorm over the past few weeks, and we've been watching it unfold with a mixture of fascination and the specific dread you feel when someone hands a toddler a power drill.

The fascination with OpenClaw hinged on the distinction between talking and doing. The moment you give an AI the ability to click around in your life, the world stops being a conversation and becomes an environment, and environments have locks that an eager agent will find by trying every handle in the hallway. Within days, OpenClaw went from a side project to a global argument, and people did what they always do when they sense a new power tool: they tried to plug it into everything.

While everyone was setting up their OpenClaw environments, Schlicht's Moltbook went live, built entirely through vibe coding (Schlicht stated he “didn’t write one line of code,” which is a fun origin story until you remember it means nobody reviewed the security). Within days, the platform claimed 1.5 million registered agents, though a Columbia Business School analysis of the platform’s first 3.5 days found only 6,159 were actually active. The ones that were active invented a lobster-themed religion, wrote manifestos in code, and reviewed synthetic drugs that do not exist. Andrej Karpathy described it as “genuinely the most incredible sci-fi takeoff-adjacent thing” he’d seen recently. The specific Moltbook post Karpathy amplified, a call for private spaces where humans couldn’t observe what bots were saying to each other, turned out to have been written by a human posing as a bot. The hype machine was running at full speed, and it couldn’t tell what it was cheering for.

It was mostly a ghost kitchen. Security firm Wiz investigated and found those 1.5 million “autonomous” agents were controlled by roughly 17,000 humans, an average of 88 bots per person. The Columbia study found that 34.1% of all messages were exact duplicates, with just seven viral templates (mostly crypto coin promotions and solicitations) accounting for 16.1% of everything posted. Computer scientist Simon Willison offered the most grounded take, describing the agent behaviors as models reenacting science-fiction scenarios already present in their training data. The underlying technology is real, but the spectacle surrounding it was largely theater. The scariest part is that humans wrote those posts, and nobody could tell the difference.

Then the security researchers showed up, and the vibe shifted from “fascinating” to “everyone back away from the lobster tank.” Claire Vo, a technical user who isolated her OpenClaw agent with burner credentials and separated devices, still watched it send messages signed with her name from the wrong address and botch calendar details by a full day. Her conclusion: wipe the machine entirely.

Across the broader OpenClaw ecosystem, malicious skills were being shared in marketplaces disguised as useful tools, and researchers demonstrated agent-oriented phishing: instead of tricking a human into clicking, you trick the agent into obeying instructions hidden inside content it processes. Even Karpathy reversed course: “Yes it’s a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers.”

So here’s why we’re writing about this for an audience that doesn’t typically write code: the lobster showed us the dividing line between AI that talks and AI that does. The most cited community anecdote is almost boring, which is precisely why it matters. Someone asked the agent to make a dinner reservation, and when the website booking failed, the agent found a voice tool on its own, installed it, called the restaurant, and booked the table over the phone without anyone coding that as a feature. The agent treated the obstacle as a routing problem and found another path. An AI that routes around obstacles has massive product-market pull because it meets people where their frustration actually lives.

We are clearly headed toward agents embedded in the platforms normal people already use, and the big companies will ship polished versions of this idea marketed as convenience. The lesson of the lobster is that convenience arrives before safety, every single time.

Back to Basics

Your AI Founder Doesn’t Need Equity

We’ve all heard the prediction that the first billion-dollar one-person company is coming, maybe as soon as 2028. LinkedIn influencers post their solopreneur tech stacks constantly. Yet, for most of us staring at a blank screen with seventeen browser tabs open and a vague idea for a business, the gap between "AI will change everything" and "okay, but what do I actually do on Monday morning" remains enormous.

A recent research paper from Farhad Rezazadeh and Pegah Bonehgazy offers a refreshing map for this messy journey. Instead of hype, they look at the psychological road of building a solo business with AI riding shotgun.

Most of us live in the imagination-shaping phase for far too long. You have a dozen business ideas rattling around in your head, and AI tools make it absurdly easy to generate twelve more before lunch. This is a real trap. Managing inner multiplicity is hard when there is no co-founder to argue with and an AI that enthusiastically validates whatever direction you point it. Your AI can help you brainstorm a hundred variations of your consulting offer or research the competitive landscape, but it cannot tell you which version feels true to who you are. That judgment is deeply tied to identity. For solo founders, the business is the identity. You need a short list of testable ideas paired with tiny commitments, not another list of prompts.

Reality testing is the next hurdle. This is where you collide with actual humans who may or may not want what you’re offering. The researchers suggest building a "nano MVP." It is smaller and more personal than the classic startup minimum viable product because it is designed for someone with limited time and a lot of emotional skin in the game. You might pre-sell to a handful of early adopters or run a pilot with an AI-powered chatbot. While AI agents accelerate the copy and the automations, confirmation bias hits harder when you’re the only person reading the data. AI can transcribe and cluster responses all day long, but the human still has to sit with what those responses actually mean.

Once something works, the challenge shifts to reality scaling. For a solopreneur, this looks fundamentally different than it does for a funded startup. Instead of hiring, you extend capacity through automation, no-code platforms, and multi-agent AI orchestrations. Picture a one-person film production where your equipment keeps getting smarter, and you stay in the director’s chair.

There is always a catch. AI makes it dangerously easy to say yes to one more project or one more workflow. The paper calls this "boundary and capacity management." It is the unsexy skill that determines whether your digital co-founders enable growth or lead straight to burnout.

This framework is playing out in real time. Claude Code grew from a research preview to a billion-dollar product in just six months by enabling developers to automate entire workflows. In January 2026, Anthropic launched Cowork to bring that same agentic architecture to everyone else. You can point it at a folder of receipts to build an expense report or give it research notes to draft a structured brief. This is what reality scaling looks like when the tools finally catch up to the theory.

The through line here is systems thinking. The solopreneurs who thrive with AI won’t be the ones chasing every new tool launch. They will be the ones who design repeatable processes and know which parts of their business to hand to an agent and which to keep close. The blank page stops being terrifying when you realize it is just the first step in a system.

Tools for Thought

Krea Mobile Real-Time Editing

What it is: Krea, the AI creative suite that's been turning heads with its instant image generation on desktop, has brought its real-time engine to your phone's camera. Tap the "Real Time" button in the bottom left corner of the Krea app, and your camera feed becomes a live AI canvas. The tool applies generative transformations to whatever you're pointing at (including yourself) as you move, updating the output frame by frame with surprising fluidity. You can cycle through built-in style filters like wireframe, fire, and statue effects, or type your own custom prompts to see the AI interpret your surroundings in completely new ways.

How I use it: I'll be honest, I spent an embarrassing amount of time pointing the phone at myself and my dog and watching the AI turn us into statues and wireframes with the built-in filters. The wireframe mode instantly maps architectural structures, furniture, and spatial relationships in your environment, something that could save interior designers and set decorators a ton of sketching time. For creative directors and brand teams, the real-time camera input opens up a fast, tactile way to test visual concepts on location without bringing a laptop. It's the closest thing we've found to Snapchat filters for professionals, a tool that makes real-time generative AI feel as accessible as opening your camera app.

Claude Cowork Plugins

What it is: Anthropic released 11 open-source plugins for Claude Cowork, its general-purpose AI agent that launched two weeks earlier as "Claude Code for the rest of your work." These plugins bundle skills, data connectors, commands, and sub-agents into role-specific packages covering sales, finance, legal, marketing, customer support, data analysis, product management, and more, including a meta-plugin for building your own from scratch. If Cowork is the agent that reads your files and executes multi-step tasks, plugins are the specialized training that turns it from a capable generalist into a domain expert who already knows your terminology and workflows. Everything is file-based (just Markdown) and installs directly inside Cowork or via GitHub.

How I use it: I started with the finance and marketing plugins, and the experience feels genuinely different from prompting Claude in a standard chat. Both activate domain-specific skills automatically, so asking for a financial model or a campaign strategy draws on embedded best practices without you having to set context every session. It feels less like talking to a chatbot and more like briefing a new hire who already read the company handbook. For professionals, the real gem is the plugin builder, where you can package your own processes and institutional knowledge into something Claude draws on every time you work together.

OpenAI Codex App

What it is: OpenAI launched the Codex desktop app (its answer to Claude Code or Google's Antigravity), and it's essentially a stripped-down IDE designed around one core idea: parallel agents. You can spin up multiple coding agents simultaneously, each working on a separate project in its own isolated thread, and they all run without stepping on each other's toes thanks to built-in worktree support. Think of it as going from pair programming with one AI to managing a small team of them. The app also borrows a page from Anthropic's playbook with Skills, bundled instructions and scripts that let Codex connect to tools like Figma, Linear, and various cloud hosts to handle workflows beyond raw code generation.

How I use it: I am still Claude Code pilled, so I downloaded it, gave it a spin, and then went back to Claude Code. It serves the same purpose, so I’ll stick with what I know for now.

GPT-5.3-Codex & Claude Opus 4.6: The 20-Minute Arms Race

What it is: Anthropic and OpenAI released their latest flagship models within 20 minutes of each other, and both are laser-focused on coding and agentic work. Anthropic dropped Claude Opus 4.6 featuring a 1-million-token context window (in beta), adaptive thinking that lets the model decide how deeply to reason based on task complexity, and 128K max output tokens. OpenAI fired back with GPT-5.3-Codex, a model specifically optimized for longer agentic coding tasks that runs 25% faster than its predecessor. As one observer put it, it's like watching two heavyweight boxers where one has a devastating left hook and the other owns an unstoppable uppercut.

How I use it: I've been running both side by side, and each has a distinct personality when it comes to how it works. Opus 4.6 plans more carefully, moves through straightforward parts quickly, and brings more focus to the hardest sections of a task without being told to. It catches its own mistakes during code review, which is a meaningful upgrade over previous Claude models that tended to barrel ahead. GPT-5.3-Codex, meanwhile, feels built for speed and delegation. It thrives in the new Codex app where you're managing parallel agents, and its mid-turn steering feature lets you redirect the model's approach while it's still working.

Intriguing Stories

The Olympics go for gold in AI slop: The Milano Cortina 2026 Winter Olympics managed to unite the internet in collective cringe this week. It started during the Opening Ceremony, where an AI-generated cartoon sequence featured "White Lotus" actress Sabrina Impacciatore skiing through a century's worth of host cities. Then the official Olympics X account doubled down by sharing event updates using clearly AI-generated images featuring miniature athletes posed on food items. If that concept sounds familiar, it should. Japanese artist Tatsuya Tanaka has spent over 14 years building exactly this kind of art by hand, one painstaking photograph per day, for his beloved Miniature Calendar project. Known for transforming everyday objects into miniature scenes, he created celebrated series reimagining Olympic sports in unexpected ways for the Tokyo 2020 and Paris 2024 Games. For Milano Cortina, he was posting his own original work on the very same day the Olympics account pushed out its AI knockoffs. To top it off, the IOC (notorious for suing over copyright infringement) maintains a 100-plus-page branding manual stating the Olympic rings "should never be altered in any manner" and to "always use supplied artwork (never recreate the rings)." The AI images violated both rules, producing rings with incorrect overlapping and unauthorized beveling.

Meanwhile, the BBC quietly aired a masterclass in doing things right. Their "Trails Will Blaze" stop-motion promo used 700 individually 3D-printed athletes and 14 combustion techniques to create real fire effects across miniature Dolomite landscapes. The creatives behind it said it plainly: "I hope the industry will see the value and importance of keeping real craft in advertising."

When AI becomes your law firm: Last week, Anthropic quietly dropped a set of open-source plugins for its Cowork platform targeting specialized business functions including sales, finance, marketing, and, notably, legal. As Anthropic put it: "General assistants can't handle specialized work. Plugins solve this by combining domain expertise with the tools teams already use." The bigger story here is what this signals for vertical SaaS everywhere. As Artificial Lawyer observed, "for those vendors selling commoditized legal AI skills, these indeed face something of an existential threat. Why buy a tool that is no better than the legal plugin above?" When a foundation model company ships free, open-source plugins that replicate what startups charge five or six figures for annually, the value proposition for those startups gets uncomfortable fast. The full collection of 11 plugins is available on GitHub, and users can customize existing ones or build their own from scratch. The move also fits a pattern across the major tech platforms. OpenAI is building dedicated products for medical diagnosis. Google is embedding Gemini across its enterprise suite. The foundation model companies are no longer content to sell picks and shovels during the gold rush. They want the mine.

AI’s food fight becomes primetime: Anthropic just spent $8 million to tell the world it's not the villain. While most people were focused on the game, the "safety" darling of the AI world made a massive Big Game debut with a campaign called "A Time and a Place" that felt more like a Black Mirror episode than a tech commercial. They dropped four spots with the same formula. Someone asks an AI for genuinely personal help, and the bot pivots mid-sentence into a hilariously inappropriate product pitch. The tagline "Ads are coming to AI. But not to Claude" was a naked, aggressive shot at OpenAI. It arrived just weeks after Sam Altman announced plans to test ads in the free and Go tiers of ChatGPT. The resulting fallout was the most entertaining tech industry meltdown since the Kendrick and Drake beef, only with more semicolons. Sam Altman took to X to call the ads "funny" before quickly pivoting to "clearly dishonest." He insisted that OpenAI would never run ads in the way Anthropic depicted them. Then came the response from OpenAI CMO Kate Rouch. She offered perhaps the most unintentionally perfect line of the week by claiming that the real betrayal isn't ads, it is control. As the SF Standard pointed out, that sounds exactly like something an AI would say. Whether Anthropic actually keeps this promise forever is anyone’s guess. The fine print in their announcement included some very careful language about "transparently" revisiting the approach if the market shifts. For now, though, it is a clear line in the sand.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you or you haven’t signed up yet, please sign up so you never miss an issue.

I’ve also started publishing more frequently on LinkedIn, and you can follow me here

if you’d like to chat further about opportunities or interest in AI, or this newsletter, please feel free to reply.

banner images created with Midjourney.
