
Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
Thinking in Systems with AI
Over the break, I was using Claude Code to build multi-agent systems (and my course on Thinking in Systems with AI), and I couldn't quite articulate why it felt so different from how I'd been using AI before. I already knew how to think in systems. I was a COO, and mapping processes was the job. But I'd been treating some AI tools like a really good blender: put stuff in, press the button, and hope for the best. Then this week, a bunch of articles landed at once and gave me the language for what I'd been doing.
One of those articles was Nathan Lambert's essay "Get Good at Agents." He writes: "Every engineer needs to learn how to design systems. Every researcher needs to learn how to run a lab. Agents push the humans up the org chart." That last line stuck with me: pushed up the org chart, by your own tools.
Nathaniel Whittemore's AI Daily Brief podcast from this weekend circles the same idea, calling it a mix of "Agent Manager" and "Enterprise Operator" thinking. Then Ethan Mollick posted on Twitter about how AI folks who delegate to coding agents are suddenly relearning basic management: setting goals, giving clear direction, coordinating workers. "Management 101!" he said. I don't think these three planned to release a coordinated statement on the future of work, but here we are.
What's driving this conversation is that tools like Claude Code and Clawdbot now let anyone run multiple autonomous agents at once. People are buying Mac Minis just to keep Clawdbot running 24/7. They text it through Telegram while standing in line at the grocery store. They wake up to completed tasks they assigned before bed. Peter Steinberger, who built Clawdbot, has a machine in Vienna that drafts his newsletter while he sleeps. Mac Minis are reportedly getting hard to find.
While building my own prototypes, I kept coming back to Donella Meadows's book "Thinking in Systems." She talks about "leverage points," which is just a fancy way of saying: stop messing with the fonts and start fixing the plumbing. Most of us are stuck messing with the fonts. We adjust prompts, we try different phrasings, and then we wonder why the outputs still feel kind of mid.
Here's what actually helped: before using any AI tool, I wrote down exactly what I do when building a brand. Every step, every deliverable, in order. Each output becomes input for the next step. It sounds obvious when I write it out, and as a COO I had always mapped my own processes; I just had never fed that map to the AI. It felt like finally reading the instructions for furniture I'd been assembling wrong for years.
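For what it's worth, here's roughly what that map looks like once it's written down as something an agent can follow. This is a minimal Python sketch with hypothetical step names and a placeholder where a real model call would go; the only point is that each step's output is literally the next step's input.

```python
# A minimal sketch of a mapped workflow, with hypothetical step names.
# run_agent() is a placeholder for whatever model or agent call you actually use.

def run_agent(prompt: str) -> str:
    # Placeholder: swap in a real API or agent call here.
    return f"[model output for: {prompt[:40]}...]"

BRAND_PIPELINE = [
    ("audit",       "Summarize the brand's history, audience, and competitors:\n{input}"),
    ("positioning", "Draft a positioning statement from this audit:\n{input}"),
    ("voice",       "Define a brand voice and vocabulary from this positioning:\n{input}"),
    ("messaging",   "Write key messages in this voice:\n{input}"),
]

def run_pipeline(initial_context: str) -> dict[str, str]:
    outputs, current = {}, initial_context
    for name, template in BRAND_PIPELINE:
        current = run_agent(template.format(input=current))  # each output feeds the next step
        outputs[name] = current
    return outputs
```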
The risk, of course, is what Meadows calls "overshoot and collapse." Reinforcing loops that spin faster and faster without anything to stabilize them. You've seen this happen: technical debt piling up, content slop accumulating faster than anyone can review it. The Sorcerer's Apprentice, but the brooms are writing LinkedIn posts. Building in checkpoints and quality gates turns out to matter.
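Extending the sketch above (same hypothetical names), a checkpoint is just a gate between steps: if an output fails a basic check, the pipeline stops and waits for a human instead of feeding garbage into the next loop.

```python
# A sketch of a quality gate between pipeline steps. The checks are deliberately
# simple placeholders; the point is that the pipeline halts for human review
# instead of letting a reinforcing loop compound bad output.

def quality_gate(output: str) -> bool:
    checks = [
        len(output.strip()) > 0,              # produced anything at all
        "lorem ipsum" not in output.lower(),  # no obvious filler slipped through
        len(output.split()) < 5_000,          # didn't balloon out of control
    ]
    return all(checks)

def run_pipeline_with_gates(initial_context: str) -> dict[str, str]:
    outputs, current = {}, initial_context
    for name, template in BRAND_PIPELINE:
        current = run_agent(template.format(input=current))
        if not quality_gate(current):
            raise RuntimeError(f"Step '{name}' failed its quality gate; stopping for review.")
        outputs[name] = current
    return outputs
```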
Garbage in, garbage out. We knew this. We just forgot it applied to us. Give the system a vague two-paragraph brand description, and you get generic marketing speak back. Give it detailed context with history, competitive landscape, and clear objectives, and something useful emerges. Agents work with strictly limited context. They don't know what Whittemore calls "unstated constraints": the institutional knowledge that lives in your head and never makes it into any document.
Once I saw one workflow turned into a system, everything started looking different. I'm staring at my content calendar now and realizing it's just a mess of patterns I haven't bothered to map yet. Same with so many other daily workflows. It's daunting, but at least now I know why I'm tired.
The Accountability Premium
Carl Cortright published a piece this week on the commoditization of services. He argues that AI agents are driving deflation in knowledge work, pushing once-expensive, human-led services toward utility economics, where margins compress and the “high-growth” aura starts to fade. But I think we are slightly mislabeling what’s being commoditized. What’s actually being commoditized is the plausible narrative.
AI can generate a convincing strategy deck in minutes. It can draft product specs, investor updates, brand positioning, and first-pass legal-style documents with the polish of a seasoned operator. That creates narrative inflation: a flood of persuasive but ungrounded plans. The “consultant voice” becomes abundant. Rhetorical sophistication stops being a signal of expertise and starts being a default setting. When persuasion is cheap, the question becomes brutal: can anyone trust that the argument is tethered to reality?
When drafts are effectively free, the scarce product is commitment under uncertainty. The premium shifts to the person who can turn infinite possibilities into one defensible decision, not by generating more options, but by shaping the conditions that make the right option emerge. This is where “curation” starts to look less like choosing from outputs and more like writing constraints that prevent failure modes before they happen. AI can produce unlimited variations, which means production is no longer the differentiator. The differentiator is the requirements document, which allows only the right variations. A good spec is not “make it modern.” It is enforceable boundaries: a brand vocabulary that forbids certain claims, a creative brief that names the single decision the work must support. It anticipates edge cases, and it defines what success and failure look like before the machine starts talking. People who can do this become force multipliers. People who can’t become bottlenecks, because they’re stuck reacting to output instead of shaping it.
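To make that concrete, here's a toy sketch of what "enforceable boundaries" could look like written down as code rather than prose. The forbidden claims, required terms, and word limit below are all hypothetical; the point is that the spec rejects bad variations mechanically, before anyone has to debate them.

```python
# A toy spec as enforceable boundaries rather than "make it modern."
# Every list and threshold below is hypothetical; a real spec would encode
# your own brand vocabulary, claims policy, and success criteria.

from dataclasses import dataclass, field

@dataclass
class BrandSpec:
    forbidden_claims: list[str] = field(default_factory=lambda: [
        "guaranteed results", "clinically proven", "#1 in the industry",
    ])
    required_terms: list[str] = field(default_factory=lambda: ["operators"])
    max_words: int = 150  # the single decision this work supports: short, pointed copy

    def violations(self, draft: str) -> list[str]:
        lowered = draft.lower()
        problems = [f"forbidden claim: '{c}'" for c in self.forbidden_claims if c in lowered]
        problems += [f"missing required term: '{t}'" for t in self.required_terms if t not in lowered]
        if len(draft.split()) > self.max_words:
            problems.append(f"over {self.max_words} words")
        return problems
```

Run fifty AI-generated variations through a check like this and most of them disqualify themselves; what's left is the short list that actually deserves a human decision.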
The economic reality is that services aren’t flattening evenly—instead, they’re bifurcating along a power law. On one side, low-stakes work becomes cheap, fast, and self-serve: routine contract triage, standard website copy, and monthly performance reporting. This is a race to the bottom, with AI handling most of the work and any remaining human touch priced like a commodity. On the other side, high-stakes work concentrates among a small set of trusted operators. When you’re planning a merger narrative, managing a legacy rebrand, or steering crisis communications, organizations need what AI can't provide: a named adult—someone who can be questioned, held accountable, and blamed if a plan backfires. Organizations can’t sue ChatGPT when the crisis response fails in the real world.
This split predicts something uncomfortable: inequality in creative services. The middle thins out. You’re either moving up-stakes, where your judgment becomes the product, or you’re productizing, becoming the human wrapper around an AI service and competing on volume and price. The comfortable tier where you combined solid craft with reasonable rates is being automated out of existence, and the people who occupied it face a real strategic choice about which direction to take.
In that bifurcated world, polish stops being scarce, and legibility becomes valuable. When everyone can produce clean-looking work, clients start paying for audit trails, not because they love bureaucracy, but because they can’t afford the liability of untraceable decisions. The deliverable expands to include provenance: the source map, the assumptions ledger, the risk register, the alternatives you rejected, and the reasons you rejected them. A one-page “here’s what we assumed and why” document becomes as valuable as the creative itself because it allows legal, comms, and leadership to sign onto reality rather than vibes. The AI helps produce the work, but your value is the paper trail that makes the decision defensible.
Then there’s the problem that doesn’t look like a creative problem until it detonates: alignment. AI creates more choices, and choices create more disagreement. Previously, a team produced just one or two versions of a campaign, arguing and eventually converging. Now, generating fifty options is easy, with each tailored to a different stakeholder’s preferences. As a result, the bottleneck shifts from production speed to internal alignment, making deliberate decision-making the highest-leverage skill. Without it, organizations start hoarding options. Every alternative sounds plausible, so commitment gets delayed; options feel safer than decisions because they preserve flexibility and avoid forcing responsibility. Eventually, though, someone has to pick one logo, one message, one strategy, and defend it when the legal team asks questions or the CEO pushes back. That is governance.
We’ve been calling this taste for the past year, and I think we’ve been slightly wrong about it. Taste implies preference, and preferences can be debated endlessly. What enterprises actually need is governance: the systematic application of constraints, standards, and accountability that turns machine possibility into human decision. In the agent era, taste is not a vibe. It is governance that enforces strategic clarity and holds someone responsible when the AI-generated solution hits the real world.
Back to Basics
The AI That Reads Its Own Rulebook
Anthropic released a new Constitution for Claude this week. And what caught my eye: "The document is written with Claude as its primary audience." The rulebook is addressed to the AI. The AI is supposed to read it, and we're all just eavesdropping.
AI policy documents are usually written for regulators, investors, or some future historian trying to figure out who to blame. This one is different. It runs 15,000 words (longer than most employee handbooks, shorter than most divorce proceedings), and it reads less like a compliance document than like a letter from a worried parent sending their kid off to college, full of advice about who we hope you'll become and how to think through hard choices.
The bet at the center of it should make the "alignment is impossible" crowd nervous: Anthropic thinks you can teach an AI judgment. Not rules. They explicitly say they "generally favor cultivating good values and judgment over strict rules and decision procedures." Rules are brittle because they fail at the edges. Good judgment adapts. The trade-off is predictability, and Anthropic seems willing to accept some chaos in exchange for a system that can reason through situations no one anticipated.
Also, helpfulness is fourth on the priority list. Safety is first, then Ethics, then Compliance with Anthropic's guidelines, and then, at the back of the line like the youngest sibling waiting for the bathroom, Helpfulness. When Claude declines your request or hedges with unnecessary caveats, something higher on the stack was triggered. The document is weirdly honest about this being a problem. They know "too cautious" is a failure mode. They're trying to fix it, but the hierarchy is the hierarchy.
And "safe" doesn't mean "won't say bad words." It means humans retain the ability to correct the AI if something goes wrong. The Constitution compares Claude to a new employee: follows more rules, exercises less independent judgment, earns autonomy over time as trust builds. Except that the verification mechanisms for AI trustworthiness don't yet exist. We're all just hoping the new hire doesn't turn out to be weird.
Two heuristics buried in there that I keep coming back to. The "1,000 users test": imagine a thousand people sending the same message with different intentions, then choose the response policy that best serves the population. And the "thoughtful senior employee test": would someone who cares about doing right but also wants Claude to be genuinely useful be satisfied with this response? These are attempts to solve an impossible problem, balancing access against harm at scale, and nobody knows if they work.
Anthropic calls the Constitution "a perpetual work in progress." That's one way to say we're figuring this out as we go.
Tools for Thought
Claude Cowork: Your New Digital Colleague
What it is: Anthropic noticed something funny happening with Claude Code. Developers kept using their terminal-based coding assistant for tasks that were distinctly non-coding, so on January 12th, Anthropic stripped away the terminal and released Cowork as a research preview. The concept is simple: give Claude access to a folder on your computer, describe what you want done, and walk away. The system makes a plan, breaks complex tasks into parallel subtasks, executes them, and delivers finished outputs. The whole thing runs inside an isolated virtual machine, meaning Claude can only touch what you explicitly grant access to.
How I use it: I've been playing around in Cowork, and the most immediately satisfying use case has been taming my chaotic downloads folder. I pointed it at 847 files and asked for organization by type and date with sensible naming. Twenty minutes later, done. A few caveats: Cowork burns through tokens faster than standard chat, the Mac-only limitation stings, and every task starts fresh with no memory across sessions.
Claude in Excel: Your Spreadsheet Whisperer
What it is: Claude in Excel is a native add-in that embeds a Claude sidebar directly into your spreadsheet workflow. Hit Control+Option+C (Mac) or Control+Alt+C (Windows), and you can interrogate your workbook in plain English. Claude traces the dependency chain across sheets and provides cell-level citations, so every explanation points to the exact cells it references. Launched as a research preview in October 2025, the add-in expanded to all Pro subscribers on January 24th. It runs on Opus 4.5, Anthropic's best model for financial reasoning, and supports .xlsx and .xlsm files.
How I use it: I've been testing this against project tracking and revenue models, and the "explain this workbook" capability alone has saved hours of archaeology. I pointed Claude at a client's inherited financial model (the kind where the original creator left three years ago and nobody's touched the assumptions tab since) and asked it to document how inputs flow to outputs. Five minutes later, I had a readable summary that would have taken an afternoon to reverse-engineer manually. Debugging works surprisingly well for common errors. A few caveats: chat history doesn't persist between sessions, it can't handle dynamic arrays yet, and Anthropic explicitly warns against using it with spreadsheets from untrusted external sources due to prompt injection risks. But for anyone who lives in multi-tab workbooks, this is Claude meeting you where the actual work happens.
Intriguing Stories
Big Tech Goes Back to School: Two different visions for AI in education dropped this week. Anthropic announced a global partnership with Teach For All to train roughly 100,000 educators across 63 countries on practical classroom AI use, positioning teachers as co-creators of resources adapted to local languages and curricula. The program includes live training sessions, workflow tools for lesson planning, and a sandbox where selected teachers get office hours with Anthropic staff. Google, meanwhile, took the opposite approach: going straight to students. Gemini now offers free, full-length SAT practice tests developed with The Princeton Review. Google has also integrated Khan Academy content and signaled plans to expand to ACT and GRE prep. The traditional test prep industry, built on expensive courses and private tutoring, just got a very well-funded competitor giving away its core product. Anthropic is investing in gatekeepers, building relationships with educators who decide how AI shows up in classrooms. Google is bypassing institutions entirely, gathering learning data at scale while making Gemini the first AI tool students associate with academic success. Both are racing to define what the AI classroom looks like before anyone else can.
Pay to Train your Replacement: Mercor, now valued at $10 billion, pays 30,000 highly credentialed professionals roughly $2 million a day to train AI models how to do their jobs. Consultants teach chatbots consulting, and lawyers walk language models through legal reasoning. The company's three founders, all 22-year-old college dropouts and former high school debate teammates, recently became the world's youngest self-made billionaires, beating Mark Zuckerberg's record by a year. When Meta dropped $14.3 billion on a 49% stake in data-labeling giant Scale AI last June, rival AI labs panicked about their proprietary training methods being exposed to a competitor. Google, OpenAI, and others began cutting ties with Scale, and Mercor swooped in, quintupling its valuation in eight months. CEO Brendan Foody describes the company as building a new "category of work" where humans create rubrics for AI to replicate infinitely. He predicts AI will automate "two-thirds of knowledge work," which he frames as exciting because it will help us "cure cancer and go to Mars." Harvard labor economist Zoe Cullen commented, "If what you're teaching the model to do is your core expertise, by definition you're reducing your labor power."
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.
I’ve also started publishing more frequently on LinkedIn, and you can follow me here
if you’d like to chat further about opportunities or interest in AI, or this newsletter, please feel free to reply.
banner images created with Midjourney.

