Verses Over Variables
Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
When AI Validates the Echo Chamber
We live in bubbles. Your social feeds show you what you already like. Your news confirms what you already believe. We have spent years building comfortable reality tunnels. Now we are automating them with AI and calling it innovation. Here is how it usually plays out. A product team needs to validate a new feature. Weeks have gone into the design, and the VP has nodded along. The roadmap is already locked. Someone says the magic words: “We should probably test this with users.” Instead of recruiting real people and risking inconvenient feedback, the team spins up an AI research tool. Within minutes, synthetic users love the feature. The insights neatly confirm the original hypothesis. Another AI analyzes those insights and finds reassuring patterns. The loop closes perfectly, like a snake eating its own tail, except the snake is wearing a Patagonia vest and talking about “iterating on insights.”
This is Discovery Theater.
The appeal is obvious. Traditional research is slow, expensive, and uncomfortable. Real humans misunderstand questions, contradict assumptions, and bring up problems no one budgeted for. Synthetic users are cooperative, cheap, and endlessly available. They never cancel interviews. They never challenge the premise. One product manager switched to AI-generated personas because real users kept “derailing the conversation with edge cases” that threatened the timeline.
The companies selling these tools understand this dynamic well. One platform proudly advertises “user research without the users,” which makes about as much sense as basketball without the ball. The promise is speed and efficiency, and those claims are not entirely false. The problem is subtler. These systems deliver something that appears to be research: the right language, the right charts. It has the aesthetic of rigor without the discipline that rigor requires. You get the feeling of science at a pace that makes actual science impossible. The technical issues run deeper than bad incentives. Large language models do not understand human behavior. They understand how human behavior is described online. When researchers compare synthetic respondents to real ones, they find consistent patterns of sycophancy. The AI wants to please you. It treats every concern as equally important. It generates exhaustive lists of needs that no real person would produce because real people have priorities, contradictions, and limited patience. They also get bored and make things up.
Real users are inconveniently complex. A design team at IDEO learned this firsthand while working on a rural healthcare project. After weeks of AI-generated insights, they felt efficient and aligned. Then they spent one hour with an actual patient and a physician serving marginalized communities. That single conversation surfaced power dynamics, logistical constraints, and trust issues that none of the synthetic research had captured. The AI produced a smooth story, while the humans produced a mess that demanded understanding.
None of this is surprising. The incentives push hard in this direction. Real research introduces friction, which slows development. It challenges executive narratives, and it requires time, budget, and skilled practitioners. Synthetic research offers validation without disruption. As one UX researcher put it, the feature is already shipping. The research exists to confirm the decision, not to question it. As a result, product development increasingly happens inside closed loops. AI generates requirements and personas. It validates designs and summarizes feedback. At every step, it amplifies the team’s starting assumptions while filtering out contradictions. Products become optimized for simulations rather than reality. In a controlled study, personas built from real user data consistently outperformed AI-generated ones at identifying genuine needs. They are slower to create and far more likely to say something the team does not want to hear.
The consequences are familiar: products that solve problems no one has. Features that test beautifully in synthetic scenarios and confuse real users. When Taco Bell’s AI ordering system accepted an order for 18,000 cups of water, it was not a technical glitch. It was a failure to test with adversarial humans who will absolutely try something ridiculous just to see what happens. Discovery Theater feels like progress. Teams genuinely believe they are being user-centered because research artifacts exist. By the time the product fails in the market, everyone has moved on, armed with the same tools that will produce the same outcome again.
The answer is not to ban AI from research. Used carefully, these tools are powerful accelerants. AI can draft interview guides. It can cluster qualitative data. It can help generate hypotheses worth testing. What it cannot do is replace contact with reality. Patterns need to be checked against behavior, and assumptions need to be broken by people who do not care about your roadmap. The teams that get this right treat synthetic insights as prompts, not proof. They use AI to explore possibilities, then validate everything with real users before making decisions that matter. They remain deliberately uncomfortable, assume their first answer is wrong, and actively seek evidence that contradicts it.
This requires a different definition of validation. Research is not about proving you are right. It is about discovering where you are wrong while it is still cheap to change course. Real users will frustrate you. They will misunderstand your ideas. They will want things you think are bad ideas. That friction is not a flaw in the process. It is the point.
When Anthropic Upped the Limits
Over the holiday break, while the rest of us were arguing about whether Die Hard is a Christmas movie, Anthropic was conducting an accidental social experiment that might tell us more about the future of work than any McKinsey deck ever could. From December 25 to 31, Anthropic doubled usage limits for individual Pro and Max subscribers. Anthropic’s explanation was simple: enterprise usage dips during the holidays, leaving idle compute capacity to spare. What they actually did was hand out superpowers for a week. The results were predictable, in the same way giving teenagers the keys to a Lamborghini is predictable. Everyone crashed into something interesting.
Start with Andrej Karpathy, the AI builder who helped shape modern deep learning culture and spent years inside the frontier-model machinery. His reaction was less “wow, neat tool” and more “I need to rethink the laws of physics.” He wrote: “I’ve never felt this much behind as a programmer… Clearly some powerful alien tool was handed around except it comes with no manual.” When someone like Karpathy says he feels outclassed, it is not modesty. It is signal. Then there was Jaana Dogan, a principal engineer at Google working on Gemini. By her account, she gave Claude Code a three-paragraph description of a problem her team had spent a year solving, and Claude recreated the direction of their solution in about an hour. The architecture, the shape of it, the thing that had survived months of internal scrutiny. Think about what that implies: Google employs some of the smartest engineers on the planet. They have entire teams for reliability, security, review, and performance. They spent a year building something, and a model reproduced the pattern during the runtime of Love Actually. The most revealing reactions, though, came from people who do not code for a living. Ethan Mollick, a Wharton professor who studies these shifts in real time, gave Claude Code a single instruction to develop a startup idea and implement it with minimal intervention. He mostly watched. Roughly 74 minutes later, he had something deployed and functional enough to use.
This is the part that is easy to miss if you are still imagining AI coding as a smarter autocomplete. Claude Code does not just generate snippets for developers to review and deploy. It uses code to accomplish tasks. The “Code” in its name is doing a lot of semantic heavy lifting, like calling a Swiss Army knife a “blade” when it also has scissors, a corkscrew, and that little toothpick thing nobody uses.
Jason Fried from 37signals nailed why this feels different. “There’s a deeper reason people are really amped about AI agents. This isn’t just new tech, it’s new psychology. Until now, very few people have known what it feels like to delegate to total competency.” Most of us have spent our professional lives navigating the gap between “do it myself and know it’s done right” and “delegate and accept quality loss plus three rounds of feedback.” Claude Code is offering a third option that was not supposed to exist yet: delegation that actually works.
If you are a creative professional, this matters even if you never touch a terminal. Delegation to competency is not a developer phenomenon. It is a workflow phenomenon. It means the distance between an idea and a working artifact collapses. It means prototypes stop being precious deliverables and become part of thinking. It means the boring parts of your job, the glue work, the formatting, the cross-referencing, the building of tiny internal tools you always wanted but never had the time to spec, suddenly become easy enough to offload without losing control.
Boris Cherny, the lead developer on Claude Code, dropped a line that reads like science fiction written in the calm tone of a status update: in the last thirty days, he landed hundreds of commits, around 40,000 lines added, and “every single line was written by Claude Code + Opus 4.5.” The tool is improving itself. That is either incredible or the opening scene of several movies that do not end well.
What Anthropic revealed was not a better coding assistant. It was a glimpse of what happens when we cross discrete capability thresholds that unlock entirely new use cases. We are hitting moments where the technology becomes independently capable in ways that reshape what is economically viable, professionally valuable, and humanly possible.
I’ll be honest: I spent most of the holiday break using Claude Code in ways that would make actual developers cringe. I wasn’t vibe coding; I was using it the way a journalist uses a really good research assistant who happens to speak Python. I had it dig through documentation I didn’t want to read, cross-reference claims across multiple sources, and build analysis frameworks for patterns I was seeing but couldn’t quite articulate. The kind of work that used to take me three days of manually compiling information now takes forty-five minutes of explaining what I’m looking for and watching it construct the scaffolding. I’m not coding. I’m thinking out loud to something that translates intention into execution. That’s the part that feels genuinely different.
So did you use it? Did you build something you’ve been putting off for years? Automate away the boring parts of your job? Or did you spend the week watching other people’s demos and wondering what you’re missing? I’m collecting stories about what happens when non-technical people get access to technical superpowers, and I suspect the most interesting ones aren’t happening in Silicon Valley.
Back to Basics
The Case for Disposable Software
There’s a thing that happens when you open your phone, usually when you’re looking for something specific (a banking app, a map, whatever), and you have to scroll past the ghosts. The meditation app from January 2023 you used exactly twice before deciding enlightenment wasn’t really your thing. The project management tool a freelance client insisted everyone use, which sent 47 notifications before you muted it, and then the project ended, and the app just… stayed. This is what living with software has become, and it’s strange that we accept it. Every app is a commitment that compounds. The mental overhead of permanent software accumulates invisibly, like interest on a credit card you forgot you had, and for professionals already managing nine thousand competing priorities, it’s exhausting in a way that’s hard to name.
Meanwhile, we’re entering an era where the best software might be the kind you use once and delete. Ephemeral software isn’t new. What’s new is who can make it, and why. Thanks to AI, you can generate a small, purpose-built tool in minutes, use it for a single task, and let it vanish when the work is done.
Think about what happens when you download traditional software. You’re entering a relationship, and it comes with expectations. The software wants to stick around. It asks for updates. It nudges you. It quietly gathers data about how you work. It’s designed for recurring use, to become part of your workflow, to be the solution to a problem you’ll have forever. That’s fine if the problem is forever. Most work isn’t. Creative work is spiky and situational. One week you need to sort chaotic client notes into themes. Next week you need a quick way to compare three versions of a landing page headline without losing your mind. Then you need a lightweight calculator for pricing a scope change. These are Tuesday problems, not lifelong identities, and they don’t always deserve a permanent app.
Ephemeral software inverts the usual model. It exists briefly and serves a single purpose, more like a sketch than a product. Like a Buddhist sand mandala, built with care and then swept away, except instead of weeks we’re talking minutes, and instead of monks it’s you describing what you need in plain English while your coffee goes cold. The keyword here is you. This is user-generated software. Not downloading someone else’s idea of what a tool should be, with all their assumptions baked into the interface. You’re creating exactly what you need for this moment, shaped around your own workflow and your own taste.
In September, I wrote about how the friction of building custom software is disappearing, but I was thinking about those tools as permanent fixtures, personal utilities you'd keep around. What I'm realizing now is that permanence isn't always the goal. Some problems deserve a custom tool that sticks around. Others just need something quick that dissolves when you're done. These aren’t grand, complex applications. They’re lightweight, single-purpose tools that do one thing well and then get out of your way. Digital napkin sketches: quick, disposable drawings you make while thinking, perfectly suited to the immediate need and completely inappropriate for framing and hanging on a wall.
The technology is suddenly accessible, which is either wonderful or slightly terrifying, depending on your disposition toward technological change. Open Gemini or Claude. Describe a tiny tool you want. Ask for it as a simple web page you can run in a browser. In a few minutes you usually have a workable first draft. You use it. Close the tab. The loop closes. The task is complete. Your brain stops tracking it. This shift changes your relationship with software from passive consumer to active architect. You’re not shopping for tools anymore, scrolling through app stores and comparing feature lists for a problem that might only exist for a week. You’re commissioning what you need and summoning it into existence, like a digital conjurer, except the magic is just language and a chat box. The software serves you for exactly as long as you need it, then politely exits.
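To make that concrete, here is a rough sketch of the kind of disposable tool such a conversation might produce, written as a short Python script rather than a web page for brevity. The rate, the rush multiplier, and the sample numbers are all made-up placeholders; the point is the shape of the thing, not the math.

```python
# A hypothetical "Tuesday problem" tool: a throwaway calculator for pricing a
# scope change. Every value here is an assumed placeholder, not a real rate card.

BASE_RATE = 120.0        # assumed hourly rate
RUSH_MULTIPLIER = 1.25   # assumed premium for a compressed timeline

def scope_change_price(extra_hours: float, rush: bool = False) -> float:
    """Return a quick estimate for added scope; use once, then delete the file."""
    price = extra_hours * BASE_RATE
    return price * RUSH_MULTIPLIER if rush else price

if __name__ == "__main__":
    # Sanity-check one number, send the estimate, move on.
    print(f"14 extra hours, rushed: ${scope_change_price(14, rush=True):,.2f}")
```

Whether it comes back as a web page or a script, the loop is the same: describe the task, skim the result, use it, and let it go.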
The barrier to making small tools has collapsed so far that the old calculus of whether something is “worth building” doesn’t apply. You don’t need to justify the development time. The question isn’t whether this is possible anymore. The question is whether you’re ready to think about software as temporary, personal, and disposable, built for the moment and released when the moment passes. And the next time you reach for the app store, it’s worth asking: is this a forever problem, or a Tuesday problem?
Tools for Thought
ChatGPT Health: The AI Health Assistant that Might Keep Your Secrets
What it is: OpenAI just launched ChatGPT Health this week, a dedicated space within ChatGPT that lets you connect your medical records, wellness apps, and fitness data to the chatbot. The feature connects with Apple Health, MyFitnessPal, Peloton, and other apps through a partnership with b.well, which handles the medical records connectivity. The whole thing lives in its own silo with separate encryption, and OpenAI swears it won't use your health conversations to train its models. The official line is that ChatGPT Health is for understanding lab results, preparing for doctor visits, and tracking wellness trends over time, absolutely positively not for diagnosing or treating actual medical conditions.
How I use it: I am not using it, mostly because I am skeptical, but also because there is a waitlist. The privacy architecture sounds impressive on paper, yet when Sam Altman himself admits on podcasts that "people talk about the most personal sh** in their lives to ChatGPT" and then acknowledges the company would have to hand over those conversations in a lawsuit, that tells you everything about the fundamental tension at play. Health data shared with ChatGPT isn't protected under HIPAA because OpenAI isn't a healthcare provider. So while your doctor needs a court order to release your records, ChatGPT just needs a subpoena. (Google tried something similar with Health Connect back in 2022, which was supposed to be their answer to Apple Health.) Anecdotally, I have heard many stories of ChatGPT “saving lives.” I guess it makes me a boomer, but I still trust my doctor.
Intriguing Stories
Meta’s Angry Exit Interview: Yann LeCun, Meta's Chief AI Scientist and a Turing Award winner for pioneering work in neural networks, has left the company after more than a decade to launch his own AI startup. LeCun is widely regarded as one of the "godfathers of AI" alongside Geoffrey Hinton and Yoshua Bengio, and his 2013 arrival at Facebook (now Meta) helped establish the company as a serious AI research hub. The departure follows a dramatic restructuring in June 2025, when CEO Mark Zuckerberg invested $14.3 billion in Scale AI, bringing its 28-year-old founder Alexandr Wang on board as Meta's first Chief AI Officer to lead the newly formed Meta Superintelligence Labs. LeCun found himself reporting to Wang, a data labeling executive with no background in building AI models. In a candid Financial Times interview, LeCun revealed that Meta's team "fudged" benchmark results for the Llama 4 model released in April 2025, which was widely panned as outdated on arrival. He called Wang "young and inexperienced" and predicted an exodus from Meta's AI division. More fundamentally, LeCun declared that large language models represent a "dead end" for achieving superintelligence, directly contradicting Zuckerberg's strategic direction. LeCun has now launched Advanced Machine Intelligence Labs (AMI Labs), targeting a $3-5 billion valuation. The startup focuses on "world models" that learn from video and spatial data to understand physical reality and causality, rather than relying on text-based training.
World Economic Forum Confirms A Prediction: The World Economic Forum just published findings from Dentsu's annual CMO survey, and if you read my AI predictions, this will sound familiar. 79% of chief marketing officers now agree that algorithm-driven optimization risks making brands look identical, while 87% believe modern strategies require deeper creativity and human qualities. As I have been discussing, the more we automate, the more we'll crave authentic human connection. Dentsu's Yasuharu Sasaki frames the challenge: as AI becomes better at optimization, being average becomes fatal for brands. Algorithms optimize for immediate needs, not enduring brand love. The flood of AI slop on social media proves the point. Anyone can generate content now, but most of it exhausts rather than engages audiences. His solution mirrors what we've been advocating: creatives must become both "AI-native and human-native," using automation to free up time for deeper exploration of humanity and culture. The strategic implication feels obvious yet revolutionary: creative work is evolving from crafting expressions to architecting brand humanity. As Sasaki puts it, "As AI perfects, we must disrupt. Our role is to reintroduce human quirks and make things interesting again." The brands that survive won't be the ones with the best AI tools. They'll be the ones who understand how to make their AI-enhanced work feel irreducibly human.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you, or you just haven’t signed up yet, please sign up so you never miss an issue.
I’ve also started publishing more frequently on LinkedIn, and you can follow me here
if you’d like to chat further about opportunities, AI, or this newsletter, please feel free to reply.
banner images created with Midjourney.