Verses Over Variables
Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
The Real AI Bottleneck Isn’t What You Think
There’s this thing that happens when you’re driving at night on a highway you think you know. You’ve got good visibility, decent speed, and then out of nowhere, the road narrows. Not gradually. It just chokes. One second you’re cruising, the next you’re white-knuckling it through a construction zone that appeared because some crew decided to resurface half the lanes at 2 a.m. on a Wednesday.
That’s exactly where we are with AI in 2026. Except the thing getting resurfaced isn’t the asphalt. It’s every single assumption we made about what “having good AI” actually looks like in the real world.
Ethan Mollick recently put out a piece about what he calls the jagged frontier. It sounds like a prog rock album, sure, but it describes something way more consequential: AI is wildly capable at some tasks and embarrassingly, almost hilariously bad at adjacent ones, in ways that make zero intuitive sense. A system that can generate a stunning presentation in seconds might still fail spectacularly at counting the number of times the letter ‘r’ appears in “strawberry.” When even small, "jagged" failures can block an entire workflow, you end up with bottlenecks. Your system is only as functional as its weakest component. And in 2026, those weak components aren’t the models anymore.
The bottleneck has migrated. AI can generate ten concepts before you’ve finished your first coffee, but those concepts still have to survive brand review, legal approval, and that one stakeholder who will absolutely have opinions just for the sake of having them. The generation happens at machine speed. The approval happens at human speed. Guess which one determines your actual throughput?
This connects to something Ed Sim has been tracking from the enterprise side. He argues that most software was built to store records, not to actually capture "decision logic." Your Figma files, asset folders, and Slack channels become fossils instead of memory. They show what happened without explaining the why: which constraints applied, who granted exceptions, and how the artifact actually came together. Agents that only see the fossils will give you safe, generic averages. But agents that can access "decision traces" can provide continuity, coherence, and taste that actually fits your situation. As of now, we have Systems of Record, not Systems of Meaning.
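To make that concrete, here’s a minimal sketch of what a structured decision trace might look like. The schema and every field name here are invented for illustration, not drawn from any standard:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical schema: one decision trace attached to an asset,
# capturing the "why" that a Figma file or Slack thread loses.
@dataclass
class DecisionTrace:
    asset_id: str                  # the artifact this decision produced
    decision: str                  # what was decided
    rationale: str                 # why it was decided
    constraints: list[str] = field(default_factory=list)  # rules in force
    exceptions: list[str] = field(default_factory=list)   # who waived what
    approved_by: str = ""          # who made the final call
    decided_at: datetime = field(default_factory=datetime.now)

trace = DecisionTrace(
    asset_id="homepage-hero-v3",
    decision="Use illustration instead of photography",
    rationale="Photo license lapsed; illustration tested better with under-30s",
    constraints=["brand palette v2", "WCAG AA contrast"],
    exceptions=["Legal waived trademark review for the internal pilot"],
    approved_by="creative-director",
)
```

An agent that can query records like this gets the precedent, not just the pixels.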
New bottlenecks are already appearing on the horizon.
The verification crisis is coming. You can’t trust smooth AI performance because of the jaggedness problem. This means constant spot-checking, continuous audits, and review loops that scale linearly with output. The bottleneck has shifted from generation capacity to review capacity. The “last 5%” problem becomes the entire job. Product truth. Consistency of imagery and typography. Licensing constraints. Brand safety. Accessibility compliance. The AI can nail the first 95% in seconds, but that final 5% still requires human judgment and institutional knowledge. And unfortunately, 5% of infinite output is still infinite work.
Your role transforms from maker to editor-in-chief of a high-output newsroom staffed entirely by interns who are incredibly fast, occasionally brilliant, and thoroughly unreliable about things you thought would be easy. The scarce skill becomes intent architecture: writing constraints that actually constrain and defining quality criteria that work in the real world rather than just in theory.
Think about your creative brief. In 2026, it stops being a static PDF you write once and forget. It becomes a living spec: an active object that updates as decisions change. The brief becomes the source of truth for both humans and agents. This is similar to what’s happened in coding, where "spec-driven development" emerged because agents need explicit constraints to stay on the rails. Fuzzy human intent doesn’t translate to autonomous action without structure.
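Here’s a toy sketch of the difference, with the brief as a versioned, machine-checkable object instead of prose. Every field name and rule below is invented for illustration:

```python
# A "living spec": the brief as a machine-readable object that both
# humans and agents read, with constraints that can actually be checked.
brief = {
    "version": 14,  # bumped whenever a decision changes
    "objective": "Spring launch landing page",
    "constraints": {
        "max_headline_chars": 60,
        "banned_phrases": ["revolutionary", "game-changing"],
        "required_disclosure": "Results may vary.",
    },
}

def check_copy(copy: str, spec: dict) -> list[str]:
    """Return the constraint violations for a piece of generated copy."""
    rules = spec["constraints"]
    violations = []
    headline = (copy.splitlines() or [""])[0]
    if len(headline) > rules["max_headline_chars"]:
        violations.append("headline exceeds character limit")
    for phrase in rules["banned_phrases"]:
        if phrase.lower() in copy.lower():
            violations.append(f"banned phrase: {phrase}")
    if rules["required_disclosure"] not in copy:
        violations.append("missing required disclosure")
    return violations
```

The point isn’t this particular schema; it’s that constraints written this way constrain humans and agents alike, automatically, on every iteration.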
But agents also create an unexpected bottleneck of their own. Every piece of enterprise software was built for us. Visual interfaces. Conversational ambiguity. Agents need the exact opposite. They need machine-readable constraints. Explicit decision trees. Deterministic outcomes. Companies can’t abandon human interfaces because humans still make the final calls and provide the oversight. But building separate agent-optimized interfaces doubles the maintenance burden. So most organizations in 2026 will run hybrid architectures: UIs for humans, APIs for agents, and a whole new layer of synchronization complexity. It’s like running two kitchens in the same restaurant with different menus but the same ingredients, and somehow both need to serve the same customers without anyone noticing the chaos in the back.
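One way to keep the two kitchens from drifting apart is to define each capability once and derive both surfaces from the same definition. A toy sketch of that pattern, with invented names throughout:

```python
# Define the capability once; render a human-facing form and an
# agent-facing contract from the same source of truth.
APPROVE_EXPENSE = {
    "name": "approve_expense",
    "params": {"amount_usd": "number", "category": "string"},
    "rules": {"max_amount_usd": 500},  # deterministic, machine-checkable
}

def render_form(capability: dict) -> str:
    """Human side: a form with one field per parameter."""
    fields = ", ".join(capability["params"])
    return f"Form '{capability['name']}' with fields: {fields}"

def agent_contract(capability: dict) -> dict:
    """Agent side: the same definition, rules and all, as explicit data."""
    return capability

print(render_form(APPROVE_EXPENSE))
print(agent_contract(APPROVE_EXPENSE))
```

The synchronization layer doesn’t disappear, but at least both menus come out of the same recipe book.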
The missing artifact is the process. The final output matters less than the trail: the precedent that actually governed the decision. Most organizations have mountains of assets but zero structured record of why choices were made. Fix that, and you unlock agent capabilities that actually understand your specific reality rather than producing generic, plausible work.
There’s also the pacing mismatch nobody prepared for. AI produces options at electronic speed. You decide at human speed. This creates one of two outcomes: burnout from trying to keep up, or intentional friction through better gating and stricter constraints earlier in the process. Most teams will alternate between these states for most of 2026 before they figure out a sustainable pattern.
Mollick notes that even if AI becomes superhuman at analysis, institutions have processes that have nothing to do with capability. Human-paced review. Regulatory compliance. Stakeholder buy-in. You can make the AI infinitely faster, and it won’t matter if legal still takes two weeks to get back to you.
The winners in 2026 will be the people who can encode intent, navigate jaggedness, and ship decisions through realistic boundaries. They’ll be the ones who figured out that the bottleneck is the messy, human, organizational reality that sits between capability and execution. The bottleneck is no longer technical but structural. It’s the gap between what the AI can do and what your organization can actually absorb.
The road narrowed. Most of us are still driving like it didn’t.
AI Aesthetics and the Accelerated Nostalgia Machine
I spent some time last year writing about humanity as a luxury good, how human craft would command premiums in an AI-saturated market. I thought I’d nailed the 2026 vibe. But I missed something weirder: we’re not just craving human-made things. We’re developing a frantic ache for the early AI failures themselves.
To some, DALL-E 1 images from 2021 now carry the same fuzzy warmth we once reserved for Polaroids from our childhood. Those blurry, hallucinated nightmares where a “dog” looked like a pile of wet, sentient laundry with too many teeth? They’re acquiring vintage status faster than any aesthetic movement in history. We’re treating 2021 like 1970s Kodachrome. Five years ago was “the good old days” because back then, the machines still let us feel superior.
This isn’t just internet culture being internet culture. Artists are deliberately recreating that early AI aesthetic, baking digital rot back into their work, adding the equivalent of dust and scratches to make it feel authentic. We’re watching AI eat its own tail in real-time and creating nostalgia for nostalgia.
AI models trained on historical aesthetics naturally lean backward, generating cyberpunk Mona Lisas and Shakespearean ChatGPT responses, because that’s what exists in their training data. But they’re also revealing how we process aesthetic change when it happens at machine speed rather than human speed. We’re nostalgic for the time before the tools got too good.
Meanwhile, photorealism is having a total meltdown. When any kid with a prompt can generate a National Geographic shot in four seconds, the “perfect” image starts to feel like cheap wallpaper. By 2026, photorealism won’t be a flex. The competitive edge shifts to exactly what AI struggles to fake: the messy evidence of someone actually struggling with a medium. You can already feel the shift in gaming and animation. While some studios chase the Uncanny Valley to its sterile end, others run toward stylization. Hand-drawn textures, weird proportions, deliberate “mistakes.” These are flares sent up to signal that a human was in the room.
There’s a weird paradox here. AI pushed for perfection so hard that it made perfection worthless. The pendulum swings back to aesthetics that scream, “I am made, not calculated.” But stylization can be generated too. The real signal of value is the fingerprints. The literal or metaphorical grease on the lens.
We needed the flood of perfect pixels to realize how much we miss the "wrong" stuff. We’re mourning the “weird AI” era because it was the last time the technology felt like a toy instead of a mirror. It’s a strange thing, to feel nostalgic for a 2021 glitch while the 2026 model stares back at you with perfect, terrifying eyes.
Back to Basics
AI's Big Three in 2025
The three major AI labs spent 2025 one-upping each other with state-of-the-art models while quietly pivoting toward a bigger prize: becoming the operating system for how people actually work. Anthropic, Google, and OpenAI all released models that topped benchmarks for about fifteen minutes before the next one shipped. But the real story wasn't the incremental gains in intelligence. It was the race to escape the chatbot box and become infrastructure. Here's what actually landed for me from each lab's year of releases.
Anthropic 2025: Infrastructure Over Intelligence
Anthropic spent 2025 building infrastructure while competitors raced toward bigger models. Three releases changed everything: MCP, Skills, and Claude Code. The research lab showed its muscle, and most other labs raced to incorporate these capabilities.
MCP (launched in November 2024) solved the problem of using AI with your other apps and data, and frankly, it couldn't have come sooner. Before MCP, connecting AI tools to hundreds of data sources meant building a custom integration for nearly every pairing. MCP collapsed that nightmare into a universal protocol. Think USB-C for AI systems. Within weeks, OpenAI, Google DeepMind, and Microsoft had adopted it. I remember when the Connectors directory dropped in July with over 75 integrations. Suddenly, Claude could access dozens of enterprise tools without custom code. By December 2025, Anthropic donated MCP to the Linux Foundation's Agentic AI Foundation. The protocol became the de facto standard for how AI systems connect to tools and data.
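To give a sense of how small the protocol's surface area is, here's a minimal MCP server sketched with the official Python SDK (pip install mcp). The server name, tool, and catalog are invented, and the SDK's API may drift between versions:

```python
# A toy MCP server: one decorated function becomes a tool that any
# MCP-compatible client can discover and call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("asset-catalog")  # hypothetical server name

@mcp.tool()
def lookup_asset(asset_id: str) -> str:
    """Return the licensing status for an asset in our stubbed catalog."""
    catalog = {"homepage-hero-v3": "licensed through 2026-06"}
    return catalog.get(asset_id, "unknown asset")

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```

Point a client at that script and the tool simply shows up; no per-app integration code required.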
Skills arrived in October 2025, packaged as modular folders containing instructions, scripts, and domain knowledge that Claude loads automatically when relevant. The shift from one-off prompts to reusable expertise felt significant. By December, Anthropic open-sourced the Agent Skills specification, and OpenAI quietly implemented the same architecture in ChatGPT and Codex. The community exploded, and partners like Notion, Figma, Canva, and Atlassian built their own skills libraries.
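The folder format is simple enough to sketch: each skill directory carries a SKILL.md whose frontmatter tells the model when the skill applies. Here's a hypothetical loader, just to show the shape (the open-sourced Agent Skills spec is the real reference):

```python
from pathlib import Path

def list_skills(skills_dir: str) -> list[dict]:
    """Read name/description frontmatter from each skill folder."""
    skills = []
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        meta = {}
        lines = skill_md.read_text().splitlines()
        if lines and lines[0].strip() == "---":      # frontmatter opens
            for line in lines[1:]:
                if line.strip() == "---":            # frontmatter closes
                    break
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
        skills.append({"path": str(skill_md.parent), **meta})
    return skills

# An agent scans these descriptions up front, then loads a skill's full
# instructions and scripts only when the task matches.
```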
Claude Code might be the most revolutionary piece. Released as a CLI (command line interface) tool, it popularized what Andrej Karpathy dubbed "vibe coding" in February 2025. Collins Dictionary even named "vibe coding" its Word of the Year. Code brought autonomous AI agents to terminals, transforming them into genuine productivity infrastructure. The tool orchestrates, tests, and iterates independently while you work on other things. I've seen developers who swore they'd never trust AI with their codebase become converts within a week. This is what people mean when they talk about AI changing workflows: letting creativity become the bottleneck instead of execution.
Google's 2025: Brilliant Tools, Scattered Everywhere
Google built the best AI ecosystem of the year, then hid it across 17 different URLs. I'm not even exaggerating that much.
NotebookLM became my favorite research tool this year. Audio Overviews went viral in late 2024, but 2025 brought Video Overviews, customizable outputs for different audiences, infographics via Nano Banana Pro, slide deck generation, and Data Tables. I recommend NotebookLM to anyone doing research or learning.
With the introduction of Build, AI Studio became my go-to tool for app building. The Build function lets you generate web apps from prompts and iterate in the browser. Google's take on vibe coding stays visual instead of terminal-based, which I prefer for prototyping. And Google upped the vibe coding game again with Antigravity, launched last month.
Then there's Nano Banana Pro. Most labs saddle models with names like Gemini 2.5 Flash Image (the actual name of Nano Banana) that sound like router firmware. Nano Banana Pro kept its ridiculous name and somehow became more memorable for it. And it doesn't just generate incredible images; it draws on Google's world knowledge, so the outputs come with context. Other labs could surely learn from this. The model renders readable text inside images and generates complex infographics that would've taken hours in Canva.
Google Labs kept launching standalone experiments. Pomelli generates brand-consistent marketing assets. Disco's GenTabs turns open tabs into custom mini-apps. Each solves a real problem beautifully, at a different URL with different access requirements.
This is peak Google. NotebookLM lives at one site. AI Studio has its own flow. Pomelli sits three clicks away from the Labs’ main site. Disco requires macOS and a waitlist. You need three Google accounts because one has beta access and the others don't.
Google has the best ingredients in the world right now, but they're still trying to cook a five-course meal in seventeen different kitchens. I just want one menu.
OpenAI 2025: The Year of Better UX, Not Better AI
OpenAI launched a lot in 2025, but I’m struggling to point to anything that actually changed how I work. GPT-5.2 is objectively one of the best models available, topping benchmarks in coding and reasoning, but it felt more like refinement than revolution.
Sora 2 was supposed to be their big social play. They launched a whole TikTok-style app around it in September, complete with AI-generated video feeds and a Characters feature that lets you drop yourself into any scene. It hit a million downloads fast. Then it became exactly what they didn’t want: a tool to make content for other platforms. Everyone I know who uses Sora makes videos for Instagram or TikTok, not for the Sora feed. The social network dream died almost as soon as it launched.
They also launched Atlas in October, the third standalone agentic browser after Perplexity's Comet and Dia from the Browser Company. It hit 27.7% of enterprises within weeks, then immediately triggered security warnings about prompt injection vulnerabilities and comprehensive data collection. OpenAI's own CISO called prompt injection an unsolved problem. Critics called it an anti-web browser. I don't know anyone using it as their daily driver because the privacy trade-offs feel too steep for an experience that's marginally better than opening ChatGPT in a tab.
The actual story of OpenAI’s year was interface design. In July, they added a Tools dropdown that finally organized Agent mode, Deep Research, and Canvas into a coherent grouping, rather than burying them three clicks deep. In December, they gave Images its own dedicated tab with pre-loaded prompts and trending styles, turning it into an actual creative workspace rather than a chat afterthought. Group chats arrived in November, letting up to 20 people collaborate with ChatGPT, which is either brilliant for team brainstorming or a recipe for your most annoying friend fact-checking everyone in real time.
None of it was groundbreaking. All of it made ChatGPT easier to use. That might be the most 2025 thing about OpenAI: they spent the year making their product feel less like a chatbot and more like software people actually want to open. Incremental wins dressed up as innovation.
Intriguing Stories
Meta Buys Its Way Into Agentic AI With Manus Acquisition: Meta just announced it's acquiring Manus, an AI agent startup that's been quietly building what amounts to a digital employee-for-hire. Manus isn't another chatbot, which is precisely why this deal matters. It's a general-purpose autonomous agent that handles complex tasks like research, workflow automation, market analysis, and coding with minimal hand-holding. The company hit around $125M in annual revenue selling business subscriptions. Meta's buying something that already makes money. Supposedly, Meta plans to keep Manus running as a standalone service while also weaving its agent tech into Meta AI, Instagram, WhatsApp, and Facebook. The play here is moving from conversational AI to actually useful AI. Think less ask-me-anything and more do-this-for-me. They're trying to become the infrastructure where billions of people interface with billions of AI helpers. Manus gives them a revenue-proven agent to start with, rather than building from scratch. Which makes sense when you consider Meta's track record: they're not exactly leading the AI lab race.
NVIDIA Spends $20B to Make Sure No One Else Gets Groq: NVIDIA just dropped roughly $20 billion on Groq. Except they didn't technically buy Groq. (This isn’t the Grok created by xAI.) They licensed the tech, grabbed the assets, hired the founders and engineering team, and left behind a corporate shell with a new CEO to keep GroqCloud running. If that sounds like buying a company with extra steps, that's because it is. Why this matters: NVIDIA dominates AI training, but Groq built something different: chips designed for instant responses, the kind of speed that matters when you're running chatbots, streaming services, or anything that can't wait around for an answer. NVIDIA wasn't about to let that become someone else's excuse for avoiding NVIDIA chips. Groq's founder Jonathan Ross and most of the team are now NVIDIA employees. What's left is a skeleton crew running GroqCloud as an "independent" service. The read here is simple: NVIDIA isn't buying innovation. They're buying insurance. When you're this dominant, you don't acquire to build something new. You acquire to make sure competitors can't use it against you.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.
I’ve also started publishing more frequently on LinkedIn, and you can follow me here
if you’d like to chat further about opportunities or interest in AI, or this newsletter, please feel free to reply.
banner images created with Midjourney.