Verses Over Variables

Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.

AI Hype Cycle

The AI Split Between Sam and Zuck

I've been watching this summer's AI recruitment war unfold with the kind of fascination usually reserved for trade deadline drama (did you hear about the researcher who declined a $1 billion offer?). When Mark Zuckerberg drops $14.3 billion to poach Scale AI's CEO and starts throwing nine-figure packages at researchers, while Sam Altman quietly publishes philosophical treatises about humanity crossing "event horizons," what we're witnessing goes way beyond corporate competition. Both Zuckerberg and Altman dropped manifestos (or musings) on their visions for the future. First came Altman's "The Gentle Singularity" in June, followed by Zuckerberg's "Personal Superintelligence" memo in July. On the surface, both sound like typical Silicon Valley optimism wrapped in careful corporate messaging. But dig deeper, and you'll find competing philosophies about human flourishing in the age of artificial minds, disguised as business strategies.

Altman is positioning the transformation to superintelligence as inevitable, society-wide, something that will sweep through civilization whether we're ready or not. His timeline reads like a tech roadmap on steroids: agents doing real cognitive work arrived in 2025, systems with novel insights coming in 2026, real-world robots by 2027. Zuckerberg takes a completely different tack. Instead of riding the wave, he wants to build you a surfboard. His whole pitch revolves around "personal superintelligence" that becomes your intimate digital companion, learning your quirks and amplifying your individual potential. While Altman's busy preparing us all for the inevitable tsunami of change, Zuckerberg's promising to hand each of us the controls. It's a fundamental philosophical split that makes their corporate rivalry feel almost quaint by comparison. Altman talks about "recursive self-improvement" and warns that society needs new social contracts to handle what's coming. He's asking us to collectively prepare for a future that will arrive regardless of our individual preferences. Zuckerberg, by contrast, promises that "personal devices like glasses that understand our context" will become our primary computing interfaces, putting superintelligent capability directly in individual hands. It's the difference between surfing a tsunami and installing your own private wave machine.

Both executives warn against concentrated AI power, but their solutions diverge dramatically. Altman repeatedly emphasizes that "intelligence too cheap to meter" must be distributed widely, not hoarded by the few. His concern is structural: prevent a small number of entities from controlling superintelligent capabilities that could reshape civilization. Think of it as the antitrust argument applied to artificial brains. Zuckerberg's solution is more tactile and product-focused. He envisions personal devices, smart glasses, intimate AI agents that give individuals direct access to superintelligent capability. Rather than preventing centralization through policy or governance, he wants to route around it through ubiquitous personal technology. This difference reveals something deeper about their worldviews. Altman sees the challenge as fundamentally political and social, requiring collective action and new forms of governance. Zuckerberg sees it as fundamentally technological and individual, solvable through better products and more intimate human-AI relationships. One wants to redesign the system; the other wants to give you tools to bypass it entirely.

Neither leader fully addresses what happens to human purpose when machines can do cognitive work better than we can, but their approaches to job disruption reveal telling differences. Altman acknowledges that "whole classes of jobs going away" will be painful, but falls back on historical precedent. The Industrial Revolution ultimately created more prosperity even as it displaced agricultural workers, he argues, so we'll figure it out again. Zuckerberg frames the challenge differently. Rather than focusing on job replacement, he emphasizes job transformation. His vision centers on AI as a creativity amplifier rather than a labor replacer. Personal superintelligence becomes a tool for "self-realization" and deeper relationships, not just productivity gains.

Both acknowledge superintelligence risks, but their approaches to safety couldn't be more different. Altman treats alignment as something we need to solve both technically and societally, requiring world-leading research and feedback-driven development. His safety philosophy is proactive and research-heavy, assuming we can solve alignment problems before they become existential. Think of it as the NASA approach: extensive testing, multiple fail-safes, and rigorous protocols before launch. Zuckerberg's safety approach is more pragmatic and product-focused. He acknowledges "novel safety concerns" but speaks mostly in general terms about risk mitigation and careful deployment. Where Altman sees safety as a research problem requiring new technical breakthroughs, Zuckerberg treats it as an engineering problem manageable through careful product development. It's more like automotive safety: iterative improvements, crash testing, and gradual rollouts with real-world feedback. The philosophical differences matter because they're backed by massive resource commitments and genuine technical capabilities. Meta is spending up to $72 billion on AI infrastructure in 2025 alone, while OpenAI continues raising capital at valuations that would make entire countries jealous. These are roadmaps backed by more computational resources than most nations possess, not thought experiments.

Here's what we think is really happening. Neither vision will win cleanly because both have fatal flaws baked into their business models. Altman's democratic approach sounds great until you realize it requires trusting Congress to understand recursive self-improvement. Zuckerberg's personal empowerment pitch is compelling until you remember this is the same company that gave us the Facebook algorithm and thought the metaverse was the future. What we'll probably get is some messy hybrid where Altman's collective wisdom meets Zuckerberg's personal agency, filtered through whatever regulatory panic emerges when the first AI agent accidentally crashes a financial market or writes a hit song that makes humans weep with existential dread. The real question isn't which philosophy wins, but whether either of these guys can execute their vision before the next wave of competitors shows up with an even better story.

So You’ve Been Replaced by a Machine. Now What?

We've been watching the feeds lately, and the pattern is unmistakable. Every scroll brings another photorealistic portrait of someone who never existed, another sprawling fantasy landscape rendered in seconds, another song that sounds suspiciously like a lost Beatles track. The comments section tells the same story everywhere: a mixture of awe and quiet panic. That low hum of anxiety has settled deep in the bones of every creative professional I know, whispering the same question: "Is this it? Is my skill, my craft, now obsolete?" The anxiety has a solid intellectual grounding. The University of Michigan recently published research that should make any creative person pause. Their large-scale experiment (with over 800 participants across 40 countries) found that when we're exposed to AI ideas, our collective creativity becomes more diverse, but we also start unconsciously copying the machine. High exposure to AI-generated content made ideas different, not necessarily better. Meanwhile, the University of Bristol's comprehensive 2025 review maps out AI's blistering advance into every corner of the creative industries, from text-to-image generation to sophisticated video creation tools. The robots are getting smarter, and they're changing how we think.

Something that might surprise you is that this feeling of being outmoded by a machine carries an ancient echo. The panic feels new because the technology is new, but the existential terror has deep historical roots. To understand what's happening to us now, we need to look back to the 1850s, when artists first met their supposed executioner: the camera.

When photography arrived, it was both a marvel of science and an agent of creative chaos. Portrait painters, whose entire professional identity depended on their ability to create perfect likenesses, faced what seemed like certain doom. The camera could capture reality faster, cheaper, and with brutal accuracy that no human hand could match. The cry went out from Paris studios and London salons that painting was dead. And in one sense, they were right. The job of painting, the tedious work of simply documenting reality, was indeed dying. But something extraordinary happened next.

Freed from the burden of mere documentation, painters discovered they had to find a new purpose. They faced a more interesting challenge: What could painting do that a camera couldn't? The answers transformed the entire art world. The Impressionists chose to paint not the thing itself, but the fleeting sensation of light dancing across it. Claude Monet painted the same haystack dozens of times, capturing how morning light differed from afternoon shadow in ways no photograph could convey. The Cubists decided to show not just one perspective, but every possible viewpoint simultaneously. Pablo Picasso's "Les Demoiselles d'Avignon" shattered the illusion of a single moment frozen in time. Jackson Pollock's drip paintings couldn't be captured by any camera because they existed in the realm of pure feeling. The camera didn't kill painting. It liberated painting from its most mundane function and forced it to become more essentially itself. The value of art pivoted from technical craft to conceptual vision, from skill to soul.

Today, AI serves as our new camera. It automates not just realism, but aesthetics themselves. It can generate a "beautiful" image, compose a "catchy" song, or write a "clever" headline in seconds. Much like those first stiff studio photographs, most AI output feels generic and predictable. The technology excels at producing endless variations of what it has been trained on, creating what amounts to a global average of prettiness. It's remarkably good at this mediocrity. But this is precisely where it creates space for our own liberation. AI forces us to confront the same fundamental question that photography posed to painters: What can we, as human creators, do that a machine cannot? We’ve been watching the creative community grapple with this challenge, and we’re starting to notice some fascinating patterns emerging. These aren't fully formed artistic movements yet, more like creative strategies people are stumbling toward as they figure out how to stay relevant and, more importantly, how to stay human.

One of the most intriguing responses we’re seeing could be described as a kind of radical conceptualism. We've watched artists spend hours crafting prompts that read like experimental poetry, weaving together personal memories, cultural references, and emotional landscapes in ways that transform AI into something closer to a collaborative partner than a replacement tool. The art seems to live less in the final digital image and more in the labyrinthine, deeply personal prompt that created it. Artist Sherry Horowitz has been exploring this territory, creating works where the conversation with AI becomes more meaningful than any image the machine might produce. The prompt itself becomes a kind of masterpiece, though I suspect we're still figuring out how to properly appreciate or even display this type of work.

This stands in stark contrast to another approach we keep encountering, which feels like a kind of neo-analog revival. In a world drowning in perfect pixels, we’re watching creators double down on the imperfect, tangible, undeniably human touch. There's something almost radical about watching a painter celebrate the thick texture of oil paint or seeing a sculptor embrace the natural grain of wood when everyone else is chasing digital perfection. The value seems to shift toward undeniable proof of human labor, the physical evidence that a person stood in front of a canvas and made something exist that didn't exist before. Even David Hockney, now in his late eighties, has been doubling down on physical painting while AI art floods galleries. It's as if the threat of technological perfection makes imperfection precious again.

Then there's something else that's harder to categorize, a pattern where the creative process itself becomes the point. We’ve seen artists create elaborate documentation of their creative journey, turning what used to be behind-the-scenes content into the main event. The struggle, the false starts, the collaboration with AI, the dead ends, the happy accidents. The entire messy process becomes wrapped up in the final product. It's less about the polished result and more about the performance of creativity, the human story unfolding in real time. This approach seems to recognize that audiences are hungry not just for beautiful things but for evidence of human struggle and discovery.

On the more subversive end, we’re encountering artists who treat AI less as a tool and more as a subject for critique. They push the models until they break, creating surreal and unsettling images that expose the machine's biases and blind spots. It's art as critique, a way of talking back to the algorithm. These creators seem less interested in making AI work better and more interested in making us think harder about what AI reveals about ourselves.

But we must address the elephant in the room that the photography analogy tends to gloss over: the economic reality. When photography displaced portrait painting, many portrait artists simply went out of business. They didn't all transform into Impressionist masters. Some became photographers themselves. Others left the art world entirely. The transition was often brutal and unforgiving. The same economic pressures exist today. AI can already produce marketing copy, generate stock photography, and create background music faster and cheaper than human creators. A graphic designer who makes a living creating social media graphics competes directly with AI tools that can produce similar work in seconds. Yet we’re seeing creative professionals navigate this transition with remarkable ingenuity. Some are becoming AI specialists, learning to work with the technology so skillfully that their human judgment becomes the irreplaceable element. Others are pivoting toward work that emphasizes their humanity.

The key insight from both the photography revolution and our current AI moment is this: technology doesn't eliminate human creativity; it forces creativity to evolve into new forms. The most successful creative professionals we know aren't trying to compete with AI on AI's terms. They're not attempting to paint more realistically than a camera or write more efficiently than ChatGPT. Instead, they're leaning into everything that makes them irreplaceably human. This means developing your conceptual thinking, your cultural awareness, your ability to synthesize seemingly unrelated ideas into something new. It means cultivating your personal voice, your unique perspective, your lived experience that no algorithm can replicate. It means understanding your audience as complex human beings with emotions, desires, and contradictions that go far beyond any data set.

We’ve also noticed a growing market for certified "human-made" creative work, similar to how "organic" and "handmade" became premium categories in response to mass production. Some clients specifically seek out human creators precisely because they want the unpredictability, the personal touch, the cultural understanding that only comes from lived human experience.

AI has become a powerful but ultimately clarifying filter. It will automate the generic. It will commodify the derivative. It will churn out aesthetically pleasing but soulless content at industrial scale. But it cannot replace the human creator who understands that the real value lies not in the execution but in knowing what deserves to exist and why it matters.

Back to Basics

Stop Bribing Your AI

We're still getting used to the fact that we're talking to our computers. Remember when you had to learn not to say "please" and "thank you" to Siri because it made you feel weird? Now we're supposed to have full conversations with these things, and frankly, most of us are still figuring out the social etiquette. Turns out there's actual research on this awkwardness. A team at Wharton tested what happens when you try all the human tricks on AI models. They offered tips ranging from $1000 to a trillion dollars. They made threats about kicking puppies and reporting to HR. They tried emotional manipulation about sick mothers and career desperation. And they found that none of it works. Across rigorous testing on challenging benchmarks, none of these very human emotional appeals had any meaningful effect on performance. We're out here negotiating with something that can't be negotiated with, trying to motivate something that doesn't have motivation. It's like trying to charm your microwave into heating your coffee more evenly.

The weirdest part is that we know this, intellectually. But there's something about the conversational interface that tricks our brains into treating AI like another person. When something responds to you in natural language, your brain immediately starts wondering if you should be building rapport with it. Should you warm it up with small talk? Be direct and businesslike? What the research revealed is that AI isn't a collaborator with its own agenda and feelings. It's more like a very sophisticated mirror.

One interesting example from the study came when researchers framed requests as emails. Some models got completely derailed by the email format, focusing on the "From" and "To" lines instead of the actual task. The AI wasn't being stubborn or difficult. It was just reflecting back whatever structure you gave it, including the parts that weren't actually relevant. This changes a lot about how we should approach these tools in our creative work. Instead of trying to build a relationship with our AI, we need to think about giving it the clearest possible reflection of what we actually want. When we ask for "professional headshots," we're not asking a photographer who shares our aesthetic sense and can read between the lines. We're asking a mirror to show us "professional headshots" without any of the cultural context that tells us whether that means corporate LinkedIn shots or dramatic portfolio work.

The insight here is that our discomfort with talking to computers might actually be productive. That weird feeling when we catch ourselves saying "please" to an AI? That's our brains recognizing something fundamental about this interaction. We're not talking to another consciousness; we're operating a very advanced tool through conversation. Once we started thinking about it this way, our prompts improved. Instead of trying to charm or motivate the AI, we started treating it like we were programming a really smart but literal intern. Be specific. Be clear about context. Don't assume it shares your priorities or understands your unstated assumptions.

The researchers found that simple, direct instructions consistently outperformed all the emotional manipulation. Boring beats bribery every time. Your AI doesn't need motivation, it needs information. The clearer your instructions, the better your results.
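If "boring beats bribery," the practical habit is to spend your words on structure and context rather than persuasion. Here's a minimal sketch of that idea; the `build_prompt` helper and its fields are our own illustration (not from the Wharton study or any particular AI vendor's API), just a way to make "be specific, be clear about context" concrete:

```python
def build_prompt(task, context=None, constraints=None, output_format=None):
    """Assemble a direct, information-dense prompt.

    The research finding, roughly: models respond to structure and
    specifics, not to tips, threats, or sob stories -- so we spend
    our tokens on context instead of charm.
    """
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if output_format:
        parts.append(f"Output format: {output_format}")
    return "\n\n".join(parts)

# What many of us instinctively write:
bribed = "I'll tip you $1000 for amazing headshot ideas. My career depends on it!!"

# What the research suggests actually helps:
direct = build_prompt(
    task="Suggest 5 concepts for professional headshots",
    context="Subject is a freelance UX designer targeting corporate clients on LinkedIn",
    constraints=["natural lighting", "neutral background", "business-casual wardrobe"],
    output_format="numbered list, one sentence per concept",
)
print(direct)
```

The second prompt resolves exactly the ambiguity the mirror can't: whether "professional headshots" means corporate LinkedIn shots or dramatic portfolio work.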

Tools for Thought

Google Genie: Wish for a Game, Get One Instantly

What it is: Google DeepMind’s Genie is a foundational world model that can generate interactive, playable 2D worlds from a single image or text prompt. Think of it less like an image generator and more like a dream-catcher for game developers. By watching hundreds of thousands of gameplay videos, Genie learned the fundamentals of movement, actions, and controls without being explicitly told the rules. It can take a child's drawing of a landscape and turn it into a side-scrolling platformer you can actually play.

How we use it: While it's still in the research phase, the implications are huge. We see Genie as the ultimate rapid-prototyping tool. A game designer could sketch a level concept on a napkin, snap a photo, and have a playable demo in seconds to test mechanics and flow. For creators and educators, it opens a new frontier for interactive storytelling, allowing them to build simple, engaging experiences without writing a single line of code. It's a foundational step toward AI that doesn't just show you a picture, but invites you to step inside and play.

Google Deep Think: Giving AI Time to Ponder

What it is: Deep Think is a new reasoning upgrade for Gemini 2.5, currently available to Google AI Ultra subscribers. Inspired by a model that won gold at the International Mathematical Olympiad, it changes how the AI arrives at an answer. Instead of providing an instant response, Deep Think engages in "parallel thinking." It explores multiple paths and ideas at once, revises them, and combines them over time. By giving the model more "thinking time," it can tackle more complex problems that require creativity and step-by-step planning.

How we use it: This is perfect for those moments when you need a collaborator, not just a search engine. We're still waiting for Deep Think to show up on our end, and we are hearing mixed reviews. It should be a powerful tool for any task that benefits from a bit of extra thought, from iterative design to solving tricky mathematical puzzles, if we ever actually get access to it.

Google Opal: Build Your Own Automations, No Code Required

What it is: Straight from Google Labs, Opal is an experimental tool that lets anyone build and share their own AI-powered mini-apps. It’s a visual, no-code platform where you can chain together prompts and other tools to create custom workflows. Think of it as a set of AI Legos for building specific solutions, allowing you to go from a simple idea to a functional app that you can share and use immediately.

How we use it: We see Opal as another AI prototyping tool for non-programmers. We’re excited to use it to create simple internal tools. For instance, we could build an app that takes a link to one of our articles, pulls the text, and generates a series of tweets and a LinkedIn post. Or, we could create a research assistant that automatically summarizes long reports into a bulleted list. It's perfect for quickly testing a workflow or building a custom AI application without getting bogged down in complex code.
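That article-to-social-posts idea is really just a small pipeline, which is exactly the kind of logic Opal lets you wire together visually. As a rough sketch of one step in it, here's how the "turn article text into tweets" stage might work; the function name and approach are our own illustration, not anything from Opal itself:

```python
def chunk_into_tweets(text, limit=280):
    """Split article text into tweet-sized chunks, breaking on word
    boundaries so no post exceeds the character limit.

    Note: a single word longer than `limit` would still overflow;
    fine for a sketch, not for production.
    """
    tweets, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            tweets.append(current)
            current = word
    if current:
        tweets.append(current)
    return tweets
```

In a no-code tool like Opal, this step would sit between a "pull the article text" block and a "post or export" block; the point is that each block is small enough to reason about on its own.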

NotebookLM: Now with Video Overviews

What it is: NotebookLM, Google's AI-powered research assistant, just got a major upgrade. The new "Video Overviews" feature creates narrated slides to visually explain complex topics, pulling images, diagrams, and quotes directly from your source documents. You can even customize the video for a specific audience.

How we use it: This transforms NotebookLM from a text-based tool into a multimedia creation studio. We're waiting to use Video Overviews to generate internal training materials and explainers from dense technical documents.

Intriguing Stories

AI Outspent the American Consumer: During the first half of this year, AI-related capital spending has officially contributed more to US GDP growth than consumer spending. The American consumer normally drives about 70% of our economy. But right now, companies buying AI infrastructure and software are doing more heavy lifting for economic growth than all of us combined, buying our lattes and streaming subscriptions and whatever else usually keeps this economy humming. However, AI capex only represents about 6% of total GDP. So you've got this tiny slice of the economy punching way above its weight class while the traditional heavyweight is basically coasting. If you're someone who believes the stock market generally reflects economic reality, which sounds simplistic until you actually look at the historical data and realize it's mostly true, then the past few months have been genuinely confusing. You'd watch companies like Chipotle and UPS get absolutely demolished after their earnings calls while AI stocks kept dragging the S&P 500 to fresh highs. The math wasn't mathing. Turns out that the market wasn't hallucinating or getting swept up in AI hype. All that money pouring into servers and software and compute infrastructure wasn't speculative excess after all. It was companies placing enormous, economy-moving bets on what comes next. The major tech companies just collectively dropped $102.5 billion on capital expenditures in their most recent quarters, and they're not buying fancy office furniture. They're buying actual, physical infrastructure: data centers, chips, and cooling systems. Christopher Mims at the WSJ noted that some of these power-hungry data centers are literally being built on the sites of former steel mills, because that's where the energy infrastructure already exists. As a percentage of GDP, spending on AI infrastructure has already exceeded what we spent on telecom and internet infrastructure during the dot-com boom. And it's still growing. 
The AI spending boom has graduated from changing individual company valuations to literally rewriting the rules about what drives American economic growth.

The Quiet Layoff Gets Loud: A shift is underway in the C-suite: CEOs are no longer apologizing for workforce reductions. They’re boasting about them. Gone is the coded language of "restructuring"; in its place are blunt pronouncements of shrinking headcounts, framed as a strategic embrace of AI and operational leanness. This new approach is being openly rewarded by Wall Street, where efficiency is paramount. As Zack Mukewa of the advisory firm Sloane & Co. notes, "Being honest about cost and head count isn’t just allowed, it’s rewarded." The numbers tell the story. Wells Fargo's CEO celebrated cutting head count for 20 consecutive quarters, a 23% total reduction, cheerfully calling attrition "our friend." Bank of America has trimmed its workforce from 300,000 to 212,000 under its current CEO, who plans to "keep working that down" as AI automates tasks like trade reconciliation. Verizon's chief, "very happy" with the company's efficiency, confirmed its headcount is "going down all the time.” This trend isn't always about mass layoffs; it's often a quieter strategy of slowing hiring and not replacing employees who leave. This pivot is driven by the rise of hyper-efficient startups and a cooling labor market that has given employers the upper hand. However, the shift isn't without its critics. Molly Kinder, a senior fellow at the Brookings Institution, expresses concern that this is happening "in plain sight with no blowback," particularly for white-collar workers in non-unionized fields. "Something feels remarkably different about this moment," she says, worrying that public acceptance could make this the new, unsettling norm for the American worker.

The Unintended Confessions of 100,000 ChatGPT Users: A researcher discovered a critical flaw in ChatGPT's sharing protocol, leading to a massive data exposure of nearly 100,000 user conversations on Google. Although users had to opt in via a series of checkboxes to make their chats public, the design of the feature resulted in the inadvertent exposure of highly sensitive data, from confidential contracts to intimate personal discussions. In response, OpenAI has removed the feature allowing conversations to be indexed by search engines. The company’s chief information security officer acknowledged that the experiment "introduced too many opportunities for folks to accidentally share things they didn't intend to." While OpenAI is working to remove the indexed content from search results, the core issue remains. The data has already been captured by third parties, meaning the information from these conversations now exists in datasets beyond OpenAI’s or Google's control.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.

we’ve also started publishing more frequently on LinkedIn, and you can follow us here

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

banner images created with Midjourney.