Verses Over Variables

Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.

AI Hype Cycle

In Defense of the Blank Page

We've all been there. Staring at that blinking cursor, feeling like it's personally mocking your creative ambitions. For generations, this initial creative friction was just part of the deal. You suffered through it, maybe developed some elaborate procrastination rituals involving way too much coffee, and eventually something emerged. The other week, we were wrestling with a particularly stubborn case of writer's block when we decided to try something that felt almost like cheating. We opened Claude and typed: "Give me 100 titles for a blog post about creativity and AI." Sixty seconds later, we had a cascade of options. A hundred of them. What would have taken us an entire afternoon of staring into the void and muttering creative profanities was done faster than a TikTok dance trend dies.

Alexander Embiricos from the Codex Team at OpenAI calls this the "Abundance Mindset," and frankly, it's been hiding in plain sight. Embiricos describes successful AI users as those who embrace "running many tasks in parallel" and trying "anything, even multiple times." The magic happens when you stop being precious about computational resources and start treating AI like an infinitely patient brainstorming partner who never judges your terrible first ideas. We've been experimenting with this idea of creative abundance in increasingly weird ways. Last month, we were stuck on a visual concept for a presentation. We spent an hour feeding Midjourney increasingly desperate variations of "professional," "innovative," "dynamic," all the corporate buzzwords that produced exactly the soulless stock imagery you'd expect. Frustrated, we threw out the rulebook and typed: "the sound of rain on a tin roof that feels like a forgotten memory." It was a logically impossible request for an image generator, but Midjourney created a haunting, abstract composition that somehow captured the mood we couldn't articulate with normal words.

This deliberate misinterpretation (aka the happy accidents) has become one of our favorite creative techniques. Instead of precise, descriptive prompts, we've started feeding AI systems poetic contradictions and cross-modal confusion. When you give AI a logically impossible task, it's forced to make creative leaps that often land in places human thinking would never venture. The happy accidents are honestly the best part. AI doesn't get frustrated by impossible requests, it just tries to find patterns in the chaos and often produces something unexpectedly beautiful in the process.

Going back to those 100 titles we generated, the real revelation wasn't the quantity but what happened next: we didn't use a single one of them. The AI's list wasn't the solution, it was the spark. We found ourselves drawn to a word in suggestion #17, intrigued by a phrase in #42. The perfect title wasn't on the list, it was born from the collision of AI abundance and human synthesis.

This brings us to the other side of the creative equation, something designer Willem Van Lancker articulates beautifully: "Productive Friction." Van Lancker argues that while AI can give you a mountain of marble instantly, it can't teach you to be Michelangelo. Only the right kind of struggle can do that. Now, you might be thinking this sounds like some nostalgic "uphill both ways in the snow" argument, but the science backing this up is interesting. MIT researchers recently hooked up college students to EEG machines while they wrote essays using ChatGPT versus going old school with just their brains. The AI-assisted students showed 55% less brain activity and later couldn't quote their own work. Their essays were polished, sure, but they'd essentially become strangers to their own thoughts.

This cognitive debt happens when we repeatedly outsource our thinking to external systems. Short-term gains, long-term cognitive costs. Like credit card debt, but for your brain. We're seeing this pattern everywhere. Another study, from Model Evaluation & Threat Research (METR), found that experienced developers were 19% slower when using AI coding tools, despite predicting they'd be 24% faster. The tool that was supposed to supercharge productivity was quietly sabotaging it.

But before we all start planning our return to typewriters and carbon paper, there's a more nuanced story here. Adobe's 2025 education report found that 91% of educators saw enhanced learning when students used creative AI appropriately. The keyword there is "appropriately," which makes all the difference. This is where the abundance and friction mindsets stop being contradictory and start being complementary. We're beginning to think of them as a creative one-two punch: abundance for exploration, friction for refinement.

When you're starting a project, embrace the abundance mindset. Use AI to drown that blank page in possibilities. Generate not just 10 logo concepts but 50. Not just one approach to a coding problem but five different architectures. Don't be precious about computational resources because, honestly, they're not scarce anymore. This is the equivalent of having an unlimited supply of cheap sketching paper instead of one expensive canvas you're afraid to mark up.

But then comes the crucial transition. Once you have that abundance of raw material, it's time to shift into productive friction mode. This is where you engage the uniquely human work of curation, synthesis, and judgment. You take the AI's architectural suggestion and push it in a direction it never would have considered. You become the editor of infinite possibilities rather than the generator of singular perfection. The goal isn't to avoid AI (that would be like avoiding calculators in favor of slide rules), but to be intentional about when and how we use it. Practice deliberate friction by regularly working without AI assistance. Your brain needs resistance training just like your muscles do.

Develop AI literacy: learn to prompt effectively, recognize AI limitations, and critically evaluate outputs. Understand when AI is likely to be accurate versus when it might hallucinate or perpetuate biases. This isn't about becoming a prompt engineer, it's about becoming a thoughtful collaborator with intelligent systems. Most importantly, preserve your ability to think independently by regularly flexing those mental muscles without assistance. The future belongs not to those who use AI most efficiently, but to those who use it most wisely.

We're moving into the proof-of-work economy where your portfolio matters more than your pedigree, and the process you show matters as much as the final product. The most successful creators won't be those who generate the most content, but those who demonstrate the best judgment in choosing what's worth making and how to make it better. The abundance mindset and productive friction aren't opposing forces, they're dance partners. AI gives us the ability to explore infinite possibilities without the traditional costs of iteration. But human wisdom, taste, and judgment determine which of those possibilities deserves to exist and how to bring it to life with meaning and intention.

Back to Basics

You’re Talking to AI All Wrong

When we watched Dex Horthy's recent talk about "Context Engineering," something clicked. Finally, someone had articulated what creative professionals have been doing instinctively with AI all along. While developers were perfecting the art of the one-line prompt, designers were already uploading brand guidelines, mood boards, and entire visual libraries to their AI tools. Horthy's framework gave us the language to understand why the creative approach was working so well.

For the past couple of years, the tech community has been preaching prompt engineering as the secret to AI mastery. And to be fair, it was an important first step. Prompt engineering is essentially the art of crafting the perfect request, choosing exactly the right words, in the right order, with the right specificity. It's the difference between asking for "make an ad" and asking for "create a 30-second social media video targeting Gen Z women interested in sustainable fashion, highlighting our carbon-neutral production and under-$50 price point, with upbeat music and quick cuts." This was genuine progress. We learned that AI responds better to specific language, clear parameters, and well-structured requests. The prompt engineers had a point: there's real craft in formulating these requests effectively.

Horthy's framework reveals the bigger picture. He positions prompt engineering as just one aspect of Context Engineering, which is the comprehensive practice of providing AI with rich, persistent information environments. This distinction illuminates why creatives were succeeding: they were already thinking beyond the prompt to the entire collaborative relationship. The difference is that prompt engineering is about crafting the perfect question, while context engineering is about creating the perfect environment for understanding. Think of it this way: prompt engineering is giving someone directions to your house, while context engineering is providing those directions along with local landmarks, traffic patterns, and parking tips. Both get the job done, but one creates a fundamentally different quality of experience.

Context engineering encompasses everything that surrounds and supports that prompt. It includes building reference libraries that AI can access, maintaining continuity across sessions, establishing persistent brand guidelines, and creating frameworks that ensure consistent output. While prompt engineering asks "How do I phrase this request?", context engineering asks "How do I help the AI understand my entire creative universe?" Think about how creative teams actually work. When we brief a new photographer, we don't just say "shoot our product beautifully," even if we say it very precisely. We share mood boards, previous campaign assets, brand guidelines, the creative director's pet peeves, examples of what definitely doesn't work, and stories about why the client has strong feelings about certain colors. The entire ecosystem of information is effectively context engineering. While prompt engineers were trying to compress everything into one perfect sentence, designers were building creative ecosystems around their AI tools.
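The photographer-brief analogy can be sketched in code. Below is a minimal, hypothetical illustration, assuming a generic chat-style API that accepts a list of role-tagged messages; the function names and example content are ours, not any specific vendor's SDK:

```python
# Minimal sketch of prompt engineering vs. context engineering,
# assuming a generic chat API that takes role-tagged messages.
# All names and example content here are illustrative.

def build_prompt_only(request: str) -> list[dict]:
    """Prompt engineering: everything rides on one well-crafted request."""
    return [{"role": "user", "content": request}]

def build_with_context(request: str, brand_guidelines: str,
                       reference_notes: list[str]) -> list[dict]:
    """Context engineering: the request travels with a persistent
    environment of guidelines and reference material."""
    context_block = (
        "BRAND GUIDELINES:\n" + brand_guidelines + "\n\n"
        "REFERENCE MATERIAL:\n"
        + "\n".join(f"- {note}" for note in reference_notes)
    )
    return [
        {"role": "system", "content": context_block},  # persists across turns
        {"role": "user", "content": request},
    ]

messages = build_with_context(
    "Draft a 30-second video script for the new product line.",
    brand_guidelines="Voice: warm and direct. No jargon, no stock-photo look.",
    reference_notes=[
        "Spring campaign leaned on hand-drawn type",
        "Client has strong feelings about forest green",
    ],
)
```

The point of the sketch: the "system" message is the mood board and brief, assembled once and reused, while the "user" message stays short because it no longer has to carry the whole creative universe.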

The components of Context Engineering map directly to creative team workflows. Building reference libraries translates to our mood boards and style guides. Maintaining memory across sessions reflects standard project management practices. Establishing persistent preferences aligns with how we implement brand standards. And demanding structured outputs mirrors our longstanding use of templates and design systems. What the tech world calls RAG (Retrieval-Augmented Generation), we've been calling reference materials. What Horthy's framework reveals is that the tech world is now formalizing what creatives do intuitively. They're building systems around what we've always called good communication. And there's real value in this formalization.
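The RAG-as-reference-materials idea can be made concrete with a toy sketch. Production systems rank documents with vector embeddings; here, as a stand-in, plain word overlap does the scoring, and the library contents are invented for illustration:

```python
import re

# Toy sketch of the retrieval step behind RAG: score each reference
# document against a query by word overlap, keep the best matches,
# and prepend them to the prompt. Real systems use vector embeddings;
# keyword overlap stands in for them here.

def tokens(text: str) -> set[str]:
    """Lowercased alphanumeric words in a string."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, library: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    return sorted(library,
                  key=lambda doc: len(tokens(query) & tokens(doc)),
                  reverse=True)[:k]

# A stand-in "reference library" of the kind a creative team keeps anyway.
library = [
    "Logo usage: minimum clear space equals the height of the wordmark.",
    "Campaign history: the 2023 spring launch centered on sustainability messaging.",
    "Color palette: primary colors are forest green and cream for ad layouts.",
]

query = "Which colors fit the new sustainability ad?"
hits = retrieve(query, library)

# The retrieved material rides along with the request, not inside it.
prompt = "Context:\n" + "\n".join(hits) + "\n\nQuestion: " + query
```

Swap the overlap score for embedding similarity and the list for a vector store, and this is the shape of the formalized version; the workflow itself is the mood board, automated.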

The real insight here is that effective AI collaboration mirrors effective human collaboration. And creatives have been perfecting that art for decades. We know you can't just drop someone into a project with minimal context and expect brilliance. You need to share the vision, the constraints, the history, the failures, the client's secret preferences. Prompt engineering asks a question; context engineering creates a shared understanding.

The most validating part of this evolution is seeing our instinctive creative process recognized as the gold standard for AI collaboration. The lesson here isn't that creatives need to learn some new methodology. It's that having a framework for what we've been doing helps us do it even better. So the next time you sit down to work with AI, think less about crafting the perfect question and more about whether you've given it enough background to actually help you. Because at the end of the day, context is everything.

Tools for Thought

Dia Browser: The New AI OS

What it is: The Dia Browser is a new AI-powered web browser that transforms your browsing session from a collection of isolated tabs into a cohesive, intelligent workspace. Think of it as giving your browser a short-term memory and a built-in research assistant that actively understands the content you're viewing. Its core is an AI side panel that can be given context by simply @mentioning your open tabs. This allows the AI to synthesize information across multiple sources, interact directly with video content to pull summaries or quotes, and even learn your preferred style. The browser aims to make the leap from a simple window to the web into an active partner in your digital work. (Only available on Mac for now.)

How we use it: The real magic of Dia is how it collapses the time between research and creation. I've started using it as my primary tool for complex analysis. I'll open a half-dozen financial reports or technical papers in different tabs, and then simply ask the AI to @all open tabs and "compare the primary conclusions and identify any conflicting data." The output is a synthesized brief that I can drop directly into my notes. For content creation, I've created several custom skills, such as "summarize" (instantly produces a bulleted list of key takeaways from any article) and "rephrase" (adopts my voice). Skills are like custom GPTs for the web, so Dia has removed some of the friction of collaborating with the internet.

The AI University: A Two-Campus Program with Anthropic and OpenAI

What it is: The two premier free AI learning resources are the Anthropic Academy and the OpenAI Academy. Together, they represent a complete, end-to-end educational ecosystem for mastering AI. Think of them not as competitors, but as two distinct colleges within the same university. Anthropic Academy is the "College of Arts and Sciences," offering a structured curriculum that teaches you the foundational principles, ethical frameworks, and architectural theory of AI collaboration. OpenAI Academy is the "College of Engineering," providing a hands-on, practical lab manual filled with developer-focused tutorials, code recipes, and API toolkits designed to get you building immediately.

How we use it: When people ask for the best way to learn AI development, I don't recommend one platform; I prescribe a full "semester" that leverages the unique strengths of both. I tell them to start at Anthropic to learn how to think. Begin with their "AI Fluency" course for the core principles, then master their "Prompt Engineering" tutorial to learn the art of instruction. Once that foundation is set, I send them to the OpenAI Academy for the "lab work." This is where they get their hands dirty with code by building projects from the tutorials on the Assistants API and function calling. For the final project, I advise them to combine both campuses: use Anthropic's advanced courses on the Model Context Protocol (MCP) to understand the high-level architecture of connecting AI to external data, while using OpenAI's guides on Retrieval-Augmented Generation (RAG) as the practical implementation manual. This two-campus approach is the most effective path I've found to becoming a truly well-rounded AI developer, equipped with both the "why" and the "how."

Intriguing Stories

Inside Meta’s Grab for AI Supremacy

The AI talent war has taken a startlingly personal turn. Top researchers across Silicon Valley are receiving direct messages from Mark Zuckerberg, who is personally spearheading Meta's aggressive recruitment drive. Meta is reportedly making offers worth up to $100 million to secure key individuals. These deals are supplemented by annual salaries in the $1-2 million range and significant equity. The entire operation is centered around Meta's newly formed Superintelligence Labs (MSL), led by Alexandr Wang, the founder of Scale AI.

This strategy appears to be born from a sense of desperation. Meta's flagship Llama 4 Behemoth model has reportedly faced multiple delays for not showing sufficient improvement. While the company released smaller Scout and Maverick models in April, their reception was lukewarm. In response, Meta has successfully poached at least eight researchers from OpenAI, while also hiring from Google DeepMind and Anthropic. However, the campaign has highlighted a cultural clash in the AI community, framed by some as "missionaries vs. mercenaries." OpenAI's CEO, Sam Altman, has publicly claimed that "none of our best people" have accepted Meta's offers, cutting to the heart of a debate over whether mission or money will win the AI race.

This raises critical questions about Meta's free-agency approach to team building. While it can assemble a Dream Team of talent, history shows that simply buying innovators is not a guarantee of success. This talent consolidation presents both promise and peril for the broader AI landscape. A supercharged Meta could accelerate breakthroughs and release more powerful open-source tools that benefit everyone. Conversely, it could create a brain drain that leaves smaller labs and startups unable to compete, potentially slowing innovation across the board.

Google Goes for a Swim

The proposed $3 billion acquisition of AI coding startup Windsurf by OpenAI recently collapsed due to significant concerns from Windsurf's team regarding the handling of their intellectual property within the OpenAI-Microsoft ecosystem. The inability to guarantee exclusive access to Windsurf's technology, a critical term for both parties, ultimately scuttled the conventional acquisition.

In the wake of this collapse, Google executed a swift and strategically sophisticated maneuver, securing a $2.4 billion deal with Windsurf. This arrangement is not a traditional buyout. Instead, it is a non-exclusive licensing agreement coupled with a high-profile talent acquisition: a reverse-acquihire. Windsurf retains its independence and the right to license its technology to others, while Google gains access to its codebase and, most importantly, hires key executives like CEO Varun Mohan and co-founder Douglas Chen to work within Google DeepMind.

The strategic calculus behind this deal is multi-faceted. For Google, it represents both an offensive and defensive victory. Offensively, it brings elite talent directly into its crucial agentic coding initiatives for the Gemini project. Defensively, it prevents a primary competitor from absorbing one of the most promising independent teams in the AI coding space. For Windsurf, the deal provides significant capitalization while preserving strategic autonomy. The company can continue its growth with a substantial cash infusion under the new leadership of interim CEO Jeff Wang and president Graham Moreno. For OpenAI, the outcome is a valuable lesson in the operational constraints that can arise from deeply integrated strategic partnerships.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you, or you haven't signed up yet, please subscribe so you never miss an issue.

we’ve also started publishing more frequently on LinkedIn, and you can follow us here

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

banner images created with Midjourney.