Verses Over Variables
Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
The Birth of a New Cinematic Language
Let's get personal for a moment. Think about your family photos. That shoebox of Polaroids, the dusty album, that one picture of your dad from the '70s you swear could be a still from a Hal Ashby film. Now imagine feeding those memories, that specific grain and light and emotion, into a machine and asking it to dream for you. Imagine turning your personal history into the very paint used to create a new universe. Filmmaker Eliza McNitt did exactly that. The result is Ancestra, an 8-minute short film that premiered at Tribeca just last week, produced by Darren Aronofsky's new venture, Primordial Soup, in partnership with Google DeepMind. We've seen plenty of "AI films," most feeling like uncanny, slightly off-kilter tech demos. This one actually tells a story that couldn't exist without the technology, rather than using AI as a flashy effects layer. McNitt trained the model on her own family's visual archive, so every generated frame carries the aesthetic DNA of her actual childhood.
McNitt's film tells the story of her own birth, a perilous delivery where she was born with a hole in her heart. The narrative transforms her mother's love into a literal, cosmic force that ensures her survival. To bring this to life, she didn't just tell the AI what to create. She fed Google's Veo 3 model the source code of her own existence, including photos her late father took on the day she was born. The AI effectively generated a digital ghost of McNitt herself, imbued with the texture of her own family archive. The difference between this approach and traditional CGI is philosophical, not merely technical. For years, CGI has been an act of meticulous construction. Artists build worlds from the ground up, like digital sculptors or architects. They craft every model, paint every texture, set every keyframe. That's world-building. What we're seeing in Ancestra is closer to world-conjuring. The filmmaker's role evolves from master builder into something more like a shaman or dream guide. You don't lay every brick; you whisper intentions to a system that then presents you with emergent, often unexpected visual possibilities.
This matters because McNitt knows how to push technological boundaries. Her previous work, "Spheres," was a VR experience executive produced by Aronofsky that featured Millie Bobby Brown, Jessica Chastain, and Patti Smith as the multigenerational voices of the cosmos. That project made history as the first VR acquisition out of Sundance in 2018. So when she approaches AI filmmaking, she brings years of experience translating impossible concepts into immersive experiences. The production itself tells a fascinating story about hybrid workflows. Ancestra utilized a crew of over 200 people working with SAG-AFTRA actors, alongside AI-generated imagery. McNitt wielded Veo not as a replacement for cinematography but as a way to visualize the literally unfilmable: the moment of her own birth, the cosmic forces her mother channeled, the microscopic cardiac defect that nearly ended her story before it began. The AI handled "cosmic events and microscopic worlds" while human performers grounded the emotional reality.
Of course, we have to talk about the elephant in the render farm: copyright. For any creative professional reading this, the question of where the AI gets its ideas remains paramount. Google has stated that Veo is trained on licensed YouTube content. This is a deliberate attempt to build a new cinematic language on a more ethical foundation than some of the "scrape it all and ask forgiveness later" models we've seen. The conversation around AI and intellectual property is far from over, but this move toward a licensed ecosystem suggests a future where these powerful tools can be used without undercutting the value of human creativity.
The timing of all this feels significant. Google has just launched Flow, its new AI filmmaking tool, which is explicitly built around this kind of creative collaboration. Ancestra is the first of three planned shorts from Primordial Soup, exploring AI's role in storytelling. We're watching filmmakers figure out in real time how to collaborate with intelligence that's neither quite artificial nor quite human anymore. Ancestra works because it has a beating human heart at its core. McNitt used this bleeding-edge tech not to create something cold and synthetic, but to tell one of the most intimate stories imaginable in a way that was previously impossible.
When Your Data Moat Becomes a Puddle
We've always found comfort in the business metaphor of moats, picturing companies as mighty stone castles protected by deep, wide channels of water. For years, the widest and deepest moat of all seemed to be proprietary data. Those carefully guarded datasets felt like the ultimate defensive asset, treasure troves of information that no rival could ever hope to replicate. Then Bob McGrew, OpenAI's former Head of Research, dropped a bomb in his recent Sequoia Capital interview, and we're questioning some fundamental assumptions about competitive advantage.
McGrew argues that proprietary data as a competitive advantage is rapidly becoming worthless. Consider Bloomberg's journey with its original BloombergGPT. The financial giant spent decades accumulating an incomparable trove of market data, news feeds, and financial communications. When it decided to train its own language model, the logic seemed bulletproof: feed a specialized AI this exclusive diet of financial information, creating a tool so finely tuned that no general-purpose model could compete. The results were humbling. The latest generation of general-purpose models often outperforms these specialized versions, even on domain-specific tasks where proprietary data should theoretically reign supreme. McGrew argues that we've been misunderstanding what made that data valuable in the first place. The real treasure wasn't the information itself but what he calls "embodied labor." Bloomberg's competitive advantage stemmed from the massive human effort its data represented: thousands of journalists chasing stories, analysts poring over earnings reports, and researchers building relationships with sources across global markets. That accumulated knowledge cost a fortune and seemed impossible to replicate quickly. The value has always lived in the expensive, time-intensive process of gathering that knowledge. We've been confusing the outcome with the effort that created it.
AI obliterates that equation entirely. What took Bloomberg years of human labor to compile can now be replicated by an AI system in days or weeks. Instead of decades of relationship-building to understand market sentiment, an AI can design comprehensive research that reaches more people, asks better questions, and analyzes results faster than any human team. The laborious work of synthesizing insights from thousands of documents becomes a simple query. When the labor becomes virtually free, the advantage it created evaporates. AI threatens to automate the very processes that made expertise valuable. Our ability to synthesize research, identify patterns, and generate insights based on accumulated experience suddenly faces competition from systems that can process vastly more information and identify subtler patterns than any human team.
We're watching the fundamental nature of competitive advantage evolve in real time. When everyone has access to the same incredible tools, when information gathering becomes trivial, when synthesis happens at machine speed, the real differentiator becomes our ability to ask the right questions and recognize valuable answers when we see them.
Back to Basics
When AI Shows Us Who We Really Are
I had an interesting moment with ChatGPT the other day that made me question what we think we know about unbiased AI. Curious about how these systems perceive us (and following a meme going around), I asked it to generate an image of what it thought I was like, based on our many interactions. The result was unsettling: a picture of a man, head down, grinding through technical work at lightning speed. When I challenged the machine on its assumption, it course-corrected with the algorithmic equivalent of an embarrassed apology. The new image showed a woman, but now she was teaching a class, smiling warmly, seemingly removed from the hard-edged technical work itself. The AI had essentially said, "Oh right, women can be technical too, but they're probably educators about it rather than practitioners."
This wasn't just a glitch in my personal interaction. New data from Sensor Tower reveals a similar pattern unfolding across the entire generative AI landscape, and the demographics tell a story that should make us all uneasy. Major AI productivity platforms, such as ChatGPT and Microsoft Copilot, have user bases that skew 60 to 70 percent male. Meanwhile, Character AI, the platform designed for companionship and role-playing, attracts a stunning 70 percent female audience, with nearly 90 percent of users under the age of 35. This seems like the emergence of a digital gender divide that mirrors some of our worst offline stereotypes. The systems we use for serious work tend to attract men, while platforms designed for emotional connection draw women. The market has essentially recreated the tired assumption that men want tools and women want relationships, except now we're encoding these biases into the foundational technologies of the next decade. When ChatGPT assumed I was male, it wasn't making a random error. The training data taught it that technical competence correlates with masculinity because, historically, that's been largely true in the dataset. The bias has graduated from being a data problem to becoming the system's default assumption about how the world works.
You can see the feedback loop forming in real time. Men use the productivity tools more heavily, which means their interaction patterns and preferences get disproportionately represented in future training cycles. Women, finding these tools less intuitive or welcoming, tend to migrate toward platforms that feel designed for them, which often focus on companionship rather than capability. The market responds by doubling down on these distinctions, creating even sharper divisions between "serious" AI tools and "social" AI experiences. The Character AI phenomenon is interesting because it represents both progress and a troubling missed opportunity. On one hand, we finally have an AI platform where women feel genuinely welcome and engaged. The user experience clearly resonates with a demographic that mainstream AI has largely failed to serve. But when you step back and look at the broader landscape, Character AI starts to feel like a digital ghetto, a separate space created because the leading platforms couldn't figure out how to be inclusive.
Tools for Thought
The Midjourney Gallery Comes Alive
What it is: Midjourney released its video model, and it did not disappoint. At its core, Midjourney's new video feature is an integrated motion engine designed to animate a user's existing library of generated images. The platform launched exclusively with image-to-video capabilities, taking advantage of Midjourney's ability to create unique worlds and textures. Each job produces four five-second renders, and any render can be extended up to four more times (for a total of roughly 20 seconds). Rendering is lightning-fast and inexpensive, costing roughly $4 for 20 videos. All four renders play simultaneously, making it much easier to choose the version you prefer. Midjourney's platform is also unusual in letting you render in any aspect ratio, though output is currently limited to 480p.
How we use it: So far, we've been experimenting with animating some of our existing images and uploading a few photos of our own. We really enjoy how the platform maintains consistency across styles and textures; right now, the only thing we're missing is audio. (If you missed our write-up about the copyright issues, you can have a read here.)
Google Search Talks Back
What it is: The latest evolution of Google’s AI-powered search is Audio Overviews, a new feature that transforms traditional text summaries into short, conversational audio clips. Instead of just reading, you can now listen to a summary of your search topic, presented as a brief, podcast-style discussion between two distinct AI voices. This makes it easy to get a quick understanding of a new subject. Alongside this, Google is also rolling out Search Live as part of a new "AI Mode," which allows for a full, back-and-forth spoken conversation with Search. You can ask follow-up questions in a natural dialogue, turning the search engine into a real-time, interactive research partner.
How we use it: We are just getting comfortable talking to our computer, so we’ve only been using this audio feature when we are truly multitasking or we have misplaced our glasses.
Intriguing Stories
Meta’s New Alliance
Meta has announced a landmark $14.3 billion investment for a nearly 50% stake in Scale AI. Scale AI specializes in the meticulous, painstaking work of curating and labeling vast oceans of data, a process that is absolutely essential for training a large language model to be accurate, effective, and safe. Meta, despite possessing enormous computational power and talent, has been in a fierce race to match the progress of its rivals. This move reveals their strategy: if you can't outpace the competition on the track, then buy the company that makes the best running shoes. By investing so heavily in Scale, Meta is vertically integrating the foundational layer of AI development. As part of the deal, Scale’s founder, Alexandr Wang, will move to Meta to lead a new division focused on superintelligence. The fallout, however, was immediate. Google began the process of terminating its massive contract, while Microsoft and OpenAI also formally moved to sever their own deep relationships with the platform, as no major company can afford to route its most sensitive data through an entity so closely aligned with a chief competitor.
Your Childhood Toys Are About to Get an AI Brain
Mattel has announced a major strategic partnership with OpenAI. This means the iconic brands, from Barbie and Hot Wheels to Fisher-Price, will soon be integrated with the powerful AI technology that drives ChatGPT. For Mattel, it is a bold leap to redefine its products for a digital-native world, creating deeper and more personalized play experiences. For OpenAI, it represents a significant opportunity to embed its technology into the fabric of daily life through one of the most trusted and recognized consumer brands on the planet. However, this ambitious vision comes with a significant and necessary dose of caution. The idea of AI-powered toys immediately brings up critical questions about privacy and child safety. Mattel has walked this path before with its 2015 "Hello Barbie," a Wi-Fi-connected doll that faced a strong backlash over data security and was eventually discontinued. This history places an immense responsibility on both Mattel and OpenAI to prove they can navigate these challenges correctly this time.
When Your AI Assistant Has No Inside Voice
Meta's new standalone AI app appears to challenge the foundational assumption that the questions you ask your search engine or AI assistant are for your eyes only, creating what can only be described as a privacy nightmare by design. The issue stems from a core feature that most users would never expect from a personal AI assistant: a social media-style public feed called "Discover." When a user has a conversation with Meta AI, a "share" button is present. Many people, reasonably assuming this function works like a direct message or sharing to a specific friend, have been clicking it, only to find their private queries, audio clips, and images broadcast to the entire app's user base. The app's design has been criticized for failing to make the public nature of this action sufficiently clear. This has led to a surreal and alarming public feed of unintentional oversharing. We have seen users publicly post queries about sensitive legal troubles involving named individuals, ask for advice on personal medical conditions, and reveal other identifying details like home addresses. The user's Meta AI profile is also linked to their Facebook or Instagram account, meaning these public queries are often tied directly to a real-world identity, amplifying the potential for embarrassment or harm. In response to widespread criticism, Meta has recently added a new pop-up warning to clarify that shared prompts are public. While this is a necessary step, the core design choice to blend a private chatbot with a public social feed remains a point of contention.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you, or you haven't signed up yet, please subscribe so you never miss an issue.
we’ve also started publishing more frequently on LinkedIn, and you can follow us here
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
banner images created with Midjourney.