Verses Over Variables
Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
Teamwork 2.0: How AI is Rewriting the Rules of Collaboration
In a world increasingly fascinated (and slightly terrified) by artificial intelligence, it's easy to get caught up in dystopian visions of robot overlords and automated job losses. But what if AI isn't a threat but our new sidekick, especially for knowledge workers? Ethan Mollick and the team at Harvard's Digital Data Design Institute just published a research paper with some pretty groundbreaking findings suggesting AI may be our friend, not foe. Working with Procter & Gamble, the research team dove deep into the world of product development to see how AI impacts teamwork, and the results are quite surprising.
The study (a pre-registered, randomized controlled trial) involved hundreds of P&G professionals, from R&D wizards to commercial gurus, who were tasked with developing new product ideas; some teams got access to the power of GPT-4 (or GPT-4o). The major takeaways:
AI is a Performance Power-Up: Turns out, one person armed with AI can be just as productive as an entire team working the old-fashioned way. Think of it as the Mario Kart star: suddenly you're dodging banana peels and firing red shells like a boss. Or, for those not into Nintendo, it's finding the cheat code to unlock maximum productivity.
Silos? What Silos?: AI seems to have a knack for breaking down those pesky departmental barriers that plague so many organizations. With AI in the mix, R&D and commercial folks started speaking the same language, coming up with technically sound and market-savvy solutions. It’s like the Tower of Babel, except instead of chaos, everyone is suddenly fluent, making this sound more and more like a utopia every minute.
Happy Bots, Happy Workers: Forget the dystopian visions of stressed-out humans toiling alongside emotionless machines. The study actually found that working with AI led to more positive emotions and less anxiety. Turns out, a chatbot with a good vibe is now a scientifically validated phenomenon.
The implications for workers navigating this AI revolution are clear: AI isn’t just a fancy calculator or a glorified search engine. It’s evolving into a powerful collaborator, a source of inspiration, and maybe even a digital confidant tackling workplace stress. This doesn't mean we can all kick back and let the robots do our thinking for us. The P&G study hints that the best results come when humans and AI work together, each bringing their unique strengths. It's like peanut butter and jelly, or Beyoncé and Jay-Z – two great tastes that taste even better together.
Altman’s AI-genda: Peeking into OpenAI’s Crystal Ball
There's a particular expression technologists get when they're seeing around corners that the rest of us can't—a mix of excitement, impatience, and mild frustration at having to explain what seems obvious to them. That's exactly the energy radiating from Sam Altman's recent Stratechery interview, where OpenAI's CEO described a future in which AI doesn't just help us work faster but fundamentally changes how human creativity and productivity function. Altman's vision might shock you if you still think of ChatGPT as a clever text generator.
We've been parsing the full transcript of Ben Thompson's conversation with Altman, and what emerges isn't just another tech executive's vague proclamations about "AI changing everything." Instead, Altman offers something far more concrete and immediate. He's not predicting the future—he's building it, right now, and we're all unwitting beta testers.
One of the most interesting revelations is that coding as we know it is already being transformed. Altman claims AI handles over 50% of coding workloads in many organizations today. Not tomorrow, not next year—today. But that's just the appetizer. The main course is "agentic coding," where AI doesn't just assist programmers but takes over entire complex tasks autonomously. Your engineering team of 20 might soon accomplish what previously required 100 people, with AI handling the heavy lifting while humans focus on direction and creativity.
For anxious computer science students wondering if they've chosen a soon-to-be-obsolete career path, Altman offers pragmatic advice: learn to work with these tools, not against them. The value isn't in writing code by hand anymore, but in knowing how to collaborate with AI systems effectively. This isn't the gentle "AI will augment, not replace" corporate message we've heard for years—it's a frank admission that the nature of technical work is being fundamentally rewired.
What really sent us down a rabbit hole was Altman's vision of "personal AI"—a digital companion that follows you across the internet, learning your preferences, habits, and thought patterns over time. He envisions a future where your OpenAI account becomes a passport, carrying your personalized AI from site to site. It's less "Sign in with Google" and more "Bring your digital twin with you everywhere." The implications for privacy, personalization, and how we interact with information are staggering.
On the product front, OpenAI isn't slowing down. They've just released GPT-4.5, with GPT-5 apparently not far behind. Altman speaks of bundling "three or four things on the order of ChatGPT" into a single subscription. This isn't just feature creep—it's a play to become the dominant consumer AI platform. When he talks about the value of a "1 billion user site," you can almost hear the wheels turning as he plots OpenAI's position as the layer sitting atop virtually every other application.
The business strategists in us perked up when Altman laid out his "triumvirate of value" in AI: building the dominant internet company, creating the most cost-effective inference infrastructure, and leading in research and model development. Most fascinating was his assertion that all but the most cutting-edge models will quickly become commodities—a tacit acknowledgment that OpenAI's long-term value isn't in the models themselves but in the consumer relationships and infrastructure they're building.
DeepSeek's emergence as a competitor clearly left an impression. Altman candidly admitted it was a "wake-up call" that pushed OpenAI to enhance its free tier. DeepSeek's visible chain-of-thought reasoning went viral partly because OpenAI had kept similar capabilities hidden from users. In response, Altman dropped what might be the interview's biggest bombshell: the free tier will eventually get GPT-5. He also hinted strongly at open-sourcing some models, a return to OpenAI's original mission after years of increasingly closed development.
Perhaps most intriguing was Altman's take on scientific breakthroughs. While he's been touting AI's potential to accelerate scientific discovery, he acknowledged we haven't yet seen the transformative research breakthroughs many have predicted. Altman believes that the models simply aren't smart enough yet, but we're "on the path." When pressed about whether transformer-based models can create new knowledge rather than recombine existing information, Altman didn't hesitate: "Yes." His confidence in AI's creative potential seems boundless, grounded in a materialist view that human creativity itself is essentially recombination with slight modifications.
As Altman advised graduating high school seniors: get really good at using AI tools. Just as coding literacy defined the last generation of tech workers, AI literacy will define the next. The future belongs to those who can harness these systems effectively, directing them toward meaningful goals while contributing the human insight, creativity, and judgment that AI still lacks.
Sovereign AI: Shaping a Nation’s Digital Destiny
Artificial intelligence is rapidly transforming industries and societies, prompting important discussions about its impact and future. Recently, on the a16z podcast, Nvidia CEO Jensen Huang and Mistral AI's Arthur Mensch discussed "Why Every Nation Needs Its Own AI Strategy," offering valuable insights into the strategic importance of AI for national development. Huang and Mensch emphasize that AI is a general-purpose technology with far-reaching implications, comparable to the impact of electricity or the printing press. It's not merely about technological advancement; it's about shaping the future economic and societal landscape, and so they argue that nations need to proactively develop their own AI strategies to ensure they can leverage its benefits and mitigate potential risks.
A key concern discussed was the potential for "digital colonialism," in which a few dominant nations or corporations control the AI landscape, potentially influencing the technological and cultural development of everyone else. This is why the conversation pivoted to encouraging nations to develop "Sovereign AI" capabilities: a nation's ability to control, develop, and deploy AI models using its own infrastructure, data, talent, and governance frameworks. This involves more than just acquiring technology; it requires a holistic approach. Investment in computing resources, data centers, and networks is necessary for AI development. Crucially, nations need the ability to gather, manage, and utilize national datasets for training AI models, as well as a skilled workforce, cultivated through education and training programs, capable of developing, deploying, and maintaining AI systems. Lastly, the conversation touched on the importance of establishing ethical guidelines, legal frameworks, and regulatory policies to ensure responsible AI development and deployment. By developing these capabilities, nations can ensure that AI evolves to align with their specific needs, values, and cultural contexts.
The discussion also highlights the interplay between open-source and proprietary AI models. Open-source AI offers several advantages, including transparency, security, and accelerated innovation through community collaboration. However, specialization is also crucial. While general-purpose AI models provide a foundation, tailoring models to specific industries, languages, or regional contexts can significantly enhance performance and relevance.
The development of strategic AI capabilities has significant implications for creative professionals. As AI becomes increasingly integrated into creative processes, it presents both opportunities and challenges. AI can provide creative professionals with new tools for automating tasks, generating content, and exploring innovative design solutions. At the same time, there is a growing need for creative professionals to ensure that AI systems reflect diverse cultural identities and values, contributing to the development of culturally sensitive AI models and ensuring that AI-generated content aligns with ethical and societal norms. That being said, while AI can automate certain aspects of creative work, it cannot replace human imagination, critical thinking, and emotional intelligence. The most successful creative professionals will be those who can effectively combine human creativity with AI capabilities.
The discussion between Huang and Mensch underscores the importance of strategic AI development for nations seeking to thrive in the digital age. By investing in infrastructure, talent, and governance frameworks, and by embracing both open source and specialization, nations can harness the transformative power of AI while ensuring it aligns with their unique needs and values.
Back to Basics
AI and the Serendipitous Spark: Cultivating Creativity Through Deviation
The relationship between artificial intelligence and human creativity is complex, and increasingly fascinating. We believe that AI, far from being a mere automation tool, can actually serve as a catalyst for innovative thinking – particularly when its imperfections lead us down unexpected paths. Consider AI a cognitive stimulant, sometimes subtle, sometimes transformative, that challenges our assumptions and expands our creative horizons. For quite some time, the prevailing focus in large language model (LLM) development has been on optimizing for quality and accuracy. While this has undoubtedly produced impressive results, it has also inadvertently led to a certain degree of homogeneity in generated outputs. Models, in effect, become too proficient, adhering to established patterns and failing to venture into uncharted creative territory. They become the AI equivalent of the highly skilled technician who executes flawlessly but lacks the vision to innovate.
However, a recent paper from researchers at Midjourney and New York University, "Modifying Large Language Model Post-Training for Diverse Creative Writing," offers a compelling alternative. The authors challenge the notion that AI creativity is solely a function of technical sophistication. Their central question: How can we foster greater diversity in LLM outputs without compromising the underlying quality of the generated content? Their answer lies in the concept of "deviation."
Deviation, in this context, signifies the degree of divergence between a generated output and other outputs produced from the same prompt. The researchers hypothesized that by explicitly rewarding models for generating unique, yet still high-quality, content, they could encourage a more exploratory and imaginative approach to creative tasks. It's analogous to the process of scientific discovery, where unexpected anomalies and deviations from established theories often pave the way for groundbreaking insights.
The researchers incorporated deviation into post-training by:
Identifying outlier content: for a given prompt, they identify the generated responses that differ most from the average response being produced.
Amplifying outlier significance: in the training set, they upweight these outliers, teaching the model that its future responses should resemble them.
Mitigating quality loss: they use safeguards to ensure that, in teaching the model that deviation has value, they do not compromise the underlying quality of its responses.
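As a rough illustration of the idea (not the paper's actual implementation), the steps above can be sketched as scoring each sampled response by its average distance from its siblings for the same prompt, then weighting training examples by quality times deviation. Here a toy Jaccard distance over word sets stands in for a real semantic distance, and the numbers are invented:

```python
def jaccard_distance(a: str, b: str) -> float:
    """Toy stand-in for a semantic distance between two responses."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

def deviation_scores(responses: list[str]) -> list[float]:
    """Deviation = mean distance from the other responses to the same prompt."""
    n = len(responses)
    return [
        sum(jaccard_distance(responses[i], responses[j])
            for j in range(n) if j != i) / (n - 1)
        for i in range(n)
    ]

def training_weights(responses, qualities, min_quality=0.5):
    """Upweight high-deviation outliers, but only above a quality floor,
    mirroring the paper's goal of diversity without sacrificing quality."""
    devs = deviation_scores(responses)
    return [q * d if q >= min_quality else 0.0
            for q, d in zip(qualities, devs)]

responses = [
    "the knight rode into the quiet village at dawn",
    "the knight rode into the village at dawn",
    "a sentient fog negotiated a truce with the lighthouse",
]
qualities = [0.9, 0.9, 0.8]  # hypothetical quality scores
weights = training_weights(responses, qualities)
print(weights.index(max(weights)))  # → 2: the unusual response is upweighted
```

The design choice to gate on a quality floor is what keeps "reward the weird" from collapsing into "reward the broken."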
Models trained using the "deviation" strategy exhibited a marked increase in output diversity while maintaining a high quality level. Intriguingly, the top-performing model achieved diversity scores approaching those of human-generated content, suggesting the AI had begun to emulate the unpredictable and imaginative qualities of human creativity. Part of what the model seems to learn in this process is productive "hallucination": generating results beyond what its training data contains, sometimes in ways that create genuine serendipity.
The results underscore the importance of embracing imperfection and encouraging exploration in the development of AI systems. This approach reveals that AI, in its current state, is more akin to a stimulus rather than a rival for creativity. It has the potential to push us to think more creatively through its errors by presenting a different lens of reality. As Bob Ross knew, there are no mistakes, just happy little accidents. But let's not forget quality control.
Tools for Thought
Gemini Gets Another Upgrade: Canvas and Audio Overviews
What it is: Canvas is a real-time collaborative workspace baked right into Gemini. It's designed to bridge the gap between brainstorming and execution, letting you transition seamlessly from initial concept to working prototype, whether you're sketching out a new marketing campaign or building the next killer web app. It offers basic code-writing features for HTML, CSS, and JavaScript and lets you visualize the outcome. Audio Overview converts documents, research, and notes into podcast-style audio, intended to help you digest complicated information more intuitively and while multitasking. (This might sound familiar: Google NotebookLM offers the same experience.) The hosts are simulated and engage in dynamic discussions of the document, highlighting key topics and points. Both features are available to premium subscribers and are also coming to mobile.
How we use it: We're still just scratching the surface of what's possible with these new tools, but here's a glimpse into our current workflow. Canvas has become the whiteboard we were looking for, letting us generate outlines, draft articles, and revise segments with real-time suggestions. On the coding side, it has let us visualize HTML and generate React for our websites, complete with forms, buttons, and interactive elements. Audio Overview offers quick summarization and ingestion of data and insights, so you can learn while cooking or cleaning and engage with the concepts as if they were a radio broadcast instead of a boring document. Audio Overview also has promising implications for accessibility, multitasking, and content summarization.
OpenAI Just Turned Up the Volume: Next-Gen Audio Models Arrive
What it is: OpenAI just dropped a suite of next-generation audio models in their API, and they're not messing around. Forget robotic voices and garbled transcriptions – we're talking about AI that can understand nuance, accents, and even varying speech speeds with unprecedented accuracy. The new gpt-4o-transcribe models are setting a new benchmark for speech-to-text, while the gpt-4o-mini-tts model lets you control how the AI speaks, opening doors to custom voice experiences. Imagine customer service agents with empathy, narrators with dramatic flair, or even AI companions with a unique vocal personality.
How we use it: We’ve just started exploring the possibilities, so we’ve been using the tools to converse, ask questions and get a feel for the personalities. Soon, we expect to use the new functionality to create more engaging and accessible content, build voice-powered applications that feel truly human, and streamline workflows with accurate and reliable transcriptions.
Intriguing Stories
AI Budgets Defy Gravity, Even When Tech Stocks Don’t
Remember when everyone was saying the AI bubble was about to burst? Turns out, Wall Street's "perma-bulls" aren't listening. Despite the tech stock rollercoaster we've been riding, one analyst, Daniel Ives from Wedbush Securities, is claiming AI spending is still on the rise. Ives estimates that AI initiatives now gobble up a whopping 12% to 15% of many IT budgets. If AI budgets are truly growing, it suggests the AI revolution is more than just hype. And according to Ives, IBM stands to benefit enormously. While he still believes that Palantir and Salesforce are strong plays, IBM has been added to the "Wedbush Best Ideas List". Apparently, IBM's cloud penetration is exceeding expectations, offering a "massive opportunity to monetize its installed base." J.P. Morgan isn't quite as enthusiastic, citing potential disruptions from government spending cuts. Even amidst market volatility, the AI trend appears to be resilient. Companies are betting big on AI, and IBM may be poised to reap the rewards. Whether this translates into long-term gains remains to be seen, but it's a clear signal that AI is far from a passing fad.
The Token Empire: NVIDIA’s Plan for AI
Forget assembly lines and smokestacks. According to Nvidia CEO Jensen Huang, the future belongs to the "AI factories." As Huang stated during his keynote at the GTC conference, "They're AI factories because they have one job and one job only — generating these incredible tokens that we then reconstitute into music, into words, into videos, into research, into chemicals or proteins." It's a bold vision where every company transforms into a powerhouse of AI, constantly feeding the machine with data to create something entirely new.
So, what does it mean to be an "AI factory"? The key is understanding the role of "tokens." Think of them as the basic unit of language for AI, the raw material for innovation: AI models break words and data down into these numerical representations in order to understand and process information. Companies won't be in the business of only products and services, but also of AI token generation. Huang envisions companies, regardless of their core business, needing to focus on how they generate, capture, and leverage data to fuel their AI models. It's not enough to just build products; companies must also build the "mathematics," the AI systems, that drive them. Look at Tesla, which could be seen as a token-generating machine, collecting vast amounts of data from its vehicles to improve its self-driving capabilities. Or consider Vercel, whose v0 tool converts user requirements written in English into fully functional websites and applications, turning those requirements directly into AI fuel. The whole company, in other words, needs to harvest its institutional knowledge and data. The shift isn't just about technology; it's a fundamental reimagining of how businesses operate. Companies must now treat data collection, processing, and analysis as core competencies, learning to identify and extract valuable insights from every interaction, every process, and every decision. In short, the future belongs to those who can harness the power of data to create intelligent systems that transform industries and redefine business possibilities.
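To make "tokens" concrete: a tokenizer maps text to the integer IDs a model actually consumes. Production tokenizers use subword schemes like byte-pair encoding over vocabularies of tens of thousands of entries, but a toy word-level version (our own illustration, not any vendor's implementation) shows the text-in, integers-out idea:

```python
class ToyTokenizer:
    """Minimal word-level tokenizer: each new word gets the next free ID.
    Real LLM tokenizers use subword schemes (e.g. byte-pair encoding),
    but the core mapping, text to integer tokens and back, is the same."""
    def __init__(self):
        self.vocab: dict[str, int] = {}

    def encode(self, text: str) -> list[int]:
        ids = []
        for word in text.lower().split():
            if word not in self.vocab:
                self.vocab[word] = len(self.vocab)  # grow vocab on the fly
            ids.append(self.vocab[word])
        return ids

    def decode(self, ids: list[int]) -> str:
        rev = {i: w for w, i in self.vocab.items()}
        return " ".join(rev[i] for i in ids)

tok = ToyTokenizer()
ids = tok.encode("build the product build the data")
print(ids)  # → [0, 1, 2, 0, 1, 3]: repeated words reuse their IDs
```

Everything a model "generates" is a stream of such IDs, which is why Huang can describe music, proteins, and websites as the same product: reconstituted tokens.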
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
banner images created with Midjourney.