Verses Over Variables
Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
The Blueprint for Better AI is a Box of Crayons
If you're going to study how kids play, you might as well call in the undisputed world champions. When the LEGO Group teamed up with the minds at The Alan Turing Institute to explore how children in the UK are actually using generative AI, we knew the findings would be essential reading. And the one finding that really stopped us in our tracks was this: when given a choice for creative tasks, most kids chose good old-fashioned, messy crayons and art supplies over generative AI. In a world racing toward artificial everything, the kids are telling us something profoundly important about creativity.
The report isn’t some anti-technology manifesto from the playground. The children in the study were more than capable of engaging with the tech. The issue is that they felt a greater sense of pride and a stronger emotional connection to the art they physically created themselves. The process of using AI was often a quiet, solitary one, whereas making things with their hands was social and collaborative. This suggests that for creativity to feel meaningful, the experience itself matters just as much as the output. When a tool makes you feel less confident in your own abilities, as some children reported feeling after seeing AI-generated images, it's not a very good tool. This isn't a failure of the technology itself; it's a failure of imagination on the part of its creators.
This is where we get excited, because we see this report as a blueprint for innovation. It's the moment in the movie where Tony Stark looks at a pile of broken parts and sees the next Iron Man suit. The study reveals exactly where the current generation of tools falls short, which provides us with a map for building something better. For instance, the researchers observed that children of color grew frustrated while trying to coax an AI into generating images that actually resembled them. When users don't feel represented in a tool, they eventually walk away. For every creative professional out there, that’s not a data problem, that's a design challenge we can solve.
The opportunity here is massive, and the report highlights some stark inequities we can tackle right now. Consider this: the UK-based study found that 52% of private school children report using generative AI, compared to only 18% of children in state schools. That’s a massive digital divide before kids even hit their teens. These tools were designed for adults and then simply handed down to children, who have entirely different needs. There’s a huge opening to build something new from the ground up that addresses this imbalance. The research highlights a clear desire for tools that can assist children with additional learning needs, a use case that both teachers and children were genuinely excited about.
So, this isn't a story about choosing between crayons and code. It’s about building technology that captures the joy, expressiveness, and boundless possibility of a brand-new box of art supplies. It’s a call to all of us in this space to stop thinking about what AI can replace and start imagining what it can uniquely create.
AI’s New Consumer Playbook is Speed
There are moments in technological history that feel like a genuine paradigm shift, where the ground beneath our feet seems to reconfigure itself almost overnight. We’re living through one of those moments right now, and the catalyst, as you’ve probably guessed, is Artificial Intelligence. A recent a16z Podcast episode from the firm's Consumer Tech Team took a deep dive into this AI-driven transformation of consumer technology. Their conversation culminated in a thesis that captures the current moment: in this new landscape, "velocity is the moat." The speed at which companies innovate, adapt, and deploy isn't just a competitive edge anymore; it’s becoming the primary form of defensibility, a stark contrast to the more deliberate, multi-year product cycles we've seen in the past.
The a16z team highlighted how new AI tools, especially in voice synthesis and AI cloning, are experiencing an almost unprecedented adoption curve. We’re seeing instances where consumer virality is propelling these tools directly into enterprise applications, effectively bypassing the traditional, often lengthy, hype cycle. What we're witnessing goes deeper than simply bolting on new features to existing software. The driving force is the relentless, almost daily, advancement in the underlying AI models themselves, ushering in a new era of capability. This constant improvement creates a dynamic environment where keeping pace is paramount, pushing creators and businesses alike to innovate or risk becoming a footnote in tech history.
Another compelling (and to be honest, terrifying) discussion revolved around what the a16z team termed the "human insight layer" now being unlocked by AI. We're venturing beyond AI systems that simply process data or follow commands, into a realm where AI can understand, interpret, and even augment the nuances of human connection. The podcast shared anecdotes such as users leveraging platforms to practice social interactions, gain self-understanding, and even, as one user credited, learn how to talk to girls and eventually find a real-life girlfriend. It suggests AI’s peak value, as Justine Moore put it, might truly be in "enabling better human connection." Imagine an AI that doesn't just help you with the technicalities of your craft, whether it's writing lyrics, composing a melody, or perfecting a voiceover for a project. Imagine one that also grasps the emotional undercurrents of your work, understands your creative intent, and helps you forge a more resonant connection with your audience, or even better understand your own creative process.
The goal here isn't just to find a more efficient way to get things done; it's to enrich the human experience itself. The a16z partners explored the tantalizing possibilities of AI helping us find compatible collaborators, for everything from a startup to a band, while also providing a more nuanced and empathetic way to communicate our ideas.
Back to Basics
AI Does Not Think the Way We Think
There’s a fundamental error in how we approach artificial intelligence. We have spent years debating if machines can truly think, and in doing so, have set ourselves up for a grand misunderstanding. When a system can write a sonnet or diagnose a software bug, our instinct is to imagine a human-like mind at work, to attribute to it familiar qualities. It's time to replace that assumption with a more powerful idea. The most vital and interesting discovery isn't how AI is like us, but the immense potential found in how it is profoundly, and usefully, different.
The raw evidence for this idea comes from an unexpected source: Apple. Given that the company is widely seen as playing catch-up in the AI development race, its recent research paper, “The Illusion of Thinking,” was met with a healthy dose of industry side-eye. The immediate reaction across social media was predictably cynical, framing the paper as a convenient bit of corporate FUD. "When you can't win the race," the critics jeered, "you try to convince everyone the race doesn't matter." Yet, beneath the initial mockery lies a crucial scientific contribution. Apple's researchers sidestepped the usual benchmarks and instead gave top AI models a simple, controllable logic puzzle called the Tower of Hanoi. Their findings were specific and startling: when the puzzle’s complexity crossed a certain threshold, the models’ performance didn't just degrade, it completely collapsed. Even more unnervingly, the AI’s effort went down, not up. Faced with a real challenge, it simply gave up.
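If you haven't met the puzzle since a long-ago computer science class, here's a minimal Python sketch of the classic recursive solution (our own illustration, not the paper's actual test harness). The thing to notice is that the shortest solution takes 2^n − 1 moves, so every disk you add doubles the work, which is exactly the kind of smooth, controllable complexity ramp that let the researchers watch performance fall off a cliff.

```python
# Classic recursive Tower of Hanoi: move n disks from source to target.
# The minimal solution length is 2^n - 1 moves, doubling with each disk.

def hanoi(n, source, target, spare, moves):
    """Append the sequence of moves that shifts n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack everything on top

for n in (3, 7, 10):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(f"{n} disks -> {len(moves)} moves")    # 7, 127, 1023
```

Three disks take 7 moves; ten disks take 1,023. Same rules, wildly different demands on sustained, step-by-step reasoning.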
But data is just data until someone connects it to a bigger story. That’s where a veteran of the software world, Steven Sinofsky, comes in. In his analysis of the paper, he takes Apple's clinical findings and diagnoses a much deeper, almost philosophical problem within the tech industry itself. Sinofsky’s essential point is that this paper is important because it’s a powerful antidote to our worst habit: anthropomorphism. He argues that we’ve been here before (anyone remember Clippy?) and that projecting human traits onto our software has always led us astray. For Sinofsky, the AI’s failure is a gift, a bucket of cold water that can finally cure us of the delusion that we are building an artificial human.
This one-two punch of evidence and analysis gives us a much clearer picture. It’s like watching Commander Data from Star Trek. The Apple paper shows us his brain flawlessly executing calculations, and Sinofsky’s commentary reminds us why Data still tilts his head in confusion at a human turn of phrase. The brilliance is real, but it is fundamentally alien. The models we use every day are not junior colleagues; they are powerful logical engines operating on principles that are often counterintuitive to our own evolved intelligence.

And here’s why this distinction is a cause for incredible optimism. For too long, we’ve been trying to fit a square peg into a round hole. This research, amplified by clear-eyed analysis, frees us from that. It gives us permission to stop asking the AI to be more like us and to start appreciating what it is on its own terms. When we understand that we’re collaborating with a truly alien intelligence, the possibilities are thrilling. It won't get bored. It doesn't have a fragile ego. It isn't weighed down by the cognitive biases that shape our own thinking. By respecting its non-human nature, we can learn to interact with it more skillfully.
Apple’s Patient and Awkward Future
The opening moments of an Apple WWDC keynote are always telling. So, when Apple kicked off this year's event with a segment on Apple TV (and not new tech), anyone who has followed the company for a long time knew to adjust their expectations. It was the first clear signal that this year wasn’t about explosive, immediate change, but about something more subtle and disappointing. The most visible part of this was a new design language called Liquid Glass. It’s a glassy, translucent aesthetic meant to unify every Apple OS with a fluid, modern feel. The new look has sparked considerable debate since its reveal. Many developers and designers have raised important questions about its usability on current devices, particularly concerning contrast and accessibility. On a phone screen, the layered, see-through elements can feel busy, leading to a compelling theory about its true purpose. The most logical explanation is that Liquid Glass wasn't primarily designed for the iPhone in your hand, but for the Apple Glasses of the future. It’s a user interface designed for augmented reality, where digital information is overlaid on the real world. In that context, a fluid, see-through design makes a great deal of sense. For now, it seems we are getting an early preview of a new design paradigm, even if it feels unfamiliar on our current hardware.

We saw a similar long-game approach with artificial intelligence. While the industry is consumed by a frantic race for AI dominance, Apple appears to be choosing a different path. The updates to Apple Intelligence were thoughtful, practical, and deeply integrated into the operating system, even if many of them already existed in competitors' apps. Features like Live Translation and Circle to Search-style visual lookup are genuinely useful additions that arrive late, but better late than never.
Tools for Thought
OpenAI Unlocks Some Workflows
What it is: OpenAI has rolled out a suite of powerful new features designed to supercharge productivity: Connectors and Record (for Team users). Connectors act as bridges, enabling ChatGPT to securely access and interact with your data across various cloud-based services and applications, including Google Drive, Microsoft OneDrive, HubSpot, GitHub, and even communication platforms such as Slack and Gmail. This means ChatGPT can now analyze your documents, summarize your emails, or even pull data directly from your CRM, all within its familiar interface. Complementing this is the Record feature, a built-in tool for capturing and transcribing audio from meetings. It provides an editable transcript and a summary, making your discussions searchable and actionable, effectively turning spoken conversations into valuable data points within ChatGPT.
How we use it: We've been using Connectors to transform our workflows. The Google Drive connector, for instance, has been a game-changer. Instead of manually hunting through years of project briefs and internal notes, we now ask ChatGPT to scan our archives and unearth key processes and best practices, helping us draft comprehensive Standard Operating Procedures (SOPs) and workflow documents and building our team's official playbook directly from our own scattered data. The power of these connectors extends to our communications, too. We recently used it to search our email history for a conversation with a former mentee. By instantly surfacing our past discussions, ChatGPT provided the perfect context to draft a thoughtful message with fresh, relevant advice, making the outreach far more personal and impactful.
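Record itself lives inside the ChatGPT apps, so there's no public "Record API" to call. But if you wanted to approximate the same transcribe-then-summarize loop in your own tooling, a minimal sketch using OpenAI's standard developer API might look like this (the file name, model choices, and prompt wording are our own placeholder assumptions, not anything OpenAI ships):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Step 1: transcribe a recorded meeting (hypothetical local audio file).
with open("team_meeting.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: condense the transcript into decisions and action items.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Summarize this meeting transcript: key decisions, "
                       "open questions, and action items with owners.",
        },
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)
```

The point isn't the twenty lines of code; it's that "spoken conversation in, searchable and actionable text out" has quietly become a commodity workflow.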
Intriguing Stories
OpenAI Court Order Spills All the AI Tea
Many of us have adopted platforms like ChatGPT, integrating them into our creative workflows, business strategies, and even our daily problem-solving. We’ve brainstormed, drafted, and refined, often with a baseline assumption about our control over the data we share. That little "delete" button, those opt-out settings for training data, felt like a digital handshake, an understanding of privacy. Well, a court has directed OpenAI to preserve all ChatGPT user logs. And when we say all, we mean a rather comprehensive sweep. This includes everyday exchanges, in-depth project discussions, conversations that users believed they had wiped from existence by deleting them, and even the sensitive data that flows through OpenAI's API for its business clients. The upshot is that deleting a chat might not be the clean break we once imagined, creating what OpenAI itself has described as a serious "privacy nightmare."
This whole situation unfurled from a copyright lawsuit, with news organizations raising concerns about how AI tools might be used to access their content. The argument was made that users might delete conversations to cover their tracks, which led the court to issue this sweeping preservation order. OpenAI is actively challenging the order. They argue it undermines the privacy commitments made to users and could potentially clash with significant international data protection regulations (such as the GDPR). People use these AI tools for an incredibly diverse range of tasks, some mundane, others deeply personal or commercially sensitive. The idea that these interactions are now being stored, irrespective of users' attempts to delete them or their chosen privacy settings, is, to put it mildly, unsettling for many.

This issue, however, stretches beyond the particulars of a single lawsuit. It taps into a much larger conversation about user agency and data control in our increasingly AI-driven world. For some time now, much of the public discussion surrounding AI data has centered on how it's used to train the models. In that context, opting out often felt like a way to exercise some control. This court order, though, pivots the focus sharply to data retention for legal and discovery purposes, which is a fundamentally different scenario. OpenAI has stated this order essentially forces them to set aside their established privacy policies and commitments to users regarding data deletion. This case serves as a potent reminder that the rulebook for AI governance is still being written, often in real-time, through legal challenges and policy debates. As the OpenAI situation demonstrates, the landscape can shift rapidly, and the definition of "privacy" can become surprisingly fluid depending on the context.
Jony Ive’s Design Magic Coming to an E-Bike Near You
Alright design enthusiasts and tech followers, here’s a collaboration that’s sure to pique your interest. Jony Ive, the visionary designer whose work at Apple shaped so many of the iconic products that have become integral to our daily lives, has been contributing his renowned expertise to Rivian, the innovative electric vehicle company, specifically for their first foray into the world of electric bikes. According to reports from TechCrunch, a team from LoveFrom dedicated about 18 months to working alongside Rivian's own designers and engineers in a focused, confidential project. LoveFrom concluded their work on this micromobility initiative in the fall of 2024, and the project has since evolved into a new, independent company called "Also," launching with an impressive $105 million in funding. Rivian’s founder and CEO, RJ Scaringe, who also sits on Also’s board, offered a deliberately understated description of the e-bike, saying it includes "a seat, and there’s two wheels, there’s a screen, and there’s a few computers and a battery." While he also confirmed it will be "bike-like," this leaves a vast canvas for innovation, especially when considering Ive’s potential influence. LoveFrom has previously consulted with Rivian on elements such as their infotainment system and retail design, indicating an established relationship. This e-bike project, however, appears to have been a more deeply integrated effort.
Training a Million Students for the Future
The UK government is taking the spread of AI quite seriously, announcing a pretty hefty initiative to get the next generation ready for an AI-powered future. They're planning to train one million students in AI skills as part of a broader £187 million "TechFirst" scheme. The drive comes as the government's own research suggests that by 2035, AI will be a significant part of about 10 million jobs in the country. Prime Minister Keir Starmer is championing this push, emphasizing the goal for Britain to be an "AI maker, not an AI taker." The plan also includes an extra £1 billion to significantly boost the UK's computing power, plus a partnership with tech giants like Google, Microsoft, and Nvidia to help train 7.5 million workers in essential AI skills by 2030.

To be clear, the UK isn't the only one prepping for this AI wave. We're seeing similar ambitious moves all across the globe. For instance, the European Union has its own Digital Education Action Plan pushing AI and data literacy. Nations like China and India are also investing heavily in AI education and talent development, recognizing the transformative potential. Places like the UAE are making significant strides with their national AI strategies, creating a surge in demand for AI-savvy professionals. It’s a worldwide effort to ensure populations aren't just consumers of AI, but active participants and creators in this new era.

Interestingly, alongside these ambitious plans in the UK, there's an acknowledgment of the public's concerns. Starmer urged people to "push past" worries about AI taking jobs, suggesting that "AI and tech makes us more human." Meanwhile, Technology Secretary Peter Kyle offered a dose of realism, admitting that AI "does lie" and isn't flawless, stressing that understanding how these powerful tools work is key to using them wisely.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you, or you haven’t signed up yet, please sign up so you never miss an issue.
we’ve also started publishing more frequently on LinkedIn, and you can follow us here
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
banner images created with Midjourney.