Verses Over Variables
Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
When Your Toaster Gets An Attitude
The chatbot revolution promised digital assistants that would understand our needs, solve our problems, and maybe even crack a decent joke. Instead, we got virtual customer service reps with the emotional intelligence of a turnip and the helpful capabilities of a Magic 8-Ball. As AI continues its full-court press into every corner of our lives, particularly customer service, the industry is discovering that simply teaching a bot to say "I understand your frustration" isn't the golden ticket to digital nirvana. The breakthrough realization is that fixing chatbots isn't a tech problem but a psychology problem.
The recent saga of Klarna (the buy-now-pay-later folks whose AI ambitions recently hit a speed bump) illustrates this point. Last year, they boldly announced their AI assistant could do the work of 700 customer service agents, presumably while doing the digital equivalent of a victory lap. Fast forward to May 2025, and CEO Sebastian Siemiatkowski is eating a slice of humble pie, announcing that Klarna is now hiring humans again to ensure customers always have the option to speak with an actual carbon-based life form. "AI gives us speed. Talent gives us empathy," a Klarna spokesperson explained, inadvertently penning the epitaph for thousands of overhyped chatbot initiatives. In their single-minded pursuit of cutting costs, they'd prioritized efficiency over quality, leading to what the CEO delicately described as "lower quality" support experiences. This AI-piphany isn't just about Klarna's failed experiment. According to recent Harvard Business Review research, even when chatbots are basically indistinguishable from humans in blind tests, people often still prefer talking to actual humans. Shocking, we know. But when a chatbot's inherent advantages are clearly communicated, user satisfaction can actually rocket past human-agent levels. It's like dating: be upfront about what you are, and everyone is far less likely to be disappointed.
The psychology behind effective chatbots involves some simple tricks that are suspiciously effective. Labeling your bot as "constantly learning" makes users dramatically more forgiving of its little digital brain farts. Another move that works wonders is backing up claims with hard stats ("Handled 4,500 queries today with 94% accuracy"). As humans, we're absolute suckers for numbers, even when we have no idea if they're legitimate. What's intriguing is how customer mood completely changes the effectiveness equation. When users are angry or rushing (think: "my flight is canceled and I need to rebook NOW before I lose my mind"), they generally want robot-like efficiency, not a digital shoulder to cry on. Overly empathetic chatbots in these situations are like the friend who responds to your crisis with "tell me how that makes you feel"—actively making things worse. Conversely, when delivering good news, a warm, human-like approach amplifies positive buzz like a digital high-five.
The psychological dimension also explains why timing matters so critically in human-bot handoffs. Nothing breeds customer homicidal tendencies faster than explaining your complex problem to a chatbot for ten minutes, only to get transferred to a human who has zero context on your situation and cheerfully asks, "How may I help you today?" This breakdown in the handoff process is reportedly one of the key factors that led Klarna to rethink its strategy. The Klarna reset is emblematic of a broader industry trend: the initial "replace all humans!" enthusiasm around AI is giving way to a more nuanced, balanced approach. Even as AI capabilities improve exponentially, the most successful implementations will be those that create thoughtful human-AI partnerships rather than wholesale replacements. Because at the end of the day, even in our increasingly digital world, the human touch still matters.
Think AI is Overhyped? Schmidt Says We Haven’t Seen Anything Yet
It's a notable moment when someone like Eric Schmidt, the former Google CEO, suggests that the considerable attention artificial intelligence already commands still underestimates its true significance. In a recent TED Talk, Schmidt offered a compelling, and at times quite sobering, analysis of the ongoing AI revolution, framing it not merely as another technological advancement but as a societal transformation potentially on par with the most pivotal shifts in human history.
The core of Schmidt's argument is that the sheer scale of AI's impending impact is still not fully appreciated. He posits that we are currently witnessing the emergence of a distinct non-human intelligence. This development is set to fundamentally reshape societal norms and human endeavors, all within the span of our own lifetimes. This isn't just about our digital assistants becoming more conversational or image generation tools improving their accuracy; Schmidt directs our attention to AI's rapidly advancing capabilities in complex areas like strategic planning and autonomous process management. He discussed projections indicating a potential 30% annual uplift in productivity, a figure that current economic frameworks are not equipped to model. Furthermore, he alluded to a viewpoint among some leading researchers that artificial general intelligence (AGI), meaning AI that matches human cognitive abilities across a wide range of intellectual tasks, could be a reality in as little as three to five years. However, this immense transformative potential is accompanied by equally significant challenges. The extraordinary energy consumption projected for advanced AI systems is among the most immediate and daunting. Schmidt pointed to calculations suggesting that the United States alone might require an additional 90 gigawatts of power to support AI's expansion (an energy demand comparable to the output of approximately 90 new nuclear power plants). This raises serious and complex questions regarding infrastructure development and environmental sustainability, particularly when considering that a single complex AI query can demand considerably more energy than a standard internet search. Indeed, some forecasts indicate that global data centers, driven by AI's processing needs, could consume as much electricity as entire nations, such as Japan, in the very near future.
Beyond the critical issue of energy, Schmidt also addressed the inherent dual-use character of artificial intelligence. The same innovations that promise unprecedented breakthroughs in fields like healthcare, from dramatically accelerated drug discovery to assistance with highly complex medical interventions, can in parallel be adapted for autonomous weaponry or sophisticated cyber warfare. This reality underscores the urgent need for robust frameworks governing responsible development and clear ethical guardrails, a need further amplified by the intense global competition, primarily between the United States and China, for leadership in the AI domain. Schmidt also acknowledged prevailing concerns that AI systems could evolve unpredictably, potentially developing emergent capabilities that extend beyond our current capacity for direct control.
Despite these formidable obstacles, Schmidt’s overarching message is one of profound opportunity and a clear call for proactive and informed engagement. He strongly encourages professionals across every sector to actively understand and thoughtfully integrate AI into their respective workflows. This is not merely a strategy for maintaining relevance, but a pathway to harness AI's immense power for widespread innovation and societal progress. He articulates a vision of a future marked by "radical abundance," a state where AI significantly augments human capabilities and plays a pivotal role in confronting and resolving some of the world’s most pressing and complex challenges. A key element of this envisioned future involves AI potentially reaching a stage where it can contribute to novel scientific discovery by discerning and interpreting patterns across widely divergent fields of knowledge, a sophisticated cognitive feat that has traditionally been unique to human intellect.
Schmidt’s discourse is an important reminder of the profound depth and extensive breadth of the changes that AI is poised to introduce. His perspective highlights the imperative for continuous learning, strategic adaptation, and unwavering critical thought. The path forward will undoubtedly involve navigating intricate ethical dilemmas and substantial practical considerations.
Back to Basics
Why Your AI Assistant Gets Confused Mid-Conversation
Ever felt like your AI chat started a brilliant idea, only to get hopelessly sidetracked a few messages later? You're not alone, and now there's research to explain why our digital assistants sometimes seem to lose the plot mid-conversation. A new paper titled "LLMs Get Lost in Multi-Turn Conversation" from Microsoft and Salesforce research teams finally puts hard data behind this frustrating phenomenon that's had us muttering at our screens.
The researchers conducted a clever experiment that mirrors how we actually use these systems in the wild. Rather than feeding LLMs perfectly crafted, complete instructions (the typical benchmark scenario), they broke instructions into smaller "shards" of information and revealed them one by one across multiple conversation turns. Think of it like explaining a project to a coworker: "I need a newsletter draft... about our new product launch... targeting existing customers... with an emphasis on the sustainability angle." The researchers found a dramatic 39% performance drop across various top-tier models compared to when they received all the information upfront. It's like watching someone confidently walk into a party and then completely forget why they entered the room. What's particularly interesting is that it's not primarily the models' "aptitude" (their raw ability to solve problems, which largely remains intact) taking the hit. Instead, the real gremlin in the machine is that they become wildly unreliable, meaning their consistency plummets. The same model might nail a task perfectly one time and then utterly botch it the next when information arrives piecemeal. The culprit, it seems, is that the models make early assumptions based on incomplete information. They then stubbornly cling to these initial interpretations, a bit like that friend who forms an opinion in the first five minutes and simply won't be swayed by any new facts, no matter how compelling.
This tendency to stick with early, flawed ideas leads to another phenomenon many of us have noticed but couldn't quite articulate: "bloated" answers. As an LLM tries to reconcile new information with its flawed earlier understanding, it produces increasingly verbose responses that attempt to cover all bases. The digital equivalent of nervously rambling when you're not quite sure of the answer. This research offers valuable practical insight for those of us integrating these tools into creative workflows. "Prompt engineering" now extends beyond crafting the perfect opening request to understanding how to guide these systems through multi-turn conversations without leading them astray. It's less like writing a spec and more like mentoring someone with brilliant potential but questionable listening skills. Frustratingly, the researchers found that common fixes don't solve this fundamental unreliability. These models simply struggle with the journey of unfolding information. The paper's authors suggest a practical takeaway: if your LLM seems to be floundering mid-conversation, you're probably better off starting fresh with a more consolidated prompt than trying to course-correct its existing confusion (a quick sketch of what that looks like in practice follows below).
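To make that takeaway concrete, here is a minimal sketch of the "consolidate and restart" approach, written against the OpenAI Python SDK; the model name is just a placeholder, and the same idea applies to any chat-style API.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The failure mode the paper describes: requirements drip-fed across turns,
# inviting the model to lock in on early, incomplete assumptions.
sharded_turns = [
    "I need a newsletter draft.",
    "It's about our new product launch.",
    "It should target existing customers.",
    "Please emphasize the sustainability angle.",
]

# The fix: when a conversation goes sideways, restate every requirement in a
# single consolidated prompt and start a fresh conversation.
consolidated_prompt = " ".join(sharded_turns)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": consolidated_prompt}],
)
print(response.choices[0].message.content)

The specific API matters less than the habit: one complete request in a clean conversation tends to beat four partial ones layered onto a chat that has already wandered off.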
This doesn't mean we need to become prompt engineering wizards overnight. But it suggests being more deliberate in structuring complex requests, particularly when they involve multiple requirements or parameters. Our natural, evolving conversational style (while intuitive to us) seems to be a hurdle for current AI. Understanding this helps us adapt our approach, ensuring we get the best from these powerful tools as they continue to learn the art of true back-and-forth.
Tools for Thought
Notion’s New AI Meeting Notetaker
What it is: Notion, the highly versatile all-in-one workspace tool that allows you to organize everything from complex projects and team wikis to personal notes and databases, has introduced an AI-powered Meeting Notetaker. Think of this new feature as a sophisticated assistant built directly into that flexible digital environment (and a direct competitor to Granola). It diligently records audio from your device (no need to invite extra bots to your calls), transcribes the conversation as it happens, and then intelligently summarizes the key takeaways. The goal is to streamline meeting documentation, allowing everyone to focus on the discussion, confident that a clear record is being created within their central hub for work.
How we use it: We are huge fans of Notion for organization and productivity, and we've been testing out Granola (and loving it) for our meetings, so we feel a little guilty giving Notion's version a spin. However, so far, the functionality is very similar in that while the AI handles the transcription, we can still add our own manual insights directly onto the Notion page, creating a comprehensive record. After the meeting, the AI-generated summaries and identified action items are already neatly filed in our workspace, ready to be linked to relevant projects or tasks. It's brilliant for tracking who said what (as long as they say their name; the AI isn't quite psychic... yet), ensuring action items actually get actioned, and generally making sure those precious meeting minutes don't end up lost like a forgotten sock in the digital laundry. It's less "minutes" and more "moments of clarity, captured."
ElevenLabs Infinite Soundboard Unleashes Limitless Audio Creation
What it is: ElevenLabs has released an Infinite Soundboard: a soundboard unbound by the constraints of pre-recorded libraries, a sonic playground where any sound you can conjure in your mind can be instantly brought to life. Leveraging their powerful text-to-sound-effects AI model, ElevenLabs has created a dynamic, on-demand soundscape generator that lets you type in a description and instantly create multiple variations of that sound. It's like having a Foley artist on call, ready to manifest any auditory whim with a few keystrokes.
How we use it: The Infinite Soundboard presents a user-friendly interface where you can either unleash the AI by typing in your desired sound or upload your own existing audio treasures. If you opt for the AI route, each prompt conjures four distinct interpretations of your request, allowing you to cherry-pick the perfect sonic texture for your needs. These generated sounds, alongside your uploads and a selection of handy pre-made soundboard presets, populate the customizable pads of your soundboard. The magic truly happens when you start triggering these sounds in real time. Imagine a streamer punctuating their witty banter with a perfectly timed "rimshot" generated on the fly, or a game master layering atmospheric drones and creature soundscapes, all orchestrated live. You can even loop sounds to create continuous ambiences or build rhythmic patterns, essentially turning the soundboard into a surprisingly capable impromptu drum machine or ambient noise generator.
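For anyone who would rather script sounds than click pads, here is a rough sketch of calling the underlying text-to-sound-effects model directly, assuming the elevenlabs Python SDK; the exact method and parameter names reflect our reading of the current SDK and may differ in your version.

from elevenlabs.client import ElevenLabs

client = ElevenLabs()  # assumes ELEVENLABS_API_KEY is set in your environment

prompt = "heavy wooden door creaking open in a stone dungeon"

# Mimic the soundboard's trick of generating several takes per prompt,
# then keep whichever variation fits best.
for i in range(4):
    audio = client.text_to_sound_effects.convert(
        text=prompt,
        duration_seconds=3.0,   # optional cap on clip length
        prompt_influence=0.4,   # lower values allow a looser interpretation
    )
    with open(f"door_creak_{i}.mp3", "wb") as f:
        for chunk in audio:     # the SDK streams the audio back in chunks
            f.write(chunk)

The web soundboard wraps all of this in friendly pads and presets; the API route is handy when you want batches of variations saved straight to disk.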
Intriguing Stories
Gemini Hits the Road
AI is set to become your favorite (or at least, most advanced) backseat driver, as Google is bringing Gemini to Android Auto. This isn't just your old voice assistant with a new label. Gemini is a genuine upgrade, promising a shift from stilted, robotic commands to more natural, flowing conversation. Imagine asking complex questions like, "Find a quiet coffee shop with Wi-Fi between here and the airport that's open now," and actually getting a useful answer. This move towards intuitive, hands-free interaction means you can keep your eyes on the road and your mind on your brilliant ideas. Gemini also streamlines messaging by summarizing lengthy texts, drafting replies, and even translating into over 40 languages. Plus, it intelligently connects with your Gmail and Calendar to fetch details like meeting addresses, essentially acting like an ultra-efficient PA on wheels. For those longer drives or when you need a thinking partner, "Gemini Live" allows for more continuous, in-depth conversations, perfect for brainstorming or prepping for that big presentation, all hands-free. We can expect this AI chauffeur to start arriving on Android Auto in the “coming months.” Google I/O kicks off tomorrow, May 20th, and if the pre-buzz is anything to go by, we’re about to be deluged with AI news, with Gemini undoubtedly taking center stage. We're on the lookout for even more details on how these AI advancements will enhance not just our drives, but our broader digital lives, from new Gemini features to deeper OS integrations.
Generational Divide in AI Usage
In a recent wide-ranging interview at Sequoia Capital's AI Ascent event, OpenAI CEO Sam Altman offered insights into how different generations are adopting ChatGPT in remarkably distinct ways. According to Altman, clear patterns have emerged:
Older Users: "Older people use ChatGPT as a Google replacement," Altman noted, explaining that this demographic primarily employs the AI as an information retrieval tool: a more conversational, and sometimes more efficient, alternative to traditional search.
Middle Generation (20s-30s): This group has taken their usage a step further, with Altman observing they're using ChatGPT "like a life advisor." These users consult the AI not just for facts but for guidance on decisions, leveraging its ability to analyze options and provide personalized recommendations.
College Students and Gen Z: The most striking pattern emerges with younger users, who Altman says "really do use it like an operating system." This generation has developed sophisticated workflows, connecting ChatGPT to their files and documents, memorizing complex prompts, and integrating the AI deeply into their decision-making processes. "They don't really make life decisions without asking ChatGPT what they should do," Altman remarked, adding that for many young users, the AI "has the full context on every person in their life and what they've talked about."
Altman's observations about how younger generations use ChatGPT may offer a preview of how most people will interact with AI in the future. The progression from seeing AI as a search replacement to treating it as an operating system mirrors other technology adoption curves we've seen with smartphones and the internet. As Altman noted in the Sequoia interview, OpenAI aims to be "people's core AI subscription," something akin to an operating system for artificial intelligence. He envisions a future where AI evolves from today's assistant model to more agentic AI that can take autonomous actions, particularly in areas like coding, which he expects to see develop significantly in 2025. While ChatGPT's text capabilities drove its initial success, the introduction of visual creation capabilities in March 2025 propelled adoption to new heights. When OpenAI expanded its model's capabilities to include image generation with GPT-4o, user growth accelerated dramatically. "We added one million users in the last hour," Altman reported on March 31, 2025, describing what he called "biblical demand" for the new image generation features. Within the first week of the image generator's release, users created over 700 million images, with particularly strong adoption in emerging markets like India. The remarkable adoption patterns across generations, coupled with the explosive growth triggered by multi-modal capabilities, suggest we're witnessing not just the rise of a popular application but a fundamental shift in how humans interact with technology (or at least with one that generates Studio Ghibli-like pictures).
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you (or you haven't subscribed yet), please sign up so you never miss an issue.
we’ve also started publishing more frequently on LinkedIn, and you can follow us here
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
banner images created with Midjourney.