Verses Over Variables

Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.

AI Hype Cycle

The AI Skills Gap Shopify Just Made Real

We spend our days exploring AI, marveling at its breakthroughs, and occasionally wrestling with prompts that yield pure nonsense. We see the incredible promise; yet, we also can't ignore the data, like Pew Research showing many people try AI once, get a less-than-stellar result, and promptly file it under "Nope." That first fumble often overshadows the potential, reinforcing a crucial point: harnessing AI effectively isn't just plug-and-play; it's a genuine skill. This is precisely why a recent internal memo from Shopify's CEO, Tobi Lütke, demanding AI fluency feels like such a watershed moment.

In a recently surfaced internal memo (leaked and then officially shared by Lütke himself), Shopify established new terms: "Reflexive AI usage is now a baseline expectation at Shopify." This is no longer a suggestion to "tinker," as Lütke put it, but a fundamental requirement. He explicitly states that simply opting out isn't really an option anymore, calling stagnation "slow-motion failure." If you're not climbing, you're sliding.

What we find fascinating isn't just the mandate itself, but the why behind it, which aligns perfectly with what many of us have likely experienced. Lütke notes, "What we have learned so far is that using AI well is a skill that needs to be carefully learned by…using it a lot." He even acknowledges that many people give up after one bad prompt, missing out on the power of learning how to provide context and iterate. It’s less like summoning a genie and more like learning a complex piece of software or maybe even a musical instrument; your first attempt probably won’t be a masterpiece. (Our first attempts often sound like a dial-up modem trying to play jazz).

This resonates deeply with the creative and tech space. How many times have we stared blankly at a new tool, only to unlock its potential after forcing ourselves to integrate it into our workflow? Shopify is essentially institutionalizing that often painful but necessary learning curve. They're making AI fluency part of performance reviews, demanding teams justify not using AI before asking for more resources, and positioning it as a core competency alongside existing crafts. Lütke sees AI as a massive multiplier: top performers might get 10X more done, and when those 10X people wield 10X tools, the math starts pointing toward 100X.

So, while the headlines might focus on the "mandatory AI" aspect, the real story here is the explicit acknowledgment that AI isn't magic. It's a powerful, transformative tool, and one that requires deliberate practice, experimentation (Lütke encourages sharing wins and losses), and the willingness to look a bit silly while you figure it out. It requires moving beyond that often underwhelming first date with generative text and committing to understanding how to truly collaborate with these digital minds.

Claude Goes to College

We spend a lot of time tinkering with AI, and Anthropic's Claude is one of our favorite tools. Its integration into our workflow is undeniable, but its penetration into the academic sphere raises particularly intriguing questions. Anthropic’s research team sought to clarify whether today's university students are leveraging these sophisticated language models merely as advanced grammar checkers or are engaging with AI on a more fundamental level for their coursework and learning. Their recently published Education Report offers a data-driven look beyond the usual anecdotes, revealing a complex relationship between students and AI.

Anthropic found that STEM students, especially those in Computer Science, are embracing Claude most readily. Strikingly, Computer Science majors generated nearly 39% of the conversations analyzed, while representing just 5.4% of US bachelor's degrees, according to NCES data. Students in the Natural Sciences and Mathematics also showed higher-than-average adoption rates. Conversely, fields such as Business, Health Professions, and the Humanities demonstrated notably lower engagement relative to their enrollment figures. This disparity might reflect Claude's particular aptitude for tasks like coding, or perhaps suggests that STEM fields are simply further ahead on the AI adoption curve in educational contexts.

Anthropic's analysis went beyond simple usage counts to categorize the nature of student-AI interactions. They identified four distinct patterns, each representing a significant portion (between 23% and 29%) of the conversations; we sketch the structure behind them in code just after the list.

  • Direct Problem Solving: Characterized by requests for immediate solutions or answers, such as debugging code or requesting factual explanations. The interaction is typically brief and task-focused.

  • Direct Output Creation: Involves instructing the AI to generate specific content, like drafting essay outlines, summarizing texts, or creating practice questions. The user seeks a tangible output with minimal collaborative input.

  • Collaborative Problem Solving: Marked by a more dialogic approach. The student and AI engage in a back-and-forth to troubleshoot issues or explore concepts, iteratively working towards a solution.

  • Collaborative Output Creation: Entails working jointly with the AI to develop content. This could involve brainstorming presentation ideas, co-writing sections of text, or refining drafts through iterative feedback.
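Notice that the four patterns form a tidy two-by-two grid: how the student engages (direct request vs. collaborative dialogue) crossed with what they want (a solved problem vs. a created artifact). A minimal sketch of that structure, with hypothetical names of our own rather than anything Anthropic publishes:

```python
from dataclasses import dataclass
from enum import Enum

class Style(Enum):
    DIRECT = "direct"                # one-shot request, minimal back-and-forth
    COLLABORATIVE = "collaborative"  # iterative dialogue with the AI

class Goal(Enum):
    PROBLEM_SOLVING = "problem_solving"  # debug code, explain a concept
    OUTPUT_CREATION = "output_creation"  # draft an outline, co-write text

@dataclass
class InteractionPattern:
    style: Style
    goal: Goal

# The four patterns from Anthropic's report, each roughly 23-29% of conversations:
patterns = [InteractionPattern(s, g) for s in Style for g in Goal]
```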

This diversity of interaction reveals that students are utilizing Claude as an active partner in various cognitive tasks. This capability distinguishes AI assistants from traditional tools like search engines, opening up new possibilities and introducing fresh complexities.

Digging into the type of thinking students offload to Claude yielded a provocative result, visualized using Bloom's Taxonomy. Anthropic discovered an "inverted pyramid": the model was most frequently tasked with complex, higher-order functions like Creating (generating new ideas/content) and Analyzing (breaking down info). Foundational skills like Remembering facts or Understanding concepts saw much less action. This immediately raises the question: are students cleverly leveraging AI for demanding tasks, or are they outsourcing the very cognitive muscles they're meant to be developing? Anthropic flags this as a potential "crutch," worrying about foundational skill erosion.

The report doesn't sidestep the delicate issue of academic honesty. With nearly half (around 47%) of interactions classified as "Direct," seeking answers or content with limited user engagement, the potential for misuse is apparent. While many such requests could serve legitimate study purposes (like clarifying a definition or generating review material), the researchers also uncovered concerning examples, including requests for answers to test questions or assistance in circumventing plagiarism detection software. Anthropic appropriately emphasizes that determining intent or actual cheating from conversation data alone is impossible; context is crucial. A direct query could be part of an honest learning process or represent academic misconduct. Nonetheless, the findings underscore the profound challenge AI presents to traditional assessment methods and the ongoing need for institutions to grapple with defining ethical AI use in education.

Ultimately, Anthropic's research provides valuable empirical grounding for discussions about AI's role in education. It confirms that tools like Claude are rapidly becoming integrated into students' academic lives, serving as partners in complex cognitive tasks, particularly within STEM fields. The interaction patterns are diverse, ranging from simple queries to collaborative creation. The report avoids definitive pronouncements, instead contributing vital data to an ongoing, critical conversation among educators, students, and developers. 

Back to Basics

Your AI Collaborator Needs Communications 101

With multi-modal AI, we're all talking to our computers more than ever. Whether you're coaxing killer copy out of a language model, jamming with an AI music generator, or "vibe coding" your way through a complex project, AI has become the new collaborator. It's exciting, powerful, and sometimes weird. The conversation flow can be jerky, the feedback opaque, and the overall interaction less like a seamless partnership and more like trying to interpret smoke signals from a very intelligent toaster. As people who spend a lot of time exploring these tools, we know the potential is huge, but the user experience often needs polish. Specifically, the communication aspect is crucial for making these powerful tools not just smart, but good collaborators, and a recent paper dives right into this: “Improving User Experience with FAICO: Towards a Framework for AI Communication in Human-AI Co-Creativity.”

The authors focus on FAICO, which stands for Framework for AI Communication. Think of it as a guide not just for building co-creative AI, but for understanding how AI should talk back to us humans to make the whole process better. It’s about moving beyond just raw algorithmic power and thinking about the AI's conversational skills. FAICO was born from digging through over 100 academic papers on human-AI interaction and co-creativity. It breaks down AI communication into key ingredients that directly impact how we feel about and work with our digital partners. Forget just the output; this is about the dialogue.

FAICO suggests we need to think consciously about several things when designing or even just interacting with co-creative AI (we sketch these dimensions in code after the list).

  • Modalities: This covers how the AI communicates. Does it rely solely on text, or does it incorporate visuals, sound, haptic feedback, or even an embodied presence? The choice of modalities impacts feelings of connection and collaboration.

  • Response Mode: This refers to when the AI initiates communication. Is it proactive, jumping in with suggestions (potentially interrupting flow), or reactive, waiting for user prompts or actions?

  • Timing: Related to response mode, this concerns whether communication happens instantly (synchronous) during the creative process or is delivered later (asynchronous), allowing for reflection.

  • Communication Type: This focuses on what the AI is saying. Is it providing feedback, making suggestions, or offering explanations for its actions?

  • Explanation Details: This addresses how much information the AI provides when explaining itself. Does it offer a full, detailed account, a moderate summary, or just the minimum necessary?

  • Tone: This involves the perceived emotional quality of the AI's communication. Is it polite, warm, friendly, and does it align appropriately with cultural contexts?
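To make those dimensions concrete, here's a minimal sketch of how a co-creative tool might expose them as a user-tunable configuration. The field names and values below are our own hypothetical reading of the framework, not an API the paper defines:

```python
from dataclasses import dataclass, field
from enum import Enum

class ResponseMode(Enum):
    PROACTIVE = "proactive"  # AI volunteers suggestions unprompted
    REACTIVE = "reactive"    # AI waits for the user to ask

class Timing(Enum):
    SYNCHRONOUS = "synchronous"    # feedback arrives mid-creation
    ASYNCHRONOUS = "asynchronous"  # feedback delivered later, for reflection

@dataclass
class FAICOConfig:
    # Which channels the AI uses: text, visuals, sound, haptics, embodiment...
    modalities: list[str] = field(default_factory=lambda: ["text"])
    response_mode: ResponseMode = ResponseMode.REACTIVE
    timing: Timing = Timing.SYNCHRONOUS
    # What the AI says: feedback, suggestions, explanations of its actions
    communication_types: list[str] = field(default_factory=lambda: ["feedback"])
    explanation_detail: str = "moderate"  # "full", "moderate", or "minimal"
    tone: str = "warm"                    # perceived emotional quality

# A user who hates being interrupted mid-flow might dial things down:
quiet_partner = FAICOConfig(response_mode=ResponseMode.REACTIVE,
                            timing=Timing.ASYNCHRONOUS,
                            explanation_detail="minimal")
```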

Right now, a lot of our creative back-and-forth with AI involves communication that just happens. It’s often an afterthought, the default setting. Focusing on the FAICO dimensions means shifting from accidental communication to an intentional strategy. For the people building these tools, this means asking how the AI should talk, not just whether it can perform a task. What tone builds trust? When is proactive feedback helpful versus annoying? For users, it means having the language to understand why an interaction feels clunky or smooth. Maybe it's not the AI's core intelligence that's the issue; maybe its timing is off, or its explanations are too vague. This awareness allows us to potentially tune the AI (if the tool allows, as FAICO envisions) or at least understand the friction points.

Getting these communication aspects right isn't just about making the AI seem "nicer." The FAICO paper highlights (and our own experiences confirm) that effective AI communication directly impacts user experience. It influences whether we trust the AI, whether we feel like we're truly collaborating, how confident we feel in the process, and how much we actually enjoy using the tool. Bad communication leads to frustration and abandonment; good communication fosters partnership and better creative outcomes. The researchers even propose practical tools based on FAICO, like design cards for developers and configuration tools for users. These tools allow us to tailor the AI's communication style.

So, as we continue to integrate AI deeper into our creative workflows, let's remember that building a better AI collaborator isn't just about smarter algorithms. It's also about teaching them how to communicate effectively. Frameworks like FAICO give us a language and structure for thinking about this, helping us move beyond vibe coding toward truly productive and maybe even delightful human-AI partnerships.

Tools for Thought

Meta’s Weekend Surprise: Llama 4 Crashes the Party

What it is: Over the weekend, Meta announced its newest AI model family: Llama 4. This new generation includes Llama 4 Scout, positioned as a nimble, lightweight model (17 billion active parameters) designed to run efficiently, even on a single H100 GPU. Crucially, Scout boasts a massive 10-million token context window, making it seriously adept at chewing through and remembering information from lengthy documents or conversations. Alongside it strutted Llama 4 Maverick, a beefier mid-tier model (a Mixture-of-Experts design with 128 experts for efficiency) aiming for more substantial reasoning, coding, and multilingual chops. Both Scout and Maverick come equipped with multimodal capabilities, meaning they're built to understand not just text, but also images, video, and audio. And then there's Llama 4 Behemoth (rumored to be near two trillion parameters): still in training, it's the eventual heavyweight champ meant to teach its smaller siblings a thing or two.
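If Mixture-of-Experts is new to you, the efficiency trick is that only a handful of experts fire per token, so the parameters actually doing work at any moment stay far below the total parameter count. Here's a toy sketch of top-k routing in plain numpy; the sizes and gating scheme are illustrative only, not Meta's actual implementation:

```python
import numpy as np

def moe_layer(x, experts, gate_weights, top_k=2):
    """Route a token vector x to its top_k experts and mix their outputs."""
    scores = x @ gate_weights              # one gating score per expert
    top = np.argsort(scores)[-top_k:]      # indices of the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over just the chosen experts
    # Only top_k experts run; the rest stay idle -- that's the efficiency win.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

d_model, n_experts = 64, 8                 # toy sizes, nowhere near Llama 4's
rng = np.random.default_rng(0)
experts = [lambda x, W=rng.normal(size=(d_model, d_model)): x @ W
           for _ in range(n_experts)]
gate = rng.normal(size=(d_model, n_experts))
out = moe_layer(rng.normal(size=d_model), experts, gate)
```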

How we use it: Meta's sticking with the open-source playbook for Scout and Maverick. That open access is huge – it fuels innovation, allows for scrutiny, and lets us decide how to integrate this tech. For practical use, we'd likely lean on Scout, especially with that huge 10-million token context window. It puts up solid numbers across the board for its size, holding its own against models like Gemini 2.0 Flash-Lite on reasoning and image understanding, making it a strong choice for deep document analysis, long-running conversations, or general tasks where you need good comprehension without needing a beastly GPU setup. Maverick is the one we'd tap when the complexity ramps up. It reportedly excels in coding and multilingual tasks, and shows strong performance in reasoning and visual understanding. These scores suggest Maverick is the go-to for debugging tricky code, tackling nuanced problems that require sharp reasoning, working across different languages, or building sophisticated multimodal applications.
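If Meta follows its usual open-weights release pattern, Scout should be loadable through Hugging Face transformers along these lines. The model ID below is our guess based on Meta's naming conventions, and the document is a placeholder; check the actual Hub repo before running:

```python
from transformers import pipeline

# Model ID is an assumption from Meta's naming conventions; verify on the Hub.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    device_map="auto",  # Scout is sized to fit on a single H100
)

with open("contract.txt") as f:  # hypothetical long document
    doc = f.read()

# The long context window is the draw: feed the whole document, then ask.
prompt = f"Summarize the key obligations in this contract:\n\n{doc}"
print(generator(prompt, max_new_tokens=500)[0]["generated_text"])
```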

Intriguing Stories

AI Crashed Google’s Cloud Party

Normally, Google Cloud Next is where serious enterprise folks gather to talk about serious cloud business. We usually give it a respectful nod from afar. But this year, Google crashed its own party with a bag full of AI goodies so compelling, we had to jump in. Forget just infrastructure; the AI wave hit the shore hard, and there are some genuinely intriguing tools emerging for creatives and AI fans alike. Unsurprisingly, Gemini was the star quarterback. Google officially unleashed Gemini 2.5 Pro, their new heavyweight champion AI model. This multi-modal model is already climbing the leaderboards, and Google is making it accessible through AI Studio and Vertex AI. This feels less like an incremental update and more like a significant jump in AI capability.
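For anyone who wants to kick the tires from code rather than the AI Studio UI, here's a minimal sketch using Google's genai Python SDK. The exact model string for 2.5 Pro is our assumption (preview names shift), so check AI Studio for the current one:

```python
from google import genai  # pip install google-genai

client = genai.Client(api_key="YOUR_API_KEY")  # key from AI Studio

# Model string is a guess at the preview name; confirm in AI Studio.
response = client.models.generate_content(
    model="gemini-2.5-pro-exp-03-25",
    contents="Summarize the headline announcements from Google Cloud Next.",
)
print(response.text)
```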

Google is embedding its AI brainpower across the board. Beyond the expected upgrades in Vertex AI (their machine learning platform), Gemini is powering new features in Google Workspace: Google Sheets is getting a "Help Me Analyze" button that acts like a mini data analyst, and Docs is offering "Audio Overviews" for podcast-style summaries. On the creative front, Google showed off Lyria (a text-to-music generator), Chirp 3 (a voice generator), and Veo 2 (an advanced text-to-video tool, complete with in-painting). Google also went big on "AI Agents": specialized AIs designed for complex tasks that can even work together. They unveiled tools to build these agents (an open-source kit, no less) and ways for them to communicate across platforms. We are particularly excited to get our hands on Agentspace (a platform that connects agents, enterprise search, and third-party tools to your enterprise cloud). Our favorite tool, AI Studio, got a UX redesign as well. Google has been shipping amazing products at lightspeed, and we are excited to try them all. We’ll be back with a full report once we’ve dipped our toes in deeper.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.

we’ve also started publishing more frequently on LinkedIn, and you can follow us here

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

banner images created with Midjourney.