Verses Over Variables

Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.

AI Hype Cycle

Humanoid Robots are Raising Billions, but No One is Buying (Yet)

Figure AI just raised over a billion dollars at a $39 billion valuation. For a company that's basically two years old with a handful of pilots and zero scaled deployments. That funding number tells you everything about where robotics is right now. The money believes, but the market doesn't.

Product-market fit means customers are pulling your product into existence. Growth feels inevitable because the alternative (not having your product) becomes unthinkable for your customers. That's fit. When the market reaches for you. Robotics has the opposite problem. Companies are pushing robots at customers who remain politely interested but uncommitted. BMW runs a pilot. Amazon tests a few units. Everyone talks about the potential. Nobody's placing orders that matter. The tell is in the language: pilots, demonstrations, partnerships, collaborations. The gap between what robotics has and what fit looks like shows up in a few specific places.

  • These robots only work in spaces we already built for humans. That sounds like an advantage until you spend time thinking about it. Factories have stairs because humans have legs. Warehouses have shelves at certain heights because humans have arms. Tools fit human hands because that's who uses them. Humanoid robots inherit these constraints without the adaptability that makes humans actually useful. We improvise, work around problems, and handle the unexpected. Robots do exactly what they're programmed for in exactly the conditions they were trained in. BMW's pilot showed efficiency gains, but for one specific repetitive task, in an environment that had been modified, controlled, and monitored to make the robot successful. That's not scalable. That's a science experiment with a forklift watching from the sidelines in case anything goes wrong.

  • The value isn't in the robot body; it's in the system running the robot. Figure walked away from their OpenAI partnership this year and said they'd build AI models themselves. That's the real competition. Impressive hardware isn't the scarce resource anymore. What's scarce is software that lets robots learn from real deployments, improve over time, and handle situations that didn't exist in the training data. Until those systems generate actual returns that show up in quarterly earnings reports, we're still in the "trust us, this will be valuable eventually" phase.

  • The economics don't close. These robots cost $120,000 to $150,000 each right now. They're competing against human workers who are flexible and adaptive, and also against specialized industrial robots that already do specific tasks reliably. A manufacturer might buy two robots to experiment, but it won't buy fifty to replace a production line unless the math is obvious and the risk is minimal. What matters is whether the economics work for normal companies making normal capital expenditure decisions in normal budget cycles (see the back-of-the-envelope sketch after this list).

  • Markets aren't created by general-purpose solutions. They're created by solving one specific problem so well that customers have no alternative. Smartphones started as phones with email. They later became platforms, but they started narrow. Robotics companies talk constantly about general-purpose humanoids that can do any job. What's the one thing these robots do better than any alternative? What's the problem so painful and expensive that someone will pay $150,000 to solve it? Until that question has a clear answer, we're still in technology demonstration territory.
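
To see why the math has to be obvious, here's a back-of-the-envelope payback sketch in Python. Every number in it is our illustrative assumption (a price at the top of the quoted range, made-up maintenance, labor, and uptime figures), not data from Figure, BMW, or anyone's pilot.

```python
# Back-of-the-envelope payback math for one humanoid robot.
# Every number is an illustrative assumption, not vendor or customer data.

ROBOT_PRICE = 150_000         # upfront cost, top of the quoted range
ANNUAL_MAINTENANCE = 20_000   # assumed service, parts, and software fees
SHIFT_COVERAGE = 1.5          # assumed human-shift-equivalents covered per day
LOADED_LABOR_COST = 55_000    # assumed fully loaded annual cost of one shift

annual_offset = SHIFT_COVERAGE * LOADED_LABOR_COST   # labor the robot replaces
net_savings = annual_offset - ANNUAL_MAINTENANCE     # per year

print(f"net savings/year: ${net_savings:,.0f}")
print(f"payback: {ROBOT_PRICE / net_savings:.1f} years")

# ~2.4 years under rosy assumptions. Drop SHIFT_COVERAGE to 0.75
# (downtime, supervision, rework) and payback stretches past 7 years,
# which is exactly why pilots don't turn into fifty-unit orders.
```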

The robotics industry has extraordinary engineering, unlimited capital, and media coverage most startups would kill for. What it lacks is the thing that actually defines market fit: customers pulling the product forward because they need it, not companies pushing the product at customers hoping they'll eventually want it. Now, this could flip faster than anyone expects. When AWS launched, most companies weren't ready for cloud computing either. Then someone figured out the economics for one specific use case, and suddenly everyone else was racing to catch up. Robotics needs its AWS moment. That one application where the value is so obvious, so immediate, that it creates a template everyone else follows. Maybe it's hazardous material handling where the alternative is risking human lives. Maybe it's warehouse operations where labor shortages are so severe that $150,000 per robot pencils out. Maybe it's something nobody's talking about yet because the breakthrough comes from an unexpected direction.

The path from here to there requires boring victories nobody's excited to fund. Costs dropping to levels where the math is obvious. Reliability improving to where robots work without constant supervision. AI systems handling novel situations instead of breaking when something unexpected happens. And all the infrastructure that nobody thinks about: safety standards, service networks, standardized interfaces. Technologies that require this much physical infrastructure and behavior change don't accelerate on software timelines. Autonomous vehicles taught us that.

For us, the opportunity isn't in building robots. It's in figuring out how automation could fit into human environments without making everything harder. What does human-robot collaboration actually look like in practice? How do you design interactions that people trust? What jobs should be automated versus augmented? Those questions determine whether any of this becomes real or stays in the demo phase forever.

Back to Basics

Your Next Canvas is a Graph

AI creativity used to be a single input box. You typed a description, pressed enter, and hoped. That model worked when the goal was novelty. But when you need repeatability, when you're building at scale, the single-prompt interface breaks down. Now the frontier has moved. Across platforms like Weavy, Fuser, Runway Workflows, ComfyUI, Adobe Firefly Boards, and workflow builders like n8n, a shared principle is emerging: AI creation is becoming compositional. Instead of sealed black boxes, creators arrange modular logic blocks. Instead of one opaque model doing everything, you orchestrate networks of specialized steps.

Six months ago, creating an AI marketing asset meant opening five browser tabs: Midjourney for images, Claude for copy, Figma for organization, Runway for video, Magnific for upscaling. You manually shuttled files between them, losing context with each jump. That fragmentation birthed a new category: platforms like Weavy and Fuser launched node-based workspaces that integrate dozens of AI models with professional editing tools, Runway launched Workflows, Kling launched Labs, and Freepik teased Spaces. Node graphs are everywhere. Adobe took a different approach with Firefly Boards, an infinite canvas where creators arrange multiple AI models alongside generated assets. What was workflow automation is becoming creative infrastructure.

When Photoshop introduced layers, it didn't just make editing easier. It changed how designers thought. You could isolate changes, experiment without destroying the base, build complexity through transparent stages. The interface made the process visible and shareable. Node workflows do something similar for AI. They make reasoning explicit. In a single-prompt model, the path from input to output is hidden. If the result is wrong, you can only rewrite the prompt and pray. In a node workflow, every decision is a tile you can rearrange. You can swap one node without rebuilding the chain. You can insert human judgment exactly where it matters.

You're no longer iterating on prompts; you're designing systems. Iteration becomes surgical: when one node fails, you isolate and adjust that step rather than starting over. A single AI tool can be replaced. If Gemini Imagen raises prices, you switch to Flux. Tool selection is a commodity. A refined workflow is different: it encodes institutional knowledge and embodies creative decisions and technical constraints. It's not a tool. It's a process. And it's versionable.
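
To make "designing systems" concrete, here's a minimal sketch of the pattern in Python. The node names and stub functions are hypothetical stand-ins, not any particular platform's API; tools like ComfyUI or n8n express the same idea visually.

```python
from typing import Callable

# A minimal sketch of a node-based creative pipeline. The node names
# and stub functions are hypothetical, not any specific platform's API.

Node = Callable[[dict], dict]

def draft_copy(ctx: dict) -> dict:
    # Stand-in for an LLM call that writes ad copy from a brief.
    ctx["copy"] = f"Ad copy for {ctx['brief']}"
    return ctx

def generate_image(ctx: dict) -> dict:
    # Stand-in for an image-model call; swap this node to change vendors.
    ctx["image"] = f"image rendered from: {ctx['copy']}"
    return ctx

def human_review(ctx: dict) -> dict:
    # Human judgment inserted exactly where it matters.
    ctx["approved"] = True
    return ctx

PIPELINE: list[tuple[str, Node]] = [
    ("copy", draft_copy),
    ("image", generate_image),
    ("review", human_review),
]

def run(pipeline: list[tuple[str, Node]], ctx: dict) -> dict:
    for name, node in pipeline:
        try:
            ctx = node(ctx)
        except Exception as err:
            # A failure is isolated to one named node; you fix or swap
            # that step instead of starting the whole run over.
            raise RuntimeError(f"node '{name}' failed") from err
    return ctx

print(run(PIPELINE, {"brief": "fall campaign launch"}))
```

Swapping vendors means re-pointing one node (here, `generate_image`) while the rest of the graph stays put. That swap-one-tile property is what makes a workflow versionable institutional knowledge instead of a pile of prompts.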

But these node systems aren't a silver bullet. The learning curve, for one, can be steep: these tools require an understanding of data types, error handling, and rate limits. They democratize capability for people willing to invest in learning graph-based thinking, but they're more intimidating than a text box. A 15-node workflow has 15 failure points; when one API changes or one model is deprecated, the chain breaks. Cost multiplies too: a 5-node workflow might hit 5 APIs on every run. That math only works if repeatability matters more than volume.

This shift makes AI more capable while raising the barrier to entry. Workflows enable complexity that prompts cannot reach. But creativity-as-code isn't everyone's medium. The early internet hit the same inflection. In 1995 you wrote HTML by hand. By 2000, Dreamweaver let you drag elements around. WordPress arrived and suddenly anyone could publish. But professional developers are still in VS Code writing React. The tools didn't make code obsolete. They gave non-coders a way in. Node workflows are at that same inflection point. The people wiring graphs together right now are figuring out patterns the rest of us will inherit. In a few years, abstraction layers will hide the complexity for casual users. But the professionals building systems that need to run the same way a hundred times will still be in the graph, watching exactly where the logic branches and how the data flows. Creativity will become circuitry.

Tools for Thought

Claude Skills: The Shortcut to Workflows

What it is: Claude Skills are modular capability packs that live as folders containing instructions, reference files, and optional scripts. Think of them like reusable custom prompts. Claude detects when a Skill matches the task at hand, loads it, and executes inside its sandbox to produce outputs with your rules intact. Anthropic positions Skills as a way to encode “how your organization does the work,” such as applying a brand system, working in spreadsheets, or generating standardized documents. In Anthropic’s own example, a Brand Guidelines Skill enforces colors, typography, and tone every time Claude drafts external materials. 
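
To make the folder idea concrete, a Skill like the brand-guidelines example might be laid out like this. The SKILL.md convention is Anthropic's; the other file names and contents are our hypothetical sketch:

```
brand-guidelines/
├── SKILL.md        # frontmatter (name, description) plus the instructions
├── palette.md      # reference file: approved hex values and usage rules
└── check_deck.py   # optional script Claude can run in its sandbox
```

SKILL.md opens with a short name and description that Claude scans to decide whether the Skill applies to the task at hand; the full instructions and reference files load only once it does, which keeps the context window lean.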

How we use it: We’ve been building a small library of Skills that act like guardrails and accelerants for our work. A Branding Style Guide Skill ensures color values, voice, and layout rules carry through decks, one-pagers, and ad copy. A Writing Guide Skill enforces structure, banned phrases, and audience nuance for newsletters and executive briefs. A Thought Partner Skill packages our critique rubric and research prompts so ideation sessions stay sharp and repeatable. Together these Skills turn Claude into a reliable collaborator that remembers house standards and shortens the path from idea to publishable assets.

ChatGPT Atlas: AI-powered Browser

What it is: OpenAI’s Atlas is a ChatGPT-centric web browser for macOS that puts an always-available sidebar into every page so you can summarize, compare, and rewrite content in place. Premium users also get an Agent Mode that can perform multi-step tasks inside the browser, like travel research or filling forms, and a “browser memories” option that tailors responses over time with user-controlled data settings. Atlas is positioned as a privacy-aware browser with opt-outs for training and toggles for memory, though it is still rolling out across platforms.

How we use it: Cautiously. The current build feels slow and inconsistent in practical tasks, third-party tests report error-prone agent runs, and fresh research highlights prompt-injection vectors and broader privacy questions that deserve time to mature. For day-to-day creative and research workflows we stick with Dia, which has proven faster and gives us reusable skills for repeatable actions. We keep Atlas installed for evaluation and the occasional sidebar summary on complex pages, but we favor Dia's Skills plus clear privacy controls.

Google Pomelli: Experiment with your Brand Assets

What it is: Pomelli is a new Google Labs experiment that builds on-brand marketing campaigns for small and midsize businesses. You point it to a website, it creates a “Business DNA” profile from your tone, fonts, images, and color palette, then proposes campaign ideas and generates editable, downloadable assets for social, site, and ads.

How we use it: We gave Pomelli a spin on our own website, and it produced some great first drafts of social media content, including copy and imagery. We'd still need a human in the loop, but it handed us a solid starter pack to iterate on.

Intriguing Stories

Building Gaming Worlds Faster: Electronic Arts, the video game titan behind EA Sports FC and Battlefield, has announced a strategic partnership with Stability AI, the company renowned for its Stable Diffusion image generator. The goal is to co-develop generative AI models and tools that will be embedded directly into EA's game development pipeline, a move both companies claim will "reimagine how content is built." The collaboration will place Stability AI's 3D research team directly inside EA's studios to build and test new workflows. EA executives are framing the partnership as a way to "amplify creativity," calling the new tools "smarter paintbrushes" that empower, rather than replace, human artists. However, the announcement comes just weeks after EA's historic $55 billion leveraged buyout, a deal which has reportedly saddled the company with significant debt.

AI Content is Everywhere: The internet has officially entered a new era. As of October 2025, AI-generated articles now make up 53.5% of all new web content, overtaking human-written content (46.5%) for the first time, according to a new analysis by Graphite.io. This tipping point is the culmination of a meteoric rise that began in late 2022, turning a ~5% trickle of automated content into an outright flood in under three years. But while machines may be winning the quantity race, they are definitively losing the quality and visibility battle. Despite the sheer volume of new AI articles, industry reports and search analyses consistently show that Google’s top search results remain overwhelmingly dominated by human-authored content. The automated "slop" being churned out at scale is, for the most part, failing to gain traction. Search algorithms, which are increasingly fine-tuned to detect helpfulness and genuine expertise, appear to be successfully filtering out low-quality automated content in favor of articles with a human touch.

Adobe Goes Max: At its MAX 2025 keynote, Adobe signaled a massive strategic shift: its AI-first future isn't a walled garden. The company unveiled Firefly Image Model 5, its new flagship generator, but the bigger story is that Adobe is integrating partner models from Google, OpenAI, ElevenLabs, and Topaz Labs directly into its creative apps. The second major leap is the new AI Assistant (codenamed Project Moonlight), a conversational "agent" that takes natural language commands across the Creative Cloud. Firefly is also moving aggressively beyond still images. Adobe announced a full multimedia AI suite, including Generate Soundtrack (for creating royalty-free music synced to video length) and Generate Speech (a text-to-voice tool using ElevenLabs tech). For individuals, Adobe is even beta-testing Firefly Custom Models, allowing any user to train an AI on their own personal style. Adobe and Google also announced a partnership that was light on details but will let enterprise customers use Google products with their pre-trained models. In the end, by opening its suite to rival models, Adobe is positioning itself as the indispensable platform for creativity, regardless of which AI model is best for the job.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.

we’ve also started publishing more frequently on LinkedIn, and you can follow us here

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

banner images created with Midjourney.