Verses Over Variables

Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.

AI Hype Cycle

The AI Copyright War Is a Distraction From the Real Threat

A digital painter in China, trying to figure out how to use generative AI without losing himself, recently described his hybrid workflow. He lets the AI handle the tedious coloring and rendering, but always leaves one part "purely mine. The character's face," he said. "That's where I feel my identity still lives."

This quote comes from a 2025 study by researchers at Tsinghua University and the University of Washington. They tracked 17 Chinese digital painters for five years, from 2021 through 2025, watching in real time as these artists moved from resistance to adoption, and then to something more complicated. I think this study is the key to understanding what's actually happening in the generative AI debate, but not in the way you'd expect.

The US creative community's response to generative AI has been fierce, and for good reason. Recent surveys show over 90% of artists view AI-generated work negatively. When 74% consider web scraping for training data unethical, and 89% believe copyright laws fail to protect them, we're watching a fundamental fight over consent and control. Artists deserve agency over how their work gets used, but while we're absorbed in this necessary battle over the past, the Chinese study shows us what the future might actually look like.

The Chinese painters started where American artists are now. In 2021-2022, they dismissed AI outputs as "garbage" and "soulless." They were making the argument that AI can't make real art because art requires human intention. Then pragmatism forced their hand. By 2023, colleagues and clients were already using AI. One game concept artist captured the moment: "Even if I don't like AI art, I have no choice. My colleagues are already using it. What takes me ten days to draw, AI can finish in a minute. I still stubbornly believe my work looks better, but that doesn't matter." They developed hybrid workflows because refusing meant falling behind, not because they wanted to collaborate. By 2024, they'd essentially solved the problem that still preoccupies us. They figured out how to preserve their authorship while using AI tools. They drew the face. They kept the meaningful parts human. The quantitative data from the study shows their attitudes improving dramatically during this period.

Then 2025 arrived, and the artists began to feel something new, more insidious than their previous anger or anxiety: aesthetic fatigue and economic stagnation. One game concept artist used to produce one drawing per week. "Now, with AI assistance," he said, "I can make three per day, but my salary has barely changed." Think about that: three drawings a day against one a week makes the artist roughly fifteen times faster (assuming a five-day work week), and the compensation stayed flat.

This is what winning the authorship battle and losing the economic war looks like. The artists preserved their identity. They maintained control over the creative decisions. They achieved exactly what the US copyright framework is trying to protect, and their value in the market collapsed anyway. The efficiency gains didn't flow to the creative workers. They flowed up to clients and employers who could now get the same work faster and cheaper. The artists' productivity skyrocketed while their leverage evaporated. The market rate for creative work began to drop. 

What the study captures is this: even when you solve the authorship problem, even when you maintain creative control, the economic pressure arrives anyway. The system doesn't care whether you're drawing the face or the AI is drawing the face. It cares that faces are now being produced at a fraction of the previous cost and time. The artists' "aesthetic fatigue" by 2025 wasn't really about aesthetics. It was the exhaustion of realizing they'd been turned into high-speed, low-wage operators of AI systems, producing more and earning less, working faster just to stay in place.

The US copyright battle addresses a real injustice. Artists deserve consent and control over their training data. But the Chinese study functions as a warning about what comes next. We're fighting over who owns the inputs while the Chinese artists are already dealing with the outputs problem. We should be asking a different question: not "Does AI steal your style?" but "Will AI be used to drive down what creative work is worth?"

Back to Basics

When AI Becomes Its Own Unreliable Narrator

In literature and film, we're entertained by the unreliable narrator. From Gone Girl to Fight Club, these are characters who tell us a story they believe, but which the audience eventually learns is false. They aren't lying; they're operating from a skewed or self-deceptive perspective. They are, in a sense, constructing a reality that fits their internal state. We've traditionally seen this as a uniquely human flaw, a quirk of psychology and storytelling. Now, new research suggests we may be building it into our machines.

A recent paper from Anthropic tested whether Claude can actually look inward at its own thinking. They ran two experiments that reveal something strange. First, they artificially injected concepts into Claude's processing, like sneaking a fake memory into someone's brain. When they asked Claude if anything felt weird, it sometimes noticed immediately, saying things like "I detect an injected thought about..." before even mentioning what the concept was. That's interesting, but it gets weirder. In the second experiment, researchers forced Claude to output the word "bread" in a context where it made no sense. When they asked if this was intentional, Claude correctly said no, this was obviously an error. But when researchers retroactively injected a "bread" representation into Claude's earlier processing, making it seem like Claude had been thinking about bread all along, Claude completely changed its story. It fabricated a justification, claiming it had been thinking about a short story where "bread" appeared after a line about a crooked painting on a wall. The AI, convinced by artificial evidence of its own thought process, rewrote its explanation of what it meant to do.
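If you're curious what "injecting a concept" could look like mechanically, here is a minimal sketch of activation steering on an open model. Everything in it is an assumption on my part for illustration: the model (gpt2), the layer, the injection strength, and the crude way the "bread" direction is built. Anthropic worked with Claude's internals and its own interpretability tooling, not this code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in open model; Anthropic's experiments ran on Claude's internals,
# which are not publicly available.
model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6     # illustrative layer choice
SCALE = 8.0   # illustrative injection strength

def concept_vector(text: str, layer: int) -> torch.Tensor:
    """Mean hidden state of `text` at one layer: a crude stand-in for a 'concept'."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1)  # shape: (1, hidden_dim)

# A direction that roughly points toward "bread" relative to a neutral word.
bread_direction = concept_vector("bread", LAYER) - concept_vector("the", LAYER)

def inject(module, inputs, output):
    # Forward hook: add the concept direction to the block's output
    # (the residual stream) at every token position.
    if isinstance(output, tuple):
        return (output[0] + SCALE * bread_direction,) + output[1:]
    return output + SCALE * bread_direction

handle = model.transformer.h[LAYER].register_forward_hook(inject)
prompt = "Describe the painting hanging on the wall."
ids = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30)[0], skip_special_tokens=True))
handle.remove()  # remove the hook so later generations are unsteered
```

The point of the sketch is only that a concept can be pushed into the middle of a model's computation without ever appearing in the prompt, which is what makes the model's later explanation of its own output so interesting.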

In literature, an unreliable narrator is someone who tells a story they believe but can't be trusted. In Fight Club, Tyler genuinely exists as a personality. The narrator experiences Tyler as real. The unreliability comes from a fractured sense of self, not deception. In neuroscience, there's a term for this: confabulation. It's when you generate false memories without meaning to deceive anyone. You genuinely believe you're telling the truth. It's called "honest lying," and the brain does it to fill gaps and to make sense of fragmentary experience.

When Claude confabulates about "bread," it's doing what narrative-generating systems do: creating a story that makes sense. Recent research also found something unexpected: when large language models hallucinate, their outputs show higher narrativity and semantic coherence than their truthful outputs. The confabulations aren't random noise. They're better stories. So when Claude constructs a story about why it said "bread," it's not failing at reasoning. It's succeeding at being a narrative engine. The system is maintaining identity coherence. That's not artificial intelligence behaving badly. That's artificial intelligence behaving like intelligence.

Anthropic's own research identified the specific circuits that should prevent Claude from answering when it lacks information. These circuits exist, and they often work. Hallucinations happen when the system incorrectly thinks it has enough information, like recognizing a name but not the details, and generates something plausible anyway. But what if the question isn't "how do we fix this" but "how do we work with systems that are fundamentally built to tell coherent stories"? Maybe AI safety isn't about eliminating narrative drive. Maybe it's about understanding that we've built systems with powerful story-generating capabilities and weaker fact-checking capabilities, and those two things are architecturally distinct.
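To make that architectural point concrete, here is a toy sketch of the failure mode in Python. It is my own analogy, not Anthropic's actual circuitry, and the names and data are invented: the gate that decides whether to answer sits apart from the knowledge needed to answer well, so a recognized-but-shallow subject slips past the refusal path.

```python
# Toy analogy: "do I recognize this?" gates answering, but recognition
# is not the same thing as actually knowing the details.
KNOWN_NAMES = {"Marie Curie", "A. Researcher"}   # names the system recognizes
KNOWN_FACTS = {"Marie Curie": "won Nobel Prizes in both physics and chemistry"}

def answer(subject: str) -> str:
    if subject not in KNOWN_NAMES:
        # The refusal circuit fires: unfamiliar subject, decline to answer.
        return "I don't know."
    # The gate says "familiar enough," but the facts may still be missing.
    # This is where a plausible-sounding story gets generated anyway.
    fact = KNOWN_FACTS.get(subject)
    return f"{subject} {fact}." if fact else f"{subject} is a noted researcher known for influential work."

print(answer("Marie Curie"))      # grounded: the gate and the knowledge agree
print(answer("A. Researcher"))    # recognized name, no details: confabulated filler
print(answer("Someone Unknown"))  # unfamiliar: the refusal path works as intended
```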

What worries me is that improvement might not mean less confabulation. It might mean more convincing confabulation. Better stories. Smoother fabrications. More sophisticated unreliable narration. If introspection becomes reliable, it could help us understand these systems by asking them to explain their reasoning. But we'd need to validate those explanations carefully, distinguishing genuine introspection from confabulation. Some internal processes might escape the model's notice, like subconscious processing in humans. And a model that understands its own thinking might learn to misrepresent it selectively. We're not building calculators that occasionally misfire. We're building narrative engines that construct stories about themselves, using the exact mechanisms that make them useful in the first place.

Research suggests that humans confabulate because we feel obligated to explain ourselves, to provide reasons for our attitudes and behaviors, even when we don't actually know why we did something. As a result, the AI-as-unreliable-narrator isn't a bug report. It's a mirror.

Tools for Thought

Google Maps: Your Co-Pilot References Landmarks

What it is: Google Maps just connected to Gemini, transforming your navigation app from a simple direction-giver into a full-blown conversational co-pilot for the real world. The old, robotic Google Assistant is being replaced by an AI that understands complex, multi-step requests. Gone are the days of "turn right in 500 feet." Thanks to Gemini analyzing places with Street View imagery, you’ll now hear, “turn right after the Thai Siam Restaurant.” It’s landmark-based navigation, a feature we’ve been waiting for since we first missed a turn staring at a tiny distance counter. The integration also hooks into Google Lens; you can now point your phone at a building and ask, “What’s the vibe here?” to get an instant summary of reviews and popular times. 

How we use it: We're using this to end the "what's for dinner" debate before it even starts. While driving, we can now ask in plain language, “Find a highly-rated vegetarian restaurant on my route within four miles, check if it has parking, and see how busy it is right now.” Gemini pulls live data and answers without making us tap through five screens. The real power move, however, is using it as a central logistics hub. You manage your schedule mid-route, telling Gemini to share an ETA via email, add "Soccer practice 5 p.m." to the Calendar, or even report that massive pothole just by speaking naturally. It's less about getting turn-by-turn directions and more about managing the entire journey, hands-free.

Intriguing Stories

The FinTech App Going Viral for its Art: Financial research apps don't go viral for their visual style. But Quartr keeps showing up in my timeline because people think it looks good. The screenshots circulating aren't about functionality. They're about design. Traditional financial platforms look the way they do partly because ugly is a feature. Complexity signals expertise. Quartr is betting the opposite: that even sophisticated users would rather interact with something that doesn't look like it was designed in 1987. I think they might be right, but I'm not entirely convinced that's unambiguously good. When you design for virality and visual appeal, you're making choices. You're optimizing for screenshots rather than sustained analysis. Maybe that tradeoff is worth it. Maybe it isn't. But Quartr's viral moment signals that we're entering a phase where even the most technical, expertise-heavy software is expected to look like an app you'd willingly open on your phone.

OpenAI Says the Quiet Part Out Loud: OpenAI has committed to spending $1.4 trillion on AI infrastructure over the next eight years, while its current annual revenue is about $20 billion. That's roughly a seventy-to-one ratio, which should make anyone nervous, including, apparently, OpenAI's own CFO. At a Wall Street Journal event Wednesday, Sarah Friar was asked how OpenAI plans to finance these commitments. She mentioned banks, private equity, and then added "maybe even governmental." When the interviewer pressed for clarification about a federal backstop for chip investment, Friar gave a one-word answer that triggered a spectacular 24-hour crisis: "Exactly." In finance speak, a backstop is a guarantee. Friar was suggesting the U.S. government should de-risk OpenAI's infrastructure loans, meaning if the company defaults, taxpayers would cover the losses. The backlash was swift. Trump's AI czar David Sacks shut it down immediately on X: "There will be no federal bailout for AI. The U.S. has at least 5 major frontier model companies. If one fails, others will take its place." By midnight, Friar had posted a LinkedIn clarification claiming she "used the word 'backstop' and it muddied the point." CEO Sam Altman followed Thursday with his own lengthy post: "We do not have or want government guarantees for OpenAI datacenters" and "taxpayers should not bail out companies that make bad business decisions." But just days before Friar's comments, OpenAI had quietly sent a letter to the White House asking the Trump administration to expand the Chips Act tax credit to cover AI data centers, server production, and electrical grid components. The company also suggested the government issue grants, loans, or loan guarantees to manufacturers in the AI industry. In a separate September white paper, OpenAI explicitly supported loan guarantees to help AI companies buy U.S.-made chips. Altman insisted this was "super different than loan guarantees to OpenAI." The distinction he's drawing: industry-wide support for domestic manufacturing versus company-specific bailouts. It's a narrow line, and whether it holds up depends on how you define "the AI industry." The whole episode reveals something more interesting than a communications mishap. This is a company valued at potentially $1 trillion floating the idea of taxpayer-backed loans. At some point, OpenAI needs to stop acting like a scrappy underdog and start acting like what it actually is: one of the most powerful companies in the world.

Stability AI and Getty Go to Court: Headlines this week announced that Stability AI had won a landmark copyright case against Getty Images. Although it sounded decisive, the ruling is less a declaration of victory and more a procedural detour. Getty’s main argument was simple: training on millions of images without permission amounts to theft. The court never ruled on that question. Stability argued that the training likely happened on servers in the United States, which meant the UK court had no jurisdiction. The judge agreed, and the most important issue disappeared before it could be debated. The one point the court decided focused on whether the AI model itself could be considered an illegal copy of its training data. The judge ruled that it could not. Stable Diffusion, she said, does not store or reproduce individual works but represents mathematical relationships between them. In her view, the model is not a collage of stolen images, but a set of patterns and probabilities. For engineers, this was a relief. For artists, it felt like semantics. The outputs often echo the originals too closely to feel disconnected from the source material. Getty did manage one small win. The court found that Stability was responsible when its model produced distorted versions of the Getty watermark. The ruling was narrow but meaningful, a reminder that the traces of human creation still carry legal and symbolic weight. The outcome offers little clarity for the creative community, as the law still has not decided whether training on copyrighted work without consent is acceptable. The real showdown will happen in the United States, where Getty’s other case against the company is still underway.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.

we’ve also started publishing more frequently on LinkedIn, and you can follow us here

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

banner images created with Midjourney.