Verses Over Variables
Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
AI Hype Cycle
While We Debate AI Ethics, China Is Rewiring Its Entire Economy
While I've been watching endless Twitter threads about whether AI-generated content counts as "real" creativity and listening to podcast hosts worry about authenticity in an AI world, China has been quietly writing the playbook for the next phase of global business competition. On August 21st, they showed their cards. The folks at Geopolitechs conducted a deep dive into the original policy document, and after reading their analysis, I must say this feels different from the typical government strategy papers that gather dust on shelves. China's "Artificial Intelligence Plus" Action Plan reads like a CEO's dream of total organizational transformation, except it's happening at the scale of the world's second-largest economy. Our approach to AI feels very American. The White House unveiled its AI Action Plan in July, and while it talks about winning races and removing barriers, it's basically betting that if we unleash entrepreneurs and let them compete, magic will happen. Individual states are busy creating their own AI rules, which means you might need different compliance strategies for California versus Texas versus New York. I keep watching this unfold and thinking about how we're running two completely different experiments. One side executes a coordinated playbook while the other bets on controlled chaos. That gap should worry anyone thinking about where their business will be in five years.
Here's what makes China's new plan fundamentally different from their last big tech push. "Internet Plus" back in 2015 was all about connection. The Chinese term is 连接, or connection, which captures how digital platforms linked existing businesses with customers more efficiently. Think about how Alibaba connected manufacturers with global buyers or how Didi matched drivers with passengers. The underlying business stayed the same while digital layers improved matching and coordination. Your factory still made widgets, your driver still drove cars, your restaurant still cooked food. The internet became a really good middleman. The new "AI Plus" operates on what they call 赋能, or empowerment. Instead of connecting existing pieces more effectively, the plan talks about embedding cognitive capability directly into how work gets done. Picture the difference between using Salesforce to track customer interactions versus having AI that watches customer behavior patterns, predicts when someone's about to churn, and automatically adjusts your pricing strategy before you even realize there's a problem. One approach digitizes your existing process while the other changes how you think about business entirely.
The timeline they've laid out should make every business leader stop scrolling LinkedIn for a minute. By 2027, they want AI assistants and specialized agents deployed across 70% of their economy. I keep thinking about how WeChat Pay and Alipay spread across every corner of China from 2015 to 2018, and that's exactly the adoption velocity they're targeting for AI. By 2030, the goal is 90% penetration. At that point, not having AI deeply integrated into your operations would be like running a business today without email or basic software. What really caught my attention is how their policy document states that this "intelligent economy" will replace real estate and internet services as China's main economic engine. We're talking about a fundamental shift in where the money gets made. The most profitable opportunities will center on what they're calling "human-machine collaboration," where AI handles the routine thinking while humans focus on strategy and relationships.

This represents something bigger than another technology adoption cycle. The US approach relies on our traditional strengths in innovation and entrepreneurship. We're betting that thousands of companies making independent decisions about AI will collectively figure out the best way forward. Very American, actually. Trust the market, let competition sort it out, may the best solution win. China's approach gets interesting because they're redesigning how organizations work from the ground up. Their plan talks about "AI-native enterprises" that get built around artificial intelligence rather than retrofitting existing processes. As the Geopolitechs analysis points out, this means thinking about AI more like a business partner than a business tool. They're building the infrastructure to make this happen through national pilot programs, standardized systems, and requirements that government research data be opened up for AI training. They're engineering the conditions where AI integration becomes necessary for survival rather than optional for advantage. Compare that to our approach, which assumes market incentives will naturally push businesses toward effective AI adoption.
China's also thinking systematically about what happens to workers when AI automates cognitive tasks. They're planning massive retraining programs and steering AI development toward creating new types of jobs even as it eliminates others. We're mostly assuming labor markets will adapt organically, which they probably will eventually, though the transition might be messier than we'd like. I'm not suggesting we should copy their centralized approach. There are obvious trade-offs around business freedom and the kind of innovation that happens when people can experiment without asking permission first. Still, we need to take seriously the possibility that their systematic strategy might achieve faster, more comprehensive AI integration than our market-driven approach. For those of us running businesses or making technology decisions, this Chinese blueprint represents more than foreign policy analysis. It's a preview of the competitive dynamics ahead, and it's being implemented with remarkable speed and coordination. While we're betting on entrepreneurial innovation and competitive markets, they're building entire economic systems around AI deployment.
Where Women Lead the Way
The gender gap in AI gets talked about as if it were a fixed law of nature. Women adopt generative AI at lower rates than men, the story goes, and the numbers seem to back it up. A working paper highlighted in The Wall Street Journal put the gap at twenty-five percent. The message is tidy and a little grim: women are hesitant, they lag behind, and they risk being left out of the next technological leap. Look closer, though, and the narrative begins to fray. Hidden in the research is one detail that doesn’t match the rest, and it comes from the San Francisco Bay Area.
Boston Consulting Group surveyed local tech workers and found that women there were actually more likely than men to use AI. Another study reinforced the pattern, showing sixty-eight percent of women in the broader tech industry using generative AI weekly compared with sixty-six percent of men. The margin is small, but the direction is everything. In the global center of tech, the gap flips. That reversal matters. It suggests the gender gap isn’t inevitable. It isn’t rooted in some unchanging difference in how men and women relate to technology. It is shaped by environment. When the culture shifts, the behavior shifts with it.
In San Francisco, AI is not treated like an exotic tool that needs careful introduction. It is part of the everyday atmosphere. It shows up in casual café chatter, in neighborhood meetups, in the side projects your friends are tinkering with. That saturation makes the technology ordinary. Once it feels ordinary, the leap to adoption is barely a leap at all. Professional incentives reinforce the shift. In many industries women hesitate due to the competence penalty, the risk of being judged as cutting corners when they openly use AI. That judgment tends to fall harder on women than on men. In San Francisco, the calculation reverses. Ignoring new tools signals resistance to progress. The safer move is to adopt quickly and visibly. In that environment, using AI is a marker of initiative rather than a risk to credibility. The effect strengthens when women see other women doing the same. Visible peers and role models integrating AI into their work dismantle the tired stereotype that technology is a male space. The more those examples circulate, the easier adoption becomes.
The takeaway is that the real challenge is not persuading women to change their behavior. The challenge is reshaping the environments that discourage them. Cultures that encourage experimentation, organizations that invite diverse voices into AI strategy, and networks that spread knowledge laterally rather than top-down all tilt the equation in the right direction.
Tools for Thought
Krea’s Real-Time Video
What it is: Krea has introduced a real-time video generator that pushes past the static frame-by-frame pace of most AI tools. Instead of waiting for clips to render, creators see 12+ frames per second unfold instantly as they type prompts, paint on a canvas, or stream from a webcam. The system emphasizes temporal consistency, so characters stay recognizable and styles remain coherent while the scene evolves. It feels less like batch processing and more like working inside a living sketchpad, where each brushstroke or word ripples through the moving image in real time.
How we use it: We're still on the waitlist for video, but we use Krea's real-time image generator. Instead of treating prompts as one-offs, we explore them interactively: dragging colors, shifting composition, and refining text as the output responds instantly. It becomes a feedback loop where ideas surface faster than they would in a traditional design flow. When the video tool opens up, we expect that same dynamic experimentation will apply to motion: sketching a scene, testing styles on the fly, and letting the AI animate prototypes in seconds. For creative professionals, this is less about generating a final product and more about building a playground where concepts can be tested, iterated, and expanded at the speed of thought.
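If you want that pattern in miniature, here's a toy sketch in Python of what "each edit triggers a new frame" means. To be clear, Krea hasn't published an API we're drawing on here; render_frame and every other name below are hypothetical stand-ins for whatever the real system does.

```python
# Toy sketch of a real-time generation loop. render_frame is a hypothetical
# stand-in for the model call; Krea's actual API is not public to us.
import time

def render_frame(prompt: str, strokes: list) -> str:
    """Pretend model call: returns a fake frame label so the loop runs."""
    return f"frame(prompt={prompt!r}, strokes={len(strokes)})"

def realtime_session(edits: list) -> None:
    """Re-render on every edit instead of waiting for a final 'submit'.
    That immediacy is the whole difference from batch generation."""
    prompt, strokes = "", []
    for edit in edits:
        if edit["kind"] == "prompt":
            prompt = edit["text"]
        else:
            strokes.append(edit["stroke"])
        # Fires instantly on each keystroke or brushstroke -- no render queue.
        print(time.strftime("%H:%M:%S"), "->", render_frame(prompt, strokes))

# Simulated session: every tweak produces a fresh frame right away.
realtime_session([
    {"kind": "prompt", "text": "foggy harbor at dawn"},
    {"kind": "stroke", "stroke": "orange wash, upper left"},
    {"kind": "prompt", "text": "foggy harbor at dawn, fishing boats"},
])
```

The point of the sketch is the loop structure: nothing sits between you and the output, so editing and generating collapse into one gesture.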
Claude for Chrome
What it is: Anthropic is piloting a Chrome extension that brings Claude directly into your browser as a live sidebar. Instead of toggling between tabs, you can ask Claude to summarize an article, draft an email, or even fill forms and click through websites. It is part research preview, part stress test, with about a thousand users invited under the premium Max plan. Anthropic has locked it down with strict safeguards: site-by-site permissions, confirmation prompts for risky actions, and blocks on categories like banking and healthcare. The extension is designed to answer questions and to navigate the web on your behalf while keeping safety guardrails in place.
How we use it: We are still on the waitlist for the Chrome extension, but the workflow feels familiar thanks to tools like Dia and Comet. Both give us an embedded assistant layered on top of our everyday browsing, whether that means pulling research into a draft, parsing documents in real time, or highlighting key data without leaving the page. We rely on them for contextual support like fast summaries, structured notes, and quick comparisons while we work across multiple tabs. Claude’s version looks like it will extend that pattern into true action-taking: not only telling us what is on the page, but also moving through it. Until we get access, Dia and Comet fill that role, showing us the value of keeping an AI present in the same window where the work actually happens.
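For the curious, the safeguard model is easy to picture as a simple policy gate. The sketch below is our illustration of the pattern Anthropic describes (per-site permissions, confirmation prompts for risky actions, hard-blocked categories), not code from the actual extension; every name in it is hypothetical.

```python
# Illustrative only: a toy version of the safeguard pattern Anthropic
# describes. None of these names or rules come from the real extension.
BLOCKED_CATEGORIES = {"banking", "healthcare"}    # hard-blocked site categories
RISKY_ACTIONS = {"submit_form", "make_purchase"}  # need explicit confirmation

def may_act(site: str, category: str, action: str,
            granted_sites: set, confirm) -> bool:
    """Gate an agent action: category block first, then the per-site
    grant, then a human confirmation for anything risky."""
    if category in BLOCKED_CATEGORIES:
        return False                  # hard block, no override
    if site not in granted_sites:
        return False                  # user never granted this site
    if action in RISKY_ACTIONS:
        return confirm(f"Allow {action} on {site}?")  # human in the loop
    return True                       # low-risk action on a granted site

granted = {"example-news.com"}
ask = lambda msg: input(msg + " [y/N] ").strip().lower() == "y"
print(may_act("example-news.com", "news", "read_page", granted, ask))    # True
print(may_act("mybank.com", "banking", "read_page", granted, ask))       # False
print(may_act("example-news.com", "shopping", "make_purchase", granted, ask))  # asks first
```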
Intriguing Stories
Meta Has a Serious Commitment Problem: When the co-creator of ChatGPT threatens to quit your shiny new AI lab after just days and literally starts filling out paperwork to return to OpenAI, you know your talent strategy needs recalibration. That's exactly what happened to Mark Zuckerberg this summer when Shengjia Zhao had to be frantically promoted to Chief Scientist to prevent the most expensive recruiting embarrassment in tech history. Stepping back, we can see that the catalyst was when Meta’s Llama 4 project failed spectacularly in May, and Meta started its buying spree. Meta Superintelligence Labs launched July 1st with a $14.3 billion Scale AI investment and nine-figure signing bonuses for anyone with the right pedigree. The plan seemed foolproof until the talent started ghosting faster than bad Tinder dates. The departures followed a pattern that would be comedic if it weren't so costly. Ethan Knight didn't even show up for his first full day after onboarding. Avi Verma and Knight both boomeranged back to OpenAI within weeks, while Scale AI's Ruben Mayer-Hirschfeld lasted about two months. At least eight people have bailed in just two months, including veteran Meta folks who actually built the company's AI infrastructure. Despite that massive Scale AI investment, Meta's own researchers quietly use competitors like Mercor and Surge instead, apparently considering Scale's data insufficient for serious model training. They bought a data pipeline their own team won't use. The talent revolving door reflects organizational chaos that makes retention nearly impossible. Meta has reorganized its AI division four times in six months, creating perpetual musical chairs. The company seems to believe research culture can be assembled like expensive furniture. Instead, they've built a costly layover lounge where AI talent stops to collect signing bonuses before heading back to places where they can actually do meaningful work.
Anthropic Flips the Switch on Privacy: Anthropic built its brand around one promise: safety. Founded by former OpenAI researchers who fled the chaos next door, the company positioned itself as the responsible adult in a room full of reckless teenagers racing toward AGI. Claude launched with guardrails, "constitutional AI" principles, and a privacy stance so radical it almost seemed quaint: data deleted after 30 days unless you explicitly begged them to keep it. That era is ending. Starting this fall, data from consumer Claude accounts will be stored for up to five years and used for training by default. To stop it, you have to hunt down the opt-out switch. Government, education, and enterprise customers remain exempt, but for everyone else the terms are clear: agree by September 28 or lose access. Anthropic insists this will "strengthen model quality and improve safety systems," which is the same justification every AI company uses when abandoning previous commitments. Privacy advocates point out the obvious: the company that built its reputation on restraint is now following the industry playbook of surveillance by default, consent by exhaustion. The constitutional AI principles that once guided Claude's responses apparently needed a constitutional convention. Turns out even the most safety-conscious companies discover that competitive pressure has a funny way of rewriting ethical frameworks.
Anthropic Backs Away: Anthropic has reached a settlement in a high-stakes copyright lawsuit brought by authors who accused it of training Claude on pirated books scraped from shadow libraries. The case had been barreling toward a December trial with potential damages so staggering that even Anthropic’s backers at Amazon and Google might have flinched. The judge overseeing the case had already issued a split ruling in June: using legally purchased books to train AI counted as fair use, while stockpiling pirated works did not. That distinction left Anthropic exposed to claims of willful infringement, where statutory damages can climb to $150,000 per work. With millions of books in question, the math quickly became existential. The company may have won the argument that training itself is transformative, but it risked losing everything over how it acquired its data. By striking a deal now, Anthropic avoids the uncertainty of trial and the spectacle of authors squaring off against one of AI’s most celebrated “responsible” startups. The terms are sealed for now, pending court approval, but the plaintiffs’ lawyer called the agreement “historic.” For the industry, the message is sharper than any verdict could be: AI companies cannot cut corners on data sourcing. Fair use may cover the what of training, yet the how remains a separate legal risk. Anthropic’s settlement does not end the copyright wars. Publishers, musicians, and platforms like Reddit still have cases in play.
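The "existential math" is worth running for yourself. The $150,000 per-work ceiling comes straight from the statutory damages figure above; the work counts are our own illustrative assumptions, since the exact number of books at issue was never fixed.

```python
# Back-of-envelope exposure. The $150,000 per-work ceiling for willful
# infringement is from the ruling; the work counts are illustrative
# assumptions ("millions of books" was never pinned to an exact number).
STATUTORY_MAX = 150_000  # USD per work, willful-infringement ceiling

for works in (500_000, 1_000_000, 5_000_000):
    print(f"{works:>9,} works -> up to ${works * STATUTORY_MAX / 1e9:,.0f}B")
```

Even the conservative end of that range explains why "existential" isn't hyperbole.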
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you, or you haven't signed up yet, please sign up so you never miss an issue.
we’ve also started publishing more frequently on LinkedIn, and you can follow us here
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
banner images created with Midjourney.