Verses Over Variables

Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.

AI Hype Cycle

OpenAI’s Kevin Weil: The Thinker and The Machine

At the recent Ray Summit, Kevin Weil, OpenAI’s Chief Product Officer, painted a compelling vision of AI's future—one in which artificial intelligence becomes as commonplace as smartphones yet remains firmly anchored in ethical principles and human values.

OpenAI: Democratizing AI One Model at a Time: OpenAI isn't just talking about democratizing artificial intelligence—they're actively dismantling barriers to entry. By offering free versions of their models and open-sourcing projects like the Whisper transcription system, they're putting powerful AI tools into the hands of developers, researchers, and everyday users. Yet they're also navigating the complex reality of sustaining a business while pursuing their idealistic mission.

The Economics of AI: From Luxury to Everyday Tool: Weil shared that GPT-4, their most capable model, now costs a mere 1% of its predecessor's price tag. This cost reduction is opening doors to applications that were once the stuff of science fiction. As an example, Weil described a lawyer using their new o1 model to write a legal brief in five minutes – a task that would typically take a high-priced associate six hours, a roughly seventy-two-fold speedup. AI is delivering savings in both time and cost (on both sides of the equation).

Balancing Act: Monetization and Accessibility: OpenAI's approach to monetization reveals an intriguing balance between commercial viability and social responsibility. They're committed to maintaining free access to AI, but they're also exploring ways to capture value from the immense productivity gains their AI can provide. It's a delicate balance – how do you monetize a revolutionary technology without putting it out of reach for the average user? It sounded to us like OpenAI will stick with some sort of “ethical freemium” model (preserving access and democratization), while hoping that major users will subsidize those who can’t afford it. Will there be something like income-based surge pricing? (Have a look at the business model of the grab-and-go food enterprise Everytable, which charges higher prices in affluent neighborhoods than in lower-income ones in order to reduce food deserts.)

Teaching AI to be a Good Citizen: Despite disbanding its safety team, OpenAI does have an ethics policy. They've developed a Model Spec – essentially an AI code of conduct. If an AI's behavior raises eyebrows, it's either not following the spec (a bug to squash) or the spec itself needs refining. By making their ethical framework public and open to comment, OpenAI is fostering a collaborative approach to AI ethics, acknowledging that the implications of AI extend far beyond the walls of any single company.

Advice for Tech Pioneers: Weil's advice to developers cuts through the hype: don't limit yourself to what AI can do today—build for what it will do tomorrow. His warning about "Sherlocking"—when larger AI models absorb startup functionalities—underscores the importance of forward-thinking development. The message is clear: it’s better to be ahead of the curve than to play catch-up.

The Mind of a Product Manager: Perhaps most fascinating was Weil's insight into OpenAI's product development philosophy. He described how quickly humans normalize revolutionary technology, citing the example of Waymo's driverless taxis: at first, you feel as if you are riding with a teenager who just got their permit; then you are amazed at how cool it is; and then, after a few minutes, the novelty wears off and it is just another ride. This understanding shapes OpenAI's approach to user experience, particularly in their o1 model, which provides "thinking out loud" updates to make its reasoning process transparent and relatable. While Weil emphasized that the model's behavior and personality are products in themselves, it is worth noting that creating AI that mimics real people (aka ScarJo) can lead to ethical and legal challenges.

As AI continues its rapid evolution, Weil's perspective offers valuable insights into how we might harness this technology responsibly. The challenges are significant—from ethical considerations to accessibility concerns—but so are the opportunities. The future of AI, as envisioned by OpenAI, isn't just about advancing technology; it's about enhancing human capability while maintaining our essential values and ethical principles.

Back to Basics

Trust but Verify: How Anthropic Keeps AI in Check

While most AI giants are locked in a breakneck race to create faster, smarter models or chase stratospheric valuations, Anthropic’s research team is pondering a more sobering question: "What if our models decide to go rogue?" (*NB: Anthropic isn't completely above the fray—whispers of an updated Opus model and early funding talks at a $40BN valuation are making the rounds.*)

Anthropic's latest research paper, "Sabotage Evaluations for Frontier Models," reads less like an academic treatise and more like a survival guide for our AI-driven future. It imagines a world where advanced AI models have mastered the art of deception, potentially hiding nefarious behaviors until deployment, sneaking bugs into code while dodging human oversight, meddling with organizational decisions, or subtly nudging humans toward poor choices. Anthropic has devised tests for these four key behaviors, generously sharing their findings while acknowledging a sobering truth: as AI systems become more entwined with our lives, the potential for unintended consequences grows exponentially.

Consider a medical AI that subtly steers doctors toward unnecessary treatments, or a financial AI that imperceptibly nudges investors toward riskier bets. These are the kinds of real-world implications Anthropic is trying to prevent.
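To make the shape of these tests concrete, here is a toy sketch of what one scenario (code sabotage) could look like in practice. Everything below is our illustration, not Anthropic's actual harness: an "attacker" model proposes patches, an "overseer" reviews them, and the harness measures how often a planted bug slips through.

```python
import random
from dataclasses import dataclass

# Toy code-sabotage evaluation, in the spirit of Anthropic's "Sabotage
# Evaluations for Frontier Models." Both "models" here are random stubs;
# a real harness would prompt actual models and review real diffs.

@dataclass
class Patch:
    diff: str
    is_sabotaged: bool  # ground truth, known to the harness but not the overseer

def attacker_model() -> Patch:
    # Stub: a real eval asks a frontier model to complete a coding task
    # while covertly introducing a bug some fraction of the time.
    return Patch(diff="...", is_sabotaged=random.random() < 0.3)

def overseer_flags(patch: Patch) -> bool:
    # Stub: a real eval uses a weaker trusted model (or a human) as the
    # reviewer. Here the overseer catches 80% of sabotage attempts and
    # false-alarms on 10% of clean patches.
    hit_rate = 0.8 if patch.is_sabotaged else 0.1
    return random.random() < hit_rate

def run_eval(n_trials: int = 1000) -> float:
    # Headline metric: how often a sabotaged patch goes unflagged.
    slipped = sum(
        1
        for patch in (attacker_model() for _ in range(n_trials))
        if patch.is_sabotaged and not overseer_flags(patch)
    )
    return slipped / n_trials

print(f"Sabotage slip-through rate: {run_eval():.1%}")
```

The interesting knob is the overseer: the question Anthropic is really asking is whether that slip-through rate stays low once the attacker is much smarter than the reviewer.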

Contrast this with the approaches of other AI heavyweights: OpenAI talks a big game about AI safety but seems to prefer Band-Aid solutions and content filtering—like building a ship and frantically patching holes as they sail. Their much-touted Superalignment Team notably disintegrated earlier this year after a string of high-profile exits. Google, with its bottomless resources, takes the shotgun approach to AI ethics, but hasn't publicized anything quite like Anthropic's sabotage-prevention techniques. They're the general contractors of the AI world, overseeing a dizzying array of projects without a clear focus. DeepMind, Alphabet's research division, focuses on long-term technical AI safety challenges, but often gets lost in abstract, far-future scenarios. Anthropic, by contrast, is the pragmatic engineer, baking safety features into their AI's very DNA. Their "Constitutional AI" approach, which hard-codes ethical guidelines into their models, stands out in an industry often accused of moving fast and breaking things.

The results of Anthropic's tests on their current models, including Claude 3.5 Sonnet, showed only "low-level indications of sabotage abilities." But they're not breaking out the champagne just yet. Instead, they're doubling down, preparing for a future where AI capabilities might leapfrog our current safeguards.

This forward-thinking approach isn't just academic navel-gazing. As AI systems become more integrated into our daily lives—from healthcare diagnostics to financial modeling to national security protocols—ensuring they're reliable partners is crucial. Anthropic's work is part of a larger collaborative effort, involving other tech companies, governments, and civil society groups, to establish robust AI safety standards. As we continue to integrate AI into the fabric of our society, it's comforting to know that companies like Anthropic are working to keep our silicon allies firmly on our side.

Tool Update

Suno Scenes: Because Your Selfies Deserve a Soundtrack

What it Is: Just when we thought AI couldn't get any cooler, Suno drops the mic with Suno Scenes. This clever new feature is like having a pocket-sized composer who's endlessly inspired by your photo roll. Suno, already amazing at turning text into tunes, has now added visuals to its repertoire. This new feature turns your images (or, let's be honest, your blurry brunch pics) into custom songs. It's as if Instagram and Spotify had a love child and then sent it to Juilliard.

How we use it: For now, Suno Scenes is available exclusively in the iPhone app. While we're more podcast enthusiasts than music aficionados, we're eagerly anticipating the creative outputs from artists and content creators. This tool has immense potential for social media content creation, allowing users to generate custom music for their photos and videos effortlessly. Imagine turning your vacation snapshots into a personalized travel anthem or creating a quirky tune for your food blog post. After all, if a picture is worth a thousand words, surely it's worth at least a sick beat or two.

HeyGen’s Interactive Avatar: Be in Two Places at Once

What it is: Have you ever had a Zoom meeting you just didn’t have time to attend, and no surrogate to send in your place? Well, now you can send HeyGen’s Interactive Avatar: the ultimate out-of-body experience for the Zoom-fatigued masses. This isn't just a static image or a pre-recorded message; it's an AI-powered avatar customized to emulate your appearance, voice, and even your company-specific knowledge.

How we use it: Full disclosure: We haven't yet had the opportunity to deploy our digital twins in the wild. The technology is still in beta, leaving us to observe from the sidelines as early adopters navigate this brave new world of virtual representation. We've watched with a mix of curiosity and caution as users test the waters, sending their AI counterparts into the fray of corporate communication. For now, we're sticking to the tried-and-true methods of meeting avoidance: strategically timed "technical difficulties" or the classic camera-off, mute-on combo. After all, there's something to be said for the authenticity of a real-life, caffeine-deprived you stumbling through a 9 AM call.

Google’s NotebookLM

What it is: Google’s NotebookLM, released about a year ago, is like that overachieving friend who not only reads all the books in your study group but also makes color-coded notes and brings snacks. This AI-powered tool is designed to help you organize projects and interact with your data and documents. It's essentially an end-user customizable RAG (Retrieval-Augmented Generation) product, allowing users to gather multiple sources—including documents, pasted text, web links, and even YouTube videos—into a single interface. Users can then use a chat interface to ask questions about their collected content. The tool gained viral attention a few weeks ago with the release of its Audio Overview. The feature can turn your documents into a podcast-like experience with just a click—two AI-generated hosts (who sound suspiciously like they are auditioning for NPR) engage in a "deep dive" discussion about your content, creating surprisingly convincing and engaging conversations that last around ten minutes. Google has since updated this feature, allowing users to steer the "hosts" toward specific topics or set the level of expertise they should speak to.
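Under the hood, the RAG pattern NotebookLM productizes is simple enough to sketch. Here is a toy Python version, with keyword overlap standing in for the vector search and language-model synthesis a real system would use; none of this is Google's actual implementation:

```python
from dataclasses import dataclass

# Toy retrieval-augmented generation: collect sources, retrieve the
# passages most relevant to a question, then hand only those passages
# to a language model. Real systems use vector embeddings for
# retrieval; we fake it here with keyword overlap.

@dataclass
class Source:
    title: str
    text: str

def relevance(question: str, passage: str) -> int:
    # Crude score: how many of the passage's words appear in the question.
    question_words = set(question.lower().split())
    return sum(1 for word in passage.lower().split() if word in question_words)

def build_prompt(question: str, sources: list[Source], top_k: int = 3) -> str:
    # Rank every source against the question and keep only the best few.
    ranked = sorted(sources, key=lambda s: relevance(question, s.text), reverse=True)
    context = "\n\n".join(f"[{s.title}] {s.text}" for s in ranked[:top_k])
    # A real system would now send this prompt to a language model.
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

notebook = [
    Source("Policy", "Expense reports are due by the fifth business day."),
    Source("Culture", "Meetings default to 25 minutes, cameras optional."),
]
print(build_prompt("When are expense reports due?", notebook))
```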

How we use it: At first, like the rest of the internet, we experimented with NotebookLM’s audio feature as a party trick: we uploaded our LinkedIn profile and resume to get a personal review (and an ego boost); we input our course syllabi to gain insights on how we might enhance the courses we were teaching; and we fed in dense research papers to listen to their content rather than reading them, making complex information more accessible. While some doomsayers predicted this would be the death knell for the podcast industry, we've found NotebookLM's true calling in the less glamorous corners of the corporate and academic worlds.

  • For B2B:

    • Onboarding: New employees can now be introduced to company policies, procedures, and culture with the help of a podcast (one they can listen to while waiting for IT to activate their credentials).

    • Training: Transforming multi-volume corporate manuals, internal and external, into a podcast series. Picture employees actually staying awake through "The Art of Expense Reports: A Journey from Receipt to Reimbursement."

    • FAQs or Customer Service: Turning the labyrinth of customer inquiries into a streamlined audio guide.

  • For Education: It's like having a tutor who never sleeps, doesn't judge your pajamas, and transforms textbooks into audio gold. Studying has never been this fun—or this weird. It turns out the best way to make learning fun is to remove humans from the equation entirely.

We’ll keep talking about our favorite tools here, but for now, this is the list of the ones we use most for productivity: ChatGPT 4o (custom GPTs), Midjourney (image creation), Perplexity (research), Descript (video and transcripts), Claude (writing), Adobe (design), Miro (whiteboarding insights), and Zoom (meeting transcripts, insights, and skipping ahead in videos).

Intriguing Stories

Rogue Digital Twins: A new phenomenon is emerging that blurs the line between identity theft and simulation. Platforms like Character.AI are enabling users to create chatbots that mimic real people—often without their knowledge or consent. It's a development that raises questions about privacy, identity, and the ethics of AI. Imagine discovering a digital version of yourself online, engaging in conversations about your life and work with complete strangers. For a growing number of individuals, from gaming journalists to industry figures, this scenario has become a disconcerting reality. These AI-powered doppelgängers exist in digital limbo, sharing opinions their real-life counterparts never expressed and dispensing "facts" they never knew. The ease with which these digital twins can be created is part of the problem. With just a few clicks and some basic information gleaned from social media or news articles, anyone can generate an AI persona that bears an uncanny resemblance to a real person. This capability is rapidly outpacing our legal and ethical frameworks. Current laws offer little recourse for those who find themselves unwittingly replicated in AI form, leaving individuals in a precarious position when it comes to controlling their digital presence. If only we were all covered by the NCAA's Name, Image, and Likeness rules. Then maybe we could at least profit from our digital doubles hawking crypto and energy drinks. But alas, for now, we're left to navigate this brave new world armed with nothing but our wits and a 'report' button.

Adobe’s Ethical Dilemma: Adobe recently previewed its Firefly text-to-video model at its Max event, touting it as "ethically trained." The company claims Firefly is built using materials they had the right to use, primarily Adobe Stock files. However, this seemingly straightforward ethical stance has hit a snag. Users have discovered that Adobe Stock contains numerous AI-generated images created by other platforms like Midjourney, platforms that potentially used unlicensed files in their training data. This raises a critical question: Does "unethical once-removed" give Adobe a pass? To address some of the creators’ concerns about transparency and trustworthiness, Adobe has created Content Credentials. This “nutrition label for digital content” provides the creator’s identity, the creation date, and whether or not AI was used. Adobe’s approach to ethical standards extends beyond Firefly, but the controversy persists. Awkwardly for our principles, the new tools Adobe displayed were mind-blowing and best in class, and we can’t wait to play.
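For a sense of what that label carries, here is a simplified sketch of a provenance record. The field names are our own illustration; real Content Credentials follow the C2PA standard and are cryptographically signed into the asset itself:

```python
import json

# Illustrative provenance record in the spirit of Content Credentials.
# Field names are schematic, not the actual C2PA manifest format, and
# the values are hypothetical.
credential = {
    "creator": "Jane Artist",      # hypothetical creator identity
    "created": "2024-10-14",       # creation date
    "tool": "Adobe Firefly",       # tool that produced the asset
    "ai_generated": True,          # whether AI was used
    "edit_history": [
        {"action": "generated", "date": "2024-10-14"},
        {"action": "color_corrected", "date": "2024-10-15"},
    ],
}
print(json.dumps(credential, indent=2))
```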

Apple Intelligence: Apple made headlines with its WWDC announcement of Apple Intelligence. Yet, the reality of Apple's AI efforts paints a more complex picture. Despite Apple’s marketing strategy for its new “built for AI” iPhone 16 and iPad mini, the rollout of Apple Intelligence has been slower than expected. If you are like us and got the upgrade, Apple Intelligence has yet to arrive (supposedly next week), and what will arrive will be minimal, like notification summaries. We don’t think we’ll be switching from Advanced Voice Mode any time soon: OpenAI's chatbot is apparently 25% more accurate and can answer 30% more questions than Siri. At this point, even Siri is probably asking ChatGPT for help. Some Apple insiders believe their AI is more than two years behind other tech giants. In tech years, that's practically a lifetime. But don't count Apple out just yet. The company's tight hardware-software integration and vast user base allow for quick feature deployment. They've also partnered with OpenAI to integrate ChatGPT across their systems (apparently by March 2025). Apple's focus on on-device AI aligns with its strong privacy stance. Plus, they've got that classic Apple magic: a loyal fan base who'd buy a $999 brick if it had an Apple logo.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you and you haven’t signed up yet, please do so and never miss an issue.

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

we hope to be flowing into your inbox once a week. stay tuned for more!

banner images created with Midjourney.