Verses Over Variables
Your guide to the most intriguing developments in AI
Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, our culture, and our perception of reality.
We’ll be publishing intermittently until the end of the year and will be back to our regular cadence in 2025.
AI Hype Cycle: Regulation
Anthropic’s Red Flags vs Microsoft’s Green Lights
The world of AI regulation is like watching a theater where every player has their own script, motivations, and a touch of dramatic flair. In one corner, we’ve got Anthropic waving a red flag, cautioning that unregulated AI development might steer us into apocalyptic territory. In the other, Microsoft and a16z—powerhouses in their own right—unite to argue that regulation could stifle progress. Each camp is armed with compelling points, and as the debate unfolds, it’s easy to see why this regulatory tug-of-war feels almost Shakespearean in its gravity.
Anthropic’s plea for regulation is rooted in what we’d call a healthy dose of existential paranoia. They’ve been ringing the alarm bells, pointing out that unchecked AI could evolve into something our sci-fi authors would warn us about. Their perspective isn’t just "fear-mongering" (a term that too often dismisses genuine concerns); it’s about creating safeguards that don’t just protect corporate interests but humanity as a whole. They want governments and private sectors to step into this dance with a shared understanding of the potential risks and responsibilities, which is easier said than done. But their stance does raise an essential question: Should we treat AI development like the Wild West or put guardrails up before the horse bolts?
Meanwhile, Microsoft and venture capital firm a16z have joined forces to sing a different tune. To them, regulation, especially if premature or overly strict, could strangle the nascent yet booming AI industry. Their argument isn’t without merit: innovation thrives best when the road isn’t littered with too many speed bumps. They’re advocating for a Goldilocks approach—not too lax, not too tight. They fear over-regulation could create a competitive void where only the most adaptable or best-resourced companies survive, possibly stifling smaller, innovative players. And considering the often clumsy, one-size-fits-all nature of tech legislation, they might be onto something.
This face-off isn’t just about who’s right but about what each perspective signals about the evolving tech landscape. Anthropic’s cautionary note underscores a wider apprehension within AI circles, hinting at the Pandora’s box many believe we’ve already opened. Microsoft and a16z’s resistance to regulation reflects an industry where the stakes are so high that even modest constraints feel like reining in a racehorse mid-gallop. Our biggest takeaway from this regulatory drama is that finding the sweet spot between fostering innovation and avoiding potential disasters will be one of the trickiest balancing acts of our time. Regulation isn’t inherently bad; bad regulation is. Creating frameworks that understand the nuances and pace of technology, while remaining agile enough to adapt, is key. And as the drama unfolds, it’s becoming clear that the future will need both the cautionary tales of Anthropic and the optimistic pragmatism of giants like Microsoft. The question is whether we (and our regulators) can craft a narrative that acknowledges both sides without turning it into a Greek tragedy.
Back to Basics
What is Open Source AI? (Not What We Think)
“Open source" has become as malleable as a blockchain buzzword. But this time, it’s AI in the spotlight. Recently, the Open Source Initiative (OSI) put its stake in the ground, defining what “open source AI” should truly mean—and, spoiler alert, it’s not just slapping “open” on a license and calling it a day. The essence of open source has always revolved around accessibility, transparency, and freedom to tinker and adapt. But the lines blurred when AI stepped into the picture with its towering data needs, complex models, and guardrails. Models like Meta's LLaMA stirred the pot by blending open weights with conditional usage rights, prompting many to ask: Is that really open source?
So, the OSI did what any good guardian of tech values would do—they drew a clearer line. According to their fresh definition, open source AI must share its code, disclose enough about its training data and methods to allow replication, and permit transparent, community-driven modification without a corporate catch-22. The move aims to prevent a future where “open AI” means nothing more than marketing fluff, setting the bar high enough to separate true open collaboration from controlled sandbox play. It’s an ambitious vision. The tech community is rallying behind the promise of more ethically open and transparent AI development. The new benchmarks highlight the balance between protecting innovation and keeping AI ecosystems genuinely collaborative and inclusive. However, as companies navigate these standards, we’ll likely see a divide between genuine contributors and those who have mastered “open-washing.”
Open source isn’t inherently good or bad—it’s a tool, shaped by the hands that wield it. A recent example that underscores this came from Chinese researchers reportedly leveraging Meta’s LLaMA models to develop military applications. This sparked heated debates about how “open” aligns with geopolitical and ethical considerations. Yet, we shouldn’t jump to equating openness with danger. Properly defined, open models foster collaboration and progress. It’s about ensuring that openness doesn’t compromise security but enhances innovation for a global good.
Less is More: The Power of Compact Models
In a tech landscape where bigger usually means better, a quiet revolution is taking place. Small Language Models (SLMs) are emerging as unexpected champions, challenging the notion that artificial intelligence needs massive computing power to be effective. These compact AI models prove that sometimes the best things come in small packages. Nature shows us that some smaller animals can be remarkably intelligent despite their compact brains. The same principle applies to AI. While giants like GPT-4 and Claude use enormous amounts of computing power – imagine hundreds of high-end computers running at once – Small Language Models can run on a single smartphone while still being remarkably capable.
What makes these smaller models special isn't just their size – it's how they're built. Meta has been pioneering this approach with their MobileLLM technology (the smallest model has 125M parameters, compared with the 405B of the largest Llama 3.1). Instead of creating a massive AI that knows a little about everything, these models are designed to be extremely good at specific tasks. It's like the difference between a general practitioner and a specialist doctor – sometimes the focused expert can outperform the generalist. The way these models process information is clever too. Rather than looking at everything all at once, they focus on what's most important – much like how humans pay attention to the most relevant parts of a conversation rather than analyzing every word equally.
The rise of smaller AI models isn't just a technical achievement – it's a game-changer for how we'll use AI daily. When AI can run directly on your phone instead of in distant data centers, everything becomes more private and responsive. Your data stays on your device instead of being sent to the cloud, and the AI can respond instantly because it doesn't need to make a round trip to a server. This shift has real-world benefits that everyone can appreciate. Imagine having a sophisticated AI assistant that works even when you're offline, helps you write emails or summarize documents without sharing your private information, and doesn't drain your phone's battery or eat up your data plan.
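To make that concrete, here’s a minimal sketch of what running a compact model locally can look like, using the open source transformers library. The checkpoint name is our illustrative pick, not a recommendation; any small causal language model of roughly this size would behave the same way.

```python
# A minimal sketch of local inference with a compact language model.
# The checkpoint name is illustrative; any ~100M-parameter causal LM
# from the Hugging Face Hub would work the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM-135M")

prompt = "Draft a one-line reply confirming the meeting moved to Friday:"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```

The whole thing runs comfortably on a laptop CPU, which is exactly the point: no data center, no round trip, no data leaving the device.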
Determining the optimal size for a language model isn't a one-size-fits-all proposition. It depends on the specific task, the quality of training data, and the desired performance. While 125 million parameters have emerged as a reference point for small models, there's no definitive consensus on the absolute minimum size required for accuracy across all tasks. What's clear is that advancements in training techniques, such as transfer learning and knowledge distillation, enable smaller models to perform effectively in a variety of applications.
One of the most exciting aspects of SLMs is their environmental impact. Large AI models consume enormous amounts of electricity – some use as much power as a small town. Small models, by comparison, sip energy like a light bulb. This efficiency doesn't just save electricity; it makes AI more accessible to everyone, not just tech giants with massive data centers. This accessibility is crucial for democratizing AI technology. Small businesses, schools, and developers in regions with limited resources can now use AI technology that previously required millions of dollars in computing infrastructure. It's like the difference between needing an entire orchestra to make music versus having a versatile electronic keyboard – both can create beautiful music, but one is far more practical for most people.
The future of AI might not be about building bigger and more powerful models, but about making them smarter and more efficient. Researchers are finding clever ways to teach these small models new tricks, making them more capable without increasing their size. It's similar to how smartphones have gotten smarter over the years without necessarily getting bigger – through better design and smarter software.
Tool Update
Claude Levels Up
What it is: Claude’s latest update is PDF sight, which allows the model to interpret both text and visual elements (charts, images, and tables) within PDFs. Claude can now dive into document-heavy tasks, extracting, analyzing, and summarizing PDFs with ease. This means it’s not just about conversational AI anymore—it’s a full-fledged document navigator that can pull insights straight from PDFs, making it an ideal tool for deep research, complex report analysis, and any project where combing through PDFs feels like a chore. (Sadly, this sight feature is only available through the API at the moment.) And to tide over those of us who don’t use the API, Anthropic released new models: Claude 3.5 Haiku and an upgraded Claude 3.5 Sonnet bring a level of refinement and adaptability that makes interactions feel more precise, insightful, and even a touch lyrical.
How we use it: Claude is stepping up to cover a wide range of use cases with these new features. When we’re juggling multiple reports or academic papers, or need quick reference points from a hefty PDF, Claude’s new sight feature streamlines our process by pulling out key information with impressive speed and accuracy. Meanwhile, the Haiku and Sonnet models introduce a new level of stylistic versatility, letting you tune interactions to be succinct and insightful (Haiku) or expansive and nuanced (Sonnet).
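For the API-inclined, here’s roughly what a PDF-reading request looks like with Anthropic’s Python SDK. Treat it as a sketch rather than gospel: the feature shipped in beta, so the exact model string and request shape may drift, and the file name is just a placeholder.

```python
# A rough sketch of Claude's PDF sight via the anthropic Python SDK.
# The feature launched in beta, so details may shift; check the docs.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("quarterly_report.pdf", "rb") as f:  # placeholder file name
    pdf_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            # The PDF travels as a base64 "document" block alongside the prompt.
            {"type": "document",
             "source": {"type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_data}},
            {"type": "text",
             "text": "Summarize the key charts and tables in this report."},
        ],
    }],
)
print(message.content[0].text)
```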
ChatGPT Gets Another Upgrade: SearchGPT
What it is: OpenAI just opened the gates to its new Search feature in ChatGPT, bringing real-time internet browsing directly to your fingertips. Think of it as having an AI-powered research assistant that doesn’t just pull up general knowledge but digs up fresh, accurate, and up-to-the-minute information for you. We still find Perplexity to be the best search engine, and you still have to check the links ChatGPT Search returns, but it is definitely better than a page of blue links. Plus, there’s the new Canvas feature, an editable workspace where you can refine a draft or a block of code section by section, inpainting-style—perfect for those moments when your words need to be as on-point as the data driving them.
How we use it: With Search, we’ve got another powerful research tool: just ask, and Search does the rest, streamlining our workflow and keeping us effortlessly informed. There is also an additional search function in the new ChatGPT that finally allows you to search old chats; we’re just hoping they’ll let us organize our chats by something other than date soon.
Desktop Apps: Claude and Perplexity
What it is: Anthropic and Perplexity launched dedicated desktop apps for their AI assistants, Claude and Perplexity. These apps bring serious AI firepower directly to your desktop, delivering advanced natural language capabilities that feel more immediate, versatile, and downright seamless compared to browser-based interactions. Whether you’re drafting emails, gathering complex insights, or brainstorming on the fly, Claude and Perplexity’s desktop setups offer a smoother, more powerful AI experience without the hassle of managing multiple browser tabs. For anyone looking to integrate AI more deeply into their daily workflow, these desktop apps make that connection faster and more reliable than ever.
How we use it: Honestly, we don’t use the apps any differently than the web versions, but we do like having the dedicated space instead of having to open yet another browser.
We’ll be talking about our favorite tools, but here is a list of the tools we use most for productivity: ChatGPT 4o (custom GPTs), Midjourney (image creation), Perplexity (for research), Descript (for video, transcripts), Claude (for writing), Adobe (for design), Miro (whiteboarding insights), and Zoom (meeting transcripts, insights, and skip ahead in videos).
Intriguing Stories
Recap & Chill: Can't remember what happened in that show you watched while juggling your phone, snacks, and an overenthusiastic dog? Amazon Prime Video has your back with their latest AI-powered feature: X-Ray Recaps. Think of it as having a spoiler-conscious friend who gives you the perfect show rundown. This new generative AI feature creates custom summaries of everything from full seasons to single episodes, even zeroing in on specific scenes you might have missed. Using advanced AI, it analyzes video content, subtitles, and dialogue to craft concise breakdowns of key moments, character arcs, and plot developments. Gone are the days of frantically watching recap videos before diving into a new season. Better yet, Amazon's built-in "guardrails" ensure these AI summaries won't spoil any major plot twists. So you can refresh your memory without stumbling across that shocking finale everyone's tiptoeing around on social media. Whether you dozed off during your binge-watch session or just need a quick refresh, X-Ray Recaps has you covered. Now you can confidently join those water cooler discussions about the latest must-watch series (provided you're a Prime subscriber, of course).
Still Buffering: Not so fast, Alexa. While Amazon is making headlines with Prime Video’s X-Ray Recaps to keep viewers updated on their favorite shows, the real story is happening behind the scenes with its flagship voice assistant. A decade after Alexa revolutionized smart homes, Amazon is racing to keep pace with the generative AI revolution. CEO Andy Jassy recently unveiled ambitious plans to "rearchitect the brain" of Alexa using cutting-edge foundation models. Amazon hopes to transform Alexa from a basic Q&A device into an intuitive personal assistant that can understand context and take meaningful actions. But the path forward isn't as smooth as Amazon hoped. Technical challenges and shifting priorities have slowed development, and insiders suggest the new AI-powered Alexa might not debut until 2025. Reports cite a lack of essential data and limited access to the chips needed to run the large language model, leading to delays. Amazon, however, disputes these claims and emphasizes that its AI teams have the resources they need, including proprietary chips and a strong partnership with Nvidia. Despite reassurances, it’s evident that Amazon has prioritized integrating generative AI into its AWS division, leaving Alexa waiting in the wings. This isn’t just an internal tech hiccup; it’s a broader reflection of the growing pains companies face when trying to keep up with rapid advancements in AI.
Not your Grandma’s Minecraft: Decart AI, alongside Etched, just unveiled a game that redefines sandbox gaming—and it’s a trip. Imagine a Minecraft-like world that builds itself as you play, pixel by pixel. Oasis is the world's first real-time AI world model, bringing generative AI right into your gameplay. Instead of being pre-coded, it’s a self-evolving universe that responds to your every keyboard tap and mouse click (i.e. every frame is generated in real time). Oasis reacts dynamically to players, simulating physics, visuals, and rules purely through AI. There’s also a delightful quirk: lack of object permanence. You turn around, and suddenly the world has rearranged itself. It’s like Minecraft with a touch of digital amnesia. As with any groundbreaking tech, there are questions. Training Oasis on gameplay footage from Minecraft raises possible copyright issues, with no indication Microsoft gave a green light. Decart’s model is exciting, but it also has that Wild West energy of emerging tech, where boundaries haven’t quite been set.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
we hope to be flowing into your inbox once a week. stay tuned for more!
banner images created with Midjourney.