Verses Over Variables
Your guide to the most intriguing developments in AI
Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and perception of reality.
Tools for Thought
Our Favorite Tools of 2024
The AI explosion of 2024 has turned my browser into a digital wonderland, with new tools appearing at a pace that makes quantum computing look sluggish. Just when I thought I'd mastered one AI assistant, three more would materialize, each promising to revolutionize how we work, create, and think. After countless hours tinkering with these digital companions, I've discovered some genuine gems that have earned a permanent spot in my workflow. I've battle-tested every promising AI tool that crossed my path, separating the digital diamonds from the silicon snake oil. Now, buckle up as I dive into my hand-picked lineup of 2024's most impressive AI tools – the ones that didn't just survive my intense testing gauntlet but became indispensable allies in my daily digital adventures. These aren't just occasional helpers; they're the power tools I fire up daily to supercharge my workflow. And while these topped my charts, I've got a treasure trove of runner-up AI tools that nearly made the cut – stay tuned for that compilation coming your way soon.
Claude: The Infinite Draft
What it is: Claude is like having a really smart, endlessly patient thinking partner who's always ready to help. Some people need someone to bounce ideas off of, help organize thoughts, or be a sounding board - that's Claude. It's an AI that can write, analyze, explain things, and most importantly, actually understand what you're trying to accomplish. What makes it unique is how it adapts to your needs - whether you're trying to understand a complex topic, craft the perfect email, or develop a detailed business proposal. The magic truly shines through two remarkable features: Artifacts, which creates a dedicated space alongside your conversation for developing substantial pieces like documents or long-form content, and the Custom Style function, letting Claude adapt its writing voice to match any tone needed - from polished professional to casually creative.
How I use it: I've made Claude my go-to creative collaborator, and these features have entirely transformed my workflow. The Artifacts space has become my creative laboratory - when I'm crafting something substantial like a detailed proposal or comprehensive report, it lives in its own dedicated space while we bounce ideas back and forth in our main conversation. The Custom Style function takes our collaboration to another level entirely. I've crafted different voice profiles that capture the tone I need for various projects. One minute I'm generating crisp, professional client communications, the next I'm writing engaging team updates with just the right touch of personality. The best part is how these features work together - I can develop complex content in a focused workspace while maintaining precise control over the voice and presentation. It's like having a supremely adaptable creative partner who always knows exactly what tone to strike and perfectly organizes all of our projects.
Perplexity: Beyond Search
What it is: I've fallen head over heels for Perplexity, my go-to research powerhouse that's transformed how I process information online. Unlike traditional search engines with their endless sea of blue links, Perplexity crafts coherent, insightful responses that feel like they're coming from your most brilliant research partner. It's revolutionizing the research experience by weaving together insights from reliable sources while maintaining a running conversation that builds on previous queries. The platform's natural language processing capabilities pack serious computational muscle, yet the interface remains as approachable as a casual chat with a knowledgeable friend. The seamless integration of multiple data sources and its ability to remember context make complex research feel effortless.
How I use it: My daily workflow with Perplexity has become a masterclass in efficient research and discovery. The platform transforms my scattered thoughts into structured insights, complete with verifiable citations that hold up under scrutiny. During deep dives into complex topics, the platform guides me through progressive layers of understanding, building a comprehensive knowledge base. The collections system serves as my digital brain, storing and organizing discoveries with an elegance that makes traditional bookmarking seem primitive. I leverage the file upload feature for in-depth projects, letting Perplexity analyze and connect information across multiple documents. The ability to switch between different AI models provides multiple perspectives. Perplexity has become my indispensable digital companion, from quick fact-checks to comprehensive research projects.
Midjourney: The Creative Alchemist
What it is: Think of Midjourney as your digital art studio in the cloud. While other AI tools might sketch rough drafts, Midjourney paints masterpieces—transforming your words into stunning visuals that often blur the line between AI-generated and human-created art. Born as a Discord-only platform, it's grown into a sophisticated web-based powerhouse that feels like having a team of world-class artists at your fingertips. The latest V6.1 release has taken things to another level, with an almost uncanny ability to nail the details that used to give AI art away: hands actually look like hands now, and those subtle lighting effects that make images pop? Midjourney's got them down to a science.
How I use it: I've woven Midjourney into the fabric of my creative process, and honestly, I can't imagine working without it anymore. The web interface has become my second home—a digital canvas where I spend hours fine-tuning prompts and watching ideas materialize before my eyes. My secret weapon? The style reference and moodboard feature, which lets me build visual DNA profiles for different projects. When I need that perfect product shot or conceptual piece, I'll start with a reference image, then use the retexture tool to play mad scientist with materials and textures. Sometimes I'll spend an entire afternoon just experimenting with different styles, watching how a single prompt evolves as I tweak parameters. The platform has transformed what used to be time-consuming visualization work into an intuitive dance between human creativity and artificial intelligence.
ChatGPT: A Productivity Hack
What it is: ChatGPT has become practically synonymous with AI - the name that instantly springs to mind when anyone mentions artificial intelligence. OpenAI's flagship creation represents a fundamental shift in how we interact with technology, transforming from a simple chatbot into an indispensable digital companion. ChatGPT excels at diving deep into research topics, crafting compelling narratives, and developing structured approaches to complex business challenges. The latest version brings us Advanced Voice Mode which makes conversation feel as natural as talking to a colleague, Custom GPTs that adapt themselves to specific professional needs, and Canvas - a collaborative workspace that turns frameworks and deliverables into polished masterpieces. Each interaction feels less like interfacing with technology and more like partnering with a seasoned intern who's always ready with fresh insights.
How I use it: I've made ChatGPT my secret weapon mostly through the iPhone app. Advanced Voice Mode has become my constant companion during dog walks and brainstorming sessions - with insights flowing naturally through my earbuds. My Custom GPTs are precision-tuned for my professional needs: one specializes in deep-dive research synthesis, pulling together complex market data into coherent narratives. Another is trained on my go-to business frameworks, adapting best practices to unique client situations. My writing-focused GPT understands my tone and style, helping transform technical insights into clear, engaging client communications. Canvas has also become my strategic planning hub, where initial research evolves through collaborative refinement into comprehensive strategies. It's like having a brilliant consulting partner who's always ready to research, strategize, or help structure my thinking for maximum impact.
Google NotebookLM: Where Your Documents Talk Back
What it is: Google NotebookLM is hands down the coolest way I've found to turn dense documents into podcast-style conversations. This research platform has completely transformed how I process and understand complex documents, going beyond just another note-taking tool. It's become my secret weapon for conquering complex content through audio and direct engagement. Built on Google's Gemini AI engine, it lets me dive deep into conversations with my documents—yes, actual back-and-forth discussions with my research materials. I can load up to 50 sources (or 300 for enterprise users) per notebook, from PDFs to Google Docs to web articles, and then do something magical: ask questions directly about any detail that catches my interest. The AI doesn't just parrot back content; it synthesizes information across sources, spots connections I might miss, and engages in surprisingly nuanced discussions about complex topics. While Audio Overview keeps me company during commutes, it's this ability to interrogate my documents and create study guides that is truly revolutionary.
How I use it: In my world, NotebookLM has transformed both my teaching prep and professional research workflow. During commutes, I'm deep in Audio Overview discussions of industry papers, but back at my desk, I'm firing questions at my notebook about everything from market trends to theoretical frameworks. When preparing client presentations, I'll often start a dialogue with my collected sources, pushing the AI to help me uncover hidden patterns or challenge my assumptions. For teaching, it's a game-changer—I can rapidly explore different angles on complex topics, generating fresh perspectives that work for both graduate seminars and executive training sessions. Its dual nature is what makes it special: learning through Audio Overview combined with active engagement through direct questioning. It's like having a research partner who's read everything in my collection and is ready to engage in thoughtful discussion at any time. The platform has fundamentally shifted how I interact with information, making consumption and analysis more like an engaging conversation than a slog through dense text.
Google AI Studio: Screen Sharing with the Future
What it is: I've been diving deep into Google AI Studio lately, and let me tell you, it's a game-changer. At its core, it's Google's sleek platform for tinkering with their most advanced language models. But calling it just another AI interface would be like calling a Ferrari just another car. It's got this beautifully intuitive workspace where you can prompt, fine-tune, and deploy AI models without drowning in code. Google has somehow managed to pack enterprise-grade AI capabilities into something that feels as approachable as their search bar, while still giving us tech enthusiasts all the sophisticated controls we crave.
How I use it: Google AI Studio has completely transformed how I tackle tech troubles. Instead of falling down those endless YouTube tutorial rabbit holes (we've all been there), I simply share my screen and let the AI guide me through solutions in real-time. I'm particularly stoked about their experimental 1206 model - it's lightning-fast and kind of mind-blowing in how quickly it grasps what I'm trying to do. When I need to dig deeper into a problem, I switch over to deep research mode. It's like having a tech-savvy friend who knows all the answers and can explain them in a way that makes sense to me.
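If you ever want to take an AI Studio experiment beyond the browser, the same Gemini models are reachable through Google's Gemini API. Here's a minimal sketch, assuming you've installed the google-generativeai Python package and created a free API key in AI Studio; the model name and prompt are just placeholders for whatever you've been prototyping.

```python
# Minimal sketch: carrying an AI Studio prompt into code via the Gemini API.
# Assumes: `pip install google-generativeai` and an API key created in AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")  # placeholder key

# Pick whichever model you've been experimenting with in the Studio UI.
model = genai.GenerativeModel("gemini-1.5-pro")

# The prompt is a stand-in for whatever you prototyped in AI Studio.
response = model.generate_content(
    "Walk me through diagnosing why my home Wi-Fi keeps dropping, step by step."
)
print(response.text)
```

The same key and call work with the lighter gemini-1.5-flash model if you just want quick answers instead of deep dives.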
Anthropic Console: Universal Prompt Laboratory
What it is: The Anthropic Console has become my secret weapon in the AI world - and not just for Claude. Think of it as my personal AI whisperer that helps me communicate better with any AI model out there. At its core, it's a brilliant interface that helps me craft the perfect instructions for AI conversations. The built-in Prompt Generator is like having a universal translator for AI-speak, while the improvement tools help me polish my instructions until they shine.
How I use it: I've made the Console my first stop whenever I need to have a complex conversation with any AI model. I use the Prompt Generator to craft crystal-clear instructions, tweak them until they're just right using the improvement tools, and then take those perfectly crafted prompts wherever I need them - whether that's ChatGPT, Claude, Gemini, or any other AI assistant. It's like having a master key that unlocks better performance across all AI platforms. I've noticed that when I take a prompt refined in the Console and plug it into different models, I consistently get higher quality responses. The Console has essentially become my prompt laboratory, where I experiment and perfect my AI communication before taking it on the road.
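To make that master-key idea concrete, here's a rough sketch of reusing one Console-refined prompt across two different providers. It assumes the official anthropic and openai Python packages, API keys set as environment variables, and a placeholder system prompt standing in for whatever the Prompt Generator gave you.

```python
# Rough sketch: one refined system prompt, reused across two different AI providers.
# Assumes: `pip install anthropic openai`, with ANTHROPIC_API_KEY and OPENAI_API_KEY
# set in the environment. The prompt and draft text are hypothetical placeholders.
import anthropic
from openai import OpenAI

REFINED_PROMPT = (
    "You are a meticulous editor. Rewrite the user's draft for clarity, "
    "keep their voice, and flag any claims that need a source."
)
draft = "Our Q3 numbers was strong, we grew 40% and churn dropped alot."

# Same instructions, sent to Claude...
claude = anthropic.Anthropic()
claude_reply = claude.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=500,
    system=REFINED_PROMPT,
    messages=[{"role": "user", "content": draft}],
)
print(claude_reply.content[0].text)

# ...and to GPT-4o, unchanged.
openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": REFINED_PROMPT},
        {"role": "user", "content": draft},
    ],
)
print(gpt_reply.choices[0].message.content)
```

The prompt is the part that travels; only the client call changes from platform to platform.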
Granola: Your Mental Alibi
What it is: Granola AI is an AI-powered notepad designed to enhance meeting productivity. Think of it as your personal assistant who transcribes the meeting in the background while giving you the freedom to jot down key points without breaking your flow. Its standout feature is the way it merges your handwritten notes with the meeting transcription to produce a tailored summary of key takeaways and next steps. Granola isn’t just about transcription—it contextualizes information based on meeting participants and their roles, creating personalized and actionable notes. With support for popular platforms like Zoom, Google Meet, and Teams, plus integration with your calendar, it fits seamlessly into your existing workflow.
How I use it: I rely on Granola AI to keep me focused and present during meetings. Instead of scrambling to capture every word, I only note what matters most, trusting Granola to fill in the gaps with its transcription and AI-generated enhancements. After the meeting, I love how it delivers a polished summary that combines my input with its additions, making it clear what came from me versus the AI. Its "Zoom In" feature has been a lifesaver for verifying quotes, and the post-meeting tools, like drafting follow-up emails, save me hours of manual work. With Granola handling the details, I focus on the conversation, confident that my notes will be accurate and comprehensive.
Honorable Mentions: Creative Dream Team
When it comes to fueling my creative projects, a few other tools deserve an honorable mention. Krea's become my go-to playground for testing the latest AI visual innovations - I've been diving into cutting-edge image and video models, and their sketch-to-image generation feature has saved my artistically challenged hands more than once. v0 has transformed me from a web development novice into someone who can build functional, good-looking websites without touching a line of code. Suno brings the audio magic, transforming text prompts into full songs with expressive vocals and immersive soundscapes that don't sound like they came from a 90s text-to-speech program. Runway streamlines my video editing and AI-driven content creation - it's made complex video projects actually achievable instead of eternally aspirational. Meanwhile, ElevenLabs reads articles aloud, adds immersive sound effects, and ensures my storytelling hits the right notes with its advanced text-to-speech capabilities, bringing my words to life in ways I never thought possible. Together, these tools have turned my ambitious project ideas into finished work that makes me look far more talented than I am.
AI Hype Cycle
Tokens of Appreciation: Simon Willison's Recap of AI in 2024
I was all set to pen an exhaustive recap of AI's wild ride through 2024, but then I read Simon Willison's comprehensive blog post, Things We Learned About LLMs in 2024. He's already knocked it out of the park, so instead of reinventing the wheel, let me walk you through the highlights of his sharp observations.
First up, 2024 saw the mighty GPT-4 benchmark fall like dominoes, with 18 organizations leapfrogging OpenAI's original model. Google, Anthropic, and Meta led the charge, but Google's Gemini 1.5 Pro stole the show with its mind-bending 2-million-token context length. We're talking about analyzing entire books or debugging massive codebases in one go - not just better performance, but a fundamental shift in how we can put these models to work. The year also brought LLMs down to earth - literally to our laptops. Thanks to some serious efficiency gains, models like Meta's Llama 3.2 and Qwen2.5-Coder-32B now run smoothly on consumer hardware (GPT-4-level processing power, no cloud required, right at your fingertips).
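If you're curious what running on consumer hardware actually looks like in practice, one popular route is Ollama, which serves local models over a small HTTP API. The sketch below assumes you've installed Ollama and already pulled the llama3.2 model; it simply posts a prompt to the local server.

```python
# Minimal sketch of talking to a locally running model via Ollama's HTTP API.
# Assumes: Ollama is installed and running, and you've done `ollama pull llama3.2`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",   # a small Llama 3.2 build that fits on a laptop
        "prompt": "Summarize the plot of Pride and Prejudice in three sentences.",
        "stream": False,       # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's completion text
```

Swap in a heavier model like qwen2.5-coder:32b (if your machine has the memory for it) and the same call works unchanged.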
The economics have shifted dramatically too. Once eye-wateringly expensive, running these models now costs mere pennies. Want to generate captions for 68,000 images with Google's Gemini 1.5 Flash? That'll be less than your morning coffee. This isn't just about cheaper processing - it's about democratizing access while shrinking the environmental footprint.
Speaking of evolution, 2024 made multimodal capabilities the new normal. Today's LLMs don't just read text - they see, hear, and process the world in increasingly sophisticated ways. OpenAI's live camera analysis and Google Gemini's audio processing capabilities feel like they've been yanked straight from tomorrow's tech catalog. The voice interaction piece has become particularly uncanny. OpenAI's GPT-4o voice mode now mimics accents with an accuracy that would make voice actors nervous. Pair that with live video analysis, and we're edging into sci-fi territory - though thankfully more Her than Terminator.
Simon offers a refreshing reality check on AI agents, though. Despite the persistent hype, practical applications remain stubbornly limited. Issues with reliability and gullibility continue to hold back real-world deployment, reminding us that true AI autonomy still has some growing up to do.
The training landscape saw some fascinating shifts too. Synthetic data proved invaluable for building specialized capabilities, while OpenAI's o1 series introduced a novel approach to complex problem-solving - scaling up inference rather than training. It's a clever twist that opens up new possibilities for how these models tackle tricky challenges.
The environmental story is a study in contrasts. While individual prompts have become remarkably efficient, the massive datacenter expansions by tech giants raise some thorny questions about infrastructure sustainability. Are we building what we need, or are we caught in a cycle of competitive overreach? The jury's still out on that one.
Then there's what Simon brilliantly dubs "the slop era" - and if you've spent any time online in 2024, you know exactly what he means. The internet is practically drowning in AI-generated content of wildly varying quality. It's highlighted our desperate need for better filtering and evaluation tools, unless we want to spend our days swimming through an ocean of synthetic mediocrity.
One of Simon's most striking observations centers on the stark disparity in AI literacy. While ChatGPT has become as household a name as Kleenex or Google, powerful alternatives like Claude and Qwen remain relative unknowns to the general public. As these technologies become increasingly woven into the fabric of our daily lives, bridging this knowledge gap becomes crucial.
Looking back through Simon's analysis, it's clear that 2024 wasn't just another year of steady progress - it was a fundamental shift in how we build, deploy, and interact with AI. While plenty of challenges remain (looking at you, environmental impact and content quality), the trajectory suggests we're moving toward a future where AI becomes more powerful and more accessible. The depth of Simon's insights goes even further than what I've covered here, and I'd strongly encourage you to check out his original post. After all, understanding where we've been is crucial for appreciating where we're headed. And based on 2024's developments, we're headed somewhere fascinating indeed.
Back to Basics
AGI vs ASI: The Gap Between Benchmarks and Brain Power
The AI landscape feels like a Silicon Valley soap opera lately, with Sam Altman dropping what might be his most intriguing blog post yet about ChatGPT's second birthday and OpenAI's journey toward AGI. Let me break down what we're talking about when we throw around terms like AGI (artificial general intelligence) and ASI (artificial superintelligence) because these distinctions matter more than ever.
Think of AGI as the holy grail of artificial intelligence - it's essentially human-level machine cognition. Not just a really good chatbot, but a system that can genuinely tackle any intellectual task a human can. Write poetry, debug your code, plan your wedding, and solve novel physics problems, all while truly understanding the nuanced context of each task. Now, ASI takes that concept and cranks it up to eleven. Imagine AGI as graduating college, and ASI as suddenly becoming a hybrid of Einstein, Leonardo da Vinci, and every other genius who's ever lived - except smarter.
Altman's latest reflection piece makes some bold claims about OpenAI's progress toward AGI, stating they're "now confident we know how to build AGI as we have traditionally understood it." That's quite a statement, especially coming on the heels of the o3 model's performance. When o3 hit that 87.5% score on the ARC-AGI benchmark - surpassing what's considered human-level performance - the AI world practically exploded. But let's talk about the disconnect between benchmark performance and true general intelligence. Sure, o3's innovative "adaptive thinking time API" and advanced search mechanisms are impressive engineering achievements. François Chollet, who created the ARC-AGI benchmark, called it a "surprising and important step-function increase." Yet acing a test, even a sophisticated one, doesn't equal general intelligence. It's like saying you're fluent in a language because you memorized the dictionary.
The reality behind these impressive numbers tells a different story. While Altman talks about AGI in 2025 and "superintelligence in the true sense of the word," his blog post reveals the messiness behind the scenes. He acknowledges the internal turmoil at OpenAI, including his dramatic firing and reinstatement, and the constant evolution of their approach. The company's gone from "a quiet research lab" to a complex organization grappling with governance issues and high-profile departures.
I find it fascinating how Altman pivots from AGI to superintelligence in his reflection. He's essentially saying, "Hey, we've figured out AGI, now let's talk about ASI." This feels a bit like announcing you've solved nuclear fusion while your prototype is still in CAD drawings. The path from our current narrow AI systems to AGI isn't just a matter of scaling up - it requires fundamental breakthroughs in understanding and replicating consciousness, reasoning, and general problem-solving, to say nothing of unresolved questions around bias, privacy, and security.
The tech industry's optimism is infectious, and the progress we're seeing is genuinely exciting. o3's breakthrough, with its novel approaches to program synthesis and chain-of-thought reasoning, represents a significant step forward. When Altman writes about superintelligent tools "massively accelerating scientific discovery," I can't help but notice that our current AI systems still struggle with basic causality and common sense reasoning - things that even young children grasp intuitively.
Maybe Altman's right - maybe 2025 will bring us AI agents that genuinely "join the workforce." Between the corporate drama, the benchmark celebrations, and the superintelligence speculation, I'd say we're still in the early chapters of this story. Though I must admit, reading Altman's reflections while my smart home device still struggles to turn off the right lights makes me wonder if we're all getting a bit ahead of ourselves in this AGI narrative.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you or you haven’t done so yet, please sign up so you never miss an issue.
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
we hope to be flowing into your inbox once a week. stay tuned for more!
banner images created with Midjourney.