Verses Over Variables
Your guide to the most intriguing developments in AI
Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and our perception of reality.
Tool Update
Meet OpenAI o1: The Model with a Master in Deep Thinking
What it is: OpenAI just released a preview (and a mini version) of its new o1 model. This isn't your garden-variety AI – it's more like having a digital Sherlock Holmes on speed dial. o1 brings a fresh twist to artificial intelligence by channeling our cognitive processes. Instead of rapid-fire answers, it meticulously deconstructs problems, employing a "chain of thought" approach that mirrors human reasoning with uncanny precision. Yes, it's slower and a tad pricier than its zippy predecessors, but that's the price of depth over dispatch. (In a move that's more 'back to basics' than 'creative genius', OpenAI has reset its model numbering to 1, heralding what it claims is AI's new frontier.)
How we use it: Think of o1 as the brainiac's Swiss Army knife – your go-to for cracking intellectual nuts. This model leaves mathematicians slack-jawed, outperforms PhD candidates in scientific showdowns, and makes seasoned coders question their career choices. But its talents don't stop at the lab door – it's making waves in the corporate world too. Allie Miller showcased o1's prowess by using it to crack open some business optimization problems, demonstrating its knack for strategic thinking. Just don't ask it to curate your Instagram feed or surf the web – this cerebral powerhouse is all about text-based problem-solving. It's not your chatty AI companion for quick quips or mundane tasks. But when you need serious cognitive muscle – whether you're in a lab coat or a business suit – o1 is ready to flex its intellectual biceps and dive deep into the heavy lifting.
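o1 does this step-by-step reasoning internally, but the underlying "chain of thought" idea is easy to illustrate with ordinary prompting. A minimal sketch (the question and the prompt wording are our own illustrative examples, not OpenAI's API):

```python
# Two ways to pose the same question to a language model: a direct prompt,
# and a chain-of-thought prompt that asks for intermediate reasoning steps.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Direct prompting: ask for the answer outright.
direct_prompt = f"{question}\nAnswer:"

# Chain-of-thought prompting: ask the model to lay out each deduction first,
# which tends to help on multi-step problems like this one.
cot_prompt = (
    f"{question}\n"
    "Think step by step, writing out each intermediate deduction "
    "before giving the final answer.\nReasoning:"
)

print(cot_prompt)
```

The difference is just the request for visible intermediate steps; o1's trick is doing that decomposition on its own, without being asked.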
Google’s Illuminate and Notebook: From Notes to Noise
What it is: Google's latest AI offerings are like overeager interns for your brain, ready to transform your mundane musings into audio masterpieces. Enter Illuminate and NotebookLM's Audio Overview – two digital tools that could make even your grocery list sound like it deserves a Pulitzer. Illuminate, the academic paper whisperer, is here to rescue you from the depths of scholarly despair. It's the equivalent of having a genius roommate who reads your assigned texts and explains them to you in podcast form while you're busy "studying" (read: scrolling through TikTok). Currently, Illuminate only works with papers from arXiv.org, but hey, Rome wasn't built in a day, and neither were your procrastination skills. Meanwhile, NotebookLM's Audio Overview is the hype man your scattered thoughts never knew they needed. Feed it your documents, and watch as two AI hosts spring to life, discussing your ideas with the enthusiasm of fanboys at a Comic-Con panel. It's like stumbling upon a secret society dedicated to decoding your brilliance – flattering, if slightly unnerving.
How we use it: Before you cancel your Spotify subscription, remember: these AI hosts aren't exactly Terry Gross. They might turn your cancer research into a stand-up routine or discuss war with the gravity of a cereal commercial. And let's not forget the elephant in the room – or should we say, the glitch in the matrix. These AI narrators are about as reliable as your friend who swears they'll "be there in five minutes." They might sprinkle your factual feast with a dash of fiction, turning your meticulously researched paper into the academic equivalent of a game of telephone. So, whether you're a student trying to absorb knowledge via auditory osmosis, or just someone who believes their dream journal is worthy of a dramatic reading, Google's got your back. Just be prepared for the possibility that your deep dive into quantum physics might end up sounding like a podcast hosted by Schrödinger's cat.
Google’s DataGemma: Grounding AI in Facts
What it is: Google's latest AI innovation, DataGemma, is set to redefine how we interact with artificial intelligence. This new model aims to tackle one of AI's most persistent challenges: the tendency to generate plausible-sounding but inaccurate information, often called "hallucinations." At its core, DataGemma is a clever fusion of Google's vast Data Commons – a repository of over 240 billion data points from trusted sources – and the company's cutting-edge Gemma AI models, yielding an AI system that's not just smart but also remarkably well-informed. The model grounds its answers in two ways: Retrieval-Interleaved Generation (RIG), which fact-checks statistics against Data Commons in real time as it writes, and Retrieval-Augmented Generation (RAG), which pulls relevant data from the repository into the prompt before the model generates a response.
How we use it: This fact-first approach makes DataGemma particularly suited for tasks requiring accurate, up-to-date statistical information. Whether you're a policymaker needing reliable data for decision-making, a researcher diving into climate trends, or a journalist fact-checking a story, DataGemma offers a level of accuracy that sets it apart from its AI peers. While other AI models might confidently generate responses based solely on their training data, DataGemma verifies its information against current, trusted sources. It's not infallible, but it represents a significant step towards more trustworthy AI interactions.
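The RAG half of that pipeline is simple to sketch: retrieve trusted facts first, then hand them to the model as grounding context. A toy illustration (the fact store, matching rule, and prompt format here are our own stand-ins, not DataGemma's actual machinery):

```python
# Toy Retrieval-Augmented Generation: look up relevant facts in a trusted
# store, then build a prompt that tells the model to answer from those facts.
FACT_STORE = {
    "world population": "World population reached 8 billion in November 2022.",
    "solar capacity": "Global solar capacity passed 1 terawatt in 2022.",
}

def retrieve(query: str) -> list[str]:
    """Return facts whose topic keywords appear in the query (crude matching)."""
    words = set(query.lower().split())
    return [fact for topic, fact in FACT_STORE.items()
            if words & set(topic.split())]

def build_prompt(query: str) -> str:
    """Prepend retrieved facts so the model answers from data, not memory."""
    facts = retrieve(query)
    context = "\n".join(f"- {f}" for f in facts) or "- (no matching facts)"
    return f"Answer using only these facts:\n{context}\n\nQuestion: {query}"

print(build_prompt("How large is the world population?"))
```

Real systems swap the keyword match for semantic search over billions of data points, but the shape is the same: the model never has to answer from memory alone.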
The AI Hype Cycle: The Seismic Shift
In a somewhat meta analysis, we're diving into Zvi Mowshowitz's review of Nate Silver's book, "On the Edge." Silver, renowned for his data analysis and forecasting chops, introduces a concept that's got us thinking: the Technological Richter Scale (TRS). This logarithmic scale, reminiscent of its seismological namesake, quantifies the societal tremors caused by innovations. A modest 3 might earn you a patent, while a groundbreaking 8 puts you in the league of world-altering inventions like the internet, electricity, and automobiles.
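The logarithmic part matters more than it sounds. Assuming the TRS behaves like the seismic Richter scale it's named after (each whole-number step a tenfold increase – our reading, not Silver's exact formula), the gap between a 3 and an 8 is enormous:

```python
# On a base-10 logarithmic scale, each whole-number step is a tenfold jump,
# so a TRS 8 isn't "twice" a TRS 4 - it's 10,000 times the impact.
def relative_impact(a: int, b: int) -> int:
    """How many times bigger a TRS-a technology's impact is than a TRS-b one."""
    return 10 ** (a - b)

print(relative_impact(8, 3))  # a world-altering 8 vs. a patentable 3: 100000
```

Which is why the difference between AI topping out at 8 versus hitting 10 is not a rounding error.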
But where does AI fall on this scale? That's the trillion-dollar question setting tech circles ablaze. Some argue AI will top out at 8, matching the internet's impact. Others forecast a perfect 10 – a score reserved for epoch-defining technologies that fundamentally reshape human existence. A "10" on the TRS isn't just another leap forward; it's a tectonic shift potentially leading to artificial superintelligence (ASI). The implications are as exhilarating as they are unnerving. Optimists envision unprecedented advancements, while pessimists warn of existential risks to humanity. (Some heralded OpenAI’s o1 model as one step closer to AGI (Artificial General Intelligence), though many techies were quick to put the brakes on that particular hype train.)

Skeptics of transformative AI point to potential capability plateaus, resource limitations, and the unique nature of human intelligence. Proponents counter that these hurdles are surmountable, asserting that sufficiently advanced AI could rapidly self-improve. The "it'll be fine" camp offers various rationales: the possibility of aligning AI with human values, the emergence of beneficial decision-making frameworks, and faith in economic systems to adapt positively to AI advancements. Yet, concerns linger about the "default scenario" – an unchecked race to develop ever-more-powerful AI systems that could spiral beyond human control.
Zvi astutely observes that we've crossed a critical threshold regardless of AI's ultimate position on the TRS. Global cooperation on AI governance and safety measures is no longer optional – it's imperative. We must proactively tackle alignment challenges and potential pitfalls while harnessing AI's transformative potential. Our focus should be on steering AI development towards beneficial outcomes, irrespective of its final TRS rating. This demands continued research in AI safety, ethics, and governance to ensure that as AI capabilities grow, they remain in lockstep with human values and interests. Here’s hoping Ilya Sutskever, OpenAI co-founder turned AI safety entrepreneur, keeps his eyes on the ethics prize rather than getting starry-eyed over funding rounds and profit margins. On the flip side, eyebrows were raised when Sam Altman stepped down from OpenAI's Safety and Security Committee, fueling speculation about the company's commitment to balancing innovation with ethical considerations.
The TRS offers a compelling framework for the ongoing discourse. Whether AI proves to be "internet big" or ushers in an entirely new epoch, we think its reverberations will reshape every facet of our existence. The challenge – and opportunity – lies in navigating this frontier with equal parts optimism and prudence, ensuring that the tremors of progress don't become the quakes of our undoing.
We’ll keep talking about our favorite tools, but here is a list of the ones we use most for productivity: ChatGPT-4o (custom GPTs), Midjourney (image creation), Perplexity (research), Descript (video, transcripts), Claude (writing), Adobe (design), Miro (whiteboarding insights), Cursor (coding), and Zoom (meeting transcripts, insights, and skipping ahead in videos).
Intriguing Stories
Digital Afterlife: In a blend of nostalgia and tech, Ars Technica's Benj Edwards has found a way to keep his late father's memory alive – quite literally by his own hand. Using the Flux AI image generator, Edwards has managed to recreate his father's distinctive penmanship, opening up a Pandora's box of emotions and ethical questions. This isn't your grandma's scrapbooking project. The AI doesn't just mimic; it creates dynamic, ever-changing versions of Dad's uppercase scrawl, injecting it into everything from cartoon speech bubbles to fictional store signs. It's like Dear Old Dad is leaving sticky notes all over the digital world. While Edwards acknowledges the potential for misuse – after all, who wouldn't want to forge a sick note from Einstein? – he sees this as a celebration of his father's essence. It's a high-tech way of saying, "Thanks for the memories, Dad," with a silicon chip on the shoulder.
Powering Up: OpenAI is embarking on a bold infrastructure initiative that could redefine the US's role in artificial intelligence. With plans to invest tens of billions of dollars, the company aims to create a network of cutting-edge facilities across the country, positioning America at the forefront of AI development. This expansive vision includes constructing vast data centers, bolstering energy infrastructure, and ramping up semiconductor production. It's a comprehensive approach designed to address every facet of AI's demanding technological needs. OpenAI isn't going it alone, either. They're assembling a coalition of global investors, from Canada to the UAE, to fuel this monumental undertaking. At the heart of the plan is a multi-gigawatt computing system that would eclipse current capabilities, enabling massive distributed AI training across interconnected data centers. This isn't just about raw power—OpenAI is pushing the envelope with innovations like high-density liquid-cooled AI chips and advanced data-center connectivity. The economic impact could be substantial, with projections of 40,000 new jobs. However, the initiative also raises important questions about energy consumption, as AI-driven data centers are expected to significantly increase electricity demand. While OpenAI charts an ambitious course, it's worth noting the contrasting approach of xAI. Elon Musk's venture has faced scrutiny for its rapid deployment in Memphis, where environmental concerns highlight the need for balanced, community-conscious AI development.
Lost in Translation: Andrej Karpathy, the former Tesla AI chief and OpenAI co-founder, has sparked an intriguing debate about the nature of AI's most buzzed-about tools: he pointed out that "Large Language Models" (LLMs) might be due for a rebrand. Karpathy set off a firestorm when he stated that “LLMs have little to do with language.” Despite their name, they're versatile engines capable of processing various types of data, from image patches to molecular structures. Some suggest more fitting titles that highlight these broader capabilities, such as "Autoregressive Transformers" or “Generalized Answer Predictors.” Karpathy's insights remind us that in the rapidly evolving world of AI, even our terminology needs to keep pace. And the tech bros might need to hire some naming experts to up their branding game.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you, or you haven’t signed up yet, please do so below so you never miss an issue.
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
we hope to be flowing into your inbox once a week. stay tuned for more!
banner images created with Midjourney.