Verses Over Variables

Your guide to the most intriguing developments in AI

Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and our perception of reality.

Back to Basics

The AI Hydra: Open vs Closed and the Battle for the Soul of AI

Forget Musk vs. Zuck in the Octagon. Silicon Valley's latest cage match features a new contender: artificial intelligence. With Meta's Mark Zuckerberg and OpenAI's Sam Altman squaring off, the tech world is buzzing over a simple question that carries the weight of our digital future: should AI be free-range or cage-raised? It's a debate that's not just moving fast and breaking things—it's rewriting the entire tech playbook.

Meta and Mistral unleashed their large, open-source AI models to considerable acclaim this week (and these were just the viral ones). Not content with merely unveiling new technology, Mark Zuckerberg penned a manifesto titled "Open Source AI Is the Path Forward." It's a bold statement from a tech leader who's no stranger to controversy, but it raises a crucial question: What does "open source" really mean in the context of AI, and what are the implications?

Traditionally, open source in tech has meant exactly that: collaborative development, with code freely available for anyone to use, modify, and distribute. (Think Linux or GitHub.) But in the world of AI, it's not quite so simple. "Open source" often translates to "open weights": companies reveal the parameters of their models but keep the training data under wraps. It's like being given a cake recipe without knowing where the ingredients came from. On the other side of this debate stands OpenAI, champion of the closed-source model. Its approach is a proprietary product: you can use it, but you can't peek under the hood. Think of it as the difference between a DIY kit and a ready-made solution. (Or, in traditional software, think Microsoft Windows or Adobe.)

The contrasting philosophies of Meta and OpenAI reflect deeper strategic differences. Altman is laser-focused on achieving artificial general intelligence (AGI), regardless of the cost. "Whether we burn $500 million, $5 billion, or $50 billion a year, I don't care," he's been quoted as saying. (Apparently, OpenAI is on track to lose that $5 billion anyway, so is that a justification or an excuse?) It's a high-stakes gamble, with OpenAI consistently releasing top-tier proprietary models.

Zuckerberg, meanwhile, seems to be betting on democratization, with the subtlety of a bull in a china shop. Meta is turning AI into a commodity without cannibalizing its own revenue sources. (OpenAI, by contrast, has no other revenue source at the moment besides selling AI.) It's a move straight out of the tech giant playbook: remember when Microsoft made Internet Explorer free, turning Netscape into the Blockbuster Video of browsers? Or when Google started giving away email and maps like Halloween candy, hoarding our data instead of our dollars? (OpenAI did seem to feel some of the heat and released GPT-4o mini at a significantly lower price than its other models.)

This isn't just a theoretical debate; it has real-world implications. Open-source models could accelerate innovation by letting a wider range of developers and researchers contribute, and they could lead to more transparent and potentially safer AI systems. On the flip side, closed-source models might offer more control over how the technology is used and developed, potentially mitigating risks.

While Zuckerberg champions open-source AI as the path forward, Altman paints a different picture in his Washington Post op-ed. He frames the AI debate not just as a tech industry squabble, but as a geopolitical imperative. "The urgent question of our time," Altman writes, is whether we'll live in a world where "the United States and allied nations advance a global AI that spreads the technology's benefits and opens access to it, or an authoritarian one, in which nations or movements that don't share our values use AI to cement and expand their power." No pressure.

So while Zuckerberg sees open-source as the key to democratizing AI, Altman envisions a more controlled, but still globally collaborative approach. It's a stark reminder that in the world of AI, even those pushing for openness and those advocating for control are ultimately chasing the same elusive goal: a future where AI serves humanity's best interests. As Ben Thompson put it, “The reality is that China almost certainly has access to all of Silicon Valley’s most advanced technology, and there is little point in trying to undo that; the U.S. will never out-China China. What the U.S. can do is lean heavily into innovation.”

The question is more than just whose path will get us to the promised land of benevolent AI. It's whether we'll recognize that promised land when we get there, or whether we'll be too busy arguing about the route to notice we've arrived.

How LLMs Work: The Brain Buffet

We often use the term large language models (LLMs), and the phrase can feel like the wizard behind the curtain. But how exactly do these models work their linguistic magic? Grab your metaphorical fork and knife, because we're about to dig into the brain buffet that is LLMs.

The All-You-Can-Eat Data Fest: Picture the internet as an endless Las Vegas buffet of words. LLMs are the voracious eaters at this linguistic smorgasbord, gorging themselves on every book, article, and tweet they can find (with or without permission). By the time they're done, they've digested more text than your average bookworm could read in a thousand lifetimes. But it's not just a mindless binge. These AI systems are parsing every sentence, every phrase, every word. They're not just memorizing; they're learning the subtle patterns of language—from formal prose to colloquial expressions.

From Word Salad to Language Smoothie: Once our AI has had its fill (and we're talking petabytes of data here), it's time to process all that verbal input. This is where the magic happens. The LLM doesn't just regurgitate; it learns to generate. Every word becomes a data point, every sentence a pattern to analyze. Think of it like a language processor. In goes the entire works of Shakespeare, a dash of current events, a sprinkle of scientific journals, and a generous pour of internet forums. Process it all, and out comes a model that can discuss Hamlet's existential crisis in the same breath as the latest technological trends.

The Secret Sauce: The real magic happens thanks to the model’s ability to predict the next word in a sequence. The LLM examines each word in relation to every other word, figuring out which combinations are most likely to occur together. When you input "bank," the model doesn't just think of one meaning. It looks at the context. "Bank" next to "river"? It predicts words related to nature. "Bank" next to "money"? It shifts to financial terminology. As the model processes your input, it's constantly predicting what word should come next based on the patterns it learned during training. It's like a hyper-intelligent autocomplete, but instead of just finishing your sentence, it can generate entire paragraphs or essays by repeatedly predicting the most likely next word.
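The "bank" example above can be sketched in a few lines of Python. This is a deliberately toy illustration, not how a real model works: the corpus, the counting scheme, and the function names are our own invention. We simply count which word follows each two-word context in a made-up corpus and pick the most frequent continuation; a real LLM learns these statistics with a transformer and billions of parameters, but the objective (predict the next token from context) is the same.

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for the petabytes a real LLM ingests.
corpus = (
    "the river bank was muddy . the river bank flooded . "
    "the money bank raised rates . the money bank closed ."
).split()

# Count which word follows each two-word context (a tiny trigram model).
next_word_counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    next_word_counts[(a, b)][c] += 1

def predict_next(a, b):
    """Return the most frequently observed next word for this context."""
    return next_word_counts[(a, b)].most_common(1)[0][0]

# Same word, different contexts, different predictions:
print(predict_next("river", "bank"))  # a nature-flavored continuation
print(predict_next("money", "bank"))  # a finance-flavored continuation
```

Notice that the word "bank" never gets a single fixed meaning; the prediction depends entirely on the words around it, which is the point of the context-sensitivity described above.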

Serving It Up: When you ask an LLM a question, it's like placing an order at this high-tech restaurant. The AI chef quickly scans its vast knowledge base, picking out the most relevant information and generating a response by predicting the most likely sequence of words that should follow your input. But it's not just reciting memorized text. The best LLMs can combine information in novel ways, creating original content. It's as if you asked about climate change, and the AI provided a comprehensive analysis drawing from scientific papers, recent news, and policy documents—all by predicting the most appropriate words to follow each other in response to your query.

Key Terms to Be Familiar With: To truly understand the world of LLMs, here's a quick cheat sheet of terms to know:

  • Context Window: The model's short-term memory; the maximum amount of text (measured in tokens) it can consider at once, covering both what you enter into the chatbot prompt window and what you get back as your response.

  • Parameters: The internal values (weights) a model learns during training, which determine how it understands and generates language. Parameter counts (7 billion, 70 billion, and so on) are a rough measure of a model's size and capacity, and are often confused with the amount of data the model was trained on.

  • Transformer: The architecture behind the LLM; the neural network that makes all this linguistic magic possible. It's the engine under the hood of your AI sports car.

  • Pre-training: The first phase of learning for the model; feeding the model a vast amount of information from the internet. It's like sending your AI to elementary school to learn the basics of language and general knowledge.

  • Fine-tuning: Training the model in a particular task or specialization (a focused dataset) to make it more skilled in a specific area. This is like choosing a major in college for your AI.

  • Prompt Engineering: The art of crafting effective inputs to guide the LLM's responses.

  • Hallucination: When an LLM generates plausible-sounding but factually incorrect or nonsensical information.
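A few of these terms (pre-training, the context window, and the prediction loop itself) can be seen working together in a minimal sketch. Everything here is illustrative: the corpus, the bigram "model," and the context_window value are stand-ins we made up, but the loop (predict a word, append it, truncate to the visible window, repeat) mirrors how an LLM generates text one predicted word at a time.

```python
from collections import Counter, defaultdict

# Toy corpus; counting word pairs here is a stand-in for pre-training.
corpus = "the cat sat on the mat . the cat ran to the mat .".split()

# Bigram counts: which word tends to follow each word.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(prompt, n_words, context_window=8):
    """Autoregressive generation: predict a word, append it, repeat.
    Only the last `context_window` words are visible to the "model,"
    mimicking an LLM's finite context window."""
    words = prompt.split()
    for _ in range(n_words):
        visible = words[-context_window:]   # truncate to the window
        last = visible[-1]
        if last not in counts:              # nothing to predict from
            break
        words.append(counts[last].most_common(1)[0][0])
    return " ".join(words)

print(generate("the cat", 5))
```

Swap in a different corpus (the "fine-tuning" idea, crudely) and the same loop produces text in a different style, which is why specialization works without changing the underlying machinery.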

The Future of AI Cuisine: As impressive as today's LLMs are, they're just the appetizer in the grand feast of AI development. The next course promises even more sophisticated models with larger context windows, more nuanced understanding, and the ability to reason in ways that more closely mimic human cognition. But like any powerful tool, LLMs come with their own set of challenges. Issues of bias, privacy, and the ethical use of AI are hot topics that we'll need to grapple with as these models become increasingly integrated into our daily lives. As we continue to refine and expand the capabilities of LLMs, one thing is clear: the future of human-AI interaction is being written (or, should we say, predicted) one word at a time. Bon appétit!

We’ll be talking more about our favorite tools in future issues, but here is a list of the tools we use most for productivity: ChatGPT 4o (custom GPTs), Midjourney (image creation), Perplexity (for research), Descript (for video, transcripts), Claude (for writing), Adobe (for design), Miro (whiteboarding insights), and Zoom (meeting transcripts, insights, and skip ahead in videos).

Intriguing Stories

The Search Party’s Over: OpenAI has just dropped the mic with SearchGPT, its shiny new AI-powered search engine. Unlike traditional search engines, SearchGPT aims to organize and summarize information, providing users with concise, relevant answers to their queries. Yes, it might sound like Google’s AI Overview, Perplexity, or Bing’s new generative search experience, but OpenAI has promised that this time i) it works and ii) they asked creators for permission. The prototype, available to an initial 10,000 users, allows follow-up questions and features "visual answers." Powered by GPT-4, it's designed to integrate with ChatGPT eventually. OpenAI emphasizes collaboration with news partners (including the WSJ, AP, The Atlantic, and Vox Media) and respect for publishers, offering clear attribution and opt-out options. This approach sets it apart from competitors that have faced criticism for content usage. While still in its early stages, SearchGPT represents a significant step in AI-assisted information retrieval, and it could challenge established players like Google and emerging competitors such as Perplexity.

Tech Giants Go DIY: Move over, NVIDIA – there's a new chip off the old block. Tech giants are flexing their silicon muscles, cooking up homegrown AI chips faster than you can say "Moore's Law." OpenAI is rumored to be partnering with Broadcom to develop custom AI chips. This move, led by CEO Sam Altman, includes recruiting talent from Google's Tensor Processing Unit team, signaling a serious commitment to chip development. Meanwhile, Microsoft and AMD are playing chip matchmakers in a project codenamed "Athena." Let's hope this partnership turns out better than its namesake's head-splitting birth from Zeus. Amazon, Meta, and Alphabet aren't far behind, each tinkering in their digital garages like kids with very expensive Lego sets. Why the sudden urge to go silicon solo? Well, for one, these tech titans are tired of paying the "NVIDIA tax" – a premium so high it makes Manhattan real estate look like a bargain. But it's not just about pinching pennies. These custom chips are like bespoke suits for AI: tailor-made, form-fitting, and guaranteed to make your neural networks look fabulous. There's also the small matter of supply chain control: by developing in-house chips, companies can mitigate risks associated with shortages and geopolitical tensions, ensuring a steady supply for their AI initiatives. Even NVIDIA, feeling the heat, is creating a US-tariff-free chip for China using Samsung memory. As this silicon revolution unfolds, the AI chip market is becoming a veritable smorgasbord of options. It's like a tech potluck where everyone brought chips – and we're not talking about the potato kind.

Winter is Coming…and so is Apple AI, maybe: If you're like us, you've been clutching your iPhone 15 (12 actually) like Gollum with the Ring, waiting for September to trade up for the sweet 16. We were promised the digital equivalent of the Red Wedding at WWDC, with Apple Intelligence set to revolutionize our devices. But alas, our dreams of an AI-powered autumn are melting faster than ice cream in the New York City Subway. Apple's hotly anticipated "Intelligence" features are taking an unexpected detour on their way to your iPhone, and won’t be released until October. Apple has already released fast-tracked versions of iOS 18.1 for developers in an effort to catch up to Microsoft and Google, so that when Apple AI finally graces us with its presence, we’ll have more apps using it than just Siri (who, let’s face it, could use a little help). Unfortunately, even when iOS 18.1 does drop publicly, some of its best features won’t be widely available: the ability for Siri to use on-device data and context from your screen to provide more personalized and relevant responses is still a work in progress, for instance. We may have to wait a little longer for our handheld AI revolution; patience is a virtue.

Drift and Furious: AI Takes the Wheel: If you know us, then you know that we love action films, so we couldn’t resist this story. Stanford Engineering and Toyota Research Institute have pushed the boundaries of autonomous driving into the realm of high-performance motorsports. Combining cutting-edge AI with the thrilling art of drifting, two modified GR Supras – dubbed Keisuke and Takumi – have successfully performed autonomous tandem drifts. This isn't just for show (or the next Jason Statham movie, though we’re sure he’s taking notes). By mastering extreme vehicle dynamics, researchers aim to develop safer autonomous systems capable of handling critical scenarios, like recovering from a spin-out or maintaining control on slippery roads. The vehicles use physics-informed neural networks to predict tire behavior with uncanny accuracy, and they even communicate with each other via WiFi, coordinating their dance of controlled chaos. Buckle up and watch this video. Just don't be surprised if your Tesla starts asking for drift lessons or your Roomba suddenly develops a need for speed.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you, or you haven’t signed up yet, please do so below so you never miss an issue.

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

we hope to be flowing into your inbox once a week. stay tuned for more!

banner images created with Midjourney.