Verses Over Variables
Your guide to the most intriguing developments in AI
Welcome to Verses Over Variables, a newsletter exploring the world of artificial intelligence (AI) and its influence on our society, culture, and our perception of reality.
AI Hype Cycle: Rise of the GPT
From Niche to Necessity: AI as a General Purpose Technology
GPT has become a household acronym, but not for the reason you might think. Most associate it with AI chatbots (ChatGPT is to AI as Google is to search and Xerox is to photocopy), but today we're exploring its alter ego: the General Purpose Technology. This GPT is the rarest kind of technology – an innovation so transformative it rewrites the rules of economics and society. Examples include the steam engine, electricity, and the internet. These innovations served as catalysts, reshaping entire industries, revolutionizing productivity, and fundamentally altering the fabric of human interaction with the world around us. Now, artificial intelligence is poised to join this exclusive club, and it's moving at a pace that makes its predecessors look like they're stuck in rush hour traffic.
Futurist Amy Webb brought up the concept of AI as a GPT at SXSW 2024. She painted a picture of AI not as just another app cluttering your home screen, but as the "everything engine" – an invisible force set to turbocharge innovation across every industry imaginable. Webb anticipates AI's evolution beyond simple response mechanisms, envisioning it as an intuitive collaborator capable of anticipating needs and catalyzing creativity in unprecedented ways. Back in March, she forecast both AI's emergence as a GPT and the rise of agents and action-oriented AI.
The rapid adoption of AI is already evident in the tech community. Brad Smith, President of Microsoft, highlighted the surge in AI-related contributions on GitHub since the advent of ChatGPT. The platform has witnessed a remarkable 230% increase in AI-driven development, signaling a seismic shift in how developers approach problem-solving and innovation. This trend suggests that AI is not just a tool but a fundamental change in how tech workers function.
Parallel to its expanding capabilities, AI is becoming increasingly accessible due to plummeting costs. OpenAI's Dane Vahey reported a staggering 99% reduction in AI processing costs over 18 months—from $36 to $0.25 per million tokens. This dramatic decrease in cost is democratizing access to AI capabilities, potentially unleashing a wave of innovation across industries and socioeconomic strata.
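The headline numbers are easy to verify: a drop from $36 to $0.25 per million tokens works out to just over a 99% reduction. A quick back-of-the-envelope check, using only the figures reported above:

```python
# Sanity-checking the reported cost drop: $36 -> $0.25 per million tokens
# over roughly 18 months (figures as reported by OpenAI's Dane Vahey).
old_cost = 36.00   # USD per million tokens, 18 months ago
new_cost = 0.25    # USD per million tokens, today

reduction = 1 - new_cost / old_cost
print(f"Reduction: {reduction:.1%}")  # just over the quoted 99%

# What a hypothetical 100,000-token job would cost at each price point:
tokens = 100_000
print(f"Then: ${old_cost * tokens / 1_000_000:.2f}")
print(f"Now:  ${new_cost * tokens / 1_000_000:.4f}")
```

In other words, a job that cost a few dollars a year and a half ago now costs fractions of a cent – the kind of pricing that moves a technology from "budget line item" to "ambient utility."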
AI is also on the fast track due to its ability to piggyback on our existing digital infrastructure. Unlike its GPT predecessors, AI doesn't need us to lay new cables or build power plants. It's sliding into our lives via the smartphones in our pockets and the Wi-Fi routers in our homes, turning every device into a portal for AI-powered innovation. And since AI communicates in plain language (no new computer speak to learn), it's accessible to everyone from coding wizards to those boomers who still print out their emails. With AI increasingly becoming a commodity, often forced by the existence of open source models, we aren't sure which is the chicken and which is the egg: access or price.
The confluence of accessibility, affordability, and adaptability positions AI to reshape our world at an unprecedented pace. From healthcare and finance to creative arts and agriculture, AI's versatility promises transformative effects across the entire economic landscape. We expect AI to grow as a General Purpose Technology, the jack-of-all-trades in our digital toolbox.
Which brings us to a thought-provoking twist in our AI tale. Last week's C-suite exodus from OpenAI raised some eyebrows. OpenAI, the company that's been painting vivid pictures of Artificial General Intelligence (AGI) – think less Alexa, more Terminator – has been trumpeting that this digital Nostradamus is just around the corner. (Or maybe it's already lurking in their labs, too potent for public consumption.) But here's the million-dollar question: if you'd been burning the midnight oil at a company for years, with AGI as your Holy Grail, would you jump ship right before the big reveal? Or is it more likely you'd cash out your chips just as your life's work was about to become a commodity (aka a GPT)? It's food for thought as AI evolves from niche novelty to necessary GPT.
From Wow to Now: AI’s Growing Pains
If you've scrolled through your feed lately, you've probably been bombarded by AI-generated podcasts courtesy of NotebookLM or watched people get digitally squished, crushed, or transformed into cakes from Pika Labs. Welcome to the AI circus, folks – where last week's viral sensation is this week's old news. Like Max Read, we've noticed a pattern: AI tools are riding a predictable hype cycle, but with a twist. While Silicon Valley keeps touting efficiency and productivity, it's the fun factor that's driving adoption. Who knew play would be AI's secret weapon?
Let’s break down the new hype cycle:
Stage 1: The Honeymoon Phase: Initially, AI captures our imagination with its seemingly magical capabilities. We're enthralled by its ability to generate human-like text or create art from simple prompts. This stage is marked by widespread experimentation and a sense of wonder.
Stage 2: The Tinkerer’s Paradise: As the initial awe subsides, we begin to explore AI's practical applications. It becomes a tool for creativity and problem-solving, offering new ways to approach tasks and sparking innovative ideas.
Stage 3: Reality Check: With increased use comes a deeper understanding of AI's capabilities and constraints. We start to recognize that while impressive, AI outputs can be shallow or inconsistent, lacking the nuance and depth of human expertise.
Stage 4: The Trust Fall: This stage brings a critical realization: AI can make mistakes. We become more discerning users, learning to fact-check and validate AI-generated information, understanding that it's a tool to augment, not replace, human intelligence.
Stage 5: The Sweet Spot or Slop: Finally, AI settles into a more defined role in our digital ecosystem. We see a proliferation of AI-generated content, some mediocre, some innovative. The challenge becomes distinguishing quality and finding valuable applications amidst the noise.
Throughout this wild ride, it's often the rookies – yeah, we're talking about the kids – who are nailing it. Interestingly, as Conor Grennan points out, their beginner's mindset is like a superpower. Free from the baggage of "how things should be done," they see AI as a blank canvas of possibilities. As AI continues to develop, maintaining this spirit of openness and adaptability will be crucial. The most effective users of AI technology may well be those who can blend experience with the willingness to see things anew – combining the wisdom of expertise with the limitless imagination of a beginner's mind. Or at least those willing to stay and play.
Back to Basics
Liquid Intelligence: How New Models are Reshaping AI
The AI landscape is undergoing a significant transformation. While Transformer models have been the driving force behind recent advances in Generative AI, a new contender from MIT's lab is making waves. Liquid Neural Networks (LNNs) are introducing a fresh approach to artificial intelligence that could reshape our understanding of machine learning.
Transformers, the backbone of many current AI systems, operate like a well-structured organization—a series of neural networks functioning as a coordinated assembly line. They've proven highly effective, particularly in language-related tasks, processing data remarkably efficiently.
MIT's Liquid Neural Networks, however, introduce a paradigm shift. These networks process information more like ripples in a pond. When data is introduced, it creates patterns that spread through the network, influencing different components in complex, interconnected ways. This fluid approach to data processing opens up new possibilities in AI functionality. While Transformers excel in language tasks, Liquid Models are demonstrating impressive versatility in handling a variety of sequential data, including video, audio, text, and time series. This multi-modal capability suggests a future where a single model could address a wide range of tasks, streamlining AI applications across various industries. One of the most intriguing aspects of Liquid Models is their capacity for real-time adaptation. Unlike Transformers, which typically maintain a fixed structure post-training, Liquid Models can adjust on the fly. This dynamic learning ability could prove invaluable in environments that demand flexibility and rapid responses.
This isn’t just theoretical: this week, Liquid AI unveiled a series of Liquid Foundation Models. These compact, multi-modal systems are showing promising results, often matching or surpassing the performance of larger traditional models. This efficiency could have significant implications for the deployment of AI in resource-constrained environments or applications requiring swift processing. Whether Liquid Models will become the new standard or coexist with current architectures remains to be seen, but they're undoubtedly expanding our understanding of what's possible in machine learning.
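To make the "ripples in a pond" metaphor concrete, here is a toy sketch of a liquid time-constant (LTC) neuron, the building block this family of models grew out of. Everything here – the weights, constants, and input stream – is invented for illustration; this is not Liquid AI's code, just a minimal Euler-integration demo of the core idea that each neuron's effective time constant depends on its current input and state, rather than being fixed after training:

```python
import numpy as np

# Toy liquid time-constant (LTC) neuron layer. Hypothetical weights and
# constants chosen only for this demo, not taken from any real model.
rng = np.random.default_rng(0)
n_neurons, n_inputs = 8, 3
W_in = rng.normal(size=(n_neurons, n_inputs))    # input weights (made up)
W_rec = rng.normal(size=(n_neurons, n_neurons))  # recurrent weights (made up)
tau = 1.0                                        # base time constant
A = 1.0                                          # equilibrium target state

def ltc_step(x, u, dt=0.05):
    """One Euler step of dx/dt = -(1/tau + f) * x + f * A.

    The key trick: the gate f depends on the current input u and state x,
    so the effective time constant 1/(1/tau + f) shifts as data flows in --
    the 'liquid' adaptation that a fixed post-training Transformer lacks."""
    f = np.tanh(W_in @ u + W_rec @ x)  # input- and state-dependent gate
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

# Feed a simple sinusoidal input stream and watch the state evolve.
x = np.zeros(n_neurons)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
    x = ltc_step(x, u)
print(x.round(3))
```

Because `tanh` is bounded, the decay term `1/tau + f` stays non-negative here, which keeps the toy dynamics stable; real LTC networks learn these time constants and wiring from data rather than fixing them by hand.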
Tool Update
ChatGPT Canvas: OpenAI Gets Creative
What it is: During its DevDay, OpenAI unveiled Canvas, a feature that's set to redefine AI collaboration. While it may evoke comparisons to Claude Artifacts and Cursor, Canvas transforms ChatGPT's traditional chat interface into a dynamic, visual workspace where users and AI collaborate side by side. Canvas allows users to edit in real-time, mirroring the collaborative feel of Google Docs, but with an AI twist. For coders, it offers Cursor-like functionality, providing debugging insights, change tracking, and the ability to translate between programming languages. Unlike Claude and Cursor, however, you can’t see what you are building in real time, as you have to bring the visualization or code into another program.
How we use it: So far we’ve been using it for writing. It's like having a tireless editor and brainstorming partner rolled into one, streamlining our content creation process. The new user experience offers quick options to adjust document length or reading level with just a few moves on a slider. Also, by simply typing '/' in the text box, you're presented with a quick menu – generate text, create images, or engage in deep reasoning.
Copilot Labs: Microsoft’s New Sandbox
What it is: Microsoft has just turbocharged its AI assistant for a select group of Copilot subscribers, rolling out a suite of cutting-edge features: Voice, Vision, Recall, and the pièce de résistance - enhanced Reasoning. "Think Deeper," which we suspect is powered by OpenAI's o1 model, gives Copilot the ability to break down complex problems using step-by-step calculations. Copilot Vision is also joining the party, turning your screen into an interactive AI playground. Imagine having a hyper-intelligent sidekick that can analyze web content, decipher images, and chat about what you're seeing in real-time.
How we use it: Full disclosure: we're still waiting for our golden ticket to this AI tool. But if we had our hands on it, we'd put it through its paces, pitting it against OpenAI's Advanced Voice Mode in a battle of the AI assistants. We're talking real-time challenges, complex queries, and maybe even a joke-off (because who doesn't want an AI with a sense of humor?). If you are one of the lucky few with access, let us know whether it lives up to the hype or is just another shiny toy.
We’ll be talking about our favorite tools, but here is a list of the tools we use most for productivity: ChatGPT 4o (custom GPTs), Midjourney (image creation), Perplexity (for research), Descript (for video, transcripts), Claude (for writing), Adobe (for design), Miro (whiteboarding insights), and Zoom (meeting transcripts, insights, and skip ahead in videos).
Intriguing Stories
The Great Upgrade: While we might have been yelling about AI into the abyss, we aren’t alone. Researchers have unveiled the first nationally representative survey on generative AI adoption in the United States. And let's just say, AI isn't just knocking on the door of American workplaces – it's already redecorating the office. According to "The Rapid Adoption of Generative AI" study, 39.4% of U.S. adults have dipped their toes into the AI pool. But it's in the workplace where things get interesting. Nearly three in ten employed Americans are bringing AI to work, and about one in nine are practically in a committed relationship with it, using it daily. What's truly fascinating is the diversity of AI's new fan club. While it's no surprise that almost half of the tech and management crowd are AI aficionados, the real eye-opener is that one in five blue-collar workers are also getting in on the action. It seems AI has a knack for crossing socioeconomic lines that would make a politician green with envy. The researchers estimate that AI could boost labor productivity by up to 0.875 percentage points. While the internet took years to weave its way into every facet of our lives, generative AI seems to be sprinting through that same digital marathon in record time.
California’s AI Balancing Act: In the heart of America's tech hub, California's recent forays into AI regulation have hit some unexpected turbulence. Two ambitious laws, each aimed at addressing different facets of the AI revolution, have encountered significant obstacles, highlighting the complexities of governing this rapidly evolving technology. AB 2839, the state's attempt to safeguard election integrity in the age of deepfakes, recently found itself temporarily sidelined by a federal judge. The law sought to combat AI-generated political misinformation by banning deceptive AI creations near election time and requiring clear labeling of manipulated content. However, a U.S. District Judge granted a preliminary injunction, citing concerns that the law might infringe on free speech protections. While the judge acknowledged the genuine risks posed by AI and deepfakes, he cautioned that the law's approach was too broad, potentially stifling protected forms of expression. Meanwhile, SB 1047, dubbed the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," met an even swifter end. This bill, aimed at regulating powerful AI models, would have required developers to implement stringent safety checks, quick-acting kill switches, and protocols to prevent catastrophic risks. Despite passing the State Assembly, Governor Gavin Newsom vetoed the bill on September 29, 2024. In his veto message, Newsom committed to finding a more appropriate path forward, suggesting that the bill's approach might have been too aggressive or potentially harmful to innovation. These legislative setbacks underscore California’s challenges as it attempts to lead in AI governance and balance fostering innovation and protecting public interests, while respecting constitutional rights.
Lights, Camera, Algorithms: The world of digital content creation got an upgrade this week with the arrival of two new AI video generators. Pika Labs and Meta have both unveiled their latest tools, which will not only make your Instagram Stories more engaging but might also get you a shot on Mystery Science Theater. Pika 1.5 introduces a suite of eye-popping "Pikaffects" that transform static images into surreal animated clips. Users can now melt, explode, or even "cake-ify" objects within their videos, adding a layer of whimsy previously reserved for professional VFX artists. These physics-defying effects are visually striking and easy to use. Not to be outdone, Meta's Movie Gen takes a more cinematic approach. This tool generates video clips from text prompts, complete with ambient sound and music, which could revolutionize storyboarding. Movie Gen also boasts impressive editing capabilities, including the ability to insert individuals into existing footage. Movie Gen isn't available to the public yet, but we'll be waiting for it on Instagram.
If video isn’t your thing, Black Forest Labs has dropped its “Blueberry” model, better known as the Flux 1.1 Pro image generator. The new model boasts faster speeds, enhanced image quality, better prompt adherence, and more diverse outputs. With an API that lets developers integrate this powerhouse into their own apps for a mere 4 cents per image, Flux 1.1 Pro isn't just pushing boundaries – it's redrawing them entirely. Welcome to the era where imagination and algorithms dance a dizzying tango, and the next Mona Lisa might just be a text prompt away.
AI Selfie Stick: Move over, Reid Hoffman—your AI doppelgänger has some competition. While the LinkedIn mogul's been busy interviewing himself (talk about a captive audience), MIT's latest brainchild is taking narcissism to new heights, or rather, new ages. Enter "Future You," the digital time machine that lets you chat up your 60-year-old self without the hassle of actual time travel or, you know, aging. It's like FaceApp meets life coaching, with a dash of "Black Mirror" thrown in for good measure. In initial studies, participants who engaged with Future You for about 30 minutes reported decreased anxiety about the future and a stronger sense of connection with their potential older selves. While the technology shows promise in helping people make better long-term decisions, from financial planning to academic focus, the team is mindful of potential misuse. They're implementing safeguards and stress that Future You is meant to be a tool for self-reflection and development, not a crystal ball to be relied upon blindly. So much for checking out the results of a potential face job.
— Lauren Eve Cantor
thanks for reading!
if someone sent this to you, or you haven’t signed up yet, please do so below so you never miss an issue.
if you’d like to chat further about opportunities or interest in AI, please feel free to reply.
if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.
we hope to be flowing into your inbox once a week. stay tuned for more!
banner images created with Midjourney.