Verses Over Variables

The tools I can't live without in 2025 that make up my creative AI stack.

The Tools I Can't Live Without: My 2025 AI Stack

2025 has brought an explosion of AI tools, most of which are just expensive bookmarks with better branding. I've spent the past few years breaking things, duct-taping prototypes back together, and trying to figure out which of these actually help me ship work versus which ones simply turn my browser into a cry for help.

This is what's left standing. We're at a point where the barrier between "I had an idea" and "I built the thing" has collapsed so completely that the main constraint isn't capability anymore. It's curation. It's knowing which robot to call for a specific flavor of magic. Here are the 25 tools that made the cut (for now).

The External Brain Layer

NotebookLM
I'm done pretending I'm going to "read" everything the way a responsible adult with infinite leisure would. If a client drops a 60-page PDF in my inbox, it goes straight into a NotebookLM library. The Audio Overviews are the headline act (I listen while walking the dog or staring into the middle distance like I'm in an indie film about knowledge work). But the real value is the fence. It behaves because I've boxed it into the sources I trust. When a project gets messy, it keeps me anchored to the actual document instead of letting my brain freestyle a version of reality that feels emotionally true but is factually fictional. Newer additions make it even more helpful: slide decks, video overviews, and infographics, all customizable to any style.

I've built entire research libraries using NotebookLM, generating preliminary case studies and context-rich infographics without opening separate design software. The recent Gemini integration lets me pull entire notebooks into conversations, creating an external memory system that remembers project details better than I do, which is both useful and mildly insulting.

One gripe: the AI hosts still have that "we're so excited to be here!" energy that makes me want to throw my AirPods into a lake.

Dia Browser
Dia doesn't stare at me with that "we're definitely selling your data" energy. When I'm on a competitor's site, I'm interrogating, not browsing. I'll use Dia Skills to pull a pricing structure or business model, or grab signals from their design language. It turns the open web into a working document. Plus, because it keeps so much context locally, it feels less like I'm handing my curiosity over to a server farm every time I click a tab.

I regularly chat with YouTube videos to extract specific information without watching the whole thing (my attention span thanks me). The Skills can also take apart website branding on command: fonts, color palettes, and layout structures. Useful for design research and for when you want to understand why a competitor's site feels more expensive than yours.

Wispr Flow
Typing is officially the bottleneck of my life. I can talk faster than my fingers can type, and I have gotten over the stigma of talking to my tech like a person. I use Wispr to brain-dump at speed, especially when I'm drafting strategy or capturing the shape of an idea before it evaporates. It has an unexpectedly impressive sense of context. I can mumble a half-baked thought, and it turns into something that looks like a document, not a transcript of a breakdown.

It saves voice notes persistently, so I can capture ideas in the moment and copy-paste the transcribed text wherever I need it later. For getting thoughts out of my head and onto a page without the keyboard becoming the friction point, Wispr Flow is a must-have.

The Visual Pipeline

Midjourney
Midjourney is still the one I trust for a visual direction that feels alive. Most image tools now produce output that is technically "good," but faintly interchangeable (like every brand hired the same AI art director who only owns one sweater). Midjourney still has an artistic pulse. I use it for the first spark: mood, lighting, texture. I demand inspiration over perfection, and it usually delivers.

My workflow typically starts here to establish visual tone, then I export those images to other tools like Runway or Veo to build out stories and videos. The style reference library helps maintain consistency across formats and surfaces new aesthetic directions. It's the opening act, the reason the final output doesn't look like every other AI-generated thing flooding the internet.

Nano Banana Pro
This is the model I reach for when I need the image to actually obey the laws of physics. If I need text to be correct, signage to be legible, or a composition to stay grounded, Nano Banana is the choice. Midjourney gets dreamy, which I love until I need the deliverable to stop floating off into the astral plane. Nano Banana is my editorial tool. It's where I go when the image has to look intentional and precise, not just beautiful.

I create composite images blending up to 14 reference images, which is critical for consistent character studies or product placements. To quote Reddit: "ChatGPT 1.5 Imagen is the Tinder profile, and Nano Banana Pro is the person who actually shows up to the date."

Krea & Freepik
Krea is my "jam mode." I can move shapes and see the AI update the scene instantly, which keeps me thinking spatially. Prompting can turn into a weird kind of bureaucratic writing if you're not careful, and Krea pulls me back toward play. For projects where speed of exploration matters more than final polish, it eliminates friction between ideation and execution.

I also use Freepik, which is my cheap laboratory: the place I test workflows and upscale assets before I spend real money elsewhere. If Weavy is the clean architecture diagram, Freepik is the garage where you build the first version out of whatever you can find. It lets me validate approaches before scaling up, failing fast and cheaply.

Veo
Veo is expensive and heavy, so I treat it like the fancy camera you don't bring to a beach day. I use it only for hero assets where audio sync matters. The thing that pushes it over the line is the synchronized sound. Footsteps that land on the right beats. Atmosphere that behaves like a real environment. That coherence is what shrinks the uncanny valley from "yikes" to "wait, is this real?"

I deploy it for clips where audio-visual coherence is non-negotiable: character dialogue and dynamic environmental scenes where ambient sound carries the realism. For everything else, I use cheaper alternatives. But when the final output needs to feel like actual footage, Veo is the choice.

Adobe
Adobe is for when I need the full design toolkit, not just generation. The AI aggregator pulls in multiple models, and Boards keeps everything from turning into a folder disaster I'll never find again.

The Build Shop

Google AI Studio
My go-to when I need a small tool right now. If I'm running a workshop and I want an interactive Q&A or a tiny web app to make an idea tangible, I build it here. It feels like sketching with code. I'm trying to turn a concept into something clickable in under fifteen minutes, not create enterprise software.

I've used it to build pattern generators, target audience tools, brand pyramids, and live Q&A systems for presentations. The real-time preview lets me iterate on the conversation rather than wrestling with syntax. It turns "I wish this thing existed" into "okay here it is" faster than any other tool I've tried.
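A nice party trick: AI Studio will export a prototype as a working API call once you outgrow the playground. Here's a minimal sketch of the kind of starter it hands you, assuming the google-genai Python SDK and an API key sitting in your environment (the model name and prompt are illustrative, not a prescription):

```python
# Minimal Gemini API starter, roughly the shape of AI Studio's code export.
# Assumes: pip install google-genai, and GEMINI_API_KEY set in the environment.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative; swap in whatever the prototype used
    contents="Draft five audience-persona questions for a brand workshop.",
)
print(response.text)
```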

Cursor & Antigravity
Cursor is where I go once the prototype exists and it's time to make it look professional. The Live Editor is a gift for UI refinement because I can watch the pixels move as the code updates. Before AI Studio and Antigravity emerged, this was my primary vibe coding environment. It continues to be crucial for the phase where I'm moving from "this works" to "this looks good."

Google’s Antigravity is the "big guns." It's agent-heavy, which means my job is managing, not implementing. The agents handle the plumbing, and the best part is that they never ask for a 45-minute meeting to discuss the "next steps" for the previous meeting. The Manager View lets me orchestrate multiple agents tackling different parts of a project simultaneously. The built-in browser means agents verify that applications actually work before showing me results. Despite being out for less than a month, it's climbed my rankings fast.

Weavy (Figma Weave)
This is how I make the logic of a creative workflow visible. It's node-based, so I can see exactly how one model feeds another. I've built flows for video creation, branding projects, and marketing campaigns that need coordination across multiple AI models. The canvas lets me test which models produce the best outcomes empirically rather than relying on hype.

It matters for my own clarity, but it also matters for clients. A creative AI workflow can feel like magic until someone asks how it works, and then it feels like chaos. Weavy gives me a diagram that proves there's a system behind the curtain. When a client asks, "How did you make that video?" I can show them the actual pipeline instead of waving my hands mysteriously. Can’t wait for its full integration into Figma.

The Reasoning Engines

The Big Three (Gemini, ChatGPT, Claude)
I have zero brand loyalty here. I rotate them based on who's having a good week. Claude is for writing, Gemini shines with huge context, and ChatGPT is my pick for structured logic. The point is choosing the right brain for the job and moving on with your day.

I am currently partial to Gemini 3 Pro, with ChatGPT 5.2 as my second choice. This backup ensures that if one model has an off day (which happens more than you'd think), I have immediate alternatives. The models have different strengths that emerge and recede with each update. It's basically polyamory for AI assistants, except everyone knows about each other and nobody's feelings get hurt.

Claude Skills
Skills are the "I already explained this" antidote. I've built a writing skill, so I don't have to re-explain my tone every single morning. It's modular and clean. Skills add immediate context and memory designed for certain tasks. I use them for fact-checking, implementing different writing voices, creating personas, and building artifacts like slides and infographics.
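If you haven't built one: a Skill is just a folder containing a SKILL.md file, YAML frontmatter on top, instructions underneath. A minimal sketch of what a writing-voice skill can look like (the name and rules here are illustrative, not my actual file):

```markdown
---
name: newsletter-voice
description: Applies my newsletter writing voice. Use when drafting or editing newsletter content.
---

# Newsletter Voice

- First person, wry, a little self-deprecating.
- Short sentences over long ones. No corporate filler.
- Jokes are allowed; disclaimers are not.
```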

Deep Research
I pair my personal research with Deep Research when I feel the "tab spiral" beginning. Instead of disappearing into the web for three hours and resurfacing with a half-formed thesis and a headache, I let an agent do the digging and hand me a citation-rich memo. I still rewrite and interpret, but it gets me the evidence base fast enough that I actually stay sane. I alternate between ChatGPT and Gemini for research, depending on release cycles and which model is currently stronger. Model agnosticism keeps me optimizing for capability rather than brand.

The Supporting Cast

Everything else in the dock is purely functional.

Eleven Labs: for when I'm too lazy to record my own voiceovers (I've cloned my voice for side projects, which sounds narcissistic until you realize it's just practical).
N8N: the backend plumbing that chains AI agents into cohesive systems without me thinking about it.
Scheduled Tasks in ChatGPT: so I wake up to a news briefing instead of a doom-scroll.
Google AI Mode: because I've stopped using traditional Google Search entirely (I prefer context and synthesis over link directories).
Projects in Claude and ChatGPT: keeping client work siloed with persistent memory.
Canvas: for prototyping interactive graphics when I want conversation-driven iteration.
Gamma: for slide decks that don't look like standard corporate templates from 2012.
App Integrations (MCP): connecting everything to Google Drive, Notion, Photoshop, and Figma, turning AI from isolated conversations into an operational layer across my workspace (a minimal sketch of a custom server below).
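For the curious, wiring a custom MCP server into that layer is shorter than you'd expect. A minimal sketch using the official MCP Python SDK; the brand-palette tool itself is made up for illustration:

```python
# A tiny custom MCP server. The FastMCP scaffolding is the official MCP
# Python SDK (pip install mcp); the tool below is a hypothetical example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("brand-assets")

@mcp.tool()
def brand_palette(project: str) -> list[str]:
    """Return the approved hex palette for a project (stubbed for illustration)."""
    palettes = {"verses": ["#1A1A1A", "#F5F1E8", "#C0392B"]}
    return palettes.get(project, [])

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which desktop clients expect
```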

And the honorable mentions: Suno for music generation when I need original soundtracks. Infinite Talk for fixing lip-sync problems in AI video. Both are niche, both are occasionally essential.

So that's my stack. Twenty-five tools, most of which didn't exist eighteen months ago and half of which will probably be obsolete or absorbed into something else by next year. The weird part is how fast the dependence set in: building without them already feels like doing carpentry with my teeth.

What's your stack look like?

Drop a comment with your ride-or-die tools, or the one you tried that immediately made you question your judgment. I'm always curious what's working in other people's creative workflows.

Need help with any of these? Want to talk through which robot to call for your specific kind of magic? I'm at [email protected]. Always happy to talk shop.

— Lauren Eve Cantor

thanks for reading!

if someone sent this to you, or you haven't signed up yet, please subscribe so you never miss an issue.

we’ve also started publishing more frequently on LinkedIn, and you can follow us here

if you’d like to chat further about opportunities or interest in AI, please feel free to reply.

if you have any feedback or want to engage with any of the topics discussed in Verses Over Variables, please feel free to reply to this email.

banner images created with Midjourney.