Is Your AI Idea Any Good? A 5-Question Stress Test
By Stella Garber, Co-founder & CEO, Hoop | Former Head of Marketing, Trello. As seen on Every (May 2025)
Why This Matters
In an era where every third startup claims to be “AI-powered,” separating signal from noise is harder than ever. Stella Garber, a serial founder and angel investor, has lived through pivots, flops, and a major acquisition (Trello to Atlassian). In this piece for Every, she shares the five filters she uses to evaluate whether an AI idea can survive the hype cycle and become a lasting business.
1. What existing workflow are you fundamentally changing?
Don’t just “AI-ify” a process. If your tool still requires users to input tasks manually, you’ve missed the point. At Hoop, we scrapped task creation entirely. Our AI monitors user commitments across tools and auto-generates reminders.
Test: If your product vanished, would users miss it—or feel relieved?
2. Where is the customer’s deepest frustration?
Find the moment your user swears under their breath—that’s your entry point.
At Trello, users hated digging for buried info. At Hoop, people forgot their own to-dos.
Ask: “What’s the last thing you missed that made you look bad?”
Rule: Vague pain = generic product. Specific pain = sharp wedge.
3. Can your AI remove a dependency or supercharge someone’s capabilities?
The best AI tools remove the need to wait, pay, or ask.
Lovable lets designers bypass engineers. Harvey does the work of teams of paralegals. Gamma kills the blank slide.
Prompt: “With our tool, who can do what they couldn’t before?”
4. What’s your unique data advantage?
Generic data = generic product.
Your moat comes from messy, exclusive, or workflow-embedded data others can’t easily access.
The deeper your product integrates, the harder it is to replace.
Reminder: AI should improve with usage. That’s not just a feature—it’s the stickiness.
5. How will you solve the trust problem?
LLMs hallucinate. Founders can’t.
Bake trust into your product from Day 1:
• Publish release notes.
• Explain every AI action (“Why did I see this?”).
• Allow user corrections.
• Get explicit consent before using data for training.
Bottom line: No trust = no adoption.