The AI gold rush is in full swing. From early-stage startups to the world's largest enterprise vendors, everyone is shouting the same message: "We have AI inside."
But most of it is noise.
The gap between what's promised and what's actually delivered by "AI-powered" tools has never been wider.
For non-technical decision-makers, it can feel impossible to separate innovation from marketing hype. You're asked to make high-stakes decisions about tools, strategies, and vendors, often with little more than a demo and a buzzword-heavy pitch deck.
This article provides a clear framework to cut through the noise. It will help you evaluate AI claims with confidence, ask smarter questions during demos, and avoid falling for tools that overpromise and underdeliver.
Why "AI Inside" Is Basically Meaningless
The phrase "AI-powered" is quickly becoming the new "cloud-based." Technically accurate, functionally meaningless.
Many of today's so-called AI features are nothing more than thin wrappers on top of public model APIs like OpenAI's or Anthropic's Claude. Vendors plug a model into a text box, add a prompt under the hood, and call it a copilot.
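To make "thin wrapper" concrete, here is a minimal, purely illustrative sketch of what such a feature often amounts to behind the scenes, written against the public OpenAI Python SDK. The ticket-summarizing use case, function name, and prompt are hypothetical examples, not any particular vendor's code.

```python
# Illustrative only: roughly what many "AI-powered" features reduce to.
# The use case, prompt, and model choice below are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize_ticket(ticket_text: str) -> str:
    """The vendor's 'copilot': a hidden prompt wrapped around a public model."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a helpful support assistant. "
                        "Summarize the customer ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content
```

If a product's AI feature could be rebuilt in an afternoon with a snippet like this, the differentiation lives in the packaging, not the technology.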
A 2023 Gartner report found that over 70% of enterprise buyers were unclear about how a vendor's AI actually worked. This confusion isn't accidental. As tech journalist Casey Newton noted in Platformer, much of the industry is incentivized to ride the AI hype wave without doing the hard work of building differentiated technology.
It’s easy to slap a logo on a feature. It’s much harder to build something with real strategic value.
The Snake Oil Spectrum: How to Classify AI Claims
Not all AI-powered products are bad, but most fall short of what they claim. To simplify evaluation, we use the "Snake Oil Spectrum," which covers the three types of AI you're most likely to encounter:
1. Vaporware AI
This is the purest form of hype. The product promises AI features that don’t exist yet, are in "stealth beta," or are only shown in polished demo videos. Often, the team can’t answer basic questions about models, accuracy, or use cases.
2. Wrapper AI
Here, a basic large language model (LLM) integration is layered on top of an existing product. These tools add convenience, but the AI isn't doing anything that ChatGPT can't do with a good prompt.
If you can replicate 80% of the value in a free ChatGPT window, it's wrapper AI.
3. Strategic AI
This is what you're looking for. Tools that:
Solve high-friction problems you couldn't solve before
Are built on proprietary data or use models tuned to your context
Are deeply integrated into product workflows
Deliver measurable outcomes
Strategic AI isn't just novel; it delivers a real competitive advantage.
A 5-Part Framework for Evaluating AI Products
When evaluating a tool or vendor, use these five criteria to guide your thinking:
1. Functionality
What problem does the AI solve?
Is this a high-impact problem for my business?
Would I still use this product if the AI feature were removed?
2. Differentiation
Is this just a plug-in, or is there proprietary tech behind it?
Is the model trained on unique, high-value data?
3. Transparency
Do they tell you which model is being used (GPT-4, Claude, an open-source LLM, etc.)?
Do they explain how it's tuned or constrained?
Is there documentation that shows how the AI behaves in real-world scenarios?
4. Control
Can users adjust or influence the AI's behavior?
Are there confidence indicators or error-handling features?
5. Outcomes
Can the vendor show real ROI or performance lift from the AI?
Are there meaningful case studies?
Red Flags in AI-Powered Pitches
During your next vendor demo, listen closely. These red flags often signal low-value AI:
"Trained on billions of data points" (but can’t tell you which or why it matters)
"Proprietary AI" (but built entirely on someone else's model)
No examples of user feedback loops or continuous tuning
Demo content is generic, with no real-world edge cases
Salespeople can’t explain how the model behaves under pressure
Ask These 5 Questions in Every Demo
Most vendors expect surface-level questions. Stand out by asking:
What model(s) is this feature built on?
Do you fine-tune the model on proprietary data?
How do you handle errors, hallucinations, or bad outputs?
Can I test the AI with real data before committing?
What measurable results have your customers seen?
If the vendor stumbles here, you have your answer.
You Don't Need to Be Technical. You Need to Be Strategic
The best buyers in the AI era won’t be the most technical. They’ll be the most curious, skeptical, and strategic.
To get up to speed quickly, get our best-selling AI strategy guide: AI Strategy in 30 Minutes: A Crash Course for Business Leaders. It gives you the vocabulary and mental models you need to have credible, confident conversations about AI adoption in your company, without getting lost in technical details.
If you're a business leader tasked with evaluating tools, setting vision, or leading strategy, you can no longer afford to take AI claims at face value.
Ask better questions. Look beyond the marketing. And trust your instinct. If it sounds too good to be true, it probably is.