I Built Gemini a Pricing Tracker to Spy on Competitors
How to compete against Anthropic, OpenAI & Perplexity
If you’d like to play around with the prototype for this competitive intel tool, just shoot me a message!
Context
I’ve been a student of competitive intelligence for about 4 years. To this day I think it’s one of the most valuable parts of pricing and CRO design, and something very few companies actually do.
If you can track and effectively distill what your competitors are doing, you have an unfair advantage. It’s as simple as that, and very few companies have the capability.
If you know which messaging is resonating, which A/B tests succeeded, and how to design your pricing to counter-position against competitors, you can grow faster than anyone else.
You don’t need to stumble around in the dark; just copy what’s already working for others.
What’s the problem?
But today the speed of change in AI is blistering. Companies are running reasoning models at a loss, changing pricing and usage gates every few weeks to try to balance subscriber conversion with profitability.
As a result, Gemini’s competitors are making changes daily, with no great way to track them or the effect they might have on Gemini’s premium subscriber numbers. Keeping up means answering questions like:
✅ Positioning: Which features should you highlight on the paywall?
✅ Pricing: Is your main priority revenue generation or conversion rate?
✅ Usage: What free-tier usage limits are optimal to entice upgrades?
✅ Embedded Upsells: How can you increase the number of upsells a user sees while keeping them natural? How can you design the product so users hit those paywall gates organically?
My Prototype
I took a crack at a custom solution that scrapes the blogs, pricing pages and Facebook Ad Library listings of OpenAI, Anthropic, Mistral, and Perplexity to see what changes they’ve made.
It watches for copy changes, which messaging is working best, whether they’re offering a percent discount vs. months free on their annual plan, and whether the usage limits for SOTA models have changed, so it can create alerts.
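The article doesn’t include any code, but here’s a minimal sketch of what the capture step could look like, assuming Playwright for headless rendering; the `TARGETS` dict, the example URLs, and the file layout are all illustrative, not from the prototype itself:

```python
# Minimal sketch of the capture step: render each tracked page headlessly and
# save a full-page screenshot, keyed by competitor and date.
# Assumes Playwright is installed (`pip install playwright`, `playwright install chromium`).
from datetime import date
from pathlib import Path
from playwright.sync_api import sync_playwright

# Example targets only; the real tool also watches blogs and the Facebook Ad Library.
TARGETS = {
    "openai": ["https://openai.com/chatgpt/pricing/"],
    "anthropic": ["https://www.anthropic.com/pricing"],
    "perplexity": ["https://www.perplexity.ai/pro"],
}

def capture_all(out_dir: str = "snapshots") -> list[Path]:
    paths = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(viewport={"width": 1440, "height": 900})
        for company, urls in TARGETS.items():
            for i, url in enumerate(urls):
                page.goto(url, wait_until="networkidle")
                dest = Path(out_dir) / company / f"{date.today()}_{i}.png"
                dest.parent.mkdir(parents=True, exist_ok=True)
                page.screenshot(path=str(dest), full_page=True)
                paths.append(dest)
        browser.close()
    return paths
```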
The tool takes screenshots of the pages and uses a vision model to compare them against previous versions to determine whether there were any meaningful differences.
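I’m not tied to a specific provider here, but as a sketch, the comparison step could be a single call to a vision-capable model with the “before” and “after” screenshots attached; the model name, prompt, and JSON schema below are assumptions, not the prototype’s exact setup:

```python
# Sketch of the comparison step: send the previous and current screenshots to a
# vision-capable model and ask for meaningful differences as structured JSON.
import base64
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def _as_data_url(path: str) -> str:
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

def diff_screenshots(previous_png: str, current_png: str) -> dict:
    prompt = (
        "Compare these two screenshots of the same pricing page (old, then new). "
        "List meaningful differences in pricing, usage limits, or paywall copy. "
        'Respond as JSON: {"changed": bool, "category": "pricing|usage|copy|none", '
        '"summary": str}.'
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": _as_data_url(previous_png)}},
                {"type": "image_url", "image_url": {"url": _as_data_url(current_png)}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)
```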
For major changes like pricing, it would raise an alert immediately; for things like changes in paywall copy, it would track the change, and if the change stuck around for longer than 2 weeks that was a pretty good signal the competitor’s test had been successful.
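That alerting rule is simple enough to write down directly. The two-week persistence threshold is the one described above; the `Change` record and categories are illustrative:

```python
# Sketch of the alerting rule: pricing/usage changes alert immediately, while
# copy changes only become a signal once they've persisted for two weeks.
from dataclasses import dataclass
from datetime import date, timedelta

PERSISTENCE_WINDOW = timedelta(weeks=2)

@dataclass
class Change:
    company: str
    category: str      # "pricing", "usage", or "copy"
    summary: str
    first_seen: date
    last_seen: date

def classify(change: Change, today: date) -> str | None:
    """Return an alert level for a change, or None if it's just being tracked."""
    if change.category in ("pricing", "usage"):
        return "immediate"                   # price or limit moves alert right away
    if change.category == "copy":
        if today - change.first_seen >= PERSISTENCE_WINDOW:
            return "likely-successful-test"  # copy that survived ~2 weeks
        return None                          # keep watching
    return None
```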
But because AI has a tendency to ‘hallucinate’, especially with vision models, it’s far too error-prone to deploy into production unsupervised for most use cases.
This meant I had to design the scraper so that users could scan the top-level info, then dig in and verify the information was correct.
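One way to support that “trust but verify” flow is to keep the evidence attached to every alert, so a reviewer can open the before/after screenshots next to the model’s summary. The record shape below is my own assumption, not the prototype’s data model:

```python
# Every alert carries references to the screenshots that produced it, plus a
# flag a human flips once they've checked the images themselves.
from dataclasses import dataclass, asdict
import json

@dataclass
class Alert:
    company: str
    category: str              # "pricing", "usage", or "copy"
    summary: str               # the vision model's one-line description
    previous_screenshot: str   # path to the "before" image
    current_screenshot: str    # path to the "after" image
    verified: bool = False     # set to True after a human has confirmed it

def export_for_review(alerts: list[Alert], path: str = "alerts.json") -> None:
    """Dump alerts to JSON so the top-level info can be scanned and drilled into."""
    with open(path, "w") as f:
        json.dump([asdict(a) for a in alerts], f, indent=2)
```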