Mini Report
10 credits · POST /api/ideate/analyze
Headline subset of the pipeline
- Idea normalization + intake
- Demand pressure check
- Lightweight competitor lookup
- Headline score and quick verdict
The pipeline behind every Unycorn report
Real signals from Product Hunt, Google Trends, deep web search, Reddit, Twitter, GitHub, Wikipedia and live competitor websites — not vibes from a chat model.
5–10 min
analysis time
30+
data points per report
Generic AI tools vs. Unycorn
The pipeline
Twelve stages. Most of them fetch real evidence. The LLMs come last — and we run several of them against each other to catch hallucinations.
intake.normalize
Idea intake & normalization
Parse the user-submitted idea into category, target user, and core keywords.
lib/demand-pressure.ts
Demand pressure analysis
Score how much real-world frustration exists around this problem.
lib/google-trends.ts
Google Trends pull
Live search-volume curves for the core keywords — momentum, not memory.
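One way to turn a search-volume curve into a momentum signal is to compare recent interest against the trailing baseline. This is an illustrative sketch, not the actual lib/google-trends.ts implementation; `trendMomentum` and its window size are assumptions:

```typescript
// Momentum = average interest over the last few weeks relative to the
// trailing baseline. A ratio above 1 means search interest is rising;
// below 1, the curve is cooling off.
function trendMomentum(weekly: number[], recentWeeks = 8): number {
  if (weekly.length <= recentWeeks) return NaN; // need a baseline to compare against
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const recent = avg(weekly.slice(-recentWeeks));
  const baseline = avg(weekly.slice(0, -recentWeeks));
  return recent / baseline;
}
```

A curve that doubled in its recent window scores 2.0; a flat curve scores 1.0.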
lib/web-search.ts
Deep web search
Deep search across the open web for organic competitive landscape, ad density, and who actually ranks for the queries that matter.
social.signal-mining
Reddit & Twitter signal mining
Scan Reddit threads and Twitter for upvoted complaints, workaround posts, and founder buzz around the problem space.
db.competitors · db.competitor_sources
Competitor discovery
Match your idea against our Postgres competitor table, sourced from Product Hunt today (Hacker News and GitHub next).
lib/enrich/competitor.ts
Live homepage fetch + LLM positioning
Fetch each competitor’s site and extract positioning, target users, pricing model, and vertical.
lib/competitor-enrichment.ts
Funding / team enrichment cascade
Wikipedia → GitHub → Product Hunt → web search → Apollo. Cached, deduplicated, snake_case keys.
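The cascade is a first-hit-wins fallback: try each source in priority order and stop at the first answer. A minimal sketch of the pattern; the types and the `enrichCompany` name are illustrative, not the actual lib/competitor-enrichment.ts API:

```typescript
type Enrichment = { funding?: string; founded?: number; source: string };
type Fetcher = (name: string) => Promise<Enrichment | null>;

// Walk the sources in priority order
// (Wikipedia → GitHub → Product Hunt → web search → Apollo)
// and return the first non-null result.
async function enrichCompany(
  name: string,
  fetchers: Fetcher[],
): Promise<Enrichment | null> {
  for (const fetch of fetchers) {
    try {
      const hit = await fetch(name);
      if (hit) return hit;
    } catch {
      // A failing source falls through to the next one in the cascade.
    }
  }
  return null; // every source missed
}
```

A source that errors out is treated the same as a miss, so one flaky upstream never blocks the whole enrichment.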
lib/market-timing.ts
Market timing analysis
Is the wave forming, cresting, or already broken? We score the moment.
lib/economics-engine.ts
Unit economics modeling
Back-of-the-envelope CAC, LTV, and the price point that has to work for this to be a business.
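The back-of-the-envelope math is standard SaaS arithmetic: LTV from price, gross margin, and churn, then the price that clears a target LTV:CAC ratio. A sketch with illustrative inputs, not the economics engine's actual model:

```typescript
// LTV = monthly revenue × gross margin × expected lifetime in months,
// where expected lifetime is 1 / monthly churn.
function ltv(monthlyPrice: number, grossMargin: number, monthlyChurn: number): number {
  return (monthlyPrice * grossMargin) / monthlyChurn;
}

// Solve ltv(price) = targetRatio × CAC for price: the monthly price
// that has to work for the unit economics to close.
function breakEvenPrice(
  cac: number,
  targetLtvToCac: number,
  grossMargin: number,
  monthlyChurn: number,
): number {
  return (targetLtvToCac * cac * monthlyChurn) / grossMargin;
}
```

For example, at $50/month, 80% margin, and 5% monthly churn, LTV is about $800; against a $300 CAC and a 3:1 target, the price floor works out to $56.25/month.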
orchestrate.multi-llm
Multi-LLM orchestration & validation
Multiple LLMs synthesize the report and cross-check each other. If they disagree on a claim, we flag it instead of guessing.
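The cross-check reduces to a per-claim agreement test: keep what every model asserts identically, flag the rest. A minimal sketch of that validation step; the shape of the claim data is an assumption:

```typescript
// Each claim's value as asserted by each model in the ensemble.
type ClaimByModel = Record<string, string>;

// Keep a claim only when every model agrees on its value; flag
// disagreements for review instead of silently picking one answer.
function crossCheck(claims: Record<string, ClaimByModel>): {
  agreed: Record<string, string>;
  flagged: string[];
} {
  const agreed: Record<string, string> = {};
  const flagged: string[] = [];
  for (const [claim, byModel] of Object.entries(claims)) {
    const values = new Set(Object.values(byModel));
    if (values.size === 1) {
      agreed[claim] = values.values().next().value as string;
    } else {
      flagged.push(claim);
    }
  }
  return { agreed, flagged };
}
```

If two models name the same top competitor but disagree on market size, the competitor claim survives and the market-size claim is flagged.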
lib/ideate-scoring.ts
Data aggregation
Roll every signal — demand, social, competitive, economics, timing — into the final structured report and score breakdown.
This is why a real report takes 5–10 minutes.
We're not generating text — we're collecting evidence.
Sources
Every report is grounded in fresh, traceable data. No invented competitors. No made-up market sizes.
Product Hunt
Competitor discovery, votes, comments, Featured signals
Daily updates
Google Trends
Search-volume momentum on the core keywords
Live, per request
Deep web search
Organic competitive landscape, ad density, ranking pages
Live, per request
Reddit
Upvoted complaints, workaround threads, community pain
Live, per request
Twitter / X
Public sentiment, founder buzz, complaint signals
Live, per request
Hacker News
Technical interest, Show HN launches, Ask HN frustrations
Live, per request
GitHub
Repo signals, stars, activity for technical competitors
Per enrichment, with caching
Wikipedia
Funding history, founding dates for established players
Per enrichment, with caching
Live web fetch
Competitor positioning, pricing, target users — LLM-extracted from homepage HTML
180-day cache, refresh on demand
Quality score
quality_score = votes + 0.5 × comments + (100 if Featured else 0)
Computed in Postgres via compute_quality_score(id) and refreshed automatically on every Product Hunt ingest. A loud launch beats a dead repo, every time.
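In application code the same formula is a one-liner. A TypeScript mirror of the published formula; the SQL function itself lives in Postgres, so the names here are illustrative:

```typescript
interface LaunchSignals {
  votes: number;
  comments: number;
  featured: boolean;
}

// Mirrors compute_quality_score(id):
// votes + 0.5 × comments + a flat 100-point bonus for Featured launches.
function qualityScore({ votes, comments, featured }: LaunchSignals): number {
  return votes + 0.5 * comments + (featured ? 100 : 0);
}

// A Featured launch with 250 votes and 40 comments:
// 250 + 20 + 100 = 370
qualityScore({ votes: 250, comments: 40, featured: true });
```

The comment weight (0.5) and the Featured bonus (100) are the levers: discussion counts for half a vote, and a Featured slot outweighs all but the loudest launches.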
Analysis tiers
Same pipeline. The tiers differ only in how many stages run, depending on how much evidence you need.
POST /api/ideate/analyze
Headline subset of the pipeline
POST /api/ideate/deep
All 12 stages
Why it takes minutes
A 30-second answer is a 30-second guess. We'd rather take a few minutes and hand you something you can actually defend.
Real fetches, not cached guesses
Every report pulls fresh trend and SERP data at request time. The signal you see is from today, not from someone else’s session last quarter.
LLMs synthesize, then cross-check each other
Multiple models see only evidence we already collected — and they validate the report against each other before you see it. Disagreements get flagged.
180-day cache where it makes sense
Competitor positioning is cached for 180 days because positioning doesn’t change daily. Trends and SERP are always fresh.
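The 180-day rule is a plain TTL check at read time. A minimal sketch, assuming cache entries carry a fetch timestamp (the constant matches the stated policy; the function name is illustrative):

```typescript
const CACHE_TTL_DAYS = 180;

// Positioning snapshots older than the TTL trigger a fresh homepage
// fetch; trend and SERP data skip this check and are always re-fetched.
function isStale(fetchedAt: Date, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - fetchedAt.getTime();
  return ageMs > CACHE_TTL_DAYS * 24 * 60 * 60 * 1000;
}
```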
Drop in a startup idea and watch all twelve stages run.