Research

ORIGINAL RESEARCH

AI Visibility Benchmark: April 2026

We tripled the sample to 150 B2B companies across 5 sectors. The pattern from 50 companies held at scale — and the invisible majority is even larger than we expected.

150 companies scored
5 sectors analysed
81% score 0-5 on citation
28.7 average score out of 100

THE HEADLINE

We tripled the sample. The gap got worse.

In March, we scored 50 enterprise companies and found that 44% scored 2/25 on Citation Presence. That felt high. We suspected it might be a sample artefact — 50 companies, weighted toward established enterprise players.

So we added 100 more companies. Same 5 sectors. Same methodology. Broader revenue range. Mid-market alongside enterprise.

The result: 81% of 150 companies score 0-5 on Citation Presence. Not 44%. Eighty-one percent.

The average total score dropped from 82.2 to 28.7 out of 100. The original 50 were the top of the market. Adding 100 mid-market companies revealed the real baseline — and it is far lower than anyone expected.

Only 19% of companies across 5 sectors are recommended by AI when a buyer searches their category.

EDITION COMPARISON

March (N=50) vs April (N=150)

Metric                    | March (50) | April (150) | Delta
Average Total Score       | 82.2       | 28.7        | -53.5
Citation Presence (avg)   | 13.7       | 5.9         | -7.8
Entity Recognition (avg)  | 23.4       | 7.8         | -15.6
Content Structure (avg)   | 20.1       | 6.7         | -13.4
Citation Breadth (avg)    | 25.0       | 8.3         | -16.7
Low citation (0-5/25)     | 44%        | 81%         | +37pp

Why the drop?

The March study sampled established enterprise companies — large teams, strong SEO, years of content. They represent the top of the market. Adding 100 mid-market companies reveals that outside the enterprise elite, most B2B companies have minimal AI visibility infrastructure. The original 50 are the exception, not the baseline.

THE DATA

Dimension averages (150 companies)

Citation Presence (weakest): 5.9/25
Entity Recognition: 7.8/25
Content Structure: 6.7/25
Citation Breadth: 8.3/25

What this means:

At the enterprise level (March study), the gap was citation-specific — companies scored well on everything else. At the broader market level, the gap is across all dimensions. Most mid-market B2B companies have weak AI visibility infrastructure across the board. Citation remains the weakest, but the entire foundation is missing.

BY SECTOR

Sector comparison (150 companies)

Sector                   | N  | Total | Citation
Enterprise SaaS          | 30 | 31.3  | 9.5
Financial Services       | 30 | 29.8  | 6.1
Professional Services    | 30 | 28.3  | 5.3
Management Consulting    | 30 | 27.3  | 4.7
Technology / IT Services | 30 | 27.1  | 4.0

Enterprise SaaS still leads but the gap narrowed. In the March study (10 per sector), SaaS averaged 89.8 with citation at 24.4. At 30 companies per sector, the average drops to 31.3 with citation at 9.5. The original 10 SaaS companies were outliers — well-established platforms on every review site.

Technology / IT Services remains the weakest at 4.0 citation. The bottom 10 companies in the entire study are all IT Services firms, every one scoring 2/100.

EXPLORE THE DATA

Interactive benchmark explorer


CITATION SPLIT

63% invisible
Uncited (0-5): 29 firms
Cited (21-22): 9 firms
Perfect (23-25): 8 firms

DIMENSION AVERAGES — All 46

Citation: 9.9/25
Entity: 16.8/25
Content: 14.7/25
Breadth: 17.5/25

ALL FIRMS RANKED

Total AI Visibility Score
Elixirr: 97
Argon & Co: 97
Wise Business: 97
iwoca: 97
Forsters: 97
Saffery Champness: 97
Avanade: 97
ActiveCampaign: 96
Adyen: 96
Stripe: 94
Gong: 93
HubSpot: 91
ZoomInfo: 91
Outreach: 90
6sense: 89
Salesforce: 89
Drift: 87
Revolut Business: 74
Checkout.com: 74
Tide: 73
Starling Bank Business: 73
GoCardless: 73
Funding Circle: 73
Softcat: 73
Computacenter: 73
Modulr: 70
Mishcon de Reya: 70
Shoosmiths: 70
Marketo: 67
Kainos: 67
Pardot: 66
Klaviyo: 8
Monday.com: 8
Braze: 8
Amplitude: 8
FTI Consulting: 8
Wipro: 8
Infosys: 8
TCS: 8
Capgemini: 8
Grant Thornton UK: 8
BDO UK: 8
CDW: 8
SHI International: 8
Bytes Technology: 8
Crayon: 8

CITATION VS TOTAL SCORE

The two clusters show the binary split

[Scatter chart: Citation Presence (0-25) vs Total score (0-100)]

CITATION SCORE DISTRIBUTION

No middle ground — firms score 2 or 22+

0-5: 29 firms
6-10: 0
11-15: 0
16-20: 0
21-22: 9 firms
23-25: 8 firms

Showing a representative sample. Full dataset available in the research CSV.

KEY INSIGHTS

What the data tells us at scale

The March study sampled the elite. The April study sampled the market.

The original 50 companies were established enterprise players with years of content, SEO investment, and brand recognition. They scored 82.2/100 on average. Adding 100 mid-market companies dropped the average to 28.7. The gap between the top and the rest is far wider than any single-sample study suggested.

81% invisible is the real baseline for B2B

At 50 companies (enterprise-heavy), 44% were invisible. At 150 companies (market-representative), 81% are invisible. The real question is not whether your company is visible to AI. It is whether you are in the 19% that is. For most B2B companies, the honest answer is no.

The bottom 10 are all from one sector

Every company in the bottom 10 is a Technology / IT Services firm scoring 2/100. Not low. The minimum. IT Services has no review platform ecosystem (no G2 equivalent), no buyer guide culture, and no comparison infrastructure. AI has nothing to draw on when deciding which firms to recommend.

Enterprise SaaS advantage narrows at scale

The original 10 SaaS companies scored 89.8 with 24.4 citation. At 30 companies, that drops to 31.3 with 9.5 citation. The well-known platforms (HubSpot, Salesforce, 6sense) still dominate, but the next tier of SaaS companies is just as invisible as consulting firms. The structural advantage only applies to category leaders.

The binary pattern holds at 3x the sample

81% score 0-5 on citation. 19% score 20-25. Almost nothing in between. This is the same all-or-nothing pattern from both the enterprise study and the UK law firms study. AI either recommends you or it does not. There is no 'partially visible.'

CITE THIS RESEARCH

Stats you can use

All stats from the April 2026 edition. Link to this page as your source.

81% of 150 B2B companies score 0-5 on AI citation presence
28.7: average AI visibility score across 150 companies (out of 100)
150 companies scored across 5 sectors in the April 2026 edition
19% of companies are recommended by AI when buyers search their category
82.2 → 28.7: average score drop when the sample expanded from 50 to 150
4.0/25: average citation score for Technology / IT Services (lowest sector)
9.5/25: average citation for Enterprise SaaS (highest sector, down from 24.4)
10/10: the bottom 10 companies are all IT Services firms scoring 2/100
0% of companies score in the 6-19 range on citation — binary split confirmed
3x sample increase confirms the same all-or-nothing citation pattern

METHODOLOGY

How we conducted this study

Sample

150 enterprise and mid-market B2B companies across 5 sectors: Enterprise SaaS (30), Management Consulting (30), Financial Services (30), Professional Services (30), Technology / IT Services (30). The original 50 from the March 2026 edition are included alongside 100 new companies added for April. New companies selected to broaden revenue tier representation within each sector.

Scoring

Each company scored across 4 dimensions, each worth 0-25 points for a total of 0-100. Citation Presence: does AI name the company in category queries? Entity Recognition: does AI correctly describe the company? Content Structure: can AI extract answers from the website? Citation Breadth: is the company mentioned across independent sources?
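The four-dimension sum described above can be sketched in a few lines. The dimension names come from the methodology; the function and variable names are illustrative only, not from the actual scanner:

```python
# Sketch of the 4-dimension scoring model: each dimension is
# scored 0-25, and the total is their sum on a 0-100 scale.
DIMENSIONS = ("citation_presence", "entity_recognition",
              "content_structure", "citation_breadth")

def total_score(scores: dict) -> float:
    """Sum the four 0-25 dimension scores into a 0-100 total."""
    for dim in DIMENSIONS:
        if not 0 <= scores[dim] <= 25:
            raise ValueError(f"{dim} must be in 0-25, got {scores[dim]}")
    return round(sum(scores[dim] for dim in DIMENSIONS), 1)

# The April dimension averages reproduce the 28.7 headline figure:
april_avg = {"citation_presence": 5.9, "entity_recognition": 7.8,
             "content_structure": 6.7, "citation_breadth": 8.3}
print(total_score(april_avg))  # 28.7
```

Plugging in the March averages (13.7, 23.4, 20.1, 25.0) gives 82.2 the same way.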

Scanner

v2.0 multi-API scanner using OpenAI (gpt-4o-mini), Google Gemini 2.0 Flash, Brave Search, and Tavily. Each company tested with 2 category-level keywords. The 100 new companies were scanned with v2. The original 50 companies carry their March v1 scores (Perplexity-only) for continuity. A cohort rescan with v2 is planned for the May edition.
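The scanner itself queries external APIs, but the core of the Citation Presence check reduces to looking for the company name in each model's answer to a category-level query. A minimal, self-contained sketch (no real API calls; the function name and sample responses are illustrative):

```python
import re

def citation_hits(company: str, responses: list) -> int:
    """Count how many AI responses mention the company by name.

    `responses` stands in for the raw text answers returned by each
    model/API for a category-level keyword; plain strings keep the
    sketch self-contained.
    """
    pattern = re.compile(re.escape(company), re.IGNORECASE)
    return sum(1 for text in responses if pattern.search(text))

responses = [
    "Top CRM platforms include Salesforce, HubSpot and Pipedrive.",
    "For enterprise sales teams, consider Salesforce or Microsoft Dynamics.",
    "Popular options are Zoho CRM and Freshsales.",
]
print(citation_hits("Salesforce", responses))  # 2
```

A production version would also need entity disambiguation (e.g. distinguishing "Tide" the fintech from the common noun), which a bare substring match cannot do.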

Monthly expansion

The benchmark expands by approximately 100 companies each month. New companies are sector-balanced, deduplicated against the master registry, and selected to represent a mix of revenue tiers. Each edition includes all companies from previous editions plus the new additions. This creates a growing dataset for trend analysis.
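The expansion step above (dedupe against the master registry, balance by sector) can be sketched as a single selection pass. All names here are hypothetical and the real pipeline is not published:

```python
def select_new_companies(candidates, registry, per_sector):
    """Pick sector-balanced additions not already in the master registry.

    candidates: list of (name, sector) tuples in priority order;
    registry: set of names benchmarked in previous editions.
    Returns up to `per_sector` new names per sector.
    """
    picked, counts = [], {}
    for name, sector in candidates:
        if name in registry:                  # dedupe against previous editions
            continue
        if counts.get(sector, 0) >= per_sector:
            continue                          # sector quota already filled
        picked.append((name, sector))
        counts[sector] = counts.get(sector, 0) + 1
    return picked

registry = {"HubSpot", "Salesforce"}
candidates = [("HubSpot", "SaaS"), ("Gong", "SaaS"),
              ("Drift", "SaaS"), ("Tide", "FinServ")]
print(select_new_companies(candidates, registry, per_sector=1))
# [('Gong', 'SaaS'), ('Tide', 'FinServ')]
```

Because every edition keeps all prior companies and only appends, the dedupe set grows monotonically, which is what makes month-over-month trend analysis possible.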

Limitations

AI platform responses vary by session, location, and time. Scores represent a point-in-time snapshot. The original 50 companies were scanned with v1 (Perplexity API only); direct score comparison with v2-scanned companies should note this methodology difference. Company names are published in the research but anonymised in all derivative content (blog, LinkedIn, newsletter).

Where does your company rank against 150 competitors?

This benchmark shows the market landscape. The Competitive Report shows where you stand — your company plus 10 direct competitors, scored with the same methodology.

Compare with previous edition: March 2026 (N=50)