
We're Scanning 500 Companies This Week. Here's the Infrastructure That Makes It Possible.

Oloye Adeosun · Updated 12 Apr 2026


The Short Answer

We started with 50 companies in March. Scaled to 150 in April. This week, the scanner is running against 500 companies across 11 sectors. That kind of jump does not happen by accident. It requires infrastructure that was built to fail gracefully, resume on crash, and cost less than a restaurant dinner for the entire run.


50, Then 150, Then the Question

The first AI Visibility Benchmark was semi-manual. Fifty enterprise companies, five sectors, one API. I ran it from a script that needed hand-holding at every step. It took a weekend.

The April edition was different. 150 companies, same four-dimension scoring, but now running through an orchestrator called run_monthly.py that handled the full pipeline: load companies, call the APIs, score each one, save results, generate analysis. It crashed at company 87 on a Tuesday night. By Wednesday morning it had resumed at 88 and finished the batch by lunch.

That resume capability is the entire point. Research infrastructure is not about speed. It is about reliability. A scanner that runs perfectly on 50 companies but fails silently at 120 is worse than one that crashes loudly and picks up where it left off.

The April results confirmed what March suggested: 81% of companies are invisible to AI recommendations. The average score was 28.7 out of 100. The bottom 10 were all IT Services firms. These were not small businesses. These were enterprise companies with marketing teams, budgets, and websites that rank well on Google.

So the question became: does this pattern hold across professional services sectors, or is it specific to the tech and SaaS verticals we had been scanning?

The only way to answer that is to scan more sectors. A lot more.


What 500 Companies Actually Looks Like

This week's scan covers 11 sectors. Five are existing cohorts getting rescanned for trend data. Six are new:

→ UK Accountancy Firms (50 companies)
→ UK Recruitment Agencies (50 companies)
→ UK Financial Advisers (50 companies)
→ UK Marketing Agencies (50 companies)
→ UK Architecture Firms (50 companies)
→ UK Insurance Brokers (50 companies)

That is 300 new companies. Add the 150-company rescan plus roughly 50 additional companies in the existing sectors, and we land at 500.

Each sector follows the same methodology. Each company gets the same four-dimension scoring. Each scan follows The Signal Source Method — six steps from signal detection through to published, citable research. The methodology does not change with scale. The infrastructure does.


The Four-Layer Architecture

The scanner runs in four layers, each with a single job.

Layer 1: Company Registry. Every company enters the system by domain. Deduplication happens here. If the same company appears in two sector lists, the registry catches it before we waste an API call. This sounds trivial until you realise that "company X" in the recruitment list and "company X consulting" in the marketing list can share a domain.
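The dedup idea is simple enough to sketch. This is a minimal illustration, not the project's actual code — the class and function names are assumptions — showing how keying by normalised domain catches the "company X" / "company X consulting" case before any API call is made:

```python
from urllib.parse import urlparse

def normalise_domain(url_or_domain: str) -> str:
    """Reduce a URL or bare domain to a canonical form for dedup."""
    raw = url_or_domain.strip().lower()
    if "//" in raw:
        raw = urlparse(raw).netloc   # strip scheme, keep host
    raw = raw.split("/")[0]          # drop any path
    return raw.removeprefix("www.")

class CompanyRegistry:
    def __init__(self):
        self._by_domain = {}

    def add(self, name: str, url: str, sector: str) -> bool:
        """Register a company; return False if the domain is already known."""
        domain = normalise_domain(url)
        if domain in self._by_domain:
            # Same firm from another sector list: record the sector,
            # but no second API call will ever be made for it.
            self._by_domain[domain]["sectors"].add(sector)
            return False
        self._by_domain[domain] = {"name": name, "sectors": {sector}}
        return True
```

Two entries from different sector lists that share `companyx.co.uk` collapse into one registry record with two sector tags.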

Layer 2: Edition Management. Each monthly benchmark gets its own directory. April's 150-company scan lives in one folder. May's 500-company scan lives in another. The edition manager creates the directory structure, tracks which companies belong to which edition, and handles the rescan logic for returning cohorts. This is how we will eventually compare a company's April score to its May score to its June score.
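The per-edition layout can be sketched in a few lines. This is an illustrative shape, assuming one directory per monthly edition with a JSON manifest of its companies (the helper name and file name are assumptions, not the project's real code):

```python
import json
from pathlib import Path

def create_edition(root: Path, edition: str, companies: list) -> Path:
    """Create a per-edition directory and write its company manifest."""
    edition_dir = root / edition                   # e.g. benchmarks/2026-05
    edition_dir.mkdir(parents=True, exist_ok=True)
    manifest = edition_dir / "companies.json"      # which companies belong here
    manifest.write_text(json.dumps(companies, indent=2))
    return edition_dir
```

With April and May in sibling directories, comparing a company's scores across editions is just a matter of reading two manifests and two results files.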

Layer 3: The Scanner. Four APIs, each measuring a different dimension. OpenAI and Gemini handle Citation Presence — does the AI mention this company when asked about the category? Brave Search handles Entity Recognition and Citation Breadth — is the company known across independent sources? Tavily handles Content Structure — is the website built in a way that AI platforms can parse?
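The four dimensions above combine into a single per-company record. A minimal illustration, assuming each dimension contributes to a 100-point total — the dimension names come from the methodology, but the equal weighting and the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class VisibilityScore:
    citation_presence: float    # OpenAI + Gemini: is the company mentioned?
    entity_recognition: float   # Brave Search: is the entity known?
    citation_breadth: float     # Brave Search: across how many sources?
    content_structure: float    # Tavily: can AI platforms parse the site?

    def total(self) -> float:
        """Combined score out of 100 (equal weighting is an assumption)."""
        return (self.citation_presence + self.entity_recognition
                + self.citation_breadth + self.content_structure)
```

Keeping the dimensions separate in the record is what lets the analysis layer say "the bottleneck is citation presence, not site structure" rather than just reporting a low total.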

The scanner saves after every single company. Not after every batch. Not at the end. After every one. When it crashed at company 87 in April, we lost zero data. The resume logic reads the results file, counts how many are done, and starts at the next one. No human intervention required beyond restarting the process.
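The resume behaviour described above — persist after every company, count what is done, start at the next one — can be sketched like this. The function names and file format are illustrative assumptions, not the actual `run_monthly.py` code:

```python
import json
from pathlib import Path

def scan_with_resume(companies, results_path, scan_one):
    """Run scan_one per company, saving after each; a restart skips done work."""
    results_path = Path(results_path)
    results = (json.loads(results_path.read_text())
               if results_path.exists() else [])
    for company in companies[len(results):]:   # resume at the first unscored one
        results.append(scan_one(company))
        results_path.write_text(json.dumps(results))  # persist after every company
    return results
```

If the process dies at company 87, the results file holds 86 entries; restarting the process simply slices the company list at index 86 and carries on.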

Layer 4: Analysis Generation. Once the scan completes, this layer auto-generates sector breakdowns, score distributions, and cross-sector comparisons. The published research page and the living stats page both pull from this output.


The Cost Conversation

People assume running AI APIs against 500 companies is expensive. It is not.

The April scan — 150 companies, four APIs each — cost approximately $30. The projected cost for 500 companies is $80 to $100. Call it $0.18 per company.
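The per-company figure is just the midpoint of the projected range over the company count:

```python
# Per-company cost from the stated totals above.
low, high, companies = 80, 100, 500
per_company = (low + high) / 2 / companies   # midpoint of the $80–$100 projection
print(f"${per_company:.2f} per company")
```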

For context, a single sponsored LinkedIn post costs more than scanning an entire sector. The economics work because we are making targeted, structured API calls, not open-ended conversations. Each call has a specific prompt, expects a specific format, and returns a scoreable result.

This is what makes the benchmark model repeatable. It is not a one-off study that took three months and a research team. It is infrastructure that runs monthly, adds sectors incrementally, and costs less than a mid-range dinner.


What We Expect to Find

Here is the hypothesis going into the six new sectors.

Professional services firms — accountancy, financial advice, insurance, recruitment — lack the comparison ecosystem that gives SaaS companies a citation advantage. SaaS companies appear in G2 reviews, Capterra roundups, "best of" lists, and competitive analyses. Those comparison contexts are exactly what AI platforms replicate when they recommend a company.

An accountancy firm in Leeds does not appear in "top 10 accountancy firms" roundups the way a CRM platform appears in "best CRM software" lists. The comparison content simply does not exist at the same density.

If our enterprise data showed 81% invisibility across sectors that at least have some comparison content, the professional services sectors could be worse. My working estimate is 85% to 90% invisibility across the new six sectors.

There is a counter-argument. Some professional services sectors have strong directory ecosystems — the FCA register for financial advisers, the ICAEW directory for accountants, industry-specific recommendation platforms. These might function as citation sources in ways that SaaS review sites do not. We will know by the end of the week.


What 500 Companies Enables

At 50, you have a study. At 150, you have a benchmark. At 500, you have a dataset.

A 500-company dataset across 11 sectors enables three things we could not do before:

Trend tracking. The 150 companies from April get rescanned. We can now measure movement. Did any company's score change? Did an entire sector shift? This is the beginning of longitudinal data.
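Measuring movement between editions is a join on domain. A minimal sketch, assuming each edition's scores are a domain-to-score mapping (the function name is illustrative):

```python
def score_deltas(prev: dict, curr: dict) -> dict:
    """Score change per domain for companies present in both editions."""
    return {d: round(curr[d] - prev[d], 1) for d in prev.keys() & curr.keys()}
```

Companies that appear in only one edition drop out of the comparison automatically; only the rescanned cohort produces deltas.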

Sector-specific reports. Each of the 11 sectors gets its own analysis. A UK accountancy firm does not care about SaaS citation patterns. They care about where they rank against 49 other accountancy firms. These become standalone research pages and, eventually, productised reports.

Statistical confidence. Patterns that appeared in 50 companies and held at 150 either confirm or break at 500. If the citation bottleneck persists across 11 sectors and 500 companies, we stop calling it a finding and start calling it a structural condition of the market.

The scanner started running Monday morning. By Friday, we will have the largest UK-focused AI visibility dataset published anywhere. Not because we have a large team or a large budget. Because the infrastructure was designed to scale from day one.




Oloye Adeosun

Marketing Manager, Enterprise & Automation. Publishes original research on AI visibility and enterprise marketing at GTM Signal Studio. Author of the AI Visibility Benchmark 2026 (50 enterprise companies scored) and the AI Visibility Framework.
