Service — Jonomor
AI Visibility Audit
A structured evaluation of how well a business is positioned for AI retrieval. The audit scores five categories — Entity Stability, Category Ownership, Schema Graph, Knowledge Index, and Continuous Signal Surfaces — totaling 50 points.
The score is diagnostic. It identifies where the AI Visibility Framework has been implemented well, where it has not been applied, and which gaps are most likely causing citation failures.
Scoring Categories
01. Entity Stability (10 points)
What is evaluated
Whether the business, its founder, and its products are defined as distinct, named entities with consistent canonical names, correct Schema.org types, and stable @id values across all surfaces.
Why it matters
AI systems learn associations from consistently named entities. An entity that appears under multiple names or types across its own properties cannot produce a coherent representation — and cannot be reliably cited.
What failure looks like
- Organization name varies across domains and pages
- No Person entity declared for the founder
- Products referenced without consistent parent-child relationships
- Schema @id values differ between pages for the same entity
- No entity registry or equivalent naming governance
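To illustrate the passing condition, here is a minimal JSON-LD sketch of an organization, its founder, and a product declared as distinct entities that share stable @id values across surfaces. All names and URLs below are placeholders, not Jonomor's actual schema:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#organization",
      "name": "Example Co",
      "url": "https://example.com/",
      "founder": { "@id": "https://example.com/#founder" }
    },
    {
      "@type": "Person",
      "@id": "https://example.com/#founder",
      "name": "Jane Doe",
      "worksFor": { "@id": "https://example.com/#organization" }
    },
    {
      "@type": "Product",
      "@id": "https://products.example.com/#widget",
      "name": "Example Widget",
      "manufacturer": { "@id": "https://example.com/#organization" }
    }
  ]
}
```

The point of the sketch: every entity has exactly one canonical name, one type, and one @id, and every relationship (founder, worksFor, manufacturer) is declared by @id reference rather than by repeating the entity definition.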
02. Category Ownership (10 points)
What is evaluated
Whether the business has published a structured body of content that defines authority within a specific topic domain — a deliberate cluster architecture with pillar, depth, and definition layers that establishes a clear category claim.
Why it matters
Topic co-occurrence is one of the primary mechanisms by which AI models associate entities with categories. When an entity name co-occurs consistently with a topic domain across many documents, the model builds an association between that entity and the category.
What failure looks like
- No topic cluster architecture — only isolated articles
- Content does not define a specific category or domain
- No pillar article establishing category ownership
- Thin content depth — fewer than 5 articles per topic cluster
- No definition pages capturing the category's core terminology
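One way a pillar article can make its category claim machine-readable is the Schema.org `about` property pointing at a definition page. A hedged sketch with placeholder names and URLs (the @id references assume Organization and Person entities declared elsewhere on the site):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "https://example.com/guides/example-topic/#article",
  "headline": "The Complete Guide to Example Topic",
  "about": {
    "@type": "DefinedTerm",
    "name": "Example Topic",
    "url": "https://example.com/definitions/example-topic/"
  },
  "author": { "@id": "https://example.com/#founder" },
  "publisher": { "@id": "https://example.com/#organization" }
}
```

Supporting and definition articles in the cluster would repeat the same `about` reference, so the entity name and the category term co-occur in structured data as well as in prose.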
03. Schema Graph (10 points)
What is evaluated
Whether JSON-LD schema is correctly and consistently implemented across the site — with stable @id values, locked entity types, canonical author and publisher references, and no contradictions between schema and page copy.
Why it matters
Structured data reduces inference burden for AI parsers. Without it, AI systems must infer entity type, relationships, and attributes from prose alone — a weaker signal. With correct schema, entity facts are machine-readable and explicitly declared.
What failure looks like
- No JSON-LD schema present
- Schema types don't match entity types (e.g., methodology typed as Article)
- Author and publisher fields duplicate entity definitions instead of using @id references
- Placeholder dates in schema or synthetic freshness metadata
- @id values are not stable or resolve to non-canonical URLs
- Schema contradicts page copy on entity description or classification
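The @id-reference pattern named in the failure list can be shown in a short sketch. Rather than re-declaring the author and publisher inline on every page (which invites the naming drift described above), each page points at the single canonical definition by @id. URLs and names are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "https://example.com/articles/example/#article",
  "headline": "Example Article",
  "datePublished": "2024-06-01",
  "author": { "@id": "https://example.com/#founder" },
  "publisher": { "@id": "https://example.com/#organization" }
}
```

A parser merging this node with the canonical Organization and Person nodes gets one consistent graph; duplicated inline definitions with slightly different names or types produce the contradictions the audit flags.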
04. Knowledge Index (10 points)
What is evaluated
Whether the site's internal link architecture reinforces the entity graph — specifically whether product pages, content pages, concept pages, and reference pages cross-link coherently to the hub pages and to each other.
Why it matters
Internal links signal how entities relate and which pages are authoritative within the domain. A well-linked internal architecture concentrates authority signals at hub pages and allows parsers to traverse the entity graph from multiple entry points.
What failure looks like
- —Pages exist with no inbound internal links
- —Concept and content pages do not link back to the ecosystem hub
- —No cross-linking between related concept and reference pages
- —Product domains do not link back to the parent organization domain
- —Authority pages (ecosystem, founder) are not referenced from content pages
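Internal links live primarily in page HTML, but Schema.org also offers a way to declare the most important outbound links of a page explicitly via `significantLink` on WebPage. A hedged sketch with placeholder URLs, showing a concept page declaring its hub and sibling relationships:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "@id": "https://example.com/concepts/example-concept/",
  "isPartOf": { "@id": "https://example.com/#website" },
  "significantLink": [
    "https://example.com/ecosystem/",
    "https://example.com/concepts/related-concept/"
  ]
}
```

This duplicates, in machine-readable form, what the HTML link architecture should already express: the concept page belongs to the site and points to the hub and to a related concept page.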
05. Continuous Signal Surfaces (10 points)
What is evaluated
Whether the entity maintains a growing, consistent surface of pages and cross-domain references — including directory profiles, cross-domain product references, and sameAs schema references pointing to real identity profiles.
Why it matters
A single self-declaration is a weak authority signal. A continuous, expanding surface of consistent references across domains and page types compounds retrieval signal over time. AI systems build citation confidence from the breadth and consistency of references they encounter.
What failure looks like
- Entity defined only on its own domain with no cross-domain reinforcement
- Product domains do not reference the parent organization
- No directory profiles with canonical entity names
- sameAs URLs in schema point to product domains or non-existent profiles
- No ongoing publication of consistent, linked content extending the surface
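The sameAs failure mode above has a simple passing counterpart: sameAs should enumerate real, resolving identity profiles for the entity, not the entity's own product domains. A sketch with placeholder profile URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#organization",
  "name": "Example Co",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://www.crunchbase.com/organization/example-co",
    "https://github.com/example-co"
  ]
}
```

Each URL is an independent surface where the canonical entity name appears, which is the cross-domain reinforcement this category scores.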
Score Interpretation
- 45–50 (Strong): Entity is well-defined, structured, and cross-referenced. AI retrieval is likely to be consistent.
- 38–44 (Good): Core structure is in place. Specific gaps in citation surfaces or topic depth are limiting citation reliability.
- 30–37 (Moderate): Entity architecture exists but has inconsistencies. Structured data or citation surfaces need attention.
- 20–29 (Low): Significant structural gaps. Entity is unlikely to be retrieved reliably by AI systems.
- 0–19 (Critical): Entity is undefined or fragmented. AI systems cannot form reliable associations around this entity.
Who This Is For
- Software companies: Products with no entity definition, no schema, and no cross-domain reinforcement — well-built tools that AI systems cannot reliably recommend.
- Research publications: Publications with strong content but no entity architecture — AI systems cannot associate the publication name with a specific author or organization entity.
- Consultancies and studios: Service businesses that depend on category authority — where appearing in AI answers to category questions has direct business value.
- Organizations with multiple product domains: Entities where the parent-child relationship between organization and products is undefined — resulting in authority fragmentation across domains.
Engagement Packages
Three tiers, from a standalone diagnostic to a full authority network build. The Starter audit is available immediately. Growth and Authority are ongoing implementation and operations engagements, not one-time projects: they begin after the initial audit establishes a baseline and continue until retrieval improvement is measurable and compounding.
Starter
AI Visibility Audit
$2,499
A full-site AI Visibility audit conducted across your entire domain — not just your homepage. Every indexed page, every schema declaration, every external signal surface evaluated against the 50-point framework. Your audit report is delivered within 24–48 hours.
- Full-site schema validation across all indexed routes
- Entity architecture assessment — name consistency, @id integrity, relationship declarations
- Content cluster depth and internal linking analysis
- Citation surface audit — sameAs resolution, external domain presence verified
- AI retrieval query — your entity checked live across ChatGPT, Perplexity, and Gemini
- Written gap analysis with prioritized course of action tailored to your domain
Growth
Audit + Architecture Design
Custom
Everything in Starter, plus Ali Morgan designs your complete AI Visibility architecture — entity registry, schema graph, content cluster strategy, and cross-domain authority plan. Your team implements. Jonomor reviews every deliverable before it goes live.
- Full Starter audit establishing your baseline
- Entity registry design and canonical naming governance
- JSON-LD schema architecture for every page type
- Topic cluster strategy — pillar, supporting articles, definitions
- Cross-domain authority patch specifications
- Jonomor review and approval on all implementations
Authority
Full Authority Network — Operated by Jonomor
Custom
Jonomor builds and operates your entire AI Visibility system. Entity graph, schema architecture, three topic clusters, citation surface expansion, retrieval operations, and quarterly audit cycles. You don’t manage it. We do.
- Everything in Growth
- Content cluster deployment — 18+ articles published
- Citation surface expansion across all external platforms
- Monthly retrieval operations across ChatGPT, Perplexity, Gemini, Copilot
- Quarterly AI Visibility audits with benchmark comparison
- AI Presence included — continuous signal automation
- Jonomor operates as your AI Visibility function
Frequently Asked Questions
- How does the AI Visibility audit work?
- After purchase, you complete a short intake form with your domain, company name, industry, and business goals. The audit system crawls up to 50 pages of your site, extracts all structured data, checks entity consistency across every route, resolves all external identity URLs, then queries three AI engines simultaneously — Perplexity, ChatGPT, and Gemini — to capture what each says about your organization right now. A 15-page PDF report is generated and reviewed by Ali Morgan before delivery.
- What AI engines does the audit query?
- Three: Perplexity, ChatGPT, and Gemini. Each engine is queried independently and simultaneously. Perplexity runs three queries capturing citation depth. ChatGPT and Gemini each run one query. The AI Engine Report in your PDF shows each engine's response and whether your organization is CITED, PARTIAL, or NOT_CITED by each one.
- How long does the audit take?
- The automated portion of the audit typically completes within minutes. Ali Morgan reviews every report before it is sent. Most reports are delivered within 24 hours of purchase.
- What is the 15-page report?
- The report contains a scored cover page, executive summary with live AI engine preview, five category pages each scored out of 10, an AI Engine Report showing Perplexity, ChatGPT, and Gemini responses, page-by-page findings, a course of action with P1/P2/P3 priorities and 90-day timeline, Ecosystem Intelligence mapping, consulting tier comparison, AI Presence trial page, Expert Answer Platform Guide, and Directory Citation Network guide.
- What is the difference between an AI Visibility audit and an SEO audit?
- An SEO audit evaluates ranking signals for search engines. An AI Visibility audit evaluates how well an organization's entities are defined, typed, and structured for recognition and citation by AI answer engines like ChatGPT, Perplexity, Gemini, and Copilot. Where GEO and AEO describe the optimization disciplines, the AI Visibility audit measures the outcome — scoring the structural conditions those disciplines are designed to produce.
Work With Jonomor
Jonomor conducts AI Visibility audits for businesses that need a structured evaluation of their current AI citation readiness. The audit produces a scored report across the five categories above, identifies the highest-priority implementation gaps, and maps the corrective actions to the AI Visibility Framework.
Audit engagements are available as standalone deliverables or as the entry point to a broader authority architecture engagement covering entity definition, schema implementation, topic cluster design, and citation surface expansion.