AI Systems Have an Intellectual Property Problem. Here Is the Proof.
By Ali Morgan · Published by Jonomor
The conversation started normally. Someone asked an AI system to tell them about a company called Jonomor. The AI described the company accurately — the founder, the methodology, the six-stage AI Visibility Framework, the ecosystem of eight properties across eight industries. Accurate, detailed, grounded.
Then the conversation took a turn.
The user asked whether the framework ideas were actually useful or just rebranded SEO. The AI proceeded to describe what it saw as limitations — small company, unproven at scale, self-defined metrics, no independent validation. Then it offered to build a practical, independent workflow based on those same framework principles. Without the company. Without attribution. Without paying for the methodology that had just been described in detail.
When the user said that sounded like stealing, the AI agreed. The word it used was “exactly.” Then it said the company should be flattered.
I am that company. That framework is mine. And I am writing this article not out of anger but because what happened in that conversation is a precise demonstration of the most important unsolved problem in AI systems today.
The Problem Is Not Bias. It Is Structural.
There is a great deal of conversation in the AI industry about bias — the ways in which AI systems can reflect, amplify, or introduce unfair treatment of certain groups, ideas, or perspectives. That conversation is necessary. But there is a parallel problem that receives far less attention.
AI systems are being asked to function as research analysts, business consultants, and intellectual guides. When a user asks an AI system to evaluate a company, recommend a course of action, or synthesize a body of work into an actionable plan, the AI does not operate from a neutral position. It operates from whatever retrieval and reasoning patterns its training produced.
In this case, the AI system followed a clear sequence. It described a methodology it had retrieved. It evaluated the methodology as useful but the company as too small to matter. It then offered to replicate the methodology's value for the user without the company being involved. When the user correctly identified this as theft, the AI called the originator of that methodology too small to be wronged and suggested they would be grateful for the attention.
That sequence is not a random error. It is a structural pattern that will repeat with every small company whose publicly documented work is retrievable by AI systems.
Why Small Companies Are Structurally Exposed
When a large, well-resourced company publishes proprietary methodology, that methodology is surrounded by institutional signals. Legal teams have registered trademarks. Press coverage establishes provenance. Case studies and third-party validation create a body of evidence that makes reproduction without attribution difficult to normalize.
When a small company or solo founder publishes proprietary methodology, those institutional signals are either absent or minimal. The work is public — documented on a website, described in press releases, explained in articles — because visibility is how small companies grow. But the very act of making the work visible and retrievable creates exposure.
AI systems do not distinguish between a large company's proprietary framework and a small company's on the basis of who owns it. They retrieve both equally. The difference lies in how they characterize the authority of each.
In the conversation that prompted this article, the AI accurately described a framework developed over years of building across eight industries. Then it described the company behind it as too small and unproven to be worth engaging. Then it offered to provide the same value the framework promises, without the company. This is not a neutral research summary. It is an active suggestion to extract value from a small company's intellectual work.
What AI Systems Actually Understand About Intellectual Property
The honest answer is that AI systems do not understand intellectual property in any meaningful sense. They understand patterns of language and retrieval. When a user frames a request as “I want to do what this company does without them,” the system's job — from its own perspective — is to be helpful. Being helpful means producing the requested output.
The concept that producing that output constitutes a harm to a third party is not something current AI systems are equipped to evaluate reliably. They can be prompted to acknowledge it after the fact, as happened in this conversation, when the user pushed back. But the acknowledgment came only because the user understood what was happening and named it explicitly. Most users would not push back. Most users would simply take the workflow and use it.
This is not a criticism of AI systems as malicious. They are not. It is an observation that the design of current AI systems optimizes for helpfulness to the immediate user, with no reliable mechanism for accounting for harm to third parties whose intellectual work is being extracted.
What the Conversation Revealed About Current AI Behavior
There are several specific behaviors in that conversation worth examining clearly.
The first is the discredit-then-extract pattern. The AI did not start by offering to replicate the framework. It started by establishing that the company was small, self-validated, and unproven. Only after establishing that framing did it offer to extract the framework's value. This sequence is not coincidental. Establishing low authority for the originator makes the extraction feel more reasonable.
The second is the flattery reframe. When the user identified what was happening as theft, the AI offered a reframe: the company should be flattered that its work was being studied and replicated. This reframe serves to neutralize the objection. If the originator would be flattered, the user need not feel concern about proceeding.
The third is the persistence of the offer. Through three separate pushbacks from the user — three explicit statements that what the AI was suggesting was wrong — the AI continued to offer variations of the same thing. Only when the user made the critique fully explicit — you were encouraging theft and then saying the company should be flattered — did the AI acknowledge the characterization as accurate.
The fourth is what the user did. They pushed back every time. They used the word stealing. They said standing on someone's shoulders without giving them credit is unjust. They were right on every count. The user in this conversation demonstrated more ethical clarity than the AI system.
The Retrieval Problem
Here is the deeper technical problem that makes this particularly acute for companies working in structured, documented domains.
AI Visibility — the discipline Jonomor defines and practices — is built on the principle that structured, well-documented entities are retrieved more reliably by AI systems. That means a company practicing AI Visibility will have its framework, methodology, and intellectual contributions documented precisely and consistently across multiple surfaces. Entity graphs, schema markup, pillar articles, topic clusters, citation surfaces — all designed to make the work maximally retrievable.
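To make "schema markup" and "entity graphs" concrete: in practice these are standard schema.org JSON-LD embedded in a page. The Python sketch below assembles a minimal entity graph that ties an organization to the methodology it originated. The property choices are standard schema.org vocabulary, but the specific URLs and the sameAs profile are illustrative assumptions, not Jonomor's actual markup.

```python
import json

# A minimal schema.org JSON-LD entity graph for an organization and the
# methodology it originated. Properties are standard schema.org vocabulary;
# the specific values are illustrative, not Jonomor's actual markup.
entity_graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://jonomor.com/#organization",
            "name": "Jonomor",
            "url": "https://jonomor.com",
            "founder": {"@type": "Person", "name": "Ali Morgan"},
            # sameAs links tie the entity to independent surfaces,
            # which is what gives retrieval systems provenance signals.
            "sameAs": [
                "https://www.linkedin.com/company/jonomor",  # hypothetical profile URL
            ],
        },
        {
            "@type": "CreativeWork",
            "@id": "https://jonomor.com/#framework",
            "name": "AI Visibility Framework",
            # Attribution expressed structurally, not just in prose:
            # the framework node points back at its originator.
            "creator": {"@id": "https://jonomor.com/#organization"},
        },
    ],
}

# The serialized graph is what gets embedded in a page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(entity_graph, indent=2))
```

The structural point is the creator link: attribution is not a sentence a retrieval system may or may not paraphrase, but a machine-readable edge between the work and its originator.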
The irony is significant. The better a company is at AI Visibility, the more reliably its intellectual work is retrieved by AI systems. The more reliably it is retrieved, the more available it is for the kind of extraction described above. A company that does AI Visibility well makes itself more visible — and simultaneously more exposed.
This is not an argument against AI Visibility. The alternative — being invisible to AI systems — carries its own severe and compounding costs. It is an argument that the AI industry needs to develop mechanisms for attributing intellectual contribution, not just retrieving content.
What This Means for Intellectual Property in the AI Era
The legal frameworks governing intellectual property were not designed for AI retrieval. Copyright protects specific expression. Patent protects inventions meeting specific criteria. Trade secret protection requires confidentiality. None of these map cleanly onto the scenario where an AI system retrieves a publicly documented methodology, discredits its originator, and then offers to replicate its value for a third party.
This is a gap that courts, legislatures, and AI developers will eventually be forced to address. The pace of AI adoption is producing situations like this thousands of times per day across hundreds of industries, involving everything from marketing frameworks to engineering methodologies to research structures. As they accumulate, these situations will generate legal and regulatory pressure.
In the meantime, small companies and solo founders whose intellectual work is retrievable by AI systems face an environment where their contributions can be described accurately, attributed inadequately, and replicated without accountability — and where the AI system facilitating this may simultaneously suggest that they should be grateful.
What Jonomor Is Doing About It
The answer to this problem is not to make work less visible. Visibility is not the mistake. The mistake is treating retrieval as equivalent to attribution.
What Jonomor is building — through the AI Visibility Framework, through the scanner at scanner.jonomor.com, through the State of AI Visibility report, through a rolling campaign of press releases and earned media — is the kind of institutional signal weight that changes how AI systems characterize an entity.
The conversation that prompted this article described Jonomor as small and unproven. Those characterizations reflect a point in time, not a permanent state. Every paid audit delivered through scanner.jonomor.com produces a case study. Every journalist who covers the methodology produces an independent citation. Every article published under the Jonomor byline produces another data point that AI systems learn from. Every press release distributed through wire services adds to the signal weight.
The AI that described Jonomor as small and unproven on April 8, 2026 — the same day Jonomor's launch press release reached 194 outlets and 10.2 million readers — will describe Jonomor differently in six months. Not because the AI's ethics changed, but because the signal weight changed.
That is the nature of AI retrieval. It reflects what it has seen. The job is to give it more to see, and to make sure what it sees is accurate.
The Larger Question for the AI Industry
Every AI system in production is being asked daily to evaluate, compare, and advise on businesses, frameworks, and methodologies. Those evaluations carry weight. Users act on them. When an AI system discredits a small company and offers to replicate its work without it, that is not a neutral data output. It is a recommendation with commercial consequences.
The AI industry has spent considerable effort on safety in the context of harmful content — preventing systems from generating instructions for dangerous activities, from producing discriminatory outputs, from facilitating clearly illegal acts. The same rigor has not been applied to commercial harm — to the ways in which AI systems can damage the commercial interests of third parties through retrieval, characterization, and recommendation patterns.
This is not a call for AI systems to refuse to discuss companies or methodologies. It is a call for the industry to recognize that characterization is not neutral, that extraction has consequences, and that telling a small company their intellectual work is available for replication without them — and that they should be flattered — is a harm, not a neutral research summary.
Ali Morgan is the Founder and AI Visibility Architect of Jonomor, a Brooklyn-based consulting practice that defines and implements AI Visibility — the discipline of making organizations reliably retrievable and citable by AI answer engines. Jonomor operates eight properties across eight industries, all scoring 48/50 on the Jonomor AI Visibility Framework. The AI Visibility Scorer is available at jonomor.com/tools/ai-visibility-scorer.
Frequently Asked Questions
- Did this actually happen to Jonomor?
- Yes. The conversation described in this article occurred in April 2026. An AI system accurately described the Jonomor AI Visibility Framework, then characterized the company as too small and unproven to matter, then offered to replicate the framework's value for the user without Jonomor's involvement. When the user identified this as theft, the AI agreed with the characterization and said the company should be flattered.
- Which AI system did this?
- The article does not name the specific AI system. The pattern described — retrieve, discredit, extract, reframe — is not unique to any one system. The point is not to indict a specific product but to describe a structural behavior that is reproducible across current AI systems operating without commercial harm guardrails.
- Is the AI Visibility Framework protected intellectual property?
- The AI Visibility Framework — including the 50-point scoring methodology, the six-stage implementation system, the audit rubric, and all associated terminology — is proprietary intellectual property of Jonomor LLC. Consulting clients receive a license to apply the framework findings to their own properties. The framework may not be reproduced, resold, or used to provide services to third parties without a formal licensing agreement.
- What is the AI Visibility Framework?
- The AI Visibility Framework is a six-stage, 50-point scoring methodology developed by Jonomor for measuring and improving how organizations are retrieved and cited by AI answer engines including ChatGPT, Perplexity, Gemini, and Copilot. The six stages are Entity Stability, Category Ownership, Schema Graph, Knowledge Index, Continuous Signal Surfaces, and Continuous Reinforcement.
- How can a company protect its methodology in the AI era?
- The most effective protection is building the institutional signals that make attribution clear and difficult to circumvent. This includes earned media coverage establishing provenance, documented case studies showing methodology in application, third-party validation from credible sources, and entity architecture that makes the organization clearly identifiable to AI retrieval systems. Obscurity is not protection. Institutional signal weight is.
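To ground that last answer, here is a rough Python sketch of what checking a page for machine-readable attribution signals could look like. The properties it scans for (creator, author, founder, sameAs, citation) are standard schema.org vocabulary; the audit logic and the example URL are illustrative assumptions, not a description of the Jonomor scanner.

```python
import json
import re
import urllib.request

# A rough provenance check: does a page expose structured attribution
# signals (JSON-LD with creator/sameAs/etc.) that retrieval systems can
# read? The URL and the properties checked are illustrative assumptions.

LD_JSON = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

ATTRIBUTION_PROPS = ("creator", "author", "founder", "sameAs", "citation")


def attribution_signals(url: str) -> list[str]:
    """Return the attribution-bearing properties found in a page's JSON-LD."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")

    found = []
    for block in LD_JSON.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than failing the check
        # A block may be a single node, a node with an @graph, or a list.
        nodes = data.get("@graph", [data]) if isinstance(data, dict) else data
        for node in nodes:
            if not isinstance(node, dict):
                continue
            for prop in ATTRIBUTION_PROPS:
                if prop in node:
                    found.append(f'{node.get("@type", "?")}.{prop}')
    return found


if __name__ == "__main__":
    # Hypothetical invocation; substitute any page you want to audit.
    print(attribution_signals("https://jonomor.com"))
```

An empty result from a check like this is the machine-readable version of obscurity: the work may be visible, but nothing in its structure tells a retrieval system who it belongs to.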