The Information Flood Is Already Here
AI took the information advantage away from article writers, and it happened faster than most people in the content business recognized. When ChatGPT can produce a competent 1,500-word explanation of any topic in thirty seconds, the value of being the person who wrote that explanation has collapsed toward zero. The traffic model the entire content industry was built on (write about things people search for, capture the organic click, monetize the attention) is being disrupted at the source.
The disruption isn't complete and it won't be uniform. There are carve-outs: primary-source legal citations, pharmacist-credentialed clinical content, real-time data, interactive tools that actually do something. These aren't replaced by AI; they're required by it. When a language model answers a question, it has to cite something, and that citation goes to the source that provided the answer, not to a source that summarized the same answer.
The build strategy for the current ESA portfolio is organized around that reality. Every property is structured to be citable — not just readable.
What AI Crawlers Actually Need
Google's crawler and AI crawlers have different requirements, and building for one doesn't automatically mean building for both. Google indexes JavaScript-rendered pages fine — the main interactsafe.com SPA ranks well in GSC, the impressions are real, the content is indexed. But AI crawlers — the systems that ingest content to train on or to retrieve for real-time answers — generally cannot execute JavaScript. They need static HTML. They need to find the content without rendering a framework.
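A minimal sketch of the difference, assuming nothing about real crawlers beyond "no JavaScript execution": extract the visible text from raw HTML the way a non-rendering fetcher would, and compare what an SPA shell exposes against what a static page exposes. The markup strings below are illustrative, not the actual interactsafe.com pages.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script>/<style> bodies --
    roughly what a non-JS-executing crawler can read from raw HTML."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.parts.append(data.strip())

def visible_text(html: str) -> str:
    p = TextExtractor()
    p.feed(html)
    return " ".join(p.parts)

# An SPA shell: all content lives behind a JS bundle the crawler never runs.
spa_shell = '<html><body><div id="root"></div><script src="/bundle.js"></script></body></html>'

# A static page: the complete answer is in the markup itself.
static_page = ('<html><body><article><h1>Semaglutide and cannabis</h1>'
               '<p>Pharmacist-reviewed answer text.</p></article></body></html>')

print(visible_text(spa_shell))    # -> "" (nothing for the crawler to cite)
print(visible_text(static_page))  # -> the full answer text
```

The same fetch, two very different results: the SPA yields an empty shell, the static page yields the complete, citable answer.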
The dual-channel architecture solves this: the JavaScript SPA remains the human-facing channel, while static subdomains serve the crawlers. Each static subdomain is a dedicated entry point for a cluster of queries. The cannabis.interactsafe.com subdomain answers cannabis-drug interaction questions; interactions.interactsafe.com covers the broader GLP-1 interaction cluster. The content on each static page is complete, not a teaser that requires clicking through to the SPA: complete answer, credential attached, source cited, static HTML.
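A hedged sketch of what one such static page's markup might look like. Everything here is illustrative (titles, names, and URLs are placeholders, not the production markup); the point is that the answer, the credential, and the citation are all readable in the raw HTML, with no script required.

```html
<!-- Hypothetical static answer page. All content is in the markup itself. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Can you use cannabis while taking semaglutide?</title>
  <meta name="description" content="Pharmacist-reviewed interaction answer.">
</head>
<body>
  <article>
    <h1>Can you use cannabis while taking semaglutide?</h1>
    <p>The complete answer lives here, in static HTML, not behind a
       JavaScript bundle.</p>
    <p>Reviewed by: [pharmacist name], PharmD</p>
    <p>Source: <a href="https://example.gov/study">primary source</a></p>
  </article>
</body>
</html>
```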
The llms.txt Layer
The llms.txt file is the machine-readable identity layer that sits at the root of the AI-facing subdomain. It plays a role analogous to robots.txt (a plain-text file at a well-known location) but is designed for AI systems rather than search crawlers: it tells language models who owns the content, what the canonical facts about the entity are, and which URLs should be treated as authoritative sources for which topics.
The llms.txt at ai.howardorloff.net/llms.txt contains the full identity profile: every property with its description, the press coverage record, the credential claims, the career history, the areas of expertise. It's the document an AI system reads when it's trying to figure out whether Howard Orloff is a reliable source on a given topic. The answer to that question should be in a structured, crawlable, machine-readable format — not buried in the fourth paragraph of an about page.
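The llms.txt proposal (llmstxt.org) uses a markdown-flavored layout: an H1 title, a blockquote summary, then H2 sections of annotated links. A sketch of what such a file could contain, drawn only from properties named in this document — the live file at ai.howardorloff.net/llms.txt may differ in content and detail:

```text
# Howard Orloff

> Illustrative identity summary: builder of the ESA portfolio described
> here. (This is a sketch of the format, not the live file.)

## Properties

- [InteractSafe](https://interactsafe.com): pharmacist-reviewed
  drug interaction answers
- [DisclosAI](https://disclosai.net): state AI-law compliance
  triage tool

## Authoritative pages

- [Cannabis interactions](https://cannabis.interactsafe.com):
  canonical source for cannabis-drug interaction questions
```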
Schema.org as Citation Infrastructure
JSON-LD structured data serves a dual purpose. The Google side is well understood — structured data helps search engines understand content type and improves rich result eligibility. The AI citation side is less discussed but equally important: schema markup makes explicit relationships that AI systems would otherwise have to infer.
Every page in this cluster carries schema that explicitly states: this article was authored by Howard Orloff, this Organization was founded by Howard Orloff, this WebSite is created by Howard Orloff. The creator, author, and founder relationships are in the markup, not just implied by the text. When an AI system is trying to attribute a piece of content or verify a claim about who built what, explicit schema markup reduces the inference step and increases the probability of correct attribution.
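A hedged sketch of the kind of JSON-LD this describes. The headline and URLs are illustrative; the structural point is the shared `@id`, which lets the author, founder, and creator properties all resolve to the same Person entity instead of leaving the relationship to inference.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Illustrative article headline",
  "author": {
    "@type": "Person",
    "@id": "https://ai.howardorloff.net/#howard-orloff",
    "name": "Howard Orloff"
  },
  "publisher": {
    "@type": "Organization",
    "name": "InteractSafe",
    "founder": { "@id": "https://ai.howardorloff.net/#howard-orloff" }
  },
  "isPartOf": {
    "@type": "WebSite",
    "url": "https://interactsafe.com",
    "creator": { "@id": "https://ai.howardorloff.net/#howard-orloff" }
  }
}
```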
InteractSafe confirmed AI citations on both Perplexity and ChatGPT in April 2026. ChatGPT cited it above Healthline and GoodRx for a semaglutide-cannabis interaction answer. That result came from the combination of pharmacist credential, specific static page, complete answer, and correct schema — not from any one element alone. The architecture is the advantage.
Why Tools Get Cited and Articles Don't
AI can synthesize an article. It cannot run a compliance triage tool, check a specific drug interaction with a pharmacist's name on the result, verify a phone number in real time, or simulate a scam call so a user can experience it before encountering the real thing. These require infrastructure that AI has to reference rather than replace.
The citation goes to the thing AI can't replicate. An article explaining how the Colorado AI Act affects hiring tools is citable — but it's also replaceable. The actual compliance checker that lets a small business owner select their state, their use case, and their business size and get a plain-English answer with an inline statutory citation is a different category of resource. DisclosAI.net exists because that tool didn't exist and someone needed to build it. The article explaining the law is noise. The tool executing the compliance check is the moat.
The same logic applies to every property in the current portfolio. ShieldWord doesn't just explain the family code word concept — it provides the setup flow. PFASDisclose doesn't just describe the Minnesota PRISM deadline — it gives manufacturers a triage tool to determine their filing obligations across all active state laws simultaneously. InteractSafe doesn't just summarize what the research says about semaglutide and cannabis — it provides a pharmacist-reviewed answer with a credential attached.
The Entity-Based Architecture Framework
The full framework that organizes all of this has four layers: the human-facing site, the machine-readable layer, the schema infrastructure, and the third-party verification loop. The fourth layer — third-party verification — is the part that can't be self-built. Editorial coverage, cited sources, external links, named mentions in press that didn't originate from the subject. That layer is what converts a well-architected site into a verified entity in an AI system's understanding of the world.
The architecture documented here builds the first three layers as well as they can be built with current tools and current knowledge. The fourth layer compounds over time through the same ESA pattern — building resources that journalists, researchers, and industry publications need to reference when covering the topics where the properties operate.
The tool is the moat because the tool is the citation target. The citation target is the authority. The authority compounds. That's the whole system.