Your board is about to ask: "What AI is running here, and what data is it seeing?"
Most mid-market leadership teams cannot answer that question. Their organizations have 40+ AI tools running across departments — most of them unsanctioned, many of them touching customer data, none of them inventoried. The Preside AI Governance™ assessment produces the inventory, the data-flow map, the framework-gap analysis, and the remediation roadmap.
What "Shadow AI" Actually Looks Like
Four scenarios happening in mid-market organizations right now
Composites drawn from our pattern intelligence — these are not hypothetical. They are typical of what an AI Governance™ inventory surfaces in the first two weeks.
A rep pastes a customer's full P&L into ChatGPT to draft a proposal
Customer-confidential financials enter a third-party LLM. The vendor's terms allow training on submitted data unless the user explicitly opts out (the rep didn't). The customer would have to be notified under their MSA.
Tools seen: ChatGPT (personal accounts), Claude.ai (personal), Gemini (personal Google accounts)
Production source code flowing through IDE-embedded copilots
20+ developers using AI code completion that sends code context to the vendor. Repo contains customer-specific configurations and embedded API keys. Some vendors disclose training-data usage; some don't.
Tools seen: GitHub Copilot, Cursor, Codeium, JetBrains AI, Tabnine
Support agents using AI summarizers on full customer ticket histories
An installed browser extension is summarizing customer email threads from your help-desk tool. PII flows through a vendor nobody on the IT side has vetted. The extension's terms permit "anonymized analytics."
Tools seen: Various Chrome/Edge extension AI assistants, Otter.ai, Fireflies.ai
Six generative-AI subscriptions, three duplicate use cases, zero audit trail
Departments procured their own AI tools via expense reports. Some duplicate functionality. None inventoried by IT. The next SOC 2 cycle will ask about all of them — and the answers don't exist.
Tools seen: Jasper, Copy.ai, Anyword, Notion AI, Synthesia, ElevenLabs, Runway
Why It's Become Board-Level
The frameworks have arrived. The board questions have started.
In the last 18 months, AI tools have spread organically through every department of every organization. There is no "AI deployment." There is a steady accretion of tools, each adopted because it solved someone's immediate problem.
Now the questions arrive from above. The board wants to know what's running. Counsel wants to know what data is exposed. The auditor wants documented AI controls for the next SOC 2 cycle. The answer to all three is the same artifact: a sanctioned inventory, a data-flow map, and a framework-aligned posture.
NIST AI RMF. Released Jan 2023. Voluntary today, expected to anchor future U.S. AI regulation. Already referenced in federal procurement.
EU AI Act. Effective Aug 2024 with phased enforcement through 2027. Extraterritorial reach. Penalties up to 7% of global revenue for prohibited-AI violations.
SOC 2. Auditors are asking about AI controls in 2025–2026 audit cycles. Some Trust Services Criteria now include explicit AI scope.
U.S. state and local laws. Bias-audit requirements for hiring/HR AI, consumer disclosure rules, sectoral AI rules. The patchwork is growing fast.
The Assessment
Four dimensions, one defensible posture
Designed to produce the same artifact that auditors, GCs, and boards all need — a single sanctioned inventory of AI in your environment with risk and remediation classified.
AI Inventory
Every AI tool in use — sanctioned and shadow. SaaS platforms, embedded AI features, browser extensions, code copilots, no-code automation. Categorized by department, use case, and data sensitivity.
Data Exposure
What data is flowing into which AI tools. PII, financial records, customer data, IP. Mapped against vendor data-handling commitments and training-data clauses. Identifies the contractual exposure.
Framework Gap Analysis
Current state mapped against NIST AI RMF and EU AI Act. Specific control gaps identified with citation language. Audit-ready posture documentation for the next SOC 2 / ISO cycle.
Remediation Roadmap
Prioritized actions: which shadow AI tools to sanction or block, which data flows to restrict, which controls to add. Sequenced by risk reduction per unit of effort.
Delivery Options
Two ways to engage
Direct from Preside
Preside delivers the assessment under our brand. Methodology, tooling, and reporting from one source. Right for organizations that want to engage directly with the methodology owner.
Direct Engagement →

Through a Partner
Co-branded delivery via a Preside partner already advising your organization. Reports read: Prepared by [Partner] · Powered by Preside AI Governance™.
Partner Program →

Get an answer to the question your board is about to ask.
Full inventory. Data exposure map. Framework alignment. Remediation plan.