4 Realities of AI Governance
Mark Nyquist, Senior Director and Head of Global Compliance at Epicor, outlines four key realities to keep in mind about AI governance. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

In global compliance, our job isn’t just to say yes or no; it’s to know what the company is agreeing to and which risks we are accepting. That’s the new compliance frontier: governing AI before it governs us. There are four realities you can anchor your AI governance on that will enable you to move fast without losing control:
- Accountability follows the data.
- Third-party risk is AI risk (paid vs. free matters).
- Set clear data boundaries and minimize sharing.
- Governance is enablement; keep it lightweight and consistent.
Reality #1: Accountability Follows the Data
AI has not replaced traditional security work; it has layered new obligations on top of it. We still have to protect our data and maintain sovereign assurance through independent audit reports, whether that’s SOC, PCI, ISO, or another standard. At the same time, we now also have to guide our own teams and vendors on the use of powerful AI tools. That’s where accountability begins: with the human or process that touches the data.
When the rules are clear, people move faster and safer; when directives are fuzzy, everything downstream is too—so we keep policy short, plain, and visible. Publish the basics in plain language: what data can be used, which tools are permitted, and what must never leave the environment. Then pair those rules with transparency about where data goes, how it’s protected, how long it’s kept, and whether it’s used to train any model.
Ownership should be explicit because clarity at the top prevents confusion downstream. When everyone knows who is accountable for the data and the decisions tied to it, governance becomes practical instead of theoretical. That clarity matters most when the stakes are high: sensitive decisions deserve extra friction. If a model influences customer eligibility, HR outcomes, pricing, or similar high-impact areas, it requires a human in the loop and a short impact assessment kept on file. This isn’t bureaucracy for its own sake; it’s the paper trail that proves you thought about fairness, explainability, and harm before deployment.
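To make that gate concrete, here is a minimal sketch in Python of how a team might block deployment of high-impact use cases until both conditions are met. The category names and fields are hypothetical illustrations, not a prescribed framework:

```python
from dataclasses import dataclass, field

# Hypothetical high-impact areas; adapt to your own risk taxonomy.
HIGH_IMPACT_AREAS = {"customer_eligibility", "hr_outcomes", "pricing"}

@dataclass
class UseCase:
    name: str
    impact_areas: set = field(default_factory=set)
    human_in_the_loop: bool = False
    impact_assessment_on_file: bool = False

def ready_to_deploy(use_case: UseCase) -> bool:
    """High-impact use cases need a human reviewer and a filed impact assessment."""
    if use_case.impact_areas & HIGH_IMPACT_AREAS:
        return use_case.human_in_the_loop and use_case.impact_assessment_on_file
    return True

# A pricing model stays blocked until both conditions are met.
pricing_model = UseCase("dynamic_pricing", impact_areas={"pricing"})
print(ready_to_deploy(pricing_model))  # False
```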
Reality #2: AI Vendors Should Be Part of Your Third-Party Risk Program
When evaluating vendors, treat a paid enterprise tier and a free or consumer tier as different propositions. Enterprise offerings usually come with data processing agreements, the ability to disable model training, and configurable retention. If a product is free, the business model often relies on extracting value from your data or metadata to make the service possible.
Unless the contract says otherwise, assume prompts, outputs, or telemetry may be retained for “service improvement.” Fine-print phrases like “continuous improvement” often mean exactly that: your inputs and outputs can be kept and used to tune systems unless you opt out.
To keep reviews consistent, leverage resources like the NIST AI Risk Management Framework. It provides practical checklists for transparency, accountability, and monitoring. Remember the AI supply chain: your vendor depends on model providers, plugins, and open-source components; your risk includes their dependencies, so cover these in your TPRM process. These are the three questions to ask every provider before switching anything on:
- How will you use our data? You’ll need to know whether it is only to provide the service, or also to improve products or train models. It’s also important to see if you can opt out or shorten retention for prompts and logs.
- Where will our data live, and for how long? Ask about storage locations, cross-border transfers, and your rights to export and delete.
- What protections and assurances will you provide? Is independent assurance (e.g., SOC 2, ISO 27001) available?
Treat the answers as operating conditions, not marketing copy. If commitments are vague, the risk becomes yours by default. For example, a team wanted a free summarization plugin, but the vendor couldn’t confirm retention or training posture. We chose an enterprise alternative with no-training mode and tenant isolation—same productivity win, materially lower exposure.
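One lightweight way to treat the answers as operating conditions is to record them in a structured review your TPRM process can check. The sketch below is illustrative only; the field names and thresholds are assumptions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class VendorAIReview:
    """Answers to the three questions, recorded as conditions of use."""
    vendor: str
    uses_data_for_training: bool   # Q1: how will you use our data?
    can_opt_out_of_training: bool
    retention_days: int            # Q2: where will our data live, and for how long?
    storage_regions: list
    independent_assurance: list    # Q3: e.g., ["SOC 2", "ISO 27001"]

def acceptable(review: VendorAIReview) -> bool:
    # Illustrative thresholds; your own policy will differ.
    return (
        (not review.uses_data_for_training or review.can_opt_out_of_training)
        and review.retention_days <= 90
        and len(review.independent_assurance) > 0
    )

summarizer = VendorAIReview(
    vendor="ExampleSummarizer",   # hypothetical vendor
    uses_data_for_training=True,
    can_opt_out_of_training=True,
    retention_days=30,
    storage_regions=["EU"],
    independent_assurance=["SOC 2"],
)
print(acceptable(summarizer))  # True under these illustrative thresholds
```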
Reality #3: Clear Data Boundaries Help Minimize Sharing
Boundaries are the difference between safe speed and reckless speed. Start by defining a short set of data types that must never be pasted into external tools: regulated PII, confidential customer data, unreleased financials, source code, or merger and acquisition materials. Map the rest into simple classes (public, internal, sensitive) and tie each class to approved tools and use cases.
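As a sketch of what “tie each class to approved tools” can look like in practice (the class and tool names here are placeholders, not recommendations):

```python
# Hypothetical mapping of data classes to the tools approved for them.
APPROVED_TOOLS = {
    "public":    {"enterprise_llm", "search_assistant"},
    "internal":  {"enterprise_llm"},
    "sensitive": set(),  # sensitive data stays inside approved internal systems only
}

# Data types that must never be pasted into external tools.
NEVER_SHARE = {"regulated_pii", "confidential_customer_data",
               "unreleased_financials", "source_code", "m_and_a_materials"}

def is_allowed(data_class: str, tool: str) -> bool:
    """Return True only if the tool is approved for this data class."""
    if data_class in NEVER_SHARE:
        return False
    return tool in APPROVED_TOOLS.get(data_class, set())

print(is_allowed("internal", "enterprise_llm"))   # True
print(is_allowed("sensitive", "enterprise_llm"))  # False
```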
Education is critical here. Publish short, practical guides and host brief sessions that teach teams how to use AI responsibly—what data is off-limits, how to anonymize inputs, and why governance matters. When people understand the “why,” they follow the “how” more consistently.
We coach teams to replace direct identifiers, such as proper names, with anonymized samples and to summarize rather than copy-paste data into an AI tool. If you must send data out, insist on no training, short retention, and private endpoints or tenant isolation. Data boundaries also help teams say yes more often; when people know exactly what’s allowed, they don’t waste time guessing, and the policy becomes an enabler rather than a veto. They are the difference between safe speed and reckless speed.
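A minimal illustration of the “replace direct identifiers before anything leaves the environment” habit, assuming a very simple pattern list (real anonymization needs a vetted PII tool, not two regexes):

```python
import re

# Illustrative patterns only; production redaction should use a proven PII library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace direct identifiers with placeholders before text leaves the environment."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach the customer at jane.doe@example.com or 555-010-4477."))
# Reach the customer at [EMAIL] or [PHONE].
```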
Reality #4: Governance Is Enablement
The simplest programs are the ones people will actually follow. With a steady stream of new tools, a lightweight review keeps us responsive without letting sprawl outrun our controls. For example, publish a list of vetted tools and make it easy to request reviews of new tools and technologies.
Additionally, publish a brief policy in plain English and pair it with a short intake form that captures use cases, including the problem, the data involved, and the expected output. Review vendors with an AI-aware lens and document the conditions of approval (SSO, logging, export limits, retention settings). The goal is to make the safe path the easy path. The litmus test for a healthy program is confidence: employees should feel they can use approved tools without second-guessing whether it’s safe to do so. Leaders should see the value and time saved, including fewer manual steps.
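One way to keep the intake form and the conditions of approval consistent is to store them as simple structured records; the fields below are assumptions for illustration, not a prescribed template:

```python
from dataclasses import dataclass, field

@dataclass
class AIIntakeRequest:
    """Short intake form: the problem, the data involved, the expected output."""
    requester: str
    problem: str
    data_involved: str   # e.g., a class from your data classification scheme
    expected_output: str

@dataclass
class ApprovalRecord:
    """Conditions of approval documented alongside the vendor review."""
    tool: str
    sso_required: bool = True
    logging_enabled: bool = True
    export_limits: str = "no bulk export"
    retention_setting: str = "30 days"
    notes: list = field(default_factory=list)

request = AIIntakeRequest(
    requester="support_team",
    problem="summarize ticket threads",
    data_involved="internal",
    expected_output="short summaries for agents",
)
approval = ApprovalRecord(tool="enterprise_llm", notes=["no-training mode enabled"])
```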
Good governance isn’t about saying no to AI; it’s about keeping control of your data, your commitments, and your accountability so the organization can say yes more often, with less worry.

