Secure Generative AI Starts With the Work: A Human Risk Playbook That Protects What Matters Without Slowing People Down

Nicole Jiang, co-founder and CEO at Fable Security, provides a “human risk playbook” to demonstrate why secure generative AI use must start with the work it’s being used for. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

Nobody rolled out generative AI formally. It showed up one day, and people started using it in a big way. They started drafting emails faster. Then they started summarizing their Zooms. All of them. Then documents, then contracts, then code. The output was terrible, then it was acceptable. Now it’s pretty good. Soon it’ll be great. The time savings are real.

Now the question isn’t whether employees should use generative AI. They are using it. The real question is whether they use it in a way that protects customer trust, intellectual property, and your competitive advantage. Security teams struggle here because the risk is hard to discern. Is it risky to summarize key terms from a contract? To workshop messaging for your product roadmap? To hunt for bugs in code? Which data is sensitive, and which can be exposed? And is a paper policy the best place to codify this guidance?

Here’s the thing: Generative AI usage is not a people-are-your-weak-link problem. It is a behavior problem. And if you can see that behavior, understand its risk, and shape it, you can reduce your exposure and give people the tools to work safely without slowing productivity.

Teach People What “Sensitive” Actually Means

Most policies hinge on one vague word: sensitive. In the flow of work, employees make split-second decisions. “Can I paste this?” “Can I upload this?” If the answer feels unclear, they may default to speed. You can fix that. Don’t rely on your once-a-year training to send this message. Give employees a simple test they can run in their head before they hit paste:

  • If it is public information, it is generally fine.
  • If a competitor would benefit from seeing it, do not upload it.
  • And if you wouldn’t want it in the news, don’t share it.

That shorthand will prevent more mistakes than a long paper policy nobody reads. If you have the capability, label your data with clear markings that people understand, e.g., “Confidential—nonpublic financial data” or “Confidential—client data.”

Stop Betting on Paper Policies

Having a policy matters, but it won’t protect your data. Nobody knows what’s in it or even where to find it. Heck, most security teams don’t even know where to find their paper policy. People are using generative AI because they’re under the gun and trying to move fast, so just telling them they can’t upload isn’t practical. Giving people a viable alternative and making the secure option the easy one is the best way to put your policy into action.

See What Is Actually Happening

You cannot manage what you cannot see. Most organizations already have the signals. Identity logs show which users sign in to which applications. DLP flags sensitive content. Cloud telemetry shows uploads, downloads, sharing, and more. You should be able to answer these questions:

  • Which teams rely on generative AI for work?
  • Which tools are they using?
  • Are those tools approved?
  • Are people uploading or pasting content labeled “confidential” or from likely confidential sources?
  • Which roles repeat the same behaviors?

For example, imagine you detect that a marketing employee is using an unsanctioned AI tool. You see an upload event. Your DLP system has already classified that document as containing customer financial data. That is not a vague risk. That is a specific, observable behavior tied to specific data. With that visibility, you can respond in the moment with a targeted, relevant intervention.
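Under the hood, that correlation can be as simple as joining an upload event to a DLP verdict. Here is a minimal Python sketch; the event fields, document labels, and severity wording are illustrative assumptions, not any particular vendor’s schema:

```python
from dataclasses import dataclass

@dataclass
class UploadEvent:
    user: str
    tool: str
    tool_sanctioned: bool  # is this an approved tool?
    doc_id: str

# Hypothetical DLP classifications keyed by document ID.
DLP_LABELS = {
    "doc-123": {"customer-financial-data"},
    "doc-456": set(),  # nothing sensitive found
}

def assess(event: UploadEvent) -> str:
    """Turn raw signals into a specific, observable risk finding."""
    labels = DLP_LABELS.get(event.doc_id, set())
    if not event.tool_sanctioned and labels:
        return (f"HIGH: {event.user} uploaded {sorted(labels)} "
                f"to unsanctioned tool {event.tool}")
    if labels:
        return f"MEDIUM: sensitive data sent to sanctioned tool {event.tool}"
    return "LOW: no sensitive labels detected"

# The marketing example from above: unsanctioned tool + customer financial data.
print(assess(UploadEvent("marketing.user", "chatgpt-personal", False, "doc-123")))
```

The point of the sketch is that neither signal alone is actionable; the upload event plus the DLP label together make the behavior specific enough to intervene on.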

Focus on Behaviors, Not Tools

The AI tool landscape will keep changing, so chasing individual apps is a losing strategy. The risky behaviors, however, remain consistent:

  • Uploading internal documents for summarization
  • Pasting proprietary code into prompts
  • Including customer identifiers in drafts
  • Using personal accounts for work
  • Skipping redaction to save time

Pick the few behaviors that would materially harm your organization if exposed. Target those first.
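One way to keep the focus on behaviors rather than tools is to encode them as explicit rules that survive tool churn. A hypothetical sketch; the rule IDs and severity scores are invented for illustration:

```python
# Behavior rules, independent of any specific AI tool.
# IDs, descriptions, and severities are illustrative assumptions.
BEHAVIOR_RULES = [
    {"id": "upload-internal-doc",    "severity": 3,
     "desc": "Internal document uploaded for summarization"},
    {"id": "paste-proprietary-code", "severity": 3,
     "desc": "Proprietary code pasted into a prompt"},
    {"id": "customer-identifiers",   "severity": 2,
     "desc": "Customer identifiers included in a draft"},
    {"id": "personal-account",       "severity": 2,
     "desc": "Personal account used for work"},
    {"id": "skipped-redaction",      "severity": 1,
     "desc": "Redaction skipped to save time"},
]

def top_priorities(rules, n=2):
    """Return the n behaviors that would cause the most harm if exposed."""
    return [r["id"] for r in sorted(rules, key=lambda r: -r["severity"])[:n]]

print(top_priorities(BEHAVIOR_RULES))
```

New tools slot into the same rule set without rewriting detection logic, which is the practical payoff of targeting behaviors first.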

Intervene in the Moment

Most awareness programs are periodic: security teams roll them out once a year or every six months, so the content is generic and rarely relevant to a specific risk. If you can intervene at or near the moment of risk with guidance that is brief, specific, and actionable, you give yourself a fighting chance. Instead of “Do not share sensitive data,” say this:

“Hi, Sheila. It looks like this document contains client information. Before using an external AI tool, remove names and account numbers, or use the approved internal AI tool. Here’s how you get it. Contact us at [alias] if you have questions.”

That message works because it’s highly relevant, explains the risk, and gives the employee a concrete call to action.
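If you deliver these nudges programmatically, a simple template keeps every message brief, specific, and actionable. A sketch; the approved-tool name and contact alias are placeholders:

```python
# Illustrative intervention template; field names, the default tool name,
# and the contact alias are assumptions, not a real product's API.
INTERVENTION = (
    "Hi, {name}. It looks like this document contains {data_type}. "
    "Before using an external AI tool, remove {identifiers}, or use "
    "{approved_tool}. Contact us at {alias} if you have questions."
)

def nudge(name, data_type, identifiers,
          approved_tool="the approved internal AI tool",
          alias="security@example.com"):
    """Render a targeted, in-the-moment intervention message."""
    return INTERVENTION.format(name=name, data_type=data_type,
                               identifiers=identifiers,
                               approved_tool=approved_tool, alias=alias)

print(nudge("Sheila", "client information", "names and account numbers"))
```

The template forces each message to carry the three elements that make interventions work: the specific risk, the fix, and a path to help.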

Make Secure Behavior the Default

For people to change their behavior, they need a usable alternative. Give them something they can rely on. Teach them simple habits and best practices for finding confidential information or redacting certain bits. Make tools easy to find. Approve access requests quickly. If you can make the secure choice easy, people will go for it.

Measure Behavior Change

Look beyond phishing clicks and training completion rates. Those metrics aren’t bad, but they’re incomplete. Measure outcomes: Did people adopt the approved tools? Did they stop uploading sensitive data? Did they remove or redact sensitive data? Which of your interventions worked, and which got ignored? Treat generative AI governance as a living program. Watch the data. Adjust quickly. Double down on the programs that work.
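Those outcome questions translate directly into a couple of simple ratios. A sketch with made-up event counts; real numbers would come from your identity, DLP, and cloud telemetry:

```python
# Illustrative outcome metrics; the counts below are invented examples.

def adoption_rate(approved_tool_sessions, total_ai_sessions):
    """Share of generative AI usage that goes through approved tools."""
    return approved_tool_sessions / total_ai_sessions if total_ai_sessions else 0.0

def intervention_effect(risky_before, risky_after):
    """Relative drop in risky uploads after an intervention (positive = improvement)."""
    return (risky_before - risky_after) / risky_before if risky_before else 0.0

print(f"adoption: {adoption_rate(140, 200):.0%}")                   # prints "adoption: 70%"
print(f"risky-upload reduction: {intervention_effect(50, 20):.0%}")  # prints "risky-upload reduction: 60%"
```

Tracked over time, these two numbers tell you whether the secure path is actually winning, which is the outcome the program exists to produce.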

The Bottom Line

Securing generative AI isn’t about your paper policy. It’s about putting that policy into action. Define sensitive data in language that employees can understand. Label it clearly. Make the safe option faster than the risky one. Watch behavior in real time. Intervene with short, practical guidance. People want to do good work. Build an environment that makes secure decisions the easiest decisions, and you will reduce risk.

