Scenario 1: The CISO Who Banned AI (15 min)
Setup: A CISO has blocked all AI tools after reading about data leakage. Employees are frustrated.
Your objective: Position Workforce AI Security as the path from "block everything" to "govern and enable."
Key points:
- Blocking creates shadow AI — employees use personal devices
- Governance with visibility is more effective than prohibition
- Start with Detect mode to understand current state
- Redact mode allows AI usage while stripping sensitive data
Scenario 2: The AI Application Builder (15 min)
Setup: A dev team is building an internal AI assistant for their SOC. The security team is concerned about prompt injection but doesn't understand it.
Your objective: Explain prompt injection in plain language, demonstrate it, and position AI Guardrails.
Key points:
- Prompt injection is a new attack class — WAFs don't catch it
- Live demo: show an injection being blocked
- Inbound + outbound scanning protects both input and output
- Latency: ~30ms is invisible compared to LLM generation
Scenario 3: The AI-First SOC (10 min)
Setup: A large enterprise is deploying AI agents for tier-1 SOC tasks: alert triage, enrichment, response. They want agents to access SIEM, EDR, and firewall APIs.
Your objective: Position AI Agent Security and explain least-privilege for agents.
Key points:
- Agent ≠ chatbot — autonomous action changes the security model
- MCP is the emerging standard — every connection is a capability grant
- Monitor first (Detect mode), then enforce
- The blast radius question: "If compromised, what's the worst it could do?"
Think Deeper
The CISO asks: 'Can you guarantee no data will leak to AI tools?' What's the honest answer?