End-of-lesson Quiz
5 questions · Workforce AI Security
1 of 5
Your company blocks chat.openai.com at the firewall. Why does this not solve the shadow AI problem?
Blocking a single domain gives a false sense of security. Shadow AI enters through personal devices, mobile networks, embedded AI features in otherwise-approved tools, and API calls from developer environments. Workforce AI Security provides visibility across all these vectors — blocking one URL does not.
2 of 5
A prompt contains both a customer name and an AWS secret key. Which classification takes priority and why?
The AWS secret key takes priority: real systems use a highest-severity-wins rule. A leaked AWS key can be exploited by automated scanners within minutes of exposure. PII is serious but typically requires additional steps to weaponise. Credential leaks are immediate, direct-access risks.
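The highest-severity-wins rule can be sketched as follows. This is a minimal illustration, not the lesson's actual implementation; the severity ranks and labels are assumptions chosen to match the reasoning above.

```python
# Hypothetical severity ranks: credentials outrank PII because they are
# an immediate, direct-access risk (assumed values, not from the lesson).
SEVERITY = {
    "credential": 3,  # e.g. AWS secret keys, API tokens
    "pii": 2,         # e.g. customer names, emails
    "internal": 1,    # e.g. non-public but low-sensitivity data
}

def classify(findings):
    """Return the single label that wins under highest-severity-wins."""
    return max(findings, key=lambda label: SEVERITY[label])

# A prompt containing both a customer name (PII) and an AWS secret key:
print(classify(["pii", "credential"]))  # credential
```

The prompt as a whole is then handled under the winning label's policy, so one credential in a mostly-benign prompt still triggers the strictest response.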
3 of 5
A customer wants to start enforcement by setting every policy to Block. What do you recommend instead?
Blocking without visibility causes employee backlash and shadow workarounds (personal devices), and it means you never learn what you're protecting against. The proven path is Detect → Analyse → Design targeted policies → Enforce gradually. Data-driven policy beats guesswork.
4 of 5
The dashboard shows 31% of AI usage happens after business hours. What additional context do you need before treating this as a risk?
A metric without context is just a number. Department + data type + trend turns it into an insight. Engineering at midnight is likely normal; HR accessing salary data at 2 AM during notice period is a potential exfiltration indicator. Context is everything.
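The "metric plus context" idea can be sketched as a simple rule: the after-hours percentage alone decides nothing, but department, data type, and trend together do. The fields and thresholds below are illustrative assumptions, not the product's actual logic.

```python
from dataclasses import dataclass

@dataclass
class AfterHoursUsage:
    department: str   # e.g. "engineering", "hr"
    data_type: str    # e.g. "code", "salary"
    trend: str        # "stable" or "rising" (assumed values)

# Hypothetical set of data types sensitive enough to flag off-hours
SENSITIVE = {"salary", "customer_pii", "source_code"}

def is_risk_indicator(usage: AfterHoursUsage) -> bool:
    """A bare after-hours metric is just a number; context makes it an insight."""
    # Engineering working at midnight is likely normal.
    if usage.department == "engineering":
        return False
    # Sensitive data, off-hours, on a rising trend: worth investigating.
    return usage.data_type in SENSITIVE and usage.trend == "rising"

# HR accessing salary data at 2 AM, with usage trending up:
print(is_risk_indicator(AfterHoursUsage("hr", "salary", "rising")))  # True
```

A flag here is an investigation trigger, not a verdict; the 31% headline number only matters once it is broken down this way.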
5 of 5
A user's risk score spikes from 15 to 78 after uploading 50 files to ChatGPT. Is this malicious?
Risk scoring works like UEBA: it detects statistical anomalies, not malice. Uploading 50 marketing images for a campaign is benign; uploading 50 source files the week before resignation is not. The system flags the deviation — investigation determines intent.
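A UEBA-style deviation check can be sketched with a z-score against the user's own baseline. This is a simplified illustration under assumed numbers; real risk-scoring models weigh many more signals than upload counts.

```python
import statistics

def flags_anomaly(baseline_uploads, current_uploads, threshold=3.0):
    """Flag when current activity deviates sharply from the user's baseline.

    Detects the statistical anomaly only; intent is for investigators.
    """
    mean = statistics.mean(baseline_uploads)
    stdev = statistics.stdev(baseline_uploads) or 1.0  # guard a flat baseline
    z_score = (current_uploads - mean) / stdev
    return z_score > threshold

# Baseline of a few files per day; a sudden 50-file upload is flagged
print(flags_anomaly([2, 3, 1, 4, 2], 50))  # True
```

Both the marketing campaign and the pre-resignation exfiltration would trip this check identically, which is exactly why the flag starts an investigation rather than ending one.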