Risk scoring: anomaly detection for AI usage
Each user gets a risk score based on their AI usage patterns. Adjust the sliders to simulate a user's behaviour and watch the risk score change in real time.
[Interactive demo: preset scenarios, live risk score display, and risk factor breakdown]
How risk scoring works
Baseline (learned): Average usage patterns across the organisation.
15 prompts/day, 3% sensitive content, 1-2 apps, minimal after-hours usage.
This is the "normal" the model learns.
Anomaly (flagged): Deviations from baseline trigger risk score increases.
This is the same anomaly detection approach from Stage 2, applied to AI usage instead of network traffic.
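A minimal sketch of how such a score might be computed, assuming a simple weighted-deviation model; the feature names, baseline values, weights, and caps below are illustrative, not the scoring model used by this demo:

```python
# Illustrative only: combine deviations from a learned per-organisation baseline
# into one 0-100 risk score. Features, weights, and caps are made up for the sketch.
from dataclasses import dataclass


@dataclass
class Baseline:
    prompts_per_day: float = 15.0
    sensitive_pct: float = 3.0     # % of prompts containing sensitive content
    apps_used: float = 2.0
    after_hours_pct: float = 5.0   # % of activity outside business hours


WEIGHTS = {  # relative importance of each factor (hypothetical)
    "prompts_per_day": 0.20,
    "sensitive_pct": 0.40,
    "apps_used": 0.15,
    "after_hours_pct": 0.25,
}


def risk_score(usage: dict, baseline: Baseline = Baseline()) -> float:
    """Weighted relative deviation above baseline, clamped to 0-100."""
    score = 0.0
    for feature, weight in WEIGHTS.items():
        expected = getattr(baseline, feature)
        observed = usage.get(feature, expected)
        # 0 when at or below baseline, grows as usage diverges upward
        deviation = max(0.0, (observed - expected) / max(expected, 1.0))
        score += weight * min(deviation, 5.0) * 20  # cap any single factor
    return min(round(score, 1), 100.0)


# A normal day stays low; a bulk-upload, after-hours day spikes the score.
print(risk_score({"prompts_per_day": 14, "sensitive_pct": 2}))   # -> 0.0 (low)
print(risk_score({"prompts_per_day": 90, "sensitive_pct": 40,
                  "apps_used": 6, "after_hours_pct": 60}))       # -> 91.0 (high)
```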
Think Deeper
Try this:
A user's risk score jumps from 15 to 78 in one day. They uploaded 50 files to ChatGPT. Is this malicious?
Not necessarily. Check:
1. File types: marketing images vs. source code.
2. Context: were they told to migrate docs?
3. History: is this their first spike or a pattern?
Anomaly detection flags the deviation; human judgement determines the intent. This is exactly like UEBA from Stage 2.
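To make that separation concrete, here is a hypothetical alert record: the detector reports the deviation and attaches context for triage, but leaves the verdict to an analyst (all field names and values are invented):

```python
# Hypothetical alert record: the detector flags the deviation plus triage
# context; intent is decided by a human, not the model. All values invented.
alert = {
    "user": "jdoe",
    "risk_score_before": 15,
    "risk_score_after": 78,
    "trigger": "50 files uploaded to ChatGPT in one day",
    "context": {
        "file_types": ["png", "jpg"],           # marketing images vs. source code?
        "related_tickets": ["Docs migration"],  # were they told to migrate docs?
        "prior_spikes": 0,                      # first spike, or part of a pattern?
    },
    "verdict": None,  # left to the analyst
}
print(alert["risk_score_after"], alert["verdict"])
```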
Cybersecurity tie-in: Risk scoring is UEBA (User and Entity Behaviour Analytics) applied to AI usage. The same principle you learned in Stage 2 for detecting anomalous network connections now detects anomalous AI interactions. The math is the same; the data source is new.
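As a hedged illustration of "the math is the same", the same z-score deviation function can score a network-traffic feature from Stage 2 and an AI-usage feature from this stage; only the input data changes (the numbers are made up):

```python
import statistics


def z_score(observed: float, history: list[float]) -> float:
    """Standard deviations the observation sits above the entity's own history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (observed - mean) / stdev


# Stage 2: anomalous network traffic (MB sent per hour, illustrative history)
print(z_score(observed=9500, history=[200, 250, 180, 300, 220]))

# This stage: anomalous AI usage (files uploaded per day, illustrative history)
print(z_score(observed=50, history=[1, 0, 2, 1, 3]))
```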