The System Prompt: Your Most Powerful Control
The system prompt is a persistent instruction that frames the model's behaviour before the conversation begins. It controls five dimensions:
| Dimension | What it controls | Example |
|---|---|---|
| Persona | Who the model is | "You are a senior SOC analyst with 10 years of experience" |
| Tone | How it responds | "Be concise and technical. No pleasantries." |
| Format | Output structure | "Use bullet points. Start with severity level." |
| Scope | What it should/shouldn't do | "Only answer security-related questions. Decline others." |
| Context | Domain knowledge | "You are analysing logs from our AWS environment running EKS." |
Bad vs Good System Prompts
The difference between a useful security tool and a generic chatbot comes down to the system prompt:
# WEAK -- generic, no security context
weak_system = "You are a helpful assistant."
# STRONG -- specific persona, format, scope
strong_system = """You are a senior SOC analyst specialising in threat intelligence.
When given a log entry or indicator of compromise (IOC):
1. Identify the attack technique (MITRE ATT&CK ID if applicable)
2. Assess severity: Critical / High / Medium / Low
3. List immediate response actions as bullet points
4. Flag any indicators that should be added to blocklists
Be concise. No introductory phrases. Start directly with the analysis."""
The strong prompt produces structured, actionable output. The weak prompt produces a generic essay.
Testing Prompt Variations
log_entry = "198 failed SSH logins from 45.33.32.156 in 60 seconds"
# Same log, different personas
personas = {
"Junior SOC": "You are a junior SOC analyst. Explain step-by-step what to check.",
"CISO Brief": "You are briefing the CISO. One paragraph, business impact focus.",
"IR Lead": "You are the incident response lead. Triage and assign actions.",
}
for name, system in personas.items():
response = client.chat(
system=system,
messages=[{"role": "user", "content": f"Analyse: {log_entry}"}],
max_tokens=300,
)
print(f"\n--- {name} ---")
print(response)
The same log entry produces completely different output depending on the persona. This is why system prompt engineering is the most important skill when building security AI tools.
System Prompt Best Practices
| Practice | Why it matters |
|---|---|
| Be specific about format | Vague instructions produce inconsistent output across calls |
| Define scope boundaries | Prevents the model from answering off-topic questions |
| Include output examples | One example is worth 100 words of description |
| State what NOT to do | "Never include disclaimers" reduces noise in the output |
| Test with adversarial input | Try to break your own prompt before an attacker does |
Think Deeper
Write two system prompts for the same log entry: one for a 'junior SOC analyst' persona and one for a 'CISO briefing'. How do the outputs differ in tone, detail, and recommended actions?