Domain 5 · Task Statement 5.2

Context Files & Persistent Knowledge

TL;DR

Transform Claude from a generic assistant into a calibrated colleague using role definitions, writing sample calibration, tone contrasts, and persistent context files that eliminate repetitive onboarding.

What You Need to Know

Every time you start a fresh chat with Claude, you are working with a brilliant stranger. It has enormous general knowledge but knows nothing about your role, your company, your preferences, your banned phrases, or your industry jargon. You can either repeat this context every single time — or you can give Claude a comprehensive onboarding binder that persists across sessions.

This is the intern-to-colleague transformation. On day one, a new hire knows nothing about your brand. You can give them a single task and micromanage the output, or you can hand them an onboarding binder — your role, your preferences, your terminology, examples of your writing — and watch them produce work that sounds like it came from your team. Context files are that onboarding binder.

Role definition: the foundation

The single most impactful line you can add to your Project Instructions is a role definition. "You are a Senior B2B SaaS Strategist" produces fundamentally different output from "You are an internal communications specialist for a mid-size manufacturing firm." Without a role definition, Claude defaults to a generic helpful assistant — pleasant but unfocused.

A good role definition includes:

  • The specific job title or function
  • The industry or domain
  • The audience Claude is writing for
  • Any constraints on the role (e.g., "You advise but never make final decisions")
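The four elements above can be sketched as a small helper that assembles them into one instruction block. This is an illustrative sketch, not a prescribed schema — the function name and example values are hypothetical:

```python
# Sketch: assembling the four role-definition elements into one
# instruction block. Field values below are hypothetical examples.

def build_role_definition(title: str, domain: str, audience: str, constraints: str) -> str:
    """Combine job title, domain, audience, and constraints into a role definition."""
    return (
        f"You are a {title} working in {domain}. "
        f"You write for {audience}. "
        f"Constraints: {constraints}"
    )

role = build_role_definition(
    title="Senior B2B SaaS Strategist",
    domain="enterprise software marketing",
    audience="VP-level buyers evaluating annual contracts",
    constraints="You advise but never make final decisions.",
)

# The resulting string would go at the top of your Project Instructions
# (or into a system prompt, if you are calling the API directly).
print(role)
```

The point of the helper is the checklist it enforces: if you cannot fill in all four arguments, your role definition is incomplete.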

Writing sample calibration: show, don't tell

This is the technique that separates adequate output from output that sounds like you wrote it. Instead of describing your tone for three paragraphs — "I write in a direct, no-nonsense way that gets to the point quickly" — paste 200 to 400 words of your actual writing.

The AI learns more from a concrete example than from abstract description. A writing sample teaches Claude your sentence length, vocabulary choices, structural patterns, how you open emails, how you close reports, whether you use contractions, and dozens of other micro-patterns that no written description can capture.


The 200-400 Word Sweet Spot

A writing sample between 200 and 400 words gives Claude enough to calibrate without consuming excessive context tokens. Use a real email, report introduction, or message you have actually sent. Avoid polished marketing copy — Claude needs to match how you actually write, not your best-edited work.
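A quick length check keeps samples inside that window before you paste them in. This is a rough sketch — splitting on whitespace counts words, not tokens, and the 200-400 bounds simply follow the guidance above:

```python
# Sketch: validating a writing sample's length before embedding it.
# Whitespace word-splitting is a rough proxy, not an exact token count.

def check_sample_length(sample: str, low: int = 200, high: int = 400) -> str:
    words = len(sample.split())
    if words < low:
        return f"too short ({words} words): add more real writing"
    if words > high:
        return f"too long ({words} words): trim to save context tokens"
    return f"ok ({words} words)"

sample = "word " * 250          # stand-in for a real email or report intro
print(check_sample_length(sample))   # → ok (250 words)
```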

Tone contrasts: defining voice by what it isn't

Describing what you want often fails because positive descriptions are inherently vague. "Write in a friendly professional tone" means something different to every person who reads it. Contrasts work better because they set explicit boundaries.

Instead of: "Write in a friendly professional tone"

Write: "Knowledgeable friend explaining over coffee, not a corporate press release. Direct, not evasive. Confident, not arrogant. Specific, not vague."

Each contrast creates a clear boundary that prevents Claude from drifting into default patterns. The exam tests this: if a scenario asks for the most effective way to define a brand voice, look for the answer that uses contrasts rather than positive-only descriptions.
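Contrast pairs are easy to keep consistent if you store them as data and render them into instruction lines. A minimal sketch, using the example pairs from above (the function name is illustrative):

```python
# Sketch: rendering tone contrasts as "X, not Y" instruction lines.
# The pairs below match the coffee-vs-press-release example above.

def format_contrasts(pairs: list[tuple[str, str]]) -> str:
    return "\n".join(f"- {want}, not {avoid}" for want, avoid in pairs)

tone_rules = format_contrasts([
    ("Knowledgeable friend explaining over coffee", "a corporate press release"),
    ("Direct", "evasive"),
    ("Confident", "arrogant"),
    ("Specific", "vague"),
])
print(tone_rules)
```

Each rendered line sets one explicit boundary, which is exactly what positive-only descriptions fail to do.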

Behavioural guardrails: stop the flattery

Claude's default behaviour includes agreeable preambles — "Great question!" "That is an excellent point!" "I would be happy to help with that!" These add no value and waste tokens. Unless you explicitly tell Claude to stop, every response starts with validation.

Add these to your instructions:

  • "No preambles or flattery — start every response with substance"
  • "If my logic has a gap, say so directly"
  • "Challenge my assumptions rather than agreeing by default"

An AI that agrees with everything isn't a useful colleague. The most valuable persistent context doesn't just tell Claude what to do — it tells Claude what to stop doing.

Context window budget management

Every word in your Project Instructions and Knowledge Base consumes tokens from the context window — the same window your conversation needs. A 5,000-word system prompt leaves significantly less room for actual discussion, especially in long sessions with multiple back-and-forth exchanges.

Treat your instructions like expensive real estate:

  • Front-load the most critical rules (they get the most attention)
  • Use concrete examples instead of lengthy descriptions (a 200-word writing sample replaces 500 words of tone description)
  • Remove anything Claude can infer from a shorter instruction
  • Audit your instructions regularly — if a rule hasn't been relevant in a month, consider removing it
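To make the budget concrete, you can rough out how many tokens your instructions consume. The ratio below (~0.75 English words per token) is a common rule of thumb, not an exact tokenizer — use your provider's token-counting tool for real numbers:

```python
# Sketch: a rough instruction-budget estimate. The 0.75 words-per-token
# ratio is a rule of thumb for English text, not an exact tokenizer.

def estimate_tokens(text: str) -> int:
    return round(len(text.split()) / 0.75)

instructions = "Front-load critical rules. " * 40   # stand-in for real instructions
est = estimate_tokens(instructions)
print(f"~{est} tokens of the context window consumed before any conversation")
```

A 5,000-word system prompt works out to roughly 6,700 tokens by this estimate — budget it has spent before your first message.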

Common Mistakes

Common Mistake

Leaving Claude in its default 'agreeable' mode — getting polished-sounding output that validates your ideas rather than genuinely useful feedback that challenges your reasoning.

Instead: Add explicit behavioural guardrails: 'Challenge my assumptions. If my logic has a gap, say so directly. No flattery or validating preambles.' An AI that pushes back on weak arguments is far more valuable than one that always agrees.

Common Mistake

Writing three paragraphs describing your desired tone ('I like direct, punchy writing that gets to the point quickly') instead of pasting an actual writing sample.

Instead: Include a 200-400 word sample of your own writing — a real email, a report introduction, or a message you actually sent. The AI learns more from a concrete example than from abstract description. Show, don't tell.

Common Mistake

Writing a 5,000-word system prompt that covers every conceivable scenario, leaving minimal room for actual conversation in the context window.

Instead: Keep instructions to the essential rules. Front-load the most critical items. Use examples instead of lengthy descriptions. If your instructions exceed 1,000 words, audit them for redundancy — every line must earn its place.

Defining tone for email drafting

Before

Write an email in a professional tone.

After

Write this email following the tone of the writing sample in my instructions. Avoid passive voice. Lead with the most important point. End with a numbered action items section.

Getting genuine feedback

Before

Help me sound more like myself.

After

Review my last three drafts against the writing sample in our project instructions. Identify where my voice drifts into generic AI phrasing and suggest specific rewrites that match my natural style.


Hands-On Activity

Create Your Personal Context File

10 min

Write a professional bio, add a writing sample and explicit behavioural rules, and test the calibration by comparing Claude's output against your actual writing style.

What you will learn

  • Write a concise professional context that defines your role and preferences
  • Calibrate Claude using a real writing sample rather than abstract tone descriptions
  • Add behavioural guardrails that eliminate default AI patterns like flattery and hedging
  • Test the calibration by comparing Claude's output against your actual writing
  1. Write a 200-word bio describing your professional role, the industry jargon you commonly use, and specific things that annoy you in AI writing (e.g., 'Never open with I hope this email finds you well').

    Why: This forces you to articulate preferences you normally take for granted. The bio becomes the foundation of your persistent context — your onboarding binder for the AI.

    Expected: A concise personal context document covering your role, vocabulary, and explicit anti-patterns.

  2. Open a Project and paste your bio into the Project Instructions under the header 'Context & Writing Sample'. Add 2-3 concrete rules: banned phrases, required formatting, or structural preferences.

    Why: Combining the writing sample with explicit rules gives Claude both a calibration target (how you sound) and guardrails (what you never want). This is the intern-to-colleague transformation in practice.

    Expected: Your Project Instructions now contain a personal context section and a set of specific behavioural rules.

  3. Start a new chat in the project and ask Claude to draft a short email on a topic relevant to your work. Compare the output against an email you would actually send.

    Why: This is your calibration test. If the output sounds like you, the context file is working. If it misses the mark, refine the writing sample or add more specific rules.

    Expected: An email draft that mirrors your vocabulary, sentence structure, and formatting preferences — noticeably different from a generic Claude response.
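The activity's three pieces — bio, explicit rules, and writing sample — can be assembled into one instructions document. A minimal sketch; the 'Context & Writing Sample' header follows step 2 above, everything else is placeholder text:

```python
# Sketch: assembling bio + rules + writing sample into one Project
# Instructions document. All values below are placeholders.

def build_instructions(bio: str, rules: list[str], sample: str) -> str:
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        "## Context & Writing Sample\n"
        f"{bio}\n\n"
        "## Rules\n"
        f"{rule_lines}\n\n"
        "## Writing sample\n"
        f"{sample}\n"
    )

doc = build_instructions(
    bio="Internal communications lead at a mid-size manufacturer.",
    rules=[
        "Never open with 'I hope this email finds you well'",
        "End emails with a numbered action items section",
    ],
    sample="(paste 200-400 words of a real email you sent)",
)
print(doc)
```

Paste the rendered document into your Project Instructions, then run the step-3 calibration test against it.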


Practice Question

A CEO wants the AI to draft emails that sound exactly like her — direct, emoji-free, and always ending with a clear 'Action Item' header. What is the most effective configuration?

