The Scenario
You're a communications lead at a mid-sized company. Your team has started using Cowork for everything — emails, reports, briefs, proposals. The outputs are competent but they all sound the same: polished, formal, slightly robotic. None of it sounds like your company. When you read the AI-generated drafts next to something a human team member wrote, the difference is obvious.
Your CEO has noticed. "This reads like a machine wrote it," she said at the last leadership meeting. "If we're going to use AI, the output needs to sound like us."
Today, you're going to solve this permanently. You'll build a brand voice context file from real examples of your company's writing, then create a custom /humanise skill that applies that voice consistently to any Cowork output.
This isn't a cosmetic exercise. Brand voice is one of the few things that can't be faked at scale. An organisation that produces consistent, recognisable writing across all channels — emails, reports, proposals, social media — signals professionalism and coherence. One that produces a patchwork of AI-generic outputs signals that nobody's in charge of how the company sounds.
What You'll Learn
By the end of this tutorial, you'll be able to:
- Extract voice patterns from real writing samples using both manual analysis and AI-assisted pattern recognition
- Write a prescriptive brand voice context file that produces measurable, consistent results
- Build a custom skill that transforms any Cowork output to match your company's voice
- Test the skill across multiple content types and verify it adapts to context
- Package the skill and context file for team-wide deployment
Prerequisites
- Claude Desktop with Cowork enabled (Pro or Max plan)
- 5-10 samples of your company's actual writing — emails, blog posts, internal memos, client proposals, social media posts. The more variety, the better. If you don't have real samples, use writing samples from any brand whose voice you admire and want to emulate.
- Access to Cowork's custom skills functionality
The quality of your brand voice file directly determines the quality of the /humanise skill. Spend real time collecting good samples. A skill built on three generic emails will produce mediocre results. A skill built on ten carefully chosen pieces — covering different contexts, audiences, and tones — will produce output that genuinely sounds like your team.
Step 1: Collect and Analyse Your Voice Samples
Gather your 5-10 writing samples and place them in a folder called Brand-Voice-Lab/samples/. These should represent different types of communication:
- At least one internal email or memo
- At least one client-facing document
- At least one informal communication (Slack message, social post)
- At least one formal document (report, proposal)
Before asking Cowork to analyse them, read through the samples yourself and jot down what you notice:
- What words or phrases appear repeatedly?
- Is the tone formal, casual, or somewhere specific in between?
- How long are the sentences? Short and punchy, or long and detailed?
- Does the writing use jargon, acronyms, or industry-specific language?
- Is there humour? Warmth? Directness?
Write your own observations down first. You'll compare them with Cowork's analysis.
This manual analysis step isn't optional. If you skip it and rely entirely on Cowork's analysis, you've got no way to validate whether Cowork correctly identified the voice. Your human judgement is the ground truth here — you know how your company sounds because you've worked there. Cowork is analysing patterns; you're recognising identity.
Create a simple voice audit table:
| Sample | Type | Formality (1-5) | Sentence length | Distinctive phrases | Tone notes |
|---|---|---|---|---|---|
| CEO blog post | External | 3 | Short, punchy | "Here's the thing..." | Conversational authority |
| Client proposal | External | 4 | Medium | "We recommend..." | Formal but warm |
| Team Slack update | Internal | 2 | Very short | Abbreviations, emojis | Casual, direct |
| Quarterly report | Internal | 4 | Long | "Performance against target" | Data-driven, measured |
This table becomes your validation reference when you review Cowork's analysis in Step 2.
Checkpoint: You've got 5-10 writing samples in a folder, and you've written down your own observations about the company's voice.
Step 2: Use Cowork to Extract Voice Patterns
Point Cowork at your Brand-Voice-Lab folder and run this analysis:
Read every file in the samples/ folder. These are examples of our company's writing across different contexts. Analyse them and produce a brand voice profile saved as brand-voice-analysis.md with:
- Voice characteristics — the 5-7 defining traits of this writing voice (e.g., "Direct but warm," "Uses short sentences for emphasis," "Avoids corporate jargon")
- Vocabulary patterns — specific words and phrases that recur, and words that are notably absent
- Sentence structure — average length, preferred constructions, use of questions or imperatives
- Tone spectrum — where the voice sits on scales of formal/informal, technical/accessible, reserved/enthusiastic
- Context adaptation — how the voice changes between internal and external, formal and informal communications
- Anti-patterns — specific phrases or constructions that this voice never uses
Cowork analyses your writing samples and extracts a structured voice profile
Review Cowork's analysis against your own notes. Does Cowork's profile match what you observed? Are there traits it missed, or patterns it identified that you didn't notice?
Common gaps in Cowork's analysis:
- It may miss context-dependent tone shifts — the difference between how your company writes to clients versus how it writes internally. If your samples included both, check whether the analysis captures this variation.
- It may over-index on surface features — counting word frequencies rather than identifying the underlying attitude. "Uses short sentences" is a surface observation; "conveys urgency through brevity" is a deeper insight.
- It may not detect what's absent — the words and phrases your company deliberately avoids are just as important as the ones it uses. If the analysis doesn't include an "anti-patterns" section, add one yourself based on your knowledge.
If the analysis is missing key traits, prompt Cowork to add them: "You missed that our writing never uses passive voice in client communications, and that we always address people by first name. Please add these to the profile."
Checkpoint: You've got a brand-voice-analysis.md that accurately captures your company's writing voice, validated against your own observations.
Step 3: Write the Brand Voice Context File
Now transform the analysis into a prescriptive context file that Cowork can use as instructions. Create brand-voice.md in your root folder:
The structure should include:
Voice Identity
A 2-3 sentence description of who the brand sounds like. Not abstract qualities — a concrete persona. Example: "We sound like a knowledgeable colleague who explains complex things simply, uses dry humour sparingly, and never talks down to the reader."
Rules
Specific, actionable rules. Not "be friendly" but:
- Use contractions (we're, it's, don't) in informal contexts; avoid them in formal reports
- Maximum sentence length: 25 words. If a sentence exceeds this, split it.
- Never use: "leverage," "synergise," "circle back," "move the needle," "at the end of the day"
- Always use: first names (not "Mr. Smith"), active voice, specific numbers over vague quantities
Approved Vocabulary
Words and phrases your brand uses, with context for when to use each.
Banned Vocabulary
Words and phrases that are off-limits, with the preferred alternative for each.
Examples
Include 2-3 before/after pairs showing generic AI writing transformed into your brand voice.
The most common mistake in brand voice files is being too abstract. "Write in a professional but approachable tone" means nothing — every company thinks their tone is "professional but approachable." Specific rules, specific vocabulary, and specific examples are what make the difference.
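Pulling these sections together, a skeletal brand-voice.md might look like the sketch below. Every entry here is an illustrative placeholder built from the examples in this step — substitute the vocabulary and rules from your own analysis:

```markdown
# Brand Voice

## Voice Identity
We sound like a knowledgeable colleague who explains complex things simply,
uses dry humour sparingly, and never talks down to the reader.

## Rules
- Use contractions (we're, it's, don't) in informal contexts; avoid them in formal reports.
- Maximum sentence length: 25 words. If a sentence exceeds this, split it.
- Always use first names, active voice, and specific numbers over vague quantities.

## Approved Vocabulary
- "straightforward" — for describing simple processes (not "easy")
- "we recommend" — client proposals only

## Banned Vocabulary
- "leverage" → use "use"
- "circle back" → use "follow up"

## Examples
Before: "We will leverage our learnings to move the needle on this initiative."
After: "We'll apply what we learned to make this project succeed."
```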
Checkpoint: You've got a brand-voice.md context file with identity, rules, approved/banned vocabulary, and before/after examples.
Step 4: Build the /humanise Skill
Now create the custom skill. In Claude Desktop, navigate to your project's skills configuration. Create a new skill with:
Name: /humanise
Trigger description: "Use when the user says 'humanise this', 'apply brand voice', 'make this sound like us', or when reviewing any draft output that needs voice alignment."
Instructions:
You are a brand voice editor. When triggered, you rewrite the provided text to match the company's brand voice as defined in brand-voice.md.
Process:
- Read brand-voice.md from the project context
- Analyse the input text for AI-typical patterns: generic hedging phrases, overly formal constructions, filler words, passive voice, and vocabulary that contradicts the brand rules
- Rewrite the text applying every rule in brand-voice.md — voice identity, vocabulary rules, sentence structure guidelines, and banned word replacements
- Preserve all factual content, data points, and key arguments — only change how they're expressed, never what they say
- Output the rewritten text with a brief summary of changes made
Critical constraints:
- Never add information that wasn't in the original
- Never remove data, statistics, or specific claims
- If the original text contains technical terms that are correct, keep them — don't dumb down accuracy for style
- Apply the context-appropriate tone (formal for reports, warmer for emails, punchy for social)
Cowork processes the input text through your brand voice rules step by step
Checkpoint: Your /humanise skill is created with clear instructions that reference brand-voice.md and include constraints to preserve factual accuracy.
Step 5: Test with an AI-Generated Draft
Generate a test piece that needs humanising. Ask Cowork to write a client update email about a project milestone — using its default voice, deliberately without the brand voice file:
Write a client update email informing them that the Q1 infrastructure migration is complete, on budget, and two days ahead of schedule. Include three key achievements and next steps for Q2.
Save this as test-drafts/default-email.md. Read it. It'll almost certainly sound generic and corporate.
Now invoke your skill:
/humanise the email in test-drafts/default-email.md
Compare the before and after versions. Check:
- Did the voice shift noticeably?
- Were banned words replaced with approved alternatives?
- Are sentence lengths closer to your specified maximum?
- Is the factual content preserved exactly?
- Does it actually sound like your team wrote it?
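The voice questions need human judgement, but the vocabulary and sentence-length checks are mechanical. A minimal Python sketch, using the example banned list and 25-word limit from Step 3 — swap in the lists from your own brand-voice.md:

```python
import re

# Example values from the Step 3 rules — substitute your own brand-voice.md lists
BANNED_PHRASES = ["leverage", "synergise", "circle back", "move the needle", "at the end of the day"]
MAX_SENTENCE_WORDS = 25

def check_draft(text: str) -> list[str]:
    """Return a list of mechanical rule violations found in the draft."""
    issues = []
    lower = text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lower:
            issues.append(f"banned phrase: {phrase!r}")
    # Naive sentence split on ., ! and ? — good enough for a spot check
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > MAX_SENTENCE_WORDS:
            issues.append(f"sentence of {len(words)} words exceeds {MAX_SENTENCE_WORDS}")
    return issues

print(check_draft("We'll circle back on this. The migration is complete."))
# → ["banned phrase: 'circle back'"]
```

Run it over both versions of the email: the default draft should trip several rules, the humanised one none.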
Checkpoint: You've got a before/after comparison showing a clear voice transformation, with factual accuracy preserved.
Step 6: Test Across Different Content Types
One email isn't a sufficient test. Run the skill against three different content types:
- A formal report paragraph — ask Cowork to write a paragraph about quarterly revenue performance, then humanise it
- An internal Slack message — ask for a team update about a project delay, then humanise it
- A LinkedIn post — ask for a thought leadership post about industry trends, then humanise it
For each, verify:
- Does the skill adapt its application to the content type? (The LinkedIn post should be punchier than the report paragraph.)
- Are the brand rules consistently applied across all three?
- Does the skill ever go too far — making a formal report sound too casual, or making a Slack message sound too polished?
If the skill isn't adapting to context, refine the instructions to emphasise the "Context adaptation" section from your brand voice file.
Create a test results matrix:
| Content Type | Voice match (1-5) | Context adaptation (1-5) | Factual preservation (1-5) | Length appropriate? | Key issue |
|---|---|---|---|---|---|
| Formal report | | | | Yes/No | |
| Internal Slack | | | | Yes/No | |
| LinkedIn post | | | | Yes/No | |
A score of 4+ across all dimensions and content types means your skill is production-ready. Below 3 on any dimension means you need to refine either the brand-voice.md or the skill instructions (or both).
The most common failure mode is that the skill applies the same tone regardless of content type — making a formal report sound too casual, or making a Slack message sound like a press release. If you see this pattern, the fix is usually in the brand voice file: add explicit rules for how the voice adapts by context, with examples of each variation.
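If you record the matrix scores as data, the production-ready gate above is easy to automate. A minimal sketch, assuming each content type's scores are kept in a dict (the dimension names here are shorthand for the matrix columns):

```python
def production_ready(matrix: dict) -> bool:
    """All dimensions score 4+ across every content type."""
    return all(score >= 4 for scores in matrix.values() for score in scores.values())

def needs_rework(matrix: dict) -> list[str]:
    """Content types with any dimension below 3 — refine brand-voice.md or the skill."""
    return [ctype for ctype, scores in matrix.items() if any(s < 3 for s in scores.values())]

results = {
    "Formal report": {"voice": 4, "context": 4, "factual": 5},
    "Internal Slack": {"voice": 3, "context": 2, "factual": 5},
}
print(production_ready(results))  # False — Slack's context adaptation is below 4
print(needs_rework(results))      # ['Internal Slack'] — context score of 2 flags it
```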
Checkpoint: The /humanise skill produces brand-consistent output across at least three different content types, with appropriate tone adaptation.
Step 7: Refine Based on Testing
Based on your tests, update both files:
brand-voice.md: Add any rules you discovered were missing. Common additions after testing:
- Rules about handling numbers and statistics
- Rules about paragraph length
- Rules about how to open and close communications
- Additional banned phrases you noticed in the AI defaults
The /humanise skill instructions: Refine based on what worked and what didn't. Common refinements:
- Adding explicit instructions about context detection (email vs report vs social)
- Strengthening the constraint against adding fabricated enthusiasm
- Adding a word count guideline (humanised output should be within 10% of the original length)
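The 10% guideline is simple to verify mechanically — a minimal sketch, comparing word counts rather than characters:

```python
def within_length_budget(original: str, rewritten: str, tolerance: float = 0.10) -> bool:
    """True if the rewrite stays within ±tolerance of the original word count."""
    orig_words = len(original.split())
    new_words = len(rewritten.split())
    return abs(new_words - orig_words) <= tolerance * orig_words

print(within_length_budget("word " * 100, "word " * 108))  # 8% longer → True
print(within_length_budget("word " * 100, "word " * 120))  # 20% longer → False
```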
Both the brand voice context file and the skill instructions are updated from your test findings
Checkpoint: Both the brand-voice.md and /humanise skill have been refined based on real test results.
Step 8: Package for Team Use
Create a README.md in your Brand-Voice-Lab folder that documents:
- What this is: A /humanise skill and brand voice file for consistent company voice across all Cowork outputs
- How to install: Steps for teammates to add the skill and brand-voice.md to their own projects
- How to use: The trigger phrases that activate the skill, plus examples
- How to maintain: Process for updating the brand voice file when the company's voice evolves (new leadership, rebrand, tone shift)
- Limitations: What the skill handles well and where human editing is still needed
Checkpoint: A team-ready package with installation instructions, usage guide, and maintenance process.
Expected Output
Your deliverable is a working brand voice system:
- brand-voice.md — a detailed, prescriptive brand voice context file
- A custom /humanise skill that applies the voice consistently
- Before/after test results across multiple content types
- README.md with team installation and usage instructions
This isn't a toy — it's production infrastructure. Once deployed across your team, every piece of AI-generated content will sound like it came from the same voice, without anyone manually editing for tone.
Extension Challenges
- Multi-brand support — Create separate brand voice files for different brands or sub-brands your company manages (e.g., parent company vs product brand vs executive thought leadership). Modify the skill to accept a brand parameter: /humanise --brand=executive.
- Voice drift detection — Build a second skill called /voice-check that analyses a piece of text and scores it against brand-voice.md, without rewriting it. This lets writers check their own drafts before submitting.
- Automated pipeline — Set up Cowork so that every document it produces automatically runs through /humanise as a final step. Test whether double-processing (Cowork generates, then humanises its own output) produces better or worse results than single-pass generation with the brand voice file in context.