Domain 3 · Task Statement 3.5
Designing Connected Workflows
TL;DR
Design multi-step workflows that chain Connectors and Skills together, manage token budgets across complex pipelines, apply the Connector-Logic-Connector pattern, and optimise performance with tool-call batching.
What You Need to Know
Everything in Domain 3 converges here. You understand Skills (internal logic), Connectors (external data access), and Plugins (the bundles). Now you need to chain them together into workflows that solve real business problems — pulling data from one service, processing it through your custom logic, and pushing the result to another service, all from a single prompt.
Connected workflows are where Cowork stops being a clever chatbot and starts being a genuine productivity tool. But they are also where most people make their worst mistakes: over-engineering with too many components, ignoring token budget constraints, or building monolithic skills that try to do everything at once.
The Connector-Logic-Connector pattern
The most reliable workflow architecture follows a three-stage pattern:
- Connector IN — fetch raw data from an external service (Salesforce pipeline data, Jira tickets, Google Drive documents)
- Logic — Claude's reasoning engine processes the data, guided by a custom Skill that applies your specific rules (risk scoring, brand formatting, priority classification)
- Connector OUT — push the finished result to an external destination (Slack channel, email, shared drive, project management tool)
This is the "CLC" pattern, and it maps directly to how effective delegation works in any organisation: someone gathers the raw information, someone else analyses it according to established criteria, and the result gets delivered to the right people.
Not every workflow needs all three stages. Some workflows only fetch and process (Connector IN → Logic), producing a result that stays in the Cowork conversation. Others only process and deliver (Logic → Connector OUT), working with data from your local working folder. But the full CLC pattern is what the exam tests most heavily because it exercises every component of the ecosystem.
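The three CLC stages can be sketched as plain functions. This is a toy illustration, not real Cowork code: the connector functions and the risk-scoring rule stand in for an actual Connector call (e.g. Salesforce in, Slack out) and a custom Skill.

```python
# Minimal sketch of the Connector-Logic-Connector (CLC) pattern.
# All functions and data here are hypothetical stand-ins.

def connector_in():
    """Stage 1 (Connector IN): fetch raw data from an external service."""
    return [
        {"deal": "Acme renewal", "stage": "negotiation", "days_stalled": 21},
        {"deal": "Globex upsell", "stage": "proposal", "days_stalled": 3},
    ]

def logic(records, stall_threshold=14):
    """Stage 2 (Logic): apply your rules -- a toy risk-scoring Skill."""
    return [
        {**r, "risk": "high" if r["days_stalled"] > stall_threshold else "low"}
        for r in records
    ]

def connector_out(results):
    """Stage 3 (Connector OUT): format the result for a destination."""
    return "\n".join(f"{r['deal']}: {r['risk']} risk" for r in results)

message = connector_out(logic(connector_in()))
```

Note how the middle stage knows nothing about where the data came from or where it is going, which is exactly what makes each stage swappable.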
Tool priority hierarchy
When Claude encounters a multi-step task, it follows a predictable selection hierarchy:
- Check for a relevant Connector — if the task involves external data, Claude looks for an installed, authenticated Connector first
- Apply a relevant Skill — if processing logic is needed, Claude searches for a matching Skill via progressive disclosure
- Fall back to native capabilities — if no Connector or Skill matches, Claude uses its built-in reasoning and file-processing capabilities
This hierarchy matters because it determines the order of operations. Claude doesn't randomly choose between tools. It prioritises real-time data from Connectors over static context, and specialised Skill logic over generic reasoning. Understanding this hierarchy helps you predict how Claude will handle ambiguous requests — and helps you write prompts that guide it to the right tools.
Token budget management
Every component in a connected workflow consumes tokens from your context budget:
- Connector tool definitions — each installed Connector adds its schema to the context, even when not actively in use
- Skill instructions — loaded via progressive disclosure only when triggered, but still consuming tokens when active
- Retrieved data — the raw data a Connector fetches occupies context space
- Claude's reasoning — the analysis and synthesis step needs room to work
A workflow with 10 Connectors, 5 loaded Skills, and a large dataset fetched from Salesforce may leave Claude with insufficient context for quality reasoning. The result: degraded output, truncated analysis, or dropped steps.
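A back-of-the-envelope budget check makes the trade-off concrete. Every number below is an invented round figure for illustration, not a real Cowork token count:

```python
# Toy context-budget arithmetic. All token counts are invented
# round numbers for illustration, not real Cowork figures.

CONTEXT_WINDOW = 200_000

def remaining_budget(connector_defs, skill_tokens, data_tokens):
    """Tokens left for Claude's reasoning after tool overhead and data."""
    used = sum(connector_defs.values()) + skill_tokens + data_tokens
    return CONTEXT_WINDOW - used

# 10 idle connectors at ~1,500 tokens each, 5 skills, a big dataset:
bloated = remaining_budget(
    connector_defs={f"conn_{i}": 1_500 for i in range(10)},
    skill_tokens=5 * 2_000,
    data_tokens=120_000,
)

# Trimming to the 2 connectors and 1 skill actually needed:
lean = remaining_budget(
    connector_defs={"salesforce": 1_500, "slack": 1_500},
    skill_tokens=2_000,
    data_tokens=120_000,
)
```

Whatever the real per-connector figures are, the shape of the arithmetic is the point: idle tool definitions come straight out of the room Claude has to reason.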
Lean Context, Better Results
Disable or remove Connectors you aren't using in a given workflow. Each connector's tool definitions occupy context space even when idle. If your current task only needs Salesforce and Slack, having Jira, Notion, Google Calendar, and Asana definitions loaded is pure waste. Keep the active toolset minimal.
Tool-call batching for performance
The MCP specification supports sending multiple tool requests in a single protocol call. If your workflow needs data from Salesforce and Jira simultaneously, batching sends both requests together rather than waiting for one to complete before starting the other.
Think of it as ordering your starter and main course at the same time instead of waiting for the starter to arrive before ordering the main. The Streamable HTTP transport in the MCP 2025-03-26 specification makes this particularly efficient for workflows that pull from multiple sources.
You don't need to configure batching manually — Claude handles it when it identifies independent data requests. But understanding that it exists helps you design workflows where data retrieval steps are independent (and therefore batchable) rather than sequential (which forces one-at-a-time execution).
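For a sense of what batching looks like on the wire: MCP messages are JSON-RPC 2.0, and the 2025-03-26 revision allows an array of requests to travel in one call. The `tools/call` method is from the MCP specification; the tool names and arguments below are illustrative.

```python
# Sketch of a JSON-RPC 2.0 batch: two independent tool calls sent
# together. Tool names and arguments are hypothetical examples.
import json

batch = [
    {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "salesforce_query",
                   "arguments": {"object": "Opportunity"}},
    },
    {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "jira_search",
                   "arguments": {"jql": "priority = P1"}},
    },
]

payload = json.dumps(batch)  # both requests share one round trip
```

Because the two requests carry distinct `id`s, the responses can arrive in any order and still be matched back to their requests, which is what makes independent fetches safe to batch.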
Designing for single responsibility
The most common architectural mistake in connected workflows is building one massive skill that tries to handle data retrieval, analysis, formatting, and distribution in a single file. This monolithic approach creates several problems:
- Hard to debug — when something fails, you can't isolate which step broke
- Hard to reuse — the skill is so specific to one workflow that no other task can use it
- Wasteful — the entire skill loads into context even when you only need one of its functions
- Fragile — a change to one step risks breaking unrelated steps
The better architecture: small, specialised components that chain together. A data-cleaning Skill. A risk-scoring Skill. A brand-formatting Skill. Each does one job well, and you assemble them into different workflows as needed. The Connectors handle data in and data out. Claude's reasoning engine orchestrates the flow.
Exam Trap: More Components Does Not Mean Less Reliable
A common exam distractor suggests that chaining multiple tools makes Claude less reliable. The opposite is true when done correctly. Specialised tool chains — where each component handles one well-defined task — are more reliable than asking Claude to do everything from scratch in a single, unstructured prompt. The reliability risk comes from over-engineering (too many components) or under-specifying (vague instructions), not from chaining itself.
A complete workflow example
Consider this real-world scenario: you need to fetch a customer's support history from a database, summarise their last three complaints using your company's tone guidelines, and post the summary to a private Slack channel.
Poor design: One massive skill that somehow handles database queries, text summarisation, and Slack posting.
Correct design:
- SQL MCP Connector fetches the customer's support tickets from the database
- Claude's reasoning identifies the three most recent complaints and analyses their themes
- Company Tone Skill applies your organisation's specific communication guidelines to the summary
- Slack MCP Connector posts the finished summary to the designated channel
Each component does one job. If the tone guidelines change, you update one Skill. If you switch from SQL to a different database, you swap one Connector. If you want the summary emailed instead of Slacked, you change the output Connector. The processing logic in the middle doesn't change.
Writing effective multi-step prompts
A connected workflow is only as good as the prompt that triggers it. Vague prompts force Claude to guess which tools to use, which data to fetch, and what format to produce. Specific prompts name the tools, define the data scope, and describe the expected output.
The structure that works consistently:
- Name the data source — "Using the Jira Connector, fetch..."
- Define the filter — "...all P1 tickets from the last 14 days"
- Specify the processing — "Apply the Triage Skill to categorise by component and severity"
- Describe the output — "Post a summary to #engineering-leads with the top 3 issues and recommended actions"
One prompt. Four clear instructions. Every component named. No ambiguity.
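If you build these prompts often, the four-part structure is easy to template. This is a convenience sketch, not a Cowork feature; the Connector and Skill names are placeholders you would swap for your own.

```python
# Small template for the four-part workflow prompt:
# source -> filter -> processing -> output. Names are placeholders.

def build_workflow_prompt(source, data_filter, processing, output):
    return (
        f"Using the {source}, fetch {data_filter}. "
        f"{processing}. "
        f"{output}."
    )

prompt = build_workflow_prompt(
    source="Jira Connector",
    data_filter="all P1 tickets from the last 14 days",
    processing="Apply the Triage Skill to categorise by component "
               "and severity",
    output="Post a summary to #engineering-leads with the top 3 issues "
           "and recommended actions",
)
```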
Common Mistakes
Common Mistake
Designing a workflow that chains 8-10 connectors for a task that genuinely needs 2-3 data sources — creating a slow, fragile pipeline that burns through the context budget.
Instead: Design for the minimum number of components needed to achieve the outcome. Each connector adds latency, token overhead, and a potential point of failure. If your workflow produces good results with 3 components, adding 5 more won't make it better.
Common Mistake
Building one massive Skill that handles data retrieval, analysis, formatting, and distribution — creating a monolithic file that is hard to debug, hard to reuse, and wasteful on tokens.
Instead: Design smaller, specialised Skills that chain together: a data-cleaning Skill, an analysis Skill, a formatting Skill. Each does one job well and can be reused across different workflows.
Common Mistake
Leaving 10 Connectors installed and active when the current workflow only uses 2 — not realising that every connector's tool definitions consume context tokens even when idle.
Instead: Disable or remove Connectors you aren't using in a given workflow. Keep the active toolset minimal to preserve context budget for reasoning and data processing.
Building an automated workflow
Before
Claude, do everything for my project.
After
Using the Jira Connector, pull the last 5 tickets for 'Project X'. Apply the Triage Skill to categorise them by urgency. Show me the results in a table sorted by priority.
Processing customer data
Before
Summarise my customer complaints and tell the team.
After
Using the SQL Connector, fetch all support tickets marked 'unresolved' from the past 30 days. Apply the Customer-Insights Skill to categorise them by theme and severity. Post a summary to #support-leads on Slack with the top 3 themes and recommended actions.
Weekly meeting preparation
Before
Check my calendar and emails and do something useful with them.
After
Using the Google Calendar Connector, find all meetings scheduled for this week. For each meeting, use the Meeting-Prep Skill to pull relevant documents from Google Drive and create a one-page briefing. Save all briefings to my 'Weekly Prep' folder.
Hands-On Activity
Build a Multi-Step Connected Workflow
Design and execute a real connected workflow that chains a Connector for data retrieval, Claude's reasoning for processing, and either a Skill or a second Connector for output. By the end, you'll have a working multi-tool pipeline triggered by a single prompt.
What you will learn
- Design a workflow using the Connector-Logic-Connector pattern
- Write a single prompt that chains multiple ecosystem components
- Verify that each component in the chain executes its specific role
- Identify performance characteristics and potential bottlenecks in chained workflows
Step 1: Identify a real multi-step task you perform regularly — for example, find a file in Google Drive, summarise its key findings, and email the summary to a colleague.
Why: Starting with a task you actually do ensures the workflow is practical and immediately useful, not just an academic exercise.
Expected: A clear mental model of the steps involved: data retrieval (Drive), processing (summarisation), and output (email).
Step 2: Verify that both required Connectors are installed and authenticated — in this example, Google Drive and Gmail. Check their status in the Cowork sidebar under Customise > Plugins.
Why: A connected workflow fails if any link in the chain is missing. Checking connector status before prompting prevents frustrating mid-workflow errors.
Expected: Both Connectors showing a Connected status with valid authentication tokens.
Step 3: Write a single prompt that triggers the entire chain: "Claude, find the Q1 Report in my Google Drive, summarise the three key findings in two sentences each, and email that summary to sarah@company.com with the subject line 'Q1 Highlights'."
Why: A single, well-structured prompt that names the Connectors and defines the processing logic demonstrates how workflow chaining works in practice — one instruction triggers a multi-tool pipeline.
Expected: Claude retrieves the file from Drive, generates the summary, composes the email, and asks for your confirmation before sending. The entire chain executes from one prompt.
Step 4: After the workflow completes, verify the output: check your sent folder for the email, review the summary accuracy, and note how long the end-to-end chain took.
Why: Verification closes the loop. Checking timing helps you understand the performance characteristics of chained workflows and identify bottlenecks for optimisation.
Expected: A correctly formatted email in your sent folder containing an accurate summary of the Q1 Report's key findings.
Practice Question
You need to fetch a customer's support history from a SQL database, summarise their last three complaints using your company's tone guidelines, and post the summary to a private Slack channel. What is the most efficient architectural design?
Sources
- Cowork: Claude Code power for knowledge work — Anthropic
- Claude Cowork Guide 2026: Skills, Plugins, Connectors — FindSkill.ai
- Use plugins in Cowork — Anthropic
- Let Claude use your computer in Cowork — Anthropic