150 minutes · Intermediate

Parallel Execution Lab

Design tasks that trigger parallel sub-agents, compare execution with sequential alternatives, and document when parallelism genuinely helps versus when it adds overhead.

What you will build: A documented comparison showing parallel versus sequential execution, with timing data, quality assessment, and practical recommendations for when to use each approach

The Scenario

You're a project coordinator responsible for a quarterly data review. Every quarter, your team receives 20-30 data files from different regional offices — spreadsheets, PDFs, text summaries — and someone has to process them all into a consolidated view. You've been told that Cowork can "do things in parallel" to speed this up, but nobody on your team actually understands when parallelism helps and when it makes things worse.

Your job: run controlled experiments, process the same batch of files using a parallel-friendly approach and a sequential-dependent approach, measure the differences, and produce a recommendation document your team can actually use.

What You'll Learn

By completing this lab, you'll be able to:

  • Design prompts that deliberately trigger parallel sub-agent execution
  • Design prompts that enforce sequential processing for dependent tasks
  • Measure the speed, quality, and token consumption trade-offs between approaches
  • Identify the prompt language patterns that signal parallelism versus sequencing to Cowork
  • Produce a decision framework your team can use to choose the right approach for any task

Understanding parallelism isn't academic. A team that blindly uses sequential prompts for independent tasks wastes hours per week. A team that forces parallelism on dependent tasks produces incoherent output. Knowing which approach to use — and why — is what separates genuinely proficient users from merely functional ones.

Prerequisites

  • Claude Desktop with Cowork enabled (Pro or Max plan)
  • A folder containing at least 20 files of similar types — ideally CSVs, spreadsheets, or PDFs that each contain independent data (regional reports, monthly summaries, client files). If you don't have real data, create 20 simple CSV files with different fictional regional sales figures.
  • A timer or stopwatch (your phone works fine)
  • A text editor for recording observations
[~]

The experiment works best when your files are genuinely independent — each one can be analysed without reference to the others. If you only have sequential files (where file 2 depends on file 1), the parallel test won't produce meaningful results.

Step 1: Create Two Test Folders

Set up your experiment with controlled conditions. On your desktop, create:

  • Parallel-Lab/batch-test/ — copy all 20+ files here
  • Parallel-Lab/sequential-test/ — copy the same files here
  • Parallel-Lab/results/ — this is where you'll save your comparison document

Having identical files in two separate folders lets you run both experiments without one affecting the other.
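
If you'd rather script the setup, here's a minimal sketch in Python (the source path is a placeholder for wherever your 20+ files actually live; adjust both paths to match your machine):

import shutil
from pathlib import Path

lab = Path.home() / "Desktop" / "Parallel-Lab"
source = Path.home() / "Documents" / "quarterly-files"  # hypothetical source folder, adjust

# Create the three lab folders.
for sub in ("batch-test", "sequential-test", "results"):
    (lab / sub).mkdir(parents=True, exist_ok=True)

# Mirror the same files into both test folders so the runs can't affect each other.
for f in source.iterdir():
    if f.is_file():
        shutil.copy2(f, lab / "batch-test" / f.name)
        shutil.copy2(f, lab / "sequential-test" / f.name)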

If you're creating sample data, here's a quick way to generate 20 regional CSVs:

  1. Create a template CSV with columns: Month, Region, Product, Revenue, Units, Target
  2. Fill it with 3 months of data for one fictional region
  3. Duplicate the file 19 times, changing the region name and varying the numbers in each copy
  4. Name them systematically: region-01-north.csv, region-02-south.csv, etc.
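
If you'd prefer to script this instead of duplicating files by hand, here's a minimal Python sketch. The region names, products, and figures are fictional placeholders, and the output path assumes the folder layout from above:

import csv
import random
from pathlib import Path

# Twenty fictional region names, purely illustrative.
REGIONS = [
    "north", "south", "east", "west", "central",
    "northeast", "northwest", "southeast", "southwest", "coastal",
    "highland", "lakeside", "metro", "rural", "island",
    "valley", "plains", "delta", "capital", "border",
]
MONTHS = ["2025-01", "2025-02", "2025-03"]
PRODUCTS = ["Widget", "Gadget", "Gizmo"]

out_dir = Path.home() / "Desktop" / "Parallel-Lab" / "batch-test"
out_dir.mkdir(parents=True, exist_ok=True)

for i, region in enumerate(REGIONS, start=1):
    with (out_dir / f"region-{i:02d}-{region}.csv").open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Month", "Region", "Product", "Revenue", "Units", "Target"])
        for month in MONTHS:
            for product in PRODUCTS:
                units = random.randint(50, 500)           # fictional volumes
                revenue = units * random.randint(20, 80)  # fictional unit pricing
                target = int(revenue * random.uniform(0.8, 1.2))
                writer.writerow([month, region, product, revenue, units, target])

Copy the generated files into sequential-test/ as well so both folders stay identical; extending REGIONS to 50 entries also covers the scale test in the extension challenges.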

Alternatively, ask Cowork in a separate session to generate 20 sample CSV files with realistic but fictional regional sales data. Just make sure the files are genuinely independent — each region's data should stand on its own without referencing other regions.

Checkpoint: Two test folders with identical file sets, plus an empty results folder.

Step 2: Design the Parallel-Friendly Task

Write a prompt that processes each file independently — no file needs information from any other file. This is the ideal scenario for parallelism.

Analyse every file in this folder independently. For each file, produce: (1) a three-sentence summary of its contents, (2) the key metric or figure it contains, and (3) a quality rating (complete, partial, or poor) based on whether the data looks clean and consistent. Save all results in a single file called parallel-results.md, with one section per input file.

The critical word is "independently." Each file analysis is a self-contained unit of work — the perfect candidate for parallel sub-agents. No file needs information from any other file to produce its output. Cowork's planning engine should recognise this and spawn multiple sub-agents.

[~]

Parallelism keywords to include in your prompts: "independently," "each file separately," "for every file in the folder," "no cross-referencing needed." These signal to Cowork that the work can be divided across sub-agents. Avoid words like "then," "next," "after that," or "based on the previous" in parallel-friendly prompts — they imply sequential dependencies.

Simulated view

Let's knock something off your list

Analyse every file in this folder independently. For each file, produce: (1) a three-sentence summary, (2) the key metric, and (3) a quality rating. Save all results in parallel-results.md.

Parallel-Lab · Opus 4.6

A parallel-friendly prompt — note the word 'independently,' which signals that sub-agents can work on separate files at the same time.

Checkpoint: You've got a parallel-friendly prompt that processes files independently.

Step 3: Run the Parallel Test

Open Cowork, point it at Parallel-Lab/batch-test/, paste your parallel-friendly prompt, and start your timer the moment you click Allow.

While it runs, observe:

  • Sub-agent count: How many parallel workers did Cowork spawn?
  • Processing pattern: Are files being handled simultaneously, or does Cowork still process them in sequence despite the independence?
  • Any errors or retries: Did any sub-agent fail and get restarted?

Simulated view

Parallel batch analysis

  • Read all 20 files from batch-test/
  • Spawn sub-agents for independent file analysis
  • Analyse files in parallel (region-01 through region-20)
  • Merge results into parallel-results.md

Cowork processes independent files simultaneously via sub-agents — watch the sub-agent count in the execution panel.

Stop your timer when the task completes. Record:

  • Total elapsed time
  • Number of sub-agents observed
  • Whether all files were processed

Open parallel-results.md and spot-check 3-4 entries for accuracy, noting the quality of the summaries; you'll need a 1-5 score for the record below.

Create a structured observation record:

Metric                          Value
Files submitted
Sub-agents spawned
Total execution time
Files processed correctly
Files with errors
Quality score (1-5)
Rate limit consumption
[!]

If Cowork doesn't spawn any sub-agents despite having 20+ independent files, your task may not have crossed the complexity threshold. That's a valid finding — document the threshold. Alternatively, your rate limit may be partially consumed, and Cowork may conserve resources by avoiding parallelism when capacity is constrained.

Checkpoint: Parallel test complete with timing data, sub-agent count, and quality notes recorded.

Step 4: Design the Sequential-Dependent Task

Now write a prompt where each step explicitly depends on the previous one. This forces Cowork into serial execution.

Process the files in this folder one at a time, in alphabetical order. For the first file, write a summary. For each subsequent file, compare it with the previous file's summary and note what changed. Build a running narrative in sequential-results.md where each new entry references findings from the entry before it. The final entry should be a synthesis of the entire sequence.

This task can't be parallelised because every step depends on the output of the step before it. Cowork's planning engine should recognise this dependency chain.

Checkpoint: You've got a sequential-dependent prompt that forces serial processing.

Step 5: Run the Sequential Test

Open a new Cowork session (or clear the previous context), point it at Parallel-Lab/sequential-test/, paste the sequential prompt, and start your timer.

Observe:

  • Sub-agent behaviour: Does Cowork attempt parallelism and fail, or does it correctly identify the sequential dependency from the start?
  • Processing order: Are files handled in the alphabetical order you specified?
  • Execution plan: Does the plan explicitly show dependencies between steps?

Stop your timer at completion. Record the same metrics as Step 3.

Create the same observation table for consistency. The key comparisons will be:

  • Time: How much longer did sequential processing take?
  • Sub-agents: Did Cowork correctly avoid spawning parallel workers?
  • Quality: Did the cross-referencing produce richer analysis than the independent summaries? (Sequential tasks should produce higher-quality synthesis because each step builds on previous findings.)
  • Execution plan: Did the plan explicitly show the dependency chain? A good plan for this task should make the sequential ordering visible.

If Cowork attempted parallelism on this task despite the explicit dependencies, document that as a significant finding — it means the planning engine failed to detect the sequential requirement.

Checkpoint: Sequential test complete with timing data and execution pattern notes recorded.

Step 6: Run a Hybrid Comparison

For your third data point, design a task that mixes parallel and sequential elements:

First, analyse every file in this folder independently and produce a one-paragraph summary of each (this part can be done in parallel). Then, once all summaries are complete, compare them against each other and write a consolidated analysis identifying the three most significant trends across all files. Save the individual summaries as summaries.md and the consolidated analysis as trends.md.

This tests whether Cowork can correctly identify that phase one (individual summaries) is parallel-friendly while phase two (cross-file comparison) must wait until phase one completes.

Record the same metrics: time, sub-agent behaviour, and output quality.

This is the most architecturally interesting test. Watch for:

  • Phase detection: Does Cowork's execution plan explicitly show two phases — a parallel analysis phase followed by a sequential synthesis phase?
  • Handoff point: How does Cowork transition from the parallel phase to the sequential phase? Does it wait for all sub-agents to complete before starting the comparison, or does it begin comparing as soon as some results are ready?
  • Quality of synthesis: The consolidated analysis should reference specific findings from the individual summaries. If it reads like a generic summary that could've been written without reading the individual results, the cross-referencing step may not be working properly.
  • Token consumption: The hybrid approach uses both parallel and sequential processing. It should consume more tokens than either pure approach — that's the cost of both phases.

Simulated view

Task complete

  • parallel-results.md — 20 independent file analyses (3m 42s)
  • sequential-results.md — running narrative with cross-references (11m 18s)
  • summaries.md + trends.md — hybrid parallel/sequential output (5m 55s)

Three experiments complete — the timing differences between parallel, sequential, and hybrid approaches form the basis of your recommendation document.

Checkpoint: Hybrid test complete. You now have three data points for comparison.

Step 7: Compare and Analyse

Create your comparison document in the results folder. Structure it as follows:

Comparison Table

Metric                          Parallel Test    Sequential Test    Hybrid Test
Total time
Sub-agents spawned
Files processed correctly
Output quality (1-5)
Execution plan accuracy

Analysis Questions to Address

  1. Speed difference: How much faster was the parallel test than the sequential one? Express this as both absolute time and a multiplier (e.g., "2.3x faster"); a small conversion sketch follows this list.

  2. Quality trade-off: Did parallelism affect output quality? Were the independently-produced summaries as good as the ones that referenced each other?

  3. Planning accuracy: Did Cowork correctly identify which tasks could be parallelised and which couldn't? Or did it attempt parallelism on the sequential task?

  4. Hybrid handling: Did Cowork successfully split the hybrid task into a parallel phase and a sequential phase? Where was the boundary?

  5. Token consumption: Which approach used more of your rate limit? Parallel tasks with multiple sub-agents consume tokens for each agent independently.
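
To turn your recorded timings into the multiplier for question 1, a small sketch like this works; the sample values are the simulated times from Step 6, so substitute your own measurements:

import re

def to_seconds(stamp: str) -> int:
    """Parse a timing like '3m 42s' into total seconds."""
    m = re.search(r"(\d+)m", stamp)
    s = re.search(r"(\d+)s", stamp)
    return (int(m.group(1)) if m else 0) * 60 + (int(s.group(1)) if s else 0)

parallel = to_seconds("3m 42s")     # replace with your measured times
sequential = to_seconds("11m 18s")
print(f"Sequential was {sequential - parallel}s slower; "
      f"parallel was {sequential / parallel:.1f}x faster")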

[!]

Don't fabricate results to match expectations. If the parallel test wasn't faster — perhaps because Cowork serialised it anyway, or because the overhead of spawning sub-agents negated the speed gain on a small batch — that's a valid and interesting finding. Your document should describe what actually happened.

Checkpoint: Your comparison document contains a completed table and analysis addressing all five questions.

Step 8: Write Practical Recommendations

Conclude your document with a recommendations section aimed at your team. Based on your experiments, answer:

  • When should the team use parallel-friendly prompts? (e.g., batch file processing, independent document reviews, bulk data extraction)
  • When should they force sequential execution? (e.g., tasks where each step builds on previous findings, narrative documents, iterative refinement)
  • What are the token cost implications? (More sub-agents means faster completion but higher consumption — what's the trade-off for a team of five?)
  • What prompt patterns signal parallelism to Cowork? (Words like "independently," "each file separately," "for every file" versus "then," "next," "based on the previous")
  • When is the hybrid approach optimal? (Tasks with an independent analysis phase followed by a synthesis step — which describes most real-world analytical workflows.)

Decision Framework

Summarise your findings as a decision tree your team can reference:

Is each sub-task independent of the others?
├── Yes → Use parallel-friendly prompt language
│         Expected benefit: 2-5x speed improvement
│         Token cost: Higher (one allocation per sub-agent)
│
├── No, each step depends on the previous →
│         Use sequential prompt language
│         Expected benefit: Higher quality synthesis
│         Token cost: Lower (single agent)
│
└── Mix of independent and dependent steps →
          Use hybrid approach (parallel analysis, sequential synthesis)
          Expected benefit: Best of both worlds
          Token cost: Highest (parallel + sequential)
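
If it helps adoption, the same branching logic can be encoded as a trivial Python function to keep alongside the document; this is a toy sketch with the branch labels (not your measured numbers) baked in:

def recommend_approach(has_independent_steps: bool, has_dependent_steps: bool) -> str:
    """Map a task's dependency shape onto the prompt pattern to use."""
    if has_independent_steps and not has_dependent_steps:
        return "parallel: use 'independently' / 'each file separately' language"
    if has_dependent_steps and not has_independent_steps:
        return "sequential: use 'then' / 'based on the previous' language"
    return "hybrid: parallel analysis phase, then sequential synthesis"

For example, recommend_approach(True, True) returns the hybrid recommendation, matching the third branch of the tree.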

Adapt the specific numbers in this framework based on your experimental data. Your team should be able to look at any new task, identify which pattern it fits, and choose the right approach without running their own experiments.

Checkpoint: Your document includes actionable team recommendations with specific examples of when to use each approach.

Expected Output

Your deliverable is a comparison document containing:

  • Timing data from three controlled experiments (parallel, sequential, hybrid)
  • A structured comparison table with five metrics
  • Analysis of speed, quality, planning accuracy, and token consumption
  • Practical recommendations for when to use parallel versus sequential approaches

This is the kind of evidence-based analysis that turns "Cowork is fast" into "here's exactly when and why parallelism helps, and here's when it doesn't."

Presenting Your Findings

If you share this document with your team, lead with the comparison table and the decision framework. Most people don't want to read the methodology — they want to know "when should I use parallel prompts?" Give them the answer first, then provide the experimental evidence for anyone who wants to validate the recommendation.

Include a one-paragraph executive summary at the top:

"We tested parallel, sequential, and hybrid task execution across 20 files. Parallel execution was [X]x faster for independent file processing but consumed [X]% more of the rate limit. Sequential execution produced higher-quality cross-referencing but was [X]x slower. The hybrid approach — parallel analysis followed by sequential synthesis — provided the best balance of speed and quality for most real-world analytical tasks. We recommend using parallel prompts for batch file processing and sequential prompts for tasks requiring iterative reasoning."

Fill in the actual numbers from your experiments. This summary alone is worth the entire tutorial for busy colleagues.

Extension Challenges

  1. Scale test — Repeat the parallel experiment with 50 files instead of 20. Does the speed multiplier increase linearly, or does Cowork hit a ceiling on the number of sub-agents it spawns?

  2. Error recovery test — Deliberately include one corrupted or empty file in the batch (a quick way to plant one is sketched after this list). How does Cowork handle a sub-agent that fails? Does it retry, skip, or halt the entire task?

  3. Cross-session comparison — Run the parallel test at different times of day (morning versus evening). Token availability and system load may affect sub-agent spawning and execution speed. Document whether you observe any difference.
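
For challenge 2, a couple of lines are enough to plant a plausibly broken file; the file name and the truncated row are arbitrary choices:

from pathlib import Path

# Hypothetical corrupted file: a header plus one truncated, incomplete row.
bad = Path.home() / "Desktop" / "Parallel-Lab" / "batch-test" / "region-21-corrupt.csv"
bad.write_text("Month,Region,Product,Revenue,Units,Target\n2025-01,ghost,Widget,")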