# Managing concurrency
This guide covers managing concurrency across Dagster assets, jobs, and instances to help prevent performance problems and downtime.
When your pipelines interact with rate-limited APIs, shared databases, or resource-constrained systems, it is a good idea to limit how many operations execute simultaneously. Dagster provides several mechanisms for managing concurrency at different levels of granularity.
| Mechanism | Scope | Protects against | Use when |
|---|---|---|---|
| Run queue limits | Deployment | Too many simultaneous runs | You want a global cap on how many runs are in flight at once |
| Concurrency pools | Cross-run | Overloading shared resources | Multiple runs touch the same rate-limited API or shared database |
| Run executor limits | Single run | Memory/CPU exhaustion | A single run can launch more parallel ops than the host can handle |
| Run tag limits | Single run | Resource contention by category | You need per-category limits on ops within a run |
| Branch deployment concurrency limits | All branch deployments | Branch deployments using too many resources | You are managing multi-developer teams (Dagster+ only) |
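The two single-run mechanisms in the table, run executor limits and run tag limits, are set in a job's run config rather than at the deployment level. A minimal sketch, assuming the default multiprocess executor (the `database`/`redshift` tag key and value here are illustrative):

```yaml
execution:
  config:
    multiprocess:
      # Run executor limit: at most 4 ops from this run execute at once
      max_concurrent: 4
      # Run tag limit: among ops tagged database=redshift,
      # at most 2 execute at once within this run
      tag_concurrency_limits:
        - key: "database"
          value: "redshift"
          limit: 2
```

Because these limits are scoped to a single run, two runs launched together could still execute up to four ops tagged `database=redshift` in total; use a concurrency pool when the limit must hold across runs.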
## Combining mechanisms to manage concurrency
You can combine the mechanisms listed in the table above to manage concurrency. For example:
- Pools + run executor limits: Protect external resources while also limiting local parallelism
- Run queue + pools: Limit total runs AND protect specific resources within those runs
- Tag limits + pools: Fine-grained control within runs plus cross-run protection
For example, the following `dagster.yaml` combines a deployment-wide run queue limit with a default limit for concurrency pools:

```yaml
concurrency:
  runs:
    max_concurrent_runs: 10
  pools:
    default_limit: 3
```