- Training machine learning models that require GPUs
- Processing large datasets that need significant memory
- Running lightweight tasks that can use minimal resources
## Benefits

Submitting workflows directly to dynamic infrastructure provides several advantages:

- Dynamic resource allocation: Choose infrastructure based on workflow requirements at runtime
- Cost efficiency: Use expensive infrastructure only when needed
- Consistency: Ensure workflows always run on the appropriate infrastructure type
- Simplified workflow management: No need to create and maintain deployments for different infrastructure types
## Supported infrastructure

Direct submission of workflows is currently supported for the following infrastructures:

| Infrastructure | Required Package | Decorator |
|---|---|---|
| Docker | `prefect-docker` | `@docker` |
| Kubernetes | `prefect-kubernetes` | `@kubernetes` |
| AWS ECS | `prefect-aws` | `@ecs` |
| Google Cloud Run | `prefect-gcp` | `@cloud_run` |
| Google Vertex AI | `prefect-gcp` | `@vertex_ai` |
| Azure Container Instances | `prefect-azure` | `@azure_container_instance` |
## Prerequisites

Before submitting workflows to specific infrastructure, you need:

- A work pool for each infrastructure type you want to use
- Object storage to associate with your work pool(s)
## Setting up work pools and storage

### Creating a work pool
Create work pools for each infrastructure type using the Prefect CLI, for example `prefect work-pool create my-k8s-pool --type kubernetes`.

### Configuring work pool storage
To enable Prefect to run workflows on remote infrastructure, work pools need an associated storage location to store serialized versions of submitted workflows and results from workflow runs. Configure storage for your work pools with `prefect work-pool storage configure`, using one of the supported storage types.

**Local storage for `@docker`**
When using the `@docker` decorator with a local Docker engine, you can use volume mounts to share data between your Docker container and host machine. To use local storage, ensure that:

- The volume mount path is identical on both the host and container side
- The `LocalFileSystem` block's `basepath` matches the path specified in the volume mount
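A sketch of this setup; the decorator import path, the work pool name, and the `volumes` keyword are assumptions, not confirmed API:

```python
from prefect import flow
from prefect.filesystems import LocalFileSystem
from prefect_docker import docker  # import path assumed

# Use the same path on both sides of the mount so the block's basepath
# resolves identically inside and outside the container.
storage = LocalFileSystem(basepath="/tmp/prefect-storage")

@docker(
    work_pool="my-docker-pool",  # hypothetical work pool name
    volumes=["/tmp/prefect-storage:/tmp/prefect-storage"],  # host:container
)
@flow
def my_docker_flow() -> str:
    return "ran in a local Docker container"
```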
## Running infrastructure-bound flows

An infrastructure-bound flow supports three execution modes: direct calling, `.submit()`, and `.submit_to_work_pool()`. Each mode targets a different use case, depending on whether you need blocking or non-blocking execution and whether the submitting machine has direct access to the target infrastructure.
| Method | Blocking | Requires local infrastructure access | Requires a running worker |
|---|---|---|---|
| Direct call | Yes | Yes | No |
| `.submit()` | No | Yes | No |
| `.submit_to_work_pool()` | No | No | Yes |
### Direct call (blocking)

Calling an infrastructure-bound flow directly submits it to remote infrastructure and blocks until the run completes. Prefect spins up a temporary local worker to create the infrastructure and monitor the run. For example, given a local flow `my_flow` and a Kubernetes-bound flow `my_remote_flow`, calling `my_flow()` executes locally, while calling `my_remote_flow()` submits a Kubernetes job and blocks until the job finishes.
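A sketch of those two flows; the decorator import path and the work pool name are assumptions:

```python
from prefect import flow
from prefect_kubernetes import kubernetes  # import path assumed

@flow
def my_flow() -> str:
    return "ran in the local process"

@kubernetes(work_pool="my-k8s-pool")  # hypothetical work pool name
@flow
def my_remote_flow() -> str:
    return "ran in a Kubernetes job"

if __name__ == "__main__":
    my_flow()         # runs locally, like any other flow
    my_remote_flow()  # blocks until the Kubernetes job completes
```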
### Non-blocking submission with `.submit()`
Use .submit() when you want to submit a flow to remote infrastructure without blocking the caller. Like a direct call, .submit() spins up a temporary local worker to create the infrastructure, but it returns a PrefectFlowRunFuture immediately so you can continue running other work.
Use .submit() when:
- The submitting machine has access to the target infrastructure (for example, it can connect to the Kubernetes cluster or has permissions to create an ECS task)
- You want to run multiple infrastructure-bound flows concurrently
- You don’t have a worker already running for the work pool
The orchestrator flow continues executing and only blocks when it calls `.result()` on each future.
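A sketch of such an orchestrator; the flow names, parameters, and decorator import path are hypothetical:

```python
from prefect import flow
from prefect_kubernetes import kubernetes  # import path assumed

@kubernetes(work_pool="my-k8s-pool")  # hypothetical work pool name
@flow
def train_model(dataset: str) -> str:
    return f"model for {dataset}"

@flow
def orchestrator() -> list[str]:
    # Each .submit() returns a PrefectFlowRunFuture immediately, so the
    # three runs are created concurrently on remote infrastructure.
    futures = [train_model.submit(dataset=d) for d in ("a.csv", "b.csv", "c.csv")]
    # Only now does the orchestrator block, once per future.
    return [f.result() for f in futures]
```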
### Submitting to a work pool with `.submit_to_work_pool()`
Use .submit_to_work_pool() when you want to submit a flow to remote infrastructure but the submitting machine does not have direct access to create that infrastructure. Instead of spinning up a local worker, this method creates a flow run and places it in the work pool for an already-running worker to pick up.
Use .submit_to_work_pool() when:
- The submitting machine cannot connect to the target infrastructure (for example, it cannot reach the Kubernetes cluster or lacks permissions to create an ECS task)
- You already have a worker running that polls the target work pool
- You want to separate the submission environment from the execution environment
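A minimal sketch; the decorator import path and work pool name are assumptions:

```python
from prefect import flow
from prefect_kubernetes import kubernetes  # import path assumed

@kubernetes(work_pool="my-k8s-pool")  # hypothetical work pool name
@flow
def my_remote_flow() -> str:
    return "ran via a worker"

if __name__ == "__main__":
    # Creates a flow run in the work pool; an already-running worker
    # picks it up, so this machine never touches the Kubernetes API.
    future = my_remote_flow.submit_to_work_pool()
    print(future.result())  # blocks until the worker completes the run
```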
Before calling `.submit_to_work_pool()`, start a worker that polls the target work pool.
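For example, for a Kubernetes work pool (the pool name is hypothetical):

```shell
prefect worker start --pool "my-k8s-pool"
```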
### Working with `PrefectFlowRunFuture`
Both .submit() and .submit_to_work_pool() return a PrefectFlowRunFuture. Use this object to check the status of the flow run, wait for it to finish, or retrieve the result.
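A sketch of working with the future. Beyond `.result()`, the `wait` method and `state` attribute shown here are assumptions by analogy with Prefect's task futures:

```python
from prefect import flow
from prefect_kubernetes import kubernetes  # import path assumed

@kubernetes(work_pool="my-k8s-pool")  # hypothetical work pool name
@flow
def my_remote_flow() -> str:
    return "done"

future = my_remote_flow.submit()
future.wait(timeout=600)   # wait up to 10 minutes for the run to finish
print(future.state)        # inspect the flow run's current state
print(future.result())     # return value of the flow; raises if the run failed
```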
**Parameters must be serializable**

Parameters passed to infrastructure-bound flows are serialized with `cloudpickle` so that they can be transported to the destination infrastructure. Most Python objects can be serialized with `cloudpickle`, but objects like database connections cannot be. For parameters that cannot be serialized, create the object inside your infrastructure-bound workflow.

## Customizing infrastructure configuration

You can override the default configuration by providing additional kwargs to the infrastructure decorator.

## Including files in the bundle
When a flow runs on remote infrastructure, your code is serialized and sent to the execution environment. However, non-Python files such as configuration files, data files, or model artifacts are not included by default. Use the `include_files` parameter on any infrastructure decorator to bundle additional files alongside your flow. The `include_files` parameter accepts a list of relative paths and glob patterns; paths are resolved relative to the directory containing the flow file.
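For example, a sketch bundling a config file and a data directory alongside the flow; the file names, decorator import path, and work pool are hypothetical:

```python
from prefect import flow
from prefect_docker import docker  # import path assumed

@docker(
    work_pool="my-docker-pool",  # hypothetical work pool name
    include_files=["config.yaml", "data/", "*.json"],
)
@flow
def my_flow() -> str:
    # config.yaml, everything under data/, and top-level JSON files
    # travel with the serialized flow to the execution environment.
    return "done"
```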
### Supported patterns
| Pattern | Description |
|---|---|
| `"config.yaml"` | A single file |
| `"data/"` | All files in a directory (recursive) |
| `"*.yaml"` | Glob pattern matching files in the flow directory |
| `"data/**/*.csv"` | Recursive glob pattern |
| `"!*.test.py"` | Negation pattern to exclude previously matched files |
Negation patterns (prefixed with `!`) remove files already matched by earlier patterns, which is useful for carving exceptions, such as a test `fixtures/` directory, out of an otherwise broad glob.
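For instance, a sketch that includes all YAML files except those under a hypothetical `fixtures/` directory; the decorator import path and work pool name are assumptions:

```python
from prefect import flow
from prefect_docker import docker  # import path assumed

@docker(
    work_pool="my-docker-pool",                   # hypothetical work pool name
    include_files=["**/*.yaml", "!fixtures/**"],  # include broadly, then carve out
)
@flow
def my_flow() -> str:
    return "done"
```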
### Filtering with `.prefectignore`

If a `.prefectignore` file exists in the flow file's directory or at the project root (detected via `pyproject.toml`), its patterns are applied to filter out matching files. The `.prefectignore` file uses gitignore-style syntax.
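For example, a minimal `.prefectignore` (contents are illustrative):

```
# gitignore-style patterns, one per line
*.log
tmp/
secrets/*.env
```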
Files matching a `.prefectignore` pattern are excluded from the bundle even if they match an `include_files` pattern.
### Default exclusions

Certain common directories and file types are always excluded from directory and glob collection, even without a `.prefectignore` file:

- `__pycache__/`, `*.pyc`, `*.pyo`
- `.git/`, `.hg/`, `.svn/`
- `node_modules/`, `.venv/`, `venv/`
- `.idea/`, `.vscode/`
- `.DS_Store`, `Thumbs.db`

Hidden files and directories (names starting with `.`) are also excluded when collecting directories.
## Configuring bundle launchers

By default, Prefect uses `uv run` to execute bundle upload and execution commands. If your execution environment already has the required dependencies installed (for example, a custom Docker image, a Poetry-managed environment, or a system-level Python interpreter), you can override the default launcher to skip the `uv run` wrapper entirely.
You can configure launchers at two levels:
- Per-flow: using the `launcher` parameter on an infrastructure decorator
- Per-work-pool: using CLI flags during storage configuration
### Per-flow launcher override

Pass the `launcher` parameter to any infrastructure decorator to override the command prefix used for both bundle upload and execution.
The `launcher` parameter accepts:

- A `list[str]` that applies to both upload and execution (for example, `["python"]` or `["poetry", "run", "python"]`)
- A `dict` with `"upload"` and/or `"execution"` keys when you need different launchers for each phase
Overriding the launcher disables the default `uv run` behavior, including automatic dependency installation via `--with` flags. Ensure your execution environment has all required dependencies pre-installed.
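A sketch of the dict form; the decorator import path and work pool name are assumptions, and the execution environment is assumed to have Poetry and all dependencies available:

```python
from prefect import flow
from prefect_docker import docker  # import path assumed

@docker(
    work_pool="my-docker-pool",  # hypothetical work pool name
    # Upload runs under plain python; execution runs inside Poetry's env.
    launcher={"upload": ["python"], "execution": ["poetry", "run", "python"]},
)
@flow
def my_flow() -> str:
    return "done"
```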
You can also override the launcher with .with_options():
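A sketch; this assumes `.with_options()` forwards the decorator's `launcher` parameter, and the flow and pool names are hypothetical:

```python
from prefect import flow
from prefect_docker import docker  # import path assumed

@docker(work_pool="my-docker-pool")  # hypothetical work pool name
@flow
def my_flow() -> str:
    return "done"

# Override the launcher for submissions made through this variant only.
system_python_flow = my_flow.with_options(launcher=["python"])
```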
### Per-work-pool launcher via CLI

Configure a launcher for all flows that use a work pool by passing launcher flags during storage configuration. For example, passing `--launcher python` sets `python` as the launcher for both the upload and execution steps on the work pool. To pass additional arguments to the launcher executable, use `--launcher-arg` (repeatable).
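A hedged sketch; the storage subcommand name and its other required flags vary by storage type and are abbreviated here:

```shell
# Set `python` as the launcher for both phases on the pool's storage.
prefect work-pool storage configure s3 --launcher python

# Pass an extra argument to the launcher executable (flag is repeatable).
prefect work-pool storage configure s3 --launcher python --launcher-arg "-u"
```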
### Separate upload and execution launchers via CLI

To configure different launchers for the upload and execution phases, use the `--upload-launcher` and `--execution-launcher` flags:
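A sketch under the same caveats (the storage subcommand and its other required flags are abbreviated):

```shell
prefect work-pool storage configure s3 \
    --upload-launcher python \
    --execution-launcher poetry
```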
You can also combine `--launcher` with phase-specific overrides. The shared launcher serves as the base, and phase-specific flags replace the executable for that phase, so that, for example, the upload step uses `python` while execution uses `poetry run python`.
### Inspecting launcher configuration

After configuring a launcher, verify the storage settings with the storage inspection command. Use `--output json` for machine-readable output that includes the launcher configuration for both upload and execution steps.
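For example (the `inspect` subcommand, its arguments, and the pool name are assumptions based on the surrounding text):

```shell
prefect work-pool storage inspect "my-k8s-pool" --output json
```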
When a per-flow launcher override is provided, it takes precedence over the work pool’s configured launcher for that flow submission.
## Further reading
- Work pools concept page
- Manage work pools