Overview

flytectl create registers new resources with FlyteAdmin.
flytectl create <resource> -p <project> -d <domain> [flags]

Subcommands

Subcommand          Aliases       Description
create project      projects      Register a new project
create execution    executions    Launch a workflow or task execution

create project

Register a new Flyte project.
# Create a project using flags
flytectl create project \
  --name flytesnacks \
  --id flytesnacks \
  --description "flytesnacks description" \
  --labels app=flyte

# Create a project from a YAML definition file
flytectl create project --file project.yaml
Project definition file (project.yaml):
id: "project-unique-id"
name: "Name"
labels:
  values:
    app: flyte
description: "Some description for the project."
The project name must not contain whitespace characters.
Flags:
Flag             Description
--name           Display name of the project
--id             Unique identifier for the project
--description    Human-readable description
--labels         Key=value labels (repeatable)
--file           Path to a YAML project definition file
--dryRun         Print the request without executing it

create execution

Launch a workflow or task execution. The recommended approach is to first generate an execution spec file using flytectl get task or flytectl get launchplan, then pass that file to create execution.
Step 1: Generate an execution spec file

From a task:
flytectl get task -d development -p flytesnacks \
  core.control_flow.merge_sort.merge --version v2 \
  --execFile execution_spec.yaml
From a launch plan:
flytectl get launchplan -d development -p flytesnacks \
  core.control_flow.merge_sort.merge_sort \
  --execFile execution_spec.yaml
The generated file looks similar to:
iamRoleARN: ""
inputs:
  sorted_list1:
  - 0
  sorted_list2:
  - 0
kubeServiceAcct: ""
targetDomain: ""
targetProject: ""
task: core.control_flow.merge_sort.merge
version: "v2"
Step 2: Edit the inputs (optional)

Modify the spec file to supply real input values:
iamRoleARN: 'arn:aws:iam::12345678:role/defaultrole'
inputs:
  sorted_list1:
  - 2
  - 4
  - 6
  sorted_list2:
  - 1
  - 3
  - 5
kubeServiceAcct: ""
targetDomain: ""
targetProject: ""
task: core.control_flow.merge_sort.merge
version: "v2"
You can also add environment variables:
envs:
  foo: bar
Or target a specific execution cluster:
targetExecutionCluster: "my-gpu-cluster"
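If you prefer to edit the spec programmatically rather than by hand, the file is plain YAML and can be rewritten with any YAML library. A minimal sketch using PyYAML (PyYAML is an assumption, not something flytectl requires; the input values, env var, and cluster name are the illustrative ones from above):

```python
import yaml  # PyYAML (assumption); any YAML library works

# A generated execution spec, as produced by `flytectl get task --execFile ...`
SPEC = """\
iamRoleARN: ""
inputs:
  sorted_list1:
  - 0
  sorted_list2:
  - 0
kubeServiceAcct: ""
targetDomain: ""
targetProject: ""
task: core.control_flow.merge_sort.merge
version: "v2"
"""

spec = yaml.safe_load(SPEC)

# Replace the placeholder inputs with real values
spec["inputs"]["sorted_list1"] = [2, 4, 6]
spec["inputs"]["sorted_list2"] = [1, 3, 5]

# Optional extras described above
spec["envs"] = {"foo": "bar"}
spec["targetExecutionCluster"] = "my-gpu-cluster"

# Serialize the edited spec for `flytectl create execution --execFile ...`
edited = yaml.safe_dump(spec)
print(edited)
```

Writing `edited` back to `execution_spec.yaml` gives you a spec ready to pass to `flytectl create execution`.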
Step 3: Launch the execution

flytectl create execution \
  --execFile execution_spec.yaml \
  -p flytesnacks -d staging \
  --targetProject flytesnacks
The source project/domain and target project/domain can differ.

Relaunch an execution

flytectl create execution --relaunch ffb31066a0f8b4d52b77 \
  -p flytesnacks -d development

Recover an execution

Recreate an execution from the last known failure point:
flytectl create execution --recover ffb31066a0f8b4d52b77 \
  -p flytesnacks -d development

Named executions

Assign a custom name to an execution (must be unique within project+domain):
flytectl create execution --recover ffb31066a0f8b4d52b77 \
  -p flytesnacks -d development my_custom_name

Generic / struct input types

For tasks that accept struct or dataclass inputs, generate the spec first to see the placeholder structure, then populate it:
iamRoleARN: "arn:aws:iam::123456789:role/dummy"
inputs:
  "x":
    "x": 2
    "y": ydatafory
    "z":
      1: "foo"
      2: "bar"
  "y":
    "x": 3
    "y": ydataforx
    "z":
      3: "buzz"
      4: "lightyear"
kubeServiceAcct: ""
targetDomain: ""
targetProject: ""
task: core.type_system.custom_objects.add
version: v3

Assign to a cluster pool

flytectl create execution \
  --execFile execution_spec.yaml \
  -p flytesnacks -d development \
  --clusterPool my-gpu-cluster
Flags:
Flag                        Description
--execFile                  Path to the execution spec YAML file
--relaunch                  Execution ID to relaunch
--recover                   Execution ID to recover from last failure
--targetProject             Project in which to create the execution
--targetDomain              Domain in which to create the execution
--targetExecutionCluster    Cluster to run the execution on
--iamRoleARN                IAM role ARN for auth
--kubeServiceAcct           Kubernetes service account for auth
--version                   Version of the workflow or task
--clusterPool               Cluster pool to assign the execution to
--overwriteCache            Skip cached results and recompute all outputs
--dryRun                    Print the request without executing it
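When scripting many launches, it can help to assemble the command line from the documented flags before invoking it. A sketch of such a wrapper (the helper name and keyword arguments are hypothetical conveniences, not part of flytectl; only flags listed above are used):

```python
import subprocess  # needed only if you actually run the command


def build_create_execution_cmd(exec_file, project, domain, *,
                               overwrite_cache=False, dry_run=False):
    """Assemble a `flytectl create execution` invocation.

    Hypothetical helper: the function name and keyword arguments are
    illustrative; the flags themselves come from the table above.
    """
    cmd = ["flytectl", "create", "execution",
           "--execFile", exec_file, "-p", project, "-d", domain]
    if overwrite_cache:
        cmd.append("--overwriteCache")
    if dry_run:
        cmd.append("--dryRun")
    return cmd


cmd = build_create_execution_cmd("execution_spec.yaml",
                                 "flytesnacks", "development",
                                 dry_run=True)
# To actually launch: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

Using `--dryRun` first lets the wrapper print and validate the request without creating an execution.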