Configuration options
This page lists all of the available settings in the Nextflow configuration.
Unscoped options
bucketDir
The remote work directory used by hybrid workflows. Equivalent to the -bucket-dir option of the run command.

cleanup
If true, all files in the work directory are automatically deleted upon successful completion of a run.

Warning
The use of the cleanup option will prevent the use of the resume feature on subsequent executions of that pipeline run.

Warning
The cleanup option is not supported for remote work directories, such as Amazon S3, Google Cloud Storage, and Azure Blob Storage.

dumpHashes
If true, dump task hash keys in the log file, for debugging purposes. Equivalent to the -dump-hashes option of the run command.

outputDir
New in version 24.10.0.
Defines the pipeline output directory. Equivalent to the -output-dir option of the run command.

resume
If true, enable the use of previously cached task executions. Equivalent to the -resume option of the run command.

workDir
Defines the pipeline work directory. Equivalent to the -work-dir option of the run command.
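For example, a nextflow.config snippet using a few of these options might look like the following; the paths and values are illustrative:

// Illustrative values for the unscoped options described above
workDir    = '/scratch/work'   // pipeline work directory (-work-dir)
outputDir  = 'results'         // pipeline output directory (-output-dir), requires 24.10.0 or later
dumpHashes = true              // dump task hash keys in the log file
cleanup    = true              // delete work files after a successful run (disables resume)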
apptainer
The apptainer scope controls how Apptainer containers are executed by Nextflow.
The following settings are available:
apptainer.autoMounts
Automatically mount host paths in the executed container (default: true). It requires the user bind control feature to be enabled in your Apptainer installation.

apptainer.cacheDir
The directory where remote Apptainer images are stored. When using a computing cluster it must be a shared folder accessible to all compute nodes.

apptainer.enabled
Execute tasks with Apptainer containers (default: false).

apptainer.engineOptions
Specify additional options supported by the Apptainer engine i.e. apptainer [OPTIONS].

apptainer.envWhitelist
Comma separated list of environment variable names to be included in the container environment.

apptainer.libraryDir
Directory where remote Apptainer images are retrieved. When using a computing cluster it must be a shared folder accessible to all compute nodes.

apptainer.noHttps
Pull the Apptainer image with http protocol (default: false).

apptainer.ociAutoPull
New in version 23.12.0-edge.
When enabled, OCI (and Docker) container images are pulled and converted to the SIF format by the Apptainer run command, instead of Nextflow (default: false).

Note
Leave ociAutoPull disabled if you want to build a Singularity/Apptainer native image with Wave (see the Build Singularity native images section).

apptainer.pullTimeout
The amount of time the Apptainer pull can last, exceeding which the process is terminated (default: 20 min).

apptainer.registry
The registry from where Docker images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.

apptainer.runOptions
Specify extra command line options supported by apptainer exec.
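For example, to run tasks in Apptainer containers using a shared image cache, a configuration along these lines could be used (the cache path is illustrative):

apptainer {
    enabled    = true
    autoMounts = true
    cacheDir   = '/shared/apptainer-cache'   // must be reachable from all compute nodes
}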
Read the Apptainer page to learn more about how to use Apptainer containers with Nextflow.
aws
The aws scope controls the interactions with AWS, including AWS Batch and S3. For example:
aws {
accessKey = '<YOUR S3 ACCESS KEY>'
secretKey = '<YOUR S3 SECRET KEY>'
region = 'us-east-1'
client {
maxConnections = 20
connectionTimeout = 10000
uploadStorageClass = 'INTELLIGENT_TIERING'
storageEncryption = 'AES256'
}
batch {
cliPath = '/home/ec2-user/miniconda/bin/aws'
maxTransferAttempts = 3
delayBetweenAttempts = '5 sec'
}
}
Tip
This scope can also be used to configure access to S3-compatible storage outside of AWS, such as Ceph and MinIO.
Read the Amazon Web Services and Amazon S3 pages for more information.
The following settings are available:
aws.accessKey
AWS account access key.

aws.profile
New in version 22.12.0-edge.
AWS profile from ~/.aws/credentials.

aws.region
AWS region (e.g. us-east-1).

aws.secretKey
AWS account secret key.

aws.batch.cliPath
The path where the AWS command line tool is installed in the host AMI.

aws.batch.delayBetweenAttempts
Delay between download attempts from S3 (default: 10 sec).

aws.batch.executionRole
New in version 23.12.0-edge.
The AWS Batch Execution Role ARN that needs to be used to execute the Batch Job. This is mandatory when using the AWS Fargate platform type. See the AWS documentation for more details.

aws.batch.jobRole
The AWS Batch Job Role ARN that needs to be used to execute the Batch Job.

aws.batch.logsGroup
New in version 22.09.0-edge.
The name of the logs group used by Batch Jobs (default: /aws/batch/job).

aws.batch.maxParallelTransfers
Max parallel upload/download transfer operations per job (default: 4).

aws.batch.maxSpotAttempts
New in version 22.04.0.
Changed in version 24.08.0-edge: The default value was changed from 5 to 0.
Max number of execution attempts of a job interrupted by an EC2 Spot reclaim event (default: 0).

aws.batch.maxTransferAttempts
Max number of download attempts from S3 (default: 1).

aws.batch.platformType
New in version 23.12.0-edge.
Allows specifying the compute platform type used by AWS Batch, which can be either ec2 or fargate. See the AWS documentation to learn more about the AWS Fargate platform type for AWS Batch.

aws.batch.retryMode
The retry mode configuration setting, to accommodate rate-limiting on AWS services (default: standard, other options: legacy, adaptive); this handling is delegated to AWS. To have Nextflow handle retries instead, use built-in.

aws.batch.schedulingPriority
New in version 23.01.0-edge.
The scheduling priority for all tasks when using fair-share scheduling for AWS Batch (default: 0).

aws.batch.shareIdentifier
New in version 22.09.0-edge.
The share identifier for all tasks when using fair-share scheduling for AWS Batch.

aws.batch.terminateUnschedulableJobs
New in version 25.03.0-edge.
When true, jobs that cannot be scheduled for lack of resources or misconfiguration are terminated automatically (default: false). The pipeline may complete with an error status depending on the error strategy defined for the corresponding jobs.

aws.batch.volumes
One or more container mounts. Mounts can be specified in simple form e.g. /some/path or canonical form e.g. /host/path:/mount/path[:ro|rw]. Multiple mounts can be specified by separating them with a comma or using a list object.

aws.client.anonymous
Allow the access of public S3 buckets without the need to provide AWS credentials (default: false). Any service that does not accept unsigned requests will return a service access error.

aws.client.s3Acl
Allow the setting of predefined bucket permissions, also known as canned ACL. Permitted values are Private, PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead, BucketOwnerFullControl, and AwsExecRead (default: none). See the Amazon docs for details.

aws.client.connectionTimeout
The amount of time to wait (in milliseconds) when initially establishing a connection before timing out (default: 10000).

aws.client.endpoint
The AWS S3 API entry point e.g. https://s3-us-west-1.amazonaws.com. The endpoint must include the protocol prefix e.g. https://.

aws.client.glacierAutoRetrieval
Deprecated since version 24.02.0-edge: Glacier auto-retrieval is no longer supported. Instead, consider using the AWS CLI to restore any Glacier objects before or at the beginning of your pipeline (i.e. in a Nextflow process).
Enable auto retrieval of S3 objects with a Glacier storage class (default: false).

aws.client.glacierExpirationDays
Deprecated since version 24.02.0-edge.
The time, in days, between when an object is restored to the bucket and when it expires (default: 7).

aws.client.glacierRetrievalTier
Deprecated since version 24.02.0-edge.
The retrieval tier to use when restoring objects from Glacier, one of [Expedited, Standard, Bulk].

aws.client.maxConcurrency
New in version 25.06.0-edge.
The maximum number of concurrent S3 transfers used by the S3 transfer manager. By default, this setting is determined by aws.client.targetThroughputInGbps. Modifying this value can affect the amount of memory used for S3 transfers.

aws.client.maxConnections
The maximum number of open HTTP connections used by the S3 transfer manager (default: 50).

aws.client.maxErrorRetry
The maximum number of retry attempts for failed retryable requests (default: -1).

aws.client.maxNativeMemory
New in version 25.06.0-edge.
The maximum native memory used by the S3 transfer manager. By default, this setting is determined by aws.client.targetThroughputInGbps.

aws.client.minimumPartSize
New in version 25.06.0-edge.
The minimum part size used by the S3 transfer manager for multi-part uploads (default: 8 MB).

aws.client.multipartThreshold
New in version 25.06.0-edge.
The object size threshold used by the S3 transfer manager for performing multi-part uploads (default: same as aws.client.minimumPartSize).

aws.client.protocol
Deprecated since version 25.06.0-edge: This option is no longer supported.
The protocol to use when connecting to AWS. Can be http or https (default: 'https').

aws.client.proxyHost
The proxy host to connect through.

aws.client.proxyPort
The port to use when connecting through a proxy.

aws.client.proxyScheme
New in version 25.06.0-edge.
The protocol scheme to use when connecting through a proxy. Can be http or https (default: 'http').

aws.client.proxyUsername
The user name to use when connecting through a proxy.

aws.client.proxyPassword
The password to use when connecting through a proxy.

aws.client.requesterPays
New in version 24.05.0-edge.
Use Requester Pays for S3 buckets (default: false).

aws.client.s3PathStyleAccess
Use the path-based access model to access objects in S3-compatible storage systems (default: false).

aws.client.signerOverride
Deprecated since version 25.06.0-edge: This option is no longer supported.
The name of the signature algorithm to use for signing requests made by the client.

aws.client.socketSendBufferSizeHint
Deprecated since version 25.06.0-edge: This option is no longer supported.
The size hint (in bytes) for the low level TCP send buffer (default: 0).

aws.client.socketRecvBufferSizeHint
Deprecated since version 25.06.0-edge: This option is no longer supported.
The size hint (in bytes) for the low level TCP receive buffer (default: 0).

aws.client.socketTimeout
The amount of time to wait (in milliseconds) for data to be transferred over an established, open connection before the connection is timed out (default: 50000).

aws.client.storageEncryption
The S3 server side encryption to be used when saving objects on S3. Can be AES256 or aws:kms (default: none).

aws.client.storageKmsKeyId
New in version 22.05.0-edge.
The AWS KMS key Id to be used to encrypt files stored in the target S3 bucket.

aws.client.targetThroughputInGbps
New in version 25.06.0-edge.
The target network throughput (in Gbps) used by the S3 transfer manager (default: 10). This setting is not used when aws.client.maxConcurrency and aws.client.maxNativeMemory are specified.

aws.client.transferManagerThreads
New in version 25.06.0-edge.
Number of threads used by the S3 transfer manager (default: 10).

aws.client.userAgent
Deprecated since version 25.06.0-edge: This option is no longer supported.
The HTTP user agent header passed with all HTTP requests.

aws.client.uploadChunkSize
Deprecated since version 25.06.0-edge: This option is no longer supported.
The size of a single part in a multipart upload (default: 100 MB).

aws.client.uploadMaxAttempts
Deprecated since version 25.06.0-edge: This option is no longer supported.
The maximum number of upload attempts after which a multipart upload returns an error (default: 5).

aws.client.uploadMaxThreads
Deprecated since version 25.06.0-edge: This option is no longer supported.
The maximum number of threads used for multipart upload (default: 10).

aws.client.uploadRetrySleep
Deprecated since version 25.06.0-edge: This option is no longer supported.
The time to wait after a failed upload attempt to retry the part upload (default: 500ms).

aws.client.uploadStorageClass
The S3 storage class applied to stored objects. Can be STANDARD, STANDARD_IA, ONEZONE_IA, or INTELLIGENT_TIERING (default: STANDARD).
azure
The azure scope allows you to configure the interactions with Azure, including Azure Batch and Azure Blob Storage.
Read the Azure page for more information.
The following settings are available:
azure.activeDirectory.servicePrincipalId
The service principal client ID. Defaults to environment variable AZURE_CLIENT_ID.

azure.activeDirectory.servicePrincipalSecret
The service principal client secret. Defaults to environment variable AZURE_CLIENT_SECRET.

azure.activeDirectory.tenantId
The Azure tenant ID. Defaults to environment variable AZURE_TENANT_ID.

azure.azcopy.blobTier
The blob access tier used by azcopy to upload files to Azure Blob Storage. Valid options are None, Hot, or Cool (default: None).

azure.azcopy.blockSize
The block size (in MB) used by azcopy to transfer files between Azure Blob Storage and compute nodes (default: 4).

azure.batch.accountName
The batch service account name. Defaults to environment variable AZURE_BATCH_ACCOUNT_NAME.

azure.batch.accountKey
The batch service account key. Defaults to environment variable AZURE_BATCH_ACCOUNT_KEY.

azure.batch.allowPoolCreation
Enable the automatic creation of batch pools specified in the Nextflow configuration file (default: false).

azure.batch.autoPoolMode
Enable the automatic creation of batch pools depending on the pipeline resources demand (default: true).

azure.batch.copyToolInstallMode
Specify where the azcopy tool used by Nextflow is installed. When node is specified, it is copied once during the pool creation. When task is specified, it is installed for each task execution. When off is specified, the azcopy tool is not installed (default: node).

azure.batch.deleteJobsOnCompletion
Delete all jobs when the workflow completes (default: false).
Changed in version 23.08.0-edge: Default value was changed from true to false.

azure.batch.deletePoolsOnCompletion
Delete all compute node pools when the workflow completes (default: false).

azure.batch.deleteTasksOnCompletion
New in version 23.08.0-edge.
Delete each task when it completes (default: true).
Although this setting is enabled by default, failed tasks will not be deleted unless it is explicitly enabled. This way, the default behavior is that successful tasks are deleted while failed tasks are preserved for debugging purposes.

azure.batch.endpoint
The batch service endpoint e.g. https://nfbatch1.westeurope.batch.azure.com.

azure.batch.location
The name of the batch service region, e.g. westeurope or eastus2. This is not needed when the endpoint is specified.

azure.batch.jobMaxWallClockTime
New in version 25.04.0.
The maximum elapsed time that jobs may run, measured from the time they are created. If jobs do not complete within this time limit, the Batch service terminates them and any tasks still running (default: 30d).

azure.batch.terminateJobsOnCompletion
New in version 23.05.0-edge.
When the workflow completes, set all jobs to terminate on task completion (default: true).

azure.batch.pools.<name>.autoScale
Enable the autoscaling feature for the pool identified with <name>.

azure.batch.pools.<name>.fileShareRootPath
If mounting File Shares, this is the internal root mounting point. Must be /mnt/resource/batch/tasks/fsmounts for CentOS nodes or /mnt/batch/tasks/fsmounts for Ubuntu nodes (default is for CentOS).

azure.batch.pools.<name>.lowPriority
Enable the use of low-priority VMs (default: false).

Warning
As of September 30, 2025, Low Priority VMs will no longer be supported in Azure Batch accounts that use Batch Managed mode for pool allocation. You may continue to use this setting to configure Spot VMs in Batch accounts configured with User Subscription mode.

azure.batch.pools.<name>.maxVmCount
Specify the maximum number of virtual machines when using the autoscale option.

azure.batch.pools.<name>.mountOptions
Specify the mount options for mounting the file shares (default: -o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp).

azure.batch.pools.<name>.offer
Specify the offer type of the virtual machine type used by the pool identified with <name> (default: centos-container).

azure.batch.pools.<name>.privileged
Enable the task to run with elevated access. Ignored if runAs is set (default: false).

azure.batch.pools.<name>.publisher
Specify the publisher of the virtual machine type used by the pool identified with <name> (default: microsoft-azure-batch).

azure.batch.pools.<name>.runAs
Specify the username under which the task is run. The user must already exist on each node of the pool.

azure.batch.pools.<name>.scaleFormula
Specify the scale formula for the pool identified with <name>. See the Azure Batch scaling documentation for details.

azure.batch.pools.<name>.scaleInterval
Specify the interval at which to automatically adjust the Pool size according to the autoscale formula. The minimum and maximum value are 5 minutes and 168 hours respectively (default: 10 mins).

azure.batch.pools.<name>.schedulePolicy
Specify the scheduling policy for the pool identified with <name>. It can be either spread or pack (default: spread).

azure.batch.pools.<name>.sku
Specify the ID of the Compute Node agent SKU which the pool identified with <name> supports (default: batch.node.centos 8).

azure.batch.pools.<name>.startTask.script
New in version 24.03.0-edge.
Specify the startTask that is executed as the node joins the Azure Batch node pool.

azure.batch.pools.<name>.startTask.privileged
New in version 24.03.0-edge.
Enable the startTask to run with elevated access (default: false).

azure.batch.pools.<name>.virtualNetwork
New in version 23.03.0-edge.
Specify the subnet ID of a virtual network in which to create the pool.

azure.batch.pools.<name>.vmCount
Specify the number of virtual machines provisioned by the pool identified with <name>.

azure.batch.pools.<name>.vmType
Specify the virtual machine type used by the pool identified with <name>.

azure.batch.poolIdentityClientId
New in version 25.05.0-edge.
Specify the client ID for an Azure managed identity that is available on all Azure Batch node pools. This identity will be used by Fusion to authenticate to Azure storage. If set to 'auto', Fusion will use the first available managed identity.

azure.managedIdentity.clientId
Specify the client ID for an Azure managed identity. See Managed identities for more details. Defaults to environment variable AZURE_MANAGED_IDENTITY_USER.

azure.managedIdentity.system
When true, uses the system-assigned managed identity to authenticate Azure resources. See Managed identities for more details. Defaults to environment variable AZURE_MANAGED_IDENTITY_SYSTEM.

azure.registry.server
Specify the container registry from which to pull the Docker images (default: docker.io).

azure.registry.userName
Specify the username to connect to a private container registry.

azure.registry.password
Specify the password to connect to a private container registry.

azure.retryPolicy.delay
Delay when retrying failed API requests (default: 500ms).

azure.retryPolicy.jitter
Jitter value when retrying failed API requests (default: 0.25).

azure.retryPolicy.maxAttempts
Max attempts when retrying failed API requests (default: 10).

azure.retryPolicy.maxDelay
Max delay when retrying failed API requests (default: 90s).

azure.storage.accountName
The blob storage account name. Defaults to environment variable AZURE_STORAGE_ACCOUNT_NAME.

azure.storage.accountKey
The blob storage account key. Defaults to environment variable AZURE_STORAGE_ACCOUNT_KEY.

azure.storage.sasToken
The blob storage shared access signature token, which can be provided as an alternative to accountKey. Defaults to environment variable AZURE_STORAGE_SAS_TOKEN.

azure.storage.tokenDuration
The duration of the shared access signature token created by Nextflow when the sasToken option is not specified (default: 48h).
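As an illustration, the following sketch enables automatic creation of a named pool on Azure Batch; the account credentials, region, and pool settings are placeholders:

azure {
    storage {
        accountName = '<YOUR STORAGE ACCOUNT NAME>'
        accountKey  = '<YOUR STORAGE ACCOUNT KEY>'
    }
    batch {
        accountName = '<YOUR BATCH ACCOUNT NAME>'
        accountKey  = '<YOUR BATCH ACCOUNT KEY>'
        location    = 'westeurope'
        allowPoolCreation = true
        pools {
            small {
                vmType  = 'Standard_D2_v3'   // placeholder VM size
                vmCount = 4
            }
        }
    }
}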
charliecloud
The charliecloud scope controls how Charliecloud containers are executed by Nextflow.
The following settings are available:
charliecloud.enabled
Execute tasks with Charliecloud containers (default: false).

charliecloud.writeFake
Enable writeFake with Charliecloud (default: true). This allows running containers from storage in writeable mode, using overlayfs. writeFake requires unprivileged overlayfs (Linux kernel >= 5.11). For full support, tmpfs with xattrs in the user namespace (Linux kernel >= 6.6) is required. See the Charliecloud documentation for details.

charliecloud.cacheDir
The directory where remote Charliecloud images are stored. When using a computing cluster it must be a shared folder accessible to all compute nodes.

charliecloud.envWhitelist
Comma separated list of environment variable names to be included in the container environment.

charliecloud.pullTimeout
The amount of time the Charliecloud pull can last, exceeding which the process is terminated (default: 20 min).

charliecloud.runOptions
Specify extra command line options supported by the ch-run command.

charliecloud.temp
Mounts a path of your choice as the /tmp directory in the container. Use the special value auto to create a temporary directory each time a container is created.

charliecloud.registry
The registry from where images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.
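For instance, a configuration enabling Charliecloud with a shared image cache might look like this (the cache path is illustrative):

charliecloud {
    enabled  = true
    cacheDir = '/shared/charliecloud-cache'   // shared folder visible to all compute nodes
}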
Read the Charliecloud page to learn more about how to use Charliecloud containers with Nextflow.
conda
The conda scope controls the creation of Conda environments by the Conda package manager.
The following settings are available:
conda.enabled
Enables the use of Conda environments (default: false).

conda.cacheDir
Defines the path where Conda environments are stored. Ensure the path is accessible from all compute nodes when using a shared file system.

conda.channels
Defines the Conda channels that can be used to resolve Conda packages. Channels can be defined as a list (e.g. ['bioconda','conda-forge']) or a comma separated string (e.g. 'bioconda,conda-forge'). Channel priority decreases from left to right.

conda.createOptions
Defines extra command line options supported by the conda create command. See the Conda documentation for more information.

conda.createTimeout
Defines the amount of time the Conda environment creation can last (default: 20 min). The creation process is terminated when the timeout is exceeded.

conda.useMamba
Uses the mamba binary instead of conda to create the Conda environments (default: false). See the Mamba documentation for more information about Mamba.

conda.useMicromamba
New in version 22.05.0-edge.
Uses the micromamba binary instead of conda to create Conda environments (default: false). See the Micromamba documentation for more information about Micromamba.
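For example, Conda support with a shared cache directory and explicit channels could be configured roughly as follows (the cache path is illustrative):

conda {
    enabled  = true
    cacheDir = '/shared/conda-envs'           // shared path visible to all compute nodes
    channels = ['bioconda', 'conda-forge']    // priority decreases from left to right
    createTimeout = '30 min'
}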
See Conda environments for more information about using Conda environments with Nextflow.
dag
The dag scope controls the workflow diagram generated by Nextflow.
The following settings are available:
dag.enabled
When true enables the generation of the DAG file (default: false).

dag.depth
New in version 23.10.0.
Only supported by the HTML and Mermaid renderers.
Controls the maximum depth at which to render sub-workflows (default: no limit).

dag.direction
New in version 23.10.0.
Supported by the Graphviz, DOT, HTML and Mermaid renderers.
Controls the direction of the DAG, can be 'LR' (left-to-right) or 'TB' (top-to-bottom) (default: 'TB').

dag.file
Graph file name (default: dag-<timestamp>.html).

dag.overwrite
When true overwrites any existing DAG file with the same name (default: false).

dag.verbose
New in version 23.10.0.
Only supported by the HTML and Mermaid renderers.
When false, channel names are omitted, operators are collapsed, and empty workflow inputs are removed (default: false).
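For example, a left-to-right HTML diagram could be requested like this (the file name is illustrative):

dag {
    enabled   = true
    file      = 'dag.html'
    direction = 'LR'
    overwrite = true
}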
Read the Workflow diagram page to learn more about the workflow graph that can be generated by Nextflow.
docker
The docker scope controls how Docker containers are executed by Nextflow.
The following settings are available:
docker.enabled
Enable Docker execution (default: false).

docker.engineOptions
Specify additional options supported by the Docker engine i.e. docker [OPTIONS].

docker.envWhitelist
Comma separated list of environment variable names to be included in the container environment.

docker.fixOwnership
Fix ownership of files created by the docker container (default: false).

docker.legacy
Use command line options removed since Docker 1.10.0 (default: false).

docker.mountFlags
Add the specified flags to the volume mounts e.g. mountFlags = 'ro,Z'.

docker.registry
The registry from where Docker images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.

docker.registryOverride
New in version 25.06.0-edge.
When true, forces the override of the registry name in fully qualified container image names with the registry specified by docker.registry (default: false). This setting allows you to redirect container image pulls from their original registry to a different registry, such as a private mirror or proxy.

docker.remove
Clean-up the container after the execution (default: true). See the Docker documentation for details.

docker.runOptions
Specify extra command line options supported by the docker run command. See the Docker documentation for details.

docker.sudo
Executes the Docker run command as sudo (default: false).

docker.temp
Mounts a path of your choice as the /tmp directory in the container. Use the special value auto to create a temporary directory each time a container is created.

docker.tty
Allocates a pseudo-tty (default: false).
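For example, Docker execution with a private registry and extra run options could be configured along these lines (the registry host and options are placeholders):

docker {
    enabled    = true
    registry   = 'registry.example.com'   // placeholder, no protocol prefix
    runOptions = '--shm-size 2g'          // illustrative extra options passed to docker run
}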
Read the Docker page to learn more about how to use Docker containers with Nextflow.
env
The env scope allows the definition of one or more variables that will be exported into the environment where workflow tasks are executed.
Simply prefix your variable names with the env scope or surround them by curly brackets, as shown below:
env.ALPHA = 'some value'
env.BETA = "$HOME/some/path"
env {
DELTA = 'one more'
GAMMA = "/my/path:$PATH"
}
Note
In the above example, variables like $HOME and $PATH are evaluated when the workflow is launched. If you want these variables to be evaluated during task execution, escape them with \$. This difference is important for variables like $PATH, which may be different in the workflow environment versus the task environment.
Warning
The env scope provides environment variables to tasks, not Nextflow itself. Nextflow environment variables such as NXF_VER should be set in the environment in which Nextflow is launched.
executor
The executor scope controls various executor behaviors.
The following settings are available:
executor.account
New in version 24.04.0.
Used only by the SLURM, LSF, PBS/Torque and PBS Pro executors.
The project or organization account that should be charged for running the pipeline jobs.

executor.cpus
The maximum number of CPUs made available by the underlying system. Used only by the local executor.

executor.dumpInterval
Determines how often to log the executor status (default: 5min).

executor.exitReadTimeout
Used only by grid executors.
Determines how long to wait before returning an error status when a process is terminated but the .exitcode file does not exist or is empty (default: 270 sec).

executor.jobName
Determines the name of jobs submitted to the underlying cluster executor e.g. executor.jobName = { "$task.name - $task.hash" }. Make sure the resulting job name matches the validation constraints of the underlying batch scheduler.
This setting is supported by the following executors: Bridge, Condor, Flux, HyperQueue, Lsf, Moab, Nqsii, Oar, PBS, PBS Pro, SGE, SLURM and Google Batch.

executor.killBatchSize
Determines the number of jobs that can be killed in a single command execution (default: 100).

executor.memory
The maximum amount of memory made available by the underlying system. Used only by the local executor.

executor.name
The name of the executor to be used (default: local).

executor.perCpuMemAllocation
New in version 23.07.0-edge.
Used only by the SLURM executor.
When true, specifies memory allocations for SLURM jobs as --mem-per-cpu <task.memory / task.cpus> instead of --mem <task.memory>.

executor.perJobMemLimit
Used only by the LSF executor.
Specifies Platform LSF per-job memory limit mode (default: false).

executor.perTaskReserve
Used only by the LSF executor.
Specifies Platform LSF per-task memory reserve mode (default: false).

executor.pollInterval
Defines the polling frequency for process termination detection. Default varies for each executor (see below).

executor.queueGlobalStatus
New in version 23.01.0-edge.
Determines how job status is retrieved. When false only the queue associated with the job execution is queried. When true the job status is queried globally i.e. irrespective of the submission queue (default: false).

executor.queueSize
The number of tasks the executor will handle in a parallel manner. A queue size of zero corresponds to no limit. Default varies for each executor (see below).

executor.queueStatInterval
Used only by grid executors.
Determines how often to fetch the queue status from the scheduler (default: 1min).

executor.retry.delay
New in version 22.03.0-edge.
Used only by grid executors.
Delay when retrying failed job submissions (default: 500ms).

executor.retry.jitter
New in version 22.03.0-edge.
Used only by grid executors.
Jitter value when retrying failed job submissions (default: 0.25).

executor.retry.maxAttempt
New in version 22.03.0-edge.
Used only by grid executors.
Max attempts when retrying failed job submissions (default: 3).

executor.retry.maxDelay
New in version 22.03.0-edge.
Used only by grid executors.
Max delay when retrying failed job submissions (default: 30s).

executor.submit.retry.reason
New in version 22.03.0-edge.
Used only by grid executors.
Regex pattern that, when matched, causes a failed submit operation to be retried (default: Socket timed out).

executor.submitRateLimit
Determines the max rate of job submission per time unit, for example '10sec' (10 jobs per second) or '50/2min' (50 jobs every 2 minutes) (default: unlimited).
Some executor settings have different default values depending on the executor.
| Executor | queueSize | pollInterval |
|---|---|---|
| AWS Batch | | |
| Azure Batch | | |
| Google Batch | | |
| Grid Executors | | |
| Kubernetes | | |
| Local | N/A | |
Executor config settings can be applied to specific executors by prefixing the executor name with the symbol $ and using it as a special scope. For example:
// block syntax
executor {
$sge {
queueSize = 100
pollInterval = '30sec'
}
$local {
cpus = 8
memory = '32 GB'
}
}
// dot syntax
executor.$sge.queueSize = 100
executor.$sge.pollInterval = '30sec'
executor.$local.cpus = 8
executor.$local.memory = '32 GB'
fusion
The fusion scope provides advanced configuration for the use of the Fusion file system.
The following settings are available:
fusion.enabled
Enable the Fusion file system (default: false).

fusion.cacheSize
New in version 23.11.0-edge.
Fusion client local cache size limit.

fusion.containerConfigUrl
URL for downloading the container layer provisioning the Fusion client.

fusion.exportStorageCredentials
New in version 23.05.0-edge: Previously named fusion.exportAwsAccessKeys.
When true, credentials for the underlying object storage are exported to the task environment (default: false).

fusion.logLevel
Fusion client log level.

fusion.logOutput
Log output location.

fusion.privileged
New in version 23.10.0.
Enable privileged containers for Fusion (default: true).
Non-privileged use is supported only on Kubernetes with the k8s-fuse-plugin or a similar FUSE device plugin.

fusion.snapshots
New in version 25.03.0-edge.
Currently only supported for AWS Batch.
Enable Fusion snapshotting (preview, default: false). This feature allows Fusion to automatically restore a job when it is interrupted by a spot reclamation.

fusion.tags
Pattern for applying tags to files created via the Fusion client (default: [.command.*|.exitcode|.fusion.*](nextflow.io/metadata=true),[*](nextflow.io/temporary=true)). Set to false to disable.
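For example, a minimal sketch enabling the Fusion file system, assuming it is used together with Wave containers as is common:

fusion {
    enabled = true
}
wave {
    enabled = true   // Fusion is commonly enabled alongside Wave containers
}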
google
The google scope allows you to configure the interactions with Google Cloud, including Google Cloud Batch and Google Cloud Storage.
Read the Google Cloud page for more information.
google.enableRequesterPaysBuckets
When true uses the given Google Cloud project ID as the billing project for storage access. This is required when accessing data from requester pays enabled buckets. See the Requester Pays on Google Cloud Storage documentation (default: false).

google.httpConnectTimeout
New in version 23.06.0-edge.
Defines the HTTP connection timeout for Cloud Storage API requests (default: '60s').

google.httpReadTimeout
New in version 23.06.0-edge.
Defines the HTTP read timeout for Cloud Storage API requests (default: '60s').

google.location
The Google Cloud location where jobs are executed (default: us-central1).

google.project
The Google Cloud project ID to use for pipeline execution.

google.batch.allowedLocations
New in version 22.12.0-edge.
Define the set of allowed locations for VMs to be provisioned. See the Google documentation for details (default: no restriction).

google.batch.autoRetryExitCodes
New in version 24.07.0-edge.
Defines the list of exit codes that will trigger Google Batch to automatically retry the job (default: [50001]). For this setting to take effect, google.batch.maxSpotAttempts must be greater than 0. See the Google Batch documentation for the complete list of retryable exit codes.

google.batch.bootDiskImage
New in version 24.08.0-edge.
Set the image URI of the virtual machine boot disk, e.g. batch-debian. See the Google documentation for details (default: none).

google.batch.bootDiskSize
Set the size of the virtual machine boot disk, e.g. 50.GB (default: none).

google.batch.cpuPlatform
Set the minimum CPU Platform, e.g. 'Intel Skylake'. See Specifying a minimum CPU Platform for VM instances (default: none).

google.batch.gcsfuseOptions
New in version 25.03.0-edge.
Defines a list of custom mount options for gcsfuse (default: ['-o rw', '-implicit-dirs']).

google.batch.maxSpotAttempts
New in version 23.11.0-edge.
Changed in version 24.08.0-edge: The default value was changed from 5 to 0.
Max number of execution attempts of a job interrupted by a Compute Engine Spot reclaim event (default: 0).
See also: google.batch.autoRetryExitCodes

google.batch.network
The URL of an existing network resource to which the VM will be attached.
You can specify the network as a full or partial URL. For example, the following are all valid URLs:
https://www.googleapis.com/compute/v1/projects/{project}/global/networks/{network}
projects/{project}/global/networks/{network}
global/networks/{network}

google.batch.networkTags
The network tags to be applied to the instances created by Google Batch jobs. Network tags are used to apply firewall rules and control network access (e.g. ['allow-ssh', 'allow-http']).
Network tags are ignored when using instance templates. See Add network tags for more information.

google.batch.serviceAccountEmail
Define the Google service account email to use for the pipeline execution. If not specified, the default Compute Engine service account for the project will be used.
Note that the google.batch.serviceAccountEmail service account will only be used for spawned jobs, not for the Nextflow process itself. See the Google Cloud documentation for more information on credentials.

google.batch.spot
When true enables the usage of spot virtual machines or false otherwise (default: false).

google.batch.subnetwork
The URL of an existing subnetwork resource in the network to which the VM will be attached.
You can specify the subnetwork as a full or partial URL. For example, the following are all valid URLs:
https://www.googleapis.com/compute/v1/projects/{project}/regions/{region}/subnetworks/{subnetwork}
projects/{project}/regions/{region}/subnetworks/{subnetwork}
regions/{region}/subnetworks/{subnetwork}

google.batch.usePrivateAddress
When true the VM will NOT be provided with a public IP address, and only contain an internal IP. If this option is enabled, the associated job can only load docker images from Google Container Registry, and the job executable cannot use external services other than Google APIs (default: false).

google.storage.retryPolicy.maxAttempts
New in version 23.11.0-edge.
Max attempts when retrying failed API requests to Cloud Storage (default: 10).

google.storage.retryPolicy.maxDelay
New in version 23.11.0-edge.
Max delay when retrying failed API requests to Cloud Storage (default: '90s').

google.storage.retryPolicy.multiplier
New in version 23.11.0-edge.
Delay multiplier when retrying failed API requests to Cloud Storage (default: 2.0).
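For example, running on Google Cloud Batch with Spot VMs in a specific region could be configured roughly as follows (the project ID and location are placeholders):

google {
    project  = 'my-gcp-project'   // placeholder project ID
    location = 'europe-west2'
    batch {
        spot = true
        maxSpotAttempts = 3       // retry jobs interrupted by Spot reclaims
    }
}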
k8s
The k8s scope controls the deployment and execution of workflow applications in a Kubernetes cluster.
The following settings are available:
k8s.autoMountHostPaths
Automatically mounts host paths into the task pods (default: false). Only intended for development purposes when using a single node.

k8s.computeResourceType
New in version 22.05.0-edge.
Define whether to use the Kubernetes Pod or Job resource type to carry out Nextflow tasks (default: Pod).

k8s.context
Defines the Kubernetes configuration context name to use.

k8s.cpuLimits
New in version 24.04.0.
When true, set both the pod CPUs request and limit to the value specified by the cpus directive, otherwise set only the request (default: false).
This setting is useful when a K8s cluster requires a CPU limit to be defined through a LimitRange.

k8s.debug.yaml
When true, saves the pod spec for each task to .command.yaml in the task directory (default: false).

k8s.fetchNodeName
New in version 22.05.0-edge.
If you trace the hostname, activate this option (default: false).

k8s.fuseDevicePlugin
New in version 24.01.0-edge.
The FUSE device plugin to be used when enabling Fusion in unprivileged mode (default: ['nextflow.io/fuse': 1]).

k8s.httpConnectTimeout
New in version 22.10.0.
Defines the Kubernetes client request HTTP connection timeout e.g. '60s'.

k8s.httpReadTimeout
New in version 22.10.0.
Defines the Kubernetes client request HTTP connection read timeout e.g. '60s'.

k8s.launchDir
Defines the path where the workflow is launched and the user data is stored. This must be a path in a shared K8s persistent volume (default: <volume-claim-mount-path>/<user-name>).

k8s.maxErrorRetry
New in version 22.09.6-edge.
Defines the Kubernetes API max request retries (default: 4).

k8s.namespace
Defines the Kubernetes namespace to use (default: default).

k8s.pod
Allows the definition of one or more pod configuration options such as environment variables, config maps, secrets, etc. It allows the same settings as the pod process directive.
When using the kuberun command, this setting also applies to the submitter pod.

k8s.projectDir
Defines the path where Nextflow projects are downloaded. This must be a path in a shared K8s persistent volume (default: <volume-claim-mount-path>/projects).

k8s.pullPolicy
Defines the strategy to be used to pull the container image e.g. 'Always'.

k8s.retryPolicy.delay
Delay when retrying failed API requests (default: 500ms).

k8s.retryPolicy.jitter
Jitter value when retrying failed API requests (default: 0.25).

k8s.retryPolicy.maxAttempts
Max attempts when retrying failed API requests (default: 4).

k8s.retryPolicy.maxDelay
Max delay when retrying failed API requests (default: 90s).

k8s.runAsUser
Defines the user ID to be used to run the containers. Shortcut for the securityContext option.

k8s.securityContext
Defines the security context for all pods.

k8s.serviceAccount
Defines the Kubernetes service account name to use.

k8s.storageClaimName
The name of the persistent volume claim where the workflow result data is stored.

k8s.storageMountPath
The path location used to mount the persistent volume claim (default: /workspace).

k8s.storageSubPath
The path in the persistent volume to be mounted (default: /).

k8s.workDir
Defines the path where the workflow temporary data is stored. This must be a path in a shared K8s persistent volume (default: <user-dir>/work).
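For example, a minimal Kubernetes configuration using a dedicated namespace, service account, and persistent volume claim might look like the following (all names are placeholders):

k8s {
    namespace        = 'nextflow'        // placeholder namespace
    serviceAccount   = 'nextflow-sa'     // placeholder service account
    storageClaimName = 'nextflow-pvc'    // placeholder persistent volume claim
    storageMountPath = '/workspace'
}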
See the Kubernetes page for more details.
lineage
The lineage scope controls the generation of lineage metadata.
The following settings are available:
lineage.enabled
Enable generation of lineage metadata (default: false).

lineage.store.location
Defines the location of the lineage metadata store (default: ./.lineage).
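For example, lineage metadata generation with a custom store location could be enabled like so (the path is illustrative):

lineage {
    enabled = true
    store.location = '/shared/lineage-store'   // illustrative path
}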
mail
The mail scope controls the mail server used to send email notifications.
The following settings are available:
mail.debug
Enables Java Mail logging for debugging purposes (default: false).

mail.from
Default email sender address.

mail.smtp.host
Host name of the mail server.

mail.smtp.port
Port number of the mail server.

mail.smtp.user
User name to connect to the mail server.

mail.smtp.password
User password to connect to the mail server.

mail.smtp.proxy.host
Host name of an HTTP web proxy server that will be used for connections to the mail server.

mail.smtp.proxy.port
Port number for the HTTP web proxy server.

mail.smtp.*
Any SMTP configuration property supported by the Java Mail API, which Nextflow uses to send emails. See the table of available properties here.
For example, the following snippet shows how to configure Nextflow to send emails through the AWS Simple Email Service:
mail {
smtp.host = 'email-smtp.us-east-1.amazonaws.com'
smtp.port = 587
smtp.user = '<Your AWS SES access key>'
smtp.password = '<Your AWS SES secret key>'
smtp.auth = true
smtp.starttls.enable = true
smtp.starttls.required = true
}
Note
Some versions of Java (e.g. Java 11 Corretto) do not default to TLS v1.2, and as a result may have issues with 3rd party integrations that enforce TLS v1.2 (e.g. Azure Active Directory OIDC). This problem can be addressed by setting the following config option:
mail {
smtp.ssl.protocols = 'TLSv1.2'
}
manifest
The manifest scope allows you to define some meta-data information needed when publishing or running your pipeline.
The following settings are available:
manifest.author
Deprecated since version 24.09.0-edge: Use manifest.contributors instead.
Project author name (use a comma to separate multiple names).

manifest.contributors
New in version 24.09.0-edge.
List of project contributors. Should be a list of maps. The following fields are supported in the contributor map:
name: the contributor's name
affiliation: the contributor's affiliated organization
email: the contributor's email address
github: the contributor's GitHub URL
contribution: list of contribution types, each element can be one of 'author', 'maintainer', or 'contributor'
orcid: the contributor's ORCID URL

manifest.defaultBranch
Git repository default branch (default: master).

manifest.description
Free text describing the workflow project.

manifest.docsUrl
Project documentation URL.

manifest.doi
Project related publication DOI identifier.

manifest.homePage
Project home page URL.

manifest.icon
Project related icon location (relative path or URL).

manifest.license
Project license.

manifest.mainScript
Project main script (default: main.nf).

manifest.name
Project short name.

manifest.nextflowVersion
Minimum required Nextflow version.
This setting may be useful to ensure that a specific version is used:
manifest.nextflowVersion = '1.2.3'        // exact match
manifest.nextflowVersion = '1.2+'         // 1.2 or later (excluding 2 and later)
manifest.nextflowVersion = '>=1.2'        // 1.2 or later
manifest.nextflowVersion = '>=1.2, <=1.5' // any version in the 1.2 .. 1.5 range
manifest.nextflowVersion = '!>=1.2'       // with ! prefix, stop execution if current version does not match required version

manifest.organization
Project organization.

manifest.recurseSubmodules
Pull submodules recursively from the Git repository.

manifest.version
Project version number.
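For example, a simple manifest block might look like the following (all values are illustrative):

manifest {
    name            = 'my-pipeline'               // illustrative values
    description     = 'Example analysis pipeline'
    version         = '1.0.0'
    mainScript      = 'main.nf'
    defaultBranch   = 'main'
    nextflowVersion = '>=24.10'
}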
Read the Sharing pipelines page to learn how to publish your pipeline to GitHub, BitBucket or GitLab.
nextflow
Changed in version 24.10.0: The nextflow.publish.retryPolicy settings were moved to workflow.output.retryPolicy.
Changed in version 25.06.0-edge: The workflow.output.retryPolicy settings were moved to nextflow.retryPolicy.
nextflow.retryPolicy.delay
Delay used for retryable operations (default: 350ms).

nextflow.retryPolicy.jitter
Jitter value used for retryable operations (default: 0.25).

nextflow.retryPolicy.maxAttempts
Max attempts used for retryable operations (default: 5).

nextflow.retryPolicy.maxDelay
Max delay used for retryable operations (default: 90s).
notification
The notification scope allows you to define the automatic sending of a notification email message when the workflow execution terminates.
notification.attributes
A map object modelling the variables that can be used in the template file.

notification.enabled
Send a notification message when the workflow execution completes (default: false).

notification.from
Sender address for the notification email message.

notification.template
Path of a template file which provides the content of the notification message.

notification.to
Recipient address for the notification email. Multiple addresses can be specified separating them with a comma.

The notification message is sent by using the SMTP server defined in the configuration mail scope.
If no mail configuration is provided, Nextflow tries to send the notification message by using the external mail command provided by the underlying system, if available (e.g. sendmail or mail).
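For example, email notifications on workflow completion could be configured as follows (the addresses are placeholders):

notification {
    enabled = true
    from    = 'pipeline@example.com'              // placeholder sender
    to      = 'alice@example.com,bob@example.com' // placeholder recipients
}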
podman
The podman scope controls how Podman containers are executed by Nextflow.
The following settings are available:
podman.enabled
Execute tasks with Podman containers (default: false).

podman.engineOptions
Specify additional options supported by the Podman engine i.e. podman [OPTIONS].

podman.envWhitelist
Comma separated list of environment variable names to be included in the container environment.

podman.mountFlags
Add the specified flags to the volume mounts e.g. mountFlags = 'ro,Z'.

podman.registry
The registry from where container images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.

podman.remove
Clean-up the container after the execution (default: true).

podman.runOptions
Specify extra command line options supported by the podman run command.

podman.temp
Mounts a path of your choice as the /tmp directory in the container. Use the special value auto to create a temporary directory each time a container is created.
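For instance, a minimal Podman configuration using a private registry could look like this (the registry host is a placeholder):

podman {
    enabled  = true
    registry = 'registry.example.com'   // placeholder, no protocol prefix
}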
Read the Podman page to learn more about how to use Podman containers with Nextflow.
report
The report scope allows you to configure the workflow Execution report.
The following settings are available:
report.enabled
Create the execution report on workflow completion (default: false).

report.file
The path of the created execution report file (default: report-<timestamp>.html).

report.overwrite
When true overwrites any existing report file with the same name (default: false).
sarus
The sarus scope controls how Sarus containers are executed by Nextflow.
The following settings are available:
sarus.enabled
Execute tasks with Sarus containers (default: false).

sarus.envWhitelist
Comma separated list of environment variable names to be included in the container environment.

sarus.runOptions
Specify extra command line options supported by the sarus run command. For details see the Sarus user guide.

sarus.tty
Allocates a pseudo-tty (default: false).
Read the Sarus page to learn more about how to use Sarus containers with Nextflow.
shifter
The shifter scope controls how Shifter containers are executed by Nextflow.
The following settings are available:
shifter.enabled
Execute tasks with Shifter containers (default: false).
Read the Shifter page to learn more about how to use Shifter containers with Nextflow.
singularity
The singularity scope controls how Singularity containers are executed by Nextflow.
The following settings are available:
singularity.autoMounts
Automatically mounts host paths in the executed container (default: true). It requires the user bind control feature to be enabled in your Singularity installation.
Changed in version 23.09.0-edge: Default value was changed from false to true.

singularity.cacheDir
The directory where remote Singularity images are stored. When using a computing cluster it must be a shared folder accessible to all compute nodes.

singularity.enabled
Execute tasks with Singularity containers (default: false).

singularity.engineOptions
Specify additional options supported by the Singularity engine i.e. singularity [OPTIONS].

singularity.envWhitelist
Comma separated list of environment variable names to be included in the container environment.

singularity.libraryDir
Directory where remote Singularity images are retrieved. When using a computing cluster it must be a shared folder accessible to all compute nodes.

singularity.noHttps
Pull the Singularity image with http protocol (default: false).

singularity.ociAutoPull
New in version 23.12.0-edge.
When enabled, OCI (and Docker) container images are pulled and converted to the SIF image file format implicitly by the Singularity run command, instead of Nextflow. Requires Singularity 3.11 or later (default: false).

Note
Leave ociAutoPull disabled if you want to build a Singularity native image with Wave (see the Build Singularity native images section).

singularity.ociMode
New in version 23.12.0-edge.
Enable OCI-mode, which allows running native OCI compliant container images with Singularity using crun or runc as the low-level runtime. Note: it requires Singularity 4 or later. See the --oci flag in the Singularity documentation for more details and requirements (default: false).

Note
Leave ociMode disabled if you want to build a Singularity native image with Wave (see the Build Singularity native images section).

singularity.pullTimeout
The amount of time the Singularity pull can last, exceeding which the process is terminated (default: 20 min).

singularity.registry
New in version 22.12.0-edge.
The registry from where Docker images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.

singularity.runOptions
Specify extra command line options supported by singularity exec.
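For example, Singularity execution with a shared image cache could be configured along these lines (the cache path is illustrative):

singularity {
    enabled    = true
    autoMounts = true
    cacheDir   = '/shared/singularity-cache'   // shared folder visible to all compute nodes
}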
Read the Singularity page to learn more about how to use Singularity containers with Nextflow.
spack
The spack scope controls the creation of a Spack environment by the Spack package manager.
The following settings are available:
spack.cacheDir
Defines the path where Spack environments are stored. When using a compute cluster make sure to provide a shared file system path accessible from all compute nodes.

spack.checksum
Enables checksum verification for source tarballs (default: true). Only disable when requesting a package version not yet encoded in the corresponding Spack recipe.

spack.createTimeout
Defines the amount of time the Spack environment creation can last (default: 60 min). The creation process is terminated when the timeout is exceeded.

spack.parallelBuilds
Sets the number of parallel package builds (Spack default: coincides with the number of available CPU cores).
Nextflow does not allow for fine-grained configuration of the Spack package manager. Instead, this has to be performed directly on the host Spack installation. For more information see the Spack documentation.
timeline
The timeline scope controls the execution timeline report generated by Nextflow.
The following settings are available:
timeline.enabled
Create the timeline report file on workflow completion (default: false).

timeline.file
Timeline file name (default: timeline-<timestamp>.html).

timeline.overwrite
When true overwrites any existing timeline file with the same name (default: false).
tower
The tower scope controls the settings for the Seqera Platform (formerly Tower Cloud).
The following settings are available:
tower.accessToken
The unique access token specific to your account on an instance of Seqera Platform.
Your accessToken can be obtained from your Seqera Platform instance in the Tokens page.

tower.enabled
Send workflow tracing and execution metrics to Seqera Platform (default: false).

tower.endpoint
The endpoint of your Seqera Platform instance (default: https://api.cloud.seqera.io).

tower.workspaceId
The ID of the Seqera Platform workspace where the run should be added (default: the launching user personal workspace).
The workspace ID can also be specified using the environment variable TOWER_WORKSPACE_ID (config file has priority over the environment variable).
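For example, sending run telemetry to Seqera Platform could be configured as follows (the token and workspace ID are placeholders):

tower {
    enabled     = true
    accessToken = '<YOUR SEQERA PLATFORM ACCESS TOKEN>'
    workspaceId = '123456789'   // placeholder workspace ID
}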
trace
The trace scope controls the layout of the execution trace file generated by Nextflow.
The following settings are available:
trace.enabled
Create the execution trace file on workflow completion (default: false).

trace.fields
Comma separated list of fields to be included in the report. The available fields are listed at this page.

trace.file
Trace file name (default: trace-<timestamp>.txt).

trace.overwrite
When true overwrites any existing trace file with the same name (default: false).

trace.raw
When true turns on raw number report generation i.e. date and time are reported as milliseconds and memory as number of bytes (default: false).

trace.sep
Character used to separate values in each row (default: \t).
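For example, a trace file with a custom selection of fields could be requested as follows; the field names shown are illustrative, see the trace fields page referenced above for the full list:

trace {
    enabled = true
    file    = 'pipeline-trace.txt'
    fields  = 'task_id,name,status,exit,realtime,%cpu,rss'   // illustrative field selection
}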
Read the Trace file page to learn more about the execution report that can be generated by Nextflow.
wave
The wave scope provides advanced configuration for the use of Wave containers.
The following settings are available:
wave.enabled
Enable the use of Wave containers (default: false).

wave.build.repository
The container repository where images built by Wave are uploaded (note: the corresponding credentials must be provided in your Seqera Platform account).

wave.build.cacheRepository
The container repository used to cache image layers built by the Wave service (note: the corresponding credentials must be provided in your Seqera Platform account).

wave.build.compression.mode
New in version 25.05.0-edge.
Defines the compression algorithm that should be used when building the container. Allowed values are: gzip, estargz and zstd (default: gzip).

wave.build.compression.level
New in version 25.05.0-edge.
The compression level used when building a container. The valid range depends on the chosen algorithm: gzip and estargz accept 0-9, zstd accepts 0-22.

wave.build.compression.force
New in version 25.05.0-edge.
Forcefully apply the compression option to all layers, including already existing layers (default: false).

wave.build.conda.basePackages
One or more Conda packages to be always added in the resulting container (default: conda-forge::procps-ng).

wave.build.conda.commands
One or more commands to be added to the Dockerfile used to build a Conda based image.

wave.build.conda.mambaImage
The Mamba container image used to build Conda based containers. This is expected to be a micromamba-docker image.

wave.endpoint
The Wave service endpoint (default: https://wave.seqera.io).

wave.freeze
New in version 23.07.0-edge.
Enables Wave container freezing (default: false). Wave will provision a non-ephemeral container image that will be pushed to a container repository of your choice. It requires the use of the wave.build.repository setting. It is also recommended to specify a custom cache repository using wave.build.cacheRepository.

Note
The container repository authentication must be managed by the underlying infrastructure.

wave.httpClient.connectTime
New in version 22.06.0-edge.
Sets the connection timeout duration for the HTTP client connecting to the Wave service (default: 30s).

wave.mirror
New in version 24.09.1-edge.
Enables Wave container mirroring (default: false). This feature allows mirroring (i.e. copying) the containers defined in your pipeline configuration to a container registry of your choice, so that pipeline tasks will pull the copied containers from the target registry instead of the original one.
The resulting copied containers will maintain the name, digest and metadata.
The target registry is expected to be specified by using the wave.build.repository option.

Note
This feature is only compatible with the wave.strategy = 'container' option.
This feature cannot be used with Wave freeze mode.
The authentication of the resulting container images must be managed by the underlying infrastructure.

wave.retryPolicy.delay
New in version 22.06.0-edge.
The initial delay when a failing HTTP request is retried (default: 450ms).

wave.retryPolicy.jitter
New in version 22.06.0-edge.
The jitter factor used to randomly vary retry delays (default: 0.25).

wave.retryPolicy.maxAttempts
New in version 22.06.0-edge.
The max number of attempts a failing HTTP request is retried (default: 5).

wave.retryPolicy.maxDelay
New in version 22.06.0-edge.
The max delay when a failing HTTP request is retried (default: 90s).

wave.scan.mode
New in version 24.09.1-edge.
Determines the container security scanning execution modality. This feature allows scanning the containers used in your pipeline for security vulnerabilities. The following options can be specified:
none: No security scan is performed on the containers used by your pipeline.
async: The containers used by your pipeline are scanned for security vulnerabilities. The task execution is carried out independently of the security scan result.
required: The containers used by your pipeline are scanned for security vulnerabilities. The task is only executed if the corresponding container is not affected by a security vulnerability.

wave.scan.allowedLevels
New in version 24.09.1-edge.
Determines the allowed security levels when scanning containers for security vulnerabilities. Allowed values are: low, medium, high, critical. For example: wave.scan.allowedLevels = 'low,medium'. This option requires the use of wave.scan.mode = 'required'.

wave.strategy
The strategy to be used when resolving ambiguous Wave container requirements (default: 'container,dockerfile,conda').
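For example, Wave with container freezing to a private repository could be set up roughly like this; the repository names are placeholders, and the corresponding credentials must be provided in your Seqera Platform account as noted above:

wave {
    enabled = true
    freeze  = true
    build.repository      = 'registry.example.com/wave/build'   // placeholder repositories
    build.cacheRepository = 'registry.example.com/wave/cache'
}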
workflow
New in version 24.10.0.
The workflow scope provides workflow execution options.
workflow.failOnIgnore
New in version 24.05.0-edge.
When true, the pipeline will exit with a non-zero exit code if any failed tasks are ignored using the ignore error strategy (default: false).

workflow.onComplete
Specify a closure that will be invoked at the end of a workflow run (including failed runs). See Workflow handlers for more information.

workflow.onError
Specify a closure that will be invoked if a workflow run is terminated. See Workflow handlers for more information.

workflow.output.contentType
Currently only supported for S3.
Specify the media type, also known as MIME type, of published files (default: false). Can be a string (e.g. 'text/html'), or true to infer the content type from the file extension.

workflow.output.copyAttributes
New in version 25.01.0-edge.
Currently only supported for local and shared filesystems.
Copy file attributes (such as the last modified timestamp) to the published file (default: false).

workflow.output.enabled
Enable or disable publishing (default: true).

workflow.output.ignoreErrors
When true, the workflow will not fail if a file can't be published for some reason (default: false).

workflow.output.mode
The file publishing method (default: 'symlink'). The following options are available:
'copy': Copy each file into the output directory.
'copyNoFollow': Copy each file into the output directory without following symlinks, i.e. only the link is copied.
'link': Create a hard link in the output directory for each file.
'move': Move each file into the output directory. Should only be used for files which are not used by downstream processes in the workflow.
'rellink': Create a relative symbolic link in the output directory for each file.
'symlink': Create an absolute symbolic link in the output directory for each output file.

workflow.output.overwrite
When true any existing file in the specified folder will be overwritten (default: 'standard'). The following options are available:
false: Never overwrite existing files.
true: Always overwrite existing files.
'deep': Overwrite existing files when the file content is different.
'lenient': Overwrite existing files when the file size is different.
'standard': Overwrite existing files when the file size or last modified timestamp is different.

workflow.output.storageClass
Currently only supported for S3.
Specify the storage class for published files.

workflow.output.tags
Currently only supported for S3.
Specify arbitrary tags for published files. For example: tags FOO: 'hello', BAR: 'world'
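For example, output publishing can be switched from symlinks to copies with lenient overwriting, roughly as follows:

workflow.output.mode      = 'copy'      // copy published files instead of symlinking
workflow.output.overwrite = 'lenient'   // overwrite when the file size differs
workflow.failOnIgnore     = true        // fail the run if any ignored task failures occurred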