---
title: Conditions
ms.custom: seodec18
description: Learn about how you can write custom conditions in Azure Pipelines.
ms.topic: conceptual
ms.assetid: C79149CC-6E0D-4A39-B8D1-EB36C8D3AB89
ms.date: 02/17/2023
monikerRange: '<= azure-devops'
---
[!INCLUDE version-lt-eq-azure-devops]
You can specify the conditions under which each stage, job, or step runs. By default, a job or stage runs if it doesn't depend on any other job or stage, or if all of the jobs or stages it depends on have completed and succeeded. This includes not only direct dependencies, but their dependencies as well, computed recursively. By default, a step runs if nothing in its job has failed yet and the step immediately preceding it has finished. You can customize this behavior by forcing a stage, job, or step to run even if a previous dependency fails or by specifying a custom condition.
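As a quick sketch of that default behavior (the job names here are hypothetical), each job below runs only after everything it depends on, directly or transitively, has succeeded:

```yaml
jobs:
- job: Build
  steps:
  - script: echo Building
- job: Test
  dependsOn: Build   # runs only if Build succeeds
  steps:
  - script: echo Testing
- job: Publish
  dependsOn: Test    # transitively requires Build to succeed as well
  steps:
  - script: echo Publishing
```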
::: moniker range="tfs-2018" [!INCLUDE temp] ::: moniker-end
::: moniker range=">=azure-devops-2020"
You can specify conditions under which a step, job, or stage will run.

[!INCLUDE include]
By default, steps, jobs, and stages run if all previous steps/jobs have succeeded. It's as if you specified `condition: succeeded()` (see Job status functions).
```yaml
jobs:
- job: Foo
  steps:
  - script: echo Hello!
    condition: always() # this step will always run, even if the pipeline is canceled
- job: Bar
  dependsOn: Foo
  condition: failed() # this job will only run if Foo fails
```
You can also use variables in conditions.
```yaml
variables:
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - script: echo Hello Stage A!

- stage: B
  condition: and(succeeded(), eq(variables.isMain, true))
  jobs:
  - job: B1
    steps:
    - script: echo Hello Stage B!
    - script: echo $(isMain)
```
Conditions are evaluated to decide whether to start a stage, job, or step. This means that nothing computed at runtime inside that unit of work will be available. For example, if you have a job that sets a variable using a runtime expression using `$[ ]` syntax, you can't use that variable in your custom condition.
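As a minimal sketch of that pitfall (the variable name is hypothetical), the condition below is evaluated before the job starts, so the variable the job sets at runtime is still empty and the job is skipped:

```yaml
jobs:
- job: Example
  # Evaluated before the job runs; myRuntimeVar isn't set yet, so this is false
  # and the job never starts.
  condition: eq(variables['myRuntimeVar'], 'yes')
  steps:
  - script: echo "##vso[task.setvariable variable=myRuntimeVar]yes"
```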
::: moniker-end
::: moniker range="< azure-devops" YAML isn't supported in TFS. ::: moniker-end
Inside the **Control Options** of each task, and in the **Additional options** for a job in a release pipeline, you can specify the conditions under which the task or job will run.
> [!NOTE]
> When you specify your own `condition` property for a stage / job / step, you overwrite its default `condition: succeeded()`. This can lead to your stage / job / step running even if the build is canceled. Make sure you take into account the state of the parent stage / job when writing your own conditions.
If the built-in conditions don't meet your needs, then you can specify custom conditions.
Conditions are written as expressions in YAML pipelines. The agent evaluates the expression beginning with the innermost function and works its way out. The final result is a boolean value that determines if the task, job, or stage should run or not. See the expressions article for a full guide to the syntax.
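For instance, here's a sketch of how a nested condition is evaluated, innermost-first (the branch value is hypothetical):

```yaml
steps:
- script: echo Deploying
  # Evaluation order, innermost-first:
  #   variables['Build.SourceBranch']  -> 'refs/heads/main' (say)
  #   eq(..., 'refs/heads/main')       -> True
  #   succeeded()                      -> True if the previous step succeeded
  #   and(True, True)                  -> True, so this step runs
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
```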
Do any of your conditions make it possible for the task to run even after the build is canceled by a user? If so, then specify a reasonable value for cancel timeout so that these kinds of tasks have enough time to complete after the user cancels a run.
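For example, one way to do that (a sketch; the cleanup script and timeout value are placeholders) is to set a cancel timeout on the job:

```yaml
jobs:
- job: Cleanup
  cancelTimeoutInMinutes: 5   # time granted to always()-style steps after a cancel request
  steps:
  - script: ./teardown.sh     # hypothetical cleanup script
    condition: always()       # runs even when the run is canceled
```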
When a build is canceled, it doesn't mean all its stages, jobs, or steps stop running. The decision depends on the stage, job, or step `conditions` you specified and at what point of the pipeline's execution you canceled the build.

If your condition doesn't take into account the state of the parent of your stage / job / step, then if the condition evaluates to `true`, your stage, job, or step will run, even if its parent is canceled. If its parent is skipped, then your stage, job, or step won't run.
Let's look at some examples.
In this pipeline, by default, `stage2` depends on `stage1` and `stage2` has a `condition` set. `stage2` only runs when the source branch is `main`.
```yaml
stages:
- stage: stage1
  jobs:
  - job: A
    steps:
    - script: echo 1; sleep 30
- stage: stage2
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
  jobs:
  - job: B
    steps:
    - script: echo 2
```
If you queue a build on the `main` branch, and you cancel it while `stage1` is running, `stage2` will still run, because `eq(variables['Build.SourceBranch'], 'refs/heads/main')` evaluates to `true`.
In this pipeline, by default, `stage2` depends on `stage1`. Job `B` has a `condition` set for it.
```yaml
stages:
- stage: stage1
  jobs:
  - job: A
    steps:
    - script: echo 1; sleep 30
- stage: stage2
  jobs:
  - job: B
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
    steps:
    - script: echo 2
```
If you queue a build on the `main` branch, and you cancel it while `stage1` is running, `stage2` won't run, even though it contains a job `B` whose condition evaluates to `true`. The reason is because `stage2` has the default `condition: succeeded()`, which evaluates to `false` when `stage1` is canceled. Therefore, `stage2` is skipped, and none of its jobs run.
Say you have the following YAML pipeline. Notice that, by default, `stage2` depends on `stage1` and that `script: echo 2` has a `condition` set for it.
```yaml
stages:
- stage: stage1
  jobs:
  - job: A
    steps:
    - script: echo 1; sleep 30
- stage: stage2
  jobs:
  - job: B
    steps:
    - script: echo 2
      condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
```
If you queue a build on the `main` branch, and you cancel it while `stage1` is running, `stage2` won't run, even though it contains a step in job `B` whose condition evaluates to `true`. The reason is because `stage2` is skipped in response to `stage1` being canceled.
Say you have the following YAML pipeline. Notice that job `B` depends on job `A` and that job `B` has a `condition` set for it.
```yaml
jobs:
- job: A
  steps:
  - script: sleep 30
- job: B
  dependsOn: A
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
  steps:
  - script: echo step 2.1
```
If you queue a build on the `main` branch, and you cancel it while job `A` is running, job `B` will still run, because `eq(variables['Build.SourceBranch'], 'refs/heads/main')` evaluates to `true`.
If you want job `B` to run only when job `A` succeeds and you queue the build on the `main` branch, then your `condition` should read `and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))`, as shown in the sketch below.
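A sketch of the previous pipeline with that adjustment applied:

```yaml
jobs:
- job: A
  steps:
  - script: sleep 30
- job: B
  dependsOn: A
  # succeeded() keeps B from running when A is canceled or fails.
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  steps:
  - script: echo step 2.1
```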
In the following pipeline, `B` depends on `A`.
```yaml
jobs:
- job: A
  steps:
  - script: sleep 30
- job: B
  dependsOn: A
  steps:
  - script: echo step 2.1
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
```
If you queue a build on the `main` branch, and you cancel the build when job `A` is executing, job `B` won't execute, even though `step 2.1` has a `condition` that evaluates to `true`. The reason is because job `B` has the default `condition: succeeded()`, which evaluates to `false` when job `A` is canceled. Therefore, job `B` is skipped, and none of its steps run.
You can also have conditions on steps. In this pipeline, notice that step 2.3 has a `condition` set on it.
```yaml
steps:
- script: echo step 2.1
- script: echo step 2.2; sleep 30
- script: echo step 2.3
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
```
If you queue a build on the `main` branch, and you cancel the build when steps 2.1 or 2.2 are executing, step 2.3 will still execute, because `eq(variables['Build.SourceBranch'], 'refs/heads/main')` evaluates to `true`.
To prevent stages, jobs, or steps with `conditions` from running when a build is canceled, make sure you consider their parent's state when writing the `conditions`. For more information, see Job status functions.
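For example, here's a sketch of the step-level pipeline above, adjusted so the final step also respects cancellation:

```yaml
steps:
- script: echo step 2.1
- script: echo step 2.2; sleep 30
- script: echo step 2.3
  # succeeded() makes this step skip when earlier steps fail or the run is canceled.
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
```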
Here are some examples of common conditions:

| Desired behavior | Example |
|------------------|---------|
| Run if the source branch is main, even if the parent or preceding stage / job / step failed or was canceled. | `eq(variables['Build.SourceBranch'], 'refs/heads/main')` |
| Run if the source branch is main and the parent or preceding stage / job / step succeeded. | `and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))` |
| Run if the source branch isn't main and the parent or preceding stage / job / step succeeded. | `and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/main'))` |
| Run for user topic branches, if the parent or preceding stage / job / step succeeded. | `and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/users/'))` |
| Run for continuous integration (CI) builds, if the parent or preceding stage / job / step succeeded. | `and(succeeded(), in(variables['Build.Reason'], 'IndividualCI', 'BatchedCI'))` |
| Run if the build was triggered by a branch policy for a pull request and the parent or preceding stage / job / step failed. | `and(failed(), eq(variables['Build.Reason'], 'PullRequest'))` |
| Run for scheduled builds. | `eq(variables['Build.Reason'], 'Schedule')` |
| Run if a variable, in this example `System.debug`, is set to true, even if the parent or preceding stage / job / step failed or was canceled. | `eq(variables['System.debug'], true)` |

`Release.Artifacts.{artifact-alias}.SourceBranch` is equivalent to `Build.SourceBranch`.
Since all variables are treated as strings in Azure Pipelines, an empty string is equivalent to `null` in this pipeline.
```yaml
variables:
- name: testEmpty
  value: ''

jobs:
- job: A
  steps:
  - script: echo testEmpty is blank
    condition: eq(variables.testEmpty, '')
```
When you declare a parameter in the same pipeline in which you use it in a condition, parameter expansion happens before conditions are considered. In this case, you can embed parameters inside conditions. The script in this YAML file runs because `parameters.doThing` is true.

The `condition` in the pipeline combines two functions: `succeeded()` and `eq('${{ parameters.doThing }}', true)`. The `succeeded()` function checks if the previous step succeeded. The `succeeded()` function returns true because there was no previous step.

The `eq('${{ parameters.doThing }}', true)` function checks whether the `doThing` parameter is equal to `true`. Since the default value for `doThing` is true, the condition returns true by default unless a different value gets set in the pipeline.
For more template parameter examples, see Template types & usage.
```yaml
parameters:
- name: doThing
  default: true
  type: boolean

steps:
- script: echo I did a thing
  condition: and(succeeded(), eq('${{ parameters.doThing }}', true))
```
When you pass a parameter to a template, you need to set the parameter's value in your template or use `templateContext` to pass properties to templates.
```yaml
# parameters.yml
parameters:
- name: doThing
  default: true # value passed to the condition
  type: boolean

jobs:
- job: B
  steps:
  - script: echo I did a thing
    condition: ${{ eq(parameters.doThing, true) }} # evaluated at template expansion time
```

```yaml
# azure-pipeline.yml
parameters:
- name: doThing
  default: true
  type: boolean

trigger:
- none

extends:
  template: parameters.yml
```
The output of this pipeline is `I did a thing` because the parameter `doThing` is true.

You can make a variable available to future jobs and specify it in a condition. Variables available to future jobs must be marked as multi-job output variables using `isOutput=true`.
```yaml
jobs:
- job: Foo
  steps:
  - bash: |
      echo "This is job Foo."
      echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes" # set variable doThing to Yes
    name: DetermineResult
- job: Bar
  dependsOn: Foo
  condition: eq(dependencies.Foo.outputs['DetermineResult.doThing'], 'Yes') # map doThing and check the value
  steps:
  - script: echo "Job Foo ran and doThing is Yes."
```
You can make a variable available to future steps and specify it in a condition. By default, variables created from a step are available to future steps and don't need to be marked as multi-job output variables using `isOutput=true`.
There are some important things to note regarding the above approach and scoping:
- Variables created in a step in a job will be scoped to the steps in the same job.
- Variables created in a step will only be available in subsequent steps as environment variables.
- Variables created in a step can't be used in the step that defines them.
Below is an example of creating a pipeline variable in a step and using the variable in a subsequent step's condition and script.
```yaml
steps:
# This step creates a new pipeline variable: doThing. This variable will be available to subsequent steps.
- bash: |
    echo "##vso[task.setvariable variable=doThing]Yes"
  displayName: Step 1

# This step is able to use doThing, so it uses it in its condition
- script: |
    # You can access the variable from Step 1 as an environment variable.
    echo "Value of doThing (as DOTHING env var): $DOTHING."
  displayName: Step 2
  condition: and(succeeded(), eq(variables['doThing'], 'Yes')) # or and(succeeded(), eq(variables.doThing, 'Yes'))
```
You can use the result of the previous job. For example, in this YAML file, the condition `eq(dependencies.A.result,'SucceededWithIssues')` allows the job to run because Job A succeeded with issues.
```yaml
jobs:
- job: A
  displayName: Job A
  continueOnError: true # next job starts even if this one fails
  steps:
  - script: echo Job A ran
  - script: exit 1

- job: B
  dependsOn: A
  condition: eq(dependencies.A.result,'SucceededWithIssues') # targets the result of the previous job
  displayName: Job B
  steps:
  - script: echo Job B ran
```
A stage can run even when the build is canceled if the condition that's configured in the stage doesn't include a job status check function. To resolve the issue, add a job status check function, such as `succeeded()`, to the condition. If you cancel a job while it's in the queue, but not running, the entire job is canceled, including all the other stages.
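As a sketch of that fix applied to the earlier stage example:

```yaml
stages:
- stage: stage2
  # Before: eq(variables['Build.SourceBranch'], 'refs/heads/main') runs even on cancel.
  # After: adding succeeded() keeps the stage from running when the run is canceled.
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - job: B
    steps:
    - script: echo 2
```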
Learn more about a pipeline's behavior when a build is canceled.