# Concourse

## Introduction
Concourse is a pipeline-based continuous thing-doer.
## Installation for testing

This docker-compose file simplifies the installation to run some tests with Concourse:

```bash
wget https://raw.githubusercontent.com/starkandwayne/concourse-tutorial/master/docker-compose.yml
docker-compose up -d
```
You can download the `fly` command line tool for your OS from the web UI at http://127.0.0.1:8080.
## Create Pipeline

A pipeline is made of a list of Jobs, each of which contains an ordered list of Steps.
### Steps

Several different types of steps can be used:

- the `task` step runs a task
- the `get` step fetches a resource
- the `put` step updates a resource
- the `set_pipeline` step configures a pipeline
- the `load_var` step loads a value into a local var
- the `in_parallel` step runs steps in parallel
- the `do` step runs steps in sequence
- the `across` step modifier runs a step multiple times; once for each combination of variable values
- the `try` step attempts to run a step and succeeds even if the step fails
Each step in a job plan runs in its own container. You can run anything you want inside the container (e.g. run your tests, run a bash script, build an image, etc.). So if you have a job with five steps, Concourse will create five containers, one for each step.

Therefore, it's possible to indicate the type of container each step needs to run in.
### Simple Pipeline Example

```yaml
jobs:
- name: escape
  plan:
  - task: escape-task
    privileged: true
    config:
      # Tells Concourse which type of worker this task should run on
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox # images are pulled from docker hub by default
      run:
        path: sh
        args:
        - -cx
        - |
          ls -l .
          echo "hello from another step!" > the-artifact/message
```
```bash
fly -t tutorial set-pipeline -p hello-world -c hello-world.yml
# pipelines are paused when first created
fly -t tutorial unpause-pipeline -p hello-world
# trigger the job and watch it run to completion
fly -t tutorial trigger-job --job hello-world/hello-world-job --watch
```
### Bash script with output/input pipeline

It's possible to save the results of one task in a file, declare that file's directory as an output, and then declare it as an input of the next task. Concourse mounts the output directory of the previous task inside the container of the new task, where you can access the files created by the previous task.
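A minimal sketch of this pattern (the job, task and directory names here are illustrative, not from any real pipeline):

```yaml
jobs:
- name: passing-artifacts
  plan:
  - task: write-message
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      # "the-artifact" is declared as an output, so Concourse preserves it
      outputs:
      - name: the-artifact
      run:
        path: sh
        args: ["-c", "echo 'hello' > the-artifact/message"]
  - task: read-message
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      # the previous task's output is mounted here as an input
      inputs:
      - name: the-artifact
      run:
        path: sh
        args: ["-c", "cat the-artifact/message"]
```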
### Triggers

You don't need to trigger the jobs manually every time you want to run them; you can also program them to run automatically:

- Some time passes: Time resource
- On new commits to the main branch: Git resource
- New PRs: Github-PR resource
- Fetch or push the latest image of your app: Registry-image resource
Check a YAML pipeline example that triggers on new commits to master in https://concourse-ci.org/tutorial-resources.html
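As a sketch, a `git` resource with `trigger: true` on the `get` step starts the job on every new commit (the repository URL and job names are placeholders):

```yaml
resources:
- name: my-repo
  type: git
  source:
    uri: https://github.com/example/example.git
    branch: master

jobs:
- name: test-on-commit
  plan:
  # trigger: true makes new versions of the resource start the job
  - get: my-repo
    trigger: true
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      inputs:
      - name: my-repo
      run:
        path: sh
        args: ["-c", "ls my-repo"]
```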
## User Roles & Permissions

Concourse comes with five roles:

- _Concourse Admin_: This role is only given to owners of the **main** team (default initial Concourse team). Admins can configure other teams (e.g.: `fly set-team`, `fly destroy-team`...). The permissions of this role cannot be affected by RBAC.
- _owner_: Team owners can modify everything within the team.
- _member_: Team members can read and write within the team's assets but cannot modify the team settings.
- _pipeline-operator_: Pipeline operators can perform pipeline operations such as triggering builds and pinning resources; however, they cannot update pipeline configurations.
- _viewer_: Team viewers have "read-only" access to a team and its pipelines.
{% hint style="info" %} Moreover, the permissions of the roles owner, member, pipeline-operator and viewer can be modified by configuring RBAC (more specifically, by configuring its actions). Read more about it in: https://concourse-ci.org/user-roles.html {% endhint %}
Note that Concourse groups pipelines inside Teams. Therefore users belonging to a Team will be able to manage those pipelines and several Teams might exist. A user can belong to several Teams and have different permissions inside each of them.
## Vars & Credential Manager

In the YAML configs you can configure values using the syntax `((source-name:secret-path.secret-field))`.

The source-name is optional, and if omitted, the cluster-wide credential manager will be used, or the value may be provided statically.

The **optional** secret-field specifies a field on the fetched secret to read. If omitted, the credential manager may choose to read a 'default field' from the fetched credential if the field exists.

Moreover, the secret-path and secret-field may be surrounded by double quotes `"..."` if they contain special characters like `.` and `:`. For instance, `((source:"my.secret"."field:1"))` will set the secret-path to `my.secret` and the secret-field to `field:1`.
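As a sketch, a pipeline might reference secrets from the credential manager like this (the resource, var and file names are illustrative):

```yaml
resources:
- name: private-repo
  type: git
  source:
    uri: git@github.com:example/private.git
    # resolved from the credential manager at runtime
    private_key: ((github.private_key))

jobs:
- name: deploy
  plan:
  - get: private-repo
  - task: deploy
    params:
      # a var provided statically or by the credential manager
      API_TOKEN: ((api-token))
    file: private-repo/ci/deploy.yml
```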
### Static Vars

Static vars can be specified in task steps:

```yaml
- task: unit-1.13
  file: booklit/ci/unit.yml
  vars: {tag: 1.13}
```
Or using the following `fly` arguments:

- `-v` or `--var` `NAME=VALUE` sets the string `VALUE` as the value for the var `NAME`.
- `-y` or `--yaml-var` `NAME=VALUE` parses `VALUE` as YAML and sets it as the value for the var `NAME`.
- `-i` or `--instance-var` `NAME=VALUE` parses `VALUE` as YAML and sets it as the value for the instance var `NAME`. See Grouping Pipelines to learn more about instance vars.
- `-l` or `--load-vars-from` `FILE` loads `FILE`, a YAML document containing mapping var names to values, and sets them all.
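For example (the target, pipeline and var names here are placeholders):

```bash
# set a single string var and load the rest from a YAML file
fly -t tutorial set-pipeline -p booklit -c pipeline.yml \
  -v tag=1.13 \
  -l credentials.yml

# pass a YAML value (parsed as a list, not as a string)
fly -t tutorial set-pipeline -p booklit -c pipeline.yml \
  -y branches='[main, develop]'
```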
### Credential Management

There are different ways a Credential Manager can be specified in a pipeline; read how in https://concourse-ci.org/creds.html.

Moreover, Concourse supports different credential managers:

- The Vault credential manager
- The CredHub credential manager
- The AWS SSM credential manager
- The AWS Secrets Manager credential manager
- Kubernetes Credential Manager
- The Conjur credential manager
- Caching credentials
- Redacting credentials
- Retrying failed fetches
{% hint style="danger" %} Note that if you have some kind of write access to Concourse you can create jobs to exfiltrate those secrets as Concourse needs to be able to access them. {% endhint %}
## Concourse Enumeration

In order to enumerate a Concourse environment you first need to gather valid credentials or find an authenticated token, probably in a `.flyrc` config file.
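The `~/.flyrc` file stores the configured targets in YAML, including the bearer tokens used to authenticate, so finding one on a compromised host gives direct access (the path shown is the default location):

```bash
# targets and their auth tokens are stored in plaintext YAML
cat ~/.flyrc
# show just the token entries (type and value)
grep -A2 'token:' ~/.flyrc
```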
### Login and Current User enum

- To login you need to know the endpoint, the team name (default is `main`) and a team the user belongs to:

```bash
fly --target example login --team-name my-team --concourse-url https://ci.example.com [--insecure] [--client-cert=./path --client-key=./path]
```

- Get configured targets:

```bash
fly targets
```

- Get if the configured target connection is still valid:

```bash
fly -t <target> status
```

- Get role of the user against the indicated target:

```bash
fly -t <target> userinfo
```
### Teams & Users

- Get a list of the Teams:

```bash
fly -t <target> teams
```

- Get roles inside a team:

```bash
fly -t <target> get-team -n <team-name>
```

- Get a list of users:

```bash
fly -t <target> active-users
```
### Pipelines

- List pipelines:

```bash
fly -t <target> pipelines -a
```

- Get pipeline yaml (sensitive information might be found in the definition):

```bash
fly -t <target> get-pipeline -p <pipeline-name>
```

- Get all pipeline config declared vars:

```bash
for pipename in $(fly -t <target> pipelines | grep -Ev "^id" | awk '{print $2}'); do echo $pipename; fly -t <target> get-pipeline -p $pipename -j | grep -Eo '"vars":[^}]+'; done
```

- Get all the pipelines secret names used (if you can create/modify a job or hijack a container you could exfiltrate them):

```bash
rm /tmp/secrets.txt;
for pipename in $(fly -t <target> pipelines | grep -Ev "^id" | awk '{print $2}'); do
  echo $pipename;
  fly -t <target> get-pipeline -p $pipename | grep -Eo '\(\(.*\)\)' | sort | uniq | tee -a /tmp/secrets.txt;
  echo "";
done
echo ""
echo "ALL SECRETS"
cat /tmp/secrets.txt | sort | uniq
rm /tmp/secrets.txt
```
### Containers & Workers

- List containers:

```bash
fly -t <target> containers
```

- List workers:

```bash
fly -t <target> workers
```
## Concourse Attacks

### Session inside running or recently run container

If you have enough privileges (member role or more) you will be able to list pipelines and roles, and get a session inside the `<pipeline>/<job>` container using:

```bash
fly -t tutorial intercept --job pipeline-name/job-name
```