I set myself the objective of contributing more to open source in 2019 than I did last year. Since I've been working with serverless technologies and have years of experience with continuous integration and deployment, Knative looked like a good candidate. I started by looking at setting up a simple CD pipeline for a Helm chart I worked on some time ago; I had previously set up CI for the same repo using Skaffold and TravisCI. The Knative community has been transparent and welcoming, and after a few small pull requests were merged I was able to stand up a simple CD pipeline that builds three Docker images and deploys a Helm chart that consumes them. In this blog post I will describe how to set up a development environment for Knative's build-pipeline using your laptop and IBM Cloud, both the container service (IKS) and the IBM Container Registry. I won't get into platform-specific details about how to configure kubectl and other tools on your laptop; instead I will provide links to existing excellent documents. In the next blog post I will describe how to set up the CD pipeline.
Knative Pipelines
Pipelines are the newest addition to the Knative project, which already included three components: serving, eventing and build. Quoting from the official README, "The Pipeline CRD provides k8s-style resources for declaring CI/CD-style pipelines". The build-pipeline project introduces a few new custom resource definitions (CRDs) that make it possible to define pipelineresources, tasks/taskruns and pipelines/pipelineruns directly in Kubernetes.
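To give a feel for these resources, here is a minimal, illustrative sketch of a Task with a single step. The apiVersion, field names and the busybox image are my assumptions based on the v1alpha1 API of that era and may not match the exact schema in your checkout:

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    # Each step runs as a container inside the pod created for the Task
    - name: echo
      image: busybox
      command: ["echo"]
      args: ["hello from a Knative pipeline task"]
```

A TaskRun then references a Task by name to actually execute it, and a Pipeline strings several Tasks together.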
Preparing your laptop
Before you start, you need to set up the development environment on your laptop. Install git, go and the IBM Cloud CLI. Make sure your GOPATH is set correctly. Either /go or ~/go are good choices; I prefer the former to keep paths shorter.
```bash
# Create the required folders
sudo mkdir -p $GOPATH
sudo chown -R $USER:$USER $GOPATH
mkdir -p ${GOPATH}/src
mkdir -p ${GOPATH}/bin

# Set the following in your shell profile, so that you may download, build and run go programs
export GOPATH=/go  # or ~/go, matching your choice above
export PATH=${GOPATH}/bin:$PATH

# Install the IBM Cloud CLI
curl -sL https://ibm.biz/idt-installer | bash
```
You also need an IBM Cloud account. If you don't have one, you can create one for free at cloud.ibm.com. Knative development benefits from ko to build and deploy its components seamlessly. You will use ko to build the Knative container images and publish them to the container registry. Let's go ahead and install it:
```bash
go get github.com/google/go-containerregistry/cmd/ko
```
Next you need to configure ko to be able to push images to the cloud registry:
```bash
# Login to the cloud and to the container registry (CR)
ibmcloud login
ibmcloud cr login
REGISTRY_ENDPOINT=$(ibmcloud cr info | awk '/registry/{ print $3 }' | head -1)

# Create a CR token with write access
CR_TOKEN=$(ibmcloud cr token-add --readwrite --description ko_rw --non-expiring -q)

# Backup your docker config if you have one
cp ~/.docker/config.json ~/.docker/config.json.$(date +%F)

# Setup docker auth so it may talk to the CR
echo '{"auths":{"'$REGISTRY_ENDPOINT'":{"auth":"'$(echo -n token:$CR_TOKEN | base64)'"}}}' > ~/.docker/config.json

# Create a CR namespace
CR_NAMESPACE=knative
ibmcloud cr namespace-add $CR_NAMESPACE

# Configure ko
export KO_DOCKER_REPO=$REGISTRY_ENDPOINT/$CR_NAMESPACE
```
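If you want to confirm that the token actually grants access to the registry, you can optionally log in with the Docker CLI as well; this assumes Docker is installed on your laptop:

```bash
# The IBM Container Registry accepts the literal user "token" with the CR token as password
echo -n "$CR_TOKEN" | docker login -u token --password-stdin "$REGISTRY_ENDPOINT"
```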
You need a Kubernetes cluster to deploy Knative to. If you don't have one, provision one in IKS (IBM Cloud Kubernetes Service). Store the cluster name in the IKS_CLUSTER environment variable and configure kubectl:
```bash
export IKS_CLUSTER=<your cluster name>
eval $(ibmcloud ks cluster-config $IKS_CLUSTER --export)
```
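Before going further, it is worth a quick sanity check that kubectl now points at the right cluster, for example:

```bash
# Show the active context and confirm the worker nodes are reachable
kubectl config current-context
kubectl get nodes
```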
Installing Knative from source
Everything is now ready to set up Knative. Obtain the source code:
```bash
mkdir -p ${GOPATH}/src/github.com/knative
cd ${GOPATH}/src/github.com/knative
git clone https://github.com/knative/build-pipeline
```
Deploy the Knative build pipeline:
```bash
cd ${GOPATH}/src/github.com/knative/build-pipeline
ko apply -f config/
```
In the last step, ko compiles the code, builds the Docker images, pushes them to the registry, updates the YAML manifests to include the correct image path and version, and finally applies all of them to the Kubernetes cluster. The first time you run this it will take a bit longer. The manifest file creates a namespace knative-build-pipeline and a service account within it called build-pipeline-controller. This service account won't be able to pull the images from the CR until we define the default image pull secret to be used in every pod created with that service account.
```bash
# Copy the existing image pull secrets from the default namespace to the knative namespace
kubectl get secret bluemix-default-secret-regional -o yaml | sed 's/default/knative-build-pipeline/g' | kubectl -n knative-build-pipeline create -f -

# Patch the service account to include the secret
kubectl patch -n knative-build-pipeline serviceaccount/build-pipeline-controller -p '{"imagePullSecrets":[{"name": "bluemix-knative-build-pipeline-secret-regional"}]}'
```
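To confirm the patch was applied, you can print the imagePullSecrets of the service account; the expected name matches the secret created by the sed command above:

```bash
kubectl get serviceaccount build-pipeline-controller -n knative-build-pipeline \
  -o jsonpath='{.imagePullSecrets[*].name}'
# Should print: bluemix-knative-build-pipeline-secret-regional
```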
Delete the controller pods so they are restarted with the right secrets:
```bash
kubectl get pods -n knative-build-pipeline | awk '/build/{ print $1 }' | xargs kubectl delete pod -n knative-build-pipeline
```
If everything went well, you will see something like this:
```
$ kubectl get all -n knative-build-pipeline
NAME                                             READY   STATUS    RESTARTS   AGE
pod/build-pipeline-controller-85f669c78b-nx7hp   1/1     Running   0          1d
pod/build-pipeline-webhook-7d6dd99bf7-lrzwj      1/1     Running   0          1d

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/build-pipeline-controller   ClusterIP   172.21.137.128   <none>        9090/TCP   7d
service/build-pipeline-webhook      ClusterIP   172.21.166.128   <none>        443/TCP    7d

NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/build-pipeline-controller   1         1         1            1           7d
deployment.apps/build-pipeline-webhook      1         1         1            1           7d

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/build-pipeline-controller-7cd4d5495c   0         0         0       5d
replicaset.apps/build-pipeline-controller-85f669c78b   1         1         1       5d
replicaset.apps/build-pipeline-controller-dd945bf4     0         0         0       5d
replicaset.apps/build-pipeline-webhook-684ccc869b      0         0         0       7d
replicaset.apps/build-pipeline-webhook-7d6dd99bf7      1         1         1       5d
```
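You can also check that the new custom resource definitions were registered; the grep pattern below assumes the resources live in the pipeline.knative.dev API group:

```bash
kubectl get crd | grep pipeline.knative.dev
```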
Prepare a service account to push images
You configured ko to be able to push images to the registry, and the build-pipeline-controller service account to be able to pull images from it. The pipeline will execute builds and push images using the PIPELINE_SERVICE_ACCOUNT in the PIPELINE_NAMESPACE, so you need to ensure that PIPELINE_SERVICE_ACCOUNT can push images to the registry as well. Create a container registry read/write token, in the same way as you did when configuring ko. Define the following secret template:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ibm-cr-token
  annotations:
    build.knative.dev/docker-0: __CR_ENDPOINT__
type: kubernetes.io/basic-auth
stringData:
  username: token
  password: __CR_TOKEN__
```
Fill in the endpoint and token values from the environment variables:
```bash
# Create the secret manifest
sed -e 's/__CR_TOKEN__/'"$CR_TOKEN"'/g' \
    -e 's/__CR_ENDPOINT__/'"$REGISTRY_ENDPOINT"'/g' \
    cr-secret.yaml.template > cr-secret.yaml

# Create the secret in kubernetes
kubectl apply -f cr-secret.yaml

# Alter the service account to use the secret
PIPELINE_SERVICE_ACCOUNT=default
PIPELINE_NAMESPACE=default
kubectl patch -n $PIPELINE_NAMESPACE serviceaccount/$PIPELINE_SERVICE_ACCOUNT -p '{"secrets":[{"name": "ibm-cr-token"}]}'
```
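As a final check, print the service account back and make sure the ibm-cr-token secret now appears in its secrets list:

```bash
kubectl get serviceaccount $PIPELINE_SERVICE_ACCOUNT -n $PIPELINE_NAMESPACE -o yaml
# The "secrets" section should include an entry named ibm-cr-token
```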
Making a code change to Knative
To verify that the development workflow is set up correctly, let's make a small code change to the Knative pipeline controller:
```diff
$ git diff
diff --git a/cmd/controller/main.go b/cmd/controller/main.go
index e6a889ea..34cd26ae 100644
--- a/cmd/controller/main.go
+++ b/cmd/controller/main.go
@@ -63,7 +63,7 @@ func main() {
 	logger, atomicLevel := logging.NewLoggerFromConfig(loggingConfig, logging.ControllerLogKey)
 	defer logger.Sync()
 
-	logger.Info("Starting the Pipeline Controller")
+	logger.Info("Starting the Customized Pipeline Controller")
 
 	// set up signals so we handle the first shutdown signal gracefully
 	stopCh := signals.SetupSignalHandler()
```
You can build and deploy the modified controller with just one command:
```bash
ko apply -f config/controller.yaml
```
The output looks like the following:
```
(...)
2019/02/04 13:03:37 Using base gcr.io/distroless/base:latest for github.com/knative/build-pipeline/cmd/controller
(...)
2019/02/04 13:03:44 Publishing registry.ng.bluemix.net/knative/controller-7cb61323de6451022678822f2a2d2291:latest
2019/02/04 13:03:45 existing blob: sha256:d4210e88ff2398b08758ca768f9230571f8625023c3c59b78b479a26ff2f603d
2019/02/04 13:03:45 existing blob: sha256:bb2297ebc4b391f2fd41c48df5731cdd4dc542f6eb6113436b81c886b139a048
2019/02/04 13:03:45 existing blob: sha256:8ff7789f00584c4605cff901525c8acd878ee103d32351ece7d7c8e5eac5d8b4
2019/02/04 13:03:54 pushed blob: sha256:6c40cc604d8e4c121adcb6b0bfe8bb038815c350980090e74aa5a6423f8f82c0
2019/02/04 13:03:58 pushed blob: sha256:4497e3594708bab98b6f517bd7cfd4a2da18c6c6e3d79731821dd17705bfbee6
2019/02/04 13:03:59 pushed blob: sha256:7aaa1004f57382596bab1f7499bb02e5d1b5b28a288e14e6760ae36b784bf4c0
2019/02/04 13:04:00 registry.ng.bluemix.net/knative/controller-7cb61323de6451022678822f2a2d2291:latest: digest: sha256:f7640cd1e556cc6fe1816d554d7dbd0da1d7d7728f220669e15a00576c999468 size: 918
2019/02/04 13:04:00 Published registry.ng.bluemix.net/knative/controller-7cb61323de6451022678822f2a2d2291@sha256:f7640cd1e556cc6fe1816d554d7dbd0da1d7d7728f220669e15a00576c999468
(...)
deployment.apps/build-pipeline-controller configured
```
Changing the code of the controller causes the controller pod to be destroyed and recreated automatically, so if you check the controller logs you can see the customized startup message:
```
$ kubectl logs pod/$(kubectl get pods -n knative-build-pipeline | awk '/controller/{ print $1 }') -n knative-build-pipeline
{"level":"info","caller":"logging/config.go:96","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n \"level\": \"info\",\n \"development\": false,\n \"sampling\": {\n \"initial\": 100,\n \"thereafter\": 100\n },\n \"outputPaths\": [\"stdout\"],\n \"errorOutputPaths\": [\"stderr\"],\n \"encoding\": \"json\",\n \"encoderConfig\": {\n \"timeKey\": \"\",\n \"levelKey\": \"level\",\n \"nameKey\": \"logger\",\n \"callerKey\": \"caller\",\n \"messageKey\": \"msg\",\n \"stacktraceKey\": \"stacktrace\",\n \"lineEnding\": \"\",\n \"levelEncoder\": \"\",\n \"timeEncoder\": \"\",\n \"durationEncoder\": \"\",\n \"callerEncoder\": \"\"\n }\n}\n"}
{"level":"info","caller":"logging/config.go:97","msg":"Logging level set to info"}
{"level":"warn","caller":"logging/config.go:65","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","logger":"controller","caller":"controller/main.go:66","msg":"Starting the Customized Pipeline Controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"taskrun/taskrun.go:122","msg":"Setting up event handlers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"taskrun/taskrun.go:134","msg":"Setting up ConfigMap receivers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.pipeline-controller","caller":"pipelinerun/pipelinerun.go:110","msg":"Setting up event handlers","knative.dev/controller":"pipeline-controller"}
W0204 12:04:44.062820 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"level":"info","logger":"controller.taskrun-controller.config-store","caller":"configmap/store.go:166","msg":"taskrun config \"config-entrypoint\" config was added or updated: &{gcr.io/k8s-prow/entrypoint@sha256:7c7cd8906ce4982ffee326218e9fc75da2d4896d53cabc9833b9cc8d2d6b2b8f}","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller","caller":"controller/main.go:143","msg":"Waiting for informer caches to sync"}
{"level":"info","logger":"controller","caller":"controller/main.go:156","msg":"Starting controllers"}
{"level":"info","logger":"controller.pipeline-controller","caller":"controller/controller.go:215","msg":"Starting controller and workers","knative.dev/controller":"pipeline-controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"controller/controller.go:215","msg":"Starting controller and workers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.taskrun-controller","caller":"controller/controller.go:223","msg":"Started workers","knative.dev/controller":"taskrun-controller"}
{"level":"info","logger":"controller.pipeline-controller","caller":"controller/controller.go:223","msg":"Started workers","knative.dev/controller":"pipeline-controller"}
```
If you can see the customized startup message ("Starting the Customized Pipeline Controller") in the log above, you have successfully set up your Knative pipeline development environment on IBM Cloud, congratulations!
Notes
The Knative pipeline manifest that configures the build-pipeline-controller service account does not support configuring imagePullSecrets; this is why the service account has to be patched after the initial install. It is convenient, however, when developing on Knative, to simply issue a ko apply -f config/ command to apply all code changes to the cluster at once. That command would, however, revert the service account and drop the imagePullSecrets. I use git stash to work around this issue as follows:
- On a clean code base, alter config/200-serviceaccount.yaml to include the imagePullSecrets:

  ```yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: build-pipeline-controller
    namespace: knative-build-pipeline
  imagePullSecrets:
    - name: bluemix-knative-build-pipeline-secret-regional
  ```

- Run git stash to restore a clean code base
The deployment workflow then becomes:
```bash
# Commit your changes locally, and then pop the service account changes out of stash
# git add ... / git commit ...
git stash pop

# Redeploy
ko apply -f config

# Stash the service account change away
git stash
```
Conclusions
You can follow a similar approach to set up other Knative components as well. In the next blog post, I will continue from here to describe how to set up a CD pipeline through the Knative pipeline service you just installed.
Build pipelines have since been renamed to Tekton pipelines; consequently the new serviceaccount is tekton-pipelines-controller and the new namespace is tekton-pipelines.