There are several methods to install open source Spinnaker on EKS/Kubernetes.
In this workshop we will be using the Spinnaker Operator, a Kubernetes Operator for managing Spinnaker, built by Armory. The Operator makes managing Spinnaker, which runs in Kubernetes, dramatically simpler and more automated, while introducing new Kubernetes-native features. The existing tool, Halyard, involves significant manual processes and requires Spinnaker domain expertise.
In contrast, the Operator lets you treat Spinnaker as just another Kubernetes deployment, which makes installing and managing Spinnaker easy and reliable. The Operator unlocks the scalability of a GitOps workflow by defining Spinnaker configurations in a code repository rather than in hal commands.
More details on the benefits of Spinnaker Operator can be found in the Armory Docs.
We assume that you have an existing EKS cluster eksworkshop-eksctl created from the EKS Workshop.
We also assume that you have increased the disk size on your Cloud9 instance, as we need to build Docker images for our application.
[Optional] If you want to use the AWS Console to navigate and explore resources in Amazon EKS, ensure that you have completed Console Credentials to get full access to the EKS cluster in the EKS console.
We have also installed the prerequisites for the EKS cluster installation based on the instructions here,
and validated the IAM role in use by the Cloud9 IDE based on the instructions here.
Ensure that the command below returns the IAM role that you attached to the Cloud9 IDE:
aws sts get-caller-identity
test -n "$AWS_REGION" && echo AWS_REGION is "$AWS_REGION" || echo AWS_REGION is not set
test -n "$ACCOUNT_ID" && echo ACCOUNT_ID is "$ACCOUNT_ID" || echo ACCOUNT_ID is not set
If not, export ACCOUNT_ID and AWS_REGION as environment variables.
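If either variable is unset, the following sketch derives and persists both values. This assumes you are running on Cloud9/EC2 (the region is read from the instance metadata service) and that `jq` is installed:

```shell
# Derive the account ID from the current caller identity
export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)

# Derive the region from the EC2 instance metadata (works on Cloud9)
export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')

# Persist both into the bash profile so new terminals inherit them
echo "export ACCOUNT_ID=${ACCOUNT_ID}" | tee -a ~/.bash_profile
echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile
aws configure set default.region ${AWS_REGION}
```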
Installing the Spinnaker services requires a larger instance type, so we are creating a new managed node group spinnaker.
We are also deleting the existing node group nodegroup
that was created as part of cluster creation, so that the Spinnaker Operator creates the services on the new spinnaker node group.
cat << EOF > spinnakerworkshop.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eksworkshop-eksctl
  region: ${AWS_REGION}
# https://eksctl.io/usage/eks-managed-nodegroups/
managedNodeGroups:
  - name: spinnaker
    minSize: 2
    maxSize: 3
    desiredCapacity: 3
    instanceType: m5.large
    ssh:
      enableSsm: true
    volumeSize: 20
    labels: {role: spinnaker}
    tags:
      nodegroup-role: spinnaker
EOF
eksctl create nodegroup -f spinnakerworkshop.yaml
eksctl delete nodegroup --cluster=eksworkshop-eksctl --name=nodegroup
Confirm the setup
kubectl get nodes
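Because the node group spec above labels its nodes with `role: spinnaker`, you can also filter on that label to confirm the new m5.large nodes are Ready before the old node group drains:

```shell
# Show only the nodes from the new spinnaker node group
kubectl get nodes -l role=spinnaker

# Optionally block until every labeled node reports Ready
kubectl wait --for=condition=Ready node -l role=spinnaker --timeout=300s
```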
Pick a release from https://github.com/armory/spinnaker-operator/releases and export that version. Below we use the latest release of Spinnaker Operator available when this workshop was written.
export VERSION=1.2.4
echo $VERSION
mkdir -p spinnaker-operator && cd spinnaker-operator
bash -c "curl -L https://github.com/armory/spinnaker-operator/releases/download/v${VERSION}/manifests.tgz | tar -xz"
kubectl apply -f deploy/crds/
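To confirm the CustomResourceDefinitions were registered, list the CRDs and filter for the operator's API group (the exact CRD names can vary between operator releases, so we grep rather than name them):

```shell
# The operator manifests register CRDs such as SpinnakerService
kubectl get crd | grep -i spinnaker
```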
Install the operator in the namespace spinnaker-operator. We use Cluster mode
for the operator, which works across namespaces and requires a ClusterRole to perform validation.
kubectl create ns spinnaker-operator
kubectl -n spinnaker-operator apply -f deploy/operator/cluster
Make sure the Spinnaker Operator pod is running.
This may take a couple of minutes.
kubectl get pod -n spinnaker-operator
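Instead of polling `kubectl get pod`, you can block until the operator finishes rolling out. This assumes the Deployment in the manifests is named `spinnaker-operator`:

```shell
# Blocks until the operator Deployment reports all replicas available
kubectl -n spinnaker-operator rollout status deployment/spinnaker-operator --timeout=300s
```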