Installing Lagoon Into Existing Kubernetes Cluster#
- Kubernetes 1.21+ (1.23 recommended)
- Familiarity with Helm and Helm Charts, and kubectl.
- Ingress controller; we recommend ingress-nginx, installed into the ingress-nginx namespace
- Cert-manager (for TLS); we highly recommend using Let's Encrypt
- StorageClasses (RWO as default, RWX for persistent types)
We acknowledge that this is a lot of steps, and our roadmap for the immediate future includes reducing the number of steps in this process.
Specific requirements (as of January 2023)#
Lagoon supports Kubernetes versions 1.21 onwards. We actively test and develop against Kubernetes 1.24, and also regularly test against 1.21, 1.22, and 1.25.
The next large round of breaking changes arrives in Kubernetes 1.25, and we will endeavour to address these in advance, although this will require a bump in Lagoon's minimum supported Kubernetes version.
Lagoon is currently configured only for a single ingress-nginx controller, so defining an IngressClass has not been necessary in the past.
In order to use the recent ingress-nginx controllers (v4 onwards, required for Kubernetes 1.22), the following configuration should be used, as per the ingress-nginx docs.
- ingress-nginx should be configured as the default controller: set `controller.ingressClassResource.default: true` in Helm values
- ingress-nginx should be configured to watch Ingresses without an IngressClass set: set `controller.watchIngressWithoutClass: true` in Helm values
This will configure the controller to create any new ingresses with itself as the IngressClass, and also to handle any existing ingresses without an IngressClass set.
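The two settings above can be combined in a Helm values file when installing or upgrading the ingress-nginx chart. A minimal sketch (adapt to your existing values):

```yaml
# values.yaml for the ingress-nginx Helm chart (v4+)
controller:
  ingressClassResource:
    # Mark this controller's IngressClass as the cluster default,
    # so new Ingresses created without an ingressClassName use it.
    default: true
  # Also reconcile existing Ingresses that have no IngressClass set.
  watchIngressWithoutClass: true
```

Applied with something like `helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f values.yaml`.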
Other configurations may be possible, but have not been tested.
Versions 2.1 and 2.2+ of Harbor are currently supported. The method of retrieving robot accounts changed in 2.2, and the Lagoon remote-controller is able to handle both token formats. This means that Harbor has to be configured with the credentials in lagoon-build-deploy - not lagoon-core.
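As an illustration, the Harbor credentials could be supplied through the lagoon-build-deploy chart's Helm values roughly as follows. The key names and host below are assumptions for illustration; verify them against the chart's own values.yaml:

```yaml
# Hypothetical values fragment for the lagoon-build-deploy Helm chart.
# Key names are assumptions - check the chart's values.yaml before use.
harbor:
  enabled: true
  host: https://harbor.example.com   # your Harbor endpoint (example)
  adminUser: admin                   # account used to create robot accounts
  adminPassword: <harbor-admin-password>
```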
k8up for backups#
Lagoon has built-in configuration for the K8up backup operator: it can configure pre-backup pods, schedules, and retention, and manage backups and restores. Lagoon currently supports only the 1.x versions of K8up, owing to an API namespace change in v2 onwards, but we are working on a fix.
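The incompatibility is visible in the resources' API group: K8up 1.x CRDs live under `backup.appuio.ch`, while v2 onwards moved to `k8up.io`. As a rough sketch (bucket name and schedules are examples only), a 1.x Schedule of the kind Lagoon manages looks like:

```yaml
# K8up 1.x Schedule (API group backup.appuio.ch) - the version Lagoon supports.
# In K8up v2+ this became apiVersion: k8up.io/v1, which Lagoon does not yet handle.
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: backup-schedule
spec:
  backend:
    s3:
      bucket: baas-example-project   # example bucket name
  backup:
    schedule: '0 1 * * *'            # daily backup at 01:00
  prune:
    schedule: '0 4 * * *'
    retention:
      keepDaily: 7
```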
Lagoon utilises a default 'standard' StorageClass for most workloads, and the internal provisioner for most Kubernetes platforms will suffice. It should be configured for dynamic provisioning and be expandable where possible.
Lagoon also requires a StorageClass called 'bulk' to be available to support persistent pod replicas (across nodes). This StorageClass should support the ReadWriteMany access mode and should be configured for dynamic provisioning and be expandable where possible. See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes for more information, and the production drivers list for a complete list of compatible drivers.
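As a sketch, a 'bulk' StorageClass backed by an RWX-capable CSI driver might look like the following. The provisioner shown (the NFS CSI driver) and its parameters are illustrative; substitute your platform's own ReadWriteMany-capable driver:

```yaml
# Example 'bulk' StorageClass; provisioner and parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bulk
provisioner: nfs.csi.k8s.io       # any RWX-capable driver works here
allowVolumeExpansion: true        # expandable, as recommended above
volumeBindingMode: Immediate
parameters:
  server: nfs.example.com         # assumption: an NFS server reachable from the cluster
  share: /exports
```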
We have currently only included instructions for the (now deprecated) EFS Provisioner, as the production EFS CSI driver has issues provisioning more than 120 PVCs, for which we are awaiting possible upstream fixes - here and here. Most other providers' CSI drivers should also work, as will configurations using an NFS-compatible server and provisioner.
How much Kubernetes experience/knowledge is required?#
Lagoon uses some very involved Kubernetes and Cloud Native concepts, and while full familiarity may not be necessary to install and configure Lagoon, diagnosing issues and contributing may prove difficult without a good level of familiarity.
As an indicator, we suggest comfort with the curriculum for the Certified Kubernetes Administrator (CKA) as a minimum.