This page provides an overview of available methods to install CAP Operator on a Kubernetes cluster.
Installation
1 - Prerequisites
We recommend that you use a “Gardener” managed cluster to deploy CAP applications that are managed with CAP Operator.
The Kubernetes cluster must be set up with the following prerequisites before you install CAP Operator:
Istio (version >= 1.22)
Istio service mesh is used for HTTP traffic management. CAP Operator creates Istio resources to manage incoming HTTP requests to the application as well as to route requests on specific (tenant) subdomains.
You must determine the public ingress gateway subdomain and the overall shoot domain of the cluster and specify them in the chart values.
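For illustration, the subscription server domain and DNS target used later in the chart values are typically derived from the shoot domain as follows (the shoot domain below is a placeholder):

```shell
CLUSTER_DOMAIN="my-shoot.example.com"                 # placeholder shoot domain
SUBSCRIPTION_DOMAIN="cap-operator.${CLUSTER_DOMAIN}"  # domain of the subscription server
DNS_TARGET="public-ingress.${CLUSTER_DOMAIN}"         # DNS target of the public ingress gateway
echo "${SUBSCRIPTION_DOMAIN}"   # cap-operator.my-shoot.example.com
echo "${DNS_TARGET}"            # public-ingress.my-shoot.example.com
```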
Note: Istio promoted many of its APIs to v1 with the 1.22 release. Hence, as of CAP Operator release v0.11.0, Istio version >= 1.22 is a prerequisite.
sap-btp-service-operator or cf-service-operator
These operators can be used for managing SAP BTP service instances and service bindings from within the Kubernetes cluster.
If some SAP BTP services are not available for Kubernetes platforms, you may use cf-service-operator, which creates the services for a Cloud Foundry space and inserts the required access credentials as Secrets into the Kubernetes cluster.
Please note that service credentials added as Kubernetes Secrets to a namespace by these operators support additional metadata. If you don't use this feature of these operators, set
secretKey: credentials
in the spec of these operators to ensure that the service credentials retain any JSON data as-is. We recommend that you use secretKey even when credential metadata is available, to reduce the overhead of parsing multiple JSON attributes.
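As an illustration, a ServiceBinding for the sap-btp-service-operator could set secretKey like this (the names below are placeholders, not values from this documentation):

```yaml
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: my-xsuaa-binding        # placeholder name
  namespace: my-app-namespace   # placeholder namespace
spec:
  serviceInstanceName: my-xsuaa-instance  # placeholder; an existing ServiceInstance in the same namespace
  secretName: my-xsuaa-secret             # placeholder; name of the Secret the operator creates
  secretKey: credentials                  # stores the full service credentials as a single JSON value
```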
“Gardener” certificate management
This component is available in clusters managed by “Gardener” and will be used to manage TLS certificates and issuers. “Gardener” manages encryption, issuing, and signing of certificates. Alternatively, you can use cert-manager.io cert-manager.
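If you opt for cert-manager, the certificate configuration of the Helm chart references an issuer. A minimal self-signed ClusterIssuer, purely as a sketch (the name is a placeholder; production setups would typically use an ACME or CA issuer instead):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer   # placeholder; referenced via certificateConfig.certManager.issuerName
spec:
  selfSigned: {}            # issues self-signed certificates; suitable only for testing
```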
2 - Using Helm
To install the CAP Operator components, we recommend using the Helm chart that is published as an OCI package at oci://ghcr.io/sap/cap-operator-lifecycle/helm/cap-operator.
Installation
Create a namespace and install the Helm chart in that namespace by specifying the domain and the dnsTarget for your subscription server, either

As command line parameters:

kubectl create namespace cap-operator-system
helm upgrade -i -n cap-operator-system cap-operator oci://ghcr.io/sap/cap-operator-lifecycle/helm/cap-operator \
  --set subscriptionServer.domain=cap-operator.<CLUSTER-DOMAIN> \
  --set subscriptionServer.dnsTarget=public-ingress.<CLUSTER-DOMAIN>
Or as a YAML file with the values:

kubectl create namespace cap-operator-system
helm upgrade -i -n cap-operator-system cap-operator oci://ghcr.io/sap/cap-operator-lifecycle/helm/cap-operator -f my-cap-operator-values.yaml
In this example, the provided values file, my-cap-operator-values.yaml, can have the following content:

subscriptionServer:
  dnsTarget: public-ingress.<CLUSTER-DOMAIN>
  domain: cap-operator.<CLUSTER-DOMAIN>
Optional steps
Enable Service Monitors for metrics emitted by controller and subscription server
To enable monitoring via metrics emitted by CAP Operator components, the following value can be specified:

monitoring:
  enabled: true # <-- enables creation of ServiceMonitors for metrics emitted by the CAP Operator components
Detailed operational metrics for the controller can be enabled with the following config:
controller:
  detailedOperationalMetrics: true
Setup Prometheus Integration for Version Monitoring
To use the Version Monitoring feature of the CAP Operator, a Prometheus server URL can be provided to the CAP Operator. When installing the CAP Operator using the Helm chart, the following values can be specified in the values:
controller:
  versionMonitoring:
    prometheusAddress: "http://prometheus-operated.monitoring.svc.cluster.local:9090" # <-- example of a Prometheus server running inside the same cluster
    promClientAcquireRetryDelay: "2h"
    metricsEvaluationInterval: "30m" # <-- duration after which version metrics are evaluated
When the controller is started, the operator tries to connect to the Prometheus server and fetches runtime information to verify the connection. If the connection is not successful, it is retried after the duration specified as controller.versionMonitoring.promClientAcquireRetryDelay. Check the default values for these attributes here.

Note
- When connecting the controller to a Prometheus server running inside the cluster, please ensure that the NetworkPolicies required for connecting to the service in the namespace where Prometheus is running are also created.
- If the Prometheus service is configured to use TLS, the relevant CA root certificates which need to be trusted can be mounted as volumes to the controller.
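A NetworkPolicy for this scenario might look like the following sketch. The pod label and the Prometheus namespace are assumptions; adjust them to match your controller pods and your Prometheus installation:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-controller-to-prometheus
  namespace: cap-operator-system
spec:
  podSelector:
    matchLabels:
      app: cap-operator-controller   # hypothetical label; check the labels on your controller pods
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring   # assumption: Prometheus runs in the "monitoring" namespace
      ports:
        - protocol: TCP
          port: 9090   # Prometheus service port
```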
2.1 - Helm Values
Values
Key | Type | Default | Description |
---|---|---|---|
image.tag | string | "" | Default image tag (can be overwritten on component level) |
image.pullPolicy | string | "" | Default image pull policy (can be overwritten on component level) |
imagePullSecrets | list | [] | Default image pull secrets (can be overwritten on component level) |
podSecurityContext | object | {} | Default pod security context (can be overwritten on component level) |
nodeSelector | object | {} | Default node selector (can be overwritten on component level) |
affinity | object | {} | Default affinity settings (can be overwritten on component level) |
tolerations | list | [] | Default tolerations (can be overwritten on component level) |
priorityClassName | string | "" | Default priority class (can be overwritten on component level) |
topologySpreadConstraints | list | [] | Default topology spread constraints (can be overwritten on component level) |
podLabels | object | {} | Additional pod labels for all components |
podAnnotations | object | {} | Additional pod annotations for all components |
monitoring | object | {"enabled":false} | Monitoring configuration for all components |
monitoring.enabled | bool | false | Optionally enable Prometheus monitoring for all components (disabled by default) |
controller.replicas | int | 1 | Replicas |
controller.image.repository | string | "ghcr.io/sap/cap-operator/controller" | Image repository |
controller.image.tag | string | "" | Image tag |
controller.image.pullPolicy | string | "" | Image pull policy |
controller.imagePullSecrets | list | [] | Image pull secrets |
controller.podLabels | object | {} | Additional labels for controller pods |
controller.podAnnotations | object | {} | Additional annotations for controller pods |
controller.podSecurityContext | object | {} | Pod security context |
controller.nodeSelector | object | {} | Node selector |
controller.affinity | object | {} | Affinity settings |
controller.tolerations | list | [] | Tolerations |
controller.priorityClassName | string | "" | Priority class |
controller.topologySpreadConstraints | list | [] | Topology spread constraints |
controller.securityContext | object | {} | Security context |
controller.resources.limits.memory | string | "500Mi" | Memory limit |
controller.resources.limits.cpu | float | 0.2 | CPU limit |
controller.resources.requests.memory | string | "50Mi" | Memory request |
controller.resources.requests.cpu | float | 0.02 | CPU request |
controller.volumes | list | [] | Optionally specify list of additional volumes for the controller pod(s) |
controller.volumeMounts | list | [] | Optionally specify list of additional volumeMounts for the controller container(s) |
controller.dnsTarget | string | "" | The dns target mentioned on the public ingress gateway service used in the cluster |
controller.detailedOperationalMetrics | bool | false | Optionally enable detailed operational metrics for the controller by setting this to true |
controller.versionMonitoring.prometheusAddress | string | "" | The URL of the Prometheus server from which metrics related to managed application versions can be queried |
controller.versionMonitoring.metricsEvaluationInterval | string | "1h" | The duration (example 2h) after which versions are evaluated for deletion; based on specified workload metrics |
controller.versionMonitoring.promClientAcquireRetryDelay | string | "1h" | The duration (example 10m) to wait before retrying to acquire Prometheus client and verify connection, after a failed attempt |
subscriptionServer.replicas | int | 1 | Replicas |
subscriptionServer.image.repository | string | "ghcr.io/sap/cap-operator/server" | Image repository |
subscriptionServer.image.tag | string | "" | Image tag |
subscriptionServer.image.pullPolicy | string | "" | Image pull policy |
subscriptionServer.imagePullSecrets | list | [] | Image pull secrets |
subscriptionServer.podLabels | object | {} | Additional labels for subscription server pods |
subscriptionServer.podAnnotations | object | {} | Additional annotations for subscription server pods |
subscriptionServer.podSecurityContext | object | {} | Pod security context |
subscriptionServer.nodeSelector | object | {} | Node selector |
subscriptionServer.affinity | object | {} | Affinity settings |
subscriptionServer.tolerations | list | [] | Tolerations |
subscriptionServer.priorityClassName | string | "" | Priority class |
subscriptionServer.topologySpreadConstraints | list | [] | Topology spread constraints |
subscriptionServer.securityContext | object | {} | Security context |
subscriptionServer.resources.limits.memory | string | "200Mi" | Memory limit |
subscriptionServer.resources.limits.cpu | float | 0.1 | CPU limit |
subscriptionServer.resources.requests.memory | string | "20Mi" | Memory request |
subscriptionServer.resources.requests.cpu | float | 0.01 | CPU request |
subscriptionServer.volumes | list | [] | Optionally specify list of additional volumes for the server pod(s) |
subscriptionServer.volumeMounts | list | [] | Optionally specify list of additional volumeMounts for the server container(s) |
subscriptionServer.port | int | 4000 | Service port |
subscriptionServer.istioSystemNamespace | string | "istio-system" | The namespace in the cluster where istio system components are installed |
subscriptionServer.ingressGatewayLabels | object | {"app":"istio-ingressgateway","istio":"ingressgateway"} | Labels used to identify the istio ingress-gateway component |
subscriptionServer.dnsTarget | string | "public-ingress.clusters.cs.services.sap" | The dns target mentioned on the public ingress gateway service used in the cluster |
subscriptionServer.domain | string | "cap-operator.clusters.cs.services.sap" | The domain under which the cap operator subscription server would be available |
subscriptionServer.certificateManager | string | "Gardener" | Certificate manager which can be either Gardener or CertManager |
subscriptionServer.certificateConfig | object | {"certManager":{"issuerGroup":"","issuerKind":"","issuerName":""},"gardener":{"issuerName":"","issuerNamespace":""}} | Certificate configuration |
subscriptionServer.certificateConfig.gardener | object | {"issuerName":"","issuerNamespace":""} | Optionally specify the corresponding certificate configuration |
subscriptionServer.certificateConfig.gardener.issuerName | string | "" | Issuer name |
subscriptionServer.certificateConfig.gardener.issuerNamespace | string | "" | Issuer namespace |
subscriptionServer.certificateConfig.certManager | object | {"issuerGroup":"","issuerKind":"","issuerName":""} | Cert Manager configuration |
subscriptionServer.certificateConfig.certManager.issuerGroup | string | "" | Issuer group |
subscriptionServer.certificateConfig.certManager.issuerKind | string | "" | Issuer kind |
subscriptionServer.certificateConfig.certManager.issuerName | string | "" | Issuer name |
webhook.sidecar | bool | false | Side car to mount admission review |
webhook.replicas | int | 1 | Replicas |
webhook.image.repository | string | "ghcr.io/sap/cap-operator/web-hooks" | Image repository |
webhook.image.tag | string | "" | Image tag |
webhook.image.pullPolicy | string | "" | Image pull policy |
webhook.imagePullSecrets | list | [] | Image pull secrets |
webhook.podLabels | object | {} | Additional labels for validating webhook pods |
webhook.podAnnotations | object | {} | Additional annotations for validating webhook pods |
webhook.podSecurityContext | object | {} | Pod security context |
webhook.nodeSelector | object | {} | Node selector |
webhook.affinity | object | {} | Affinity settings |
webhook.tolerations | list | [] | Tolerations |
webhook.priorityClassName | string | "" | Priority class |
webhook.topologySpreadConstraints | list | [] | Topology spread constraints |
webhook.securityContext | object | {} | Security context |
webhook.resources.limits.memory | string | "200Mi" | Memory limit |
webhook.resources.limits.cpu | float | 0.1 | CPU limit |
webhook.resources.requests.memory | string | "20Mi" | Memory request |
webhook.resources.requests.cpu | float | 0.01 | CPU request |
webhook.service | object | {"port":443,"targetPort":1443,"type":"ClusterIP"} | Service configuration |
webhook.service.type | string | "ClusterIP" | Service type |
webhook.service.port | int | 443 | Service port |
webhook.service.targetPort | int | 1443 | Target port |
webhook.certificateManager | string | "Default" | Certificate manager which can be either Default or CertManager |
webhook.certificateConfig | object | {"certManager":{"issuerGroup":"","issuerKind":"","issuerName":""}} | Optionally specify the corresponding certificate configuration |
webhook.certificateConfig.certManager.issuerGroup | string | "" | Issuer group |
webhook.certificateConfig.certManager.issuerKind | string | "" | Issuer kind |
webhook.certificateConfig.certManager.issuerName | string | "" | Issuer name |
3 - Using CAP Operator Manager
To install the CAP Operator using CAP Operator Manager, please execute the following commands:
kubectl apply -f https://github.com/SAP/cap-operator-lifecycle/releases/latest/download/manager_manifest.yaml
The above command creates the namespace cap-operator-system
with CAP Operator Manager installed. Once the CAP Operator Manager pod is running, you can install the CAP Operator by executing the following command:
kubectl apply -n cap-operator-system -f https://github.com/SAP/cap-operator-lifecycle/releases/latest/download/manager_default_CR.yaml
This works only if the ingressGatewayLabels
in your cluster match the following values:
ingressGatewayLabels:
- name: istio
value: ingressgateway
- name: app
value: istio-ingressgateway
If not, you will have to manually create the CAPOperator
resource. For more details, please refer to the documentation.