
Develop your App for Kubernetes with SAP Gardener and Java SDK

SAP Gardener is a managed Kubernetes service by SAP developed as an Open Source project. It helps create and manage multiple Kubernetes clusters with less effort by abstracting environment specifics to deliver the same homogeneous Kubernetes-native DevOps experience everywhere.

The SAP Cloud SDK for Java supports SAP Gardener-based Kubernetes clusters out of the box.

SAP Cloud SDK Features Supported on SAP Gardener

Find below the list of features we currently support.

Legend: ✅ - supported, ❗- partially supported, ❌ - not supported

  • ✅ Consume SAP BTP services like Destination, Connectivity, IAS, XSUAA, and others
  • ✅ Multitenancy
  • ✅ Resilience & Caching
  • ✅ Connect to and consume services from SAP S/4HANA Cloud
  • ❌ Connect to and consume services from SAP S/4HANA On-Premise
  • ✅ Seamless use of typed clients provided by the SAP Cloud SDK

Getting Started with the SAP Cloud SDK on Gardener

This detailed guide will help you get your SAP Cloud SDK Java application up and running in an SAP Gardener-based Kubernetes cluster. You can also use this guide to migrate an existing application to Kubernetes.


For additional information on more deployment options you can also check out the guide for JavaScript.


To follow this guide you will need:

  • An SAP Gardener-based Kubernetes cluster
  • The SAP BTP Service Operator installed in the cluster
  • Docker and a Docker repository
  • An application using the SAP Cloud SDK

Check out the details below in case you are uncertain about any of the prerequisites.

Gardener Cluster

This guide assumes you have created a cluster via the Gardener dashboard, have the Kubernetes CLI (kubectl) installed on your local machine, and have it set up for cluster access.


Running kubectl cluster-info should print out your cluster endpoints.

In case you haven't set this up yet, you can do so by downloading a kubeconfig from your Gardener dashboard. You can read more about accessing clusters using a kubeconfig in the Kubernetes documentation.

We also recommend having an Ingress set up that exposes the application to the internet. You can read more about configuring an Ingress in the Gardener documentation.

SAP BTP Service Operator

This guide assumes you have the SAP BTP Service Operator installed in your cluster. The operator is used to create and bind service instances. The same can also be achieved via the Service Catalog; however, this guide will focus on using the Service Operator.


In case you don't have it installed please follow the documentation to install it.


Docker

This guide assumes you have Docker installed on your local machine.

Furthermore, you need a Docker repository where you can store images. The repository needs to be publicly accessible in order for the cluster to access and download the Docker image we are going to create.

In case you don't have such a repository yet, create one with the registry provider of your choice.

Access to images in a repository may be limited to authenticated and/or authorized users, depending on your configuration.

Make sure you are logged in to your repository on your local machine by running:

docker login <your-repo> --username=<your-username>

And check your configuration, which is usually located at <your-home-directory>/.docker/config.json.


In case AuthN/AuthZ is required to download images make sure you have a secret configured in your cluster:

kubectl create secret docker-registry <name-of-the-secret> --docker-username=<username> --docker-password=<API-token> --docker-server=<your-repo>

Application using the SAP Cloud SDK

If you don't have an application already you can comfortably create one from our archetypes.

Containerize the Application

To run on Kubernetes the application needs to be shipped in a container. For this guide we will be using Docker.

Create a Dockerfile in the project root directory:

FROM openjdk:8-jdk-alpine
ARG JAR_FILE=application/target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

If needed, update the JAR_FILE to point to your JAR file.


You can find more information on how to containerize Spring Boot applications in this guide (in particular, check the Containerize It section).

Compile and push the image by running:

docker build -t <your-repo>/<your-image-name> .
docker push <your-repo>/<your-image-name>

In case you are facing authorization issues when pushing to your repository refer to the dedicated section under Prerequisites.

Create a Kubernetes Deployment

  1. Create a new YAML file describing the Kubernetes deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            # Configure the docker image you just pushed to your repository here
            - image: <your-repo>/<your-image>:latest
              name: my-app
              imagePullPolicy: Always
              resources:
                requests:
                  memory: '1Gi'
                  cpu: '500m'
                limits:
                  memory: '1.5Gi'
                  cpu: '750m'
              # Volume mounts needed for injecting BTP service credentials
              volumeMounts: []
          # In case your repository requires a login, reference your secret here
          imagePullSecrets:
            - name: <your-secret-for-docker-login>
          # Volumes containing BTP service credentials from secrets
          volumes: []
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: my-app
      name: my-app
      namespace: default
    spec:
      type: NodePort
      ports:
        - port: 8080
      selector:
        app: my-app
  2. Install the configuration via kubectl apply -f deployment.yml.

  3. Monitor the status of the deployment by running: kubectl get deployment my-app-deployment.

Eventually, you should see an output similar to:

$ kubectl get deployment my-app-deployment
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
my-app-deployment   1/1     1            1           15s

In case something went wrong, use kubectl describe together with the deployment or pod in question to get more information about the status of your application.

Create an Ingress

To make your application available from outside the cluster we will create an Ingress.


In case you already have an Ingress configured in your cluster, you only need to add a new rule for your new application.

You can read more about configuring an Ingress on the Gardener documentation.

  1. Create a new YAML file containing the following Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-router
  namespace: default
spec:
  tls: # managed
    # - hosts:
    #     - "<your-cluster-host>"
    #     - "*.ingress.<your-cluster-host>"
    #   secretName: secret-tls
  rules:
    - host: 'my-app.ingress.<your-cluster-host>'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 8080
  2. Install the configuration via kubectl apply -f ingress.yml.

  3. Verify the Ingress is up and running: kubectl describe ingress ingress-router

You should see an entry with the path / pointing to the backend my-app.


In case something went wrong and you are struggling to configure the Ingress you can also come back and set it up later. The Ingress is a convenient way to access your application. It is not strictly required for the rest of this guide.

Recommended: Configure TLS for your Ingress

Enable the NGINX Ingress add-on in your Gardener dashboard. The process may take a few minutes. Afterwards, you should see a domain in the dashboard as well as a Kubernetes secret secret-tls.

Un-comment the 4 lines in the YAML above using the generated domain and secret. Then re-deploy the configuration as usual. Your cluster endpoint should now be trusted by your browser.


We highly recommend enabling TLS for your cluster endpoints. It ensures your client (e.g. browser) can verify the cluster's identity.

Access Your Application

At this point take a moment to verify you can access your application. Use the host you have defined in your Ingress rule in a browser or other tool of your choice (e.g. Postman). In case you started with an SAP Cloud SDK Archetype, you should be greeted with a welcome page under the root path.


In case you skipped setting up an Ingress before you can use port forwarding to access your application.

Identify the pod name of your application with kubectl get pods. Then enable port forwarding to it by running kubectl port-forward <your-pod-name> 8080:8080. With that, you should be able to access the application at http://localhost:8080.

Bind SAP BTP Services to the Application

The SAP Business Technology Platform offers various services that can be used by applications. To access services from a Kubernetes environment, service instances have to be created and bound to the application.

For this guide we'll assume we want to use two services:

  1. Destination Service
  2. Identity Authentication Service (IAS)

Bind the Destination Service

  1. Create a new destination-service.yml file:

apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: destination-service
spec:
  serviceOfferingName: destination
  servicePlanName: lite
  externalName: default-destination-service
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: my-destination-service-binding
spec:
  serviceInstanceName: destination-service
  secretName: my-destination-service-secret
  secretRootKey: my-destination-service-key
  2. Install the configuration via kubectl apply -f destination-service.yml.

  3. Monitor the status via kubectl describe ServiceInstance destination-service. Eventually this should automatically create a Kubernetes secret named my-destination-service-secret. This secret will contain the actual service binding information.

  4. Mount the my-destination-service-secret secret into the file system of the application as follows:

    1. Find the empty list of volumes at the end of your deployment.yml. Add a new volume, referencing the secret:

      - name: my-destination-service-binding-volume
        secret:
          secretName: my-destination-service-secret
    2. Mount this volume into the file system of your application. Add it to the empty list of volumeMounts in the container section of your deployment.yml:

      - name: my-destination-service-binding-volume
        mountPath: '/etc/secrets/sapcp/destination/my-destination-service'
        readOnly: true
  5. Update the configuration via kubectl apply -f deployment.yml.
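The mounted secret surfaces to the application as plain files, one per binding property, under the mountPath configured above. The following local sketch mimics that layout so you can see what the application finds at runtime; the file names and values are illustrative, not taken from a real binding:

```shell
# Sketch only: recreate the mount layout locally. In the cluster this
# layout is produced by the volumeMounts entry above.
root=$(mktemp -d)/etc/secrets/sapcp/destination/my-destination-service
mkdir -p "$root"

# Each key of my-destination-service-secret becomes one file
# (illustrative names and values):
printf '%s' 'my-client-id'         > "$root/clientid"
printf '%s' 'https://dest.example' > "$root/uri"

# The application reads every file as one property of the service binding:
for f in "$root"/*; do
  printf '%s=%s\n' "$(basename "$f")" "$(cat "$f")"
done
```

Nothing has to be created manually in the cluster; Kubernetes materializes the secret keys as files when the pod starts.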

Bind the Identity Authentication Service

  1. Create a new identity-service.yaml file:
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: my-identity-service
spec:
  serviceOfferingName: identity
  servicePlanName: application
  parameters:
    # Allowed redirect URIs in case you want to use an approuter behind an ingress for user login
    # oauth2-configuration:
    #   redirect-uris:
    #     - https://*.ingress.<your-cluster-host>/login/callback
    consumed-services: []
    xsuaa-cross-consumption: true
    multi-tenant: true
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: my-identity-service-binding
spec:
  serviceInstanceName: my-identity-service
  secretName: my-identity-service-secret
  secretRootKey: my-identity-service-key
  2. Repeat the same steps 2-5 from the previous section, always replacing destination with identity.

Excursion: Bind Services created by the Service Catalog

In case you are using the Kubernetes Service Catalog via the Service Catalog CLI, you will receive service bindings that are not immediately compatible with the SAP Cloud SDK.

Known XSUAA Service Incompatibility

For example, let us assume you want to create an XSUAA Service Binding. You would use commands similar to the following:

svcat provision svcat-xsuaa-service --class xsuaa --plan application
svcat bind svcat-xsuaa-service

To see the resulting secret on K8s you can run the following command:

kubectl get secrets svcat-xsuaa-service -o yaml

The data block of the result looks something like this:

apiurl: <base64-encoded-value>
clientid: <base64-encoded-value>
clientsecret: <base64-encoded-value>
credential-type: <base64-encoded-value>
identityzone: <base64-encoded-value>
identityzoneid: <base64-encoded-value>
sburl: <base64-encoded-value>
subaccountid: <base64-encoded-value>
tenantid: <base64-encoded-value>
tenantmode: <base64-encoded-value>
uaadomain: <base64-encoded-value>
url: <base64-encoded-value>
verificationkey: <base64-encoded-value>
xsappname: <base64-encoded-value>
zoneid: <base64-encoded-value>

You can see that the property plan is missing there. This property, however, is required by the SAP Cloud SDK, so runtime errors occur once the application tries to read this service binding.

To fix this issue you need to edit the secret so that it contains the plan property. The easiest way, when you are already using the CLI, is by using the kubectl edit command:

kubectl edit secrets svcat-xsuaa-service

In there you can now add a plan property containing the base64 encoded value of your service plan. For the application plan provisioned above:

| plan        | base64 encoded value |
| ----------- | -------------------- |
| application | YXBwbGljYXRpb24=     |

The resulting service binding can now be consumed with the SAP Cloud SDK.
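The required value is simply the base64 encoding of the plan name and can be computed on the command line. Note the use of printf rather than echo, so that no trailing newline gets encoded; the kubectl patch line is a sketch of a non-interactive alternative to kubectl edit:

```shell
# Compute the base64 value for the missing "plan" property.
# printf (not echo) avoids encoding an unwanted trailing newline.
plan_b64=$(printf '%s' 'application' | base64)
echo "plan: $plan_b64"   # -> plan: YXBwbGljYXRpb24=

# Sketch of a non-interactive alternative to kubectl edit
# (requires cluster access, hence commented out):
# kubectl patch secret svcat-xsuaa-service -p "{\"data\":{\"plan\":\"$plan_b64\"}}"
```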

Known Connectivity Service Incompatibility

As another example, let us assume you want to create a Connectivity Service Binding with the Service Catalog CLI.

You would, again, use commands similar to the following to create the binding:

svcat provision svcat-connectivity-service --class connectivity --plan connectivity_proxy
svcat bind svcat-connectivity-service

This time we will have a look at the decoded content by executing the following command:

kubectl get secret svcat-connectivity-service -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'

This will show you the content of the data block with all values base64 decoded:

clientid: <some-string>
clientsecret: <some-string>
connectivity_service: {"CAs_path":"<some-string>","CAs_signing_path":"<some-string>","api_path":"<some-string>","tunnel_path":"<some-string>","url":"<some-string>"}
subaccount_id: <some-string>
subaccount_subdomain: <some-string>
token_service_domain: <some-string>
token_service_url: <some-string>
token_service_url_pattern: <some-string>
token_service_url_pattern_tenant_key: <some-string>
xsappname: <some-string>

Here you can see that the connectivity_service property contains a JSON object, whereas the other properties contain plain strings. Due to the way the SAP Cloud SDK reads service bindings and tries to find the credentials, it assumes that a property containing a single JSON object holds the credentials. In our case, however, this does not hold true.
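To make that detection logic tangible, here is a small sketch (property values are illustrative placeholders) that flags every property whose value is a JSON object. For this binding only connectivity_service qualifies, which is why the wrong property is picked as the credentials container:

```shell
# Sketch: flag every binding property whose value is a JSON object.
# Property values are illustrative placeholders.
binding='clientid: <some-string>
connectivity_service: {"api_path":"<some-string>","url":"<some-string>"}
xsappname: <some-string>'

# Only values starting with "{" are JSON objects:
json_props=$(echo "$binding" | awk -F': ' '$2 ~ /^\{/ {print $1}')
echo "JSON object properties: $json_props"
# prints: JSON object properties: connectivity_service
```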

To fix this issue, follow the same steps described above, this time removing the unnecessary property:

kubectl edit secrets svcat-connectivity-service

On-Premise Connectivity


On-Premise connectivity in Kubernetes is currently not available for external SAP customers. This might change in the near future. We'll update our documentation accordingly.

The following steps have not been tested by our team.

To connect to On-Premise systems inside a Kubernetes cluster, you need to use the Connectivity Proxy. The following guide will show you what has to be done to create and use it.

  1. You need to create a route for the Connectivity Proxy to use. This route needs to have TLS enabled. To enable TLS on SAP Gardener, refer to the note in the Create an Ingress section.

    Here is an example where we add our custom domain connectivitytunnel.* to the TLS section of our SAP Gardener Ingress. This creates a certificate for this domain automatically.

    tls:
      - hosts:
          - '<your-cluster-host>'
          - '*.ingress.<your-cluster-host>'
          - 'connectivitytunnel.ingress.<your-cluster-host>'
        secretName: secret-tls
  2. Now we need to add our CA certificate to the JVM trust store of the Cloud Connector. The CA certificate is stored in the TLS secret, in our case, it is secret-tls.

    To access the information inside a secret, use the following code snippet:

    kubectl get secret <secret-name> -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'

    Inside the secret there should be a block prefixed with ca.crt. Copy this certificate into a file and then follow this guide to add it to the JVM trust store of your Cloud Connector.

  3. Create and bind the connectivity service with the connectivity_proxy plan. We already explained how to do this in the section above. Additionally, the secret's content has to be converted from the Kubernetes-native YAML representation into JSON to be consumable by the Connectivity Proxy. Retrieve the secret's content using the previous code snippet and convert it into JSON before saving it. You can use the following as a guiding example.

    apiVersion: v1
    kind: Secret
    metadata:
      name: connectivity-proxy-service-key
    type: Opaque
    stringData:
      connectivity_key: '{
        "clientid": "<client-id>",
        "clientsecret": "<clientsecret>",
        "xsappname": "<xsappname>",
        "connectivity_service": {
          "subaccount_id": "<subaccount_id>",
          "subaccount_subdomain": "<subaccount_subdomain>",
          "token_service_domain": "<token_service_domain>",
          "token_service_url": "<token_service_url>",
          "token_service_url_pattern": "https://{tenant}",
          "token_service_url_pattern_tenant_key": "subaccount_subdomain"
        }
      }'
    Note that we used the stringData field type instead of the default data field type to benefit from automatic base64 encoding, instead of doing it ourselves. This is a requirement of the connectivity proxy since it can't consume the data of the secret in YAML format yet.

  4. Now we need to download the CA certificate of the connectivity service and create a secret containing:

    • The CA certificate of the connectivity service
    • Our private key
    • Our public certificate

    The private key and public certificate are also stored in our TLS secret, use the previous code snippet to retrieve it from the secret and save them in separate files. Finally, download the CA certificate with the following line:

    curl -o connectivity_ca.crt

    Now you can create the secret with this command:

    kubectl create secret generic connectivity-tls --from-file=ca.crt=<your-connectivity-ca.crt> --from-file=tls.key=<your-private.key> --from-file=tls.crt=<your-public.crt> --namespace default
  5. Create a secret that contains credentials to access the docker image which the Connectivity Proxy is using.

    The image is located here:

    To create the registry secret, use the following command:

    kubectl create secret docker-registry <your-registry-secret> \
    --docker-username=<your-username> \
    --docker-password=<your-password> \
    --docker-server=<your-registry-server>
  6. Create a values.yaml file containing the configuration that suits your desired operational mode of the Connectivity Proxy. For the available operational modes, refer to the documentation.

    For the specific content of the configuration refer to the configuration guide.

    Here is an example for the Single tenant in a trusted environment mode:

    deployment:
      replicaCount: 1
      image:
        pullSecret: 'proxy-secret'
    ingress:
      tls:
        secretName: 'connectivity-tls'
    config:
      integration:
        auditlog:
          mode: console
        connectivityService:
          serviceCredentialsKey: 'connectivity_key'
      tenantMode: dedicated
      subaccountId: '<subaccount-id>'
      subaccountSubdomain: '<subaccount-domain>'
      servers:
        businessDataTunnel:
          externalHost: 'connectivitytunnel.ingress.<your-cluster-host>'
          externalPort: 443
        proxy:
          http:
            enabled: true
            enableProxyAuthorization: false
          rfcAndLdap:
            enabled: true
            enableProxyAuthorization: false
          socks5:
            enableRequestTracing: true
            enableProxyAuthorization: false
  7. For your application to reach On-Premise destinations, it needs to provide the proxy settings and the token URL. Currently, you have to add them manually to the secret containing the connectivity service binding.

    To do this, use the following code snippet:

    kubectl edit secret <binding-name>

    Now you have to add the fields onpremise_proxy_host, onpremise_proxy_port, and url. The host follows the pattern connectivity-proxy.<namespace>, which in our case is connectivity-proxy.default. The default port is 20003. The url field should contain the same value as token_service_url. Be aware that the values have to be base64 encoded, for example:

    onpremise_proxy_host: Y29ubmVjdGl2aXR5LXByb3h5LmRlZmF1bHQ=
    onpremise_proxy_port: MjAwMDM=
    url: aHR0cHM6Ly9teS1hcGkuYXV0aGVudGljYXRpb24uc2FwLmhhbmEub25kZW1hbmQuY29tCg==
  8. Finally, add the binding to your deployment.yml, the same way you would add any other binding.
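The encoded values from step 7 can be produced and double-checked locally. Again, printf avoids encoding a trailing newline; the url value is not shown because it depends on your token_service_url:

```shell
# Encode the values for step 7. printf (not echo) avoids
# encoding an unwanted trailing newline.
host_b64=$(printf '%s' 'connectivity-proxy.default' | base64)
port_b64=$(printf '%s' '20003' | base64)
echo "onpremise_proxy_host: $host_b64"   # -> Y29ubmVjdGl2aXR5LXByb3h5LmRlZmF1bHQ=
echo "onpremise_proxy_port: $port_b64"   # -> MjAwMDM=

# Decoding works the other way around:
printf '%s' "$host_b64" | base64 -d; echo   # -> connectivity-proxy.default
```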

Excursion: Debug Kubernetes App From Your Local IDE

To understand some problems with an application, it might be helpful to debug it from within your IDE. You can then step through the code and see where your expectations no longer hold.

This excursion will guide you through the necessary steps to get your application running on your Kubernetes cluster connected to your local IDE.

  1. Add the following parameter to your invocation of the JVM:

    -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005

    As an example, let's assume that your Dockerfile has the following entrypoint:

    ENTRYPOINT ["java","-jar","/app.jar"]

    Then you can update your deployment by adding the command and args properties to your image spec in your deployment.yml:

    spec: # pod spec
      containers:
        - image: <your-image-spec>
          name: <your-container-name>
          command: ['java']
          args:
            ['-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005', '-jar', '/app.jar']

    This will replace the entrypoint with the given command and arguments, as described in the Kubernetes documentation.

  2. Make sure that the adjusted image is actually running in your Kubernetes Cluster.

  3. Identify the pod you want to debug against, for example using the kubectl get pods command.

  4. Forward the port used in the debug string above to your local machine via the following command:

    kubectl port-forward <name-of-your-pod> 5005:5005
  5. Let your IDE connect to localhost:5005. The specifics of this step depend heavily on your choice of IDE, so we cannot give a one-size-fits-all solution.
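As a sanity check, the steps above have to agree on one port: the address in the JDWP agent argument, the container port you forward, and the local port your IDE connects to. A small sketch assembling the standard JDWP argument (5005 is the conventional choice used in this guide):

```shell
# Assemble the standard JDWP debug agent argument. The port must match
# both the kubectl port-forward command and the IDE's remote debug config.
debug_port=5005
jdwp_arg="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:${debug_port}"
echo "$jdwp_arg"
echo "kubectl port-forward <name-of-your-pod> ${debug_port}:${debug_port}"
```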