Migrate a Cloud Foundry Application to a Kubernetes Cluster
This guide details the steps necessary to migrate an application from Cloud Foundry to a Gardener-based Kubernetes cluster. The samples repository contains the Kubernetes application that will be used in this guide.
Prerequisites
- Install kubectl.
- Install docker and ensure access to a publicly reachable Docker repository.
  info
  A Kubernetes cluster uses a secret to authenticate with a container registry to pull an image. For that, you will need to configure a secret in your cluster:
  kubectl create secret docker-registry regcred --docker-server=YOUR_DOCKER_SERVER --docker-username=YOUR_DOCKER_USERNAME --docker-password=YOUR_DOCKER_PASSWORD --docker-email=YOUR_DOCKER_EMAIL
- Use Gardener for a Kubernetes cluster with a load balancer enabled.
  info
  This guide assumes you have created a cluster via the Gardener dashboard and have set it up for cluster access. We also recommend setting up an Ingress that exposes the application to the internet.
- Install helm, a package manager for Kubernetes.
- Install the SAP BTP Service Operator in the cluster.
  info
  For production environments, you need a reliable way of configuring which services run in your cluster. To achieve that, leverage the SAP BTP Service Operator. This guide assumes you have the SAP BTP Service Operator installed in your cluster. If not, follow the setup instructions here.
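The regcred secret created in the prerequisites stores credentials in Docker's config.json format. Below is a minimal sketch of the payload kubectl generates; the server and user values are placeholders, and the exact field layout should be verified against your generated secret.

```typescript
// Sketch of the payload `kubectl create secret docker-registry` generates.
// Field names follow Docker's config.json format; inspect your generated
// secret (key `.dockerconfigjson`) to confirm the exact shape.
function dockerConfigJson(
  server: string,
  username: string,
  password: string,
  email: string
): string {
  // `auth` is the base64-encoded "username:password" pair
  const auth = Buffer.from(`${username}:${password}`).toString('base64');
  return JSON.stringify({
    auths: { [server]: { username, password, email, auth } }
  });
}

// Placeholder values, not real credentials:
const payload = dockerConfigJson('ghcr.io', 'jane', 's3cret', 'jane@example.com');
```

The secret stores this JSON base64-encoded under the key .dockerconfigjson, which is what the imagePullSecrets mechanism consumes later in this guide.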
Create and Bind SAP BTP Services
Since the service operator is running in your cluster, you can create services just like you would do in SAP BTP CF, but instead of the SAP BTP cockpit, you'll use YAML service definitions. For this guide, we'll assume the application uses two services:
- Destination Service
- XSUAA Service
Bind the Destination Service
- Create yaml files for the destination service instance and binding:

  operator-destination-service.yml:

  apiVersion: services.cloud.sap.com/v1alpha1
  kind: ServiceInstance
  metadata:
    name: operator-destination-service
  spec:
    serviceOfferingName: destination
    servicePlanName: lite

  operator-destination-binding.yml:

  apiVersion: services.cloud.sap.com/v1alpha1
  kind: ServiceBinding
  metadata:
    name: operator-destination-service
  spec:
    serviceInstanceName: operator-destination-service

- Install the configuration using the commands:

  kubectl apply -f operator-destination-service.yml
  kubectl apply -f operator-destination-binding.yml

  This should create a Kubernetes secret named operator-destination-service, which contains the actual service binding information.
- Monitor the status via kubectl describe ServiceInstance operator-destination-service.
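kubectl describe reports the instance status via conditions. The sketch below shows how readiness could be checked programmatically, assuming the operator reports a Ready condition; verify the exact condition layout in your cluster, for example with kubectl get serviceinstance -o json.

```typescript
// Sketch: checking ServiceInstance readiness from its status conditions.
// The condition layout is an assumption modeled on typical operator output;
// verify it against `kubectl get serviceinstance <name> -o json`.
interface Condition {
  type: string;
  status: 'True' | 'False' | 'Unknown';
  message?: string;
}

// An instance counts as ready when a Ready condition with status True exists.
function isInstanceReady(conditions: Condition[]): boolean {
  return conditions.some(c => c.type === 'Ready' && c.status === 'True');
}

const ready = isInstanceReady([
  { type: 'Succeeded', status: 'True' },
  { type: 'Ready', status: 'True', message: 'ServiceInstance provisioned' }
]);
// ready === true
```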
Bind the XSUAA Service
Create yaml files for the XSUAA service instance and binding:

apiVersion: services.cloud.sap.com/v1alpha1
kind: ServiceInstance
metadata:
  name: operator-xsuaa-service
spec:
  serviceOfferingName: xsuaa
  servicePlanName: application
  parameters:
    xsappname: kubernetes-xsuaa
    tenant-mode: shared
    scopes:
      - name: '$XSAPPNAME.Callback'
        description: 'With this scope set, the callbacks for tenant onboarding, offboarding and getDependencies can be called.'
        grant-as-authority-to-apps:
          - $XSAPPNAME(application,sap-provisioning,tenant-onboarding)
    role-templates:
      - name: TOKEN_EXCHANGE
        description: Token exchange
        scope-references:
          - uaa.user
      - name: 'MultitenancyCallbackRoleTemplate'
        description: 'Call callback-services of applications'
        scope-references:
          - '$XSAPPNAME.Callback'
    oauth2-configuration:
      grant-types:
        - authorization_code
        - client_credentials
        - password
        - refresh_token
        - urn:ietf:params:oauth:grant-type:saml2-bearer
        - user_token
        - client_x509
        - urn:ietf:params:oauth:grant-type:jwt-bearer
      # Allowed redirect URIs in case you want to use an approuter behind an ingress for user login
      redirect-uris:
        - https://*/**

apiVersion: services.cloud.sap.com/v1alpha1
kind: ServiceBinding
metadata:
  name: operator-xsuaa-service
spec:
  serviceInstanceName: operator-xsuaa-service
- Repeat steps 2 and 3 from the previous section for the XSUAA service, replacing destination with xsuaa.

We will see how to mount the created secrets into the file system of the application in the deployment configuration step.
Notice that the metadata -> name property can be anything you want; in this case, it's operator-destination-service.
Make sure it matches the spec -> serviceInstanceName field in the binding exactly.
Deploy to Kubernetes
This section covers the following:
- How to deploy an application to Kubernetes.
- How to consume the bound services in the application from within the cluster.
Containerize the Application
Create a Dockerfile defining a container for the application.
It can then be deployed to one or more Kubernetes Pods.
FROM node:14-alpine
WORKDIR /workdir
COPY /deployment /workdir
RUN ["npm", "install", "--unsafe-perm"]
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
Compile and push the image by running:
docker build -t <YOUR_REPO>/<YOUR_IMAGE_NAME>:<YOUR_TAG> .
docker push <YOUR_REPO>/<YOUR_IMAGE_NAME>:<YOUR_TAG>
Create a Deployment Configuration
Create a deployment.yml
file as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sdkjs-e2e-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sdkjs-e2e
  template:
    metadata:
      labels:
        app: sdkjs-e2e
    spec:
      containers:
        - name: sdkjs-e2e
          image: <YOUR_REPO>/k8s-e2e-app:latest
          resources:
            requests:
              memory: '256Mi'
              cpu: '500m'
            limits:
              memory: '512Mi'
              cpu: '1000m'
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: destination-volume
              mountPath: '/etc/secrets/sapcp/destination/operator-destination-service'
              readOnly: true
            - name: xsuaa-volume
              mountPath: '/etc/secrets/sapcp/xsuaa/operator-xsuaa-service'
              readOnly: true
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: destination-volume
          secret:
            secretName: operator-destination-service
        - name: xsuaa-volume
          secret:
            secretName: operator-xsuaa-service
The file contains the following data:
- Resources definition: The requests field specifies the CPU and memory the app needs, while the limits field caps the CPU and memory a container may consume.
- Service bindings: The volumes destination-volume and xsuaa-volume reference the secrets (service bindings). They are mounted into the file system of your application at a specific path by adding them to the list of volumeMounts in the containers section.
  info
  The path convention is provided by the xsenv library.
- Container image and registry secrets: The regcred secret used by the imagePullSecrets config contains the registry credentials you created as a secret (needed in case the image was pushed to a private repository).
Deploy the application by running the command:
kubectl apply -f deployment.yml
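Inside the running Pods, each key of a mounted secret appears as a file under the mount path. The following is an illustrative sketch of reading such a binding with Node's fs module; the helper name and the flat key-per-file layout are assumptions, and in practice libraries like xsenv handle this for you.

```typescript
import { promises as fs } from 'fs';
import { join } from 'path';

// Sketch: secrets mounted as volumes appear as one file per key under the
// mount path (/etc/secrets/sapcp/<service>/<instance>, per the xsenv
// convention used in deployment.yml). This illustrative helper folds those
// files into a credentials object.
async function readServiceCredentials(
  basePath: string
): Promise<Record<string, string>> {
  const credentials: Record<string, string> = {};
  for (const file of await fs.readdir(basePath)) {
    credentials[file] = await fs.readFile(join(basePath, file), 'utf8');
  }
  return credentials;
}
```

For example, readServiceCredentials('/etc/secrets/sapcp/destination/operator-destination-service') would expose fields such as clientid from the destination binding.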
Access Your Application
To access the application, you have two options. Either expose it to the internet directly or port-forward to your local machine.
Local Connection
Run the command kubectl port-forward deployment/YOUR_DEPLOYMENT :3000
on your local machine to enable port forwarding.
Your application listens on port 3000.
kubectl picks an available port on your local machine and forwards port 3000 of your deployment to it.
Call the application via the local address kubectl prints.
Internet Facing Connection
Exposing an application this way is good only for testing. Don't use it in production.
Run the command below to expose the application to the internet. It will use the cluster's IP address and the port the application listens on.
kubectl expose deployment YOUR_DEPLOYMENT --type="LoadBalancer"
Configure TLS and Obtain a Domain in SAP Gardener
If you want to expose your cluster under a domain name with TLS, check out the steps below or follow the official Kubernetes documentation for a general setup.
Enable the NGINX Ingress add-on for your SAP Gardener cluster.
The fastest way to enable TLS and obtain a domain for your application is to create a Service for your Deployment, and an Ingress to handle the routing. Create a Service that contains your Deployment and the port you want to expose as shown in the example below:
apiVersion: v1
kind: Service
metadata:
  name: sdkjs-e2e-svc
spec:
  selector:
    app: sdkjs-e2e
  ports:
    - port: 8080
      targetPort: 3000
Check your shoot cluster's kubeconfig.yaml for the configured DNS, which you can find in your Gardener project's dashboard under the YAML tab.
It should be a field that looks like this:

spec:
  dns:
    domain: cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com
Since the NGINX Ingress is enabled, all domains have to follow the pattern *.ingress.YOUR_DNS
, for example:
e2e.ingress.cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com
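The host pattern above can be sketched as a small helper; ingressHost is a hypothetical name used only for illustration.

```typescript
// Sketch: deriving an Ingress host from the shoot cluster's DNS domain.
// `ingressHost` is a hypothetical helper name, not part of any API.
function ingressHost(subdomain: string, clusterDns: string): string {
  // With the NGINX Ingress add-on, hosts follow the *.ingress.<DNS> pattern.
  return `${subdomain}.ingress.${clusterDns}`;
}

const host = ingressHost(
  'e2e',
  'cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com'
);
// host === 'e2e.ingress.cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com'
```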
This is how the Ingress configuration file should look:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sdkjs-e2e-ingress
  annotations:
    cert.gardener.cloud/purpose: managed
spec:
  tls:
    - hosts:
        - cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com # - "<YOUR_DNS>"
        - e2e.ingress.cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com # - "*.ingress.<YOUR_DNS>"
      secretName: secret-tls
  rules:
    - host: e2e.ingress.cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sdkjs-e2e-svc
                port:
                  number: 8080
Notice how the Ingress uses the annotation cert.gardener.cloud/purpose: managed, which is important so that SAP Gardener manages the TLS certificates.
The spec.tls.hosts config exposes two domains.
The first one is your default domain, limited to a maximum of 64 characters.
Other domains can be of any length, but should follow the Ingress pattern.
The field secretName: secret-tls in the configuration tells SAP Gardener where to store the managed TLS certificates.
Finally, the rules section serves the Service at the root of your subdomain.
This way the Service is exposed to the internet using TLS.
Expose the Application using approuter
In the next steps, you will configure and deploy an approuter so that only users authenticated by your identity provider can access the application endpoints. For that, create a simple application that uses the @sap/approuter.
- Package the application router as a docker image so that it can run in Kubernetes. Refer to the documentation for configuration details.
- Create a Kubernetes Deployment and a Kubernetes Service to run and expose the application router.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: approuter
  labels:
    app: approuter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: approuter
  template:
    metadata:
      labels:
        app: approuter
    spec:
      containers:
        - name: approuter
          image: <YOUR_REPO>/k8s-approuter:latest
          resources:
            requests:
              memory: '256Mi'
              cpu: '250m'
            limits:
              memory: '512Mi'
              cpu: '500m'
          volumeMounts:
            - name: xsuaa-volume
              mountPath: '/etc/secrets/sapcp/xsuaa/operator-xsuaa-service'
              readOnly: true
          env:
            - name: PORT
              value: '5000'
            - name: destinations
              value: '[{"name":"backend", "url":"http://sdkjs-e2e-svc:8080/", "forwardAuthToken": true}]'
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: xsuaa-volume
          secret:
            secretName: operator-xsuaa-service
It references the application running in your cluster. Instead of an Ingress endpoint, it directly points to the Service. This is possible because the application router runs in your cluster and can therefore use the Kubernetes native Service discovery.
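The destinations environment variable in the Deployment above is a JSON array of route targets that the application router parses at startup. A minimal sketch of its structure follows; the ApprouterDestination type is an illustrative assumption, not an official approuter type.

```typescript
// Sketch: the `destinations` environment variable consumed by the approuter
// is a JSON array of route targets. The interface below is an illustrative
// assumption, not an official approuter type.
interface ApprouterDestination {
  name: string;
  url: string;
  forwardAuthToken?: boolean;
}

// The URL uses Kubernetes-native Service discovery (<service-name>:<port>)
// instead of a public Ingress endpoint:
const destinationsEnv =
  '[{"name":"backend", "url":"http://sdkjs-e2e-svc:8080/", "forwardAuthToken": true}]';

const destinations: ApprouterDestination[] = JSON.parse(destinationsEnv);
```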
The Service configuration:

apiVersion: v1
kind: Service
metadata:
  name: approuter-svc
  labels:
    app: approuter
spec:
  ports:
    - port: 8080
      targetPort: 5000
  selector:
    app: approuter
- Finally, configure the Ingress to create a session cookie that is consumed by the application router.
  Set the backend.service.name to approuter-svc to point the Ingress to the approuter Service.
  To secure your application, remove all previous routes that pointed to your application's endpoints and only expose them through the application router.
  For that, specify the Service names in your approuter destinations' configuration and remove the rules you previously created for these endpoints in the Ingress.
Depending on the Ingress controller, you have to use different annotations. An Ingress that only exposes the application router and uses the annotations is shown in the following example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sdkjs-e2e-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: 'cookie'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
    nginx.ingress.kubernetes.io/session-cookie-name: 'JSESSIONID'
    cert.gardener.cloud/purpose: managed
spec:
  tls:
    - hosts:
        - cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com
        - e2e.ingress.cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com
      secretName: secret-tls
  rules:
    - host: e2e.ingress.cloud-sdk-js.sdktests.shoot.canary.k8s-hana.ondemand.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: approuter-svc
                port:
                  number: 8080
On-Premise Connectivity
Prerequisites
This guide assumes you have both the Transparent Proxy (version >= 1.4.0) and the Connectivity Proxy (version >= 2.11.0) installed in your cluster.
For Kyma, the Transparent Proxy is available as a module that can be enabled as described here.
The Connectivity Proxy can be installed as described here.
Background Information
When using the Transparent Proxy, your app performs requests against the Transparent Proxy without explicit authentication, relying on the secure network communication provided by Kyma via Istio. The Transparent Proxy obtains the relevant destination from the destination service and uses it to forward the request via the Connectivity Proxy to the on-premise system. Consequently, your app does not interact with the destination or connectivity services at all, and your application pods do not require bindings to these two services.
Please note that the current implementation of the Transparent Proxy does not yet cover all use cases.
Constraints when using the Transparent Proxy
- Private Link not yet supported
This approach is conceptually different from what you may be used to from a Cloud Foundry environment. The official documentation of the Transparent Proxy gives more information on the architecture.
Create a Kubernetes Resource
For the creation of the necessary Kubernetes Resources, please refer to our Java documentation.
Executing Requests
In your application you can now configure a destination to execute requests:
- Individual Destination
- Dynamic Destinations
import {
registerDestination,
getTenantId,
retrieveJwt
} from '@sap-cloud-sdk/connectivity';
const destination = {
name: 'registered-destination',
url: 'http://my-destination.namespace/'
// for principal propagation make sure to set the forwardAuthToken flag to true:
// forwardAuthToken: true
};
await registerDestination(destination, options);
const result = await myEntityApi
.requestBuilder()
.getAll()
// for a subscriber tenant make sure to send the tenant header:
// .addCustomHeaders({ 'X-Tenant-Id': getTenantId(retrieveJwt(request)) })
.execute({ destinationName: 'registered-destination' });
// for principal propagation make sure to send the auth token:
// .execute({ destinationName: 'registered-destination', jwt: retrieveJwt(request) });
import {
registerDestination,
getTenantId,
retrieveJwt
} from '@sap-cloud-sdk/connectivity';
const destination = {
name: 'registered-destination',
url: 'http://dynamic-destination.namespace/'
// for principal propagation make sure to set the forwardAuthToken flag to true:
// forwardAuthToken: true
};
await registerDestination(destination, options);
const result = await myEntityApi
.requestBuilder()
.getAll()
.addCustomHeaders({ 'X-Destination-Name': '<CF-DESTINATION-NAME>' })
// for a subscriber tenant make sure to send the tenant header:
// .addCustomHeaders({ 'X-Tenant-Id': getTenantId(retrieveJwt(request)) })
.execute({ destinationName: 'registered-destination' });
// for principal propagation make sure to send the auth token:
// .execute({ destinationName: 'registered-destination', jwt: retrieveJwt(request) });
- Replace namespace in the URL with the namespace you installed the Transparent Proxy into.

The code above shows an example of how you can use the destination object to perform an OData request against the system.
This approach is not limited to destinations of proxy type ON_PREMISE; INTERNET destinations are supported as well.
Troubleshooting
When using proxy servers it can be difficult to troubleshoot issues as it is often not obvious where exactly the error occurred. For example, with the Transparent Proxy errors might occur on the target system (e.g. OData service), the Destination Service or the Transparent Proxy itself.
To make troubleshooting easier the Transparent Proxy adds additional response headers to provide more information about where an error occurred. For the above example of executing OData requests you can access the response headers as follows:
const result = await myEntityApi
.requestBuilder()
.getAll()
.execute({ destinationName: 'registered-destination' })
.catch(err => {
console.error('Error Headers:', err.rootCause?.response?.headers);
});
List of headers added by the Transparent Proxy:
- X-Error-Origin - the source of the error
- X-Proxy-Server - the proxy server (Transparent Proxy)
- X-Error-Message - thorough error message
- X-Error-Internal-Code - set only when the source of the error is the XSUAA or Destination service. The value is the HTTP code returned from one of these services.
- X-Request-Id - sent with the response in all requests, both successful and failed
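Given the headers above, a small helper can collect the Transparent Proxy diagnostics from an error response. The extractProxyErrorInfo name below is hypothetical and used only for illustration; the lookup is case-insensitive since HTTP clients normalize header names differently.

```typescript
// Sketch: collecting the troubleshooting headers the Transparent Proxy adds
// to error responses. `extractProxyErrorInfo` is a hypothetical helper name.
const TRANSPARENT_PROXY_HEADERS = [
  'x-error-origin',
  'x-proxy-server',
  'x-error-message',
  'x-error-internal-code',
  'x-request-id'
];

function extractProxyErrorInfo(
  headers: Record<string, string>
): Record<string, string> {
  const info: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    // normalize to lower case so 'X-Error-Origin' and 'x-error-origin' match
    if (TRANSPARENT_PROXY_HEADERS.includes(name.toLowerCase())) {
      info[name.toLowerCase()] = value;
    }
  }
  return info;
}
```

This pairs with the catch block above: pass err.rootCause?.response?.headers to the helper to log only the diagnostic headers.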