Overview
This framework is based on the controller-runtime project. Therefore, one way of consuming it is to bootstrap a kubebuilder project, such as:

```sh
kubebuilder init \
  --domain kyma-project.io \
  --repo github.com/myorg/mycomponent-operator \
  --project-name=mycomponent-operator
kubebuilder create api \
  --group operator \
  --version v1alpha1 \
  --kind MyComponent \
  --resource \
  --controller \
  --make
```
First, you will enhance the custom resource type generated by kubebuilder in the `api/v1alpha1` folder:

```go
// MyComponentSpec defines the desired state of MyComponent
type MyComponentSpec struct {
	// Add fields here ...
}

// MyComponentStatus defines the observed state of MyComponent
type MyComponentStatus struct {
	component.Status `json:",inline"`
}
```
In many cases, it makes sense to embed one or more of the following structs into the spec:
- `component.PlacementSpec` if instances should be allowed to specify a target namespace and name different from the component's namespace and name
- `component.ClientSpec` if (remote) deployments via a specified kubeconfig shall be possible
- `component.ImpersonationSpec` if you want to support impersonation of the deployment (e.g. via a service account).

Most likely you will add your own attributes to the spec, allowing you to parameterize the deployment of your component. Including `component.Status` in the status is mandatory, but you are free to add further fields if needed.
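For illustration, a spec embedding all three helper structs might look as follows; note that the inline JSON tags and the `ReplicaCount` field are illustrative assumptions, not something prescribed by the framework:

```go
// Sketch only: embeds the optional helper structs inline.
type MyComponentSpec struct {
	component.PlacementSpec     `json:",inline"`
	component.ClientSpec        `json:",inline"`
	component.ImpersonationSpec `json:",inline"`

	// Component-specific parameters (hypothetical example field).
	ReplicaCount int `json:"replicaCount,omitempty"`
}
```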
In order to make the custom resource type implement the `Component` interface, add the following methods:

```go
var _ component.Component = &MyComponent{}

func (s *MyComponentSpec) ToUnstructured() map[string]any {
	result, err := runtime.DefaultUnstructuredConverter.ToUnstructured(s)
	if err != nil {
		panic(err)
	}
	return result
}

func (c *MyComponent) GetSpec() runtimetypes.Unstructurable {
	return &c.Spec
}

func (c *MyComponent) GetStatus() *component.Status {
	return &c.Status.Status
}
```
Now we are ready to replace the controller generated by kubebuilder with the component-operator-runtime reconciler in the scaffolded `main.go`:

```go
// Replace this with a real resource generator (e.g. HelmGenerator or KustomizeGenerator, or your own one).
resourceGenerator, err := manifests.NewDummyGenerator()
if err != nil {
	setupLog.Error(err, "error initializing resource generator")
	os.Exit(1)
}

if err := component.NewReconciler[*operatorv1alpha1.MyComponent](
	"mycomponent-operator.kyma-project.io",
	nil,
	nil,
	nil,
	nil,
	resourceGenerator,
).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "MyComponent")
	os.Exit(1)
}
```
In addition, you have to add the `apiextensions.k8s.io/v1` and `apiregistration.k8s.io/v1` groups to the used scheme, so that the kubebuilder-generated `init()` function looks like this:

```go
func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(apiextensionsv1.AddToScheme(scheme))
	utilruntime.Must(apiregistrationv1.AddToScheme(scheme))
	utilruntime.Must(operatorv1alpha1.AddToScheme(scheme))
}
```
Furthermore, make sure to bypass the client's informer cache at least for the following types:

```go
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	// ...
	Client: client.Options{
		Cache: &client.CacheOptions{
			DisableFor: []client.Object{
				&operatorv1alpha1.MyComponent{},
				&apiextensionsv1.CustomResourceDefinition{},
				&apiregistrationv1.APIService{},
			},
		},
	},
	// ...
})
```
Now the actual work starts: tailor the custom resource type's spec according to the needs of the managed component, and implement a meaningful resource generator to replace `manifests.NewDummyGenerator()`.