We wanted an on-premises version of yipee.io with a smaller footprint than our SaaS product. For an initial POC we decided to pursue an alternate dashboard for Kubernetes, using the application editor from yipee.io. Instead of creating and storing application definitions as yipee.io does, this dashboard loads application definitions from a live Kubernetes cluster, treating the contents of each namespace as a discrete application.
This dashboard version of yipee presents a concise view of a running Kubernetes cluster. The yipee application editor does an auto-layout of the components in a Kubernetes app and lets you rearrange them. But there is no provision for storing those layout changes, so you have to reproduce them every time you close and reopen an application.
In the “slimmed down” version of yipee that runs in a Kubernetes cluster, we deliberately removed the database that is used by the yipee SaaS product. Such a DB would certainly offer a place to store layout changes, but running a complete DBMS to store layout data seems like overkill. As a lightweight alternative, we decided to explore the use of Kubernetes “custom resources” as a storage vehicle. What follows is a summary of that exploration.
Here’s how the catalog looks in a minikube cluster with four namespaces.
Here’s how the editor looks for the kube-system namespace of a minikube cluster.
Custom Resources in Kubernetes
Kubernetes custom resources have the following characteristics:
- Use Kubernetes cluster storage (etcd by default). Note: it is important to NOT store large volumes of data here.
- Are referenced as new endpoints in the Kubernetes API. Paths/endpoints are constructed based on data in the resource definition.
- Can be accessed via kubectl just like any other Kubernetes object
- Common Kubernetes API characteristics and concepts apply to the new resources just as they do to system-defined resources, including:
  - support for versioning and API groups
  - common metadata elements
  - support for concurrent updates
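To make the kubectl point concrete, here is a sketch using the resource names from our CRD (shown later: plural "models", short name "ym"). The helper only composes the command string so the example can be inspected outside a cluster; drop the echo to actually run it.

```shell
# Compose (not execute) kubectl commands for our custom resource.
# "models" comes from the names section of the yipee.io CRD.
ym_get() {
  # usage: ym_get <namespace> [object-name]
  echo "kubectl get models ${2:+$2 }-n $1 -o yaml"
}

ym_get kube-system   # list every YipeeModel in kube-system
ym_get foo layout    # fetch one object (names are illustrative)
```

Once the shortNames entry is registered, `kubectl get ym` works equally well.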
See the Kubernetes docs on custom resources for details.
We’re not using this in yipee.io (yet), but if you’re interested in Kubernetes custom resources it’s probably worth recognizing the “operator pattern” that is being promoted by CoreOS. The basic idea is to have code that monitors changes to custom resources and takes appropriate actions based on those changes.
See coreos operators for more info.
Use of Custom Resources in Yipee.io
The Kubernetes representation of our yipee app comprises multiple Kubernetes object definitions. There are several configuration objects in addition to the definition of the application itself. As with all Kubernetes objects, these can be created or modified using kubectl apply -f file-name.
Yipee configuration information includes:
- Definition of our custom resource
- Definition of a ServiceAccount and roles that are specific to yipee. Every Kubernetes pod has an associated ServiceAccount. If no ServiceAccount is explicitly specified, the default ServiceAccount will be automatically assigned. The yipee app requires access to our newly defined custom resource (as well as some other permissions for overall cluster visibility) so good hygiene dictates that we limit the scope of that access to only the yipee app. The yipee app uses ServiceAccount credentials for authorization when making Kubernetes API calls.
With a configuration like that in place, our application can be deployed and utilized. The following sections show the specific details of our configuration.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: models.yipee.io
spec:
  group: yipee.io
  version: v1
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: models
    singular: model
    kind: YipeeModel
    shortNames:
      - ym
The preceding object is the definition of the yipee custom resource. Note that the spec.group, spec.version, and spec.names attributes of the definition are used to construct the URLs for API access to custom resources. For example, a URL to access the YipeeModel containing the layout for a namespace “foo” would look like this:

/apis/yipee.io/v1/namespaces/foo/models/<model-name>
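The same path layout applies to any namespaced custom resource: /apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}. A minimal sketch of that construction (the object name "layout" is just a placeholder):

```shell
# Build the Kubernetes API path for one instance of a namespaced
# custom resource, from the values declared in its CRD.
crd_path() {
  # usage: crd_path <group> <version> <namespace> <plural> <name>
  printf '/apis/%s/%s/namespaces/%s/%s/%s\n' "$1" "$2" "$3" "$4" "$5"
}

crd_path yipee.io v1 foo models layout
# → /apis/yipee.io/v1/namespaces/foo/models/layout
```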
The objects below configure the Kubernetes role-based access control (RBAC) system for yipee. See Kubernetes RBAC for details. Here are the Kubernetes object manifests that define the yipee ServiceAccount and its associated access permissions.
# cluster-wide read-only access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-all
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
---
# role-based access to custom resources
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-yipee-admin-edit
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
  - apiGroups: ["yipee.io"]
    resources: ["models"]
    verbs: ["*"]
---
# yipee-specific service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: yipee-service-account
---
# grant service account access to custom resources
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: yipee-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aggregate-yipee-admin-edit
subjects:
  - kind: ServiceAccount
    name: yipee-service-account
    namespace: default
---
# grant cluster view access to yipee service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: yipee-service-account
    namespace: default
---
# bind the cluster-wide view-all role to the yipee service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: yipee-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view-all
subjects:
  - kind: ServiceAccount
    name: yipee-service-account
    namespace: default
Use of ServiceAccount
Here is a subset of the manifest for one of the yipee app Deployment objects. Note the reference to our newly created ServiceAccount. This is what allows the app to access our custom resource (as well as to view other cluster elements needed to complete our model view):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend
spec:
  template:
    spec:
      serviceAccountName: yipee-service-account
      containers:
        ...
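Inside a pod running under that Deployment, the ServiceAccount credentials are mounted at a well-known path, and API calls carry the token as a bearer token. A sketch of what such a call looks like (the namespace "foo" is illustrative; the helper only composes the curl command so it can be inspected outside a cluster):

```shell
# Standard in-pod locations for ServiceAccount credentials and the
# in-cluster API server address.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
APISERVER=https://kubernetes.default.svc

# Compose (not execute) the curl call fetching our custom resource.
# Remove the echo to run it from inside a pod.
model_get_cmd() {
  # usage: model_get_cmd <namespace>
  echo "curl --cacert $SA_DIR/ca.crt -H \"Authorization: Bearer \$(cat $SA_DIR/token)\" $APISERVER/apis/yipee.io/v1/namespaces/$1/models"
}

model_get_cmd foo
```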
Loading a Model
In our POC, a yipee model comprises all the Services, Deployments, StatefulSets, DaemonSets, and YipeeModels (our new custom resource) that are associated with a given namespace.
All of these are collected via GET operations from the Kubernetes API and translated into a format recognized by the yipee editor. The layout information represented by the YipeeModel object is included in this representation. Thus the yipee editor can display the last-saved view of the application model.
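The collection step above amounts to a handful of GET endpoints per namespace. A sketch of that enumeration (the group/version values for the built-in kinds are assumptions; adjust them to whatever your cluster serves):

```shell
# Enumerate the GET endpoints used to assemble a model for one namespace.
# apps/v1 for the workload kinds is an assumption; "models" is our
# custom resource from the yipee.io CRD.
model_endpoints() {
  ns=$1
  printf '/api/v1/namespaces/%s/services\n' "$ns"
  printf '/apis/apps/v1/namespaces/%s/deployments\n' "$ns"
  printf '/apis/apps/v1/namespaces/%s/statefulsets\n' "$ns"
  printf '/apis/apps/v1/namespaces/%s/daemonsets\n' "$ns"
  printf '/apis/yipee.io/v1/namespaces/%s/models\n' "$ns"
}

model_endpoints kube-system
```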
Storing a Model
In our current POC, the only data stored is the layout information for the model. If no layout data existed when the model was loaded, we create a new YipeeModel custom resource with a POST API call. If layout data already exists (i.e., a YipeeModel object was found when the model was loaded), we save it with a PUT API call. A couple of things to note about modifying Kubernetes objects (including custom resources) via the API:
- PUT will replace the entire contents of the relevant object
- PUT enforces “optimistic locking”, so you must include the appropriate “metadata.resourceVersion” value in the request payload.
- PATCH can be used for finer-grained updates (e.g., adding an element to an existing list). The semantics of PATCH are “last writer wins” (no optimistic locking). There are a few different flavors of PATCH request – see the Kubernetes documentation for full details.
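To make the PUT/PATCH contrast concrete, here is a sketch of the two request bodies. The "layout" field is a hypothetical stand-in for the real YipeeModel payload (its schema isn't shown in this post); metadata.resourceVersion is the part Kubernetes actually checks on PUT:

```shell
# Build a PUT body carrying the resourceVersion obtained from a prior GET.
# "layout" is a hypothetical field, not the real YipeeModel schema.
put_body() {
  # usage: put_body <name> <resourceVersion>
  printf '{"apiVersion":"yipee.io/v1","kind":"YipeeModel","metadata":{"name":"%s","resourceVersion":"%s"},"layout":{}}\n' "$1" "$2"
}

# A JSON merge patch, by contrast, sends only the fields to change and
# no resourceVersion ("last writer wins").
patch_body() {
  # usage: patch_body <x> <y>
  printf '{"layout":{"x":%s,"y":%s}}\n' "$1" "$2"
}

put_body layout 123456
patch_body 10 20
```

For PATCH, the Content-Type header selects the flavor (e.g., application/merge-patch+json vs. application/json-patch+json).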
We don’t yet have a lot of experience with this approach, but early indications are promising. It certainly is convenient to be able to store and retrieve modest amounts of data using built-in Kubernetes capabilities.
Coming Attractions at Yipee.io
Our next areas of emphasis at yipee.io are:
- Rollback of complete Kubernetes apps
- Easy maintenance of a Kubernetes app across multiple environments
Please follow the links for these capabilities and let us know if you’re interested.