Integrations

Integrate Capsule with other platforms and solutions

1 - Addons

Addons from the Capsule Community

1.1 - Capsule Proxy

Improve the UX even more with the Capsule Proxy

Capsule Proxy is an add-on for the Capsule Operator that addresses an RBAC shortcoming of multi-tenant Kubernetes: users cannot list the cluster-scoped resources they own. One solution to this problem would be to grant all users LIST permissions for the relevant cluster-scoped resources (e.g. Namespaces). However, this would allow users to list all cluster-scoped resources, which is not desirable in a multi-tenant environment and may lead to security issues. Kubernetes RBAC cannot list only the owned cluster-scoped resources since there are no ACL-filtered APIs. For example:

Error from server (Forbidden): namespaces is forbidden:
User "alice" cannot list resource "namespaces" in API group "" at the cluster scope

The reason, as the error message reports, is that the RBAC list action is only available at cluster scope and is not granted to users without appropriate permissions.

To overcome this problem, many Kubernetes distributions introduced mirrored custom resources backed by a custom set of ACL-filtered APIs. However, this radically changes the user’s experience of Kubernetes by introducing hard customizations that make it painful to move from one distribution to another.

With Capsule, we took a different approach. As one of the key goals, we want to keep the same user experience on all the distributions of Kubernetes. We want people to use the standard tools they already know and love and it should just work.
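For example, with a kubeconfig whose server points at the capsule-proxy endpoint (the context name below is a hypothetical placeholder), the request that was forbidden above succeeds and returns only the Namespaces belonging to the user’s Tenants:

# kubectl pointed at the capsule-proxy endpoint instead of the API server
kubectl --context alice-via-proxy get namespaces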

1.1.1 - ProxySettings

Configure proxy settings for your tenants

Primitives

Namespaces are treated specially. Users can list the Namespaces they own, but they cannot list all the Namespaces in the cluster. You can’t define additional selectors.

Primitives are resolved against the Tenant definition: what a user can list for each kind is derived from what is allowed for the Tenant.

The proxy setting kind is an enum accepting the supported resources:

| Enum | Description | Effective Operations |
|---|---|---|
| Tenant | Users are able to LIST this Tenant | List |
| Nodes | Nodes can be listed based on the NodeSelector and the Scheduling Expressions | |
| StorageClasses | Perform operations on the allowed StorageClasses for the Tenant | List |
| IngressClasses | Perform operations on the allowed IngressClasses for the Tenant | |
| PriorityClasses | Perform operations on the allowed PriorityClasses for the Tenant | |
| RuntimeClasses | Perform operations on the allowed RuntimeClasses for the Tenant | |
| PersistentVolumes | Perform operations on the PersistentVolumes owned by the Tenant | |
| GatewayClasses | Perform operations on the allowed GatewayClasses for the Tenant | |

Each resource kind can be granted several verbs, such as:

  • List
  • Update
  • Delete
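For example, a ProxySetting granting a subject List access to some of these kinds might look as follows (a minimal sketch based on the v1beta1 schema documented in the API Reference; the namespace and subject name are placeholders):

apiVersion: capsule.clastix.io/v1beta1
kind: ProxySetting
metadata:
  name: sample
  # Namespace-scoped: applies to the Tenant this Namespace belongs to
  namespace: oil-dev
spec:
  subjects:
  - kind: User
    name: alice
    proxySettings:
    - kind: StorageClasses
      operations:
      - List
    - kind: IngressClasses
      operations:
      - List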

Cluster Resources

This approach is for more generic cluster scoped resources.

TBD

Proxy Settings

Tenants

The Capsule Proxy is a multi-tenant application. Each tenant is served as a separate logical instance and is identified by the tenantId in the URL, a unique identifier that the Capsule Proxy uses to resolve the tenant.

1.1.2 - Installation

Installation guide for the capsule-proxy

Capsule Proxy is an optional add-on of the main Capsule Operator, so make sure you have a working instance of Capsule before attempting to install it. Use the capsule-proxy only if you want Tenant Owners to list their cluster-scoped resources.

The capsule-proxy can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the API server. Optionally, it can be deployed as a sidecar container in the backend of a dashboard.

Running outside a Kubernetes cluster is also viable, although a valid KUBECONFIG file must be provided, using the environment variable KUBECONFIG or the default file in $HOME/.kube/config.

A Helm Chart is available here.

Exposure

Depending on your environment, you can expose the capsule-proxy by:

  • Ingress
  • NodePort Service
  • LoadBalancer Service
  • HostPort
  • HostNetwork

Here is how it looks when exposed through an Ingress Controller:
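As a minimal sketch, an Ingress for an NGINX ingress controller could look like this (the hostname and ingress class are placeholders; TLS passthrough is shown as one option, since the proxy serves its own certificate):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: capsule-proxy
  namespace: capsule-system
  annotations:
    # Let the proxy terminate TLS with its own certificate
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: capsule-proxy.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: capsule-proxy
            port:
              number: 9001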

Distribute CA within the Cluster

The capsule-proxy requires the CA certificate to be distributed to the clients. The CA certificate is stored in a Secret named capsule-proxy in the capsule-system namespace, by default. In most cases the distribution of this secret is required for other clients within the cluster (e.g. the Tekton Dashboard). If you are using Ingress or any other endpoints for all the clients, this step is probably not required.

Here’s an example of how to distribute the CA certificate to the namespace tekton-pipelines by using kubectl and jq:

 kubectl get secret capsule-proxy -n capsule-system -o json \
 | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
 | kubectl apply -n tekton-pipelines -f -

This can be used for development purposes, but it’s not recommended for production environments. For production, consider a dedicated solution to distribute the CA certificate across namespaces.

1.1.3 - Controller Options

Configure the Capsule Proxy Controller

You can customize the Capsule Proxy with the following configuration

Flags

Feature Gates

Feature Gates are a set of key/value pairs that can be used to enable or disable certain features of the Capsule Proxy. The following feature gates are available:

| Feature Gate | Default Value | Description |
|---|---|---|
| ProxyAllNamespaced | false | Allows proxying all Namespaced objects. When enabled, it discovers APIs and ensures labels are set on resources in all tenant namespaces, resulting in increased memory usage; however, this feature helps with the user experience. |
| SkipImpersonationReview | false | Skips the impersonation review for all requests containing impersonation headers (user and groups). DANGER: enabling this flag allows any user to impersonate any user or group, essentially bypassing any authorization. Only use this option in trusted environments where authorization/authentication is offloaded to external systems. |
| ProxyClusterScoped | false | Proxies all cluster-scoped objects for all tenant users. These can be defined via ProxySettings. |
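As a sketch of how a feature gate could be enabled, assuming the proxy follows the common --feature-gates=Name=true flag convention (verify the exact flag and the corresponding Helm value key against your chart version):

# Hypothetical container args for the capsule-proxy Deployment
args:
- --feature-gates=ProxyAllNamespaced=true,ProxyClusterScoped=true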

1.1.4 - API Reference

API Reference

Packages:

capsule.clastix.io/v1beta1

Resource Types:

ProxySetting

ProxySetting is the Schema for the proxysettings API.

| Name | Type | Description | Required |
|---|---|---|---|
| apiVersion | string | capsule.clastix.io/v1beta1 | true |
| kind | string | ProxySetting | true |
| metadata | object | Refer to the Kubernetes API documentation for the fields of the metadata field. | true |
| spec | object | ProxySettingSpec defines the additional Capsule Proxy settings for additional users of the Tenant. Resource is Namespace-scoped and applies the settings to the belonged Tenant. | false |

ProxySetting.spec

ProxySettingSpec defines the additional Capsule Proxy settings for additional users of the Tenant. Resource is Namespace-scoped and applies the settings to the belonged Tenant.

| Name | Type | Description | Required |
|---|---|---|---|
| subjects | []object | Subjects that should receive additional permissions. | true |

ProxySetting.spec.subjects[index]

| Name | Type | Description | Required |
|---|---|---|---|
| kind | enum | Kind of tenant owner. Possible values are “User”, “Group”, and “ServiceAccount”. Enum: User, Group, ServiceAccount | true |
| name | string | Name of tenant owner. | true |
| clusterResources | []object | Cluster Resources for tenant Owner. | false |
| proxySettings | []object | Proxy settings for tenant owner. | false |

ProxySetting.spec.subjects[index].clusterResources[index]

| Name | Type | Description | Required |
|---|---|---|---|
| apiGroups | []string | APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against any resource listed will be allowed. ‘*’ represents all resources. Empty string represents v1 api resources. | true |
| operations | []enum | Operations which can be executed on the selected resources. Default: [List] | true |
| resources | []string | Resources is a list of resources this rule applies to. ‘*’ represents all resources. | true |
| selector | object | Select all cluster scoped resources with the given label selector. | true |

ProxySetting.spec.subjects[index].clusterResources[index].selector

Select all cluster scoped resources with the given label selector.

| Name | Type | Description | Required |
|---|---|---|---|
| matchExpressions | []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. | false |
| matchLabels | map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed. | false |

ProxySetting.spec.subjects[index].clusterResources[index].selector.matchExpressions[index]

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

| Name | Type | Description | Required |
|---|---|---|---|
| key | string | key is the label key that the selector applies to. | true |
| operator | string | operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | true |
| values | []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | false |

ProxySetting.spec.subjects[index].proxySettings[index]

| Name | Type | Description | Required |
|---|---|---|---|
| kind | enum | Enum: Nodes, StorageClasses, IngressClasses, PriorityClasses, RuntimeClasses, PersistentVolumes | true |
| operations | []enum | | true |
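Putting the reference together, here is a sketch of a ProxySetting that grants a subject List access to a labelled set of cluster-scoped resources via clusterResources (the namespace, subject name and label values are placeholders):

apiVersion: capsule.clastix.io/v1beta1
kind: ProxySetting
metadata:
  name: sample
  namespace: oil-dev
spec:
  subjects:
  - kind: Group
    name: oil-maintainers
    clusterResources:
    - apiGroups:
      - "storage.k8s.io"
      resources:
      - "storageclasses"
      operations:
      - List
      selector:
        matchLabels:
          env: production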

1.2 - How to operate Tenants GitOps with Flux

How to operate Tenants the GitOps way with Flux and Capsule together

Multi-tenancy the GitOps way

This document will guide you to manage Tenant resources the GitOps way with Flux configured with the multi-tenancy lockdown.

The proposed approach consists in making Flux reconcile Tenant resources as Tenant Owners, while still providing Namespace-as-a-Service to Tenants.

This means that Tenants can operate and declare multiple Namespaces in their own Git repositories while not escaping the policies enforced by Capsule.

Quickstart

Install

In order to make it work you can install the FluxCD addon via Helm:

helm install -n capsule-system capsule-addon-fluxcd \
  oci://ghcr.io/projectcapsule/charts/capsule-addon-fluxcd

Configure Tenants

In order to make Flux controllers reconcile Tenant resources impersonating a Tenant Owner, a ServiceAccount acting as Tenant Owner is required.

To be recognized by the addon that will automate the required configurations, the ServiceAccount needs the capsule.addon.fluxcd/enabled=true annotation.

Assuming a configured oil Tenant, the following Tenant Owner ServiceAccount must be declared:

---
apiVersion: v1
kind: Namespace
metadata:
  name: oil-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitops-reconciler
  namespace: oil-system
  annotations:
    capsule.addon.fluxcd/enabled: "true"

Set it as a valid oil Tenant owner, and make Capsule recognize its group:

---
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  additionalRoleBindings:
  - clusterRoleName: cluster-admin
    subjects:
    - name: gitops-reconciler
      kind: ServiceAccount
      namespace: oil-system
  owners:
  - name: system:serviceaccount:oil-system:gitops-reconciler
    kind: ServiceAccount
---
apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
  - capsule.clastix.io
  - system:serviceaccounts:oil-system

The addon will automate:

  • RBAC configuration for the Tenant owner ServiceAccount
  • Tenant owner ServiceAccount token generation
  • Tenant owner kubeconfig needed to send Flux reconciliation requests through the Capsule proxy
  • Tenant kubeconfig distribution across all Tenant Namespaces.

The last automation is needed so that the kubeconfig can be set on Kustomizations/HelmReleases across all Tenant’s Namespaces.

More details on this are available in the deep-dive section.

How to use

Consider a Tenant named oil that has a dedicated Git repository that contains oil’s configurations.

You, as a platform administrator, want to provide the oil Tenant with Namespace-as-a-Service and a GitOps experience, allowing the tenant to version its configurations in a Git repository.

You, as Tenant owner, can configure Flux reconciliation resources to be applied as Tenant owner:

---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: oil-apps
  namespace: oil-system
spec:
  serviceAccountName: gitops-reconciler
  kubeConfig:
    secretRef:
      name: gitops-reconciler-kubeconfig
      key: kubeconfig
  sourceRef:
    kind: GitRepository
    name: oil
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: oil
  namespace: oil-system
spec:
  url: https://github.com/oil/oil-apps

Let’s analyze the setup field by field:

  • the GitRepository and the Kustomization are in a Tenant system Namespace
  • the Kustomization refers to a ServiceAccount to be impersonated when reconciling the resources the Kustomization refers to: this ServiceAccount is an oil Tenant owner
  • the Kustomization refers also to a kubeConfig to be used when reconciling the resources the Kustomization refers to: this is needed to make requests through the Capsule proxy in order to operate on cluster-wide resources as a Tenant

The oil tenant can also declare new Namespaces thanks to the segregation provided by Capsule.

Note: explicitly setting the service account name can be avoided when it is set as the default Service Account name at Flux’s kustomize-controller level via the default-service-account flag.

More information is available in the addon repository.

Deep dive

Flux and multi-tenancy

Flux v2 released a set of features that further increased security for multi-tenancy scenarios.

These features enable you to:

  • disable cross-Namespace reference of Source CRs from Reconciliation CRs and Notification CRs. This way, especially for tenants, they can’t access resources outside their space. This can be achieved with --no-cross-namespace-refs=true option of kustomize, helm, notification, image-reflector, image-automation controllers.

  • set a default ServiceAccount impersonation for Reconciliation CRs. This is supposed to be an unprivileged SA that reconciles just the tenant’s desired state. This is enforced when it is not otherwise specified explicitly in the Reconciliation CR spec. This can be enforced with the --default-service-account=<name> option of helm and kustomize controllers.

    For this responsibility we identify a Tenant GitOps Reconciler identity, which is a ServiceAccount that is also the tenant owner (more on tenants and owners later on, with Capsule).

  • disallow remote bases for Kustomizations. Actually, this is not strictly required, but it decreases the risk of referencing Kustomizations which aren’t part of the controlled GitOps pipelines. In a multi-tenant scenario this is important too. They can be disabled with --no-remote-bases=true option of the kustomize controller.

Where required, to ensure privileged Reconciliation resources have the needed privileges to be reconciled, we can explicitly set a privileged ServiceAccount.

In any case, the ServiceAccount is required to be in the same Namespace as the Kustomization, so unprivileged spaces should not have privileged ServiceAccounts available.

For example, for the root Kustomization:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  serviceAccountName: kustomize-controller # It has cluster-admin permissions
  path: ./clusters/staging
  sourceRef:
    kind: GitRepository
    name: flux-system

For example, the cluster admin is supposed to apply this Kustomization during the cluster bootstrap, which will also reconcile Flux itself. All the remaining Reconciliation resources can be children of this Kustomization.


Namespace-as-a-Service

Tenants could have their own set of Namespaces to operate on, but these would have to be prepared by higher-level roles, like platform admins: the declarations would be part of the platform space. The platform admins would be responsible for tenant administration, and each change (e.g. a new tenant Namespace) would be a request that passes through approval.


What if we would like to also give tenants the ability to manage their own space the GitOps way? Enter Capsule.


Manual setup

Legend:

  • Privileged space: group of Namespaces which are not part of any Tenant.
  • Privileged identity: identity that won’t pass through Capsule tenant access control.
  • Unprivileged space: group of Namespaces which are part of a Tenant.
  • Unprivileged identity: identity that would pass through Capsule tenant access control.
  • Tenant GitOps Reconciler: a machine Tenant Owner expected to reconcile Tenant desired state.

Capsule

Capsule provides a Tenant Custom Resource and the ability to set its owners through spec.owners as references to:

  • User
  • Group
  • ServiceAccount

Tenant and Tenant Owner

As we would like to let a machine reconcile the Tenant’s state, we’ll need a ServiceAccount as a Tenant Owner:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitops-reconciler
  namespace: my-tenant
---
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: my-tenant
spec:
  owners:
  - name: system:serviceaccount:my-tenant:gitops-reconciler # the Tenant GitOps Reconciler

From now on, we’ll refer to it as the Tenant GitOps Reconciler.

Tenant Groups

We also need to state that Capsule should enforce tenant access control for requests coming from tenants, and we can do that by specifying one of the Groups bound by default by Kubernetes to the Tenant GitOps Reconciler ServiceAccount in the CapsuleConfiguration:

apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
  - system:serviceaccounts:my-tenant

Other privileged requests, e.g. for reconciliation coming from Flux privileged ServiceAccounts like flux-system/kustomize-controller, will bypass Capsule.

Flux

Flux makes it possible to specify the identity with which Reconciliation resources are reconciled, through:

  • ServiceAccount impersonation
  • kubeconfig

ServiceAccount

As by default Flux reconciles those resources with Flux cluster-admin Service Accounts, we set at controller-level the default ServiceAccount impersonation to the unprivileged Tenant GitOps Reconciler:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- flux-controllers.yaml
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/0
        value: --default-service-account=gitops-reconciler # the Tenant GitOps Reconciler      
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller)"

This way, tenants can’t make Flux apply their Reconciliation resources with Flux’s privileged Service Accounts by simply omitting spec.serviceAccountName on them.

At the same time, at resource level in the privileged space, we can still specify a privileged ServiceAccount, and its reconciliation requests won’t pass through Capsule validation:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  serviceAccountName: kustomize-controller
  path: ./clusters/staging
  sourceRef:
    kind: GitRepository
    name: flux-system

Kubeconfig

We also need to specify, on the Tenant’s Reconciliation resources, the Secret with a kubeconfig configured to use the Capsule Proxy as the API server, in order to provide the Tenant GitOps Reconciler the ability to list cluster-level resources. The kubeconfig also specifies the Tenant GitOps Reconciler SA token as the token.

For example:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: my-app
  namespace: my-tenant
spec:
  kubeConfig:
    secretRef:
      name: gitops-reconciler-kubeconfig
      key: kubeconfig 
  sourceRef:
    kind: GitRepository
    name: my-tenant
  path: ./staging

We’ll see how to prepare the related Secret (i.e. gitops-reconciler-kubeconfig) later on.

Each request made with this kubeconfig will be done impersonating the user of the default impersonation SA, which is the same identity as the token specified in the kubeconfig. To dig deeper into this, please see the Insights section.

The recipe

How to setup Tenants GitOps-ready

Given that Capsule and Capsule Proxy are installed, and Flux v2 is configured with the multi-tenancy lockdown features through the patch below:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- flux-components.yaml
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/0
        value: --no-cross-namespace-refs=true            
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller|notification-controller|image-reflector-controller|image-automation-controller)"
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --no-remote-bases=true            
    target:
      kind: Deployment
      name: "kustomize-controller"
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/0
        value: --default-service-account=gitops-reconciler # The Tenant GitOps Reconciler      
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller)"
  - patch: |
      - op: add
        path: /spec/serviceAccountName
        value: kustomize-controller            
    target:
      kind: Kustomization
      name: "flux-system"

this is the required set of resources to set up a Tenant:

  • Namespace: the Tenant GitOps Reconciler “home”. This is not part of the Tenant to avoid a chicken & egg problem:
    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-tenant
    
  • ServiceAccount of the Tenant GitOps Reconciler, in the above Namespace:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: gitops-reconciler
      namespace: my-tenant
    
  • Tenant resource with the above Tenant GitOps Reconciler’s SA as Tenant Owner, with:
  • Additional binding to the cluster-admin ClusterRole for the Tenant’s Namespaces and the Namespace of the Tenant GitOps Reconciler’s ServiceAccount. By default Capsule binds only the admin ClusterRole, which has no privileges over Custom Resources, while cluster-admin does. This is needed to operate on Flux CRs:
    apiVersion: capsule.clastix.io/v1beta2
    kind: Tenant
    metadata:
      name: my-tenant
    spec:
      additionalRoleBindings:
      - clusterRoleName: cluster-admin
        subjects:
        - name: gitops-reconciler
          kind: ServiceAccount
          namespace: my-tenant
      owners:
      - name: system:serviceaccount:my-tenant:gitops-reconciler
        kind: ServiceAccount
    
  • Additional binding to the cluster-admin ClusterRole for the home Namespace of the Tenant GitOps Reconciler’s ServiceAccount, so that the Tenant GitOps Reconciler can create Flux CRs in the tenant home Namespace and use the Reconciliation resource’s spec.targetNamespace to place resources in Tenant Namespaces:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: gitops-reconciler
      namespace: my-tenant
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: gitops-reconciler
      namespace: my-tenant
    
  • Additional Group in the CapsuleConfiguration to make Tenant GitOps Reconciler requests pass through Capsule admission (group system:serviceaccounts:<tenant-gitops-reconciler-home-namespace>):
    apiVersion: capsule.clastix.io/v1alpha1
    kind: CapsuleConfiguration
    metadata:
      name: default
    spec:
      userGroups:
      - system:serviceaccounts:my-tenant
    
  • Additional ClusterRole with related ClusterRoleBinding that allows the Tenant GitOps Reconciler to impersonate its own User (e.g. system:serviceaccount:my-tenant:gitops-reconciler):
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: my-tenant-gitops-reconciler-impersonator
    rules:
    - apiGroups: [""]
      resources: ["users"]
      verbs: ["impersonate"]
      resourceNames: ["system:serviceaccount:my-tenant:gitops-reconciler"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: my-tenant-gitops-reconciler-impersonate
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: my-tenant-gitops-reconciler-impersonator
    subjects:
    - name: gitops-reconciler
      kind: ServiceAccount
      namespace: my-tenant
    
  • Secret with kubeconfig for the Tenant GitOps Reconciler with Capsule Proxy as kubeconfig.server and the SA token as kubeconfig.token.

    This is supported only with Service Account static tokens.

  • Flux Source and Reconciliation resources that refer to the Tenant desired state. This typically points to a specific path inside a dedicated Git repository, where the tenant’s root configuration resides:
    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: GitRepository
    metadata:
      name: my-tenant
      namespace: my-tenant
    spec:
      url: https://github.com/my-tenant/all.git # Git repository URL
      ref:
        branch: main # Git reference
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: my-tenant
      namespace: my-tenant
    spec:
      kubeConfig:
        secretRef:
          name: gitops-reconciler-kubeconfig
          key: kubeconfig
      sourceRef:
        kind: GitRepository
        name: my-tenant
      path: config # Path to config from GitRepository Source
    
    This Kustomization can in turn refer to further Kustomization resources creating a tenant configuration hierarchy.

Generate the Capsule Proxy kubeconfig Secret

You need to create a Secret in the Tenant GitOps Reconciler home Namespace, containing the kubeconfig that specifies:

  • server: Capsule Proxy Service URL with related CA certificate for TLS
  • token: the token of the Tenant GitOps Reconciler
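For reference, the kubeconfig stored in the Secret looks roughly like the sketch below (the cluster name, CA data and token are placeholders to be filled with your actual values):

apiVersion: v1
kind: Config
clusters:
- name: capsule-proxy
  cluster:
    server: https://capsule-proxy.capsule-system.svc:9001
    certificate-authority-data: <base64-encoded capsule-proxy CA>
contexts:
- name: default
  context:
    cluster: capsule-proxy
    user: gitops-reconciler
current-context: default
users:
- name: gitops-reconciler
  user:
    token: <Tenant GitOps Reconciler ServiceAccount token>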

With the required privileges to create Secrets in the target Namespace, you can generate it with the proxy-kubeconfig-generator utility:

$ go install github.com/maxgio92/proxy-kubeconfig-generator@latest
$ proxy-kubeconfig-generator \
  --kubeconfig-secret-key kubeconfig \
  --namespace my-tenant \
  --server 'https://capsule-proxy.capsule-system.svc:9001' \
  --server-tls-secret-namespace capsule-system \
  --server-tls-secret-name capsule-proxy \
  --serviceaccount gitops-reconciler

How a Tenant can declare its state

Considering the example above, a Tenant my-tenant could place in its own repository (i.e. https://github.com/my-tenant/all), on branch main at path /config, further Reconciliation resources, like:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: my-apps
  namespace: my-tenant
spec:
  kubeConfig:
    secretRef:
      name: gitops-reconciler-kubeconfig
      key: kubeconfig 
  sourceRef:
    kind: GitRepository
    name: my-tenant
  path: config/apps

These refer to the same Source but a different path (i.e. config/apps) that could contain the tenant’s application manifests.

The same is valid for HelmReleases, which instead refer to a HelmRepository Source.

The reconciliation requests will pass through Capsule Proxy as Tenant GitOps Reconciler with impersonation. Then, as the identity group of the requests matches the Capsule groups they will be validated by Capsule, and finally the RBAC will provide boundaries to Tenant GitOps Reconciler privileges.

If spec.kubeConfig is not specified, the Flux privileged ServiceAccount will impersonate the default unprivileged Tenant GitOps Reconciler ServiceAccount, as configured with the --default-service-account option of the kustomize and helm controllers, but list requests on cluster-level resources like Namespaces will fail.

Full setup

To have a glimpse on a full setup you can follow the flux2-capsule-multi-tenancy repository. For simplicity, the system and tenants declarations are on the same repository but on dedicated git branches.

It’s a fork of flux2-multi-tenancy but with the integration we saw with Capsule.

Insights

Why ServiceAccount that impersonates its own User

As stated just above, you might be wondering why a user would make a request impersonating itself (i.e. the Tenant GitOps Reconciler ServiceAccount User).

This is because we need to make tenant reconciliation requests through the Capsule Proxy, and we want to protect from the risk of privilege escalation done by bypassing impersonation.

Threats

Bypass unprivileged impersonation

The reason why we can’t make impersonation optional is that each tenant is allowed to omit both the kubeconfig and the impersonation SA on the Reconciliation resource, and that kubeconfig could in any case contain arbitrary privileged credentials; Flux would then fall back to its privileged ServiceAccount to reconcile tenant resources.

That way, a tenant would be capable of managing the cluster the GitOps way as if it were a cluster admin.

Furthermore, let’s see if there are other vulnerabilities we are able to protect from.

Impersonate privileged SA

Then, what if a tenant tries to escalate by using one of the Flux controllers’ privileged ServiceAccounts?

As spec.serviceAccountName of a Reconciliation resource cannot reference Service Accounts across Namespaces, tenants are able to let Flux apply their own resources only with ServiceAccounts that reside in their own Namespaces. That is, the Namespace of the ServiceAccount and the Namespace of the Reconciliation resource must match.

Nor could the tenant create the Reconciliation resource where a privileged ServiceAccount is present (like flux-system), as that Namespace would have to be owned by the Tenant; Capsule would block those Reconciliation resource creation requests.

Create and impersonate privileged SA

Then, what if a tenant tries to escalate by creating a privileged ServiceAccount inside one of its own Namespaces?

A tenant could create a ServiceAccount in an owned Namespace, but it can bind a ClusterRole neither at cluster level nor at non-owned Namespace level, as that wouldn’t be permitted by the Capsule admission controllers.


Change ownership of privileged Namespaces (e.g. flux-system)

A tenant could try to use a privileged ServiceAccount by changing the ownership of a privileged Namespace, so that it could create the Reconciliation resource there and use the privileged SA. This is not permitted, as a tenant can’t patch Namespaces it has not created: Capsule request validation would not pass.

For other protections against threats in this multi-tenancy scenario please see the Capsule Multi-Tenancy Benchmark.


2 - Managed Kubernetes

Capsule on managed Kubernetes offerings

Capsule Operator can be easily installed on a Managed Kubernetes Service. Since you do not have access to the Kubernetes API Server, you should check with the provider of the service that:

  • the default cluster-admin ClusterRole is accessible
  • the following Admission Webhooks are enabled on the API Server:

  • PodNodeSelector
  • LimitRanger
  • ResourceQuota
  • MutatingAdmissionWebhook
  • ValidatingAdmissionWebhook

AWS EKS

This is an example of how to install an AWS EKS cluster with one user managed by Capsule. It is based on Using IAM Groups to manage Kubernetes access.

Create EKS cluster:

export AWS_DEFAULT_REGION="eu-west-1"
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxx"

eksctl create cluster \
--name=test-k8s \
--managed \
--node-type=t3.small \
--node-volume-size=20 \
--kubeconfig=kubeconfig.conf

Create AWS User alice using CloudFormation, create AWS access files and kubeconfig for such user:

cat > cf.yml << EOF
Parameters:
  ClusterName:
    Type: String
Resources:
  UserAlice:
    Type: AWS::IAM::User
    Properties:
      UserName: !Sub "alice-${ClusterName}"
      Policies:
      - PolicyName: !Sub "alice-${ClusterName}-policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Sid: AllowAssumeOrganizationAccountRole
            Effect: Allow
            Action: sts:AssumeRole
            Resource: !GetAtt RoleAlice.Arn
  AccessKeyAlice:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref UserAlice
  RoleAlice:
    Type: AWS::IAM::Role
    Properties:
      Description: !Sub "IAM role for the alice-${ClusterName} user"
      RoleName: !Sub "alice-${ClusterName}"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
          Action: sts:AssumeRole
Outputs:
  RoleAliceArn:
    Description: The ARN of the Alice IAM Role
    Value: !GetAtt RoleAlice.Arn
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-RoleAliceArn"
  AccessKeyAlice:
    Description: The AccessKey for Alice user
    Value: !Ref AccessKeyAlice
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-AccessKeyAlice"
  SecretAccessKeyAlice:
    Description: The SecretAccessKey for Alice user
    Value: !GetAtt AccessKeyAlice.SecretAccessKey
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-SecretAccessKeyAlice"
EOF

eval aws cloudformation deploy --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=test-k8s" \
  --stack-name "test-k8s-users" --template-file cf.yml

AWS_CLOUDFORMATION_DETAILS=$(aws cloudformation describe-stacks --stack-name "test-k8s-users")
ALICE_ROLE_ARN=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"RoleAliceArn\") .OutputValue")
ALICE_USER_ACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"AccessKeyAlice\") .OutputValue")
ALICE_USER_SECRETACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"SecretAccessKeyAlice\") .OutputValue")

eksctl create iamidentitymapping --cluster="test-k8s" --arn="${ALICE_ROLE_ARN}" --username alice --group capsule.clastix.io

cat > aws_config << EOF
[profile alice]
role_arn=${ALICE_ROLE_ARN}
source_profile=alice
EOF

cat > aws_credentials << EOF
[alice]
aws_access_key_id=${ALICE_USER_ACCESSKEY}
aws_secret_access_key=${ALICE_USER_SECRETACCESSKEY}
EOF

eksctl utils write-kubeconfig --cluster=test-k8s --kubeconfig="kubeconfig-alice.conf"
cat >> kubeconfig-alice.conf << EOF
      - name: AWS_PROFILE
        value: alice
      - name: AWS_CONFIG_FILE
        value: aws_config
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: aws_credentials
EOF

Export “admin” kubeconfig to be able to install Capsule:

export KUBECONFIG=kubeconfig.conf

Install Capsule and create a tenant where alice has ownership. Use the default Tenant example:

kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/samples/capsule_v1beta1_tenant.yaml

Based on the tenant configuration above, the user alice should be able to create namespaces. Switch to a new terminal and try to create a namespace as user alice:

# Unset AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if defined
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
kubectl create namespace test --kubeconfig="kubeconfig-alice.conf"

Azure AKS

This reference implementation introduces the recommended starting (baseline) infrastructure architecture for implementing a multi-tenancy Azure AKS cluster using Capsule. See CoAKS.

Charmed Kubernetes

Canonical Charmed Kubernetes is a Kubernetes distribution coming with out-of-the-box tools that support deployments and operational management and make microservice development easier. Combined with Capsule, Charmed Kubernetes allows users to further reduce the operational overhead of Kubernetes setup and management.

The Charm package for Capsule is available to Charmed Kubernetes users via Charmhub.io.

3 - Tools

Integrate Capsule with other tools

The Capsule project offers addons for certain common use-cases. These addons are provided as a way to extend the functionality of Capsule and are not part of the core Capsule project.

3.1 - Kubernetes Dashboard

Capsule Integration with Kubernetes Dashboard

This guide works with the Kubernetes Dashboard v2.0.0 (Chart 6.0.8). It has not yet been tested successfully with the v3.x version of the dashboard.

This guide describes how to integrate the Kubernetes Dashboard and Capsule Proxy with OIDC authorization.

OIDC Authentication

Your cluster must also be configured to use OIDC Authentication for seamless Kubernetes RBAC integration. In such a scenario, you should have the following content in the kube-apiserver.yaml manifest:

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=${OIDC_CLIENT_ID}
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-

Where ${OIDC_CLIENT_ID} refers to the client ID for which all tokens must be issued.


OAuth2 Proxy

To enable the proxy authorization from the Kubernetes dashboard to Keycloak, we need to use an OAuth proxy. In this article, we will use oauth2-proxy and install it as a pod in the Kubernetes Dashboard namespace. Alternatively, we can install oauth2-proxy in a different namespace or use it as a sidecar container in the Kubernetes Dashboard deployment.

Prepare the values for oauth2-proxy:

cat > values-oauth2-proxy.yaml <<EOF
config:
  clientID: "${OIDC_CLIENT_ID}"
  clientSecret: ${OIDC_CLIENT_SECRET}

extraArgs:
  provider: "keycloak-oidc"
  redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"
  oidc-issuer-url: "https://${KEYCLOAK_URL}/auth/realms/${OIDC_CLIENT_ID}"
  pass-access-token: true
  set-authorization-header: true
  pass-user-headers: true

ingress:
  enabled: true
  path: "/oauth2"
  hosts:
    - ${DASHBOARD_URL}
  tls:
    - hosts:
      - ${DASHBOARD_URL}
EOF

More information about the keycloak-oidc provider can be found on the oauth2-proxy documentation. We’re ready to install the oauth2-proxy:

helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm install oauth2-proxy oauth2-proxy/oauth2-proxy -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-oauth2-proxy.yaml

Configuring Keycloak

The Kubernetes cluster must be configured with a valid OIDC provider: for this guide, we take for granted that Keycloak is used; if you need more info, please follow the OIDC Authentication section.

In such a scenario, you should have the following content in the kube-apiserver.yaml manifest:

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=${OIDC_CLIENT_ID}
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-

Where ${OIDC_CLIENT_ID} refers to the client ID for which all tokens must be issued.

For this client we need:

  1. Check Valid Redirect URIs: in the oauth2-proxy configuration we set redirect-url: "https://${DASHBOARD_URL}/oauth2/callback", so this path needs to be added to the Valid Redirect URIs.
  2. Create a mapper with Mapper Type ‘Group Membership’ and Token Claim Name ‘groups’.
  3. Create a mapper with Mapper Type ‘Audience’ and Included Client Audience and Included Custom Audience set to your client name (${OIDC_CLIENT_ID}).

Configuring Kubernetes Dashboard

If your Capsule Proxy uses HTTPS and the CA certificate is not the Kubernetes CA, you need to add a secret with the CA for the Capsule Proxy URL.

cat > ca.crt << EOF
-----BEGIN CERTIFICATE-----
...
...
...
-----END CERTIFICATE-----
EOF

kubectl create secret generic certificate --from-file=ca.crt=ca.crt -n ${KUBERNETES_DASHBOARD_NAMESPACE}

Prepare the values for the Kubernetes Dashboard:

cat > values-kubernetes-dashboard.yaml <<EOF
extraVolumes:
  - name: token-ca
    projected:
      sources:
        - serviceAccountToken:
            expirationSeconds: 86400
            path: token
        - secret:
            name: certificate
            items:
              - key: ca.crt
                path: ca.crt
extraVolumeMounts:
  - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    name: token-ca

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/auth-signin: https://${DASHBOARD_URL}/oauth2/start?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-url: https://${DASHBOARD_URL}/oauth2/auth
    nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  hosts:
    - ${DASHBOARD_URL}
  tls:
    - hosts:
      - ${DASHBOARD_URL}

extraEnv:
  - name: KUBERNETES_SERVICE_HOST
    value: '${CAPSULE_PROXY_URL}'
  - name: KUBERNETES_SERVICE_PORT
    value: '${CAPSULE_PROXY_PORT}'
EOF

To add the Certificate Authority for the Capsule Proxy URL, we use the volume token-ca to mount the ca.crt file. Additionally, we set the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to route requests to the Capsule Proxy.

Now you can install the Kubernetes Dashboard:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-kubernetes-dashboard.yaml

3.2 - Kyverno

Kyverno is a policy engine designed for Kubernetes. It provides the ability to validate, mutate, and generate Kubernetes resources using admission control. Kyverno policies are managed as Kubernetes resources and can be applied to a cluster using kubectl. Capsule integrates with Kyverno to provide a set of policies that can be used to improve the security and governance of the Kubernetes cluster.

References

Here are some policies for reference. We do not provide a complete list of policies, but we provide some examples to get you started. These policies are not meant to be used in production. You may adopt the principles shown here to create your own policies.

Extract tenant based on namespace

To get the tenant name based on the namespace, you can use a context. With this context we resolve the tenant based on the {{request.namespace}} of the requested resource. The context calls the /api/v1/namespaces/ API with {{request.namespace}}. The jmesPath is used to check if the tenant label is present. You could assign a default if nothing was found; in this case it’s an empty string:

    context:
      - name: tenant_name
        apiCall:
          method: GET
          urlPath: "/api/v1/namespaces/{{request.namespace}}"
          jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || '')"

Select namespaces with label capsule.clastix.io/tenant

When performing a policy on namespaced objects, you can select the objects which are within a tenant namespace by using the namespaceSelector. In this example we select all Kustomization and HelmRelease resources within a tenant namespace:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: flux-policies
spec:
  validationFailureAction: Enforce
  rules:
    # Enforcement (Mutate to Default)
    - name: Defaults Kustomizations/HelmReleases
      match:
        any:
        - resources:
            kinds:
              - Kustomization
              - HelmRelease
            operations:
              - CREATE
              - UPDATE
            namespaceSelector:
              matchExpressions:
                - key: "capsule.clastix.io/tenant"
                  operator: Exists
      mutate:
        patchStrategicMerge:
          spec:
            +(targetNamespace): "{{ request.object.metadata.namespace }}"
            +(serviceAccountName): "default"

Compare Source and Destination Tenant

With this policy we try to enforce that HelmReleases within a tenant can only use targetNamespaces which are within the same tenant or the same namespace the resource is deployed in:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-compare
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: Validate HelmRelease/Kustomization Target Namespace
      context:

        # Get tenant based on target namespace
        - name: destination_tenant
          apiCall:
            urlPath: "/api/v1/namespaces/{{request.object.spec.targetNamespace}}"
            jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""

        # Get tenant based on resource namespace    
        - name: source_tenant
          apiCall:
            urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
            jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""
      match:
        any:
        - resources:
            kinds:
              - HelmRelease
              - Kustomization
            operations:
              - CREATE
              - UPDATE
            namespaceSelector:
              matchExpressions:
                - key: "capsule.clastix.io/tenant"
                  operator: Exists
      preconditions:
        all:
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            values: [ "{{request.object.metadata.namespace}}" ]
      validate:
        message: "spec.targetNamespace must be in the same tenant ({{source_tenant}})"
        deny:
          conditions:
            - key: "{{source_tenant}}"
              operator: NotEquals
              value:  "{{destination_tenant}}"

Using Global Configuration

When creating a lot of policies, you might want to abstract your configuration into a global configuration. This is a good practice to avoid duplication and to have a single source of truth. Also, if we introduce breaking changes (like changing the label name), we only have to change them in one place. Here is an example of a global configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kyverno-global-config
  namespace: kyverno-system
data:
  # Label for public namespaces
  public_identifier_label: "company.com/public"
  # Value for Label for public namespaces
  public_identifier_value: "yeet"
  # Label which is used to select the tenant name
  tenant_identifier_label: "capsule.clastix.io/tenant"

This configuration can be referenced via context in your policies. Let’s extend the above policy with the global configuration. Additionally we would like to allow the usage of public namespaces:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-compare
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: Validate HelmRelease/Kustomization Target Namespace
      context:

        # Load Global Configuration
        - name: global
          configMap:
            name: kyverno-global-config
            namespace: kyverno-system

        # Get All Public Namespaces based on the label and its value from the global configuration
        - name: public_namespaces
          apiCall:
            urlPath: "/api/v1/namespaces"
            jmesPath: "items[?metadata.labels.\"{{global.data.public_identifier_label}}\" == '{{global.data.public_identifier_value}}'].metadata.name | []" 

        # Get Tenant information from source namespace
        # Defaults to a character, which can't be a label value
        - name: source_tenant
          apiCall:
            urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
            jmesPath: "metadata.labels.\"{{global.data.tenant_identifier_label}}\" | '?'"

        # Get Tenant information from destination namespace
        # Returns Array with Tenant Name or Empty
        - name: destination_tenant
          apiCall:
            urlPath: "/api/v1/namespaces"
            jmesPath: "items[?metadata.name == '{{request.object.spec.targetNamespace}}'].metadata.labels.\"{{global.data.tenant_identifier_label}}\""

      preconditions:
        all:
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            values: [ "{{request.object.metadata.namespace}}" ]
        any: 
          # Source is not Self-Reference  
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotEquals
            value: "{{request.object.metadata.namespace}}"

          # Source not in Public Namespaces
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            value: "{{public_namespaces}}"

          # Source not in Destination
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            value: "{{destination_tenant}}"
      match:
        any:
        - resources:
            kinds:
              - HelmRelease
              - Kustomization
            operations:
              - CREATE
              - UPDATE
            namespaceSelector:
              matchExpressions:
                - key: "capsule.clastix.io/tenant"
                  operator: Exists
      validate:
        message: "Can not use namespace {{request.object.spec.chart.spec.sourceRef.namespace}} as source reference!"
        deny: {}

Extended Validation and Defaulting

Here are extended examples for using validation and defaulting. The first policy is used to validate the tenant name. The second policy is used to default the tenant properties that you, as cluster administrator, would like to enforce for each tenant.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-core
spec:
  validationFailureAction: Enforce
  rules:
  - name: tenant-name
    match:
      all:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    validate:
      message: "Using this tenant name is not allowed."
      deny:
        conditions:
          - key: "{{ request.object.metadata.name }}"
            operator: In
            value: ["default", "cluster-system" ]

  - name: tenant-properties
    match:
      any:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    mutate:
      patchesJson6902: |-
        - op: add
          path: "/spec/namespaceOptions/forbiddenLabels/deniedRegex"
          value: ".*company.ch"
        - op: add
          path: "/spec/priorityClasses/matchLabels"
          value:
            consumer: "customer"
        - op: add
          path: "/spec/serviceOptions/allowedServices/nodePort"
          value: false
        - op: add
          path: "/spec/ingressOptions/allowedClasses/matchLabels"
          value:
            consumer: "customer"
        - op: add
          path: "/spec/storageClasses/matchLabels"
          value:
            consumer: "customer"
        - op: add
          path: "/spec/nodeSelector"
          value:
            nodepool: "workers"        
  

Adding Default Owners/Permissions to Tenant

Since the owners spec is a list, it’s a bit trickier to add a default owner without causing recursion. You must make sure to validate whether the value you are setting is already present; otherwise you will create a loop. Here is an example of a policy which adds cluster:admin as owner to a tenant:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-policy
spec:
  validationFailureAction: Enforce
  background: true
  rules:

  # With this policy for each tenant cluster:admin is added as owner.
  # Only Append these on CREATE, otherwise they will be added per reconciliation and create a loop.
  - name: tenant-owner
    preconditions:
      all:
      - key: "cluster:admin"
        operator: NotIn
        value: "{{ request.object.spec.owners[?kind == 'Group'].name }}"
    match:
      all:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    mutate:
      patchesJson6902: |-
        - op: add
          path: "/spec/owners/-"
          value:
            name: "cluster:admin"
            kind: "Group"        

  # With this policy for each tenant a default ProxySettings are added.
  # Completely overwrites the ProxySettings, if they are already present.
  - name: tenant-proxy-settings
    match:
      any:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    mutate:
      foreach:
      - list: "request.object.spec.owners"
        patchesJson6902: |-
          - path: /spec/owners/{{elementIndex}}/proxySettings
            op: add
            value:
              - kind: IngressClasses
                operations:
                - List
              - kind: StorageClasses
                operations:
                - List
              - kind: PriorityClasses
                operations:
                - List
              - kind: Nodes
                operations:
                - List          

3.3 - Lens

With Capsule extension for Lens, a cluster administrator can easily manage from a single pane of glass all resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.

Features

Capsule extension for Lens provides these capabilities:

  • List all tenants
  • See tenant details and change through the embedded Lens editor
  • Check Resources Quota and Budget at both the tenant and namespace level

Please, see the README for details about the installation of the Capsule Lens Extension.

3.4 - Monitoring

While we cannot provide a full list of all the monitoring solutions available, we can provide some guidance on how to integrate Capsule with some of the most popular ones. This also depends on how you have set up your monitoring solution; we will just explore the options available to you.

Logging

Loki

Promtail

config:
  clients:
    - url: "https://loki.company.com/loki/api/v1/push"
      # Maximum wait period before sending batch
      batchwait: 1s
      # Maximum batch size to accrue before sending, unit is byte
      batchsize: 102400
      # Maximum time to wait for server to respond to a request
      timeout: 10s
      backoff_config:
        # Initial backoff time between retries
        min_period: 100ms
        # Maximum backoff time between retries
        max_period: 5s
        # Maximum number of retries when sending batches, 0 means infinite retries
        max_retries: 20
      tenant_id: "tenant"
      external_labels:
        cluster: "${cluster_name}"
  serverPort: 3101
  positions:
    filename: /run/promtail/positions.yaml
  target_config:
    # Period to resync directories being watched and files being tailed
    sync_period: 10s
  snippets:
    pipelineStages:
      - docker: {}
      # Drop health logs
      - drop:
          expression: "(.*/health-check.*)|(.*/health.*)|(.*kube-probe.*)"
      - static_labels:
          cluster: ${cluster}
      - tenant:
          source: tenant
    # This won't work if pods on the cluster are not labeled with tenant
    extraRelabelConfigs:
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_label_capsule_clastix_io_tenant
        target_label: tenant
...

As mentioned, the above configuration will not work if the pods on the cluster are not labeled with tenant. You can use the following Kyverno policy to ensure that all pods are labeled with tenant. If the pod does not belong to any tenant, it will be labeled with management (assuming you have a central management tenant).

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: capsule-pod-labels
spec:
  background: false
  rules:
  - name: add-pod-label
    context:
      - name: tenant
        apiCall:
          method: GET
          urlPath: "/api/v1/namespaces/{{request.namespace}}"
          jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || 'management')"
    match:
      all:
      - resources:
          kinds:
            - Pod
          operations:
            - CREATE
            - UPDATE
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(capsule.clastix.io/tenant): "{{ tenant }}"

Grafana

3.5 - Rancher

The integration between Rancher and Capsule aims to provide end-users a multi-tenant Kubernetes service, enabling:

  • a self-service approach
  • access to cluster-wide resources

Tenant users will have the ability to access Kubernetes resources through:

  • Rancher UI
  • Rancher Shell
  • Kubernetes CLI

On the other side, administrators need to manage the Kubernetes clusters through Rancher.

Rancher provides a feature called Projects to segregate resources inside a common domain. At the same time, Projects don’t provide a way to segregate Kubernetes cluster-scoped resources.

Capsule, a project born to create a framework for multi-tenant platforms, integrates with Rancher Projects, enhancing the experience with Tenants.

Capsule allows tenant isolation and resource control in a declarative way, while enabling a self-service experience for tenants. With Capsule Proxy, users can also access cluster-wide resources, as configured by administrators at the Tenant custom resource level.

You can read in detail how the integration works and how to configure it in the following guides:

  • How to integrate Rancher Projects with Capsule Tenants
  • How to enable cluster-wide resources and Rancher shell access

Tenants and Projects

3.6 - Tekton


Prerequisites

Tekton must already be installed on your cluster; if that’s not the case, consult the documentation here:

Cluster Scoped Permissions

Tekton Dashboard

Now, for the end-user experience, we are going to deploy the Tekton Dashboard. When using oauth2-proxy we can deploy one single dashboard, which can be used for all tenants. Refer to the following guide to set up the dashboard with the oauth2-proxy:

Once that is done, we need to make small adjustments to the tekton-dashboard service account.

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
patches:
  # Adjust the service for the capsule-proxy according to your installation
  # The used values are compatible with the default installation values
  - target:
      version: v1
      kind: Deployment
      name: tekton-dashboard
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: KUBERNETES_SERVICE_HOST
          value: "capsule-proxy.capsule-system.svc"
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: KUBERNETES_SERVICE_PORT
          value: "9001"      

  # Adjust the CA certificate for the capsule-proxy according to your installation
  - target:
      version: v1
      kind: Deployment
      name: tekton-dashboard
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts
        value: []
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/-
        value:
          mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
          name: token-ca
      - op: add
        path: /spec/template/spec/volumes
        value: []
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: token-ca
          projected:
            sources:
              - serviceAccountToken:
                  expirationSeconds: 86400
                  path: token
              - secret:
                  name: capsule-proxy
                  items:
                    - key: ca
                      path: ca.crt      

This patch assumes there’s a secret called capsule-proxy with the CA certificate for the Capsule Proxy URL.

Apply the given kustomization:
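For example, from the directory containing the kustomization.yaml above:

kubectl apply -k .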


Tekton Operator

When using the Tekton Operator, you need to add the following to the TektonConfig:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  dashboard:
    readonly: false
    options:
      disabled: false
      deployments:
        tekton-dashboard:
          spec:
            template:
              spec:
                volumes:
                  - name: token-ca
                    projected:
                      sources:
                        - serviceAccountToken:
                            expirationSeconds: 86400
                            path: token
                        - secret:
                            name: capsule-proxy
                            items:
                              - key: ca
                                path: ca.crt
                containers:
                  - name: tekton-dashboard
                    volumeMounts:
                      - mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
                        name: token-ca
                    env:
                      - name: KUBERNETES_SERVICE_HOST
                        value: "capsule-proxy.capsule-system.svc"
                      - name: KUBERNETES_SERVICE_PORT
                        value: "9001"

See the options spec for reference.

3.7 - Teleport
