
Ecosystem

All entries on this page were added by people who worked on these projects and therefore self-identified as being part of the Project Capsule Ecosystem.

Integrations

Capsule works well with other CNCF Kubernetes-based solutions. Below are the ones we have documented. Thanks to Capsule's Kubernetes-native approach, it can in principle work with any solution:

Addons

Addons are separate projects which interact with the core Capsule project. Since our commitment is to keep the core API stable, we decided to push towards an addon-based ecosystem. If you have a new addon which interacts with the Capsule core project, consider adding it here.

Proxy

core ux

Enhance the user experience by allowing users to query the Kubernetes API and get back only the results they are supposed to see.

Rancher

community ux

Integrate Capsule with Rancher to manage Capsule Tenants and their resources with Rancher Projects.

ArgoCD

vendor gitops

This addon is designed for Kubernetes administrators to automatically translate their existing Capsule Tenants into Argo CD AppProjects.

Sops Operator

core secrets gitops

Handle SOPS Secrets in a multi-tenant and kubernetes-native way.

FluxCD

core gitops

In particular, it enables Tenants to manage their resources via GitOps, including creating Namespaces.

Cortex Proxy

core observability

Route metrics to Cortex organizations based on the relationship between namespace metrics and Capsule Tenants.

1 - Integrations

Integrate Capsule with other platforms and solutions

1.1 - Managed Kubernetes

Capsule on managed Kubernetes offerings

The Capsule Operator can be easily installed on a Managed Kubernetes Service. Since you do not have access to the Kubernetes API server, you should check with the provider of the service that:

  • the default cluster-admin ClusterRole is accessible
  • the following Admission Webhooks are enabled on the API server:

  • PodNodeSelector
  • LimitRanger
  • ResourceQuota
  • MutatingAdmissionWebhook
  • ValidatingAdmissionWebhook
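
As a quick sanity check on the first requirement, you can verify that your user effectively holds cluster-admin privileges; note that this only covers the RBAC part, while the enabled admission plugins still have to be confirmed with your provider:

kubectl auth can-i '*' '*' --all-namespaces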

AWS EKS

This is an example of how to install an AWS EKS cluster and one user managed by Capsule. It is based on Using IAM Groups to manage Kubernetes access.

Create EKS cluster:

export AWS_DEFAULT_REGION="eu-west-1"
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxx"

eksctl create cluster \
--name=test-k8s \
--managed \
--node-type=t3.small \
--node-volume-size=20 \
--kubeconfig=kubeconfig.conf

Create the AWS user alice using CloudFormation, then create the AWS access files and kubeconfig for that user:

cat > cf.yml << EOF
Parameters:
  ClusterName:
    Type: String
Resources:
  UserAlice:
    Type: AWS::IAM::User
    Properties:
      UserName: !Sub "alice-${ClusterName}"
      Policies:
      - PolicyName: !Sub "alice-${ClusterName}-policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Sid: AllowAssumeOrganizationAccountRole
            Effect: Allow
            Action: sts:AssumeRole
            Resource: !GetAtt RoleAlice.Arn
  AccessKeyAlice:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref UserAlice
  RoleAlice:
    Type: AWS::IAM::Role
    Properties:
      Description: !Sub "IAM role for the alice-${ClusterName} user"
      RoleName: !Sub "alice-${ClusterName}"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
          Action: sts:AssumeRole
Outputs:
  RoleAliceArn:
    Description: The ARN of the Alice IAM Role
    Value: !GetAtt RoleAlice.Arn
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-RoleAliceArn"
  AccessKeyAlice:
    Description: The AccessKey for Alice user
    Value: !Ref AccessKeyAlice
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-AccessKeyAlice"
  SecretAccessKeyAlice:
    Description: The SecretAccessKey for Alice user
    Value: !GetAtt AccessKeyAlice.SecretAccessKey
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-SecretAccessKeyAlice"
EOF

eval aws cloudformation deploy --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=test-k8s" \
  --stack-name "test-k8s-users" --template-file cf.yml

AWS_CLOUDFORMATION_DETAILS=$(aws cloudformation describe-stacks --stack-name "test-k8s-users")
ALICE_ROLE_ARN=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"RoleAliceArn\") .OutputValue")
ALICE_USER_ACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"AccessKeyAlice\") .OutputValue")
ALICE_USER_SECRETACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"SecretAccessKeyAlice\") .OutputValue")

eksctl create iamidentitymapping --cluster="test-k8s" --arn="${ALICE_ROLE_ARN}" --username alice --group capsule.clastix.io
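
You can verify that the identity mapping has been created with:

eksctl get iamidentitymapping --cluster="test-k8s"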

cat > aws_config << EOF
[profile alice]
role_arn=${ALICE_ROLE_ARN}
source_profile=alice
EOF

cat > aws_credentials << EOF
[alice]
aws_access_key_id=${ALICE_USER_ACCESSKEY}
aws_secret_access_key=${ALICE_USER_SECRETACCESSKEY}
EOF

eksctl utils write-kubeconfig --cluster=test-k8s --kubeconfig="kubeconfig-alice.conf"
cat >> kubeconfig-alice.conf << EOF
      - name: AWS_PROFILE
        value: alice
      - name: AWS_CONFIG_FILE
        value: aws_config
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: aws_credentials
EOF

Export “admin” kubeconfig to be able to install Capsule:

export KUBECONFIG=kubeconfig.conf
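
Install Capsule on the cluster. A minimal sketch using the Clastix Helm chart is shown below; the release name and target namespace are assumptions based on the chart defaults, adjust them to your conventions:

helm repo add clastix https://clastix.github.io/charts
helm repo update
# Release name and namespace are assumptions; adjust as needed
helm install capsule clastix/capsule -n capsule-system --create-namespace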

With Capsule installed, create a tenant where alice has ownership. Use the default Tenant example:

kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/samples/capsule_v1beta1_tenant.yaml

Based on the tenant configuration above, the user alice should be able to create namespaces. Switch to a new terminal and try to create a namespace as user alice:

# Unset AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if defined
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
kubectl create namespace test --kubeconfig="kubeconfig-alice.conf"

Azure AKS

This reference implementation introduces the recommended starting (baseline) infrastructure architecture for implementing a multi-tenant Azure AKS cluster using Capsule. See CoAKS.

Charmed Kubernetes

Canonical Charmed Kubernetes is a Kubernetes distribution coming with out-of-the-box tools that support deployments and operational management and make microservice development easier. Combined with Capsule, Charmed Kubernetes allows users to further reduce the operational overhead of Kubernetes setup and management.

The Charm package for Capsule is available to Charmed Kubernetes users via Charmhub.io.

1.2 - Kubernetes Dashboard

Capsule Integration with Kubernetes Dashboard

This guide works with the Kubernetes Dashboard v2.0.0 (Chart 6.0.8). It has not yet been tested successfully with the v3.x version of the dashboard.

This guide describes how to integrate the Kubernetes Dashboard and Capsule Proxy with OIDC authorization.

OIDC Authentication

Your cluster must also be configured to use OIDC authentication for seamless Kubernetes RBAC integration. In such a scenario, the kube-apiserver.yaml manifest should contain the following content:

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=${OIDC_CLIENT_ID}
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-

Where ${OIDC_CLIENT_ID} refers to the client ID for which all tokens must be issued.

For this client we need to:

  1. Check Valid Redirect URIs: in the oauth2-proxy configuration we set redirect-url: "https://${DASHBOARD_URL}/oauth2/callback", so this path must be added to the Valid Redirect URIs.
  2. Create a mapper with Mapper Type 'Group Membership' and Token Claim Name 'groups'.
  3. Create a mapper with Mapper Type 'Audience' and Included Client Audience and Included Custom Audience set to your client name (${OIDC_CLIENT_ID}).

OAuth2 Proxy

To enable the proxy authorization from the Kubernetes dashboard to Keycloak, we need to use an OAuth proxy. In this article, we will use oauth2-proxy and install it as a pod in the Kubernetes Dashboard namespace. Alternatively, we can install oauth2-proxy in a different namespace or use it as a sidecar container in the Kubernetes Dashboard deployment.

Prepare the values for oauth2-proxy:

cat > values-oauth2-proxy.yaml <<EOF
config:
  clientID: "${OIDC_CLIENT_ID}"
  clientSecret: ${OIDC_CLIENT_SECRET}

extraArgs:
  provider: "keycloak-oidc"
  redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"
  oidc-issuer-url: "https://${KEYCLOAK_URL}/auth/realms/${OIDC_CLIENT_ID}"
  pass-access-token: true
  set-authorization-header: true
  pass-user-headers: true

ingress:
  enabled: true
  path: "/oauth2"
  hosts:
    - ${DASHBOARD_URL}
  tls:
    - hosts:
      - ${DASHBOARD_URL}
EOF

More information about the keycloak-oidc provider can be found on the oauth2-proxy documentation. We’re ready to install the oauth2-proxy:

helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm install oauth2-proxy oauth2-proxy/oauth2-proxy -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-oauth2-proxy.yaml

Configuring Keycloak

The Kubernetes cluster must be configured with a valid OIDC provider: for this guide, we assume that Keycloak is used; if you need more info, please follow the OIDC Authentication section.

In such a scenario, the kube-apiserver.yaml manifest should contain the following content:

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=${OIDC_CLIENT_ID}
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-

Where ${OIDC_CLIENT_ID} refers to the client ID for which all tokens must be issued.

For this client we need:

  1. Check Valid Redirect URIs: in the oauth2-proxy configuration we set redirect-url: "https://${DASHBOARD_URL}/oauth2/callback", so this path must be added to the Valid Redirect URIs.
  2. Create a mapper with Mapper Type 'Group Membership' and Token Claim Name 'groups'.
  3. Create a mapper with Mapper Type 'Audience' and Included Client Audience and Included Custom Audience set to your client name (${OIDC_CLIENT_ID}).

Configuring Kubernetes Dashboard

If your Capsule Proxy uses HTTPS and the CA certificate is not the Kubernetes CA, you need to add a secret with the CA for the Capsule Proxy URL.

cat > ca.crt<< EOF
-----BEGIN CERTIFICATE-----
...
...
...
-----END CERTIFICATE-----
EOF

kubectl create secret generic certificate --from-file=ca.crt=ca.crt -n ${KUBERNETES_DASHBOARD_NAMESPACE}

Prepare the values for the Kubernetes Dashboard:

cat > values-kubernetes-dashboard.yaml <<EOF
extraVolumes:
  - name: token-ca
    projected:
      sources:
        - serviceAccountToken:
            expirationSeconds: 86400
            path: token
        - secret:
            name: certificate
            items:
              - key: ca.crt
                path: ca.crt
extraVolumeMounts:
  - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    name: token-ca

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/auth-signin: https://${DASHBOARD_URL}/oauth2/start?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-url: https://${DASHBOARD_URL}/oauth2/auth
    nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  hosts:
    - ${DASHBOARD_URL}
  tls:
    - hosts:
      - ${DASHBOARD_URL}

extraEnv:
  - name: KUBERNETES_SERVICE_HOST
    value: '${CAPSULE_PROXY_URL}'
  - name: KUBERNETES_SERVICE_PORT
    value: '${CAPSULE_PROXY_PORT}'
EOF

To add the Certificate Authority for the Capsule Proxy URL, we use the volume token-ca to mount the ca.crt file. Additionally, we set the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to route requests to the Capsule Proxy.

Now you can install the Kubernetes Dashboard:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-kubernetes-dashboard.yaml
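
To confirm that the Dashboard is routing its requests through Capsule Proxy, you can inspect the rendered environment variables; the deployment name below is an assumption based on the chart defaults. The output should show KUBERNETES_SERVICE_HOST pointing to the Capsule Proxy URL:

kubectl get deployment kubernetes-dashboard -n ${KUBERNETES_DASHBOARD_NAMESPACE} \
  -o jsonpath='{.spec.template.spec.containers[0].env}'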

1.3 - Kyverno

Kyverno is a policy engine designed for Kubernetes. It provides the ability to validate, mutate, and generate Kubernetes resources using admission control. Kyverno policies are managed as Kubernetes resources and can be applied to a cluster using kubectl. Capsule integrates with Kyverno to provide a set of policies that can be used to improve the security and governance of the Kubernetes cluster.

References

Here are some policies for reference. We do not provide a complete list of policies, but some examples to get you started. These policies are not meant to be used in production; you may adopt the principles shown here to create your own policies.

Extract tenant based on namespace

To get the tenant name based on the namespace, you can use a context. With this context we resolve the tenant based on the {{request.namespace}} of the requested resource. The context calls the /api/v1/namespaces/ API with the {{request.namespace}}. The jmesPath is used to check if the tenant label is present. You can assign a default if nothing was found; in this case it's an empty string:

    context:
      - name: tenant_name
        apiCall:
          method: GET
          urlPath: "/api/v1/namespaces/{{request.namespace}}"
          jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || '')"

Select namespaces with label capsule.clastix.io/tenant

When you are writing a policy for namespaced objects, you can select the objects which are within a tenant namespace by using the namespaceSelector. In this example we select all Kustomization and HelmRelease resources which are within a tenant namespace:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: flux-policies
spec:
  validationFailureAction: Enforce
  rules:
    # Enforcement (Mutate to Default)
    - name: Defaults Kustomizations/HelmReleases
      match:
        any:
        - resources:
            kinds:
              - Kustomization
              - HelmRelease
            operations:
              - CREATE
              - UPDATE
            namespaceSelector:
              matchExpressions:
                - key: "capsule.clastix.io/tenant"
                  operator: Exists
      mutate:
        patchStrategicMerge:
          spec:
            +(targetNamespace): "{{ request.object.metadata.namespace }}"
            +(serviceAccountName): "default"

Compare Source and Destination Tenant

With this policy we try to enforce that HelmReleases within a tenant can only use targetNamespaces which are within the same tenant, or the same namespace the resource is deployed in:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-compare
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: Validate HelmRelease/Kustomization Target Namespace
      context:

        # Get tenant based on target namespace
        - name: destination_tenant
          apiCall:
            urlPath: "/api/v1/namespaces/{{request.object.spec.targetNamespace}}"
            jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""

        # Get tenant based on resource namespace    
        - name: source_tenant
          apiCall:
            urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
            jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""
      match:
        any:
        - resources:
            kinds:
              - HelmRelease
              - Kustomization
            operations:
              - CREATE
              - UPDATE
            namespaceSelector:
              matchExpressions:
                - key: "capsule.clastix.io/tenant"
                  operator: Exists
      preconditions:
        all:
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            values: [ "{{request.object.metadata.namespace}}" ]
      validate:
        message: "spec.targetNamespace must be in the same tenant ({{source_tenant}})"
        deny:
          conditions:
            - key: "{{source_tenant}}"
              operator: NotEquals
              value:  "{{destination_tenant}}"

Using Global Configuration

When creating a lot of policies, you might want to abstract your configuration into a global configuration. This is a good practice to avoid duplication and to have a single source of truth. Also, if we introduce breaking changes (like changing the label name), we only have to change it in one place. Here is an example of a global configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kyverno-global-config
  namespace: kyverno-system
data:
  # Label for public namespaces
  public_identifier_label: "company.com/public"
  # Value for Label for public namespaces
  public_identifier_value: "yeet"
  # Label which is used to select the tenant name
  tenant_identifier_label: "capsule.clastix.io/tenant"

This configuration can be referenced via context in your policies. Let’s extend the above policy with the global configuration. Additionally we would like to allow the usage of public namespaces:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-compare
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: Validate HelmRelease/Kustomization Target Namespace
      context:

        # Load Global Configuration
        - name: global
          configMap:
            name: kyverno-global-config
            namespace: kyverno-system

        # Get all public Namespaces based on the label and its value from the global configuration
        - name: public_namespaces
          apiCall:
            urlPath: "/api/v1/namespaces"
            jmesPath: "items[?metadata.labels.\"{{global.data.public_identifier_label}}\" == '{{global.data.public_identifier_value}}'].metadata.name | []" 

        # Get Tenant information from source namespace
        # Defaults to a character, which can't be a label value
        - name: source_tenant
          apiCall:
            urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
            jmesPath: "metadata.labels.\"{{global.data.tenant_identifier_label}}\" | '?'"

        # Get Tenant information from destination namespace
        # Returns Array with Tenant Name or Empty
        - name: destination_tenant
          apiCall:
            urlPath: "/api/v1/namespaces"
            jmesPath: "items[?metadata.name == '{{request.object.spec.targetNamespace}}'].metadata.labels.\"{{global.data.tenant_identifier_label}}\""

      preconditions:
        all:
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            values: [ "{{request.object.metadata.namespace}}" ]
        any: 
          # Source is not Self-Reference  
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotEquals
            value: "{{request.object.metadata.namespace}}"

          # Source not in Public Namespaces
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            value: "{{public_namespaces}}"

          # Source not in Destination
          - key: "{{request.object.spec.targetNamespace}}"
            operator: NotIn
            value: "{{destination_tenant}}"
      match:
        any:
        - resources:
            kinds:
              - HelmRelease
              - Kustomization
            operations:
              - CREATE
              - UPDATE
            namespaceSelector:
              matchExpressions:
                - key: "capsule.clastix.io/tenant"
                  operator: Exists
      validate:
        message: "Can not use namespace {{request.object.spec.chart.spec.sourceRef.namespace}} as source reference!"
        deny: {}

Extended Validation and Defaulting

Here are extended examples for using validation and defaulting. The first policy is used to validate the tenant name. The second policy is used to default the tenant properties that you, as cluster administrator, would like to enforce for each tenant.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-core
spec:
  validationFailureAction: Enforce
  rules:
  - name: tenant-name
    match:
      all:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    validate:
      message: "Using this tenant name is not allowed."
      deny:
        conditions:
          - key: "{{ request.object.metadata.name }}"
            operator: In
            value: ["default", "cluster-system" ]

  - name: tenant-properties
    match:
      any:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    mutate:
      patchesJson6902: |-
        - op: add
          path: "/spec/namespaceOptions/forbiddenLabels/deniedRegex"
          value: ".*company.ch"
        - op: add
          path: "/spec/priorityClasses/matchLabels"
          value:
            consumer: "customer"
        - op: add
          path: "/spec/serviceOptions/allowedServices/nodePort"
          value: false
        - op: add
          path: "/spec/ingressOptions/allowedClasses/matchLabels"
          value:
            consumer: "customer"
        - op: add
          path: "/spec/storageClasses/matchLabels"
          value:
            consumer: "customer"
        - op: add
          path: "/spec/nodeSelector"
          value:
            nodepool: "workers"
  

Adding Default Owners/Permissions to Tenant

Since the owners spec is a list, it's a bit trickier to add a default owner without causing recursion. You must make sure to validate whether the value you are setting is already present, otherwise you will create a loop. Here is an example of a policy which adds cluster:admin as owner to a tenant:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-policy
spec:
  validationFailureAction: Enforce
  background: true
  rules:

  # With this policy for each tenant cluster:admin is added as owner.
  # Only Append these on CREATE, otherwise they will be added per reconciliation and create a loop.
  - name: tenant-owner
    preconditions:
      all:
      - key: "cluster:admin"
        operator: NotIn
        value: "{{ request.object.spec.owners[?kind == 'Group'].name }}"
    match:
      all:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    mutate:
      patchesJson6902: |-
        - op: add
          path: "/spec/owners/-"
          value:
            name: "cluster:admin"
            kind: "Group"

  # With this policy for each tenant a default ProxySettings are added.
  # Completely overwrites the ProxySettings, if they are already present.
  - name: tenant-proxy-settings
    match:
      any:
      - resources:
          kinds:
          - "capsule.clastix.io/v1beta2/Tenant"
          operations:
          - CREATE
          - UPDATE
    mutate:
      foreach:
      - list: "request.object.spec.owners"
        patchesJson6902: |-
          - path: /spec/owners/{{elementIndex}}/proxySettings
            op: add
            value:
              - kind: IngressClasses
                operations:
                - List
              - kind: StorageClasses
                operations:
                - List
              - kind: PriorityClasses
                operations:
                - List
              - kind: Nodes
                operations:
                - List

1.4 - Lens

With Capsule extension for Lens, a cluster administrator can easily manage from a single pane of glass all resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.

Features

Capsule extension for Lens provides these capabilities:

  • List all tenants
  • See tenant details and change through the embedded Lens editor
  • Check Resources Quota and Budget at both the tenant and namespace level

Please, see the README for details about the installation of the Capsule Lens Extension.

1.5 - Monitoring

While we cannot provide a full list of all available monitoring solutions, we can provide some guidance on how to integrate Capsule with some of the most popular ones. This also depends on how you have set up your monitoring solution; we will just explore the options available to you.

Logging

Loki

Promtail

config:
  clients:
    - url: "https://loki.company.com/loki/api/v1/push"
      # Maximum wait period before sending batch
      batchwait: 1s
      # Maximum batch size to accrue before sending, unit is byte
      batchsize: 102400
      # Maximum time to wait for server to respond to a request
      timeout: 10s
      backoff_config:
        # Initial backoff time between retries
        min_period: 100ms
        # Maximum backoff time between retries
        max_period: 5s
        # Maximum number of retries when sending batches, 0 means infinite retries
        max_retries: 20
      tenant_id: "tenant"
      external_labels:
        cluster: "${cluster_name}"
  serverPort: 3101
  positions:
    filename: /run/promtail/positions.yaml
  target_config:
    # Period to resync directories being watched and files being tailed
    sync_period: 10s
  snippets:
    pipelineStages:
      - docker: {}
      # Drop health logs
      - drop:
          expression: "(.*/health-check.*)|(.*/health.*)|(.*kube-probe.*)"
      - static_labels:
          cluster: ${cluster}
      - tenant:
          source: tenant
    # This won't work if pods on the cluster are not labeled with tenant
    extraRelabelConfigs:
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_label_capsule_clastix_io_tenant
        target_label: tenant
...

As mentioned, the above configuration will not work if the pods on the cluster are not labeled with tenant. You can use the following Kyverno policy to ensure that all pods are labeled with a tenant. If the pod does not belong to any tenant, it will be labeled with management (assuming you have a central management tenant).

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: capsule-pod-labels
spec:
  background: false
  rules:
  - name: add-pod-label
    context:
      - name: tenant_name
        apiCall:
          method: GET
          urlPath: "/api/v1/namespaces/{{request.namespace}}"
          jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || 'management')"
    match:
      all:
      - resources:
          kinds:
            - Pod
          operations:
            - CREATE
            - UPDATE
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(capsule.clastix.io/tenant): "{{ tenant_name }}"

Grafana

1.6 - Rancher

The integration between Rancher and Capsule aims to provide a multi-tenant Kubernetes service to users, enabling:

  • a self-service approach
  • access to cluster-wide resources

to end-users.

Tenant users will have the ability to access Kubernetes resources through:

  • Rancher UI
  • Rancher Shell
  • Kubernetes CLI

On the other side, administrators need to manage the Kubernetes clusters through Rancher.

Rancher provides a feature called Projects to segregate resources inside a common domain. At the same time, Projects don't provide a way to segregate Kubernetes cluster-scope resources.

Capsule, as a project born to create a framework for multi-tenant platforms, integrates with Rancher Projects, enhancing the experience with Tenants.

Capsule allows tenant isolation and resource control in a declarative way, while enabling a self-service experience for tenants. With Capsule Proxy, users can also access cluster-wide resources, as configured by administrators at the Tenant custom resource level.

You can read in detail how the integration works and how to configure it in the following guides.

capsule rancher addon

Tenants and Projects

This guide explains how to setup the integration between Capsule and Rancher Projects.

It then explains how, for the tenant user, access to Kubernetes resources is transparent.

Pre-requisites

  • An authentication provider in Rancher, e.g. an OIDC identity provider
  • A Tenant Member Cluster Role in Rancher

Configure an identity provider for Kubernetes

You can follow this general guide to configure an OIDC authentication for Kubernetes.

For a Keycloak-specific setup you can check this resources list.

Known issues

Keycloak new URLs without /auth makes Rancher crash

Create the Tenant Member Cluster Role

A custom Rancher Cluster Role is needed to allow Tenant users to read cluster-scope resources, as Rancher doesn't provide a built-in Cluster Role with this tailored set of privileges.

When logged-in to the Rancher UI as administrator, from the Users & Authentication page, create a Cluster Role named Tenant Member with the following privileges:

  • get, list, watch operations over IngressClasses resources.
  • get, list, watch operations over StorageClasses resources.
  • get, list, watch operations over PriorityClasses resources.
  • get, list, watch operations over Nodes resources.
  • get, list, watch operations over RuntimeClasses resources.

Configuration (administration)

Tenant onboarding

When onboarding tenants, the administrator needs to create the following, in order to bind the Project with the Tenant:

  • In Rancher, create a Project.

  • In the target Kubernetes cluster, create a Tenant, with the following specification:

    kind: Tenant
    ...
    spec:
      namespaceOptions:
        additionalMetadata:
          annotations:
            field.cattle.io/projectId: ${CLUSTER_ID}:${PROJECT_ID}
          labels:
            field.cattle.io/projectId: ${PROJECT_ID}
    

    where $CLUSTER_ID and $PROJECT_ID can be retrieved, assuming a valid $CLUSTER_NAME, as:

    CLUSTER_NAME=foo
    CLUSTER_ID=$(kubectl get cluster -n fleet-default ${CLUSTER_NAME} -o jsonpath='{.status.clusterName}')
    PROJECT_IDS=$(kubectl get projects -n $CLUSTER_ID -o jsonpath="{.items[*].metadata.name}")
    for project_id in $PROJECT_IDS; do echo "${project_id}"; done
    

    More on declarative Projects here.

  • In the identity provider, create a user with the correct OIDC claim of the Tenant.

  • In Rancher, add the new user to the Project with the Read-only Role.

  • In Rancher, add the new user to the Cluster with the Tenant Member Cluster Role.

Create the Tenant Member Project Role

A custom Project Role is needed to grant Tenant users a minimum set of privileges and allow them to create and delete Namespaces.

Create a Project Role named Tenant Member that inherits the privileges from the following Roles:

  • read-only
  • create-ns

Usage

When the administrative configuration tasks have been completed, the tenant users are ready to use the Kubernetes cluster transparently.

For example, they can create Namespaces in self-service mode, which would otherwise be impossible with the sole use of Rancher Projects.

Namespace creation

From the tenant user's perspective, both the CLI and the UI are valid interfaces to communicate with.

From CLI

  • The tenant user logs in to the OIDC provider via kubectl (e.g. using an OIDC login plugin)
  • Tenant creates a Namespace, as a valid OIDC-discoverable user.

the Namespace is now part of both the Tenant and the Project.

As administrator, you can verify with:

kubectl get tenant ${TENANT_NAME} -o jsonpath='{.status}'
kubectl get namespace -l field.cattle.io/projectId=${PROJECT_ID}

From UI

  • The tenant user logs in to Rancher, with a valid OIDC-discoverable user (in a valid Tenant group).
  • The tenant user creates a valid Namespace.

the Namespace is now part of both the Tenant and the Project.

As administrator, you can verify with:

kubectl get tenant ${TENANT_NAME} -o jsonpath='{.status}'
kubectl get namespace -l field.cattle.io/projectId=${PROJECT_ID}

Additional administration

Project monitoring

Before proceeding, it is recommended to read the official Rancher documentation about Project Monitors.

In summary, the setup is composed of a cluster-level Prometheus and a Prometheus Federator, through which the single Project-level Prometheus instances federate.

Network isolation

Before proceeding, it is recommended to read the official Capsule documentation about NetworkPolicy at Tenant level.

Network isolation and Project Monitor

As Rancher's Project Monitor deploys the Prometheus stack in a Namespace that is part of neither the Project nor the Tenant Namespaces, it is important to apply the label selectors in the NetworkPolicy ingress rules to the Namespace created by Project Monitor.

That Project monitoring Namespace will be named as cattle-project-<PROJECT_ID>-monitoring.

For example, if the NetworkPolicy is configured to allow all ingress traffic from Namespaces with the label capsule.clastix.io/tenant=foo, this label has to be applied to the Project monitoring Namespace too.
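
For example, assuming the placeholder <PROJECT_ID> is replaced with the actual Project ID, the label can be applied with:

kubectl label namespace cattle-project-<PROJECT_ID>-monitoring capsule.clastix.io/tenant=foo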

Then, a NetworkPolicy can be applied at Tenant level with Capsule GlobalTenantResources. For example, a minimal policy can be applied for the oil Tenant:

apiVersion: capsule.clastix.io/v1beta2
kind: GlobalTenantResource
metadata:
  name: oil-networkpolicies
spec:
  tenantSelector:
    matchLabels:
      capsule.clastix.io/tenant: oil
  resyncPeriod: 360s
  pruningOnDelete: true
  resources:
    - namespaceSelector:
        matchLabels:
          capsule.clastix.io/tenant: oil
      rawItems:
      - apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: oil-minimal
        spec:
          podSelector: {}
          policyTypes:
            - Ingress
            - Egress
          ingress:
            # Intra-Tenant
            - from:
              - namespaceSelector:
                  matchLabels:
                    capsule.clastix.io/tenant: oil
            # Rancher Project Monitor stack
            - from:
              - namespaceSelector:
                  matchLabels:
                    role: monitoring
            # Kubernetes nodes
            - from:
              - ipBlock:
                  cidr: 192.168.1.0/24
          egress:
            # Kubernetes DNS server
            - to:
              - namespaceSelector: {}
                podSelector:
                  matchLabels:
                    k8s-app: kube-dns
                ports:
                  - port: 53
                    protocol: UDP
            # Intra-Tenant
            - to:
              - namespaceSelector:
                  matchLabels:
                    capsule.clastix.io/tenant: oil
            # Kubernetes API server
            - to:
              - ipBlock:
                  cidr: 10.43.0.1/32
                ports:
                  - port: 443

Capsule Proxy and Rancher Projects

This guide explains how to setup the integration between Capsule Proxy and Rancher Projects.

It then explains how, for the tenant user, access to Kubernetes cluster-wide resources is transparent.

Rancher Shell and Capsule

In order to integrate the Rancher Shell with Capsule, the Kubernetes API requests made from the shell need to be routed via Capsule Proxy.

The capsule-rancher-addon allows the integration transparently.

Install the Capsule addon

Add the Clastix Helm repository https://clastix.github.io/charts.

After updating the cache with Clastix's Helm repository, a Helm chart named capsule-rancher-addon is available.

Install it, paying attention to the following Helm values (a combined installation sketch follows the list):

  • proxy.caSecretKey: the Secret key that contains the CA certificate used to sign the Capsule Proxy TLS certificate (it should be "ca.crt" when Capsule Proxy has been configured with certificates generated with Cert Manager).
  • proxy.servicePort: the port configured for the Capsule Proxy Kubernetes Service (443 in this setup).
  • proxy.serviceURL: the name of the Capsule Proxy Service (by default "capsule-proxy.capsule-system.svc" when installed in the capsule-system Namespace).
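
Putting these values together, an installation sketch could look like the following; the target namespace and release name are assumptions, while the value keys map to the settings listed above:

helm repo add clastix https://clastix.github.io/charts
helm repo update
# Namespace and release name are assumptions; values mirror the list above
helm install capsule-rancher-addon clastix/capsule-rancher-addon \
  -n capsule-system \
  --set proxy.caSecretKey=ca.crt \
  --set proxy.servicePort=443 \
  --set proxy.serviceURL=capsule-proxy.capsule-system.svc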

Rancher Cluster Agent

In both CLI and dashboard use cases, the Cluster Agent is responsible for the two-way communication between Rancher and the downstream cluster.

In a standard setup, the Cluster Agent communicates with the API server. In this setup it will communicate with Capsule Proxy instead, to ensure filtering of cluster-scope resources for Tenants.

The Cluster Agent accepts the following arguments:

  • KUBERNETES_SERVICE_HOST environment variable
  • KUBERNETES_SERVICE_PORT environment variable

which will be set, at cluster import-time, to the values of the Capsule Proxy Service. For example:

  • KUBERNETES_SERVICE_HOST=capsule-proxy.capsule-system.svc
  • (optional) KUBERNETES_SERVICE_PORT=9001. You can skip it by installing Capsule Proxy with Helm value service.port=443.

The expected CA is the one whose certificate is stored in the kube-root-ca.crt ConfigMap in the same Namespace as the Cluster Agent (cattle-system).
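
To inspect the CA the Cluster Agent expects, you can read that ConfigMap directly:

kubectl get configmap kube-root-ca.crt -n cattle-system -o jsonpath='{.data.ca\.crt}'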

Capsule Proxy

Capsule Proxy needs to provide an x509 certificate whose root CA is trusted by the Cluster Agent. This can be achieved either by using the Kubernetes CA to sign its certificate, or by using a dedicated root CA.

With the Kubernetes root CA

Note: this can be achieved when the Kubernetes root CA keypair is accessible. For example, this is likely to be possible with on-premise setups, but not with managed Kubernetes services.

With this approach, Cert Manager will sign certificates with the Kubernetes root CA, which needs to be provided as a Secret:

kubectl create secret tls -n capsule-system kubernetes-ca-key-pair --cert=/path/to/ca.crt --key=/path/to/ca.key

When installing Capsule Proxy with the Helm chart, you need to specify that the Capsule Proxy certificates should be generated with Cert Manager using an external ClusterIssuer:

  • certManager.externalCA.enabled=true
  • certManager.externalCA.secretName=kubernetes-ca-key-pair
  • certManager.generateCertificates=true

and disable the job for generating the certificates without Cert Manager:

  • options.generateCertificates=false
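
Combining the values above, a Capsule Proxy installation sketch could look like this; the chart and release names are assumptions based on the Clastix Helm repository:

helm upgrade --install capsule-proxy clastix/capsule-proxy \
  -n capsule-system \
  --set certManager.generateCertificates=true \
  --set certManager.externalCA.enabled=true \
  --set certManager.externalCA.secretName=kubernetes-ca-key-pair \
  --set options.generateCertificates=false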

Enable tenant users to access cluster resources

In order to allow tenant users to list cluster-scope resources, like Nodes, Tenants need to be configured with proper proxySettings, for example:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
[...]

Also, in order to assign or filter nodes per Tenant, nodes need to be labeled so that they can be selected:

kubectl label node worker-01 capsule.clastix.io/tenant=oil

and a node selector at Tenant level:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  nodeSelector:
    capsule.clastix.io/tenant: oil
[...]

The final manifest is:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
  nodeSelector:
    capsule.clastix.io/tenant: oil

The same applies for:

  • Nodes
  • StorageClasses
  • IngressClasses
  • PriorityClasses

More on this in the official documentation.

Configure OIDC authentication with Keycloak

Pre-requisites

  • Keycloak realm for Rancher
  • Rancher OIDC authentication provider

Keycloak realm for Rancher

These instructions are specific to a setup made with Keycloak as an OIDC identity provider.

Mappers

  • Add to userinfo Group Membership type, claim name groups
  • Add to userinfo Audience type, claim name client audience
  • Add to userinfo, full group path, Group Membership type, claim name full_group_path

More on this in the official guide.

Rancher OIDC authentication provider

Configure an OIDC authentication provider, with a client, issuer, and return URLs specific to the Keycloak setup.

Use the old, Rancher-standard paths with the /auth subpath (see the known issues), or add custom paths and remove the /auth subpath in the return and issuer URLs.

Configuration

Configure Tenant users

  1. In Rancher, configure OIDC authentication with Keycloak to use with Rancher.
  2. In Keycloak, create a Group in the rancher Realm: capsule.clastix.io.
  3. In Keycloak, create a User in the rancher Realm, member of the capsule.clastix.io Group.
  4. In the Kubernetes target cluster, update the CapsuleConfiguration by adding the "keycloakoidc_group://capsule.clastix.io" Kubernetes Group (see the sketch after this list).
  5. Log in to Rancher with Keycloak with the new user.
  6. In Rancher, as an administrator, assign the user a custom role with get on Cluster.
  7. In Rancher as an administrator, add the Rancher user ID of the just-logged in user as Owner of a Tenant.
  8. (optional) configure proxySettings for the Tenant to enable tenant users to access cluster-wide resources.
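
For step 4, a possible sketch to add the group to the CapsuleConfiguration; the resource name default and the spec.userGroups field are assumptions based on a standard Capsule installation:

kubectl patch capsuleconfiguration default --type=json \
  -p '[{"op": "add", "path": "/spec/userGroups/-", "value": "keycloakoidc_group://capsule.clastix.io"}]'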

1.7 - Tekton

This guide describes how to integrate Tekton with Capsule, routing the Tekton Dashboard's Kubernetes API requests through Capsule Proxy so that tenant users only see the resources they are allowed to access.

Prerequisites

Tekton must already be installed on your cluster; if that's not the case, consult the documentation here:

Cluster Scoped Permissions

Tekton Dashboard

Now, for the end-user experience, we are going to deploy the Tekton Dashboard. When using oauth2-proxy we can deploy one single dashboard, which can be used for all tenants. Refer to the following guide to set up the dashboard with the oauth2-proxy:

Once that is done, we need to make small adjustments to the tekton-dashboard deployment.

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
patches:
  # Adjust the service for the capsule-proxy according to your installation
  # The used values are compatible with the default installation values
  - target:
      version: v1
      kind: Deployment
      name: tekton-dashboard
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: KUBERNETES_SERVICE_HOST
          value: "capsule-proxy.capsule-system.svc"
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: KUBERNETES_SERVICE_PORT
          value: "9001"

  # Adjust the CA certificate for the capsule-proxy according to your installation
  - target:
      version: v1
      kind: Deployment
      name: tekton-dashboard
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts
        value: []
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/-
        value:
          mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
          name: token-ca
      - op: add
        path: /spec/template/spec/volumes
        value: []
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: token-ca
          projected:
            sources:
              - serviceAccountToken:
                  expirationSeconds: 86400
                  path: token
              - secret:
                  name: capsule-proxy
                  items:
                    - key: ca
                      path: ca.crt

This patch assumes there’s a secret called capsule-proxy with the CA certificate for the Capsule Proxy URL.

Apply the given kustomization:

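Assuming the kustomization.yaml above sits in the current directory, it can be applied with:

kubectl apply -k .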

Tekton Operator

When using the Tekton Operator, you need to add the following to the TektonConfig:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  dashboard:
    readonly: false
    options:
      disabled: false
      deployments:
        tekton-dashboard:
          spec:
            template:
              spec:
                volumes:
                  - name: token-ca
                    projected:
                      sources:
                        - serviceAccountToken:
                            expirationSeconds: 86400
                            path: token
                        - secret:
                            name: capsule-proxy
                            items:
                              - key: ca
                                path: ca.crt
                containers:
                  - name: tekton-dashboard
                    volumeMounts:
                      - mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
                        name: token-ca
                    env:
                      - name: KUBERNETES_SERVICE_HOST
                        value: "capsule-proxy.capsule-system.svc"
                      - name: KUBERNETES_SERVICE_PORT
                        value: "9001"

See the options spec for reference.

1.8 - Teleport


1.9 - Velero