Ecosystem
All entries on this page were added by people who worked on them and who thereby self-identified as being part of the Project Capsule Ecosystem.
Integrations
Capsule works well with other CNCF Kubernetes-based solutions. Below are the ones we have documented; thanks to Capsule's Kubernetes-native approach, it can work with virtually any solution:
Addons
Addons are separate projects that interact with the core Capsule project. Since our commitment is to keep the core API stable, we decided to push towards an addon-based ecosystem. If you have a new addon that interacts with the Capsule core project, consider adding it here.
Capsule Proxy
core
This addon allows Tenant users to access cluster-wide resources (such as Namespaces, Nodes, Storage Classes, Ingress Classes and Priority Classes) scoped to their Tenant, as configured by administrators at the Tenant level.
ArgoCD Addon
community
gitops
This addon is designed for Kubernetes administrators to automatically translate their existing Capsule Tenants into Argo AppProjects.
Flux Addon
community
gitops
This addon enables Tenants to manage their resources, including creating Namespaces, while respecting the [Flux multi-tenancy lockdown](https://fluxcd.io/flux/installation/configuration/multitenancy/).
1 - Integrations
Integrate Capsule with other platforms and solutions
1.1 - Managed Kubernetes
Capsule on managed Kubernetes offerings
Capsule Operator can be easily installed on a Managed Kubernetes Service. Since you do not have access to the Kubernetes API Server, you should check with the provider of the service that:
the default cluster-admin ClusterRole is accessible
the following Admission Controllers are enabled on the API Server:
PodNodeSelector
LimitRanger
ResourceQuota
MutatingAdmissionWebhook
ValidatingAdmissionWebhook
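The admission controller list usually has to be confirmed with the provider, since the API Server flags are not visible on a managed service. As a quick client-side sanity check (a sketch, assuming you already have an administrator kubeconfig for the managed cluster), you can at least verify that the default cluster-admin ClusterRole exists and that your identity can use it:
# Verify the cluster-admin ClusterRole is present and usable by your identity
kubectl get clusterrole cluster-admin
kubectl auth can-i '*' '*' --all-namespaces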
AWS EKS
This is an example of how to install an AWS EKS cluster and one user managed by Capsule. It is based on Using IAM Groups to manage Kubernetes access.
Create EKS cluster:
export AWS_DEFAULT_REGION="eu-west-1"
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxx"
eksctl create cluster \
--name=test-k8s \
--managed \
--node-type=t3.small \
--node-volume-size=20 \
--kubeconfig=kubeconfig.conf
Create the AWS user alice using CloudFormation, then create the AWS access files and kubeconfig for that user:
cat > cf.yml << EOF
Parameters:
ClusterName:
Type: String
Resources:
UserAlice:
Type: AWS::IAM::User
Properties:
UserName: !Sub "alice-${ClusterName}"
Policies:
- PolicyName: !Sub "alice-${ClusterName}-policy"
PolicyDocument:
Version: "2012-10-17"
Statement:
- Sid: AllowAssumeOrganizationAccountRole
Effect: Allow
Action: sts:AssumeRole
Resource: !GetAtt RoleAlice.Arn
AccessKeyAlice:
Type: AWS::IAM::AccessKey
Properties:
UserName: !Ref UserAlice
RoleAlice:
Type: AWS::IAM::Role
Properties:
Description: !Sub "IAM role for the alice-${ClusterName} user"
RoleName: !Sub "alice-${ClusterName}"
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
Action: sts:AssumeRole
Outputs:
RoleAliceArn:
Description: The ARN of the Alice IAM Role
Value: !GetAtt RoleAlice.Arn
Export:
Name:
Fn::Sub: "${AWS::StackName}-RoleAliceArn"
AccessKeyAlice:
Description: The AccessKey for Alice user
Value: !Ref AccessKeyAlice
Export:
Name:
Fn::Sub: "${AWS::StackName}-AccessKeyAlice"
SecretAccessKeyAlice:
Description: The SecretAccessKey for Alice user
Value: !GetAtt AccessKeyAlice.SecretAccessKey
Export:
Name:
Fn::Sub: "${AWS::StackName}-SecretAccessKeyAlice"
EOF
eval aws cloudformation deploy --capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides "ClusterName=test-k8s" \
--stack-name "test-k8s-users" --template-file cf.yml
AWS_CLOUDFORMATION_DETAILS=$(aws cloudformation describe-stacks --stack-name "test-k8s-users")
ALICE_ROLE_ARN=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"RoleAliceArn\") .OutputValue")
ALICE_USER_ACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"AccessKeyAlice\") .OutputValue")
ALICE_USER_SECRETACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"SecretAccessKeyAlice\") .OutputValue")
eksctl create iamidentitymapping --cluster="test-k8s" --arn="${ALICE_ROLE_ARN}" --username alice --group capsule.clastix.io
cat > aws_config << EOF
[profile alice]
role_arn=${ALICE_ROLE_ARN}
source_profile=alice
EOF
cat > aws_credentials << EOF
[alice]
aws_access_key_id=${ALICE_USER_ACCESSKEY}
aws_secret_access_key=${ALICE_USER_SECRETACCESSKEY}
EOF
eksctl utils write-kubeconfig --cluster=test-k8s --kubeconfig="kubeconfig-alice.conf"
cat >> kubeconfig-alice.conf << EOF
- name: AWS_PROFILE
value: alice
- name: AWS_CONFIG_FILE
value: aws_config
- name: AWS_SHARED_CREDENTIALS_FILE
value: aws_credentials
EOF
Export “admin” kubeconfig to be able to install Capsule:
export KUBECONFIG=kubeconfig.conf
Install Capsule and create a tenant where alice has ownership. Use the default Tenant example:
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/samples/capsule_v1beta1_tenant.yaml
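If Capsule is not installed yet, one way to install it is via its Helm chart (a sketch; the chart repository and values may differ for your Capsule version):
helm repo add clastix https://clastix.github.io/charts
helm install capsule clastix/capsule -n capsule-system --create-namespace
A minimal Tenant granting ownership to alice looks like the following (the tenant name oil is illustrative; the linked sample may contain additional fields):
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User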
Based on the tenant configuration above, the user alice should be able to create namespaces. Switch to a new terminal and try to create a namespace as the user alice:
# Unset AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if defined
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
kubectl create namespace test --kubeconfig="kubeconfig-alice.conf"
Azure AKS
This reference implementation introduces the recommended starting (baseline) infrastructure architecture for implementing a multi-tenancy Azure AKS cluster using Capsule. See CoAKS.
Charmed Kubernetes
Canonical Charmed Kubernetes is a Kubernetes distribution coming with out-of-the-box tools that support deployments and operational management and make microservice development easier. Combined with Capsule, Charmed Kubernetes allows users to further reduce the operational overhead of Kubernetes setup and management.
The Charm package for Capsule is available to Charmed Kubernetes users via Charmhub.io.
1.2 - Kubernetes Dashboard
Capsule Integration with Kubernetes Dashboard
This guide works with the Kubernetes Dashboard v2.0.0 (Chart 6.0.8). It has not yet been tested successfully with the v3.x version of the dashboard.
This guide describes how to integrate the Kubernetes Dashboard and Capsule Proxy with OIDC authorization.
OIDC Authentication
Your cluster must also be configured to use OIDC Authentication for seamless Kubernetes RBAC integration. In such a scenario, you should have the following content in the kube-apiserver.yaml manifest:
spec:
containers:
- command:
- kube-apiserver
...
- --oidc-issuer-url=https://${OIDC_ISSUER}
- --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
- --oidc-client-id=${OIDC_CLIENT_ID}
- --oidc-username-claim=preferred_username
- --oidc-groups-claim=groups
- --oidc-username-prefix=-
Where ${OIDC_CLIENT_ID} refers to the client ID that all tokens must be issued for. The Keycloak client configuration required for this client is described in the Configuring Keycloak section below.
OAuth2 Proxy
To enable the proxy authorization from the Kubernetes dashboard to Keycloak, we need to use an OAuth proxy. In this article, we will use oauth2-proxy and install it as a pod in the Kubernetes Dashboard namespace. Alternatively, we can install oauth2-proxy in a different namespace or use it as a sidecar container in the Kubernetes Dashboard deployment.
Prepare the values for oauth2-proxy:
cat > values-oauth2-proxy.yaml <<EOF
config:
clientID: "${OIDC_CLIENT_ID}"
clientSecret: ${OIDC_CLIENT_SECRET}
extraArgs:
provider: "keycloak-oidc"
redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"
oidc-issuer-url: "https://${KEYCLOAK_URL}/auth/realms/${OIDC_CLIENT_ID}"
pass-access-token: true
set-authorization-header: true
pass-user-headers: true
ingress:
enabled: true
path: "/oauth2"
hosts:
- ${DASHBOARD_URL}
tls:
- hosts:
- ${DASHBOARD_URL}
EOF
More information about the keycloak-oidc provider can be found on the oauth2-proxy documentation. We’re ready to install the oauth2-proxy:
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm install oauth2-proxy oauth2-proxy/oauth2-proxy -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-oauth2-proxy.yaml
Configuring Keycloak
The Kubernetes cluster must be configured with a valid OIDC provider: for this guide, we assume Keycloak is used. If you need more information, please refer to the OIDC Authentication section above.
For the ${OIDC_CLIENT_ID} client we need to:
- Check Valid Redirect URIs: in the oauth2-proxy configuration we set redirect-url: "https://${DASHBOARD_URL}/oauth2/callback", so this path needs to be added to the Valid Redirect URIs.
- Create a mapper with Mapper Type 'Group Membership' and Token Claim Name 'groups'.
- Create a mapper with Mapper Type 'Audience' and Included Client Audience and Included Custom Audience set to your client name (${OIDC_CLIENT_ID}).
Configuring Kubernetes Dashboard
If your Capsule Proxy uses HTTPS and the CA certificate is not the Kubernetes CA, you need to add a secret with the CA for the Capsule Proxy URL.
cat > ca.crt << EOF
-----BEGIN CERTIFICATE-----
...
...
...
-----END CERTIFICATE-----
EOF
kubectl create secret generic certificate --from-file=ca.crt=ca.crt -n ${KUBERNETES_DASHBOARD_NAMESPACE}
Prepare the values for the Kubernetes Dashboard:
cat > values-kubernetes-dashboard.yaml <<EOF
extraVolumes:
- name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: certificate
items:
- key: ca.crt
path: ca.crt
extraVolumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: token-ca
ingress:
enabled: true
annotations:
nginx.ingress.kubernetes.io/auth-signin: https://${DASHBOARD_URL}/oauth2/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://${DASHBOARD_URL}/oauth2/auth
nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
hosts:
- ${DASHBOARD_URL}
tls:
- hosts:
- ${DASHBOARD_URL}
extraEnv:
- name: KUBERNETES_SERVICE_HOST
value: '${CAPSULE_PROXY_URL}'
- name: KUBERNETES_SERVICE_PORT
value: '${CAPSULE_PROXY_PORT}'
EOF
To add the Certificate Authority for the Capsule Proxy URL, we use the volume token-ca to mount the ca.crt file. Additionally, we set the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to route requests to the Capsule Proxy.
Now you can install the Kubernetes Dashboard:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-kubernetes-dashboard.yaml
1.3 - Kyverno
Kyverno is a policy engine designed for Kubernetes. It provides the ability to validate, mutate, and generate Kubernetes resources using admission control. Kyverno policies are managed as Kubernetes resources and can be applied to a cluster using kubectl. Capsule integrates with Kyverno to provide a set of policies that can be used to improve the security and governance of the Kubernetes cluster.
References
Here are some policies for reference. We do not provide a complete list of policies, but we provide some examples to get you started. These policies are not meant to be used in production; you may adopt the principles shown here to create your own policies.
To get the tenant name based on the namespace, you can use a context. With this context we resolve the tenant based on the {{request.namespace}} of the requested resource. The context calls the /api/v1/namespaces/ API with the {{request.namespace}}. The jmesPath is used to check if the tenant label is present. You could assign a default if nothing was found; in this case it's an empty string:
context:
- name: tenant_name
apiCall:
method: GET
urlPath: "/api/v1/namespaces/{{request.namespace}}"
jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || '')"
Select namespaces with label capsule.clastix.io/tenant
When you are performing a policy on namespaced objects, you can select the objects which are within a tenant namespace by using the namespaceSelector. In this example we select all Kustomization and HelmRelease resources which are within a tenant namespace:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: flux-policies
spec:
validationFailureAction: Enforce
rules:
# Enforcement (Mutate to Default)
- name: Defaults Kustomizations/HelmReleases
match:
any:
- resources:
kinds:
- Kustomization
- HelmRelease
operations:
- CREATE
- UPDATE
namespaceSelector:
matchExpressions:
- key: "capsule.clastix.io/tenant"
operator: Exists
mutate:
patchStrategicMerge:
spec:
+(targetNamespace): "{{ request.object.metadata.namespace }}"
+(serviceAccountName): "default"
Compare Source and Destination Tenant
With this policy we try to enforce that HelmRelease and Kustomization resources within a tenant can only use a targetNamespace which is within the same tenant or is the namespace the resource is deployed in:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-compare
spec:
validationFailureAction: Enforce
background: true
rules:
- name: Validate HelmRelease/Kustomization Target Namespace
context:
# Get tenant based on target namespace
- name: destination_tenant
apiCall:
urlPath: "/api/v1/namespaces/{{request.object.spec.targetNamespace}}"
jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""
# Get tenant based on resource namespace
- name: source_tenant
apiCall:
urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""
match:
any:
- resources:
kinds:
- HelmRelease
- Kustomization
operations:
- CREATE
- UPDATE
namespaceSelector:
matchExpressions:
- key: "capsule.clastix.io/tenant"
operator: Exists
preconditions:
all:
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
values: [ "{{request.object.metadata.namespace}}" ]
validate:
message: "spec.targetNamespace must be in the same tenant ({{source_tenant}})"
deny:
conditions:
- key: "{{source_tenant}}"
operator: NotEquals
value: "{{destination_tenant}}"
Using Global Configuration
When creating a lot of policies, you might want to abstract your configuration into a global configuration. This is a good practice to avoid duplication and to have a single source of truth. Also, if we introduce breaking changes (like changing the label name), we only have to change them in one place. Here is an example of a global configuration:
apiVersion: v1
kind: ConfigMap
metadata:
name: kyverno-global-config
namespace: kyverno-system
data:
# Label for public namespaces
public_identifier_label: "company.com/public"
# Value for Label for public namespaces
public_identifier_value: "yeet"
# Label which is used to select the tenant name
tenant_identifier_label: "capsule.clastix.io/tenant"
This configuration can be referenced via context in your policies. Let’s extend the above policy with the global configuration. Additionally we would like to allow the usage of public namespaces:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-compare
spec:
validationFailureAction: Enforce
background: true
rules:
- name: Validate HelmRelease/Kustomization Target Namespace
context:
# Load Global Configuration
- name: global
configMap:
name: kyverno-global-config
namespace: kyverno-system
# Get all public namespaces based on the label and its value from the global configuration
- name: public_namespaces
apiCall:
urlPath: "/api/v1/namespaces"
jmesPath: "items[?metadata.labels.\"{{global.data.public_identifier_label}}\" == '{{global.data.public_identifier_value}}'].metadata.name | []"
# Get Tenant information from source namespace
# Defaults to a character, which can't be a label value
- name: source_tenant
apiCall:
urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
jmesPath: "metadata.labels.\"{{global.data.tenant_identifier_label}}\" | '?'"
# Get Tenant information from destination namespace
# Returns Array with Tenant Name or Empty
- name: destination_tenant
apiCall:
urlPath: "/api/v1/namespaces"
jmesPath: "items[?metadata.name == '{{request.object.spec.targetNamespace}}'].metadata.labels.\"{{global.data.tenant_identifier_label}}\""
preconditions:
all:
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
values: [ "{{request.object.metadata.namespace}}" ]
any:
# Source is not Self-Reference
- key: "{{request.object.spec.targetNamespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"
# Source not in Public Namespaces
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
value: "{{public_namespaces}}"
# Source not in Destination
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
value: "{{destination_tenant}}"
match:
any:
- resources:
kinds:
- HelmRelease
- Kustomization
operations:
- CREATE
- UPDATE
namespaceSelector:
matchExpressions:
- key: "capsule.clastix.io/tenant"
operator: Exists
validate:
message: "Can not use namespace {{request.object.spec.chart.spec.sourceRef.namespace}} as source reference!"
deny: {}
Extended Validation and Defaulting
Here are extended examples of using validation and defaulting. The first policy is used to validate the tenant name. The second policy is used to default the tenant properties that you, as cluster administrator, would like to enforce for each tenant.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-core
spec:
validationFailureAction: Enforce
rules:
- name: tenant-name
match:
all:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
validate:
message: "Using this tenant name is not allowed."
deny:
conditions:
- key: "{{ request.object.metadata.name }}"
operator: In
value: ["default", "cluster-system" ]
- name: tenant-properties
match:
any:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
mutate:
patchesJson6902: |-
- op: add
path: "/spec/namespaceOptions/forbiddenLabels/deniedRegex"
value: ".*company.ch"
- op: add
path: "/spec/priorityClasses/matchLabels"
value:
consumer: "customer"
- op: add
path: "/spec/serviceOptions/allowedServices/nodePort"
value: false
- op: add
path: "/spec/ingressOptions/allowedClasses/matchLabels"
value:
consumer: "customer"
- op: add
path: "/spec/storageClasses/matchLabels"
value:
consumer: "customer"
- op: add
path: "/spec/nodeSelector"
value:
nodepool: "workers"
Adding Default Owners/Permissions to Tenant
Since the Owners Spec is a list, it’s a bit more trickier to add a default owner without causing recursions. You must make sure, to validate if the value you are setting is already present. Otherwise you will create a loop. Here is an example of a policy, which adds the cluster:admin
as owner to a tenant:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-policy
spec:
validationFailureAction: Enforce
background: true
rules:
# With this policy for each tenant cluster:admin is added as owner.
# Only Append these on CREATE, otherwise they will be added per reconciliation and create a loop.
- name: tenant-owner
preconditions:
all:
- key: "cluster:admin"
operator: NotIn
value: "{{ request.object.spec.owners[?kind == 'Group'].name }}"
match:
all:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
mutate:
patchesJson6902: |-
- op: add
path: "/spec/owners/-"
value:
name: "cluster:admin"
kind: "Group"
# With this policy for each tenant a default ProxySettings are added.
# Completely overwrites the ProxySettings, if they are already present.
- name: tenant-proxy-settings
match:
any:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
mutate:
foreach:
- list: "request.object.spec.owners"
patchesJson6902: |-
- path: /spec/owners/{{elementIndex}}/proxySettings
op: add
value:
- kind: IngressClasses
operations:
- List
- kind: StorageClasses
operations:
- List
- kind: PriorityClasses
operations:
- List
- kind: Nodes
operations:
- List
1.4 - Lens
With Capsule extension for Lens, a cluster administrator can easily manage from a single pane of glass all resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.
Features
Capsule extension for Lens provides these capabilities:
- List all tenants
- See tenant details and change them through the embedded Lens editor
- Check Resources Quota and Budget at both the tenant and namespace level
Please, see the README for details about the installation of the Capsule Lens Extension.
1.5 - Monitoring
While we cannot provide a full list of all the monitoring solutions available, we can give some guidance on how to integrate Capsule with some of the most popular ones. This also depends on how you have set up your monitoring solution; here we just explore the options available to you.
Logging
Loki
Promtail
config:
clients:
- url: "https://loki.company.com/loki/api/v1/push"
# Maximum wait period before sending batch
batchwait: 1s
# Maximum batch size to accrue before sending, unit is byte
batchsize: 102400
# Maximum time to wait for server to respond to a request
timeout: 10s
backoff_config:
# Initial backoff time between retries
min_period: 100ms
# Maximum backoff time between retries
max_period: 5s
# Maximum number of retries when sending batches, 0 means infinite retries
max_retries: 20
tenant_id: "tenant"
external_labels:
cluster: "${cluster_name}"
serverPort: 3101
positions:
filename: /run/promtail/positions.yaml
target_config:
# Period to resync directories being watched and files being tailed
sync_period: 10s
snippets:
pipelineStages:
- docker: {}
# Drop health logs
- drop:
expression: "(.*/health-check.*)|(.*/health.*)|(.*kube-probe.*)"
- static_labels:
cluster: ${cluster}
- tenant:
source: tenant
# This won't work if pods on the cluster are not labeled with tenant
extraRelabelConfigs:
- action: replace
source_labels:
- __meta_kubernetes_pod_label_capsule_clastix_io_tenant
target_label: tenant
...
As mentioned, the above configuration will not work if the pods on the cluster are not labeled with the tenant label. You can use the following Kyverno policy to ensure that all pods are labeled with their tenant. If a pod does not belong to any tenant, it will be labeled with management (assuming you have a central management tenant):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: capsule-pod-labels
spec:
background: false
rules:
- name: add-pod-label
context:
- name: tenant_name
apiCall:
method: GET
urlPath: "/api/v1/namespaces/{{request.namespace}}"
jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || 'management')"
match:
all:
- resources:
kinds:
- Pod
operations:
- CREATE
- UPDATE
mutate:
patchStrategicMerge:
metadata:
labels:
+(capsule.clastix.io/tenant): "{{ tenant_name }}"
Grafana
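On the Grafana side, a sketch of what a per-tenant setup could look like when Loki multi-tenancy is used as above: one provisioned Loki datasource per tenant that sends the tenant ID via the X-Scope-OrgID header (the tenant ID is assumed here to match the Capsule tenant name, e.g. oil; the datasource name and Loki URL are assumptions):
apiVersion: 1
datasources:
  # One datasource per Capsule tenant, scoping queries via the Loki tenant header
  - name: loki-tenant-oil
    type: loki
    access: proxy
    url: https://loki.company.com
    jsonData:
      httpHeaderName1: "X-Scope-OrgID"
    secureJsonData:
      httpHeaderValue1: "oil"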
1.6 - Rancher
The integration between Rancher and Capsule aims to provide a multi-tenant Kubernetes service to users, enabling:
- a self-service approach
- access to cluster-wide resources
to end-users.
Tenant users will have the ability to access Kubernetes resources through:
- Rancher UI
- Rancher Shell
- Kubernetes CLI
On the other side, administrators need to manage the Kubernetes clusters through Rancher.
Rancher provides a feature called Projects to segregate resources inside a common domain. At the same time, Projects don't provide a way to segregate Kubernetes cluster-scoped resources.
Capsule, a project born to provide a framework for multi-tenant platforms, integrates with Rancher Projects, enhancing the experience with Tenants.
Capsule allows tenant isolation and resource control in a declarative way, while enabling a self-service experience for tenants. With Capsule Proxy, users can also access cluster-wide resources, as configured by administrators at the Tenant custom resource level.
You can read in detail how the integration works and how to configure it in the following guides:
How to integrate Rancher Projects with Capsule Tenants
How to enable cluster-wide resources and Rancher shell access.
Tenants and Projects
1.7 - Tekton
This guide describes how to integrate Tekton with Capsule and the Capsule Proxy, so that tenant users can work with Tekton and its dashboard within the boundaries of their Tenants.
Prerequisites
Tekton must already be installed on your cluster; if that's not the case, consult the documentation here:
Cluster Scoped Permissions
Tekton Dashboard
Now, for the end-user experience, we are going to deploy the Tekton Dashboard. When using oauth2-proxy we can deploy one single dashboard, which can be used for all tenants. Refer to the following guide to set up the dashboard with oauth2-proxy:
Once that is done, we need to make small adjustments to the tekton-dashboard deployment.
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
patches:
# Adjust the service for the capsule-proxy according to your installation
# The used values are compatible with the default installation values
- target:
version: v1
kind: Deployment
name: tekton-dashboard
patch: |-
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: KUBERNETES_SERVICE_HOST
value: "capsule-proxy.capsule-system.svc"
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: KUBERNETES_SERVICE_PORT
value: "9001"
# Adjust the CA certificate for the capsule-proxy according to your installation
- target:
version: v1
kind: Deployment
name: tekton-dashboard
patch: |-
- op: add
path: /spec/template/spec/containers/0/volumeMounts
value: []
- op: add
path: /spec/template/spec/containers/0/volumeMounts/-
value:
mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
name: token-ca
- op: add
path: /spec/template/spec/volumes
value: []
- op: add
path: /spec/template/spec/volumes/-
value:
name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: capsule-proxy
items:
- key: ca
path: ca.crt
This patch assumes there's a secret called capsule-proxy with the CA certificate for the Capsule Proxy URL.
Apply the given kustomization:
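Assuming the kustomization.yaml above is saved in the current working directory, it can be applied with:
# Build and apply the kustomization from the current directory
kubectl apply -k .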
Tekton Operator
When using the Tekton Operator, you need to add the following to the TektonConfig:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
dashboard:
readonly: false
options:
disabled: false
deployments:
tekton-dashboard:
spec:
template:
spec:
volumes:
- name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: capsule-proxy
items:
- key: ca
path: ca.crt
containers:
- name: tekton-dashboard
volumeMounts:
- mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
name: token-ca
env:
- name: KUBERNETES_SERVICE_HOST
value: "capsule-proxy.capsule-system.svc"
- name: KUBERNETES_SERVICE_PORT
value: "9001"
See the options spec for reference.
1.8 - Teleport
1.9 - Velero