The Capsule project offers addons for certain common use-cases. These addons are provided as a way to extend the functionality of Capsule and are not part of the core Capsule project.
Tools
- 1: Kubernetes Dashboard
- 2: Kyverno
- 3: Lens
- 4: Monitoring
- 5: Rancher
- 6: Tekton
- 7: Teleport
1 - Kubernetes Dashboard
This guide works with the Kubernetes Dashboard v2.0.0 (chart 6.0.8). It has not yet been tested successfully with the v3.x version of the dashboard.
This guide describes how to integrate the Kubernetes Dashboard and Capsule Proxy with OIDC authorization.
OIDC Authentication
Your cluster must also be configured to use OIDC authentication for seamless Kubernetes RBAC integration. In such a scenario, you should have the following content in the kube-apiserver.yaml manifest:
spec:
containers:
- command:
- kube-apiserver
...
- --oidc-issuer-url=https://${OIDC_ISSUER}
- --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
- --oidc-client-id=${OIDC_CLIENT_ID}
- --oidc-username-claim=preferred_username
- --oidc-groups-claim=groups
- --oidc-username-prefix=-
Where ${OIDC_CLIENT_ID} refers to the client ID for which all tokens must be issued.
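Since the groups claim of the issued token becomes the user's Kubernetes groups (via --oidc-groups-claim=groups), a Capsule Tenant can reference an OIDC group directly as one of its owners. A minimal sketch, with illustrative tenant and group names:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil                 # illustrative tenant name
spec:
  owners:
    # an OIDC group taken from the token's 'groups' claim
    - name: oil-users
      kind: Group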
OAuth2 Proxy
To enable the proxy authorization from the Kubernetes dashboard to Keycloak, we need to use an OAuth proxy. In this article, we will use oauth2-proxy and install it as a pod in the Kubernetes Dashboard namespace. Alternatively, we can install oauth2-proxy in a different namespace or use it as a sidecar container in the Kubernetes Dashboard deployment.
Prepare the values for oauth2-proxy:
cat > values-oauth2-proxy.yaml <<EOF
config:
clientID: "${OIDC_CLIENT_ID}"
clientSecret: ${OIDC_CLIENT_SECRET}
extraArgs:
provider: "keycloak-oidc"
redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"
oidc-issuer-url: "https://${KEYCLOAK_URL}/auth/realms/${OIDC_CLIENT_ID}"
pass-access-token: true
set-authorization-header: true
pass-user-headers: true
ingress:
enabled: true
path: "/oauth2"
hosts:
- ${DASHBOARD_URL}
tls:
- hosts:
- ${DASHBOARD_URL}
EOF
More information about the keycloak-oidc provider can be found in the oauth2-proxy documentation. We're now ready to install oauth2-proxy:
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm install oauth2-proxy oauth2-proxy/oauth2-proxy -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-oauth2-proxy.yaml
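To verify that the proxy is up before moving on, you can wait for its rollout to complete (assuming the chart's default deployment name oauth2-proxy):

kubectl rollout status deployment/oauth2-proxy -n ${KUBERNETES_DASHBOARD_NAMESPACE}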
Configuring Keycloak
The Kubernetes cluster must be configured with a valid OIDC provider: for this guide, we take for granted that Keycloak is used. If you need more information, please follow the OIDC Authentication section above.
For the ${OIDC_CLIENT_ID} client we need to:
- Check Valid Redirect URIs: in the oauth2-proxy configuration we set redirect-url: "https://${DASHBOARD_URL}/oauth2/callback", so this path needs to be added to the Valid Redirect URIs.
- Create a mapper with Mapper Type ‘Group Membership’ and Token Claim Name ‘groups’.
- Create a mapper with Mapper Type ‘Audience’ and Included Client Audience and Included Custom Audience set to your client name (${OIDC_CLIENT_ID}).
Configuring Kubernetes Dashboard
If your Capsule Proxy uses HTTPS and the CA certificate is not the Kubernetes CA, you need to add a secret with the CA for the Capsule Proxy URL.
cat > ca.crt << EOF
-----BEGIN CERTIFICATE-----
...
...
...
-----END CERTIFICATE-----
EOF
kubectl create secret generic certificate --from-file=ca.crt=ca.crt -n ${KUBERNETES_DASHBOARD_NAMESPACE}
Prepare the values for the Kubernetes Dashboard:
cat > values-kubernetes-dashboard.yaml <<EOF
extraVolumes:
- name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: certificate
items:
- key: ca.crt
path: ca.crt
extraVolumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: token-ca
ingress:
enabled: true
annotations:
nginx.ingress.kubernetes.io/auth-signin: https://${DASHBOARD_URL}/oauth2/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://${DASHBOARD_URL}/oauth2/auth
nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
hosts:
- ${DASHBOARD_URL}
tls:
- hosts:
- ${DASHBOARD_URL}
extraEnv:
- name: KUBERNETES_SERVICE_HOST
value: '${CAPSULE_PROXY_URL}'
- name: KUBERNETES_SERVICE_PORT
value: '${CAPSULE_PROXY_PORT}'
EOF
To add the Certificate Authority for the Capsule Proxy URL, we use the volume token-ca to mount the ca.crt file. Additionally, we set the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to route requests to the Capsule Proxy.
Now you can install the Kubernetes Dashboard:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-kubernetes-dashboard.yaml
2 - Kyverno
Kyverno is a policy engine designed for Kubernetes. It provides the ability to validate, mutate, and generate Kubernetes resources using admission control. Kyverno policies are managed as Kubernetes resources and can be applied to a cluster using kubectl. Capsule integrates with Kyverno to provide a set of policies that can be used to improve the security and governance of the Kubernetes cluster.
References
Here are some policies for reference. We do not provide a complete list of policies, but these examples should get you started. These policies are not meant to be used in production; you may adopt the principles shown here to create your own policies.
Extract tenant based on namespace
To get the tenant name based on the namespace, you can use a context. With this context we resolve the tenant based on the {{request.namespace}} of the requested resource. The context calls the /api/v1/namespaces/ API with {{request.namespace}}. The jmesPath is used to check whether the tenant label is present. You can assign a default if nothing is found; in this case it's an empty string:
context:
- name: tenant_name
apiCall:
method: GET
urlPath: "/api/v1/namespaces/{{request.namespace}}"
jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || '')"
Select namespaces with label capsule.clastix.io/tenant
When you are writing a policy for namespaced objects, you can select the objects which are within a tenant namespace by using the namespaceSelector. In this example we select all Kustomization and HelmRelease resources which are within a tenant namespace:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: flux-policies
spec:
validationFailureAction: Enforce
rules:
# Enforcement (Mutate to Default)
- name: Defaults Kustomizations/HelmReleases
match:
any:
- resources:
kinds:
- Kustomization
- HelmRelease
operations:
- CREATE
- UPDATE
namespaceSelector:
matchExpressions:
- key: "capsule.clastix.io/tenant"
operator: Exists
mutate:
patchStrategicMerge:
spec:
+(targetNamespace): "{{ request.object.metadata.namespace }}"
+(serviceAccountName): "default"
Compare Source and Destination Tenant
With this policy we enforce that HelmReleases and Kustomizations within a tenant can only use a targetNamespace which is within the same tenant or is the namespace the resource is deployed in:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-compare
spec:
validationFailureAction: Enforce
background: true
rules:
- name: Validate HelmRelease/Kustomization Target Namespace
context:
# Get tenant based on target namespace
- name: destination_tenant
apiCall:
urlPath: "/api/v1/namespaces/{{request.object.spec.targetNamespace}}"
jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""
# Get tenant based on resource namespace
- name: source_tenant
apiCall:
urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
jmesPath: "metadata.labels.\"capsule.clastix.io/tenant\""
match:
any:
- resources:
kinds:
- HelmRelease
- Kustomization
operations:
- CREATE
- UPDATE
namespaceSelector:
matchExpressions:
- key: "capsule.clastix.io/tenant"
operator: Exists
preconditions:
all:
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
values: [ "{{request.object.metadata.namespace}}" ]
validate:
message: "spec.targetNamespace must be in the same tenant ({{source_tenant}})"
deny:
conditions:
- key: "{{source_tenant}}"
operator: NotEquals
value: "{{destination_tenant}}"
Using Global Configuration
When creating a lot of policies, you might want to abstract your configuration into a global configuration. This is a good practice to avoid duplication and to have a single source of truth. Also, if we introduce breaking changes (like changing the label name), we only have to change it in one place. Here is an example of a global configuration:
apiVersion: v1
kind: ConfigMap
metadata:
name: kyverno-global-config
namespace: kyverno-system
data:
# Label for public namespaces
public_identifier_label: "company.com/public"
# Value for Label for public namespaces
public_identifier_value: "yeet"
# Label which is used to select the tenant name
tenant_identifier_label: "capsule.clastix.io/tenant"
This configuration can be referenced via a context in your policies. Let's extend the above policy with the global configuration. Additionally, we would like to allow the usage of public namespaces:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-compare
spec:
validationFailureAction: Enforce
background: true
rules:
- name: Validate HelmRelease/Kustomization Target Namespace
context:
# Load Global Configuration
- name: global
configMap:
name: kyverno-global-config
namespace: kyverno-system
# Get all public namespaces based on the label and its value from the global configuration
- name: public_namespaces
apiCall:
urlPath: "/api/v1/namespaces"
jmesPath: "items[?metadata.labels.\"{{global.data.public_identifier_label}}\" == '{{global.data.public_identifier_value}}'].metadata.name | []"
# Get Tenant information from source namespace
# Defaults to a character, which can't be a label value
- name: source_tenant
apiCall:
urlPath: "/api/v1/namespaces/{{request.object.metadata.namespace}}"
jmesPath: "metadata.labels.\"{{global.data.tenant_identifier_label}}\" | '?'"
# Get Tenant information from destination namespace
# Returns Array with Tenant Name or Empty
- name: destination_tenant
apiCall:
urlPath: "/api/v1/namespaces"
jmesPath: "items[?metadata.name == '{{request.object.spec.targetNamespace}}'].metadata.labels.\"{{global.data.tenant_identifier_label}}\""
preconditions:
all:
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
values: [ "{{request.object.metadata.namespace}}" ]
any:
# Source is not Self-Reference
- key: "{{request.object.spec.targetNamespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"
# Source not in Public Namespaces
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
value: "{{public_namespaces}}"
# Source not in Destination
- key: "{{request.object.spec.targetNamespace}}"
operator: NotIn
value: "{{destination_tenant}}"
match:
any:
- resources:
kinds:
- HelmRelease
- Kustomization
operations:
- CREATE
- UPDATE
namespaceSelector:
matchExpressions:
- key: "capsule.clastix.io/tenant"
operator: Exists
validate:
message: "Can not use namespace {{request.object.spec.chart.spec.sourceRef.namespace}} as source reference!"
deny: {}
Extended Validation and Defaulting
Here are extended examples for validation and defaulting. The first policy validates the tenant name. The second policy defaults the tenant properties that you, as cluster administrator, would like to enforce for each tenant.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-core
spec:
validationFailureAction: Enforce
rules:
- name: tenant-name
match:
all:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
validate:
message: "Using this tenant name is not allowed."
deny:
conditions:
- key: "{{ request.object.metadata.name }}"
operator: In
value: ["default", "cluster-system" ]
- name: tenant-properties
match:
any:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
mutate:
patchesJson6902: |-
- op: add
path: "/spec/namespaceOptions/forbiddenLabels/deniedRegex"
value: ".*company.ch"
- op: add
path: "/spec/priorityClasses/matchLabels"
value:
consumer: "customer"
- op: add
path: "/spec/serviceOptions/allowedServices/nodePort"
value: false
- op: add
path: "/spec/ingressOptions/allowedClasses/matchLabels"
value:
consumer: "customer"
- op: add
path: "/spec/storageClasses/matchLabels"
value:
consumer: "customer"
- op: add
path: "/spec/nodeSelector"
value:
nodepool: "workers"
Adding Default Owners/Permissions to Tenant
Since the owners spec is a list, it's a bit trickier to add a default owner without causing recursions. You must make sure to check whether the value you are setting is already present; otherwise you will create a loop. Here is an example of a policy which adds cluster:admin as owner to a tenant:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: tenant-policy
spec:
validationFailureAction: Enforce
background: true
rules:
# With this policy for each tenant cluster:admin is added as owner.
# Only Append these on CREATE, otherwise they will be added per reconciliation and create a loop.
- name: tenant-owner
preconditions:
all:
- key: "cluster:admin"
operator: NotIn
value: "{{ request.object.spec.owners[?kind == 'Group'].name }}"
match:
all:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
mutate:
patchesJson6902: |-
- op: add
path: "/spec/owners/-"
value:
name: "cluster:admin"
kind: "Group"
# With this policy for each tenant a default ProxySettings are added.
# Completely overwrites the ProxySettings, if they are already present.
- name: tenant-proxy-settings
match:
any:
- resources:
kinds:
- "capsule.clastix.io/v1beta2/Tenant"
operations:
- CREATE
- UPDATE
mutate:
foreach:
- list: "request.object.spec.owners"
patchesJson6902: |-
- path: /spec/owners/{{elementIndex}}/proxySettings
op: add
value:
- kind: IngressClasses
operations:
- List
- kind: StorageClasses
operations:
- List
- kind: PriorityClasses
operations:
- List
- kind: Nodes
operations:
- List
3 - Lens
With the Capsule extension for Lens, a cluster administrator can easily manage, from a single pane of glass, all resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.
Features
Capsule extension for Lens provides these capabilities:
- List all tenants
- See tenant details and change them through the embedded Lens editor
- Check Resources Quota and Budget at both the tenant and namespace level
Please see the README for details about the installation of the Capsule Lens extension.
4 - Monitoring
While we cannot provide a full list of all the monitoring solutions available, we can provide some guidance on how to integrate Capsule with some of the most popular ones. This also depends on how you have set up your monitoring solution; we will just explore the options available to you.
Logging
Loki
Promtail
config:
clients:
- url: "https://loki.company.com/loki/api/v1/push"
# Maximum wait period before sending batch
batchwait: 1s
# Maximum batch size to accrue before sending, unit is byte
batchsize: 102400
# Maximum time to wait for server to respond to a request
timeout: 10s
backoff_config:
# Initial backoff time between retries
min_period: 100ms
# Maximum backoff time between retries
max_period: 5s
# Maximum number of retries when sending batches, 0 means infinite retries
max_retries: 20
tenant_id: "tenant"
external_labels:
cluster: "${cluster_name}"
serverPort: 3101
positions:
filename: /run/promtail/positions.yaml
target_config:
# Period to resync directories being watched and files being tailed
sync_period: 10s
snippets:
pipelineStages:
- docker: {}
# Drop health logs
- drop:
expression: "(.*/health-check.*)|(.*/health.*)|(.*kube-probe.*)"
- static_labels:
cluster: ${cluster}
- tenant:
source: tenant
# This won't work if pods on the cluster are not labeled with the tenant label
extraRelabelConfigs:
- action: replace
source_labels:
- __meta_kubernetes_pod_label_capsule_clastix_io_tenant
target_label: tenant
...
As mentioned, the above configuration will not work if the pods on the cluster are not labeled with the tenant label. You can use the following Kyverno policy to ensure that all pods are labeled with their tenant. If a pod does not belong to any tenant, it will be labeled with management (assuming you have a central management tenant).
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: capsule-pod-labels
spec:
background: false
rules:
- name: add-pod-label
context:
- name: tenant_name
apiCall:
method: GET
urlPath: "/api/v1/namespaces/{{request.namespace}}"
jmesPath: "not_null(metadata.labels.\"capsule.clastix.io/tenant\" || 'management')"
match:
all:
- resources:
kinds:
- Pod
operations:
- CREATE
- UPDATE
mutate:
patchStrategicMerge:
metadata:
labels:
+(capsule.clastix.io/tenant): "{{ tenant_name }}"
Grafana
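How you wire Grafana depends on your Loki setup. As a minimal sketch, assuming Loki runs in multi-tenant mode and the X-Scope-OrgID header selects the tenant (matching the tenant_id / tenant label produced by the Promtail configuration above), a per-tenant Loki datasource dropped into Grafana's provisioning/datasources directory could look like this (URL and tenant value are illustrative):

apiVersion: 1
datasources:
  - name: Loki (oil tenant)        # illustrative datasource name
    type: loki
    access: proxy
    url: https://loki.company.com
    jsonData:
      # Loki multi-tenancy: scope all queries to a single tenant
      httpHeaderName1: "X-Scope-OrgID"
    secureJsonData:
      httpHeaderValue1: "oil"      # must match the tenant label / tenant_id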
5 - Rancher
The integration between Rancher and Capsule aims to provide end-users with a multi-tenant Kubernetes service, enabling:
- a self-service approach
- access to cluster-wide resources
Tenant users will have the ability to access Kubernetes resources through:
- Rancher UI
- Rancher Shell
- Kubernetes CLI
On the other side, administrators need to manage the Kubernetes clusters through Rancher.
Rancher provides a feature called Projects to segregate resources inside a common domain. At the same time, Projects do not provide a way to segregate Kubernetes cluster-scoped resources.
Capsule, a project born to create a framework for multi-tenant platforms, integrates with Rancher Projects, enhancing the experience with Tenants.
Capsule allows tenant isolation and resource control in a declarative way, while enabling a self-service experience for tenants. With Capsule Proxy, users can also access cluster-wide resources, as configured by administrators at the Tenant custom resource level.
You can read in detail how the integration works and how to configure it in the following guides:
- How to integrate Rancher Projects with Capsule Tenants
- How to enable cluster-wide resources and Rancher shell access
Tenants and Projects
6 - Tekton
This guide describes how to integrate Tekton with Capsule and the Capsule Proxy, so that tenant users can work with the Tekton Dashboard within the boundaries of their own tenants.
Prerequisites
Tekton must already be installed on your cluster; if that's not the case, consult the Tekton documentation.
Cluster Scoped Permissions
Tekton Dashboard
Now, for the end-user experience, we are going to deploy the Tekton Dashboard. When using oauth2-proxy we can deploy one single dashboard, which can be used for all tenants. Refer to the oauth2-proxy setup described in the Kubernetes Dashboard guide above to set up the dashboard with oauth2-proxy.
Once that is done, we need to make small adjustments to the tekton-dashboard deployment.
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
patches:
# Adjust the service for the capsule-proxy according to your installation
# The used values are compatible with the default installation values
- target:
version: v1
kind: Deployment
name: tekton-dashboard
patch: |-
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: KUBERNETES_SERVICE_HOST
value: "capsule-proxy.capsule-system.svc"
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: KUBERNETES_SERVICE_PORT
value: "9001"
# Adjust the CA certificate for the capsule-proxy according to your installation
- target:
version: v1
kind: Deployment
name: tekton-dashboard
patch: |-
- op: add
path: /spec/template/spec/containers/0/volumeMounts
value: []
- op: add
path: /spec/template/spec/containers/0/volumeMounts/-
value:
mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
name: token-ca
- op: add
path: /spec/template/spec/volumes
value: []
- op: add
path: /spec/template/spec/volumes/-
value:
name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: capsule-proxy
items:
- key: ca
path: ca.crt
This patch assumes there's a secret called capsule-proxy with the CA certificate for the Capsule Proxy URL.
Apply the given kustomization:
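For example, assuming the kustomization.yaml above sits in your current working directory:

kubectl apply -k .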
Tekton Operator
When using the Tekton Operator, you need to add the following to the TektonConfig:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
dashboard:
readonly: false
options:
disabled: false
deployments:
tekton-dashboard:
spec:
template:
spec:
volumes:
- name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: capsule-proxy
items:
- key: ca
path: ca.crt
containers:
- name: tekton-dashboard
volumeMounts:
- mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
name: token-ca
env:
- name: KUBERNETES_SERVICE_HOST
value: "capsule-proxy.capsule-system.svc"
- name: KUBERNETES_SERVICE_PORT
value: "9001"
See the options spec for reference.
7 - Teleport