Proxy
Improve the User Experience with the capsule-proxy
Capsule Proxy is an add-on for the Capsule Operator that addresses an RBAC limitation when enabling multi-tenancy in Kubernetes: users cannot list the cluster-scoped resources they own. One solution to this problem would be to grant all users LIST permission on the relevant cluster-scoped resources (e.g. Namespaces). However, this would allow users to list all cluster-scoped resources, which is not desirable in a multi-tenant environment and may lead to security issues. Kubernetes RBAC cannot restrict a list to only the owned cluster-scoped resources, since there are no ACL-filtered APIs. For example:
Error from server (Forbidden): namespaces is forbidden:
User "alice" cannot list resource "namespaces" in API group "" at the cluster scope
As the error message reports, the RBAC list action is only available at cluster scope and is not granted to users without the appropriate permissions.
To overcome this problem, many Kubernetes distributions introduced mirrored custom resources backed by a custom set of ACL-filtered APIs. However, this radically changes the user experience of Kubernetes by introducing hard customizations that make it painful to move from one distribution to another.
With Capsule, we took a different approach. One of our key goals is to keep the same user experience across all distributions of Kubernetes: people should be able to use the standard tools they already know and love, and it should just work.
Integrations
Capsule Proxy is a strong addition to Capsule and works well with other CNCF Kubernetes-based solutions. It allows any user-facing solution that makes impersonated requests on behalf of users to proxy those requests and show the results correctly.
1 - ProxySettings
Configure proxy settings for your tenants
Primitives
Namespaces are treated specially: users can list the namespaces they own, but they cannot list all the namespaces in the cluster, and no additional selectors can be defined. Primitives are always evaluated in the context of a tenant. The proxy setting kind is an enum accepting the supported resources:
Enum | Description | Effective Operations
---|---|---
Tenant | Users are able to LIST this tenant | LIST
StorageClasses | Perform operations on the allowed StorageClasses for the tenant | LIST
Nodes | Based on the NodeSelector and the Scheduling Expressions, the matching nodes can be listed | LIST
IngressClasses | Perform operations on the allowed IngressClasses for the tenant | LIST
PriorityClasses | Perform operations on the allowed PriorityClasses for the tenant | LIST
RuntimeClasses | Perform operations on the allowed RuntimeClasses for the tenant | LIST
PersistentVolumes | Perform operations on the PersistentVolumes owned by the tenant | LIST
GatewayClasses | Perform operations on the allowed GatewayClasses for the tenant | LIST
Each resource kind can be granted with several verbs, such as List, Update, and Delete.
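As an illustrative sketch, a tenant owner can be granted proxy operations for a resource kind through the Tenant resource. The field names below assume the Capsule Tenant CRD (capsule.clastix.io/v1beta2); the tenant and user names are hypothetical:

```yaml
# Grants the owner "alice" the ability to LIST the StorageClasses
# allowed for the tenant "solar" through the capsule-proxy.
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar
spec:
  owners:
    - kind: User
      name: alice
      proxySettings:
        - kind: StorageClasses
          operations:
            - List
```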
Cluster Resources
This approach is for more generic cluster-scoped resources.
TBD
Proxy Settings
Tenants
The Capsule Proxy is a multi-tenant application: each tenant is a separate instance of the Capsule Proxy, identified by a unique tenantId in the URL.
2 - Installation
Installation guide for the capsule-proxy
Capsule Proxy is an optional add-on of the main Capsule Operator, so make sure you have a working instance of Capsule before attempting to install it. Use the capsule-proxy only if you want Tenant Owners to list their cluster-scoped resources.
The capsule-proxy can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the API server. Optionally, it can be deployed as a sidecar container in the backend of a dashboard.
Running outside a Kubernetes cluster is also viable, although a valid KUBECONFIG file must be provided, using the environment variable KUBECONFIG or the default file in $HOME/.kube/config.
A Helm Chart is available here.
Exposure
Depending on your environment, you can expose the capsule-proxy by:
Ingress
NodePort Service
LoadBalancer Service
HostPort
HostNetwork
Here is how it looks when exposed through an Ingress Controller:
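A minimal sketch of such an Ingress, assuming the default capsule-proxy Service in the capsule-system namespace listening on port 9001 and an ingress-nginx controller with SSL passthrough enabled (the hostname is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: capsule-proxy
  namespace: capsule-system
  annotations:
    # Pass TLS through so clients terminate it against the proxy's own certificate
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: kube.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: capsule-proxy
                port:
                  number: 9001
```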
Distribute CA within the Cluster
The capsule-proxy requires its CA certificate to be distributed to clients. By default, the CA certificate is stored in a Secret named capsule-proxy in the capsule-system namespace. In most cases, distributing this Secret is required for other clients within the cluster (e.g. the Tekton Dashboard). If you are using an Ingress or any other endpoint for all clients, this step is probably not required. Here's an example of how to distribute the CA certificate to the namespace tekton-pipelines by using kubectl and jq:
kubectl get secret capsule-proxy -n capsule-system -o json \
| jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
| kubectl apply -n tekton-pipelines -f -
This can be used for development purposes, but it's not recommended for production environments, where a dedicated mechanism should distribute the CA certificate instead.
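As one hedged example for production, a secret-replication operator such as kubernetes-replicator can copy the Secret automatically. The annotation below assumes that operator is installed and that tekton-pipelines is the target namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: capsule-proxy
  namespace: capsule-system
  annotations:
    # kubernetes-replicator: push a copy of this Secret to the listed namespaces
    replicator.v1.mittwald.de/replicate-to: "tekton-pipelines"
```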
3 - Controller Options
Configure the Capsule Proxy Controller
You can customize the Capsule Proxy with the following configuration options:
Flags
Feature Gates
Feature Gates are a set of key/value pairs that can be used to enable or disable certain features of the Capsule Proxy. The following feature gates are available:
Feature Gate | Default Value | Description
---|---|---
ProxyAllNamespaced | false | Proxies all namespaced objects. When enabled, it discovers APIs and ensures labels are set for resources in all tenant namespaces, which increases memory usage but improves the user experience.
SkipImpersonationReview | false | Skips the impersonation review for all requests carrying impersonation headers (user and groups). DANGER: enabling this flag allows any user to impersonate any user or group, essentially bypassing all authorization. Only use this option in trusted environments where authentication/authorization is offloaded to external systems.
ProxyClusterScoped | false | Proxies all cluster-scoped objects for all tenant users. The allowed resources can be defined via ProxySettings.
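Feature gates are typically passed as command-line arguments. The fragment below is a hypothetical Helm values sketch; the `--feature-gates` flag syntax and the `options.extraArgs` chart key are assumptions, so check them against your chart version:

```yaml
# Hypothetical values fragment enabling two of the gates listed above
options:
  extraArgs:
    - "--feature-gates=ProxyAllNamespaced=true,ProxyClusterScoped=true"
```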
4 - Integrations
Integrate Capsule with other platforms and solutions
4.1 - Kubernetes Dashboard
Capsule Integration with Kubernetes Dashboard
This guide works with the Kubernetes Dashboard v2.0.0 (Chart 6.0.8). It has not yet been tested successfully with the v3.x version of the dashboard.
This guide describes how to integrate the Kubernetes Dashboard and Capsule Proxy with OIDC authorization.
OIDC Authentication
Your cluster must also be configured to use OIDC authentication for seamless Kubernetes RBAC integration. In such a scenario, the kube-apiserver.yaml manifest should contain the following:
spec:
containers:
- command:
- kube-apiserver
...
- --oidc-issuer-url=https://${OIDC_ISSUER}
- --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
- --oidc-client-id=${OIDC_CLIENT_ID}
- --oidc-username-claim=preferred_username
- --oidc-groups-claim=groups
- --oidc-username-prefix=-
Where ${OIDC_CLIENT_ID} refers to the client ID for which all tokens must be issued.
OAuth2 Proxy
To enable the proxy authorization from the Kubernetes dashboard to Keycloak, we need to use an OAuth proxy. In this article, we will use oauth2-proxy and install it as a pod in the Kubernetes Dashboard namespace. Alternatively, we can install oauth2-proxy in a different namespace or use it as a sidecar container in the Kubernetes Dashboard deployment.
Prepare the values for oauth2-proxy:
cat > values-oauth2-proxy.yaml <<EOF
config:
clientID: "${OIDC_CLIENT_ID}"
clientSecret: ${OIDC_CLIENT_SECRET}
extraArgs:
provider: "keycloak-oidc"
redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"
oidc-issuer-url: "https://${KEYCLOAK_URL}/auth/realms/${OIDC_CLIENT_ID}"
pass-access-token: true
set-authorization-header: true
pass-user-headers: true
ingress:
enabled: true
path: "/oauth2"
hosts:
- ${DASHBOARD_URL}
tls:
- hosts:
- ${DASHBOARD_URL}
EOF
More information about the keycloak-oidc provider can be found on the oauth2-proxy documentation. We’re ready to install the oauth2-proxy:
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm install oauth2-proxy oauth2-proxy/oauth2-proxy -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-oauth2-proxy.yaml
Configuring Keycloak
The Kubernetes cluster must be configured with a valid OIDC provider: for this guide, we assume Keycloak is used. If you need more info, please follow the OIDC Authentication section.
For this client we need:
- Check Valid Redirect URIs: in the oauth2-proxy configuration we set redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"; this path must be added to the Valid Redirect URIs.
- Create a mapper with Mapper Type 'Group Membership' and Token Claim Name 'groups'.
- Create a mapper with Mapper Type 'Audience' and both Included Client Audience and Included Custom Audience set to your client name (${OIDC_CLIENT_ID}).
Configuring Kubernetes Dashboard
If your Capsule Proxy uses HTTPS and the CA certificate is not the Kubernetes CA, you need to add a secret with the CA for the Capsule Proxy URL.
cat > ca.crt << EOF
-----BEGIN CERTIFICATE-----
...
...
...
-----END CERTIFICATE-----
EOF
kubectl create secret generic certificate --from-file=ca.crt=ca.crt -n ${KUBERNETES_DASHBOARD_NAMESPACE}
Prepare the values for the Kubernetes Dashboard:
cat > values-kubernetes-dashboard.yaml <<EOF
extraVolumes:
- name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: certificate
items:
- key: ca.crt
path: ca.crt
extraVolumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: token-ca
ingress:
enabled: true
annotations:
nginx.ingress.kubernetes.io/auth-signin: https://${DASHBOARD_URL}/oauth2/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://${DASHBOARD_URL}/oauth2/auth
nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
hosts:
- ${DASHBOARD_URL}
tls:
- hosts:
- ${DASHBOARD_URL}
extraEnv:
- name: KUBERNETES_SERVICE_HOST
value: '${CAPSULE_PROXY_URL}'
- name: KUBERNETES_SERVICE_PORT
value: '${CAPSULE_PROXY_PORT}'
EOF
To add the Certificate Authority for the Capsule Proxy URL, we use the volume token-ca to mount the ca.crt file. Additionally, we set the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to route requests to the Capsule Proxy.
Now you can install the Kubernetes Dashboard:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-kubernetes-dashboard.yaml
4.2 - Lens
With Capsule extension for Lens, a cluster administrator can easily manage from a single pane of glass all resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.
Features
Capsule extension for Lens provides these capabilities:
- List all tenants
- See tenant details and change them through the embedded Lens editor
- Check Resources Quota and Budget at both the tenant and namespace level
Please, see the README for details about the installation of the Capsule Lens Extension.
4.3 - Tekton
This guide describes how to integrate the Tekton Dashboard with the Capsule Proxy, so that a single dashboard can serve all tenants while each tenant user only sees their own resources.
Prerequisites
Tekton must already be installed on your cluster; if that's not the case, consult the documentation here:
Cluster Scoped Permissions
Tekton Dashboard
Now, for the end-user experience, we are going to deploy the Tekton Dashboard. When using oauth2-proxy, we can deploy one single dashboard that serves all tenants. Refer to the following guide to set up the dashboard with the oauth2-proxy:
Once that is done, we need to make small adjustments to the tekton-dashboard deployment.
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
patches:
# Adjust the service for the capsule-proxy according to your installation
# The used values are compatible with the default installation values
- target:
version: v1
kind: Deployment
name: tekton-dashboard
patch: |-
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: KUBERNETES_SERVICE_HOST
value: "capsule-proxy.capsule-system.svc"
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: KUBERNETES_SERVICE_PORT
value: "9001"
# Adjust the CA certificate for the capsule-proxy according to your installation
- target:
version: v1
kind: Deployment
name: tekton-dashboard
patch: |-
- op: add
path: /spec/template/spec/containers/0/volumeMounts
value: []
- op: add
path: /spec/template/spec/containers/0/volumeMounts/-
value:
mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
name: token-ca
- op: add
path: /spec/template/spec/volumes
value: []
- op: add
path: /spec/template/spec/volumes/-
value:
name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: capsule-proxy
items:
- key: ca
path: ca.crt
This patch assumes there's a secret called capsule-proxy with the CA certificate for the Capsule Proxy URL.
Apply the given kustomization:
kubectl apply -k .
Tekton Operator
When using the Tekton Operator, you need to add the following to the TektonConfig:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
name: config
spec:
dashboard:
readonly: false
options:
disabled: false
deployments:
tekton-dashboard:
spec:
template:
spec:
volumes:
- name: token-ca
projected:
sources:
- serviceAccountToken:
expirationSeconds: 86400
path: token
- secret:
name: capsule-proxy
items:
- key: ca
path: ca.crt
containers:
- name: tekton-dashboard
volumeMounts:
- mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
name: token-ca
env:
- name: KUBERNETES_SERVICE_HOST
value: "capsule-proxy.capsule-system.svc"
- name: KUBERNETES_SERVICE_PORT
value: "9001"
See the options spec for reference.
4.4 - Teleport