Easing the pain of switching from OpenShift to vanilla Kubernetes: setting up openshift-console with SSO support
In our organization, like many others, we are migrating to domestically developed products, and this has affected our container platform as well. Over the years of operation we grew fond of OKD (OpenShift), and in vanilla Kubernetes we were disappointed to find that many familiar things were missing. However, OKD consists of freely distributed components, which means some of them can be reused, for example the web console. We decided to port it with as much of its functionality as possible. Existing guides usually cover only the installation of the console itself, but we also wanted SSO and the additional console elements: a directory of links and announcements in the header.
So, we need:
Certificates for serving the web console
An OIDC provider, in our case Keycloak
A Kubernetes cluster
The openshift-console image from quay.io
The repository with the web console CRDs
The OpenShift CLI (oc); optional, but without it you will have to adapt the CLI commands yourself
1. Setting up the client in the OIDC provider
Create a Keycloak client with the following settings:
Client ID: kubernetes
Root URL: <console URL>
Valid redirect URIs: /*
Client authentication: on
Save the client.
Inside the client, on the Client scopes tab, add an audience mapper:
Go to the kubernetes-dedicated scope (or <client ID>-dedicated if your client ID is different) and click Configure a new mapper.
Select the Audience type.
Enter a name and set Included Client Audience to kubernetes (the client ID).
Add to ID token: on
Leave the remaining parameters at their defaults and click Save.
In the global Client scopes section (in the left-hand menu) add a groups scope:
Name: groups
Type: Default
Go to the Mappers tab, click Configure a new mapper and select Group Membership.
Name: groups
Token Claim Name: groups
Full group path: off
Save the changes.
Important: this is a working configuration, but you should then harden the client in line with your organization's security policies.
Save the configuration and note the Client ID and the Client Secret (Credentials tab). We will also need the Issuer URL; it can be found in the realm settings -> OpenID Endpoint Configuration section.
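For repeatable setups, the same client can be created with Keycloak's admin CLI instead of the web UI. Below is a minimal sketch, assuming kcadm.sh is available on the Keycloak host and using the realm and URLs from this guide; the audience and groups mappers described above still have to be added on top of it:

# Log in to the admin CLI (it will prompt for the admin password)
kcadm.sh config credentials --server https://auth.keycloak.myinfra.zone/auth \
  --realm master --user admin
# Create a confidential client; directAccessGrantsEnabled is needed for the
# password-grant token request used later in this guide
kcadm.sh create clients -r myrealm \
  -s clientId=kubernetes \
  -s rootUrl=https://console.apps.myinfra.zone \
  -s 'redirectUris=["/*"]' \
  -s publicClient=false \
  -s directAccessGrantsEnabled=true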
2. Setting up Kubernetes
In order for the console to perform actions on behalf of the user, the Kubernetes cluster must be able to authenticate requests with your JWT. Our cluster was deployed via kubeadm; for other installations the config locations may differ.
On every VM running the Kubernetes API server, add the CA certificate used by Keycloak at the following path:
/etc/kubernetes/pki/oidc-ca.crt
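A minimal sketch of how the file can be distributed; the node names are hypothetical and will differ in your cluster:

# Copy the CA that signed the Keycloak certificate to each control-plane node,
# using the file name expected by the apiserver flag below
for node in control-plane-1 control-plane-2 control-plane-3; do
  scp keycloak-ca.crt root@"$node":/etc/kubernetes/pki/oidc-ca.crt
done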
Add the OIDC configuration to the kube-apiserver manifest:
/etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://auth.keycloak.myinfra.zone/auth/realms/myrealm
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.crt
    - --oidc-username-prefix=-
    ...
oidc-issuer-url – the URL from OpenID Endpoint Configuration
oidc-client-id – the client ID
oidc-username-claim – the JWT claim from which the user login is taken
oidc-groups-claim – the claim that contains the list of groups
oidc-ca-file – path to the file with the Keycloak CA certificate
oidc-username-prefix – a prefix added to the user login inside Kubernetes (used, for example, in RoleBindings); the value '-' disables the prefix
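As an example of such a RoleBinding, here is a hedged sketch that grants read-only cluster access to a Keycloak group; the group name k8s-viewers is hypothetical, and with --oidc-username-prefix=- the user and group names arrive unprefixed:

oc apply -f - <<'EOF'
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-group-viewers
subjects:
  - kind: Group
    name: k8s-viewers          # a group from the Keycloak 'groups' claim
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
EOF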
After the kube-apiserver pod restarts, the new settings take effect; you can verify access by obtaining a token and calling kube-api with it:
curl -k -L -X POST 'https://auth.keycloak.myinfra.zone/auth/realms/myrealm/protocol/openid-connect/token' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=kubernetes' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'client_secret=<KUBERNETES_CLIENT_SECRET>' \
--data-urlencode 'scope=openid' \
--data-urlencode 'username=<KEYCLOAK_USER>' \
--data-urlencode 'password=<KEYCLOAK_USER_PASSWORD>'
oc login --token=ACCESS_TOKEN_HERE --server=https://apiserver_url:6443
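If you are not using the OpenShift CLI, a rough kubectl equivalent of the check above looks like this (the token is taken from the curl response; --insecure-skip-tls-verify mirrors the -k in curl and should not be used outside of testing):

kubectl --server=https://apiserver_url:6443 \
  --token=ACCESS_TOKEN_HERE \
  --insecure-skip-tls-verify \
  get pods --all-namespaces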
3. Setting up openshift-console in a cluster
All we have to do is set up the manifests correctly:
Auxiliary manifests: namespaces, service accounts, cluster role bindings:
---
kind: Namespace
apiVersion: v1
metadata:
  name: openshift-console
---
kind: Namespace
apiVersion: v1
metadata:
  name: openshift-console-user-settings
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: console
  namespace: openshift-console
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: okd-console-role
subjects:
  - kind: ServiceAccount
    name: console
    namespace: openshift-console
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
---
Certificates for the console's HTTPS endpoint and the CA needed to trust Keycloak:
kind: Secret
apiVersion: v1
metadata:
  name: console-serving-cert
  namespace: openshift-console
data:
  ca.crt: >-
    ca_cert_base_64
  tls.crt: >-
    public_cert_base_64
  tls.key: >-
    private_cert_base_64
type: kubernetes.io/tls
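Instead of base64-encoding the certificates by hand, the same secret can be created from files; a minimal sketch, assuming the certificate, key and Keycloak CA are in the current directory under hypothetical file names:

# Creates a kubernetes.io/tls secret with an extra ca.crt key, matching the manifest above
oc -n openshift-console create secret generic console-serving-cert \
  --type=kubernetes.io/tls \
  --from-file=tls.crt=console.crt \
  --from-file=tls.key=console.key \
  --from-file=ca.crt=keycloak-ca.crt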
Deployment:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: console
  namespace: openshift-console
  labels:
    app: console
    component: ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
      component: ui
  template:
    metadata:
      name: console
      creationTimestamp: null
      labels:
        app: console
        component: ui
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ''
      restartPolicy: Always
      serviceAccountName: console
      schedulerName: default-scheduler
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: component
                    operator: In
                    values:
                      - ui
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - name: console
          image: quay.io/openshift/origin-console:4.12.0
          command:
            - /opt/bridge/bin/bridge
            - '--public-dir=/opt/bridge/static'
            - '--control-plane-topology-mode=HighlyAvailable'
            - '--k8s-public-endpoint=https://kubernetes_apiserver:6443'
            - '--listen=http://[::]:8080'
            - '--k8s-auth=oidc'
            - '--k8s-mode=in-cluster'
            - '--tls-cert-file=/var/serving-cert/tls.crt'
            - '--tls-key-file=/var/serving-cert/tls.key'
            - '--base-address=https://console.apps.myinfra.zone'
            - '--user-auth=oidc'
            - '--user-auth-oidc-ca-file=/var/serving-cert/ca.crt'
            - '--user-auth-oidc-client-id=kubernetes' # the same client ID as in the kubernetes apiserver config
            - '--user-auth-oidc-client-secret=oidc_client_from_keycloak'
            - '--user-auth-logout-redirect=https://console.apps.myinfra.zone'
            - '--user-auth-oidc-issuer-url=https://auth.keycloak.myinfra.zone/auth/realms/myrealm'
          resources: {}
          volumeMounts:
            - name: console-serving-cert
              readOnly: true
              mountPath: /var/serving-cert
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 150
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: https
              containerPort: 8443
              protocol: TCP
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: IfNotPresent
      serviceAccount: console
      volumes:
        - name: console-serving-cert
          secret:
            secretName: console-serving-cert
            defaultMode: 420
      dnsPolicy: ClusterFirst
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 3
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
Service and ingress:
---
kind: Service
apiVersion: v1
metadata:
  name: console
  namespace: openshift-console
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  internalTrafficPolicy: Cluster
  type: ClusterIP
  selector:
    app: console
    component: ui
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: okd-console
  namespace: openshift-console
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'false'
spec:
  ingressClassName: nginx
  tls:
    - secretName: console-serving-cert
  rules:
    - host: console.apps.myinfra.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 80
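Once the manifests are in place, apply them and watch the rollout; a small sketch, assuming they were saved to a single file (the file name here is just an example):

oc apply -f console-manifests.yaml
oc -n openshift-console rollout status deployment/console
oc -n openshift-console get pods,svc,ingress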
Depending on your requirements you can enable ssl-passthrough, not forgetting to change the listen port in the application and to adjust the rules section of the Ingress. However, with that configuration I ran into a problem when running more than one pod: traffic is sent to the pods round-robin and sessions can be lost.
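For reference, a hedged sketch of that ssl-passthrough variant; it assumes the ingress-nginx controller runs with --enable-ssl-passthrough and that the console's --listen flag is switched to https://[::]:8443 so the pod terminates TLS itself (the tls-cert-file/tls-key-file flags are already present in the Deployment):

oc apply -f - <<'EOF'
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: okd-console
  namespace: openshift-console
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
spec:
  ingressClassName: nginx
  rules:
    - host: console.apps.myinfra.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 443   # the service port that maps to container port 8443
EOF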
At this stage your web console should start, and when you open it you should be redirected to SSO. Once logged in, requests are made on behalf of the authenticated user.
4. Enabling ConsoleLink, ConsoleNotification and ConsoleCLIDownload
All we need is to take the CRD manifests from the repository and apply them to the cluster:
oc apply -f https://raw.githubusercontent.com/openshift/api/refs/heads/release-4.12/console/v1/0000_10_consolelink.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/api/refs/heads/release-4.12/console/v1/0000_10_consoleclidownload.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/api/refs/heads/release-4.12/console/v1/0000_10_consolenotification.crd.yaml
If desired, you can apply other console CRDs as well.
Then you can create the corresponding objects (API Explorer -> group: console.openshift.io) and observe the effect:
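For example, a hedged sketch of one link and one banner; the names, URLs and colors are made up for illustration:

oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: team-wiki
spec:
  text: Team wiki
  href: https://wiki.myinfra.zone
  location: ApplicationMenu
  applicationMenu:
    section: Our tools
---
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: maintenance-banner
spec:
  text: Maintenance window on Saturday 22:00-23:00
  location: BannerTop
  color: '#ffffff'
  backgroundColor: '#0088ce'
EOF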
That's it: you have launched the OKD web console with authorization, notifications and custom links!