Easing the pain of switching from Openshift to vanilla kubernetes. Setting up openshift-console with SSO support

In our organization, like many others, we are switching to domestic products, and this has also affected the container platform. Over the years of operation we grew fond of OKD (OpenShift), and in vanilla Kubernetes we were disappointed to find that familiar things were missing. However, OKD consists of freely distributed components, which means some of them can be reused, for example the web console. We decided to port it with as much of its functionality as possible. Existing guides usually cover only the installation of the console itself, but we also wanted SSO and the extra console elements: a directory of links and announcements in the header.

So, we need:

  • Certificates for web-console maintenance

  • OIDC provider. In our case Keycloak

  • Kubernetes cluster

  • openshift-console image from quay.io

  • Repository with CRD for web console

  • cli from openshift (optional; without it you will have to adapt the cli commands yourself)

1. Set up the client in the OIDC provider

Create a Keycloak client. Screenshots with setting examples are collected at the end of this section.

  1. client ID: kubernetes

  2. root url: <console url>

  3. valid redirect urls: /*

  4. Client authentication: on

  5. Save the client

  6. Inside the client, in the client scopes tab, add audience

    1. go to the kubernetes-dedicated scope (or <clientID>-dedicated if your client ID is different)

    2. Click Configure a new mapper

    3. Select the type: Audience

    4. enter a name and select included client audience: kubernetes (clientID)

    5. Add to ID token: on

    6. Leave the remaining parameters as default, click save

  7. In the global client scopes tab (on the left) add Groups:

    1. name: groups

    2. Type: Default

    3. Go to the Mappers tab, click Configure a new mapper

      1. select Group Membership

      2. name: groups

      3. Token Claim Name: groups

      4. Full group path: off

    4. Save changes


Important: This is a working config, but after that you need to harden the client in accordance with the security policies of your organization.

Save the config and note down the Client ID and Client Secret (Credentials tab). We also need the Issuer URL; you can get it from the realm settings -> OpenID Endpoint Configuration section.
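For reference, with these mappers a decoded ID token issued for this client should contain roughly the following claims (a sketch: the username and group names are made up, and standard claims such as exp, iat and sub are omitted):

{
  "iss": "https://auth.keycloak.myinfra.zone/auth/realms/myrealm",
  "aud": "kubernetes",
  "preferred_username": "jdoe",
  "groups": [
    "developers",
    "platform-admins"
  ]
}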

Setting examples (screenshots): setting up the client; setting up Audience (step 6); setting up the global Client scope (step 7).

2. Setting up kubernetes

In order for the console to perform actions on behalf of the user, the kubernetes cluster must be able to authenticate requests using a JWT token issued by Keycloak. Our cluster was deployed via kubeadm; for other installations the config locations may differ.

  1. On the VM with the kubernetes apiserver, place the CA certificate for Keycloak (the one its TLS certificate is signed with) at the path: /etc/kubernetes/pki/oidc-ca.crt

  2. Add the oidc config to kubernetes apiserver:

/etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://auth.keycloak.myinfra.zone/auth/realms/myrealm
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.crt
    - --oidc-username-prefix=-
...
  • oidc-issuer-url – url from OpenID Endpoint Configuration

  • oidc-client-id – client id

  • oidc-username-claim – attribute from jwt by which the user login is determined

  • oidc-groups-claim – claim, which contains a list of groups

  • oidc-ca-file – path to the file with the keycloak certificate

  • oidc-username-prefix – a prefix that is added to the user login inside kubernetes (useful for rolebindings, see the sketch below); the special value - disables the prefix
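Since the username prefix is disabled and no groups prefix is set, the names from the token can be used in RBAC directly. A minimal sketch of a ClusterRoleBinding that gives a Keycloak group read-only access to the cluster (the group name developers is an assumption):

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-developers-view
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: developers # group name exactly as it appears in the groups claim
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view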

After the kube-apiserver pod restarts, the new settings take effect; you can check access by requesting a token from Keycloak and calling kube-api with it:

curl -k -L -X POST 'https://auth.keycloak.myinfra.zone/auth/realms/myrealm/protocol/openid-connect/token' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=kubernetes' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'client_secret=<KUBERNETES_CLIENT_SECRET>' \
--data-urlencode 'scope=openid' \
--data-urlencode 'username=<KEYCLOAK_USER>' \
--data-urlencode 'password=<KEYCLOAK_USER_PASSWORD>'
oc login --token=ACCESS_TOKEN_HERE --server=https://apiserver_url:6443
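The token endpoint returns a JSON document; below is a sketch of pulling the token out of the response and calling kube-api with it directly, assuming jq is installed. Depending on which token your audience mapper targets, you may need the id_token field instead of access_token.

ACCESS_TOKEN=$(curl -sk -X POST 'https://auth.keycloak.myinfra.zone/auth/realms/myrealm/protocol/openid-connect/token' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'client_id=kubernetes' \
  --data-urlencode 'client_secret=<KUBERNETES_CLIENT_SECRET>' \
  --data-urlencode 'grant_type=password' \
  --data-urlencode 'scope=openid' \
  --data-urlencode 'username=<KEYCLOAK_USER>' \
  --data-urlencode 'password=<KEYCLOAK_USER_PASSWORD>' | jq -r '.access_token')
# Even a Forbidden response here proves authentication worked: the error message contains the username from the token
curl -sk -H "Authorization: Bearer $ACCESS_TOKEN" 'https://apiserver_url:6443/api/v1/namespaces'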

3. Setting up openshift-console in a cluster

All we have to do is set up the manifests correctly:


Auxiliary manifests: namespaces, serviceaccounts, clusterrolebindings:

---
kind: Namespace
apiVersion: v1
metadata:
  name: openshift-console
---
kind: Namespace
apiVersion: v1
metadata:
  name: openshift-console-user-settings
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: console
  namespace: openshift-console
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: okd-console-role
subjects:
  - kind: ServiceAccount
    name: console
    namespace: openshift-console
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
---

Certificates for the HTTPS console and the CA to trust Keycloak:

kind: Secret
apiVersion: v1
metadata:
  name: console-serving-cert
  namespace: openshift-console
data:
  ca.crt: >-
    ca_cert_base_64
  tls.crt: >-
    public_cert_base_64
  tls.key: >-
    private_cert_base_64
type: kubernetes.io/tls
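If you prefer not to base64-encode the files by hand, the same secret can be created directly from the certificate files (a sketch; the file names are assumptions):

oc -n openshift-console create secret generic console-serving-cert \
  --type=kubernetes.io/tls \
  --from-file=tls.crt=console.apps.myinfra.zone.crt \
  --from-file=tls.key=console.apps.myinfra.zone.key \
  --from-file=ca.crt=keycloak-ca.crt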

Deployment:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: console
  namespace: openshift-console
  labels:
    app: console
    component: ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
      component: ui
  template:
    metadata:
      name: console
      creationTimestamp: null
      labels:
        app: console
        component: ui
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ''
      restartPolicy: Always
      serviceAccountName: console
      schedulerName: default-scheduler
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: component
                    operator: In
                    values:
                      - ui
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - name: console
          image: quay.io/openshift/origin-console:4.12.0
          command:
            - /opt/bridge/bin/bridge
            - '--public-dir=/opt/bridge/static'
            - '--control-plane-topology-mode=HighlyAvailable'
            - '--k8s-public-endpoint=https://kubernetes_apiserver:6443'
            - '--listen=http://[::]:8080'
            - '--k8s-auth=oidc'
            - '--k8s-mode=in-cluster'
            - '--tls-cert-file=/var/serving-cert/tls.crt'
            - '--tls-key-file=/var/serving-cert/tls.key'
            - '--base-address=https://console.apps.myinfra.zone'
            - '--user-auth=oidc'
            - '--user-auth-oidc-ca-file=/var/serving-cert/ca.crt'
            - '--user-auth-oidc-client-id=kubernetes' # the same as for kubernetes apiserver client
            - '--user-auth-oidc-client-secret=oidc_client_secret_from_keycloak'
            - '--user-auth-logout-redirect=https://console.apps.myinfra.zone'
            - '--user-auth-oidc-issuer-url=https://auth.keycloak.myinfra.zone/auth/realms/myrealm'
          resources: {}
          volumeMounts:
            - name: console-serving-cert
              readOnly: true
              mountPath: /var/serving-cert
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 150
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: https
              containerPort: 8443
              protocol: TCP
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: IfNotPresent
      serviceAccount: console
      volumes:
        - name: console-serving-cert
          secret:
            secretName: console-serving-cert
            defaultMode: 420
      dnsPolicy: ClusterFirst
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 3
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Service and ingress:

---
kind: Service
apiVersion: v1
metadata:
  name: console
  namespace: openshift-console
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  internalTrafficPolicy: Cluster
  type: ClusterIP
  selector:
    app: console
    component: ui
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: okd-console
  namespace: openshift-console
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'false'
spec:
  ingressClassName: nginx
  tls:
    - secretName: console-serving-cert
  rules:
    - host: console.apps.myinfra.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 80

Depending on your requirements, you can enable ssl-passthrough (don't forget to change the listen port in the application and adjust the rules section of the Ingress). However, I ran into a problem with this configuration: more than one pod does not work, because traffic is round-robined across the pods and the session can be lost.
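For reference, a sketch of the passthrough variant (the nginx ingress controller must be started with --enable-ssl-passthrough; with passthrough nginx routes by SNI, so the path rules are effectively ignored):

# In the Deployment, have bridge terminate TLS itself (the cert is already mounted):
#   - '--listen=https://[::]:8443'
# And switch the Ingress to passthrough, pointing at the https port of the Service:
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: okd-console
  namespace: openshift-console
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
spec:
  ingressClassName: nginx
  rules:
    - host: console.apps.myinfra.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 443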

At this stage, your web console should start, and when you log in, you should be redirected to SSO. Once logged in, requests will be made on behalf of the authenticated user.
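If the console does not come up or the redirect to SSO does not happen, the bridge logs usually show exactly which OIDC parameter it dislikes (plain kubectl/oc, nothing console-specific):

oc -n openshift-console rollout status deployment/console
oc -n openshift-console logs deploy/console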

4. Enable ConsoleLink, ConsoleNotification, ConsoleCLIDownload

All we need is to take the CRD manifests from the repository and apply them to the cluster:

oc apply -f https://raw.githubusercontent.com/openshift/api/refs/heads/release-4.12/console/v1/0000_10_consolelink.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/api/refs/heads/release-4.12/console/v1/0000_10_consoleclidownload.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/api/refs/heads/release-4.12/console/v1/0000_10_consolenotification.crd.yaml

If desired, you can apply other console CRDs from the repository; the console will pick them up too.
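A quick way to check that the CRDs have been registered:

oc get crd | grep console.openshift.io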

Then you can create the corresponding objects (API Explorer -> group: console.openshift.io) and observe the effect:
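For example, a sketch of a link in the application menu and a banner in the header (the names, URL and texts here are made up):

---
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: internal-wiki
spec:
  text: Internal wiki
  href: 'https://wiki.myinfra.zone'
  location: ApplicationMenu
  applicationMenu:
    section: Internal tools
---
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: maintenance-banner
spec:
  text: Maintenance window on Saturday 22:00-23:00 UTC
  location: BannerTop
  color: '#ffffff'
  backgroundColor: '#0088ce'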

That's it, you've launched the OKD web console with authorization, alerts and custom links!
