How Do I Decode An OpenShift Secret In A Mounted Volume?

Mounting an OpenShift Secret to its own volume is straightforward, and the Web is littered with examples of how to do it. The next thing almost every manual, guide, or tutorial tells you is to encode the secret in Base64.

For example, I have an SSL certificate stored in the Java KeyStore (JKS) file format. The recommended way is to store it in a Secret instead of a ConfigMap, since it is sensitive information. It goes without saying that the JKS file is password-protected.

Second, it must be encoded as a Base64 string before I save it as an OpenShift Secret.
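For context, a minimal Secret manifest holding that Base64 string might look like the sketch below. The name ssl-keystore and the key encoded-secret.jks are placeholders; stringData stores the value as given, so what the cluster keeps is the Base64 text that will later need decoding.

apiVersion: v1
kind: Secret
metadata:
  name: ssl-keystore                  # placeholder name
type: Opaque
stringData:
  encoded-secret.jks: "<Base64 string produced from the password-protected JKS file>"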

How do I get the Base64 string to be decoded in the mounted volume? This one does not seem to get many answers.

Do I need to include shell commands to decode it and write it to a folder?

Which folder should I write it to as best practice?

ANSWER

#1 The encoded JKS Secret as an environment variable

This is an option, but I’m not a fan of it. The encoded JKS file becomes a very long string, especially when the file is large, and I don’t think environment variables were meant to hold values like that. That said, it is the easier implementation.

Map the secret to an env var in OpenShift or Kubernetes. Then you can echo that env var, decode it, and write the result out to a file:

echo -n "$SECRET_JKS_VAR" | base64 --decode > /file/path/to/decoded-secret.jks
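For reference, wiring the Secret key to that env var in the pod spec might look like this sketch; the Secret name ssl-keystore and key encoded-secret.jks are the same placeholders as above:

containers:
- name: app                           # placeholder container
  image: my-app:latest                # placeholder image
  env:
  - name: SECRET_JKS_VAR
    valueFrom:
      secretKeyRef:
        name: ssl-keystore            # placeholder Secret name
        key: encoded-secret.jks       # key holding the Base64 string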

#2 Mount for read, mount for write

You have the secret mounted at a file path; that is for reading. Now you need to decode it, which means writing to another file so your application can read it back unencoded.

First, define a mount point as an empty directory (an emptyDir volume). It must be writable. Then make it memory-only.

Next, read the JKS file from its mount point, decode it, and write it out to the empty-dir mount point:

cat /mount/file/path/for-reading/encoded-secret.jks | base64 --decode > /mount/file/path/for-writing/decoded-secret.jks
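A rough sketch of the two mounts in the pod spec, using the same placeholder Secret name as above and paths mirroring the command:

containers:
- name: app                                    # placeholder container
  image: my-app:latest
  volumeMounts:
  - name: encoded-keystore                     # read side: the mounted Secret
    mountPath: /mount/file/path/for-reading
    readOnly: true
  - name: decoded-keystore                     # write side: in-memory empty dir
    mountPath: /mount/file/path/for-writing
volumes:
- name: encoded-keystore
  secret:
    secretName: ssl-keystore                   # placeholder Secret name
- name: decoded-keystore
  emptyDir:
    medium: Memory                             # memory-backed, counts toward the pod's memory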

I like this approach better. It makes more sense than mapping a very long Base64-encoded text value to an environment variable. The caveat is that there is slightly more configuration to be made. Also, if I’m not mistaken, the in-memory volume counts against your app’s memory quota; that should be negligible unless you write thousands of files into it.

#3 Lastly, write where you can

Underneath that container is a file system; wherever you have permission to write, do so there. If it’s Linux, the choice is pretty much arbitrary IMHO: put it in /tmp or /home or /mnt. Security-wise, others might have access to your app’s pod, which means they can get to the decoded Secret as well, but that is another topic.

Scheduling an OpenShift “job”

I have a backup process that involves an OpenShift “job”.
We fire off this job by:

  • oc delete the previous job object
  • Edit the following YAML with the backup location, which is either the backupa or backupb mount. This alternates each time we do a backup, so we always have the last two backups.
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-job
spec:
  template:
    spec:
      containers:
      - name: backup-job
        image: example-registry.svc:5000/someproject/some-image:latest
        args:
        - /bin/sh
        - -c
        - DIR=/path/to/backupb; cd ${DIR}; /do/backup
        volumeMounts:
          - mountPath: /path/to/backupa
            name: example-backup-dira
          - mountPath: /path/to/backupb
            name: example-backup-dirb
      volumes:
      - name: connector-data
        persistentVolumeClaim:
          claimName: shared-connector-data
      - name: example-backup-dira
        persistentVolumeClaim:
          claimName: backupa
      - name: example-backup-dirb
        persistentVolumeClaim:
          claimName: backupb
      restartPolicy: Never
  backoffLimit: 1
  • oc create the YAML, which initiates the backup.

I would like to create a weekly job for this that runs on a schedule.

What is the OpenShift/Kubernetes way of scheduling such a thing?

Google gets me here https://docs.openshift.com/container-platform/3.3/dev_guide/scheduled_jobs.html

But I’m a little unclear on how to use this. Do I create a template for my backup job and then create a scheduled job that takes my backup template, substitutes the backup mount variable, and runs based on a cron expression?
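For orientation, the resource that page describes is a scheduled wrapper around a Job spec; a weekly version of the backup Job above might look roughly like the sketch below (recent clusters use a batch/v1 CronJob, while the linked 3.3 docs use the older ScheduledJob type). The alternation between backupa and backupb would still need to be handled by the backup script or by editing the resource, since this sketch hard-codes one path:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-cronjob
spec:
  schedule: "0 2 * * 0"                # for example, 02:00 every Sunday
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          containers:
          - name: backup-job
            image: example-registry.svc:5000/someproject/some-image:latest
            args:
            - /bin/sh
            - -c
            - DIR=/path/to/backupb; cd ${DIR}; /do/backup
            volumeMounts:
            - mountPath: /path/to/backupa
              name: example-backup-dira
            - mountPath: /path/to/backupb
              name: example-backup-dirb
          volumes:
          - name: example-backup-dira
            persistentVolumeClaim:
              claimName: backupa
          - name: example-backup-dirb
            persistentVolumeClaim:
              claimName: backupb
          restartPolicy: Never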

I’m just looking for a little guidance on how the OpenShift experts out there accomplish this, so I’m not shooting in the dark.

Go to Source
Author: Nicholas DiPiazza

Connect a specific pod from the outside world

I have a service for a specific pod of a StatefulSet (and will probably create similar services for more pods later):

apiVersion: v1
kind: Service
metadata:
  name: app-0
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    statefulset.kubernetes.io/pod-name: app-0
  ports:
  - protocol: TCP
    port: 2000
    targetPort: 2000
    nodePort: 32328
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 31795

I want to connect to those ports from another app running on a remote computer; I have to supply that app with an IP address and port number. However, I couldn’t communicate with this service externally. What am I doing wrong?

Go to Source
Author: flowerProgrammer