Kubernetes deployment – specify multiple options for image pull as a fallback?

We have had image pull issues at one time or another with all of our Docker registries, including Artifactory, AWS ECR, and GitLab. Even Docker Hub occasionally has issues.

Is there a way in a Kubernetes deployment to specify that a pod can pull its image from multiple different registries, so it can fall back if one is down?

If not, what other solutions are there to maintain stability? I’ve seen things like Harbor and Trow, but they seem like heavy-handed solutions to a simple problem.
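For context, the fallback the question describes does exist one layer down, in the container runtime rather than in the Deployment spec: containerd (1.5+) can be given per-registry mirror hosts that it tries in order before falling back to the upstream registry. A minimal sketch, where the mirror hostname is a placeholder:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
# Upstream registry, used as the last resort.
server = "https://registry-1.docker.io"

# Tried first; containerd falls back to `server` if this host is down.
[host."https://registry-mirror.internal.example"]
  capabilities = ["pull", "resolve"]
```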

NOTE: Cross posted on SO just to get help faster, but it belongs here.


Go to Source
Author: John Humphreys – w00te

Traefik HTTPS ingress for application outside of cluster

I have an application that runs in its own VM on the same bare-metal server. I also have a k8s setup that runs multiple applications behind Traefik. I want to use the k8s Traefik to reverse proxy the application running on the VM. Is that possible?

It looks like I can define a Service that points to the IP address, but that’s not recommended; the docs point to headless Services instead, but that doesn’t seem like it will work here.
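For reference, the “Service that points to an IP address” approach is a Service without a selector paired with a manually managed Endpoints object of the same name, which Traefik can then route to like any in-cluster Service. A minimal sketch, where the name `vm-app`, the IP, and the ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vm-app
spec:
  # No selector: endpoints are managed manually below.
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-app        # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10   # the VM's IP (placeholder)
    ports:
      - port: 8080
```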

Go to Source
Author: digital

Control GKE CI/CD from Jenkins in a lab with a private network?

For testing purposes I need to use my locally provisioned Jenkins (running under Vagrant) to connect to GKE and use pods as build agents. Is that possible? From what I’ve read, K8s will need access to Jenkins as well. How can I achieve that?

It looks to be possible, but I am stuck on access rights for now:

o.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://xxxxxx/api/v1/namespaces/cicd/pods?labelSelector=jenkins%3Dslave. Message: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" in the namespace "cicd". Received status: Status(apiVersion=v1, code=403, details=StatusDetails(causes=[], group=null, kind=pods, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" in the namespace "cicd", metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Forbidden, status=Failure, additionalProperties={}).
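The error shows the request reaching the API server as `system:anonymous`, so the Jenkins Kubernetes plugin is not presenting credentials; once it authenticates as a real identity, that identity also needs RBAC rights on pods in the `cicd` namespace. A sketch of a ServiceAccount plus Role/RoleBinding that would cover the `GET .../pods` call above (names and verb list are assumptions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: cicd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-agents
  namespace: cicd
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/log"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-agents
  namespace: cicd
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: cicd
roleRef:
  kind: Role
  name: jenkins-agents
  apiGroup: rbac.authorization.k8s.io
```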

Go to Source
Author: anVzdGFub3RoZXJodW1hbg

Cannot mount CIFS storage on k8s cluster

I have to mount CIFS storage. I’ve tried flexvolume with fstab/cifs, but I have no idea what I’m doing wrong.

Using microk8s v1.18

root@master:~/yamls# cat pod.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: 'xxxxxxxxxxx='
  password: 'xxxxxxxxxxxxxxxxxxxxxx=='
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: test
      mountPath: /data
  volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//srv/storage"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"


root@master:~/yamls# kubectl apply -f pod.yaml 
pod/busybox configured
The Secret "cifs-secret" is invalid: type: Invalid value: "fstab/cifs": field is immutable

On changing the Secret’s type to Opaque I get this:

  Type     Reason       Age                   From                                      Message
  ----     ------       ----                  ----                                      -------
  Normal   Scheduled    <unknown>             default-scheduler                         Successfully assigned default/busybox to spb-airsys-services.spb.rpkb.ru
  Warning  FailedMount  17m (x23 over 48m)    kubelet, master  MountVolume.SetUp failed for volume "test" : Couldn't get secret default/cifs-secret err: Cannot get secret of type fstab/cifs

What Secret type do I have to use with the CIFS driver? Why is this so hard? Is it the API changing, or something else? Why does the API version change from release to release — isn’t versioning supposed to provide compatibility?

And, looking ahead, what would you suggest for NFS mounts? Moreover, which practices do you use to take snapshots of mounts (or any other backup system)?

Go to Source
Author: Deerenaros

K8s sig-storage-local-static-provisioner hostDir against /vagrant mount?

I am trying to point sig-storage-local-static-provisioner’s hostDir at a /vagrant folder mapped from a Windows host. My expectation is that the local-storage class will automatically provision PVs based on PVC requests. Is that possible? I’m trying to use Confluent Kafka with it, with the following config:
namespace: kube-system

classes:
  - name: local-storage
    hostDir: /vagrant/kafkastorage
However, I am stuck waiting for a consumer and I don’t see PVs getting created. Any idea if this is possible at all?

The latest events are:

2m52s  Normal  WaitForFirstConsumer  persistentvolumeclaim/datadir-0-confluent-prod-cp-kafka-0       waiting for first consumer to be created before binding
2m52s  Normal  WaitForFirstConsumer  persistentvolumeclaim/datadir-confluent-prod-cp-zookeeper-0     waiting for first consumer to be created before binding
2m52s  Normal  WaitForFirstConsumer  persistentvolumeclaim/datalogdir-confluent-prod-cp-zookeeper-0  waiting for first consumer to be created before binding
cat local.storage.class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
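For context, this provisioner does not create volumes in response to PVCs; it only discovers directories that are already mounted under hostDir and publishes each one as a PV. The PV it generates looks roughly like this sketch (the PV name, `disk1` subdirectory, capacity, and node name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    # a mount point discovered under hostDir
    path: /vagrant/kafkastorage/disk1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
```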

Go to Source
Author: anVzdGFub3RoZXJodW1hbg

How to keep secrets out of version control with kustomize?

I’ve started using kustomize. It lets you generate secrets with something like:

secretGenerator:
  - name: mariadb-env
    envs:
      - mariadb.env

This is great because kustomize appends a hash, so every time I edit my secret, Kubernetes sees it as new and restarts the server.

However, if I put kustomization.yaml under version control, that more or less entails putting mariadb.env under version control too. If I don’t, then `kustomize build` will fail because of the missing file (for anyone who clones the repo). And even if I never commit it, I still have these secret files sitting on my dev workstation.

Prior to adopting kustomize, I’d just create the secret once, send it to the Kubernetes cluster, and let it live there. I could still reference it in my configs by name, but with the hash appended I can’t really do that anymore. Yet the hash is also incredibly useful for forcing the restart.
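For reference, kustomize can reproduce the old create-once-and-reference-by-name workflow by turning the hash suffix off per kustomization, at the cost of the restart-on-change behavior described above. A sketch, reusing the generator from the question:

```yaml
# kustomization.yaml
generatorOptions:
  # emit the Secret as plain "mariadb-env", with no content hash appended
  disableNameSuffixHash: true
secretGenerator:
  - name: mariadb-env
    envs:
      - mariadb.env
```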

How are people dealing with this?

Go to Source
Author: mpen

Can a kubernetes pod be forced to stay alive after its main command fails?

After starting a long-running Kubernetes job, we’ve found that the final result can fail to upload to its destination. We would like to force the pod to stay open after the main process fails, so we can exec in and manually process the final results. If the main process fails and the pod exits before uploading the final result, we lose a large amount of time re-processing the job.

Is there a way to manually ensure the pod stays alive?
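One common workaround is to wrap the entrypoint in a shell so that a failure leaves the container sleeping instead of exiting, preserving the pod for `kubectl exec`. A sketch, where `my-job-image` and `/app/run-job` are placeholders for the real image and entrypoint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: long-job
spec:
  restartPolicy: Never
  containers:
    - name: worker
      image: my-job-image            # placeholder
      command: ["/bin/sh", "-c"]
      # Run the real entrypoint; on failure, sleep so the pod stays up
      # long enough to exec in and recover the results, then propagate
      # the original exit code.
      args: ["/app/run-job; rc=$?; if [ $rc -ne 0 ]; then sleep 86400; fi; exit $rc"]
```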

Go to Source
Author: David Parks