Kubernetes deployment – specify multiple options for image pull as a fallback?

We have had image-pull issues at one time or another with all of our Docker registries, including Artifactory, AWS ECR, and GitLab. Even Docker Hub occasionally has issues.

Is there a way in a Kubernetes deployment to specify that a pod can pull an image from multiple registries, so it can fall back if one is down?

If not, what other solutions are there to maintain stability? I’ve seen things like Harbor and Trow, but they seem like heavy-handed solutions to a simple problem.
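For context, one lighter-weight approach than running a full registry is to configure fallback endpoints at the container-runtime level rather than in the Deployment itself. With containerd, the CRI plugin can try a list of mirror endpoints in order; a sketch below (the mirror hostname is a placeholder, and the exact config keys vary by containerd version):

```toml
# /etc/containerd/config.toml -- registry mirror with fallback (containerd CRI plugin)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    # Endpoints are tried in order; the upstream registry acts as the fallback.
    endpoint = [
      "https://registry-mirror.example.internal",
      "https://registry-1.docker.io",
    ]
```

After editing the file, containerd must be restarted on each node for the change to take effect.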

NOTE: Cross-posted on SO just to get help faster, but it belongs here.

https://stackoverflow.com/questions/63266765/kubernetes-deployment-specify-multiple-options-for-image-pull-as-a-fallback

Go to Source
Author: John Humphreys – w00te

haproxy container created with a remote-machine Ansible playbook is not responding, although the container is running

I have already created an Ansible playbook that runs one haproxy container and two Apache containers, and I mapped the ports and volumes needed so haproxy can distribute client requests to the Apache servers. The containers are created and mapped, but the haproxy server is not responding. Here is my playbook:

    - hosts: all
      vars:
        img:
          - httpd:latest
          - haproxy:latest
      tasks:
        - name: pull image
          docker_image:
            name: "{{ item }}"
            source: pull
          loop: "{{ img }}"
        - name: copy my contain1
          copy:
            src: ./index1.html
            dest: /home/roy/volume/index.html
          notify: apache1
        - name: copy my contain2
          copy:
            src: ./index2.html
            dest: /home/roy/volume1/index.html
          notify: apache2
        - name: create containers
          docker_container:
            name: haproxycon1
            image: haproxy:latest
            volumes:
              - /home/roy/ha/:/usr/local/etc/haproxy/
            state: stopped
            ports:
              - "80:80"
            command: sleep 1d
          notify: pop
          register: haproxycon1
          ignore_errors: yes
        - name: create apache1 container
          docker_container:
            name: con1
            image: httpd:latest
            state: started
            volumes:
              - /home/roy/volume:/usr/local/apache2/htdocs
            exposed_ports:
              - 80
          register: con1
          ignore_errors: yes
        - name: create apache2
          docker_container:
            name: con2
            image: httpd:latest
            state: started
            volumes:
              - /home/roy/volume1:/usr/local/apache2/htdocs
            exposed_ports:
              - 80
          register: con2
          ignore_errors: yes
        - name: haproxyconf
          template:
            src: ./haproxy.cfg
            dest: /home/roy/ha/haproxy.cfg
          notify: pop
      handlers:
        - name: pop
          docker_container:
            name: haproxycon1
            state: started
        - name: apache1
          docker_container:
            name: con1
            state: started
        - name: apache2
          docker_container:
            name: con2
            state: started
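For reference, a minimal `haproxy.cfg` consistent with the port mapping and container names above might look like the sketch below; it assumes con1 and con2 are resolvable by name on a shared Docker network, which the playbook does not set up. Note in passing that `command: sleep 1d` in the haproxycon1 task overrides the image's default entrypoint, so haproxy itself will not actually be running inside that container.

```
# Sketch of /home/roy/ha/haproxy.cfg -- backend hostnames assume a
# shared Docker network where con1/con2 resolve (an assumption).
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend apache_servers

backend apache_servers
    balance roundrobin
    server apache1 con1:80 check
    server apache2 con2:80 check
```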

Go to Source
Author: KKSpro

Linking nodes in a cluster to run a single instance of a docker container

In this cluster, there are 11 identical nodes, each with 16 GB of memory. So far, I’ve been able to connect each node of the cluster to a Docker swarm hosted on the control node. My end goal is to run a single instance of a container that uses the combined resources of all the nodes, so that container would have 176 GB of memory in total. How should I accomplish this?

Each worker node is running Ubuntu Server 20.04 and the control node is running Ubuntu Desktop 16.04, all with Docker Engine 19.03.8.

Go to Source
Author: AJ_4real

How to use custom apparmor profile for container with dockerode api using Nodejs

I was wondering how I can use an AppArmor profile for Docker containers using dockerode (a Docker API client) under Node.js.

Visit this project

I found something called `"SecurityOpt": [ ]` in the official documentation of the Docker API.

But I still haven’t succeeded in implementing it. Can anyone help me achieve, through the Docker API, the same result I usually get by running the commands for applying an AppArmor profile? See here
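For what it's worth, dockerode passes container-creation options straight through to the Docker Engine API, so the AppArmor profile goes under `HostConfig.SecurityOpt`, mirroring `docker run --security-opt apparmor=<profile>`. A minimal sketch (the profile name `my-custom-profile` is a placeholder, and the commented-out call at the end assumes dockerode is installed and a local Docker daemon is running):

```javascript
// Container-creation options in the shape dockerode forwards to the Engine API.
const createOpts = {
  Image: 'ubuntu:latest',
  Cmd: ['sleep', 'infinity'],
  HostConfig: {
    // Placeholder profile name; the profile must already be loaded on the
    // host (e.g. via `apparmor_parser -r ./my-custom-profile`).
    SecurityOpt: ['apparmor=my-custom-profile'],
  },
};

console.log(createOpts.HostConfig.SecurityOpt);

// With dockerode (assumed installed; requires a running Docker daemon):
// const Docker = require('dockerode');
// const docker = new Docker({ socketPath: '/var/run/docker.sock' });
// docker.createContainer(createOpts).then((c) => c.start());
```

The same `SecurityOpt` array can also be supplied via `docker.run()`'s `createOptions` argument if you prefer that helper over `createContainer`.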

Go to Source
Author: Siddhant Gour