What is the purpose of running PHP-FPM in its own container instead of in the same container as Apache or nginx?

I’m fairly new to Docker so excuse any obvious ignorance or misunderstandings. That said, I’ve been coding and configuring web applications for a long time now. I have recently been dabbling with some more sophisticated (Docker and “traditional”) web/application server setups and experimenting with performance enhancements and simplifying deployments.

My personal favorite configuration for most projects thus far is nginx as a (mostly) “static file” web server, cache, and/or load balancer in a reverse-proxy configuration, with Apache 2.4+ as the “backend” running PHP (also Perl and/or Python) in a FastCGI (PHP-FPM) configuration, with mod_php disabled and mpm_event instead of mpm_prefork. We can always add something else as well, such as Redis or memcached, where applicable (though I rarely see a significant performance gain when Cloudflare caching is in use).

I’ve been dabbling with a few different ways to keep my “favorite web server composition” flexible and “ephemeral” enough to accommodate any and all options for further performance gains or resource load management. What I keep seeing is that there seems to have been a shift towards running PHP-FPM in its own separate “stand-alone” Docker container, starting sometime around late 2019.


While I can appreciate keeping resources isolated and separate for configurations that are easier to debug, configure, and secure, I don’t fully understand the benefits of having PHP-FPM in a separate Docker container that the application those containers make up implicitly REQUIRES.

If anything, a separate PHP-FPM container seems like additional configuration, additional resource consumption, and an even more complicated build/deploy consideration in a CI/CD pipeline.

I can even get on board with “simple preference”: for instance, instead of running Apache2 and nginx on the same Ubuntu/Debian or RHEL/CentOS container, break nginx off into its own container(s) and simply configure your hostname(s) and/or IPs and ports appropriately.

But what is a practical use case, and what are the advantages, of separating PHP-FPM from the Apache or nginx container, beyond perhaps using more Dockerfile and/or docker-compose.yaml and less bash scripting to get the same job done? If any part of the application fails or falters, functionality will suffer either way. The additional network (and Docker-network) communication and hostname resolution seems like it would just be a performance penalty compared with combining certain aspects of the docker-compose-d application, rather than splitting them up simply for the sake of splitting them up.
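For reference, the wiring between a web-server container and a stand-alone PHP-FPM container is typically just a FastCGI pass over the Docker network. A minimal nginx sketch, assuming a compose service named php-fpm listening on the default port 9000 (both names are illustrative):

```nginx
location ~ \.php$ {
    # "php-fpm" resolves via Docker's embedded DNS to the PHP-FPM container
    fastcgi_pass php-fpm:9000;
    fastcgi_index index.php;
    # this path must exist inside the PHP-FPM container
    # (shared volume, or the same COPY in both images)
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    include fastcgi_params;
}
```

One practical upshot of this split is that the PHP-FPM service can then be scaled, pinned to different resources, or swapped (e.g. a different PHP version) without rebuilding or restarting the web-server container.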

Go to Source
Author: kanidrive

docker run vs docker-compose – different results?

I have just started exploring Docker and am having trouble with docker commit/-compose.

I have copied a file into /root/.guacamole/lib (in the container) and then created an image using docker commit.

An ls of the relevant path in the container using

docker run -it --name="name" image bash

confirms that the file exists, but when the following docker-compose file sets up the container, the file is missing, even though it reports using the same image.

Any idea why that should be?


$ cat docker-compose.yml
version: "2"
services:
  guacd:
    image: "guacamole/guacd"
    container_name: guacd
    hostname: guacd
    restart: always
    volumes:
      - "/data/shared/guacamole/guacd/data:/data"
      - "/data/shared/guacamole/guacd/conf:/conf:ro"
    expose:
      - "4822"
    ports:
      - "4822:4822"
    network_mode: bridge

  guac2:
    image: "guac2"
    container_name: guac2
    hostname: guac2
    restart: always
    volumes:
      - "/data/shared/guacamole/guacamole/guac-home:/data"
      - "/data/shared/guacamole/guacamole/conf:/conf:ro"
    expose:
      - "8080"
    ports:
      - "8084:8080"
    network_mode: bridge
    environment:
      - "GUACD_HOSTNAME=pc.lan"
      - "GUACD_PORT=4822"
      - "MYSQL_HOSTNAME=pc.lan"
      - "MYSQL_PORT=3306"
      - "MYSQL_DATABASE=guacamole_db"
      - "MYSQL_USER=guacamole_user"
      - "MYSQL_PASSWORD=password"
      - "GUACAMOLE_HOME=/data"

Here is the terminal text:

user@pc:~$ docker run -it --name="test" guac2 bash
root@b6dcd7dda985:/opt/guacamole# cd /root/.g*/lib
root@b6dcd7dda985:~/.guacamole/lib# ls
mariadb-java-client-2.6.2.jar  mysql-connector-java-5.1.46-bin.jar  mysql-connector-java-5.1.46.jar
***[NB 3 files listed above]***
root@b6dcd7dda985:~/.guacamole/lib# exit
user@pc:~$ docker stop test
user@pc:~$ docker rm test
user@pc:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
ea92c2db77f6        guac2               "/opt/guacamole/bin/…"   51 minutes ago      Up 51 minutes>8080/tcp   guac2
52aea98f7e2c        guacamole/guacd     "/bin/sh -c '/usr/lo…"   About an hour ago   Up 57 minutes>4822/tcp   guacd
user@pc:~$ docker stop guacd
user@pc:~$ docker stop guac2
user@pc:~$ docker rm guacd
user@pc:~$ docker rm guac2
user@pc:~$ docker-compose up -d
Creating guac2 ... done
Creating guacd ... done
user@pc:~$ docker exec -it guac2 bash
root@guac2:/opt/guacamole# cd /root/.g*/lib
root@guac2:~/.guacamole/lib# ls
mysql-connector-java-5.1.46-bin.jar  mysql-connector-java-5.1.46.jar
***[NB only 2 files listed above]***
root@guac2:~/.guacamole/lib# exit
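One way to narrow this down (a diagnostic sketch, not a confirmed cause) is to compare the image ID the compose-created container is actually running against the ID of the committed image — docker-compose may be resolving the image name to an older local tag than the one docker run used:

```
# which image ID is the running container based on?
docker inspect -f '{{.Image}}' guac2

# which ID does the tag currently point at?
docker images --no-trunc guac2
```

If the two IDs differ, the compose file is starting a container from a stale image rather than the one produced by docker commit.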

Author: Gareth

How to send user input as some text inside Dockerfile?

I have following Dockerfile:

FROM ubuntu
MAINTAINER Sample
RUN apt-get -y update
RUN apt-get -y install ttf-dejavu
COPY ./soap.sh /
RUN chmod +x /soap.sh
ENTRYPOINT ["/soap.sh"]
CMD ["/script.sh", "arg1"]

I am trying to install SoapUI using the above Dockerfile.

Using the above Dockerfile, I am able to build the image successfully, as shown below:

root@test00-new:/home/sam/1_Docker# docker build -t balu1 .
Sending build context to Docker daemon  385.8MB
Step 1/8 : FROM ubuntu
 ---> 1e4467b07108
Step 2/8 : MAINTAINER Sample
 ---> Running in c6dc0992d25c
Removing intermediate container c6dc0992d25c
 ---> 7ecd8a4fcbfd
Step 3/8 : RUN apt-get -y update
 ---> Running in aa6467b0fc5c
 ---> cd3b7f8f5389
Step 4/8 : RUN apt-get -y install ttf-dejavu
 ---> Running in 8fcc7ce6ab9f
 ---> 55649115b86c
Step 5/8 : COPY ./soap.sh /
 ---> 2c236754e2d5
Step 6/8 : RUN chmod +x /soap.sh
 ---> Running in 408d72761df5
Removing intermediate container 408d72761df5
 ---> 17d2afb0bc43
Step 7/8 : ENTRYPOINT ["/soap.sh"]
 ---> Running in 0b10d8b91d65
Removing intermediate container 0b10d8b91d65
 ---> a7d0b50d4fdd
Step 8/8 : CMD ["/script.sh", "arg1"]
 ---> Running in a8ba5d134b5d
Removing intermediate container a8ba5d134b5d
 ---> d41cffedccda
Successfully built d41cffedccda
Successfully tagged balu1:latest
root@test00-new:/home/sam/1_Docker# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
balu1               latest              d41cffedccda        29 seconds ago      351MB
ubuntu              latest              1e4467b07108        3 weeks ago         73.9MB

But when I run a container from the above image, I get the following output:

root@test00-new:/home/sam/1_Docker# docker run balu1 OK
Unpacking JRE ...
Preparing JRE ...
Starting Installer ...
This will install SoapUI 5.2.1 on your computer.
OK [o, Enter], Cancel [c]

In the above case, how do I handle user input in the Dockerfile?

I need to send the letter o or press ENTER as input inside the Dockerfile.

Please guide me on this.
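Two common ways to handle this are sketched below; whether they work depends on the installer, but SoapUI's installer is install4j-based, and install4j installers generally accept -q for unattended mode:

```dockerfile
# Option 1: run the installer unattended so it never prompts
RUN /soap.sh -q

# Option 2: pipe the expected answer ("o" + Enter) into the installer's stdin
RUN printf 'o\n' | /soap.sh
```

Note that doing the install in a RUN step at build time, rather than in the ENTRYPOINT at container start, means the prompt never appears at docker run in the first place.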

Author: data123

docker-compose fails to keep CentOS container up

I’m building a basic CentOS image with only some packages, but I must be forgetting something: the build finishes fine, but the container doesn’t stay up. What am I doing wrong?


version: '3'
services:
  ansible:            # service name not shown in the original
    build: ../common/docker/ansible/.
    stdin_open: true
    volumes:
      - ../../api:/var/www/api
      - ../../server:/var/www/server
      - ../server/provision/server:/tmp/ansible
    working_dir: /tmp/ansible
    command: ansible-playbook playbook.yml

networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet:


FROM centos:7
RUN yum makecache && yum install -y epel-release wget
RUN yum install -y ansible

I would like to check that all the tasks ran well, using:

docker logs 

I can see from the logs that the Ansible playbook finishes fine, but I don’t know why the container doesn’t stay up and instead stops with Exited (0).

command used is

docker-compose up -d --build 
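Worth noting: Exited (0) is not an error — a container stops as soon as its command (here, the playbook) finishes successfully. If the goal is to keep the container running afterwards, a common sketch is to chain a long-lived foreground process onto the command:

```yaml
# keep a foreground process alive after the playbook completes
command: sh -c "ansible-playbook playbook.yml && tail -f /dev/null"
tty: true          # optional; keeps a TTY allocated, similar to stdin_open
```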


Author: deconya

redis connection error within docker container

I’m trying to setup a light-weight RQ task queue app running in Docker. I keep on getting this error:

redis.exceptions.ConnectionError: Error -2 connecting to redis:6379. Name or service not known.


ValueError: Redis URL must specify one of the following schemes (redis://, rediss://, unix://)
I get the first error if I hardcode redis://redis:6379, and the second when grabbing the URL from an environment variable (os.getenv()). I looked at link1 and link2, but there wasn’t really a solution; the threads went dead.

My redis connection code is like so:

import os
import redis
from rq import Worker, Queue, Connection

listen = ['default']
redis_url = os.getenv('REDIS_URL', 'redis://localhost:6379')

if __name__ == '__main__':
    print(f"redis url: {redis_url}")
    with Connection(redis.from_url(redis_url)):
        worker = Worker(list(map(Queue, listen)))
        worker.work()  # start processing jobs from the listed queues

When it prints the redis_url it is correct; I set the environmental var, REDIS_URL to redis://redis:6379 where redis is the hostname in docker-compose.yml.

services:
  redis:
    image: redis:alpine
    expose:
      - '6379'

Appreciate any help. Thanks.
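As a side note, the second error is typically raised when the environment variable is present but holds a scheme-less value (for example redis:6379, or a value mangled by quoting in an env file). A small self-contained sketch of the scheme check redis-py performs — resolve_redis_url is a hypothetical helper name, not part of either library:

```python
import os

def resolve_redis_url(default='redis://localhost:6379'):
    """Return REDIS_URL from the environment, applying the same scheme
    check that redis-py's from_url() performs before connecting."""
    url = os.getenv('REDIS_URL', default)
    if not url.startswith(('redis://', 'rediss://', 'unix://')):
        raise ValueError(
            'Redis URL must specify one of the following schemes '
            '(redis://, rediss://, unix://)')
    return url

# An unset variable falls back to the default:
os.environ.pop('REDIS_URL', None)
print(resolve_redis_url())  # redis://localhost:6379

# A scheme-less value (a common .env mistake) reproduces the second error:
os.environ['REDIS_URL'] = 'redis:6379'
try:
    resolve_redis_url()
except ValueError as exc:
    print('rejected:', exc)
```

Checking the variable this way at startup makes the failure mode obvious before any connection attempt reaches DNS (which is what produces the first, "Name or service not known" error when the redis service is not on the same compose network).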

Author: Spencer Trinh

docker-compose healthcheck for rabbitMQ

I’m trying to run RabbitMQ using docker-compose, but the service is always starting or unhealthy.
RabbitMQ itself is running fine, so I suspect there is something wrong with my health check.

Running the healthcheck command locally does return a value.

> curl -f http://localhost:5672
AMQP    %

But docker-compose ps always says the service is unhealthy (or starting, before it runs out of time).

> docker-compose ps
docker-entrypoint.sh rabbi ...   Up (unhealthy)   15671/tcp

Here is what my docker-compose.yml file looks like.

# docker-compose.yml
version: '2.3' # note: I can't change this version, must be 2.3

services:
  rabbit:               # service name not shown in the original
    hostname: 'rabbit'
    image: rabbitmq:3.8.5-management
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5672"]
      interval: 30s
      timeout: 30s
      retries: 3
    ports:
      - '5672:5672'
      - '15672:15672'
    volumes:
      - 'rabbit-data:/var/lib/rabbitmq/mnesia/'
    networks:
      - rabbitmq

networks:
  rabbitmq:
    driver: bridge

I have also tried using nc instead of curl in the healthcheck, but got the same result.

  test: [ "CMD", "nc", "-z", "localhost", "5672" ]

From https://github.com/docker-library/rabbitmq/issues/326
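For what it's worth, port 5672 speaks AMQP rather than HTTP, so curl -f against it will not exit successfully even when the broker is healthy (note the raw "AMQP" bytes in the curl output above). An illustrative alternative, offered as a sketch rather than a confirmed fix, is to use the broker's own CLI, which ships in the rabbitmq:3.8 images:

```yaml
healthcheck:
  # rabbitmq-diagnostics speaks the broker's own protocol
  # instead of expecting an HTTP response on 5672
  test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
  interval: 30s
  timeout: 30s
  retries: 3
```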

Author: con–

GitLab keeps loading and finally fails when deploying a dockerized node.js app

GitLab Job Log

Running with gitlab-runner 13.2.0-rc2 (45f2b4ec)
  on docker-auto-scale fa6cab46
Preparing the "docker+machine" executor
Using Docker executor with image gitlab/dind:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image sha256:d5d139be840a6ffa04348fc87740e8c095cade6e9cb977785fdba51e5fd7ffec for docker:dind ...
Waiting for services to be up and running...
*** WARNING: Service runner-fa6cab46-project-18378289-concurrent-0-31a688551619da9f-docker-0 probably didn't start properly.

Health check error:
service "runner-fa6cab46-project-18378289-concurrent-0-31a688551619da9f-docker-0-wait-for-service" timeout

Health check container logs:

Service container logs:
2020-07-20T08:21:19.734721788Z time="2020-07-20T08:21:19.734543379Z" level=info msg="Starting up"
2020-07-20T08:21:19.742928068Z time="2020-07-20T08:21:19.742802844Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2020-07-20T08:21:19.743943014Z time="2020-07-20T08:21:19.743853574Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
2020-07-20T08:21:19.764021012Z time="2020-07-20T08:21:19.763898078Z" level=info msg="libcontainerd: started new containerd process" pid=23
2020-07-20T08:21:19.764159337Z time="2020-07-20T08:21:19.764107864Z" level=info msg="parsed scheme: "unix"" module=grpc
2020-07-20T08:21:19.764207629Z time="2020-07-20T08:21:19.764179926Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
2020-07-20T08:21:19.764319635Z time="2020-07-20T08:21:19.764279612Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
2020-07-20T08:21:19.764371375Z time="2020-07-20T08:21:19.764344798Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
2020-07-20T08:21:19.969344247Z time="2020-07-20T08:21:19.969193121Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13 
2020-07-20T08:21:19.969863044Z time="2020-07-20T08:21:19.969784495Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 
2020-07-20T08:21:19.970042302Z time="2020-07-20T08:21:19.969997665Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:19.970399514Z time="2020-07-20T08:21:19.970336671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
2020-07-20T08:21:19.970474776Z time="2020-07-20T08:21:19.970428684Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.019585153Z time="2020-07-20T08:21:20.019421401Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'nmodprobe: can't change directory to '/lib/modules': No such file or directoryn": exit status 1" 
2020-07-20T08:21:20.019709540Z time="2020-07-20T08:21:20.019668899Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.019934319Z time="2020-07-20T08:21:20.019887606Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.020299876Z time="2020-07-20T08:21:20.020218529Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.021038477Z time="2020-07-20T08:21:20.020887571Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 
2020-07-20T08:21:20.021162370Z time="2020-07-20T08:21:20.021121663Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 
2020-07-20T08:21:20.021406797Z time="2020-07-20T08:21:20.021348536Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'nmodprobe: can't change directory to '/lib/modules': No such file or directoryn": exit status 1" 
2020-07-20T08:21:20.021487917Z time="2020-07-20T08:21:20.021435946Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" 
2020-07-20T08:21:20.021581245Z time="2020-07-20T08:21:20.021533539Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
2020-07-20T08:21:20.030531741Z time="2020-07-20T08:21:20.030427430Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 
2020-07-20T08:21:20.030639854Z time="2020-07-20T08:21:20.030604536Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 
2020-07-20T08:21:20.030779501Z time="2020-07-20T08:21:20.030736875Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.030865060Z time="2020-07-20T08:21:20.030833703Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.030955439Z time="2020-07-20T08:21:20.030912981Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031027842Z time="2020-07-20T08:21:20.030998003Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031132325Z time="2020-07-20T08:21:20.031083782Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031202966Z time="2020-07-20T08:21:20.031174445Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031286993Z time="2020-07-20T08:21:20.031253528Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.031370557Z time="2020-07-20T08:21:20.031312376Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1 
2020-07-20T08:21:20.031709756Z time="2020-07-20T08:21:20.031650044Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2 
2020-07-20T08:21:20.031941868Z time="2020-07-20T08:21:20.031897088Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1 
2020-07-20T08:21:20.032929781Z time="2020-07-20T08:21:20.032846588Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1 
2020-07-20T08:21:20.033064279Z time="2020-07-20T08:21:20.033014391Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1 
2020-07-20T08:21:20.034207198Z time="2020-07-20T08:21:20.034120505Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034316027Z time="2020-07-20T08:21:20.034279582Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034402334Z time="2020-07-20T08:21:20.034369239Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034482782Z time="2020-07-20T08:21:20.034452282Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034564724Z time="2020-07-20T08:21:20.034533365Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034645756Z time="2020-07-20T08:21:20.034617060Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034722695Z time="2020-07-20T08:21:20.034689037Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034800005Z time="2020-07-20T08:21:20.034770572Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.034873069Z time="2020-07-20T08:21:20.034837050Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 
2020-07-20T08:21:20.036608424Z time="2020-07-20T08:21:20.036525701Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.036722927Z time="2020-07-20T08:21:20.036684403Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.036799326Z time="2020-07-20T08:21:20.036769392Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.036876692Z time="2020-07-20T08:21:20.036844684Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 
2020-07-20T08:21:20.037291381Z time="2020-07-20T08:21:20.037244979Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" 
2020-07-20T08:21:20.037493736Z time="2020-07-20T08:21:20.037445814Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock" 
2020-07-20T08:21:20.037563487Z time="2020-07-20T08:21:20.037522305Z" level=info msg="containerd successfully booted in 0.069638s" 
2020-07-20T08:21:20.087933162Z time="2020-07-20T08:21:20.087804902Z" level=info msg="Setting the storage driver from the $DOCKER_DRIVER environment variable (overlay2)"
2020-07-20T08:21:20.088415387Z time="2020-07-20T08:21:20.088327506Z" level=info msg="parsed scheme: "unix"" module=grpc
2020-07-20T08:21:20.088533804Z time="2020-07-20T08:21:20.088465157Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
2020-07-20T08:21:20.088620947Z time="2020-07-20T08:21:20.088562235Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
2020-07-20T08:21:20.088709546Z time="2020-07-20T08:21:20.088654016Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
2020-07-20T08:21:20.092857445Z time="2020-07-20T08:21:20.092749940Z" level=info msg="parsed scheme: "unix"" module=grpc
2020-07-20T08:21:20.092962469Z time="2020-07-20T08:21:20.092914347Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
2020-07-20T08:21:20.093060327Z time="2020-07-20T08:21:20.093013905Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
2020-07-20T08:21:20.093142744Z time="2020-07-20T08:21:20.093102173Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
2020-07-20T08:21:20.149109416Z time="2020-07-20T08:21:20.148965236Z" level=info msg="Loading containers: start."
2020-07-20T08:21:20.159351905Z time="2020-07-20T08:21:20.159146135Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'nbridge                167936  1 br_netfilternstp                    16384  1 bridgenllc                    16384  2 bridge,stpnip: can't find device 'br_netfilter'nbr_netfilter           24576  0 nbridge                167936  1 br_netfilternmodprobe: can't change directory to '/lib/modules': No such file or directoryn, error: exit status 1"
2020-07-20T08:21:20.280536391Z time="2020-07-20T08:21:20.280402152Z" level=info msg="Default bridge (docker0) is assigned with an IP address Daemon option --bip can be used to set a preferred IP address"
2020-07-20T08:21:20.337028532Z time="2020-07-20T08:21:20.336889956Z" level=info msg="Loading containers: done."
2020-07-20T08:21:20.435200532Z time="2020-07-20T08:21:20.435033092Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
2020-07-20T08:21:20.436386855Z time="2020-07-20T08:21:20.436266338Z" level=info msg="Daemon has completed initialization"
2020-07-20T08:21:20.476621441Z time="2020-07-20T08:21:20.475137317Z" level=info msg="API listen on [::]:2375"
2020-07-20T08:21:20.477679219Z time="2020-07-20T08:21:20.477535808Z" level=info msg="API listen on /var/run/docker.sock"


Pulling docker image gitlab/dind:latest ...
Using docker image sha256:cc674e878f23bdc3c36cc37596d31adaa23bca0fc3ed18bea9b59abc638602e1 for gitlab/dind:latest ...
Preparing environment
Running on runner-fa6cab46-project-18378289-concurrent-0 via runner-fa6cab46-srm-1595233216-1bd30100...
Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/xxx.us/backend/.git/
Created fresh repository.
Checking out 257ffdf2 as stage...

Skipping Git submodules setup
Restoring cache
Checking cache for stage node:14.5.0-alpine-2...
Downloading cache.zip from https://storage.googleapis.com/gitlab-com-runners-cache/project/18378289/stage%20node:14.5.0-alpine-2
Successfully extracted cache
Executing "step_script" stage of the job script
ln: failed to create symbolic link '/sys/fs/cgroup/systemd/name=systemd': Operation not permitted
time="2020-07-20T08:22:14.844844859Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]" 
time="2020-07-20T08:22:14.846663310Z" level=info msg="libcontainerd: new containerd process, pid: 57" 
time="2020-07-20T08:22:14.906788853Z" level=info msg="Graph migration to content-addressability took 0.00 seconds" 
time="2020-07-20T08:22:14.907996055Z" level=info msg="Loading containers: start." 
time="2020-07-20T08:22:14.910877638Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'nmodprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'n, error: exit status 1" 
time="2020-07-20T08:22:14.912665866Z" level=warning msg="Running modprobe nf_nat failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'`, error: exit status 1" 
time="2020-07-20T08:22:14.914201302Z" level=warning msg="Running modprobe xt_conntrack failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'`, error: exit status 1" 
time="2020-07-20T08:22:14.989456423Z" level=warning msg="Could not load necessary modules for IPSEC rules: Running modprobe xfrm_user failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.78-coreos/modules.dep.bin'`, error: exit status 1" 
time="2020-07-20T08:22:14.990108153Z" level=info msg="Default bridge (docker0) is assigned with an IP address Daemon option --bip can be used to set a preferred IP address" 

time="2020-07-20T08:22:15.029286773Z" level=info msg="Loading containers: done." 
time="2020-07-20T08:22:15.029664106Z" level=info msg="Daemon has completed initialization" 
time="2020-07-20T08:22:15.029823541Z" level=info msg="Docker daemon" commit=23cf638 graphdriver=overlay2 version=1.12.1 
time="2020-07-20T08:22:15.048665494Z" level=info msg="API listen on /var/run/docker.sock" 
time="2020-07-20T08:22:15.049046558Z" level=info msg="API listen on [::]:7070" 

# Keeps loading and finally fails after a couple of seconds


cache:
  key: '$CI_COMMIT_REF_NAME node:14.5.0-alpine'
  paths:
    - node_modules/

stages:
  - release
  - deploy

.release:
  stage: release
  image: docker:19.03.12
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker version
    - docker info
    - echo "$CI_JOB_TOKEN" | docker login --username $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
    - docker build --pull --tag $TAGGED_IMAGE --cache-from $TAGGED_IMAGE --build-arg NODE_ENV=$CI_ENVIRONMENT_NAME .
    - docker push $TAGGED_IMAGE
    - docker logout $CI_REGISTRY

.deploy:
  stage: deploy
  image: gitlab/dind:latest
  services:
    - docker:dind
  variables:
    DOCKER_COMPOSE_PATH: '~/docker-composes/$CI_PROJECT_PATH/docker-compose.yml'
  script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - rsync -avR --rsync-path="mkdir -p ~/docker-composes/$CI_PROJECT_PATH/; rsync" ./docker-compose.yml root@$DEPLOYMENT_SERVER_IP:~/docker-composes/$CI_PROJECT_PATH/
    - ssh root@$DEPLOYMENT_SERVER_IP "echo "$CI_REGISTRY_PASSWORD" | docker login --username $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker-compose -f $DOCKER_COMPOSE_PATH rm -f -s -v $CI_COMMIT_REF_NAME; docker pull $TAGGED_IMAGE; docker-compose -f $DOCKER_COMPOSE_PATH up -d $CI_COMMIT_REF_NAME;"

release-staging:        # job name not shown in the original
  extends: .release
  only:
    - stage
  environment:
    name: staging

deploy-staging:         # job name not shown in the original
  extends: .deploy
  only:
    - stage
  environment:
    name: staging



FROM node:14.5.0-alpine



# Set working directory
WORKDIR /var/www/

# Install app dependencies
COPY package.json package-lock.json ./
RUN npm ci --silent --only=production

COPY . ./

# Start the application
CMD [ "npm", "run", "start" ]


version: '3.8'

services:
  redis-stage:
    container_name: redis-stage
    image: redis:6.0.5-alpine
    ports:
      - '7075:6379'
    restart: always
    networks:
      - my-proxy-net

  backend:               # service name not shown in the original
    container_name: xxx-backend-stage
    image: registry.gitlab.com/xxx.us/backend:latest
    build: .
    expose:
      - '7070'
    restart: always
    networks:
      - my-proxy-net
    depends_on:
      - redis-stage
    environment:
      VIRTUAL_HOST: backend.xxx.us
      VIRTUAL_PROTO: https
      LETSENCRYPT_HOST: backend.xxx.us

networks:
  my-proxy-net:
    name: mynetwork

Update 1

I got a warning on the page claiming I have used over 30% of my shared runner minutes. Maybe it is about not having enough minutes.

Update 2

The release stage gets completed successfully.

Update 3

Before I ran into this problem, I had deployed successfully once. I decided to test that commit again to see if it succeeds, but now it fails!

Author: Alireza

Jenkins – docker login doesn’t seem to persist: docker pull won’t work but docker-compose can pull without problems

I am setting up a new Jenkins as our old one is based on a deprecated base image.

Dockerfile is:

FROM jenkins/jenkins:lts-centos7

USER root

RUN yum -y install docker \
    && yum clean all

USER jenkins

When I docker exec ... into the container and log into our Sonatype Nexus to pull Docker images from it, the successful login does not seem to be heeded by Jenkins afterwards:

[root@909fb3a3d52c .docker]# docker login https://our.nexus.internal:<endpoint>
Username (jenkins): jenkins
Login Succeeded
[root@909fb3a3d52c .docker]# pwd
/root/.docker
[root@909fb3a3d52c .docker]# cat config.json
{
    "auths": {
        "https://our.nexus.internal:<endpoint>": {
            "auth": "<CORRECT base64 hash>"
        }
    }
}
Login has succeeded, the auth credentials are written to /root/.docker/config.json and when I base64-decode them they are correct “jenkins:”.


[root@909fb3a3d52c .docker]# docker pull https://our.nexus.internal:<endpoint>/myImage:myTag
Error response from daemon: Get https://our.nexus.internal:<endpoint>/v2/myImage/manifests/myTag: no basic auth credentials

This will also happen during execution of a Jenkins pipeline (obviously), where job log reports:

docker pull https://our.nexus.internal:<endpoint>/myImage:myTag

Error response from daemon: Get https://our.nexus.internal:<endpoint>/v2/myImage/manifests/myTag: no basic auth credentials

script returned exit code 1

I assume this to not be a question about nexus but about how the credentials from docker login are stored and used.

Sidenote: We have a different Jenkins currently operational which is based on jenkinsci/blueocean:1.22.0 which does NOT display this behaviour. It can docker login fine and docker pull fine.


Running docker-compose does work while a manual docker pull... will not:

[root@3eeee032ff08 /]# docker login https://nexus:port
Username: jenkins
Login Succeeded

[root@3eeee032ff08 /]# docker pull nexus:port/company/image:myTag
Error response from daemon: Get https://nexus:port/v2/company/image/manifests/myTag: no basic auth credentials

[root@3eeee032ff08 /]# cat docker-compose.yml
version: '3.3'

services:
  jenkins:
    user: root
    image: nexus:port/company/image:myTag

[root@3eeee032ff08 /]# docker-compose up

Creating network "default_default" with the default driver
Pulling jenkins (nexus:port/company/image:myTag)...
myTag: Pulling from company/image
f34b00c7da20: Pull complete
3f316190de62: Pull complete
480967924aca: Pull complete
619ac94672e0: Pull complete
cb1c6713a236: Pull complete
55766a17b1c6: Pull complete
8f6e881b2ef2: Pull complete
f118a721eee8: Pull complete
fae3c8e02b83: Pull complete
093ceb59408d: Pull complete
e3bd3cbf474d: Pull complete
41a6f4e5de6d: Pull complete
aa1aece4c754: Pull complete
fc897d8ca22f: Pull complete
Digest: sha256:3fd74169f648784fe960add9c0138e3802a91e82c9b5fdbd734faaeaaef696f9
Status: Downloaded newer image for nexus:port/company/image:myTag

Go to Source
Author: Worp

Are there turn-key development webapp containers for evaluation?

In short: I want a ready-made webapp framework with a web server and a database in a container, such that I can edit files on my local drive and view the results through a browser pointed at the container – in order to get started with said framework. I want to be writing and viewing webpages within 10 minutes of starting. Seems achievable in theory.

If that’s clear, feel free to skip the rest…

Every few years I check in on the web development world, which I am not involved in. I always bail out because the tools take too long to set up and rarely work without lots of tinkering. I would think by now there would be some simple, turn-key docker-compose solutions that give a reasonable working setup (e.g. Node.js + MySQL, LAMP, etc.).

Ideally it would look to a configurable directory on the local disk for the actual code to run, so I can just start working on a project. I am not locked in to any platform, but it should be up and running (serving pages) inside of 10 minutes. Shouldn’t have to configure more than a couple ports, passwords, and directories.
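For what it's worth, the shape being asked for fits in a single compose file built only from official images. A minimal, untested sketch (image tags, ports, the password, and the ./src path are all placeholder assumptions, not a recommendation of a particular stack):

```yaml
# Minimal LAMP-ish dev sketch: edit files in ./src locally, view at
# http://localhost:8080. Tags, ports, and passwords are placeholders.
version: "3.8"
services:
  web:
    image: php:8.2-apache          # official image, no custom Dockerfile
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html        # local code, served live
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example # placeholder password
```

Running `docker compose up` from the directory containing this file and an `./src/index.php` would be the whole setup.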

I have found tutorials that get halfway to setting it up, but not one actually works as advertised. After having spent the better part of the past two days trying solutions, I am about ready to give up yet again. My goal is not to get into the minutiae of setting up a deployment. I just want to mess around with the frameworks and languages and see what they are capable of and what I might like to use.

Is there something out there I am just not seeing? Is there something that prevents this? Wouldn’t this be the preferred method of starting a project for just about everyone, experienced or not?

It seems like a few of the common stacks would be really handy.

If I see something that requires making custom Docker files, I am out. I’ve been down that road (quite extensively in the supercomputing realm with Singularity), and it seems entirely unnecessary here. The official containers for these tools seem okay, they just need to be integrated and brought up together.

This and this (ignore the terrible politics splashed all over the site) seemed to get close in terms of the goal, but they did not work in practice and need debugging. Even then, trying them required reading through a lot of unnecessary material and copying and pasting pieces of multiple versions of files.

I would think the maintainers of projects would have demo versions ready to go. It would probably attract new users.

In many cases, the final deployment very well may look like the development one, which would be a nice bonus.

Author: ThisShouldBeSimpleNow

Traefik: two hosts in the same config

I’m trying to run Nexus in Swarm behind Traefik. Nexus has an admin interface that runs on port 8081 and also a Docker registry repository running on port 9615.

My goal is to access the registry through registry.docker.domain and Nexus through nexus.domain, but I am not able to do this.

traefik config on compose:

    ports:
      - 8081
      - 9615
    labels:
      - "traefik.enable=true"
      # Config Nexus
      - "traefik.http.router.nexus.rule=Host('nexus.domain')"
      - "traefik.http.services.nexus.loadbalancer.server.port=8081"
      - "traefik.http.routers.nexus.entrypoints=web-secure"
      - "traefik.http.routers.nexus.service=nexus"

      # Config Registry
      - "traefik.http.router.registry.rule=Host('registry.docker')"
      - "traefik.http.services.registry.loadbalancer.server.port=9615"
      - "traefik.http.routers.registry.entrypoints=web-secure"
      - "traefik.http.routers.registry.tls=true"
      - "traefik.http.routers.registry.service=registry"

If I set up only the registry labels, I can access the registry at https://registry.domain and Nexus at registry.domain:PORT.

If I set up only the nexus labels, I can access Nexus at http://nexus.domain.

But with the config above, with labels for both at the same time, it doesn’t work: both Nexus and the registry return 404.

Note: I’m using Traefik 2.2; the documentation and usage are very different from 1.6.
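For comparison, the Traefik 2.x documentation spells the router namespace traefik.http.routers (plural) on every line, and puts backticks rather than quotes inside Host(...); the config above uses traefik.http.router (singular) on the two rule lines. A sketch of the documented shape (hostnames are placeholders):

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.nexus.rule=Host(`nexus.domain`)"
  - "traefik.http.routers.nexus.service=nexus"
  - "traefik.http.services.nexus.loadbalancer.server.port=8081"
  - "traefik.http.routers.registry.rule=Host(`registry.docker.domain`)"
  - "traefik.http.routers.registry.service=registry"
  - "traefik.http.services.registry.loadbalancer.server.port=9615"
```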

Author: Joao Vitorino

Failure to build with docker-compose: Invalid signal: SIGQUIT

PS D:\> docker-compose up
Building app
Step 1/7 : FROM php:7.4-fpm-alpine
 ---> 5ae58b49885c
Step 2/7 : RUN apk add --no-cache autoconf curl dpkg-dev dpkg freetype-dev file g++ gcc git icu-dev jpeg-dev libc-dev libmcrypt-dev libpng-dev libjpeg-turbo-dev libxml2-dev libzip-dev make mariadb-dev postgresql-dev pkgconf php7-dev re2c rsync unzip wget zlib-dev
ERROR: Service 'app' failed to build: Invalid signal: SIGQUIT

Version: 19.03.8

OS/Arch: windows/amd64

Experimental: true

Switching containers doesn’t help. What can I do, and how do I debug this?

Thank you.

Author: dick-justice