What is the purpose of running PHP-FPM in its own container instead of in the same container as apache2/nginx?

I’m fairly new to Docker, so excuse any obvious ignorance or misunderstandings. That said, I’ve been coding and configuring web applications for a long time now. I have recently been dabbling with some more sophisticated (Docker and “traditional”) web/application server setups, experimenting with performance enhancements and simplifying deployments.

My personal favorite configuration for most projects thus far is nginx as a (mostly) “static file” web server &&|| caching mechanism &&|| load balancer in a reverse-proxy config, with Apache v2.4+ as the “backend” running PHP (also Perl &&|| Python) in a FastCGI (PHP-FPM) configuration (with mod_php disabled) and mpm_event (instead of mpm_prefork). We can always add in something else as well, such as Redis or memcached, where applicable (but I rarely see a significant performance gain when using Cloudflare caching).
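
For illustration, a minimal docker-compose sketch of that kind of composition might look roughly like this (service names, ports, and paths are assumptions for the example, not taken from the actual setup):

version: '3'
services:
  nginx:
    image: nginx:alpine          # static files, caching, reverse proxy
    ports:
      - '80:80'
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro   # proxy_pass to http://backend:8080
    depends_on:
      - backend
  backend:
    build: ./apache-php          # hypothetical image: Apache 2.4 (mpm_event) + PHP-FPM, mod_php disabled
    expose:
      - '8080'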

I’ve been dabbling with a few different ways to keep my “favorite web server composition” flexible and “ephemeral” enough to accommodate any and all possible options for further performance gains or resource load management. What I keep seeing is that there seems to have been a shift towards running PHP-FPM in its own separate “stand-alone” Docker container, sometime around late 2019.

Why?

While I can appreciate keeping resources isolated and separate for configurations that are easier to debug/config/secure, I don’t fully understand what the benefits are of having PHP-FPM in a separate Docker container that is implicitly REQUIRED by the application the containers collectively make up.

If anything, having a separate PHP-FPM container seems like additional configuration, additional resource consumption and an even more complicated build/deploy consideration in a CI/CD pipeline.

I can even get on board with “simple preference”: instead of running Apache2 and nginx in the same Ubuntu/Debian or RHEL/CentOS container, break nginx off into its own container(s) and simply configure your hostname(s) &&|| IPs and ports appropriately.

But what are the practical use cases and advantages of having PHP-FPM separated from the Apache or nginx containers, beyond perhaps using more Dockerfile &&|| docker-compose.yaml and less ba.sh scripting to get the same job done? If any part of the application fails or falters, then functionality will suffer. The additional network &&|| docker-network communications and hostname resolutions seem like they would just be a performance penalty compared to combining certain aspects of the docker-compose-d application, rather than splitting them up simply for the sake of splitting them up.
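
For comparison, the “split” variant being asked about might be sketched roughly like this (again, names and paths are illustrative): nginx serves the static files and hands *.php requests over FastCGI to a separate PHP-FPM container, with both containers sharing the code volume:

version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - '80:80'
    volumes:
      - ./app:/var/www/html:ro
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro   # fastcgi_pass php:9000;
    depends_on:
      - php
  php:
    image: php:7.4-fpm-alpine    # PHP-FPM listening on port 9000 inside the compose network
    volumes:
      - ./app:/var/www/html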

Go to Source
Author: kanidrive

docker-compose fails to keep CentOS container up after build

I’m building a basic CentOS image with only a few packages, but I must be forgetting something, because the build finishes fine yet the container does not stay up. What am I doing wrong?

docker-compose:

version: '3'
services:
  config_server:
    build: ../common/docker/ansible/.
    stdin_open: true
    volumes:
      - ../../api:/var/www/api
      - ../../server:/var/www/server
      - ../server/provision/server:/tmp/ansible
    working_dir: /tmp/ansible
    networks:
      net:
        ipv4_address: 172.28.0.10
    command: ansible-playbook playbook.yml

networks:
  net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/24

Dockerfile:

FROM centos:7
RUN yum makecache && yum install -y epel-release wget
RUN yum install -y ansible

I would like to check whether all tasks ran well, so I’m using:

docker logs 

In the logs I can see the Ansible playbook finishing fine, but I don’t know why the container doesn’t stay up and stops with Exited (0).
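
For reference, a container only stays up while the process given in command keeps running, so once ansible-playbook playbook.yml returns successfully the container stops with Exited (0), which is a normal exit rather than an error. If the intent really is to keep the container alive after the playbook has run, one possible sketch (the wrapper command is purely illustrative) would be:

  config_server:
    # ...same build, volumes and networks as above...
    # Run the playbook, then keep a foreground process running so the
    # container does not exit once the playbook finishes.
    command: sh -c "ansible-playbook playbook.yml && tail -f /dev/null"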

The command used is:

docker-compose up -d --build 

Suggestions?

Go to Source
Author: deconya

redis connection error within docker container

I’m trying to set up a lightweight RQ task queue app running in Docker. I keep getting this error:

redis.exceptions.ConnectionError: Error -2 connecting to redis:6379. Name or service not known.

or

ValueError: Redis URL must specify one of the following schemes (redis://, rediss://, unix://)

I get the first error if I hardcode redis://redis:6379 and the second when grabbing it from an environment variable (os.getenv()). I looked at: link1 and link2, but there wasn’t really a solution; the thread went dead.

My redis connection code is like so:

import os
import redis
from rq import Worker, Queue, Connection

# Queues this worker should listen on
listen = ['default']

# Fall back to a local Redis instance if REDIS_URL is not set
redis_url = os.getenv('REDIS_URL', 'redis://localhost:6379')

if __name__ == '__main__':
    print(f"redis url: {redis_url}")
    with Connection(redis.from_url(redis_url)):
        worker = Worker(list(map(Queue, listen)))
        worker.work()

When it prints the redis_url it is correct; I set the environment variable REDIS_URL to redis://redis:6379, where redis is the service hostname in docker-compose.yml.

  redis:
    image: redis:alpine
    expose:
      - '6379'
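
For comparison, a sketch of how the worker side of the same docker-compose.yml might be wired so that the hostname redis resolves (the worker service name and build context are assumptions, not from the original file):

  worker:
    build: .                      # hypothetical build context containing the RQ worker script
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    # Both services must sit on the same compose network (the default
    # network works) for the hostname "redis" to resolve from the worker.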

Appreciate any help. Thanks.

Go to Source
Author: Spencer Trinh

Failure to build with docker-compose: Invalid signal: SIGQUIT

PS D:\> docker-compose up
Building app
Step 1/7 : FROM php:7.4-fpm-alpine
 ---> 5ae58b49885c
Step 2/7 : RUN apk add --no-cache autoconf curl dpkg-dev dpkg freetype-dev file g++ gcc git icu-dev jpeg-dev libc-dev libmcrypt-dev libpng-dev libjpeg-turbo-dev libxml2-dev libzip-dev make mariadb-dev postgresql-dev pkgconf php7-dev re2c rsync unzip wget zlib-dev
ERROR: Service 'app' failed to build: Invalid signal: SIGQUIT

Version: 19.03.8

OS/Arch: windows/amd64

Experimental: true

Switching containers doesn’t help. What can I do, or how can I debug this?

Thank you.

Go to Source
Author: dick-justice