What is the purpose of running PHP-FPM in its own container instead of in the same container as apache2/nginx?

I’m fairly new to Docker so excuse any obvious ignorance or misunderstandings. That said, I’ve been coding and configuring web applications for a long time now. I have recently been dabbling with some more sophisticated (Docker and “traditional”) web/application server setups and experimenting with performance enhancements and simplifying deployments.

My personal favorite configuration for most projects thus far is nginx as a (mostly) "static file" web server and/or caching mechanism and/or load balancer in a reverse-proxy configuration, with Apache 2.4+ as the "backend" running PHP (and also Perl and/or Python) in a FastCGI (PHP-FPM) configuration, with mod_php disabled and mpm_event (instead of mpm_prefork). We can always add something else such as Redis or memcached where applicable (but I rarely see a significant performance gain when using Cloudflare caching).
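For concreteness, the front-end half of that composition might look like the sketch below — nginx serving static assets directly and proxying everything else to the Apache backend. All hostnames, ports, and paths here are illustrative assumptions, not taken from the post.

```nginx
# Hypothetical sketch of the nginx front end described above.
# The backend address, server_name, and paths are illustrative.
upstream apache_backend {
    server 127.0.0.1:8080;   # Apache with mpm_event, PHP handled by PHP-FPM
}

server {
    listen 80;
    server_name example.com;

    # Serve static assets directly from nginx
    location ~* \.(css|js|png|jpg|gif|ico|svg|woff2?)$ {
        root /var/www/example/public;
        expires 7d;
    }

    # Everything else goes to the Apache backend
    location / {
        proxy_pass http://apache_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```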

I’ve been dabbling with a few different ways to keep my “favorite web server composition” flexible and “ephemeral” enough to accommodate any and all possible options for further performance gains or resource load management. What I keep seeing is that there seems to have been a shift towards running PHP-FPM in its own separate “stand-alone” Docker container, sometime around late 2019.


While I can appreciate keeping resources isolated and separate for an easier-to-debug/configure/secure setup, I don’t fully understand the benefits of putting PHP-FPM in a separate Docker container when it is implicitly required by the application that the containers together make up.

If anything, having a separate PHP-FPM container seems like additional configuration, additional resource consumption, and an even more complicated build/deploy consideration in a CI/CD pipeline.

I can even get on board with “simple preference”: for example, instead of running Apache2 and nginx on the same Ubuntu/Debian or RHEL/CentOS container, break nginx off into its own container(s) and simply configure your hostname(s) and/or IPs and ports appropriately.

But what is a practical use case, and what are the advantages, of separating PHP-FPM from the Apache or nginx containers, beyond perhaps using more Dockerfile and/or docker-compose.yaml and less bash scripting to get the same job done? If any part of the application fails or falters, functionality will suffer either way. The additional network (and/or Docker-network) communication and hostname resolution seems like it would just be a performance penalty compared to combining certain aspects of the docker-compose-d application, rather than splitting them up simply for the sake of splitting them up.
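For reference, the kind of split being asked about is usually expressed as something like the following docker-compose.yaml sketch. Service names, image tags, and paths are hypothetical; the key detail is that nginx would reach PHP-FPM over the compose network (e.g. `fastcgi_pass php:9000;`) instead of a local Unix socket.

```yaml
# Hypothetical sketch of the "separate PHP-FPM container" layout
# under discussion; images, volumes, and names are illustrative.
version: "3.8"
services:
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"
    volumes:
      - ./app:/var/www/html:ro              # static files served by nginx
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php
  php:
    image: php:8.2-fpm
    volumes:
      - ./app:/var/www/html                 # same code, executed by PHP-FPM
```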

Author: kanidrive

NGINX serving by IP only, not by server name

A Raspberry Pi running (arm-)Arch sits behind my router’s NAT. The RasPi has a static IP and nginx serving on port 8093. (nginx is also listening on port 80 for another web page.)

The server_name is “pi.hole” and it is resolved correctly by the source machine to

The interface opens successfully in my browser at “”

A “404 Not Found” pops up when opening “pi.hole”

Below is my /etc/nginx/nginx.conf

user http;

worker_processes auto;

worker_rlimit_nofile 10240;

events {
    # Use epoll on Linux 2.6+
    use epoll;
    # Max number of simultaneous connections per worker process
    worker_connections 2048;
    # Accept all new connections at one time
    multi_accept on;
}

http {

    server_tokens off;

    sendfile on;

    tcp_nopush on;

    tcp_nodelay off;
    send_timeout 30;

    keepalive_timeout 60;

    keepalive_requests 200;
    reset_timedout_connection on;
    types_hash_max_size 2048;

    server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    gzip_min_length 1000;

    gzip_disable "msie6";
    gzip_proxied any;

    gzip_comp_level 5;

    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

and /etc/nginx/conf.d/pihole.conf

# /etc/nginx/conf.d/pihole.conf
# https://github.com/pi-hole/pi-hole/wiki/Nginx-Configuration

server {
    listen ;

    root /srv/http/pihole;
    server_name pi.hole;
    autoindex off;

    proxy_intercept_errors on;
    error_page 404 /pihole/index.php;

    index pihole/index.php index.php index.html index.htm;

    location / {
        expires max;
        try_files $uri $uri/ /pihole/index.php?$args =404;
        add_header X-Pi-hole "A black hole for Internet advertisements";
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        #fastcgi_param VIRTUAL_HOST "pi.hole";
        #fastcgi_param SERVER_NAME $host;
        fastcgi_param SERVER_NAME "pi.hole";
    }

    location /admin {
        root /srv/http/pihole;
        index index.php index.html index.htm;
        add_header X-Pi-hole "The Pi-hole Web interface is working!";
        add_header X-Frame-Options "DENY";
    }

    location ~ /\.ttf {
        add_header Access-Control-Allow-Origin "*";
    }

    location ~ /admin/\. {
        deny all;
    }

    location ~ /\.ht {
        deny all;
    }
}
I tried adding the IP to the listen directive and playing with the fastcgi_param for the host name, to no avail.
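For context on the matching logic: nginx first narrows the candidate server blocks to those whose listen address and port match the request, and only then compares the request’s Host header against each server_name. A sketch of making that matching explicit — the port 8093 is taken from the description above, and default_server is an assumption about the intended behavior, not something from the original config:

```nginx
# Sketch only: an explicit listen port plus default_server makes
# this block answer both by name and by bare IP on that port.
server {
    listen 8093 default_server;
    server_name pi.hole;
    ...
}
```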

The user running nginx is the same as for php-fpm and has ownership and read/write permissions on the root and down the tree.

What am I doing wrong?

Author: superAnnoyingUser