What is the purpose of running PHP-FPM in its own container instead of in the same container as apache2/nginx?

I’m fairly new to Docker so excuse any obvious ignorance or misunderstandings. That said, I’ve been coding and configuring web applications for a long time now. I have recently been dabbling with some more sophisticated (Docker and “traditional”) web/application server setups and experimenting with performance enhancements and simplifying deployments.

My personal favorite configuration for most projects thus far is nginx as a (mostly) “static file” web server and/or caching mechanism and/or load balancer in a reverse-proxy config, with Apache 2.4+ as the “backend” running PHP (also Perl and/or Python) in a FastCGI (PHP-FPM) configuration (with mod_php disabled) and mpm_event (instead of mpm_prefork). We can always add in something else such as Redis or memcached, where applicable (but I rarely see a significant performance gain when using Cloudflare caching).
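As a sketch, the front half of that stack might look like this (example.com, the paths, and port 8080 are hypothetical placeholders, not a drop-in config):

```nginx
# nginx as the static-file server and reverse proxy in front of Apache.
server {
    listen 80;
    server_name example.com;

    root /var/www/example/public;

    # Serve static assets directly from nginx
    location ~* \.(css|js|png|jpg|gif|svg|ico|woff2?)$ {
        expires 7d;
        try_files $uri =404;
    }

    # Everything else goes to Apache (mpm_event + PHP-FPM behind it)
    location / {
        proxy_pass;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```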

I’ve been dabbling with a few different ways to keep my “favorite web server composition” flexible and “ephemeral” enough to accommodate any and all possible options for further performance gain or resource load management. What I keep seeing is that there seems to have been a shift towards running PHP-FPM in its own separate “stand-alone” Docker container, sometime around late 2019.


While I can appreciate keeping resources isolated and separate to make configurations easier to debug, configure, and secure, I don’t fully understand the benefits of having PHP-FPM in a separate Docker container that is implicitly REQUIRED by the application the containers compose.

If anything, having a separate PHP-FPM container seems like additional configuration, additional resource consumption, and an even more complicated build/deploy consideration in a CI/CD pipeline.

I can even get onboard with “simple preference”, such as, instead of running Apache2 and nginx on the same Ubuntu/Debian or RHEL/CentOS container, breaking nginx off into its own container(s) and simply configuring your hostname(s) and/or IPs and ports appropriately.

But what is a practical use case, and what are the advantages, of having PHP-FPM separated from the Apache or nginx containers, beyond perhaps using more Dockerfile and/or docker-compose.yaml and less ba.sh scripting to get the same job done? If any part of the application fails or falters, functionality will suffer. The additional network and/or Docker-network communication and hostname resolution seems like it would just be a performance penalty over combining certain aspects of the docker-compose-d application together, rather than splitting them up simply for the sake of splitting them up.
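For context, the split setup under discussion typically looks something like this in a compose file — a sketch with hypothetical image names and paths, not a definitive layout:

```yaml
# docker-compose.yml sketch: nginx and PHP-FPM as separate services.
# The code volume is shared so nginx serves statics while FPM executes PHP.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./app:/var/www/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php
  php:
    image: php:8-fpm-alpine
    volumes:
      - ./app:/var/www/html
# nginx.conf then points FastCGI at the service name over the
# compose network:  fastcgi_pass php:9000;
```

The usual argument for the split is that each service can then be versioned, scaled, and replaced independently of the other, at the cost of the extra network hop and configuration the question describes.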

Go to Source
Author: kanidrive

Forward SSH from Reverse Proxy Server through VPN to Home Server

I’m trying to set up GitLab on my home server. HTTPS is working and I can get to GitLab’s interface, but SSH is not and thus I can’t push code to the server.

Here is the setup:

Cloudflare <--> Reverse Proxy (nginx, hosted on Digital Ocean) <--- VPN ---> Untangle Firewall <--> GitLab Server (on ESXi)

If I try to SSH directly from the Reverse Proxy to the GitLab server (over the VPN connection), it works perfectly.

If I try to SSH from my laptop using the domain name, I get:

kex_exchange_identification: Connection closed by remote host
Connection closed by port 2095

If I try to SSH from my laptop using the Reverse Proxy’s IP (thus cutting out Cloudflare), I get:

Bad packet length 1231976033.
ssh_dispatch_run_fatal: Connection to {{ IP }} port 2095: message authentication code incorrect

I’m currently trying to use the nginx stream module to do so, and this is the stream setup:

stream {
        upstream git-ssh {
                server {{INTERNAL GITLAB IP}}:22;
        }
        server {
                listen 2095;
                proxy_pass {{INTERNAL GITLAB IP}}:22;
                proxy_protocol on;
        }
}

The reason I declared upstream git-ssh and then don’t use it is that I wondered whether that was the problem, but it makes no difference whether I use it or not.
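One likely culprit, hedged: `proxy_protocol on;` makes nginx prepend a textual "PROXY …" preamble to the TCP stream, which sshd cannot parse — the "Bad packet length" and "message authentication code incorrect" errors are consistent with that. A plain passthrough sketch (reusing the placeholders above) would be:

```nginx
stream {
    upstream git-ssh {
        server {{INTERNAL GITLAB IP}}:22;
    }
    server {
        listen 2095;
        proxy_pass git-ssh;     # plain TCP passthrough
        # proxy_protocol on;    # omit: sshd does not speak the PROXY protocol
    }
}
```

Note also that Cloudflare’s HTTP proxying does not carry SSH, so the domain-name test can only work if the DNS record is unproxied (grey-cloud) or a TCP tunneling product is used — which would explain the separate `kex_exchange_identification` failure.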

I’m not familiar with iptables, but I tried the following commands:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2095 -j DNAT --to-destination {{GITLAB IP}}:22
sudo iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 2095 -j SNAT --to-source {{PROXY IP}}

But it didn’t seem to work. ssh just sits there returning nothing and eventually times out.
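For the iptables route, one sketch — note the POSTROUTING rule must match the rewritten destination (port 22), since DNAT has already happened by the time POSTROUTING runs; `{{VPN IFACE}}` is a hypothetical placeholder for the VPN interface name:

```shell
# Allow the proxy to forward packets at all
sudo sysctl -w net.ipv4.ip_forward=1
# Rewrite incoming :2095 to the GitLab box's :22
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2095 \
     -j DNAT --to-destination {{GITLAB IP}}:22
# After DNAT the packet's dport is 22 and it egresses via the VPN interface
sudo iptables -t nat -A POSTROUTING -o {{VPN IFACE}} -p tcp \
     -d {{GITLAB IP}} --dport 22 -j MASQUERADE
```

A silent timeout like the one described is also what missing forwarding (or a FORWARD-chain drop) looks like, so checking `iptables -L FORWARD` may be worthwhile.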

I am lost now, and was hoping someone could give me pointers?

Go to Source
Author: Cody Dostal

How can I use multiple locations and regex in nginx?

Please help me with an nginx.conf that uses regex and multiple locations.

When I create the nginx.conf with next location, proxy_pass works correctly:

location = /test-a-1-1 {
       proxy_pass http://test-a-1-1;
       proxy_set_header Host $host;
       proxy_set_header X-Real-IP $real_remote_addr;
       proxy_redirect off;
}

location = /test-b-1-2 {
      proxy_pass http://test-b-1-2;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $real_remote_addr;
      proxy_redirect off;
}

But I have many locations and want to use one multi-location block; with this configuration I get the error:

test-b-1-2 could not be resolved (3: Host not found)

location ~ ^/(test-a-[0-9]-[0-9]|test-b-[0-9]-[0-9]) {
                set $test $1;
                proxy_pass http://$test;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_redirect off;
}
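Background that explains the error, for what it's worth: when `proxy_pass` contains a variable, nginx defers name resolution to request time, and runtime resolution uses the `resolver` directive rather than `/etc/hosts` — hence "Host not found". A sketch (127.0.0.11 is an assumption, Docker's embedded DNS; substitute your own resolver):

```nginx
location ~ ^/(test-a-[0-9]-[0-9]|test-b-[0-9]-[0-9]) {
    resolver 127.0.0.11 valid=30s;  # required once proxy_pass uses a variable
    set $test $1;
    proxy_pass http://$test;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect off;
}
```

Alternatively, declaring an `upstream test-a-1-1 { … }` block for each backend also works without a resolver: a variable `proxy_pass` checks upstream group names before falling back to DNS.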

Please help me with regex and multi locations.
Thank you.

Go to Source
Author: perrfect

nginx 1.14 proxy to serve stale content when upstream is not available

I have nginx running inside a Docker container on Docker for Windows with WSL2. Its function is to act as a caching proxy to an upstream IIS server.

container/userspace: CentOS 8 container running systemd as PID 1.
nginx version: 1.14, as provided by CentOS 8.
The config file dates back to nginx 1.1 days on RHEL 6, but reports no syntax errors.

Problem: When upstream is down, instead of serving stale content, nginx reports “Gateway Timeout.”
Expectation: Nginx would still serve static assets from cache.

The proxying appears to work, as I can see HIT and MISS requests, respectively, in the access.log.

Could anyone with more nginx experience comment on this use case and if there’s anything obviously missing in the configuration?
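For comparison, a minimal stale-serving sketch (hypothetical zone and upstream names) — the key pieces are a long `inactive` on `proxy_cache_path` plus `proxy_cache_use_stale` covering error/timeout, both of which the config below appears to have:

```nginx
proxy_cache_path /var/cache/nginx/demo keys_zone=demo-cache:10m inactive=30d;

server {
    listen 8080;
    location / {
        proxy_pass http://upstream.example;
        proxy_cache demo-cache;
        proxy_cache_valid 200 302 30d;
        # Serve whatever is cached when upstream errors out or times out
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    }
}
```

One thing worth checking: stale entries can only be served for URLs that were actually cached while the upstream was healthy, and a down upstream still waits out `proxy_connect_timeout` before the stale fallback kicks in.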


user              nginx;
worker_processes  1;
worker_rlimit_nofile 1000000;  # Added

error_log  /var/log/nginx/error.log;
error_log  /var/log/nginx/error.log  notice;
error_log  /var/log/nginx/error.log  info;

pid        /var/run/nginx.pid;
load_module /usr/lib64/nginx/modules/ngx_http_perl_module.so;

events {
    worker_connections  9192;  # increased from 1024 to 9192
    use epoll;  # Added
    multi_accept on;  # Added

#   debug_connection;
}

http {
    # Include the perl module
    perl_modules perl/lib;

    # request_uri to lowercase request_uri
    perl_set $uri_request_lowercase 'sub {
        my $r = shift;
        my $request_uri = $r->request_uri;
        $request_uri = lc($request_uri);
        return $request_uri;
    }';

    # Request to lowercase request_uri
    perl_set $uri_lowercase 'sub {
        my $r = shift;
        my $uri = $r->uri;
        $uri = lc($uri);
        return $uri;
    }';

    # Returns everything to the left of ?
    perl_set $uri_left_request_lowercase 'sub {
        my $r = shift;
        my $uri = $r->uri;
        $uri = lc($uri);
        my $request_uri = substr($uri, 0, index($uri, "?"));
        return $request_uri;
    }';

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" -  "$upstream_cache_status" '
                      '"$request_method $scheme://$host$request_uri $server_protocol" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    open_file_cache max=100000 inactive=30s;  # Added
    reset_timedout_connection on;  # Added

    gzip  on;  # Uncommented this to set gzip ON
    gzip_comp_level     1;  # Added
    gzip_disable        msie6;  # Added
    gzip_proxied        any;  # Added
    gzip_types          text/plain text/css application/x-javascript text/xml application/xml application/rss+xml text/javascript;  # Added

    #  Cache Configurations
    proxy_cache_path /var/cache/nginx/www-WebSite.com levels=1:2 keys_zone=www-WebSite.com-server-cache:300m max_size=5000m inactive=30d;

    proxy_temp_path /tmp/nginx;  # Added

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/*.conf;
}



server {
    listen      443 ssl;
    server_name www.WebSite.com;
    ssl_certificate     /etc/nginx/ssl/__WebSite___cert.cer;
    ssl_certificate_key /etc/nginx/ssl/wildcard-WebSite.com--.key;

    location ~* /_layouts/authenticate.aspx.*$ {
        root    /usr/share/nginx/html;
        rewrite ^ /error502.html?;
    }

    location / {
        proxy_pass                  https://www.WebSite.com/;
        add_header X-dpu-cachesource "DR-NGINX-CONTAINER";
        proxy_connect_timeout       15s;
        proxy_cache_key             "$scheme://$host$uri";
        proxy_cache                 www-WebSite.com--server-cache;
        proxy_cache_valid           302 200 30d;
        proxy_cache_use_stale       updating error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
        proxy_ignore_headers        X-Accel-Expires Expires Cache-Control Set-Cookie;
        proxy_set_header            X-Real-IP        $remote_addr;
        proxy_set_header            X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_max_temp_file_size    15m;
        client_max_body_size        50m;
        client_body_buffer_size     128k;
        proxy_send_timeout          5m;
        proxy_read_timeout          5m;
        proxy_buffer_size           4k;
        proxy_buffers               4 32k;
        proxy_busy_buffers_size     64k;
        proxy_temp_file_write_size  64k;
        error_page 502 /error502.html;
    }

    location = /error502.html {
        root  /usr/share/nginx/html;
    }
}


Go to Source
Author: Yolo Perdiem

NGINX serving by IP only, not by server name

A Raspberry Pi running (arm-)Arch sits behind my router’s NAT. The RasPi has a static IP and an nginx serving on port 8093. (The nginx is also listening on port 80 for another webpage.)

The server_name is “pi.hole” and it is resolved correctly by the source machine to

The interface opens successfully in my browser at “”

A “404 Not Found” pops up when opening “pi.hole”

Below is my /etc/nginx/nginx.conf

user http;

worker_processes auto;

worker_rlimit_nofile 10240;

events {
    # Use epoll on Linux 2.6+
    use epoll;
    # Max number of simultaneous connections per worker process
    worker_connections 2048;
    # Accept all new connections at one time
    multi_accept on;
}

http {

    server_tokens off;

    sendfile on;

    tcp_nopush on;

    tcp_nodelay off;
    send_timeout 30;

    keepalive_timeout 60;

    keepalive_requests 200;
    reset_timedout_connection on;
    types_hash_max_size 2048;

    server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    gzip_min_length 1000;

    gzip_disable "msie6";
    gzip_proxied any;

    gzip_comp_level 5;

    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

and /etc/nginx/conf.d/pihole.conf

# /etc/nginx/conf.d/pihole.conf
# https://github.com/pi-hole/pi-hole/wiki/Nginx-Configuration

server {
    listen ;

    root /srv/http/pihole;
    server_name pi.hole;
    autoindex off;

    proxy_intercept_errors on;
    error_page 404 /pihole/index.php;

    index pihole/index.php index.php index.html index.htm;

    location / {
        expires max;
        try_files $uri $uri/ /pihole/index.php?$args =404;
        add_header X-Pi-hole "A black hole for Internet advertisements";
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        #fastcgi_param VIRTUAL_HOST "pi.hole";
        #fastcgi_param SERVER_NAME $host;
        fastcgi_param SERVER_NAME "pi.hole";
    }

    location /admin {
        root /srv/http/pihole;
        index index.php index.html index.htm;
        add_header X-Pi-hole "The Pi-hole Web interface is working!";
        add_header X-Frame-Options "DENY";
    }

    location ~ /\.ttf {
        add_header Access-Control-Allow-Origin "*";
    }

    location ~ /admin/\. {
        deny all;
    }

    location ~ /\.ht {
        deny all;
    }
}
I tried adding the IP to the listen directive and playing with the fastcgi_param for the host name, to no better end.

The user running nginx is the same as for php-fpm and has ownership and read-write permissions on the root and down the tree.

What am I doing wrong?
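One thing worth ruling out, sketched below: name-based matching only happens among server blocks listening on the port the request arrived on. If the browser opens http://pi.hole/ without the :8093 port, the port-80 site answers (and 404s), while the IP test presumably included the port. Marking the pi.hole block `default_server` on its own port separates the two cases — 8093 here is taken from the question text, an assumption since the `listen` line above lost its value:

```nginx
server {
    listen 8093 default_server;  # also catch requests that arrive by IP on this port
    server_name pi.hole;
    root /srv/http/pihole;
    # ... rest of the pihole.conf block unchanged ...
}
```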

Go to Source
Author: superAnnoyingUser

Nginx with Varnish : all listen directive pointing to 808* ports but nginx still listen to 80

I’m running instances of Symfony and Drupal websites on two Debian servers, with nginx listening on 443 and Varnish listening on 80 and passing to nginx on custom listening ports 808* for each vhost.

Recently I added a new website to one of the servers. Then I began to run into this well-documented error: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use).

Although there is no nginx server block at all listening on port 80, nor any server block without a listen directive, nginx began to listen on port 80 along with the custom ports.

sudo netstat -tlpn| grep nginx
tcp        0      0  *               LISTEN      4191/nginx: master  
tcp        0      0    *               LISTEN      4191/nginx: master  
tcp        0      0  *               LISTEN      4191/nginx: master  
tcp        0      0 x.x.x.x:8082  *               LISTEN      4191/nginx: master  
tcp        0      0 y.y.y.y:8083  *               LISTEN      4191/nginx: master  
tcp        0      0 z.z.z.z:8084  *               LISTEN      4191/nginx: master  
tcp        0      0   *               LISTEN      4191/nginx: master  
tcp        0      0  *               LISTEN      4191/nginx: master  
tcp6       0      0 :::8080                 :::*                    LISTEN      4191/nginx: master  
tcp6       0      0 :::80                   :::*                    LISTEN      4191/nginx: master  
tcp6       0      0 :::8081                 :::*                    LISTEN      4191/nginx: master  
tcp6       0      0 :::443                  :::*                    LISTEN      4191/nginx: master  
tcp6       0      0 :::8000                 :::*                    LISTEN      4191/nginx: master

I have also already read all the docs and posts about the correct new syntax for handling dual-stack IPv4 and IPv6, and tried all possible syntaxes, such as the ones below, to no avail.

Working directive before the crash: listen x.x.x.x:8082;
Tried adding listen [::]:8082 ipv6only=on;. No change.
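A diagnostic worth running in this situation: dump the fully merged configuration nginx actually loads and list every effective `listen`, which also surfaces stray files picked up by the `include …/*` lines (editor backups in sites-enabled are a classic source):

```shell
# -T dumps the complete configuration as nginx parses it,
# with "# configuration file ..." markers between files
sudo nginx -T | grep -nE '^\s*(listen|server_name|# configuration file)'
```

Also note that a server block containing no `listen` directive at all implicitly listens on *:80 (when nginx runs as root), which is the usual way an unexpected port-80 bind appears.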

I listed and killed the processes many times with sudo fuser -k 80/tcp before restarting varnish and nginx via systemctl, and even ran daemon-reload…

Last, I checked my history but can’t find what could have caused this sudden behavior. The lone point I’m not sure about is that I changed a couple of sysctl.conf params, but I reverted them, just in case; I’m not used to this part of administration: cat /etc/sysctl.conf | grep net.ipv4.conf

#net.ipv4.conf.all.accept_redirects = 0
# net.ipv4.conf.all.secure_redirects = 1
#net.ipv4.conf.all.send_redirects = 0
#net.ipv4.conf.all.accept_source_route = 0
#net.ipv4.conf.all.log_martians = 1

Here’s my configuration.

cat /etc/nginx/nginx.conf (relevant 2 lines, no html / server block in it)

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

cat /etc/nginx/conf.d/default.conf

server {
        listen 8000 default_server;
        listen [::]:8000 ipv6only=on default_server;
        server_name _;

        listen 443 ssl default_server;
        listen [::]:443 ssl ipv6only=on default_server;
}

One of the sites-available vhosts (they all follow exactly the same pattern):

server { # this block only redirects www to non-www
        listen x.x.x.x:443 ssl;
        server_name www.example.com;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_certificate /var/www/clients/client0/web3/ssl/example.com-le.crt;
        ssl_certificate_key /var/www/clients/client0/web3/ssl/example.com-le.key;

        return 301 https://example.com$request_uri;
}

server {
        listen x.x.x.x:443 ssl;
        server_name example.com;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_certificate /var/www/clients/client0/web3/ssl/example.com-le.crt;
        ssl_certificate_key /var/www/clients/client0/web3/ssl/example.com-le.key;

        location / {
            # Pass the request on to Varnish.
            # Pass some headers to the downstream server, so it can identify the host.
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Tell any web apps like Drupal that the session is HTTPS.
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect     off;
        }
}

server {
        listen x.x.x.x:8082;
#       listen [::]:8082 ipv6only=on;

        server_name example.com www.example.com;

        root   /var/www/example.com/web/public;

        location / {
            # try to serve file directly, fallback to index.php
            try_files $uri /index.php$is_args$args;
        }

        location ~ ^/index\.php(/|$) {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT $realpath_root;
        }

        location ~ \.php$ {
           # return 404;
        }

        error_log /var/log/ispconfig/httpd/example.com/error.log;
        access_log /var/log/ispconfig/httpd/example.com/access.log combined;

        location ~ /\. {
            deny all;
        }

        location ^~ /.well-known/acme-challenge/ {
             access_log off;
             log_not_found off;
             root /usr/local/ispconfig/interface/acme/;
             autoindex off;
             try_files $uri $uri/ =404;
        }

        location = /favicon.ico {
            log_not_found off;
            access_log off;
            expires max;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }
}

cat /etc/default/varnish (relevant part)

DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,3G"

I’m wondering what could have caused a config I’ve been working with for years to break?

I carefully studied these Q&As and a bunch of docs and posts, with no success: Nginx tries to run on port 80 but the configs have been removed; Nginx will not start (Address already in use); nginx – bind() to failed (98: Address already in use)

Go to Source
Author: Kojo

Make host header correct from upstream

I configured a simple load-balancer scheme on Windows:

upstream app.local {
    server app1.local:8001 fail_timeout=10s max_fails=10;
    server app2.local:8002 fail_timeout=10s max_fails=10;
}

server {
    listen 8000;

    location / {
        proxy_pass http://app.local;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Changed hosts file like this       app.local       app1.local       app2.local

All fine, but my web servers behind app1.local and app2.local receive the incorrect header Host: app.local and therefore don’t want to resolve the requests. I read a post describing the same problem, but the top answer did not resolve mine, and I don’t want to use the double-layer proxy option straight off.
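With `proxy_set_header Host $host`, the balancer forwards the name the client used (app.local), which the backends don’t answer to. Assuming the backends are also nginx, one minimal fix sketch is to let each backend accept the balancer’s name as an alias:

```nginx
# Backend app1.local (sketch): also accept the balancer's Host header
server {
    listen 8001;
    server_name app1.local app.local;
    # ... backend config ...
}
```

Hard-coding a Host on the balancer instead can’t vary per server within a single upstream group, which is why the double-layer proxy workaround is usually what gets suggested.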

Go to Source
Author: Vasil Akhmetov