nginx 1.14 proxy to serve stale content when upstream is not available

I have nginx running inside a Docker container on Docker Desktop for Windows with WSL 2. Its function is to act as a caching proxy in front of an upstream IIS server.

container/userspace: CentOS 8 container running systemd as PID 1.
nginx version: 1.14, as provided by CentOS 8.
The config file dates back to the nginx 1.1 days on RHEL 6, but reports no syntax errors.

Problem: when the upstream is down, instead of serving stale content, nginx reports “Gateway Timeout.”
Expectation: nginx would still serve static assets from the cache.

The proxying itself appears to work: I can see HIT and MISS entries for requests in the access.log.
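Whether the 504 corresponds to a cache MISS or to stale-serving failing can be read from the `$upstream_cache_status` field in the log format below. As a sketch (the sample log line here is made up, but follows that format), the field can be pulled out with awk:

```shell
# $upstream_cache_status is the 4th double-quote-delimited field in the
# "main" log_format used in this config; a made-up sample line:
line='1.2.3.4 - - [01/Jan/2024:00:00:00 +0000] "GET /a.css HTTP/1.1" -  "HIT" "GET https://host/a.css HTTP/1.1" 200 123 "-" "curl/7.61" "-"'
echo "$line" | awk -F'"' '{print $4}'   # prints HIT
```

Running the same awk over the real log, piped through `sort | uniq -c`, gives a quick count of HIT/MISS/STALE/EXPIRED statuses.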

Could anyone with more nginx experience comment on this use case and point out anything obviously missing from the configuration?


user              nginx;
worker_processes  1;
worker_rlimit_nofile 1000000;  # Added

error_log  /var/log/nginx/error.log  info;

pid        /var/run/;
load_module /usr/lib64/nginx/modules/;

events {
    worker_connections  9192;  # increased from 1024
    use epoll;  # Added
    multi_accept on;  # Added

#   debug_connection;
}

http {
    # Include the perl module
      perl_modules perl/lib;
    # Request_Uri to Lowercase Request_Uri
      perl_set $uri_request_lowercase 'sub {
        my $r = shift;
        my $request_uri = $r->unparsed_uri;  # full original URI; $r->$request_uri was a typo
        return lc($request_uri);
      }';

          # uri to lowercase uri
      perl_set $uri_lowercase 'sub {
        my $r = shift;
        my $uri = $r->uri;
        return lc($uri);
      }';

        # Returns everything to the left of "?" (lowercased)
     perl_set $uri_left_request_lowercase 'sub {
        my $r = shift;
        my $request_uri = lc($r->unparsed_uri);  # $r->uri never contains "?"
        my $q = index($request_uri, "?");
        return $q >= 0 ? substr($request_uri, 0, $q) : $request_uri;
     }';
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" -  "$upstream_cache_status" '
                      '"$request_method $scheme://$host$request_uri $server_protocol" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    open_file_cache max=100000 inactive=30s;  # Added
    reset_timedout_connection on;  # Added

    gzip  on;  # uncommented to enable gzip
    gzip_comp_level     1;  # Added
    gzip_disable        "msie6";  # Added
    gzip_proxied        any;  # Added
    gzip_types          text/plain text/css application/x-javascript text/xml application/xml application/rss+xml text/javascript;  # Added

    #  Cache Configurations
    proxy_cache_path /var/cache/nginx/ levels=1:2 max_size=5000m inactive=30d;

    proxy_temp_path /tmp/nginx;  # Added

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/*.conf;
}



server {
    listen      443 ssl;    
    ssl_certificate /etc/nginx/ssl/__WebSite___cert.cer;
    ssl_certificate_key  /etc/nginx/ssl/;
    location ~* /_layouts/authenticate.aspx.*$ {
        root  /usr/share/nginx/html;
        rewrite ^ /error502.html?;
    }

    location / {
        proxy_pass        ;
        add_header X-dpu-cachesource "DR-NGINX-CONTAINER";
        proxy_connect_timeout       15s;
        proxy_cache_key             "$scheme://$host$uri";
        proxy_cache       ;
        proxy_cache_valid           302 200 30d;
        proxy_cache_use_stale       updating error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
        proxy_ignore_headers        X-Accel-Expires Expires Cache-Control Set-Cookie;
        proxy_set_header            X-Real-IP        $remote_addr;
        proxy_set_header            X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_max_temp_file_size    15m;
        client_max_body_size        50m;
        client_body_buffer_size     128k;
        proxy_send_timeout          5m;
        proxy_read_timeout          5m;
        proxy_buffer_size           4k;
        proxy_buffers               4 32k;
        proxy_busy_buffers_size     64k;
        proxy_temp_file_write_size  64k;
        error_page 502 /error502.html;
    }

    location = /error502.html {
        root  /usr/share/nginx/html;
    }
}

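For comparison, a minimal stale-serving setup looks like the fragment below. This is a sketch, not the original config: the zone name `static_cache` and the upstream URL are placeholders, since those values are redacted above. The key point is that `proxy_cache_path` needs a `keys_zone=` parameter and `proxy_cache` must reference that zone by name.

```nginx
# hypothetical names: static_cache, backend.example.com
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                 max_size=5000m inactive=30d;

server {
    location / {
        proxy_pass  https://backend.example.com;   # placeholder upstream
        proxy_cache static_cache;                  # must match the keys_zone name
        proxy_cache_valid 200 302 30d;
        # serve a cached copy when the upstream errors out or times out
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    }
}
```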

Go to Source
Author: Yolo Perdiem

Configuring HAProxy cache-store to only cache specific file types

I’m building a new load balancer and need to configure the cache to store only certain file types. At the moment, my config caches everything small enough to fit. I’ve tried setting a variable based on the request and referencing it in the cache-store statement, but judging by the service’s behavior, it’s still caching everything.

We configured a test box to pull from, and included a delay in its responses so we could tell whether a response came from the test server or from the cache. We can also see the curl requests in the access log when they actually hit the server. I’m not concerned that my test is faulty; I have clear proof of when a request hits the server versus when it’s served from cache.

Here are the relevant portions of my config:

frontend <redacted>
        bind <IP>:80
        bind <IP>:443 ssl crt <certificate.pem> ssl-min-ver TLSv1.2 alpn h2,http/1.1

        http-request set-var(req.path) hdr(Path) if { path_end .jpeg .jpg .png .gif .svg .ico .css .less .ttf .eot }

        acl cachedata path_end .jpg .jpeg .png .gif .svg .ico .css .less .ttf .eot

        http-request add-header X-Forwarded-Proto https if { ssl_fc }
        http-request cache-use lb if cachedata
        http-response cache-store lb if { var(req.path) -m end .jpg .jpeg .png .gif .svg .ico .css .less .ttf .eot }

        redirect scheme https code 301 if !{ ssl_fc }

        default_backend <redacted>

cache lb
        total-max-size 1024
        max-age 600
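One detail worth noting: `hdr(Path)` reads a literal request header named `Path`, which clients don’t normally send. The usual pattern for making the URL path available at response time is the `path` sample fetch, captured into a variable during the request phase (a sketch, assuming HAProxy 2.x, not the original config):

```
# sketch: capture the path with the `path` sample fetch at request time,
# since `path` itself is not available in http-response rules
http-request  set-var(req.path) path
http-response cache-store lb if { var(req.path) -m end .jpg .jpeg .png .gif .svg .ico .css .less .ttf .eot }
```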

Go to Source
Author: Brad R

Can server-side caching misconfiguration lead to stolen logins?

If a webapp sends `Cache-Control: private`, it shouldn’t be cached, for example by nginx’s proxy_cache. What could happen if it were cached anyway? Could another visitor see another user’s personalized logged-in page? Might a visitor then even end up logged in as another user?
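For context, nginx’s proxy_cache honors `Cache-Control: private` by default; the misconfiguration that typically causes this exposure is a directive like the following (illustrative fragment, similar to the one in the first question above):

```nginx
# Risky in a shared cache: ignoring upstream cache headers means responses
# marked "Cache-Control: private" (and responses carrying session cookies
# via Set-Cookie) can be cached and served to other visitors.
proxy_ignore_headers Cache-Control Set-Cookie;
```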

Go to Source
Author: adrelanos