Forward SSH from Reverse Proxy Server through VPN to Home Server

I’m trying to set up GitLab on my home server. HTTPS is working and I can get to GitLab’s interface, but SSH is not, so I can’t push code to the server.

Here is the setup:

Cloudflare <--> Reverse Proxy (nginx, hosted on Digital Ocean) <--- VPN ---> Untangle Firewall <--> GitLab Server (on ESXi)

If I try to SSH directly from the Reverse Proxy to the GitLab server (over the VPN connection), it works perfectly.

If I try to SSH from my laptop using the domain name, I get:

kex_exchange_identification: Connection closed by remote host
Connection closed by port 2095

If I try to SSH from my laptop using the Reverse Proxy’s IP (thus cutting out Cloudflare), I get:

Bad packet length 1231976033.
ssh_dispatch_run_fatal: Connection to {{ IP }} port 2095: message authentication code incorrect

I’m currently trying to use the nginx stream module to forward SSH, and this is the stream setup:

stream {
        upstream git-ssh {
                server {{INTERNAL GITLAB IP}}:22;
        }

        server {
                listen 2095;
                proxy_pass {{INTERNAL GITLAB IP}}:22;
                proxy_protocol on;
        }
}

The reason I have upstream git-ssh and then don’t use it was because I was wondering if that was the problem, but it makes no difference if I use it or not.
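For reference, a stream block that actually uses the upstream and passes TCP straight through might look like the sketch below (the IP is a placeholder). One thing worth noting: `proxy_protocol on;` prepends a PROXY protocol header to the connection, and OpenSSH’s sshd does not understand that header, which could explain the “Bad packet length” / MAC errors.

```nginx
stream {
    upstream git-ssh {
        server 192.168.1.20:22;   # placeholder internal GitLab IP
    }

    server {
        listen 2095;
        proxy_pass git-ssh;       # plain TCP pass-through
        # proxy_protocol on;     # left off: sshd does not speak the PROXY
        #                        # protocol, so the extra header corrupts
        #                        # the SSH banner exchange
    }
}
```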

I’m not familiar with iptables, but I tried the following commands:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2095 -j DNAT --to-destination {{GITLAB IP}}:22
sudo iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 2095 -j SNAT --to-source {{PROXY IP}}

But it didn’t seem to work. ssh just sits there returning nothing and eventually times out.
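As an alternative to the stream module, the iptables approach can work, but two details may be biting here: DNAT happens in PREROUTING, so by the time a packet reaches POSTROUTING its destination port is already 22 and the `--dport 2095` match in the SNAT rule never fires; and since the GitLab box is reached over the VPN, the outgoing interface is the tunnel interface, not eth0. A sketch with placeholder IPs and interface names:

```shell
# Rewrite incoming :2095 to the GitLab box (placeholder IPs throughout)
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2095 \
    -j DNAT --to-destination 192.168.1.20:22

# Match the *rewritten* destination port (22, not 2095), and SNAT out
# of the VPN interface (tun0 here) rather than eth0
sudo iptables -t nat -A POSTROUTING -o tun0 -p tcp -d 192.168.1.20 \
    --dport 22 -j SNAT --to-source 203.0.113.10

# The kernel must also be routing packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1
```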

I’m lost now, and was hoping someone could give me some pointers.

Go to Source
Author: Cody Dostal

nginx 1.14 proxy to serve stale content when upstream is not available

I have nginx running inside a Docker container on Docker for Windows with WSL 2. Its function is to act as a caching proxy for an upstream IIS server.

container/userspace: CentOS 8 container running SystemD as pid1.
nginx version: 1.14 as provided by CentOS 8.
The config file dates back to the nginx 1.1 days on RHEL 6, but reports no syntax errors.

Problem: when the upstream is down, instead of serving stale content, nginx reports “Gateway Timeout.”
Expectation: nginx would still serve static assets from the cache.

The proxying appears to work, as I can see HIT and MISS cache statuses in the access.log.

Could anyone with more nginx experience comment on this use case and if there’s anything obviously missing in the configuration?


user              nginx;
worker_processes  1;
worker_rlimit_nofile 1000000;  # Added

error_log  /var/log/nginx/error.log;
error_log  /var/log/nginx/error.log  notice;
error_log  /var/log/nginx/error.log  info;

pid        /var/run/;
load_module /usr/lib64/nginx/modules/;

events {
    worker_connections  9192;  # increased from 1024 to 9192
    use epoll;  # Added
    multi_accept on;  # Added

#   debug_connection;
}

http {
    # Include the perl module
    perl_modules perl/lib;

    # $request_uri to lowercase $request_uri
    perl_set $uri_request_lowercase 'sub {
        my $r = shift;
        my $request_uri = $r->request_uri;
        $request_uri = lc($request_uri);
        return $request_uri;
    }';

    # $uri to lowercase $uri
    perl_set $uri_lowercase 'sub {
        my $r = shift;
        my $uri = $r->uri;
        $uri = lc($uri);
        return $uri;
    }';

    # Returns everything to the left of ?
    perl_set $uri_left_request_lowercase 'sub {
        my $r = shift;
        my $uri = $r->uri;
        $uri = lc($uri);
        my $request_uri = substr($uri, 0, index($uri, "?"));
        return $request_uri;
    }';

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" -  "$upstream_cache_status" '
                      '"$request_method $scheme://$host$request_uri $server_protocol" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    open_file_cache max=100000 inactive=30s;  # Added
    reset_timedout_connection on;  # Added

    gzip  on;  # uncommented to enable gzip
    gzip_comp_level     1;  # Added
    gzip_disable        msie6;  # Added
    gzip_proxied        any;  # Added
    gzip_types          text/plain text/css application/x-javascript text/xml application/xml application/rss+xml text/javascript;  # Added

    #  Cache Configurations
    proxy_cache_path /var/cache/nginx/ levels=1:2 max_size=5000m inactive=30d;

    proxy_temp_path /tmp/nginx;  # Added

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/*.conf;
}



server {
    listen      443 ssl;
    ssl_certificate      /etc/nginx/ssl/__WebSite___cert.cer;
    ssl_certificate_key  /etc/nginx/ssl/;

    location ~* /_layouts/authenticate.aspx.*$ {
        root  /usr/share/nginx/html;
        rewrite ^ /error502.html?;
    }

    location / {
        proxy_pass        ;
        add_header X-dpu-cachesource "DR-NGINX-CONTAINER";
        proxy_connect_timeout       15s;
        proxy_cache_key             "$scheme://$host$uri";
        proxy_cache       ;
        proxy_cache_valid           302 200 30d;
        proxy_cache_use_stale       updating error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
        proxy_ignore_headers        X-Accel-Expires Expires Cache-Control Set-Cookie;
        proxy_set_header            X-Real-IP        $remote_addr;
        proxy_set_header            X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_max_temp_file_size    15m;
        client_max_body_size        50m;
        client_body_buffer_size     128k;
        proxy_send_timeout          5m;
        proxy_read_timeout          5m;
        proxy_buffer_size           4k;
        proxy_buffers               4 32k;
        proxy_busy_buffers_size     64k;
        proxy_temp_file_write_size  64k;
        error_page 502 /error502.html;
    }

    location = /error502.html {
        root  /usr/share/nginx/html;
    }
}
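One thing that stands out, assuming it wasn’t simply lost in the paste: the `proxy_cache_path` directive has no `keys_zone=` parameter, and the `proxy_cache` directive is empty, yet both are required for caching (and therefore for `proxy_cache_use_stale`) to function. A minimal working pair, with a hypothetical zone name, would look like:

```nginx
proxy_cache_path /var/cache/nginx/ levels=1:2 keys_zone=dr_cache:100m
                 max_size=5000m inactive=30d;

server {
    location / {
        proxy_cache dr_cache;
        # serve cached copies when the upstream errors out or times out
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
    }
}
```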

Go to Source
Author: Yolo Perdiem

Setting up a TCP-SNI proxy that dynamically forwards SSL traffic to any hostname that the SNI might contain

First, I’ll summarize my goal:

I’ll set up a DNS server and configure my smart TV to use it. I’ll configure the DNS server so that requests for specific DNS zones are not actually resolved; instead, the DNS server will return the IP of my proxy server. The proxy server needs to accept any HTTPS request, inspect the SNI, and forward the request to the corresponding host. I cannot statically configure the hosts to which the proxy shall pass the incoming requests, as those hostnames are generated “randomly” (i.e., outside of my control) in a specific DNS zone.

So far I’ve looked into nginx’s ngx_stream_ssl_preread_module, as well as into HAProxy. I have not found a way to make them proxy the traffic to $requesthostname; it seems you always need to specify the backends to which you pass the traffic.
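For what it’s worth, nginx’s ssl_preread module can do dynamic forwarding: `proxy_pass` accepts variables (a `resolver` is then required), so `$ssl_preread_server_name` can be used directly as the backend. A sketch, untested and with an assumed public resolver:

```nginx
stream {
    # needed because proxy_pass uses a variable
    resolver 1.1.1.1;

    server {
        listen 443;
        ssl_preread on;                           # extract the SNI without terminating TLS
        proxy_pass $ssl_preread_server_name:443;  # forward to whatever host the SNI names
    }
}
```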

While inspecting HTTPS traffic on my local machine using mitmproxy, I realized that it behaves as I desire, in that it forwards all HTTPS requests to the corresponding hostnames. However, as I cannot install mitmproxy’s CA certificate on my smart TV, I cannot use it for this purpose.

Does anybody know of a proxy that serves my purpose, or a way to configure one of the proxy servers I mentioned to behave this way?

Help is greatly appreciated; thanks in advance!

Go to Source
Author: Max Luchterhand

Traefik 2 – Passing port in the URL

So I’m pretty sure I just need someone to break things down for me in a way I can understand.

I’m running Traefik 2 with Docker and some rule TOMLs for other internal services.

I am using HASSIO as one of these rule TOMLs. I want to be able to reach http://URL:1880/endpoint/ui for the Node-RED integration.

If I browse to the internal address it works, but it’s not working externally using the URL.

the toml is as follows:

      entryPoints = ["https"]
      rule = "Host(`REMOVED`)"
      service = "hassio-svc"
      middlewares = ["chain-no-auth"]
        certresolver = "dns-cloudflare"

        passHostHeader = false
          url = ""

My question is: when you click a link on the site and it redirects to an address with a port in the URL, can that port get passed through?
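For context, ports are not normally “passed through” the proxy in the URL; the backend port lives in the service definition, so external clients reach Node-RED over the normal 443 entrypoint. A hypothetical file-provider TOML (router/service names, the hostname, and the internal IP are all placeholders) might look like:

```toml
[http.routers.nodered]
  entryPoints = ["https"]
  rule = "Host(`nodered.example.com`)"
  service = "nodered-svc"
  [http.routers.nodered.tls]
    certResolver = "dns-cloudflare"

[http.services.nodered-svc.loadBalancer]
  passHostHeader = false
  [[http.services.nodered-svc.loadBalancer.servers]]
    url = "http://192.168.1.30:1880"
```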

Go to Source
Author: Nigel Tatschner

Make host header correct from upstream

I configured a simple load-balancer scheme on Windows:

upstream app.local {
    server app1.local:8001 fail_timeout=10s max_fails=10;
    server app2.local:8002 fail_timeout=10s max_fails=10;
}

server {
    listen 8000;

    location / {
        proxy_pass http://app.local;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

I changed the hosts file like this:

       app.local       app1.local       app2.local

Everything is fine, but my web servers behind app1.local and app2.local receive the wrong header (Host: app.local) and therefore refuse to resolve the requests. I read a post describing the same problem, but the top answer did not resolve mine, and I don’t want to resort to the double-layer proxy option straight away.

Go to Source
Author: Vasil Akhmetov