Forward SSH from Reverse Proxy Server through VPN to Home Server

I’m trying to set up GitLab on my home server. HTTPS is working and I can get to GitLab’s interface, but SSH is not, so I can’t push code to the server.

Here is the setup:

Cloudflare <--> Reverse Proxy (nginx, hosted on Digital Ocean) <--- VPN ---> Untangle Firewall <--> GitLab Server (on ESXi)

If I try to SSH directly from the Reverse Proxy to the GitLab server (over the VPN connection), it works perfectly.

If I try to SSH from my laptop using the domain name, I get:

kex_exchange_identification: Connection closed by remote host
Connection closed by 104.31.73.156 port 2095

If I try to SSH from my laptop using the Reverse Proxy’s IP (thus cutting out Cloudflare), I get:

Bad packet length 1231976033.
ssh_dispatch_run_fatal: Connection to {{ IP }} port 2095: message authentication code incorrect

I’m currently trying to use the nginx stream module to forward SSH, and this is the stream configuration:

stream {
        upstream git-ssh {
                server {{INTERNAL GITLAB IP}}:22;
        }
        server {
                listen 2095;
                proxy_pass {{INTERNAL GITLAB IP}}:22;
                proxy_protocol on;
        }
}

The reason I define upstream git-ssh and then don’t use it is that I wondered whether that was the problem, but it makes no difference whether I use it or not.

I’m not familiar with iptables, but I tried the following commands:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2095 -j DNAT --to-destination {{GITLAB IP}}:22
sudo iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 2095 -j SNAT --to-source {{PROXY IP}}

But that didn’t seem to work either: ssh just sits there returning nothing and eventually times out.
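
For comparison, write-ups of this kind of DNAT forward usually also enable IP forwarding and allow the traffic through the FORWARD chain. A rough sketch only, untested against this setup (placeholders carried over from the commands above; the VPN interface may need to be used instead of eth0):

# allow the kernel to forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# rewrite incoming :2095 traffic to the GitLab server's SSH port
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2095 -j DNAT --to-destination {{GITLAB IP}}:22

# let the rewritten traffic pass through the FORWARD chain
sudo iptables -A FORWARD -p tcp -d {{GITLAB IP}} --dport 22 -j ACCEPT

# masquerade so replies come back through the proxy (note: after DNAT the port is 22, not 2095)
sudo iptables -t nat -A POSTROUTING -p tcp -d {{GITLAB IP}} --dport 22 -j MASQUERADE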

I am lost now and was hoping someone could give me some pointers.

Author: Cody Dostal

ssh connection “client_loop: send disconnect: Broken pipe” or “Connection reset by port 22”

I have been using ssh to access remote servers for many months, but recently I haven’t been able to establish a reliable connection. Sometimes I cannot log in and get the message “Connection reset by port 22”; when I can log in, I get the error “client_loop: send disconnect: Broken pipe” after a few minutes (even if the terminal is not idle).

My ~/.ssh/config file has:

Host *
    ServerAliveInterval 300
    ServerAliveCountMax 2
    TCPKeepAlive yes

My /etc/ssh/sshd_config file has:

#ClientAliveInterval 300
#ClientAliveCountMax 3
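
Both of those server-side lines are commented out, so sshd is running with its defaults; one way to confirm what the server actually applies (this assumes root access on the remote machine) is:

# print sshd's effective configuration and pick out the keepalive settings
sudo sshd -T | grep -i clientalive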

I recently upgraded my Xfinity plan to a faster speed, and the problem started happening then. But Xfinity insists the issue is on my end. Note that my roommate also has the same issue with ssh…

Is there something that I’m missing on my end? Any help would be greatly appreciated!
(I’m running on a Mac)

Author: Ashka Shah

Is using mbuffer safe on a public network for zfs send raw encrypted data?

I use zfs send and receive to send incremental snapshots of an encrypted dataset in raw mode to a backup server over the internet, using mbuffer for the data stream and SSH to set up the transfer.

Using this strategy the transfer speed increases greatly. Even on a 10-year-old Core 2 Duo I hit a transfer speed of 750 Mbit/s on my test bench, because the data isn’t decrypted, re-encrypted and compressed again.

With mbuffer, however, the data is sent over a raw TCP stream with no encryption. That is fine on a trusted network, but normally not on a public one.

But I am sending raw, already-encrypted data, and all the commands used to establish the transfer still go over SSH.
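
For context, the transfer described above looks roughly like this (a sketch only; dataset names, snapshot names, host, port and buffer sizes are placeholders, not taken from my actual setup):

# on the backup server: listen with mbuffer and feed the stream into zfs receive
mbuffer -I 9090 -s 128k -m 1G | zfs receive backup/dataset

# on the sender: raw (-w) incremental send, piped through mbuffer to the backup server
zfs send -w -i tank/data@snap1 tank/data@snap2 | mbuffer -s 128k -m 1G -O backup-host:9090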

Is it safe to use mbuffer to send data over a public network when sending raw encrypted data?

Author: Hans Oostendorp

haproxy container created with an Ansible playbook on a remote machine not responding, although the container is running

I have already created an Ansible playbook for one haproxy container and two Apache containers. I also mapped ports and volumes so that haproxy distributes client requests to the Apache servers, but the haproxy server is not responding. The containers were created and mapped. Here is my playbook:

- hosts: all
  vars:
    img:
      - httpd:latest
      - haproxy:latest
  tasks:
    - name: pull image
      docker_image:
        name: "{{ item }}"
        source: pull
      loop: "{{ img }}"
    - name: copy my contain1
      copy:
        src: ./index1.html
        dest: /home/roy/volume/index.html
      notify: apache1
    - name: copy my contain2
      copy:
        src: ./index2.html
        dest: /home/roy/volume1/index.html
      notify: apache2
    - name: create containers
      docker_container:
        name: haproxycon1
        image: haproxy:latest
        volumes:
          - /home/roy/ha/:/usr/local/etc/haproxy/
        state: stopped
        ports:
          - "80:80"
        command: sleep 1d
      notify: pop
      register: haproxycon1
      ignore_errors: yes
    - name: create apache1 container
      docker_container:
        name: con1
        image: httpd:latest
        state: started
        volumes:
          - /home/roy/volume:/usr/local/apache2/htdocs
        exposed_ports:
          - 80
      register: con1
      ignore_errors: yes
    - name: create apache2
      docker_container:
        name: con2
        image: httpd:latest
        state: started
        volumes:
          - /home/roy/volume1:/usr/local/apache2/htdocs
        exposed_ports:
          - 80
      register: con2
      ignore_errors: yes
    - name: haproxyconf
      template:
        src: ./haproxy.cfg
        dest: /home/roy/ha/haproxy.cfg
      notify: pop

  handlers:
    - name: pop
      docker_container:
        name: haproxycon1
        state: started
    - name: apache1
      docker_container:
        name: con1
        state: started
    - name: apache2
      docker_container:
        name: con2
        state: started
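
Since the containers are reported as created and running, a few quick checks on the target host can show whether haproxy itself is actually up inside haproxycon1 (a sketch; the container name and config path come from the playbook above):

# is the container running, and what is it logging?
docker ps --filter name=haproxycon1
docker logs haproxycon1

# is the mounted configuration valid from haproxy's point of view?
docker exec haproxycon1 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg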

Author: KKSpro

Can’t connect to SSH on private IP through VPN gateway

Morning, everyone,

I’ve been having a strange problem for a few hours. I was connected over SSH to my Azure virtual machine when the SSH session suddenly froze (without disconnecting). I closed my shell and restarted my SSH session, without success. I checked the NSGs against neighboring machines that I connect to fine; the NSGs are almost identical. I tried rebooting the machine, redeploying the machine, resetting the SSH configuration and resetting the user account, also without success. I can easily connect to other machines on the same subnet, and I can connect over the public IP. I feel like there’s a problem between the VPN gateway and the private IP… Any ideas?

Thank you

Note: I have the following entry in the health status:

Unavailable
At Saturday, July 4, 2020, 3:15:32 PM UTC+4, the Azure monitoring system received the following information regarding your Virtual machine:
Your virtual machine is unavailable. We’re working to automatically recover your virtual machine and to determine the source of the problem. No additional action is required from you at this time.
Recommended Steps
Check back here for status updates
Redeploy this virtual machine to a different host server

Author: Bruno Carnazzi

How to create a user and copy the corresponding public key to authorized_keys using AWS CloudFormation?

I am having trouble creating a user and copying the corresponding public key file, called authorized_keys, into the .ssh folder on the instance using AWS CloudFormation. I am doing this because I want to connect with this user over SSH. When I check the system log of the created instance, it does not look like the user is created or any authorized_keys file is placed in the .ssh directory. This is my code:

LinuxEC2Instance:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        users:
          ansible:
            groups:
              - "exampleuser"
            uid: 1
            homeDir: "/home/exampleuser"
        files:
          /home/exampleuser/.ssh/authorized_keys:
            content: !Sub |
              '{{ resolve:secretsmanager:
                arn:aws:secretsmanager:availability-zone:account-id:secret:keyname:
                  SecretString:
                    keystring }}'
            mode: "000600"
            owner: "exampleuser"
            group: "exampleuser"
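
One thing worth noting: metadata under AWS::CloudFormation::Init is only applied when cfn-init runs on the instance, typically from UserData. A minimal invocation looks like this (the stack name and region here are placeholders; the resource name matches the snippet above):

# run cfn-init against this resource so the users/files sections are applied
/opt/aws/bin/cfn-init -v --stack my-stack --resource LinuxEC2Instance --region eu-central-1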

Am I missing something that is needed for the user to be created and the file to be copied?

Author: Benny

WPTerm ssh connection not working

I’m running WordPress in a Docker container. I’ve logged in via the admin console and installed the WPTerm plugin. When I start WPTerm and run ssh with the following arguments

./ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null -v -T -i ./id_rsa root@172.10.10.2

I get the error

...
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentication succeeded (none).
Authenticated to 172.10.10.2 ([172.10.10.2]:22).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: pledge: network
shell request failed on channel 0

I have tried the different execution settings WPTerm offers (exec, shell_exec, system, passthru and popen), but all of them produce the same error.
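
For reference, one way to narrow down whether only the shell request is failing (a sketch using the same options and key as above; “echo ok” is just an arbitrary test command) is to run a single remote command instead of asking for a shell:

# an exec request instead of a shell request; if this prints "ok", the channel itself works
./ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null -i ./id_rsa root@172.10.10.2 'echo ok'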

How can I get the ssh client to open a shell?

Author: wasp256

Why can’t I expose my local port publicly with SSH remote forwarding via a VPS?

I built a simple web server on my local PC and want to expose its port publicly with SSH remote forwarding via my VPS.
Settings on my VPS:

vim /etc/ssh/sshd_config
GatewayPorts yes
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes

systemctl restart sshd

Open port 8001 on my VPS:

firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload

Set up an SSH remote port forward on my local PC:

ssh -fNR  0.0.0.0:8001:localhost:80  root@vps_ip

Check port 8001 on my VPS:

netstat -plant  | grep  8001
tcp        0      0 127.0.0.1:8001          0.0.0.0:*               LISTEN      797/sshd: root      
tcp6       0      0 ::1:8001                :::*                    LISTEN      797/sshd: root  

It is listening on 8001.
Pinging the vps_ip from another PC (not the local PC above) shows that the connection between the machines is good.
Entering http://vps_ip:8001 on that other PC gives:

The connection was reset

The connection to the server was reset while the page was loading.

    The site could be temporarily unavailable or too busy. Try again in a few moments.
    If you are unable to load any pages, check your computer’s network connection.
    If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.
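
One way to separate the tunnel itself from the firewall and binding questions (a sketch; it assumes curl is installed on the VPS) is to test the forwarded port from the VPS directly:

# from the VPS: does the tunnel reach the web server on the local PC?
curl -I http://127.0.0.1:8001

# which address did sshd bind the forwarded port to?
ss -tlnp | grep 8001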

Why can’t I expose my local port publicly with SSH remote forwarding via the VPS?

Author: it_is_a_literature