L2TP/IPsec unable to connect on Linux

I’m trying to connect to a Cisco L2TP/IPsec VPN using a pre-shared key (PSK) and IKEv1 username/password authentication.

Using the approach from this article, I found that the server supports the following authentication methods:

SA=(Enc=3DES Hash=MD5 Group=2:modp1024 Auth=PSK LifeType=Seconds LifeDuration=28800)
SA=(Enc=3DES Hash=SHA1 Group=2:modp1024 Auth=PSK LifeType=Seconds LifeDuration=28800)
SA=(Enc=AES KeyLength=128 Hash=SHA1 Group=2:modp1024 Auth=PSK LifeType=Seconds LifeDuration=28800)
SA=(Enc=AES KeyLength=256 Hash=SHA1 Group=2:modp1024 Auth=PSK LifeType=Seconds LifeDuration=28800)
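For reference, the advertised proposals map onto strongswan-style proposal strings (the syntax networkmanager-l2tp’s ipsec-ike option expects) roughly as in this sketch; the algorithm names are strongswan’s, the proposal list is the one from the scan above:

```python
# Map each advertised IKEv1 SA proposal (enc, hash, DH group) to a
# strongswan-style proposal string, e.g. "aes128-sha1-modp1024".
proposals = [
    ("3des",   "md5",  "modp1024"),  # Enc=3DES  Hash=MD5
    ("3des",   "sha1", "modp1024"),  # Enc=3DES  Hash=SHA1
    ("aes128", "sha1", "modp1024"),  # Enc=AES-128 Hash=SHA1
    ("aes256", "sha1", "modp1024"),  # Enc=AES-256 Hash=SHA1
]

ike_strings = ["-".join(p) for p in proposals]
print(",".join(ike_strings))
# 3des-md5-modp1024,3des-sha1-modp1024,aes128-sha1-modp1024,aes256-sha1-modp1024
```

The ipsec-ike line in the connection file below is a comma-separated subset of exactly these strings.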

I’m using the networkmanager-l2tp package. I’ve tried both openswan and libreswan (the latter built manually with USE_DH2=true, as described in this patch note).

My .nmconnection file looks like this:

[connection]
id=etis
uuid=70147d0a-5d7f-467a-80ee-9048601960e1
type=vpn
permissions=user:***:;

[vpn]
gateway=vpn.psu.ru
ipsec-enabled=yes
ipsec-esp=aes128-sha1,3des-md5
ipsec-ike=aes128-sha1-modp1024,3des-sha1-modp1024
ipsec-psk=***
password-flags=1
user=***
service-type=org.freedesktop.NetworkManager.l2tp

When I try to connect, I get the following logs:

log using strongswan

log using libreswan with USE_DH2=true

From what I see, in both cases the IPsec connection is established successfully, but then this happens:

xl2tpd[106869]: Listening on IP address 0.0.0.0, port 1701
xl2tpd[106869]: Connecting to host 212.192.80.206, port 1701
xl2tpd[106869]: death_handler: Fatal signal 15 received
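For what it’s worth, signal 15 is SIGTERM, i.e. xl2tpd was killed externally (typically NetworkManager tearing the connection down, e.g. after a timeout) rather than crashing on its own:

```python
import signal

# Signal 15 in the xl2tpd log line above is SIGTERM: an orderly,
# externally-delivered termination request, not a crash signal
# like SIGSEGV (11) or SIGABRT (6).
print(signal.Signals(15).name)
# SIGTERM
```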

The strongswan log also has this suspicious message in between the lines above:

charon[78694]: 01[NET] received packet: from 212.192.80.206[4500] to 192.168.5.28[4500] (164 bytes)
charon[78694]: 01[IKE] received retransmit of response with ID 1610789051, but next request already sent

At this point I’ve exhausted my Google skills. If anybody could tell me where to go next, or at least whether this problem lies in the IPsec or the L2TP part of the equation, I would greatly appreciate it.

Author: Denis Sheremet

Linux HTB: More than 70% of ceil rate is never achieved

Background:

I have an ARM-based system with HTB set up on the eth and wlan interfaces.
Here is the HTB configuration:

tc class add dev eth1 parent 1:1 classid 1:1 htb rate 1Gbit ceil 1Gbit burst 18000b cburst 18000b
tc class add dev eth1 parent 1:1 classid 1:a100 htb rate 60Mbit ceil 60Mbit burst 18000b cburst 18000b
tc class add dev eth1 parent 1:a100 classid 1:10f htb rate 100Kbit ceil 60Mbit burst 18000b cburst 18000b
tc class add dev eth1 parent 1:10f classid 1:100 htb rate 25Kbit ceil 60Mbit burst 18000b cburst 18000b prio 3
tc class add dev eth1 parent 1:10f classid 1:101 htb rate 25Kbit ceil 60Mbit burst 18000b cburst 18000b prio 2
tc class add dev eth1 parent 1:10f classid 1:102 htb rate 25Kbit ceil 60Mbit burst 18000b cburst 18000b prio 1
tc class add dev eth1 parent 1:10f classid 1:103 htb rate 25Kbit ceil 60Mbit burst 18000b cburst 18000b prio 0

Here is the graph representation:

+---(1:1) htb rate 1Gbit ceil 1Gbit burst 18000b cburst 18000b 
     |    Sent 200796370 bytes 152179 pkt (dropped 0, overlimits 0 requeues 0) 
     |    rate 0bit 0pps backlog 0b 0p requeues 0 
     |
     +---(1:54) htb prio 2 rate 50Mbit ceil 1Gbit burst 18000b cburst 18000b 
     |          Sent 2521539 bytes 19693 pkt (dropped 0, overlimits 0 requeues 0) 
     |          rate 0bit 0pps backlog 0b 0p requeues 0 
     |     
     +---(1:f100) htb rate 60Mbit ceil 60Mbit burst 18000b cburst 18000b 
          |       Sent 198274831 bytes 132486 pkt (dropped 0, overlimits 0 requeues 0) 
          |       rate 0bit 0pps backlog 0b 0p requeues 0 
          |
          +---(1:10f) htb rate 100Kbit ceil 60Mbit burst 18000b cburst 18000b 
               |      Sent 198274831 bytes 132486 pkt (dropped 0, overlimits 0 requeues 0) 
               |      rate 0bit 0pps backlog 0b 0p requeues 0 
               |
               +---(1:101) htb prio 2 rate 25Kbit ceil 60Mbit burst 18000b cburst 18000b 
               |           Sent 198208856 bytes 132155 pkt (dropped 82134, overlimits 0 requeues 0) 
               |           rate 0bit 0pps backlog 0b 0p requeues 0 
               |     
               +---(1:100) htb prio 3 rate 25Kbit ceil 60Mbit burst 18000b cburst 18000b 
               |           Sent 64079 bytes 299 pkt (dropped 0, overlimits 0 requeues 0) 
               |           rate 0bit 0pps backlog 0b 0p requeues 0 
               |     
               +---(1:103) htb prio 0 rate 25Kbit ceil 100Kbit burst 18000b cburst 18000b 
               |           Sent 630 bytes 7 pkt (dropped 0, overlimits 0 requeues 0) 
               |           rate 0bit 0pps backlog 0b 0p requeues 0 
               |     
               +---(1:102) htb prio 1 rate 25Kbit ceil 60Mbit burst 18000b cburst 18000b 
                           Sent 1266 bytes 25 pkt (dropped 0, overlimits 0 requeues 0) 
                           rate 0bit 0pps backlog 0b 0p requeues 0

The problem:
I only ever achieve at most ~70% of the ceil rate. Even with iperf UDP traffic on the local network and a 60 Mbps limit set on both uplink and downlink, I barely get 40 Mbps. As the graph above shows, class 1:101 (the data class) drops a lot of packets, and I’m trying to understand why this happens, since it shouldn’t run out of tokens while serving throughput below the ceil rate.
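One thing that may be worth checking is the burst sizing. The LARTC/tc-htb guidance is that burst should be at least rate/HZ bytes, since HTB can only refill tokens once per timer tick. This sketch works through the arithmetic for the configuration above (HZ=100 is an assumption about this board’s kernel timer frequency, not something taken from the question):

```python
# Token-bucket arithmetic for the 1:10f subtree: how long the configured
# 18000-byte burst lasts at the 60 Mbit ceil, and how many bytes per
# timer tick HTB must be able to send to sustain that ceil.
CEIL_BPS = 60_000_000   # ceil: 60 Mbit/s
BURST_BYTES = 18_000    # configured burst/cburst
HZ = 100                # assumed kernel timer frequency (board-specific)

# Time to drain the burst bucket at ceil rate.
burst_seconds = BURST_BYTES * 8 / CEIL_BPS

# Bytes that must go out per tick to sustain the ceil; if this exceeds
# the configured burst, the class stalls between ticks and drops build up.
bytes_per_tick = CEIL_BPS / 8 / HZ

print(round(burst_seconds * 1000, 2))  # 2.4  (ms)
print(int(bytes_per_tick))             # 75000 (bytes/tick at 60 Mbit, HZ=100)
```

Under these assumptions the 18000-byte bucket is well below the ~75 kB per-tick budget a coarse-timer kernel would need at 60 Mbit, which is one plausible way to end up capped below ceil; on a high-resolution-timer kernel the constraint is much weaker, so this is a sanity check rather than a diagnosis.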

Please let me know if more info is needed to debug this.

Author: Vo1dSpace