Can't restore NuGet packages on Ubuntu after upgrade to .NET Core 3.1

I was running .NET Core 2.1 on Ubuntu 18.04; so far, so good.

I upgraded my project to .NET Core 3.1 and installed the 3.1 SDK.

Since then I can't use ‘dotnet restore’.

This is the full output:

/usr/share/dotnet/sdk/3.1.401/NuGet.targets(128,5): error : Unable to load the service index for source https://api.nuget.org/v3/index.json. [/repos/myproj/portal/PortalApi/PortalApi.csproj]
/usr/share/dotnet/sdk/3.1.401/NuGet.targets(128,5): error :   The SSL connection could not be established, see inner exception. [/repos/myproj/portal/PortalApi/PortalApi.csproj]
/usr/share/dotnet/sdk/3.1.401/NuGet.targets(128,5): error :   The remote certificate is invalid according to the validation procedure. [/repos/myproj/portal/PortalApi/PortalApi.csproj]
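
For reference, a minimal check (a sketch using only Python's standard library) that attempts the same TLS handshake against the feed from the affected machine; if this also fails to verify, the system CA trust store or an intercepting proxy is the likely culprit rather than the SDK itself:

import socket
import ssl

host, port = "api.nuget.org", 443

context = ssl.create_default_context()  # validates against the system trust store
with socket.create_connection((host, port), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Issuer:", cert.get("issuer"))
        print("Expires:", cert.get("notAfter"))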

Any idea why?

Thanks

Go to Source
Author: David Munsa

Kubernetes API server unexpectedly stops responding

I have a managed Kubernetes cluster in Azure (AKS). There is one pod running a simple web service that responds to REST API calls from outside and calls the Kubernetes API server. These calls list and create some jobs.

For AKS, I have the ‘advanced’ (Azure CNI) networking with a custom routing table that redirects traffic to a virtual appliance – this is my company’s setup.

I’m using the official Python client for Kubernetes. The calls look like:

from kubernetes import client, config

config.load_incluster_config()
k8s_batch_api_client = client.BatchV1Api()
jobs = k8s_batch_api_client.list_namespaced_job(namespace='default')

So nothing special.

Most of the time, everything is working fine. However, from time to time, the Kubernetes API server just doesn’t respond to the requests, so my pod’s web service gets restarted after a timeout (it runs a gunicorn-based web server).

I installed tcpdump on my pod and sniffed the TCP traffic. I’m not a networking nerd, so bear with me.

The Python client keeps a TCP connection pool (using the urllib3 library). It seems that the Kubernetes API server silently ‘loses’ a TCP connection: it simply stops responding, without ever closing the connection.
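
(One client-side idea, in case it is relevant: enabling TCP keepalives on those pooled connections so a silently dropped connection gets detected and re-opened. This is only a sketch; it assumes the client really does go through urllib3's default socket options, the interval values are arbitrary, and the TCP_KEEP* options are Linux-specific.)

import socket
from urllib3.connection import HTTPConnection

# Must run before the Kubernetes client creates its connection pool.
HTTPConnection.default_socket_options = HTTPConnection.default_socket_options + [
    (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),     # enable keepalives
    (socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60),   # idle seconds before the first probe
    (socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15),  # seconds between probes
    (socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4),     # failed probes before the socket is dropped
]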

In Wireshark, I see this for a working request-response:

2438   09:41:50,796695   10.214.140.39    192.168.0.1     TLSv1.3   1614   Application Data
2439   09:41:50,798552   192.168.0.1      10.214.140.39   TCP       66     443 → 56480 [ACK]
2440   09:41:50,804064   192.168.0.1      10.214.140.39   TLSv1.3   2196   Application Data

10.214.140.39 is my pod, 192.168.0.1 is the Kubernetes API server. We see a request and a response here.

But then:

2469   09:48:48,853533   10.214.140.39   192.168.0.1      TLSv1.3   1580   Application Data
2470   09:48:48,853604   10.214.140.39   192.168.0.1      TLSv1.3   1279   Application Data
2471   09:48:48,868222   10.214.140.39   192.168.0.1      TCP       1279   [TCP Retransmission] 56480 → 443 [PSH, ACK]
2472   09:48:49,076276   10.214.140.39   192.168.0.1      TCP       1452   [TCP Retransmission] 56480 → 443 [ACK]
... lots of retransmissions...

I see no FIN packet from the Kubernetes API server (which would indicate that the server wants to close the connection).

After a restart (2 minutes of retransmissions, then the pod reboots), my pod can establish a connection to the API server right away – so the API server itself isn’t overloaded.

The same app runs without any issues on my local Minikube cluster (but there’s of course only one node, so not really representative).

How can I investigate the issue further? Can it be caused by the client side (by my pod or by the Python client)? Is there any special setting I must change on AKS or on my client side to avoid this? Does it look like a ‘server bug’ or a ‘network issue’?
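
A fail-fast workaround on the client side (a sketch, not a root-cause fix) would be to pass an explicit request timeout so a stalled call raises an exception that can be caught and retried, instead of hanging until gunicorn kills the worker; the 10-second value below is an arbitrary example:

from kubernetes import client, config

config.load_incluster_config()
k8s_batch_api_client = client.BatchV1Api()
jobs = k8s_batch_api_client.list_namespaced_job(
    namespace='default',
    _request_timeout=10,  # seconds; the call raises instead of hanging indefinitely
)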

Go to Source
Author: dymanoid

Old LetsEncrypt SSL certificate still being served by EC2 instance

I’m working on renewing the SSL certificate for my websites. They are all running on the same EC2 instance behind the same Apache server. Two of my domains recently expired, so I also tried to remove them from the certificate.

I think I renewed the certificate correctly because when I run the following, it appears to suggest the certificate is up-to-date:

$ /usr/local/bin/certbot-auto certonly --webroot -w /var/www/html -d apptic.me -d www.apptic.me -d athenaforyou.com -d www.athenaforyou.com -d cloudlease.me -d www.cloudlease.me -d foleyrounds.com -d www.foleyrounds.com -d ugba198.com -d www.ugba198.com -d wildcattribune.com -d www.wildcattribune.com
Requesting to rerun /usr/local/bin/certbot-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Cert not yet due for renewal

You have an existing certificate that has exactly the same domains or certificate name you requested and isn't close to expiry.
(ref: /etc/letsencrypt/renewal/apptic.me.conf)

What would you like to do?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: Keep the existing certificate for now
2: Renew & replace the cert (limit ~5 per 7 days)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
Keeping the existing certificate

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Certificate not yet due for renewal; no action taken.

Then I restart Apache with sudo systemctl restart httpd, which produces no output.

But when I navigate to my website at www.apptic.me, I see there is still an SSL error saying the certificate is expired. What could be happening?
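
To narrow it down, a small check (a sketch using Python's standard library, 3.7+ for the exception type) can report which certificate the server is actually handing out; since certbot certonly only writes the certificate files and doesn't reconfigure Apache, an expired result here would suggest Apache is still pointing at an old certificate file in its SSL virtual host configuration or wasn't fully reloaded:

import socket
import ssl

host = "www.apptic.me"
context = ssl.create_default_context()  # validates against the system trust store

try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("Certificate verifies, expires:", cert["notAfter"])
except ssl.SSLCertVerificationError as err:
    # "certificate has expired" here means the old certificate is still being served
    print("Verification failed:", err.verify_message)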

I’m running Apache 2.4:

$ yum info httpd
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
236 packages excluded due to repository priority protections
Installed Packages
Name        : httpd
Arch        : x86_64
Version     : 2.4.43
Release     : 1.amzn2
Size        : 4.0 M
Repo        : installed
From repo   : amzn2-core
Summary     : Apache HTTP Server
URL         : https://httpd.apache.org/
License     : ASL 2.0
Description : The Apache HTTP Server is a powerful, efficient, and extensible
            : web server.

Go to Source
Author: Neel