NGINX serving by IP only, not by server name

A Raspberry Pi running Arch Linux ARM sits behind my router's NAT. The Pi has a static IP of 192.168.1.6 and nginx serving on port 8093. (nginx is also listening on port 80 for another web page.)

The server_name is “pi.hole”, and the source machine resolves it correctly to 192.168.1.6.

The interface opens successfully in my browser at “http://192.168.1.6:8093”

A “404 Not Found” pops up when I open “pi.hole”.

Below is my /etc/nginx/nginx.conf:

user http;

worker_processes auto;

worker_rlimit_nofile 10240;

events {
    # Use epoll on Linux 2.6+
    use epoll;
    # Max number of simultaneous connections per worker process
    worker_connections 2048;
    # Accept all new connections at one time
    multi_accept on;
}

http {

    server_tokens off;

    sendfile on;

    tcp_nopush on;

    tcp_nodelay off;
    
    send_timeout 30;

    keepalive_timeout 60;

    keepalive_requests 200;
    reset_timedout_connection on;
    
    types_hash_max_size 2048;

    server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    gzip_min_length 1000;

    gzip_disable "msie6";
    gzip_proxied any;

    gzip_comp_level 5;
    
    gzip_types
        text/plain
        text/css
        application/json
        application/x-javascript
        text/xml
        application/xml
        application/xml+rss
        text/javascript
        application/javascript
        application/octet-stream;


    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}


and /etc/nginx/conf.d/pihole.conf:

# /etc/nginx/conf.d/pihole.conf
#
# https://github.com/pi-hole/pi-hole/wiki/Nginx-Configuration
#

server {
    listen 192.168.1.6:8093;

    root /srv/http/pihole;
    server_name pi.hole;
    autoindex off;

    proxy_intercept_errors on;
    error_page 404 /pihole/index.php;

    index pihole/index.php index.php index.html index.htm;

    location / {
        expires max;
        try_files $uri $uri/ /pihole/index.php?$args =404;
        add_header X-Pi-hole "A black hole for Internet advertisements";
    }

    location ~ .php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        #fastcgi_pass 127.0.0.1:9000;
        #fastcgi_param VIRTUAL_HOST "pi.hole";
        #fastcgi_param SERVER_NAME $host;
        fastcgi_param SERVER_NAME "pi.hole";
    }
    
    location /admin {
        root /srv/http/pihole;
        index index.php index.html index.htm;
        add_header X-Pi-hole "The Pi-hole Web interface is working!";
        add_header X-Frame-Options "DENY";
    }
    
    location ~ /.ttf {
        add_header Access-Control-Allow-Origin "*";
    }

    location ~ /admin/. {
        deny all;
    }

    location ~ /.ht {
        deny all;
    }
}

I tried adding the IP to the listen directive and playing with the fastcgi_param for the host name, to no avail.

The user running nginx is the same as the one running php-fpm, and it has ownership and read-write permissions on the root and everything below it.

What am I doing wrong?
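
For what it's worth, a bare “pi.hole” in the browser implies port 80, which is answered by the other site's server block rather than by the 8093 listener above. A minimal sketch of a port-80 server block that would forward the name to the interface (an assumption, not part of the original config):

server {
    # catch requests for the bare host name on the default port 80
    listen 80;
    server_name pi.hole;
    # forward them to the interface that already works on port 8093
    return 301 http://pi.hole:8093$request_uri;
}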

Go to Source
Author: superAnnoyingUser

What is a recommended authentication architecture for a front GUI app that I want to control but that will be used by others to control their servers?

I have a front-end (web GUI) app that I designed (Python for now, plus JavaScript in the future) that I use to access a controller; it uses REST APIs.

I want to publish this app in the cloud so that others could use it.

The biggest issue I am seeing is the security side, as the app needs to authenticate with the remote server (a controller itself) and start sending tasks to the controller, which translates them into internal REST API calls that control processes on downstream servers.

Is there an authentication flow that will guarantee to the owners of the controllers that I (the publisher of the front end) cannot intercept the authentication flow and gain unwanted access to their servers?

My idea is to use a two-step authentication/authorization process like the one below. Is there a better way?
Please edit this diagram if you have suggestions
[diagram: proposed two-step authentication/authorization flow]

Go to Source
Author: MiniMe

Is there a way to make Prometheus scrape only the metrics which are present in Grafana?

As a customer of Grafana Cloud, I need to minimize the number of data series that are being sent (remote_write) to Grafana Cloud, to avoid being overcharged for metrics that I don’t really use.

This can be done by filtering out the metrics that I don’t use in my Grafana dashboards. Is there a way to do that automatically, without having to write all the metric_relabel_configs rules manually?
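
For reference, a minimal sketch of the manual approach the question wants to avoid, assuming a plain scrape config (the job name, target and metric names are placeholders): only series whose names match the regex survive to be remote-written.

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
    metric_relabel_configs:
      # keep only the metric names the dashboards actually use
      - source_labels: [__name__]
        regex: 'node_cpu_seconds_total|node_memory_MemAvailable_bytes'
        action: keep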

Go to Source
Author: Meir Gabay

Is it good practice to have an endpoint URL with a parameter accepting different types of values?

In my current maintenance project, there is a REST API resource URL like this:

/sites/<site id or site code>/buildings/<building id or building code>

In this endpoint URL, there are two parameters:

  • <site id or site code>
  • <building id or building code>

As the names indicate, these two parameters are ambiguous: the value of the first can be either a site id or a site code, and the value of the second can be either a building id or a building code. Whichever form is supplied, the endpoint is implicitly expected to resolve it to the same resource.

For instance, if there is a building with 1 as the building id and rake as the building code, located in a site with 5 as the site id and SF as the site code, then the following endpoint URLs should all retrieve the same result:

  • /sites/5/buildings/1
  • /sites/SF/buildings/1
  • /sites/5/buildings/rake
  • /sites/SF/buildings/rake

The implementation of such a resource endpoint contains lots of if conditions due to the ambiguity; see the sketch below. From the end-user’s perspective, however, this seems handy.
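
For illustration, the kind of double lookup the ambiguity forces on every path segment; a sketch only, where Site, SiteRepository and the method names are hypothetical, not from the project:

record Site(long id, String code) {}

interface SiteRepository {
    Site findById(long id);       // null when no such id exists
    Site findByCode(String code); // null when no such code exists
}

class SiteResolver {
    static Site resolve(String value, SiteRepository repo) {
        // a purely numeric value is probably an id, so try that first
        if (!value.isEmpty() && value.chars().allMatch(Character::isDigit)) {
            Site byId = repo.findById(Long.parseLong(value));
            if (byId != null) {
                return byId;
            }
        }
        // otherwise, or when no id matched, treat the value as a site code
        return repo.findByCode(value);
    }
}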

My question is whether such an endpoint design is a good practice or a typical bad practice.

Go to Source
Author: Rui

How to determine whether the Jenkins build was started by a user or the scheduler?

I have a Jenkins job which does some unit-testing for some code. It can be started in two possible ways:

  1. By the scheduler – it polls the Perforce server, and starts the job when it detects a code change
  2. By a user, on demand – it downloads a shelved/stashed code change, and checks whether it breaks the tests

The second way is called “Build Review”:

[screenshot: “Build Review”]


If the job discovers problems, I want to send emails conditionally, to avoid spam:

  1. If invoked by the scheduler, send mails to everyone who did check-ins since the last successful build (called “developers” in Jenkins jargon)
  2. If invoked by a specific user, send a mail only to that user

I am using the Any Build Step plugin to add two conditional instances of Editable Email Notification to my job settings. But what are the conditions? How can I tell it to send mail only if invoked in a specific way, (1) or (2) above?
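
A hedged sketch, not from the question: in a Pipeline script, or in a Groovy condition for the conditional step, the build causes reveal how the build was started, so the two mail steps can branch on them. The class names below are the standard Jenkins cause classes.

// distinguish an on-demand build from a poller-triggered one
def userCauses = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
def scmCauses  = currentBuild.getBuildCauses('hudson.triggers.SCMTrigger$SCMTriggerCause')

if (userCauses) {
    // case (2): started by a user, mail only that user
    echo "Started by user ${userCauses[0].userId}"
} else if (scmCauses) {
    // case (1): started by the change poller, mail the developers
    echo 'Started by the SCM poller'
}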

Go to Source
Author: anatolyg

External HDD stopped working

Hi guys, I usually work with large files on an external HDD (1 TB Toshiba), but after rebooting the laptop for an update I can’t see it anymore.

I am working with Ubuntu 18.04.4 LTS.

When I do dmesg | tail I get the following:

[ 2605.778978] usb usb1-port1: attempt power cycle
[ 2606.430884] usb 1-1: new full-speed USB device number 32 using xhci_hcd
[ 2606.431085] usb 1-1: Device not responding to setup address.
[ 2606.638937] usb 1-1: Device not responding to setup address.
[ 2606.846862] usb 1-1: device not accepting address 32, error -71
[ 2606.974698] usb 1-1: new full-speed USB device number 33 using xhci_hcd
[ 2606.974806] usb 1-1: Device not responding to setup address.
[ 2607.182967] usb 1-1: Device not responding to setup address.
[ 2607.390882] usb 1-1: device not accepting address 33, error -71
[ 2607.391006] usb usb1-port1: unable to enumerate USB device

I cannot mount it manually either; fdisk -l shows no additional sd* device after I plug it in.

I also tried a Mac laptop, and it doesn’t recognize the disk either. The white lights on the device blink, but instead of settling into a steady blue light it turns off, and I can hear the disk stop spinning.

Go to Source
Author: Nicola Pontillo

Fresh install OK, upgrading PHP 5.4 -> 7.3 fails with “Use of undefined constant WP_CONTENT_DIR”

On my HostGator shared hosting, which has a cPanel option to set the PHP version per domain, if I turn on PHP 7.3 (or another 7.x version) for my existing blog I get an error page:

Warning: Use of undefined constant WP_CONTENT_DIR – assumed ‘WP_CONTENT_DIR’ (this will throw an Error in a future version of PHP) in /my_site/public_html/wp-includes/load.php on line 141
Your PHP installation appears to be missing the MySQL extension which is required by WordPress

If I Google this, all the advice is about how to install mysqld, but that is surely already installed, as a fresh install of WordPress on a separate sub-domain works perfectly under PHP 7.3. So I must be missing something, perhaps in my older site’s wp-config.php, but looking at the working and failing versions I cannot see an obvious difference.
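
For what it's worth, that first warning is what PHP 7.2+ emits whenever a constant name appears unquoted (as a bareword) without having been defined. A hedged guess at the shape of the offending line, somewhere in the old site's PHP (the path is a placeholder):

define(WP_CONTENT_DIR, '/my_site/public_html/wp-content');   // bareword constant: triggers the warning
define('WP_CONTENT_DIR', '/my_site/public_html/wp-content'); // quoted name: correct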

Go to Source
Author: Ken Y-N

Link in Java returns HTTP instead of HTTPS

Your application will often be running behind a reverse proxy or a load balancer, and often the protocol in between is just plain old HTTP. This is a very common setup, and one that has been around for years.

When this happens, you might get the wrong scheme or protocol when generating URLs through the Java Link class in Spring Boot: what should have been HTTPS becomes HTTP. When you’re building APIs that follow HATEOAS/HAL, this becomes an issue; with just a single letter ‘s’, your URLs become invalid.

First off, this should already be handled properly by the underlying framework, but if the proxy in front of your app is misconfigured, the problem appears, and it is then difficult, maybe impossible, for the app to tell whether the client used HTTP or HTTPS.

Luckily, you can force it to use the proper protocol. This is a workaround that works, but I would say that configuring the reverse proxy or load balancer correctly is the proper way to do it.

ANSWER

The band-aid solution is to manipulate your Spring Boot app depending on the environment.

For example, if I’m just developing and using it locally, HTTP is fine. With profiles set up as they ought to be – dev, test, prod – you can detect the current environment and have your URLs use HTTP or HTTPS accordingly. When you deploy to your server, you know you want secure HTTP, so there you can override the Link to use the right scheme.

Here is one way to manipulate the protocol or scheme of a Link.
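
A minimal sketch, assuming Spring HATEOAS and that the active profile has been obtained as shown further below; the class and the profile names are illustrative, not from the original post:

import org.springframework.hateoas.Link;

public class LinkSchemeFixer {

    private final String activeProfile;

    public LinkSchemeFixer(String activeProfile) {
        this.activeProfile = activeProfile;
    }

    // Rewrite an http:// href to https:// everywhere except local development.
    public Link withProperScheme(Link link) {
        if ("dev".equals(activeProfile)) {
            return link; // plain HTTP is fine locally
        }
        String href = link.getHref().replaceFirst("^http://", "https://");
        return Link.of(href, link.getRel());
    }
}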

Oh, and getting the active profile can be done in two ways that I know of in Spring Boot.

The first is injecting it via annotation:

@Value("${spring.profiles.active}")
private String activeProfile;

and/or autowiring Environment:

@Autowired
private Environment environment;

public void getActiveProfiles() {
    for (String profileName : environment.getActiveProfiles()) {
        System.out.println("Active profile: " + profileName);
    }
}

SSH Timeout Error

I am looking for ideas on how I can do this better.

Home laptop (can access only the Linux VM)
Linux VM (can access only the jump box)
Jump box VM

I can ssh fine from the Linux VM to the jump box; keys are set up.

One user called joe is set up on the Linux VM, and joe’s .bash_profile looks like this:

ssh 10.0.0.1 || ssh 10.0.0.2

Note: there are two NICs on the jump box; if one is down, we can use the other to log in.

Say the first NIC is down: when I telnet to the Linux VM from my home laptop and provide the user name joe, it should automatically connect to 10.0.0.1, but since that NIC is down it reports timeouts and takes a long time to connect to the other NIC.

Is there any way, when I telnet in and give the user name, to check the first SSH connection and, if it is down, automatically connect to the second one quickly, say within 2-3 seconds?

Right now it gives us the message

Time out
time out
time out
and then it tries to connect to the second NIC.

I can clarify more if you guys have any questions for me.

Go to Source
Author: John

ANSWER

Resolving host names can add to the connection attempt, so specifying an IP address directly can shave off some of those precious milliseconds. In a local network with the IP addresses all laid out, though, that should not be an issue. At least that’s what I think.

Perhaps explicitly set the ConnectTimeout option to a shorter value, like so:

ssh -o ConnectTimeout=10 user@host

This can help. Adjust the number to a sweet spot that works for your ssh setup as necessary.
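
Applied to the .bash_profile fallback from the question, that might look like the line below; the 3-second value is just a starting point to tune:

# try the first NIC, give up after 3 seconds, then fall back to the second
ssh -o ConnectTimeout=3 10.0.0.1 || ssh -o ConnectTimeout=3 10.0.0.2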

How to configure Krusader’s extract context menu to use something other than ark?

I’ve found this context menu in Krusader to be very unreliable over the years; it has hardly ever worked for me. Surely it’s possible to configure the context menu to use a custom command, like simply using the zip or tar CLI commands, but I can’t find this in the application settings.

The VFS plugins work great, so how come this context menu has to use Ark?

Go to Source
Author: barrymac

ANSWER

Being KDE, it is a very highly configurable desktop environment, to the point that it can become overwhelming; it gives you practically a myriad of options for customizing just about everything. I know because I was a KDE fan/user for several years.

But I think the issue here is file associations. Krusader, being a KDE application, should follow the system-wide settings; I don’t see any reason why it should not. As such, it is my opinion that the file manager will, by default, use whatever application is associated with a certain file type. I’d look for KDE’s file association settings in the control panel or some such, then assign a different application for that file type to your liking. That should do the trick.

As for deeper customization, like using your own script: it could still be possible, under file associations, to assign a different application. I don’t have a KDE desktop installed right now, so I can’t check this out, and I can’t remember having gone to the extent of doing something like this to handle archives.

Copying a MySQL 5.6 server settings for a new environment?

I have a MySQL 5.6 server running on a Windows 8 VM on a steel-case server. It’s been having issues (the Windows 8 is, which in turn causes issues for MySQL), so I was tasked with putting the MySQL 5.6 server on a Windows 10 VM for stability reasons. The amount of RAM the server is allowed, and other settings, have been tweaked over time. Is there a way I can export these settings, so that when I install MySQL on the Windows 10 VM I can just tell it to use the same ones?

Go to Source
Author: bjk116

ANSWER

This should be as straightforward as copying the configuration file – my.ini is the default – from the old MySQL server to the new one. It is a regular text file, so copying it should not be an issue at all.

If there are other supporting configuration files for the DB server you might have customized, copy those as well.

Restart the MySQL server as necessary. If there are issues, for example with the InnoDB engine, deleting the datadir should fix them quickly. Of course, I’m assuming this is a fresh MySQL install; don’t do this otherwise. You can migrate the data properly through a mysqldump, for example.
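
If the data has to move as well, a dump on the old VM and a restore on the new one is the usual route; a sketch, with placeholder credentials:

# on the old Windows 8 VM
mysqldump -u root -p --all-databases --routines --events > all-databases.sql

# on the new Windows 10 VM
mysql -u root -p < all-databases.sql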

Adding HTML to posts with certain tags?

When I want to add site-wide HTML code (before the closing tag) to each post and page on my site, I always go to:

  • WordPress Panel > Customize > Theme Settings > Header/Footer Scripts > Footer Scripts

And paste the code there. Works perfectly fine.

But now I want to add this code only to my posts with certain tags. I tried this (the tags are X, Y and Z):

function wpb_hook_javascript_footer() {
  if (has_tag ('X' 'Y' 'Z')) { 
    ?>
          // My HTML Code goes here
    <?php
  }
}
add_action('wp_footer', 'wpb_hook_javascript_footer');

But I guess it doesn’t work (the update keeps loading and never finishes)?

Any words of wisdom?
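
One hedged guess: has_tag() takes multiple tags as an array (the commas between the tag names are missing above, which is a PHP parse error), and the raw HTML is better left as plain output between the PHP tags rather than behind a // comment. A corrected sketch:

function wpb_hook_javascript_footer() {
    // has_tag() accepts an array when testing several tags at once
    if ( has_tag( array( 'X', 'Y', 'Z' ) ) ) {
        ?>
        <!-- My HTML code goes here -->
        <?php
    }
}
add_action( 'wp_footer', 'wpb_hook_javascript_footer' );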

Go to Source
Author: user190411

No sound playback during call in Slack – Ubuntu 20.04

I’m on 20.04 and installed Slack using snap. I also installed Spotify via snap, and my speakers work, so I’m not sure why Slack isn’t working. I tried every single sound source.

Go to Source
Author: Dan

ANSWER

Is it possible there is another volume control for the Slack application that was missed? It is sometimes hidden deep in the Sound Settings, under Applications.

I had a similar experience with Zoom for Linux. I could not hear any sound at all. The main system volume was all the way up, and the app was connected to the device’s audio correctly; I even reconnected a couple of times. Only later did I find out that there was yet another set of input/output controls for Zoom that was not so obvious. I only had to turn it up to hear sound during a call. By default it always seems to be at 0 when I join a call, which is weird.

What firewall should I use?

What is the most comprehensive free firewall solution available for Windows? This is for a personal machine, not a work machine. I see a ton of choices online but am not sure which one to pick.

Go to Source
Author: PerpetualLearner

ANSWER

For home use I am fine with the built-in and free Windows Firewall (or Windows Defender Firewall) most of the time. It serves its purpose. Does the job. Not overwhelming to use. Customizable enough.

I would also explore the security options that come with my home router. Many routers include them nowadays, and with a little tweaking they should be able to add an extra layer of protection to one’s home network.