How can I use a regex to combine multiple locations in nginx?

Please help me with an nginx.conf that uses a regex to handle multiple locations.

When I create nginx.conf with the following locations, proxy_pass works correctly:

location = /test-a-1-1 {
    proxy_pass http://test-a-1-1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $real_remote_addr;
    proxy_redirect off;
}

location = /test-b-1-2 {
    proxy_pass http://test-b-1-2;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $real_remote_addr;
    proxy_redirect off;
}

But I have many locations and I want to use a single regex location for all of them; with this configuration I get the error:

test-b-1-2 could not be resolved (3: Host not found)

resolver 172.16.10.3;

location ~ ^/(test-a-[0-9]-[0-9]|test-b-[0-9]-[0-9]) {
    set $test $1;
    proxy_pass http://$test;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect off;
}
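
I also wonder whether I need to declare upstream blocks so that the names can be resolved without DNS; a sketch of what I mean (the server addresses are placeholders I made up):

# These would live in the http {} context, next to the server block.
# As far as I understand, when proxy_pass contains a variable, nginx first looks
# for an upstream group with a matching name and only then falls back to the resolver.
upstream test-a-1-1 { server 10.0.0.11:8080; }
upstream test-b-1-2 { server 10.0.0.12:8080; }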

Please help me get the regex and multiple locations working.
Thank you.

Go to Source
Author: perrfect

How to obtain the path of a Bash script, when it's executed through a symlink?

I want to obtain the path of a Bash script when it's executed through a symlink.

In this case, $0 is -bash, while BASH_SOURCE is the symlink’s path.

Can I obtain the script’s path through some shell variable, built-in shell command, or external command?

Will BASH_SOURCE always hold the initial symlink's path, even if the script is executed through several levels of indirection (multiple symlinks)?

Could I use ls with BASH_SOURCE to always retrieve the script’s path?
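
For context, this is the kind of resolution I have been experimenting with (assuming GNU readlink with -f is available; untested on other platforms):

#!/bin/bash
# Resolve the physical path of this script, following any chain of symlinks.
script_path=$(readlink -f -- "${BASH_SOURCE[0]}")
script_dir=$(dirname -- "$script_path")
echo "script: $script_path"
echo "dir:    $script_dir"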

Go to Source
Author: Shuzheng

How can I recover files that have disappeared?

I’m honestly in tears, I’ve just lost 60GB of music I’ve been collecting for over 15 years.

I mounted a drive to /mnt and moved all the music files over to it. When it finished, I checked that the files were all there, then shut down my computer and rebooted it into Windows. The files were not there, so I shut down again and booted back into Linux, and they have all vanished. I have no idea what to do. Have they actually been deleted?

Any help is hugely appreciated.

Go to Source
Author: Shaun Wild

Active Directory Name Change

Our Active Directory account names will change, for example John Doe's Jdoe@abcd.com will become John.Doe@abcd.com.

My question is: do I need to change all the SQL logins individually, or can people log in to SQL Server Management Studio with their old names? Also, what happens if the login is the owner of a database or a job?
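
If renaming does turn out to be necessary, I assume it would be something along these lines for each login (domain and account names below are placeholders):

-- Rename an existing Windows login to match the new AD name.
ALTER LOGIN [ABCD\Jdoe] WITH NAME = [ABCD\John.Doe];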

Go to Source
Author: SQL_NoExpert

Export Multiple Google Calendars to Google Sheets

I have several Google Calendars I’m subscribed to, one for each of my sales and logistics team members. I want to grab the events from each calendar for both today and tomorrow and put them into my main Logistics spreadsheet. I’m stuck on the “get all the events” part; I’m pretty sure I can get the right info into the cells I want once I’ve figured this out.

Here’s what I have so far, which is my newbie attempt to go through each calendar and grab the events. I’m getting the error that the parameters for CalendarApp.Calendar.getEventsForDay are incorrect.

Exception: The parameters (number) don’t match the method signature for CalendarApp.Calendar.getEventsForDay.
at calendarGrabTodayTomorrow(Code:85:28)

function calendarGrabTodayTomorrow() {
  var calendars = CalendarApp.getAllCalendars();
  var date = new Date();
  var dateTom = (date.getDate() + 1);
  for (var i = 0; i < calendars.length; i++) {
    var curCal = calendars[i];
    var eventsTod = curCal.getEventsForDay(date);
    var eventsTom = curCal.getEventsForDay(dateTom);
    Logger.log(eventsTod);
    Logger.log(eventsTom);
  }
}
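
From the error text, I'm guessing getEventsForDay() needs an actual Date object for tomorrow rather than a plain number; this (untested) is what I was planning to try next:

function calendarGrabTodayTomorrow() {
  var calendars = CalendarApp.getAllCalendars();
  var today = new Date();
  // Build tomorrow as a Date instead of a number.
  var tomorrow = new Date(today);
  tomorrow.setDate(today.getDate() + 1);
  for (var i = 0; i < calendars.length; i++) {
    var eventsTod = calendars[i].getEventsForDay(today);
    var eventsTom = calendars[i].getEventsForDay(tomorrow);
    Logger.log(eventsTod);
    Logger.log(eventsTom);
  }
}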

Any help on this is greatly appreciated!

Go to Source
Author: Jeff Gibson

How to authenticate an Add-In on interprocess communication

We are considering building a Windows application that is split into two parts:
one part running as a Windows service and the other part as Add-Ins.
There could be different kinds of Add-Ins:
for Microsoft Office, for Microsoft Management Console (MMC) and for PowerShell.

The service as well as the Add-Ins are digitally signed with a company certificate and all run on the same machine.
The service runs in a different user account than the Add-Ins.
The Add-Ins may run in various interactive user accounts.
The Add-Ins need to communicate with the Windows service to exchange a secret that is known only to the service but is needed by the Add-Ins to access some sensitive data.
Therefore an Add-In would connect to the service (e.g. via a named pipe) and request that secret on demand.
But we need to prevent other applications/processes from doing the same and obtaining the secret.
Only those Add-Ins that belong to our application (that are digitally signed with our certificate) may get the secret. In other words, the Add-Ins need some way to authenticate themselves to the service.

So my question is:
How can the Add-Ins authenticate themselves to the service when requesting the secret? Is it possible to use the digital signature (or something else that is unique to them and to the service) for this?

It might get more difficult considering that the Add-Ins are hosted by processes that might be signed with different certificates (e.g. Word.exe, mmc.exe, …).

We are implementing on the .NET Framework.
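
For illustration, this is the kind of check we have been considering on the service side (a rough sketch with made-up names, not production code). It identifies and verifies the connecting process, which is exactly where the Word.exe/mmc.exe issue shows up:

using System;
using System.Diagnostics;
using System.IO.Pipes;
using System.Runtime.InteropServices;
using System.Security.Cryptography.X509Certificates;

static class PipeServerSketch
{
    // Win32 call to find out which process is on the other end of the pipe.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetNamedPipeClientProcessId(IntPtr pipe, out uint clientProcessId);

    public static void ServeOnce()
    {
        using (var server = new NamedPipeServerStream("MySecretPipe"))
        {
            server.WaitForConnection();

            if (!GetNamedPipeClientProcessId(server.SafePipeHandle.DangerousGetHandle(), out uint pid))
                throw new InvalidOperationException("Could not determine the client process.");

            // Path of the executable hosting the client end of the pipe.
            string clientExe = Process.GetProcessById((int)pid).MainModule.FileName;

            // Authenticode signer of that executable. For an Office or MMC Add-In this
            // is the host process (Word.exe, mmc.exe, ...), not our signed assembly,
            // which is the weakness described above.
            var signer = X509Certificate.CreateFromSignedFile(clientExe);
            Console.WriteLine($"Client signed by: {signer.Subject}");

            // ... decide whether to hand out the secret based on the signer ...
        }
    }
}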

Thanks for any kind of help.

Go to Source
Author: MartinM

Install and remove a program in the same Puppet declaration

The program I want on my VM requires 7-Zip as part of the installation process but not afterwards. I want to leave the VM as clean as possible by removing 7-Zip after the install is done. In Puppet it seems like it would be easy enough to declare the package once to ensure it is present and again later to ensure it is absent.

I am currently stuck on a duplicate declaration error. In my case it is 7-Zip, but this could happen for any package.

Duplicate declaration: Package[7-Zip 19.00 (x64 edition)] is already
declared

I have the code split into multiple classes, but here is what it looks like in a single manifest.

$installer_location = 'puppet:///resources/'
$installer_name = "7z1900-${::architecture}.msi"
$install_directory = 'C:/temp/'

#Create directory for installer
file { 'install_dir':
  ensure => directory,
  path   => $install_directory,
}

# Copy installer local 
file { 'Copy_7zip_installer':
  ensure => present,
  path   => "${install_directory}${installer_name}",
  source => "${installer_location}${installer_name}",
}

# install 7zip 
package { 'Install 7-zip':
  ensure   => present,
  provider => 'windows',
  source   => "${install_directory}${installer_name}",
}

#Use 7-zip to unpack archive and do pipelines agent install
String $package_path = lookup('azure_pipelines_agent::package_source')
String $package_name = lookup('azure_pipelines_agent::package_name')
String $install_path = lookup('azure_pipelines_agent::install_path')


# Create the install directory
file { 'install-path':
  ensure => directory,
  path   => $install_path,
}

# Copy Azure pipeline package
# Due to a bug the archive forge module requires 7-zip 
# instead of defaulting to powershell for .zip files
archive { 'extract_azure_package':
  source       => $package_path,
  path         => "${install_path}${package_name}",
  extract      => true,
  extract_path => $install_path,
  cleanup      => true,
}

# uninstall 7zip 
# The title (or name) of the package must match the value of the package’s DisplayName property in the registry
# https://puppet.com/docs/puppet/6.17/resources_package_windows.html
package { '7-Zip 19.00 (x64 edition)':
  ensure   => absent,
  provider => 'windows',
  source   => "${install_directory}${installer_name}",
}

# ... Configuration and setup of Azure Pipeline Agent

Is there a simple means of working around what puppet sees as a duplicate declaration?
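
One direction I have been considering (just a sketch, untested) is to drop the second package resource and remove 7-Zip with an exec ordered after the archive extraction:

# Uninstall 7-Zip via its MSI once the agent package has been extracted.
# An unless/onlyif guard would still be needed to keep this idempotent.
exec { 'uninstall_7zip':
  command => "msiexec.exe /x \"${install_directory}${installer_name}\" /qn",
  path    => 'C:/Windows/System32',
  require => Archive['extract_azure_package'],
}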

Go to Source
Author: JaredStroeb

TornadoFX: How to organize application?

I’m trying to grasp the main concepts behind TornadoFX, i.e. when and how to use a view model.

Let’s take the following example: I have a main view with a borderpane as root. This borderpane contains another view with a button. When I press this button a REST call is made and when the response is 403 then the main view should be replaced with a login view (WebView) where the user can log in. When the login is successful, an access token is received and stored somehow.

I was thinking about using a response interceptor defined in a view model injected into the main view. When a 403 is intercepted, a boolean property is updated which triggers a call to “replaceWith” in the main view. The login view replaces the main view and when the login is done a scope containing the access token is created and given to the main view with another call to “replaceWith”. The main view then (or the main view model) uses this token to make successive REST calls as long as the token is valid.

Any suggestion on how I could model this kind of flow? Make REST calls as long as the response is not 403, then show the login view and, after successful authorization, show the main view again and repeat the REST call using the new access token.
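
To make the idea more concrete, this is a rough sketch of the property-driven navigation I have in mind (all names are placeholders, and the REST layer that sets the property is omitted):

import javafx.beans.property.SimpleBooleanProperty
import tornadofx.*

class SessionViewModel : ViewModel() {
    // The REST layer sets this to true when it sees a 403 (on the FX thread, e.g. via runLater).
    val unauthorized = SimpleBooleanProperty(false)
}

class LoginView : View("Login") {
    override val root = borderpane { }   // would host the WebView for the login flow
}

class MainView : View("Main") {
    private val session: SessionViewModel by inject()

    override val root = borderpane {
        // ... the inner view with the button goes here ...
    }

    init {
        session.unauthorized.onChange {
            if (it == true) replaceWith<LoginView>()
        }
    }
}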

Go to Source
Author: matthjes

How to have only one login for multiple websites on the same linux machine?

I have multiple websites (like MediaWiki, Gitea and Kanboard) and I’m thinking about making the login the same on all services.

But how do I do that?

I looked at AD but I read that it’s Windows only?
I also heard about “FreeIPA” but I read that it’s used for linux client machines?
Google is sadly not really helping me with that, as the question is too broad to get good results.

All I want is to make the logins on all my websites the same, so that if I create a user (or change a password) once, it gets updated on every website/service.
I don’t necessarily need the client OS (for example, the Windows login) to be involved, but if that is necessary then I will do that.

Any help or directions would be greatly appreciated.

BR
Chris

Go to Source
Author: Chris

Oracle 12c – Compile A Trigger In SqlDeveloper and SqlPlus

When I try to compile a trigger in SQL Developer by right-clicking the trigger and selecting Compile, it compiles. But when I try to compile the same trigger in SQL*Plus with this command: alter trigger SCHEMA_NAME.TRIGGER_NAME compile; it does not actually seem to compile, yet the output says it compiled successfully. What is the meaning of this?
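
For what it is worth, I assume I can check afterwards whether the trigger is really valid with something like this (schema and trigger name are placeholders):

-- INVALID here would mean the compile did not actually succeed.
SELECT object_name, status
  FROM all_objects
 WHERE owner = 'SCHEMA_NAME'
   AND object_type = 'TRIGGER'
   AND object_name = 'TRIGGER_NAME';

-- In SQL*Plus, this prints any compilation errors for the trigger.
SHOW ERRORS TRIGGER SCHEMA_NAME.TRIGGER_NAME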

Go to Source
Author: jrdba

Problem when sending file via ajax

I am trying to send a file via ajax but I get a 400 bad request error.
My code

data.coverQuestions.medicalMalpractice.file = docFile.prop('files')[0];

$.ajax({
  url: '/wp-admin/admin-ajax.php',
  method: 'post',
  data: {
    action: 'insurance_form_data',
    data,
  },
  contentType: false,
  processData: false,
  success (res) {
    console.log(res);
  }
});

If I remove the following two parameters and do not send the file, the request is sent successfully:

contentType: false
processData: false

What could be the problem?
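
Is FormData what I should be using here? This sketch (untested) is what I was thinking of trying next:

// Wrap everything in a FormData object, which is what
// contentType: false / processData: false are normally combined with.
var formData = new FormData();
formData.append('action', 'insurance_form_data');
formData.append('file', docFile.prop('files')[0]);
formData.append('data', JSON.stringify(data));

$.ajax({
  url: '/wp-admin/admin-ajax.php',
  method: 'post',
  data: formData,
  contentType: false,
  processData: false,
  success: function (res) {
    console.log(res);
  }
});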

Go to Source
Author: Pavel

Return “continue” from function called from loop

I’m currently refactoring a script which has slowly grown beyond control. I’m trying to spin off the repetition into functions. However, I have a repeated test that is called from a loop, and I want it to be able to continue the loop from inside the function.

Shellcheck says

SC2104: In functions, use `return` instead of `continue`.

And the shellcheck wiki says don’t do it. But is there a way?

Below is an example:

#!/bin/sh

AFunction () {
    if [[ "${RED}" -lt 3 ]]; then
        echo "cont"
        continue
    else
        echo "nope"
    fi
}

for i in 1 2 3 4 5
do
    RED=${i}
    AFunction
    echo ${i}
done

This is the output:

cont
1
cont
2
nope
3
nope
4
nope
5

But I would expect

cont
cont
nope
3
nope
4
nope
5
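
The pattern I am currently falling back to is to return a status from the function and let the loop itself decide whether to continue; this does give the output I expect, but I would still like to know whether continue from inside a function is possible:

#!/bin/sh

AFunction () {
    if [ "${RED}" -lt 3 ]; then
        echo "cont"
        return 0      # tell the caller to skip the rest of this iteration
    else
        echo "nope"
        return 1
    fi
}

for i in 1 2 3 4 5
do
    RED=${i}
    if AFunction; then
        continue
    fi
    echo "${i}"
done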

Go to Source
Author: Stripy42

Key handling for shared-key encryption with sodium

Being no cryptography expert, I have some basic questions on how to manage keys with sodium-plus. Let me briefly explain the context: the use case involves sending data from a web frontend to a backend, but the backend should not be able to read it (a deliberate design choice due to privacy concerns). The data in question needs to be usable from different client machines (the same frontend used at different times on different machines). It should be encrypted and decrypted using a secret that is under the control of the user and not stored by the application. There is no second user who should be able to decrypt the data, so I see this as a scenario for a shared-key encryption approach.

I am looking into using sodium-plus.js for this and in particular to use crypto_secretbox, but am actually not clear on how to manage the key part in the scenario — ultimately, the user needs to have a way to access the same data on a different machine. Looking through the API documentation, I see two options:

  1. Generate a random key, convert it to a hex string, present the hex representation to the user, and leave it up to the user how she stores it. The user could then use this hex representation on the next client machine to decrypt her data.
    Unfortunately, I seem to be unable to re-create a cryptographic key from the hex representation (hex2bin returns a (Promise for a) string); see the sketch at the end of this post for what I am attempting. Is this even feasible? Also, I’m not at all convinced that this approach doesn’t entirely defeat the purpose of generating a random key in the first place.
  2. Derive a key from a password that the user has to specify, via crypto_pwhash. However, this also requires a salt, so I’m in a similarly unclear situation about how to handle it: if the user enters the same password on a different machine (on which to decrypt the data), I also have to use the same salt to generate the same cryptographic key. How do people handle this?

If there is something I could easily read up on about all of this, I would appreciate pointers, as my search-fu seems to fail me.
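
For reference, this is roughly the round-trip I have been attempting for option 1 (simplified and untested; I am assuming a CryptographyKey can be constructed from a raw Buffer):

const { SodiumPlus, CryptographyKey } = require('sodium-plus');

(async () => {
    const sodium = await SodiumPlus.auto();

    // Generate a random key and turn it into hex for the user to keep somewhere safe.
    const key = await sodium.crypto_secretbox_keygen();
    const hex = await sodium.sodium_bin2hex(key.getBuffer());

    // ...later, on another machine: rebuild the key from the hex the user supplies.
    const restored = new CryptographyKey(Buffer.from(hex, 'hex'));

    const nonce = await sodium.randombytes_buf(24); // 24 = secretbox nonce size
    const ciphertext = await sodium.crypto_secretbox('my data', nonce, restored);
    const plaintext = await sodium.crypto_secretbox_open(ciphertext, nonce, restored);
    console.log(plaintext.toString());
})();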

Go to Source
Author: schaueho