How to set a static IP on a Hyper-V VM when the host has one network card?

I am running Hyper-V on Windows 10 and created an Ubuntu Server VM running PostgreSQL. My PC has a single NIC.
There is a single network adapter on the VM, connected to the Default Switch in Hyper-V. I can connect to the internet from the VM and can connect to the PostgreSQL instance on the VM from the host, and everything works fine.
But each time I reboot the VM it gets a new IP address, so every day I have to reconfigure the connection string in the app that makes calls to the PostgreSQL instance on the VM.
How can I configure the IP addresses that are assigned to the devices on the VLAN (not sure if that is the correct term) sitting behind the Default Switch? I am guessing there is a DHCP server somewhere, but I can't find anything.

I just want the same IP to always be assigned to the VM, or to be able to set the IP manually and know that it is not going to be assigned to another VM, but I am at a loss as to how to do what seems like it should be a super simple thing.
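
For reference, this is roughly what I mean by setting the IP manually from inside the Ubuntu guest, using netplan; the interface name, address, and gateway below are placeholders and would have to match whatever ip addr currently shows on the Default Switch network. I am also aware that the Default Switch hands out its subnet dynamically, so an address pinned this way may stop working after the host reboots:

sudo tee /etc/netplan/99-static.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:                              # interface name as reported by `ip addr`
      dhcp4: no
      addresses: [172.17.64.100/20]    # a free address in the switch's current subnet
      gateway4: 172.17.64.1            # the Default Switch's gateway address
      nameservers:
        addresses: [172.17.64.1]
EOF
sudo netplan apply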

Go to Source
Author: Wirly Dev

How to Stop Chrome From Highlighting Things I Click On?

Today Google Chrome has decided to do this annoying new thing where, on many sites, when I click on a box that I can type in, it surrounds the text box with a black outline and a bit of white around the edge. It's doing the same thing for drop-down menus as well, placing a box around any option I move my mouse over, with the inside highlighted blue. Check boxes are also much more blue than they used to be.

How do I stop Chrome from doing this? Is there a setting in the flags that I need to disable, or somewhere in the settings?

Go to Source
Author: Demon

Why do tempdb spills still occur even with good row and data size estimates (better than actuals)?

We're seeing tempdb spills for some hashing operations. If the estimates are indeed as good as shown, what would be the next thing(s) to look for? I'm looking for a generic answer without having to resort to the specific query.

This is part of a stored procedure. We just switched to SQL Server 2019 to see if it would auto-adjust, but we are still getting spills so far.

Microsoft SQL Server 2019 (RTM) – 15.0.2000.5 (X64)

Hash Match Warnings

Go to Source
Author: crokusek

Why Do F#, Rust and Others Use an Option Type Instead of Nullable Types Like C# 8 or TypeScript?

AFAIK, an Option type will have runtime overhead, while nullable types won't, because Option is an enum (consuming memory).

Why not just mark optional references as optional, so the compiler can follow the code's execution and find wherever it can no longer be null?

Edit: I see I was misunderstood. I understand and agree with the advantages of avoiding null pointers. I’m not talking about arbitrary pointers that accept null. I’m only asking why not use compile-time metadata, like C# 8’s nullable reference types and TypeScript with strict null checks, where default pointers can’t be null and there’s a special syntax (mostly ?) to indicate a pointer that can accept null.

Edit 2:

Also, Some is strange, in my opinion. Implicit conversion would be better. But that's a language feature and not relevant here.

Go to Source
Author: Chayim Friedman

Automating puppet master deployment with r10k

I am looking to automate the process of standing up and tearing down a development environment. Puppet currently manages the production and staging systems. I am looking to leverage r10k to populate the Puppet master's environments, starting with dev. My issue is: how do I provide a secret key to the Puppet master that it can use to access Bitbucket repos in an automated fashion? My plan is to use Terraform to codify the infrastructure and Puppet to manage the systems. When the development suite isn't needed, I would like to be able to tear it down completely.
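
For illustration, this is the rough shape I have in mind, assuming the control repo lives at git@bitbucket.org:myorg/control-repo.git and the standard r10k paths (Terraform or cloud-init would drop the private key onto the new master before this runs). As I understand it, the git/private_key setting is only honoured by r10k's rugged provider; with the default shell provider the key has to be reachable through the r10k user's SSH configuration instead:

mkdir -p /etc/puppetlabs/r10k/ssh
ssh-keygen -t ed25519 -N '' -f /etc/puppetlabs/r10k/ssh/id_deploy
# register id_deploy.pub as a read-only access key on the Bitbucket control repo

cat > /etc/puppetlabs/r10k/r10k.yaml <<'EOF'
cachedir: '/opt/puppetlabs/r10k/cache'
sources:
  main:
    remote: 'git@bitbucket.org:myorg/control-repo.git'
    basedir: '/etc/puppetlabs/code/environments'
git:
  provider: 'rugged'
  private_key: '/etc/puppetlabs/r10k/ssh/id_deploy'
EOF

r10k deploy environment -p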

Go to Source
Author: deejay

How to find where an object was first instantiated

I have built a theme options class so I can build as many option pages as I want, and I implement a singleton design pattern to fetch the options only once. I also have the WPML plugin installed, so I am trying to make the options multilingual using the available hook wpml_multilingual_options. Everything seems to be saved and is displayed correctly on the frontend, but unfortunately the admin keeps showing the non-translated options, so I am guessing that the singleton object is somehow called before the init hook that wpml_multilingual_options is hooked to. But I can't find where exactly that singleton object is created. Can I trace an object of the same singleton to find out where the first instantiation occurs?

Go to Source
Author: Mohamed Omar

Code check – erratic WiFi connection on ESP8266?

Looking for a quick scan of my code to check if there’s anything daft I’m doing here.

I have built WiFi temperature sensors that I have in various locations, but I'm noticing that they're a little erratic about whether or not they submit data.

A good example is the attached code for a server unit device – it's sitting within a metre of the Orbi satellite, so it naturally has a pretty rock-solid connection. I'm just wondering if there's anything in my code that's making it less resilient and potentially failing to submit.

This code submitted data to my InfluxDB every 15 minutes for a few hours yesterday, but has failed to submit anything new since then. I hit the reset button on the ESP8266 and it started again.

Any help most appreciated!

#include <DHT.h>
#include <DHT_U.h>
#include <ESP8266WiFi.h>
#include <WiFiClient.h>
#include <WiFiUdp.h>
#include <ESP8266mDNS.h>
#include <ArduinoOTA.h>
#include <InfluxDbClient.h>


// v2 Temp sensor
// Connecting to Home NAS

// Instructions
// 1. Update SensorName

#define SSID "xxx" //your network name
#define PASS "xxx" //your network password
#define VersionID "v3"


#define SensorName "ESPTemp_ServerUnit" //name of sensor used for InfluxDB and Home Assistant
// Temp Sensor 1 - ESPTemp_GardenTropical
// Temp Sensor 2 - ESPTemp_GardenRoom
// Temp Sensor 3 - ESPTemp_Greenhouse
// Temp Sensor 4 - ESPTemp_OutsideGreenhouse
// Temp Sensor 5 - ESPTemp_ServerUnit

// Connection Parameters for Jupiter InfluxDB
#define INFLUXDB_URL "http://192.168.1.5:8086"
#define INFLUXDB_DB_NAME "home_assistant"
#define INFLUXDB_USER "xxx"
#define INFLUXDB_PASSWORD "xxx"


// Single InfluxDB instance
InfluxDBClient client(INFLUXDB_URL, INFLUXDB_DB_NAME);
// Define data point with measurement name 'BrynyneuaddSensors'
Point sensor("BrynyneuaddSensors");


#define PORT 80
#define DHTPIN 4     // what pin the DHT sensor is connected to
#define DHTTYPE DHT22   // DHT sensor type
#define BAUD_RATE 115200 //Another common value is 9600

// 900000 ms = every 15 mins it posts to database
const unsigned long delayTime = 15 * 60 * 1000;
unsigned long delayCounter = 0;

DHT dht(DHTPIN, DHTTYPE);


//this runs once
void setup()
{
  Serial.begin(BAUD_RATE);

  // Connect to WIFI
  WiFi.begin(SSID, PASS);
  while (WiFi.status() != WL_CONNECTED)
  {
    delay(500);
    Serial.print("*");
  }

  // Initialise OTA Routine
  ArduinoOTA.onStart([]() {
    Serial.println("Start");
  });
  ArduinoOTA.onEnd([]() {
    Serial.println("nEnd");
  });
  ArduinoOTA.onProgress([](unsigned int progress, unsigned int total) {
    Serial.printf("Progress: %u%%r", (progress / (total / 100)));
  });
  ArduinoOTA.onError([](ota_error_t error) {
    Serial.printf("Error[%u]: ", error);
    if (error == OTA_AUTH_ERROR) Serial.println("Auth Failed");
    else if (error == OTA_BEGIN_ERROR) Serial.println("Begin Failed");
    else if (error == OTA_CONNECT_ERROR) Serial.println("Connect Failed");
    else if (error == OTA_RECEIVE_ERROR) Serial.println("Receive Failed");
    else if (error == OTA_END_ERROR) Serial.println("End Failed");
  });
  //Set the unique name of device
  ArduinoOTA.setHostname(SensorName);
  ArduinoOTA.begin();
  Serial.println("Ready");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());



  //initialise DHT sensor
  dht.begin();

  // set InfluxDB database connection parameters
  client.setConnectionParamsV1(INFLUXDB_URL, INFLUXDB_DB_NAME, INFLUXDB_USER, INFLUXDB_PASSWORD);

  // Add constant tags - only once
  sensor.addTag("device", SensorName);

  // Check server connection
  if (client.validateConnection()) {
    Serial.print("Connected to InfluxDB: ");
    Serial.println(client.getServerUrl());
  } else {
    Serial.print("InfluxDB connection failed: ");
    Serial.println(client.getLastErrorMessage());
    Serial.println(client.getServerUrl());
    Serial.println("Exiting DB Connection");

  }

}

//this runs over and over
void loop() {
  ArduinoOTA.handle();

  if (millis() - delayCounter > delayTime ) {

  //  Serial.println("DelayCounter= ");
  //  Serial.println(delayCounter);
 //   Serial.println("delayTime= ");
  //  Serial.println(delayTime);

    float h = dht.readHumidity();
    Serial.print("Humidity: ");
    Serial.println(h);
    // Read temperature as Celsius (the default)
    float c = dht.readTemperature();
    Serial.print("Temperature: ");
    Serial.println(c);

    // Check if any reads failed and exit early (to try again).
    if (isnan(h) || isnan(c)) {
      Serial.println("Reading DHT22 Failed, exiting");
      return;
    }

    //update Influx DB channel with new values
    updateTemp(c, h);

    Serial.print("Writing to InfluxDB: ");


    //INFLUXDB - clear temp data so it doesn't repeat
    sensor.clearFields();

    // Update Influx DB
    sensor.addField("Temperature", c);
    sensor.addField("Humidity", h);

    Serial.println(sensor.toLineProtocol());
    // Write data
    client.writePoint(sensor);

    delayCounter = millis();  // reset the timer
  }
}

bool updateTemp(float tempC, float humid) {

  WiFiClient client;    // Create a WiFiClient for the TCP connection


  Serial.println("Receiving HTTP response");
  while (client.available()) {
    char ch = static_cast<char>(client.read());
    Serial.print(ch);
  }
  Serial.println();


  Serial.println("Closing TCP connection");
  client.stop();
  return true;
}

Go to Source
Author: Alphatester77

How can I unify my distributed build environments using Jenkins and Docker?

I've recently taken over the CI management for a medium-sized team of developers. Currently, we have one Jenkins server instance as well as six other instances acting as Jenkins agents to build on. We have been using labels to determine where builds run. For example, some instances are set up to run under environment A or B. In the future, we would like to add more instances for A and B, as well as add new environments, e.g., C, D.

My goal now is to unify these build environments and define them in one place, to allow for better scalability and easier management of these instances. Docker seems to be a good fit for this. I know that I can create a Docker image for each of my individual environments. What I don't know is how to distribute the images into containers across my six machine instances. The solution I am looking for would allow multiple Docker containers to be spun up on a single instance (Docker host?), but only if needed (i.e., when all other hosts are already running a container). Additionally, I need a solution that supports both Windows and Linux hosts.

I know that Docker support in Jenkins seems to be largely plugin-based, so I was hoping for some direction on which plugin to research first.
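
For context, this is roughly how I picture an agent container being launched by hand, using the official jenkins/inbound-agent image (the controller URL, agent secret and node name below are placeholders taken from the node's configuration page); my understanding is that plugins such as the Docker plugin automate exactly this kind of on-demand launch:

docker run -d --name linux-agent-1 jenkins/inbound-agent \
  -url http://jenkins-controller:8080 <agent-secret> linux-agent-1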

Thanks for your time.

Go to Source
Author: qgoehrig

How to append a string at the end of a line, but before the last n characters

How do I add a string at the end of a line in a file, but also before the last 3 characters?

I tried, but I ended up appending at the very end of the string. Can anyone help me please?

Example :
I/P:
file1.txt:

content :
line1: a
line2: bbbb
line3: vvvvv
line4: cccccc
line5: dd
line6: rules('test', '.*', [abc,bcd]).

I tried using
sed -i '/rules/ s/$/ ***/' file1.txt

O/P:
rules('test', '.*', [abc,bcd]). ***

But the preferred O/P:
rules('test', '.*', [abc,bcd,***]).

Go to Source
Author: yugandhar

Ubuntu stuck on login page – can't log in with keyboard

I don't know why, but my Ubuntu session suddenly got stuck at the login page and I can't get through any more. When starting the VM, I get the following screen:

[screenshot]

I’ve tried CTRL+ALT+F1 and CTRL+ALT+F7 but no luck… my keyboard and mouse aren’t working. I’ve also tried CTRL+ALT+F3 and ran

sudo apt install --reinstall ubuntu-desktop

sudo apt install --reinstall ubuntu-session

but still the same.

[screenshot]

I'm wondering what's wrong and what I can do. If it is not possible to recover my session, is there a way to create another VM using the disk of this one? I'd hate to lose everything.

Go to Source
Author: Boris M

Is it ok to share relevant login information on an API’s public endpoint?

I have an (https-only) API that uses OAuth2 for authentication and authorization via access tokens.

Any request to the API needs an access token in the Authorization header. The API validates that this access token is signed by a specific tenant (e.g. Google), is targeted at a specific audience, and has some specific scopes.

As a user/service, it would be really helpful if the API exposed such information on a public endpoint, so that I know where I should fetch an access token from (e.g. through PKCE) and with which scopes I need to request the token.

A natural mechanism for this is for the API to have a public endpoint returning something like

{
    "provider_uri": "https://accounts.google.com/.well-known/openid-configuration",
    "client_id":"...apps.googleusercontent.com",
    "scope":"openid ..."
}

Is this a valid approach? What other relevant idioms exist to address such a problem?

Go to Source
Author: Jorge Leitao

Is it an anti-pattern for a service to both expose APIs and listen to events?

I am planning to build a service which will have simple REST APIs and a database in the backend. I also want to add logic that listens to notifications emitted by another service, with some business logic that updates a row in the database.

For updating the database row from Notifications, I can think of 2 approaches:

  1. Should I create an API which is internal, used only by the service, and have the listener process call this API instead of directly updating the database?

  2. The listener process directly updates the database.

I can see some pros and cons of each approach. In Approach 1, we are adding a REST API unnecessarily, one that is never used by clients.

In Approach 2, we are giving a back-door way to reach the database, instead of all requests coming through the REST API.

Can someone help me here and tell me if one of them is an anti-pattern, and which one is better to use?

Go to Source
Author: hatellaCollege

Slab SReclaimable memory cannot be reclaimed?

CentOS Linux release 7.2.1511 (Core)

Linux version 3.10.0-514.26.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Tue Jul 4 15:04:05 UTC 2017

/proc/meminfo:

MemTotal:       16267428 kB
MemFree:          237816 kB
MemAvailable:    7501712 kB
Buffers:           18076 kB
Cached:           745340 kB
SwapCached:            0 kB
Active:          5015316 kB
Inactive:         152100 kB
Active(anon):    4404088 kB
Inactive(anon):      972 kB
Active(file):     611228 kB
Inactive(file):   151128 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:              1928 kB
Writeback:             0 kB
AnonPages:       4404052 kB
Mapped:            36320 kB
Shmem:              1008 kB
Slab:           10579260 kB
SReclaimable:    6839864 kB
SUnreclaim:      3739396 kB
KernelStack:       19232 kB
PageTables:        25760 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     8133712 kB
Committed_AS:    7992196 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       94920 kB
VmallocChunk:   34359635708 kB
HardwareCorrupted:     0 kB
AnonHugePages:   2297856 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      161664 kB
DirectMap2M:    10323968 kB
DirectMap1G:     8388608 kB

slabtop:

 Active / Total Objects (% used)    : 18223363 / 42966058 (42.4%)
 Active / Total Slabs (% used)      : 1183671 / 1183671 (100.0%)
 Active / Total Caches (% used)     : 73 / 95 (76.8%)
 Active / Total Size (% used)       : 4513721.33K / 10427564.51K (43.3%)
 Minimum / Average / Maximum Object : 0.01K / 0.24K / 8.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
6763716 6763212  99%    0.11K 187881       36    751524K sysfs_dir_cache
5604032 949314  16%    0.06K  87563       64    350252K kmalloc-64
4202094  67116   1%    0.04K  41197      102    164788K ext4_extent_status
3893484 2373049  60%    0.19K 185404       21    741616K dentry
3748191 1802716  48%    0.58K 138832       27   2221312K inode_cache
3251724 987321  30%    0.09K  77422       42    309688K kmalloc-96
2611301 1924963  73%    0.57K  93416       28   1494656K radix_tree_node
2590224 764829  29%    0.10K  66416       39    265664K buffer_head
2042465 284009  13%    0.05K  24029       85     96116K shared_policy_node
1802221 613054  34%    1.01K  58287       31   1865184K ext4_inode_cache
1263674 182269  14%    0.31K  50548       25    404384K nf_conntrack_ffffffff81aa0e80
1251789 210295  16%    0.19K  59609       21    238436K kmalloc-192
726016 686721  94%    0.03K   5672      128     22688K kmalloc-32
712992   9160   1%    0.50K  22281       32    356496K kmalloc-512
591360  43401   7%    0.12K  18480       32     73920K kmalloc-128
579564   4356   0%    0.11K  16099       36     64396K jbd2_journal_head
310514   4893   1%    2.00K  19576       16    626432K kmalloc-2048
183680 181248  98%    0.06K   2870       64     11480K kmem_cache_node
181936 180969  99%    0.25K   5686       32     45488K kmem_cache
130254   1632   1%    0.04K   1277      102      5108K Acpi-Namespace
 84512  19793  23%    1.00K   2641       32     84512K kmalloc-1024
 83312   2464   2%    0.25K   2608       32     20864K dquot
 80224  12022  14%    0.25K   2574       32     20592K kmalloc-256
 53538   2009   3%    1.94K   3347       16    107104K TCP
 39490  15690  39%    4.00K   5106        8    163392K kmalloc-4096
 28800    860   2%    1.56K   1440       20     46080K mm_struct
 23808  20992  88%    0.02K     93      256       372K kmalloc-16

The problem:

I am running some Docker containers on this host, with a memory limit of about 13 GB (about 5 GB actually used). I want to start another Java process, but it gets killed by the OOM killer. The SReclaimable part of Slab cannot be freed.

Things I tried:

echo 3 > /proc/sys/vm/drop_caches
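
For reference, these are the other knobs for dentry/inode cache pressure that I am considering, though I am not sure they would help here:

sync
echo 2 > /proc/sys/vm/drop_caches        # drop reclaimable dentries and inodes only
sysctl -w vm.vfs_cache_pressure=200      # reclaim dentry/inode caches more aggressively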

Go to Source
Author: nuaa_tiger

Backup and upgrade MongoDB databases

I have a web application using MongoDB (version 2.6.12), hosted on an (Ubuntu 16.04) server at DigitalOcean.

I like to use Robo 3T to connect to the remote database and do simple queries.

Sometimes I back up the database with mongodump, and mostly I rely on the weekly automatic server snapshot provided by DigitalOcean.

Now, I need to run queries containing $lookup, and I was told that MongoDB version 2.6.12 does not support that. So I need to seriously back up my database and move to a more recent MongoDB. I still want to keep using Robo 3T to query the production database (by preference) or the backup database (if it is updated very often, e.g., every day).

I have several questions:

1) I would prefer to have a more regular auto-backup (e.g., every day) of the database. Which way is recommended? (Additionally, it seems that Atlas does NOT support Cloud Provider Snapshots for clusters served on DigitalOcean?)

2) If I buy a new server hosted on Azure, install MongoDB 4.2, and copy the whole database with mongodump to the new server, will it work?
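
For question 1, this is the kind of daily mongodump cron job I have in mind (the database name, paths and schedule are placeholders; host and auth options would need to be added):

0 2 * * * mongodump --host localhost --port 27017 --db mydb --out /var/backups/mongo/$(date +\%F)

And for question 2, I imagine restoring such a dump into the new MongoDB 4.2 server with something like the following (again with placeholder paths), though I understand the documented upgrade path goes one major version at a time, so I am not sure a 2.6 dump restores cleanly into 4.2:

mongorestore --host new-server --port 27017 --db mydb /var/backups/mongo/2020-07-01/mydb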

Go to Source
Author: SoftTimur