Getting the exact and closest match using CONTAINS/CONTAINSTABLE in SQL Server

I have a table named ProductCodes with 30,000+ records that contains a column 'Value'.

I have made a query that returns the records matching and containing a passed string, using CONTAINS.

SELECT TOP 5 * FROM ProductCodes
WHERE CONTAINS(Value, '"pepper*"')

It does return the records containing 'Pepper', e.g. Pepper and Bayleaf, Peppermint, Dr. Pepper, etc.
I also have a record that contains only the exact string 'Pepper', which is displayed in the middle of the results. I need to select the TOP 5 records and also get the exact match of the passed string.

I also tried to use the CONTAINSTABLE

SELECT Value, RANK
FROM ProductCodes PC
INNER JOIN CONTAINSTABLE(ProductCodes, Value, N'pepper') AS KEY_TBL
    ON PC.Value = KEY_TBL.[KEY]
ORDER BY RANK DESC

I tried adding WHERE RANK = 1000, which I read somewhere is the rank of the closest match, but as I found out, the record 'Pepper' has a rank of 128, and phrases that match more than one word have a higher rank.

I am fairly new to using FREETEXT and semantic search, so I am still learning. How can I get both the exact match and the 'partial' matches in my TOP 5? TIA

edit:

I have added a UNION ALL query that first selects the exactly equal record and combines it with my CONTAINS query.

SELECT TOP 5 * FROM (
    SELECT TOP 5 * FROM ProductCodes
    WHERE Value = 'pepper'
    UNION ALL
    SELECT TOP 5 * FROM ProductCodes
    WHERE CONTAINS(Value, '"pepper*"')
) AS Value

Now I can get both the exact match and the partial matches, but I wonder whether it will make my query slower.
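For comparison, a single-pass sketch that avoids the UNION by sorting the exact match first (assuming the same full-text index on Value; untested against this schema):

```sql
-- Sketch: one pass over the full-text results, forcing the exact match
-- to sort to the top.
SELECT TOP 5 *
FROM ProductCodes
WHERE CONTAINS(Value, '"pepper*"')
ORDER BY CASE WHEN Value = 'pepper' THEN 0 ELSE 1 END;
```

This also avoids returning the exact row twice, which the UNION ALL version can do, since the exact match satisfies the CONTAINS branch as well.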

Go to Source
Author: bish-cuit

Would anti-virus software make other processes consume more CPU?

Would anti-virus software make the processes that it scans consume more CPU?

We recently switched to a new next-gen, AI-based anti-virus software. On a couple of our Linux servers that run Java processes, there have been frequent high spikes in CPU usage recently. Apparently, it is Java that is the highest consumer of CPU, while the anti-virus process isn't using a lot. I am pretty sure that it is not the anti-virus behind the higher CPU usage, since these spikes started a couple of days before the AV software was even installed. So, right now, I am in the process of convincing others that it is not the anti-virus (trust me, no one believes what they see in top -c). Before I go back to the team, I would like to make sure that when the anti-virus scans every single file that gets opened by other processes, the resulting CPU usage shows up against the anti-virus process and not against the process that opens the file. Is this how it will appear in CPU usage stats?

The servers run Amazon Linux and the anti-virus is CrowdStrike Falcon.
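One wrinkle worth checking before presenting to the team: on-access scanning hooks can run in kernel context on behalf of the process that opened the file, and that time may then be charged to the scanned process as system time rather than to the AV daemon. A sketch of how to split a process's CPU into user vs kernel time (field numbers per proc(5); PID 1 is only an example, and `pidstat -u -p <pid>` from sysstat gives the same split if installed):

```shell
# utime (user-mode ticks) and stime (kernel-mode ticks) are fields 14 and 15
# of /proc/<pid>/stat. Kernel-side work done on behalf of a process
# (including some security hooks) is charged to that process's stime.
# Assumes the process name contains no spaces, which would shift the fields.
pid=1   # example PID; substitute your Java process's PID
awk '{printf "utime=%s ticks  stime=%s ticks\n", $14, $15}' "/proc/$pid/stat"
```

If the Java processes show a large jump in stime relative to utime after the AV rollout, that would point at scan overhead even though top attributes the total to Java.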

Go to Source
Author: Citylight

What’s the easiest language for the widest platform distribution at the moment? [closed]

If you were going to write a lightly demanding app (i.e. not cutting-edge 3D gaming, but also not Flappy Bird), and you wanted to target Windows, macOS, Linux, iOS, Android, and maybe even Switch, what language would you choose? And why?

Back in the day, Java sacrificed speed for easier portability, but what about for today’s modern platforms? Has anyone successfully developed a reliable and relatively easy pipeline to support them?

Go to Source
Author: Django Reinhardt

Form for Collecting Students’ Cell Numbers

I am a college instructor and I will be teaching Summer classes next week. My school allows instructors to broadcast time-sensitive urgent messages to students’ cellphones, so I would like to collect my students’ numbers.

I am wondering if there is any free online form out there (a Google spreadsheet or form, perhaps) that I can email to students, asking them to enter their cell numbers without seeing others’ numbers, thus satisfying my school’s privacy policy.

I am a computer-savvy person but definitely not a heavy code-writer. Thank you for your time and help.

Go to Source
Author: A.Magnus

Linking nodes in a cluster to run a single instance of a docker container

In this cluster, there are 11 identical nodes, each with 16 GB of memory. So far, I’ve been able to connect each node of the cluster to a Docker swarm hosted on the control node. My end goal is to run a single instance of a container that utilizes the resources of every node, so the total memory available to that container would be 176 GB. How should I accomplish this?

Each worker node is running Ubuntu Server 20.04 and the control node is running Ubuntu Desktop 16.04, all with Docker Engine 19.03.8.

Go to Source
Author: AJ_4real

SQL Server 2019 performance worse than 2012… am I missing something?

We have a SQL Server 2012 server which far outperforms a SQL Server 2019 server on (as far as I can see) the same infrastructure. We are hosting both databases on a cloud platform with the same SLAs. Both have 180 GB RAM and 16 processors.

However there are a few key differences.

  1. The 2012 server is Enterprise edition, the 2019 is Standard. As far as I know, this shouldn’t make a difference
  2. The 2012 database was restored to the 2019 server and its compatibility level changed to 150 (2019)
  3. MAXDOP on the 2012 server was 0; on the 2019 server it is set to 8, as recommended by Microsoft and others
  4. Cost threshold for parallelism = 5 on the 2012 server, 20 on the 2019 server

Other database settings were not changed, so the following settings are default on 2019, I believe:

  • Legacy Cardinality Estimation = OFF
  • Parameter Sniffing = ON
  • Query Optimiser Fixes = OFF

The queries we run are mainly large, complex multi-join queries performing updates and inserts, with the occasional small selects from users. We load large files into the database and then process the data in large queries, usually one at a time. In between these large “loads” we have users doing selects on other database tables not being loaded/processed, in preparation for future load/process steps. Generally we are seeing 30%-50% performance reductions in processing. I figured this was because of the MAXDOP setting, but altering it to 0 made no difference over a series of runs.

Our major symptom is that we get lock timeouts when we try to connect to the 2019 server while it is busy processing, whereas the 2012 server still services connections, just very slowly. I was thinking of setting the connection timeout on the server to a high value, but I suspect we still won’t get responses. It’s like it’s blocking all new connections if it’s even slightly busy.

Are there other things I should try? Are those database settings worth messing around with?

I could dive in further and start looking at DMVs, however this seems to be close to a “like for like” environment upgrade with considerable drops in performance. Just checking there isn’t something else I should check before doing a bigger investigation.
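For a quick experiment before the bigger investigation, the database-scoped settings listed above can be flipped per database (a sketch; worth trying on a non-production copy first), e.g. to test whether the new cardinality estimator is behind the regression:

```sql
-- Sketch: revert this database to the legacy (2012-era) cardinality
-- estimator, then re-run a representative load/process cycle and compare.
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
```

If timings recover, that points at plan regressions under the 150 compatibility level rather than hardware or edition differences.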

Go to Source
Author: blobbles

Not able to get .TextFrame.Characters.Text to display

Everything else is fine, but I am not able to get the text message to display in the shape.

Private Sub ShowPleaseWait()

    With digitalcertificate

        .Shapes("Wait").Delete

        ' Add the shape on the sheet the With block refers to;
        ' the original used ActiveSheet here, which may be a different sheet
        .Shapes.AddShape(msoShapeRectangle, 500, 150, 300, 200).Name = "Wait"

        With .Shapes("Wait")

            .TextFrame.Characters.Text = "Please Wait.... Processing Data"

            ' Fill belongs to the shape, not to ActiveSheet
            .Fill.ForeColor.SchemeColor = 44

            ' TextFrame must be qualified with the shape; the message is 31
            ' characters long, so Length:=32 would overrun it
            With .TextFrame.Characters(Start:=1, Length:=31).Font
                .Name = "Arial"
                .FontStyle = "Bold"
            End With

        End With

    End With

End Sub

Go to Source
Author: Deepak Bhanushali

Is there a less complicated alternative to handling this simple mySQL query in Node?

To Put My Question In Better Context…

I am about done writing my first real-world Node application, which would be classified as a REST API. For myself, it was a bit challenging to wrap my head around Node’s Async event processing. I still don’t think I fully grasp it, as you will see by the specifics of this post. That being said…

Am I Making This Overly Complicated?

I found some code snippets online that helped me get my API working. Below is one function that deals with finding a client. I guess you would call the file this is in a Controller, for those of you familiar with MVC. But this being Node, and NOT MVC, my question is this:

GET http://localhost/clients/3 -> brings me to this code…

// Find a single client with a Id
exports.findOne = (req, res) => {
  Client.findById(req.params.clientId, (err, data) => {
    if (err) {
      if (err.kind === "not_found") {
        res.status(404).send({
          message: `Not found Client with id ${req.params.clientId}.`
        });
      } else {
        res.status(500).send({
          message: "Error retrieving Client with id " + req.params.clientId
        });
      }
    } else res.send(data);
  });
};

What is the reason for this call to have a callback itself???

Client.findById(req.params.clientId, (err, data) => {

which in turn, looks like this:

Client.findById = (clientId, result) => {
  sql.query(`SELECT * FROM clients WHERE id = ${clientId}`, (err, res) => {
    if (err) {
      console.log("error: ", err);
      result(err, null);
      return;
    }

    if (res.length) {
      console.log("found client: ", res[0]);
      result(null, res[0]);
      return;
    }

    // not found client with the id
    result({ kind: "not_found" }, null);
  });
};

This seems like a lot of work for a simple query function. Coming from a PHP background, I could do this in very few lines of code there.

The whole thing seems complicated. Is all this really necessary for such a simple API that returns a client record of only four columns?

For that matter, do I even need that intermediate function (controller)? What’s wrong with just routing straight to the final function (in this case, the function named findById)?
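For what it’s worth, one common way to flatten the nesting is to wrap the callback-style model function in a Promise so the controller reads top to bottom with async/await. A minimal sketch, with an in-memory array standing in for sql.query (all names here are illustrative, not from any particular library):

```javascript
// In-memory stand-in for the clients table / sql.query.
const db = [{ id: 3, name: "Acme" }];

// Model: resolves with the row, or rejects with { kind: "not_found" },
// mirroring the error shape the callback version used.
function findById(clientId) {
  return new Promise((resolve, reject) => {
    const row = db.find((c) => c.id === clientId);
    if (row) resolve(row);
    else reject({ kind: "not_found" });
  });
}

// Controller: linear flow instead of nested callbacks.
async function findOne(req, res) {
  try {
    const data = await findById(Number(req.params.clientId));
    res.send(data);
  } catch (err) {
    if (err.kind === "not_found") {
      res.status(404).send({ message: `Not found Client with id ${req.params.clientId}.` });
    } else {
      res.status(500).send({ message: "Error retrieving Client with id " + req.params.clientId });
    }
  }
}
```

The callback in the original code exists because the database call is asynchronous; the Promise version doesn’t remove that, it just changes the notation. The controller layer is still useful as the place where database errors are translated into HTTP status codes.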

I’d sure appreciate some input on this before I get too far ahead. I have another dozen endpoints to code, so if I need to change directions, now would be the time.

Thanks!

Go to Source
Author: John S.

Creating a subscription platform with the ability to post dynamic content to user pages

I have a business case which I want to share and ask for suggestions.
I need to create a subscription platform for an on-demand copy system.
The user chooses a subscription plan and raises requests to get personalized copies; the client’s team works on those copies, and the approved copies should be available on the user’s dashboard.
Any thoughts on how to go about it?

Go to Source
Author: Penter Projects

GNOME: how to move and resize windows using one hotkey, like Xfce (Alt) or elementary OS (Super)

How can I move and resize windows in the GNOME desktop environment like in Xfce or elementary OS?

In Xfce, we can hold the Alt key and use the left mouse button to move a window, or the right mouse button to resize it; similarly, in elementary OS it is the Super key. But in GNOME, how can I set the Super key to drag or resize windows? Thank you!
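GNOME (Mutter) exposes this behaviour through gsettings, and on many distributions the modifier already defaults to Super; a sketch for checking and setting it:

```shell
# Hold this modifier and drag with the left button to move a window:
gsettings set org.gnome.desktop.wm.preferences mouse-button-modifier '<Super>'

# Optionally let <modifier> + right-button drag resize the window:
gsettings set org.gnome.desktop.wm.preferences resize-with-right-button true
```

`gsettings get org.gnome.desktop.wm.preferences mouse-button-modifier` shows the current value if you want to check before changing anything.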

Go to Source
Author: 鸿则_

Ubuntu specific RAM issues – chrome crashing and memtest86 freezing

G’day,

I’m running 4x8 GB sticks of Corsair Vengeance LPX RAM, a 5700 XT and a Ryzen 5 3600X.

I installed Ubuntu 20.04 a week ago to dual-boot with Windows and I’ve been having RAM-related problems. Google Chrome pages have been crashing with the “Aw, Snap!” error: SIGSEGV. Chrome will also crash completely if I have too many tabs open. I ran memtest86 from the Ubuntu boot menu and it freezes at 16% on the first pass when I have all my RAM in, and at 35% when only 2 sticks are in (it doesn’t seem to matter which two sticks or which DIMM slots, either). I’ve yet to be able to get past 2 seconds into a memtest86 test.

I don’t have these issues when I run Windows, and the Windows Memory Diagnostic didn’t return any errors. It seems like the issues appear when I try to use more than a certain amount of RAM in Ubuntu, but I’m not sure.

The command free -m shows the amount of memory I’d expect.

Does anyone have any thoughts? I’ve tried installing PPA drivers for my GPU and had a look at my RAM’s latency timings in the UEFI, but no luck.

Cheers.

Go to Source
Author: RuddyDev

How to bootstrap trust in an on-premise environment?

As part of moving from a few on-premise monoliths to multiple on-premise microservices, I’m trying to improve the situation where database passwords and other credentials are stored in configuration files in /etc.

Regardless of the technology used, the consumer of the secrets needs to authenticate with a secret store somehow. How is this initial secret-consumer authentication trust established?

It seems we have a chicken-and-egg problem. In order to get credentials from a server, we need to have an /etc/secretCredentials.yaml file with a cert, token or password. Then I might (almost) as well stick with the configuration files I have today.

If I wanted to use something like HashiCorp Vault (which seems to be the market leader) for this, there is a Secure Introduction of Vault Clients article. It outlines three methods:

  • Platform Integration: Great if you’re on AliCloud, AWS, Azure, GCP. We’re not
  • Trusted Orchestrator: Great if you’re using Terraform, Puppet, Chef. We’re not
  • Vault Agent: The remaining candidate

When looking at the various Vault Auth Methods available to the Vault Agent, they all look like they boil down to having a cert, token or password stored locally. Vault’s AppRole Pull Authentication article describes the challenge perfectly, but then doesn’t describe how the app gets the SecretID 🙁

The only thing I can think of is IP address. But our servers are all running in the same virtualization environment, and today they all have random IP addresses from the same DHCP pool, making it hard to create ACLs based on IP address. We could change that. But even then, is the request’s IP address/subnet sufficiently safe to use as a valid credential?

We can’t be the first in the universe to hit this.
Are there alternatives to having an /etc/secretCredentials.yaml file or ACLs based on IP address, or is that the best we can do?
What is the relevant terminology and what are the best practices, so we don’t invent our own (insecure) solution?

Go to Source
Author: Peter V. Mørch

Old LetsEncrypt SSL certificate still being served by EC2 instance

I’m working on renewing the SSL certificate for my websites. They are all running on the same EC2 instance with the same Apache server. Two of my domains recently expired, so I also tried to remove them from the certificate.

I think I renewed the certificate correctly, because when I run the following, the output suggests the certificate is up to date:

$ /usr/local/bin/certbot-auto certonly --webroot -w /var/www/html -d apptic.me -d www.apptic.me -d athenaforyou.com -d www.athenaforyou.com -d cloudlease.me -d www.cloudlease.me -d foleyrounds.com -d www.foleyrounds.com -d ugba198.com -d www.ugba198.com -d wildcattribune.com -d www.wildcattribune.com
Requesting to rerun /usr/local/bin/certbot-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Cert not yet due for renewal

You have an existing certificate that has exactly the same domains or certificate name you requested and isn't close to expiry.
(ref: /etc/letsencrypt/renewal/apptic.me.conf)

What would you like to do?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: Keep the existing certificate for now
2: Renew & replace the cert (limit ~5 per 7 days)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
Keeping the existing certificate

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Certificate not yet due for renewal; no action taken.

Then I restart Apache with sudo systemctl restart httpd, which produces no output.

But when I navigate to my website at www.apptic.me, I see there is still an SSL error saying the certificate is expired. What could be happening?
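One way to narrow this down (a sketch; the paths assume the standard Let’s Encrypt and Amazon Linux Apache layout, so adjust to your config) is to compare the certificate on disk with the one Apache actually serves:

```shell
# Which certificate files is Apache configured to load?
grep -ri "SSLCertificateFile" /etc/httpd/conf /etc/httpd/conf.d

# Dates on the certificate certbot manages on disk:
openssl x509 -noout -dates -in /etc/letsencrypt/live/apptic.me/fullchain.pem

# Dates on the certificate the server presents on the wire:
echo | openssl s_client -connect www.apptic.me:443 -servername www.apptic.me 2>/dev/null \
  | openssl x509 -noout -dates
```

If the on-disk dates are new but the served dates are old, Apache is likely pointed at a different file than the one certbot renewed — certonly only writes the files, it does not update the Apache configuration.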

I’m running Apache 2.4:

$ yum info httpd
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
236 packages excluded due to repository priority protections
Installed Packages
Name        : httpd
Arch        : x86_64
Version     : 2.4.43
Release     : 1.amzn2
Size        : 4.0 M
Repo        : installed
From repo   : amzn2-core
Summary     : Apache HTTP Server
URL         : https://httpd.apache.org/
License     : ASL 2.0
Description : The Apache HTTP Server is a powerful, efficient, and extensible
            : web server.

Go to Source
Author: Neel

traefik two hosts in the same config

I’m trying to run Nexus in a swarm with Traefik. Nexus has an admin interface that runs on port 8081, and also a Nexus Docker registry repository running on port 9615.

My goal is to access the registry through registry.docker.domain and Nexus through nexus.domain, but I am not able to do this.

Traefik config in the compose file:

ports:
  - 8081
  - 9615
deploy:
  labels:
    - "traefik.enable=true"
    # Config Nexus
    - "traefik.http.router.nexus.rule=Host(`nexus.domain`)"
    - "traefik.http.services.nexus.loadbalancer.server.port=8081"
    - "traefik.http.routers.nexus.entrypoints=web-secure"
    - "traefik.http.routers.nexus.service=nexus"

    # Config Registry
    - "traefik.http.router.registry.rule=Host(`registry.docker`)"
    - "traefik.http.services.registry.loadbalancer.server.port=9615"
    - "traefik.http.routers.registry.entrypoints=web-secure"
    - "traefik.http.routers.registry.tls=true"
    - "traefik.http.routers.registry.service=registry"

If I set up only the registry labels, I can access the registry at https://registry.domain and Nexus at registry.domain:PORT.

If I set up only the Nexus labels, I can access Nexus at http://nexus.domain.

But with the config above, with labels for both at the same time, it didn’t work; Nexus and the registry both return 404.

Obs: I’m using Traefik 2.2; the documentation and usage are very different from 1.6.
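One thing worth double-checking (an observation, not a confirmed fix): the two rule labels above use traefik.http.router.… (singular) while the other router labels use traefik.http.routers.… (plural); Traefik v2 only recognises the plural form, so rules under the singular key would be silently ignored. A consistently named sketch (using registry.docker.domain from the stated goal rather than the registry.docker in the label):

```yaml
labels:
  - "traefik.enable=true"
  # Nexus admin UI
  - "traefik.http.routers.nexus.rule=Host(`nexus.domain`)"
  - "traefik.http.routers.nexus.entrypoints=web-secure"
  - "traefik.http.routers.nexus.service=nexus"
  - "traefik.http.services.nexus.loadbalancer.server.port=8081"
  # Docker registry
  - "traefik.http.routers.registry.rule=Host(`registry.docker.domain`)"
  - "traefik.http.routers.registry.entrypoints=web-secure"
  - "traefik.http.routers.registry.tls=true"
  - "traefik.http.routers.registry.service=registry"
  - "traefik.http.services.registry.loadbalancer.server.port=9615"
```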

Go to Source
Author: Joao Vitorino