What exactly does PostgreSQL (or other databases) do internally when you “prepare” a query rather than just running it directly?

When I face a huge number of repeated INSERTs in a loop, I tend to first create a “prepared skeleton query” before the loop, and in the loop I simply “execute” this prepared query, sending it only the values. I heard long ago, and can understand in some abstract sense, that this is more optimized than just running a normal parameterized query inside the loop.

However, I don’t understand what exactly PG is doing that makes it so much faster, if it even is so much faster; frankly, I have never run any real benchmarks comparing the two methods.

Also, shouldn’t PG be “warmed up” to a query once you repeat it several (let alone many) times in very short succession, and perhaps do the same thing as I do manually, but internally, when I just use a normal parameterized query?

I find myself constantly second-guessing what the database is up to internally. I have no real grasp of how “smart” it is. I fear that I’m doing a lot of things which are meaningless because it’s already taken care of internally by its smart code.

Maybe “preparing” and “executing” is an archaic practice which has no real benefit these days?

It sounds like PG is basically allocating resources to “prepare” for a large number of upcoming similar INSERT queries, but I don’t understand what exactly it would do differently compared to just executing them one by one. I also don’t understand what the point of a non-INSERT prepared query/statement would be.
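For reference, the server-side mechanism in question looks roughly like this (a minimal sketch; the table and column names are made up):

```sql
-- Parse, analyze, and rewrite the statement once, up front.
PREPARE insert_player (text, text) AS
    INSERT INTO players (name, email) VALUES ($1, $2);

-- Each EXECUTE skips parsing/analysis and sends only the values;
-- after several runs PostgreSQL may also switch to a cached generic plan.
EXECUTE insert_player ('Player 1', 'player1@example.com');
EXECUTE insert_player ('Player 2', 'player2@example.com');

-- Prepared statements live for the session unless deallocated.
DEALLOCATE insert_player;
```

Client drivers usually expose the same thing through the extended query protocol rather than literal PREPARE statements.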

PS: Just to clear up any confusion: I never use non-parameterized queries, whether prepared or not. A lot of people confuse “parameterized queries” with “prepared statements”. Even I called the latter “prepared queries” in this question…

Go to Source
Author: Jevontay Rohler

How to actually set up MariaDB replication/high availability

I’m setting up a high availability LAMP stack. So far, I have two servers, each running HAProxy on the frontend, together owning a floating IP. HAProxy redirects requests to the backend Apache servers running on each of them (same machine, port 8000).

This all works, and I’m happy with it so far for high availability. However, I have yet to deal with the databases.

Since I only have two servers, I don’t think I can do master-master, e.g. a Galera cluster, due to the quorum requirement. So I planned to do master-slave read-only replication.

Most of the guides I see online don’t mention the MaxScale proxy that MariaDB suggests in its high availability guide (page 4 for example)

Overall, I’m just really confused about what to do and how to set it up.

From my perspective, I’d like to set up a MaxScale proxy on one of my servers and run MariaDB on both servers; MaxScale will handle things like redirects. How can I actually accomplish what I’m looking for?
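Purely as a rough illustration (the server addresses, credentials, and section names below are placeholders, and option names can vary between MaxScale versions, so check the docs for yours), a minimal maxscale.cnf for a two-node master-slave pair with read-write splitting might look something like:

```ini
[server1]
type=server
address=192.168.0.11
port=3306
protocol=MariaDBBackend

[server2]
type=server
address=192.168.0.12
port=3306
protocol=MariaDBBackend

# Watches which node is the master and which is the slave
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2
user=maxuser
password=maxpwd

# Sends writes to the master and spreads reads over the slaves
[RW-Split-Service]
type=service
router=readwritesplit
servers=server1,server2
user=maxuser
password=maxpwd

[RW-Split-Listener]
type=listener
service=RW-Split-Service
protocol=MariaDBClient
port=4006
```

The application would then connect to MaxScale’s listener port (4006 here) instead of talking to MariaDB directly.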

Go to Source
Author: forkwasher

Benefits of “Bluetooth LE Privacy” feature?

To make an (until now, un-) educated guess about whether it is worth the effort of including the “Bluetooth LE Privacy” feature in a consumer embedded device’s BT software, I am seeking information on how “necessary” this feature is from an information security standpoint: is it useful? Does it really solve a security issue, or is it already broken?

The BT SIG itself is fairly quiet about this feature beyond well-worded blog posts, so shedding a little light on it would help tremendously in deciding whether to “go the extra mile” or just leave it aside.

Go to Source
Author: Christian

WordPress API single post

I’m pulling a list of posts from Blog A into Blog B using the WordPress REST API. So far everything is fine, but how do I access a single post inside Blog B? How do I make it so that when I access /blog/post-name, it shows me the content of the same post from Blog A?

Does anyone have an idea?
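One possible direction, sketched below under the assumption that Blog A exposes the standard REST API and that post slugs match between the two blogs (the domain and the template placement are made up), is to fetch the single post by slug inside Blog B’s single-post template:

```php
<?php
// Sketch: inside Blog B's template, fetch the matching post from Blog A.
// 'blog-a.example.com' is a placeholder.
$slug = get_query_var('name'); // the post slug from Blog B's URL

$response = wp_remote_get(
    'https://blog-a.example.com/wp-json/wp/v2/posts?slug=' . rawurlencode($slug)
);

if (!is_wp_error($response)) {
    $posts = json_decode(wp_remote_retrieve_body($response));
    if (!empty($posts)) {
        // Render the remote post's content
        echo apply_filters('the_content', $posts[0]->content->rendered);
    }
}
```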

Go to Source
Author: Radu033

Pattern name for “platform-independent class with hooks + subclass implementing platform specifics”?

I am developing an Android app that processes JPG EXIF metadata in a workflow.

The Workflow class is implemented platform-independently (i.e. the code runs on Android and on J2SE).

The Workflow contains and calls protected virtual methods for Android-specific functions (e.g. saving changes to the Android media database) that do nothing in the platform-independent version.

package de.k3b.media;

// platform independent
public class Workflow {
    public void processPhoto(...) {
        ...
        saveChangesToMediaDatabase(....);
        ...
    }
    
    protected void saveChangesToMediaDatabase(...) {    
        // do nothing in the platform-independent version
    }
}

I also have an Android-specific subclass that implements the platform-specific code:

package de.k3b.android.media;

// android platform dependent
public class AndroidWorkflow extends Workflow {
    @Override
    protected void saveChangesToMediaDatabase(...) {    
        ... do something with android media database
    }
}

I can use Workflow.processPhoto(...) in non-Android J2SE apps and in automated unit or integration tests,
and I can use AndroidWorkflow.processPhoto(...) in my Android app.

My question: is this an established pattern, and is there a name for it?
My current names are “platform-independent class with hooks” and “platform-specific hook implementation”.

I hope to find a better (established?) name for this pattern.


Remarks:

One established pattern for achieving platform-independent code is the Facade pattern.

Example: I use a FileFacade with a J2SE implementation based on java.io.File, and an AndroidFileFacade based on the Android-specific DocumentFile.

Although the goal of platform independence is the same, the way it is achieved is different.

Go to Source
Author: k3b

How to update the 3rd level of a JSON file

I have a JSON file that looks like this:

{
    "PL001": {
        "player_name": "Player 1",
        "player_email": "playeremail@email.com",
        "hobbies": {
          "SP001": {
            "sport": "Soccer",
            "positions": {
              "FL1":{
                "position": "Goalie"
              }
            }
          }
        }
    },
    "PL002": {
      "player_name": "Player 2",
      "player_email": "playeremail2@email.com",
      "hobbies": {
        "SP002": {
          "sport": "Hockey",
          "positions": {
            "FL2":{
              "position": "goaltender"
            }
          }
        }
      }
    }
}

What I need to do is change my position codes to the new ones, so for example FL2 becomes PLFL2.

I’ve managed to get to the position codes, but I’m not sure how to push the new code up without losing data.

Here is my code:

$old_code = "FL2";
$new_code = "PLFL2";


$json = json_decode(file_get_contents(storage_path('/players.json')));

foreach ($json as $key => $value)
{
  foreach ($value->hobbies as $hobbiesCode => $hobby)
  {
    foreach ($hobby->positions as $positionCode => $position)
    {
      // this only reassigns the local loop variable; $json itself is unchanged
      $positionCode = $new_code;
    }
  }
}

and that is where I get stuck: I’m not sure how to update my JSON file with the new codes.
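For what it’s worth, one way to actually rename the key, sketched under the same file layout as above (json_decode returns objects here, so changes made through $hobby->positions are visible in $json when re-encoding), is:

```php
<?php
$old_code = "FL2";
$new_code = "PLFL2";

$path = storage_path('/players.json');
$json = json_decode(file_get_contents($path));

foreach ($json as $player) {
    foreach ($player->hobbies as $hobby) {
        if (isset($hobby->positions->{$old_code})) {
            // copy the entry under the new key, then drop the old key
            $hobby->positions->{$new_code} = $hobby->positions->{$old_code};
            unset($hobby->positions->{$old_code});
        }
    }
}

file_put_contents($path, json_encode($json, JSON_PRETTY_PRINT));
```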

Go to Source
Author: Aurilie

Display a selectable map in wordpress

I have a problem. I bought a theme and created a page called “Harta”. The page looks like this:

[screenshot of the “Harta” page]

Under the “test” part I want to display a map of the world’s countries. The map needs to be selectable: I need to select two countries, and after selecting them, their names should be displayed in the input elements of a form; the form’s data should then be saved in the database. Any suggestions on how to do that? Is there any WordPress plugin that deals with maps?

Go to Source
Author: Çlirim Kastrati

SANE not detecting scanner by ID (pipe error) or path (invalid arg)

I have a Canon P-208II, which uses the canon_dr backend [0].

However, I noticed from lsusb that the product ID of mine (0x1660) differs from that in /etc/sane.d/canon_dr.conf and /usr/lib/udev/rules/49-sane.rules (0x165f), so I duplicated those lines, modifying the ID to match.

$ lsusb
[...]
Bus 001 Device 017: ID 1083:1660 Canon Electronics, Inc. CANON   P-208II
[...]

Despite that, neither sane-find-scanner nor scanimage -L detects the scanner. With verbose logging I can see it trying to access the device and getting permission errors; as root, I get:

could not fetch string descriptor: Pipe error

in the same location.

I also tried sane-find-scanner /dev/usb/001/017, which says failed to open (Invalid argument). I’m not sure what else to give as ‘devname’ [1].

Is there somewhere else I need to change the product ID, or something to do to make the changes I have made take effect? Or is the pipe error a smoking gun?
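For reference, the usual way to make edited udev rules take effect without a reboot, and to see backend-level debug output, is something along these lines (a sketch; SANE_DEBUG_CANON_DR follows SANE’s per-backend debug variable convention):

```shell
# Reload the modified udev rules and re-trigger device events,
# then unplug and re-plug the scanner
sudo udevadm control --reload-rules
sudo udevadm trigger

# Probe again with canon_dr backend debugging enabled
SANE_DEBUG_CANON_DR=255 scanimage -L
```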


[0] – http://www.sane-project.org/sane-mfgs.html

[1] – http://www.sane-project.org/man/sane-find-scanner.1.html

Go to Source
Author: OJFord

Can’t open GRUB in ubuntu 18.04

It’s a dual boot, i.e. Windows 10 and Ubuntu 18.04. Ubuntu is set as the preferred choice, hence the GRUB menu doesn’t come up by default.
I went through many of the available resources, and they all said to press Shift before the Ubuntu logo appears, i.e. while the manufacturer logo (in my case, Lenovo) is on the screen.
I have tried tapping Shift continuously, long-pressing it, and even keeping it pressed before turning the computer on, but in none of these cases did the GRUB menu open. I have tried it with both the left and the right Shift key. I am trying to open Ubuntu’s recovery mode.
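For context, since Ubuntu itself still boots, a more reliable alternative to the Shift key is to make the menu always appear by editing /etc/default/grub from within Ubuntu (a sketch of the relevant lines only; the timeout value is arbitrary):

```
# /etc/default/grub (relevant lines only)
GRUB_TIMEOUT_STYLE=menu   # show the menu instead of hiding it
GRUB_TIMEOUT=10           # seconds to wait before booting the default entry
```

After saving, run sudo update-grub and reboot; the menu (including the recovery mode entries under “Advanced options”) should then appear on every boot.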

Go to Source
Author: Aaryan Gupta

MySQL binlog format change from MIXED to ROW

One of my customers has a MySQL master-slave setup on premises, where the binlog format is MIXED. Now we are planning to migrate the DB to AWS RDS using DMS, which requires the binlog format to be ROW. I am not sure whether changing MIXED to ROW at the source at this point will impact the source DB or replication. I see the MySQL docs talking about the implications of a STATEMENT to ROW change, but not of MIXED to ROW. Can someone help me with this?
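For reference, binlog_format is a dynamic server variable, so the current value can be inspected and changed at runtime (a sketch; the change only applies to sessions started afterwards, and it should also be set in my.cnf to survive a restart):

```sql
-- Inspect the current setting
SHOW GLOBAL VARIABLES LIKE 'binlog_format';

-- Switch new sessions to row-based logging
SET GLOBAL binlog_format = 'ROW';
```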

If the above is not doable, another plan is to go with a daisy-chain approach, where an EC2 instance is created as a slave connected to the on-prem slave DB. This EC2 slave would have its binlog format set to ROW. Even in this case, the on-prem binlogs would be MIXED and the EC2 slave’s would be ROW. I am not sure whether this approach (within the same DB) will work, or whether anything will break.

Any expert suggestion/solution (with MySQL documentation references for the solution, if possible) would be much appreciated.

Go to Source
Author: serverstackqns

Correctly installing / configuring locally built qemu / libvirt

On Ubuntu 18.04, the default installation of qemu is something like version 3, and I needed virtiofs, which has built-in support in later versions. So I uninstalled the qemu and related packages, downloaded the qemu 5.0 sources, and compiled them locally.

All worked well, including make install, which put the binaries in /usr/local/ which I guess is the correct default unless told otherwise.

Most things are working OK, but I’m now trying to get graceful shutdown / restart of guests working when the host is restarted, and have hit 2 snags so far.

  1. On host startup, I would see /usr/local/libexec/libvirt-guests.sh: 29: .: Can't open /usr/local/bin/gettext.sh. Of course, that’s not where gettext.sh normally lives, but I can get around that with ln -s /usr/bin/gettext.sh /usr/local/bin/gettext.sh
  2. No failure message there now, but later in the host boot logs I see libvirt-guests.sh[2166]: touch: cannot touch '/usr/local/var/lock/subsys/libvirt-guests': No such file or directory

I could go on symlinking things so they appear accessible to libvirt, but I’m wondering if the correct fix is actually to install qemu where it expects to be.

So, the first question: is reinstalling the right approach, or have I just missed some basic configuration that would leave the local package where it is but allow everything to work as expected?

If not, I guess I will have to run ./configure --prefix=/usr and rebuild, but how could I first cleanly remove the version currently installed in /usr/local/? And I’d ideally like to keep my current VM configurations. Searching for the XML file of a particular domain, I see two versions:

# find / -name 07x2.xml
/usr/local/var/run/libvirt/qemu/07x2.xml
/usr/local/etc/libvirt/qemu/07x2.xml

I’m not sure why there are two, but I guess I could just virsh dumpxml before removing anything.
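For what it’s worth, a typical sequence for that kind of relocation might look like the following (a sketch only: it assumes the original build tree is still present and provides an uninstall target, and the domain name 07x2 is taken from the find output above):

```shell
# Back up the domain definition first
virsh dumpxml 07x2 > 07x2-backup.xml

# From the original qemu build directory, remove the /usr/local install
# (only works if the build system provides an uninstall target)
sudo make uninstall

# Reconfigure for /usr and reinstall
./configure --prefix=/usr
make
sudo make install
```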

Go to Source
Author: dsl101

What naming convention should I use for a JSON RPC client API designed for multiple languages?

This is the documentation with the original RPC client API specification. The naming convention in the specification is camel case with the first letter in lower case.

Naming conventions might differ in subtle ways between languages (camel case with vs. without capitalization of the first letter), but for some conventions, like snake case (Python) or Swift’s Fluent Usage API, changing the names from the original specification might increase the cognitive load for those already familiar with it.

When searching GitHub for different JSON RPC APIs, some implementations seem to take advantage of reflection to intercept method calls and pass them to the RPC request “as is”, so the method names in that language are the same as in the original spec. Where reflection is not available, the names are hardcoded and are mostly the same as in the spec, changing only the capitalization of letters for some languages.

Some examples:

Not using the Fluent Usage API in Swift

https://github.com/fanquake/CoreRPC/blob/master/Sources/CoreRPC/Blockchain.swift
https://github.com/brunophilipe/SwiftRPC/blob/master/SwiftRPC/SwiftRPC+Requests.swift

Not using snake case in Ruby

https://github.com/sinisterchipmunk/bitcoin-client/blob/master/lib/bitcoin-client/client.rb

Changing method names to camel case in C#

https://github.com/cryptean/bitcoinlib/blob/master/src/BitcoinLib/Services/RpcServices/RpcService/RpcService.cs

Go to Source
Author: rraallvv

Website animations not working properly in browsers

OS: Ubuntu 20.04 LTS; Graphics: GeForce GTX 1060 6GB/PCle/SSE2;

Some website animations aren’t working properly (they’re often jumpy), whereas they’re smooth on my Windows OS on the same PC. Some sites that rely on movement animations are basically unusable, e.g. this site.

I’ve tried using Chrome and Firefox, and the result is the same in both browsers. I’ve also tried reinstalling Chrome.

Go to Source
Author: House Leto