RDS Monitoring using PMM2, Missing Host Metrics for Postgres

I am using PMM2 for monitoring all my RDS instances.

I added Postgres as a remote database, but adding a database as a remote instance in PMM2 means we miss host-level metrics like CPU, memory, etc. What is the recommended way of adding Postgres (RDS), or a way to inject the missing metrics to get a complete Node Summary?

Go to Source
Author: xrage

ERROR: 42883: function “lock_database” does not exist at character 8

I’m working on a GIS project with PostgreSQL and Ruby on Rails, and I can write data to the db.
But when I check the PostgreSQL log, I see the following error:

2020-08-23 22:39:32.454 CST [79106] chentoz@gis-edit ERROR: 42883: function “lock_database” does not exist at character 8

I got no clues from Google about “lock_database”, and there is also a claim that database locking is not implemented yet.

Has anyone encountered this error, or have any hints for digging into it?

Go to Source
Author: tonyc

What database and cloud service should I use for an Instagram-like app?

I am a beginner in the world of databases and I’m overwhelmed by the number of tools available. I am building an Instagram-like app in Flutter and want to know which database I should study for it.

And which cloud service should I use, AWS or Azure?

Go to Source
Author: Aeden Thomas

What exactly does PostgreSQL (or other databases) do internally when you “prepare” a query rather than just running it directly?

When I face a huge number of repeated INSERTs in a loop, I tend to first create a “prepare skeleton query” prior to the loop, and in the loop, I simply “execute” this prepared query and send it all the values. I heard long ago, and can understand in some abstract sense, that this is more optimized than just having the loop with a normal parameterized query in it.

However, I don’t understand what exactly PG is doing that makes it so much faster. If it even is so much faster. I frankly have never done any real benchmarks to compare the two different methods.

Also, shouldn’t PG be “warmed up” to a query once you repeat it several (let alone many) times in very short succession, and perhaps do the same thing as I do manually, but internally, when I just use a normal parameterized query?

I find myself constantly second-guessing what the database is up to internally. I have no real grasp of how “smart” it is. I fear that I’m doing a lot of things which are meaningless because it’s already taken care of internally by its smart code.

Maybe “preparing” and “executing” is an archaic practice which has no real benefit these days?

It sounds like PG is basically allocating resources to “prepare” for upcoming huge amounts of similar INSERT queries, but I don’t understand what exactly it would do differently compared to just executing them one by one. I also don’t understand what the point would be of a non-INSERT prepared query/statement.

PS: Just to clear up any confusion: I never use non-parameterized queries, whether they are prepared or not. A lot of people confuse “parameterized queries” and “prepared statements”. Even I called the latter “prepared queries” in this question…
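For reference, the manual pattern being described looks roughly like this (table and column names are hypothetical). PREPARE parses and analyzes the statement once; each EXECUTE then only binds parameters and runs it, and after the first few executions the server may also switch to a cached generic plan:

```sql
-- the manual prepare/execute pattern described above
PREPARE ins (integer, text) AS
    INSERT INTO items (id, name) VALUES ($1, $2);

EXECUTE ins(1, 'first');
EXECUTE ins(2, 'second');

DEALLOCATE ins;
```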

Go to Source
Author: Jevontay Rohler

PostgreSQL: can’t start service, database system was shut down

I installed PostgreSQL using

sudo apt install postgresql postgresql-contrib

then I run

sudo systemctl restart postgresql

then I run

sudo systemctl status postgresql

It says

Active: active (exited) since ... 1s ago

So it looks like the restart command exits immediately.

the /var/log/postgresql/postgresql-12-main.log says:

2020-08-13 23:58:39.553 MSK [47081] LOG:  received fast shutdown request
2020-08-13 23:58:39.607 MSK [47081] LOG:  aborting any active transactions
2020-08-13 23:58:39.613 MSK [47081] LOG:  background worker "logical replication launcher" (PID 47088) exited with exit code 1
2020-08-13 23:58:39.613 MSK [47083] LOG:  shutting down
2020-08-13 23:58:39.888 MSK [47081] LOG:  database system is shut down
2020-08-13 23:58:40.246 MSK [47200] LOG:  starting PostgreSQL 12.2 (Ubuntu 12.2-4) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-8ubuntu1) 9.3.0, 64-bit
2020-08-13 23:58:40.247 MSK [47200] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2020-08-13 23:58:40.293 MSK [47200] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-08-13 23:58:40.470 MSK [47201] LOG:  database system was shut down at 2020-08-13 23:58:39 MSK
2020-08-13 23:58:40.508 MSK [47200] LOG:  database system is ready to accept connections

/etc/init.d/postgresql:

#!/bin/sh
set -e

### BEGIN INIT INFO
# Provides:     postgresql
# Required-Start:   $local_fs $remote_fs $network $time
# Required-Stop:    $local_fs $remote_fs $network $time
# Should-Start:     $syslog
# Should-Stop:      $syslog
# Default-Start:    2 3 4 5
# Default-Stop:     0 1 6
# Short-Description:    PostgreSQL RDBMS server
### END INIT INFO

# Setting environment variables for the postmaster here does not work; please
# set them in /etc/postgresql/<version>/<cluster>/environment instead.

[ -r /usr/share/postgresql-common/init.d-functions ] || exit 0

. /usr/share/postgresql-common/init.d-functions

# versions can be specified explicitly
if [ -n "$2" ]; then
    versions="$2 $3 $4 $5 $6 $7 $8 $9"
else
    get_versions
fi

case "$1" in
    start|stop|restart|reload)
        if [ "$1" = "start" ]; then
            create_socket_directory
        fi
    if [ -z "`pg_lsclusters -h`" ]; then
        log_warning_msg 'No PostgreSQL clusters exist; see "man pg_createcluster"'
        exit 0
    fi
    for v in $versions; do
        $1 $v || EXIT=$?
    done
    exit ${EXIT:-0}
        ;;
    status)
    LS=`pg_lsclusters -h`
    # no clusters -> unknown status
    [ -n "$LS" ] || exit 4
    echo "$LS" | awk 'BEGIN {rc=0} {if (match($4, "down")) rc=3; printf ("%s/%s (port %s): %s\n", $1, $2, $3, $4)}; END {exit rc}'
    ;;
    force-reload)
    for v in $versions; do
        reload $v
    done
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|reload|force-reload|status} [version ..]"
        exit 1
        ;;
esac

exit 0

How do I fix it?

PS: The problem I’m having is that PostgreSQL won’t start. After the restart command, its status should be active (running), but it is active (exited).
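Note that on Debian/Ubuntu packaging the postgresql unit is just an umbrella wrapper around the init script above, so active (exited) is its normal state; the log shown even ends with "database system is ready to accept connections". The per-cluster unit reports the real server status. A sketch (the cluster name 12-main is assumed from the log path):

```shell
# the umbrella "postgresql" unit exits by design; the per-cluster
# unit carries the actual running/failed state
sudo systemctl status postgresql@12-main

# or list all clusters and their online/down state
pg_lsclusters
```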

Go to Source
Author: parsecer

Attach CSV file to email sent from Postgres

I have the following code to send emails and wanted to see if it’s possible to attach a CSV file?

CREATE OR REPLACE FUNCTION public.send_email_html(text, text, text, text)
 RETURNS integer
 LANGUAGE plperlu
AS $function$
    use Net::SMTP;

    my ($recipientString, $sendFrom, $subject, $message) = @_;
    my @recipients = split /;/, $recipientString;
    my $smtp=Net::SMTP->new("localhost");
    
    $smtp->mail($sendFrom);
    foreach(@recipients)
    {
        $smtp->recipient($_);
    }
    $smtp->data();
    foreach(@recipients)
    {
        $smtp->datasend("To: $_\n");
    }

    $smtp->datasend("Subject: $subject\n");
    $smtp->datasend("Content-Type: text/html;\n");
    $smtp->datasend("$message\n");
    $smtp->quit();

    return 1;
$function$
;
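With bare Net::SMTP you would have to hand-build the MIME multipart boundaries yourself; a module such as MIME::Lite (assuming it is installed on the server) does this for you. A sketch only, with a hypothetical file path, keeping the same function arguments as above:

```perl
use MIME::Lite;

my ($recipientString, $sendFrom, $subject, $message) = @_;

# build a multipart message: HTML body + CSV attachment
my $msg = MIME::Lite->new(
    From    => $sendFrom,
    To      => join(',', split /;/, $recipientString),
    Subject => $subject,
    Type    => 'multipart/mixed',
);
$msg->attach(Type => 'text/html', Data => $message);
$msg->attach(
    Type        => 'text/csv',
    Path        => '/tmp/report.csv',   # hypothetical file to attach
    Filename    => 'report.csv',
    Disposition => 'attachment',
);
$msg->send('smtp', 'localhost');        # same local SMTP server as above
```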

Go to Source
Author: Matias

PostgreSQL rule to delete record before insertion, if the record has changed

I’m not sure it’s possible to do this with a rule, but what I want to do is, when you insert a record:

  • If it doesn’t exist, just insert it as usual.
  • If it exists, but each field is equal to what you’re inserting, then do nothing.
  • If it exists, but any field differs from what you’re inserting, then delete the old record and insert the new one.

The reason for this is that, in the last case, I want to use the cascade deletion productively. That is, if a record has changed, then everything that refers to it is now void and should be removed.

Let’s say my table is:

create table foo (
  id  integer primary key,
  foo integer,
  bar integer
);

My first attempt was to just delete the old record regardless (before checking the difference):

create rule refresh as
  on insert to foo
  do instead (
    delete from foo where id = new.id;
    insert into foo values (new.id, new.foo, new.bar));

This doesn’t work; it’s an infinite loop.

Is it possible to do what I’m after with rules, or would I need a full-blown trigger?
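A minimal sketch of the trigger route, reusing the foo table above (an assumption about the intended semantics, not a drop-in answer): delete the old row only when it differs, skip the insert when an identical row survives, and let the DELETE fire the cascades productively:

```sql
CREATE FUNCTION refresh_foo() RETURNS trigger AS $$
BEGIN
  -- remove an existing row with the same id only if any field differs
  DELETE FROM foo
   WHERE id = NEW.id
     AND (foo, bar) IS DISTINCT FROM (NEW.foo, NEW.bar);

  -- an identical row is still present: silently skip the insert
  IF EXISTS (SELECT 1 FROM foo WHERE id = NEW.id) THEN
    RETURN NULL;
  END IF;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER refresh BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE refresh_foo();
```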

Go to Source
Author: Xophmeister

How to use the \i command in SQL-Shell?

I’m new to PostgreSQL and I wanted to run a specific file in the SQL-Shell on Windows, but I’m getting some errors that don’t make sense to me.

I ran the following…

\i 'C:\Users\Username\Documents\person.sql' : No such file or directory

I don’t know why this is not working with the quotes, and the file definitely exists, since I downloaded it from a website!

Through my experimentation, I ran the following again without quotes and got a different error:

\i C:\Users\Username\Documents\person.sql : Permission denied

I don’t know why it first says the file doesn’t exist but then says I don’t have permission on it. My question is: what is the problem with these commands, and is something wrong with how I’m running them?

I looked at this site and saw common problems but it couldn’t help me further!

Thanks!
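For what it’s worth, in psql on Windows backslashes inside a quoted path can be eaten as escape sequences, which would explain the mangled filename in the first error; forward slashes usually sidestep this (same hypothetical path as above):

```psql
\i 'C:/Users/Username/Documents/person.sql'
```

The later "Permission denied" typically means the path resolved to a directory rather than the file, which fits the backslashes being swallowed.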

Go to Source
Author: Gregory

Postgres not using index in citext column

The following query does a Seq Scan instead of using the index.

select file_name from myschemadb.my_files where file_name = 'djsaidjasdjoasdjoaidad'

Engine

Postgres 11.5

My table:

CREATE TABLE myschemadb.my_files (
    id int4 NOT NULL,
    file_name myschemadb.citext NOT NULL,
    status_id int4 NOT NULL,
    file_key myschemadb.citext NOT NULL,
    is_fine bool NOT NULL DEFAULT true,
    create_date timestamptz NOT NULL DEFAULT now(),
    update_date timestamptz NULL,
    CONSTRAINT pk_my_files PRIMARY KEY (id)
);

The created index:

CREATE INDEX my_files_file_name_idx ON myschemadb.my_files USING btree (file_name);

Execution Plan

[
   {
      "Plan": {
         "Node Type": "Gather",
         "Parallel Aware": false,
         "Startup Cost": 1000,
         "Total Cost": 70105.63,
         "Plan Rows": 1,
         "Plan Width": 41,
         "Actual Startup Time": 109.537,
         "Actual Total Time": 110.638,
         "Actual Rows": 0,
         "Actual Loops": 1,
         "Output": [
            "file_name"
         ],
         "Workers Planned": 2,
         "Workers Launched": 2,
         "Single Copy": false,
         "Shared Hit Blocks": 58326,
         "Shared Read Blocks": 0,
         "Shared Dirtied Blocks": 0,
         "Shared Written Blocks": 0,
         "Local Hit Blocks": 0,
         "Local Read Blocks": 0,
         "Local Dirtied Blocks": 0,
         "Local Written Blocks": 0,
         "Temp Read Blocks": 0,
         "Temp Written Blocks": 0,
         "I/O Read Time": 0,
         "I/O Write Time": 0,
         "Plans": [
            {
               "Node Type": "Seq Scan",
               "Parent Relationship": "Outer",
               "Parallel Aware": true,
               "Relation Name": "my_files",
               "Schema": "myschemadb",
               "Alias": "my_files",
               "Startup Cost": 0,
               "Total Cost": 69105.53,
               "Plan Rows": 1,
               "Plan Width": 41,
               "Actual Startup Time": 107.42,
               "Actual Total Time": 107.42,
               "Actual Rows": 0,
               "Actual Loops": 3,
               "Output": [
                  "file_name"
               ],
               "Filter": "((my_files.file_name)::text = 'djsaidjasdjoasdjoaidad'::text)",
               "Rows Removed by Filter": 690443,
               "Shared Hit Blocks": 58326,
               "Shared Read Blocks": 0,
               "Shared Dirtied Blocks": 0,
               "Shared Written Blocks": 0,
               "Local Hit Blocks": 0,
               "Local Read Blocks": 0,
               "Local Dirtied Blocks": 0,
               "Local Written Blocks": 0,
               "Temp Read Blocks": 0,
               "Temp Written Blocks": 0,
               "I/O Read Time": 0,
               "I/O Write Time": 0,
               "Workers": [
                  {
                     "Worker Number": 0,
                     "Actual Startup Time": 106.121,
                     "Actual Total Time": 106.121,
                     "Actual Rows": 0,
                     "Actual Loops": 1,
                     "Shared Hit Blocks": 15754,
                     "Shared Read Blocks": 0,
                     "Shared Dirtied Blocks": 0,
                     "Shared Written Blocks": 0,
                     "Local Hit Blocks": 0,
                     "Local Read Blocks": 0,
                     "Local Dirtied Blocks": 0,
                     "Local Written Blocks": 0,
                     "Temp Read Blocks": 0,
                     "Temp Written Blocks": 0,
                     "I/O Read Time": 0,
                     "I/O Write Time": 0
                  },
                  {
                     "Worker Number": 1,
                     "Actual Startup Time": 106.821,
                     "Actual Total Time": 106.821,
                     "Actual Rows": 0,
                     "Actual Loops": 1,
                     "Shared Hit Blocks": 26303,
                     "Shared Read Blocks": 0,
                     "Shared Dirtied Blocks": 0,
                     "Shared Written Blocks": 0,
                     "Local Hit Blocks": 0,
                     "Local Read Blocks": 0,
                     "Local Dirtied Blocks": 0,
                     "Local Written Blocks": 0,
                     "Temp Read Blocks": 0,
                     "Temp Written Blocks": 0,
                     "I/O Read Time": 0,
                     "I/O Write Time": 0
                  }
               ]
            }
         ]
      },
      "Planning Time": 0.034,
      "Triggers": [],
      "Execution Time": 110.652
   }
]

I guess the problem is here:

"Filter": "((my_files.file_name)::text = 'djsaidjasdjoasdjoaidad'::text)",

This implicit conversion can be a problem, but an explicit cast doesn’t work either:

select file_name from myschemadb.file_history where file_name = 'djsaidjasdjoasdjoaidad'::myschemadb.citext

I saw this link: Why does a comparison between CITEXT and TEXT fail?

but it didn’t help me.
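Since the plan’s filter compares the column as ::text, one workaround to try is an expression index on that cast so the index matches the comparison actually being planned (index name hypothetical; a sketch, not a guaranteed fix):

```sql
-- matches the "(file_name)::text = '...'::text" filter in the plan
CREATE INDEX my_files_file_name_text_idx
    ON myschemadb.my_files ((file_name::text));
```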

Go to Source
Author: Daniel1791

Select multiple rows closest to date with 1 result for item

How, from this table:

text | timestamp | pk int
id   | collected | uid
===============
x    | 01.11.12  | 1
x    | 02.11.12  | 2  
a    | 01.11.12  | 3  
b    | 01.11.12  | 4  

Get one row for each id where collected is closest to the date 03.11.12 (or the closest earlier date):

text | timestamp | pk int
id   | collected | uid
===============
x    | 02.11.12  | 2  
a    | 01.11.12  | 3  
b    | 01.11.12  | 4  
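In PostgreSQL this is a classic DISTINCT ON query; a sketch, assuming the table is named t and collected is a date column:

```sql
-- keep, per id, the first row in ORDER BY order,
-- i.e. the latest collected that is not after the cutoff
SELECT DISTINCT ON (id) id, collected, uid
FROM   t
WHERE  collected <= DATE '2012-11-03'
ORDER  BY id, collected DESC;
```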

Go to Source
Author: ZiiMakc

Create a Postgresql trigger that updates column with PK of other table

Hello, and thanks in advance for taking the time to look at my question. This is my first try at using triggers and functions in PostgreSQL (using DBeaver), and I appreciate your feedback.

I’d like to update a column (village_id) every time a new row is entered or updated with the primary key value of a different table (which is not connected as a foreign key directly). See below for an outline of my tables and trigger + function.

Unfortunately, with the code below I received an error each time I entered a new row. I used an UPDATE statement, but I also considered an INSERT INTO statement and wasn’t sure which is preferable. Thank you in advance.

CREATE TABLE register_table (
  register_id integer CONSTRAINT register_id_pk PRIMARY KEY,
  village_id integer NOT NULL
);

CREATE TABLE er_table (
  er_id integer CONSTRAINT er_id_pk PRIMARY KEY,
  register_id integer NOT NULL,
  village_id integer NOT NULL
);

CREATE OR REPLACE FUNCTION update_village_id_column()
    RETURNS trigger AS
  $BODY$
BEGIN
UPDATE schema_example.er_table
SET  village_id = register_table.village_id
from schema_example.register_table
WHERE (schema_example.register_id = new.register_id);

RETURN new; 
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

CREATE TRIGGER update_village_id
AFTER INSERT OR UPDATE
ON schema_example.er_table
FOR EACH ROW
EXECUTE PROCEDURE update_village_id_column();
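For comparison, a common variant is a BEFORE trigger that fills in the NEW row directly, which avoids issuing a separate UPDATE against er_table from inside its own trigger (a sketch under the same table definitions, not a verified fix for the exact error above):

```sql
CREATE OR REPLACE FUNCTION update_village_id_column()
RETURNS trigger AS $$
BEGIN
  -- look up the matching village_id and set it on the row being written
  SELECT r.village_id INTO NEW.village_id
  FROM   schema_example.register_table r
  WHERE  r.register_id = NEW.register_id;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_village_id
BEFORE INSERT OR UPDATE ON schema_example.er_table
FOR EACH ROW EXECUTE PROCEDURE update_village_id_column();
```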

Go to Source
Author: PGerrits

How to find the ports of all running PostgreSQL instances (as a non-root user)

I have a monitoring script (running as an unprivileged user) that needs to check all instances of Postgres currently running on the same host (Ubuntu Linux 18.04). Is there an easy way to find all their port numbers?

I was thinking of listing all files matching /var/run/postgresql/.s.PGSQL.* but I’m not sure whether this is reliable or the easiest way.
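Two sketches, assuming the Debian/Ubuntu packaging: either ask pg_lsclusters (readable by any user), or parse the socket names, whose suffix after the last dot is the port:

```shell
# option 1: packaging-aware listing of online clusters and their ports
pg_lsclusters --no-header | awk '$4 == "online" {print $3}'

# option 2: each listening cluster creates /var/run/postgresql/.s.PGSQL.<port>
for s in /var/run/postgresql/.s.PGSQL.*; do
  [ -S "$s" ] && echo "${s##*.}"   # strip everything up to the last dot
done
```

The socket approach only sees clusters using the default socket directory; pg_lsclusters only sees clusters managed by postgresql-common.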

Go to Source
Author: EM0

Start PostgreSQL server manually and set verbose mode to see all queries

Is there a verbosity mode to get the list of queries executed on a PostgreSQL server in the terminal console when the server is started manually? If yes, how do I use it?

I’m starting the PostgreSQL server this way on macOS:

/Applications/Postgres.app/Contents/Versions/12/bin/postgres -D "/Users/me/Library/Application Support/Postgres/var-12" -p 5432
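One way to get this, sketched against the same command line: any postgresql.conf setting can be passed with -c, and log_statement=all makes the server print every statement to its stderr (i.e. the terminal, when started manually):

```shell
/Applications/Postgres.app/Contents/Versions/12/bin/postgres \
  -D "/Users/me/Library/Application Support/Postgres/var-12" \
  -p 5432 -c log_statement=all
```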

Go to Source
Author: DevonDahon

Unable to create role on AWS RDS postgresql database

I’m trying to create a read-only user on an AWS RDS PostgreSQL database. I am logging in using psql, with the default user that was created in the RDS dashboard when I created the database. Yesterday I was able to create a role called readonly, but I realized I did not add some parameters I wanted to add such as NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION so I deleted the role. Then I went to create the role again and the operation timed out with a weird error, so I disconnected.

Well, today when I tried to log in as the same user and create a role with the command CREATE ROLE readonly;, I get: ERROR: permission denied to create role. I ran the \l command and see this (I am myuser, for example):

aws_pgadmin=CTc/aws_pgadmin+
myuser=CTc/aws_pgadmin

Any tips on how to resolve this? Do you think I triggered some kind of security protection so I am no longer allowed to create a role?

Go to Source
Author: caseym

Composite multicolumn index for geopoint range and numeric range query

I am building an app where the server needs to select rows based on some criteria/filters. One of them is the location of the user and the radius within which the user wants to see posts; others are a date range and a filter on the value of another column. This is going to be for an ad-hoc event discovery app.

I have read about PostGIS and its geometry and geography types, and I know there is a native point datatype. Based on this answer I understood that it is better to order index columns from equality to range, even though I feel like the geo point column should come first.

Suppose the following few rows of a simplified events table (disregard the validity of the position data):

id  event_title                  event_position   event_type  is_public  start_date
    (varchar)                    (point lat/lon)  (smallint)  (boolean)  (timestamptz)
--  ---------------------------  ---------------  ---------   ---------  ----
 1  "John's Party"               (122,35)         0           0          2020-07-05
 2  "Revolution then Starbucks"  (123,30)         1           1          2020-07-06
 3  "Study for math exam"        (120,36)         2           1          2020-07-07
 4  "Party after exam"           (120,36)         1           1          2020-07-08
 5  "Hiking next to the city"    (95,40)          3           1          2020-07-09
 6  "Football match"             (-42,31)         4           1          2020-07-10

Imagine the table contains several thousand records at least, obviously not only 6.

So in this table a user would be able to query public events close to (122,34) by 100km (suppose first three rows fall into this area) and of event types 0, 1 or 2 falling between dates 2020-07-05 and 2020-07-07. The user would get the rows with ID 2 and 3.

This is the query I want to optimize with an appropriate index. My question is, how is it possible to create such an index? I thought about GiST or GIN index but not sure how these could help. Thanks!
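One possible sketch (index names hypothetical; note the native point distance operators work in degrees, so for a true 100 km radius you would want PostGIS geography and ST_DWithin instead): a GiST index covers the spatial containment test, a plain btree covers the equality/range filters, and the planner can combine them:

```sql
CREATE INDEX events_pos_gix     ON events USING gist (event_position);
CREATE INDEX events_filters_idx ON events (is_public, event_type, start_date);

-- query shape for "public events of types 0-2 near (122,34) in a date range";
-- the circle radius here is in degrees, purely for illustration
SELECT id, event_title
FROM   events
WHERE  is_public
AND    event_type IN (0, 1, 2)
AND    start_date BETWEEN '2020-07-05' AND '2020-07-07'
AND    event_position <@ circle(point(122, 34), 0.9);
```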

Go to Source
Author: Najib