Attach CSV file to email sent from Postgres

I have the following code to send emails and wanted to see if it’s possible to attach a CSV file.

CREATE OR REPLACE FUNCTION public.send_email_html(text, text, text, text)
 RETURNS integer
 LANGUAGE plperlu
AS $function$
    use Net::SMTP;

    my ($recipientString, $sendFrom, $subject, $message) = @_;
    my @recipients = split /;/, $recipientString;
    my $smtp=Net::SMTP->new("localhost");
    
    $smtp->mail($sendFrom);
    foreach(@recipients)
    {
        $smtp->recipient($_);
    }
    $smtp->data();
    foreach(@recipients)
    {
        $smtp->datasend("To: $_\n");
    }

    $smtp->datasend("Subject: $subject\n");
    $smtp->datasend("Content-Type: text/html;\n");
    $smtp->datasend("\n");
    $smtp->datasend("$message\n");
    $smtp->dataend();
    $smtp->quit();

    return 1;
$function$
;
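Not part of the original function, but as a rough sketch of an answer: a CSV can be attached by building a multipart MIME body by hand before handing it to Net::SMTP. Here $csvPath is a hypothetical fifth argument (not in the original signature), and the boundary string and filename are made up for illustration:

```perl
# Hedged sketch: build a multipart/mixed body so the CSV travels as a
# base64-encoded attachment. $csvPath and "report.csv" are assumptions.
use MIME::Base64;

my $boundary = "----=_Part_0_123456789";
$smtp->datasend("MIME-Version: 1.0\n");
$smtp->datasend("Content-Type: multipart/mixed; boundary=\"$boundary\"\n\n");

# HTML part
$smtp->datasend("--$boundary\n");
$smtp->datasend("Content-Type: text/html\n\n");
$smtp->datasend("$message\n");

# CSV attachment part
open my $fh, '<', $csvPath or die "cannot open $csvPath: $!";
local $/;                 # slurp mode: read the whole file at once
my $csv = <$fh>;
close $fh;

$smtp->datasend("--$boundary\n");
$smtp->datasend("Content-Type: text/csv; name=\"report.csv\"\n");
$smtp->datasend("Content-Disposition: attachment; filename=\"report.csv\"\n");
$smtp->datasend("Content-Transfer-Encoding: base64\n\n");
$smtp->datasend(encode_base64($csv));   # encode_base64 adds line breaks itself
$smtp->datasend("--$boundary--\n");
```

These datasend calls would replace the single "$message" line inside the data section of the function above.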

Go to Source
Author: Matias

PostgreSQL rule to delete record before insertion, if the record has changed

I’m not sure it’s possible to do this with a rule, but what I want to do is, when you insert a record:

  • If it doesn’t exist, just insert it as usual.
  • If it exists, but each field is equal to what you’re inserting, then do nothing.
  • If it exists, but any field differs from what you’re inserting, then delete the old record and insert the new one.

The reason for this is that, in the last case, I want to use the cascade deletion productively. That is, if a record has changed, then everything that refers to it is now void and should be removed.

Let’s say my table is:

create table foo (
  id  integer primary key,
  foo integer,
  bar integer
);

My first attempt was to just delete the old record regardless (before checking the difference):

create rule refresh as
  on insert to foo
  do instead (
    delete from foo where id = new.id;
    insert into foo values (new.id, new.foo, new.bar));

This doesn’t work; it’s an infinite loop.

Is it possible to do what I’m after with rules, or would I need a full-blown trigger?
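Not from the original post, but for comparison, a trigger-based sketch of the three cases (assuming the foo table above; IS NOT DISTINCT FROM is used so NULL fields compare as equal):

```sql
-- Hedged sketch: a BEFORE INSERT trigger implementing the three cases.
CREATE OR REPLACE FUNCTION foo_refresh() RETURNS trigger AS $$
BEGIN
    -- Case 2: an identical row already exists -> do nothing
    IF EXISTS (SELECT 1 FROM foo
               WHERE id = NEW.id
                 AND foo IS NOT DISTINCT FROM NEW.foo
                 AND bar IS NOT DISTINCT FROM NEW.bar) THEN
        RETURN NULL;  -- cancel this insert
    END IF;
    -- Case 3: the row exists but differs -> delete it (cascades fire here)
    DELETE FROM foo WHERE id = NEW.id;
    -- Case 1 (and the tail of case 3): let the insert proceed
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_refresh_trg
    BEFORE INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_refresh();
```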

Go to Source
Author: Xophmeister

How to use the \i command in the SQL Shell?

I’m new to PostgreSQL and I wanted to run a specific file in the SQL Shell on Windows, but I’m getting some errors that don’t make sense to me.

I ran the following…

\i 'C:\Users\Username\Documents\person.sql' : No such file or directory

I don’t know why this isn’t working with the quotes; the file definitely exists, as I downloaded it from a website!

Through my experimentation, I ran the following again without quotes with a different error

\i C:\Users\Username\Documents\person.sql : Permission denied

I don’t understand why it first says the file doesn’t exist, but then says I have no permission on it. What is the problem with these commands, and am I running them wrong?

I looked at this site and saw common problems but it couldn’t help me further!

Thanks!

Go to Source
Author: Gregory

Postgres not using index in citext column

The following query does a Seq Scan instead of using the index.

select file_name from myschemadb.my_files where file_name = 'djsaidjasdjoasdjoaidad'

Engine

Postgres 11.5

My table:

CREATE TABLE myschemadb.my_files (
    id int4 NOT NULL,
    file_name myschemadb.citext NOT NULL,
    status_id int4 NOT NULL,
    file_key myschemadb.citext NOT NULL,
    is_fine bool NOT NULL DEFAULT true,
    create_date timestamptz NOT NULL DEFAULT now(),
    update_date timestamptz NULL,
    CONSTRAINT pk_my_files PRIMARY KEY (id)
);

The created index:

CREATE INDEX my_files_file_name_idx ON myschemadb.my_files USING btree (file_name);

Execution Plan

[
   {
      "Plan": {
         "Node Type": "Gather",
         "Parallel Aware": false,
         "Startup Cost": 1000,
         "Total Cost": 70105.63,
         "Plan Rows": 1,
         "Plan Width": 41,
         "Actual Startup Time": 109.537,
         "Actual Total Time": 110.638,
         "Actual Rows": 0,
         "Actual Loops": 1,
         "Output": [
            "file_name"
         ],
         "Workers Planned": 2,
         "Workers Launched": 2,
         "Single Copy": false,
         "Shared Hit Blocks": 58326,
         "Shared Read Blocks": 0,
         "Shared Dirtied Blocks": 0,
         "Shared Written Blocks": 0,
         "Local Hit Blocks": 0,
         "Local Read Blocks": 0,
         "Local Dirtied Blocks": 0,
         "Local Written Blocks": 0,
         "Temp Read Blocks": 0,
         "Temp Written Blocks": 0,
         "I/O Read Time": 0,
         "I/O Write Time": 0,
         "Plans": [
            {
               "Node Type": "Seq Scan",
               "Parent Relationship": "Outer",
               "Parallel Aware": true,
               "Relation Name": "my_files",
               "Schema": "myschemadb",
               "Alias": "my_files",
               "Startup Cost": 0,
               "Total Cost": 69105.53,
               "Plan Rows": 1,
               "Plan Width": 41,
               "Actual Startup Time": 107.42,
               "Actual Total Time": 107.42,
               "Actual Rows": 0,
               "Actual Loops": 3,
               "Output": [
                  "file_name"
               ],
               "Filter": "((my_files.file_name)::text = 'djsaidjasdjoasdjoaidad'::text)",
               "Rows Removed by Filter": 690443,
               "Shared Hit Blocks": 58326,
               "Shared Read Blocks": 0,
               "Shared Dirtied Blocks": 0,
               "Shared Written Blocks": 0,
               "Local Hit Blocks": 0,
               "Local Read Blocks": 0,
               "Local Dirtied Blocks": 0,
               "Local Written Blocks": 0,
               "Temp Read Blocks": 0,
               "Temp Written Blocks": 0,
               "I/O Read Time": 0,
               "I/O Write Time": 0,
               "Workers": [
                  {
                     "Worker Number": 0,
                     "Actual Startup Time": 106.121,
                     "Actual Total Time": 106.121,
                     "Actual Rows": 0,
                     "Actual Loops": 1,
                     "Shared Hit Blocks": 15754,
                     "Shared Read Blocks": 0,
                     "Shared Dirtied Blocks": 0,
                     "Shared Written Blocks": 0,
                     "Local Hit Blocks": 0,
                     "Local Read Blocks": 0,
                     "Local Dirtied Blocks": 0,
                     "Local Written Blocks": 0,
                     "Temp Read Blocks": 0,
                     "Temp Written Blocks": 0,
                     "I/O Read Time": 0,
                     "I/O Write Time": 0
                  },
                  {
                     "Worker Number": 1,
                     "Actual Startup Time": 106.821,
                     "Actual Total Time": 106.821,
                     "Actual Rows": 0,
                     "Actual Loops": 1,
                     "Shared Hit Blocks": 26303,
                     "Shared Read Blocks": 0,
                     "Shared Dirtied Blocks": 0,
                     "Shared Written Blocks": 0,
                     "Local Hit Blocks": 0,
                     "Local Read Blocks": 0,
                     "Local Dirtied Blocks": 0,
                     "Local Written Blocks": 0,
                     "Temp Read Blocks": 0,
                     "Temp Written Blocks": 0,
                     "I/O Read Time": 0,
                     "I/O Write Time": 0
                  }
               ]
            }
         ]
      },
      "Planning Time": 0.034,
      "Triggers": [],
      "Execution Time": 110.652
   }
]

I guess the problem is here:

"Filter": "((my_files.file_name)::text = 'djsaidjasdjoasdjoaidad'::text)",

This implicit conversion can be a problem, but when I make an explicit conversion it doesn’t work either:

select file_name from myschemadb.file_history where file_name = 'djsaidjasdjoasdjoaidad'::myschemadb.citext

I saw this link: Why does a comparison between CITEXT and TEXT fail?

but it didn’t help me.
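Not in the original post, but two sketches of the usual fixes (note that the explicit-cast attempt above queries file_history rather than my_files, which may itself be part of the confusion). Either keep the comparison in citext on both sides so the existing btree index applies, or index the text cast that the plan's filter is actually using:

```sql
-- Option A (sketch): compare citext to citext so the citext btree index applies
SELECT file_name
FROM myschemadb.my_files
WHERE file_name = 'djsaidjasdjoasdjoaidad'::myschemadb.citext;

-- Option B (sketch): if the planner keeps casting to text, index that expression
CREATE INDEX my_files_file_name_text_idx
    ON myschemadb.my_files ((file_name::text));
```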

Go to Source
Author: Daniel1791

Select multiple rows closest to date with 1 result for item

How, from this table:

text | timestamp | pk int
id   | collected | uid
===============
x    | 01.11.12  | 1
x    | 02.11.12  | 2  
a    | 01.11.12  | 3  
b    | 01.11.12  | 4  

get one row for each id where collected is closest to (but not after) the date 03.11.12:

text | timestamp | pk int
id   | collected | uid
===============
x    | 02.11.12  | 2  
a    | 01.11.12  | 3  
b    | 01.11.12  | 4  
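A sketch of one common way to express this (the table name t is assumed; column names are taken from the layout above):

```sql
-- Hedged sketch: DISTINCT ON keeps the first row per id after sorting,
-- so ordering by collected DESC yields, per id, the latest row at or
-- before the cutoff date.
SELECT DISTINCT ON (id) id, collected, uid
FROM t
WHERE collected <= DATE '2012-11-03'
ORDER BY id, collected DESC;
```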

Go to Source
Author: ZiiMakc

Create a Postgresql trigger that updates column with PK of other table

Hello, and thanks in advance for taking the time to look at my question. This is my first try at using triggers and functions in PostgreSQL (using DBeaver) and I appreciate your feedback.

I’d like to update a column (village_id) every time a new row is entered or updated with the primary key value of a different table (which is not connected as a foreign key directly). See below for an outline of my tables and trigger + function.

Unfortunately, after using the code below I received an error each time I entered a new row. I used an UPDATE statement, but I also considered using an INSERT INTO statement and wasn’t sure which is preferable. Thank you in advance.

CREATE TABLE register_table (
  register_id integer CONSTRAINT register_id_pk PRIMARY KEY,
  village_id integer NOT NULL
);

CREATE TABLE er_table (
  er_id integer CONSTRAINT er_id_pk PRIMARY KEY,
  register_id integer NOT NULL,
  village_id integer NOT NULL
);

CREATE OR REPLACE FUNCTION update_village_id_column()
    RETURNS trigger AS
  $BODY$
BEGIN
UPDATE schema_example.er_table
SET  village_id = register_table.village_id
from schema_example.register_table
WHERE (schema_example.register_id = new.register_id);

RETURN new; 
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

CREATE TRIGGER update_village_id
AFTER INSERT OR UPDATE
ON schema_example.er_table
FOR EACH ROW
EXECUTE PROCEDURE update_village_id_column();
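Not from the original post, but the usual pattern for this kind of column is a BEFORE trigger that assigns to NEW directly, instead of issuing a separate UPDATE against the same table (which is what makes the AFTER version error or recurse). A sketch under the schema above:

```sql
-- Hedged sketch: fill NEW.village_id from register_table before the row lands.
CREATE OR REPLACE FUNCTION update_village_id_column()
    RETURNS trigger AS
$BODY$
BEGIN
    SELECT r.village_id INTO NEW.village_id
    FROM schema_example.register_table r
    WHERE r.register_id = NEW.register_id;
    RETURN NEW;   -- the modified row is what gets inserted/updated
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER update_village_id
    BEFORE INSERT OR UPDATE ON schema_example.er_table
    FOR EACH ROW
    EXECUTE PROCEDURE update_village_id_column();
```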

Go to Source
Author: PGerrits

How to find the ports of all running PostgreSQL instances (as a non-root user)

I have a monitoring script (running as an unprivileged user) that needs to check all instances of Postgres currently running on the same host (Ubuntu Linux 18.04). Is there an easy way to find all their port numbers?

I was thinking of listing all files matching /var/run/postgresql/.s.PGSQL.* but I’m not sure whether this is reliable or the easiest way.

Go to Source
Author: EM0

Start PostgreSQL server manually and set verbose mode to see all queries

Is there a verbose mode that prints the queries executed on a PostgreSQL server to the terminal console when the server is started manually? If so, how do I use it?

I’m starting the PostgreSQL server this way on macOS:

/Applications/Postgres.app/Contents/Versions/12/bin/postgres -D "/Users/me/Library/Application Support/Postgres/var-12" -p 5432
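Not part of the original post, but a hedged sketch of one way to do this: postgres accepts -c overrides on the command line, so statement logging can be switched on for that run (paths copied from the command above):

```shell
# Sketch: log every statement to stderr of the terminal that started the server
/Applications/Postgres.app/Contents/Versions/12/bin/postgres \
  -D "/Users/me/Library/Application Support/Postgres/var-12" -p 5432 \
  -c log_statement=all
```

log_statement=all logs every SQL statement; 'ddl' or 'mod' are less noisy alternatives.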

Go to Source
Author: DevonDahon

Unable to create role on AWS RDS postgresql database

I’m trying to create a read-only user on an AWS RDS PostgreSQL database. I am logging in using psql, with the default user that was created in the RDS dashboard when I created the database. Yesterday I was able to create a role called readonly, but I realized I did not add some parameters I wanted to add such as NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION so I deleted the role. Then I went to create the role again and the operation timed out with a weird error, so I disconnected.

Well, today when I tried to log in with the same user and create a role with the command CREATE ROLE readonly;, I get: ERROR: permission denied to create role. I ran the \l command and see this (I am myuser, for example):

aws_pgadmin=CTc/aws_pgadmin+
myuser=CTc/aws_pgadmin

Any tips on how to resolve this? Do you think I triggered some kind of security protection so I am no longer allowed to create a role?
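Not from the original post, but as a diagnostic sketch: the pg_roles catalog shows whether the logged-in user still has the CREATEROLE attribute (on RDS the master user normally gets its privileges via the rds_superuser role):

```sql
-- Sketch: check role attributes for the current login user
SELECT rolname, rolcreaterole, rolcreatedb
FROM pg_roles
WHERE rolname = current_user;
```

If rolcreaterole comes back false, the attribute was lost and would need to be re-granted by a role that still has it.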

Go to Source
Author: caseym

Composite multicolumn index for geopoint range and numeric range query

I am building an app where the server needs to select rows based on some criteria/filters. One of them is the user’s location and the radius within which the user wants to see posts; others include a date range and a filter on the value of another column. This is going to be for an ad-hoc event-discovery app.

I have read about PostGIS and its geometry and geography types, and I know there is a native point datatype. Based on this answer I understood that it is better to order index columns from equality to range, even though I feel like the geo-point column should come first.

Suppose the following few rows of a simplified events table (disregard the validity position data):

id  event_title                  event_position   event_type  is_public  start_date
    (varchar)                    (point lat/lon)  (smallint)  (boolean)  (timestamptz)
--  ---------------------------  ---------------  ---------   ---------  ----
 1  "John's Party"               (122,35)         0           0          2020-07-05
 2  "Revolution then Starbucks"  (123,30)         1           1          2020-07-06
 3  "Study for math exam"        (120,36)         2           1          2020-07-07
 4  "Party after exam"           (120,36)         1           1          2020-07-08
 5  "Hiking next to the city"    (95,40)          3           1          2020-07-09
 6  "Football match"             (-42,31)         4           1          2020-07-10

Imagine the table contains several thousand records at least, obviously not only 6.

So in this table a user would be able to query public events within 100 km of (122,34) (suppose the first three rows fall into this area), of event types 0, 1 or 2, falling between the dates 2020-07-05 and 2020-07-07. The user would get the rows with ID 2 and 3.

This is the query I want to optimize with an appropriate index. My question is, how is it possible to create such an index? I thought about a GiST or GIN index but I’m not sure how these could help. Thanks!
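Not from the original question, but one hedged sketch: with the btree_gist extension, the point column (GiST-indexable natively) can share a multicolumn GiST index with the scalar columns, so the containment and range filters can be served together. Table and column names are taken from the sample above; the circle radius here is in degrees, so a real km-radius search would go through PostGIS geography or earthdistance instead:

```sql
-- Sketch: btree_gist supplies GiST operator classes for scalar types,
-- letting them share one index with the native point type.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE INDEX events_search_idx ON events
    USING gist (event_position, start_date, event_type, is_public);

-- The query shape the index is meant to serve:
SELECT id, event_title
FROM events
WHERE event_position <@ circle '((122,34),1)'   -- point contained in circle
  AND start_date BETWEEN '2020-07-05' AND '2020-07-07'
  AND event_type IN (0, 1, 2)
  AND is_public;
```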

Go to Source
Author: Najib

postgresql only allows connections via unix socket even though my pg_hba.conf lists 127.0.0.1/32?

I have the following pg_hba.conf:

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust

With the above I assume host all all 127.0.0.1/32 trust would allow TCP connections from localhost. However, this does not seem to be the case:

[root@XenonKiloCranberry:~]# psql -U postgres
psql (11.7)
Type "help" for help.

postgres=# \q

[root@XenonKiloCranberry:~]# psql -U postgres -h 127.0.0.1
psql: FATAL:  no pg_hba.conf entry for host "127.0.0.1", user "postgres", database "postgres", SSL off

Where am I going wrong?
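Not from the original post, but a common diagnostic sketch: the running server may be reading a different pg_hba.conf than the one edited, and edits only take effect after a reload. Over the working Unix-socket connection:

```sql
-- Sketch: confirm which hba file the server actually uses, then reload it
SHOW hba_file;
SELECT pg_reload_conf();

-- On PostgreSQL 10+, this view shows the rules as the server parsed them,
-- including any lines it rejected:
SELECT * FROM pg_hba_file_rules;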

Go to Source
Author: Chris Stryczynski

Postgres PLPGSQL ‘un-stored’ procedures?

I’m getting the following error when I run the query below:

ERROR: query has no destination for result data
Hint: If you want to discard the results of a SELECT, use PERFORM instead.
Where: PL/pgSQL function inline_code_block line 12 at SQL statement

I don’t want to discard the results though! I want to display them. What’s happening here? Is Postgres refusing to provide results from a block that isn’t a UDF or stored procedure, or is it some other syntax thing?

DO
$$
declare 
    tenant text;
    result1 integer;
    result2 integer;
BEGIN

tenant='mycustomer1';
EXECUTE format('SELECT count(*) from ' || tenant || E'.accounts_user;') INTO result1;

tenant='mycustomer2';
EXECUTE format('SELECT count(*) from ' || tenant || E'.accounts_user;') INTO result2;

select result1, result2;

END;
$$ LANGUAGE plpgsql;

Background: I’m trying to create a set of dynamic queries that can get values from multiple PG schemas to return in a single report.

Context: I’m using Metabase to run the query, with a read only connection talking to RDS / Postgres. My plan is to store the queries as Metabase items. I was originally going to try creating a temp table to put my results in, but I’m not allowed to CREATE anything, including UDFs and stored procedures, hence my thinking of this as an un-stored procedure.
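Not from the original post, but given the read-only constraint, one sketch that avoids PL/pgSQL (and the DO-block limitation) entirely is a plain SELECT with one scalar subquery per schema, matching the hard-coded tenant names above:

```sql
-- Sketch: one row, one column per tenant; runs fine on a read-only connection
SELECT
    (SELECT count(*) FROM mycustomer1.accounts_user) AS mycustomer1_users,
    (SELECT count(*) FROM mycustomer2.accounts_user) AS mycustomer2_users;
```

The trade-off is that the schema list is static in the query text rather than built dynamically, which is often acceptable for a saved report.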

Go to Source
Author: Kyle Hodgson

Postgresql values not inserting

Hi, I am trying to insert the following values into my table, but it gives the following error:

ERROR: syntax error at or near "::"
LINE 1: ...tion))/nullif((s_totalnorevws),0),'FM99,999,999'))::characte...

insert into testschema.tbls values (212,to_char(((s_sumdduration))/nullif((s_totalnorevws),0),'FM99,999,999'))::character varying as ob_arht);
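Not from the original post, but for comparison, a sketch of the statement with the parentheses and the cast rearranged: the ::character varying has to sit inside the VALUES list (it currently follows the closing parenthesis of the list), and the as ob_arht alias is not valid in a VALUES clause. s_sumdduration and s_totalnorevws are assumed to be variables or columns available in context:

```sql
-- Sketch: cast applied to the to_char() result, inside the VALUES list
INSERT INTO testschema.tbls
VALUES (212,
        to_char(s_sumdduration / nullif(s_totalnorevws, 0),
                'FM99,999,999')::character varying);
```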

Go to Source
Author: Aditya Vyas-Lakhan

What column type can store txid_current value?

AFAIK txid_current returns an unsigned 64-bit value (source).
However, the biggest integer column type is bigint, which is signed (docs).
From what I understand, the NUMERIC type is slow and should be used “for storing monetary amounts and other quantities where exactness is required” (docs).

Should I store the txid_current value as text?

I need to perform mainly greater than queries on this field:

select * from t where fxid > 438298432

What column type would be the most efficient (from a search-speed point of view) for a full transaction id (unsigned 64-bit integer)?
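Not from the original question, but a sketch worth noting: at the SQL level txid_current() is actually declared as returning bigint (the 64-bit epoch-extended value), so a plain bigint column accepts it directly and supports ordinary btree range scans; real-world values sit far below the signed 63-bit ceiling:

```sql
-- Sketch: bigint holds txid_current() output directly
CREATE TABLE t (
    fxid bigint NOT NULL
);
CREATE INDEX t_fxid_idx ON t (fxid);

INSERT INTO t VALUES (txid_current());

-- greater-than queries use a plain btree range scan
SELECT * FROM t WHERE fxid > 438298432;
```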

Go to Source
Author: Michael

pg_restore – where is the schema?

I successfully restored a dump from Postgresql 11 into 9.3.9.

postgres=# drop database ambari
DROP DATABASE
[dumps]$ pg_restore -U postgres  -h db ambari.dump
-- Dumped from database version 11.7
-- Dumped by pg_dump version 11.7
-- Name: ambari; Type: SCHEMA; Schema: -; Owner: ambari
CREATE SCHEMA ambari;
ALTER SCHEMA ambari OWNER TO ambari;
SET default_tablespace = '';
-- Name: adminpermission; Type: TABLE; Schema: ambari; Owner: ambari
CREATE TABLE ambari.adminpermission (
  permission_id bigint NOT NULL,
  permission_name character varying(255) NOT NULL,
  resource_type_id integer NOT NULL,
  permission_label character varying(255),
  principal_id bigint NOT NULL,
  sort_order smallint DEFAULT 1 NOT NULL
);
ALTER TABLE ambari.adminpermission OWNER TO ambari;

etc

-- PostgreSQL database dump complete

When I list the databases, there is no ambari, although there was one on the original server.

[dumps]$ psql -U postgres  -h db
postgres=# \l
                               List of databases
Name     |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
jiradb7     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
postgres    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
template0   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
template1   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +

I cannot find it between the schemas either:

postgres=# \dn
List of schemas
Name  |  Owner
public | postgres


postgres-# select nspname from pg_catalog.pg_namespace;
  nspname
pg_toast
pg_temp_1
pg_toast_temp_1
pg_catalog
public
information_schema

My question is: where is the ambari schema?
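Not from the original post, but the output above suggests what happened: without -d, pg_restore never connects to a server at all — it just prints the restore script to stdout, which is why the CREATE statements scrolled by while nothing was actually created. A sketch of the restore as it was probably intended (same connection options as above):

```shell
# Sketch: create the target database first, then let pg_restore connect to it
createdb -U postgres -h db ambari
pg_restore -U postgres -h db -d ambari ambari.dump
```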

Go to Source
Author: Leos Literak