PostgreSQL 14: Part 4 or the "January offensive" (Commitfest 2021-01)

PostgreSQL 14 is coming! After the first three relatively modest commitfests (July, September, November), major changes have arrived.



Here are just a few questions to start with:



  • Can ranges contain gaps in values?
  • Why do we need index notation for the json type?
  • Can an index grow more slowly than a table with frequent updates, or even not grow at all?
  • How long have sessions been sitting idle, including idle in transaction?
  • How to build an ER diagram for system catalog tables?




Client applications



psql: \dX - view extended statistics

commit: ad600bba



The new \dX command displays extended statistics objects as a list.



CREATE STATISTICS flights_from_to (dependencies, MCV) 
    ON departure_airport, arrival_airport 
    FROM flights;
\x

\dX

      
      



List of extended statistics
-[ RECORD 1 ]+------------------------------------------------
Schema       | bookings
Name         | flights_from_to
Definition   | departure_airport, arrival_airport FROM flights
Ndistinct    |
Dependencies | defined
MCV          | defined

      
      





For each type of statistics (Dependencies, Ndistinct, MCV), only the fact that it has been collected is displayed. The values themselves have to be looked up in pg_statistic_ext_data, which by default is accessible only to superusers.
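
As a small sketch (run as a superuser, using the flights_from_to statistics created above), the collected MCV list can be examined through pg_statistic_ext_data together with the pg_mcv_list_items function:

SELECT m.*
FROM   pg_statistic_ext s
       JOIN pg_statistic_ext_data d ON d.stxoid = s.oid,
       pg_mcv_list_items(d.stxdmcv) m
WHERE  s.stxname = 'flights_from_to'
ORDER  BY m.frequency DESC
LIMIT  3;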



psql: \dtS shows TOAST tables

commit: 7d80441d



An individual TOAST table could already be examined with the \d command, but it was not possible to get a list of such tables with \dt or \dtS. The omission has been corrected: \dtS now shows TOAST tables, since they are system (service) tables.



But there is one caveat. All TOAST tables live in the pg_toast schema, which is unlikely to be included in search_path. Therefore, to get the list, you need to specify an appropriate pattern:



\dtS pg_toast.*165*

      
      



                 List of relations
  Schema  |      Name      |    Type     |  Owner   
----------+----------------+-------------+----------
 pg_toast | pg_toast_16529 | TOAST table | postgres
 pg_toast | pg_toast_16539 | TOAST table | postgres
 pg_toast | pg_toast_16580 | TOAST table | postgres
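
By the way, the numeric suffix of a pg_toast_NNN name is the OID of the owning table, so the mapping can be done in both directions. A small sketch (16529 is taken from the listing above, and tickets is just an example table):

SELECT 16529::regclass;                   -- which table owns pg_toast_16529

SELECT reltoastrelid::regclass            -- which TOAST table belongs to tickets
FROM   pg_class
WHERE  relname = 'tickets';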

      
      





psql: improved tab completion for the CLOSE, FETCH, MOVE and DECLARE commands

commit: 3f238b88



No additional description required.



Documentation



Documentation proofreading and editing

commit: 2a5862f0



Many people note that PostgreSQL has excellent documentation. Yet it is written by the developers themselves, who are not usually considered masters of the pen. How does it manage to stay so good? It's simple. As with any writing, you need editors and proofreaders. And so Justin Pryzby has been doing a huge and important job for the past two years: proofreading the documentation. The result is a large list of 18 patches. Michael Paquier, as a committer, helped him.



And that's just one big commit. And the number of small patches that improve the documentation simply cannot be counted.



Miscellanea



idle_session_timeout - forced termination of idle sessions

commit: 9877374b



The new idle_session_timeout parameter specifies an idle session timeout. If the time limit is exceeded, the session is terminated. The parameter is very similar to idle_in_transaction_session_timeout, which appeared in 9.6, but only affects sessions with no open transaction. So if you want to terminate idle sessions regardless of whether a transaction has been started in them, both parameters should be set.
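
For example, a minimal sketch of setting both timeouts for the whole cluster (the ten-minute value is arbitrary):

ALTER SYSTEM SET idle_session_timeout = '10min';
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf();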



It is recommended to use this parameter with extreme caution on systems that use connection poolers or postgres_fdw connections.



The parameter can be set by any user for their own session. In the following example, after setting the parameter and waiting a second, we see that an entry about the session termination appears in the server log. After that, an attempt to run a query fails, but psql automatically establishes a new connection:



SET idle_session_timeout = '1000ms';
-- wait a little over a second
\! tail -n 1 logfile

      
      



2021-02-01 12:25:06.716 MSK [5262] FATAL:  terminating connection due to idle-session timeout

      
      



SHOW idle_session_timeout;

      
      



FATAL:  terminating connection due to idle-session timeout
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

      
      



SHOW idle_session_timeout;

      
      



 idle_session_timeout
----------------------
 0

      
      





Description by depesz.



GSS information in server log message

commit: dc11f31a



The session connection message in the server log now includes GSS information if this authentication method was used.



pageinspect: functions for GiST indexes

commit: 756ab291



For anyone interested in exploring how GiST indexes are organized and stored, the pageinspect extension offers new functions.
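
A quick sketch of how this can be used (the table and index here are created just for the illustration): the new functions, such as gist_page_opaque_info and gist_page_items, take a raw page read with get_raw_page:

CREATE EXTENSION IF NOT EXISTS pageinspect;

CREATE TABLE gist_demo AS
    SELECT point(random(), random()) AS p
    FROM   generate_series(1, 1000);
CREATE INDEX gist_demo_idx ON gist_demo USING gist (p);

-- opaque data and items of the root page (block 0)
SELECT * FROM gist_page_opaque_info(get_raw_page('gist_demo_idx', 0));
SELECT * FROM gist_page_items(get_raw_page('gist_demo_idx', 0), 'gist_demo_idx');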



Correct EXPLAIN behavior in commands with IF NOT EXISTS

commit: e665769e



An attempt to create an already existing table with the IF NOT EXISTS option produces a notice:



CREATE TABLE IF NOT EXISTS tickets AS SELECT * FROM bookings;

      
      



NOTICE:  relation "tickets" already exists, skipping

      
      





However, getting the plan for such a command used to lead to unexpected results. EXPLAIN simply output a plan for the SELECT, which it managed to build without ever checking whether the tickets table exists. And no notice at all!



EXPLAIN CREATE TABLE IF NOT EXISTS tickets AS SELECT * FROM bookings;

      
      



                           QUERY PLAN                            
-----------------------------------------------------------------
 Seq Scan on bookings  (cost=0.00..4301.88 rows=262788 width=21)

      
      





And EXPLAIN ANALYZE failed with an error instead of issuing a notice:



EXPLAIN ANALYZE CREATE TABLE IF NOT EXISTS tickets AS SELECT * FROM bookings;

      
      



ERROR:  relation "tickets" already exists

      
      





In version 14, the behavior became predictable:



EXPLAIN CREATE TABLE IF NOT EXISTS tickets AS SELECT * FROM bookings;

      
      



NOTICE:  relation "tickets" already exists, skipping
 QUERY PLAN
------------
(0 rows)

      
      



EXPLAIN ANALYZE CREATE TABLE IF NOT EXISTS tickets AS SELECT * FROM bookings;

      
      



NOTICE:  relation "tickets" already exists, skipping
 QUERY PLAN
------------
(0 rows)

      
      





The same changes were made for the EXPLAIN [ANALYZE] CREATE MATERIALIZED VIEW IF NOT EXISTS command.



Added primary and unique keys to system catalog tables

commit: dfb75e47 , 62f34097



Integrity constraints, namely primary and unique keys, have been added to the system catalog tables. There were unique indexes before; now constraints are created on top of them.



This is how it looks:



\d pg_class

      
      



                     Table "pg_catalog.pg_class"
       Column        |     Type     | Collation | Nullable | Default
---------------------+--------------+-----------+----------+---------
 oid                 | oid          |           | not null |
 relname             | name         |           | not null |
 relnamespace        | oid          |           | not null |

... ...

Indexes:
    "pg_class_oid_index" PRIMARY KEY, btree (oid)
    "pg_class_relname_nsp_index" UNIQUE CONSTRAINT, btree (relname, relnamespace)
    "pg_class_tblspc_relfilenode_index" btree (reltablespace, relfilenode)

      
      





But there are exceptions: pg_depend and pg_shdepend. Each of these two tables has two indexes, and both are non-unique. It proved impossible to come up with a unique combination of columns. Admittedly, understanding how the dependency tables work is not a trivial task, and the inability to create a primary key is a clear confirmation of this.



But foreign keys were not added. There are good reasons for this:



  • For a number of tables, oid columns can be 0 when there is no real OID to refer to. To create a foreign key, 0 would have to be replaced with NULL everywhere, and that is a huge amount of rewriting that no one is ready to take on.
  • A number of tables have a potential foreign key column not of type oid but of type oid[]. A foreign key cannot be created on an array.


The original idea of the patch was to make it possible to build a diagram of the relationships between the system catalog tables based on information from the database itself. This could then be done automatically by external tools. After all, without information about foreign keys, a diagram can only be built manually, and it has to be revised after every release, which is extremely inconvenient.



Realizing that the patch did not fully solve the problem, Tom Lane started a new discussion right after the commit, in which he proposed a compromise: a function that returns the list of foreign keys for all system catalog tables. The patch was accepted after the January commitfest closed, but it is more logical to describe it here.



So, the system catalog tables have no foreign keys. But we can get information about them by calling the pg_get_catalog_foreign_keys function. The following query shows what references pg_attribute:



SELECT fktable, fkcols, is_array, is_opt
FROM   pg_get_catalog_foreign_keys()
WHERE  pktable = 'pg_attribute'::regclass
AND    pkcols = ARRAY['attrelid','attnum'];

      
      



       fktable        |        fkcols         | is_array | is_opt
----------------------+-----------------------+----------+--------
 pg_attrdef           | {adrelid,adnum}       | f        | f
 pg_constraint        | {conrelid,conkey}     | t        | t
 pg_constraint        | {confrelid,confkey}   | t        | f
 pg_index             | {indrelid,indkey}     | t        | t
 pg_statistic_ext     | {stxrelid,stxkeys}    | t        | f
 pg_statistic         | {starelid,staattnum}  | f        | f
 pg_trigger           | {tgrelid,tgattr}      | t        | f
 pg_partitioned_table | {partrelid,partattrs} | t        | t
(8 rows)

      
      





Monitoring



log_recovery_conflict_waits - logging long waits for recovery conflict resolution

commit: 0650ff23



When the new log_recovery_conflict_waits parameter is enabled, and the startup process waits longer than deadlock_timeout for a recovery conflict to be resolved, a message is written to the server log.



Let's simulate the situation. On the replica, enable the parameter, then start a transaction and leave it waiting:



ALTER SYSTEM SET log_recovery_conflict_waits = on;
SELECT pg_reload_conf();

BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM t;

      
      





And now on the master:



DELETE FROM t;
VACUUM t;

      
      





After a second (deadlock_timeout), a detailed message describing the conflict appears in the replica's log: the PID of the conflicting process, the LSN of the WAL record where replay got stuck, and the file name (read: the table) with the block number:



LOG:  recovery still waiting after 1023.267 ms: recovery conflict on snapshot
DETAIL:  Conflicting process: 29119.
CONTEXT:  WAL redo at 0/1001BEB0 for Heap2/CLEAN: latestRemovedXid 717; blkref #0: rel 1663/16384/17198, blk 0

      
      





After another 30 seconds of waiting (max_standby_streaming_delay), the session on the replica will be terminated, as it should be in such cases.



This is a continuation of the work started and described in the previous commitfest.



pg_stat_database - added statistics on user sessions

commit: 960869da



Monitoring system developers have more work to do. Helpful and cool changes await users of monitoring systems!



Many columns have appeared in pg_stat_database with additional information about user sessions in each database of the cluster (a sample query is shown after the list):



  • session_time ― total time spent by sessions in this database, in milliseconds;
  • active_time ― time spent executing SQL statements in this database;
  • idle_in_transaction_time ― time spent idle within an open transaction;
  • sessions ― total number of sessions established to this database;
  • sessions_abandoned ― number of sessions terminated because the connection to the client was lost;
  • sessions_fatal ― number of sessions terminated by fatal errors;
  • sessions_killed ― number of sessions terminated by operator intervention.
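
A sketch of a simple monitoring query over the new columns (all times are in milliseconds):

SELECT datname, sessions, sessions_abandoned, sessions_fatal, sessions_killed,
       round(session_time) AS session_ms,
       round(active_time) AS active_ms,
       round(idle_in_transaction_time) AS idle_in_tx_ms
FROM   pg_stat_database
WHERE  datname IS NOT NULL;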


Description by depesz.



ps: updating the status of processes when a checkpoint is executed

commit: df9274ad



You can monitor the startup and checkpointer processes by querying pg_stat_activity. But there are three situations in which the pg_stat_activity view is not available while the checkpointer is working: the checkpoint at the end of crash recovery, the checkpoint during server shutdown, and the restartpoint during replica shutdown.



In these three situations, you can monitor the status of the startup and checkpointer processes in the operating system, for example, using the ps utility.
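
For example, a rough sketch from a psql session on the server host (the exact status text depends on the situation):

\! ps aux | grep -E 'postgres: (startup|checkpointer)' | grep -v grep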



A typical example is crash recovery. At the very end, after replaying the changes from WAL, the startup process performs a checkpoint, and this can take some time. However, the status of the startup process did not change and kept showing "recovering NNN", although it would be useful to know that WAL replay has finished and only the checkpoint remains. The status is now updated, which should lower the DBA's anxiety level in an emergency.



pg_stat_statements: when were the statistics reset?

commit: 2e0fedf0



There is no doubt that the pg_stat_statements statistics should be reset regularly. Otherwise, what is the point of accumulating information about queries executed yesterday, a week ago, a month ago, a year ago...



But how do you know when the statistics were last reset? Very simple. We look into pg_stat_statements_info:



SELECT now(), pg_stat_statements_reset();

      
      



              now              | pg_stat_statements_reset
-------------------------------+--------------------------
 2021-02-03 13:25:44.738188+03 |

      
      



SELECT * FROM pg_stat_statements_info;

      
      



 dealloc |          stats_reset          
---------+-------------------------------
       0 | 2021-02-03 13:25:44.738468+03

      
      





The pg_stat_statements_info view was introduced in version 14. You can read about the dealloc column in the previous article.



Description by depesz.



Even more useful is the idea of not only resetting the statistics regularly, but also saving the contents of pg_stat_statements before each reset. Then, given many regularly taken snapshots, you can obtain information for time intervals in the past. This approach is used by the pgpro_pwr monitoring extension.



COPY progress

commit: 8a4f618e



The pg_stat_progress_* family of views has been updated! You can now monitor the progress of the COPY command.



Let's make a logical copy of the demo database:



\! pg_dump -d demo -Fc -f demo.dump

      
      





Now let's restore the dump into the postgres database in two parallel jobs and, while the process is running, take a look at the pg_stat_progress_copy view:



\! pg_restore demo.dump -d postgres -j 2 &
SELECT pid, datname, relid::regclass, bytes_processed, bytes_total, lines_processed
FROM   pg_stat_progress_copy\gx

      
      



-[ RECORD 1 ]---+-------------------------
pid             | 18771
datname         | postgres
relid           | bookings.tickets
bytes_processed | 19088527
bytes_total     | 0
lines_processed | 189820
-[ RECORD 2 ]---+-------------------------
pid             | 18772
datname         | postgres
relid           | bookings.boarding_passes
bytes_processed | 14833287
bytes_total     | 0
lines_processed | 567652

      
      





The bytes_total column would have been filled with the file size if a COPY ... FROM 'file' command were being executed. But in the example above, the data is loaded from a pg_dump archive, so the size is not known.



The progress can be monitored not only for loading data (COPY ... FROM), but also for unloading it (COPY ... TO).



Description by depesz.



Performance



Optimizing buffer cache flushing

commit: d6ad34f3 , bea449c6



A number of operations require that all buffers associated with a specific table be removed from the buffer cache. These operations include TRUNCATE and DROP table commands, an interrupted CREATE TABLE AS SELECT command, and VACUUM when empty blocks need to be removed from the end of a table.



To do this, the entire buffer cache has to be scanned, which can be costly for large cache sizes. Now, for small tables, a special structure is kept in memory with information about the occupied buffers, which avoids scanning the whole buffer cache.



Testing has shown that with shared_buffers of 100GB or more, truncating thousands of tables is more than 100 times faster.
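
A rough way to get a feel for it (the effect is really noticeable only with very large shared_buffers; the table names are arbitrary):

SELECT format('CREATE TABLE tmp_%s (id int)', g) FROM generate_series(1,1000) g\gexec

\timing on

-- in v14 this no longer scans the entire buffer cache once per table
SELECT format('TRUNCATE tmp_%s', g) FROM generate_series(1,1000) g\gexec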



This is a continuation of the work started in version 13.



postgres_fdw: batch mode for inserting rows

commit: b663a413



When fetching data from foreign tables, postgres_fdw uses batch mode: rows are transferred from the foreign server in batches of 100 (the default value of the fetch_size parameter), which is significantly faster than sending them one at a time. But inserts, updates and deletes were done row by row, and therefore very slowly.



The FDW API has been extended to allow such an optimization, although so far only for insert operations; updates and deletes are left for the future. Naturally, postgres_fdw became the first wrapper to take advantage of the new API.



Let's see how this works. Set up postgres_fdw for accessing foreign tables in the demo database:



CREATE EXTENSION postgres_fdw;
CREATE SERVER remote_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (dbname 'postgres');
CREATE USER MAPPING FOR postgres
    SERVER remote_server
    OPTIONS (user 'postgres');

      
      





The remote table itself will be located in the neighboring postgres database:



postgres=# CREATE TABLE bookings(
    book_ref char(6),book_date timestamptz, total_amount numeric(10,2)
);

      
      





Let's take the speed of insertion into the local table as the reference speed. And enable timing for measurements:



CREATE TABLE bookings_local (LIKE bookings);
\timing

      
      





Insert into a local table:



INSERT INTO bookings_local SELECT * FROM bookings;

      
      



INSERT 0 262788
Time: 165,653 ms

      
      





And now insert into a foreign table with batch mode disabled. (Actually it is enabled, it's just that the default batch size is 1 row.)



CREATE FOREIGN TABLE bookings_remote_no_batch (
    book_ref char(6), book_date timestamptz, total_amount numeric(10,2)
) SERVER remote_server OPTIONS (
    schema_name 'public', table_name 'bookings'
);

INSERT INTO bookings_remote_no_batch SELECT * FROM bookings;

      
      



INSERT 0 262788
Time: 6729,867 ms (00:06,730)

      
      





Almost 40 times slower! And this is between databases of the same cluster, where there are no network delays.



Let's repeat the experiment, but set the batch size (batch_size) to 100.



CREATE FOREIGN TABLE bookings_remote_batch_100 (
    book_ref char(6), book_date timestamptz, total_amount numeric(10,2)
) SERVER remote_server OPTIONS (
    schema_name 'public', table_name 'bookings',
    batch_size '100'
);

INSERT INTO bookings_remote_batch_100 SELECT * FROM bookings;

      
      



INSERT 0 262788
Time: 679,632 ms

      
      





Quite another matter. Of course, it still loses to a local insert, by a factor of ~4, but no longer 40!



And finally, the insert batch size (batch_size) can also be set in the foreign server options; it will then apply to all its tables for which it is not explicitly defined.
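
For example, for the server created above it could look like this:

ALTER SERVER remote_server OPTIONS (ADD batch_size '100');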



Deleting index rows from bottom to top

commit: 9dc718bd , d168b666



This optimization tries, until the last moment, to avoid splitting an index page in two during UPDATE operations in situations where the indexed columns have not changed. Before adding a new version of a row to the index, it checks whether unneeded rows on the page can be removed. For example, if it finds a chain of unneeded duplicate index entries that reference the same table row, those entries can be deleted. Peter Geoghegan, the author of the patch, called this "bottom-up deletion".



A similar problem (avoiding index growth) is solved by the HOT update optimization: if an UPDATE does not change any indexed column, no new row versions are created in the indexes. But what if there are several indexes on a table, and a column belonging to only one of them changes? In that case HOT updates don't help.



Let's check what "bottom-up deletion" can do. For the experiment, let's take a table with two separately indexed columns and autovacuum disabled.



CREATE TABLE t(col1 int, col2 int) WITH (autovacuum_enabled=off);
CREATE INDEX t_col1 ON t(col1);
CREATE INDEX t_col2 ON t(col2);
INSERT INTO t VALUES (1, 1);

SELECT pg_relation_size('t') AS t_size,
       pg_relation_size('t_col1') AS t_col1_size,
       pg_relation_size('t_col2') AS t_col2_size;

      
      



 t_size | t_col1_size | t_col2_size
--------+-------------+-------------
   8192 |       16384 |       16384

      
      





There is one row in the table before the bulk update. The table size is one page, and both indexes occupy two pages each (metapage + data page).



Now we change only one column col2 100,000 times and look at the size of the table and indexes.



SELECT 'UPDATE t SET col2 = col2+1' FROM generate_series(1,100000)\gexec

SELECT pg_relation_size('t') AS t_size,
       pg_relation_size('t_col1') AS t_col1_size,
       pg_relation_size('t_col2') AS t_col2_size;

      
      



 t_size  | t_col1_size | t_col2_size
---------+-------------+-------------
 2818048 |     2121728 |     2260992

      
      





These results were obtained in PostgreSQL 12. As you can see, the HOT update did not work and both indexes grew almost equally in size.



Now this same experiment in PostgreSQL 13:



 t_size  | t_col1_size | t_col2_size
---------+-------------+-------------
 2818048 |      663552 |     2260992

      
      





The t_col1 index, in which nothing changed, grew much less: about 3.5 times smaller than in version 12. This is the result of the well-known version 13 optimization, index deduplication. But it still grew.



And finally, let's see what's in PostgreSQL 14:



 t_size  | t_col1_size | t_col2_size
---------+-------------+-------------
 2818048 |       16384 |     2260992

      
      





Blimey! In the t_col1 index, there is only one page with data left. That's cool!



Of course, if autovacuum were enabled, it might have had time to clean up some of the dead tuples during the experiment. But that's exactly why it was disabled. Besides, in real-world conditions with frequent updates (queue tables are a great example), autovacuum will definitely not manage to clean everything up in time.



Description from Viktor Egorov.



Parallel execution of REINDEX CONCURRENTLY

commit: f9900df5



In the November commitfest article, I already wrote about non-blocking parallel execution of CREATE INDEX CONCURRENTLY. Similar optimization is now available for REINDEX CONCURRENTLY.



Procedural languages



Procedures are faster to execute

commit: ee895a65



Procedures were conceived so that transactions could be committed inside them. If a procedure that performs a COMMIT is itself called many times, for example in a loop, then on every iteration of the loop all the statements inside the procedure were re-parsed.



There was no serious reason for this re-parsing, and the patch removes it. Now calling procedures in a loop requires less work and fewer resources and, as a result, runs faster.
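
A minimal sketch of the pattern in question (the table and procedure names are made up): a procedure performing COMMIT is called many times in a loop, and in version 14 its statements are no longer re-parsed on every call.

CREATE TABLE test_log (n int, ts timestamptz);

CREATE PROCEDURE insert_and_commit(n int)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO test_log VALUES (n, clock_timestamp());
    COMMIT;  -- the procedure completes the transaction itself
END;
$$;

DO $$
BEGIN
    FOR i IN 1..100000 LOOP
        CALL insert_and_commit(i);
    END LOOP;
END;
$$;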



PL/pgSQL: assignment operator completely redesigned

commit: 844fe9f1 , c9d52984 , 1788828d , 1c1cbe27



Without further ado:



DO $$
<<local>>
DECLARE
    a bookings[];
    x bookings;
BEGIN
    /* assign values to the fields of an array element of composite type */
    local.a[1].book_ref := 'ABCDEF';    
    local.a[1].book_date := current_date;    
    local.a[1].total_amount := 0;    

    /* assign an array slice */
    local.a[2:3] := (SELECT array_agg(t.*)
                        FROM  (SELECT b.* FROM bookings b LIMIT 2) AS t
                       );
    FOREACH x IN ARRAY a LOOP
            RAISE NOTICE '%', x;
    END LOOP;
END;
$$;

      
      



NOTICE:  (ABCDEF,"2021-02-04 00:00:00+03",0.00)
NOTICE:  (00000F,"2017-07-05 03:12:00+03",265700.00)
NOTICE:  (000012,"2017-07-14 09:02:00+03",37900.00)
DO

      
      





Now, inside a PL/pgSQL block, you can assign values to elements of an array of composite type, as well as to array slices.



To this end, the PL/pgSQL assignment operator has been completely redesigned. And the server parser has learned to parse PL/pgSQL expressions.



You no longer need to execute a query of the form "SELECT expr" to evaluate an expression. You can easily verify this by looking at the error message in the following example:



DO $$ BEGIN RAISE NOTICE '%', 2 + 'a'; END; $$;

      
      



ERROR:  invalid input syntax for type integer: "a"
LINE 1: 2 + 'a'
            ^
QUERY:  2 + 'a'
CONTEXT:  PL/pgSQL function inline_code_block line 1 at RAISE

      
      





The SELECT word is no longer in the QUERY line.



Replication



Handling of primary server configuration parameter changes on replicas

commit: 15251c0a



Changes on the primary to settings that affect the size of the server's shared memory do not go unnoticed on physical replicas. Previously, when a WAL record arrived on the replica saying that these parameters had changed, the replica would automatically stop, terminating all current sessions. The list of parameters can be found in the documentation.



That is not very convenient. So now it works differently: on receiving a record of the parameter change, the replica continues to run, but pauses WAL replay and writes a message to the log. The administrator can wait for important sessions to finish, update the parameters in the configuration files to match the values on the primary, and resume replay. The replica will stop immediately after that, but since the configuration changes have already been made, it can be started again right away, minimizing downtime.



Change restore_command without restarting the server

commit: 942305a3



This continues the work of Sergey Kornilov accepted in version 13, which made it possible to change the primary_conninfo, primary_slot_name and wal_receiver_create_temp_slot parameters without restarting the server.



Now restore_command has been added to them.
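
So a configuration reload is now enough, roughly like this (the archive path is just an example):

ALTER SYSTEM SET restore_command = 'cp /mnt/archive/%f %p';
SELECT pg_reload_conf();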



Server



Improved usage of extended statistics

commit: 25a9e54d



Extended statistics are now used in more cases to estimate the cardinality of query conditions. In particular, they are now applied when several conditions, each of which can individually be estimated with extended statistics, are combined with OR.



In the example, we will collect extended statistics for the airports of departure and arrival. And then we will calculate the number of flights between Sheremetyevo and Pulkovo or in the opposite direction.



CREATE STATISTICS s ON departure_airport, arrival_airport FROM flights;
ANALYZE flights;

      
      





The exact number of flights is 610. Compare with the planner's estimates in versions 13 and 14.



EXPLAIN SELECT *
FROM   flights
WHERE (departure_airport = 'SVO' AND arrival_airport = 'LED')
OR    (departure_airport = 'LED' AND arrival_airport = 'SVO');

      
      





PostgreSQL 13:



 Seq Scan on flights  (cost=0.00..1054.42 rows=885 width=63)

      
      





PostgreSQL 14:



 Seq Scan on flights  (cost=0.00..1054.42 rows=607 width=63)

      
      





As you can see, the estimate in version 14 is almost accurate.



General infrastructure for supporting index notation for any data type

commit: c7aba7c1 , 0ec5f7e7 , 676887a3



Index notation is used for working with arrays. For example, let's find the element with index 3:



SELECT (ARRAY[10,20,30,40,50])[3];

      
      



 array
-------
    30

      
      





But there are other data types where this syntax would be convenient to use. First of all, we are talking about json. It was with the idea of supporting the index notation for json that Dmitry Dolgov's long journey of work on this patch began.



And now, a few years later, such support has appeared. The first patch creates the necessary index notation infrastructure for arbitrary data types. The second patch adds index notation to the hstore type, and the third to the jsonb type.
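
For hstore it looks like this (a tiny sketch, assuming the hstore extension is installed):

CREATE EXTENSION IF NOT EXISTS hstore;

SELECT ('a=>1, b=>2'::hstore)['b'] AS b;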



Now, instead of special functions and operators, you can extract the necessary parts from the json value. Let's find the phone number in the contact details of one of the tickets:



SELECT contact_data, contact_data['phone'] AS phone
FROM   tickets
WHERE  ticket_no = '0005432000994'\gx

      
      



-[ RECORD 1 ]+-----------------------------------------------------------------------------------------------------------------------------------------------------------
contact_data | {"email": "antonova.irina04121972@postgrespro.ru", "phone": "+70844502960"}
phone        | "+70844502960"

      
      





Index notation can be used to write to jsonb as well. Let's add the address to the previously found contact of Irina Antonova:



UPDATE tickets
SET    contact_data['address'] =
           '{"city": "",
             "street": " ",
             "building": "7"
            }'::jsonb
WHERE ticket_no = '0005432000994';

      
      





Note that the address itself is compound and you can also use index notation to refer to parts of it:



SELECT contact_data['address'] AS address,
       contact_data['address']['city'] AS city,
       contact_data['address']['street'] AS street,
       contact_data['address']['building'] AS building,
       contact_data['phone'] AS phone,
       contact_data['email'] AS email
FROM   tickets
WHERE  ticket_no = '0005432000994'\gx

      
      



-[ RECORD 1 ]----------------------------------------------------------------
address  | {"city": "", "street": " ", "building": "7"}
city     | ""
street   | " "
building | "7"
phone    | "+70844502960"
email    | "antonova.irina04121972@postgrespro.ru"

      
      





It is very convenient!



(Clarification. All contacts in the demo database are fictitious and there is no such employee in Postgres Pro.)



Description for hstore from depesz.



SQL Commands



Multirange data types

commit: 6df7a969



Each range data type now has its own multirange data type. Such a type is essentially an array of individual ranges. Ranges within a multirange must not overlap, but there can be gaps between them.



Regular ranges are contiguous ranges of values of the corresponding subtype: int4range for the int subtype, tsrange for timestamp, and so on. But what if you need to store ranges with gaps in some places? This is where multiranges come to the rescue.



Let's say we want to store the commitfest times in the table for each PostgreSQL version. A single commitfest can be thought of as a month-long range. But how to represent all five commitfests of one version?



The range for the timestamptz subtype is called tstzrange, and the multirange is tstzmultirange. The available types are described in the documentation. Create a table:



CREATE TABLE pg_commitfest (
    version text,
    working_period tstzmultirange
);

      
      





To form the values, we use the constructor:



INSERT INTO pg_commitfest VALUES
('13', tstzmultirange(
           tstzrange('2019-07-01', '2019-08-01', '[)'),
           tstzrange('2019-09-01', '2019-10-01', '[)'),
           tstzrange('2019-11-01', '2019-12-01', '[)'),
           tstzrange('2020-01-01', '2020-02-01', '[)'),
           tstzrange('2020-03-01', '2020-04-07', '[]')
       )
),
('14', tstzmultirange(
           tstzrange('2020-07-01', '2020-08-01', '[)'),
           tstzrange('2020-09-01', '2020-10-01', '[)'),
           tstzrange('2020-11-01', '2020-12-01', '[)'),
           tstzrange('2021-01-01', '2021-02-01', '[)'),
           tstzrange('2021-03-01', '2021-04-01', '[)')
       )
);

      
      





The set of functions and operators for working with multirange types includes the same ones as for regular ranges, plus some intended only for multiranges.
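
One of the multirange-only functions is unnest, which expands a multirange into its component ranges. A small sketch on the table created above:

SELECT version, unnest(working_period) AS commitfest
FROM   pg_commitfest
WHERE  version = '14';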



For example, we can find out which version of PostgreSQL the developer community worked on over the past New Year:



SELECT version 
FROM   pg_commitfest
WHERE  working_period @> '2021-01-01'::timestamptz;

      
      



 version
---------
 14

      
      





Or start and end dates for work on version 13:



SELECT lower(working_period), upper(working_period) 
FROM   pg_commitfest
WHERE  version = '13';

      
      



         lower          |         upper          
------------------------+------------------------
 2019-07-01 00:00:00+03 | 2020-04-07 00:00:00+03

      
      





New custom multirange types can be created. This is useful when there is no suitable built-in range type with a corresponding multirange type. The same CREATE TYPE ... AS RANGE command is used; in it you can now specify a name for the automatically generated multirange type.



For example, suppose we are interested in ranges and multiranges of time, with the time subtype. To create the range type, we need a function that computes the difference between two values of type time:



CREATE FUNCTION time_diff(a time, b time) RETURNS double precision
AS $$
    SELECT extract(epoch FROM (a - b));
$$ LANGUAGE sql STRICT IMMUTABLE;

      
      





We create the range type, and with it the multirange type:



CREATE TYPE timerange AS RANGE (
    subtype = time,
    subtype_diff = time_diff,
    multirange_type_name = timemultirange
);

      
      





Now working hours can be formed with the following expression:



SELECT timemultirange(
           timerange('09:00', '13:00', '[)'),
           timerange('14:00', '18:00', '[)')
       ) AS working_hours;

      
      



               working_hours               
-------------------------------------------
 {[09:00:00,13:00:00),[14:00:00,18:00:00)}

      
      





Description by depesz.



ltrim and rtrim functions for binary strings

commit: a6cf3df4



It was already possible to trim bytes from both the beginning and the end of a bytea string at once using the btrim function. Now each end can be trimmed separately with the new ltrim and rtrim functions for binary strings.
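
A small illustration (the byte values are arbitrary); here \x12 and \x90 are the bytes being trimmed:

SELECT btrim('\x1234567890'::bytea, '\x9012'::bytea) AS btrim,  -- \x345678
       ltrim('\x1234567890'::bytea, '\x9012'::bytea) AS ltrim,  -- \x34567890
       rtrim('\x1234567890'::bytea, '\x9012'::bytea) AS rtrim;  -- \x12345678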



GRANTED BY clause in the GRANT and REVOKE commands

commit: 6aaaa76b



For compatibility with the SQL standard, the optional GRANTED BY clause has been added to the GRANT and REVOKE commands. For example:



GRANT ALL ON TABLE table_name TO role_specification 
    [GRANTED BY role_specification];
REVOKE ALL ON TABLE table_name FROM role_specification 
    [GRANTED BY role_specification];

      
      





The role name in GRANTED BY must match the current role, so it is not possible to grant or revoke privileges on behalf of another role. The clause was added purely for compliance with the standard.



This is a continuation of the work described in the September commitfest article.



System administration



initdb --no-instructions

commit: e09155bd



The initdb utility is used to initialize a cluster. At the end of its work, it prints instructions on how to start the cluster:



Success. You can now start the database server using:

    pg_ctl -D /usr/local/pgsql/data -l logfile start

      
      





But this is not always correct. For example, in Debian package distributions the cluster is started with the pg_ctlcluster utility rather than pg_ctl, and its parameters are different.



With the new --no-instructions option, initdb will no longer provide startup advice, which package distributions can take advantage of.
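
For example (the data directory path is just an illustration):

initdb -D /usr/local/pgsql/data --no-instructions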



pg_dump: restore a single partition as a standalone table

commit: 9a4c0e36 , 9eabfe30



If a partitioned table is included in a logical copy made by pg_dump, it was not possible to restore an individual partition from such a copy as an independent table: immediately after the CREATE TABLE command came the ALTER TABLE ... ATTACH PARTITION command, which in this situation is not only unnecessary but also fails with an error, because the parent table has not been restored.



CREATE TABLE t (id int) PARTITION BY RANGE(id);
CREATE TABLE t_p1 PARTITION OF t FOR VALUES FROM (1) TO (1001);
CREATE TABLE t_p2 PARTITION OF t FOR VALUES FROM (1001) TO (2001);
\! pg_dump -Fc -f db.dump

\! pg_restore db.dump -t t_p1 -f -

      
      



...
CREATE TABLE public.t_p1 (
    id integer
);
ALTER TABLE ONLY public.t ATTACH PARTITION public.t_p1 FOR VALUES FROM (1) TO (1001);
...

      
      





Now the ALTER TABLE ... ATTACH PARTITION commands for all partitions are dumped separately, after all the CREATE TABLE commands that create the partitions. Therefore, when restoring a single partition specified with the -t option, only its CREATE TABLE command is executed, which makes it possible to restore the partition as an independent table.






That's all for now. Next comes the final, March commitfest of PostgreSQL 14.



All Articles