From The Crunchy Data Team
Just a few helpful tips on how to use Postgres to its fullest. Have one you'd like to share, or is something unclear? Share with us on Twitter @crunchydata.
Log slow queries by setting log_min_duration_statement.
ALTER DATABASE postgres SET log_min_duration_statement = '250ms';
Control the types of statements that are logged for your database.
ALTER DATABASE postgres SET log_statement = 'all';
Valid values are none, ddl, mod, and all.
Log when the database is waiting on a lock (specifically, when a session waits longer than deadlock_timeout to acquire one).
ALTER DATABASE postgres SET log_lock_waits = 'on';
Setting a statement timeout prevents queries from running longer than the specified time. You can set a statement timeout at the database, user, or session level. We recommend setting a global timeout on Postgres and then overriding it for the specific users or sessions that need a longer allowed time to run.
ALTER DATABASE mydatabase SET statement_timeout = '60s';
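The same timeout can also be overridden at the role or session level, for example (the role name reporting_user is a hypothetical stand-in):
-- Longer timeout for one role (hypothetical role name)
ALTER ROLE reporting_user SET statement_timeout = '5min';
-- Or just for the current session
SET statement_timeout = '5min';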
List the ten slowest queries by average execution time (requires the pg_stat_statements extension).
SELECT total_exec_time,
       mean_exec_time AS avg_ms,
       calls,
       query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
This query will provide the number of connections grouped by state.
SELECT count(*),
state
FROM pg_stat_activity
GROUP BY state;
If you see the number of idle connections climb above 20, it is recommended to explore using PgBouncer.
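As a quick follow-up, a sketch that counts only the idle connections:
SELECT count(*)
FROM pg_stat_activity
WHERE state = 'idle';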
Will give you the size of the specific relation you pass in.
SELECT pg_relation_size('table_name');
-- For prettier formatting you can wrap with:
SELECT pg_size_pretty(pg_relation_size('table_name'));
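Note that pg_relation_size covers only the table's main data. If you want indexes and TOAST data included as well, pg_total_relation_size does that:
SELECT pg_size_pretty(pg_total_relation_size('table_name'));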
Will report on all table sizes in descending order
SELECT relname AS relation,
       pg_size_pretty(pg_total_relation_size(C.oid)) AS total_size
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
  AND C.relkind <> 'i'
  AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC;
Will return the unused indexes in descending order of size. Keep in mind you also want to check index usage on replicas before dropping indexes.
SELECT schemaname || '.' || relname AS table,
indexrelname AS index,
pg_size_pretty(pg_relation_size(i.indexrelid)) AS "index size",
idx_scan as "index scans"
FROM pg_stat_user_indexes ui
JOIN pg_index i ON ui.indexrelid = i.indexrelid
WHERE NOT indisunique
AND idx_scan < 50
AND pg_relation_size(relid) > 5 * 8192
ORDER BY
pg_relation_size(i.indexrelid) / nullif(idx_scan, 0) DESC NULLS FIRST,
pg_relation_size(i.indexrelid) DESC;
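Once you have confirmed an index is unused everywhere (including replicas), it can be dropped without blocking writes; like CREATE INDEX CONCURRENTLY, this cannot run inside a transaction. The index name below is a placeholder:
DROP INDEX CONCURRENTLY index_name;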
Will return the approximate row count for a table based on PostgreSQL's internal statistics. Useful for large tables where performing a `SELECT count(*)` is expensive.
SELECT reltuples::numeric as count
FROM pg_class
WHERE relname='table_name';
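The estimate is only as fresh as the last ANALYZE or autovacuum run. If the table name is not unique across schemas, a regclass cast avoids matching the wrong row (the schema below is just an example):
SELECT reltuples::numeric AS count
FROM pg_class
WHERE oid = 'public.table_name'::regclass;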
Adding `CONCURRENTLY` during index creation, while not permitted inside a transaction, builds the index without holding a lock that blocks writes to the table.
CREATE INDEX CONCURRENTLY foobar ON foo (bar);
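If a concurrent build fails or is cancelled, it can leave behind an invalid index that still adds write overhead; a quick sketch to spot any leftovers:
SELECT indexrelid::regclass AS invalid_index
FROM pg_index
WHERE NOT indisvalid;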
Will automatically print the time it took to run a query from within psql. *Of note, this is the round-trip time, not simply the query execution time.*
\timing
You can save this in your `.psqlrc` to be a default setting
Will automatically reorganize the query output based on your terminal window for better readability.
\x auto
You can save this in your `.psqlrc` to be a default setting
Will automatically open your last-run query in your default `$EDITOR`. When you save and close, it will execute that query.
\e
Will render the nulls as whatever character you specify. Handy for easier parsing of nulls vs. blank text.
\pset null 👻
You can save this in your `.psqlrc` to be a default setting
Will automatically save a history file for each **DBNAME**.
\set HISTFILE ~/.psql_history- :DBNAME
You can save this in your `.psqlrc` to be a default setting
Add "-E" (or --echo-hidden) option to psql in the command line. This option will display queries that internal psql commands generate (like "\dt mytable"). This is a cool way to learn more about system catalogs, or reuse queries issued by psql in your own tool.
psql -E
Add "-qtA" options to psql in the command line. Those options will have psql run in quiet mode ("-q"), return tuples only ("-t") in an unaligned fashion ("-A"). Combined with "-c" option to send a single query, it can be useful for your scripts if you want the data and only that back from Postgres. Returns one line per row.
psql -qtA
Add "-qtH" options to psql in the command line. Those options will have psql run in quiet mode ("-q"), return tuples only ("-t") in an HTML table ("-H"). Combined with "-c" option to send a single query, can be a fast way to embed the result of a query in an HTML page.
psql -qtH
Ctrl + R will start a search session and you can start typing part of the query or command to find and run it again. If you tag specific queries with a comment, this can help with searching later.
(reverse-i-search)
Will clear your screen in current psql session
\! clear
Will automatically re-run the last query every 2 seconds and display the output. You can also type a query followed by `\watch` on the same line, and optionally pass the number of seconds between runs.
\watch
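For example, to re-run a connection count every 5 seconds (any query works here):
SELECT count(*) FROM pg_stat_activity \watch 5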
When you encounter an error in interactive mode, this will automatically roll back to just before the failed statement, allowing you to continue working as you would expect.
\set ON_ERROR_ROLLBACK interactive
When provided along with a query, the `--csv` flag will run that query and write CSV to STDOUT.
psql <connection-string> --csv -c 'select * from test;'
Will execute the specified file when inside psql.
\i filename
Will give you a border around the output of your queries when in psql
\pset border 2
You can save this in your `.psqlrc` to be a default setting
Changes the linestyle to unicode, which, combined with the tip above, leads to much cleaner formatting.
\pset linestyle unicode
You can save this in your `.psqlrc` to be a default setting
Coalesce will use the value, and if the value is null, display your specified string instead.
SELECT id,
coalesce(ip, 'no IP')
FROM logs;
You can also supply two or more columns before your fallback value; the function returns the first non-null value.
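For example, a sketch with a hypothetical second column, backup_ip:
SELECT id,
       coalesce(ip, backup_ip, 'no IP')
FROM logs;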
IMPORT FOREIGN SCHEMA creates local foreign tables representing the tables that exist in a schema on the foreign server.
IMPORT FOREIGN SCHEMA "public"
  FROM SERVER foreign_server INTO public;
When you already have a foreign data wrapper and server mapped, IMPORT FOREIGN SCHEMA saves you from defining each foreign table by hand (foreign_server above stands in for whatever you named your server).
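You can also limit the import to specific tables with LIMIT TO; the table names here are hypothetical:
IMPORT FOREIGN SCHEMA "public"
  LIMIT TO (orders, customers)
  FROM SERVER foreign_server INTO public;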
Generates values from the supplied start to end value, stepping by the given interval. Values can be numbers or timestamps. Can be used in a FROM or JOIN clause or a CTE. Commonly used when building charts and reports that require all dates to be filled in.
SELECT * FROM
generate_series(now() - '3 month'::interval, now(), '1 day');
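A common pattern is to LEFT JOIN your data onto the series so days with no rows still appear; the logs table and created_at column here are hypothetical:
SELECT day::date,
       count(logs.id) AS entries
FROM generate_series(now() - '3 month'::interval, now(), '1 day') AS day
LEFT JOIN logs ON logs.created_at::date = day::date
GROUP BY 1
ORDER BY 1;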
Will truncate the date to the specified level of precision. Some example precision levels include: month, week, day, hour, minute.
SELECT date_trunc('day', now());
You can add or subtract specific amounts of time of a timestamp by casting the value you want as an interval.
SELECT now() - '1 month'::interval;
This function will make your session sleep for 2.5 seconds. Useful, for example, in a testing script that runs in a loop where you want to pause between iterations.
select pg_sleep(2.5);
Once you have created a specific user role, you can use the `pg_read_all_data` role to grant read-only access to all tables.
GRANT pg_read_all_data TO username;
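On Postgres 14 and newer there is also a companion role, pg_write_all_data, if the same user needs write access:
GRANT pg_write_all_data TO username;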
This query will terminate every backend (connection) belonging to the user "test".
WITH pids AS (
SELECT pid
FROM pg_stat_activity
WHERE usename='test'
)
SELECT pg_terminate_backend(pid)
FROM pids;
This query will cancel every running query issued by the particular user "test".
WITH pids AS (
SELECT pid
FROM pg_stat_activity
WHERE usename='test'
)
SELECT pg_cancel_backend(pid)
FROM pids;
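A related sketch cancels anything that has been running longer than a chosen threshold, regardless of user:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND pid <> pg_backend_pid()
  AND now() - query_start > interval '5 minutes';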