When CPU utilization hits 100%, the first suspect is a business-hour surge in active connections that exceeds the resources provisioned for the database. We need to check whether the number of active connections at the time of the incident was much higher than usual. For RDS for PG, the history of connection counts is visible in the console's monitoring view, and the current number of active connections can be obtained by connecting to the database and running:
select count( * ) from pg_stat_activity where state not like '%idle';
If the active connection count is within its normal range, the most likely cause is that some poorly performing SQL was being executed in volume at the time. Since RDS keeps a slow-SQL log, we can use it to locate the expensive statements and analyze them further. However, when the problem hits, the whole system usually grinds to a halt and every statement slows down, so the slow log may contain a huge number of entries, which makes it hard to single out the real culprit. Below are several ways to track down slow SQL while the problem is still happening.
create extension pg_stat_statements; -- create the extension
select pg_stat_reset();              -- reset server statistics
select pg_stat_statements_reset();   -- reset pg_stat_statements statistics
If the extension is not yet configured, set it up first.

-- Edit postgresql.conf:
vi $PGDATA/postgresql.conf

-- Add or modify these settings:
shared_preload_libraries = 'pg_stat_statements'
track_io_timing = on              # track time spent on I/O
track_activity_query_size = 1024  # max recorded length of a single SQL text; longer statements are truncated (optional)

-- pg_stat_statements sampling parameters, also in postgresql.conf:
pg_stat_statements.max = 10000          # maximum number of statements kept in pg_stat_statements
pg_stat_statements.track = all          # all = all SQL, including statements nested inside functions; top = only directly executed SQL; none = no tracking
pg_stat_statements.track_utility = off  # whether to track non-DML statements (e.g. DDL, DCL); on = track, off = don't
pg_stat_statements.save = on            # keep statistics across restarts

-- After editing postgresql.conf, restart the database:
pg_ctl -D $PGDATA restart
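Once the server has restarted, it is worth a quick sanity check that the extension is actually loaded before relying on its numbers. This check is not from the original article, just a common verification step:

```sql
-- Should list pg_stat_statements among the preloaded libraries:
show shared_preload_libraries;

-- Errors with "relation does not exist" if CREATE EXTENSION
-- has not been run in the current database:
select count(*) from pg_stat_statements;
```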
Full statistics for the top 5 SQL by total execution time, descending:
select * from pg_stat_statements order by total_time desc limit 5;
Top 5 SQL by average time per call, descending:
select userid::regrole, dbid, query
from pg_stat_statements
order by mean_time desc
limit 5;
Top 5 SQL by total time, descending (usually the most important):
select userid::regrole, dbid, query
from pg_stat_statements
order by total_time desc
limit 5;
Top 5 SQL by total shared-buffer access (hits plus reads), descending:
select * from pg_stat_statements
order by shared_blks_hit+shared_blks_read desc
limit 5;
Top 5 SQL by I/O time per call, descending:
select userid::regrole, dbid, query
from pg_stat_statements
order by (blk_read_time+blk_write_time)/calls desc
limit 5;
Top 5 SQL by total I/O time, descending:
select userid::regrole, dbid, query
from pg_stat_statements
order by (blk_read_time+blk_write_time) desc
limit 5;
Top 5 SQL by standard deviation of execution time (unstable performance), descending:
select userid::regrole, dbid, query
from pg_stat_statements
order by stddev_time desc
limit 5;
Top 5 SQL by shared blocks hit plus dirtied, descending:
select userid::regrole, dbid, query
from pg_stat_statements
order by (shared_blks_hit+shared_blks_dirtied) desc
limit 5;
Top 5 SQL by temporary blocks written (large sorts or hashes spilling to disk), descending:
select userid::regrole, dbid, query
from pg_stat_statements
order by temp_blks_written desc
limit 5;
postgres=# select * from pg_stat_statements;
userid | dbid | queryid | query | calls | total_time | min_time | max_time | mean_time | stddev_time | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time | blk_write_time
--------+------+---------+-------+-------+------------+----------+----------+-----------+-------------+------+-----------------+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+---------------+----------------
Field descriptions

Field | Type | References | Description |
---|---|---|---|
userid | oid | pg_authid.oid | OID of the user who executed the statement |
dbid | oid | pg_database.oid | OID of the database the statement was executed in |
queryid | bigint | | Internal hash code, computed from the statement's parse tree |
query | text | | Text of the statement |
calls | bigint | | Number of times the statement was executed |
total_time | double precision | | Total time spent in the statement, in milliseconds |
min_time | double precision | | Minimum time spent in the statement, in milliseconds |
max_time | double precision | | Maximum time spent in the statement, in milliseconds |
mean_time | double precision | | Mean time spent in the statement, in milliseconds |
stddev_time | double precision | | Population standard deviation of time spent in the statement, in milliseconds |
rows | bigint | | Total number of rows retrieved or affected by the statement |
shared_blks_hit | bigint | | Total number of shared block cache hits by the statement |
shared_blks_read | bigint | | Total number of shared blocks read by the statement |
shared_blks_dirtied | bigint | | Total number of shared blocks dirtied by the statement |
shared_blks_written | bigint | | Total number of shared blocks written by the statement |
local_blks_hit | bigint | | Total number of local block cache hits by the statement |
local_blks_read | bigint | | Total number of local blocks read by the statement |
local_blks_dirtied | bigint | | Total number of local blocks dirtied by the statement |
local_blks_written | bigint | | Total number of local blocks written by the statement |
temp_blks_read | bigint | | Total number of temp blocks read by the statement |
temp_blks_written | bigint | | Total number of temp blocks written by the statement |
blk_read_time | double precision | | Total time the statement spent reading blocks, in milliseconds (zero unless track_io_timing is enabled) |
blk_write_time | double precision | | Total time the statement spent writing blocks, in milliseconds (zero unless track_io_timing is enabled) |
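These columns combine usefully. For instance, a per-statement shared-buffer hit ratio helps separate CPU-bound statements (high hit ratio, all data already in memory) from I/O-bound ones. A minimal sketch, not from the original article; the `hit_ratio_pct` alias is my own:

```sql
-- Per-statement cache hit ratio from pg_stat_statements.
-- nullif() avoids division by zero for statements that touched no shared blocks.
select query,
       calls,
       shared_blks_hit,
       shared_blks_read,
       round((100.0 * shared_blks_hit
              / nullif(shared_blks_hit + shared_blks_read, 0))::numeric, 2) as hit_ratio_pct
from pg_stat_statements
order by shared_blks_hit + shared_blks_read desc
limit 5;
```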
The currently active statements that have been running longest can be listed directly from pg_stat_activity:
select datname, usename, client_addr, application_name, state,
       backend_start, xact_start, xact_stay, query_start, query_stay,
       replace(query, chr(10), ' ') as query
from (select pgsa.datname as datname,
             pgsa.usename as usename,
             pgsa.client_addr client_addr,
             pgsa.application_name as application_name,
             pgsa.state as state,
             pgsa.backend_start as backend_start,
             pgsa.xact_start as xact_start,
             extract(epoch from (now() - pgsa.xact_start)) as xact_stay,
             pgsa.query_start as query_start,
             extract(epoch from (now() - pgsa.query_start)) as query_stay,
             pgsa.query as query
      from pg_stat_activity as pgsa
      where pgsa.state != 'idle'
        and pgsa.state != 'idle in transaction'
        and pgsa.state != 'idle in transaction (aborted)') idleconnections
order by query_stay desc
limit 5;
datname | usename | client_addr | application_name | state | backend_start | xact_start | xact_stay | query_start | query_stay | query
---------+-------------+---------------+--------------------------+--------+-------------------------------+-------------------------------+---------------+-------------------------------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
denali | denaliadmin | 10.222.16.45 | pgAdmin III - Query Tool | active | 2018-02-26 22:36:05.603781+00 | 2018-02-26 22:36:13.054396+00 | 187614.245395 | 2018-02-26 22:36:13.054396+00 | 187614.245395 | select * from gen3_search_eu_17q2_20171115_epl.place_name \r where place_id not in (select place_id from gen3_search_eu_17q1_20170308_epl.place_name ) \r and name not in (select name from gen3_search_eu_17q1_20170308_epl.place_name)\r and lang = 'ENG'\r limit 50
denali | denaliadmin | 10.222.16.45 | pgAdmin III - Query Tool | active | 2018-02-26 23:46:24.442846+00 | 2018-02-26 23:46:34.920261+00 | 183392.37953 | 2018-02-26 23:46:34.920261+00 | 183392.37953 | select * from gen3_search_eu_17q2_20171115_epl.place_name \r where place_id not in (select place_id from gen3_search_eu_17q1_20170308_epl.place_name ) \r and name not in (select name from gen3_search_eu_17q1_20170308_epl.place_name)\r and lang = 'ENG'\r limit 50\r
denali | denaliadmin | 10.222.16.45 | pgAdmin III - Query Tool | active | 2018-02-27 01:19:53.83589+00 | 2018-02-27 01:20:01.519778+00 | 177785.780013 | 2018-02-27 01:20:01.519778+00 | 177785.780013 | select * from gen3_search_eu_17q2_20171115_epl.place_name \r where place_id not in (select place_id from gen3_search_eu_17q1_20170308_epl.place_name ) \r and name not in (select name from gen3_search_eu_17q1_20170308_epl.place_name)\r limit 50
denali | denaliadmin | 10.222.16.45 | pgAdmin III - Query Tool | active | 2018-02-27 01:46:05.207888+00 | 2018-02-27 01:47:52.039779+00 | 176115.260012 | 2018-02-27 01:47:52.039779+00 | 176115.260012 | select a.place_id, a.metadata_dictionary_id,a.value, a.lang, b.place_id, b.metadata_dictionary_id, b.value, b.lang\r from gen3_search_eu_17q1_20170308_epl.place_address a \r inner join gen3_search_eu_17q2_20171115_epl.place_address b\r on a.place_id = b.place_id \r where a.metadata_dictionary_id = b.metadata_dictionary_id and a.lang = b.lang and a.value!=b.value and b.place_id not in (select poi_id from gen3_search_eu_17q2_20171115_epl.place_embeded_ids)\r limit 100\r
denali | denaliadmin | 10.224.14.148 | pgAdmin III - Query Tool | active | 2018-02-27 05:05:39.903885+00 | 2018-02-27 05:05:48.827779+00 | 164238.472012 | 2018-02-27 05:05:48.827779+00 | 164238.472012 | select a.place_id, a.metadata_dictionary_id,a.value, a.lang, b.place_id, b.metadata_dictionary_id, b.value, b.lang\r from gen3_search_eu_17q1_20170308_epl.place_address a \r inner join gen3_search_eu_17q2_20171115_epl.place_address b\r on a.place_id = b.place_id \r where a.metadata_dictionary_id = b.metadata_dictionary_id and a.lang = b.lang and a.value!=b.value and b.place_id not in (select poi_id from gen3_search_eu_17q2_20171115_epl.place_embeded_ids)\r limit 100\r
(5 rows)
If a table is missing a needed index while most of its hot data sits in memory (say 8 GB of RAM holding 6 GB of hot data), the database can only do sequential scans and must churn through large numbers of irrelevant in-memory rows, burning CPU. In particular, for a table with more than roughly 100,000 rows, a single table scan can consume an entire CPU core, and many concurrent connections (say, hundreds) running such scans will saturate every core. To find such tables, look for large tables with heavy sequential-scan activity:
select * from pg_stat_user_tables
where n_live_tup > 100000 and seq_scan > 0
order by seq_tup_read desc
limit 10;
Sessions that have been running a statement against the table for more than 10 seconds:
select * from pg_stat_activity
where query ilike '%<table name>%' and now() - query_start > interval '10 seconds';
The heaviest statements against the table, by shared-buffer access:
select * from pg_stat_statements
where query ilike '%<table>%'
order by shared_blks_hit+shared_blks_read desc
limit 3;
For slow SQL found with the methods above, the first step is often to cancel or kill it so the service can recover:
-- politely cancel the running statement:
select pg_cancel_backend(pid)
from pg_stat_activity
where query like '%<query text>%' and pid != pg_backend_pid();
-- or forcibly terminate the whole backend:
select pg_terminate_backend(pid)
from pg_stat_activity
where query like '%<query text>%' and pid != pg_backend_pid();
If these statements are genuinely required by the business, they need to be optimized. There are three standard moves:
1. Run ANALYZE <table> or VACUUM ANALYZE <table> on the tables involved to refresh their statistics, so the planner produces accurate plans. To limit the impact on the business, do this during off-peak hours.
2. Run EXPLAIN <query> or EXPLAIN (BUFFERS TRUE, ANALYZE TRUE, VERBOSE TRUE) <query> to inspect the execution plan (note: the former does not actually execute the SQL, while the latter does and returns detailed runtime information), then build indexes on the tables that show up in sequential scans.
3. Rewrite the SQL: remove unnecessary subqueries, rewrite to UNION ALL, pin the join order with explicit JOIN clauses, and so on. These are deeper optimization techniques that we will not go into here.
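As a sketch of step 2 (the `orders` table and `customer_id` column here are hypothetical, used only for illustration), a typical session looks like:

```sql
-- Actually executes the statement and reports per-node timings and buffer usage:
explain (analyze, buffers, verbose)
select * from orders where customer_id = 42;

-- If the plan shows "Seq Scan on orders", index the filtered column.
-- CONCURRENTLY builds the index without blocking writes (it cannot run
-- inside a transaction block and is slower than a plain CREATE INDEX):
create index concurrently idx_orders_customer_id on orders (customer_id);
```

After the index is built, re-run the EXPLAIN to confirm the plan has switched to an index scan.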
If you do need a count(*), wrap the filtering query in a subquery and count its rows:
select count(*) from
(
    select id
    from users
    where preferred_language = 'zh_CN'
      and private_profile = true
) as temp;
Avoid IN and especially NOT IN, since they can force a full table scan; use EXCEPT or NOT EXISTS instead.
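A sketch of such a rewrite (the `old_ids` and `new_ids` table names are hypothetical):

```sql
-- Before: NOT IN. Besides often planning poorly, it returns no rows at all
-- if the subquery can produce a NULL, which surprises many users.
select * from old_ids where id not in (select id from new_ids);

-- After: NOT EXISTS, which the planner can execute as an anti-join
-- and which handles NULLs predictably.
select *
from old_ids o
where not exists (select 1 from new_ids n where n.id = o.id);
```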
Add indexes to the columns that are frequently filtered, sorted, or grouped on.
B-tree suits all data types, supports sorting, and supports searches with greater-than, less-than, equals, greater-or-equal, and less-or-equal.
postgres=# create index idx_t_btree_1 on t_btree using btree (id);
A hash index stores the hash of the indexed column's value and supports only equality lookups. Hash indexes are particularly suited to very long values (which do not fit a B-tree well, since a B-tree page must hold at least three entries and therefore cannot support especially long values).
If you only need equality lookups, a hash index can be used:
postgres=# create index idx_t_hash_1 on t_hash using hash (info);
GIN suits multi-valued types, when you need to search for values inside them: arrays, full-text search, tokens, and so on.
-- arr is an array column
postgres=# create index idx_t_gin1_1 on t_gin1 using gin (arr);
GiST is a balanced-tree access method that acts as a generic template within the system, on top of which arbitrary indexing schemes can be implemented. B-trees, R-trees, and many other schemes can be built on GiST. GiST indexes are, however, slow to build and relatively large on disk.
postgres=# create index idx_t_gist_1 on t_gist using gist (pos);
SP-GiST is similar to GiST, a generic index interface, but SP-GiST uses space partitioning, which lets it better support unbalanced data structures and positional searches.
postgres=# create index idx_t_spgist_1 on t_spgist using spgist (rg);
UNION removes duplicate rows after combining the two result sets, which typically requires a sort or hash; UNION ALL simply concatenates the two results and returns them. Prefer UNION ALL when duplicates are acceptable.
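A minimal illustration of the difference, using literal values rather than any table from the article:

```sql
select 1 as x union select 1;      -- one row: the duplicate is removed
select 1 as x union all select 1;  -- two rows: plain concatenation, no dedup step
```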
https://www.centos.bz/2017/08/postgresql-cpu-100-slow-sql/