Greenplum: Table Bloat and Data Skew
Checking for table bloat
mydb=# select * from gp_toolkit.gp_bloat_diag limit 3;
bdirelid | bdinspname | bdirelname | bdirelpages | bdiexppages | bdidiag
----------+------------+------------------+-------------+-------------+---------------------------------------
6040 | pg_catalog | pg_exttable | 7807 | 32 | significant amount of bloat suspected
5094 | pg_catalog | gp_relation_node | 26317 | 211 | significant amount of bloat suspected
# bdidiag: bloat diagnosis result (a ratio of 1 to 3 means no bloat; 4 to 10 means moderate amount of bloat suspected; greater than 10 means significant amount of bloat suspected)
bdirelpages: actual number of pages on disk.
bdiexppages: expected number of pages.
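As a quick illustration, the ratio that bdidiag is based on can be computed directly from the two page counts; this is only a sketch, and the NULLIF guard and the rounding are additions of this note rather than part of the queries below:
select bdinspname, bdirelname,
       round(bdirelpages::numeric / nullif(bdiexppages, 0), 1) as bloat_ratio
from gp_toolkit.gp_bloat_diag
where bdiexppages > 0
order by bloat_ratio desc
limit 10;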
Or point the check at a specific table directly:
select * from gp_toolkit.gp_bloat_diag where bdirelname ='twb_list_cm_lxrkzxx';
Count how many severely bloated tables there are:
select count(*) from gp_toolkit.gp_bloat_diag where bdidiag='significant amount of bloat suspected';
List which tables are severely bloated:
select bdinspname||'.'||bdirelname ,bdirelpages,bdiexppages from gp_toolkit.gp_bloat_diag where bdidiag='significant amount of bloat suspected' order by bdirelpages desc limit 30 ;
Including the bloat factor (ordered by bloat factor):
select bdirelid, bdinspname, bdirelname, bdirelpages*32/1024/1024 realsize_G, bdiexppages*32/1024/1024 expectsize_G, bdirelpages/bdiexppages as expansion from gp_toolkit.gp_bloat_diag where bdidiag='significant amount of bloat suspected' order by expansion desc limit 30;
Ordered by actual table size:
select bdirelid ,bdinspname||'.'||bdirelname ,bdirelpages*32/1024/1024 realsize_G,bdiexppages*32/1024/1024 expectsize_G, bdirelpages/bdiexppages as expansion from gp_toolkit.gp_bloat_diag where bdidiag='significant amount of bloat suspected' order by bdirelpages desc limit 30;
Moderately bloated tables:
select bdirelid ,bdinspname ,bdirelname ,bdirelpages*32/1024/1024 realsize_G,bdiexppages*32/1024/1024 expectsize_G, bdirelpages/bdiexppages as expansion from gp_toolkit.gp_bloat_diag where bdidiag='moderate amount of bloat suspected' and
bdinspname not like 'pg_catalog' order by bdirelpages limit 50;
Bloated tables excluding system tables, temporary tables, external tables, and error tables:
select bdirelid ,bdinspname ||'.'||bdirelname ,bdirelpages*32/1024/1024 realsize_G,bdiexppages*32/1024/1024 expectsize_G, bdirelpages/bdiexppages as expansion from gp_toolkit.gp_bloat_diag where bdinspname not like '%pg_catalog%' and bdirelname not like '%_tmp%' and bdirelname not like 'tmp%' and bdirelname not like 'temp%' and bdirelname not like 'err_%' and bdirelname not like 'ext_%' and bdirelname not like '%_bak' and bdirelname not like '%_prt%' order by bdirelpages desc limit 30;
Size and bloat details for a specific table:
select bdirelid relation_id,bdinspname schemaname,bdirelname tablename,bdirelpages*32/1024/1024 realsize_G,bdiexppages*32/1024/1024 expectsize_G from gp_toolkit.gp_bloat_diag where bdirelname like '%twb_zw_ssdfjl_d%';
The view gp_toolkit.gp_bloat_expected_pages lists, for every database object, the number of disk pages actually used and the number of disk pages expected.
mydb=# select * from gp_toolkit.gp_bloat_expected_pages ;
btdrelid | btdrelpages | btdexppages
----------+-------------+-------------
10784 | 1 | 32
10789 | 1 | 32
……
Eliminating data bloat
Always run VACUUM after large UPDATE and DELETE operations; VACUUM FULL is not recommended as routine practice.
If a table is severely bloated and small, its page space can be reclaimed with VACUUM FULL table_name.
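A minimal sketch of this routine maintenance, using the hypothetical table my_schema.my_small_table (the names are placeholders, not taken from the examples above):
-- routine cleanup after a large UPDATE/DELETE
VACUUM my_schema.my_small_table;
ANALYZE my_schema.my_small_table;
-- for a small, severely bloated table only: reclaims the pages but takes an ACCESS EXCLUSIVE lock
VACUUM FULL my_schema.my_small_table;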
For a severely bloated large table there are two ways to handle it.
Method 1: create a copy of the table, drop the original, then rename the copy.
BEGIN;
LOCK TABLE tablename;
CREATE TABLE tablename_tmp AS SELECT * FROM tablename;
DROP TABLE tablename;
ALTER TABLE tablename_tmp RENAME TO tablename;
COMMIT;
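Note that a bare CREATE TABLE ... AS does not carry over the original distribution key, indexes, or privileges. A sketch that also keeps the distribution policy, assuming a hypothetical table sales distributed by sale_id (placeholder names):
BEGIN;
LOCK TABLE sales;
-- copy the structure and pin the original distribution key explicitly
CREATE TABLE sales_tmp (LIKE sales) DISTRIBUTED BY (sale_id);
INSERT INTO sales_tmp SELECT * FROM sales;
DROP TABLE sales;
ALTER TABLE sales_tmp RENAME TO sales;
COMMIT;
-- recreate indexes and re-grant privileges as needed, then refresh statistics
ANALYZE sales;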
Method 2: redistribute the table.
1. Record the table's distribution key:
\d+ table_name
2. Change the table's distribution policy to random distribution:
ALTER TABLE tablename SET WITH (REORGANIZE=false) DISTRIBUTED randomly;
3. Change it back to the original distribution policy:
ALTER TABLE tablename SET WITH (REORGANIZE=true) DISTRIBUTED BY (distribution_key_columns);
Finally, analyze the table:
ANALYZE tablename;
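Putting the three steps together on the hypothetical table sales distributed by sale_id (placeholder names): REORGANIZE=false only changes the policy metadata without moving rows, while REORGANIZE=true physically redistributes the rows, which is what releases the bloated space.
-- 1. record the original policy from \d+ sales: DISTRIBUTED BY (sale_id)
-- 2. switch to random distribution without moving data
ALTER TABLE sales SET WITH (REORGANIZE=false) DISTRIBUTED RANDOMLY;
-- 3. switch back to the original key; the table is rewritten and the bloat is removed
ALTER TABLE sales SET WITH (REORGANIZE=true) DISTRIBUTED BY (sale_id);
-- refresh optimizer statistics
ANALYZE sales;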
Eliminating index bloat
Rebuild all indexes on a table:
REINDEX TABLE my_table;
Rebuild a specific index:
REINDEX INDEX my_index;
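To see which indexes a table has before deciding what to rebuild, the standard pg_indexes view can be queried; the public schema and my_table name here are just placeholders matching the example above:
select indexname, indexdef
from pg_indexes
where schemaname = 'public' and tablename = 'my_table';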
Eliminating catalog (metadata) bloat: it should be enough to run VACUUM/REINDEX/ANALYZE once a week on all database objects under pg_catalog. If bloat cannot be removed with the copy or redistribution methods, VACUUM FULL must be used to eliminate it.
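One possible way to produce that weekly statement list is to generate it from pg_class; this is only a sketch (save the output to a file and run it with psql -f, or wrap it in a shell loop), not a prescribed procedure:
select 'VACUUM ANALYZE pg_catalog.' || c.relname || ';'
from pg_class c
join pg_namespace n on c.relnamespace = n.oid
where n.nspname = 'pg_catalog' and c.relkind = 'r'
order by c.relname;
-- the REINDEX statements can be generated the same way
select 'REINDEX TABLE pg_catalog.' || c.relname || ';'
from pg_class c
join pg_namespace n on c.relnamespace = n.oid
where n.nspname = 'pg_catalog' and c.relkind = 'r'
order by c.relname;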
If Greenplum reaches the xid_stop_limit transaction ID limit because VACUUM maintenance has been neglected, the database becomes unresponsive, and recovering from that situation requires intervention as a database administrator.
To check for query processing skew, first find the OID of the database you want to monitor:
mydb=# SELECT oid,datname FROM pg_database;
oid | datname
-------+-----------
17146 | mydb      # e.g. this is the database we want to check for skew
10899 | postgres
1 | template1
10898 | template0
17383 | gpperfmon
Check the file sizes for the mydb database on all segment nodes. Adjust the directory to match each database's path.
Sometimes the pgsql_tmp directory is empty; in that case run the statistics against the parent directory instead, i.e. change
"du -b /data[1-2]/pg_system/primary/gpseg*/base/17146/pgsql_tmp/*" to
"du -b /data[1-2]/pg_system/primary/gpseg*/base/17146/*"
[gpadmin@mas01 ~]$ gpssh -f host_seg -e "du -b /data[1-2]/pg_system/primary/gpseg*/base/17146/pgsql_tmp/*" | grep -v "du -b" | sort | awk -F" " '{ arr[$1] = arr[$1] + $2 ; tot = tot + $2 }; END \
{ for ( i in arr ) print "Segment node" i, arr[i], "bytes (" arr[i]/(1024**3)" GB)"; \
print "Total", tot, "bytes (" tot/(1024**3)" GB)" }'
Segment node[seg03] 183998152160 bytes (171.362 GB)
Segment node[seg04] 186324821280 bytes (173.529 GB)
Segment node[seg05] 185831027816 bytes (173.069 GB)
Segment node[seg06] 189082187808 bytes (176.097 GB)
Segment node[seg07] 187205020552 bytes (174.348 GB)
Segment node[seg08] 190963047960 bytes (177.848 GB)
Segment node[seg01] 184466387464 bytes (171.798 GB)
Segment node[seg02] 182483788656 bytes (169.951 GB)
Total 1490354433696 bytes (1388 GB)
If there is significant and sustained skew, the next task is to identify the offending query. The command in the previous step summed usage across entire nodes; this time, locate the actual segment directories. You can do this from the master host, or by logging in to the specific node identified in the previous step.
gpssh -f host_seg -e "ls -l /data[1-2]/pg_system/primary/gpseg*/base/17146/pgsql_tmp/*" | grep -i sort | awk '{sub(/base.*tmp\//, ".../", $10); print $1,$6,$10}' | sort -k2 -n
[sdw1] 291176448 /data2/primary/gpseg5/.../pgsql_tmp_slice0_sort_17764_0001.0
[sdw8] 924581888 /data2/primary/gpseg45/.../pgsql_tmp_slice10_sort_15673_0010.9
[sdw4] 980582400 /data1/primary/gpseg18/.../pgsql_tmp_slice10_sort_29425_0001.0
[sdw6] 986447872 /data2/primary/gpseg35/.../pgsql_tmp_slice10_sort_29602_0001.0
...
[sdw5] 999620608 /data1/primary/gpseg26/.../pgsql_tmp_slice10_sort_28637_0001.0
[sdw2] 999751680 /data2/primary/gpseg9/.../pgsql_tmp_slice10_sort_3969_0001.0
[sdw3] 1000112128 /data1/primary/gpseg13/.../pgsql_tmp_slice10_sort_24723_0001.0
[sdw5] 1000898560 /data2/primary/gpseg28/.../pgsql_tmp_slice10_sort_28641_0001.0
...
[sdw8] 1008009216 /data1/primary/gpseg44/.../pgsql_tmp_slice10_sort_15671_0001.0
[sdw5] 1008566272 /data1/primary/gpseg24/.../pgsql_tmp_slice10_sort_28633_0001.0
[sdw4] 1009451008 /data1/primary/gpseg19/.../pgsql_tmp_slice10_sort_29427_0001.0
[sdw7] 1011187712 /data1/primary/gpseg37/.../pgsql_tmp_slice10_sort_18526_0001.0
[sdw8] 1573741824 /data2/primary/gpseg45/.../pgsql_tmp_slice10_sort_15673_0001.0
[sdw8] 1573741824 /data2/primary/gpseg45/.../pgsql_tmp_slice10_sort_15673_0002.1
[sdw8] 1573741824 /data2/primary/gpseg45/.../pgsql_tmp_slice10_sort_15673_0003.2
Scanning this output shows that segment gpseg45 on host sdw8 is the culprit, because its sort files are larger than the others in the output.
Log in to the problem node as root and use the lsof command to find the PID of the process that owns one of the sort files:
lsof /data2/primary/gpseg45/base/19979/pgsql_tmp/300602255.1
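Once the PID is known, a possible follow-up (a sketch of the usual Greenplum procedure, not spelled out here) is to read that segment process's title with ps, note its conNNN connection identifier, and look the session up on the master. The session id 699 below is a placeholder; the process-id column is procpid on older releases and pid on newer ones, and current_query is named query on Greenplum 6:
-- on the master, the conNNN number from the segment process title maps to sess_id
SELECT sess_id, usename, current_query FROM pg_stat_activity WHERE sess_id = 699;
-- cancel the offending query once identified
SELECT pg_cancel_backend(procpid) FROM pg_stat_activity WHERE sess_id = 699;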
Viewing the data distribution of a specific table
To see how a table's rows are distributed (the row count on each segment), run a query like the following; if all segments have roughly the same number of rows, the table is considered evenly distributed.
=# SELECT gp_segment_id, count(*) FROM table_name GROUP BY gp_segment_id;
例如:
mydb=# SELECT gp_segment_id, count(*) FROM wwld.temp_01 GROUP BY gp_segment_id;
gp_segment_id | count
---------------+-------
(0 rows)
# nothing is returned for this partitioned (parent) table
mydb=# SELECT gp_segment_id, count(*) FROM wwld.temp_02 GROUP BY gp_segment_id order by gp_segment_id;
gp_segment_id | count
---------------+---------
0 | 4661071
1 | 4774123
2 | 4650578
…………
15 | 4778579
This method is quite crude: it only tells you whether storage is skewed, not whether processing skew will occur, and it looks at very few dimensions, so it is not very intuitive.
Greenplum provides tools such as gp_toolkit.gp_skew_coefficients to make these checks.
mydb=# \d+ gp_toolkit.gp_skew_coefficients
View "gp_toolkit.gp_skew_coefficients"
Column | Type | Modifiers | Storage | Description
--------------+---------+-----------+---------+-------------
skcoid | oid | | plain |
skcnamespace | name | | plain |
skcrelname | name | | plain |
skccoeff | numeric | | main |
View definition:
SELECT skew.skewoid AS skcoid, pgn.nspname AS skcnamespace, pgc.relname AS skcrelname, skew.skewval AS skccoeff
FROM gp_toolkit.__gp_skew_coefficients() skew(skewoid, skewval)
JOIN pg_class pgc ON skew.skewoid = pgc.oid
JOIN pg_namespace pgn ON pgc.relnamespace = pgn.oid;
When the gp_toolkit.gp_skew_coefficients view is used to check table data skew, the check is based on the table's row counts, so the larger the table, the longer the check takes.
mydb=# select * from gp_toolkit.gp_skew_coefficients limit 3;
The skccoeff column is a standard-deviation measure computed against the mean number of stored rows per segment: the lower the value, the more evenly the data is stored; a higher value means the data is stored unevenly and the choice of distribution key should be reconsidered.
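For example, to surface the most skewed tables first (a straightforward variation on the query above, using only the columns shown in the view definition):
select skcnamespace, skcrelname, skccoeff
from gp_toolkit.gp_skew_coefficients
order by skccoeff desc
limit 10;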
Another view, gp_toolkit.gp_skew_idle_fractions, calculates the percentage of the system that sits idle during a table scan, helping users quickly judge whether a poorly chosen distribution key is causing processing skew.
mydb=# select * from gp_toolkit.gp_skew_idle_fractions ;
The siffraction column is the percentage of the system that is idle during a table scan; for example, 0.1 indicates 10% skew.
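For example, to flag only tables above a 10% idle fraction (the 0.1 cutoff is illustrative, not a hard rule):
select * from gp_toolkit.gp_skew_idle_fractions
where siffraction > 0.1
order by siffraction desc;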
Skew can also be checked at the file level. The following PL/pgSQL function builds an external web table that lists the data files on every segment and a view, my_file_skew_view, that reports tables whose largest segment holds a disproportionate share of the data:
CREATE OR REPLACE FUNCTION my_func_for_files_skew() RETURNS void AS $$
DECLARE
    v_function_name text := 'my_create_func_for_files_skew';
    v_location_id int;
    v_sql text;
    v_db_oid text;
    v_number_segments numeric;
    v_skew_amount numeric;
BEGIN
    -- location markers make it easier to pinpoint where a failure happened --
    v_location_id := 1000;
    -- get the OID of the current database --
    SELECT oid INTO v_db_oid FROM pg_database WHERE datname = current_database();
    -- drop the file-skew view so it can be recreated --
    v_location_id := 2000;
    v_sql := 'DROP VIEW IF EXISTS my_file_skew_view';
    v_location_id := 2100;
    EXECUTE v_sql;
    -- drop the external table that holds the db file listing so it can be recreated --
    v_location_id := 2200;
    v_sql := 'DROP EXTERNAL TABLE IF EXISTS my_db_files_web_tbl';
    v_location_id := 2300;
    EXECUTE v_sql;
    -- collect segment_id, relfilenode, filename, size --
    v_location_id := 3000;
    v_sql := 'CREATE EXTERNAL WEB TABLE my_db_files_web_tbl ' ||
             '(segment_id int, relfilenode text, filename text, size numeric) ' ||
             'execute E''ls -l $GP_SEG_DATADIR/base/' || v_db_oid ||
             ' | grep gpadmin | ' ||
             E'awk {''''print ENVIRON["GP_SEGMENT_ID"] "\\t" $9 "\\t" ' ||
             'ENVIRON["GP_SEG_DATADIR"] "/' || v_db_oid ||
             E'/" $9 "\\t" $5''''}'' on all ' ||
             'format ''text''';
    v_location_id := 3100;
    EXECUTE v_sql;
    -- get the number of primary segments --
    v_location_id := 4000;
    SELECT count(*) INTO v_number_segments
      FROM gp_segment_configuration
     WHERE preferred_role = 'p' AND content >= 0;
    -- with 40 primary segments, v_skew_amount = 1.2 * 0.025 = 0.03 --
    v_location_id := 4100;
    v_skew_amount := 1.2 * (1 / v_number_segments);
    -- create the view that records file skew --
    v_location_id := 4200;
    v_sql := 'CREATE OR REPLACE VIEW my_file_skew_view AS ' ||
             'SELECT schema_name, ' ||
             'table_name, ' ||
             'max(size)/sum(size) as largest_segment_percentage, ' ||
             'sum(size) as total_size ' ||
             'FROM ( ' ||
             'SELECT n.nspname AS schema_name, ' ||
             ' c.relname AS table_name, ' ||
             ' sum(db.size) as size ' ||
             ' FROM my_db_files_web_tbl db ' ||
             ' JOIN pg_class c ON ' ||
             ' split_part(db.relfilenode, ''.'', 1) = c.relfilenode ' ||
             ' JOIN pg_namespace n ON c.relnamespace = n.oid ' ||
             ' WHERE c.relkind = ''r'' ' ||
             ' GROUP BY n.nspname, c.relname, db.segment_id ' ||
             ') as sub ' ||
             'GROUP BY schema_name, table_name ' ||
             -- only report tables whose skew exceeds the threshold --
             'HAVING sum(size) > 0 and max(size)/sum(size) > ' ||
             v_skew_amount::text || ' ' ||
             'ORDER BY largest_segment_percentage DESC, schema_name, ' ||
             'table_name';
    v_location_id := 4300;
    EXECUTE v_sql;
EXCEPTION
    WHEN OTHERS THEN
        RAISE EXCEPTION '(%:%:%)', v_function_name, v_location_id, sqlerrm;
END;
$$ language plpgsql;
Then execute the function to create the related objects:
select my_func_for_files_skew();
Now we can look at the skewed tables it identified:
select * from my_file_skew_view ;
We can also sort by the degree of skew:
select * from my_file_skew_view order by largest_segment_percentage desc;