How to clean up logs in ClickHouse's default system database (query_log, asynchronous_metric_log)


Problem

  • After ClickHouse had been running in a test environment for a while, the business data amounted to only about 2 GB (few front-end hardware devices are connected), yet roughly 20 GB of server disk space had disappeared.
  • A closer look showed that the tables in the default system database held a large amount of log data, close to 20 GB in total (a query for measuring this is sketched right after this list).
  • These logs mainly support performance analysis, troubleshooting, and query analysis, and ClickHouse does not delete them by default.
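To see exactly where the space is going, you can query the standard system.parts table. A minimal sketch; the column names are the standard ones and nothing here is specific to this deployment:

-- Disk usage of the log tables in the system database, largest first
SELECT
    table,
    formatReadableSize(sum(bytes_on_disk)) AS size_on_disk,
    sum(rows) AS total_rows
FROM system.parts
WHERE active AND database = 'system'
GROUP BY table
ORDER BY sum(bytes_on_disk) DESC;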

Solution

  • For these auxiliary logs it is enough to keep a fixed number of days of data.
  • The existing backlog can be removed with an ALTER TABLE ... DELETE statement (a mutation; a progress check is sketched after the statements below):
ALTER TABLE `system`.asynchronous_metric_log DELETE WHERE event_date IS NOT NULL;
ALTER TABLE `system`.metric_log DELETE WHERE event_date IS NOT NULL;
ALTER TABLE `system`.part_log DELETE WHERE event_date IS NOT NULL;
ALTER TABLE `system`.query_log DELETE WHERE event_date IS NOT NULL;
ALTER TABLE `system`.query_thread_log DELETE WHERE event_date IS NOT NULL;
ALTER TABLE `system`.session_log DELETE WHERE event_date IS NOT NULL;
ALTER TABLE `system`.trace_log DELETE WHERE event_date IS NOT NULL;
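ALTER ... DELETE runs as an asynchronous mutation, so the rows and the disk space may not disappear immediately. Progress can be checked through the standard system.mutations table; a minimal sketch:

-- Unfinished mutations on the system database; an empty result means the deletes are done
SELECT table, command, is_done
FROM system.mutations
WHERE database = 'system' AND is_done = 0;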
  • ClickHouse supports TTL settings. There are generally two approaches: alter the tables directly, or change the configuration file.
  • These system-database log tables are defined in the ClickHouse configuration file /etc/clickhouse-server/config.xml, so the retention can be set there; this is also the officially recommended approach.

Changing the configuration file

  • You can search the file by table name; my modified configuration is attached below for reference. After editing the file, restart clickhouse-server so the new settings take effect (a quick verification query is sketched after the configuration).
  • Field explanations:

database – database name
table – name of the system table the log is written to
partition_by – partition key of the system table; cannot be used if engine is defined
engine – table engine of the system table; cannot be used if partition_by is defined
flush_interval_milliseconds – interval for flushing data from the in-memory buffer to the table
ttl – retention period of the data

 <!-- Query log. Used only for queries with setting log_queries = 1. -->
    <query_log>
        <!-- What table to insert data. If table is not exist, it will be created.
             When query log structure is changed after system update,
              then old table will be renamed and new table will be created automatically.
        -->
        <database>system</database>
        <table>query_log</table>
        <!--
            PARTITION BY expr: https://clickhouse.com/docs/en/table_engines/mergetree-family/custom_partitioning_key/
            Example:
                event_date
                toMonday(event_date)
                toYYYYMM(event_date)
                toStartOfHour(event_time)
        -->
        <partition_by>toYYYYMM(event_date)</partition_by>
        <!--
            Table TTL specification: https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree/#mergetree-table-ttl
            Example:
                event_date + INTERVAL 1 WEEK
                event_date + INTERVAL 7 DAY DELETE
                event_date + INTERVAL 2 WEEK TO DISK 'bbb'
        -->
        <ttl>event_date + INTERVAL 7 DAY DELETE</ttl>

        <!-- Instead of partition_by, you can provide full engine expression (starting with ENGINE = ) with parameters,
             Example: <engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
          -->

        <!-- Interval of flushing data. -->
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_log>
    <asynchronous_metric_log>
        <database>system</database>
        <table>asynchronous_metric_log</table>
        <ttl>event_date + INTERVAL 7 DAY DELETE</ttl>
        <!--
            Asynchronous metrics are updated once a minute, so there is
            no need to flush more often.
        -->
        <flush_interval_milliseconds>7000</flush_interval_milliseconds>
    </asynchronous_metric_log>
    <opentelemetry_span_log>
        <!--
            The default table creation code is insufficient, this <engine> spec
            is a workaround. There is no 'event_time' for this log, but two times,
            start and finish. It is sorted by finish time, to avoid inserting
            data too far away in the past (probably we can sometimes insert a span
            that is seconds earlier than the last span in the table, due to a race
            between several spans inserted in parallel). This gives the spans a
            global order that we can use to e.g. retry insertion into some external
            system.
        -->
        <!-- Note: opentelemetry_span_log has no event_date column, so the TTL uses finish_date. -->
        <engine>
            engine MergeTree
            partition by toYYYYMM(finish_date)
            order by (finish_date, finish_time_us, trace_id)
            ttl finish_date + INTERVAL 10 DAY
        </engine>
        <database>system</database>
        <table>opentelemetry_span_log</table>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </opentelemetry_span_log>
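After restarting the server, it is worth confirming that the log tables actually carry the TTL; if an existing table was kept unchanged, you can still apply the ALTER statements from the next section. A minimal check for one table:

-- The TTL clause should appear in the printed table definition
SHOW CREATE TABLE system.query_log;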

Changing the table schema

  • You can also alter the tables directly and set a TTL; rows older than the configured period are then deleted automatically (a way to force the cleanup is sketched after the statements below):
-- Modify the table TTL: rows whose event_date is older than the interval are deleted automatically
ALTER TABLE `system`.asynchronous_metric_log MODIFY TTL event_date + toIntervalDay(10);
ALTER TABLE `system`.metric_log MODIFY TTL event_date + toIntervalDay(10);
ALTER TABLE `system`.part_log MODIFY TTL event_date + toIntervalDay(10);
ALTER TABLE `system`.query_log MODIFY TTL event_date + toIntervalDay(10);
ALTER TABLE `system`.query_thread_log MODIFY TTL event_date + toIntervalDay(10);
ALTER TABLE `system`.session_log MODIFY TTL event_date + toIntervalDay(10);
ALTER TABLE `system`.trace_log MODIFY TTL event_date + toIntervalDay(10);
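MODIFY TTL changes the table definition, but depending on the server version and the materialize_ttl_after_modify setting, rows that already violate the new TTL may only be cleaned up lazily during background merges. If needed, the re-evaluation can be forced explicitly; a sketch for one table (the same statement applies to the others):

-- Force ClickHouse to re-apply the TTL expression to existing parts
ALTER TABLE `system`.query_log MATERIALIZE TTL;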
  • By extension, the same approach works for your own business tables that need data expiry, based on a suitable time column (declaring the TTL at creation time is sketched after the statements below):
-- Modify the table TTL: rows whose time_stamp is older than the interval are deleted automatically
ALTER TABLE flow_stats MODIFY TTL time_stamp + toIntervalYear(5);
ALTER TABLE lane_status_in_phase MODIFY TTL time_stamp + toIntervalYear(3);
ALTER TABLE passing_vehicle MODIFY TTL time_stamp + toIntervalYear(3);
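For new business tables, the TTL can also be declared when the table is created instead of being added afterwards. A minimal sketch with a hypothetical table and columns (only the time_stamp name is taken from the examples above):

CREATE TABLE passing_vehicle_demo
(
    time_stamp DateTime,
    plate_no   String
)
ENGINE = MergeTree
ORDER BY time_stamp
TTL time_stamp + toIntervalYear(3);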