
Using iLogtail Community Edition in DaemonSet Mode to Collect K8s Container Logs

1. iLogtail

    iLogtail is an observability data collection agent developed by the Alibaba Cloud Log Service (SLS) team. It offers many production-grade features such as a lightweight footprint, high performance, and automated configuration, and can be deployed on physical machines, virtual machines, Kubernetes, and other environments to collect telemetry data. On Alibaba Cloud, iLogtail handles observability collection for the hosts and containers of tens of thousands of customers, and within Alibaba Group it is the default collector for logs, metrics, traces, and other observability data across core product lines such as Taobao, Tmall, Alipay, Cainiao, and Amap. iLogtail now has tens of millions of installations and collects tens of petabytes of observability data per day, serving scenarios such as online monitoring, troubleshooting, operational analysis, and security analysis; its performance and stability have been proven in production.

References:

https://ilogtail.gitbook.io/ilogtail-docs/plugins/flusher/flusher-kafka_v2

https://github.com/alibaba/ilogtail/blob/main/k8s_templates/ilogtail-daemonset-file-to-kafka.yaml

2. Performance comparison: iLogtail vs. Filebeat

     https://developer.aliyun.com/article/850614

3. Deploying iLogtail on K8s in DaemonSet mode

1) kubectl apply -f ilogtail-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ilogtail
2) kubectl apply -f ilogtail-user-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: ilogtail-user-cm
  namespace: ilogtail
data:
  ruoyi_log.yaml: |
    enable: true
    inputs:
      - Type: input_file
        FilePaths:
          - /data/logs/logs/*/info.log
        EnableContainerDiscovery: true  # dynamically discover pods in the K8s cluster
        ContainerFilters:
          K8sNamespaceRegex: default    # only collect from the default namespace
    flushers:
      - Type: flusher_kafka_v2
        Brokers:
          - 192.168.3.110:9092          # Kafka broker address
        Topic: test_%{tag.container.name}  # dynamic topic per container name
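Two fields in this config carry the dynamic behavior: the FilePaths glob picks up one info.log per service directory, and the %{tag.container.name} placeholder in Topic is expanded per container, so each container's logs land in their own Kafka topic. A minimal Python sketch of both behaviors (the resolve_topic helper and the sample container name are illustrative, not iLogtail's actual implementation):

```python
from fnmatch import fnmatch

# FilePaths glob from the ConfigMap: one wildcard segment per service directory.
LOG_GLOB = "/data/logs/logs/*/info.log"

# Dynamic topic pattern from the flusher_kafka_v2 config.
TOPIC_PATTERN = "test_%{tag.container.name}"

def resolve_topic(pattern: str, tags: dict) -> str:
    """Expand %{key} placeholders from a tag dict (illustrative helper)."""
    for key, value in tags.items():
        pattern = pattern.replace("%{" + key + "}", value)
    return pattern

# A log file under a per-service directory matches the glob.
print(fnmatch("/data/logs/logs/ruoyi-gateway/info.log", LOG_GLOB))  # True

# A container named ruoyi-gateway is routed to topic test_ruoyi-gateway,
# which is exactly the topic consumed in the Kafka test step below.
print(resolve_topic(TOPIC_PATTERN, {"tag.container.name": "ruoyi-gateway"}))  # test_ruoyi-gateway
```

This is why no topics need to be pre-created per service: each newly discovered container yields a new topic name at flush time.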

3) kubectl apply -f ilogtail-deployment.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ilogtail-ds
  namespace: ilogtail
  labels:
    k8s-app: logtail-ds
spec:
  selector:
    matchLabels:
      k8s-app: logtail-ds
  template:
    metadata:
      labels:
        k8s-app: logtail-ds
    spec:
      containers:
        - name: logtail
          env:
            - name: cpu_usage_limit
              value: "1"
            - name: mem_usage_limit
              value: "512"
          image: >-
            sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 400m
              memory: 384Mi
          volumeMounts:
            - mountPath: /var/run
              name: run
            - mountPath: /logtail_host
              mountPropagation: HostToContainer
              name: root
              readOnly: true
            - mountPath: /usr/local/ilogtail/checkpoint
              name: checkpoint
            - mountPath: /usr/local/ilogtail/config/local  # user configs must be mounted at this path
              name: user-config
              readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      volumes:
        - hostPath:
            path: /var/run
            type: Directory
          name: run
        - hostPath:
            path: /
            type: Directory
          name: root
        - hostPath:
            path: /lib/var/ilogtail-ilogtail-ds/checkpoint
            type: DirectoryOrCreate
          name: checkpoint
        - configMap:
            defaultMode: 420
            name: ilogtail-user-cm
          name: user-config

4. Installing Kafka

     Single-node Kafka 3.3.1 installation

1) Start ZooKeeper

bin/zookeeper-server-start.sh config/zookeeper.properties > /dev/null &

2) Start Kafka

Edit the configuration:

vim config/server.properties

broker.id=0
listeners=PLAINTEXT://192.168.3.110:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=127.0.0.1:2181

Then start Kafka:

bin/kafka-server-start.sh config/server.properties > /dev/null &
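Since the iLogtail DaemonSet pods push logs to this broker over the network, it helps to confirm the advertised listener is reachable from the K8s nodes before debugging the flusher. A stdlib-only TCP reachability sketch (the broker address comes from server.properties above; the port_open helper name is ours):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Listener address from config/server.properties.
    print("kafka reachable:", port_open("192.168.3.110", 9092))
```

If this prints False from a node, check that `listeners` binds a host-reachable IP (not 127.0.0.1) and that no firewall blocks 9092; the flusher will otherwise fail silently from the consumer's point of view.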

5. Installing kafka-eagle

Version: kafka-eagle-bin-2.0.4

vim /etc/profile

export KE_HOME=/data/servers/efak
export PATH=$PATH:$KE_HOME/bin

vim conf/system-config.properties

######################################
# multi zookeeper & kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.3.110:2181

######################################
# kafka offset storage
######################################
cluster1.kafka.eagle.offset.storage=kafka
#cluster2.kafka.eagle.offset.storage=zk

######################################
# kafka sqlite jdbc driver address
######################################
kafka.eagle.driver=org.sqlite.JDBC
kafka.eagle.url=jdbc:sqlite:/opt/kafka-eagle/db/ke.db
kafka.eagle.username=root
kafka.eagle.password=www.kafka-eagle.org

Start kafka-eagle:

bin/ke.sh start

Account: admin  Password: 123456

6. Testing Kafka messages

bin/kafka-console-consumer.sh --bootstrap-server 192.168.3.110:9092 --topic test_ruoyi-gateway --from-beginning

 
