Purpose: since there is currently no complete tutorial online for delivering Kafka data to Splunk, this article walks through the steps to get Kafka data into a Splunk logging system.
Implementation approach: run the Splunk sink connector (splunk-kafka-connector) on Kafka Connect; it consumes messages from a Kafka topic and forwards them to Splunk's HTTP Event Collector (HEC).
Test environment:
- The operating system used for testing is CentOS 7.5 x86_64
- This article covers two deployment options: standalone and containerized
- The standalone deployment uses one 4C8G host from Tencent Cloud CVM (ideally use three 2C4G hosts and deploy Kafka, the connector, and Splunk separately; since this is only a tutorial and the budget is limited, one host will do)
- JDK 8 or later is already installed on the host above
- The containerized deployment uses a Kubernetes cluster from Tencent Cloud TKE, which can spin up a k8s cluster with one click
- Splunk is commercial software; if your daily data volume is under 500 MB you can use the free Splunk license, but advanced features such as security and distributed deployment are not available
The deployment steps are as follows.
Deploy Splunk with Docker (single host):
yum install docker -y
systemctl start docker
# https://hub.docker.com/r/splunk/splunk/tags
docker pull splunk/splunk
docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=your-password" -p 8088:8088 --name splunk splunk/splunk:latest
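A quick sanity check that the container is up and the web UI answers on port 8000 (a minimal sketch; localhost assumes you run it on the Docker host, and Splunk takes a minute or two to finish starting):
docker ps | grep splunk
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000    # expect 200 once startup completes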
Or deploy Splunk on Kubernetes (containerized, e.g. on a TKE cluster) with the following manifest:
vi splunk-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: splunk-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk
  namespace: splunk-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: splunk
  template:
    metadata:
      labels:
        app: splunk
    spec:
      containers:
        - name: splunk
          image: splunk/splunk:latest
          ports:
            - containerPort: 8000
            - containerPort: 8088
          env:
            - name: SPLUNK_START_ARGS
              value: "--accept-license"
            - name: SPLUNK_PASSWORD
              value: "your-password"
          volumeMounts:
            - name: splunk-data
              mountPath: /opt/splunk/var
      volumes:
        - name: splunk-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: splunk
  namespace: splunk-ns
spec:
  selector:
    app: splunk
  ports:
    - name: http
      port: 8000
      targetPort: 8000
    - name: mgmt
      port: 8088
      targetPort: 8088
  type: LoadBalancer
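Apply the manifest and look up the service address (a minimal sketch; the namespace and names come from the manifest above):
kubectl apply -f splunk-deployment.yaml
kubectl -n splunk-ns get pods
kubectl -n splunk-ns get svc splunk    # note the LoadBalancer external IP for ports 8000/8088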
# Or install Splunk from the tarball on the host: extract it to /opt
tar -zxvf splunk-8.0.8-xxzx-Linux-x86_64.tgz -C /opt
cd /opt/splunk/bin/
./splunk start --accept-license   // start and automatically accept the license
./splunk start                    // start splunk
./splunk restart                  // restart splunk
./splunk status                   // show splunk status
./splunk version                  // show splunk version
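Optionally, register Splunk to start at boot (this is what the disable boot-start command in the uninstall section below undoes):
./splunk enable boot-start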
# Uninstall
./splunk disable boot-start   // disable start at boot
./splunk stop                 // stop splunk
rm -rf /opt/splunk            // remove the splunk installation directory
At this point, Splunk has been deployed successfully.
Configure an HTTP Event Collector (HEC) in Splunk:
a. Open the Splunk web UI and click [Settings] - [Data Inputs] in the upper right corner
b. Select HTTP Event Collector, click [Global Settings], enable tokens, set the HTTP port to 8088, and click [Save]
c. Click [New Token] in the upper right corner to create a new HTTP Event Collector token:
Enter the name splunk_kafka_connect_token and click [Next];
Create a new source type "splunk_kafka_data" and a new index "splunk_kafka_index", then click [Review];
Submit;
Afterwards, on the Settings - Data Inputs - HTTP Event Collector page you will see a token; record it, as the connector needs it later.
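To confirm the token works, you can send a test event straight to HEC (a sketch; 10.0.0.0 and b4594xxxxxx stand in for the Splunk host and the token recorded above, replace them with your own):
curl -k https://10.0.0.0:8088/services/collector/event \
  -H "Authorization: Splunk b4594xxxxxx" \
  -d '{"event": "hello from curl", "index": "splunk_kafka_index"}'
# Expected response: {"text":"Success","code":0}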
Deploy Kafka:
a. Install Java
yum install java -y
b. Download Kafka from https://kafka.apache.org/downloads; this article uses kafka_2.12-3.6.1 as an example
c. Extract the archive
tar -zxvf kafka_2.12-3.6.1.tgz
d. Start ZooKeeper
cd kafka_2.12-3.6.1/
./bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
e. Start Kafka
./bin/kafka-server-start.sh config/server.properties &
f. Create a topic, here called topic0
./bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic0
g. Use the console producer to send a few test messages
./bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic topic0
h. Consume the messages to confirm they arrive
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic topic0
At this point, Kafka is up and running.
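Next, install the Splunk sink connector for Kafka Connect. A minimal sketch, assuming the jar comes from the releases page of the splunk/kafka-connect-splunk GitHub project (the exact file name and version are assumptions; use the jar you actually download) and is placed in the plugin.path directory configured below:
# Download the connector jar from https://github.com/splunk/kafka-connect-splunk/releases
mkdir -p /usr/local/bin/
cp splunk-kafka-connect-v2.2.0.jar /usr/local/bin/    # jar name is an assumption; use your downloaded file
# Then edit the Kafka Connect worker config and set the properties shown below
vi kafka_2.12-3.6.1/config/connect-distributed.properties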
# Replace 10.0.0.0:19000 with your Kafka broker address (localhost:9092 for the single-host setup above)
bootstrap.servers=10.0.0.0:19000
group.id=test-splunk-kafka-connector
# Messages are assumed to be plain strings; if the format does not match, Splunk cannot parse the logs
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
# Replace with the address of the Connect host
rest.advertised.host.name=10.1.1.1
rest.advertised.port=8083
# Directory containing splunk-kafka-connector.jar
plugin.path=/usr/local/bin/
Start Kafka Connect in distributed mode:
cd kafka_2.12-3.6.1/
./bin/connect-distributed.sh config/connect-distributed.properties
# curl http://「connector ip」:8083/connector-plugins
curl http://10.1.1.1:8083/connector-plugins
If the response contains the following entry, the Splunk sink connector has been loaded: {"class":"com.splunk.kafka.connect.SplunkSinkConnector","type":"sink","version":"v2.2.0"}
Create the Splunk sink connector task (replace the HEC URI and token with your own):
curl 10.1.1.1:8083/connectors -X POST -H "Content-Type: application/json" -d '{
  "name": "splunk-kafka-connect-task",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "3",
    "topics": "topic0",
    "splunk.indexes": "splunk_kafka_index",
    "splunk.hec.uri": "https://10.0.0.0:8088",
    "splunk.hec.token": "b4594xxxxxx",
    "splunk.hec.ack.enabled": "false",
    "splunk.hec.raw": "false",
    "splunk.hec.json.event.enrichment": "org=fin,bu=south-east-us",
    "splunk.hec.ssl.validate.certs": "false",
    "splunk.hec.track.data": "true"
  }
}'
Expected response: the Kafka Connect REST API returns HTTP 201 with a JSON body echoing the connector name and config, indicating the sink task has been created.
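To verify the pipeline end to end, a quick check sketch (the connector name, topic, and index are the ones created above):
# Check that the sink task is running
curl http://10.1.1.1:8083/connectors/splunk-kafka-connect-task/status
# Produce a test message from the Kafka directory
echo "hello splunk" | ./bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic topic0
# Then search in the Splunk web UI with: index="splunk_kafka_index"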