A project at work uses Kafka, and this rookie (me) didn't know it, so I spent a week of evening self-study cramming the basics.
But the knowledge I crammed was scattered, so I'm using this weekend to write a blog post, sum up what I learned, and fill in the gaps.
Kafka was originally developed at LinkedIn. It is a distributed, partitioned, replicated message system coordinated by ZooKeeper. Its defining strength is processing large volumes of data in real time, which fits many scenarios: Hadoop-based batch systems, low-latency real-time systems, Storm/Spark streaming engines, web/nginx access logs, message services, and so on. It is written in Scala; LinkedIn open-sourced it in 2011, and it became a top-level Apache project in 2012.
Broker: a Kafka server, used to store messages; one or more servers in a Kafka cluster are collectively referred to as brokers.
Producer: the producer of messages, i.e. their source; it generates messages and sends them to the Kafka server.
Consumer: the consumer of messages, i.e. their user; it consumes messages from the Kafka server.
Group: a consumer group, used to group consumers of the same kind. In Kafka, multiple consumers can jointly consume the messages of one topic, each consuming a portion of them; these consumers form a group sharing a single group name, sometimes also called a consumer cluster. Each partition of a topic the group subscribes to is assigned to exactly one consumer within that group (the same partition can of course still be assigned to consumers in other groups).
Offset: messages are stored on Kafka brokers; while pulling message data, a consumer needs to know a message's position in the log file, and that position is the offset.
Topic: defined by the user and configured on the Kafka server, a topic establishes the subscription relationship between producers and consumers: producers send messages to a given topic, and consumers consume messages from that topic.
Partition: a topic is divided into many partitions. For example, the topic "kafka-test" might have 6 partitions served by two brokers, typically configured as 3 per broker; if the broker IDs are 0 and 1, the full set of partitions would be 0-0, 0-1, 0-2 and 1-0, 1-1, 1-2. Partitions are the physical grouping of a topic: each partition is an ordered queue, and every message within a partition is assigned a sequential id, its offset.
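To make the partition example concrete, here is a minimal sketch of creating such a topic programmatically with Kafka's AdminClient (the broker address matches the demo config used later in this post; the replication factor of 2 assumes the two-broker setup just described):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.75.128:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // "kafka-test" with 6 partitions and replication factor 2, as in the example above.
            NewTopic topic = new NewTopic("kafka-test", 6, (short) 2);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}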
Pros:
High throughput
Low latency
Distributed by design
Message-broker capabilities
High concurrency
Batch-processing capability (ETL-style workloads)
Real-time processing
Cons:
No complete set of monitoring tools
Issues with tweaking messages (modifying a message in flight degrades performance)
No support for wildcard topic selection
Lack of consistency in some behaviors
Performance degradation in certain scenarios
Can behave clumsily as the number of topics grows
Reference: https://blog.csdn.net/yunfeng482/article/details/72856762
The following assumes you already have your own Kafka instance installed and running.
Add the dependency and configure application.yml. Match the spring-kafka version to your Kafka broker version (if your project inherits the Spring Boot parent POM, the version can be omitted and will be managed for you).
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
kafka:
  bootstrap_servers_config: 192.168.75.128:9092
  retries_config: 0
  batch_size_config: 16384
  buffer_memory_config: 33554432
  topic: one,two,three
  group_id: asdf
  auto_offset_reset: earliest
  enable-auto-commit: true
  auto_commit_interval: 100
  consumeOneTopic: one
  consumeTwoTopic: two
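Note that these kafka.* keys are custom properties read below with @Value; they are not Spring Boot's built-in spring.kafka.* auto-configuration, so the key names are entirely up to you.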
Write the Kafka producer configuration class, KafkaProducerConfig.java:
package com.example.demokafka.config;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

import java.util.HashMap;
import java.util.Map;

/**
 * className: KafkaProducerConfig <br/>
 * packageName:com.example.demokafka.config <br/>
 * description: producer-side configuration <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-7 22:44 <br/>
 */
@Configuration
@EnableKafka
public class KafkaProducerConfig {

    @Value("${kafka.bootstrap_servers_config}")
    private String bootstrap_servers_config;

    @Value("${kafka.retries_config}")
    private String retries_config;

    @Value("${kafka.batch_size_config}")
    private String batch_size_config;

    @Value("${kafka.buffer_memory_config}")
    private String buffer_memory_config;

    @Bean
    public Map<String, Object> producerConfigs() {
        HashMap<String, Object> configs = new HashMap<>();
        // Producer settings
        configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers_config);
        configs.put(ProducerConfig.RETRIES_CONFIG, retries_config);
        configs.put(ProducerConfig.BATCH_SIZE_CONFIG, batch_size_config);
        configs.put(ProducerConfig.BUFFER_MEMORY_CONFIG, buffer_memory_config);
        configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return configs;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        // Note: do not enable auto flush here; flushing after every send is very inefficient.
        return new KafkaTemplate<>(producerFactory());
    }
}

(The consumer-side properties such as group_id and auto_offset_reset are not needed in this class; they belong in the consumer configuration below.)
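A side note, not part of the original project: KafkaTemplate.send() returns a future, so delivery can be confirmed or logged with a callback instead of fire-and-forget. A minimal sketch, assuming spring-kafka 2.x (where send() returns a ListenableFuture) and an @Slf4j logger in scope:

// Hypothetical usage, e.g. inside a method of KafkaSender below.
kafkaTemplate.send("one", "hello").addCallback(
        result -> log.info("sent to partition {}", result.getRecordMetadata().partition()),
        ex -> log.error("send failed", ex));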
Write the consumer configuration class, KafkaConsumerConfig.java:
package com.example.demokafka.config;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

import java.util.HashMap;
import java.util.Map;

/**
 * className: KafkaConsumerConfig <br/>
 * packageName:com.example.demokafka.config <br/>
 * description: consumer-side configuration <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-7 22:44 <br/>
 */
@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Value("${kafka.bootstrap_servers_config}")
    private String bootstrap_servers_config;

    @Value("${kafka.group_id}")
    private String groupId;

    @Value("${kafka.auto_offset_reset}")
    private String autoOffsetReset;

    @Value("${kafka.enable-auto-commit}")
    private String enableAutoCommit;

    @Value("${kafka.auto_commit_interval}")
    private String autoCommitInterval;

    @Bean
    public Map<String, Object> consumerConfigs() {
        HashMap<String, Object> configs = new HashMap<>();
        // Consumer settings (use ConsumerConfig here, not ProducerConfig as in the original post,
        // and wire in the properties that were read but never used)
        configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers_config);
        configs.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        configs.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
        configs.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitInterval);
        configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return configs;
    }

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setPollTimeout(1000);
        return factory;
    }

    @Bean
    public Consumer<String, String> consumer() {
        return consumerFactory().createConsumer();
    }

    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }
}
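One optional tweak, an assumption on my part rather than something in the original setup: since the factory is a ConcurrentKafkaListenerContainerFactory, you can raise its concurrency so that each @KafkaListener runs several consumer threads (only useful if the topic has at least that many partitions):

// Hypothetical addition inside kafkaListenerContainerFactory() above.
factory.setConcurrency(3);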
Write the Kafka message sender, KafkaSender.java:
package com.example.demokafka.producer;

import com.alibaba.fastjson.JSON;
import com.example.demokafka.entity.Message;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import java.util.UUID;
import java.util.concurrent.Executors;

/**
 * className: KafkaSender <br/>
 * packageName:com.example.demokafka.producer <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-7 22:45 <br/>
 */
@Component
@Slf4j
public class KafkaSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Sends one JSON-serialized Message per second to the given topic, on a background thread.
    public void send(String topic) {
        Executors.newSingleThreadExecutor().submit(() -> {
            while (true) {
                Message message = new Message();
                message.setId(System.currentTimeMillis());
                message.setMsg(UUID.randomUUID().toString());
                kafkaTemplate.send(topic, JSON.toJSONString(message));
                log.info("send success");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
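Be aware that every call to send() spins up a fresh single-thread executor that loops forever, producing one message per second. That is fine for a demo, but in real code you would reuse one executor and provide a way to stop the loop.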
Next, write the consumer-side code.
1. Create an abstract consumer base class, MsgConsumer; every concrete consumer must implement its abstract methods:
package com.example.demokafka.consumer.base;

import lombok.Data;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.stereotype.Component;

/**
 * className: MsgConsumer <br/>
 * packageName:com.example.demokafka.consumer.base <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-8 22:21 <br/>
 */
@Component
@Data
public abstract class MsgConsumer {

    public abstract String getTopic();

    public abstract void doWork(ConsumerRecord consumerRecord);

}
2. Write a consumer that extends it. ConsumeOne is shown here; ConsumeTwo, which the startup class uses later, is analogous and sketched after this block:
package com.example.demokafka.consumer;

import com.example.demokafka.consumer.base.MsgConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

/**
 * className: ConsumeOne <br/>
 * packageName:com.example.demokafka.consumer <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-25 13:38 <br/>
 */
@Component
public class ConsumeOne extends MsgConsumer {

    @Value("${kafka.consumeOneTopic}")
    private String topic;

    @Override
    public String getTopic() {
        return topic;
    }

    @Override
    public void doWork(ConsumerRecord consumerRecord) {
        System.out.println("it is one");
    }
}
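The original post never shows ConsumeTwo, although the startup class below injects it. Presumably it mirrors ConsumeOne with the other topic key from the YAML, something like:

package com.example.demokafka.consumer;

import com.example.demokafka.consumer.base.MsgConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Sketch of the second consumer, analogous to ConsumeOne; reads kafka.consumeTwoTopic ("two").
@Component
public class ConsumeTwo extends MsgConsumer {

    @Value("${kafka.consumeTwoTopic}")
    private String topic;

    @Override
    public String getTopic() {
        return topic;
    }

    @Override
    public void doWork(ConsumerRecord consumerRecord) {
        System.out.println("it is two");
    }
}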
3. Then create a consumer "group". The group holds a list of consumers; when a message is consumed, the group iterates the list and each consumer handles the messages of its own topic:
package com.example.demokafka.consumer.base;

import lombok.Data;
import org.springframework.stereotype.Component;

import java.util.ArrayList;
import java.util.List;

/**
 * className: ConsumerGroup <br/>
 * packageName:com.example.demokafka.consumer.base <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-25 13:35 <br/>
 */
@Component
@Data
public class ConsumerGroup {

    private List<MsgConsumer> consumerList = new ArrayList<>();

}
4. With that in place, create KafkaReceive.java to consume the messages:
package com.example.demokafka.consumer;

import com.example.demokafka.consumer.base.ConsumerGroup;
import com.example.demokafka.consumer.base.MsgConsumer;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import java.time.Duration;
import java.util.Arrays;
import java.util.concurrent.Executors;

/**
 * className: KafkaReceive <br/>
 * packageName:com.example.demokafka.consumer <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-8 22:14 <br/>
 */
@Component
public class KafkaReceive {

    @Autowired
    private Consumer<String, String> consumer;

    @Value("${kafka.topic}")
    private String topic;

    public void consumer(ConsumerGroup consumerGroup) {
        Executors.newSingleThreadExecutor().submit(() -> {
            try {
                // Subscribe to every topic listed in kafka.topic (comma-separated).
                consumer.subscribe(Arrays.asList(topic.split(",")));
                while (true) {
                    ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
                    for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                        for (MsgConsumer msgConsumer : consumerGroup.getConsumerList()) {
                            // Use equals rather than contains (as in the original): contains would
                            // also match substrings, e.g. a handler for "twenty-one" matching topic "one".
                            if (msgConsumer.getTopic().equals(consumerRecord.topic())) {
                                msgConsumer.doWork(consumerRecord);
                            }
                        }
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }
}
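Two things worth noting about this hand-rolled loop: there is only one Kafka Consumer instance, subscribed to all topics in kafka.topic, and the ConsumerGroup here is purely an application-level router that dispatches records by topic name. It is unrelated to the Kafka consumer-group concept (group_id) described at the start of this post.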
5. Finally, start the Kafka consuming task when the application boots:
package com.example.demokafka.start;

import com.example.demokafka.consumer.ConsumeOne;
import com.example.demokafka.consumer.ConsumeTwo;
import com.example.demokafka.consumer.KafkaReceive;
import com.example.demokafka.consumer.base.ConsumerGroup;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Configuration;

/**
 * className: KafkaTask <br/>
 * packageName:com.example.demokafka.start <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-25 19:10 <br/>
 */
@Configuration
public class KafkaTask implements CommandLineRunner {

    @Autowired
    private ConsumerGroup consumerGroup;

    @Autowired
    private KafkaReceive kafkaReceive;

    @Autowired
    private ConsumeOne consumeOne;

    @Autowired
    private ConsumeTwo consumeTwo;

    @Override
    public void run(String... args) throws Exception {
        startConsume();
    }

    /**
     * Start consuming from Kafka.
     */
    public void startConsume() {
        consumerGroup.getConsumerList().add(consumeOne);
        consumerGroup.getConsumerList().add(consumeTwo);
        kafkaReceive.consumer(consumerGroup);
    }

}
OK, that is basically it. Write an endpoint to test whether messages get produced and consumed:
package com.example.demokafka.controller;

import com.example.demokafka.producer.KafkaSender;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

/**
 * className: KafkaController <br/>
 * packageName:com.example.demokafka.controller <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-7 22:51 <br/>
 */
@RestController
public class KafkaController {

    @Autowired
    private KafkaSender kafkaSender;

    @GetMapping("/send")
    public String send(@RequestParam(name = "topic") String topic) {
        kafkaSender.send(topic);
        return "SUCCESS";
    }
}
Send an HTTP request from a browser: http://localhost:8080/send?topic=one
The console then prints the producer's "send success" log lines and ConsumeOne's "it is one" output. (The original screenshot is not reproduced here.)
Besides the hand-rolled consuming approach above, you can also consume messages with an annotation-driven listener, as in KafkaListen.java:
package com.example.demokafka.consumer;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.util.Optional;

/**
 * className: KafkaListen <br/>
 * packageName:com.example.demokafka.consumer <br/>
 * description: <br/>
 *
 * @author yuwen <br/>
 * @date: 2020-4-7 22:48 <br/>
 */
@Component
@Slf4j
public class KafkaListen {

    @KafkaListener(topics = {"yuwen"})
    public void listen(ConsumerRecord<?, ?> record) {

        Optional<?> kafkaMessage = Optional.ofNullable(record.value());

        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            log.info("----------------- record =" + record);
            log.info("------------------ message =" + message);
        }

    }

}
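The topic here is hardcoded to "yuwen". @KafkaListener also resolves property placeholders, so (reusing the kafka.consumeOneTopic key from the YAML above) the annotation could instead be written as:

@KafkaListener(topics = {"${kafka.consumeOneTopic}"})
public void listen(ConsumerRecord<?, ?> record) {
    // same body as above
}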
My first pass at learning Kafka has met its goal: I am now at the "can use it" stage. As for Kafka's internals, those call for much deeper study before I could blog about them without misleading myself and others, so I will pause here for now.
This post is a small product of that study, and it will make revisiting Kafka later much more convenient.
end