Each partition in Kafka consists of an ordered, immutable sequence of messages that are continually appended to it. Every message in a partition is assigned a sequential number that uniquely identifies it within that partition.
The offset records the sequence number of the next message to be delivered to a consumer.
The semantics of a streaming system are usually described in terms of how many times each record can be processed. There are three guarantees a system can provide under all operating conditions (including failures):

- At most once: each record is either processed once or not processed at all.
- At least once: each record is processed one or more times. This is stronger than at-most-once because it ensures no data is lost, but duplicates are possible.
- Exactly once: each record is processed exactly once; no data is lost and no data is processed multiple times. This is obviously the strongest guarantee of the three.
A common first choice for storing offsets is ZooKeeper: compared with HBase and similar stores it is lightweight, and because it runs as a highly available (HA) cluster, the offsets are kept more safely there.
Offset management generally comes down to two steps:
1. After the business logic for a batch has run, save that batch's end offsets to external storage.
2. When the application (re)starts, read the saved offsets and resume consuming from them.
Start a Kafka console producer, using the test topic tp_kafka:
./kafka-console-producer.sh --broker-list hadoop000:9092 --topic tp_kafka
Start a Kafka console consumer:
./kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic tp_kafka
Produce data from IDEA:
```java
package com.taipark.spark;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;
import java.util.UUID;

public class KafkaApp {

    public static void main(String[] args) {
        String topic = "tp_kafka";

        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("metadata.broker.list", "hadoop000:9092");
        props.put("request.required.acks", "1");
        props.put("partitioner.class", "kafka.producer.DefaultPartitioner");
        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));

        // send 100 messages with a random payload
        for (int index = 0; index < 100; index++) {
            KeyedMessage<String, String> message =
                    new KeyedMessage<>(topic, index + "", "taipark" + UUID.randomUUID());
            producer.send(message);
        }
        System.out.println("Data production finished");
    }
}
```

Spark Streaming connects to Kafka and counts the records:
```scala
package com.taipark.spark.offset

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Offset01App {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("Offset01App")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> "hadoop000:9092",
      "auto.offset.reset" -> "smallest"   // start from the earliest available offset
    )
    val topics = "tp_kafka".split(",").toSet
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // count the records in each batch
    messages.foreachRDD(rdd => {
      if (!rdd.isEmpty()) {
        println("Taipark" + rdd.count())
      }
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Produce another 100 Kafka records, and Spark Streaming receives them.
But if Spark Streaming is stopped and then restarted,
you will see that it starts counting from the beginning again, because auto.offset.reset is set to smallest in the code (the value used by Kafka versions before 0.10.1.x).
Create an /offset directory in HDFS:
hadoop fs -mkdir /offset
Use a checkpoint:
```scala
package com.taipark.spark.offset

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Duration, Seconds, StreamingContext}

object Offset01App {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("Offset01App")

    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> "hadoop000:9092",
      "auto.offset.reset" -> "smallest"
    )
    val topics = "tp_kafka".split(",").toSet
    val checkpointDirectory = "hdfs://hadoop000:8020/offset/"

    // build the StreamingContext only when no checkpoint exists yet
    def functionToCreateContext(): StreamingContext = {
      val ssc = new StreamingContext(sparkConf, Seconds(10))
      val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
        ssc, kafkaParams, topics)

      // enable checkpointing
      ssc.checkpoint(checkpointDirectory)
      messages.checkpoint(Duration(10 * 1000))

      messages.foreachRDD(rdd => {
        if (!rdd.isEmpty()) {
          println("Taipark" + rdd.count())
        }
      })

      ssc
    }

    // recover from the checkpoint if it exists, otherwise create a new context
    val ssc = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note: to change the HDFS user that IDEA runs as, add the following to VM options in the run configuration:
-DHADOOP_USER_NAME=hadoop
Start the job first: it consumes the 100 records produced earlier. Then stop it, produce another 100 records, and start it again:
this time it reads only the 100 records produced between the previous shutdown and this restart, rather than re-reading everything from the beginning as smallest alone would.
However, checkpoints have a drawback: if offsets are managed this way, the checkpoint becomes useless as soon as the business logic changes, because recovery goes through getOrCreate(), which deserializes the old DStream graph stored in the checkpoint and therefore cannot pick up recompiled application code.
The approach (manage offsets manually):
```scala
package com.taipark.spark.offset

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Offset01App {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("Offset01App")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> "hadoop000:9092",
      "auto.offset.reset" -> "smallest"
    )
    val topics = "tp_kafka".split(",").toSet

    // load the previously saved offsets from somewhere (external storage)
    val fromOffsets = Map[TopicAndPartition, Long]()

    val messages = if (fromOffsets.isEmpty) {
      // no saved offsets: consume from the beginning
      KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
        ssc, kafkaParams, topics)
    } else {
      // consume from the saved offsets
      val messageHandler = (mm: MessageAndMetadata[String, String]) => (mm.key(), mm.message())
      KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
        ssc, kafkaParams, fromOffsets, messageHandler)
    }

    messages.foreachRDD(rdd => {
      if (!rdd.isEmpty()) {
        // business logic
        println("Taipark" + rdd.count())

        // save the offsets to somewhere (external storage)
        val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
        offsetRanges.foreach(x => {
          // persist this information to the external store
          println(s"${x.topic} ${x.partition} ${x.fromOffset} ${x.untilOffset}")
        })
      }
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
```
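The "somewhere" that offsets are loaded from and saved to is left abstract above. One possible backing store is ZooKeeper, as suggested earlier. Below is a minimal sketch of such a store built on Apache Curator; the znode layout /offsets/&lt;topic&gt;/&lt;partition&gt;, the object name ZkOffsetStore and its methods are illustrative assumptions, not part of the original code.

```scala
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry
import kafka.common.TopicAndPartition
import org.apache.spark.streaming.kafka.OffsetRange

// Minimal sketch of a ZooKeeper-backed offset store using Apache Curator.
// The znode layout /offsets/<topic>/<partition> and all names are assumptions.
object ZkOffsetStore {
  private val client = CuratorFrameworkFactory.newClient(
    "hadoop000:2181", new ExponentialBackoffRetry(1000, 3))
  client.start()

  private def path(topic: String, partition: Int): String = s"/offsets/$topic/$partition"

  // save the untilOffset of every OffsetRange after the batch has succeeded
  def save(offsetRanges: Array[OffsetRange]): Unit = {
    offsetRanges.foreach { or =>
      val p = path(or.topic, or.partition)
      val data = or.untilOffset.toString.getBytes("UTF-8")
      if (client.checkExists().forPath(p) == null) {
        client.create().creatingParentsIfNeeded().forPath(p, data)
      } else {
        client.setData().forPath(p, data)
      }
    }
  }

  // load the saved offsets for one topic; returns an empty map on the first run
  def load(topic: String, partitions: Seq[Int]): Map[TopicAndPartition, Long] = {
    partitions.flatMap { partition =>
      val p = path(topic, partition)
      if (client.checkExists().forPath(p) == null) None
      else Some(TopicAndPartition(topic, partition) ->
        new String(client.getData().forPath(p), "UTF-8").toLong)
    }.toMap
  }
}
```

With such a helper, fromOffsets could be filled by ZkOffsetStore.load(...) at startup, and the offsetRanges.foreach block above could call ZkOffsetStore.save(offsetRanges) instead of only printing the ranges.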

Solution 1: make the processing idempotent
In programming, an idempotent operation is one whose effect is the same no matter how many times it is executed, so replaying a record under at-least-once delivery does no harm; see the sketch below.
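As a concrete illustration, a write becomes idempotent when it is keyed by something that uniquely identifies the record, for example (topic, partition, offset): a replayed record then overwrites the same entry instead of creating a duplicate. A minimal sketch (the object and method names are hypothetical):

```scala
import scala.collection.mutable

// Minimal sketch: an idempotent "write" keyed by (topic, partition, offset).
// Because the key uniquely identifies the record, processing it twice
// (as can happen under at-least-once delivery) leaves the store unchanged.
object IdempotentSink {
  private val store = mutable.Map[(String, Int, Long), String]()

  def write(topic: String, partition: Int, offset: Long, value: String): Unit = {
    // put() overwrites the previous value for the same key, so a replayed
    // record produces exactly the same state as the first attempt
    store.put((topic, partition, offset), value)
  }

  def size: Int = store.size
}

object IdempotentDemo extends App {
  IdempotentSink.write("tp_kafka", 0, 42L, "taipark")
  IdempotentSink.write("tp_kafka", 0, 42L, "taipark") // replay of the same record
  println(IdempotentSink.size)                        // prints 1, not 2
}
```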
Solution 2: use a transaction
1. A database transaction can consist of one or more database operations, but together they form one logical unit.
2. The operations that make up this logical unit either all execute successfully or none of them execute.
3. Either all of the operations in a transaction take effect on the database or none of them do, so the database always stays in a consistent state whether or not the transaction succeeds.
4. The above holds even in the presence of database failures and concurrent transactions.
Put the business logic and the offset save into one transaction, so that together they take effect exactly once (see the sketch below).
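A minimal JDBC sketch of this idea, assuming a MySQL-style database; the URL, credentials, and the batch_result / offsets tables are hypothetical. The batch result and the new offsets are written inside a single transaction, so either both are persisted or neither is.

```scala
import java.sql.DriverManager
import org.apache.spark.streaming.kafka.OffsetRange

// Minimal sketch: persist the batch result and the offsets in one transaction.
// The JDBC URL, credentials and table schemas are hypothetical.
object TransactionalSink {
  def saveBatch(batchCount: Long, offsetRanges: Array[OffsetRange]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:mysql://hadoop000:3306/streaming", "user", "password")
    try {
      conn.setAutoCommit(false)  // start a single transaction

      // 1. the business result
      val insertResult = conn.prepareStatement(
        "INSERT INTO batch_result(cnt) VALUES (?)")
      insertResult.setLong(1, batchCount)
      insertResult.executeUpdate()

      // 2. the offsets, upserted in the same transaction
      val upsertOffset = conn.prepareStatement(
        "REPLACE INTO offsets(topic, part, until_offset) VALUES (?, ?, ?)")
      offsetRanges.foreach { or =>
        upsertOffset.setString(1, or.topic)
        upsertOffset.setInt(2, or.partition)
        upsertOffset.setLong(3, or.untilOffset)
        upsertOffset.executeUpdate()
      }

      conn.commit()              // both writes become visible together
    } catch {
      case e: Exception =>
        conn.rollback()          // neither the result nor the offsets are saved
        throw e
    } finally {
      conn.close()
    }
  }
}
```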
The possible values of auto.offset.reset:

| Value | Behaviour |
| --- | --- |
| earliest | If a partition has a committed offset, consume from it; otherwise consume from the beginning |
| latest | If a partition has a committed offset, consume from it; otherwise consume only data newly produced to that partition |
| none | If every partition of the topic has a committed offset, consume from those offsets; if any partition lacks a committed offset, throw an exception |
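These values belong to the newer Kafka consumer API (the 0.8 direct stream used above still takes smallest/largest). For reference, a minimal sketch of the corresponding parameters for the spark-streaming-kafka-0-10 integration; the group id "tp_kafka_group" is a placeholder:

```scala
import org.apache.kafka.common.serialization.StringDeserializer

// Minimal sketch of consumer parameters for the 0.10+ Kafka integration,
// where auto.offset.reset takes earliest / latest / none.
// The group id "tp_kafka_group" is a placeholder.
object NewConsumerParams {
  val kafkaParams: Map[String, Object] = Map(
    "bootstrap.servers"  -> "hadoop000:9092",
    "key.deserializer"   -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id"           -> "tp_kafka_group",
    "auto.offset.reset"  -> "earliest",
    "enable.auto.commit" -> (false: java.lang.Boolean)  // offsets are managed manually
  )
}
```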