Go to the official site and follow the quick start:
http://kafka.apache.org/quickstart
It is simple enough that I won't repeat it here.
One thing to watch out for:
set the following property in server.properties:
advertised.host.name=<host IP>
Otherwise Kafka can only be reached from the broker's own machine and external clients will not be able to connect.
The comments in the config file explain this:
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
advertised.host.name=192.168.80.224
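Note that advertised.host.name is the older form of this setting; the commented-out advertised.listeners shown above is the equivalent on newer brokers. A minimal sketch, assuming the same broker IP (192.168.80.224) and the default PLAINTEXT listener on port 9092:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.80.224:9092
Here the listener binds on all interfaces, while the advertised address is the one clients are told to connect back to.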
Producer:
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.junit.Test;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

@Test
public void testProducer() throws ExecutionException, InterruptedException {
    Properties props = new Properties();
    props.put("bootstrap.servers", "192.168.80.224:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // create the Kafka producer client
    Producer<String, String> producer = new KafkaProducer<>(props);
    System.out.println("producer is created!");
    String topic = "test";
    // send() is asynchronous; get() blocks until the broker acknowledges each record
    producer.send(new ProducerRecord<String, String>(topic, "idea-key2", "java-message 1")).get();
    producer.send(new ProducerRecord<String, String>(topic, "idea-key2", "java-message 2")).get();
    producer.send(new ProducerRecord<String, String>(topic, "idea-key2", "java-message 3")).get();
    producer.close();
    System.out.println("producer is closed!");
}
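Calling get() after every send() blocks until the broker acknowledges each record, which is easy to reason about but slow. A minimal non-blocking sketch using the callback overload of send() (drop it into testProducer() above before producer.close(); the key and value here are just placeholders):
// asynchronous send with a delivery callback; the producer batches requests in the background
producer.send(new ProducerRecord<>(topic, "idea-key2", "async message"), (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // delivery failed
    } else {
        System.out.printf("acked: partition = %d, offset = %d%n",
                metadata.partition(), metadata.offset());
    }
});
producer.flush(); // block until all buffered records have been sent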
Consumer:
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.junit.Test;

import java.util.Arrays;
import java.util.Properties;

@Test
public void testConsumer() {
    String topic = "test";
    Properties props = new Properties();
    props.put("bootstrap.servers", "192.168.80.224:9092"); // host/port list used to bootstrap the connection to the Kafka cluster
    props.put("group.id", "testGroup1");                   // consumer group name
    props.put("enable.auto.commit", "true");               // commit offsets automatically
    props.put("auto.commit.interval.ms", "1000");          // auto-commit interval, in milliseconds
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    Consumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList(topic));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100); // wait up to 100 ms for new records
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n",
                    record.partition(), record.offset(), record.key(), record.value());
    }
}
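With enable.auto.commit=true the consumer commits offsets in the background, so a record can be marked as consumed even if processing it later fails. A minimal sketch of committing manually instead (same setup as testConsumer() above; only the auto-commit setting and the poll loop change):
// same Properties as in testConsumer(), except auto-commit is disabled
props.put("enable.auto.commit", "false"); // offsets will be committed explicitly
Consumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList(topic));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("partition = %d, offset = %d, value = %s%n",
                record.partition(), record.offset(), record.value());
    consumer.commitSync(); // commit only after the whole batch has been processed
}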
The code above is enough for a quick test; you can also use the console producer and consumer that ship with Kafka to watch the messages:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
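If the console consumer is started after the messages were produced, add --from-beginning to replay the topic from the earliest offset:
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning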