In Flink's official documentation, both sources and sinks are referred to as Flink Connectors.
Click Overview and you can see all of the available connectors.
From the picture above we can see that not every component works as both a source and a sink: some can only act as a source, some only as a sink, and some as both. Let's click Redis (sink).
Here we can see that Flink provides an interface for writing data into Redis. The sink can talk to Redis in three different deployment modes (a configuration sketch for each follows the list below):
1. Single Redis Server (standalone mode)
2. Redis Cluster (cluster mode)
3. Redis Sentinel (sentinel mode)
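As a reference, here is a minimal sketch of the configuration object for each of the three deployments, based on the connector's FlinkJedisPoolConfig, FlinkJedisClusterConfig, and FlinkJedisSentinelConfig classes; the hosts, ports, and master name below are placeholders:

// The config classes live in org.apache.flink.streaming.connectors.redis.common.config

// 1. Single Redis Server: one host and port
FlinkJedisPoolConfig poolConf = new FlinkJedisPoolConfig.Builder()
        .setHost("127.0.0.1").setPort(6379).build();

// 2. Redis Cluster: the set of cluster node addresses
FlinkJedisClusterConfig clusterConf = new FlinkJedisClusterConfig.Builder()
        .setNodes(new HashSet<InetSocketAddress>(Arrays.asList(new InetSocketAddress("127.0.0.1", 7000))))
        .build();

// 3. Redis Sentinel: the master name plus the sentinel addresses
FlinkJedisSentinelConfig sentinelConf = new FlinkJedisSentinelConfig.Builder()
        .setMasterName("mymaster")
        .setSentinels(new HashSet<String>(Arrays.asList("127.0.0.1:26379")))
        .build();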
Further down the page there are examples for each of these Redis deployments. Let's use them as a reference and write an example of our own.
Before writing any code we need to add the Redis connector dependency, which was also listed at the beginning of this article:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.7.2</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.7.2</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>1.7.2</version>
</dependency>
<dependency>
    <groupId>org.apache.bahir</groupId>
    <artifactId>flink-connector-redis_2.11</artifactId>
    <version>1.0</version>
</dependency>
The RedisSink mapper component
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;

public class MyRedisMapper implements RedisMapper<Tuple2<String, Integer>> {

    // Write every record with HSET into the hash named "HASH_NAME"
    public RedisCommandDescription getCommandDescription() {
        return new RedisCommandDescription(RedisCommand.HSET, "HASH_NAME");
    }

    // The word becomes the field of the hash entry
    public String getKeyFromData(Tuple2<String, Integer> data) {
        return data.f0;
    }

    // The count becomes the value of the hash entry
    public String getValueFromData(Tuple2<String, Integer> data) {
        return data.f1 + "";
    }
}
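With this mapper, every (word, count) tuple the sink receives is written as one field of the hash. For a tuple like ("hello", 2), the sink issues the equivalent of the following Redis command (HASH_NAME is the additional key chosen above):

HSET HASH_NAME hello 2

Because the command runs once per incoming record, a later update for the same word simply overwrites that field, so the hash always holds the latest count for each word.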
The job
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;
import org.apache.flink.util.Collector;

public class FlinkRedisSinkDemo {
    public static void main(String[] args) throws Exception {
        // Get the stream execution environment
        StreamExecutionEnvironment senv = StreamExecutionEnvironment.getExecutionEnvironment();
        // Get the data source: a socket text stream
        DataStream<String> source = senv.socketTextStream("192.168.112.111", 1234);
        // Split each line into words, emit (word, 1) and keep a running count per word
        DataStream<Tuple2<String, Integer>> data = source.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
            public void flatMap(String line, Collector<Tuple2<String, Integer>> collector) throws Exception {
                String[] words = line.split(" ");
                for (String word : words) {
                    collector.collect(new Tuple2<String, Integer>(word, 1));
                }
            }
        }).keyBy(0).sum(1);
        // Configure the connection to a single Redis server and add the sink
        FlinkJedisPoolConfig conf = new FlinkJedisPoolConfig.Builder().setHost("192.168.112.111").setPort(6379).build();
        data.addSink(new RedisSink<Tuple2<String, Integer>>(conf, new MyRedisMapper()));
        // Execute the streaming job
        senv.execute("FlinkRedisSinkDemo");
    }
}
Then we start the Redis service and open a socket port on the Linux machine for the Flink job to connect to:
nc -l -p 1234
After that we start the job. Once it is running we can send data to Flink through the socket.
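For example, you might type the following lines into the nc session (the words are arbitrary test input):

hello flink
hello redis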
Then I open the Redis client to look at the data in this hash.
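A quick check with redis-cli might look roughly like this, assuming the example input above (the exact fields and counts depend on what you typed):

redis-cli -h 192.168.112.111
192.168.112.111:6379> HGETALL HASH_NAME
1) "hello"
2) "2"
3) "flink"
4) "1"
5) "redis"
6) "1"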