Flink Stream API in Practice


Table of Contents

Basic Structure of a Flink Program

Obtaining an Execution Environment (environment)

Loading/Initializing Data (source)

File-based

Socket-based

Collection-based

Custom

Transformations (transformation)

Basic Transformations

Physical Partitioning

Task Chaining and Resource Groups

Names and Descriptions

Specifying Where Results Go (sink)

Triggering Program Execution (execution)


 

Basic Structure of a Flink Program

A Flink program consists of the following parts:

1. Obtain an execution environment (environment)
2. Load/create the initial data (source)
3. Specify transformations on this data (transformation)
4. Specify where to put the computed results (sink)
5. Trigger the program execution (execution)

Obtaining an Execution Environment (environment)

There are three ways to obtain a stream execution environment:

1. The execution environment appropriate to the current context:
StreamExecutionEnvironment.getExecutionEnvironment();


2. A local execution environment:
StreamExecutionEnvironment.createLocalEnvironment();


3. A remote execution environment:
createRemoteEnvironment(String host, int port, String... jarFiles);

For example:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

LocalStreamEnvironment localEnvironment = StreamExecutionEnvironment.createLocalEnvironment();

StreamExecutionEnvironment remoteEnvironment = StreamExecutionEnvironment.createRemoteEnvironment("node1", 8081,"path/to/jarfile");

In most cases you can simply use getExecutionEnvironment(), because it selects the appropriate environment based on the runtime context: if the program is executed inside an IDE, it returns a local execution environment; if the program is packaged into a jar file and submitted to a Flink cluster from the command line, it returns the cluster's execution environment.


Loading/Initializing Data (source)

File-based

  • readTextFile(path)
  • readFile(fileInputFormat, path)
  • readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo)

For example:

DataStreamSource<String> lineDs = env.readTextFile("data/words.txt");

Socket-based

socketTextStream(hostname, port)

For example:

DataStreamSource<String> lineStream = env.socketTextStream("node2", 7777);

Collection-based

  • fromCollection(Collection)
  • fromCollection(Iterator, Class)
  • fromElements(T ...)
  • fromParallelCollection(SplittableIterator, Class)
  • generateSequence(from, to)

For example:

DataStreamSource<Integer> source = env.fromCollection(Arrays.asList(0, 1, 2));
DataStream<Integer> dataStream = env.fromElements(1, 0, 3, 0, 5);
DataStreamSource<Long> source1 = env.generateSequence(1, 10);

Custom

  • The legacy way: addSource(SourceFunction<OUT> function)

For example, reading data from Kafka:

env.addSource(new FlinkKafkaConsumer<>(...))

  • The new way: fromSource(Source<OUT, ?, ?> source, WatermarkStrategy<OUT> timestampsAndWatermarks, String sourceName)

For example, reading data from Kafka:

env.fromSource(
        kafkaSource, 
        WatermarkStrategy.noWatermarks(), 
        "kafkasource")

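For reference, a minimal runnable sketch of building the kafkaSource object used above with the KafkaSource builder. This assumes the flink-connector-kafka dependency; the broker address, topic, group id, and class name are placeholders for illustration:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // broker address, topic, and group id below are placeholder values
        KafkaSource<String> kafkaSource = KafkaSource.<String>builder()
                .setBootstrapServers("node1:9092")
                .setTopics("topic1")
                .setGroupId("my-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
        // attach the source with no watermarks, as in the snippet above
        env.fromSource(kafkaSource, WatermarkStrategy.noWatermarks(), "kafkasource")
                .print();
        env.execute();
    }
}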

Transformations (transformation)

Transformations: operators that turn one or more data streams into a new data stream.

Programs can combine multiple transformations into sophisticated dataflow topologies.

This section describes the basic transformations, the effective physical partitioning after applying them, and insights into Flink's operator chaining.

Basic Transformations

  • Map
  • FlatMap
  • Filter
  • KeyBy
  • Reduce
  • Window
  • WindowAll
  • Window Apply
  • WindowReduce
  • Union
  • Window Join
  • Interval Join
  • Window CoGroup
  • Connect
  • CoMap, CoFlatMap
  • Cache

The examples below build on the Flink WordCount project; place the example code in the org.example.transformations package, or another package of your choosing.

Map

DataStream → DataStream

Transforms each element of the stream and produces one element in a new stream.

DataStream<Integer> dataStream = //...
dataStream.map(new MapFunction<Integer, Integer>() {
    @Override
    public Integer map(Integer value) throws Exception {
        return 2 * value;
    }
});

The following complete program multiplies each element of the stream by 2 and prints the resulting stream:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

/*
Takes one element and produces one element.
A map function that doubles the values of the input stream:
*/
public class OperatorMap {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source
        DataStream<Integer> dataStream = env.fromElements(1, 2, 3, 4, 5);
        // transformations
        SingleOutputStreamOperator<Integer> data = dataStream.map(new MapFunction<Integer, Integer>() {
            @Override
            public Integer map(Integer value) throws Exception {
                return 2 * value;
            }
        });
        // sink
        data.print();
        // execute
        env.execute();
    }
}

Output:

2> 10
7> 4
6> 2
8> 6
1> 8

FlatMap

DataStream → DataStream

Takes one element from the stream and produces zero, one, or more elements.

dataStream.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public void flatMap(String value, Collector<String> out)
        throws Exception {
        for(String word: value.split(" ")){
            out.collect(word);
        }
    }
});

The following complete program splits sentences into words:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

/*
Takes one element and produces zero, one, or more elements.
A flatmap function that splits sentences to words:
*/
public class OperatorFlatMap {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source
        DataStream<String> dataStream = env.fromElements("hello world", "hello flink", "hello hadoop");
        // transformations
        SingleOutputStreamOperator<String> data = dataStream.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) throws Exception {
                for (String word : value.split(" ")) {
                    out.collect(word);
                }
            }
        });
        // sink
        data.print();
        // execute
        env.execute();
    }
}

Output:

5> hello
7> hello
6> hello
7> hadoop
5> world
6> flink

Filter

DataStream → DataStream

Evaluates a boolean function for each element and retains those for which the function returns true.

dataStream.filter(new FilterFunction<Integer>() {
    @Override
    public boolean filter(Integer value) throws Exception {
        return value != 0;
    }
});

The following complete program keeps only the non-zero elements:

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

/*
Evaluates a boolean function for each element and retains those for which the function returns true.
A filter that filters out zero values:
*/
public class OperatorFilter {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source
        DataStream<Integer> dataStream = env.fromElements(1, 0, 3, 0, 5);
        // transformations
        SingleOutputStreamOperator<Integer> data = dataStream.filter(new FilterFunction<Integer>() {
            @Override
            public boolean filter(Integer value) throws Exception {
                return value != 0;
            }
        });
        // sink
        data.print();
        // execute
        env.execute();
    }
}

Output:

8> 5
6> 3
4> 1

KeyBy

DataStream → KeyedStream

Logically partitions a stream into disjoint partitions. All records with the same key are assigned to the same partition. Internally, keyBy() is implemented with hash partitioning.

dataStream.keyBy(value -> value.getSomeKey());
dataStream.keyBy(value -> value.f0);

The following complete program groups elements by key and sums the values:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Arrays;
import java.util.List;

/*
Logically partitions a stream into disjoint partitions.
All records with the same key are assigned to the same partition. Internally, keyBy() is implemented with hash partitioning.
There are different ways to specify keys.
*/
public class OperatorKeyBy {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source
        List<Tuple2<String, Integer>> dataSource = Arrays.asList(
                Tuple2.of("hello", 3),
                Tuple2.of("flink", 2),
                Tuple2.of("hadoop", 4),
                Tuple2.of("flink", 5));
        DataStreamSource<Tuple2<String, Integer>> dataStream = env.fromCollection(dataSource);
        // transformations
        SingleOutputStreamOperator<Tuple2<String, Integer>> data = dataStream.keyBy(value -> value.f0).sum(1);
        // sink
        data.print();
        // execute
        env.execute();
    }
}

Output:

3> (hello,3)
7> (flink,2)
8> (hadoop,4)
7> (flink,7)

Reduce

KeyedStream → DataStream

A "rolling" reduce on a keyed data stream: combines the current element with the last reduced value and emits the new value.

keyedStream.reduce(new ReduceFunction<Integer>() {
    @Override
    public Integer reduce(Integer value1, Integer value2)
    throws Exception {
        return value1 + value2;
    }
});

The following complete program reduces values that share the same key, in this case by summing them:

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Arrays;
import java.util.List;

/*
KeyedStream → DataStream
A "rolling" reduce on a keyed data stream.
Combines the current element with the last reduced value and emits the new value.
A reduce function that creates a stream of partial sums:
*/
public class OperatorReduce {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source
        List<Tuple2<String, Integer>> dataSource = Arrays.asList(
                Tuple2.of("hello", 3),
                Tuple2.of("flink", 2),
                Tuple2.of("hadoop", 3),
                Tuple2.of("flink", 5),
                Tuple2.of("hello", 1),
                Tuple2.of("hadoop", 1));
        DataStreamSource<Tuple2<String, Integer>> dataStream = env.fromCollection(dataSource);
        // transformations
        KeyedStream<Tuple2<String, Integer>, String> keyedStream = dataStream.keyBy(value -> value.f0);
        SingleOutputStreamOperator<Tuple2<String, Integer>> data = keyedStream.reduce(new ReduceFunction<Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
                return Tuple2.of(value1.f0, value1.f1 + value2.f1);
            }
        });
        // sink
        data.print();
        // execute
        env.execute();
    }
}

Output:

7> (flink,2)
8> (hadoop,3)
3> (hello,3)
7> (flink,7)
8> (hadoop,4)
3> (hello,4) 

Window

KeyedStream → WindowedStream

Windows can be defined on already-partitioned KeyedStreams. A window groups the data of each key according to some characteristic (for example, the data that arrived within the last 10 seconds).

dataStream
  .keyBy(value -> value.f0)
  .window(TumblingEventTimeWindows.of(Time.seconds(10))); 

Computations can then be applied to the windowed data. For example, the following program counts how often each word occurs within a 10-second tumbling window:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class WindowWordCount {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source and transformations
        SingleOutputStreamOperator<Tuple2<String, Integer>> dataStream = env
                .socketTextStream("node1", 7777)
                .flatMap(new Splitter())
                .keyBy(value -> value.f0)
                .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
                .sum(1);
        // sink
        dataStream.print();
        // execution
        env.execute("Window WordCount");
    }

    public static class Splitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String sentence, Collector<Tuple2<String, Integer>> out) throws Exception {
            for (String word : sentence.split(" ")) {
                out.collect(new Tuple2<String, Integer>(word, 1));
            }
        }
    }
}

Start an nc listener:

[hadoop@node1 ~]$ nc -lk 7777

Run the Flink program.

Send test data:

[hadoop@node1 ~]$ nc -lk 7777
hello world
hello flink
hello hadoop
hello java
hello

Output:

5> (world,1)
8> (hadoop,1)
3> (hello,3)
7> (flink,1)
2> (java,1)
3> (hello,1)
3> (hello,1)

Note: the speed at which the input is typed determines which window each record lands in, so the computed results may differ.

WindowAll

DataStream → AllWindowedStream

Windows can be defined on regular DataStreams; they group all stream events according to some characteristic (for example, the data that arrived within the last 10 seconds).

dataStream
  .windowAll(TumblingEventTimeWindows.of(Time.seconds(10)));

Note: windowAll is in many cases a non-parallel transformation. All records are gathered into a single task for computation, so large data volumes can cause OOM problems.

The following complete program reduces all the data in each window, in this case concatenating the words with commas:

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

/*
Windows can be defined on regular DataStreams.
Windows group all the stream events according to some characteristic.
This is in many cases a non-parallel transformation.
All records will be gathered in one task for the windowAll operator.
*/
public class OperatorWindowAll {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source and transformations
        SingleOutputStreamOperator<String> result = env
                .socketTextStream("node1", 7777)
                .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(10)))
                .reduce(new ReduceFunction<String>() {
                    @Override
                    public String reduce(String value1, String value2) throws Exception {
                        return value1 + "," + value2;
                    }
                });
        // sink
        result.print();
        // execute
        env.execute();
    }
}

Test data:

[hadoop@node1 ~]$ nc -lk 7777
hello      
world
hadoop
flink
hello

Output:

4> hello,world
5> hadoop,flink,hello

Window Apply

WindowedStream → DataStream
AllWindowedStream → DataStream

Applies a general function to the window as a whole.

Note: if you use the windowAll transformation, you need to use an AllWindowFunction instead.

windowedStream.apply(new WindowFunction<Tuple2<String, Integer>, Integer, Tuple, Window>() {
    public void apply(Tuple tuple,
            Window window,
            Iterable<Tuple2<String, Integer>> values,
            Collector<Integer> out) throws Exception {
        int sum = 0;
        for (Tuple2<String, Integer> t : values) {
            sum += t.f1;
        }
        out.collect(sum);
    }
});

// applying an AllWindowFunction on non-keyed window stream
allWindowedStream.apply(new AllWindowFunction<Tuple2<String, Integer>, Integer, Window>() {
    public void apply(Window window,
            Iterable<Tuple2<String, Integer>> values,
            Collector<Integer> out) throws Exception {
        int sum = 0;
        for (Tuple2<String, Integer> t : values) {
            sum += t.f1;
        }
        out.collect(sum);
    }
});

The following complete program sums the elements within each window that share the same key:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.datastream.WindowedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class OperatorWindowApply {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source and transformations
        DataStreamSource<String> dataStream = env.socketTextStream("node1", 7777);
        WindowedStream<Tuple2<String, Integer>, String, TimeWindow> windowedStream = dataStream
                .flatMap(new Splitter())
                .keyBy(value -> value.f0)
                .window(TumblingProcessingTimeWindows.of(Time.seconds(10)));
        SingleOutputStreamOperator<Integer> applyStream = windowedStream.apply(
                new WindowFunction<Tuple2<String, Integer>, Integer, String, TimeWindow>() {
                    @Override
                    public void apply(String s, TimeWindow window, Iterable<Tuple2<String, Integer>> values, Collector<Integer> out) throws Exception {
                        int sum = 0;
                        for (Tuple2<String, Integer> value : values) {
                            sum += value.f1;
                        }
                        out.collect(sum);
                    }
                }
        );
        // sink
        applyStream.print();
        // execute
        env.execute();
    }

    public static class Splitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String sentence, Collector<Tuple2<String, Integer>> out) throws Exception {
            for (String word : sentence.split(" ")) {
                out.collect(new Tuple2<String, Integer>(word, 1));
            }
        }
    }
}

Send test data:

[hadoop@node1 ~]$ nc -lk 7777
hello world
hello hadoop
hello flink
flink

Output:

5> 1
3> 1
3> 2
7> 2
8> 1

Note: typing speed affects which window each record lands in, so the results may differ.

Interpreting the result:

The first line, hello world, fell into one window; each word appeared once, so 1 and 1 are output.

The second, third, and fourth lines fell into the same window; hello appeared twice, flink twice, and hadoop once, so 2, 2, and 1 are output.

WindowReduce

WindowedStream → DataStream 

Applies a functional reduce function to the window and returns the reduced value.

windowedStream.reduce (new ReduceFunction<Tuple2<String,Integer>>() {
    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
        return new Tuple2<String,Integer>(value1.f0, value1.f1 + value2.f1);
    }
});

The following complete program implements a word count using reduce:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.datastream.WindowedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class OperatorWindowReduce {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source and transformations
        DataStreamSource<String> dataStream = env.socketTextStream("node1", 7777);
        WindowedStream<Tuple2<String, Integer>, String, TimeWindow> windowedStream = dataStream
                .flatMap(new Splitter())
                .keyBy(value -> value.f0)
                .window(TumblingProcessingTimeWindows.of(Time.seconds(10)));
        SingleOutputStreamOperator<Tuple2<String, Integer>> result = windowedStream.reduce(new ReduceFunction<Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
                return new Tuple2<String, Integer>(value1.f0, value1.f1 + value2.f1);
            }
        });
        // sink
        result.print();
        // execute
        env.execute();
    }

    public static class Splitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String sentence, Collector<Tuple2<String, Integer>> out) throws Exception {
            for (String word : sentence.split(" ")) {
                out.collect(new Tuple2<String, Integer>(word, 1));
            }
        }
    }
}

Test data:

[hadoop@node1 ~]$ nc -lk 7777
hello hello world
hello flink
flink flink
hadoop hadoop
hello

Output:

5> (world,1)
3> (hello,2)
7> (flink,3)
3> (hello,1)
8> (hadoop,2)
3> (hello,1)

Union

DataStream* → DataStream

A union of two or more data streams of the same type creates a new stream containing all the elements from all of the streams.

dataStream.union(otherStream1, otherStream2, ...);

The following complete program unions two data streams of the same type:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

/*
Union of two or more data streams creating a new stream containing all the elements from all the streams.
Note: If you union a data stream with itself you will get each element twice in the resulting stream.
*/
public class OperatorUnion {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // source
        DataStream<Integer> dataStream1 = env.fromElements(1, 2, 3);
        DataStream<Integer> dataStream2 = env.fromElements(4, 5, 6);
        // transformations
        DataStream<Integer> res = dataStream1.union(dataStream2);
        // sink
        res.print();
        // execute
        env.execute();
    }
}

Output:

1
2
3
4
5
6

Window Join

DataStream,DataStream → DataStream

Joins two data streams on a given key and a common window.

dataStream.join(otherStream)
    .where(<key selector>).equalTo(<key selector>)
    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
    .apply (new JoinFunction () {...});

The following complete program performs a window join of two data streams:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class OperatorWindowJoin {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        SingleOutputStreamOperator<Tuple2<String, Integer>> ds1 = env
                .fromElements(
                        Tuple2.of("a", 1),
                        Tuple2.of("a", 2),
                        Tuple2.of("b", 3),
                        Tuple2.of("c", 4),
                        Tuple2.of("c", 12)
                )
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy
                                .<Tuple2<String, Integer>>forMonotonousTimestamps()
                                .withTimestampAssigner((value, ts) -> value.f1 * 1000L)
                );
        SingleOutputStreamOperator<Tuple3<String, Integer, Integer>> ds2 = env
                .fromElements(
                        Tuple3.of("a", 1, 1),
                        Tuple3.of("a", 11, 1),
                        Tuple3.of("b", 2, 1),
                        Tuple3.of("b", 12, 1),
                        Tuple3.of("c", 14, 1),
                        Tuple3.of("d", 15, 1)
                )
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy
                                .<Tuple3<String, Integer, Integer>>forMonotonousTimestamps()
                                .withTimestampAssigner((value, ts) -> value.f1 * 1000L)
                );
        DataStream<String> join = ds1.join(ds2)
                .where(r1 -> r1.f0)
                .equalTo(r2 -> r2.f0)
                .window(TumblingEventTimeWindows.of(Time.seconds(10)))
                .apply(new JoinFunction<Tuple2<String, Integer>, Tuple3<String, Integer, Integer>, String>() {
                    @Override
                    public String join(Tuple2<String, Integer> first, Tuple3<String, Integer, Integer> second) throws Exception {
                        return first + "<----->" + second;
                    }
                });
        join.print();
        env.execute();
    }
}

Output:

(a,1)<----->(a,1,1)
(a,2)<----->(a,1,1)
(b,3)<----->(b,2,1)
(c,12)<----->(c,14,1)

Interval Join

KeyedStream,KeyedStream → DataStream

Joins two elements e1 and e2 of two KeyedStreams with a common key over a given time interval, so that e1.timestamp + lowerBound <= e2.timestamp <= e1.timestamp + upperBound.

keyedStream.intervalJoin(otherKeyedStream)
    .between(Time.milliseconds(-2), Time.milliseconds(2)) // lower bound, upper bound
    .upperBoundExclusive(true) // optional
    .lowerBoundExclusive(true) // optional
    .process(new IntervalJoinFunction() {...});

The following complete program performs an interval join:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class OperatorIntervalJoin {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        SingleOutputStreamOperator<Tuple2<String, Integer>> ds1 = env
                .fromElements(
                        Tuple2.of("a", 1),
                        Tuple2.of("a", 2),
                        Tuple2.of("b", 3),
                        Tuple2.of("c", 4)
                )
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy
                                .<Tuple2<String, Integer>>forMonotonousTimestamps()
                                .withTimestampAssigner((value, ts) -> value.f1 * 1000L)
                );
        SingleOutputStreamOperator<Tuple3<String, Integer, Integer>> ds2 = env
                .fromElements(
                        Tuple3.of("a", 1, 1),
                        Tuple3.of("a", 11, 1),
                        Tuple3.of("b", 2, 1),
                        Tuple3.of("b", 12, 1),
                        Tuple3.of("c", 14, 1),
                        Tuple3.of("d", 15, 1)
                )
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy
                                .<Tuple3<String, Integer, Integer>>forMonotonousTimestamps()
                                .withTimestampAssigner((value, ts) -> value.f1 * 1000L)
                );
        KeyedStream<Tuple2<String, Integer>, String> ks1 = ds1.keyBy(r1 -> r1.f0);
        KeyedStream<Tuple3<String, Integer, Integer>, String> ks2 = ds2.keyBy(r2 -> r2.f0);
        // perform the interval join
        ks1.intervalJoin(ks2)
                // the join time interval
                .between(Time.seconds(-2), Time.seconds(2))
                .process(
                        new ProcessJoinFunction<Tuple2<String, Integer>, Tuple3<String, Integer, Integer>, String>() {
                            @Override
                            public void processElement(Tuple2<String, Integer> left, Tuple3<String, Integer, Integer> right, Context ctx, Collector<String> out) throws Exception {
                                out.collect(left + "<------>" + right);
                            }
                        })
                .print();
        env.execute();
    }
}

Output:

(a,1)<------>(a,1,1)
(a,2)<------>(a,1,1)
(b,3)<------>(b,2,1)

Window CoGroup

DataStream,DataStream → DataStream

Cogroups two data streams on a given key and a common window.

dataStream.coGroup(otherStream)
    .where(0).equalTo(1)
    .window(TumblingEventTimeWindows.of(Time.seconds(3)))
    .apply (new CoGroupFunction () {...});

The following complete program cogroups two data streams:

import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class OperatorWindowCoGroup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> socketSource1 = env.socketTextStream("node1", 7777);
        DataStream<String> socketSource2 = env.socketTextStream("node1", 8888);
        DataStream<Tuple2<String, Integer>> input1 = socketSource1.map(
                line -> {
                    String[] arr = line.split(" ");
                    String id = arr[0];
                    int t = Integer.parseInt(arr[1]);
                    return Tuple2.of(id, t);
                })
                .returns(Types.TUPLE(Types.STRING, Types.INT));
        DataStream<Tuple2<String, Integer>> input2 = socketSource2.map(
                line -> {
                    String[] arr = line.split(" ");
                    String id = arr[0];
                    int t = Integer.parseInt(arr[1]);
                    return Tuple2.of(id, t);
                })
                .returns(Types.TUPLE(Types.STRING, Types.INT));
        DataStream<String> coGroupResult = input1.coGroup(input2)
                .where(i1 -> i1.f0)
                .equalTo(i2 -> i2.f0)
                .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
                .apply(new MyCoGroupFunction());
        coGroupResult.print();
        env.execute("window cogroup function");
    }

    public static class MyCoGroupFunction implements CoGroupFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String> {
        @Override
        public void coGroup(Iterable<Tuple2<String, Integer>> input1, Iterable<Tuple2<String, Integer>> input2, Collector<String> out) {
            input1.forEach(element -> System.out.println("input1 :" + element.f1));
            input2.forEach(element -> System.out.println("input2 :" + element.f1));
        }
    }
}

Test data:

[hadoop@node1 ~]$ nc -lk 7777
hello 2
hello 1

[hadoop@node1 ~]$ nc -lk 8888
hello 3
hello 4

Output:

input1 :2
input1 :1
input2 :3
input2 :4

Connect

DataStream,DataStream → ConnectedStream

"Connects" two data streams, retaining their types; the connection allows shared state between the two streams. The two streams may have different data types.

DataStream<Integer> someStream = //...
DataStream<String> otherStream = //...

ConnectedStreams<Integer, String> connectedStreams = someStream.connect(otherStream);

After connect, you get a ConnectedStreams. To transform a ConnectedStreams, implement the CoMapFunction or CoFlatMapFunction interface and override its two methods, one per stream: the first method processes elements of the first stream, the second method processes elements of the second stream. The type parameters are as follows:

// IN1: the data type of the first input stream
// IN2: the data type of the second input stream
// OUT: the data type of the output stream
public interface CoMapFunction<IN1, IN2, OUT> extends Function, Serializable {
    OUT map1(IN1 value) throws Exception;
    OUT map2(IN2 value) throws Exception;
}

The following complete program connects two data streams of different types:

import org.apache.flink.streaming.api.datastream.ConnectedStreams;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoMapFunction;

public class OperatorConnect {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // source
        DataStream<Integer> dataStream1 = env.fromElements(1, 2, 3);
        DataStream<String> dataStream2 = env.fromElements("hello", "flink", "spark");
        // transformations
        ConnectedStreams<Integer, String> connectedStreams = dataStream1.connect(dataStream2);
        SingleOutputStreamOperator<String> res = connectedStreams.map(new CoMapFunction<Integer, String, String>() {
            @Override
            public String map1(Integer input1) throws Exception {
                return input1.toString();
            }

            @Override
            public String map2(String input2) throws Exception {
                return input2;
            }
        });
        // sink
        res.print();
        // execute
        env.execute();
    }
}

Output:

1
hello
2
flink
3
spark

CoMap, CoFlatMap

ConnectedStream → DataStream

Transforms a connected stream into a data stream, with transformations analogous to map and flatMap.

connectedStreams.map(new CoMapFunction<Integer, String, Boolean>() {
    @Override
    public Boolean map1(Integer value) {
        return true;
    }

    @Override
    public Boolean map2(String value) {
        return false;
    }
});

connectedStreams.flatMap(new CoFlatMapFunction<Integer, String, String>() {

    @Override
    public void flatMap1(Integer value, Collector<String> out) {
        out.collect(value.toString());
    }

    @Override
    public void flatMap2(String value, Collector<String> out) {
        for (String word: value.split(" ")) {
            out.collect(word);
        }
    }
});

The following complete program transforms a connected stream into a data stream:

import org.apache.flink.streaming.api.datastream.ConnectedStreams;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class OperatorCoFlatMap {
    public static void main(String[] args) throws Exception {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // source
        DataStream<Integer> dataStream1 = env.fromElements(1, 2, 3);
        DataStream<String> dataStream2 = env.fromElements("hello world", "hello flink");
        // transformations
        ConnectedStreams<Integer, String> connectedStreams = dataStream1.connect(dataStream2);
        SingleOutputStreamOperator<String> res = connectedStreams.flatMap(new CoFlatMapFunction<Integer, String, String>() {
            @Override
            public void flatMap1(Integer value, Collector<String> out) throws Exception {
                out.collect(value.toString());
            }

            @Override
            public void flatMap2(String value, Collector<String> out) throws Exception {
                for (String word : value.split(" ")) {
                    out.collect(word);
                }
            }
        });
        // sink
        res.print();
        // execute
        env.execute();
    }
}

Output:

8> hello
8> flink
4> 1
7> hello
7> world
5> 2
6> 3

Cache

Caches the intermediate result of a transformation. Currently this is only supported for jobs executed in batch mode. The cached intermediate result is generated lazily the first time it is computed, so that later jobs can reuse the result. If the cache is lost, it will be recomputed using the original transformations.

DataStream<Integer> dataStream = //...
CachedDataStream<Integer> cachedDataStream = dataStream.cache(); // cache the data
cachedDataStream.print(); 
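
Below is a minimal runnable sketch, assuming Flink 1.16+ where DataStream#cache is available; the class name is a placeholder. Note that cache() requires batch execution mode:

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.datastream.CachedDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class OperatorCache {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // cache() is only supported in batch execution mode
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);
        DataStream<Integer> dataStream = env.fromElements(1, 2, 3, 4, 5);
        // the cache is materialized lazily during the first job that computes it
        CachedDataStream<Integer> cachedDataStream = dataStream.cache();
        cachedDataStream.print();
        env.execute();
        // later jobs in the same application can reuse cachedDataStream
        // without recomputing the original transformations
    }
}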

Physical Partitioning

  • Custom partitioning
  • Random partitioning
  • Rescaling
  • Broadcasting

Custom partitioning
dataStream.partitionCustom(partitioner, "someKey");
dataStream.partitionCustom(partitioner, 0);

Random partitioning
dataStream.shuffle();

Rescaling
dataStream.rescale();

Broadcasting
dataStream.broadcast();
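
As an illustration of custom partitioning, here is a minimal runnable sketch of a Partitioner that routes records by the hash of a String key; the stream contents and class name are made up for the example:

import org.apache.flink.api.common.functions.Partitioner;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class OperatorPartitionCustom {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Tuple2<String, Integer>> dataStream = env.fromElements(
                Tuple2.of("a", 1), Tuple2.of("b", 2), Tuple2.of("c", 3));
        // route each record to a partition derived from the hash of its String key
        DataStream<Tuple2<String, Integer>> partitioned = dataStream.partitionCustom(
                new Partitioner<String>() {
                    @Override
                    public int partition(String key, int numPartitions) {
                        return Math.abs(key.hashCode()) % numPartitions;
                    }
                },
                value -> value.f0); // key selector: partition by the first tuple field
        partitioned.print();
        env.execute();
    }
}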

Task Chaining and Resource Groups

  • Start a new chain
  • Disable chaining
  • Set the slot sharing group

Start a new chain

Begin a new chain, starting with this operator. The two mappers will be chained, and filter will not be chained to the first mapper.

someStream.filter(...).map(...).startNewChain().map(...);

Disable chaining

Do not chain the map operator.

someStream.map(...).disableChaining();

Set the slot sharing group

someStream.filter(...).slotSharingGroup("name");

Names and Descriptions

someStream.filter(...).name("filter").setDescription("some description text");


Specifying Where Results Go (sink)

  • writeAsText(path): write to a text file
  • writeAsCsv(...): write to a CSV file
  • print(): print to the console
  • writeUsingOutputFormat(): write using a custom OutputFormat
  • writeToSocket: write to a socket
  • addSink: a custom sink, e.g., writing to Kafka

For example:

DataStreamSource<Integer> dataStream = env.fromElements(1, 2, 3);
dataStream.writeAsText("sinkout");
dataStream.print();
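
As an illustration of a custom Kafka sink, here is a minimal sketch using the newer KafkaSink/sinkTo API rather than addSource-era addSink. It assumes the flink-connector-kafka dependency; the broker address, topic, and class name are placeholders:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> dataStream = env.fromElements("hello", "flink");
        // build a Kafka sink; the broker and topic below are placeholder values
        KafkaSink<String> kafkaSink = KafkaSink.<String>builder()
                .setBootstrapServers("node1:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("topic1")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();
        dataStream.sinkTo(kafkaSink);
        env.execute();
    }
}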


Triggering Program Execution (execution)

  • execute: synchronous execution; blocks the caller until the job finishes
execute()
execute(String jobName)
execute(StreamGraph streamGraph)

For example:

env.execute();

  • executeAsync: asynchronous execution; submits the job and returns without blocking
executeAsync()
executeAsync(String jobName)
executeAsync(StreamGraph streamGraph)

For example:

env.executeAsync();
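
A minimal sketch of working with the JobClient handle returned by executeAsync; the job name and class name are arbitrary examples:

import org.apache.flink.core.execution.JobClient;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExecuteAsyncExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(1, 2, 3).print();
        // submit the job without blocking; returns a handle to the running job
        JobClient jobClient = env.executeAsync("my-async-job");
        // the JobClient can be used to inspect or control the job, e.g. query its status
        System.out.println("Job status: " + jobClient.getJobStatus().get());
    }
}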

Done! Enjoy it!
