
Flink Learning 3: WordCount Word Frequency Statistics

Building on the Flink development environment set up previously, we will now implement a Flink version of the word-count program. The main contents are:

  • Requirement description
  • Functional design
  • Implementation
  • Requirement upgrade

Each of these steps is covered in detail below; readers can pick the parts relevant to them.

1. Requirement Description

Given a few lines of text as input, count and output the number of occurrences of each word. Words are separated by spaces.
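Before bringing in Flink, the requirement can be illustrated with plain Java (the class name `SimpleWordCount` is ours, purely for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal illustration of the requirement: split each line on whitespace
// (lower-cased) and count the occurrences of every word.
public class SimpleWordCount {
    public static Map<String, Integer> count(String[] lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum);
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Two example sentences, words separated by spaces
        System.out.println(count(new String[]{"Hello world", "Hello flink"}));
        // → {hello=2, world=1, flink=1}
    }
}
```

The Flink program we build below computes exactly this, but as a distributed dataflow instead of a single loop.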

2. Functional Design

Based on the description above, the word-count program consists of four submodules: reading the data, splitting it into words, counting, and outputting the result.

3. Implementation

Before coding we need to choose a Flink processing mode. Flink is a unified batch/stream computing engine that supports both stream processing and batch processing; the differences are not repeated here. For this requirement, batch mode is clearly the better fit, though a streaming implementation is also possible and will be covered later.

3.1 Creating the WordCount Project

(1) Open IntelliJ IDEA and select New Project (the original screenshot highlighted the relevant options in red boxes), then create the project.

(2) Expand src/main/java, right-click and select New Package, then enter the desired package name. Levels of a multi-level package name are separated by dots; IDEA can create all levels at once.

(3) Right-click the com.windy.myflink package and create a new WordCount class.

(4) Double-click WordCount.java and add a main method with a print statement. The source is:

```java
package com.windy.myflink;

public class WordCount {
    public static void main(String[] args) {
        System.out.println("Hello, Flink");
    }
}
```

(5) Open pom.xml and configure it as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>word-count</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>
    <build>
        <sourceDirectory>${basedir}/src/main/java</sourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <compilerVersion>1.8</compilerVersion>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.0.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
```

(6) Run mvn package in the project directory to build the jar, then run it; it prints Hello, Flink:

```shell
java -cp target/word-count-1.0-SNAPSHOT.jar com.windy.myflink.WordCount
# Output: Hello, Flink
```

3.2 Word Frequency Statistics with Flink Batch Processing

(1) To use the Flink framework we need the Flink jars. Modify pom.xml and add the following:

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>1.13.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.12</artifactId>
        <version>1.13.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.12</artifactId>
        <version>1.13.2</version>
    </dependency>
</dependencies>
```

(2) Open the Maven tool window in the upper right, run maven package, and refresh; the dependency jars then appear in the project tree, and the Flink components can be referenced directly in code.

(3) Open WordCount.java and add the core word-count logic. Here I use the fromElements API to read the sentences directly from string literals. The modified code is:

```java
package com.windy.myflink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCount {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment setEnv = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<String> dataSet = setEnv.fromElements("Hello world", "Hello flink");
        dataSet.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
            @Override
            public void flatMap(String s, Collector<Tuple2<String, Integer>> collector) throws Exception {
                String[] fields = s.toLowerCase().split("\\s");
                for (String field : fields) {
                    collector.collect(new Tuple2<>(field, 1));
                }
            }
        }).groupBy(0).sum(1).print();
    }
}
```

The operators involved:

  • flatMap: a one-to-many transformation; takes a sentence as input and emits each word after splitting
  • groupBy: groups by key; 0 selects the first tuple field as the key
  • sum: aggregation; 1 means the second tuple field is accumulated
  • print: prints the final result
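The effect of the groupBy(0).sum(1) step can be mimicked with plain Java collections (a sketch for intuition only; `Pair` is our stand-in for Flink's Tuple2, not Flink API):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrates what groupBy(0).sum(1) does to the (word, 1) pairs emitted
// by flatMap: tuple field 0 is the grouping key, field 1 is summed per key.
public class GroupBySumDemo {
    // Stand-in for Flink's Tuple2<String, Integer>
    static class Pair {
        final String f0;
        final int f1;
        Pair(String f0, int f1) { this.f0 = f0; this.f1 = f1; }
    }

    public static Map<String, Integer> groupBySum(List<Pair> pairs) {
        Map<String, Integer> out = new LinkedHashMap<>();
        for (Pair p : pairs) {
            out.merge(p.f0, p.f1, Integer::sum); // accumulate field 1 per key
        }
        return out;
    }

    public static void main(String[] args) {
        // The pairs flatMap would emit for "Hello world", "Hello flink"
        List<Pair> flatMapped = Arrays.asList(
                new Pair("hello", 1), new Pair("world", 1),
                new Pair("hello", 1), new Pair("flink", 1));
        System.out.println(groupBySum(flatMapped));
        // → {hello=2, world=1, flink=1}
    }
}
```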

(4) Package and run. If you simply run mvn package here and submit the resulting jar, you get the following error, because no Main-Class entry is recorded in the jar's manifest:

```shell
flink run target/word-count-1.0-SNAPSHOT-jar-with-dependencies.jar
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Neither a 'Main-Class', nor a 'program-class' entry was found in the jar file.
    at org.apache.flink.client.program.PackagedProgram.getEntryPointClassNameFromJar(PackagedProgram.java:437)
    at org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:158)
    at org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:65)
    at org.apache.flink.client.program.PackagedProgram$Builder.build(PackagedProgram.java:691)
    at org.apache.flink.client.cli.CliFrontend.buildProgram(CliFrontend.java:875)
    at org.apache.flink.client.cli.CliFrontend.getPackagedProgram(CliFrontend.java:272)
    at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1078)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1156)
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1156)
```

To package correctly:

  • Select File => Project Structure => Artifacts => JAR => From modules with dependencies

  • Fill in the dialog (the original screenshot showed the exact fields)

  • Edit the MANIFEST path, removing the src/main/resources suffix, then save and exit

  • Build the jar: Build => Build Artifacts => Build

  • When the build completes, the jar file is generated under the current working directory

(5) Run the jar to output the word-count result:

```shell
flink run out/artifacts/word_count_jar/word-count.jar
```

4. Requirement Upgrade

Suppose the requirement now changes: read the sentences from a text file, where the text may contain consecutive spaces and special characters such as *, count the word frequencies, and output them in descending order of frequency. The changes are:

  • Reading from a file: use the file-reading API
  • Filtering illegal characters: add a Filter operator
  • Sorting: add a sort operator

The upgraded word-count program:

```java
package com.windy.myflink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCount {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment setEnv = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<String> dataSet = setEnv.readTextFile(
                "/Users/windy/IdeaProjects/word-count/src/main/resources/word.txt");
        dataSet.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
            @Override
            public void flatMap(String s, Collector<Tuple2<String, Integer>> collector) throws Exception {
                String[] fields = s.toLowerCase().split("\\s");
                for (String field : fields) {
                    collector.collect(new Tuple2<>(field, 1));
                }
            }
        }).filter(x -> !x.f0.isEmpty() && !x.f0.contains("*"))
          .groupBy(0)
          .sum(1)
          .sortPartition(x -> x.f1, Order.DESCENDING)
          .print();
    }
}
```

Note: the Flink DataSet API has no global sort operator, only the per-partition sortPartition operator. When sortPartition runs with a parallelism of 1, it effectively produces a global sort; in the setup used here the job runs with parallelism 1 by default, so the output is globally ordered.
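The two new steps, filtering and descending sort, can also be checked in isolation with plain Java collections (a sketch of the logic only, not the Flink API; the class name `FilterAndSort` is ours):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the upgraded pipeline's two new steps: drop tokens that are
// empty (produced by consecutive spaces) or contain '*', then sort the
// (word, count) entries by count in descending order.
public class FilterAndSort {
    public static List<Map.Entry<String, Integer>> topWords(String[] lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            for (String w : line.toLowerCase().split("\\s")) {
                // filter: same predicate as the Flink filter operator
                if (w.isEmpty() || w.contains("*")) {
                    continue;
                }
                counts.merge(w, 1, Integer::sum);
            }
        }
        List<Map.Entry<String, Integer>> sorted = new ArrayList<>(counts.entrySet());
        sorted.sort((a, b) -> b.getValue() - a.getValue()); // descending by count
        return sorted;
    }

    public static void main(String[] args) {
        // "a" appears twice, the empty token and "*" are dropped, "b" once
        System.out.println(topWords(new String[]{"a  a * b"}));
        // → [a=2, b=1]
    }
}
```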

Package and run it in the same way as before; the result is:

```shell
flink run out/artifacts/word_count_jar/word-count.jar
Job has been submitted with JobID 6b562989adfc77104ff0a8d01d21b428
Program execution finished
Job with JobID 6b562989adfc77104ff0a8d01d21b428 has finished.
Job Runtime: 629 ms
Accumulator Results:
- c06f16d3ec88909a3ed11492b22ca269 (java.util.ArrayList) [33 elements]
(the,4)
({@code,3)
(permits,2)
(if,2)
((without,1)
(acquires,1)
(be,1)
(been,1)
(before,1)
(can,1)
(exceeding,1)
(expired.,1)
(false},1)
(from,1)
(given,1)
(granted,1)
(have,1)
(immediately,1)
(it,1)
(number,1)
(obtained,1)
(of,1)
(or,1)
(ratelimiter},1)
(returns,1)
(specified,1)
(this,1)
(timeout,1)
(timeout},,1)
(waiting),1)
(without,1)
(would,1)
(not,1)
```
Interested readers can add further Flink operators to extend this example.
