

Word Frequency Analysis Based on Hadoop

1 Hadoop installation and pseudo-distributed setup

2 Hadoop word count

This article covers the word-count experiment performed after Hadoop has been set up; the links above are the tutorials for setting up Hadoop.

Contents

1 Common HDFS file system commands

2 Preparation for the word count experiment

2.1 Start Hadoop and disable the firewall

2.2 View the web UI

2.3 Upload the file

2.4 Configure the Hadoop classpath

3 Word count

3.1 Method 1: use the example jar bundled with Hadoop

3.2 Method 2: write a Java program and package it as a jar


1 Common HDFS file system commands

  # List files and directories under the HDFS root directory
  hadoop fs -ls /
  # Create an HDFS directory
  hadoop fs -mkdir /path/to/directory
  # Upload a local file to HDFS
  hadoop fs -put localfile /path/in/hdfs
  # Download a file from HDFS to the local file system
  hadoop fs -get /path/in/hdfs localfile
  # Display the contents of a file on HDFS
  hadoop fs -cat /path/in/hdfs
  # Delete a file on HDFS
  hadoop fs -rm /path/in/hdfs
  # Recursively delete a directory
  hadoop fs -rm -r /path/in/hdfs
  # Move or rename a file or directory on HDFS
  hadoop fs -mv /source/path /destination/path
  # Copy a file or directory on HDFS
  hadoop fs -cp /source/path /destination/path
  # Show metadata of a file on HDFS (here, its name)
  hadoop fs -stat %n /path/in/hdfs
  # Set permissions on a file on HDFS
  hadoop fs -chmod 755 /path/in/hdfs
  # Set the owner and group of a file on HDFS
  hadoop fs -chown user:group /path/in/hdfs
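As a quick illustration, a typical sequence when preparing data for a job combines these commands like this (the /demo directory and /root/sample.txt are only example paths):

  # create a working directory on HDFS
  hadoop fs -mkdir /demo
  # upload a local file into it
  hadoop fs -put /root/sample.txt /demo
  # confirm the upload and inspect the contents
  hadoop fs -ls /demo
  hadoop fs -cat /demo/sample.txt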

2 Preparation for the word count experiment

2.1 Start Hadoop and disable the firewall

  [root@hadoop ~]# start-all.sh
  Starting namenodes on [localhost]
  Starting datanodes
  Starting secondary namenodes [hadoop]
  Starting resourcemanager
  Starting nodemanagers
  [root@hadoop ~]# systemctl stop firewalld.service

2.2 View the web UI

Check the virtual machine's IP address.

In a browser, enter the IP address followed by port 9870 (http://<ip address>:9870) to open the NameNode web UI.

The web UI shows the files stored on the HDFS file system.

The same files can also be listed with commands on the virtual machine.
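For example, the root listing shown in the web UI matches what the -ls command from Section 1 prints:

  [root@hadoop ~]# hadoop fs -ls /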

2.3 Upload the file

Find any short English article online to serve as the document for the word count.

  [root@hadoop ~]# mkdir /wordcount
  [root@hadoop ~]# cd /wordcount/
  [root@hadoop wordcount]# vim words2.txt

Sample English article:

Once a circle missed a wedge. The circle wanted to be whole, so it went around looking for its missing piece. But because it was incomplete and therefore could roll only very slowly, it admired the flowers along the way. It chatted with worms. It enjoyed the sunshine. It found lots of different pieces, but none of them fit. So it left them all by the side of the road and kept on searching. Then one day the circle found a piece that fit perfectly. It was so happy. Now it could be whole, with nothing missing. It incorporated the missing piece into itself and began to roll. Now that it was a perfect circle, it could roll very fast, too fast to notice the flowers or talking to the worms. When it realized how different the world seemed when it rolled so quickly, it stopped, left its found piece by the side of the road and rolled slowly away.

Create an input directory in the root of the HDFS file system.

The directory had already been created here, so the command reports that it already exists.

  [root@hadoop wordcount]# hadoop fs -mkdir /input
  mkdir: `/input': File exists

Upload the file to the HDFS file system:

[root@hadoop wordcount]# hadoop fs -put /wordcount/words2.txt  /input

Check in the browser whether the upload succeeded.

2.4 Configure the Hadoop classpath

  [root@hadoop wordcount]# hadoop classpath
  /opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*
  [root@hadoop wordcount]# vim /opt/hadoop/etc/hadoop/yarn-site.xml
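A common way to do this edit (a sketch, using the classpath output printed above) is to add a yarn.application.classpath property inside the existing <configuration> element of yarn-site.xml, so that YARN containers can find the MapReduce classes:

  <!-- added inside <configuration>; the value is the output of `hadoop classpath` -->
  <property>
    <name>yarn.application.classpath</name>
    <value>/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*</value>
  </property>

Restarting YARN afterwards (stop-yarn.sh, then start-yarn.sh) is a safe way to make sure the change is picked up.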

3 Word count

Now that the article is on the file system, the word count can begin.

3.1 Method 1: use the example jar bundled with Hadoop

Find which directory the jar is located in:

[root@hadoop wordcount]# find $HADOOP_HOME/ -name mapreduce

Change to that directory:

  [root@hadoop wordcount]# cd /opt/hadoop/share/hadoop/mapreduce/
  [root@hadoop mapreduce]# ls
  hadoop-mapreduce-client-app-3.3.6.jar               hadoop-mapreduce-client-nativetask-3.3.6.jar
  hadoop-mapreduce-client-common-3.3.6.jar            hadoop-mapreduce-client-shuffle-3.3.6.jar
  hadoop-mapreduce-client-core-3.3.6.jar              hadoop-mapreduce-client-uploader-3.3.6.jar
  hadoop-mapreduce-client-hs-3.3.6.jar                hadoop-mapreduce-examples-3.3.6.jar
  hadoop-mapreduce-client-hs-plugins-3.3.6.jar        jdiff
  hadoop-mapreduce-client-jobclient-3.3.6.jar         lib-examples
  hadoop-mapreduce-client-jobclient-3.3.6-tests.jar   sources

There is a file named hadoop-mapreduce-examples-3.3.6.jar.

This jar ships with Hadoop and contains example MapReduce programs, including wordcount.

Run the wordcount program in this jar against the article:

[root@hadoop mapreduce]# hadoop jar hadoop-mapreduce-examples-3.3.6.jar wordcount /input/words2.txt /output
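Note that the /output directory must not exist before the job is submitted, otherwise the job will abort. Once the job finishes, the result can also be viewed from the command line; the reducer writes its output to a file typically named part-r-00000:

  [root@hadoop mapreduce]# hadoop fs -ls /output
  [root@hadoop mapreduce]# hadoop fs -cat /output/part-r-00000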

In the web UI, the root directory now contains an output directory; open it to see the result.

The result can likewise be viewed on the virtual machine.

3.2 Method 2: write a Java program and package it as a jar

The IDE used is IntelliJ IDEA.

Create a new Maven project.

Insert the following into pom.xml:

  <dependencies>
      <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-client</artifactId>
          <version>3.3.2</version>
      </dependency>
      <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.13.2</version>
      </dependency>
      <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-log4j12</artifactId>
          <version>1.7.36</version>
      </dependency>
  </dependencies>
  <build>
      <plugins>
          <plugin>
              <artifactId>maven-compiler-plugin</artifactId>
              <version>3.6.1</version>
              <configuration>
                  <source>1.8</source>
                  <target>1.8</target>
              </configuration>
          </plugin>
          <plugin>
              <artifactId>maven-assembly-plugin</artifactId>
              <configuration>
                  <descriptorRefs>
                      <descriptorRef>jar-with-dependencies</descriptorRef>
                  </descriptorRefs>
              </configuration>
              <executions>
                  <execution>
                      <id>make-assembly</id>
                      <phase>package</phase>
                      <goals>
                          <goal>single</goal>
                      </goals>
                  </execution>
              </executions>
          </plugin>
      </plugins>
  </build>

After inserting, reload the Maven project so the dependencies are downloaded.

Then create a log4j.properties file (typically under src/main/resources) and add the following content:

  log4j.rootLogger=INFO, stdout
  log4j.appender.stdout=org.apache.log4j.ConsoleAppender
  log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
  log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
  log4j.appender.logfile=org.apache.log4j.FileAppender
  log4j.appender.logfile.File=target/spring.log
  log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
  log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

Write three Java classes:

WordCountDriver (the main/driver class)

WordCountMapper

WordCountReducer

The code is as follows.

WordCountDriver

  package com.hadoop.mapreducer.wordcount;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  import java.io.IOException;

  public class WordCountDriver {
      public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
          // 1. Get the job
          Configuration conf = new Configuration();
          Job job = Job.getInstance(conf);
          // 2. Set the jar via the driver class
          job.setJarByClass(WordCountDriver.class);
          // 3. Associate the mapper and reducer
          job.setMapperClass(WordCountMapper.class);
          job.setReducerClass(WordCountReducer.class);
          // 4. Set the map output key/value types
          job.setMapOutputKeyClass(Text.class);
          job.setMapOutputValueClass(IntWritable.class);
          // 5. Set the final output key/value types
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(IntWritable.class);
          // 6. Set the input and output paths
          FileInputFormat.setInputPaths(job, new Path(args[0]));
          FileOutputFormat.setOutputPath(job, new Path(args[1]));
          // 7. Submit the job
          boolean result = job.waitForCompletion(true);
          System.exit(result ? 0 : 1);
      }
  }

WordCountMapper

  package com.hadoop.mapreducer.wordcount;

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  import java.io.IOException;

  public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      // Reuse the output key/value objects instead of allocating them on every call
      private Text outK = new Text();
      private IntWritable outV = new IntWritable(1);

      @Override
      protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
          // Get one line of input
          String line = value.toString();
          // Split the line into words
          String[] words = line.split(" ");
          // Emit a (word, 1) pair for each word
          for (String word : words) {
              outK.set(word);
              // Pass the pair on to the reduce phase
              context.write(outK, outV);
          }
      }
  }

WordCountReducer

  package com.hadoop.mapreducer.wordcount;

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Reducer;

  import java.io.IOException;

  public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      // Reusable output value object
      private IntWritable outV = new IntWritable();

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
          // Counter for this word
          int sum = 0;
          // Sum up the occurrences of the word
          for (IntWritable value : values) {
              sum += value.get();
          }
          // Convert the result to the output type
          outV.set(sum);
          // Write the result
          context.write(key, outV);
      }
  }

Red (unresolved symbol) marks may appear in the IDE until the Maven dependencies have finished downloading.

Package the jar.

Two jars are produced at this point: hadoop03-1.0-SNAPSHOT.jar and hadoop03-1.0-SNAPSHOT-jar-with-dependencies.jar. The first (plain) one is enough here, since the cluster already provides the Hadoop classes.
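If packaging from the command line rather than through the IDE, the equivalent Maven invocation, run from the project root, is:

  mvn clean package

The jars appear under the target/ directory.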

Copy the jar to the Linux machine and run it:

[root@hadoop wordcount]# hadoop jar hadoop03-1.0-SNAPSHOT.jar com.hadoop.mapreducer.wordcount.WordCountDriver /input/words2.txt /output
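Note: if the /output directory from Method 1 still exists, this job will abort, because the output path must not already exist. Remove it first and resubmit:

  [root@hadoop wordcount]# hadoop fs -rm -r /output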

The job runs successfully.

(Animated demonstration)
