MapReduce (MR) is a big-data compute engine. Compared with tools such as Linux awk, its biggest advantage is that it executes in a distributed fashion, making full use of multi-core hardware across many machines.
An MR job is the unit of work a client wants executed: the input data, the MR program, and the configuration. A job is divided into tasks of two kinds, map tasks and reduce tasks. MR slices the input data into splits according to the HDFS block size (128 MB by default); one map task is launched per split, and its intermediate results are stored temporarily on local disk. For example, a 300 MB input file produces three splits and therefore three map tasks. The reduce tasks then pull the intermediate results and compute the final output.
Splitting is what lets the program run in parallel. If the splits are too large, parallelism is low and processing slows down; if they are too small, the total time spent managing the splits and constructing map tasks starts to dominate.
For most jobs, the optimal split size is exactly the HDFS block size. The data is stored in HDFS, so if one split mapped to several blocks, some of those blocks would likely live on other nodes, and their data would have to travel over the network to the machine running the map task.
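The split size Hadoop actually uses is max(minSize, min(maxSize, blockSize)), which is why it defaults to one block. Should a job ever need coarser or finer splits, the bounds can be adjusted on the Job object. The following is a minimal sketch; the 64 MB and 256 MB values are illustrative and not part of the original setup:

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeExample {

    // Illustrative only: adjust the bounds that feed into
    // splitSize = max(minSize, min(maxSize, blockSize)).
    public static void configure(Job job) {
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);  // 64 MB floor
        FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024); // 256 MB ceiling
    }
}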
The pom.xml with the required dependencies is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.dhhy</groupId>
    <artifactId>hadoopapp</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <encoding>UTF-8</encoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.8.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.2</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                    <archive>
                        <manifest>
                            <mainClass>com.dhhy.mr.wordcount.WordcountDriver</mainClass>
                        </manifest>
                    </archive>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <repositories>
        <repository>
            <id>aliyunmaven</id>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        </repository>
    </repositories>
</project>
package com.dhhy.mr.wordcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * Word count, map phase.
 * Type parameters of Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>:
 *   KEYIN    byte offset of the line the MR framework has read
 *   VALUEIN  content of that line
 *   KEYOUT   key emitted by the user's business logic
 *   VALUEOUT value emitted by the user's business logic
 * Created by JayLai on 2020-02-17 22:33:42
 */
public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    Text k = new Text();
    IntWritable v = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        // 1. Read one line
        String line = value.toString();

        // 2. Split it into words
        String[] words = line.split(" ");

        // 3. Emit (word, 1) for each word
        for (String word : words) {
            k.set(word);
            context.write(k, v);
        }
    }
}
package com.dhhy.mr.wordcount;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * Word count, reduce phase.
 * Created by JayLai on 2020-02-18 18:20:07
 */
public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    IntWritable v = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {

        int sum = 0;

        // 1. Sum the counts for this word
        for (IntWritable value : values) {
            sum += value.get();
        }
        v.set(sum);

        // 2. Emit (word, total)
        context.write(key, v);
    }
}
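One detail worth pointing out (a general Hadoop behavior, not something the original code comments mention): the framework reuses a single IntWritable instance while iterating over values, so the loop must copy the number out with get(), as the code above does, rather than hold on to the IntWritable objects themselves.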
package com.dhhy.mr.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * Driver that assembles and submits the word count job.
 * Created by JayLai on 2020-02-18 18:29:04
 */
public class WordcountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // 1. Get the configuration and create the job
        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration);

        // 2. Set the jar to ship, located via this class
        job.setJarByClass(WordcountDriver.class);

        // 3. Set the mapper and reducer classes
        job.setMapperClass(WordcountMapper.class);
        job.setReducerClass(WordcountReducer.class);

        // 4. Set the map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 5. Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7. Submit and wait for completion
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
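Because the reduce function is a plain sum, which is associative and commutative, the reducer class could optionally double as a combiner to pre-aggregate map output and cut shuffle traffic. This is an optional optimization, not part of the original driver; it would be a single extra line at step 3:

// Optional: pre-aggregate counts on the map side before the shuffle
job.setCombinerClass(WordcountReducer.class);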
Add the following content to /opt/bigdata/data/mr/hello.txt:
hello world
hello bigdata
Right-click the WordcountDriver class to run it, and fill in the program arguments:
/opt/bigdata/data/mr/hello.txt /opt/bigdata/data/mr/output
After the run, the result files appear in the output directory:
root@ubuntu18:/opt/bigdata/data/mr/output# ls
part-r-00000 _SUCCESS
Opening part-r-00000 in an editor shows the counts:
bigdata 1
hello 2
world 1
To run the job on the cluster instead, first upload the input file to HDFS:

hadoop fs -put hello.txt /tmp
Build the project with Maven; the target directory will then contain hadoopapp-1.0-SNAPSHOT.jar. Submit the job to the cluster:

hadoop@ubuntu18:/opt/bigdata/project/mr$ hadoop jar hadoopapp-1.0-SNAPSHOT.jar com.dhhy.mr.wordcount.WordcountDriver /tmp/hello.txt /tmp/output
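Note that FileOutputFormat refuses to write into a directory that already exists: if /tmp/output is left over from an earlier run, the job fails with a FileAlreadyExistsException, so remove it before resubmitting:

hadoop fs -rm -r /tmp/output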
Open http://{IP}:8088/cluster in a browser (the YARN ResourceManager web UI).
You can see that the job finished successfully. Then inspect the result from the command line:
hadoop@ubuntu18:/opt/bigdata/project/mr$ hadoop fs -cat /tmp/output/part-r-00000
bigdata 1
hello 2
world 1