def coalesce(numPartitions: Int, shuffle: Boolean = false)(implicit ord: Ordering[T] = null): RDD[T]
This function repartitions an RDD. The first parameter is the target number of partitions; the second controls whether a shuffle is performed, defaulting to false. When shuffle is true, the data is redistributed using a HashPartitioner.
The spark-shell session below demonstrates this:
```scala
scala> var data = sc.textFile("example.txt")
data: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[53] at textFile at <console>:21

scala> data.collect
res1: Array[String] = Array(hello world, hello spark, hello hive, hi spark)

scala> data.partitions.size
res2: Int = 2   // data has two partitions by default

scala> var rdd1 = data.coalesce(1)
rdd1: org.apache.spark.rdd.RDD[String] = CoalescedRDD[2] at coalesce at <console>:23

scala> rdd1.partitions.size
res3: Int = 1   // rdd1 now has a single partition

scala> var rdd1 = data.coalesce(4)
rdd1: org.apache.spark.rdd.RDD[String] = CoalescedRDD[3] at coalesce at <console>:23

scala> rdd1.partitions.size
res4: Int = 2   // requesting more partitions than before requires shuffle = true; otherwise the count is unchanged

scala> var rdd1 = data.coalesce(4, true)
rdd1: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[7] at coalesce at <console>:23

scala> rdd1.partitions.size
res5: Int = 4
```
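The transcript above relies on an example.txt that is not provided. A self-contained sketch of the same behavior, assuming only a local Spark installation (the object name CoalesceDemo and the use of parallelize in place of textFile are illustrative choices, not from the original):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CoalesceDemo {
  def main(args: Array[String]): Unit = {
    // Local context with 2 cores; parallelize with 2 explicit partitions
    // mirrors the 2-partition RDD in the transcript above.
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("coalesce-demo"))

    val data = sc.parallelize(
      Seq("hello world", "hello spark", "hello hive", "hi spark"), 2)

    println(data.partitions.length)                  // 2
    println(data.coalesce(1).partitions.length)      // 1: shrinking works without a shuffle
    println(data.coalesce(4).partitions.length)      // 2: growing is ignored when shuffle = false
    println(data.coalesce(4, shuffle = true).partitions.length) // 4: shuffle redistributes the data

    sc.stop()
  }
}
```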

def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T]
This function is simply coalesce with its shuffle parameter fixed to true.
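This matches the definition in Spark's own source: in org.apache.spark.rdd.RDD, repartition delegates directly to coalesce (recent versions additionally wrap the body in withScope):

```scala
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] =
  coalesce(numPartitions, shuffle = true)
```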
Again in the spark-shell:
```scala
scala> var rdd2 = data.repartition(1)
rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[11] at repartition at <console>:23

scala> rdd2.partitions.size
res6: Int = 1

scala> var rdd2 = data.repartition(4)
rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[15] at repartition at <console>:23

scala> rdd2.partitions.size
res7: Int = 4
```
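The practical difference between the two shows up in the lineage: coalesce with shuffle = false keeps a narrow dependency on the parent RDD, while repartition always inserts a shuffle stage. A minimal sketch for the spark-shell, reusing the data RDD from the sessions above (toDebugString prints the lineage):

```scala
// No ShuffledRDD appears: coalesce(1) just merges partitions in place.
println(data.coalesce(1).toDebugString)

// A ShuffledRDD stage boundary appears: repartition always shuffles.
println(data.repartition(1).toDebugString)
```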