Posted by jane27 on 2019-1-31 06:52:55

Spark 1.4 Source Code Walkthrough Notes: Pattern Matching

  Pattern matching in RDD:
  def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {
    case (true, true) => true
    case (false, false) => false
    case _ => throw new SparkException("Can only zip RDDs with " +
      "same number of elements in each partition")
  }
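The tuple match above pairs the `hasNext` state of two iterators and fails fast when only one side is exhausted. A minimal standalone sketch of the same idiom (`zipSameLength` is a hypothetical helper, not a Spark API):

```scala
// Sketch of the (Boolean, Boolean) tuple-match idiom from RDD.zip,
// applied to plain Scala iterators. zipSameLength is a hypothetical name.
def zipSameLength[A, B](a: Iterator[A], b: Iterator[B]): Iterator[(A, B)] =
  new Iterator[(A, B)] {
    def hasNext: Boolean = (a.hasNext, b.hasNext) match {
      case (true, true)   => true   // both sides still have elements
      case (false, false) => false  // both exhausted together: lengths matched
      case _ => throw new IllegalArgumentException(
        "Can only zip iterators with the same number of elements")
    }
    def next(): (A, B) = (a.next(), b.next())
  }
```

Zipping two equal-length iterators yields the pairs; a length mismatch only surfaces when the shorter side runs out, exactly as in the Spark version.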
  

  jobResult = jobResult match {
    case Some(value) => Some(f(value, taskResult.get))
    case None => taskResult
  }
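This `Option` match seeds `jobResult` with the first task result and combines every later one with `f`. A minimal sketch of that accumulation loop outside Spark (`mergeResults` is a hypothetical name; `f` plays the role of the combine function in the source):

```scala
// Sketch of the jobResult accumulation above: fold task results into
// a running Option, using the same match. mergeResults is hypothetical.
def mergeResults(taskResults: Seq[Int], f: (Int, Int) => Int): Option[Int] = {
  var jobResult: Option[Int] = None
  for (t <- taskResults) {
    val taskResult: Option[Int] = Some(t)
    jobResult = jobResult match {
      case Some(value) => Some(f(value, taskResult.get)) // combine with running value
      case None        => taskResult                     // first result seeds it
    }
  }
  jobResult
}
```

Returning `Option` lets the caller distinguish "no tasks produced a result" (`None`) from a legitimate zero result.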
  

  take(1) match {
    case Array(t) => t
    case _ => throw new UnsupportedOperationException("empty collection")
  }
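The `Array(t)` pattern destructures the one-element array that `take(1)` returns, and anything else (an empty array) is rejected. A standalone sketch (`firstOrFail` is a hypothetical name, mirroring the behavior of `RDD.first()`):

```scala
// Sketch of the Array(t) extractor pattern above: take one element,
// match the single-element array, or fail on an empty input.
// firstOrFail is a hypothetical name.
def firstOrFail[T](data: Array[T]): T =
  data.take(1) match {
    case Array(t) => t  // exactly one element: unwrap it
    case _        => throw new UnsupportedOperationException("empty collection")
  }
```

The extractor pattern is both the length check and the element extraction in one step, which is why the source prefers it over `if (arr.isEmpty)` plus `arr(0)`.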
  The following is easier to follow:
  val len = rdd.dependencies.length
  len match {
    case 0 => Seq.empty
    case 1 =>
      val d = rdd.dependencies.head
      debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]], true)
    case _ => // all the rest: gather every dependency
      val frontDeps = rdd.dependencies.take(len - 1)
      val frontDepStrings = frontDeps.flatMap(
        d => debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]]))

      val lastDep = rdd.dependencies.last
      val lastDepStrings =
        debugString(lastDep.rdd, prefix, lastDep.isInstanceOf[ShuffleDependency[_, _, _]], true)

      (frontDepStrings ++ lastDepStrings)
  }
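The three-way match on the number of dependencies (none, exactly one, many) drives how the debug tree is rendered: the last child is marked differently from the ones before it. A minimal sketch of that shape with plain strings (`describe` is a hypothetical stand-in for `debugString`; the `|` / `+-` markers are only loosely inspired by `toDebugString` output):

```scala
// Sketch of the length-based match above: no deps, one dep, or many,
// where the last dependency is rendered differently from the rest.
// describe is a hypothetical stand-in for debugString.
def describe(deps: List[String]): Seq[String] = deps.length match {
  case 0 => Seq.empty                          // leaf: nothing to render
  case 1 => Seq(s"+- ${deps.head}")            // single child: just the corner
  case n =>
    val front = deps.take(n - 1).map(d => s"|  $d") // all but the last
    val last  = Seq(s"+- ${deps.last}")             // last child gets the corner
    front ++ last
}
```

Splitting `take(len - 1)` from `last` is what lets the real code pass `true` (last child) only for the final dependency.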
  



