1. A simple WordCount program
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("spark://Master:7077").setAppName("WordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(10)) // batch interval: how often the timer fires
    val lines = ssc.socketTextStream("Master", 9999)       // receive data over a socket
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
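To try the program, start a plain text server on the Master host before submitting the job, for example with nc -lk 9999; each line typed into it arrives as one element of the lines DStream, and the word counts are printed every 10 seconds.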
Let's start with the print method; its source looks like this:
/**
 * Print the first num elements of each RDD generated in this DStream. This is an output
 * operator, so this DStream will be registered as an output stream and there materialized.
 */
def print(num: Int): Unit = ssc.withScope {
  def foreachFunc: (RDD[T], Time) => Unit = {
    (rdd: RDD[T], time: Time) => {
      val firstNum = rdd.take(num + 1)
      // scalastyle:off println
      println("-------------------------------------------")
      println("Time: " + time)
      println("-------------------------------------------")
      firstNum.take(num).foreach(println)
      if (firstNum.length > num) println("...")
      println()
      // scalastyle:on println
    }
  }
  foreachRDD(context.sparkContext.clean(foreachFunc), displayInnerRDDOps = false)
}
print first defines a function that takes the first few elements of each RDD and prints them together with the batch time, then hands that function to foreachRDD.
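print is thus just a prepackaged use of foreachRDD. The public foreachRDD API accepts any (RDD[T], Time) => Unit, so the same mechanism supports arbitrary output operators. A minimal sketch, where saveToDb stands in for a hypothetical user-defined sink:
wordCounts.foreachRDD { (rdd, time) =>
  // This closure runs on the driver once per batch;
  // the per-partition work below runs on the executors.
  rdd.foreachPartition { partition =>
    partition.foreach { case (word, count) =>
      saveToDb(word, count, time) // hypothetical sink, not a Spark API
    }
  }
}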
private def foreachRDD(
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean): Unit = {
  new ForEachDStream(this,
    context.sparkContext.clean(foreachFunc, false), displayInnerRDDOps).register()
}

/**
 * Register this streaming as an output stream. This would ensure that RDDs of this
 * DStream will be generated.
 */
private[streaming] def register(): DStream[T] = {
  ssc.graph.addOutputStream(this)
  this
}

def addOutputStream(outputStream: DStream[_]) {
  this.synchronized {
    outputStream.setGraph(this)
    outputStreams += outputStream
  }
}
foreachRDD news up a ForEachDStream wrapping the output function and registers it with the DStreamGraph; the registered ForEachDStream instances are exactly the DStreamGraph's outputStreams.
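Note that ForEachDStream overrides DStream.generateJob, so the job generated for an output stream runs the registered foreachFunc rather than an empty job. Abridged from Spark's source (the real version also wraps jobFunc in createRDDWithLocalProperties):
override def generateJob(time: Time): Option[Job] = {
  parent.getOrCompute(time) match {
    case Some(rdd) =>
      // The generated job simply applies the user's foreachFunc
      // to the parent DStream's RDD for this batch time.
      val jobFunc = () => foreachFunc(rdd, time)
      Some(new Job(time, jobFunc))
    case None => None
  }
}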
Each time a batch interval elapses (the JobGenerator's recurring timer fires once per interval), DStreamGraph's generateJobs is invoked:
def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}
generateJobs in turn calls generateJob on each registered output stream; the default implementation in DStream is:
private[streaming] def generateJob(time: Time): Option[Job] = {
  getOrCompute(time) match {
    case Some(rdd) => {
      val jobFunc = () => {
        val emptyFunc = { (iterator: Iterator[T]) => {} }
        context.sparkContext.runJob(rdd, emptyFunc)
      }
      Some(new Job(time, jobFunc))
    }
    case None => None
  }
}
Here getOrCompute(time) is called to produce the new RDD for the batch and store it into generatedRDDs, the per-DStream map of already materialized RDDs:
// RDDs generated, marked as private[streaming] so that testsuites can access it
@transient private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()
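getOrCompute first looks the batch time up in generatedRDDs and only computes a fresh RDD on a miss. A sketch abridged from Spark's source (persistence and checkpointing logic omitted):
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // Reuse the RDD already generated for this batch time, if any.
  generatedRDDs.get(time).orElse {
    if (isTimeValid(time)) {
      // Each DStream subclass implements compute(time) to build the RDD.
      val rddOption = compute(time)
      rddOption.foreach { newRDD => generatedRDDs.put(time, newRDD) }
      rddOption
    } else {
      None
    }
  }
}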