clh899 posted on 2017-3-2 10:18:37

Spark Streaming: Monitoring a File Directory

When Spark Streaming monitors a file directory, it only detects files that are newly added to that directory. If a file keeps the same name and only its content changes, the change is not detected as a new file.


import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming_TextFile {

  def main(args: Array[String]): Unit = {
    // silence noisy Spark and Jetty logging
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

    val conf = new SparkConf().setMaster("spark://hmaster:7077")
      .setAppName(this.getClass.getSimpleName)
      .set("spark.executor.memory", "2g")
      .set("spark.cores.max", "8")
      .setJars(Array("E:\\ScalaSpace\\Spark_Streaming\\out\\artifacts\\Spark_Streaming.jar"))
    val context = new SparkContext(conf)

    // step 1: create the streaming context with a 10-second batch interval
    val ssc = new StreamingContext(context, Seconds(10))

    // step 2: monitor the target directory for newly added files
    val lines = ssc.textFileStream("hdfs://hmaster:9000/zh/logs/")

    // word count over each batch of newly detected files
    val words = lines.flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _)
    words.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
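Because only newly appearing file names are picked up, the usual workaround for "updated" data is to write it to a staging path first and then move (rename) it into the monitored directory, so it arrives as a brand-new file; this matches the Spark documentation's advice to move files into the directory atomically. A minimal sketch using the Hadoop FileSystem API, with hypothetical paths:

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical paths: write the data outside the monitored directory first,
// then rename it into /zh/logs/ so the next batch sees a brand-new file name.
val fs = FileSystem.get(new URI("hdfs://hmaster:9000"), new Configuration())
val staging = new Path("/zh/staging/app-2017-03-02.txt")
val target  = new Path("/zh/logs/app-2017-03-02.txt")
fs.rename(staging, target)  // a rename within one HDFS namespace is atomic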





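The textFileStream used above is a convenience method; StreamingContext also exposes the more general fileStream, which additionally takes a path filter and a newFilesOnly flag: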
def fileStream[
  K: ClassTag,
  V: ClassTag,
  F <: NewInputFormat[K, V]: ClassTag
] (directory: String, filter: Path => Boolean, newFilesOnly: Boolean): InputDStream[(K, V)] = {
  new FileInputDStream[K, V, F](this, directory, filter, newFilesOnly)
}



import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// Note: x must be typed explicitly as Path here, otherwise the call fails to compile;
// the key/value/input-format type parameters also need to be spelled out.
val dataStream = ssc.fileStream[LongWritable, Text, TextInputFormat](directory, (x: Path) => {
  println(x.getName)
  x.getName.contains(".txt")  // only pick up .txt files
}, true)
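The stream returned by fileStream carries (key, value) pairs produced by the input format, so to reproduce the word count from the first example you would keep only the values. A short sketch under the LongWritable/Text assumption above:

// Each record is a (LongWritable, Text) pair; keep only the line text.
val lines = dataStream.map(_._2.toString)
lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()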

This is also why a file that already exists in the monitored directory cannot be read again: once a file name has been recorded, Spark keeps the value it already stored for that file and does not refresh its modification time, so the time-based filter is never satisfied. The record is kept with getOrElseUpdate, whose contract (from the Scala collections documentation) is:

/** If given key is already in this map, returns associated value.
  *
  * Otherwise, computes value from given expression `op`, stores with key
  * in map and returns that value.
  * @param key  the key to test
  * @param op   the computation yielding the value to associate with `key`, if
  *             `key` is previously unbound.
  * @return     the value associated with key (either previously or as a result
  *             of executing the method).
  */
def getOrElseUpdate(key: A, op: => B): B =
  get(key) match {
    case Some(v) => v
    case None => val d = op; this(key) = d; d
  }
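To see why the recorded time never changes, here is a small self-contained sketch (the map name is illustrative, not Spark's actual field): once a key exists, getOrElseUpdate returns the stored value and never evaluates the new expression.

import scala.collection.mutable

// Illustrative stand-in for the "file name -> recorded modification time" map.
val fileToModTime = mutable.HashMap[String, Long]()

fileToModTime.getOrElseUpdate("app.log", 1000L)  // first sighting: stores and returns 1000
fileToModTime.getOrElseUpdate("app.log", 2000L)  // same name again: returns 1000, 2000 is never stored
println(fileToModTime("app.log"))                // prints 1000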










