Posted by car.3205 on 2019-2-20 07:57:24

Installing and Configuring Spark + Scala in Docker

  1. Scala installation
Download the Scala tarball first:
  wget https://downloads.lightbend.com/scala/2.11.7/scala-2.11.7.tgz
  Extract it:
  tar -zxvf scala-2.11.7.tgz
  Move it to /usr/local:
  mv scala-2.11.7 /usr/local/
  Rename the directory:
  cd /usr/local/
mv scala-2.11.7 scala
  Configure the environment variables:
  vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
https://s1.运维网.com/images/blog/201901/10/84494bcbe09d5b25b75ff1d66f076d09.png
  Apply the environment variables:
  source /etc/profile
  Check the Scala version:
  scala -version
  Distribute Scala to the other hosts (their /etc/profile needs the same additions):
  scp -r /usr/local/scala/ root@Master:/usr/local/
scp -r /usr/local/scala/ root@Slave2:/usr/local/
  2. Spark installation
Copy the Spark tarball into the container:
  docker cp /root/spark-2.1.2-bin-hadoop2.4.tgz b0c77:/
https://s1.运维网.com/images/blog/201901/10/3d18f5bc6ec0bb2d3722a8362be5b228.png
  Check the tarball and extract it:
https://s1.运维网.com/images/blog/201901/10/e9a4c067719c26d035ee848db58ed8ba.png
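The extraction step above is shown only as a screenshot. A minimal sketch of the equivalent commands, assuming the tarball was copied to / inside the container and that Spark should end up at /usr/local/spark as the later steps expect:
  # confirm the tarball arrived in the container
  ls / | grep spark
  # extract into /usr/local and rename to the path used later
  tar -zxvf /spark-2.1.2-bin-hadoop2.4.tgz -C /usr/local/
  mv /usr/local/spark-2.1.2-bin-hadoop2.4 /usr/local/spark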
Add the Spark environment variables to /etc/profile:
https://s1.运维网.com/images/blog/201901/10/1ff5fe155526a86bea61fcba6f42e874.png
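The profile additions are only visible in the screenshot. A minimal sketch, assuming Spark was installed to /usr/local/spark:
  export SPARK_HOME=/usr/local/spark
  export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin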
Apply the environment variables:
  source /etc/profile
  Edit spark-env.sh; the variables to set are listed below, followed by a sketch:
  vim /usr/local/spark/conf/spark-env.sh
https://s1.运维网.com/images/blog/201901/10/aac5149964729cae318d592b1019f971.png

[*]JAVA_HOME: Java installation directory
[*]SCALA_HOME: Scala installation directory
[*]HADOOP_HOME: Hadoop installation directory
[*]HADOOP_CONF_DIR: directory holding the Hadoop cluster's configuration files
[*]SPARK_MASTER_IP: IP address of the Spark cluster's Master node
[*]SPARK_WORKER_MEMORY: maximum memory each worker node can allocate to executors
[*]SPARK_WORKER_CORES: number of CPU cores each worker node uses
[*]SPARK_WORKER_INSTANCES: number of worker instances started on each machine
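A minimal spark-env.sh sketch using the variables above; the paths and sizes here are placeholders, not values taken from the screenshot, so adjust them to your own layout:
  export JAVA_HOME=/usr/local/jdk1.8.0          # assumed JDK path
  export SCALA_HOME=/usr/local/scala
  export HADOOP_HOME=/usr/local/hadoop          # assumed Hadoop path
  export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  export SPARK_MASTER_IP=172.17.0.2             # Master container IP referenced later in this post
  export SPARK_WORKER_MEMORY=1g                 # max memory a worker hands to executors (assumed)
  export SPARK_WORKER_CORES=1                   # CPU cores per worker (assumed)
  export SPARK_WORKER_INSTANCES=1               # workers started per machine (assumed)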
Modify the slaves file:
  cp slaves.template slaves
https://s1.运维网.com/images/blog/201901/10/2090f7c907814131dee6f06d4359c5da.png
  vi conf/slaves
https://s1.运维网.com/images/blog/201901/10/3f42d47d357aeecfad3e66996514dd98.png
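The slaves file contents are only in the screenshot. A sketch with one worker hostname per line, assuming the two workers that the Web UI shows later are Slave1 and Slave2:
  Slave1
  Slave2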
  scp -r /usr/local/spark/ Master:/usr/local
https://s1.运维网.com/images/blog/201901/10/cdce091a0a964683cf46e0800b9504d8.png
  scp -r /usr/local/spark/ Slave2:/usr/local
https://s1.运维网.com/images/blog/201901/10/da365444d92eb213ecb02a3ae1a3b382.png
The other two nodes also need the same changes to /etc/profile.
Start Spark:
  ./sbin/start-all.sh
https://s1.运维网.com/images/blog/201901/10/9bb6ae98ad3bc2209a966ddd68d0e17d.png
After a successful start, running jps on the Master, Slave1, and Slave2 nodes shows the newly started Master and Worker processes.
https://s1.运维网.com/images/blog/201901/10/42025f54c4ff95bec3495e20def61db2.png
https://s1.运维网.com/images/blog/201901/10/f4c8e17b9d5d183509a63ba04dd62b2f.png
https://s1.运维网.com/images/blog/201901/10/7033fd94dd6796c88c1a0c4a96b2c94a.png
Once the Spark cluster is up, you can open the Spark Web UI at
SparkMaster_IP:8080
First set up the port mapping:
  iptables -t nat -A DOCKER -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:8080
https://s1.运维网.com/images/blog/201901/10/5637fc864a4b94a3268bd725222bf778.png
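To confirm the DNAT rule was added before testing from the host (a quick check, not part of the original post):
  iptables -t nat -L DOCKER -n --line-numbers
If the container is recreated later, publishing the port at creation time with docker run -p 8080:8080 avoids editing the NAT table by hand.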
  Now we can access the UI through the port mapped on the host; two running Worker nodes are visible.
https://s1.运维网.com/images/blog/201901/10/0db1452a9101d7dee5315bc41a1f0d5d.png
Start spark-shell
Run:
  spark-shell
https://s1.运维网.com/images/blog/201901/10/e77e93df8e1d4e8b3d9ac5f165015340.png
  To exit spark-shell, use ":quit".
While the shell is running, we can also open the Web UI at
SparkMaster_IP:4040 (172.17.0.2:4040)
  to view the currently running jobs.
Set up the port mapping first:
  iptables -t nat -A DOCKER -p tcp --dport 4040 -j DNAT --to-destination 172.17.0.2:4040
https://s1.运维网.com/images/blog/201901/10/f18ca0a7b49db7023a19170a04a9727f.png
https://s1.运维网.com/images/blog/201901/10/07ab564dbfb1d9ea96a889fc6f2f392b.png
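To make a job show up in the 4040 UI, run a small computation in the shell. This is a sketch, not from the original post: either type the expression at the scala> prompt of the running shell, or pipe it in from the host shell (spark://Master:7077 assumes the default standalone master URL; adjust the hostname if needed):
  echo 'sc.parallelize(1 to 10000).map(_ * 2).count()' | spark-shell --master spark://Master:7077
Note that the 4040 UI only exists while a SparkContext is alive, so the interactive shell is the easier way to keep it open while you browse.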


