Download the swarm image
[iyunv@manager-node ~]# docker pull swarm
[iyunv@manager-node ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/swarm latest 36b1e23becab 4 months ago 15.85 MB
3) Create the swarm (save the token printed during initialization — joining nodes use it as the key to authenticate with the cluster)
[iyunv@manager-node ~]# docker swarm init --advertise-addr 182.48.115.237
Swarm initialized: current node (1gi8utvhu4rxy8oxar2g7h6gr) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-4roc8fx10cyfgj1w1td8m0pkyim08mve578wvl03eqcg5ll3ig-f0apd81qfdwv27rnx4a4y9jej \
182.48.115.237:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
After this command runs, the machine joins the swarm as its manager. The command also generates a cluster token — a globally unique identifier for the cluster — which is needed whenever another node joins.
The --advertise-addr flag tells the swarm's worker nodes which IP address to use to contact the manager. The command's output includes the exact command the other nodes should run to join the cluster.
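The token itself has a fixed shape — `SWMTKN`, a format version, a digest tied to the cluster's CA certificate, and a role-specific secret (worker and manager tokens differ only in that final secret). A quick sketch that pulls the worker token from the output above apart, in plain shell with no daemon needed:

```shell
# the worker join token printed by `docker swarm init` above
token='SWMTKN-1-4roc8fx10cyfgj1w1td8m0pkyim08mve578wvl03eqcg5ll3ig-f0apd81qfdwv27rnx4a4y9jej'

# split on '-' into prefix, format version, CA-certificate digest, and secret
IFS='-' read -r prefix version ca_digest secret <<< "$token"
echo "$prefix v$version, secret=$secret"
```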
-------------------------------------------------------------------------------------------------------------------
Tip:
If you run the swarm init command above a second time, it fails because the node already belongs to a swarm:
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
Workaround:
[iyunv@manager-node ~]# docker swarm leave --help //show help
[iyunv@manager-node ~]# docker swarm leave --force
-------------------------------------------------------------------------------------------------------------------
Use docker info or docker node ls to inspect the cluster:
[iyunv@manager-node ~]# docker info
.......
Swarm: active
NodeID: 1gi8utvhu4rxy8oxar2g7h6gr
Is Manager: true
ClusterID: a88a9j6nwcbn31oz6zp9oc0f7
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
.......
[iyunv@manager-node ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
1gi8utvhu4rxy8oxar2g7h6gr * manager-node Ready Active Leader
Note: the * next to a node ID marks the node you are currently connected to.
Log in to node1 and run the join command printed earlier when the swarm was created:
[iyunv@node1 ~]# docker swarm join --token SWMTKN-1-4roc8fx10cyfgj1w1td8m0pkyim08mve578wvl03eqcg5ll3ig-f0apd81qfdwv27rnx4a4y9jej 182.48.115.237:2377
This node joined a swarm as a worker.
Do the same on node2:
[iyunv@node2 ~]# docker swarm join --token SWMTKN-1-4roc8fx10cyfgj1w1td8m0pkyim08mve578wvl03eqcg5ll3ig-f0apd81qfdwv27rnx4a4y9jej 182.48.115.237:2377
This node joined a swarm as a worker.
Any further nodes can be added to the swarm in exactly the same way.
Then, back on manager-node, check the status of the cluster's nodes:
[iyunv@manager-node ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
1gi8utvhu4rxy8oxar2g7h6gr * manager-node Ready Active Leader
ei53e7o7jf0g36329r3szu4fi node1 Ready Active
f1obgtudnykg51xzyj5fs1aev node2 Ready Active
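The MANAGER STATUS column distinguishes the roles: managers show Leader (or Reachable), while workers show nothing there. That makes role counts easy to script; a sketch over sample rows taken from the output above (the `*` marker column is omitted so every row has the same shape):

```shell
# sample `docker node ls` rows: ID, hostname, status, availability, manager status
nodes='1gi8utvhu4rxy8oxar2g7h6gr manager-node Ready Active Leader
ei53e7o7jf0g36329r3szu4fi node1 Ready Active
f1obgtudnykg51xzyj5fs1aev node2 Ready Active'

# managers carry a trailing Leader/Reachable field; workers do not
managers=$(printf '%s\n' "$nodes" | grep -cE '(Leader|Reachable)$')
workers=$(printf '%s\n' "$nodes" | grep -cvE '(Leader|Reachable)$')
echo "managers=$managers workers=$workers"   # → managers=1 workers=2
```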
--------------------------------------------------------------------------------------------------------------------
Tip: changing a node's availability
A swarm node's availability can be either active or drain:
active: the node accepts task assignments from the manager;
drain: the node's running tasks are shut down and it accepts no new assignments from the manager (i.e. the node is taken out of service).
[iyunv@manager-node ~]# docker node update --availability drain node1 //take node1 out of service. To remove the node entirely, use "docker node rm --force node1"
[iyunv@manager-node ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
1gi8utvhu4rxy8oxar2g7h6gr * manager-node Ready Active Leader
ei53e7o7jf0g36329r3szu4fi node1 Ready Drain
f1obgtudnykg51xzyj5fs1aev node2 Ready Active
As shown above, once node1's availability is set to Drain it receives no new tasks, and the tasks it was already running are rescheduled onto other nodes.
To bring the node back into service, set its availability to active again:
[iyunv@manager-node ~]# docker node update --availability active node1
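Availability is the fourth column of `docker node ls` output, so drained nodes can be picked out with a one-line filter; a sketch against sample rows (on a live cluster, substitute the real command's output):

```shell
# sample `docker node ls` rows: ID, hostname, status, availability, manager status
nodes='1gi8utvhu4rxy8oxar2g7h6gr manager-node Ready Active Leader
ei53e7o7jf0g36329r3szu4fi node1 Ready Drain
f1obgtudnykg51xzyj5fs1aev node2 Ready Active'

# print the hostname of every node whose availability is Drain
drained=$(printf '%s\n' "$nodes" | awk '$4 == "Drain" {print $2}')
echo "$drained"   # → node1
```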
5) Deploying a service on the swarm (using nginx as the example)
Docker 1.12 provides service scaling, health checks, rolling upgrades, and more, along with built-in DNS and VIP mechanisms that give services service discovery and load balancing.
Before starting any containers, create an overlay network so that containers on different hosts can communicate with one another:
[iyunv@manager-node ~]# docker network create -d overlay ngx_net
[iyunv@manager-node ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
8bbd1b7302a3 bridge bridge local
9e637a97a3b9 docker_gwbridge bridge local
b5a41c8c71e7 host host local
1x45zepuysip ingress overlay swarm
3ye6vfp996i6 ngx_net overlay swarm
0808a5c72a0a none null local
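Note the SCOPE column: overlay networks are swarm-scoped (visible cluster-wide), while the bridge/host/null networks stay local to one host. Filtering them out of the listing is straightforward; a sketch on the sample output above:

```shell
# sample `docker network ls` rows: ID, name, driver, scope
networks='8bbd1b7302a3 bridge bridge local
9e637a97a3b9 docker_gwbridge bridge local
b5a41c8c71e7 host host local
1x45zepuysip ingress overlay swarm
3ye6vfp996i6 ngx_net overlay swarm
0808a5c72a0a none null local'

# keep rows whose DRIVER column is overlay and print the network name
overlays=$(printf '%s\n' "$networks" | awk '$3 == "overlay" {print $2}')
printf '%s\n' "$overlays"
```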
Query the service's task list again: the two tasks have moved back onto node1. That is, when a container started by the swarm cluster is deleted or stopped on a worker node, it is automatically rescheduled onto another worker node.
[iyunv@manager-node ~]# docker service ps my-test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
2m8qqpoa0dpeua5jbgz1infuy my-test.1 docker.io/nginx manager-node Running Running 38 minutes ago
aqko8yhmdj53gmzs8gqhoylc2 my-test.2 docker.io/nginx node2 Running Running 31 minutes ago
7dhmc63rk0bc8ngt59ix38l44 my-test.3 docker.io/nginx node1 Running Running about a minute ago
di99oj7l9x6firw1ai25sewwc \_ my-test.3 docker.io/nginx node2 Shutdown Complete about a minute ago
erqk394hd4ay7nfwgaz4zp3s0 \_ my-test.3 docker.io/nginx node1 Shutdown Complete 9 minutes ago
607tyjv6foc0ztjjvdo3l3lge my-test.4 docker.io/nginx node1 Running Running about a minute ago
aibl3u3pph3fartub0mhwxvzr \_ my-test.4 docker.io/nginx node2 Shutdown Complete about a minute ago
2dslg6w16wzcgboa2hxw1c6k1 \_ my-test.4 docker.io/nginx node1 Shutdown Complete 9 minutes ago
bmyddndlx6xi18hx4yinpakf3 my-test.5 docker.io/nginx manager-node Running Running 31 minutes ago
----------------------------------------------------------------------------------------------------
Swarm can also scale a service down; here the my-test service is reduced to a single container:
[iyunv@manager-node ~]# docker service scale my-test=1
[iyunv@manager-node ~]# docker service ps my-test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
2m8qqpoa0dpeuasdfsdfdfsdf my-test.1 nginx manager-node Running Running 3 minutes ago
Log in to node2 and run docker ps: you will see that the surplus containers were stopped, not removed.
---------------------------------------------------------------------------------------------------
Removing a service
[iyunv@manager-node ~]# docker service --help //show help
[iyunv@manager-node ~]# docker service rm my-test //this removes every container (task instance) of the service, on every node
my-test
---------------------------------------------------------------------------------------------------
Besides scaling a service up or down with docker service scale, the docker service update command can be used to change a service's launch parameters:
[iyunv@manager-node ~]# docker service update --replicas 3 my-test
my-test
Once the update completes, REPLICAS shows 3/3:
[iyunv@manager-node ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
d7cygmer0yy5 my-test 3/3 nginx /bin/bash
[iyunv@manager-node ~]# docker service ps my-test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
ddkidkz0jgor751ffst55kvx4 my-test.1 nginx node1 Running Preparing 4 seconds ago
1aucul1b3qwlmu6ocu312nyst \_ my-test.1 nginx manager-node Shutdown Complete 5 seconds ago
4w9xof53f0falej9nqgq064jz \_ my-test.1 nginx manager-node Shutdown Complete 19 seconds ago
0e9szyfbimaow9tffxfeymci2 \_ my-test.1 nginx manager-node Shutdown Complete 30 seconds ago
27aqnlclp0capnp1us1wuiaxm my-test.2 nginx manager-node Running Preparing 1 seconds ago
7dmmmle29uuiz8ey3tq06ebb8 my-test.3 nginx manager-node Running Preparing 1 seconds ago
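The REPLICAS column is running/desired, so whether a scale or update has converged can be checked by comparing the two halves. A minimal sketch on the literal value (a live check would read the field from `docker service ls` instead):

```shell
replicas='3/3'            # the REPLICAS field as printed by `docker service ls`
running=${replicas%/*}    # text before the slash: tasks currently running
desired=${replicas#*/}    # text after the slash: tasks requested
if [ "$running" = "$desired" ]; then state=converged; else state=converging; fi
echo "$state ($running of $desired tasks running)"
```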
docker service update can also be used to upgrade the service's image in place:
[iyunv@manager-node ~]# docker service update --image nginx:new my-test
[iyunv@manager-node ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
d7cygmer0yy5 my-test 3/3 nginx:new /bin/bash
6) Using volumes in Swarm (mounted directories)
[iyunv@manager-node ~]# docker volume ls
DRIVER VOLUME NAME
local 11b68dce3fff0d57172e18bc4e4cfc252b984354485d747bf24abc9b11688171
local 1cd106ed7416f52d6c77ed19ee7e954df4fa810493bb7e6cf01775da8f9c475f
local myvolume
[iyunv@manager-node ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
8s9m0okwlhvl test-nginx 2/2 nginx
[iyunv@manager-node ~]# docker service ps test-nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
32bqjjhqcl1k5z74ijjli35z3 test-nginx.1 nginx node1 Running Running 23 seconds ago
48xoypunb3g401jkn690lx7xt test-nginx.2 nginx node2 Running Running 23 seconds ago
Log in to the test-nginx container on node1 and take a look:
[iyunv@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d471569629b2 nginx:latest "nginx -g 'daemon off" 2 minutes ago Up 2 minutes 80/tcp test-nginx.1.32bqjjhqcl1k5z74ijjli35z3
[iyunv@node1 ~]# docker exec -ti d471569629b2 /bin/bash
root@d471569629b2:/# cd /wangshibo/
root@d471569629b2:/wangshibo# ls
root@d471569629b2:/wangshibo# echo "ahahha" > test
root@d471569629b2:/wangshibo# ls
test
[iyunv@node1 ~]# docker volume inspect myvolume
[
{
"Name": "myvolume",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/myvolume/_data",
"Labels": null,
"Scope": "local"
}
]
[iyunv@node1 ~]# cd /var/lib/docker/volumes/myvolume/_data/
[iyunv@node1 _data]# ls
test
[iyunv@node1 _data]# cat test
ahahha
[iyunv@node1 _data]# echo "12313" > 123
[iyunv@node1 _data]# ls
123 test
root@d471569629b2:/wangshibo# ls
123 test
root@d471569629b2:/wangshibo# cat test
ahahha
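The Mountpoint field from `docker volume inspect` is what makes the host-side check above possible; it can also be extracted programmatically (Docker additionally accepts a Go-template format flag, e.g. `docker volume inspect -f '{{ .Mountpoint }}' myvolume`). A sketch on the JSON line itself, so it runs without a daemon:

```shell
# the relevant line from the inspect output above
line='"Mountpoint": "/var/lib/docker/volumes/myvolume/_data",'

# strip everything except the quoted value
mountpoint=$(printf '%s' "$line" | sed 's/.*"Mountpoint": "\([^"]*\)".*/\1/')
echo "$mountpoint"   # → /var/lib/docker/volumes/myvolume/_data
```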
You can also create a symbolic link on node1 pointing at the volume's data directory:
[iyunv@node1 ~]# ln -s /var/lib/docker/volumes/myvolume/_data /wangshibo
[iyunv@node1 ~]# cd /wangshibo
[iyunv@node1 wangshibo]# ls
123 test
[iyunv@node1 wangshibo]# rm -f test
[iyunv@node1 wangshibo]# echo "5555" > haha
root@d471569629b2:/wangshibo# ls
123 haha
root@d471569629b2:/wangshibo# cat haha
5555
---------------------------------------------------------------------------------
Method 2: bind mounts
Command format:
docker service create --mount type=bind,target=/container_data/,source=/host_data/
where target is the path inside the container and source is the path on the host.
[iyunv@manager-node ~]# docker service create --replicas 1 --mount type=bind,target=/usr/share/nginx/html/,source=/opt/web/ --network ngx_net --name haha-nginx -p 8880:80 nginx
[iyunv@manager-node ~]# docker service ls
ID NAME REPLICAS IMAGE COMMAND
9t9d58b5bq4u haha-nginx 1/1 nginx
[iyunv@manager-node ~]# docker service ps haha-nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
bji4f5tikhvm7nf5ief3jk2is haha-nginx.1 nginx node2 Running Running 18 seconds ago
Log in to node2 and write some test data under the mount directory /opt/web:
[iyunv@node2 ~]# cd /opt/web/
[iyunv@node2 web]# echo "sdfasdf" > wang.html
[iyunv@node2 web]# ls
wang.html
[iyunv@node2 web]# cat wang.html
sdfasdf
Log in to the container and check — the data has been synced into it:
[iyunv@node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3618e3d1b966 nginx:latest "nginx -g 'daemon off" 28 seconds ago Up 24 seconds 80/tcp haha-nginx.1.bji4f5tikhvm7nf5ief3jk2is
[iyunv@node2 ~]# docker exec -ti 3618e3d1b966 /bin/bash
root@3618e3d1b966:/# cd /usr/share/nginx/html
root@3618e3d1b966:/usr/share/nginx/html# ls
wang.html
root@3618e3d1b966:/usr/share/nginx/html# cat wang.html
sdfasdf
root@3618e3d1b966:/usr/share/nginx/html# touch test
touch: cannot touch 'test': Permission denied
As you can see, with this setup the mounted directory is not writable from inside the container; to update the content, just place files in the mount directory on the host.
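The `--mount` argument is a comma-separated list of key=value pairs, which is easy to take apart when building these commands in scripts; a sketch using the spec from the service above:

```shell
spec='type=bind,target=/usr/share/nginx/html/,source=/opt/web/'

# split on commas, then split each pair on the first '='
IFS=',' read -r -a pairs <<< "$spec"
for kv in "${pairs[@]}"; do
  printf '%s = %s\n' "${kv%%=*}" "${kv#*=}"
done
```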
In short, Swarm is easy to pick up: like Kubernetes, it can create replicated services, keep a given number of containers running, and thereby keep services highly available.
Judging from the official documentation alone, though, the feature set still looks somewhat basic.
Comparing Swarm, Kubernetes, and Mesos overall:
1) Swarm's strength and weakness are the same thing: it uses the standard Docker interface. That makes it simple to use and easy to integrate into existing systems, but it also makes more complex scheduling harder to support, such as scheduling defined through custom interfaces.
2) Kubernetes is a self-contained management system with its own service discovery and replication. It requires redesigning existing applications, but in return it supports failure redundancy and system scaling.
3) Mesos is a low-level, battle-hardened scheduler that supports several container-management frameworks on top of it, such as Marathon, Kubernetes, and Swarm. Kubernetes and Mesos are currently more stable than Swarm, and in terms of scalability Mesos has been proven on very large systems of hundreds or thousands of hosts. For a small cluster, though — say fewer than a dozen nodes — Mesos is probably overkill.