Removing a node that hosts mon, osd, and mds daemons from a Ceph cluster
I. Removing a node that hosts mon, osd, and mds
1. Remove the mon
# ceph mon remove bgw-os-node153
removed mon.bgw-os-node153 at 10.240.216.153:6789/0, there are now 2 monitors
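After the removal it is worth confirming that the two remaining monitors have re-formed quorum; a minimal check, assuming the default cluster name:
# ceph mon stat
# ceph quorum_status
Also remove the entry for mon.bgw-os-node153 from ceph.conf on the remaining nodes (if one exists) so the daemon does not come back on a restart.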
2. Remove all the OSDs on this node
1) List the OSDs on this node
# ceph osd tree
-4 1.08 host bgw-os-node153
8 0.27 osd.8 up 1
9 0.27 osd.9 up 1
10 0.27 osd.10 up 1
11 0.27 osd.11 up 1
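It can also help to note now which data directories these OSDs mount, since they will be unmounted in step 8); a quick look, assuming the default /var/lib/ceph/osd layout:
# df -h | grep /var/lib/ceph/osd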
2) Stop the OSD daemons on this node
# /etc/init.d/ceph stop osd
=== osd.10 ===
Stopping Ceph osd.10 on bgw-os-node153...kill 2251...done
=== osd.9 ===
Stopping Ceph osd.9 on bgw-os-node153...kill 2023...kill 2023...done
=== osd.8 ===
Stopping Ceph osd.8 on bgw-os-node153...kill 1724...kill 1724...done
=== osd.11 ===
Stopping Ceph osd.11 on bgw-os-node153...kill 1501...done
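Note: stopping the daemons directly, as above, leaves their placement groups degraded until recovery completes elsewhere. To drain the data off the node first instead, mark each OSD out before stopping it; a sketch for the four OSDs on this node:
# ceph osd out 8
# ceph osd out 9
# ceph osd out 10
# ceph osd out 11
Then watch ceph -w until all placement groups are active+clean before stopping the daemons.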
3) Check the OSD status again
# ceph osd tree
-4 1.08 host bgw-os-node153
8 0.27 osd.8 down 1
9 0.27 osd.9 down 1
10 0.27 osd.10 down 1
11 0.27 osd.11 down 1
4) Remove all the OSDs (ceph osd rm only removes an OSD that is down, which is why the daemons were stopped first)
# ceph osd rm 8
removed osd.8
# ceph osd rm 9
removed osd.9
# ceph osd rm 10
removed osd.10
# ceph osd rm 11
removed osd.11
5) Remove all the OSDs from the CRUSH map
# ceph osd crush rm osd.8
removed item id 8 name 'osd.8' from crush map
# ceph osd crush rm osd.9
removed item id 9 name 'osd.9' from crush map
# ceph osd crush rm osd.10
removed item id 10 name 'osd.10' from crush map
# ceph osd crush rm osd.11
removed item id 11 name 'osd.11' from crush map
6) Delete the authentication entries for all the OSDs
# ceph auth del osd.8
updated
# ceph auth del osd.9
updated
# ceph auth del osd.10
updated
# ceph auth del osd.11
updated
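Steps 4) through 6) repeat the same three commands per OSD, so they can be scripted; a minimal sketch, assuming the OSD ids 8-11 on this node:
# for id in 8 9 10 11; do
>     ceph osd rm $id
>     ceph osd crush rm osd.$id
>     ceph auth del osd.$id
> done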
7) Remove this host's bucket from the CRUSH map
# ceph osd crush rm bgw-os-node153
removed item id -4 name 'bgw-os-node153' from crush map
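The host should no longer appear in the CRUSH tree; a quick check (no output expected):
# ceph osd tree | grep bgw-os-node153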
8) Unmount all the OSD data disks
# umount /var/lib/ceph/osd/ceph-8
# umount /var/lib/ceph/osd/ceph-9
# umount /var/lib/ceph/osd/ceph-10
# umount /var/lib/ceph/osd/ceph-11
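To confirm that nothing is left mounted (no output means all four data directories are gone):
# mount | grep /var/lib/ceph/osd
Optionally, the now-empty mount points under /var/lib/ceph/osd can be deleted on this host as well.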
3. Remove the mds
1) Stop the mds daemon on this node
# /etc/init.d/ceph stop mds
=== mds.bgw-os-node153 ===
Stopping Ceph mds.bgw-os-node153 on bgw-os-node153...kill 4981...done
2) Delete the mds's authentication entry
# ceph auth del mds.bgw-os-node153
updated
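Finally, a quick end-to-end check that the node is fully removed: the mon map should list only the two remaining monitors, the stopped mds should have dropped out of the mds map, and osd.8-11 and the host bucket should be absent:
# ceph -s
# ceph mds stat
# ceph osd tree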