Start deploying the OSDs:
ceph-deploy --overwrite-conf osd prepare ceph-node1:/dev/sdb ceph-node1:/dev/sdc ceph-node2:/dev/sdb ceph-node2:/dev/sdc ceph-node3:/dev/sdb ceph-node3:/dev/sdc --zap-disk
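During prepare (with the filestore backend that Jewel uses by default), ceph-disk splits each disk into a data partition and a journal partition, which is why the activate step below targets /dev/sdb1 rather than the whole disk. If anything looks off, the resulting layout can be checked before activating, for example:

ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3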
ceph-deploy --overwrite-conf osd activate ceph-node1:/dev/sdb1 ceph-node1:/dev/sdc1 ceph-node2:/dev/sdb1 ceph-node2:/dev/sdc1 ceph-node3:/dev/sdb1 ceph-node3:/dev/sdc1

Check the cluster status:
[root@ceph-node1 cluster]# ceph -s
    cluster 466e0a3e-f351-46f3-94a2-5ea976c26fd8
     health HEALTH_WARN
            15 pgs peering
            2 pgs stuck unclean
            too few PGs per OSD (21 < min 30)
     monmap e1: 3 mons at {ceph-node1=10.0.70.40:6789/0,ceph-node2=10.0.70.41:6789/0,ceph-node3=10.0.70.42:6789/0}
            election epoch 4, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     osdmap e47: 6 osds: 6 up, 6 in; 9 remapped pgs
            flags sortbitwise,require_jewel_osds
      pgmap v125: 64 pgs, 1 pools, 0 bytes data, 0 objects
            203 MB used, 5966 GB / 5967 GB avail
                  49 active+clean
                   9 remapped+peering
                   6 peering

Check the OSDs:
[root@ceph-node1 cluster]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 5.82715 root default
-2 1.94238     host ceph-node1
 0 0.97119         osd.0             up  1.00000          1.00000
 1 0.97119         osd.1             up  1.00000          1.00000
-3 1.94238     host ceph-node2
 2 0.97119         osd.2             up  1.00000          1.00000
 3 0.97119         osd.3             up  1.00000          1.00000
-4 1.94238     host ceph-node3
 4 0.97119         osd.4             up  1.00000          1.00000
 5 0.97119         osd.5             up  1.00000          1.00000
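The peering and remapped PGs in the status above normally settle on their own shortly after deployment, but the "too few PGs per OSD (21 < min 30)" warning needs manual action: 21 is just 64 PGs × 2 replicas / 6 OSDs, so the pool's pg_num must be raised. A minimal fix, assuming the pool in question is the default rbd pool that Jewel creates:

ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128

pg_num has to be increased before pgp_num (pgp_num may not exceed pg_num), and pg_num cannot be decreased afterwards, so pick the target with future growth in mind.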
Problem 1:
[ceph-1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.ceph-1 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
Solution:
Make sure each node's hostname matches its entry in /etc/hosts (in the log above the monitor was created as mon.ceph-1, so that host must actually be named ceph-1 and resolve as such). Once they agree, the monitor can join the quorum.
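A quick way to verify, assuming the node names and monitor IPs shown in the monmap above: the name returned by hostname on each node must match the name used in /etc/hosts on every node, e.g.

10.0.70.40 ceph-node1
10.0.70.41 ceph-node2
10.0.70.42 ceph-node3

If a hostname is wrong, fix it with hostnamectl set-hostname <name> on the affected node and rerun the monitor deployment (e.g. ceph-deploy mon create-initial).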