CentOS 7: Building a Ceph Cluster on a Single Physical Machine
While studying Ceph I came to the CRUSH rules: the smallest CRUSH failure domain is `osd`, but does that `osd` refer to a real OSD, or to a whole physical disk? To check, I used a test machine to build Ceph on a single host with a single disk. First, configure the Ceph yum repository; the Aliyun mirror is used here.
# yum install --nogpgcheck -y epel-release
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
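The three repo sections can also be written in one non-interactive step with a here-document instead of editing the file in vim. A sketch (writing to a temporary path for illustration; on the real host the target is /etc/yum.repos.d/ceph.repo):

```shell
# Write the Ceph repo file in one step (demo path: /tmp/ceph.repo;
# use /etc/yum.repos.d/ceph.repo on the real host).
cat > /tmp/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
EOF
# Quick sanity check: the file should contain three repo sections.
grep -c '^\[' /tmp/ceph.repo
```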
Install Ceph
# yum update -y
# yum install ceph-deploy -y
# yum install ntp ntpdate ntp-doc openssh-server yum-plugin-priorities -y
# vim /etc/hosts
172.16.10.167 admin-node    # the host's IP and hostname; without this entry ceph-deploy cannot connect (alternatively, change mon_initial_members in the ceph config to the hostname)
# mkdir my-cluster
# cd my-cluster
# ceph-deploy new admin-node
# vim ceph.conf
osd pool default size = 3        # keep three replicas
public_network = 172.16.10.0/24  # public network
cluster_network = 172.16.10.0/24 # cluster network
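Worth noting for single-host setups: if `osd crush chooseleaf type = 0` (0 = osd) is added to ceph.conf before the monitors are created, the default CRUSH rule picks replicas across OSDs instead of hosts, which avoids the manual crush-map edit performed later in this article. A sketch of the resulting [global] section, using the addresses and fsid from this walkthrough (`ceph-deploy new` generates the fsid and mon lines itself):

```ini
[global]
fsid = f453a207-a05c-475b-971d-91ff6c1f6f48
mon_initial_members = admin-node
mon_host = 172.16.10.167
public_network = 172.16.10.0/24
cluster_network = 172.16.10.0/24
osd pool default size = 3
; 0 = osd: place replicas across OSDs rather than hosts, so three
; replicas can land on one machine without editing the crush map.
osd crush chooseleaf type = 0
```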
# ceph-deploy install admin-node
# fdisk /dev/sdb    # create three partitions of equal size
# ceph-deploy mon create-initial
# ceph-deploy admin admin-node
# chmod +r /etc/ceph/ceph.client.admin.keyring
# ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb1
# ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb2
# ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb3
The UUID above is the cluster fsid: `ceph -s` shows it as the string after `cluster` on the first line, and it can also be set in the configuration file.
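Rather than copying the fsid out of `ceph -s` by hand, it can also be read from ceph.conf, where ceph-deploy records it as `fsid`. A sketch, demonstrated here on a sample config file (on the real host the input would be ./ceph.conf in the my-cluster directory or /etc/ceph/ceph.conf):

```shell
# Demo input standing in for the real ceph.conf.
cat > /tmp/ceph.conf.demo <<'EOF'
[global]
fsid = f453a207-a05c-475b-971d-91ff6c1f6f48
mon_initial_members = admin-node
EOF
# Split on "=" with optional surrounding spaces and print the fsid value.
fsid=$(awk -F' *= *' '$1 == "fsid" {print $2}' /tmp/ceph.conf.demo)
echo "$fsid"
```

`$fsid` can then be passed to the `ceph-disk prepare --cluster-uuid` calls above.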
# ceph-disk activate /dev/sdb1
# ceph-disk activate /dev/sdb2
# ceph-disk activate /dev/sdb3
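The three prepare/activate pairs differ only in the partition name, so they can be driven by a loop. A dry-run sketch that only prints the commands (remove the `echo` to actually run ceph-disk; the uuid is the cluster fsid from above):

```shell
FSID=f453a207-a05c-475b-971d-91ff6c1f6f48
for part in /dev/sdb1 /dev/sdb2 /dev/sdb3; do
    # echo = dry run; drop it to execute ceph-disk for real
    echo ceph-disk prepare --cluster ceph --cluster-uuid "$FSID" --fs-type xfs "$part"
    echo ceph-disk activate "$part"
done
```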
# ceph osd getcrushmap -o a.map
# crushtool -d a.map -o a
# vim a
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type osd   # defaults to host; changed to osd
	step emit
}
# crushtool -c a -o b.map
# ceph osd setcrushmap -i b.map
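Since the only textual change in the decompiled map is `type host` → `type osd`, the edit itself can be scripted with sed instead of vim. A sketch, demonstrated on a sample decompiled rule (on the real cluster the input is the file produced by `crushtool -d`):

```shell
# Sample decompiled rule, standing in for the output of `crushtool -d a.map -o a`.
cat > /tmp/a <<'EOF'
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}
EOF
# Switch the failure domain from host to osd on the chooseleaf line.
sed -i 's/type host$/type osd/' /tmp/a
grep 'chooseleaf' /tmp/a
```

The edited file would then be recompiled and injected with `crushtool -c` and `ceph osd setcrushmap -i` as above.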
# ceph osd tree
# ceph -s
Setup complete.
The result shows that a Ceph cluster can indeed be built with a single disk, and that the `osd` in a CRUSH rule refers to a real OSD (here, one OSD per prepared partition), not to a whole physical disk.