OpenStack Cinder multi-backend
The idea behind Cinder multi-backend, and the steps to set it up, are as follows.
1. Create the appropriate pools on Ceph for your environment, for example ssd, sata, and so on.
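For example, assuming the two pools are simply named ssd and sata and using a placeholder PG count of 128 (pg_num should be sized for the actual cluster), the pools could be created roughly like this:
ceph osd pool create ssd 128
ceph osd pool create sata 128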
2. Write a crushmap that fits your environment. Below is the crushmap excerpted from Sébastien Han's article on splitting SSD and SATA disks into separate pools.
Its configuration is reproduced as follows:
##
# OSD SATA DECLARATION
##
host ceph-osd2-sata {
  id -2   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.0 weight 1.000
  item osd.3 weight 1.000
}
host ceph-osd1-sata {
  id -3   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.2 weight 1.000
  item osd.5 weight 1.000
}
host ceph-osd0-sata {
  id -4   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.1 weight 1.000
  item osd.4 weight 1.000
}
##
# OSD SSD DECLARATION
##
host ceph-osd2-ssd {
  id -22  # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.6 weight 1.000
  item osd.9 weight 1.000
}
host ceph-osd1-ssd {
  id -23  # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.8 weight 1.000
  item osd.11 weight 1.000
}
host ceph-osd0-ssd {
  id -24  # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item osd.7 weight 1.000
  item osd.10 weight 1.000
}
##
# SATA ROOT DECLARATION
##
root sata {
  id -1   # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item ceph-osd2-sata weight 2.000
  item ceph-osd1-sata weight 2.000
  item ceph-osd0-sata weight 2.000
}
##
# SSD ROOT DECLARATION
##
root ssd {
  id -21  # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0  # rjenkins1
  item ceph-osd2-ssd weight 2.000
  item ceph-osd1-ssd weight 2.000
  item ceph-osd0-ssd weight 2.000
}
##
# SSD RULE DECLARATION
##
# rules
rule ssd {
  ruleset 0
  type replicated
  min_size 1
  max_size 10
  step take ssd
  step chooseleaf firstn 0 type host
  step emit
}
##
# SATA RULE DECLARATION
##
rule sata {
  ruleset 1
  type replicated
  min_size 1
  max_size 10
  step take sata
  step chooseleaf firstn 0 type host
  step emit
}
This crushmap splits the SSD and SATA disks into separate logical hosts, then groups those logical hosts under buckets (the sata and ssd roots), with one rule per bucket.
The default crushmap, by contrast, has a simpler layout: it places all hosts under a single bucket and applies one default rule to that bucket.
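Before the new rules can be referenced by any pool, the edited crushmap has to be compiled and injected back into the cluster. A minimal sketch of that workflow, with placeholder file names:
ceph osd getcrushmap -o crushmap.bin        # export the current compiled crushmap
crushtool -d crushmap.bin -o crushmap.txt   # decompile it into editable text
# edit crushmap.txt as shown above, then recompile and inject it
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin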
3. Set the crush ruleset on each pool
ceph osd pool set ssd crush_ruleset 0   # 0 is the ruleset number of the ssd rule in the crushmap above
ceph osd pool set sata crush_ruleset 1  # 1 is the ruleset number of the sata rule in the crushmap above
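To double-check that the rules took effect, each pool's ruleset can be read back (pool names as created in step 1):
ceph osd pool get ssd crush_ruleset
ceph osd pool get sata crush_ruleset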
That completes the configuration on the Ceph side; next, configure the Cinder side.
4. Configuration on the cinder-volume node
Edit /etc/cinder/cinder.conf and add the following:
[DEFAULT]
enabled_backends = ssd,sata

# one named section per backend listed in enabled_backends
[ssd]
volume_driver = cinder.volume.driver.RBDDriver
rbd_pool = ssd
volume_backend_name = ssd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = XXXXXXXXX

[sata]
volume_driver = cinder.volume.driver.RBDDriver
rbd_pool = sata
volume_backend_name = sata
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = XXXXXXXXX
5. Create two Cinder volume types
cinder type-create ssd
cinder type-create sata
root@controller:~# cinder type-list
+--------------------------------------+------+
| ID                                   | Name |
+--------------------------------------+------+
| 707e887d-95e5-45ca-b7df-53a51fadf458 | ssd  |
| 82c32938-f1e5-4e22-a4b9-b0920c4543e7 | sata |
+--------------------------------------+------+
6. Set the extra-spec key on each volume type
cinder type-key ssd set volume_backend_name=ssd
cinder type-key sata set volume_backend_name=sata
root@controller:~# cinder extra-specs-list
+--------------------------------------+------+-----------------------------------+
| ID                                   | Name | extra_specs                       |
+--------------------------------------+------+-----------------------------------+
| 707e887d-95e5-45ca-b7df-53a51fadf458 | ssd  | {u'volume_backend_name': u'ssd'}  |
| 82c32938-f1e5-4e22-a4b9-b0920c4543e7 | sata | {u'volume_backend_name': u'sata'} |
+--------------------------------------+------+-----------------------------------+
7. Finally, restart the services
restart cinder-api ; sudo restart cinder-scheduler
And on the cinder-volume node:
restart cinder-volume
8. Verify that everything works
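One way to verify, sketched here with hypothetical volume names: confirm that a cinder-volume service is up for each backend, create one small volume per type, and check that each volume's RBD image lands in the expected pool.
cinder service-list
cinder create --volume-type ssd --display-name test-ssd 1
cinder create --volume-type sata --display-name test-sata 1
rbd -p ssd ls    # should list the volume created with the ssd type
rbd -p sata ls   # should list the volume created with the sata type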
Troubleshooting notes:
While modifying the Ceph crushmap you will typically run into a few situations like the following.
The first was caused by the pool size: mine was set to 2, but after the crushmap change each bucket was made up of a single host, so the pool size had to be lowered to 1.
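Changing the replication size is one command per pool; using the ssd pool as an example (repeat for sata as needed):
ceph osd pool set ssd size 1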
Even after I changed the pool size there were still some problems: the status showed PGs in the remapped state, which normally means the OSD set a PG is currently mapped to differs from the set that CRUSH now computes.
Check which OSDs the PGs in the remapped state are sitting on.
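The remapped PGs and the OSDs they sit on can be pulled out of the PG dump, for example with a rough filter like this (column layout varies between Ceph versions):
ceph pg dump | grep remapped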
The pg dump output showed that, although I had set the pool size to 1, some PGs still had a size of 2; for example, the acting set of pg 1.7e contained osd.2 and osd.0, so I restarted osd.2 and osd.0 to make the PGs on those two OSDs peer again.
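How an individual OSD daemon is restarted depends on the distribution and init system; on an Upstart-based Ubuntu node (an assumption based on the restart commands used earlier), it would look roughly like:
restart ceph-osd id=2
restart ceph-osd id=0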
The status gradually improved, and in the end I restarted all of the OSDs.