Configuration on the Ceph side
(1) Create three pools in Ceph, one each for Cinder, Glance, and Nova:
cephadmin@ceph001:~$ ceph osd pool create volumes 64
pool 'volumes' created
cephadmin@ceph001:~$ ceph osd pool create images 64
pool 'images' created
cephadmin@ceph001:~$ ceph osd pool create vms 64
pool 'vms' created
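To double-check that the pools were created, list them; volumes, images, and vms should all appear in the output. (On newer Ceph releases, Luminous and later, the official guide also initializes each pool for RBD with rbd pool init <pool>; that step is not needed on an older release like the one used here.)
cephadmin@ceph001:~$ ceph osd lspools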
(2) Copy the Ceph configuration file to each OpenStack node
The nodes running glance-api, cinder-volume, nova-compute, and cinder-backup act as Ceph clients. Each requires the ceph.conf file:
root@ceph001:~# ssh openstack tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
root@openstack's password:
[global]
fsid = 4f1100a0-bc37-4472-b0b0-58b44eabac97
mon_initial_members = ceph001
mon_host = 192.168.20.178
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
root@ceph001:~# ssh openstack-compute tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
The authenticity of host 'openstack-compute (192.168.20.182)' can't be established.
ECDSA key fingerprint is b7:bf:c5:81:0d:a0:2a:2d:94:2f:c1:16:78:f3:9f:b2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'openstack-compute,192.168.20.182' (ECDSA) to the list of known hosts.
root@openstack-compute's password:
[global]
fsid = 4f1100a0-bc37-4472-b0b0-58b44eabac97
mon_initial_members = ceph001
mon_host = 192.168.20.178
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
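A quick way to confirm the file arrived intact on each node is to compare checksums, for example:
root@ceph001:~# md5sum /etc/ceph/ceph.conf
root@ceph001:~# ssh openstack md5sum /etc/ceph/ceph.conf
root@ceph001:~# ssh openstack-compute md5sum /etc/ceph/ceph.conf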
(3) Install the Ceph client packages on each node
http://docs.ceph.com/docs/master/rbd/rbd-openstack/
On the glance-api node, you’ll need the Python bindings for librbd:
#apt-get install python-ceph //It seems the N (Newton) release of OpenStack already ships these bindings. The official guide says to install python-rbd instead, but I could not find that package in any repository, and a manually downloaded copy refused to install because it conflicts with python-ceph.
On the nova-compute, cinder-backup, and cinder-volume nodes, use both the Python bindings and the client command-line tools:
#apt-get install ceph-common
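As a quick sanity check that the packages are in place, each node where ceph-common was installed should now be able to report a client version:
#ceph --version
#rbd --version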
(4) SETUP CEPH CLIENT AUTHENTICATION
If you have cephx authentication enabled, create a new user for Nova/Cinder and Glance. Execute the following:
#The cinder user is used by both Cinder and Nova and needs access to all three pools: rwx on volumes and vms, rx on images
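The commands that actually create these users are not shown in my transcript; taken from the official guide linked above (capability syntax as documented for that release), they look like this:
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'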
ceph auth get-or-create client.glance | ssh openstack sudo tee /etc/ceph/ceph.client.glance.keyring
ssh openstack sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring //My OpenStack environment here is devstack, so the glance user should be replaced with stack
ceph auth get-or-create client.cinder | ssh openstack sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh openstack sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring //My OpenStack environment here is devstack, so the cinder user should be replaced with stack
ceph auth get-or-create client.cinder | ssh openstack-compute sudo tee /etc/ceph/ceph.client.cinder.keyring
Create a temporary copy of the secret key on the nodes running nova-compute:
ceph auth get-key client.cinder | ssh openstack tee client.cinder.key
ceph auth get-key client.cinder | ssh openstack-compute tee client.cinder.key
Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:
root@openstack:~# uuidgen
2862d317-e8df-4a5c-af7a-387ab2bc7ef5
root@openstack:/etc/ceph# ls
ceph.client.cinder.keyring ceph.client.glance.keyring ceph.conf
root@openstack:/etc/ceph# cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
> <uuid>2862d317-e8df-4a5c-af7a-387ab2bc7ef5</uuid>
> <usage type='ceph'>
> <name>client.cinder secret</name>
> </usage>
> </secret>
>
> EOF
root@openstack:~# virsh secret-define --file secret.xml
Secret 2862d317-e8df-4a5c-af7a-387ab2bc7ef5 created
root@openstack:~# virsh secret-set-value --secret 2862d317-e8df-4a5c-af7a-387ab2bc7ef5 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Secret value set
root@openstack:~#
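To confirm that libvirt has the secret registered, it can be listed afterwards:
root@openstack:~# virsh secret-list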
//The operations above must be repeated on every nova-compute node
//openstack-compute uuid c0f5d1a4-b086-4c0c-984e-2f4e84f0f9c5
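For reference, the same sequence on openstack-compute with its own UUID would look roughly like this (sketch only; the secret.xml written there carries the c0f5d1a4-... UUID instead):
root@openstack-compute:~# virsh secret-define --file secret.xml
root@openstack-compute:~# virsh secret-set-value --secret c0f5d1a4-b086-4c0c-984e-2f4e84f0f9c5 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml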
//Important: You don’t necessarily need the UUID on all the compute nodes. However, from a platform consistency perspective, it’s better to keep the same UUID.
CONFIGURE OPENSTACK TO USE CEPH
(1) CONFIGURING GLANCE
Edit /etc/glance/glance-api.conf and add the following under the [glance_store] section:
[glance_store]
stores = glance.store.rbd.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:
show_image_direct_url = True
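Note that show_image_direct_url exposes the backend storage location through the Glance API, so an endpoint with this option enabled should not be publicly accessible. After editing the file, the Glance API service must be restarted for the changes to take effect; on a packaged install this would be something like the command below, while under devstack the g-api service should be restarted instead:
#service glance-api restart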
(2) CONFIGURING CINDER
OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding: