1. Requirements
As the image service in OpenStack, Glance supports multiple storage adapters: images can be kept on the local filesystem, on an HTTP server, in the Ceph distributed storage system, or on open-source distributed filesystems such as GlusterFS and Sheepdog. This article describes how to integrate Glance with Ceph.
Glance currently uses local filesystem storage, with images under the default path /var/lib/glance/images. Once the backend is switched from the local filesystem to the distributed Ceph backend, the images already in the system become unusable, so it is recommended to delete the current images and, after Ceph is deployed, upload them all again into Ceph.
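A minimal cleanup sketch, assuming the glance CLI already has working credentials (the awk parsing of the v1 client's table output is an assumption):
# Delete every existing local image before switching the backend.
# `glance image-list` prints an ASCII table; the UUID sits in column 2.
for id in $(glance image-list | awk -F'|' 'NR>3 {gsub(/ /,"",$2); print $2}'); do
    [ -n "$id" ] && glance image-delete "$id"
done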
2. How It Works
Using Ceph's RBD interface requires going through libvirt, so libvirt and QEMU must be installed on the client machines. In the Ceph/OpenStack integration, there are three places where OpenStack needs backend storage:
1. Glance images: stored locally by default, under /var/lib/glance/images.
2. Nova instance disks: stored locally by default, under /var/lib/nova/instances.
3. Cinder volumes: backed by LVM by default.
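Before going further, it is worth confirming that the client-side QEMU build actually supports rbd; a quick check (an added step, not from the original environment):
# qemu-img lists rbd among its supported formats when built against librbd
qemu-img --help | grep -o rbd
rpm -q libvirt qemu-img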
3. Integrating Glance with Ceph
1. Create a storage pool
1. Ceph creates one pool, rbd, by default
[iyunv@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,
[iyunv@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
nothing is going on
2. Create a pool with pg_num set to 128
[iyunv@controller_10_1_2_230 ~]# ceph osd pool create images 128
pool 'images' created
3. Check the pool's pg_num and pgp_num
[iyunv@controller_10_1_2_230 ~]# ceph osd pool get images pg_num
pg_num: 128
[iyunv@controller_10_1_2_230 ~]# ceph osd pool get images pgp_num
pgp_num: 128
4. List the pools in Ceph
[iyunv@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,1 images,
[iyunv@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
nothing is going on
pool images id 1 # a new pool was added, with id 1
nothing is going on
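The choice of 128 placement groups follows the usual rule of thumb from the Ceph documentation: total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. With the two OSDs visible later in ceph auth list and a replica size of 2 (an assumption about this cluster), (2 × 100) / 2 = 100, which rounds up to 128. The replica count itself can be checked with:
ceph osd pool get images size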
2. Configure the Ceph client
1. Glance, as a Ceph client (the glance-api service), needs a copy of the Ceph configuration file; copying it over from a Ceph monitor node is enough. In my environment the controller node and the Ceph monitor are the same machine, so no copy is needed.
# If the controller node and the Ceph monitor node are separate machines, copy the file:
[iyunv@controller_10_1_2_230 ~]# scp /etc/ceph/ceph.conf root@controller_10_1_2_230:/etc/ceph/
ceph.conf
2. Install the client RPM package
[iyunv@controller_10_1_2_230 ~]# yum install python-rbd -y
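A quick import check (an added sanity step) confirms the bindings installed cleanly:
python -c "import rados, rbd; print('python-rbd OK')"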
3. Configure Ceph authentication
1. Create the authentication key
[iyunv@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'class-read object_prefix rbd_children,allow rwx pool=images'
[client.glance]
key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
2. List the auth entries
[iyunv@controller_10_1_2_230 ~]# ceph auth list
installed auth entries:
osd.0
key: AQDsx6lWYGehDxAAGwcYP9jDvH2Zaa8JlGwj1Q==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQD1x6lWQCYBERAAjIKO1LVpj8FvVefDvNQZSA==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQCexqlWQL6OGBAA2v5LsYEB5VgLyq/K2huY3A==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQCexqlWUMNRMRAAZEp/UlhQuaixMcNy5d5pPw==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
key: AQCexqlWQFfpJBAAfPCx4sTLNztBESyFKys9LQ==
caps: [mon] allow profile bootstrap-osd
client.glance # the credentials Glance uses to connect to Ceph
key: AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
caps: [mon] allow r
caps: [osd] class-read object_prefix rbd_children,allow rwx pool=images
3. Export the key generated for client.glance to a keyring file
[iyunv@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance
[client.glance]
key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
# export the key to the client's keyring file
[iyunv@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
[iyunv@controller_10_1_2_230 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[iyunv@controller_10_1_2_230 ~]# ll /etc/ceph/ceph.client.glance.keyring
-rw-r--r-- 1 glance glance 64 Jan 28 17:17 /etc/ceph/ceph.client.glance.keyring
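With ownership set, the key can be verified end to end (a check added here, not in the original); the mon 'allow r' cap is sufficient for it:
# Listing pools as client.glance proves both the keyring and the caps work:
ceph --id glance -k /etc/ceph/ceph.client.glance.keyring osd lspools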
4. Configure Glance to use Ceph as the backend store
1. Back up the glance-api configuration file so it can be restored if needed
[iyunv@controller_10_1_2_230 ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig
2. Edit the Glance configuration file to connect to Ceph
[iyunv@controller_10_1_2_230 ~]# vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = messaging
rabbit_hosts = 10.1.2.230:5672
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_ha_queues = True
rabbit_durable_queues = False
rabbit_userid = glance
rabbit_password = GLANCE_MQPASS
rabbit_virtual_host = /glance
default_store=rbd # the backend store Glance uses
known_stores=glance.store.rbd.Store # load the rbd store driver
rbd_store_ceph_conf=/etc/ceph/ceph.conf # Ceph config file; holds the monitor addresses, through which the auth info is looked up
rbd_store_user=glance # the Ceph auth user created above
rbd_store_pool=images # the pool images are written to
rbd_store_chunk_size=8 # chunk size, in MB, that images are striped into
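After saving the file, a quick grep (an added verification step) confirms the rbd settings are in place:
grep -E '^(default_store|known_stores|rbd_store_)' /etc/glance/glance-api.conf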
3. Restart the Glance services
[iyunv@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-api restart
Stopping openstack-glance-api: [ OK ]
Starting openstack-glance-api: [ OK ]
[iyunv@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-registry restart
Stopping openstack-glance-registry: [ OK ]
Starting openstack-glance-registry: [ OK ]
[iyunv@controller_10_1_2_230 ~]# tail -2 /etc/glance/glance-api.conf
# location strategy defined by the 'location_strategy' config option.
#store_type_preference =
[iyunv@controller_10_1_2_230 ~]# tail -2 /var/log/glance/registry.log
2016-01-28 18:40:25.231 21890 INFO glance.wsgi.server [-] Started child 21896
2016-01-28 18:40:25.232 21896 INFO glance.wsgi.server [-] (21896) wsgi starting up on http://0.0.0.0:9191/
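The registry log only shows the service starting. If the upload test below fails, errors from the rbd store (a bad keyring path, a wrong pool name, an unreachable monitor) land in the api log instead:
tail -f /var/log/glance/api.log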
5. Test the Glance/Ceph integration
[iyunv@controller_10_1_2_230 ~]# glance --debug image-create --name glance_ceph_test --disk-format qcow2 --container-format bare --file cirros-0.3.3-x86_64-disk.img
curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-size: 13200896' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: 062af9027a85487997d176c9f1e963f2' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: glance_ceph_test' -d '<open file u'cirros-0.3.3-x86_64-disk.img', mode 'rb' at 0x1ba24b0>' http://controller:9292/v1/images
HTTP/1.1 201 Created
content-length: 489
etag: 133eae9fb1c98f45894a4e60d8736619
location: http://controller:9292/v1/images/348a90e8-3631-4a66-a45d-590ec6413e7d
date: Thu, 28 Jan 2016 10:42:06 GMT
content-type: application/json
x-openstack-request-id: req-b993bc0b-447e-49b4-a8ce-bd7765199d5a
{"image": {"status": "active", "deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": "2016-01-28T10:42:06", "owner": "ef4b83a909dc4689b663ff2c70022478", "min_disk": 0, "is_public": false, "deleted_at": null, "id": "348a90e8-3631-4a66-a45d-590ec6413e7d", "size": 13200896, "virtual_size": null, "name": "glance_ceph_test", "checksum": "133eae9fb1c98f45894a4e60d8736619", "created_at": "2016-01-28T10:42:04", "disk_format": "qcow2", "properties": {}, "protected": false}}
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2016-01-28T10:42:04 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 348a90e8-3631-4a66-a45d-590ec6413e7d |
| is_public | False |
| min_disk | 0 |
| min_ram | 0 |
| name | glance_ceph_test |
| owner | ef4b83a909dc4689b663ff2c70022478 |
| protected | False |
| size | 13200896 |
| status | active |
| updated_at | 2016-01-28T10:42:06 |
| virtual_size | None |
+------------------+--------------------------------------+
[iyunv@controller_10_1_2_230 ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 56e96957-1308-45c7-9c66-1afff680b217 | cirros-0.3.3-x86_64 | qcow2 | bare | 13200896 | active |
| 348a90e8-3631-4a66-a45d-590ec6413e7d | glance_ceph_test | qcow2 | bare | 13200896 | active | # upload succeeded
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
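To see where Glance actually placed the image, the v2 API can expose the rbd location, provided show_image_direct_url = True is set in glance-api.conf (an optional setting, not enabled in the walkthrough above):
# direct_url takes the form rbd://<fsid>/images/<image-id>/snap
glance --os-image-api-version 2 image-show 348a90e8-3631-4a66-a45d-590ec6413e7d | grep direct_url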
6. Inspect the data in the Ceph pool
[iyunv@controller_10_1_2_230 ~]# rados -p images ls
rbd_directory
rbd_header.10d7caaf292
rbd_data.10dd1fd73446.0000000000000001
rbd_id.348a90e8-3631-4a66-a45d-590ec6413e7d
rbd_header.10dd1fd73446
rbd_data.10d7caaf292.0000000000000000
rbd_data.10dd1fd73446.0000000000000000
rbd_id.8a09b280-5916-44c6-9ce8-33bb57a09dad # the Glance image data is now stored in Ceph
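The rados objects above are the chunks of whole rbd images; Glance names each rbd image after the Glance image UUID, so the upload can also be inspected through the rbd CLI:
rbd -p images ls
rbd -p images info 348a90e8-3631-4a66-a45d-590ec6413e7d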
4. Summary
Storing Glance image data in Ceph is an excellent solution. It keeps the image data safe, and when Glance and Nova share the same storage cluster, virtual machines can be created from images via copy-on-write clones, bringing VM creation down to a matter of seconds.
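At the rbd level, that copy-on-write path looks roughly like the sketch below. The vms pool name is an assumption about the Nova side, and cloning requires format-2 rbd images, which the Glance rbd store creates:
# Snapshot the image, protect the snapshot, then clone it as an instance
# disk; the clone shares unmodified data with its parent (copy-on-write).
rbd -p images snap create 348a90e8-3631-4a66-a45d-590ec6413e7d@snap
rbd -p images snap protect 348a90e8-3631-4a66-a45d-590ec6413e7d@snap
rbd clone images/348a90e8-3631-4a66-a45d-590ec6413e7d@snap vms/test-instance-disk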