
Configuring OpenStack to use Ceph (glance, cinder, nova)

  I. Integrating Ceph with OpenStack (Kilo)
  Note: the configuration files differ slightly between versions; see the official documentation:
  http://docs.ceph.com/docs/master/rbd/rbd-openstack/#any-openstack-version
  Environment:
  192.168.10.95   glance (control node: glance, cinder)
  192.168.10.99   network01
  192.168.10.101  compute01
  

  1. Install the Ceph client:
  apt-get install python-ceph -y
  apt-get install ceph-common -y
  

  2. Create a pool:
  ceph osd pool create datastore 512
  rados lspools
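  The cinder and nova sections below also use a second pool, datastore2; create it the same way:
  ceph osd pool create datastore2 512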
  3. Create users:
  ceph auth get-or-create client.kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore'
  ceph auth get-or-create client.kilo | ssh 192.168.10.95 sudo tee /etc/ceph/ceph.client.kilo.keyring
  

  ceph auth get-or-create client.kilo2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore2'
  ceph auth get-or-create client.kilo2 | ssh 192.168.10.95 sudo tee /etc/ceph/ceph.client.kilo2.keyring
  

  ssh 192.168.10.95 sudo chmod +r /etc/ceph/ceph.client.kilo.keyring
  Copy the /etc/ceph/ceph.conf file to the OpenStack nodes:
  ssh 192.168.10.95 sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
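  Optional sanity check on the OpenStack node, assuming the keyring and ceph.conf are now in place (this uses the client.kilo identity created above, which has mon read access):
  ceph --id kilo -s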
  

  Ceph with glance
  Edit the glance-api.conf file.
  The exact settings are in the Ceph documentation; their location can differ between OpenStack versions:
  http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configure-openstack-to-use-ceph

  [DEFAULT]
  show_image_direct_url = True

  [glance_store]
  default_store = rbd
  stores = rbd
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  # kilo is the Ceph user created above
  rbd_store_user = kilo
  # datastore is the pool created above
  rbd_store_pool = datastore
  Restart the glance services:
  service glance-api restart
  service glance-registry restart
  Upload an image to test whether Ceph is working as the glance backend
  (see the glance usage section above).
  List the objects in the datastore pool:
  rados --pool=datastore ls
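  A minimal upload sketch (the image file name here is just an example; raw format avoids a conversion step for RBD-backed images):
  glance image-create --name "cirros-rbd-test" --disk-format raw --container-format bare --file cirros-0.3.4-x86_64-disk.raw --progress
  After a successful upload, the rados listing above should show the image's objects.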
  

  ###############################################
  vi /etc/glance/glance-api.conf

  [DEFAULT]
  show_image_direct_url = True

  [glance_store]
  stores = rbd
  default_store = rbd
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  rbd_store_user = kilo
  rbd_store_pool = datastore
  

  Make these changes on the glance node.
  Script-based alternative:
  crudini --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
  

  crudini --set /etc/glance/glance-api.conf glance_store stores rbd
  crudini --set /etc/glance/glance-api.conf glance_store default_store rbd
  crudini --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
  crudini --set /etc/glance/glance-api.conf glance_store rbd_store_user kilo
  crudini --set /etc/glance/glance-api.conf glance_store rbd_store_pool datastore
  ###############################################
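  Optionally confirm that a value was written, e.g.:
  crudini --get /etc/glance/glance-api.conf glance_store default_store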
  

  Create the secret used by nova and cinder
  1. Generate a UUID on the OpenStack compute node:
  root@ubuntu:~# uuidgen
  9fa61a4a-da28-4b24-a319-52e6dee46660
  Create a temporary file:
  vi secret.xml
  <secret ephemeral='no' private='no'>
  <uuid>9fa61a4a-da28-4b24-a319-52e6dee46660</uuid>
  <usage type='ceph'>
  <name>client.kilo secret</name>
  </usage>
  </secret>
  

  On the compute node, define the secret from the secret.xml file:
  virsh secret-define --file secret.xml
  Secret 9fa61a4a-da28-4b24-a319-52e6dee46660 created
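  Note: the UUID defined in secret.xml is the value that must later be set as rbd_secret_uuid in cinder.conf and nova.conf. The examples below use cf133036-7099-43e3-b60f-dc487c72d3d0, which came from a different run; whichever UUID you define, use it consistently.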
  

  

  Install the Ceph client on the compute node:
  apt-get install python-ceph -y
  apt-get install ceph-common -y
  mkdir /etc/ceph
  On the Ceph server, push the keyring to the compute node:
  ceph auth get-or-create client.kilo2 | ssh 192.168.10.101 sudo tee /etc/ceph/ceph.client.kilo2.keyring
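  The base64 string passed to virsh secret-set-value below is the key of client.kilo2; a sketch of how to fetch it onto the compute node (run on the Ceph server):
  ceph auth get-key client.kilo2 | ssh 192.168.10.101 tee client.kilo2.key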
  Tell libvirt to use the key:
  

  virsh secret-set-value --secret cf133036-7099-43e3-b60f-dc487c72d3d0 --base64 AQBmpBBXWIB0FxAAPWDi60w6jImcwuzWcZAvbQ== && rm ceph.client.kilo2.keyring secret.xml
  List the defined secrets:
  virsh secret-list
  

  

  Ceph with cinder
  Edit the cinder.conf file:
  
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = datastore
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_flatten_volume_from_snapshot = false
  rbd_max_clone_depth = 5
  rbd_user = kilo
  glance_api_version = 2
  rbd_secret_uuid = cf133036-7099-43e3-b60f-dc487c72d3d0
  Restart the services:
  service cinder-api restart
  service cinder-scheduler restart
  service cinder-volume restart
  Test whether cinder is using Ceph.
  Create a 1 GB volume named cephVolume:
  cinder create --display-name cephVolume 1
  Check cinder list and rados --pool=datastore ls to verify that cephVolume ended up in Ceph.
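  The RBD driver stores each volume as an RBD image named volume-<volume id>, so the pool can also be checked with:
  rbd --pool datastore ls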
  

  ############################################
  vi /etc/cinder/cinder.conf
  
  [DEFAULT]
  .......
  enabled_backends = ceph

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = datastore2
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_flatten_volume_from_snapshot = false
  rbd_max_clone_depth = 5
  rbd_user = kilo2
  glance_api_version = 2
  rbd_secret_uuid = cf133036-7099-43e3-b60f-dc487c72d3d0
  

  Script-based alternative:
  On the control node (192.168.10.95, which runs both glance and cinder):
  crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
  

  

  crudini --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
  crudini --set /etc/cinder/cinder.conf ceph rbd_pool datastore2
  crudini --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
  crudini --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
  crudini --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
  crudini --set /etc/cinder/cinder.conf ceph rbd_user kilo2
  crudini --set /etc/cinder/cinder.conf ceph glance_api_version 2
  crudini --set /etc/cinder/cinder.conf ceph rbd_secret_uuid cf133036-7099-43e3-b60f-dc487c72d3d0
  

  ############################################
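  Restart the cinder services after either method so that the [ceph] backend takes effect.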
  
  Ceph with nova
  Edit the nova.conf file on the compute node:

  [libvirt]
  images_type = rbd
  images_rbd_pool = datastore2
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = kilo2
  rbd_secret_uuid = cf133036-7099-43e3-b60f-dc487c72d3d0
  inject_password = false
  inject_key = false
  # -2 disables disk injection entirely
  inject_partition = -2
  

  #################################################################
  On the compute node:
  

  crudini --set /etc/nova/nova.conf libvirt images_type rbd
  crudini --set /etc/nova/nova.conf libvirt images_rbd_pool datastore2
  crudini --set /etc/nova/nova.conf libvirt images_rbd_ceph_conf /etc/ceph/ceph.conf
  crudini --set /etc/nova/nova.conf libvirt rbd_user kilo2
  crudini --set /etc/nova/nova.conf libvirt rbd_secret_uuid cf133036-7099-43e3-b60f-dc487c72d3d0
  crudini --set /etc/nova/nova.conf libvirt inject_password false
  crudini --set /etc/nova/nova.conf libvirt inject_key false
  crudini --set /etc/nova/nova.conf libvirt inject_partition -2
  

  #################################################################
  

  Restart nova:
  service nova-compute restart
  Test whether nova is using Ceph:
  create an instance from the dashboard, then check:
  nova list
  rados --pool=datastore2 ls
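  With images_type = rbd, each instance's disk is stored as an RBD image named <instance uuid>_disk, so the pool can also be inspected with:
  rbd --pool datastore2 ls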