[Experience Sharing] Ceph Distributed Storage Study Notes: iSCSI Service Configuration

  This article is in two parts: installing a Ceph Luminous cluster, and then configuring the iSCSI service on top of it.

1. Deploying the Ceph Cluster

[root@ceph01 ~]# yum -y install ceph
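  Only ceph01 is shown above; the ceph package presumably needs to be installed on ceph02 and ceph03 as well, and ceph-deploy is needed on the admin node. A minimal sketch, assuming password-less ssh between the nodes:

[root@ceph01 ~]# yum -y install ceph-deploy
[root@ceph01 ~]# ssh ceph02 "yum -y install ceph"
[root@ceph01 ~]# ssh ceph03 "yum -y install ceph"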
1.1 Ceph Monitor Deployment

[root@ceph01 ~]# mkdir ceph;cd ceph
[root@ceph01 ceph]# ceph-deploy new ceph01 ceph02 ceph03
[root@ceph01 ceph]# ceph-deploy mon create  ceph01 ceph02 ceph03
1.2 Gather the Authentication Keys

[root@ceph01 ceph]# ceph-deploy gatherkeys ceph01 ceph02 ceph03
1.3 Distribute the Ceph Configuration to the Other Nodes

[root@ceph01 ceph]# ceph-deploy admin ceph01 ceph02 ceph03
[root@ceph01 ceph]# ceph -s
  cluster:
    id:     97291641-fb19-49c5-9fd2-d42fe7d78243
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:
1.4 Ceph Manager Deployment

[root@ceph01 ceph]# ceph-deploy mgr create ceph01 ceph02 ceph03
[root@ceph01 ceph]# ceph -s
  cluster:
    id:     97291641-fb19-49c5-9fd2-d42fe7d78243
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph01(active), standbys: ceph02, ceph03
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:
  -- Enable the dashboard

[root@ceph01 ceph]# ceph mgr module enable dashboard
  The dashboard listens on port 7000 by default. Run ceph config-key set mgr/dashboard/server_port $PORT to change the port, and ceph config-key set mgr/dashboard/server_addr $IP to set the address the dashboard binds to.
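  For example, to bind the dashboard to the first node and keep the default port (the values below are illustrative, not taken from the original post):

[root@ceph01 ceph]# ceph config-key set mgr/dashboard/server_addr 192.168.120.81
[root@ceph01 ceph]# ceph config-key set mgr/dashboard/server_port 7000
[root@ceph01 ceph]# ceph mgr module disable dashboard
[root@ceph01 ceph]# ceph mgr module enable dashboard
  Disabling and re-enabling the module is one way to make it pick up the new settings.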

1.5 Ceph OSD Deployment

[root@ceph01 ceph]# ceph-deploy disk zap ceph01:sdb ceph02:sdb ceph03:sdb
[root@ceph01 ceph]# ceph-deploy disk zap ceph01:sdc ceph02:sdc ceph03:sdc
[root@ceph01 ceph]# ceph-deploy osd create ceph01:sdb ceph02:sdb ceph03:sdb
[root@ceph01 ceph]# ceph-deploy osd create ceph01:sdc ceph02:sdc ceph03:sdc
[root@ceph01 ceph]# ceph -s
  cluster:
    id:     97291641-fb19-49c5-9fd2-d42fe7d78243
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph01(active), standbys: ceph02, ceph03
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   6162 MB used, 1193 GB / 1199 GB avail
    pgs:
1.6 Parameters the Official Documentation Recommends Tuning

[root@ceph01 ~]# ceph tell osd.* injectargs '--osd_client_watch_timeout 15'
[root@ceph01 ~]# ceph tell osd.* injectargs '--osd_heartbeat_grace 20'
[root@ceph01 ~]# ceph tell osd.* injectargs '--osd_heartbeat_interval 5'
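  injectargs only changes the values in the running daemons. To keep them across restarts, one option (not shown in the original post) is to add the settings to the [osd] section of the ceph.conf in the ceph-deploy working directory and push it to all nodes, for example:

[root@ceph01 ceph]# vi ceph.conf
[osd]
osd_client_watch_timeout = 15
osd_heartbeat_grace = 20
osd_heartbeat_interval = 5
[root@ceph01 ceph]# ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03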
2. Deploying the iSCSI Service

2.1 Install the Software
  The Ceph iSCSI packages can either be built from the source code provided by Red Hat or installed from the pre-built CentOS packages.

[root@ceph01 ~]# yum install ceph-iscsi-cli tcmu-runner ceph-iscsi-tools
[root@ceph01 ceph]# ceph osd pool create rbd 150 150
[root@ceph01 ceph]# ceph osd pool application enable rbd rbd --yes-i-really-mean-it
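  The packages above are installed on ceph01 only; since ceph02 and ceph03 will also act as iSCSI gateways, the same packages presumably need to be installed on them as well, for example:

[root@ceph01 ~]# ssh ceph02 "yum install ceph-iscsi-cli tcmu-runner ceph-iscsi-tools"
[root@ceph01 ~]# ssh ceph03 "yum install ceph-iscsi-cli tcmu-runner ceph-iscsi-tools"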
2.2 Create the Configuration File
  Create iscsi-gateway.cfg; this file configures the iSCSI gateway service.

[root@ceph01 ~]# vi /etc/ceph/iscsi-gateway.cfg
[config]
# Name of the Ceph storage cluster. A suitable Ceph configuration file allowing
# access to the Ceph storage cluster from the gateway node is required, if not
# colocated on an OSD node.
cluster_name = ceph
# Place a copy of the ceph cluster's admin keyring in the gateway's /etc/ceph
# directory and reference the filename here
gateway_keyring = ceph.client.admin.keyring
# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.
# To support the API, the bare minimum settings are:
api_secure = false
# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
trusted_ip_list = 192.168.120.81,192.168.120.82,192.168.120.83
2.3 Sync the File to the Other Nodes

[root@ceph01 ~]# scp /etc/ceph/iscsi-gateway.cfg ceph02:/etc/ceph
[root@ceph01 ~]# scp /etc/ceph/iscsi-gateway.cfg ceph03:/etc/ceph
2.4 Start the API Service

[root@ceph01 ~]# systemctl daemon-reload
[root@ceph01 ~]# systemctl enable rbd-target-api
[root@ceph01 ~]# systemctl start rbd-target-api
[root@ceph01 ~]# systemctl status rbd-target-api
● rbd-target-api.service - Ceph iscsi target configuration API
   Loaded: loaded (/usr/lib/systemd/system/rbd-target-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-05-31 11:35:04 CST; 4s ago
 Main PID: 25372 (rbd-target-api)
   CGroup: /system.slice/rbd-target-api.service
           └─25372 /usr/bin/python /usr/bin/rbd-target-api

May 31 11:35:04 ceph01 systemd[1]: Started Ceph iscsi target configuration API.
May 31 11:35:04 ceph01 systemd[1]: Starting Ceph iscsi target configuration API...
May 31 11:35:05 ceph01 rbd-target-api[25372]: Started the configuration object watcher
May 31 11:35:05 ceph01 rbd-target-api[25372]: Checking for config object changes every 1s
May 31 11:35:05 ceph01 rbd-target-api[25372]:  * Running on http://0.0.0.0:5000/
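  The status above is from ceph01 only; the API service presumably needs to be enabled and started on ceph02 and ceph03 as well, for example:

[root@ceph01 ~]# ssh ceph02 "systemctl daemon-reload; systemctl enable rbd-target-api; systemctl start rbd-target-api"
[root@ceph01 ~]# ssh ceph03 "systemctl daemon-reload; systemctl enable rbd-target-api; systemctl start rbd-target-api"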
3. Configuring the iSCSI Service

3.1 Create the Target

[root@ceph01 ~]# gwcli
/> cd iscsi-target
/iscsi-target> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
3.2 Create the iSCSI Gateways

/iscsi-target> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
/iscsi-target...-igw/gateways> create ceph01 192.168.120.81 skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...-igw/gateways> create ceph02 192.168.120.82 skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...-igw/gateways> create ceph03 192.168.120.83 skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...-igw/gateways> ls
o- gateways .................................................................................................. [Up: 3/3, Portals: 3]
  o- ceph01 ................................................................................................ [192.168.120.81 (UP)]
  o- ceph02 ................................................................................................ [192.168.120.82 (UP)]
  o- ceph03 ................................................................................................ [192.168.120.83 (UP)]
  If the operating system is not CentOS or RHEL, the skipchecks=true parameter must be added.

3.3 Create the RBD Images

/iscsi-target...-igw/gateways> cd /disks
/disks> create Oracle vol01 100G
ok
/disks> create Oracle vol02 300G
ok
3.4 Create the Client (Initiator) Names
  On Linux, the InitiatorName can be read from /etc/iscsi/initiatorname.iscsi. If you change the default name, you must restart the iscsid service, otherwise logging in to the iSCSI target will fail.
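  For example, on the first client (assuming its hostname is odb03 and that its initiator name is the one registered below):

[root@odb03 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:3d93d2aa7f1:odb03
[root@odb03 ~]# systemctl restart iscsid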

/disks> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts
/iscsi-target...csi-igw/hosts>create iqn.1988-12.com.oracle:3d93d2aa7f1:odb03
ok
/iscsi-target...csi-igw/hosts>create iqn.1988-12.com.oracle:ccd061606e1:odb04
ok
3.5 Configure Client Authentication
  cd into each host entry in turn (the prompt shows which host is currently selected) and set the same CHAP credentials:
/iscsi-target...csi-igw/hosts> cd  iqn.1988-12.com.oracle:3d93d2aa7f1:odb03
/iscsi-target...odb03> auth chap=admin/redhat
/iscsi-target...odb04> auth chap=admin/redhat
3.6 Map Disks to the Clients
  Again, run disk add under each host entry to map the disks to that client:
/iscsi-target...odb03> disk add vol01
ok
/iscsi-target...odb03> disk add vol02
ok
/iscsi-target...odb04> disk add vol01
ok
/iscsi-target...odb04> disk add vol02
ok
  The final layout (the target with its three gateways, the two disks, and the two hosts with their mapped disks) can be reviewed with ls from the root of gwcli.

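  As a minimal sketch (not part of the original post), a login from one of the clients with open-iscsi, using the CHAP credentials set in 3.5, might look like this:

[root@odb03 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.120.81
[root@odb03 ~]# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw -o update -n node.session.auth.authmethod -v CHAP
[root@odb03 ~]# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw -o update -n node.session.auth.username -v admin
[root@odb03 ~]# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw -o update -n node.session.auth.password -v redhat
[root@odb03 ~]# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw -l
  Because the same LUNs are exported through three gateways, the client should also run multipathd so that the multiple paths are grouped correctly.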