This post uses the ceph-deploy tool to quickly set up a Ceph cluster.
1. Environment Preparation
| IP | Hostname | Role |
| --- | --- | --- |
| 10.10.10.20 | admin-node | ceph-deploy |
| 10.10.10.21 | node1 | mon |
| 10.10.10.22 | node2 | osd |
| 10.10.10.23 | node3 | osd |
Configure /etc/hosts so that all nodes can resolve one another by name:
[iyunv@admin-node ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.20 admin-node
10.10.10.21 node1
10.10.10.22 node2
10.10.10.23 node3
Configure the Ceph yum repository (here the Aliyun mirror) on every node:
[iyunv@admin-node ~]# mv /etc/yum.repos.d{,.bak}
[iyunv@admin-node ~]# mkdir /etc/yum.repos.d
[iyunv@node3 ceph]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
Stop the firewall and disable SELinux (on every node):
[iyunv@admin-node ~]# systemctl stop firewalld.service
[iyunv@admin-node ~]# systemctl disable firewalld.service
[iyunv@admin-node ~]# setenforce 0
[iyunv@admin-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
Set up passwordless SSH from admin-node to the other nodes:
[iyunv@admin-node ~]# ssh-keygen
[iyunv@admin-node ~]# ssh-copy-id 10.10.10.21
[iyunv@admin-node ~]# ssh-copy-id 10.10.10.22
[iyunv@admin-node ~]# ssh-copy-id 10.10.10.23
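With passwordless SSH in place, the /etc/hosts entries, the ceph.repo file, and the firewall/SELinux changes above can be pushed from admin-node to the remaining nodes in one pass. The loop below is only a sketch of that idea (the node names come from the table above; the article itself performs these steps on each node by hand):

for node in node1 node2 node3; do
    scp /etc/hosts $node:/etc/hosts                          # sync name resolution
    scp /etc/yum.repos.d/ceph.repo $node:/etc/yum.repos.d/   # sync the Ceph repo
    ssh $node "systemctl stop firewalld; systemctl disable firewalld"
    ssh $node "setenforce 0; sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config"
done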
Install and enable time synchronization with chrony (on every node):
[iyunv@admin-node ~]# yum install chrony -y
[iyunv@admin-node ~]# systemctl restart chronyd
[iyunv@admin-node ~]# systemctl enable chronyd
[iyunv@admin-node ~]# chronyc sources -v    (check whether time is synchronized; a leading * marks a synchronized source)
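chrony must be installed and running on every node, not just admin-node; a quick way to install it remotely and check synchronization across the cluster is sketched below (not part of the original transcript):

for node in node1 node2 node3; do
    ssh $node "yum install -y chrony && systemctl enable chronyd && systemctl restart chronyd"
    ssh $node "chronyc sources -v"    # a leading * marks the source the node is synchronized to
done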
2. Installing Ceph (Jewel)
- Install ceph-deploy
Only the admin-node needs ceph-deploy.
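The install command itself is not captured in the transcript; a typical sequence on the admin-node, assuming ceph-deploy is available from the Aliyun noarch repo configured above, looks like this (the working directory name is inferred from the ceph]# prompts later on):

yum install -y ceph-deploy
mkdir ceph && cd ceph    # working directory for ceph-deploy (inferred, not shown in the original)
ceph-deploy new node1    # declares node1 as the initial monitor and generates the ceph.conf edited below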
Purge the configuration (if you want to reinstall from scratch, you can run the following commands). Run this only on the admin-node.
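The purge commands are likewise not shown; the standard ceph-deploy sequence for wiping a previous attempt is roughly the following (a sketch, using the node names from this article):

ceph-deploy purge admin-node node1 node2 node3        # remove the Ceph packages
ceph-deploy purgedata admin-node node1 node2 node3    # remove /var/lib/ceph and /etc/ceph data
ceph-deploy forgetkeys                                # delete the locally cached keyrings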
Edit the Ceph configuration and set the replica count to 2. Do this only on the admin-node.
[iyunv@admin-node ceph]# vi ceph.conf
[global]
fsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735
mon_initial_members = node1
mon_host = 10.10.10.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2
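If ceph.conf is edited again after the other nodes already have a copy, the change has to be pushed out; with ceph-deploy that is usually done as sketched below (not shown in the original):

ceph-deploy --overwrite-conf config push admin-node node1 node2 node3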
Install Ceph on all nodes (run on the admin-node):
[iyunv@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3
[node3][DEBUG ] Configure Yum priorities to include obsoletes
[node3][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node3][WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
[node3][DEBUG ] Preparing...                 ########################################
[node3][DEBUG ] Upgrading/installing...
[node3][DEBUG ] ceph-release-1-1.el7         ########################################
[node3][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-noarch'
This step errors out because a newer ceph-release package was installed. Fix: yum remove ceph-release. After removing ceph-release on every node, re-run the previous command.
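Because the stray ceph-release package ends up on every node, the fix described above can be run from admin-node in one pass before retrying the install (a sketch of the same fix, not the original transcript):

yum remove -y ceph-release                          # on admin-node itself
for node in node1 node2 node3; do
    ssh $node "yum remove -y ceph-release"          # and on each of the other nodes
done
ceph-deploy install admin-node node1 node2 node3    # then re-run the failed command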
Configure the initial monitor(s) and gather all the keys. Run this only on the admin-node.
[iyunv@admin-node ceph]# ceph-deploy mon create-initial
[iyunv@admin-node ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
[iyunv@admin-node ceph]# ceph -s    (check the cluster status)
    cluster 8d395c8f-6ac5-4bca-bbb9-2e0120159ed9
     health HEALTH_ERR
            no osds
     monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
HEALTH_ERR with "no osds" is expected at this point, since no OSDs have been added yet.
Prepare the OSD storage (node2 and node3 act as OSD nodes):
[iyunv@node2 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0             2:0    1    4K  0 disk
sda             8:0    0   20G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   19G  0 part
  ├─cl-root   253:0    0   17G  0 lvm  /
  └─cl-swap   253:1    0    2G  0 lvm  [SWAP]
sdb             8:16   0   50G  0 disk /var/local/osd0
sdc             8:32   0    5G  0 disk
sr0            11:0    1  4.1G  0 rom
[iyunv@node2 ~]# mkfs.xfs /dev/sdb
[iyunv@node2 ~]# mkdir /var/local/osd0
[iyunv@node2 ~]# mount /dev/sdb /var/local/osd0
[iyunv@node2 ~]# chown ceph:ceph /var/local/osd0
[iyunv@node3 ~]# mkdir /var/local/osd1
[iyunv@node3 ~]# mkfs.xfs /dev/sdb
[iyunv@node3 ~]# mount /dev/sdb /var/local/osd1/
[iyunv@node3 ~]# chown ceph:ceph /var/local/osd1
[iyunv@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1    (run on the admin-node)
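The transcript stops at osd prepare; before the cluster can report HEALTH_OK the OSDs still have to be activated and the admin keyring distributed. With the jewel-era ceph-deploy workflow this typically looks like the following (a sketch, run on the admin-node; not captured in the original output):

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1    # bring the prepared OSDs up and in
ceph-deploy admin admin-node node1 node2 node3                          # push ceph.conf and the admin keyring
chmod +r /etc/ceph/ceph.client.admin.keyring                            # on each node, so non-root users can run ceph commands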
Verify that the cluster is healthy:
[iyunv@admin-node ceph]# ceph health
HEALTH_OK
[iyunv@admin-node ceph]# ceph -s
    cluster 69f64f6d-f084-4b5e-8ba8-7ba3cec9d927
     health HEALTH_OK
     monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15459 MB used, 45950 MB / 61410 MB avail
                  64 active+clean
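One caveat worth noting: the OSD directories were mounted by hand, so the mounts will not survive a reboot unless they are made persistent, for example with an /etc/fstab entry on each OSD node (a sketch, assuming /dev/sdb as used above):

echo "/dev/sdb  /var/local/osd0  xfs  defaults  0 0" >> /etc/fstab    # on node2; use /var/local/osd1 on node3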