
Building Ceph on CentOS 7.4

This article uses the ceph-deploy tool to quickly build a Ceph cluster.

1. Environment Preparation

- Set the hostnames

# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)


IP            Hostname     Role
10.10.10.20   admin-node   ceph-deploy
10.10.10.21   node1        mon
10.10.10.22   node2        osd
10.10.10.23   node3        osd


- Set up DNS resolution (here we edit the /etc/hosts file; configure it on every node, and a sketch for pushing the file from admin-node follows the listing)

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.20 admin-node
10.10.10.21 node1
10.10.10.22 node2
10.10.10.23 node3
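
The hosts file is identical everywhere, so it can be pushed from admin-node in one pass. A minimal sketch, assuming root SSH access to each node (password prompts are expected until key-based login is set up below):

# Copy the prepared /etc/hosts to all other nodes (run on admin-node).
for ip in 10.10.10.21 10.10.10.22 10.10.10.23; do
  scp /etc/hosts root@"$ip":/etc/hosts
done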


- Configure the yum repositories (on every node)

# mv /etc/yum.repos.d{,.bak}
# mkdir /etc/yum.repos.d
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0


- Turn off the firewall and SELinux (on every node; an alternative that keeps firewalld running follows this block)

# systemctl stop firewalld.service
# systemctl disable firewalld.service
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
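
If you prefer to keep firewalld running, a sketch of opening only the ports Ceph uses (TCP 6789 for the monitor, 6800-7300 for the OSD daemons) instead of disabling it:

# Open the Ceph ports and reload the firewall rules (run on each node).
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload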


- Set up passwordless SSH login between the nodes (on every node; a scripted variant follows)

# ssh-keygen
# ssh-copy-id 10.10.10.21
# ssh-copy-id 10.10.10.22
# ssh-copy-id 10.10.10.23
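
The same key distribution can be scripted by hostname. A minimal sketch, run from admin-node and assuming the root account on every node:

# Generate one key pair (no passphrase) and copy it to each node.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in node1 node2 node3; do
  ssh-copy-id root@"$host"
done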


- Use chrony to synchronize time (on every node; a cross-node check follows)

# yum install chrony -y
# systemctl restart chronyd
# systemctl enable chronyd
# chronyc sources -v    (check whether time is synchronized; a * marks the source currently being synced to)
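
To confirm that every node really is in sync, the status can be polled from admin-node over SSH. A sketch, assuming the passwordless login configured above:

# Print each node's chrony status; a small "System time" offset and
# "Leap status : Normal" indicate the clock is synchronized.
for host in node1 node2 node3; do
  echo "== $host =="
  ssh "$host" 'chronyc tracking | grep -E "Reference ID|System time|Leap status"'
done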

2. Installing Ceph (Jewel)

- Install ceph-deploy (only on the admin-node)

# yum install ceph-deploy -y

- On the admin node, create a directory to hold the configuration files and keys generated by ceph-deploy (only on the admin-node)

# mkdir /etc/ceph
# cd /etc/ceph/


- Purge old configuration (run the following if you want to reinstall from scratch; only on the admin-node)

# ceph-deploy purgedata node1 node2 node3
# ceph-deploy forgetkeys


- Create the cluster (only on the admin-node)

# ceph-deploy new node1


- Edit the Ceph configuration and set the default replica count to 2 (only on the admin-node; a non-interactive variant follows the file)

# vi ceph.conf

[global]
fsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735
mon_initial_members = node1
mon_host = 10.10.10.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2
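
If you prefer not to open an editor, the same setting can be appended from the shell (a sketch; run it in /etc/ceph, where ceph-deploy wrote ceph.conf, before installing Ceph):

# Append the replica-count setting; the generated ceph.conf only has a
# [global] section, so the appended line lands there.
echo "osd pool default size = 2" >> ceph.conf
grep "osd pool default size" ceph.conf    # verify the line is present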


- Install Ceph (only on the admin-node)

# ceph-deploy install admin-node node1 node2 node3
Configure Yum priorities to include obsoletes
check_obsoletes has been enabled for Yum priorities plugin
Running command: rpm --import https://download.ceph.com/keys/release.asc
Running command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-je ... -1-0.el7.noarch.rpm
Retrieving https://download.ceph.com/rpm-je ... -1-0.el7.noarch.rpm
warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
Preparing...                        ########################################
Upgrading/installing...
ceph-release-1-1.el7                ########################################
ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
RuntimeError: NoSectionError: No section: 'ceph-noarch'

This step failed because a newer ceph-release package was pulled in. Fix: run yum remove ceph-release on every node to remove ceph-release, then re-run the previous command (a sketch for doing this over SSH follows).
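
Since the stray ceph-release package has to be removed from every node, it can be done in one pass. A sketch, run on admin-node and assuming the passwordless SSH configured earlier:

# Remove the conflicting ceph-release package everywhere, then retry the install.
for host in admin-node node1 node2 node3; do
  ssh "$host" 'yum remove -y ceph-release'
done
ceph-deploy install admin-node node1 node2 node3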


- Create the initial monitor(s) and gather all the keys (only on the admin-node)

# ceph-deploy mon create-initial
# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
# ceph -s    (check the cluster status)
    cluster 8d395c8f-6ac5-4bca-bbb9-2e0120159ed9
     health HEALTH_ERR
            no osds
     monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating


- Create the OSDs (node2 and node3 act as OSD nodes; a combined SSH sketch follows this block)

# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0           2:0    1    4K  0 disk
sda           8:0    0   20G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   19G  0 part
  ├─cl-root 253:0    0   17G  0 lvm  /
  └─cl-swap 253:1    0    2G  0 lvm
sdb           8:16   0   50G  0 disk /var/local/osd0
sdc           8:32   0    5G  0 disk
sr0          11:0    1  4.1G  0 rom

On node2:
# mkfs.xfs /dev/sdb
# mkdir /var/local/osd0
# mount /dev/sdb /var/local/osd0
# chown ceph:ceph /var/local/osd0

On node3:
# mkdir /var/local/osd1
# mkfs.xfs /dev/sdb
# mount /dev/sdb /var/local/osd1/
# chown ceph:ceph /var/local/osd1

On the admin-node:
# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
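
The per-node filesystem preparation can also be driven from admin-node over SSH. A minimal sketch with a hypothetical helper, assuming /dev/sdb is the data disk on both OSD nodes as the lsblk output shows:

# prep_osd_dir <node> <dir>: format /dev/sdb, mount it at <dir>, give it to the ceph user.
prep_osd_dir() {
  ssh "$1" "mkfs.xfs /dev/sdb && mkdir -p $2 && mount /dev/sdb $2 && chown ceph:ceph $2"
}
prep_osd_dir node2 /var/local/osd0
prep_osd_dir node3 /var/local/osd1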


- Copy the keys and configuration files from admin-node to each node (only on the admin-node)

# ceph-deploy admin admin-node node1 node2 node3


- Make sure ceph.client.admin.keyring has the correct read permission (only on the OSD nodes; a loop sketch follows)

# chmod +r /etc/ceph/ceph.client.admin.keyring
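
Because the permission has to be adjusted on each OSD node, a small loop from admin-node avoids logging in twice (a sketch):

# Make the admin keyring readable on every OSD node.
for node in node2 node3; do
  ssh "$node" 'chmod +r /etc/ceph/ceph.client.admin.keyring'
done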


- Run ceph-deploy on the admin node to prepare the OSDs

# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1


- Activate the OSDs

# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1


- Check the cluster's health (a simple write/read smoke test follows the output)

# ceph health
HEALTH_OK
# ceph -s
    cluster 69f64f6d-f084-4b5e-8ba8-7ba3cec9d927
     health HEALTH_OK
     monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15459 MB used, 45950 MB / 61410 MB avail
                  64 active+clean
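
As a final sanity check, an object can be written and read back; the pool name rbdtest and object name obj1 below are arbitrary examples, not part of the original walkthrough:

# Create a small test pool, store one object, and confirm it is listed.
ceph osd pool create rbdtest 64             # 64 placement groups
echo "hello ceph" > /tmp/hello.txt
rados -p rbdtest put obj1 /tmp/hello.txt    # upload the file as object "obj1"
rados -p rbdtest ls                         # should print: obj1
ceph df                                     # per-pool usage overview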

