Storage is a key component of any cloud infrastructure, so this article describes how I use Ceph here.
The cloud software is OpenStack, Mitaka release; the deployment OS is CentOS 7.1; the Ceph version is 10.2.2.
I chose Ceph because it is free, open source, and well supported, and because most cloud deployments on the market use Ceph for storage.
This article also draws on http://www.vpsee.com/2015/07/install-ceph-on-centos-7/
Contents
I. Installing Ceph
II. Using the Ceph cluster from OpenStack
III. Using Ceph with Glance
IV. Removing an OSD node
V. Using mixed disks with Ceph
The installation starts below.
The official guide is also worth consulting: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
I. Installing Ceph
Host environment
One admin node, three monitor nodes, three OSD nodes, with a replica count of 2.
The hosts entries below are present on every machine:
10.10.128.18 ck-ceph-adm
10.10.128.19 ck-ceph-mon1
10.10.128.20 ck-ceph-mon2
10.10.128.21 ck-ceph-mon3
10.10.128.22 ck-ceph-osd1
10.10.128.23 ck-ceph-osd2
10.10.128.24 ck-ceph-osd3
The mon and osd nodes also need some tuning.
#Pin device names with a udev rule
ll /sys/block/sd*|awk '{print $NF}'|sed 's/..//'|awk -F '/' '{print "DEVPATH==\""$0"\", NAME=\""$NF"\", MODE=\"0660\""}'>/etc/udev/rules.d/90-ceph-disk.rules
#Disable CPU power saving (set the performance governor)
for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done
#Raise the maximum number of PIDs
echo "kernel.pid_max = 4194303"|tee -a /etc/sysctl.conf
#Raise the maximum number of open files
echo "fs.file-max = 26234859"|tee -a /etc/sysctl.conf
#Increase read-ahead for sequential reads
for READ_KB in /sys/block/sd*/queue/read_ahead_kb; do [ -f $READ_KB ] || continue; echo 8192 > $READ_KB; done
#Enlarge the I/O request queue
for REQUEST in /sys/block/sd*/queue/nr_requests; do [ -f $REQUEST ] || continue; echo 20480 > $REQUEST; done
#Set the I/O scheduler to deadline
for SCHEDULER in /sys/block/sd*/queue/scheduler; do [ -f $SCHEDULER ] || continue; echo deadline > $SCHEDULER; done
#Disable swap
echo "vm.swappiness = 0" | tee -a /etc/sysctl.conf
It is also best to make each host's hostname match its hosts entry.
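The sysctl appends above can be made idempotent so that re-running the tuning steps does not duplicate lines. A minimal sketch; append_sysctl is an illustrative helper (not part of any tool), demonstrated against a temp file rather than the real /etc/sysctl.conf:

```shell
#!/bin/sh
# Sketch: append a sysctl setting only when it is not already present,
# so the tuning steps above stay idempotent across re-runs.
# append_sysctl is an illustrative helper, not a standard command.
append_sysctl() {
    file="$1"; key="$2"; value="$3"
    grep -q "^${key}[ =]" "$file" 2>/dev/null || echo "${key} = ${value}" >> "$file"
}

# Demo against a temp file; on a real node the target is
# /etc/sysctl.conf, applied afterwards with: sysctl -p
conf=$(mktemp)
append_sysctl "$conf" "kernel.pid_max" "4194303"
append_sysctl "$conf" "fs.file-max"   "26234859"
append_sysctl "$conf" "vm.swappiness" "0"
append_sysctl "$conf" "kernel.pid_max" "4194303"   # repeat call adds nothing
cat "$conf"
rm -f "$conf"
```
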
1. Create a user
useradd -m ceph-admin
su - ceph-admin
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cat > ~/.ssh/config <<EOF
Host *
Port 50020
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
EOF
chmod 600 ~/.ssh/config
Set up SSH trust:
ssh-keygen -t rsa -b 2048
Just press Enter at every prompt.
Copy id_rsa.pub to /home/ceph-admin/.ssh/authorized_keys on the other nodes
chmod 600 .ssh/authorized_keys
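Pushing the key to every node can be scripted instead of done by hand. A sketch, assuming the host names from the hosts table above and the non-standard port 50020 from ~/.ssh/config; keycopy_cmd is an illustrative helper that only prints each command, so the list can be reviewed before actually running it:

```shell
#!/bin/sh
# Sketch: build the ssh-copy-id command for every node instead of
# copying id_rsa.pub by hand. keycopy_cmd is illustrative; it prints
# the command rather than executing it, so the output can be reviewed
# first and then piped to sh to run for real.
keycopy_cmd() {
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub -p 50020 ceph-admin@$1"
}

for host in ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 \
            ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3; do
    keycopy_cmd "$host"     # append "| sh" to the loop to actually run
done
```
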
Then grant ceph-admin sudo rights by editing /etc/sudoers:
ceph-admin ALL=(root) NOPASSWD:ALL
In the same file, disable the line below by prefixing it with #:
Defaults requiretty
Format the disks on the OSD servers.
For testing only, plain directories will do; for production, format the raw devices directly:
cat auto_parted.sh
#!/bin/bash
name="b c d e f g h i"
for i in ${name}; do
echo "Creating partitions on /dev/sd${i} ..."
parted -a optimal --script /dev/sd${i} -- mktable gpt
parted -a optimal --script /dev/sd${i} -- mkpart primary xfs 0% 100%
sleep 1
mkfs.xfs -f /dev/sd${i}1 &
done
Then run the script.
2. Install EPEL (all nodes)
yum -y install epel-release
3. Install the Ceph repository (all nodes; needed only if you are not installing with ceph-deploy, which would otherwise install it automatically)
yum -y install yum-plugin-priorities
rpm --import https://download.ceph.com/keys/release.asc
rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
#or, from a mirror in China:
rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
cd /etc/yum.repos.d/
sed -i 's@download.ceph.com@mirrors.ustc.edu.cn/ceph@g' ceph.repo
yum -y install ceph ceph-radosgw
4. Configure the admin node
Install the deployment tool:
yum install ceph-deploy -y
Initialize the cluster:
su - ceph-admin
mkdir ck-ceph-cluster
cd ck-ceph-cluster
ceph-deploy new ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3
List as many mon nodes as you have.
Configuration:
echo "osd pool default size = 2">>ceph.conf
echo "osd pool default min size = 2">>ceph.conf
echo "public network = 10.10.0.0/16">>ceph.conf
echo "cluster network = 172.16.0.0/16">>ceph.conf
Note: if the machines have multiple NICs, it is best to separate the public and cluster networks. The cluster network carries intra-cluster replication and recovery traffic; the public network serves the monitors and clients.
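After these appends, ceph.conf on the admin node should look roughly like the following. The fsid matches the cluster id reported by ceph -s below, and the mon and auth entries are generated by ceph-deploy new; treat this as a sketch of the expected shape, not a verbatim dump:

```ini
[global]
fsid = 2aafe304-2dd1-48be-a0fa-cb9c911c7c3b
mon_initial_members = ck-ceph-mon1, ck-ceph-mon2, ck-ceph-mon3
mon_host = 10.10.128.19,10.10.128.20,10.10.128.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
osd pool default min size = 2
public network = 10.10.0.0/16
cluster network = 172.16.0.0/16
```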
Install Ceph on all nodes (only if you chose to install with ceph-deploy; skip this if you already did step 3):
ceph-deploy install ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
Initialize the monitor nodes:
ceph-deploy mon create-initial
Initialize the data disks on the OSD nodes:
ceph-deploy disk zap ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi
ceph-deploy osd create ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi
ceph-deploy disk zap ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi
ceph-deploy osd create ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi
ceph-deploy disk zap ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi
ceph-deploy osd create ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi
Sync the configuration:
ceph-deploy admin ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
#if the target nodes already have a ceph.conf, overwrite it:
ceph-deploy --overwrite-conf admin ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Adjust permissions on /etc/ceph on every node:
sudo chown -R ceph:ceph /etc/ceph
Check the cluster status:
[ceph-admin@ck-ceph-adm ~]$ ceph -s
cluster 2aafe304-2dd1-48be-a0fa-cb9c911c7c3b
health HEALTH_OK
monmap e1: 3 mons at {ck-ceph-mon1=10.10.128.19:6789/0,ck-ceph-mon2=10.10.128.20:6789/0,ck-ceph-mon3=10.10.128.21:6789/0}
election epoch 6, quorum 0,1,2 ck-ceph-mon1,ck-ceph-mon2,ck-ceph-mon3
osdmap e279: 40 osds: 40 up, 40 in
flags sortbitwise
pgmap v96866: 2112 pgs, 3 pools, 58017 MB data, 13673 objects
115 GB used, 21427 GB / 21543 GB avail
2112 active+clean
II. Using the Ceph cluster from OpenStack
The official guide: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
1. Create a pool
ceph osd pool create volumes 1024 1024
For choosing the pg_num and pgp_num values (1024 here), see http://docs.ceph.com/docs/master/rados/operations/placement-groups/
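The rule of thumb from that page can be sketched as a quick calculation: total PGs ≈ (OSDs × 100) / replica count, rounded up to the next power of two, then shared across all pools. pg_count below is an illustrative helper, not a ceph command:

```shell
#!/bin/sh
# Rule of thumb from the placement-groups page linked above:
# total PGs ~= (OSDs * 100) / replica count, rounded up to the next
# power of two, then divided among the pools.
# pg_count is an illustrative helper, not a ceph command.
pg_count() {
    osds="$1"; replicas="$2"
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"
}

pg_count 40 2    # this cluster: 40 OSDs, size 2 -> 2048 PGs in total
```

With 2048 PGs as the cluster-wide budget, 1024 for the volumes pool leaves headroom for the other pools (the pgmap in the ceph -s output earlier shows 2112 PGs over 3 pools).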
2. Install the Ceph client tools
Install them on every cinder node and compute node:
rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
yum install ceph-common
3. Sync the configuration
Copy /etc/ceph/ceph.conf from the admin node to the cinder nodes and compute nodes.
4. Authentication (on the Ceph admin node)
Allow the cinder user to access Ceph:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
5. Push the key to the nodes (on the admin node)
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
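The keyring file written above has the following shape; the key value shown is a placeholder for illustration, not a real secret:

```ini
[client.cinder]
	key = AQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
```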
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
6. Manage the secret key (on the admin node)
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
Add the key to libvirt:
Generate a UUID:
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
Log in to the compute node and substitute the UUID generated above in the file below:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF