Cluster Installation
EXT4 is used here as the cluster's file system, and Ceph's security authentication (cephx) is disabled for convenience during testing.
Install the dependency packages:
yum -y install gcc gcc-c++ make automake libtool expat expat-devel \
boost-devel nss-devel cryptopp cryptopp-devel libatomic_ops-devel \
fuse-devel gperftools-libs gperftools-devel libaio libaio-devel libedit libedit-devel libuuid-devel
Installing from source:
Download the source: http://ceph.com/download/ceph-0.48argonaut.tar.gz
Extract the package and install:
tar -zxvf ceph-0.48argonaut.tar.gz
cd ceph-0.48argonaut
./autogen.sh
CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc --without-tcmalloc
make && make install
RPM installation method:
#wget http://ceph.com/download/ceph-0.48argonaut.tar.bz2
#tar xjvf ceph-0.48argonaut.tar.bz2
#cp ceph-0.48argonaut.tar.bz2 ~/rpmbuild/SOURCES
#rpmbuild -ba ceph-0.48argonaut/ceph.spec
#cd /root/rpmbuild/RPMS/x86_64/
#rpm -Uvh *.rpm
Edit the configuration file ceph.conf:
[global]
; enable secure authentication
;auth supported = cephx
; allow ourselves to open a lot of files
max open files = 131072
; set log file
log file = /var/log/ceph/$name.log
; log_to_syslog = true ; uncomment this line to log to syslog
; set up pid files
pid file = /var/run/ceph/$name.pid
; If you want to run an IPv6 cluster, set this to true. Dual-stack isn't possible
;ms bind ipv6 = true
[mon]
mon data = /ceph/mon_data/$name
debug ms = 1
debug mon = 20
debug paxos = 20
debug auth = 20
[mon.0]
host = node89
mon addr = 1.1.1.89:6789
[mon.1]
host = node97
mon addr = 1.1.1.97:6789
[mon.2]
host = node56
mon addr = 1.1.1.56:6789
; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
keyring = /ceph/mds_data/keyring.$name
; mds logging to debug issues.
debug ms = 1
debug mds = 20
[mds.0]
host = node89
[mds.1]
host = node97
[mds.2]
host = node56
[osd]
osd data = /ceph/osd_data/$name
filestore xattr use omap = true
osd journal = /ceph/osd_data/$name/journal
osd journal size = 1000 ; journal size, in megabytes
journal dio = false
debug ms = 1
debug osd = 20
debug filestore = 20
debug journal = 20
filestore fiemap = false
osd class dir = /usr/lib/rados-classes
keyring = /etc/ceph/keyring.$name
[osd.0]
host = node89
devs = /dev/mapper/vg_control-lv_home
[osd.1]
host = node97
devs = /dev/mapper/vg_node2-lv_home
[osd.2]
host = node56
devs = /dev/mapper/vg_node56-lv_home
Installation script:
#!/bin/bash
yum -y install gcc gcc-c++ make automake libtool expat expat-devel \
boost-devel nss-devel cryptopp cryptopp-devel libatomic_ops-devel \
fuse-devel gperftools-libs gperftools-devel libaio libaio-devel libedit libedit-devel libuuid-devel
which ceph > /dev/null 2>&1
if [ $? -eq 1 ]; then
tar -zxvf ceph-0.48argonaut.tar.gz
cd ceph-0.48argonaut
./autogen.sh
CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc --without-tcmalloc
make && make install
cd ..
rm -rf ceph-0.48argonaut
fi
echo "#####################Configure#########################"
rm -rf /ceph/*
rm -rf /etc/ceph/*
mkdir -p /ceph/mon_data/{mon.0,mon.1,mon.2}
mkdir -p /ceph/osd_data/{osd.0,osd.1,osd.2}
mkdir -p /ceph/mds_data
touch /etc/ceph/keyring
touch /etc/ceph/ceph.keyring
touch /etc/ceph/keyring.bin
cp ceph.conf /etc/ceph/
echo "#####################Iptables##########################"
grep 6789 /etc/sysconfig/iptables
if [ $? -eq 1 ];then
iptables -A INPUT -m multiport -p tcp --dports 6789,6800:6810 -j ACCEPT
service iptables save
service iptables restart
fi
echo "######################Init service#####################"
#mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin
#service ceph restart
echo "Install Ceph Successful!"
Initialize the cluster on the monitor node (this prompts for each node's login password several times; setting up SSH key authentication avoids that):
mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin
Running this command generates the relevant files under the osd and mon directories. If they are not generated, initialization failed and starting the ceph service will report errors. Note: with btrfs the osd directories do not need to be mounted manually, but with ext4 they must be mounted by hand.
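The success check described above (each daemon's data directory should be non-empty before the service is started) can be sketched as a small shell helper. This is not part of the original post; the paths in the comments follow the ceph.conf above.

```shell
#!/bin/bash
# Minimal sketch: verify that a daemon's data directory exists and is
# non-empty after mkcephfs, before attempting "service ceph restart".
check_dir() {
    local dir="$1"
    [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}

# On a real node you would check the actual paths from ceph.conf, e.g.:
#   check_dir /ceph/mon_data/mon.0 || echo "mon.0 not initialized"
#   check_dir /ceph/osd_data/osd.0 || echo "osd.0 not initialized"

# Self-contained demonstration on a temporary directory:
tmp=$(mktemp -d)
if check_dir "$tmp"; then echo "unexpected"; else echo "empty dir detected"; fi
touch "$tmp/keyring"
if check_dir "$tmp"; then echo "populated dir detected"; fi
rm -rf "$tmp"
```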
Start the service on all nodes:
service ceph restart
Check the cluster's running state:
#ceph health detail
#ceph -s
#ceph osd tree
#ceph osd dump
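For periodic monitoring, the output of "ceph health" can be wrapped in a small check. This is a sketch, not from the original post; the sample strings below are illustrative, not captured from a real cluster.

```shell
#!/bin/bash
# Sketch: return 0 only when "ceph health" reports HEALTH_OK, so the
# check can be used from cron or a monitoring script.
is_healthy() {
    # Expects the output of "ceph health", whose first word is
    # HEALTH_OK, HEALTH_WARN or HEALTH_ERR.
    local status="$1"
    case "$status" in
        HEALTH_OK*) return 0 ;;
        *)          return 1 ;;
    esac
}

# Real usage would be: is_healthy "$(ceph health)" || send_alert
if is_healthy "HEALTH_OK"; then echo "cluster healthy"; fi
if ! is_healthy "HEALTH_WARN 1 mons down"; then echo "cluster degraded"; fi
```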
Cluster Operations
Here a new logical volume is created in the existing volume group to experiment with:
#vgs
VG #PV #LV #SN Attr VSize VFree
vg_control 1 4 0 wz--n- 931.02g 424.83g
#lvcreate --size 10g --name ceph_test vg_control
#mkfs.ext4 /dev/mapper/vg_control-ceph_test
#lvs
1. Adding an OSD node:
ceph osd create
Add to the ceph.conf configuration file:
[osd.3]
host = newnode
devs = /dev/mapper/vg_control-ceph_test
Format and mount the osd:
#mkfs.ext4 /dev/mapper/vg_control-ceph_test
#mkdir /ceph/osd_data/osd.3
#mount -o user_xattr /dev/mapper/vg_control-ceph_test /ceph/osd_data/osd.3
The new osd has now been added, but before it can be used normally, its CRUSH mapping (the placement rules used to schedule osds) must be set:
#ceph osd crush set 3 osd.3 1.0 pool=default host=newnode
#ceph mon getmap -o /tmp/monmap
#ceph-osd -c /etc/ceph/ceph.conf --monmap /tmp/monmap -i 3 --mkfs
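"ceph osd create" allocates the next free osd id for you. The same number can be derived from a list of existing ids (as shown by "ceph osd tree"); next_osd_id below is a hypothetical helper for that, not a ceph command.

```shell
#!/bin/bash
# Sketch: compute the next free osd id from one existing id per line
# on stdin (e.g. piped from the id column of "ceph osd tree").
next_osd_id() {
    # prints max(id)+1, or 0 when the input is empty
    awk 'BEGIN{max=-1} $1+0==$1 && $1>max {max=$1} END{print max+1}'
}

printf '0\n1\n2\n' | next_osd_id   # prints 3
```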
2. Removing node osd.3:
#ceph osd crush remove osd.3
#ceph osd tree
dumped osdmap tree epoch 21
# id weight type name up/down reweight
-1 3 pool default
-3 3 rack unknownrack
-2 1 host node89
0 1 osd.0 up 1
-4 1 host node97
1 1 osd.1 up 1
-5 1 host node56
2 1 osd.2 up 1
3 0 osd.3 down 0
#ceph osd rm 3
#ceph osd tree
dumped osdmap tree epoch 22
# id weight type name up/down reweight
-1 3 pool default
-3 3 rack unknownrack
-2 1 host node89
0 1 osd.0 up 1
-4 1 host node97
1 1 osd.1 up 1
-5 1 host node56
2 1 osd.2 up 1
#rm -r /ceph/osd_data/osd.3/
Edit ceph.conf and delete the entries related to osd.3.
3. Mounting a new disk for osd.0
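The removal steps above can be collected into one function. This is a sketch following this post's directory layout, not an official tool; DRYRUN=1 only prints the commands instead of running them.

```shell
#!/bin/bash
# Sketch: remove an osd from CRUSH and the osd map, then clean up its
# data directory. With DRYRUN=1 the commands are printed, not executed.
remove_osd() {
    local id="$1"
    local run=eval
    if [ "${DRYRUN:-0}" = "1" ]; then run=echo; fi
    "$run" "ceph osd crush remove osd.$id"
    "$run" "ceph osd rm $id"
    "$run" "rm -r /ceph/osd_data/osd.$id/"
    echo "now delete the [osd.$id] section from ceph.conf"
}

DRYRUN=1
remove_osd 3
```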
#service ceph stop osd
#umount /ceph/osd_data/osd.0
#mkfs.ext4 /dev/mapper/vg_control-ceph_test
#tune2fs -o journal_data_writeback /dev/mapper/vg_control-ceph_test
#mount -o rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0 /dev/mapper/vg_control-ceph_test /ceph/osd_data/osd.0
Add the following to fstab:
/dev/mapper/vg_control-ceph_test /ceph/osd_data/osd.0 ext4 rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0 0 0
Re-register with the monitors:
#mount -a
#ceph mon getmap -o /tmp/monmap
#ceph-osd -c /etc/ceph/ceph.conf --monmap /tmp/monmap -i 0 --mkfs
#service ceph start osd
4. Client mount
On a client machine with ceph-client installed, mount the ceph cluster:
#mount -t ceph 1.1.1.89:6789:/ /mnt
#df -h
1.1.1.89:6789:/ 100G 3.1G 97G 4% /mnt
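To make the client mount persistent across reboots, an fstab entry along these lines should work with the kernel client (a sketch; since cephx is disabled in this post, no secret= option is needed):

```
1.1.1.89:6789:/  /mnt  ceph  defaults,noatime,_netdev  0 0
```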