Posted by 不正狼 on 2019-2-2 09:24:14

Deploying the Ceph Distributed File System on Ubuntu 12.04

  II. Quick Ceph Configuration
Resources:
Two machines: one server and one client, both running Ubuntu 12.04.
When installing the server, set aside two extra partitions to back osd0 and osd1. If you don't have spare partitions, you can create two virtual block devices with loop devices after the system is installed.
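A sketch of the loop-device alternative (file paths and the 4 GB size are arbitrary examples, not from the original guide):

```shell
# Create two sparse backing files, 4 GB each (size is an example)
dd if=/dev/zero of=/tmp/osd0.img bs=1M count=0 seek=4096
dd if=/dev/zero of=/tmp/osd1.img bs=1M count=0 seek=4096
# Attach them as block devices (requires root):
#   sudo losetup /dev/loop0 /tmp/osd0.img
#   sudo losetup /dev/loop1 /tmp/osd1.img
# /dev/loop0 and /dev/loop1 can then stand in for the sda partitions used below.
```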
Steps:
1. Install Ceph (MON, MDS, OSD) on the server
2. Add the release key to APT, update sources.list, and install ceph
# sudo wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
# echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
# sudo apt-get update && sudo apt-get install ceph
3. Check the version
# ceph -v    # prints the ceph version and key info
If nothing is printed, run:
# sudo apt-get update && sudo apt-get upgrade
4. Create a ceph.conf configuration file under /etc/ceph/ and copy it to the other server nodes.
  


[global]

    # For version 0.55 and beyond, you must explicitly enable
    # or disable authentication with "auth" entries in [global].

    auth cluster required = none
    auth service required = none
    auth client required = none

[osd]

    osd journal size = 1000

    # The following assumes an ext4 filesystem.
    filestore xattr use omap = true

    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following settings and replace the values
    # in braces with appropriate values, or leave the following settings
    # commented out to accept the default values. You must specify the
    # --mkfs option with mkcephfs in order for the deployment script to
    # utilize the following settings, and you must define the 'devs'
    # option for each osd instance; see below.

    osd mkfs type = xfs
    osd mkfs options xfs = -f            # default for xfs is "-f"
    osd mount options xfs = rw,noatime   # default mount options are "rw,noatime"

    # For example, for ext4, the mount options might look like this:

    #osd mkfs options ext4 = user_xattr,rw,noatime

    # Execute $ hostname to retrieve the name of your host,
    # and replace {hostname} with the name of your host.
    # For the monitor, replace {ip-address} with the IP
    # address of your host.

[mon.a]

    host = compute-01
    mon addr = 192.168.4.165:6789

[osd.0]

    host = compute-02

    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following setting for each OSD and specify
    # a path to the device if you use mkcephfs with the --mkfs option.

    devs = /dev/sda7

[mds.a]

    host = compute-01
  

5. Create the directories
sudo mkdir -p /var/lib/ceph/osd/ceph-0
sudo mkdir -p /var/lib/ceph/osd/ceph-1
sudo mkdir -p /var/lib/ceph/mon/ceph-a
sudo mkdir -p /var/lib/ceph/mds/ceph-a
  


6. Create the partition and mount it
fdisk /dev/sda    # create the OSD partition (here /dev/sda7)
mkfs.xfs -f /dev/sda7
mount /dev/sda7 /var/lib/ceph/osd/ceph-0    # the partition must be mounted first so the initialization data can be written to it
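To have the OSD partition mounted again after a reboot, an /etc/fstab entry can be added. A sketch (the device and mount options assume the xfs settings from the ceph.conf above):

```
# /etc/fstab (assumes /dev/sda7 and the xfs options used in this guide)
/dev/sda7  /var/lib/ceph/osd/ceph-0  xfs  rw,noatime  0  0
```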

7. Run the initialization
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
8. Start the services
# sudo service ceph -a start
9. Run a health check
sudo ceph health
If it returns HEALTH_OK, the deployment succeeded.
If you see something like HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds, run the following to list the stuck placement groups:
# ceph pg dump_stuck stale
# ceph pg dump_stuck inactive
# ceph pg dump_stuck unclean
Run the health check again; it should now report HEALTH_OK.

Note: before re-running # sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring, stop the ceph service on all server nodes and then empty the four directories created earlier: /var/lib/ceph/osd/ceph-0, /var/lib/ceph/osd/ceph-1, /var/lib/ceph/mon/ceph-a, and /var/lib/ceph/mds/ceph-a
# /etc/init.d/ceph stop
# rm -rfv /var/lib/ceph/osd/ceph-0/*
# rm -rfv /var/lib/ceph/osd/ceph-1/*
# rm -rfv /var/lib/ceph/mon/ceph-a/*
# rm -rfv /var/lib/ceph/mds/ceph-a/*
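The cleanup steps above can be wrapped in a small helper; a sketch (`wipe_ceph_state` is a hypothetical name, not part of Ceph):

```shell
# Hypothetical helper: empty each state directory before re-running mkcephfs.
wipe_ceph_state() {
    for d in "$@"; do
        rm -rfv "${d:?}"/*   # ${d:?} aborts if a directory name is empty
    done
}

# Usage, after `sudo /etc/init.d/ceph stop` on every node:
# wipe_ceph_state /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-1 \
#                 /var/lib/ceph/mon/ceph-a /var/lib/ceph/mds/ceph-a
```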
  III. Using CephFS
On the client:
sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
or
sudo mkdir /home/{username}/cephfs
sudo ceph-fuse -m {ip-address-of-monitor}:6789 /home/{username}/cephfs
# df -h    # verify the mount

