[Experience Sharing] Install Ceph on CentOS 6.5

Posted 2019-2-1 14:29:05
  Refer to http://hj192837.blog.运维网.com/655995/1539329 and upgrade the kernel to 3.10.
  Disable SELinux and iptables:
service iptables stop
chkconfig iptables off; chkconfig ip6tables off
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
  

  1. Create ceph user on each Ceph Node
useradd ceph
passwd ceph
Add sudo privileges for the user on each Ceph Node
echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
2. Disable requiretty on each Ceph Node
visudo
# change Defaults requiretty line
Defaults:ceph !requiretty
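The `visudo` edit above can also be scripted non-interactively. A minimal sketch, operating on a temporary copy of the file for safety (on a real node you would edit /etc/sudoers itself, ideally validated with `visudo -c`):

```shell
# Sketch: comment out the global requiretty line and add a ceph-specific
# exemption. Uses a temp copy here; on a real node, edit via visudo.
f=$(mktemp)
printf 'Defaults    requiretty\n' > "$f"

sed -i 's/^Defaults[[:space:]]\+requiretty/#&/' "$f"   # disable globally
echo 'Defaults:ceph !requiretty' >> "$f"               # exempt the ceph user

cat "$f"
```

This leaves the original line commented out rather than deleting it, which makes the change easy to audit and revert.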
  

  3. yum install yum-plugin-priorities
vi /etc/yum/pluginconf.d/priorities.conf
[main]
enabled = 1

  add ceph yum repository on each ceph Node
  vi /etc/yum.repos.d/ceph-extras.repo
[ceph-extras]
name=Ceph Extras Packages
baseurl=file:///ceph/ceph-extras
enabled=1
priority=2
gpgcheck=0
type=rpm-md
[ceph-extras-noarch]
name=Ceph Extras noarch
baseurl=file:///ceph/ceph-extras-noarch
enabled=1
priority=2
gpgcheck=0
type=rpm-md
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages
baseurl=file:///ceph/ceph
enabled=1
priority=2
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=Ceph noarch packages
baseurl=file:///ceph/ceph-noarch
enabled=1
priority=2
gpgcheck=0
type=rpm-md
vi /etc/yum.repos.d/ceph-apache.repo
[apache2-ceph-noarch]
name=Apache noarch packages for Ceph
baseurl=file:///ceph/apache2-ceph-noarch
enabled=1
priority=2
gpgcheck=0
type=rpm-md
vi /etc/yum.repos.d/ceph-fastcgi.repo
[fastcgi-ceph-basearch]
name=FastCGI basearch packages for Ceph
baseurl=file:///ceph/fastcgi-ceph-basearch
enabled=1
priority=2
gpgcheck=0
type=rpm-md
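The repo files above all share the same skeleton, so they can be generated in one loop instead of edited by hand. A sketch, writing to a temp dir here (on a real node, target /etc/yum.repos.d); the `file:///ceph/...` baseurls follow the local-mirror layout assumed above, and for brevity each repo gets its own file rather than sharing files as shown:

```shell
# Sketch: generate one .repo file per repo from a common template.
dest=$(mktemp -d)   # use /etc/yum.repos.d on a real node
for repo in ceph ceph-noarch ceph-extras ceph-extras-noarch \
            apache2-ceph-noarch fastcgi-ceph-basearch; do
  cat > "$dest/$repo.repo" <<EOF
[$repo]
name=$repo packages
baseurl=file:///ceph/$repo
enabled=1
priority=2
gpgcheck=0
type=rpm-md
EOF
done
ls "$dest"
```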
  

  4. add epel yum repository on each ceph Node
  vi /etc/yum.repos.d/epel.repo

  [epel]
name=epel
baseurl=http://mirrors.sohu.com/fedora-epel/6Server/x86_64
enabled=1
gpgcheck=0
  

  vi /etc/hosts (on all nodes)
127.0.0.1    localhost
192.168.1.15    ceph1
192.168.1.16    ceph2
192.168.1.17    ceph3
  192.168.1.18    client1

  
5. On ceph1, run as the ceph user:
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -C '' -N ''
vi ~/.ssh/config
Host ceph2
  Hostname ceph2
  User ceph
  StrictHostKeyChecking no
Host ceph3
  Hostname ceph3
  User ceph
  StrictHostKeyChecking no
ssh-copy-id ceph2
ssh-copy-id ceph3
  

  yum -y install ceph-deploy
mkdir ceph-cluster
cd ceph-cluster
  

  # Create the Cluster
ceph-deploy new ceph1 ceph2 ceph3
  

  # Install Ceph
ceph-deploy install ceph1 ceph2 ceph3
  

  # Add the initial monitor(s) and gather the keys
  ceph-deploy mon create-initial
  

  # Add OSDs
ceph-deploy osd create --zap-disk ceph1:sdb
ceph-deploy osd create --zap-disk ceph2:sdb
ceph-deploy osd create --zap-disk ceph3:sdb
  

  # copy the configuration file and admin key to nodes

  ceph-deploy admin ceph1 ceph2 ceph3
chmod +r /etc/ceph/ceph.client.admin.keyring
ssh ceph2 chmod +r /etc/ceph/ceph.client.admin.keyring
ssh ceph3 chmod +r /etc/ceph/ceph.client.admin.keyring

  Note: if you only ran ceph-deploy new ceph1 earlier, add the remaining monitors with:
  ceph-deploy mon add ceph2 ceph3
  check quorum status for monitors:
ceph quorum_status --format json-pretty
  # check health

  ceph -w
  

  Add Metadata Server:
  ceph-deploy mds create ceph1 (at the time of writing, Ceph supports only a single active metadata server in production)
  ceph mds stat
  

  # List cluster's pool
  ceph osd lspools
  The default pools include:

  •   data
  •   metadata
  •   rbd
  

  # Create pool
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated]  example: ceph osd pool create mydatapool {pg-num}

                    (OSDs * 100)
Total PGs =  ------------------------  = pg-num = pgp-num
             pool size (replica count)
The result should be rounded up to the nearest power of two.
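For example, with the 3 OSDs created above and a pool size (replica count) of 3, the formula works out as follows (the replica count of 3 is an assumed default for illustration):

```shell
# Worked example of the PG formula: 3 OSDs, pool size (replicas) = 3.
osds=3
replicas=3
raw=$(( osds * 100 / replicas ))     # 3 * 100 / 3 = 100

# round up to the nearest power of two
pg=1
while [ "$pg" -lt "$raw" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"    # 128

# so the concrete pool-create command would be:
# ceph osd pool create mydatapool 128 128
```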
  

  Install ceph client on client1:
on ceph-deploy node:
  ceph-deploy install client1
  ceph-deploy admin client1

  chmod +r /etc/ceph/ceph.client.admin.keyring
  

  Create Ceph admin host:
  on ceph-deploy node:
  ceph-deploy admin admin-host
  ceph-deploy config push admin-host




