3. Ceph installation package details and deploy process output

  1: Packages and dependencies required to install Ceph
Installing:
 ceph                                 x86_64  1:0.94.3-0.el6     Ceph         21 M
 ceph-radosgw                         x86_64  1:0.94.3-0.el6     Ceph        2.3 M
Installing for dependencies:
 boost-program-options                x86_64  1.41.0-27.el6      base        108 k
 boost-system                         x86_64  1.41.0-27.el6      base         26 k
 boost-thread                         x86_64  1.41.0-27.el6      base         43 k
 ceph-common                          x86_64  1:0.94.3-0.el6     Ceph        6.8 M
 fcgi                                 x86_64  2.4.0-10.el6       Ceph         40 k
 gdisk                                x86_64  0.8.2-1.el6        Ceph        163 k
 gperftools-libs                      x86_64  2.0-11.el6.3       Ceph        246 k
 leveldb                              x86_64  1.7.0-2.el6        Ceph        158 k
 libcephfs1                           x86_64  1:0.94.3-0.el6     Ceph        1.9 M
 libicu                               x86_64  4.2.1-12.el6       base        4.9 M
 librados2                            x86_64  1:0.94.3-0.el6     Ceph        1.6 M
 librbd1                              x86_64  1:0.94.3-0.el6     Ceph        1.8 M
 libunwind                            x86_64  1.1-3.el6          epel         55 k
 lttng-ust                            x86_64  2.4.1-1.el6        epel        162 k
 python-argparse                      noarch  1.2.1-2.el6        Ceph-noarch  48 k
 python-babel                         noarch  0.9.4-5.1.el6      base        1.4 M
 python-backports                     x86_64  1.0-5.el6          base        5.5 k
 python-backports-ssl_match_hostname  noarch  3.4.0.2-1.el6      Ceph-noarch  12 k
 python-cephfs                        x86_64  1:0.94.3-0.el6     Ceph         11 k
 python-chardet                       noarch  2.0.1-1.el6        Ceph-noarch 225 k
 python-docutils                      noarch  0.6-1.el6          base        1.3 M
 python-flask                         noarch  1:0.9-5.el6        Ceph-noarch 190 k
 python-imaging                       x86_64  1.1.6-19.el6       base        388 k
 python-jinja2                        x86_64  2.2.1-2.el6_5      base        466 k
 python-jinja2-26                     noarch  2.6-2.el6          Ceph-noarch 526 k
 python-markupsafe                    x86_64  0.9.2-4.el6        base         22 k
 python-ordereddict                   noarch  1.1-2.el6          Ceph-noarch 7.6 k
 python-pygments                      noarch  1.1.1-1.el6        base        562 k
 python-rados                         x86_64  1:0.94.3-0.el6     Ceph         29 k
 python-rbd                           x86_64  1:0.94.3-0.el6     Ceph         18 k
 python-requests                      noarch  1.1.0-4.el6        Ceph-noarch  71 k
 python-six                           noarch  1.4.1-1.el6        Ceph-noarch  22 k
 python-sphinx                        noarch  0.6.6-2.el6        base        487 k
 python-urllib3                       noarch  1.5-7.el6          Ceph-noarch  41 k
 python-werkzeug                      noarch  0.8.3-2.el6        Ceph-noarch 552 k
 userspace-rcu                        x86_64  0.7.7-1.el6        epel         60 k
 xfsprogs                             x86_64  3.1.1-14_ceph.el6  Ceph        724 k

Transaction Summary
================================================================================
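The list above is exactly what yum resolves on CentOS 6 with the Ceph hammer repositories enabled. After the transaction finishes, the core packages can be re-checked with a quick query (a minimal sketch; package names taken from the table above):

# confirm the core hammer packages and their versions
rpm -q ceph ceph-radosgw ceph-common librados2 librbd1
# each should report the 0.94.3-0.el6 build listed above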
  2: Detailed output of each deploy step
  --------------------------------------------------------------------------------------------------------
yum install ceph-deploy
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
epel/metalink                                                                              | 3.9 kB   00:00   
* base: mirrors.btte.net
* epel: mirrors.ustc.edu.cn
* extras: mirror.bit.edu.cn
* updates: mirror.bit.edu.cn
epel                                                                                       | 4.3 kB   00:00   
epel/primary_db                                                                              | 5.7 MB   00:04   
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:1.5.26-0 will be installed
--> Processing Dependency: python-argparse for package: ceph-deploy-1.5.26-0.noarch
--> Running transaction check
---> Package python-argparse.noarch 0:1.2.1-2.1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================
Package                        Arch                  Version                      Repository                  Size
====================================================================================================================
Installing:
ceph-deploy                  noarch                1.5.26-0                     Ceph-noarch                279 k
Installing for dependencies:
python-argparse                noarch                1.2.1-2.1.el6                base                        48 k

Transaction Summary
====================================================================================================================
Install       2 Package(s)

Total download size: 327 k
Installed size: 1.3 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): ceph-deploy-1.5.26-0.noarch.rpm                                                       | 279 kB   00:00   
(2/2): python-argparse-1.2.1-2.1.el6.noarch.rpm                                              | 48 kB   00:00   
--------------------------------------------------------------------------------------------------------------------
Total                                                                               5.6 MB/s | 327 kB   00:00   
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : python-argparse-1.2.1-2.1.el6.noarch                                                             1/2
Installing : ceph-deploy-1.5.26-0.noarch                                                                      2/2
Verifying  : ceph-deploy-1.5.26-0.noarch                                                                      1/2
Verifying  : python-argparse-1.2.1-2.1.el6.noarch                                                             2/2

Installed:
ceph-deploy.noarch 0:1.5.26-0                                                                                    

Dependency Installed:
python-argparse.noarch 0:1.2.1-2.1.el6                                                                           

Complete!
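With yum reporting Complete!, ceph-deploy is ready on the admin node. A quick check before running any deploy commands (it should agree with the Invoked (1.5.26) lines throughout the output below):

# verify the tool is on PATH and print its version
ceph-deploy --version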

--------------------------------------------------------------------------------------------------------------
ceph-deploy install master osd1 osd2
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.26): /usr/bin/ceph-deploy install master osd1 osd2
ceph-deploy options:
verbose                     : False
testing                     : None
cd_conf                     :
cluster                     : ceph
install_mds                   : False
stable                        : None
prog                        : ceph-deploy
default_release               : False
username                      : None
adjust_repos                  : True
func                        :
install_all                   : False
repo                        : False
host                        : ['master', 'osd1', 'osd2']
install_rgw                   : False
repo_url                      : None
ceph_conf                     : None
install_osd                   : False
version_kind                  : stable
install_common                : False
overwrite_conf                : False
quiet                         : False
dev                           : master
local_mirror                  : None
release                     : None
install_mon                   : False
gpg_url                     : None
Installing stable version hammer on cluster ceph hosts master osd1 osd2
Detecting platform for host master ...
connected to host: master
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
installing ceph on master
Running command: yum clean all
Loaded plugins: fastestmirror, priorities, security
Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
Cleaning up Everything
Cleaning up list of fastest mirrors
adding EPEL repository
Running command: yum -y install epel-release
Loaded plugins: fastestmirror, priorities, security
Determining fastest mirrors
* base: mirrors.yun-idc.com
* epel: mirror.premi.st
* extras: mirrors.yun-idc.com
* updates: mirrors.yun-idc.com
63 packages excluded due to repository priority protections
Setting up Install Process
Package epel-release-6-8.noarch already installed and latest version
Nothing to do
Running command: yum -y install yum-plugin-priorities
Loaded plugins: fastestmirror, priorities, security
Loading mirror speeds from cached hostfile
* base: mirrors.yun-idc.com
* epel: mirror.premi.st
* extras: mirrors.yun-idc.com
* updates: mirrors.yun-idc.com
63 packages excluded due to repository priority protections
Setting up Install Process
Package yum-plugin-priorities-1.1.30-30.el6.noarch already installed and latest version
Nothing to do
Configure Yum priorities to include obsoletes
check_obsoletes has been enabled for Yum priorities plugin
Running command: rpm --import https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm
Retrieving http://ceph.com/rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm
Preparing...                ##################################################
ceph-release                ##################################################
ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
altered ceph.repo priorities to contain: priority=1
Running command: yum -y install ceph ceph-radosgw
Loaded plugins: fastestmirror, priorities, security
Loading mirror speeds from cached hostfile
* base: mirrors.yun-idc.com
* epel: mirror.premi.st
* extras: mirrors.yun-idc.com
* updates: mirrors.yun-idc.com
63 packages excluded due to repository priority protections
Setting up Install Process
Package 1:ceph-0.94.3-0.el6.x86_64 already installed and latest version
Package 1:ceph-radosgw-0.94.3-0.el6.x86_64 already installed and latest version
Nothing to do
Running command: ceph --version
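The version output itself did not survive this paste. The same check can be repeated by hand on every node; a minimal sketch, assuming the passwordless SSH that ceph-deploy itself relies on:

# every host should report the hammer build installed above (0.94.3)
for h in master osd1 osd2; do ssh $h ceph --version; done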



-------------------------------------------------------------------------------------------------------------
ceph-deploy mon create-initial
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.26): /usr/bin/ceph-deploy mon create-initial
ceph-deploy options:
username                      : None
subcommand                  : create-initial
verbose                     : False
overwrite_conf                : False
prog                        : ceph-deploy
quiet                         : False
cd_conf                     :
cluster                     : ceph
func                        :
ceph_conf                     : None
default_release               : False
keyrings                      : None
Deploying mon, cluster ceph hosts master
detecting platform for host master ...
connected to host: master
detect platform information from remote host
detect machine type
distro info: CentOS 6.5 Final
determining if provided host has same hostname in remote
get remote short hostname
deploying mon to master
get remote short hostname
remote hostname: master
write cluster configuration to /etc/ceph/{cluster}.conf
create the mon path if it does not exist
checking for done path: /var/lib/ceph/mon/ceph-master/done
done path does not exist: /var/lib/ceph/mon/ceph-master/done
creating keyring file: /var/lib/ceph/tmp/ceph-master.mon.keyring
create the monitor keyring file
Running command: ceph-mon --cluster ceph --mkfs -i master --keyring /var/lib/ceph/tmp/ceph-master.mon.keyring
ceph-mon: mon.noname-a 10.0.0.21:6789/0 is local, renaming to mon.master
ceph-mon: set fsid to e2345a18-c1e1-4079-8dc6-25285231e09d
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-master for mon.master
unlinking keyring file /var/lib/ceph/tmp/ceph-master.mon.keyring
create a done file to avoid re-doing the mon deployment
create the init path if it does not exist
locating the `service` executable...
Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.master
=== mon.master ===
Starting Ceph mon.master on master...
Starting ceph-create-keys on master...
Running command: chkconfig ceph on
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
********************************************************************************
status for monitor: mon.master
{
    "election_epoch": 2,
    "extra_probe_peers": [],
    "monmap": {
        "created": "0.000000",
        "epoch": 1,
        "fsid": "e2345a18-c1e1-4079-8dc6-25285231e09d",
        "modified": "0.000000",
        "mons": [
            {
                "addr": "10.0.0.21:6789/0",
                "name": "master",
                "rank": 0
            }
        ]
    },
    "name": "master",
    "outside_quorum": [],
    "quorum": [
        0
    ],
    "rank": 0,
    "state": "leader",
    "sync_provider": []
}
********************************************************************************
monitor: mon.master is running
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
processing monitor mon.master
connected to host: master
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
mon.master monitor has reached quorum!
all initial monitors are running and have formed quorum
Running gatherkeys...
Checking master for /etc/ceph/ceph.client.admin.keyring
connected to host: master
detect platform information from remote host
detect machine type
fetch remote file
Got ceph.client.admin.keyring key from master.
Have ceph.mon.keyring
Checking master for /var/lib/ceph/bootstrap-osd/ceph.keyring
connected to host: master
detect platform information from remote host
detect machine type
fetch remote file
Got ceph.bootstrap-osd.keyring key from master.
Checking master for /var/lib/ceph/bootstrap-mds/ceph.keyring
connected to host: master
detect platform information from remote host
detect machine type
fetch remote file
Got ceph.bootstrap-mds.keyring key from master.
Checking master for /var/lib/ceph/bootstrap-rgw/ceph.keyring
connected to host: master
detect platform information from remote host
detect machine type
fetch remote file
Got ceph.bootstrap-rgw.keyring key from master.
Error in sys.exitfunc:
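The trailing "Error in sys.exitfunc:" is a commonly reported, harmless Python atexit quirk of ceph-deploy on EL6 and can be ignored; it appears after every remaining step as well. With the monitor in quorum, its state can be re-queried at any time from master (a minimal sketch, using the same admin socket ceph-deploy polled above):

# ask the monitor directly over its admin socket
ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
# or, now that the keys are gathered, ask the cluster itself
ceph quorum_status --format json-pretty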

------------------------------------------------------------------------------------------------------------------------------------------
ceph-deploy mon create master
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.26): /usr/bin/ceph-deploy mon create master
ceph-deploy options:
username                      : None
subcommand                  : create
verbose                     : False
overwrite_conf                : False
prog                        : ceph-deploy
quiet                         : False
cd_conf                     :
cluster                     : ceph
mon                           : ['master']
func                        :
ceph_conf                     : None
default_release               : False
keyrings                      : None
Deploying mon, cluster ceph hosts master
detecting platform for host master ...
connected to host: master
detect platform information from remote host
detect machine type
distro info: CentOS 6.5 Final
determining if provided host has same hostname in remote
get remote short hostname
deploying mon to master
get remote short hostname
remote hostname: master
write cluster configuration to /etc/ceph/{cluster}.conf
create the mon path if it does not exist
checking for done path: /var/lib/ceph/mon/ceph-master/done
create a done file to avoid re-doing the mon deployment
create the init path if it does not exist
locating the `service` executable...
Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.master
=== mon.master ===
Starting Ceph mon.master on master...already running
Running command: chkconfig ceph on
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
********************************************************************************
status for monitor: mon.master
{
    "election_epoch": 2,
    "extra_probe_peers": [],
    "monmap": {
        "created": "0.000000",
        "epoch": 1,
        "fsid": "e2345a18-c1e1-4079-8dc6-25285231e09d",
        "modified": "0.000000",
        "mons": [
            {
                "addr": "10.0.0.21:6789/0",
                "name": "master",
                "rank": 0
            }
        ]
    },
    "name": "master",
    "outside_quorum": [],
    "quorum": [
        0
    ],
    "rank": 0,
    "state": "leader",
    "sync_provider": []
}
********************************************************************************
monitor: mon.master is running
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
Error in sys.exitfunc:
-----------------------------------------------------------------------------------------------
ceph-deploy gatherkeys master
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.26): /usr/bin/ceph-deploy gatherkeys master
ceph-deploy options:
username                      : None
verbose                     : False
overwrite_conf                : False
prog                        : ceph-deploy
quiet                         : False
cd_conf                     :
cluster                     : ceph
mon                           : ['master']
func                        :
ceph_conf                     : None
default_release               : False
Have ceph.client.admin.keyring
Have ceph.mon.keyring
Have ceph.bootstrap-osd.keyring
Have ceph.bootstrap-mds.keyring
Have ceph.bootstrap-rgw.keyring
Error in sys.exitfunc:
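gatherkeys leaves the fetched keyrings in the directory ceph-deploy was invoked from; a quick look confirms they are in place (a minimal sketch, assuming the commands were run from /root as the configuration-file path suggests):

# the five keyrings reported as "Have ..." above should now be local files
ls -l /root/ceph.client.admin.keyring /root/ceph.mon.keyring /root/ceph.bootstrap-*.keyring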
----------------------------------------------------
ceph-deploy osd prepare osd2:/dev/vdc osd1:/dev/vdc
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.26): /usr/bin/ceph-deploy osd prepare osd2:/dev/vdc osd1:/dev/vdc
ceph-deploy options:
username                      : None
subcommand                  : prepare
dmcrypt                     : False
verbose                     : False
overwrite_conf                : False
prog                        : ceph-deploy
dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
quiet                         : False
cd_conf                     :
disk                        : [('osd2', '/dev/vdc', None), ('osd1', '/dev/vdc', None)]
cluster                     : ceph
fs_type                     : xfs
func                        :
ceph_conf                     : None
default_release               : False
zap_disk                      : False
Preparing cluster ceph disks osd2:/dev/vdc: osd1:/dev/vdc:
connected to host: osd2
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
Deploying osd to osd2
write cluster configuration to /etc/ceph/{cluster}.conf
Running command: udevadm trigger --subsystem-match=block --action=add
Preparing host osd2 disk /dev/vdc journal None activate False
Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
INFO:ceph-disk:Will colocate journal with data on /dev/vdc
DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/vdc
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:04b38c48-ef2a-4120-a839-4449a6320ca2 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc
Creating new GPT entries.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
INFO:ceph-disk:calling partx on prepared device /dev/vdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
BLKPG: Device or resource busy
error adding partition 2
INFO:ceph-disk:Running command: /sbin/udevadm settle
DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/04b38c48-ef2a-4120-a839-4449a6320ca2
DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/04b38c48-ef2a-4120-a839-4449a6320ca2
DEBUG:ceph-disk:Creating osd partition on /dev/vdc
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:95b35ffa-b954-4add-b775-0b81ed3eaee5 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/vdc
Information: Moved requested sector from 10485761 to 10487808 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
INFO:ceph-disk:calling partx on created device /dev/vdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
INFO:ceph-disk:Running command: /sbin/udevadm settle
DEBUG:ceph-disk:Creating xfs fs on /dev/vdc1
INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
meta-data=/dev/vdc1              isize=2048   agcount=4, agsize=327615 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=1310459, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.inbaDA with options noatime,inode64
INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.inbaDA
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.inbaDA
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.inbaDA/journal -> /dev/disk/by-partuuid/04b38c48-ef2a-4120-a839-4449a6320ca2
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.inbaDA
INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.inbaDA
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
The operation has completed successfully.
INFO:ceph-disk:calling partx on prepared device /dev/vdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Host osd2 is now ready for osd use.
connected to host: osd1
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
Deploying osd to osd1
write cluster configuration to /etc/ceph/{cluster}.conf
Running command: udevadm trigger --subsystem-match=block --action=add
Preparing host osd1 disk /dev/vdc journal None activate False
Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
INFO:ceph-disk:Will colocate journal with data on /dev/vdc
DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/vdc
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc
Creating new GPT entries.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
INFO:ceph-disk:calling partx on prepared device /dev/vdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
BLKPG: Device or resource busy
error adding partition 2
INFO:ceph-disk:Running command: /sbin/udevadm settle
DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4
DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4
DEBUG:ceph-disk:Creating osd partition on /dev/vdc
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:a203debe-0140-466b-9ee5-c71b476fd1ac --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/vdc
Information: Moved requested sector from 10485761 to 10487808 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
INFO:ceph-disk:calling partx on created device /dev/vdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
INFO:ceph-disk:Running command: /sbin/udevadm settle
DEBUG:ceph-disk:Creating xfs fs on /dev/vdc1
INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
meta-data=/dev/vdc1              isize=2048   agcount=4, agsize=327615 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=1310459, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.eYabbH with options noatime,inode64
INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.eYabbH
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.eYabbH
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.eYabbH/journal -> /dev/disk/by-partuuid/3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.eYabbH
INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.eYabbH
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
The operation has completed successfully.
INFO:ceph-disk:calling partx on prepared device /dev/vdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Host osd1 is now ready for osd use.
Error in sys.exitfunc:
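The repeated "BLKPG: Device or resource busy / error adding partition N" messages are expected: ceph-disk warns just beforehand that "re-reading known partitions will display errors", because the kernel already holds the partition table that partx -a tries to add. The prepared layout can be verified on each OSD host (a minimal sketch; the output shape is indicative only):

# show how ceph-disk classified the partitions it created on /dev/vdc
ceph-disk list
# expect roughly:
#   /dev/vdc1 ceph data, prepared, cluster ceph, journal /dev/vdc2
#   /dev/vdc2 ceph journal, for /dev/vdc1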


------------------------------------------------------------------------------------------------

# ceph-deploy osd activate osd1:/dev/vdc1 osd2:/dev/vdc1
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.26): /usr/bin/ceph-deploy osd activate osd1:/dev/vdc1 osd2:/dev/vdc1
ceph-deploy options:
username                      : None
subcommand                  : activate
verbose                     : False
overwrite_conf                : False
prog                        : ceph-deploy
quiet                         : False
cd_conf                     :
cluster                     : ceph
func                        :
ceph_conf                     : None
default_release               : False
disk                        : [('osd1', '/dev/vdc1', None), ('osd2', '/dev/vdc1', None)]
Activating cluster ceph disks osd1:/dev/vdc1: osd2:/dev/vdc1:
connected to host: osd1
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
activating host osd1 disk /dev/vdc1
will use init type: sysvinit
Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/vdc1
INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/vdc1
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.WDZv83 with options noatime,inode64
INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.WDZv83
DEBUG:ceph-disk:Cluster uuid is e2345a18-c1e1-4079-8dc6-25285231e09d
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is a203debe-0140-466b-9ee5-c71b476fd1ac
DEBUG:ceph-disk:OSD id is 4
DEBUG:ceph-disk:Marking with init system sysvinit
DEBUG:ceph-disk:ceph osd.4 data dir is ready at /var/lib/ceph/tmp/mnt.WDZv83
INFO:ceph-disk:ceph osd.4 already mounted in position; unmounting ours.
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.WDZv83
INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.WDZv83
DEBUG:ceph-disk:Starting ceph osd.4...
INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.4
=== osd.4 ===
Starting Ceph osd.4 on osd1...already running
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Running command: chkconfig ceph on
connected to host: osd2
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
activating host osd2 disk /dev/vdc1
will use init type: sysvinit
Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/vdc1
INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/vdc1
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.dlgr0S with options noatime,inode64
INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.dlgr0S
DEBUG:ceph-disk:Cluster uuid is e2345a18-c1e1-4079-8dc6-25285231e09d
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is 95b35ffa-b954-4add-b775-0b81ed3eaee5
DEBUG:ceph-disk:OSD id is 3
DEBUG:ceph-disk:Marking with init system sysvinit
DEBUG:ceph-disk:ceph osd.3 data dir is ready at /var/lib/ceph/tmp/mnt.dlgr0S
INFO:ceph-disk:ceph osd.3 already mounted in position; unmounting ours.
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.dlgr0S
INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.dlgr0S
DEBUG:ceph-disk:Starting ceph osd.3...
INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.3
=== osd.3 ===
Starting Ceph osd.3 on osd2...already running
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Running command: chkconfig ceph on
Error in sys.exitfunc:
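Both OSDs answer "already running", most likely because udev triggered ceph-disk activation as soon as osd prepare stamped the partition type GUIDs. Cluster-wide state can be confirmed from the monitor node (a minimal sketch; osd.3 and osd.4 are the IDs assigned in the output above):

# overall health plus mon/osd map summaries
ceph -s
# per-OSD up/in state and CRUSH placement
ceph osd tree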
------------------------------------------------------------------------------
ceph-deploy admin master osd1 osd2
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.26): /usr/bin/ceph-deploy admin master osd1 osd2
ceph-deploy options:
username                      : None
verbose                     : False
overwrite_conf                : False
prog                        : ceph-deploy
quiet                         : False
cd_conf                     :
cluster                     : ceph
client                        : ['master', 'osd1', 'osd2']
func                        :
ceph_conf                     : None
default_release               : False
Pushing admin keys and conf to master
connected to host: master
detect platform information from remote host
detect machine type
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to osd1
connected to host: osd1
detect platform information from remote host
detect machine type
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to osd2
connected to host: osd2
detect platform information from remote host
detect machine type
write cluster configuration to /etc/ceph/{cluster}.conf
Error in sys.exitfunc:
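With ceph.conf and the admin keyring pushed to all three hosts, the ceph CLI works from any of them. If ceph health fails with a permission error, the usual quick-start fix is to relax the keyring mode (a minimal sketch):

# make the admin keyring readable (only needed if 'ceph health' complains)
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health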


---------------------------------------------------------------------------------------------------------------------



