OpenStack Operations in Practice Series (Part 16): Installing Ceph Storage
1. Preface
Before putting OpenStack into production, an enterprise has to think through and solve three hard problems: 1. high availability and load balancing of the control cluster, so that there is no single point of failure and the cluster stays continuously available; 2. network planning and high availability / load balancing of Neutron L3 agents; 3. high availability and performance of the storage backend. Storage is one of the pain points of OpenStack and deserves careful planning both before going live and during day-to-day operations. OpenStack supports many kinds of storage, including distributed systems such as Ceph, GlusterFS, and Sheepdog, as well as commercial FC storage from IBM, EMC, NetApp, and Huawei, which makes it possible to reuse existing hardware and manage storage resources in a unified way.
2. Ceph Overview
Ceph has been the most talked-about unified storage system of recent years and is a natural fit for cloud environments. It has helped open-source platforms such as OpenStack and CloudStack succeed, and in turn the rapid growth of OpenStack has drawn more and more people into Ceph. Over the course of 2015 the Ceph community became noticeably more active, and a growing number of enterprises now use Ceph as the backend for OpenStack Glance, Nova, and Cinder.
Ceph is a unified distributed storage system that exposes three commonly used interfaces: 1. an object storage interface, compatible with S3, used for unstructured data such as images, video, and audio files (other object stores include S3, Swift, and FastDFS); 2. a file system interface, provided by CephFS, which can be mounted much like NFS and requires an MDS (comparable file storage systems include NFS, Samba, and GlusterFS); 3. block storage, provided by RBD, used for block devices in cloud environments such as OpenStack Cinder volumes, which is currently where Ceph is most widely deployed.
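As a quick illustration of the block interface that the rest of this series relies on, the rbd command line can be used directly once a cluster is running (a minimal sketch; test-img is an arbitrary image name and the default rbd pool is assumed):
# rados lspools                    # a fresh cluster of this era ships with an 'rbd' pool
# rbd create test-img --size 1024  # create a 1 GB image (size is in MB) in the rbd pool
# rbd ls                           # list images in the rbd pool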
Ceph architecture:
3. Installing Ceph
1. Environment. Ceph is deployed with ceph-deploy. The three nodes and their roles are listed below; set the hostnames as shown, and format the data disks as xfs and mount them beforehand (see the sketch after the table).
IP            Hostname                 Role                         Disk
10.1.2.230    controller_10_1_2_230    Monitor / admin (ceph-deploy)  -
10.1.2.231    network_10_1_2_231       OSD                          /data1/osd0, formatted as xfs
10.1.2.232    compute1_10_1_2_232      OSD                          /data1/osd1, formatted as xfs
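The disk preparation itself is not shown in the steps below, so here is a minimal sketch for one OSD node, assuming the data disk is /dev/sdb (adjust the device name to your hardware):
# mkfs.xfs -f /dev/sdb
# mkdir -p /data1
# mount -t xfs -o noatime /dev/sdb /data1
# echo '/dev/sdb /data1 xfs defaults,noatime 0 0' >> /etc/fstab   # make the mount persistent
# df -hT /data1                                                   # confirm the filesystem type is xfs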
2. Prepare the yum repositories. Both the EPEL repository and the Ceph repository must be configured.
# vim /etc/yum.repos.d/ceph.repo
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
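Note that a yum repo file needs a bracketed section header above these lines, such as [ceph-noarch] as used in the upstream quick-start template, and the {ceph-release} and {distro} placeholders must be replaced with a concrete release name and distribution before yum can use the repository. For example (an illustrative choice, not necessarily the release used here), for CentOS 6 and the Hammer release:
baseurl=http://download.ceph.com/rpm-hammer/el6/noarch
With EPEL and this repository in place, yum repolist should list the ceph repo, as shown below.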
# yum repolist
Loaded plugins: fastestmirror, priorities
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
Determining fastest mirrors
base | 3.7 kB 00:00
base/primary_db | 4.4 MB 00:01
ceph | 2.9 kB 00:00
ceph/primary_db |24 kB 00:00
epel | 4.4 kB 00:00
epel/primary_db | 6.3 MB 00:00
extras | 3.3 kB 00:00
extras/primary_db |19 kB 00:00
openstack-icehouse | 2.9 kB 00:00
openstack-icehouse/primary_db | 902 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 5.3 MB 00:00
211 packages excluded due to repository priority protections
repo id repo name status
base CentOS-6 - Base 6,311+56
ceph ceph 21+1
epel Extra Packages for Enterprise Linux 6 - x86_64 11,112+36
extras CentOS-6 - Extras 15
openstack-icehouse OpenStack Icehouse Repository 1,353+309
updates CentOS-6 - Updates 1,397+118
repolist: 20,209
3. Configure NTP. Every node must run the NTP service, all synchronized to the internal NTP server (10.1.2.230 here):
# vim /etc/ntp.conf
server 10.1.2.230
# /etc/init.d/ntpd restart
Shutting down ntpd:
Starting ntpd:
# chkconfig ntpd on
# chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
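To confirm that a node is actually syncing against 10.1.2.230, a quick check (not part of the original steps) is:
# ntpq -p                    # the peer line for 10.1.2.230 should eventually be marked with '*'
# ntpdate -q 10.1.2.230      # query the server and report the offset without setting the clock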
4. Install Ceph. Ceph must be installed on every node; on the admin node, ceph-deploy is installed in addition, to drive the deployment.
# yum install ceph ceph-deploy    # admin node
# yum install ceph -y             # OSD node
# yum install ceph -y             # OSD node
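Alternatively (not used in this walkthrough), ceph-deploy can push the packages to all hosts from the admin node, which helps keep versions consistent, assuming every host can reach the configured repositories:
# ceph-deploy install controller_10_1_2_230 network_10_1_2_231 compute1_10_1_2_232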
5. Generate an SSH key pair and copy the public key to the remote nodes
a. Generate the key
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
75:b4:24:cd:24:a6:a4:a4:a5:c7:da:6f:e7:29:ce:0f root@controller_10_1_2_230
The key's randomart image is:
+--[ RSA 2048]----+
| o . +++ |
| * o o =o. |
| o + . . o |
| + . . |
| . . S |
| . |
| E . |
| o.+ . |
| .oo+ |
+-----------------+
b. Copy the public key to the remote hosts
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@controller_10_1_2_230
The authenticity of host ':32200 (:32200)' can't be established.
RSA key fingerprint is 7c:6b:e6:d5:b9:cc:09:d2:b7:bb:db:a4:41:aa:5e:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ':32200,:32200' (RSA) to the list of known hosts.
root@controller_10_1_2_230's password:
Now try logging into the machine, with "ssh 'root@controller_10_1_2_230'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@network_10_1_2_231
The authenticity of host ':32200 (:32200)' can't be established.
RSA key fingerprint is de:27:84:74:d3:9c:cf:d5:d8:3e:c4:65:d5:9d:dc:9a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ':32200,:32200' (RSA) to the list of known hosts.
root@network_10_1_2_231's password:
Now try logging into the machine, with "ssh 'root@network_10_1_2_231'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@compute1_10_1_2_232
The authenticity of host ':32200 (:32200)' can't be established.
RSA key fingerprint is d7:3a:1a:3d:b5:26:78:6a:39:5e:bd:5d:d4:96:29:0f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ':32200,:32200' (RSA) to the list of known hosts.
root@compute1_10_1_2_232's password:
Now try logging into the machine, with "ssh 'root@compute1_10_1_2_232'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
c. Verify that passwordless login works
# ssh root@compute1_10_1_2_232 'df -h'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.9G  3.3G  6.1G  35% /
tmpfs           3.9G  8.0K  3.9G   1% /dev/shm
/dev/sda1      1008M   82M  876M   9% /boot
/dev/sda4       913G   33M  913G   1% /data1
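The known-hosts warnings above show that sshd on these hosts listens on port 32200 rather than 22. So that ceph-deploy can reach the nodes without extra options, the user and non-default port can be recorded in ~/.ssh/config on the admin node (a sketch; adjust the port if yours differs):
# vim /root/.ssh/config
Host controller_10_1_2_230 network_10_1_2_231 compute1_10_1_2_232
    User root
    Port 32200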
6. Create the initial monitor node
a. Create a working directory for the configuration; all subsequent operations are executed from this directory
# mkdir ceph-deploy
# cd ceph-deploy/
b. Create a new cluster
# ceph-deploy new --cluster-network 10.1.2.0/24 --public-network 10.1.2.0/24 controller_10_1_2_230
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.19): /usr/bin/ceph-deploy new --cluster-network 10.1.2.0/24 --public-network 10.1.2.0/24 controller_10_1_2_230
Creating new cluster named ceph
making sure passwordless SSH succeeds
connected to host: controller_10_1_2_230
detect platform information from remote host
detect machine type
find the location of an executable
Running command: /sbin/ip link show
Running command: /sbin/ip addr show
IP addresses found: ['10.1.2.230']
Resolving host controller_10_1_2_230
Monitor controller_10_1_2_230 at 10.1.2.230
Monitor initial members are ['controller_10_1_2_230']
Monitor addrs are ['10.1.2.230']
Creating a random mon key...
Writing monitor keyring to ceph.mon.keyring...
Writing initial config to ceph.conf...
Error in sys.exitfunc:
c. Adjust the replica count
# vim ceph.conf
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx #enable cephx authentication
mon_host = 10.1.2.230 #monitor IP address
public_network = 10.1.2.0/24 #public network, i.e. the client-facing network
mon_initial_members = controller_10_1_2_230 #initial monitor(s); an odd number is recommended to prevent split-brain
cluster_network = 10.1.2.0/24 #internal cluster (replication) network
fsid = 07462638-a00f-476f-8257-3f4c9ec12d6e #cluster fsid
osd pool default size = 2 #number of replicas
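Note that osd pool default size only applies to pools created after the change; for a pool that already exists, the replica count can be set directly (a sketch, using the default rbd pool as an example):
# ceph osd pool set rbd size 2
# ceph osd dump | grep 'replicated size'   # verify the pool's size setting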
d. Initialize the monitor
# ceph-deploy mon create-initial
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.19): /usr/bin/ceph-deploy mon create-initial
Deploying mon, cluster ceph hosts controller_10_1_2_230
detecting platform for host controller_10_1_2_230 ...
connected to host: controller_10_1_2_230
detect platform information from remote host
detect machine type
distro info: CentOS 6.5 Final
determining if provided host has same hostname in remote
get remote short hostname
deploying mon to controller_10_1_2_230
get remote short hostname
remote hostname: controller_10_1_2_230
write cluster configuration to /etc/ceph/{cluster}.conf
create the mon path if it does not exist
checking for done path: /var/lib/ceph/mon/ceph-controller_10_1_2_230/done
done path does not exist: /var/lib/ceph/mon/ceph-controller_10_1_2_230/done
creating keyring file: /var/lib/ceph/tmp/ceph-controller_10_1_2_230.mon.keyring
create the monitor keyring file
Running command: ceph-mon --cluster ceph --mkfs -i controller_10_1_2_230 --keyring /var/lib/ceph/tmp/ceph-controller_10_1_2_230.mon.keyring
ceph-mon: renaming mon.noname-a 10.1.2.230:6789/0 to mon.controller_10_1_2_230
ceph-mon: set fsid to 07462638-a00f-476f-8257-3f4c9ec12d6e
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-controller_10_1_2_230 for mon.controller_10_1_2_230
unlinking keyring file /var/lib/ceph/tmp/ceph-controller_10_1_2_230.mon.keyring
create a done file to avoid re-doing the mon deployment
create the init path if it does not exist
locating the `service` executable...
Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.controller_10_1_2_230
=== mon.controller_10_1_2_230 ===
Starting Ceph mon.controller_10_1_2_230 on controller_10_1_2_230...
Starting ceph-create-keys on controller_10_1_2_230...
Running command: chkconfig ceph on
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.controller_10_1_2_230.asok mon_status
********************************************************************************
status for monitor: mon.controller_10_1_2_230
{
"election_epoch": 2,
"extra_probe_peers": [],
"monmap": {
"created": "0.000000",
"epoch": 1,
"fsid": "07462638-a00f-476f-8257-3f4c9ec12d6e",
"modified": "0.000000",
"mons": [
{
"addr": "10.1.2.230:6789/0",
"name": "controller_10_1_2_230",
"rank": 0
}
]
},
"name": "controller_10_1_2_230",
"outside_quorum": [],
"quorum": [
0
],
"rank": 0,
"state": "leader",
"sync_provider": []
}
********************************************************************************
monitor: mon.controller_10_1_2_230 is running
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.controller_10_1_2_230.asok mon_status
processing monitor mon.controller_10_1_2_230
connected to host: controller_10_1_2_230
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.controller_10_1_2_230.asok mon_status
mon.controller_10_1_2_230 monitor has reached quorum!
all initial monitors are running and have formed quorum
Running gatherkeys...
Checking controller_10_1_2_230 for /etc/ceph/ceph.client.admin.keyring
connected to host: controller_10_1_2_230
detect platform information from remote host
detect machine type
fetch remote file
Got ceph.client.admin.keyring key from controller_10_1_2_230.
Have ceph.mon.keyring
Checking controller_10_1_2_230 for /var/lib/ceph/bootstrap-osd/ceph.keyring
connected to host: controller_10_1_2_230
detect platform information from remote host
detect machine type
fetch remote file
Got ceph.bootstrap-osd.keyring key from controller_10_1_2_230.
Checking controller_10_1_2_230 for /var/lib/ceph/bootstrap-mds/ceph.keyring
connected to host: controller_10_1_2_230
detect platform information from remote host
detect machine type
fetch remote file
Got ceph.bootstrap-mds.keyring key from controller_10_1_2_230.
Error in sys.exitfunc:
e. Check the cluster status
# ceph -s
cluster 07462638-a00f-476f-8257-3f4c9ec12d6e
health HEALTH_ERR 64 pgs stuck inactive; 64 pgs stuck unclean; no osds
monmap e1: 1 mons at {controller_10_1_2_230=10.1.2.230:6789/0}, election epoch 2, quorum 0 controller_10_1_2_230
osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
PS: Because there are no OSDs yet, the cluster health is HEALTH_ERR; once OSDs have been added the error goes away.
7. Add OSDs. As noted in the environment preparation, the data disks are formatted as xfs and mounted first.
a. Create the OSD data directories
# ssh root@network_10_1_2_231 'mkdir -pv /data1/osd0'
mkdir: created directory `/data1/osd0'
# ssh compute1_10_1_2_232 'mkdir -pv /data1/osd1'
mkdir: created directory `/data1/osd1'
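Before preparing the OSDs it is worth confirming that the directories really sit on the xfs mounts (a quick sanity check, not part of the original steps):
# ssh root@network_10_1_2_231 'df -hT /data1'
# ssh root@compute1_10_1_2_232 'df -hT /data1'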
b. Prepare the OSDs
# ceph-deploy osd prepare network_10_1_2_231:/data1/osd0 compute1_10_1_2_232:/data1/osd1
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.19): /usr/bin/ceph-deploy osd prepare network_10_1_2_231:/data1/osd0 compute1_10_1_2_232:/data1/osd1
Preparing cluster ceph disks network_10_1_2_231:/data1/osd0: compute1_10_1_2_232:/data1/osd1:
connected to host: network_10_1_2_231
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
Deploying osd to network_10_1_2_231
write cluster configuration to /etc/ceph/{cluster}.conf
osd keyring does not exist yet, creating one
create a keyring file
Running command: udevadm trigger --subsystem-match=block --action=add
Preparing host network_10_1_2_231 disk /data1/osd0 journal None activate False
Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /data1/osd0
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir /data1/osd0
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Host network_10_1_2_231 is now ready for osd use.
connected to host: compute1_10_1_2_232
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
Deploying osd to compute1_10_1_2_232
write cluster configuration to /etc/ceph/{cluster}.conf
osd keyring does not exist yet, creating one
create a keyring file
Running command: udevadm trigger --subsystem-match=block --action=add
Preparing host compute1_10_1_2_232 disk /data1/osd1 journal None activate False
Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /data1/osd1
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir /data1/osd1
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Host compute1_10_1_2_232 is now ready for osd use.
Error in sys.exitfunc:
c. Activate the OSDs
# ceph-deploy osd activate network_10_1_2_231:/data1/osd0 compute1_10_1_2_232:/data1/osd1
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.19): /usr/bin/ceph-deploy osd activate network_10_1_2_231:/data1/osd0 compute1_10_1_2_232:/data1/osd1
Activating cluster ceph disks network_10_1_2_231:/data1/osd0: compute1_10_1_2_232:/data1/osd1:
connected to host: network_10_1_2_231
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
activating host network_10_1_2_231 disk /data1/osd0
will use init type: sysvinit
Running command: ceph-disk -v activate --mark-init sysvinit --mount /data1/osd0
=== osd.0 ===
Starting Ceph osd.0 on network_10_1_2_231...
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
DEBUG:ceph-disk:Cluster uuid is 07462638-a00f-476f-8257-3f4c9ec12d6e
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is 6d298aa5-b33b-4d93-8ac4-efaa671200e5
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 6d298aa5-b33b-4d93-8ac4-efaa671200e5
DEBUG:ceph-disk:OSD id is 0
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /data1/osd0/activate.monmap
got monmap epoch 1
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /data1/osd0/activate.monmap --osd-data /data1/osd0 --osd-journal /data1/osd0/journal --osd-uuid 6d298aa5-b33b-4d93-8ac4-efaa671200e5 --keyring /data1/osd0/keyring
2016-01-28 15:48:59.836760 7f2ec01337a0 -1 journal FileJournal::_open: disabling aio for non-block journal.Use journal_force_aio to force use of aio anyway
2016-01-28 15:49:00.053509 7f2ec01337a0 -1 journal FileJournal::_open: disabling aio for non-block journal.Use journal_force_aio to force use of aio anyway
2016-01-28 15:49:00.053971 7f2ec01337a0 -1 filestore(/data1/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2016-01-28 15:49:00.262211 7f2ec01337a0 -1 created object store /data1/osd0 journal /data1/osd0/journal for osd.0 fsid 07462638-a00f-476f-8257-3f4c9ec12d6e
2016-01-28 15:49:00.262234 7f2ec01337a0 -1 auth: error reading file: /data1/osd0/keyring: can't open /data1/osd0/keyring: (2) No such file or directory
2016-01-28 15:49:00.262280 7f2ec01337a0 -1 created new key in keyring /data1/osd0/keyring
DEBUG:ceph-disk:Marking with init system sysvinit
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /data1/osd0/keyring osd allow * mon allow profile osd
added key for osd.0
DEBUG:ceph-disk:ceph osd.0 data dir is ready at /data1/osd0
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-0 -> /data1/osd0
DEBUG:ceph-disk:Starting ceph osd.0...
INFO:ceph-disk:Running command: /sbin/service ceph start osd.0
create-or-move updating item name 'osd.0' weight 0.89 at location {host=network_10_1_2_231,root=default} to crush map
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Running command: chkconfig ceph on
connected to host: compute1_10_1_2_232
detect platform information from remote host
detect machine type
Distro info: CentOS 6.5 Final
activating host compute1_10_1_2_232 disk /data1/osd1
will use init type: sysvinit
Running command: ceph-disk -v activate --mark-init sysvinit --mount /data1/osd1
=== osd.1 ===
Starting Ceph osd.1 on compute1_10_1_2_232...
starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
DEBUG:ceph-disk:Cluster uuid is 07462638-a00f-476f-8257-3f4c9ec12d6e
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is 770582a6-e408-4fb8-b59b-8b781d5e226b
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 770582a6-e408-4fb8-b59b-8b781d5e226b
DEBUG:ceph-disk:OSD id is 1
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /data1/osd1/activate.monmap
got monmap epoch 1
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /data1/osd1/activate.monmap --osd-data /data1/osd1 --osd-journal /data1/osd1/journal --osd-uuid 770582a6-e408-4fb8-b59b-8b781d5e226b --keyring /data1/osd1/keyring
2016-01-28 15:49:08.734889 7fad7e2a3800 -1 journal FileJournal::_open: disabling aio for non-block journal.Use journal_force_aio to force use of aio anyway
2016-01-28 15:49:08.976634 7fad7e2a3800 -1 journal FileJournal::_open: disabling aio for non-block journal.Use journal_force_aio to force use of aio anyway
2016-01-28 15:49:08.976965 7fad7e2a3800 -1 filestore(/data1/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2016-01-28 15:49:09.285254 7fad7e2a3800 -1 created object store /data1/osd1 journal /data1/osd1/journal for osd.1 fsid 07462638-a00f-476f-8257-3f4c9ec12d6e
2016-01-28 15:49:09.285287 7fad7e2a3800 -1 auth: error reading file: /data1/osd1/keyring: can't open /data1/osd1/keyring: (2) No such file or directory
2016-01-28 15:49:09.285358 7fad7e2a3800 -1 created new key in keyring /data1/osd1/keyring
DEBUG:ceph-disk:Marking with init system sysvinit
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i /data1/osd1/keyring osd allow * mon allow profile osd
added key for osd.1
DEBUG:ceph-disk:ceph osd.1 data dir is ready at /data1/osd1
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-1 -> /data1/osd1
DEBUG:ceph-disk:Starting ceph osd.1...
INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.1
libust: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
create-or-move updating item name 'osd.1' weight 0.89 at location {host=compute1_10_1_2_232,root=default} to crush map
libust: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
checking OSD status...
Running command: ceph --cluster=ceph osd stat --format=json
Running command: chkconfig ceph on
Error in sys.exitfunc:
d. Check the cluster status again
# ceph -s
cluster 07462638-a00f-476f-8257-3f4c9ec12d6e
health HEALTH_OK #the status is now OK
monmap e1: 1 mons at {controller_10_1_2_230=10.1.2.230:6789/0}, election epoch 2, quorum 0 controller_10_1_2_230 #monitor information
osdmap e8: 2 osds: 2 up, 2 in #OSD information: 2 OSDs, both up and in
pgmap v14: 64 pgs, 1 pools, 0 bytes data, 0 objects
10305 MB used, 1814 GB / 1824 GB avail
64 active+clean
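The OSD-to-host mapping can also be inspected with the command below; with this layout it should show osd.0 under network_10_1_2_231 and osd.1 under compute1_10_1_2_232, both with weight 0.89 as reported during activation:
# ceph osd tree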
8. Copy the admin keyring to the other nodes
# ceph-deploy admin controller_10_1_2_230 network_10_1_2_231 compute1_10_1_2_232
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.19): /usr/bin/ceph-deploy admin controller_10_1_2_230 network_10_1_2_231 compute1_10_1_2_232
Pushing admin keys and conf to controller_10_1_2_230
connected to host: controller_10_1_2_230
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to network_10_1_2_231
connected to host: network_10_1_2_231
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing admin keys and conf to compute1_10_1_2_232
connected to host: compute1_10_1_2_232
detect platform information from remote host
detect machine type
get remote short hostname
write cluster configuration to /etc/ceph/{cluster}.conf
Error in sys.exitfunc:
The other nodes are now also able to manage the Ceph cluster:
# ll /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 63 Jan 28 15:52 /etc/ceph/ceph.client.admin.keyring
# ceph health
HEALTH_OK
# ceph -s
cluster 07462638-a00f-476f-8257-3f4c9ec12d6e
health HEALTH_OK
monmap e1: 1 mons at {controller_10_1_2_230=10.1.2.230:6789/0}, election epoch 2, quorum 0 controller_10_1_2_230
osdmap e8: 2 osds: 2 up, 2 in
pgmap v15: 64 pgs, 1 pools, 0 bytes data, 0 objects
10305 MB used, 1814 GB / 1824 GB avail
64 active+clean
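If ceph commands on a non-admin node fail with a permission error on the keyring, making it readable is the usual fix (here the file is already mode 644, so this step is not needed):
# chmod +r /etc/ceph/ceph.client.admin.keyring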
At this point the basic Ceph setup is complete.
4. Summary
The configuration above is the most basic Ceph setup and is intended only as an exercise. Running Ceph in production requires weighing many more factors: there should be at least three monitors, the cluster network and public network need to be planned separately, OSD placement policies need tuning, and so on. Please keep following this blog; later posts in the Ceph series will go deeper, and the next articles will cover integrating Ceph with Glance, Nova, and Cinder.
5. References
http://docs.ceph.com/docs/master/start/quick-ceph-deploy