I. Nova
1. List the hosts in the OpenStack cluster
[iyunv@node-44 ~]# nova host-list
2. List the virtual machines
[iyunv@node-44 ~]# nova list
II. Glance
1. Compress a qcow2 image
[iyunv@node-44 glance]# qemu-img convert -c ubuntuwcf-10.40.254.64.img -O qcow2 /var/lib/glance/SunDaMing.img
2. Show an image's information (format and size)
[iyunv@node-44 glance]# qemu-img info SunDaMing.img
3. Upload an image to Glance
[iyunv@node-44 glance]# glance image-create --is-public true --disk-format qcow2 --container-format bare --name "SunDaMing" < /var/lib/glance/SunDaMing.img
III. Cinder
1. List all Cinder volumes
[iyunv@node-44 ~]# cinder list
IV. Ceph
1. List the pools in the Ceph cluster (pool names only)
[iyunv@node-44 ~]# rados lspools
data
metadata
rbd
images
volumes
.rgw.root
compute
.rgw.control
.rgw
.rgw.gc
.users.uid
2. List the pools together with each pool's capacity and usage
[iyunv@node-44 ~]# rados df
pool name       category  KB          objects  clones  degraded  unfound  rd       rd KB      wr       wr KB
.rgw            -         0           0        0       0         0        0        0          0        0
.rgw.control    -         0           8        0       0         0        0        0          0        0
.rgw.gc         -         0           32       0       0         0        57172    57172      38136    0
.rgw.root       -         1           4        0       0         0        75       46         10       10
.users.uid      -         1           1        0       0         0        0        0          2        1
compute         -         67430708    16506    0       0         0        398128   75927848   1174683  222082706
data            -         0           0        0       0         0        0        0          0        0
images          -         250069744   30683    0       0         0        50881    195328724  65025    388375482
metadata        -         0           0        0       0         0        0        0          0        0
rbd             -         0           0        0       0         0        0        0          0        0
volumes         -         79123929    19707    0       0         0        2575693  63437000   1592456  163812172
  total used       799318844    66941
  total avail    11306053720
  total space    12105372564
[iyunv@node-44 ~]#
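A quick sanity check on these numbers: the per-pool KB column is logical data, while "total used" is raw usage across all replicas, so dividing the two approximates the effective replica count. A minimal sketch with the figures from the output above (assumes a POSIX shell with awk):

```shell
# Sum of the non-zero per-pool KB figures from 'rados df' above (logical data)
logical_kb=$((67430708 + 1 + 1 + 250069744 + 79123929))
# 'total used' from the same output (raw usage across replicas)
raw_kb=799318844
# Raw usage / logical data approximates the replica count
awk -v l="$logical_kb" -v r="$raw_kb" 'BEGIN { printf "replicas ~= %.2f\n", r / l }'
```

This prints roughly 2.02, consistent with a pool replica size of 2.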
3. List all images in a Ceph pool
[iyunv@node-44 ~]# rbd ls images
2014-05-24 17:17:37.043659 7f14caa6e700 0 -- :/1025604 >> 10.49.101.9:6789/0 pipe(0x6c5400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x6c5660).fault
2182d9ac-52f4-4f5d-99a1-ab3ceacbf0b9
34e1a475-5b11-410c-b4c4-69b5f780f03c
476a9f3b-4608-4ffd-90ea-8750e804f46e
60eae8bf-dd23-40c5-ba02-266d5b942767
72e16e93-1fa5-4e11-8497-15bd904eeffe
74cb427c-cee9-47d0-b467-af217a67e60a
8f181a53-520b-4e22-af7c-de59e8ccca78
9867a580-22fe-4ed0-a1a8-120b8e8d18f4
ac6f4dae-4b81-476d-9e83-ad92ff25fb13
d20206d7-ff31-4dce-b59a-a622b0ea3af6
[iyunv@node-44 ~]# rbd ls volumes
2014-05-24 17:22:18.649929 7f9e98733700 0 -- :/1010725 >> 10.49.101.9:6789/0 pipe(0x96a400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x96a660).fault
volume-0788fc6c-0dd4-4339-bad4-e9d78bd5365c
volume-0898c5b4-4072-4cae-affc-ec59c2375c51
volume-2a1fb287-5666-4095-8f0b-6481695824e2
volume-35c6aad4-8ea4-4b8d-95c7-7c3a8e8758c5
volume-814494cc-5ae6-4094-9d06-d844fdf485c4
volume-8a6fb0db-35a9-4b3b-9ace-fb647c2918ea
volume-8c108991-9b03-4308-b979-51378bba2ed1
volume-8cf3d206-2cce-4579-91c5-77bcb4a8a3f8
volume-91fc075c-8bd1-41dc-b5ef-844f23df177d
volume-b1263d8b-0a12-4b51-84e5-74434c0e73aa
volume-b84fad5d-16ee-4343-8630-88f265409feb
volume-c03a2eb1-06a3-4d79-98e5-7c62210751c3
volume-c17bf6c0-80ba-47d9-862d-1b9e9a48231e
volume-c32bca55-7ec0-47ce-a87e-a883da4b4ccd
volume-df8961ef-11d6-4dae-96ee-f2df8eb4a08c
volume-f1c38695-81f8-44fd-9af0-458cddf103a3
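The names Ceph reports map straight back to OpenStack IDs: Glance stores images under the bare image UUID, while Cinder prefixes the volume UUID with `volume-`. A minimal sketch recovering the Cinder IDs from `rbd ls` output (sample names copied from the listing above):

```shell
# Strip the 'volume-' prefix to get the Cinder volume UUIDs
printf '%s\n' \
  volume-0788fc6c-0dd4-4339-bad4-e9d78bd5365c \
  volume-0898c5b4-4072-4cae-affc-ec59c2375c51 |
  sed 's/^volume-//'
```

The resulting UUIDs are what `cinder list` shows as the volume IDs.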
4. Show information about one image in a Ceph pool
[iyunv@node-44 ~]# rbd info -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a
rbd image '74cb427c-cee9-47d0-b467-af217a67e60a':
    size 1048 MB in 131 objects
    order 23 (8192 KB objects)
    block_name_prefix: rbd_data.95c7783fc0d0
    format: 2
    features: layering
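The fields in `rbd info` are internally consistent: `order` is the log2 of the object size, so objects are 2^23 bytes = 8192 KB here, and 131 objects of 8 MB each cover the 1048 MB image. A quick check with awk:

```shell
awk 'BEGIN {
  order = 23; objects = 131        # values from the rbd info output above
  obj_kb = 2^order / 1024          # object size: 2^23 bytes = 8192 KB
  printf "%d KB objects, %d MB\n", obj_kb, objects * obj_kb / 1024
}'
# prints: 8192 KB objects, 1048 MB
```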
5. Export an image from a Ceph pool
Export a Glance image:
[iyunv@node-44 ~]# rbd export -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a /root/aaa.img
2014-05-24 17:16:15.197695 7ffb47a9a700 0 -- :/1020493 >> 10.49.101.9:6789/0 pipe(0x1368400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x1368660).fault
Exporting image: 100% complete...done.
Export a Cinder volume:
[iyunv@node-44 ~]# rbd export -p volumes --image volume-0788fc6c-0dd4-4339-bad4-e9d78bd5365c /root/bbb.volume
2014-05-24 17:28:18.940402 7f14ad39f700 0 -- :/1032237 >> 10.49.101.9:6789/0 pipe(0x260a400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x260a660).fault
Exporting image: 100% complete...done.
6. Import an image into Ceph (a direct import alone is not usable: it bypasses OpenStack, so OpenStack will not see it)
[iyunv@node-44 ~]# rbd import /root/aaa.img -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a
Importing image: 100% complete...done.
7. To use an image exported from Ceph in OpenStack, upload it with glance:
[iyunv@node-44 ~]# glance image-create --is-public true --disk-format qcow2 --container-format bare --name "aaa.img" < aaa.img
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 18b2a4552647767b75a06527896850ae |
| container_format | bare |
| created_at | 2014-05-24T17:36:15 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | fee4521d-a6fc-4647-b31e-3789dbc3764b |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | aaa.img |
| owner | aaa7590fe42340ed8c8c113886f5856d |
| protected | False |
| size | 1098907648 |
| status | active |
| updated_at | 2014-05-24T17:36:54 |
+------------------+--------------------------------------+
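The `size` field Glance reports is in bytes, and it matches the image we exported: 1098907648 bytes is exactly the 1048 MB that `rbd info` showed. Verified quickly:

```shell
# 1098907648 bytes / 1024 / 1024 = 1048 MB, matching 'rbd info' above
awk 'BEGIN { printf "%d MB\n", 1098907648 / 1024 / 1024 }'
# prints: 1048 MB
```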
8. List the objects in a Ceph pool (these objects are the blocks that RBD images are stored as)
[iyunv@node-44 ~]# rados ls -p volumes | more
rbd_data.348f21ba7021.0000000000000866
rbd_data.32562ae8944a.0000000000000c79
rbd_data.589c2ae8944a.00000000000031ba
rbd_data.58c9151ff76b.00000000000029af
rbd_data.58c9151ff76b.0000000000002c19
rbd_data.58c9151ff76b.0000000000000a5a
rbd_data.58c9151ff76b.0000000000001c69
rbd_data.58c9151ff76b.000000000000281d
rbd_data.58c9151ff76b.0000000000002de1
rbd_data.58c9151ff76b.0000000000002dae
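Each object name has the form rbd_data.&lt;block_name_prefix&gt;.&lt;object index&gt;, where the prefix identifies which RBD image the block belongs to (compare the block_name_prefix in the `rbd info` output above). Counting names per prefix therefore counts objects per image; a sketch over a few sample names from the listing:

```shell
# Group object names by the image prefix (dot-separated field 2) and count them
printf '%s\n' \
  rbd_data.58c9151ff76b.0000000000000a5a \
  rbd_data.58c9151ff76b.0000000000001c69 \
  rbd_data.348f21ba7021.0000000000000866 |
  cut -d. -f2 | sort | uniq -c | sort -rn
```

On the full `rados ls -p volumes` output this gives a per-volume object count.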
9. Show the Ceph cluster's capacity usage
[iyunv@node-44 ~]# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED  %RAW USED
    11544G  10782G  762G      6.60
POOLS:
    NAME          ID  USED    %USED  OBJECTS
    data          0   0       0      0
    metadata      1   0       0      0
    rbd           2   0       0      0
    images        3   238G    2.07   30683
    volumes       4   77269M  0.65   19707
    .rgw.root     5   955     0      4
    compute       6   65850M  0.56   16506
    .rgw.control  7   0       0      8
    .rgw          8   0       0      0
    .rgw.gc       9   0       0      32
    .users.uid    10  155     0      1
[iyunv@node-44 ~]#
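The per-pool numbers here line up with `rados df`: the images pool's 250069744 KB converts to 238 G, and against the 11544 G global size that is the 2.07 %USED shown. A quick cross-check:

```shell
awk 'BEGIN {
  kb = 250069744          # images pool, KB (from rados df)
  total_gb = 11544        # GLOBAL SIZE from ceph df, in GB
  printf "%.0fG used, %.2f%% of cluster\n", kb / 1024 / 1024, kb / (total_gb * 1024 * 1024) * 100
}'
# prints: 238G used, 2.07% of cluster
```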
10. List Ceph's auth entries
[iyunv@node-44 ~]# ceph auth list
installed auth entries:
osd.0
    key: AQDrbndTKDH1OBAAH8tCB1p4o2kLVNp7NaVSuw==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQDtbndTKFF1BxAASRZwvBjUNPDWFgchBnLLsA==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQDubndTYPWtEBAAk19KffEC6YgsXGcXKARxbg==
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQB6YHdTUEfeFhAAC1F376iB4r2RmEtt3Yj/BQ==
    caps: [mds] allow
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQB7YHdTaPvMBhAAweJ4SuO+cuE39wKfCzm9Uw==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
    key: AQB6YHdT2J+NLBAArBLMz9uO/S6F6n5mRumQRA==
    caps: [mon] allow profile bootstrap-osd
client.compute
    key: AQAsb3dT+HLiAhAA9ctWEt8C07d9/+fjYU4A3Q==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images, allow rwx pool=compute
client.images
    key: AQCmYXdT8FGUHxAAMs6/63iRjCEhOJEbB6Ui4g==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
client.radosgw.gateway
    key: AQCzYndTEJuAHxAA0TgxMTbvjA0sRKwCsWeROA==
    caps: [mon] allow rw
    caps: [osd] allow rwx
client.volumes
    key: AQDDYXdTIDbrGBAAmJDM4Bsp59/Tc7uTPQZQdA==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images
11. Show the Ceph version
[iyunv@node-44 ~]# ceph --version
ceph version 0.67.5 (a60ac9194718083a4b6a225fc17cad6096c69bd1)
12. Show the Ceph cluster status
[iyunv@node-44 ~]# ceph status
cluster 15652267-49ab-4a02-a8a1-2002ff04ad2c
health HEALTH_WARN 1 mons down, quorum 0,1 node-44,node-46
monmap e3: 3 mons at {node-44=10.49.101.6:6789/0,node-46=10.49.101.8:6789/0,node-47=10.49.101.9:6789/0}, election epoch 26, quorum 0,1 node-44,node-46
osdmap e129: 3 osds: 3 up, 3 in
pgmap v261411: 992 pgs: 992 active+clean; 378 GB data, 762 GB used, 10782 GB / 11544 GB avail; 0B/s rd, 7352B/s wr, 2op/s
mdsmap e1: 0/0/1 up
13. Check the Ceph cluster's health
[iyunv@node-44 ~]# ceph health
HEALTH_WARN 1 mons down, quorum 0,1 node-44,node-46
# This warning means something is wrong; start the Ceph service on node-47
[iyunv@node-47 ~]# /etc/init.d/ceph start
=== mon.node-47 ===
Starting Ceph mon.node-47 on node-47...
Starting ceph-create-keys on node-47...
Run the health check again:
[iyunv@node-44 ~]# ceph health
HEALTH_OK    # the cluster is healthy now
14. Show the Ceph metadata server (MDS) status
[iyunv@node-44 ~]# ceph mds stat
e1: 0/0/1 up
15. Show the Ceph monitor status
[iyunv@node-44 ~]# ceph mon stat
e3: 3 mons at {node-44=10.49.101.6:6789/0,node-46=10.49.101.8:6789/0,node-47=10.49.101.9:6789/0}, election epoch 28, quorum 0,1,2 node-44,node-46,node-47
[iyunv@node-44 ~]# ceph mon_status
{"name":"node-46","rank":1,"state":"peon","election_epoch":28,"quorum":[0,1,2],"outside_quorum":[],"extra_probe_peers":["10.49.101.9:6789\/0"],"sync_provider":[],"monmap":{"epoch":3,"fsid":"15652267-49ab-4a02-a8a1-2002ff04ad2c","modified":"2014-05-17 13:59:00.566672","created":"0.000000","mons":[{"rank":0,"name":"node-44","addr":"10.49.101.6:6789\/0"},{"rank":1,"name":"node-46","addr":"10.49.101.8:6789\/0"},{"rank":2,"name":"node-47","addr":"10.49.101.9:6789\/0"}]}}
16. List the Ceph CRUSH rules
[iyunv@node-44 ~]# ceph osd crush rule list
[
"data",
"metadata",
"rbd"]
[iyunv@node-44 ~]#
17. List the Ceph OSD IDs
[iyunv@node-44 ~]# ceph osd ls
0
1
2
[iyunv@node-44 ~]#
18. List the Ceph OSD pools
[iyunv@node-44 ~]# ceph osd lspools
0 data,1 metadata,2 rbd,3 images,4 volumes,5 .rgw.root,6 compute,7 .rgw.control,8 .rgw,9 .rgw.gc,10 .users.uid,
[iyunv@node-44 ~]#
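The one-line "id name," output is easier to read with one pool per line; a small sketch using the output above:

```shell
# Turn the comma-separated 'ceph osd lspools' output into one pool per line
echo '0 data,1 metadata,2 rbd,3 images,4 volumes,5 .rgw.root,6 compute,7 .rgw.control,8 .rgw,9 .rgw.gc,10 .users.uid,' |
  tr ',' '\n' | sed '/^$/d'
```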
19. Show the Ceph OSD status
[iyunv@node-44 ~]# ceph osd stat
e129: 3 osds: 3 up, 3 in
V. MySQL
1. Log in to the database
mysql -uroot -p
Press Enter at the password prompt; there is no password by default.
2. Show all databases
show databases;
3. Select a database
use nova;    # switch to the nova database (use selects a database, not a table)
4. Show a table's structure
desc agent_builds;    # describe the agent_builds table
5. List the tables in the current database
mysql> show tables;
6. Select all rows from the migrations table
mysql> select * from migrations ;
7. Delete a row from a table
mysql> delete from ipallocationpools where id="041d7993-394b-4ec3-aae3-365aeb5dc10f";
8. Update a volume's record in the volumes table
mysql> update volumes set host="node-44.domain.tld" where id="8266d5cc-9be5-4c3b-a46e-06aa055c66e1";
9. Check several status fields of a volume
mysql> select status,attached_host,migration_status,deleted from volumes where id="0d883e29-1bc7-4e54-9eba-1883c4c32313";
10. Reset a volume's migration status in the database
Before:
mysql> select status,attached_host,migration_status,deleted from volumes where id="8266d5cc-9be5-4c3b-a46e-06aa055c66e1";
+-----------+---------------+---------------------------------------------+---------+
| status | attached_host | migration_status | deleted |
+-----------+---------------+---------------------------------------------+---------+
| available | NULL | target:10de2dad-21c9-482f-b1c4-5afedef36dde | 0 |
+-----------+---------------+---------------------------------------------+---------+
1 row in set (0.00 sec)
After (use a bare NULL; the quoted string "NULL" stores literal text, not SQL NULL):
mysql> update volumes set migration_status=NULL where id="8266d5cc-9be5-4c3b-a46e-06aa055c66e1";
mysql> select status,attached_host,migration_status,deleted from volumes where id="8266d5cc-9be5-4c3b-a46e-06aa055c66e1";
+-----------+---------------+------------------+---------+
| status | attached_host | migration_status | deleted |
+-----------+---------------+------------------+---------+
| available | NULL | NULL | 0 |
+-----------+---------------+------------------+---------+
1 row in set (0.00 sec)
11. Permanently delete a broken volume
First set the volume's status to "deleted":
mysql> update volumes set status="deleted" where id="8266d5cc-9be5-4c3b-a46e-06aa055c66e1";
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0
Then mark the volume as deleted:
mysql> update volumes set deleted="1" where id="8266d5cc-9be5-4c3b-a46e-06aa055c66e1";
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0
12. View the IP pools in the database
First switch to the neutron database (use neutron;), then:
mysql> select * from ipallocationpools;
13. View the allocated VLANs in the database
First switch to the neutron database (use neutron;), then:
mysql> select * from ovs_vlan_allocations;