OpenStack Installation (Liberty)
8. Install the Block Storage service (cinder)
### NOTE: time synchronization between nodes is critical for this service.
8.1 Environment preparation: configure the host as in the earlier sections, including hostname, hosts file, time synchronization, firewall, SELinux, and the base OpenStack packages.
8.2 Controller node configuration
8.2.1 Create the database and grant privileges
# mysql -uroot -p
Enter password:
MariaDB [(none)]> create database cinder;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'cinder';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by 'cinder';
8.2.2 Create the user and add the role and project
# . admin-openrc.sh
# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 272ebc2639084f76b02e610e6f89cc36 |
| name | cinder |
+-----------+----------------------------------+
# openstack role add --project service --user cinder admin
# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 33fa43d4e0f14b209b7ee90ef1e424a4 |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 0f48b8a432dc4701a73640fe68987d37 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
8.2.3 Create the API endpoints
# openstack endpoint create --region RegionOne volume public http://controller1:8776/v1/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | ffcf850701fe4199bc40e0a3ad2b8b1b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 33fa43d4e0f14b209b7ee90ef1e424a4 |
| service_name | cinder |
| service_type | volume |
| url | http://controller1:8776/v1/%(tenant_id)s |
+--------------+------------------------------------------+
# openstack endpoint create --region RegionOne volume internal http://controller1:8776/v1/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | d040e9fa081741edb72e1a22caae54e0 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 33fa43d4e0f14b209b7ee90ef1e424a4 |
| service_name | cinder |
| service_type | volume |
| url | http://controller1:8776/v1/%(tenant_id)s |
+--------------+------------------------------------------+
# openstack endpoint create --region RegionOne volume admin http://controller1:8776/v1/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 562a36debdff48d68070f31c11780038 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 33fa43d4e0f14b209b7ee90ef1e424a4 |
| service_name | cinder |
| service_type | volume |
| url | http://controller1:8776/v1/%(tenant_id)s |
+--------------+------------------------------------------+
# openstack endpoint create --region RegionOne volumev2 public http://controller1:8776/v2/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 8215d81d0a1046059ac1581c598e1bde |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0f48b8a432dc4701a73640fe68987d37 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller1:8776/v2/%(tenant_id)s |
+--------------+------------------------------------------+
# openstack endpoint create --region RegionOne volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 0d4659dea7c14d75a56bbda84d9fd5c7 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0f48b8a432dc4701a73640fe68987d37 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller1:8776/v2/%(tenant_id)s |
+--------------+------------------------------------------+
# openstack endpoint create --region RegionOne volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 615cf54933b04ca7a7a49bb06013abff |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0f48b8a432dc4701a73640fe68987d37 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller1:8776/v2/%(tenant_id)s |
+--------------+------------------------------------------+
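The six endpoint-create commands above differ only in the API version and the interface, so they can be generated mechanically. A hedged sketch, as a dry run: it only prints the commands into endpoints.txt for review; after sourcing admin-openrc.sh you could execute them with `sh endpoints.txt`.

```shell
# Generate the six cinder endpoint-create commands instead of typing them.
# Dry run: nothing is created until the file is executed against a live cloud.
for pair in volume:v1 volumev2:v2; do
  name=${pair%%:*}   # service name: volume / volumev2
  ver=${pair##*:}    # API version in the URL: v1 / v2
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $name $iface http://controller1:8776/$ver/%\\(tenant_id\\)s"
  done
done > endpoints.txt
cat endpoints.txt
```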
8.2.4 Install and configure components
# yum install openstack-cinder python-cinderclient -y
# vi /etc/cinder/cinder.conf
[database]
connection = mysql://cinder:cinder@controller1/cinder
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
## optional, useful for troubleshooting
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller1
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
### comment out any other options in this section
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
8.2.5 Initialize the database
# su -s /bin/sh -c "cinder-manage db sync" cinder
No handlers could be found for logger "oslo_config.cfg"
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
2016-08-03 08:24:21.797 2556 INFO migrate.versioning.api [-] 0 -> 1...
2016-08-03 08:24:22.421 2556 INFO migrate.versioning.api [-] done
...(migrations 1 -> 2 through 58 -> 59 complete in the same way; output trimmed. A SQLAlchemy SAWarning about the 'encryption' table primary key appears during migration 32 -> 33 and does not stop the sync.)...
2016-08-03 08:24:25.089 2556 INFO migrate.versioning.api [-] 59 -> 60...
2016-08-03 08:24:25.097 2556 INFO migrate.versioning.api [-] done
8.2.6 Configure the compute node to use Block Storage
# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
8.2.7 Start the services and enable them at boot
# systemctl restart openstack-nova-api.service
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
8.3 Configure the block1 (cinder) node
8.3.1 Install and start the LVM service
# yum install lvm2
# systemctl is-enabled lvm2-lvmetad
disabled
# systemctl enable lvm2-lvmetad.service
Created symlink from /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.service to /usr/lib/systemd/system/lvm2-lvmetad.service.
# systemctl start lvm2-lvmetad.service
# systemctl is-enabled lvm2-lvmetad
enabled
8.3.2 Create the LVM physical volume and volume group
# ll /dev/sd*
brw-rw---- 1 root disk 8,  0 Aug  3 08:10 /dev/sda
brw-rw---- 1 root disk 8,  1 Aug  3 08:10 /dev/sda1
brw-rw---- 1 root disk 8,  2 Aug  3 08:10 /dev/sda2
brw-rw---- 1 root disk 8, 16 Aug  3 08:10 /dev/sdb
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
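Before pointing cinder at the volume group, it is worth confirming the VG is actually visible. A minimal sketch, fed a sample capture so it runs anywhere; on the storage node, replace the sample with the real `vgs --noheadings -o vg_name` output as shown in the comment.

```shell
# Confirm the cinder-volumes VG exists before configuring cinder-volume.
# vgs_output is a sample capture; on the storage node use:
#   vgs_output=$(vgs --noheadings -o vg_name)
vgs_output='  cinder-volumes'
if printf '%s\n' "$vgs_output" | grep -qw 'cinder-volumes'; then
  echo "cinder-volumes VG present"
else
  echo "cinder-volumes VG missing" >&2
fi
```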
8.3.3 Configure LVM device filtering
8.3.3.1 If the OS disk of the block storage node or of a compute node also uses LVM, configure a filter on those nodes as well. Filter entries are evaluated in order: "a" accepts a device, "r" rejects it, and the trailing "r/.*/" rejects every device not explicitly accepted.
# vi /etc/lvm/lvm.conf  ## block storage node
devices {
...
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
...
}
# vi /etc/lvm/lvm.conf  ## compute node
devices {
...
filter = [ "a/sda/", "r/.*/"]
...
}
8.3.3.2 If the OS disk of the block storage node or compute node does not use LVM, only the following configuration is needed:
# vi /etc/lvm/lvm.conf  ## block storage node
devices {
...
filter = [ "a/sdb/", "r/.*/"]
...
}
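A quick way to catch typos in the filter before restarting services is to grep the edited file. A minimal sketch, assuming the no-OS-LVM case above; it builds a sample copy so it runs anywhere. On the node, point LVM_CONF at /etc/lvm/lvm.conf instead of generating a file.

```shell
# Sanity-check the LVM filter line. LVM_CONF is a sample copy here; on the
# node set LVM_CONF=/etc/lvm/lvm.conf instead of generating a file.
LVM_CONF=$(mktemp)
cat > "$LVM_CONF" <<'EOF'
devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
EOF
# The filter must accept the cinder disk (sdb) and end with a reject-all.
grep -E 'filter *= *\[' "$LVM_CONF"
```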
8.3.4 Install and configure components
# yum install openstack-cinder targetcli python-oslo-policy -y
# vi /etc/cinder/cinder.conf
[database]
connection = mysql://cinder:cinder@controller1/cinder
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.41
enabled_backends = lvm
glance_host = controller1
## optional, useful for troubleshooting
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller1
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
#### NOTE: the [lvm] section does not exist in the stock configuration file and must be added as a new section; do not modify the existing settings, otherwise volumes cannot be attached to instances.
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
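One pitfall with this file: enabled_backends must name a section that actually exists, or cinder-volume starts with no backend. A hedged sanity check, demonstrated on a generated sample file; on the storage node, set CONF=/etc/cinder/cinder.conf instead.

```shell
# Verify that enabled_backends points at an existing config section.
# CONF is a generated sample here; on the node use CONF=/etc/cinder/cinder.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
enabled_backends = lvm
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
EOF
backend=$(awk -F' *= *' '$1 == "enabled_backends" { print $2 }' "$CONF")
if grep -q "^\[$backend\]" "$CONF"; then
  echo "backend section [$backend] present"
else
  echo "backend section [$backend] MISSING" >&2
fi
```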
8.3.5 Start the services and enable them at boot
# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
# systemctl start openstack-cinder-volume.service target.service
# systemctl status openstack-cinder-volume.service target.service
8.4 Verification
# . admin-openrc.sh
# cinder service-list
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1 | nova | enabled |   up  | 2016-08-03T02:16:50.000000 |        -        |
|  cinder-volume   |  block1@lvm | nova | enabled |   up  | 2016-08-03T02:16:47.000000 |        -        |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
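The eyeball check above can be scripted so monitoring catches a dead service. A minimal sketch: the parser reads the service-list table on stdin and is fed a sample capture here so it runs offline; on the controller, pipe `cinder service-list` into check_up instead.

```shell
# Exit non-zero if any cinder service is not "up" in the service-list table.
check_up() {
  awk -F'|' '/cinder-/ {
    gsub(/ /, "", $2); gsub(/ /, "", $6)      # $2 = Binary, $6 = State
    if ($6 != "up") { print $2 " is " $6; bad = 1 }
  } END { if (!bad) print "all cinder services up"; exit bad }'
}
# Sample capture; replace with: cinder service-list | check_up
check_up <<'EOF'
| cinder-scheduler | controller1 | nova | enabled | up | 2016-08-03T02:16:50.000000 | - |
| cinder-volume    | block1@lvm  | nova | enabled | up | 2016-08-03T02:16:47.000000 | - |
EOF
```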
8.5 Create a cinder volume and attach it to an instance
# source demo-openrc.sh
# cinder create --display-name volume1 1  ## create a 1 GB volume
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-08-03T02:19:41.000000 |
| description | None |
| encrypted | False |
| id | 2797bd27-a039-4821-903a-760571365650 |
| metadata | {} |
| multiattach | False |
| name | volume1 |
| os-vol-tenant-attr:tenant_id | db6bcde12cc947119ecab8c211fa4f35 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 3361e8c44fc94b63ac44049542129edc |
| volume_type | None |
+---------------------------------------+--------------------------------------+
# cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  |   Name  | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
| 2797bd27-a039-4821-903a-760571365650 | available | volume1 |  1   |      -      |  false   |    False    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
# nova list
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
| 4aa43e3a-c963-4a53-b500-78fa6a6872c5 | private-instance | ACTIVE | - | Running | private=172.16.1.3, 192.168.1.242 |
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
# nova volume-attach private-instance 02b07808-8538-4ac2-9f23-57c7e3a23132
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 02b07808-8538-4ac2-9f23-57c7e3a23132 |
| serverId | 4aa43e3a-c963-4a53-b500-78fa6a6872c5 |
| volumeId | 02b07808-8538-4ac2-9f23-57c7e3a23132 |
+----------+--------------------------------------+
# nova volume-list
WARNING: Command volume-list is deprecated and will be removed after Nova 13.0.0 is released. Use python-cinderclient or openstackclient instead.
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| 02b07808-8538-4ac2-9f23-57c7e3a23132 | in-use | volume1 | 1 | - | 4aa43e3a-c963-4a53-b500-78fa6a6872c5 |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
# ssh cirros@192.168.1.242
$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
(The missing partition table on /dev/vdb is expected for a freshly attached, unformatted volume.)
Error:
No handlers could be found for logger "oslo_config.cfg"
Fix (this resolves the message but may introduce new problems, still unconfirmed; in practice the error can simply be ignored):
# rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
# yum update
# yum install python-pip
# pip install oslo.config --upgrade