
Building OpenStack Kilo on RHEL 7

1.       Install the Controller
1.1      Configure the hostname
1.2      Configure the network
1.3      Configure SELinux
1.4      Configure the package repositories
The repositories required are: CentOS 7, EPEL 7, and OpenStack Kilo.
1.5      Install MariaDB
Install the MySQL database packages and their dependencies:
# yum install mariadb mariadb-server MySQL-python -y
Edit the configuration file to complete the following steps:
# vi /etc/my.cnf.d/mariadb_openstack.cnf
In the [mysqld] section, add or modify the following options:
bind-address = 10.0.0.11
default-storage-engine = innodb
lower_case_table_names = 1
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8


Start the database service and enable it at boot:
# systemctl start mariadb.service
# systemctl enable mariadb.service
Run the MySQL secure-installation wizard (this step sets the root password and can allow remote connections):
# mysql_secure_installation
Set the root password to A0staryh.

# systemctl restart mariadb.service
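As a quick sanity check (a small sketch, not part of the original steps; the address and password follow the values configured above), confirm that MariaDB is listening on the management address and using the UTF-8 default:
# ss -tlnp | grep 3306
# mysql -h 10.0.0.11 -u root -pA0staryh -e "SHOW VARIABLES LIKE 'character_set_server';"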
1.6      Install RabbitMQ
Install the rabbitmq-server package:
# yum install rabbitmq-server -y
Start the RabbitMQ service and enable it at boot:
# systemctl start rabbitmq-server.service
# systemctl enable rabbitmq-server.service
# systemctl status rabbitmq-server.service
Add the openstack user:
# rabbitmqctl add_user openstack A0staryh
Grant the user full permissions:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# systemctl restart rabbitmq-server.service
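To confirm the user and its permissions took effect (a quick check, assuming the openstack user created above):
# rabbitmqctl list_users
# rabbitmqctl list_permissions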
1.7      Install the Identity service (Keystone)
1.7.1       Preparation
[*]Work with the database, using the root password set during the database installation:
# mysql -uroot -pA0staryh
Create the database and grant privileges to the keystone user (the guide's KEYSTONE_DBPASS placeholder is replaced with A0staryh here):
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'A0staryh';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'A0staryh';
MariaDB [(none)]> exit

[*]Generate a random value to use as the administration token in later configuration steps:
# openssl rand -hex 10
fb7269c14626a5966181
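If you prefer to capture the generated value in a shell variable for the later configuration steps (a small convenience sketch, not in the original text):
# ADMIN_TOKEN=$(openssl rand -hex 10)
# echo $ADMIN_TOKEN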
1.7.2       Install and configure the components
[*]Install the packages:
# yum install -y openstack-keystone python-keystoneclient

# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 11d5c31d5a96d7b42315   (use the random value generated earlier)
verbose = true
[database]
connection = mysql://keystone:A0staryh@controller/keystone
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
[*]Create the management certificates and keys, and set the relevant file permissions:
# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
No handlers could be found for logger "oslo_config.cfg"
The following cert files already exist, use --rebuild to remove the existing files before regenerating:
/etc/keystone/ssl/private/cakey.pem already exists
/etc/keystone/ssl/certs/ca.pem already exists
/etc/keystone/ssl/private/signing_key.pem already exists
/etc/keystone/ssl/certs/signing_cert.pem already exists
# chown -R keystone:keystone /var/log/keystone
# chown -R keystone:keystone /etc/keystone/ssl
# chmod -R o-rwx /etc/keystone/ssl

[*]Populate the database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
No handlers could be found for logger "oslo_config.cfg"

[*]Enable and start the service:
# systemctl enable openstack-keystone.service
# systemctl start openstack-keystone.service


[*]By default, expired identity tokens are not deleted and remain in the database indefinitely, so add a cron job to flush them automatically:
# (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush > /var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
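The grep -q token_flush || ... guard makes the command idempotent, so it is safe to re-run. To confirm the entry was installed (a quick check):
# crontab -l -u keystone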
1.7.3       Create tenants, users, and roles
[*]Create the admin user:
# export OS_SERVICE_TOKEN=11d5c31d5a96d7b42315
# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
# keystone tenant-create --name admin --description "Admin Tenant"
# keystone user-create --name admin --pass A0staryh --email root@localhost
# keystone role-create --name admin
# keystone user-role-add --tenant admin --user admin --role admin

[*]Create the demo user:
# keystone tenant-create --name demo --description "Demo Tenant"
# keystone user-create --name demo --tenant demo --pass A0staryh --email demo@localhost

[*]Create the service tenant:
# keystone tenant-create --name service --description "Service Tenant"

1.7.4       Create the service entity and API endpoints
[*]Create the service entity:
# keystone service-create --name keystone --type identity --description "OpenStack Identity"

# keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | 71ed01478ea34f12bfe81cc9de80ff75 |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | efa9e2e0830b4bd4a8d6470f1d1c95d4 |
+-------------+----------------------------------+


[*]Verification
# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
Using the admin tenant and user, request an authentication token:
# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 token-get
Using the admin tenant and user, list tenants to verify that they exist:
# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 tenant-list
Using the admin tenant and user, list users:
# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 user-list
Using the admin tenant and user, list roles:
# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 role-list

[*]Create the client environment scripts:
# vi admin-openrc.sh
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=A0staryh
export OS_AUTH_URL=http://controller:35357/v2.0

# vi demo-openrc.sh
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=A0staryh
export OS_AUTH_URL=http://controller:5000/v2.0

# source admin-openrc.sh
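After sourcing the script, you can confirm the environment variables work without passing credentials on the command line (a quick check):
# keystone token-get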

1.8      Install the Image service (Glance)
1.8.1       Create the database, service credentials, and API endpoints
[*]Create the database:
# mysql -uroot -pA0staryh
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'A0staryh';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'A0staryh';
[*]Source the admin credentials to run admin-only commands:
# source admin-openrc.sh
[*]Create the service credentials:
Create the glance user:
# keystone user-create --name glance --pass A0staryh
Add the admin role to the glance user:
# keystone user-role-add --user glance --tenant service --role admin
Create the glance service entity:
# keystone service-create --name glance --type image --description "OpenStack Image Service"
Create the Image service API endpoints:
# keystone endpoint-create --service-id $(keystone service-list | awk '/ image / {print $2}') --publicurl http://controller:9292 --internalurl http://controller:9292 --adminurl http://controller:9292 --region regionOne
1.8.2       Install and configure the Image service components
[*]Install the packages:
# yum install openstack-glance python-glanceclient

[*]Edit the configuration file: vim /etc/glance/glance-api.conf

[DEFAULT]
verbose = True
notification_driver = noop
[database]
connection = mysql://glance:A0staryh@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = A0staryh
[paste_deploy]
flavor = keystone
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Note: comment out all auth_host, auth_port, and auth_protocol options, because identity_uri already covers them.
[*]Edit vim /etc/glance/glance-registry.conf the same way:
[DEFAULT]
verbose = True
notification_driver = noop
[database]
connection = mysql://glance:A0staryh@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = A0staryh
[paste_deploy]
flavor = keystone
[*]Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
[*]Start the Image service and configure it to start at boot:
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
1.8.3       Verification
# mkdir /tmp/images
Download cirros-0.3.0-x86_64-disk.img and copy the file into that directory.
# source admin-openrc.sh
# glance image-create --name "cirros-0.3.0-x86_64" --file /tmp/images/cirros-0.3.0-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress
# glance image-list
2.       Install the Compute service (Nova)
2.1      Create the database on the controller
[*]Create the database:
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'A0staryh';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'A0staryh';

[*]Source the admin credentials to run admin-only commands:
# source admin-openrc.sh
[*]Create the service credentials:
Create the nova user:
# keystone user-create --name nova --pass A0staryh
# keystone user-role-add --user nova --tenant service --role admin
# keystone service-create --name nova --type compute --description "OpenStack Compute"
# keystone endpoint-create --service-id $(keystone service-list | awk '/ compute / {print $2}') --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region regionOne
[*]Install the packages:
# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
[*]Edit the configuration file: vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
my_ip = 10.0.0.11
auth_strategy = keystone
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = A0staryh
[database]
connection = mysql://nova:A0staryh@controller/nova
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = A0staryh
[glance]
host = controller

[*]Synchronize the Compute database:
# su -s /bin/sh -c "nova-manage db sync" nova

[*]Start the services:
# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
2.2      Install and configure the compute node
[*]Install the packages:
# yum install openstack-nova-compute sysfsutils
[*]Edit the configuration file: vim /etc/nova/nova.conf
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.2   (the management-network IP address of this compute node)
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.2   (the management-network IP address of this compute node)
novncproxy_base_url = http://controller:6080/vnc_auto.html
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = A0staryh
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = A0staryh
[glance]
host = controller

[*]Finish the installation
Check whether your compute node supports hardware acceleration for virtual machines:
# egrep -c '(vmx|svm)' /proc/cpuinfo
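If this command returns zero, the node does not support hardware acceleration and libvirt should be configured to use QEMU instead of KVM; a minimal sketch of the relevant option in /etc/nova/nova.conf:
[libvirt]
virt_type = qemu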
Start the Compute service and its dependencies, and configure them to start at boot:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
2.3      Verify on the controller
# source admin-openrc.sh
# nova service-list
# nova image-list
3.      Install the networking component
3.1      Using OpenStack Networking (Neutron)
3.1.1       Create the neutron database on the controller
[*]Create the database:
# mysql -uroot -pA0staryh
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'A0staryh';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'A0staryh';

[*]Source the admin credentials to run admin-only commands:
# source admin-openrc.sh
[*]Create the service credentials:
# keystone user-create --name neutron --pass A0staryh
# keystone user-role-add --user neutron --tenant service --role admin
# keystone service-create --name neutron --type network --description "OpenStack Networking"
# keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region regionOne

[*]Install the networking components:
# yum install -y openstack-neutron openstack-neutron-ml2 python-neutronclient which

[*]Edit the configuration file: vim /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = SERVICE_TENANT_ID   (obtain it with: # source admin-openrc.sh; # keystone tenant-get service)
nova_admin_password = A0staryh
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = A0staryh
[database]
connection = mysql://neutron:A0staryh@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = A0staryh
[*]Configure the Modular Layer 2 (ML2) plug-in
Edit the configuration file: vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[*]Configure Compute to use Networking
Edit the configuration file on the controller node: vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = A0staryh

[*]Finish the installation
Create a symbolic link:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo" neutron
Restart the Compute services:
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
Start the Networking service and configure it to start at boot:
# systemctl enable neutron-server.service
# systemctl start neutron-server.service

[*]Verification
Source the admin credentials to run admin-only commands:
# source admin-openrc.sh
List the loaded extensions to verify that a neutron-server process started successfully:
# neutron ext-list
3.1.2       Install and configure the network node
[*]Preparation
Edit the configuration file vim /etc/sysctl.conf to include the following parameters:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the changes:
# sysctl -p
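You can confirm the new values took effect (a quick check):
# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter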

[*]Install the networking components:
# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

[*]Configure the common networking components
Edit the configuration file: vim /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = A0staryh
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = A0staryh
[*]Configure the Modular Layer 2 (ML2) plug-in
Edit the configuration file: vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.0.11   (use INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS, the IP of this node's instance-tunnels interface)
enable_tunneling = True
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre

[*]Configure the Layer-3 (L3) agent
Edit the configuration file /etc/neutron/l3_agent.ini:
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
[*]Configure the DHCP agent
Edit the configuration file: vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dhcp_delete_namespaces = True
Note: some cloud images ignore the DHCP MTU option. In that case, configure the DHCP agent to advertise the MTU through dnsmasq:
# vim /etc/neutron/dhcp_agent.ini
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
Create and edit the file /etc/neutron/dnsmasq-neutron.conf:
dhcp-option-force=26,1454
Kill any existing dnsmasq processes:
# pkill dnsmasq

[*]Configure the metadata agent
Edit the configuration file /etc/neutron/metadata_agent.ini:
[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = A0staryh
nova_metadata_ip = 10.0.0.11   (the controller's IP)
metadata_proxy_shared_secret = A0staryh
[*]On the controller node, edit the configuration file vim /etc/nova/nova.conf:
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = A0staryh   (must match the previous configuration file)
# systemctl restart openstack-nova-api.service
[*]Configure the Open vSwitch (OVS) service
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1
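To confirm the bridge and its port were created (a quick check; eth1 is the external interface added above):
# ovs-vsctl show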

[*]Finish the installation
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Do not start the neutron-ovs-cleanup service directly.

[*]Verify on the controller
# source admin-openrc.sh
# neutron agent-list
If agents are missing here, the database synchronization on the controller may have been run with the wrong release label; re-run the sync command with juno changed to kilo.
3.1.3       Install and configure the compute node
[*]Preparation
# vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# sysctl -p

[*]Install the networking components:
# yum install openstack-neutron-ml2 openstack-neutron-openvswitch

[*]Configure the common networking components
# vim /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = A0staryh
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = A0staryh
[*]Configure the Modular Layer 2 (ML2) plug-in
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.0.2   (use INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS, the IP of the compute node's management/tunnel interface)
enable_tunneling = True
[agent]
tunnel_types = gre
[*]Configure the Open vSwitch (OVS) service
Start the OVS service and configure it to start at boot:
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
[*]Configure Compute to use Networking
# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = A0staryh
[*]Finish the installation
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-openvswitch-agent.service
# systemctl start neutron-openvswitch-agent.service
[*]Verify the operation
On the controller node, run:
# source admin-openrc.sh
# neutron agent-list
3.1.4       Create the initial networks on the controller
This can be done through the Dashboard.
3.2      Using legacy networking
3.2.1       Configure the controller
# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.api.API
security_group_api = nova
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
3.2.2       Configure the compute node
# yum install openstack-nova-network openstack-nova-api
# vi /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = INTERFACE_NAME
public_interface = INTERFACE_NAME
# systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service
# systemctl start openstack-nova-network.service openstack-nova-metadata-api.service
3.2.3       Create the initial network on the controller
# source admin-openrc.sh
# nova network-create demo-net --bridge br100 --multi-host T --fixed-range-v4 10.0.1.0/24
# nova net-list
4.       Install the Dashboard (Horizon)
4.1.1       Install and configure
# yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
Due to a packaging bug, the dashboard CSS can fail to load. Run the following command to work around the problem:
# chown -R apache:apache /usr/share/openstack-dashboard/static
Start the services:
# systemctl enable httpd.service memcached.service
# systemctl start httpd.service memcached.service

4.1.2       Verification: browse to http://controller/dashboard
5.       Install the Block Storage service (Cinder)
5.1.1      Create the database and configure the controller
[*]Create the database and service credentials
# mysql -uroot -pA0staryh
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'A0staryh';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'A0staryh';
# source admin-openrc.sh
Create the cinder user:
# keystone user-create --name cinder --pass A0staryh
# keystone user-role-add --user cinder --tenant service --role admin
# keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"
# keystone endpoint-create --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region regionOne
[*]Install and configure the Block Storage components on the controller node
# yum install openstack-cinder python-cinderclient python-oslo-db
# vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = A0staryh
[database]
connection = mysql://cinder:A0staryh@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = A0staryh
# su -s /bin/sh -c "cinder-manage db sync" cinder
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
5.1.2       Set up block storage on the storage node
[*]Preparation
# yum install lvm2
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
# pvcreate /dev/sdb1
# vgcreate cinder-volumes /dev/sdb1
# vim /etc/lvm/lvm.conf
devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
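The filter accepts the sdb device and rejects everything else, so LVM does not scan the volumes that Cinder creates. To confirm the physical volume and volume group exist (a quick check):
# pvs
# vgs cinder-volumes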
[*]Install and configure the Block Storage volume components
# yum install openstack-cinder targetcli python-oslo-db MySQL-python
# vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.2   (use MANAGEMENT_INTERFACE_IP_ADDRESS, the management-network IP of the storage node)
glance_host = controller
iscsi_helper = lioadm
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = A0staryh
[database]
connection = mysql://cinder:A0staryh@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = A0staryh
[*]Start the services
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
[*]Verify the operation
On the controller node:
# source admin-openrc.sh
# cinder service-list
Create a 1 GB volume:
# source demo-openrc.sh
# cinder create --display-name demo-volume1 1
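The new volume should reach the available status; you can confirm this (a quick check):
# cinder list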


6.       Install the Object Storage service (Swift)
6.1.1       Create the service credentials and API endpoints on the controller
[*]Create the service credentials
# keystone user-create --name swift --pass A0staryh
# keystone user-role-add --user swift --tenant service --role admin
# keystone service-create --name swift --type object-store --description "OpenStack Object Storage"
# keystone endpoint-create --service-id $(keystone service-list | awk '/ object-store / {print $2}') --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl http://controller:8080 --region regionOne
[*]Configure the controller node
# yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token python-keystonemiddleware memcached
Copy the sample file proxy-server.conf-sample to /etc/swift, then edit it:
# vim /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
[app:proxy-server]
allow_account_management = true
account_autocreate = true
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,_member_
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = A0staryh
delay_auth_decision = true
[filter:cache]
memcache_servers = 127.0.0.1:11211

6.1.2       Install and configure the storage node
# yum install xfsprogs rsync
# mkfs.xfs /dev/sda5
# vi /etc/fstab
/dev/sda5 /srv/node/sda5 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
# mkdir -p /srv/node/sda5
# mount /srv/node/sda5
# vim /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.0.2
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
# systemctl enable rsyncd.service
# systemctl start rsyncd.service



# yum install openstack-swift-account openstack-swift-container openstack-swift-object
# vim /etc/swift/account-server.conf

[DEFAULT]
bind_ip = 10.0.0.2   (the storage node's management address)
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = account-server   (the official manual gives pipeline = healthcheck recon account-server, but that did not work here)
[filter:recon]
recon_cache_path = /var/cache/swift
# vim /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 10.0.0.2   (the storage node's management address)
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = container-server   (the original text repeats account-server here, but each server's pipeline must name its own app; the same note about the official manual's healthcheck recon variant applies)
[filter:recon]
recon_cache_path = /var/cache/swift
# vim /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 10.0.0.2   (the storage node's management address)
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = object-server   (same note as above)
[filter:recon]
recon_cache_path = /var/cache/swift
# chown -R swift:swift /srv/node
# mkdir -p /var/cache/swift
# chown -R swift:swift /var/cache/swift
6.1.3       Create the initial rings on the controller
[*]Account ring
# cd /etc/swift
# swift-ring-builder account.builder create 10 3 1
# swift-ring-builder account.builder add r1z1-10.0.0.2:6002/sda5 100
# swift-ring-builder account.builder
# swift-ring-builder account.builder rebalance

[*]Container ring
# cd /etc/swift
# swift-ring-builder container.builder create 10 3 1
# swift-ring-builder container.builder add r1z1-10.0.0.2:6001/sda5 100
# swift-ring-builder container.builder
# swift-ring-builder container.builder rebalance

[*]Object ring
# cd /etc/swift
# swift-ring-builder object.builder create 10 3 1
# swift-ring-builder object.builder add r1z1-10.0.0.2:6000/sda5 100
# swift-ring-builder object.builder
# swift-ring-builder object.builder rebalance

[*]Distribute the ring configuration files to the storage nodes, as sketched below.
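A minimal sketch of the distribution step (the storage-node address 10.0.0.2 follows the values used above; repeat for each storage node):
# scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz root@10.0.0.2:/etc/swift/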
[*]Finish the installation
[*]On the controller node:
# vim /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = A0staryh
swift_hash_path_prefix = A0staryh
[storage-policy:0]
name = Policy-0
default = yes
[*]Copy swift.conf to every storage node and to any other nodes running the proxy service.
[*]On the controller node and any other nodes running the proxy service:
# systemctl enable openstack-swift-proxy.service memcached.service
# systemctl start openstack-swift-proxy.service memcached.service

[*]On the storage nodes, start the Object Storage services and configure them to start at boot:
# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
6.1.4       Verify the operation
# source demo-openrc.sh
# swift stat
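To exercise the full upload path (a small sketch; demo-container1 and FILE are illustrative names):
# swift upload demo-container1 FILE
# swift list demo-container1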

