Installing and Configuring the Latest OpenStack Release, Havana (nova-network Multi-host Mode)
[*]Abstract: This article walks through the installation and deployment of the Havana release of OpenStack in detail. It assumes you are already familiar with the individual OpenStack components.
[*]Advantages of this deployment plan:
[*]Fast to deploy; sufficient for day-to-day resource virtualization and management needs
[*]Compute capacity is scalable (Nova-compute/Nova-network deployed on multiple hosts)
[*]Limitations of this deployment:
[*]Keystone, MySQL, RabbitMQ, Glance, Nova-API and other services are single points of failure
[*]To save hardware, the controller node and one compute node share the same host
[*]Cinder, Quantum/Neutron, Swift and other services are not deployed
[*]Physical deployment diagram (TBD)
[*]Reference documents for the installation
[*]http://docs.openstack.org/havana/install-guide/install/apt/content/
[*]http://www.chenshake.com/openstack-settings-on-the-network-card/
[*]Preparing the environment
[*]Adjust the marked values below to match your own environment
[*]2 physical hosts, preferably with 4 GB of RAM each (enough for testing; re-estimate for production), each host with 2 NICs
[*]Enable virtualization (VT) in the BIOS (if it is not supported, the hypervisor must be set to qemu later)
[*]Ubuntu 12.04 (LTS) x86 64-bit; select OpenSSH during installation
[*]OS environment settings
[*]
Hostname: ubuntu58, eth0 (public): 20.0.0.58, eth1 (private, not set yet)
[*]
Hostname: ubuntu59, eth0 (public): 20.0.0.59, eth1 (private, not set yet)
[*]Controller node: ubuntu58
[*]Compute nodes: ubuntu58, ubuntu59
[*]If your settings differ from the above, adjust the parameters in the examples below accordingly
[*]Allow the root user to log in to the OS over SSH
[*]Make sure every host can reach the Internet, since some OpenStack packages are installed over the network; users inside a corporate network may need to configure a proxy.
[*]Virtual network plan: create the private network 192.168.22.0/24 for all VMs (a different range can be used, but then adjust the corresponding parameters in the examples below)
[*]Configuration common to all nodes
[*]Update OS
#apt-get update
#apt-get upgrade
Then restart the server and, if needed, set the proxy again as above
#reboot
[*]Configure software repository for openstack version "havana"
# apt-get install python-software-properties
# apt-get install ubuntu-cloud-keyring
# add-apt-repository cloud-archive:havana
[*]Change the config file "/etc/sysctl.conf" to enable IP forwarding
[*]
net.ipv4.ip_forward = 1
[*]
Change "/etc/security/limits.conf" and copy below 2 lines:
* soft nofile 10240
* hard nofile 10240
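The sysctl change can be applied without a reboot, and the new nofile limits only take effect for new login sessions; a quick way to confirm both (a minimal check, assuming the default paths used above):
# sysctl -p                              ##reload /etc/sysctl.conf##
# cat /proc/sys/net/ipv4/ip_forward      ##should print 1##
# ulimit -n                              ##run from a fresh login shell; should print 10240##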
[*]
Install kvm support
# apt-get install qemu-utils
# apt-get install cpu-checker
# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
# lsmod | grep kvm
kvm_intel             137899  0
kvm                   455932  1 kvm_intel
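If kvm-ok reports that KVM acceleration cannot be used (no VT support), the hypervisor can fall back to plain qemu as noted in the preparation section; a hedged sketch of the Havana-era nova.conf lines involved (verify the option names against your own file):
compute_driver=libvirt.LibvirtDriver
libvirt_type=qemu          ##software emulation instead of kvm##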
[*]Install bridge utils
# apt-get install bridge-utils
[*]Synchronize the clocks of the 2 hosts; setting them to the same time is enough (you can also set up an NTP server for this), as sketched below
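One simple way to keep the clocks aligned (a sketch only; use whatever NTP source your network allows):
# apt-get install ntp        ##runs an NTP daemon that syncs against the default Ubuntu pool servers##
# ntpq -p                    ##verify that at least one peer is reachable##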
[*]Controller node (ubuntu58) configuration
[*]Check hostname
# cat /etc/hosts
127.0.0.1 localhost ubuntu58
[*]edit interfaces as
# cat /etc/network/interfaces
## The loopback network interface##
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 20.0.0.58
netmask 255.255.255.0
network 20.0.0.0
broadcast 20.0.0.255
gateway 20.0.0.1
auto eth1
iface eth1 inet manual
up ifconfig eth1 up
# /etc/init.d/networking restart
[*]Firewall settings
##Run the following commands##
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 3306/tcp
ufw allow 5000/tcp
ufw allow 5672/tcp
ufw allow 8080/tcp
ufw allow 8773/tcp
ufw allow 8774/tcp
ufw allow 8775/tcp
ufw allow 8776/tcp
ufw allow 8777/tcp
ufw allow 9292/tcp
ufw allow 9696/tcp
ufw allow 15672/tcp
ufw allow 55672/tcp
ufw allow 35357/tcp
ufw allow 6080/tcp
#/etc/init.d/ufw restart
#ufw enable
#ufw status
[*]Configure software repository for openstack version "havana"
# apt-get install python-software-properties
# apt-get install ubuntu-cloud-keyring
# add-apt-repository cloud-archive:havana
[*]Install Mysql
#apt-get install python-mysqldb mysql-server
##Then set root password to 'openstack'##
# vim /etc/mysql/my.cnf and delete the line below so that MySQL accepts requests on all network interfaces
bind-address = 127.0.0.1
#/etc/init.d/mysql restart
##Verify it by##
# mysql -u root -popenstack
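To confirm that MySQL is now listening on all interfaces rather than only 127.0.0.1 (a quick check; 3306 is the default MySQL port):
# netstat -ntlp | grep 3306      ##mysqld should be bound to 0.0.0.0:3306##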
[*]Install RabbitMQ
# apt-get install rabbitmq-server
##create a new file "/etc/rabbitmq/enabled_plugins" with the following line to enable the management plugin: ##
[rabbitmq_management].
#update-rc.d rabbitmq-server defaults
#service rabbitmq-server restart
Access " http://20.0.0.58:55672" to verify Rabbit server status with guest/guest
[*]Install Keystone
#apt-get install keystone
# mysql -u root -popenstack
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
##create service token for keystone##
export SERVICE_TOKEN=$(openssl rand -hex 10)
echo $SERVICE_TOKEN >/root/ks_admin_token
# echo $SERVICE_TOKEN
ff4a8fd60e1824bfd08a
##Configure "/etc/keystone/keystone.conf" as follows (admin_token must match the token generated above)##
admin_token = ff4a8fd60e1824bfd08a
connection = mysql://keystone:keystone@20.0.0.58/keystone
##Sync data to mysql database "keystone"
keystone-manage db_sync
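##Optionally confirm the sync populated the database (a quick check using the keystone/keystone credentials created above)##
# mysql -u keystone -pkeystone -e 'use keystone; show tables;'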
##Change file ownership##
chown -R keystone:keystone /etc/keystone
chown -R keystone:keystone /var/log/keystone
##Restart keystone##
# /etc/init.d/keystone restart
# service keystone status
##Config keystone service in database##
# export SERVICE_TOKEN=`cat /root/ks_admin_token`
# export SERVICE_ENDPOINT=http://20.0.0.58:35357/v2.0
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
+-------------+----------------------------------+
| Property| Value |
+-------------+----------------------------------+
| description | Keystone Identity Service |
| id | 96934a7c8e31458aa2919e3f6a1ce974 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
##change id and IP@ here##
# keystone endpoint-create --service_id 96934a7c8e31458aa2919e3f6a1ce974 --publicurl 'http://20.0.0.58:5000/v2.0' --adminurl 'http://20.0.0.58:35357/v2.0' --internalurl 'http://20.0.0.58:5000/v2.0' --region shanghai
##Create a user and tenant for admin##
#keystone user-create --name admin --pass openstack
+----------+-------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+----------+-------------------------------------------------------------------------------------------------------------------------+
| email | None |
| enabled| True |
| id | a582b6832ff245c0b54a81268f8094db |
| name | admin |
| password | $6$rounds=40000$/J9HmGD1qWfeCom5$XkNYO3r33CkUTfzdacrCwwbz5pT0OyyzlghsjuCnuPL5uvPy8ecCKievToxaKoib5bKotItLFIBeXapzCQRhO0 |
| tenantId | None |
+----------+-------------------------------------------------------------------------------------------------------------------------+
#keystone role-create --name admin
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | ea878347211945a68304fa3954e8a153 |
| name | admin |
+----------+----------------------------------+
#keystone tenant-create --name admin
+-------------+----------------------------------+
| Property| Value |
+-------------+----------------------------------+
| description | None |
| enabled | True |
| id | 61bafc18ba26451f853d6dc85292e09c |
| name | admin |
+-------------+----------------------------------+
##change id here##
# keystone user-role-add --user a582b6832ff245c0b54a81268f8094db --role ea878347211945a68304fa3954e8a153 --tenant_id 61bafc18ba26451f853d6dc85292e09c
##Add environment variables##
# vi /root/keystone_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://20.0.0.58:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
##Create a Member role##
# source /root/keystone_admin
# keystone role-create --name Member
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 24285daca47a470db91414c6dd9dc8fe |
| name | Member |
+----------+----------------------------------+
##Create a new tenant "TenantEagles" and a new user "eagle1" with the "Member" role in "TenantEagles"##
# keystone user-create --name eagle1 --pass openstack
+----------+-------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+----------+-------------------------------------------------------------------------------------------------------------------------+
| email | None |
| enabled| True |
| id | 101abe97dc7f4af49a860449d826e0c9 |
| name | eagle1 |
| password | $6$rounds=40000$0D75R/1xEs2E7na9$COeU0zsxd0Id.7gpLto12mKyiJ3SmMLlfHxDRmqcpBpfF2nJKJSwxptyOYMfU78U69EygMNohRKPgl26CMz6B/ |
| tenantId | None |
+----------+-------------------------------------------------------------------------------------------------------------------------+
# keystone tenant-create --name TenantEagles
+-------------+----------------------------------+
| Property| Value |
+-------------+----------------------------------+
| description | None |
| enabled | True |
| id | 33f75c8c652548df9bd3e74cbe02d2b0 |
| name | TenantEagles |
+-------------+----------------------------------+
##Change Id here##
# keystone user-role-add --user 101abe97dc7f4af49a860449d826e0c9 --role 24285daca47a470db91414c6dd9dc8fe --tenant_id 33f75c8c652548df9bd3e74cbe02d2b0
##Finally, verify the Keystone installation##
# keystone user-list
# keystone role-list
# keystone tenant-list
# keystone service-list
# keystone endpoint-list
# curl -d '{"auth": {"tenantName": "admin", "passwordCredentials":{"username": "admin", "password": "openstack"}}}' -H "Content-type: application/json" http://localhost:35357/v2.0/tokens | python -mjson.tool
[*]Install Glance
[*]Install glance
# apt-get install glance
[*]Create Glance database
mysql -u root -popenstack
mysql> CREATE DATABASE glance;
mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
mysql> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
mysql> exit
##Verify it by below command##
mysql -u glance -pglance
[*]Create Glance service in keystone
# source /root/keystone_admin
# keystone service-create --name glance --type image --description "Glance Image Service"
+-------------+----------------------------------+
| Property| Value |
+-------------+----------------------------------+
| description | Glance Image Service |
| id | eb80d944a8fa4c24b3c26e3b792b488a |
| name | glance |
| type | image |
+-------------+----------------------------------+
##Change Id and IP@ here##
# keystone endpoint-create --service-id eb80d944a8fa4c24b3c26e3b792b488a --publicurl "http://20.0.0.58:9292" --adminurl "http://20.0.0.58:9292" --internalurl "http://20.0.0.58:9292" --region shanghai
+-------------+----------------------------------+
| Property| Value |
+-------------+----------------------------------+
| adminurl| http://20.0.0.58:9292 |
| id | 6f6e4f782304433fb7af154271ca216f |
| internalurl | http://20.0.0.58:9292 |
|publicurl| http://20.0.0.58:9292 |
| region | shanghai |
|service_id | eb80d944a8fa4c24b3c26e3b792b488a |
+-------------+----------------------------------+
[*]Change file ownership
chown -R glance:glance /etc/glance
chown -R glance:glance /var/lib/glance
chown -R glance:glance /var/log/glance
[*]Edit "/etc/glance/glance-api.conf" and "/etc/glance/glance-registry.conf" (here MySQL and Keystone are installed on 20.0.0.58)
#....................
sql_connection = mysql://glance:glance@20.0.0.58/glance
flavor=keystone
auth_host = 20.0.0.58
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = openstack
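For reference, in the Havana packages these options are normally spread across several sections of the two files rather than one block; a rough sketch of where each line usually lives (verify against your own files):
#(section names as in the stock Havana Ubuntu packages; adjust if yours differ)
[DEFAULT]
sql_connection = mysql://glance:glance@20.0.0.58/glance
[paste_deploy]
flavor = keystone
[keystone_authtoken]
auth_host = 20.0.0.58
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = openstack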
[*]Sync data to database "glance"
rm /etc/glance/glance.sqlite
rm /var/lib/glance/glance.sqlite
glance-manage db_sync
mysql -u glance -pglance
mysql> use glance
mysql> show tables;
mysql> exit
[*]Restart glance services
service glance-registry restart
service glance-api restart
[*]Verify Glance service
# wget -b https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
##If you have set proxy, please get it as below##
# https_proxy=http://localhost:3128 wget -b https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
## Wait for complete by run##
tail -f wget-log
# kvm-img info cirros-0.3.0-x86_64-disk.img
image: cirros-0.3.0-x86_64-disk.img
file format: qcow2
virtual size: 39M (41126400 bytes)
disk size: 9.3M
cluster_size: 65536
##Import an existing image to Glance##
# source /root/keystone_admin
# glance image-create --name "cirros-0.3.0-x86_64" --disk-format qcow2 --container-format bare --is-public true --file cirros-0.3.0-x86_64-disk.img
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 50bdc35edb03a38d91b1b071afb20a3c |
| container_format | bare |
| created_at | 2013-10-31T11:15:10 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | f3aea503-3ffe-48a0-8fb9-a18c68ca7d7e |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.0-x86_64 |
| owner | 61bafc18ba26451f853d6dc85292e09c |
| protected | False |
| size | 9761280 |
| status | active |
| updated_at | 2013-10-31T11:15:10 |
+------------------+--------------------------------------+
# glance index
ID Name Disk Format Container Format Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
f3aea503-3ffe-48a0-8fb9-a18c68ca7d7e cirros-0.3.0-x86_64 qcow2 bare 9761280
[*]Install Nova
[*]
Install the components below on the controller
# apt-get install nova-novncproxy novnc nova-api nova-ajax-console-proxy nova-cert nova-conductor nova-consoleauth nova-doc nova-scheduler
[*]Create database "nova" for nova service
mysql -u root -popenstack
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
[*]Create nova service in keystone
# source /root/keystone_admin
# keystone service-create --name compute --type compute --description "OpenStack Compute Service"
+-------------+----------------------------------+
| Property| Value |
+-------------+----------------------------------+
| description | OpenStack Compute Service |
| id | b21f848587614ec99de917beef768b0a |
| name | compute |
| type | compute |
+-------------+----------------------------------+
##Change id and IP@ here##
# keystone endpoint-create --service-id b21f848587614ec99de917beef768b0a --publicurl "http://20.0.0.58:8774/v2/%(tenant_id)s" --adminurl "http://20.0.0.58:8774/v2/%(tenant_id)s" --internalurl "http://20.0.0.58:8774/v2/%(tenant_id)s" --region shanghai
+-------------+----------------------------------------+
| Property| Value |
+-------------+----------------------------------------+
| adminurl| http://20.0.0.58:8774/v2/%(tenant_id)s |
| id | 956df959361d453d9f9e99f55c68652c |
| internalurl | http://20.0.0.58:8774/v2/%(tenant_id)s |
|publicurl| http://20.0.0.58:8774/v2/%(tenant_id)s |
| region | shanghai |
|service_id | b21f848587614ec99de917beef768b0a |
+-------------+----------------------------------------+
[*]Edit /etc/nova/nova.conf (the VNC proxy, MySQL server and RabbitMQ server are all on 20.0.0.58 here; change as needed)
#.................
my_ip=20.0.0.58
novnc_enabled=True
novncproxy_base_url = http://20.0.0.58:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=20.0.0.58
auth_strategy=keystone
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 20.0.0.58
connection = mysql://nova:nova@20.0.0.58/nova
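Note that in the Havana packaging most of these flags sit in the [DEFAULT] section of nova.conf, while the database connection string normally belongs in its own section; a rough sketch (check your own template):
#(section name as in the stock Havana packages; adjust if yours differs)
[database]
connection = mysql://nova:nova@20.0.0.58/nova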
[*]Edit /etc/nova/api-paste.ini (Keystone is installed on 20.0.0.58; change as needed)
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 20.0.0.58
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = openstack
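These lines belong in the [filter:authtoken] section of api-paste.ini; a sketch of the surrounding block (the rest of the file can stay as shipped):
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 20.0.0.58
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = openstack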
[*]Sync data to database "nova"
nova-manage db sync
mysql -u nova -pnova
mysql > use nova;
mysql> show tables;
[*]Restart nova services
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
[*]Verify the Nova controller services
# apt-get install python-novaclient
# glance index
ID Name Disk Format Container Format Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
f3aea503-3ffe-48a0-8fb9-a18c68ca7d7e cirros-0.3.0-x86_64 qcow2 bare 9761280
# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| f3aea503-3ffe-48a0-8fb9-a18c68ca7d7e | cirros-0.3.0-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
[*]Compute node (1): ubuntu58 configuration
[*]Install nova compute component with kvm support
# apt-get install nova-compute-kvm python-guestfs
chmod 0644 /boot/vmlinuz*
rm /var/lib/nova/nova.sqlite
[*]Edit /etc/nova/nova.conf and add the line below (here Glance is installed on 20.0.0.58; change as needed)
#...............
glance_host=20.0.0.58
[*]Restart nova compute service
# service nova-compute restart
[*]Install nova network component
# apt-get install nova-network
[*]Edit /etc/nova/nova.conf and add the lines below (192.168.22.0/24 is the virtual network for the VMs; change as needed)
#.....................
network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=256
allow_same_net_traffic=False
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_network_bridge=br100
flat_interface=eth1
flat_injected=False
public_interface=eth0
fixed_range=192.168.22.0/24
rabbit_host=20.0.0.58
#flat_network_dhcp_start=192.168.22.2
[*]Restart all services on this node (controller and compute components)
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-compute restart
service nova-network restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
service libvirt-bin restart
service rabbitmq-server restart
[*]Verify the Nova services (controller and compute) on this node
# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert ubuntu58 internal enabled :-) 2013-10-31 19:29:42
nova-scheduler ubuntu58 internal enabled :-) 2013-10-31 19:29:42
nova-conductor ubuntu58 internal enabled :-) 2013-10-31 19:29:42
nova-consoleauth ubuntu58 internal enabled :-) 2013-10-31 19:29:42
nova-compute ubuntu58 nova enabled :-) 2013-10-31 19:29:41
nova-network ubuntu58 internal enabled :-) 2013-10-31 19:29:46
#nova list
[*]Create a private network for the VMs, matching the network configuration above in "/etc/nova/nova.conf"
# nova-manage network list
##Create a new private network##
# nova-manage network create private --multi_host=T --fixed_range_v4=192.168.22.0/24 --bridge_interface=br100 --num_networks=1 --network_size=256
# nova-manage network list
id IPv4 IPv6 start address DNS1 DNS2 VlanID project uuid
2 192.168.22.0/24 None 192.168.22.2 8.8.4.4 None None None ee7c97fe-39f8-483a-a799-419e91f90d8a
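Once nova-network is running and the first instance is scheduled onto this network, the br100 bridge configured above should appear on the compute host; a quick sanity check (using the bridge-utils package installed earlier):
# brctl show          ##br100 should be listed, attached to eth1##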
[*]Compute node (2): ubuntu59 configuration
[*]Check hostname
# cat /etc/hosts
127.0.0.1 localhost ubuntu59
[*]edit interfaces as
# cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 20.0.0.59
netmask 255.255.255.0
network 20.0.0.0
broadcast 20.0.0.255
gateway 20.0.0.1
auto eth1
iface eth1 inet manual
up ifconfig eth1 up
# /etc/init.d/networking restart
[*]Add the OpenStack Havana software repository
# apt-get install python-software-properties
# apt-get install ubuntu-cloud-keyring
# add-apt-repository cloud-archive:havana
[*]Install nova-compute and nova network component
# apt-get install nova-compute-kvm python-guestfs
# chmod 0644 /boot/vmlinuz*
# rm /var/lib/nova/nova.sqlite
# apt-get install python-novaclient
# apt-get install nova-network
[*]Set environment for admin user
# cat /root/keystone_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://20.0.0.58:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
[*]Change file ownership
chown -R nova:nova /var/lib/nova/
chown -R nova:nova /var/log/nova/
[*]Edit file "/etc/nova/nova.conf"and add below lines (vnc server, glance,keystone, rabbit server, mysql are all on 20.0.0.58, please change it as your condition)
..............
my_ip=20.0.0.59
novnc_enabled=True
novncproxy_base_url = http://20.0.0.58:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=20.0.0.59
auth_strategy=keystone
glance_host=20.0.0.58
network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=256
allow_same_net_traffic=False
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_network_bridge=br100
flat_interface=eth1
flat_injected=False
public_interface=eth0
fixed_range=192.168.22.0/24
rabbit_host=20.0.0.58
#flat_network_dhcp_start=192.168.22.2
connection = mysql://nova:nova@20.0.0.58/nova
[*]Edit "/etc/nova/api-paste.ini" and change below lines
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 20.0.0.58
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = openstack
[*]Restart nova compute and network service
source /root/keystone_admin
service nova-compute restart
service nova-network restart
service libvirt-bin restart
[*]Verify nova compute and network service
# source /root/keystone_admin
#nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert ubuntu58 internal enabled :-) 2013-11-01 22:53:31
nova-scheduler ubuntu58 internal enabled :-) 2013-11-01 22:53:35
nova-conductor ubuntu58 internal enabled :-) 2013-11-01 22:53:31
nova-consoleauth ubuntu58 internal enabled :-) 2013-11-01 22:53:32
nova-compute ubuntu58 nova enabled :-) 2013-11-01 22:53:31
nova-network ubuntu58 internal enabled :-) 2013-11-01 22:53:31
nova-compute ubuntu59 nova enabled :-) 2013-11-01 22:53:30
nova-network ubuntu59 internal enabled :-) 2013-11-01 22:53:34
#nova list
#nova image-list
# nova service-list
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status| State | Updated_at | Disabled Reason |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| nova-cert | ubuntu58 | internal | enabled | up | 2013-11-01T22:54:21.000000 | None |
| nova-scheduler | ubuntu58 | internal | enabled | up | 2013-11-01T22:54:25.000000 | None |
| nova-conductor | ubuntu58 | internal | enabled | up | 2013-11-01T22:54:21.000000 | None |
| nova-consoleauth | ubuntu58 | internal | enabled | up | 2013-11-01T22:54:22.000000 | None |
| nova-compute | ubuntu58 | nova | enabled | up | 2013-11-01T22:54:21.000000 | None |
| nova-network | ubuntu58 | internal | enabled | up | 2013-11-01T22:54:21.000000 | None |
| nova-compute | ubuntu59 | nova | enabled | up | 2013-11-01T22:54:30.000000 | None |
| nova-network | ubuntu59 | internal | enabled | up | 2013-11-01T22:54:24.000000 | None |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
[*]Deploying additional compute nodes
[*]Refer to step 6 of this article
[*]Install the Dashboard on the controller node
[*]Install the dashboard packages
# apt-get install memcached libapache2-mod-wsgi openstack-dashboard
# apt-get remove --purge openstack-dashboard-ubuntu-theme
[*]Edit /etc/memcached.conf and delete the line below
-l 127.0.0.1
[*]Edit /etc/openstack-dashboard/local_settings.py (memcached and Keystone are installed on 20.0.0.58; change as needed)
##Comment out the following lines##
#CACHES = {
# 'default': {
# 'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
# }
#}
##Modify the following lines##
DEBUG = True
TEMPLATE_DEBUG = True
OPENSTACK_HOST = "20.0.0.58"
##Add the following lines##
PROD = True
USE_SSL = False
SITE_BRANDING = 'Openstack Dashboard'
ENABLE_JUJU_PANEL = True
CACHE_BACKEND = 'memcached://20.0.0.58:11211/'
[*]Restart http server for dashboard
service apache2 restart
service memcached restart
[*]Verify the Dashboard by accessing http://20.0.0.58/horizon (change 20.0.0.58 as needed)
admin/openstack (admin user in project/tenant "admin")
eagle1/openstack (a user in project/tenant "TenantEagles")
[*]Testing (TBD)
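A minimal smoke test might look like the following (a sketch only; the flavor, key and instance names are just examples, run from the controller):
# source /root/keystone_admin
# nova keypair-add testkey > testkey.pem && chmod 600 testkey.pem
# nova boot --flavor m1.tiny --image cirros-0.3.0-x86_64 --key-name testkey testvm
# nova list          ##the instance should reach ACTIVE and get an address from 192.168.22.0/24##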