[Experience Sharing] Install and Configure OpenStack Compute Service (Nova) for Ubuntu 14.04

  Based on Ubuntu 14.04 LTS x86_64

  nova controller node setup
  1. Install and Configure OpenStack Compute Service (Nova)
  aptitude -y install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
  

  mysql -uroot -p
mysql> create database nova;
mysql> grant all privileges on nova.* to 'nova'@'localhost' identified by 'NOVA-DBPASS';
mysql> grant all privileges on nova.* to 'nova'@'%' identified by 'NOVA-DBPASS';
mysql> flush privileges;
  

  vi /etc/nova/nova.conf
[database]
connection=mysql://nova:NOVA-DBPASS@MYSQL-SERVER/nova

  

  rm -rf /var/lib/nova/nova.sqlite

  nova-manage db sync
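  A quick sanity check (a hedged example, reusing the NOVA-DBPASS placeholder from the grants above) to confirm the sync created the nova tables:
mysql -unova -pNOVA-DBPASS -e "show tables" nova | head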
  

  vi /etc/nova/nova.conf
[DEFAULT]

  my_ip=192.168.1.10
auth_strategy=keystone

  rpc_backend = rabbit
rabbit_host = controller
  rabbit_password = GUEST-PASS

  vncserver_listen=192.168.1.10
vncserver_proxyclient_address=192.168.1.10

[keystone_authtoken]

  auth_host=controller
auth_port=35357
auth_protocol=http

  auth_uri=http://controller:5000
  admin_user=nova
admin_password=NOVA-USER-PASSWORD
admin_tenant_name=service
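  The rabbit_password above must match the RabbitMQ guest password; if it has not been set yet (RabbitMQ is assumed to run on the controller, as in the earlier parts of this setup), set it with:
rabbitmqctl change_password guest GUEST-PASS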
  

  # add nova user (in the service tenant)
keystone user-create --tenant service --name nova --pass NOVA-USER-PASSWORD

# add nova user in admin role
keystone user-role-add --user nova --tenant service --role admin
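  To verify the user and its role assignment:
keystone user-list | grep nova
keystone user-role-list --user nova --tenant service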
  

  # add service for nova
keystone service-create --name=nova --type=compute --description="Nova Compute Service"

  

  # add endpoint for nova
keystone endpoint-create --region RegionOne --service nova --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
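  Confirm the service and endpoint are registered:
keystone service-list
keystone endpoint-list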

  

  chown -R nova:nova /etc/nova /var/log/nova
  for i in api cert consoleauth scheduler conductor novncproxy; do
  service nova-$i restart
  done
  

  2. nova image-list
  nova-manage service list
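  Healthy output from nova-manage service list looks roughly like the following (hostnames and timestamps will differ); ':-)' in the State column means the service is up, 'XXX' means it is down:
  Binary            Host        Zone      Status    State  Updated_At
  nova-cert         controller  internal  enabled   :-)    2014-xx-xx xx:xx:xx
  nova-consoleauth  controller  internal  enabled   :-)    2014-xx-xx xx:xx:xx
  nova-scheduler    controller  internal  enabled   :-)    2014-xx-xx xx:xx:xx
  nova-conductor    controller  internal  enabled   :-)    2014-xx-xx xx:xx:xx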
  

  nova compute node setup
  

  1. eth0 for management/public/floating (192.168.1.0/24), eth1 for internal/flat (192.168.20.0/24); it's recommended to use a separate NIC for the management network

vi /etc/network/interfaces
auto eth1
iface eth1 inet manual
   up ip link set dev $IFACE up
   down ip link set dev $IFACE down
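  Bring eth1 up and confirm it is in state UP:
ifup eth1
ip addr show eth1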

2. vi /etc/hosts
# remove or comment the line beginning with 127.0.1.1
192.168.1.10    controller
192.168.1.11    node1

3. aptitude -y install qemu-kvm libvirt-bin virtinst bridge-utils
modprobe vhost_net
echo vhost_net >> /etc/modules
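  Verify the CPU exposes hardware virtualization and that the modules are loaded (a result of 0 from the first command means KVM will fall back to plain QEMU emulation):
egrep -c '(vmx|svm)' /proc/cpuinfo
lsmod | egrep 'kvm|vhost_net'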

4. aptitude -y install ntp
vi /etc/ntp.conf
server 192.168.1.10
restrict 192.168.1.10

service ntp restart
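  Confirm the node is syncing time against the controller:
ntpq -p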

5. aptitude -y install python-mysqldb

6. aptitude -y install nova-compute-kvm python-guestfs

  

  

  7. dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
vi /etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}

chmod +x /etc/kernel/postinst.d/statoverride
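  This override makes the kernel image world-readable so python-guestfs/libguestfs can inspect guest images without root; confirm it took effect:
ls -l /boot/vmlinuz-$(uname -r)    # should show -rw-r--r--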

8. vi /etc/nova/nova.conf
[DEFAULT]
auth_strategy=keystone
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = GUEST-PASS
my_ip=192.168.1.11
vnc_enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.11
novncproxy_base_url=http://controller:6080/vnc_auto.html
glance_host=controller

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_user=nova
admin_password=NOVA-USER-PASSWORD
admin_tenant_name=service

[database]
connection=mysql://nova:NOVA-DBPASS@MYSQL-SERVER/nova

rm -rf /var/lib/nova/nova.sqlite

9. chown -R nova:nova /etc/nova /var/log/nova
service nova-compute restart
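  Check the service came up cleanly on the compute node itself:
service nova-compute status
tail -n 20 /var/log/nova/nova-compute.log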
  

  On the controller node, check node1's status:
  nova-manage service list
  

  Now for legacy FlatDHCP networking:
  # on controller node
  vi /etc/nova/nova.conf
  [DEFAULT]

  network_api_class=nova.network.api.API
security_group_api=nova

  

  service nova-api restart
  service nova-scheduler restart
  service nova-conductor restart
  

  11. aptitude -y install nova-network nova-api-metadata
  
12. vi /etc/nova/nova.conf
  [DEFAULT]

  network_api_class=nova.network.api.API
security_group_api=nova

  network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=false
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_interface=eth1
flat_network_bridge=br100
public_interface=eth0

  #auto_assign_floating_ip=True
  

  Notes:
  By default, all the VMs in the “flat” network can see one another regardless of which tenant they belong to. Setting "allow_same_net_traffic=false" configures iptables policies to block any traffic between instances (even inside the same tenant) unless it is unblocked in a security group, as shown below.
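  For example, to re-allow instance-to-instance traffic inside one tenant while keeping allow_same_net_traffic=false, add source-group rules to that tenant's security group (a hedged sketch, assuming its 'default' group):
nova secgroup-add-group-rule default default tcp 1 65535
nova secgroup-add-group-rule default default icmp -1 -1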
  
13. service nova-api-metadata restart; service nova-network restart

  

  14. on controller

  # create flat network
  source ~/adminrc

  demo1=`keystone tenant-list | grep demo1 | awk '{ print $2 }'`
  

  nova network-create vmnet1 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.10.0/24 --bridge br100 --multi-host T --project-id $demo1

  Notes: dns1 and dns2 are public DNS servers; any private network range can be used for --fixed-range-v4
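  To check the result: the network should show up on the controller, and the flat bridge should appear on the compute node once nova-network has configured it (brctl comes from bridge-utils, installed in step 3):
nova network-list
brctl show br100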

  

  Now for legacy VLAN networking:
  

  There is a bug in the VLAN code; to fix it, edit the following file on the nova controller and all compute nodes:
vi /usr/lib/python2.7/dist-packages/nova/network/manager.py

  # line 1212        
vlan = kwargs.get('vlan_start', None)

reboot

  

  # on controller node
  vi /etc/nova/nova.conf
  network_api_class=nova.network.api.API
security_group_api=nova

  

  service nova-api restart
  service nova-scheduler restart
  service nova-conductor restart
  

  11. aptitude -y install nova-network nova-api-metadata
  

  12. vi /etc/nova/nova.conf
  network_api_class=nova.network.api.API
security_group_api=nova
network_manager=nova.network.manager.VlanManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=false
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
vlan_start=100
vlan_interface=eth1
public_interface=eth0
#auto_assign_floating_ip=True

  

  13. service nova-api-metadata restart; service nova-network restart
  

  14. on controller

  # create vlan network
  source ~/adminrc
  # Normally: one subnet --> one vlan id --> one security group
  demo1=`keystone tenant-list | grep demo1 | awk '{ print $2 }'`
  

  nova network-create vmnet1 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.10.0/24 --vlan 100 --multi-host T --project-id $demo1

  Notes: dns1 and dns2 are public DNS servers; any private network range can be used for --fixed-range-v4
  

  keystone tenant-create --name=demo2 --description="Demo2 Tenant"
demo2=`keystone tenant-list | grep demo2 | awk '{ print $2 }'`
nova network-create vmnet2 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.11.0/24 --vlan 110 --multi-host T --project-id $demo2
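  On a compute node hosting instances from these tenants, the VLAN sub-interfaces and per-VLAN bridges created by nova-network can be checked with (a hedged check, assuming the 8021q module has been loaded):
cat /proc/net/vlan/config
brctl show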

  

  Launch Instances
  source ~/demo1rc
  nova secgroup-list

  # create test-sec group
  nova secgroup-create test-sec "Test Security Group"
  # permit ssh
nova secgroup-add-rule test-sec tcp 22 22 0.0.0.0/0

# permit ICMP
nova secgroup-add-rule test-sec icmp -1 -1 0.0.0.0/0

nova secgroup-list-rules test-sec
  

  nova keypair-add demokey > demokey.pem
  nova keypair-list
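  Tighten permissions on the saved private key, otherwise ssh will refuse to use it later:
chmod 600 demokey.pem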

nova flavor-list
nova image-list
  

  source ~/adminrc to run the commands below:

  nova network-list
nova-manage network list


  vmnet1=`nova network-list | grep vmnet1 | awk '{ print $2 }'`
  

  source ~/demo1rc
  nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic net-id=$vmnet1 CirrOS
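  Watch the instance go from BUILD to ACTIVE, and check its boot output if it stalls:
nova list
nova console-log CirrOS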
  

  1. You can use VMware Workstation to build images, then upload them to Glance using the dashboard.
  


  
  ubuntu
1). vi /etc/hosts to remove the 127.0.1.1 entry
2). enable ssh login
3). enable dhcp client on interface
4). enable normal username/password
5). set root password

centos/redhat
1). rm -rf /etc/ssh/ssh_host_*
2). vi /etc/sysconfig/network-scripts/ifcfg-ethX to remove HWADDR and UUID items
3). rm -rf /etc/udev/rules.d/70-persistent-net.rules
4). enable ssh login
5). enable dhcp client on interface (also vi /etc/sysconfig/network, /etc/resolv.conf)

  6). enable normal username/password
7). set root password

2. You can also launch an instance without a keypair (when the image has a normal username/password login enabled as above)


  

  nova commands:
  nova list; nova show CirrOS
  nova stop CirrOS
nova start CirrOS
  

  # get vnc console address via web browser:

  nova get-vnc-console CirrOS novnc
  

  # create floating network

  nova-manage floating create --ip_range 192.168.1.248/29

  Notes: the floating IP range should come from the public network attached to eth0
nova-manage floating list
  

  # Associate the floating IP address with your instance, even while it is running
  nova floating-ip-associate CirrOS 192.168.1.249
  ( nova floating-ip-disassociate CirrOS 192.168.1.249 )

  nova list
  
ping 192.168.1.249 (floating ip)
use Xshell or PuTTY, or: ssh -i demokey.pem cirros@192.168.1.249  (username: cirros, password: cubswin:))
[ for the Ubuntu cloud image the username is ubuntu; for the Fedora cloud image it is fedora ]

  Now we can ping and ssh to 192.168.1.249, and the CirrOS instance can access the Internet.
  

  Notes: you should have enough space in /var/lib/nova/instances to store VMs; you can mount a partition there (using local or shared storage), as sketched below.
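  A minimal sketch of moving the instance store onto a dedicated local partition (assuming a spare /dev/sdb1; adjust to your own disk layout, and copy any existing contents of /var/lib/nova/instances elsewhere before mounting on top of it):
service nova-compute stop
mkfs.ext4 /dev/sdb1
echo "/dev/sdb1 /var/lib/nova/instances ext4 defaults 0 2" >> /etc/fstab
mount /var/lib/nova/instances
chown -R nova:nova /var/lib/nova/instances
service nova-compute start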
  

  Access novnc console from Internet, method 1

1. add another interface facing the Internet on the nova controller (normally the keystone+dashboard node)

2. assign a public ip address

3. on compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_nova_controller:6080/vnc_auto.html

service nova-compute restart
  

  4. nova get-vnc-console CirrOS novnc
  http://public_ip_address_of_nova_controller:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673
  

  Access novnc console from Internet, method 2
  1. you can publish dashboard web site to Internet (normally keystone+dashboard node)

  

  2. on compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_firewall:6080/vnc_auto.html

service nova-compute restart
  

  3. nova get-vnc-console CirrOS novnc
  http://public_ip_address_of_firewall:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673
  
