
[Experience Sharing] Install and Configure OpenStack Compute Service (Nova)

  Based on OpenStack Icehouse release

  nova controller node setup
  1. Install and Configure OpenStack Compute Service (Nova)

yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

  

  mysql -uroot -p
mysql> create database nova;
mysql> grant all privileges on nova.* to 'nova'@'localhost' identified by 'NOVA-DBPASS';
mysql> grant all privileges on nova.* to 'nova'@'%' identified by 'NOVA-DBPASS';
mysql> flush privileges;
  

  vi /etc/nova/nova.conf
[database]
connection=mysql://nova:NOVA-DBPASS@MYSQL-SERVER/nova

nova-manage db sync
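
To confirm the sync worked, you can list the freshly created tables (a quick check, reusing the NOVA-DBPASS and MYSQL-SERVER placeholders from above):
mysql -unova -pNOVA-DBPASS -h MYSQL-SERVER nova -e 'show tables;' | head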

vi /etc/nova/nova.conf
  my_ip=192.168.1.10
auth_strategy=keystone
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=controller
vncserver_listen=192.168.1.10
vncserver_proxyclient_address=192.168.1.10
auth_host=controller
auth_port=35357
auth_protocol=http
  auth_uri=http://controller:5000
  admin_user=nova
admin_password=NOVA-USER-PASSWORD
admin_tenant_name=service
  

# add nova user (set in service tenant)
keystone user-create --tenant service --name nova --pass NOVA-USER-PASSWORD

# add nova user in admin role
keystone user-role-add --user nova --tenant service --role admin
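
# verify the user and role assignment (assuming admin credentials are sourced):
keystone user-list | grep nova
keystone user-role-list --user nova --tenant service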
  

  # add service for nova
keystone service-create --name=nova --type=compute --description="Nova Compute Service"

  

  # add endpoint for nova
keystone endpoint-create --region RegionOne --service nova --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
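
# a quick sanity check that the service and endpoint were registered:
keystone service-list | grep compute
keystone endpoint-list | grep 8774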

  

  chown -R nova:nova /etc/nova /var/log/nova
  for service in api cert consoleauth scheduler conductor novncproxy; do
  service openstack-nova-$service start
  chkconfig openstack-nova-$service on
  done


To check the registered nova compute nodes:
  nova-manage service list

  nova image-list

  

nova compute node setup

1. service NetworkManager stop; chkconfig NetworkManager off
service network start; chkconfig network on
  

  disable firewall and selinux
service iptables stop; chkconfig iptables off
service ip6tables stop; chkconfig ip6tables off
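
# the iptables commands above handle the firewall; to disable SELinux as well
# (standard RHEL/CentOS 6 procedure):
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config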

2. eth0 is used for the management/public/floating network (192.168.1.0/24), eth1 for the internal/flat network (192.168.20.0/24); it's recommended to use a separate NIC for the management network
  

  vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

  
3. set hostname in /etc/sysconfig/network and /etc/hosts
192.168.1.10    controller
192.168.1.11    node1
  

  4. yum -y install qemu-kvm libvirt python-virtinst bridge-utils  
# make sure modules are loaded
lsmod | grep kvm
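
# if lsmod shows nothing, load the module matching your CPU vendor:
modprobe kvm_intel   # Intel CPUs
modprobe kvm_amd     # AMD CPUs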

service libvirtd start; chkconfig libvirtd on
service messagebus start; chkconfig messagebus on

5. yum -y install ntp
vi /etc/ntp.conf
server 192.168.1.10
restrict 192.168.1.10

service ntpd start; chkconfig ntpd on
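
# verify the node is syncing against the controller:
ntpq -p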

6. yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
  yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum -y install mysql MySQL-python openstack-utils

7. yum install -y openstack-nova-compute
  
8. vi /etc/nova/nova.conf
connection=mysql://nova:NOVA-DBPASS@MYSQL-SERVER/nova
auth_strategy=keystone
auth_host=controller
auth_port=35357
auth_protocol=http
  auth_uri=http://controller:5000
  admin_user=nova
admin_password=NOVA-USER-PASSWORD
admin_tenant_name=service
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=controller
my_ip=192.168.1.11
vnc_enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.11
novncproxy_base_url=http://controller:6080/vnc_auto.html
glance_host=controller
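
Alternatively, since openstack-utils was installed in step 6, the same values can be set non-interactively with openstack-config; a partial sketch using the same placeholders as above:
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:NOVA-DBPASS@MYSQL-SERVER/nova
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.1.11
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller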
  

9. egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, hardware virtualization is available and nothing needs to change.
If this command returns a value of zero, set libvirt_type=qemu in nova.conf
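
# for example, to force QEMU on hosts without VT-x/AMD-V, via the openstack-config helper:
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu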

10. chown -R nova:nova /etc/nova /var/log/nova
  service openstack-nova-compute start; chkconfig openstack-nova-compute on
  

On the controller node, check node1's status:
  nova-manage service list

  

  Now for legacy FlatDHCP networking:
  # on controller node
  vi /etc/nova/nova.conf
  network_api_class=nova.network.api.API
security_group_api=nova

service openstack-nova-api restart

  service openstack-nova-scheduler restart
  service openstack-nova-conductor restart
  

  11. yum -y install openstack-nova-network openstack-nova-api
  
12. vi /etc/nova/nova.conf
  network_api_class=nova.network.api.API
security_group_api=nova

  network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=false
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_interface=eth1
flat_network_bridge=br100
public_interface=eth0

  #auto_assign_floating_ip=True
  

  Notes:
By default, all VMs in the "flat" network can see one another regardless of which tenant they belong to. Setting allow_same_net_traffic=false configures iptables policies to prevent any traffic between instances (even within the same tenant) unless it is unblocked by a security group.
  
13. service openstack-nova-metadata-api start; chkconfig openstack-nova-metadata-api on
service openstack-nova-network start; chkconfig openstack-nova-network on

14. on controller

  # create flat network
  source ~/adminrc

  demo1=`keystone tenant-list | grep demo1 | awk '{ print $2 }'`
  

  nova network-create vmnet1 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.10.0/24 --bridge br100 --multi-host T --project-id $demo1

Notes: dns1 and dns2 are public DNS servers; use any private network range for --fixed-range-v4

  

  Now for legacy VLAN networking:
  

There is a bug in the VLAN manager; to fix it, on the nova controller and all compute nodes:
vi /usr/lib/python2.6/site-packages/nova/network/manager.py
# line 1212 should read:
vlan = kwargs.get('vlan_start', None)

reboot

  

  # on controller node
  vi /etc/nova/nova.conf
  network_api_class=nova.network.api.API
security_group_api=nova

service openstack-nova-api restart

  service openstack-nova-scheduler restart
  service openstack-nova-conductor restart
  

  11. yum -y install openstack-nova-network openstack-nova-api
  
12. vi /etc/nova/nova.conf
  network_api_class=nova.network.api.API
security_group_api=nova
network_manager=nova.network.manager.VlanManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=false
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
vlan_start=100
vlan_interface=eth1
public_interface=eth0
#auto_assign_floating_ip=True

  

  13. service openstack-nova-metadata-api start; chkconfig openstack-nova-metadata-api on
service openstack-nova-network start; chkconfig openstack-nova-network on

14. on controller

  # create vlan network
  source ~/adminrc
# Normally: one subnet --> one VLAN ID --> one security group
  demo1=`keystone tenant-list | grep demo1 | awk '{ print $2 }'`
  

  nova network-create vmnet1 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.10.0/24 --vlan 100 --multi-host T --project-id $demo1

Notes: dns1 and dns2 are public DNS servers; use any private network range for --fixed-range-v4
  

  keystone tenant-create --name=demo2 --description="Demo2 Tenant"
demo2=`keystone tenant-list | grep demo2 | awk '{ print $2 }'`
nova network-create vmnet2 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.11.0/24 --vlan 110 --multi-host T --project-id $demo2

  

  Launch Instances
  source ~/demo1rc
  nova secgroup-list

  # create test-sec group
  nova secgroup-create test-sec "Test Security Group"
  # permit ssh
nova secgroup-add-rule test-sec tcp 22 22 0.0.0.0/0

# permit ICMP
nova secgroup-add-rule test-sec icmp -1 -1 0.0.0.0/0

nova secgroup-list-rules test-sec
  

  nova keypair-add demokey > demokey.pem
nova keypair-list
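
# ssh will refuse a key file that is world-readable, so tighten the permissions
# on demokey.pem right after creating it:
chmod 600 demokey.pem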
  

  nova flavor-list
nova image-list
  

source ~/adminrc to run the commands below:

  nova network-list
nova-manage network list


  vmnet1=`nova network-list | grep vmnet1 | awk '{ print $2 }'`
  

  source ~/demo1rc
  nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic net-id=$vmnet1 CirrOS
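
The boot is asynchronous; re-run nova list until the instance reaches ACTIVE, and inspect the console log if it gets stuck:
nova list
nova console-log CirrOS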
  

1. you can use VMware Workstation to build images, then upload them to glance using the dashboard
  


  
  ubuntu
1). vi /etc/hosts to remove the 127.0.1.1 entry
2). enable ssh login
3). enable dhcp client on interface
4). enable normal username/password
5). set root password

centos/redhat
1). rm -rf /etc/ssh/ssh_host_*
2). vi /etc/sysconfig/network-scripts/ifcfg-ethX to remove HWADDR and UUID items
3). rm -rf /etc/udev/rules.d/70-persistent-net.rules
4). enable ssh login
5). enable dhcp client on interface (also vi /etc/sysconfig/network, /etc/resolv.conf)

  6). enable normal username/password
7). set root password

2. you can also launch an instance without a keypair, logging in with the username/password set inside the image


  

  nova commands:

  nova list; nova show CirrOS
  nova stop CirrOS
nova start CirrOS
  

  # get vnc console address via web browser:

  nova get-vnc-console CirrOS novnc
  

  # create floating network

  nova-manage floating create --ip_range 192.168.1.248/29

Notes: the floating IP range should come from the public network attached to eth0
nova-manage floating list
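
# once the admin has created the range, a tenant can allocate an address from the pool:
source ~/demo1rc
nova floating-ip-create
nova floating-ip-list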
  

# Associate the floating IP address with your instance, even while it's running
nova floating-ip-associate CirrOS 192.168.1.249
( to detach: nova floating-ip-disassociate CirrOS 192.168.1.249 )

  nova list
  
ping 192.168.1.249 (floating ip)
use Xshell or PuTTY to run: ssh -i demokey.pem cirros@192.168.1.249  (username: cirros, password: cubswin:))
[ for the Ubuntu cloud image the username is ubuntu; for the Fedora cloud image it is fedora ]
Now we can ping and ssh to 192.168.1.249, and the CirrOS instance can access the Internet.
  

Notes: you need enough space in /var/lib/nova/instances to store the VM disks; you can mount a partition there (using local or shared storage).
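
A minimal sketch of dedicating a local partition to it (the device name /dev/sdb1 is illustrative):
mkfs.ext4 /dev/sdb1
echo '/dev/sdb1 /var/lib/nova/instances ext4 defaults 0 0' >> /etc/fstab
mount /var/lib/nova/instances
chown -R nova:nova /var/lib/nova/instances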
  

Access the noVNC console from the Internet, method 1

1. add another interface facing the Internet on the nova controller (normally the keystone+dashboard node)

2. assign a public ip address

3. on the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_nova_controller:6080/vnc_auto.html

service openstack-nova-compute restart
  

  4. nova get-vnc-console CirrOS novnc
  http://public_ip_address_of_nova_controller:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673
  

Access the noVNC console from the Internet, method 2
1. you can publish the dashboard web site to the Internet (normally the keystone+dashboard node)

  

2. on the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_firewall:6080/vnc_auto.html

service openstack-nova-compute restart
  

  3. nova get-vnc-console CirrOS novnc
  http://public_ip_address_of_firewall:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673
  
