[Experience Sharing] Install and Configure OpenStack Network Service (Neutron)

Posted on 2018-6-2 10:49:11
  Based on OpenStack Icehouse release
Configure Neutron Controller Node:
  1. on keystone node

  mysql -uroot -p
mysql> create database neutron;
mysql> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'NEUTRON-DBPASS';
mysql> grant all privileges on neutron.* to 'neutron'@'%' identified by 'NEUTRON-DBPASS';
mysql> flush privileges;

# Create a neutron user
keystone user-create --tenant service --name neutron --pass NEUTRON-USER-PASSWORD

# Add role to the neutron user
keystone user-role-add --user neutron --tenant service --role admin

# Create the neutron service
keystone service-create --name=neutron --type=network --description="Neutron Network Service"

# Create a Networking endpoint
keystone endpoint-create --region RegionOne --service neutron --publicurl=http://NEUTRON-SERVER:9696 --internalurl=http://NEUTRON-SERVER:9696 --adminurl=http://NEUTRON-SERVER:9696
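Step 3 below needs the id of the service tenant for nova_admin_tenant_id. Because keystone prints a pipe-separated table, awk's default whitespace splitting puts the leading '|' in $1 and the id in $2. A sketch against hypothetical table output (the real command is `keystone tenant-list`):

```shell
# Hypothetical `keystone tenant-list` output; the real table has the same
# pipe-separated layout, so '| <id> | service |' splits with the id in $2.
sample='+----------------------------------+---------+
| 0123456789abcdef0123456789abcdef | service |
+----------------------------------+---------+'
tenant_id=$(printf '%s\n' "$sample" | awk '/ service / { print $2 }')
echo "$tenant_id"
```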

2. on the neutron server node (here we reuse the keystone node for it)

  yum -y install openstack-neutron openstack-neutron-ml2 python-neutronclient

  yum -y update iproute
  yum -y install kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm
  reboot

3. vi /etc/neutron/neutron.conf
[database]
connection=mysql://neutron:NEUTRON-DBPASS@MYSQL-SERVER/neutron

[DEFAULT]
auth_strategy=keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=controller
notify_nova_on_port_status_changes=True
notify_nova_on_port_data_changes=True
nova_url=http://controller:8774/v2
nova_admin_username=nova
nova_admin_tenant_id=SERVICE-TENANT-ID
nova_admin_password=NOVA-USER-PASSWORD
nova_admin_auth_url=http://controller:35357/v2.0
core_plugin=ml2
service_plugins=router
verbose=True

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_tenant_name=service
admin_user=neutron
admin_password=NEUTRON-USER-PASSWORD

Note: replace SERVICE-TENANT-ID with the output of `keystone tenant-list | awk '/ service / { print $2 }'` — config files do not evaluate shell substitutions.


  Comment out any lines in the [service_providers] section
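Commenting out the [service_providers] lines can be done non-interactively with a sed range (GNU sed syntax). A sketch run against sample text rather than the real file — the section contents here are hypothetical; add -i and point it at /etc/neutron/neutron.conf once the output looks right:

```shell
# Prefix '#' to every non-blank, non-comment line between the
# [service_providers] header and the next section header.
sample='[service_providers]
service_provider=LOADBALANCER:Haproxy:default
[agent]
root_helper = sudo'
out=$(printf '%s\n' "$sample" | \
  sed '/^\[service_providers\]/,/^\[/{/^\[/!{/^#/!{/^$/!s/^/#/}}}')
printf '%s\n' "$out"
```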
  

  4. vi /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]

  type_drivers=gre
tenant_network_types=gre
mechanism_drivers=openvswitch
  [ml2_type_gre]
  tunnel_id_ranges=1:1000

  [securitygroup]
  firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True

5. on nova controller node

  vi /etc/nova/nova.conf
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://NEUTRON-SERVER:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON-USER-PASSWORD
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
  vif_plugging_is_fatal=false
vif_plugging_timeout=0
  

6. cd /etc/neutron
ln -s plugins/ml2/ml2_conf.ini plugin.ini

7. service openstack-nova-api restart; service openstack-nova-scheduler restart; service openstack-nova-conductor restart

8. chown -R neutron:neutron /etc/neutron /var/log/neutron

  service neutron-server start; chkconfig neutron-server on

Neutron Network Node:
1. service NetworkManager stop; chkconfig NetworkManager off
service network start; chkconfig network on

disable firewall and selinux
service iptables stop; chkconfig iptables off
service ip6tables stop; chkconfig ip6tables off

2. eth0 for management/public/floating (192.168.1.0/24), eth1 for the internal tunnel network (192.168.30.0/24), eth2 for the external bridge; it's recommended to use a separate NIC for the management network
  

  vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

  

  3. set hostname in /etc/sysconfig/network and /etc/hosts
192.168.1.10    controller
192.168.1.11    node1
192.168.1.12    neutronnet
  

  4. yum -y install ntp
vi /etc/ntp.conf
server 192.168.1.10
restrict 192.168.1.10

service ntpd start; chkconfig ntpd on

5. yum -y install  http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
  yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum -y install mysql MySQL-python

6. yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

  yum -y update iproute
  yum -y install kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm
  reboot
  
7. Enable packet forwarding and disable reverse-path filtering
vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

sysctl -p
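A quick sanity check that all three keys made it into the file; sketched here against sample text — swap in /etc/sysctl.conf for the real check:

```shell
# 'sample' stands in for the contents of /etc/sysctl.conf. The dots in
# each key act as regex wildcards in grep, which is harmless for this check.
sample='net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0'
missing=0
for key in net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter; do
  printf '%s\n' "$sample" | grep -q "^$key=" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all keys present"
```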

8. vi /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=controller
core_plugin=ml2
service_plugins=router
verbose = True

[keystone_authtoken]
auth_host=controller
auth_port = 35357
auth_protocol = http
auth_uri=http://controller:5000
admin_tenant_name=service
admin_user=neutron
admin_password=NEUTRON-USER-PASSWORD

  Comment out any lines in the [service_providers] section
  
9. vi /etc/neutron/l3_agent.ini
  interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces=True
verbose = True
  

  vi /etc/neutron/dhcp_agent.ini
  interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
use_namespaces=True
verbose = True
  

  10. vi /etc/neutron/metadata_agent.ini
auth_url = http://controller:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON-USER-PASSWORD
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA-PASSWORD
  verbose = True
  

  11. on nova controller node
vi /etc/nova/nova.conf
neutron_metadata_proxy_shared_secret=METADATA-PASSWORD
service_neutron_metadata_proxy=true

service openstack-nova-api restart
  

  12. vi /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]

  type_drivers=gre
tenant_network_types=gre
mechanism_drivers=openvswitch
  [ml2_type_gre]
  tunnel_id_ranges=1:1000
  [ovs]

  local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS #192.168.30.12
tunnel_type = gre
enable_tunneling = True

  [securitygroup]
  firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True
  

  13. service openvswitch start; chkconfig openvswitch on
  ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2
  ethtool -K eth2 gro off
  ethtool -k eth2
  

  vi /etc/sysconfig/network-scripts/ifcfg-eth2
ETHTOOL_OPTS="-K ${DEVICE} gro off"
  

  14. cd /etc/neutron
ln -s plugins/ml2/ml2_conf.ini plugin.ini
  

  15. cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent

16. chown -R neutron:neutron /etc/neutron /var/log/neutron

  for s in neutron-{dhcp,metadata,l3,openvswitch}-agent; do
service $s start
chkconfig $s on
done


  Neutron Compute Node:
  1. service NetworkManager stop; chkconfig NetworkManager off
service network start; chkconfig network on
  

  disable firewall and selinux
service iptables stop; chkconfig iptables off
service ip6tables stop; chkconfig ip6tables off

2. eth0 for management/public/floating (192.168.1.0/24), eth1 for the internal tunnel network (192.168.30.0/24); it's recommended to use a separate NIC for the management network
  

  3. set hostname in /etc/sysconfig/network and /etc/hosts
192.168.1.10    controller
192.168.1.11    node1
  192.168.1.12    neutronnet
  

  4. yum -y install qemu-kvm libvirt python-virtinst bridge-utils  
# make sure modules are loaded
lsmod | grep kvm

service libvirtd start; chkconfig libvirtd on
service messagebus start; chkconfig messagebus on

5. yum -y install ntp
vi /etc/ntp.conf
server 192.168.1.10
restrict 192.168.1.10

service ntpd start; chkconfig ntpd on

6. yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
  yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum -y install mysql MySQL-python openstack-utils

7. yum install -y openstack-nova-compute
  
8. vi /etc/nova/nova.conf
[database]
connection=mysql://nova:NOVA-DATABASE-PASSWORD@MYSQL-SERVER/nova

[DEFAULT]
auth_strategy=keystone
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=controller
my_ip=192.168.1.11
vnc_enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.11
novncproxy_base_url=http://controller:6080/vnc_auto.html
glance_host=controller

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_user=nova
admin_password=NOVA-USER-PASSWORD
admin_tenant_name=service
  

  9. egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns one or greater, hardware virtualization is available and nothing needs to change.
If it returns zero, set libvirt_type=qemu in nova.conf.
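The check in step 9 can be scripted. A sketch using a stand-in flags line instead of the real /proc/cpuinfo:

```shell
# 'cpu_flags' stands in for the contents of /proc/cpuinfo.
cpu_flags='flags : fpu vme de pse msr pae vmx ssse3 sse4_1'
count=$(printf '%s\n' "$cpu_flags" | grep -Ec '(vmx|svm)')
if [ "$count" -ge 1 ]; then
  virt_type=kvm     # hardware virtualization is available
else
  virt_type=qemu    # fall back to plain emulation
fi
echo "libvirt_type=$virt_type"
```

On a real node, replace the stand-in with `grep -Ec '(vmx|svm)' /proc/cpuinfo`.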

10. chown -R nova:nova /etc/nova /var/log/nova
  service openstack-nova-compute start; chkconfig openstack-nova-compute on

  

  Now for the Neutron plugin agent:

11. Disable reverse-path filtering
vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

sysctl -p

12. yum -y install openstack-neutron-ml2 openstack-neutron-openvswitch
  yum -y update iproute
  yum -y install kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm
  reboot
  
13. vi /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = controller
core_plugin=ml2
service_plugins=router
verbose = True

[keystone_authtoken]
auth_host=controller
auth_port = 35357
auth_protocol = http
auth_uri=http://controller:5000
admin_tenant_name=service
admin_user=neutron
admin_password=NEUTRON-USER-PASSWORD

  Comment out any lines in the [service_providers] section
  

  14. vi /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]

  type_drivers=gre
tenant_network_types=gre
mechanism_drivers=openvswitch
  [ml2_type_gre]
  tunnel_id_ranges=1:1000
  [ovs]

  local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS #192.168.30.11
tunnel_type = gre
enable_tunneling = True

  [securitygroup]
  firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True
  

  15. service openvswitch start; chkconfig openvswitch on
  ovs-vsctl add-br br-int
  

  16. vi /etc/nova/nova.conf
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://NEUTRON-SERVER:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON-USER-PASSWORD
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
  vif_plugging_is_fatal=false
vif_plugging_timeout=0
  
17. cd /etc/neutron
ln -s plugins/ml2/ml2_conf.ini plugin.ini
  

  18. cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
  

  service openstack-nova-compute restart
  

  19. chown -R neutron:neutron /etc/neutron /var/log/neutron

  service neutron-openvswitch-agent start; chkconfig neutron-openvswitch-agent on


  Creating Neutron Networks

on controller node:
1. check that neutron-server is communicating with its agents
  neutron agent-list

  source ~/adminrc (used for steps 1~2)

  # create external network
neutron net-create ext-net --shared --router:external=True [ --provider:network_type gre --provider:segmentation_id SEG_ID ]

  Note: SEG_ID is the tunnel id.

  

  2. # create subnet on external network
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR

neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.1.200,end=192.168.1.210 --disable-dhcp --dns-nameserver 210.22.84.3 --dns-nameserver 210.22.70.3 --gateway 192.168.1.1 192.168.1.0/24

3. # create tenant network
  source ~/demo1rc (used for steps 3~7)
  neutron net-create demo-net

  

  4. # create subnet on tenant network
neutron subnet-create demo-net --name demo-subnet --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR

neutron subnet-create demo-net --name demo-subnet --dns-nameserver x.x.x.x --gateway 10.10.10.1 10.10.10.0/24

5. # create virtual router to connect external and tenant network
neutron router-create demo-router

6. # Attach the router to the tenant subnet
neutron router-interface-add demo-router demo-subnet

7. # Attach the router to the external network by setting it as the gateway
neutron router-gateway-set demo-router ext-net

  Note: the tenant router gateway should occupy the lowest IP address in the floating IP address range, i.e. 192.168.1.200
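The gateway port takes the first address of the allocation pool, so it can be parsed out of the --allocation-pool argument from step 2 with POSIX parameter expansion. A sketch:

```shell
# The pool string matches the --allocation-pool value used in step 2.
pool='start=192.168.1.200,end=192.168.1.210'
gateway_port_ip=${pool#start=}            # drop the leading 'start='
gateway_port_ip=${gateway_port_ip%%,*}    # drop ',end=...' and everything after
echo "$gateway_port_ip"
```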
  

  neutron net-list
  neutron subnet-list
  neutron router-port-list demo-router
  

  Launch Instances

  for demo1 tenant:
source ~/demo1rc

neutron security-group-create --description "Test Security Group" test-sec

# permit ICMP
neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 test-sec

# permit ssh
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 test-sec

neutron security-group-rule-list

  

  nova keypair-add demokey > demokey.pem
nova keypair-list
  

  nova flavor-list
nova image-list
  neutron net-list
  neutron subnet-list

  

  demonet=`neutron net-list | grep demo-net | awk '{ print $2 }'`
nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic net-id=$demonet CirrOS
  Note: the KVM nodes must have enough free memory, or instances will fail to spawn.
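The awk '{ print $2 }' in the demonet line works because neutron prints a pipe-separated table: with default whitespace splitting, $1 is the leading '|' and $2 is the id column. A sketch against hypothetical net-list output:

```shell
# Hypothetical `neutron net-list` row for demo-net; the real output has
# the same '| <id> | <name> | ... |' layout, so $2 is the network id.
sample='+--------------------------------------+----------+
| 3c612a5a-1111-2222-3333-444455556666 | demo-net |
+--------------------------------------+----------+'
demonet=$(printf '%s\n' "$sample" | grep demo-net | awk '{ print $2 }')
echo "$demonet"
```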

  

  1. You can use VMware Workstation to build images, then upload them to Glance via the dashboard.
  


  
  ubuntu
1). vi /etc/hosts to remove the 127.0.1.1 entry
2). enable ssh login
3). enable dhcp client on interface
4). enable normal username/password
5). set root password

centos/redhat
1). rm -rf /etc/ssh/ssh_host_*
2). vi /etc/sysconfig/network-scripts/ifcfg-ethX to remove HWADDR and UUID items
3). rm -rf /etc/udev/rules.d/70-persistent-net.rules
4). enable ssh login
5). enable dhcp client on interface (also vi /etc/sysconfig/network, /etc/resolv.conf)

  6). enable normal username/password
7). set root password

2. launch instance without keypair


  

  nova commands:
  nova list; nova show CirrOS
  nova stop CirrOS
nova start CirrOS
  

  # get vnc console address via web browser:

  nova get-vnc-console CirrOS novnc
  

  # Create a floating IP address on the ext-net external network
  neutron floatingip-create ext-net
  neutron floatingip-list

  

  # Associate the floating IP address with your instance, even while it's running

  nova floating-ip-associate CirrOS 192.168.1.201
  ( nova floating-ip-disassociate cirros 192.168.1.201 )

  nova list
  

  

  ping 192.168.1.201 (floating ip)
use Xshell or PuTTY to run ssh -i demokey.pem cirros@192.168.1.201 (username: cirros, password: cubswin:))
  [ for ubuntu cloud image: username is ubuntu, for fedora cloud image: username is fedora ]

  Now we can ping and SSH to 192.168.1.201, and the CirrOS instance can access the Internet.
  

  Note: you need enough space in /var/lib/nova/instances to store the VMs; you can mount a partition there (using local or shared storage).
  

  Fixed IP addresses with OpenStack Neutron for tenant networks

neutron subnet-list
neutron subnet-show demo-subnet
neutron port-create demo-net --fixed-ip ip_address=10.10.10.10 --name VM-NAME
nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic port-id=xxx VM-NAME
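nova boot needs the port id from the port-create output, which neutron prints as a 'Field | Value' table; in that layout awk's $2 is the field name and $4 its value. A sketch over hypothetical output rows:

```shell
# Hypothetical rows from `neutron port-create` output; the id row has the
# form '| id | <uuid> |', so $2 is "id" and $4 is the uuid to pass to nova.
sample='| admin_state_up | True |
| id             | 11111111-2222-3333-4444-555555555555 |'
port_id=$(printf '%s\n' "$sample" | awk '$2 == "id" { print $4 }')
echo "$port_id"
```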

  

  Access the noVNC console from the Internet, method 1

1. add another interface facing the Internet on the nova controller (normally the keystone+dashboard node)

2. assign a public ip address

3. on the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_nova_controller:6080/vnc_auto.html

service openstack-nova-compute restart
  

  4. nova get-vnc-console CirrOS novnc
  http://public_ip_address_of_nova_controller:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673
  

  Access the noVNC console from the Internet, method 2
  1. you can publish the dashboard web site to the Internet (normally the keystone+dashboard node)

  

  2. on the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_firewall:6080/vnc_auto.html

service openstack-nova-compute restart
  

  3. nova get-vnc-console CirrOS novnc
  http://public_ip_address_of_firewall:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673
  
