
Getting through OpenStack Quantum with Open vSwitch


Over the last couple of days I've been working on setting up OpenStack Quantum on my Folsom cluster. I'm using CentOS 6.3, which is far from the best-supported platform to run OpenStack on. I'll try to cover the problems I ran into getting this
running, and hopefully explain a bit about how Quantum works along the way. I'll assume you have already installed an OpenStack cluster from the EPEL-provided packages and are familiar with the basics of OpenStack's service architecture.


Open vSwitch


Before I go on to actually configuring Quantum, you will want to build the openvswitch and kmod-openvswitch RPMs. I've opened
a bug with Red Hat to get this package into EPEL, which also gives you an idea of how to build the package yourself. For reference, something as simple as this should work:

$ wget 'https://bugzilla.redhat.com/attachment.cgi?id=696698' -O $HOME/rpmbuild/SOURCES/openvswitch-1.7.3-el6.patch
$ wget 'http://openvswitch.org/releases/openvswitch-1.7.3.tar.gz' -O $HOME/rpmbuild/SOURCES/openvswitch-1.7.3.tar.gz
$ wget 'https://bugzilla.redhat.com/attachment.cgi?id=696699' -O $HOME/rpmbuild/SPECS/openvswitch.spec
$ wget \
    'http://openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=rhel/openvswitch-kmod-rhel6.spec.in;hb=branch-1.7' \
    -O $HOME/rpmbuild/SPECS/openvswitch-kmod.spec
$ rpmbuild -bs -D "kversion `uname -r`" $HOME/rpmbuild/SPECS/openvswitch.spec
$ rpmbuild -bs -D "kversion `uname -r`" $HOME/rpmbuild/SPECS/openvswitch-kmod.spec
$ mock $HOME/rpmbuild/SRPMS/openvswitch-1.7.3-1.src.rpm
$ mock $HOME/rpmbuild/SRPMS/openvswitch-kmod-1.7.3-1.el6.src.rpm

I won't go over the intricacies of RPM building since there is plenty of information available on the
Fedora project's wiki. Once you have the packages ready, you'll want to install them on your controller and all of your compute nodes (hypervisors). Open vSwitch replaces a lot of the networking functionality in the kernel, especially the
functionality provided by the bridge module. Because of that, the openvswitch daemon won't start up properly unless you reboot after the install.
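
For completeness, installing the freshly built packages looks roughly like this. The result path assumes mock's default epel-6-x86_64 config, and the openvswitch service name comes from the spec file above; adjust both to your setup:

$ sudo yum localinstall /var/lib/mock/epel-6-x86_64/result/*openvswitch*.x86_64.rpm
$ sudo chkconfig openvswitch on
$ sudo reboot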

When you're up and running again and the openvswitch service is running, you will need to create two bridges: one for external access, and one providing internal networking for your VMs. The nice thing
is that OVS supports the standard network scripts (to an extent) available on EL-based distros. For example:

# ifcfg-br-ext
DEVICE=br-ext
ONBOOT=yes
BOOTPROTO=static
DEVICETYPE=ovs
IPADDR=10.10.20.30
NETMASK=255.255.254.0
TYPE=OVSBridge
GATEWAY=10.10.20.1
# ifcfg-em1
DEVICE=em1
ONBOOT=yes
BOOTPROTO=none
DEVICETYPE=ovs
HWADDR=FF:FF:FF:FF:FF:FF
TYPE=OVSPort
OVS_BRIDGE=br-ext
# ifcfg-br-int
DEVICE=br-int
ONBOOT=yes
BOOTPROTO=none
DEVICETYPE=ovs
TYPE=OVSBridge

Perform a network service restart, and you should be in a pretty good position.
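On an EL6 box, that's just the stock initscript:

# service network restart

Once the network comes back up, you should see a topology similar to the following: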

# ovs-vsctl show
1b0f63cf-d024-488e-872e-09a4389a067c
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ext
        Port br-ext
            Interface br-ext
                type: internal
        Port "em1"
            Interface "em1"
    ovs_version: "1.7.3"

You should get your controller and compute nodes to this point before continuing, since this is probably the most difficult part.
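
As a quick sanity check, make sure the address you put on br-ext can still reach its gateway (10.10.20.1 in my ifcfg example above; substitute your own):

$ ping -c 3 10.10.20.1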


Quantum configuration


Getting Quantum going takes some thought and configuration on both your network gear and your compute/controller nodes. In my case, I'm trunking a set of VLANs down to my compute nodes and adding the gateway IPs to my core network. I'll be using the LibvirtOpenVswitchDriver driver
for Nova, which does not support security groups or other security filtering features. This is fine for my internal cluster, but if you're a service provider you probably want something else. I'll mention now that you will need the OVS
bridge compatibility daemon if you plan to use CentOS 6.3 and the LibvirtHybridOVSBridgeDriver.

You will want to follow the Quantum Admin Guide to get a bare-minimum configuration going.

I'll point out a couple of things that were not obvious to me. The first is that you want to install the openstack-quantum-openvswitch package, which provides the Python bits that bridge the Quantum API
to actual OVS commands/configuration. Next, you want to ensure the Quantum plugin configuration file is correct; mine looks like this:

# cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://username:password@dbhost:3306/ovs_quantum
reconnect_interval = 2
[OVS]
tenant_network_type = vlan
network_vlan_ranges = openstack:1001:2000
bridge_mappings = openstack:br-ext
[AGENT]
polling_interval = 2
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf

In this case I'll be allowing VLAN IDs 1001 to 2000 to attach to a named network called openstack. This name will be used when adding networks through the Quantum API later, so keep that in mind. The next important piece is bridge_mappings, which maps your
named network to a bridge that can handle these VLANs. This file needs to be the same on your controller and your compute nodes, all of which play an important part in managing your network.

The second least obvious part is the init script for quantum-server. It passes --config /etc/quantum/plugin.ini, which doesn't exist by default with the package. I simply
symlink this to the plugin I'm using, which happens to be the OVS plugin:

$ ln -s /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini /etc/quantum/plugin.ini
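
With the symlink in place, restart the server and the OVS agent so they pick up the plugin configuration. With the EPEL packaging the init scripts should be named roughly as follows, but verify against what your packages actually installed:

# service quantum-server restart              # on the controller
# service quantum-openvswitch-agent restart   # on the controller and every compute node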

l3_agent and dhcp_agent configuration


Generally the l3_agent and dhcp_agent will run on your controller node. CentOS 6.3 does not ship a version of the iproute2 utilities that supports network namespaces. The kernel
does support them, but not the userland utilities, which Quantum relies on to configure certain aspects. For things to work properly, you need to disable namespace support:

use_namespaces = False

This should exist in both agents' configuration files (/etc/quantum/l3_agent.ini and /etc/quantum/dhcp_agent.ini respectively). You will also want to ensure a proper external_network_bridge value
for the l3_agent (br-ext in my example bridges above). Once you have the two agents running on your controller, you should be in great shape. However, the dnsmasq command
used by the dhcp_agent is incorrect for the version of dnsmasq included with CentOS 6.3. I've filed a
bug report with a patch that you can apply to get things going.
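
Putting that together, the relevant lines from my two agent configs boil down to:

# /etc/quantum/l3_agent.ini
use_namespaces = False
external_network_bridge = br-ext

# /etc/quantum/dhcp_agent.ini
use_namespaces = False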


libvirt configuration changes


At a bare minimum, your libvirt qemu.conf file needs to include the following:

vnc_listen = "0.0.0.0"
user = "root"
group = "root"
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
]
clear_emulator_capabilities = 0

More information on this can be found on the libvirt
API page. The reason we need this is the libvirt driver we will be using for Nova. If you aren't using LibvirtOpenVswitchDriver you likely won't need these changes, but there are probably other libvirt changes you
will need.
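
For reference, wiring that driver into Nova looks roughly like this in nova.conf. These are the Folsom-era flag names as I understand them (they were renamed in later releases), so double-check against your version:

# /etc/nova/nova.conf (excerpt)
network_api_class=nova.network.quantumv2.api.API
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver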


Network configuration


Juniper


As I mentioned before, you need to configure both your networking hardware and Quantum to get things going. I'm running Juniper gear, so here's an example of setting up a VLAN with a gateway:

set interfaces vlan unit 1002 family inet address 10.10.10.1/24
set vlans openstack-net1 vlan-id 1002
set vlans openstack-net1 l3-interface vlan.1002
commit

This is on my core network switch, so you will need to make sure the VLAN is trunked down to your access switches, i.e. the one or many that your compute nodes are connected to. On the access switches you can do something like this:

set vlans openstack-net1 vlan-id 1002
set groups openstack-configuration interfaces <*> ether-options link-mode full-duplex
set groups openstack-configuration interfaces <*> ether-options speed 1g
set groups openstack-configuration interfaces <*> unit 0 family ethernet-switching port-mode trunk
set groups openstack-configuration interfaces <*> unit 0 family ethernet-switching vlan members openstack-net1
set groups openstack-configuration interfaces <*> unit 0 family ethernet-switching native-vlan-id 80
set interfaces ge-0/0/10 apply-groups openstack-configuration
set interfaces ge-0/0/11 apply-groups openstack-configuration
commit

An important thing to note here is that I'm setting the native-vlan-id to 80, which is the VLAN ID that the IP address I assigned to br-ext on my compute nodes can communicate on. This is essentially the same as assigning the port to VLAN
ID 80 in access mode, since any untagged packets will land there. My openstack-net1 VLAN is trunked down to ports ge-0/0/10 and ge-0/0/11, which
means I can tag packets from my compute nodes onto VLAN ID 1002.


Quantum


From a node with the quantum CLI tool installed, you can finally add a new network that will use the VLAN ID 1002.

$ quantum net-create --shared openstack-net1 --provider:network_type vlan \
--provider:physical_network openstack --provider:segmentation_id 1002

See the docs for specifics, but in this case we're creating a network that is shared amongst tenants. The various --provider flags are specific to the OVS plugin. Here we are saying the network_type is
vlan (the OVS plugin also supports GRE-based networks), that we want to use the named network openstack (remember we mapped it to br-ext earlier?), and that our VLAN ID is 1002. I don't know why they chose to call it segmentation_id here;
it's a bit confusing, but to each his own.
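
To make the network actually usable, you'd normally also attach a subnet matching the gateway we configured on the Juniper side. Something along these lines, though check quantum help subnet-create for the exact flags your client version supports:

$ quantum subnet-create --name openstack-subnet1 --gateway 10.10.10.1 \
    openstack-net1 10.10.10.0/24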


Conclusion


Getting OpenStack Quantum running on a Folsom-based cluster using CentOS 6.3 is challenging, especially since the docs mostly assume you're using Ubuntu or running devstack. It is possible to do; you just have to work through problems one at a time and
use configuration management to ensure consistency in configuration. Hopefully the couple of bug reports I mentioned throughout will get addressed in a timely manner so future installs will be easier. If you have any questions, I'm generally around in the #openstack channel
on freenode and can probably help out to an extent. I'd also like to point out that there are a bunch of great people there who contributed to my own understanding of the Quantum network service.