Load-Balancer-as-a-Service (LBaaS)
The LBaaS extension enables OpenStack tenants to load-balance their VM traffic.
The extension enables you to:
· Load-balance client traffic from one network to application services, such as VMs, on the same or a different network.
· Load-balance several protocols, such as TCP and HTTP.
· Monitor the health of application services.
· Support session persistence.
1 Concepts
To use OpenStack LBaaS APIs effectively, you should understand several key concepts:
Load Balancer
The primary load-balancing configuration object. Specifies the virtual IP address where client traffic is received. A load balancer is a logical device; it is used to distribute workloads between multiple back-end systems or services, called nodes, based on the criteria defined as part of its configuration.
VIP
A VIP is the primary load-balancing configuration object that specifies the virtual IP address and port on which client traffic is received, as well as other details such as the load-balancing method to be used, the protocol, and so on. This entity is sometimes known in load-balancing products as a "virtual server", a "vserver", or a "listener".
Note
In the traditional load-balancing vernacular, the term 'VIP' is used to denote the IP address to which clients connect. As it is defined in this document, that IP address is only one attribute of the VIP object.
A virtual IP is an Internet Protocol (IP) address configured on the load balancer for use by clients connecting to a service that is load balanced. Incoming connections and requests are distributed to back-end nodes based on the configuration of the load balancer.
Listener
Represents a single listening port. Defines the protocol and can optionally provide TLS termination.
Pool
A load-balancing pool is a logical set of devices, such as web servers, that you group together to receive and process traffic. The load-balancing function chooses a member of the pool according to the configured load-balancing method to handle the new requests or connections received on the VIP address. There is only one pool for a VIP.
Pool Member
A pool member represents the application running on a back-end server.
Health Monitoring
A health monitor is used to determine whether or not back-end members of the VIP's pool are usable for processing a request. A pool can have several health monitors associated with it.
There are different types of health monitors supported by the OpenStack LBaaS service:
PING
Used to ping the members using ICMP.
TCP
Used to connect to the members using TCP.
HTTP
Used to send an HTTP request to the member.
HTTPS
Used to send a secure HTTP request to the member.
When a pool has several monitors associated with it, each member of the pool is monitored by all of them. If any monitor declares the member unhealthy, the member status is changed to INACTIVE and the member no longer participates in its pool's load balancing. In other words, ALL monitors must declare the member to be healthy for it to stay ACTIVE.
Session Persistence
Session persistence is a feature of the load-balancing service. It attempts to force connections or requests in the same session to be processed by the same member as long as that member is active.
The OpenStack LBaaS service supports three types of persistence:
SOURCE_IP
With this persistence mode, all connections originating from the same source IP address will be handled by the same member of the pool.
HTTP_COOKIE
With this persistence mode, the load-balancing function will create a cookie on the first request from a client. Subsequent requests containing the same cookie value will be handled by the same member of the pool.
APP_COOKIE
With this persistence mode, the load-balancing function will rely on a cookie established by the back-end application. All requests carrying the same cookie value will be handled by the same member of the pool.
LBaaS supports session persistence by ensuring incoming requests are routed to the same instance within a pool of multiple instances.
Connection Limits
To control incoming traffic on the VIP address, as well as traffic for each member of a pool, you can set a connection limit on the VIP or on the pool, beyond which the load-balancing function refuses client requests or connections. This can be used to thwart DoS attacks and to allow each member of the pool to continue to work within its limits.
For the HTTP and HTTPS protocols, since several HTTP requests can be multiplexed on the same TCP connection, the connection-limit value is interpreted as the maximum number of requests allowed.
Ingress traffic can be shaped with connection limits. This feature allows workload control, and can also assist with mitigating DoS (Denial of Service) attacks. A short CLI sketch below shows how a monitor, a connection limit, and session persistence can be applied.
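To make these settings concrete, here is a minimal CLI sketch, assuming the LBaaS v1 neutron lb-* commands; the names web_pool and web_vip, the <monitor-id>, and the <subnet-id> are placeholders, and option spellings can vary between client releases:
$ neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3 --url-path /
$ neutron lb-healthmonitor-associate <monitor-id> web_pool
$ neutron lb-vip-create --name web_vip --protocol HTTP --protocol-port 80 --subnet-id <subnet-id> --connection-limit 1000 web_pool
Session persistence is likewise an attribute of the VIP (its session_persistence field, for example type HTTP_COOKIE), although how it is set from the CLI depends on the client version.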
2 High-Level Task Flow
The high-level task flow for using the LBaaS API to configure load balancing is as follows:
· The tenant creates a Pool, which is initially empty.
· The tenant creates one or several Members in the Pool.
· The tenant creates one or several Health Monitors.
· The tenant associates the Health Monitors with the Pool.
· The tenant finally creates a VIP with the Pool.
To use the LBaaS extension to configure load balancing, follow these steps; a CLI sketch of the same flow appears after the list.
1. Create a pool, which is initially empty.
2. Create one or more members in the pool.
3. Create a health monitor.
4. Associate the health monitor with the pool.
5. Create a load balancer object.
6. Create a listener.
7. Associate the listener with the load balancer.
8. Associate the pool with the listener.
9. Optional. If you use HTTPS termination, complete these tasks:
a. Add the TLS certificate, key, and optional chain to Barbican.
b. Associate the Barbican container with the listener.
10. Optional. If you use layer-7 HTTP switching, complete these tasks:
a. Create any additional pools, members, and health monitors that are used as non-default pools.
b. Create a layer-7 policy that associates the listener with the non-default pool.
c. Create rules for the layer-7 policy that describe the logic that selects the non-default pool for servicing some requests.
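A minimal sketch of this flow using the LBaaS v2 neutron lbaas-* CLI; the names lb1, listener1, and web_pool and the <subnet> are placeholders, the CLI requires the load balancer and listener to exist before a pool can be attached to them, and option spellings can vary between client releases:
$ neutron lbaas-loadbalancer-create --name lb1 <subnet>
$ neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
$ neutron lbaas-pool-create --name web_pool --listener listener1 --protocol HTTP --lb-algorithm ROUND_ROBIN
$ neutron lbaas-member-create --subnet <subnet> --address 10.0.0.3 --protocol-port 8080 web_pool
$ neutron lbaas-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3 --pool web_pool
For HTTPS termination, the certificate, key, and optional chain go into a Barbican container that is referenced when creating a listener with the TERMINATED_HTTPS protocol; layer-7 switching is configured with the lbaas-l7policy-create and lbaas-l7rule-create commands against the non-default pools.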
Sources for the two articles that follow:
http://blog.iyunv.com/lynn_kong/article/details/8528512
https://www.ustack.com/blog/neutron_loadbalance/
A Preliminary Analysis of LoadBalancer in Grizzly
Author: @孔令贤HW (Sina Weibo), http://blog.iyunv.com/lynn_kong
In the Grizzly release, the Quantum component introduces a new network service: LoadBalancer (LBaaS). The service architecture follows the ServiceInsertion framework (also introduced in Grizzly). LoadBalancer provides tenants with load balancing of traffic to a group of virtual machines, similar to Amazon's ELB. Yesterday (2013.1.20) Grizzly_2 was released, implementing 10 blueprints and fixing 82 bugs. Having skimmed the code, the updates I can identify so far are as follows:
1. Adds the servicetype extension (a prerequisite for serviceinsertion), which represents a type of network service (LB, FW, VPN, and so on); for backward compatibility, a default servicetype is created at load time.
2. The security group functionality is ported from Nova to Quantum.
3. Adds the portbinding extension, which adds three attributes to a port: binding:vif_type, binding:host_id, and binding:profile (the last one is Cisco-specific).
4. The Ryu plugin supports the provider extension.
5. Adds the loadbalancer extension to implement the load-balancing functionality.
6. Adds a new Quantum plugin (BigSwitch).
1. Basic Flow
The tenant creates a pool, which initially has no members;
The tenant creates one or more members in the pool;
The tenant creates one or more health monitors;
The tenant associates the health monitors with the pool;
The tenant creates a vip with the pool.
2. Concepts
· VIP
A VIP can be thought of as a load balancer with a virtual IP address and a designated port, plus other attributes such as the balancing algorithm and the protocol.
· Pool
A pool represents a logical group of devices (usually homogeneous), such as web servers. The load-balancing algorithm selects a member of the pool to receive the traffic or connections entering the system. Currently one VIP corresponds to one Pool.
· Pool member
Represents a single application server on the back end.
· Health monitor
A health monitor is used to check the status of the members in a pool. One pool can be associated with multiple health monitors. There are four types: PING, TCP, HTTP, and HTTPS; each type simply uses the corresponding protocol to check the members.
· Session Persistence
This is what is commonly called "session stickiness": it governs how connections or requests belonging to the same session are forwarded. Three types are currently supported:
SOURCE_IP: connection requests coming from the same IP address are handled by the same member;
HTTP_COOKIE: in this mode, the load balancer generates a cookie on the client's first connection, and subsequent requests carrying that cookie are handled by the same member;
APP_COOKIE: in this mode, a cookie generated by the back-end application server determines which member handles the request.
· Connection Limits
This feature is mainly used to defend against DoS attacks.
Object relationship model: (see the diagram in the original post)
3. Architecture Diagram
As of 2013.1.22, the Grizzly_2 release only implements the LBaaS Plugin part; the LBaaS Agent and Scheduler/DeviceManagement are still under development.
As the figure above shows, the LBaaS architecture is essentially the same as that of the Quantum Plugin, separating the upper-layer logical concepts from the underlying implementation. The main modules are:
1. LBaaS Quantum Extension: handles the REST API calls.
2. LBaaS Quantum AdvSvcPlugin: one of the core plugins. Quantum in the Folsom release supported only one plugin, but once ServiceInsertion is implemented, different plugins for multiple services can coexist.
3. LBaaS Agent: like the Quantum Agent, an independent process deployed on each compute node.
4. Driver: talks to the actual device and realizes the logical model.
4. Scheduler/DeviceManagement
Scheduler/DeviceManagement is a separate Quantum component whose functionality covers two main areas:
1. Realizing the logical model of a service.
2. Providing tenant-facing extension APIs for advanced services (Advanced Services) such as load balancers, firewalls, and VPN gateways.
Taking the creation of a Pool as an example, the flow is as follows:
The LBaaS Plugin receives the request to create a pool;
The LBaaS Plugin inserts a record into the DB and returns pool_id;
The LBaaS Plugin calls the Scheduler, passing parameters such as service_type, pool_id, and pool;
The Scheduler selects a device that satisfies the constraints, stores the device-to-pool mapping, and returns device_info;
The Agent calls the Driver, and the Driver attaches the device to the tenant network, realizing the logical model;
The Agent notifies LBaaS to update the Pool's status in the DB.
Load Balancing in Neutron Networking
gong, yongsheng | 2013.10.08
Load balancing has gone through several big changes since it landed in Neutron in the Grizzly release. Grizzly implemented the API model and a Haproxy-based reference implementation; Havana added scheduling across multiple agents and refactored the code in a service-oriented way. Of course, since service chaining is not yet fully implemented in Neutron, we cannot yet see how load balancing, as a network service, could be dynamically inserted into a virtual machine's network path. This post tries to show how to use the current load-balancing features of Neutron, along with a brief look at the principles behind them.
Network Planning
In "Introduction to Neutron Networking" we mentioned that some network planning is needed when using Neutron. Here we discuss the network layout:
As shown in the figure above, we design three networks: the load-balancing devices are deployed in the load-balancer network, and the back-end servers are deployed in the server network. The clients are in the office network. We restrict the servers so that they can only be reached by the load balancer, and users in the office network can only access the services provided by the servers through the load balancer. On each server we install Tomcat, listening on port 8080.
Creating the Networks
First, following the steps in "Introduction to Neutron Networking", we build a Neutron network equivalent to the plan above:
As shown in the figure above, the server network uses the address range 10.0.0.0/24, the load-balancer network uses 192.168.40.0/24, and the public network, which connects to the office network, uses 192.168.10.224/28. A router connects all three networks. The public network and the router are connected through the router's "gateway arm" (the router gateway in the Neutron API). The router SNATs the IP addresses of the server network and the load-balancer network to the address of its gateway arm on the public network, so that they can reach addresses in the office network. However, to reach the server network or the load-balancer network from the office network, we also need floating IPs. A CLI sketch of an equivalent network setup follows.
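A rough CLI sketch of an equivalent topology, assuming the public network and its subnet already exist and that the names server_net, lb_net, and lb_router are placeholders:
$ neutron net-create server_net
$ neutron subnet-create --name server_subnet server_net 10.0.0.0/24
$ neutron net-create lb_net
$ neutron subnet-create --name lb_subnet lb_net 192.168.40.0/24
$ neutron router-create lb_router
$ neutron router-interface-add lb_router server_subnet
$ neutron router-interface-add lb_router lb_subnet
$ neutron router-gateway-set lb_router public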
Before actually operating the load balancer, we need to understand its data model.
Load Balancer Data Model
As shown in the figure above, the data model consists of four objects. At the core is the Pool (I am inclined to call it the load balancer), which represents a load balancer. A load balancer owns one VIP, a virtual IP. The "virtual" in virtual IP is relative to the Members behind it; that is, the VIP is not fixed to any particular Member. When a user accesses the VIP, sometimes one member serves the request and sometimes another. Members are the back-end servers that actually provide the service. The HealthMonitor is used to monitor and check the reachability of the back-end servers; when a server is found to be unusable, the load balancer stops using it to serve users.
Creating and Configuring the Load Balancer
We assume that the networks, the router, and the two virtual machines running Tomcat have already been created.
Configuring the load balancer follows these steps (a CLI sketch of the equivalent commands appears after the list):
Create a pool.
Set up the VIP.
Add Members.
Set up the health monitor.
Configure a Floating IP.
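The same objects can also be created from the CLI instead of Horizon. A minimal sketch using the LBaaS v1 neutron lb-* commands with values matching this walkthrough; the names tomcat_lb and tomcat_vip, the <lb-subnet-id>, and the <monitor-id> are placeholders, and option spellings can vary between client releases:
$ neutron lb-pool-create --name tomcat_lb --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <lb-subnet-id>
$ neutron lb-vip-create --name tomcat_vip --protocol HTTP --protocol-port 8080 --subnet-id <lb-subnet-id> tomcat_lb
$ neutron lb-member-create --address 10.0.0.3 --protocol-port 8080 tomcat_lb
$ neutron lb-member-create --address 10.0.0.5 --protocol-port 8080 tomcat_lb
$ neutron lb-healthmonitor-create --type HTTP --delay 30 --timeout 5 --max-retries 3 --url-path /
$ neutron lb-healthmonitor-associate <monitor-id> tomcat_lb
The Haproxy configuration shown in the verification section reflects these objects: the two server lines, the 30-second check interval (inter 30s, fall 3), the option httpchk GET / health check, and, for a VIP configured with HTTP_COOKIE persistence, the cookie SRV lines.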
Below we demonstrate how to create the load balancer in Horizon.
Creating a pool
1. Log in as the demo user and click "Load Balancers". In the right-hand area of the screen we can see three tabs: Pools, Members, and Monitors, as shown in the figure below:
2. Click "Add Pool"; the "Add Pool" window pops up. Fill in the appropriate values:
Click "Add" to close the pop-up window.
Setting the VIP
As shown in the figure below, click "Add VIP" under the "More" button:
In the "Add VIP" pop-up window, fill in the required information:
Adding Members
Click the "Add Member" button on the Members tab:
In the "Add Member" dialog that pops up, fill in the following information:
Click "Add" to complete the Member setup.
Setting the health monitor
Click the "Add Monitor" button on the Monitors tab:
In the "Add Monitor" pop-up window, fill in the following information:
Click "Add" to finish.
Next we need to associate the Monitor we just created with our load balancer. Click the "Add Health Monitor" item in the "More" button menu of "tomcat_lb", as shown in the figure below:
In the "Add association" dialog that pops up, select the monitor we just created:
Click "Add" to complete the binding.
Adding a Floating IP
As mentioned, a floating IP is required for the external network to reach the load balancer. Unfortunately, Horizon does not provide a UI for binding a Floating IP to a VIP, but we can do it from the command line.
$ neutron port-list -F id -F name | grep vip
| 7f9a7278-a9d5-4b50-b318-916b702e2e76 | vip-3a1be80c-70e0-4ae5-9bf2-25ca18ae9bae |
In the listing above, "7f9a7278-a9d5-4b50-b318-916b702e2e76" is the id of the port that holds the VIP address.
$ neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.10.11 |
| floating_network_id | eeb6e13a-f0fb-44b7-b895-b51e3fe32269 |
| id | 565da56d-0ca1-4bb8-8ee7-5284bd7633bd |
| port_id | |
| router_id | |
| tenant_id | cb103c485b3a4f7f947f798cc93e45b4 |
+---------------------+--------------------------------------+
$ neutron floatingip-associate 565da56d-0ca1-4bb8-8ee7-5284bd7633bd 7f9a7278-a9d5-4b50-b318-916b702e2e76
Associated floatingip 565da56d-0ca1-4bb8-8ee7-5284bd7633bd
Verification
How do we verify this? Let us first look at part of how the Neutron load-balancer reference implementation works.
The open-source reference implementation Neutron currently provides is based on Haproxy. We can use the following command to see the startup parameters of the haproxy process:
$ ps -ef | grep haprox[y]
nobody 22602 1 0 14:01 ? 00:00:00 haproxy -f /opt/stack/data/neutron/lbaas/d7690d6d-9d8a-49b0-918c-08f91456f403/conf -p /opt/stack/data/neutron/lbaas/d7690d6d-9d8a-49b0-918c-08f91456f403/pid -sf 17471
gongysh 27892 5445 0 09:16 pts/15 00:00:27 python /usr/local/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini
From the output above we can see that the Haproxy configuration file is /opt/stack/data/neutron/lbaas/d7690d6d-9d8a-49b0-918c-08f91456f403/conf. Let's look at its contents:
global
daemon
user nobody
group nogroup
log /dev/log local0
log /dev/log local1 notice
stats socket /opt/stack/data/neutron/lbaas/d7690d6d-9d8a-49b0-918c-08f91456f403/sock mode 0666 level user
defaults
log global
retries 3
option redispatch
timeout connect 5000
timeout client 50000
timeout server 50000
frontend 3a1be80c-70e0-4ae5-9bf2-25ca18ae9bae
option tcplog
bind 192.168.40.3:8080
mode http
default_backend d7690d6d-9d8a-49b0-918c-08f91456f403
option forwardfor
backend d7690d6d-9d8a-49b0-918c-08f91456f403
mode http
balance roundrobin
option forwardfor
timeout check 30s
option httpchk GET /
http-check expect rstatus 200
cookie SRV insert indirect nocache
server 231f1329-8ead-4602-9fcc-260027eb622b 10.0.0.3:8080 weight 100 check inter 30s fall 3 cookie 0
server 6d936b4d-102f-46d7-80fa-4ff1d851deeb 10.0.0.5:8080 weight 100 check inter 30s fall 3 cookie 1
The configuration above shows the following important information:
1. Under "frontend", "bind 192.168.40.3:8080" is our VIP. Under "backend", "cookie SRV insert indirect nocache" together with the "cookie" parameter on each "server" line corresponds to the VIP object's persistence method HTTP_COOKIE; the "cookie" value on a "server" line is what Haproxy uses to remember which back-end server a given client is currently using.
2. Under "backend", "timeout check 30s" and "option httpchk GET /" come from our monitor object;
3. Under "backend", each "server" line represents one of our Member objects;
4. Under "backend", "balance roundrobin" represents the load-balancing method of our Pool object.
During verification we also need to understand the following curl command:
$ curl 192.168.10.11:8080/manager/html --user tomcat:tomcat -D - -o /dev/zero -s
Here 192.168.10.11 is the floating IP bound to our VIP address. "/manager/html" is the manager application that ships with Tomcat, which requires user authentication; tomcat:tomcat is the user and password for that application. "-D -" prints the HTTP response headers, "-o /dev/zero" discards the HTTP response body, and "-s" suppresses curl's progress output.
Combining the Haproxy configuration with the curl command above, our verification goes as follows:
$ curl 192.168.10.11:8080/manager/html --user tomcat:tomcat -D - -o /dev/zero -s
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: private
Expires: Wed, 31 Dec 1969 19:00:00 EST
Set-Cookie: JSESSIONID=EA021AC3A9CD4B42A1A02764491E9297; Path=/manager/; HttpOnly
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Date: Fri, 27 Sep 2013 09:02:05 GMT
Set-Cookie: SRV=0; path=/
$ curl 192.168.10.11:8080/manager/html --user tomcat:tomcat -D - -o /dev/zero -s
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: private
Expires: Wed, 31 Dec 1969 19:00:00 EST
Set-Cookie: JSESSIONID=0D9E1E7FF5F15CFC4F1CE1330ADF726F; Path=/manager/; HttpOnly
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Date: Fri, 27 Sep 2013 09:02:06 GMT
Set-Cookie: SRV=1; path=/
$ curl 192.168.10.11:8080/manager/html --user tomcat:tomcat -D - -o /dev/zero -s
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: private
Expires: Wed, 31 Dec 1969 19:00:00 EST
Set-Cookie: JSESSIONID=7A7A4D865024408591CFB5DD7912204F; Path=/manager/; HttpOnly
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Date: Fri, 27 Sep 2013 09:02:09 GMT
Set-Cookie: SRV=0; path=/
We can see that SRV=0 and SRV=1 alternate; this is Haproxy's roundrobin balancing method at work. If we carry the SRV cookie when accessing the service, and the corresponding server is available, Haproxy dispatches the request as the cookie asks:
$ curl 192.168.10.11:8080/manager/html --user tomcat:tomcat -D - -o /dev/zero -s --cookie "SRV=0"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: private
Expires: Wed, 31 Dec 1969 19:00:00 EST
Set-Cookie: JSESSIONID=30A2865ADE5E44F02EC74D01DDCCDA90; Path=/manager/; HttpOnly
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Date: Fri, 27 Sep 2013 09:11:41 GMT
$ curl 192.168.10.11:8080/manager/html --user tomcat:tomcat -D - -o /dev/zero -s --cookie "SRV=1"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: private
Expires: Wed, 31 Dec 1969 19:00:00 EST
Set-Cookie: JSESSIONID=D043F6F46502631D9DA9D7D657FA07CA; Path=/manager/; HttpOnly
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Date: Fri, 27 Sep 2013 09:11:43 GMT
Since we only have two back-end servers, if we send a cookie value that matches neither of them (SRV=2 in the request below), Haproxy cannot find a corresponding server, so it picks one using the roundrobin method and sets the cookie accordingly.
$ curl 192.168.10.11:8080/manager/html --user tomcat:tomcat -D - -o /dev/zero -s --cookie "SRV=2"
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: private
Expires: Wed, 31 Dec 1969 19:00:00 EST
Set-Cookie: JSESSIONID=B63E30E9D7A4D42ECA1D46AE9D466F39; Path=/manager/; HttpOnly
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Date: Fri, 27 Sep 2013 09:11:48 GMT
Set-Cookie: SRV=0; path=/
Summary
Neutron's load balancing abstracts a load-balancing service model and API. In the reference implementation, the agent generates an Haproxy configuration file and then starts Haproxy. Obviously we cannot expect this model to expose every feature of Haproxy. At present a VIP is still implemented on a single Haproxy node; to get sufficient HA we will eventually need to combine Haproxy with software such as keepalived.
There is also still a gap between the API and Horizon. For example, the API supports TCP-level load balancing, but the Horizon UI does not yet; the Neutron API supports binding a floating IP to a VIP, but this cannot be done from the UI.
Going forward, Neutron should add support for hardware load balancers from the relevant vendors, such as F5.