[Experience sharing] Building HA Load Balancer with HAProxy and keepalived

Posted on 2015-11-20 02:34:34



In this tutorial I'll demonstrate how to build a simple yet scalable, highly available HTTP load balancer using HAProxy [1] and keepalived [2]. Later I'll show how to front HAProxy with Pound [5] to implement SSL termination and redirect insecure connections from port 80 to 443.


Let's assume we have two servers, LB1 and LB2, that will host HAProxy and will be made highly available through the VRRP protocol [3] as implemented by keepalived. LB1 will have the IP address 192.168.29.129 and LB2 the IP address 192.168.29.130. HAProxy will listen on the "shared/floating" IP address 192.168.29.100, which will be raised on the active LB1. If LB1 fails, that IP will be moved and raised on LB2 with the help of keepalived.
We are also going to have two back-end nodes running Apache - WEB1 at 192.168.29.131 and WEB2 at 192.168.29.132 - that will receive traffic from HAProxy using the round-robin load-balancing algorithm.
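The round-robin distribution described above can be sketched in a few lines of Python. This is only an illustration of the balancing policy, not HAProxy's implementation; the addresses mirror the example topology:

```python
from itertools import cycle

# Backends from the example topology: WEB1 and WEB2.
backends = ["192.168.29.131", "192.168.29.132"]

# Round-robin: each new request goes to the next server in turn.
rr = cycle(backends)

first_four = [next(rr) for _ in range(4)]
print(first_four)  # alternates between WEB1 and WEB2
```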


First let's install keepalived on both LB1 and LB2. We can either get it from the EPEL repo, or install it from source.




[root@lb1 ~] rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
[root@lb1 ~] yum install keepalived


Edit the configuration file on both servers; the two files are identical except for the state and priority parameters:




[root@lb1 ~] vi /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {
     script "killall -0 haproxy"
     interval 2
     weight 2
}

vrrp_instance VI_1 {
     interface eth0
     state MASTER          # MASTER on master, BACKUP on backup
     virtual_router_id 51
     priority 101          # 101 on master, 100 on backup
     virtual_ipaddress {
          192.168.29.100
     }
     track_script {
          chk_haproxy
     }
}
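The interplay of priority, weight, and track_script decides which node holds the VIP: while chk_haproxy succeeds, each node adds the script's weight (2) to its base priority, and the node advertising the highest effective priority wins the VRRP election. A rough sketch of that arithmetic, using the values from the config above (the real daemon is of course more involved):

```python
def effective_priority(base, haproxy_alive, weight=2):
    """Priority the node advertises: base + weight while the tracked script succeeds."""
    return base + weight if haproxy_alive else base

# Both haproxy processes healthy: master (101+2) beats backup (100+2).
assert effective_priority(101, True) > effective_priority(100, True)

# haproxy dies on the master: 101 < 102, so the VIP fails over to the backup.
assert effective_priority(101, False) < effective_priority(100, True)
```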


Add the following firewall rule to /etc/sysconfig/iptables on both LBs:

-A INPUT -p vrrp -j ACCEPT

then run "service iptables restart" on both LBs.





Save the config on both servers and start keepalived:




[root@lb1 ~] /etc/init.d/keepalived start


Now that keepalived is running check that LB1 has raised 192.168.29.100:




[root@lb1 ~] ip addr show | grep 192.168.29.100
inet 192.168.29.100/32 scope global eth0


You can test whether the IP moves from LB1 to LB2 by failing LB1 (shut it down or bring its network down) and running the above command on LB2.


Now that we have high availability of the IP resource we can install HAProxy on LB1 and LB2:




[root@lb1 ~] yum install haproxy


Edit the configuration file, and start HAProxy:




[root@lb1 ~] vi /etc/haproxy/haproxy.cfg

global
        log 127.0.0.1   local7 info
        maxconn 4096
        user haproxy
        group haproxy
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen webfarm 192.168.29.100:80
      mode http
      balance roundrobin
      cookie JSESSIONID prefix
      option httpclose
      option forwardfor
      option httpchk HEAD /index.html HTTP/1.0
      server webA webserver1.example.net:80 cookie A check
      server webB webserver2.example.net:80 cookie B check

[root@lb1 ~] vi /etc/default/haproxy

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here.
#EXTRAOPTS="-de -m 16"
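The "option httpchk HEAD /index.html HTTP/1.0" line makes HAProxy probe each backend with a lightweight HEAD request and mark the server down if it stops answering with a success status. The probe amounts to something like the sketch below, where a local stand-in server replaces the real WEB nodes (this is an illustration of the mechanism, not HAProxy's code):

```python
import http.client
import http.server
import threading

# Stand-in for a WEB node: a local HTTP server on an ephemeral port.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def backend_is_healthy(host, port, path="/"):
    """Rough equivalent of httpchk: HEAD the path, accept 2xx/3xx answers."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=2)
        conn.request("HEAD", path)
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 400
    except OSError:
        return False

healthy = backend_is_healthy("127.0.0.1", server.server_port)
print(healthy)  # True while the stand-in server is up
server.shutdown()
```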


This is a very simple configuration that does HTTP load balancing with cookie prefixing. This is how it works:


- LB1 is the VRRP master (keepalived), LB2 is the backup. Both monitor the haproxy process and lower their priority if it fails, leading to a failover to the other node.
- LB1 will receive client requests on IP 192.168.29.100.
- Both load balancers send their health checks from their native IP.
- If a request does not contain a cookie, it will be forwarded to a valid server.
- In return, if a JSESSIONID cookie is seen, the server name will be prefixed into it, followed by a delimiter ('~').
- When the client comes back with the cookie "JSESSIONID=A~xxx", LB1 will know that the request must be forwarded to server A. The server name is then stripped from the cookie before it is sent to the server.
- If server "webA" dies, requests will be sent to another valid server and a new cookie will be assigned.
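The cookie handling in "cookie JSESSIONID prefix" mode boils down to two small transformations, sketched below (an illustration of the mechanism, not HAProxy's code):

```python
DELIMITER = "~"

def prefix_cookie(value, server):
    """On the way to the client: prepend the server name, e.g. 'xxx' -> 'A~xxx'."""
    return f"{server}{DELIMITER}{value}"

def strip_cookie(value):
    """On the way to a server: extract the server name and restore the bare cookie."""
    if DELIMITER in value:
        server, bare = value.split(DELIMITER, 1)
        return server, bare
    return None, value  # no prefix yet: pick a server round-robin

# Server webA issues JSESSIONID=xxx; the proxy rewrites it to A~xxx.
assert prefix_cookie("xxx", "A") == "A~xxx"
# The next request carries JSESSIONID=A~xxx: route to A, send plain xxx upstream.
assert strip_cookie("A~xxx") == ("A", "xxx")
```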


For more information and examples see [4].


Add the following to /etc/sysctl.conf on both LBs, so that HAProxy can bind to the floating IP even on the node where it is not currently raised:

net.ipv4.ip_nonlocal_bind=1

then run "sysctl -p" on both LBs.



Let's start HAProxy on both LBs:




[root@lb1 ~] /etc/init.d/haproxy start


On the back-end apache nodes create a simple index.html like so:




[root@web1 ~] cat /var/www/html/index.html
This is Web Node 1

[root@web2 ~] cat /var/www/html/index.html
This is Web Node 2


Now hit 192.168.29.100 in your browser and refresh a few times. You should see both nodes rotating in round-robin fashion.
Also test the HA setup by failing one of the LB servers and making sure you always get a response back from the back-end nodes. Do the same for the back-end nodes.


To send logs from HAProxy to syslog-ng add the following lines to the syslog-ng config file:





[root@logserver ~] vi /etc/syslog-ng/syslog-ng.conf

source s_all {
    udp(ip(127.0.0.1) port(514));
};

destination df_haproxy { file("/var/log/haproxy.log"); };

filter f_haproxy { facility(local7); };

log { source(s_all); filter(f_haproxy); destination(df_haproxy); };
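HAProxy logs with facility local7, which is exactly what the f_haproxy filter matches on. A syslog datagram starts with a "<PRI>" field computed as facility * 8 + severity; the sketch below builds such a message and fires it at the UDP source configured above (nothing needs to be listening for a UDP send to succeed):

```python
import socket

FACILITY_LOCAL7 = 23  # RFC 3164 facility code for local7
SEVERITY_INFO = 6

def syslog_packet(facility, severity, msg):
    """Minimal RFC 3164-style payload: <PRI> followed by the message text."""
    pri = facility * 8 + severity
    return f"<{pri}>{msg}".encode()

packet = syslog_packet(FACILITY_LOCAL7, SEVERITY_INFO, "haproxy test message")
print(packet)  # b'<190>haproxy test message'

# Send it to the syslog-ng UDP source on 127.0.0.1:514.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 514))
sock.close()
```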


We can use Pound, a reverse proxy that supports SSL termination, to listen for SSL connections on port 443 and terminate them using a local certificate. Pound will then insert a header, "X-Forwarded-Proto: https", into each HTTP request; HAProxy will look for this header and, if it is absent, redirect the insecure connection to port 443.

Installing Pound is straightforward and can be done from a package or from source. Once installed, the config file should look like this:



[root@lb1 ~] cat /etc/pound/pound.cfg

User            "www-data"
Group           "www-data"

LogLevel        3

## check backend every X secs:
Alive           5

Control "/var/run/pound/poundctl.socket"

ListenHTTPS
        Address 192.168.29.100
        Port    443
        AddHeader "X-Forwarded-Proto: https"
        Cert    "/etc/ssl/local.server.pem"

        xHTTP           0

        Service
               BackEnd
                       Address 192.168.29.100
                       Port    80
               End
        End
End

[root@lb1 ~] /etc/init.d/pound start
Pound will now listen on port 443 for secure connections, terminate them using the local.server.pem certificate, then insert the "X-Forwarded-Proto: https" header into the HTTP request and forward it to HAProxy, which is running and listening on port 80 on the same host.

To make HAProxy redirect all insecure connections from port 80 to port 443, all we need to do is create an access list that looks for the header Pound inserts and, if it is missing, redirect the HTTP connection to Pound (listening on port 443).

The new config needs to look like this:



[root@lb1 ~] cat /etc/haproxy/haproxy.cfg

global
        log 127.0.0.1   local7 info
        maxconn 4096
        user haproxy
        group haproxy
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen webfarm 192.168.29.100:80
      mode http
      balance roundrobin
      cookie JSESSIONID prefix
      option httpclose
      option forwardfor
      option httpchk HEAD /index.html HTTP/1.0
      acl x_proto hdr(X-Forwarded-Proto) -i https
      redirect location https://192.168.29.100/ if !x_proto
      server webA webserver1.example.net:80 cookie A check
      server webB webserver2.example.net:80 cookie B check
The two new lines (acl and redirect) create an access list that looks for (case-insensitively, -i) the https string in the X-Forwarded-Proto header. If the string is not there (meaning the connection came in on port 80, hitting HAProxy directly), the client is redirected to the secure SSL port 443 that Pound is listening on. This ensures that each time a client hits port 80 the connection is redirected to port 443 and secured; clients that connect directly to port 443 are served as before.
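The decision those two lines encode is simple; here is a sketch of the logic (an illustration, not HAProxy internals):

```python
def needs_redirect(headers):
    """Redirect unless the request already carries X-Forwarded-Proto: https
    (matched case-insensitively, like the acl's -i flag)."""
    value = next((v for k, v in headers.items()
                  if k.lower() == "x-forwarded-proto"), "")
    return value.lower() != "https"

# Direct hit on port 80: Pound never saw it, header absent -> redirect to 443.
assert needs_redirect({"Host": "192.168.29.100"}) is True
# Came through Pound on 443: header present -> pass to the webfarm backends.
assert needs_redirect({"X-Forwarded-Proto": "HTTPS"}) is False
```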

To generate a self-signed cert to use in Pound run this:



[root@lb1 ~] openssl req -x509 -newkey rsa:1024 -keyout local.server.pem -out local.server.pem -days 365 -nodes


Resources:


[1] http://haproxy.1wt.eu/
[2] http://www.keepalived.org/
[3] http://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol
[4] http://haproxy.1wt.eu/download/1.2/doc/architecture.txt
[5] http://www.apsis.ch/pound/
