
Building HA Load Balancer with HAProxy and keepalived




For this tutorial I'll demonstrate how to build a simple yet scalable, highly available HTTP load balancer using HAProxy and keepalived, and later I'll show how to front-end HAProxy with Pound to implement SSL termination and redirect insecure connections from port 80 to 443.


Let's assume we have two servers, LB1 and LB2, that will host HAProxy and will be made highly available through the VRRP protocol as implemented by keepalived. LB1 will have an IP address of 192.168.29.129 and LB2 will have an IP address of 192.168.29.130. HAProxy will listen on the "shared/floating" IP address 192.168.29.100, which will be raised on the active LB1. If LB1 fails, that IP will be moved and raised on LB2 with the help of keepalived.
We are also going to have two back-end nodes running Apache - WEB1 at 192.168.29.131 and WEB2 at 192.168.29.132 - that will receive traffic from HAProxy using the round-robin load-balancing algorithm.


First let's install keepalived on both LB1 and LB2. We can either get it from the EPEL repo, or install it from source.




[root@lb1 ~] rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
[root@lb1 ~] yum install keepalived


Edit the configuration file on both servers; the two files should be identical except for the state and priority parameters:




[root@lb1 ~] vi /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {
      script "killall -0 haproxy"
      interval 2
      weight 2
}

vrrp_instance VI_1 {
      interface eth0
      state MASTER           # MASTER on master, BACKUP on backup
      virtual_router_id 51
      priority 101           # 101 on master, 100 on backup
      virtual_ipaddress {
         192.168.29.100
      }
      track_script {
         chk_haproxy
      }
}


Add the following firewall rule to /etc/sysconfig/iptables on both LBs so that the VRRP advertisements are allowed through:

-A INPUT -p vrrp -j ACCEPT

Then run "service iptables restart" on both LBs; a sketch of where the rule fits is shown below.





Save the config on both servers and start keepalived:




[root@lb1 ~] /etc/init.d/keepalived start


Now that keepalived is running, check that LB1 has raised 192.168.29.100:




[root@lb1 ~] ip addr show | grep 192.168.29.100
inet 192.168.29.100/32 scope global eth0


You can test whether the IP will move from LB1 to LB2 by failing LB1 (shut it down or bring its network down) and running the above command on LB2.
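
A quick way to exercise this without a full shutdown is to stop keepalived on LB1 and watch the address appear on LB2 (just a sketch, using the same check as above):

[root@lb1 ~] /etc/init.d/keepalived stop
[root@lb2 ~] ip addr show | grep 192.168.29.100
# should now show: inet 192.168.29.100/32 scope global eth0
[root@lb1 ~] /etc/init.d/keepalived start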


Now that we have high availability of the IP resource, we can install HAProxy on LB1 and LB2:




[root@lb1 ~] yum install haproxy


Edit the configuration file, and start HAProxy:




[root@lb1 ~] vi /etc/haproxy/haproxy.cfg

global
      log 127.0.0.1   local7 info
      maxconn 4096
      user haproxy
      group haproxy
      daemon
      #debug
      #quiet

defaults
      log     global
      mode    http
      option  httplog
      option  dontlognull
      retries 3
      option redispatch
      maxconn 2000
      contimeout      5000
      clitimeout      50000
      srvtimeout      50000

listen webfarm 192.168.29.100:80
      mode http
      balance roundrobin
      cookie JSESSIONID prefix
      option httpclose
      option forwardfor
      option httpchk HEAD /index.html HTTP/1.0
      server webA webserver1.example.net:80 cookie A check
      server webB webserver2.example.net:80 cookie B check

[root@lb1 ~] vi /etc/default/haproxy

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here.
#EXTRAOPTS="-de -m 16"


This is a very simplistic configuration that uses HTTP load-balancing with cookie prefixing. This is how it works:


- LB1 is the VRRP master (keepalived), LB2 is the backup. Both monitor the haproxy process and lower their priority if it fails, leading to a failover to the other node.
- LB1 will receive client requests on IP 192.168.29.100.
- Both load balancers send their health checks from their native IP.
- If a request does not contain a cookie, it will be forwarded to a valid server.
- In return, if a JSESSIONID cookie is seen, the server name will be prefixed into it, followed by a delimiter ('~').
- When the client comes back with the cookie "JSESSIONID=A~xxx", LB1 will know that the request must be forwarded to server webA. The server name will then be stripped from the cookie before it is sent to the server (see the curl sketch after this list).
- If server "webA" dies, the requests will be sent to another valid server and a cookie will be reassigned.


For more information and examples see the HAProxy architecture guide listed in the Resources section below.


Add the following to /etc/sysctl.conf on both LBs, so that HAProxy can bind to the floating IP even on the node where it is not currently raised:

net.ipv4.ip_nonlocal_bind=1

Run "sysctl -p" on both LBs.



Let's start HAProxy on both LBs:




[root@lb1 ~] /etc/init.d/haproxy start


On the back-end Apache nodes, create a simple index.html like so:




[root@web1 ~] cat /var/www/html/index.html
This is Web Node 1

[root@web2 ~] cat /var/www/html/index.html
This is Web Node 2


Now hit 192.168.29.100 in your browser and refresh a few times. You should see both nodes rotating in round-robin fashion.
Also test the HA setup by failing one of the LB servers and making sure you always get a response back from the back-end nodes. Do the same for the back-end nodes.
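
You can also see the rotation from the command line (a quick sketch using the test pages above; without a cookie, each request goes to the next server):

[root@lb1 ~] for i in 1 2 3 4; do curl -s http://192.168.29.100/; done
# Expect "This is Web Node 1" and "This is Web Node 2" to alternate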


To send logs from HAProxy to syslog-ng add the following lines to the syslog-ng config file:





[root@logserver ~] vi /etc/syslog-ng/syslog-ng.conf

source s_all {
    udp(ip(127.0.0.1) port(514));
};

destination df_haproxy { file("/var/log/haproxy.log"); };

filter f_haproxy { facility(local7); };

log { source(s_all); filter(f_haproxy); destination(df_haproxy); };


We can use Pound, a reverse proxy that supports SSL termination, to listen for SSL connections on port 443 and terminate them using a local certificate. Pound will then insert a header called "X-Forwarded-Proto: https" into each HTTP request; HAProxy will look for that header and, if it is absent, redirect the insecure connection to port 443.

Installing Pound is straightforward and can be done from a package or from source. Once installed, the config file should look like this:



[root@lb1 ~] cat /etc/pound/pound.cfg
User            "www-data"
Group           "www-data"

LogLevel        3

## check backend every X secs:
Alive           5

Control "/var/run/pound/poundctl.socket"

ListenHTTPS
      Address 192.168.29.100
      Port    443
      AddHeader "X-Forwarded-Proto: https"
      Cert    "/etc/ssl/local.server.pem"
      xHTTP   0
      Service
            BackEnd
                  Address 192.168.29.100
                  Port    80
            End
      End
End

[root@lb1 ~] /etc/init.d/pound start
Pound will now listen on port 443 for secure connections, terminate them using the local.server.pem certificate, then insert the "X-Forwarded-Proto: https" header into the HTTP request and forward it to HAProxy, which is running and listening on the same host on port 80.

To make HAProxy redirect all insecure connections from port 80 to port 443, all we need to do is create an access list that looks for the header Pound inserts and, if it is missing, redirect the HTTP connection to Pound (listening on port 443).

The new config needs to look like this:



[root@lb1 ~] cat /etc/haproxy/haproxy.cfg

global
      log 127.0.0.1   local7 info
      maxconn 4096
      user haproxy
      group haproxy
      daemon
      #debug
      #quiet

defaults
      log     global
      mode    http
      option  httplog
      option  dontlognull
      retries 3
      option redispatch
      maxconn 2000
      contimeout      5000
      clitimeout      50000
      srvtimeout      50000

listen webfarm 192.168.29.100:80
      mode http
      balance roundrobin
      cookie JSESSIONID prefix
      option httpclose
      option forwardfor
      option httpchk HEAD /index.html HTTP/1.0
      acl x_proto hdr(X-Forwarded-Proto) -i https
      redirect location https://192.168.29.100/ if !x_proto
      server webA webserver1.example.net:80 cookie A check
      server webB webserver2.example.net:80 cookie B check
The two new lines (the "acl" and "redirect" directives) create an access list that looks for the https string (case-insensitive, -i) in the X-Forwarded-Proto header. If the string is not there (meaning the connection came in on port 80, hitting HAProxy directly), the client is redirected to the secure SSL port 443 that Pound is listening on. This ensures that each time a client hits port 80 the connection is redirected to port 443 and secured; clients that connect directly to port 443 are terminated by Pound, carry the header, and are served straight away.
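
To verify the behavior from the command line (just a sketch; -k accepts the self-signed certificate generated below):

# Plain HTTP should come back as a 302 redirect to https://192.168.29.100/
curl -I http://192.168.29.100/
# HTTPS is terminated by Pound, gets the X-Forwarded-Proto header, and is served
curl -k https://192.168.29.100/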

To generate a self-signed cert to use in Pound run this:



[root@lb1 ~] openssl req -x509 -newkey rsa:1024 -keyout local.server.pem -out local.server.pem -days 365 -nodes


Resources:


http://haproxy.1wt.eu/
http://www.keepalived.org/
http://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol
http://haproxy.1wt.eu/download/1.2/doc/architecture.txt

http://www.apsis.ch/pound/