LVS+KeepAlived: Configuring a High-Availability Cluster
Overview:
Last week I used keepalived to build a high-availability MySQL master-master replication setup. Since keepalived was still fresh in my mind, I decided to set up an LVS cluster as well.
This article uses the DR (direct routing) forwarding method for the LVS cluster.
DR gives the best performance of the LVS forwarding methods, but it requires the LB director and the real servers to be on the same physical network segment.
Reference documents:
LVS cluster introduction: http://732233048.blog.运维网.com/9323668/1617201
Differences between LVS, Nginx and HAProxy load balancing: http://732233048.blog.运维网.com/9323668/1623375
Environment:
Front-end LB (master): 192.168.186.134
Front-end LB (backup): 192.168.186.135
Back-end web (R1): 192.168.186.128
Back-end web (R2): 192.168.186.129
Shared storage: 192.168.186.131
VIP: 192.168.186.150
Configuration steps:
1. Install dependencies:
yum -y install gcc gcc-c++ make pcre pcre-devel kernel-devel openssl-devel
2. Install keepalived:
Perform the following steps on both the master (192.168.186.134) and the backup (192.168.186.135):
Install keepalived:
cd /usr/local/src/
wget http://www.keepalived.org/software/keepalived-1.2.15.tar.gz
tar -zxf keepalived-1.2.15.tar.gz
cd keepalived-1.2.15
./configure --prefix=/usr/local/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-504.16.2.el6.i686/
make
make install
Note: the option --with-kernel-dir=/usr/src/kernels/2.6.32-504.16.2.el6.i686/ must be included. It does not compile keepalived into the kernel; it only points the build at the kernel header files (the include directory). This option is only needed when keepalived will manage LVS; in other cases it can be omitted.
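If you are not sure which kernel source directory to pass, it can usually be located like this (the exact directory name depends on the kernel-devel package installed on your machine):
uname -r                      # running kernel version
rpm -q kernel-devel           # confirm kernel-devel is installed
ls -d /usr/src/kernels/*      # available kernel source trees; pass the matching one to --with-kernel-dir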
Copy the files into place:
cp -a /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp -a /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
mkdir /etc/keepalived/
cp -a /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp -a /usr/local/keepalived/sbin/keepalived /usr/sbin/
Note: the paths /etc/sysconfig/keepalived and /etc/keepalived/keepalived.conf must be correct, because the init script /etc/init.d/keepalived reads both of these files when it starts.
Configure keepalived:
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.old
vi /etc/keepalived/keepalived.conf    # master LB
! Configuration File for keepalived
global_defs {
    notification_email {
        732233048@qq.com
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id lvs
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    #nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.186.150
    }
}
virtual_server 192.168.186.150 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 300
    protocol TCP
    real_server 192.168.186.128 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.186.129 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
On the backup (192.168.186.135), modify the configuration file as follows:
Change state MASTER to state BACKUP
Change priority 150 to priority 100
Note: pay close attention to the syntax of keepalived.conf. keepalived does not validate the configuration file at startup; it will even start successfully if the configuration file is missing entirely.
Note on persistence_timeout: the session persistence time, in seconds.
This option is very useful for dynamic pages and offers a simple way to get session affinity in the cluster.
With persistence enabled, requests from a given client keep being forwarded to the same real server until the persistence time expires.
Note that this is an idle timeout: if the client performs no action on the dynamic page for 300 seconds, its next request may be dispatched to a different node; as long as the client keeps interacting with the page, the 300-second limit does not apply.
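Once the cluster is running you can observe persistence directly on the director with ipvsadm (installed in step 3); a quick check looks like this:
ipvsadm -ln     # virtual service, scheduler and persistence timeout
ipvsadm -lnc    # per-client connection entries; persistent templates show up here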
Start the service:
chkconfig keepalived on
/etc/init.d/keepalived start
Check that it started:
tail -n 30 /var/log/messages
May 12 06:13:16 scj Keepalived: Starting VRRP child process, pid=2989
May 12 06:13:16 scj Keepalived_vrrp: Netlink reflector reports IP 192.168.186.134 added
May 12 06:13:16 scj Keepalived_vrrp: Netlink reflector reports IP fe80::20c:29ff:fead:145 added
May 12 06:13:16 scj Keepalived_vrrp: Registering Kernel netlink reflector
May 12 06:13:16 scj Keepalived_vrrp: Registering Kernel netlink command channel
May 12 06:13:16 scj Keepalived_vrrp: Registering gratuitous ARP shared channel
May 12 06:13:16 scj Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
May 12 06:13:16 scj Keepalived_vrrp: Configuration is using : 37233 Bytes
May 12 06:13:16 scj Keepalived_vrrp: Using LinkWatch kernel netlink reflector...
May 12 06:13:16 scj Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
May 12 06:13:16 scj Keepalived_vrrp: VRRP sockpool:
May 12 06:13:16 scj Keepalived_healthcheckers: Netlink reflector reports IP 192.168.186.134 added
May 12 06:13:16 scj Keepalived_healthcheckers: Netlink reflector reports IP fe80::20c:29ff:fead:145 added
May 12 06:13:16 scj Keepalived_healthcheckers: Registering Kernel netlink reflector
May 12 06:13:16 scj Keepalived_healthcheckers: Registering Kernel netlink command channel
May 12 06:13:16 scj kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
May 12 06:13:16 scj kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
May 12 06:13:16 scj kernel: IPVS: ipvs loaded.
May 12 06:13:16 scj Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
May 12 06:13:16 scj Keepalived_healthcheckers: Configuration is using : 13636 Bytes
May 12 06:13:17 scj Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
May 12 06:13:17 scj Keepalived_healthcheckers: Activating healthchecker for service :80
May 12 06:13:17 scj Keepalived_healthcheckers: Activating healthchecker for service :80
May 12 06:13:17 scj kernel: IPVS: scheduler registered.
May 12 06:13:20 scj Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 12 06:13:21 scj Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 12 06:13:21 scj Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 12 06:13:21 scj Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.186.150
May 12 06:13:21 scj Keepalived_healthcheckers: Netlink reflector reports IP 192.168.186.150 added
May 12 06:13:26 scj Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.186.150
3. Install ipvsadm (the IPVS management tool):
Perform the following steps on both the master (192.168.186.134) and the backup (192.168.186.135):
Install the dependencies:
yum -y install popt popt-devel libnl libnl-devel popt-static
Install ipvsadm:
cd /usr/local/src/
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
tar -zxf ipvsadm-1.26.tar.gz
cd ipvsadm-1.26
make
make install
ipvsadm --help    # if the help text is displayed, the installation succeeded
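ipvsadm only manages the IPVS code inside the kernel; to confirm the kernel side is available you can check it directly (optional sanity check):
modprobe ip_vs           # load the IPVS module if it is not loaded yet
lsmod | grep ip_vs       # confirm ip_vs (and its scheduler modules) are present
cat /proc/net/ip_vs      # kernel-side view of the configured virtual services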
4. Real server configuration:
Note: perform the following steps on every real server.
This article uses the DR forwarding method: once a request reaches a real server, the response is sent back to the client directly instead of passing through the LB director again, so every real server must have the VIP bound locally. To keep the real servers from answering ARP requests for the VIP (which would conflict with the director), the script below also sets the arp_ignore and arp_announce kernel parameters.
The VIP is bound to the lo interface:
vi /etc/init.d/rscreatevip.sh
#!/bin/bash
#real server create vip
VIP=192.168.186.150
#. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "real server create vip"
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo "real server remove vip"
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "usage: $0 {start|stop}"
    exit 1
    ;;
esac
chmod 755 /etc/init.d/rscreatevip.sh    # make it executable
/etc/init.d/rscreatevip.sh start
echo "/etc/init.d/rscreatevip.sh start" >> /etc/rc.d/rc.local #设置开机自动启动
5. Useful commands:
Run the following on the LB director:
ipvsadm -ln    # list the real servers currently being monitored
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.186.150:80 rr persistent 5
-> 192.168.186.128:80 Route 2 0 0
-> 192.168.186.129:80 Route 3 0 0
ip addr    # check which LB node currently holds the VIP
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:0c:29:ad:01:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.186.134/24 brd 192.168.186.255 scope global eth0
inet 192.168.186.150/32 scope global eth0
inet6 fe80::20c:29ff:fead:145/64 scope link
valid_lft forever preferred_lft forever
6. Testing:
Failover between the keepalived master and backup:
On the master (192.168.186.134), stop keepalived:
/etc/init.d/keepalived stop
Check the log on the master (192.168.186.134):
tail -f /var/log/messages
May 12 09:51:01 scj Keepalived: Stopping Keepalived v1.2.15 (05/12,2015)
May 12 09:51:01 scj Keepalived_vrrp: VRRP_Instance(VI_1) sending 0 priority
May 12 09:51:01 scj Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
May 12 09:51:01 scj Keepalived_healthcheckers: Netlink reflector reports IP 192.168.186.150 removed
May 12 09:51:01 scj Keepalived_healthcheckers: Removing service :80 from VS :80
May 12 09:51:01 scj Keepalived_healthcheckers: Removing service :80 from VS :80
Check the log on the backup (192.168.186.135):
tail -f /var/log/messages
May 12 09:51:01 scj Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 12 09:51:02 scj Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 12 09:51:02 scj Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 12 09:51:02 scj Keepalived_healthcheckers: Netlink reflector reports IP 192.168.186.150 added
May 12 09:51:02 scj Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.186.150
May 12 09:51:07 scj Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.186.150
On the master (192.168.186.134), run /etc/init.d/keepalived start again:
Check the log on the master (192.168.186.134):
tail -f /var/log/messages
May 12 09:51:46 scj Keepalived: Starting Keepalived v1.2.15 (05/12,2015)
May 12 09:51:46 scj Keepalived: Starting Healthcheck child process, pid=3591
May 12 09:51:46 scj Keepalived: Starting VRRP child process, pid=3592
May 12 09:51:46 scj Keepalived_vrrp: Netlink reflector reports IP 192.168.186.134 added
May 12 09:51:46 scj Keepalived_vrrp: Netlink reflector reports IP fe80::20c:29ff:fead:145 added
May 12 09:51:46 scj Keepalived_vrrp: Registering Kernel netlink reflector
May 12 09:51:46 scj Keepalived_vrrp: Registering Kernel netlink command channel
May 12 09:51:46 scj Keepalived_vrrp: Registering gratuitous ARP shared channel
May 12 09:51:46 scj Keepalived_healthcheckers: Netlink reflector reports IP 192.168.186.134 added
May 12 09:51:46 scj Keepalived_healthcheckers: Netlink reflector reports IP fe80::20c:29ff:fead:145 added
May 12 09:51:46 scj Keepalived_healthcheckers: Registering Kernel netlink reflector
May 12 09:51:46 scj Keepalived_healthcheckers: Registering Kernel netlink command channel
May 12 09:51:46 scj Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
May 12 09:51:46 scj Keepalived_vrrp: Configuration is using : 37233 Bytes
May 12 09:51:46 scj Keepalived_vrrp: Using LinkWatch kernel netlink reflector...
May 12 09:51:46 scj Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
May 12 09:51:46 scj Keepalived_vrrp: VRRP sockpool:
May 12 09:51:47 scj Keepalived_vrrp: VRRP_Instance(VI_1) forcing a new MASTER election
May 12 09:51:47 scj Keepalived_vrrp: VRRP_Instance(VI_1) forcing a new MASTER election
May 12 09:51:48 scj Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 12 09:51:48 scj Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
May 12 09:51:48 scj Keepalived_healthcheckers: Configuration is using : 13636 Bytes
May 12 09:51:48 scj Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
May 12 09:51:48 scj Keepalived_healthcheckers: Activating healthchecker for service :80
May 12 09:51:48 scj Keepalived_healthcheckers: Activating healthchecker for service :80
May 12 09:51:49 scj Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 12 09:51:49 scj Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 12 09:51:49 scj Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.186.150
May 12 09:51:49 scj Keepalived_healthcheckers: Netlink reflector reports IP 192.168.186.150 added
May 12 09:51:54 scj Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.186.150
May 12 09:52:22 scj dhclient: DHCPREQUEST on eth0 to 192.168.186.254 port 67 (xid=0x5c3132b)
May 12 09:52:22 scj dhclient: DHCPACK from 192.168.186.254 (xid=0x5c3132b)
May 12 09:52:24 scj dhclient: bound to 192.168.186.134 -- renewal in 770 seconds.
Check the log on the backup (192.168.186.135):
tail -f /var/log/messages
May 12 09:51:46 scj Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
May 12 09:51:46 scj Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
May 12 09:51:46 scj Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
May 12 09:51:46 scj Keepalived_healthcheckers: Netlink reflector reports IP 192.168.186.150 removed
May 12 09:54:00 scj dhclient: DHCPREQUEST on eth0 to 192.168.186.254 port 67 (xid=0x1e01f7ac)
May 12 09:54:00 scj dhclient: DHCPACK from 192.168.186.254 (xid=0x1e01f7ac)
May 12 09:54:02 scj dhclient: bound to 192.168.186.135 -- renewal in 785 seconds.
Load balancing test:
(R1) 192.168.186.128:
In the web root, e.g. /opt/nginx/www/www.scj.com/, create an index.html file:
echo "server 128" > index.html
(R2) 192.168.186.129:
In the web root, e.g. /opt/nginx/www/www.scj.com/, create an index.html file:
echo "server 129" > index.html
Browse to 192.168.186.150 and refresh several times to see whether the responses alternate between "server 128" and "server 129".
Note: because of persistence_timeout, you may have to wait about 300 seconds between refreshes before the request is dispatched to the other node. Be patient; as long as the responses eventually alternate, load balancing is working.
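Because persistence keeps one client pinned to one real server, a simple loop from a single client mainly proves that the VIP answers; alternation is easier to see from different source IPs or after the timeout expires. A minimal check from any client machine might look like this:
while true; do curl -s http://192.168.186.150/; sleep 5; done    # responses switch nodes only after the persistence timeout, or when run from another client IP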
Real server failure detection:
Stop the web service (nginx) on (R1) 192.168.186.128:
/usr/local/nginx/sbin/nginx -s stop
Check the log on the master (192.168.186.134):
tail -f /var/log/messages
May 12 12:12:23 scj Keepalived_healthcheckers: TCP connection to [192.168.186.128]:80 failed !!!
May 12 12:12:23 scj Keepalived_healthcheckers: Removing service [192.168.186.128]:80 from VS [192.168.186.150]:80
May 12 12:12:23 scj Keepalived_healthcheckers: Remote SMTP server [127.0.0.1]:25 connected.
May 12 12:12:23 scj Keepalived_healthcheckers: SMTP alert successfully sent.
Start the web service on (R1) 192.168.186.128 again:
/usr/local/nginx/sbin/nginx
Check the log on the master (192.168.186.134):
tail -f /var/log/messages
May 12 12:14:59 scj Keepalived_healthcheckers: TCP connection to [192.168.186.128]:80 success.
May 12 12:14:59 scj Keepalived_healthcheckers: Adding service [192.168.186.128]:80 to VS [192.168.186.150]:80
May 12 12:14:59 scj Keepalived_healthcheckers: Remote SMTP server [127.0.0.1]:25 connected.
May 12 12:14:59 scj Keepalived_healthcheckers: SMTP alert successfully sent.
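While repeating this test you can also watch the director's real-server table change in real time (optional):
watch -n 2 ipvsadm -ln    # 192.168.186.128 disappears while nginx is down and reappears once it is restarted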
7. Shared storage configuration:
Perform the following on 192.168.186.131:
yum -y install nfs-utils rpcbind
mkdir -p /data/www/    # create the storage directory
cd /data/www/
vi /etc/exports
/data/www/ 192.168.186.0/24(rw,async,no_root_squash)
chkconfig nfs on
chkconfig rpcbind on
/etc/init.d/rpcbind start    # rpcbind must be started before nfs
/etc/init.d/nfs start
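A quick way to confirm the export is active on the storage server (optional check):
exportfs -v               # list the exported directories and their options
showmount -e localhost    # the view NFS clients will get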
Perform the following on every real server:
yum -y install nfs-utils
showmount -e 192.168.186.131    # list the directories exported by the storage server
vi /etc/fstab    # mount automatically at boot
Add the following line:
192.168.186.131:/data/www /opt/nginx/www/www.scj.com nfs defaults 0 0
mount -a    # important: this verifies that the mount succeeds
df -h    # check the mount
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       19G  6.1G   12G  36% /
tmpfs                  58M     0   58M   0% /dev/shm
/dev/sda1             485M   30M  430M   7% /boot
192.168.186.131:/data/www
                       19G  3.0G   15G  18% /opt/nginx/www/www.scj.com
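To make sure the mount is writable end to end, a quick round trip helps (the file name below is just an example):
echo "nfs write test" > /opt/nginx/www/www.scj.com/nfs_test.txt    # run on a real server: write through the mount
ls -l /data/www/nfs_test.txt                                       # run on 192.168.186.131: the file should appear here
rm -f /opt/nginx/www/www.scj.com/nfs_test.txt                      # clean up from the real server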
8. Session sharing with memcached:
Because only a few machines are available, 192.168.186.131 temporarily doubles as the memcached server.
Install and configure memcached on 192.168.186.131:
Install the libevent dependency:
cd /usr/local/src/
wget http://downloads.sourceforge.net/levent/libevent-2.0.22-stable.tar.gz
tar -zxf libevent-2.0.22-stable.tar.gz
cd libevent-2.0.22-stable
./configure --prefix=/usr
make
make install
Install memcached:
cd /usr/local/src/
wget http://www.memcached.org/files/memcached-1.4.22.tar.gz
tar -zxf memcached-1.4.22.tar.gz
cd memcached-1.4.22
./configure --prefix=/usr/local/memcached --with-libevent=/usr
make
make install
Note: by default a single memcached process supports at most 2 GB of memory; if you need more, enable 64-bit support by adding --enable-64bit when running ./configure.
groupadd memcached
useradd -g memcached memcached    # create the memcached account
mkdir -p /var/run/memcached/
chown -R memcached.memcached /var/run/memcached/
/usr/local/memcached/bin/memcached -u memcached -m 1024 -c 100 -l 192.168.186.131 -p 11211 -P /var/run/memcached/memcached.pid -d    # start memcached
Notes on the memcached options:
-p    TCP port to listen on (default 11211)
-U    UDP port to listen on (default 11211)
-s    listen on a UNIX socket instead (shared-memory-style local communication), e.g. /tmp/memcached.sock, useful when all clients are local
-c    maximum number of concurrent connections
-l    address to listen on; on a multi-homed server use -l to pick a specific interface
-d    run as a daemon in the background
-r    raise the core file size limit (useful for debugging)
-u    run as the given user
-m    maximum memory memcached may use, in megabytes
-t    number of threads used to handle incoming requests; only effective if memcached was compiled with thread support, and in practice it can usually be left at the default
-f    growth factor used when sizing the fixed slab chunks that memory is pre-allocated in
-n    minimum space allocated per item, in bytes (default 48)
-P    path of the pid file
-L    try to use large memory pages if available
-S    enable SASL authentication
echo "/usr/local/memcached/bin/memcached-u memcached -m 1024 -c 100 -l 192.168.186.131 -p 11211 -P /var/run/memcached/memcached.pid -d" >> /etc/rc.d/rc.local #设置开机自动启动
cat /var/run/memcached/memcached.pid    # check the process id
killall memcached    # stop the memcached service
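Before wiring PHP to it, it is worth confirming that memcached is actually listening (a quick check; nc must be installed):
netstat -tlnp | grep 11211                                  # memcached should be listening on 192.168.186.131:11211
echo -e "stats\nquit" | nc 192.168.186.131 11211 | head     # ask the daemon for its statistics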
Install the PHP memcache extension on every real server:
Note: PHP has both a memcache extension and a memcached extension.
The memcached extension requires the libmemcached library as a dependency.
This article uses the memcache extension, which needs no extra dependency.
PHP can only talk to the memcached service on 192.168.186.131 once the memcache extension is installed.
cd /usr/local/src/
wget http://pecl.php.net/get/memcache-2.2.7.tgz
tar -zxf memcache-2.2.7.tgz
cd memcache-2.2.7
/usr/local/php/bin/phpize    # use find / -name phpize to locate the path if necessary
./configure --enable-memcache --with-php-config=/usr/local/php/bin/php-config
make
make install
Note: if make install finishes without errors, it prints the following line:
Installing shared extensions: /usr/local/php/lib/php/extensions/no-debug-non-zts-20121212/
Note down this path; it is needed below.
vi /usr/local/php/etc/php.ini    # edit the configuration file
Find extension_dir and add the following lines in the appropriate places:
extension_dir = "/usr/local/php/lib/php/extensions/no-debug-non-zts-20121212/"
extension = memcache.so
session.cookie_lifetime = 86400
session.gc_maxlifetime = 86400
session.save_handler = memcache
session.save_path = "tcp://192.168.186.131:11211"
; session.save_path = "tcp://192.168.186.131:11211,tcp://192.168.xxx.xxx:11211"
ps -ef | grep php    # find the php-fpm master process id
root     12139     1  0 05:46 ?        00:00:00 php-fpm: master process (/usr/local/php/etc/php-fpm.conf)
nobody   12140 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12141 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12142 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12143 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12144 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12145 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12146 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12147 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12148 12139  0 05:46 ?        00:00:00 php-fpm: pool www
nobody   12149 12139  0 05:46 ?        00:00:00 php-fpm: pool www
root     12160  1869  0 05:47 pts/0    00:00:00 grep php
kill -SIGUSR2 12139    # restart php-fpm (similar to a reload)
Open a page that calls phpinfo(); the memcache section should appear as in the screenshot below:
http://s3.运维网.com/wyfs02/M00/6C/D1/wKiom1VStz2CwnFKAAHl0-ZBja0293.jpg
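A command-line check also works, assuming the PHP binary sits under the same prefix as phpize above:
/usr/local/php/bin/php -m | grep memcache    # the loaded module list should include "memcache"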
Test the memcached service on 192.168.186.131:
telnet 192.168.186.131 11211    # connect to memcached on port 11211
Trying 192.168.186.131...
Connected to 192.168.186.131.
Escape character is '^]'.
set test 0 0 10    # create a key named test whose value will be 10 bytes long
test_value    # the 10-byte value stored under test
STORED
get test    # read back the value of test
VALUE test 0 10
test_value
END
quit    # close the connection
Connection closed by foreign host.
Test the PHP memcache extension on the real servers:
Run the following on any one of the real servers:
vi test.php    # create a PHP test file (contents sketched below)
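The original post does not show the contents of test.php. A minimal sketch that exercises memcache-backed sessions could be written into the web root like this (the file name, counter key and output text are illustrative; the web root path is the one used earlier):
cat > /opt/nginx/www/www.scj.com/test.php <<'EOF'
<?php
// start a session; with session.save_handler = memcache the data is stored on 192.168.186.131:11211
session_start();
if (!isset($_SESSION['counter'])) {
    $_SESSION['counter'] = 0;
}
$_SESSION['counter']++;
// SERVER_ADDR is normally provided by nginx's fastcgi_params; it shows which real server answered
echo "served by " . $_SERVER['SERVER_ADDR'] . ", session id " . session_id()
     . ", counter " . $_SESSION['counter'] . "\n";
EOF
If the counter keeps increasing even when the response comes from the other real server, the session data is being shared through memcached.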
Browse to 192.168.186.150/test.php; the result looks like this:
http://s3.运维网.com/wyfs02/M00/6C/D2/wKiom1VSxNvDsDbGAAEI0qxpmqE658.jpg