LVS High Availability (6): LVS + keepalived Master/Backup

Previous posts in this series introduced LVS and keepalived separately; this post summarizes the combined LVS + keepalived solution, using LVS in DR mode as an example and adding HA at the LB (director) layer. The architecture is shown in the figure below:



<img src="https://www.361way.com/wp-content/uploads/2016/05/lvs-keepalived-mb.png" title="lvs-keepalived-master-backup" alt="lvs-keepalived-master-backup" width="621" height="232" align="" />

1. IP Planning

<table class="" style="width:231px;" cellspacing="0" cellpadding="0" bordercolor="#003399" border="2">
    <tbody>
        <tr>
            <td rowspan="2" class="xl63" width="102" height="36">
                realserver
            </td>
            <td width="129">
                192.168.122.10
            </td>
        </tr>
        <tr>
            <td height="18">
                192.168.122.20
            </td>
        </tr>
        <tr>
            <td rowspan="2" class="xl64" height="36">
                director
            </td>
            <td>
                192.168.122.30
            </td>
        </tr>
        <tr>
            <td height="18">
                192.168.122.40
            </td>
        </tr>
        <tr>
            <td class="xl64" height="18">
                VIP
            </td>
            <td>
                192.168.122.100
            </td>
        </tr>
    </tbody>
</table>



Install Apache httpd on both realservers (yum -y install httpd).



Install ipvsadm and keepalived on both director hosts (yum -y install ipvsadm keepalived).



The VIP must be configured on the eth0 interface of both directors, and also on the lo loopback interface of both realservers.



As covered in an earlier post, keepalived has a built-in IPVS module, so the LVS configuration can be written directly in its configuration file; there is no need to configure it separately with the ipvsadm command.
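For reference, the virtual_server block in the keepalived configuration below is roughly equivalent to setting up the same service by hand with ipvsadm (illustrative only; with keepalived these rules are installed for you):

```
# Add the virtual service (round-robin scheduler)
ipvsadm -A -t 192.168.122.100:80 -s rr
# Add the two real servers in DR (gatewaying) mode, weight 1 each
ipvsadm -a -t 192.168.122.100:80 -r 192.168.122.10:80 -g -w 1
ipvsadm -a -t 192.168.122.100:80 -r 192.168.122.20:80 -g -w 1
```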

2. Director Host Configuration

The keepalived configuration file on the MASTER node is as follows:



# cat /etc/keepalived/keepalived.conf
global_defs {
router_id LVS_T1
}
vrrp_sync_group bl_group {
group {
  bl_one
}
}
vrrp_instance bl_one {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 38
    priority 150
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      192.168.122.100
    }
}
virtual_server 192.168.122.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 1
    protocol TCP
    real_server 192.168.122.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.122.20 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}                   
The configuration file on the BACKUP director host is as follows:



# cat /etc/keepalived/keepalived.conf
global_defs {
router_id LVS_T2
}
vrrp_sync_group bl_group {
group {
  bl_one
}
}
vrrp_instance bl_one {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 38
    priority 100
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      192.168.122.100
    }
}
virtual_server 192.168.122.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 1
    protocol TCP
    real_server 192.168.122.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.122.20 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}       
As the configuration above shows, the health check used here is based on IP and port; it can also be changed to a URL-based check — see <a href="https://www.361way.com/keepalived-health-check/5218.html" target="_blank" rel="noopener">keepalived health check methods</a>. When the keepalived service starts, log entries like the following appear in the messages log:
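For example, a URL-based check could replace the TCP_CHECK block with an HTTP_GET block such as the following sketch (the path /index.html and the expected status code are placeholders to adapt to your pages):

```
real_server 192.168.122.10 80 {
    weight 1
    HTTP_GET {
        url {
            path /index.html
            status_code 200    # expect HTTP 200 rather than checking a content digest
        }
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 80
    }
}
```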



<a href="https://www.361way.com/wp-content/uploads/2016/05/keepalived-log.png" target="_blank" rel="noopener"><img src="https://www.361way.com/wp-content/uploads/2016/05/keepalived-log.png" title="keepalived-message" alt="keepalived-message" width="973" height="421" /></a>




3. Realserver Host Configuration

The same script is used on both realservers; its content is as follows:



# cat dr_client.sh
#!/bin/bash
VIP=192.168.122.100
BROADCAST=192.168.122.255  # the VIP's broadcast address
. /etc/rc.d/init.d/functions
case "$1" in
 start)
    echo "Preparing for Real Server"
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $BROADCAST up
    /sbin/route add -host $VIP dev lo:0
    ;;
 stop)
    ifconfig lo:0 down
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    ;;
 *)
    echo "Usage: dr_client.sh {start|stop}"
    exit 1
esac
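To make the ARP kernel settings survive a reboot, the same values can also be placed in /etc/sysctl.conf (equivalent to the echo lines in the script above) and applied with `sysctl -p`:

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```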

4. Testing

Run the following commands to start the services (keepalived on each director, the script on each realserver):



# /etc/init.d/keepalived start
# sh dr_client.sh start
Start the httpd service on both realservers and disable the firewall (or allow port 80 through). The following script can be used to test access:



#!/bin/sh
for((i=1;i<=100;i++));do
curl http://192.168.122.100 >> /tmp/q;
done
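Assuming each realserver's index page contains something that identifies the host (for example its IP), the distribution of the 100 requests can then be counted from /tmp/q. A minimal illustration with fabricated page contents:

```shell
# Hypothetical contents of /tmp/q, as if each realserver returned
# a page naming itself (for illustration only):
printf 'rs-192.168.122.10\nrs-192.168.122.20\nrs-192.168.122.10\nrs-192.168.122.20\n' > /tmp/q
# Count how many responses came from each realserver
sort /tmp/q | uniq -c
```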

1) Simple test

Observe the connection distribution with the ipvsadm command on either director host:



[root@lvs-dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.122.100:80 rr persistent 1
  -> 192.168.122.10:80            Route   1      0          0
  -> 192.168.122.20:80            Route   1      0          100     
Note: <span style="color:#E53333;">an issue shows up here: even with the round-robin algorithm, requests from a single client are not balanced (for a stretch of time they all hit the same realserver)</span><span style="color:#E53333;">, while requests from multiple clients are distributed evenly</span>. This per-client stickiness is caused by the persistence_timeout setting in the virtual_server block, which pins each client IP to one realserver for the duration of the persistence window.

2) Realserver failure test

Shut down one of the realservers; every subsequent curl request then returns the page from the remaining healthy host, and the messages log shows entries like:



Oct  7 22:35:10 lvs-dr Keepalived_vrrp[32634]: IPVS: Daemon has already run
Oct  7 22:35:10 lvs-dr Keepalived_healthcheckers[32633]: Netlink reflector reports IP 192.168.122.100 added
Oct  7 22:35:15 lvs-dr Keepalived_vrrp[32634]: VRRP_Instance(bl_one) Sending gratuitous ARPs on eth0 for 192.168.122.100
Oct  7 22:38:16 lvs-dr Keepalived_healthcheckers[32633]: TCP connection to [192.168.122.10]:80 failed !!!
Oct  7 22:38:16 lvs-dr Keepalived_healthcheckers[32633]: Removing service [192.168.122.10]:80 from VS [192.168.122.100]:80
After this realserver comes back up, the messages log shows:



Oct  7 22:39:19 lvs-dr Keepalived_healthcheckers[32633]: TCP connection to [192.168.122.10]:80 success.
Oct  7 22:39:19 lvs-dr Keepalived_healthcheckers[32633]: Adding service [192.168.122.10]:80 to VS [192.168.122.100]:80

3) Director failure test

Stop the keepalived service on the master director; the backup director then logs the following:



Oct  7 22:40:14 lvs-dr Keepalived_vrrp[1601]: VRRP_Instance(bl_one) Transition to MASTER STATE
Oct  7 22:40:14 lvs-dr Keepalived_vrrp[1601]: VRRP_Group(bl_group) Syncing instances to MASTER state
Oct  7 22:40:17 lvs-dr Keepalived_vrrp[1601]: VRRP_Instance(bl_one) Entering MASTER STATE
Oct  7 22:40:17 lvs-dr Keepalived_vrrp[1601]: VRRP_Instance(bl_one) setting protocol VIPs.
Oct  7 22:40:17 lvs-dr Keepalived_vrrp[1601]: VRRP_Instance(bl_one) Sending gratuitous ARPs on eth0 for 192.168.122.100
Oct  7 22:40:17 lvs-dr Keepalived_healthcheckers[1600]: Netlink reflector reports IP 192.168.122.100 added
During this failover, access to the service via 192.168.122.100 continues uninterrupted.
