Posted by qqwe on 2018-5-27 11:03:38

Docker Networking with the pipework Tool (3): Single-Host Docker Container VLAN Separation

  pipework can connect Docker containers not only to a Linux bridge but also to Open vSwitch, which makes it possible to divide Docker containers into VLANs. Below is a simple demonstration of how to achieve Layer 2 isolation between Docker containers on a single host.
To demonstrate the isolation, we place the 4 containers in the same IP subnet. At Layer 2, however, they form two isolated networks with separate broadcast domains.
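  The layout used in the rest of this post is: all four containers attach to a single Open vSwitch bridge (ovs1) in the 192.168.1.0/24 subnet.
  duyuheng1  192.168.1.1  VLAN 100
  duyuheng2  192.168.1.2  VLAN 100
  duyuheng3  192.168.1.3  VLAN 200
  duyuheng4  192.168.1.4  VLAN 200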
  Install Open vSwitch
  Install the basic build dependencies
# yum -y install gcc make python-devel openssl-devel kernel-devel graphviz kernel-debug-devel autoconf automake rpm-build redhat-rpm-config libtool
  Download the openvswitch source package

# wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
  Extract the source and build the RPM

# tar zxvf openvswitch-2.3.1.tar.gz
# mkdir -p ~/rpmbuild/SOURCES
# cp openvswitch-2.3.1.tar.gz ~/rpmbuild/SOURCES/
# sed 's/openvswitch-kmod, //g' openvswitch-2.3.1/rhel/openvswitch.spec > openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
# rpmbuild -bb --without check openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
  Afterwards there will be 2 files in ~/rpmbuild/RPMS/x86_64/

# ls -l ~/rpmbuild/RPMS/x86_64/
  total 9552
  -rw-r--r--. 1 root root 2013568 Aug 16 15:47 openvswitch-2.3.1-1.x86_64.rpm
  -rw-r--r--. 1 root root 7763632 Aug 16 15:47 openvswitch-debuginfo-2.3.1-1.x86_64.rpm
  
  Installing the first one is enough
# yum localinstall /root/rpmbuild/RPMS/x86_64/openvswitch-2.3.1-1.x86_64.rpm
  Answer Y at the prompt

  Is this ok : y
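  As a quick extra check (not part of the original steps), you can confirm the package was installed; this should print the installed openvswitch package name and version.
# rpm -q openvswitch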
  Start the service
# /sbin/chkconfig openvswitch on
# /sbin/service openvswitch start
  Starting openvswitch (via systemctl):

  
  Or use
# systemctl start openvswitch
  Check the status

# /sbin/service openvswitch status
  ovsdb-server is running with pid 39963
  ovs-vswitchd is running with pid 39976
  Or use
# systemctl status openvswitch

  ● openvswitch.service - LSB: Open vSwitch switch
     Loaded: loaded (/etc/rc.d/init.d/openvswitch; bad; vendor preset: disabled)
     Active: active (running) since Wed 2017-08-16 15:55:58 CST; 2min 27s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 39936 ExecStart=/etc/rc.d/init.d/openvswitch start (code=exited, status=0/SUCCESS)
     CGroup: /system.slice/openvswitch.service
           ├─39962 ovsdb-server: monitoring pid 39963 (healthy)
           ├─39963 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/d...
           ├─39975 ovs-vswitchd: monitoring pid 39976 (healthy)
           └─39976 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-...
  
  Aug 16 15:55:57 localhost.localdomain systemd: Starting LSB: Open vSwitch switch...
  Aug 16 15:55:57 localhost.localdomain openvswitch: /etc/openvswitch/conf.db does not exist ... (warning).
  Aug 16 15:55:57 localhost.localdomain openvswitch: Creating empty database /etc/openvswitch/conf.db
  Aug 16 15:55:57 localhost.localdomain openvswitch: Starting ovsdb-server
  Aug 16 15:55:57 localhost.localdomain openvswitch: Configuring Open vSwitch system IDs
  Aug 16 15:55:57 localhost.localdomain openvswitch: Inserting openvswitch module
  Aug 16 15:55:58 localhost.localdomain openvswitch: Starting ovs-vswitchd
  Aug 16 15:55:58 localhost.localdomain openvswitch: Enabling remote OVSDB managers
  Aug 16 15:55:58 localhost.localdomain systemd: Started LSB: Open vSwitch switch.
  
  You can see that it is running normally.
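  As another optional sanity check (not in the original post), ovs-vsctl show confirms the CLI can reach the OVSDB; at this point it should report only the OVS version, with no bridges defined yet.
# ovs-vsctl show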
  Install pipework
# cd /usr/src/
# ls
  centos6.tar  debug  kernels  openvswitch-2.3.1  pipework-master.zip
  centos7.tar  docker-jdeathe.tar  nginx-1.11.2.tar.gz  openvswitch-2.3.1.tar.gz  registry.tar
# unzip pipework-master.zip
# cp -p pipework-master/pipework /usr/local/bin/
  Create the switch and add the physical NIC to ovs1 (the Xshell session may drop; if it does, continue from the physical machine's console)

# ovs-vsctl add-br ovs1;ovs-vsctl add-port ovs1 ens33;ip link set ovs1 up;ifconfig ens33 0;ifconfig ovs1 192.168.1.105
# ifconfig
  ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::20c:29ff:fef8:acec  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f8:ac:ec  txqueuelen 1000  (Ethernet)
        RX packets 20837  bytes 23279865 (22.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15371  bytes 2103761 (2.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  ovs1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.105  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fef8:acec  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f8:ac:ec  txqueuelen 0  (Ethernet)
        RX packets 27  bytes 2900 (2.8 KiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 47  bytes 5451 (5.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
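  Optionally (not in the original post), confirm that ens33 is now a port on the ovs1 bridge; this should print ens33.
# ovs-vsctl list-ports ovs1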
  
  
Create 4 Docker containers on host A, named duyuheng1 through duyuheng4

# docker run -itd --name duyuheng1 docker.nmgkj.com /bin/bash  

  9da698d33081e03dceda4c3c1e8f08408af2d031c2f16c24a6ec3c4ab15b1538
# docker run -itd --name duyuheng2 docker.nmgkj.com /bin/bash
  ef1b3dc379f14b44835f5c9ff9076a365eddd0fcc5cfab5fa75c741fd13c1876
# docker run -itd --name duyuheng3 docker.nmgkj.com /bin/bash
  94f2c6ad810ad75497b6019baecb5b68e5887374ae4d5824d08aa8f2bd9a5554
# docker run -itd --name duyuheng4 docker.nmgkj.com /bin/bash
  c099c7de1e5fc12359013dc56d3a7b6d0dc2e4c1d7a2ce2b347babfcf9e2cce6
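  A quick optional check (not part of the original steps) that all four containers are up:
# docker ps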
  Assign duyuheng1 and duyuheng2 to one VLAN. The VLAN ID is appended after the MAC address with @; here the MAC address is omitted.

# pipework ovs1 duyuheng1 192.168.1.1/24 @100
# pipework ovs1 duyuheng2 192.168.1.2/24 @100
  Assign duyuheng3 and duyuheng4 to another VLAN

# pipework ovs1 duyuheng3 192.168.1.3/24 @200
# pipework ovs1 duyuheng4 192.168.1.4/24 @200
  After completing the steps above, attach to the containers with docker attach and test connectivity with ping: duyuheng1 and duyuheng2 can reach each other, but are isolated from duyuheng3 and duyuheng4. With that, a simple VLAN-isolated container network is complete.

  

  Go into a container to verify
# docker exec -it 9da698d33081 /bin/bash
# ping 192.168.1.2
  PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.

  64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.709 ms
  ^C
  --- 192.168.1.2 ping statistics ---
  2 packets transmitted, 2 received, 0% packet loss, time 1029ms
  rtt min/avg/max/mdev = 0.087/0.398/0.709/0.311 ms
# ping 192.168.1.4
  PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.

  ^C
  --- 192.168.1.4 ping statistics ---
  2 packets transmitted, 0 received, 100% packet loss, time 1813ms
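  For completeness (not shown in the original post), the same test can be repeated from the other VLAN: inside duyuheng3, 192.168.1.4 should answer while 192.168.1.1 should not.
# docker exec -it duyuheng3 /bin/bash
# ping -c 2 192.168.1.4
# ping -c 2 192.168.1.1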
  
  Because Open vSwitch itself supports VLANs, what pipework does here is essentially the same as described earlier, except that the Linux bridge is replaced by Open vSwitch and a tag is specified when one end of the veth pair is added to the ovs1 bridge. The underlying operation is:
ovs-vsctl add-port ovs1 veth* tag=100
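  To see the tags pipework assigned (an optional check; the veth port names are generated by pipework), inspect the bridge: ovs-vsctl show prints tag: 100 / tag: 200 under the corresponding ports, and the Port table can also be queried directly.
# ovs-vsctl show
# ovs-vsctl --columns=name,tag list port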
  