This article covers Docker container interconnection across hosts and port mapping on CentOS 7.
Environment:
docker1: 192.168.1.230
docker2: 192.168.1.231
a. Set the hostname on each host (docker1 and docker2)
# hostnamectl set-hostname docker1
# reboot
b. Install Docker with yum on both docker1 and docker2 and start the service
[root@docker1 ~]# yum -y install docker
[root@docker1 ~]# service docker start
Redirecting to /bin/systemctl start docker.service
Verify that the daemon is running with ps -ef | grep docker.
c. Install Open vSwitch and its build dependencies on both docker1 and docker2
[root@docker1 ~]# yum -y install openssl-devel kernel-devel
[root@docker1 ~]# yum groupinstall "Development Tools"
[root@docker1 ~]# adduser ovswitch
[root@docker1 ~]# su - ovswitch
[ovswitch@docker1 ~]$ wget <openvswitch-2.3.0.tar.gz download URL>
[ovswitch@docker1 ~]$ tar -zxvpf openvswitch-2.3.0.tar.gz
[ovswitch@docker1 ~]$ mkdir -p ~/rpmbuild/SOURCES
[ovswitch@docker1 ~]$ sed 's/openvswitch-kmod, //g' openvswitch-2.3.0/rhel/openvswitch.spec > openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec
[ovswitch@docker1 ~]$ cp openvswitch-2.3.0.tar.gz rpmbuild/SOURCES/
[ovswitch@docker1 ~]$ rpmbuild -bb --without check ~/openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec
[ovswitch@docker1 ~]$ exit
[root@docker1 ~]# yum localinstall /home/ovswitch/rpmbuild/RPMS/x86_64/openvswitch-2.3.0-1.x86_64.rpm
[root@docker1 ~]# systemctl start openvswitch.service    # start OVS
[root@docker1 ~]# systemctl status openvswitch.service -l    # check the service status
● openvswitch.service - LSB: Open vSwitch switch
Loaded: loaded (/etc/rc.d/init.d/openvswitch)
Active: active (running) since Fri 2016-04-22 02:37:10 EDT; 9s ago
Docs: man:systemd-sysv-generator(8)
Process: 24616 ExecStart=/etc/rc.d/init.d/openvswitch start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/openvswitch.service
├─24640 ovsdb-server: monitoring pid 24641 (healthy)
├─24641 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor
├─24652 ovs-vswitchd: monitoring pid 24653 (healthy)
└─24653 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
Apr 22 02:37:10 docker1 openvswitch[24616]: /etc/openvswitch/conf.db does not exist ... (warning).
Apr 22 02:37:10 docker1 openvswitch[24616]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Starting ovsdb-server [ OK ]
Apr 22 02:37:10 docker1 ovs-vsctl[24642]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.6.0
Apr 22 02:37:10 docker1 ovs-vsctl[24647]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.3.0 "external-ids:system-id=\"7469bdac-d8b0-4593-b300-fd0931eacbc2\"" "system-type=\"unknown\"" "system-version=\"unknown\""
Apr 22 02:37:10 docker1 openvswitch[24616]: Configuring Open vSwitch system IDs [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Inserting openvswitch module [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Starting ovs-vswitchd [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Enabling remote OVSDB managers [ OK ]
Apr 22 02:37:10 docker1 systemd[1]: Started LSB: Open vSwitch switch.
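Once the service is running, the switch state can be inspected at any time. `ovs-vsctl show` lists the bridges, ports, and tunnel interfaces OVS knows about (it will be empty right after install, since the obr0 bridge is only created in the next step):

```shell
# Inspect the OVS database: bridges, ports, and tunnel interfaces
ovs-vsctl show
# List just the bridge names
ovs-vsctl list-br
```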
d. Create the bridge interfaces and routes on both docker1 and docker2
[root@docker1 ~]# cat /proc/sys/net/ipv4/ip_forward
1
[root@docker1 ~]# ovs-vsctl add-br obr0
[root@docker1 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.231
[root@docker1 ~]# brctl addbr kbr0
[root@docker1 ~]# brctl addif kbr0 obr0
[root@docker1 ~]# ip link set dev docker0 down
[root@docker1 ~]# ip link del dev docker0
[root@docker1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-kbr0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no
DEVICE=kbr0
[root@docker1 ~]# cat /etc/sysconfig/network-scripts/route-eth0
192.168.101.0/24 via 192.168.1.231 dev eth0
[root@docker1 ~]# systemctl restart network.service
[root@docker1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.2 0.0.0.0 UG 100 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1007 0 0 kbr0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 kbr0
192.168.101.0 192.168.1.231 255.255.255.0 UG 0 0 0 eth0
192.168.101.0 192.168.1.231 255.255.255.0 UG 100 0 0 eth0
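The commands and files above configure docker1 only; docker2 needs the mirror-image setup, with the GRE endpoint pointed back at docker1 and the addressing swapped. A sketch, assuming docker2 uses the 192.168.101.0/24 container subnet implied by the routing table (the kbr0 address 192.168.101.10 is an illustrative choice, not from the original):

```shell
# On docker2: the GRE tunnel points back at docker1's host IP
ovs-vsctl add-br obr0
ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.230
brctl addbr kbr0
brctl addif kbr0 obr0
# ifcfg-kbr0 on docker2: IPADDR=192.168.101.10, NETMASK=255.255.255.0
# route-eth0 on docker2 sends docker1's container subnet via docker1's host IP:
#   192.168.100.0/24 via 192.168.1.230 dev eth0
```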
e. Bind Docker's virtual interface to kbr0, then pull an image and start a container to test
[root@docker1 ~]# vi /etc/sysconfig/docker-network
# /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS="-b=kbr0"
[root@docker1 ~]# service docker restart
Redirecting to /bin/systemctl restart docker.service
Pull an image:
[root@docker1 ~]# docker search centos
[root@docker1 ~]# docker pull
[root@docker1 ~]# docker run -dti --name=mytest2 docker.io/nickistre/centos-lamp /bin/bash
[root@docker1 ~]# docker ps -l    # check the container's status
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
118479ccdebb docker.io/nickistre/centos-lamp "/bin/bash" 16 minutes ago Up About a minute 22/tcp, 80/tcp, 443/tcp mytest1
[root@docker1 ~]# docker attach 118479ccdebb    # attach to the container
[root@118479ccdebb ~]# ifconfig    # the container's automatically assigned IP address
eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:64:01
inet addr:192.168.100.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:c0ff:fea8:6401/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7112 errors:0 dropped:0 overruns:0 frame:0
TX packets:3738 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:12175213 (11.6 MiB) TX bytes:249982 (244.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:28 (28.0 b) TX bytes:28 (28.0 b)
[root@118479ccdebb ~]# ping 192.168.101.1    # 192.168.101.1 is the IP of the container on docker2
PING 192.168.101.1 (192.168.101.1) 56(84) bytes of data.
64 bytes from 192.168.101.1: icmp_seq=1 ttl=62 time=1.30 ms
64 bytes from 192.168.101.1: icmp_seq=2 ttl=62 time=0.620 ms
64 bytes from 192.168.101.1: icmp_seq=3 ttl=62 time=0.582 ms
At this point containers on different hosts can communicate with each other. The next question is how to reach a service inside a container through the host's IP.
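The usual answer is Docker's port publishing (`-p host_port:container_port`, used in the Dockerfile example that follows). For a container that is already running without published ports, the host can also forward traffic itself with netfilter DNAT. A sketch, assuming the container IP 192.168.100.1 from the ifconfig output above and its exposed web port 80; host port 8000 is an illustrative choice:

```shell
# Forward TCP traffic arriving at the host on port 8000 to the container's port 80
iptables -t nat -A PREROUTING -p tcp --dport 8000 -j DNAT --to-destination 192.168.100.1:80
# Allow the forwarded traffic through the FORWARD chain
iptables -A FORWARD -p tcp -d 192.168.100.1 --dport 80 -j ACCEPT
```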
f. Build an image with a Dockerfile
[root@docker1 ~]# cat Dockerfile
# base image
FROM docker.io/nickistre/centos-lamp
# author
MAINTAINER PAIPX
# install Tomcat and the JDK
ADD apache-tomcat-6.0.43 /usr/local/apache-tomcat-6.0.43
RUN cd /usr/local/ && mv apache-tomcat-6.0.43 tomcat
ADD jdk-6u22-linux-x64.bin /root/
RUN cd /root/ && chmod +x jdk-6u22-linux-x64.bin && ./jdk-6u22-linux-x64.bin && mkdir -p /usr/java/ && cp -a jdk1.6.0_22 /usr/java/jdk
# set environment variables
ENV JAVA_HOME /usr/java/jdk
ENV CLASSPATH $CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
ENV PATH $JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME
# expose the port
EXPOSE 8080
CMD [ "catalina.sh", "run"]
Build the new image:
[root@docker1 ~]# docker build -t tomcat2 .
Start a container:
[root@docker1 ~]# docker run -dti -p 8000:8080 --name=mytest4 tomcat2
The service can then be reached at http://<host-ip>:8000.
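Whether the mapping took effect can be checked from the host itself: `docker port` prints the published bindings, and `curl` exercises the mapped port (a quick check, assuming the mytest4 container above is running):

```shell
# Show the port bindings Docker created for the container
docker port mytest4
# Hit the mapped port; Tomcat should answer on host port 8000
curl -I http://localhost:8000/
```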