
Kubernetes deployed on multiple ubuntu nodes

Posted on 2018-1-4 15:10:40
  This document describes how to deploy Kubernetes on multiple Ubuntu nodes: 1 master node and 3 minion nodes. By changing a few settings, this approach scales with ease to any number of minion nodes. The cloud team from ZJU will keep updating this work.

Prerequisites:
  1 The minion nodes have Docker version 1.2+ installed
  2 All machines can communicate with each other; no Internet connection is needed (use a private Docker registry in this case)
  3 This guide is tested on Ubuntu 14.04 LTS 64-bit server, but it should also work on most Ubuntu versions
  4 Dependencies of this guide: etcd-2.0.0, flannel-0.2.0, k8s-0.12.0; it may also work with higher versions
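A quick way to check the first prerequisite on a minion (a sketch; the parsing assumes Docker's usual `Docker version X.Y.Z, build ...` output format):

```shell
# Check that Docker is installed on this minion and report its version.
# (Prerequisite 1 above asks for Docker 1.2+.)
DOCKER_VER=$(docker --version 2>/dev/null | awk '{print $3}' | tr -d ',')
if [ -z "$DOCKER_VER" ]; then
    echo "docker not found -- install Docker 1.2+ before continuing"
else
    echo "found Docker $DOCKER_VER"
fi
```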

Main Steps

I. Build kubernetes, etcd and flanneld binaries
  On your laptop, copy the cluster/ubuntu-cluster directory to your workspace.
  The build.sh script downloads and builds all the needed binaries into ./binaries.
  You can customize the etcd or K8s version by changing the variables ETCD_V and K8S_V in build.sh; the defaults are etcd 2.0.0 and K8s 0.12.0.
  

$ cd cluster/ubuntu-cluster  
$ sudo ./build.sh
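Pinning different versions is then just a matter of editing the two variables near the top of build.sh before running it (a sketch; the exact assignment syntax inside build.sh may differ):

```shell
# The guide's defaults; change these two assignments in build.sh to build
# other releases (ETCD_V and K8S_V are the variable names the guide cites).
ETCD_V=2.0.0
K8S_V=0.12.0
echo "build.sh will fetch etcd ${ETCD_V} and kubernetes ${K8S_V}"
```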
  

  

  Copy all the files in ./binaries into /opt/bin of every machine you want to run as a Kubernetes cluster node.
  Alternatively, if your Kubernetes nodes have Internet access, you can copy the cluster/ubuntu-cluster directory to every node and run:
  

# in every node
$ cd cluster/ubuntu-cluster
$ sudo ./build.sh
$ sudo cp ./binaries/* /opt/bin
  We use flannel here because we want an overlay network, but remember it is not the only choice, nor a necessary dependency of k8s. You can build up a k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like; we chose flannel here just as an example.
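After copying, a quick sanity check that the binaries actually landed on a node (the binary names below are assumed from the components this guide deploys; a master additionally needs kube-apiserver, kube-scheduler and kube-controller-manager):

```shell
# Count which of the expected binaries are missing from /opt/bin.
BIN_DIR=${BIN_DIR:-/opt/bin}
MISSING=0
for b in etcd etcdctl flanneld kubectl kubelet kube-proxy; do
    if [ ! -x "$BIN_DIR/$b" ]; then
        echo "missing: $BIN_DIR/$b"
        MISSING=$((MISSING + 1))
    fi
done
echo "$MISSING of 6 expected binaries missing"
```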


II. Configure and install every component's upstart script
  An example cluster is listed as below:

IP Address        Role
10.10.103.223     minion
10.10.103.224     minion
10.10.103.162     minion
10.10.103.250     master

  First of all, make sure cluster/ubuntu-cluster exists on this node, and run configure.sh.
  On the master ( infra1, 10.10.103.250 ) node:
  

# in cluster/ubuntu-cluster
$ sudo ./configure.sh
Welcome to use this script to configure k8s setup

Please enter all your cluster node ips, MASTER node comes first
And separated with blank space like "<ip_1> <ip2> <ip3>": 10.10.103.250 10.10.103.223 10.10.103.224 10.10.103.162

This machine acts as
both MASTER and MINION:      1
only MASTER:                 2
only MINION:                 3
Please choose a role > 2

IP address of this machine > 10.10.103.250

Configure Success

  On every minion ( e.g. 10.10.103.224 ) node:
  

# in cluster/ubuntu-cluster
$ sudo ./configure.sh
Welcome to use this script to configure k8s setup

Please enter all your cluster node ips, MASTER node comes first
And separated with blank space like "<ip_1> <ip2> <ip3>": 10.10.103.250 10.10.103.223 10.10.103.224 10.10.103.162

This machine acts as
both MASTER and MINION:      1
only MASTER:                 2
only MINION:                 3
Please choose a role > 3

IP address of this machine > 10.10.103.224

Configure Success

  If you want a node to act as both master and minion, choose option 1.

III. Start all components


  •   On the master node:
      $ sudo service etcd start
      Then on every minion node:
      $ sudo service etcd start

      The Kubernetes components will be started automatically after etcd is up.


  •   On any node:
      $ /opt/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.0.0.0/16"}'

      You can use the command below on another node to confirm that the network setting is correct.
      $ /opt/bin/etcdctl get /coreos.com/network/config
      If you get {"Network":"10.0.0.0/16"}, the etcd cluster is working well. If not, please check /var/log/upstart/etcd.log to resolve the etcd problem before going forward. Finally, use ifconfig to see if a new network interface named flannel0 comes up.
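The interface check can be scripted as well; `ip link` is used here in place of ifconfig, and the interface name flannel0 is taken from the step above:

```shell
# Report whether flanneld has brought up its overlay interface.
FLANNEL_STATUS=missing
if ip link show flannel0 >/dev/null 2>&1; then
    FLANNEL_STATUS=up
fi
echo "flannel0 is $FLANNEL_STATUS"
```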


  •   On every minion node
      Make sure brctl is installed on every minion; otherwise run sudo apt-get install bridge-utils
      $ sudo ./reconfigureDocker.sh
      This will make the Docker daemon aware of the flannel network.
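As a rough sketch of what reconfigureDocker.sh is expected to do (the real script may differ): flanneld writes the subnet it leased for this node to /run/flannel/subnet.env, and Docker has to be restarted with its bridge inside that subnet. The block below only prints the DOCKER_OPTS line it would install into /etc/default/docker, rather than restarting anything:

```shell
# flanneld's subnet file (this path is flannel's conventional default -- an
# assumption here). It defines FLANNEL_SUBNET and FLANNEL_MTU.
SUBNET_ENV=/run/flannel/subnet.env
DOCKER_OPTS_LINE=""
if [ -r "$SUBNET_ENV" ]; then
    . "$SUBNET_ENV"
    # --bip pins docker0 to flannel's subnet; --mtu matches the overlay MTU.
    DOCKER_OPTS_LINE="DOCKER_OPTS=\"--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\""
fi
echo "${DOCKER_OPTS_LINE:-no subnet file at $SUBNET_ENV -- is flanneld running?}"
```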

  All done!

IV. Validation
  You can use the kubectl command to see if the newly created k8s cluster is working correctly.
  For example, run $ kubectl get minions to see if all your minion nodes are coming up.
  You can also run the kubernetes guestbook example to build a redis-backed cluster on k8s.

V. Troubleshooting
  Generally, what this guide does is quite simple:


  •   Build and copy binaries and configuration files to the proper directories on every node

  •   Configure etcd using IPs based on input from user

  •   Create and start flannel network

  So, whenever you have a problem, do not blame Kubernetes; check the etcd configuration first.
  Please try:


  •   Check /var/log/upstart/etcd.log for suspicious etcd log entries

  •   Check /etc/default/etcd; as we do not have much input validation, a correct config should look like:
      

    ETCD_OPTS="-name infra1 -initial-advertise-peer-urls <http://ip_of_this_node:2380> -listen-peer-urls <http://ip_of_this_node:2380> -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=<http://ip_of_this_node:2380>,infra2=<http://ip_of_another_node:2380>,infra3=<http://ip_of_another_node:2380> -initial-cluster-state new"  
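Filled in for the example cluster above, on the 10.10.103.250 master, the line would read as follows, all on one line (a sketch based on the template; the member names infra1..infra4 are illustrative, and every node of this guide's cluster runs etcd, so all four appear in -initial-cluster):

```
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls http://10.10.103.250:2380 -listen-peer-urls http://10.10.103.250:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=http://10.10.103.250:2380,infra2=http://10.10.103.223:2380,infra3=http://10.10.103.224:2380,infra4=http://10.10.103.162:2380 -initial-cluster-state new"
```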

      


  •   Remove the data-dir of etcd and run reconfigureDocker.sh again; the default path of the data-dir is /infra*.etcd/

  •   You can also customize your own settings in /etc/default/{component_name} after a successful configuration.

