[Experience Sharing] Installing a k8s Cluster with kubeadm

Environment preparation
  Prepare three machines for a k8s cluster with one master and two nodes. This article installs k8s on CentOS 7.
  

192.168.144.2  (k8s-master)  

192.168.144.3  (k8s-node-1)  

192.168.144.4  (k8s-node-2)  


1. Set the hostname on each node
  Edit /etc/hostname with vim on each node and change it to the corresponding name.
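  On CentOS 7 you can do the same without editing the file by hand, using hostnamectl; run the matching command on each machine:

hostnamectl set-hostname k8s-master    # on 192.168.144.2
hostnamectl set-hostname k8s-node-1    # on 192.168.144.3
hostnamectl set-hostname k8s-node-2    # on 192.168.144.4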

2. Add the corresponding IP mappings to the hosts file on all three machines

cat >> /etc/hosts <<EOF
192.168.144.2  k8s-master
192.168.144.3  k8s-node-1
192.168.144.4  k8s-node-2
EOF


3. Disable SELinux and the firewall
  

setenforce 0  
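  Note that setenforce 0 only lasts until the next reboot. In a test environment you will usually also want to persist the SELinux setting and stop the default CentOS 7 firewalld service (an assumption here; on a production machine open the required ports instead):

# persist the SELinux setting across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# stop and disable the firewall (test environments only)
systemctl stop firewalld
systemctl disable firewalld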


4. Install docker-1.12.6 and start it

yum install -y docker
systemctl start docker
systemctl enable docker
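  The docker package here comes from the CentOS 7 extras repository; you can confirm which version was actually installed:

docker version
rpm -q docker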
  


5. Network settings

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

  Apply it:
  

sysctl -p /etc/sysctl.d/k8s.conf  
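  If sysctl -p reports that these keys do not exist, the br_netfilter kernel module is most likely not loaded yet; load it and check the values again:

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables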


Installing k8s

1. Add the k8s yum repository
  First use curl to test whether the k8s repository URL is reachable:
  

curl http://yum.kubernetes.io/repos/kubernetes-el7-x86_64  

  If it is reachable, add a yum repository with the following command:
  

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
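  A quick sanity check that the repository was registered:

yum repolist | grep -i kubernetes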
  


2. Install kubeadm, kubelet, kubectl, and kubernetes-cni

yum install -y kubelet-1.6.2 kubeadm-1.6.2 kubectl-1.6.2 kubernetes-cni
systemctl enable kubelet.service
  


3. Initialize the cluster (if you cannot reach the Docker image registry, prepare the required images in advance)
  

kubeadm init --kubernetes-version=v1.6.2 --pod-network-cidr=10.96.0.0/16  

  
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.2
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [node0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.144.2]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.583864 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 6.008990 seconds
[token] Using token: e7986d.e440de5882342711
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token e7986d.e440de5882342711 192.168.144.2:6443
  


  Because the cluster network will later be installed with flannel, whose default network is 10.244.0.0/16, you can pass --pod-network-cidr=10.244.0.0/16 at this step instead if you do not want to change the CIDR in kube-flannel.yml to 10.96.0.0/16 later.
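  In that case the initialization command would look like this (a sketch mirroring the command above, with only the CIDR flag changed; the rest of this article assumes the 10.96.0.0/16 variant):

kubeadm init --kubernetes-version=v1.6.2 --pod-network-cidr=10.244.0.0/16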


If the installation did not go well, you can clean up the installed k8s and start over:
  

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
  


4. Give kubectl the admin credentials so it can call the api-server
  

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
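  A quick way to confirm that kubectl can now reach the api-server:

kubectl cluster-info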
  


5. List the existing nodes on the master node; the status will not change to Ready until dns has finished initializing
  

kubectl get nodes
NAME      STATUS     AGE       VERSION
node0     NotReady   3m        v1.6.1


6. Check the pod status
  Run kubectl get pod --all-namespaces -o wide and wait patiently; in the end every pod except dns reaches the running state. dns will change to running on its own once the flannel network is installed. A watch variant of the command is shown below.
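  For convenience, kubectl get supports a -w flag that keeps watching and prints updates as pod states change:

kubectl get pod --all-namespaces -o wide -w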

7. Let the master node take part in the workload
  In a cluster initialized with kubeadm, pods are not scheduled onto the master node for security reasons; that is, the master node does not take part in the workload.
  Since this is a test environment, you can let the master node participate with the following command:
  

kubectl taint nodes --all  node-role.kubernetes.io/master-  
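  If you later want to restore the default behaviour and keep ordinary pods off the master, the taint can be re-applied (a sketch; substitute your master's node name if it differs from k8s-master):

kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule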


8. Install Flannel to initialize the pod network
  

mkdir -p flannel
cd flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
cp kube-flannel.yml kube-flannel.yml.default
sed -i 's/10.244.0.0/10.96.0.0/g' kube-flannel.yml
sed -i 's/"--kube-subnet-mgr"/"--kube-subnet-mgr", "--iface=eth0"/g' kube-flannel.yml
kubectl create -f kube-flannel-rbac.yml
kubectl apply -f kube-flannel.yml
  


  Two changes are made to kube-flannel.yml here:
  1. Set flannel's network CIDR to 10.96.0.0/16, matching the --pod-network-cidr used at init.
  2. Pin the network interface with --iface; if no interface is specified, flannel may end up on the wrong one (here it fell back to lo).
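  You can check on the flannel pods in the kube-system namespace while you wait:

kubectl get pods --namespace=kube-system -o wide | grep flannel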
  When all pods have reached the running state, test whether dns works:
  

kubectl run curl --image=radial/busyboxplus:curl -i --tty  

  You will then see output like the following:
  

Waiting for pod default/curl-2421989462-vldmp to be running, status is Pending, pod ready: false
Waiting for pod default/curl-2421989462-vldmp to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
[ root@curl-2421989462-vldmp:/ ]$
  

  Then type nslookup kubernetes.default at the prompt:
  

[ root@curl-2421989462-vldmp:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

  After the test succeeds, delete the test pod:
  

kubectl delete deploy curl  


9. Join k8s-node-1 and k8s-node-2 to the cluster
  On each node, run the command that the console printed during initialization above: kubeadm join --token e7986d.e440de5882342711 192.168.144.2:6443
  

kubeadm join --token e7986d.e440de5882342711 192.168.144.2:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.144.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.144.2:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.144.2:6443"
[discovery] Successfully established connection with API Server "192.168.144.2:6443"
[bootstrap] Detected server version: v1.6.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

  List the nodes in the cluster:
  

kubectl get nodes
NAME         STATUS    AGE       VERSION
k8s-master   Ready     12m       v1.6.2
k8s-node-1   Ready     4m        v1.6.2
k8s-node-2   Ready     2m        v1.6.2


10. Install the Dashboard
  

mkdir -p dashboard
cd dashboard

wget https://git.io/kube-dashboard
mv kube-dashboard kube-dashboard.yml
kubectl create -f kube-dashboard.yml
kubectl proxy &
cd ..
  

  Run kubectl get pods --all-namespaces | grep dashboard to check whether the dashboard pod started successfully.
  Run curl http://127.0.0.1:8001/ui to check whether the page can be reached.
