[Experience Sharing] Setting up a Kubernetes Cluster on CentOS 7

1. Test Environment

  Operating system: CentOS 7

  • Control machine  192.168.1.200
  • kube-master      192.168.1.210
  • kube-minion-1    192.168.1.211
  • kube-minion-2    192.168.1.212
  • kube-minion-3    192.168.1.213
2. Preparation

2.1 Set a static IP on every server
  Under /etc/sysconfig/network-scripts, the file named ifcfg-<interface> corresponds to the NIC in question; in this article the file is ifcfg-enp0s3.

ONBOOT=yes              # bring the interface up at boot
BOOTPROTO=static        # use a static IP
IPADDR=192.168.1.200    # this host's address
NETMASK=255.255.255.0   # subnet mask
GATEWAY=192.168.1.1     # default gateway
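  After editing the file, restart networking so the static address takes effect (a minimal sketch; it assumes the classic network service manages the interface on CentOS 7):

systemctl restart network    # re-read the ifcfg-* files and apply the static address
ip addr show enp0s3          # confirm the interface now carries the configured address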
  

2.2 Set DNS on every server
  Configuration file: /etc/sysconfig/network

# Created by anaconda
DNS1=192.168.1.1
DNS2=8.8.8.8

2.3 Change the hostname on every server

hostnamectl --static set-hostname [hostname]

  Note: use the same name as the corresponding entry in /etc/hosts!

2.4 Copy the control machine's public key to every server for password-less login
  Use ssh-keygen -t rsa and ssh-copy-id; the details are not repeated here.
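  For reference, a minimal sketch of the key setup on the control machine (the root user and default key path are assumptions):

ssh-keygen -t rsa                                     # generate a key pair, accept the default ~/.ssh/id_rsa
for ip in 192.168.1.210 192.168.1.211 192.168.1.212 192.168.1.213; do
  ssh-copy-id root@$ip                                # copy the public key; prompts once for each host's password
done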

2.5 Install Ansible on the control machine

yum install ansible

2.6 Configure Ansible
  Edit /etc/ansible/hosts and append the server information at the end, defining three groups: kube, master, and nodes:

[kube]
192.168.1.[210:213]

[master]
192.168.1.210

[nodes]
192.168.1.[211:213]

2.7 Test Ansible
  Query the uptime of every server in the kube group:

ansible kube -a 'uptime'
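  A quick connectivity check before running any modules is Ansible's built-in ping module (standard Ansible, no extra setup assumed):

ansible kube -m ping    # each host in the group should answer with "pong"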

2.8 Install the EPEL repository on every server

ansible kube -m shell -a 'yum -y install epel-release'

2.9 Update /etc/hosts on every server

ansible kube -m shell -a 'echo -e "192.168.1.210    kube-master\n192.168.1.211    kube-minion-1\n192.168.1.212    kube-minion-2\n192.168.1.213    kube-minion-3" >> /etc/hosts'

  Note: without -m shell, Ansible defaults to the command module, which does not process the >> redirection, so the append fails.

2.10 View /etc/hosts on every host

ansible kube -a 'cat /etc/hosts'

2.11 Install Docker on every server

ansible kube -m shell -a 'yum -y install docker'

  or, equivalently, using the yum module:

ansible kube -m yum -a 'name=docker state=present'

2.12 Verify that Docker was installed on every server

ansible kube -m yum -a 'name=docker state=present'

  (If Docker is already installed, the yum module simply reports each host as unchanged.)

2.13 Enable Docker at boot and start the service on every server

ansible kube -m service -a 'name=docker state=restarted enabled=yes'

2.14 Check that the Docker service started correctly on every server

ansible kube -m shell -a 'systemctl status docker'

2.15 Install the time-synchronization tool NTP on every server

ansible kube -m yum -a 'name=ntp state=present'

  Note: the NTP configuration file is /etc/ntp.conf, where the NTP servers can be changed. See http://www.pool.ntp.org/zh/ for details; pool.ntp.org is a virtual cluster of highly available time servers. The site recommends using the default domain names below; each name rotates to a different set of NTP servers every hour, and servers close to you are returned when synchronizing.

server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org

2.16 Start the NTP service on every server and enable it at boot

ansible kube -m service -a 'name=ntpd state=restarted enabled=yes'

2.17 Check that the NTP service started correctly on every server

ansible kube -m shell -a 'systemctl status ntpd'

2.18 View the NTP servers each host has selected

ansible kube -a 'ntpq -p'

  Note: it may take a few minutes for the NTP server list to be populated and the clocks to synchronize; it is normal for each host to end up with a different set of NTP servers.

3. Install and Configure Kubernetes
  The following steps are based on the official Kubernetes CentOS guide:
  https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/

3.1 Configure the YUM repository on every server

ansible kube -m shell -a 'echo "[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0" > /etc/yum.repos.d/virt7-docker-common-release.repo'

3.2 Install Kubernetes, etcd, and flannel on every server

ansible kube -m shell -a 'yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel'

3.3 Modify /etc/kubernetes/config on every server
  Default content:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

  Change KUBE_MASTER to:

KUBE_MASTER="--master=http://kube-master:8080"

  Command to run from the control machine:

ansible kube -m shell -a 'echo "###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR=\"--logtostderr=true\"

# journal message level, 0 is debug
KUBE_LOG_LEVEL=\"--v=0\"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV=\"--allow-privileged=false\"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER=\"--master=http://kube-master:8080\"" > /etc/kubernetes/config'
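  Since only the KUBE_MASTER line actually changes, an alternative sketch is to edit just that line with Ansible's replace module instead of rewriting the whole file (the regexp below matches the default value shown above; the path parameter assumes a reasonably recent Ansible, older releases call it dest):

ansible kube -m replace -a "path=/etc/kubernetes/config regexp='--master=http://127.0.0.1:8080' replace='--master=http://kube-master:8080'"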
  

3.4 Disable SELinux and the firewall on every server, then reboot

ansible kube -m shell -a 'setenforce 0;
systemctl disable firewalld;
systemctl stop firewalld;
reboot'
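  Note that setenforce 0 only lasts until the next boot, and the servers are rebooted immediately afterwards; a sketch of making the change persistent as well (assuming the default /etc/selinux/config layout):

ansible kube -m shell -a "sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config"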
  

3.5 Modify the etcd configuration on kube-master
  The configuration file is /etc/etcd/etcd.conf. Make sure the following parameters match the values below; the main change is replacing localhost with 0.0.0.0 in the two URLs:

# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
  

3.6 Modify the apiserver configuration on kube-master
  Open /etc/kubernetes/apiserver and replace its contents with the following:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

  Note: ServiceAccount has been removed from the default KUBE_ADMISSION_CONTROL list.

3.7 Start etcd on kube-master

systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
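  To confirm that the flannel network configuration landed in etcd, read it back (a quick check with the same etcdctl v2 client used above):

etcdctl get /kube-centos/network/config
# should print the JSON written above: { "Network": "172.30.0.0/16", ... }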
  

3.8 Modify the flannel configuration on every server
  Configuration file /etc/sysconfig/flanneld; change it to the following:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://kube-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

  Batch command:

ansible kube -m shell -a 'echo "# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS=\"http://kube-master:2379\"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX=\"/kube-centos/network\"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=\"\"" > /etc/sysconfig/flanneld'
  

3.9 Start the services on kube-master

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done
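  Once the control-plane services are up, a quick sanity check from kube-master is to ask the apiserver for component health (kubectl talks to the insecure port 8080 configured above):

kubectl -s http://kube-master:8080 get componentstatuses
# scheduler, controller-manager and etcd-0 should all report Healthy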
  

3.10 Configure the kubelet on all servers in the nodes group
  Configuration file /etc/kubernetes/kubelet; change it to the following:

# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=kube-minion-n"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://kube-master:8080"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

  Note: KUBELET_HOSTNAME can be left blank (or commented out) to use the server's own hostname as the kubelet name; if you do set it, replace kube-minion-n with each node's actual name.
  Batch command:

ansible nodes -m shell -a 'echo "# The address for the info server to serve on
KUBELET_ADDRESS=\"--address=0.0.0.0\"

# The port for the info server to serve on
KUBELET_PORT=\"--port=10250\"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME=\"--hostname-override=kube-minion-n\"

# Location of the api-server
KUBELET_API_SERVER=\"--api-servers=http://kube-master:8080\"

KUBELET_POD_INFRA_CONTAINER=\"--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest\"

# Add your own!
KUBELET_ARGS=\"\"" >/etc/kubernetes/kubelet'

  Note: KUBELET_HOSTNAME must be changed on each node to match its name in /etc/hosts (i.e. replace kube-minion-n with kube-minion-1, kube-minion-2, or kube-minion-3).
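  One way to do that per-node substitution from the control machine is to let each node replace the placeholder with its own hostname (a sketch; it assumes the hostnames set in step 2.3 match /etc/hosts):

ansible nodes -m shell -a 'sed -i "s/kube-minion-n/$(hostname)/" /etc/kubernetes/kubelet'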

3.11 Start the services on all servers in the nodes group

ansible nodes -m shell -a 'for SERVICES in kube-proxy kubelet flanneld docker; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done'
  

3.12 Bring up the Kubernetes cluster from kube-master

kubectl config set-cluster default-cluster --server=http://kube-master:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
kubectl get nodes
  

3.13 Done!
  At this point the Kubernetes cluster is fully set up. Give yourself a pat on the back, nice work! :P

4. Setting up the Dashboard

4.1 Obtain the Dashboard Docker images
  The following two Docker images are needed:

  • gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
  • registry.access.redhat.com/rhel7/pod-infrastructure:latest

  Because gcr.io is blocked in mainland China, the first image cannot be pulled directly; download it on a host that can reach it and then add it to every host. Alternatively, tag it with docker tag, push it to a private Docker registry, and pull it from there.
  Commands:

# export the image
docker save gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1 > dashboard.tar

# import the image
docker load < dashboard.tar
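  A sketch of pushing the exported tarball to every node from the control machine and loading it there (the file name dashboard.tar and the /tmp destination are assumptions):

ansible nodes -m copy -a 'src=dashboard.tar dest=/tmp/dashboard.tar'   # copy the image archive to each node
ansible nodes -m shell -a 'docker load < /tmp/dashboard.tar'           # load it into the local Docker image store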
  

4.2 Edit the Dashboard YAML
  Edit kubernetes-dashboard.yaml with the following content:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # Note: this is the master's API address; use the master's IP here,
          # using the host name fails with an "unreachable" error.
          - --apiserver-host=http://192.168.1.210:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
  

4.3 Deploy the Dashboard
  Run on kube-master:

# deploy the Dashboard
kubectl create -f kubernetes-dashboard.yaml

# check pod status
kubectl get pods --all-namespaces
# example output:
# NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
# kube-system   kubernetes-dashboard-3345393181-6vq94   1/1       Running   0          44m
# kube-system   zl-redis-1545002913-89r4m               1/1       Running   0          38m
# kube-system   zl-redis-1545002913-cbgv5               1/1       Running   0          38m

# describe a single pod
kubectl describe pod/[pod-name] --namespace=[namespace]
# e.g. kubectl describe pod/zl-redis-1545002913-cbgv5 --namespace=kube-system

# view a pod's logs
kubectl logs -f [pod-name] --namespace=[namespace]
# e.g. kubectl logs -f zl-redis-1545002913-cbgv5 --namespace=kube-system
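  The Service defined above is of type NodePort without a fixed nodePort, so Kubernetes assigns one at random; the assigned port can be looked up as follows, after which the Dashboard is reachable on any node's IP at that port:

kubectl get svc kubernetes-dashboard --namespace=kube-system
# the PORT(S) column shows something like 80:3XXXX/TCP; 3XXXX is the assigned NodePort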
  

  

4.4 Access from a browser
  Open the kube-master URL: http://192.168.1.210:8080/ui


