
[Experience Share] Deploying a Kubernetes Cluster on CentOS 7.3.1611

  1. Environment and preparation:
  1.1 The operating system is 64-bit CentOS 7.3.1611.
  

[iyunv@ip11-cent7 ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)

  1.2 Host information
  Three machines are prepared for the k8s runtime environment, as follows:

IP address                    Hostname                 Deployed components
192.168.2.10 (Master node)    ip10-cent7-k8s-master    docker, flannel, etcd, registry, kubernetes (API Server, Controller Manager, Scheduler)
192.168.2.11 (Node)           ip11-cent7-k8s-node      docker, flannel, kubernetes (kubelet, kube-proxy)
192.168.2.12 (Node)           ip12-cent7-k8s-node      docker, flannel, kubernetes (kubelet, kube-proxy)
  Set the hostname on each of the three machines:
  

On the Master:
# hostnamectl --static set-hostname ip10-cent7-k8s-master
On Node1:
# hostnamectl --static set-hostname ip11-cent7-k8s-node
On Node2:
# hostnamectl --static set-hostname ip12-cent7-k8s-node
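
  A quick way to confirm the change took effect, using the same tool as above (the Static hostname field should show the new name):

# hostnamectl status | grep 'Static hostname'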

  Configure /etc/hosts on all three machines by running the following command:
  

echo '192.168.2.10 ip10-cent7-k8s-master
192.168.2.10 etcd
192.168.2.10 registry
192.168.2.11 ip11-cent7-k8s-node
192.168.2.12 ip12-cent7-k8s-node' >> /etc/hosts
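
  The configs below refer to etcd and registry by name, so it is worth checking on each machine that the aliases resolve; a minimal check with standard tools:

getent hosts etcd registry
ping -c 1 ip10-cent7-k8s-master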
  

  Disable the firewall on all three machines, then install the required software
  

Run on all three machines:
systemctl disable firewalld.service
systemctl stop firewalld.service
yum install docker -y
yum install flannel -y
yum install kubernetes -y
On the Master node, additionally run (for the registry installation, see my earlier article http://www.cnblogs.com/hwp0710/p/7499587.html):
yum install etcd -y
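
  Before going further, it can help to confirm the packages actually installed; a small sanity check with rpm (package names match the yum installs above):

# On every machine
rpm -q docker flannel kubernetes
# On the Master only
rpm -q etcd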
  

  2. All the software is now installed; next, configure the Master node.
  Configure Flannel: edit /etc/sysconfig/flanneld and modify the values shown below:
  

[iyunv@k8s-master ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  

  The default etcd configuration file is /etc/etcd/etcd.conf. Edit it and change the following values:
  

[iyunv@localhost ~]# vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
#if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""

  Verify that etcd works
  

[iyunv@localhost ~]# systemctl enable etcd.service
[iyunv@localhost ~]# systemctl start etcd
[iyunv@localhost ~]# etcdctl set testdir/testkey0 0
0
[iyunv@localhost ~]# etcdctl get testdir/testkey0
0
[iyunv@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[iyunv@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
  

  Configure the flannel key in etcd on the Master node
  Flannel stores its configuration in etcd so that multiple flannel instances stay consistent, which requires the following entry in etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flannel will fail to start.)
  

[iyunv@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'  
{ "Network": "10.0.0.0/16" }
  

  Configure Kubernetes (components: Kubernetes API Server, Kubernetes Controller Manager, Kubernetes Scheduler)
  Configure the apiserver
  

[iyunv@k8s-master ~]# vi /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

  Configure /etc/kubernetes/config
  

[iyunv@k8s-master ~]# vi /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://ip10-cent7-k8s-master:8080"

  Enable the services at boot and start them
  

systemctl enable etcd.service flanneld.service kube-apiserver.service kube-controller-manager.service kube-scheduler.service
systemctl restart etcd.service
systemctl restart flanneld.service
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
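
  Before moving on to the nodes, confirm the control plane answers. A quick check (the /healthz endpoint and the componentstatuses listing are both standard in this Kubernetes release; every component should report Healthy):

[iyunv@k8s-master ~]# curl http://ip10-cent7-k8s-master:8080/healthz
ok
[iyunv@k8s-master ~]# kubectl -s http://ip10-cent7-k8s-master:8080 get componentstatuses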
  

  3. Deploy and configure the nodes
  Configure Flannel: edit /etc/sysconfig/flanneld and make the same changes as on the Master:
  

[iyunv@K8s-node-1 ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  

  Configure /etc/kubernetes/config
  

[iyunv@K8s-node-1 ~]# vi /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://ip10-cent7-k8s-master:8080"

  Configure /etc/kubernetes/kubelet (on the second node, set the hostname override to ip12-cent7-k8s-node instead):
  

[iyunv@K8s-node-1 ~]# vi /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=ip11-cent7-k8s-node"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://ip10-cent7-k8s-master:8080"

# pod infrastructure container
# KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# The image below points at the private registry, because the upstream image is
# very hard to download from here; it has to be pulled in advance and pushed to
# the private registry. Section 5 shows how to download and push it; make the
# change here ahead of time.
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.2.10:5000/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
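
  One practical note: the kubelet above will pull the pod-infrastructure image over plain HTTP. If pulls from 192.168.2.10:5000 later fail with an HTTPS/TLS error, the registry most likely needs to be declared insecure on each node; on the stock CentOS 7 docker package this is conventionally set in /etc/sysconfig/docker (a sketch, assuming that package layout):

# /etc/sysconfig/docker on each node (variable may be OPTIONS or
# INSECURE_REGISTRY depending on the docker build)
INSECURE_REGISTRY='--insecure-registry 192.168.2.10:5000'
# afterwards: systemctl restart docker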

  Enable the services at boot and start them
  

systemctl enable flanneld.service kubelet.service kube-proxy.service
systemctl restart flanneld.service
systemctl restart kubelet.service
systemctl restart kube-proxy.service

  On the master, list the cluster nodes and their status
  

[iyunv@ip10-cent7-k8s-master ~]# kubectl -s http://ip10-cent7-k8s-master:8080 get node
NAME                  STATUS    AGE
ip11-cent7-k8s-node   Ready     17s
ip12-cent7-k8s-node   Ready     22s
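
  If a node is missing from the list or stays NotReady, the kubelet journal on that node is the first place to look (unit names as installed above):

journalctl -u kubelet -u kube-proxy --no-pager -n 50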
  

  At this point a Kubernetes cluster has been built, but it does not yet work properly; continue with the following steps.
  4. Restart docker and the Kubernetes services in order.
  

On the master:
systemctl restart etcd.service
systemctl restart flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On the nodes:
systemctl restart flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
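
  The ordering matters because docker picks up flannel's subnet lease when it starts. A way to confirm they agree after the restarts (the lease file path is where the CentOS flannel package conventionally writes it; the flannel0 interface name assumes the default UDP backend; docker0 should sit inside the FLANNEL_SUBNET range):

cat /run/flannel/subnet.env
ip addr show flannel0
ip addr show docker0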

  5. Deploy the Dashboard on the Kubernetes cluster
  First prepare the images. It took repeated retries before these two images finally downloaded locally:
  

docker pull rainf/kubernetes-dashboard-amd64
docker pull tianyebj/pod-infrastructure

  Then push both images to the local private registry:
  

[iyunv@ip10-cent7-k8s-master ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 192.168.2.10:5000/pod-infrastructure:latest
[iyunv@ip10-cent7-k8s-master ~]# docker push 192.168.2.10:5000/pod-infrastructure:latest

[iyunv@ip10-cent7-k8s-master ~]# docker tag docker.io/rainf/kubernetes-dashboard-amd64:latest 192.168.2.10:5000/kubernetes-dashboard-amd64:v1.5.1
[iyunv@ip10-cent7-k8s-master ~]# docker push 192.168.2.10:5000/kubernetes-dashboard-amd64:v1.5.1
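
  To confirm both pushes landed, the registry can be queried directly (assuming the registry from the linked article speaks the standard v2 API; both repositories should appear in the listing):

curl http://192.168.2.10:5000/v2/_catalog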

  Create dashboard.yaml, paying attention to (and adjusting where needed) the values below:
  

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.2.10:5000/kubernetes-dashboard-amd64:v1.5.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.2.10:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
  

  Create the dashboardsvc.yaml file:
  

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
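
  This Service only gets a cluster IP, so it is reachable from inside the cluster and through the apiserver's /ui proxy used below. If direct LAN access to the dashboard were wanted instead, a NodePort variant of the same Service is the usual alternative; a sketch, where 30090 is an arbitrarily chosen port in the default NodePort range:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090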

  Run the following commands on the master:
  

kubectl create -f dashboard.yaml
kubectl create -f dashboardsvc.yaml

  Verify from the command line; run the following on the master:
  

[iyunv@ip10-cent7-k8s-master k8s]# kubectl get deployment --all-namespaces
NAMESPACE     NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kubernetes-dashboard-latest   1         1         1            1           6s
[iyunv@ip10-cent7-k8s-master k8s]# kubectl get svc --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
default       kubernetes             10.254.0.1      <none>        443/TCP   1h
kube-system   kubernetes-dashboard   10.254.211.61   <none>        80/TCP    14s
[iyunv@ip10-cent7-k8s-master k8s]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE       IP          NODE
kube-system   kubernetes-dashboard-latest-3394205155-sgmk5   1/1       Running   0          19s       10.0.46.2   ip11-cent7-k8s-node
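
  The pod received the overlay address 10.0.46.2, so a cross-host request to it doubles as a flannel smoke test (run from the master; IP and port taken from the pod output and container spec above; it should return HTTP 200 from the dashboard container):

curl -I http://10.0.46.2:9090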

  Verify in the browser by visiting http://192.168.2.10:8080/ui
  To tear the application down, run on the master:
  

kubectl delete deployment kubernetes-dashboard-latest --namespace=kube-system
kubectl delete svc kubernetes-dashboard --namespace=kube-system
