[Experience Share] Manually Deploying Kubernetes on an Ubuntu 16.04 Cluster

Introduction
  The kube-up scripts that Kubernetes currently provides for Ubuntu do not support 15.10 or 16.04, the two releases that use systemd as their init system.
  This post describes in detail how to manually install and deploy Kubernetes on an Ubuntu 16.04 cluster, running the components natively rather than inside Docker containers.
  The manual procedure can easily be turned into an automated deployment script, and walking through the whole process is also very helpful for a deeper understanding of the Kubernetes architecture and its functional modules.

Environment

Versions

Component      Version
etcd           2.3.1
Flannel        0.5.5
Kubernetes     1.3.4

Hosts

Host           IP                OS
k8s-master     172.16.203.133    Ubuntu 16.04
k8s-node01     172.16.203.134    Ubuntu 16.04
k8s-node02     172.16.203.135    Ubuntu 16.04

Install Docker
  Install the latest Docker Engine (currently 1.12) on every host - https://docs.docker.com/engine/installation/linux/ubuntulinux/
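  For reference, the steps on that page at the time amounted to roughly the following (a sketch assuming the 2016-era apt.dockerproject.org repository for xenial; defer to the linked page if it differs):

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
    --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo 'deb https://apt.dockerproject.org/repo ubuntu-xenial main' | \
    sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update && sudo apt-get install -y docker-engine
sudo systemctl status docker    # confirm the daemon is running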

Deploy the etcd cluster
  We will install and deploy an etcd cluster across the three hosts.

Download etcd
  On the deployment machine, download etcd:



ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64
for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'mkdir -p $HOME/kube' && scp -r etcd* user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube'; done
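  The binaries should now be in /opt/bin on every host; a quick version check confirms it:

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h '/opt/bin/etcd --version'; done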
Configure the etcd service
  On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service. In etcd.conf, change the ETCD_LISTEN_* and ETCD_*ADVERTISE_* addresses to each host's own IP (the example below shows the master's values).
  /opt/config/etcd.conf



sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=$(hostname)
ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379
GOMAXPROCS=$(nproc)
EOF
  /lib/systemd/system/etcd.service



[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
  Then run the following on every host:



sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
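  At this point the three members should have formed a cluster; verify with etcdctl from any host:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379" cluster-health
/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379" member list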
Download Flannel



FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
Build Kubernetes

Build Kubernetes on the deployment machine; this requires Docker Engine (1.12) and Go (1.6.2).





git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release-skip-tests
tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp
  Note
  Besides linux/amd64, the build also cross-compiles for other platforms by default. To shorten the build, edit hack/lib/golang.sh and comment out every platform other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS and KUBE_TEST_PLATFORMS.
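  For illustration, after that edit the relevant arrays in hack/lib/golang.sh would look roughly like this (a sketch; the exact platform entries vary by Kubernetes version):

readonly KUBE_SERVER_PLATFORMS=(
  linux/amd64
  # linux/arm       # commented out to skip cross-compilation
)
readonly KUBE_CLIENT_PLATFORMS=(
  linux/amd64
  # darwin/amd64
  # windows/amd64
)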

Deploy the Kubernetes Master

Copy the binaries



cd /tmp
scp kubernetes/server/bin/kube-apiserver \
kubernetes/server/bin/kube-controller-manager \
kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@172.16.203.133:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@172.16.203.133:~/kube
ssh -t user@172.16.203.133 'sudo mv ~/kube/* /opt/bin/'
Create certificates
  On the master host, run the following (as root, since /srv/kubernetes is root-owned) to create a CA plus a server certificate:



mkdir -p /srv/kubernetes/
cd /srv/kubernetes
export MASTER_IP=172.16.203.133
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000
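  To sanity-check the result, inspect the server certificate and confirm it chains to the CA:

openssl x509 -in /srv/kubernetes/server.crt -noout -subject -dates
openssl verify -CAfile /srv/kubernetes/ca.crt /srv/kubernetes/server.crt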
Configure the kube-apiserver service
  We will use the following service cluster IP range and Flannel network:
  SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16
  FLANNEL_NET=192.168.0.0/16
  On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:



[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd-servers=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
--logtostderr=true \
--allow-privileged=false \
--service-cluster-ip-range=172.18.0.0/16 \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
--service-node-port-range=30000-32767 \
--advertise-address=172.16.203.133 \
--client-ca-file=/srv/kubernetes/ca.crt \
--tls-cert-file=/srv/kubernetes/server.crt \
--tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-controller-manager service
  On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:



[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--root-ca-file=/srv/kubernetes/ca.crt \
--service-account-private-key-file=/srv/kubernetes/server.key \
--logtostderr=true
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-scheduler service
  On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:



[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
--logtostderr=true \
--master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the flanneld service
  On the master host, create /lib/systemd/system/flanneld.service with the following content:



[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service
[Service]
User=root
ExecStart=/opt/bin/flanneld \
--etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
--iface=172.16.203.133 \
--ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Start the services
  First write the Flannel network configuration into etcd, then enable and start everything:



/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" mk /coreos.com/network/config \
'{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'
sudo systemctl daemon-reload

sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld

sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
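  Once everything is up, a few quick checks confirm the control plane is healthy (a sketch; this assumes the kubectl binary from the build output was also copied to /opt/bin on the master):

/opt/bin/kubectl get componentstatuses    # scheduler, controller-manager and etcd members should report Healthy
curl http://127.0.0.1:8080/healthz        # apiserver liveness on the insecure port
/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379" get /coreos.com/network/config    # the Flannel config written above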
Modify the Docker service
  flanneld writes the subnet it leased to /run/flannel/subnet.env; reconfigure Docker to use it for the docker0 bridge:



source /run/flannel/subnet.env
sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
    sudo ip link set dev docker0 down
    sudo ip link delete docker0
fi

sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
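  After the restart, docker0 should sit inside the subnet Flannel allocated; a quick comparison:

source /run/flannel/subnet.env
echo "flannel subnet: ${FLANNEL_SUBNET}"
ip -4 addr show docker0 | grep inet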
Deploy the Kubernetes Nodes

Copy the binaries



cd /tmp
for h in k8s-master k8s-node01 k8s-node02; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube;done
for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done
Configure Flanneld and modify the Docker service
  Follow the corresponding steps from the Master section: configure the flanneld service, start it, and modify the Docker service. Remember to change the --iface address to each node's own IP.

Configure the kubelet service
  /lib/systemd/system/kubelet.service (adjust the IP in --hostname-override to each node's own address):



[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/opt/bin/kubelet \
--hostname-override=172.16.203.133 \
--api-servers=http://172.16.203.133:8080 \
--logtostderr=true
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
  Start the service



sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
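  If the kubelet registered successfully, the node appears on the master and should report Ready once Flannel and Docker are healthy (run on the master, assuming kubectl is in place):

kubectl get nodes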
Configure the kube-proxy service
  /lib/systemd/system/kube-proxy.service (again, adjust the --hostname-override IP per node):



[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/opt/bin/kube-proxy  \
--hostname-override=172.16.203.133 \
--master=http://172.16.203.133:8080 \
--logtostderr=true
Restart=on-failure
[Install]
WantedBy=multi-user.target
  Start the service



sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
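  With kubelet and kube-proxy running on all three hosts, a short smoke test exercises scheduling, the overlay network and NodePort exposure end to end (a sketch; the deployment name nginx is arbitrary):

kubectl run nginx --image=nginx --replicas=2 --port=80    # kubectl run creates a Deployment in 1.3
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get pods -o wide                                  # pods should get 192.168.x.x IPs on different nodes
kubectl describe service nginx | grep NodePort            # note the allocated port in 30000-32767
# curl http://<any-node-ip>:<nodeport>/ should return the nginx welcome page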
Deploy add-on components

Deploy DNS



DNS_SERVER_IP="172.18.8.8"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
KUBE_APISERVER_URL=http://172.16.203.133:8080
cat <<EOF > skydns.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: $DNS_REPLICAS
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=$DNS_DOMAIN.
        - --dns-port=10053
        - --kube-master-url=$KUBE_APISERVER_URL
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF
kubectl create -f skydns.yml
  Then modify kubelet.service on each node, adding --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local to the kubelet arguments, and restart the kubelet.
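  To verify cluster DNS from inside a pod, a throwaway busybox pod is enough (a quick check; the pod name is arbitrary):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl exec busybox -- nslookup kubernetes.default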

Deploy the Dashboard



cat <<'EOF' > kube-dashboard.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://172.16.203.133:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF
kubectl create -f kube-dashboard.yml
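  The dashboard Service is ClusterIP-only, so the simplest way to reach it from a workstation is through kubectl proxy on the master (a sketch; the proxy URL shown is the scheme used by the 1.3-era API):

kubectl proxy --port=8001 &
# then open http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/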
