Kubernetes 1.5.1 Configuration Guide
# 1 Initialize the Environment
## 1.1 Environment
| Node | IP |
|--------|-------------|
|master|192.168.99.117|
|node1|192.168.99.118|
|node2|192.168.99.119|
Prerequisite: set up a private Docker registry (see the private Docker registry guide).
## 1.2 Set the hostname
hostnamectl --static set-hostname <hostname>   # run on every node with its own name
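For example, with the three machines from the table in 1.1:

```bash
# Run the matching command on each machine:
hostnamectl --static set-hostname master   # on 192.168.99.117
hostnamectl --static set-hostname node1    # on 192.168.99.118
hostnamectl --static set-hostname node2    # on 192.168.99.119
```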
## 1.3 Configure /etc/hosts (on all nodes)
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.99.117 master
192.168.99.118 node1
192.168.99.119 node2
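A quick optional check that every node resolves the others (not part of the original steps):

```bash
# Each hostname should answer from its /etc/hosts entry
for h in master node1 node2; do ping -c 1 $h; done
```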
# 2.0 Deploy the Kubernetes master
## 2.1 Add the yum repository
# Configure the yum repository (the [kubernetes] section header is required by yum and was missing here)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF
# Build the yum cache, then install the packages
yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
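An optional sanity check that the repository is active and the packages landed:

```bash
yum repolist | grep -i mritd
rpm -q socat kubelet kubeadm kubectl kubernetes-cni
```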
## 2.2 Install Docker
wget -qO- https://get.docker.com/ | sh
or
yum install -y docker
systemctl enable docker
systemctl start docker
## 2.3 Install the etcd cluster (configure every node; adapt to your own setup. A three-node cluster is shown below.)
yum -y install etcd
# Create the etcd data directory
mkdir -p /opt/etcd/data
chown -R etcd:etcd /opt/etcd/
# Edit the configuration file /etc/etcd/etcd.conf; the following parameters need to change.
# The master node is shown first as an example:
# cat /etc/etcd/etcd.conf
#
#
ETCD_NAME=etcd1  # use etcd2 / etcd3 on the other nodes
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"  # likewise, adjust per node
ETCD_LISTEN_PEER_URLS="http://192.168.99.117:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.117:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.117:2380"
#
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.99.117:2380,etcd2=http://192.168.99.118:2380,etcd3=http://192.168.99.119:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.117:2379"
# The node1 configuration follows:
# cat /etc/etcd/etcd.conf
#
#
ETCD_NAME=etcd2
ETCD_DATA_DIR="/opt/etcd/data/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.99.118:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.118:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.118:2380"
#
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.99.117:2380,etcd2=http://192.168.99.118:2380,etcd3=http://192.168.99.119:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.118:2379"
# Modify the etcd systemd unit file
sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
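After editing the unit file, reload systemd so the change takes effect, and inspect the resulting ExecStart line before starting etcd:

```bash
systemctl daemon-reload
grep ExecStart /usr/lib/systemd/system/etcd.service
```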
# Start etcd
systemctl enable etcd
systemctl start etcd
systemctl status etcd
# Check the cluster state
etcdctl cluster-health
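Besides etcdctl, each member's HTTP /health endpoint can be probed directly; a minimal sketch using the three client URLs configured above:

```bash
# Each endpoint should return {"health": "true"}
for ep in 192.168.99.117 192.168.99.118 192.168.99.119; do
  curl -s http://$ep:2379/health; echo
done
```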
## 2.4 Pull the images (on every node; alternatively, pull once, push to the private registry, then pull from the registry on each node)
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
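To populate the private registry mentioned above (assumed to be 192.168.99.117:5000, the address used in section 3.2), a sketch of the push step; run it in the same shell so the images array is still defined:

```bash
# Retag each image for the private registry and push it
for imageName in ${images[@]} ; do
  docker tag gcr.io/google_containers/$imageName 192.168.99.117:5000/$imageName
  docker push 192.168.99.117:5000/$imageName
done
```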
# If pulls are slow, configure a mirror: add --registry-mirror="http://b438f72b.m.daocloud.io" to the docker startup options
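One way to wire that flag in on CentOS 7, assuming a yum-installed docker that reads OPTIONS from /etc/sysconfig/docker (a get.docker.com install may use a systemd drop-in instead):

```bash
# Append the mirror flag to the OPTIONS line, then restart docker
sed -i "s|^OPTIONS='|OPTIONS='--registry-mirror=http://b438f72b.m.daocloud.io |" /etc/sysconfig/docker
systemctl restart docker
```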
## 2.5 Start Kubernetes
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet  # watch the status closely
## 2.6 Create the cluster
kubeadm init --api-advertise-addresses=192.168.99.117 --external-etcd-endpoints=http://192.168.99.117:2379,http://192.168.99.118:2379,http://192.168.99.119:2379 --use-kubernetes-version v1.5.1 --pod-network-cidr 10.244.0.0/16
Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
WARNING: kubeadm is in alpha, please do not use it for production clusters.
Running pre-flight checks
Starting the kubelet service
Using Kubernetes version: v1.5.1
Generated token: "c53ef2.d257d49589d634f0"
Generated Certificate Authority key and certificate.
Generated API Server key and certificate
Generated Service Account signing keys
Created keys and certificates in "/etc/kubernetes/pki"
Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
Created API client, waiting for the control plane to become ready
All control plane components are healthy after 15.299235 seconds
Waiting for at least one node to register and become ready
First node is ready after 1.002937 seconds
Creating a test deployment
Test deployment succeeded
Created the kube-discovery deployment, waiting for it to become ready
kube-discovery is ready after 2.502881 seconds
Created essential addon: kube-proxy
Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f <podnetwork>.yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 192.168.99.117  # record this line; it is used later when nodes join the cluster
## 2.7 Record the token
You can now join any number of machines by running the following on each node:
kubeadm join --token=c53ef2.d257d49589d634f0 192.168.99.117
## 2.8 Configure the network
# Pull the image first; otherwise the deployment often fails to download it
docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
# or, via the mirror:
docker pull jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1-28-g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
# Several pod network add-ons are listed at http://kubernetes.io/docs/admin/addons/ ; choose one.
# Here we use Flannel. With Flannel, kubeadm init must be run with --pod-network-cidr (as in 2.6 above).
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
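Flannel runs as a DaemonSet, so one kube-flannel pod should appear per node; a quick check:

```bash
kubectl get daemonset --namespace=kube-system
kubectl get pods --namespace=kube-system | grep flannel
```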
## 2.9 Check kubelet status
systemctl status kubelet
# 3.0 Deploy the Kubernetes nodes
## 3.1 Install Docker
yum install -y docker
## 3.2 Pull the images (same as on the master; alternatively, pull from the private registry populated earlier and retag)
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
Or, pulling from the private registry (192.168.99.117:5000 is the registry address):
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull 192.168.99.117:5000/$imageName
  docker tag 192.168.99.117:5000/$imageName gcr.io/google_containers/$imageName
  docker rmi 192.168.99.117:5000/$imageName
done
## 3.3 Start Kubernetes
systemctl enable kubelet
systemctl start kubelet
## 3.4 Join the cluster
kubeadm join --token=c53ef2.d257d49589d634f0 192.168.99.117
Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
## 3.5 Check the cluster status (query from the master; the other nodes cannot query yet. Section 4.1 below shows how to enable them.)
# kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s
## 3.6 Check the service status
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-qrp68               1/1     Running   1          1h
kube-system   kube-apiserver-k8s-node-1            1/1     Running   2          1h
kube-system   kube-controller-manager-k8s-node-1   1/1     Running   2          1h
kube-system   kube-discovery-1769846148-g2lpc      1/1     Running   1          1h
kube-system   kube-dns-2924299975-xbhv4            4/4     Running   3          1h
kube-system   kube-flannel-ds-39g5n                2/2     Running   2          1h
kube-system   kube-flannel-ds-dwc82                2/2     Running   2          1h
kube-system   kube-flannel-ds-qpkm0                2/2     Running   2          1h
kube-system   kube-proxy-16c50                     1/1     Running   2          1h
kube-system   kube-proxy-5rkc8                     1/1     Running   2          1h
kube-system   kube-proxy-xwrq0                     1/1     Running   2          1h
kube-system   kube-scheduler-k8s-node-1            1/1     Running   2          1h
# 4.0 Configure Kubernetes
## 4.1 Controlling the cluster from other hosts
# Copy the master's config file /etc/kubernetes/admin.conf to the same location on the other nodes
scp /etc/kubernetes/admin.conf root@node1:/etc/kubernetes
# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME STATUS AGE
master Ready,master 7h
node1 Ready 6h
node2 Ready 6h
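To avoid typing --kubeconfig on every call, the path can be exported once on that node (a convenience, not in the original steps):

```bash
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc
kubectl get nodes
```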
## 4.2 Configure the dashboard (on the master)
wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
# Edit the yaml file
vi kubernetes-dashboard.yaml
Change the image field to the local registry address:
image: 192.168.99.117:5000/kubernetes-dashboard-amd64:v1.5.0
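The same edit as a one-liner, assuming the upstream manifest still references the gcr.io image at tag v1.5.0:

```bash
sed -i 's|image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0|image: 192.168.99.117:5000/kubernetes-dashboard-amd64:v1.5.0|' kubernetes-dashboard.yaml
```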
# kubectl create -f ./kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
# Check the NodePort, i.e. the port exposed externally on every node
# kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: app=kubernetes-dashboard
Selector: app=kubernetes-dashboard
Type: NodePort
IP: 10.96.94.33
Port: <unset> 80/TCP
NodePort: <unset> 30923/TCP
Endpoints: 10.244.1.3:9090
Session Affinity: None
No events.
# Access the dashboard via a node's NodePort, e.g. http://192.168.99.118:31736 or http://192.168.99.119:31736
# (the port varies per deployment; use the NodePort reported by kubectl describe).
# In testing so far it is only reachable via the node addresses; the port does not respond on the master. This is still being investigated.
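A reachability check from any machine that can reach the nodes; substitute the NodePort reported by kubectl describe above:

```bash
curl -I http://192.168.99.118:30923   # 30923 is the port from the describe output; yours will differ
```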
## 4.3 Deploy an nginx service
kubectl create -f nginx-rc.yaml
kubectl create -f nginx-service.yaml
#
# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.99.117:5000/nginx-php-fpm
        ports:
        - containerPort: 80
          hostPort: 800
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: etc-nginx-confd
        - mountPath: /var/www/html
          name: nginx-www
      volumes:
      - hostPath:
          path: /etc/nginx
        name: etc-nginx-confd
      - hostPath:
          path: /data
        name: nginx-www
# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 8000
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
#
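After both objects are created, the service's NodePort is allocated dynamically; look it up and test it from any node:

```bash
kubectl get svc nginx-service-nodeport
# then curl the reported NodePort, e.g.:
# curl -I http://192.168.99.118:<nodeport>
```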