Note: No background theory here; we go straight to deployment. [This deployment uses HTTP, i.e. the insecure port.]
1. Disable SELinux and the firewall
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld
2. Install Docker
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum list docker-ce --showduplicates | sort -r
# yum -y install docker-ce
# docker --version
Docker version 17.06.2-ce, build cec0b72
# systemctl start docker
# systemctl status docker
# systemctl enable docker
3. Install etcd
# curl -L https://storage.googleapis.com/e ... -linux-amd64.tar.gz -o /root/etcd-v3.2.9-linux-amd64.tar.gz
# tar -zxvf etcd-v3.2.9-linux-amd64.tar.gz
# cp etcd-v3.2.9-linux-amd64/etcd* /usr/bin/
# etcd --version
etcd Version: 3.2.9
Git SHA: f1d7dd8
Go Version: go1.8.4
Go OS/Arch: linux/amd64
# etcdctl --version
etcdctl version: 3.2.9
API version: 2
4. Install Kubernetes
# wget https://storage.googleapis.com/k ... -linux-amd64.tar.gz
# tar -zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kubectl kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy /usr/bin/
# kube-apiserver --version
Kubernetes v1.8.1
5. Install flanneld
# curl -L https://github.com/coreos/flanne ... -linux-amd64.tar.gz -o flannel-v0.9.0-linux-amd64.tar.gz
# tar -zxvf flannel-v0.9.0-linux-amd64.tar.gz
# mv flanneld /usr/bin/
# mkdir /usr/libexec/flannel/
# mv mk-docker-opts.sh /usr/libexec/flannel/
# flanneld --version
v0.9.0
6. Configure and enable etcd
A. Configure the systemd unit
# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos/etcd
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
B. Configure etcd.conf on each node
# mkdir -p /var/lib/etcd/
# mkdir -p /etc/etcd/
# export ETCD_NAME=etcd
# export INTERNAL_IP=192.168.100.104
# cat << EOF > /etc/etcd/etcd.conf
name: '${ETCD_NAME}'
data-dir: "/var/lib/etcd/"
listen-peer-urls: http://${INTERNAL_IP}:2380
listen-client-urls: http://${INTERNAL_IP}:2379,http://127.0.0.1:2379
initial-advertise-peer-urls: http://${INTERNAL_IP}:2380
advertise-client-urls: http://${INTERNAL_IP}:2379
initial-cluster: "etcd=http://${INTERNAL_IP}:2380"
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
EOF
Note:
new — use this value when bootstrapping a new cluster;
existing — use this value when joining an existing cluster.
C. Start etcd
# systemctl start etcd
# systemctl status etcd
# systemctl enable etcd
## List the cluster members
# etcdctl member list
b0f5befc15246c67: name=etcd peerURLs=http://192.168.100.104:2380 clientURLs=http://192.168.100.104:2379 isLeader=true
## Check cluster health
# etcdctl cluster-health
member b0f5befc15246c67 is healthy: got healthy result from http://192.168.100.104:2379
cluster is healthy
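As an optional sanity check, you can write and read back a key through the v2 API that etcdctl reports above; the key name below is only an example.
# etcdctl set /test/hello world
world
# etcdctl get /test/hello
world
# etcdctl rm /test/hello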
7. Configure and enable flanneld
A. Configure the systemd unit
# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
# vim /usr/bin/flanneld-start
#!/bin/sh
exec /usr/bin/flanneld \
-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
-etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
"$@"
# chmod 755 /usr/bin/flanneld-start
B. Configure the flannel configuration
# etcdctl mkdir /kube/network
# etcdctl set /kube/network/config '{ "Network": "10.254.0.0/16" }'
# grep ^[A-Z] /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.100.104:2379"
FLANNEL_ETCD_PREFIX="/kube/network"
C. Start flanneld
# systemctl start flanneld
# systemctl status flanneld
# systemctl enable flanneld
D. Check the subnet assigned to each node
# cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.26.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
E. Switch Docker to the subnet assigned by flannel
# export FLANNEL_SUBNET=10.254.26.1/24
# cat << EOF > /etc/docker/daemon.json
{
"bip" : "$FLANNEL_SUBNET"
}
EOF
# systemctl daemon-reload
# systemctl restart docker
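Instead of exporting the subnet by hand, you can source the file flannel generated; this is an equivalent sketch of the same step.
# source /var/run/flannel/subnet.env
# cat << EOF > /etc/docker/daemon.json
{
"bip" : "${FLANNEL_SUBNET}"
}
EOF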
F. Verify that the expected routes were created
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.200.2 0.0.0.0 UG 100 0 0 ens33
10.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel0
10.254.26.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.100.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
G. Inspect flannel's data in etcd with etcdctl
# etcdctl ls /kube/network/subnets
/kube/network/subnets/10.254.26.0-24
# etcdctl -o extended get /kube/network/subnets/10.254.26.0-24
Key: /kube/network/subnets/10.254.26.0-24
Created-Index: 6
Modified-Index: 6
TTL: 85638
Index: 6
{"PublicIP":"192.168.100.104"}
H. Test that the network works
# ping -c 4 10.254.26.1
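To also check connectivity from inside a container, you can ping the docker0 gateway from a throwaway container; this assumes a busybox image can be pulled (use a mirror if necessary).
# docker run --rm busybox ping -c 4 10.254.26.1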
8. Configure and enable the Kubernetes master node
The Kubernetes master runs the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
A. Configure the shared config file
# mkdir -p /etc/kubernetes/
# grep ^[A-Z] /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.100.104:8080"
B. Configure the kube-apiserver systemd unit
# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
C. Configure the apiserver configuration file
# grep ^[A-Z] /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--advertise-address=192.168.100.104 --bind-address=192.168.100.104 --insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.100.104:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/apiserver.log"
Note: the biggest difference between deploying over HTTP and over HTTPS is the ServiceAccount entry of the --admission-control option, which is omitted here.
D. Start kube-apiserver
# systemctl start kube-apiserver
# systemctl status kube-apiserver
# systemctl enable kube-apiserver
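A quick way to confirm that the insecure port is up is to query the version endpoint; it should return a small JSON document with the server version.
# curl -s http://192.168.100.104:8080/version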
E. Configure the kube-controller-manager systemd unit
# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
F. Configure the kube-controller-manager configuration file
# grep ^[A-Z] /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes"
G. Start kube-controller-manager
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager
# systemctl enable kube-controller-manager
H. Configure the kube-scheduler systemd unit
# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
I. Configure the kube-scheduler configuration file
# grep ^[A-Z] /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=127.0.0.1"
J. Start kube-scheduler
# systemctl start kube-scheduler
# systemctl status kube-scheduler
# systemctl enable kube-scheduler
K. Verify the master node
# kubectl get componentstatuses
# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
9. Configure and enable the Kubernetes node
The Kubernetes node runs the following components:
kubelet
kube-proxy
A. Configure the kubelet systemd unit
# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
B. Configure the kubelet configuration files
# mkdir -p /var/lib/kubelet
# export MASTER_ADDRESS=192.168.100.104
# export KUBECONFIG_DIR=/etc/kubernetes
# cat <<EOF > "${KUBECONFIG_DIR}/kubelet.kubeconfig"
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://${MASTER_ADDRESS}:8080/
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
EOF
# grep ^[A-Z] /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=192.168.100.104"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=master"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=hub.c.163.com/k8s163/pause-amd64:3.0"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --fail-swap-on=false --cluster-dns=10.254.0.2 --cluster-domain=cluster.local. --serialize-image-pulls=false"
C. Start kubelet
# systemctl start kubelet
# systemctl status kubelet
# systemctl enable kubelet
Notes:
--fail-swap-on ## kubelet refuses to start if swap is enabled on the node (default true). [This flag was introduced in 1.8.]
--cluster-dns=10.254.0.2
--cluster-domain=cluster.local.
## These must match the values used in the KubeDNS Pod configuration.
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig
## Newer versions no longer support the --api-servers flag; the apiserver is specified through a kubeconfig instead.
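A quick liveness check against the kubelet's default healthz port (10248) should return ok once the service is up.
# curl -s http://127.0.0.1:10248/healthz
ok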
D. Configure the kube-proxy systemd unit
# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
E. Configure the kube-proxy configuration file
# grep ^[A-Z] /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--bind-address=192.168.100.104 --hostname-override=192.168.100.104 --cluster-cidr=10.254.0.0/16"
F. Start kube-proxy
# systemctl start kube-proxy
# systemctl status kube-proxy
# systemctl enable kube-proxy
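kube-proxy runs in iptables mode by default in this version, so once it is running you should see the KUBE-SERVICES chain in the nat table; this is just a spot check.
# iptables -t nat -L KUBE-SERVICES -n | head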
G. View node information
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready <none> 5h v1.8.1
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready <none> 5h v1.8.1 <none> CentOS Linux 7 (Core) 3.10.0-693.2.2.el7.x86_64 docker://Unknown
# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready <none> 5h v1.8.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master
# kubectl version --short
Client Version: v1.8.1
Server Version: v1.8.1
H. View cluster information
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
10. Deploy the KubeDNS add-on
The official YAML files are under kubernetes/cluster/addons/dns:
https://github.com/kubernetes/ku ... /cluster/addons/dns
## Download the KubeDNS YAML files
## Rename the file (drop the .base suffix)
# cp kube-dns.yaml.base kube-dns.yaml
### Replace all images
# sed -i 's/gcr.io\/google_containers/192.168.100.100\/k8s/g' kube-dns.yaml
#### Substitute the DNS server IP and domain
# sed -i "s/__PILLAR__DNS__SERVER__/10.254.0.2/g" kube-dns.yaml
# sed -i "s/__PILLAR__DNS__DOMAIN__/cluster.local/g" kube-dns.yaml
###### Compare with the original file
# diff kube-dns.yaml kube-dns.yaml.base
33c33
< clusterIP: 10.254.0.2
---
> clusterIP: __PILLAR__DNS__SERVER__
97c97
< image: 192.168.100.100/k8s/k8s-dns-kube-dns-amd64:1.14.5
---
> image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.6
127,128c127
< - --domain=cluster.local.
< - --kube-master-url=http://192.168.100.104:8080
---
> - --domain=__PILLAR__DNS__DOMAIN__.
149c148
< image: 192.168.100.100/k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.5
---
> image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.6
169c168
< - --server=/cluster.local/127.0.0.1#10053
---
> - --server=/__PILLAR__DNS__DOMAIN__/127.0.0.1#10053
188c187
< image: 192.168.100.100/k8s/k8s-dns-sidecar-amd64:1.14.5
---
> image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.6
201,202c200,201
< - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
< - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
---
> - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,SRV
> - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,SRV
Note 1: I pull the images from my own registry. When this post was written kube-dns had been updated to 1.14.6, but the Great Firewall had just gotten much stricter and pulling was difficult, so I fell back to 1.14.5. Adjust this yourself; pulling the latest version is of course better if you can.
Note 2: In the diff above, note the second difference: the 1.14.6 --probe arguments end with SRV while 1.14.5 ends with A. This must be changed as well, otherwise the pod will go into CrashLoopBackOff.
Note 3: The apiserver must be specified here with --kube-master-url, otherwise the pod will also go into CrashLoopBackOff.
Note 4: I pushed the images to NetEase hub:
hub.c.163.com/zhijiansd/k8s-dns-kube-dns-amd64:1.14.5
hub.c.163.com/zhijiansd/k8s-dns-dnsmasq-nanny-amd64:1.14.5
hub.c.163.com/zhijiansd/k8s-dns-sidecar-amd64:1.14.5
### Apply the file
# kubectl create -f kube-dns.yaml
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
### Check the KubeDNS pod
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-84f48d556b-qprmw 3/3 Running 3 5h
### View cluster information
# kubectl get service -n kube-system | grep dns
kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 5h
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
#### View the logs of the KubeDNS containers
# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar
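To verify that cluster DNS actually resolves service names, you can run a one-off pod; this assumes a busybox image with a working nslookup can be pulled.
# kubectl run -i -t --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default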
11. Deploy Heapster
### Download Heapster
# wget https://codeload.github.com/kube ... ar.gz/v1.5.0-beta.0 -O heapster-1.5.0-beta.tar.gz
# tar -zxvf heapster-1.5.0-beta.tar.gz
# cd heapster-1.5.0-beta.0/deploy/kube-config
# cp rbac/heapster-rbac.yaml influxdb/
# cd influxdb/
# ls
grafana.yaml heapster-rbac.yaml heapster.yaml influxdb.yaml
### Replace all images
# sed -i 's/gcr.io\/google_containers/192.168.100.100\/k8s/g' *.yaml
Note: I also pushed these images to NetEase hub; change the addresses as needed:
hub.c.163.com/zhijiansd/heapster-grafana-amd64:v4.4.3
hub.c.163.com/zhijiansd/heapster-amd64:v1.4.0
hub.c.163.com/zhijiansd/heapster-influxdb-amd64:v1.3.3
### Edit heapster.yaml
Note: Heapster connects to the apiserver over HTTPS by default; change it to connect over HTTP, as sketched below.
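A common way to do this (a sketch; the surrounding lines of the default command block may differ slightly between releases) is to replace the default kubernetes:https:// source with the insecure apiserver port and disable in-cluster config:
        command:
        - /heapster
        - --source=kubernetes:http://192.168.100.104:8080?inClusterConfig=false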
### Apply all files in the influxdb directory
# kubectl create -f .
deployment "monitoring-grafana" created
service "monitoring-grafana" created
clusterrolebinding "heapster" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
### Check the deployments
# kubectl get deployments -n kube-system | grep -E 'heapster|monitoring'
heapster 1 1 1 1 6h
monitoring-grafana 1 1 1 1 6h
monitoring-influxdb 1 1 1 1 6h
### Check the pods and services
# kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-6c96ccd7c4-xbmlc 1/1 Running 1 6h
monitoring-grafana-98d44cd67-z5m99 1/1 Running 1 6h
monitoring-influxdb-6b6d749d9c-schdp 1/1 Running 1 6h
# kubectl get svc -n kube-system | grep -E 'heapster|monitoring'
heapster ClusterIP 10.254.201.85 <none> 80/TCP 6h
monitoring-grafana ClusterIP 10.254.138.73 <none> 80/TCP 6h
monitoring-influxdb ClusterIP 10.254.45.121 <none> 8086/TCP 6h
### View cluster information
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
monitoring-grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
12. Deploy the Kubernetes Dashboard
We use the version that does not require certificates:
### Replace the image
# sed -i 's/gcr.io\/google_containers/192.168.100.100\/k8s/g' kubernetes-dashboard.yaml
Note: change the image address as needed:
hub.c.163.com/zhijiansd/kubernetes-dashboard-amd64:v1.7.1
### Add the apiserver address
# grep apiserver kubernetes-dashboard.yaml
# - --apiserver-host=http://my-address:port
- --apiserver-host=http://192.168.100.104:8080
### Apply the file
# kubectl create -f kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" configured
role "kubernetes-dashboard-minimal" configured
rolebinding "kubernetes-dashboard-minimal" configured
deployment "kubernetes-dashboard" configured
service "kubernetes-dashboard" configured
### Check the kubernetes-dashboard pod
# kubectl get pods -n kube-system | grep dash
kubernetes-dashboard-7648996855-54x6l 1/1 Running 1 7h
Note: with 1.7 the kubernetes-dashboard address no longer shows up in kubectl cluster-info (it still did in 1.6.3).
1.7.0 must be accessed at http://localhost:8080/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ . 1.7.1 can be accessed at that same URL, or at http://localhost:8080/ui, which redirects there automatically.
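As a quick reachability check from the master, the proxy path should return HTTP 200 once the dashboard pod is serving (the exact status may vary while it is still starting):
# curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/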
13. View the Kubernetes Dashboard
Access it at http://localhost:8080/ui
(Screenshot: Nodes view)
(Screenshot: Pods view)
14. View Grafana
Access it at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
(Screenshot: Cluster view)
(Screenshot: Pod view)