The previous post (k8s 1.4.3 installation notes, part 1) covered installing etcd, Docker, and flannel; now we can install Kubernetes itself.
1. Kubernetes
The kubernetes package in the CentOS yum repositories is still at 1.2.0, so we have to install from the downloaded release tarball instead:
[iyunv@bogon system]# yum list | grep kubernetes
cockpit-kubernetes.x86_64        0.114-2.el7.centos          extras
kubernetes.x86_64                1.2.0-0.13.gitec7364b.el7   extras
kubernetes-client.x86_64         1.2.0-0.13.gitec7364b.el7   extras
kubernetes-cni.x86_64            0.3.0.1-0.07a8a2            kubelet
kubernetes-master.x86_64         1.2.0-0.13.gitec7364b.el7   extras
kubernetes-node.x86_64           1.2.0-0.13.gitec7364b.el7   extras
kubernetes-unit-test.x86_64      1.2.0-0.13.gitec7364b.el7   extras
1.1 Downloading Kubernetes
Download the v1.4.3 release tarball with wget or another download tool: https://github.com/kubernetes/kubernetes/releases/download/v1.4.3/kubernetes.tar.gz. Once the download finishes we have the 1.4.3 installation files.
1.2 Extract and install
Extract the tarball and copy the binaries somewhere suitable:
tar -zxvf kubernetes.tar.gz
cd kubernetes/server/bin
mkdir /usr/local/kube
cp -R * /usr/local/kube
Add the kube binaries to the PATH by editing /etc/profile:
export KUBE_PATH=/usr/local/kube
export PATH=$PATH:$KUBE_PATH
Source the file so the change takes effect:
source /etc/profile
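To confirm the PATH change took effect in the current shell, a quick sketch (assuming the binaries were copied to /usr/local/kube as above):

```shell
# Append the kube bin directory to PATH (same lines as in /etc/profile above)
export KUBE_PATH=/usr/local/kube
export PATH=$PATH:$KUBE_PATH

# 'command -v' prints the binary's full path once it is resolvable via PATH
command -v kube-apiserver || echo "kube-apiserver not on PATH yet"
```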
1.3 Starting the master
The master node is 192.168.37.130; it must run three processes: kube-apiserver, kube-controller-manager, and kube-scheduler.
1.3.1 Opening ports
If the firewall has not been disabled and you are using firewalld, open the relevant ports:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=15441/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-all
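The four near-identical firewall-cmd calls can also be generated from a loop. This sketch only echoes the commands so they can be reviewed first (pipe the output to sh to apply; firewalld must be running for the real commands to succeed):

```shell
# Print one firewall-cmd invocation per master port; pipe to 'sh' to apply.
for port in 8080 10250 6443 15441; do
  echo "firewall-cmd --zone=public --add-port=${port}/tcp --permanent"
done
echo "firewall-cmd --reload"
```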
1.3.2 Starting kube-apiserver
kube-apiserver --insecure-bind-address=192.168.37.130 --insecure-port=8080 --service-cluster-ip-range='192.168.37.130/24' --log_dir=/usr/local/kubernete_test/logs/kube --v=0 --logtostderr=false --etcd_servers=http://192.168.37.130:2379,http://192.168.37.131:2379 --allow_privileged=false
1.3.3 Starting kube-controller-manager
kube-controller-manager --v=0 --logtostderr=true --log_dir=/data/kubernets/logs/kube-controller-manager/ --master=http://192.168.37.130:8080
1.3.4 Starting kube-scheduler
kube-scheduler --master='192.168.37.130:8080' --v=0 --log_dir=/data/kubernets/logs/kube-scheduler
1.3.5 Checking that everything is up
[iyunv@bogon ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
Both etcd members are reported healthy.
1.3.6 Setting up systemd services
Create a service file for each process in /usr/lib/systemd/system.
1. kube-apiserver.service
[Unit]
Description=kube-apiserver
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-apiserver
ExecStart=/usr/local/kube/kube-apiserver ${INSECURE_BIND_ADDRESS} ${INSECURE_PORT} ${SERVICE_CLUSTER_IP_RANGE} ${LOG_DIR} ${VERSION} ${LOGTOSTDERR} ${ETCD_SERVERS} ${ALLOW_PRIVILEGED}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
The corresponding configuration file, /etc/sysconfig/kubernets/kube-apiserver:
#INSECURE_BIND_ADDRESS="--insecure-bind-address=0.0.0.0"
INSECURE_BIND_ADDRESS="--address=0.0.0.0"
INSECURE_PORT="--insecure-port=8080"
SERVICE_CLUSTER_IP_RANGE="--service-cluster-ip-range=172.16.0.0/16"
LOG_DIR="--log_dir=/usr/local/kubernete_test/logs/kube"
VERSION="--v=0"
LOGTOSTDERR="--logtostderr=false"
ETCD_SERVERS="--etcd_servers=http://192.168.37.130:2379,http://192.168.37.131:2379"
ALLOW_PRIVILEGED="--allow-privileged=false"
ADMISSION_CONTROL="--admission-control=NamespaceAutoProvision,ServiceAccount,LimitRanger,ResourceQuota"
Note: variable names in the configuration file must not contain "-".
Update (Nov 3, 2016): I initially set INSECURE_BIND_ADDRESS="--insecure-bind-address=192.168.37.130", but then connections over the loopback address (127.0.0.1:8080) were refused. Changing it to INSECURE_BIND_ADDRESS="--address=0.0.0.0" avoids the problem; see http://www.cnblogs.com/lyzw/p/6023935.html for details.
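The naming restriction can be seen by simulating what systemd does with an EnvironmentFile: the keys must be valid identifiers (underscores, no "-"), while the values are free to hold hyphenated flags. A sketch using a throwaway file under /tmp:

```shell
# Write a sample EnvironmentFile entry: key uses '_', value holds the '-' flag
cat > /tmp/kube-apiserver.env <<'EOF'
INSECURE_PORT="--insecure-port=8080"
EOF

# Approximate systemd's parsing by sourcing the file, then show the value
# that gets substituted for ${INSECURE_PORT} in the ExecStart line.
. /tmp/kube-apiserver.env
echo "$INSECURE_PORT"
```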
2. kube-controller-manager
Create kube-controller-manager.service:
[Unit]
Description=kube-controller-manager
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-controller-manager
ExecStart=/usr/local/kube/kube-controller-manager ${VERSION} ${LOGTOSTDERR} ${LOG_DIR} ${MASTER}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
Create /etc/sysconfig/kubernets/kube-controller-manager:
VERSION="--v=0"
LOGTOSTDERR="--logtostderr=true"
LOG_DIR="--log_dir=/data/kubernets/logs/kube-controller-manager/"
MASTER="--master=http://192.168.37.130:8080"
3. Setting up the kube-scheduler service
kube-scheduler.service:
[Unit]
Description=kube-scheduler
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-scheduler
ExecStart=/usr/local/kube/kube-scheduler ${VERSION} ${LOGTOSTDERR} ${LOG_DIR} ${MASTER}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
The corresponding configuration file:
VERSION="--v=0"
LOGTOSTDERR="--logtostderr=true"
LOG_DIR="--log_dir=/data/kubernets/logs/kube-scheduler"
MASTER="--master=http://192.168.37.130:8080"
4. Restart the services
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
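To also have the three master components start at boot, the same loop pattern works with systemctl enable (not covered above, so treat this as a sketch). It only echoes the commands for review; pipe the output to sh to apply once the unit files are installed:

```shell
# Enable (start at boot) and start each master component; echoed for review.
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  echo "systemctl enable ${svc}"
  echo "systemctl start ${svc}"
done
```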
1.4 Starting the minions
Each minion needs to run two processes: kube-proxy and kubelet.
1.4.1 Starting kube-proxy
# run on both minions
kube-proxy --logtostderr=true --v=0 --master=http://192.168.37.130:8080
1.4.2 Starting kubelet
kubelet --logtostderr=true --v=0 --allow-privileged=false --log_dir=/data/kubernets/logs/kubelet --address=0.0.0.0 --port=10250 --hostname_override=192.168.37.130 --api_servers=http://192.168.37.130:8080
1.4.3 Configuring the services
1. kube-proxy.service
[Unit]
Description=kube-proxy
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kube-proxy
ExecStart=/usr/local/kube/kube-proxy ${VERSION} ${LOGTOSTDERR} ${LOG_DIR} ${MASTER}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
The /etc/sysconfig/kubernets/kube-proxy configuration file:
VERSION="--v=0"
LOGTOSTDERR="--logtostderr=true"
LOG_DIR="--log_dir=/data/kubernets/logs/kube-proxy/"
MASTER="--master=http://192.168.37.130:8080"
2. kubelet.service
[Unit]
Description=kubelet
Documentation=http://kubernetes.io/docs/
[Service]
EnvironmentFile=-/etc/sysconfig/kubernets/kubelet
ExecStart=/usr/local/kube/kubelet ${LOGTOSTDERR} ${VERSION} ${ALLOW_PRIVILEGED} ${LOG_DIR} ${ADDRESS} ${PORT} ${HOSTNAME_OVERRIDE} ${API_SERVERS}
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
The /etc/sysconfig/kubernets/kubelet configuration file:
LOGTOSTDERR="--logtostderr=true"
VERSION="--v=0"
ALLOW_PRIVILEGED="--allow-privileged=false"
LOG_DIR="--log_dir=/data/kubernets/logs/kubelet"
ADDRESS="--address=0.0.0.0"
PORT="--port=10250"
HOSTNAME_OVERRIDE="--hostname_override=192.168.37.131"
API_SERVERS="--api_servers=http://192.168.37.130:8080"
With the steps above, Kubernetes is basically installed; the dashboard will be set up in a later post.
Issues encountered:
1. With the configuration found online, the command prints some deprecation warnings:
[iyunv@bogon server]# kube-apiserver --address=192.168.37.130 --insecure-port=8080 --service-cluster-ip-range='192.168.37.130/24' --log_dir=/usr/local/kubernete_test/logs/kube --kubelet_port=10250 --v=0 --logtostderr=false --etcd_servers=http://192.168.37.130:2379,http://192.168.37.131:2379 --allow_privileged=false
Flag --address has been deprecated, see --insecure-bind-address instead.
Flag --kubelet-port has been deprecated, kubelet-port is deprecated and will be removed.
[restful] 2016/11/01 15:31:15 log.go:30: [restful/swagger] listing is available at https://192.168.37.130:6443/swaggerapi/
[restful] 2016/11/01 15:31:15 log.go:30: [restful/swagger] https://192.168.37.130:6443/swaggerui/ is mapped to folder /swagger-ui/
2. kubectl get componentstatuses fails:
[iyunv@bogon ~]# kubectl get componentstatuses
The connection to the server localhost:8080 was refused - did you specify the right host or port?
If this happens on the master, /etc/hosts is missing the localhost entry; add an "ip localhost" line to the file.
On a minion, point kubectl at the master explicitly: kubectl -s $masterIP:$port get componentstatuses, where $masterIP is the master's IP and $port is the API server port (8080 in this setup), e.g. kubectl -s http://192.168.37.130:8080 get componentstatuses.
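To avoid typing -s on every kubectl invocation from a minion, a small wrapper function can bake the master endpoint in. This is a hypothetical helper (the name kctl and the endpoint are this cluster's, not anything kubectl ships with):

```shell
# kctl: kubectl pre-pointed at the master's insecure API port (hypothetical
# helper; put it in ~/.bashrc on the minions to make it permanent)
kctl() {
  kubectl -s http://192.168.37.130:8080 "$@"
}

# Usage: kctl get componentstatuses
```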