[Experience Share] [k8s] Configuring NFS as backend storage, shared storage for multiple nginx replicas, and StatefulSet storage

Posted 2018-01-05 18:38
Install NFS on all nodes

yum install nfs-utils rpcbind -y
mkdir -p /ifs/kubernetes
echo "/ifs/kubernetes 192.168.x.0/24(rw,sync,no_root_squash)" >> /etc/exports

On the NFS server only, start the services:

systemctl start rpcbind nfs

Test-mount the export from each node to confirm it works.
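A quick way to check the export from a node before handing it to Kubernetes (a sketch; 192.168.x.135 is the placeholder server address used later in this post):

```shell
showmount -e 192.168.x.135                       # should list /ifs/kubernetes
mount -t nfs 192.168.x.135:/ifs/kubernetes /mnt  # temporary test mount
touch /mnt/test && rm -f /mnt/test               # confirm write access
umount /mnt
```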
  

For more background, see an earlier write-up:

http://blog.csdn.net/iiiiher/article/details/77865530

Set up NFS as the storage backend
Reference:

https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

The deployment below runs the NFS client provisioner. It mounts /ifs/kubernetes, and each volume provisioned later gets its own subdirectory under that path.

$ cat deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.x.135
        - name: NFS_PATH
          value: /ifs/kubernetes
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.x.135
          path: /ifs/kubernetes
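On clusters with RBAC enabled, the provisioner also needs a ServiceAccount bound to a role that can manage PVs and PVCs. A minimal sketch modeled on the upstream deploy directory (the resource names and the default namespace here are assumptions; you would also add serviceAccountName: nfs-client-provisioner to the Deployment's pod spec):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```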

$ cat class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
  

$ cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  

$ cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
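To run the check, apply the claim and the pod, then confirm the PVC binds and the SUCCESS file lands on the NFS server (a sketch; the exact subdirectory name is generated by the provisioner from the namespace, claim name, and PV name):

```shell
kubectl create -f test-claim.yaml -f test-pod.yaml
kubectl get pvc test-claim   # STATUS should become Bound
# on the NFS server:
ls /ifs/kubernetes/          # expect a subdirectory like default-test-claim-pvc-...
```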
  


  
By default, creating a PVC automatically provisions a PV. After the PVC is deleted manually, the corresponding directory on the NFS server is left in an "archived" state; I haven't yet figured out where this behavior is controlled.

todo: verify the PVC's
capacity
read/write behavior
reclaim policy
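On the archived-directory question: newer versions of the nfs-client provisioner expose an archiveOnDelete parameter on the StorageClass. Setting it to "false" deletes the backing directory on PVC deletion instead of renaming it to archived-*. Whether this parameter is available depends on the provisioner image version, so treat this as an assumption to verify:

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  # "false" = remove the data when the PVC is deleted
  # "true" (default) = rename the directory to archived-<dir>
  archiveOnDelete: "false"
```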

Implementing shared storage (the "left half" pattern: multiple nginx replicas mounting one PVC)

  

$ cat nginx-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
  

  

$ cat nginx-deployment.yaml
apiVersion: apps/v1beta1 # for Kubernetes 1.8.0+ use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs-pvc
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: nginx-claim
  

$ cat nginx-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: svc-nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
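Since every replica mounts the same claim, scaling the deployment yields several nginx pods serving a single shared docroot. A rough walkthrough (a sketch; <pvc-subdir> and <node-ip>:<node-port> are placeholders to fill in from your cluster):

```shell
kubectl create -f nginx-pvc.yaml -f nginx-deployment.yaml -f nginx-svc.yaml
kubectl scale deployment nginx-deployment --replicas=3
# on the NFS server, write into the provisioned subdirectory:
echo hello > /ifs/kubernetes/<pvc-subdir>/index.html
# from anywhere that can reach a node:
curl http://<node-ip>:<node-port>/   # any replica serves the same content
```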
  

For the right half (per-pod volumes via StatefulSet), see:
https://feisky.gitbooks.io/kubernetes/concepts/statefulset.html

todo:
Verify what happens to the shared ("left half") setup when the access mode is ReadWriteOnce.
  

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Mi
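Each StatefulSet replica gets its own PVC from volumeClaimTemplates; the PVC name is the template name joined to the pod name, i.e. <template>-<statefulset>-<ordinal>. A small sketch of the names the manifest above should produce:

```shell
# PVC names created by a StatefulSet: <claim-template-name>-<statefulset-name>-<ordinal>
STS=web        # StatefulSet name from the manifest above
TEMPLATE=www   # volumeClaimTemplates metadata.name
REPLICAS=2
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  echo "${TEMPLATE}-${STS}-${i}"   # www-web-0, www-web-1
  i=$((i + 1))
done
```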
  

A PVC can also select the NFS StorageClass through the storageClassName field (instead of the beta annotation):
  

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: spring-pvc
  namespace: kube-public
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 100Mi
  

GlusterFS reference:

https://github.com/kubernetes-incubator/external-storage/tree/master/gluster/glusterfs

Original thread: https://www.yunweiku.com/thread-431995-1-1.html