Configuring exclusive and shared storage in k8s
From a usage point of view there are generally two kinds of storage requirements: exclusive storage and shared storage.
Exclusive storage gives every Pod its own dedicated volume that is not shared with any other Pod.
Shared storage means multiple Pods mount the same volume, and all of them can read and write it.
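At the PVC level the difference is mainly the access mode: exclusive storage corresponds to ReadWriteOnce claims (normally generated per Pod from a StatefulSet volumeClaimTemplate), while shared storage corresponds to a single ReadWriteMany claim that every Pod mounts; NFS-type backends such as CFS support both. As a rough sketch (the claim names data-exclusive and data-shared are placeholders, not taken from the configs below):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-exclusive        # placeholder; in practice generated per Pod by volumeClaimTemplates
spec:
  accessModes: [ "ReadWriteOnce" ]   # mountable read-write by a single node at a time
  storageClassName: tencent-cfs-storage
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-shared           # placeholder; one claim referenced by every Pod
spec:
  accessModes: [ "ReadWriteMany" ]   # many Pods read and write concurrently
  storageClassName: tencent-cfs-storage
  resources:
    requests:
      storage: 10Gi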
Whichever mode you use, a StorageClass is required, so the first step is to create an NFS-backed StorageClass.
1. Creating the NFS StorageClass
nfs-client-provisioner.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nfs-client-provisioner
name: nfs-client-provisioner
namespace: default
spec:
progressDeadlineSeconds: 2147483647
replicas: 1
revisionHistoryLimit: 2147483647
selector:
matchLabels:
app: nfs-client-provisioner
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
app: nfs-client-provisioner
spec:
containers:
- env:
- name: PROVISIONER_NAME
value: tencent-cfs-provisioner
- name: NFS_SERVER
value: 10.12.112.36
- name: NFS_PATH
value: /
image: ccr.ccs.tencentyun.com/weimob-public/nfs-client-provisioner:latest
imagePullPolicy: Always
name: nfs-client-provisioner
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /persistentvolumes
name: nfs-client-root
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: nfs-client-provisioner
serviceAccountName: nfs-client-provisioner
terminationGracePeriodSeconds: 30
volumes:
- name: nfs-client-root
nfs:
path: /
server: 10.12.112.36
tencent-cfs-storage-storageclass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: tencent-cfs-storage
mountOptions:
- vers=4.0
- nolock,tcp,noresvport
parameters:
archiveOnDelete: "false"
provisioner: tencent-cfs-provisioner
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
Deploy:
[root@sh-saas-k8s1-master-dev-01 ~]# kubectl apply -f nfs-client-provisioner.yaml
[root@sh-saas-k8s1-master-dev-01 ~]# kubectl apply -f tencent-cfs-storage-storageclass.yaml
[root@sh-saas-k8s1-master-dev-01 ~]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER AGE
tencent-cfs-storage tencent-cfs-provisioner 75d
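Before wiring the class into a real workload, it can be worth sanity-checking that dynamic provisioning works with a throwaway claim. The file and claim name test-cfs-pvc below are made up for this check; the PVC should reach Bound within a few seconds and can then be deleted:
test-cfs-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-cfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: tencent-cfs-storage
  resources:
    requests:
      storage: 1Gi
kubectl apply -f test-cfs-pvc.yaml
kubectl get pvc test-cfs-pvc        # STATUS should become Bound
kubectl delete -f test-cfs-pvc.yaml # reclaimPolicy is Delete, so the backing PV is removed too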
2. Configuring exclusive storage
Once the StorageClass exists, storage can be configured directly in a StatefulSet through volumeClaimTemplates, as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
generation: 5
labels:
app: saas-jcpt-vdataflow-notify-service-tomcat-dev
name: saas-jcpt-vdataflow-notify-service-tomcat-dev
namespace: saas-jcpt-tomcat-dev
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 3
selector:
matchLabels:
app: saas-jcpt-vdataflow-notify-service-tomcat-dev
serviceName: saas-jcpt-vdataflow-notify-service-tomcat-dev
template:
metadata:
labels:
app: saas-jcpt-vdataflow-notify-service-tomcat-dev
spec:
automountServiceAccountToken: true
containers:
- image: ccr.ccs.tencentyun.com/weimob-saas-jcpt-tomcat-dev/saas-jcpt-vdataflow-notify-service-tomcat-dev:v202005141704-master-960b8c7
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 2
httpGet:
path: /healthcheck
port: 8080
scheme: HTTP
initialDelaySeconds: 180
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 3
name: container-0
readinessProbe:
failureThreshold: 2
httpGet:
path: /healthcheck
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 3
resources:
limits:
cpu: '2'
memory: 4Gi
requests:
cpu: 500m
memory: 512Mi
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data/vdataflow
name: data
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: ccr.ccs.tencentyun.com.key
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
terminationGracePeriodSeconds: 30
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "tencent-cfs-storage"
resources:
requests:
storage: 100Gi
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
After creating the StatefulSet from the configuration above and scaling it from 1 Pod to 2, you will see that each of the two Pods has mounted its own PV.
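Scaling can be done with kubectl scale; the namespace and StatefulSet name match the manifest above, and the volumeClaimTemplate automatically provisions a claim for the new ordinal:
kubectl -n saas-jcpt-tomcat-dev scale statefulset saas-jcpt-vdataflow-notify-service-tomcat-dev --replicas=2
# Each Pod gets its own PVC, named data-<statefulset-name>-<ordinal>, bound to a freshly provisioned PV.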
[root@sh-saas-k8s1-master-dev-01 yaml]# kubectl get pv -n saas-jcpt-java-dev
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-19643e0b-95b7-11ea-8674-525400bf140b 1Ti RWX Delete Bound eventwatcher/cfs-elasticsearch-new-pvc tencent-cfs-storage 40d
pvc-3972e8db-b520-11ea-87f5-5254001a47d3 100Gi RWO Delete Bound saas-jcpt-tomcat-dev/data-saas-jcpt-vdataflow-notify-service-tomcat-dev-0 tencent-cfs-storage 54m
pvc-68ca7140-b520-11ea-87f5-5254001a47d3 100Gi RWO Delete Bound saas-jcpt-tomcat-dev/data-saas-jcpt-vdataflow-notify-service-tomcat-dev-1 tencent-cfs-storage 53m
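The per-Pod claims can also be listed in the workload's namespace. Note that PVCs created from volumeClaimTemplates are kept when the StatefulSet is scaled back down or deleted, so the data survives, but unused claims have to be cleaned up manually:
kubectl -n saas-jcpt-tomcat-dev get pvc
# Expect one Bound PVC per ordinal: data-saas-jcpt-vdataflow-notify-service-tomcat-dev-0 and
# data-saas-jcpt-vdataflow-notify-service-tomcat-dev-1, matching the PVs shown above.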
3. Configuring shared storage
For shared storage, create a PersistentVolumeClaim first, then reference that PVC from the StatefulSet.
The configuration is as follows:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: saas-jcpt-vdataflow-notify-service-tomcat-dev
namespace: saas-jcpt-tomcat-dev
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 200Gi
storageClassName: tencent-cfs-storage
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
generation: 5
labels:
app: saas-jcpt-vdataflow-notify-service-tomcat-dev
name: saas-jcpt-vdataflow-notify-service-tomcat-dev
namespace: saas-jcpt-tomcat-dev
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 3
selector:
matchLabels:
app: saas-jcpt-vdataflow-notify-service-tomcat-dev
serviceName: saas-jcpt-vdataflow-notify-service-tomcat-dev
template:
metadata:
labels:
app: saas-jcpt-vdataflow-notify-service-tomcat-dev
spec:
automountServiceAccountToken: true
containers:
- image: ccr.ccs.tencentyun.com/weimob-saas-jcpt-tomcat-dev/saas-jcpt-vdataflow-notify-service-tomcat-dev:v202005141704-master-960b8c7
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 2
httpGet:
path: /healthcheck
port: 8080
scheme: HTTP
initialDelaySeconds: 180
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 3
name: container-0
readinessProbe:
failureThreshold: 2
httpGet:
path: /healthcheck
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 3
resources:
limits:
cpu: '2'
memory: 4Gi
requests:
cpu: 500m
memory: 512Mi
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data/vdataflow
name: data
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: ccr.ccs.tencentyun.com.key
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
terminationGracePeriodSeconds: 30
volumes:
- name: data
persistentVolumeClaim:
claimName: saas-jcpt-vdataflow-notify-service-tomcat-dev
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
After creating the StatefulSet from the configuration above and scaling it from 1 Pod to 2, you will see that both Pods mount the same PV, and both can read and write it.
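A simple way to verify the sharing, assuming the image ships a shell, is to write a file through one Pod and read it back from the other (shared-test.txt is just an example file name):
kubectl -n saas-jcpt-tomcat-dev exec saas-jcpt-vdataflow-notify-service-tomcat-dev-0 -- \
  sh -c 'echo hello-from-pod-0 > /data/vdataflow/shared-test.txt'
kubectl -n saas-jcpt-tomcat-dev exec saas-jcpt-vdataflow-notify-service-tomcat-dev-1 -- \
  cat /data/vdataflow/shared-test.txt
# Both Pods mount the same PVC at /data/vdataflow, so the second command prints hello-from-pod-0.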