From a usage standpoint, there are generally two kinds of storage requirements: exclusive storage and shared storage.
Exclusive storage means each Pod gets its own dedicated storage space, not shared with any other Pod.
Shared storage means multiple Pods mount the same storage space, and all of them can read and write it.
Either way, a StorageClass is needed, so let's first create an NFS StorageClass:
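A minimal sketch of such an NFS StorageClass, assuming the external nfs-client provisioner has already been deployed in the cluster; the provisioner name `fuseim.pri/ifs` is the stock example value from that project and must match whatever name your provisioner deployment actually uses:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
# Must match the PROVISIONER_NAME configured in the nfs-client-provisioner
# deployment (the NFS server address lives in that deployment, not here).
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"   # delete data instead of archiving when a PVC is removed
```

A PVC that references `storageClassName: nfs-storage` will then get a dynamically provisioned directory on the NFS export.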
A traefik ingress can likewise be configured to rewrite URLs.
Here is a complete example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/request-modifier: 'ReplacePathRegex: ^/api3/(.*) /api/$1'
  creationTimestamp: "2020-03-09T03:27:40Z"
  generation: 2
  labels:
    app: public-fe-zhan-operation-node-qa
  name: public-fe-zhan-operation-node-qa-7091-2-ingress
  namespace: public-fe-node-qa
  resourceVersion: "2310342"
  selfLink: /apis/extensions/v1beta1/namespaces/public-fe-node-qa/ingresses/public-fe-zhan-operation-node-qa-7091-2-ingress
  uid: ee6f6696-61b5-11ea-a82f-52540088db9a
spec:
  rules:
  - host: www.xxxx.com
    http:
      paths:
      - backend:
          serviceName: public-fe-zhan-operation-node-qa
          servicePort: 7091
        path: /api3
Reference: https://s0docs0traefik0io.icopy.site/v1.7/basics/#path-matcher-usage-guidelines
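The rewrite rule above can be sanity-checked locally: sed's extended regex applies the same substitution that `ReplacePathRegex` performs on the request path (a local simulation only, not traefik itself):

```shell
# Simulate ReplacePathRegex: ^/api3/(.*) -> /api/$1
rewritten=$(echo "/api3/users/42" | sed -E 's|^/api3/(.*)|/api/\1|')
echo "$rewritten"   # /api/users/42
```

So a request for /api3/users/42 on www.xxxx.com reaches the backend service as /api/users/42.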
Below is an introduction found online:
Prometheus is an open-source system monitoring and alerting framework. Inspired by Google's Borgmon monitoring system, it was created in 2012 by former Google engineers working at SoundCloud, developed as a community open-source project, and officially released in 2015. In 2016 Prometheus formally joined the Cloud Native Computing Foundation, becoming the second most popular project after Kubernetes.
As a new generation of monitoring framework, Prometheus has the following characteristics:
- a multi-dimensional data model: time series identified by a metric name and key/value label pairs
- a flexible query language (PromQL) to slice and aggregate that data
- no reliance on distributed storage; single server nodes are autonomous
- metrics collection via a pull model over HTTP, with a push gateway for short-lived jobs
- targets discovered via service discovery or static configuration
- multiple modes of graphing and dashboarding support
Download:
mkdir rook
cd rook/
wget https://raw.githubusercontent.com/rook/rook/release-1.0/cluster/examples/kubernetes/ceph/common.yaml
wget https://raw.githubusercontent.com/rook/rook/release-1.0/cluster/examples/kubernetes/ceph/cluster.yaml
wget https://raw.githubusercontent.com/rook/rook/release-1.0/cluster/examples/kubernetes/ceph/operator.yaml
Edit the configuration:
vim cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13 # ceph version
    allowUnsupported: false
  dataDirHostPath: /data/ceph/rook # directory used for storage
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  annotations:
  resources:
  storage:
    useAllNodes: true
    useAllDevices: true
    deviceFilter:
    location:
    config:
    directories:
    - path: /data/ceph/rook
Deploy:
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster.yaml
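Once the cluster is up, workloads consume it through a StorageClass. A sketch based on the rook release-1.0 Ceph block-storage examples; the pool name and replica size are assumptions, and the flex provisioner name `ceph.rook.io/block` is the one used by that release:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool        # assumed pool name
  namespace: rook-ceph
spec:
  replicated:
    size: 3                # assumed replica count; must not exceed node count
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block   # rook 1.0 flexvolume provisioner
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: ext4
```

A PVC with `storageClassName: rook-ceph-block` then gets an RBD image carved out of the pool.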
Sometimes an individual container eats an unusually large amount of disk space, and you need to work back from a docker overlay2 directory name to the container name.
First cd into the overlay2 directory (its location can be found via docker's config file, /etc/docker/daemon.json), then see who is using the most space:
[root@sh-saas-k8s1-node-qa-04 overlay2]# du -sc * | sort -rn | more
33109420 total
1138888 20049e2e445181fc742b9e74a8819edf0e7ee8f0c0041fb2d1c9d321f73d8f5b
1066548 010d0a26a1fe5b00e330d0d87649fc73af45f9333fd824bf0f9d91a37276af18
943208 030c0f111675f6ed534eaa6e4183ec91d4c065dd4bdb5a289f4b572357667378
825116 0ad9e737795dd367bb72f7735fb69a65db3d8907305b305ec21232505241d044
824756 bf3c698966bc19318f3263631bc285bde07c6a1a4eaea25c4ecd3b7b8f29b3fd
661000 15763b72802e1e71cc943e09cba8b747779bf80fa35d56318cf1b89f7b1f1e71
575564 02eaa52e2f999dc387a9dee543028bada0762022cef1400596b5cc18a6223635
486780 4353c30611d7f51932d9af24bb1330db0fdb86faa9d9cae02ed618fa975c697a
486420 562a8874cc345b8ea830c1486c42211b288c886c5dca08e14d7057cacab984c1
486420 4f897e8cd355320b0e7ee1ecc9db5c43d5151f9afa29f1925fe264c88429be4c
448652 a8d0596d123fcc59983ce63df3f3acd40d5c930ed72874ce0a9efbc3234466de
448296 851cc4912edb9989e120acf241f26c82e9715e7fcb1d2bd19a002fcfb894f1f4
417780 20608baacae6bafcd4230a18a272857bc75703a6eefef5c9b40ba4ea19496b11
387388 43a8a76de3b5531e4c12f956f7bfcfcdb8fd38548faf20812cafa9a39813abc5
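The pipeline above can be reproduced on a synthetic directory tree; the directory names here are made up, and the docker inspect command that maps a layer hash back to a container is shown as a comment since it needs a docker host:

```shell
# Build two fake "layer" directories of different sizes.
demo=$(mktemp -d)
mkdir -p "$demo/aaa" "$demo/bbb"
dd if=/dev/zero of="$demo/aaa/big"   bs=1024 count=2048 2>/dev/null  # ~2 MB
dd if=/dev/zero of="$demo/bbb/small" bs=1024 count=16   2>/dev/null  # ~16 KB
# Same pipeline as above: biggest directory first, take its name.
largest=$(cd "$demo" && du -s -- * | sort -rn | head -n 1 | awk '{print $2}')
echo "largest layer dir: $largest"
# On a real node, map the hash du printed back to a container:
#   docker ps -qa | xargs docker inspect \
#     --format '{{.Name}} {{.GraphDriver.Data.MergedDir}}' | grep <layer-hash>
rm -rf "$demo"
```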
Sometimes you need to create a read-only service account on a k8s cluster, for example for developers. Here is how:
First create oms-viewonly.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oms-viewonly
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - nodes
  - persistentvolumeclaims
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - deployments/scale
  - replicasets
  - replicasets/scale
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - deployments/scale
  - ingresses
  - networkpolicies
  - replicasets
  - replicasets/scale
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oms-read
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: oms-read
  labels:
    k8s-app: oms-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oms-viewonly
subjects:
- kind: ServiceAccount
  name: oms-read
  namespace: kube-system
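A sketch of how the account might be applied and its token retrieved for a kubeconfig. The kubectl lines need cluster access, so they are shown as comments; the base64 decode step, which is the same operation performed on the secret's token field, runs locally:

```shell
# Requires a cluster; shown for reference (names match the manifest above):
#   kubectl apply -f oms-viewonly.yaml
#   kubectl -n kube-system get sa oms-read -o jsonpath='{.secrets[0].name}'
#   kubectl -n kube-system get secret <secret-name> -o jsonpath='{.data.token}' | base64 -d
#   kubectl auth can-i delete pods --as=system:serviceaccount:kube-system:oms-read  # should print "no"
# The token stored in the secret is base64-encoded; decoding works like this:
sample=$(printf 'demo-token' | base64)
decoded=$(printf '%s' "$sample" | base64 -d)
echo "$decoded"
```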