Sometimes you need to create a read-only service account on a k8s cluster, for example for developers. Here is how to create one:

First, create oms-viewonly.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oms-viewonly
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - nodes
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - deployments/scale
  - replicasets
  - replicasets/scale
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - deployments/scale
  - ingresses
  - networkpolicies
  - replicasets
  - replicasets/scale
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oms-read 
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oms-read
  labels: 
    k8s-app: oms-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oms-viewonly
subjects:
- kind: ServiceAccount
  name: oms-read
  namespace: kube-system
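
Once the manifest is ready, apply it and verify what the service account can and cannot do. A minimal sketch (the account name and namespace come from the manifest above):

kubectl apply -f oms-viewonly.yaml

# should print "yes": read-only verbs are granted cluster-wide
kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:kube-system:oms-read

# should print "no": write verbs are not part of the role
kubectl auth can-i delete pods --as=system:serviceaccount:kube-system:oms-read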


Common batch operations with k8s commands

Contents

1. Customizing kubectl output columns

Create a new file named custom-columns-depolyment-requests.txt:

NAMESPACE                            NAME                                        Request_CPU 
metadata.namespace                   metadata.name                               spec.template.spec.containers[0].resources.requests.cpu

Then use the -o custom-columns-file=custom-columns-depolyment-requests.txt option to view the result:

[root@sh-saas-k8scs2-master-online-01 ~]# kubectl get deployments --all-namespaces -o custom-columns-file=custom-columns-depolyment-requests.txt  | grep -v -E "00m" | grep -v "none"
NAMESPACE                          NAME                                                        Request_CPU
public-oa-node-online              public-oa-weekly-service-node-online                        250m
saas-caiwu-tomcat-online           saas-caiwu-wmpay-adapter-tomcat-online                      1
saas-caiwu-tomcat-online           saas-caiwu-wmpay-service-tomcat-online                      1
saas-ec-tomcat-online              saas-ec-coupon-management-tomcat-online                     1
saas-ec-tomcat-online              saas-ec-discount-service-tomcat-online                      1
saas-ec-tomcat-online              saas-ec-guide-management-tomcat-online                      1
saas-ec-tomcat-online              saas-ec-merchant-management-tomcat-online                   1
saas-ec-tomcat-online              saas-ec-openapi-tomcat-online                               4
saas-ec-tomcat-online              saas-ec-platform-web-tomcat-online                          2
saas-ec-tomcat-online              saas-ec-promotion-service-tomcat-online                     1
saas-inter-tomcat-ol               saas-interactive-address-service-tomcat-online              1
saas-inter-tomcat-ol               saas-interactive-address-web-tomcat-online                  1
saas-inter-tomcat-ol               saas-interactive-logistics-service-tomcat-online            1
saas-inter-tomcat-ol               saas-interactive-logistics-web-tomcat-online                1
saas-jcpt-tomcat-online            saas-jcpt-uc-base-core-tomcat-online                        1
saas-jcpt-tomcat-online            saas-mc-base-core-service-tomcat-online                     1
saas-kf-tomcat-online              saas-kf-aal-tomcat-online                                   1
saas-kf-tomcat-online              saas-kf-base-tomcat-online                                  1


Inside a container, commands such as ping and telnet are often missing, which makes network debugging quite limited. You can work around this by using shell redirection to talk to TCP/UDP services.

Linux has a rather special kind of device file:

/dev/[tcp|udp]/host/port — simply reading from or writing to this path makes the system try to connect to the machine host on the given port. If the host and port are reachable, a socket connection is established and a corresponding entry appears under /proc/self/fd.

[chengmo@centos5 shell]$ cat</dev/tcp/127.0.0.1/22
SSH-2.0-OpenSSH_5.1
#my machine's ssh port is 22
#note: /dev/tcp does not really exist as a directory; it is a special device handled by the shell
[chengmo@centos5 shell]$ cat</dev/tcp/127.0.0.1/223
-bash: connect: Connection refused
-bash: /dev/tcp/127.0.0.1/223: Connection refused
#port 223 is not listening, so the connection fails
 
[chengmo@centos5 shell]$ exec 8<>/dev/tcp/127.0.0.1/22
[chengmo@centos5 shell]$ ls -l /proc/self/fd/
total 0
lrwx------ 1 chengmo chengmo 64 10-21 23:05 0 -> /dev/pts/0
lrwx------ 1 chengmo chengmo 64 10-21 23:05 1 -> /dev/pts/0
lrwx------ 1 chengmo chengmo 64 10-21 23:05 2 -> /dev/pts/0
lr-x------ 1 chengmo chengmo 64 10-21 23:05 3 -> /proc/22185/fd
lrwx------ 1 chengmo chengmo 64 10-21 23:05 8 -> socket:[15067661]
 
#file descriptor 8 now holds an open socket channel; it is both readable and writable because it was opened with "<>"
[chengmo@centos5 shell]$ exec 8>&-
#close the channel
[chengmo@centos5 shell]$ ls -l /proc/self/fd/
total 0
lrwx------ 1 chengmo chengmo 64 10-21 23:08 0 -> /dev/pts/0
lrwx------ 1 chengmo chengmo 64 10-21 23:08 1 -> /dev/pts/0
lrwx------ 1 chengmo chengmo 64 10-21 23:08 2 -> /dev/pts/0
lr-x------ 1 chengmo chengmo 64 10-21 23:08 3 -> /proc/22234/fd
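
This trick can be wrapped into a tiny port checker for images that lack ping/telnet. A minimal sketch assuming bash is available in the container (check_port is just an illustrative helper name):

# bash-only: /dev/tcp is implemented by bash itself, not by the kernel
check_port() {
  # open the connection in a subshell so the fd is closed automatically;
  # the exit status tells us whether host:port accepted the connection
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 is open"
  else
    echo "$1:$2 is closed"
  fi
}

check_port 127.0.0.1 22     # open
check_port 127.0.0.1 223    # closed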


First use ps auxw to find the process ID, then run:
docker ps -q | xargs docker inspect --format '{{.State.Pid}}, {{.Name}}' | grep "^%PID%"
where %PID% is the container PID found with ps.

If the PID found with ps auxw does not match any container PID, it is usually because the process is not the container's PID 1. In that case, run
pstree -sg <PID>
to find the ancestor PID first, then run:
docker ps -q | xargs docker inspect --format '{{.State.Pid}}, {{.Name}}' | grep "^%PID%"
with that ancestor PID, and you will get the container.
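
The two steps can also be combined into a small helper that walks up the process tree until it reaches a PID that docker knows as a container init. A sketch only (pid2container is a hypothetical name, not an existing tool):

# map an arbitrary host PID to its container by following PPid links in /proc
pid2container() {
  local pid=$1 table
  table=$(docker ps -q | xargs docker inspect --format '{{.State.Pid}} {{.Name}}')
  while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
    if echo "$table" | grep -q "^$pid "; then
      echo "$table" | grep "^$pid "    # prints "<container init pid> <container name>"
      return 0
    fi
    pid=$(awk '/^PPid:/ {print $2}' /proc/"$pid"/status 2>/dev/null)
  done
  echo "no matching container found for PID $1" >&2
  return 1
}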


CentOS 7 ships with the nsenter command out of the box, so it can be used directly; it makes it easy to enter a Docker container's namespaces.

First, get the container's PID, for example:

[root@sh-saas-k8s1-master-dev-01 ~]# docker ps
CONTAINER ID        IMAGE                                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
f8b1e0b8caa7        nginx                                                                 "nginx -g 'daemon of…"   33 seconds ago      Up 33 seconds       80/tcp              nginx
[root@sh-saas-k8s1-master-dev-01 ~]# pid=$(docker inspect --format "{{ .State.Pid }}" f8b1e0b8caa7)
[root@sh-saas-k8s1-master-dev-01 ~]# echo $pid
16042

Then enter the container with nsenter:

[root@sh-saas-k8s1-master-dev-01 ~]# nsenter --target $pid --mount --uts --ipc --net --pid
mesg: ttyname failed: No such file or directory
root@f8b1e0b8caa7:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@f8b1e0b8caa7:/# ip a
-bash: ip: command not found
root@f8b1e0b8caa7:/# exit
logout
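
The two steps above are easy to wrap into a shell function; docker_enter is just a convenience name used in this sketch, not a standard command:

# enter the main namespaces of a container given its name or ID
docker_enter() {
  local pid
  pid=$(docker inspect --format '{{.State.Pid}}' "$1") || return 1
  nsenter --target "$pid" --mount --uts --ipc --net --pid
}

# usage: docker_enter nginx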


When running Docker, a single host usually runs many containers, and a process inside one of them can drive up the load of the whole host. A single command is enough to find the culprit.

#Find the container ID

docker inspect -f "{{.Id}} {{.State.Pid}} {{.Name}} " $(docker ps -q) |grep <PID>

#Find the k8s pod name

docker inspect -f "{{.Id}} {{.State.Pid}} {{.Config.Hostname}}" $(docker ps -q) |grep <PID>

#If the PID belongs to a child process running inside a container, docker inspect will not show it; use docker top instead:

for i in  `docker ps |grep Up|awk '{print $1}'`;do echo \ &&docker top $i &&echo ID=$i; done |grep -A 10 <PID>
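
To go straight from the busiest process on the host to its pod, the pieces above can be chained together; a rough sketch (picking the top CPU consumer with ps is only an illustration, adjust the sort key as needed):

# take the PID of the top CPU consumer on the host, then map it to a pod hostname
PID=$(ps aux --sort=-%cpu | awk 'NR==2 {print $2}')
docker inspect -f "{{.Id}} {{.State.Pid}} {{.Config.Hostname}}" $(docker ps -q) | grep " $PID "
# if nothing matches, the PID is a child process: fall back to the docker top loop above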


I recently wrote a Zabbix monitoring script for Kubernetes clusters, mainly for alerting; performance monitoring is handled by other means.

GitHub URL:

https://github.com/farmerluo/k8s_zabbix

About k8s_zabbix

k8s_zabbix uses Zabbix to monitor the status of Kubernetes ingresses, HPAs, pods and so on.

Template Check K8S Cluster Status.xml: the Zabbix template; import this file into Zabbix

check_k8s_status.py: the Kubernetes monitoring script

userparameter_k8s.conf: the configuration file for the Zabbix agent; mind the script path it references
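
For orientation only, a Zabbix agent UserParameter entry normally maps an item key to the script's absolute path, roughly like the placeholder below (the key name and path are illustrative, not the actual entries shipped in userparameter_k8s.conf):

# illustrative placeholder, not the repo's real configuration
UserParameter=k8s.cluster.status,/etc/zabbix/scripts/check_k8s_status.py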

