Configuration in Kubernetes

Kubernetes configuration example

RBAC for your node

Option A. Restricted, namespace-scoped access for the node

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-runtime-access
  namespace: runtime
  labels:
    app: teamshell-node
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: teamshell-node
  namespace: runtime
rules:
  - apiGroups: ["", "apps", "autoscaling", "batch", "extensions"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Grants RO access to the metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: teamshell-node
rules:
  # Grants RO access to nodes, namespaces, and pods
  - apiGroups: [""]
    resources: ["nodes", "namespaces", "pods"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to RBAC resources
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "roles", "clusterrolebindings", "rolebindings"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to CRD resources
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
  # Grants RO access to the metrics server (if present)
  - apiGroups: ["metrics.k8s.io"]
    resources: ["nodes", "pods"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crb-developer-node-runtime-access
subjects:
- kind: ServiceAccount
  name: node-runtime-access
  namespace: runtime
roleRef:
  kind: ClusterRole
  name: teamshell-node
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer-node
  namespace: runtime
subjects:
- kind: ServiceAccount
  name: node-runtime-access
  namespace: runtime
roleRef:
  kind: Role
  name: teamshell-node
  apiGroup: rbac.authorization.k8s.io
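
To verify the bindings, you can impersonate the ServiceAccount with kubectl auth can-i. A quick sanity check, assuming the manifests above are saved to node-rbac.yaml (a placeholder file name):

kubectl apply -f node-rbac.yaml
# Allowed by the namespace-scoped Role
kubectl auth can-i create deployments -n runtime \
  --as=system:serviceaccount:runtime:node-runtime-access   # yes
# Allowed by the read-only ClusterRole
kubectl auth can-i list nodes \
  --as=system:serviceaccount:runtime:node-runtime-access   # yes
# Denied: outside the granted rules
kubectl auth can-i create secrets -n kube-system \
  --as=system:serviceaccount:runtime:node-runtime-access   # no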

Option B. Cluster-wide access

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teamshell-rbac
subjects:
  - kind: ServiceAccount
    # The ServiceAccount's `metadata.name`
    name: default
    # The ServiceAccount's `metadata.namespace`
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
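
Note that this binds the default ServiceAccount of the default namespace to the built-in cluster-admin role, so the node gets unrestricted access to the cluster. A quick check that the binding is active (an impersonation sketch, same idea as in option A):

kubectl auth can-i '*' '*' --as=system:serviceaccount:default:default   # yes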

ConfigMap with the node configuration and key

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-configmap
  namespace: runtime
data:
  settings.yaml: |
    services:
      - name: PROD-node-k8s-bash
        command: /bin/bash
        save_history: true
        env:
          TERM: xterm
        init_commands:
          - /usr/bin/apt-get install -y curl
      - name: PROD-kube-prompt
        command: kube-prompt
        save_history: true
        env:
          TERM: xterm
        tags:
          - k8s
      - name: PROD-django-shell
        command: kubectl exec -n runtime -c site -i -t deploy/site -- /usr/local/bin/python manage.py shell_plus
        save_history: true
        env:
          TERM: xterm
      - name: PROD-k9s
        command: k9s --namespace runtime
        save_history: true
        env:
          TERM: xterm
        tags:
          - k8s
    server: shell1.teamshell.com:7890
    secret: 1234567csam846nlkef8fehc6wcio
    keyFile: "/etc/teamshell/key.pem"
    node_name: k8s_google_cloud

  key.pem: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEogIBAAKCAQEAy1QM1uxIV06uKUpeWHAm0/KFWUdmkqNbqOjvLf2oGUEHCqWa
    i9Ill54KYOU1p5LLMBAKqdadvhC4MPceop7IhFHUDn0UwNL2TBktGKHjseZMI9xX
    dP3sGlhqQeBlwK3ZuaEELlzsqPekBKPjjHIswvEtRCsoS0Nur/bj2jIgC6KIsUqc
    LWzB+HeNSX9krz1EnHCQP+TPedic2oKPJUiMUhGs5ojhvS7Xa08cFZpEG5FgQ/E7
    /xYBwsOdv5TL7Yzr6VLUnRR8YYBpcFy6Ld3sIwAbnSw5HNcNlpVjYy7uS0CvVdsW
    4wefJgxG76srJvaqsYc0G4cd+3fTorvKBBN+qQIDAQABAoIBABOcYrSkC4IoO5IN
    8zWSc5xDqurmgRUpnmCXPuJvsbPl2rkDKfnMZgXOn2+jC97CwMg889pXsdUwJaPL
    YfALYomonsxa8hJ7BnhmgTv7+UsiTDu1N9Y323rbBjyeWRIRcot95TpXihft8qrj
    58GXDYwr2NLXWsaPlXBxXp6f8QlOEsHNUSAtKtRoG2ZaUqGLEjZ1yUWPENh3Rx98
    iyex+bz2OeY3CNyOrYnqqD0wWESOYx4a+dkXklryCDXDa1A/qCUcIsBdGKqMqfUS
    79xWWGhVZlh/3t6ztxxPq0/KK+uIbzfj1/1ScWZ6BQAmmPQedwQv9RuQFcHu9J9O
    zp8k9YECgYEA0/DEOpkLoH0Pg28vG6FyIcwv2BJDDjC0mACxQb5biM8z1WiAuEmG
    vLsbYEUdExT3/9btcRan03B6/lG3u5kYs0XpYmxFFf0mwJKa4U+U5/xC55gXnqe4
    z84kvG87FTOqhzObCpJsrAk+BwtpjW2Fu+VfHoZf+N/Bg/2NdWPDy5ECgYEA9Zj0
    GS+vS56Fu5ab5j1P3TaOP0Wid7dgYDcYRdGjSUuLjTKVTRFXbK0yeMsUMnzfdGmw
    pe3CZxTUYbcu85ahzucJD9YtcIgEbWHpqUUQ0Ap6Wc9zJ9wqrrIXz4PpAXuGp8bC
    I4O9JJDchtcw7QjikfH8DIey+murrXkPJgNFBZkCgYAZGFkj1xtZVXWQyol5bBRn
    jBXeL8tg0sOPfAWBE1kjSeWJT2Zua8ZYco87RvO6XrE+yeQzj8svbhIiKurme6pB
    D/YigU9s2xzLkJBmPVYUYfpKEeg6EQIBGRegeEs2p3d5qtKg3dWgSTfe/arx5BYB
    uZcZti+G+HeheVRWogl9AQKBgBVL/SbN9sJ07ZZVspElke5Z073y2OLOuQHZQcaU
    Jjet4F0PHNlA/rbC0hSdb5PtNidPHu1Uj74GlWAf1Wd0EXXynNuNtAUFFnUxv2RZ
    Q9r2faOaFJ9JPT5G3T+2bZceUg/scVtJCjbIhQcAVBLJ6XPEaxnfDosemRWC4Ohn
    8i1JAoGAFgIJeb9hML8lnve4TW++NMOd1zsaDu07YO/XxQkJkB3uYMuHZxteMNxi
    YMqE7l70YCj4sjtcw74p3eHQ3N/ZuvwIkZjzv8dPCd1f2PciUenXeNSitCl8mtK3
    jPUVkd0TeAQ24G+pWVmcbDdRUs3MMRDEs4yzSS+5sQA83QSnBVI=
    -----END RSA PRIVATE KEY-----
---
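
The same ConfigMap can also be generated from local files instead of being maintained inline (a sketch, assuming settings.yaml and key.pem sit in the current directory):

kubectl create configmap node-configmap -n runtime \
  --from-file=settings.yaml \
  --from-file=key.pem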

Node deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: teamshell-node
  namespace: runtime
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teamshell-node
  template:
    metadata:
      labels:
        app: teamshell-node
    spec:
      # Specify only if you follow option A
      serviceAccountName: node-runtime-access
      volumes:
        - name: config-volume
          configMap:
            name: node-configmap
      containers:
      - name: teamshell-node
        image: teamshell/node
        env:
          - name: GRPC_SERVER
            value: grpc1.teamshell.com:443
        command:
        - /bin/teamshell
        - node
        - start
        - -config
        - /etc/teamshell/settings.yaml
        volumeMounts:
          - mountPath: /etc/teamshell
            name: config-volume
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "2"
          requests:
            cpu: 50m
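
Once everything is applied, the node pod should start in the runtime namespace and connect to the server configured in settings.yaml. A quick check (rbac.yaml, configmap.yaml, and deployment.yaml are placeholder file names for the manifests above):

kubectl apply -f rbac.yaml -f configmap.yaml -f deployment.yaml
kubectl -n runtime rollout status deploy/teamshell-node
kubectl -n runtime logs deploy/teamshell-node --tail=50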