Switching Kubernetes contexts

I often need to switch between several Kubernetes clusters, and a single kubeconfig file can hold the access details for all of them. One caveat: keep your kubectl version in step with the cluster's, otherwise API changes may make commands behave differently than you expect.

This post records the steps for switching between clusters.

kubectl config view

kubectl config get-contexts                          # list the available contexts
kubectl config current-context                       # show the current context
kubectl config use-context my-cluster-name           # make my-cluster-name the default context

The default kubeconfig

Open ~/.kube/config; by default it holds the access details for the local cluster, with a structure like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: path/to/ca-file # or inlined as base64 under certificate-authority-data:
    server: https://1.2.3.4:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: context-cluster1-admin

current-context: ""
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: path/to/cert-file # or inlined as base64 under client-certificate-data:
    client-key: path/to/key-file # or inlined as base64 under client-key-data:
- name: experimenter
  user:
    password: some-password
    username: exp
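As a quick illustration of this layout, the context names can even be scraped out of such a file with plain awk (a throwaway sketch on demo data; kubectl config get-contexts is of course the real tool):

```shell
# Build a minimal kubeconfig-shaped file (demo data only).
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://1.2.3.4:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: context-cluster1-admin
kind: Config
EOF

# Print every "name:" value that appears under the contexts: section.
awk '/^contexts:/{inctx=1; next} /^[a-z]/{inctx=0} inctx && /name:/{print $2}' \
    /tmp/demo-kubeconfig
# prints: context-cluster1-admin
```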

Adding access details for more clusters

Once you understand the file's structure, adding entries is straightforward: just pick names for the new entries that don't clash with the existing ones.

Here is my example.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://10.181.1.131:6443
  name: cluster1
- cluster:
    certificate-authority: /vagrant/n1.ca
    server: https://10.181.2.131:6443
  name: n1
- cluster:
    certificate-authority-data: xxx
    server: https://10.181.3.131:6443
  name: n2
contexts:
- context:
    cluster: cluster1
    user: admin
  name: context-cluster1-admin
- context:
    cluster: n2
    user: admin-n2
  name: n2
- context:
    cluster: n1
    user: kubernetes-admin
  name: n1
current-context: context-cluster1-admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: xxx
    client-key-data: xxx
- name: admin-n2
  user:
    client-certificate-data: xxx
    client-key-data: xxx
- name: kubernetes-admin
  user:
    client-certificate: /vagrant/n1.crt
    client-key: /vagrant/n1.key

Now all three contexts show up in kubectl config get-contexts.

Switch between them with:

kubectl config use-context n1
kubectl config use-context n2

Miscellaneous

A note on decoding base64 values back into files: take the three base64 values from the kubeconfig (certificate-authority-data, client-certificate-data, client-key-data) and decode each one, saving them in order as k8s-cluster.ca, k8s.crt, and k8s.key. The decode command is:

echo '${value-to-decode}' | base64 -d
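A quick round trip shows what is happening (stand-in bytes, not a real certificate):

```shell
# Encode some stand-in bytes, as they would appear in
# certificate-authority-data, then decode them back into a file.
enc=$(printf 'demo-ca-bytes' | base64)
echo "$enc" | base64 -d > /tmp/demo.ca
cat /tmp/demo.ca
# prints: demo-ca-bytes
```

On some BSD/macOS systems the decode flag is -D rather than -d.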


grep: showing lines before and after a match

grep -C n search file # show the matching line plus n lines of context before and after
grep -B n search file # show the matching line plus the n lines before it
grep -A n search file # show the matching line plus the n lines after it
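For instance, on a five-line file:

```shell
printf 'one\ntwo\nthree\nfour\nfive\n' > /tmp/grep-demo.txt

grep -C 1 three /tmp/grep-demo.txt   # two, three, four
grep -B 1 three /tmp/grep-demo.txt   # two, three
grep -A 1 three /tmp/grep-demo.txt   # three, four
```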

Installing KubeSphere on a local Kubernetes cluster

KubeSphere is QingCloud's open-source Kubernetes management platform, and it can manage vanilla Kubernetes clusters. I've liked their UI design and have followed the project since KubeSphere 1.0; back then it couldn't be installed locally, so I could only try it out in the cloud with a voucher from their sales team. Recently I tried installing KubeSphere 3.0 on my clean local Kubernetes cluster. There were a few bumps along the way, but it's now up and running. This post records the key points.

My Kubernetes version is v1.20.2. There is no external storage available, so local-storage is all I have to work with.

I. KubeSphere prerequisites

  • Free node resources: 1 CPU / 2 GB memory

  • Install a default storage class

    Create the StorageClass:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    

    Reference: the Kubernetes Storage Classes documentation.

    Set it as the default StorageClass:

    kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    

    Verify that it is in place:

    kubectl get sc
    


II. Creating the PVs

After some stumbling around I ended up with six PVs in total.

They are used by redis, Prometheus, and openldap. If you later install service mesh (optional), you also need to create PVs for es.

In the YAML below, remember to adjust the node-affinity settings for your own nodes.

  1. Create the directories

    mkdir -p /app/redis /app/prometheus1 /app/prometheus2 /app/openldap /app/es/data /app/es/data2 
    
  2. redis, 10 Gi

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: redis-pv
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage
      local:
        path: /app/redis
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 10.184.0.131
    
  3. Prometheus, 20 Gi

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus-pv-1
    spec:
      capacity:
        storage: 20Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage
      local:
        path: /app/prometheus1
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 10.184.0.131
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus-pv-2
    spec:
      capacity:
        storage: 20Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage
      local:
        path: /app/prometheus2
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 10.184.0.131
    
  4. openldap, 10 Gi

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: openldap-pv
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage
      local:
        path: /app/openldap
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 10.184.0.133
       
    
  5. service mesh / es, 20 Gi (optional)

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: es-data-pv
    spec:
      capacity:
        storage: 20Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage
      local:
        path: /app/es/data
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 10.184.0.131
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: es-data2-pv
    spec:
      capacity:
        storage: 20Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage
      local:
        path: /app/es/data2
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 10.184.0.131
              - 10.184.0.132
              - 10.184.0.133
    
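For reference, a claim that binds to one of these PVs only needs to match the storage class, access mode, and a size the PV can satisfy. A hypothetical claim (the name is made up; KubeSphere's components create their own claims, this is just to illustrate the binding):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-redis-claim        # hypothetical name, for illustration
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi             # fits the 10Gi redis-pv above
```

Because the class uses volumeBindingMode: WaitForFirstConsumer, the claim stays Pending until a pod that uses it is scheduled; only then is it bound to a node-local PV.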

III. Installing KubeSphere

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

Wait about 20 minutes, then check the pods:

kubectl get po --all-namespaces


You can follow the installation from the command line. At first there is only one pod, ks-installer, which runs ansible inside the pod. Tail its logs and fix any errors it reports:

kubectl logs -f -n kubesphere-system  ks-installer-xxx

When the installation finishes, the log ends with a success message.

IV. A first look at the UI

(screenshots of the KubeSphere console)

V. Enabling service mesh

At this point everything is usable. I then tried enabling service mesh, and three pods never managed to start; since I was just playing around, I didn't dig into it. Note that KubeSphere's service mesh still ships an older Istio, with the components still split into separate processes, not the single istiod binary introduced in Istio 1.5.

How to enable it is covered in the docs: https://kubesphere.io/docs/pluggable-components/service-mesh/

From the top-left of the page: Platform -> Cluster Management -> CRDs -> search for clusterconfiguration -> open it.


Scroll to the bottom and change the service mesh enabled field to true.
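The fragment to change looks roughly like this (abridged; assuming the KubeSphere 3.0 ClusterConfiguration layout):

```yaml
spec:
  servicemesh:
    enabled: true   # was false; saving this triggers ks-installer
```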


Watch the installation progress:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

VI. Problems encountered

  1. Cannot log in: account not active

    I read through quite a few GitHub issues; the common thread is a storage or networking problem with the openldap container. I hadn't attached a PV at first, which was probably the cause here.

    In fact it still failed even after attaching the PV; it only started working after I restarted all of the KubeSphere pods.

  2. Three service mesh pods still fail to start.

    I was only doing this in passing and don't actually use it, so I'm just noting it here and haven't looked into it.

VII. Uninstalling KubeSphere

Just use KubeSphere's uninstall script:

https://github.com/kubesphere/ks-installer/blob/release-3.1/scripts/kubesphere-delete.sh
