K8s-only Deployment

This guide is intended for users who already have some K8s experience. It does not explain how to set up a K8s cluster, nor does it include tutorials on kubectl and other commands, and it uses K8s terminology that assumes a working knowledge of the platform.

This article focuses on deploying GZCTF in a K8s cluster. For configuring GZCTF itself, see the Quick Start guide.

Deployment Notes

  1. GZCTF supports multi-instance deployment, but in our testing the most stable setup so far is a single instance co-located on the same node as the database, so this guide uses a single-instance deployment as its example.

  2. For a multi-instance deployment, all instances must mount shared storage as their PV, or file consistency cannot be guaranteed. You must also deploy Redis, or cache consistency across instances cannot be guaranteed.

  3. For a multi-instance deployment, the load balancer must be configured with sticky sessions, or websockets cannot be used to receive real-time data.

  4. If you would rather avoid all of that, just deploy a single instance!

  5. Choosing K8s for deployment usually means you need a larger number of Pods, so pay close attention to the following settings:

    • Pass --kube-controller-manager-arg=node-cidr-mask-size=16 at install time. The default node CIDR is /24, which supports at most 255 Pods per node. This setting cannot be changed after installation.

      INSTALL_K3S_EXEC="--kube-controller-manager-arg=node-cidr-mask-size=16"
    • Adjust the maxPods value yourself as well, or Pods may fail to schedule once the per-node limit is reached.

      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      maxPods: 800
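With k3s, both of the settings above can also be kept in configuration files rather than passed as flags. The sketch below assumes k3s defaults; the `kubelet-config.yaml` filename is an arbitrary choice, and it would contain the KubeletConfiguration shown above:

```yaml
# /etc/rancher/k3s/config.yaml — sketch; verify flag names against your k3s version
kube-controller-manager-arg:
  - "node-cidr-mask-size=16"
kubelet-arg:
  # points the kubelet at a file holding the KubeletConfiguration (maxPods etc.)
  - "config=/etc/rancher/k3s/kubelet-config.yaml"
```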

Deploying GZCTF

  1. Create the namespace and configuration

    apiVersion: v1
    kind: Namespace
    metadata:
      name: gzctf-server
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: gzctf-config
      namespace: gzctf-server
    data:
      appsettings.json: |
        { ... } # contents of appsettings.json
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: gzctf-sa
      namespace: gzctf-server
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: gzctf-crb
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin # used to access the Kubernetes API
    subjects:
      - kind: ServiceAccount
        name: gzctf-sa
        namespace: gzctf-server
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: gzctf-tls
      namespace: gzctf-server
    type: kubernetes.io/tls
    data:
      tls.crt: ... # base64-encoded TLS certificate
      tls.key: ... # base64-encoded TLS private key
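The tls.crt / tls.key values in the Secret are the base64-encoded file contents on a single line. A minimal sketch of producing them; the self-signed certificate here is purely a placeholder for illustration, so substitute your real certificate and key:

```shell
# Create a throwaway self-signed certificate for illustration only;
# use your real certificate and key in production.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=ctf.example.com" -keyout tls.key -out tls.crt

# Single-line base64 values for the Secret's data fields (GNU coreutils base64).
base64 -w0 tls.crt
echo
base64 -w0 tls.key
```

In practice it is usually simpler to let kubectl build the Secret for you: `kubectl create secret tls gzctf-tls --cert=tls.crt --key=tls.key -n gzctf-server`.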
  2. Create local PVs (adjust the configuration yourself if you need shared storage for multiple instances)

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gzctf-files-pv
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce # change to ReadWriteMany for multi-instance deployments
      hostPath:
        path: /mnt/path/to/gzctf/files # local path
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gzctf-db-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /mnt/path/to/gzctf/db # local path
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gzctf-files
      namespace: gzctf-server
    spec:
      accessModes:
        - ReadWriteOnce # change to ReadWriteMany for multi-instance deployments
      resources:
        requests:
          storage: 2Gi
      volumeName: gzctf-files-pv
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gzctf-db
      namespace: gzctf-server
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      volumeName: gzctf-db-pv
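For reference, a multi-instance variant of gzctf-files-pv would swap hostPath for a shared storage backend. The sketch below uses NFS; the server address and export path are placeholders, and the matching PVC would also need ReadWriteMany:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gzctf-files-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany # shared across all GZCTF instances
  nfs:
    server: 10.0.0.10          # placeholder NFS server address
    path: /exports/gzctf/files # placeholder export path
```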
  3. Create the GZCTF Deployments

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gzctf
      namespace: gzctf-server
      labels:
        app: gzctf
    spec:
      replicas: 1
      strategy:
        type: RollingUpdate
      selector:
        matchLabels:
          app: gzctf
      template:
        metadata:
          labels:
            app: gzctf
        spec:
          serviceAccountName: gzctf-sa
          nodeSelector:
            kubernetes.io/hostname: xxx # pin to a specific node, forcing co-location with the database
          containers:
            - name: gzctf
              image: registry.cn-shanghai.aliyuncs.com/gztime/gzctf:latest
              imagePullPolicy: Always
              env:
                - name: GZCTF_ADMIN_PASSWORD
                  value: xxx # admin password
                # choose your backend language `en_US` / `zh_CN` / `ja_JP`
                - name: LC_ALL
                  value: zh_CN.UTF-8
              ports:
                - containerPort: 8080
                  name: http
              volumeMounts:
                - name: gzctf-files
                  mountPath: /app/files
                - name: gzctf-config
                  mountPath: /app/appsettings.json
                  subPath: appsettings.json
              resources:
                requests:
                  cpu: 1000m
                  memory: 384Mi
          volumes:
            - name: gzctf-files
              persistentVolumeClaim:
                claimName: gzctf-files
            - name: gzctf-config
              configMap:
                name: gzctf-config
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gzctf-redis
      namespace: gzctf-server
      labels:
        app: gzctf-redis
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: gzctf-redis
      template:
        metadata:
          labels:
            app: gzctf-redis
        spec:
          containers:
            - name: gzctf-redis
              image: redis:alpine
              imagePullPolicy: Always
              ports:
                - containerPort: 6379
                  name: redis
              resources:
                requests:
                  cpu: 10m
                  memory: 64Mi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gzctf-db
      namespace: gzctf-server
      labels:
        app: gzctf-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: gzctf-db
      template:
        metadata:
          labels:
            app: gzctf-db
        spec:
          nodeSelector:
            kubernetes.io/hostname: xxx # pin to a specific node, forcing co-location with GZCTF
          containers:
            - name: gzctf-db
              image: postgres:alpine
              imagePullPolicy: Always
              ports:
                - containerPort: 5432
                  name: postgres
              env:
                - name: POSTGRES_PASSWORD
                  value: xxx # database password; must match the one in appsettings.json
              volumeMounts:
                - name: gzctf-db
                  mountPath: /var/lib/postgresql/data
              resources:
                requests:
                  cpu: 500m
                  memory: 512Mi
          volumes:
            - name: gzctf-db
              persistentVolumeClaim:
                claimName: gzctf-db
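The POSTGRES_PASSWORD above has to agree with the database connection string in appsettings.json from step 1. As a sketch only (field names follow the Quick Start guide; verify them against your GZCTF version), the relevant fragment using the in-cluster Service names might look like:

```json
{
  "ConnectionStrings": {
    "Database": "Host=gzctf-db:5432;Database=gzctf;Username=postgres;Password=xxx",
    "RedisCache": "gzctf-redis:6379"
  }
}
```

Here gzctf-db and gzctf-redis resolve through cluster DNS to the Services defined in the next step.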
  4. Create the Services and Ingress

    apiVersion: v1
    kind: Service
    metadata:
      name: gzctf
      namespace: gzctf-server
      annotations: # enable Traefik sticky sessions
        traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
        traefik.ingress.kubernetes.io/service.sticky.cookie.name: "LB_Session"
        traefik.ingress.kubernetes.io/service.sticky.cookie.httponly: "true"
    spec:
      selector:
        app: gzctf
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: gzctf-db
      namespace: gzctf-server
    spec:
      selector:
        app: gzctf-db
      ports:
        - protocol: TCP
          port: 5432
          targetPort: 5432
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: gzctf-redis
      namespace: gzctf-server
    spec:
      selector:
        app: gzctf-redis
      ports:
        - protocol: TCP
          port: 6379
          targetPort: 6379
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: gzctf
      namespace: gzctf-server
      annotations: # some Traefik TLS settings; adjust them to your needs
        kubernetes.io/ingress.class: "traefik"
        traefik.ingress.kubernetes.io/router.tls: "true"
        ingress.kubernetes.io/force-ssl-redirect: "true"
    spec:
      tls:
        - hosts:
            - ctf.example.com # your domain
          secretName: gzctf-tls # certificate name; create the corresponding Secret yourself
      rules:
        - host: ctf.example.com # your domain
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: gzctf
                    port:
                      number: 8080
  5. Extra Traefik configuration

    For GZCTF to obtain users' real IPs via the XFF header, Traefik must be configured to add the XFF header correctly. Note that the following may not stay current or apply to every Traefik version; it is given as Helm values, so look up the latest configuration method yourself.

    service:
      spec:
        externalTrafficPolicy: Local # required for XFF to work correctly
    deployment:
      kind: DaemonSet
    ports:
      web:
        redirectTo: websecure # redirect HTTP to HTTPS
    additionalArguments:
      - "--entryPoints.web.proxyProtocol.insecure"
      - "--entryPoints.web.forwardedHeaders.insecure"
      - "--entryPoints.websecure.proxyProtocol.insecure"
      - "--entryPoints.websecure.forwardedHeaders.insecure"

Deployment Tips

  1. To have GZCTF create the admin account automatically at initialization, remember to pass the GZCTF_ADMIN_PASSWORD environment variable; otherwise you will need to create the admin account manually.
  2. Use the system log page to check whether users' real IPs are being resolved correctly; if not, review your Traefik configuration.
  3. For monitoring, deploy Prometheus and Grafana yourself and enable Traefik's Prometheus support; you can also monitor challenge containers' resource usage with node exporter.
  4. To automatically roll out GZCTF whenever its configuration changes, see Reloader.
  5. Inside the cluster, you can use https://kubernetes.default.svc.cluster.local:443 as the server field in the cluster configuration file.
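Tip 5 can be combined with the gzctf-sa ServiceAccount from step 1. The sketch below is one way to write such an in-cluster kube-config; the token and CA paths are the standard in-cluster ServiceAccount mount, but verify the field names and how your GZCTF version consumes the file before relying on it:

```yaml
apiVersion: v1
kind: Config
current-context: in-cluster
clusters:
  - name: in-cluster
    cluster:
      server: https://kubernetes.default.svc.cluster.local:443
      certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
contexts:
  - name: in-cluster
    context:
      cluster: in-cluster
      user: gzctf-sa
users:
  - name: gzctf-sa
    user:
      tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
```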