
ArgoCD Standalone Management Mode

Advantage: consumes the official Helm repositories directly, with 100% native chart rendering.

The layout is similar to FluxCD's: configuration is split into two kinds of repositories, one for ArgoCD itself and one for cluster applications, and each cluster gets its own dedicated GitLab repository.

Repository Layout

Repository 1: argocd-config.git (operations only)

  • Contents: ArgoCD's own configuration (AppProject, Cluster, RBAC, etc.)
  • Permissions: only core operations staff can modify it
  • Sync: manual kubectl apply (version-controlled only; not wired into ArgoCD auto-sync)

argocd-config.git
├── projects/
│   └── mix-project.yaml
├── clusters/
│   └── mix-cn-gz-cluster.yaml
└── rbac/
    └── mix-team-rbac.yaml
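
Since this repo is deliberately outside ArgoCD's auto-sync, its manual sync amounts to applying the manifests by hand. A sketch, assuming the file layout above and an existing argocd namespace:

```shell
# Manually apply the ops-only configuration; this repo is intentionally
# NOT managed by ArgoCD itself. Paths follow the tree above.
kubectl apply -f projects/mix-project.yaml -n argocd
kubectl apply -f clusters/mix-cn-gz-cluster.yaml -n argocd
kubectl apply -f rbac/mix-team-rbac.yaml -n argocd
```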

Repository 2: mix-project.git (business / cluster specific)

  • Contents: all application configuration for the cluster (Root App, Harbor, GitLab, etc.)
  • Permissions: dev / test can modify their respective application directories
  • Sync: the Root App auto-discovers child apps and syncs them automatically

mix-project.git
├── mix-cn-gz/
│   ├── root-app.yaml
│   ├── harbor/
│   │   └── app.yaml
│   └── gitlab/
│       └── app.yaml
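
Because the Root App auto-discovers anything under mix-cn-gz/, onboarding a new application is a pure Git operation: add a directory with an app.yaml and merge. A hypothetical example (the redis/ directory, chart, and versions are illustrative, not part of the original layout; the chart repo would also need to be added to the AppProject's sourceRepos):

```yaml
# mix-cn-gz/redis/app.yaml -- hypothetical new application; once merged to
# main, the Root App picks it up automatically with no ArgoCD-side changes.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mix-cn-gz-redis
  namespace: argocd
spec:
  project: mix-project
  source:
    repoURL: https://charts.bitnami.com/bitnami
    chart: redis
    targetRevision: 19.0.0
  destination:
    server: https://kubernetes.default.svc
    namespace: redis
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true
```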

Core Advantages

  • Thorough permission isolation
  • Clear, predictable sync mechanism
  • Leaves ArgoCD's core logic untouched
  • Easy to extend to new clusters and applications

Key Configuration Files

Note: these two commands must be run manually (apply the AppProject first, then the Root App):

kubectl apply -f mix-project.yaml -n argocd
kubectl apply -f root-app.yaml -n argocd

root-app.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-mix-cn-gz
  namespace: argocd
spec:
  project: mix-project

  source:
    repoURL: https://gitlab.infraserviceonline.com/infra-team/mix-project.git
    targetRevision: main
    path: mix-cn-gz
    directory:
      recurse: true
      # Note: include/exclude globs are matched against the path relative to
      # `path`; depending on the ArgoCD version's glob handling, '*' may not
      # cross '/', in which case subdirectory apps (harbor/app.yaml) need a
      # pattern like '*/*.yaml' instead of '*.yaml'.
      include: '*.yaml'
      exclude: 'root-app.yaml'   # the Root App must not manage itself

  destination:
    server: https://kubernetes.default.svc

  syncPolicy:
    automated:
      prune: true
      selfHeal: true
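
After the manual bootstrap, the Root App and its discovered children can be inspected from the CLI; a quick check using the standard argocd CLI:

```shell
# The root app plus one child app per discovered directory
# (harbor, gitlab, ...) should appear in the list.
argocd app list
argocd app get root-mix-cn-gz
```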

mix-project.yaml

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: mix-project
  namespace: argocd
spec:
  description: "Mix cluster business project"
  sourceRepos:
    - https://gitlab.infraserviceonline.com/infra-team/mix-project.git
    - https://helm.goharbor.io
    - https://charts.gitlab.io/
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "*"
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"
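
The rbac/mix-team-rbac.yaml referenced in the ops repo is not shown in the original; a minimal sketch of what it could contain, assuming the standard argocd-rbac-cm policy.csv format (role and group names are illustrative):

```yaml
# Hypothetical rbac/mix-team-rbac.yaml: grant the mix team day-to-day
# rights on apps in mix-project without access to ArgoCD's own config.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  policy.csv: |
    p, role:mix-team, applications, get,  mix-project/*, allow
    p, role:mix-team, applications, sync, mix-project/*, allow
    g, mix-team, role:mix-team
```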

app.yaml (Harbor example)

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mix-cn-gz-harbor
  namespace: argocd
spec:
  project: mix-project

  source:
    repoURL: https://helm.goharbor.io
    chart: harbor
    targetRevision: 1.18.3
    helm:
      releaseName: harbor
      values: |
        expose:
          type: clusterIP
          tls:
            enabled: false
        externalURL: https://harbor.infraserviceonline.com

        database:
          type: external
          external:
            host: "pg-test-rw.postgresql.svc.cluster.local"
            port: 5432
            username: "harbor"
            password: "Rg3lub2dtE"
            coreDatabase: "registry"
            sslmode: "disable"

        redis:
          type: external
          external:
            addr: "valkey-0.valkey-headless.valkey-replica.svc.cluster.local:6379"
            sentinelMasterSet: ""
            coreDatabaseIndex: "0"
            jobserviceDatabaseIndex: "1"
            registryDatabaseIndex: "2"
            trivyAdapterIndex: "5"
            password: ""

        persistence:
          enabled: true
          resourcePolicy: "keep"
          persistentVolumeClaim:
            registry:
              storageClass: "-"
            jobservice:
              jobLog:
                storageClass: "rook-ceph-block"
                accessMode: ReadWriteOnce
                size: 1Gi
            trivy:
              storageClass: "rook-ceph-block"
              accessMode: ReadWriteOnce
              size: 5Gi

          imageChartStorage:
            disableredirect: true
            type: s3
            s3:
              region: us-east-1
              bucket: ceph-bkt-aad0791f-76df-43d3-9313-cca489d46ead
              accesskey: "5XQ8OBGZWG8MNO6M52Y2"
              secretkey: "3negGAxSrskJ0OediH3osHLEAhs36AAoE8sD9nRt"
              regionendpoint: http://rook-ceph-rgw-s3-store.rook-ceph.svc
              v4auth: true
              storageclass: STANDARD

        metrics:
          enabled: true
          core:
            path: /metrics
            port: 8001
          registry:
            path: /metrics
            port: 8001
          jobservice:
            path: /metrics
            port: 8001
          exporter:
            path: /metrics
            port: 8001
          serviceMonitor:
            enabled: true

        harborAdminPassword: "admin@123"
        logLevel: info

        # In harbor-helm the proxy component is configured under `nginx:`
        # (the chart's `proxy:` key is for outbound HTTP proxy settings),
        # and per-component scheduling goes under `affinity:`.
        nginx:
          replicas: 1
          affinity:
            nodeAffinity: &nodeAffinity
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: ceph-rbd-plug
                    operator: In
                    values: ["enabled"]
          tolerations: &tolerations
            - key: "ceph-taint"
              operator: "Equal"
              value: "osd"
              effect: "NoSchedule"

        core:
          replicas: 1
          affinity:
            nodeAffinity: *nodeAffinity
          tolerations: *tolerations

        jobservice:
          replicas: 1
          affinity:
            nodeAffinity: *nodeAffinity
          tolerations: *tolerations

        registry:
          replicas: 1
          affinity:
            nodeAffinity: *nodeAffinity
          tolerations: *tolerations

        portal:
          replicas: 1
          affinity:
            nodeAffinity: *nodeAffinity
          tolerations: *tolerations

        trivy:
          replicas: 1
          affinity:
            nodeAffinity: *nodeAffinity
          tolerations: *tolerations

        ipFamily:
          ipv6:
            enabled: false
          ipv4:
            enabled: true

        updateStrategy:
          type: Recreate

  destination:
    server: https://kubernetes.default.svc
    namespace: harbor

  syncPolicy:
    automated:
      prune: true
      selfHeal: false
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
      - PruneLast=true
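
The values above commit the database password, S3 keys, and the admin password to Git in plain text. The Harbor chart can read credentials from pre-created Kubernetes Secrets instead; a hedged sketch (the existingSecret* keys exist in recent harbor-helm versions, but verify them against the chart version in use; the secret names here are illustrative):

```yaml
# Fragment of spec.source.helm.values: replace inline credentials with
# references to Secrets created out-of-band (e.g. via SealedSecrets).
existingSecretAdminPassword: harbor-admin   # expects key HARBOR_ADMIN_PASSWORD
database:
  type: external
  external:
    host: "pg-test-rw.postgresql.svc.cluster.local"
    port: 5432
    username: "harbor"
    existingSecret: harbor-db               # expects key "password"
    coreDatabase: "registry"
    sslmode: "disable"
```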
