I had a call recently with an enterprise platform team looking to implement and run a large-scale shared Kubernetes cluster. They wanted to give their developers “freedom of choice” and therefore to support a mixed tooling environment: ArgoCD for some teams, Flux for others, Portainer for operational workloads, and a collection of traditional CD pipelines for those who preferred that world. Their assumption was simple and, on the surface, entirely reasonable. Give each team a namespace. Let them install and use whatever deployment tooling they prefer within it. Keep teams isolated from each other.
The question they were really asking was: how do we allow our engineers to work the way they want to work, without creating a security or governance mess? It is a good question. Let me explain...
ArgoCD and the two-credential problem
ArgoCD is a cluster-level tool. It runs as a set of controllers installed by the cluster administrator, typically in its own dedicated namespace, watching across the whole cluster. A team cannot spin up their own ArgoCD instance in their namespace; the admin installs it once and configures it to serve multiple teams. Access is managed through ArgoCD's own internal RBAC model, built around a construct called AppProjects. An AppProject defines which Git repositories a team can deploy from, which destination namespaces they can target, and which Kubernetes resource types they are permitted to manage.
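To make that concrete, here is a minimal AppProject sketch. The team name, repo URL, and namespace are illustrative, not taken from any real environment:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-alpha            # hypothetical team
  namespace: argocd           # ArgoCD's own namespace, where all AppProjects live
spec:
  # Git repositories this team is allowed to deploy from
  sourceRepos:
    - https://git.example.com/team-alpha/*
  # Destination namespaces the team may target
  destinations:
    - server: https://kubernetes.default.svc
      namespace: team-alpha
  # Resource-type restrictions: no cluster-scoped resources at all,
  # and no tampering with the namespace's quota
  clusterResourceWhitelist: []
  namespaceResourceBlacklist:
    - group: ""
      kind: ResourceQuota
```

Note where this object lives: in ArgoCD's namespace, created by the admin, not by the team it constrains.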
This is entirely separate from Kubernetes RBAC. A team member's ArgoCD identity is managed through ArgoCD's own user database or an SSO integration, and their ArgoCD role grants them nothing in the Kubernetes API directly. Kubernetes RBAC is an authorization policy layer governing what a given credential can do once presented to the API; it is not itself a credential. The credential behind it is a separate artifact entirely: a kubeconfig carrying a certificate, token, or OIDC identity, configured independently of ArgoCD. If an engineer needs to reach anything below the layer ArgoCD can see, they need that kubeconfig, and it has no relationship to their ArgoCD login. Two credentials, two independent authorization models, neither aware of the other.
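The split is visible in the configuration itself: the same group of humans has to be granted access twice, in two unrelated places, in two unrelated policy languages. A sketch, with group and namespace names invented for illustration:

```yaml
# ArgoCD side: its CSV policy language, stored in the argocd-rbac-cm ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    g, team-alpha-devs, role:team-alpha-deployer
    p, role:team-alpha-deployer, applications, sync, team-alpha/*, allow
---
# Kubernetes side: a RoleBinding granting the same humans kubectl access.
# Nothing connects it to the ArgoCD policy above; the shared group name
# is a naming convention, not an integration.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-edit
  namespace: team-alpha
subjects:
  - kind: Group
    name: team-alpha-devs     # OIDC group from the kubeconfig identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```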
Flux and the hidden boundary problem
Flux takes a different approach. Rather than maintaining its own identity layer, it leans into Kubernetes-native primitives. Its controllers run in the flux-system namespace, installed by the admin. Teams interact with Flux by creating Kubernetes custom resources in their own namespaces: a GitRepository pointing at a repo, a Kustomization or HelmRelease describing what to deploy. Because these are Kubernetes resources, access to create or modify them is governed by standard Kubernetes RBAC. Flux now ships with a Web UI that covers meaningful operational ground: sync status, workload health, HelmRelease and Kustomization deep-dives, and SSO via OIDC backed by Kubernetes RBAC for multi-tenant access. For many deployment and triage scenarios, engineers may never need to reach beyond it.
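A team's self-service deployment, then, is just a pair of custom resources in their own namespace. A minimal sketch with hypothetical names:

```yaml
# Where to fetch from: a source the team defines in its own namespace
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: team-alpha-app
  namespace: team-alpha
spec:
  interval: 5m
  url: https://git.example.com/team-alpha/app
  ref:
    branch: main
---
# What to deploy: reconciled by Flux's cluster-level controllers
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: team-alpha-app
  namespace: team-alpha
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: team-alpha-app
  path: ./deploy
  prune: true
```

Because these are ordinary Kubernetes objects, a standard RoleBinding is all it takes to let a team create them.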
Kubernetes-native, however, does not automatically mean more locked down. Flux's controllers carry broad cluster-level permissions in order to reconcile resources across namespaces. By default, a Kustomization object created in one namespace can instruct those controllers to deploy resources into a namespace the creating team has no business touching. Preventing that requires deliberate configuration: specifically, Flux's service account impersonation features and careful namespace isolation policy. The admin must make active choices to enforce the security boundary. The architecture does not enforce it for them.
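What that deliberate configuration looks like, sketched with the same hypothetical names: the admin pins each tenant's Kustomization to a namespace-scoped service account, so reconciliation runs with that account's RBAC rather than the controller's cluster-wide identity.

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: team-alpha-app
  namespace: team-alpha
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: team-alpha-app
  path: ./deploy
  prune: true
  # Impersonation: apply resources as this namespace-scoped account.
  # An attempt to deploy outside team-alpha now fails with an RBAC
  # error instead of silently succeeding.
  serviceAccountName: team-alpha-deployer
  targetNamespace: team-alpha
```

To close the gap cluster-wide rather than per object, the kustomize-controller is typically started with flags such as --no-cross-namespace-refs=true and --default-service-account, so that a tenant cannot simply omit the field. Again: active admin choices, not defaults.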
Direct cluster access via kubeconfig remains a separate concern regardless of how much the UI covers. The permissions governing what an engineer can do in the Flux Web UI and the permissions governing what they can do via kubectl are configured independently and do not flow from each other.
Traditional CD pipelines: a credential surface you already know
Jenkins, GitLab CI, GitHub Actions, and their peers interact with Kubernetes through service account tokens or OIDC federation, configured per pipeline, per cluster, sometimes per namespace. Each pipeline needs credentials scoped to what it deploys. Each set of credentials is managed separately, rotated separately, and audited separately. In a mixed environment where some teams are using GitOps tooling and others are still driving deployments from pipelines, the credential surface compounds. You are not managing two access planes per tool; you are managing overlapping planes across multiple tools simultaneously, with no shared identity layer tying them together.
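Each of those per-pipeline credentials is, at the Kubernetes end, typically a ServiceAccount with a narrowly scoped binding. A sketch of one such identity, names hypothetical:

```yaml
# One pipeline, one namespace-scoped identity: it can deploy into
# team-alpha and nothing else
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: team-alpha
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-edit
  namespace: team-alpha
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: team-alpha
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

A short-lived token for the pipeline can then be minted with kubectl create token ci-deployer -n team-alpha. Multiply this by every pipeline, every cluster, and every team, and the compounding credential surface described above becomes tangible.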
And that is just the deployment layer
The tools above are all concerned with one thing: getting workloads onto the cluster. They do not cover what happens after the workload is running. A realistic platform stack adds observability, a container registry, and secrets management at a minimum, and each one arrives with its own access model.
Grafana has its own user database, its own organization and team structure, and its own dashboard and datasource permission model. An engineer who can deploy via ArgoCD and has kubectl access still needs a separate Grafana identity to view the metrics for the workload they just deployed. That identity is configured independently, scoped independently, and audited independently.
Harbor, or any container registry with access control, adds another layer. Who can push images, who can pull, which projects are visible to which teams: all of it managed inside Harbor's own permission system, with no relationship to anything else on the platform. An engineer whose kubeconfig grants them namespace admin rights cannot pull a private image from Harbor on the basis of that credential alone.
Vault compounds the problem further, because secrets management sits underneath everything else. Vault's auth engine is its own world: Vault tokens, Vault roles, policies written in Vault's own HCL-based language. Every application that needs a secret, and every engineer who needs to manage or rotate one, requires a Vault identity configured separately from their Kubernetes identity, their ArgoCD login, their Grafana account, and their Harbor credentials.
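For a sense of how foreign that world is to everything above it, here is a minimal Vault policy sketch in Vault's HCL, with the path invented for illustration:

```hcl
# Vault policy: what an identity may read, expressed in Vault's own
# policy language. Nothing here references a Kubernetes Role, an
# ArgoCD AppProject, or a Grafana team.
path "secret/data/team-alpha/*" {
  capabilities = ["read", "list"]
}
```

Mapping a workload or engineer onto that policy then happens through one of Vault's auth methods (for example, its Kubernetes auth method binding a service account to a policy), which is yet another piece of configuration maintained inside Vault alone.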
By the time a platform team has assembled deployment tooling, observability, a registry, and secrets management, they are not operating a platform. They are operating a collection of disconnected identity silos that happen to share the same underlying cluster. Every new tool added follows the same pattern: another credential to provision, another permission model to configure, another audit log that does not speak the same language as the others.
What a unified access layer actually looks like
The team on that call was not asking the wrong question. They were asking the right question without yet knowing what the right answer required. What they needed was not a policy for managing multiple access systems in parallel. What they needed was a single authentication, authorization, and audit layer that sits across the cluster estate and applies consistently, with the tooling leveraging that common layer.
Portainer operates as a control plane across clusters and environments. An engineer authenticates once. Their identity, their role, and their scope of access are defined centrally, enforced consistently, and logged in a single audit trail. Whether they are deploying an app via Portainer's integrated GitOps, managing a namespace, inspecting a running workload, or operating across multiple clusters, the access model does not change and the credentials do not multiply.
The ongoing overhead of managing discrete access and authorization layers is painful. Every time a new engineer joins, access needs to be provisioned across every tool individually. Every time someone leaves, it needs to be revoked across every tool individually, and a missed account in any one of them is a security exposure that may go unnoticed. Every role change means tracking down permission references across systems that have no awareness of each other. In a team of any meaningful size, with normal staff turnover, this is not a one-time setup cost. It is a recurring operational burden that grows with every tool added to the platform. A unified control plane reduces that to a single operation: change the identity once, and the change propagates consistently across everything it governs.
If you want your engineers to use whatever tools they want, you had better be prepared for the operational overhead that brings. In an ideal world, Platform Engineering curates a tooling suite, fully integrates it, and ensures it is maintained and supported. The choice is yours to make.



