AWS re:Invent 2025 wrapped up last week with yet another staggering lineup of announcements. But one item on my AWS wishlist unfortunately remains a significant gap in the AWS portfolio of services.
AWS has spent years building services that appear to add up to “a single UI for all EKS everywhere.” But anyone who runs Kubernetes at enterprise scale across multiple AWS accounts and regions quickly discovers the limitations. There is no AWS-native view that brings all your clusters, workloads, and Kubernetes policy posture together. And there is definitely no central place where you can manage all your Kubernetes clusters consistently.
AWS provides powerful building blocks, but they are fragmented. Turning them into a cohesive platform is left to the customer, who often ends up relying on professional services or expensive consultants.
This post explains the AWS landscape, why the pieces do not add up to a unified EKS Command Center, and how Portainer fills the gap.
The EKS Console: A Window, Not a Control Plane
Amazon is well known for avoiding services that span regions. EKS follows this pattern. The EKS service and console are scoped to a single AWS account and a single AWS region. You only see clusters in the region you are currently viewing. If you spread development, staging, and production across different accounts, you must role-switch or log out and back in to switch contexts. There is no “show me everything across my entire organization” view.
Large organizations often give every engineer a sandbox account. The result is cluster sprawl. You might have dozens or hundreds of clusters across many accounts and regions, with no central screen that shows all of them. Many companies still discover all their EKS clusters by looking through their AWS billing statement at the end of the month.
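In the absence of that central screen, teams end up scripting the enumeration themselves: call `aws eks list-clusters` (or the equivalent SDK call) once per account/region pair, then flatten the results. The sketch below shows only that flattening step; the account IDs and cluster names are invented for illustration, and in a real script each inner list would come from an API call made under the right assumed role.

```python
# Hypothetical sketch of the inventory aggregation AWS leaves to the customer.
# Each (account, region) key is the granularity the EKS console forces you to
# work at; real data would come from `aws eks list-clusters` per pair.

listings = {
    ("111111111111", "us-east-1"): ["payments-prod", "payments-staging"],
    ("111111111111", "eu-west-1"): ["payments-dr"],
    ("222222222222", "us-west-2"): ["sandbox-alice", "sandbox-bob"],
}

def flatten_inventory(listings):
    """Collapse per-scope listings into a single org-wide table."""
    return sorted(
        (account, region, cluster)
        for (account, region), clusters in listings.items()
        for cluster in clusters
    )

inventory = flatten_inventory(listings)
for account, region, cluster in inventory:
    print(f"{account}  {region:<10} {cluster}")
```

Even this toy version hints at the real cost: the hard part is not the flattening, it is obtaining valid credentials for every account and iterating every region, which is exactly the glue work a central console would make unnecessary.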
AWS took a step toward solving this with the EKS Dashboard launched in May 2025. It helps you discover clusters across accounts and regions, but it is limited. It is primarily an inventory and upgrade-planning view rather than a full multi-cluster operations console, and it does not let you centrally edit or manage clusters. You still have to jump into individual cluster views to make changes. AWS customers are still using spreadsheets, Terraform state files, and internal documentation with login links to inventory their clusters.
If EKS struggles to answer even the basic question of “What clusters do we have, and where are they?”, what about governance of those clusters?
Governance Tools: Organizations, Control Tower, and SCPs
AWS has invested heavily in centralized governance at the AWS API layer. Organizations and OUs, Control Tower guardrails, Service Control Policies, IAM Identity Center, and centralized CloudTrail all help determine and audit who can create EKS clusters and what AWS API actions they can perform against these EKS clusters. These tools create strong guardrails around the AWS management plane.
However, none of them govern what happens inside a Kubernetes cluster. They cannot manage Kubernetes RBAC, admission policies, namespace restrictions, Pod Security, runtime behavior, or network policy. The centralized governance boundary ends at the AWS API. Everything inside the cluster, which is where applications actually run, is outside the scope of these tools.
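To make that boundary concrete: a Service Control Policy can stop cluster creation outside approved regions, but it only ever evaluates AWS API actions. The sketch below uses an example region list; nothing like it exists for objects inside a cluster.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEKSClusterCreationOutsideApprovedRegions",
      "Effect": "Deny",
      "Action": ["eks:CreateCluster"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
        }
      }
    }
  ]
}
```

The policy governs who may call `eks:CreateCluster` and where, but once a cluster exists, what runs inside it is invisible to this layer.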
This is why EKS environments tend to evolve into a hybrid collection of AWS-specific vendor configuration and Kubernetes-native, cluster-specific controls. One cluster might use Gatekeeper, another might use Kyverno. Network policies differ. Pod Security settings differ. There are often custom controllers and dozens of IaC-driven exceptions. Without a unifying layer, each cluster becomes slightly different from the next.
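As an illustration of the per-cluster drift, a Kyverno cluster might enforce a rule like the generic one below, which has to be applied and kept in sync on every cluster individually, while a Gatekeeper cluster would express the same intent as a Rego ConstraintTemplate. This is a minimal example, not a policy from any particular environment.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce
  rules:
    - name: deny-privileged-containers
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              # Kyverno's =() anchor: if securityContext is present,
              # privileged must be false.
              - =(securityContext):
                  =(privileged): "false"
```

Multiply this by dozens of clusters and two policy engines, and “the same rule everywhere” becomes a manual synchronization problem.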
AWS Security Services: Central Findings, but Difficult Resolution
AWS provides several security services that can detect issues across AWS accounts. Security Hub, GuardDuty EKS Runtime Monitoring, Inspector, and Config can all aggregate alerts and findings. These tools will tell you that a workload is running as root or that a node needs a patch.
The problem is that you still need to locate the correct AWS account, assume the right IAM role, switch regions, open the right cluster, and then fix the problem manually inside Kubernetes. AWS provides security aggregation, not centralized Kubernetes remediation. The experience feels more like reviewing SIEM alerts than managing clusters.
Observability: A Choose-Your-Own-Adventure Story
AWS has strong observability primitives. CloudWatch Container Insights, Prometheus and AMP, Grafana and AMG, X-Ray, and OpenTelemetry are all excellent tools. The challenge is that AWS leaves it up to you to integrate them.
If you operate in several accounts or regions, you eventually need a central observability account, along with Prometheus remote_write pipelines, cross-account IAM trust statements, hand-built dashboards, and a fair amount of institutional knowledge to navigate everything. It can work, but it is fragile and never provides the single view of “all workloads across all clusters” that platform teams want.
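The remote_write plumbing mentioned above typically looks something like the fragment below, configured per cluster. The workspace URL, region, and label values are placeholders, and the IAM side (IRSA or instance profiles granting write access to the AMP workspace) is additional setup on top of this.

```yaml
# Per-cluster Prometheus config shipping metrics to a central AMP workspace.
# Workspace ID, region, and external label values are placeholders.
global:
  external_labels:
    cluster: payments-prod        # identifies this cluster in the central view
    account: "111111111111"
remote_write:
  - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write
    sigv4:
      region: us-east-1           # must match the AMP workspace region
```

Every cluster needs its own copy of this, its own credentials, and its own distinguishing labels, which is precisely the institutional knowledge the previous paragraph describes.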
What AWS Does Well, and What It Does Not Do Well
AWS is very good at providing modular building blocks. It gives you solid primitives for account governance, security findings, resilient regional clusters, and scalable observability. However, AWS does not tie these together into an organization-wide EKS inventory, global policy control, unified Kubernetes RBAC, cross-account namespace governance, workload-level visibility, or a centralized operations UI.
There is no AWS-native EKS Command Center. AWS assumes that customers will either build the missing pieces themselves using IaC and custom tooling, or rely on a third-party platform to unify the experience. This is not a criticism of how AWS does things. AWS intentionally focuses on primitives rather than fully assembled platforms, and that’s okay. It just means you need to add something else to your tool set: Portainer.
Portainer: A Unified Kubernetes Command Center Across the Entire AWS Organization
Platform teams usually converge on wanting the same set of capabilities:
- A global inventory of all clusters across accounts and regions
- A consistent view of namespaces and workloads
- A way to apply uniform policies everywhere
- A single RBAC layer tied to corporate identity
- Application-wide visibility without account switching
- A unified place for day-two operations
AWS provides bits and pieces of this, but never the complete picture.
Portainer fills this gap by providing the missing control plane. A lightweight agent runs in each Kubernetes cluster and connects back to a central Portainer server. With this approach you gain one UI that shows all clusters, one policy engine that lets you define a rule once and apply it everywhere, one place to deploy Helm charts or YAML across multiple clusters, and one RBAC layer that maps directly to your corporate identity provider.
Portainer also avoids cloud vendor lock-in. You can connect clusters from Google Cloud, Azure, or on-prem hardware. You can even manage Docker or Podman hosts for teams that are still transitioning from traditional container hosts to Kubernetes. Portainer becomes the single place where your entire container footprint comes together.
AWS gives you everything you need to build a Kubernetes platform, but it does not give you the platform itself. If you want a unified experience across all of your EKS clusters, you have three choices:
- You can accept a fragmented set of consoles and tools.
- You can build your own control plane out of scripts, IaC, and custom interfaces.
- You can adopt a Kubernetes platform such as Portainer that brings your clusters together into a central command center.
AWS will continue to refine its primitives, but it is unlikely to deliver the fully unified, opinionated, cross-account and cross-region control plane that platform teams want. This is why the gap persists, and why many enterprises running EKS at scale are relying on Portainer to finally achieve the “EKS Command Center” they wanted all along.