Most organizations running Docker Swarm have been running it for a long time. That's not a criticism... Swarm is stable, it works, and for many teams it has been quietly ticking along doing its job for years. The problem is that the people who built it have often moved on, the decisions baked into the deployment tooling were never properly documented, and what was once well-understood infrastructure has become infrastructure that “works” but that nobody wants to be responsible for changing. Swarm migrations get pushed back not because teams don't want to modernize, but because the risk of touching something that's working feels higher than the risk of leaving it alone.
That calculus eventually inverts. Swarm's development is now at a snail's pace, Kubernetes has become the standard, and the ecosystem of tooling, talent, and integrations has converged around it. The longer the migration is deferred, the wider that gap gets, and the harder it becomes to find engineers who understand the Swarm environment well enough to migrate it safely. Many organizations end up in a split state: new workloads going onto a fresh Kubernetes cluster, old workloads staying on Swarm because nobody wants to touch them. That works as a transitional posture, but it's not a destination. The old stuff has to move eventually, and ideally before the Swarm environment becomes genuinely unmaintainable.
When the decision is finally made, the scope of what's actually involved tends to land harder than expected.
Replacing the orchestrator is the starting point, not the finish line. Before any production workload moves, every application running on Swarm needs to be validated against Kubernetes. These are architecturally different platforms: service discovery works differently, overlay networking works differently, and the assumptions your applications make about how containers communicate don't automatically carry across. If you're using MacVLAN networking, that capability doesn't translate directly. If your services publish low ports (80, 443), Kubernetes doesn't permit that out of the box; you typically need a LoadBalancer Service (a cloud load balancer, or something like MetalLB on-premises) sitting in front of them. These aren't configuration changes... they're design changes that touch the application layer.
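For the low-port case specifically, here's a minimal sketch of what the Kubernetes side might look like. The service name and ports are hypothetical, and the LoadBalancer implementation depends entirely on your environment:

```yaml
# Hypothetical example: in Swarm, a service publishes port 443 directly on the node.
# In Kubernetes, the usual equivalent is a Service of type LoadBalancer,
# backed by a cloud provider or something like MetalLB on bare metal.
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - name: https
      port: 443        # port exposed by the load balancer
      targetPort: 8443 # port the container actually listens on
```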
The manifest story is its own workstream. Docker Compose files don't natively deploy to Kubernetes. You need Deployments, Services, ConfigMaps, Ingress objects, and PersistentVolumeClaims written, validated, and version-controlled for every service you run. Most teams make the move to GitOps at the same time (the ecosystem has converged on this model, for better or for worse), which means also configuring sync policies and rethinking how deployments are triggered. That's a meaningful addition to the project scope.
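To give a rough sense of the fan-out, a single hypothetical Compose service typically expands into at least a Deployment plus a Service on the Kubernetes side. The image, ports, and labels below are illustrative, not prescriptive:

```yaml
# docker-compose.yml (before): one hypothetical service
# services:
#   api:
#     image: registry.example.com/api:1.4.2
#     ports:
#       - "8080:8080"
#     environment:
#       - LOG_LEVEL=info
#
# Kubernetes (after): the same service as a Deployment plus a ClusterIP Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4.2
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL
              value: "info"
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8080
      targetPort: 8080
```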
CI/CD pipelines need to be rethought rather than just reconfigured. The build side (CI) remains largely intact: you still need a pipeline that compiles, tests, and builds container images, and you likely still need a Docker host for running docker build commands unless you're also evaluating a switch to a different OCI image builder. What changes more fundamentally is the delivery side. In a Kubernetes environment, most teams replace the CD portion of their pipeline with GitOps entirely: deployment is driven by a Git repository sync rather than a pipeline stage pushing to a target environment. That's a meaningful architectural shift in how deployments are triggered, audited, and rolled back, and it means the consolidated CI/CD tool your team has been using may end up handling only half of what it used to. Teams that want to retain a single CI/CD tool for both build and deploy can do so, but they're increasingly swimming against the current of how the Kubernetes ecosystem operates.
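To make the GitOps shift concrete: with Argo CD (one common choice among several, not something this migration requires), a deployment becomes an Application object that continuously syncs a Git path into the cluster, rather than a pipeline stage pushing to a target. The repository URL, path, and namespaces below are hypothetical:

```yaml
# Illustrative only: an Argo CD Application that keeps the cluster in sync
# with a directory of manifests in Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deploy-manifests.git
    targetRevision: main
    path: apps/api
  destination:
    server: https://kubernetes.default.svc
    namespace: api
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```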
Monitoring, alerting, and observability all need to be revisited. Kubernetes surfaces metrics in terms of pods, nodes, and namespaces, not containers and stacks, and existing dashboards and alerting thresholds don't map across automatically. Access management gets more complex too. Distributing kubeconfig files to every developer and operator who needs cluster access is workable, but operationally messy at scale. Most teams end up deploying a management UI so that engineers don't need to work directly with kubectl for day-to-day operations. Kubernetes RBAC is considerably more granular than anything Swarm requires (Swarm has essentially no equivalent), and configuring it correctly takes time.
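As a flavour of that granularity, a read-only Role and RoleBinding scoped to a single namespace might look like the sketch below. The group and namespace names are placeholders:

```yaml
# Read-only access to one namespace for one group of engineers.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: api
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-read-only
  namespace: api
subjects:
  - kind: Group
    name: developers          # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io
```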
Underneath all of this is the team upskilling effort. Every developer, DevOps engineer, and ops team member who touches infrastructure needs a working understanding of how Kubernetes operates: how pods are scheduled, how services route traffic, how health checks work, how rollouts behave. This takes months to become second nature, and it happens in parallel with everything else on the list.
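As one concrete example of the concepts involved: health checks and rollout behaviour are declared on the workload itself rather than in the deploy tooling. The probe paths, timings, and image below are hypothetical:

```yaml
# Sketch of probes and rollout strategy on a Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # add one extra pod at a time during a rollout
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4.2
          readinessProbe:          # gates traffic until the pod reports ready
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 5
          livenessProbe:           # restarts the container if it stops responding
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
```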
This is the full picture of why the migration keeps getting pushed to next quarter. Each of the workstreams above is a project in its own right, and they don't run sequentially... they run in parallel, competing for the same engineering time. A realistic end-to-end migration for a mid-sized Swarm environment typically runs six to twelve months, and that's assuming the project stays funded and prioritized throughout. For larger environments, eighteen months or more is not unusual.
The structural problem is that most of these workstreams have to complete before you can cut over. You can't migrate applications until the Kubernetes platform is ready. You can't retire Swarm until the applications have moved. You can't fully retool CI until you know what the target state looks like. That interdependency collapses what could theoretically be an incremental migration into something that looks a lot like a big-bang cutover in practice: a single point in time where everything has to work, the old environment gets switched off, and any gaps in preparation become production incidents. That's where project risk concentrates, and it's why migration programs of this type have a high failure rate or get abandoned partway through and quietly shelved.
D2K, a new OSS project from Portainer, is that incremental path. It's a lightweight container shim you deploy to your Kubernetes cluster. It understands Docker Compose format, speaks the Docker (and Docker Swarm) API, and translates deployments to native Kubernetes in real time. Your teams keep deploying as they do today, your CI/CD pipelines don't change, and your Swarm operational knowledge stays valid while your teams build confidence on the new platform.
Migration then happens application by application, at whatever pace makes sense. Swap the manifest for one service, update its pipeline, validate it runs cleanly on Kubernetes natively, move to the next. When the last application has made the move, D2K comes out... it was designed as a bridge, not as permanent infrastructure. The migration that kept getting deferred finally has a path forward.
{{article-cta}}
