In this first entry in a series of blogs, new team member Niels Buitendijk walks us through his first 30 days working at Portainer, including what he's learning along the way, not just about Portainer but also the wider industry we service.
I’ve spent most of my career deep in infrastructure. I started in systems engineering and solution architecture, and over the last two decades I’ve worked across a bunch of enterprise IT environments on behalf of companies like IBM, VMware, and Broadcom. Virtualization was my world. Designing datacenter solutions, optimizing workloads, and helping orgs scale were the problems I solved every day.
Back then, containers felt like someone else’s battle. In most of the organizations I was working with, apps weren’t built for containerization, and there wasn’t an appetite to refactor them. Quite honestly, VMs did the job. We had no lack of compute resources, environments were stable, budgets were plentiful, and the pressure to modernize wasn’t always there. That started to shift.
In recent years, especially while leading tech initiatives in more agile teams and startups, I began seeing a change in thinking. The overhead of managing thick VMs for small services started to feel unnecessary, especially in greenfield projects where you’re building from scratch. It no longer made sense not to seriously consider containers from the outset. My team at Exempla made the call to containerize everything from the start. We needed portability, consistent dev environments, efficient resource use, and a simple way to keep application dependencies consistent between dev and prod. Using containers on Docker made a lot of sense, so we went down that path.
But managing those containers? That’s where we struggled. Although some of our developers were comfortable managing containers from the CLI, I was certainly not, at least not yet. We chose AWS ECS with Fargate, mostly because we had credits and it seemed like a quick way to get started. At first, it felt convenient. But the more we built, the more we were pulled into the AWS ecosystem: ECR, ALBs, Route53, CloudFront, and so on. Soon it felt like we were locked in, and concerns were being raised about the long-term costs once our credits expired. Updating certain aspects of ECS task definitions required the CLI; the UI didn’t expose enough. Fairly quickly I realized I was learning AWS, not containers. It felt heavy, expensive, and harder than it should be.
Then I joined Portainer.
After onboarding and team intros, I did what anyone in my shoes would: I launched Portainer CE locally on my Mac (running Docker Desktop). It was unbelievably easy, literally minutes to install. The UI made sense instantly. It exposed all the container operations I’d only touched via CLI before. I couldn’t believe I hadn’t tried it sooner.
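For anyone who wants to try the same thing, the CE quick-start really does boil down to a couple of Docker commands (this mirrors the documented install as I understand it; the UI is then served over HTTPS on port 9443):

```bash
# Create a volume for Portainer's data, then run the Portainer Server container
docker volume create portainer_data

docker run -d \
  -p 8000:8000 -p 9443:9443 \
  --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

From there it’s a browser visit to https://localhost:9443 to create the admin user and start managing the local Docker environment.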
I also spent a good chunk of the afternoon diving into the Portainer blog and documentation. I was sent the post Are You Learning Containers the Wrong Way? by Neil, and got cracking on the content. It was the right kind of guidance for where I’m at: deep enough to matter, simple enough to follow.
That blog led me down a rabbit hole of Portainer training material and community curated learning. I started organizing my own learning by prioritizing some hands-on lab work with the product (since that’s the best way I learn), balanced with reading and tutorials.
By the end of Day 1, I already had Portainer managing containers locally on Docker Desktop, and I felt strangely more comfortable in the UI than I’d been with ECS. It just felt more intuitive to me, like someone had finally designed container management for people like me, who’ve spent years in infrastructure using interfaces like vCenter to make VM management easy, only now for containers.
After the quick success with Portainer CE on Docker Desktop in Day 1, I knew I needed to build out something more robust. My Mac is ARM-based (M3 processor), which is great for native performance but not ideal for container compatibility, especially when working with images built for x86_64. So, I set out to simulate a more realistic environment.
To get around the architecture mismatch, I spun up an x86_64 Ubuntu Server VM using UTM, a free virtualization and emulation tool that lets ARM Macs run x86 guests. That gave me a genuine x86_64 Docker host to manage alongside my native ARM environment.
I connected both environments to Portainer CE. One running natively, one emulated. And it just worked.
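For the VM, the route I’d point others at is the Portainer Agent, one of a few ways to attach a remote Docker host; the exact command is shown in the UI when you add an environment, but it looks roughly like this, run on the VM itself:

```bash
# Run the Portainer Agent on the remote Docker host (the UTM VM in my case)
docker run -d \
  -p 9001:9001 \
  --name portainer_agent --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest
```

Back in Portainer, you add a new environment and point it at the VM’s IP on port 9001.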
Connecting both endpoints to Portainer was a welcome sight: two very different environments, one native and one emulated, sitting side by side and manageable from a single UI.
And maybe most importantly, it meant I could test things like network setups, image pulls, and volume mounts in two different environments as part of my learning ahead, without having to learn complex docker context usage or juggle remote Docker daemons.
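For contrast, here’s the kind of CLI juggling that a single pane of glass saves me from; the user, hostname, and IP below are made up for illustration:

```bash
# Managing two Docker hosts from one terminal without a UI means
# creating and switching contexts by hand
docker context create utm-vm --docker "host=ssh://niels@192.168.64.5"

docker context use utm-vm     # commands now target the emulated VM
docker ps
docker pull nginx:latest

docker context use default    # switch back to local Docker Desktop
docker ps
```

Perfectly workable, but it’s one more mental stack to maintain while you’re still learning the container basics.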
Time to face my container blind spot: Kubernetes.
I’d never deployed K8s before, so I started with MicroK8s on Ubuntu Server, all running on an IBM laptop I have. With some AI help and docs, I built a working cluster. Then I connected it to Portainer Business Edition. The install command was provided right in the Portainer UI. Copy. Paste. Done. In minutes, my cluster was visible and manageable in Portainer.
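For the record, getting MicroK8s to a usable state was only a handful of commands. The addons below are roughly what I enabled (my setup, not a prescription), and since the Portainer Agent deployment command came straight from the Portainer UI, I won’t reproduce it here:

```bash
# Install MicroK8s on Ubuntu Server and wait for it to come up
sudo snap install microk8s --classic
sudo microk8s status --wait-ready

# Enable the basics: cluster DNS, local storage, and an ingress controller
sudo microk8s enable dns hostpath-storage ingress

# Optional: let the regular 'kubectl' command talk to MicroK8s
sudo snap alias microk8s.kubectl kubectl
```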
That’s when I decided to go for it: deploy a real-world App from a startup I’m working on.
By the end of Day 3, I had the App backend and frontend running on Kubernetes via Portainer. It wasn’t perfect, but it was running, and for a first-time K8s deploy, that felt huge.
Today was more introspective. I spent time absorbing more Portainer training and watching some Kubernetes explainers to deepen my mental map. I also took time to reflect on the experience of deploying the Real-World App and how it relates to everything I know from the VMware world.
| Concept (VMware World) | Equivalent (Container World) |
| --- | --- |
| VM | Container |
| ESXi Host | Kubernetes Host / Docker Host |
| OVF / OVA | Dockerfile / Container Image |
| vApp | Pod / Stack |
| vNetworking / NSX | K8s Network / Ingress |
| Datastore / vSAN / vStorage | PVCs / CSI Volume Drivers |
| vSphere Permissions | K8s RBAC / Portainer Teams |
| vCenter | Portainer |
* Note: these are my mappings as of right now in my learning, and they may well change or grow.
The last row is what really clicked for me.
Just like vCenter gives centralized management across ESXi hosts, Portainer does that across Docker/Kubernetes environments. It adds visibility, app lifecycle tools, access control, and simplicity, without removing the power of the underlying platform.
Looking back at the application deployment, I realized how different this experience would’ve been if I were doing it manually in raw Kubernetes. It made me appreciate what Portainer is solving, not just technically, but for people like me, coming from virtualization and needing a gateway into containers.
I went from deploying apps as scattered individual resources to structuring them in Kubernetes native ways, and I finally wrapped my head around Stacks, Ingress, and how Portainer bridges the complexity gap.
Up to this point, I’d been deploying the App’s services one by one using the Portainer UI. It worked, but it felt clunky and probably not optimal. Kubernetes doesn’t naturally think in “single container app” terms; it thinks in Pods, Services, Ingress, ConfigMaps, Secrets and so on, and so should I.
My first goal was to figure out the right way to manage multi-service apps like I was deploying. I decided to convert the backend services into a Stack in Portainer’s Kubernetes view.
Using the Stack feature, I could define the backend services together and deploy, update, or remove them as a single unit rather than juggling them one by one; a rough sketch of what that looks like as a manifest is below.
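To make that concrete, here is a hypothetical, stripped-down version of the kind of multi-resource manifest involved; the names, image, and ports are placeholders, and in Portainer you’d paste the YAML into the Kubernetes manifest editor rather than run kubectl yourself:

```bash
kubectl apply -f - <<'EOF'
# Two resources, one unit: a Deployment and its Service, with further
# backend services appended the same way (separated by ---)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
        - name: backend-api
          image: registry.example.com/backend-api:latest   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  selector:
    app: backend-api
  ports:
    - port: 8080
      targetPort: 8080
EOF
```

The win is less about the YAML itself and more about lifecycle: the pieces that make up one logical service travel together.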
Next up: networking. My frontend-ui was still being exposed via a NodePort (mapping to a host port on the Node itself), fine for testing, but not realistic in a production environment.
So I went down a bit of an Ingress rabbit hole.
In conjunction with Portainer, I enabled an Ingress controller on the cluster and created an Ingress rule mapping a hostname to my frontend service.
Just like that: clean, hostname-based access to my App. No more https://my-app:30080-style NodePort URLs; I could just go to https://my-app.
I didn’t go full TLS yet, but now I understand more about how Portainer lets you map domains, paths, and even load balance multiple services through one entry point.
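For anyone curious what sits underneath that, an Ingress rule of roughly this shape is what does the hostname mapping; the host, Service name, and port here are placeholders, not my real values:

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: public         # MicroK8s' bundled ingress addon uses the "public" class
  rules:
    - host: my-app.example.com     # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-ui  # placeholder Service name
                port:
                  number: 80
EOF
```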
Throughout my use of Portainer so far, I’ve also been mentally comparing what it gives me for monitoring and alerting against my mental model of vCenter.
So far my take is as follows (early days, so I may not have the complete picture here):
What I’m seeing is that in its current form, Portainer is meant to simplify ops, not replace your entire monitoring stack.
No major lab work this weekend, but it wasn’t downtime. I spent some time watching various YouTube videos on Portainer, Docker, and Kubernetes. A quick search for “Portainer” on YouTube, and even in podcasts, is a testament to how mature the product has become, how valued it is, and how many people have come to love it.
A couple of notable mentions would be NetworkChuck’s various deep dives into Docker and Kubernetes. They’re an enjoyable watch, to the point, and again helped me translate my virtualization infrastructure background into container terminology, solutions, and deployment architectures.
I also made a start on the “Complete Docker and Kubernetes Course” on Udemy, one of the learning links recommended in the Portainer blog post I was sent on Day 1.
This was all a great reset, zooming me out of the tools and back into the “why” behind containers: portability, consistency between dev and prod, and efficient use of resources.
Make sure to come back next week for the next instalment in this series, where Niels dives deeper into containerization.