Niels Buitendijk · June 13, 2025 · 10 min read

My First 30 Days at Portainer - Week 1

In this first entry in a series of blogs, new team member Niels Buitendijk walks us through his first 30 days working at Portainer, including what he's learning along the way, not just about Portainer but also about the wider industry we serve.

Background

I've spent most of my career deep in infrastructure. I started in systems engineering and solution architecture, and over the last two decades I've worked across a bunch of enterprise IT environments on behalf of companies like IBM, VMware, and Broadcom. Virtualization was my world. Designing datacenter solutions, optimizing workloads, and helping orgs scale were the problems I solved every day.

Back then, containers felt like someone else’s battle. In most of the organizations I was working with, apps weren’t built for containerization, and there wasn’t an appetite to refactor them. Quite honestly, VMs did the job. We had no lack of compute resources, environments were stable, budgets were plentiful, and the pressure to modernize wasn’t always there. That started to shift.

In recent years, especially while leading tech initiatives in more agile teams and startups, I began seeing a change in thinking. The overhead of managing thick VMs for small services started to feel unnecessary, especially in greenfield projects where you're building from scratch and there's little reason not to go with containers from the outset. My team at Exempla made the call to containerize everything from the start. We needed portability, consistent dev environments, efficient resource use, and a simple way to keep application dependencies aligned between dev and prod. Using containers on Docker made a lot of sense, so we went down that path.

But managing those containers? That's where we struggled. Although some of our developers were comfortable managing containers from the CLI, I was certainly not, at least not yet. We chose AWS ECS with Fargate mostly because we had credits and it seemed like a quick way to get started. At first, it felt convenient. But the more we built, the more we were sucked into the AWS ecosystem, using ECR, ALBs, Route 53, CloudFront, and so on. Soon it felt like we were locked in, and concerns were being raised about the long-term costs once our credits expired. Updating certain aspects of ECS task definitions required the CLI; the UI didn't expose enough. Fairly quickly I realized I was learning AWS, not containers. It felt heavy, expensive, and harder than it should have been.

Then I joined Portainer.

Day 1: First Taste

After onboarding and team intros, I did what anyone in my shoes would do: I launched Portainer CE locally on my Mac (running Docker Desktop). It was unbelievably easy, literally minutes to install. The UI made sense instantly. It exposed all the container operations I'd only touched via the CLI before. I couldn't believe I hadn't tried it sooner.
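For anyone wanting to follow along, the quick start is essentially the documented two-liner below (port 9443 serves the UI over HTTPS; pin the image tag to your preferred release):

```
# Create a volume for Portainer's data, then run Portainer CE
docker volume create portainer_data

docker run -d \
  -p 8000:8000 -p 9443:9443 \
  --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Then browse to https://localhost:9443 and create the admin user.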

[Image: 30days-day1-1]

I also spent a good chunk of the afternoon diving into the Portainer blog and documentation. I was sent the post Are You Learning Containers the Wrong Way? by Neil, and got cracking on with the content. It was the right kind of guidance for where I'm at: deep enough to matter, simple enough to follow easily.

That blog led me down a rabbit hole of Portainer training material and community-curated learning. I started organizing my own learning by prioritizing some hands-on lab work with the product (since that's the best way I learn), balanced with reading and tutorials.

By the end of Day 1, I already had Portainer managing containers locally on Docker Desktop, and I felt strangely more comfortable in the UI than I'd been with ECS. It just felt more intuitive to me, like someone had finally designed container management for people like me, who've spent years in infrastructure using interfaces like vCenter to make VM management easy, but now for containers.

[Image: 30days-day1-2]

Day 2: Building My Lab

After the quick success with Portainer CE on Docker Desktop on Day 1, I knew I needed to build out something more robust. My Mac is ARM-based (M3 processor), which is great for native performance but not ideal for container compatibility, especially when working with images built for x86_64. So, I set out to simulate a more realistic environment.
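(Worth noting: Docker Desktop can also run amd64 images on Apple Silicon through per-container emulation, which is handy for one-offs but slower, hence wanting a true x86_64 host in the lab. A quick illustration:)

```
# Run an x86_64 image on an ARM Mac via Docker Desktop's emulation
docker run --rm --platform linux/amd64 alpine uname -m
# prints x86_64 instead of aarch64
```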

The Lab Setup

To get around the architecture mismatch, I spun up an x86_64 Ubuntu Server VM using UTM, a free virtualization tool that gives ARM Macs x86 emulation. That gave me:

  • Docker running on ARM (macOS)
  • Docker running on x86 (Ubuntu Server VM on UTM)

I connected both environments to Portainer CE, one running natively, one emulated. And it just worked.
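Hooking the VM in was a matter of running the Portainer Agent on it and registering it as an environment; roughly like this, with the agent tag matched to your Portainer version:

```
# On the Ubuntu Server VM: start the Portainer Agent
docker run -d \
  -p 9001:9001 \
  --name portainer_agent --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest

# Then in the Portainer UI: Environments -> Add environment ->
# Docker Standalone (Agent), pointing at <vm-ip>:9001
```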

Portainer Magic: Two Nodes, One Pane

Seeing both endpoints connected in Portainer was a welcome sight. It gave me:

  • Unified container visibility across architectures
  • The ability to manage images, containers, networks, and volumes from a single interface, with no context switching or shelling into machines
  • A clearer understanding of how the same image behaves across different CPU architectures, which is super relevant when trying to avoid cross-platform bugs

And maybe most importantly, it meant I could test things like network setups, image pulls, and volume mounts in two different environments as part of my learning, without having to master docker context usage or juggle remote Docker daemons by hand (the CLI route I skipped is sketched below).
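For contrast, the per-host CLI juggling that Portainer spared me would look something like this:

```
# Create a context for the remote daemon over SSH, then target it
docker context create ubuntu-vm --docker "host=ssh://user@<vm-ip>"
docker --context ubuntu-vm ps
docker context use ubuntu-vm   # or keep switching the default back and forth
```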

[Image: 30days-day2-1]

Day 3: My First Kubernetes Cluster (and Real App Deployment)

Time to face my container blind spot: Kubernetes.

I’d never deployed K8s before, so I started with MicroK8s on Ubuntu Server, all running on an IBM laptop I have. With some AI help and docs, I built a working cluster. Then I connected it to Portainer Business Edition. The install command was provided right in the Portainer UI. Copy. Paste. Done. In minutes, my cluster was visible and manageable in Portainer.
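For the record, standing up MicroK8s took only a handful of commands (addon names can vary a little between releases, so treat this as a sketch):

```
# Install MicroK8s and wait for it to come up
sudo snap install microk8s --classic
sudo microk8s status --wait-ready

# Enable the basics: cluster DNS and a default storage class
sudo microk8s enable dns hostpath-storage

# The Portainer UI then supplies the exact one-liner (a kubectl apply
# of its agent manifest) that makes the cluster appear as an environment.
```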

That’s when I decided to go for it: deploy a real-world App from a startup I’m working on.

The Stack

  • Backend services: main-server, ai-server, socket-server, go-server
  • Frontend: frontend-ui, served by NGINX
  • All containerized with Dockerfiles and pushed to AWS ECR

The Process

  • I added ECR as a custom registry in Portainer, using my AWS key/secret (sanity-checked below)
  • Deployed each service one by one using the Portainer UI
  • For the frontend, I initially exposed the service using a NodePort; Ingress was still on my to-do list
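On the ECR piece, the credentials Portainer stores are your standard AWS access key and secret, and it's worth sanity-checking them from a shell first. Something like the following, where the region and account ID are placeholders (note that ECR auth tokens expire after 12 hours, which explains some of the headaches below):

```
# Confirm the credentials can actually pull from ECR
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS \
      --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/main-server:latest
```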

The Lessons

  • Private registries take setup: ECR wasn’t pulling images at first. The region, IAM permissions, and token expiry caused some headaches until I reconfigured access properly.
  • Portainer's UI reduced the YAML pain: I thought I’d be manually writing manifests for every service, but Portainer handled most of it with forms and auto-generated manifests.
  • Inter-service networking wasn't plug-and-play: While Docker Compose gives you a default shared network, Kubernetes networking is more explicit. I had to make sure each service had the right labels, a Service for discovery, and internal reachability.
  • Stack vs App: I started with individual app deployments but quickly realized that managing them as a Stack made rollback and cleanup way easier. Portainer lets you organize them logically this way, which was a big plus.
  • Environment variables and secrets: Initially, I hardcoded some config, but by the end of the day I was experimenting with ConfigMaps and Secrets via the UI. Still learning best practices here, but it felt intuitive enough to experiment safely (a minimal sketch follows this list).
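As a concrete example of that last point, moving a hardcoded value into a Secret and referencing it from a pod is short enough to show in full. A minimal sketch, with made-up names and values:

```
# Create a Secret from a literal (names here are hypothetical)
kubectl create secret generic app-config \
  --from-literal=DB_PASSWORD='s3cr3t'

# Reference it as an environment variable in a pod spec
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sh", "-c", "echo $DB_PASSWORD && sleep 3600"]
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-config
              key: DB_PASSWORD
EOF
```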

By the end of Day 3, I had the App backend and frontend running on Kubernetes via Portainer. It wasn’t perfect, but it was running, and for a first-time K8s deploy, that felt huge.

Day 4: Connecting the Dots: Virtualization vs Containers

Today was more introspective. I spent time absorbing more Portainer training and watching some Kubernetes explainers to deepen my mental map. I also took time to reflect on the experience of deploying the real-world App and how it relates to everything I know from the VMware world.

Virtualization vs Containerization: My Mapping

Concept (VMware World)         Equivalent (Container World)
VM                             Container
ESXi Host                      Kubernetes Host / Docker Host
OVF / OVA                      Dockerfile / Container Image
vApp                           Pod / Stack
vNetworking / NSX              K8s Network / Ingress
Datastore / vSAN / vStorage    PVCs / CSI Volume Drivers
vSphere Permissions            K8s RBAC / Portainer Teams
vCenter                        Portainer

Note: these are my mappings as of right now and will likely evolve as my learning continues.

The last row is what really clicked for me.

Just like vCenter gives centralized management across ESXi hosts, Portainer does that across Docker/Kubernetes environments. It adds visibility, app lifecycle tools, access control, and simplicity, without removing the power of the underlying platform.

Looking back at the application deployment, I realized how different this experience would’ve been if I were doing it manually in raw Kubernetes. It made me appreciate what Portainer is solving, not just technically, but for people like me, coming from virtualization and needing a gateway into containers.

Day 5: From Containers to Clusters: Stacks, Ingress, Services

I went from deploying apps as scattered individual resources to structuring them in Kubernetes-native ways, and I finally wrapped my head around Stacks, Ingress, and how Portainer bridges the complexity gap.

Up to this point, I'd been deploying App services one by one using the Portainer UI. It worked, but it felt clunky and probably not optimal. Kubernetes doesn't naturally think in "single container app" terms; it thinks in Pods, Services, Ingress, ConfigMaps, Secrets, and so on, and so should I.

Structuring with Stacks

My first goal was to figure out the right way to manage multi-service apps like I was deploying. I decided to convert the backend services into a Stack in Portainer’s Kubernetes view.

Using the Stack feature:

  • I grouped my various app backend servers / containers under a single deployment unit.
  • Portainer let me do this via the UI, so I didn't even need to author the YAML manually (though I peeked at it to learn what was happening behind the scenes; a simplified example follows this list).
  • I really liked how I could view logs across the whole stack in one place, restart pods, or redeploy the whole backend together.
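Peeking at that generated YAML was instructive. For one backend service it was conceptually along these lines; this is a simplified reconstruction with placeholder values, not a verbatim dump:

```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-server
  labels:
    app: main-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: main-server
  template:
    metadata:
      labels:
        app: main-server
    spec:
      containers:
        - name: main-server
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/main-server:latest
          ports:
            - containerPort: 8080
---
# A ClusterIP Service gives other pods a stable DNS name for it
apiVersion: v1
kind: Service
metadata:
  name: main-server
spec:
  selector:
    app: main-server
  ports:
    - port: 8080
      targetPort: 8080
EOF
```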

Making My App Reachable - Load Balancers and Ingress

Next up: networking. My frontend-ui was still being exposed via a NodePort (mapping to a host port on the node itself), which is fine for testing but not realistic in a production environment.

So I went down a bit of an Ingress rabbit hole.

In conjunction with Portainer, I:

  • Enabled the Ingress controller on MicroK8s
  • Created an Ingress resource through the UI that routed traffic from a hostname to my frontend service
  • Updated my local /etc/hosts to simulate DNS for testing (I probably should have just deployed a DNS service too, but this was fine).

Just like that: clean, hostname-based access to my App. No more https://my-app:30080-style NodePort URLs; instead I could just go to https://my-app.

I didn’t go full TLS yet, but now I understand more about how Portainer lets you map domains, paths, and even load balance multiple services through one entry point.
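For reference, the moving parts look roughly like this; the hostname and service details are from my lab, so treat them as placeholders:

```
# Enable the NGINX ingress controller bundled with MicroK8s
sudo microk8s enable ingress

# Route the test hostname to the frontend Service
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: public   # MicroK8s registers its controller under this class
  rules:
    - host: my-app
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-ui
                port:
                  number: 80
EOF

# And on the Mac, the /etc/hosts line that fakes DNS for testing:
# <node-ip>  my-app
```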

Observability

Throughout all of this Portainer usage so far, I've been mentally comparing what it gives me for monitoring and alerting against its equivalent in my mental model: vCenter.

So far my take is as follows (early days, so I may not have the complete picture here):

  • You get health indicators, status feedback, basic logs, and some resource usage visuals in the UI
  • You don't get long-term metrics, dashboards, or alerts out of the box. That's where integrating with external tools makes sense.

What I'm seeing is that in its current form, Portainer is meant to simplify ops, not replace your entire monitoring stack.

Days 6-7 (Weekend): Theory Mode Activated

No major lab work this weekend, but it wasn't downtime. I spent some time watching various YouTube videos on Portainer, Docker, and Kubernetes. A quick search for "Portainer" on YouTube, and even in podcasts, is a testament to how mature this product has become, how valued it is, and how many people have come to love it.

A couple of notable mentions would be NetworkChuck's various deep dives into Docker and Kubernetes. They're an enjoyable watch, to the point, and again helped me translate my virtualization infrastructure background into container terminologies, solutions, and deployment architectures.

I also made a start on the "Complete Docker and Kubernetes Course" on Udemy. This was one of the links in the Portainer blog post I was recommended on Day 1.

This was all a great reset, as it zoomed me out of the tools and back into the "why" behind containers:

  • Why containers are more portable than VMs 
  • How orchestration solves scaling and reliability
  • Why Kubernetes and Docker aren’t competing, they’re layered

 

Make sure to come back next week for the next instalment in this series, where Niels dives deeper into containerization.
