
My First 30 Days at Portainer - Week 3

Written by Niels Buitendijk | June 28, 2025

In this third entry in a series of blogs, new team member Niels Buitendijk walks us through his first 30 days working at Portainer, including what he's learning along the way, not just about Portainer but also the wider industry we service.

If you missed the first or second entries in this series, we recommend starting there.

Day 16: Persistent Storage and Volumes

Today was all about persistent storage in Kubernetes, something I had been putting off but knew was absolutely essential for production workloads. Coming from a VMware background, where persistent storage is second nature, I was keen to see how it’s done in the containerized world.

Kubernetes, by default, is designed for ephemeral containers: if a container is stopped or restarted, any data it holds is lost, just like with Docker. This is great for stateless apps, but when you need stateful services (like databases), things get more interesting.

I learnt that’s where Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) come in. These are the Kubernetes way of ensuring data lives on, even when containers are recreated, moved, or scaled down. Kubernetes basically separates storage from the container lifecycle.

In traditional virtual environments, we deal with storage policies and datastores. Kubernetes abstracts this in a similar way, but the big difference is the dynamic nature of containers. Here’s how I’m understanding it so far:

  • Persistent Volumes (PVs): Represent actual storage resources in the Kubernetes cluster, backed by things like NFS, cloud storage (such as AWS EBS), or even local disks.
  • Persistent Volume Claims (PVCs): These are like “requests” for storage. If you think about the way VMs are assigned storage from a datastore, PVCs are the equivalent. A pod requests storage via a PVC, and Kubernetes binds that claim to an available PV (see the sketch below).
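
To make that more concrete, here's a minimal sketch of what a statically provisioned PV and a matching PVC look like in raw YAML. The names, the size, and the hostPath location are purely illustrative:

```yaml
# A cluster-level storage resource - in this sketch, just a directory on a node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/demo-data
---
# A namespaced "request" for storage - Kubernetes binds it to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

Because the claim lives on independently of any single pod, the data survives container restarts and rescheduling, which is exactly the separation from the container lifecycle described above.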

Kubernetes Storage in Action (With Portainer's Help)

In Kubernetes, you’d usually need to write a bunch of YAML to set up PVs and PVCs. After going through the documentation and seeing how it's normally done, I decided to try it using Portainer's UI instead, and honestly, I’m glad I did. It made the experience a lot easier.

At this point, I was working with a basic MicroK8s setup and testing local storage only. Through the container deployment flow in Portainer, I was able to:

  • Set the mount path inside the container (e.g. /var/lib/mysql)
  • Choose the storage class (microk8s-hostpath, the default for local volumes)
  • Define the storage size I wanted (e.g. 20 GB)

All of this was done through dropdowns and input fields, absolutely no YAML needed.
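
For the curious, those dropdowns map onto roughly this kind of manifest behind the scenes. This is a hedged sketch rather than the exact objects Portainer creates; the names are mine, but the storage class, size, and mount path match what I picked in the UI:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data                        # illustrative name
spec:
  storageClassName: microk8s-hostpath     # MicroK8s' default local storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:8
      env:
        - name: MYSQL_ROOT_PASSWORD       # required by the image; use a Secret in anything real
          value: changeme
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql       # the mount path set in the Portainer UI
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-data
```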

That said, I hadn’t yet attached or tested shared storage like NFS. That came later (and turned out to be its own mini-adventure). But even at this stage, I could already see how Portainer abstracts a lot of the low-level Kubernetes plumbing while still letting you drill down when needed.

Day 17: Namespaces, RBAC, Multi-Tenant Clarity

Today I shifted gears from storage to cluster structure and access control. As I dig deeper into containers, I’m reminded how important role-based access control (RBAC) and separation of environments are for the typical enterprise.

Namespaces: Familiar Territory, Different Power

With a VMware background, I’m no stranger to carving up infrastructure. Namespaces in Kubernetes feel like folders or resource pools in vCenter. They let you segment workloads, isolate resources, and keep things organized. But Kubernetes takes it a step further in that you can apply quotas, policies, and access rules per namespace.

Using Portainer, creating namespaces was as easy as typing a name and clicking “Create Namespace”. It’s a small thing, but I could see how that helps bring structure to clusters fast. It lets you scope workloads in a way that mirrors how I’d separate environments like dev, test, and prod back in my VMware days.

Another bonus was that you can apply resource quotas per namespace right in the Portainer UI (again - no YAML required). I could limit how much CPU and memory a given namespace could consume. It’s the kind of thing I’d normally do with resource pools in vSphere. You can also limit the number of load balancers that can be deployed, which registries are available for use, and even set storage quotas, all specific to the namespace. 
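
Under the hood this is roughly equivalent to a Kubernetes ResourceQuota object. Here’s a hedged sketch with example values (the exact objects Portainer manages may differ):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota                  # illustrative name
  namespace: dev
spec:
  hard:
    requests.cpu: "4"              # total CPU the namespace can request
    requests.memory: 8Gi           # total memory the namespace can request
    limits.cpu: "8"
    limits.memory: 16Gi
    services.loadbalancers: "2"    # cap on LoadBalancer services
    requests.storage: 100Gi        # cap on total PVC storage across the namespace
```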

RBAC, Simplified

Role-Based Access Control (RBAC) in Kubernetes is powerful but usually buried under YAML. I can’t imagine anything worse than having to define, update, and tweak RBAC all in YAML! Portainer’s UI handles it much more cleanly. I built a Samba-based AD server to test integration, and hooking it up to Portainer was very easy using some basic LDAP connection details. Once integrated, I could map AD groups to teams, assign roles like “View Only” or “Environment Admin,” and control who could touch what, all through the UI. Create a team in Portainer with the same name as an AD group and the two automagically map.
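
To give a sense of what the UI saves you from, this is the flavor of raw RBAC YAML you’d otherwise be hand-writing for a read-only group. It’s a generic example of the upstream Kubernetes objects, not what Portainer generates internally:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: view-only                  # illustrative role scoped to one namespace
  namespace: dev
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-view
  namespace: dev
subjects:
  - kind: Group
    name: dev-team                 # e.g. a group sourced from AD/LDAP
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: view-only
  apiGroup: rbac.authorization.k8s.io
```

Now imagine maintaining that for every team and every namespace, and the appeal of mapping AD groups to teams in a UI becomes obvious.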

Alternatively, you can set up OAuth to work much the same way. On top of that, if you have groups whose names don’t exactly match a team in Portainer, you can use regex mappings instead. A small thing, but a very handy addition I discovered along the way.

Looking through how Portainer handled RBAC reminded me of assigning permissions in vCenter, but adapted to the Kubernetes world. No complex role bindings or custom policies to figure out, just logical access control done simply.

Multi-Tenancy

One thing Neil pointed out is that Portainer doesn’t offer true multi-tenancy in Kubernetes. While teams can be restricted to their own namespaces with clear RBAC controls, they’re still part of a single Portainer instance. There’s no hard isolation between tenants at the platform level.

For the majority of enterprises, and definitely for my lab setup, that’s more than enough. But if you’re in a highly regulated environment or need strict tenant isolation, it’s something to keep in mind.

Day 18: Hello Helm

Today I went deeper on something I’d been meaning to explore properly: Helm. If Kubernetes is the operating system, Helm is like a package manager. It lets you deploy complex, multi-service apps, like WordPress and its database, using a single, versioned chart. It’s repeatable, it’s scalable, and it takes away a lot of the manual YAML heavy-lifting.

From vApps to Helm Charts

For me, the mental model clicked when I thought about vApps in vSphere. You’d group together multiple VMs, like a frontend, a backend, and a DB with all the dependencies and startup logic in one unit. Helm charts feel like the Kubernetes-native version of that, in that you define all the resources an app needs in one package. Only now it’s not just click-and-deploy like in vCenter, unless you’re using Portainer.

Through the UI, I was able to browse and deploy Helm charts directly from repositories, as well as override chart values right in Portainer before deploying. Again, no need to deploy YAML with kubectl.
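
As a concrete example, overriding chart values in the UI is the equivalent of supplying a values file like this one. The keys below are typical of a WordPress-style chart (I had the Bitnami chart in mind), but treat them as illustrative rather than exact:

```yaml
# values-override.yaml - illustrative overrides for a WordPress-style chart
wordpressUsername: admin
wordpressBlogName: "Lab Blog"
service:
  type: ClusterIP                  # no external load balancer in my lab
persistence:
  size: 20Gi
  storageClass: microk8s-hostpath  # reuse the same local storage class as before
mariadb:
  primary:
    persistence:
      size: 10Gi
```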

Beyond the Lab: Learning About Portainer's Customers

Not everything I did today was technical. I also spent time going through Portainer’s sales process, and reading about some of Portainer’s existing customer base. I wanted to better understand how and why people choose Portainer, what pain points we solve, what scale we work at, and what outcomes we deliver.

There’s a common theme across the stories - customers aren’t always looking to become Kubernetes experts. They’re looking to run apps faster, with fewer people, and without the operational friction that often comes with container orchestration. And Portainer helps them do that, whether they’re a global energy company or a healthcare startup. It seems a lot of customers have people like me - who moved on from using esx-cli back in the day, got used to the power and simplicity of vCenter, and are now looking for something similar in the container world.

This business context matters. It helps me frame technical features in terms of actual value, and it gives me better language to use when I talk about Portainer with prospects, partners, or even friends.

Second Thoughts On My Lab

Toward the end of the day, I realized my current hardware setup might start to limit my ability to run a realistic demo lab. After a bit of research, I ordered a Beelink SER5 Mini PC, a small but capable Ryzen-based box I thought could give me a clean x86 environment to test on. I figured it would be perfect for running MicroK8s, Docker Swarm, and Portainer all in one compact setup. It was scheduled to arrive the next day, and I was already planning what to rebuild.

Day 19: First Mini PC Arrives (and Hits the Limit Quickly)

Friday was an exciting one: my new Beelink SER5 Mini PC arrived right on schedule. I wasted no time getting Proxmox installed and spinning up my first few lab VMs: one for Portainer, a few for MicroK8s, and a utility node I could use for Samba and DNS duties.

Initially, things looked promising. It was fast, responsive, and the form factor was ideal. I even got far enough to deploy the Portainer agent in a MicroK8s cluster and connect everything together. But as I began planning to layer on NFS, Docker Swarm, Talos nodes, other flavors of Kubernetes, and observability tools, I started noticing some performance strain. The SER5 is a great little box, but with just 32GB of RAM and a modest CPU, I quickly realized it wouldn’t scale to the kind of multi-cluster, production-style demo lab I wanted to build.

That evening, after a quick sanity check and a lot of wishful thinking, I admitted it: I needed something more powerful. So I went looking and found what I needed, the Beelink GTi12, a proper mini powerhouse with:

  • Intel Core i9
  • 64GB DDR5 RAM
  • 2TB PCIe 4.0 SSD

I ordered it the next day (Saturday), knowing it would arrive Sunday. It felt like the right move. After all, the goal here is to showcase what Portainer can really do in enterprise environments, and to do that I needed hardware that wouldn’t hold me back.

In between lab setup and hardware comparisons, I also took some time to dive into customer communications in our CRM. I wanted to get a better feel for the types of environments our users are working with, the kinds of questions they’re asking, and where Portainer is really delivering value. It’s early days, but I’m starting to recognize patterns, especially with teams that are modernizing quickly but don’t have deep Kubernetes expertise on hand. That theme of simplifying complexity really resonates, both in the field and in my own journey so far.

Even though the SER5 was a bit of a misstep for my scale, I don’t regret it. The process helped me think more critically about how to structure my lab going forward. I sketched out a plan on my whiteboard for a clean architecture with:

  • Portainer BE as the central UI, running atop an Ubuntu VM with Docker
  • Samba AD and DNS on a VM acting as my core infra services
  • Multiple MicroK8s nodes
  • Sidero Talos Nodes
  • Docker Swarm
  • Podman nodes
  • NFS-backed storage for persistent volumes
  • Tailscale for remote access

It’s ambitious, but now with the GTi12 on the way, totally achievable.

Days 20-21: Graduation, Upgrades, and Getting Back to the Lab

This weekend was a bit different, less keyboard time, more life stuff. My daughter was graduating high school, which is a pretty major milestone here in the US, so the weekend was full of prep, celebrations, and a lot of proud-dad moments. 

Saturday was mostly logistics. We picked up one of her close friends who flew in from out of state, got things organized for the big day, and squeezed in some celebration time too. Even without extended family around, it was a full-on weekend for us.

In between all that, I realized I’d hit the ceiling with the Beelink SER5 I’d tested the day before. Great little machine, just not cut out for running multiple clusters, persistent storage, monitoring, and everything else I had in mind. So Saturday night I went ahead and ordered the Beelink GTi12, a serious step up with an i9 CPU, 64GB DDR5, and 2TB of fast storage. It was arriving the very next day.

Sunday was all about the graduation ceremony. It was emotional, exciting, and full of proud dad feelings. Once we got back and things settled down, I finally unboxed the GTi12 and started rebuilding the lab properly later that evening.

By the end of the evening, I had:

  • Proxmox installed
  • VMs spun up for Portainer BE, MicroK8s, and infra-core
  • AD (Samba) and DNS configured on infra-core
  • MicroK8s deployed
  • Tailscale set up for remote access
  • A plan in place for the rest of the lab

It felt like a clean slate. And with the right gear under the hood this time, I could build things out without having to compromise. Portainer was quick to deploy again and made it easy to connect environments and manage workloads across the new cluster, and I could already feel how much smoother things were going to be with this setup.

Even though it was a quieter weekend on the technical front, this felt like a major turning point. Real infrastructure, better use cases, and now the hardware to match.

 

Make sure to come back next week for the next instalment in this series, where Niels gets TLS and AD integration sorted in his lab, fights with NFS, and experiments with Talos and Podman environments.