In this second entry in a series of blogs, new team member Niels Buitendijk walks us through his first 30 days working at Portainer, including what he's learning along the way, not just about Portainer but also the wider industry we serve.
If you missed the first entry in this series, we recommend starting there.
Today I went deep with the Docker course, working through it entirely in my lab.
Some key wins:
Even though I’ve already moved into some Kubernetes, revisiting Docker fundamentals helped me better understand what K8s is orchestrating under the hood.
I also spent some time reading through Portainer customer success stories, which gave me real-world context for where the product shines in production, especially for teams that don’t want to live in kubectl and YAML.
These last few days helped me connect abstract concepts with practical workflows:
Today was less about adding new services and more about zooming out. I spent the first half of the day digging into Portainer’s Ideal Customer Profile (ICP) and studying some of our customer success stories. It gave me a much clearer picture of how Portainer delivers value, not just technically, but strategically.
Reading through some more case studies, some common themes stood out:
I broke my lab. Not on purpose. I was testing out different Kubernetes distributions (MicroK8s, Minikube, and others) on multiple virtual architectures. Eventually, I hit a wall and decided to wipe the machine and start clean!
The process:
Now the lab is cleaner, leaner, and ready for some heavier testing.
I then went back to the Udemy course I was working through, which covered a few more Docker topics, and then started on the Kubernetes sections. Some of the hands-on work I covered today:
The more I work with Portainer, the more I appreciate it as a platform abstraction layer, one that fits neatly between what infra teams know and what modern dev teams need.
By spending time today understanding who Portainer is for, I also better understand who I’m here to help as a TAM. It's not just about container fluency, it's about helping companies modernize without friction, without needing to hire a bunch of new K8s engineers, and without losing visibility into their stack.
Today was all about sharpening my Kubernetes fundamentals and reinforcing what I’ve been learning behind the scenes in Portainer. I’ve been trying to get more confident using kubectl, not to replace what I do in Portainer, but to better understand what’s happening under the hood so I can compare workflows and see why Portainer makes so much sense.
I deployed a small Node.js app I built, pushed the image to Docker Hub, and got it running in my lab on MicroK8s. But not without hitting my first proper architecture issue: I built the Docker image on my Mac (ARM/M2) and was trying to run it on my MicroK8s node (x86). The pods crashed on rollout. At first it was frustrating, but then I realized Kubernetes had only updated part of the deployment and left the working pods running. That was some k8s coolness, and it gave me time to fix the image and re-deploy.
It was one of those learning moments where I really saw how Kubernetes works in practice and where having Portainer helped me instantly identify the issue. The failing pods were right there, red and clearly unhealthy in the UI. I didn’t have to dig through lines of CLI output to figure out which replica was failing. I could just see it.
After fixing the image architecture mismatch and triggering a new rolling update with the new image, everything came back clean. The update rolled out and the app was fully functional again.
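For anyone hitting the same ARM/x86 mismatch, the usual fix is a multi-platform build with Docker Buildx. A minimal sketch, assuming a hypothetical image name (`myuser/myapp`) and deployment name (`myapp`); it needs a running Docker daemon and registry access:

```shell
# One-time: create and select a builder that supports multi-platform builds.
docker buildx create --use

# Build the image for both architectures and push, so the x86 node
# can pull the variant that matches its CPU.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myuser/myapp:latest \
  --push .

# Trigger a fresh rolling update and watch it complete.
kubectl set image deployment/myapp myapp=myuser/myapp:latest
kubectl rollout status deployment/myapp
```

The same rollout behaviour described above applies here: if the new image is broken, Kubernetes halts the rollout and leaves the old replicas serving traffic.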
On the networking side, I tried exposing my app using different Kubernetes service types. ClusterIP (internal only), NodePort (access via high port on the actual node), and LoadBalancer (with help from MetalLB on MicroK8s). It was a great way to understand how Kubernetes service routing actually works, and how these abstractions expose workloads inside and outside the cluster.
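As a sketch of what those three options look like in practice: the manifests differ mainly in the `type` field. Here is a hypothetical service named `web`, shown as NodePort, with comments on how the other two types would change it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web              # hypothetical service name
spec:
  type: NodePort         # swap for ClusterIP (internal only) or LoadBalancer
  selector:
    app: web             # must match the labels on the target pods
  ports:
    - port: 80           # cluster-internal port other pods use
      targetPort: 8080   # port the container actually listens on
      nodePort: 30080    # high port opened on every node (default range 30000-32767)
```

With `type: LoadBalancer` on MicroK8s, MetalLB assigns an external IP from the pool you configured when enabling the add-on.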
Today was a mix of hands-on experimentation and strategic learning. I spent some time comparing the Kubernetes Dashboard to Portainer, but I also got to step away from the lab and have a proper chat with Neil, which helped clarify some other concepts I’ve been wondering about as I progress through my Portainer/Docker/Kubernetes journey.
I’ve been jumping between CLI, Portainer, and the Kubernetes Dashboard, and today I wanted to experience how Kubernetes handles things like workload visibility, deployment control, and operational usability across each interface. After enabling the Kubernetes Dashboard and logging in, I deployed a couple of test apps and worked through exposing services, scaling replicas, and viewing logs. It’s a functional interface, and it gives you access to some useful basics. But it also made me appreciate just how much Portainer is doing for me in the background.
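For reference, getting the Dashboard running on MicroK8s is a one-line add-on, though the exact login steps vary a little between versions. A sketch of what I mean:

```shell
# Enable the Dashboard add-on on MicroK8s.
microk8s enable dashboard

# Forward the Dashboard service to a local port (it lives in kube-system).
microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

# On recent Kubernetes versions, mint a login token on demand;
# older versions use a pre-created secret instead.
microk8s kubectl create token default -n kube-system
```

Then browse to https://localhost:10443 and paste the token in.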
Portainer gives me something that feels more familiar. It’s not just about being “easier” to use; it’s about surfacing things in a way that makes sense for people used to managing platforms. It may not seem that way for some, but for people with my background in Virtualization Infrastructure, Portainer does make things easier to understand. Where the Kubernetes Dashboard feels more like a bolt-on dev tool, Portainer feels like a proper control plane. I can move between Docker and Kubernetes clusters, dig into logs, set policies, roll out updates, and manage my app stack without bouncing between tools or needing to write YAML every time I want to tweak something.
A highlight of the day came from a chat I had with Neil. I’ve been working through deployments and services, and I started thinking about how Kubernetes handles things like workload placement, resource allocation, and protecting system processes, the sorts of things I used to manage with DRS rules, resource pools, and host reservations in vSphere. I asked how Kubernetes handles that side of things, and how Portainer helps with it.
We talked through how Portainer helps you influence where workloads land in the cluster using taints and tolerations (new concepts to me). Taints and tolerations are a way to keep workloads off nodes unless they’re explicitly allowed there, a bit like marking nodes for infrastructure-only services. Then we covered resource requests and limits, which protect against noisy neighbor scenarios, and namespace quotas, which ensure teams or apps don’t hog all the cluster resources.
What stood out most from all of that was seeing how Portainer makes all of this configurable from the UI. You don’t need to handwrite a heap of YAML or deep-dive into the Kubernetes API. You can define CPU and memory limits per container, apply quotas to a namespace, and even control node placement preferences visually, all from within Portainer. For me, that’s huge. It’s not that Kubernetes doesn’t offer the control, because it clearly does. But Portainer makes that control practical for teams who want to operate without hiring a team of platform engineers.
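To make those concepts concrete, here is a sketch of what the underlying objects look like. All the names (`infra-node`, `infra-agent`, `team-a`) are hypothetical:

```yaml
# First, a node would be tainted so ordinary pods avoid it, e.g.:
#   kubectl taint nodes infra-node dedicated=infra:NoSchedule

# A pod that tolerates that taint and declares requests/limits:
apiVersion: v1
kind: Pod
metadata:
  name: infra-agent
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: infra
      effect: NoSchedule     # allows scheduling onto the tainted node
  containers:
    - name: agent
      image: myuser/agent:latest
      resources:
        requests:            # what the scheduler reserves for the pod
          cpu: 250m
          memory: 256Mi
        limits:              # hard caps against noisy-neighbor behaviour
          cpu: 500m
          memory: 512Mi
---
# A namespace quota that stops any one team hogging the cluster:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

This is exactly the YAML that Portainer lets you avoid handwriting; the fields above map to the CPU/memory and quota controls in the UI.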
One really cool thing I learned from talking to Neil was how Portainer actually implements all of this under the hood. When you define these settings in the UI, Portainer generates all the necessary Kubernetes YAML in the background. But the key point is that config doesn’t live in Portainer. It gets pushed into the cluster the same way it would if you wrote it yourself. That means you’re not locked in. If you decide to stop using Portainer tomorrow, your workloads keep running just the same. Knowing that makes it a no-brainer for people to try Portainer in their environment. In that sense it’s really a visual accelerator, not a proprietary black box.
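One way to see this for yourself, assuming a hypothetical deployment named `myapp` that was created through Portainer: ask the cluster directly for the object, with no Portainer involvement at all.

```shell
# The YAML Portainer generated lives in the cluster, not in Portainer,
# so plain kubectl can read it (and keep managing it) like any other object.
kubectl get deployment myapp -o yaml
```

If Portainer disappeared tomorrow, that deployment spec, and the workloads it describes, would still be there.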
Today was one of those days where the theory and the tools came together. I started off spending some time on Portainer Academy, diving into how Kubernetes is structured from the control plane down to worker nodes, and how components like the API server, etcd, controller manager, and scheduler all interact. I already had some idea of this from the Udemy course and YouTube content earlier, but the Academy walked through it in a way that really helped anchor it in practical reality, especially since I could immediately map each part to something I’d already seen in the Portainer UI. I didn’t get through all of the Academy content, so I will come back to this later.
After that, I spent time reviewing some of Portainer’s Internal Sales Material such as the Ideal Customer Profile (ICP) and some real-world positioning. What stood out was just how aligned Portainer is with teams like the ones I’ve worked in throughout my career, teams with smart people, but not always deep Docker and Kubernetes expertise. Teams that want visibility, consistency, and a safe path to containerization without needing to triple their headcount or train everyone in kubectl.
To bring it back to hands-on work, I finished the day by building another test environment. I wanted to try a different deployment approach. So I set up a clean x86 Ubuntu VM (inside UTM on my Mac), and instead of going the manual MicroK8s install route again, I let Portainer deploy it for me.
This time, it was dead simple. I used Portainer’s Add Environment wizard, selected MicroK8s, added the IP and credentials for it to SSH into my Ubuntu Server, and within minutes I had a new Kubernetes node up and running in my Portainer instance. No kubeconfigs, no token headaches, no weird networking edge cases. Just a clean, working cluster ready for workloads.
It was good to feel like I’d come full circle, from learning what Docker and Kubernetes are and how they work, to using Portainer to provision and manage a live Kubernetes node with full visibility.
It was Memorial Day weekend here in the US, so I spent most of the time doing family things, which was a welcome reset. But that didn’t mean I switched off completely.
I still squeezed in a few hours here and there to watch more Kubernetes, Docker, and Portainer content, mostly on YouTube and podcasts. Nothing too intense, but enough to keep the learning ticking over.
Some topics from what I watched:
I didn’t do any big lab builds or deployments, but I came away from the weekend feeling sharper, more confident in how the puzzle fits together, and more able to speak to the "why" behind the "what".
If anything, the weekend helped me zoom out a little and think about what this shift means for the organizations I’ll be helping as a TAM. It’s not just about learning Kubernetes or Portainer for my own benefit; it’s about enabling teams like the ones I used to work with to adopt containers safely, scale their infrastructure without hiring a bunch of new engineers, and gain visibility into their workloads. It also got me thinking about the kinds of conversations I’ll be having with customers now that I’m part of Portainer.
Make sure to come back next week for the next instalment in this series, where Niels digs into persistent storage, RBAC, and Helm, and starts to expand his lab.