Portainer News and Blog

My First 30 Days at Portainer - Week 4

Written by Niels Buitendijk | July 4, 2025

In the fourth and final entry in a series of blogs, new team member Niels Buitendijk walks us through his first 30 days working at Portainer, including what he's learning along the way, not just about Portainer but also the wider industry we service.

If you missed the first, second or third entries in this series, we recommend starting there.

Day 22: Certificates, RBAC (Revisited), and Lab Access

Today was about shoring up the essentials in my newly rebuilt lab environment. I’d already explored RBAC and namespaces a little back on Day 17, but now that my new Beelink GTi12 lab setup was stable, I wanted to properly implement those concepts end-to-end.

First up, TLS. I created a self-signed certificate with SAN support for my Portainer BE instance, copied the cert and key over to the underlying Portainer server, and restarted the Portainer BE container with the new certs. Even though this is all running internally, it felt right to get rid of the annoying browser warnings and run things cleanly over HTTPS. Anyone who's managed vCenter or NSX Manager will appreciate that small sense of polish.
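If you want to do the same, generating a SAN-enabled self-signed cert is a one-liner with OpenSSL. The hostname and IP below are placeholders for my lab; swap in your own:

```shell
# Self-signed cert with SAN entries (hypothetical lab hostname/IP; requires OpenSSL 1.1.1+ for -addext)
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout portainer.key -out portainer.crt \
  -subj "/CN=portainer.lab.local" \
  -addext "subjectAltName=DNS:portainer.lab.local,IP:192.168.1.50"
```

From there you mount the pair into the Portainer container and point Portainer at them (its `--sslcert`/`--sslkey` flags, check the docs for your version) so it serves HTTPS with your cert instead of the auto-generated one.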

Then it was time to bring my RBAC setup to life. I rebuilt my Samba-based AD domain controller and reconnected Portainer to it using LDAP (selecting the AD option in Portainer). I'd already proven this flow during my first pass, but now I wanted to test it properly using actual users and team mappings in my new live lab.

Compared to doing this manually in Kubernetes by writing out RoleBindings, ClusterRoles, and YAML policies, Portainer just makes the whole thing so much easier! Also, unlike Docker, which doesn’t have RBAC at all, Kubernetes gives you this capability but makes you pay for it in complexity. Portainer closes that gap beautifully. 

By the end of the day, my lab had:

  • Self-signed TLS certificates in place for Portainer
  • Working AD integration using Samba
  • Teams and access rules mapped via Portainer’s RBAC UI
  • Role separation working cleanly across my new environments

It all reminded me a bit of the first time I got vCenter connected to LDAP and started assigning roles across different clusters. The power of visibility and control, without the YAML headache I’ve heard so much about.

Day 23: Talos the Portainer way, and My First Real Customer Call

Tuesday was a great mix of learning and real-world exposure. I got my hands dirty deploying a Talos Kubernetes cluster in my lab, and I also sat in on my first customer call with the Portainer team, which was a solid reality check on what our users are actually dealing with.

Talos: Secure K8s, Minus the Mess

I’ve been wanting to get a Talos cluster running ever since I first learned about Portainer’s work with Sidero, and today I finally made it happen. What made it extra slick was that I didn’t need to script anything manually. Instead, I used Portainer’s built-in “Deploy Kubernetes Cluster” feature to provision Talos right from the UI.

I spun up three fresh VMs in Proxmox, created an account on the Omni site, and downloaded a boot ISO so the VMs would register in Omni, ready for deployment. After they registered, I used Portainer BE to push the cluster configuration to those nodes. A few clicks later, I had a fully functioning Talos Kubernetes cluster wired into my Portainer environment.

That pairing of Talos for the secure, hands-off OS and Portainer for the actual workload deployment and access control is pretty unbeatable. Especially for people like me who want to understand the internals without living in them. With Talos, there’s no shell and no kubeadm setup needed, just clean cluster bootstrapping. And Portainer brings it all to life.

Sitting in on My First Customer Call

Later in the day, I joined my first customer call with the Portainer team. The customer was in the early stages of leaving Cloud Foundry behind and starting to investigate Kubernetes as their next platform. They were running a mix of in-house and vendor apps, some already containerized, and had teams using both GitLab and GitHub with a goal to make Git-based CI/CD their standard.

Some of their key concerns were familiar:

  • Getting RBAC to scale across a large org
  • Simplifying operations across multiple data centers
  • Avoiding tool sprawl and reducing license overhead
  • Overcoming internal resistance to change

Neil walked them through a Portainer BE demo, showing how they could enforce access controls per namespace, map AD groups directly into teams, and operate in a multi-cluster world without needing heavyweight add-ons. He even touched on the no-vendor lock-in aspect of Portainer and how our config lives outside the tool.

What struck me was how closely their pain aligned with what I’ve been tackling in my own lab. They’re starting small, experimenting with tools, trying to align around GitOps, and needing something to make Kubernetes feel manageable. That’s exactly what Portainer is doing for me.

Day 24: The NFS Battle (and a Win for Portainer)

Today was one of those deep-dive, beat-your-head-against-the-wall, then finally-get-it kind of days. I decided to tackle persistent storage in Kubernetes properly using shared NFS storage, and at first it wasn’t easy. But getting it working through Portainer made the journey worthwhile.

I built a TrueNAS box and my goal was to get a Kubernetes workload writing to a shared NFS volume using my Talos cluster nodes. Sounds simple, right? Not so much.

First I validated that I could actually reach the NFS share from the Talos nodes. This involved spinning up a privileged BusyBox pod and using it to test-mount the NFS path. I just love how quickly you can spin up containers like that to help troubleshoot. That step was crucial, as it turned out I had some network-level issues to iron out first (routing, firewall, NFS permissions, etc.).
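For reference, a minimal version of that test pod might look like this. It's a sketch, the image tag is my choice and the idea is to exec in afterwards to try the mount:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: busybox
      image: busybox:1.36
      command: ["sleep", "3600"]      # keep the pod alive so we can exec into it
      securityContext:
        privileged: true               # required to run `mount` inside the container
  restartPolicy: Never
```

Then something like `kubectl exec -it nfs-test -- mount -t nfs 192.168.1.60:/mnt/tank/k8s /mnt` (server address and export path are placeholders) tells you very quickly whether the share is reachable from the node at all.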

Once I confirmed I had access, I tried deploying a workload with a manually defined PersistentVolume and PersistentVolumeClaim using YAML, pointing at the NFS share. That technically worked, but wasn’t scalable as each volume had to be statically created, and I’m also trying to steer clear of YAML wherever possible.
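As a sketch of what that static approach looks like (server address and export path are placeholders for my TrueNAS box), the PV/PVC pair is roughly:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.60              # hypothetical TrueNAS address
    path: /mnt/tank/k8s
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                # empty string pins the claim to a static PV
  resources:
    requests:
      storage: 5Gi
```

You can see the scaling problem immediately: every new volume means hand-writing another PV block like this one.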

Picking the Wrong CSI Driver (the First Time)

At first, I did what many would do, I grabbed what seemed like the official NFS CSI driver (nfs.csi.k8s.io) and started trying to deploy that. Everything looked right in the docs… but nothing worked for me. Pods wouldn't bind properly, PVCs stayed in “Pending,” and there was zero NFS activity. I spent literally hours tweaking YAML, checking pod logs, and second-guessing everything from DNS to storage class naming.

One HUGE help here when troubleshooting was how easy it was to dive into the CLI with kubectl, literally by clicking the blue kubectl button in the left navigation pane.

Eventually, I figured out that the CSI-based provisioner I was using wasn't playing nicely with Talos and my TrueNAS setup. It was either overcomplicated for what I needed or simply incompatible out of the box.

Switching to nfs-subdir-external-provisioner

So I pivoted to the community-supported nfs-subdir-external-provisioner, and that’s when things shifted into gear for me.

After applying the YAML to install the provisioner (using Portainer’s “Add from Code”), I went into Cluster → Setup and enabled it in the Portainer UI. Once it was registered, it showed up as a selectable storage class when deploying apps, right in the deployment form.

From there, everything just started working:

  • I deployed a MySQL container using Portainer
  • Selected the new NFS storage class in the persistent folder section
  • Restarted the workload and confirmed the data persisted
  • Checked my TrueNAS box and saw new folders auto-created under the NFS export, cleanly organized by app

Note: the folder creation didn’t happen until I deployed workloads through the provisioner. But when it did, it was instant feedback that the dynamic provisioning was actually working. Huge relief.
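Once the provisioner is in place, a claim shrinks to a few lines of YAML (or a dropdown in Portainer's form). A minimal PVC sketch, assuming the provisioner's default `nfs-client` storage class name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client      # default class created by nfs-subdir-external-provisioner
  resources:
    requests:
      storage: 8Gi
```

The provisioner watches for claims like this and carves out a subdirectory under the NFS export automatically, which is exactly the folder creation I saw on the TrueNAS side.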

Day 25: Podman Hosts and GitOps Foundations

Today was all about extending the lab and laying the foundation for something I’d wanted to tackle for a while, GitOps.

I spun up three VMs to act as Podman nodes, aiming to simulate a small distributed container setup. I found out that rather than forming a cluster like Kubernetes, Podman works more like managing individual Docker hosts, which is actually a great demo of Portainer's ability to unify those environments.

To get each Podman node working I did the following:

  • I enabled the Podman socket with sudo systemctl enable --now podman.socket
  • Then I launched the Portainer agent container, running it with:
    • Port 9001 exposed
    • The --privileged flag
    • The Podman socket mounted at /var/run/docker.sock
    • Key volume mounts

All of that was really easy. Why? Because Portainer literally gives you the command to run on your Podman nodes.
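I won't reproduce Portainer's exact command here since it varies by version, but the shape of it on my nodes was roughly this (image tag and the extra volume mount are assumptions, check what your Portainer instance generates):

```shell
# Enable the rootful Podman API socket
sudo systemctl enable --now podman.socket

# Run the Portainer agent, mapping the Podman socket where the agent expects Docker's
sudo podman run -d --name portainer_agent --restart=always --privileged \
  -p 9001:9001 \
  -v /run/podman/podman.sock:/var/run/docker.sock \
  -v /var/lib/containers/storage/volumes:/var/lib/docker/volumes \
  portainer/agent:latest
```

The socket remap is the key trick: the agent speaks the Docker API, and Podman's socket is compatible enough that Portainer can drive it the same way.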

I didn’t use the Edge Agent this time, just registered each node in Portainer as its own standard environment.

The result? Each Podman node appeared in Portainer with container visibility and lifecycle control. I could deploy containers, inspect logs, manage volumes, and even track resource usage, all through the same familiar UI I’d been using for my other environments.

What I appreciated here was how consistent the experience felt. Even though Podman isn’t clustered like Swarm or Kubernetes, Portainer treats each node like a manageable unit. For demo scenarios or lightweight infrastructure setups, that’s actually quite powerful.

Another nice side effect of this test was that it gave me a clearer sense of just how many different backend environments Portainer can manage. That flexibility is a major value add for users migrating from traditional infrastructure who are now experimenting with mixed container strategies.

Laying the Groundwork for GitOps

Alongside the Podman setup, I also started preparing my GitOps pipeline. I’d read about Portainer’s GitOps integration and sketched out what the workflow should look like. My goal was to have GitHub Actions build images, update deployment manifests automatically, and let Portainer pick up those changes and redeploy apps without any manual touch.

To make that work, I set up:

  • A sample GitHub repo with a basic Angular app containerized via Docker
  • A deployment.yaml file that Portainer would later watch for changes
  • My GitHub Actions workflow skeleton with steps to build and tag the image

I didn’t trigger the full pipeline yet, but this was the day the pieces started coming together. I now had a clear picture of how to automate image builds, update manifests with commit SHAs, and let Portainer handle the redeploys from there.

Tomorrow will be all about putting that theory into practice.

Day 26: GitOps with Portainer and GitHub Actions

Today was a big milestone, I got full GitOps CI/CD automation working in Portainer, wired up through GitHub Actions and Docker Hub. It’s one thing to deploy containers with manifests, but it’s another to have a system that watches your Git repo and deploys changes automatically without ever touching the Portainer UI. That’s what I built today.

While I’ve spent most of my career in enterprise infrastructure, I’ve also had the chance to build and ship actual apps. At VMware, I co-developed an internal tool using Angular, TypeScript, Express, Postgres and Node.js, so I’m no stranger to NPM, GitHub workflows, and CI pipelines. I’ve used GitHub Actions before, but seeing it paired with GitOps and Portainer was a very cool "full circle" moment.

Here’s the flow I now have running:

  1. I push code to the main branch in my Git repo
  2. GitHub Actions builds a new Docker image
  3. The image is tagged with the updated Git commit SHA and pushed to Docker Hub
  4. The deployment yaml is updated with that tag and committed back to the same branch
  5. Portainer’s GitOps sync picks up the manifest change and deploys the new image to one of my Kubernetes clusters

Zero clicks. Just code.
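That flow can be sketched as a single workflow file. Everything here is illustrative, the image name (myuser/myapp), secret names, and repo layout are assumptions:

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]
    paths-ignore:
      - 'deployment.yaml'        # avoid an infinite loop when the manifest commit lands

permissions:
  contents: write                # lets the default GITHUB_TOKEN push the manifest update

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push image tagged with the short commit SHA
        run: |
          docker build -t myuser/myapp:${GITHUB_SHA::7} .
          docker push myuser/myapp:${GITHUB_SHA::7}
      - name: Point the manifest at the new tag and commit it back
        run: |
          sed -i "s|image: myuser/myapp:.*|image: myuser/myapp:${GITHUB_SHA::7}|" deployment.yaml
          git config user.name github-actions
          git config user.email github-actions@github.com
          git commit -am "deploy ${GITHUB_SHA::7}" && git push
```

The `paths-ignore` entry matters: without it, the workflow's own manifest commit would retrigger the workflow forever.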

A Few Key Learnings from Today

Portainer doesn’t track image changes, only Git changes

At first, I assumed Portainer might redeploy any new versions of an image pushed to Docker Hub. It doesn’t, and that’s by design. GitOps in Portainer watches your Git repository for changes to manifests, in my case deployment.yaml.

So if your manifest still says image: app-name:latest, Portainer does nothing when a new :latest is pushed. It only reacts when the manifest itself changes. To make it work, you just update your GitHub Actions workflow to rewrite the manifest’s image tag as part of the build.
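The manifest rewrite itself is a one-line substitution. Here's a local simulation of that step (file contents, image name, and tag are all made up):

```shell
# Create a toy manifest like the one the pipeline watches
cat > deployment.yaml <<'EOF'
spec:
  template:
    spec:
      containers:
        - name: app
          image: myuser/myapp:latest
EOF

NEW_TAG="3f2c1ab"   # in CI this would be the short Git commit SHA
sed -i "s|image: myuser/myapp:.*|image: myuser/myapp:${NEW_TAG}|" deployment.yaml
grep "image:" deployment.yaml
```

Because the file content changed, the follow-up commit is what Portainer's GitOps sync actually reacts to.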

You don’t need a personal access token (PAT)

Initially I set up a GitHub PAT and tried to use it in the workflow to push changes to the repo. I got all kinds of 403 errors, even though the token had the right scopes. Maybe I did something wrong there, but it turns out I could just use the default GITHUB_TOKEN provided by GitHub Actions if I added this to my workflow yaml:

permissions:
  contents: write

That was all I needed to allow the workflow to commit and push the updated manifest back to main. This is a handy thing to know if you find your GitHub Actions builds failing with 403 permission errors when trying to write back to your repo.

SHA-based tags are ideal for GitOps

I needed to set a unique tag on each image being built and then update my deployment.yaml with it, so I decided to use the Git commit SHA, since that changes on every commit. This was also a handy way to make every build uniquely traceable to the code pushed.

What Worked Really Well

  • Portainer's GitOps sync was rock solid. I set the interval to 1 minute (for testing) and it picked up the new manifest instantly after GitHub pushed the commit.
  • Debugging builds was easy since the workflow was clearly broken until the deployment.yaml was in sync with the image tags.
  • I love how visual Portainer makes the GitOps model. Watching apps auto-redeploy in the UI reinforces the power of declarative pipelines, especially for teams coming from traditional IT tooling.

Days 27-28: Blog Catch-Up and Adding Observability

Outside of family time, this weekend was mostly about playing catch-up. I’d fallen a bit behind on blogging the last stretch of my 30-day journey, so I spent a bit of my time going back through my notes and screenshots to write up what I’d been learning, especially around Talos, NFS storage, GitOps, and my new lab setup.

That said, it wasn’t all writing. I also carved out a few hours to do something I’d been meaning to test, adding proper observability to a Kubernetes cluster with Prometheus and Grafana.

Prometheus + Grafana (The Manual Way)

Portainer gives a nice high-level view of workloads, CPU/memory usage, logs, container status and more, but for deeper metrics collection and long-term visibility, you still need something like Prometheus + Grafana. Instead of using Helm (which I know is the standard way a lot of folks install these tools), I wanted to keep things simple and visible. So I deployed Prometheus and Grafana manually by uploading the YAML manifests straight into Portainer via the "Add from Code" option.

I appreciated how Portainer let me toggle between visual control (Forms) and raw YAML when I needed it. I didn’t need to open a terminal or run any CLI commands outside of Portainer, the whole deployment was driven through the UI. Once both tools were up and running, I could access them using the LoadBalancer IPs Portainer had assigned.

If you haven’t used these tools before, Prometheus handles scraping and storing metrics from across your cluster, like a data collector. Grafana sits on top to visualize that data, giving you dashboards and graphs that help make sense of what’s going on under the hood.

I also configured Grafana to pull data directly from the Portainer API which was cool, using the yesoreyeram-infinity-datasource plugin. It was surprisingly straightforward. You just install the plugin, select JSON as the data type, and point it to the Portainer API endpoint. 

One tip: set the X-API-Key header with your Portainer API key (you can generate one under your user profile in the Portainer UI). That part tripped me up at first, but once it was in place, Grafana started showing live Portainer environment stats.
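A quick way to sanity-check the key before touching Grafana is to hit the same endpoint with curl. Host, port, and key below are placeholders:

```shell
# The same request Grafana's Infinity datasource ends up making
curl -sk -H "X-API-Key: ptr_xxxxxxxxxxxx" \
  "https://portainer.lab.local:9443/api/endpoints" | jq '.'
```

If the key and header are right you get back a JSON array of your environments; a 401/403 here means Grafana will fail the same way.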

I didn’t set up anything fancy dashboard-wise, but it was impressive to see how easy it was to integrate with Portainer’s API.

It was one of those “ah-hah” weekends. I’d known that Portainer wasn’t trying to replace monitoring tools, but this experience made it clear how nicely it integrates with them. Portainer gives you the Kubernetes and Docker visibility, while tools like Prometheus and Grafana let you go deeper if you need real-time graphs, alerts, and history.

Weekend Reflections

I didn’t write code or deploy any apps this weekend. But I did reflect on how far I’ve come from zero Kubernetes experience to building a multi-node, multi-platform container lab with integrated GitOps, RBAC, persistent storage, and observability.

If nothing else, it reinforced that Portainer doesn’t try to be everything. But it does make everything else easier, especially for folks like me coming from a virtualization background who are more used to vCenter than YAML.

Day 29: Node Failover Testing and Observability, Portainer-Style

With the lab infrastructure finally stabilized, today I focused on two themes that every production cluster eventually faces, observability and resilience.

Enabling Kubernetes Metrics in Portainer

Before I jumped into dashboards and failover tests, I wanted to make sure Kubernetes resource metrics were flowing into Portainer.

To get this working:

  1. I deployed metrics-server manually using:
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml --namespace=kube-system
  2. After that, I verified everything was healthy using:
    kubectl top nodes
    kubectl top pods
  3. Then inside Portainer, I went to Cluster → Setup and toggled “Enable features using the metrics API” under Resources and Metrics.

Once that was set, Portainer began pulling in real-time CPU and memory usage metrics across all nodes and workloads. Super useful for seeing at a glance what your cluster is doing.

Prometheus, Grafana, and Portainer Metrics

I focused on giving my lab better visibility by deploying Prometheus and Grafana. To do that I deployed the full Prometheus and Grafana stack using a Helm chart from the prometheus-community repo (https://prometheus-community.github.io/helm-charts), all straight from Portainer’s Helm UI.

This single chart spun up:

  • Prometheus to scrape Kubernetes metrics
  • Grafana, already linked to Prometheus and pre-loaded with useful dashboards
  • Supporting components like Alertmanager and node exporters

It was super efficient using one chart to deploy it all.
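For anyone who prefers the CLI, the equivalent of what the Portainer Helm UI did is roughly this (release and namespace names are my own choices; the chart is the community kube-prometheus-stack):

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```

One release, and Prometheus, Grafana, Alertmanager, and the node exporters all come up together.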

Once Grafana was up, I installed the yesoreyeram-infinity-datasource plugin, which lets you pull in external data sources like REST APIs. I pointed it at the Portainer API and used a read-only API key (created under My Account → Access Tokens) for authentication and passed it in as the X-API-Key header.

From there I created custom Root/Rows/Columns mappings to pull in data like container counts, memory usage, and running environments from Portainer’s /api/endpoints endpoint. It gave me the flexibility to build dashboards that blend Portainer’s API data with native Kubernetes metrics all in one view.

A great help when building the dashboard was using Portainer’s Swagger API reference to explore what API endpoints were available. It’s a great resource when building dashboards or even scripting automation later. You can see example responses, which really helped when mapping Grafana fields.

Day 30: Final Reflections and What Comes Next

It’s kind of poetic that Day 30 of this 30-day journey just so happens to fall on my birthday. I couldn’t have scripted it better, wrapping up this deep dive into Portainer, Kubernetes, and containers while also turning a year older. A good reminder that learning never stops, no matter how many candles are on the cake!

Thirty days. Dozens of deployments, YAML mistakes, network headaches, and a whole lot of "a-ha" moments later, I’ve made it.

What started as a simple idea to document my onboarding at Portainer turned into something much bigger. This wasn’t just a checklist of tasks or a technical journal, it became a complete mindset shift. I went from thinking in virtual machines and hypervisors to living in the world of containers, Kubernetes, and automation. I have to say, I’ve genuinely enjoyed every step of it. Especially seeing firsthand how Portainer removes so many of the obstacles that would otherwise slow someone down coming from a virtual infrastructure background like mine.

There were moments where I felt totally in control, and others where I was completely in the weeds. But what kept me grounded was that each step added a layer of understanding, and every tool I added to the mix felt like it had a purpose.

What's Next

Now that I’m officially part of the Portainer team, I’m genuinely excited for what’s ahead.

Customer conversations

I’ve had the chance to join a few calls with teams navigating their early container journeys. Some moving away from Cloud Foundry, others starting fresh with Kubernetes, and a customer who had deployed Portainer CE but was now asking about Portainer BE because, as they put it, "we don’t know what we want, because we don’t know what we’re missing."

It’s clear that many organizations are only just beginning to confront the complexity of modern infrastructure. What really stood out to me is how Portainer helps simplify that journey, making RBAC more approachable, improving visibility, and introducing smart guardrails that empower teams without getting in their way.

I’m really looking forward to more of these conversations, especially with existing customers. I want to hear how they’re using Portainer in the real world, what problems it’s helping them solve, what business value they’re seeing, and where we can help take their implementation even further.

Helping people bridge the gap

I’ve been on the other side; architecting virtualization solutions with traditional enterprise tools was my world. Now, I’m hoping to help folks with that same background cross the bridge into containerization with clarity and confidence. Portainer is honestly the right tool for that job.

Real-world adoption

Whether it’s making GitOps actually usable, surfacing metrics visually through Grafana, or managing clusters across a mix of infrastructure, Portainer has proven itself to be the vCenter of containers. It gives you a single control plane to manage Docker, Swarm, Kubernetes, Podman, Edge and KaaS environments, all through one consistent UI. For someone coming from a VMware background, it feels like the natural evolution of infrastructure management, in that it’s familiar in concept, but built for the world of containers. It helps teams adopt modern tooling faster without having to become platform experts first.

Final Thoughts

If I had to sum up why Portainer matters to me, it’s this:

Portainer simplifies container management, without sacrificing visibility or control.

You don’t need to be a Kubernetes expert to get real work done. You don’t have to be a CLI warrior or wrestle with YAML just to ship an app. What I’ve found is that Portainer gives you just enough abstraction to move faster, without dumbing things down or hiding what matters. It keeps the power and visibility, but lowers the barrier.

There’s still plenty for me to learn. But now I’m learning with purpose, working alongside a sharp, innovative, passionate team, and getting to help real users unlock the value of containers in environments that look a lot like the ones I came from.

This blog may be wrapping up… but the real journey is just getting started.