One of the first questions we get when introducing KubeSolo is, “Why drag Kubernetes all the way down to the device edge? Isn’t Docker or Podman good enough down there?” Fair question. And the answer is… yes and no.
Yes, if you’re just running a simple container, Docker or Podman can handle that. You throw in a docker run, maybe a docker-compose, and off you go. Job done.
But no, if you’re trying to deploy anything that’s more than a standalone service. Especially not if you’re working with commercial software vendors. Because increasingly, the way these vendors are shipping their applications isn’t just as containers. It’s as Operators. And Operators don’t run on Docker. They run on Kubernetes.
This shift is exactly why we built KubeSolo. The "device edge" is no longer about a couple of containers running discrete services. It’s becoming a target for full application stacks. And those stacks are starting to show up as Kubernetes-native Operators.
So if you’re managing edge infrastructure, and you want to deploy the kinds of modern apps vendors are actually offering, you’re going to need more than just Docker. You’re going to need Kubernetes.
What’s an Operator, and Why Should You Care?
Operators are controllers that handle the full lifecycle of an application on Kubernetes. They don’t just deploy the app. They configure it, scale it, monitor it, and recover it when things go wrong.
And they do this based on Custom Resource Definitions, or CRDs. These are extensions to the Kubernetes API that let you define your own resource types. So instead of writing a bunch of YAML for Deployments and Services and ConfigMaps, you can just say kind: HiveMQCluster or kind: EdgeCache, and the Operator takes care of the rest.
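To make that concrete, here is what a custom resource for a hypothetical EdgeCache CRD might look like. The API group, kind, and every field below are invented for illustration; each vendor defines its own schema:

```yaml
apiVersion: cache.example.com/v1alpha1   # hypothetical API group
kind: EdgeCache
metadata:
  name: store-42-cache
spec:
  replicas: 1          # single node at the device edge
  memoryLimit: 256Mi
  persistence: false
```

You apply this one resource, and the Operator watching EdgeCache objects translates it into the Deployments, Services, and ConfigMaps underneath.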
Think of Operators as the new way vendors are packaging their operational expertise. The stuff that used to be in runbooks or bash scripts? It’s now embedded inside a Kubernetes controller that watches your CRDs and keeps the application running properly.
And because Operators follow the same patterns across all environments, they’re portable. You can use the same configuration on a dev cluster, a production cluster, or a KubeSolo node running on the plant floor.
Why This Matters at the Edge
At the device/far edge, things break. Networks drop. Devices restart. Bandwidth is limited. You can’t afford to babysit every node.
Operators are built for this. They constantly reconcile the actual state of the application with the desired state you declared in the CR. If something changes unexpectedly (a pod dies, a config is lost, a node reboots) the Operator fixes it.
That self-healing loop is why Operators are so powerful. And why they make so much sense at the edge, where automation isn't just nice to have, it's the only realistic option.
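Conceptually, that reconcile loop is just a diff between declared and observed state. Here is a toy sketch in Go; the types and logic are invented for illustration, and real Operators are built on machinery like controller-runtime, which handles watches, work queues, and retries:

```go
package main

import "fmt"

// Hypothetical spec/status for a broker cluster resource.
// Spec is what the user declared; Status is what actually exists.
type ClusterSpec struct {
	Replicas int
}

type ClusterStatus struct {
	ReadyReplicas int
}

// reconcile compares desired state with observed state and returns
// the action an Operator would take to converge them.
func reconcile(spec ClusterSpec, status ClusterStatus) string {
	switch {
	case status.ReadyReplicas < spec.Replicas:
		return fmt.Sprintf("scale up: create %d pod(s)", spec.Replicas-status.ReadyReplicas)
	case status.ReadyReplicas > spec.Replicas:
		return fmt.Sprintf("scale down: delete %d pod(s)", status.ReadyReplicas-spec.Replicas)
	default:
		return "in sync: nothing to do"
	}
}

func main() {
	desired := ClusterSpec{Replicas: 3}
	// A pod died on a flaky edge node: the loop notices and heals the drift.
	fmt.Println(reconcile(desired, ClusterStatus{ReadyReplicas: 2}))
	// Nothing drifted: the loop is a no-op.
	fmt.Println(reconcile(desired, ClusterStatus{ReadyReplicas: 3}))
}
```

The key design point is that reconciliation is level-triggered: the loop doesn’t care why the state drifted (crash, reboot, someone fiddling on the box); it only compares where things are against where they should be, and acts.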
Real-World Examples: This Isn’t Just Theory
Let’s take a look at some real industrial and IoT software vendors who are already shipping their software as Operators.
HiveMQ
HiveMQ is a high-performance MQTT platform used for IoT messaging at scale. Their Kubernetes Operator handles everything: provisioning brokers, managing TLS, configuring clustering, and scaling out horizontally. You install a CRD, define your HiveMQCluster, and the Operator builds the entire stack.
EMQX
Another MQTT broker designed for industrial workloads. EMQX’s Operator helps deploy clusters that can handle millions of connections, with automated scaling and traffic shaping. You don’t have to write complex scripts to stand it up. You just apply a custom resource, and the Operator does the heavy lifting.
Cumulocity IoT
Cumulocity is an industrial IoT platform from Software AG. Their Kubernetes-based deployment is powered by an Operator that installs the full edge gateway stack: data collectors, processing pipelines, cloud connectors, and more. The CRDs let you declaratively configure each node based on its role in the wider deployment.
Dapr
Dapr isn’t an industrial product per se, but it’s widely used in distributed IoT systems. It brings service-to-service communication, pub/sub, state management, and observability into a lightweight runtime that runs as sidecars. The Dapr Operator installs and manages these sidecars, keeping everything aligned.
These are not niche tools. They’re used across manufacturing, retail, logistics, smart cities, and energy. And they all ship with Operators.
If your edge infrastructure doesn’t support Kubernetes, you can’t deploy them. Not properly. Not with automation. Not at scale.
KubeSolo Is Built for This Reality
We didn’t build KubeSolo because we thought running Kubernetes on a Raspberry Pi was a fun science experiment. We built it because the industry is moving towards Kubernetes-native packaging, even at the edge.
KubeSolo strips Kubernetes down to a single node: no clustering logic, minimal memory footprint, no unnecessary disk I/O. It’s designed to run on embedded and industrial hardware. But it still supports CRDs. It still supports Operators. It still talks to your centralized multi-cluster management platform (e.g. Portainer).
That means you can deploy the same apps you’d run in the cloud, using the same tools and the same declarative configs, on devices at the edge. You don’t need to invent a parallel deployment mechanism for remote environments. You just use Kubernetes.
What This Changes for Platform Teams
For the teams responsible for deploying and managing software across thousands of locations, this matters a lot.
It means:
- You can standardize your deployment model across cloud and edge
- You can use GitOps and CI/CD pipelines to manage all environments
- You can get visibility into application health with .status fields
- You can integrate with monitoring and logging platforms uniformly
- You can enable self-healing and automated upgrades without writing bespoke tooling
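On the .status point: Operators typically write observed health back into the resource itself, so a single query against the API tells you how the application is doing. A custom resource might report something like the following (field names invented for illustration; each Operator defines its own status schema):

```yaml
status:
  phase: Running
  readyReplicas: 1
  conditions:
    - type: Ready
      status: "True"
      lastTransitionTime: "2025-01-01T00:00:00Z"
```

That same stanza is what monitoring tooling and kubectl wait can key off, whether the resource lives in the cloud or on a device in the field.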
Most importantly, you can actually support the software that modern vendors are building, because you’ve given your infrastructure the capability to run their Operators.
The reality is, containers are no longer the final unit of delivery. The industry has moved on to APIs and automation. If you want to run modern software, you need to give it what it expects.
And these days, it expects Kubernetes.
