Portainer's support for Docker Swarm lets you manage your Swarm deployment using the Portainer Agent deployed on each node. The installation instructions we provide for this work well in most cases, but they do make the assumption that all your Swarm nodes are configured in the same way. However, this might not always be the case.
First, let's look at our standard deployment on Linux.
```yaml
version: '3.2'

services:
  agent:
    image: portainer/agent:2.19.5
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

  portainer:
    image: portainer/portainer-ee:2.19.5
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9443:9443"
      - "9000:9000"
      - "8000:8000"
    volumes:
      - portainer_data:/data
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true

volumes:
  portainer_data:
```
This YAML file installs Portainer Server on a manager node in your Swarm, and deploys the Portainer Agent globally (to all nodes in the Swarm).
As you can see, we assume that all of the nodes in the Swarm are configured the same way. But what if some of your nodes are configured differently from the others?
To provide for this, we need to deploy the Portainer Agent as separate services, each with the required changes to the configuration. We can then use the placement constraints system to define which nodes in the Swarm get which service.
## A mixed Linux and Windows Swarm
For example, Docker installs on Windows use different path structures than on Linux to point to the Docker socket and to the volumes path, so we can't just use the same configuration for both Linux and Windows nodes. This is why we have the placement constraint in the above example restricting the deployment to nodes where the OS is Linux. To accommodate a Swarm that has a mix of Linux and Windows nodes, we'd need different configurations for each. So let's do that.
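If you're not sure which OS each of your nodes reports, you can ask the Swarm directly from a manager node. This is a quick sketch; `<node-name>` is a placeholder for one of your own node hostnames:

```shell
# List the nodes in the Swarm along with their hostnames and roles
docker node ls

# Show the OS a given node reports to the Swarm; this is the value that
# the node.platform.os placement constraint matches against
docker node inspect <node-name> --format '{{ .Description.Platform.OS }}'
```

These commands need a live Swarm, so run them on (or against) one of your manager nodes.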
We can reuse the existing `agent` service definition for the Linux nodes, since it's good to go. We will make a couple of changes, however, to make life easier for us in the future. First, we'll rename it from `agent` to `agent_linux`, as that's a more accurate name.

```yaml
services:
  agent_linux:
```
Next, we'll add an environment variable that defines the cluster address, which is used for internal communication between the agents. The environment variable to use here is `AGENT_CLUSTER_ADDR`, and we'll set it to `tasks.agent`.

```yaml
agent_linux:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
```
The `volumes` definitions stay the same for Linux, so we can copy them straight in.

```yaml
agent_linux:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker/volumes:/var/lib/docker/volumes
```
In the `networks` section we want to add an alias of `agent` to the service. This, combined with the `AGENT_CLUSTER_ADDR` we set above, ensures that the Linux and Windows agents can communicate with each other and with the Portainer Server. To add the alias, we need to tweak our networks section a bit:

```yaml
agent_linux:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker/volumes:/var/lib/docker/volumes
  networks:
    agent_network:
      aliases:
        - agent
```
Everything else we can keep the same. The full `agent_linux` definition should now look like this:

```yaml
agent_linux:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker/volumes:/var/lib/docker/volumes
  networks:
    agent_network:
      aliases:
        - agent
  deploy:
    mode: global
    placement:
      constraints: [node.platform.os == linux]
```
Next we create our Windows variant. Note that this should be defined within the `services:` block but outside of the `agent_linux:` block, as this is a separate service (in the same way that the Portainer Server container is a separate service from the Agent). We'll call this one `agent_windows`.

```yaml
services:
  agent_linux:
    ...
  agent_windows:
    image: portainer/agent:2.19.5
```
We'll set the same environment variable as we did for the Linux variant so that they can communicate with each other successfully.
```yaml
agent_windows:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
```
We now need to adjust the volume paths for Windows. Since the Docker engine on Windows listens on a named pipe (`npipe`) rather than a Unix socket, we'll use the long form syntax for the volume definitions here.

```yaml
agent_windows:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
  volumes:
    - type: npipe
      source: \\.\pipe\docker_engine
      target: \\.\pipe\docker_engine
    - type: bind
      source: C:\ProgramData\docker\volumes
      target: C:\ProgramData\docker\volumes
```
The Portainer Agent knows what to do with the Windows paths once they are available.
The `networks` section remains the same as in our Linux variant, as we want each service to have the same alias.

```yaml
agent_windows:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
  volumes:
    - type: npipe
      source: \\.\pipe\docker_engine
      target: \\.\pipe\docker_engine
    - type: bind
      source: C:\ProgramData\docker\volumes
      target: C:\ProgramData\docker\volumes
  networks:
    agent_network:
      aliases:
        - agent
```
The final change we need to make is in the `deploy` section, and this is where we define our constraint. For Windows nodes, we can simply change the `node.platform.os` value to `windows`.

```yaml
agent_windows:
  image: portainer/agent:2.19.5
  environment:
    - AGENT_CLUSTER_ADDR=tasks.agent
  volumes:
    - type: npipe
      source: \\.\pipe\docker_engine
      target: \\.\pipe\docker_engine
    - type: bind
      source: C:\ProgramData\docker\volumes
      target: C:\ProgramData\docker\volumes
  networks:
    agent_network:
      aliases:
        - agent
  deploy:
    mode: global
    placement:
      constraints: [node.platform.os == windows]
```
That's all we need to change. Your full YAML file should now look like this:
```yaml
version: '3.2'

services:
  agent_linux:
    image: portainer/agent:2.19.5
    environment:
      - AGENT_CLUSTER_ADDR=tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      agent_network:
        aliases:
          - agent
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

  agent_windows:
    image: portainer/agent:2.19.5
    environment:
      - AGENT_CLUSTER_ADDR=tasks.agent
    volumes:
      - type: npipe
        source: \\.\pipe\docker_engine
        target: \\.\pipe\docker_engine
      - type: bind
        source: C:\ProgramData\docker\volumes
        target: C:\ProgramData\docker\volumes
    networks:
      agent_network:
        aliases:
          - agent
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == windows]

  portainer:
    image: portainer/portainer-ee:2.19.5
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9443:9443"
      - "9000:9000"
      - "8000:8000"
    volumes:
      - portainer_data:/data
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true

volumes:
  portainer_data:
```
You can now deploy this on your mixed Linux and Windows Swarm and be up and running with Portainer.
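If you haven't deployed a stack before, a compose file like this is deployed with `docker stack deploy` from a manager node. The file name and stack name below are examples, not requirements; substitute whatever you've named yours:

```shell
# Deploy (or update) the stack from a manager node.
# "portainer-agent-stack.yml" and the stack name "portainer" are example names.
docker stack deploy --compose-file portainer-agent-stack.yml portainer

# Check that each service has the expected number of replicas
docker service ls
```

Because the agent services use `mode: global`, `docker service ls` should show one replica per matching node for each of them.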
Note: Depending on your setup, you may also want to add an OS constraint to the `portainer` service to ensure it lands on a Linux node. For example:

```yaml
portainer:
  image: portainer/portainer-ee:2.19.5
  ...
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints:
        - node.role == manager
        - node.platform.os == linux
```
## Extending to custom labels
You're not restricted to `node.platform.os` for placement constraints. As with the `portainer` service, you can use `node.role` to restrict deployment to manager or worker nodes. But you can also use custom labels to define your constraints.
A user in our community Slack channel recently reached out as they have an interesting setup. Some of their Swarm nodes have NVMe drives, and on those nodes they redirected Docker's `data-root` (normally `/var/lib/docker`) to the NVMe drive. This meant a different volume path (instead of `/var/lib/docker/volumes`) on those nodes as compared to the other, "default" nodes.
We can see how we'd solve this by looking at the Linux and Windows examples above, as they use different volume paths in each configuration. For example, you could create a service called `agent_default` for the "default" nodes and one called `agent_nvme` for the nodes with NVMe drives (we'll use `/nvme/docker/volumes` for the NVMe volume path):

```yaml
services:
  agent_default:
    image: portainer/agent:2.19.5
    environment:
      - AGENT_CLUSTER_ADDR=tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes

  agent_nvme:
    image: portainer/agent:2.19.5
    environment:
      - AGENT_CLUSTER_ADDR=tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /nvme/docker/volumes:/var/lib/docker/volumes
```
Note that we need to keep the right side of the volume mount as `/var/lib/docker/volumes`, as that's the internal path the Portainer Agent expects.
But what about placement constraints? We can't use `node.platform.os` for this, so let's create some custom labels. We'll apply a label to each node to specify which disk type it has, and then reference that label in our placement constraints.
Let's assume a three-node cluster, where `node01` and `node02` have the NVMe configuration and `node03` has the "default" configuration. We can create a label on each node named `disktype` and set it to `nvme` or `default` as appropriate. To do this, run the following commands on a manager node in the Swarm:

```shell
docker node update --label-add disktype=nvme node01
docker node update --label-add disktype=nvme node02
docker node update --label-add disktype=default node03
```
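To confirm the labels landed where you expect, you can inspect each node's `Spec.Labels`. A quick sketch, using the example node names from above:

```shell
# Print each node's disktype label (empty output means no label is set)
docker node inspect node01 --format '{{ index .Spec.Labels "disktype" }}'
docker node inspect node02 --format '{{ index .Spec.Labels "disktype" }}'
docker node inspect node03 --format '{{ index .Spec.Labels "disktype" }}'
```

As with the label commands themselves, these need to be run against a manager node in the Swarm.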
These commands set the `disktype` label on each node, with `node01` and `node02` set to `nvme` and `node03` set to `default`.
Now let's use those labels in placement constraints for our two Agent service configurations above.
```yaml
services:
  agent_default:
    image: portainer/agent:2.19.5
    environment:
      - AGENT_CLUSTER_ADDR=tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    deploy:
      mode: global
      placement:
        constraints: [node.labels.disktype == default]

  agent_nvme:
    image: portainer/agent:2.19.5
    environment:
      - AGENT_CLUSTER_ADDR=tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /nvme/docker/volumes:/var/lib/docker/volumes
    deploy:
      mode: global
      placement:
        constraints: [node.labels.disktype == nvme]
```
We're accessing the `disktype` label we just set using `node.labels.disktype`, deploying the `agent_nvme` service only to the nodes labeled `nvme` (`node01` and `node02`) and the `agent_default` service only to the nodes labeled `default` (`node03`).
Through the use of labels and placement constraints you can see how you can deploy different Portainer Agent configurations across a custom Swarm environment depending on your needs.