This guide shows how to automatically restore a Portainer instance on a Kubernetes cluster using:
- A monitoring script (portainer_hb.sh)
- The Portainer API
- An S3-compatible backup (for example, MinIO)
The script:
- Monitors the primary Portainer instance
- Deploys a new instance if the primary becomes unavailable
- Restores from an S3 backup
- Allows you to remap DNS to maintain continuity
Prerequisites
Before starting, ensure the following:
- Portainer is running on Kubernetes
- Portainer backups are configured to an S3-compatible bucket
- The Portainer instance is accessible via a FQDN
- kubectl is installed and configured for your cluster
- jq is installed (used to parse API responses)
After the restore process completes, update your DNS record to point your original FQDN to the new Portainer instance IP address. This is critical if you are using Edge Agents.
Step 1: Monitor the primary Portainer instance
The script continuously checks whether the primary Portainer instance is reachable by sending an unauthenticated request to the /api/status endpoint. If the response contains "Unauthorized", the server is up; if not, failover begins.
while true
do
  portainer_up=$(curl --silent --insecure -X GET https://your-portainer-fqdn/api/status | jq -r '.details')
  if [ "$portainer_up" = "Unauthorized" ]; then
    echo -ne 'Portainer is up\r'
  else
    break
  fi
  sleep 5
done
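The check keys off the JSON body that an unauthenticated /api/status call returns. The exact response text can vary between Portainer versions, so the sample body below is an assumption; the parsing step itself works like this:

```shell
# Assumed shape of an unauthenticated /api/status response body.
response='{"message":"A valid authorisation token is missing","details":"Unauthorized"}'

# Extract the "details" field, exactly as the monitoring loop does.
portainer_up=$(echo "$response" | jq -r '.details')
echo "$portainer_up"
```

If your instance returns a different body for unauthenticated requests, adjust the comparison string in the loop accordingly.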
Step 2: Deploy Portainer on the secondary Kubernetes cluster
When the primary instance is unreachable, deploy Portainer on the secondary cluster.
kubectl apply -n portainer -f portainer.yaml
echo "Deploying Portainer server"
This assumes:
- A namespace named portainer exists
- You have a valid Portainer deployment manifest
Refer to our documentation on installing Portainer on your Kubernetes environment for more details.
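If you don't already have a manifest, the skeleton below illustrates the general shape of one. It is a sketch only (the image tag, ports, and service type are assumptions), and it omits persistent storage; a real deployment should mount a PersistentVolumeClaim at /data and use the manifest or Helm chart from Portainer's installation documentation.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer
  namespace: portainer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: portainer
  template:
    metadata:
      labels:
        app: portainer
    spec:
      containers:
        - name: portainer
          image: portainer/portainer-ce:latest   # assumed image; pin a version in practice
          ports:
            - containerPort: 9443   # HTTPS UI/API
---
apiVersion: v1
kind: Service
metadata:
  name: portainer
  namespace: portainer
spec:
  type: LoadBalancer
  selector:
    app: portainer
  ports:
    - name: https
      port: 9443
      targetPort: 9443
```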
Step 3: Wait until the new Portainer pod is running
Before restoring the backup, confirm that the new instance is fully operational.
while true
do
  portainer_running=$(kubectl get po -n portainer | tail -1 | awk '{print $3}')
  if [ "$portainer_running" != "Running" ]; then
    echo -ne 'Portainer is not running yet\r'
  else
    break
  fi
  sleep 1
done
This loop ensures:
- The pod status is Running
- The restore only begins once the server is ready
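Note that both polling loops in this guide run forever if the target never comes up. A bounded variant keeps the failover script from hanging; the wait_for helper below is illustrative (the name and timings are not part of the Portainer tooling):

```shell
#!/bin/bash

# Retry a command until it succeeds, or give up after a timeout.
#   $1: command to evaluate
#   $2: timeout in seconds
#   $3: polling interval in seconds
wait_for() {
  local deadline=$(( $(date +%s) + $2 ))
  until eval "$1"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "Timed out waiting for: $1" >&2
      return 1
    fi
    sleep "$3"
  done
}

# Example: wait up to 5 minutes for the pod to report Running.
# wait_for '[ "$(kubectl get po -n portainer | tail -1 | awk '\''{print $3}'\'')" = "Running" ]' 300 5
```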
Step 4: Restore Portainer from an S3 backup
Once the new instance is running, use the Portainer API to restore from your S3 backup.
Set the following variables in your script:
- ACCESSKEYID: S3 access key
- SECRETKEY: S3 secret key
- BUCKETNAME: Bucket storing the backup
- FILENAME: Backup file name
- FILEPASSWORD: Backup password (if configured)
- REGION: S3 region
- SERVER: S3 hostname or IP address
- PORT: S3 service port
For example:
ACCESSKEYID="portainer"
BUCKETNAME="portainerbkp"
FILENAME="portainer-backup_2024-02-27_00-55-00.tar.gz"
FILEPASSWORD="restore1234"
REGION="us-east-1"
SERVER="s3server.example.com"
PORT="9001"
SECRETKEY="changeme"
Restore call
curl -X POST \
  --insecure \
  --header "Content-Type: application/json" \
  --url https://new-portainer-instance/api/restore \
  --data "{
    \"accessKeyID\": \"$ACCESSKEYID\",
    \"bucketName\": \"$BUCKETNAME\",
    \"filename\": \"$FILENAME\",
    \"password\": \"$FILEPASSWORD\",
    \"region\": \"$REGION\",
    \"s3CompatibleHost\": \"$SERVER:$PORT\",
    \"secretAccessKey\": \"$SECRETKEY\"
}"
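Hand-escaping JSON inside a double-quoted shell string is fragile (a password containing a quote would break the request). Since jq is already a prerequisite, the same payload can be built safely. This is an optional refinement, not part of the original script:

```shell
ACCESSKEYID="portainer"
BUCKETNAME="portainerbkp"
FILENAME="portainer-backup_2024-02-27_00-55-00.tar.gz"
FILEPASSWORD="restore1234"
REGION="us-east-1"
SERVER="s3server.example.com"
PORT="9001"
SECRETKEY="changeme"

# Let jq handle quoting and escaping of every value.
payload=$(jq -n \
  --arg accessKeyID "$ACCESSKEYID" \
  --arg bucketName "$BUCKETNAME" \
  --arg filename "$FILENAME" \
  --arg password "$FILEPASSWORD" \
  --arg region "$REGION" \
  --arg s3CompatibleHost "$SERVER:$PORT" \
  --arg secretAccessKey "$SECRETKEY" \
  '{accessKeyID: $accessKeyID, bucketName: $bucketName, filename: $filename,
    password: $password, region: $region, s3CompatibleHost: $s3CompatibleHost,
    secretAccessKey: $secretAccessKey}')

# Then pass it to curl with: --data "$payload"
```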
After the restore completes:
- Update your DNS record to point the original FQDN to the new IP address
- Verify endpoints reconnect
- Confirm registries and authentication settings are intact
Check out our documentation for details on backing up to S3.
Complete script example
#!/bin/bash

# 1. Monitor primary instance
while true
do
  portainer_up=$(curl --silent --insecure -X GET https://your-portainer-fqdn/api/status | jq -r '.details')
  if [ "$portainer_up" = "Unauthorized" ]; then
    echo -ne 'Portainer is up\r'
  else
    break
  fi
  sleep 5
done

# 2. Deploy secondary instance
kubectl apply -n portainer -f portainer.yaml
echo "Deploying Portainer server"

# 3. Wait for pod to be running
while true
do
  portainer_running=$(kubectl get po -n portainer | tail -1 | awk '{print $3}')
  if [ "$portainer_running" != "Running" ]; then
    echo -ne 'Portainer is not running yet\r'
  else
    break
  fi
  sleep 1
done
sleep 5

# 4. Restore from S3
ACCESSKEYID="portainer"
BUCKETNAME="portainerbkp"
FILENAME="portainer-backup_2024-02-27_00-55-00.tar.gz"
FILEPASSWORD="restore1234"
REGION="us-east-1"
SERVER="s3server.example.com"
PORT="9001"
SECRETKEY="changeme"

curl -X POST \
  --insecure \
  --header "Content-Type: application/json" \
  --url https://new-portainer-instance/api/restore \
  --data "{
    \"accessKeyID\": \"$ACCESSKEYID\",
    \"bucketName\": \"$BUCKETNAME\",
    \"filename\": \"$FILENAME\",
    \"password\": \"$FILEPASSWORD\",
    \"region\": \"$REGION\",
    \"s3CompatibleHost\": \"$SERVER:$PORT\",
    \"secretAccessKey\": \"$SECRETKEY\"
}"
echo "Portainer restored"
This approach provides:
- Automated failover detection
- Automated deployment
- Automated restore
- Minimal operational disruption
All existing configuration is preserved, including:
- Endpoints
- Registries
- Authentication
- Edge Agents
The result is continuity of service across Kubernetes clusters using the Portainer API and S3 backups.
Follow along with a video
The video below demonstrates the full automated restore workflow using the script.
In this example environment:
- The primary Portainer server runs on 192.168.10.171
- The secondary (failover) Portainer server runs on 192.168.10.176
- The S3-compatible backup server (MinIO) runs on 192.168.10.1
Try Portainer with 3 Nodes Free
If you're ready to get started with Portainer Business, 3 nodes free is a great place to begin. If you'd prefer to get in touch with us, we'd love to hear from you!

