Introducing CAPE!

We are excited to release CAPE v1.0.0! CAPE is a Kubernetes operator for

  • multi-cluster application deployment
  • multi-cluster disaster recovery

CAPE is built on Velero, the battle-tested Kubernetes backup and DR tool. This release focuses primarily on workload backup and restore features. To learn more about backup and DR in Kubernetes, see Blog. Go try it out and let us know what you think!

Where can you get in touch with the CAPE community?

What's included / not included?

| Feature | Single Cluster | Multiple Clusters | Target release |
| --- | --- | --- | --- |
| Kubernetes application manifest backups | Ready | Ready | v1.0.0 |
| OCP/OKD support | Ready | Ready | v1.0.0 |
| Same/different S3-based backend storage for application manifest backups | Ready | Ready | v1.0.0 |
| Kubernetes application data backups | Ready | Ready | v1.0.0 |
| RBAC / multi-tenancy (basic) | Ready | Ready | v1.0.0 |
| Combining different types of object storage when sharing backups | Ready | Not Ready | v1.1.0 |
| Alicloud OSS | Ready | Not Supported | v1.0.0 |

Kubernetes backups and recovery

There are a few ways to snapshot applications and application data in Kubernetes:

  • Using etcd backups
  • Exporting Kubernetes manifests manually
  • Storing all Kubernetes manifests in git before applying them to the cluster
  • Using tools like Velero to export and back up all objects in a namespace
  • Using CAPE for multi-cluster backup and recovery

Let's look at the pros and cons of each approach.

Etcd backup

  • Etcd backups capture the full set of Kubernetes manifests, so application-level backup is not possible; it's an all-or-nothing backup
  • Etcd backups also do not capture application data, so additional tooling is required
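For reference, an etcd backup is typically taken with `etcdctl snapshot save`. The endpoint and certificate paths below are illustrative (they match a typical kubeadm layout); adjust them for your cluster:

```shell
# Snapshot etcd; endpoints and cert paths are assumptions for a kubeadm cluster
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db

# Inspect the snapshot before relying on it
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```

Note the snapshot contains every object in the cluster, which is exactly why restoring a single application from it is impractical.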

Exporting kubernetes manifests manually

  • While this approach addresses the application-level backup problem, it's manual and requires additional tooling to automate
  • Application data backups are still not possible
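A manual export usually boils down to dumping objects with kubectl. The namespace name below is hypothetical:

```shell
# Export the core objects in a namespace to a single YAML file
# ("myapp" is a hypothetical namespace)
kubectl get deploy,sts,svc,cm,secret,ing,pvc -n myapp -o yaml > myapp-manifests.yaml

# Restoring is just re-applying the exported manifests
kubectl apply -f myapp-manifests.yaml
```

The export also carries cluster-specific fields such as `status` and `resourceVersion`, which is part of why this approach needs extra tooling to be reliable.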

Storing all Kubernetes manifests in git before applying

  • This is a preferred approach for repeatable deployments and for backing up manifests, but it has some edge cases:
  • Application data backups are not possible
  • If a deployment fails just after you store the generated manifests in git, you need an additional mechanism to capture, revert, or mark the manifests as an unstable release. This leads to special logic for handling the different failure scenarios.
  • Additional tooling or a pipeline is required to generate the manifests and store them in git
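A minimal sketch of such a pipeline, assuming a Helm chart and repo layout that are purely hypothetical, also shows where the failed-deployment edge case bites:

```shell
# Render manifests, version them in git, then apply
# (chart path, repo layout, and app name are hypothetical)
helm template myapp ./charts/myapp > manifests/myapp.yaml
git add manifests/myapp.yaml
git commit -m "myapp release $(date +%Y-%m-%d)"
kubectl apply -f manifests/myapp.yaml
# If the apply fails here, the commit above already exists:
# extra logic is needed to revert it or mark the release unstable
```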


  • In all the above cases, the mapping between application data and Kubernetes manifests needs to be tracked separately
  • Some attributes, like the image SHA (the true representation of immutability for a container image), are in most cases only available after the application is deployed. None of the methods above capture such attributes in a backup

Using tools like Velero to export and back up all objects in a namespace

  • By far my most preferred way to back up and restore Kubernetes workloads
  • Velero can back up all namespaces or only specific ones
  • You can also filter applications within a namespace or across namespaces for backup
  • Velero can back up the exact state of workloads (e.g. the container image SHA)
  • Automation tools can simply call the Velero API to trigger partial or full backups (along with application data)
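The filtering described above maps directly onto the Velero CLI; the namespace and label values here are hypothetical:

```shell
# Back up everything in one namespace ("myapp" is a hypothetical namespace)
velero backup create myapp-backup --include-namespaces myapp

# Or filter by label across namespaces
velero backup create web-tier-backup --selector app=web

# Restore from a previous backup
velero restore create --from-backup myapp-backup
```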

What does CAPE do then?

  • While Velero works very well for a single cluster, multiple clusters are not its forte. CAPE fills that gap and makes Velero work across multiple clusters
  • CAPE also provides a unified console with a rich UX to make operations easier, in a secure way