Organizations frequently use snapshots for Cassandra backup and recovery. Cassandra database snapshots let enterprises go back in time to recover from accidental data deletion or from data corruption caused by an application. However, there are a few significant limitations when it comes to using snapshots for Cassandra backup:
- Cassandra snapshots lead to storage amplification due to compaction
- Cassandra snapshots need a scheduler to work effectively
Storage Amplification Due to Compaction
Snapshots in Cassandra rely on the hard link feature of the underlying file system (ext4, XFS). A hard link is simply an additional directory entry that associates a name with an existing file, so creating one increases that file's reference count.
When a snapshot of a table is taken, Cassandra hard-links every SSTable file in the table's storage directory, and the file system increases the reference count on each of those files. This ensures that if the table is dropped and the user tries to clean up the storage directory, the actual data files are not deleted, because the snapshot still holds a reference to each of them. However, this same mechanism leads to storage amplification because of another Cassandra process that runs in the background: compaction.
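To make the mechanics concrete, the following Python sketch imitates what a snapshot does at the file-system level: it hard-links a dummy SSTable file into a snapshot directory and shows that removing the live copy does not free the space, because the snapshot link still references the data. The file names are invented for illustration; this is not Cassandra's actual snapshot code.

```python
import os
import tempfile

# Minimal sketch of the hard-link behavior that snapshots rely on.
# File names are invented for illustration; this is not Cassandra's code.
data_dir = tempfile.mkdtemp()
sstable = os.path.join(data_dir, "nb-1-big-Data.db")        # stand-in for a live SSTable
snapshot_dir = os.path.join(data_dir, "snapshots", "backup-2024-01-01")
os.makedirs(snapshot_dir)

with open(sstable, "wb") as f:
    f.write(b"x" * 1024)                                     # dummy SSTable contents

# Taking a "snapshot": hard-link the SSTable into the snapshot directory.
snap_link = os.path.join(snapshot_dir, os.path.basename(sstable))
os.link(sstable, snap_link)
print(os.stat(sstable).st_nlink)    # 2: live file and snapshot link share one inode

# Compaction (or a table drop) later removes the live file name...
os.remove(sstable)
print(os.stat(snap_link).st_nlink)  # 1: the data is still on disk, pinned by the snapshot
print(os.path.getsize(snap_link))   # 1024: none of the space was reclaimed
```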
During the compaction process, the SSTable files of a particular generation are merged into a new SSTable on which cleanup has been performed: tombstones have been removed, deleted columns have been purged, and the data has been rewritten in sorted order.
After compaction completes, the SSTables of the previous generation are typically deleted. Once a snapshot is taken, however, those SSTables cannot be fully reclaimed because the snapshot holds an additional reference (hard link) to each of them. There are now two sets of files: the set the snapshot references and the new set created by compaction. This results in storage amplification. For example, if you take a snapshot of a table whose storage directory is 1TB in size, that snapshot can eventually pin up to an additional 1TB of space as compaction rewrites the data.
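One way to see this amplification on a node is to measure how much snapshot data is no longer shared with live SSTables. The sketch below, which assumes the default data directory layout of &lt;data_dir&gt;/&lt;keyspace&gt;/&lt;table&gt;/snapshots/&lt;tag&gt;/, sums the sizes of snapshot files whose link count has dropped to 1, meaning compaction has already deleted the live file they were hard-linked to. Recent Cassandra versions report a similar "true size" figure via nodetool listsnapshots.

```python
import os

def snapshot_overhead_bytes(data_dir: str) -> int:
    """Rough estimate of the extra space pinned by snapshots: sum the sizes of
    snapshot files whose link count has fallen to 1, i.e. files whose live
    SSTable counterpart compaction has already deleted. Assumes the default
    <data_dir>/<keyspace>/<table>/snapshots/<tag>/ layout."""
    overhead = 0
    for root, _dirs, files in os.walk(data_dir):
        if f"{os.sep}snapshots{os.sep}" not in root + os.sep:
            continue                                    # only look inside snapshot dirs
        for name in files:
            st = os.stat(os.path.join(root, name))
            if st.st_nlink == 1:                        # no longer shared with a live file
                overhead += st.st_size
    return overhead

if __name__ == "__main__":
    gib = snapshot_overhead_bytes("/var/lib/cassandra/data") / 2**30
    print(f"{gib:.1f} GiB pinned exclusively by snapshots")
```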
Organizations often report that they cannot take more than two to three snapshots of their Cassandra environments before they run out of storage space on their production Cassandra nodes. Having just two to three point-in-time copies falls far short of most enterprise data backup retention requirements.
Scheduler Needed to Work Effectively
The frequency of taking and retaining snapshots varies by business requirements, and you may need to manage specific keyspaces and tables differently based on their relative value. Say you have ten tables in your Cassandra environment: three of them may require a higher degree of protection and hence hourly snapshots, while the other seven may need a snapshot only once a day. You need some form of automated scheduler with a policy engine that sits on top of your snapshot infrastructure.
Unfortunately, Cassandra does not provide any scheduling capabilities for managing snapshots. As a stopgap measure, Cassandra DBAs typically end up writing scripts that create snapshots at suitable user-defined intervals and delete them at the end of the desired retention period. Because nodetool operates only on the local node, such scripts also have to be deployed and kept consistent across every node in the cluster, and scripting and managing that level of complexity in an enterprise Cassandra deployment is extremely difficult.
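As a rough illustration of the kind of stopgap scheduler DBAs end up writing, the sketch below loops over a hypothetical per-table policy map and shells out to nodetool to create and expire snapshots. The table names, intervals, and policy format are invented for the example, and nodetool flags can vary slightly between Cassandra versions; a real script would also need error handling, persistent state, and deployment on every node.

```python
import subprocess
import time
from datetime import datetime, timedelta

# Hypothetical per-table snapshot policies (names and intervals invented for
# illustration). Each entry: how often to snapshot and how long to retain.
POLICIES = {
    "shop.orders":  {"interval": timedelta(hours=1), "retention": timedelta(days=2)},
    "shop.catalog": {"interval": timedelta(days=1),  "retention": timedelta(days=7)},
}

def take_snapshot(keyspace_table: str, tag: str) -> None:
    ks, table = keyspace_table.split(".")
    # Flags vary slightly between Cassandra versions; --table is the newer form.
    subprocess.run(["nodetool", "snapshot", "-t", tag, "--table", table, ks], check=True)

def drop_snapshot(tag: str) -> None:
    subprocess.run(["nodetool", "clearsnapshot", "-t", tag], check=True)

def run_once(now: datetime, last_run: dict, active: list) -> None:
    # Create snapshots that are due under each table's policy.
    for kt, policy in POLICIES.items():
        if now - last_run.get(kt, datetime.min) >= policy["interval"]:
            tag = f"{kt.replace('.', '_')}-{now:%Y%m%d%H%M}"
            take_snapshot(kt, tag)
            active.append((kt, tag, now))
            last_run[kt] = now
    # Expire snapshots that have outlived their retention period.
    for kt, tag, created in list(active):
        if now - created >= POLICIES[kt]["retention"]:
            drop_snapshot(tag)
            active.remove((kt, tag, created))

if __name__ == "__main__":
    last_run, active = {}, []
    while True:                 # covers only the local node; must run on every node
        run_once(datetime.now(), last_run, active)
        time.sleep(300)         # re-evaluate policies every five minutes
```

Even this simple version illustrates the problem: every policy change has to be rolled out to each node individually, and the snapshots it creates still consume local storage on the production nodes rather than landing on separate backup storage.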
Learn more about the Cohesity solution for Cassandra backup and recovery.