
ZetaWatch / ZFS Snapshotting · 2019-10-12 00:43 by Black

ZetaWatch recently gained the capability for snapshot management. This includes not only displaying snapshots, but also creating and destroying them. Snapshots can also be cloned, rolled back to, and mounted. This article describes how ZetaWatch interacts with snapshots, its current shortcomings, how ZFS provides this functionality, and what improvements are planned.


Mounting snapshots works differently than mounting normal datasets: their mount point is fixed inside the hidden .zfs/snapshot directory at the root of each dataset. (It can be made visible with zfs set snapdir=visible pool/dataset.) On OS X, this seems to make Finder, and all other Cocoa applications, unable to see it. POSIX applications such as shells can use it, though. It is therefore of rather limited usefulness.

Rolling back to a snapshot is problematic, because it applies to the whole dataset. This is not useful in many situations, since dataset granularity is rather coarse. Rolling back to a snapshot will destroy all snapshots that are newer than it, along with all dependent clones. This is why ZetaWatch requires authentication before performing a rollback.
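To illustrate those semantics (this is not ZetaWatch code, just a hypothetical helper), a rollback destroys exactly the snapshots created after the target. Given snapshot creation transaction groups (txgs) in ascending order:

```c
#include <stddef.h>

/* Hypothetical illustration of rollback semantics: every snapshot
 * with a creation txg greater than the rollback target's would be
 * destroyed. Records the doomed indices and returns how many there
 * are. */
static size_t doomed_by_rollback(const unsigned long *txgs, size_t count,
                                 size_t target_index,
                                 size_t *doomed, size_t doomed_capacity)
{
    size_t n = 0;
    for (size_t i = 0; i < count; ++i) {
        if (txgs[i] > txgs[target_index] && n < doomed_capacity)
            doomed[n++] = i;
    }
    return n;
}
```

Rolling back to the newest snapshot therefore destroys nothing, while rolling back to the oldest destroys every other snapshot.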

Cloning is more generally useful. It creates a new, dependent file system that can be used like the original, including being modified. But it still depends on the original snapshot, and through it on the original dataset. In ZetaWatch, this also requires authentication, since it creates a new dataset.

Destroying snapshots is easy, but it will fail if a dependent clone exists. Recursive destruction and readable error messages are not yet provided.

Snapshot-related API

The ZFS API for snapshot interaction is a bit inconsistent, combining handles, names, and even nvlists in places, sometimes offering higher-level conveniences and in other places lacking them. This isn’t too surprising, though: the ZFS libraries seem intended mainly for internal use by the command line tools.
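For reference, the signatures in question look roughly like this (paraphrased from libzfs.h and libzfs_core.h of that era; exact parameters may vary between OpenZFS versions):

```c
/* libzfs: handle-based, higher level (paraphrased declarations) */
int zfs_snapshot(libzfs_handle_t *hdl, const char *path,
                 boolean_t recursive, nvlist_t *props);
int zfs_clone(zfs_handle_t *snap, const char *target, nvlist_t *props);
int zfs_rollback(zfs_handle_t *fs, zfs_handle_t *snap, boolean_t force);
int zfs_destroy(zfs_handle_t *zhp, boolean_t defer);

/* libzfs_core: name-based, thin ioctl wrappers (paraphrased) */
int lzc_snapshot(nvlist_t *snaps, nvlist_t *props, nvlist_t **errlist);
int lzc_clone(const char *fsname, const char *origin, nvlist_t *props);
int lzc_rollback_to(const char *fsname, const char *snapname);
```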

zfs_snapshot takes a libzfs_handle_t* handle for the library itself, the desired name of the snapshot (including pool and dataset name), a flag for recursion, and an nvlist_t* of additional properties for the created snapshots. The snapshot name is validated and split into a dataset and snapshot portion. A handle to the dataset is created from the name and, if requested, used to iterate over all child filesystems recursively. The names of all to-be-created snapshots are added to an nvlist, and zfs_snapshot_nvl is invoked. It in turn validates the properties and calls lzc_snapshot to actually create the snapshots.
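The splitting step is plain string work. A minimal, self-contained sketch of it (not the actual libzfs code) that separates a name like pool/dataset@snap at the '@':

```c
#include <string.h>

/* Hypothetical sketch of the split zfs_snapshot performs: a full
 * snapshot name "pool/dataset@snap" separates at the '@' into the
 * dataset portion and the snapshot portion. Returns 0 on success,
 * -1 if the name has no '@' or either portion would be empty or
 * would not fit the supplied buffers. */
static int split_snapshot_name(const char *full,
                               char *dataset, size_t dlen,
                               char *snapshot, size_t slen)
{
    const char *at = strchr(full, '@');
    if (at == NULL || at == full || at[1] == '\0')
        return -1;
    size_t dn = (size_t)(at - full);
    if (dn + 1 > dlen || strlen(at + 1) + 1 > slen)
        return -1;
    memcpy(dataset, full, dn);
    dataset[dn] = '\0';
    strcpy(snapshot, at + 1);
    return 0;
}
```

The real validation is stricter, also rejecting invalid characters and over-long components, but the dataset/snapshot split itself is this simple.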
zfs_clone is similar, but it takes a zfs_handle_t* to the snapshot to be cloned, the desired full name, and an nvlist with properties. It validates the operation, verifying among other things that both the source snapshot (passed via handle) and the destination dataset (passed as part of the path) are valid and exist, then passes control on to lzc_clone, which again accepts strings instead of handles for everything.
zfs_rollback takes a zfs_handle_t* for the filesystem, a zfs_handle_t* for the snapshot, and a force flag as arguments. This is strange, since a snapshot only ever belongs to one filesystem. Internally, this function first destroys all bookmarks and snapshots that are newer than the rollback target, and then calls lzc_rollback_to, passing it the names of the base dataset and the snapshot.
zfs_destroy is much lower level than the previous functions. It can destroy datasets and snapshots, which are passed in via a handle. But it doesn’t allow any recursion, unlike zfs_rollback, nor does it unmount the dataset. And unlike zfs_clone, it doesn’t verify that the operation makes sense either. It interacts directly with zfs_ioctl. If it weren’t for taking a handle as a parameter, it would feel right at home with the lzc family of functions.

Planned Improvements to ZetaWatch

  • Clone promotion
  • Asking for confirmation before destroying snapshots / clones on dataset rollback
  • Recursive destruction with confirmation
  • Better error messages on non-recursive destruction