Overview
Steps to Rename a Zvol:
- Stop services: Shut down any VM or application using the zvol (e.g., a bhyve virtual machine); see the sketch after this list.
- Rename the zvol:
zfs rename tank/vm-disk1 tank/new-vm-disk1
For example:
zfs rename tank/linuxdebian12disk tank/vm-2-linuxdebian12disk
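If the guest runs under vm-bhyve, you can stop it before renaming and confirm the result afterwards; a minimal sketch, assuming a vm-bhyve guest named vm-2 (hypothetical name):
vm stop vm-2
zfs list -t volume -o name | grep vm-2-linuxdebian12disk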
Move Zvol
To move a ZFS volume (zvol) to another location within the same pool, the most efficient method is zfs rename. It updates the dataset path instantly, without copying any data. To move a zvol to a different pool, or for more complex restructuring, use zfs send | zfs receive to replicate a snapshot to the new location instead.
Important Notes:
- Ensure the VM or process using the zvol is shut down before moving to prevent data corruption.
- If the zvol backs a VM disk (e.g., a Proxmox or bhyve guest), remember to update the VM configuration to point to the new path.
- ZVOLs are not "files" in a directory; zfs rename directly updates the ZFS hierarchy.
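zfs rename expects the target's parent dataset to already exist (or add -p to create missing parents). A minimal sketch, assuming the tank/vms/vm2 hierarchy has not been created yet:
zfs create -p tank/vms/vm2
Then move the zvol into it: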
zfs rename tank/vm-2-linuxdebian12disk tank/vms/vm2/vm2-linuxdebian12disk
To list all zvols in the system, use the zfs list command filtered by type:
zfs list -t volume
NAME                          USED   AVAIL  REFER  MOUNTPOINT
tank/linuxdebian0             19.8G  3.40T  2.69G  -
tank/linuxdisk0               16.3G  3.40T  56K    -
tank/vm-2-linuxdebian12disk   20.4G  3.40T  2.82G  -
tank/vm-2-zvol                64.2G  3.44T  3.26G  -
tank/vms/vm-0                 2.80G  3.38T  2.80G  -
tank/vms/vm1/vm-1-zvol        2.80G  3.38T  2.80G  -
zroot/linuxdebian0            16.3G  221G   56K    -
To list all volumes with specific properties:
zfs list -o name,volsize,used,referenced,compressratio -t volume
NAME                          VOLSIZE  USED   REFER  RATIO
tank/linuxdebian0             16G      19.8G  2.69G  1.60x
tank/linuxdisk0               16G      16.3G  56K    1.00x
tank/vm-2-linuxdebian12disk   16G      20.4G  2.82G  1.70x
tank/vm-2-zvol                60G      64.2G  3.26G  1.39x
tank/vms/vm-0                 16G      2.80G  2.80G  1.75x
tank/vms/vm1/vm-1-zvol        16G      2.80G  2.80G  1.75x
zroot/linuxdebian0            16G      16.3G  56K    1.00x
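For scripting, add -H to drop the header and tab-separate the fields, and -p to print exact byte counts instead of human-readable sizes:
zfs list -H -p -t volume -o name,volsize,used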
Locate Zvol Device Nodes (/dev/zvol)
Zvols are exposed as raw disk devices under /dev/zvol (on FreeBSD they appear as character device nodes). You can list them with ls:
ls -l /dev/zvol/tank/
total 1
crw-r-----  1 root  operator  0xc6   Feb  9 10:34 linuxdebian0
crw-r-----  1 root  operator  0x92   Feb  9 10:34 linuxdisk0
crw-r-----  1 root  operator  0x111  Feb  9 11:15 vm-2-linuxdebian12disk
crw-r-----  1 root  operator  0x86   Feb  9 10:34 vm-2-zvol
dr-xr-xr-x  3 root  wheel     512    Feb  9 10:34 vms
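Because these nodes behave like ordinary disks, you can query them directly; for example, on FreeBSD, diskinfo reports the sector size and media size of the zvol shown above:
diskinfo -v /dev/zvol/tank/vm-2-linuxdebian12disk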
To see detailed information about a specific zvol, including compression, block size, and reservations (to query every zvol at once, use zfs get -t volume all):
zfs get all tank/vm-2-linuxdebian12disk
NAME PROPERTY VALUE SOURCE
tank/vm-2-linuxdebian12disk type volume -
tank/vm-2-linuxdebian12disk creation Sun Jan 11 18:23 2026 -
tank/vm-2-linuxdebian12disk used 20.4G -
tank/vm-2-linuxdebian12disk available 3.40T -
tank/vm-2-linuxdebian12disk referenced 2.82G -
tank/vm-2-linuxdebian12disk compressratio 1.70x -
tank/vm-2-linuxdebian12disk reservation none default
tank/vm-2-linuxdebian12disk volsize 16G local
tank/vm-2-linuxdebian12disk volblocksize 16K default
tank/vm-2-linuxdebian12disk checksum on default
tank/vm-2-linuxdebian12disk compression on default
tank/vm-2-linuxdebian12disk readonly off default
tank/vm-2-linuxdebian12disk createtxg 275461 -
tank/vm-2-linuxdebian12disk copies 1 default
tank/vm-2-linuxdebian12disk refreservation 16.3G local
tank/vm-2-linuxdebian12disk guid 14499331991764456063 -
tank/vm-2-linuxdebian12disk primarycache all default
tank/vm-2-linuxdebian12disk secondarycache all default
tank/vm-2-linuxdebian12disk usedbysnapshots 1.44G -
tank/vm-2-linuxdebian12disk usedbydataset 2.82G -
tank/vm-2-linuxdebian12disk usedbychildren 0B -
tank/vm-2-linuxdebian12disk usedbyrefreservation 16.1G -
tank/vm-2-linuxdebian12disk logbias latency default
tank/vm-2-linuxdebian12disk objsetid 11385 -
tank/vm-2-linuxdebian12disk dedup off default
tank/vm-2-linuxdebian12disk mlslabel none default
tank/vm-2-linuxdebian12disk sync standard default
tank/vm-2-linuxdebian12disk refcompressratio 1.72x -
tank/vm-2-linuxdebian12disk written 148M -
tank/vm-2-linuxdebian12disk logicalused 7.21G -
tank/vm-2-linuxdebian12disk logicalreferenced 4.84G -
tank/vm-2-linuxdebian12disk volmode dev local
tank/vm-2-linuxdebian12disk snapshot_limit none default
tank/vm-2-linuxdebian12disk snapshot_count none default
tank/vm-2-linuxdebian12disk snapdev hidden default
tank/vm-2-linuxdebian12disk context none default
tank/vm-2-linuxdebian12disk fscontext none default
tank/vm-2-linuxdebian12disk defcontext none default
tank/vm-2-linuxdebian12disk rootcontext none default
tank/vm-2-linuxdebian12disk redundant_metadata all default
tank/vm-2-linuxdebian12disk encryption off default
tank/vm-2-linuxdebian12disk keylocation none default
tank/vm-2-linuxdebian12disk keyformat none default
tank/vm-2-linuxdebian12disk pbkdf2iters 0 default
tank/vm-2-linuxdebian12disk special_small_blocks 0 default
tank/vm-2-linuxdebian12disk snapshots_changed Mon Feb 9 9:16:40 2026 -
tank/vm-2-linuxdebian12disk prefetch all default
tank/vm-2-linuxdebian12disk volthreading on default
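Dumping every property is often more than you need; zfs get also accepts a comma-separated list of properties:
zfs get volsize,volblocksize,compressratio,refreservation,volmode tank/vm-2-linuxdebian12disk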
Check Zvol Usage and I/O
If you are using zvols for virtualization (bhyve) or iSCSI, you can monitor their I/O performance.
Note: zpool iostat reports I/O at the pool and vdev level rather than per zvol; to watch an individual busy zvol you may need gstat (for zvols exposed through GEOM) or the guest's own statistics.
zpool iostat -v 1
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
tank        23.8G  3.60T     13     23   203K   471K
  mirror-0  23.8G  3.60T     13     23   203K   471K
    ada0        -      -      6     11   105K   236K
    ada1        -      -      6     11  98.6K   236K
zroot       1.83G   228G      0      9  9.80K  68.7K
  mirror-0  1.83G   228G      0      9  9.80K  68.7K
    nda0p4      -      -      0      4  4.82K  34.4K
    nda1p4      -      -      0      4  4.98K  34.4K

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
tank        23.8G  3.60T      0      0      0      0
  mirror-0  23.8G  3.60T      0      0      0      0
    ada0        -      -      0      0      0      0
    ada1        -      -      0      0      0      0
zroot       1.83G   228G      0      0      0      0
  mirror-0  1.83G   228G      0      0      0      0
    nda0p4      -      -      0      0      0      0
    nda1p4      -      -      0      0      0      0
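To watch a single zvol on FreeBSD, gstat can filter on the zvol's GEOM provider name, provided the zvol uses volmode=geom (the example zvol above is set to volmode=dev, which bypasses GEOM); a sketch under that assumption:
gstat -f 'zvol/tank'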
Delete a Zvol and Its Snapshots
To delete a zvol and all associated snapshots, use zfs destroy -r, which recursively destroys the zvol together with its children (snapshots). Run:
zfs destroy -r pool/path/to/zvol
-r: Recursively destroys all child datasets and snapshots.
-f: (Optional) Forcibly unmounts if necessary.
For example, to destroy a zvol named vm-100-disk-1 in tank/vms:
zfs destroy -r tank/vms/vm-100-disk-1
Important Notes:
- This action is permanent and cannot be undone.
- If the zvol has active clones, you may need zfs destroy -R to destroy the dependent clones as well.
- Ensure no process is using the block device before deletion.
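Before destroying anything, you can preview what would be removed: -n makes it a dry run and -v lists every dataset and snapshot that would be destroyed:
zfs destroy -rnv tank/vms/vm-100-disk-1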
Make an identical independent copy of a zvol
In ZFS, creating a "full copy" of an original dataset can be achieved in two main ways, depending on whether you want an instant, space-efficient clone (linked to the original) or a completely independent copy (full duplication).
- Instant, Writable Clone (Linked)
This creates a new, independent dataset that is a writable copy of a specific snapshot. It consumes almost no space initially because it shares data blocks with the original.
Method: Snapshot, then clone.
Command:
zfs snapshot pool/original@clone_snap
zfs clone pool/original@clone_snap pool/new_clone
Note: The clone is dependent on the snapshot. You cannot delete the snapshot without promoting the clone (zfs promote) first.
- Full Independent Copy (Separate)
This method creates a complete, standalone copy of the dataset. It does not share blocks with the original and is not dependent on the source snapshot, allowing for total separation.
Method: zfs send and zfs receive.
Command:
zfs snapshot -r pool/original@full_copy
zfs send -R pool/original@full_copy | zfs receive pool/new_copy
For example:
zfs send -R tank/vms/vm2/ispconfig-debian12-zvol@090226-0915 | zfs receive tank/vms/vm2/ispconfig-debian12__FOR_TESTING-zvol
Note: This consumes space equal to the amount of data being copied, but it is the right choice when the copy must survive removal of the original dataset or pool.
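The same technique works across pools, which zfs rename cannot do; a minimal sketch, assuming a second pool named backup (hypothetical name):
zfs snapshot tank/vms/vm2/vm2-linuxdebian12disk@migrate
zfs send tank/vms/vm2/vm2-linuxdebian12disk@migrate | zfs receive backup/vm2-linuxdebian12disk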
Key Differences
Feature       zfs clone                      zfs send + zfs recv
Speed         Instant                        Depends on data size
Space usage   Almost none (initially)        Full duplication
Dependency    Dependent on the snapshot      Independent
Best for      Testing/instant provisioning   Backups/full migrations
Important Considerations
- Pool restriction: Clones must reside in the same pool as the snapshot they are created from.
- Promote: To make a clone independent (so you can destroy the original), use zfs promote <clone_dataset>; see the sketch below.
- Full replication: Using zfs send -R preserves all snapshots, clones, and properties.
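A minimal promote sketch, reusing the placeholder names from the clone example above (pool/original and pool/new_clone):
zfs promote pool/new_clone
After promotion the shared snapshot belongs to pool/new_clone and pool/original becomes the dependent side, so it can be destroyed if it is no longer needed:
zfs destroy pool/original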