Features

This page describes some of the more important features and performance improvements that are part of OpenZFS.

Help would be appreciated in porting features to platforms whose status is "not yet".

Feature Flags

See the Feature Flags wiki page.

libzfs_core

See this blog post (Jan 2012) and associated slides and video for more details.

First introduced in:

illumos June 2012
FreeBSD March 2013
ZFS on Linux August 2013
OpenZFS on OS X October 2013

CLI Usability

These are improvements to the command line interface. While the end result is a generally more friendly user interface, getting the desired behavior often required modifications to the core of ZFS.

Listed in chronological order (oldest first).

Pool Comment

OpenZFS has a per-pool comment property that can be set with the zpool set command and read even while the pool is not imported, so it remains accessible even if pool import fails.
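
For example, a minimal usage sketch (the pool name tank and the comment text are hypothetical):

$ zpool set comment="rack 2, primary storage" tank
$ zpool get comment tank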

illumos Nov 2011
FreeBSD Nov 2011
ZFS on Linux Aug 2012
OpenZFS on OS X Aug 2012

Size Estimates for zfs send and zfs destroy

This feature enhances OpenZFS's internal space accounting. The new accounting information is used to provide a -n (dry-run) option for zfs send, which can instantly calculate the amount of send stream data a specific zfs send command would generate, and a -n option for zfs destroy, which can instantly calculate the amount of space that would be reclaimed by a specific zfs destroy command.
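
A hedged sketch of both dry-run options (dataset and snapshot names are hypothetical):

$ zfs send -nv tank/fs@monday       # print the estimated send stream size without sending
$ zfs destroy -nv tank/fs@monday    # print the space that would be reclaimed without destroying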

illumos Nov 2011
FreeBSD Nov 2011
ZFS on Linux Jul 2012
OpenZFS on OS X Jul 2012

vdev Information in zpool list

OpenZFS adds a -v option to the zpool list command which shows detailed sizing information about the vdevs in the pool:

$ zpool list -v
NAME          SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
dcenter      5.24T  3.85T  1.39T         -    73%  1.00x  ONLINE  -
  mirror      556G   469G  86.7G         -
    c2t1d0       -      -      -         -
    c2t0d0       -      -      -         -
  mirror      556G   493G  63.0G         -
    c2t3d0       -      -      -         -
    c2t2d0       -      -      -         -
  mirror      556G   493G  62.7G         -
    c2t5d0       -      -      -         -
    c2t4d0       -      -      -         -
  mirror      556G   494G  62.5G         -
    c2t8d0       -      -      -         -
    c2t6d0       -      -      -         -
  mirror      556G   494G  62.2G         -
    c2t10d0      -      -      -         -
    c2t9d0       -      -      -         -
  mirror      556G   494G  61.9G         -
    c2t12d0      -      -      -         -
    c2t11d0      -      -      -         -
  mirror     1016G   507G   509G         -
    c1t1d0       -      -      -         -
    c1t5d0       -      -      -         -
  mirror     1016G   496G   520G         -
    c1t3d0       -      -      -         -
    c1t4d0       -      -      -         -
illumos Jan 2012
FreeBSD May 2012
ZFS on Linux Sept 2012
OpenZFS on OS X Sept 2012

ZFS list snapshot property alias

Functionally identical to Solaris 11 extension zfs list -t snap.
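
For example:

$ zfs list -t snap        # equivalent to: zfs list -t snapshot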

illumos not yet
FreeBSD Oct 2013
ZFS on Linux Apr 2012
OpenZFS on OS X Apr 2012

ZFS snapshot alias

Functionally identical to Solaris 11 extension zfs snap.

illumos not yet
FreeBSD Oct 2013
ZFS on Linux Apr 2012
OpenZFS on OS X Apr 2012

zfs send Progress Reporting

OpenZFS introduces a -v option to zfs send which reports per-second information on how much data has been sent, how long it has taken, and how much data remains to be sent.
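
A hedged sketch (dataset, snapshot, and host names are hypothetical); the progress information is printed to standard error, so it does not interfere with the stream on standard output:

$ zfs send -v tank/fs@monday | ssh backuphost zfs receive backup/fs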

illumos May 2012
FreeBSD May 2012
ZFS on Linux Sept 2012
OpenZFS on OS X Sept 2012

Arbitrary Snapshot Arguments to zfs snapshot

illumos June 2012
FreeBSD March 2013
ZFS on Linux August 2013
OpenZFS on OS X September 2013
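
This allows multiple snapshots, possibly of different datasets in the same pool, to be created with a single command. A hedged sketch (dataset names are hypothetical):

$ zfs snapshot tank/home@backup-1 tank/var@backup-1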

Performance

These are significant performance improvements, often requiring substantial restructuring of the source code.

Listed in chronological order (oldest first).

SA based xattrs

Improves performance of Linux-style (short) xattrs by storing them in the dnode_phys_t's bonus block. (Not to be confused with Solaris-style Extended Attributes, which are full-fledged files or "forks", like NTFS streams. This work could be extended to also improve the performance on illumos of small Extended Attributes whose permissions are the same as those of the containing file.)

Requires a disk format change and is off by default until Filesystem (ZPL) Feature Flags are implemented (not to be confused with zpool Feature Flags).
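
On ZFS on Linux this behavior is controlled per dataset through the xattr property; a minimal sketch (dataset name is hypothetical):

$ zfs set xattr=sa tank/fs    # store xattrs as system attributes in the dnode
$ zfs get xattr tank/fs       # "sa" vs. the default "on" (directory-based xattrs)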

illumos not yet (needs additional functionality)
FreeBSD ??
ZFS on Linux Oct 2011
OpenZFS on OS X May 2015

Note that SA-based xattrs have not been used on symlinks since Aug 2013, pending resolution of an issue.

Use the slog even with logbias=throughput

illumos ??
FreeBSD ??
ZFS on Linux Oct 2011
OpenZFS on OS X Oct 2011
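
A hedged sketch of the property this refers to (dataset name is hypothetical); with this change the separate log device is used even in throughput mode:

$ zfs set logbias=throughput tank/db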

Asynchronous Filesystem and Volume Destruction

Destroying a filesystem requires traversing all of its data in order to return its used blocks to the pool's free list. Before this feature the filesystem was not fully removed until all blocks had been reclaimed. If the destroy operation was interrupted by a reboot or power outage, the next attempt to import the pool (probably during boot) would need to complete the destroy operation synchronously, possibly delaying boot for long periods of time.

With asynchronous destruction the filesystem's data is immediately moved to a "to be freed" list, allowing the destroy operation to complete without traversing any of the filesystem's data. A background process reclaims blocks from this "to be freed" list and is capable of resuming this process after reboots without slowing the pool import process.

The new freeing algorithm also has a significant performance improvement when destroying clones. The old algorithm took time proportional to the number of blocks referenced by the clone, even if most of those blocks could not be reclaimed because they were still referenced by the clone's origin. The new algorithm only takes time proportional to the number of blocks unique to the clone.

See this blog post for more detailed performance analysis.

Note: The async_destroy feature flag must be enabled to take advantage of this.
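
A hedged sketch of enabling the flag on an existing pool (pool name is hypothetical):

$ zpool set feature@async_destroy=enabled tank
$ zpool get feature@async_destroy tank      # reports disabled, enabled, or active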

illumos May 2012
FreeBSD June 2012
ZFS on Linux Jan 2013
OpenZFS on OS X Jan 2013

Reduce Number of Empty bpobjs

Every time OpenZFS takes a snapshot it creates on-disk block pointer objects (bpobjs) to track blocks associated with that snapshot. In common use cases most of these bpobjs are empty, but the number of bpobjs per snapshot is proportional to the number of snapshots already taken of the same filesystem or volume. When a single filesystem or volume has many (tens of thousands of) snapshots, these unnecessary empty bpobjs can waste space and cause performance problems. OpenZFS waits to create each bpobj until the first entry is added to it, thus eliminating the empty bpobjs.

Note: The empty_bpobj feature flag must be enabled to take advantage of this.
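
A hedged sketch of checking the flag's state (pool name is hypothetical):

$ zpool get feature@empty_bpobj tank    # reports disabled, enabled, or active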

illumos Aug 2012
FreeBSD Aug 2012
ZFS on Linux Dec 2012
OpenZFS on OS X Dec 2012

Single Copy ARC

OpenZFS caches disk blocks in-memory in the adaptive replacement cache (ARC). Originally, when the same disk block was accessed from different clones it was cached multiple times (once for each clone accessing the block) in case a clone planned to modify the block. With these changes OpenZFS caches at most one copy of every block unless a clone is actually modifying the block.

illumos Sep 2012
FreeBSD Nov 2012
ZFS on Linux Dec 2012
OpenZFS on OS X Dec 2012

TRIM Support

TRIM support provides the ability to pass deletes/frees through to the underlying vdevs, helping to ensure that devices such as SSDs, which rely on receiving TRIM/UNMAP requests for sectors that are no longer needed, maintain optimal performance.

The current FreeBSD implementation builds a map of regions that were freed. On every write the code consults the map and removes ranges that were freed before, but are now overwritten.

Freed blocks are not TRIMmed immediately; a low-priority thread TRIMs the ranges when the time comes.

Support for TRIM has been demonstrated to significantly improve the general performance of SSDs in the field, eliminating the need for regular secure-erase cycles on busy hosts.
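
On FreeBSD the implementation exposes sysctl knobs and counters; a hedged sketch (exact names can vary between releases):

$ sysctl vfs.zfs.trim.enabled        # 1 = TRIM enabled (set to 0 in /boot/loader.conf to disable)
$ sysctl kstat.zfs.misc.zio_trim     # counters: bytes trimmed, successes, failures, unsupported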

An alternative method, which is arguably better and works by tracking the metaslab allocator, is also in progress and can be found here: Add UNMAP/TRIM functionality to ZFS and Illumos

There is a pull request for ZFS On Linux which implements FreeBSD's (Sep 2012) ZFS TRIM support.

illumos not yet ported
FreeBSD Sep 2012
ZFS on Linux not yet ported
OpenZFS on OS X not yet ported

FASTWRITE Algorithm

Improves synchronous IO performance.

illumos not yet ported
FreeBSD not yet ported
ZFS on Linux Oct 2012
OpenZFS on OS X Oct 2012

Note that a locking enhancement is being reviewed.

Block Freeing Performance Improvements

Performance analysis of OpenZFS revealed that the algorithms used when freeing blocks could cause significant performance problems when freeing a large amount of blocks in a single transaction or when dealing with fragmented pools. Several performance improvements were made in this area.

Each platform received the improvements in three separate commits (oldest first):

illumos: Nov 2012, Feb 2013, Feb 2013
FreeBSD: Nov 2012, Feb 2013, Feb 2013
ZFS on Linux: May 2013, June 2013, Oct 2013
OpenZFS on OS X: May 2013, June 2013, Oct 2013

nop-write

ZFS supports end-to-end checksumming of every data block. When a cryptographically secure checksum is being used (and compression is enabled) OpenZFS will compare the checksums of incoming writes to the checksum of the existing on-disk data and avoid issuing any write i/o for data that has not changed. This can help performance and snapshot space usage in situations where the same files are regularly overwritten with almost-identical data (e.g. regular full backups of large random-access files).
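
nop-write engages only when the dataset uses a cryptographically strong checksum; a hedged sketch of the prerequisites (dataset name is hypothetical):

$ zfs set checksum=sha256 tank/fs    # strong checksum, required for nop-write
$ zfs set compression=lz4 tank/fs    # compression must also be enabled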

illumos Nov 2012
FreeBSD Nov 2012
ZFS on Linux Nov 2013
OpenZFS on OS X Nov 2013

lz4 compression

OpenZFS supports on-the-fly compression of all user data with a variety of compression algorithms. This feature adds support for the lz4 compression algorithm. lz4 is usually faster and compresses data better than lzjb, the old default OpenZFS compression algorithm.

Note: The lz4_compress feature flag must be enabled to take advantage of this.
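
A hedged usage sketch (pool and dataset names are hypothetical):

$ zpool set feature@lz4_compress=enabled tank
$ zfs set compression=lz4 tank/fs
$ zfs get compressratio tank/fs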

illumos Jan 2013
FreeBSD Feb 2013
ZFS on Linux Jan 2013
OpenZFS on OS X Jan 2013

synctask rewrite

illumos Feb 2013
FreeBSD March 2013
ZFS on Linux Sept 2013
OpenZFS on OS X Sept 2013

l2arc compression

illumos Jun 2013
FreeBSD Jun 2013
ZFS on Linux Aug 2013
OpenZFS on OS X Aug 2013

ARC Shouldn't Cache Freed Blocks

Originally, cached blocks in the ARC remained cached until they were evicted due to memory pressure, even if the underlying disk block was freed. In some workloads these freed blocks were so frequently accessed before they were freed that the ARC continued to cache them while evicting blocks which had not been freed yet. Since freed blocks can never be accessed again, continuing to cache them was unnecessary. In OpenZFS, ARC blocks are evicted immediately when their underlying data blocks are freed.

illumos Jun 2013
FreeBSD Jun 2013
ZFS on Linux Jun 2013
OpenZFS on OS X Jun 2013

Improve N-way mirror read performance

Queues read requests to the least busy leaf vdev in mirrors.

In addition to the vdev load biasing first implemented by ZFS on Linux in July 2013, the FreeBSD October 2013 version added I/O locality and device rotational information to further enhance the performance.

OS                 Load              Load + I/O Locality & Rotational Information
illumos            not yet ported    not yet ported
FreeBSD            N/A               23rd October 2013
ZFS on Linux       Jul 2013          Feb 26, 2016
OpenZFS on OS X    Jul 2013          not yet ported

Smoother Write Throttle

The write throttle (dsl_pool_tempreserve_space() and txg_constrain_throughput()) is rewritten to produce much more consistent delays when under constant load. The new write throttle is based on the amount of dirty data, rather than guesses about future performance of the system. When there is a lot of dirty data, each transaction (e.g. write() syscall) will be delayed by the same small amount. This eliminates the "brick wall of wait" that the old write throttle could hit, causing all transactions to wait several seconds until the next txg opens. One of the keys to the new write throttle is decrementing the amount of dirty data as i/o completes, rather than at the end of spa_sync(). Note that the write throttle is only applied once the i/o scheduler is issuing the maximum number of outstanding async writes. See the block comments in dsl_pool.c and above dmu_tx_delay() for more details.

The ZFS i/o scheduler (vdev_queue.c) now divides i/os into 5 classes: sync read, sync write, async read, async write, and scrub/resilver. The scheduler issues a number of concurrent i/os from each class to the device. Once a class has been selected, an i/o is selected from this class using either an elevator algorithm (async, scrub classes) or FIFO (sync classes). The number of concurrent async write i/os is tuned dynamically based on i/o load, to achieve good sync i/o latency when there is not a high load of writes, and good write throughput when there is. See the block comment in vdev_queue.c for more details.
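
On ZFS on Linux the relevant knobs are exposed as module parameters; a hedged sketch of inspecting two of them (Linux-specific paths):

$ cat /sys/module/zfs/parameters/zfs_dirty_data_max                # cap on outstanding dirty data, in bytes
$ cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active   # ceiling on concurrent async write i/os per vdev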

illumos Aug 2013
FreeBSD Nov 2013
ZFS on Linux Dec 2013
OpenZFS on OS X Mar 2014

Disable LBA Weighting on files and SSDs

On rotational media, the bandwidth of the outermost tracks is approximately twice that of the innermost tracks. A heuristic called LBA weighting was put into the metaslab allocator to account for this by favoring the outermost tracks over the innermost tracks. This has the consequence that metaslabs tend to fill at different rates depending on their location, which causes the metaslabs corresponding to the outermost tracks to enter the best-fit allocation strategy before the others.

The best-fit allocation strategy is more CPU intensive than the typical first-fit because it looks for the smallest region of free space able to fulfill an allocation rather than picking the next available one. The CPU time is fairly excessive and is known to harm IOPS, but it exists to minimize the use of gang blocks as a metaslab becomes excessively full. Gaining a bandwidth improvement from LBA weighting at the expense of an earlier switch to the best-fit allocation behavior on the weighted metaslabs is reasonable on rotational disks. However, it makes no sense on files, where the underlying filesystem is free to place things however it sees fit, and on SSDs, where there is no bandwidth difference based on LBA.

With this change, metaslabs fill more evenly on pools whose vdevs consist only of files and SSDs, which minimizes the number of metaslabs that enter the best-fit allocation strategy when a pool is mostly full but still below 96% full. This is particularly important on SSDs, where drops in IOPS are more pronounced.
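
On ZFS on Linux the heuristic can also be toggled by hand via a module parameter introduced by the commit above (hedged; Linux-specific path):

$ echo 0 > /sys/module/zfs/parameters/metaslab_lba_weighting_enabled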

illumos not yet
FreeBSD not yet
ZFS on Linux Aug 2015
OpenZFS on OS X Sep 2015

Dataset Properties

These are new filesystem, volume, and snapshot properties which can be accessed with the zfs(1) command's get subcommand. See the zfs(1) manpage for your distribution for more details on each of these properties.

refcompressratio: The compression ratio achieved for all data referenced by (but not necessarily unique to) a snapshot, filesystem, or volume, expressed as a multiplier. (illumos Jun 2011, FreeBSD Jun 2011, ZFS on Linux Aug 2012, OpenZFS on OS X Aug 2012)

clones: For snapshots, this property is a comma-separated list of filesystems or volumes which are clones of this snapshot. (illumos Nov 2011, FreeBSD Nov 2011, ZFS on Linux Jul 2012, OpenZFS on OS X Jul 2012)

written: The amount of referenced space written to this dataset since the previous snapshot. (illumos Nov 2011, FreeBSD Nov 2011, ZFS on Linux Jul 2012, OpenZFS on OS X Jul 2012)

written@<snap>: The amount of referenced space written to this dataset since the specified snapshot. This is the space referenced by this dataset, but not referenced by the specified snapshot. (illumos Nov 2011, FreeBSD Nov 2011, ZFS on Linux Jul 2012, OpenZFS on OS X Jul 2012)

logicalused, logicalreferenced: The amount of space used or referenced, before taking into account compression. (illumos Feb 2013, FreeBSD Mar 2013, ZFS on Linux Oct 2013, OpenZFS on OS X Nov 2013)
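
A hedged usage sketch of querying these properties (dataset and snapshot names are hypothetical):

$ zfs get refcompressratio,logicalused,logicalreferenced tank/fs
$ zfs get clones tank/fs@monday
$ zfs get written@monday tank/fs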