2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" The contents of this file are subject to the terms of the Common Development
4 .\" and Distribution License (the "License"). You may not use this file except
5 .\" in compliance with the License. You can obtain a copy of the license at
6 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
8 .\" See the License for the specific language governing permissions and
9 .\" limitations under the License. When distributing Covered Code, include this
10 .\" CDDL HEADER in each file and include the License file at
11 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
12 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
13 .\" own identifying information:
14 .\" Portions Copyright [yyyy] [name of copyright owner]
.TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
Description of the different parameters to the ZFS module.
.SS "Module parameters"
.sp
\fBignore_hole_birth\fR (int)
When set, the hole_birth optimization will not be used, and all holes will
always be sent on zfs send. Useful if you suspect your datasets are affected
by a bug in hole_birth.
Use \fB1\fR for on and \fB0\fR (default) for off.
.sp
\fBl2arc_feed_again\fR (int)
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set as
fast as possible.
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBl2arc_feed_min_ms\fR (ulong)
Min feed interval in milliseconds. Requires \fBl2arc_feed_again=1\fR and only
applicable in related situations.
Default value: \fB200\fR.
.sp
\fBl2arc_feed_secs\fR (ulong)
Seconds between L2ARC writing.
Default value: \fB1\fR.
.sp
\fBl2arc_headroom\fR (ulong)
How far through the ARC lists to search for L2ARC cacheable content, expressed
as a multiplier of \fBl2arc_write_max\fR.
Default value: \fB2\fR.
.sp
\fBl2arc_headroom_boost\fR (ulong)
Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
successfully compressed before writing. A value of 100 disables this feature.
Default value: \fB200\fR.
.sp
\fBl2arc_nocompress\fR (int)
Skip compressing L2ARC buffers.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBl2arc_noprefetch\fR (int)
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBl2arc_norw\fR (int)
No reads during writes.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBl2arc_write_boost\fR (ulong)
Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
while they remain cold.
Default value: \fB8,388,608\fR.
.sp
\fBl2arc_write_max\fR (ulong)
Max write bytes per interval.
Default value: \fB8,388,608\fR.
.sp
\fBmetaslab_aliquot\fR (ulong)
Metaslab granularity, in bytes. This is roughly similar to what would be
referred to as the "stripe size" in traditional RAID arrays. In normal
operation, ZFS will try to write this amount of data to a top-level vdev
before moving on to the next one.
Default value: \fB524,288\fR.
.sp
\fBmetaslab_bias_enabled\fR (int)
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBmetaslab_debug_load\fR (int)
Load all metaslabs during pool import.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBmetaslab_debug_unload\fR (int)
Prevent metaslabs from being unloaded.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBmetaslab_fragmentation_factor_enabled\fR (int)
Enable use of the fragmentation metric in computing metaslab weights.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBmetaslabs_per_vdev\fR (int)
When a vdev is added, it will be divided into approximately (but no more than)
this number of metaslabs.
Default value: \fB200\fR.
.sp
\fBmetaslab_preload_enabled\fR (int)
Enable metaslab group preloading.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBmetaslab_lba_weighting_enabled\fR (int)
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth as is typically the case on a modern constant
angular velocity disk drive.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBspa_config_path\fR (charp)
SPA config file.
Default value: \fB/etc/zfs/zpool.cache\fR.
.sp
\fBspa_asize_inflation\fR (int)
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
Default value: \fB24\fR.
.sp
\fBspa_load_verify_data\fR (int)
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
Default value: \fB1\fR.
.sp
\fBspa_load_verify_metadata\fR (int)
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
Default value: \fB1\fR.
.sp
\fBspa_load_verify_maxinflight\fR (int)
Maximum concurrent I/Os during the traversal performed during an "extreme
rewind" (\fB-X\fR) pool import.
Default value: \fB10000\fR.
.sp
\fBspa_slop_shift\fR (int)
Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
in the pool to be consumed. This ensures that we don't run the pool
completely out of space, due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space. If we have
less than this amount of free space, most ZPL operations (e.g. write,
create) will return ENOSPC.
Default value: \fB5\fR.
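.sp
As a brief worked example: with the default \fBspa_slop_shift\fR of 5, the
reserved fraction is 1/2^5 = 1/32, i.e. the 3.2% quoted above; on a
hypothetical 10 TB pool that is roughly 320 GB kept in reserve.
.sp
.nf
  # Reserved slop percentage for spa_slop_shift=5.
  echo "scale=3; 100 / 2^5" | bc    # prints 3.125
.fi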
.sp
\fBzfetch_array_rd_sz\fR (ulong)
If prefetching is enabled, disable prefetching for reads larger than this size.
Default value: \fB1,048,576\fR.
.sp
\fBzfetch_max_distance\fR (uint)
Max bytes to prefetch per stream (default 8MB).
Default value: \fB8,388,608\fR.
.sp
\fBzfetch_max_streams\fR (uint)
Max number of streams per zfetch (prefetch streams per file).
Default value: \fB8\fR.
.sp
\fBzfetch_min_sec_reap\fR (uint)
Min time before an active prefetch stream can be reclaimed.
Default value: \fB2\fR.
.sp
\fBzfs_arc_dnode_limit\fR (ulong)
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a floor to the amount of dnode metadata. It defaults to 0,
which indicates that a percentage of the ARC meta buffers, given by
\fBzfs_arc_dnode_limit_percent\fR, may be used for dnodes.
See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
than in response to overall demand for non-metadata.
Default value: \fB0\fR.
.sp
\fBzfs_arc_dnode_limit_percent\fR (ulong)
Percentage of ARC meta buffers that can be consumed by dnodes.
See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
Default value: \fB10\fR.
.sp
\fBzfs_arc_dnode_reduce_percent\fR (ulong)
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.
Default value: \fB10% of the number of dnodes in the ARC\fR.
.sp
\fBzfs_arc_average_blocksize\fR (int)
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.
Default value: \fB8192\fR.
.sp
\fBzfs_arc_evict_batch_limit\fR (int)
Number of ARC headers to evict per sub-list before proceeding to another
sub-list. This batch-style operation prevents entire sub-lists from being
evicted at once but comes at a cost of additional unlocking and locking.
Default value: \fB10\fR.
.sp
\fBzfs_arc_grow_retry\fR (int)
After a memory pressure event the ARC will wait this many seconds before trying
to resume growth.
Default value: \fB5\fR.
.sp
\fBzfs_arc_lotsfree_percent\fR (int)
Throttle I/O when free system memory drops below this percentage of total
system memory. Setting this value to 0 will disable the throttle.
Default value: \fB10\fR.
.sp
\fBzfs_arc_max\fR (ulong)
Max size of the ARC in bytes. If set to 0 then it will consume 1/2 of system
RAM. This value must be at least 67108864 (64 megabytes).
This value can be changed dynamically with some caveats. It cannot be set back
to 0 while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
Default value: \fB0\fR.
.sp
\fBzfs_arc_meta_limit\fR (ulong)
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0, which indicates that a percentage of the ARC, given
by \fBzfs_arc_meta_limit_percent\fR, may be used for meta data.
This value may be changed dynamically except that it cannot be set back to 0
for a specific percent of the ARC; it must be set to an explicit value.
Default value: \fB0\fR.
.sp
\fBzfs_arc_meta_limit_percent\fR (ulong)
Percentage of ARC buffers that can be used for meta data.
See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
Default value: \fB75\fR.
.sp
\fBzfs_arc_meta_min\fR (ulong)
The minimum allowed size in bytes that meta data buffers may consume in
the ARC. This value defaults to 0, which disables a floor on the amount
of the ARC devoted to meta data.
Default value: \fB0\fR.
.sp
\fBzfs_arc_meta_prune\fR (int)
The number of dentries and inodes to be scanned looking for entries
which can be dropped. This may be required when the ARC reaches the
\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
to be pruned more aggressively. Setting this value to 0 will disable
pruning the inode and dentry caches.
Default value: \fB10,000\fR.
.sp
\fBzfs_arc_meta_adjust_restarts\fR (ulong)
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
This value should not need to be tuned but is available to facilitate
performance analysis.
Default value: \fB4096\fR.
.sp
\fBzfs_arc_min\fR (ulong)
Min arc size.
Default value: \fB100\fR.
.sp
\fBzfs_arc_min_prefetch_lifespan\fR (int)
Minimum time prefetched blocks are locked in the ARC, specified in jiffies.
A value of 0 will default to 1 second.
Default value: \fB0\fR.
.sp
\fBzfs_arc_num_sublists_per_state\fR (int)
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
sub-lists per ARC state.
Default value: \fB1\fR or the number of online CPUs, whichever is greater.
.sp
\fBzfs_arc_overflow_shift\fR (int)
The ARC size is considered to be overflowing if it exceeds the current
ARC target size (arc_c) by a threshold determined by this parameter.
The threshold is calculated as a fraction of arc_c using the formula
"arc_c >> \fBzfs_arc_overflow_shift\fR".
The default value of 8 causes the ARC to be considered to be overflowing
if it exceeds the target size by 1/256th (about 0.4%) of the target size.
When the ARC is overflowing, new buffer allocations are stalled until
the reclaim thread catches up and the overflow condition no longer exists.
Default value: \fB8\fR.
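.sp
A short numeric sketch (the 4 GiB target size is hypothetical): with
arc_c = 4 GiB and the default shift of 8, the ARC is considered overflowing
once it exceeds the target by 16 MiB.
.sp
.nf
  # Overflow threshold for arc_c = 4 GiB, zfs_arc_overflow_shift = 8.
  echo $(( 4294967296 >> 8 ))    # prints 16777216 (16 MiB)
.fi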
.sp
\fBzfs_arc_p_min_shift\fR (int)
arc_c shift to calc min/max arc_p.
Default value: \fB4\fR.
.sp
\fBzfs_arc_p_aggressive_disable\fR (int)
Disable aggressive arc_p growth.
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_arc_p_dampener_disable\fR (int)
Disable arc_p adapt dampener.
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_arc_shrink_shift\fR (int)
log2(fraction of arc to reclaim).
Default value: \fB5\fR.
.sp
\fBzfs_arc_sys_free\fR (ulong)
The target number of bytes the ARC should leave as free memory on the system.
Defaults to the larger of 1/64 of physical memory or 512K. Setting this
option to a non-zero value will override the default.
Default value: \fB0\fR.
.sp
\fBzfs_autoimport_disable\fR (int)
Disable pool import at module load by ignoring the cache file (typically
\fB/etc/zfs/zpool.cache\fR).
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBzfs_dbgmsg_enable\fR (int)
Internally ZFS keeps a small log to facilitate debugging. By default the log
is disabled, to enable it set this option to 1. The contents of the log can
be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
this proc file clears the log.
Default value: \fB0\fR.
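.sp
For example, the log can be enabled, read, and cleared as described above:
.sp
.nf
  echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
  cat /proc/spl/kstat/zfs/dbgmsg        # view the debug log
  echo 0 > /proc/spl/kstat/zfs/dbgmsg   # clear the log
.fi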
.sp
\fBzfs_dbgmsg_maxsize\fR (int)
The maximum size in bytes of the internal ZFS debug log.
Default value: \fB4M\fR.
.sp
\fBzfs_dbuf_state_index\fR (int)
This feature is currently unused. It is normally used for controlling what
reporting is available under /proc/spl/kstat/zfs.
Default value: \fB0\fR.
.sp
\fBzfs_deadman_enabled\fR (int)
Enable deadman timer. See description below.
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_deadman_synctime_ms\fR (ulong)
Expiration time in milliseconds. This value has two meanings. First it is
used to determine when the spa_deadman() logic should fire. By default the
spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
Secondly, the value determines if an I/O is considered "hung". Any I/O that
has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
in a zevent being logged.
Default value: \fB1,000,000\fR.
.sp
\fBzfs_dedup_prefetch\fR (int)
Enable prefetching of dedup'ed blocks.
Use \fB1\fR for yes and \fB0\fR to disable (default).
.sp
\fBzfs_delay_min_dirty_percent\fR (int)
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= \fBzfs_vdev_async_write_active_max_dirty_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: \fB60\fR.
.sp
\fBzfs_delay_scale\fR (int)
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
See the section "ZFS TRANSACTION DELAY".
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
Default value: \fB500,000\fR.
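.sp
As a worked example of the guidance above: targeting a maximum of roughly
2,000 operations per second gives 1,000,000,000 / 2,000 = 500,000, which is
the default value.
.sp
.nf
  # zfs_delay_scale for a target of 2000 operations per second.
  echo $(( 1000000000 / 2000 ))    # prints 500000
.fi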
.sp
\fBzfs_delete_blocks\fR (ulong)
This is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted
asynchronously while smaller files are deleted synchronously. Decreasing this
value will reduce the time spent in an unlink(2) system call at the expense of
a longer delay before the freed space is available.
Default value: \fB20,480\fR.
.sp
\fBzfs_dirty_data_max\fR (int)
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
.sp
\fBzfs_dirty_data_max_max\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
Default value: 25% of physical RAM.
.sp
\fBzfs_dirty_data_max_max_percent\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: \fB25\fR.
.sp
\fBzfs_dirty_data_max_percent\fR (int)
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
.sp
\fBzfs_dirty_data_sync\fR (int)
Start syncing out a transaction group if there is at least this much dirty data.
Default value: \fB67,108,864\fR.
.sp
\fBzfs_fletcher_4_impl\fR (string)
Select a fletcher 4 implementation.
Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR,
\fBavx2\fR, and \fBavx512f\fR.
All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction
set extensions to be available and will only appear if ZFS detects that they are
present at runtime. If multiple implementations of fletcher 4 are available,
the \fBfastest\fR will be chosen using a micro benchmark. Selecting \fBscalar\fR
results in the original CPU-based calculation being used. Selecting any option
other than \fBfastest\fR and \fBscalar\fR results in vector instructions from
the respective CPU instruction set being used.
Default value: \fBfastest\fR.
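.sp
For example, the current selection can be inspected and changed at runtime
(assuming the chosen instruction set is supported on the system):
.sp
.nf
  cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
  echo sse2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl
.fi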
.sp
\fBzfs_free_bpobj_enabled\fR (int)
Enable/disable the processing of the free_bpobj object.
Default value: \fB1\fR.
.sp
\fBzfs_free_max_blocks\fR (ulong)
Maximum number of blocks freed in a single txg.
Default value: \fB100,000\fR.
.sp
\fBzfs_vdev_async_read_max_active\fR (int)
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB3\fR.
.sp
\fBzfs_vdev_async_read_min_active\fR (int)
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB60\fR.
.sp
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB30\fR.
.sp
\fBzfs_vdev_async_write_max_active\fR (int)
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_async_write_min_active\fR (int)
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_max_active\fR (int)
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
Default value: \fB1,000\fR.
.sp
\fBzfs_vdev_scrub_max_active\fR (int)
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB2\fR.
.sp
\fBzfs_vdev_scrub_min_active\fR (int)
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_sync_read_max_active\fR (int)
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_read_min_active\fR (int)
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_write_max_active\fR (int)
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_write_min_active\fR (int)
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_disable_dup_eviction\fR (int)
Disable duplicate buffer eviction.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_expire_snapshot\fR (int)
Seconds to expire .zfs/snapshot.
Default value: \fB300\fR.
.sp
\fBzfs_admin_snapshot\fR (int)
Allow the creation, removal, or renaming of entries in the .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled this functionality works both locally and over NFS exports
which have the 'no_root_squash' option set. This functionality is disabled
by default.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_flags\fR (int)
Set additional debugging flags. The following flags may be bitwise-or'd
together.
.sp
.nf
1   ZFS_DEBUG_DPRINTF
    Enable dprintf entries in the debug log.
2   ZFS_DEBUG_DBUF_VERIFY *
    Enable extra dbuf verifications.
4   ZFS_DEBUG_DNODE_VERIFY *
    Enable extra dnode verifications.
8   ZFS_DEBUG_SNAPNAMES
    Enable snapshot name verification.
16  ZFS_DEBUG_MODIFY
    Check for illegally modified ARC buffers.
32  ZFS_DEBUG_SPA
    Enable spa_dbgmsg entries in the debug log.
64  ZFS_DEBUG_ZIO_FREE
    Enable verification of block frees.
128 ZFS_DEBUG_HISTOGRAM_VERIFY
    Enable extra spacemap histogram verifications.
.fi
.sp
* Requires debug build.
Default value: \fB0\fR.
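.sp
Because the flags are bitwise-or'd together, several can be enabled at once.
For example, combining ZFS_DEBUG_SNAPNAMES (8) and ZFS_DEBUG_ZIO_FREE (64):
.sp
.nf
  # 8 | 64 = 72
  echo $(( 8 | 64 )) > /sys/module/zfs/parameters/zfs_flags
.fi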
.sp
\fBzfs_free_leak_on_eio\fR (int)
If destroy encounters an EIO while reading metadata (e.g. indirect
blocks), space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled", as
it is unable to make forward progress. While in this stalled state,
all remaining space to free from the error-encountering filesystem is
"temporarily leaked". Set this flag to cause it to ignore the EIO,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
The default, "stalling" behavior is useful if the storage partially
fails (i.e. some but not all i/os fail), and then later recovers. In
this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually
fairly rare.
Typically pools either (a) fail completely (but perhaps temporarily,
e.g. a top-level vdev going offline), or (b) have localized,
permanent errors (e.g. disk returns the wrong data due to bit flip or
firmware bug). In case (a), this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless. In case (b), because the error is
permanent, the best we can do is leak the minimum amount of space,
which is what setting this flag will do. Therefore, it is reasonable
for this flag to normally be set, but we chose the more conservative
approach of not setting it, so that there is no possibility of
leaking space in the "partial temporary" failure case.
Default value: \fB0\fR.
.sp
\fBzfs_free_min_time_ms\fR (int)
During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a
minimum of this much time will be spent working on freeing blocks per txg.
Default value: \fB1,000\fR.
.sp
\fBzfs_immediate_write_sz\fR (long)
Largest data block to write to zil. Larger blocks will be treated as if the
dataset being written to had the property setting \fBlogbias=throughput\fR.
Default value: \fB32,768\fR.
.sp
\fBzfs_max_recordsize\fR (int)
We currently support block sizes from 512 bytes to 16MB. The benefits of
larger blocks, and thus larger IO, need to be weighed against the cost of
COWing a giant block to modify one byte. Additionally, very large blocks
can have an impact on i/o latency, and also potentially on the memory
allocator. Therefore, we do not allow the recordsize to be set larger than
zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
this tunable, and pools with larger blocks can always be imported and used,
regardless of this setting.
Default value: \fB1,048,576\fR.
.sp
\fBzfs_mdcomp_disable\fR (int)
Disable meta data compression.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_metaslab_fragmentation_threshold\fR (int)
Allow metaslabs to keep their active state as long as their fragmentation
percentage is less than or equal to this value. An active metaslab that
exceeds this threshold will no longer keep its active status allowing
better metaslabs to be selected.
Default value: \fB70\fR.
.sp
\fBzfs_mg_fragmentation_threshold\fR (int)
Metaslab groups are considered eligible for allocations if their
fragmentation metric (measured as a percentage) is less than or equal to
this value. If a metaslab group exceeds this threshold then it will be
skipped unless all metaslab groups within the metaslab class have also
crossed this threshold.
Default value: \fB85\fR.
.sp
\fBzfs_mg_noalloc_threshold\fR (int)
Defines a threshold at which metaslab groups should be eligible for
allocations. The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold. Once all
groups have reached the threshold, all groups are allowed to accept
allocations. The default value of 0 disables the feature and causes
all metaslab groups to be eligible for allocations.
This parameter makes it possible to deal with pools having heavily
imbalanced vdevs such as would be the case when a new vdev has been added.
Setting the threshold to a non-zero percentage will stop allocations
from being made to vdevs that aren't filled to the specified percentage
and allow lesser filled vdevs to acquire more allocations than they
otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
Default value: \fB0\fR.
.sp
\fBzfs_no_scrub_io\fR (int)
Set for no scrub I/O. This results in scrubs not actually scrubbing data and
simply doing a metadata crawl of the pool instead.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_no_scrub_prefetch\fR (int)
Set to disable block prefetching for scrubs.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_nocacheflush\fR (int)
Disable cache flush operations on disks when writing. Beware, this may cause
corruption if disks re-order writes.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_nopwrite_enabled\fR (int)
Enable NOP writes.
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_pd_bytes_max\fR (int)
The number of bytes which should be prefetched during a pool traversal
(e.g. \fBzfs send\fR or other data crawling operations).
Default value: \fB52,428,800\fR.
.sp
\fBzfs_prefetch_disable\fR (int)
This tunable disables predictive prefetch. Note that it leaves "prescient"
prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch,
prescient prefetch never issues i/os that end up not being needed, so it
can't hurt performance.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_read_chunk_size\fR (long)
Bytes to read per chunk.
Default value: \fB1,048,576\fR.
.sp
\fBzfs_read_history\fR (int)
Historic statistics for the last N reads will be available in
\fB/proc/spl/kstat/zfs/POOLNAME/reads\fR.
Default value: \fB0\fR (no data is kept).
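.sp
For example, to keep statistics for the last 100 reads of a pool named
\fBtank\fR (the pool name here is only illustrative):
.sp
.nf
  echo 100 > /sys/module/zfs/parameters/zfs_read_history
  cat /proc/spl/kstat/zfs/tank/reads
.fi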
.sp
\fBzfs_read_history_hits\fR (int)
Include cache hits in read history.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_recover\fR (int)
Set to attempt to recover from fatal errors. This should only be used as a
last resort, as it typically results in leaked space, or worse.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_resilver_delay\fR (int)
Number of ticks to delay prior to issuing a resilver I/O operation when
a non-resilver or non-scrub I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB2\fR.
.sp
\fBzfs_resilver_min_time_ms\fR (int)
Resilvers are processed by the sync thread. While resilvering it will spend
at least this much time working on a resilver between txg flushes.
Default value: \fB3,000\fR.
.sp
\fBzfs_scan_idle\fR (int)
Idle window in clock ticks. During a scrub or a resilver, if
a non-scrub or non-resilver I/O operation has occurred during this
window, the next scrub or resilver operation is delayed by
\fBzfs_scrub_delay\fR or \fBzfs_resilver_delay\fR ticks, respectively.
Default value: \fB50\fR.
.sp
\fBzfs_scan_min_time_ms\fR (int)
Scrubs are processed by the sync thread. While scrubbing it will spend
at least this much time working on a scrub between txg flushes.
Default value: \fB1,000\fR.
.sp
\fBzfs_scrub_delay\fR (int)
Number of ticks to delay prior to issuing a scrub I/O operation when
a non-scrub or non-resilver I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB4\fR.
.sp
\fBzfs_send_corrupt_data\fR (int)
Allow sending of corrupt data (ignore read/checksum errors when sending data).
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_sync_pass_deferred_free\fR (int)
Flushing of data to disk is done in passes. Defer frees starting in this pass.
Default value: \fB2\fR.
.sp
\fBzfs_sync_pass_dont_compress\fR (int)
Don't compress starting in this pass.
Default value: \fB5\fR.
.sp
\fBzfs_sync_pass_rewrite\fR (int)
Rewrite new block pointers starting in this pass.
Default value: \fB2\fR.
.sp
\fBzfs_top_maxinflight\fR (int)
Max concurrent I/Os per top-level vdev (mirrors or raidz arrays) allowed during
scrub or resilver operations.
Default value: \fB32\fR.
.sp
\fBzfs_txg_history\fR (int)
Historic statistics for the last N txgs will be available in
\fB/proc/spl/kstat/zfs/POOLNAME/txgs\fR.
Default value: \fB0\fR.
.sp
\fBzfs_txg_timeout\fR (int)
Flush dirty data to disk at least every N seconds (maximum txg duration).
Default value: \fB5\fR.
.sp
\fBzfs_vdev_aggregation_limit\fR (int)
Max vdev I/O aggregation size.
Default value: \fB131,072\fR.
.sp
\fBzfs_vdev_cache_bshift\fR (int)
Shift size to inflate reads to.
Default value: \fB16\fR (effectively 65536).
.sp
\fBzfs_vdev_cache_max\fR (int)
Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR
size.
Default value: \fB16384\fR.
.sp
\fBzfs_vdev_cache_size\fR (int)
Total size of the per-disk cache in bytes.
Currently this feature is disabled as it has been found to not be helpful
for performance and in some cases harmful.
Default value: \fB0\fR.
.sp
\fBzfs_vdev_mirror_rotating_inc\fR (int)
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O immediately
follows its predecessor on rotational vdevs for the purpose of making decisions
based on load.
Default value: \fB0\fR.
.sp
\fBzfs_vdev_mirror_rotating_seek_inc\fR (int)
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
this that are not immediately following the previous I/O are incremented by
half.
Default value: \fB5\fR.
.sp
\fBzfs_vdev_mirror_rotating_seek_offset\fR (int)
The maximum distance for the last queued I/O in which the balancing algorithm
considers an I/O to have locality.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1048576\fR.
.sp
\fBzfs_vdev_mirror_non_rotating_inc\fR (int)
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member on non-rotational vdevs
when I/Os do not immediately follow one another.
Default value: \fB0\fR.
.sp
\fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int)
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
this that are not immediately following the previous I/O are incremented by
half.
Default value: \fB1\fR.
.sp
\fBzfs_vdev_read_gap_limit\fR (int)
Aggregate read I/O operations if the gap on-disk between them is within this
threshold.
Default value: \fB32,768\fR.
.sp
\fBzfs_vdev_scheduler\fR (charp)
Set the Linux I/O scheduler on whole disk vdevs to this scheduler.
Default value: \fBnoop\fR.
.sp
\fBzfs_vdev_write_gap_limit\fR (int)
Aggregate write I/O over gap.
Default value: \fB4,096\fR.
.sp
\fBzfs_vdev_raidz_impl\fR (string)
Parameter for selecting raidz parity implementation to use.
Options marked (always) below may be selected on module load as they are
supported on all systems.
The remaining options may only be set after the module is loaded, as they
are available only if the implementations are compiled in and supported
on the running system.
Once the module is loaded, the content of
/sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options
with the currently selected one enclosed in [].
Possible options are:
.sp
.nf
  fastest  - (always) implementation selected using built-in benchmark
  original - (always) original raidz implementation
  scalar   - (always) scalar raidz implementation
  sse2     - implementation using SSE2 instruction set (64bit x86 only)
  ssse3    - implementation using SSSE3 instruction set (64bit x86 only)
  avx2     - implementation using AVX2 instruction set (64bit x86 only)
.fi
.sp
Default value: \fBfastest\fR.
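.sp
For example, inspecting the available options (the bracketed selection and
the set of listed implementations will vary by system) and changing it:
.sp
.nf
  cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
      [fastest] original scalar sse2 ssse3 avx2
  echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.fi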
.sp
\fBzfs_zevent_cols\fR (int)
When zevents are logged to the console use this as the word wrap width.
Default value: \fB80\fR.
.sp
\fBzfs_zevent_console\fR (int)
Log events to the console.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_zevent_len_max\fR (int)
Max event queue length. A value of 0 will result in a calculated value which
increases with the number of CPUs in the system (minimum 64 events). Events
in the queue can be viewed with the \fBzpool events\fR command.
Default value: \fB0\fR.
.sp
\fBzil_replay_disable\fR (int)
Disable intent logging replay. Can be disabled for recovery from a corrupted
ZIL.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzil_slog_limit\fR (ulong)
Max commit bytes to separate log device.
Default value: \fB1,048,576\fR.
.sp
\fBzio_delay_max\fR (int)
A zevent will be logged if a ZIO operation takes more than N milliseconds to
complete. Note that this is only a logging facility, not a timeout on
the operations.
Default value: \fB30,000\fR.
.sp
\fBzio_requeue_io_start_cut_in_line\fR (int)
Prioritize requeued I/O.
Default value: \fB0\fR.
.sp
\fBzio_taskq_batch_pct\fR (uint)
Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
for IO. These workers are responsible for IO work such as compression and
checksum calculations. Fractional number of CPUs will be rounded down.
The default value of 75 was chosen to avoid using all CPUs which can result in
latency issues and inconsistent application performance, especially when high
compression is enabled.
Default value: \fB75\fR.
.sp
\fBzvol_inhibit_dev\fR (uint)
Do not create zvol device nodes. This may slightly improve startup time on
systems with a very large number of zvols.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzvol_major\fR (uint)
Major number for zvol block devices.
Default value: \fB230\fR.
.sp
\fBzvol_max_discard_blocks\fR (ulong)
Discard (aka TRIM) operations done on zvols will be done in batches of this
many blocks, where block size is determined by the \fBvolblocksize\fR property
of the zvol.
Default value: \fB16,384\fR.
.sp
\fBzvol_prefetch_bytes\fR (uint)
When adding a zvol to the system prefetch \fBzvol_prefetch_bytes\fR
from the start and end of the volume. Prefetching these regions
of the volume is desirable because they are likely to be accessed
immediately by \fBblkid(8)\fR or by the kernel scanning for a partition
table.
Default value: \fB131,072\fR.
.SH ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
The I/O scheduler determines when and in what order those operations are
issued. The I/O scheduler divides operations into five I/O classes
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver. Each queue defines the minimum and
maximum number of concurrent operations that may be issued to the
device. In addition, the device has an aggregate maximum,
\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
be issued regardless of whether all per-queue minimums have been met.
.sp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.sp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
.sp
In general, smaller max_active's will lead to lower latency of synchronous
operations. Larger max_active's may lead to higher overall throughput,
depending on underlying storage.
.sp
The ratio of the queues' max_actives determines the balance of performance
between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
more quickly, but reads and writes to have higher latency and lower throughput.
.sp
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other -- and in particular synchronous
-- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
.sp
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
.sp
.nf
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
.fi
.sp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
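.sp
As a worked illustration of this interpolation (using the default tunables;
the 45% figure is just an example), dirty data midway between the min and
max dirty percentages yields an async write limit midway between
\fBzfs_vdev_async_write_min_active\fR and \fBzfs_vdev_async_write_max_active\fR:
.sp
.nf
  # limit = min_active + (dirty% - min%) * (max_active - min_active)
  #                      / (max% - min%)
  # With min%=30, max%=60, min_active=1, max_active=10, dirty at 45%:
  echo $(( 1 + (45 - 30) * (10 - 1) / (60 - 30) ))    # prints 5
.fi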
.sp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.
.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.sp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
.sp
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
.sp
The minimum time for a transaction to take is calculated as:
.sp
.nf
  min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
  min_time is then capped at 100 milliseconds.
.fi
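.sp
A worked example under the defaults (\fBzfs_delay_scale\fR = 500,000 and
\fBzfs_delay_min_dirty_percent\fR = 60): with dirty data at 80% of
\fBzfs_dirty_data_max\fR, i.e. midway between the 60% floor and the 100%
limit, min_time = 500,000 * (80 - 60) / (100 - 80) = 500,000 ns, or 500us.
.sp
.nf
  # min_time (ns) for dirty = 80%, min = 60%, max = 100%.
  echo $(( 500000 * (80 - 60) / (100 - 80) ))    # prints 500000
.fi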
.sp
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
delay after writing at full speed has failed to keep up with the incoming write
rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
this variable determines the amount of delay at the midpoint of the curve.
.sp
.nf
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
      |                                                             *|
      |                                                            * |
  2ms +                                              (midpoint)   *  +
      |                                                   |     **   |
      |      zfs_delay_scale ---------->                  v  ***     |
    0 +-------------------------------------*********----------------+
      0%                  <- zfs_dirty_data_max ->                100%
.fi
.sp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the
amount of delay.
.sp
The effects can be easier to understand when the amount of delay is
represented on a log scale:
.sp
.nf
delay
(log)
100ms +------------------------------------------------------------*++
      |                                                           **  |
      |                                                        ***    |
      + zfs_delay_scale ---------->                        *****      +
      |                                              ******           |
      |                                       *******                 |
      +-------------------------------********------------------------+
      0%                  <- zfs_dirty_data_max ->                 100%
.fi
.sp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.