states that all statements must be indented to an appropriate tab stop,
and any continuation lines after them must be indented \fIexactly\fP four
spaces from the start line. This option enables a series of checks
-designed to find contination line problems within functions only. The
+designed to find continuation line problems within functions only. The
checks have some limitations; see CONTINUATION CHECKING, below.
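.sp
As an illustrative sketch (not taken from the checker's own documentation), a statement that satisfies these rules starts at a tab stop and its continuation line is indented exactly four spaces from the start line:
.sp
.in +2
.nf
if (this_is_a_long_variable == another_variable &&
    some_other_condition)      /* continuation: exactly 4 spaces in */
        a = b;                 /* statement body at the next tab stop */
.fi
.in -2
.sp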
.LP
.TP 4
.LP
.TP 4
.B \-o \fIconstructs\fP
-Allow a comma-seperated list of additional constructs. Available
+Allow a comma-separated list of additional constructs. Available
constructs include:
.LP
.TP 10
.LP
.SH CONTINUATION CHECKING
.LP
-The continuation checker is a resonably simple state machine that knows
-something about how C is layed out, and can match parenthesis, etc. over
+The continuation checker is a reasonably simple state machine that knows
+something about how C is laid out, and can match parenthesis, etc. over
multiple lines. It does have some limitations:
.LP
.TP 4
.B
multiple statements continued over multiple lines
A multi-line statement which is not broken at statement
-boundries. For example:
+boundaries. For example:
.RS 4
.HP 4
if (this_is_a_long_variable == another_variable) a =
\fBprobe_failure\fR
.ad
.RS 12n
-Issued when a probe fails on a vdev. This would occur if a vdeev
+Issued when a probe fails on a vdev. This would occur if a vdev
has been kicked from the system outside of ZFS (such as when the kernel
has removed the device).
.RE
\fBvdev_state\fR
.ad
.RS 12n
-State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healty).
+State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
.RE
.sp
allocations. The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
-the threshold, the allocator will avoid allocating to that group
+threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold. Once all
groups have reached the threshold, all groups are allowed to accept
allocations. The default value of 0 disables the feature and causes
\fBzio_delay_max\fR (int)
.ad
.RS 12n
-Max zio millisec delay before posting event
+Max zio millisecond delay before posting event
.sp
Default value: \fB30,000\fR.
.RE
When this feature is enabled, the contents of highly-compressible blocks are
stored in the block "pointer" itself (a misnomer in this case, as it contains
-the compresseed data, rather than a pointer to its location on disk). Thus
+the compressed data, rather than a pointer to its location on disk). Thus
the space of the block (one sector, typically 512 bytes or 4KB) is saved,
and no additional i/o is needed to read and write the data block.
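.sp
Assuming this paragraph describes the \fBembedded_data\fR pool feature (the feature name does not appear in this excerpt), a sketch of enabling it looks like any other feature activation:
.sp
.in +2
.nf
# zpool set feature@embedded_data=enabled rpool
.fi
.in -2
.sp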
# zdb -S rpool
Simulated DDT histogram:
-bucket allocated referenced
+bucket allocated referenced
______ ______________________________ ______________________________
refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
------ ------ ----- ----- ----- ------ ----- ----- -----
.LP
.nf
-\fBzfs\fR \fBsnapshot | snap\fR [\fB-r\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ...
+\fBzfs\fR \fBsnapshot | snap\fR [\fB-r\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ...
\fIfilesystem@snapname\fR|\fIvolume@snapname\fR ...
.fi
.LP
.nf
-\fBzfs\fR \fBget\fR [\fB-r\fR|\fB-d\fR \fIdepth\fR][\fB-Hp\fR][\fB-o\fR \fIfield\fR[,...]] [\fB-t\fR \fItype\fR[,...]]
+\fBzfs\fR \fBget\fR [\fB-r\fR|\fB-d\fR \fIdepth\fR][\fB-Hp\fR][\fB-o\fR \fIfield\fR[,...]] [\fB-t\fR \fItype\fR[,...]]
[\fB-s\fR \fIsource\fR[,...]] "\fIall\fR" | \fIproperty\fR[,...] \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR ...
.fi
.LP
.nf
-\fBzfs\fR \fBmount\fR
+\fBzfs\fR \fBmount\fR
.fi
.LP
.LP
.nf
-\fBzfs\fR \fBallow\fR [\fB-ldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] \fIperm\fR|\fI@setname\fR[,...]
+\fBzfs\fR \fBallow\fR [\fB-ldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] \fIperm\fR|\fI@setname\fR[,...]
\fIfilesystem\fR|\fIvolume\fR
.fi
.LP
.nf
-\fBzfs\fR \fBunallow\fR [\fB-rldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] [\fIperm\fR|@\fIsetname\fR[,... ]]
+\fBzfs\fR \fBunallow\fR [\fB-rldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] [\fIperm\fR|@\fIsetname\fR[,... ]]
\fIfilesystem\fR|\fIvolume\fR
.fi
.sp
\fBWARNING: DO NOT ENABLE DEDUPLICATION UNLESS YOU NEED IT AND KNOW EXACTLY WHAT YOU ARE DOING!\fR
.sp
-Deduplicating data is a very resource-intensive operation. It is generally recommended that you have \fIat least\fR 1.25 GB of RAM per 1 TB of storage when you enable deduplication. But calculating the exact requirenments is a somewhat complicated affair. Please see the \fBOracle Dedup Guide\fR for more information..
+Deduplicating data is a very resource-intensive operation. It is generally recommended that you have \fIat least\fR 1.25 GB of RAM per 1 TB of storage when you enable deduplication. But calculating the exact requirements is a somewhat complicated affair. Please see the \fBOracle Dedup Guide\fR for more information.
.sp
Enabling deduplication on an improperly-designed system will result in extreme performance issues (extremely slow filesystem and snapshot deletions, etc.) and can potentially lead to data loss (i.e. an unimportable pool due to memory exhaustion) if your system is not built for this purpose. Deduplication affects the processing power (CPU), disks (and the controller) as well as primary (real) memory.
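.sp
As a rough worked example (the ~320 bytes of RAM per deduplication-table entry is a commonly cited estimate, not a guarantee), a pool holding 1 TB of unique data in 128 KB blocks needs on the order of:
.sp
.in +2
.nf
1 TB / 128 KB per block  = ~8 million DDT entries
8 million x ~320 bytes   = ~2.5 GB of RAM for the DDT
.fi
.in -2
.sp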
.sp
Every dataset has a set of properties that export statistics about the dataset as well as control various behaviors. Properties are inherited from the parent unless overridden by the child. Some properties apply only to certain types of datasets (file systems, volumes, or snapshots).
.sp
.LP
-The values of numeric properties can be specified using human-readable suffixes (for example, \fBk\fR, \fBKB\fR, \fBM\fR, \fBGb\fR, and so forth, up to \fBZ\fR for zettabyte). The following are all valid (and equal) specifications:
+The values of numeric properties can be specified using human-readable suffixes (for example, \fBk\fR, \fBKB\fR, \fBM\fR, \fBGb\fR, and so forth, up to \fBZ\fR for zettabyte). The following are all valid (and equal) specifications:
.sp
.in +2
.nf
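.\" illustrative sketch: all three of these specify the same size
1536M, 1.5g, 1.50GB
.fi
.in -2
.sp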
this property, \fBon\fR does not select a fixed compression type. As
new compression algorithms are added to ZFS and enabled on a pool, the
default compression algorithm may change. The current default compression
-algorthm is either \fBlzjb\fR or, if the \fBlz4_compress\fR feature is
+algorithm is either \fBlzjb\fR or, if the \fBlz4_compress\fR feature is
enabled, \fBlz4\fR.
.sp
The \fBlzjb\fR compression algorithm is optimized for performance while
.ad
.sp .6
.RS 4n
-Controls the mount point used for this file system. See the "Mount Points" section for more information on how this property is used.
+Controls the mount point used for this file system. See the "Mount Points" section for more information on how this property is used.
.sp
When the \fBmountpoint\fR property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is \fBlegacy\fR, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously \fBlegacy\fR or \fBnone\fR, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location.
.RE
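.sp
A minimal sketch (dataset and mount point are illustrative):
.sp
.in +2
.nf
# zfs set mountpoint=/export/home tank/home
.fi
.in -2
.sp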
.ad
.sp .6
.RS 4n
-Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. \fBZFS\fR automatically tunes block sizes according to internal algorithms optimized for typical access patterns.
+Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. \fBZFS\fR automatically tunes block sizes according to internal algorithms optimized for typical access patterns.
.sp
For databases that create very large files but access them in small random chunks, these algorithms may be suboptimal. Specifying a \fBrecordsize\fR greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged, and may adversely affect performance.
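.sp
For example, matching an assumed 8 KB database record size (dataset name is illustrative):
.sp
.in +2
.nf
# zfs set recordsize=8K tank/db
.fi
.in -2
.sp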
.sp
.sp
If the \fBsharesmb\fR property is set to \fBoff\fR, the file systems are unshared.
.sp
-In Linux, the share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", ie. read and write permissions) and no guest access (which means samba must be able to authenticate a real user, system passwd/shadow, ldap or smbpasswd based) by default. This means that any additional access control (dissalow specific user specific access etc) must be done on the underlaying filesystem.
+In Linux, the share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user, system passwd/shadow, ldap or smbpasswd based) by default. This means that any additional access control (disallowing specific user access, etc.) must be done on the underlying filesystem.
.sp
.in +2
Example to mount an SMB filesystem shared through ZFS (share/tmp):
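.sp
.in +2
.nf
.\" hedged sketch: host, share name, and mount point are illustrative
# mount -t cifs //127.0.0.1/share_tmp /mnt/zfs -o user=username
.fi
.in -2
.sp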
.ad
.sp .6
.RS 4n
-Promotes a clone file system to no longer be dependent on its "origin" snapshot. This makes it possible to destroy the file system that the clone was created from. The clone parent-child dependency relationship is reversed, so that the origin file system becomes a clone of the specified file system.
+Promotes a clone file system to no longer be dependent on its "origin" snapshot. This makes it possible to destroy the file system that the clone was created from. The clone parent-child dependency relationship is reversed, so that the origin file system becomes a clone of the specified file system.
.sp
The snapshot that was cloned, and any snapshots previous to this snapshot, are now owned by the promoted clone. The space they use moves from the origin file system to the promoted clone, so enough space must be available to accommodate these snapshots. No new space is consumed by this operation, but the space accounting is adjusted. The promoted clone must not have any conflicting snapshot names of its own. The \fBrename\fR subcommand can be used to rename any conflicting snapshots.
.RE
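.sp
A minimal sketch (dataset and snapshot names are illustrative):
.sp
.in +2
.nf
# zfs clone tank/prod@today tank/beta
# zfs promote tank/beta
.fi
.in -2
.sp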
.ad
.sp .6
.RS 4n
-Recursively display any children of the dataset on the command line.
+Recursively display any children of the dataset on the command line.
.RE
.sp
.ad
.sp .6
.RS 4n
-Same as the \fB-s\fR option, but sorts by property in descending order.
+Same as the \fB-s\fR option, but sorts by property in descending order.
.RE
.sp
.ad
.sp .6
.RS 4n
-A comma-separated list of columns to display. \fBname,property,value,source\fR is the default value.
+A comma-separated list of columns to display. \fBname,property,value,source\fR is the default value.
.RE
.sp
.RS 4n
Upgrades file systems to a new on-disk version. Once this is done, the file systems will no longer be accessible on systems running older versions of the software. \fBzfs send\fR streams generated from new snapshots of these file systems cannot be accessed on systems running older versions of the software.
.sp
-In general, the file system version is independent of the pool version. See \fBzpool\fR(8) for information on the \fBzpool upgrade\fR command.
+In general, the file system version is independent of the pool version. See \fBzpool\fR(8) for information on the \fBzpool upgrade\fR command.
.sp
In some cases, the file system version and the pool version are interrelated and the pool version must be upgraded before the file system version can be upgraded.
.sp
.ad
.sp .6
.RS 4n
-Upgrade the specified file system.
+Upgrade the specified file system.
.RE
.sp
.ad
.sp .6
.RS 4n
-Upgrade the specified file system and all descendent file systems
+Upgrade the specified file system and all descendent file systems.
.RE
.sp
.ad
.sp .6
.RS 4n
-Shares available \fBZFS\fR file systems.
+Shares available \fBZFS\fR file systems.
.sp
.ne 2
.mk
.ad
.sp .6
.RS 4n
-Share all available \fBZFS\fR file systems. Invoked automatically as part of the boot process.
+Share all available \fBZFS\fR file systems. Invoked automatically as part of the boot process.
.RE
.sp
.ad
.sp .6
.RS 4n
-Unshare all available \fBZFS\fR file systems. Invoked automatically as part of the boot process.
+Unshare all available \fBZFS\fR file systems. Invoked automatically as part of the boot process.
.RE
.sp
.RS 4n
Generate a replication stream package, which will replicate the specified filesystem, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved.
.sp
-If the \fB-i\fR or \fB-I\fR flags are used in conjunction with the \fB-R\fR flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the \fB-F\fR flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed.
+If the \fB-i\fR or \fB-I\fR flags are used in conjunction with the \fB-R\fR flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the \fB-F\fR flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed.
.RE
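.sp
A hedged sketch (pool, snapshot, and host names are illustrative):
.sp
.in +2
.nf
# zfs send -R -i tank@monday tank@tuesday | ssh backuphost zfs receive -F -d backup
.fi
.in -2
.sp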
.sp
rename subcommand Must also have the 'mount' and 'create'
ability in the new parent
rollback subcommand Must also have the 'mount' ability
-send subcommand
+send subcommand
share subcommand Allows sharing file systems over NFS or SMB
protocols
snapshot subcommand Must also have the 'mount' ability
userused other Allows reading any userused@... property
acltype property
-aclinherit property
-atime property
-canmount property
-casesensitivity property
-checksum property
-compression property
-copies property
+aclinherit property
+atime property
+canmount property
+casesensitivity property
+checksum property
+compression property
+copies property
dedup property
-devices property
-exec property
+devices property
+exec property
filesystem_limit property
logbias property
mlslabel property
-mountpoint property
-nbmand property
-normalization property
-primarycache property
-quota property
-readonly property
-recordsize property
-refquota property
-refreservation property
-reservation property
-secondarycache property
-setuid property
-sharenfs property
-sharesmb property
-snapdir property
+mountpoint property
+nbmand property
+normalization property
+primarycache property
+quota property
+readonly property
+recordsize property
+refquota property
+refreservation property
+reservation property
+secondarycache property
+setuid property
+sharenfs property
+sharesmb property
+snapdir property
snapshot_limit property
-utf8only property
-version property
-volblocksize property
-volsize property
-vscan property
-xattr property
-zoned property
+utf8only property
+version property
+volblocksize property
+volsize property
+vscan property
+xattr property
+zoned property
.fi
.in -2
.sp
create,destroy
Local+Descendent permissions on (tank/users)
group staff create,mount
--------------------------------------------------------------
+-------------------------------------------------------------
.fi
.in -2
.sp
cindys% \fBzfs set quota=10G users/home/marks\fR
cindys% \fBzfs get quota users/home/marks\fR
NAME PROPERTY VALUE SOURCE
-users/home/marks quota 10G local
+users/home/marks quota 10G local
.fi
.in -2
.sp
create,destroy
Local+Descendent permissions on (tank/users)
group staff @pset,create,mount
--------------------------------------------------------------
+-------------------------------------------------------------
.fi
.in -2
.sp
.ad
.sp .6
.RS 4n
-Successful completion.
+Successful completion.
.RE
.sp
.LP
.nf
-\fBzpool upgrade\fR
+\fBzpool upgrade\fR
.fi
.LP
\fB\fBdisk\fR\fR
.ad
.RS 10n
-.rt
+.rt
A block device, typically located under \fB/dev\fR. \fBZFS\fR can use individual partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under "/dev"). For example, "sda" is equivalent to "/dev/sda". A whole disk can be specified by omitting the partition designation. When given a whole disk, \fBZFS\fR automatically labels the disk, if necessary.
.RE
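.sp
For example, the following two commands are equivalent (device name is illustrative):
.sp
.in +2
.nf
# zpool create tank sda
# zpool create tank /dev/sda
.fi
.in -2
.sp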
\fB\fBfile\fR\fR
.ad
.RS 10n
-.rt
+.rt
A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path.
.RE
\fB\fBmirror\fR\fR
.ad
.RS 10n
-.rt
+.rt
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with \fIN\fR disks of size \fIX\fR can hold \fIX\fR bytes and can withstand (\fIN-1\fR) devices failing before data integrity is compromised.
.RE
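.sp
For example, a two-way mirror (device names are illustrative):
.sp
.in +2
.nf
# zpool create tank mirror sda sdb
.fi
.in -2
.sp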
\fB\fBraidz3\fR\fR
.ad
.RS 10n
-.rt
+.rt
A variation on \fBRAID-5\fR that allows for better distribution of parity and eliminates the "\fBRAID-5\fR write hole" (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks within a \fBraidz\fR group.
.sp
A \fBraidz\fR group can have single, double, or triple parity, meaning that the \fBraidz\fR group can sustain one, two, or three failures, respectively, without losing any data. The \fBraidz1\fR \fBvdev\fR type specifies a single-parity \fBraidz\fR group; the \fBraidz2\fR \fBvdev\fR type specifies a double-parity \fBraidz\fR group; and the \fBraidz3\fR \fBvdev\fR type specifies a triple-parity \fBraidz\fR group. The \fBraidz\fR \fBvdev\fR type is an alias for \fBraidz1\fR.
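.sp
For example, a double-parity group that can sustain two failures (device names are illustrative):
.sp
.in +2
.nf
# zpool create tank raidz2 sda sdb sdc sdd
.fi
.in -2
.sp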
\fB\fBspare\fR\fR
.ad
.RS 10n
-.rt
+.rt
A special pseudo-\fBvdev\fR which keeps track of available hot spares for a pool. For more information, see the "Hot Spares" section.
.RE
\fB\fBlog\fR\fR
.ad
.RS 10n
-.rt
+.rt
A separate-intent log device. If more than one log device is specified, then writes are load-balanced between devices. Log devices can be mirrored. However, \fBraidz\fR \fBvdev\fR types are not supported for the intent log. For more information, see the "Intent Log" section.
.RE
\fB\fBcache\fR\fR
.ad
.RS 10n
-.rt
+.rt
A device used to cache storage pool data. A cache device cannot be configured as a mirror or \fBraidz\fR group. For more information, see the "Cache Devices" section.
.RE
In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or \fBraidz\fR groups. While \fBZFS\fR supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable.
.sp
.LP
-A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.
+A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.
.sp
.LP
The health of the top-level vdev, such as mirror or \fBraidz\fR device, is potentially impacted by the state of its associated vdevs, or component devices. A top-level vdev or component device is in one of the following states:
\fB\fBDEGRADED\fR\fR
.ad
.RS 12n
-.rt
+.rt
One or more top-level vdevs is in the degraded state because one or more component devices are offline. Sufficient replicas exist to continue functioning.
.sp
One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning. The underlying conditions are as follows:
\fB\fBFAULTED\fR\fR
.ad
.RS 12n
-.rt
-One or more top-level vdevs is in the faulted state because one or more component devices are offline. Insufficient replicas exist to continue functioning.
+.rt
+One or more top-level vdevs is in the faulted state because one or more component devices are offline. Insufficient replicas exist to continue functioning.
.sp
One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning. The underlying conditions are as follows:
.RS +4
.TP
.ie t \(bu
.el o
-The device could be opened, but the contents did not match expected values.
+The device could be opened, but the contents did not match expected values.
.RE
.RS +4
.TP
\fB\fBOFFLINE\fR\fR
.ad
.RS 12n
-.rt
+.rt
The device was explicitly taken offline by the "\fBzpool offline\fR" command.
.RE
\fB\fBONLINE\fR\fR
.ad
.RS 12n
-.rt
+.rt
The device is online and functioning.
.RE
\fB\fBREMOVED\fR\fR
.ad
.RS 12n
-.rt
+.rt
The device was physically removed while the system was running. Device removal detection is hardware-dependent and may not be supported on all platforms.
.RE
\fB\fBUNAVAIL\fR\fR
.ad
.RS 12n
-.rt
+.rt
The device could not be opened. If a pool is imported when a device was unavailable, then the device will be identified by a unique identifier instead of its path since the path was never correct in the first place.
.RE
.SS "Hot Spares"
.sp
.LP
-\fBZFS\fR allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a "spare" \fBvdev\fR with any number of devices. For example,
+\fBZFS\fR allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a "spare" \fBvdev\fR with any number of devices. For example,
.sp
.in +2
.nf
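.\" illustrative sketch: a mirrored pool with two hot spares
# zpool create pool mirror sda sdb spare sdc sdd
.fi
.in -2
.sp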
\fB\fBavailable\fR\fR
.ad
.RS 20n
-.rt
+.rt
Amount of storage available within the pool. This property can also be referred to by its shortened column name, "avail".
.RE
\fB\fBcapacity\fR\fR
.ad
.RS 20n
-.rt
+.rt
Percentage of pool space used. This property can also be referred to by its shortened column name, "cap".
.RE
\fB\fBhealth\fR\fR
.ad
.RS 20n
-.rt
+.rt
The current health of the pool. Health can be "\fBONLINE\fR", "\fBDEGRADED\fR", "\fBFAULTED\fR", "\fBOFFLINE\fR", "\fBREMOVED\fR", or "\fBUNAVAIL\fR".
.RE
\fB\fBguid\fR\fR
.ad
.RS 20n
-.rt
+.rt
A unique identifier for the pool.
.RE
\fB\fBsize\fR\fR
.ad
.RS 20n
-.rt
+.rt
Total size of the storage pool.
.RE
\fB\fBused\fR\fR
.ad
.RS 20n
-.rt
+.rt
Amount of storage space used within the pool.
.RE
.ad
.sp .6
.RS 4n
-Controls the location of where the pool configuration is cached. Discovering all pools on system startup requires a cached copy of the configuration data that is stored on the root file system. All pools in this cache are automatically imported when the system boots. Some environments, such as install and clustering, need to cache this information in a different location so that pools are not automatically imported. Setting this property caches the pool configuration in a different location that can later be imported with "\fBzpool import -c\fR". Setting it to the special value "\fBnone\fR" creates a temporary pool that is never cached, and the special value \fB\&''\fR (empty string) uses the default location.
+Controls where the pool configuration is cached. Discovering all pools on system startup requires a cached copy of the configuration data that is stored on the root file system. All pools in this cache are automatically imported when the system boots. Some environments, such as install and clustering, need to cache this information in a different location so that pools are not automatically imported. Setting this property caches the pool configuration in a different location that can later be imported with "\fBzpool import -c\fR". Setting it to the special value "\fBnone\fR" creates a temporary pool that is never cached, and the special value \fB\&''\fR (empty string) uses the default location.
.sp
Multiple pools can share the same cache file. Because the kernel destroys and recreates this file when pools are added and removed, care should be taken when attempting to access this file. When the last pool using a \fBcachefile\fR is exported or destroyed, the file is removed.
.RE
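.sp
A hedged sketch (path and device names are illustrative):
.sp
.in +2
.nf
# zpool create -o cachefile=/etc/zfs/alt.cache tank mirror sda sdb
# zpool import -c /etc/zfs/alt.cache -a
.fi
.in -2
.sp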
.ad
.sp .6
.RS 4n
-Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored. The default setting is 0 which causes no ditto copies to be created for deduplicated blocks. The miniumum legal nonzero setting is 100.
+Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored. The default setting is 0 which causes no ditto copies to be created for deduplicated blocks. The minimum legal nonzero setting is 100.
.RE
.sp
\fB\fBwait\fR\fR
.ad
.RS 12n
-.rt
+.rt
Blocks all \fBI/O\fR access until the device connectivity is recovered and the errors are cleared. This is the default behavior.
.RE
\fB\fBcontinue\fR\fR
.ad
.RS 12n
-.rt
+.rt
Returns \fBEIO\fR to any new write \fBI/O\fR requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked.
.RE
\fB\fBpanic\fR\fR
.ad
.RS 12n
-.rt
+.rt
Prints out a message to the console and generates a system crash dump.
.RE
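.sp
Assuming these are the values of the \fBfailmode\fR pool property, a sketch of selecting one (pool name is illustrative):
.sp
.in +2
.nf
# zpool set failmode=continue tank
.fi
.in -2
.sp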
\fB\fB-f\fR\fR
.ad
.RS 6n
-.rt
+.rt
Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
.RE
\fB\fB-n\fR\fR
.ad
.RS 6n
-.rt
+.rt
Displays the configuration that would be used without actually adding the \fBvdev\fRs. The actual pool creation can still fail due to insufficient privileges or device sharing.
.RE
\fB\fB-f\fR\fR
.ad
.RS 6n
-.rt
+.rt
Forces use of \fInew_device\fR, even if it appears to be in use. Not all devices can be overridden in this manner.
.RE
\fB\fB-f\fR\fR
.ad
.RS 6n
-.rt
+.rt
Forces any active datasets contained within the pool to be unmounted.
.RE
\fB\fB-i\fR\fR
.ad
.RS 6n
-.rt
+.rt
Displays internally logged \fBZFS\fR events in addition to user-initiated events.
.RE
\fB\fB-l\fR\fR
.ad
.RS 6n
-.rt
+.rt
Displays log records in long format, which in addition to the standard format includes the user name, the hostname, and the zone in which the operation was performed.
.RE
.ad
.sp .6
.RS 4n
-Lists pools available to import. If the \fB-d\fR option is not specified, this command searches for devices in "/dev". The \fB-d\fR option can be specified multiple times, and all directories are searched. If the device appears to be part of an exported pool, this command displays a summary of the pool with the name of the pool, a numeric identifier, as well as the \fIvdev\fR layout and current health of the device for each device or file. Destroyed pools, pools that were previously destroyed with the "\fBzpool destroy\fR" command, are not listed unless the \fB-D\fR option is specified.
+Lists pools available to import. If the \fB-d\fR option is not specified, this command searches for devices in "/dev". The \fB-d\fR option can be specified multiple times, and all directories are searched. If the device appears to be part of an exported pool, this command displays a summary of the pool with the name of the pool, a numeric identifier, as well as the \fIvdev\fR layout and current health of the device for each device or file. Destroyed pools, pools that were previously destroyed with the "\fBzpool destroy\fR" command, are not listed unless the \fB-D\fR option is specified.
.sp
The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available.
.sp
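For example, to search an alternate device directory (path is illustrative):
.sp
.in +2
.nf
# zpool import -d /dev/disk/by-id
.fi
.in -2
.sp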
\fB\fB-c\fR \fIcachefile\fR\fR
.ad
.RS 16n
-.rt
+.rt
Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices.
.RE
\fB\fB-d\fR \fIdir\fR\fR
.ad
.RS 16n
-.rt
-Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times.
+.rt
+Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times.
.RE
.sp
\fB\fB-D\fR\fR
.ad
.RS 16n
-.rt
+.rt
Lists destroyed pools only.
.RE
\fB\fB-o\fR \fImntopts\fR\fR
.ad
.RS 21n
-.rt
+.rt
Comma-separated list of mount options to use when mounting datasets within the pool. See \fBzfs\fR(8) for a description of dataset properties and mount options.
.RE
\fB\fB-o\fR \fIproperty=value\fR\fR
.ad
.RS 21n
-.rt
+.rt
Sets the specified property on the imported pool. See the "Properties" section for more information on the available pool properties.
.RE
\fB\fB-c\fR \fIcachefile\fR\fR
.ad
.RS 21n
-.rt
+.rt
Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices.
.RE
\fB\fB-d\fR \fIdir\fR\fR
.ad
.RS 21n
-.rt
+.rt
Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times. This option is incompatible with the \fB-c\fR option.
.RE
\fB\fB-D\fR\fR
.ad
.RS 21n
-.rt
+.rt
Imports destroyed pools only. The \fB-f\fR option is also required.
.RE
\fB\fB-f\fR\fR
.ad
.RS 21n
-.rt
+.rt
Forces import, even if the pool appears to be potentially active.
.RE
\fB\fB-a\fR\fR
.ad
.RS 21n
-.rt
-Searches for and imports all pools found.
+.rt
+Searches for and imports all pools found.
.RE
.sp
\fB\fB-R\fR \fIroot\fR\fR
.ad
.RS 21n
-.rt
+.rt
Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR" property to "\fIroot\fR".
.RE
\fB\fB-T\fR \fBu\fR | \fBd\fR\fR
.ad
.RS 12n
-.rt
+.rt
Display a time stamp.
.sp
Specify \fBu\fR for a printed representation of the internal representation of time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See \fBdate\fR(1).
\fB\fB-v\fR\fR
.ad
.RS 12n
-.rt
+.rt
Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within the pool, in addition to the pool-wide statistics.
.RE
\fB\fB-H\fR\fR
.ad
.RS 12n
-.rt
+.rt
Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.
.RE
\fB\fB-T\fR \fBd\fR | \fBu\fR\fR
.ad
.RS 12n
-.rt
+.rt
Display a time stamp.
.sp
Specify \fBu\fR for a printed representation of the internal representation of time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See \fBdate\fR(1).
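.sp
For example, verbose statistics every 5 seconds with a date-format time stamp (pool name and interval are illustrative):
.sp
.in +2
.nf
# zpool iostat -v -T d tank 5
.fi
.in -2
.sp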
\fB\fB-o\fR \fIprops\fR\fR
.ad
.RS 12n
-.rt
+.rt
Comma-separated list of properties to display. See the "Properties" section for a list of valid properties. The default list is "name, size, used, available, fragmentation, expandsize, capacity, dedupratio, health, altroot".
.RE
\fB\fB-t\fR\fR
.ad
.RS 6n
-.rt
+.rt
Temporary. Upon reboot, the specified physical device reverts to its previous state.
.RE
\fB\fB-e\fR\fR
.ad
.RS 6n
-.rt
+.rt
Expand the device to use all available space. If the device is part of a mirror or \fBraidz\fR then all devices must be expanded before the new space will become available to the pool.
.RE
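.sp
For example, after replacing a device with a larger one (names are illustrative):
.sp
.in +2
.nf
# zpool online -e tank sda
.fi
.in -2
.sp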
\fB\fB-f\fR\fR
.ad
.RS 6n
-.rt
+.rt
Forces use of \fInew_device\fR, even if it appears to be in use. Not all devices can be overridden in this manner.
.RE
\fB\fB-s\fR\fR
.ad
.RS 6n
-.rt
+.rt
Stop scrubbing.
.RE
.ad
.sp .6
.RS 4n
-Set \fIaltroot\fR for \fInewpool\fR and automaticaly import it. This can be useful to avoid mountpoint collisions if \fInewpool\fR is imported on the same filesystem as \fIpool\fR.
+Set \fIaltroot\fR for \fInewpool\fR and automatically import it. This can be useful to avoid mountpoint collisions if \fInewpool\fR is imported on the same filesystem as \fIpool\fR.
.RE
.sp
\fB\fB-x\fR\fR
.ad
.RS 12n
-.rt
+.rt
Only display status for pools that are exhibiting errors or are otherwise unavailable. Warnings about pools not using the latest on-disk format will not be included.
.RE
\fB\fB-v\fR\fR
.ad
.RS 12n
-.rt
+.rt
Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub.
.RE
.sp
.LP
-Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the \fBiostat\fR option as follows:
+Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the \fBiostat\fR option as follows:
.sp
.in +2
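.nf
.\" illustrative sketch: pool name and 5-second interval are assumptions
# zpool iostat -v pool 5
.fi
.in -2
.sp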
.LP
The following command displays the detailed information for the \fIdata\fR
pool. This pool is comprised of a single \fIraidz\fR vdev where one of its
-devices increased its capacity by 10GB. In this example, the pool will not
+devices increased its capacity by 10GB. In this example, the pool will not
be able to utilize this extra capacity until all the devices under the
\fIraidz\fR vdev have been expanded.
\fB\fB0\fR\fR
.ad
.RS 5n
-.rt
-Successful completion.
+.rt
+Successful completion.
.RE
.sp
\fB\fB1\fR\fR
.ad
.RS 5n
-.rt
+.rt
An error occurred.
.RE
\fB\fB2\fR\fR
.ad
.RS 5n
-.rt
+.rt
Invalid command line options were specified.
.RE