author	Tom Caputi <tcaputi@datto.com>
	Fri, 19 Oct 2018 04:06:18 +0000 (00:06 -0400)
committer	Brian Behlendorf <behlendorf1@llnl.gov>
	Fri, 19 Oct 2018 04:06:18 +0000 (21:06 -0700)
commit	80a91e7469669e2a5da5873b8f09a752f7869062
tree	ef5a4462892becccb939b2cd42a54ed580f5894f
parent	9f438c5f948c0072f16431407a373ead34fabf6e
Defer new resilvers until the current one ends

Currently, if a resilver is triggered for any reason while an
existing one is running, ZFS immediately restarts the existing
resilver from the beginning to include the new drive. This causes
problems for system administrators when a drive fails while another
is already resilvering. In this case, the optimal way to reduce the
risk of data loss is to let the current resilver finish and replace
the second failed drive immediately afterward, which minimizes the
time the system operates with two incomplete drives.
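
A minimal sketch of the admin workflow this enables (pool and device
names are illustrative, not taken from the patch):

    # First failure: replacing sdb starts a resilver.
    zpool replace tank sdb sdf

    # Second failure while that resilver is running: with
    # resilver_defer active, this no longer restarts the scan from
    # the beginning; the new drive's resilver is queued and begins
    # automatically once the current one finishes.
    zpool replace tank sdc sdg

    # Progress of the running resilver is visible as usual.
    zpool status tank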

This patch introduces the resilver_defer feature, which does this for
the admin automatically: a resilver requested while another is
running is deferred and started once the current one completes,
without forcing the admin to wait and monitor the resilver manually.
The change requires an on-disk feature since drives that are part of
a deferred resilver must be marked in the vdev config; otherwise,
when the existing resilver completes, they would incorrectly be
assumed to be done resilvering.
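
As with other pool features, resilver_defer must be enabled on the
pool before the deferral behavior takes effect; a short example (pool
name illustrative):

    # Enable the feature and confirm its state
    # (disabled / enabled / active).
    zpool set feature@resilver_defer=enabled tank
    zpool get feature@resilver_defer tank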

Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: @mmaybee
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #7732
28 files changed:
cmd/zpool/zpool_main.c
configure.ac
include/sys/fs/zfs.h
include/sys/spa_impl.h
include/sys/vdev.h
include/sys/vdev_impl.h
include/zfeature_common.h
include/zfs_gitrev.h [new file with mode: 0644]
man/man5/zpool-features.5
man/man8/zpool.8
module/zcommon/zfeature_common.c
module/zfs/dsl_scan.c
module/zfs/spa.c
module/zfs/vdev.c
module/zfs/vdev_label.c
tests/runfiles/linux.run
tests/zfs-tests/tests/functional/cli_root/Makefile.am
tests/zfs-tests/tests/functional/cli_root/zpool_get/zpool_get.cfg
tests/zfs-tests/tests/functional/cli_root/zpool_reopen/zpool_reopen.shlib
tests/zfs-tests/tests/functional/cli_root/zpool_reopen/zpool_reopen_004_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_reopen/zpool_reopen_005_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_resilver/Makefile.am [new file with mode: 0644]
tests/zfs-tests/tests/functional/cli_root/zpool_resilver/cleanup.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_resilver/setup.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_resilver/zpool_resilver.cfg [new file with mode: 0644]
tests/zfs-tests/tests/functional/cli_root/zpool_resilver/zpool_resilver_bad_args.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_resilver/zpool_resilver_restart.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_offline_device.ksh
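
The new tests under cli_root/zpool_resilver exercise the zpool
resilver subcommand added alongside this feature; a minimal usage
sketch (pool name illustrative):

    # Restart an in-progress resilver from the beginning; any
    # resilvers that were deferred are rolled into the new pass.
    zpool resilver tank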