Detect IO errors during device removal
author    Brian Behlendorf <behlendorf1@llnl.gov>
          Tue, 4 Dec 2018 17:37:37 +0000 (09:37 -0800)
committer GitHub <noreply@github.com>
          Tue, 4 Dec 2018 17:37:37 +0000 (09:37 -0800)
commit    7c9a42921e60dbad0e3003bd571591f073860233
tree      7dcdfdf535f286a9c3d4dc5f4996ed0e59c501c2
parent    c40a1124e1d1010b665909ad31d2904630018f6f
Detect IO errors during device removal

* Detect IO errors during device removal

While device removal cannot verify the checksums of individual
blocks as they are copied, it can reasonably detect hard IO
errors from the leaf vdevs.  Failing to perform this error
checking can result in device removal completing successfully
while having moved no data, which permanently corrupts the pool.
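
For illustration, one way the resulting damage can be observed
after the fact is to scrub the pool, which should report the blocks
that were never actually copied as errors:

  # Illustrative only: after a removal that silently copied nothing,
  # a scrub should surface the missing data as permanent errors.
  zpool scrub tank
  zpool status -v tank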

Situation 1: faulted/degraded vdevs

In the configuration shown below, the removal of mirror-0 will
permanently corrupt the pool.  Device removal will preferentially
copy data from 'vdev1 -> vdev3' and from 'vdev2 -> vdev4', which
in this case results in nothing being copied since one vdev in
each of those groups is unavailable.  However, device removal
will still complete successfully since all IO errors are ignored.

  tank                DEGRADED     0     0     0
    mirror-0          DEGRADED     0     0     0
      /var/tmp/vdev1  FAULTED      0     0     0  external fault
      /var/tmp/vdev2  ONLINE       0     0     0
    mirror-1          DEGRADED     0     0     0
      /var/tmp/vdev3  ONLINE       0     0     0
      /var/tmp/vdev4  FAULTED      0     0     0  external fault

This issue is resolved by updating the source child selection
logic to exclude unreadable leaf vdevs.  Additionally, unwritable
destination child vdevs, to which writes can never succeed, are
skipped to prevent generating a large number of write IO errors.
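
For reference, the degraded configuration above can be constructed
with file vdevs roughly as follows; the complete scenario is
exercised by the new removal_with_faulted.ksh test, whose exact
steps may differ:

  # Rough sketch using file vdevs; paths match the status output above.
  truncate -s 1G /var/tmp/vdev{1,2,3,4}
  zpool create tank \
      mirror /var/tmp/vdev1 /var/tmp/vdev2 \
      mirror /var/tmp/vdev3 /var/tmp/vdev4

  # Externally fault one leaf in each mirror ('zpool offline -f'
  # forces the FAULTED state shown above).
  zpool offline -f tank /var/tmp/vdev1
  zpool offline -f tank /var/tmp/vdev4

  # With this change the removal skips unreadable source leaves and
  # unwritable destination leaves rather than silently dropping IO.
  zpool remove tank mirror-0
  zpool status -v tank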

Situation 2: individual hard IO errors

During removal, if an unexpected hard IO error is encountered when
either reading or writing a child vdev, the entire removal
operation is cancelled.  While it may be possible to reconstruct
the data after removal, that cannot be guaranteed.  The only
strictly safe thing to do is to cancel the removal.
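
A rough way to exercise this cancellation path, assuming a healthy
pool with the same layout as above, is to inject hard read errors
on the source leaves with zinject; the authoritative coverage for
this behavior is the new removal_with_errors.ksh test:

  # Illustrative sketch; removal_with_errors.ksh validates this
  # behavior properly.  Inject unretryable read errors on both
  # leaves of mirror-0 so the copy cannot be satisfied from the
  # other side of the mirror.
  zinject -d /var/tmp/vdev1 -e io -T read tank
  zinject -d /var/tmp/vdev2 -e io -T read tank

  # Start the removal; with this change it should be cancelled
  # rather than reported as successful.
  zpool remove tank mirror-0
  zpool status -v tank

  # Clear the injection handlers.
  zinject -c all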

As a future improvement we may want to instead suspend the removal
process and allow the damaged region to be retried.  But that work
is left for another time; hard IO errors during the removal process
are expected to be exceptionally rare.

Reviewed-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Tom Caputi <tcaputi@datto.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #6900
Closes #8161
man/man5/zfs-module-parameters.5
man/man8/zpool.8
module/zfs/vdev_removal.c
tests/runfiles/linux.run
tests/zfs-tests/include/libtest.shlib
tests/zfs-tests/tests/functional/removal/Makefile.am
tests/zfs-tests/tests/functional/removal/removal.kshlib
tests/zfs-tests/tests/functional/removal/removal_with_errors.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/removal/removal_with_faulted.ksh [new file with mode: 0755]