Fix 'zpool add' handling of nested interior VDEVs
author    LOLi <loli10K@users.noreply.github.com>
          Thu, 28 Dec 2017 18:15:32 +0000 (19:15 +0100)
committer Tony Hutter <hutter2@llnl.gov>
          Tue, 30 Jan 2018 16:27:31 +0000 (10:27 -0600)
commit    a8fa31b50b958306cd39c21e8518f776ee59f1b6
tree      847c4082960a2bb9e77f3c83cf51a1f933700b5c
parent    8d82a19def540bba43c8c7597142ff53f7a0b7e5

When replacing a faulted device which was previously handled by a spare,
multiple levels of nested interior VDEVs will be present in the pool
configuration; the following example illustrates one possible
situation:

   NAME                          STATE     READ WRITE CKSUM
   testpool                      DEGRADED     0     0     0
     raidz1-0                    DEGRADED     0     0     0
       spare-0                   DEGRADED     0     0     0
         replacing-0             DEGRADED     0     0     0
           /var/tmp/fault-dev    UNAVAIL      0     0     0  cannot open
           /var/tmp/replace-dev  ONLINE       0     0     0
         /var/tmp/spare-dev1     ONLINE       0     0     0
       /var/tmp/safe-dev         ONLINE       0     0     0
   spares
     /var/tmp/spare-dev1         INUSE     currently in use

This configuration is safe and allowed, but get_replication() needs to
handle it gracefully so that 'zpool add' can still add new devices to
the pool.
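
The change lands in get_replication() (cmd/zpool/zpool_vdev.c). The
sketch below is a minimal illustration of the idea, not the verbatim
patch: it assumes the libnvpair lookup functions and the
ZPOOL_CONFIG_*/VDEV_TYPE_* constants from sys/fs/zfs.h, and
first_leaf_child() is a hypothetical helper name. The essential point
is to loop, rather than test once, so a 'replacing' vdev nested inside
a 'spare' vdev is fully unwrapped before replication levels are
compared:

   #include <string.h>
   #include <libnvpair.h>
   #include <sys/fs/zfs.h>

   /*
    * Descend through nested interior vdevs ('spare', 'replacing')
    * until a leaf child is reached. A single check would stop at the
    * first interior level and mis-handle a 'replacing-0' vdev nested
    * under 'spare-0', as in the status output above.
    */
   static nvlist_t *
   first_leaf_child(nvlist_t *cnv)
   {
       char *type;
       nvlist_t **child;
       uint_t children;

       if (nvlist_lookup_string(cnv, ZPOOL_CONFIG_TYPE, &type) != 0)
           return (cnv);

       while (strcmp(type, VDEV_TYPE_REPLACING) == 0 ||
           strcmp(type, VDEV_TYPE_SPARE) == 0) {
           if (nvlist_lookup_nvlist_array(cnv, ZPOOL_CONFIG_CHILDREN,
               &child, &children) != 0 || children == 0)
               break;
           cnv = child[0];
           if (nvlist_lookup_string(cnv, ZPOOL_CONFIG_TYPE, &type) != 0)
               break;
       }
       return (cnv);
   }

With the nesting collapsed down to the underlying leaf, the replication
level of the raidz1-0 children can be determined consistently and
'zpool add' can add new devices to the pool as intended.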

Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #6678
Closes #6996
cmd/zpool/zpool_vdev.c
tests/runfiles/linux.run
tests/zfs-tests/include/libtest.shlib
tests/zfs-tests/tests/functional/cli_root/zpool_add/Makefile.am
tests/zfs-tests/tests/functional/cli_root/zpool_add/add_nested_replacing_spare.ksh [new file with mode: 0755]