ilovezfs [Thu, 28 Jan 2016 12:51:19 +0000 (04:51 -0800)]
OpenZFS 6585 - sha512, skein, and edonr have an unenforced dependency on extensible dataset
Authored by: ilovezfs <ilovezfs@icloud.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported by: Tony Hutter <hutter2@llnl.gov>
In any pool without the extensible dataset feature flag already enabled,
creating a dataset with dedup set to use one of the new checksums would
result in a panic as soon as any data was added.
Inspection showed that feature->fi_feature was 7, which is the value of
SPA_FEATURE_EXTENSIBLE_DATASET in the spa_feature enum. This commit
adds extensible dataset as a dependency for the sha512, edonr, and skein
feature flags, which prevents the panic.
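For illustration, a feature's dependencies are expressed as a
SPA_FEATURE_NONE-terminated array passed to zfeature_register(); a minimal
sketch (the exact registration signature varies between releases):

static const spa_feature_t sha512_deps[] = {
	SPA_FEATURE_EXTENSIBLE_DATASET,	/* the previously missing dependency */
	SPA_FEATURE_NONE
};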
OpenZFS-issue: https://www.illumos.org/issues/6585
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/892586e8a147c02d7f4053cc405229a13e796928
Porting Notes:
This code was originally from Illumos, but I actually ported it from:
openzfsonosx/zfs@b62a652
ilovezfs [Tue, 26 Jan 2016 07:41:11 +0000 (23:41 -0800)]
OpenZFS 6541 - Pool feature-flag check defeated if "verify" is included in the dedup property value
Authored by: ilovezfs <ilovezfs@icloud.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported-by: Tony Hutter <hutter2@llnl.gov>
zio_checksum_to_feature() expects a zio_checksum enum, not a raw property
intval, so the new checksums weren't being detected when the
ZIO_CHECKSUM_VERIFY flag got in the way. Setting dedup to one of the new
checksums with verify would incorrectly succeed because ZIO_CHECKSUM_VERIFY
would be in the way: the raw intval would not be a member of the enum,
zio_checksum_to_feature() would return SPA_FEATURE_NONE, and
spa_feature_is_enabled() would never be called.
This was first detected with edonr, since in that case verify is
required.
This commit clears the ZIO_CHECKSUM_VERIFY flag before calling
zio_checksum_to_feature() by applying ZIO_CHECKSUM_MASK, and makes
zio_checksum_to_feature() verify that the caller has applied
ZIO_CHECKSUM_MASK, to prevent the same bug from occurring again in the
future.
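A sketch of the resulting shape (simplified; the enum case list is
abbreviated):

spa_feature_t
zio_checksum_to_feature(enum zio_checksum cksum)
{
	/* catch callers that forgot to strip the verify flag */
	VERIFY((cksum & ~ZIO_CHECKSUM_MASK) == 0);

	switch (cksum) {
	case ZIO_CHECKSUM_SHA512:
		return (SPA_FEATURE_SHA512);
	case ZIO_CHECKSUM_SKEIN:
		return (SPA_FEATURE_SKEIN);
	case ZIO_CHECKSUM_EDONR:
		return (SPA_FEATURE_EDONR);
	default:
		return (SPA_FEATURE_NONE);
	}
}

/*
 * Caller side, e.g. when checking a settable property:
 *	feature = zio_checksum_to_feature(intval & ZIO_CHECKSUM_MASK);
 */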
- Copied module/icp/include/sha2/sha2.h directly from illumos
- Removed from module/icp/algs/sha2/sha2.c:
#pragma inline(SHA256Init, SHA384Init, SHA512Init)
- Added 'ctx' to lib/libzfs/libzfs_sendrecv.c:zio_checksum_SHA256() since
it now takes in an extra parameter.
- Added CTASSERT() to assert.h for module/zfs/edonr_zfs.c
- Added skein & edonr to libicp/Makefile.am
- Added sha512.S. It was generated from sha512-x86_64.pl in Illumos.
- Updated ztest.c with new fletcher_4_*() args; used NULL for new CTX argument.
- In icp/algs/edonr/edonr_byteorder.h, removed the #if defined(__linux) section
to avoid #including the nonexistent endian.h.
- In skein_test.c, renamed NULL to 0 in "no test vector" array entries to get
around a compiler warning.
- Fixup test files:
- Rename <sys/varargs.h> -> <varargs.h>, <strings.h> -> <string.h>,
- Remove <note.h> and define NOTE() as NOP.
- Define u_longlong_t
- Rename "#!/usr/bin/ksh" -> "#!/bin/ksh -p"
- Rename NULL to 0 in "no test vector" array entries to get around a
compiler warning.
- Remove "for isa in $($ISAINFO); do" stuff
- Add/update Makefiles
- Add some userspace headers like stdio.h/stdlib.h in place of
sys/types.h.
- EXPORT_SYMBOL *_Init/*_Update/*_Final... routines in ICP modules.
- Update scripts/zfs2zol-patch.sed
- include <sys/sha2.h> in sha2_impl.h
- Add sha2.h to include/sys/Makefile.am
- Add skein and edonr dirs to icp Makefile
- Add new checksums to zpool_get.cfg
- Move checksum switch block from zfs_secpolicy_setprop() to
zfs_check_settable()
- Fix -Wuninitialized error in edonr_byteorder.h on PPC
- Fix stack frame size errors on ARM32
- Don't unroll loops in Skein on 32-bit to save stack space
- Add memory barriers in sha2.c on 32-bit to save stack space
- Add filetest_001_pos.ksh checksum sanity test
- Add option to write pseudorandom data in the file_write utility
Romain Dolbeau [Mon, 3 Oct 2016 16:44:00 +0000 (18:44 +0200)]
Add parity generation/rebuild using 128-bits NEON for Aarch64
This re-uses the framework established for SSE2, SSSE3 and
AVX2. However, GCC uses FP registers on AArch64, so
unlike SSE/AVX2 we can't rely on the registers being left alone
between ASM statements. Instead, the NEON code uses
C variables and GCC extended ASM syntax. Note that since
the kernel explicitly disables vector registers, they
have to be locally re-enabled explicitly.
As we use the variable's number to define the symbolic
name, and GCC won't allow duplicate symbolic names,
numbers have to be unique, even when the code is not
going to be used (e.g. the case for 4 registers when
using the macro with only 2). Only the actually used
variables should be declared, otherwise the build
fails in debug mode.
This requires the replacement of the XOR(X,X) syntax
by a new ZERO(X) macro, which does the same thing but
without repeating the argument. And perhaps someday
there will be a machine where there is a more efficient
way to zero a register than XOR with itself. This affects
scalar, SSE2, SSSE3 and AVX2 as they need the new macro.
It's possible to write faster implementations (different
scheduling, different unrolling, interleaving NEON and
scalar, ...) for various cores, but this one has the
advantage of fitting in the current state of the code,
and thus is likely easier to review/check/merge.
The only difference between aarch64-neon and aarch64-neonx2
is that aarch64-neonx2 unrolls some functions further.
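An illustrative sketch of the pattern (AArch64 only; the macro and variable
names here are hypothetical, not the actual raidz macros):

typedef unsigned char v16 __attribute__((vector_size(16)));

/*
 * The register number forms the asm symbolic name, so every number in
 * one asm statement must be unique: XOR(0, 0) would declare [w0]
 * twice and fail to compile, hence a dedicated ZERO().
 */
#define	XOR(s, d)							\
	asm("eor %[w" #d "].16b, %[w" #d "].16b, %[w" #s "].16b"	\
	    : [w##d] "+w" (w##d)					\
	    : [w##s] "w" (w##s))

#define	ZERO(d)								\
	asm("eor %[w" #d "].16b, %[w" #d "].16b, %[w" #d "].16b"	\
	    : [w##d] "=w" (w##d))

static void
parity_step(const v16 *dcol, v16 *p)
{
	v16 w0 = *dcol, w1;

	ZERO(1);		/* w1 = 0 without naming [w1] twice */
	XOR(0, 1);		/* w1 ^= w0 */
	*p = w1;
}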
Richard Laager [Sun, 2 Oct 2016 18:34:17 +0000 (13:34 -0500)]
Correct zpool_vdev_remove() error message
The error message in zpool_vdev_remove() said top-level devices
could be removed, but that has never been true.
Reported-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Richard Laager <rlaager@wiktel.com>
Closes #4506
Closes #5213
coverity scan CID: 147531, type: Argument cannot be negative
- may copy data with negative size
coverity scan CID: 147532, type: resource leaks
- may close a fd which is negative
coverity scan CID: 147533, type: resource leaks
- may call pwrite64 with a negative size
coverity scan CID: 147535, type: resource leaks
- may call fdopen with a negative fd
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: GeLiXin <ge.lixin@zte.com.cn>
Closes #5176
coverity scan CID: 147536, type: Argument cannot be negative
- may write or close fd which is negative
coverity scan CID: 147537, type: Argument cannot be negative
- may call dup2 with a negative fd
coverity scan CID: 147538, type: Argument cannot be negative
- may read or fchown with a negative fd
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: GeLiXin <ge.lixin@zte.com.cn>
Closes #5185
Brian Behlendorf [Fri, 30 Sep 2016 22:04:21 +0000 (15:04 -0700)]
Fix cppcheck warning in buf_init()
Cppcheck 1.63 erroneously complains about an uninitialized value
in buf_init(). Newer versions of cppcheck (1.72) handle this
correctly but we'll initialize the value anyway to silence the
warning.
Reviewed-by: Richard Elling <Richard.Elling@RichardElling.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5203
Brian Behlendorf [Fri, 30 Sep 2016 19:12:53 +0000 (12:12 -0700)]
Disable zpool_import_002_pos and ro_props_001_pos
These test cases fail some percentage of the time resulting
in automated testing failures. Disable the offending tests
until they can be made reliable.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #5201
Issue #5202
Closes #5194
Gvozden Neskovic [Wed, 31 Aug 2016 08:12:08 +0000 (10:12 +0200)]
fix: Shift exponent too large
An undefined operation is reported when running ztest (or zloop) compiled with
the GCC UndefinedBehaviorSanitizer. The error only happens at the top level of
dnode indirection with large enough offset values. Logically, the left shift
operation would work, but the bit shift semantics of C, and the limits of
uint64_t, do not produce the desired result.
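A minimal, self-contained sketch of this class of bug (not the actual ZFS
code):

#include <stdint.h>

/*
 * Shifting a uint64_t by 64 or more bits is undefined behavior in C,
 * so the shift count must be checked before shifting.
 */
static uint64_t
span_size(unsigned datablkshift, unsigned epbs, unsigned level)
{
	unsigned shift = datablkshift + level * epbs;

	if (shift >= 64)		/* the shift itself would be UB */
		return (UINT64_MAX);	/* saturate instead */
	return (1ULL << shift);
}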
Isaac Huang [Thu, 29 Sep 2016 20:13:31 +0000 (14:13 -0600)]
Explicit block device plugging when submitting multiple BIOs
Without plugging, the default 'noop' scheduler will not merge
the BIOs which are part of a large ZIO.
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Isaac Huang <he.huang@intel.com>
Closes #5181
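A hedged sketch of the plugging pattern (function and variable names are
illustrative, and submit_bio()'s signature varies by kernel version):

#include <linux/blkdev.h>
#include <linux/bio.h>

/*
 * Plugging lets the elevator see all BIOs belonging to one large ZIO
 * at once, so adjacent BIOs can be merged before dispatch.
 */
static void
vdev_submit_bios(struct bio **bios, int count)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);		/* begin per-task BIO queueing */
	for (i = 0; i < count; i++)
		submit_bio(bios[i]);
	blk_finish_plug(&plug);		/* unplug: queued BIOs dispatch */
}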
Fix zfs_clone_010_pos.ksh to verify the zfs clones property displays correctly
Because the macro ZFS_MAXPROPLEN used in the print_dataset function
differs between platforms, set it appropriately and calculate the expected
number of passes.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Signed-off-by: yuxiang <guo.yong33@zte.com.cn>
Closes #5154
Fix zfs_clone_010_pos.ksh to verify the space used by multiple copies
The default blocksize in Linux is 1024 due to a GNU-ism. Setting the
expected blocksize resolves the issue. As mentioned in the PR an
alternate solution would be to set POSIXLY_CORRECT=1.
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: yuxiang <guo.yong33@zte.com.cn>
Closes #5167
Refactor the code so that inode->i_mode is set
at the same time zp->z_mode is changed. This has the effect of
keeping both in sync without relying on zfs_inode_update.
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com>
Closes #5158
Linux 4.7 compat: Fix deadlock during lookup on case-insensitive
We must not use d_add_ci() if the dentry already has the real name. Otherwise,
d_add_ci()->d_alloc_parallel() will find itself on the lookup hash and wait
on itself, causing a deadlock.
OpenZFS 6111 - zfs send should ignore datasets created after the ending snapshot
Authored by: Alex Deiter <alex.deiter@gmail.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Garrett D'Amore <garrett@damore.org>
Ported-by: kernelOfTruth <kerneloftruth@gmail.com>
OpenZFS-issue: https://www.illumos.org/issues/6111
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/4a20c933
Closes #5110
Porting notes:
There were changes from upstream due to the following commits:
- zfs send -p send properties only for snapshots that are actually sent
https://github.com/zfsonlinux/zfs/commit/057485504e3a4502c265813ab58e9ec8ffc2a3be
- Produce a full snapshot list for zfs send -p
https://github.com/zfsonlinux/zfs/commit/e890dd85a7522730ad46daf68150aafd3952d0c1
- Implement zfs_ioc_recv_new() for OpenZFS 2605
https://github.com/zfsonlinux/zfs/commit/43e52eddb13d8accbd052fac9a242ce979531aa4
- OpenZFS 6314 - buffer overflow in dsl_dataset_name
ZFS_MAXNAMELEN was changed to the now-used ZFS_MAX_DATASET_NAME_LEN in
https://github.com/zfsonlinux/zfs/commit/eca7b76001a7d33f78bd98884aef8325bdbf98e7
Due to how the Linux VFS was designed, busy mount points
cannot be destroyed even when given the force option. Update
the zfs_destroy_001_pos test case to expect this behavior when
running under Linux.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: candychencan <chen.can2@zte.com.cn>
Closes #5132
Brian Behlendorf [Wed, 21 Sep 2016 20:45:21 +0000 (13:45 -0700)]
Fix automatically generated release number
When building from the head of a branch a release number is
automatically generated with `git describe` using the last tag
on that branch as the base. For this to work the last tag on the
branch needs to be predictable given the current META file.
This logic was accidentally broken when an -rcX tag was added to
the branch. Update it to search for a VERSION or VERSION-RELEASE
tag.
Reviewed-by: Chris Siebenmann <cks.git01@cs.toronto.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5105
Closes #5140
Moritz Maxeiner [Wed, 21 Sep 2016 20:35:16 +0000 (22:35 +0200)]
Fix regression that broke dracut initramfs generation
Based upon @ryao's initial fix for 1c73494394fc9de9283b3fd4f00bcdf4bd300a7
(5e9843405f63fdabe76e87b92b81a127d488abc7), this one also uses
`command -v` instead of `type`, but additionally only applies the
fix to close zfsonlinux/zfs#4749 when `libgcc_s.so.1` has not been included
by dracut automatically (verified by whether `zpool` links directly to
`libgcc_s.so`), as well as changing the fallback option to match `libgcc_s.so*`.
Tested-by: Ben Jencks <ben@bjencks.net>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Moritz Maxeiner <moritz@ucworks.org>
Closes #5089
Closes #5138
zfs_commands.cfg printed "No such file or directory" when executing
script/zfs-test.sh. The script is a symlink to ../../../zfs-script-config.sh,
so delete the symlink and directly source $SRCDIR/zfs-script-config.sh
from default.cfg.in when it exists.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: legend-hua <liu.hua130@zte.com.cn>
Closes #5133
The FALLOC_FL_PUNCH_HOLE flag was introduced in the 2.6.38
kernel. To prevent breaking the build on older systems, wrap its use
in a conditional. When FALLOC_FL_PUNCH_HOLE isn't available,
return a non-zero status and an error message.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: legend-hua <liu.hua130@zte.com.cn>
Closes #5101
Nikolay Borisov [Mon, 12 Sep 2016 19:35:56 +0000 (22:35 +0300)]
Simplify time handling logic in zfs_setattr
Simplify time handling in zfs_setattr by mimicking the logic in
setattr_copy from the Linux kernel. In order to achieve this,
in the case when ZFS' log is being replayed it is necessary
to unconditionally set the ctime in zfs_replay_setattr.
Also use the timespec_trunc function when assigning values to the
generic inode struct. This is currently a noop since zfs sets
s_time_gran to 1; however, in the future the rules about precision
might change.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Chunwei Chen <david.chen@osnexus.com>
Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com>
Closes #4916
Nikolay Borisov [Mon, 1 Aug 2016 20:02:25 +0000 (23:02 +0300)]
Refactor generic inode time updating
ZFS doesn't provide a custom update_time method meaning it delegates
this job to the generic VFS layer. The only time when it needs to
set the various *time values is when the inode is being marshalled
to/from the disk. Do this by moving the relevant code from
zfs_inode_update_impl to zfs_node_alloc and zfs_rezget. As a result
of this change it is no longer necessary to have multiple versions
of the zfs_inode_update function - so just nuke them and leave only
one.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Chunwei Chen <david.chen@osnexus.com>
Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com>
Issue #227
Closes #4916
George Wilson [Thu, 2 Jun 2016 04:04:53 +0000 (00:04 -0400)]
OpenZFS 6950 - ARC should cache compressed data
Authored by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Tom Caputi <tcaputi@datto.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Ported by: David Quigley <david.quigley@intel.com>
This review covers the reading and writing of compressed arc headers, sharing
data between the arc_hdr_t and the arc_buf_t, and the implementation of a new
dbuf cache to keep frequently accessed data uncompressed.
I've added a new member to the l1 arc hdr called b_pdata. The b_pdata always hangs
off the arc_buf_hdr_t (if an L1 hdr is in use) and points to the physical block
for that DVA. The physical block may or may not be compressed. If compressed
arc is enabled and the block on-disk is compressed, then the b_pdata will match
the block on-disk and remain compressed in memory. If the block on disk is not
compressed, then neither will the b_pdata be. Lastly, if compressed arc is
disabled, then b_pdata will always be an uncompressed version of the on-disk
block.
Typically the arc will cache only the arc_buf_hdr_t and will aggressively evict
any arc_buf_t's that are no longer referenced. This means that the arc will
primarily have compressed blocks as the arc_buf_t's are considered overhead and
are always uncompressed. When a consumer reads a block we first look to see if
the arc_buf_hdr_t is cached. If the hdr is cached then we allocate a new
arc_buf_t and decompress the b_pdata contents into the arc_buf_t's b_data. If
the hdr already has an arc_buf_t, then we will allocate an additional arc_buf_t
and bcopy the uncompressed contents from the first arc_buf_t to the new one.
Writing to the compressed arc requires that we first discard the b_pdata since
the physical block is about to be rewritten. The new data contents will be
passed in via an arc_buf_t (uncompressed) and during the I/O pipeline stages we
will copy the physical block contents to a newly allocated b_pdata.
When an l2arc is in use it will also take advantage of the b_pdata. Now the
l2arc will always write the contents of b_pdata to the l2arc. This means that
when compressed arc is enabled the l2arc blocks are identical to those
stored in the main data pool. This provides a significant advantage since we
can leverage the bp's checksum when reading from the l2arc to determine if the
contents are valid. If the compressed arc is disabled, then we must first
transform the read block to look like the physical block in the main data pool
before comparing the checksum and determining whether it's valid.
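A heavily simplified sketch of the read path described above (every helper
here is hypothetical, not the actual ARC code):

/*
 * Sketch: hdr->b_pdata caches the on-disk bytes; every reader gets an
 * uncompressed arc_buf_t view of them.
 */
static arc_buf_t *
arc_hdr_to_buf(arc_buf_hdr_t *hdr)
{
	arc_buf_t *buf = arc_buf_alloc(hdr);		/* hypothetical */

	if (hdr->b_buf != NULL)		/* another uncompressed view exists */
		bcopy(hdr->b_buf->b_data, buf->b_data, hdr_lsize(hdr));
	else if (hdr_compressed(hdr))	/* decompress the cached bytes */
		decompress(hdr->b_pdata, buf->b_data);
	else
		bcopy(hdr->b_pdata, buf->b_data, hdr_lsize(hdr));
	return (buf);
}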
Nikolay Borisov [Sat, 10 Sep 2016 20:06:17 +0000 (23:06 +0300)]
Refactor spa_load_l2cache to make build happy
In case sav->sav_config was NULL the body of the function
would skip the iteration of the l2 cache devices and just
clean up the old devices. However, this wasn't very obvious,
since the null check was performed after the loop body and after
the old devices were cleaned up. Refactor the code so that it's now
obvious when the iteration of the l2cache devices is skipped.
This fixes the following cppcheck warning:
[module/zfs/spa.c:1552]: (error) Possible null pointer dereference: newvdevs
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com>
Closes #5087
Tim Chase [Sat, 10 Sep 2016 15:16:13 +0000 (10:16 -0500)]
Free property names with spa_strfree() rather than strfree()
Since they're allocated with spa_strdup(), they should be freed with
spa_strfree() so the proper length buffer is freed.
Reviewed-by: Richard Yao <ryao@gentoo.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tim Chase <tim@chase2k.com>
Closes #5082
Closes #5086
OpenZFS - Performance regression suite for zfstest
Author: John Wren Kennedy <john.kennedy@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: David Quigley <david.quigley@intel.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Ported-by: Don Brady <don.brady@intel.com>
OpenZFS-issue: https://www.illumos.org/issues/6950
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/dcbf3bd6
Delphix-commit: https://github.com/delphix/delphix-os/commit/978ed49
Closes #4929
ZFS Test Suite Performance Regression Tests
This was pulled into OpenZFS via the compressed arc feature and was
separated out in zfsonlinux as a separate pull request from PR-4768.
It originally came in as QA-4903 in Delphix-OS from John Kennedy.
Porting Notes:
1. Added assertions in the setup script to make sure required tools
(fio, mpstat, ...) are present.
2. For the config.json generation in perf.shlib used arcstats and
other binaries instead of dtrace to query the values.
3. For the perf data collection:
- use "zpool iostat -lpvyL" instead of the io.d dtrace script
(currently not collecting zfs_read/write latency stats)
- mpstat and iostat take different arguments
- prefetch_io.sh is a placeholder that uses arcstats instead of
dtrace
4. Build machines require the fio, mdadm and sysstat packages (YMMV).
Future Work:
- Need a way to measure zfs_read and zfs_write latencies per pool.
- Need tools that take two sets of output and display/graph the
differences
- Bring over additional regression tests from Delphix
Sydney Vanda [Fri, 22 Jul 2016 15:07:04 +0000 (15:07 +0000)]
Real disk partitioning now enabled in test suite for Linux
When using real devices, specify DISKS="sdb sdc sdd" as opposed to
/dev/sdb in zfs-tests.sh - otherwise some tests fail, with directory and
disk names registering as "/dev//dev/sdb". The same
goes for mpath: DISKS="mpatha mpathad mpathb"
Expected Usage:
$ DISKS="sdb sdc sdd" zfs-tests.sh
SLICE_PREFIX is now set as "p" for a loop device (e.g. loop0p2) or
"" for a real device (e.g. sdb2), or either for multipath devices
(e.g. mpatha1 or mpath1p1), instead of only "p" by default. Note that
kpartx partitioning is not currently supported in this patch
(i.e. "partx") and may need to be disabled on Debian distributions.
Functions were added for determining the test directory (/dev or /dev/mapper)
as well as the slice prefix; these are determined and exported mostly in the
cfg file of each test group directory.
Currently zpools cannot be created on whole mpath devices that have
been partitioned. In order to fix this, tests have either been revised
to use a partition instead, or, if there is a size constraint and the
pool needs to be created on the whole disk, partitions are deleted
if the device is a multipath device. This functionality is added to
default_cleanup() or to individual cleanup scripts if a non-default
cleanup method is used.
The maximum number of partitions is currently set to 8 to account for
all of the tests thus far.
Patch changes are generally wrapped in "if is_linux" constructs.
Signed-off-by: Sydney Vanda <sydney.m.vanda@intel.com>
Reviewed-by: John Salinas <John.Salinas@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: David Quigley <david.quigley@intel.com>
Closes #4447
Closes #4964
Closes #5074
Don Brady [Wed, 31 Aug 2016 21:46:58 +0000 (15:46 -0600)]
Bring over illumos ZFS FMA logic -- phase 1
This first phase brings over the ZFS SLM module, zfs_mod.c, to handle
auto operations in response to disk events. Disk event monitoring is
provided from libudev and generates the expected payload schema for
zfs_mod. This work leverages the recently added devid and phys_path
strings in the vdev label.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Don Brady <don.brady@intel.com>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #4673
Gvozden Neskovic [Sat, 27 Aug 2016 18:12:53 +0000 (20:12 +0200)]
Performance optimization of AVL tree comparator functions
perf: 2.75x faster ddt_entry_compare()
The first 256 bits of ddt_key_t are a block checksum, which is expected
to be close to random data. Hence, on average, a comparison only needs to
look at the first few bytes of the keys. To reduce the number of conditional
jump instructions, the result is computed as: sign(memcmp(k1, k2)).
The sign of an integer 'a' can be obtained as: `(0 < a) - (a < 0)` := {-1, 0, 1},
which is computed efficiently. Synthetic performance evaluation of the
original and new algorithm over 1G random keys on a 2.6GHz Intel(R) Xeon(R)
CPU E5-2660 v3:
old 6.85789 s
new 2.49089 s
perf: 2.8x faster vdev_queue_offset_compare() and vdev_queue_timestamp_compare()
Compute the result directly instead of using conditionals
perf: zfs_range_compare()
Speedup between 1.1x - 2.5x, depending on compiler version and
optimization level.
perf: spa_error_entry_compare()
`bcmp()` is not suitable for comparator use. Use `memcmp()` instead.
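A simplified sketch of the pattern (not the exact ZFS comparators):

#include <string.h>

#define	CMP_SIGN(a)	((0 < (a)) - ((a) < 0))	/* {-1, 0, 1}, branch-free */

static int
ddt_key_compare(const void *x1, const void *x2)
{
	/*
	 * The first 256 bits of ddt_key_t are a checksum, close to
	 * random data, so memcmp() usually decides within a few bytes.
	 */
	int cmp = memcmp(x1, x2, 32);

	return (CMP_SIGN(cmp));
}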
Brian Behlendorf [Wed, 31 Aug 2016 01:56:36 +0000 (18:56 -0700)]
Fix zhack argument processing
The argument processing in zhack makes the assumption that getopt()
will not permute argv. This isn't true for the GNU implementation of
getopt() unless the optstring is prefixed with a '+', in which case
it is equivalent to setting the POSIXLY_CORRECT environment variable.
In addition, update the usage() and optstrings to reflect the existing
supported options.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: liaoyuxiangqin <guo.yong33@zte.com.cn>
Closes #5047
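A self-contained sketch of the '+' optstring behavior (the option letters
here are illustrative, not zhack's actual options):

#include <stdio.h>
#include <unistd.h>

/*
 * The leading '+' stops GNU getopt() from permuting argv, so options
 * must precede the subcommand, matching POSIX behavior.
 */
int
main(int argc, char **argv)
{
	int c;

	while ((c = getopt(argc, argv, "+c:d:")) != -1) {
		switch (c) {
		case 'c':
			printf("option c: %s\n", optarg);
			break;
		case 'd':
			printf("option d: %s\n", optarg);
			break;
		default:
			return (1);
		}
	}
	if (optind < argc)
		printf("subcommand: %s\n", argv[optind]);
	return (0);
}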
Brian Behlendorf [Wed, 31 Aug 2016 01:50:11 +0000 (18:50 -0700)]
Update zpool_import_001_pos
Older versions of blkid may not promptly detect ZFS labels when
they're located on partitions. To ensure this test passes
reliably, always perform a scan of the default search paths (-s).
cao [Tue, 23 Aug 2016 02:12:41 +0000 (10:12 +0800)]
Update zfs_destroy_004.ksh script
Issues:
Under Linux, when executing zfs_destroy_004.ksh, destroying $fs fails.
The key issue here is that the illumos kernel treats this case
differently than the Linux kernel. On illumos you can unmount and
destroy a filesystem which is busy and all consumers of it get EIO.
On Linux the expected behavior is to prevent the unmount and destroy.
Cause analysis:
The test creates the $fs file system and mounts it at $mntp, then
does cd $mntp. Linux does not allow destroying $fs from within that
mount point, no matter which parameters destroy is given.
Solution:
Expect the failure with log_mustnot $ZFS destroy $fs, then
cd $olddir and destroy $fs.
Signed-off-by: caoxuewen <cao.xuewen@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5012
ChaoyuZhang [Mon, 22 Aug 2016 02:27:06 +0000 (10:27 +0800)]
Update zfs_create_003_pos.ksh and zfs_create_006_pos.ksh
As the scripts zfs_create_003_pos.ksh and zfs_create_006_pos.ksh
run successfully on Linux, add them to the linux.run file to
increase test coverage.
Signed-off-by: ChaoyuZhang <zhang.chaoyu@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5002
Brian Behlendorf [Mon, 20 Jun 2016 21:28:51 +0000 (14:28 -0700)]
Add log_must_{retry,busy} helpers
Add helpers which automatically retry the provided command when
the error message matches the provided keyword. This provides an
easy way to handle the asynchronous nature of some ZFS commands.
For example, the `zfs destroy` command may need to be retried in
the case where the block device is unexpectedly busy. This can be
accomplished as follows:
log_must_busy $ZFS destroy ...
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #5002
liuhuang [Sun, 21 Aug 2016 23:40:54 +0000 (07:40 +0800)]
Update zfs_mount_005_pos.ksh and zfs_mount_010_neg.ksh
Update zfs_mount_005_pos.ksh and zfs_mount_010_neg.ksh to reflect
the expected Linux behavior. The is_linux wrapper is used so the
test case may be used on Linux and non-Linux platforms.
Signed-off-by: liuhuang <liu.huang@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5000
cao [Tue, 30 Aug 2016 11:32:22 +0000 (19:32 +0800)]
Delete unused zfsctl_snapdir_inactive declaration
zfsctl_snapdir_inactive is defined in zfs-0.6.3. In zfs-0.6.5.7
this declaration remains even though the implementation was
removed in commit 278bee93. Also removed fastreboot_disable_highpil,
which is likewise unused.
Signed-off-by: caoxuewen <cao.xuewen@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5042
For quite some time I was thinking about the possibility of prefetching
ZFS indirection tables while doing sequential reads or writes.
Recent changes in the predictive prefetcher made that much easier to
do. My tests on a zvol with 16KB block size on a 5x striped and 2x
mirrored pool of 10 disks show almost double throughput on sequential
read, and almost triple on sequential rewrite. While for reads a similar
effect can be obtained by increasing the maximum prefetch distance
(though at a higher memory cost), for rewrite there is no other
solution so far.
Authored by: Alexander Motin <mav@freebsd.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported-by: kernelOfTruth <kerneloftruth@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/6322
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/cb92f413
Closes #5040
Porting notes:
- Change from upstream in module/zfs/dbuf.c in 'int dbuf_read' due
to commit 5f6d0b6 'Handle block pointers with a corrupt logical size'
- Difference from upstream in module/zfs/dmu_zfetch.c,
uint32_t zfetch_max_idistance -> unsigned int zfetch_max_idistance
- Variables have been initialized at the beginning of the function
(void dmu_zfetch) to resemble the order of occurrence and account
for C99, C11 mode errors.
GeLiXin [Thu, 25 Aug 2016 08:40:20 +0000 (16:40 +0800)]
Fix: Build warnings with different gcc optimization levels in debug mode
This fix resolves warnings reported when compiling with different gcc
optimization levels in debug mode.
Test tools:
gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
Linux version: 2.6.32-573.18.1.el6.x86_64, Red Hat Enterprise Linux Server release 6.1 (Santiago)
List of warnings:
CFLAGS=-O1 ./configure --enable-debug ;make
../../module/icp/core/kcf_sched.c: In function ‘kcf_aop_done’:
../../module/icp/core/kcf_sched.c:499: error: ‘fg’ may be used uninitialized in this function
../../module/icp/core/kcf_sched.c:499: note: ‘fg’ was declared here
CFLAGS=-Os ./configure --enable-debug ; make
libzfs_dataset.c: In function ‘zfs_prop_set_list’:
libzfs_dataset.c:1575: error: ‘nvl_len’ may be used uninitialized in this function
Signed-off-by: GeLiXin <ge.lixin@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5022
Brian Behlendorf [Thu, 25 Aug 2016 20:24:01 +0000 (20:24 +0000)]
Fix cv_timedwait_hires
The user space implementation of cv_timedwait_hires() was always passing
a relative time to pthread_cond_timedwait() when an absolute time is
expected. This was accidentally introduced in commit 206971d2.
Replace two magic values with their corresponding preprocessor macros.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Richard Yao <ryao@gentoo.org>
Closes #5024
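A minimal sketch of the required pattern (helper name hypothetical):
pthread_cond_timedwait() expects an absolute CLOCK_REALTIME deadline, so a
relative timeout has to be added to the current time before waiting:

#include <pthread.h>
#include <time.h>

static int
cond_wait_rel(pthread_cond_t *cv, pthread_mutex_t *mp, long long rel_ns)
{
	struct timespec abstime;

	clock_gettime(CLOCK_REALTIME, &abstime);
	abstime.tv_sec += rel_ns / 1000000000LL;
	abstime.tv_nsec += rel_ns % 1000000000LL;
	if (abstime.tv_nsec >= 1000000000L) {	/* normalize nanoseconds */
		abstime.tv_sec++;
		abstime.tv_nsec -= 1000000000L;
	}
	return (pthread_cond_timedwait(cv, mp, &abstime));
}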
GeLiXin [Thu, 11 Aug 2016 03:15:37 +0000 (11:15 +0800)]
Add zfs_arc_meta_limit_percent tunable
ARC will evict meta buffers that exceed the arc_meta_limit. Pending further
investigation of whether meta buffers should receive special protection,
this tunable makes arc_meta_limit adjustable for different workloads.
People can set zfs_arc_meta_limit_percent to any value while loading zfs.ko,
so a range check is added to guarantee a suitable arc_meta_limit.
As suggested by Tim Chase, zfs_arc_dnode_limit is changed to a percent-style
tunable as well.
Signed-off-by: GeLiXin <ge.lixin@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #4957
GeLiXin [Mon, 22 Aug 2016 03:20:22 +0000 (11:20 +0800)]
Fix: Array bounds read in zprop_print_one_property()
If the loop index i reaches (ZFS_GET_NCOLS - 1), then cbp->cb_columns[i + 1]
actually reads the data of cbp->cb_colwidths[0], which means the array
subscript is above the array bounds.
Luckily cbp->cb_colwidths[0] is always 0, and it seems we haven't
looped enough times to exceed the array bounds so far, but it's still
a latent risk.
Signed-off-by: GeLiXin <ge.lixin@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #5003
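A self-contained illustration of the off-by-one (names simplified from
zprop_print_one_property()): code that looks ahead at columns[i + 1] must
stop the index one element early.

#include <stdio.h>

#define	NCOLS	4

int
main(void)
{
	int columns[NCOLS] = {1, 2, 3, 0};
	int i;

	/* i < NCOLS would read columns[NCOLS] on the last pass */
	for (i = 0; i < NCOLS - 1; i++)
		printf("col %d, next %d\n", columns[i], columns[i + 1]);
	return (0);
}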
Matthew Ahrens [Wed, 20 Jul 2016 22:42:13 +0000 (15:42 -0700)]
OpenZFS 7004 - dmu_tx_hold_zap() does dnode_hold() 7x on same object
Using a benchmark which has 32 threads creating 2 million files in the
same directory, on a machine with 16 CPU cores, I observed poor
performance. I noticed that dmu_tx_hold_zap() was using about 30% of
all CPU, and doing dnode_hold() 7 times on the same object (the ZAP
object that is being held).
dmu_tx_hold_zap() keeps a hold on the dnode_t the entire time it is
running, in dmu_tx_hold_t:txh_dnode, so it would be nice to use the
dnode_t that we already have in hand, rather than repeatedly calling
dnode_hold(). To do this, we need to pass the dnode_t down through
all the intermediate calls that dmu_tx_hold_zap() makes, making these
routines take the dnode_t* rather than an objset_t* and a uint64_t
object number. In particular, the routines along this call path need
analogous *_by_dnode() variants created.
This can improve performance on the benchmark described above by 100%,
from 30,000 file creations per second to 60,000. (This improvement is on
top of that provided by working around the object allocation issue. Peak
performance of ~90,000 creations per second was observed with 8 CPUs;
adding CPUs past that decreased performance due to lock contention.) The
CPU used by dmu_tx_hold_zap() was reduced by 88%, from 340 CPU-seconds
to 40 CPU-seconds.
Sponsored by: Intel Corp.
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7004
OpenZFS-commit: https://github.com/openzfs/openzfs/pull/109
Closes #4641
Closes #4972
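A sketch of the *_by_dnode() refactor pattern (simplified; the actual
signatures may differ slightly):

/*
 * The classic entry point becomes a thin wrapper; callers that already
 * hold the dnode (like dmu_tx_hold_zap()) call the _by_dnode variant
 * directly and skip the repeated dnode_hold().
 */
int
zap_lookup(objset_t *os, uint64_t zapobj, const char *name,
    uint64_t integer_size, uint64_t num_integers, void *buf)
{
	dnode_t *dn;
	int err;

	err = dnode_hold(os, zapobj, FTAG, &dn);
	if (err != 0)
		return (err);
	err = zap_lookup_by_dnode(dn, name, integer_size, num_integers, buf);
	dnode_rele(dn, FTAG);
	return (err);
}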
Matthew Ahrens [Wed, 20 Jul 2016 22:39:55 +0000 (15:39 -0700)]
OpenZFS 7003 - zap_lockdir() should tag hold
zap_lockdir() / zap_unlockdir() should take a "void *tag" argument which
tags the hold on the zap. This will help diagnose programming errors
which misuse the hold on the ZAP.
Sponsored by: Intel Corp.
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Signed-off-by: Pavel Zakharov <pavel.zakha@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7003
OpenZFS-commit: https://github.com/openzfs/openzfs/pull/108
Closes #4972
heary-cao [Sat, 6 Aug 2016 07:08:51 +0000 (15:08 +0800)]
Fix generated spa config memory leak in spa_load_best function
When the spa retry load succeeds and spa recovery is requested, the
generated config may leak in the spa_load_best function. Always free
the generated config when it is not assigned to the spa.
Signed-off-by: cao.xuewen <cao.xuewen@zte.com.cn>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #4940
Paul Dagnelie [Tue, 9 Aug 2016 21:06:39 +0000 (23:06 +0200)]
OpenZFS 7176 - Yet another hole birth issue
This is another bug in the long line of hole-birth related issues. In
this particular case, it was discovered that a previous hole-birth fix
(illumos bug 6513, commit bc77ba73) did not cover as many cases as we
thought it did. While the fix worked in the case of hole-punching
(writing zeroes to a large part of a file), it did not deal with
truncation followed by writing beyond the new end of the file.
The problem is that dbuf_findbp will return ENOENT if the block it's
trying to find is beyond the end of the file. If that happens, we assume
there is no birth time, and so we lose that information when we write
out new blkptrs. We should teach dbuf_findbp to look for things that are
beyond the current end, but not beyond the absolute end of the file.
Authored by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Ported-by: kernelOfTruth <kerneloftruth@gmail.com>
Signed-off-by: Boris Protopopov <boris.protopopov@actifio.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7176
OpenZFS-commit: https://github.com/openzfs/openzfs/pull/173/commits/8b9f3ad
Upstream-bugs: DLPX-46009
Porting notes:
- Fix ISO C90 mixed declaration error in dbuf.c (int nlevels, epbs;);
keep the previous position of the initialization
Nikolay Borisov [Tue, 16 Aug 2016 20:00:16 +0000 (23:00 +0300)]
Fix do_link portion of ctime test
From the man page of dirname: "Both dirname() and basename()
may modify the contents of path, so it may be desirable to pass
a copy when calling one of these functions." And in fact on Linux
using dirname actually changes the contents of the passed parameter, as
evident from the following failure when running the ctime test:
link(/root/zfs-mount, /root/zfs-mount/link_file)
Fix this by creating a copy of the input parameter and passing that
to dirname, thus not compromising the original parameter and allowing
the creation of the hard link to succeed.
Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #4977
Matthew Ahrens [Thu, 21 Jul 2016 05:50:26 +0000 (22:50 -0700)]
It is not necessary to zero struct dbuf_hold_impl_data
Under a workload which makes heavy use of `dbuf_hold()`, I noticed that a
considerable amount of time was spent in `dbuf_hold_impl()`, due to its call to
`kmem_zalloc(sizeof (struct dbuf_hold_impl_data) * DBUF_HOLD_IMPL_MAX_DEPTH)`,
which is around 2KiB. This structure is used as a stack, to limit the size of
the C stack as dbuf_hold() calls itself recursively. We make a recursive call
to hold the parent's dbuf when the requested dbuf is not found. The vast
majority of the time, the parent or grandparent indirect dbuf is cached, so the
number of recursive calls is very low. However, we initialize this entire
array for every call to dbuf_hold().
To improve performance, this commit changes `dbuf_hold()` to use `kmem_alloc()`
instead of `kmem_zalloc()`. __dbuf_hold_impl_init is changed to initialize all
members of the struct before they are used. I observed ~5% performance
improvement on a workload which creates many files.
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #4974
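A sketch of the change (argument lists abbreviated from the actual code):

int
dbuf_hold_impl(dnode_t *dn, uint8_t level, uint64_t blkid,
    boolean_t fail_sparse, void *tag, dmu_buf_impl_t **dbp)
{
	struct dbuf_hold_impl_data *dh;
	int error;

	/* was kmem_zalloc(): zeroing ~2KiB on every call was the cost */
	dh = kmem_alloc(sizeof (struct dbuf_hold_impl_data) *
	    DBUF_HOLD_IMPL_MAX_DEPTH, KM_SLEEP);
	/* the init function now sets every field that will be used */
	__dbuf_hold_impl_init(dh, dn, level, blkid, fail_sparse, tag, dbp, 0);

	error = __dbuf_hold_impl(dh);

	kmem_free(dh, sizeof (struct dbuf_hold_impl_data) *
	    DBUF_HOLD_IMPL_MAX_DEPTH);
	return (error);
}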