Brian Behlendorf [Fri, 20 Mar 2015 22:10:24 +0000 (15:10 -0700)]
Check all vdev labels in 'zpool import'
When using 'zpool import' to scan for available pools prefer vdev names
which reference vdevs with more valid labels. There should be two labels
at the start of the device and two labels at the end of the device. If
labels are missing then the device has been damaged or is in some other
way incomplete. Preferring names with fully intact labels helps weed out
bad paths and improves the likelihood of being able to import the pool.
This behavior only applies when scanning /dev/ for valid pools. If a
cache file exists the pools described by the cache file will be used.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Chris Dunlap <cdunlap@llnl.gov>
Closes #3145
Closes #2844
Closes #3107
Ned Bass [Wed, 25 Mar 2015 00:00:08 +0000 (17:00 -0700)]
dbuf_free_range() overzealously frees dbufs
When called to free a spill block from a dnode, dbuf_free_range() has a
bug that results in all dbufs for the dnode getting freed. A variety of
problems may result from this bug, but a common one was a zap lookup
tripping an ASSERT because the zap buffers had been zeroed out. This
could happen on a dataset with xattr=sa set when extended attributes are
written and removed on a directory concurrently with I/O to files in
that directory.
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Tim Chase <tim@chase2k.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Fixes #3195
Fixes #3204
Fixes #3222
Tim Chase [Mon, 23 Mar 2015 17:10:19 +0000 (12:10 -0500)]
Set the maximum ZVOL transfer size correctly
ZoL had been setting max_sectors to UINT_MAX, but until Linux 3.19 the
kernel artificially capped it at 1024 (BLK_DEF_MAX_SECTORS).
This cap was removed in torvalds/linux@34b48db. This patch changes
it to DMU_MAX_ACCESS (in sectors) and also changes the ASSERT in
dmu_tx_hold_write() to allow the maximum transfer size.
Signed-off-by: Tim Chase <tim@chase2k.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3212
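As a rough sketch of the limit change described above (the variable names and
the exact expression used in zvol.c may differ), the request queue limit is
derived from DMU_MAX_ACCESS expressed in 512-byte sectors rather than UINT_MAX:

    /* Illustrative only: cap the zvol request queue at DMU_MAX_ACCESS in sectors. */
    blk_queue_max_hw_sectors(zv->zv_queue, DMU_MAX_ACCESS >> 9);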
Gordan Bobic [Mon, 23 Mar 2015 16:17:56 +0000 (16:17 +0000)]
Execute udevadm settle before trying to import pools
Execute udevadm settle before trying to import pools. Otherwise the
disk device nodes may not be ready before import time. This is
analogous to the behavior of the init scripts and systemd units.
Signed-off-by: Gordan Bobic <gordan@steel.shatteredsilicon.net> Signed-off-by: Pavel Snajdr <snajpa@snajpa.net> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3213
Chris Dunlop [Mon, 16 Mar 2015 01:21:21 +0000 (12:21 +1100)]
Reduce size of zfs_sb_t: allocate z_hold_mtx separately
zfs_sb_t has grown to the point where using kmem_zalloc() for allocations
is triggering the 32k warning threshold.
We can't safely convert this entire allocation to use vmem_alloc() instead
of kmem_alloc() because the backing_dev_info structure is embedded here.
It depends on the bit_waitqueue() function which won't behave properly
when given a virtual address.
Instead, use vmem_alloc() to allocate the z_hold_mtx array separately.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Chris Dunlop <chris@onthe.net.au>
Closes #3178
Brian Behlendorf [Tue, 17 Mar 2015 22:08:22 +0000 (15:08 -0700)]
Fix arc_adjust_meta() behavior
The goal of this function is to evict enough meta data buffers from the
ARC in order to enforce the arc_meta_limit. Achieving this is slightly
more complicated than it appears because it is common for data buffers
to have holds on meta data buffers. In addition, dnode meta data buffers
will be held by the dnodes in the block preventing them from being freed.
This means we can't simply traverse the ARC and expect to always find
enough unheld meta data buffers to release.
Therefore, this function has been updated to make alternating passes
over the ARC releasing data buffers and then newly unheld meta data
buffers. This ensures forward progress is maintained and arc_meta_used
will decrease. Normally this is sufficient, but if required the ARC
will call the registered prune callbacks causing dentry and inodes to
be dropped from the VFS cache. This will make dnode meta data buffers
available for reclaim. The total number of restarts is limited by
zfs_arc_meta_adjust_restarts to prevent spinning in the rare case
where all meta data is pinned.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Pavel Snajdr <snajpa@snajpa.net>
Issue #3160
Brian Behlendorf [Tue, 17 Mar 2015 22:07:47 +0000 (15:07 -0700)]
Restructure per-filesystem reclaim
Originally when the ARC prune callback was introduced the idea was
to register a single callback for the ZPL. The ARC could invoke this
callback if it needed the ZPL to drop dentries, inodes, or other
cache objects which might be pinning buffers in the ARC. The ZPL
would iterate over all ZFS super blocks and perform the reclaim.
For the most part this design has worked well but due to limitations
in 2.6.35 and earlier kernels there were some problems. This patch
is designed to address those issues.
1) iterate_supers_type() is not provided by all kernels which makes
it impossible to safely iterate over all zpl_fs_type filesystems in
a single callback. The most straightforward and portable way to
resolve this is to register a callback per-filesystem during mount.
The arc_*_prune_callback() functions have always supported multiple
callbacks so this is functionally a very small change.
2) Commit 050d22b removed the non-portable shrink_dcache_memory()
and shrink_icache_memory() functions and didn't replace them with
equivalent functionality. This meant that for Linux 3.1 and older
kernels the ARC had no mechanism to drop dentries and inodes from
the caches if needed. This patch adds that missing functionality
by calling shrink_dcache_parent() to release dentries which may be
pinning inodes. This will result in all unused cache entries being
dropped which is a bit heavy handed but it's the only interface
available for old kernels.
3) A zpl_drop_inode() callback is registered for kernels older than
2.6.35 which do not support the .evict_inode callback. This ensures
that when the last reference on an inode is dropped it is immediately
removed from the cache. If this isn't done then inodes can end up on
the global unused LRU with no mechanism available to ZFS to drop them.
Since the ARC buffers are not dropped the hottest inodes can still
be recreated without performing disk IO.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Pavel Snajdr <snajpa@snajpa.net>
Issue #3160
Richard Yao [Wed, 11 Mar 2015 18:24:46 +0000 (14:24 -0400)]
Increase Linux pipe buffer size on 'zfs receive'
I noticed when reviewing documentation that it is possible for user
space to use fcntl(fd, F_SETPIPE_SZ, (unsigned long) size) to change
the kernel pipe buffer size on Linux to increase the pipe size up to
the value specified in /proc/sys/fs/pipe-max-size. There are users using
mbuffer to improve zfs recv performance when piping over the network, so
it seems advantageous to integrate such functionality directly into the
zfs recv tool. This avoids the addition of two buffers and two copies
(one for the buffer mbuffer adds and another for the additional pipe),
so it should be more efficient.
This could have been made configurable and/or this could have changed
the value back to the original after we were done with the file
descriptor, but I do not see a strong case for doing either, so I
went with a simple implementation.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1161
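A minimal userspace sketch of the mechanism (grow_pipe() is a hypothetical
helper name; the actual zfs recv code differs in detail) reads the system
limit and applies it with fcntl():

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Grow a pipe's kernel buffer toward /proc/sys/fs/pipe-max-size. */
    static void
    grow_pipe(int fd)
    {
        unsigned long max = 1048576;    /* fall back to the common default */
        FILE *fp = fopen("/proc/sys/fs/pipe-max-size", "r");

        if (fp != NULL) {
            if (fscanf(fp, "%lu", &max) != 1)
                max = 1048576;
            fclose(fp);
        }

        /* Unprivileged callers may get EPERM above pipe-max-size; ignore it. */
        (void) fcntl(fd, F_SETPIPE_SZ, max);
    }

    int
    main(void)
    {
        int fds[2];

        if (pipe(fds) != 0)
            return (1);
        grow_pipe(fds[0]);
        printf("pipe buffer is now %d bytes\n", fcntl(fds[0], F_GETPIPE_SZ));
        return (0);
    }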
Hajo Möller [Thu, 12 Mar 2015 18:09:19 +0000 (19:09 +0100)]
Fix warning about AM_INIT_AUTOMAKE arguments
As of automake 1.14.2, currently shipped with Ubuntu 14.04, automake
warns about AM_INIT_AUTOMAKE having more than one argument:
configure.ac:41: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see:
configure.ac:41: http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_005fINIT_005fAUTOMAKE-invocation
This commit fixes the warnings by following the above link's advice, so
AC_INIT gets called with the package's name and version. As both are
defined in the META file, we parse it with `grep`, `cut` and `tr`.
NOTE: autoconf < 1.14 does not support m4_esyscmd_s, so m4_esyscmd was
used and `tr` was also made to truncate newlines.
Signed-off-by: Hajo Möller <dasjoe@gmail.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3174
Bill McGonigle [Fri, 13 Mar 2015 16:44:42 +0000 (09:44 -0700)]
Linux 4.0 compat: bdi_setup_and_register() __must_check
Explicitly disable the unused variable warnings by setting
__attribute__((unused)) for bdi_setup_and_register(). This is
required because the function is defined with the __must_check
attribute.
Signed-off-by: Bill McGonigle <bill-github.com-public1@bfccomputing.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3141
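A small standalone illustration of the pattern, using hypothetical names
rather than the real autoconf check or ZPL code: the __must_check result is
captured in a variable that is itself marked unused, so neither an
unused-result nor an unused-variable warning is produced.

    /* Stand-in for a __must_check kernel function such as bdi_setup_and_register(). */
    static int __attribute__((warn_unused_result))
    setup_stub(void)
    {
        return (0);
    }

    void
    example(void)
    {
        int error __attribute__((unused));

        /* The assignment satisfies -Wunused-result; the attribute keeps
         * the intentionally unchecked variable from warning. */
        error = setup_stub();
    }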
Justin T. Gibbs [Thu, 12 Mar 2015 00:10:35 +0000 (11:10 +1100)]
Illumos 5630 - stale bonus buffer in recycled dnode_t leads to data corruption
5630 stale bonus buffer in recycled dnode_t leads to data corruption
Author: Justin T. Gibbs <justing@spectralogic.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported-by: Chris Dunlop <chris@onthe.net.au> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Richard Yao <ryao@gentoo.org>
Issue #3172
Illumos 5047 - don't use atomic_*_nv if you discard the return value
5047 don't use atomic_*_nv if you discard the return value
Author: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Garrett D'Amore <garrett@damore.org>
Reviewed by: Jason King <jason.brian.king@gmail.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Several hunks from the original patch were not specific to ZFS
and thus were dropped.
Ported-by: Chris Dunlop <chris@onthe.net.au> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Richard Yao <ryao@gentoo.org>
Issue #3172
Allowing direct reclaim to re-enter the VFS in the zfs_inactive()
call path has historically been problematic for ZoL. Therefore,
in order to avoid an entire class of current and future issues
caused by this, PF_FSTRANS is set for all zfs_inactive() callers.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3163
Ned Bass [Thu, 26 Feb 2015 20:24:11 +0000 (12:24 -0800)]
Use cached feature info in spa_add_feature_stats()
Avoid issuing I/O to the pool when retrieving feature flags information.
Trying to read the ZAPs from disk means that zpool clear would hang if
the pool is suspended and recovery would require a reboot. To keep the
feature stats resident in memory, we hang a cached nvlist off of the
spa. It is built up from disk the first time spa_add_feature_stats() is
called, and refreshed thereafter using the cached feature reference
counts. spa_add_feature_stats() gets called at pool import time so we
can be sure the cached nvlist will be available if the pool is later
suspended.
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3082
This commit replaces the zfs.redhat.in init script with a slightly
modified version of the existing zfs.lsb.in init script. This was
done to minimize the functional differences between platforms.
The lsb version of the script was chosen because it's heavily
tested and provides the most functionality.
Features in LSB which are now in RHEL:
* USE_DISK_BY_ID=0 - Use the by-id names
* VERBOSE_MOUNT=0 - Verbose mounts by default
* DO_OVERLAY_MOUNTS=0 - Overlay mounts by default
* MOUNT_EXTRA_OPTIONS=0 - Generic extra options
Existing RHEL features which were removed:
* Automatically mounting FSs on ZVOLs listed in /etc/fstab
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #3153
Brian Behlendorf [Fri, 27 Feb 2015 20:53:35 +0000 (12:53 -0800)]
Change ASSERT(!"...") to cmn_err(CE_PANIC, ...)
There are a handful of ASSERT(!"...")'s throughout the code base for
cases which should be impossible. This patch converts them to use
cmn_err(CE_PANIC, ...) to ensure they are always enabled and so that
additional debugging is logged if they were to occur.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1445
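For illustration (the message text here is made up), the conversion looks
like this:

    /* Before: compiled out entirely in non-debug builds. */
    ASSERT(!"unreachable: unexpected record type");

    /* After: always built, and the message is logged before the panic. */
    cmn_err(CE_PANIC, "unreachable: unexpected record type");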
Brian Behlendorf [Sat, 28 Feb 2015 00:09:52 +0000 (16:09 -0800)]
Linux 4.0 compat: bdi_setup_and_register()
The 'capabilities' argument which was passed to bdi_setup_and_register()
has been removed. File systems should no longer pass BDI_CAP_MAP_COPY.
For our purposes this means there are now three different interfaces
which must be handled. A zpl_bdi_setup_and_register() wrapper function
has been introduced to provide a single interface to the ZPL code.
I've also taken this opportunity to remove HAVE_BDI because kernels
older than 2.6.32 are no longer supported. All kernels newer than
this will have one of the above interfaces.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Chunwei Chen <tuxoko@gmail.com>
Closes #3128
Brian Behlendorf [Thu, 26 Feb 2015 23:29:33 +0000 (15:29 -0800)]
Use MUTEX_FSTRANS mutex type
There are regions in the ZFS code where it is desirable to be able
to set PF_FSTRANS while a specific mutex is held. The ZFS code
could be updated to set/clear this flag in all the correct places,
but this is undesirable for a few reasons.
1) It would require changes to a significant amount of the ZFS
code. This would complicate applying patches from upstream.
2) It would be easy to accidentally miss a critical region in
the initial patch or to have a future change introduce a
new one.
Both of these concerns can be addressed by using a new mutex type
which is responsible for managing PF_FSTRANS, support for which was
added to the SPL in commit zfsonlinux/spl@9099312 - Merge branch
'kmem-rework'.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Tim Chase <tim@chase2k.com>
Closes #3050
Closes #3055
Closes #3062
Closes #3132
Closes #3142
Closes #2983
Isaac Huang [Fri, 27 Feb 2015 05:46:45 +0000 (22:46 -0700)]
Fix deadlock between zpool export and zfs list
Pool reference count is NOT checked in spa_export_common()
if the pool has been imported readonly=on, i.e. spa->spa_sync_on
is FALSE. Then zpool export and zfs list may deadlock:
1. Pool A is imported readonly.
2. zpool export A and zfs list are run concurrently.
3. zfs command gets a reference on the spa, which holds a dbuf on
the MOS meta dnode.
4. zpool command grabs spa_namespace_lock, and tries to evict dbufs
of the MOS meta dnode. The dbuf held by zfs command can't be
evicted as its reference count is not 0.
5. zpool command blocks in dnode_special_close() waiting for the
MOS meta dnode reference count to drop to 0, with
spa_namespace_lock held.
6. zfs command tries to get the spa_namespace_lock with a reference
on the spa held, which holds a dbuf on the MOS meta dnode.
7. Now zpool command and zfs command deadlock each other.
Also any further zfs/zpool command will block on spa_namespace_lock
forever.
The fix is to always check pool reference count in spa_export_common(),
no matter whether the pool was imported readonly or not.
Signed-off-by: Isaac Huang <he.huang@intel.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2034
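A sketch of the shape of the fix, assuming the usual spa_refcount_zero() and
SET_ERROR() helpers (this is not the literal diff): the busy check is
performed unconditionally rather than only when spa->spa_sync_on is set, so a
read-only imported pool with outstanding references also returns EBUSY
instead of blocking in dnode_special_close().

    if (!spa_refcount_zero(spa) || (spa->spa_inject_ref != 0)) {
        mutex_exit(&spa_namespace_lock);
        return (SET_ERROR(EBUSY));
    }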
Brian Behlendorf [Fri, 27 Feb 2015 22:35:56 +0000 (14:35 -0800)]
Prevent "zpool destroy|export" when suspended
Cleanly destroying or exporting a pool requires that the pool
not be suspended. Therefore, set the POOL_CHECK_SUSPENDED flag
for these ioctls so the utilities will output a descriptive
error message rather than block.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #2878
Christer Ekholm [Thu, 19 Feb 2015 21:44:53 +0000 (22:44 +0100)]
Fix possible future overflow in zfs_nicenum
The function zfs_nicenum that converts numbers to human-readable output
uses an index into a string of letters. This patch limits the index to
the length of the string.
Signed-off-by: Christer Ekholm <che@chrekh.se> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3122
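The following standalone program illustrates the idea (it is not the actual
zfs_nicenum() source): the unit index is never allowed to run past the end of
the unit string, however large the input value is.

    #include <stdio.h>
    #include <string.h>

    static void
    nicenum(unsigned long long num, char *buf, size_t buflen)
    {
        static const char units[] = "BKMGTPE";
        unsigned long long n = num;
        size_t index = 0;

        /* Clamp the index so it can never overflow the unit string. */
        while (n >= 1024 && index < strlen(units) - 1) {
            n /= 1024;
            index++;
        }
        (void) snprintf(buf, buflen, "%llu%c", n, units[index]);
    }

    int
    main(void)
    {
        char buf[32];

        nicenum(1ULL << 62, buf, sizeof (buf));
        printf("%s\n", buf);
        return (0);
    }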
Brian Behlendorf [Wed, 18 Feb 2015 23:39:05 +0000 (15:39 -0800)]
Retire spl_module_init()/spl_module_fini()
In the original implementation of the SPL, wrappers were provided
for module initialization and cleanup. This was done to abstract
away any compatibility code which might be needed for the SPL.
As it turned out the only significant compatibility issue was that
the default pwd during module load differed under Illumos and Linux.
Since this is such a minor thing and the wrappers complicate the
code they are being retired.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2985
Brian Behlendorf [Fri, 20 Feb 2015 18:28:25 +0000 (10:28 -0800)]
Fix O_APPEND open(2) flag
As described in flags section of open(2):
O_APPEND:
The file is opened in append mode. Before each write(2), the
file offset is positioned at the end of the file, as if with
lseek(2). O_APPEND may lead to corrupted files on NFS filesystems
if more than one process appends data to a file at once.
This is because NFS does not support appending to a file, so the
client kernel has to simulate it, which can't be done without a
race condition.
This issue was originally overlooked because normally the generic
VFS code handles this for a filesystem. However, because ZFS explicitly
registers a zpl_write() function it's responsible for the seek.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3124
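A rough sketch of the required behavior inside the filesystem's write handler
(generic names, not the literal zpl_write() change): when O_APPEND is set the
offset is moved to the current end of file before each write.

    /* Sketch only: position the offset at EOF for append-mode writes. */
    if (filp->f_flags & O_APPEND)
        *ppos = i_size_read(filp->f_mapping->host);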
Steffen Müthing [Mon, 16 Feb 2015 03:08:04 +0000 (04:08 +0100)]
Add required files to initramfs
The dracut module installs the udev rules and the vdev_id utility for creating
the /dev/disk/by-vdev/ names, but omits some additional utilities and the
config file required by vdev_id.
Signed-off-by: Steffen Müthing <steffen.muething@iwr.uni-heidelberg.de> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #3110
When loading the ZFS kernel modules they should not populate the
spa namespace using the cache file. This behavior isn't consistent
with other Linux kernel modules and we need to move away from it.
Removing this makes the whole startup process predictable with four
basic steps which are driven by the init system.
This change also helps lay the groundwork for eventually removing
the kobj_* compatibility code on the kernel side. It may need to
be preserved in userspace because libzfs_init() depends on it.
This is why the conditional must be wrapped with an #ifdef _KERNEL.
Signed-off-by: Dan Swartzendruber <dswartz@druber.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2820
Brian Behlendorf [Thu, 12 Feb 2015 23:05:21 +0000 (15:05 -0800)]
Skip bad DVAs during free by setting zfs_recover=1
When a bad DVA is encountered in metaslab_free_dva() the system
should treat it as fatal. This indicates that somehow a damaged
DVA was written to disk and that should be impossible.
However, we have seen a handful of reports over the years of pools
somehow being damaged in this way. Since this damage can render
otherwise intact pools unimportable, and the consequence of skipping
the bad DVA is only leaked free space, it makes sense to provide
a mechanism to ignore the bad DVA. Setting the zfs_recover=1 module
option will cause the DVA to be ignored which may allow the pool to
be imported.
Since zfs_recover=0 by default any pool attempting to free a bad DVA
will treat it as a fatal error preserving the current behavior.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3099
Issue #3090
Issue #2720
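As a sketch of the mechanism (field and variable names here are illustrative,
not the exact patch), zfs_panic_recover() panics when zfs_recover=0 but only
logs a warning when zfs_recover=1, allowing the caller to skip the bad DVA:

    if (offset >= vd->vdev_asize) {
        /* Fatal by default; a warning (and leaked free space) with zfs_recover=1. */
        zfs_panic_recover("freeing bad DVA: vdev %llu, offset %llu",
            (u_longlong_t)vdev_id, (u_longlong_t)offset);
        return;
    }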
Brian Behlendorf [Thu, 30 Oct 2014 18:11:00 +0000 (11:11 -0700)]
Change VERIFY to ASSERT in mutex_destroy()
There have been multiple reports of 'zdb' tripping the VERIFY in
mutex_destroy() because pthread_mutex_destroy() returns EBUSY.
Exactly how this can happen still needs to be explained, but this
doesn't strictly need to be fatal for non-debug builds. Therefore,
this patch converts the VERIFY to an ASSERT until the root cause
is determined and resolved.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #2027
dmu_snapshot_list_next stores the index of the next snapshot entry to the offp
argument, which zpl_snapdir_iterate then uses for the dir_emit. This
results in an off-by-one error. Therefore a temporary variable should be
used.
This was a regression introduced in commit zfsonlinux/zfs@0f37d0c.
Signed-off-by: Andrey Vesnovaty <andrey.vesnovaty@gmail.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2930
Brian Behlendorf [Fri, 30 Jan 2015 19:25:19 +0000 (11:25 -0800)]
Retire zio_cons()/zio_dest()
The zio_cons() constructor and zio_dest() destructor don't exist
in the upstream Illumos code. They were introduced as a workaround
to avoid issue #2523. Since this issue has now been resolved this
code is being reverted to bring ZoL back in sync with Illumos.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
Issue #3063
Long ago the zio_bulk_flags module parameter was introduced to
facilitate debugging and profiling the zio_buf_caches. Today
this code works well and there's no compelling reason to keep
this functionality. In fact it's preferable to revert this so
the code is more consistent with other ZFS implementations.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
Issue #3063
Several of the nvlist functions may perform allocations larger than
the 32k warning threshold. Convert them to use vmem_alloc() so the
best allocator is used.
Commit efcd79a retired KM_NODEBUG which was used to suppress large
allocation warnings. Concurrently the large allocation warning threshold
was increased from 8k to 32k. The goal was to identify the remaining
locations, such as this one, where the allocation can be larger than
32k. This patch is the expected fine tuning resulting from the kmem-rework
changes; see commit 6e9710f.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3057
Closes #3079
Closes #3081
Tim Chase [Mon, 17 Nov 2014 04:27:58 +0000 (22:27 -0600)]
Produce a full snapshot list for zfs send -p
In order to accelerate zfs receive operations in the face of many
property-containing snapshots, commit 0574855 changed the header nvlist
("fss") of a send stream to exclude snapshots which aren't part of the
stream. This, however, would cause zfs receive -F to erroneously remove
snapshots; it would remove any snapshot which wasn't listed in the header
nvlist.
This patch restores the full list of snapshots in fss[<id>[snaps]] but
still suppresses the properties of non-sent snapshots and also removes a
consistency check in which an error is raised if a listed snapshot does
not have any properties in fss[<id>[snapprops]].
The 0574855 commit also introduced a bug in which zfs send -p of a
complete stream (zfs send -p pool/fs@snap) would exclude the snapshot
properties in fss[<id>[snapprops]]. This patch detects the last snapshot
in a series when no "from" snapshot has been specified and includes its
properties.
Signed-off-by: Tim Chase <tim@chase2k.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2907
Brian Behlendorf [Tue, 18 Nov 2014 22:29:04 +0000 (17:29 -0500)]
Don't read space maps during import for readonly pools
Normally when importing a pool the space maps for all top level
vdevs are read from disk. The space maps will be required later
when an allocation is performed and free blocks need to be located.
However, if the pool is imported readonly then we are guaranteed
that no allocations can occur. In this case the space maps need
not be loaded. A similar argument can be made for the DTLs
(dirty time logs).
Because a pool import will fail if the space maps cannot be read,
the ability to safely ignore them makes it more likely that a
damaged pool can be imported readonly to recover its contents.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #2831
Lukas Wunner [Wed, 4 Feb 2015 09:45:19 +0000 (10:45 +0100)]
Fix loop in Dracut shutdown script
The shell executes each command of a pipeline in a subshell,
thus $ret always had the same value after the while loop that
it had before the loop (http://mywiki.wooledge.org/BashFAQ/024),
signaling success even if some of the zpools could not be exported.
Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3083
Justin T. Gibbs [Sun, 23 Nov 2014 03:14:24 +0000 (19:14 -0800)]
Illumos 5311 - traverse_dnode may report success when it should not
5311 traverse_dnode may report success when it should not
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Will Andrews <willa@spectralogic.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Ned Bass [Wed, 4 Feb 2015 21:57:50 +0000 (13:57 -0800)]
Fix SA header size accounting
The functions sa_find_sizes() and sa_build_layout() fail to account
for the additional 2 bytes of SA header space when calculating whether
a variable size attribute might spill over. They may consequently
determine that an attribute will fit in the bonus buffer along with a
spill block pointer, when in reality the attribute would be partially
overwritten by the spill block pointer if spill over occurs. This also
causes an inconsistency between the SA header size and the number of
variable size attributes in the layout, tripping an assertion when
debugging is on. The following reproducer demonstrates the problem.
Even though sa_find_sizes() computes the index of the attribute where
spill-over will occur, sa_build_layouts() discards the result and
recomputes it itself. As it turns out, both functions get it wrong.
Since this computation is awkward and, as history has shown, easy to
screw up, let's just do it in one place. This patch fixes the bug in
sa_find_sizes() and updates sa_build_layout() to use the result
computed there.
Also improve the comments in sa_find_sizes().
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Tim Chase <tim@chase2k.com>
Closes #3070
When a dbuf is in the DB_EVICTING state it may no longer be on the
dn_dbufs list. In which case it's unsafe to call DB_DNODE_ENTER.
Therefore, any dbuf which is found in this state must be skipped.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2553
Closes #2495
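In the dn_dbufs walk the guard amounts to something like the following
sketch:

    /* A dbuf already being evicted may no longer be on dn_dbufs;
     * it is not safe to DB_DNODE_ENTER() it, so skip it. */
    if (db->db_state == DB_EVICTING)
        continue;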
Chunwei Chen [Fri, 23 Jan 2015 08:05:04 +0000 (16:05 +0800)]
Read spl_hostid module parameter before gethostid()
If spl_hostid is set via module parameter, it's likely different from
gethostid(). Therefore, the userspace tool should read it first before
falling back to gethostid().
Signed-off-by: Chunwei Chen <tuxoko@gmail.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3034
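A standalone sketch of the lookup order (the helper name and the sysfs path
for the module parameter are assumptions here; the real libzfs code differs):

    #include <stdio.h>
    #include <unistd.h>

    static unsigned long
    get_system_hostid(void)
    {
        unsigned long hostid = 0;
        FILE *fp = fopen("/sys/module/spl/parameters/spl_hostid", "r");

        /* Prefer the spl_hostid module parameter if it is set. */
        if (fp != NULL) {
            if (fscanf(fp, "%lu", &hostid) != 1)
                hostid = 0;
            fclose(fp);
        }

        /* A zero parameter means "unset"; fall back to gethostid(). */
        if (hostid == 0)
            hostid = (unsigned long)gethostid() & 0xffffffff;

        return (hostid);
    }

    int
    main(void)
    {
        printf("hostid: %08lx\n", get_system_hostid());
        return (0);
    }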
Tim Chase [Tue, 3 Feb 2015 05:55:20 +0000 (23:55 -0600)]
Spurious ENOMEM returns when reading dbufs kstat
Commit 7b2d78a046aa4695d434478a439a9438521d73af fixed some improper uses
of snprintf(); however, in __dbuf_stats_hash_table_data() the return
value of snprintf is propagated to the caller. This caused spurious
ENOMEM errors when reading the dbufs kstat.
This commit causes the actual number of characters written to be returned.
Signed-off-by: Tim Chase <tim@chase2k.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3072
avg [Thu, 6 Nov 2014 11:08:02 +0000 (11:08 +0000)]
fix l2arc compression buffers leak
Commit log from FreeBSD:
We have observed that arc_release() can be called concurrently with a
l2arc in-flight write. Also, we have observed that arc_hdr_destroy()
can be called from arc_write_done() for a zio with ZIO_FLAG_IO_REWRITE
flag in similar circumstances.
Previously the l2arc headers would be freed while leaking their
associated compression buffers. Now the buffers are placed on
l2arc_free_on_write list for delayed freeing. This is similar to
what was already done to arc buffers that were supposed to be freed
concurrently with in-flight writes of those buffers.
In addition to fixing the discovered leaks this change also adds
some protective code to assert that a compression buffer associated
with a l2arc header is never leaked.
A new kstat l2_cdata_free_on_write is added. It keeps a count
of delayed compression buffer frees which previously would have
been leaks.
Tested by: Vitalij Satanivskij <satan@ukr.net> et al
Requested by: many
MFC after: 2 weeks
Sponsored by: HybridCluster / ClusterHQ
Brian Behlendorf [Thu, 29 Jan 2015 23:09:51 +0000 (15:09 -0800)]
Use zio buffers in zil_itx_create()
The zil_itx_create() function uses the vmem_alloc() allocator for
its buffers because when logging a write that buffer may be as large
as 64K. This is non-optimal because we may need to allocate many
of these buffers and this interface has the potential to be slow.
Instead, use zio_data_buf_alloc() which is specifically designed to
be able to efficiently allocate a wide range of buffer sizes.
In addition, do some cleanup and use the zil_itx_destroy() function
to always free an itx structure. This way we're always sure the
right allocation functions are used. Notice that in the current
code kmem_free() and vmem_free() were both used. This happened to
work because these wrappers map to the same internal SPL function.
This was identified as a potential problem when a low-end memory
constrained system began logging the following warnings. There
was no deadlock here just repeated allocation failures resulting
in increased latency.
Chris Dunlap [Thu, 15 Jan 2015 00:31:21 +0000 (16:31 -0800)]
Cleanup _zed_event_add_nvpair()
When _zed_event_add_var() was updated to be the common routine
for adding zedlet environment variables, an additional snprintf()
was added to the processing of each nvpair. This commit changes
_zed_event_add_nvpair() to directly call _zed_event_add_var()
for nvpair non-array types, thereby removing a superfluous call to
snprintf(). For consistency, the helper functions for converting
nvpair array types are similarly adjusted to add variables.
The _zed_event_value_is_hex() and _zed_event_add_var() functions have
been moved up in the file since forward declarations are not used,
but no changes have been made to these functions.
Signed-off-by: Chris Dunlap <cdunlap@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3042
Chris Dunlap [Sun, 19 Oct 2014 19:05:07 +0000 (12:05 -0700)]
Protect against adding duplicate strings in ZED
The zed_strings container stores strings in an AVL, but does not
check for duplicate strings being added. Within the AVL, strings
are indexed by the string value itself. avl_add() requires the node
being added must not already exist in the tree, and will assert()
if this is not the case.
This should not cause problems in practice. ZED uses this container
in two places. In zed_conf.c, it is used to store the names of
enabled zedlets as zed scans the zedlet directory listing; duplicate
entries cannot occur here since duplicate names cannot occur within
a directory. In zed_event.c, it is used to store the environment
variables (as "NAME=VALUE" strings) that will be passed to zedlets;
duplicate strings here should never happen unless there is a bug
resulting in a duplicate nvpair or environment variable.
This commit protects against adding a duplicate to a zed_strings
container by first checking for the string being added, and removing
the previous entry should one exist. This implements a "last one
wins" policy.
This commit also changes the prototype for zed_strings_add() to allow
the string key (by which it is indexed in the AVL) to differ from
the string value. By adding zedlet environment variables using the
variable name as the key, multiple adds for the same variable name
will result in only the last value being stored.
Finally, this commit routes all additions of zedlet environment
variables through the updated _zed_event_add_var(). This ensures
all zedlet environment variable names are properly converted.
Signed-off-by: Chris Dunlap <cdunlap@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3042
Thanks to commit a4430fce691d492aec382de0dfa937c05ee16500 we're
now correctly returning EROFS when opening a zvol on a read-only
pool. Unfortunately, it looks like this causes us to trigger
some unexpected behavior by __blkdev_get().
In the failure case it's possible __blkdev_get() will call
__blkdev_put() for a bdev which was never successfully opened.
This results in us trying to close the device again and hitting
the NULL dereference.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1343
Brian Behlendorf [Thu, 30 Oct 2014 20:17:33 +0000 (13:17 -0700)]
Make `zpool import -d|-c` behave consistently
When importing pools with zpool import -aN there is inconsistent
behavior between '-d /dev/disk/by-id' (or another path) and
'-c /etc/zfs/zpool.cache'.
The difference in behavior is caused by zpool_find_import_cached()
returning an empty nvlist_t when there are no pools to import but
zpool_find_import_impl() returns NULL for the same situation. The
behavior of zpool_find_import_cached() is arguably more correct
because it allows returning NULL to be used for an error case and
not an empty set.
This change resolves the issue by updating get_configs() such that
it returns an empty set instead of NULL when no config is found.
The updated behavior will now always return 0 for this case.
$ zpool import -aN; echo $?
no pools available to import
0
$ zpool import -aN -d /var/tmp/; echo $?
no pools available to import
0
$ zpool import -aN -c /etc/zfs/zpool.cache; echo $?
no pools available to import
0
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2080
Brian Behlendorf [Wed, 28 Jan 2015 19:06:14 +0000 (11:06 -0800)]
Merge branch 'arc_summary_draft_v2'
Add a port of arc_summary.py to ZFS on Linux, arc_summary.py is a
standard tool in FreeBSD and Illumos. The version of the script used
for the port originally came from FreeNAS.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Kyle Blatter <kyleblatter@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
Closes #2920
Kyle Blatter [Mon, 12 Jan 2015 21:31:24 +0000 (13:31 -0800)]
Replace sysctl summary with tunables summary.
The original script displayed tunable parameters using sysctl calls.
This patch modifies this by displaying tunable parameters found in
/sys/module/zfs/parameters/. modinfo calls are used to capture
descriptions.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Kyle Blatter <kyleblatter@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
Kyle Blatter [Wed, 5 Nov 2014 18:08:45 +0000 (10:08 -0800)]
Refactor arc_summary to simplify -p processing
The -p option is used to specify a specific page of output to be
displayed. If omitted, all output pages would be displayed.
arc_summary, as it stood, had really kludgy processing code for
processing the -p option. It relied on a try-except block which was
treated as an if statement and in normal operation would fail any time a
user didn't specify the -p option on the command line. When the
exception was thrown, the script would then display all output pages.
This happened whether the -p option was omitted or malformed. Thus, in
the principle use case, an exception would be raised in order to run the
script as desired. The same except code would be called regardless of
the exception, however, and malformed -p arguments would also cause the
script to execute.
Additionally, this required the function which handles the case where
all output pages were to be displayed, _call_all, to be potentially
called from several locations within main.
This commit refactors the option processing code to simplify it and make
it easier to catch runtime errors in the script. This is done by
specializing the try-except block to only have an exception when the -p
argument is malformed. When the -p option is correctly selected by the
user, it calls a function in the unSub array directly, which will only
display one page of output.
Finally in the context of this refactoring the page breaks have been
removed. Pages seem to have been added into the output in the FreeNAS
version of the script. This patch removes pages from the output to more
closely resemble the FreeBSD version of the script.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Kyle Blatter <kyleblatter@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
cburroughs [Tue, 11 Feb 2014 15:14:55 +0000 (10:14 -0500)]
Modified arc_summary.py to run on linux
1) Comment out stat sections whose kstats are not currently available
2) Port most of arc_summary to use spl kstats
3) Enable l2arc stats
4) Include compressed l2size
5) Minor style fixes / cleanup
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: cburroughs <chris.burroughs@gmail.com> Signed-off-by: Kyle Blatter <kyleblatter@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
cburroughs [Tue, 11 Feb 2014 14:15:50 +0000 (09:15 -0500)]
Add arc_summary.py from FreeNAS
The arc_summary script is a useful utility for administrators on
other ZFS platforms. It provides a quick and easy way to get a high
level view of the current ARC state.
Historically this was a perl script but it was rewritten in python
for FreeNAS. We've decided to adopt the python version instead of
the perl version for a few reasons.
1) ZoL has no existing perl dependencies, but it does have a python
dependency for scripts such as arcstat.py and dbufstat.py. Using
python for arc_summary.py helps us minimize dependencies.
2) Most major Linux distributions already depend heavily on python
for their core infrastructure. This means it's very likely to
be available even very early in the boot process.
Original source:
https://github.com/freenas/freenas/blob/master/gui/tools/arc_summary.py
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: cburroughs <chris.burroughs@gmail.com> Signed-off-by: Kyle Blatter <kyleblatter@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
Tim Chase [Sun, 19 Oct 2014 03:50:01 +0000 (22:50 -0500)]
Fix removal of SA in sa_modify_attrs()
The sa_modify_attrs() function can add, remove or replace an SA.
The main loop in the function uses the index "i" to iterate over the
existing SAs and uses the index "j" for writing them into a new buffer
via SA_ADD_BULK_ATTR(). The write index, "j" is incremented on remove
(SA_REMOVE) operations which leads to a corruption in the new SA buffer.
This patch removes the increment for SA_REMOVE operations.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Tim Chase <tim@chase2k.com> Signed-off-by: Ned Bass <bass6@llnl.gov>
Closes #3028
Richard Yao [Sat, 11 Oct 2014 15:01:37 +0000 (11:01 -0400)]
Use kmem_vasprintf() in log_internal()
An attempt to debug zfsonlinux/zfs#2781 revealed that this code could be
simplified by using kmem_asprintf(). It is not clear that switching to
kmem_asprintf() addresses zfsonlinux/zfs#2781. However, switching to
kmem_asprintf() is cleanup that simplifies debugging such that it would
become clear that this is a bug in glibc should the issue persist.
It also brings this function almost back in sync with Illumos. This
was possible due to the recently reworked kmem code which allows us
to use KM_SLEEP in the same fashion as Illumos.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2791
Issue #2781
Brian Behlendorf [Fri, 16 Jan 2015 22:42:46 +0000 (14:42 -0800)]
Merge branch 'kmem-rework'
The core motivation behind these changes is to minimize the
memory management differences between ZFS on Linux and other
platforms. This simplifies the process of porting changes to
Linux from other platforms. This is good for code quality
and is expected to reduce the number of defects accidentally
introduced due to porting. The following key Linux specific
changes have been reverted.
* KM_PUSHPAGE changed back to KM_SLEEP. All contexts where
it is unsafe to perform IO have been marked with PF_FSTRANS.
This context specific mechanism is now used exclusively
and the KM_PUSHPAGE mechanism has been retired.
* The KM_NODEBUG flag has been retired. Allocations larger
than 32K should use vmem_alloc()/vmem_free(). Depending
on the size of the allocation either kmalloc() or vmalloc()
will be used internally, but no warning will be printed.
* Pre-allocated vdev IO buffers and the dedicated SA spill
block cache have been retired. It is now safe and reliable
to allocate buffers of the needed size without fear of
deadlocking. This reduces our memory footprint and paves
the way for larger block sizes.
Depends on zfsonlinux/spl#414.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Tim Chase <tim@chase2k.com>
Closes #2918
Brian Behlendorf [Tue, 16 Dec 2014 19:44:24 +0000 (11:44 -0800)]
Revert "SA spill block cache"
The SA spill_cache was originally introduced to avoid the need to
perform large kmem or vmem allocations. Instead a small dedicated
cache of preallocated SA buffers was kept.
This solution was viable while the maximum block size was limited
to 128K. But with the planned increase of the maximum block size
to 16M callers need to migrate to the zio_buf_alloc(). However,
they should be aware this interface is expected to change again
once the zio buffers are fully backed by scatter-gather lists.
Alternately, if the callers know these buffers will never be large
or be infrequently accessed they may kmem_alloc() or vmem_alloc()
the needed temporary space.
This change has the additional benefit of bringing the code back
inline with the upstream Illumos source.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Brian Behlendorf [Sat, 13 Dec 2014 00:40:21 +0000 (16:40 -0800)]
Revert "Pre-allocate vdev I/O buffers"
Commit 86dd0fd added preallocated I/O buffers. This is no longer
required after the recent kmem changes designed to make our memory
allocation interfaces behave more like those found on Illumos. A
deadlock in this situation is no longer possible.
However, these allocations still have the potential to be expensive.
So a potential future optimization might be to perform them KM_NOSLEEP
so that they either succeed or fail quickly. Either case is acceptable
here because we can safely abort the aggregation.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
As part of the spl kmem/vmem refactoring the kmem_cache_* functions
were split in to their own kmem_cache.h header. This was done in
part so that kmem_* consumers would not be forced to include the
kmem_cache_* functions which mask several Linux SLAB/SLUB functions.
Because of this we now must explicitly include kmem_cache.h in
zfs_context.h. However, consumers such as Lustre which need access
to the KM_FLAGS but not the kmem_cache_* functions can now safely
just include kmem.h.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Brian Behlendorf [Fri, 21 Nov 2014 00:09:39 +0000 (19:09 -0500)]
Change KM_PUSHPAGE -> KM_SLEEP
By marking DMU transaction processing contexts with PF_FSTRANS
we can revert the KM_PUSHPAGE -> KM_SLEEP changes. This brings
us back in line with upstream. In some cases this means simply
swapping the flags back. For others fnvlist_alloc() was replaced
by nvlist_alloc(..., KM_PUSHPAGE) and must be reverted back to
fnvlist_alloc() which assumes KM_SLEEP.
The one place KM_PUSHPAGE is kept is when allocating ARC buffers
which allows us to dip in to reserved memory. This is again the
same as upstream.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Callers of kmem_alloc() which passed the KM_NODEBUG flag to suppress
the large allocation warning have been replaced by vmem_alloc() as
appropriate. The updated vmem_alloc() call will not print a warning
regardless of the size of the allocation.
A careful reader will notice that not all callers have been changed
to vmem_alloc(). Some have only had the KM_NODEBUG flag removed.
This was possible because the default warning threshold has been
increased to 32k. This is desirable because it minimizes the need
for Linux specific code changes.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Richard Yao [Mon, 3 Nov 2014 14:42:44 +0000 (09:42 -0500)]
Use is_vmalloc_addr() in vdev_disk.c
The initial port of ZFS to Linux required a way to identify virtual
memory to make IO to virtual memory backed slabs work, so kmem_virt()
was created. Linux 2.6.25 introduced is_vmalloc_addr(), which is
logically equivalent to kmem_virt(). Support for kernels before 2.6.26
was later dropped and more recently, support for kernels before Linux
2.6.32 has been dropped. We retire kmem_virt() in favor of
is_vmalloc_addr() to cleanup the code.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Brian Behlendorf [Sun, 13 Jul 2014 18:35:19 +0000 (14:35 -0400)]
Mark IO pipeline with PF_FSTRANS
In order to avoid deadlocking in the IO pipeline it is critical that
pageout be avoided during direct memory reclaim. This ensures that
the pipeline threads can always make forward progress and never end
up blocking on a DMU transaction. For this very reason Linux now
provides the PF_FSTRANS flag which may be set in the process context.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This is a follow up commit to 74328ee which correctly resolved a lock
inversion between zfs_putpage() and zfs_free_range(). Unfortunately,
in the process it accidentally introduced another inversion between
zfs_putpage() and zfs_read(). The page must be unlocked before taking
the range lock. This patch corrects that issue.
In addition, because the locking rules here are subtle a block comment
has been added clearly explaining why the ordering here is critical.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Ned Bass <bass6@llnl.gov>
Issue #2976
Ned Bass [Tue, 23 Dec 2014 00:54:43 +0000 (16:54 -0800)]
Document zfs_flags module parameter
Add a table describing the debugging flags that can be set in the zfs_flags
module parameter. Also change the module_param type to 'uint' so users aren't
shown a negative value. The updated man page text is reproduced below for
convenience.
zfs_flags (int)
Set additional debugging flags. The following flags may be
bitwise-or'd together.
Ned Bass [Mon, 15 Dec 2014 21:53:00 +0000 (13:53 -0800)]
Don't use AC_LANG_SOURCE for conftest.h source
Using AC_LANG_SOURCE with some versions of autoconf is problematic if
the given source is to be written to a header file. Such versions assume
the contents are to be written to conftest.c and generate shell code to
that effect. The contents of the test program to detect support for
Linux tracepoints were consequently malformed (containing the source for
conftest.h) so the build system incorrectly disabled tracepoints
support. Fix this in ZFS_LINUX_TRY_COMPILE_HEADER by passing the header
source directly to ZFS_LINUX_COMPILE_IFELSE.
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #2953
Ned Bass [Sat, 13 Dec 2014 02:07:39 +0000 (18:07 -0800)]
Remove duplicate typedefs from trace.h
Older versions of GCC (e.g. GCC 4.4.7 on RHEL6) do not allow duplicate
typedef declarations with the same type. The trace.h header contains
some typedefs to avoid 'unknown type' errors for C files that haven't
declared the type in question. But this causes build failures for C
files that have already declared the type. Newer versions of GCC (e.g.
v4.6) allow duplicate typedefs with the same type unless pedantic error
checking is in force. To support the older versions we need to remove
the duplicate typedefs.
Removal of the typedefs means we can't build tracepoints code using
those types unless the required headers have been included. To
facilitate this, all tracepoint event declarations have been moved out
of trace.h into separate headers. Each new header is explicitly included
from the C file that uses the events defined therein. The trace.h header
is still indirectly included from zfs_context.h and provides the
implementation of the dprintf(), dbgmsg(), and SET_ERROR() interfaces.
This makes those interfaces readily available throughout the code base.
The macros that redefine DTRACE_PROBE* to use Linux tracepoints are also
still provided by trace.h, so it is a prerequisite for the other
trace_*.h headers.
These new Linux implementation-specific headers do introduce a small
divergence from upstream ZFS in several core C files, but this should
not present a significant maintenance burden.
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #2953
Brian Behlendorf [Fri, 19 Dec 2014 20:57:54 +0000 (12:57 -0800)]
Fix zfs_putpage() lock inversion
There exists a lock inversion involving the zfs range lock and the
individual page writeback bits which can result in a deadlock. To
prevent this we must always manipulate the writeback bit while
holding the range lock. The exact deadlock is as follows:
------ Process A ------            ------ Process B ------
zpl_writepages                     zpl_fallocate
write_cache_pages                  zpl_fallocate_common
zpl_putpage                        zfs_space
zfs_putpage (set bit)              zfs_freesp
zfs_range_lock (wait on lock)      zfs_free_range (take lock)
[has not yet initiated I/O,        truncate_inode_pages_range
the bit will not be cleared]       wait_on_page_writeback (wait on bit)
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Tim Chase <tim@chase2k.com> Signed-off-by: Richard Yao <richard.yao@clusterhq.com>
Issue #2976
Ned Bass [Wed, 17 Dec 2014 19:01:42 +0000 (11:01 -0800)]
vdev_id: use mawk-compatible regular expression
Slot mapping in vdev_id doesn't work on systems using mawk as the 'awk'
alternative. A regular expression in map_slot() contains an unquoted
empty string following the alternation (|) operator, which results in a
"missing operand" error with mawk. The solution is to rearrange the
expression so the alternation has two operands.
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes zfsonlinux/pkg-zfs#136
Closes zfsonlinux/zfs#2965
Improve systemd script to not leave stale sharetab
The systemd script zfs-share.service does 'zfs share -a' to share
any required datasets. Unfortunately, /etc/dfs/sharetab is stale
from the previous boot. Delete it before we share.
Signed-off-by: Dan Swartzendruber <dswartz@druber.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2883
Brian Behlendorf [Mon, 20 Oct 2014 21:37:47 +0000 (14:37 -0700)]
Fix snapshots with dirty inodes
Filesystems which are mounted read-only or are immutable because
they are snapshots must not be allowed to dirty an inode. Doing so
would result in a write, which would correctly cause a kernel panic
because these filesystems are (and must be) immutable.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2812
Isaac Huang [Tue, 21 Oct 2014 18:20:10 +0000 (12:20 -0600)]
bio_alloc() with __GFP_WAIT never returns NULL
Mark the error handling branch as unlikely() because the current
kernel interface can never return NULL. However, we want to keep
the error handling in case this behavior changes in the future.
Plus fix a small style issue.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Isaac Huang <he.huang@intel.com>
Closes #2703
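The shape of the check, as a sketch rather than the exact vdev_disk.c hunk:

    bio = bio_alloc(GFP_NOIO, nr_vecs);
    if (unlikely(bio == NULL))
        return (SET_ERROR(ENOMEM));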
Ned Bass [Fri, 14 Nov 2014 18:21:53 +0000 (10:21 -0800)]
Explicitly include SPL compat headers
Inclusion of SPL compatibility headers was moved out of the public
header sys/types.h to avoid conflicts with external packages. Include a
few compatibility headers explicitly to cope with that change. Also,
sort some linux-specific inclusions alphabetically.
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2898
Ned Bass [Fri, 7 Nov 2014 02:18:32 +0000 (18:18 -0800)]
Fix improper null-byte termination handling
Fix a few cases where null-byte termination of strings was done
unnecessarily or incorrectly.
- The snprintf() function always produces a null-byte terminated string
for non-negative return values, so it is not necessary to write out a
null-byte as a separate step.
- Also, it is unsafe to use the return value of snprintf() as an offset
for placing a null-byte, because if the output was truncated the return
value is the number of bytes that _would_ have been written had enough
space been available. Therefore the return value may index beyond the
array boundaries.
- Finally, snprintf() accounts for the null-byte when limiting its output
size, so there is no need to pass it a size parameter that is one less
than the buffer size.
Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2875
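The following small program (illustrative only, not ZFS code) shows both
pitfalls and the safe pattern:

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char buf[8];
        size_t used;

        /* snprintf() NUL-terminates, but on truncation it returns the
         * length the full string would have had, which exceeds the buffer. */
        int n = snprintf(buf, sizeof (buf), "%s", "a fairly long value");

        /* Wrong: buf[n] is out of bounds whenever the output was truncated. */
        /* buf[n] = '\0'; */

        /* Right: the string is already terminated; clamp n if an offset is needed. */
        used = (n < 0) ? 0 :
            ((size_t)n >= sizeof (buf) ? sizeof (buf) - 1 : (size_t)n);

        printf("returned %d, used %zu, buf=\"%s\"\n", n, used, buf);
        return (0);
    }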
smh [Thu, 16 Oct 2014 02:23:27 +0000 (02:23 +0000)]
Prevent ZFS leaking pool free space
When processing async destroys ZFS would leak space every txg timeout
(5 seconds by default), if no writes occurred, until the pool is totally
full. At this point it would be unfixable without a pool recreation.
In addition, if the machine was rebooted with the pool in this situation,
it would fail to import on boot, hanging indefinitely, as the import process
requires the ability to write data to the pool. Any attempts to query
the pool status during the hung import would not return as the import
holds the pool lock.
The only way to import such a pool would be to specify -o readonly=on
to the zpool import.
zdb -bb <pool> can be used to check for "deferred free" size which is
where this lost space will be counted.
This issue was filed as illumos 5347 and a more comprehensive fix is
under review. Once that change is finalized it will be integrated, in
the meanwhile the FreeBSD fix has been merged to prevent the issue.
Ported by: Tim Chase <tim@chase2k.com> Signed-off-by: Matthew Ahrens <mahrens@delphix.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2896
Tim Chase [Tue, 11 Nov 2014 05:26:33 +0000 (23:26 -0600)]
Undirty freed spill blocks.
If a spill block's dbuf hasn't yet been written when a spill block is
freed, the unwritten version will still be written. This patch handles
the case in which a spill block's dbuf is freed and undirties it to
prevent it from being written.
The most common case in which this could happen is when xattr=sa is being
used and a long xattr is immediately replaced by a short xattr as in:
The first value must be sufficiently long that a spill block is generated
and the second value must be short enough to not require a spill block.
In practice, this would typically happen due to internal xattr operations
as a result of setting acltype=posixacl.
Signed-off-by: Tim Chase <tim@chase2k.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2663
Closes #2700
Closes #2701
Closes #2717
Closes #2863
Closes #2884