Brian Behlendorf [Fri, 25 Oct 2013 20:58:45 +0000 (13:58 -0700)]
Add dbufstat.py command
The dbufstat.py command was added to provide a convenient way to
easily determine what ZFS is caching. The script consumes the
raw /proc/spl/kstat/zfs/dbufs kstat data and consolidates it
into a more human-readable form. This was designed primarily as
a tool to aid developers, but it may also be useful for advanced
users who want more visibility into what the ARC is caching.
When run without options dbufstat.py will default to showing a
list of all objects with at least one buffer present in the
cache. The total cache space consumed by that object will be
printed on the right along with the object type. Similar to the
arcstat.py command, the -x option may be used to display
additional fields.
Two other modes of operation are also supported by dbufstat.py
and the expectation is additional display modes may be added as
needed. The -t option will summarize the total number of bytes
cached for each object type, and the -b option will show every
dbuf currently cached.
The script was designed to be consistent with arcstat.py and
includes most of the same options and functionality.
Signed-off-by: Prakash Surya <surya1@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Currently there is no mechanism to inspect which dbufs are being
cached by the system. There are some coarse counters in arcstats
but they only give a rough idea of what's being cached. This patch
aims to improve the current situation by adding a new dbufs kstat.
When read, this new kstat will walk all cached dbufs linked into
the dbuf_hash. For each dbuf it will dump detailed information
about the buffer. It will also dump additional information about
the referenced arc buffer and its related dnode. This provides a
more complete view into exactly what is being cached.
With this generic infrastructure in place utilities can be written
to post-process the data to understand exactly how the caching is
working. For example, the data could be processed to show a list
of all cached dnodes and how much space they're consuming. Or a
similar list could be generated based on dnode type. Many other
ways to interpret the data exist based on what kinds of questions
you're trying to answer.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Prakash Surya <surya1@llnl.gov>
This change adds a new kstat to gain some visibility into the
amount of time spent in each call to dmu_tx_assign. A histogram
is exported via the new dmu_tx_assign file. The information
contained in this histogram is how frequently dmu_tx_assign
took a given range of time to complete.
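A minimal userspace sketch of the bucketing idea only (the kernel kstat
implementation differs; the names below are illustrative):
```
#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the bucketing idea: each call's elapsed time, in
 * microseconds, bumps the power-of-two bucket it falls into.
 */
#define	TX_HIST_BUCKETS	32

static uint64_t tx_hist[TX_HIST_BUCKETS];

static void
tx_hist_add(uint64_t elapsed_us)
{
	int bucket = 0;

	while ((1ULL << bucket) < elapsed_us && bucket < TX_HIST_BUCKETS - 1)
		bucket++;

	tx_hist[bucket]++;
}

int
main(void)
{
	uint64_t samples[] = { 1, 3, 7, 40000, 200000 };

	for (unsigned int i = 0; i < sizeof (samples) / sizeof (samples[0]); i++)
		tx_hist_add(samples[i]);

	for (int b = 0; b < 20; b++)
		printf("%llu us\t%llu\n",
		    (unsigned long long)(1ULL << b),
		    (unsigned long long)tx_hist[b]);

	return (0);
}
```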
Signed-off-by: Prakash Surya <surya1@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This change is an attempt to add visibility into how txgs are being
formed on a system, in real time. To do this, a list was added to the
in memory SPA data structure for a pool, with each element on the list
corresponding to a txg. These entries are then exported through the kstat
interface, which can then be interpreted in userspace.
For each txg, the following information is exported:
* Unique txg number (uint64_t)
* The time the txg was born (hrtime_t)
(*not* wall clock time; relative to the other entries on the list)
* The current txg state ((O)pen/(Q)uiescing/(S)yncing/(C)ommitted)
* The number of reserved bytes for the txg (uint64_t)
* The number of bytes read during the txg (uint64_t)
* The number of bytes written during the txg (uint64_t)
* The number of read operations during the txg (uint64_t)
* The number of write operations during the txg (uint64_t)
* The time the txg was closed (hrtime_t)
* The time the txg was quiesced (hrtime_t)
* The time the txg was synced (hrtime_t)
Note that the raw kstat stores relative hrtimes for the open,
quiesce, and sync times. Those relative times are used to
calculate how long each state took, and these deltas are printed
by the output handlers.
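An illustrative C view of one exported entry, assembled from the field
list above; the structure name and exact layout in the kstat code are
assumptions (hrtime_t values shown as uint64_t nanoseconds):
```
#include <stdint.h>

typedef struct txg_history_entry {
	uint64_t	txg;		/* unique txg number */
	uint64_t	birth;		/* relative time the txg was born */
	char		state;		/* 'O', 'Q', 'S' or 'C' */
	uint64_t	nreserved;	/* bytes reserved for the txg */
	uint64_t	nread;		/* bytes read during the txg */
	uint64_t	nwritten;	/* bytes written during the txg */
	uint64_t	reads;		/* read operations during the txg */
	uint64_t	writes;		/* write operations during the txg */
	uint64_t	closed_time;	/* relative time the txg was closed */
	uint64_t	quiesced_time;	/* relative time the txg was quiesced */
	uint64_t	synced_time;	/* relative time the txg was synced */
} txg_history_entry_t;
```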
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This change is an attempt to add visibility into the arc_read calls
occurring on a system, in real time. To do this, a list was added to the
in memory SPA data structure for a pool, with each element on the list
corresponding to a call to arc_read. These entries are then exported
through the kstat interface, which can then be interpreted in userspace.
For each arc_read call, the following information is exported:
* A unique identifier (uint64_t)
* The time the entry was added to the list (hrtime_t)
(*not* wall clock time; relative to the other entries on the list)
* The objset ID (uint64_t)
* The object number (uint64_t)
* The indirection level (uint64_t)
* The block ID (uint64_t)
* The name of the function originating the arc_read call (char[24])
* The arc_flags from the arc_read call (uint32_t)
* The PID of the reading thread (pid_t)
* The command or name of thread originating read (char[16])
From this exported information one can see, in real time, exactly what
is being read, what function is generating the read, and whether or not
the read was found to be already cached.
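For comparison, an equally illustrative layout of one exported arc_read
entry, again with assumed names and types drawn from the field list above:
```
#include <stdint.h>
#include <sys/types.h>

typedef struct read_history_entry {
	uint64_t	uid;		/* unique identifier */
	uint64_t	start;		/* relative time the entry was added */
	uint64_t	objset;		/* objset ID */
	uint64_t	object;		/* object number */
	uint64_t	level;		/* indirection level */
	uint64_t	blkid;		/* block ID */
	char		origin[24];	/* function originating the arc_read */
	uint32_t	arc_flags;	/* arc_flags from the arc_read call */
	pid_t		pid;		/* PID of the reading thread */
	char		comm[16];	/* command/name of the reading thread */
} read_history_entry_t;
```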
There is still some work to be done, but this should serve as a good
starting point.
Specifically, dbuf_read calls are not accounted for in the currently
exported information. Thus, a follow-up patch should probably be added
to export these calls that never call into arc_read (they only hit the
dbuf hash table). In addition, it might be nice to create a utility
similar to "arcstat.py" to digest the exported information and display
it in a more readable format. Or perhaps, log the information and allow
for it to be "replayed" at a later time.
Signed-off-by: Prakash Surya <surya1@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Brian Behlendorf [Fri, 11 Oct 2013 21:24:18 +0000 (14:24 -0700)]
Increase default udev wait time
When creating a new pool, or adding/replacing a disk in an existing
pool, partition tables will be automatically created on the devices.
Under normal circumstances it will take less than a second for udev
to create the expected device files under /dev/. However, it has
been observed that if the system is doing heavy IO concurrently udev
may take far longer. If you also throw in some cheap dodgy hardware
it may take even longer.
To prevent zpool commands from failing due to this, the default wait
time for udev is being increased to 30 seconds. This will have no
impact on normal usage; the increased timeout should only be noticed
if your udev rules are incorrectly configured.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1646
Richard Yao [Sat, 5 Oct 2013 21:55:24 +0000 (17:55 -0400)]
Linux 3.11 compat: Rename LZ4 symbols
Linus Torvalds merged LZ4 into Linux 3.11. This causes a conflict
whenever CONFIG_LZ4_DECOMPRESS=y or CONFIG_LZ4_COMPRESS=y are set in the
kernel's .config. We rename the symbols to avoid the conflict.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1789
Tim Chase [Sat, 12 Oct 2013 22:33:28 +0000 (17:33 -0500)]
Add missing dsl pool configuration lock
The semantics introduced by the restructured sync task of illumos
3464 require this lock when calling dmu_snapshot_list_next().
The pool is locked/unlocked for each iteration to reduce the
chance of long-running locks.
This was accidentally missed when doing the original port because
ZoL's control directory code is Linux-specific and is in a
different file than in illumos.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1785
Ported-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Porting notes:
This fixes an upstream regression that was introduced in commit
zfsonlinux/zfs@e51be06697762215dc3b679f8668987034a5a048, which
ported the Illumos 3552 changes. This fix was added to upstream
rather quickly, but at the time of the port, no one spotted it and
the race was rare enough that it passed our regression tests. I
discovered this when comparing our metaslab.c to the illumos
metaslab.c.
Without this change it is possible for metaslab_group_alloc() to
consume a large amount of cpu time. Since this occurs under a
mutex in an rcu critical section the kernel will log this to the
console as a self-detected cpu stall as follows:
INFO: rcu_sched self-detected stall on CPU { 0}
(t=60000 jiffies g=11431890 c=11431889 q=18271)
Richard Yao [Thu, 26 Sep 2013 17:44:10 +0000 (13:44 -0400)]
Fix libzfs_core changes to follow GNU libtool guidelines
The GNU libtool documentation states to start with a version of 0:0:0,
rather than 1:1:0. Illumos uses the name libzfs_core.so.1, so to be
consistent, we should go with 1:0:0.
The GNU libtool documentation also provides guidance on how the version
information should be incremented. Doing this does a SONAME bump of the
libzfs and libzpool libraries. This is particularly important on Gentoo
because a SONAME bump enables portage to retain the older libraries
until any packages that link to them are rebuilt. The main example of
this is GRUB2's grub2-mkconfig, which will break unless it is rebuilt
against the new libraries.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1751
Richard Yao [Thu, 26 Sep 2013 17:42:41 +0000 (13:42 -0400)]
Generate libraries with correct DT_NEEDED entries
Libraries that depend on other libraries should list them in ELF's
DT_NEEDED field so that programs linking to them do not need to specify
those libraries unless they depend on them as well. This is not the case
in the current code and the consequence is that anything that needs a
library must know its dependencies. This is fragile and caused GRUB2's
configure script to break when a dependency was added on libblkid in
libzfs.
This resolves that problem by using LIBADD/LDADD to specify libraries in
Makefile.am instead of LDFLAGS. This ensures that proper DT_NEEDED
entries are generated and prevents GRUB2's configure script from
breaking in the presence of a libblkid dependency. This also removes
unneeded dependencies from various files.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1751
Richard Yao [Wed, 28 Aug 2013 20:17:47 +0000 (16:17 -0400)]
Fix libblkid support
libblkid support is dormant because the autotools check is broken and
libblkid identifies ZFS vdevs as "zfs_member", not "zfs". We fix that
with a few changes:
First, we fix the libblkid autotools check to do a few things:
1. Make a 64MB file, which is the minimum size ZFS permits.
2. Make 4 fake uberblock entries to make libblkid's check succeed.
3. Return 0 upon success to make autotools use the success case.
4. Include stdlib.h to avoid implicit declaration of free().
5. Check for "zfs_member", not "zfs"
6. Make --with-blkid disable autotools check (avoids Gentoo sandbox violation)
7. Pass '-lblkid' correctly using LIBS not LDFLAGS.
Second, we change the libblkid support to scan for "zfs_member", not
"zfs".
This makes --with-blkid work on Gentoo.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1751
Ned Bass [Mon, 30 Sep 2013 23:29:37 +0000 (16:29 -0700)]
Export symbols dsl_pool_config_{enter,exit}
These are needed by consumers (i.e. Lustre) who wish to use the
dsl_prop_register() interface to register callbacks when pool
properties of interest change. This interface requires that the
DSL pool configuration lock is held when called.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1762
When building the spl with --enable-debug-kmem-tracking a memory
leak is detected in log_internal(). This happens to be a false
positive because the memory was freed using strfree() instead of
kmem_free(). All kmem_alloc()'s must be released with kmem_free()
to ensure correct accounting.
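A minimal sketch of the pairing rule being described, assuming SPL's
<sys/kmem.h> interfaces (the function and buffer names are illustrative,
not the log_internal() code):
```
#include <sys/kmem.h>	/* SPL kmem_alloc()/kmem_free()/strfree() */

static void
example_alloc_free(size_t size)
{
	char *buf = kmem_alloc(size, KM_SLEEP);

	/* ... use buf ... */

	/*
	 * Release with kmem_free() and the original size so the
	 * --enable-debug-kmem-tracking accounting balances; using
	 * strfree(buf) here would be reported as a leak.
	 */
	kmem_free(buf, size);
}
```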
Richard Yao [Thu, 5 Sep 2013 19:23:24 +0000 (15:23 -0400)]
Update drive database
Add Corsair Force GS drive (obtained from drive_id)
Add Kingston HyperX 3K (obtained from drive_id)
Add OCZ Vertex 4 drive (obtained from drive_id)
Add Samsung SM843T enterprise drive (obtained from drive_id)
Add entries for additional sizes of Intel 320/330/335/520 series
Add Crucial C400 (obtained from Illumos user's sd.conf)
Add Toshiba SSD (obtained from Illumos user's sd.conf)
Add Samsung's first SLC SSD (obtained from drive_id)
Add OCZ Core Series (obtained from drive_id)
Add Intel DC S3700 (obtained from drive_id)
Notes:
1. The drive identifier obtained for the Samsung SM843T was MZ7WD480. The
rest were extrapolated. The additional entries were checked with Google
to verify that such drives exist in the wild.
2. The additional entries for Intel drives were extrapolated from
existing entries. The additional entries were checked with Google to
verify that such drives exist in the wild.
3. The "ATA C400-MTFDDAC512M" and "ATA TOSHIBA THNSNH51" entries
are from the sd.conf of gcbirzan on freenode. Additional entries were
extrapolated from them and checked with Google.
4. I obtained the Samsung MCCOE64G entry from an actual drive. The
Samsung MCCOE32G entry was extrapolated from it and checked with
Google.
5. I obtained the SSDSC2BA10 from a 100GB Intel DC S3700 drive and
extrapolated the entries for the additional models.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1752
Tim Chase [Fri, 20 Sep 2013 14:30:04 +0000 (09:30 -0500)]
Allocate the ioctl "output" nvlist with KM_PUSHPAGE.
Some ZFS errors such as certain snapshot failures can occur in
the sync task context. Because they may require additional memory
allocations, the initial nvlist must be allocated with KM_PUSHPAGE.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1746
Issue #1737
Removing unneeded mutex for reading vq_pending_tree size
Locking mutex &vq->vq_lock in vdev_mirror_pending is unneeded:
* no data is modified
* only vq_pending_tree is read
* in case garbage is returned (eg. vq_pending_tree being updated
while the read is made) the worst case would be that a single
read could be queued on a mirror side which is busier than thought
The benefit of this change is streamlining of the code path since
it is taken for *every* mirror member on *every* read.
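A sketch of the streamlined path, assuming the usual vdev_queue_t/AVL
interfaces; the exact body of vdev_mirror_pending() may differ:
```
/* Unlocked, possibly slightly stale read of the pending I/O count. */
static int
vdev_mirror_pending(vdev_t *vd)
{
	vdev_queue_t *vq = &vd->vdev_queue;

	/*
	 * No mutex_enter(&vq->vq_lock)/mutex_exit() pair; a racy read
	 * only risks briefly misjudging how busy this mirror side is.
	 */
	return (avl_numnodes(&vq->vq_pending_tree));
}
```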
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1739
Reduce the stack usage of dsl_dataset_remove_clones_key
dsl_dataset_remove_clones_key does recursion, so if the recursion goes
deep it can overrun the Linux kernel stack size of 8KB. I have seen
this happen in an actual deployment, and subsequently confirmed it by
running a test workload on a custom-built kernel that uses a 32KB stack.
See the following stack trace as an example of the case where it would
have overrun the 8KB kernel stack:
This change reduces the stack usage in dsl_dataset_remove_clones_key
by allocating structures on the heap, not on the stack. This is not a
fundamental fix, as one can create an arbitrarily large dataset that
overruns any fixed-size stack, but this will make the problem far less
likely.
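A sketch of the pattern; which structures were actually moved to the heap
is an assumption here:
```
	/* Before: a large structure declared directly on the stack. */
	/* zap_attribute_t za; */

	/* After: the same structure allocated from the heap instead. */
	zap_attribute_t *za;

	za = kmem_alloc(sizeof (zap_attribute_t), KM_SLEEP);
	/* ... recurse and iterate using za ... */
	kmem_free(za, sizeof (zap_attribute_t));
```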
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Kohsuke Kawaguchi <kk@kohsuke.org>
Closes #1726
Brian Behlendorf [Fri, 13 Sep 2013 20:20:15 +0000 (13:20 -0700)]
Fix zpl_mknod() return values
The zpl_mknod() function was incorrectly negating its return value.
This doesn't cause any problems in the success case, but it does
prevent us from returning the correct error code for a failure.
The implementation of this function is now consistent with all
the other zpl_* functions.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1717
Brian Behlendorf [Fri, 13 Sep 2013 20:10:36 +0000 (13:10 -0700)]
Fix uninitialized variables
When compiling on an ARM device using gcc 4.7.3 several variables
in the zfs_obj_to_path_impl() function were flagged as uninitialized.
To resolve the warnings explicitly initialize them to zero.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1716
Richard Yao [Tue, 10 Sep 2013 19:13:44 +0000 (15:13 -0400)]
Stop runtime pointer modifications in autotools checks
c38367c73f592ca9729ba0d5e70b5e3bc67e0745 was meant to eliminate runtime
function pointer modifications in autotools checks because they were
prone to false negatives on kernels hardened by the PaX project.
Unfortunately, I missed the xattr_handler and super_block->s_bdi
autotools checks. Recent changes to PaX constified
xattr_handler->get/set, which led me to discover this oversight.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1433
Tim Chase [Wed, 11 Sep 2013 18:47:43 +0000 (11:47 -0700)]
Fix dmu_objset_find_dp() KM_SLEEP warning
After the restructuring in 13fe019 the 'zfs rename' command will
result in a KM_SLEEP allocation in the sync context. This may
deadlock due to reclaim so it was changed to KM_PUSHPAGE.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1711
Matthew Ahrens [Wed, 28 Aug 2013 11:45:09 +0000 (06:45 -0500)]
Illumos #2882, #2883, #2900
2882 implement libzfs_core
2883 changing "canmount" property to "on" should not always remount dataset
2900 "zfs snapshot" should be able to create multiple, arbitrary snapshots at once
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Chris Siden <christopher.siden@delphix.com>
Reviewed by: Garrett D'Amore <garrett@damore.org>
Reviewed by: Bill Pijewski <wdp@joyent.com>
Reviewed by: Dan Kruchinin <dan.kruchinin@gmail.com>
Approved by: Eric Schrock <Eric.Schrock@delphix.com>
Ported-by: Tim Chase <tim@chase2k.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1293
Porting notes:
WARNING: This patch changes the user/kernel ABI. That means that
the zfs/zpool utilities built from master are NOT compatible with
the 0.6.2 kernel modules. Ensure you load the matching kernel
modules from master after updating the utilities. Otherwise the
zfs/zpool commands will be unable to interact with your pool and
you will see errors similar to the following:
$ zpool list
failed to read pool configuration: bad address
no pools available
$ zfs list
no datasets available
Add zvol minor device creation to the new zfs_snapshot_nvl function.
Remove the logging of the "release" operation in
dsl_dataset_user_release_sync(). The logging caused a null dereference
because ds->ds_dir is zeroed in dsl_dataset_destroy_sync() and the
logging functions try to get the ds name via the dsl_dataset_name()
function. I've got no idea why this particular code would have worked
in Illumos. This code has subsequently been completely reworked in
Illumos commit 3b2aab1 (3464 zfs synctask code needs restructuring).
Squash some "may be used uninitialized" warnings/errors.
Fix some printf format warnings for %lld and %llu.
Apply a few spa_writeable() changes that were made to Illumos in
illumos/illumos-gate.git@cd1c8b8 as part of the 3112, 3113, 3114 and
3115 fixes.
Add a missing call to fnvlist_free(nvl) in log_internal() that was added
in Illumos to fix issue 3085 but couldn't be ported to ZoL at the time
(zfsonlinux/zfs@9e11c73) because it depended on future work.
Brian Behlendorf [Thu, 22 Aug 2013 20:06:33 +0000 (13:06 -0700)]
Use directory xattrs for symlinks
There is currently a subtle bug in the SA implementation which
can crop up and prevents us from safely using multiple variable
length SAs in one object.
Fortunately, the only existing use case for this are symlinks with
SA based xattrs. Therefore, until the root cause in the SA code
can be identified and fixed we prevent adding SA xattrs to symlinks.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1468
Richard Yao [Sat, 10 Aug 2013 12:24:40 +0000 (08:24 -0400)]
Implement database to workaround misreported physical sector sizes
This implements vdev_bdev_database_check(). It alters the detected
sector size of any device listed in a database of drives known to lie
about their physical sector sizes.
This is based on "6931570 Add flash devices' VID/PID to disk table to
advertising 4K physical sector size" from Open Solaris and on
sg_simple4.c from sg3_utils. About two dozen lines are taken from
sg_simple4.c, which is GPLv2 licensed. However, sg_simple4.c is
analogous to a Hello World program and is safe for us to use. We
requested that Douglas Gilbert, the author of sg_simple4.c, confirm that
this is the case. A cutdown version of his response is as follows:
```
I would consider a SCSI INQUIRY example using the Linux sg
driver interface (also written by me) as the equivalent of an
"hello world" program in C.
```
The database was created with the help of the freenode and ZFSOnLinux
communities.
Some notes:
1. The following drives both were confirmed to lie via reports in IRC
and they contain capacity information in their identifiers:
The identifiers for different capacity models were extrapolated and
added under the assumption that those models also lie. Google was used
to verify that the extrapolated drive identifiers existed prior to their
inclusion.
2. The OCZ-VERTEX2 3.5 identifier applies to two drives that differ
solely in page size (and slightly in capacity). One uses 4096-byte pages
and the other uses 8192-byte pages. Both are set to use 8192-byte pages.
We could detect the page size by checking the capacity, but that would
unnecessarily complicate the code.
3. It is possible for updated drive firmware to correctly report the
sector size. There were reports of a few advanced format drives doing
that. One report stated that the vendor changed the identification
string while another was unclear on this. Both reports involved WDC
models.
4. Google was used to determine the size of pages in the listed flash
devices. Reports of 8192-byte pages took precedence over reports of
4096-byte pages.
5. Devices behind USB adapters can have their identification strings
altered. Identification strings obtained across USB adapters are
omitted and no attempt is made to correct for alterations made by USB
adapters when doing comparisons against the database. Two entries in the
Open Solaris database that appear to have been altered by a USB
adapter were omitted.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1652
Richard Yao [Wed, 7 Aug 2013 12:53:45 +0000 (08:53 -0400)]
Linux 3.11 compat: fops->iterate()
Commit torvalds/linux@2233f31aade393641f0eaed43a71110e629bb900
replaced ->readdir() with ->iterate() in struct file_operations.
All filesystems must now use the new ->iterate method.
To handle this the code was reworked to use the new ->iterate
interface. Care was taken to keep the majority of changes
confined to the ZPL layer which is already Linux specific.
However, minor changes were required to the common zfs_readdir()
function.
Compatibility with older kernels was accomplished by adding
versions of the trivial dir_emit* helper functions. Also the
various *_readdir() functions were reworked into wrappers
which create a dir_context structure to pass to the new
*_iterate() functions.
Unfortunately, the new dir_emit* functions prevent us from
passing a private pointer to the filldir function. The xattr
directory code leveraged this ability through zfs_readdir()
to generate the list of xattr names. Since we can no longer
use zfs_readdir() a simplified zpl_xattr_readdir() function
was added to perform the same task.
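A sketch of the kind of compatibility shim described for kernels that
still expect a filldir callback; it mirrors the 3.11 dir_emit() interface
but is written from the description above, not copied from the ZoL headers
(the HAVE_VFS_ITERATE name is an assumed autoconf result):
```
#ifndef HAVE_VFS_ITERATE
struct dir_context {
	void		*dirent;	/* opaque cookie for the old filldir */
	const filldir_t	actor;		/* the caller's filldir callback */
	loff_t		pos;
};

static inline bool
dir_emit(struct dir_context *ctx, const char *name, int namelen,
    uint64_t ino, unsigned type)
{
	/* Old filldir returns non-zero to stop; dir_emit returns true to go on. */
	return (ctx->actor(ctx->dirent, name, namelen, ctx->pos,
	    ino, type) == 0);
}
#endif /* HAVE_VFS_ITERATE */
```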
Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1653
Issue #1591
Brian Behlendorf [Wed, 14 Aug 2013 23:18:58 +0000 (16:18 -0700)]
Fix z_wr_iss_h zio_execute() import hang
Because we need to be more frugal about our stack usage under
Linux, the __zio_execute() function was modified to re-dispatch
zios to a ZIO_TASKQ_ISSUE thread when we're in a context which
is known to be stack heavy. Those two contexts are the sync
thread and whatever thread is performing spa initialization.
Unfortunately, this change introduced an unlikely bug which can
result in a zio being re-dispatched indefinitely and never being
executed. If during spa initialization we handle a zio with
ZIO_PRIORITY_NOW it will be moved to the high priority queue.
When __zio_execute() is called again for the zio it will
misinterpret the context and re-dispatch it again. The system
will get stuck spinning re-dispatching the zio and making no
forward progress.
To fix this rare issue __zio_execute() has been updated not
to re-dispatch zios on either the ZIO_TASKQ_ISSUE or
ZIO_TASKQ_ISSUE_HIGH task queues.
In practice this issue was rarely reported and can usually
be fixed by rebooting the system and importing the pool again.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1455
There is no point in calling rewind() on mtab in zfs_unshare_proto().
We're not really reading the file; instead we use libzfs_mnttab_find(),
which does the necessary freopen() for us.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1498
John Layman [Tue, 13 Aug 2013 19:24:58 +0000 (15:24 -0400)]
Fix for re-reading /etc/mtab in zfs_is_mounted()
When /etc/mtab is updated on Linux it's done atomically with
rename(2). A new mtab is written, the existing mtab is unlinked,
and the new mtab is renamed to /etc/mtab. This means that we
must close the old file and open the new file to get the updated
contents. Using rewind(3) will just move the file pointer back
to the start of the file; freopen(3) will close and reopen the file.
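A minimal userspace illustration of the difference (not the libzfs code
itself):
```
#include <stdio.h>

/*
 * /etc/mtab is replaced atomically via rename(2), so an already-open
 * FILE * refers to the old, unlinked inode.  rewind(fp) would re-read
 * stale data; freopen() closes the stream and reopens the path,
 * picking up the new file.
 */
static FILE *
reopen_mtab(FILE *fp)
{
	return (freopen("/etc/mtab", "r", fp));
}
```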
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1611
3098 zfs userspace/groupspace fail without saying why when run as non-root
Reviewed by: Eric Schrock <eric.schrock@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Matthew Ahrens [Thu, 21 Mar 2013 22:47:36 +0000 (14:47 -0800)]
Illumos #3618 ::zio dcmd does not show timestamp data
3618 ::zio dcmd does not show timestamp data
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: George Wilson <gwilson@zfsmail.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Garrett D'Amore <garrett@damore.org>
Approved by: Dan McDonald <danmcd@nexenta.com>
The original changeset mostly deals with mdb ::zio dcmd.
However, in order to provide the requested functionality
it modifies vdev and zio structures to keep the timing data
in nanoseconds instead of ticks. It is these changes that
are ported over in the commit at hand.
One visible change of this commit is that the default value
of 'zfs_vdev_time_shift' tunable is changed:
zfs_vdev_time_shift = 6
to
zfs_vdev_time_shift = 29
The original value of 6 was inherited from OpenSolaris and
was suboptimal - since it shifted the raw tick value - it
didn't compensate for different tick frequencies on Linux and
OpenSolaris. The former has HZ=1000, while the latter has HZ=100.
(Which itself led to other interesting performance anomalies
under non-trivial load. The deadline scheduler delays the IO
according to its priority - the lower priority the further
the deadline is set. The delay is measured in units of
"shifted ticks". Since the HZ value was 10 times higher,
the delay units were 10 times shorter. Thus really low
priority IO like resilver (delay is 10 units) and scrub
(delay is 20 units) were scheduled much sooner than intended.
The overall effect is that resilver and scrub IO consumed
more bandwidth at the expense of the other IO.)
Now that the bookkeeping is done in nanoseconds the shift
behaves correctly for any tick frequency (HZ).
Ported-by: Cyril Plisko <cyril.plisko@mountall.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1643
Richard Yao [Sun, 14 Jul 2013 16:59:24 +0000 (12:59 -0400)]
Linux 3.8 compat: Support CONFIG_UIDGID_STRICT_TYPE_CHECKS
When CONFIG_UIDGID_STRICT_TYPE_CHECKS is enabled uid_t/gid_t are
replaced by kuid_t/kgid_t, which are structures instead of integral
types. This causes any code that uses an integral type to fail to build.
The User Namespace functionality introduced in Linux 3.8 requires
CONFIG_UIDGID_STRICT_TYPE_CHECKS, so we could not build against any
kernel that supported it.
We resolve this by converting between the new kuid_t/kgid_t structures
and the original uid_t/gid_t types.
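A sketch of the conversions involved, using the kernel's generic helpers
from linux/uidgid.h; whether ZoL wraps these in its own macros, and which
user namespace is passed, are assumptions:
```
#include <linux/uidgid.h>

	/* Reading: kuid_t/kgid_t in the inode -> plain uid_t/gid_t for ZFS. */
	uid_t uid = from_kuid(&init_user_ns, inode->i_uid);
	gid_t gid = from_kgid(&init_user_ns, inode->i_gid);

	/* Writing: plain ids from ZFS -> kuid_t/kgid_t for the VFS. */
	inode->i_uid = make_kuid(&init_user_ns, uid);
	inode->i_gid = make_kgid(&init_user_ns, gid);
```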
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1589
Brian Behlendorf [Thu, 25 Jul 2013 17:39:31 +0000 (10:39 -0700)]
Evict meta data from ghost lists + l2arc headers
When the meta limit is exceeded the ARC evicts some meta data
buffers from the mfu+mru lists. Unfortunately, for meta data
heavy workloads it's possible for these buffers to accumulate
on the ghost lists if arc_c doesn't exceed arc_size.
To handle this case arc_adjust_meta() has been extended to
explicitly evict meta data buffers from the ghost lists in
proportion to what was evicted from the mfu+mru lists.
If this is insufficient we request that the VFS release
some inodes and dentries. This will result in the release
of some dnodes which are counted as 'other' metadata.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Brian Behlendorf [Thu, 25 Jul 2013 17:28:45 +0000 (10:28 -0700)]
Allow arc_evict_ghost() to only evict meta data
The default behavior of arc_evict_ghost() is to start by evicting
data buffers. Then only if the requested number of bytes to evict
cannot be satisfied by data buffers move on to meta data buffers.
This is ideal for honoring arc_c since it's preferable to keep the
meta data cached. However, if we're trying to free memory from the
arc to honor the meta limit it's a problem because we will need to
discard all the data to get to the meta data.
To avoid this issue arc_evict_ghost() is now passed a fourth
argument describing which buffer type to start with. The
arc_evict() function already behaves exactly like this for the
same reason, so this is consistent with the existing code.
All existing callers have been updated to pass ARC_BUFC_DATA so
this patch introduces no functional change. New callers may
pass ARC_BUFC_METADATA to skip immediately to evicting meta
data leaving the normal data untouched.
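Sketched call sites for the two behaviors; the prototype is inferred from
the description above and may not match arc.c exactly:
```
	/* Existing callers: keep the old data-first behavior (0 == any spa). */
	arc_evict_ghost(arc_mru_ghost, 0, bytes, ARC_BUFC_DATA);

	/* Meta limit path: go straight to evicting meta data buffers. */
	arc_evict_ghost(arc_mru_ghost, 0, bytes, ARC_BUFC_METADATA);
```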
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Saso Kiselkov [Thu, 1 Aug 2013 20:02:10 +0000 (13:02 -0700)]
Illumos #3137 L2ARC compression
3137 L2ARC compression
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@nexenta.com>
A l2arc_nocompress module option was added to prevent the
compression of l2arc buffers regardless of how a dataset's
compression property is set. This allows the legacy behavior
to be preserved.
Ported by: James H <james@kagisoft.co.uk> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1379
Richard Yao [Sun, 4 Aug 2013 23:13:15 +0000 (19:13 -0400)]
Return -1 from arc_shrinker_func()
This is analogous to SPL commit zfsonlinux/spl@b9b3715. While
we don't have clear evidence of systems getting caught here
indefinately like in the SPL this ensures that it will never
happen.
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1579
Richard Yao [Sun, 4 Aug 2013 18:58:45 +0000 (14:58 -0400)]
Return correct type and offset from zfs_readdir
zfs_readdir() is used by getdents(), which provides a list of all files
in a directory, their types, and an offset that can be used by llseek()
to seek
to the next directory entry.
On Solaris, the first two directory entries "." and ".." respectively
have offsets 1 and 2 on ZFS while the other files have rather large
numbers. Currently, ZFSOnLinux is giving "." offset 0 and all other
entries large numbers. The first entry's next entry offset points to
itself, which causes software that uses llseek() in conjunction with
getdents() for filesystem navigation to enter an infinite loop. The
offsets used for each directory entry are filesystem specific on all
platforms, so we can fix this by adopting the Solaris behavior.
Also, we currently report each directory entry as having type 0 (???).
This is not wrong, but we can do better. getdents() on Solaris does not
appear to provide this information, but Linux and Mac OS X
do. ZFS provides easy access to type information in zfs_readdir(), so
this patch provides it as well.
Reported-by: Andrey <andrey@kudinov.su> Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1624
George Wilson [Fri, 5 Jul 2013 19:14:17 +0000 (15:14 -0400)]
Illumos #3639 zpool.cache should skip over readonly pools
3639 zpool.cache should skip over readonly pools
Reviewed by: Eric Schrock <eric.schrock@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Basil Crow <basil.crow@delphix.com>
Approved by: Gordon Ross <gwr@nexenta.com>
Normally we don't list pools that are imported read-only in the cache
file, however you can accidentally get one into the cache file by
importing and exporting a read-write pool while a read-only pool is
imported:
Brian Behlendorf [Fri, 26 Jul 2013 17:38:49 +0000 (10:38 -0700)]
Write dirty inodes on close
When the property atime=on is set, operations which only access
an inode do cause an atime update. However, it turns out that
dirty inodes with updated atimes are only written to disk when
the inodes get evicted from the cache. Somewhat surprisingly,
the source suggests that this isn't a ZoL-specific issue.
This behavior may in part explain why zfs's reclaim logic has
been observed to be slow. When reclaiming inodes it's likely
that they have a dirty atime which will force a write to disk.
Obviously we don't want to force a write to disk for every
atime update; these need to be batched. The right way to
do this is to fully implement the .dirty_inode and .write_inode
callbacks. However, to do that right requires proper unification
of some fields in the znode/inode. Then we could just mark the
inode dirty and leave it to the VFS to call .write_inode
periodically.
Until that work gets done we have to settle for some middle
ground. The simplest and safest thing we can do for now is
to write the dirty inode on last close. This should prevent
the majority of inodes in the cache from having dirty atimes
and not drastically increase the number of writes.
Some rudimentary testing to show how long it takes to drop
500,000 inodes from the cache shows promising results. This
is as expected because we no longer do lots of IO as part
of the eviction; it was done earlier during the close.
w/out patch: ~30s to drop 500,000 inodes with drop_caches.
with patch: ~3s to drop 500,000 inodes with drop_caches.
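A sketch of the "write on last close" idea in the ZPL release path; the
exact hook and the z_atime_dirty check are assumptions about how the
patch detects a dirty atime:
```
static int
zpl_release(struct inode *ip, struct file *filp)
{
	/*
	 * If only the atime is dirty, push the inode out now so a later
	 * eviction doesn't have to issue the write.
	 */
	if (ITOZ(ip)->z_atime_dirty)
		write_inode_now(ip, 1);

	/* ... normal release handling ... */
	return (0);
}
```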
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Brian Behlendorf [Sat, 27 Jul 2013 11:42:57 +0000 (04:42 -0700)]
Add kmod repo integration
When the kmod packaging infrastructure was originally added the
dependency on the rpmfusion yum repositories was disabled. This
was done at the time in favour of getting local builds working.
Now the time has come to conditionally re-enable that functionality
so we can properly provide binary kmod packages.
./configure --with-config=srpm
make SRPM_DEFINE_KMOD='--define="repo rpmfusion"' srpm-kmod
mock rebuild zfs-kmod-x.y.z-r.el6.src.rpm
One nice benefit of finishing this work is that the generic and
fedora spl-kmod spec files can be merged again.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
The dmu_prefetch, dmu_free_long_range, dmu_free_object,
dmu_prealloc, dmu_write_policy, and dmu_sync symbols have
been exported so they may be used by other modules.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Nathaniel Clark [Tue, 23 Jul 2013 17:32:57 +0000 (13:32 -0400)]
dmu_tx: Fix possible NULL pointer dereference
dmu_tx_hold_object_impl can return NULL on error. Check for this
condition prior to dereferencing the pointer. This can only occur if
the passed object was invalid or unallocated.
Signed-off-by: Nathaniel Clark <Nathaniel.Clark@misrule.us> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1610
Brian Behlendorf [Wed, 24 Jul 2013 16:57:56 +0000 (09:57 -0700)]
Change l2arc_norw default to zero
These days modern SSDs can efficiently service concurrent reads
and writes. When this flag was added that wasn't really the
case for a variety of SSD controllers. But now we can set the
default value to take advantage of this parallelism and only
disable this as needed for specific troublesome hardware.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Ying Zhu [Sat, 22 Jun 2013 12:35:18 +0000 (20:35 +0800)]
Fix inaccurate arcstat_l2_hdr_size calculations
Based on the comments in arc.c we know that buffers can exist both
in arc and l2arc, under this circumstance both arc_buf_hdr_t and
l2arc_buf_hdr_t will be allocated. However the current logic only
cares for memory that l2arc_buf_hdr takes up when the buffer's
state transfers from or to arc_l2c_only. This will cause obvious
deviations for illumos's zfs version since the sizeof(l2arc_buf_hdr)
is larger than ZOL's. We can implement the calculation in the
following simple way:
1. When allocating an l2arc_buf_hdr_t we add its memory consumption
instantly and subtract it when we free or evict the l2arc buf.
2. According to l2arc_hdr_stat_add and l2arc_hdr_stat_remove, if
the buffer only stays in l2arc we should also add the memory
its arc_buf_hdr_t consumes, so we only need to add HDR_SIZE to
arcstat_l2_hdr_size since we already accounted for L2HDR_SIZE in
step 1, and the same applies when transferring arc bufs from the
l2arc-only state.
The testbox has two 4-core Intel Xeon CPUs (2.13GHz) and 16GB of
memory, and the tests were set up in the following way:
1. Fdisked a SATA disk into two partitions, one partition for zpool
storage and the other one was used as the cache device.
2. Generated some files occupying 14GB altogether in the zpool
prepared in step 1 using iozone.
3. Read them all using md5sum and watched the l2arc related statistics
in /proc/spl/kstat/zfs/arcstats. After the reading ended the
l2_hdr_size and l2_size were shown like this:
these numbers made sense: on 64-bit systems the
sizeof(l2arc_buf_hdr_t) is 16 bytes. Assume all blocks cached by
l2arc are 128KB, so 535600/16*128*1024=4387635200; since not all
blocks are equal-sized, the theoretical result will be a little
bigger, as we can see.
Since I'm familiar with the systemtap instrumentation tool, I used it to
examine what had happened. The script looked like this:
probe module("zfs").function("arc_change_state")
{
if ($new_state == $arc_l2_only)
printf("change arc buf to arc_l2_only\n")
}
It will print out some information each time we call the function
arc_change_state if the argument new_state is arc_l2_only. I
gathered the trace logs and found that none of the arc bufs ran
into the arc state arc_l2_only while the tests were running; this was
the reason why l2_hdr_size in step 3 was 0. The arc bufs fell into
arc_l2_only when the pool or the filesystem was offlined.
Signed-off-by: Ying Zhu <casualfisher@gmail.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Brian Behlendorf [Mon, 15 Jul 2013 20:37:51 +0000 (13:37 -0700)]
Fix arc_adapt() spinning in iterate_supers_type()
The iterate_supers_type() function which was introduced in the
3.0 kernel was supposed to provide a safe way to call an arbitrary
function on all super blocks of a specific type. Unfortunately,
because a list_head was used a bug was introduced which made it
possible for iterate_supers_type() to get stuck spinning on a
super block which was just deactivated.
This can occur because when the list head is removed from the
fs_supers list it is reinitialized to point to itself. If the
iterate_supers_type() function happened to be processing the
removed list_head it will get stuck spinning on that list_head.
The bug was fixed in the 3.3 kernel by converting the list_head
to an hlist_node. However, to resolve the issue for existing
3.0 - 3.2 kernels we detect when a list_head is used. Then to
prevent the spinning from occurring the .next pointer is set to
the fs_supers list_head which ensures the iterate_supers_type()
function will always terminate.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1045
Closes #861
Closes #790
Brian Behlendorf [Wed, 17 Jul 2013 16:15:46 +0000 (09:15 -0700)]
Fix read-only pool hang on unmount
During mount a filesystem dataset would have the MS_RDONLY bit
incorrectly cleared even if the entire pool was read-only.
There is existing code to handle this case, but it was being run
before the property callbacks were registered. To resolve the
issue we move this read-only code after the callback registration.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1338
Brian Behlendorf [Thu, 11 Jul 2013 21:11:32 +0000 (14:11 -0700)]
Fix zfsctl_expire_snapshot() deadlock
It is possible for an automounted snapshot which is expiring to
deadlock with a manual unmount of the snapshot. This can occur
because taskq_cancel_id() will block if the task is currently
executing until it completes. But it will never complete because
zfsctl_unmount_snapshot() is holding the zsb->z_ctldir_lock which
zfsctl_expire_snapshot() must acquire.
---------------------- z_unmount/0:2153 ---------------------
mutex_lock <blocking on zsb->z_ctldir_lock>
zfsctl_unmount_snapshot
zfsctl_expire_snapshot
taskq_thread
------------------------- zfs:10690 -------------------------
taskq_wait_id <waiting for z_unmount to exit>
taskq_cancel_id
__zfsctl_unmount_snapshot
zfsctl_unmount_snapshot <takes zsb->z_ctldir_lock>
zfs_unmount_snap
zfs_ioc_destroy_snaps_nvl
zfsdev_ioctl
do_vfs_ioctl
We resolve the deadlock by dropping the zsb->z_ctldir_lock before
calling __zfsctl_unmount_snapshot(). The lock is only there to
prevent concurrent modification to the zsb->z_ctldir_snaps AVL
tree. Moreover, we're careful to remove the zfs_snapentry_t from
the AVL tree before dropping the lock which ensures no other tasks
can find it. On failure it's added back to the tree.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Chris Dunlap <cdunlap@llnl.gov>
Closes #1527
Brian Behlendorf [Thu, 11 Jul 2013 22:33:10 +0000 (15:33 -0700)]
Add dkms_version conditional
By adding a dkms_version conditional it's now possible to specify
an exact version of dkms. This is used by the Fedora and EPEL
yum repositories to ensure the patched version of dkms provided
by the repository is installed. The patched version of dkms
ensures that the spl modules are built before the zfs modules.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1466
Brian Behlendorf [Fri, 31 May 2013 19:07:59 +0000 (12:07 -0700)]
Improve N-way mirror performance
The read bandwidth of an N-way mirror can be increased by 50%,
and the IOPs by 10%, by more carefully selecting the preferred
leaf vdev.
The existing algorithm selects a preferred leaf vdev based on the
offset of the zio request modulo the number of members in the
mirror. It assumes the drives are of equal performance and
that spreading the requests randomly over both drives will be
sufficient to saturate them. In practice this results in the
leaf vdevs being underutilized.
Utilization can be improved by preferentially selecting the leaf
vdev with the least pending IO. This prevents leaf vdevs from
being starved and compensates for performance differences between
disks in the mirror. Faster vdevs will be sent more work and
the mirror performance will not be limited by the slowest drive.
In the common case where all the pending queues are full and there
is no single least-busy leaf vdev, a batching strategy is employed.
Of the N least busy vdevs one is selected with equal probability
to be the preferred vdev for T microseconds. Compared to randomly
selecting a vdev to break the tie, batching the requests greatly
improves the odds of merging the requests in the Linux elevator.
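A simplified, self-contained sketch of the selection policy; pending-I/O
counts stand in for the real per-member queue sizes, and the batching
tie-break is only noted in a comment:
```
/*
 * Pick the mirror member with the fewest pending I/Os.  In the real
 * code the counts come from each leaf vdev's pending queue, and ties
 * between equally busy members are broken by a batched, time-bounded
 * preference rather than a purely random choice.
 */
static int
mirror_preferred_member(const int *pending, int members)
{
	int best = 0;

	for (int m = 1; m < members; m++) {
		if (pending[m] < pending[best])
			best = m;
	}

	return (best);
}
```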
The testing results show a significant performance improvement
for all four workloads tested. The workloads were generated
using the fio benchmark and are as follows.
1) 1MB sequential reads from 16 threads to 16 files (MB/s).
2) 4KB sequential reads from 16 threads to 16 files (MB/s).
3) 1MB random reads from 16 threads to 16 files (IOP/s).
4) 4KB random reads from 16 threads to 16 files (IOP/s).
Add new kstat for monitoring time in dmu_tx_assign
This change adds a new kstat to gain some visibility into the amount of
time spent in each call to dmu_tx_assign. A histogram is exported via
a new dmu_tx_assign_histogram-$POOLNAME file. The information contained
in this histogram is how frequently dmu_tx_assign took a given range
of time to complete. For example, given the below histogram file:
$ cat /proc/spl/kstat/zfs/dmu_tx_assign_histogram-tank
12 1 0x01 32 1536 1979206807669120516481514522
name type data
1 us 4 859
2 us 4 252
4 us 4 171
8 us 4 2
16 us 4 0
32 us 4 2
64 us 4 0
128 us 4 0
256 us 4 0
512 us 4 0
1024 us 4 0
2048 us 4 0
4096 us 4 0
8192 us 4 0
16384 us 4 0
32768 us 4 1
65536 us 4 1
131072 us 4 1
262144 us 4 4
524288 us 4 0
1048576 us 4 0
2097152 us 4 0
4194304 us 4 0
8388608 us 4 0
16777216 us 4 0
33554432 us 4 0
67108864 us 4 0
134217728 us 4 0
268435456 us 4 0
536870912 us 4 0
1073741824 us 4 0
2147483648 us 4 0
one can see most calls to dmu_tx_assign completed in 32us or less, but a
few outliers did not. Specifically, 4 of the calls took between 131072us
and 262144us. This information is difficult, if not impossible, to gather
without this change.
Signed-off-by: Prakash Surya <surya1@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1584
The zpool_read_label() function was subtly broken due to a
difference of behavior in fstat64(2) on Solaris vs Linux.
Under Solaris when a block device is stat'ed the st_size
field will contain the size of the device in bytes. Under
Linux this is only true for regular files and symlinks. A
compatibility function called fstat64_blk(2) was added
which can be used when the Solaris behavior is required.
This flaw was never noticed because the only time we would
need to use the device size is when the first two labels
are damaged. I noticed this issue while adding the
zpool_clear_label() function which is similar in design
and does require us to write all the labels.
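A userspace sketch of what such a wrapper has to do on Linux: fall back to
the BLKGETSIZE64 ioctl when the file is a block device. This shows the
idea, not the actual fstat64_blk() implementation:
```
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>	/* BLKGETSIZE64 */

static int
stat_with_blk_size(int fd, struct stat *st)
{
	uint64_t size;

	if (fstat(fd, st) != 0)
		return (-1);

	/* On Linux st_size is 0 for block devices; ask the kernel instead. */
	if (S_ISBLK(st->st_mode)) {
		if (ioctl(fd, BLKGETSIZE64, &size) != 0)
			return (-1);
		st->st_size = (off_t)size;
	}

	return (0);
}
```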
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
The FreeBSD implementation of zfs adds the 'zpool labelclear'
command. Since this functionality is helpful and straightforward
to add, it is being included in ZoL.
This patch restores the zpool_clear_label() function from
OpenSolaris. This was removed by commit d603ed6 because
it wasn't clear we had a use for it in ZoL. However, this
functionality is a prerequisite for adding the 'zpool labelclear'
command from FreeBSD.
As part of bringing this change in the zpool_clear_label()
function was changed to use fstat64_blk(2) for compatibility
with Linux.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1126
Tim Chase [Tue, 9 Jul 2013 12:15:26 +0000 (07:15 -0500)]
zdb: enhancement - Display SA xattrs.
If the znode has SA xattrs, display them following the other
standard attributes. The format used is similar to that used
when listing the contents of a ZAP. It is as follows:
Ying Zhu [Sat, 29 Jun 2013 07:03:49 +0000 (15:03 +0800)]
Improve code in arc_buf_remove_ref
When we remove references of arc bufs in the arc_anon state we
needn't take its header's hash_lock, so postpone it to where we
really need it to avoid unnecessary invocations of the buf_hash function.
Signed-off-by: Ying Zhu <casualfisher@gmail.com> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1557
There are times when it is desirable for zfs to not automatically
populate the spa namespace at module load time using the pools
in the /etc/zfs/zpool.cache file. The zfs_autoimport_disable
module option has been added to control this behavior.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #330
Chris Dunlop [Mon, 3 Jun 2013 06:58:52 +0000 (16:58 +1000)]
3.10 API change: block_device_operations->release() returns void
Linux kernel commit torvalds/linux@db2a144 changed the return type
of block_device_operations->release() to void. Detect the expected
prototype and define our callout accordingly.
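A sketch of the resulting compatibility pattern; the HAVE_* symbol is an
assumed autoconf result and the function bodies are elided:
```
#ifdef HAVE_BLOCK_DEVICE_OPERATIONS_RELEASE_VOID
static void
zvol_release(struct gendisk *disk, fmode_t mode)
{
	/* ... common release work ... */
}
#else
static int
zvol_release(struct gendisk *disk, fmode_t mode)
{
	/* ... common release work ... */
	return (0);
}
#endif
```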
Signed-off-by: Chris Dunlop <chris@onthe.net.au> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1494
Prior to adopting the kmod style packaging the zfs packages
would conditionally invoke /sbin/chkconfig to create the
proper links for the init script. This is done conditionally
because many distributions are moving away from SysV style
init scripts and we don't want to cause errors on those.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1376
Remove from the zfs package the dependencies on the zfs-dracut and
zfs-test subpackages. Neither of these packages are required for
normal operation and they bring in many unnecessary dependencies
during installation.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1395
One of the side effects of calling zvol_create_minors() in
zvol_init() is that all pools listed in the cache file will
be opened. Depending on the state and contents of your pool
this operation can take a considerable length of time.
Doing this at load time is undesirable because the kernel
is holding a global module lock. This prevents other modules
from loading and can serialize an otherwise parallel boot
process. Doing this after module initialization also
reduces the chances of accidentally introducing a race
during module init.
To ensure that /dev/zvol/<pool>/<dataset> devices are
still automatically created after the module load completes,
a udev rule has been added. When udev notices that the
/dev/zfs device has been created, the 'zpool list' command
will be run. This then will cause all the pools listed
in the zpool.cache file to be opened.
Because this process is now driven asynchronously by udev
there is the risk of problems in downstream distributions.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #756
Issue #1020
Issue #1234
We fix this by calling spin_lock_init before blk_init_queue.
The manner in which zvol_init() initializes structures is
susceptible to a race between initialization and a probe on
a zvol. We reorganize zvol_init() to prevent that.
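A sketch of the ordering fix; the zvol_state_t field and request-function
names follow the zvol code but should be treated as assumptions:
```
	/*
	 * The queue's spinlock must be initialized before blk_init_queue()
	 * hands it to the block layer, otherwise an early probe can take
	 * an uninitialized lock.
	 */
	spin_lock_init(&zv->zv_lock);

	zv->zv_queue = blk_init_queue(zvol_request, &zv->zv_lock);
	if (zv->zv_queue == NULL)
		goto out;	/* error handling elided */
```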
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Call zvol_create_minors() in spa_open_common() when initializing pool
There is an extremely odd bug that causes zvols to fail to appear on
some systems, but not others. Recently, I was able to consistently
reproduce this issue over a period of 1 month. The issue disappeared
after I applied this change from FreeBSD.
This is from FreeBSD's pool version 28 import, which occurred in
revision 219089.
Ported-by: Richard Yao <ryao@cs.stonybrook.edu> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #441
Issue #599
A mount failure was accidentally introduced by commit 0c1171d
which reworked the parse_dataset() function to read pool names
from devices. The error case where a label is read from the
device but the pool name/value pair doesn't exist was not
handled properly. In this case we should fall back to the
previous behavior.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1560
George Wilson [Tue, 2 Jul 2013 20:26:24 +0000 (13:26 -0700)]
Illumos #3498 panic in arc_read()
3498 panic in arc_read(): !refcount_is_zero(&pbuf->b_hdr->b_refcnt)
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Matthew Ahrens [Tue, 2 Jul 2013 20:20:02 +0000 (13:20 -0700)]
Illumos #3122 zfs destroy filesystem should prefetch blocks
3122 zfs destroy filesystem should prefetch blocks
Reviewed by: Christopher Siden <chris.siden@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Approved by: Garrett D'Amore <garrett@damore.org>
Richard Yao [Tue, 2 Jul 2013 04:07:15 +0000 (00:07 -0400)]
Use MAXPATHLEN instead of sizeof in snprintf
This silences a GCC 4.8.0 warning by fixing a programming error
caught by static analysis:
../../cmd/ztest/ztest.c: In function ‘ztest_vdev_aux_add_remove’:
../../cmd/ztest/ztest.c:2584:33: error: argument to ‘sizeof’
in ‘snprintf’ call is the same expression as the destination;
did you mean to provide an explicit length?
[-Werror=sizeof-pointer-memaccess]
(void) snprintf(path, sizeof (path), ztest_aux_template,
^
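The fix is to pass the buffer's actual capacity rather than the size of a
pointer. A small self-contained illustration (the names here are
hypothetical, not the ztest code):
```
#include <stdio.h>
#include <stdlib.h>
#include <sys/param.h>	/* MAXPATHLEN */

static char *
build_path(const char *dir, const char *prefix, int id)
{
	char *path = malloc(MAXPATHLEN);

	/*
	 * sizeof (path) would be just the pointer size, which is what
	 * GCC warns about; MAXPATHLEN is the buffer's real capacity.
	 */
	if (path != NULL)
		(void) snprintf(path, MAXPATHLEN, "%s/%s.%d", dir, prefix, id);

	return (path);
}
```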
Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1480
#define SYNC_PASS_DEFERRED_FREE 2 /* defer frees after this pass */
#define SYNC_PASS_DONT_COMPRESS 4 /* don't compress after this pass */
#define SYNC_PASS_REWRITE 1 /* rewrite new bps after this pass */
with tunable parameters:
int zfs_sync_pass_deferred_free = 2; /* defer frees starting in this pass */
int zfs_sync_pass_dont_compress = 5; /* don't compress starting in this pass */
int zfs_sync_pass_rewrite = 2; /* rewrite new bps starting in this pass */
This commit makes these tunables available as module parameters
in Linux. They should only be used for performance analysis
because changing them can result in subtle and pathological
performance problems.
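A sketch of how such integers are typically surfaced as Linux module
parameters; the permission bits and description strings are assumptions,
and the variables are assumed to be defined elsewhere in the module:
```
#include <linux/module.h>

module_param(zfs_sync_pass_deferred_free, int, 0644);
MODULE_PARM_DESC(zfs_sync_pass_deferred_free,
	"Defer frees starting in this pass");

module_param(zfs_sync_pass_dont_compress, int, 0644);
MODULE_PARM_DESC(zfs_sync_pass_dont_compress,
	"Don't compress starting in this pass");

module_param(zfs_sync_pass_rewrite, int, 0644);
MODULE_PARM_DESC(zfs_sync_pass_rewrite,
	"Rewrite new bps starting in this pass");
```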
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1562
Li Dongyang [Thu, 13 Jun 2013 17:51:09 +0000 (13:51 -0400)]
Add SEEK_DATA/SEEK_HOLE to lseek()/llseek()
The approach taken was to rework zfs_holey() as little as
possible and then just wrap the code as needed to ensure
correct locking and error handling.
Tested with xfstests 285 and 286. All tests pass except for
7-9 of 285 which try to reserve blocks first via fallocate(2)
and fail because fallocate(2) is not yet supported.
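A small userspace check of the new behavior, assuming a file name is
passed on the command line:
```
#define _GNU_SOURCE	/* SEEK_DATA/SEEK_HOLE */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	int fd;
	off_t hole;

	if (argc != 2)
		return (1);

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return (1);
	}

	/* Offset of the first hole at or after offset 0 (EOF if none). */
	hole = lseek(fd, 0, SEEK_HOLE);
	if (hole == (off_t)-1)
		perror("lseek(SEEK_HOLE)");
	else
		printf("first hole at %lld\n", (long long)hole);

	(void) close(fd);
	return (0);
}
```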
Note that the filp->f_lock spinlock did not exist prior to
Linux 2.6.30, but we avoid the need for an autotools check by
virtue of the fact that SEEK_DATA/SEEK_HOLE support was not
added until Linux 3.1.
An autoconf check was added for lseek_execute() which is
currently a private function but the expectation is that it
will be exported perhaps as early as Linux 3.11.
Reviewed-by: Richard Laager <rlaager@wiktel.com> Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1384
Matthew Ahrens [Mon, 1 Jul 2013 16:24:43 +0000 (09:24 -0700)]
Readd zfs_holey() from OpenSolaris
This patch restores the zfs_holey() function from OpenSolaris.
This was removed by commit 3558fd7 because it wasn't clear we
had a use for it in ZoL. However, this functionality is a
prerequisite for adding SEEK_DATA/SEEK_HOLE support to the ZPL.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Richard Yao <ryao@gentoo.org>
Issue #1384