AS_IF([test ! -f "$LINUX_OBJ/$LINUX_SYMBOLS"], [
AC_MSG_ERROR([
*** Please make sure the kernel devel package for your distribution
- *** is installed. If your building with a custom kernel make sure the
+ *** is installed. If you are building with a custom kernel, make sure the
*** kernel is configured, built, and the '--with-linux=PATH' configure
*** option refers to the location of the kernel source.])
])
AS_IF([test ! -d "$kernelsrc"], [
AC_MSG_ERROR([
*** Please make sure the kernel devel package for your distribution
- *** is installed then try again. If that fails you can specify the
+ *** is installed and then try again. If that fails, you can specify the
*** location of the kernel source with the '--with-linux=PATH' option.])
])
* read db_data, dmu_buf_will_dirty() before modifying it, and the
* object must be held in an assigned transaction before calling
* dmu_buf_will_dirty. You may use dmu_buf_set_user() on the bonus
- * buffer as well. You must release your hold with dmu_buf_rele().
+ * buffer as well. You must release your hold with dmu_buf_rele().
*/
int dmu_bonus_hold(objset_t *os, uint64_t object, void *tag, dmu_buf_t **);
int dmu_bonus_max(void);
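/*
 * Illustrative sketch only, not code from this header: a hypothetical
 * example_update_bonus() showing the pattern above -- hold the bonus
 * buffer, dirty it inside an assigned transaction, then drop the hold.
 * The dmu_tx_*() helpers and the FTAG tag macro are assumed to come
 * from sys/dmu.h and sys/zfs_context.h.
 */
#include <sys/dmu.h>
#include <sys/zfs_context.h>

static int
example_update_bonus(objset_t *os, uint64_t object, uint64_t value)
{
	dmu_buf_t *db;
	dmu_tx_t *tx;
	int err;

	err = dmu_bonus_hold(os, object, FTAG, &db);
	if (err != 0)
		return (err);

	tx = dmu_tx_create(os);
	dmu_tx_hold_bonus(tx, object);
	err = dmu_tx_assign(tx, TXG_WAIT);
	if (err != 0) {
		dmu_tx_abort(tx);
		dmu_buf_rele(db, FTAG);
		return (err);
	}

	/* The object is held in an assigned tx, so it is safe to dirty. */
	dmu_buf_will_dirty(db, tx);
	*(uint64_t *)db->db_data = value;	/* example payload only */
	dmu_tx_commit(tx);

	/* Drop the hold; 'db' must not be touched after this call. */
	dmu_buf_rele(db, FTAG);
	return (0);
}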
* Obtain the DMU buffer from the specified object which contains the
* specified offset. dmu_buf_hold() puts a "hold" on the buffer, so
* that it will remain in memory. You must release the hold with
- * dmu_buf_rele(). You musn't access the dmu_buf_t after releasing your
- * hold. You must have a hold on any dmu_buf_t* you pass to the DMU.
+ * dmu_buf_rele(). You must not access the dmu_buf_t after releasing your
+ * hold. You must have a hold on any dmu_buf_t* you pass to the DMU.
*
* You must call dmu_buf_read, dmu_buf_will_dirty, or dmu_buf_will_fill
* on the returned buffer before reading or writing the buffer's
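/*
 * Illustrative sketch only: a hypothetical example_zero_block() that
 * follows the rules above -- dirty the buffer inside an assigned
 * transaction before writing db_data, and never touch the dmu_buf_t
 * after the hold is released.  It assumes a five-argument
 * dmu_buf_hold(os, object, offset, tag, &db); some releases add a
 * trailing flags argument.  FTAG and the dmu_tx_*() helpers are
 * assumed from sys/zfs_context.h and sys/dmu.h as in the sketch above.
 */
static int
example_zero_block(objset_t *os, uint64_t object, uint64_t offset)
{
	dmu_buf_t *db;
	dmu_tx_t *tx;
	int err;

	err = dmu_buf_hold(os, object, offset, FTAG, &db);
	if (err != 0)
		return (err);

	tx = dmu_tx_create(os);
	dmu_tx_hold_write(tx, object, offset, db->db_size);
	err = dmu_tx_assign(tx, TXG_WAIT);
	if (err != 0) {
		dmu_tx_abort(tx);
		dmu_buf_rele(db, FTAG);
		return (err);
	}

	dmu_buf_will_dirty(db, tx);		/* required before writing */
	memset(db->db_data, 0, db->db_size);	/* modify while the hold is live */
	dmu_tx_commit(tx);

	dmu_buf_rele(db, FTAG);			/* release the hold ... */
	return (0);				/* ... and never use 'db' again */
}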
#define _LIBSPL_UMEM_H
/* XXX: We should use the real portable umem library if it is detected
- * at configure time. However, if the library is not available we can
+ * at configure time. However, if the library is not available, we can
* use a trivial malloc based implementation. This obviously impacts
- * performance but unless you using a full userspace build of zpool for
- * something other than ztest your likely not going to notice or care.
+ * performance, but unless you are using a full userspace build of zpool for
+ * something other than ztest, you are likely not going to notice or care.
*
* https://labs.omniti.com/trac/portableumem
*/
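/*
 * A minimal sketch of what such a malloc based fallback could look like.
 * The names and flags mirror the umem API; the bodies below are an
 * illustrative assumption, not the actual libspl implementation.
 */
#include <stdlib.h>
#include <string.h>

#define	UMEM_DEFAULT	0x0000	/* normal allocation, may return NULL */
#define	UMEM_NOFAIL	0x0100	/* caller expects the allocation to succeed */

static inline void *
umem_alloc(size_t size, int flags)
{
	void *ptr = malloc(size);

	if (ptr == NULL && (flags & UMEM_NOFAIL))
		abort();

	return (ptr);
}

static inline void *
umem_zalloc(size_t size, int flags)
{
	void *ptr = umem_alloc(size, flags);

	if (ptr != NULL)
		memset(ptr, 0, size);

	return (ptr);
}

static inline void
umem_free(void *ptr, size_t size)
{
	(void) size;	/* the real umem_free() takes the original size */
	free(ptr);
}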
* Devices are always opened by the path provided at configuration
* time. This means that if the provided path is a udev by-id path
* then drives may be recabled without an issue. If the provided
- * path is a udev by-path path then the physical location information
+ * path is a udev by-path path, then the physical location information
* will be preserved. This can be critical for more complicated
* configurations where drives are located in specific physical
* locations to maximize the system's tolerance to component failure.
- * Alternately you can provide your own udev rule to flexibly map
+ * Alternatively, you can provide your own udev rule to flexibly map
* the drives as you see fit. It is not advised that you use the
- * /dev/[hd]d devices which may be reorder due to probing order.
+ * /dev/[hd]d devices, which may be reordered due to probing order.
* Devices in the wrong locations will be detected by the higher
* level vdev validation.
*/
check_test || die "${ERROR}"
. ${ZPIOS_TEST}
-# Pull in the zpios test module is not loaded. If this fails it is
+# Pull in the zpios test module if not loaded. If this fails, it is
# likely because the full module stack was not yet loaded with zfs.sh
if check_modules; then
if ! load_modules; then