These days most disk drivers probe for devices asynchronously. This
means that when the zfs init script runs, not all of the required
block devices may have been discovered yet. The result is that the
pool may fail to cleanly import at boot time. This is particularly
common on systems with a large number of devices.

The fix is for the init script to block until udev settles and we
are no longer detecting new devices. Once the system has settled,
the zfs modules can be loaded and the pool will be automatically
imported.
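
The change applied to each script below boils down to the same two
steps. A minimal sketch of the pattern, assuming udevadm is on the
PATH; the helper function itself is hypothetical and not part of the
patch, though the return code mirrors the existing scripts:

    # Hypothetical helper illustrating the pattern; not part of the patch.
    wait_and_load_zfs() {
        # Block until the udev event queue is empty, i.e. until all
        # block devices probed so far have appeared under /dev.
        udevadm settle

        # Load the zfs module stack if it is not already loaded; any
        # pools recorded in the cache file are then imported
        # automatically.
        if ! grep -q zfs /proc/modules ; then
            modprobe zfs || return 5
        fi
    }

The hunks below apply this same change to each of the bundled init
scripts.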
action $"SELinux ZFS policy required: " /bin/false || return 6
fi
+ # Delay until all required block devices are present.
+ udevadm settle
+
# load kernel module infrastructure
if ! grep -q zfs /proc/modules ; then
action $"Loading kernel ZFS infrastructure: " modprobe zfs || return 5
start() {
ebegin "Starting ZFS"
checksystem || return 1
+
+ # Delay until all required block devices are present.
+ udevadm settle
+
if [ ! -c /dev/zfs ]; then
modprobe $ZFS_MODULE
rv=$?
return 4
fi
+ # Delay until all required block devices are present.
+ udevadm settle
+
# Load the zfs module stack
/sbin/modprobe zfs
case $1 in
start) echo "$1ing ZFS filesystems"
+ # Delay until all required block devices are present.
+ udevadm settle
+
if ! grep "zfs" /proc/modules > /dev/null; then
echo "ZFS kernel module not loaded yet; loading...";
if ! modprobe zfs; then
action $"SELinux ZFS policy required: " /bin/false || return 6
fi
+ # Delay until all required block devices are present.
+ udevadm settle
+
# load kernel module infrastructure
if ! grep -q zfs /proc/modules ; then
action $"Loading kernel ZFS infrastructure: " modprobe zfs || return 5