From: Chunwei Chen
Date: Tue, 22 Apr 2014 08:45:36 +0000 (+0800)
Subject: Fix crash when using ZFS on Ceph rbd
X-Git-Tag: spl-0.6.3~9
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=ae16ed992bd0ef5a55b04d9edaaa6456674315f9;p=spl

Fix crash when using ZFS on Ceph rbd

When __get_free_pages is used to allocate high order memory, only the
first page's _count is set to 1; the remaining pages are left at 0.
When such an interior page is passed into rbd, it eventually ends up in
tcp_sendpage, where get_page and put_page are called on it, and it is
freed erroneously when its _count drops back to 0.

The solution is to allocate a compound page. All pages in a high order
compound page share a single _count, so the get_page and put_page calls
in tcp_sendpage cannot drive _count to 0.

Signed-off-by: Chunwei Chen
Signed-off-by: Brian Behlendorf
Closes #251
---

diff --git a/module/spl/spl-kmem.c b/module/spl/spl-kmem.c
index 55c467b..b673c29 100644
--- a/module/spl/spl-kmem.c
+++ b/module/spl/spl-kmem.c
@@ -864,7 +864,8 @@ kv_alloc(spl_kmem_cache_t *skc, int size, int flags)
 	ASSERT(ISP2(size));
 
 	if (skc->skc_flags & KMC_KMEM)
-		ptr = (void *)__get_free_pages(flags, get_order(size));
+		ptr = (void *)__get_free_pages(flags | __GFP_COMP,
+		    get_order(size));
 	else
 		ptr = __vmalloc(size, flags | __GFP_HIGHMEM, PAGE_KERNEL);
 
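
For illustration only (not part of the commit), below is a minimal sketch of a kernel module showing the behaviour the fix relies on: with __GFP_COMP the pages of a high order allocation form one compound page, so taking and dropping a reference through an interior (tail) page, as tcp_sendpage does, goes to the head page's refcount and cannot free the block. All names here (comp_demo_init, comp_demo_exit) are hypothetical.

#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int __init comp_demo_init(void)
{
	/*
	 * Order-2 (4 page) allocation. __GFP_COMP makes it a compound
	 * page: get_page()/put_page() on any tail page are accounted
	 * against the head page's reference count.
	 */
	unsigned long addr = __get_free_pages(GFP_KERNEL | __GFP_COMP, 2);
	struct page *tail;

	if (!addr)
		return -ENOMEM;

	/*
	 * Take and drop a reference through an interior page, as
	 * tcp_sendpage() would. With __GFP_COMP this pins and then
	 * unpins the whole compound page; without it, the interior
	 * page's own _count would go 0 -> 1 -> 0 and the page would be
	 * freed while the allocation is still in use (the bug fixed by
	 * this commit).
	 */
	tail = virt_to_page((void *)(addr + PAGE_SIZE));
	get_page(tail);
	put_page(tail);

	free_pages(addr, 2);
	return 0;
}

static void __exit comp_demo_exit(void)
{
}

module_init(comp_demo_init);
module_exit(comp_demo_exit);
MODULE_LICENSE("GPL");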