libceph: fix corruption when using page_count 0 page in rbd
author    Chunwei Chen <tuxoko@gmail.com>
          Wed, 23 Apr 2014 04:35:09 +0000 (12:35 +0800)
committer Ilya Dryomov <ilya.dryomov@inktank.com>
          Fri, 16 May 2014 17:29:26 +0000 (21:29 +0400)
commit    178eda29ca721842f2146378e73d43e0044c4166
tree      0a4e1518e04a719ca1c798c0c4a90a6bbe0f4bd9
parent    d6d211db37e75de2ddc3a4f979038c40df7cc79c
libceph: fix corruption when using page_count 0 page in rbd

It has been reported that using ZFSonLinux on rbd will result in memory
corruption. The bug report can be found here:

https://github.com/zfsonlinux/spl/issues/241
http://tracker.ceph.com/issues/7790

The reason is that ZFS will send pages with page_count 0 into rbd, which in
turn sends them to tcp_sendpage. However, tcp_sendpage cannot deal with a
page_count of 0: it does get_page and put_page on the page, and the put_page
erroneously frees it while it is still in use.

This type of issue has been noted before and handled in iscsi, drbd, etc.,
so rbd should handle it as well. This fix addresses the issue by falling back
to the slower sendmsg path when a page with page_count 0 is detected.
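A minimal sketch of such a fallback in net/ceph/messenger.c, assuming the
existing ceph_tcp_sendmsg() helper and, for illustration, that the original
sendpage path has been renamed to __ceph_tcp_sendpage() (not necessarily the
exact upstream patch):

    static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
                                 int offset, size_t size, bool more)
    {
        struct kvec iov;
        int ret;

        /* sendpage cannot properly handle pages with page_count == 0;
         * fall back to sendmsg in that case. */
        if (page_count(page) >= 1)
            return __ceph_tcp_sendpage(sock, page, offset, size, more);

        /* Copy through a kernel mapping instead of handing the page
         * reference itself down to the network stack. */
        iov.iov_base = kmap(page) + offset;
        iov.iov_len = size;
        ret = ceph_tcp_sendmsg(sock, &iov, 1, size, more);
        kunmap(page);

        return ret;
    }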

Cc: Sage Weil <sage@inktank.com>
Cc: Yehuda Sadeh <yehuda@inktank.com>
Cc: stable@vger.kernel.org
Signed-off-by: Chunwei Chen <tuxoko@gmail.com>
Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
net/ceph/messenger.c