ixgbe: Fix ATR so that it correctly handles IPv6 extension headers
author	Alexander Duyck <aduyck@mirantis.com>
	Tue, 26 Jan 2016 03:39:40 +0000 (19:39 -0800)
committer	Jeff Kirsher <jeffrey.t.kirsher@intel.com>
	Wed, 30 Mar 2016 05:32:33 +0000 (22:32 -0700)
commit	e2873d43f9c607e9d855b8ae120d5990ba1722df
tree	e8a6b92ca88d4f53046e20fd5a238109ffff194e
parent	9f12df906cd807a05d71aa53a951532d1dd3b888
ixgbe: Fix ATR so that it correctly handles IPv6 extension headers

The ATR code was assuming that it would be able to use tcp_hdr for
every TCP frame that came through.  However, this isn't the case, as it
is possible for a TCP frame to arrive via something like a raw socket.
As a result the driver was setting up bad filters in which tcp_hdr
actually pointed at the network header, so the filter data was all
invalid.

In order to correct this I have added parsing logic that determines
the TCP header location from the network header: either via the header
length in the IPv4 case, or by walking the IPv6 extension headers
until reaching the one that indicates IPPROTO_TCP.  In addition I have
added checks to verify that the lowest protocol provided is recognized
as IPv4 or IPv6, to keep raw sockets using ETH_P_ALL from having ATR
applied to them.
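The extension-header walk described above can be sketched in userspace
C as follows.  This is an illustrative sketch, not the driver's actual
code: the function name find_tcp_offset_ipv6 and the standalone macros
are assumptions for the example, and only the common extension headers
(hop-by-hop, routing, fragment, destination options) are handled.

```c
#include <stdint.h>
#include <stddef.h>

/* Protocol numbers (IANA values, named locally to avoid kernel headers). */
#define NEXTHDR_HOP       0   /* Hop-by-hop options */
#define NEXTHDR_TCP       6   /* IPPROTO_TCP */
#define NEXTHDR_ROUTING  43   /* Routing header */
#define NEXTHDR_FRAGMENT 44   /* Fragment header */
#define NEXTHDR_DEST     60   /* Destination options */

#define IPV6_HDR_LEN 40       /* Fixed IPv6 header is always 40 bytes */

/*
 * Walk the IPv6 extension-header chain starting right after the fixed
 * header.  Returns the byte offset of the TCP header within pkt, or -1
 * if the chain does not end in TCP or runs past len.
 */
static int find_tcp_offset_ipv6(const uint8_t *pkt, size_t len)
{
	uint8_t nexthdr;
	size_t off = IPV6_HDR_LEN;

	if (len < IPV6_HDR_LEN)
		return -1;

	nexthdr = pkt[6];	/* Next Header field of the IPv6 header */

	while (nexthdr != NEXTHDR_TCP) {
		switch (nexthdr) {
		case NEXTHDR_HOP:
		case NEXTHDR_ROUTING:
		case NEXTHDR_DEST:
			if (off + 2 > len)
				return -1;
			nexthdr = pkt[off];
			/* Hdr Ext Len is in 8-byte units, excluding the first 8 */
			off += ((size_t)pkt[off + 1] + 1) * 8;
			break;
		case NEXTHDR_FRAGMENT:
			if (off + 8 > len)
				return -1;
			nexthdr = pkt[off];
			off += 8;	/* Fragment header is fixed size */
			break;
		default:
			return -1;	/* Not TCP, not a header we can walk */
		}
		if (off > len)
			return -1;
	}
	return (int)off;
}
```

A frame with one hop-by-hop options header, for example, yields a TCP
offset of 48 (40-byte fixed header plus an 8-byte extension header),
whereas the old assumption would have taken whatever tcp_hdr happened
to point at.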

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c