8 years ago  i40e/i40evf: Rewrite logic for 8 descriptor per packet check
Alexander Duyck [Wed, 17 Feb 2016 19:02:50 +0000 (11:02 -0800)] 
i40e/i40evf: Rewrite logic for 8 descriptor per packet check

This patch is meant to rewrite the logic for how we determine if we can
transmit the frame or if it needs to be linearized.

The previous code for this function used a mix of division and modulus
as part of computing whether we need to take the slow path.  Instead I
have replaced this with a simple sliding window which tells us whether
the frame could cause a single packet to span more descriptors than the
hardware allows.

The logic for the scan is fairly simple.  If any given group of 6 fragments
totals less than gso_size - 1, then it is possible for us to have one byte
coming out of the first fragment, 6 fragments, and one or more bytes coming
out of the last fragment.  This gives us a total of 8 fragments,
which exceeds what we can allow, so we send such frames to be linearized.

Arguably the use of modulus might be more exact as the approach I propose
may generate some false positives.  However the likelihood of us taking much
of a hit for those false positives is fairly low, and I would rather not
add more overhead in the case where we are receiving a frame composed of 4K
pages.
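
For illustration, here is a minimal user-space sketch of that sliding window,
with an invented frag_size[] array standing in for skb_shinfo(skb)->frags
(names and the constant below are illustrative, not the driver's actual code):

#include <stdbool.h>

#define MAX_BUFFER_TXD	8	/* per-packet descriptor limit */

static bool needs_linearize(const int *frag_size, int nr_frags, int gso_size)
{
	int sum, i, stale = 0;

	/* Fewer than 8 fragments can never exceed the limit. */
	if (nr_frags < MAX_BUFFER_TXD)
		return false;

	/* Worst case, one byte spills in from the fragment just before the
	 * window, so every 6-fragment window must cover at least
	 * gso_size - 1 bytes.
	 */
	sum = 1 - gso_size;

	/* Seed the window with the first 5 fragments... */
	for (i = 0; i < MAX_BUFFER_TXD - 3; i++)
		sum += frag_size[i];

	/* ...then add the newest fragment, test the 6-fragment window,
	 * drop the oldest fragment and repeat across the list.
	 */
	for (; i < nr_frags; i++) {
		sum += frag_size[i];
		if (sum < 0)
			return true;	/* window smaller than gso_size - 1 */
		sum -= frag_size[stale++];
	}
	return false;
}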

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Break up xmit_descriptor_count from maybe_stop_tx
Alexander Duyck [Wed, 17 Feb 2016 19:02:43 +0000 (11:02 -0800)] 
i40e/i40evf: Break up xmit_descriptor_count from maybe_stop_tx

In an upcoming patch I would like to have access to the descriptor count
used for the data portion of the frame.  For this reason I am splitting up
the descriptor count function from the function that stops the ring.

Also in order to try and reduce unnecessary duplication of code I am moving
the slow-path portions of the code out of being inline calls so that we can
just jump to them and process them instead of having to build them into
each function that calls them.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Fri, 19 Feb 2016 04:47:04 +0000 (23:47 -0500)] 
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2016-02-18

This series contains updates to i40e and i40evf only.

Alex Duyck provides all the patches in the series to update and fix the
drivers.  Fixed the driver to drop the outer checksum offload on UDP
tunnels, since the issue is that the upper levels of the stack never
requested such an offload and it results in possible errors.  Updates the
TSO function to just use u64 values, so we do not have to end up casting
u32 values.  In the TSO path, factored out the L4 header offsets allowing
us to ignore the L4 header offsets when dealing with the L3 checksum and
length update.  Consolidates all of the spots where we were updating
either the TCP or IP checksums in the TSO and checksum path into the TSO
function.  Fixed two issues by adding support for IPv4 encapsulated in
IPv6, first issue was the fact that iphdr(skb)->protocol was being used to
test for the outer transport protocol which breaks IPv6 support.  The second
was that we cleared the flag for v4 going to v6, but we did not take care
of txflags going the other way.  Added support for IPv6 extension headers
in setting up the Tx checksum.  Added exception handling to the Tx
checksum path so that we can handle cases of TSO where the frame is bad,
or Tx checksum where we did not recognize a protocol.  Fixed a number of
issues to make certain that we are using the correct protocols when
parsing both the inner and outer headers of a frame that is mixed between
IPv4 and IPv6 for inner and outer.  Updated the feature flags to reflect
the newly enabled/added features.

Sorry, no witty patch descriptions this time around, probably should
let Mitch help in writing patch descriptions for Alex. :-)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  bnx2x: Add missing HSI for big-endian machines
Yuval Mintz [Wed, 17 Feb 2016 11:15:14 +0000 (13:15 +0200)] 
bnx2x: Add missing HSI for big-endian machines

Commit e5d3a51cefbb ("bnx2x: extend DCBx support") was missing HSI
changes for big-endian machine, breaking compilation on such
platforms.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  i40e: Add support for ATR w/ IPv6 extension headers
Alexander Duyck [Tue, 26 Jan 2016 03:32:54 +0000 (19:32 -0800)] 
i40e: Add support for ATR w/ IPv6 extension headers

This patch updates the code for determining the L4 protocol and L3 header
length so that when IPv6 extension headers are being used we can determine
the offset and type of the L4 protocol.
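
As a rough illustration of what that scan involves, here is a hedged
user-space sketch that walks the extension header chain of a raw IPv6
packet to find the L4 protocol and its offset (simplified: it handles only
the common hop-by-hop, routing, fragment and destination-options headers):

#include <stdint.h>
#include <stddef.h>

enum { NH_HOPOPTS = 0, NH_ROUTING = 43, NH_FRAGMENT = 44, NH_DSTOPTS = 60 };

/* Returns the L4 protocol number and stores its offset from the start of
 * the IPv6 header in *l4_off; returns -1 on a truncated packet. */
static int find_l4_proto(const uint8_t *ip6, size_t len, size_t *l4_off)
{
	size_t off = 40;		/* fixed IPv6 header length */
	uint8_t nexthdr;

	if (len < off)
		return -1;
	nexthdr = ip6[6];		/* Next Header field */

	while (nexthdr == NH_HOPOPTS || nexthdr == NH_ROUTING ||
	       nexthdr == NH_FRAGMENT || nexthdr == NH_DSTOPTS) {
		if (len < off + 8)
			return -1;
		/* The fragment header is fixed at 8 bytes; the others encode
		 * their length in 8-octet units, not counting the first 8. */
		size_t hdrlen = (nexthdr == NH_FRAGMENT) ?
				8 : (size_t)(ip6[off + 1] + 1) * 8;
		nexthdr = ip6[off];	/* first byte is the next header */
		off += hdrlen;
	}

	*l4_off = off;
	return nexthdr;
}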

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40evf: Update feature flags to reflect newly enabled features
Alexander Duyck [Mon, 25 Jan 2016 05:17:57 +0000 (21:17 -0800)] 
i40evf: Update feature flags to reflect newly enabled features

Recent changes should have enabled support for IPv6 based tunnels and
support for TSO with outer UDP checksums.  As such we can update the
feature flags to reflect that.

In addition we can clean-up the flags that aren't needed such as SCTP and
RXCSUM since having the bits there doesn't add any value.

I also found one spot where we were setting the same flag twice.  It looks
like it was probably a git merge error that resulted in the line being
duplicated.  As such I have dropped it in this patch.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Acked-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: Update feature flags to reflect newly enabled features
Alexander Duyck [Mon, 25 Jan 2016 05:17:50 +0000 (21:17 -0800)] 
i40e: Update feature flags to reflect newly enabled features

Recent changes should have enabled support for IPv6 based tunnels and
support for TSO with outer UDP checksums.  As such we can update the
feature flags to reflect that.

In addition we can clean-up the flags that aren't needed such as SCTP and
RXCSUM since having the bits there doesn't add any value.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: Do not drop support for IPv6 VXLAN or GENEVE tunnels
Alexander Duyck [Mon, 25 Jan 2016 05:17:43 +0000 (21:17 -0800)] 
i40e: Do not drop support for IPv6 VXLAN or GENEVE tunnels

The documentation in the datasheets for the XL710 does not call out
any reason to exclude support for IPv6 based tunnels.  As such I am
dropping the code that was excluding these tunnel types from having their
port numbers recognized.  This way we can take advantage of things such as
checksum offload for inner headers over IPv6 based VXLAN or GENEVE
tunnels.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: Fix ATR in relation to tunnels
Alexander Duyck [Mon, 25 Jan 2016 05:17:36 +0000 (21:17 -0800)] 
i40e: Fix ATR in relation to tunnels

This patch contains a number of fixes to make certain that we are using
the correct protocols when parsing both the inner and outer headers of a
frame that is mixed between IPv4 and IPv6 for inner and outer.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Acked-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Enable support for SKB_GSO_UDP_TUNNEL_CSUM
Alexander Duyck [Mon, 25 Jan 2016 05:17:29 +0000 (21:17 -0800)] 
i40e/i40evf: Enable support for SKB_GSO_UDP_TUNNEL_CSUM

The XL722 has support for providing the outer UDP tunnel checksum on
transmits.  Make use of this feature to support segmenting UDP tunnels with
outer checksums enabled.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Clean-up Rx packet checksum handling
Alexander Duyck [Mon, 25 Jan 2016 05:17:22 +0000 (21:17 -0800)] 
i40e/i40evf: Clean-up Rx packet checksum handling

This is mostly a minor clean-up for the Rx checksum path in order to avoid
some of the unnecessary conditional checks that were being applied.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  Merge branch 'qed-vlan-filtering'
David S. Miller [Thu, 18 Feb 2016 21:07:45 +0000 (16:07 -0500)] 
Merge branch 'qed-vlan-filtering'

Yuval Mintz says:

====================
qed{,e}: Add vlan filtering offload

This series adds vlan filtering offload to qede.
First patch introduces small additional infrastructure needed in
qed to support it, while second contains the main bulk of driver changes.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  qede: Add vlan filtering offload support
Sudarsana Reddy Kalluru [Thu, 18 Feb 2016 15:00:40 +0000 (17:00 +0200)] 
qede: Add vlan filtering offload support

Device would start receiving only vlan-tagged traffic with tags matching
one of the configured vlan IDs, unless:
  - Device is explicitly placed in PROMISC mode.
  - Device exhausts its vlan filter credits.

Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  qed: Lay infrastructure for vlan filtering offload
Yuval Mintz [Thu, 18 Feb 2016 15:00:39 +0000 (17:00 +0200)] 
qed: Lay infrastructure for vlan filtering offload

Today, interfaces work in vlan-promisc mode; but once vlan filtering
offload is supported, we'll need a method to control it directly
[e.g., when setting the device to PROMISC, or when running out of vlan
credits].

This adds the necessary API for the L2 client to manually choose whether
to accept all vlans or only those for which filters were configured.

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  net: phy: dp83848: Fix sysfs naming collision warning
Andrew F. Davis [Thu, 18 Feb 2016 00:10:00 +0000 (18:10 -0600)] 
net: phy: dp83848: Fix sysfs naming collision warning

Files in sysfs are created using the name from the phy_driver struct;
when two names are the same we may get a duplicate filename warning.
Fix this.

Reported-by: kernel test robot <ying.huang@linux.intel.com>
Signed-off-by: Andrew F. Davis <afd@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  net: Optimize local checksum offload
Alexander Duyck [Wed, 17 Feb 2016 19:23:55 +0000 (11:23 -0800)] 
net: Optimize local checksum offload

This patch takes advantage of several assumptions we can make about the
headers of the frame in order to reduce overall processing overhead for
computing the outer header checksum.

First we can assume the entire header is in the region pointed to by
skb->head as this is what csum_start is based on.

Second, as a result of our first assumption, we can just call csum_partial
instead of making a call to skb_checksum which would end up having to
configure things so that we could walk through the frags list.
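
For reference, the heavy lifting is just a 16-bit ones'-complement sum over
a linear buffer.  A simplified user-space stand-in for csum_partial and
csum_fold (not the kernel's optimized implementation) looks like this:

#include <stdint.h>
#include <stddef.h>

/* Ones'-complement sum over a linear buffer, seeded with an initial sum;
 * a user-space analogue of running csum_partial over the headers that sit
 * in skb->head. */
static uint32_t csum_add_buf(uint32_t sum, const uint8_t *buf, size_t len)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)buf[i] << 8 | buf[i + 1];
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;
	return sum;
}

/* Fold the 32-bit accumulator down to the final 16-bit checksum. */
static uint16_t csum_fold16(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}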

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  ipv6: Annotate change of locking mechanism for np->opt
Benjamin Poirier [Thu, 18 Feb 2016 00:20:33 +0000 (16:20 -0800)] 
ipv6: Annotate change of locking mechanism for np->opt

This follows up commit 45f6fad84cc3 ("ipv6: add complete rcu protection
around np->opt"), which added mixed rcu/refcount protection to np->opt.

Given the current implementation of rcu_pointer_handoff(), this has no
effect at runtime.

Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  Merge branch 'iptunnel-pkt-scrub-consolidate'
David S. Miller [Thu, 18 Feb 2016 19:35:02 +0000 (14:35 -0500)] 
Merge branch 'iptunnel-pkt-scrub-consolidate'

Jiri Benc says:

====================
iptunnel: scrub packet in iptunnel_pull_header

As every IP tunnel has to scrub skb on decapsulation, iptunnel_pull_header
tried to do that and open coded part of skb_scrub_packet. Various tunneling
protocols (VXLAN, Geneve) then called full skb_scrub_packet on their own,
duplicating part of the scrubbing already done.

Consolidate the code, calling skb_scrub_packet from iptunnel_pull_header.
This will allow additional cleanups in VXLAN code, as the packet is scrubbed
early during rx processing after this patchset and VXLAN can start filling
out skb fields earlier.

The full picture of vxlan cleanup patches can be seen at:
https://github.com/jbenc/linux-vxlan/commits/master
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  iptunnel: scrub packet in iptunnel_pull_header
Jiri Benc [Thu, 18 Feb 2016 10:22:52 +0000 (11:22 +0100)] 
iptunnel: scrub packet in iptunnel_pull_header

Part of skb_scrub_packet was open coded in iptunnel_pull_header. Let it call
skb_scrub_packet directly instead.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: move vxlan device lookup before iptunnel_pull_header
Jiri Benc [Thu, 18 Feb 2016 10:22:51 +0000 (11:22 +0100)] 
vxlan: move vxlan device lookup before iptunnel_pull_header

This is in preparation for iptunnel_pull_header calling skb_scrub_packet.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  geneve: move geneve device lookup before iptunnel_pull_header
Jiri Benc [Thu, 18 Feb 2016 10:22:50 +0000 (11:22 +0100)] 
geneve: move geneve device lookup before iptunnel_pull_header

This is in preparation for iptunnel_pull_header calling skb_scrub_packet.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  geneve: implement geneve_get_sk_family helper
Jiri Benc [Thu, 18 Feb 2016 10:22:49 +0000 (11:22 +0100)] 
geneve: implement geneve_get_sk_family helper

Similarly to the existing vxlan_get_sk_family.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  net: bridge: log port STP state on change
Vivien Didelot [Tue, 16 Feb 2016 15:09:51 +0000 (10:09 -0500)] 
net: bridge: log port STP state on change

Remove the shared br_log_state function and print the info directly in
br_set_state, where the net_bridge_port state is actually changed.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Acked-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  Merge branch 'cxgb4-addr-sync'
David S. Miller [Thu, 18 Feb 2016 19:16:13 +0000 (14:16 -0500)] 
Merge branch 'cxgb4-addr-sync'

Hariprasad Shenai says:

====================
cxgb4: Use __dev_[um]c_[un]sync for MAC address syncing

This patch series adds support to use __dev_uc_sync/__dev_mc_sync to add
MAC address and __dev_uc_unsync/__dev_mc_unsync to delete MAC address.

This patch series has been created against net-next tree and includes
patches on cxgb4 and cxgb4vf driver.

We have included all the maintainers of respective drivers. Kindly review
the change and let us know in case of any review comments.
====================
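
The pattern being adopted looks roughly like the kernel-context sketch below
(the my_* helpers are invented stand-ins for the driver's own MAC filter
firmware commands; __dev_uc_sync/__dev_mc_sync are the real core API):

static int my_mac_sync(struct net_device *dev, const unsigned char *addr)
{
	return my_hw_add_mac_filter(dev, addr);		/* invented helper */
}

static int my_mac_unsync(struct net_device *dev, const unsigned char *addr)
{
	return my_hw_del_mac_filter(dev, addr);		/* invented helper */
}

static void my_set_rx_mode(struct net_device *dev)
{
	/* The core walks the unicast and multicast address lists and calls
	 * the sync/unsync callbacks only for addresses that have changed. */
	__dev_uc_sync(dev, my_mac_sync, my_mac_unsync);
	__dev_mc_sync(dev, my_mac_sync, my_mac_unsync);
}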

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  cxgb4vf: Use __dev_uc_sync/__dev_mc_sync to sync MAC address
Hariprasad Shenai [Tue, 16 Feb 2016 04:37:10 +0000 (10:07 +0530)] 
cxgb4vf: Use __dev_uc_sync/__dev_mc_sync to sync MAC address

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  cxgb4: Use __dev_uc_sync/__dev_mc_sync to sync MAC address
Hariprasad Shenai [Tue, 16 Feb 2016 04:37:09 +0000 (10:07 +0530)] 
cxgb4: Use __dev_uc_sync/__dev_mc_sync to sync MAC address

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  i40e/i40evf: Add exception handling for Tx checksum
Alexander Duyck [Mon, 25 Jan 2016 05:17:10 +0000 (21:17 -0800)] 
i40e/i40evf: Add exception handling for Tx checksum

Add exception handling to the Tx checksum path so that we can handle cases
of TSO where the frame is bad, or Tx checksum where we didn't recognize a
protocol

Drop I40E_TX_FLAGS_CSUM as it is unused, move the CHECKSUM_PARTIAL check
into the function itself so that we can decrease indent.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Do not write to descriptor unless we complete
Alexander Duyck [Mon, 25 Jan 2016 05:17:01 +0000 (21:17 -0800)] 
i40e/i40evf: Do not write to descriptor unless we complete

This patch defers writing to the Tx descriptor bits until we know we have
successfully completed a given operation.  So for example we defer updating
the tunnelling portion of the context descriptor until we have fully
identified the type.

The advantage to this approach is that we can assemble values as we go
instead of having to try and kludge everything together all at once.  As a
result we can significantly clean up the tunneling configuration for
instance as we can just do a pointer walk and do the math for the distance
between each set of points.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  vxlan: tun_id is 64bit, not 32bit
Jiri Benc [Thu, 18 Feb 2016 18:19:29 +0000 (19:19 +0100)] 
vxlan: tun_id is 64bit, not 32bit

The tun_id field in struct ip_tunnel_key is __be64, not __be32. We need to
convert the vni to tun_id correctly.

Fixes: 54bfd872bf16 ("vxlan: keep flags and vni in network byte order")
Reported-by: Paolo Abeni <pabeni@redhat.com>
Tested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Acked-by: Thadeu Lima de Souza Cascardo <cascardo@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  i40e/i40evf: Handle IPv6 extension headers in checksum offload
Alexander Duyck [Mon, 25 Jan 2016 05:16:54 +0000 (21:16 -0800)] 
i40e/i40evf: Handle IPv6 extension headers in checksum offload

This patch adds support for IPv6 extension headers in setting up the Tx
checksum.  Without this patch extension headers would cause IPv6 traffic to
fail as the transport protocol could not be identified.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Add support for IPv4 encapsulated in IPv6
Alexander Duyck [Mon, 25 Jan 2016 05:16:48 +0000 (21:16 -0800)] 
i40e/i40evf: Add support for IPv4 encapsulated in IPv6

This patch fixes two issues.  First was the fact that iphdr(skb)->protocol
was being used to test for the outer transport protocol.  This completely
breaks IPv6 support.  Second was the fact that we cleared the flag for v4
going to v6, but we didn't take care of txflags going the other way.  As
such we would have the v6 flag still set even if the inner header was v4.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Replace header pointers with unions of pointers in Tx checksum path
Alexander Duyck [Mon, 25 Jan 2016 05:16:42 +0000 (21:16 -0800)] 
i40e/i40evf: Replace header pointers with unions of pointers in Tx checksum path

The Tx checksum path was maintaining a set of 3 pointers and two lengths in
order to prepare the packet for being checksummed.  The thing is we only
really needed 2 pointers, and the lengths that were being maintained can
easily be computed.

As such we can replace the IPv4 and IPv6 header pointers with one single
union that represents both, or a generic pointer to the start of the
network header.  For the L4 headers we can do the same with TCP and a
generic pointer to the start of the transport header.  The length of the
TCP header is obtained by simply multiplying doff by 4, and the network
header length can be obtained by subtracting the network header pointer
from the transport header pointer.

While I was at it I renamed l4_hdr to l4_proto to make it a bit more clear
and less likely to be confused with l4.hdr which is the transport header
pointer.
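
A small user-space illustration of that bookkeeping (simplified structs,
not the driver's real types), showing how both lengths fall out of the two
pointers:

#include <stdint.h>
#include <stddef.h>

struct ipv4_hdr  { uint8_t ver_ihl, tos; uint16_t tot_len; /* ... */ };
struct ipv6_hdr  { uint32_t ver_class_flow; uint16_t payload_len; /* ... */ };
struct tcp_hdr   { uint16_t source, dest; uint32_t seq, ack; uint8_t doff_res; /* ... */ };

/* One pointer for the network header, one for the transport header;
 * everything else is arithmetic. */
union net_hdr   { struct ipv4_hdr *v4; struct ipv6_hdr *v6; uint8_t *hdr; };
union xport_hdr { struct tcp_hdr *tcp; uint8_t *hdr; };

static void header_lengths(union net_hdr ip, union xport_hdr l4,
			   size_t *net_len, size_t *tcp_len)
{
	/* Network header length: distance between the two pointers. */
	*net_len = (size_t)(l4.hdr - ip.hdr);
	/* TCP header length: data offset (the upper 4 bits of byte 12 on
	 * the wire, i.e. doff) counted in 32-bit words. */
	*tcp_len = (size_t)(l4.tcp->doff_res >> 4) * 4;
}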

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Consolidate all header changes into TSO function
Alexander Duyck [Mon, 25 Jan 2016 05:16:35 +0000 (21:16 -0800)] 
i40e/i40evf: Consolidate all header changes into TSO function

This patch goes through and pulls all of the spots where we were updating
either the TCP or IP checksums in the TSO and checksum path into the TSO
function.  The general idea here is that we should only be updating the
header after we verify we have completed a skb_cow_head check to verify the
head is writable.

One other advantage to doing this is that it makes things much more
obvious.  For example, in the case of IPv6 there was one spot where the
offset of the IPv4 header checksum was being updated which is obviously
incorrect.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Factor out L4 header and checksum from L3 bits in TSO path
Alexander Duyck [Mon, 25 Jan 2016 05:16:28 +0000 (21:16 -0800)] 
i40e/i40evf: Factor out L4 header and checksum from L3 bits in TSO path

This patch makes it so that the L4 header offsets and such can be ignored
when dealing with the L3 checksum and length update.  This is done making
use of two things.

First we can just use the offset from the L4 header to the start of the
packet to determine the L4 offset, and from that we can then make use of
the data offset to determine the full length of the headers.

As far as adjusting the checksum to remove the length we can simply add the
inverse of the length instead of having to recompute the entire
pseudo-header without the length.  In the case of an IPv6 header this
should be significantly cheaper since we can make use of a value we already
needed instead of having to read the source and destination address out of
the packet.
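
The underlying arithmetic: in ones'-complement math, subtracting a word from
a checksum is the same as adding the word's complement, so the length can be
peeled out of an existing pseudo-header sum without rebuilding it.  A small
user-space illustration (not the kernel csum helpers):

#include <stdint.h>

/* Fold a 32-bit ones'-complement accumulator down to 16 bits. */
static uint16_t fold16(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Remove a 16-bit word (e.g. a length field) from a ones'-complement sum
 * by adding its complement; no other fields need to be revisited. */
static uint16_t csum_remove_word(uint16_t sum, uint16_t word)
{
	return fold16((uint32_t)sum + (uint16_t)~word);
}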

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Use u64 values instead of casting them in TSO function
Alexander Duyck [Mon, 25 Jan 2016 05:16:20 +0000 (21:16 -0800)] 
i40e/i40evf: Use u64 values instead of casting them in TSO function

Instead of casting u32 values to u64 it makes more sense to just start out
with u64 values in the first place.  This way we don't need to create a
mess with all of the casts needed to populate a 64b value.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: Drop outer checksum offload that was not requested
Alexander Duyck [Mon, 25 Jan 2016 05:16:13 +0000 (21:16 -0800)] 
i40e/i40evf: Drop outer checksum offload that was not requested

The i40e and i40evf drivers contained code for inserting an outer checksum
on UDP tunnels.  The issue however is that the upper levels of the stack
never requested such an offload and it results in possible errors.

In addition the same logic was being applied to the Rx side where it was
attempting to validate the outer checksum, but the logic there was
incorrect in that it was testing for the resultant sum to be equal to the
header checksum instead of being equal to 0.

Since this code is so massively flawed, and doing things that we didn't ask
for it to do I am just dropping it, and will bring it back later to use as
an offload for SKB_GSO_UDP_TUNNEL_CSUM which can make use of such a
feature.

As far as the Rx feature I am dropping it completely since it would need to
be massively expanded and applied to IPv4 and IPv6 checksums for all parts,
not just the one that supports Tx checksum offload for the outer.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  Merge branch 'netlink-mmap-remove'
David S. Miller [Thu, 18 Feb 2016 16:42:41 +0000 (11:42 -0500)] 
Merge branch 'netlink-mmap-remove'

Florian Westphal says:

====================
netlink: remove mmapped netlink support

As discussed during netconf 2016 in Seville, this series removes
CONFIG_NETLINK_MMAP.

Close to three years after it was merged it has retained several problems
that do not appear to be fixable.

No official netfilter libmnl release contains support for mmap backed netlink
sockets. No openvswitch release makes use of it either.

To use the mmap interface, userspace not only has to probe for mmap netlink
support, it also has to implement a recv/socket receive path in order to
handle messages that exceed the size of an rx ring element (NL_MMAP_STATUS_COPY).

So if there are odd programs out there that attempt to use MMAP netlink
they should continue to work as they already need a socket based code path
to work properly.

The actual revert (first patch) has a list of problems.
The followup patches remove a couple of helpers that are no longer needed
after the revert.

I did a few tests with mmap vs. socket based interface on a 4.4 based
kernel on an i7-4790 box and there are no performance advantages:

loopback, single nfqueue, queueing in -t filter INPUT:
traffic generated by 8 * ping -q -f localhost:
socket backend:
real    0m27.325s
user    0m3.993s
sys     0m23.292s

with mmap ring backend:
real    0m29.054s
user    0m4.924s
sys     0m24.127s

with single tcp stream, unidirectional, loopback mtu set at 1500
(nc localhost discard < /dev/zero > /dev/null):

socket interface:
time nfqdump -b $((8 * 1024 * 1024 * 1024)) -w /dev/null
real    0m15.960s
user    0m1.756s
sys     0m11.143s

mmap ring:
real    0m16.441s
user    0m3.040s
sys     0m13.687s

socket interface nfqdump[1] with --gso option (i.e. MTU is exceeded,
no kernel-side segmentation and checksum fixups) completes in about 5s.

I also tested dumping a conntrack table with 1m entries.
On my box this takes about 2.4 seconds for both mmap and socket backend:

time LD_PRELOAD=../../src/.libs/libmnl.so ./nfct-dump-sk > /dev/null
mnl_cb_run: Success
messages: 1000000
real    0m2.485s
user    0m1.085s
sys     0m1.400s

time LD_PRELOAD=../../src/.libs/libmnl.so ./nfct-dump-mmap > /dev/null
messages: 1000000
real    0m2.451s
user    0m1.124s
sys     0m1.328s

[1] https://git.breakpoint.cc/cgit/fw/nfqdump.git/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  nfnetlink: Revert "nfnetlink: add support for memory mapped netlink"
Florian Westphal [Thu, 18 Feb 2016 14:03:28 +0000 (15:03 +0100)] 
nfnetlink: Revert "nfnetlink: add support for memory mapped netlink"

This reverts commit 3ab1f683bf8b ("nfnetlink: add support for memory mapped
netlink").

Like previous commits in the series, remove wrappers that are not needed
after mmapped netlink removal.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  nfnetlink: remove nfnetlink_alloc_skb
Florian Westphal [Thu, 18 Feb 2016 14:03:27 +0000 (15:03 +0100)] 
nfnetlink: remove nfnetlink_alloc_skb

Following mmapped netlink removal this code can be simplified by
removing the alloc wrapper.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  Revert "genl: Add genlmsg_new_unicast() for unicast message allocation"
Florian Westphal [Thu, 18 Feb 2016 14:03:26 +0000 (15:03 +0100)] 
Revert "genl: Add genlmsg_new_unicast() for unicast message allocation"

This reverts commit bb9b18fb55b0 ("genl: Add genlmsg_new_unicast() for
unicast message allocation")'.

Nothing wrong with it; it's no longer needed since this was only for
mmapped netlink support.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  openvswitch: Revert: "Enable memory mapped Netlink i/o"
Florian Westphal [Thu, 18 Feb 2016 14:03:25 +0000 (15:03 +0100)] 
openvswitch: Revert: "Enable memory mapped Netlink i/o"

This reverts commit 795449d8b846 ("openvswitch: Enable memory mapped Netlink i/o").
Following the mmaped netlink removal this code can be removed.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  netlink: remove mmapped netlink support
Florian Westphal [Thu, 18 Feb 2016 14:03:24 +0000 (15:03 +0100)] 
netlink: remove mmapped netlink support

mmapped netlink has a number of unresolved issues:

- TX zerocopy support had to be disabled more than a year ago via
  commit 4682a0358639b29cf ("netlink: Always copy on mmap TX.")
  because the content of the mmapped area can change after netlink
  attribute validation but before message processing.

- RX support was implemented mainly to speed up nfqueue dumping packet
  payload to userspace.  However, since commit ae08ce0021087a5d812d2
  ("netfilter: nfnetlink_queue: zero copy support") we avoid one copy
  with the socket-based interface too (via the skb_zerocopy helper).

The other problem is that skbs attached to a mmaped netlink socket
behave differently from normal skbs:

- they don't have a shinfo area, so all functions that use skb_shinfo()
(e.g. skb_clone) cannot be used.

- reserving headroom prevents userspace from seeing the content as
it expects message to start at skb->head.
See for instance
commit aa3a022094fa ("netlink: not trim skb for mmaped socket when dump").

- skbs handed e.g. to netlink_ack must have non-NULL skb->sk, else we
crash because it needs the sk to check if a tx ring is attached.

This is not obvious and leads to non-intuitive bug fixes such as 7c7bdf359
("netfilter: nfnetlink: use original skbuff when acking batches").

mmaped netlink also didn't play nicely with the skb_zerocopy helper
used by nfqueue and openvswitch.  Daniel Borkmann fixed this via
commit 6bb0fef489f6 ("netlink, mmap: fix edge-case leakages in nf queue
zero-copy")' but at the cost of also needing to provide remaining
length to the allocation function.

nfqueue also has problems when used with mmaped rx netlink:
- mmaped netlink doesn't allow use of nfqueue batch verdict messages.
  Problem is that in the mmap case, the allocation time also determines
  the ordering in which the frame will be seen by userspace (A
  allocating before B means that A is located in earlier ring slot,
  but this also means that B might get a lower sequence number then A
  since seqno is decided later.  To fix this we would need to extend the
  spinlocked region to also cover the allocation and message setup which
  isn't desirable.
- nfqueue can now be configured to queue large (GSO) skbs to userspace.
  Queueing GSO packets is faster than having to force a software segmentation
  in the kernel, so this is a desirable option.  However, with a mmap based
  ring one has to use 64kb per ring slot element, else mmap has to fall back
  to the socket path (NL_MMAP_STATUS_COPY) for all large packets.

To use the mmap interface, userspace not only has to probe for mmap netlink
support, it also has to implement a recv/socket receive path in order to
handle messages that exceed the size of an rx ring element.

Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Ken-ichirou MATSUZAWA <chamaken@gmail.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  net_sched: Improve readability of filter processing
Jamal Hadi Salim [Thu, 18 Feb 2016 13:04:43 +0000 (08:04 -0500)] 
net_sched: Improve readability of filter processing

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  bridge: switchdev: Offload VLAN flags to hardware bridge
Ido Schimmel [Thu, 18 Feb 2016 13:01:46 +0000 (14:01 +0100)] 
bridge: switchdev: Offload VLAN flags to hardware bridge

When VLANs are created / destroyed on a VLAN filtering bridge (MASTER
flag set), the configuration is passed down to the hardware. However,
when only the flags (e.g. PVID) are toggled, the configuration is done
in the software bridge alone.

While it is possible to pass these flags to hardware when invoked with
the SELF flag set, this creates inconsistency with regards to the way
the VLANs are initially configured.

Pass the flags down to the hardware even when the VLAN already exists
and only the flags are toggled.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  net: phy: Add SGMII support for Marvell 88E1510/1512/1514/1518
Stefan Roese [Thu, 18 Feb 2016 09:59:07 +0000 (10:59 +0100)] 
net: phy: Add SGMII support for Marvell 88E1510/1512/1514/1518

Add code to select SGMII-to-copper mode upon SGMII interface selection.

Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  isdn: divamnt: use y2038-safe ktime_get_ts64() for trace data timestamps
Alison Schofield [Thu, 18 Feb 2016 06:35:11 +0000 (22:35 -0800)] 
isdn: divamnt: use y2038-safe ktime_get_ts64() for trace data timestamps

divamnt stores a start_time at module init and uses it to calculate
elapsed time. The elapsed time, stored in secs and usecs, is part of
the trace data the driver maintains for the DIVA Server ISDN cards.
No change to the format of that time data is required.

To avoid overflow on 32-bit systems use ktime_get_ts64() to return
the elapsed monotonic time since system boot.

This is a change from real to monotonic time. Since the driver only
stores elapsed time, monotonic time is sufficient and more robust
against real time clock changes. These new monotonic values can be
more useful for debugging because they can be easily compared to
other monotonic timestamps.

Note that elapsed time values will now start at system boot time rather
than module load time, so they will differ slightly from previously
reported values.

Remove declaration and init of previously unused time constants:
start_sec, start_usec.
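
The user-space analogue of the same pattern, for comparison (monotonic
elapsed time via clock_gettime; purely illustrative, not the driver code):

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec start, now;
	long long secs;
	long nsecs;

	/* Monotonic clock: immune to wall-clock adjustments. */
	clock_gettime(CLOCK_MONOTONIC, &start);

	/* ... traced activity happens here ... */

	clock_gettime(CLOCK_MONOTONIC, &now);
	secs = now.tv_sec - start.tv_sec;
	nsecs = now.tv_nsec - start.tv_nsec;
	if (nsecs < 0) {
		secs--;
		nsecs += 1000000000L;
	}
	printf("elapsed: %lld.%06ld s\n", secs, nsecs / 1000);
	return 0;
}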

Signed-off-by: Alison Schofield <amsfield22@gmail.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Thu, 18 Feb 2016 15:32:18 +0000 (10:32 -0500)] 
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2016-02-17

This series contains updates to i40e/i40evf once again.

Mitch updates the use of a define instead of a magic number.  Adds support
for packet split receive on VFs, which is disabled by default.  Expands on
a code comment which was not verbose or really helpful.  Fixes an issue
where, if a reset fails to complete, the adapter state was not properly
set, which would cause a panic on rmmod; the fix sets the adapter state
to DOWN to avoid the panic.

Jesse cleans up a "dump" in debugfs that never panned out to be useful.

Anjali adds a workaround for cases where we might have interrupts that get
lost but write-back (WB) happened.  Fixes an issue by falling back to
enabling unicast, multicast and broadcast promiscuous mode when the driver
must disable it's use of "default port" (defport mode) due to internal
incompatibility with Multiple Function per Port (MFP).  Fixes an issue
where queues should never be enabled/disabled in the interrupt handler.

Kiran cleans up the code which used a hard-coded base VEB SEID since it was
removed from the specification.

Shannon adds a few bits for better debug messages.  Fixes an obscure corner
case, where it was possible to clear the NVM update wait flag when no
update_done message was actually received.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  net-sysfs: remove unused fmt_long_hex
Colin Ian King [Mon, 15 Feb 2016 22:54:47 +0000 (22:54 +0000)] 
net-sysfs: remove unused fmt_long_hex

Ever since commit 04ed3e741d0f133e02bed7fa5c98edba128f90e7
("net: change netdev->features to u32") the format string
fmt_long_hex has not been used, so we may as well remove it.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  i40e/i40evf: Bump i40e to 1.4.15 and i40evf to 1.4.11.
Catherine Sullivan [Fri, 15 Jan 2016 22:33:22 +0000 (14:33 -0800)] 
i40e/i40evf: Bump i40e to 1.4.15 and i40evf to 1.4.11.

Bump.

Change-ID: Ie280dc67e37a1cf667c3469499a4fb90f4177b75
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: When in promisc mode apply promisc mode to Tx Traffic as well
Anjali Singhai Jain [Fri, 15 Jan 2016 22:33:21 +0000 (14:33 -0800)] 
i40e: When in promisc mode apply promisc mode to Tx Traffic as well

In MFP mode particularly when we were setting the PF VSI in limited
promiscuous, the HW switch was still mirroring the outgoing packets
from other VSIs (VF/VMdq) onto the PF VSI.

With this new bit set, the mirroring doesn't happen any more and so
we are in limited promiscuous on the PF VSI in MFP which is similar
to defport.

An API check is not required, since this bit is reserved for FW API
version < 1.5

Also update copyright year in file headers.

Change-ID: I9840cb95f11dde733d943cb03ce84f68b9611bc8
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: clean event descriptor before use
Shannon Nelson [Fri, 15 Jan 2016 22:33:20 +0000 (14:33 -0800)] 
i40e: clean event descriptor before use

In one obscure corner case, it was possible to clear the NVM update wait
flag when no update_done message was actually received.  This patch
cleans the event descriptor before use, and moves the opcode check to
where it won't get done if there was no event to clean.

Also update copyright year in file headers.

Change-ID: I68bbc41965e93f4adf07cbe98b9dfd63d41509a4
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40evf: set adapter state on reset failure
Mitch Williams [Fri, 15 Jan 2016 22:33:19 +0000 (14:33 -0800)] 
i40evf: set adapter state on reset failure

If a reset fails to complete, the driver gets its affairs in order and
awaits the cold solace of rmmod. Unfortunately, it was not properly
setting the adapter state, which would cause a panic on rmmod, instead
of the desired surcease.

Set the adapter state to DOWN in this case, and avoid a panic.

Change-ID: I6fdd9906da52e023f8dc744f7da44b5d95278ca9
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: better error reporting for nvmupdate
Shannon Nelson [Fri, 15 Jan 2016 22:33:18 +0000 (14:33 -0800)] 
i40e: better error reporting for nvmupdate

Make sure we return EBUSY while finishing up a reset, and add a few bits
for better debug messages.

Change-ID: I23f6c28a8d96d7aa171abcc265737cec7826c292
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: expand comment
Mitch Williams [Fri, 15 Jan 2016 22:33:17 +0000 (14:33 -0800)] 
i40e: expand comment

Explain why we cannot remove this code, even though it works differently
than any of our other interrupt cause handling code.

Change-ID: Ie66203bd037a466066036611c31d44f759ec5176
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: Do not disable queues in the Legacy/MSI Interrupt handler
Anjali Singhai Jain [Fri, 15 Jan 2016 22:33:16 +0000 (14:33 -0800)] 
i40e: Do not disable queues in the Legacy/MSI Interrupt handler

The queues should never be enabled/disabled in the interrupt handler,
ICR0 interrupt enable should be the only thing that needs to be
dynamically changed in the handler.

This patch fixes that. Without this patch X722 platforms were
seeing weird ping timings when in Legacy mode since it takes
a whole lot of time for the HW/FW to re-enable queues.

Change-ID: If065afc45d81c5a19d4a94a00cd5b8f61cefc40c
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e/i40evf: avoid atomics
Mitch Williams [Fri, 15 Jan 2016 22:33:15 +0000 (14:33 -0800)] 
i40e/i40evf: avoid atomics

In the case where we have a page fully used by receive data, we need to
release the page fully to the stack. Instead of calling get_page (which
increments the page count) followed by free_page (which decrements the
page count), just donate our reference to the stack. Although this
donation is not tax deductible, it does allow us to avoid two very
expensive atomic operations that reverse each other.

Change-ID: If70739792d5748995fc175ec92ac2171ed4ad8fc
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: Removal of code which relies on BASE VEB SEID
Kiran Patil [Fri, 15 Jan 2016 22:33:14 +0000 (14:33 -0800)] 
i40e: Removal of code which relies on BASE VEB SEID

Fixed mapping of SEID is removed from specification. Hence
this patch removes code which was using hard coded base VEB SEID.

Changed FCoE code to use "hw->pf_id" to obtain correct "idx"
and verified.

Removed defines for BASE VSI/VEB SEID and BASE_PF_SEID since it
is not used anymore.

Change-ID: Id507cf4b1fae1c0145e3f08ae9ea5846ea5840de
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: Fix PROMISC mode for Multi-function per port (MFP) devices
Anjali Singhai Jain [Fri, 15 Jan 2016 22:33:13 +0000 (14:33 -0800)] 
i40e: Fix PROMISC mode for Multi-function per port (MFP) devices

This patch falls back to enabling unicast, multicast and
broadcast promiscuous mode when the driver must disable its use
of "default port" aka defport mode (which is normally used to
provide a promiscuous mode), due to internal incompatibility
with Multiple Function per Port (aka MFP).

The situation that requires this patch is when Physical
Function 0 is the device being used, and it can support SR-IOV
when MFP is enabled, via the driver creating a VEB on an MFP
enabled adapter.

Change-ID: Ie90b00d0d58782a5dfcf2c3c9725a2eb90bd63d8
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: Add a SW workaround for lost interrupts
Anjali Singhai Jain [Fri, 15 Jan 2016 22:33:12 +0000 (14:33 -0800)] 
i40e: Add a SW workaround for lost interrupts

This patch adds a workaround for cases where we might have
interrupts that got lost but write-back (WB) happened.
If that happens without this patch we will see a tx_timeout.
To work around it, this patch goes ahead and reschedules NAPI
in that situation, if NAPI is not already scheduled.
We also add a counter in ethtool to keep track of when
we detect a case of tx_lost_interrupt.

Note: napi_reschedule() can be safely called from process/service_task
context and is done in other drivers as well without an issue.

Change-ID: I00f98f1ce3774524d9421227652bef20fcbd0d20
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: trivial: cleanup use of pf->hw
Jesse Brandeburg [Fri, 15 Jan 2016 22:33:11 +0000 (14:33 -0800)] 
i40e: trivial: cleanup use of pf->hw

This patch makes use of a pointer called hw consistent
in the i40e_remove function.

Change-ID: Idacc7ff0a09a68289c57457a78618bf5497de077
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40evf: support packet split receive
Mitch Williams [Fri, 15 Jan 2016 22:33:10 +0000 (14:33 -0800)] 
i40evf: support packet split receive

Support packet split receive on VFs. This is off by default but can be
enabled using ethtool private flags. Because we need to trigger a reset
from outside of i40evf_main.c, create a new function to do so, and
export it.

Also update copyright year in file headers.

Change-ID: I721aa5d70113d3d6d94102e5f31526f6fc57cbbb
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: drop unused debugfs file "dump"
Jesse Brandeburg [Fri, 15 Jan 2016 22:33:09 +0000 (14:33 -0800)] 
i40e: drop unused debugfs file "dump"

There was a completely unused file "dump" in debugfs that
never panned out to be useful.

Change-ID: I12bb9e37b5a83299725dda815a8746157baf6562
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  i40e: get rid of magic number
Mitch Williams [Fri, 15 Jan 2016 22:33:08 +0000 (14:33 -0800)] 
i40e: get rid of magic number

We have a define for this, use it. No functional change.

Change-ID: Ic0e3ea4f562e46de63b2a8de07f291ccc10205fd
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years ago  Merge branch 'vxlan-cleanups'
David S. Miller [Thu, 18 Feb 2016 04:52:12 +0000 (23:52 -0500)] 
Merge branch 'vxlan-cleanups'

Jiri Benc says:

====================
vxlan: clean up rx path, consolidating extension handling

The rx path of VXLAN turned over time into kind of spaghetti code. The rx
processing is split between vxlan_udp_encap_recv and vxlan_rcv but in an
artificial way: vxlan_rcv is just called at the end of vxlan_udp_encap_recv,
continuing the rx processing where vxlan_udp_encap_recv left it. There's no
clear border between those two functions.

It makes sense to combine those functions into one; this will be actually
needed for VXLAN-GPE where we'll need to skip part of the processing which
is hard to do with the current code.

However, both functions are too long already. This patchset is shortening
them, consolidating extension handling that is spread all around together
and moving it to separate functions. (Later patchsets will do more
consolidation in other parts of the functions with the final goal of merging
vxlan_udp_encap_recv and vxlan_rcv.)

In process of consolidation of the extension handling, I needed to deal with
vni field in a generic way, as its lower 8 bits mean different things for
different extensions. While cleaning up the code to strictly distinguish
between "vni" and "vni field" (which contains vni plus an additional byte),
I also converted the code not to convert endianess back and forth.

The full picture can be seen at:
https://github.com/jbenc/linux-vxlan/commits/master
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: treat vni in metadata based tunnels consistently
Jiri Benc [Tue, 16 Feb 2016 20:59:03 +0000 (21:59 +0100)] 
vxlan: treat vni in metadata based tunnels consistently

For metadata based tunnels, VNI is ignored when doing vxlan device lookups
(because such tunnel receives all VNIs). However, this was not honored by
vxlan_xmit_one when doing encapsulation bypass. Move the check for metadata
based tunnel to the common place where it belongs.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: clean up rx error path
Jiri Benc [Tue, 16 Feb 2016 20:59:02 +0000 (21:59 +0100)] 
vxlan: clean up rx error path

When there are unrecognized flags present in the vxlan header, it doesn't
make much sense to return the packet for further UDP processing, especially
considering that for other invalid flag combinations we drop the packet
because of previous checks.

This means we return positive value only at the beginning of the function
where tun_dst is not yet allocated. This allows us to get rid of the
bad_flags and error jump labels.

When we're dropping packet, we need to free tun_dst now.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: clean up extension handling on rx
Jiri Benc [Tue, 16 Feb 2016 20:59:01 +0000 (21:59 +0100)] 
vxlan: clean up extension handling on rx

Bring the extension handling to a single place and move the actual handling
logic out of vxlan_udp_encap_recv as much as possible.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: move GBP header parsing to a separate function
Jiri Benc [Tue, 16 Feb 2016 20:59:00 +0000 (21:59 +0100)] 
vxlan: move GBP header parsing to a separate function

To make vxlan_udp_encap_recv shorter and more comprehensible.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: simplify vxlan_remcsum
Jiri Benc [Tue, 16 Feb 2016 20:58:59 +0000 (21:58 +0100)] 
vxlan: simplify vxlan_remcsum

Some of the parameters are not needed. Simplify the caller of this function
in preparation of making vxlan rx more comprehensible.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: keep flags and vni in network byte order
Jiri Benc [Tue, 16 Feb 2016 20:58:58 +0000 (21:58 +0100)] 
vxlan: keep flags and vni in network byte order

Prevent repeated conversions from and to network order in the fast path.

To achieve this, define all flag constants in big endian order and store VNI
as __be32. To prevent confusion between the actual VNI value and the VNI
field from the header (which contains additional reserved byte), strictly
distinguish between "vni" and "vni_field".

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  vxlan: introduce vxlan_hdr
Jiri Benc [Tue, 16 Feb 2016 20:58:57 +0000 (21:58 +0100)] 
vxlan: introduce vxlan_hdr

Currently, a pointer to the vxlan header is kept in a local variable. It has
to be reloaded whenever pskb pull operations are performed, which usually
happens somewhere deep in called functions.

Create a vxlan_hdr function and use it to reference the vxlan header
instead.
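
The helper itself is tiny; in kernel context it amounts to something like
this (a sketch based on the description above):

static inline struct vxlanhdr *vxlan_hdr(struct sk_buff *skb)
{
	/* The VXLAN header sits immediately after the UDP header. */
	return (struct vxlanhdr *)(udp_hdr(skb) + 1);
}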

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Thu, 18 Feb 2016 04:47:32 +0000 (23:47 -0500)] 
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2016-02-17

This series contains updates to i40e/i40evf only (again).

Jesse moves sync_vsi_filters() up in the service_task because it may need
to request a reset, and we do not want to wait another round of service
task time.  Refactored the enable_icr0() in order to allow it to be
decided by the caller whether the CLEARPBA (clear pending events) bit will
be set while re-enabling the interrupt.  Also provides the "Don't Give Up"
patch, where the driver will keep polling trying to allocate receive buffers
until it succeeds.  This should keep all receive queues running even in
the face of memory pressure.  Cleans up the debugging helpers by putting
everything in hex to be consistent.

Neerav updates the DCB firmware version related checks specific to X710
and XL710 only since the checks are not required for X722 devices.

Shannon adds the use of the new shared MAC filter bit for multicast and
broadcast filters in order to make better use of the filters available
from the device.  Added a parameter to allow the driver to set the
enable/disable of statistics gathering in the hardware switch.  Also the
L2 cloud filtering parameter is removed since it was never used.

Anjali refactors the force_wb and WB_ON_ITR functionality since
Force-WriteBack functionality in X710/XL710 devices has been moved out of
the clean routine and into the service task, so we need to make sure
WriteBack-On-ITR is separated out since it is still called from clean.

Catherine changes the VF driver string to reflect all the products that
are supported.

Mitch refactors the packet split receive code to properly use half-pages
for receives.  Also changes the use of bitwise operators to logical
operators on clean_complete variable, while making a witty reference to
Mr. Spock.  Cleans up (i.e. removes) the hsplit field in the ring
structure and uses the existing macro to detect packet split enablement,
which allows debugfs dumps of the VSI to properly show which receive
routine is in use.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  Merge branch 'rocker-worlds'
David S. Miller [Thu, 18 Feb 2016 04:08:35 +0000 (23:08 -0500)] 
Merge branch 'rocker-worlds'

Jiri Pirko says:

====================
rocker: do world split

This patchset allows new rocker worlds to be easily added in future.
Two new worlds are now under development: P4 and eBPF.

The main part of the patchset is the OF-DPA carve-out. It results in an
OF-DPA-specific file. Clean cut.

Note this patchset is based on my original attempt in October 2015.
I had to rebase, include all suggestions and make a lot of small changes.
The main change is to go with an all-port-one-world approach. The port world
is set according to what is set up in HW. It is not possible to change worlds
from the driver.

v1->v2:
  patch 12/13:
  - split port_init into pre-init and init
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  rocker: return -EOPNOTSUPP for undefined world ops
Jiri Pirko [Tue, 16 Feb 2016 14:14:51 +0000 (15:14 +0100)] 
rocker: return -EOPNOTSUPP for undefined world ops

Suggested-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago  rocker: move OF-DPA stuff into separate file
Jiri Pirko [Tue, 16 Feb 2016 14:14:50 +0000 (15:14 +0100)] 
rocker: move OF-DPA stuff into separate file

Carve out OF-DPA world-specific code from the common file into the world
file. This change required struct rocker and struct rocker_port to be split
into the world-specific struct ofdpa and struct ofdpa_port. Along with this,
the world-specific functions and defines were renamed from the prefix
"rocker_" to "ofdpa_".

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: call rocker_cmd_exec function with "nowait" boolean instead of flags
Jiri Pirko [Tue, 16 Feb 2016 14:14:49 +0000 (15:14 +0100)] 
rocker: call rocker_cmd_exec function with "nowait" boolean instead of flags

No need to push down rocker flags just to check if this is nowait or
not. Let the caller handle that.
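
A minimal sketch of the calling convention change (the callback and priv
names here are placeholders, not the driver's exact ones):

/* before: callers passed driver flags just so the helper could test nowait */
err = rocker_cmd_exec(rocker_port, flags,
		      prepare_cb, prepare_priv, process_cb, process_priv);

/* after: the caller states the intent directly */
err = rocker_cmd_exec(rocker_port, false /* nowait */,
		      prepare_cb, prepare_priv, process_cb, process_priv);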

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: remove trans parameter to rocker_cmd_exec function
Jiri Pirko [Tue, 16 Feb 2016 14:14:48 +0000 (15:14 +0100)] 
rocker: remove trans parameter to rocker_cmd_exec function

The only purpose of passing this parameter is to check for the prepare
phase. The only reason for a failure in that phase is if the TLVs don't
fit into the descriptor. That is highly unlikely, and if it happens, it
is a driver bug. So remove this parameter from rocker_cmd_exec, and
check for the prepare phase in the caller.
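
A hedged sketch of the caller-side pattern this implies, assuming the
switchdev_trans_ph_prepare() helper (the callback and priv names are
placeholders):

/* the prepare phase only validates the request; do not issue the command */
if (switchdev_trans_ph_prepare(trans))
	return 0;

return rocker_cmd_exec(rocker_port, flags,
		       prepare_cb, prepare_priv, NULL, NULL);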

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: pre-allocate wait structures during cmd ring init
Jiri Pirko [Tue, 16 Feb 2016 14:14:47 +0000 (15:14 +0100)] 
rocker: pre-allocate wait structures during cmd ring init

This avoids the need to alloc/free a wait structure for every command call.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: pass "learning" value as a parameter to rocker_port_set_learning
Jiri Pirko [Tue, 16 Feb 2016 14:14:46 +0000 (15:14 +0100)] 
rocker: pass "learning" value as a parameter to rocker_port_set_learning

Be consistent with the rest of the setting functions, and pass
"learning" as a bool function parameter.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: introduce worlds infrastructure
Jiri Pirko [Tue, 16 Feb 2016 14:14:45 +0000 (15:14 +0100)] 
rocker: introduce worlds infrastructure

This is another step on the way to a per-world clean cut. Introduce world
ops hooks which each world can implement in a world-specific way.
Also introduce the world infrastructure along with an OF-DPA world stub.
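
The rough shape of the hooks (a sketch only; the real ops structure carries
more callbacks than shown here):

struct rocker_world_ops {
	const char *kind;
	size_t priv_size;
	size_t port_priv_size;
	u8 mode;
	int (*init)(struct rocker *rocker);
	void (*fini)(struct rocker *rocker);
	int (*port_pre_init)(struct rocker_port *rocker_port);
	int (*port_init)(struct rocker_port *rocker_port);
	void (*port_fini)(struct rocker_port *rocker_port);
	int (*port_open)(struct rocker_port *rocker_port);
	void (*port_stop)(struct rocker_port *rocker_port);
	/* ... switchdev attr/obj and neigh hooks ... */
};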

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: move rocker and rocker_port structs into header
Jiri Pirko [Tue, 16 Feb 2016 14:14:44 +0000 (15:14 +0100)] 
rocker: move rocker and rocker_port structs into header

And take some other related things along. They are going to be pushed
into the OF-DPA part anyway.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: implement get settings mode command
Jiri Pirko [Tue, 16 Feb 2016 14:14:43 +0000 (15:14 +0100)] 
rocker: implement get settings mode command

Introduce a helper to ask HW for the port mode (world).

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: push tlv processing into separate files
Jiri Pirko [Tue, 16 Feb 2016 14:14:42 +0000 (15:14 +0100)] 
rocker: push tlv processing into separate files

Carve out TLV processing helpers into separate files.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: rename rocker.c to rocker_main.c
Jiri Pirko [Tue, 16 Feb 2016 14:14:41 +0000 (15:14 +0100)] 
rocker: rename rocker.c to rocker_main.c

Since "rocker.c" is going to be split into multiple files, start with
renaming original "rocker.c" file to "rocker_main.c". Multiple code
parts are going to be cut from "rocker_main.c" later on.

Fix couple of checkpatch issues on the way.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: rename rocker.h to rocker_hw.h
Jiri Pirko [Tue, 16 Feb 2016 14:14:40 +0000 (15:14 +0100)] 
rocker: rename rocker.h to rocker_hw.h

Since "rocker.h" file is going to be used for different purpose,
rename the hardware-specific header to "rocker_hw.h".

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agorocker: remove unused rocker_port param from alloc funcs and shorten their names
Jiri Pirko [Tue, 16 Feb 2016 14:14:39 +0000 (15:14 +0100)] 
rocker: remove unused rocker_port param from alloc funcs and shorten their names

No need to pass rocker_port around to the alloc/free rocker functions,
since they now use switchdev_trans for memory management storage.
With the param removal, shorten the names of the functions since they
now have nothing to do with the rocker port.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'xgene-multiq'
David S. Miller [Thu, 18 Feb 2016 03:08:34 +0000 (22:08 -0500)] 
Merge branch 'xgene-multiq'

Iyappan Subramanian says:

====================
Add support for Classifier and RSS

This patch set enables:

(i) the Classifier engine, which parses the packet and extracts a search
string that is then used to search a database for associative data.

(ii) Receive Side Scaling (RSS), which dynamically load-balances across
CPUs by controlling the number of messages enqueued per CPU with the help
of a Toeplitz hash over the 4-tuple of source TCP/UDP port, destination
TCP/UDP port, source IPv4 address and destination IPv4 address (a generic
sketch of the hash follows this list).

(iii) Multi queue, to take advantage of RSS
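
For reference, a generic Toeplitz hash over such a tuple looks roughly like
the following (a sketch of the well-known algorithm, not the driver's code;
the key must be at least four bytes longer than the input, e.g. the usual
40-byte RSS key over a 12-byte IPv4 4-tuple):

static u32 toeplitz_hash(const u8 *key, const u8 *input, int len)
{
	/* 32-bit window sliding over the secret key, one bit per input bit */
	u32 window = (key[0] << 24) | (key[1] << 16) | (key[2] << 8) | key[3];
	u32 hash = 0;
	int i, b;

	for (i = 0; i < len; i++) {
		for (b = 7; b >= 0; b--) {
			if (input[i] & (1 << b))
				hash ^= window;
			window <<= 1;
			if (key[i + 4] & (1 << b))
				window |= 1;
		}
	}
	return hash;
}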

v3: Address review comments from v2
    - reordered local variable declarations from longest to shortest line

v2: Address review comments from v1
    - fix kbuild warning
    - add default coalescing

v1:
    - Initial version
====================

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodtb: xgene: Add irqs to support multi queue
Iyappan Subramanian [Wed, 17 Feb 2016 23:00:42 +0000 (15:00 -0800)] 
dtb: xgene: Add irqs to support multi queue

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Khuong Dinh <kdinh@apm.com>
Signed-off-by: Tanmay Inamdar <tinamdar@apm.com>
Tested-by: Toan Le <toanle@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodrivers: net: xgene: Add support for multiple queues
Iyappan Subramanian [Wed, 17 Feb 2016 23:00:41 +0000 (15:00 -0800)] 
drivers: net: xgene: Add support for multiple queues

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Khuong Dinh <kdinh@apm.com>
Signed-off-by: Tanmay Inamdar <tinamdar@apm.com>
Tested-by: Toan Le <toanle@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodrivers: net: xgene: Add support for RSS
Iyappan Subramanian [Wed, 17 Feb 2016 23:00:40 +0000 (15:00 -0800)] 
drivers: net: xgene: Add support for RSS

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Khuong Dinh <kdinh@apm.com>
Signed-off-by: Tanmay Inamdar <tinamdar@apm.com>
Tested-by: Toan Le <toanle@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodrivers: net: xgene: Add support for Classifier engine
Iyappan Subramanian [Wed, 17 Feb 2016 23:00:39 +0000 (15:00 -0800)] 
drivers: net: xgene: Add support for Classifier engine

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Khuong Dinh <kdinh@apm.com>
Signed-off-by: Tanmay Inamdar <tinamdar@apm.com>
Tested-by: Toan Le <toanle@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agovlan: change return type of vlan_proc_rem_dev
Zhang Shengju [Thu, 18 Feb 2016 02:29:30 +0000 (02:29 +0000)] 
vlan: change return type of vlan_proc_rem_dev

Since vlan_proc_rem_dev() only ever returns 0, it's better to return
void instead of int.
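
Illustratively, the signature simply changes from:

int vlan_proc_rem_dev(struct net_device *vlandev);

to:

void vlan_proc_rem_dev(struct net_device *vlandev);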

Signed-off-by: Zhang Shengju <zhangshengju@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: pack tc_cls_u32_knode struct slightly better
John Fastabend [Wed, 17 Feb 2016 22:59:30 +0000 (14:59 -0800)] 
net: pack tc_cls_u32_knode struct slightly better

By packing the structure we can remove a few holes as Jamal
suggests.

before:

struct tc_cls_u32_knode {
struct tcf_exts *          exts;                 /*     0     8 */
u8                         fshift;               /*     8     1 */

/* XXX 3 bytes hole, try to pack */

u32                        handle;               /*    12     4 */
u32                        val;                  /*    16     4 */
u32                        mask;                 /*    20     4 */
u32                        link_handle;          /*    24     4 */

/* XXX 4 bytes hole, try to pack */

struct tc_u32_sel *        sel;                  /*    32     8 */

/* size: 40, cachelines: 1, members: 7 */
/* sum members: 33, holes: 2, sum holes: 7 */
/* last cacheline: 40 bytes */
};

after:

struct tc_cls_u32_knode {
struct tcf_exts *          exts;                 /*     0     8 */
struct tc_u32_sel *        sel;                  /*     8     8 */
u32                        handle;               /*    16     4 */
u32                        val;                  /*    20     4 */
u32                        mask;                 /*    24     4 */
u32                        link_handle;          /*    28     4 */
u8                         fshift;               /*    32     1 */

/* size: 40, cachelines: 1, members: 7 */
/* padding: 7 */
/* last cacheline: 40 bytes */
};

Suggested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoixgbe: fix dates on header of ixgbe_model.h
John Fastabend [Wed, 17 Feb 2016 22:35:23 +0000 (14:35 -0800)] 
ixgbe: fix dates on header of ixgbe_model.h

Fixes: 9d35cf062e05 ("net: ixgbe: add minimal parser details for ixgbe")
Reported-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoixgbe: use u32 instead of __u32 in model header
John Fastabend [Wed, 17 Feb 2016 22:34:53 +0000 (14:34 -0800)] 
ixgbe: use u32 instead of __u32 in model header

I incorrectly used __u32 types where we should be using u32 types when
I added the ixgbe_model.h file.

Fixes: 9d35cf062e05 ("net: ixgbe: add minimal parser details for ixgbe")
Suggested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoi40e/i40evf: Bump version
Jesse Brandeburg [Thu, 14 Jan 2016 00:51:52 +0000 (16:51 -0800)] 
i40e/i40evf: Bump version

Bump version to i40e-1.4.13 and i40evf-1.4.9

Change-ID: I9db37f9d4899141c3e5455dfb456d45465b8c035
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years agoi40e: properly show packet split status in debugfs
Mitch Williams [Thu, 14 Jan 2016 00:51:51 +0000 (16:51 -0800)] 
i40e: properly show packet split status in debugfs

Get rid of the unused hsplit field in the ring struct and use the
existing macro to detect packet split enablement. This allows debugfs
dumps of the VSI to properly show which Rx routine is in use.
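
Illustratively, the dump can key off the ring state bit rather than a
separate field (the debugfs format string shown here is hypothetical):

dev_info(&pf->pdev->dev,
	 "    rx_rings[%i]: ps_enabled = %d\n",
	 i, ring_is_ps_enabled(rx_ring));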

Change-ID: Ic4e9589e6a788ab196ed0850703f704e30c03781
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years agoi40e/i40evf: use logical operators, not bitwise
Mitch Williams [Thu, 14 Jan 2016 00:51:50 +0000 (16:51 -0800)] 
i40e/i40evf: use logical operators, not bitwise

Mr. Spock would certainly raise an eyebrow to see us using bitwise
operators, when we should clearly be relying on logic. Fascinating.
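
A sketch of the kind of change involved in the NAPI poll routine (the call
site shown is illustrative):

-	clean_complete &= i40e_clean_tx_irq(ring, budget);
+	clean_complete = clean_complete &&
+			 i40e_clean_tx_irq(ring, budget);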

Change-ID: Ie338010c016f93e9faa2002c07c90b15134b7477
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years agoi40e/i40evf: use pages correctly in Rx
Mitch Williams [Thu, 14 Jan 2016 00:51:49 +0000 (16:51 -0800)] 
i40e/i40evf: use pages correctly in Rx

Refactor the packet split Rx code to properly use half-pages for
receives. The previous code was doing way more mapping and unmapping
than it needed to, and wasn't properly using half-pages.

Increment the page use count each time we give a half-page to an skb,
knowing that the stack will probably process and release the page before
we need it again. Only free and reallocate pages if the count shows that
both half-pages are in use. Add counters to track reallocations and page
reuse.
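
A hedged sketch of the reuse pattern described above (field names and the
recycling-decision helper are hypothetical, not the driver's exact code):

/* hand the current half-page to the skb and keep a reference so the
 * other half can be used for a later receive */
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
		page_offset, len, PAGE_SIZE / 2);
get_page(page);

/* at refill time, only give up the page and allocate a fresh one when
 * the use count shows both halves are still in flight */
if (stack_still_owns_both_halves(page)) {	/* hypothetical check */
	put_page(page);
	page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
}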

Change-ID: I534b299196036b64be82b4861a0a4036310a8f22
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
8 years agoi40e/i40evf: use __GFP_NOWARN
Jesse Brandeburg [Thu, 14 Jan 2016 00:51:48 +0000 (16:51 -0800)] 
i40e/i40evf: use __GFP_NOWARN

The i40e and i40evf drivers now cleanly handle allocation
failures and can avoid kernel log spew from the memory allocator
when allocations fail, so set __GFP_NOWARN on Rx buffer alloc.
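
Illustratively (the bookkeeping fields shown are hypothetical):

page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
if (!page) {
	rx_ring->rx_stats.alloc_page_failed++;	/* counted, not logged */
	return false;				/* retried on the next poll */
}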

Change-ID: Ic9e1b83c495e2a3ef6b069ba7fb6e52ce134cd23
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>