Linux Base Driver for 10 Gigabit PCI Express Intel(R) Network Connection
========================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2010 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support

Identifying Your Adapter
========================

The driver in this release is compatible with 82598 and 82599-based Intel
Network Connections.

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

http://support.intel.com/support/network/sb/CS-012904.htm

SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS

NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
optics and/or the direct attach cables listed below.

When 82599-based SFP+ devices are connected back to back, they should be set to
the same speed setting via ethtool. Results may vary if you mix speed settings.
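
The back-to-back case above can be sketched with ethtool; the interface name
eth0 is a placeholder for your 82599-based interface, and the same command
should be run on both hosts:

```shell
# Pin the SFP+ port to a fixed 10Gb/s speed so both ends of the
# back-to-back link use the same setting ("eth0" is a placeholder).
ethtool -s eth0 speed 10000 duplex full autoneg off

# Verify the resulting link speed afterwards.
ethtool eth0 | grep -i speed
```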

Supplier    Type                                  Part Numbers

SR Modules
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)     FTLX8571D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)     AFBR-703SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)     AFBR-703SDZ-IN2
LR Modules
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)     FTLX1471D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)     AFCT-701SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)     AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables that
have received some testing. Not all modules are applicable to all devices.

Supplier    Type                                  Part Numbers

Finisar     SFP+ SR bailed, 10g single rate       FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate       AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate       FTLX1471D3BCL

Finisar     DUAL RATE 1G/10G SFP+ SR (No Bail)    FTLX8571D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ SR (No Bail)    AFBR-703SDZ-IN1
Finisar     DUAL RATE 1G/10G SFP+ LR (No Bail)    FTLX1471D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ LR (No Bail)    AFCT-701SDZ-IN1
Finisar     1000BASE-T SFP                        FCLF8522P2BTL
Avago       1000BASE-T SFP                        ABCU-5710RZ

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when ifconfig down
-------------------------------------------
"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig up" turns on the laser.


82598-BASED ADAPTERS

NOTES for 82598-Based Adapters:
- Intel(R) Network Adapters that support removable optical modules only support
  their original module type (i.e., the Intel(R) 10 Gigabit SR Dual Port
  Express Module only supports SR optical modules). If you plug in a different
  type of module, the driver will not load.
- Hot Swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  types are not supported. Please see your system documentation for details.

The following is a list of 3rd party SFP+ modules and direct attach cables that
have received some testing. Not all modules are applicable to all devices.

Supplier    Type                                  Part Numbers

Finisar     SFP+ SR bailed, 10g single rate       FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate       AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate       FTLX1471D3BCL

82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.


Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When TX is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold. When RX is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow Control is enabled by default. To disable flow control when connected to
a flow control capable link partner, use ethtool:

ethtool -A eth? autoneg off rx off tx off

NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
behavior is changed to off. Flow control in 1 gig mode on these devices can
lead to Tx hangs.

Additional Configurations
=========================

Jumbo Frames
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
The maximum value for the MTU is 16110. Use the ifconfig command to
increase the MTU size. For example:

ifconfig ethx mtu 9000 up

The 16110-byte maximum MTU setting coincides with the maximum Jumbo Frames
size of 16128 bytes.
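
The relationship between the two limits is simply the Ethernet framing
overhead, as a quick check shows:

```shell
# The maximum jumbo frame size of 16128 bytes is the 16110-byte MTU
# plus the 14-byte Ethernet header and the 4-byte frame check sequence.
mtu=16110
frame_size=$((mtu + 14 + 4))
echo "$frame_size"    # prints 16128
```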

Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
shown that by coalescing Rx traffic into larger chunks of data, CPU
utilization can be significantly reduced under heavy Rx load. GRO is an
evolution of the previously-used LRO interface. GRO is able to coalesce
other protocols besides TCP. It is also safe to use with configurations that
are problematic for LRO, namely bridging and iSCSI.

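GRO can be inspected and toggled per interface with newer ethtool versions;
a sketch, with the interface name as a placeholder:

```shell
# Check whether GRO is currently enabled ("eth0" is a placeholder).
ethtool -k eth0 | grep generic-receive-offload

# Disable and re-enable GRO, e.g. to compare CPU utilization
# under Rx load with and without coalescing.
ethtool -K eth0 gro off
ethtool -K eth0 gro on
```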
Data Center Bridging, aka DCB
-----------------------------
DCB is a configuration Quality of Service implementation in hardware.
It uses the VLAN priority tag (802.1p) to filter traffic. That means
that there are 8 different priorities that traffic can be filtered into.
It also enables priority flow control, which can limit or eliminate the
number of dropped packets during network stress. Bandwidth can be
allocated to each of these priorities, which is enforced at the hardware
level.

To enable DCB support in ixgbe, you must enable the DCB netlink layer to
allow the userspace tools (see below) to communicate with the driver.
This can be found in the kernel configuration here:

        -> Networking support
          -> Networking options
            -> Data Center Bridging support

Once this is selected, DCB support must be selected for ixgbe. This can
be found here:

        -> Device Drivers
          -> Network device support (NETDEVICES [=y])
            -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
              -> Intel(R) 10GbE PCI Express adapters support
                -> Data Center Bridging (DCB) Support

After these options are selected, you must rebuild your kernel and your
modules.

In order to use DCB, userspace tools must be downloaded and installed.
The dcbd tools can be found at:

http://e1000.sf.net

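One way to confirm that both options above are present in the running kernel,
assuming a distribution that installs the kernel config under /boot:

```shell
# CONFIG_DCB is the DCB netlink layer; CONFIG_IXGBE_DCB is the
# per-driver option selected under the ixgbe entry.
grep -E 'CONFIG_DCB=|CONFIG_IXGBE_DCB=' /boot/config-"$(uname -r)"
```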
Ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest
ethtool version is required for this functionality.

The latest release of ethtool can be found at:
http://ftp.kernel.org/pub/software/network/ethtool/
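
For example, common ethtool invocations against an ixgbe interface include
the following; the interface name eth0 is a placeholder:

```shell
# Driver name, version, and firmware information.
ethtool -i eth0

# Dump the driver's statistics counters.
ethtool -S eth0

# Run the adapter self-test (the interface goes offline briefly).
ethtool -t eth0 offline
```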

FCoE
----
This release of the ixgbe driver contains new code to enable users to use
Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
functionality that is supported by the 82598-based hardware. This code has
no default effect on regular driver operation, and configuring DCB and
FCoE is outside the scope of this driver README. Refer to
http://www.open-fcoe.org/ for FCoE project information and contact
e1000-eedc@lists.sourceforge.net for DCB information.

MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted. An interrupt is sent to the PF driver
notifying it of the spoof attempt.

When a spoofed packet is detected, the PF driver will send the following
message to the system log (displayed by the "dmesg" command):

Spoof event(s) detected on VF (n)

Where n = the VF that attempted to do the spoofing.
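
These events can be pulled out of the kernel ring buffer, for instance:

```shell
# Filter anti-spoofing messages from the kernel log.
dmesg | grep -i "spoof event"
```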


Performance Tuning
==================

An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf


Known Issues
============

Enabling SR-IOV in a 32-bit Microsoft* Windows* Server 2008 Guest OS using
Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE controller under KVM
-----------------------------------------------------------------------------
The KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
This includes traditional PCIe devices, as well as SR-IOV-capable devices using
Intel 82576-based and 82599-based controllers.

While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
known issue with Microsoft Windows Server 2008 VMs that results in a "yellow
bang" error. The problem lies in the KVM VMM itself, not in the Intel driver
or the SR-IOV logic of the VMM: KVM emulates an older CPU model for the
guests, and this older CPU model does not support MSI-X interrupts, which
are a requirement for Intel SR-IOV.

If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
with KVM and a Microsoft Windows Server 2008 guest, try the following
workaround: tell KVM to emulate a different model of CPU when using qemu to
create the KVM guest:

"-cpu qemu64,model=13"

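A full command line using this workaround might look like the following
sketch; the disk image, memory size, and the host PCI address of the assigned
VF are placeholders, and the pci-assign device reflects the assignment
mechanism used by qemu-kvm of this era:

```shell
# Start a Windows Server 2008 guest with an emulated CPU model that
# supports MSI-X, and directly assign an SR-IOV VF to it.
# win2008.img and 01:10.0 are placeholders for your disk image and
# the VF's host PCI address.
qemu-system-x86_64 -enable-kvm \
    -cpu qemu64,model=13 \
    -m 2048 \
    -drive file=win2008.img,if=ide \
    -device pci-assign,host=01:10.0
```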

Support
=======

For general information, go to the Intel support website at:

http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net.