Channel: Intel Communities : Unanswered Discussions - Wired Ethernet

Module eeprom with i40e and XL710-QDA1


Hi Intel,

 

I would like to read out the module EEPROM of a QSFP transceiver connected to an XL710-QDA1 NIC.

The naive approach failed:  ethtool -m eth0. It reports: "Cannot get module EEPROM information: Operation not supported"

Is this feature supported by the NIC and the Intel driver i40e?
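For reference, this is how I would expect the readout to work once driver/NVM support is in place (a hedged sketch; eth0 stands for the XL710 port):

ethtool -i eth0                              # confirm driver and firmware in use
ethtool -m eth0                              # dump the QSFP module EEPROM, if the driver supports it
ethtool -m eth0 hex on offset 0 length 128   # read only the first 128 bytes, in hex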

 

I am using the following driver and firmware:

driver version: 2.0.23

firmware-version: 5.05 0x800028a2 1.1568.0

 

Thanks and kind regards

Thomas Dey


Gigabit CT Desktop Adapter loses connection to local network on daily basis


We are using an Intel(R) Gigabit CT Desktop Adapter (device ID 0x10d3) to connect to our local network on a Windows 10 PC (Dell OptiPlex 9020), driver version 12.12.50.6, driver date 20/03/2015. 'Update Driver' (online) yields 'The best driver software for your device is already installed'.

 

This network adapter loses the connection to our local network on a daily basis (sometimes less than once per day, sometimes several times a day).

 

On some occasions, after the crash, we can't even find an event in Event Viewer. On other occasions we get this event at the time of the crash:

 

Miniport Intel(R) Gigabit Adapter, {...long string...}, had event Fatal error: The IM miniport has failed to initialize.

 

In Device Manager, there is a yellow '!' next to the device, saying that Windows found a problem (Code 43).

 

After restarting the PC, everything works fine every time.

 

One more peculiarity: after the crash, the 'Power Management' tab in Device Manager for this device disappears.

 

We've disabled 'Allow the computer to turn off this device to save power' (while this was still possible). We've also disabled all power-saving options in Windows 10 and in the BIOS.

 

It is crucial for our application that our PC never loses its network connection, so this issue is very frustrating.

 

Many thanks

Network Adapter will not start on cold startup.


Hey guys, I need your help.
I have been trying to fix this problem for 3 days now with no success.

 

Symptoms: On COLD STARTUP, during BIOS POST initialization, the network adapter will not turn on (no green light). After boot, Device Manager shows the correct name of the adapter, Intel 82579V (integrated, FYI), with a yellow marking. The driver is loaded but has not started, as indicated in System Information. If I reboot, or disable the driver and then enable it, the network adapter will then turn on. Subsequent boots keep the adapter working, but if I shut down the PC completely and then restart, the network adapter is turned off again. When I start up the PC from power off (cold start, S5 soft-off state), the network adapter will not start, and when I check the adapter's Properties > Details > Power data, the current power state is D3, which means no power is provided.

 

What I have tried:

1. Installed latest driver, resulted in same problem.

2. Installed the Intel NVRAM update to correct the chipset name of the adapter 82579V from 82579LM, resulted in same problem.

3. Installed a new CMOS battery, resulted in same problem.

4. Defaulted BIOS settings, resulted in same problem.

5. Jumped the BIOS pins and reset with full power cycle, resulted in same problem.

6. Jumped the Intel ME Engine pins and reset, resulted in same problem.

7. Basic checks: Reseated all cables. Checked the integrated adapter, no visual problems. Motherboard is less than 1 year old. Capacitors look fine.

8. Re-installed Windows, resulted in same problem.

9. Checked the network adapter WITHOUT Windows installed, resulted in same problem. Cold start, no power, but if you reboot from BIOS again, the adapter will turn on.

10. Tested with different Power Supply, resulted in same problem.

11. Oh yeah, and I unchecked all the power-saving features under Power Management for the network device, resulted in same problem.

12. Checked that LAN was enabled in the BIOS, resulted in same problem.

 

Now here's where I need your help. I already did all of the above. The key check I made is that WITHOUT Windows and drivers installed, the problem still occurs: the adapter does not start on cold startup, but if you reboot with just the BIOS, the green light comes on, indicating the adapter now has power. That would indicate a problem with the BIOS POST not starting the adapter from a soft-off state or a mechanical-off state.

 

Note: A related problem that occurred at the same time as this network adapter issue is that if I shut down the PC and then switch off the surge protector the PC is connected to, or simply pull the plug from the PC, I cannot restart the PC with the power button. Even with the new CMOS battery in, if I go from no power (G3 Mechanical Off) to plugging back in and starting the PC, it will not turn on. If I remove the battery while the PC is still plugged in, it still will not start. It will only start if I disconnect the power (Mechanical Off), then remove the battery, then plug back in and hit the power button; then the PC will start.

 

It seems like there is a problem in the power settings within the BIOS, but even with a fresh BIOS jumper reset and default settings, the same problem occurs.

 

My Specs:

Windows 7 32bit

DH61CR Motherboard (integrated Network adapter 82579V)

I5-2500 CPU

No additional PCI cards

Corsair PSU VS650

All drivers are up to date.
All Windows Updates installed.

Latest BIOS installed.

 

Thanks in advance for your help.

XL710 NVM v5.0.5 - TX driver issue detected, PF reset issued & I40E_ERR_ADMIN_QUEUE_ERROR


I've been having issues with "TX driver issue detected, PF reset issued" and "fail to add cloud filter" for quite some time across 12 VMware ESXi v6.0 hosts. About once a week the result is a purple screen of death (PSOD).

 

I recently upgraded the XL710 to NVM firmware v5.0.5 and the VMware ESXi XL710 driver to the latest v2.0.6 on 4 of the 12 hosts, and the issues persist.

 

# ethtool -i vmnic2

driver: i40e

version: 2.0.6

firmware-version: 5.05 0x800028a6 1.1568.0

bus-info: 0000:03:00.0

 

Q. In trying to identify the culprit, how do I identify the VM by filter_id?

Q. What is causing the "TX driver issue detected, PF reset issued"?

Q. How can I further troubleshoot to resolve the issue?

 

Here is just a snippet of /var/log/vmkernel.log. The logs are filled with the same repeating error messages:

Note the frequency of the error messages (~50 per minute!)
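The per-minute rate can be confirmed straight from the log, for example (a rough sketch using the vmkernel.log path above):

grep 'fail to add cloud filter' /var/log/vmkernel.log | cut -c1-16 | sort | uniq -c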

 

2017-05-26T16:01:04.659Z cpu26:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:04.659Z cpu26:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:04.660Z cpu26:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:04.660Z cpu26:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:05.347Z cpu11:33354)<6>i40e 0000:05:00.2: TX driver issue detected, PF reset issued

2017-05-26T16:01:05.538Z cpu38:33367)<6>i40e 0000:05:00.2: i40e_open: Registering netqueue ops

2017-05-26T16:01:05.547Z cpu38:33367)IntrCookie: 1915: cookie 0x38 moduleID 4111 <i40e-vmnic4-TxRx-0> exclusive, flags 0x25

2017-05-26T16:01:05.556Z cpu38:33367)IntrCookie: 1915: cookie 0x39 moduleID 4111 <i40e-vmnic4-TxRx-1> exclusive, flags 0x25

2017-05-26T16:01:05.566Z cpu38:33367)IntrCookie: 1915: cookie 0x3a moduleID 4111 <i40e-vmnic4-TxRx-2> exclusive, flags 0x25

2017-05-26T16:01:05.575Z cpu38:33367)IntrCookie: 1915: cookie 0x3b moduleID 4111 <i40e-vmnic4-TxRx-3> exclusive, flags 0x25

2017-05-26T16:01:05.585Z cpu38:33367)IntrCookie: 1915: cookie 0x3c moduleID 4111 <i40e-vmnic4-TxRx-4> exclusive, flags 0x25

2017-05-26T16:01:05.594Z cpu38:33367)IntrCookie: 1915: cookie 0x3d moduleID 4111 <i40e-vmnic4-TxRx-5> exclusive, flags 0x25

2017-05-26T16:01:05.604Z cpu38:33367)IntrCookie: 1915: cookie 0x3e moduleID 4111 <i40e-vmnic4-TxRx-6> exclusive, flags 0x25

2017-05-26T16:01:05.613Z cpu38:33367)IntrCookie: 1915: cookie 0x3f moduleID 4111 <i40e-vmnic4-TxRx-7> exclusive, flags 0x25

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 1 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 2 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 3 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 4 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 5 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 6 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 7 not allocated

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Netqueue features supported: QueuePair   Latency Dynamic Pre-Emptible

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Supporting next generation VLANMACADDR filter

2017-05-26T16:01:09.659Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:09.659Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:09.660Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:09.660Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:14.659Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:14.659Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:14.660Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:14.660Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:19.659Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:19.659Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:19.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:19.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:24.659Z cpu24:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:24.659Z cpu24:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:24.661Z cpu24:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:24.661Z cpu24:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:29.659Z cpu22:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:29.659Z cpu22:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:29.660Z cpu22:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:29.660Z cpu22:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:34.659Z cpu23:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:34.659Z cpu23:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:34.660Z cpu23:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:34.660Z cpu23:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:39.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:39.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:39.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:39.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:44.659Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:44.659Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:44.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:44.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:49.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:49.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:49.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:49.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:54.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:54.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:54.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:54.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:59.659Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:59.659Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:59.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:59.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:04.659Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:04.659Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:04.660Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:04.660Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:09.659Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:09.659Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:09.660Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:09.660Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:14.661Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:14.661Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:14.662Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:14.662Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:19.659Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:19.659Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:19.660Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:19.660Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:24.660Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:24.660Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:24.661Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:24.661Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:29.661Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:29.661Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:29.662Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:29.662Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:34.659Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:34.659Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:34.660Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:34.660Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:39.661Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:39.661Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:39.662Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:39.662Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:44.661Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:44.661Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:44.662Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:44.662Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:49.661Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:49.661Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:49.662Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:49.662Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:54.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:54.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:54.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:54.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:59.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:59.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:59.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:59.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:04.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:04.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:04.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:04.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:09.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:09.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:09.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:09.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:14.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:14.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:14.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:14.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:19.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:19.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:19.663Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:19.663Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:24.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:24.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:24.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:24.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:29.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:29.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:29.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:29.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:34.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:34.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:34.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:34.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:39.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:39.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:39.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:39.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:44.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:44.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:44.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:44.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:49.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:49.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:49.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:49.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:54.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:54.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:54.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:54.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:59.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:59.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:59.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:59.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

How to connect to wired and wireless networks for the NUC Digital Signage Player


Hi,

 

Please guide me on how to connect the NUC Digital Signage Player to wired and wireless networks, and please share what kind of network configuration is required on the switch side (VLAN, MAC address requirements, etc.).

Thanks & Regards,

ShireSiva

Customer Requirement: To implement VF RSS using the ixgbevf driver. How to turn on RSS support on ixgbevf?


Customer Requirement: 

To implement VF RSS using the ixgbevf driver. The customer wants to know how to turn on RSS support on ixgbevf.

 

Customer Observation:

Using ixgbevf 3.3.2 in the guest and ixgbe 5.0.4 on the host. I noticed that even though there are two RX queues for each VF, pretty much all the packets go to the first RX queue. The RSS-related features are also not supported. How do we turn on RSS support on ixgbevf? If it does not support RSS, what is the point of having multiple queues in ixgbevf?

 

To help the customer, I am kindly requesting the following:

  1. Clear, step-by-step instructions for how they can turn on RSS support on ixgbevf.
  2. If possible, a sample code snippet where a function actually turns on RSS support on ixgbevf; that would be highly helpful.
  3. Or perhaps a sample application that is already doing it.
  4. And last, what does the limitation quoted below (originally highlighted in red) mean, given that RSS is workable in ixgbevf, so that they can set their expectations correctly and use it correctly?

The limitation for VF RSS on the Intel® 82599 10 Gigabit Ethernet Controller is: the hash and key are shared among the PF and all VFs, and the RETA table with 128 entries is also shared among the PF and all VFs. As a result, there is no way to query the hash and RETA contents per VF in the guest; if needed, query them on the host (PF) side for the shared RETA information.
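For what it is worth, this is how the shared RETA and hash key would be queried on the host/PF side (a hedged sketch; eth0 stands for the 82599 PF interface):

ethtool -x eth0          # show the RX flow hash indirection table (RETA); newer ethtool versions also show the hash key
ethtool -X eth0 equal 2  # example: spread flows evenly across 2 queues (PF side only)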

                                                                                                                               

QSFP+ Configuration modification is not supported by this adapter for XL710 QDA2


Hi ,

I was trying to set up an Intel XL710 in 2x40 mode using the QCU, but I am getting the error "QSFP+ Configuration modification is not supported by this adapter."

However, if I check the current configuration, it shows the mode as N/A.

 

All the steps that I followed:

 

[root@ndr730l:/tmp/EFIx64] ./qcu64e /devices

Intel(R) QSFP+ Configuration Utility

QCU version: v2.27.10.01

Copyright(C) 2016 by Intel Corporation.

Software released under Intel Proprietary License.

 

NIC Seg:Bus Ven-Dev Mode    Adapter Name

=== ======= ========= ======= ==================================================

1) 000:005 8086-1583 N/A Intel(R) Ethernet Converged Network Adapter XL710-

 

[root@ndr730l:/tmp/EFIx64] ./qcu64e /info /nic=1

Intel(R) QSFP+ Configuration Utility

QCU version: v2.27.10.01

Copyright(C) 2016 by Intel Corporation.

Software released under Intel Proprietary License.

QSFP+ Configuration modification is not supported by this adapter.

 

MAC Address: 3CFDFE16FE80

Serial Number: XL710QDA2BLK

 

Firmware details:

ethtool -i vmnic4

driver: i40e

version: 1.4.28

firmware-version: 5.04 0x800024d8 17.5.10

bus-info: 0000:05:00.0

Windows 10 Dell OptiPlex 755 Intel 82566DM IPv4 not connected


I can't get connected to the internet. I get the message "not connected". The connection on other computers is working fine. I think it started when I tried to update the driver. Intel says the adapter is not supported for Windows 10. I can't find the generic driver. Also, I get error 651. Can anybody help?

Thanks

Dick Prosser

deprosser@comcast.net


802.1ad on ixgbe vfs


Hi,

I am trying to configure 802.1ad on ixgbe VFs with the goal of moving the resulting Q-in-Q interface into a container, i.e. keep the VF in the default namespace and move all of the VF's VLAN interfaces into different namespaces. Theoretically, in this configuration, packets would arrive double-tagged at the PF, be directed to the correct VF (with the outer tag stripped), and then be delivered to Linux with a single tag.

 

I tried this:

 

# echo 32 > /sys/class/net/eth1/device/sriov_numvfs

# ip link set dev eth1 up

# ip link set dev eth1 vf 1 vlan 2 proto 802.1ad

RTNETLINK answers: Protocol not supported

 

Is there a way I can do this?

 

TIA,

Don.

I210 BCM5241: Link to BCM PHY works, no link from PHY to I210


We have a simple on-board I210 in copper 10/100 mode connected to a BCM5241 PHY. The BCM PHY sees link at 100M from the I210, but the I210 cannot see link from the BCM PHY. So TX from the I210 functions, but RX on the I210 has no link. We have tried turning the internal RX AC bypass caps on the I210 off and on, but we still do not see link.

 

The MDI traces are connected directly to one another on the PCB. I210 MDI0 is the TX and I210 MDI1 is the RX. There are no magnetics in between.

 

The signal from the I210 looks good on a scope, but the receive side is very noisy. Has anyone encountered this issue?

Re: Deployment Drivers Missing DEV_15D8


Dear Intel Corporation, I still have no solution. HP doesn't provide correct drivers. Can't Intel create PXE drivers? We have had the devices for almost a month now and have to install them one by one. This is not desirable.

X710 - VFs - TX driver issue detected, PF reset issued, when running iperf3


Hi,

 

* In summary:

When I run iperf3 between 2 machines through SR-IOV VF interfaces, I get the following error in dmesg:

 

TX driver issue detected, PF reset issued

 

* Setup and detailed results

+ I have 2 machines, each with one X710 4-port 10G card. These 2 X710 cards have a direct cable connection between them.

+ On machine 1, I run the following commands to create VF0 and set up its IP. VF0 has a PCI address of 0000:04:0a.0

    $ echo 1 > /sys/bus/pci/devices/0000\:04\:00.2/sriov_numvfs

    $ ifconfig enp4s0f2 mtu 9700

    $ ifconfig enp4s10 1.1.1.1 netmask 255.255.255.0 mtu 9216

  $ echo 'msg_enable 0xffff' > /sys/kernel/debug/i40e/0000\:04\:00.2/command

 

+ On machine 2, I run the following commands to create VF0 and set up its IP. VF0 has a PCI address of 0000:04:0a.0

    $ echo 1 > /sys/bus/pci/devices/0000\:04\:00.2/sriov_numvfs

    $ ifconfig enp4s0f2 mtu 9700

    $ ifconfig enp4s10 1.1.1.2 netmask 255.255.255.0 mtu 9216

 

+ I was able to ping 1.1.1.2 from machine 1 and ping 1.1.1.1 from machine 2

+ On machine 2, start iperf3 server:

   $ iperf3 -s -p 8000

 

+ On machine 1, start iperf3 client:

   $ iperf3 -c 1.1.1.2 -p 8000 -l 64 -P 4

   $ dmesg

 

[ 5895.270158] i40e 0000:04:00.2: Malicious Driver Detection event 0x00 on TX queue 68 PF number 0x02 VF number 0x40

[ 5895.270166] i40e 0000:04:00.2: TX driver issue detected, PF reset issued

[ 5895.270170] i40e 0000:04:00.2: TX driver issue detected on VF 0

[ 5895.270173] i40e 0000:04:00.2: Too many MDD events on VF 0, disabled

[ 5895.270176] i40e 0000:04:00.2: Use PF Control I/F to re-enable the VF

[ 5895.284494] i40evf 0000:04:0a.0: PF reset warning received

[ 5895.284500] i40evf 0000:04:0a.0: Scheduling reset task

[ 5895.334703] i40e 0000:04:00.2: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

[ 5895.334711] i40e 0000:04:00.2: DCB init failed -53, disabled

[ 5895.612086] i40e 0000:04:00.2: Malicious Driver Detection event 0x00 on TX queue 67 PF number 0x02 VF number 0x40

[ 5895.612096] i40e 0000:04:00.2: Too many MDD events on VF 0, disabled

[ 5895.612098] i40e 0000:04:00.2: Use PF Control I/F to re-enable the VF

[ 5895.640978] i40e 0000:04:00.2: Invalid message from VF 0, opcode 3, len 4
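For what it is worth, once the VF has been disabled by the MDD events, it can usually be brought back by destroying and recreating the VFs from the PF (a rough sketch, reusing the PF sysfs path from the setup commands above):

    $ echo 0 > /sys/bus/pci/devices/0000\:04\:00.2/sriov_numvfs

    $ echo 1 > /sys/bus/pci/devices/0000\:04\:00.2/sriov_numvfs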

 

* X710 card ports info

04:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

04:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

04:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

04:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

 

* This is driver info of PF X710:

driver: i40e

version: 1.4.25-k

firmware-version: 5.04 0x80002549 0.0.0

expansion-rom-version:

bus-info: 0000:04:00.2

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

* This is the PF X710 card info:

04:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

  Subsystem: Intel Corporation Ethernet Converged Network Adapter X710

  Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+

  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

  Latency: 0, Cache Line Size: 256 bytes

  Interrupt: pin A routed to IRQ 35

  Region 0: Memory at dc800000 (64-bit, prefetchable) [size=8M]

  Region 3: Memory at dc7f8000 (64-bit, prefetchable) [size=32K]

  Expansion ROM at df380000 [disabled] [size=512K]

  Capabilities: [40] Power Management version 3

  Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)

  Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-

  Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

  Address: 0000000000000000  Data: 0000

  Masking: 00000000  Pending: 00000000

  Capabilities: [70] MSI-X: Enable+ Count=129 Masked-

  Vector table: BAR=3 offset=00000000

  PBA: BAR=3 offset=00001000

  Capabilities: [a0] Express (v2) Endpoint, MSI 00

  DevCap: MaxPayload 2048 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

  ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

  DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+

  RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop- FLReset-

  MaxPayload 128 bytes, MaxReadReq 512 bytes

  DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-

  LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L0s <2us, L1 <16us

  ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+

  LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+

  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

  LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

  DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported

  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled

  LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-

  EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-

  Capabilities: [e0] Vital Product Data

  Product Name: XL710 40GbE Controller

  Read-only fields:

  [PN] Part number:

  [EC] Engineering changes:

  [FG] Unknown:

  [LC] Unknown:

  [MN] Manufacture ID:

  [PG] Unknown:

  [SN] Serial number:

  [V0] Vendor specific:

  [RV] Reserved: checksum good, 0 byte(s) reserved

  Read/write fields:

  [V1] Vendor specific:

  End

  Capabilities: [100 v2] Advanced Error Reporting

  UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-

  UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-

  CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  CEMsk: RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+

  AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

  Capabilities: [140 v1] Device Serial Number 90-9c-a4-ff-ff-fe-fd-3c

  Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

  ARICap: MFVC- ACS-, Next Function: 3

  ARICtl: MFVC- ACS-, Function Group: 0

  Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)

  IOVCap: Migration-, Interrupt Message Number: 000

  IOVCtl: Enable+ Migration- Interrupt- MSE+ ARIHierarchy-

  IOVSta: Migration-

  Initial VFs: 32, Total VFs: 32, Number of VFs: 1, Function Dependency Link: 02

  VF offset: 78, stride: 1, Device ID: 154c

  Supported Page Size: 00000553, System Page Size: 00000001

  Region 0: Memory at 00000000dc400000 (64-bit, prefetchable)

  Region 3: Memory at 00000000dc380000 (64-bit, prefetchable)

  VF Migration: offset: 00000000, BIR: 0

  Capabilities: [1a0 v1] Transaction Processing Hints

  Device specific mode supported

  No steering table available

  Capabilities: [1b0 v1] Access Control Services

  ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  Kernel driver in use: i40e

  Kernel modules: i40e

 

* This is the VF X710 card info:

04:0a.0 Ethernet controller: Intel Corporation XL710/X710 Virtual Function (rev 01)

  Subsystem: Intel Corporation XL710/X710 Virtual Function

  Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-

  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

  Latency: 0

  Region 0: [virtual] Memory at dc400000 (64-bit, prefetchable) [size=64K]

  Region 3: [virtual] Memory at dc380000 (64-bit, prefetchable) [size=16K]

  Capabilities: [70] MSI-X: Enable+ Count=5 Masked-

  Vector table: BAR=3 offset=00000000

  PBA: BAR=3 offset=00002000

  Capabilities: [a0] Express (v2) Endpoint, MSI 00

  DevCap: MaxPayload 2048 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

  ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

  DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-

  RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-

  MaxPayload 128 bytes, MaxReadReq 128 bytes

  DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-

  LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L0s <2us, L1 <16us

  ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+

  LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-

  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

  LnkSta: Speed unknown, Width x0, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-

  DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported

  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled

  LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-

  EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-

  Capabilities: [100 v2] Advanced Error Reporting

  UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UESvrt: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-

  CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-

  AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

  Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

  ARICap: MFVC- ACS-, Next Function: 0

  ARICtl: MFVC- ACS-, Function Group: 0

  Capabilities: [1a0 v1] Transaction Processing Hints

  Device specific mode supported

  No steering table available

  Capabilities: [1d0 v1] Access Control Services

  ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  Kernel driver in use: i40evf

  Kernel modules: i40evf

 

* This is driver info of VF X710:

driver: i40evf

version: 1.4.15-k

firmware-version: N/A

expansion-rom-version:

bus-info: 0000:04:0a.0

supports-statistics: yes

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: yes

 

* OS info:

DISTRIB_ID=Ubuntu

DISTRIB_RELEASE=16.04

DISTRIB_CODENAME=xenial

DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"

 

Please let me know if you need more info.

 

Thanks!

X540-T1 - Device stopped (code 43)


Hi there,

 

I have two of the same setup: one X540-T1 network card in a Logic Supply MK100B-50 (ASRock IMB-181-D motherboard), and in both computers, after a certain amount of usage, the X540-T1 card stops working. When I go into the Windows Device Manager it says "Windows has stopped this device because it has reported problems. (Code 43)"

 

I can disable and re-enable the device to make it work, but only temporarily. The problem comes back.

I've also installed the latest Intel driver. It says it is version 4.0.215.0, which was installed with the file IntelNetworkDrivers-PROWinx64.exe.

 

Should this work?

Any ideas of how to fix this?

 

Thank you.

 

IntelX540-T1 error - 2017-06-02 16_41_41-.png

X540-T2 Link is Down


I have an X540-T2 adapter in an Ubuntu 14.04 server connected to a Netgear XS708E switch, but at least once every 24 hours I get a link-down message in the console.

 

ixgbe 0000:04:00.0: eth0: NIC Link is Down

ixgbe 0000:04:00.0: eth0: NIC Link is Up 10 Gbps, Flow Control: RX/TX

ixgbe 0000:04:00.0: eth1: NIC Link is Down

ixgbe 0000:04:00.0: eth1: NIC Link is Up 10 Gbps, Flow Control: RX/TX

 

I updated to the latest driver and firmware, but no luck.

 

How to fix this problem?

I219-V, no connection.


Hi, I'm French, so I apologize in advance for all the mistakes I'm going to make.

 

So I built my PC (i7 7700K, GTX 1080, MSI Z70, Win 10); everything was going fine for the first 3 days, but last Sunday I lost the connection, just like that.

I rebooted my router several times, same for my PC, but nothing. I installed the drivers from the Intel website, the latest ones and also the previous ones, still nothing. I get this message almost every time: the card Intel(R) Ethernet Connection (2) I219-V has encountered a driver or hardware problem.

I decided to reinstall Win 10; my connection came back, but only for a few minutes. By now I have reinstalled Win 10 three times (Ubuntu too), and strangely, the last time the connection lasted several hours, but only at 10 Mbps, and when I tried to play a video game (the main purpose of my PC) my connection crashed 30 seconds or 1 minute after I joined a party and barely came back. Now I don't have any connection at all.

By the way, my network card has had a lot of trouble recognizing my cables; I used 3 different cables, but my card said that the cable was not connected or was damaged. I also tried connecting my new PC to my previous PC directly with an Ethernet cable, and I had the same problem: the cable is not connected or damaged.

 

I have tried a lot of solutions, but nothing is really working for now. I think my network card should be replaced, but that takes at least 3 weeks, and I would like to find another solution.

 

Thank you for reading this post.


bridge mode issue with i40e vf


Hi,

   I have a network topology as the following:

   

PF1 and PF2, and PF3 and PF4, are connected directly.

VM1, VM2, and VM3 each have a VF passed through from these PFs.

The two VFs in VM2 are both added to a Linux bridge, "br0".

VM1 ip address:192.168.1.2

VM3 ip address:192.168.1.3

 

VM1 VM2 VM3: CentOS 7.0

VF Driver:i40evf

Version: 2.0.22

 

PF Driver:i40e

Version:2.0.23

 

When VM3 pings VM1, there is a flood of ARP traffic on br0, like a broadcast storm.

 

Is there any way that this behaviour can be disabled? It causes severe problems that can escalate into packet storms for us.
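In case it helps with diagnosis, the bridge forwarding table on VM2 can be inspected while the storm is happening (a rough sketch, assuming br0 and its two VF ports as described above):

[root@localhost ~]# brctl show br0          # confirm both VFs are attached to the bridge
[root@localhost ~]# bridge fdb show br br0  # see which MACs the bridge has learned on which port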

 

[root@localhost ~]# tcpdump -i br0 -e arp

tcpdump: WARNING: br0: no IPv4 address assigned

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on br0, link-type EN10MB (Ethernet), capture size 65535 bytes

03:49:52.158874 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159042 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159097 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159139 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159187 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159269 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159280 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159322 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159390 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159433 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159484 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159521 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159560 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159578 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159588 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159602 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159623 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159645 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159922 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.160161 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.160486 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.160607 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.160640 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.160971 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.161217 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.161357 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

X540-T2 VF cannot receive packet with OVS


I have configured a host with two PFs and 16 x 2 VFs on a Debian system. It works well if I use the PF/VF as a standalone NIC, but it is not functional with an OVS bridge.

 

The testing scenario as below:

 

eth0 (PF): 10.250.11.103/24

eth11 (VF 9): 10.250.11.113/24

 

   Bridge "vmbr1"

        Port "eth10"

            Interface "eth10" (VF 8)

        Port "vmbr1"

            Interface "vmbr1"

                type: internal

        Port "in1"

            Interface "in1"

                type: internal

    ovs_version: "2.6.0"

 

in1: 10.250.11.34/24

 

I try the following icmp test:

 

ping 10.250.11.254 (reply good)

ping 10.250.11.254 -I eth11 (reply good)

ping 10.250.11.254 -I in1 (NO response)

 

ip link shows my VFs as:

-------------------------------

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000

    link/ether 0c:c4:7a:df:36:c6 brd ff:ff:ff:ff:ff:ff

    vf 0 MAC 6e:74:9e:65:58:eb, spoof checking on, link-state auto

    vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 5 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 7 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 8 MAC da:c7:06:0b:5b:1f, spoof checking on, link-state auto

    vf 9 MAC ea:97:12:4e:b4:6b, spoof checking on, link-state auto

    vf 10 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 11 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 12 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 13 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 14 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 15 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

 

arp table shown as:

-------------------------------

Address                  HWtype  HWaddress           Flags Mask            Iface

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth11

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth0

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     in1

 

But the ARP entry is not stable; it disappears very quickly:

------------------------------------------------------------

Address                  HWtype  HWaddress           Flags Mask            Iface

10.250.11.254                    (incomplete)                              in1

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth0

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth11

 

The strange thing is:

Pinging via in1 will not fill up the ARP table for the in1 iface (it always shows incomplete), but pinging via the other two ifaces sometimes fills the MAC address in for in1.

I ran tcpdump on in1 and found that it sends out ARP request broadcasts but does not get any ARP reply from host .254.

 

ovs-appctl fdb/show vmbr1:

-----------------------------------------

port  VLAN  MAC                Age

    2     0  72:fc:b4:3a:e6:0f   78

    1     0  6c:3b:6b:e9:7c:e1   38

    1     0  6c:3b:6b:f5:96:e6    3

    1     0  88:43:e1:dd:9c:80    1

This looks persistent and does not change no matter which iface I ping through.

 

I have also tried changing the VF port, but it seems none of them will work with the OVS bridge.
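One more thing I have not tried yet is turning off spoof checking on the bridged VFs, since ip link reports "spoof checking on" for every VF (a sketch only, using eth0 as the PF shown above):

ip link set dev eth0 vf 8 spoofchk off
ip link set dev eth0 vf 9 spoofchk off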

 

Is there any advice?

How to change the TX queue value in the ixgbevf driver?


My question is:

Environment: 82599 Intel Ethernet Controller, ESXi on the host and Ubuntu in the guest, and my system requires DPDK.

 

When I build ixgbe on the host successfully and bind a VF in Ubuntu, I find the queue count on the VF is only 1 for both TX and RX.

Then I set RSS=8,8 and MQ=1,1 on the host, but it has no effect on the queue count of the VF, so DPDK can't work in my system. I want to change the TX queue count to 8; how do I do it?
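For reference, this is how I would expect the queue count to be inspected and changed from inside the guest, if the VF driver allowed it (a hedged sketch; ethX stands for the VF interface in the Ubuntu guest):

$ ethtool -l ethX              # show current and maximum RX/TX channel counts
$ ethtool -L ethX combined 8   # request 8 queue pairs (ixgbevf/the PF may reject this)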

How to configure a floating VEB?


HI

I want to add a floating VEB on an X710 network adapter and link two VFs to this VEB. Please give me a method to do this.

 

Driver version: 2.0.23

 

Thanks

Does the i40e driver support VLAN hardware filters?


I am trying to filter network packets based on VLAN tag IDs. Using Wireshark I can capture/inspect packets with 2 different VLAN IDs on my network. I was hoping to suppress packets associated with a particular VLAN ID. So, using ethtool, I added hardware filters as follows:

 

sudo ethtool -U eth11 flow-type udp4 vlan 0x65 vlan-mask 0xE000 action -1

sudo ethtool -U eth11 flow-type udp4 vlan 0x67 vlan-mask 0xE000 action -1

Both commands are accepted (I observe 2 messages indicating that rules 2001 and 2002 have been added, respectively).

 

After executing the above commands, no packets associated with VLAN 0x65 or 0x67 are received (good, what I expected). As soon as I clear one of the hardware filters via ethtool (sudo ethtool -U eth11 delete 2001), I receive packets associated with both VLAN tags 0x65 and 0x67.
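For completeness, this is how I check which rules are still installed after deleting one (a small sketch; eth11 as above):

sudo ethtool -u eth11     # list the ntuple/flow-director rules currently active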

 

I was wondering: does the Intel i40e driver truly support hardware filtering?

 

Thanks
