I need the 210AT project, but now I can't find the 210AT while debugging the program.
Please help me find it, thanks.
Hello,
Can someone point me in the right direction or to a KB article, or explain what the difference is between the various Intel Boot Agents?
Recently I've been seeing the following:
CL
GE
FE
Is there a matrix that explains the differences?
Or could these be product generations, i.e. FE became GE, which became CL (or something like that)?
Thanks
Hi there.
Does the "X540 Bypass Adapter" driver support IEEE 1588 (PTP)?
Linux Kernel Version: 3.9.0
Target Device: Ethernet Server Bypass Adapter X520/X540
Package: ixgbe-4.4.0.19.tar.gz
I unpacked the package and then ran:
# make CFLAGS_EXTRA="-DIXGBE_PTP" install
bud "ixgbe_ptp.o" is not built.
Hello
I have a problem with the Intel I350-T2 drivers on Microsoft Windows Server 2012.
Sometimes the Intel(R) PROSet Monitoring Service stops.
The event log records this message:
Log Name: Application
Source: Application Error
Date: 27.01.2016 12:37:37
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
User: N/A
Computer: VCS4H.ascon.ru
Description:
Faulting application name: IProsetMonitor.exe, version: 20.3.44.0, time stamp: 0x55c3b29e
Faulting module name: ntdll.dll, version: 6.2.9200.17581, time stamp: 0x5644f0f7
Exception code: 0xc0000374
Fault offset: 0x00000000000e9d19
Faulting process id: 0x1684
Faulting application start time: 0x01d158e1847d87b6
Faulting application path: C:\Windows\system32\IProsetMonitor.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Report Id: 995224c2-c4d9-11e5-9422-0cc47a31ddf5
Faulting package full name:
Faulting package-relative application ID:
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Application Error" />
<EventID Qualifiers="0">1000</EventID>
<Level>2</Level>
<Task>100</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2016-01-27T09:37:37.000000000Z" />
<EventRecordID>2490</EventRecordID>
<Channel>Application</Channel>
<Computer>VCS4H.ascon.ru</Computer>
<Security />
</System>
<EventData>
<Data>IProsetMonitor.exe</Data>
<Data>20.3.44.0</Data>
<Data>55c3b29e</Data>
<Data>ntdll.dll</Data>
<Data>6.2.9200.17581</Data>
<Data>5644f0f7</Data>
<Data>c0000374</Data>
<Data>00000000000e9d19</Data>
<Data>1684</Data>
<Data>01d158e1847d87b6</Data>
<Data>C:\Windows\system32\IProsetMonitor.exe</Data>
<Data>C:\Windows\SYSTEM32\ntdll.dll</Data>
<Data>995224c2-c4d9-11e5-9422-0cc47a31ddf5</Data>
<Data>
</Data>
<Data>
</Data>
</EventData>
</Event>
How is it possible to solve this problem?
There are two servers in a cluster; the problem occurs only when starting the Intel(R) PROSet Monitoring Service.
If this service is stopped, will there be problems with the operation of the network interface card drivers?
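As a temporary workaround, I am considering configuring Windows to restart the service automatically after a crash, roughly like this (a sketch; the service key name "IProsetMonitor" is my assumption based on the faulting executable, so it should be verified with "sc query" first):
rem Restart the service 60 s after a failure; reset the failure count daily.
sc failure IProsetMonitor reset= 86400 actions= restart/60000
rem Show the configured failure actions to confirm:
sc qfailure IProsetMonitor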
Hi
I am trying to update the firmware of my X710 card using Linux and nvmupdate64e,
but the tool says that an update is not available. Am I doing something wrong?
./nvmupdate64e -u -l -o update.xml -b -c nvmupdate.cfg
Intel(R) Ethernet NVM Update Tool
NVMUpdate version 1.24.33.08
Copyright (C) 2013 - 2015 Intel Corporation.
Config file read.
No devices to update
Post update inventory
./nvmupdate64e
Intel(R) Ethernet NVM Update Tool
NVMUpdate version 1.24.33.08
Copyright (C) 2013 - 2015 Intel Corporation.
WARNING: TO AVOID DAMAGE TO YOUR DEVICE, DO NOT EXIT OR REBOOT OR POWER OFF THE SYSTEM DURING THIS UPDATE
Inventory in progress. Please wait [+.........]
Num Description Device-Id B:D Adapter Status
=== ====================================== ========= ===== ====================
01) Intel(R) Ethernet Converged Network Ad 8086-1572 02:00 Update not available
02) Intel(R) I350 Gigabit Network Connecti 8086-1521 06:00 Update not available
lspci -v | grep Eth
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
Subsystem: Intel Corporation Ethernet Controller X710 for 10GbE SFP+
02:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
Subsystem: Intel Corporation Ethernet Converged Network Adapter X710
ethtool -i eth1
driver: i40e
version: 1.3.49
firmware-version: 4.26 0x80001609 0.0.0
bus-info: 0000:02:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
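In case it is useful: as far as I understand, the update tool matches adapters against entries in nvmupdate.cfg by PCI device and subsystem IDs, so I also compared those IDs with the .cfg file, roughly like this (a sketch; the bus address 02:00.0 comes from the lspci output above):
# Show vendor:device and subsystem IDs in numeric form for the X710 port:
lspci -nn -s 02:00.0
# Look for a matching entry for device ID 1572 in the update package:
grep -i -A 4 1572 nvmupdate.cfg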
I have 2 new X710 cards to install in our HP DL360 G8 hypervisor servers running Windows Server 2012 R2 64-bit Datacenter. The card shows up in Device Manager under Other Devices as "Ethernet Controller". I downloaded driver 20.4.1, and when I install it the server blue-screens. I also updated the firmware with NVMUpdate Utility 1.25.20.3. I tried this on 2 servers with the same OS and had the same result. I just tried installing 20.3, and it also blue-screened while loading the drivers.
I also had an X520-DA2 card for another server and it is identified and installs without any problems, so what is my problem with the X710?
Any ideas?
When looking at Add/Remove Programs, as well as Settings > System > Storage > Storage Usage, "Intel(R) Network Connections Drivers" is reported as using (in my case) 6 gigabytes.
This is apparently because the InstallDirectory setting in the installer is %systemroot%\system32 rather than its own directory, so the entry gets charged for the whole system32 subtree. When I was looking for things to delete from my Windows 10 computer, this looked like something was wrong and should be deleted. Fortunately, I knew it was a misrepresentation. I strongly encourage you to fix this, because it is not very helpful to regular users and could lead to complaints and confusion.
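For anyone who wants to confirm where the figure comes from, the install location that the size calculation is based on can be read from the uninstall entries in the registry, roughly like this (a sketch; searching by display name, since the exact key name under Uninstall varies):
rem Search the uninstall entries for the Intel network drivers entry:
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" /s /f "Intel(R) Network Connections" /d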
Mike
I use a genuine Intel 82574L network card (EXPI9301CTBLK). After upgrading from the built-in Windows 7 x64 driver, version 11.0.5.22, to the latest, 12.7.28.0, there are delays in loading web pages and a general slowdown. Why do the new drivers slow things down? After rolling back to version 11.0.5.22, everything returns to normal.
PS: The driver also has the Full Duplex and Half Duplex modes mixed up (Full is labelled Half and vice versa).
Hello,
We have installed a PC with Ubuntu 14.04.3 (with all updates) as a border router:
Linux hellnat 3.19.0-47-generic #53~14.04.1-Ubuntu SMP Mon Jan 18 16:09:14 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
CPU: 2*E5-2690v3 with hyperthreading enabled (so total 48 logical "cores" in OS)
Intel XL710 quad port; every "channel" of every p1p* interface is bound to its own core (see the sketch below).
It is used as a border router, so it runs BGP. We use p1p1 and p1p3 to connect to internal routers, and p1p2 and p1p3 to uplinks.
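For reference, the binding is done roughly like this (a sketch; the real IRQ numbers come from /proc/interrupts, and irqbalance is disabled so it does not undo the pinning):
# List the interrupt vectors the i40e driver registered for p1p1:
grep p1p1 /proc/interrupts
# Pin one vector per core, e.g. pin IRQ 100 to CPU 8:
echo 8 > /proc/irq/100/smp_affinity_list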
Traffic suddenly stopped, and it was NOT during rush hour.
After a reboot (via IPMI) I saw the following lines in the syslog file:
Jan 31 02:33:33 hellnat kernel: [220504.793680] ------------[ cut here ]------------
Jan 31 02:33:33 hellnat kernel: [220504.793701] WARNING: CPU: 45 PID: 0 at /build/linux-lts-vivid-Yt59dr/linux-lts-vivid-3.19.0/net/sched/sch_generic.c:303 dev_watchdog+0x24f/0x260()
Jan 31 02:33:33 hellnat kernel: [220504.793705] NETDEV WATCHDOG: p1p1 (i40e): transmit queue 8 timed out
Jan 31 02:33:33 hellnat kernel: [220504.793707] Modules linked in: nf_conntrack_netlink nfnetlink xt_tcpudp xt_multiport iptable_filter xt_nat iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_mangle xt_CT iptable_raw ast ttm joydev intel_rapl iosf_mbi drm_kms_helper x86_pkg_temp_thermal intel_powerclamp drm syscopyarea sysfillrect sysimgblt coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul aesni_intel ipmi_ssif aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd lpc_ich mei_me sb_edac edac_core mei ipmi_si 8250_fintek ipmi_msghandler lp wmi acpi_pad parport ioatdma mac_hid shpchp nf_conntrack_ftp acpi_power_meter nf_nat_pptp nf_nat_proto_gre nf_conntrack_pptp nf_conntrack_proto_gre nf_nat nf_conntrack ip_tables x_tables 8021q garp mrp stp llc tcp_htcp hid_generic i40e(OE) igb vxlan ip6_udp_tunnel i2c_algo_bit udp_tunnel usbhid dca uas configfs ahci ptp usb_storage hid megaraid_sas libahci pps_core
Jan 31 02:33:33 hellnat kernel: [220504.793817] CPU: 45 PID: 0 Comm: swapper/45 Tainted: G OE 3.19.0-47-generic #53~14.04.1-Ubuntu
Jan 31 02:33:33 hellnat kernel: [220504.793820] Hardware name: Supermicro SYS-6018R-WTR/X10DRW-i, BIOS 1.1 08/13/2015
Jan 31 02:33:33 hellnat kernel: [220504.793822] ffffffff81b3fcc0 ffff88105f4a3d58 ffffffff817afcd5 0000000000000000
Jan 31 02:33:33 hellnat kernel: [220504.793827] ffff88105f4a3da8 ffff88105f4a3d98 ffffffff81074dea 0000000000000286
Jan 31 02:33:33 hellnat kernel: [220504.793830] 0000000000000008 ffff88105b65a000 0000000000000040 ffff88105748cf40
Jan 31 02:33:33 hellnat kernel: [220504.793835] Call Trace:
Jan 31 02:33:33 hellnat kernel: [220504.793837] <IRQ> [<ffffffff817afcd5>] dump_stack+0x45/0x57
Jan 31 02:33:33 hellnat kernel: [220504.793857] [<ffffffff81074dea>] warn_slowpath_common+0x8a/0xc0
Jan 31 02:33:33 hellnat kernel: [220504.793860] [<ffffffff81074e66>] warn_slowpath_fmt+0x46/0x50
Jan 31 02:33:33 hellnat kernel: [220504.793869] [<ffffffff816cd69f>] dev_watchdog+0x24f/0x260
Jan 31 02:33:33 hellnat kernel: [220504.793874] [<ffffffff816cd450>] ? dev_graft_qdisc+0x80/0x80
Jan 31 02:33:33 hellnat kernel: [220504.793879] [<ffffffff810dac79>] call_timer_fn+0x39/0x110
Jan 31 02:33:33 hellnat kernel: [220504.793883] [<ffffffff816cd450>] ? dev_graft_qdisc+0x80/0x80
Jan 31 02:33:33 hellnat kernel: [220504.793888] [<ffffffff810dc440>] run_timer_softirq+0x220/0x320
Jan 31 02:33:33 hellnat kernel: [220504.793898] [<ffffffff8104a403>] ? lapic_next_deadline+0x33/0x40
Jan 31 02:33:33 hellnat kernel: [220504.793905] [<ffffffff81078f44>] __do_softirq+0xe4/0x270
Jan 31 02:33:33 hellnat kernel: [220504.793909] [<ffffffff8107930d>] irq_exit+0x9d/0xb0
Jan 31 02:33:33 hellnat kernel: [220504.793916] [<ffffffff817ba78a>] smp_apic_timer_interrupt+0x4a/0x60
Jan 31 02:33:33 hellnat kernel: [220504.793924] [<ffffffff817b87bd>] apic_timer_interrupt+0x6d/0x80
Jan 31 02:33:33 hellnat kernel: [220504.793926] <EOI> [<ffffffff81650510>] ? cpuidle_enter_state+0x70/0x170
Jan 31 02:33:33 hellnat kernel: [220504.793938] [<ffffffff816504fd>] ? cpuidle_enter_state+0x5d/0x170
Jan 31 02:33:33 hellnat kernel: [220504.793943] [<ffffffff816506c7>] cpuidle_enter+0x17/0x20
Jan 31 02:33:33 hellnat kernel: [220504.793949] [<ffffffff810b54d4>] cpu_startup_entry+0x334/0x3d0
Jan 31 02:33:33 hellnat kernel: [220504.793955] [<ffffffff810e9e53>] ? clockevents_register_device+0xe3/0x140
Jan 31 02:33:33 hellnat kernel: [220504.793960] [<ffffffff81048bb7>] start_secondary+0x197/0x1c0
Jan 31 02:33:33 hellnat kernel: [220504.793963] ---[ end trace 43e1a051ade0289e ]---
Jan 31 02:33:33 hellnat kernel: [220504.793973] i40e 0000:81:00.0 p1p1: tx_timeout: VSI_seid: 399, Q 8, NTC: 0xd36, HWB: 0xa1, NTU: 0xa1, TAIL: 0xa1, INT: 0x0
Jan 31 02:33:33 hellnat kernel: [220504.793976] i40e 0000:81:00.0 p1p1: tx_timeout recovery level 1, hung_queue 8
Jan 31 02:33:43 hellnat watchquagga[2972]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
Jan 31 02:33:49 hellnat watchquagga[2972]: bgpd state -> unresponsive : no response yet to ping sent 10 seconds ago
Jan 31 02:33:50 hellnat kernel: [220521.908228] NMI watchdog: BUG: soft lockup - CPU#13 stuck for 23s! [kworker/13:1:536]
Jan 31 02:33:50 hellnat kernel: [220521.908306] Modules linked in: nf_conntrack_netlink nfnetlink xt_tcpudp xt_multiport iptable_filter xt_nat iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_mangle xt_CT iptable_raw ast ttm joydev intel_rapl iosf_mbi drm_kms_helper x86_pkg_temp_thermal intel_powerclamp drm syscopyarea sysfillrect sysimgblt coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul aesni_intel ipmi_ssif aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd lpc_ich mei_me sb_edac edac_core mei ipmi_si 8250_fintek ipmi_msghandler lp wmi acpi_pad parport ioatdma mac_hid shpchp nf_conntrack_ftp acpi_power_meter nf_nat_pptp nf_nat_proto_gre nf_conntrack_pptp nf_conntrack_proto_gre nf_nat nf_conntrack ip_tables x_tables 8021q garp mrp stp llc tcp_htcp hid_generic i40e(OE) igb vxlan ip6_udp_tunnel i2c_algo_bit udp_tunnel usbhid dca uas configfs ahci ptp usb_storage hid megaraid_sas libahci pps_core
Jan 31 02:33:50 hellnat kernel: [220521.908396] CPU: 13 PID: 536 Comm: kworker/13:1 Tainted: G W OE 3.19.0-47-generic #53~14.04.1-Ubuntu
Jan 31 02:33:50 hellnat kernel: [220521.908399] Hardware name: Supermicro SYS-6018R-WTR/X10DRW-i, BIOS 1.1 08/13/2015
Jan 31 02:33:50 hellnat kernel: [220521.908408] Workqueue: events inet_frag_worker
The main lines, I think, are:
Jan 31 02:33:33 hellnat kernel: [220504.793705] NETDEV WATCHDOG: p1p1 (i40e): transmit queue 8 timed out
Jan 31 02:33:33 hellnat kernel: [220504.793973] i40e 0000:81:00.0 p1p1: tx_timeout: VSI_seid: 399, Q 8, NTC: 0xd36, HWB: 0xa1, NTU: 0xa1, TAIL: 0xa1, INT: 0x0
Jan 31 02:33:33 hellnat kernel: [220504.793976] i40e 0000:81:00.0 p1p1: tx_timeout recovery level 1, hung_queue 8
We can see that TX queue 8 hung. Why can this happen? I think it is a problem with either the network adapter or the driver. Can you explain it, and how to fix it? It is a big problem when it happens, because all our traffic goes through this machine.
Some information from ethtool:
# ethtool -i p1p1
driver: i40e
version: 1.3.49
firmware-version: 4.53 0x80001da6 0.0.0
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
# ethtool -c p1p1
Coalesce parameters for p1p1:
Adaptive RX: off TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
rx-usecs: 800
rx-frames: 0
rx-usecs-irq: 0
rx-frames-irq: 256
tx-usecs: 600
tx-frames: 0
tx-usecs-irq: 0
tx-frames-irq: 256
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
# ethtool -k p1p1
Features for p1p1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: on
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off
tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: on
receive-hashing: on
highdma: on
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
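If it would help, the next time the hang occurs I can also capture the per-queue TX counters and a register dump before rebooting, for example:
# Driver statistics, including per-queue TX counters:
ethtool -S p1p1 | grep -i tx
# Adapter register dump for offline analysis (supported per the output above):
ethtool -d p1p1 > p1p1-regs.txt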
If you need more information, feel free to ask.
Thank you in advance.
Regards,
Evgeny
I am using an Intel(R) 82579V-based Network Controller (OEM) for my network. I tried to find the Wake on Magic Packet setting, but I could not find it in the adapter's properties or power-management options, or even in my BIOS. Is my adapter too old for this, or can I download drivers that add it?
Hi,
I have an S5520HC that has 82575EB onboard NICs. I've upgraded the server from WS2012 to WS2012R2.
The default driver from WS2012R2 is present, but it doesn't have the teaming feature.
I tried to install Ethernet driver 20.5 from downloadcenter.intel.com (not 20.6, because the 82575EB is not on its 'valid for the product' list), but it failed. The error message was 'Cannot install drivers. No Intel(R) Adapters are present in this computer'.
What's wrong? I've found that the 82575EB's status is not EOL on the Intel ARK site. Is this a problem with my server?
Any Info will be helpful to me.
Thanks,
Martin
We have an app running on a VF on the Fortville XL710. According to the datasheet, the maximum number of VLAN filters the XL710 VF can set is 8. Unfortunately, our app requires many more than that, so I'm wondering whether there is a way for the app running on the VF to receive all traffic directed to its MAC address, i.e. both untagged and tagged traffic? If not, is there any other way to have a large number of VLANs on the VF?
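One avenue I am aware of, though I have not confirmed it applies to our driver version: newer i40e PF drivers and iproute2 releases support marking a VF as "trusted", which as I understand it lets the VF request promiscuous modes and so receive tagged traffic without per-VLAN filters. A sketch, assuming the PF is eth2 and the app uses VF 0:
# On the host: mark VF 0 as trusted so the PF honours its
# promiscuous-mode requests (requires a recent i40e PF driver):
ip link set dev eth2 vf 0 trust on
# On the VF interface: enable promiscuous mode so tagged frames to the
# VF's MAC are delivered regardless of the 8-entry VLAN filter table:
ip link set dev eth0 promisc on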
Hi Folks
For some reason when I turned on my 2 week old NUC6i5SYH yesterday it refused to connect to the internet via ethernet connection. It has been fine since I got it and I have made sure that all the Intel Drivers that should be updated have been. I set up the WiFi (which I hadn't bothered to do) and that worked fine - went through all Drivers again updating as I go - only thing I didn't get to look for was any BIOS updates.
I eventually checked the cable (fine) then I checked my switch (fine) and finally my hated SKY router (as 'fine' as normal). Plugging my laptop into the same ethernet cable demonstrated that my laptop immediately switched to a cabled connection - so I now know it's either software or hardware playing up with my NUC.
My plan tonight is to delete the ethernet network drivers in Device Manager (I'm on Win10 currently) and to reinstall them - then if necessary investigate any BIOS updates.
What else am I missing? It seems pretty certain that it's the NUC, as everything else tests OK, but why would it suddenly stop working overnight between uses? The Visual BIOS currently on it shows that the LAN is switched ON.
I'm puzzled and annoyed. Any help would be appreciated.
I've been watching this go nowhere since October 2015, so I thought maybe someone could shed some light on why Intel is now half a year past releasing a chipset they don't have driver support for. Specifically, I'm referencing FreeBSD drivers for the i219, for which this thread on the Dev forums keeps getting the "it's coming" canard...
i219 Driver Support for FreeBSD
So, can anyone comment on this? Maybe we'll see support before Cannonlake...
We have 2 new HP EliteDesk 800 G1 Tower PCs, each with an onboard Intel Ethernet Connection I217-LM network card. I decided to run the built-in diagnostics on this card, and it passed the Connection and Cable tests. On the Hardware test, though it passed the Register, EEPROM, and Loopback tests, it failed the FIFO test. I contacted HP, who then replaced the motherboard, but it still failed the FIFO test. This diagnostic is accessed via Control Panel > Device Manager > Network Adapters > Intel Ethernet Connection I217-LM Properties > Link Speed > Diagnostics > Hardware. They then advised that they do not support this third-party diagnostic software and suggested that we run the BIOS test instead, as they will only investigate further if it fails the BIOS test. We haven't run the BIOS test yet, as the PCs are in a different location.
I don't fully understand the implications of the FIFO test failing, but as far as I'm concerned it doesn't inspire confidence to see this test fail. We have other HP machines of different models with different Intel network cards; they also have this built-in diagnostic and they pass the Hardware test, so I don't see why this test on the Intel Ethernet Connection I217-LM would fail the FIFO portion. Can anyone tell me more about this FIFO test and what we should do to resolve it? I'm attaching a screenshot of the Diagnostics result so everyone can see what I mean. I would very much appreciate your assistance on this matter.
I have two gigabit Intel NICs, a CT Desktop and a Pro/1000 PT Quad. Which is better in terms of speed, lower CPU load, etc.? Or maybe I should sell both and buy a new Intel i210-T1?
I have a Lenovo W510 laptop including 8GB RAM, Samsung SSD, and "Intel 82577LM Gigabit Network Connection" Ethernet adapter with what I believe are the latest available drivers. This machine was recently built (with SSD) and Win7. The machine connects to a Dlink Ethernet switch, to which is also connected a NAS box and Router (with connection to the Internet).
The machine and network configuration is capable of good performance, including:
- copy large file from the NAS to desktop at > 40 MByte/sec
- download from internet speed test site at > 30Mbit/sec
But after use (of various time lengths, typically minutes to hours...), the following happens:
- copy of large file runs 6 - 10 MB/sec
- I can detect a reduced "snappiness" in File Explorer operation;
- web site browsing becomes "painful". Curiously, internet speed tests still show circa 30 Mbit/s download speed.
A restart of the computer generally rectifies the problem (for a time). At all times, the "test tools" on the Intel card report good quality network connections, good quality cable and cable lengths in the 20-30 m range (depending on connection I use) and no problems.
I have connected to the Dlink switch on various ports - makes no difference.
I have another (more powerful desktop) machine using a Realtek LAN card; it experiences no such difficulties and performs well consistently.
I am stumped by this. Can anyone offer any explanation or solution?
I recently installed the PROSet drivers for my I217-V network adapter to get VLAN network connections available to some programs on my laptop. Now, however, when I go to select the appropriate VLAN in the application's network interface selection, I get no options; it isn't registering any adapters on my computer. Is there something wrong with my Windows registry, or does this driver cause these VLAN adapters not to show up properly?
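To narrow down whether Windows itself sees the VLAN adapters (as opposed to the application not listing them), I checked with PowerShell, roughly like this (a sketch; PROSet VLANs normally appear as separate virtual adapters):
# List every adapter Windows knows about, including hidden ones:
Get-NetAdapter -IncludeHidden | Format-Table Name, InterfaceDescription, Status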
Hi,
We are using the Avoton platform, on which four Intel Ethernet Connection I354 ports are present. We are using a Marvell 88E1512 as the external PHY device. We installed the Windows igb drivers (prowinx64) and Ethernet works in Windows. But if we boot into Linux, we cannot find any Ethernet connections. If we run the command "lshw -class network", it shows the message below (all four controllers are unclaimed).
*-network:0 UNCLAIMED
description: Ethernet controller
product: Ethernet Connection I354
vendor: Intel Corporation
physical id: 14
bus info: pci@0000:00:14.0
version: 03
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress cap_list
configuration: latency=0
resources: memory:dff60000-dff7ffff ioport:e0c0(size=32) memory:dffec000-dffeffff
*-network:1 UNCLAIMED
description: Ethernet controller
product: Ethernet Connection I354
vendor: Intel Corporation
physical id: 14.1
bus info: pci@0000:00:14.1
version: 03
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress cap_list
configuration: latency=0
resources: memory:dff40000-dff5ffff ioport:e0a0(size=32) memory:dffe8000-dffebfff
*-network:2 UNCLAIMED
description: Ethernet controller
product: Ethernet Connection I354
vendor: Intel Corporation
physical id: 14.2
bus info: pci@0000:00:14.2
version: 03
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress cap_list
configuration: latency=0
resources: memory:dff20000-dff3ffff ioport:e080(size=32) memory:dffe4000-dffe7fff
*-network:3 UNCLAIMED
description: Ethernet controller
product: Ethernet Connection I354
vendor: Intel Corporation
physical id: 14.3
bus info: pci@0000:00:14.3
version: 03
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress cap_list
configuration: latency=0
resources: memory:dff00000-dff1ffff ioport:e060(size=32) memory:dffe0000-dffe3fff
And when we run the "dmesg" command, we see the following igb-related lines:
.......
.......
[ 1.940641] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.2.13-k
[ 1.940646] igb: Copyright (c) 2007-2014 Intel Corporation.
.......
.......
[ 1.971096] igb: probe of 0000:00:14.0 failed with error -2
[ 1.971398] igb: probe of 0000:00:14.1 failed with error -2
[ 1.971690] igb: probe of 0000:00:14.2 failed with error -2
[ 1.971982] igb: probe of 0000:00:14.3 failed with error -2
....
....
We tried installing the standard igb drivers from Intel, but see the same behaviour.
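For what it's worth, probe error -2 is -ENOENT, so I suspect the driver is failing to find something it expects, perhaps the external PHY. One experiment we plan to try is rebinding the device manually and watching the kernel log for more detail (a sketch using the first port's PCI address from the lshw output above):
# Unbind (may fail if the probe never succeeded; the error can be ignored)
# and rebind the first I354 port, then check the kernel log:
echo 0000:00:14.0 > /sys/bus/pci/drivers/igb/unbind
echo 0000:00:14.0 > /sys/bus/pci/drivers/igb/bind
dmesg | tail -n 20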
Your help on this will be appreciated.
Thanks and Regards,
Younus Ahamad S
We are looking at NICs based on the FM10420 controller. Does this controller support timestamping of only IEEE 1588 frames, or of every frame? Where can I find a datasheet for this controller?