Will a PCIe 3.0 x4 10GbE NIC Work at PCIe 3.0 x1 Bandwidth?

Are you trying to build a DIY server, but don’t know where you can fit all the devices you want? Look, I get it: for NAS applications, you probably want an HBA, most of which are PCIe x8 (let’s not talk about the protocol version for now). Cloud gaming is your thing? Then that x16 slot will most definitely be populated with a graphics card. Want to install a free and robust software-based firewall? You’d need a dual- or quad-port NIC. Oh boy, those PCIe slots are filling up fast, aren’t they? Sadly, there aren’t many consumer ATX boards that have more than 3 full-sized PCIe slots, and those that do can cost you an arm and a leg. Worse yet, most DIY server chassis won’t even support anything bigger than ATX, so E-ATX boards are also out of the question.

What PCIe Configuration to Expect When Shopping for Motherboards

High-end consumer ATX motherboards come with only 3 full-sized PCIe slots, while most come with only 2. The one closest to the CPU will, more often than not, be PCIe x16. The one in the middle, on high-end chipsets, is a second CPU-attached slot that relies on bifurcation, so when it is populated, it and the x16 slot will both run at x8. For the most part, this will not be an issue, unless you are running something like a 3090 Ti. The one furthest down is the chipset PCIe slot. Before Intel 12th Gen, the chipset communicated with the CPU via DMI 3.0 x4, effectively the same speed as PCIe 3.0 x4, so on older Intel chipsets this slot can and will be bottlenecked by the DMI link. On the AMD side, this is much better: ever since PCIe 4.0 was introduced with X570, the chipset talks to the Ryzen CPU over a PCIe 4.0 x4 link, and since only PCIe 3.0 x4 bandwidth is provided to the chipset PCIe slot, the chance of it being bottlenecked is very low.

So, the general configuration on consumer-grade motherboards can be summarized as x8 and x8 from the CPU, plus x4 provided by the chipset. But there’s more to it.
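To put some rough numbers on that chipset bottleneck, here is a quick back-of-the-envelope sketch. These are theoretical maximums only; real-world throughput will be a bit lower due to packet and protocol overhead.

```python
# Compare chipset uplinks against a chipset-fed PCIe 3.0 x4 slot.
# Theoretical figures only; protocol overhead shaves off a bit more in practice.

def pcie_gbps(gt_per_s: float, lanes: int, encoding: float = 128 / 130) -> float:
    """Usable bandwidth in Gbps for a PCIe 3.0/4.0-style link (128b/130b encoding)."""
    return gt_per_s * encoding * lanes

dmi_3_x4  = pcie_gbps(8.0, 4)    # DMI 3.0 x4, same per-lane rate as PCIe 3.0
x570_up   = pcie_gbps(16.0, 4)   # X570's PCIe 4.0 x4 uplink to the CPU
slot_3_x4 = pcie_gbps(8.0, 4)    # a PCIe 3.0 x4 chipset slot

print(f"DMI 3.0 x4 uplink:    {dmi_3_x4:.1f} Gbps")   # ~31.5 Gbps
print(f"X570 PCIe 4.0 uplink: {x570_up:.1f} Gbps")    # ~63.0 Gbps
print(f"PCIe 3.0 x4 slot:     {slot_3_x4:.1f} Gbps")  # ~31.5 Gbps
```

The older DMI uplink has exactly as much bandwidth as the slot itself, so every SATA drive, USB device and NVMe hanging off the same chipset eats into it; X570’s uplink has roughly twice the headroom.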

The PCIe x1 Bandwidth Slot

I will use a Gigabyte X570 motherboard as an example in this section. This is the motherboard layout diagram.

As you can see, most of the stuff I wrote about is true. One x16 slot closest to the CPU, one x8 slot further down, and at the bottom we have an x4 slot. However, there is one PCIe x1 slot in between the x8 and the x4. Based on my research, PCIe 3.0 x1 provides about 8Gbps of bandwidth. So, while it can’t satisfy 10GbE (which requires 10Gbps of bandwidth, by the way), it is still miles better than your on-board 1Gbps, 2.5Gbps or even 5Gbps NIC.
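If you are wondering where that “about 8Gbps” figure comes from: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding. Here is a minimal sketch of the arithmetic:

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so each lane
# carries roughly 8 * (128/130) ~= 7.88 Gbps of usable data, before
# packet/protocol overhead takes a little more.

GT_PER_S = 8.0        # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130  # 128b/130b line-encoding efficiency

x1 = GT_PER_S * ENCODING * 1
x4 = GT_PER_S * ENCODING * 4

print(f"PCIe 3.0 x1: {x1:.2f} Gbps")  # ~7.88 Gbps, just shy of 10GbE line rate
print(f"PCIe 3.0 x4: {x4:.2f} Gbps")  # ~31.51 Gbps, what the NIC was designed for
```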

Therefore, can you use a PCIe 3.0 x4 10GbE NIC in a PCIe 3.0 x1 slot? The answer is “ABSOLUTELY”, and you should definitely try it if that’s the only option. But before you run out and start experimenting, here are a few tips I have learned from trying it myself.

The Application and Use Case

First and foremost, not all PCIe x1 slots are created equal. Some motherboard manufacturers will kindly put in a physical x16 slot that is electrically wired as x1; you can find this kind of setup on ASUS and ASRock boards. In my opinion, this is the best option, since you can put whatever card you want in this slot without having to worry about physical compatibility; it will just run at PCIe x1 bandwidth.

ASUS B660M with 3x Full-sized PCIe Slots
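If you want to confirm what the slot has actually negotiated, on Linux you can read the link parameters straight out of sysfs (or from `lspci -vv`). Here is a small sketch, assuming a Linux host; the PCI address is a placeholder you would swap for your own NIC’s.

```python
# Read the negotiated PCIe link speed and width of a device from sysfs (Linux).
from pathlib import Path

DEVICE = "0000:04:00.0"  # placeholder PCI address; find yours with `lspci | grep Ethernet`

dev = Path("/sys/bus/pci/devices") / DEVICE
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")

# A PCIe 3.0 x4 NIC dropped into an x1-wired slot will typically report
# current_link_width of 1 while max_link_width stays at 4.
```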

Then there’s the X570 kind. It looks like an x1 slot and is wired as such. To effectively utilize this slot, you will have to resort to a PCIe riser card. The riser package includes a PCIe x1 dummy card with a USB-style port on it (only the physical USB connector is used, not the USB protocol), a USB cable, and a breakout board with a full-sized PCIe x16 slot. So, you first plug the dummy card into the PCIe x1 slot, then install your actual device onto the breakout board. In this scenario, you will need a case that can fit more PCIe devices than a normal ATX case does. Mid-towers are often out of the question, and you will probably have to look at cases that support E-ATX, or ones that support vertical GPU mounting, so you can mount the PCIe device in that extra cutout.

PCIe x1 to x16 Riser

Secondly, think about the application. If you are not going to saturate a 10GbE link, like running a simple HDD-based NAS and not something crazy like 10 ZFS vdevs in a striped config (and I believe most people won’t), then this could work for you. But if you are running an all-flash storage array and expect the full 10GbE bandwidth 24/7, then obviously this solution is not going to satisfy your workload.
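To put a rough number on “simple HDD-based NAS”, here is the napkin math, assuming around 200 MB/s of sequential throughput per drive (a generous figure for a modern 7200rpm disk):

```python
# Napkin math: how many HDDs, streaming sequentially in parallel, it takes
# to hit the ~7.88 Gbps of usable bandwidth on a PCIe 3.0 x1 link.

HDD_MBPS = 200          # assumed sequential throughput per drive, MB/s
X1_LIMIT_GBPS = 7.88    # usable PCIe 3.0 x1 bandwidth

hdd_gbps = HDD_MBPS * 8 / 1000           # ~1.6 Gbps per drive
drives_needed = X1_LIMIT_GBPS / hdd_gbps

print(f"One HDD:                      ~{hdd_gbps:.1f} Gbps")
print(f"Drives to hit the x1 ceiling: ~{drives_needed:.1f}")
# Roughly five drives all reading sequentially at full tilt; a small HDD pool
# doing everyday NAS work will rarely get anywhere near that.
```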

Real-World Performance?

I ran a few tests with iperf3, using a Brocade ICX-6610 switch, a single-port Mellanox ConnectX-3 10GbE NIC, a Brocade 10GbE transceiver, and an OM4 cable I bought from fs.com. Both test rigs are VMs running on Proxmox, with paravirtualized NICs, not hardware passthrough.

This is the sender side, running the simplest test possible. It is not bad, averaging above 6Gbps, or about 720MB/s.

You may have also noticed the high “Retr” count, which indicates TCP packets that had to be re-transmitted. I suspect this has something to do with the limited bandwidth that PCIe 3.0 x1 provides, since the negotiated link speed is 10Gbps: if the interface can only move about 8Gbps over the PCIe link, but the system is trying to stuff data into it at 10Gbps, there will be congestion as the interface lags behind the system.
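If you want to reproduce this kind of test, iperf3 can emit JSON with the `-J` flag, which makes it easy to pull out the average throughput and the retransmit count. A minimal sketch; the server address is a placeholder, and it assumes `iperf3 -s` is already running on the far end:

```python
# Run an iperf3 client in JSON mode and summarize throughput and retransmits.
import json
import subprocess

SERVER = "10.0.0.10"  # placeholder: address of the machine running `iperf3 -s`

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "30", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

sent = report["end"]["sum_sent"]  # sender-side totals for a TCP test
print(f"Average throughput: {sent['bits_per_second'] / 1e9:.2f} Gbps")
print(f"Retransmits:        {sent['retransmits']}")
```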

Conclusion

With all that said, if you are running out of PCIe slots, and your motherboard happens to have a PCIe x1 slot that you have no use for, this is definitely something worth trying out. Old enterprise hardware is cheap; building a 10GbE home network can sometimes even be cheaper than running 2.5GbE. A single decent 2.5GbE managed switch can cost upwards of $300, and retrofitting 2.5GbE network cards into other computers/workstations is also considerably more expensive than buying used Mellanox 10GbE cards. If either card ends up in a PCIe x1 slot anyway, why not go with the 10GbE one and get 6Gbps, rather than 2.5Gbps?