Technology evolves with each passing day, and evolution brings change. Back in the day, rack and tower servers were all anyone needed; then came the era of blade servers, and nowadays we are seeing multi-node servers hit the market. Will multi-node servers overtake the server market? It is too early to tell, but the technology certainly looks promising.
However, just because a piece of tech seems promising isn’t reason enough to make the change, especially when you’re dealing with something as vital as your servers. Lately, many tech enthusiasts have been discussing the future of servers – what we should look forward to and what our number one concern should be going forward.
As of now, the number one concern relates to blade servers, and it mostly revolves around a single question – how will blade servers cope with rising CPU thermal design power (TDP)? To anyone who knows anything about tech, it is quite obvious that we are nearing the point where blade servers will come up short at keeping things cool. Air cooling can only get you so far once you tackle a CPU-heavy workload in such a small, enclosed environment. So, what do you do?
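To make the TDP concern concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it – node counts, TDP values, the per-node overhead allowance – is an illustrative assumption, not a vendor spec; the point is just how fast per-chassis heat load grows as CPU TDP climbs.

```python
# Back-of-the-envelope chassis heat-load estimate.
# All figures are illustrative assumptions, not vendor specifications.

def chassis_heat_watts(nodes, cpus_per_node, cpu_tdp_w, overhead_w_per_node=150):
    """Rough heat output of one chassis: CPU TDP plus a flat
    per-node allowance for RAM, storage, NICs, and fans."""
    return nodes * (cpus_per_node * cpu_tdp_w + overhead_w_per_node)

# A hypothetical densely packed blade chassis with 16 dual-socket blades
# versus a 2U multi-node box with 4 dual-socket nodes.
for tdp in (150, 225, 300, 400):
    blade = chassis_heat_watts(nodes=16, cpus_per_node=2, cpu_tdp_w=tdp)
    multi = chassis_heat_watts(nodes=4, cpus_per_node=2, cpu_tdp_w=tdp)
    print(f"{tdp:>3} W TDP -> blade chassis ~{blade/1000:.1f} kW, "
          f"2U multi-node ~{multi/1000:.1f} kW")
```

Under these assumed numbers, a 400 W TDP pushes the blade chassis into the double-digit-kilowatt range that air cooling struggles with, while the 2U box stays at a fraction of that – which is the heart of the concern.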
What Are Your Options Going Forward?
Well, you either adapt or switch. As far as we’re concerned, adaptation is not a likely scenario, as it would involve switching to liquid cooling, which would be a headache in an already tight blade server environment. Neither users nor manufacturers will be willing to make that happen, so we don’t really consider it a viable option.
On the other hand, you could choose a lower-wattage design, but that would cost you efficiency and performance, and we really don’t think anyone is going to settle for that. That only leaves switching to another configuration. So, what do you do? Unless you have a dedicated warehouse to act as your server room, there is no way anyone is going back to the rack config. That pretty much leaves us with one option – multi-node servers.
Multi-node servers could just be the future of datacentres, but as we’ve said, just because they seem promising doesn’t mean that anyone would be too eager to make the switch. Not unless they’re fully aware of what they’re getting in return.
Since there is not a lot of information on multi-node servers anywhere, we thought we’d give you a hand by outlining some of the differences between multi-node and blade servers – and possibly show you why multi-nodes would be an excellent alternative once CPUs become too hot for blades to handle.
What Are Multi-Node Servers?
Multi-node servers aren’t that different from anything we’ve seen before, which is why most people can’t really tell the difference. Essentially, this is just an alternative form factor to existing technology, with some major improvements and changes.
For example, let’s take a look at a multi-node 2U system that can hold one to four compute nodes. A server such as this is fairly similar to a regular blade config in that both share similar power and cooling properties. However, the key difference lies in the fact that each node in a multi-node config has its own storage and networking.
What Are Some Of The Best Multi-Nodes?
The market still isn’t flooded with multi-nodes, so it might be easier than ever to find a good one. Some of the best, according to COTT Servers, are:
- Cisco UCS 4200 w/C125
- Dell Technologies PowerEdge C6525
- Dell Technologies PowerEdge C6520
- HPE Apollo 2000 Gen10 Plus
Naturally, you have plenty of other options at your disposal, but when you look at the numbers and user satisfaction, these tend to come out on top nine times out of ten.
The Main Differences Between Multi-Node And Blade Servers
The 2U form factor is one of the most common in multi-node systems, and that is not by chance. This form factor offers a smaller “fault domain”, as the tech-heads like to call it. To put it in semi-layman terms: if you encounter a common issue like a power or networking failure, only some of the servers are affected. With blade servers, on the other hand, a mishap like that takes down every server in the chassis.
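The fault-domain point boils down to simple arithmetic. Here is a minimal sketch – with entirely hypothetical fleet and chassis sizes – of how much of a fleet one chassis-level failure takes offline in each design:

```python
# Illustrative fault-domain comparison; all counts are assumptions.
# When a shared chassis component (power, midplane) fails, every
# server housed in that chassis goes down with it.

def servers_lost(servers_per_chassis):
    """Servers taken offline by a single chassis-level failure."""
    return servers_per_chassis

fleet = 128  # total servers in a hypothetical fleet

blade_loss = servers_lost(16)     # e.g. 16 blades per chassis
multinode_loss = servers_lost(4)  # e.g. 4 nodes per 2U box

print(f"Blade:      {blade_loss}/{fleet} servers "
      f"({blade_loss / fleet:.1%}) lost per chassis failure")
print(f"Multi-node: {multinode_loss}/{fleet} servers "
      f"({multinode_loss / fleet:.1%}) lost per chassis failure")
```

With these assumed counts, one failed blade chassis costs you 12.5% of the fleet, while one failed 2U multi-node box costs about 3% – the smaller fault domain in numbers.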
Server Compatibility And Options
This is not something many people think about, but multi-nodes have a major advantage when it comes to compatibility with high-end CPUs. For instance, you can’t really find an excellent blade server for AMD EPYC. Furthermore, there are no blade servers built with single-socket configurations in mind.
Now, AMD is the industry leader in single-socket market share, and yet we can’t find any blade servers with AMD – and not for lack of trying. There just aren’t any. On the other hand, the multi-node market is full of systems built for EPYC. So, that’s something to wrap your mind around.
As we’ve said, switching to liquid cooling in a blade configuration would be virtually impossible. We say “virtually” because there is always a way, but truthfully speaking, none of the current blade servers support liquid cooling, and that’s probably never going to change. Multi-node servers, on the other hand, do. And when you factor in rising CPU TDPs, you can see why this currently insignificant difference could prove to be a major one in the future.
The final major difference lies in connectivity. Depending on how you look at it, I/O on a multi-node can be either an advantage or a disadvantage, and we’re not only talking about cable management. With a multi-node, every server has its own I/O, which opens up a lot of possibilities but can also cause a lot of headaches.
Unlike blade servers, which use the same fabric across all servers, a multi-node lets you mix and match. Naturally, this level of versatility also means more room for failure, not to mention cable complexity, but if options are what you’re looking for, this trade-off might just be worth it.
In essence, both of these form factors do the same thing, and as of right now, neither can claim a significant advantage over the other. In today’s climate, a data centre would work just the same with either server infrastructure. However, in the not-so-distant future, the tables might turn. So, we hope this outline of the differences between the two comes in handy when the time comes.