Ins and outs of Blade Centres… or should that be centers?

As you may have noticed from our dedicated server home page http://www.veber.co.uk/dedicated-server-hosting-hardware, we at www.veber.co.uk use Blade Servers. It's a snippet of information that's easy to let slip past unnoticed, so let me give you a brief glimpse into our infrastructure, our thought process when providing servers like this, and why we believe Blade Servers make sense.

Here’s a picture of one of the first blade chassis we put in:

OK – before we start, we need to think about the prerequisites for providing a Blade Server.

First up is space. Our racks at our Tier 4 datacentre are 42U – 42 rack units of vertical space. We use APC NetShelter SX racks; they’ve proved reliable, easy to install and, importantly (yes, we know this sounds stupid), they have stickers down the side marking each U number. Each rack is 600mm wide and 1070mm deep – so that’s the space you have to play with when thinking about servers.

So, once you have this space, all ready for your dedicated servers – what do you put in it? In days gone by, you’d stick desktop machines on shelves – you’d probably manage five shelves, with four or five servers per shelf – so roughly 25 servers per rack. Rubbish density. But then people started to install rack mounted servers…

Rack mount servers come in many sizes, the most common being 1U and 2U in height, so in theory you can fit either 42 or 21 servers in a cabinet. Unfortunately it’s not that simple, because first you need to connect yourself up to a switch – that’ll be 1U, please. Actually, you want two switches with independent uplinks – that’ll be another 1U, please. That brings us to the next issue – power. You can use either 2U power bricks or zero-U power strips (i.e. down the sides of the cabinet) – we have a mix of these, and have settled on zero-U to save space; they’re also easier to replace if one goes wrong. The Raritan strips we use have twenty outlets each, but your redundant 2U of switches have redundant power supplies, so you need four power leads for those.

Before you ask – we don’t take shortcuts in the datacentre… it just doesn’t work… so sticking that power doubler in there is just a big NO NO.

So, we’re down to 36 available power ports (two strips of twenty, minus the four for the switches) – and I’m afraid we need two more, for redundant power to the IP KVM out-of-band management. This is where we give customers the ability to control their dedicated server from BIOS level upwards. More often than not it’s a life saver – yes, it’s an expensive option and one customers don’t use that often, but it has saved countless trips to the datacentre and countless phone calls to the operations department at stupid o’clock when you’re in a hurry to get a server rebooted or see what it’s doing.

34

That is the number of servers you can fit in a 42U cabinet. Not as many as you’d hoped – but when you do things properly, these overheads are easy to overlook.
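If you like to see the arithmetic laid out, here’s a quick sketch (Python, purely illustrative) using the figures above – a 42U rack, two 1U switches, two twenty-outlet power strips, four leads for the switches and two for the IP KVM:

```python
# Rack-mount density in a 42U cabinet, using the figures described above.

RACK_UNITS = 42          # APC NetShelter SX height
SWITCH_UNITS = 2         # two 1U switches with independent uplinks
PDU_OUTLETS = 2 * 20     # two zero-U Raritan strips, twenty outlets each

SWITCH_LEADS = 4         # the two switches have redundant power supplies
KVM_LEADS = 2            # redundant power for the IP KVM

# Space left for servers once the switches are racked (1U servers assumed).
space_limit = RACK_UNITS - SWITCH_UNITS                 # 40

# Outlets left once the switches and out-of-band KVM are fed
# (each rack-mount server takes a single, non-redundant lead).
power_limit = PDU_OUTLETS - SWITCH_LEADS - KVM_LEADS    # 34

servers = min(space_limit, power_limit)
print(f"Rack-mount servers per cabinet: {servers}")     # -> 34
```

Power, not space, is the limit – which is exactly why the shortcut of doubling up outlets is so tempting, and so firmly off the table.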

This brings us to Blade Servers. First up, we use the Dell M1000e chassis – one of the best in class for usability and power usage, and it offers many features that are “must haves” in our industry, especially when providing dedicated servers to customers: it enables them to do their job, rather than ours.

First, size: each chassis is 10U – so four chassis per rack, meaning 40U of chassis and 2U of connectivity switching.

Each chassis has six high-output power supplies, which means 24 power ports across the four chassis; add the four for switching and you need 28 in total – we’re not left short there. Then we come to the number of servers per chassis, which is sixteen – giving us a total of…

64

Servers per cabinet – nearly double the density of the rack-mount setup, with redundant power thrown in (which you don’t get with rack mounts unless you double your power strips) and advanced switching (each chassis has six slots on the back into which you can install Cisco, Dell or Brocade switch modules, linking in with our infrastructure).
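The same sort of sketch for the blade numbers (again purely illustrative, using the figures above):

```python
# Blade density in the same 42U cabinet, using the figures described above.

RACK_UNITS = 42
SWITCH_UNITS = 2            # 2U of connectivity switching
CHASSIS_UNITS = 10          # one Dell M1000e chassis
BLADES_PER_CHASSIS = 16
PSUS_PER_CHASSIS = 6        # six high-output power supplies per chassis
SWITCH_LEADS = 4            # redundant power for the two switches
PDU_OUTLETS = 2 * 20        # the same two twenty-outlet strips

chassis = (RACK_UNITS - SWITCH_UNITS) // CHASSIS_UNITS       # 4 chassis
outlets_needed = chassis * PSUS_PER_CHASSIS + SWITCH_LEADS   # 28 outlets

assert outlets_needed <= PDU_OUTLETS   # 28 <= 40 -- power isn't the limit here

servers = chassis * BLADES_PER_CHASSIS
print(f"Blade servers per cabinet: {servers}")               # -> 64
```

This time the rack runs out of space before it runs out of power – the opposite of the rack-mount case.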

In addition to the physical benefits, each chassis is controlled centrally, and each server comes as standard with an iDRAC (integrated Dell Remote Access Controller) – Dell’s version of out-of-band management, where the customer can see their dedicated server at BIOS level and control power on/off too.

So, that’s why we use blade servers – ease of management, and higher density.

One more thing – £££

You might be doing the sums in your head at this point – how can rack servers not be cheaper, and how can enterprise blade servers be worthwhile? Well, if you do the sums (which we have), you quickly realise that the costs stack up with rack servers, and it’s not as cut and dried as the initial figures suggest.
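To give a feel for how that comparison works – these are hypothetical placeholder figures, not our actual pricing, so swap in your own quotes – the trick is that the shared kit gets spread across far more servers:

```python
def cost_per_server(shared_kit: float, per_server: float, servers: int) -> float:
    """Spread the shared infrastructure cost across every server it supports."""
    return shared_kit / servers + per_server

# Hypothetical placeholder figures for illustration only -- not real pricing.
rack_mount = cost_per_server(
    shared_kit=2 * 1500 + 2 * 1200 + 3000,      # two switches, two PDUs, IP KVM
    per_server=1800,                            # a 1U rack-mount server
    servers=34,
)
blade = cost_per_server(
    shared_kit=4 * 6000 + 2 * 1500 + 2 * 1200,  # four chassis, switches, PDUs
    per_server=1500,                            # a blade (iDRAC built in)
    servers=64,
)
print(f"Per-server cost: rack-mount ~£{rack_mount:.0f}, blade ~£{blade:.0f}")
```

Run the same model with real quotes and you’ll see why the headline price of a blade chassis isn’t the whole story.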

There’s a long-running argument here at Veber over commodity hardware vs. branded, and blade vs. rack-mount is always part of it. Our conclusion: use the right tool for the job, and the end results should show it.

If you have any questions or input on the above, we’d love to hear from you – send an email to contactus@veber.co.uk, or even pick up the phone for a chat.

 
