The EMC experience:

Here at Veber we were lucky enough to get our hands on a VNXe3300 from EMC, with an additional disk shelf, for a short review (which we forgot to take photos of!!). Our goal was to see what this box could do for us in our VMware infrastructure.

When the box arrived it was on a pallet, BOLTED to the base… nice to see EMC take their packaging seriously – often when we get boxes like this delivered they’re only in a box with some bubble wrap and a bit of polystyrene.

When racked, it looks really nice – something like this:

Software Features:

Most of my time has been spent testing the software features and failover aspects of this box… I’m responsible for testing our infrastructure and making sure it’s robust, so I like to do my due diligence. The box has a nice, clean and intuitive configuration interface, available from any web browser, allowing the storage administrator to quickly create new – or modify existing – data volumes, disk pools or storage server configurations. EMC really made it simple – in a few clicks you can set up your disk array and, for example, create a VMFS data store for your VMware infrastructure. It simplifies storage management even further with its VMware integration: when you create a data store for your vSphere environment, it automatically sets up storage access on the ESX hosts you granted access to – whether you created an NFS or an iSCSI data store. This is quite handy when you have a big cluster of ESX hosts that need shared storage. You can sit back slurping your cup of tea while your vSphere goes “Add Internet SCSI send targets”, “Rescan HBA”…
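For the curious, here’s roughly what the VNXe’s VMware integration is doing for you behind the scenes – the equivalent manual `esxcli` steps you’d otherwise run on each ESX host. The adapter name, addresses and share names below are placeholders for illustration, not taken from our actual setup:

```shell
# Point the software iSCSI adapter at the array's send-target portal
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.168.10.20:3260

# Rescan the adapter so the new LUN shows up
esxcli storage core adapter rescan --adapter=vmhba33

# For an NFS data store instead, mount the export directly
esxcli storage nfs add --host=192.168.10.20 \
    --share=/vnxe_datastore --volume-name=vnxe-nfs-ds
```

Multiply that by every host in the cluster and you can see why having the array do it for you is a time-saver.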

Should you have any type of hardware failure, the VNXe has a convenient way to let you know which part needs to be investigated, and where, in order to resolve the problem. It shows images of the unit’s front and rear and marks the faulty component in red, with a popup label telling you what the issue is. There is no need to go to the event log, as it shows straight away what the problem is when you move your mouse over the component marked in red: “Disk6 has been removed. Disk11 is re-synchronizing with the system.” I’d really like to show you some screen grabs of this, but I forgot while writing the review and we had to send the unit back to EMC, as it was an eval unit.

When you are configuring your NFS or iSCSI server parameters, it automatically places the storage server onto the relevant storage controller. You can override any of its automated selections, even down to which Ethernet link(s) your storage server uses and in what VLAN.

It has a convenient dashboard showing available and used storage and providing direct access to event logs, performance statistics, health reports, volume management and configuration, so there is no need to click between tabs (not that that would be difficult) – you have straight access to the most common tasks. You can even reach EMC’s support directly from the management interface, which makes it all the more convenient to use. It really is one of the easiest storage control interfaces I’ve used.

Additional features:

The VNXe 3300 supports iSCSI, NFS and CIFS. It offers thin provisioning and de-duplication to store data efficiently, plus snapshots and data replication for data protection purposes. There are some restrictions though: I was not able to use de-duplication for iSCSI at all (it was available for NFS), and I could not use thin provisioning for general iSCSI volumes – though it worked fine with VMware data stores.



The 15-bay storage controller unit has two storage controllers, each with six 1Gbps Ethernet ports (1 management, 1 service, 4 for storage traffic).

It has redundant power supplies – with a removable battery cache unit underneath – nice idea, EMC…

Each controller also has two 6Gbps SAS uplink ports for additional disk shelves.

It’s worth noting that we took the additional disk shelf to give ourselves a bit of extra breathing room… the shelves have 15 drive bays with redundant power supplies and redundant controllers, each with two 6Gbps SAS connectors, to make sure there is no single point of hardware failure.

Of course both modules are removable and have user-replaceable parts (basically, anything orange you can pull!).

Both the storage controller and the disk shelf are 2U rack-mountable units. The fronts of the units have a stylish cover with a blue and an orange light; if the orange one is lit, that indicates an underlying problem. (Again, I forgot to take photos while it was in and running, but I’ll update when we get ours in and running, with the correct disk config.)

The inside of the unit is quite interesting: there is an STEC SSD flash drive slotted down the side, three slots for RAM (all filled) and a single Xeon CPU – not exactly sure what spec, but I’m sure we could find out if we really needed to…


As far as I can tell this system has quite solid fail-over features. I tried different things to render the system unresponsive (within reason, of course): pulling out and swapping disks around, removing uplink SAS cables, failing network ports and power supplies – even all at the same time. I would say that I’ve done to this unit the worst that a stupid engineer could manage (short of powering it off) and it handled pretty much everything I threw at it.

The only thing I noticed was that in one instance the graphical interface showed nearly everything in red, but the system was still serving our VMware cluster and the virtual machines were able to read and write their disks. Whenever there was any kind of failover, the transition time was about five seconds, so our infrastructure did not even notice it. I had set up multipathing for our data stores, and VMware noticed the change in the available paths but had no problem at all.
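If you want to watch this happening from the host side during a failover test, the standard `esxcli` multipathing commands are handy. The device identifier below is a placeholder, not from our actual array:

```shell
# List the paths vSphere currently sees to each LUN – failed paths
# show up as dead here during a controller or link failure
esxcli storage nmp path list

# Optionally set a VNXe LUN to round-robin across the available paths
esxcli storage nmp device set \
    --device=naa.600601601234567890 --psp=VMW_PSP_RR
```

Running the path list before and after pulling a cable makes the five-second failover easy to see for yourself.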


Not a bad bit of kit – really good failover features, some other interesting features that may be useful down the line (like de-dupe), and hardware-wise it is very solidly built; the software works well too.

Price is the sticky point – it’s fairly expensive (I’m not going to give you the actual costs, but it’s a bit more than, let’s say, a Dell PowerVault MD3620i). When comparing it to some of the other options, though, it’s really not that bad. We’ve been trialling lots of boxes this year, and this is the one we think we’re going to be using moving forward.

If you have any questions about my review, or you want to host your own VNXe online with us 😉 please feel free to email me and I’ll do my best to answer your questions.
