Your CPU cores will cost you extra in Microsoft Server per-core licensing for anything above 16 cores per host. You are looking at 20 cores, so your Microsoft licensing costs are 25% higher than they could be.
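To make the per-core math concrete, here's a minimal sketch, assuming Windows Server 2016's standard minimums (8 core licenses per processor, 16 per host); any actual price would just scale with the license count:

```python
# Sketch of the Windows Server 2016 per-core licensing math.
# Assumptions: the standard minimums of 8 core licenses per CPU
# and 16 per host; actual cost scales with the resulting count.

def core_licenses_needed(cpus: int, cores_per_cpu: int) -> int:
    """Licensable core count: at least 8 per CPU, 16 per host."""
    per_cpu = max(cores_per_cpu, 8)
    return max(cpus * per_cpu, 16)

baseline = core_licenses_needed(cpus=2, cores_per_cpu=8)   # 16 cores
proposed = core_licenses_needed(cpus=2, cores_per_cpu=10)  # 20 cores
print(f"Licensing cost increase: {proposed / baseline - 1:.0%}")  # 25%
```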
How many VMs do you plan to run? That's a lot of low-performance capacity; you can't run many VMs on it effectively. Please describe your expected workloads. Looking at the system overall, and assuming general workloads, I would say your storage is too slow and you have too much CPU. I would look at a larger number of drives, and either 8-core CPUs or a single CPU.
@kevinmhsieh Kevin, nothing is set in stone yet. I can go with 1 CPU x 10 cores or with 2 x 8 cores. And this is true only if we upgrade to Windows Server 2016 with its per-core licensing, which we are not doing currently. DC, file server, Exchange (300 users with potential to expand to 500), terminal server, and a couple of Linux machines. I think I'll be fine, but I can throw in some SSD drives as well. @Toby - Thanks Toby, since then 512e disks are supported, but I'll email them directly and ask, or just stick with the 512n version to be on the safe side, which means Seagate drives.
Hi StorageNinja, yes I read that 6.5 is required, so that's why I chose this particular motherboard, which is on the HCL and supports 6.5. What about the 8 x 6TB drives in RAID 10, will that be slow as well? The current datastore is on 10K SAS drives. If that is too slow, then my other option is 3 x Samsung PM863 3.8TB in RAID 5.
If I mix SSD drives and 7.2K SAS drives, obviously in different arrays, on the above RAID controller, would there be a problem or bottleneck? If I put 4Kn drives on the RAID controller, will the controller translate them to 512n and present that to VMware? Jackal077 wrote: Hi StorageNinja, yes I read that 6.5 is required, so that's why I chose this particular motherboard, which is on the HCL and supports 6.5. What about the 8 x 6TB drives in RAID 10. Will it be that slow again? The current datastore is on 10K SAS drives. Digs for car analogy.
A 7200RPM drive can only issue about 100-120 sustained IOPS (disk commands per second). Note that as drives get bigger this doesn't really change much. Now, that's assuming it's operating at deep queue depth (and terrible latency). Realistically, at acceptable latency for databases, you only get about 25 disk commands per second. As you add more data to the environment, assuming performance demand scales with capacity, adding more drives doesn't really change much unless you have a crazy data skew.
Now, a RAID 10 of 8 drives means: if 100% reads, you get at best 200 IOPS at 'passable latency'; if 100% writes, you get 100 IOPS at 'passable latency'. For comparison, I can easily push 40,000 IOPS on the laptop I'm typing this on, with its single drive. You are asking someone who's typing from a Ferrari whether attaching 8 lawnmowers together will be slow :)
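To put the lawnmower math in one place, here's a minimal sketch using the figures from this post (the ~25 IOPS per 7.2K drive at acceptable latency is the estimate above, not a datasheet number):

```python
# Back-of-envelope IOPS for a RAID 10 set of 7.2K drives, using the
# ~25 IOPS/drive "acceptable latency" figure from the post above.

IOPS_PER_DRIVE = 25  # 7.2K drive at latency a database would tolerate

def raid10_iops(drives: int, read_fraction: float) -> float:
    """Reads hit every spindle; each write costs two I/Os (mirror)."""
    reads = drives * IOPS_PER_DRIVE * read_fraction
    writes = drives * IOPS_PER_DRIVE / 2 * (1 - read_fraction)
    return reads + writes

print(raid10_iops(8, read_fraction=1.0))  # 200.0 -- 100% reads
print(raid10_iops(8, read_fraction=0.0))  # 100.0 -- 100% writes
```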
Jackal077 wrote: If that is slow then my other option is 3 x Samsung PM863 3.8TB. Those drives are REALLY hard to find right now (like, I know some people who might do things they are not proud of to get them). Supply chain on TLC 3D NAND is constrained for that drive till next year. Jackal077 wrote: PM863 3.8TB in Raid 5. It will be a lot faster; just note these TLC 'read optimized' drives don't have great consistent performance under heavy writes. If you're not a write-heavy environment this may not be a big deal, just be aware of it. A DPACK to look at what you are doing now might help size this.
Jackal077 wrote: If I mix SSD drives and 7.2K SAS drives, obviously in different arrays, on the above RAID controller, would there be a problem or bottleneck? It will be a bottleneck for what you put on the 7.2K :) In all seriousness, the thing to watch out for is if SAS and SATA are mixed and you're using a SAS expander (i.e. more than 8 drives per controller). You'll be invoking the SATA Tunneling Protocol and stuff can get 'weird'. Note that mixing speeds (12Gbps SAS and 6Gbps SATA) running at native speeds can cause a LOT of problems. The newest firmware from Dell disables SAS buffering/DataBolt to prevent this (and everything drops to the lowest interface speed). Jackal077 wrote: If I put 4Kn drives on the RAID controller will the controller translate them to 512n and present that to VMware? No.
And again, this is an issue that is really protecting you from yourself, as 4Kn magnetic drives are so incredibly slow they should only be used for the coldest of data. Note 4Kn SSDs are always supported (the DRAM buffers will resolve any block-alignment issues). StorageNinja wrote: jackal077 wrote: Hi StorageNinja, yes I read that 6.5 is required, so that's why I chose this particular motherboard, which is on the HCL and supports 6.5. What about the 8 x 6TB drives in RAID 10.
Will it be that slow again? The current datastore is on 10K SAS drives. Digs for car analogy. A 7200RPM drive can only issue about 100-120 sustained IOPS (disk commands per second). Note that as drives get bigger this doesn't really change much.
Now, that's assuming it's operating at deep queue depth (and terrible latency). Realistically, at acceptable latency for databases, you only get about 25 disk commands per second. As you add more data to the environment, assuming performance demand scales with capacity, adding more drives doesn't really change much unless you have a crazy data skew. Now, a RAID 10 of 8 drives means:
if 100% reads, you get at best 200 IOPS at 'passable latency'; if 100% writes, you get 100 IOPS at 'passable latency'. For comparison, I can easily push 40,000 IOPS on the laptop I'm typing this on, with its single drive. You are asking someone who's typing from a Ferrari whether attaching 8 lawnmowers together will be slow :) I know that SATA 7.2K is slow and SSDs are blazingly fast :) but I'll tell you where I'm coming from and where the confusion lies. Every RAID calculator, given 8 disks in RAID 10, shows an 8x read and 4x write speed gain (something like 450 min to 600 max IOPS), and the more disks you use the better.
Now combine this with the cache of the RAID controller, and I thought it would be sufficient for this particular case. I know SSD drives have crazy-high IOPS, but if they were cheap enough we wouldn't be having this conversation :)
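For comparison, here's roughly what those RAID calculators compute next to the latency-constrained view from earlier in the thread; the 75 IOPS/drive figure is an assumed deep-queue number, not a measurement:

```python
# What a typical RAID calculator reports for RAID 10 of N drives
# (N x read gain, N/2 x write gain) versus the latency-constrained
# view from earlier in the thread. The gap comes entirely from the
# per-drive IOPS assumption, not from the RAID math.

def raid10_gains(drives: int, iops_per_drive: int) -> dict:
    return {"read": drives * iops_per_drive,
            "write": drives // 2 * iops_per_drive}

print(raid10_gains(8, 75))  # calculator view: {'read': 600, 'write': 300}
print(raid10_gains(8, 25))  # latency view:    {'read': 200, 'write': 100}
```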
Also, previous versions of Exchange, like 2007, used to be hungry for IOPS, but the latest versions are not; even Microsoft recommends enterprise-grade 7.2K SATA drives in JBOD if you go with HA. StorageNinja wrote: jackal077 wrote: If that is slow then my other option is 3 x Samsung PM863 3.8TB. Those drives are REALLY hard to find right now (like, I know some people who might do things they are not proud of to get them). Supply chain on TLC 3D NAND is constrained for that drive till next year. What alternative drive would you recommend? Jackal077 wrote: PM863 3.8TB in Raid 5. It will be a lot faster; just note these TLC 'read optimized' drives don't have great consistent performance under heavy writes.
If you're not a write-heavy environment this may not be a big deal, just be aware of it. A DPACK to look at what you are doing now might help size this. Not a heavy-write environment.
Jackal077 wrote: If I mix SSD drives and 7.2K SAS drives, obviously in different arrays, on the above RAID controller, would there be a problem or bottleneck? It will be a bottleneck for what you put on the 7.2K :) jackal077 wrote: If I put 4Kn drives on the RAID controller will the controller translate them to 512n and present that to VMware? No. And again, this is an issue that is really protecting you from yourself, as 4Kn magnetic drives are so incredibly slow they should only be used for the coldest of data. Note 4Kn SSDs are always supported (the DRAM buffers will resolve any block-alignment issues). If I add SSD drives to the RAID controller, is there anything I should be aware of? Consumer-grade SSDs should be overprovisioned by 25-30%, right, but does the same hold for enterprise-grade SSDs, given that the LSI does not support TRIM? 'In all seriousness, the thing to watch out for is if SAS and SATA are mixed and you're using a SAS expander (i.e. more than 8 drives per controller).
You'll be invoking the SATA Tunneling Protocol and stuff can get 'weird'. Note that mixing speeds (12Gbps SAS and 6Gbps SATA) running at native speeds can cause a LOT of problems. The newest firmware from Dell disables SAS buffering/DataBolt to prevent this (and everything drops to the lowest interface speed).' Are you talking in general, or specifically about the MegaRAID SAS 9361-8i controller? If I remember correctly, Dell uses rebranded LSI cards as their PERC controllers. Eventually, further down the line, I may use an expander, so in that case the best option is to put them on separate RAID controllers.
Does it matter whether I'm expanding the SAS drives or the SATA drives? I mean, will this problem appear only if I'm expanding the SATA drives, with no problem if I'm expanding the SAS drives?
Kevinmhsieh wrote: I have suffered through what can happen if storage can't provide enough low-latency IOPS. Rebooting a single VM would be quick; trying to boot 20 at once would take 45+ minutes for a single VM. My hyperconverged platform, from the largest vendor, choked on Exchange 2010 so badly that Outlook would disconnect. You do not want to starve your VMs of IOPS. Instead of a 6- or 8-drive system, how about 24 x 2TB drives?
You will have a much better chance of a successful deployment. Are you talking about 24 x 7.2K SAS drives or 24 x 10K SAS drives? Looking at this table, all parameters look the same regardless of whether it is 2TB or 6TB. I'll run the price numbers and decide, but I'm leaning toward a mix of SSD and 7.2K SAS drives. Exchange on 7.2K drives was done with 1-2TB drives, not 8TB ones; the IOPS density is effectively 1/8th (a rough sketch below puts numbers on this). Also, as others have mentioned, it's reboot storms and other problems that will sneak up on you with the big drives. TRIM isn't supported, but enterprise-grade drive overprovisioning makes up for it.
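Putting rough numbers on the IOPS-density point for the two proposed layouts (the 75 IOPS per 7.2K spindle is the same assumed nominal figure as above; capacities are raw, ignoring RAID overhead):

```python
# IOPS per TB for the two layouts: the per-spindle speed is the same,
# so bigger drives spread the same total IOPS over far more data.

IOPS_PER_DRIVE = 75  # assumed nominal 7.2K figure, not a measurement

for drives, tb in [(24, 2), (8, 6)]:
    capacity = drives * tb
    iops = drives * IOPS_PER_DRIVE
    print(f"{drives} x {tb}TB: {iops} IOPS / {capacity}TB "
          f"= {iops / capacity:.1f} IOPS per TB")
# 24 x 2TB: 1800 IOPS / 48TB = 37.5 IOPS per TB
# 8 x 6TB: 600 IOPS / 48TB = 12.5 IOPS per TB
```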
Consumer SSDs lack power-loss protection; I'd stick with enterprise. The PM863a is supposed to replace the regular PM863, but in the meantime Intel and SanDisk have reasonably priced stuff you can find (the S35xx is Intel's capacity line). StorageNinja wrote: Couple thoughts. Note 6.5 is required for 512e support. 8TB drives are CRAZY slow; IOPS per GB, it's borderline tape.
Do NOT run virtual machines, Exchange/SQL off something that slow. What about 1.8TB 10K 2.5" 512e drives behind the MegaRAID controller with VMware 6.5?
This is supposed to be supported now, as I think the logical volume can be configured as 512n in MegaRAID. What sort of performance drop would I be looking at in real-world VMware use, because of the read-modify-write cycles the drives have to do? According to one comparison with 4K-native drives, it could be anywhere from 14% to 50% depending on block size.
Would this be even slower than a similarly sized 7.2K 512n drive? Or somewhere in between? Too bad there aren't any 1.8TB 10K 2.5" 512n drives around.
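To illustrate the read-modify-write penalty being asked about, here's a toy model of a 512e drive (4K physical sectors exposing 512-byte logical sectors); this is a simplification for intuition, not actual drive firmware behavior:

```python
# Toy model of a 512e drive: 4K physical sectors exposing 512-byte
# logical sectors. Any write not aligned to a full 4K physical sector
# forces a read-modify-write: read 4K, patch 512 bytes, rewrite 4K.

PHYSICAL = 4096  # bytes per physical sector on a 512e drive

def write_cost_512e(offset: int, length: int) -> str:
    aligned = offset % PHYSICAL == 0 and length % PHYSICAL == 0
    return "1 write" if aligned else "1 read + 1 write (RMW)"

print(write_cost_512e(0, 4096))   # aligned 4K write -> 1 write
print(write_cost_512e(512, 512))  # unaligned 512B   -> RMW
print(write_cost_512e(0, 512))    # sub-sector write -> RMW
```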