Cool Info on Configuring Alarms in vCenter 4

11 10 2010

Just a quick post tonight, but I thought I would share some very useful information I found on the VIOPS site regarding how to more fully utilize the Alarm functionality in vCenter 4.  This includes what’s new with Alarms in vSphere 4.1, how to better use the default alarms, how to configure event triggers, how to configure alarm actions, and how to copy alarm definitions between vCenter instances.  There are also some useful scenarios showing how alarms can help with managing security and compliance as well as monitoring HA.

http://communities.vmware.com/docs/DOC-12145
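For example, here is a minimal sketch of the kind of thing that doc lets you script, using the open-source pyVmomi Python bindings for the vSphere API.  It simply connects to vCenter and dumps the alarm definitions at the root folder, which is a handy first step if you want to compare or copy alarm definitions between vCenter instances.  The host name and credentials are placeholders, and certificate handling is omitted for brevity.

```python
# Minimal sketch: list the alarm definitions rooted at the vCenter top-level
# folder.  Host name and credentials below are placeholders for illustration.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com",   # hypothetical vCenter
                  user="administrator", pwd="secret")
try:
    content = si.RetrieveContent()
    # GetAlarm() returns the alarms defined on (or inherited by) an entity;
    # the root folder gives you every vCenter-wide alarm definition.
    for alarm in content.alarmManager.GetAlarm(content.rootFolder):
        info = alarm.info
        print(f"{info.name} (enabled={info.enabled})")
        print(f"  description: {info.description}")
        print(f"  trigger    : {type(info.expression).__name__}")
finally:
    Disconnect(si)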





A Few Quick Details on EMC FAST

28 09 2010

While digging around on PowerLink I happened upon a new whitepaper on EMC FAST, so I thought I would share.  There is a ton of buzz around this awesome new functionality for EMC’s CLARiiON and Celerra arrays (as well as Symmetrix/VMAX for the sake of accuracy), but being so new there has definitely been a lack of substance.

The main focus here is FAST Sub-LUN, aka the ability to migrate data at a sub-LUN level across multiple tiers of storage.  FAST Cache, the ability to add Enterprise Flash Drives to an array to increase the SP cache, also gets an honorable mention.

First off, if you’re trying to determine exactly how the different tiers of storage are applicable to different workloads, the following chart is very helpful.

I found it interesting that EFDs can be very beneficial for high I/O and super fast response for reads (no surprise there), but may not be the best bet for bandwidth-intensive workloads based on cost per IOP, where FC may be the better choice.  But hey…that’s why FAST Sub-LUN is so great: you can just add multiple tiers of storage into a pool and allow the array to make the best use of each specific type of disk.

Along those lines, FAST relies on heterogeneous pools of storage introduced in FLARE 30.  While the pools can contain different types of storage, meaning FC, EFDs, and SATA, another best practice is to ensure you maintain the same rotational speed within each pool tier. 

And if you have a CX4-based array but are still set up with traditional RAID Group-based LUNs, don’t fret…you can leverage LUN Migration to move to a Storage Pool topology, thereby allowing for the adoption of FAST functionality.  Just keep in mind that while most environments won’t encounter any limitations with this technology, there are some facts to keep in mind, mainly the number of LUNs on a single array that can be configured to leverage FAST.  The limitations listed below are a 1-for-1 match with the maximum number of pool LUNs per system as well.

Once FAST Sub-LUN is enabled within a storage pool, an algorithm comprised of 3 main components (statistics collection, analysis, and relocation) is used to determine how data is tiered to enable the best use of each type of storage.  The algorithm begins by gathering statistics on the activity level of each data slice within a storage pool.  This activity level is ultimately based on a combination of the writes, reads, and overall I/Os against the 1GB slices of data.  The most recent activity level is given more weight in the overall analysis, and the longer the data has been included in the analysis, the less consideration it is given in the overall equation.  Therefore, the most recent data always has the most impact on tiering decisions.

The algorithm next reviews the performance of each 1GB slice of data on an hourly basis, and then, based on the results of that analysis, it begins to rank the slices from hottest, i.e. most heavily accessed, to coldest, i.e. least accessed.

Once the analysis is complete, the array will begin to move the 1GB slices of data based on the previously determined ranking, always giving the higher priority slices precedence.  Therefore data will always be migrated to the highest performing tiers of storage first, pushing less critical data to lower tiers and enabling the most efficient use of the overall array.
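To make the 3 phases a bit more concrete, here is a rough Python sketch of the general idea.  The decay factor, slice counts, and tier capacities are made-up illustrative values, not EMC’s actual parameters: hourly I/O counts per 1GB slice are folded into a decaying activity score, the slices are ranked hottest to coldest, and the ranking is mapped onto the tiers from fastest to slowest.

```python
# Illustrative sketch of the FAST Sub-LUN idea only; the decay factor and
# tier sizes are invented for the example and do not reflect FLARE internals.
DECAY = 0.5  # hypothetical weight: recent hours count more than older ones

def update_activity(scores, hourly_io):
    """Fold this hour's I/O count per slice into a decaying activity score."""
    for slice_id, io_count in hourly_io.items():
        scores[slice_id] = DECAY * scores.get(slice_id, 0.0) + io_count
    return scores

def plan_relocations(scores, tiers):
    """Rank slices hottest to coldest, then fill the fastest tiers first.

    tiers is an ordered list of (tier_name, capacity_in_slices),
    fastest tier first (e.g. EFD, then FC, then SATA).
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    placement, cursor = {}, 0
    for tier_name, capacity in tiers:
        for slice_id in ranked[cursor:cursor + capacity]:
            placement[slice_id] = tier_name
        cursor += capacity
    return placement

# Example: 6 slices in one pool and three tiers with room for 1, 2, and 3 slices.
scores = update_activity({}, {"s0": 50, "s1": 900, "s2": 10,
                              "s3": 300, "s4": 5, "s5": 120})
print(plan_relocations(scores, [("EFD", 1), ("FC", 2), ("SATA", 3)]))
# s1 lands on EFD, s3 and s5 on FC, and the cold slices fall to SATA
```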

In terms of FAST Cache, there was nothing too earth shattering, but again a few tidbits of knowledge.  First off, the following chart helps with determining when FAST Cache should be used as opposed to FAST Sub-LUN.

 

And just in case you’re wondering if both can be used on the same array, the answer is of course, yes!  Along those lines, if you’re wondering if they will conflict, the answer is no.  If a slice of data has been promoted to a Flash tier of storage within a storage pool, FAST Cache will not promote that data to EFD-based Cache, which ensures the system doesn’t waste resources copying from one Flash drive to another.  Likewise, if data is already being serviced by FAST Cache, that will drop its activity level ranking from a storage pool and FAST Sub-LUN perspective, and it therefore will not be moved to an EFD tier, once again preventing duplication of effort.
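As a purely conceptual sketch of that mutual exclusion (invented field names, thresholds, and discount factor, not the actual FLARE logic), each side of the decision simply checks what the other side has already done:

```python
# Conceptual sketch of how the two features avoid stepping on each other.
# Field names, thresholds, and the discount factor are invented for illustration.

def fast_cache_should_promote(slice_info):
    """Skip FAST Cache promotion if the slice already lives on the EFD tier."""
    if slice_info["pool_tier"] == "EFD":
        return False  # no point copying from one Flash drive to another
    return slice_info["recent_hits"] > slice_info["promotion_threshold"]

def effective_activity_for_tiering(slice_info):
    """Discount pool-level activity for I/O already absorbed by FAST Cache,
    so a cached slice is less likely to be relocated to the EFD tier."""
    if slice_info["in_fast_cache"]:
        return slice_info["activity_score"] * 0.1  # hypothetical discount
    return slice_info["activity_score"]

on_efd = {"pool_tier": "EFD", "recent_hits": 40, "promotion_threshold": 3,
          "in_fast_cache": False, "activity_score": 900.0}
cached = {"pool_tier": "FC", "recent_hits": 40, "promotion_threshold": 3,
          "in_fast_cache": True, "activity_score": 900.0}

print(fast_cache_should_promote(on_efd))        # False: already on the Flash tier
print(effective_activity_for_tiering(cached))   # 90.0: tiering sees far less heat
```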

Essentially, FAST Cache is perfectly suited to improving the overall performance of burst-prone data, while FAST Sub-LUN moves data to the appropriate tier of storage, lowering the TCO for housing your data and also improving performance over a longer timeframe.  In the end…the two are completely complementary to one another.

https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/live1/en_US/Offering_Technical/White_Paper/h8058-fast-clariion-wp.pdf?mtcs=ZXZlbnRUeXBlPUttQ2xpY2tDb250ZW50RXZlbnQsZG9jdW1lbnRJZD0wOTAxNDA2NjgwNTM0M2NlLGRvY3VtZW50VHlwZT1wZGYsbmF2ZU5vZGU9MGIwMTQwNjY4MDM0YWZkOF9Hcmlk





EMC Celerra Unified Storage, FAST Sub-LUN, FAST Cache and VDI Reference Architecture

12 09 2010

EMC recently created a reference architecture document focused on Celerra, FAST, EFDs and View 4.5, linked below, which is a must read if you are planning to adopt virtual desktops in your environment.  Heck, even if you have already implemented VDI and are trying to figure out how to cost-justify the storage investment, this guide will show you how to do just that.

VDI has been around for a few years now so it’s not an earth shattering statement to say that it places unique demands on storage.  At any kind of scale, the amount of shared storage required to support the I/O generated by a bunch of individual virtual workstations drives up cost, and therefore ends up being a huge barrier to adoption.

EMC is working to remove this barrier through the recent releases of FAST Sub-LUN and FAST Cache, coupled with Enterprise Flash Drives, which, as demonstrated in this white paper, can provide the same I/O capacity as traditional storage but with a 75% reduction in the number of required disks…which leads to a 60% reduction in overall costs!

The following chart represents a comparison of the storage needed to support 500 concurrent View 4.5-based virtual workstations both when leveraging FAST, FAST Cache, and EFDs, and without these game-changing technologies.

This is just a teaser but read through the white paper to learn more…

http://israel.emc.com/collateral/solutions/reference-architecture/h8027-virtual-desktop-celerra-fc-vmware-ra.pdf





Virtual Provisioning, Storage Pools and FLARE 30

23 08 2010

A while back I wrote about best practices for CLARiiON and Virtual Provisioning, and while it was an excellent feature when introduced with FLARE 28.5, it’s gotten even better in FLARE 30.

If you’re not versed on the concept of Storage Pools yet, then search through www.varrowblogs.com as there is a ton of good info on the subject, but the important thing to know is that prior to FLARE 30 the recommendation from EMC was to use Homogeneous pools only.  This basically means storage pools that are comprised of a single drive type.  FLARE 28.5 allowed a pool to be built from either Fibre Channel or SATA drives, and FLARE 28.7 added EFDs to the list.

With the release of FLARE 30, the types of storage pools were extended to include Heterogeneous pools, which enable multiple drive types (FC, SATA, and EFD) to be included in a single pool.  Heterogeneous pools are the essential foundation of FAST, or Fully Automated Storage Tiering, i.e. the ability for the storage array to make autonomous decisions on where to best place data based upon performance requirements, and they offer the most flexibility when allocating storage via the pool format.

When putting any new technology or feature to use, it’s always important to know all of the caveats.  Listed below is a brief summary of the best practices for Storage Pools and Virtual Provisioning, specifically in the scope of FLARE 30 (with a quick validation sketch after the list):

  • Storage pools can be configured as RAID 5, RAID 6, or RAID 1/0
  • Homogeneous pools are still recommended for applications with similar performance requirements, and/or when you don’t plan to leverage FAST
  • Heterogeneous and Homogeneous pools both support a single RAID type
  • There can be a maximum of 3 tiers in a Heterogeneous pool, based on the 3 drive types, and FAST storage pools can be comprised of as few as 2 drive types
  • RAID 5 pools are recommended for the majority of workloads
  • As of FLARE 30, a storage pool can be as large as the maximum number of drives in a given array model, less the vault drives and hot spares
  • Storage pools are recommended to be built in multiples of 5 for RAID 5, multiples of 8 for RAID 6, and multiples of 8 for RAID 1/0.  Very large pools should be allocated using RAID 6 but ultimately your requirements should be validated to ensure the best mix of performance and resource utilization
  • If creating a pool using differing drive types, create the initial pool using 1 drive type only, and then expand the pool in stages using the remaining drive types, grouped by specific drive type for each expansion stage
  • Thick LUNs, introduced in FLARE 30, reserve all space in a storage pool corresponding to the LUN size when created, and in general perform better than Thin LUNs
  • The maximum size for a storage pool-based LUN is 14TB
  • FLARE 30 introduced the ability to dynamically expand and shrink pool LUNs, which is a very easy process compared to dealing with metaLUNs; however, LUNs can only be shrunk when used in conjunction with Windows Server 2008
  • EMC recommends that the default owner of a pool LUN not be changed once it is provisioned as it can impact performance
  • Space reclamation was introduced with FLARE 30 and allows allocated space not in use to be freed up from a LUN.  This process can be initiated either through performing a LUN migration or by migrating via SAN Copy
  • SnapView supports both thin and thick LUNs (thick LUN support was introduced with FLARE 30)
  • A minimum of FLARE 29 must be in use on both source and destination arrays for replication of thin LUNs.  Thick LUNs can be migrated only if the source array is running FLARE 30.  The same holds true for SAN Copy
  • RecoverPoint supports CDP and CRR for thin, thick, and of course traditional LUNs
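As a quick sanity check against a few of the rules above (the drive-count multiples, the minimum tier count for FAST, the maximum of 3 tiers, and the 14TB pool LUN limit), here is a small hypothetical helper.  It only covers the handful of rules quoted in this post and is no substitute for the white paper referenced below.

```python
# Hypothetical helper covering only the rules quoted above; it is not an
# EMC tool and does not replace the white paper's full guidance.
PREFERRED_MULTIPLE = {"RAID 5": 5, "RAID 6": 8, "RAID 1/0": 8}
MAX_POOL_LUN_TB = 14

def check_pool(raid_type, drives_per_tier, lun_sizes_tb, use_fast=False):
    """Return a list of warnings for a proposed storage pool layout.

    drives_per_tier maps drive type (e.g. 'EFD', 'FC', 'SATA') to drive count.
    """
    warnings = []
    multiple = PREFERRED_MULTIPLE.get(raid_type)
    if multiple is None:
        warnings.append(f"{raid_type} is not a supported pool RAID type")
    else:
        for drive_type, count in drives_per_tier.items():
            if count % multiple:
                warnings.append(
                    f"{drive_type} tier has {count} drives; "
                    f"{raid_type} pools are best built in multiples of {multiple}")
    if use_fast and len(drives_per_tier) < 2:
        warnings.append("FAST tiering needs at least 2 drive types in the pool")
    if len(drives_per_tier) > 3:
        warnings.append("a heterogeneous pool supports a maximum of 3 tiers")
    for size in lun_sizes_tb:
        if size > MAX_POOL_LUN_TB:
            warnings.append(f"{size}TB exceeds the {MAX_POOL_LUN_TB}TB pool LUN limit")
    return warnings

print(check_pool("RAID 5", {"EFD": 5, "FC": 22}, [2, 16], use_fast=True))
# flags the 22-drive FC tier and the 16TB LUN
```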

Reference the following white paper for additional information:

https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/live1/en_US/Offering_Technical/White_Paper/H5512-emc-clariion-virtual-provisioning-wp.pdf?mtcs=ZXZlbnRUeXBlPUttQ2xpY2tTZWFyY2hSZXN1bHRzRXZlbnQsZG9jdW1lbnRJZD0wOTAxNDA2NjgwNTIzOTAzLGRhdGFTb3VyY2U9RENUTV9lbl9VU18w





EMC FAST Tiering and FAST Cache: What’s the Difference and How to Choose?

16 08 2010

Back around the end of 2009, EMC released the first promising step towards automated tiering Nirvana within their storage platforms.  And while it was a step in the right direction, the main drawback, at least in the CLARiiON and Celerra arrays, was that data had to be tiered at the LUN level…meaning that the whole LUN had to be moved between tiers.  If you were planning to add Enterprise Flash Drives to take advantage of denser I/O, it made it difficult to do so at a realistic cost per I/O.

With the advent of FLARE 30, EMC has addressed the challenges around automated storage tiering within their mid-range storage platforms by releasing FASTv2.  FASTv2 allows for tiering of storage at the sub-LUN level, actually in 1GB chunks, meaning that an individual LUN can be spread across multiple types of storage, i.e. SATA, FC, and EFD.  This ensures that only the most active blocks of a LUN will reside on Enterprise Flash Drives, thus enabling much more efficient use of all storage types.  The bottom line is that as a customer you will get a lot more mileage out of an incremental investment in Flash storage.

So now that we know the capabilities of FAST at a high level…what about that other new feature called FAST Cache?  Well, both features make efficient use of Flash drives, but that’s where the similarities end.  FAST Cache simply allows for Enterprise Flash Drives to be added to the array and used to extend the cache capacity for both reads and writes, which translates into better system-wide performance.

If you’re trying to figure out what makes the most sense for your environment…the bottom line is that if you are looking for an immediate, array-wide increase in performance, then FAST Cache is the best place to start.  FAST Cache allows for a small investment in Flash drives, as little as 2 drives for CLARiiON and Celerra arrays, to realize a big jump in performance.  And while it is beneficial for almost all workloads, the key to remember is that it is locality-of-reference based.  This means that FAST Cache will bring the most benefit to workloads which tend to hit the same area of disk multiple times, hence they have a high locality of reference.  Some examples of applications which tend to have a high locality of reference are OLTP-based, file-based, and VMware View, to name a few.  You can certainly work with your VAR, hopefully Varrow :-), to gather performance data against your array, specifically the Cache Miss Rate metric, and thereby show where the addition of FAST Cache will provide the most benefit.
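To see why locality of reference matters so much, here is a rough simulation with made-up parameters (it has nothing to do with actual FLARE cache internals): the same number of I/Os is replayed through a fixed-size cache twice, once concentrated on a small hot region and once spread evenly, and the hit rate, i.e. the inverse of the Cache Miss Rate mentioned above, is dramatically better in the high-locality case.

```python
# Rough illustration of locality of reference vs. cache hit rate.
# Block counts, cache size, and access skew are invented for the example.
import random
from collections import OrderedDict

def hit_rate(accesses, cache_blocks):
    """Replay an access trace through a simple LRU cache and return hit rate."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict the least recently used block
    return hits / len(accesses)

random.seed(42)
total_blocks, cache_blocks, n_io = 100_000, 2_000, 200_000

# High locality: 90% of I/Os land on a hot region the size of the cache.
hot = [random.randrange(cache_blocks) if random.random() < 0.9
       else random.randrange(total_blocks) for _ in range(n_io)]
# Low locality: I/Os spread uniformly across the whole address space.
uniform = [random.randrange(total_blocks) for _ in range(n_io)]

print(f"high locality hit rate: {hit_rate(hot, cache_blocks):.0%}")
print(f"low  locality hit rate: {hit_rate(uniform, cache_blocks):.0%}")
```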

FAST tiering can still be utilized in an array that has FAST Cache enabled, but its focus will be on increasing the performance of specific data sets, and specifically sustained access, while lowering the TCO for the storage required to house said data.  In comparison, FAST Cache will allow for an immediate boost in performance for burst-prone workloads.  So the answer is really that the 2 technologies work together to increase the performance of storage in your environment while lowering the overall Total Cost of Ownership.





Best Practices: Virtualizing Exchange 2010 on VMware

9 08 2010

There has been a perfect storm brewing within IT infrastructures over the past year, especially when it comes to Exchange and virtualization.  Let’s face it…a lot of email environments are still physical due to the fact that they were built prior to it becoming common practice to virtualize tier 1 applications, i.e. workloads that were considered too resource intensive to host on a virtual platform.  While a lot of this was related to perception more so than fact, any room for doubt was swiftly eliminated by VMware vSphere, which has been proven over and over again since its release last year to support even the most I/O-intensive workloads without compromising performance.

So now that most organizations have a robust virtual hosting platform in place, why not leverage it to facilitate an upgrade of Exchange and save on a big purchase of dedicated hardware?!    The following 2 documents from VMware outline the best practices for virtualizing Exchange 2010 on VMware ESX, as well as considerations for high availability and recovery.

http://communities.vmware.com/servlet/JiveServlet/downloadBody/13273-102-1-14551/Exchange%202010%20on%20VMware%20-%20Best%20Practices%20Guide.pdf

http://communities.vmware.com/servlet/JiveServlet/downloadBody/13275-102-1-14553/Exchange%202010%20on%20VMware%20-%20Availability%20and%20Recovery%20Options.pdf





vCenter and VDI: Protecting the Heart of It All

19 07 2010

It’s a cold world out there, and while we work our fingers to the bone providing services to the business, no one is watching our back in IT.  In fact, if your organization is anything like some of the ones that I have worked for in the past, there are a million sets of eyes watching, constantly waiting for something to fail so they can scream their collective heads off.

OK, so hopefully it’s not that bad, but it still serves to prove an important point…we need to build as much redundancy as possible into all solutions that are halfway critical to users, and this becomes even more crucial in the scope of virtual desktops.

Think about it…when desktops were dispersed and a major system outage occurred, at least users could still work locally.  However, it’s a whole other story when desktops are being hosted from the data center.  This isn’t a cheap shot at VDI, as I think that the benefits far outweigh the unique requirements and risks, but more so a call to arms stressing the importance of architecting truly redundant environments.

For example, one of the easiest things that can be done to increase resiliency in any VDI environment hosted on a VMware infrastructure is to protect vCenter.  Whether it’s XenDesktop, VMware View, or even Quest vWorkspace (when leveraging the Linked Clone API), vCenter serves as the brains of the operation.  Fault Tolerance, introduced in vSphere, i.e. ESX 4.X, is a technology that allows for true high availability of a VM, meaning zero downtime even if a physical host failure occurs, and it can be leveraged to ensure your vCenter server has the utmost level of uptime.
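If you would rather script the FT turn-on than click through the vSphere Client, a minimal sketch using the pyVmomi Python bindings might look like the following.  The vCenter name, credentials, and VM name are placeholders, and it assumes the host and VM already meet the FT requirements covered in the links below.

```python
# Minimal sketch: enable Fault Tolerance on a named VM by asking vCenter to
# create its secondary.  Names/credentials are placeholders, and all FT
# prerequisites (CPU family, FT logging network, thick disks, single vCPU
# in the 4.x era) are assumed to be in place already.
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",   # hypothetical vCenter
                  user="administrator", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "vcenter-vm")  # placeholder VM name
    # CreateSecondaryVM_Task creates the FT secondary; omitting the host
    # argument lets vCenter pick a compatible host for it.
    WaitForTask(vm.CreateSecondaryVM_Task())
    print("Fault Tolerance enabled and secondary VM created")
finally:
    Disconnect(si)
```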

While the point of this post is not to fully outline the requirements for Fault Tolerance, you can reference the links below to validate that your hardware is compatible, mainly the CPU type/family, as well as the network configuration of your ESX hosts.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008027 – Processor and Guest Operating Systems that Support Fault Tolerance

http://www.vmware.com/files/pdf/fault_tolerance_recommendations_considerations_on_vmw_vsphere4.pdf – Fault Tolerance Recommendations and Considerations

Another quick point of consideration is to host your vCenter VM on a cluster other than the one hosting your virtual workstations.  While this isn’t a deal breaker, it does make a lot of sense from a load and basic redundancy perspective.