Help – My disk performance II

OK, post number two in two days on a rather weighty topic… don’t expect this forever! I am prepping for a presentation and using this series to collect my thoughts, as well as to provide a place to point to during the presentation. There is no way I can get to all this detail in a presentation, so my hope is to provide a place for some self-paced study of my rambling thoughts.

Introduction
This is the second post in a series on disk performance. Part one: Help — My disk performance (RAID how/why / what to put where)

Now that we have covered some of the basics about disks, RAID configurations, and the number and size of drives, today’s discussion moves up the stack (just a little) to the connections to, and layout of, those disks. This discussion is more theoretical than specific to a particular SAN vendor; the logic applies regardless of the hardware provider, be it EMC, Hitachi, HP, etc.

Some basic assumptions (your SAN admin folks will have done a good job protecting you at the SAN/network level with redundancy):

  1. The SAN fabric has a minimum of two switches (this provides failover and redundancy in the network layer). Should a switch fail, or someone unplug something, break a cable, etc., the two switches are independent and each can provide complete access to the SAN.
  2. There are a minimum of two storage processors internally in the SAN. This provides internal load-balancing, as well as failover within the SAN should a processor fail.

I have seen quite good performance from a single dual-port card connected at 2 Gbps. My recommendation is to connect at 4 Gbps to effectively eliminate discussions about HBA latency (more on this another time). A single dual-port card does little for redundancy and failover on the server, but it does provide load-balancing across the two ports, and it does provide connections to both halves of the SAN, which has redundancy in the fabric as well as internally in the SAN.

 

As you can see in the illustration, the dual-port card has a connection to each switch, and each switch is connected to a storage processor. Working with the setup of a minimum of two data drives, each drive is a 1 TB LUN spanning 40 physical drives (see the previous post). Each LUN gets assigned a primary storage processor; given two drives and two storage processors (each with its own read-ahead and buffer caches), the two LUNs should be split between the storage processors.
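To make the ownership idea concrete, here is a minimal sketch of splitting LUNs evenly across storage processors. The LUN and SP names, and the helper function itself, are made up for illustration; real arrays do this through their own management tools.

```python
def assign_primary_sp(luns, storage_processors):
    """Round-robin each LUN to a primary storage processor so that
    ownership (and each SP's read-ahead and buffer cache) is split
    evenly instead of piling every LUN onto one processor."""
    assignment = {}
    for i, lun in enumerate(luns):
        assignment[lun] = storage_processors[i % len(storage_processors)]
    return assignment

# Two 1 TB data LUNs, two storage processors:
print(assign_primary_sp(["DATA_LUN_1", "DATA_LUN_2"], ["SP-A", "SP-B"]))
# -> {'DATA_LUN_1': 'SP-A', 'DATA_LUN_2': 'SP-B'}
```

With only two LUNs the round-robin is trivial, but the same rule keeps things balanced as you add more LUNs later.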

Monitoring is a topic for another day, but here are some basic numbers to live by (if they don’t make sense… stay tuned, I promise I will get there):

  • IOPS: when calculating I/O capacity, use 180–200 per drive in the LUN
  • Access time: 5 ms – 8 ms (as measured on the SAN)
  • Disk queue: 2 per drive in the LUN
  • 1 TB per LUN
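The numbers above combine into a quick back-of-the-envelope budget. A sketch, using the 40-drive LUN from earlier in this post (the function name and constants are mine, not from any SAN toolkit):

```python
# Rules of thumb from the list above:
IOPS_PER_DRIVE_LOW = 180   # conservative IOPS per physical drive
IOPS_PER_DRIVE_HIGH = 200  # optimistic IOPS per physical drive
QUEUE_PER_DRIVE = 2        # healthy disk-queue depth per drive

def lun_budget(drives_in_lun):
    """Return (min IOPS, max IOPS, max healthy queue depth) for a LUN."""
    return (drives_in_lun * IOPS_PER_DRIVE_LOW,
            drives_in_lun * IOPS_PER_DRIVE_HIGH,
            drives_in_lun * QUEUE_PER_DRIVE)

low, high, queue = lun_budget(40)
print(f"40-drive LUN: {low}-{high} IOPS, queue depth up to {queue}")
# -> 40-drive LUN: 7200-8000 IOPS, queue depth up to 80
```

So the 40-drive, 1 TB LUN should comfortably deliver 7,200–8,000 IOPS, and a sustained queue much above 80 is a sign the LUN is being pushed past those rules of thumb.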

The filegroups, numbers of files, and file sizes are all going to be part of a later post.
