Editorials

Final Virtualization Thoughts and Feedback

Featured Article(s)
Kill a Blocking Session After Waiting a Parameterized Wait Time Value
The following stored procedure can be used inside a job that runs on a recurring basis. It kills any blocking session (a session whose spid appears in the blocked column of sysprocesses, where blocked != 0) when the wait time of the blocked session exceeds a parameterized value (in milliseconds). The procedure kills the session and audits the session ID, the database name, the timestamp at which the session was killed, and the login name of the killed session.
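The approach described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the author's actual procedure; the procedure name, parameter name, and the dbo.KilledSessionAudit table are assumptions for the example.

```sql
-- Hypothetical sketch of the technique described above (not the author's code).
-- Assumes an audit table dbo.KilledSessionAudit (spid, dbname, killed_at, login_name) exists.
CREATE PROCEDURE dbo.usp_KillLongBlockers
    @MaxWaitMs INT
AS
BEGIN
    DECLARE @spid INT, @cmd VARCHAR(20);

    -- Find the sessions doing the blocking, where a victim has waited too long
    DECLARE blockers CURSOR FOR
        SELECT DISTINCT p.blocked
        FROM master.dbo.sysprocesses p
        WHERE p.blocked <> 0
          AND p.waittime > @MaxWaitMs;

    OPEN blockers;
    FETCH NEXT FROM blockers INTO @spid;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Audit the blocker before killing it
        INSERT INTO dbo.KilledSessionAudit (spid, dbname, killed_at, login_name)
        SELECT spid, DB_NAME(dbid), GETDATE(), loginame
        FROM master.dbo.sysprocesses
        WHERE spid = @spid;

        -- KILL does not accept a variable directly, so build dynamic SQL
        SET @cmd = 'KILL ' + CAST(@spid AS VARCHAR(10));
        EXEC (@cmd);

        FETCH NEXT FROM blockers INTO @spid;
    END
    CLOSE blockers;
    DEALLOCATE blockers;
END
```

In a real deployment you would schedule this via a SQL Server Agent job running at a short interval, and you would likely want to exclude system sessions (spid <= 50) from the kill list.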

Some Final Virtualization Feedback
First, Charlie (the culprit(!) who started this all) wrote to clarify a bit what he was referring to in his original post:

"My focus of comment was high volume OLTP applications and the key for these apps is transaction throughput at peak intervals, nothing else. We’re working on environments where our clients’ transaction throughput requirements are 10k TPM in a small center and over 80K TPM in a large one (sustained peaks of several hours). On such applications, hard disk queuing (and elimination of) during peak times is typically the critical “path” in ensuring top performance (more than 1 GB of RAM helps too in large scale OLTP <g> ).

There are also issues using the quad cores which still have single threaded resources (at the bus interface, lower level cache, command process buffers, etc) resulting in exponentially higher context switches and command process queues compared to their true multi-processor counterparts. However that is off the subject of these comments.


Bottom line: As I said in my original comment, if you can satisfy constant and peak transaction throughput business requirements in a virtual environment, then great. This may apply to 60% of the database system deployments out there.

And yes, we use/test VMWare as well and I like it. <g> I just happened to be working on an MS Virtual Server environment at the time I dumped out my original comments."

…Finally, to wrap up the now-VMware discussions, Mitesh writes:

"Virtualization offers great flexibility but at the same time you have to be careful to monitor the disk IO. This is the most important factor that we have realized impacts the performance (at least in our environments). Even though you could throw large RAM and Host with multiple CPUs, it is important how the SAN storage is architected.

We extensively use VMware ESX host VMs in our environment and it works well for Development/QA/Tech Support environments. VMware also has a feature called Snapshot Manager, which is very useful in creating VM snapshots, and the changes can be later applied to the master VM. We also use VM Templates, a feature whereby you can save a configuration and use it to deploy a new VM in the future. The time savings are significant, and the effort to create an exact match for an environment is greatly reduced.

Also, patching and troubleshooting are easier, which helps speed up deployments and resolve issues. We have also run various applications, such as Siebel and PeopleSoft, on various database VMs, and it has worked fine for R&D. It is important to keep in mind that virtualization benefits are constrained by how well you can balance the various environmental factors (network, storage, load on the host) to achieve maximum throughput."

Featured White Paper(s)
Bust a Move With Your SSIS – Passing Package Variables
Explore the creation of sample development data using one of the most basic features in this new interface. Integration Serv… (read more)

The Payment Card Industry Compliance – Securing both Merchant and Customer data
This white paper introduces the Payment Card Industry Compliance standard, and the security threats which brought about the n… (read more)