Author: Ben Taylor

Editorials

Partition By Time

As the size of your database continues to grow, the physical implementation of your disk storage becomes more important. This shows up in real-time performance when using the database, and in your disaster recovery options. One of the key ways of handling large volumes of data, especially in a single table, is the use of partitioned tables. We are going to […]
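As a minimal sketch of the idea, a time-partitioned table in SQL Server is built from a partition function, a partition scheme, and a table created on that scheme. The table, column, and boundary values below are hypothetical placeholders, not a prescription.

-- Hypothetical time-based partitioning sketch; boundary dates,
-- table name, and filegroup placement are all assumptions.
CREATE PARTITION FUNCTION pfOrdersByYear (datetime2)
    AS RANGE RIGHT FOR VALUES ('2015-01-01', '2016-01-01', '2017-01-01');

CREATE PARTITION SCHEME psOrdersByYear
    AS PARTITION pfOrdersByYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.SalesOrders
(
    OrderId   bigint IDENTITY(1,1) NOT NULL,
    OrderDate datetime2 NOT NULL,
    Amount    money NOT NULL,
    CONSTRAINT PK_SalesOrders PRIMARY KEY CLUSTERED (OrderDate, OrderId)
) ON psOrdersByYear (OrderDate);

In production you would typically map each partition to its own filegroup rather than ALL TO ([PRIMARY]); that mapping is what ties the partitioning back to the physical disk layout.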

Editorials

Optimal File Group Utilization

David Eaton brings us back to the database fundamentals of file utilization for an SQL Server database. He writes: The fundamentals of SQL Server are still the same: spread the data across as many disks as you can. Make sure the database design is correct and scalable. Set up filegroups to move indexes intelligently, meaning use two or more and […]
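To make the filegroup idea concrete, here is a minimal sketch: a second filegroup is added, a file is placed on it, and a nonclustered index is created there. The database name, file path, and table are hypothetical.

ALTER DATABASE SalesDb ADD FILEGROUP FG_Indexes;

ALTER DATABASE SalesDb ADD FILE
(
    NAME = 'SalesDb_Indexes1',
    FILENAME = 'E:\SQLData\SalesDb_Indexes1.ndf',  -- ideally a separate physical disk
    SIZE = 1GB, FILEGROWTH = 512MB
) TO FILEGROUP FG_Indexes;

-- Nonclustered indexes created on the second filegroup separate
-- index I/O from base-table I/O.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    ON FG_Indexes;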

Editorials

Getting The Most Out Of Your Database

As AZ Jim reminds us in our comments, the configuration of your disk systems matters in more ways than one. There are many ways a controller can build RAID drives, which combine at least two disks that perform as a single disk. The more disks participating in a RAID system, the better the fault tolerance should one or more drives […]

Editorials

Database Disk Usage

I appreciate the comments addressing performance issues that may stem from systems outside your control as a DBA. Some suggestions rely on tools to identify bottlenecks. Others take advantage of metrics you are already able to gather from SQL Server and the performance monitors built into Windows. Sometimes the issues are not with SQL Server, or Windows is […]
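One set of metrics SQL Server already gathers for you is per-file I/O stall data, exposed through the sys.dm_io_virtual_file_stats DMV. A simple query like the sketch below (interpretation thresholds will vary by workload) can point to the files, and therefore the disks, where time is being lost.

-- Per-file I/O stalls across all databases, worst first.
SELECT  DB_NAME(vfs.database_id)  AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.io_stall_read_ms,
        vfs.io_stall_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;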

Editorials

Optimize High Volume With a Staging Table

There are many things you can do to improve the performance of your OLTP system when exceptionally high volume slows it to the point of visible perception. Normalization, file separation, index and statistics tuning, memory management, and report separation are your first targets for any OLTP database. We even talked about pre-assigning keys externally last […]
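A minimal sketch of the staging-table idea: writes land in a narrow heap with no indexes, and a periodic batch moves rows into the normalized tables. Table and column names are hypothetical, and the OUTPUT INTO form assumes the target table has no triggers or foreign key relationships.

CREATE TABLE dbo.OrderStaging
(
    StagingId  bigint IDENTITY(1,1) NOT NULL,
    RawPayload nvarchar(max) NOT NULL,
    ReceivedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);  -- deliberately a heap: no clustered index, so inserts stay cheap

-- Periodic batch move into the normalized table.
BEGIN TRANSACTION;

DELETE TOP (5000) s
OUTPUT deleted.RawPayload, deleted.ReceivedAt
INTO   dbo.Orders (Payload, ReceivedAt)
FROM   dbo.OrderStaging AS s;

COMMIT TRANSACTION;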

Editorials

Two Stage Persistence

In dealing with high performance systems, there is another method of persisting data to an SQL relational database with appropriate normalization. This was alluded to by Eilenblogger in his comments yesterday. The method uses two stages of persistence. First, the data is persisted to a temporary data store with extremely fast performance. There […]
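One way to keep both stages inside SQL Server itself is In-Memory OLTP: a memory-optimized table serves as the fast first stage, and a background job later shreds its rows into the normalized tables. This sketch assumes SQL Server 2014 or later with a MEMORY_OPTIMIZED_DATA filegroup; all names are hypothetical.

CREATE TABLE dbo.IngestBuffer
(
    BufferId   bigint IDENTITY(1,1) NOT NULL
               PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    RawPayload nvarchar(4000) NOT NULL,
    ReceivedAt datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- A second-stage job reads from dbo.IngestBuffer, normalizes the
-- payload into the relational tables, and deletes the moved rows.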

Editorials

How Do I Implement High Volume Key Assignment?

Today I am sharing some specifics for building your own key assignment implementation, allowing you to allocate multiple key values in advance of actually creating records. When you use the database engine to create unique keys for your data, you usually follow this sequence: add data to a table, capture the assigned key value, return the key value to […]
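If you would rather not roll your own, SQL Server 2012 and later can reserve a block of keys up front with a SEQUENCE and the sys.sp_sequence_get_range procedure. A sketch, with a hypothetical sequence name:

CREATE SEQUENCE dbo.OrderKeySeq AS bigint START WITH 1 INCREMENT BY 1;

DECLARE @first sql_variant;
EXEC sys.sp_sequence_get_range
     @sequence_name     = N'dbo.OrderKeySeq',
     @range_size        = 1000,
     @range_first_value = @first OUTPUT;

-- @first is the start of 1,000 reserved values; the application can
-- now assign keys from this block with no further round trips.
SELECT CAST(@first AS bigint) AS FirstReservedKey;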

Editorials

High Performance Key Generation

It is often valuable to have a key generator other than the internal tools built natively into your database engine. One key reason is simply performance. Using system-assigned keys generated by your database engine requires multiple round trips to store data that is normalized in any fashion. When you have related data where […]
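A common do-it-yourself alternative is a block allocator (the classic hi-lo pattern): one atomic UPDATE hands an entire block of keys to the caller, who can then wire up parent and child rows before a single INSERT is sent. Table and column names below are hypothetical.

CREATE TABLE dbo.KeyAllocator
(
    EntityName varchar(64) NOT NULL PRIMARY KEY,
    NextKey    bigint NOT NULL
);

DECLARE @blockSize bigint = 1000, @firstKey bigint;

-- Single-statement read-and-increment, so it stays atomic under load.
UPDATE dbo.KeyAllocator
SET    @firstKey = NextKey,
       NextKey   = NextKey + @blockSize
WHERE  EntityName = 'Order';

SELECT @firstKey AS FirstKey;  -- caller owns @firstKey .. @firstKey + 999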

Editorials

The Keys to the Kingdom

So how do you create a primary key in a table? Believe it or not, there are a few different methods that have proven useful in different scenarios. Let’s start with some of the traditional methods supported by most SQL engines. First there is sequential number generation from the SQL engine itself. In SQL Server we call […]
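As a quick sketch of those traditional options in SQL Server terms (table names are hypothetical): IDENTITY for engine-generated sequential numbers, a standalone SEQUENCE shared across tables, and GUID keys.

-- 1. Engine-generated sequential numbers: IDENTITY.
CREATE TABLE dbo.CustomersA
(
    CustomerId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);

-- 2. A standalone SEQUENCE object, usable across tables.
CREATE SEQUENCE dbo.CustomerSeq AS int START WITH 1 INCREMENT BY 1;
CREATE TABLE dbo.CustomersB
(
    CustomerId int NOT NULL
               DEFAULT (NEXT VALUE FOR dbo.CustomerSeq) PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);

-- 3. GUID keys; NEWSEQUENTIALID() avoids the page splits of NEWID().
CREATE TABLE dbo.CustomersC
(
    CustomerId uniqueidentifier NOT NULL
               DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);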