In my SSD testing I use the standard benchmarks, TPC-C and TPC-H, to simulate OLTP and DSS/DWH environments. Rather than re-inventing the wheel, I use schema examples gleaned from tests on similar hardware that have been published at the http://www.tpc.org/ website.
In creating the TPC-C schema I used a schema model based on a successful TPC-C run on an HP platform. In that schema, several of the tables were created as single-table or multi-table clusters. During the initial load I found that the multi-table cluster wouldn't load correctly, at least when loading via external tables, so I broke it into two indexed and referentially related tables. However, I left the single-table clusters alone, believing them to be more efficient.
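For illustration, the replacement looked something like the sketch below; the actual tables and columns in my schema differ, so take ORDERS and ORDER_LINE as stand-ins for the two tables that came out of the broken-up cluster.

    -- Hypothetical sketch only: two heap tables, each with its own primary key,
    -- related by a foreign key, in place of the original multi-table cluster.
    CREATE TABLE orders (
      o_w_id    NUMBER,
      o_d_id    NUMBER,
      o_id      NUMBER,
      o_c_id    NUMBER,
      o_entry_d DATE,
      CONSTRAINT orders_pk PRIMARY KEY (o_w_id, o_d_id, o_id)
    );

    CREATE TABLE order_line (
      ol_w_id   NUMBER,
      ol_d_id   NUMBER,
      ol_o_id   NUMBER,
      ol_number NUMBER,
      ol_i_id   NUMBER,
      ol_amount NUMBER(6,2),
      CONSTRAINT order_line_pk PRIMARY KEY (ol_w_id, ol_d_id, ol_o_id, ol_number),
      CONSTRAINT order_line_fk FOREIGN KEY (ol_w_id, ol_d_id, ol_o_id)
        REFERENCES orders (o_w_id, o_d_id, o_id)
    );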
In a single-table cluster the rows are hashed on the cluster key into a fixed set of blocks, making them easy to look up by simply applying the hash function and reading those blocks. This is supposed to be faster than an index lookup followed by a table lookup, because the hash calculation replaces the index traversal and the rows for a given key are co-located in the hashed blocks.
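As a rough illustration (the names and sizing figures here are made up, not taken from my schema), the difference between the two structures in Oracle DDL looks like this:

    -- Illustrative single-table hash cluster: rows are stored in blocks chosen
    -- by hashing the cluster key, so a key lookup goes straight to those blocks.
    CREATE CLUSTER item_cluster (i_id NUMBER)
      SIZE 512
      SINGLE TABLE
      HASHKEYS 100000;

    CREATE TABLE item (
      i_id    NUMBER,
      i_name  VARCHAR2(24),
      i_price NUMBER(5,2)
    )
    CLUSTER item_cluster (i_id);

    -- The conventional alternative: a heap table whose primary-key lookup
    -- goes through a B-tree index and then to the table block.
    CREATE TABLE item_plain (
      i_id    NUMBER       CONSTRAINT item_plain_pk PRIMARY KEY,
      i_name  VARCHAR2(24),
      i_price NUMBER(5,2)
    );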
I noticed during reloads that the single-table clusters took longer to load than non-clustered tables of a similar size. So, as a test, I decided to check whether the clustering was having a positive or a negative effect on database performance. In this test I replaced all clustered tables with table plus primary-key index combinations. The results, shown in Figure 1, were obtained with the configuration that gave the best previous performance: no flash cache, no keep or recycle pools, a maximized db cache, and FIRST_ROWS_(n) set to 1, roughly the parameter settings sketched below.
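These are approximations of the settings rather than the exact values used; in particular the cache size below is a placeholder, since the real value depends on the memory in the test server.

    -- Approximate settings only; db_cache_size is a placeholder value.
    ALTER SYSTEM SET db_flash_cache_size   = 0  SCOPE = SPFILE;  -- no flash cache
    ALTER SYSTEM SET db_keep_cache_size    = 0  SCOPE = SPFILE;  -- no keep pool
    ALTER SYSTEM SET db_recycle_cache_size = 0  SCOPE = SPFILE;  -- no recycle pool
    ALTER SYSTEM SET db_cache_size         = 50G SCOPE = SPFILE; -- maximize the default cache
    ALTER SYSTEM SET optimizer_mode        = first_rows_1 SCOPE = SPFILE;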
Figure 1: The Effect of Removing Table Clusters
Surprisingly, removing the clusters increased performance from a peak of 6,435 tps to a peak of 7,065 tps, nearly a 10% improvement. This corresponds to a non-audited tpmC value of 197,378.310, which would be equivalent to the result from a system built on around 200 disk drives. From my research, disk-based systems generally deliver about 1,000 tpmC per physical disk drive, depending on the amount of cache and the speed and type of disk used.
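That drive-count estimate is simply the rule of thumb applied to the measured figure:

    197,378 tpmC / ~1,000 tpmC per drive ≈ 197 drives, i.e. roughly a 200-drive array.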
It appears that the SSD reduces latency to the point where features designed to save disk access time, such as table clustering, can incur more processing overhead than they save through any reduction in IO.