Oracle has provided many suggestions for achieving the greatest benefits from ASM. Overall, ASM is a proven and widely adopted technology. Its design is still progressing: it was introduced as a new feature in Oracle 10g and has delivered more of its promise in 11g. Remember that a guideline is just that, a guide, not necessarily applicable to your situation; always take any best practice with a grain of salt. The following are considered good practices when using ASM:
1.) Separate the ASM Home from that of any normal database
Since ASM runs as a lightweight, independent instance, it should have its own $ORACLE_HOME. The Oracle Universal Installer (OUI) places ASM in an independent home automatically. As a best practice, the ASM home should be kept separate from that of any other database, regardless of that database's dependency on ASM. By default, creating an initial database for ASM with the DBCA results in a separate ASM directory. To install ASM with its own $ORACLE_HOME, use the Universal Installer and follow the steps in the Oracle Database Installation Guide on installing ASM.
By using a separate $ORACLE_HOME for ASM, DBAs can upgrade, back up, and recover the databases that depend on ASM without interfering with ASM operations. Conversely, if multiple databases rely on ASM and one of them shares its $ORACLE_HOME with ASM, management of the databases becomes exceptionally complicated.
As long as the home directories are separate, the ASM instance can be patched or upgraded independently of its dependent databases. The backward compatibility of ASM ensures that these steps can proceed without having to migrate storage between upgrades.
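As a minimal sketch of what that independence looks like in practice (the diskgroup name DATA is hypothetical), the 11g diskgroup compatibility attributes can be advanced after an ASM upgrade without touching the client databases:

  -- Run on the ASM instance: check current compatibility settings
  SELECT name, compatibility, database_compatibility
    FROM V$ASM_DISKGROUP;

  -- Advance the ASM compatibility after upgrading the ASM home
  -- (a one-way change; client databases keep their own setting)
  ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1.0.0.0';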
2.) Use homogeneous diskgroups
Every disk in a diskgroup is treated the same (with the exception of the “Preferred Mirror Read” discussed in the next section). Therefore, high-speed drives will only exhibit their performance if grouped with other high-speed drives; stripe performance of a mixed diskgroup will be virtually equivalent to that of a diskgroup built entirely from the slowest type of disk. This is known as the convoy effect.
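For illustration, a homogeneous diskgroup is simply one built only from same-class devices; a minimal sketch (the diskgroup name and disk paths are hypothetical):

  -- All members are the same class and speed of device
  CREATE DISKGROUP data_fast NORMAL REDUNDANCY
    DISK '/dev/rdsk/ssd01', '/dev/rdsk/ssd02',
         '/dev/rdsk/ssd03', '/dev/rdsk/ssd04';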
3.) Assign task-specific diskgroups for tiered storage
Although ASM makes exceptionally efficient use of available disks, performance characteristics can only be steered through ASM templates and extent sizes. Oracle suggests using a limited number of diskgroups, two specifically. In smaller deployments, limiting the number of diskgroups while increasing the number of disk members will highlight ASM's performance. In large-scale deployments, however, tiered storage with task-specific diskgroups will provide the greatest performance. With RAM-based SSD as the premier solution for the hot files atop the tiered storage pyramid, it is necessary to isolate that system into its own diskgroup. Frequently accessed data files should be placed on Flash-based storage, in its own diskgroup, with high-performance disk-based storage used for the remainder. Again, the performance of these drives is not exhibited if they are grouped with lower-speed storage.
Archive and recovery storage should be kept on a large deployment of slower, potentially non-SCSI, disks. ASM can provide significant write bandwidth to these drives, given the sequential access patterns of archive storage in Oracle. With a sufficiently large diskgroup, the capacity, availability, and performance requirements of the readily available data files of large-scale databases can be met.
This strategy conflicts with the concept of a minimal number of diskgroups, but it can be quite favorable to the performance of the database, especially when multiple classes of storage are available. To host multiple databases on a single ASM instance, this strategy can be scaled further to provide an even larger diskgroup for the data files and isolated log file diskgroups for each database.
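A hedged sketch of such a tiered layout (all diskgroup names, paths, and sizes are hypothetical; the ALTER SYSTEM commands run in each client database, not in the ASM instance):

  -- Hot tier: flash devices for frequently accessed data files
  CREATE DISKGROUP hot_data NORMAL REDUNDANCY
    DISK '/dev/flash/disk1', '/dev/flash/disk2';

  -- Capacity tier: many slower disks for archive/recovery areas
  CREATE DISKGROUP arch NORMAL REDUNDANCY
    DISK '/dev/sata/disk1', '/dev/sata/disk2',
         '/dev/sata/disk3', '/dev/sata/disk4';

  -- In the client database, point file creation at the hot tier
  -- and the recovery area at the capacity tier
  ALTER SYSTEM SET db_create_file_dest = '+HOT_DATA';
  ALTER SYSTEM SET db_recovery_file_dest_size = 500G;
  ALTER SYSTEM SET db_recovery_file_dest = '+ARCH';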
4.) Take advantage of multipathing
Multipathing provides performance gains in throughput and protects data against path failure. Almost all multipathing solutions available for the major operating systems are thoroughly tested and fully compatible with ASM.
Taking advantage of cache-enabled SANs can offer better response times than the disks themselves can deliver. In such cases, it is important to note the precautions of the practice: cache front-ends can fill quickly, and as applications grow, performance can suffer. When that happens, many disk systems deliver worse-than-average performance because client I/O and cache de-staging are hitting the spindles at the same time. For write-intensive or low-latency applications, be wary of cache-enabled SANs, because the de-staging process can be detrimental to clients and customers.
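As a sketch (assuming a Linux device-mapper style multipath driver whose pseudo-devices appear under /dev/mapper), ASM should be pointed at the multipath pseudo-devices rather than the individual paths, so each disk is discovered exactly once:

  -- Run on the ASM instance: scan only the multipath pseudo-devices
  ALTER SYSTEM SET asm_diskstring = '/dev/mapper/*';

  -- Verify each disk appears once, via its pseudo-device path
  SELECT path, name, group_number FROM V$ASM_DISK;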
5.) Allocate enough reserve capacity for groups where disk failures are probable
If a disk fails in a diskgroup with mirroring enabled, a second failure could result in lost data. To minimize this risk, ASM will attempt to return to full redundancy as quickly as possible, but space must be available on the remaining disks of the diskgroup to accommodate the contents of the failed disk. This protection can be ensured as long as the free space in a diskgroup (taking the redundancy level into account) is greater than the size of a single disk.
For example, say six 73GB disks are used in a single diskgroup (438GB total) with normal redundancy (mirroring), and each disk holds 50GB of ASM data, leaving approximately 23GB of unused capacity per disk. If a disk member fails, ASM will immediately rebalance, creating new copies of the lost extents to restore normal redundancy. The 50GB of ASM data that is no longer available is repopulated evenly among the remaining drives, so each of the remaining drives takes on an additional 10GB of data (50GB divided by the remaining 5 disks). The free space available (23GB * 6 = 138GB) was greater than the size of a single disk, so redundancy can quickly be restored. However, redundancy will be lost if a second disk fails before a replacement is added: after the first rebalance, each remaining disk contains 60GB of ASM data and 13GB of free space (13GB * 5 = 65GB total, less than the size of a single disk). If a second disk were to fail, there would not be sufficient space to allocate 15GB (60GB divided by the remaining 4 disks) to each disk.
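ASM exposes this headroom calculation directly; a quick check against V$ASM_DISKGROUP (columns available in 10g Release 2 and later):

  -- USABLE_FILE_MB accounts for redundancy plus the space needed
  -- to restore full redundancy after a disk failure; a negative
  -- value means a failure could not be fully absorbed
  SELECT name, total_mb, free_mb,
         required_mirror_free_mb, usable_file_mb
    FROM V$ASM_DISKGROUP;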
6.) Use external redundancy for diskgroups on arrays that provide mirroring and replication
Duplicating data onto storage that is already mirrored by the array can be overly redundant. ASM mirroring protects against a catastrophic failure, but array-level replication may already be providing that protection. External redundancy, in effect, disables ASM mirroring and leaves data protection to the array.
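In that case the diskgroup is created with external redundancy, so ASM still stripes but does not mirror; a minimal sketch (the diskgroup name and LUN paths are hypothetical):

  -- Data protection is delegated entirely to the array
  CREATE DISKGROUP san_data EXTERNAL REDUNDANCY
    DISK '/dev/san/lun1', '/dev/san/lun2';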
7.) The default initialization parameters are fine, except for processes
Set processes according to this formula:
processes = 40 + (10 + [max number of concurrent database file creations and file extend operations possible]) * n
Where n is the number of databases connecting to ASM (ASM clients).
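A worked example, assuming two client databases and at most five concurrent file creation or extend operations: processes = 40 + (10 + 5) * 2 = 70.

  -- On the ASM instance; PROCESSES is static, so a restart is needed
  ALTER SYSTEM SET processes = 70 SCOPE=SPFILE;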
By following these best practices, the highest performance can be achieved using ASM.
Monday, May 3, 2010
Who's In Charge Here Anyways?
As DBAs we have all seen it, heck, probably done it. We call over to the server administrators for more space for our database files, and sometime later we get it. We have no idea how it is configured, where it is located, or whether it will contend with existing file placements. All of the files we own are located in some magic land, let's call it SAN Land, where everything is always load balanced, there are no hot spots, and nothing ever contends with anything else. I think it is located right next to Lake Wobegon.
The SAN as a blackbox technology has been a boon and a bane to Oracle administrators. We know how things should be set up but when we try to pass along this information to the SAN administrator we hear the usual replies about how we have to co-exist with the other users and it is just not possible to configure things just for us. Well, those days have ended.
How about space that doesn’t have to be configured with an eye towards contention due to head movement or contention caused by block placement? How about freedom from hotspots and all the other problems which plague disk based technology? Even better, how about storage that can be locally managed? Impossible? Am I in a fantasy land somewhere?
Nope, not a fantasy land; welcome to the year 2010. How about 225 to 450 gigabytes of low-latency storage that is locally controlled, doesn't depend on disks, and, better yet, can usually be purchased and installed with little pushback from system or LAN administrators? The RamSan-10 and RamSan-20 provide 225-450 gigabytes of high-speed, low-latency SLC flash memory based storage that plugs into a full-size PCIe slot in the server and looks like just another disk drive, but looks are deceiving.
As a “database accelerator” for a single-server database that hooks directly into the server and doesn't require any fibre channel, NFS, iSCSI, or SAS connection, PCIe storage bypasses many of the management headaches associated with standard SAN technology. Because the RamSans are not mechanical disk devices, 100% of the storage capacity can be utilized; there is no need to worry about short-stroking, striping, or mirroring to get better performance. At a price between 8-20K USD, these solutions also fall easily within the signatory purchase powers of most department heads.
So shake off the fetters of the SAN world and step into the 21st century! Deliver 5 times the performance of standard SAN technologies to your database that you control locally.
Tuesday, February 16, 2010
RMOUG
Well, I arrived here in Denver last night after a nearly two-hour delay getting out of Atlanta, some of it due to weather but most due to someone dropping a trash bag (I don't know if it was full) into a running jet engine; we had to wait while the engine was inspected.
I will be giving two presentations at RMOUG:
Going Solid: Use of Tier Zero Storage in Oracle Databases
02/17/2010, Session 1, 9:00 am - 10:00 am, Room: 201
The Ultimate Oracle Architecture - OPERA
02/18/2010, Session 9, 11:45 am - 12:45 pm, Room: 210/212
If you are there, I hope you will stop in for one of these, since I will be talking about new architectures that will affect everyone in the near future!
When I am not giving presentations I will be hanging out with our good friends at the Dynamic Solutions (DSI) booth, so come by and say hi! I might even email you a PDF copy of our new book:
Oracle Performance Tuning with Solid State Disk
If you ask real nice!
I am looking forward to seeing all of you at RMOUG and at these future events:
SEOUC - Charlotte, North Carolina 24-25 Feb
TMS Road Show - San Jose, California 2 March Register Here
TMS Road Show - Chicago, Illinois 8 March Register Here
IOUG Collaborate - Las Vegas, Nevada 18-22 April
ODTUG - Washington, DC 28 June-1 July
Time allowing, I will blog each day at these events to let those of you who can't attend know what is happening and what you are missing!
See you!
Mike Ault
Oracle Guru
Texas Memory Systems, Inc.