Part of my duties (Mike here) at TMS is managing the Statspack Analyzer website. Managing the site mostly consists of reviewing comments in the forum there and analyzing the Statspack and AWR reports it can't handle. What can cause a report not to be handled by the website? Let's look at the possibilities.
1. The report contains Oracle generated errors
2. The report is in HTML format
3. The parser for one reason or another can't handle the report format
The first situation returns an error 4 code to the user stating that an Oracle error has occurred. This means the parser found an Oracle error in the report, not that the site itself had an error. Usually this happens with Oracle10g or newer Statspack reports, and the culprit is this bit of error stack somewhere in the report:
truncate table STATS$TEMP_SQLSTATS
*
ERROR at line 1:
ORA-00942: table or view does not exist
If you remove that bit of error, your report will process in the parser just fine.
The next problem is that, for now, the parser can only handle text reports generated by the spreport.sql or awrrpt.sql scripts. It cannot handle HTML reports converted to text by an HTML converter. These report scripts are located in $ORACLE_HOME/rdbms/admin (or the equivalent on Windows) and can be run from SQL*Plus by any user with DBA privileges. Do not spool the output; use the text file the script itself generates. Unless you tell them otherwise, the scripts write the reports into the directory from which you run them. Then you simply cut and paste the text from the report into the window on the StatspackAnalyzer site. I'm afraid we can't yet take the new RAC-based, multi-instance reports.
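If you haven't run one of these scripts before, here is a minimal sketch of producing a text-format AWR report from SQL*Plus; the snapshot IDs and report name below are only examples, and the script will prompt you for your own values (spreport.sql prompts for its Statspack snapshots in much the same way):
SQL> @?/rdbms/admin/awrrpt.sql
Enter value for report_type: text
Enter value for num_days: 1
Enter value for begin_snap: 1234
Enter value for end_snap: 1235
Enter value for report_name: awrrpt_1_1234_1235.txt
The resulting .txt file in your current directory is what the parser expects.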
The final thing that can prevent use of the site is a report that, for one reason or another, deviates from the standard Statspack or AWR format. For example, if you have set some of the optional reporting features in the Statspack or AWR setup tables, or have customized the report output in any way, the result can be a report the parser doesn't recognize. Another issue can be that you are sending in a report that is simply too new. Unfortunately we can't always keep the site up to date with every new report format Oracle comes out with, but please give us time!
So, if after looking this over and making any needed corrections on your side of the fence, your AWR or Statspack reports still won't process, send them in using the link on the StatspackAnalyzer website and I will take a look and give you my recommendations.
Wednesday, March 21, 2012
Monday, March 19, 2012
Accelerating Your Existing Architecture
Very often we hear today how you need to completely throw away your existing disk based architecture and move in the latest, greatest set of servers, disks, flash and who knows what all in order to get better performance, scalability and so forth. The new configurations pile on the storage, flash and CPUs and of course, license fees. In some cases it has reached the point where you are paying 2-3 times the hardware cost in license fees alone!
What if you could double or triple your performance, not pay any additional license fees and still be able to use your existing servers and disk storage? Would that make you a hero to your CFO or what? First let’s discuss what would be required to do this miracle.
In a perfect world we would use exactly the amount of storage we need, have instant access to it, use an exact amount of CPU, and hand the CPU to other processes when we are done, getting our work done as quickly as possible. Unfortunately, what usually happens is that a process makes a request, the CPU issues an IO request, is told it has to wait, and so it sits registering idle time. It is quite possible to have nearly idle CPUs and yet not be able to get any work done. This is usually due to IO wait conditions.
One major contributor to IO wait conditions is what I call read-poisoning. If all a disk had to do were writes, it would function very effectively, since controllers can optimize writes to be very efficient. Likewise, if all we did were reads, the disks would be happy and we could optimize for reads. Unfortunately we usually have a mixture of reads and writes going to the same disks in the array, with reads outnumbering writes about 4 to 1 (an 80/20 read-to-write ratio). With Oracle, any time you slow down reads you will cause performance issues.
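If you want to check your own read/write mix, a quick sketch against V$FILESTAT (cumulative datafile IO since instance startup; temporary files are tracked separately in V$TEMPSTAT) looks like this:
select sum(phyrds)  physical_reads,
       sum(phywrts) physical_writes,
       round(sum(phyrds)/nullif(sum(phywrts),0),1) reads_per_write
  from v$filestat;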
Oracle waits on reads to complete; it has to, unless the data is already stored in the DB or Flash caches. For most things, Oracle is write agnostic. What do I mean by write agnostic? Oracle uses a process called delayed block cleanout whereby data is kept in memory until it absolutely has to be written, and when data is written it is usually done in batches. This is why Oracle doesn't report in AWR and Statspack reports the milliseconds it takes to write data; with a few exceptions, it really doesn't care!
When does Oracle need writes to be fast? When it is waiting on those writes! When does Oracle wait on writes? There are only a few instances when Oracle will be waiting on writes:
1. Redo log writes
2. Temporary tablespace writes
3. Undo tablespace writes (although these have been greatly reduced by in-memory undo)
Unless you are in a high-transaction environment like a stock exchange, redo writes rarely cause issues, and since the implementation of in-memory undo with Oracle10g, undo write issues have also faded into obscurity. That leaves temporary tablespace writes, which, when they occur, usually cause the most issues. Temporary tablespaces are used for:
1. Sorts
2. Hashes
3. Temporary table activity
4. Bitmap operations
Since more than sorts are done there, it is quite possible to have zero sort operations and yet have the temporary tablespace be a major source of IO; I have seen this with hash joins and temporary table operations.
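One quick way to see what is actually consuming temporary space at any given moment is to group V$TEMPSEG_USAGE by segment type, which separates sort, hash and temporary table segments; a simple sketch:
select tablespace, segtype, count(*) temp_segments, sum(blocks) blocks
  from v$tempseg_usage
 group by tablespace, segtype
 order by tablespace, segtype;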
So, to optimize an existing system we need to split reads off from writes and isolate the effects of large writes, such as temporary tablespace activity, from general table and index storage. Luckily, Oracle11g R2 gives us a means to do this. In Oracle11g R2 ASM, each instance using ASM can designate its own preferred-read failure group within a specific disk group. This feature was intended to let remote (relatively speaking) RAC instances read from local storage to preserve performance, but we can also use it to optimize a single instance's performance. If you aren't using ASM, your disk management tool may have a similar capability.
If we add a suitable amount of RamSan flash-based storage to a server's storage setup, we can specify that the flash half of a disk group in ASM (or another storage manager) be the preferred-read failure group. For example, let's put a RamSan-720 or RamSan-820 into an existing storage subsystem. The 720 and 820 have no single point of failure within the devices themselves, so unless we just want the added security, there isn't a need to mirror them. The 720 comes in 6 or 12 terabyte SLC flash configurations with 5 or 10 terabytes available after the high-availability configuration is set. The 820 comes in 12 or 24 terabyte eMLC flash configurations with 10 or 20 terabytes available after HA configuration. Did I mention both of these are 1U rack mounts? Both units give sub-200-microsecond (0.2 millisecond) read times and sub-50-microsecond (0.05 millisecond) write times.
So, now we have, for argument's sake, a RamSan-820 with 20 terabytes and our existing SAN, of which we are using 10 terabytes for the database, with the potential to grow to 15 terabytes over the next 3 years. We create a disk group (with the database still active, mind you, if we are currently using ASM) with the existing 15 terabytes of disk in one failure group and a 15 terabyte LUN created on the RamSan in another. Once ASM finishes rebalancing the disk group, from the instance that uses it we assign the disk group's RamSan failure group as the preferred-read mirror failure group.
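For reference, the kind of hybrid disk group being described could be created from the ASM instance roughly as follows; the disk discovery strings and the compatibility attribute are illustrative assumptions rather than the actual configuration, while the group and failure group names match the HYBRID.SSD and HYBRID.DISK values used below:
create diskgroup hybrid normal redundancy
  failgroup disk disk '/dev/mapper/san_lun*'
  failgroup ssd  disk '/dev/mapper/ramsan_lun*'
  attribute 'compatible.asm' = '11.2';
With the disk group rebalanced, the preferred-read assignment from the database instance is a single ALTER SYSTEM: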
SQL> alter system set ASM_PREFERRED_READ_FAILURE_GROUPS = 'HYBRID.SSD';
System altered.
Now we should see immediate performance improvements. But what about the 5 terabytes we have left on the RamSan? We use them for the redo logs, undo and temporary tablespaces. All of these structures can, in most cases, be reassigned or rebuilt with the database up and running. This provides high-speed writes (and reads) for the write-sensitive files, high-speed reads for data and indexes, and removes the read-poisoning from the existing disk-based SAN. Notice we did it without adding Oracle license fees! And, with any luck, we did it with zero or minimal downtime and no Oracle consulting fees!
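For what it's worth, here is a sketch of the kind of online moves this involves; the flash-only disk group name (SSD_ONLY), the sizes and the group numbers are purely illustrative:
-- new redo log groups on the flash-only storage; drop the old groups
-- one at a time once each is inactive and archived
alter database add logfile group 11 ('+SSD_ONLY') size 1g;
alter database drop logfile group 1;
-- a new temporary tablespace on flash, made the database default
create temporary tablespace temp_ssd tempfile '+SSD_ONLY' size 100g;
alter database default temporary tablespace temp_ssd;
-- a new undo tablespace on flash, then switch the instance over to it
create undo tablespace undotbs_ssd datafile '+SSD_ONLY' size 100g;
alter system set undo_tablespace = 'UNDOTBS_SSD';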
Now the IO requests complete 10-20 or more times faster, which means the CPU spends less time waiting on IO and more time working. In tests using a RamSan-620 running 2,000,000 queries that performed 14,000,000 IOs, the configuration using the RamSan as the preferred-read mirror completed nearly 10 times faster than the standard disk configuration.
When the test is run against the architecture with the preferred-read failure group (PRG) set to HYBRID.DISK, we see the following results:
• ~4,000 IOPS per RAC node (16,000 IOPS total)
• 12.25 minutes to complete with 4 nodes running (2 million queries)
[oracle@opera1 ~]$ time ./spawn_50.sh
real: 12m15.434s
user: 0m5.464s
sys: 0m4.031s
When the test is run against the architecture with PRG set to HYBRID.SSD we see the following results:
• 40,000 IOPS per RAC node (160,000 IOPS total in this test)
• 1.3 minutes to complete with 4 nodes running (2 million queries)
[oracle@opera1 ~]$ time ./spawn_50.sh
real: 1m19.838s
user: 0m4.439s
sys: 0m3.215s
So, as you can see, you can optimize your existing architecture (assuming you have Oracle11g R2 or a disk manager that can do preferred-read mirroring) to get 10-20 times the performance just by adding a RamSan solid-state storage appliance.
Friday, March 16, 2012
Cluster Headaches
In my SSD testing I use the standard benchmarks, TPC-C and TPC-H, to simulate OLTP and DSS/DWH environments. Instead of re-inventing the wheel, I use schema examples gleaned from tests on similar hardware that have been published at the http://www.tpc.org/ website.
In creating the TPC-C schema I used a schema model based on a successful TPC-C run on an HP platform. In this schema, several of the tables were created as single-table or multi-table clusters. During the initial load I found that the multi-table cluster wouldn't load correctly, at least when loading from external tables, so I broke it into two indexed and referentially related tables. However, I left the single-table clusters alone, believing them to be more efficient.
In a single-table cluster the primary key values are hashed into a known set of blocks, making rows easier to look up: the hash function identifies the blocks and those blocks are read directly. This is supposed to be faster than an index lookup followed by a table lookup because of the hash lookup and the co-location of the primary key blocks.
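For illustration only (this is not the actual TPC-C DDL), a single-table hash cluster looks something like the following, shown next to the plain heap table and primary-key index that replaced it:
create cluster customer_cl (c_id number)
  size 512 single table hashkeys 100000;

create table customer_hashed (
  c_id   number,
  c_name varchar2(30),
  c_data varchar2(500))
  cluster customer_cl (c_id);

-- the replacement: an ordinary heap table with a primary-key index
create table customer_heap (
  c_id   number primary key,
  c_name varchar2(30),
  c_data varchar2(500));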
I noticed during reloads that the single-table clusters took longer to finish loading than non-clustered tables of a similar size, so I decided to test whether the clustering was having a positive or negative effect on database performance. In this test I replaced all clustered tables with ordinary tables plus primary-key indexes and used the configuration that gave the best previous performance (no flash cache, no keep or recycle pools, a maximized db cache, and FIRST_ROWS_(n) set to 1). The results are shown in Figure 1.

Figure 1: The Effect of Removing Table Clusters
Surprisingly, removing the clusters increased performance from a peak of 6,435 tps to a peak of 7,065 tps, nearly a 10% increase. This corresponds to a non-audited tpmC value of 197,378.310, which would be equivalent to a result from a system with around 200 disk drives; from my research I generally find about 1,000 tpmC per physical disk drive, depending on the amount of cache and the speed and type of disk used.
It appears that the SSD reduces latency to the point where disk-access-time-saving features such as table clustering may actually incur more processing overhead than they save through the supposed reduction in IO.
Wednesday, March 14, 2012
Flash Over Disk
Since the test of the flash cache against an internal PCIe flash card proved inconclusive, I decided to have the lab hook up some disks and re-run the tests using a disk array containing 24 10K-RPM 300 GB disks for the tables and indexes. The DB_CACHE_SIZE was increased to 50 GB and the DB_FLASH_CACHE_SIZE was set to 300 GB. Figure 1 shows the results for the disk array with and without the 300 GB flash cache.
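For context, the server-side flash cache is configured with two initialization parameters alongside the normal buffer cache sizing; a sketch, where the device path is an assumption rather than the lab's actual flash device, and the changes take effect at the next restart:
alter system set db_flash_cache_file = '/dev/flash_cache_p1' scope=spfile;
alter system set db_flash_cache_size = 300g scope=spfile;
alter system set db_cache_size = 50g scope=spfile;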

Figure 1: Disk versus Disk plus Flash Cache Performance
As you can see from reviewing the graph, the flash cache definitely helped performance at all levels of our user range. It also showed that, with the same hardware, the sustained performance increase could be extrapolated to a larger number of users. So in the case of using a flash cache with disks, yes, performance is gained.
While running this test I saw indications that over 160 gigabytes of data blocks were cached in the flash cache. Figure 2 shows the SQL script used to determine flash usage for a single schema owner, and Figure 3 shows an example of its output during test runs.
set lines 132 pages 55
col object format a45
select owner||'.'||object_name object,
sum(case when b.status like 'flash%' then 1 end) flash_blocks,
sum(case when b.status like 'flash%' then 0 else 1 end) cache_blocks,
count(*) total_cached_blocks
from v$bh b join dba_objects o
on (objd=object_id)
where owner = upper('&owner')
group by owner, object_name
order by owner,4 asc;
Figure 2: SQL Script to see cached objects for an owner

Figure 3: Example use of Flash Cache
Just to put things in perspective, let’s put the top pure-Flash database results against these disk and Flash cache results. Look at Figure 4.

Figure 4: Flash only, Disk Only and Disk plus Flash Cache Results
In reviewing Figure 4 you should first note that it is a logarithmic plot, meaning each division on the left axis represents a factor of 10. The figure shows that pure flash far outperforms even the best we can expect from a combination of flash and disk: in this case by nearly a factor of 7. The peak performance we obtained from disk combined with a flash cache was 1,024 TPS, while the peak we obtained in our flash tests was over 7,000 TPS. Even in previous testing with larger disk arrays, the peak performance I obtained from disk was only in the 2,000 TPS range, again showing that SSD technology is superior to an equivalent disk array.

Monday, March 12, 2012
Flash on Flash
Why exactly did Oracle create the flash cache concept? Well, generally speaking, flash is faster than disk for data access, so putting a bit of flash into your server and using it as an L2 cache for the database buffer cache makes sense. In this test we aren't dealing with the Exadata cell-based flash cache, but with the server-based DB flash cache.
But what happens when disk isn't the storage medium? Let's look at a flash-on-flash test case using a RamSan-630 flash solid-state storage appliance and a RamSan-70 PCIe server-mounted flash card.
Figure 1: Test Configuration
The flash cache was sized at the lower end of the suggested 2X to 10X of the database cache size (90 GB), and then a second run was made with the flash cache set to zero. Note that for the first run the appropriate tables and indexes were assigned to be kept in the flash cache, while the other tables were left at the default. Figure 2 shows the results from using the Smart Flash Cache with flash as the storage.
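Before looking at the results, a note on how that assignment is made: keeping an object in the flash cache is a storage attribute. A minimal sketch with illustrative object names (the actual benchmark schema objects are not listed here):
alter table customer storage (flash_cache keep);
alter index customer_pk storage (flash_cache keep);
-- objects left at FLASH_CACHE DEFAULT are cached only as space permits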

Figure 2: TPS versus GB in the Flash Cache
At least for our testing, with the database on a RamSan-630 SSD and the flash cache placed on a RamSan-70 PCIe card, the results do not encourage the use of the flash cache with a flash-based SAN. Review of the AWR results showed that the flash cache was indeed being used, but due to the small difference in overall latency between the RamSan-630 with InfiniBand interfaces and the RamSan-70 in the PCIe slot, the overall effect of the flash cache was negligible. According to the AWR results, when the flash cache was set to zero the predominant wait event was db file sequential read; when the flash cache was set to 90 GB, the db flash cache single block physical read event dominated the report, showing that the cache was in fact being used.
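A simple way to check this on your own system is to compare those two wait events directly; a sketch against V$SYSTEM_EVENT:
select event, total_waits,
       round(time_waited_micro / 1000 / nullif(total_waits, 0), 3) avg_ms
  from v$system_event
 where event in ('db file sequential read',
                 'db flash cache single block physical read');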
These results demonstrate that for a database system that is based on high-speed flash storage, the DB flash cache will not be needed.
Wednesday, November 17, 2010
DOAG in Nuremberg
Well, here it is Thursday in Nuremberg, Germany. On Tuesday I gave my presentation "Validating your IO Subsystem - Coming Out of the Black Box" to a packed room (about 100 attendees). Nobody threw anything, I didn't see anyone sleeping, no one walked out except right at the end, and everyone clapped at the finish, so I guess it was successful!
I have seen Steve Feuerstein, Tom Kyte, Daniel Morgan and several other big names in the industry here (as well as myself, I guess!)
The booth traffic has been moderate to light with a few folks stopping in for extended chats. There seems to be a lot of interest in SSDs and we still need to correct misinformation and bad data about SSDs.
Nuremberg (at least Alt Nuremberg, the walled inner city) is wonderful. Of course, it has been raining since our arrival on Sunday, which has limited sight-seeing (working from 7am to 5pm also puts a crimp in that), but we have usually been walking from the hotel into Old Town for dinner.
The DOAG conference is the largest in Germany and well worth the effort so far. If you are here and haven't stopped by, please do so!
Wednesday, October 13, 2010
News from VOUG 2010
Here in Richmond I am attending the VOUG 2010 conference. Rich Niemiec gave the keynote address on "How Oracle Came to Rule the Database World"; as usual, Rich gave a great presentation. We've had good booth traffic and some interested folks asking great questions.
In my first presentation, "Detailed AWR Analysis," I had a full room (about 30-40 folks) and lots of good questions. Overall there are about 150 attendees, essentially on par with last year, which is saying a lot in this economy! My second presentation (a vendor presentation), "Testing to Destruction: Part 2," was also well attended, with 20-30 attendees and loads of questions and positive comments.
Due to a scheduling SNAFU I am the only TMS person here today, so I am covering the booth/table as well as giving my presentations, which doesn't leave a lot of time to attend other sessions. Hopefully tomorrow I will be able to report on some other folks' papers.