Here we are at the end of another ECO conference. Yesterday I presented "Putting on the Flash," describing my testing of the flash cache options in 11gR2. Today I present an updated "Exadata: A Critical Review," which now includes some data on the X3.
It has been perfect weather here, but they are expecting thunderstorms today; I hope my flight home isn't delayed. The IBM conversion of TMS is going well so far, from what I can see from Atlanta. The usual issues with organization and understanding the "we versus they" way of doing things are happening, but I don't believe it is anything that can't be resolved.
The biggest new feature in Oracle12c seems to be the concept of pluggable databases. I will review this feature once I can actually play with it, but since I haven't been invited to the beta party I will have to wait like the rest of you, unless IBM can use their partner clout to get an advance copy. I am trying to get the lab in Houston to provide me with a RamSan-820 and an IBM server so I can begin testing for performance and other topics with our new technologies. Keep watching for anything interesting I find!
Thursday, October 18, 2012
Tuesday, October 2, 2012
X3 the Next Generation
So Larry has announced the next big thing: the X3 series of the Exadata. After reviewing the specifications, I see that it mostly provides four times more flash cache, an improved caching algorithm and the ability to buy a 1/8th-rack configuration.
When will they finally ditch the disks and go all flash? Face it, we don't need more flash cache if the main storage is flash.
The graph above demonstrates what I am talking about. With a 40 gig SGA I set the flash cache to 90 gig. As you can see, using flash (a RamSan-810) as the main storage and running a standard TPC-C benchmark, the performance difference is negligible, with the zero level of flash cache actually performing better in the upper range. Why pay for 12 disk licenses in perpetuity when you don't need to? If you get the new Exadata storage cell with the increased flash cache, that is what you will be doing.
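For reference, the 11gR2 database smart flash cache used in this kind of test is controlled by two initialization parameters. A minimal sketch follows; the file path is a made-up example and the 90G size simply matches the test described above, so adjust both for your own system:

```sql
-- Sketch: enable a 90 GB database smart flash cache (11gR2).
-- The flash device path below is a hypothetical example.
alter system set db_flash_cache_file='/flash/flash_cache.dat' scope=spfile;
alter system set db_flash_cache_size=90G scope=spfile;

-- For the zero-flash-cache baseline run, set the size to 0 instead:
-- alter system set db_flash_cache_size=0 scope=spfile;
```

Both parameters require a restart to take effect when set in the spfile.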
Larry also placed great store in the HCC compression technology. However, you must realize that HCC requires a direct load of the data and is virtually useless in OLTP; in fact, Oracle themselves say not to use it in OLTP but to use standard advanced compression instead.
For what the Exadata, and now the X3, is designed for (data warehousing and environments with highly redundant, stable data) it will be a great performance booster, but for the rank and file OLTP databases I see it providing marginal performance increases at best.
Monday, October 1, 2012
Day One of Open World
I arrived in San Francisco at about 11 am local time on Sunday September 29, grabbed my luggage and jumped in a cab to make it to my first co-presentation, "VM Experts Panel," at 12:30 at Moscone West. Immediately following the VM Experts panel was my second presentation, "Database Performance Tuning: Get the Best out of Oracle Enterprise Manager 12c Cloud Control," with Tariq Farooq and Rich Niemiec. Both presentations were well attended, with standing room only for the second.
After finishing the second presentation I checked into my hotel, but only had a half hour before I had to get back to the Moscone Center for the opening day keynote. Fujitsu Corporate SVP Noriyuki Toyuki opened the keynote with Fujitsu's vision of the future, heavily pushing cloud initiatives with their smart cities and agricultural and medical applications.
The final half of the keynote was all Oracle CEO Larry Ellison, with Ellison making the first announcement of Oracle's IaaS (infrastructure as a service) initiative, where they will provide you with equipment, either hosted or on your site, and only charge you for the CPU and other resources you actually use. Larry also hinted that there will be new changes to the Exadata line of products "eliminating IO latency"; look for something involving Sun SSD PCIe cards or form-factor SSDs in the Exadata storage cells would be my guess.
Following the keynote I had to run to catch the bus, which I missed, and ended up taking a taxi to the ACE dinner at the St. Francis Yacht Club, where I finished the evening with a rousing political debate with my friend Steven Feuerstein. Needless to say, I ended up going to sleep at 10 pm local time after an exhausting but rewarding day.
Thursday, August 30, 2012
Cursor Sharing, Versioning and Optimizer Adaptive Cursor Sharing
As I review AWR and Statspack reports from the http://www.statspackanalyzer.com website, I often see excessive versioning. Versioning is when the optimizer, for one reason or another, decides you need to have multiple versions of the same SQL with different plans in the SQL area of the shared pool. Versioning is a good thing when used correctly; however, when it is used poorly it can result in large numbers of single-use SQL statements clogging up the shared pool and causing excessive recursion and CPU usage. Figure 1 shows an example of versioning.
Figure 1: Example of Excessive Versioning (Oracle11g 11.1.0.7)
As you can see in the example in Figure 1, we can get thousands of identical statements loaded into the shared pool as a result of versioning, and this was just a third of the full report section. In some versions of Oracle (10.2 and 11.2) there are bugs that can result in excessive versioning. In Oracle11g a new method of cursor sharing using more advanced bind variable peeking, called Optimizer Adaptive Cursor Sharing, was introduced. However, none of the advances in 11g seem to have corrected this issue with versioning.
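You don't need a full Statspack report to see whether your own database is versioning heavily; a quick query against V$SQLAREA will show the worst offenders. This is a sketch, and the cutoff of 20 versions is just an illustrative threshold, not an Oracle-recommended value:

```sql
-- Statements carrying large numbers of child cursors (versions)
select sql_id, version_count, executions, sql_text
from   v$sqlarea
where  version_count > 20
order by version_count desc;
```

Statements near the top of this list are the ones worth investigating with V$SQL_SHARED_CURSOR to find out why the children aren't being shared.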
Once you determine that your database is experiencing excessive versioning, there are three ways I know of to mitigate the situation. The first, of course, is to set CURSOR_SHARING to EXACT (the default) and eliminate cursor sharing (although this may be ineffective in some cases). Another method to reduce versioning is to set CURSOR_SHARING to FORCE and set "_SQLEXEC_PROGRESSION_COST" to either zero (0) or its maximum value on your system. The progression cost is used to determine the cost of SQL statements that will be reviewed for possible bind variable issues. Note that setting the progression cost to 0 also turns off the V$SESSION_LONGOPS view.
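As a sketch, the second mitigation would look something like the following. Remember that "_SQLEXEC_PROGRESSION_COST" is an underscore (hidden) parameter, so try this only on a test system and only after consulting Oracle support:

```sql
-- Force cursor sharing via system-generated bind variables
alter system set cursor_sharing=FORCE scope=both;

-- Set the progression cost to zero
-- (note: this also disables the V$SESSION_LONGOPS view)
alter system set "_sqlexec_progression_cost"=0 scope=spfile;
```

The spfile change to the hidden parameter takes effect at the next restart.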
In Oracle11g the Optimizer Adaptive Cursor Sharing feature may also cause versioning. If you want to turn off the feature, there are actually several undocumented parameters that you may have to "unset":
cursor_sharing=EXACT
"_optimizer_extended_cursor_sharing_rel"=NONE
"_optimizer_extended_cursor_sharing"=NONE
"_optimizer_adaptive_cursor_sharing"=FALSE
Let me stress that you shouldn't mess with undocumented parameters without talking to Oracle support. If you are having versioning issues, contact Oracle support and file an SR, then ask about these parameters and whether they will help in your release and in your situation.
Thursday, August 16, 2012
A Bolt from the Blue
In a move announced by press release today at 8 am EST, catching industry analysts flatfooted, IBM leaped to the front lines of the solid-state storage wars. In a single bold stroke they can lay claim to having the fastest, most advanced solid-state storage technology for their engineered systems. IBM has agreed to purchase TMS: lock, stock and RamSan.
I went to bed last night wearing the red shirt of TMS. However, having been a fan of Star Trek since it first made its TV debut, I should have known that red shirts are usually gone by the first commercial. Rather than meet death as any brave red-shirted Star Trek security officer would, I have instead been converted to a blue shirt in a wholly painless and, so far, enjoyable process. TMS will be a wholly owned subsidiary of IBM at closing later this year, with the ranks and privileges that entails. I guess I may have to resolve even to like AIX… :)
So watch this blog as TMS becomes a part of the IBM universe, for announcements and the joys of becoming a part of an iconic, undisputed leader in the computer world.
Wednesday, July 25, 2012
Getting the True Access Path in Oracle
Introduction
Many times I have been asked how to determine the actual access paths used by active SQL in the database. In the past we had to use EXPLAIN PLAN, TKPROF and various scripts or procedures to get the explain plan for a given statement and then use that to determine access paths. The problem was that, due to the way the SQL was injected into the database (manually, or via Java, C or PL/SQL, for example), the actual execution plan could differ from what was shown via a manual explain plan. In Oracle9i that all changed with the V$SQL_PLAN dynamic performance view.
Using the V$SQL_PLAN Dynamic Performance View
The V$SQL_PLAN DPV provides a single place to go to retrieve information about the execution plans currently being utilized by the database. For each active SQL statement in the shared pool, there will be a stored plan in the V$SQL_PLAN view.
The structure of the V$SQL_PLAN DPV is nearly identical to that of the PLAN_TABLE used for EXPLAIN PLAN and TKPROF plan display, with the exception that the V$SQL_PLAN DPV is automatically populated and de-populated as SQL is placed in the shared pool and aged out of it. This allows real-time viewing of how objects are currently being accessed in the database by current SQL code.
Querying the V$SQL_PLAN DPV
For individual SQL statements, if you know their SQL identifier, you can utilize the DBMS_XPLAN package to access their plan data in the DPV as a standard-format explain plan. In versions prior to 10g (9i, version 9.2.0.8 and greater, to be exact) you had to create a view to allow DBMS_XPLAN to access the data in the DPV:
create view seeplan as
select sp.*,
       to_char(hash_value) statement_id,
       sysdate timestamp
from v$sql_plan sp;
Once the view exists any plan can be retrieved when you know its STATEMENT_ID:
SQL> select * from table(DBMS_XPLAN.DISPLAY('SEEPLAN','1683243911'));
In Oracle 10g and above, use DBMS_XPLAN.DISPLAY_CURSOR with the SQL_ID and child number to get the data directly from the DPV. However, what if we want summaries of the data?
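For example, once you have a SQL_ID (from V$SQL or V$SQLAREA), the call looks like this. The SQL_ID shown is a made-up placeholder, not from the example report:

```sql
-- Display the plan for child cursor 0 of the given SQL_ID;
-- pass NULL as the child number to display all children.
select * from table(dbms_xplan.display_cursor('8mqzgs7vxn2rp', 0));
```

A third argument controls the level of detail (e.g. 'BASIC', 'TYPICAL', 'ALL') if you want more or less than the default output.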
Summarizing the Data in V$SQL_PLAN
Very often I want to know what objects are using full scans and a few other access paths that can be detrimental to performance. Now, I could go through the plans one by one in the V$SQL_PLAN DPV, but why do that when writing a query that does the work for you is so much easier? Look at the following SQL*Plus report:
rem fts report
rem based on V$SQL_PLAN table
col operation format a13
col object_name format a32
col object_owner format a10
col options format a15
col executions format 999,999,999
col fts_meg format 999,999.999
column dt new_value today noprint
select to_char(sysdate,'ddmonyyyyhh24miss') dt from dual;
set pages 55 lines 132 trims on
ttitle 'Full Table/Index Scans'
spool fts&&today
select
  a.hash_value, a.object_owner, a.object_name,
  rtrim(a.operation) operation,
  a.options,
  sum(b.executions) executions, c.bytes,
  (sum(b.executions)*c.bytes)/(1024*1024) fts_meg
from
  v$sql_plan a, v$sqlarea b, dba_segments c
where
  (a.object_owner=c.owner and a.object_name=c.segment_name)
  and a.address=b.address
  and a.operation IN ('TABLE ACCESS','INDEX')
  and nvl(a.options,'NULL') in ('FULL','NULL','FULL SCAN')
  and a.object_owner not in ('SYS','SYSTEM','PERFSTAT','SYSMAN','WKSYS','DBSNMP')
  and b.executions>1
group by
  a.hash_value, a.object_owner, a.object_name, operation, options, c.bytes
order by
  a.object_owner, a.object_name, operation, options, executions desc
/
spool off
set pages 20
ttitle off
The report generates an output similar to the following:
Wed Jan 21                         Full Table/Index Scans                  page 1

HASH_VALUE OBJECT_OWN OBJECT_NAME            OPERATION     OPTIONS    EXECUTIONS      BYTES     FTS_MEG
---------- ---------- ---------------------- ------------- ---------- ----------- ---------- -----------
3156122473 EXFSYS     RLM$JOINQKEY           INDEX         FULL SCAN            2      65536        .125
2786539032 FLOWS_0300 WWV_FLOW_MAIL_QUEUE_PK INDEX         FULL SCAN            4      65536        .250
4239944566 TPCC       C_ORDER                TABLE ACCESS  FULL             1,590   19922944  30,210.000
1311609328 TPCC       C_ORDER                TABLE ACCESS  FULL               226   19922944   4,294.000
The report clearly shows which objects are seeing full-scan activity, the SQL responsible (at least its hash value) and the expected megabyte impact on the database cache area. Using the hash value, the SQL can be quickly retrieved from either the V$SQLTEXT or the V$SQL_PLAN DPVs and tuning decisions made.
Retrieving the SQL allows you to see what other objects are involved and the WHERE clauses used, and then to retrieve the object's indexes to see if perhaps a better index can be constructed to eliminate any unneeded full table scans.
Thursday, June 21, 2012
Can You Take a Hint?
Hints in Oracle have been around since version 8. Hints are like compiler directives that tell Oracle what path to take when optimizing a SQL statement (generally speaking). However, Oracle will ignore the hint if it can't comply with it or if the hint is formatted poorly.
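For readers who haven't used them, a hint is just a specially formatted comment placed immediately after the SQL verb. A couple of illustrative examples follow; the table and index names are hypothetical:

```sql
-- Force a full table scan on the table aliased as e
select /*+ FULL(e) */ empno, ename
from emp e
where deptno = 10;

-- Request a specific (hypothetical) index; note that a malformed hint,
-- e.g. /* FULL(e) */ without the plus sign, is silently treated as an
-- ordinary comment and ignored
select /*+ INDEX(e emp_dept_idx) */ ename
from emp e
where deptno = 10;
```

Because an invalid hint produces no error or warning, a typo in a hint simply leaves you with the optimizer's own plan.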
Most tuning products for SQL will make use of hints if statement rearrangement doesn't solve the problem. In later versions of Oracle, outlines became available. You see, in many applications the SQL is considered source code and the end user is of course not allowed to modify it. If you cannot place the hint into the code, then of course hints could not be used for those applications with source-code restrictions. Outlines allowed hints to be tied to a SQL statement ID tag (a signature) and to be applied at run time by creating an execution plan containing the hints and storing that plan to be used when the SQL with the proper ID was recognized by the Oracle kernel. Outlines were the first stage of execution plan stabilization (ensuring that for a specific SQL statement the execution plan stayed the same, even with changes in the underlying statistics at the table and index level).
Now, in 10g and beyond, Oracle provides not only hints and outlines, but a new execution plan stabilization mechanism called profiles. Profiles alter the costs of various optimizer branches to optimize a specific piece of SQL code.
Generally speaking, adding hints was considered to be the last thing you did if you couldn't get the SQL optimizer to properly optimize the SQL statement through the use of indexes, statement rewrite or initialization parameter adjustments. Why? Well, by adding hints to SQL statements you generated possible documentation issues if you didn't properly document where hints had been placed in the application code and why. The use of hints could also result in excessively stable plans that wouldn't change even if they needed to, which is not a good situation to be in. Finally, hints are changed, dropped and deprecated in each release of Oracle, so with each patch or upgrade you would need to re-verify each hinted statement to be sure Oracle didn't send along a little surprise in the upgrade or patch.
In my opinion it is good practice to use hints only when no other method of SQL statement optimization can be applied. This also applies to outlines and profiles if you don't have access to change the source code.