Cost-Based Optimizer (CBO) And Database Statistics
When a valid SQL statement is sent to the server for the first time, Oracle produces an execution plan that describes how to retrieve the necessary data. In older versions of the database this execution plan could be generated using one of two optimizers:
- Rule-Based Optimizer (RBO) – This was the original optimization method and, as the name suggests, was essentially a list of rules Oracle had to follow to generate an execution plan. Even after the cost-based optimizer was introduced, this method was used if the server had no internal statistics relating to the objects referenced by the statement, or if explicitly requested by a hint or instance/session parameter. This optimizer was made obsolete, then deprecated in later versions of the database.
- Cost-Based Optimizer (CBO) – The CBO uses database statistics to generate several execution plans, picking the one with the lowest cost, where cost relates to the system resources required to complete the operation.
In newer versions of the database the cost-based optimizer is the only choice available. If new objects are created, or the amount or spread of data in the database changes, the statistics will no longer represent the real state of the database, so the CBO decision process may be seriously flawed. This article will focus on the management of statistics using the DBMS_STATS package, although there will be some mention of legacy methods.
Related articles.
- Automatic Optimizer Statistics Collection
- Statistics Collection Enhancements in Oracle Database 11g Release 1
- Dynamic Sampling
- Statistics Collection Enhancements in Oracle Database 12c Release 1 (12.1)
- Optimizer Statistics Advisor in Oracle Database 12c Release 2 (12.2)
Introduction
If you put 10 Oracle performance gurus in the same room they will all say database statistics are vital for the cost-based optimizer to choose the correct execution plan for a query, but they will all have a different opinion on how to gather those statistics. A couple of quotes that stand out in my mind are:
- “You don’t necessarily need up to date statistics. You need statistics that are representative of your data.” – Graham Wood.
  Meaning, the age of the statistics on your system isn’t a problem as long as they are still representative of your data. So just looking at the LAST_ANALYZED column of the DBA_TABLES view isn’t an indication of valid stats on your system.
- “Do you want the optimizer to give you the best performance, or consistent performance?” – Anjo Kolk
  Meaning, constantly changing your stats potentially introduces change. Change isn’t always a good thing.
Neither of these experts is suggesting you never update your stats, just pointing out that in doing so you are altering the information the optimizer uses to decide which execution plan is best. In altering that information it is quite possible the optimizer will make a different decision. Hopefully it will be the correct decision, but maybe it won't. If you gather statistics for all tables every night, your system can potentially behave differently every day. That is the fundamental paradox of gathering statistics.
So what should our statistics strategy be? Here are some suggestions.
- Automatic Optimizer Statistics Collection: From 10g onward the database automatically gathers statistics every day. The default statistics job has come under a lot of criticism over the years, but its worth depends on the type of systems you are managing. Most of that criticism has come from people discussing edge cases, like large data warehouses. If you are managing lots of small databases with relatively low performance requirements, you can pretty much let Oracle do its own thing where stats are concerned. If you have any specific problems, deal with them on a case by case basis.
- Mixed Approach: You rely on the automatic job for the majority of stats collection, but you have specific tables or schemas with very specific stats requirements. In those cases you can either set the preferences for the objects in question, or lock the stats for the specific tables/schemas to prevent the job from changing them, then devise a custom solution for those tables/schemas (a small sketch follows this list).
- Manual: You disable the automatic stats collection completely and devise a custom solution for the whole of the database.
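As a flavor of the mixed approach, here is a minimal sketch (the table name BIG_ORDERS is just a placeholder). The automatic job is left running, but one table gets its own preferences so the job gathers its stats the way you want.

EXEC DBMS_STATS.set_table_prefs('SCOTT', 'BIG_ORDERS', 'ESTIMATE_PERCENT', '100');
EXEC DBMS_STATS.set_table_prefs('SCOTT', 'BIG_ORDERS', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');

Preferences and stats locking are both covered in detail later in this article.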
Which of these approaches you take should be decided on a case-by-case basis. Whichever route you take, you will be using the DBMS_STATS package to manage your stats.
Regardless of the approach you take, you need to consider system and fixed object statistics for each database, as these are not gathered by the automatic job.
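Both are single calls to DBMS_STATS, covered in more detail later in this article.

EXEC DBMS_STATS.gather_system_stats;
EXEC DBMS_STATS.gather_fixed_objects_stats;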
DBMS_STATS
The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of gathering statistics. Oracle list a number of benefits to using it, including parallel execution, long term storage of statistics and transfer of statistics between servers.
The functionality of the DBMS_STATS package varies greatly between database versions, as do the default parameter settings and the quality of the statistics they generate. It's worth spending some time checking the documentation relevant to your version.
Table and Index Stats
Table statistics can be gathered for the database, schema, table or partition.
EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_dictionary_stats;
The ESTIMATE_PERCENT parameter was often used when gathering stats from large segments to reduce the sample size and therefore the overhead of the operation. From Oracle 9i upward, we also had the option of letting Oracle determine the sample size using the AUTO_SAMPLE_SIZE constant, but this got a bad reputation because the chosen sample size was sometimes inappropriate, making the resulting statistics questionable.
In Oracle 11g, the AUTO_SAMPLE_SIZE constant is the preferred (and default) sample size, as the mechanism for determining the actual sample size has been improved. In addition, the statistics estimates based on auto sampling are close to 100% accurate and much faster to gather than in previous versions, as described here.
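If you want to be explicit about it, you can pass the constant yourself, although in 11g upward simply omitting the ESTIMATE_PERCENT parameter has the same effect.

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);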
The CASCADE parameter determines if statistics should be gathered for all indexes on the table currently being analyzed. Prior to Oracle 10g, the default was FALSE, but in 10g upward it defaults to AUTO_CASCADE, which means Oracle determines if index stats are necessary.
Due to these changes in the stats gathering behavior, in Oracle 11g upward the standard defaults for gathering table stats are adequate for most tables.
Index statistics can be gathered explicitly using the GATHER_INDEX_STATS procedure.
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);
The current statistics information is available from the data dictionary views for the specific objects (DBA, ALL and USER views). Some of these views were added in later releases, and a sample query is shown after the list.
- DBA_TABLES
- DBA_TAB_STATISTICS
- DBA_TAB_PARTITIONS
- DBA_TAB_SUBPARTITIONS
- DBA_TAB_COLUMNS
- DBA_TAB_COL_STATISTICS
- DBA_PART_COL_STATISTICS
- DBA_SUBPART_COL_STATISTICS
- DBA_INDEXES
- DBA_IND_STATISTICS
- DBA_IND_PARTITIONS
- DBA_IND_SUBPARTITIONS
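As a quick example, the following query against DBA_TAB_STATISTICS shows basic table stats and staleness information for a schema.

SELECT table_name, num_rows, blocks, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'SCOTT'
ORDER BY table_name;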
Histogram information is available from the following views, with a sample query after the list.
- DBA_TAB_HISTOGRAMS
- DBA_PART_HISTOGRAMS
- DBA_SUBPART_HISTOGRAMS
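For example, the endpoints of any histograms on a table can be checked as follows.

SELECT column_name, endpoint_number, endpoint_value
FROM   dba_tab_histograms
WHERE  owner = 'SCOTT'
AND    table_name = 'EMP'
ORDER BY column_name, endpoint_number;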
Table, column and index statistics can be deleted using the relevant delete procedures.
EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMP');
EXEC DBMS_STATS.delete_column_stats('SCOTT', 'EMP', 'EMPNO');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMP_PK');
EXEC DBMS_STATS.delete_dictionary_stats;
System Stats
Introduced in Oracle 9iR1, the GATHER_SYSTEM_STATS procedure gathers statistics relating to the performance of your system's I/O and CPU. Giving the optimizer this information makes its choice of execution plan more accurate, since it is able to weigh the relative costs of operations using both the CPU and I/O profiles of the system.
There are two possible types of system statistics:
- Noworkload: All databases come bundled with a default set of noworkload statistics, but they can be replaced with more accurate information. When gathering noworkload stats, the database issues a series of random I/Os and tests the speed of the CPU. As you can imagine, this puts a load on your system during the gathering phase.
EXEC DBMS_STATS.gather_system_stats;
- Workload: When initiated using the start/stop or interval parameters, the database uses counters to keep track of all system operations, giving it an accurate idea of the performance of the system. If workload statistics are present, they will be used in preference to noworkload statistics.
-- Manually start and stop to sample a representative period (several hours) of system activity.
EXEC DBMS_STATS.gather_system_stats('start');
EXEC DBMS_STATS.gather_system_stats('stop');

-- Sample from now for a specific number of minutes.
EXEC DBMS_STATS.gather_system_stats('interval', interval => 180);
Your current system statistics can be displayed by querying the AUX_STATS$ table.
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

PNAME                               PVAL1
------------------------------ ----------
CPUSPEED
CPUSPEEDNW                           1074
IOSEEKTIM                              10
IOTFRSPEED                           4096
MAXTHR
MBRC
MREADTIM
SLAVETHR
SREADTIM

9 rows selected.

SQL>
If you are running 11.2.0.1 or 11.2.0.2 then check out MOS Note: 9842771.8.
The DELETE_SYSTEM_STATS procedure will delete all workload stats and replace any previously gathered noworkload stats with the default values.
EXEC DBMS_STATS.delete_system_stats;
You only need to update your system statistics when something major has happened to your system's hardware or workload profile.
There are two schools of thought on system stats. One side avoids the use of system statistics altogether, favoring the default noworkload stats. The other side suggests providing accurate system statistics. The problem with the latter is that it is very difficult to decide what represents an accurate set of system statistics. Most people seem to favor analyzing their systems using a variety of methods, including gathering system stats into a stats table, then manually setting the system statistics using the SET_SYSTEM_STATS procedure.
EXEC DBMS_STATS.set_system_stats('iotfrspeed', 4096);
The available parameter names can be found here.
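If you want to capture workload stats for review before applying anything, one possible workflow (a sketch; the schema and table names are just placeholders) is to gather them into a stats table first.

EXEC DBMS_STATS.create_stat_table('DBASCHEMA', 'SYSTEM_STATS_TAB');
EXEC DBMS_STATS.gather_system_stats(gathering_mode => 'interval', interval => 180, stattab => 'SYSTEM_STATS_TAB', statown => 'DBASCHEMA');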
I would say, if in doubt, use the defaults.
Fixed Object Stats
Introduced in Oracle 10gR1, the GATHER_FIXED_OBJECTS_STATS procedure gathers statistics on the X$ tables, which sit underneath the V$ dynamic performance views. The X$ tables are not really tables at all, but a window onto the memory structures in the Oracle kernel. Fixed object stats are not gathered automatically, so you need to gather them manually at a time when the database has a representative level of activity.
EXEC DBMS_STATS.gather_fixed_objects_stats;
Major changes to initialization parameters or system activity should signal you to gather fresh stats, but under normal running this doesn't need to be done on a regular basis.
The stats are removed using the DELETE_FIXED_OBJECTS_STATS procedure.
EXEC DBMS_STATS.delete_fixed_objects_stats;
Locking Stats
To prevent statistics being overwritten, you can lock the stats at schema, table or partition level.
EXEC DBMS_STATS.lock_schema_stats('SCOTT');
EXEC DBMS_STATS.lock_table_stats('SCOTT', 'EMP');
EXEC DBMS_STATS.lock_partition_stats('SCOTT', 'EMP', 'EMP_PART1');
If you need to change the stats, they must first be unlocked.
EXEC DBMS_STATS.unlock_schema_stats('SCOTT');
EXEC DBMS_STATS.unlock_table_stats('SCOTT', 'EMP');
EXEC DBMS_STATS.unlock_partition_stats('SCOTT', 'EMP', 'EMP_PART1');
Locking stats can be very useful to prevent automated jobs from changing them. This is especially useful with tables used for ETL processes. If the stats are gathered when the tables are empty, they will not reflect the real quantity of data during the load process. Instead, either gather stats each time the data is loaded, or gather them once on a full table and lock them.
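For example, for a hypothetical staging table (STAGE_LOAD is a placeholder name) that is only representative when fully loaded, you might gather stats once at that point and lock them.

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'STAGE_LOAD');
EXEC DBMS_STATS.lock_table_stats('SCOTT', 'STAGE_LOAD');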
Transferring Stats
It is possible to transfer statistics between servers, allowing consistent execution plans between servers with varying
amounts of data. First the statistics must be collected into a statistics table. In the following examples the statistics
for the APPSCHEMA user are collected into a new table, STATS_TABLE, which is owned by DBASCHEMA.
EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
EXEC DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
This table can then be transferred to another server using your preferred method (Export/Import, SQL*Plus COPY etc.) and
the stats imported into the data dictionary as follows.
EXEC DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
EXEC DBMS_STATS.drop_stat_table('DBASCHEMA','STATS_TABLE');
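For completeness, one way to move the stats table between servers is Data Pump in table mode. This is just a sketch; the directory object, dump file name and credentials are assumptions.

expdp dbaschema/password tables=DBASCHEMA.STATS_TABLE directory=DATA_PUMP_DIR dumpfile=stats_table.dmp
impdp dbaschema/password tables=DBASCHEMA.STATS_TABLE directory=DATA_PUMP_DIR dumpfile=stats_table.dmp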
Setting Preferences
Since Oracle 10g, many of the default values of parameters for the DBMS_STATS procedures have changed from being hard coded to using preferences. In Oracle 10g, these preferences could be altered using the SET_PARAM procedure.
EXEC DBMS_STATS.set_param('DEGREE', '5');
In 11g, the SET_PARAM procedure was deprecated in favor of a layered approach to preferences. The four levels of preferences are amended with the following procedures.
- SET_GLOBAL_PREFS : Used to set global preferences, including some specific to the automatic stats collection job.
- SET_DATABASE_PREFS : Sets preferences for the whole database.
- SET_SCHEMA_PREFS : Sets preferences for a specific schema.
- SET_TABLE_PREFS : Sets preferences for a specific table.
The available preferences are listed below, along with the available scope (G=Global, D=Database, S=Schema, T=Table).
Preference | Description | Default (11gR2) | Scope | Version |
---|---|---|---|---|
CASCADE | Determines if index stats should be gathered for the current table (TRUE, FALSE, AUTO_CASCADE). | DBMS_STATS.AUTO_CASCADE | G, D, S, T | 10gR1+ |
DEGREE | Degree of parallelism (integer or DEFAULT_DEGREE). | DBMS_STATS.DEFAULT_DEGREE | G, D, S, T | 10gR1+ |
ESTIMATE_PERCENT | Percentage of rows to sample when gathering stats (0.000001-100 or AUTO_SAMPLE_SIZE). | DBMS_STATS.AUTO_SAMPLE_SIZE | G, D, S, T | 10gR1+ |
METHOD_OPT | Controls column statistics collection and histogram creation. | FOR ALL COLUMNS SIZE AUTO | G, D, S, T | 10gR1+ |
NO_INVALIDATE | Determines if dependent cursors should be invalidated as a result of new stats on objects (TRUE, FALSE or AUTO_INVALIDATE). | DBMS_STATS.AUTO_INVALIDATE | G, D, S, T | 10gR1+ |
AUTOSTATS_TARGET | Determines which objects have stats gathered (ALL, ORACLE, AUTO). | AUTO | G | 10gR2+ |
GRANULARITY | The granularity of stats to be collected on partitioned objects (ALL, AUTO, DEFAULT, GLOBAL, ‘GLOBAL AND PARTITION’, PARTITION, SUBPARTITION). | AUTO | G, D, S, T | 10gR2+ |
PUBLISH | Determines if gathered stats should be published immediately or left in a pending state (TRUE, FALSE). | TRUE | G, D, S, T | 11gR2+ |
INCREMENTAL | Determines whether incremental stats will be used for global statistics on partitioned objects, rather than generated using full table scans (TRUE, FALSE). | FALSE | G, D, S, T | 11gR2+ |
CONCURRENT | Should statistics be gathered on multiple objects at once, or one at a time (MANUAL, AUTOMATIC, ALL, OFF). | OFF | G | 12cR1+ |
GLOBAL_TEMP_TABLE_STATS | Should stats on global temporary tables be session-specific or shared between sessions (SHARED, SESSION). | SESSION | G, D, S | 12cR1+ |
INCREMENTAL_LEVEL | Which level of synopses should be collected for incremental partitioned statistics (TABLE, PARTITION). | PARTITION | G, D, S, T | 12cR1+ |
INCREMENTAL_STALENESS | How the staleness of partition statistics is determined (USE_STALE_PERCENT, USE_LOCKED_STATS, NULL). | NULL | G, D, S, T | 12cR1+ |
TABLE_CACHED_BLOCKS | The number of blocks cached in the buffer cache during calculation of the index clustering factor. Jonathan Lewis recommends “16” as a sensible value. | 1 | G, D, S, T | 12cR1+ |
OPTIONS | Used for the OPTIONS parameter of the GATHER_TABLE_STATS procedure (GATHER, GATHER AUTO). | GATHER | G, D, S, T | 12cR1+ |
The following shows their typical usage.
EXEC DBMS_STATS.set_global_prefs('AUTOSTATS_TARGET', 'AUTO');
EXEC DBMS_STATS.set_database_prefs('STALE_PERCENT', '15');
EXEC DBMS_STATS.set_schema_prefs('SCOTT','DEGREE', '5');
EXEC DBMS_STATS.set_table_prefs('SCOTT', 'EMP', 'CASCADE', 'FALSE');
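To check what is currently in force for a given table, you can query DBA_TAB_STAT_PREFS or call the GET_PREFS function, for example:

SELECT preference_name, preference_value
FROM   dba_tab_stat_prefs
WHERE  owner = 'SCOTT'
AND    table_name = 'EMP';

SELECT DBMS_STATS.get_prefs('CASCADE', 'SCOTT', 'EMP') FROM dual;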
Global preferences can be reset and the other layers of preferences deleted using the following procedures.
EXEC DBMS_STATS.reset_global_pref_defaults;
EXEC DBMS_STATS.delete_database_prefs('CASCADE');
EXEC DBMS_STATS.delete_schema_prefs('SCOTT','DEGREE');
EXEC DBMS_STATS.delete_table_prefs('SCOTT', 'EMP', 'CASCADE');
Setting Stats Manually
The DBMS_STATS package provides several procedures for manually setting statistics.
SET_SYSTEM_STATS
SET_TABLE_STATS
SET_COLUMN_STATS
SET_INDEX_STATS
The current stats can be returned using the following procedures.
GET_SYSTEM_STATS
GET_TABLE_STATS
GET_COLUMN_STATS
GET_INDEX_STATS
Be careful when setting stats manually. Probably the safest approach is to get the current values, amend them as required, then set them. An example of setting column statistics is shown below.
SET SERVEROUTPUT ON
DECLARE
  l_distcnt  NUMBER;
  l_density  NUMBER;
  l_nullcnt  NUMBER;
  l_srec     DBMS_STATS.StatRec;
  l_avgclen  NUMBER;
BEGIN
  -- Get current values.
  DBMS_STATS.get_column_stats (
    ownname => 'SCOTT',
    tabname => 'EMP',
    colname => 'EMPNO',
    distcnt => l_distcnt,
    density => l_density,
    nullcnt => l_nullcnt,
    srec    => l_srec,
    avgclen => l_avgclen);

  -- Amend values.
  l_srec.minval := UTL_RAW.cast_from_number(7369);
  l_srec.maxval := UTL_RAW.cast_from_number(7934);

  -- Set new values.
  DBMS_STATS.set_column_stats (
    ownname => 'SCOTT',
    tabname => 'EMP',
    colname => 'EMPNO',
    distcnt => l_distcnt,
    density => l_density,
    nullcnt => l_nullcnt,
    srec    => l_srec,
    avgclen => l_avgclen);
END;
/
Problems
- Exclude dataload tables from your regular stats gathering, unless you know they will be full at the time that stats are gathered.
- Prior to 10g, gathering stats for the SYS schema could make the system run slower, not faster.
- Gathering statistics can be very resource intensive for the server, so avoid peak workload times or gather stale stats only.
- Even if scheduled, it may be necessary to gather fresh statistics after database maintenance or large data loads.
Legacy Methods for Gathering Database Stats
The information in this section is purely for historical reasons. All statistics management should now be done using the DBMS_STATS package.
Analyze Statement
The ANALYZE statement can be used to gather statistics for a specific table, index or cluster. The statistics can be computed exactly, or estimated based on a specific number of rows, or a percentage of rows.
ANALYZE TABLE employees COMPUTE STATISTICS;
ANALYZE INDEX employees_pk COMPUTE STATISTICS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 100 ROWS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 15 PERCENT;
DBMS_UTILITY
The DBMS_UTILITY package can be used to gather statistics for a whole schema or database. Both methods follow the same format as the analyze statement.
EXEC DBMS_UTILITY.analyze_schema('SCOTT','COMPUTE');
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_rows => 100);
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_percent => 15);

EXEC DBMS_UTILITY.analyze_database('COMPUTE');
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_rows => 100);
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_percent => 15);
Refreshing Stale Stats
This involves monitoring the DML operations against individual tables so statistics are only gathered for those tables whose data has changed significantly. This is the default behavior for the automatic optimizer statistics collection in 10g and above, but if you are using an older version of the database, you may want to read more about this here.
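As a rough pre-10g sketch of the mechanism, you enable monitoring on the tables, then gather stats only for the stale objects.

-- Enable DML monitoring (automatic from 10g onward).
ALTER TABLE scott.emp MONITORING;

-- Gather stats only for objects whose data has changed significantly.
EXEC DBMS_STATS.gather_schema_stats('SCOTT', options => 'GATHER STALE');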
Scheduling Stats
Prior to Oracle 10g, scheduling the gathering of statistics using the DBMS_JOB package was the easiest way to make sure they were always up to date.
SET SERVEROUTPUT ON
DECLARE
  l_job  NUMBER;
BEGIN
  DBMS_JOB.submit(l_job,
                  'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT''); END;',
                  SYSDATE,
                  'SYSDATE + 1');
  COMMIT;

  DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/
The above code sets up a job to gather statistics for SCOTT at the current time every day. You can list the current jobs on the server
using the DBA_JOBS and DBA_JOBS_RUNNING views.
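A quick way to check the job is in place:

SELECT job, what, next_date, broken FROM dba_jobs;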
Existing jobs can be removed using the following.
EXEC DBMS_JOB.remove(X);
COMMIT;
Where 'X' is the number of the job to be removed.
For more info see:
- Automatic Optimizer Statistics Collection
- Statistics Collection Enhancements in Oracle Database 11g Release 1
- Dynamic Sampling
- Refreshing Stale Statistics
- Statistics Collection Enhancements in Oracle Database 12c Release 1 (12.1)
- Best Practices for Gathering Optimizer Statistics
- DBMS_STATS (8i)
- DBMS_STATS (9iR1)
- DBMS_STATS (9iR2)
- DBMS_STATS (10gR1)
- DBMS_STATS (10gR2)
- DBMS_STATS (11gR1)
- DBMS_STATS (11gR2)
Hope this helps. Regards Tim…