- Optimizer mode
- Direct path
- Adjustments to cardinality estimates
- Bug handling
- Cursor sharing
- Parallelization
- Caching
- ...
A list of the available hints is provided by V$SQL_HINT.
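For a first overview the view can simply be queried; a small sketch (restricted to a few of the available columns):

-- hints known to the current release, with the version that introduced them
select name
     , inverse
     , version
  from v$sql_hint
 order by name;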
cost = blevel
     + ceiling(leaf_blocks * effective index selectivity)
     + ceiling(clustering_factor * effective table selectivity)

The factors blevel, leaf_blocks and clustering_factor are entirely unproblematic in this calculation: they can be found, for example, in the view user_indexes. The other two elements, effective index selectivity and effective table selectivity, are also explained in the CBO book, but so far I had never taken the trouble to check whether I had actually understood those explanations. That changes herewith.
-- create a test table with a composite index
drop table test_ind_selectivity;

create table test_ind_selectivity
tablespace test_ts
as
select rownum id
     , mod(rownum, 10) col1     -- 10 distinct values   -> selectivity: 0.1
     , mod(rownum, 100) col2    -- 100 distinct values  -> selectivity: 0.01
     , mod(rownum, 1000) col3   -- 1000 distinct values -> selectivity: 0.001
     , lpad('*', 100, '*') pad
  from dual
connect by level <= 1000000;

exec dbms_stats.gather_table_stats(user, 'TEST_IND_SELECTIVITY', method_opt=>'FOR ALL COLUMNS SIZE 1')

create index test_ind_selectivity_ix1 on test_ind_selectivity(col1, col2, col3);
select index_name
     , blevel
     , leaf_blocks
     , clustering_factor
  from user_indexes
 where table_name = upper('test_ind_selectivity');

INDEX_NAME                         BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
------------------------------ ---------- ----------- -----------------
TEST_IND_SELECTIVITY_IX1                2        2894           1000000
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

select /*+ index(test_ind_selectivity) test1 */ count(id)
  from test_ind_selectivity
 where col1 = 1;

select /*+ index(test_ind_selectivity) test2 */ count(id)
  from test_ind_selectivity
 where col1 = 1
   and col2 = 1;

select /*+ index(test_ind_selectivity) test3 */ count(id)
  from test_ind_selectivity
 where col1 = 1
   and col2 = 1
   and col3 = 1;

select /*+ index(test_ind_selectivity) test4 */ count(id)
  from test_ind_selectivity
 where col1 = 1
   and col3 = 1;

ALTER SESSION SET EVENTS '10053 trace name context OFF';
***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
  #Rows: 1000000  #Blks: 16907  AvgRowLen: 116.00
Index Stats::
  Index: TEST_IND_SELECTIVITY_IX1  Col#: 2 3 4
  LVLS: 2  #LB: 2894  #DK: 1000  LB/K: 2.00  DB/K: 1000.00  CLUF: 1000000.00
  User hint to use this index
Access path analysis for TEST_IND_SELECTIVITY
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for TEST_IND_SELECTIVITY[TEST_IND_SELECTIVITY]
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
    Card: Original: 1000000.000000  Rounded: 100000  Computed: 100000.00  Non Adjusted: 100000.00
  Access Path: index (RangeScan)
    Index: TEST_IND_SELECTIVITY_IX1
    resc_io: 100292.00  resc_cpu: 751223460
    ix_sel: 0.100000  ix_sel_with_filters: 0.100000
    Cost: 100292.15  Resp: 100292.15  Degree: 1
  Best:: AccessPath: IndexRange
  Index: TEST_IND_SELECTIVITY_IX1
         Cost: 100292.15  Degree: 1  Resp: 100292.15  Card: 100000.00  Bytes: 0
***************************************
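With these values the formula can be checked directly: for test1 (col1 = 1) both effective selectivities are 0.1, and the result matches the resc_io value of the trace. A quick arithmetic check (just the cost formula typed into SQL):

select 2                     -- blevel
     + ceil(2894 * 0.1)      -- leaf_blocks * effective index selectivity
     + ceil(1000000 * 0.1)   -- clustering_factor * effective table selectivity
       as io_cost
  from dual;

   IO_COST
----------
    100292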
***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
  #Rows: 1000000  #Blks: 16907  AvgRowLen: 116.00
Index Stats::
  Index: TEST_IND_SELECTIVITY_IX1  Col#: 2 3 4
  LVLS: 2  #LB: 2894  #DK: 1000  LB/K: 2.00  DB/K: 1000.00  CLUF: 1000000.00
  User hint to use this index
Access path analysis for TEST_IND_SELECTIVITY
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for TEST_IND_SELECTIVITY[TEST_IND_SELECTIVITY]
  ColGroup (#1, Index) TEST_IND_SELECTIVITY_IX1
    Col#: 2 3 4  CorStregth: -1.00
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial:
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
    Card: Original: 1000000.000000  Rounded: 1000  Computed: 1000.00  Non Adjusted: 1000.00
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial:
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial:
  Access Path: index (RangeScan)
    Index: TEST_IND_SELECTIVITY_IX1
    resc_io: 1005.00  resc_cpu: 7547047
    ix_sel: 0.001000  ix_sel_with_filters: 0.001000
    Cost: 1005.00  Resp: 1005.00  Degree: 1
  Best:: AccessPath: IndexRange
  Index: TEST_IND_SELECTIVITY_IX1
         Cost: 1005.00  Degree: 1  Resp: 1005.00  Card: 1000.00  Bytes: 0
***************************************
***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
  #Rows: 1000000  #Blks: 16907  AvgRowLen: 116.00
Index Stats::
  Index: TEST_IND_SELECTIVITY_IX1  Col#: 2 3 4
  LVLS: 2  #LB: 2894  #DK: 1000  LB/K: 2.00  DB/K: 1000.00  CLUF: 1000000.00
  User hint to use this index
Access path analysis for TEST_IND_SELECTIVITY
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for TEST_IND_SELECTIVITY[TEST_IND_SELECTIVITY]
  ColGroup (#1, Index) TEST_IND_SELECTIVITY_IX1
    Col#: 2 3 4  CorStregth: 1000.00
  ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.0010
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
    Card: Original: 1000000.000000  Rounded: 1000  Computed: 1000.00  Non Adjusted: 1000.00
  ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.0010
  ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.0010
  Access Path: index (AllEqRange)
    Index: TEST_IND_SELECTIVITY_IX1
    resc_io: 1005.00  resc_cpu: 7567047
    ix_sel: 0.001000  ix_sel_with_filters: 0.001000
    Cost: 1005.00  Resp: 1005.00  Degree: 1
  Best:: AccessPath: IndexRange
  Index: TEST_IND_SELECTIVITY_IX1
         Cost: 1005.00  Degree: 1  Resp: 1005.00  Card: 1000.00  Bytes: 0
***************************************
***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
  #Rows: 1000000  #Blks: 16907  AvgRowLen: 116.00
Index Stats::
  Index: TEST_IND_SELECTIVITY_IX1  Col#: 2 3 4
  LVLS: 2  #LB: 2894  #DK: 1000  LB/K: 2.00  DB/K: 1000.00  CLUF: 1000000.00
  User hint to use this index
Access path analysis for TEST_IND_SELECTIVITY
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for TEST_IND_SELECTIVITY[TEST_IND_SELECTIVITY]
  ColGroup (#1, Index) TEST_IND_SELECTIVITY_IX1
    Col#: 2 3 4  CorStregth: -1.00
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial: #1 (2 4 )  Sel: 0.0010
  Table: TEST_IND_SELECTIVITY  Alias: TEST_IND_SELECTIVITY
    Card: Original: 1000000.000000  Rounded: 1000  Computed: 1000.00  Non Adjusted: 1000.00
 kkofmx: index filter:"TEST_IND_SELECTIVITY"."COL3"=1
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial: #1 (2 4 )  Sel: 0.0010
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial: #1 (2 4 )  Sel: 0.0010
  Access Path: index (skip-scan)
    SS sel: 0.001000  ANDV (#skips): 100.000000
    SS io: 300.000000 vs. index scan io: 8455.000000
    Skip Scan rejected
  Access Path: index (RangeScan)
    Index: TEST_IND_SELECTIVITY_IX1
    resc_io: 1292.00  resc_cpu: 29410900
    ix_sel: 0.100000  ix_sel_with_filters: 0.001000
 ***** Logdef predicate Adjustment ******
 Final IO cst 0.00 , CPU cst 50.00
 ***** End Logdef Adjustment ******
    Cost: 1292.01  Resp: 1292.01  Degree: 1
  Best:: AccessPath: IndexRange
  Index: TEST_IND_SELECTIVITY_IX1
         Cost: 1292.01  Degree: 1  Resp: 1292.01  Card: 1000.00  Bytes: 0
***************************************
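Here ix_sel (0.1) and ix_sel_with_filters (0.001) finally diverge: only the predicate on col1 defines the index range that has to be scanned, while the predicate on col3 merely filters index entries before the table is visited. Plugging both values into the formula again reproduces the resc_io of the trace (arithmetic check):

select 2                      -- blevel
     + ceil(2894 * 0.1)       -- effective index selectivity: 0.1 (col1 only)
     + ceil(1000000 * 0.001)  -- effective table selectivity: 0.001 (col1 and col3)
       as io_cost
  from dual;

   IO_COST
----------
      1292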
============
Plan Table
============
----------------------------------------------------------------+-----------------------------------+
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost  | Time      |
----------------------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT              |                         |       |       |  1292 |           |
| 1   |  SORT AGGREGATE               |                         |     1 |    12 |       |           |
| 2   |   TABLE ACCESS BY INDEX ROWID | TEST_IND_SELECTIVITY    |  1000 |   12K |  1292 |  00:00:07 |
| 3   |    INDEX RANGE SCAN           | TEST_IND_SELECTIVITY_IX1|  1000 |       |   292 |  00:00:02 |
----------------------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - access("COL1"=1 AND "COL3"=1)
3 - filter("COL3"=1)
select plan_table_output
  from table( dbms_xplan.display_cursor ( NULL, NULL, 'allstats'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------
SQL_ID  gj6fuawjzpr51, child number 0
-------------------------------------
select /*+ index(test_ind_selectivity) gather_plan_statistics */
count(id) from test_ind_selectivity where col1 = 1 and col3 = 1

Plan hash value: 779399104

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                          |      1 |        |      1 |00:00:00.03 |    1296 |
|   1 |  SORT AGGREGATE              |                          |      1 |      1 |      1 |00:00:00.03 |    1296 |
|   2 |   TABLE ACCESS BY INDEX ROWID| TEST_IND_SELECTIVITY     |      1 |   1000 |   1000 |00:00:00.01 |    1296 |
|*  3 |    INDEX RANGE SCAN          | TEST_IND_SELECTIVITY_IX1 |      1 |   1000 |   1000 |00:00:00.01 |     296 |
-------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("COL1"=1 AND "COL3"=1)
       filter("COL3"=1)
What history has taught us is that such killer architectures do not exist, but that the optimal strategy (just like in nature) depends on which environment you find yourself in. Being an IT architect means rising above all these technologies, to see the big picture, and resisting the temptation to find a single solution/platform for all problems in the enterprise. The key is to understand the environment you live in, and design for it using a palette of technologies.

And that is certainly a fairly timeless conclusion.
create function generator (n pls_integer) return sys.odcinumberlist pipelined
is
begin
  for i in 1 .. n loop
    pipe row (i);
  end loop;
  return;   -- a pipelined function requires a (value-less) RETURN
end;
/

select count(*) from table(generator(1e7));
select /*+ full(t) */ count(*)
  from dim_table t
 where t.col1 = 421220
   and t.col2 = 20;

  COUNT(*)
----------
      2841

-------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|           |     1 |     9 | 60397   (1)|
|   1 |  SORT AGGREGATE        |           |     1 |     9 |            |
|*  2 |   TABLE ACCESS FULL    | DIM_TABLE |   147 |  1323 | 60397   (1)|
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("A1"."COL1"=421220 AND "A1"."COL2"=20)

Note
-----
   - fully remote statement
select count(*)
  from fact_table f
     , dim_table d
 where f.col0 = d.col0
   and d.col1 = 421220
   and d.col2 = 20
   and f.col3 = '01.12.2011';

  COUNT(*)
----------
     13562

----------------------------------------------------------------------------------------
| Id  | Operation                 | Name           | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                |     1 |    54 |     8   (0)|
|   1 |  SORT AGGREGATE           |                |     1 |    54 |            |
|   2 |   NESTED LOOPS            |                |    24 |  1296 |     8   (0)|
|   3 |    REMOTE                 | DIM_TABLE      |     2 |    78 |     4   (0)|
|   4 |    PARTITION RANGE SINGLE |                |    12 |   180 |     2   (0)|
|*  5 |     INDEX RANGE SCAN      | IDX_FACT_TABLE |    12 |   180 |     2   (0)|
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("F"."COL0"="A"."COL0" AND "F"."COL3"=TO_DATE(' 2011-12-01 00:00:00',
              'syyyy-mm-dd hh24:mi:ss'))

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "COL0","COL1","COL2" FROM "DIM_TABLE" "A" WHERE "COL1"=421220 AND
       "COL2"=20 (accessing 'DBL_XXX' )
select /*+ DRIVING_SITE(d) */ count(*)
  from fact_table f
     , dim_table d
 where f.col0 = d.col0
   and d.col1 = 421220
   and d.col2 = 20
   and f.col3 = '01.12.2011';

---------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name          | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE       |               |     1 |    38 |   395   (1)|
|   1 |  SORT AGGREGATE               |               |     1 |    38 |            |
|   2 |   NESTED LOOPS                |               |  1703 | 64714 |   395   (1)|
|   3 |    TABLE ACCESS BY INDEX ROWID| DIM_TABLE     |   147 |  2352 |   100   (0)|
|*  4 |     INDEX RANGE SCAN          | IDX_DIM_TABLE |   147 |       |     3   (0)|
|   5 |    REMOTE                     | FACT_TABLE    |    12 |   264 |     2   (0)|
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("A1"."COL2"=20 AND "A1"."COL1"=421220)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   5 - SELECT "COL0","COL3" FROM "XXX"."FACT_TABLE" "A2" WHERE
       "COL3"=TO_DATE(' 2011-12-01 00:00:00','syyyy-mm-dd hh24:mi:ss') AND
       "COL0"=:1 (accessing '!' )
select ...
  from a
     , b
 where a.id = b.id
   and a.id = 10

By transitive closure the optimizer can derive the additional predicate:

   and b.id = 10
MERGE /*+ APPEND */ INTO FACT
USING (SELECT A_DATE
            , ...
         FROM (SELECT DISTINCT
                      A_DATE
                    , ...
                 FROM BASE
                WHERE A_DATE BETWEEN :B2 AND :B1
              )
      ) SOURCE
   ON ( some_conditions
        AND FACT.A_DATE = SOURCE.A_DATE )
 WHEN MATCHED THEN UPDATE
  SET ...
 WHEN NOT MATCHED THEN INSERT
  ...
------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT                 |          |       |       |       | 57703 (100)|          |       |       |        |      |            |
|   1 |  MERGE                          | FACT     |       |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR                |          |       |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)          | :TQ10001 | 41430 |  8981K|       | 57703   (7)| 00:13:04 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     VIEW                        |          |       |       |       |            |          |       |       |  Q1,01 | PCWP |            |
|   5 |      HASH JOIN OUTER            |          | 41430 |  8981K|       | 57703   (7)| 00:13:04 |       |       |  Q1,01 | PCWP |            |
|   6 |       BUFFER SORT               |          |       |       |       |            |          |       |       |  Q1,01 | PCWC |            |
|   7 |        PX RECEIVE               |          | 41430 |  5502K|       | 17408   (6)| 00:03:57 |       |       |  Q1,01 | PCWP |            |
|   8 |         PX SEND PARTITION (KEY) | :TQ10000 | 41430 |  5502K|       | 17408   (6)| 00:03:57 |       |       |        | S->P | PART (KEY) |
|   9 |          VIEW                   |          | 41430 |  5502K|       | 17408   (6)| 00:03:57 |       |       |        |      |            |
|  10 |           SORT UNIQUE           |          | 41430 |  2022K|  6520K| 17408   (6)| 00:03:57 |       |       |        |      |            |
|  11 |            FILTER               |          |       |       |       |            |          |       |       |        |      |            |
|  12 |             TABLE ACCESS FULL   | BASE     | 41430 |  2022K|       | 16966   (6)| 00:03:51 |       |       |        |      |            |
|  13 |       PX PARTITION RANGE ALL    |          |   895M|    71G|       | 39789   (7)| 00:09:01 |     1 |   362 |  Q1,01 | PCWC |            |
|  14 |        TABLE ACCESS FULL        | FACT     |   895M|    71G|       | 39789   (7)| 00:09:01 |     1 |   362 |  Q1,01 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------------
MERGE /*+ APPEND */ INTO FACT
USING (SELECT DISTINCT
              A_DATE
            , ...
         FROM BASE
        WHERE A_DATE BETWEEN :B2 AND :B1
      ) SOURCE
   ON ( some_conditions
        AND FACT.A_DATE = SOURCE.A_DATE )
 WHEN MATCHED THEN UPDATE
  SET ...
 WHEN NOT MATCHED THEN INSERT
  ...
-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT                          |          | 19294 |  5916K|       | 43473   (1)|       |       |        |      |            |
|   1 |  MERGE                                   | FACT     |       |       |       |            |       |       |        |      |            |
|   2 |   PX COORDINATOR                         |          |       |       |       |            |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)                   | :TQ10001 | 19294 |  4182K|       | 43473   (1)|       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     VIEW                                 |          |       |       |       |            |       |       |  Q1,01 | PCWP |            |
|   5 |      NESTED LOOPS OUTER                  |          | 19294 |  4182K|       | 43473   (1)|       |       |  Q1,01 | PCWP |            |
|   6 |       BUFFER SORT                        |          |       |       |       |            |       |       |  Q1,01 | PCWC |            |
|   7 |        PX RECEIVE                        |          |       |       |       |            |       |       |  Q1,01 | PCWP |            |
|   8 |         PX SEND ROUND-ROBIN              | :TQ10000 |       |       |       |            |       |       |        | S->P | RND-ROBIN  |
|   9 |          VIEW                            |          | 19294 |  2562K|       |  4881   (9)|       |       |        |      |            |
|  10 |           SORT UNIQUE                    |          | 19294 |   942K|  3048K|  4881   (9)|       |       |        |      |            |
|  11 |            FILTER                        |          |       |       |       |            |       |       |        |      |            |
|  12 |             TABLE ACCESS FULL            | BASE     | 19294 |   942K|       |  4674   (9)|       |       |        |      |            |
|  13 |       PARTITION RANGE ITERATOR           |          |     1 |    86 |       |     0   (0)|   KEY |   KEY |  Q1,01 | PCWP |            |
|  14 |        TABLE ACCESS BY LOCAL INDEX ROWID | FACT     |     1 |    86 |       |     0   (0)|   KEY |   KEY |  Q1,01 | PCWP |            |
|  15 |         INDEX RANGE SCAN                 | PK_FACT  |     1 |       |       |     0   (0)|   KEY |   KEY |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cost (%CPU)| Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT               |          | 19294 |  5916K| 33548   (9)|       |       |        |      |            |
|   1 |  MERGE                        | FACT     |       |       |            |       |       |        |      |            |
|   2 |   PX COORDINATOR              |          |       |       |            |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)        | :TQ10002 | 19294 |  2562K| 33548   (9)|       |       |  Q1,02 | P->S | QC (RAND)  |
|   4 |     VIEW                      |          |       |       |            |       |       |  Q1,02 | PCWP |            |
|   5 |      FILTER                   |          |       |       |            |       |       |  Q1,02 | PCWC |            |
|   6 |       HASH JOIN OUTER BUFFERED|          | 19294 |  2562K| 33548   (9)|       |       |  Q1,02 | PCWP |            |
|   7 |        BUFFER SORT            |          |       |       |            |       |       |  Q1,02 | PCWC |            |
|   8 |         PX RECEIVE            |          | 19294 |   942K|  4674   (9)|       |       |  Q1,02 | PCWP |            |
|   9 |          PX SEND HASH         | :TQ10000 | 19294 |   942K|  4674   (9)|       |       |        | S->P | HASH       |
|  10 |           TABLE ACCESS FULL   | BASE     | 19294 |   942K|  4674   (9)|       |       |        |      |            |
|  11 |        PX RECEIVE             |          |  2237K|   183M| 28872   (9)|       |       |  Q1,02 | PCWP |            |
|  12 |         PX SEND HASH          | :TQ10001 |  2237K|   183M| 28872   (9)|       |       |  Q1,01 | P->P | HASH       |
|  13 |          PX BLOCK ITERATOR    |          |  2237K|   183M| 28872   (9)|   KEY |   KEY |  Q1,01 | PCWC |            |
|  14 |           TABLE ACCESS FULL   | FACT     |  2237K|   183M| 28872   (9)|   KEY |   KEY |  Q1,01 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------
Using the simple techniques described in this article, we are now able to find a trace file, read it using the TRACEFILE_XT read-only external table, profile it with preprocessor external tables over TKProf or OraSRP and read the reports without leaving our IDE once. We can even read HTML reports in a browser window generated from within SQL*Plus without any additional keystrokes. For readers who work regularly with trace files, these are good productivity improvements!
DML error logging extends existing DML functionality by enabling you to specify the name of an error logging table into which Oracle Database should record errors encountered during DML operations. This enables you to complete the DML operation in spite of any errors, and to take corrective action on the erroneous rows at a later time.

A detailed example of the procedure is available from Tim Hall; for the hurried reader, here is a smaller demonstration:
This DML error logging table consists of several mandatory control columns and a set of user-defined columns that represent either all or a subset of the columns of the target table of the DML operation using a data type that is capable of storing potential errors for the target column.
-- create a table TEST
create table test (
    id number
  , name varchar2(10)
  , description varchar2(32)
  , some_data number(8));

-- create a matching error table for TEST
-- with the help of the dbms_errlog package
exec dbms_errlog.create_error_log (dml_table_name => 'test');

-- the generated error table has the prefix ERR$_,
-- contains all the columns of the target table as VARCHAR2(4000)
-- and, in addition, several columns for storing error information
desc ERR$_TEST
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------------
 ORA_ERR_NUMBER$                                    NUMBER
 ORA_ERR_MESG$                                      VARCHAR2(2000)
 ORA_ERR_ROWID$                                     ROWID
 ORA_ERR_OPTYP$                                     VARCHAR2(2)
 ORA_ERR_TAG$                                       VARCHAR2(2000)
 ID                                                 VARCHAR2(4000)
 NAME                                               VARCHAR2(4000)
 DESCRIPTION                                        VARCHAR2(4000)
 SOME_DATA                                          VARCHAR2(4000)
-- at first, my insert fails
insert into test
select rownum
     , 'bla'
     , 'irgendwas'
     , rownum * 1000000
  from dual
connect by level < 10000;

     , rownum * 1000000
       *
ERROR at line 5:
ORA-01438: value larger than specified precision allowed for this column

-- but with the LOG ERRORS INTO clause it goes through
insert into test
select rownum
     , 'bla'
     , 'irgendwas'
     , rownum * 1000000
  from dual
connect by level < 10000
log errors into err$_test ('insert') reject limit unlimited;

99 rows created.

--> the remaining 9900 rows end up in ERR$_TEST
-- delete the contents of the error table:
truncate table err$_test;

-- now an INSERT containing indigestible data for several columns:
insert into test
values ( 1
       , 'blaaaaaaaaaaaaaaaaaaaaaa'
       , 'ssssssssssssssssssssssssssspppppppppppppppppppaaaaaaaaaaaaaaaaaaaaaaaaammmmmmmmmmmmmmmmm'
       , 123456789000000000000)
log errors into err$_test ('insert') reject limit unlimited;

0 rows created.

--> which cannot work for columns 2 - 4

-- the error metadata then looks like this:
select ora_err_number$
     , ora_err_mesg$
     , ora_err_rowid$
     , ora_err_optyp$
     , ora_err_tag$
  from err$_test;

ORA_ERR_NUMBER$ ORA_ERR_MESG$                  ORA_ERR_ROWID$  ORA_ERR_OPTYP$  ORA_ERR_TAG$
--------------- ------------------------------ --------------- --------------- ---------------
          12899 ORA-12899: value too large for                 I               insert
                column "DBADMIN"."TEST"."NAME"
                (actual: 24, maximum: 10)

-- followed in err$_test by the column values of the failed INSERT
at least in the white papers and presentations it sounds like a very attractive solution. There are two levels to Auto DOP:
- Limited – when accessing tables and indexes that have been declared with parallel clause, Oracle will decide on the degree of parallelism based on the query and system resources.
- Auto – Oracle will decide on degree of parallelism for every query. In addition two exciting new features are enabled: parallel statement queuing and in-memory parallel execution.
Mr. Hesse explains the effect of the parameter parallel_degree_policy and the resulting effectiveness of hints:

parallel_degree_policy=LIMITED will give you the DOP you request and compute an appropriate DOP only with a parallel degree of DEFAULT as an attribute of the table. Use this parameter if you trust that your applications/designers know why they use a certain DOP. parallel_degree_policy=AUTO will overrule any specific DOP you gave – except the new 11g parallel (n) Hint – and consider to do things in parallel for all tables even without a Hint or Degree.

Mr. Geist's post deals with the observation that parallel_degree_policy=auto can trigger the use of direct path inserts, which is in a way consistent, but brings with it the usual side effects of the direct path (INSERTs above the HWM and "ORA-12838: cannot read/modify an object after modifying it in parallel"), where ORA-12838 in particular can have unpleasant effects:
An existing application logic might break because it attempts to re-access the object after the now direct-path insert within the same transaction, which will end up with an "ORA-12838: cannot read/modify an object after modifying it in parallel".

Both Mr. Hesse and Mr. Geist provide their usual test cases, with whose help these effects can be reproduced and verified.
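A minimal sketch of the problematic pattern (the table names target_tab and source_tab are made up; it assumes a session with parallel_degree_policy = auto and an insert that Oracle decides to execute as a direct-path load):

alter session set parallel_degree_policy = auto;

-- may silently be executed as a direct-path insert under Auto DOP
insert into target_tab
select * from source_tab;

-- re-accessing the object in the same transaction then fails with
-- ORA-12838: cannot read/modify an object after modifying it in parallel
select count(*) from target_tab;

commit;   -- after the commit the object can be read again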
-- drop the job (if it exists)
exec dbms_scheduler.drop_job (job_name => 'my_scheduler_test_job');

-- job definition
begin
  dbms_scheduler.create_job (
      job_name        => 'my_scheduler_test_job',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN TEST_PROC; END;',
      start_date      => systimestamp,
      repeat_interval => 'FREQ=DAILY;BYHOUR=09;BYMINUTE=15',
      end_date        => null,
      enabled         => true,
      comments        => 'trivial test job executing a trivial test procedure.');
end;
/

-- start the job immediately, independently of its schedule
exec dbms_scheduler.run_job('my_scheduler_test_job')
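Afterwards the job state and run history can be checked in the scheduler views (a small addition; the column selection is just a sample):

-- current state and next scheduled execution
select job_name
     , state
     , last_start_date
     , next_run_date
  from user_scheduler_jobs
 where job_name = 'MY_SCHEDULER_TEST_JOB';

-- results of the individual runs
select log_date
     , status
     , run_duration
  from user_scheduler_job_run_details
 where job_name = 'MY_SCHEDULER_TEST_JOB'
 order by log_date;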
Overall the 11g DBMS_STATS has been enhanced to gather stats in less time, but in my opinion the significant enhancement is to AUTO_SAMPLE_SIZE which yields near 100% sample accuracy in 10% sample time.

Nevertheless, I had recently been wondering whether the block_sample option can be used to reduce the runtime of statistics gathering significantly. The documentation explains this option as follows:
Whether or not to use random block sampling instead of random row sampling. Random block sampling is more efficient, but if the data is not randomly distributed on disk, then the sample values may be somewhat correlated. Only pertinent when doing an estimate statistics.

A small test shows that the option indeed has a clear influence on the read operations of the statistics gathering:
-- create a relatively large test table
drop table big_t;

create table big_t
as
with
base_data as (
  select rownum id1
    from dual
 connect by level <= 1000000
)
, mult as (
  select rownum id2
    from dual
 connect by level <= 30
)
select rownum id
     , 'aaa' col2
  from mult
     , base_data;

-- gather statistics
exec dbms_stats.gather_table_stats(user, 'BIG_T')
exec dbms_stats.gather_table_stats(user, 'BIG_T', estimate_percent=>1)
exec dbms_stats.gather_table_stats(user, 'BIG_T', estimate_percent=>1, block_sample=>TRUE)
Version    Parameter                             Runtime
---------  ------------------------------------  ----------
10.2.0.4   none                                  33.86 sec.
10.2.0.4   estimate_percent: 1                   12.31 sec.
10.2.0.4   estimate_percent: 1; block_sampling    1.01 sec.
11.1.0.7   none                                  13.07 sec.
11.1.0.7   estimate_percent: 1                    2.40 sec.
11.1.0.7   estimate_percent: 1; block_sampling    1.78 sec.
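To see how many rows each variant actually sampled, one can afterwards check the dictionary (sample_size and last_analyzed are standard columns of user_tables):

select table_name
     , num_rows
     , sample_size
     , last_analyzed
  from user_tables
 where table_name = 'BIG_T';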
However, there is still a potential problem even when resorting to Dynamic Sampling. If the cursors based on Dynamic Sampling get shared between sessions then they won't be re-optimized even if a GTT in one session is completely different from the one of the session that created the shared cursor previously.

A possible solution to such problems is the use of Row Level Security (RLS): redundant predicates are added that make the queries differ textually (appending 1=1, 2=2, or the like).
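A minimal sketch of the idea (all names here are made up; the returned predicates are always true and serve only to make the SQL texts differ, so that sessions do not share cursors):

-- hypothetical policy function: returns one of several always-true predicates
create or replace function gtt_vpd_predicate (
    p_schema in varchar2
  , p_object in varchar2
) return varchar2
is
begin
  -- assumption: the application marks its sessions via client_info
  if sys_context('userenv', 'client_info') = 'LARGE_GTT' then
    return '1 = 1';
  else
    return '2 = 2';
  end if;
end;
/

-- attach the function to the GTT for queries
begin
  dbms_rls.add_policy (
      object_schema   => user
    , object_name     => 'MY_GTT'
    , policy_name     => 'GTT_NO_CURSOR_SHARING'
    , function_schema => user
    , policy_function => 'GTT_VPD_PREDICATE'
    , statement_types => 'SELECT');
end;
/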