The question I want to investigate with the help of runstats is: at what point, in the context of an ETL operation - i.e. in a single-user environment - does it become cheaper to empty a table via truncate and reload it completely, instead of removing part of the data via delete and re-loading only the required rows? For this purpose I create two tables: the first is emptied via truncate and then refilled via insert; from the second, a portion of the rows is removed via delete and the deleted rows are then re-inserted. In several test runs the share of rows deleted in case 2 is increased step by step. My assumption is that for this test there is a point - once more than n % of the rows are deleted - beyond which the brute-force truncate/insert variant becomes more efficient than the delete/insert variant.
First, the prerequisites:
- Oracle 11.1.0.7
- noarchivelog mode (which, as is well known, influences redo generation in combination with the append hint)
- MSSM tablespace
- 8K block size
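For reproducibility, a minimal sketch of how such an environment can be checked and set up (the tablespace name and datafile path are invented for this sketch; the block size is taken from the database default):

-- check the archive log mode
select log_mode from v$database;

-- tablespace with manual segment space management (MSSM)
create tablespace test_mssm
    datafile '/oradata/test_mssm01.dbf' size 500m
    segment space management manual;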
-- test_delete_truncate_performance.sql
-- 27.07.2011

/*
-- to be executed only once:
drop table t0;

create table t0
as
select level id
     , mod(level, 100) + 1 col1
     , lpad('*', 100, '*') filler
  from dual
connect by level <= 1000000
 order by dbms_random.value;
*/

undefine pct_del

drop table t1;
drop table t2;

create table t1 as select * from t0;
create table t2 as select * from t0;

exec runstats_pkg.rs_start;

-- case 1: truncate + insert
truncate table t1;

insert into t1
select *
  from t0;

exec runstats_pkg.rs_middle;

-- case 2: delete of &pct_del % of the rows + re-insert of the deleted rows
delete from t2 where col1 <= &&pct_del;

insert into t2
select *
  from t0
 where col1 <= &pct_del;

exec runstats_pkg.rs_stop(100000);
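Run in SQL*Plus, the script prompts once for the percentage (the double ampersand defines the substitution variable for the later single-ampersand references); a 1% run would start roughly like this:

SQL> @test_delete_truncate_performance.sql
Enter value for pct_del: 1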
Below are the runstats results for the deletion of different portions of the table:
Case 1 - deleted rows: 1%
================================================================================
Runstats report : 27-JUL-2011 10:54:12
================================================================================

--------------------------------------------------------------------------------
1. Summary timings
--------------------------------------------------------------------------------

Run1 ran in 247 hsecs
Run2 ran in 174 hsecs
Run2 was 29,6% quicker than Run1

--------------------------------------------------------------------------------
2. Statistics report
--------------------------------------------------------------------------------

Type  Name                                        Run1         Run2         Diff
----- ----------------------------------- ------------ ------------ ------------
LATCH checkpoint queue latch                         955       12,037       11,082
STAT  free buffer requested                       10,331       21,653       11,322
STAT  physical reads cache prefetch                1,582       15,296       13,714
STAT  physical reads                               1,834       15,712       13,878
STAT  physical reads cache                         1,834       15,712       13,878
STAT  Heap Segment Array Inserts                  14,555          145      -14,410
STAT  calls to get snapshot scn: kcmgss           16,446          233      -16,213
LATCH cache buffers lru chain                      2,382       24,520       22,138
LATCH object queue header operation               26,907       56,462       29,555
STAT  redo entries                                42,224       10,406      -31,818
STAT  db block gets                               46,358       10,708      -35,650
STAT  db block gets from cache                    46,358       10,708      -35,650
STAT  session logical reads                       64,449       26,630      -37,819
STAT  db block changes                            67,919       20,784      -47,135
LATCH cache buffers chains                       252,917      120,542     -132,375
STAT  undo change vector size                  3,131,416    2,144,744     -986,672
STAT  table scan rows gotten                   1,008,449    2,015,616    1,007,167
STAT  session pga memory                      -1,507,328      589,824    2,097,152
STAT  redo size                              124,435,844    4,753,664 -119,682,180
STAT  cell physical IO interconnect bytes     30,048,256  257,425,408  227,377,152
STAT  physical IO disk bytes                  30,048,256  257,425,408  227,377,152
STAT  physical read bytes                     30,048,256  257,425,408  227,377,152
STAT  physical read total bytes               30,048,256  257,425,408  227,377,152

--------------------------------------------------------------------------------
3. Latching report
--------------------------------------------------------------------------------

Run1 used 317,391 latches
Run2 used 221,404 latches
Run2 used 30,2% fewer latches than Run1

================================================================================
End of report
================================================================================
For the deletion of one percent of the data, the result is as follows:
- the delete variant needs less time than the truncate variant
  - that is what I had expected for 1%
  - Run2 was 29,6% quicker than Run1: 2.47 sec vs. 1.74 sec
- the delete variant generates considerably less redo than the truncate variant
  - this makes sense to me, since the redo entries for the insert of 1,000,000 rows are naturally more voluminous than those for delete and insert of 10,000 rows (the redo numbers can also be cross-checked directly, see the sketch after this list)
- the delete variant uses fewer logical reads than the truncate variant
- the delete variant uses more physical reads than the truncate variant
- the delete variant uses fewer latches than the truncate variant
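The redo figures can also be verified outside of runstats. A minimal sketch that reads the current session's 'redo size' statistic from the standard views v$mystat and v$statname (to be run before and after each variant and then compared):

select sn.name
     , ms.value
  from v$mystat ms
  join v$statname sn
    on sn.statistic# = ms.statistic#
 where sn.name = 'redo size';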
Case 2 - deleted rows: 5%

================================================================================
Runstats report : 27-JUL-2011 11:00:02
================================================================================

--------------------------------------------------------------------------------
1. Summary timings
--------------------------------------------------------------------------------

Run1 ran in 244 hsecs
Run2 ran in 264 hsecs
Run1 was 7,6% quicker than Run2

--------------------------------------------------------------------------------
2. Statistics report
--------------------------------------------------------------------------------

Type  Name                                        Run1         Run2         Diff
----- ----------------------------------- ------------ ------------ ------------
STAT  physical reads cache prefetch                2,637       15,307       12,670
STAT  physical reads                               2,717       15,708       12,991
STAT  physical reads cache                         2,717       15,708       12,991
STAT  free buffer requested                       11,178       24,659       13,481
STAT  Heap Segment Array Inserts                  14,555          728      -13,827
LATCH checkpoint queue latch                         916       15,331       14,415
STAT  calls to get snapshot scn: kcmgss           16,447          895      -15,552
LATCH simulator hash latch                        25,530        8,406      -17,124
LATCH cache buffers lru chain                      2,433       30,777       28,344
STAT  HSC Heap Segment Block Changes              15,047       50,740       35,693
STAT  db block changes                            67,922      104,012       36,090
LATCH object queue header operation               28,593       66,692       38,099
LATCH DML lock allocation                         39,058           26      -39,032
STAT  buffer is pinned count                           2       49,593       49,591
STAT  session uga memory                         457,856      523,264       65,408
LATCH cache buffers chains                       250,979      335,975       84,996
STAT  table scan rows gotten                   1,008,449    2,014,080    1,005,631
STAT  session pga memory                      -1,572,864      458,752    2,031,616
STAT  undo change vector size                  3,131,668   10,893,104    7,761,436
STAT  redo size                              124,324,584   23,817,964 -100,506,620
STAT  cell physical IO interconnect bytes     44,515,328  257,359,872  212,844,544
STAT  physical IO disk bytes                  44,515,328  257,359,872  212,844,544
STAT  physical read bytes                     44,515,328  257,359,872  212,844,544
STAT  physical read total bytes               44,515,328  257,359,872  212,844,544

--------------------------------------------------------------------------------
3. Latching report
--------------------------------------------------------------------------------

Run1 used 383,102 latches
Run2 used 460,789 latches
Run1 used 16,9% fewer latches than Run2

================================================================================
End of report
================================================================================
As expected, the values for the truncate remain relatively stable. Beyond that, one can observe that:
- the truncate is already faster here than the delete
  - Run1 was 7,6% quicker than Run2: 2.44 sec vs. 2.64 sec
- the difference in logical reads already falls below the reporting threshold of 10,000, so the row no longer appears in the report (a different filtering of the runstats output might have been more appropriate here; see the note after this list)
- the physical reads remain stable for the delete as well (compared to the 1% case)
- the truncate case already uses fewer latches
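The reporting threshold is simply the numeric argument of rs_stop in the script above; presumably a smaller value would have kept the session logical reads row in this report. A sketch of the adjusted call (the value 1,000 is an arbitrary choice for illustration):

exec runstats_pkg.rs_stop(1000);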
Case 3 - deleted rows: 25%
================================================================================
Runstats report : 27-JUL-2011 11:08:27
================================================================================

--------------------------------------------------------------------------------
1. Summary timings
--------------------------------------------------------------------------------

Run1 ran in 243 hsecs
Run2 ran in 577 hsecs
Run1 was 57,9% quicker than Run2

--------------------------------------------------------------------------------
2. Statistics report
--------------------------------------------------------------------------------

Type  Name                                        Run1         Run2         Diff
----- ----------------------------------- ------------ ------------ ------------
STAT  Heap Segment Array Inserts                  14,555        3,666      -10,889
STAT  dirty buffers inspected                      1,671       13,169       11,498
STAT  calls to get snapshot scn: kcmgss           16,445        4,204      -12,241
STAT  physical reads cache prefetch                2,189       15,308       13,119
STAT  physical reads                               2,419       15,738       13,319
STAT  physical reads cache                         2,419       15,738       13,319
LATCH simulator hash latch                         5,681       22,251       16,570
STAT  free buffer requested                       10,915       29,069       18,154
LATCH checkpoint queue latch                       1,258       22,243       20,985
LATCH cache buffers lru chain                      3,208       43,567       40,359
LATCH object queue header operation               29,596       90,771       61,175
STAT  session uga memory                         457,856      523,264       65,408
STAT  redo entries                                42,222      260,447      218,225
STAT  db block gets                               46,353      268,940      222,587
STAT  db block gets from cache                    46,353      268,940      222,587
STAT  session logical reads                       64,431      287,375      222,944
STAT  HSC Heap Segment Block Changes              15,047      253,726      238,679
STAT  buffer is pinned count                           2      247,918      247,916
STAT  db block changes                            67,918      520,097      452,179
STAT  table scan rows gotten                   1,008,449    2,015,488    1,007,039
LATCH cache buffers chains                       253,277    1,416,450    1,163,173
STAT  session pga memory                      -1,572,864      458,752    2,031,616
STAT  redo size                              124,419,928  119,169,836   -5,250,092
STAT  undo change vector size                  3,131,416   54,710,068   51,578,652
STAT  cell physical IO interconnect bytes     39,632,896  257,851,392  218,218,496
STAT  physical IO disk bytes                  39,632,896  257,851,392  218,218,496
STAT  physical read bytes                     39,632,896  257,851,392  218,218,496
STAT  physical read total bytes               39,632,896  257,851,392  218,218,496

--------------------------------------------------------------------------------
3. Latching report
--------------------------------------------------------------------------------

Run1 used 320,956 latches
Run2 used 1,606,384 latches
Run1 used 80% fewer latches than Run2

================================================================================
End of report
================================================================================
Here the truncate is already clearly more efficient:
- Run1 was 57,9% quicker than Run2: 2.43 sec vs. 5.77 sec
- the delete needs more than four times as many logical reads (287,375 vs. 64,431)
- the physical reads still remain stable (although physical reads may under certain circumstances be served from the OS cache; see the sketch after this list)
- Run1 used 80% fewer latches than Run2
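To reduce such caching effects between runs, the buffer cache could be flushed before each variant - a measure for test systems only, nothing for production use:

alter system flush buffer_cache;

The OS file system cache is, of course, not affected by this.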
Case 4 - deleted rows: 50%
================================================================================
Runstats report : 27-JUL-2011 11:15:09
================================================================================

--------------------------------------------------------------------------------
1. Summary timings
--------------------------------------------------------------------------------

Run1 ran in 218 hsecs
Run2 ran in 901 hsecs
Run1 was 75,8% quicker than Run2

--------------------------------------------------------------------------------
2. Statistics report
--------------------------------------------------------------------------------

Type  Name                                        Run1         Run2         Diff
----- ----------------------------------- ------------ ------------ ------------
STAT  physical reads cache prefetch                2,831       15,243       12,412
STAT  calls to kcmgas                              2,521       15,603       13,082
STAT  physical reads                               3,017       16,108       13,091
STAT  physical reads cache                         3,017       16,108       13,091
STAT  free buffer inspected                       13,011       28,743       15,732
STAT  dirty buffers inspected                      1,665       18,386       16,721
STAT  free buffer requested                       11,517       34,303       22,786
LATCH checkpoint queue latch                         940       26,535       25,595
LATCH simulator hash latch                         5,733       39,483       33,750
LATCH DML lock allocation                         39,058          202      -38,856
LATCH cache buffers lru chain                      2,638       52,859       50,221
STAT  session uga memory                         457,856      523,264       65,408
LATCH object queue header operation               31,948      115,674       83,726
STAT  redo entries                                42,201      521,548      479,347
STAT  db block gets                               46,355      539,041      492,686
STAT  db block gets from cache                    46,355      539,041      492,686
STAT  HSC Heap Segment Block Changes              15,047      507,970      492,923
STAT  buffer is pinned count                           2      495,976      495,974
STAT  session logical reads                       64,485      561,606      497,121
STAT  db block changes                            67,919    1,044,330      976,411
STAT  table scan rows gotten                   1,008,449    2,015,104    1,006,655
STAT  session pga memory                      -1,572,864      458,752    2,031,616
LATCH cache buffers chains                       253,454    2,770,061    2,516,607
STAT  undo change vector size                  3,131,416  109,653,688  106,522,272
STAT  redo size                              124,157,056  238,877,476  114,720,420
STAT  cell physical IO interconnect bytes     49,430,528  263,913,472  214,482,944
STAT  physical IO disk bytes                  49,430,528  263,913,472  214,482,944
STAT  physical read bytes                     49,430,528  263,913,472  214,482,944
STAT  physical read total bytes               49,430,528  263,913,472  214,482,944

--------------------------------------------------------------------------------
3. Latching report
--------------------------------------------------------------------------------

Run1 used 370,161 latches
Run2 used 3,025,717 latches
Run1 used 87,8% fewer latches than Run2

================================================================================
End of report
================================================================================
Case 5 - deleted rows: 100%
================================================================================
Runstats report : 27-JUL-2011 11:16:21
================================================================================

--------------------------------------------------------------------------------
1. Summary timings
--------------------------------------------------------------------------------

Run1 ran in 216 hsecs
Run2 ran in 1711 hsecs
Run1 was 87,4% quicker than Run2

--------------------------------------------------------------------------------
2. Statistics report
--------------------------------------------------------------------------------

Type  Name                                        Run1         Run2         Diff
----- ----------------------------------- ------------ ------------ ------------
STAT  redo ordering marks                          1,781       13,681       11,900
LATCH MinActiveScn Latch                           3,238       15,815       12,577
STAT  physical reads cache prefetch                2,651       15,257       12,606
STAT  consistent gets                             18,093       32,080       13,987
STAT  consistent gets from cache                  18,093       32,080       13,987
STAT  physical reads                               2,746       20,628       17,882
STAT  physical reads cache                         2,746       20,628       17,882
STAT  calls to kcmgas                              2,521       22,032       19,511
STAT  dirty buffers inspected                      2,698       26,639       23,941
STAT  free buffer inspected                       12,227       39,414       27,187
STAT  free buffer requested                       11,210       42,908       31,698
LATCH checkpoint queue latch                       1,637       37,689       36,052
STAT  session uga memory                         457,856      523,264       65,408
LATCH simulator hash latch                         5,735       73,497       67,762
LATCH cache buffers lru chain                      4,186       73,309       69,123
LATCH object queue header operation               32,129      162,218      130,089
STAT  redo entries                                42,226    1,031,879      989,653
STAT  buffer is pinned count                           2      992,187      992,185
STAT  HSC Heap Segment Block Changes              15,047    1,014,922      999,875
STAT  table scan rows gotten                   1,008,449    2,014,592    1,006,143
STAT  db block gets                               46,375    1,067,775    1,021,400
STAT  db block gets from cache                    46,375    1,067,775    1,021,400
STAT  session logical reads                       64,468    1,099,855    1,035,387
STAT  db block changes                            67,927    2,084,254    2,016,327
STAT  session pga memory                      -1,572,864      458,752    2,031,616
LATCH cache buffers chains                       253,597    5,453,856    5,200,259
STAT  undo change vector size                  3,131,728  219,793,172  216,661,444
STAT  cell physical IO interconnect bytes     44,990,464  337,969,152  292,978,688
STAT  physical IO disk bytes                  44,990,464  337,969,152  292,978,688
STAT  physical read bytes                     44,990,464  337,969,152  292,978,688
STAT  physical read total bytes               44,990,464  337,969,152  292,978,688
STAT  redo size                              124,379,784  477,840,336  353,460,552

--------------------------------------------------------------------------------
3. Latching report
--------------------------------------------------------------------------------

Run1 used 325,330 latches
Run2 used 5,838,489 latches
Run1 used 94,4% fewer latches than Run2

================================================================================
End of report
================================================================================
Summary:
- Adrian Billington's runstats package will probably displace my home-grown solution
- already from a deletion of 5% of the data, truncate is at least as fast as delete, and at larger percentages it is clearly more efficient (a possible ETL decision rule is sketched below)
- already from 5% deletions on, truncate uses considerably fewer latches
- up to 25% deletions the redo size is higher for the truncate case, at higher values then for the delete case
- the physical reads remain relatively stable for all delete variants
- from 5% deletions on, the logical reads in the delete case rise above the values of the truncate case
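Translated into an ETL job, this suggests a simple rule of thumb. A hedged PL/SQL sketch, reusing the test tables from above (the 5% threshold is only what this particular test suggests; it would have to be calibrated for a real system):

declare
  v_total  number;
  v_to_del number;
begin
  select count(*) into v_total from t2;
  select count(*) into v_to_del from t2 where col1 <= &pct_del;

  if v_to_del > 0.05 * v_total then
    -- more than 5% affected: empty the table and reload it completely
    execute immediate 'truncate table t2';
    insert into t2 select * from t0;
  else
    -- small share affected: targeted delete plus re-insert
    delete from t2 where col1 <= &pct_del;
    insert into t2 select * from t0 where col1 <= &pct_del;
  end if;

  commit;
end;
/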