Our nightly full export (EXPDP) of the production database (660GB) started taking much longer: 7 to 9 hours instead of the usual run of under an hour.
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Tue Mar 8 10:18:59 2018 elapsed 0 08:18:54
When I came in the next day, the database export was still running. I started to investigate and found the following query in a forum thread to see what the Data Pump sessions were waiting on.
SELECT dps.job_name, dps.session_type, s.sid, s.serial#, s.blocking_session, s.event,
       s.state, s.wait_time_micro, sql.sql_fulltext
FROM   dba_datapump_sessions dps
JOIN   v$session s ON dps.saddr = s.saddr
LEFT JOIN v$sql sql ON s.sql_id = sql.sql_id AND s.sql_child_number = sql.child_number;
JOB_NAME           SESSION_TYPE  SID   SERIAL# BLOCKING_SESSION EVENT                                        STATE   WAIT_TIME_MICRO SQL_FULLTEXT
------------------ ------------- ----- ------- ---------------- -------------------------------------------- ------- --------------- ----------------------------------------------------------------------
SYS_EXPORT_FULL_01 WORKER        13199 14950   [NULL]           wait for unread message on broadcast channel WAITING 205331          BEGIN :1 := sys.kupc$que_int.transceive_int(:2, :3, :4, :5, :6); END;
SYS_EXPORT_FULL_01 WORKER        11315 22164   [NULL]           Streams AQ: enqueue blocked on low memory    WAITING 2179507         BEGIN sys.kupc$que_int.send(:1, :2, :3); END;
SYS_EXPORT_FULL_01 WORKER        9401  13405   [NULL]           wait for unread message on broadcast channel WAITING 205364          BEGIN :1 := sys.kupc$que_int.transceive_int(:2, :3, :4, :5, :6); END;
SYS_EXPORT_FULL_01 WORKER        5702  36208   [NULL]           wait for unread message on broadcast channel WAITING 205342          BEGIN :1 := sys.kupc$que_int.transceive_int(:2, :3, :4, :5, :6); END;
SYS_EXPORT_FULL_01 WORKER        8501  42591   [NULL]           Streams AQ: enqueue blocked on low memory    WAITING 2205789         BEGIN sys.kupc$que_int.send(:1, :2, :3); END;
SYS_EXPORT_FULL_01 WORKER        9442  37052   [NULL]           Streams AQ: enqueue blocked on low memory    WAITING 2205765         BEGIN sys.kupc$que_int.send(:1, :2, :3); END;
SYS_EXPORT_FULL_01 WORKER        3794  13950   [NULL]           wait for unread message on broadcast channel WAITING 2208274         BEGIN :1 := sys.kupc$que_int.transceive_int(:2, :3, :4, :5, :6); END;
SYS_EXPORT_FULL_01 MASTER        8487  11248   [NULL]           Streams AQ: enqueue blocked on low memory    WAITING 2201198         BEGIN sys.kupc$que_int.put_status(:1, :2, :3); END;
SYS_EXPORT_FULL_01 DBMS_DATAPUMP 8484  58424   [NULL]           wait for unread message on broadcast channel WAITING 2205272         BEGIN :1 := sys.kupc$que_int.get_status(:2, :3); END;
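As an optional cross-check (my addition, not from the forum thread), you can also confirm the job itself is still executing through the standard DBA_DATAPUMP_JOBS view:

-- Cross-check the overall Data Pump job state and attached sessions.
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs
WHERE  job_name = 'SYS_EXPORT_FULL_01';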
All Data Pump workers were waiting on “wait for unread message on broadcast channel” or “Streams AQ: enqueue blocked on low memory” events. A quick search on the event name returned Doc ID 1596645.1, EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory. According to the document, every occurrence of the Streams AQ low-memory wait causes about a one-minute delay. No wonder the database export was running so long.
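To get a rough sense of how much time was being lost, one hedged check (my own, not from the MOS note) is to look at the cumulative wait statistics for these two events since instance startup:

-- Cumulative waits for the two events since instance startup.
SELECT event, total_waits, ROUND(time_waited_micro / 1000000) AS seconds_waited
FROM   v$system_event
WHERE  event IN ('Streams AQ: enqueue blocked on low memory',
                 'wait for unread message on broadcast channel');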
The current setting for streams_pool_size was only 128MB.
SQL> show parameter streams_pool_size;

NAME               TYPE        VALUE
------------------ ----------- ------
streams_pool_size  big integer 128M
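It can also help to see what automatic SGA management has actually allocated to the pool; a minimal sketch against the standard V$SGA_DYNAMIC_COMPONENTS view (my addition, not from the original troubleshooting):

-- Current and minimum size of the streams pool under automatic SGA management.
SELECT component,
       current_size / 1024 / 1024 AS current_mb,
       min_size     / 1024 / 1024 AS min_mb,
       granule_size / 1024 / 1024 AS granule_mb
FROM   v$sga_dynamic_components
WHERE  component = 'streams pool';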
After I increased streams_pool_size from 128MB to 256MB per Oracle’s solution, the job finished shortly afterward.
SQL> alter system set streams_pool_size=256m scope=both;

System altered.

SQL> show parameter streams_pool_size;

NAME               TYPE        VALUE
------------------ ----------- ------
streams_pool_size  big integer 256M
The next day, the database export was still running after 6 hours, and the query for expdp sessions still showed the same wait events. So I increased streams_pool_size from 256MB to 640MB. From that day on, expdp consistently took less than an hour.
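For reference, the second increase used the same form of command as before (scope=both is my assumption, mirroring the earlier change):

SQL> alter system set streams_pool_size=640m scope=both;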
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Thu Mar 22 02:53:19 2018 elapsed 0 00:53:13