Recently we faced an issue where CPU utilization on the servers of a 3-node RAC (10.2.0.5) was high. The load testing team reported it while running load tests on the machine and wanted us to have a look. I was asked to find the reason, so I logged in to the box and started checking. Nodes 2 and 3 each showed ~500 sessions waiting on the cursor: pin S wait on X event.
Tracking the issue back –
11:54:38 [email protected] > select wait_class_id, wait_class, count(*) cnt
                        from dba_hist_active_sess_history
                        where snap_id between 81801 and 81805
                        group by wait_class_id, wait_class
                        order by 3;

WAIT_CLASS_ID WAIT_CLASS                                                              CNT
------------- ---------------------------------------------------------------- ----------
   3386400367 Commit                                                                  724
   4108307767 System I/O                                                             1382
   3871361733 Cluster                                                                1662
   4217450380 Application                                                            6837
   1740759767 User I/O                                                              13593
   2000153315 Network                                                               13643
   1893977003 Other                                                                 13735
                                                                                    27435
   3875070507 Concurrency                                                         1570590
We can see a large number of waits in the Concurrency class. Let's dig in some more.
12:02:01 [email protected] > select event_id, event, count(*) cnt
                        from dba_hist_active_sess_history
                        where snap_id between 81803 and 81805
                        and wait_class_id=3875070507
                        group by event_id, event
                        order by 3;

  EVENT_ID EVENT                                                                   CNT
---------- ---------------------------------------------------------------- ----------
2161531084 buffer busy waits                                                         1
1035026728 enq: TX - index contention                                                2
1117386924 latch: row cache objects                                                 15
2779959231 latch: cache buffers chains                                              16
1714089451 row cache lock                                                           20
1729366244 cursor: pin S wait on X                                               42428

6 rows selected.
Above we can see that cursor: pin S wait on X was dominating. Let's find the SQL IDs.
12:10:37 [email protected] > select sql_id, count(*) cnt
                        from dba_hist_active_sess_history
                        where snap_id between 81803 and 81805
                        and event_id=1729366244
                        group by sql_id
                        having count(*) > 5000
                        order by 2;

SQL_ID               CNT
------------- ----------
9gwcrh9842s2w      16758
g41vxyz956uvu      16804

2 rows selected.
So SQL IDs 9gwcrh9842s2w and g41vxyz956uvu had large wait counts for cursor: pin S wait on X. As per the Oracle documentation –
cursor: pin S wait on X — A session waits for this event when it is requesting a shared mutex pin and another session is holding an exclusive mutex pin on the same cursor object.
To check which session is holding the exclusive pin on the mutex –
select p1, p2raw, count(*)
from   v$session
where  event = 'cursor: pin S wait on X'
and    wait_time = 0
group  by p1, p2raw;
p1 –> Mutex Id
p2raw –> The top bytes of p2raw give the blocking session id (SID) in hexadecimal, as decoded in the sketch below.
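As a rough sketch of decoding the blocker (assuming a 64-bit platform where p2raw is 8 bytes and the blocking SID sits in the top 4 bytes; the layout differs on 32-bit), something along these lines can be used:

select p1 mutex_id,
       -- top 4 bytes of p2raw = blocker SID (assumption: 64-bit platform)
       to_number(substr(rawtohex(p2raw), 1, 8), 'XXXXXXXX') blocking_sid,
       count(*) waiters
from   v$session
where  event = 'cursor: pin S wait on X'
and    wait_time = 0
group  by p1, substr(rawtohex(p2raw), 1, 8);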
In our case, we checked the session wait events for the two SQL IDs and saw two distinct wait events: cursor: pin S wait on X and single-task message.
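A quick way to see this on a live system is to query the sessions running those two SQL IDs (a minimal sketch; the column choice is illustrative):

select inst_id, sid, sql_id, event, seconds_in_wait
from   gv$session
where  sql_id in ('9gwcrh9842s2w', 'g41vxyz956uvu')
order  by event;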
single-task message — When running single task, this event indicates that the session waits for the client side of the executable.
The session with single-task message had a logon time of ~100 hrs and its SQLs indicated some dblink operation.
---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
---------------------------------------------------------------------------------------------------------------------
..........
|   6 |  REMOTE                          | ABC_CONFIG_VW     |     1 |    39 |     2   (0)| 00:00:01 | MATRIX | R->S |
..........
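To confirm how long the sessions waiting on single-task message had been connected, a check along these lines can be used (a sketch against gv$session):

select inst_id, sid, serial#, logon_time,
       round((sysdate - logon_time) * 24) hours_connected
from   gv$session
where  event = 'single-task message';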
The remote box had been rebooted ~100 hrs back, but the connections were still open and the sessions had gone into a loop. Killing these active sessions with the wait event "single-task message" reduced the active session count from ~500 to ~5 on each node, which in turn also brought the CPU utilization down.
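For reference, the kill statements can be generated from v$session along these lines (a sketch; on 10.2 each statement has to be run on the instance that owns the session):

select 'alter system kill session '''||sid||','||serial#||''' immediate;' kill_stmt
from   v$session
where  event = 'single-task message';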
While looking into the issue, I came across the below blog post.
http://www.oracledatabase12g.com/archives/single-task-message-and-cursor-pin-s-on-x.html