Slow Running SQL results in
Oracle performance degradation
ChunPei Feng & R. Wang
Environment: Oracle 9iR2 on Unix, with a production database and a standby database
In the morning, the routine daily database check shows that the database is under an
unusually heavy load. As DBAs, our first step is to monitor the top OS processes
with top or prstat, which offer an ongoing, real-time view of processor activity.
In this case, however, the list of the most CPU-intensive processes on the system
reveals nothing unusual that would explain the database performance degradation.
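As a rough, scriptable stand-in for an interactive top or prstat session (a sketch, not part of the original diagnosis; the exact flags below assume a Linux-style ps), the same "most CPU-intensive processes" snapshot can be taken like this:

```shell
# Snapshot the ten most CPU-intensive processes, highest first --
# a non-interactive rough equivalent of a top/prstat check.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 11
```

Capturing the snapshot this way also makes it easy to save and diff against a baseline taken on a normal day.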
Next, examining the top SQL and long-running SQL also fails to reveal the
cause of the performance problem.
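For reference, checks of this kind are typically run against the v$ views; the queries below are an illustrative sketch (the column choices and ROWNUM cutoff are assumptions, not taken from the original diagnosis), written in 9i-compatible syntax:

```sql
-- Top 10 SQL statements by logical reads; change the ORDER BY to
-- disk_reads or executions depending on the symptom being chased.
SELECT *
  FROM (SELECT sql_text, executions, buffer_gets, disk_reads
          FROM v$sqlarea
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;

-- Long-running operations still in flight, with percent complete.
SELECT sid, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
  FROM v$session_longops
 WHERE totalwork > 0
   AND sofar < totalwork;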
In addition, the application development team confirms that no change has been
made at the application level, and the application logs show no unusually heavy
jobs and no excessive user logons.
Based on the information above, we conclude that the performance degradation
is caused by something on the database server itself.
Steps to diagnose:
1. Check and Compare Historical Statspack Reports
At this point, no one can tell which job is responsible for the performance
degradation: with hundreds of processes spread across tens of Unix servers, it
is impractical for DBAs to track them process by process. A more feasible
approach is to turn to Statspack, which provides a wealth of performance
information about an Oracle database. Because we keep historical Statspack
reports, we can compare the current report against last week's. We take the
report generated for the peak period (9:00AM - 10:00AM) and compare it with
the report for the same period last week.
Upon comparison, the immediate finding is that CPU time has increased by about
1,200 seconds (2,341 vs. 1,175). A significant increase in CPU time usually
points to one of two scenarios:
More jobs are loaded
The execution plan of some SQL statements has changed
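The comparison step above can be sketched in a few lines. The two report lines below are hypothetical stand-ins for the "CPU time" rows of the two Statspack reports; only the figures 2,341 and 1,175 come from the case itself:

```python
# Sketch: pull the "CPU time" figure out of two Statspack report
# excerpts and compute the week-over-week increase in seconds.
import re

this_week = "CPU time                                            2,341"
last_week = "CPU time                                            1,175"

def cpu_seconds(report_line: str) -> int:
    """Extract the trailing seconds figure from a 'CPU time' event line."""
    match = re.search(r"CPU time\s+([\d,]+)", report_line)
    return int(match.group(1).replace(",", ""))

increase = cpu_seconds(this_week) - cpu_seconds(last_week)
print(increase)  # 1166 -- the roughly 1,200-second jump seen in the case
```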
Top 5 Timed Events in Statspack Report