Oracle 11g - Determining Peak Processes and Sessions

As part of the reconfiguration of our Oracle database, I am trying to figure out the peak number of processes for an instance of our database.
While I am able to obtain the current processes/sessions running on the database, I am unsure whether there is a function/view which contains this information in 11g.
I've done a search, but nothing has really come up.
Thanks,
C

I hope I understand correctly that you want to find historic data on session activity in order to monitor performance. There are several solutions I can think of.
I'd suggest DBA_HIST_ACTIVE_SESS_HISTORY, V$ACTIVE_SESSION_HISTORY for historical data and V$SESSION for current data. (I guess you are using V$SESSION to get the current processes/sessions status).
If that is not sufficient, you can always create your own procedure with scheduled execution and store snapshots of the current status somewhere, or use a Zabbix trapper (or any other monitoring tool of your choice) to monitor DB activity.
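A minimal sketch of that do-it-yourself approach, assuming a hypothetical SESSION_SNAPSHOTS table and job name (the owning schema needs a direct grant to query the V$ views from PL/SQL):

```sql
-- Hypothetical snapshot table; name and columns are illustrative.
create table session_snapshots (
  snap_time     date default sysdate,
  session_count number,
  process_count number
);

-- Capture the current counts every 5 minutes via DBMS_SCHEDULER.
begin
  dbms_scheduler.create_job(
    job_name        => 'SNAP_SESSION_COUNTS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          insert into session_snapshots (session_count, process_count)
                          select (select count(*) from v$session),
                                 (select count(*) from v$process)
                          from dual;
                        end;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
    enabled         => TRUE);
end;
/
```

A simple MAX() over the snapshot table then gives you the observed peak for any window you like.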
If you have Oracle Enterprise Manager, I'd recommend using that. You can use the Performance page there to see both current and historical data on sessions, and also generate ASH reports.

Peak processes and sessions are available in the Automatic Workload Repository (AWR) in DBA_HIST_SYSMETRIC_SUMMARY.
--Maximum number of sessions.
select instance_number, maxval, begin_time
from dba_hist_sysmetric_summary
where metric_name = 'Session Count'
order by maxval desc, begin_time desc;
--Maximum number of processes.
--This is a little trickier because only the Limit is stored so it must be compared
--to the parameter value.
--This assumes that the PROCESSES parameter hasn't changed recently.
select instance_number, round(maxval/100 * gv$parameter.value) processes, begin_time
from dba_hist_sysmetric_summary
join gv$parameter
on dba_hist_sysmetric_summary.instance_number = gv$parameter.inst_id
where gv$parameter.name = 'processes'
and metric_name = 'Process Limit %'
order by processes desc, begin_time desc;
That's probably enough information to set the PROCESSES and SESSIONS parameters. If what you really need is to know how active the database was, then looking at
DBA_HIST_ACTIVE_SESS_HISTORY, like J91321 mentioned, is probably a better method.
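As a quick cross-check on the AWR numbers, Oracle also tracks high-water marks since the last instance startup in V$RESOURCE_LIMIT:

```sql
-- High-water marks since instance startup (resets on restart).
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('processes', 'sessions');
```

MAX_UTILIZATION here only covers the current instance lifetime, so the AWR queries above remain the better source for longer-term peaks.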

Related

Pagination on db side vs application side

I have a simple app which executes a query on the DB. Since there are a lot of rows returned (~300-400k), it is too much to retrieve at once and causes an out-of-memory error, so I have to use pagination. In groovy.sql.Sql we have rows(String sql, int offset, int maxRows), but it works very slowly; for example, with a step of 20k rows the execution time of the rows method starts at around 10 seconds and increases with every subsequent call. The second way of achieving pagination is using a built-in mechanism, for example:
select *
from (
    select /*+ first_rows(25) */
           your_columns,
           row_number() over (order by something unique) rn
    from your_tables )
where rn between :n and :m
order by rn;
For my query, the second approach took 5 seconds with a step of 20k. My question is: which method is better for the database? And what is the reason for the slow execution of Sql.rows?
The first_rows hint is no longer needed as of Oracle 11g. For Oracle, the best approach is the producer-consumer design pattern, since the database generates data "on the fly".
So a simple pure select would be suitable:
select your_columns,
       row_number() over (order by something unique) rn
from your_tables;
But unfortunately Java frameworks usually cannot keep the DB connection open. They simply fetch all data at once and then hand the whole result set over to the caller.
You do not have many options. Either:
you will need a lot of RAM to fetch everything (you can also use lazy loading at the JPA level), or
you have to find a way to keep the DB connection open in a web application, which is practically impossible. Such an approach is also not suitable for applications with thousands of concurrent users.
PS: under usual circumstances, the usual way pagination is implemented does not return consistent data, as the data can change between executions. So it should not be used for anything other than display purposes.
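For completeness, another common pattern worth weighing here is keyset (seek) pagination, which avoids renumbering the whole result set on every call. A sketch, assuming the ordering column is a unique key named id (adjust to your schema):

```sql
-- Fetch the next page after the last key seen by the client.
select *
from (
    select your_columns
    from your_tables
    where id > :last_seen_id   -- key value from the previous page
    order by id )
where rownum <= 20;            -- page size
```

Because each call seeks directly to the last key instead of counting past all earlier rows, the cost per page stays roughly constant rather than growing with the offset.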

Inactive sessions in Oracle

I would like to ask a question :
This is my environment :
Solaris Version 10; Sun OS Version 5.10
Oracle Version: 11g Enterprise x64 Edition.
When I am running this query :
select c.owner, c.object_name, c.object_type,b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a , v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;
Sometimes many of the sessions have a status of 'INACTIVE'.
What does this INACTIVE status mean?
Will this make my DB and application slow?
What are the effects of the ACTIVE and INACTIVE statuses?
What does this INACTIVE status mean?
Just before the oracle executable executes a read to get the next "command" that it should execute for its session, it will set its session's state to INACTIVE. After the read completes, it will set it to ACTIVE. It will remain in that state until it is done executing the requested work. Then it will do the whole thing all over again.
Will this make my DB and application slow?
Not necessarily. See the answer to your final question.
What are the effects of the ACTIVE and INACTIVE statuses?
The consequences of a large number of sessions (ACTIVE or INACTIVE) are significant in two ways.
The first is if the number is monotonically increasing, which would lead one to investigate the possibility that the application is leaking connections. I'm confident that such a catastrophe is not the case otherwise you would have mentioned it specifically.
The second, where the number fluctuates within the declared upper bound, is more likely. According to Andrew Holdsworth and other prominent members of Oracle's Real-World Performance (RWP) group, some architects allow too many connections in the application's connection pool, and they demonstrate what happens (response-time and availability consequences) when it is too high. They also have a prescription for how to better define the connection pool's attributes and behavior.
The essence of their argument is that by allowing a large number of connections in the pool, you allow them to all be busy at the same time. Rather than having the application tier queue transactions, the database server may have to play a primary role in queuing for low level resources like disk, CPU, network, and even other things like enqueues.
Even if all the sessions are busy for only a short time and they're contending for various resources, the contention is wasteful and can repeat over and over and over again. It makes more sense to spend extra time devising a good user experience queueing model so that you don't waste resources on what is undoubtedly the most expensive (hardware and software licenses) tier in your architecture.
ACTIVE means the session is currently executing some SQL operation, whereas INACTIVE means the opposite. Check out the Oracle V$SESSION documentation.
By nature, a high number of ACTIVE sessions will slow down the whole DBMS, including your application. To what extent is hard to say; you have to look at I/O, CPU, and other loads.
Inactive sessions will have a low impact unless you exceed the maximum session number.
In simple words, an INACTIVE status in v$session means no SQL statement is being executed at the time you check in v$session.
On the other hand, if you see a lot of inactive sessions, first check the last activity time for each session.
What I suspect is that they are part of a connection pool and are being used frequently. You shouldn't worry about it, since from the application's perspective the connection pool will take care of them. You might see such INACTIVE sessions for quite a long time.
It does not at all suggest the DB is slow.
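As a quick sketch of checking that last activity time: V$SESSION exposes LAST_CALL_ET, the number of seconds since the session last started a call, which for an INACTIVE session is effectively its idle time:

```sql
-- How long each inactive session has been idle, longest first.
select sid, serial#, osuser, machine,
       round(last_call_et / 60) as minutes_idle
from v$session
where status = 'INACTIVE'
order by last_call_et desc;
```

Sessions that cycle between short idle times are typically healthy pool members; ones idle for hours may be abandoned connections.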
Status INACTIVE means the session is not executing any query at the moment.
ACTIVE means it is executing a query.

Actively tracking oracle query performance

Background:
We have a database environment where views are calling views which are calling views... the logic has become complex and now changes to underlying views can have significant impact on the top view being called by the client application.
Now, while we are documenting all the logic and figuring out how to unwind everything, development continues and performance continues to degrade.
Currently I manually run an explain plan on a client query and dig into tuning it. This is a slow and tedious process, and changes may not be examined for ages.
Problem:
I want to generate a report that lists SQL ID and lists changes in actual time/discrepancy between estimated rows and actual rows/changes in buffers/changes in reads in comparison to the average computed over the last month.
I would generally run the following script manually and examine it based just on that day's response.
ALTER SESSION SET statistics_level=all;
set linesize 256;
set pagesize 0;
set serveroutput off;
-- QUERY
SELECT *
FROM table(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
What I am trying to do is see about automating the explain plan query and inserting the statistics into a table. From there I can run a regression report to detect changes in the performance which can then alert the developers.
I was thinking something like this would be common enough to do without resorting to OEM. I can't find anything, so I wonder if there is a more common approach to this?
Oracle provides functionality for this with the Automatic Workload Repository (AWR): http://docs.oracle.com/cd/E11882_01/server.112/e16638/autostat.htm
It's an extra license on top of Enterprise Edition though, I believe. It ought to be usable in non-production environments without additional cost, but check with your Oracle sales rep.
It sounds like you are on the road to re-inventing STATSPACK. Oracle still includes this in the database but no longer documents it, presumably because it's free, unlike AWR and ASH. You can still find the documentation in the 9i manual.
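If you do decide to roll your own, a minimal sketch of the snapshot idea is to periodically copy the cumulative execution statistics from V$SQLSTATS into your own history table and report on the deltas (the SQL_STATS_HISTORY table name here is illustrative):

```sql
-- Hypothetical history table, cloned from the V$SQLSTATS columns of interest.
create table sql_stats_history as
select sysdate as snap_time, sql_id, plan_hash_value,
       executions, buffer_gets, disk_reads, elapsed_time
from v$sqlstats
where 1 = 0;

-- Run on a schedule (e.g. via DBMS_SCHEDULER) to record a snapshot.
insert into sql_stats_history
select sysdate, sql_id, plan_hash_value,
       executions, buffer_gets, disk_reads, elapsed_time
from v$sqlstats;
```

A regression report is then a self-join on sql_id comparing consecutive snapshots; a change in plan_hash_value between snapshots is an especially useful alert condition.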
Active Session History (ASH) is what you are looking for
select * from v$active_session_history where sql_id = :yoursqlid
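To see how that statement's activity trends over time, you can aggregate the ASH samples, keeping in mind that each sample represents roughly one second of active session time:

```sql
-- Approximate active seconds per hour for one statement,
-- split into CPU time and wait time.
select trunc(sample_time, 'HH24') as sample_hour,
       session_state,
       count(*) as active_seconds
from v$active_session_history
where sql_id = :yoursqlid
group by trunc(sample_time, 'HH24'), session_state
order by sample_hour;
```

V$ACTIVE_SESSION_HISTORY only holds recent in-memory samples; for older periods, query DBA_HIST_ACTIVE_SESS_HISTORY the same way (its samples are retained at a coarser interval).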

Control/Limit user's thread usage in Oracle?

Is it possible to control or limit a user's parallel thread usage in Oracle?
Let's say user dev is executing a SELECT query which takes 32 parallel threads.
But, irrespective of the hints or table design, I want the query to run in a single thread, as with the /*+ NOPARALLEL */ hint. This should apply to whatever DML user dev runs against the DB.
Is there any way I could achieve this?
I tried searching for an approach but couldn't get anywhere.
The only way we can limit a user's consumption of system resources is with a profile. The CREATE PROFILE statement provides a couple of options for limiting CPU usage, CPU_PER_SESSION and CPU_PER_CALL, but alas not the number of CPUs.
I would say that in the sort of environment where we would want to impose resource limits - i.e. a live one - the use of parallel query should either be left to the database through the PARALLEL_AUTOMATIC_TUNING parameter or be locked down by the PARALLEL hint on pre-canned queries only.
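As a sketch of the profile approach (the profile name and limit values are illustrative; CPU limits are expressed in hundredths of a second):

```sql
-- Profile limits are only enforced when RESOURCE_LIMIT is TRUE.
alter system set resource_limit = true;

create profile dev_profile limit
  cpu_per_call    3000        -- at most 30 CPU-seconds per call
  cpu_per_session unlimited;

alter user dev profile dev_profile;
```

Note this caps total CPU consumed, not the degree of parallelism, so a heavily parallel query simply hits the ceiling sooner and is terminated with an error.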

Simplest way to get the number of calls to an MSSQL2000 Server

Does anyone know how one can get the total number of calls to an MSSQL2000 server during a specified time, let’s say 24 hours?
We want figures of how many calls our production machine gets per day, but we can’t find any good tools/strategies for this.
Best regards
Fredrik
You could use SQL Profiler?
http://technet.microsoft.com/en-us/library/aa173918(SQL.80).aspx
http://www.sqlteam.com/article/sql-server-2000-performance-tuning-tools
http://support.microsoft.com/kb/325263
I think using SQL Profiler is overkill in this situation, particularly as it can create a substantial load on the server depending on what you trace. SQL Server exposes the raw values used for its performance counters via the sysperfinfo system table; you should be able to run this query once each day and subtract the values to work out how many SQL requests you received for the day:
SELECT cntr_value
FROM sysperfinfo
WHERE object_name = 'SQLServer:SQL Statistics'
AND counter_name = 'Batch Requests/sec'
This will obviously only work if the server is up for the whole day; restarting will reset the number.
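A sketch of that daily delta, using a hypothetical snapshot table (despite its "/sec" name, this counter is a cumulative raw value that client tools divide by elapsed time):

```sql
-- Hypothetical snapshot table; run the INSERT once a day.
CREATE TABLE BatchRequestSnapshots (
    snap_time  DATETIME DEFAULT GETDATE(),
    cntr_value INT
)

INSERT INTO BatchRequestSnapshots (cntr_value)
SELECT cntr_value
FROM sysperfinfo
WHERE object_name = 'SQLServer:SQL Statistics'
  AND counter_name = 'Batch Requests/sec'

-- A day's total is then today's snapshot minus yesterday's,
-- provided the server was not restarted in between.
```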
I solved this another way (all calls are "routed" through an IIS cluster and I was able to analyze their logs).
Thanks!
