Inactive sessions in Oracle

I would like to ask a question.
This is my environment:
Solaris Version 10; Sun OS Version 5.10
Oracle Version: 11g Enterprise x64 Edition.
When I run this query:
select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;
Sometimes many of the sessions have a status of 'INACTIVE'.
What does this inactive status mean?
Will this make my DB and application slow?
What are the effects of the ACTIVE and INACTIVE statuses?

What does this inactive status mean?
Just before the oracle executable executes a read to get the next "command" that it should execute for its session, it will set its session's state to INACTIVE. After the read completes, it will set it to ACTIVE. It will remain in that state until it is done executing the requested work. Then it will do the whole thing all over again.
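As a quick illustration (hedged; exact waits vary by configuration), an INACTIVE session normally sits in the 'SQL*Net message from client' wait, i.e. it is blocked on that read for the next command:

select sid, status, event
from v$session
where status = 'INACTIVE';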
Will this make my DB and application slow?
Not necessarily. See the answer to your final question.
What are the effects of the ACTIVE and INACTIVE statuses?
The consequences of a large number of sessions (ACTIVE or INACTIVE) are significant in two ways.
The first is if the number is monotonically increasing, which would lead one to investigate the possibility that the application is leaking connections. I'm confident that such a catastrophe is not the case, otherwise you would have mentioned it specifically.
The second, where the number fluctuates within the declared upper bound, is more likely. According to Andrew Holdsworth and other prominent members of Oracle's Real-World Performance (RWP) team, some architects allow too many connections in the application's connection pool, and they demonstrate what happens (response-time and availability consequences) when it is too high. They also have a prescription for how to better define the connection pool's attributes and behavior.
The essence of their argument is that by allowing a large number of connections in the pool, you allow them to all be busy at the same time. Rather than having the application tier queue transactions, the database server may have to play a primary role in queuing for low level resources like disk, CPU, network, and even other things like enqueues.
Even if all the sessions are busy for only a short time and they're contending for various resources, the contention is wasteful and can repeat over and over and over again. It makes more sense to spend extra time devising a good user experience queueing model so that you don't waste resources on what is undoubtedly the most expensive (hardware and software licenses) tier in your architecture.

ACTIVE means the session is currently executing some SQL operation, whereas INACTIVE means the opposite. Check out the Oracle v$session documentation.
By nature, a high number of ACTIVE sessions will slow down the whole DBMS, including your application. To what extent is hard to say; here you have to look at the I/O, CPU, etc. loads.
Inactive sessions will have a low impact unless you exceed the maximum session number.
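A quick, hedged way to check that headroom against the SESSIONS initialization parameter:

select (select count(*) from v$session) as current_sessions,
       (select value from v$parameter where name = 'sessions') as max_sessions
from dual;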

In simple words, an INACTIVE status in v$session means that no SQL statement is being executed by that session at the moment you check.
On the other hand, if you see a lot of inactive sessions, first check the last activity time for each session.
I suspect they might be part of a connection pool, and hence getting reused frequently. In that case you shouldn't worry, since the connection pool takes care of them from the application's perspective; such INACTIVE sessions can stay around for quite a long time.
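For example (a hedged sketch), last_call_et reports the number of seconds since a session last started a call, so for an INACTIVE session it approximates idle time:

select sid, serial#, username, status, machine,
       round(last_call_et / 60) as minutes_idle
from v$session
where status = 'INACTIVE'
order by last_call_et desc;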

It does not at all suggest the DB is slow.
A status of INACTIVE means the session is not executing any query right now.
ACTIVE means it is executing a query.

Related

How to detect the CPU-peak-inducing transactions/statements (mostly read/select) from mon$... data?

I can see that the Firebird 2.1 process (on Linux) (for our program) reaches 97% CPU load. The load may be distributed, e.g. the server may have 4 cores of which 2 are consumed at 97% load while the remaining 2 are under normal load (1-10%) from the Firebird process. The bad thing is that this 97% peak can last half an hour, an hour, or even longer.
As I understand it, I just need to determine the Firebird transaction and the Firebird attachment (i.e. connection) that created this peak, and then I can ask the user/software instance that created this connection/attachment to close the program and start anew. When the attachment is closed, Firebird senses this and stops any CPU load and processing that was assigned to that attachment.
So, my aim is to look at the data from the monitoring tables (mon$...) and determine the offending transaction/connection.
I came up with the select (for Firebird 2.1):
select a.mon$user, sa.*, t.*
from mon$transactions t
left join mon$io_stats s on (t.mon$stat_id=s.mon$stat_id)
left join mon$attachments a on (t.mon$attachment_id=a.mon$attachment_id)
left join mon$statements sa on (t.mon$transaction_id=sa.mon$transaction_id)
where s.mon$page_reads>1000000
This SQL seems right, but in practice the results are misleading. For example, my select returns several entries whose a.mon$timestamp is 4 or more hours old. I cannot believe there are transactions that old which are still consuming resources. The strange thing is that those records have no data from the left-joined mon$statements. So I have some information about long-running transactions, but no information about the statements that created or prolonged them. I don't even understand whether such transactions are actually creating the CPU peak or whether this data is obsolete.
So, how can I correct this SQL (or write it completely anew) to find the statements/attachments that are causing the CPU load in Firebird 2.1?
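One direction worth sketching (hedged and untested; it assumes the Firebird 2.1 monitoring columns named below): statements disappear from mon$statements once they finish, which would explain the empty left-joined columns. Starting from mon$statements instead captures only what is running right now, with statement-level counters:

select a.mon$user, a.mon$remote_address,
       st.mon$transaction_id, st.mon$timestamp,
       io.mon$page_fetches, io.mon$page_reads,
       st.mon$sql_text
from mon$statements st
join mon$io_stats io on io.mon$stat_id = st.mon$stat_id        -- statement-level stats
join mon$attachments a on a.mon$attachment_id = st.mon$attachment_id
where st.mon$state = 1                                         -- 1 = currently executing
order by io.mon$page_fetches desc;

Page fetches (buffer cache visits) tend to track CPU use more closely than physical page reads, hence the ordering.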

How does an Oracle ON LOGON trigger impact performance?

I've created a trigger for the logon event as shown below:
CREATE OR REPLACE TRIGGER "log_users_session"
AFTER LOGON ON DATABASE
WHEN USER = 'SomeUser'
BEGIN
INSERT INTO "users_logon_log" ("username","date") VALUES ("Some user",sysdate)
END;
It's a big reporting database. I want to know: will this really slow down database performance, or have side effects?
My Oracle version 19c.
A few objections, if I may.
Get rid of double quotes when working with Oracle, i.e. no "log_users_session" but log_users_session. In Oracle, everything is (by default) stored in the data dictionary in uppercase, but you can reference it any way you want. With double quotes, you MUST always reference it using exactly that letter case, along with the double quotes.
That also affects the column name "date". When you remove the double quotes you'd get date, and that's an invalid name because DATE is reserved as an Oracle datatype; use log_date or something like that.
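Put together, a hedged sketch of the corrected trigger might look like this (it assumes a users_logon_log table with username and log_date columns; 'SOMEUSER' stands in for the real account):

CREATE OR REPLACE TRIGGER log_users_session
AFTER LOGON ON DATABASE
BEGIN
  -- filter inside the body instead of a WHEN clause
  IF USER = 'SOMEUSER' THEN
    INSERT INTO users_logon_log (username, log_date)
    VALUES (USER, SYSDATE);
  END IF;
END;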
As for your question: you decided to log only SomeUser, so if that user doesn't establish a zillion connections I wouldn't expect significant impact. Though if it is a big reporting database and all users (read: people) use the same credentials when establishing a new session, then maybe. On the other hand, what's the purpose of such a setup? You'd just get a large number of connections for the same user for the whole time you monitor it.
Basically, it just depends on what you do and how. It wouldn't cost much if you try it and see how it behaves. If it affects performance, don't use it any longer.
There will be a performance hit ranging from very minimal to significant, depending on the number of concurrent connections. The hit is linearly proportional to the number of concurrent connections: more connections mean a bigger hit, fewer connections a smaller one. It would be ideal to decide based on the average number of users who connect to the system at a given time. I implemented this for a 300 GB database with ~200 connections, and it didn't have much impact.
Also, the users_logon_log table should get regular maintenance/cleanup to keep it from growing too large and occupying significant disk space.
If you only need to record connections to the database, I would simply use the database audit features: https://docs.oracle.com/en/database/oracle/oracle-database/19/dbseg/introduction-to-auditing.html#GUID-F901756D-F747-489C-ACDE-9DBFDD388D3E
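For instance (a hedged sketch; privileges and audit configuration vary), 19c unified auditing can capture logons without any trigger; logon_policy is a name made up for the example:

CREATE AUDIT POLICY logon_policy ACTIONS LOGON;
AUDIT POLICY logon_policy;
-- or, with traditional auditing:
AUDIT SESSION;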

Cache greenplum query plan globally?

I'd like to save planner cost by using a plan cache, since the ORCA/legacy optimizer can take dozens of milliseconds.
I think Greenplum caches query plans at the session level; once a session ends, other sessions cannot share the analyzed plan. Moreover, we can't keep sessions always on, since the GP system will not release resources until the TCP connection is disconnected.
Most major databases cache plans after the first run and reuse them across connections.
So, is there any switch that turns on query-plan caching across connections? Also, within a session I can see that the client's timing statistics do not match the "Total time" the planner gives.
Postgres can cache plans as well, on a per-session basis; once the session ends, the cached plan is thrown away. This can be tricky to optimize/analyze, but it is generally of minor importance unless the query you are executing is really complex and/or there are a lot of repeated queries.
The documentation explains this in detail pretty well. You can query pg_prepared_statements to see what is cached. Note that it is not shared across sessions and is visible only to the current session.
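A minimal illustration (my_table is a placeholder):

PREPARE q (int) AS SELECT * FROM my_table WHERE id = $1;
EXECUTE q(42);
-- visible only in this session:
SELECT name, statement FROM pg_prepared_statements;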
When a user starts a session with Greenplum Database and issues a query, the system creates groups or 'gangs' of worker processes on each segment to do the work. After the work is done, the segment worker processes are destroyed except for a cached number which is set by the gp_cached_segworkers_threshold parameter.
A lower setting conserves system resources on the segment hosts, but a higher setting may improve performance for power-users that want to issue many complex queries in a row.
Also see gp_max_local_distributed_cache.
Obviously, the more you cache, the less memory there will be available for other connections and queries. Perhaps not a big deal if you are only hosting a few power users running concurrent queries... but you may need to adjust your gp_vmem_protect_limit accordingly.
For clarification:
Segment resources are released after the gp_vmem_idle_resource_timeout.
Only the master session will remain until the TCP connection is dropped.
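A hedged way to inspect the settings mentioned above from a session (SHOW comes with Greenplum's Postgres lineage):

SHOW gp_cached_segworkers_threshold;
SHOW gp_vmem_idle_resource_timeout;
SHOW gp_vmem_protect_limit;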

Commits in the absence of locks in CockroachDB

I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
When two users update the same database field (e.g. a general ledger account total field) at the same time, what does CockroachDB do? Assume each is updating many other non-overlapping fields at the same time as part of the respective transactions.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Do we need to take care of additional possibilities than, for example, in ACID/locking PostgreSQL when we write the database access code in our application?
Or is writing code for accessing CockroachDB for all practical purposes the same as for accessing a standard RDBMS, with respect to commits and in general?
Of course, ignoring performance issues / joins, etc.
I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
CockroachDB does have locks, but uses different terminology. Some of the existing documentation that talks about optimistic concurrency control is currently being updated.
When two users update the same database field (e.g. a general ledger account total field) at the same time, what does CockroachDB do? Assume each is updating many other non-overlapping fields at the same time as part of the respective transactions.
One of the transactions will block waiting for the other to commit. If a deadlock between the transactions is detected, one of the two transactions involved in the deadlock will be aborted.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Yes.
Do we need to take care of additional possibilities than, for example, in ACID/locking PostgreSQL when we write the database access code in our application?
Or is writing code for accessing CockroachDB for all practical purposes the same as for accessing a standard RDBMS, with respect to commits and in general?
At a high level there is nothing additional for you to do. CockroachDB defaults to serializable isolation, which can result in more transaction restarts than weaker isolation levels, but comes with the advantage that the application programmer doesn't have to worry about anomalies.
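To make the restart handling concrete, here is a hedged sketch of CockroachDB's documented client-side retry pattern (accounts is a hypothetical table):

BEGIN;
SAVEPOINT cockroach_restart;
UPDATE accounts SET balance = balance - 25 WHERE id = 1;
UPDATE accounts SET balance = balance + 25 WHERE id = 2;
RELEASE SAVEPOINT cockroach_restart;
COMMIT;
-- on a retryable error (SQLSTATE 40001), the client issues
--   ROLLBACK TO SAVEPOINT cockroach_restart;
-- and replays the statements before committing again.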

How does Oracle handle concurrency in a clustered environment?

I have to implement a database solution wherein contention is handled in a clustered environment. In one scenario, multiple users try to access a bank account at the same time and deposit money into it if the balance is less than $100. How can I make sure that no extra money is deposited? Basically, this query is supposed to fire:
update acct set balance=balance+25 where acct_no=x ;
Since the database is clustered, the account ends up receiving deposits multiple times.
I am looking for a purely Oracle-based solution.
Clustering doesn't matter to the mechanism that prevents the scenario you're fearing/seeing, which is locking.
Consider the scenario of user A and then user B trying to do an update, based on a check (less than 100 dollars in the account):
If both the check and the update are done in the same transaction, locking will prevent user B from doing the check UNTIL user A has done both the check and the actual update. In other words, user B will find the check failing, and will not perform the requested action.
When a user says "at the same time", you should know that the computer does not know that concept: all transactions are sequenced, even when their timestamps agree to the millisecond. Consider the SCN (the counter kept in the redo logs); there is only one. Transactions X and Y complete before or after each other, never at the same time.
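A hedged sketch of that check-then-update in one transaction, using SELECT ... FOR UPDATE (table and column names taken from the question; the row lock is honored across RAC nodes):

DECLARE
  v_balance acct.balance%TYPE;
BEGIN
  SELECT balance INTO v_balance
  FROM acct
  WHERE acct_no = :x        -- :x is the account number
  FOR UPDATE;               -- other sessions block here until commit/rollback

  IF v_balance < 100 THEN
    UPDATE acct SET balance = balance + 25 WHERE acct_no = :x;
  END IF;

  COMMIT;
END;

An equivalent single-statement form is UPDATE acct SET balance = balance + 25 WHERE acct_no = :x AND balance < 100; since a single UPDATE reads and writes the row atomically.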
That doesn't sound right ... When Oracle locks a row for update, the lock applies across all nodes. What version of Oracle are you using, and can you provide a step-by-step example of what you're doing?
Oracle 11 doc here:
http://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT020
