PostgreSQL 9.5.6 update statement runs forever - performance

Suddenly we ran into a problem. We have a small table on which we are doing an update. Usually it takes about 2 seconds, but now it never finishes.
I looked in the pg_stat_activity view, but there is no other query in a transaction and no locks.
If I cancel the query with pg_cancel_backend() and rerun it, it finishes without any problem.
But a day later it happened again, and I don't know any source other than pg_stat_activity where I could look.
There is nothing unusual in the postgresql.log.
Does anybody have a suggestion what else could be the problem?
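For reference, the usual way to see which session is blocking which in PostgreSQL is to join pg_locks with pg_stat_activity. Below is a simplified sketch (it only matches on a few lock columns, so it can over-report, and it assumes 9.5, where pg_blocking_pids() does not exist yet):

-- list sessions waiting on a lock together with sessions holding a conflicting lock
select blocked.pid          as blocked_pid,
       blocked_act.query    as blocked_query,
       blocking.pid         as blocking_pid,
       blocking_act.query   as blocking_query,
       blocking_act.state   as blocking_state
from pg_locks blocked
join pg_stat_activity blocked_act on blocked_act.pid = blocked.pid
join pg_locks blocking
  on blocking.locktype      = blocked.locktype
 and blocking.database      is not distinct from blocked.database
 and blocking.relation      is not distinct from blocked.relation
 and blocking.page          is not distinct from blocked.page
 and blocking.tuple         is not distinct from blocked.tuple
 and blocking.transactionid is not distinct from blocked.transactionid
 and blocking.pid          <> blocked.pid
 and blocking.granted
join pg_stat_activity blocking_act on blocking_act.pid = blocking.pid
where not blocked.granted;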

Related

Oracle lock errors - how do they differ?

What is the difference between the two errors below? As far as I understood, they both happen in case of a lock, but do you know the difference in the scenarios where each one might occur?
ORA-04021: timeout occurred while waiting to lock object
and
ORA-00054: resource busy and acquire with NOWAIT specified
An example for ORA-04021 might be this: there's a package in your schema. It contains a procedure which does some job that takes 15 minutes to finish. Someone runs that procedure. Meanwhile, you want to fix something in that package, so you edit its code and want to compile it. Well, you can't - it is being used, so you'll have to wait until it is released. Oracle tells you that a timeout occurred while you were waiting to lock the package and compile it.
An example for ORA-00054: there's a table. You update some values in it, but don't commit (nor roll back) because you have to do something else as well. In another session, another user wants to alter one of the table's columns (for example, enlarge its size). The ALTER will then raise ORA-00054, which says that the table is busy (you're updating it in another session, right?), so they'll have to wait until your transaction commits (or rolls back).
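A minimal sketch of that scenario (the table and column names here are made up for illustration):

-- session 1: update a row and leave the transaction open (no commit / rollback)
update t set col = 'x' where id = 1;

-- session 2: try to alter the column while session 1 still holds its row lock
alter table t modify (col varchar2(200));
-- ORA-00054: resource busy and acquire with NOWAIT specified

-- from 11g onwards, session 2 could instead let the DDL wait for up to 30 seconds
alter session set ddl_lock_timeout = 30;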

Strange Oracle job behavior

I am facing a problem with an Oracle job.
The job runs every 10 minutes and calls a procedure from a package.
Inside the procedure there is a select and then a loop; the select can return anywhere from 10 to 1000 rows.
For one week everything was running fine, but suddenly it is as if the job is no longer calling the procedure.
It still runs successfully every 10 minutes, but the procedure is not affecting any rows.
If I run the procedure on its own, it works properly.
The DBMS Scheduler run details don't show anything unusual - every run was successful. The only difference is that before the problem the run duration was 5 to 30 seconds, and after the problem it is just one second.
Do you know where else I could look?
Log what's going on within the procedure. How? Create an autonomous transaction procedure which inserts log info into a separate table and commits; as it is an autonomous transaction procedure, that commit won't affect the rest of the transaction (i.e. the main procedure itself).
Log every step of the procedure and then review the result. There's probably something going on, but it is difficult to guess what. One option might be that you used the
exception
when others then null;
exception handler which successfully hides the problem.
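A minimal sketch of the autonomous-transaction logger suggested above (the table and procedure names are just illustrative):

create table proc_log (
  logged_at timestamp default systimestamp,
  message   varchar2(4000)
);

create or replace procedure log_msg (p_message in varchar2) is
  pragma autonomous_transaction;  -- commits here do not affect the caller's transaction
begin
  insert into proc_log (message) values (p_message);
  commit;
end;
/

Call log_msg at each step of the main procedure (for example right after the select, and inside the loop) and check proc_log after the next scheduled run.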

SAS connection to Oracle hung up for 2 hours

In SAS we have a library which is actually an Oracle schema, and today I ran into a strange event when trying to query a table in this library.
A regular SAS SQL query:
proc sql;
delete from table where id=123;
quit;
was hung for two hours, while it usually takes a few seconds:
NOTE: PROCEDURE SQL used (Total process time):
real time 2:00:33.49
cpu time 0.03 seconds
While this operation was being performed I tried to delete a nearby row in Oracle SQL Developer, but that delete hung while processing as well. However, deleting a row that was not near these rows did not cause any problems. How can I find out the possible reason? I guess it was some sort of deadlock.
It sounds like someone has locked a row that your session is trying to delete. You should be able to spot this by querying v$session:
select sid, schemaname, osuser, terminal, program, event
from v$session
where type != 'BACKGROUND';
and checking whether your session has an event of "enq: TX - row lock contention" (or similar). If so, you'll have to work out who holds the blocking lock (if you have access to Toad's session browser this is easy to do, but Google should throw up something that can help; or, if your database is Oracle 11.2, there's a view, v$session_blockers, that ought to pinpoint the blocking session), and then get them to either commit or roll back their transaction.
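If the database is 10g or later, v$session also exposes a blocking_session column, so a sketch along these lines should name the blocker directly (single instance assumed; on RAC you would also check blocking_instance):

select s.sid, s.serial#, s.username, s.event,
       s.blocking_session, b.username as blocker, b.program as blocker_program
from v$session s
join v$session b on b.sid = s.blocking_session
where s.event like 'enq: TX%';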

Oracle - Can't Find Long Running Queries

What am I missing here? I am trying to test identifying long-running queries.
I have a test table with about 400 million rows called mytest.
I ran select * from mytest in SQL*Plus.
In another window, I ran the script below to see my long-running query:
select s.username, s.sid, s.serial#, s.schemaname,
s.program, s.osuser, s.status, s.last_call_et
from v$session s
where last_call_et >= 1 -- this is just for testing
My long-running query does not show up in the result of the query above. If I change the criterion to >= 0, then I see my query with status INACTIVE and last_call_et of 0, despite the fact that the query is still running. What can I do to see long-running queries like the select * from... above so that I can kill them?
Thanks
First, you need to understand what a query like select * from mytest is really doing under the covers, because that's generally not going to be a long-running query. Oracle never needs to materialize that result set and isn't going to read all the data as the result of a single call. Instead, what goes on is a series of calls, each of which causes Oracle to do a little bit of work. The conversation goes something like this:
Client: Hey Oracle, run this query for me: select * from mytest
Oracle: Sure thing (last_call_et resets to 0 to reflect that a new call started). I've generated a query plan and opened a cursor; here's a handle. (Note that no work has been done yet to actually execute the query.)
Client: Cool, thanks. Using this cursor handle, fetch me the next 50 rows (the fetch size is a client-side setting).
Oracle: Will do (last_call_et resets to 0 to reflect that a new call started). I started full scanning the table, read a couple of blocks, and got 50 rows. Here you go.
Client: OK, I've processed those. Using this cursor handle, fetch the next 50 rows.
Repeat until all the data is fetched.
At no point in this process is Oracle ever really being asked to do more than read a handful of blocks to get the 50 rows (or whatever fetch size the client requested). At any point, the client could simply not request the next batch of data, so Oracle never needs to do anything long-running. Oracle doesn't track the application think time between requests for more data: it has no idea whether the client is a GUI in a tight loop fetching data, or whether it is displaying a result to a human and waiting for them to hit the "next" button. The vast majority of the time, the session is going to be INACTIVE, because it is mostly waiting for the client to request the next batch of data (which the client generally won't do until it has formatted the last batch for display and done the work to display it).
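As an illustration of the fetch size being a client-side setting: in SQL*Plus it is controlled by ARRAYSIZE (default 15), so each fetch call asks Oracle for that many rows at a time:

-- fetch 500 rows per round trip instead of the default 15
set arraysize 500
select * from mytest;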
When most people talk about a long-running query, they're talking about a query that Oracle is actively processing for a relatively long time with no waits on a client to fetch the data.
You can use the script below to find the most recently started long-running operation via v$session_longops:
select *
from (
  select opname, start_time, target, sofar, totalwork,
         units, elapsed_seconds, message
  from v$session_longops
  order by start_time desc
)
where rownum <= 1;
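v$session_longops only reports operations Oracle expects to run for a relatively long time. A complementary sketch, closer to what the question was attempting, is to look for sessions that have been ACTIVE in their current call for a while:

select sid, serial#, username, sql_id, status, last_call_et
from v$session
where status = 'ACTIVE'
  and type != 'BACKGROUND'
  and last_call_et > 60;  -- busy in the current call for over a minute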

Oracle ref cursor fetch hangs if it contains 1 single record

I have a weird problem right now: if a ref cursor returned from a stored procedure has only 1 record in it, the fetch operation hangs and freezes. The stored procedure itself executes really fast; only the fetching process hangs. If the ref cursor has more than 1 record, then everything is fine. Has anyone had similar issues before?
The Oracle server is 11g running on Linux. The client is Windows Server 2003. I'm testing this using the generic Oracle SQL*Plus tool on the Windows server.
Any help and comments would be greatly appreciated. Thanks.
When you say it hangs, what do you mean?
If the session is still active in the database (check STATUS in V$SESSION), then it is probably waiting on some event (e.g. "SQL*Net message from client" means it is waiting for the client to do something).
It may be that the query is taking a long time to find that there aren't any more rows. Consider a table of 10,000,000 rows with no indexes. The query may full scan the table and find the first row matches the criteria. It still has to scan the next 9,999,999 rows to find that they don't. That can take a while.
Since you say the process hangs, is there a chance that your cursor does a SELECT FOR UPDATE instead of a plain SELECT? Given that fetching multiple records does not cause this problem, that might not be the case.
Can you show us the code (or a small reproducible test/sample) for your select and the fetch?
Also, you can check v$locked_objects with the following query, filling in your table name(s), to see whether the object in question is locked. Again, unless your query has FOR UPDATE, the fetch should not hang.
select do.*
from v$locked_objects vo,
     dba_objects do
where vo.object_id = do.object_id
and do.object_name = '<your_table_name>'
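v$locked_objects also records the session holding each lock, so a sketch like the following (joining to v$session) shows who would need to commit or roll back:

select s.sid, s.serial#, s.username, s.program,
       do.object_name, vo.locked_mode
from v$locked_objects vo
join dba_objects do on do.object_id = vo.object_id
join v$session   s  on s.sid = vo.session_id
where do.object_name = '<YOUR_TABLE_NAME>';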
