ERROR: cannot execute nextval() in a read-only transaction - PostgreSQL - spring

I'm seeing an internal server error in my app, which is built with Spring MVC, runs on WildFly as the application server, and uses PostgreSQL as the database. What does this exception mean?
ERROR: cannot execute nextval() in a read-only transaction
It was all working fine before. I looked for a solution here on Stack Overflow but didn't find anything that fixed the issue.

The function nextval() is used to increment a sequence, which modifies the state of the database.
You get that error because you are in a read-only transaction. This can happen because:
You explicitly started a read-only transaction with START TRANSACTION READ ONLY or similar.
The configuration parameter default_transaction_read_only is set to on.
You are connected to a streaming replication standby server.
If default_transaction_read_only is set to on, you can either start a read-write transaction with
START TRANSACTION READ WRITE;
or change the setting by editing postgresql.conf or with the superuser command
ALTER SYSTEM SET default_transaction_read_only = off;
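If you are not sure which of these applies, here is a minimal JDBC sketch for inspecting the default and forcing a read-write transaction; the connection URL, credentials and my_seq are placeholders, not from the question, and this will not help if you are on a standby server:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadOnlyCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "myuser", "secret");
             Statement st = con.createStatement()) {
            // Ask the server for the session default (cause 2 above).
            try (ResultSet rs = st.executeQuery("SHOW default_transaction_read_only")) {
                rs.next();
                System.out.println("default_transaction_read_only = " + rs.getString(1));
            }
            // Force the next transaction into read-write mode regardless of the default.
            con.setAutoCommit(false);
            st.execute("SET TRANSACTION READ WRITE"); // must be the first statement of the transaction
            try (ResultSet rs = st.executeQuery("SELECT nextval('my_seq')")) { // hypothetical sequence
                rs.next();
                System.out.println("nextval = " + rs.getLong(1));
            }
            con.commit();
        }
    }
}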

As others have pointed out, nextval() actually updates the database to get the next sequence value, so it can't be used in a transaction that is marked as read-only.
As you're using Spring, I suspect this means you're using Spring's transaction support. If you're using annotation-based transaction support, then you'd get a read-only transaction if you have
@Transactional(readOnly=true)
That means that when Spring starts the transaction, it will put it into read-only mode.
Remove the readOnly=true bit and a regular writable transaction is created instead.
Spring transaction control is documented at http://docs.spring.io/spring-framework/docs/4.2.x/spring-framework-reference/html/transaction.html
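For illustration, a minimal sketch of the two variants; AccountService, Account and AccountRepository are invented names, not from the question:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final AccountRepository repo; // hypothetical DAO

    public AccountService(AccountRepository repo) {
        this.repo = repo;
    }

    // readOnly = true: Spring marks the transaction read-only, so any
    // INSERT/UPDATE or nextval() call in here fails as in the question.
    @Transactional(readOnly = true)
    public Account find(long id) {
        return repo.findById(id);
    }

    // No readOnly flag: a regular writable transaction, so nextval() works.
    @Transactional
    public Account create(Account account) {
        return repo.save(account);
    }
}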

It means exactly what it says; here is an example:
t=# create sequence so49;
CREATE SEQUENCE
t=# begin;
BEGIN
t=# select nextval('so49');
nextval
---------
1
(1 row)
t=# set transaction_read_only TO on;
SET
t=# select nextval('so49');
ERROR: cannot execute nextval() in a read-only transaction
t=# end;
ROLLBACK
I presume you connected as a so-called "ro" user, that is, a user with "transaction_read_only" set to on, e.g.:
t=# select rolconfig from pg_roles where rolname ='ro';
rolconfig
----------------------------
{transaction_read_only=on}
(1 row)
You can of course switch that off for your user (a sketch follows after the link below), though I believe that is out of scope here.
https://www.postgresql.org/docs/current/static/sql-set-transaction.html
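As a sketch of that fix (assuming sufficient privileges, e.g. a superuser connection, and that the role really is named ro), clearing the per-role default so that new sessions start read-write:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ResetRoRole {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; run with sufficient privileges.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/t", "postgres", "secret");
             Statement st = con.createStatement()) {
            // Removes the {transaction_read_only=on} entry seen in rolconfig above.
            st.execute("ALTER ROLE ro RESET transaction_read_only");
        }
    }
}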

Related

Why Update using @Query in Spring Data JPA requires @Transactional

I am new to Spring Boot and use Spring Data JPA to talk to the database. I have a method that updates a table using @Query. When I try to update, I get an InvalidDataAccessApiUsageException; when I add @Transactional, the update succeeds. Isn't an update a single operation, so shouldn't it be committed automatically?
There are two ways in which transactions execute in SQL:
Implicit -> The database, while running a write query (UPDATE, INSERT, ...), starts a transaction behind the scenes and then executes the statement. If an error occurs, that transaction is discarded and no change is written.
Explicit -> You explicitly mark the start with BEGIN, discard the work with ROLLBACK, and finally write it with COMMIT.
Early versions of Postgres (< 7.4) had a configuration setting called autocommit which, when set off, DISABLED implicit transactions. It was removed in 2003, since databases were smart enough to discard uncommitted work without creating inconsistencies.
In a nutshell, at any point, running
UPDATE table_name SET ... WHERE id IN (....)
is EXACTLY the same as running
BEGIN
UPDATE table_name SET ... WHERE id IN (....)
COMMIT
In JPA, autocommit is now just a runtime validation for write queries.
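Tying this back to the question, here is a minimal sketch of the Spring Data pattern that makes such an update work; User, UserRepository and updateName are invented for illustration, and @Modifying is the annotation Spring Data requires to mark a @Query as a write:
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.transaction.annotation.Transactional;

public interface UserRepository extends JpaRepository<User, Long> {

    // @Modifying marks the @Query as a write; @Transactional supplies the
    // transaction the write must run in (the piece that was missing).
    @Modifying
    @Transactional
    @Query("update User u set u.name = :name where u.id = :id")
    int updateName(@Param("id") Long id, @Param("name") String name);
}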

Oracle 12c - Find if temporary objects created before TEMP_UNDO_ENABLED is set

Database : Oracle 12c (12.1.0.2) - Enterprise Edition with RAC
I'm trying to reduce REDO and archive logs generated for my application and measure using V$SYSSTAT and corresponding archive logs using DBA_HIST* views.
In my application code on the DB side, I'm using the session-level setting TEMP_UNDO_ENABLED to direct undo for GTTs (global temporary tables) into the temporary tablespace. The specific feature is noted here.
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
INSERT INTO my_gtt VALUES...
Note the documentation has this quote:
..if the session already has temporary objects using regular undo, setting this parameter will have no effect
If I use a pure database session, I can ascertain that, since no other temporary tables have been created/used before setting the parameter, the REDO generated is minimal. I can use a simple (select value from V$SYSSTAT where name = 'redo size') to see the difference.
However, the actual application (Java) triggers this code through a JDBC session. As such, I'm unable to ascertain whether any GTTs or other temporary objects were created/used in the session before the call to 'ALTER SESSION ...'. The consequence is that if, say, a GTT was already used, the call to 'ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE' is simply ignored without any indication, and the code continues logging UNDO and REDO in the normal tablespace, which is unintended.
Is there any way to query whether TEMP_UNDO_ENABLED is already set/unset within the session, so that before I issue ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE I know for sure whether it will have an effect?
Thanks in advance for inputs.
There is no holistic way to do this that satisfies all cases. Posting some options I got as answers elsewhere:
Assumptions: both options work only if
Only GTTs are concerned (excluding WITH and other temporary objects)
COMMIT/ROLLBACK has not already been done, including from SAVEPOINTs or other methods
Option 1: Use v$tempseg_usage to check whether any segment was created in DATA instead of TEMP_UNDO:
select count(*)
  from v$tempseg_usage
 where contents = 'TEMPORARY'
   and segtype = 'DATA'
   and session_addr = (select saddr
                         from v$session
                        where sid = sys_context('userenv', 'sid'));
Option 2: Use gv$transaction as below; ubafil = 0 if undo goes to temp undo, otherwise ubafil is the undo tablespace file id:
select count(*)
  from gv$transaction
 where ses_addr = (select saddr
                     from v$session
                    where sid = sys_context('userenv', 'sid'))
   and ubafil <> 0;
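For the JDBC session in the question, here is a hedged sketch wiring Option 1 in front of the ALTER SESSION call; the connection details are placeholders, and it assumes the check runs on the same session that will do the insert:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TempUndoGuard {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "app", "secret");
             Statement st = con.createStatement()) {
            // Option 1 from above: a temp DATA segment in this session means
            // TEMP_UNDO_ENABLED would be silently ignored.
            try (ResultSet rs = st.executeQuery(
                    "select count(*) from v$tempseg_usage"
                    + " where contents = 'TEMPORARY' and segtype = 'DATA'"
                    + " and session_addr = (select saddr from v$session"
                    + " where sid = sys_context('userenv', 'sid'))")) {
                rs.next();
                if (rs.getInt(1) == 0) {
                    st.execute("ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE");
                    // ... INSERT INTO my_gtt ... (from the question)
                } else {
                    System.out.println("TEMP_UNDO_ENABLED would have no effect here");
                }
            }
        }
    }
}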
On another note, for thought: I still think there should be a parameter, or an indication elsewhere within the scope of a SESSION, that simply tells you the setting of TEMP_UNDO_ENABLED has had no effect, without having to touch views that would otherwise be considered administrative.
I'm open to answers if someone finds a better approach.
Although this does not answer your question directly, this link may help you.
In 12c the temporary undo concept was added:
"Oracle 12c introduced the concept of Temporary Undo, allowing the undo for a GTT to be written to the temporary tablespace, thereby reducing undo and redo."

Not able to insert the same record after connection interruption

I was inserting some records into the production table, and before I could commit, I lost the connection; none of the records got inserted.
Now when I try to insert the same records, SQL*Plus hangs and the data is not saved.
But when I try other records, ones I had not inserted before, those get inserted.
I have checked the table again for the data; the previous data has not been stored anywhere.
SQL*Plus is not generating any error either, so I cannot check the error and try to rectify it.
Can anyone please help me analyse and troubleshoot the problem?
In short: the connection was lost while inserting into Oracle, and now I am not able to add the same data.
If your SQL*Plus session hangs, it's probably being blocked by your previous session. To find the offending session, you can use (requires DBA privileges):
select * from v$lock where block = 1
This should give you the session ID of the blocking session. Now you can run
select * from v$session
and check whether the session ID returned by the first query indeed belongs to your previous session. To kill the session, use the command
alter system kill session '<SID>,<serial#>'
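If you'd rather script the two lookups than run them by hand, here is a small sketch joining the two views; DBA privileges are still required, the connection details are placeholders, and it prints the kill command instead of executing it:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FindBlocker {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; requires access to v$lock/v$session.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "system", "secret");
             Statement st = con.createStatement();
             // Join v$lock to v$session to get SID and SERIAL# of blockers.
             ResultSet rs = st.executeQuery(
                 "select s.sid, s.serial# from v$session s"
                 + " join v$lock l on l.sid = s.sid where l.block = 1")) {
            while (rs.next()) {
                System.out.printf("alter system kill session '%d,%d'%n",
                        rs.getInt(1), rs.getInt(2));
            }
        }
    }
}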

Oracle: two different sessions

In our work we created two .NET listeners.
The first one:
calls an Oracle stored procedure that inserts a bulk of data into a table (table1) using insert into ... select syntax:
insert into table1 select c1,c2... from tbl2 inner join tbl3....
then we use an explicit commit;
The second listener:
calls an Oracle procedure that reads the data inserted into table1 via listener 1.
But we noticed that even after a record was inserted into table1, listener 2 couldn't see that record at the same time, even though commit was used.
My question is: how does commit work when we use insert ... select?
Is this issue related to the session? When listener 1's session ends, can listener 2 read the data?
Please help; thanks in advance.
You're using the wrong terms...
A listener is a server application that listens to incoming client requests and hands them to the DB engine. A listener is not used on the client end.
A session is not related to the data you can see; a transaction is the object that controls that.
Oracle works in a very clear way - after a transaction has committed, all new transactions can see its changes, and already existing transactions can see the new content depending on their transaction configuration.
I recommend reading about isolation levels in that context: http://msdn.microsoft.com/en-us/library/system.transactions.isolationlevel(v=vs.110).aspx
By default - the moment (which in the DB is defined by an SCN) a transaction has been committed - the data is visible to the client.
Bottom line - your issue is related either to transaction isolation levels (in case the reading transaction started before the commit), or to the writer, which does not commit the data when you think it does (a transaction issue).
After the call to transaction.Commit() in .NET returns, the data is already visible, and other transactions can see it.
Your second question was how commit works.
This is a very complicated process in Oracle, so I'll give a really short description:
1. When you commit, Oracle first runs some verifications before the commit itself (for example, it runs the deferred constraints).
2. Once Oracle knows it can safely commit the changes, it gets the system change number (SCN), writes the commit record to the redo log, and flushes the redo to disk (for consistency).
3. It sends an ACK to the user; the data is now visible to the world.
4. It marks the buffers that were used as free.
Something I want to add, just to make sure (I'm writing this half asleep, so excuse me if it does not compile...):
in your .NET code, it should be logically equivalent to this:
OracleConnection con = new OracleConnection(connStr);
con.Open();
OracleTransaction trans = con.BeginTransaction();
OracleCommand cmd = con.CreateCommand();
cmd.Connection = con;   // was "cmd.Connection = cmd;", which does not compile
cmd.CommandText = "insert into ...";
cmd.ExecuteNonQuery();
cmd.Dispose();
trans.Commit();         // the data becomes visible to other sessions here
trans.Dispose();
con.Close();
con.Dispose();
And if you're using LINQ - make sure you create the transaction scope in the right place.

How can I close Oracle DbLinks in JDBC with XA datasources and transactions to avoid ORA-02020 errors?

I have a JDBC-based application which uses XA datasources and transactions which span multiple connections, connected to an Oracle database. The app sometimes needs to make queries using a join with a table from another (Oracle) server through a shared DbLink. The request works if I don't do it too often, but after 4 or 5 requests in rapid succession I get an error (ORA-02020 - too many database links in use). I did some research, and the suggested remedy is to call "ALTER SESSION CLOSE DATABASE LINK <link_name>". If I call this after the query that joins the DbLink table, I get ORA-02080 (database link is in use). If I call it before the query, I get ORA-02081 (database link is not open). Does this call do any good at all? The JDBC connection is closed long before the transaction commit (which is managed either by the servlet or by the EJB container, depending on the circumstances). I get the impression that when the connection closes, Oracle marks the link as closed, but it takes a minute or two for it to return to the pool of available links. I understand I could enlarge the pool of links (using the open_links property in the config file), but that won't guarantee that I won't have the same problem under a heavier load. Is there something I can do differently to get the dblinks to close more rapidly?
Any distributed SQL, even a select, will open a transaction that must be closed before you can close the database link. You need to either rollback or commit before you call ALTER SESSION CLOSE DATABASE LINK.
But it sounds like you've already got something else handling your transactions. If it's not possible to manually rollback or commit, you should try to increase the number of open links. The OPEN_LINKS parameter is the maximum number of links per session. The number of links you need isn't really dependent on the load; it should be based on the maximum number of distinct remote databases.
Edit:
The situation you describe in your comment shouldn't happen. I don't understand enough about your system to know what's really happening with the transactions. Anyway, if you can't figure out exactly what the system is doing maybe you can replace "alter session close database link" with a procedure like this:
create or replace procedure rollback_and_close_db_links authid current_user is
begin
  rollback;
  for links in (select db_link from v$dblink) loop
    execute immediate 'alter session close database link ' || links.db_link;
  end loop;
end;
/
You'll probably need this grant:
grant select on v_$dblink to [relevant user];
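On the JDBC side, the procedure could then be called right before the logical unit of work ends. A hypothetical sketch, with placeholder connection details; note that the procedure rolls back whatever the session still has pending, so only call it when that is acceptable:
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CloseDbLinks {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "app", "secret")) {
            // Rolls back the distributed transaction, then closes every open link.
            try (CallableStatement cs = con.prepareCall(
                    "{call rollback_and_close_db_links}")) {
                cs.execute();
            }
        }
    }
}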
