Is connection.commit() needed after each cursor.execute()? - pymysql

Is it valid in PyMySQL to call cursor.execute(…) multiple times before calling connection.commit(), or does connection.commit() need to follow each execute statement for the results to be stored properly? The idea is to eliminate as many redundant statements as possible, since the process is long-running.
Code structure:
with connection.cursor() as cursor:
    …
    cursor.execute(SQLtype1, rowToInsert)
    cursor.execute(SQLtype2, otherToInsert)
    connection.commit()  # does this account for both execute statements, or just the last?
I’ve reviewed the following:
A PyMySQL execute/commit example, but its single example has only one execute statement.
A Python MySQLdb example, but its example shows a commit after each execute statement.
A Python SQLite example, which shows multiple execute statements before a single commit, though it's unclear whether SQLite handles this differently.
Note: As the SQL queries are different, executemany doesn’t appear to be an option.

No, you don't need to commit after each execute; committing once for a batch of statements is exactly what connection.commit() is for. What you're describing is auto-commit, which may or may not be enabled for your database driver. Check its documentation to be sure.
If your first query succeeds but the second one fails, you probably don't want your database to be left in an inconsistent state where some rows were inserted but others were not. All the changes you perform are staged until you commit them to the database with connection.commit(). This lets you perform multiple queries in one transaction and roll all of them back if one of them fails.
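For illustration, a minimal sketch of that pattern with PyMySQL — the orders/audit_log tables and the connection parameters are hypothetical placeholders, not anything from the question:

import pymysql

# Placeholder connection details; autocommit is off, so nothing persists until commit().
connection = pymysql.connect(host="localhost", user="app", password="secret",
                             database="mydb", autocommit=False)
try:
    with connection.cursor() as cursor:
        # Both statements run inside the same transaction...
        cursor.execute("INSERT INTO orders (id, total) VALUES (%s, %s)", (1, 9.99))
        cursor.execute("INSERT INTO audit_log (event) VALUES (%s)", ("order created",))
    # ...and a single commit makes both of them permanent.
    connection.commit()
except Exception:
    # If either statement failed, undo everything staged since the last commit.
    connection.rollback()
    raise
finally:
    connection.close()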

Related

PL/SQL Developer statements sometimes do not commit or "stick"

I apologize if this is too vague, but it is a random issue that occurs with many types of statements. Google and Stack Overflow searches have failed me. Here is what I am experiencing, I hope that someone out there has seen or at least heard of this happening and possibly knows of a solution.
From time to time, with no apparent rhyme or reason, statements that I run through PL/SQL Developer against our Oracle databases do not "stick". Last week I ran an update on table A, a commit for the update statement, then a truncate on table B and an insert to table B followed by another commit. Everything seemed to work fine, as in I received no errors. I was, of course, able to query the changes and see that they were made. However, upon logging out and then back in, the changes had not been committed. Even the truncate command had not "stuck" - and truncates do not need a commit performed.
Some details that may be helpful: I am logging into the database server through PL/SQL on a shared account that is used by my team only to gain access to the schema (multiple schemas on each server, each schema has one shared login/PW). Of the 12 people on my team, I am the only one experiencing this issue. I have asked our database administration team to investigate my profile setup and have been told that my profile looks the same as my teammates' profiles. We are forced to go through Citrix to connect to our production database servers. I can only have one instance of PL/SQL open at any time through Citrix, so I typically have PL/SQL connected to several schemas, but I have never been running SQL on more than one schema simultaneously. I'm not even sure if that's possible, but I thought I would mention it. I typically have 3-4 windows open within PL/SQL, each connected to a different schema.
My manager was directly involved in a case where something similar to this happened. I ran four update commands, and committed each one in between; then he ran a select statement only to find that my updates had not actually committed.
I hope that one of my fellow Overflowers has seen or heard of this issue, or at least may be able to provide me with a direction to follow to attempt to get to the bottom of this.
"it has begun to reflect poorly on me and damage my reputation in the company."
What would really reflect poorly on you would be you believing that an Oracle RDBMS is a magical or random device, or, even worse, sentient and conducting a personal vendetta against you. Computers may seem vindictive but that is always us projecting onto them ;-)
The way to burnish your reputation would be through an informed investigation of the situation. Databases do not randomly lose transactions. So, what is going on?
Possible culprits:
Triggers: does table A have an UPDATE trigger which suppresses some of your SQL?
Synonyms: are tables A and B really the tables you think they are?
Ownership: are these tables in another schema which has row-level security enabled (although that should throw an error message if you violate a policy)?
PL/SQL Developer configuration: is the IDE hiding error messages or are you not spotting them?
Object types: are tables A and B really tables? Could they be views with INSTEAD OF triggers suppressing some of your SQL?
Object types: or could A and B be materialized views and your session has QUERY_REWRITE_INTEGRITY=stale_tolerated?
If that last one seems a bit of a stretch, there are other similarly esoteric explanations involving data flashback, pipelined functions and other malarkey. This is a category of explanation which indicates a colleague is pranking you.
How to proceed:
Try different tools. SQL*Plus (or the new SQL Command Line) may produce a different outcome. Rule out PL/SQL Developer.
Write some test cases. Strive to establish reproducible test cases: given a certain set-up this SQL statement always leads to a given outcome (SQL always sticks or always does not).
Eliminate bugs or "funnies" in the queries you use to check the results.
Use the data dictionary to understand the characteristics and associated objects of the troublesome tables. You need to understand what causes the different outcomes. What distinguishes a row where the UPDATE holds compared to one where it does not?
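As a concrete starting point for that data dictionary check, here is a hedged sketch of the kinds of queries you might run — shown through python-oracledb purely for illustration (the owner APP_SCHEMA and table name A are placeholders; the same SQL can be run from any client you trust):

import oracledb

# Placeholder credentials and DSN.
conn = oracledb.connect(user="shared_user", password="secret", dsn="dbhost/orcl")
cur = conn.cursor()

owner, table_name = "APP_SCHEMA", "A"   # hypothetical names standing in for the real ones

# Is "A" really a table, or a view, materialized view or synonym?
cur.execute("select object_type from all_objects where owner = :o and object_name = :n",
            o=owner, n=table_name)
print(cur.fetchall())

# Are there triggers on it that could suppress or redirect the DML?
cur.execute("select trigger_name, trigger_type, status from all_triggers "
            "where table_owner = :o and table_name = :n",
            o=owner, n=table_name)
print(cur.fetchall())

# Is the name actually a synonym pointing at a different object (or another database)?
cur.execute("select table_owner, table_name, db_link from all_synonyms "
            "where synonym_name = :n", n=table_name)
print(cur.fetchall())

conn.close()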
I have used PL/SQL Developer for over a decade and I have never known it silently undo successful truncate operations. If it can do that, AA should add it as a menu item. It seems more likely that you ran the commands against the wrong database connection.
I can feel your frustration, sorry you're going through this. I am surprised, however, that at a large company, your change control process is like this. I don't work for a large multi-national company, but any changes done to a production database are first approved by management and run by the DBAs (or in your case, your team). Every script that is run does a few things:
Lists the database instance information it's connecting to. For example:
select host_name, instance_name, version, startup_time from v$instance;
Spools the output to a file (the DBAs typically use sqlplus, but I'm sure PL/SQL Developer can do the same)
Shows the current date and time (in the beginning and end of the script)
The output file is saved to a change control server (the directory structure makes it easy to pull any changes for a given instance and/or given timeframe)
Exits on any errors:
WHENEVER SQLERROR EXIT SQL.SQLCODE
Any additional checks that need to be run post-script (select counts, etc.)
Shows each command that is being run (set echo on), including the commits!
All of this would allow you to not only verify that the script was run successfully, but would allow you to CYOA. Perhaps you can talk with your team about putting some of this in place in your own environment. Hope that helps.
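As a rough illustration of that checklist (not the poster's actual script — the log file name, table and statements are invented, and python-oracledb stands in here for sqlplus spooling purely as an example):

import datetime
import oracledb

LOG = "change_12345.log"                                 # hypothetical change-control log
STATEMENTS = [
    "update table_a set col1 = 'X' where col2 = 'Y'",    # hypothetical change
]

with open(LOG, "w") as log, oracledb.connect(user="shared_user", password="secret",
                                             dsn="dbhost/orcl") as conn:
    cur = conn.cursor()
    log.write(f"started {datetime.datetime.now()}\n")

    # Record exactly which instance the change ran against.
    cur.execute("select host_name, instance_name, version, startup_time from v$instance")
    log.write(f"instance: {cur.fetchone()}\n")

    for stmt in STATEMENTS:
        log.write(f"running: {stmt}\n")        # echo every statement that is run
        try:
            cur.execute(stmt)
        except oracledb.DatabaseError as exc:
            log.write(f"FAILED: {exc}\n")      # the sqlplus version would exit here
            raise

    conn.commit()
    log.write("commit\n")                      # the commit is logged too
    log.write(f"finished {datetime.datetime.now()}\n")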
I have no way of knowing if my issue is fixed or not, but here is what I've done:
1. I contacted our company's Citrix team to request that they give my team the ability to have several instances of PL/SQL open. This has been done and so will eliminate the need for one instance with multiple DB connections.
2. I contacted the DBAs and had them remove my old profile, then create a new one with a new username.
So far, all SQL I've run under these new conditions has been just fine. However, I have no way of recreating the issue I'm experiencing so I am just continuing on about my business and hoping for the best.
Should I find a few months from now that I have not experienced this issue again I will update this post in case anyone else experiences it.
Thank you all for the accusations of operator error (screenshots prove that this is not operator error but why should you believe me when my own co-workers have accused me of faking the screenshots) and for the moral support.

How to cause a "ORA-01555: snapshot too old error" without updates

I am running into ORA-01555: snapshot too old errors with Oracle 9i but am not running any updates with this application at all.
The error occurs after the application has been connected for some hours without any queries, then every query (which would otherwise be subsecond queries) comes back with a ORA-01555: snapshot too old: rollback segment number 6 with name "_SYSSMU6$" too small.
Could this be caused by the transaction isolation level being set to TRANSACTION_SERIALIZABLE? Or by some other bug in the JDBC code? This could be caused by a bug in the jdbc-go driver, but everything I've read about this bug has led me to believe that it would not occur in scenarios where no DML statements are made.
Below is a very good insight on this error from Tom Kyte. The problem in your case may come from what is called 'delayed block cleanout', a case where even selects generate redo. However, the root cause is almost surely improperly sized rollback segments (though Tom adds, as correlated causes, committing too frequently, a very large read after many updates, etc.).
snapshot too old error (Tom Kyte)
When you run a query on an Oracle database the result will be what Oracle calls a "Read consistent snapshot".
What it means is that all the data items in the result will be represented with the value as of the time the query was started.
To achieve this the DBMS looks into the rollback segments to get the original value of items which have been updated since the start of the query.
The DBMS uses the rollback segment in a circular way and will eventually wrap around - overwriting the old data.
If your query needs data that is no longer available in the rollback segment you will get "snapshot too old".
This can happen if your query is running for a long time on data being concurrently updated.
You can prevent it by either extending your rollback segments or avoid running the query concurrently with heavy updaters.
I also believe newer versions of Oracle provide better dynamic management of rollback segments than Oracle 9i does.
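If you want to see how much undo headroom the instance actually has, here is a hedged sketch of the checks involved (undo_retention and v$undostat assume automatic undo management; python-oracledb is used only as an illustrative client — on an old 9i instance you would run the same SQL from SQL*Plus or JDBC):

import oracledb

# Placeholder connection details.
with oracledb.connect(user="app", password="secret", dsn="dbhost/orcl") as conn:
    cur = conn.cursor()

    # Undo retention target in seconds.
    cur.execute("select value from v$parameter where name = 'undo_retention'")
    print("undo_retention:", cur.fetchone()[0])

    # Recent undo usage and the count of 'snapshot too old' errors per 10-minute window.
    cur.execute("""select *
                     from (select begin_time, undoblks, ssolderrcnt
                             from v$undostat
                            order by begin_time desc)
                    where rownum <= 6""")
    for row in cur.fetchall():
        print(row)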

Oracle database as a single synchronization point for two separate web applications

I am considering using an Oracle database to synchronize concurrent operations from two or more web applications on separate servers. The database is the single infrastructure element in common for those applications.
There is a good chance that two or more applications will attempt to perform the same operation at the exact same moment (cron invoked). I want to use the database to let one application decide that it will be the one which will do the work, and that the others will not do it at all.
The general idea is to perform a select/insert of the node's ID that is somehow atomic and visible to all connections. Only the node whose ID matches the first inserted node ID returned by the select would do the work.
It was suggested to me that a merge statement can be of use here. However, after doing some research, I found a discussion which states that the merge statement is not designed to be called
Another option is to lock a table. By definition, only one node will be able to take the lock and do the insert, then the select. After the lock is released, other instances will see the inserted value and will not perform the work.
What other solutions would you consider? I frown on workarounds with random delays, or even using Oracle exceptions to notify a node that it should not do the work. I'd prefer a clean solution.
I ended up going with SELECT FOR UPDATE. It works as intended. It is important to remember to commit the transaction as soon as the needed update is made, so that other nodes don't hang waiting for the value.
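A hedged sketch of that approach, assuming a small coordination table (here called sync_jobs with job_name and owner_node columns — names invented for illustration, with one pre-seeded row per job) and python-oracledb as the client:

import oracledb

NODE_ID = "web-node-1"   # hypothetical identifier for this server

def try_claim_job(conn, job_name):
    """Return True if this node wins the right to run the job."""
    cur = conn.cursor()

    # The first session to get here holds the row lock; the others block on the
    # same SELECT ... FOR UPDATE until it commits.
    cur.execute("select owner_node from sync_jobs where job_name = :j for update",
                j=job_name)
    row = cur.fetchone()

    if row is not None and row[0] is None:
        # Unclaimed: record ourselves as the owner. (A real version would also
        # track the run timestamp so the job can be claimed again next time.)
        cur.execute("update sync_jobs set owner_node = :n where job_name = :j",
                    n=NODE_ID, j=job_name)
        claimed = True
    else:
        claimed = row is not None and row[0] == NODE_ID

    # Commit promptly, as the poster notes, so other nodes are not left waiting
    # on the row lock.
    conn.commit()
    return claimed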

Oracle- relying on ROLLBACK for data validation

Is relying on the Oracle ROLLBACK command good practice for importing data — inserting the data into the real tables, validating it, and THEN performing a ROLLBACK if validation fails?
I've had a data import program built for our ERP, and looking at the code, they insert the data into the real tables, validate, and if it fails validation, they perform a ROLLBACK. I've always validated data before inserting, but just curious if this is an accepted method to rely on?
There are a few things to remember here:
Constraints enable us to preserve data integrity. This means that constraints allow us to enforce business rules (or at least the most basic of them) at the database level itself.
A commit or a rollback is a method of preserving or undoing the changes made in a transaction. If you issue a commit after a series of successfully run DML statements, the changes are preserved. The rollback statement would undo the changes.
If one statement in a series of DML statements fails, the effects of that particular statement are rolled back. E.g., if an UPDATE statement updates 10 rows and one of those violates a vital constraint, none of the 10 rows is updated. Yet the effects of the preceding statements are not implicitly rolled back.
In order to preserve data integrity and keep the data as per the business requirements, you must issue a manual ROLLBACK statement if any of the DMLs fail.
What you are seeing in your program is the same practice. If you look at the code closely, it doesn't issue a ROLLBACK after a successful transaction, only after a failed DML. Rolling back on failure and committing only when everything goes right is indeed good practice.
Front-end checks on data are indeed an essential part of any application. They ensure that the data being entered conforms to the business rules. Even in this case, constraints must be applied to perform checks at the database level. This is particularly helpful when some rookie makes changes to the front end and tries to enter invalid data. It is also helpful when someone bypasses the application and enters data manually. Hence putting constraints at the database level is always necessary.
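For illustration, here is a hedged sketch of the insert-validate-then-commit-or-rollback pattern being described (the orders table, the validation query and python-oracledb are all stand-ins, not the ERP import program itself):

import oracledb

def import_batch(conn, rows):
    """Insert rows into the real table, validate them there, keep them only if valid."""
    cur = conn.cursor()
    try:
        # Load into the real table; nothing is visible to other sessions yet.
        cur.executemany("insert into orders (id, amount) values (:1, :2)", rows)

        # Validation runs against the uncommitted data in this session.
        cur.execute("select count(*) from orders where amount < 0")
        bad_rows = cur.fetchone()[0]

        if bad_rows:
            conn.rollback()      # validation failed: undo the whole import
            return False
        conn.commit()            # validation passed: make the import permanent
        return True
    except oracledb.DatabaseError:
        conn.rollback()          # constraint violation or other error: undo everything
        raise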

Query result in Oracle memory or Perl memory?

I am using the DBI module to fire a select query against Oracle. The query is prepared with DBI's prepare method and then executed with its execute method.
My question is: once the query is executed, the result is held somewhere until we use one of the fetchrow methods to retrieve it. Until then, is the query result stored in Oracle memory or Perl memory?
My understanding is that it should be in Oracle memory, but I wanted to confirm.
It is held in Oracle until you issue your first fetch. However, you should be aware that once you make your first fetch call, DBD::Oracle (which I presume you are using) will likely fetch multiple rows back in one go even if you asked for only one (you can see how many with RowsInCache). You can alter the settings used with ora_prefetch_rows, ora_prefetch_memory and ora_row_cache_off.
In Oracle memory. First hint: you don't have access to that data yet.
You could test the amount of memory used by your Perl script before and after the execute statement to confirm.
See http://docstore.mik.ua/orelly/linux/dbi/ch05_01.htm
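The same prefetch trade-off exists in other Oracle client stacks. As a point of comparison (this is not DBD::Oracle — the attribute names below are python-oracledb's, and the employees table is invented), a hedged sketch:

import oracledb

with oracledb.connect(user="app", password="secret", dsn="dbhost/orcl") as conn:
    cur = conn.cursor()

    # How many rows the driver buffers client-side: prefetchrows applies to the
    # execute() round trip, arraysize to each subsequent fetch round trip.
    cur.prefetchrows = 100
    cur.arraysize = 100

    cur.execute("select id, name from employees")

    # Even though only one row is requested here, the driver may already hold
    # up to ~100 rows in its client-side buffer.
    first_row = cur.fetchone()
    print(first_row)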
