About "LOCKED MODE" (COPY INTO LOCKED) - monetdb

Is the "lock" done on the entire database or only on the table involved in "COPY INTO"?
In case it is done only on the involved table: Can I have more than one connection in MonetDB, guaranteeing that only the "COPY INTO" connection will access the table that is being filled?
I ask this because in the documentation it says that when using the "LOCKED MODE" I have to ensure that there is only one connection in MonetDB (the entire database).
But, this doesn't seem to make sense because usually a "COPY INTO" command uses only one table.
Thanks.
Alexandre.

Just don't use it. It has very specific requirements that are not well documented. In my 10+ years of experience with COPY INTO, it barely gives me any noticeable benefit (over the trouble I have to go through).
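For reference, in the MonetDB releases that support it, a COPY INTO with the LOCKED qualifier looks roughly like the sketch below; the table name, file path and delimiters are placeholders, and the exact syntax varies between releases:

COPY INTO my_table
FROM '/tmp/data.csv'
USING DELIMITERS ',', '\n', '"'
LOCKED;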

Related

PL/SQL Developer statements sometimes do not commit or "stick"

I apologize if this is too vague, but it is a random issue that occurs with many types of statements. Google and Stack Overflow searches have failed me. Here is what I am experiencing, I hope that someone out there has seen or at least heard of this happening and possibly knows of a solution.
From time to time, with no apparent rhyme or reason, statements that I run through PL/SQL Developer against our Oracle databases do not "stick". Last week I ran an update on table A, a commit for the update statement, then a truncate on table B and an insert to table B followed by another commit. Everything seemed to work fine, as in I received no errors. I was, of course, able to query the changes and see that they were made. However, upon logging out and then back in, the changes had not been committed. Even the truncate had not "stuck" - and truncates do not need a commit.
Some details that may be helpful: I am logging into the database server through PL/SQL on a shared account that is used by my team only to gain access to the schema (multiple schemas on each server, each schema has one shared login/PW). Of the 12 people on my team, I am the only one experiencing this issue. I have asked our database administration team to investigate my profile setup and have been told that my profile looks the same as my teammates' profiles. We are forced to go through Citrix to connect to our production database servers. I can only have one instance of PL/SQL open at any time through Citrix, so I typically have PL/SQL connected to several schemas, but I have never been running SQL on more than one schema simultaneously. I'm not even sure if that's possible, but I thought I would mention it. I typically have 3-4 windows open within PL/SQL, each connected to a different schema.
My manager was directly involved in a case where something similar to this happened. I ran four update commands, and committed each one in between; then he ran a select statement only to find that my updates had not actually committed.
I hope that one of my fellow Overflowers' has seen or heard of this issue, or at least may be able to provide me with a direction to follow to attempt to get to the bottom of this.
"it has begun to reflect poorly on me and damage my reputation in the company."
What would really reflect poorly on you would be you believing that an Oracle RDBMS is a magical or random device, or, even worse, sentient and conducting a personal vendetta against you. Computers may seem vindictive but that is always us projecting onto them ;-)
The way to burnish your reputation would be through an informed investigation of the situation. Databases do not randomly lose transactions. So, what is going on?
Possible culprits:
Triggers: does table A have an UPDATE trigger which suppresses some of your SQL?
Synonyms: are tables A and B really the tables you think they are?
Ownership: are these tables in another schema which has row level security enabled (although that should throw an error message if you violate a policy)?
PL/SQL Developer configuration: is the IDE hiding error messages or are you not spotting them?
Object types: are tables A and B really tables? Could they be views with INSTEAD OF triggers suppressing some of your SQL?
Object types: or could A and B be materialized views and your session has QUERY_REWRITE_INTEGRITY=stale_tolerated?
If that last one seems a bit of a stretch, there are other similarly esoteric explanations, involving data flashback, pipelined functions and other malarkey. This is a category of explanation which would suggest a colleague is pranking you.
How to proceed:
Try different tools. SQL*Plus (or the new SQL Command Line) may produce a different outcome. Rule out PL/SQL Developer.
Write some test cases. Strive to establish reproducible test cases: given a certain set-up this SQL statement always leads to a given outcome (SQL always sticks or always does not).
Eliminate bugs or "funnies" in the queries you use to check the results.
Use the data dictionary to understand the characteristics and associated objects of the troublesome tables. You need to understand what causes the different outcomes. What distinguishes a row where the UPDATE holds compared to one where it does not?
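For example, a few data dictionary queries along the lines sketched below can reveal real object types, triggers and synonyms; the object names are placeholders for your actual tables:

-- What are A and B really (tables, views, materialized views), and whose are they?
SELECT owner, object_name, object_type
FROM   all_objects
WHERE  object_name IN ('TABLE_A', 'TABLE_B');

-- Any triggers firing on them?
SELECT table_name, trigger_name, triggering_event, status
FROM   all_triggers
WHERE  table_name IN ('TABLE_A', 'TABLE_B');

-- Are you actually reaching them through synonyms?
SELECT synonym_name, table_owner, table_name
FROM   all_synonyms
WHERE  synonym_name IN ('TABLE_A', 'TABLE_B');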
I have used PL/SQL Developer for over a decade and I have never known it silently undo successful truncate operations. If it can do that, AA should add it as a menu item. It seems more likely that you ran the commands against the wrong database connection.
I can feel your frustration, sorry you're going through this. I am surprised, however, that at a large company, your change control process is like this. I don't work for a large multi-national company, but any changes done to a production database are first approved by management and run by the DBAs (or in your case, your team). Every script that is run does a few things:
Lists the database instance information it's connecting to. For example:
select host_name, instance_name, version, startup_time from v$instance;
Spools the output to a file (the DBAs typically use sqlplus, but I'm sure PL/SQL Developer can do the same)
Shows the current date and time (in the beginning and end of the script)
The output file is saved to a change control server (the directory structure makes it easy to pull any changes for a given instance and/or given timeframe)
Exits on any errors:
WHENEVER SQLERROR EXIT SQL.SQLCODE
Runs any additional checks that are needed post-script (select counts, etc.)
Shows each command that is being run (set echo on), including the commits!
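A minimal sqlplus skeleton of such a script might look like the sketch below; the spool path, table name and verification queries are placeholders:

WHENEVER SQLERROR EXIT SQL.SQLCODE
SET ECHO ON
SPOOL /change_control/myapp_change_123.log

-- record where and when the script ran
SELECT host_name, instance_name, version, startup_time FROM v$instance;
SELECT SYSDATE FROM dual;

-- the actual change statements go here, each followed by an explicit COMMIT
-- UPDATE table_a SET ...;
-- COMMIT;

-- post-change verification
SELECT COUNT(*) FROM table_a;

SELECT SYSDATE FROM dual;
SPOOL OFF
EXIT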
All of this would allow you to not only verify that the script was run successfully, but would allow you to CYOA. Perhaps you can talk with your team about putting some of this in place in your own environment. Hope that helps.
I have no way of knowing if my issue is fixed or not, but here is what I've done:
1. I contacted our company's Citrix team to request that they give my team the ability to have several instances of PL/SQL open. This has been done and so will eliminate the need for one instance with multiple DB connections.
2. I contacted the DBAs and had them remove my old profile, then create a new one with a new username.
So far, all SQL I've run under these new conditions has been just fine. However, I have no way of recreating the issue I'm experiencing so I am just continuing on about my business and hoping for the best.
Should I find a few months from now that I have not experienced this issue again I will update this post in case anyone else experiences it.
Thank you all for the accusations of operator error (screenshots prove that this is not operator error but why should you believe me when my own co-workers have accused me of faking the screenshots) and for the moral support.

Does Ab Initio support Oracle Merge statement?

I am attempting to design an Ab Initio load process without any Ab Initio training or documentation. Yeah, I know.
A design decision is: for the incoming data files there will be inserts and updates.
Should I have the feed provider split them into two data files (1 - 10 GB in size nightly) and have Ab Initio do inserts and updates separately?
A problem I see with that is that data isn't always what you expect it to be...
An insert row may already be present (perhaps a purge failed or the feed provider made a mistake),
or an update row may not be present.
So I'm wondering if I should just combine all inserts and updates... and use the Oracle MERGE statement
(after parallel loading the data into a staging table with no index of course)
But I don't know if Ab Initio supports MERGE or not.
There is not much in the way of Ab Initio tutorials or docs on the web... can you direct me to anything good?
The solution which you just depicted (inserts and updates in a staging table and then merging the content in the main table) is feasible.
A design decision is: for the incoming data files there will be inserts and updates.
I don't know the background of this decision, but you should know that this solution will result in longer execution time. In order to execute inserts and updates you have to use the "Update Table" component, which is slower than the simpler "Output Table" component. By the way, don't use the same "Update Table" component for inserts and updates simultaneously. Use a separate "Update Table" for inserts and another one for updates instead (you'll see a dramatic performance boost this way). (If you can change the above-mentioned design decision, then use an "Output Table" instead.)
In either case set the "Update Table"/"Output Table" components to "never abort" so that your graph won't fail if the same insert statement occurs twice or if there's no entry to update on.
Finally, the Oracle MERGE statement should be fired/executed from a "Run SQL" component when the processing of all the inserts and updates is finished (a sketch of such a statement follows this answer). Use phases to make sure it happens this way...
If you intend to build a graph with parallel execution then make sure that the insert and update statements for the same entries will be processed by the same partitions. (Use the primary key of the final table as the key in the "partition by key" component.)
If you want to have an overview of how many duplicated inserts or wrong updates occur in your messy input then use the "Reject" (and eventually "Error") port of the appropriate "Update Table"/"Output Table" components for further processing.
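To illustrate the MERGE mentioned above, a statement along these lines could be fired from the "Run SQL" component; the table and column names are placeholders:

MERGE INTO target_table t
USING staging_table s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.col1 = s.col1,
             t.col2 = s.col2
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2)
  VALUES (s.id, s.col1, s.col2);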
I would certainly not rely on a source system to tell me whether rows are present in the target table or not. My instinct says to go for a parallel, nologging (if possible), compress (if possible) load into a staging table followed by a merge -- if Ab Initio does not support MERGE then hopefully it supports a call to a PL/SQL procedure, or direct execution of a SQL statement.
If this is a large amount of data I'd like to arrange hash partitioning on the join key for the new and current data sets too.

Dropping a table partition avoiding the error ORA-00054

I need your opinion in this situation. I’ll try to explain the scenario. I have a Windows service that stores data in an Oracle database periodically. The table where this data is being stored is partitioned by date (Interval-Date Range Partitioning). The database also has a dbms_scheduler job that, among other operations, truncates and drops older partitions.
This approach has been working for some time, but recently I had an ORA-00054 error. After some investigation, the error was reproduced with the following steps:
Open one sqlplus session, disable auto-commit, and insert data in the partitioned table, without committing the changes;
Open another sqlplus session and truncate/drop an old partition (DDL operations are automatically committed, if I'm not mistaken). We will then get the ORA-00054 error.
There are some constraints worth mentioning:
I don't have DBA access to the database;
This is a legacy application and a complete refactoring isn't feasible.
So, in your opinion, is there any way of dropping these old partitions, without the risk of running into an ORA-00054 error and without the intervention of the DBA? I can just delete the data, but the number of empty partitions will grow everyday.
Many thanks in advance.
This error means somebody (or something) is working with the data in the partition you are trying to drop. That is, the lock is granted at the partition level. If nobody was using the partition your job could drop it.
Now you say this is a legacy app and you don't want to, or can't, refactor it. Fair enough. But there is clearly something not right if you have a process which is zapping data that some other process is using. I don't agree with tbone's suggestion of just looping until the lock is released: you can't just get rid of data which somebody is using without establishing why they are still working with data that they apparently should not be using.
So, the first step is to find out what the locking session is doing. Why are they still amending this data your background job wants to retire? Here's a script which will help you establish which session has the lock.
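As a rough illustration of the kind of query involved (this is not the linked script, and it needs access to the V$ and DBA_ views):

SELECT s.sid, s.serial#, s.username, s.osuser, s.program,
       o.owner, o.object_name, o.subobject_name
FROM   v$locked_object lo
JOIN   v$session       s ON s.sid = lo.session_id
JOIN   dba_objects     o ON o.object_id = lo.object_id;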
Except that you "don't have DBA access to the database". Hmmm, that's a curly one. Basically this is not a problem which can be resolved without DBA access.
It seems like you have several issues to deal with. Unfortunately for you, they are political and architectural rather than technical, and there's not much we can do to help you further.
How about wrapping the truncate or drop in PL/SQL that tries the operation in a loop, waiting x seconds between tries, up to a maximum number of tries? Then use dbms_scheduler to call that procedure/function.
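A minimal PL/SQL sketch of that retry loop is below; the table and partition names are placeholders, and DBMS_LOCK.SLEEP needs an EXECUTE grant on DBMS_LOCK:

DECLARE
  e_resource_busy EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_resource_busy, -54);   -- ORA-00054
  l_max_tries CONSTANT PLS_INTEGER := 10;
BEGIN
  FOR i IN 1 .. l_max_tries LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER TABLE my_part_table DROP PARTITION p_old';
      EXIT;                      -- success, stop retrying
    EXCEPTION
      WHEN e_resource_busy THEN
        IF i = l_max_tries THEN
          RAISE;                 -- give up after the last attempt
        END IF;
        DBMS_LOCK.SLEEP(5);      -- wait 5 seconds before the next try
    END;
  END LOOP;
END;
/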
Maybe this can help. It seems to be the same issue as the one that you describe.
(ignore the comic sans, if you can) :)

Database Project Insists on "Rebuilding" Table on Deployment for Dropped Columns

So I have a VS2010 Database Project that I am deploying, with a few schema changes. I have one table in particular that the VSDBCMD insists on "rebuilding" i.e. rename->create->copy->drop
The only changes for this table are dropping some columns, which could be handled by, I dunno, simply dropping the columns. Normally I wouldn't mind, except this particular table is called "Attachments" and weighs in at 15 gigs or so. Which takes a long time, locks up the database and fails locally, as I don't have 15+ gigs free, and times out remotely in our testing environment.
Can anyone direct me to the rules VSDBCMD follows for changing the schema when it deploys?
Or perhaps you have experienced similar issues and have a suggestion?
Thanks!
VSDBCMD just 'likes' rebuilding tables too often, and unfortunately I don't have the 'magic vsdbcmd manual' for when it chooses to rebuild a table, but I don't trust the output of VSDBCMD on a production database without manual checking first anyway.
There's a setting in the 'dbname.sqldeployment' file that allows the setting 'IgnoreColumnOrder' that might help prevent rebuilding the table (maybe it's triggering the rebuild because the column index has changed).
In your case I would just run a manually created script on your DB.
Heck, writing 'alter table Attachments drop column uselessData' would've probably cost you 10% of the time you put into asking this question in the first place :)

Is there a way to peek inside of another Oracle session?

I have a query editor (Toad) looking at the database.
At the same time, I am also debugging an application with its own separate connection.
My application starts a transaction, does some updates, and then makes decisions based on some SELECT statements. Because the update statements (which are many and complex) are not committed yet, the results my application gets from its SELECT are not the same as what I get if I run the same statement in Toad.
Currently I get around this by dumping the query output from the app into a text file, and reading that.
Is there a better way to peek inside another Oracle session, and see what that session sees, before the commit is complete?
Another way to ask this is: Under Oracle, can I enable dirty reads between only two sessions, without affecting anyone else's session?
No, Oracle does not permit dirty reads. Also, since the changes may not have physically been written to disk, you won't find them in the data files.
The log writer will write any pending data changes at least every three seconds, so you may be able to use the Log Miner stuff to pick it out from there.
But in general, your best bet is to include your own debugging information which you can easily switch on and off as required.
It's not a full answer, I know, but while there are no dirty reads, there are locks that can give you some idea of what is going on.
In session 1 if you insert a row with primary key 7, then you will not see it when you select from session 2. (That would be a dirty read).
However, if you attempt an insert from session 2 using the primary key of 7 then it will block behind session 1 as it has to wait and see if session 1 will commit or rollback. You can use "WAIT 10" to wait 10 seconds for this to happen.
A similar story exists for updates or anything that would cause a unique constraint violation.
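A hypothetical illustration of that probing technique (the table and values are made up):

CREATE TABLE orders (order_id NUMBER PRIMARY KEY, status VARCHAR2(10));

-- Session 1 (the application), not yet committed:
INSERT INTO orders VALUES (7, 'NEW');

-- Session 2 (Toad): the uncommitted row is invisible...
SELECT * FROM orders WHERE order_id = 7;   -- no rows selected

-- ...but an insert of the same primary key blocks until session 1 commits
-- (then this fails with ORA-00001) or rolls back (then it succeeds):
INSERT INTO orders VALUES (7, 'PROBE');
ROLLBACK;   -- discard the probe either way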
Can you not just set the isolation level in the session you want to peek at to 'read uncommitted' with an alter session command or a logon trigger (I have not tried this myself) temporarily?
What I prefer to do (in general) is place debug statements in the code that remain there permanently, but are turned off in production - Tom Kyte's debug.f package is a useful place to start - http://asktom.oracle.com/tkyte/debugf
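As a minimal sketch of that idea (not debug.f itself; the table and procedure names are made up), an autonomous-transaction logger keeps the debug rows visible to other sessions even when the caller has not committed or later rolls back:

CREATE TABLE debug_log (
  logged_at  TIMESTAMP DEFAULT SYSTIMESTAMP,
  message    VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE debug_msg(p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO debug_log (message) VALUES (p_message);
  COMMIT;   -- commits only the log row, not the caller's transaction
END debug_msg;
/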
