I've reached one of those points where I've been toying with something for a while, trying to figure out why it's not working, and figured I would ask here. I am currently in the middle of making adjustments to a batch process that creates an external table A used for staging and then transfers the data from that table over to Table B for further processing.
There's a step in the batch that was there before to load all that data and it goes like this:
INSERT INTO TABLE B SELECT * FROM TABLE A
Upon running this statement in batch and outside of it in Oracle Developer I get the following error:
Run query ORA-00932: inconsistent datatypes: expected DATE got NUMBER
I went through my adjustments line by line and made sure I had the right data types. I also went over the data itself as best I could, and from what I can tell it looks normal as well. In an effort to find which individual field was causing the error, I attempted to load data from Table A to Table B one column at a time. Doing this, I received no errors, which somewhat surprised me. If I use the SQL below, with all the fields listed out individually, the load of all the data works flawlessly. Can someone explain why this might be? Does the statement below perform some internal Oracle work that the previous one does not?
insert into TABLE B (
COLUMN_ONE,
COLUMN_TWO,
COLUMN_THREE
.
.
.)
select
COLUMN_ONE,
COLUMN_TWO,
COLUMN_THREE
.
.
.
from TABLE A;
Well, if you had posted the descriptions of tables A and B, we could see it for ourselves. As it is now, we have to trust what you're saying, i.e. that everything matches (but Oracle disagrees), so I don't know what to say.
On the other hand, I've learnt that using
INSERT INTO TABLE B SELECT * FROM TABLE A
is a poor way of handling things (unless it's quick & dirty testing). I try to always name all the columns I'm working with, no matter how many of them are involved in the operation. As you noticed, that seems to be working well for you too, so I'd suggest you keep doing it.
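For what it's worth, the usual cause of that exact error with SELECT * is a difference in column order between the two tables. A made-up illustration (table and column names are not yours, just the shape of the problem):
-- Same column names, different physical order.
CREATE TABLE table_a (id NUMBER, amount NUMBER, created DATE);
CREATE TABLE table_b (id NUMBER, created DATE, amount NUMBER);
-- SELECT * maps columns by position, so AMOUNT (NUMBER) lands on
-- CREATED (DATE) and Oracle raises ORA-00932.
INSERT INTO table_b SELECT * FROM table_a;
-- Listing the columns lets you control which select expression feeds
-- which target column, independent of physical order, so this works.
INSERT INTO table_b (id, amount, created)
SELECT id, amount, created FROM table_a;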
Is this considered good practice?
I've been trying to merge data from two different tables (dsb_nb_users_option & dsb_nb_default_options) for the users that exist in the dsb_nb_users table.
Would a JOIN statement be the best option?
Or would a sub-query work better for accessing the data lying in the dsb_nb_users table?
This might be an operation I will have to perform a few times, so I want to understand the mechanics of it.
INSERT INTO dsb_nb_users_option(dsb_nb_users_option.code, dsb_nb_users_option.code_value, dsb_nb_users_option.status)
SELECT dsb_nb_default_option.code, dsb_nb_default_option.code_value, dsb_nb_default_option.status
FROM dsb_nb_default_options
WHERE dsb_nb_users.user_id IS NOT NULL;
Thank you for your time!!
On the face of it, I see nothing wrong with your query to achieve your goal. That said, I see several things worth pointing out.
First, please learn to format your code - for your own sanity as well as that of others who have to read it. At the very least, put each column name on a line of its own, indented. A good IDE tool like SQL Developer will do this for you. Like this:
INSERT INTO dsb_nb_users_option (
dsb_nb_users_option.code,
dsb_nb_users_option.code_value,
dsb_nb_users_option.status
)
SELECT
dsb_nb_default_option.code,
dsb_nb_default_option.code_value,
dsb_nb_default_option.status
FROM
dsb_nb_default_options
WHERE
dsb_nb_users.user_id IS NOT NULL;
Now that I've made your code more easily readable, a couple of other things jump out at me. For one, it is not necessary to prefix every column name with the table name, so your code gets even easier to read.
INSERT INTO dsb_nb_users_option (
code,
code_value,
status
)
SELECT
code,
code_value,
status
FROM
dsb_nb_default_options
WHERE
dsb_nb_users.user_id IS NOT NULL;
Note that there are times you need to qualify a column name, either because Oracle requires it to avoid ambiguity for the parser, or because the developer needs it to avoid ambiguity for himself and those that follow. In that case we usually use table aliases to shorten the code.
select a.userid,
       a.username,
       b.user_mobile_phone
from users a
join user_telephones b on a.userid = b.userid;
Finally, and more critical to your overall design, it appears that you are unnecessarily duplicating data across multiple tables. This goes against all the rules of data design. Have you read up on 'data normalization'? It's the foundation of relational database theory.
Long time user, first time "asker".
I am attempting to construct an Oracle procedure and/or trigger that will compare two tables with the MINUS operation and then insert any resulting rows into another table. I understand how to do the query in standard SQL, but I am having trouble coming up with an efficient way to do this using PL/SQL.
Admittedly, I am very new to Oracle and pretty green with SQL in general. This may be a silly way to go about accomplishing my goal, so allow me to explain what I am attempting to do.
I need to create some sort of alert that will be triggered when the V_$PARAMETER view is changed. Apparently triggers cannot respond to changes to a view but, instead, can only replace actions on views (INSTEAD OF triggers), which I do not wish to do. So, what I did was create a table to mirror that view, essentially saving it as a "snapshot".
create table mirror_v_$parameter as select * from v_$parameter;
Then, I attempted to make a procedure that would MINUS the two so that, whenever a change is made to v_$parameter, it returns the difference from the snapshot, mirror_v_$parameter. I am trying to create a cursor with the command:
select * from v_$parameter minus select * from mirror_v_$parameter;
to be used inside a procedure, so that it could fetch any returned rows and insert them into another table called alerts_v_$parameter. The intent is that, when something is added to the "alert" table, a trigger can be used to somehow (I haven't gotten this far yet) notify my team that there has been a change to the v_$parameter view, and that they can refer to alerts_v_$parameter to see what has been changed. I would use some kind of script to run this procedure at a regular interval. And maybe, some day down the line when I understand all this better, I would enrich what goes into the alerts_v_$parameter table so that it provides better information, such as specifically which column was changed, what its previous value was, etc.
Any advice or pointers?
Thank you for taking the time to read this. Any thoughts will be very appreciated.
I would create a table based on the exact structure of v_$parameter with an additional timestamp column for "last_update", and periodically (via DBMS_Scheduler) merge into it any changes from the real v_$parameter table and capture the timestamp of any detected change.
You might also populate a history table at the same time, either using triggers on update of your table or with SQL.
PL/SQL is unlikely to be required, except as a procedural wrapper to the SQL code.
Examples of Merge are in the documentation here: http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9016.htm#SQLRF01606
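For illustration only, a minimal sketch of that approach might look something like this, assuming a snapshot table named parameter_snapshot with NUM, NAME, VALUE and LAST_UPDATE columns (the table name and shape are assumptions, not part of your setup):
-- Refresh the snapshot, stamping only rows whose value actually changed.
MERGE INTO parameter_snapshot s
USING (SELECT num, name, value FROM v$parameter) p
ON (s.num = p.num)
WHEN MATCHED THEN
  UPDATE SET s.value = p.value,
             s.last_update = SYSTIMESTAMP
  -- DECODE treats NULLs as equal, so this fires only on real changes
  WHERE DECODE(s.value, p.value, 0, 1) = 1
WHEN NOT MATCHED THEN
  INSERT (num, name, value, last_update)
  VALUES (p.num, p.name, p.value, SYSTIMESTAMP);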
I apologize for posting a question that seems to have been asked numerous times on the internet, but I can't quite fix it for some reason.
I was trying to populate some tables using Oracle's magical SQL*Loader (sqlldr) utility, but it throws an ORA-01775 error for some reason.
Everywhere I go on Google, people say something along the lines of: "Amateur, get your synonyms sorted out" (that was paraphrased) and that's nice and all, but I did not make any synonyms.
Here, the following does not work on my system:
SQLPLUS user/password
SQL>CREATE TABLE test (name varchar(10), id number);
SQL>exit
Then, I have a .ctl file with the following contents:
load data
characterset utf16
infile *
append
into table test
(name,
id
)
begindata
"GURRR" 4567
Then I run this command:
sqlldr user#localhost/password control=/tmp/controlfiles/test.ctl
The result:
SQL*Loader-702: Internal error - ulndotvcol: OCIStmtExecute()
ORA-01775: looping chain of synonyms
Part of test.log:
Table TEST, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
NAME FIRST 2 CHARACTER
ID NEXT 2 CHARACTER
SQL*Loader-702: Internal error - ulndotvcol: OCIStmtExecute()
ORA-01775: looping chain of synonyms
And, if I try to do a manual insert:
SQL> insert into test values ('aa', 56);
1 row created.
There is no problem.
So, yeah, I am stuck!
If it helps, I am using Oracle 11g XE on CentOS.
Thanks for the help guys, I appreciate it.
EDIT:
I kind of, sort of figured out part of the problem. The problem was that somewhere along the line, maybe during a failed load or something, Oracle had given itself corrupt views and synonyms.
The affected views were GV_$LOADISTAT, GV_$LOADPSTAT, V_$LOADISTAT and V_$LOADPSTAT. I am not quite sure why the views became invalid, but recompiling them resulted in "compiled with errors" errors. The synonyms used in the views' queries were themselves broken, namely the gv$loadistat, gv$loadpstat, v$loadistat and v$loadpstat synonyms.
I wasn't sure why this was happening and didn't quite understand it. So, I decided to drop the synonyms and recreate them. Unfortunately, I couldn't recreate them, as the views they pointed to (there is a bit of weird recursion going on here...) were invalid. These were the aforementioned GV_$LOADISTAT and the other views. In other words, the synonyms pointed to the views that used those synonyms. Talk about a looping chain.
So...I recreated the public synonyms but instead of specifying the view as GV_$LOADISTAT, I specified them as sys.GV_$LOADISTAT. e.g.
DROP PUBLIC synonym GV$LOADISTAT;
CREATE PUBLIC synonym GV$LOADISTAT for sys.GV_$LOADISTAT;
Then, I recreated the user views to point to those public synonyms.
CREATE OR REPLACE FORCE VIEW "USER"."GV_$LOADISTAT" ("INST_ID", "OWNER", "TABNAME", "INDEXNAME", "SUBNAME", "MESSAGE_NUM", "MESSAGE")
AS
SELECT "INST_ID",
"OWNER",
"TABNAME",
"INDEXNAME",
"SUBNAME",
"MESSAGE_NUM",
"MESSAGE"
FROM gv$loadistat;
That seemed to fix the views/synonyms. Yeah, it is a bit of a hack, but it somehow worked. Unfortunately, this was not enough to run SQL*Loader. I got a "table or view does not exist" error.
I tried granting more permissions to my regular user, but it didn't work. So, I gave up and ran SQL*Loader as SYSDBA. It worked! It is not a good thing to do, but it is a development-only system made for testing purposes, so I didn't care.
I could not repeat your looping synonym chain error, but it appears the control file needed a bit of work, at least for my environment.
I was able to get your example to work by modifying it thusly:
load data
infile *
append
into table test
fields terminated by "," optionally enclosed by '"'
(name,
id
)
begindata
"GURRR",4567
I want a tool or solution to find out which tables are affected by running a procedure, function, or package, given the PL/SQL code.
I need this to come up with better test cases, by knowing which tables will be affected by running the code and what operations are performed on them.
The solution should also work for a procedure calling another procedure.
The output might look like:
SELECT FROM: TABLE1
DELETE FROM: TABLE2
INSERT INTO: TABLE3
CALL AnotherPROC:
SELECT FROM: TABLE4
DELETE FROM: TABLE5
Thanks in advance!
For a pre-run analysis, if you are running a stored procedure/package/function, the DBA_DEPENDENCIES view can tell you which objects it depends on, but that doesn't mean they will necessarily all be affected, because program control can take different paths.
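For example, something along these lines walks the dependency chain down through called procedures (MY_PROC is a placeholder for your top-level procedure; use ALL_DEPENDENCIES if you don't have access to the DBA view):
-- Tables referenced directly, or indirectly via called program units.
SELECT LEVEL, referenced_owner, referenced_name
FROM   dba_dependencies
WHERE  referenced_type = 'TABLE'
START WITH owner = USER AND name = 'MY_PROC'
CONNECT BY NOCYCLE PRIOR referenced_owner = owner
               AND PRIOR referenced_name  = name
               AND PRIOR referenced_type  = type;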
For post-run analysis, you could use auditing or tracing to see which tables were actually affected.
There are several different ways you can get some or all of this information, but I can't think of any method that will give you the information in the exact format you specified.
Tracing
A trace file can record everything, but it's all stored in a text file meant to be read by a human. There are lots of examples for how to do this, here's one that just worked for me: http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
Profiling
You can use DBMS_PROFILER to record which line numbers are called by the procedure. Then you'd have to join the line numbers to DBA_SOURCE to get the actual commands.
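A rough sketch of that idea (assuming the plsql_profiler_* tables created by proftab.sql exist, and my_proc is a placeholder for the procedure under test):
-- Profile one run of the procedure.
BEGIN
  DBMS_PROFILER.start_profiler('find affected tables');
  my_proc;
  DBMS_PROFILER.stop_profiler;
END;
/
-- Map the executed line numbers of the latest run back to the source text.
SELECT u.unit_name, s.line, s.text
FROM   plsql_profiler_data  d
JOIN   plsql_profiler_units u ON u.runid = d.runid
                             AND u.unit_number = d.unit_number
JOIN   all_source           s ON s.owner = u.unit_owner
                             AND s.name  = u.unit_name
                             AND s.type  = u.unit_type
                             AND s.line  = d.line#
WHERE  d.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
AND    d.total_occur > 0
ORDER  BY u.unit_name, s.line;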
V$SQL
This records SQL statements executed. You could search for SQL by PARSING_SCHEMA_NAME and order by LAST_ACTIVE_TIME. But this won't get the PL/SQL, and V$SQL can be difficult to use. (SQL may age out, or could get loaded by someone else, etc.)
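If you do go down that route, a starting point might be something like this (MY_SCHEMA is a placeholder for the schema that runs the procedure):
-- Recently executed statements parsed under a given schema.
SELECT sql_text, last_active_time
FROM   v$sql
WHERE  parsing_schema_name = 'MY_SCHEMA'
ORDER  BY last_active_time DESC;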
But to get exactly what you want, all of these solutions require you to write a program to parse SQL and PL/SQL. I'm sure there are tools to do this, but I have no experience with them.
You can always write your own custom logging, but that's a huge amount of work. The best solution may be to ask the developers to adequately document every function, and list the purpose, inputs, outputs, and side-effects of all their code.
In MySQL you can get information on the tables affected by a query by adding the keyword EXPLAIN at the start of it. It returns various pieces of information, listed as columns. Checking whether Oracle has a similar feature might help in your scenario.
The program that I am currently assigned to has a requirement that I copy the contents of a table to a backup table, prior to the real processing.
During code review, a coworker pointed out that
INSERT INTO BACKUP_TABLE
SELECT *
FROM PRIMARY_TABLE
is unduly risky, as it is possible for the tables to have different columns, and different column orders.
I am also under the constraint to not create/delete/rename tables. ~Sigh~
The columns in the table are expected to change, so simply hard-coding the column names is not really the solution I am looking for.
I am looking for ideas on a reasonable non-risky way to get this job done.
Does the backup table stay around? Does it keep the data permanently, or is it just a copy of the current values?
Too bad about not being able to create/delete/rename/copy. Otherwise, if it's short term, just used in case something goes wrong, then you could drop it at the start of processing and do something like
create table backup_table as select * from primary_table;
Your best option may be to make the select explicit, as
insert into backup_table (<list of columns>) select <list of columns> from primary_table;
You could generate that by building a SQL string from the data dictionary, then doing execute immediate. But you'll still be at risk if the backup_table doesn't contain all the important columns from the primary_table.
You might just want to make it explicit, and raise a major error if backup_table doesn't exist, or if any of the columns in primary_table aren't in backup_table.
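As a rough sketch of the data-dictionary idea (assuming LISTAGG is available, i.e. 11gR2 or later, and deliberately restricting the insert to the columns the two tables have in common):
DECLARE
  l_cols VARCHAR2(32767);
BEGIN
  -- Build a comma-separated list of the columns common to both tables.
  SELECT LISTAGG(p.column_name, ', ') WITHIN GROUP (ORDER BY p.column_id)
  INTO   l_cols
  FROM   user_tab_columns p
  JOIN   user_tab_columns b
         ON  b.table_name  = 'BACKUP_TABLE'
         AND b.column_name = p.column_name
  WHERE  p.table_name = 'PRIMARY_TABLE';

  EXECUTE IMMEDIATE
    'insert into backup_table (' || l_cols || ') ' ||
    'select ' || l_cols || ' from primary_table';
END;
/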
How often do you change the structure of your tables? Your method should work just fine provided the structure doesn't change. Personally I think your DBAs should give you a mechanism for dropping the backup table and recreating it, such as a stored procedure. We had something similar at my last job for truncating certain tables, since truncating is frequently much faster than DELETE FROM table.
Is there a reason that you can't just list out the columns in the tables? So
INSERT INTO backup_table( col1, col2, col3, ... colN )
SELECT col1, col2, col3, ..., colN
FROM primary_table
Of course, this requires that you revisit the code when you change the definition of one of the tables to determine if you need to make code changes, but that's generally a small price to pay for insulating yourself from differences in column order, differences in column names, and irrelevant differences in table definitions.
If I had this situation, I would retrieve the column definitions for the two tables right at the beginning of the process. Then, if they were identical, I would proceed with the simple:
INSERT INTO BACKUP_TABLE
SELECT *
FROM PRIMARY_TABLE
If they were different, I would only proceed if there were no critical columns missing from the backup table. In this case I would use this form for the backup copy:
INSERT INTO BACKUP_TABLE (<list of columns>)
SELECT <list of columns>
FROM PRIMARY_TABLE
But I'd also worry about what would happen if I simply stopped the program with an error, so I might even have a fallback where I would use the second form for the columns that are in both tables, and also dump a text file with the PK and any columns that are missing from the backup. I would also log an error even though the program appears to have completed normally. That way, you could recover the data if the worst happened.
Really, this is a symptom of bad processes somewhere which should be addressed, but defensive programming can help to make it someone else's problem, not yours. If they don't notice the logged error message that tells them about the text dump with the missing columns, then it's not your fault.
But, if you don't code defensively, and the worst happens, it will be partly your fault.
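For the up-front check, a quick way to compare the two definitions from the data dictionary might be something like this (illustrative only; add column length/precision if those matter to you):
-- Columns whose name or data type differs between the two tables.
(SELECT column_name, data_type FROM user_tab_columns WHERE table_name = 'PRIMARY_TABLE'
 MINUS
 SELECT column_name, data_type FROM user_tab_columns WHERE table_name = 'BACKUP_TABLE')
UNION ALL
(SELECT column_name, data_type FROM user_tab_columns WHERE table_name = 'BACKUP_TABLE'
 MINUS
 SELECT column_name, data_type FROM user_tab_columns WHERE table_name = 'PRIMARY_TABLE');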
You could try something like:
CREATE TABLE secondary_table AS SELECT * FROM primary_table;
Not sure if that automatically copies data. If not:
CREATE TABLE secondary_table AS SELECT * FROM primary_table WHERE 1 = 0; -- copies structure only (Oracle has no LIMIT clause)
INSERT INTO secondary_table SELECT * FROM primary_table;
Edit:
Sorry, didn't read your post completely: especially the constraints part. I'm afraid I don't know how. My guess would be using a procedure that first describes both tables and compares them, before creating a lengthy insert / select query.
Still, if you're using a backup-table, I think it's pretty important it matches the original one exactly.