Combining 2 tables with the number of users on a different table - Oracle SQL

Is this considered good practice?
I've been trying to merge data from 2 different tables (dsb_nb_users_option & dsb_nb_default_options) for the number of users existing on the dsb_nb_users table.
Would a JOIN statement be the best option?
Or would a sub-query work better for accessing the data lying in the dsb_nb_users table?
This might be an operation I will have to perform a few times, so I want to understand the mechanics of it.
INSERT INTO dsb_nb_users_option(dsb_nb_users_option.code, dsb_nb_users_option.code_value, dsb_nb_users_option.status)
SELECT dsb_nb_default_option.code, dsb_nb_default_option.code_value, dsb_nb_default_option.status
FROM dsb_nb_default_options
WHERE dsb_nb_users.user_id IS NOT NULL;
Thank you for your time!!

On the face of it, I see nothing wrong with your overall approach to achieve your goal. That said, I see several things worth pointing out.
First, please learn to format your code - for your own sanity as well as that of others who have to read it. At the very least, put each column name on a line of its own, indented. A good IDE tool like SQL Developer will do this for you. Like this:
INSERT INTO dsb_nb_users_option (
    dsb_nb_users_option.code,
    dsb_nb_users_option.code_value,
    dsb_nb_users_option.status
)
SELECT
    dsb_nb_default_option.code,
    dsb_nb_default_option.code_value,
    dsb_nb_default_option.status
FROM
    dsb_nb_default_options
WHERE
    dsb_nb_users.user_id IS NOT NULL;
Now that I've made your code more easily readable, a couple of other things jump out at me. For one, it is not necessary to prefix every column name with the table name, so your code gets even easier to read:
INSERT INTO dsb_nb_users_option (
    code,
    code_value,
    status
)
SELECT
    code,
    code_value,
    status
FROM
    dsb_nb_default_options
WHERE
    dsb_nb_users.user_id IS NOT NULL;
Note that there are times you need to qualify a column name, either because Oracle requires it to avoid ambiguity for the parser, or because the developer needs it to avoid ambiguity for themselves and those who follow. In that case we usually use table aliases to shorten the code:
select a.userid,
       a.username,
       b.user_mobile_phone
from users a
join user_telephones b on a.userid = b.userid;
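One more thing worth checking before you run this: the WHERE clause references dsb_nb_users.user_id, but dsb_nb_users never appears in the FROM clause, so Oracle will reject the statement with an "invalid identifier" error as written. Here is a minimal sketch of one way to bring that table into scope - the user_id join column is my assumption, not something visible in your post, so adjust it to your actual keys:
INSERT INTO dsb_nb_users_option (
    code,
    code_value,
    status
)
SELECT
    d.code,
    d.code_value,
    d.status
FROM
    dsb_nb_default_options d
    JOIN dsb_nb_users u ON u.user_id = d.user_id  -- assumed join column
WHERE
    u.user_id IS NOT NULL;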
Finally, and more critical to your overall design, it appears that you are unnecessarily duplicating data across multiple tables. This goes against all the rules of data design. Have you read up on 'data normalization'? It's the foundation of relational database theory.

Related

Oracle Functionality of Insert All Into Versus By Individual Column

I've gotten to one of those places where I've been toying with something for a little while, trying to figure out why it's not working, and figured I would ask here. I am currently in the middle of making adjustments to a batch process that involves creating an external table (Table A) used for staging, and then transferring the data from that table over to Table B for further processing.
There's a step in the batch that was there before to load all that data and it goes like this:
INSERT INTO TABLE B SELECT * FROM TABLE A
Upon running this statement in batch and outside of it in Oracle Developer I get the following error:
Run query ORA-00932: inconsistent datatypes: expected DATE got NUMBER
I went through my adjustments line by line and made sure I had the right data types. I also went over the data itself the best I could, and from what I can tell it seems normal as well. In an effort to find which individual field could be causing the error, I attempted to load data from Table A to Table B one column at a time...Doing this I received no errors, which shocked me somewhat. If I use the SQL below, with all the fields listed out individually, the load of all the data works flawlessly. Can someone explain why this might be? Does the statement below perform some internal Oracle handling that the previous one does not?
insert into TABLE B (
COLUMN_ONE,
COLUMN_TWO,
COLUMN_THREE
.
.
.)
select
COLUMN_ONE,
COLUMN_TWO,
COLUMN_THREE
.
.
.
from TABLE A;
Well, if you posted the descriptions of tables A and B, we could see it for ourselves. As it is now, we have to trust what you're saying, i.e. that everything matches (but Oracle disagrees). One thing to keep in mind: SELECT * maps columns by position, not by name, so if the column order of Table A differs from that of Table B, the Nth column of A gets stuffed into the Nth column of B regardless of name - which is exactly how a NUMBER can end up aimed at a DATE column.
On the other hand, I've learnt that using
INSERT INTO TABLE B SELECT * FROM TABLE A
is a poor way of handling things (unless it's quick & dirty testing). I try to always name all the columns I'm working with, no matter how many of them are involved in that very operation. As you noticed, that seems to be working well for you too, so I'd suggest you keep doing it.
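If you want to see where the positional mismatch is, you can compare the two tables' columns side by side in the data dictionary. A rough sketch, assuming both tables are in your own schema (swap in the real table names):
SELECT a.column_id,
       a.column_name AS a_column,
       a.data_type   AS a_type,
       b.column_name AS b_column,
       b.data_type   AS b_type
FROM   user_tab_columns a
JOIN   user_tab_columns b ON b.column_id = a.column_id
WHERE  a.table_name = 'TABLE_A'
AND    b.table_name = 'TABLE_B'
ORDER  BY a.column_id;
Any row where the two data types differ is a candidate for that ORA-00932.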

Difference between select * into newtable and create table newtable AS

I am working with a huge table which has 9 million records, and I am trying to take a backup of the table, as below.
First Query
Create table newtable1 as select * from hugetable ;
Second Query
Select * into newtable2 from hugetable ;
Here the first query seems to be quite fast. I want to know the functional difference: why is it faster compared to the second query, and which one is to be preferred?
Quote from the manual:
This command (create table as) is functionally similar to SELECT INTO, but it is preferred since it is less likely to be confused with other uses of the SELECT INTO syntax. Furthermore, CREATE TABLE AS offers a superset of the functionality offered by SELECT INTO.
So both do the same thing, but CREATE TABLE AS is part of the SQL standard (and thus works on other DBMS as well), whereas SELECT INTO is non-standard. As the manual says, you should prefer CREATE TABLE AS over the non-standard syntax.
Performance wise both are identical, the differences you see are probably related to caching or other activities on your system.
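As a small illustration of that superset of functionality, CREATE TABLE AS can also copy just the structure by selecting no rows - a minimal sketch:
-- The impossible predicate returns no rows, so only the column
-- definitions are copied, not the 9 million records.
CREATE TABLE newtable_empty AS
SELECT * FROM hugetable WHERE 1 = 0;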

Temporarily storing multiple values in Oracle [duplicate]

This question already has an answer here:
Alternate method to global temp tables for Oracle Stored Procedure
(1 answer)
Closed 8 years ago.
I need a way to temporarily store and use multiple values returned from an Oracle query. In SQL Server, I stored my values in a temp table, did my work, then dropped the table. I'm discovering the Oracle equivalent isn't as clear cut.
Here's a SQL Server example of what I'm trying to do:
select id into #temp from SomeTable where SomeColumn = 'Some Value'
:
(do whatever I need to do with #temp data)
:
drop table #temp
I can code my way around SQL Server pretty well, but am almost clueless when it comes to Oracle syntax. I've been reading various Oracle references, and they haven't been very helpful. I did read that Oracle temp tables work differently than SQL Server's, and are often not recommended.
I'm looking into the temp table route, but if there's a better way to do this that doesn't use temp tables, I'm all ears. Anyone know a better way to do this in Oracle?
Thanks in advance.
As with many things, that depends. It depends on how much data you'll be retrieving and how you want to use it. If you don't have too much data to work with ("too much" meaning, oh, say, more than a couple thousand rows), and you want to manipulate the data procedurally, i.e. in a PL/SQL procedure or script, AND you don't want to access it using DML, i.e. you don't want to say something like SELECT * FROM your_temp_data..., then loading the data into a PL/SQL collection, as @EgorSkriptunoff mentions above, might be a workable solution.
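A minimal sketch of that collection approach, with illustrative table and column names:
DECLARE
    -- a collection typed after the column we want to hold in memory
    TYPE t_ids IS TABLE OF sometable.id%TYPE;
    l_ids t_ids;
BEGIN
    SELECT id
    BULK COLLECT INTO l_ids
    FROM sometable
    WHERE somecolumn = 'Some Value';

    FOR i IN 1 .. l_ids.COUNT LOOP
        NULL;  -- do whatever you need with l_ids(i)
    END LOOP;
END;
/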
However, if the temp data is large (more than a couple thousand rows) and/or you need to be able to do something like SELECT * FROM your_temp_data..., then your best bet is to use Oracle's global temporary tables. A GTT is a table used to hold data which should only last as long as either a single transaction or a complete session (i.e. as long as you're attached to the database). See the Oracle documentation on global temporary tables for details.
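A minimal GTT sketch mirroring your SQL Server example (names are illustrative):
-- The table definition persists; the rows are private to your session
-- and disappear on their own.
CREATE GLOBAL TEMPORARY TABLE your_temp_data (
    id NUMBER
) ON COMMIT PRESERVE ROWS;  -- rows last for the session;
                            -- use ON COMMIT DELETE ROWS for per-transaction

INSERT INTO your_temp_data (id)
SELECT id FROM sometable WHERE somecolumn = 'Some Value';

-- ... do whatever you need to do with SELECT * FROM your_temp_data ...

-- Unlike a SQL Server #temp table, you don't drop it when you're done:
DELETE FROM your_temp_data;  -- or simply end the session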
Share and enjoy.

How to get the Name of the Table with SELECT DELETE INSERT UPDATE operation

I want a tool or solution to find out the affected tables when running a procedure, function, or package, given the PL/SQL code.
I need this to come up with better test cases, by knowing which tables will be affected by running the code and what operations are performed on them.
The solution should also work for a procedure calling another procedure.
Output may be:
SELECT FROM: TABLE1
DELETE FROM: TABLE2
INSERT INTO: TABLE3
CALL AnotherPROC:
SELECT FROM: TABLE4
DELETE FROM: TABLE5
Thanks in advance.
For pre-run analysis, if you are running a stored procedure, package, or function, the DBA_DEPENDENCIES view can tell you which objects it "depends" on, but that doesn't mean they will all necessarily be affected, because program control can take different paths.
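A hedged sketch of that dictionary lookup - the schema and procedure names are placeholders, and you can use ALL_DEPENDENCIES instead if you lack DBA privileges:
SELECT referenced_owner,
       referenced_name,
       referenced_type
FROM   dba_dependencies
WHERE  owner = 'YOUR_SCHEMA'
AND    name  = 'YOUR_PROCEDURE'
AND    referenced_type = 'TABLE';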
For post-run analysis, you could use auditing or tracing to see which tables were affected.
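For the auditing route, a rough sketch using traditional auditing (this assumes the AUDIT_TRAIL parameter is set to DB; the user name is a placeholder):
AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY your_user;

-- run the procedure as YOUR_USER, then:
SELECT obj_name, action_name, timestamp
FROM   dba_audit_trail
WHERE  username = 'YOUR_USER'
ORDER  BY timestamp;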
There are several different ways you can get some or all of this information, but I can't think of any method that will give you the information in the exact format you specified.
Tracing
A trace file can record everything, but it's all stored in a text file meant to be read by a human. There are lots of examples for how to do this, here's one that just worked for me: http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
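If you just want to trace your own session around the procedure call, something like this should do (the procedure name is a placeholder):
-- 10046-style tracing of the current session via DBMS_MONITOR
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
EXEC your_procedure;
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- then read the trace file from the server's trace directory (e.g. with tkprof)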
Profiling
You can use DBMS_PROFILER to record which line numbers are called by the procedure. Then you'd have to join the line numbers to DBA_SOURCE to get the actual commands.
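A hedged sketch of that join, assuming the profiler tables from proftab.sql are installed and you know the run id:
SELECT u.unit_name,
       d.line#,
       s.text,
       d.total_occur
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data  d ON d.runid       = u.runid
                             AND d.unit_number = u.unit_number
JOIN   dba_source           s ON s.owner = u.unit_owner
                             AND s.name  = u.unit_name
                             AND s.type  = u.unit_type
                             AND s.line  = d.line#
WHERE  u.runid = :run_id
AND    d.total_occur > 0
ORDER  BY u.unit_name, d.line#;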
V$SQL
This records SQL statements executed. You could search for SQL by PARSING_SCHEMA_NAME and order by LAST_ACTIVE_TIME. But this won't get the PL/SQL, and V$SQL can be difficult to use. (SQL may age out, or could get loaded by someone else, etc.)
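A rough sketch of that search (the schema name is a placeholder, and statements may already have aged out of the cache):
SELECT sql_text, last_active_time
FROM   v$sql
WHERE  parsing_schema_name = 'YOUR_SCHEMA'
ORDER  BY last_active_time DESC;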
But to get exactly what you want, all of these solutions require you to write a program to parse SQL and PL/SQL. I'm sure there are tools to do this, but I have no experience with them.
You can always write your own custom logging, but that's a huge amount of work. The best solution may be to ask the developers to adequately document every function, and list the purpose, inputs, outputs, and side-effects of all their code.
In MySQL you can get information on the tables being affected by adding the keyword EXPLAIN at the start of your query. It will give you various pieces of information, listed as columns. Checking whether Oracle has a feature like this might help in your scenario.
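Oracle's closest counterpart is EXPLAIN PLAN, though it covers a single SQL statement rather than a whole procedure:
EXPLAIN PLAN FOR
SELECT * FROM table1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);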

Oracle Populate backup table from primary table

The program that I am currently assigned to has a requirement that I copy the contents of a table to a backup table, prior to the real processing.
During code review, a coworker pointed out that
INSERT INTO BACKUP_TABLE
SELECT *
FROM PRIMARY_TABLE
is unduly risky, as it is possible for the tables to have different columns, and different column orders.
I am also under the constraint to not create/delete/rename tables. ~Sigh~
The columns in the table are expected to change, so simply hard-coding the column names is not really the solution I am looking for.
I am looking for ideas on a reasonable non-risky way to get this job done.
Does the backup table stay around? Does it keep the data permanently, or is it just a copy of the current values?
Too bad about not being able to create/delete/rename/copy. Otherwise, if it's short term, just used in case something goes wrong, then you could drop it at the start of processing and do something like
create table backup_table as select * from primary_table;
Your best option may be to make the select explicit, as
insert into backup_table (<list of columns>) select <list of columns> from primary_table;
You could generate that by building a SQL string from the data dictionary, then doing execute immediate. But you'll still be at risk if the backup_table doesn't contain all the important columns from the primary_table.
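A hedged sketch of generating that statement from the dictionary, copying only the columns the two tables have in common (LISTAGG needs 11.2 or later; the table names are placeholders):
DECLARE
    l_cols VARCHAR2(4000);
BEGIN
    -- build a comma-separated list of the shared columns,
    -- in the primary table's column order
    SELECT LISTAGG(p.column_name, ', ') WITHIN GROUP (ORDER BY p.column_id)
    INTO   l_cols
    FROM   user_tab_columns p
    JOIN   user_tab_columns b ON b.column_name = p.column_name
    WHERE  p.table_name = 'PRIMARY_TABLE'
    AND    b.table_name = 'BACKUP_TABLE';

    EXECUTE IMMEDIATE
        'INSERT INTO backup_table (' || l_cols || ') ' ||
        'SELECT ' || l_cols || ' FROM primary_table';
END;
/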
Might just want to make it explicit, and raise a major error if backup_table doesn't exist, or any of the columns in primary_table aren't in backup_table.
How often do you change the structure of your tables? Your method should work just fine provided the structure doesn't change. Personally I think your DBAs should give you a mechanism for dropping the backup table and recreating it, such as a stored procedure. We had something similar at my last job for truncating certain tables, since truncating is frequently much faster than DELETE FROM TABLE;.
Is there a reason that you can't just list out the columns in the tables? So
INSERT INTO backup_table( col1, col2, col3, ... colN )
SELECT col1, col2, col3, ..., colN
FROM primary_table
Of course, this requires that you revisit the code when you change the definition of one of the tables to determine if you need to make code changes, but that's generally a small price to pay for insulating yourself from differences in column order, differences in column names, and irrelevant differences in table definitions.
If I had this situation, I would retrieve the column definitions for the two tables right at the beginning of the process. Then, if they were identical, I would proceed with the simple:
INSERT INTO BACKUP_TABLE
SELECT *
FROM PRIMARY_TABLE
If they were different, I would only proceed if there were no critical columns missing from the backup table. In this case I would use this form for the backup copy:
INSERT INTO BACKUP_TABLE (<list of columns>)
SELECT <list of columns>
FROM PRIMARY_TABLE
But I'd also worry about what would happen if I simply stopped the program with an error, so I might even have a backup plan where I would use the second form for the columns that are in both tables, and also dump a text file with the PK and any columns that are missing from the backup. Also log an error even though it appears that the program completed normally. That way, you could recover the data if the worst happened.
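A hedged sketch of that up-front check, listing any columns of the primary table that are missing from the backup:
SELECT column_name
FROM   user_tab_columns
WHERE  table_name = 'PRIMARY_TABLE'
MINUS
SELECT column_name
FROM   user_tab_columns
WHERE  table_name = 'BACKUP_TABLE';
-- no rows back means the explicit-column copy is safe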
Really, this is a symptom of bad processes somewhere which should be addressed, but defensive programming can help to make it someone else's problem, not yours. If they don't notice the log error message which tells them about the text dump with the missing columns, then it's not your fault.
But, if you don't code defensively, and the worst happens, it will be partly your fault.
You could try something like:
CREATE TABLE secondary_table AS SELECT * FROM primary_table;
Not sure if that automatically copies data (in Oracle, it does). If you want the structure only, and then the data:
CREATE TABLE secondary_table AS SELECT * FROM primary_table WHERE 1 = 0; -- Oracle has no LIMIT clause
INSERT INTO secondary_table SELECT * FROM primary_table;
Edit:
Sorry, didn't read your post completely, especially the constraints part. I'm afraid I don't know how, then. My guess would be to use a procedure that first describes both tables and compares them, before building a lengthy INSERT ... SELECT query.
Still, if you're using a backup table, I think it's pretty important that it matches the original one exactly.
