How to compare table data structure - Oracle

I need to compare table data structure, specifically:
1. Any table added or deleted.
2. Any column in a table added or deleted.
My job is to verify, on the 1st of every month, whether any tables or columns have been added or deleted.
My plan is to run a SQL query that takes a copy of the entire list of tables and their column data types only (NO DATA), save it in a text file or something and use it as a baseline, then run the same SQL query next month and compare the results against the file. Is that possible? Please help with the SQL query which can do this job.

This query will give you a list of all tables and their columns for a given user (just replace ABCD in this query with the user you have to audit, and provided you have access to all of that user's tables this will work).
SELECT table_name,
       column_name
  FROM all_tab_columns
 WHERE owner = 'ABCD'
 ORDER BY table_name,
          column_id;
This answers your question, but I have to agree with a_horse_with_no_name that this is not a good way to implement change control, most notably because the changes have already happened by the time you spot them.
This query is very basic and doesn't give you all the information you'd need to see whether a column has changed (or any information about other object types etc.), but then you only asked about additions and deletions of tables and columns, and you can compare the output of this script to previous outputs to complete your allotted task. A sketch of one way to capture data types as well and diff two snapshots follows.
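Since the question also asks for data types, one way to build the baseline is to snapshot ALL_TAB_COLUMNS into a table and diff the current state against it with MINUS. The snapshot table name (schema_snapshot) and the extra column are illustrative, not part of the original answer:

-- Baseline: snapshot the current structure (illustrative table name).
CREATE TABLE schema_snapshot AS
SELECT SYSDATE AS snapshot_date,
       table_name,
       column_name,
       data_type
  FROM all_tab_columns
 WHERE owner = 'ABCD';

-- Next month: tables/columns present now but missing from the baseline (additions)...
SELECT table_name, column_name, data_type
  FROM all_tab_columns
 WHERE owner = 'ABCD'
MINUS
SELECT table_name, column_name, data_type
  FROM schema_snapshot;

-- ...and tables/columns in the baseline that have since disappeared (deletions).
SELECT table_name, column_name, data_type
  FROM schema_snapshot
MINUS
SELECT table_name, column_name, data_type
  FROM all_tab_columns
 WHERE owner = 'ABCD';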

PL/SQL: Looping through a list string

Please forgive me if I'm opening a new thread about looping in PL/SQL, but after reading dozens of existing ones I'm still not able to do what I'd like to.
I need to run a complex query on a view of a table, and the only way to shorten the running time is to filter with a WHERE clause on a column the table is indexed on (otherwise the system ends up doing a full scan of the table, which runs endlessly).
The column the table is indexed on is store_id (a string).
I can retrieve all the store_id values I want to query from a separate table:
e.g. select distinct store_id from store_anagraphy
Then I'd like to make a loop that iterates the query over each store_id identified above,
e.g. select <complex query> from view_of_sales where store_id = 'xxxxxx'
and append (union) all the results returned by each of these queries.
Thank you very much in advance.
Gianluca
In theory, you could write a pipelined table function that runs multiple queries in a loop and makes a series of PIPE ROW calls to return the results (a minimal sketch follows). That would be pretty unusual, but it could be done.
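For what that would look like, here is a rough sketch. The object/collection type names and the amount column are assumptions, since the question doesn't show the view's definition:

-- Sketch only: sales_row_t, sales_tab_t and the amount column are illustrative.
CREATE TYPE sales_row_t AS OBJECT (store_id VARCHAR2(20), amount NUMBER);
/
CREATE TYPE sales_tab_t AS TABLE OF sales_row_t;
/
CREATE OR REPLACE FUNCTION sales_by_store RETURN sales_tab_t PIPELINED IS
BEGIN
  -- Loop over the store ids, run the per-store query, and pipe each row out.
  FOR s IN (SELECT DISTINCT store_id FROM store_anagraphy) LOOP
    FOR r IN (SELECT store_id, amount
                FROM view_of_sales
               WHERE store_id = s.store_id) LOOP
      PIPE ROW (sales_row_t(r.store_id, r.amount));
    END LOOP;
  END LOOP;
  RETURN;
END;
/
-- Usage: SELECT * FROM TABLE(sales_by_store);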
It would be far, far more common, however, to simply combine the two queries and run a single query that returns all the rows you want:
select something
  from your_view
 where store_id in (select distinct store_id
                      from store_anagraphy)
If you are saying that you have tried this query and Oracle is choosing to do a table scan rather than using the index, then what you really have is a tuning problem. Most likely, statistics on one or more objects are inaccurate, which leads Oracle to expect that the query will return more rows than it really will, thus favoring the table scan. You should be able to fix that by fixing the statistics on the objects. In a pinch, you could also use hints to force an index to be used.
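For reference, a rough sketch of both fixes; the owner, table, and index names (SALES, SALES_FACT, IX_SALES_STORE_ID) are placeholders, not taken from the question:

-- Refresh optimizer statistics on the underlying table (placeholder names).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SALES',
                                tabname => 'SALES_FACT',
                                cascade => TRUE);  -- gather index statistics too
END;
/

-- Or, in a pinch, force index use with a hint (placeholder index name).
-- If view_of_sales is a view, the hint may need to name the base-table alias
-- inside the view (a global hint) rather than the view alias itself.
SELECT /*+ INDEX(v IX_SALES_STORE_ID) */ v.*
  FROM view_of_sales v
 WHERE v.store_id IN (SELECT DISTINCT store_id FROM store_anagraphy);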

Hive not creating separate directories for skewed table

My Hive version is 1.2.1. I am trying to create a skewed table, but it clearly doesn't seem to be working. Here is my table creation script:
CREATE EXTERNAL TABLE IF NOT EXISTS mydb.mytable
(
  country string,
  payload string
)
PARTITIONED BY (year int, month int, day int, hour int)
SKEWED BY (country) ON ('USA','Brazil') STORED AS DIRECTORIES
STORED AS TEXTFILE;
INSERT OVERWRITE TABLE mydb.mytable PARTITION(year = 2019, month = 10, day=05, hour=18)
SELECT country,payload FROM mydb.mysource;
The select query returns names of countries and some associated string data (payload). Based on the way I have specified skewing on the column 'country', I was expecting the insert statement to create separate directories for USA and Brazil (the select query returns enough rows with country as USA and Brazil), but this clearly didn't happen.
Instead, I see that Hive created a directory called 'HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME' and all the values went into a single file in that directory. A skewed table is only supposed to send rows with default values (those not specified in the table creation statement) to the common directory (which is what HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME seems to be) and should create dedicated directories for the rows with skew values. Instead everything goes to the default directory and the other directories aren't even created. Do I have to toggle any Hive options to make this work?
It looks like an old bug that doesn't appear to be fixed yet: https://issues.apache.org/jira/browse/HIVE-13697. Basically, when Hive stores the skew values specified during table creation, it converts them to lower case before writing them to the metastore. That's why the workaround for now is to convert the case in the select statement so the data goes to the right bucket. I tested this and it works that way.
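The answer doesn't spell out the exact rewrite, but one reading of that workaround is to lower-case the skew column in the SELECT so the incoming values match the lower-cased skew values ('usa', 'brazil') stored in the metastore:

-- Sketch of the case-conversion workaround described above (not a tested fix).
-- Note this also stores the country values themselves in lower case.
INSERT OVERWRITE TABLE mydb.mytable PARTITION(year = 2019, month = 10, day=05, hour=18)
SELECT lower(country), payload FROM mydb.mysource;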

How to create a table at runtime in Oracle SQL Developer

How can I create a temporary table at run time with as many columns as my select query returns?
For example, at the start it is not known how many items my select query will return; it depends on the search criteria the user has entered. Every item's information is to be stored in a different column, which is why I need to create the table at run time.
Please suggest how it can be done.
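For what it's worth, run-time DDL in Oracle is usually done with EXECUTE IMMEDIATE. A minimal, purely illustrative sketch (the table and column names are made up, and whether a wide dynamic table is the right design is a separate question):

-- Build a column list at run time and create the table with dynamic DDL.
DECLARE
  l_cols VARCHAR2(4000);
BEGIN
  -- Pretend the user's search returned three items: one column per item.
  FOR i IN 1 .. 3 LOOP
    l_cols := l_cols || CASE WHEN i > 1 THEN ', ' END || 'item_' || i || ' VARCHAR2(100)';
  END LOOP;
  EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE tmp_items (' || l_cols ||
                    ') ON COMMIT PRESERVE ROWS';
END;
/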

Deleting duplicate rows in Oracle

Shouldn't the following query work fine for deleting duplicate rows in Oracle?
SQL> delete from sessions o
     where 1 <= (select count(*)
                   from sessions i
                  where i.id != o.id
                    and o.data = i.data);
It seems to delete all the duplicated rows! (I wish to keep one, though.)
Your statement deletes more than you intend because its condition is true for every row that has a duplicate: for any such row, there is at least one other row with a different ID but the same DATA.
Although your intent is to keep one row from each group of duplicates, your SQL treats every member of a group identically. It says: "For each row, count the other rows with the same DATA and a different ID; if that count is 1 or more, delete the row." That condition holds for every copy in a duplicate group, so the whole group is deleted and nothing survives.
There's nothing in the statement to single out one duplicate row to keep, as there is in the solution Ollie has linked to, for example.
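For comparison, one common pattern that does keep a single row per group (not necessarily the one Ollie linked to) is to keep the lowest ROWID for each DATA value and delete the rest:

-- Keep the row with the smallest ROWID in each group of identical DATA values.
delete from sessions o
 where o.rowid > (select min(i.rowid)
                    from sessions i
                   where i.data = o.data);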

How should I manage my many-to-many relationships?

I have a database containing a couple tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables.
Here's the schema of each table:
files -> file_id, file_name
users -> user_id, user_name
users_files_ref -> user_file_ref_id, user_id, file_id
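For concreteness, the schema described above could be created roughly like this (column types and lengths are assumptions, not from the question):

-- Illustrative DDL for the three tables; adjust types to your actual schema.
CREATE TABLE users (
  user_id   INT PRIMARY KEY,
  user_name VARCHAR(100)
);

CREATE TABLE files (
  file_id   INT PRIMARY KEY,
  file_name VARCHAR(255)
);

CREATE TABLE users_files_ref (
  user_file_ref_id INT PRIMARY KEY,
  user_id          INT NOT NULL,
  file_id          INT NOT NULL,
  FOREIGN KEY (user_id) REFERENCES users (user_id),
  FOREIGN KEY (file_id) REFERENCES files (file_id)
);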
I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem.
Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record.
I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario.
I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-)
I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables; I'm just wondering how to manage adding file records in this scenario.
Thanks for any help provided, it's much appreciated.
-Wes
Use $this->db->insert_id() to get the id number of the row you just inserted. Further documentation here: http://codeigniter.com/user_guide/database/helpers.html
You're basically describing how it normally is done, with one important adjustment: how you retrieve the file_id of the file to be able to add it to users_files_ref.
Normally in a database environment you have many clients connecting at the same time, doing updates simultaneously. In such an environment you can't just grab the ID of the last file added - it might be someone else's file added in between your DB calls. You have to use functionality of the database to get the generated ID (e.g. SELECT @@IDENTITY on MSSQL) or generate the IDs in the application code somehow.
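A database-side sketch of the same idea, assuming a MySQL backend with an AUTO_INCREMENT file_id (which is what CodeIgniter's insert_id() typically maps to via LAST_INSERT_ID(); the user_id value 42 is just a placeholder):

-- LAST_INSERT_ID() is scoped to your own connection, so another client's
-- concurrent insert cannot leak into it.
INSERT INTO files (file_name) VALUES ('example.txt');
INSERT INTO users_files_ref (user_id, file_id)
VALUES (42, LAST_INSERT_ID());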
I think what you need is just this:
                   ----primary key-----
users_files_ref -> | user_id, file_id |
How you get the file_id depends on the code you're implementing. Your reasoning is correct: you already have the user_id and just need to get the file_id. With these values you can add a new row to users_files_ref.
When I need to do this I usually have a stored procedure that, with the help of a sequence, inserts the file and returns the sequence NEXTVAL as the output. This might be a way of implementing such a scenario.
This is the code for an Oracle-based stored procedure:
CREATE OR REPLACE PROCEDURE SP_IMPORT_FILE(FILE    IN  FILE.FILE%TYPE,
                                           FILE_ID OUT NUMBER)
IS
BEGIN
  SELECT SEQ_FILE.NEXTVAL INTO FILE_ID FROM DUAL;
  INSERT INTO FILE (FILE_ID, FILE) VALUES (FILE_ID, FILE);
END SP_IMPORT_FILE;
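A hypothetical call of that procedure from PL/SQL, assuming the FILE.FILE column is a character column holding the file name:

DECLARE
  l_file_id NUMBER;
BEGIN
  -- Insert the file and read back the generated id in one call (sketch only).
  SP_IMPORT_FILE(FILE => 'example.txt', FILE_ID => l_file_id);
  DBMS_OUTPUT.PUT_LINE('New file id: ' || l_file_id);
END;
/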
