I have access to an Oracle 11g server that belongs to a store, and I have a piece of software to get reports, but the software is closed source.
Because I have a username inside the database, I can access all the reports through the software, but I want to automate some of the work and want to write a Python 3 script to do it. All I need is to know the table names in the DB. I want an Oracle query that returns the table names.
I have access to the database from PL/SQL Developer and from my incomplete Python code:
import cx_Oracle

# connect strings use '@' to separate credentials from the server address
con = cx_Oracle.connect('username/password@serveraddress/dbname')
print(con.version)
cur = con.cursor()
cur.execute("SOME ORACLE CODE")
# cur.execute('select * from tablesname order by id')
for row in cur:
    print(row)
con.close()
So what code can I execute to get the table names back?
P.S.: I even used Wireshark to find out what query the program sent to the server, but the table names were not in the packets. :(
P.S. 2: Any creative and unusual answer is welcome.
You can use
select table_name
from user_tables
order by table_name
or
select table_name
from cat
where table_type = 'TABLE'
order by table_name
to list the names of all tables owned by your current user. Here cat is a synonym for the user_catalog data dictionary view, and user_tables is another data dictionary view created automatically by the database during installation.
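If the report tables are owned by a different schema than the account you connect with (common with packaged software), user_tables will come back empty. In that case a variant like this lists every table your user has privileges on (a sketch using the standard all_tables data dictionary view):

select owner, table_name
from all_tables
order by owner, table_name

Whichever query you settle on can then be plugged into cur.execute() in your Python script.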
I have to do CRUD operations on a table that is not owned by the user I am using to connect to my Informix database. I have been granted the necessary privileges to do the operations, but I do not know how to write the actual query.
I have little experience with Informix, but I remember that in Oracle I had to reference the schema, like so:
SELECT * FROM SCHEMA.TABLE;
In Informix, should I reference the user that owns the table? Like:
SELECT * FROM OWNER:TABLE
Or can I just do:
SELECT * FROM TABLE
Thanks for any help!
In Informix you can generally use the table name with or without the owner prefix, unless the database was created with MODE ANSI, in which case the owner prefix is required. Note that the correct syntax when using the owner is to use a period "." as in:
SELECT * FROM owner.table;
The colon is used to separate the database name as shown in the Informix Guide to SQL: Syntax https://www.ibm.com/docs/en/informix-servers/14.10?topic=segments-database-object-name#ids_sqs_1649
FYI you can determine if the database is mode ANSI with this query:
SELECT is_ansi FROM sysmaster:sysdatabases WHERE name = "<database name>";
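If you're not sure which user owns the table, the systables system catalog records the owner (a sketch; mytable is a placeholder, and note that non-ANSI databases typically store table names in lowercase):

SELECT owner, tabname FROM systables WHERE tabname = 'mytable';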
I need to analyze a large Oracle DMP file. So far, I have no experience with Oracle.
I know that the database contains information about certain people, for example a person with the name Smith.
I don't know how the database is structured (which table contains which information, are there triggers, ...).
As long as I don't know which tables I have to search, the best way I have found to work with the database files is to use grep.
This way, I can at least verify that the database really does contain the name "Smith".
Ultimately, I would like to have an SQL dump that can be viewed, filtered and understood in a text editor.
The DMP file was created with
expdp system/[PW] directory=[expdp_dir] dumpfile=[dumpfile.dmp] full=yes logfile=[logfile.log] reuse_dumpfiles=y
I know that the name Smith occurs often in the database. Running grep -ai smith dumpfile.dmp returns many hits.
To analyze the database further, I installed oracle-database and sqldeveloper-20.2.0.175.1842-x64. I imported the DMP file with
impdp USERID=system/[PW] FULL=y FILE=[dumpfile.dmp]
The folder C:\app\[user]\oradata\orcl now contains the files SYSAUX01.DBF and SYSTEM01.DBF, among others.
I suspect that these are the database files.
The command grep -ai smith *.DBF does not return any hits.
Either the files SYSAUX01.DBF and SYSTEM01.DBF are not the database files, or something went wrong during the import.
Using SQL Developer, I log in with the following data:
User: system
Password: [PW] (= PW from the expdp command)
SDI: orcl
In SQL Developer, I do not find Smith. SQL Developer displays many tables, most of which seem
to be empty and none of which I understand. I suspect that these tables are not the tables I am looking for. Perhaps I need to log in a different way (different user, different SDI?).
I tried to export the database to an SQL dump file, trying out various options that SQL Developer provides,
but the result does not contain the string "Smith".
Something is not right:
the import is faulty
the SDI is wrong
the export is faulty
something else
What might have gone wrong along the way?
You have a lot of misconceptions in your question.
Oracle Data Pump is a database utility designed for exporting and importing. But the content, whether DDL commands (such as create table or create index) or data from the tables, is stored in a binary format, so you can't read the contents of those files directly. There are options to extract the DDL commands from the dump file and put them into a script.
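For example, impdp can write the DDL contained in the dump file to a script instead of executing it (a sketch using the file names from your question; SQLFILE is a standard impdp parameter):

impdp system/[PW] directory=[expdp_dir] dumpfile=[dumpfile.dmp] sqlfile=ddl.sql full=y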
The datafiles you mention are part of the database itself; they have nothing to do with Data Pump. Do not touch those files.
I don't know what you mean by "Smith". If you mean a schema, after importing run a select over dba_users looking for username = 'SMITH'.
If you mean "Smith" as a value inside any of those tables, you will have to look in every single table of the database (except the ones in schemas belonging to Oracle), and in every column that is a string; see the sketch at the end of this answer.
SDI does not mean anything. I guess you meant SID, the Oracle System ID, a unique identifier for a database in a specific environment.
There is nothing wrong. The problem, I believe, is that you don't know exactly what you are looking for.
Check these:
A user/schema with the name SMITH:
SQL> SELECT USERNAME FROM DBA_USERS WHERE USERNAME = 'SMITH';
A table whose name contains the word SMITH (unlikely):
SQL> SELECT TABLE_NAME FROM DBA_TABLES WHERE TABLE_NAME LIKE '%SMITH%';
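And if you really do need the brute-force search over every string column, something along these lines can work. This is only a sketch: it assumes a DBA connection, it skips a few Oracle-maintained schemas by name (adjust the list for your installation), and it will be slow on a large database. Run it with set serveroutput on.

declare
  l_count integer;
begin
  for c in ( select owner, table_name, column_name
               from dba_tab_columns
              where data_type in ('CHAR', 'VARCHAR2', 'NVARCHAR2')
                and owner not in ('SYS', 'SYSTEM', 'SYSMAN', 'MDSYS',
                                  'CTXSYS', 'XDB', 'ORDSYS', 'OUTLN') )
  loop
    begin
      execute immediate
        'select count(*) from "' || c.owner || '"."' || c.table_name ||
        '" where upper("' || c.column_name || '") like ''%SMITH%'''
        into l_count;
      if l_count > 0 then
        dbms_output.put_line(c.owner || '.' || c.table_name || '.' ||
                             c.column_name || ': ' || l_count || ' rows');
      end if;
    exception
      when others then null;  -- skip objects that cannot be queried this way
    end;
  end loop;
end;
/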
I need to update the values of two fields in a table, in every schema containing that table, on an Oracle server (11g). I am from the SQL Server world, but I recognize that the architecture is different here, with multiple schemas under a single instance rather than multiple databases under a single instance. The reason for this update is to ensure that, while database development and testing take place on copies of client databases, the clients do not accidentally receive emails (according to settings defined in certain columns of a table in each schema/database).
I am using SQL Developer, but could also use SQL*Plus instead. Assuming the server name is "myServer", the table name is "myTable" and the fields are "Field1" and "Field2", can somebody provide some SQL that will perform a global update of all schemas on myServer, setting Field1 = 'N' and Field2 = NULL in myTable? Or direct me to a link that answers this question? I ran multiple searches first and found nothing.
Thanks
You can use dynamic SQL:
BEGIN
  FOR tbl IN (SELECT owner, table_name
                FROM dba_tables
               WHERE table_name = 'MYTABLE')
  LOOP
    EXECUTE IMMEDIATE 'update ' || tbl.owner || '.' || tbl.table_name ||
                      ' set field1 = ''N'', ' ||
                      ' field2 = null ';
  END LOOP;
END;
/
You can add some additional logging and you probably want to build the SQL statement in a local variable so that you can log it if something goes wrong. This also assumes that you're running as a user that has access to dba_tables. You could use all_tables if you want to update every table with that name that you have access to (which might not cover every table in the database).
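A sketch of that logging variant, assuming dbms_output is an acceptable log target (set serveroutput on first):

DECLARE
  l_sql VARCHAR2(4000);
BEGIN
  FOR tbl IN (SELECT owner, table_name
                FROM dba_tables
               WHERE table_name = 'MYTABLE')
  LOOP
    l_sql := 'update ' || tbl.owner || '.' || tbl.table_name ||
             ' set field1 = ''N'', field2 = null';
    dbms_output.put_line(l_sql);  -- log each statement before executing it
    EXECUTE IMMEDIATE l_sql;
  END LOOP;
END;
/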
Here is my problem: I want to create a baseline of our development database (Oracle 10g) and check it into our SVN for version control; after that we will use Liquibase to help us manage the incremental database changes.
My problem is how I should create the baseline of Oracle 10g. The database currently consists of 500+ tables with a large amount of configuration data, and I want my DB baseline to be a set of SQL scripts checked into Subversion, rather than checking in an Oracle dump.
I have tried liquibase generateChangeLog, but it has some performance problems. Can anyone recommend tools that will help me:
1. Scan any Oracle schema
2. Generate a set of SQL scripts (with table structures and data)?
Thanks in advance
James!
Something like
SELECT DBMS_METADATA.GET_DDL('TABLE',table_name) FROM USER_TABLES;
is a good start. You can tweak it with PL/SQL and UTL_FILE to get it to write each table to a different file. You will probably need to do sequences too (though versioning them is fairly pointless), and maybe triggers/procedures/functions/packages etc.
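For instance, a minimal sketch of that approach, assuming a directory object named DDL_DIR already exists (create directory ddl_dir as '/some/path') and is writable:

declare
  l_file utl_file.file_type;
  l_ddl  clob;
begin
  for t in ( select table_name from user_tables )
  loop
    l_ddl  := dbms_metadata.get_ddl('TABLE', t.table_name);
    l_file := utl_file.fopen('DDL_DIR', t.table_name || '.sql', 'w', 32767);
    -- write the CLOB out in 4000-character chunks
    for i in 0 .. trunc((dbms_lob.getlength(l_ddl) - 1) / 4000)
    loop
      utl_file.put(l_file, dbms_lob.substr(l_ddl, 4000, i * 4000 + 1));
    end loop;
    utl_file.put_line(l_file, ';');
    utl_file.fclose(l_file);
  end loop;
end;
/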
Don't forget grants.
Have you tried Oracle's free SQL Developer tool? It gives you the possibility of exporting both DDL and data.
EXPDP with CONTENT=METADATA_ONLY option, then IMPDP with SQLFILE=your_script.sql ?
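Spelled out, those two steps might look like this (a sketch: dp_dir is an assumed directory object, and full=y can be narrowed with SCHEMAS=...):

expdp system/[PW] directory=dp_dir dumpfile=meta.dmp content=metadata_only full=y
impdp system/[PW] directory=dp_dir dumpfile=meta.dmp sqlfile=your_script.sql full=y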
Nicolas.
A more general solution would be to dump the DDL SQL not only for a selected list of tables but also for other types of objects. This can be done using the all_objects and dba_users views.
Example that worked for me:
select dbms_metadata.GET_DDL(u.object_type,u.object_name, u.owner)
from all_objects u
where 1=1
-- filter only selected object types
and u.object_type in ('TABLE', 'INDEX', 'FUNCTION', 'PROCEDURE', 'VIEW',
'TYPE', 'TRIGGER', 'SEQUENCE')
-- don't want system objects, generated, temp, invalid etc.
and u.object_name not like 'SYS_%'
and temporary!='Y'
and generated!='Y'
and status!='INVALID'
and u.object_name not like 'TMP_%'
and u.object_name not like '%$%'
-- if you want to filter only changed from some date/timestamp:
-- and u.last_ddl_time > '2014-04-02'
-- filter by owner
and owner in (
  select username from dba_users
   where default_tablespace not like 'SYS%'
     and username not in ('ORACLE_OCM')
     and username not like '%$%'
)
;
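If you run this from SQL*Plus and spool the output to a file, note that get_ddl returns a CLOB, so raise the LONG limit first or the DDL will be truncated. A sketch:

set long 1000000
set pagesize 0
set linesize 32767
spool schema_ddl.sql
-- run the query above
spool off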
I wrote a Python script that refreshes the DB schema in incremental mode based on a similar SQL query. It:
runs the SQL with last_ddl_time >= max(last_ddl_time) from the last refresh
at the end, stores the last_ddl_time somewhere in the filesystem for the next refresh
References:
oracle dbms_metadata.GET_DDL function
oracle all_objects view
So, I have this Java-based data transformation / masking tool that I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX, etc. So I installed 10g and found that the installation scripts are under ORACLE_HOME/demo/scripts.
I customized these scripts a bit to run in batch mode. That solves one half of my requirement: creating source data for testing my data transformation software.
The second half of the requirement is to create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in one schema and load it into the same table in a different schema.
Now, I have two issues in creating my target schemas and emptying them.
I would like this to run as a batch job, but in the Oracle scripts you get, the sample schema names are not configurable. So I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is irritating because the sample schemas are complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other weird stuff.
I would like the target schemas (TR_HR, TR_OE, ...) to be empty, but some of the schemas have circular references, which would not allow me to delete the data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.
Is there any easy way to do all this, without all the fuss? I need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, then creating the target schemas and emptying them. I appreciate your help and suggestions.
UPDATE
The main script you are required to run to manually install the Oracle sample schemas is mkplug.sql. Here is the line that loads the schemas from a dmp file:
host imp "'sys/&&password_sys AS SYSDBA'" transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
Well, I tried modifying this line (after patching up path-related issues in mkplug.sql and all the other SQL files) to this:
host imp "'sys/&&password_sys AS SYSDBA'" rows=n transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
And... it did NOT help. The schemas were still created with row data, despite the rows=n attribute :(
Since you're already familiar with exp/imp (or expdp/impdp) from the Oracle scripts that use the .dmp file, why not just:
Create the empty TR_xxx schemas
Populate the TR_xxx schemas from the xxx.dmp file with the FROMUSER/TOUSER options and ROWS=N (similar options exist for expdp/impdp)
[Edit after reading your comment about the transportable tablespaces]
I didn't know that the Oracle scripts were using transportable tablespaces and that multiple schemas were being imported from a single file. This is probably the most straightforward way to create your new empty TR schemas:
Start with the standard, populated database built with the Oracle scripts
Create no-data export files on a schema-by-schema basis (OE shown) by:
exp sys/&&password_sys AS SYSDBA file=oe_nodata.dmp log=oe_nodata_exp.log owner=OE rows=N grants=N
(You should only have to do this once, and this dmp file can be reused)
Now, your script should:
Drop any TR_ users with the CASCADE option
Re-create the TR_ users
Populate the schema objects (OE shown) by:
host imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE
Here is an anonymous block which, for a given schema, disables triggers and foreign keys, truncates all the tables and then re-enables the triggers and foreign keys. It uses truncate for speed, but obviously this means no rollback: so be careful which schema name you supply! It's easy enough to convert the truncate call into a delete from statement if you prefer.
The script is a fine example of cut'n'paste programming, and would no doubt benefit from some refactoring to remove the repetition.
begin
<< dis_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' disable';
end loop dis_triggers;
<< dis_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' disable constraint '||fkeys.constraint_name;
end loop dis_fkeys;
<< zap_tables >>
for tabs in ( select owner, table_name
from all_tables
where owner = '&&schema_name' )
loop
execute immediate 'truncate table '||tabs.owner||'.'||tabs.table_name
||' reuse storage';
end loop zap_tables;
<< en_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' enable constraint '||fkeys.constraint_name;
end loop en_fkeys;
<< en_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' enable';
end loop en_triggers;
end;
/
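Run from SQL*Plus, it might look like this (zap_schema.sql is a hypothetical file name for the block above; the && substitution variable makes SQL*Plus prompt once for the schema name):

SQL> @zap_schema.sql
Enter value for schema_name: TR_OE

One caveat on the design: the block only disables constraints owned by the target schema, so an enabled foreign key in some other schema that references one of these tables would still block the truncate.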