I have a database with 15 tables. Now, due to the development process, one column has to be added to all the tables in the database. This change should not affect the existing processes, because some other services also consume this database. To accomplish this I thought of creating a new database. Is there any other way to do it?
Usually it should be enough to create a new schema ("user") and create the tables in that new schema. In Oracle, identically named tables can exist in several schemas.
CREATE USER xxx IDENTIFIED BY yyy
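For example, continuing from that (the schema prod, the table orders and the column new_col below are placeholders for your own names):

-- as a DBA: give the new schema what it needs and let it read the original table
GRANT CREATE SESSION, CREATE TABLE, UNLIMITED TABLESPACE TO xxx;
GRANT SELECT ON prod.orders TO xxx;

-- connected as XXX: copy the structure and add the new column only here
CREATE TABLE orders AS SELECT * FROM prod.orders WHERE 1 = 0;
ALTER TABLE orders ADD (new_col VARCHAR2(50));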
You can create another schema for development and import the tables into the new schema. Developers should then use the development schema instead of the production schema. You could also create a new database and import from the current database, but that should be the last option.
What's wrong with alter table T add (COL varchar2(5)); ?
Of course, dependent stored procedures or packages become invalid.
You can leave them alone; Oracle will try to recompile an invalid procedure automatically the first time it is called, and only raise an error if that recompilation fails. Or you can recompile explicitly with alter procedure P compile;.
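For example (T, COL and P stand for your own table, column and procedure names):

alter table T add (COL varchar2(5));
-- see which dependent objects became invalid
select object_name, object_type from user_objects where status = 'INVALID';
-- recompile explicitly instead of waiting for the first call
alter procedure P compile;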
I have the following scenario and need to solve it in Oracle:
Table A is on one DB server.
Table B is on a different server.
Table A will be populated with data.
Whenever something is inserted into Table A, I want to copy it to Table B.
Table B has nearly the same columns, but sometimes I just want to take the content of two columns from Table A, concatenate it, and save it to Table B.
I am not very familiar with Oracle, but after researching on Google, some say that you can do it with triggers or views. How would you do it?
So in general, there is a table which will be populated, and its content should be copied to a different table.
This is the solution I have come up with so far:
create public database link other_db
  connect to user identified by pw
  using 'tns-entry';
CREATE TRIGGER modify_remote_my_table
AFTER INSERT ON my_table
BEGIN INSERT INTO ....?
END;
/
How can I select the latest row that was inserted?
If these two tables live in databases on two different servers, then you will need a database link (db link) created in Table A's schema so that it can access (read/write) Table B's data through the link.
Step 1: Create a database link in Table A's database pointing to Table B's database.
Step 2: Create a trigger on Table A that inserts the data into Table B over the database link. Inside the trigger you can customize the values (e.g. concatenate them) before inserting into Table B; see the sketch after the link below.
This link should help you
http://searchoracle.techtarget.com/tip/How-to-create-a-database-link-in-Oracle
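A minimal sketch of such a trigger, reusing the other_db link and my_table from the question (table_b and the columns id, col1, col2 and combined_col are only assumptions):

CREATE OR REPLACE TRIGGER modify_remote_my_table
AFTER INSERT ON my_table
FOR EACH ROW
BEGIN
  -- :NEW holds exactly the row that was just inserted, so there is no need
  -- to look up "the latest row" separately
  INSERT INTO table_b@other_db (id, combined_col)
  VALUES (:NEW.id, :NEW.col1 || ' ' || :NEW.col2);
END;
/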
Yes, you can do this with triggers. But there may be a few disadvantages:
What if database B is not available? -> You need exception handling in your trigger.
What if database B was unavailable for two hours? Data inserted into database A during that time is now missing from database B. -> You end up doing crazy things like temporarily inserting it into a cache table in database A.
Performance. Well, the performance for inserting a lot of data will be ugly. Each time you insert data, Oracle will start the PL/SQL engine to insert the data into the remote database.
Maybe you could think about using MViews (Materialized Views) to replicate the data via database link. Later you can build your queries so that they access tables from database B and add the required data from database A by joining the MViews.
You can also use fast refresh to replicate the data (almost) in real time.
From the perspective of an Oracle database admin this would make a lot more sense than the trigger approach.
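A sketch of that approach (link_to_a is assumed to be a link created on database B pointing back at database A, i.e. the opposite direction to the link in the question, and my_table is assumed to have a primary key):

-- on database A: a materialized view log so that fast refresh can track changes
CREATE MATERIALIZED VIEW LOG ON my_table WITH PRIMARY KEY;

-- on database B: replicate the table over the link, refreshing roughly every minute
CREATE MATERIALIZED VIEW my_table_mv
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1/1440
  AS SELECT * FROM my_table@link_to_a;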
Try this code. Database links are considered rather insecure, Oracle's own replication options come with licensing costs these days, and some of the other options are deprecated as well.
https://gist.github.com/anonymous/e3051239ba401e416565cdd912e0de8c
It uses ORA_ROWSCN to sync tables across two different Oracle databases.
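The gist aside, the basic ORA_ROWSCN idea looks roughly like this (a sketch only; the column names are placeholders, it is most reliable when the table was created with ROWDEPENDENCIES, and :last_synced_scn is whatever SCN you recorded at the end of the previous sync):

-- pull only the rows that changed since the last sync
SELECT id, col1, col2, ora_rowscn
FROM   my_table
WHERE  ora_rowscn > :last_synced_scn;

-- remember the current SCN for the next run
SELECT current_scn FROM v$database;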
I have a situation where I need to update tables in different schemas in a single transaction. I can use either plain JDBC or Spring's JdbcTemplate.
Any suggestions as to how I can achieve this? Thanks.
Ravi
I suggest you make your database connection from Java with a specific user (say JAVA_USR or something like that), and ask your DBA for the proper grants on the destination schemas. Your DBA will also like to know that all access from Java applications is made through the same user, since it will make monitoring easier for them.
This way you can send the following alter statements through JDBC or JdbcTemplate:
ALTER TABLE JOE.TABLE_1 ADD NAME VARCHAR2(80)
ALTER TABLE MARVIN.TABLE_2 ADD SURNAME VARCHAR2(100)
ALTER TABLE JACK.TABLE_3 ADD ADDRESS VARCHAR2(75)
...
If all the schemas are in the same database, you should be able to do your job in the same transaction in either of the ways you have asked about.
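For example, with auto-commit switched off on the connection, a batch like the following (reusing the schema, table and column names from the answer above; the id column and the values are made up) is committed or rolled back as one unit:

UPDATE joe.table_1    SET name    = 'Alice' WHERE id = 1;
UPDATE marvin.table_2 SET surname = 'Smith' WHERE id = 1;
COMMIT;  -- a ROLLBACK here instead would undo both updates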
I want to create a table (let's say table_copy) which has the same columns as another table (let's call it table_original) in an Oracle database, so the query will be like this:
create table table_copy as (select * from table_original where 1=0);
This creates the table, but the constraints of table_original are not copied to table_copy. What should be done in this case?
Only NOT NULL constraints are copied by Create Table As Select (CTAS). The others have to be created manually.
You could, however, query the data dictionary views to see the definitions of the constraints and re-create them on your new table, manually or with PL/SQL.
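For example, the check constraints and foreign keys on the original table can be listed like this (join to USER_CONS_COLUMNS if you also need the columns involved):

SELECT constraint_name, constraint_type, search_condition, r_constraint_name
FROM   user_constraints
WHERE  table_name = 'TABLE_ORIGINAL';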
The other tool that might be helpful is Oracle Data Pump. You could import the table using the REMAP_TABLE option, specifying the name for the new table.
Use a database tool to extract the DDL needed for the constraints (SQL Developer does the job). Edit the resulting script to match the name of the new table.
Execute the script.
If you need to do this programmatically you can use a statement like this:
SELECT DBMS_METADATA.GET_DDL('TABLE','PERSON') FROM DUAL;
I am trying to take an Oracle DB backup using expdp. I have a specific case where an application table resides in the SYSTEM tablespace.
The export of this schema is created successfully with the options SCHEMAS=SYSTEM and INCLUDE=TABLE:"like 'USER%'", which matches my application tables.
I have created another schema, with the user impexp, which has a different tablespace allocated to it.
When I try to import the .dmp file into impexp, the import fails, stating that "SYSTEM"."USER_SYS_MAST" exists.
Is there a way to import this table into the newly created schema? I also tried the option REMAP_SCHEMA=SYSTEM:IMPEXP, but it errors out with ORA-39013: Remapping the SYSTEM schema is not supported.
Summarizing: I want to import my application tables from the SYSTEM tablespace into a new tablespace, IMPEXP.
Please let me know if I am going wrong somewhere and trying to do something that isn't supported.
Any help will be greatly appreciated.
This is one of the reasons why putting application tables in the SYS or SYSTEM schemas is considered bad practice. These schemas are vital to the running of our databases and should not be meddled with.
You have compounded this bloomer by naming your tables with a prefix of USER, which is the same convention the data dictionary uses.
What you need to do is create a new schema to hold these tables. Grant it whatever privileges it needs that made you think it had to be owned by SYSTEM. Then move those tables out of the SYSTEM schema.
To do a proper job you should change your application to use this new schema, but as a temporary fix you could give SYSTEM rights on the tables and build synonyms for them. If you have the time, change the application; it will cause you less grief in the long run.
Either way, you will be able to export the data out of the old database and into the target database using this new schema.
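A rough sketch of that fix, run as a DBA (the schema name APP_OWNER, the password and the privilege list are only examples; USER_SYS_MAST is the table from the question):

-- a proper application owner with its own default tablespace
CREATE USER app_owner IDENTIFIED BY some_password
  DEFAULT TABLESPACE impexp QUOTA UNLIMITED ON impexp;
GRANT CREATE SESSION, CREATE TABLE TO app_owner;

-- move the table out of the SYSTEM schema
CREATE TABLE app_owner.user_sys_mast AS SELECT * FROM system.user_sys_mast;
DROP TABLE system.user_sys_mast;

-- temporary fix: keep code that still references SYSTEM.USER_SYS_MAST working
CREATE SYNONYM system.user_sys_mast FOR app_owner.user_sys_mast;
GRANT SELECT, INSERT, UPDATE, DELETE ON app_owner.user_sys_mast TO system;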
Agree with APC.
In your specific case, I would look at DBMS_METADATA.GET_DDL to extract the DDL so that I can recreate all the objects in the new schema. There are options to exclude the TABLESPACE clause, so the objects get created in the new schema's default tablespace.
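For example, something along these lines (run from SQL*Plus as a suitably privileged user; USER_SYS_MAST is the table name from the question):

-- suppress the TABLESPACE clause so the object lands in the new schema's default tablespace
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', FALSE);
SELECT DBMS_METADATA.GET_DDL('TABLE', 'USER_SYS_MAST', 'SYSTEM') FROM DUAL;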
Then I would simply do INSERT /*+ APPEND */ INTO newschema.table SELECT * FROM SYSTEM.table.
If space is an issue, you may need to TRUNCATE or DROP individual tables immediately after they are successfully copied.
I have a schema called "CUSTOMERS". In this schema there is a table called RECEIVABLES.
There is another schema called "ACCOUNTS". In this schema, there is a table called RECEIVABLES_AC.
RECEIVABLES_AC has a public synonym called RECEIVABLES.
The table structure of both the tables is exactly the same.
If your front end uses the CUSTOMERS schema credentials to establish a connection, how can you ensure that the record gets inserted into RECEIVABLES_AC without changing the front-end code?
I think this is a trick question. Short of renaming the table RECEIVABLES in the CUSTOMERS schema, I don't see how this can be done.
The only way that I can think of (without changing the login or insert statement) is to use a database trigger that runs on login and changes the current schema to ACCOUNTS:
create or replace trigger logon_set_schema
AFTER LOGON ON DATABASE
BEGIN
  if sys_context('USERENV','SESSION_USER') = 'CUSTOMERS' then
    execute immediate 'alter session set current_schema=accounts';
  end if;
END;
/
However, this would likely break other aspects of the code, so changing the application to specify the schema name would be vastly preferable.
What isn't specified is whether the behavior is supposed to be instead-of or in-addition-to.
Use replication on ACCOUNTS.RECEIVABLES to propagate DML to CUSTOMER.RECEIVABLES_AC. Triggers, streams, what have you.
Use the ALTER SESSION SET CURRENT_SCHEMA statement to change the default namespace of the user's session.
The right way to respond is to fix the design, and not have multiple receivables tables with public synonyms floating about.
Two good ways to solve this problem are:
Option 1
Rename CUSTOMERS.RECEIVABLES.
Drop the public synonym.
Create a private synonym in the CUSTOMERS schema, called RECEIVABLES that points to ACCOUNTS.RECEIVABLES_AC.
Option 2
Change the front-end to refer to RECEIVABLES_AC instead of RECEIVABLES.
Create a private synonym in the CUSTOMERS schema, called RECEIVABLES_AC that points to ACCOUNTS.RECEIVABLES_AC.
I would prefer Option 2. Private synonyms are a great way of controlling which tables are used by a particular schema, without having to hard-code the schema name in the app.
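A sketch of Option 2, using the schema and table names from the question (the grant assumes CUSTOMERS does not already have DML rights on the ACCOUNTS table):

-- as ACCOUNTS (or a DBA): let CUSTOMERS write to the real table
GRANT SELECT, INSERT, UPDATE, DELETE ON accounts.receivables_ac TO customers;

-- as CUSTOMERS: a private synonym, so the unqualified name RECEIVABLES_AC resolves here
CREATE SYNONYM receivables_ac FOR accounts.receivables_ac;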