How to rename a table column in Oracle 10g

I would like to know:
How to rename a table column in Oracle 10g?

SQL> create table a(id number);
Table created.
SQL> alter table a rename column id to new_id;
Table altered.
SQL> desc a
Name                                      Null?    Type
----------------------------------------- -------- -----------
NEW_ID                                             NUMBER

The syntax of the statement is as follows:
Alter table <table_name> rename column <old_column_name> to <new_column_name>;
Example:
Alter table employee rename column eName to empName;
To rename a column without a space to a column name containing a space (names with spaces must be quoted):
Alter table employee rename column empName to "Emp Name";
To rename a column with a space back to a column name without a space (the quoted name is case-sensitive and must match exactly):
Alter table employee rename column "Emp Name" to empName;

alter table table_name rename column oldColumn to newColumn;

Suppose supply_master is a table:
SQL> desc supply_master;
Name
-----------------------
SUPPLIER_NO
SUPPLIER_NAME
ADDRESS1
ADDRESS2
CITY
STATE
PINCODE
SQL> alter table Supply_master rename column ADDRESS1 to ADDR;
Table altered.
SQL> desc Supply_master;
Name
-----------------------
SUPPLIER_NO
SUPPLIER_NAME
ADDR          -- this column has been renamed
ADDRESS2
CITY
STATE
PINCODE

alter table table_name
rename column old_column_name to new_column_name;
Example: alter table student rename column name to username;

Related

Replace hive table with partition

There is a Hive table with two string columns and one partition column, "cmd_out".
I'm trying to rename both columns ('col1', 'col2') by using replace columns:
Alter table 'table_test' replace columns(
'col22' String,
'coll33' String
)
But I receive the following exception:
Partition column name 'cmd_out' conflicts with table columns.
When I include the partition column in query
Alter table 'table_test' replace columns(
'cmd_out' String,
'col22' String,
'coll33' String
)
I receive:
Duplicate column name cmd_out in the table definition
If you want to rename a column, you need to use ALTER TABLE ... CHANGE.
Here is the syntax:
alter table mytab change col1 new_col1 string;
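Applied to the question's table, a hedged sketch (assuming the original columns are col1 and col2, both strings) renames each column in turn and leaves the cmd_out partition column untouched:
-- rename each non-partition column in place; the type stays STRING
ALTER TABLE table_test CHANGE col1 col22 STRING;
ALTER TABLE table_test CHANGE col2 coll33 STRING;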

Set system date / constraint

I'm trying to set the default value of the EmpDate column to the current system date. How do I do it in Oracle SQL? Also, how do I add multiple columns in one command (instead of using two separate ALTER statements like the code shown below)?
The question is: "Add two columns to the EMPLOYEES table. One column, named EmpDate, contains the date of employment for each employee, and its default value should be the system date. The second column, named EndDate, contains employees' date of termination."
ALTER TABLE EMPLOYEES
Add EmpDate Date;
ALTER TABLE EMPLOYEES
Add EndDate Date;
ALTER TABLE EMPLOYEES
ADD CONSTRAINT empdate
DEFAULT GETDATE() FOR EmpDate;
alter table Employees add EmpDate date default sysdate;
alter table Employees add EndDate date;
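To add both columns in one command, as the question asks, Oracle's ALTER TABLE ... ADD accepts a parenthesized column list (GETDATE() is SQL Server syntax; the Oracle equivalent is SYSDATE). A minimal sketch:
ALTER TABLE employees ADD (
  empdate DATE DEFAULT SYSDATE,  -- defaults to the system date at insert time
  enddate DATE
);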

In Oracle, on delete set null is not working

I have created two tables:
Create Table Dept
(Department_id number Constraint Depart_id_pk Primary Key
,Department_name varchar2(20));
Create table Emp
(Emp_id number Constraint Empl_id_pk Primary Key
,First_name varchar2(10)
,salary number
,Department_id number
,Constraint depart_id_fk Foreign Key (department_id)
References Dept (Department_id) on delete set null);
Then I inserted some records into the Dept and Emp tables. But when I try to drop the Dept table, instead of setting the Emp.department_id column to null, it shows an error like this:
SQL> Drop Table Dept;
Drop Table Dept
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys
The foreign key's clause says "on delete set null". DELETE is a DML operation, and had you attempted to delete rows from the dept table, the corresponding emp rows would have been updated with a null department_id.
But this isn't the case - you tried to drop the entire table, a DDL operation. This isn't allowed, because you'd be leaving behind constraints on the emp table that reference a table that no longer exists. If you want to drop those constraints too, you can use a cascade constraints clause:
DROP TABLE dept CASCADE CONSTRAINTS;
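To see the on delete set null action actually fire, issue a DELETE (a DML operation) instead of a DROP. A minimal sketch using the tables above (the department id value is hypothetical):
-- deleting a parent row sets the matching child rows' foreign key to NULL
DELETE FROM Dept WHERE Department_id = 10;
-- rows in Emp that had Department_id = 10 now have Department_id = NULL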

Alter hive table add or drop column

I have an ORC table in Hive and I want to drop a column from this table:
ALTER TABLE table_name drop col_name;
but I am getting the following exception
Error occurred executing hive query: OK FAILED: ParseException line 1:35 mismatched input 'user_id1' expecting PARTITION near 'drop' in drop partition statement
Can anyone help me or suggest a way to do this? Note: I am using Hive 0.14.
You cannot drop a column directly from a table using the command ALTER TABLE table_name drop col_name;
The only way to drop a column is to use the replace columns command. Let's say I have a table emp with columns id, name, and dept, and I want to drop the id column. List all the columns you want to keep in the replace columns clause; the command below will drop the id column from the emp table.
ALTER TABLE emp REPLACE COLUMNS( name string, dept string);
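A quick hedged check after running it (hypothetical emp table from above):
-- the schema should now list only name and dept; note that REPLACE COLUMNS
-- changes the table metadata only and does not rewrite the data files
DESCRIBE emp;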
There is also a "dumb" way of achieving the end goal: create a new table without the unwanted column(s). Hive's regex column matching makes this rather easy.
Here is what I would do:
-- make a copy of the old table
ALTER TABLE table RENAME TO table_to_dump;
-- make the new table without the columns to be deleted
CREATE TABLE table AS
SELECT `(col_to_remove_1|col_to_remove_2)?+.+`
FROM table_to_dump;
-- drop the old table
DROP TABLE table_to_dump;
If the table in question is not too big, this should work just fine.
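One caveat, depending on your Hive version: the backtick-quoted regex column selection above only works when quoted identifiers are disabled, so you may need to set this first (a hedged note; check your version's default):
SET hive.support.quoted.identifiers=none;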
Suppose you have an external table, organization.employee, as follows (TBLPROPERTIES not included):
hive> show create table organization.employee;
OK
CREATE EXTERNAL TABLE `organization.employee`(
`employee_id` bigint,
`employee_name` string,
`updated_by` string,
`updated_date` timestamp)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://getnamenode/apps/hive/warehouse/organization.db/employee'
You want to remove the updated_by and updated_date columns from the table. Follow these steps:
Create a temp table replica of organization.employee:
hive> create table organization.employee_temp as select * from organization.employee;
Drop the main table organization.employee:
hive> drop table organization.employee;
Remove the underlying data from HDFS (you need to exit the Hive shell first):
[nameet@ip-80-108-1-111 myfile]$ hadoop fs -rm hdfs://getnamenode/apps/hive/warehouse/organization.db/employee/*
Recreate the table with only the required columns:
hive> CREATE EXTERNAL TABLE `organization.employee`(
`employee_id` bigint,
`employee_name` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://getnamenode/apps/hive/warehouse/organization.db/employee'
Insert the original records back into the recreated table:
hive> insert into organization.employee
select employee_id, employee_name from organization.employee_temp;
Finally, drop the temp table:
hive> drop table organization.employee_temp;
ALTER TABLE emp REPLACE COLUMNS( name string, dept string);
The above statement can only change the schema of a table, not the data.
A solution to this problem is to copy the data into a new table:
INSERT INTO TABLE <new_table> SELECT <selected_columns> FROM <old_table>;
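A concrete, hedged example of that copy, assuming a pre-created emp_new(name, dept) and the original emp(id, name, dept):
-- copy only the columns to keep; id is dropped in the process
INSERT INTO TABLE emp_new
SELECT name, dept FROM emp;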
ALTER TABLE is not yet supported for non-native tables; i.e. what you get with CREATE TABLE when a STORED BY clause is specified.
check this https://cwiki.apache.org/confluence/display/Hive/StorageHandlers
After a lot of mistakes, and in addition to the explanations above, I would add these simpler answers.
Case 1: Add a new column named new_column
ALTER TABLE schema.table_name
ADD COLUMNS (new_column INT COMMENT 'new number column');
Case 2: Rename a column new_column to no_of_days
ALTER TABLE schema.table_name
CHANGE new_column no_of_days INT;
Note that when renaming, both columns should have the same datatype (INT in the example above).
For an external table it's simple and easy.
Just drop the table (which drops only the schema), edit the create-table statement, and then recreate the table with the new schema.
Example table: aparup_test.tbl_schema_change; we will drop the column id.
Steps:
------------- show create table to fetch schema ------------------
spark.sql("""
show create table aparup_test.tbl_schema_change
""").show(100,False)
Output:
CREATE EXTERNAL TABLE aparup_test.tbl_schema_change(name STRING, time_details TIMESTAMP, id BIGINT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'gs://aparup_test/tbl_schema_change'
TBLPROPERTIES (
'parquet.compress' = 'snappy'
)
""")
------------- drop table --------------------------------
spark.sql("""
drop table aparup_test.tbl_schema_change
""").show(100,False)
------------- edit create table schema by dropping column "id" ------------------
spark.sql("""
CREATE EXTERNAL TABLE aparup_test.tbl_schema_change(name STRING, time_details TIMESTAMP)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'gs://aparup_test/tbl_schema_change'
TBLPROPERTIES (
'parquet.compress' = 'snappy'
)
""")
------------- sync up table schema with parquet files ------------------
spark.sql("""
msck repair table aparup_test.tbl_schema_change
""").show(100,False)
==================== DONE =====================================
Even the query below is working for me:
Alter table tbl_name drop col_name;

How can I implement conditional updating in Oracle?

I'm new to oracle and having a problem with one of my SQL Queries.
There are 2 Users: User1 and User2:
Tab1 Tab2
-------- --------
EmpNo EmpNo
EmpName EmpName
ContactNo Salary
Location
User2 has all privileges in User1.Tab1, and there is no foreign key relationship between the two tables.
The Problem:
I want to add a column "NameDesignation" to Tab2, and I want to insert the value into this column after checking the following condition:
WHEN User1.Tab1.EmpNo = User2.Tab2.EmpNo THEN
INSERT INTO Tab2 VALUES (&designation)
I really have no idea how to do this, and was hoping for a little help. Any thoughts?
Try this:
update user2.tab2 t2
set NameDesignation = &designation
where exists (select ''
              from user1.tab1 t1
              where t1.empno = t2.empno)
(statement updated to match the edited question)
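An alternative hedged sketch using MERGE, which expresses the same conditional update in a single statement (names as in the question; the &designation substitution variable is assumed to expand to a properly quoted string):
MERGE INTO user2.tab2 t2
USING user1.tab1 t1
ON (t1.empno = t2.empno)
WHEN MATCHED THEN
  UPDATE SET t2.NameDesignation = &designation;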
You would need a set of triggers.
After insert or update:
CREATE OR REPLACE TRIGGER tab1_after_changed
AFTER INSERT OR UPDATE
ON tab1
FOR EACH ROW
BEGIN
  DELETE FROM User2.Tab2 WHERE EmpNo = :NEW.EmpNo;
  INSERT INTO User2.Tab2 (EmpNo, EmpName, NameDesignation)
  VALUES (:NEW.EmpNo, :NEW.EmpName,
          (SELECT DesignationName FROM Designation WHERE DesignationID = :NEW.DesignationID));
END;
I just assumed a Designation table (DesignationID number, DesignationName varchar2(xx)) and Tab1 having a DesignationID (number) column.
