Oracle - Alter all table column names to remove the white space within the names
Suppose the column names before the alter are:
Home number
Mobile number
Local number
After the alter, the column names should be:
Homenumber
Mobilenumber
Localnumber
I've tried it this way, but was unable to crack it:
UPDATE SA_VW_PHONENUMBER TN SET TN.Column_Name = TRIM (TN.Column_Name);
Fully automatic way
Use this cursor-based DDL hack with statement concatenation:
BEGIN
  FOR alters IN
  (
    SELECT 'ALTER TABLE "'||table_name||'" RENAME COLUMN "'||column_name||
           '" TO "'||replace(cols.column_name,' ','')||'"' sql_stmt
      FROM all_tab_cols cols
     WHERE REGEXP_LIKE(column_name, '[[:space:]]')
       AND owner = user  -- add the real schema name here
     ORDER BY 1
  ) LOOP
    DBMS_OUTPUT.PUT_LINE(alters.sql_stmt || ';');
    EXECUTE IMMEDIATE alters.sql_stmt;
  END LOOP;
END;
/
If you want to do it the safe way
As far as I know, you cannot use bind variables in DDL, so you cannot pass the table and column names to the ALTER TABLE command as parameters; here is what you can do instead.
Select the occurrences:
SELECT table_name, column_name, replace(column_name, ' ', '') AS replace_name
  FROM all_tab_cols
 WHERE REGEXP_LIKE(column_name, '[[:space:]]');
Use the ALTER TABLE DDL command:
alter table T_TABLE rename column "COLUMN SPACE" TO "COLUMNNOSPACE";
Try the REPLACE function
UPDATE SA_VW_PHONENUMBER TN SET TN.Column_Name = REPLACE(TN.Column_Name,' ','')
Related
I want to create a procedure to remove all special characters from a column of a specific table and then remove duplicate records.
So far I have tried the following query to show the desired logic:
SELECT ft_nm_val,count(*)
FROM ( SELECT REGEXP_REPLACE(ft_nm_val, '[^A-Za-z0-9, ]') AS ft_nm_val
FROM fraud_token_name )
GROUP BY ft_nm_val
HAVING COUNT(*) > 1
Since you want to remove duplicate records, and the notion of a duplicate would change row-wise if other columns existed in the table, I assume the table contains only the single column mentioned. Then you can create a procedure like this:
SQL> create or replace procedure make_unique is
begin
--# Leave only alpha-numeric characters through applying [^ ] to [:alnum:] posix
update fraud_token_name
set ft_nm_val = regexp_replace(ft_nm_val,'[^[:alnum:]]');
--# Then delete duplicate records
delete fraud_token_name f1
where rowid <
(
select max(rowid)
from fraud_token_name f2
where f2.ft_nm_val = f1.ft_nm_val
);
commit;
end;
/
SQL> exec make_unique;
Demo
We are trying to modify the precision of existing columns in the database. Those that have been defined as plain NUMBER we want to change to NUMBER(14,2).
But, since NUMBER has a default precision of 38, there exist values in the database which run into more than 10 decimal places. So, when we create an additional column and try to copy over from a temp table, this results in errors.
DECLARE
  countCol NUMBER;
BEGIN
  select count(*) into countCol from USER_TAB_COLUMNS where TABLE_NAME = 'EVAPP_INTERFACE' and COLUMN_NAME = 'RESERVE_RATE_NUM' and DATA_SCALE is null;
  IF (countCol <> 0) then
    execute immediate 'alter table EVAPP_INTERFACE add RESERVE_RATE_NUM_TMP NUMBER(6,3)';
    execute immediate 'update EVAPP_INTERFACE set RESERVE_RATE_NUM_TMP = RESERVE_RATE_NUM';
    execute immediate 'alter table EVAPP_INTERFACE drop column RESERVE_RATE_NUM';
    execute immediate 'alter table EVAPP_INTERFACE rename column RESERVE_RATE_NUM_TMP to RESERVE_RATE_NUM';
    DBMS_OUTPUT.put_line('This column EVAPP_INTERFACE.RESERVE_RATE_NUM has been modified to the required precision');
  END IF;
END;
/
Is there any way to truncate all values in a column?
Like say a column has
43.8052201822
21.1610909091
76.4761223618
75.8535613657
I want them all changed to
43.8
21.16
76.47
75.85
EDIT: I know the word Truncate is used wrongly here, but I don't know a better term for shaving off precision.
Not a wrong word at all, see: TRUNC(number).
The example below shows the difference between truncating and rounding:
create table foo(n number);
insert all
into foo values (1.111)
into foo values (5.555)
into foo values (9.999)
select * from dual;
select n, round(n,2), trunc(n, 2) from foo;
         N ROUND(N,2) TRUNC(N,2)
---------- ---------- ----------
     1.111       1.11       1.11
     5.555       5.56       5.55
     9.999         10       9.99
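If you want to shave the stored values themselves down, TRUNC works in an UPDATE as well. A minimal sketch against the foo table above (substitute your real table and column names):
-- Truncate every value in the column in place to 2 decimal places
update foo
   set n = trunc(n, 2);
commit;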
How about using ROUND(number)?
I'm looking for the best way to change the data type of a column in a populated table. Oracle only allows changing the data type of a column that contains nothing but NULL values.
My solution, so far, is a PLSQL statement which stores the data of the column to be modified in a collection, alters the table and then iterates over the collection, restoring the original data with data type converted.
-- Before: my_table ( id NUMBER, my_value VARCHAR2(255))
-- After: my_table (id NUMBER, my_value NUMBER)
DECLARE
TYPE record_type IS RECORD ( id NUMBER, my_value VARCHAR2(255));
TYPE nested_type IS TABLE OF record_type;
foo nested_type;
BEGIN
SELECT id, my_value BULK COLLECT INTO foo FROM my_table;
UPDATE my_table SET my_value = NULL;
EXECUTE IMMEDIATE 'ALTER TABLE my_table MODIFY my_value NUMBER';
FOR i IN foo.FIRST .. foo.LAST
LOOP
UPDATE my_table
SET my_value = TO_NUMBER(foo(i).my_value)
WHERE my_table.id = foo(i).id;
END LOOP;
END;
/
I'm looking for a more experienced way to do that.
The solution is wrong. The ALTER TABLE statement does an implicit commit, so the solution has the following problems:
You cannot roll back after the ALTER TABLE statement, and if the database crashes right after it you will lose data.
Between the SELECT and the UPDATE, users can make changes to the data.
Instead you should have a look at Oracle online redefinition (DBMS_REDEFINITION).
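A minimal sketch of online redefinition for the my_table example, assuming my_table has a primary key on id and that the interim table name is free; the CAN_REDEF_TABLE check, the COPY_TABLE_DEPENDENTS call and error handling are omitted:
-- Interim table with the target data type (name is illustrative)
CREATE TABLE my_table_interim (id NUMBER, my_value NUMBER);

BEGIN
  -- Start the redefinition, converting my_value on the fly via the column mapping
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname       => USER,
    orig_table  => 'MY_TABLE',
    int_table   => 'MY_TABLE_INTERIM',
    col_mapping => 'id id, TO_NUMBER(my_value) my_value');

  -- Swap the tables; my_table now has my_value as NUMBER
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    uname      => USER,
    orig_table => 'MY_TABLE',
    int_table  => 'MY_TABLE_INTERIM');
END;
/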
Your solution looks a bit dangerous to me. Loading the values into a collection and then clearing them from the table means that these values are now only available in memory. If something goes wrong, they are lost.
The proper procedure, sketched below, is:
Add a column of the correct type to the table.
Copy the values to the new column.
Drop the old column.
Rename the new column to the old column's name.
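A sketch of those steps for the my_table / my_value example from the question, assuming every existing value converts cleanly with TO_NUMBER:
ALTER TABLE my_table ADD my_value_new NUMBER;                  -- 1. add a column of the correct type
UPDATE my_table SET my_value_new = TO_NUMBER(my_value);       -- 2. copy (and convert) the values
ALTER TABLE my_table DROP COLUMN my_value;                     -- 3. drop the old column
ALTER TABLE my_table RENAME COLUMN my_value_new TO my_value;   -- 4. rename the new column to the old name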
I want to disable the NOT NULL constraints on a table so I can insert test data, but I can't find a way to disable unnamed constraints.
I found enough info to disable named constraints, but I couldn't find an example of disabling an unnamed NOT NULL constraint.
I would like to implement this without querying the data dictionary, but... I'm willing to do that if it's the only way. I would still like to use a clean ALTER TABLE DDL.
You will need to query the data dictionary; something like this will disable all the constraints on the table. Be aware, though, that this disables the constraints system-wide, not just for your session. Perhaps what you really want is to defer the constraint?
drop table testxx
drop table testxx succeeded.
create table testxx ( id number not null )
create table succeeded.
select status from user_constraints where table_name = 'TESTXX'
STATUS
--------
ENABLED
1 rows selected
begin
  for cnames in ( select table_name, constraint_name from user_constraints where table_name = 'TESTXX' ) loop
    execute immediate 'alter table ' || cnames.table_name || ' disable constraint ' || cnames.constraint_name;
  end loop;
end;
anonymous block completed
select status from user_constraints where table_name = 'TESTXX'
STATUS
--------
DISABLED
1 rows selected
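If deferring really is what you are after, here is a minimal sketch of a deferrable NOT NULL constraint (the table and constraint names are illustrative); the check is then postponed until commit for the transaction that defers it:
create table testyy ( id number constraint testyy_id_nn not null deferrable initially immediate );

set constraint testyy_id_nn deferred;   -- defer the check for the current transaction only
insert into testyy values (null);       -- accepted now, but must be corrected before commit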
You can also just alter the column, as follows:
create table test_null (col_n number not null);
alter table test_null modify col_n number null;
I would like to do an INSERT / SELECT, that is, insert into the TARGET_TABLE the records of the SOURCE_TABLE, with this assumption:
The SOURCE and the TARGET table have only a SUBSET of columns in common, for example:
==> The SOURCE TABLE has ALPHA, BETA and GAMMA columns;
==> The TARGET TABLE has BETA, GAMMA and DELTA columns.
What is the most efficient way to produce INSERT / SELECT statements, respecting the assumption that not all the target columns are present in the source table?
The idea is that the PL/SQL script CHECKS the columns in the source table and in the target table, makes the INTERSECTION, and then produces a dynamic SQL with the correct list of columns.
Please assume that the columns present in the target table, but not present in the source table, have to be left NULL.
I wish to extract the data from SOURCE into a set of INSERT statements for later insertion into the TARGET table.
You can assume that the TARGET table has more columns than the SOURCE table, and that all the columns in the SOURCE table are present in the TARGET table in the same order.
Thank you in advance for your useful suggestions!
In Oracle, you can get the common columns with this SQL query:
select column_name
from user_tab_columns
where table_name = 'TABLE_1'
intersect
select column_name
from user_tab_columns
where table_name = 'TABLE_2'
Then iterate a cursor over that query to build a comma-separated list of the column names returned, and put that comma-separated string into a VARCHAR2 variable named common_fields. Then you can:
sql_sentence := 'insert into TABLE_1 (' ||
common_fields ||
') select ' ||
common_fields ||
' from TABLE_2';
execute immediate sql_sentence;
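A minimal end-to-end sketch of that idea, assuming TABLE_1 is the target and TABLE_2 is the source, and assuming Oracle 11gR2 or later so LISTAGG can build the comma-separated list instead of a manual cursor loop:
DECLARE
  common_fields VARCHAR2(4000);
  sql_sentence  VARCHAR2(4000);
BEGIN
  -- Build the comma-separated list of the columns present in both tables
  SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_name)
    INTO common_fields
    FROM ( SELECT column_name FROM user_tab_columns WHERE table_name = 'TABLE_1'
           INTERSECT
           SELECT column_name FROM user_tab_columns WHERE table_name = 'TABLE_2' );

  -- Insert only the common columns; the remaining target columns stay NULL
  sql_sentence := 'insert into TABLE_1 (' || common_fields ||
                  ') select ' || common_fields || ' from TABLE_2';
  EXECUTE IMMEDIATE sql_sentence;
END;
/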