Oracle converted Varchar field to number how to convert it back to Varchar - oracle

Until today we only ever entered numeric values into the VARCHAR column, so Oracle converted that varchar field to a numeric field.
Now, when we try to insert a character value, it throws ORA-01722 (invalid number).
Could anyone help me convert it back to a varchar field?

The commenters are correct: the database does not change column types on its own. In general, you must create a new column, copy the old data over to the new column, drop the original column, and rename the new column.
DROP TABLE deleteme_table;
-- zippy should be VARCHAR2(30), not INTEGER
CREATE TABLE deleteme_table
(
adate DATE
, zippy INTEGER
);
-- Add a column of the correct type to the table
ALTER TABLE deleteme_table
ADD (tempcol VARCHAR2 (30));
-- Copy the old values to the new column
UPDATE deleteme_table
SET tempcol = zippy;
-- Get rid of the original column
ALTER TABLE deleteme_table
DROP COLUMN zippy;
-- Rename to the original column name
ALTER TABLE deleteme_table RENAME COLUMN tempcol TO zippy;
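To double-check that the rebuilt column is a VARCHAR2 again, a quick data-dictionary query can be run afterwards; a minimal sketch against the demo table above:
-- Confirm the column's type after the rebuild
SELECT column_name, data_type, data_length
  FROM user_tab_columns
 WHERE table_name = 'DELETEME_TABLE';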

Related

Replace hive table with partition

There is a Hive table with two string columns and one partition column, "cmd_out".
I'm trying to rename both columns ('col1', 'col2') by using REPLACE COLUMNS:
Alter table 'table_test' replace columns(
'col22' String,
'coll33' String
)
But I receive the following exception:
Partition column name 'cmd_out' conflicts with table columns.
When I include the partition column in query
Alter table 'table_test' replace columns(
'cmd_out' String,
'col22' String,
'coll33' String
)
I receive:
Duplicate column name cmd_out in the table definition
If you want to rename a column, you need to use ALTER TABLE ... CHANGE.
Here is the syntax
alter table mytab change col1 new_col1 string;
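Applied to the names from the question (assuming the existing columns really are col1 and col2), that would be one CHANGE statement per column, e.g.:
alter table table_test change col1 col22 string;
alter table table_test change col2 coll33 string;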

In Hive, is there a way to specify between which columns to add a new column?

I can do
ALTER TABLE table_name ADD COLUMNS (user_id BIGINT)
to add a new column to the end of my non-partition columns and before my partition columns.
Is there any way to add a new column to anywhere among my non-partition columns?
For example, I would like to put this new column user_id as the first column of my table
Yes, it is possible to change the position of a column, but only after adding it to the table, using CHANGE COLUMN.
In your case, first add the column user_id to the table with the command below:
ALTER TABLE table_name ADD COLUMNS (user_id BIGINT);
Now, to make user_id the first column in your table, use CHANGE COLUMN with the FIRST clause:
ALTER TABLE table_name CHANGE COLUMN user_id user_id BIGINT first;
This will move the user_id column to the first position.
Similarly, you can use AFTER instead of FIRST if you want to move the specified column after any other column. Say I want to move the dob column after the user_id column; then my command would be:
ALTER TABLE table_name CHANGE COLUMN dob dob date AFTER user_id;
Please note that this command changes metadata only. If you are moving columns, the data must already match the new schema, or you must change it to match by some other means.
Ah, here's the explanation for why you listed user_id twice (it's not a typo):
// Next change column a1's name to a2, its data type to string, and put it after column b.
ALTER TABLE test_change CHANGE a1 a2 STRING AFTER b;
// The new table's structure is: b int, a2 string, c int.
No, it is not possible.
One solution is to create a new table using the "CREATE TABLE AS SELECT" approach and drop the older one.
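A minimal sketch of that approach, assuming user_id has already been added at the end with ADD COLUMNS, and with hypothetical columns name and age standing in for the existing ones:
-- Build a replacement table with user_id first, then swap it in
CREATE TABLE table_name_new AS SELECT user_id, name, age FROM table_name;
DROP TABLE table_name;
ALTER TABLE table_name_new RENAME TO table_name;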

How to automatically get the current date and time in a column using HIVE

Hey, I have two columns in my Hive table:
For example :-
c1 : name
c2 : age
Now, while creating the table, I want to declare two more columns which automatically give me the current date and time when the row is loaded.
eg: John 24 26/08/2015 11:15
How can this be done?
Hive currently does not support adding a default value to a column definition when creating a table. Please refer to the link for the complete Hive CREATE TABLE syntax:
Hive Create Table specification
An alternative workaround for this issue would be to load the data into a temporary table first and use an INSERT OVERWRITE TABLE statement to add the current date and time while inserting into the main table.
Below example may help:
1. Create a temporary table
create table EmpInfoTmp(name string, age int);
2. Insert data using a file or existing table into the EmpInfoTmp table:
name|age
Alan|28
Sue|32
Martha|26
3. Create a table which will contain your final data:
create table EmpInfo(name string, age tinyint, createDate string, createTime string);
4. Insert data from the temporary table and with that also add the columns with default value as current date and time:
insert overwrite table empinfo select name, age, FROM_UNIXTIME( UNIX_TIMESTAMP(), 'dd/MM/yyyy' ), FROM_UNIXTIME( UNIX_TIMESTAMP(), 'HH:mm' ) from empinfotmp;
5. End result would be like this:
name|age|createdate|createtime
Alan|28|26/08/2015|03:56
Martha|26|26/08/2015|03:56
Sue|32|26/08/2015|03:56
Please note that the creation date and time values will only be accurate if you insert the data into your final table as soon as it arrives in the temp table.
Note: you can't set more than one column as CURRENT_TIMESTAMP. Here is a way you can set CURRENT_TIMESTAMP on one column.
SQL:
CREATE TABLE IF NOT EXISTS `hive` (
`id` int(11) NOT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`age` int(11) DEFAULT '0',
`datecreated` timestamp NULL DEFAULT CURRENT_TIMESTAMP
);
Hey, I found a way to do it using a shell script.
Here's how:
echo "$(date +"%Y-%m-%d-%T") $(wc -l /home/hive/landing/$line ) $dir " >> /home/hive/recon/fileinfo.txt
Here I get the date without spaces. In the end I upload the text file to my Hive table.
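For the final upload step, a minimal sketch of the LOAD DATA statement (the table name file_recon and its layout are assumptions):
-- Load the generated text file into a Hive table
LOAD DATA LOCAL INPATH '/home/hive/recon/fileinfo.txt' INTO TABLE file_recon;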

running sqlldr control files in UNIX

I have a question. I have this control file that works fine when I run it from a Windows client. However, when I run it directly in Linux, it shows "load complete", but when I look at my Oracle data there is NO DATA, and there are even bad records. Below is my control file that works well in Windows but fails in Linux.
NOTE: The control file works if I remove the string- or date-converted fields.
Control file
load data
infile 'HOME/INPUT/FILEA.dat'
badfile 'HOME/BAD/FILEA.bad'
discardfile 'HOME/DIS/FILEA.dsc'
truncate
into table TEST
fields terminated by '|'
trailing nullcols
( ABCcode CHAR(11),
ABCID CHAR(6),
ABC_SEQNO "to_number(:ABC_SEQNO,'999999')",
PSNO "to_number(:PSNO,'99999999999.999')",
ABDF CHAR(1),
ABCFI CHAR(1),
ABC_DATE NULLIF ABC_DATE="00000000" "to_date(:ABC_DATE, 'YYYYMMDD')",
XZY_date NULLIF XZY_date="00000000" "to_date(:XZY_date, 'YYYYMMDD')",
DESC CHAR(1))
Any help or ideas to get this code to run in Linux will be appreciated
Notes about the logfile: The logfile had the following
ORA-00604: error occurred at recursive SQL level 1
ORA-12899: value too large for column "ABCschema"."TEST"."ABC_DATE" (actual: 9, maximum: 8)
Also, the date conversion had the following
NULL if ABC_DATE = 0X3030303030303030(character '00000000')
SQL string for column : "to_date(:ABC_DATE, 'YYYYMMDD')"
Your TEST table has the ABC_DATE column defined as VARCHAR2(8), not as a DATE.
If I create a table as:
create table test (
ABCcode VARCHAR2(11),
ABCID VARCHAR2(6),
ABC_SEQNO NUMBER,
PSNO NUMBER,
ABDF VARCHAR2(1),
ABCFI VARCHAR2(1),
ABC_DATE DATE,
XZY_date DATE,
"DESC" VARCHAR2(1)
);
and have a data file with:
A|B|1|2.3|C|D|20140217|20140218|E
then it loads fine. If I recreate the table as:
create table test (
ABCcode VARCHAR2(11),
ABCID VARCHAR2(6),
ABC_SEQNO NUMBER,
PSNO NUMBER,
ABDF VARCHAR2(1),
ABCFI VARCHAR2(1),
ABC_DATE VARCHAR2(8),
XZY_date DATE,
"DESC" VARCHAR2(1)
);
... then the same control file and data file now give me:
Record 1: Rejected - Error on table TEST, column ABC_DATE.
ORA-12899: value too large for column "<schema>"."TEST"."ABC_DATE" (actual: 9, maximum: 8)
You are converting the string value to a date, but then you're doing an implicit conversion back to a string when it actually inserts the data into the VARCHAR2 column. When it does that it's using your NLS_DATE_FORMAT settings, and the error I got was from having that set to DD-MON-RR.
You have three options really. Either modify your table to have actual DATE columns; or change the control file so it just inserts the plain text value and doesn't do the date conversion at all; or massage your environment so the conversion back to a string gets the format you want the string to be.
Only the first one is really sensible - if it's a date value, always store it as a DATE, never as a string.
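A minimal sketch of the first option, reusing the add/copy/drop/rename pattern from the Oracle answer at the top, and assuming any existing ABC_DATE values are YYYYMMDD strings:
-- Rebuild ABC_DATE as a real DATE column
ALTER TABLE test ADD (abc_date_tmp DATE);
UPDATE test SET abc_date_tmp = TO_DATE(abc_date, 'YYYYMMDD');
ALTER TABLE test DROP COLUMN abc_date;
ALTER TABLE test RENAME COLUMN abc_date_tmp TO abc_date;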
The 0X30... thing isn't a problem, that's just showing the internal representation it's using.

date NOT NULL, error ORA-01758

Why am I getting this error?
In the table DDL I only have 2 columns: id (number) and name (varchar).
ALTER TABLE mytable ADD SUSPEND date NOT NULL
ORA-01758: table must be empty to add mandatory (NOT NULL) column
ORA-06512: at line 7
And is your table empty? I think not.
There's probably a way around this involving adding the column as nullable, then populating every row with a non-NULL value, then altering the column to be NOT NULL.
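A minimal sketch of that workaround, using the names from the question (the SYSDATE backfill value is only a placeholder assumption):
-- 1. Add the column as nullable
ALTER TABLE mytable ADD (suspend DATE);
-- 2. Backfill every existing row with a non-NULL value
UPDATE mytable SET suspend = SYSDATE;
-- 3. Now the NOT NULL constraint can be enforced
ALTER TABLE mytable MODIFY (suspend DATE NOT NULL);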
Alternatively, since the problem is that these current rows will be given NULL as a default value, and the column is not allowed to be NULL, you can also get around it with a default value. From the Oracle docs:
However, a column with a NOT NULL constraint can be added to an existing table if you give a default value; otherwise, an exception is thrown when the ALTER TABLE statement is executed.
Here is a fiddle showing how you could do it.
Would a date in the future be acceptable as a temporary default? If so, this would work:
ALTER TABLE MYTABLE ADD (SUSPEND_DATE DATE DEFAULT(TO_DATE('21000101', 'YYYYMMDD'))
CONSTRAINT SUSPEND_DATE_NOT_NULL NOT NULL);
If the table already contains records, then it won't allow you to add a NOT NULL column.
If you still need to, either set a default value for the column, or truncate the table and then try.
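A minimal sketch of the truncate route (only sensible if the existing rows are disposable; column name taken from the question):
-- Empty the table, then the mandatory column can be added directly
TRUNCATE TABLE mytable;
ALTER TABLE mytable ADD (suspend DATE NOT NULL);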
