In Oracle, I am concatenating two NUMBER columns and then taking the MAX of the result, and I have a question about the output.
That is, with columns A and B both of NUMBER data type:
Select MAX(A||B) from table
Table data
A B
20150501 95906
20150501 161938
When I run the query Select MAX(A||B) from table, I get:
O/P - 2015050195906
Shouldn't 20150501161938 be the output?
When I format column B with TO_CHAR(B,'FM000000') and execute, I get the expected output:
Select MAX(A || TO_CHAR(B,'FM000000')) FROM table
O/P - 20150501161938
Why is 2015050195906 considered the MAX in the first case?
Presumably, column A is a date and column B is a time.
If that's true, treat them as such:
select max(to_date(to_char(a)||to_char(b,'FM000000'),'YYYYMMDDHH24MISS')) from your_table;
That will add leading zeros to the time component (if necessary), then concatenate the columns into a string, which is passed to the to_date function; the max function will then operate on a DATE datatype, which is presumably what you want.
PS: The real solution here is to fix your data model. Don't store dates and times as numbers. In addition to sorting issues like this, the optimizer can get confused. (If you store a date as a number, how can the optimizer know that 20141231 will immediately be followed by 20150101?)
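A minimal sketch of that fix, assuming a new table and column (event_log and event_ts are placeholder names, not part of your schema):
create table event_log (
  event_ts date not null
);
-- backfill from the existing NUMBER columns (a = date, b = time, as in the question)
insert into event_log (event_ts)
select to_date(to_char(a) || to_char(b, 'FM000000'), 'YYYYMMDDHH24MISS')
from your_table;
-- MAX now compares real DATE values, so there are no string-sorting surprises
select max(event_ts) from event_log;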
You should convert to a number:
select MAX(TO_NUMBER(A||B)) from table
Concatenation produces a character/text result. As such, it sorts alphabetically, so the text 95906 sorts after 161938 because '9' > '1'.
In the second case, you are specifying a format that pads the number to six digits. That works well, because 095906 now sorts before 161938.
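A quick side-by-side sketch of the three behaviours, using the values from the question (your_table is a placeholder name):
select max(a || b)                      as max_as_text,    -- character comparison: '9' > '1'
       max(to_number(a || b))           as max_as_number,  -- numeric comparison
       max(a || to_char(b, 'FM000000')) as max_padded_text -- zero-padded text sorts as intended
from your_table;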
I have an existing table called temptable, whose column largenumber is a NUMBER field with no precision set:
largenumber NUMBER;
Query:
select largenumber from temptable;
It returns:
-51524845525550100000000000000000000
But if I do
column largenumber format 999999999999999999999999999999999999999
And then
select largenumber from temptable;
It returns:
-51524845525550:100000000000000000000
Why is there a colon?
To test, I took the number, removed the colon, and inserted it into another table, temptable2. With the same column largenumber format, the select returns the number without the colon:
select largenumber from temptable2;
It returns:
-51524845525550100000000000000000000
So the colon is not present here.
So what could possibly be in the original number field to cause that colon?
In the original row, if I do a select and try any TO_CHAR, REPLACE, CAST, or concatenation to text, I get a number conversion error.
For example, trying to generate a CSV:
select '"' || largenumber || '",'
FROM temptable;
would result in ORA-01722 ("invalid number"), the error that occurs when an attempt is made to convert a character string into a number and the string cannot be converted into a valid number.
In a comment (in response to a question from me), you shared that dump(largenumber) on the offending value returns
Typ=2 Len=8: 45,50,56,53,52,48,46,48
From the outset, that means that the data stored on disk is invalid (it is not a valid representation of a value of number data type). Typ=2 is correct, that is for data type number. The length (8 bytes) is correct (we can all count to eight to see that).
What is wrong is the bytes themselves. And, we only need to inspect the first and the last byte to see that.
The first byte is 45. It encodes the sign and the exponent of your number. The first bit (1 or 0) represents the sign: 1 for positive, 0 for negative. 45 is less than 128, so the first bit in the first byte is 0; so the number is negative. (So far this matches what you know about the intended value.)
But, for negative numbers, the last byte is always the magic value 102. Always. In another comment under your original question, Connor McDonald asks about your platform - but this is platform-independent, it is how Oracle encodes numbers for permanent storage on any platform. So, we already know that the dump value you got tells us the value is invalid.
In fact, Connor, in the same comment, gave the correct representation of that number (according to Oracle's scheme for internal representation of numbers). Indeed, just the last byte is wrong: your dump shows 48, but it should be 102.
How can you fix this? If it's a one-off, just use an update statement to replace the value with the correct one and move on. If your table has a primary key, let's call it id, then find the id for this row, and then
update {your_table} set largenumber = -50...... where id = {that_id};
Question is, how many such corrupt values might you have in your table? If it's just one, you can shrug it off; but if it's many (or even "a handful") you may want to figure out how they got there in the first place.
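If you want to hunt for other affected rows, here is a hedged sketch based on the encoding rule above; treat it as a starting point rather than a guaranteed corruption detector (negative values using the full 20 mantissa bytes legitimately lack the terminator):
-- negative NUMBER values normally end with the terminator byte 102, so a negative
-- value whose DUMP output does not end in ",102" is a candidate for corruption
select rowid, dump(largenumber) as raw_bytes
from   temptable
where  largenumber < 0
and    dump(largenumber) not like '%,102';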
In most cases, the database will reject invalid values; you can't simply insert 'abc' in a number column, for example. But there are ways to get bad data in; even intentionally, and in a repeatable way. So, you would have to investigate how the bad values were inserted (what process was used for insertion).
For a trivial way to insert bad data in a number column, in a repeatable manner, you can see this thread on the Oracle developers forum: https://community.oracle.com/tech/developers/discussion/3903746/detecting-invalid-values-in-the-db
Please be advised that I had just started learning Oracle at that time (I was less than two months in), so I may have said some stupid things in that thread; but the method to insert bad data is described there in full detail, and it was tested. That shows just one possible (and plausible!) way to insert invalid stuff in a table; how it happened in your specific case, you will have to investigate yourself.
I have a situation handling negative numeric data. For some of the numeric values in the files (stored in HDFS), the negative sign is on the right-hand side (like 12345-), whereas negative numbers are normally written with the minus sign on the left (like -12345).
I cannot change the data, because the data is correct and the source system (SAP) is able to read it as a negative number.
In Hive I have to run some arithmetic manipulation, say a SUM over these values. When the data contains a value like '12345-', Hive cannot recognize it as a number (the column type is DECIMAL(10,2)) and the result shows NULL. I need your advice on how to handle this situation. Thanks in advance.
Check the last character; if it is '-', then use substr and concat to construct the correct value:
select case when substr('12345-',-1,1)='-' then cast(concat('-',substr('12345-',1,length('12345-')-1)) as int) else cast('12345-' as int) end as column_name;
OK
-12345
Replace '12345-' with your column_name
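A hedged sketch of the same idea applied inside SUM; it assumes the raw values are read as a string column (my_table and amount_str are placeholder names, and DECIMAL(10,2) comes from the question):
select sum(
         case when substr(amount_str, -1, 1) = '-'
              then cast(concat('-', substr(amount_str, 1, length(amount_str) - 1)) as decimal(10,2))
              else cast(amount_str as decimal(10,2))
         end
       ) as total_amount
from my_table;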
Alternatively, use this expression (note that it strips the minus sign and always multiplies by -1, so it assumes every value carries the trailing minus):
CAST(CAST(-1 AS DECIMAL(1,0)) AS DECIMAL(10,2)) * CAST(regexp_replace(regexp_replace(TRIM(column_name),'\\-',''),'-','') as decimal(10,2))
How can we use a variable as a column name?
In my table, the days (MONDAY, TUESDAY, ...) are column names.
I want to pick the day dynamically and use it as the column name in my query.
My query :
SELECT EMP FROM SCHEDULE WHERE "DAY" (chosen dynamically) = 1;
You simply can't use variables to change the actual text of a query. Bind variables can be used in place of literal values (dates, strings, times, numbers), but they can't change the text of the command itself.
The technical reason is that (oversimplifying things) Oracle FIRST parses the text and establishes an execution plan, and only after that considers the values of the variables. More or less, you can think of it (this is just an analogy, of course; it is not really the same thing!) as Oracle "compiling" the query the way a C++ compiler compiles the source code of a function: you cannot pass a C++ function a variable that modifies the text of the function itself.
What you have to do is rethink your approach, taking into consideration what I just said:
SELECT EMP FROM SCHEDULE
WHERE
(case :DAY_I_WANT
WHEN 'MONDAY' then -- 'MONDAY' is the string value of the variable :DAY_I_WANT
monday -- monday, here is the column whose value I want CASE to return
WHEN 'TUESDAY' then tuesday
...
...
WHEN 'SUNDAY' then sunday
end) = 1
Keep in mind that this solution will not take advantage of any index on the MONDAY..SUNDAY columns. The best approach would be to create a different data structure with a separate row for each day and a proper day-of-week column. If you do that, you will be able to write:
select EMP from schedule
where schedule.DAY = :DAY_I_WANT
and it will allow you to create an index on the DAY column, speeding up searches.
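A hedged sketch of that normalized structure (table and column names here are illustrative, not from your schema):
create table schedule_day (
  emp  varchar2(30) not null,
  day  varchar2(9)  not null,  -- 'MONDAY' .. 'SUNDAY'
  flag number(1)    not null   -- 1 = scheduled, 0 = not scheduled
);
create index schedule_day_ix on schedule_day (day, flag);
-- the dynamic lookup becomes a plain, indexable predicate
select emp from schedule_day where day = :DAY_I_WANT and flag = 1;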
Having a separate column for each day is just asking for trouble.
Can anyone tell me how to compare a CLOB column in Oracle against multiple values?
For a single value we compare like this:
dbms_lob.compare(attr_value,'A')=0
Similarly, I want to know whether attr_value is in ('A','B','C','D'). I tried this:
dbms_lob.compare(attr_value,'A')=0 or dbms_lob.compare(attr_value,'B')=0 or ...
This is not giving me the proper result. Is there any other way?
OR should work fine. Also, you may try this:
SELECT * FROM your_tab WHERE CAST(attr_value AS VARCHAR2(2)) IN ('A', 'B', 'C', 'D');
but I'm not sure about the performance.
Since it seems you don't really want to compare massive CLOBs with a bunch of other massive CLOBs, the fastest way would be to compare just a substring of the CLOB:
WHERE DBMS_LOB.SUBSTR( attr_value, 4000, 1 ) IN ('A','B','C')
Here, 4000 can be replaced by the maximum length of your comparison values.
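For completeness, a minimal sketch of that predicate as a full query (your_tab is a placeholder table name):
SELECT *
FROM   your_tab
WHERE  DBMS_LOB.SUBSTR( attr_value, 4000, 1 ) IN ('A', 'B', 'C', 'D');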
If you really do want to compare massive CLOBs, I don't think a plain select is the right approach; you should probably rework your application logic.
DBMS_LOB.COMPARE does an exact comparison between two LOB objects. The documentation says:
COMPARE returns zero if the data exactly matches over the range specified by the offset and amount parameters. Otherwise, a nonzero INTEGER is returned.
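To illustrate those parameters, here is a sketch that compares only the first character of the CLOB against 'A' (amount = 1, both offsets = 1; your_tab is a placeholder, and unlike the exact-match checks above this also matches CLOBs that merely start with 'A'):
SELECT *
FROM   your_tab
WHERE  DBMS_LOB.COMPARE(attr_value, TO_CLOB('A'), 1, 1, 1) = 0;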
On Oracle 11g, you could use the REGEXP_INSTR function (note that this matches the pattern anywhere inside the CLOB, not an exact value):
SELECT * FROM your_tab WHERE REGEXP_INSTR(attr_value, 'A|B|C|D') > 0;
I hope it helps.
I want to convert a 9-digit number to 10 digits by adding a leading 0 to it.
For example, in table ABC, say there is a column named B which takes a number that is at most 10 digits long.
Sometimes I will get only a 9-digit number.
In that case, when a 9-digit number is encountered, I need to fire a trigger that makes it 10 digits before it is inserted into the table.
For that you need to define the column with a character datatype so that it can hold the leading zeros.
You don't need to write any trigger for this simple operation; you can use lpad for this purpose, e.g.:
Insert into table1(number_col) values ( lpad(999999999, 10, '0'));
select * from table1;
| number_col |
|-----------------|
| 0999999999 |
To use this in a trigger, create one as follows (not tested):
create or replace trigger trg_table1
before insert or update of number_col on table1
for each row
begin
:new.number_col := lpad( :new.number_col, 10, '0' );
end;
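A quick sanity check of the trigger, assuming number_col was created with a character datatype such as VARCHAR2(10) as suggested above:
insert into table1 (number_col) values (123456789);
select number_col from table1;
-- expected: 0123456789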
You don't really need to make this a trigger. Adding a 0 to the front of the number is really only for humans; the computer doesn't care, and that information can't be stored in the database unless you convert the column to a string format.
What you're looking for is one of three things. Either change the way your forms display the information, adding padding if the number is less than 1,000,000,000, so that only what the user sees changes (most recommended; see the sketch at the end of this answer).
Or, use the lpad function to convert the number to a string with 0 padding if necessary:
lpad(input,10,'0')
Note that this will require conversion back to a number before inserting into the DB if it is possible for the user to edit this value (second most recommended).
Lastly, you can always store the value in a string format and use lpad as above on insert.
I wouldn't recommend this, as strings take up more space than numbers and the DB won't search them as fast. Also, why store a number as a string purely for the user's sake when you can change the way your data looks to the user programmatically?
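A minimal sketch of the first (display-only) option, using a TO_CHAR format mask at query time (table ABC and column B are from the question; no schema change is needed):
-- '0000000000' forces ten digits; FM suppresses the leading sign blank
select to_char(b, 'FM0000000000') as b_display
from   abc;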