Why does a column defined with the FLOAT datatype in Oracle get automatically rounded off?
Column is created as FLOAT(5).
"TRK_TARE" FLOAT(5)
Example (value entered -> final value retained):
123 -> 120
1237 -> 1200
12347 -> 12000
123457 -> 120000
1234567 -> 1200000
12345678 -> 12000000
What is the difference between FLOAT and NUMBER?
Note: no trigger is present that modifies this column.
This is because the precision of a FLOAT is measured in binary digits. From the docs:
A subtype of the NUMBER data type having precision p. A FLOAT value is represented internally as NUMBER. The precision p can range from 1 to 126 binary digits. A FLOAT value requires from 1 to 22 bytes.
If I cast 123 as a FLOAT(5) I get the same answer:
SQL> select cast(123 as float(5)) from dual;
CAST(123ASFLOAT(5))
-------------------
120
However, when casting as a FLOAT(7) the result is 123
SQL> select cast(123 as float(7)) from dual;
CAST(123ASFLOAT(7))
-------------------
123
In general, unless there is a reason for specifying a precision, don't.
You got the type declaration wrong. I recommend reading this chapter of the Oracle documentation (under the "Float Datatype" part).
In this example, the FLOAT value returned cannot have more than 5 significant binary digits. The largest number that can be represented in 5 binary digits is 31, and in your examples the significant digits exceed 31.
Therefore, the FLOAT value must be rounded so that its significant digits do not require more than 5 binary digits. For example, 123 is rounded to 120, which has only two significant decimal digits (12), requiring only 4 binary digits.
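To make that rule concrete, here is a rough Python sketch (my own approximation of the behaviour described above, not Oracle's actual algorithm): it drops trailing significant decimal digits until the remaining mantissa fits in p binary digits.
def oracle_float_round(value, p):
    # Approximation: keep only as many significant decimal digits as fit,
    # read as an integer mantissa, into p binary digits (i.e. <= 2**p - 1).
    limit = 2 ** p - 1
    digits = len(str(abs(int(value))))
    for keep in range(digits, 0, -1):
        scale = 10 ** (digits - keep)
        mantissa = round(value / scale)
        if abs(mantissa) <= limit:
            return mantissa * scale
    return 0

print(oracle_float_round(123, 5))      # 120
print(oracle_float_round(1234567, 5))  # 1200000
print(oracle_float_round(123, 7))      # 123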
Oracle Database uses the Oracle FLOAT datatype internally when
converting ANSI FLOAT data. Oracle FLOAT is available for you to use,
but Oracle recommends that you use
the BINARY_FLOAT and BINARY_DOUBLE datatypes instead, as they are more
robust. Refer to "Floating-Point Numbers" for more information.
The xlib docs present two string formats for specifying RGB colours:
RGB Device String Specification
An RGB Device specification is
identified by the prefix “rgb:” and conforms to the following syntax:
rgb:<red>/<green>/<blue>
<red>, <green>, <blue> := h | hh | hhh | hhhh
h := single hexadecimal digits (case insignificant)
Note that h indicates the value scaled in 4 bits, hh the value scaled in 8 bits,
hhh the value scaled in 12 bits, and hhhh the value scaled in 16 bits,
respectively.
Typical examples are the strings “rgb:ea/75/52” and “rgb:ccc/320/320”,
but mixed numbers of hexadecimal digit strings (“rgb:ff/a5/0” and
“rgb:ccc/32/0”) are also allowed.
For backward compatibility, an older syntax for RGB Device is
supported, but its continued use is not encouraged. The syntax is an
initial sharp sign character followed by a numeric specification, in
one of the following formats:
#RGB (4 bits each)
#RRGGBB (8 bits each)
#RRRGGGBBB (12 bits each)
#RRRRGGGGBBBB (16 bits each)
The R, G, and B represent single hexadecimal digits. When fewer than 16 bits each are specified, they
represent the most significant bits of the value (unlike the “rgb:”
syntax, in which values are scaled). For example, the string “#3a7” is
the same as “#3000a0007000”.
https://www.x.org/releases/X11R7.7/doc/libX11/libX11/libX11.html#RGB_Device_String_Specification
So there are strings like either rgb:c7f1/c7f1/c7f1 or #c7f1c7f1c7f1
The part that confuses me is:
When fewer than 16 bits each are specified, they represent the most significant bits of the value (unlike the “rgb:” syntax, in which values are scaled)
Let's say we have 4-bit colour strings, i.e. one hexadecimal digit per component.
C is 12 in decimal, out of 16.
If we want to scale it up to 8 bits, I guess we have 12 / 16 * 256 = 192.
192 in hexadecimal is C0
So this seems to give exactly the same result as "the most significant bits of the value" i.e. padding with 0 to the right-hand side.
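Put concretely, the comparison I am drawing is this (a quick Python sketch of my own reasoning, which may be exactly where the misunderstanding lies):
c = 0xC                       # the single hex digit C, i.e. 12

scaled = int(c / 16 * 256)    # my reading of "scaled": 12 / 16 * 256
padded = c << 4               # "most significant bits": pad with 0 on the right

print(hex(scaled), hex(padded))   # 0xc0 0xc0 - apparently the same result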
Since the docs make a distinction between the two formats, I wonder whether I have misunderstood what they mean by "scaled in bits" for the rgb: format?
I am working on a software problem and I found myself needing to convert a 2-letter string to a 3-digit number. We're talking about English alphabet only (26 letters).
So essentially I need to convert something like AA, AR, ZF, ZZ etc. to a number in the range 0-999.
We have 676 combinations of letters and 1000 numbers, so the range is covered.
Now, I could just write up a map manually, saying that AA = 1, AB = 2 etc., but I was wondering if maybe there is a better, more "mathematical" or "logical" solution to this.
The order of numbers is of course not relevant, as long as the conversion from letters to numbers is unique and always yields the same results.
The conversion should work both ways (from letters to numbers and from numbers to letters).
Does anyone have an idea?
Thanks a lot
Treat A-Z as 1-26 in base 27, with 0 reserved for blanks.
E.g. 'CD' -> 3 * 27 + 4 = 85
85 -> (85 / 27, 85 % 27) = (3, 4) -> C, D
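A minimal Python sketch of this scheme (the helper names are mine):
def letters_to_num(s):
    # A-Z -> 1-26, two letters read as a base-27 number
    a, b = (ord(c) - ord('A') + 1 for c in s)
    return a * 27 + b

def num_to_letters(n):
    a, b = divmod(n, 27)
    return chr(a + ord('A') - 1) + chr(b + ord('A') - 1)

print(letters_to_num('CD'))   # 85
print(num_to_letters(85))     # CD
print(letters_to_num('ZZ'))   # 728, still inside 0-999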
If you don't have to use consecutive numbers, you can view a two-letter string as a base-36 number. So you can just use Python's int function to convert it into an integer.
int('AA', 36) # 370
int('AB', 36) # 371
#...
int('ZY', 36) # 1294
int('ZZ', 36) # 1295
As for how to convert the number back to a string, you can refer to the method on How to convert an integer to a string in any base?
@furry12 Because the difference between the first number and the last one is 1295 - 370 = 925 < 999. That is quite lucky, so you can subtract something like 300 from every number and the results will be in the range 0-999:
def str2num(s):
    return int(s, 36) - 300

print(str2num('AA'))  # 70
print(str2num('ZZ'))  # 995
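For completeness, a sketch of the reverse direction along the lines of that linked approach (num2str is my own helper name):
DIGITS = '0123456789abcdefghijklmnopqrstuvwxyz'

def num2str(n):
    # undo the -300 offset, then rebuild the two base-36 digits
    hi, lo = divmod(n + 300, 36)
    return (DIGITS[hi] + DIGITS[lo]).upper()

print(num2str(70))    # AA
print(num2str(995))   # ZZ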
If I declare a column in Oracle as a NUMBER, what is the maximum number that can be stored in it?
Based on documentation:
Positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 with up
to 38 significant digits
10^125 is a very big number which has more than 38 digits. Will it not be stored? If a number with more than 38 significant digits is stored, will it fail, or will it be saved but lose precision when queried?
Thanks
From the Oracle docs:
Positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 with up
to 38 significant digits. Negative numbers from -1 x 10^-130 to
9.99...99 x 10^125 with up to 38 significant digits.
Test
create table tbl(clm number);
insert into tbl select power(10, -130) from dual;
insert into tbl select 9.9999*power(10, 125) from dual;
insert into tbl select 0.12345678912345678912345678912345678912123456 from dual;
insert into tbl select -1*power(10, -130) from dual;
select clm from tbl;
select to_char(clm) from tbl;
Output
1.000000000000000000000000000000000E-130
9.999900000000000000000000000000000E+125
.123456789123456789123456789123456789121
-1.00000000000000000000000000000000E-130
Numbers (datatype NUMBER) are stored using scientific notation,
i.e. 1000000 is stored as 1 * 10^6: you store only 1 (the mantissa) and 6 (the exponent).
select VSIZE(1000000), VSIZE(1000001) from dual;
VSIZE(1000000) VSIZE(1000001)
-------------- --------------
2 5
For the first number you need only 1 byte for the mantissa; for the second, 4 bytes (2 digits per byte).
So with NUMBER you will not get an exception - you will just start to lose precision.
select power(2,136) from dual;
87112285931760246646623899502532662132700
This number is not exact and is "filled" with zeroes (exponent). This may or may not be harmful - consider e.g. the MOD function:
select mod(power(2,136),2) from dual;
-100
If you want to control the precision exactly, use e.g. the datatype NUMBER(38,0):
select cast(power(2,136) as NUMBER(38,0)) from dual;
ORA-01438: value larger than specified precision allowed for this column
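As a rough analogy outside Oracle (this uses Python's decimal module, not Oracle's internal format, which as the output above shows can keep a couple of extra digits), rounding to 38 significant digits looks like this:
from decimal import Decimal, getcontext

getcontext().prec = 38        # NUMBER guarantees 38 significant decimal digits
print(Decimal(2) ** 136)      # 8.7112285931760246646623899502532662133E+40
print(2 ** 136)               # 87112285931760246646623899502532662132736 (exact)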
I want to validate numbers by regular expression.
My valid numbers are:
123456789012345.123
or
123.9
or
0.686
Before the decimal point there must be 1 to at most 15 digits, and after it at most 3 digits; the negative sign is optional.
invalid numbers are:
0.0
0.00
0.000
000
097654
05978.7
.657665
5857.
I found this regex, but I can't set the length limitation:
^-?(([1-9]\d*)|0)(\.0*[1-9](0*[0-9])*)?$
In place of * use {a,b}, where a is the minimum number of repetitions of the preceding element and b the maximum. Omit a or b for no minimum/maximum.
I found the solution myself:
^-?(([1-9])([0-9]{1,14})?|0)(\.[0-9]?[0-9]?[1-9])?$
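A quick Python sketch to sanity-check the pattern against the examples above (I added -123.9 as a negative example, since the sign is optional):
import re

pattern = re.compile(r'^-?(([1-9])([0-9]{1,14})?|0)(\.[0-9]?[0-9]?[1-9])?$')

valid = ['123456789012345.123', '123.9', '0.686', '-123.9']
invalid = ['0.0', '0.00', '0.000', '000', '097654', '05978.7', '.657665', '5857.']

for s in valid + invalid:
    print(s, bool(pattern.match(s)))   # True for the valid ones, False otherwise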
My book (Artificial Intelligence: A Modern Approach) says that genetic algorithms begin with a set of k randomly generated states, called the population. Each state is represented as a string over a finite alphabet - most commonly, a string of 0s and 1s. For example, an 8-queens state must specify the positions of 8 queens, each in a column of 8 squares, and so requires 8 * log2(8) = 24 bits. Alternatively, the state could be represented as 8 digits, each in the range from 1 to 8.
[ http://en.wikipedia.org/wiki/Eight_queens_puzzle ]
I don't understand the expression 8 * log2(8) = 24 bits. Why log2(8)? And what are these 24 bits supposed to be for?
If we take the first example on the Wikipedia page, the solution can be encoded as [2,4,6,8,3,1,7,5]: the first digit gives the row number for the queen in column A, the second for the queen in column B, and so on. Now, instead of starting the row numbering at 1, we will start at 0. The solution is then encoded as [1,3,5,7,2,0,6,4]. Any position can be encoded this way.
We have only digits between 0 and 7; if we write them in binary, 3 bits (= log2(8)) are enough:
000 -> 0
001 -> 1
...
110 -> 6
111 -> 7
A position can be encoded using 8 groups of 3 bits, e.g. from [1,3,5,7,2,0,6,4] we get [001,011,101,111,010,000,110,100], or more briefly 001011101111010000110100: 24 bits.
The other way around, the bitstring 000010001011100101111110 splits as 000.010.001.011.100.101.111.110, i.e. [0,2,1,3,4,5,7,6], which gives [1,3,2,4,5,6,8,7]: the queen in column A is on row 1, the queen in column B is on row 3, etc.
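A small Python sketch of this encoding and decoding (the helper names are mine):
def encode(queens):
    # queens: row (1-8) of the queen in each column, e.g. [2,4,6,8,3,1,7,5]
    return ''.join(format(q - 1, '03b') for q in queens)   # 8 x 3 bits = 24 bits

def decode(bits):
    return [int(bits[i:i + 3], 2) + 1 for i in range(0, len(bits), 3)]

print(encode([2, 4, 6, 8, 3, 1, 7, 5]))     # 001011101111010000110100
print(decode('000010001011100101111110'))   # [1, 3, 2, 4, 5, 6, 8, 7]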
The number of bits needed to store the possible squares (8 possibilities, 0-7) is log2(8) = 3. Note that 111 in binary is 7 in decimal. You have to specify the square for 8 columns, so you need 3 bits 8 times, i.e. 24 bits.