What is PLS_INTEGER? - oracle

Can someone explain what PLS_INTEGER is in Oracle PL/SQL, with an example?
Thanks
declare
  idx pls_integer := 1;
begin
  ...

PLS_INTEGER is a data type just like NUMBER, DATE, or VARCHAR2. It is more efficient than the NUMBER data type because it consumes a bit less space and is faster for computations, since it uses hardware arithmetic rather than library arithmetic.
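A quick way to see the difference is to time the same loop with both types. A minimal sketch (the loop count and the timings you get will vary by system; dbms_utility.get_time returns hundredths of a second):

declare
  n  number      := 0;
  p  pls_integer := 0;
  t0 pls_integer;
begin
  t0 := dbms_utility.get_time;
  for i in 1 .. 10000000 loop
    n := n + 1;  -- NUMBER: library (software) arithmetic
  end loop;
  dbms_output.put_line('NUMBER:      ' || (dbms_utility.get_time - t0) || ' cs');
  t0 := dbms_utility.get_time;
  for i in 1 .. 10000000 loop
    p := p + 1;  -- PLS_INTEGER: native hardware arithmetic
  end loop;
  dbms_output.put_line('PLS_INTEGER: ' || (dbms_utility.get_time - t0) || ' cs');
end;
/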

Related

Storage, computation, and unpacking multiple Boolean values in Oracle and PowerBI

In my work, I often need to translate the data I find in the database into Pass/Fail or True/False values. For example, we may want to know if a patient's blood pressure was taken during a particular visit, with the only possible values being "True" or "False". In fact, there are dozens of such fields. Additionally, we actually don't need to do all of these things at every visit. There is another set of matching Boolean values that indicate if something, such as a blood pressure, was required.
I have researched this problem before and came up with little. Oracle does not support a Boolean datatype. However, I recently learned that Oracle DOES support bit-wise operators (logical AND, OR, XOR, NOT, etc.) and learned that the best way to store the data and use these operators is to:
Use the RAW datatype to store a large count of bits. Each bit can correspond to a particular real-world concept such as "Blood Pressure Done" or "Height Measurement Done".
Use bit-wise operators in the Oracle package UTL_RAW to do calculations on multiple RAW values to derive results such as "what was required AND done" (see the sketch below).
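A minimal sketch of that second step, assuming two 8-byte bitstrings (UTL_RAW.BIT_AND returns a RAW whose bits are the logical AND of its inputs):

select utl_raw.bit_and(hextoraw('00000000000000FF'),  -- "required" bits
                       hextoraw('0000000000000099'))  -- "done" bits
       as required_and_done                           -- => 0000000000000099
from dual;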
I have not yet determined that I want to go all the way down this rabbit-hole. But if I do, I have one challenge, as yet unsolved: how do I unpack the RAW values into individual truth values, elegantly? Oracle will display the values only in hexadecimal, which is not convenient when we want to see the actual bits. It would be nice if I could carry out this operation in SQL for testing purposes. It is also necessary that I do this in Power BI so the results are formatted for the customer's needs. I could write a function, if one does not exist yet.
In resolving this challenge, I wish to not increase the size of the solution considerably. I am dealing with millions of rows and wish to have the space-savings from using the RAW datatype (storing each value as a single bit) but also have an output layer of unpacked bits into the required dozens of True-False columns for the customer's needs in seeing the details.
I feel like this problem has been present for me since I began working on these kinds of business problems over ten years ago. Certainly, I am not the only analyst who wonders: Has it been solved yet?
Edit 4/28:
I completed the Oracle-side of the work. I implemented a package with the following header file (I don't think I can show all the code in the body without permission from my employer, but feel free to ask for a peek).
With the Oracle-side of the project wrapped up, I have yet to figure out how to unpack these RAW values (called 'Binary' in Power BI) into their individual bits within Power BI. I need a visualization that will carry out a "bits to columns" operation on the fly or something like that.
Also, it would be nice to have an aggregation of a column of RAWs based upon a single bit position, so we can for example determine what percentage of the rows have a particular bit set to 1 without explicitly transforming all the data into columns, with one column per bit.
CREATE OR REPLACE PACKAGE bool AS
  byte_count   CONSTANT INTEGER := 8;  -- must be an integer <= 2000
  nibble_count CONSTANT INTEGER := byte_count * 2;

  -- Construct a bitstring with a single '1', or with all zeros (pass 0).
  FUNCTION raw_bitstring ( bit_to_set_to_one_in INTEGER ) RETURN RAW;

  -- Visualize the bits as zeros and ones in a VARCHAR2 field.
  FUNCTION binary_string ( input_in RAW ) RETURN VARCHAR2;

  -- Take an input RAW, set a single bit to zero or one,
  -- and return the altered RAW.
  FUNCTION set_bit ( raw_bitstring_in RAW,
                     bit_loc_in       INTEGER,
                     set_to_in        INTEGER ) RETURN RAW;

  -- Return the value (0 or 1) of the indicated bit as an INTEGER.
  FUNCTION bit_to_integer ( raw_bitstring_in RAW,
                            bit_loc_in       INTEGER ) RETURN INTEGER;

  -- Count all the 1's in a RAW and return the count as an INTEGER.
  FUNCTION bit_sum ( raw_bitstring_in IN RAW ) RETURN INTEGER;
END bool;
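For anyone following along, here is how the spec above might be exercised, including the bit-position aggregation mentioned above; the table visits and its RAW column flags are made up for illustration:

-- Unpack for eyeballing: all bits as a character string, plus one bit as a 0/1 column.
select bool.binary_string(v.flags)     as all_bits,
       bool.bit_to_integer(v.flags, 3) as bp_done
from   visits v;

-- Aggregate on a single bit position without unpacking everything:
-- the average of a 0/1 column is the fraction of rows with that bit set.
select 100 * avg(bool.bit_to_integer(flags, 3)) as pct_bit3_set
from   visits;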

PLS_INTEGER vs BINARY_INTEGER in associative array

What is the difference between INDEX BY PLS_INTEGER, BINARY_INTEGER, and VARCHAR2 in an Oracle PL/SQL associative array? I understand from a few sites that PLS_INTEGER and BINARY_INTEGER have had no differences since the Oracle 9i release. How do we decide which indexing option to choose over the others when using an associative array, to get the maximum benefit?
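For reference, a minimal associative array indexed by PLS_INTEGER (since BINARY_INTEGER and PLS_INTEGER have been identical from 9i on, the choice between those two is purely stylistic; VARCHAR2 indexing is for string keys):

declare
  type t_names is table of varchar2(30) index by pls_integer;
  names t_names;
  i     pls_integer;
begin
  names(1) := 'Alice';
  names(5) := 'Bob';  -- keys need not be dense
  i := names.first;
  while i is not null loop
    dbms_output.put_line(i || ' => ' || names(i));
    i := names.next(i);
  end loop;
end;
/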

What is the maximum number of parameters a PL/SQL procedure can have?

I need to develop a PL/SQL procedure that takes at least 26 parameters. What is the maximum number of parameters a PL/SQL procedure can have? And what are the consequences of having a large number of parameters? Will it cause any inconvenience?
Reading the fine manual: PL/SQL Program Limits
number of formal parameters in an explicit cursor, function, or procedure: 65536
You'll run into other problems before hitting this limit.
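As for the inconvenience: nothing breaks at 26 parameters, but long positional call lists are error-prone. One common way to keep them manageable, sketched here with made-up names, is to bundle related parameters into a record:

declare
  type visit_rec is record (
    patient_id number,
    bp_done    boolean,
    ht_done    boolean
    -- ... the other couple of dozen fields ...
  );
  v visit_rec;

  procedure process_visit ( v_in visit_rec ) is
  begin
    dbms_output.put_line('patient ' || v_in.patient_id);
  end;
begin
  v.patient_id := 42;
  v.bp_done    := true;
  process_visit(v);
end;
/

Named notation (process_visit(v_in => v)) also helps readability when a long list is unavoidable.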

Can dbms_utility.get_time roll over?

I'm having problems with a mammoth legacy PL/SQL procedure which has the following logic:
l_elapsed := dbms_utility.get_time - l_timestamp;
where l_elapsed and l_timestamp are of type PLS_INTEGER and l_timestamp holds the result of a previous call to get_time.
This line suddenly started failing during a batch run with an ORA-01426: numeric overflow.
The documentation on get_time is a bit vague, possibly deliberately so, but it strongly suggests that the return value has no absolute significance, and can be pretty much any numeric value. So I was suspicious to see it being assigned to a PLS_INTEGER, which can only support 32-bit integers. However, the interweb is replete with examples of people doing exactly this kind of thing.
The smoking gun is found when I invoke get_time manually: it is returning a value of -214512572, which is suspiciously close to the min value of a 32-bit signed integer. I'm wondering if, during the time elapsed between the first call to get_time and the next, Oracle's internal counter rolled over from its max value to its min value, resulting in an overflow when trying to subtract one from the other.
Is this a likely explanation? If so, is this an inherent flaw in the get_time function? I could just wait and see if the batch fails again tonight, but I'm keen to get an explanation for this behaviour before then.
Maybe late, but this may benefit someone searching on the same question.
The underlying implementation is a simple 32-bit binary counter, which is incremented every 100th of a second, starting from when the database was last started.
This binary counter is mapped onto the PL/SQL BINARY_INTEGER type - a signed 32-bit integer (there is no sign of it being changed to 64-bit on 64-bit machines).
So, presuming the clock starts at zero, it will hit the positive integer limit after about 248 days, then flip over to the negative limit and climb back up towards zero.
The good news is that provided both numbers are the same sign, you can do a simple subtraction to find duration - otherwise you can use the 32-bit remainder.
IF SIGN(:now) = SIGN(:then) THEN
  RETURN :now - :then;
ELSE
  RETURN MOD(:now - :then + POWER(2,32), POWER(2,32));
END IF;
Edit: This code will blow the int limit and fail if the gap between the times is too large (248 days), but you shouldn't be using GET_TIME to compare durations measured in days anyway (see below).
Lastly - there's the question of why you would ever use GET_TIME.
Historically, it was the only way to get a sub-second time, but since the introduction of SYSTIMESTAMP, the only reason you would ever use GET_TIME is because it's fast - it is a simple mapping of a 32-bit counter, with no real type conversion, and doesn't make any hit on the underlying OS clock functions (SYSTIMESTAMP seems to).
As it only measures relative time, its only use is for measuring the duration between two points. For any task that takes a significant amount of time (you know, over 1/1000th of a second or so) the cost of using a timestamp instead is insignificant.
The number of occasions where it is actually useful is minimal (the only one I've found is checking the age of data in a cache, where doing a clock hit for every access becomes significant).
From the 10g doc:
Numbers are returned in the range -2147483648 to 2147483647 depending on platform and machine, and your application must take the sign of the number into account in determining the interval. For instance, in the case of two negative numbers, application logic must allow that the first (earlier) number will be larger than the second (later) number which is closer to zero. By the same token, your application should also allow that the first (earlier) number be negative and the second (later) number be positive.
So while it is safe to assign the result of dbms_utility.get_time to a PLS_INTEGER it is theoretically possible (however unlikely) to have an overflow during the execution of your batch run. The difference between the two values would then be greater than 2^31.
If your job takes a lot of time (therefore increasing the chance that the overflow will happen), you may want to switch to a TIMESTAMP datatype.
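For example, a sketch of the TIMESTAMP approach: subtracting two timestamps yields an INTERVAL DAY TO SECOND, which cannot wrap around:

declare
  t0      timestamp := systimestamp;
  elapsed interval day to second;
begin
  -- ... the long-running batch work goes here ...
  elapsed := systimestamp - t0;
  dbms_output.put_line('elapsed: ' || to_char(elapsed));
end;
/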
Assigning a value below the PLS_INTEGER range to your variable does raise an ORA-01426:
SQL> l
  1  declare
  2    a pls_integer;
  3  begin
  4    a := -power(2,33);
  5* end;
SQL> /
declare
*
ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at line 4
However, you seem to suggest that -214512572 is close to -2^31, but it's not, unless you forgot to type a digit. Are we looking at a smoking gun?
Regards,
Rob.

Oracle Floats vs Number

I'm seeing conflicting references in Oracle's documentation. Is there any difference between how decimals are stored in the FLOAT and NUMBER types in the database?
As I recall from C, et al., a float has accuracy limitations that an int doesn't have. E.g., for floats, 0.1 (base 10) is approximated as 0.000110011001100110011001101 (base 2), which equals roughly 0.100000001490116119384765625 (base 10). However, for ints, 5 (base 10) is exactly 101 (base 2).
Which is why the following won't terminate as expected in C:
float i;
/* never terminates: i accumulates rounding error and is never exactly 10 */
for (i = 0.0f; i != 10.0f; )
{
    i += 0.1f;
}
However, I see elsewhere in Oracle's documentation that FLOAT has been defined as a NUMBER. And as I understand it, Oracle's implementation of the NUMBER type does not run into the same problem as C's float.
So, what's the real story here? Has Oracle deviated from the norm of what I expect to happen with floats/FLOATs?
(I'm sure it's a bee-fart-in-a-hurricane of difference for what I'll be using them for, but I know I'm going to have questions if 0.1*10 comes out to 1.00000000000000001)
Oracle's BINARY_FLOAT stores the data internally using IEEE 754 floating-point representation, like C and many other languages do. When you fetch them from the database, and typically store them in an IEEE 754 data type in the host language, it's able to copy the value without transforming it.
Whereas Oracle's FLOAT data type is a synonym for the ANSI SQL NUMERIC data type, called NUMBER in Oracle. This is an exact numeric, a scaled decimal data type that doesn't have the rounding behavior of IEEE 754. But if you fetch these values from the database and put them into a C or Java float, you can lose precision during this step.
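A quick sketch that makes the contrast visible from PL/SQL: summing 0.1 ten times is exact with NUMBER but not with BINARY_DOUBLE (the d suffix marks a BINARY_DOUBLE literal):

declare
  bd binary_double := 0;
  n  number        := 0;
begin
  for i in 1 .. 10 loop
    bd := bd + 0.1d;  -- IEEE 754 binary arithmetic
    n  := n  + 0.1;   -- exact decimal NUMBER arithmetic
  end loop;
  dbms_output.put_line('binary_double sums to 1: ' || case when bd = 1 then 'yes' else 'no' end);  -- no
  dbms_output.put_line('number sums to 1:        ' || case when n  = 1 then 'yes' else 'no' end);  -- yes
end;
/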
The Oracle BINARY_FLOAT and BINARY_DOUBLE are mostly equivalent to the IEEE 754 standard but they are definitely not stored internally in the standard IEEE 754 representation.
For example, a BINARY_DOUBLE takes 9 bytes of storage vs. IEEE's 8. Also the double floating number -3.0 is represented as 3F-F7-FF-FF-FF-FF-FF-FF which if you use real IEEE would be C0-08-00-00-00-00-00-00. Notice that bit 63 is 0 in the Oracle representation while it is 1 in the IEEE one (if 's' is the sign bit, according to IEEE, the sign of the number is (-1)^s). See the very good IEEE 754 calculators at http://babbage.cs.qc.cuny.edu/IEEE-754/
You can easily find this if you have a BINARY_DOUBLE column BD in table T with the query:
select BD, DUMP(BD) from T
Now all of that is fine and interesting (maybe) but when one works in C and gets a numeric value from Oracle (by binding a variable to a numeric column of any kind), one typically gets the result in a real IEEE double as is supported by C. Now this value is subject to all of the usual IEEE inaccuracies.
If one wants to do precise arithmetic, one can either do it in PL/SQL or use special precise-arithmetic C libraries.
For Oracle's own explanation of their numeric data types see: http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/datatype.htm#i16209
Oracle's NUMBER is in fact a decimal (base-10) floating-point representation...
FLOAT is just an alias for NUMBER and does the exact same thing.
If you want binary (base-2) floats, you need to use Oracle's BINARY_FLOAT or BINARY_DOUBLE datatypes.
Bill's answer about Oracle's FLOAT is only correct for later versions (say, 11g); in Oracle 8i, the documentation says:
You can specify floating-point numbers with the form discussed in "NUMBER Datatype". Oracle also supports the ANSI datatype FLOAT. You can specify this datatype using one of these syntactic forms: FLOAT specifies a floating-point number with decimal precision 38, or binary precision 126. FLOAT(b) specifies a floating-point number with binary precision b. The precision b can range from 1 to 126. To convert from binary to decimal precision, multiply b by 0.30103. To convert from decimal to binary precision, multiply the decimal precision by 3.32193. The maximum of 126 digits of binary precision is roughly equivalent to 38 digits of decimal precision.
It sounds like quadruple precision (126 bits of binary precision). If I am not mistaken, IEEE 754 only requires b = 2, p = 24 for single precision and p = 53 for double precision. The differences between 8i and 11g caused a lot of confusion when I was looking into a conversion plan between Oracle and PostgreSQL.
Like the PLS_INTEGER mentioned previously, the BINARY_FLOAT and BINARY_DOUBLE types in Oracle 10g use machine arithmetic and require less storage space, both of which make them more efficient than the NUMBER type. Only BINARY_FLOAT and BINARY_DOUBLE support NaN values; being binary floating-point types, though, they are not suited to precise (exact decimal) calculations.
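For example, the NaN and infinity literals (and the IS NAN condition) exist only for the binary types; NUMBER has no such values:

select binary_double_nan      as nan_value,
       binary_double_infinity as inf_value
from   dual;

-- NaN never equals itself, so IS NAN is the test to use:
select case when binary_double_nan is nan then 'NaN' end as check_nan
from   dual;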
