I'm getting an overflow error in VB 6.0 when using CLng because of really big values. How can I overcome this? Is there anything available that is larger than the Long data type?
Depending on how big your really big values are, the VB6 Currency data type might be a good choice.
It supports values in the range -922,337,203,685,477.5808 to 922,337,203,685,477.5807.
You could use a Double instead of a Long, since it can hold larger numbers. The conversion function is CDbl() instead of CLng().
In VB6, a Long is 32 bits and can hold values up to 2,147,483,647.
A Double is 64 bits and can hold values up to 1.79769313486231570E+308.
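VB6 itself isn't runnable here, but since a VB6 Long is a signed 32-bit integer and a Double is an IEEE 754 double, the difference can be sketched in Go, whose int32 and float64 have the same layouts (note that Go wraps on integer overflow where VB6 raises an "Overflow" error):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// VB6's Long corresponds to a signed 32-bit integer (Go's int32):
	var n int32 = math.MaxInt32 // 2,147,483,647, the largest Long
	fmt.Println(n + 1)          // wraps to -2,147,483,648; VB6 raises "Overflow" instead

	// VB6's Double corresponds to an IEEE 754 double (Go's float64),
	// which trades integer exactness for a much larger range:
	var d float64 = 2147483648 // too big for a Long, fine for a Double
	fmt.Println(d * d)         // 4.611686018427388e+18
}
```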
<humor>I believe the upcoming VB in MSVS2010 has the CLonger (64 bits), CEvenLongerYet (128 bits) and CTooDamnLongForSensibleUse (256 bits) data types.</humor>
Here are some options from the VB6 reference manual topic on data types:

Long (long integer): 4 bytes. -2,147,483,648 to 2,147,483,647.

Single (single-precision floating-point): 4 bytes. -3.402823E38 to -1.401298E-45 for negative values; 1.401298E-45 to 3.402823E38 for positive values. About 6 or 7 significant figures of accuracy.

Double (double-precision floating-point): 8 bytes. -1.79769313486231E308 to -4.94065645841247E-324 for negative values; 4.94065645841247E-324 to 1.79769313486232E308 for positive values. About 15 or 16 significant figures of accuracy.

Currency (scaled integer): 8 bytes. -922,337,203,685,477.5808 to 922,337,203,685,477.5807.

Decimal: 14 bytes. +/-79,228,162,514,264,337,593,543,950,335 with no decimal point; +/-7.9228162514264337593543950335 with 28 places to the right of the decimal; smallest non-zero number is +/-0.0000000000000000000000000001.
Also, try to avoid division by zero: in VB6, zero divided by zero raises an overflow error. If both the numerator and denominator in your code can be zero, consider making the denominator equal to 1 in that case, since
zero/zero = overflow
zero/1 = zero (no overflow)
Related
I want to write a program to convert hexadecimal numbers into their decimal forms without using a variable of fixed length to store the result because that would restrict the range of inputs that my program can work with.
Let's say I were to use a variable of type long long int to calculate, store and print the result. Doing so would limit the range of hexadecimal numbers that my program can handle to between 8000000000000000 and 7FFFFFFFFFFFFFFF (interpreted as two's complement). Anything outside this range would cause the variable to overflow.
I did write a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations but it runs much slower, even for numbers that are as big as 7FFFFFFFF!
Then I stumbled onto this site which could take numbers that are way outside the range of a 64 bit variable. I tried their converter with numbers much larger than 16^65 - 1 and still couldn't get it to overflow. It just kept on going and printing the result.
I figured that they must be using a much better algorithm for hex to decimal conversion, one that isn't limited to 64 bit values.
So far, Google's search results have only led me to algorithms that use some fixed-length variable for storing the result.
That's why I am here. I wanna know if such an algorithm exists and if it does, what is it?
Well, it sounds like you already did it when you wrote "a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations".
Converting from base 16 (hexadecimal) to base 10 means implementing multiplication and addition of numbers in a base-10^x representation. Then for each hex digit d, you calculate result = result*16 + d. When you're done you have the same number in a base-10 representation that is easy to write out as a decimal string.
There could be any number of reasons why your string-based method was slow. If you post that code, I'm sure someone could comment on it.
The most important trick for making it reasonably fast, though, is to pick the right base to convert to and from. I would probably do the multiplication and addition in base 10^9, so that each digit is as large as possible while still fitting into a 32-bit integer, and process 7 hex digits at a time, which is as many as you can while still multiplying only by single digits (since 16^7 < 10^9).
For every 7 hex digits, I'd convert them to a number d, and then do result = result * 16^7 + d.
Then I can get the 9 decimal digits for each resulting digit in base 10^9.
This process is pretty easy, since you only have to multiply by single digits. I'm sure there are faster, more complicated ways that recursively break the number into equal-sized pieces.
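Here's a minimal sketch of that approach in Go (the language and names like hexToDecimal are mine, just for illustration): the number is kept as base-10^9 limbs, and up to 7 hex digits are folded in per step, so every intermediate product fits comfortably in 64-bit arithmetic.

```go
package main

import "fmt"

// hexToDecimal converts a hex string of arbitrary length to decimal,
// holding the intermediate number as base-10^9 "digits" (limbs),
// least significant first. It assumes the input is valid hex.
func hexToDecimal(hex string) string {
	const base = 1000000000 // 10^9: the largest power of 10 below 2^30
	limbs := []uint64{0}

	for start := 0; start < len(hex); {
		// Take up to 7 hex digits: 16^7 < 10^9, so the chunk and the
		// per-limb products stay far below the uint64 limit.
		end := start + 7
		if end > len(hex) {
			end = len(hex)
		}
		chunk, mult := uint64(0), uint64(1)
		for _, c := range hex[start:end] {
			chunk = chunk*16 + hexVal(byte(c))
			mult *= 16
		}

		// result = result*16^k + chunk, limb by limb with carries.
		carry := chunk
		for i := range limbs {
			v := limbs[i]*mult + carry
			limbs[i] = v % base
			carry = v / base
		}
		for carry > 0 {
			limbs = append(limbs, carry%base)
			carry /= base
		}
		start = end
	}

	// Most significant limb first; inner limbs are padded to 9 digits.
	out := fmt.Sprintf("%d", limbs[len(limbs)-1])
	for i := len(limbs) - 2; i >= 0; i-- {
		out += fmt.Sprintf("%09d", limbs[i])
	}
	return out
}

func hexVal(c byte) uint64 {
	switch {
	case c >= '0' && c <= '9':
		return uint64(c - '0')
	case c >= 'a' && c <= 'f':
		return uint64(c-'a') + 10
	default: // 'A'..'F'; invalid input is not handled in this sketch
		return uint64(c-'A') + 10
	}
}

func main() {
	// 20 hex digits, well past the 64-bit range:
	fmt.Println(hexToDecimal("FFFFFFFFFFFFFFFFFFFF")) // 1208925819614629174706175
}
```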
I want to know how many digits are allowed after the decimal point for the primitive Double data type in VB6 without getting rounded off.
You get about 15 or 16 significant figures of accuracy with the Double data type in VB6,
e.g. 1.0000000000000006
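Since a VB6 Double is the same IEEE 754 64-bit format as, say, Go's float64, you can see where that limit comes from by looking at the gap between adjacent doubles near 1.0 (a Go sketch, just for illustration):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// The gap between 1.0 and the next representable double is 2^-52,
	// about 2.22e-16, which is why roughly 15 or 16 significant decimal
	// digits survive without rounding.
	fmt.Println(math.Nextafter(1, 2) - 1) // 2.220446049250313e-16
}
```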
I'm writing a radix-2 DIT FFT algorithm in VHDL, which requires some fractional multiplication of the input data by twiddle factors (TF). I use fixed-point arithmetic to achieve that, with every word being 16 bits long, where 1 bit is the sign bit and the rest is distributed between integer and fraction. Hence my dilemma:
I have no idea what range my input data will be in. If I just decide that 4 bits go to the integer part and the remaining 11 bits to the fraction, I'm screwed as soon as I get integers that don't fit in 4 bits (i.e. larger than 15). The same applies if I do 50/50, like 7 bits to the integer part and the rest to the fraction. And if I get numbers which are very small, I'm screwed because of truncation or rounding, i.e.:
Let's assume I have an integer "3" (0000 0011) on the input and a TF of "0.7071" (0.10110101 in 8 bits), and let's assume, for simplicity, that my data is 8 bits long. Then:
3 x 0.7071 = 2.1213 (the exact result)
3 x 0.10110101 = 0000 0010 . 0001 1111 = 2.12109375 (the 16-bit fixed-point result, since 0.10110101 actually represents 0.70703125)
Here comes the catch: I need to round or truncate the 16-bit result back to 8 bits, which gives me 0000 0010, i.e. 2. The error is way too high.
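For concreteness, here is the worked example above redone in plain integer arithmetic in Go (just a sketch; the Q-format bookkeeping in the comments is my reading of the example):

```go
package main

import "fmt"

func main() {
	// The input 3 is a plain 8-bit integer; the twiddle factor 0.7071
	// is stored as an 8-bit fraction (Q0.8): round(0.7071*256) = 181,
	// i.e. 0b10110101, which really represents 181/256 = 0.70703125.
	x := int32(3)
	tf := int32(181)

	prod := x * tf // 543, i.e. 543/256 = 2.12109375 in Q8.8
	fmt.Printf("full product: %d/256 = %v\n", prod, float64(prod)/256)

	// Truncating back to 8 bits throws away all fraction bits:
	fmt.Println("truncated:", prod>>8) // 2, an error of ~0.12

	// Round-to-nearest (add half an LSB before shifting) is better on
	// average, but cannot recover bits that are simply discarded:
	fmt.Println("rounded:", (prod+128)>>8) // still 2 here
}
```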
My questions are:
How would you solve this problem of range vs. precision if you don't know the range of your input data AND your numbers are represented in fixed point?
Should I make a process which decides after every multiplication where to put the binary point? Wouldn't that make the multiplication slower?
The Xilinx IP core offers 3 different ways of handling fixed-point arithmetic: Unscaled (similar to what I want to do, just truncate in case overflow happens), Scaled fixed point (I would assume that in this case it decides after each multiplication where the binary point should be and what should be rounded), and Block Floating Point (no idea what it is or how it works; I would appreciate an explanation). So how does this IP core decide where to put the binary point? If the decision is made depending on the highest value in my dataset, then in case I have just one high peak and the rest of the data is low, the error will be very high.
I will appreciate any ideas or information on any known methods.
You don't need to know the fixed-point format of your input. You can safely treat it as a normalized -1 to 1 range, or as full integer range.
The reason is that your output will have the same format as the input. Or, more likely for an FFT, a known relationship like a 3-bit increase, which would mean the output has 3 more integer bits than the input.
It is the core user's burden to know where the binary point ends up; you just have to document the change in dynamic range, of course.
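To make the "known relationship" idea concrete, here is a tiny hypothetical helper in Go (the function and its growth assumption are mine, not from any Xilinx core): a radix-2 FFT of N points has log2(N) stages with up to one bit of magnitude growth per stage, so with a fixed word length the binary point simply moves.

```go
package main

import "fmt"

// outputFormat is a hypothetical helper: given the input Q-format and the
// number of radix-2 stages (log2 of the FFT length), it returns the output
// Q-format under the worst-case assumption of one bit of growth per stage.
func outputFormat(intBits, fracBits, stages int) (int, int) {
	return intBits + stages, fracBits - stages
}

func main() {
	// 16-bit words (1 sign bit + Q1.14 input), 8-point FFT = 3 stages:
	i, f := outputFormat(1, 14, 3)
	fmt.Printf("output format: Q%d.%d\n", i, f) // Q4.11
}
```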
The official documentation says uint64 is an unsigned 64-bit integer. Does that mean any uint64 number takes 8 bytes of storage, no matter how small or how large it is?
Edit:
Thanks for everyone's answer!
I raised the doubt when I noticed that binary.PutUvarint consumes up to 10 bytes to store a large uint64, even though the maximum uint64 value should only take 8 bytes.
I then found the answer to my doubt in the source code of the Go standard library:
// Design note:
// At most 10 bytes are needed for 64-bit values. The encoding could
// be more dense: a full 64-bit value needs an extra byte just to hold bit 63.
// Instead, the msb of the previous byte could be used to hold bit 63 since we
// know there can't be more than 64 bits. This is a trivial improvement and
// would reduce the maximum encoding length to 9 bytes. However, it breaks the
// invariant that the msb is always the "continuation bit" and thus makes the
// format incompatible with a varint encoding for larger numbers (say 128-bit).
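You can see both behaviors with a few lines of Go (a quick sketch of the point above):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64) // MaxVarintLen64 == 10

	// A uint64 in memory is always 8 bytes, but the varint wire
	// encoding grows with the value, up to 10 bytes:
	fmt.Println(binary.PutUvarint(buf, 1))     // 1 byte
	fmt.Println(binary.PutUvarint(buf, 1<<63)) // 10 bytes
}
```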
According to http://golang.org/ref/spec#Size_and_alignment_guarantees:
type size in bytes
byte, uint8, int8 1
uint16, int16 2
uint32, int32, float32 4
uint64, int64, float64, complex64 8
complex128 16
So, yes, uint64 will always take 8 bytes.
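You can confirm this in Go itself: unsafe.Sizeof reports the storage of the type, not of the value (a quick sketch):

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var small uint64 = 1
	var large uint64 = 18446744073709551615 // max uint64
	// Both occupy 8 bytes: the size depends on the type, not the value.
	fmt.Println(unsafe.Sizeof(small), unsafe.Sizeof(large)) // 8 8
}
```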
Simply put: yes, a 64-bit fixed size integer type will always take 8 bytes. It would be an unusual language where that isn't the case.
There are languages/platforms which support variable-length numeric types, where the storage in memory does depend on the value, but then you wouldn't specify the number of bits in the type name, as that can vary.
The Go Programming Language Specification
Numeric types
A numeric type represents sets of integer or floating-point values.
The predeclared architecture-independent numeric types are:
uint64 the set of all unsigned 64-bit integers (0 to 18446744073709551615)
Yes, exactly 64 bits or 8 bytes.
Just remember the simple rule: sizes are specified in bits, and 8 bits = 1 byte.
Therefore 64 bits = 8 bytes.
If I store an integer field in int32...will this use more space than int64?
From what I understand the varint will adjust its size with the size of the number being stored.
No, this only impacts the generated code. Any of [s|u]int{32|64} uses "varint" encoding on the wire, so the size is generally related to the magnitude of the value, modulo how negative numbers are handled. In particular, a negative number in a field that doesn't use sint* will be disproportionately large (10 bytes, IIRC), regardless of whether it is 32- or 64-bit.
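A quick Go sketch of that size difference, using encoding/binary's varint routines (PutUvarint is the plain varint that int32/int64 fields use; PutVarint applies the ZigZag mapping that sint32/sint64 use):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)

	// Plain varint, as for proto int32/int64: a negative value is
	// sign-extended to 64 bits first, so it always takes 10 bytes.
	neg := int64(-1)
	fmt.Println(binary.PutUvarint(buf, uint64(neg))) // 10

	// ZigZag varint, as for proto sint32/sint64: -1 maps to 1 before
	// encoding, so small negative values stay small.
	fmt.Println(binary.PutVarint(buf, -1)) // 1
}
```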