Conversion of Integer to Characters [duplicate] - algorithm

This question already has answers here:
How do I print an integer in Assembly Level Programming without printf from the c library? (itoa, integer to decimal ASCII string)
(5 answers)
Closed 3 years ago.
I have been looking for the way that integers are converted to characters. I understand that there are ways using modulo and division to extract each number. I am looking for the way that programming languages do this.
Example:
int a = 101;
printf("%d\", &a);
This prints 101 to the console.
I want to understand how the bits 01100101 turn into "101" at the processor level.

I want to understand how the bits 01100101 turn into "101" at the processor level.
It doesn't. At the assembly level the meaning of a bit pattern depends entirely on how it is used. If I write mov eax, 1 the computer doesn't know whether I mean the decimal number 1, the boolean true, or the ASCII SOH control character. Meaning is something humans ascribe to their programs.
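To make that concrete, here is a small C snippet (my own illustration, not part of the original question) that prints the very same bit pattern under two different interpretations:

#include <stdio.h>

int main(void) {
    int a = 101;
    /* one bit pattern, 01100101, read two ways */
    printf("%d\n", a);   /* interpreted as a decimal integer: prints 101 */
    printf("%c\n", a);   /* interpreted as an ASCII code: prints e */
    return 0;
}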
I understand that there are ways using modulo and division to extract each number. I am looking for the way that programming languages do this.
This is pretty much what programming languages do; sometimes they also use lookup tables and similar tricks to speed things up, but the core algorithm stays the same.
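As a rough sketch of that core algorithm, here is a hypothetical int_to_decimal helper in C (the function name and buffer sizes are mine; real printf implementations are more elaborate and handle negatives, padding, and other bases):

#include <stdio.h>

/* Convert a non-negative value to its decimal digits by repeated
   division and modulo, then reverse the digits into the caller's buffer. */
static void int_to_decimal(unsigned value, char *buf) {
    char tmp[16];             /* enough for a 32-bit value plus terminator */
    int n = 0;
    do {
        tmp[n++] = (char)('0' + value % 10);  /* least significant digit first */
        value /= 10;
    } while (value > 0);
    for (int i = 0; i < n; i++)               /* digits came out reversed */
        buf[i] = tmp[n - 1 - i];
    buf[n] = '\0';
}

int main(void) {
    char out[16];
    int_to_decimal(101, out);
    printf("%s\n", out);      /* prints 101 */
    return 0;
}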

Related

Fastest algorithm to convert hexadecimal numbers into decimal form without using a fixed length variable to store the result

I want to write a program to convert hexadecimal numbers into their decimal forms without using a variable of fixed length to store the result because that would restrict the range of inputs that my program can work with.
Let's say I were to use a variable of type long long int to calculate, store and print the result. Doing so would limit the range of hexadecimal numbers that my program can handle to between 8000000000000001 and 7FFFFFFFFFFFFFFF. Anything outside this range would cause the variable to overflow.
I did write a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations but it runs much slower, even for numbers that are as big as 7FFFFFFFF!
Then I stumbled onto this site which could take numbers that are way outside the range of a 64 bit variable. I tried their converter with numbers much larger than 16^65 - 1 and still couldn't get it to overflow. It just kept on going and printing the result.
I figured that they must be using a much better algorithm for hex to decimal conversion, one that isn't limited to 64 bit values.
So far, Google's search results have only led me to algorithms that use some fixed-length variable for storing the result.
That's why I am here. I wanna know if such an algorithm exists and if it does, what is it?
Well, it sounds like you already did it when you wrote "a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations".
Converting from base 16 (hexadecimal) to base 10 means implementing multiplication and addition of numbers in a base-10^x representation. Then for each hex digit d, you calculate result = result*16 + d. When you're done you have the same number in a base-10 representation that is easy to write out as a decimal string.
There could be any number of reasons why your string-based method was slow. If you provide it, I'm sure someone could comment.
The most important trick for making it reasonably fast, though, is to pick the right base to convert to and from. I would probably do the multiplication and addition in base 10^9, so that each digit will be as large as possible while still fitting into a 32-bit integer, and process 7 hex digits at a time, which is as many as I can while only multiplying by single digits.
For every 7 hex digits, I'd convert them to a number d, and then do result = result * 16^7 + d.
Then I can get the 9 decimal digits for each resulting digit in base 10^9.
This process is pretty easy, since you only have to multiply by single digits. I'm sure there are faster, more complicated ways that recursively break the number into equal-sized pieces.
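A minimal sketch of this idea in C might look like the following. It uses a fixed array of base-10^9 limbs and processes one hex digit at a time rather than seven, purely for clarity; the input string, the MAX_LIMBS bound, and the lack of error handling are simplifications of mine, not part of the answer above.

#include <stdint.h>
#include <stdio.h>

#define BASE 1000000000u   /* 10^9: the largest power of ten that fits a 32-bit limb */
#define MAX_LIMBS 128      /* arbitrary fixed bound; a real version would grow dynamically */

static int hex_val(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;   /* input is assumed valid below */
}

int main(void) {
    const char *hex = "1FFFFFFFFFFFFFFFFF";   /* wider than any 64-bit integer */
    uint32_t limbs[MAX_LIMBS] = {0};          /* least significant limb first */
    int used = 1;

    /* result = result*16 + d for each hex digit, carried out limb by limb */
    for (const char *p = hex; *p; p++) {
        uint64_t carry = (uint64_t)hex_val(*p);
        for (int i = 0; i < used; i++) {
            uint64_t cur = (uint64_t)limbs[i] * 16 + carry;
            limbs[i] = (uint32_t)(cur % BASE);
            carry = cur / BASE;
        }
        while (carry > 0 && used < MAX_LIMBS) {
            limbs[used++] = (uint32_t)(carry % BASE);
            carry /= BASE;
        }
    }

    /* print the most significant limb as-is, the rest zero-padded to 9 digits */
    printf("%u", (unsigned)limbs[used - 1]);
    for (int i = used - 2; i >= 0; i--)
        printf("%09u", (unsigned)limbs[i]);
    printf("\n");
    return 0;
}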

Ruby and mathematical problems [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 4 years ago.
I was trying to solve a mathematical problem:
37.9 - 6.05
Ruby gives me:
37.9 - 6.05
#=> 31.849999999999998
37.90 - 6.05
#=> 31.849999999999998
37.90 + 6.05
#=> 43.949999999999996
Why am I getting this?
In a nutshell, computers have trouble representing real numbers exactly and use floating-point representation to deal with them. Much in the same way that 8 bits can only represent 256 different natural numbers, 64 bits can only represent a fixed set of real numbers. For more details on this, read http://floating-point-gui.de/ or search for "floating point arithmetic".
How should I deal with that?
Never store currency values in floating-point variables. Use BigDecimal or do your calculations in cents using only integer numbers.
Use round to trim your floats to a user-friendly length. Rounding errors will occur, especially when adding up a lot of floats.
In SQL systems use the decimal data type, or use integers and divide them by a constant factor in the UI (say you need 3 decimal digits: you could store 1234 as an integer and display it as 1.234).
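For illustration, here is a small C sketch of the same two points (the numbers match the Ruby example; the integer-cents part is my own minimal example of the idea, not a full currency library):

#include <stdio.h>

int main(void) {
    /* The same binary floating-point behaviour the Ruby snippet shows */
    double d = 37.90 - 6.05;
    printf("%.17g\n", d);    /* typically prints 31.849999999999998 */
    printf("%.2f\n", d);     /* rounded for display: 31.85 */

    /* Storing currency as integer cents sidesteps the representation error */
    long cents = 3790 - 605;                          /* 37.90 and 6.05 in cents */
    printf("%ld.%02ld\n", cents / 100, cents % 100);  /* prints 31.85 */
    return 0;
}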

What determines the result of overflowed operations? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
Example:
int max = a > b ? a : b;
int min = a + b - max;
What determines whether this will work? The processor? The hardware? The language? Help me understand this at as deep a level as possible.
The processor IS the hardware (at least for the purposes of this question).
The language is purely a way for you to express things in a form that can be converted into what the processor itself expects. The role of the language here is to define what "int" means, what the arithmetic operators are and do, and what their exceptional behavior is. In low-level languages (like C/C++), several things are left up to the implementation, like the overflow behavior of signed integers. Other languages (like Python) may define "int" to be an abstract (not a hardware) concept and thereby change some of the rules (like detecting overflows and applying custom behavior).
If the language leaves something implementation defined and the implementation offloads that decision to the hardware, then the hardware is what defines the behavior of your code.
The high-level programming language provides a way for humans to describe what they want to happen. A compiler reduces that down into a language the processor understands, (ultimately) machine code. The instruction set for a particular processor is designed to be useful for doing tasks, and general-purpose processors handle general-purpose tasks, including the ones you have described. Unlike pencil-and-paper math, where if we need another column we just add another power of ten (99+1 = 100, for example: two digits wide going in, three digits coming out), processors have a fixed width for their registers. That doesn't mean you can't get creative, but the language and the resources (memory, disk space, etc.) have limits. And the processor, either directly in its logic or via the compiler emitting the right sequence of instructions, can and will detect an overflow if you ask it to, in general. Some processors make this harder than others and some are not general-purpose enough, but we don't need to worry about those; the one you are reading this web page on can definitely handle it.
Computers (hardware) represent numbers in two's complement. Check this for details of two's complement, and why computers use it.
In two's complement, signed numbers (not floating-point ones for now, for the sake of simplicity) have a sign bit as the most significant bit. For example:
01111111
Represents 127 in two's complement. And
10000000
represents -128. In both examples, the first bit is the sign bit: if it's 0 the number is positive, otherwise it's negative.
8-bit signed numbers can represent numbers between -128 and 127, so if you add 127 and 3, you won't get 130, you will get -126 because of overflow. Let's see why:
  01111111
+ 00000011
----------
  10000010
which is a negative number, -126 in two's complement.
How does the hardware know an overflow occurred? Take addition, for example: if you add two positive numbers and the result comes out negative, that's an overflow; likewise, if you add two negative numbers and the result comes out positive, that's an overflow as well.
I hope this gives a reasonable picture of how these things happen at the hardware level.
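Here is a hedged C sketch of that sign rule, using 8-bit values to mirror the example above (the unsigned detour is mine, to keep the addition itself well defined; GCC and Clang also offer __builtin_add_overflow for this):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t a = 127, b = 3;

    /* Add as unsigned so the wrap-around is well defined, then reinterpret
       the low 8 bits; on a two's-complement machine this matches the bit
       pattern the hardware adder produces. */
    uint8_t raw = (uint8_t)((uint8_t)a + (uint8_t)b);
    int8_t sum = (int8_t)raw;   /* conversion is implementation-defined, wraps in practice */

    /* The sign rule: same-signed operands with an opposite-signed result */
    int overflowed = ((a < 0) == (b < 0)) && ((sum < 0) != (a < 0));

    printf("%d + %d = %d, overflow: %s\n", a, b, sum, overflowed ? "yes" : "no");
    return 0;
}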

Print integer with "most appropriate" kilo/mega/etc multiplier [duplicate]

Possible Duplicate:
How to convert byte size into human readable format in java?
Given an integer, I'd like to print it in a human-readable way using kilo, mega, giga etc. multipliers. How do I pick the "best" multiplier?
Here are some examples
1 print as 1
12345 print as 12.3k
987654321 print as 988M
Ideally the number of digits printed should be configurable, e.g. in the last example, 3 digits would lead to 988M, 2 digits would lead to 1.0G, 1 digit would lead to 1G, and 4 digits would lead to 987.7M.
Example: Apple uses an algorithm of this kind, I think, when OSX tells me how many more bytes have to be copied.
This will be for Java, but I'm more interested in the algorithm than the language.
As a starting point, you could use the Math.log10() function to get the "magnitude" of your value, and then use some form of associative container for the suffix (k, M, G, etc.):
double magnitude = Math.log10(value);
Hope this helps somehow
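Here is a rough C sketch of the idea (the function name print_human, the fixed divide-by-1000 loop instead of a logarithm, and the omission of the round-up case from the question, where 988M at 2 digits should become 1.0G, are all my simplifications):

#include <stdio.h>

/* Scale down by powers of 1000 until the value fits in front of a metric
   prefix, then print roughly sig_digits significant digits. */
static void print_human(double value, int sig_digits) {
    static const char *suffix[] = {"", "k", "M", "G", "T", "P", "E"};
    int tier = 0;
    while (value >= 1000.0 && tier < 6) {
        value /= 1000.0;
        tier++;
    }
    /* digits in front of the decimal point of the scaled value: 1 to 3 */
    int int_digits = (value >= 100.0) ? 3 : (value >= 10.0) ? 2 : 1;
    int decimals = (tier == 0) ? 0
                 : (sig_digits > int_digits ? sig_digits - int_digits : 0);
    printf("%.*f%s\n", decimals, value, suffix[tier]);
}

int main(void) {
    print_human(1, 3);           /* 1      */
    print_human(12345, 3);       /* 12.3k  */
    print_human(987654321, 3);   /* 988M   */
    print_human(987654321, 4);   /* 987.7M */
    return 0;
}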

Ada Digits Confusion

I have been doing some reading, and I'm having a tough time understanding how to interpret something that is a "digits x".
For example:
type something is digits 6;
I get that it's 6 digits of precision, but I guess what has me mixed up is what that actually means:
1) Y.XXXXXX (six X's), or
2) XXX.XXX (any number of digits, just always 6 of them counting both before and after the decimal point),
...
I'm just trying to understand the range of something that is digits 6 (or digits n, to be more generic). Is there a formula I can simply plug into to determine the ranges of a type declared with some number of digits?
A type declared with digits is a floating-point type, similar to Float or Long_Float.
The 6 is "the minimum number of significant decimal digits required for
the floating point type". For example, all the following will be represented reasonably accurately (but not exactly):
type My_Real is digits 6;
X: My_Real := 1.23456;
Y: My_Real := 12345.6;
Z: My_Real := 1.23456E7;
In practice, there are usually just 2 or 3 underlying floating-point types on a given system, and the compiler will choose an appropriate one as the underlying type for your declaration. Because of this, two types declared with digits 2 and digits 6 will probably end up with exactly the same representation and precision.
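If it helps as a cross-language reference point, C exposes a similar notion through float.h: FLT_DIG and DBL_DIG give the number of decimal digits each hardware floating-point type can reliably hold, which is roughly the information a compiler uses when mapping a digits declaration onto an underlying type (the analogy is mine, not part of the Ada standard):

#include <float.h>
#include <stdio.h>

int main(void) {
    printf("float  holds %d decimal digits\n", FLT_DIG);  /* typically 6  */
    printf("double holds %d decimal digits\n", DBL_DIG);  /* typically 15 */
    return 0;
}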
Understanding the phrase "not exactly" requires an understanding of floating-point that's well beyond the scope of a single question, but if you're familiar with floating-point in other languages, it's the same general idea.
If you want a general understanding of what floating-point is and how it works, the Wikipedia Article isn't bad. A much more advanced treatment is David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic", available here as a web page and here as a PDF.

Resources