Keep real numbers in Ruby language [closed] - ruby

How are real numbers kept in the Ruby language? How can I keep 7.125 as a real number in memory? In this code:
myNumber = 7.125
puts("The number is #{myNumber}")
I do not understand how the number is kept in memory.

In Ruby 1.8 & 1.9, floats are never immediates, so every float requires a new memory allocation.
In Ruby 2.0.0, on 64-bit systems, many floats are now immediates. This means that typical floats no longer require memory allocation & deallocation, so operations on them are much faster.
Ruby stores its values in a pointer-sized word (32 or 64 bits, depending on the platform). It actually uses a trick to store immediates directly in that word; this is the reason why a Fixnum can only hold 31 / 63 bits.
On 32-bit platforms there's no clever way to store floats, but on 64-bit platforms it's possible to use the first few bits to flag the value as an immediate float and the remaining 60 or so to hold the data. Floats that do require the full 64 bits cannot be immediates, though, so those are stored as before, behind an actual pointer.
More info on this optimization can be found in the feature request: https://bugs.ruby-lang.org/issues/6763
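Whichever way the value is tagged, the payload itself is an IEEE 754 double, so 7.125 ends up as a specific 64-bit pattern. Here is a minimal C sketch (illustrative only, not MRI code) that prints that pattern and splits it into sign, exponent and fraction fields:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double d = 7.125;                 /* 7.125 = 111.001 in binary = 1.11001 x 2^2 */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* copy the 8 bytes without violating aliasing rules */

    printf("%.3f is stored as 0x%016llx\n", d, (unsigned long long)bits);
    printf("sign=%llu exponent=%llu fraction=0x%llx\n",
           (unsigned long long)(bits >> 63),
           (unsigned long long)((bits >> 52) & 0x7FF),
           (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
    return 0;
}

For 7.125 this prints 0x401c800000000000: sign 0, exponent 1025 (2^2 after removing the bias of 1023) and fraction 0.78125, giving (1 + 0.78125) × 4 = 7.125.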

There is no way to keep arbitrary real numbers in Ruby. The closest things are Float and Rational, neither of which can represent the real numbers fully.
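Even simple decimals fall outside what Float can hold exactly, because the underlying IEEE 754 doubles are binary. A quick C illustration of the same limitation (the stored values are identical to what Ruby's Float keeps):

#include <stdio.h>

int main(void) {
    printf("%.20f\n", 0.1);            /* 0.10000000000000000555...: 0.1 has no finite binary expansion */
    printf("%d\n", 0.1 + 0.2 == 0.3);  /* 0: the rounding errors of 0.1 and 0.2 don't cancel out */
    return 0;
}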

How about the BigDecimal class (arbitrary-precision arithmetic)? See the API documentation, which follows the standards on the topic.

Related

Implementing signed adder in HDL [closed]

Hey Everyone,
I'm working on a project which requires me to use signed adders.
How does one implement such an adder in HDL without using the arithmetic operators? (A test bench is not required.) Is there a way to modify an existing adder circuit to work as a signed adder? If so, your input would be of great help.
Thank you in advance.
To answer the first part of your question, you need to implement (for example) a ripple-carry adder, which is a series of full adders connected in a certain way. There is tons of material about this on the internet, because it is standard stuff. For example: http://eda-playground.readthedocs.io/en/latest/code-examples/full-adder.html
To answer the second part of your question: an adder which can handle signed numbers is identical to one that can only handle unsigned numbers as long as the inputs and outputs use two's complement representation.
To negate a number using two's complement representation, one inverts all the bits and adds one. For example, using an 8-bit two's complement representation
23 is 00010111 and
-23 is 11101001 which was generated by inverting all the bits (11101000) and then adding 1
Now let's add the two together using a standard adder, i.e. by doing long addition as you would have learned at school:
23 is 00010111
-23 is 11101001 +
--------
(1)00000000
--------
Look: it works! The (1) is the carry output, which you can ignore. The result of adding two numbers needs to be one bit wider than the wider of those two numbers. So really we should have widened the two inputs to 9 bits first. So, in a 9-bit two's complement representation:
23 is 000010111
-23 is 111101001 +
---------
(1)000000000
---------
You widen numbers which use a two's complement representation (i.e., signed numbers) by sign-extending: that is, you take the left hand bit (the MSB, the sign bit) and repeat it leftwards (in this case once, because we are widening by just 1 bit).
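If it helps to see the same idea in software, here is a small C sketch (C rather than HDL, and the function name ripple_add8 is my own) of a ripple-carry adder built only from bitwise logic; because the inputs are two's complement, the identical circuit adds signed and unsigned values:

#include <stdio.h>
#include <stdint.h>

/* Ripple-carry addition with no '+': each loop iteration is one full adder. */
static uint8_t ripple_add8(uint8_t a, uint8_t b, int *carry_out) {
    uint8_t sum = 0;
    int carry = 0;
    for (int i = 0; i < 8; i++) {
        int ai = (a >> i) & 1, bi = (b >> i) & 1;
        int s  = ai ^ bi ^ carry;                         /* full-adder sum bit   */
        carry  = (ai & bi) | (ai & carry) | (bi & carry); /* full-adder carry bit */
        sum |= (uint8_t)(s << i);
    }
    *carry_out = carry;
    return sum;
}

int main(void) {
    int cout;
    uint8_t r = ripple_add8(23, (uint8_t)-23, &cout);      /* -23 is 11101001 in two's complement */
    printf("23 + (-23) = %u, carry out = %d\n", r, cout);  /* prints 0 and 1, as in the worked example */
    return 0;
}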

What determines the result of overflowed operations? [closed]

Example:
int max = a > b ? a : b;
int min = a + b - max;
What determines whether this will work? The processor? The hardware? The language? Help me understand this at as deep a level as possible.
The processor IS the hardware (at least for the purposes of this question).
The language is purely a way for you to express things in such a way that they can be converted into what the processor itself expects. The role of the language here is to define what "int" means, what the arithmetic operators are/do, and what their exceptional behavior is. Low-level languages (like C/C++) leave several things "implementation defined" or even undefined, like the overflow behavior of signed integers. Other languages (like Python) may define "int" to be an abstract (not a hardware) concept and thereby change some of the rules (like detecting overflows and applying custom behavior).
If the language leaves something implementation defined and the implementation offloads that decision to the hardware, then the hardware is what defines the behavior of your code.
The high-level programming language provides a way for humans to describe what they want to happen. A compiler reduces that down into a language the processor understands, (ultimately) machine code. The instruction set for a particular processor is designed to be useful for doing tasks; general-purpose processors are designed for general-purpose tasks, including the ones you have described. Unlike pencil-and-paper math, where if we need another column we simply use another power of ten (99 + 1 = 100, for example: two digits wide going in, three digits coming out), processors have a fixed width for their registers. That doesn't mean you can't get creative, but the language and the resources (memory, disk space, etc.) have limits. And the processor, either directly in its logic or via the compiler emitting the right sequence of instructions, can and will detect an overflow if you ask it to, in general. Some processors make this harder than others, and some processors are not general purpose enough, but I don't think we need to worry about those; the one you are reading this web page on can definitely handle it.
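To make the division of responsibility concrete, here is a small C sketch (my own illustration) of the difference: unsigned overflow is defined by the language to wrap, while signed overflow is left undefined, which is exactly why the int max/min trick in the question is only safe when a + b stays in range:

#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int u = UINT_MAX;
    printf("%u\n", u + 1u);            /* 0: unsigned arithmetic is defined to wrap modulo 2^N */

    int a = INT_MAX, b = 1;
    /* a + b would be undefined behavior for signed int, so check before adding: */
    if (b > 0 && a > INT_MAX - b)
        puts("a + b would overflow");
    else
        printf("%d\n", a + b);
    return 0;
}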
Computers (the hardware) represent signed integers in two's complement. Check this for the details of two's complement, and why computers use it.
In two's complement, signed numbers (not floating-point ones for now, for the sake of simplicity) have a sign bit as the most significant bit. For example:
01111111
Represents 127 in two's complement. And
10000000
represents -128. In both examples, the first bit is the sign bit: if it's 0, the number is positive; otherwise it's negative.
8-bit signed numbers can represent numbers between -128 and 127, so if you add 127 and 3, you won't get 130, you will get -126 because of overflow. Let's see why:
01111111
00000011
+________
10000010 which is a negative number, -126 in two's complement.
How does the hardware know an overflow occurred? In addition, for example, if you add two positive numbers and the result comes out negative, that means overflow. And if you add two negative numbers and the result comes out positive, that means overflow again.
I hope this is a useful example of how these things happen at the hardware level.
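The rule above ("same input signs, different result sign") is easy to express in code. A minimal C sketch (the helper name add8_overflows is mine) using 8-bit values to mirror the example:

#include <stdio.h>
#include <stdint.h>

/* Overflow rule from above: if both operands have the same sign but the result
   has a different sign, the 8-bit addition overflowed. */
static int add8_overflows(int8_t a, int8_t b) {
    uint8_t r = (uint8_t)((uint8_t)a + (uint8_t)b);   /* 8-bit wrapping add */
    int sa = (a < 0), sb = (b < 0), sr = (r & 0x80) != 0;
    return sa == sb && sr != sa;
}

int main(void) {
    printf("%d\n", add8_overflows(127, 3));    /* 1: 01111111 + 00000011 = 10000010 (-126) */
    printf("%d\n", add8_overflows(100, -50));  /* 0: operands of mixed sign can never overflow */
    return 0;
}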

How many primitive operations does a 16-, 32- or 64-bit processor execute to perform a logical right shift of an N-bit binary number? [closed]

Recently, I have been trying to understand how the Binary Extended Euclidean Algorithm works at the processor level. This question is all about finding an inverse element in GF(2^m) with a polynomial basis.
Generally I came across the extended Euclidean algorithm for evaluating an inverse element, but the fact is that it involves too many addition and multiplication operations. The binary EEA requires just bit-shifting operations (equivalent to division by 2, i.e. a logical shift right). The algorithm is in this link, page number 8.
In steps 3 and 5 of this algorithm, every iteration shifts the parameters u and b by 1 bit to the right, adding a zero at the MSB at the same time. The loop ends when u == 1 and returns b. My question is: how many primitive operations does a processor (say a 32-bit processor, for example) perform in step 3 or step 5 of every iteration?
I came across the barrel shifter and I am quite confused about how fast the shifting takes place. Should I really consider these primitive operations, or should I ignore them because the shifting may be faster?
It would really help me a lot if someone would show the primitive operations for the case where the size of u is 194 bits.
In case you are wondering about the denominator x in steps 3 and 5 of the algorithm: it's the polynomial representation, x is nothing but 10 in binary, and the parameter u is an N-bit binary number.
There is no generic answer to this question: you can use portable code that will be tedious to optimize, or highly machine-specific code that will be even more complicated to optimize without breaking.
If you want real performance, you have to use MMX/AVX registers of the maximum width you can get your hands on. Intel provides lightweight wrappers around the low-level instructions as macros and inline functions.
Always use unsigned types for your shifting operations to avoid unnecessary steps.
Usually there is a "right shift" assembly opcode which can shift a register right by a given number of bits. Such an operation takes one cycle.
This assumes that your value is already loaded into the register, however.
The best answer anyway: Implement this algorithm in a low level language (C, C++) and look at the assembly code produced by the compiler.
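For a feel of what steps 3 and 5 cost when u is wider than a machine word, here is a C sketch (my own illustration, with a hypothetical helper shift_right_one) of shifting a 194-bit value held in 32-bit limbs right by one bit. Per limb it is roughly one load, two shifts, one OR and one store, and a 194-bit value needs 7 limbs, so each iteration of step 3 or 5 is on the order of a few dozen primitive operations on a 32-bit processor:

#include <stdint.h>
#include <stdio.h>

#define LIMBS 7   /* 7 x 32 = 224 bits, enough to hold a 194-bit value */

/* Shift a multi-precision value (little-endian limb order) right by one bit. */
static void shift_right_one(uint32_t u[LIMBS]) {
    for (int i = 0; i < LIMBS - 1; i++)
        u[i] = (u[i] >> 1) | (u[i + 1] << 31);   /* bring in the low bit of the next limb */
    u[LIMBS - 1] >>= 1;                          /* the top limb just gets a zero shifted in */
}

int main(void) {
    uint32_t u[LIMBS] = {5, 0, 0, 0, 0, 0, 0};   /* the value 101 in binary */
    shift_right_one(u);
    printf("%u\n", u[0]);                        /* prints 2, i.e. 10 in binary */
    return 0;
}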

How to write an 8086 assembly sorting program that works with 6-digit numbers? [closed]

I'm new to assembly language, although I know many code examples.
However, I'm working with an 8086 emulator, which only works with 16-bit numbers.
This is homework that I'm really stuck on:
How can I write assembly code that does the following:
1. Get 20 decimal numbers of at most 6 digits each and store them in an array.
2. Sort the array in ascending order.
It's really hard for me to understand how to manage the registers and the stack for numbers this long.
Any help will be appreciated. Thanks in advance.
In order to sort 32-bit (or wider) numbers with 16-bit registers, you have to compare the upper part of each number separately.
Assume we have these two random 32-bit numbers (shown in hex): 4567afdf and 321abc09.
Now when you look at them as 16 bit values they look like this:
4567 afdf
321a bc09
As you can easily see, you can compare the upper 16 bits on their own.
If the upper 16 bits are higher or lower, then you know that the lower part doesn't matter anymore and you sort the numbers accordingly.
If the upper 16 bits are equal, then you compare the lower 16 bits; if they are also equal, both numbers are equal => no sort needed. Otherwise you shuffle the lower halves accordingly; since the upper 16 bits are equal, you don't even need to shuffle those.
If the upper 16 bits are different, you still have to shuffle the lower 16 bits along with the upper ones, as they might differ too.
The basics of this approach can be used for an arbitrary number of bits, not just 32. Generally, when you have a seemingly hard problem, you should try to think of the easy examples and how you can solve them. Then you can extend that to more complicated cases.
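To make the comparison order explicit, here is the same logic as a small C sketch (C for readability; in 8086 assembly this becomes a CMP of the high words followed, only on equality, by a CMP of the low words; the function name cmp32 is mine):

#include <stdint.h>
#include <stdio.h>

/* Compare two 32-bit values held as separate 16-bit halves, the way a 16-bit CPU must:
   the high words decide first, the low words only break ties. */
static int cmp32(uint16_t a_hi, uint16_t a_lo, uint16_t b_hi, uint16_t b_lo) {
    if (a_hi != b_hi)
        return a_hi < b_hi ? -1 : 1;   /* high words differ: low words are irrelevant */
    if (a_lo != b_lo)
        return a_lo < b_lo ? -1 : 1;   /* high words equal: low words decide */
    return 0;                          /* completely equal: no swap needed */
}

int main(void) {
    /* 4567afdf vs 321abc09 from the example above */
    printf("%d\n", cmp32(0x4567, 0xafdf, 0x321a, 0xbc09));   /* prints 1: the first number is larger */
    return 0;
}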
EDIT:
An alternative approach applies if you have strings of decimal digits and you want to sort them based on the string representation instead of converting them to numbers.
In this case, you can do it as follows:
If the lengths of the two number strings are different, the shorter one is the lower number.
If the lengths are equal, then you can look at each digit individually (starting with the first digit) until you hit a non-equal digit or the end of the string. If you reached the end of the string, the numbers are the same; otherwise you know which one is higher/lower.
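A sketch of that rule in C (it assumes the strings carry no leading zeros; the helper name cmp_decimal_str is mine):

#include <stdio.h>
#include <string.h>

/* Compare two decimal-number strings: a shorter string is a smaller number,
   equal lengths are decided digit by digit from the left. */
static int cmp_decimal_str(const char *a, const char *b) {
    size_t la = strlen(a), lb = strlen(b);
    if (la != lb)
        return la < lb ? -1 : 1;
    return strcmp(a, b);   /* same length: lexicographic order equals numeric order for digits */
}

int main(void) {
    printf("%d\n", cmp_decimal_str("987", "1234") < 0);      /* 1: the shorter string is the smaller number */
    printf("%d\n", cmp_decimal_str("123456", "123789") < 0); /* 1: decided at the fourth digit */
    return 0;
}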

Arbitrary precision arithmetic with Ruby

How the heck does Ruby do this? Does Jörg or anyone else know what's happening behind the scenes?
Unfortunately I don't know C very well, so bignum.c is of little help to me. I was just kind of curious if someone could explain (in plain English) the theory behind whatever miracle algorithm it's using.
irb(main):001:0> 999**999
368063488259223267894700840060521865838338232037353204655959621437025609300472231530103873614505175218691345257589896391130393189447969771645832382192366076536631132001776175977932178658703660778465765811830827876982014124022948671975678131724958064427949902810498973271030787716781467419524180040734398996952930832508934116945966120176735120823151959779536852290090377452502236990839453416790640456116471139751546750048602189291028640970574762600185950226138244530187489211615864021135312077912018844630780307462205252807737757672094320692373101032517459518497524015120165166724189816766397247824175394802028228160027100623998873667435799073054618906855460488351426611310634023489044291860510352301912426608488807462312126590206830413782664554260411266378866626653755763627796569082931785645600816236891168141774993267488171702172191072731069216881668294625679492696148976999868715671440874206427212056717373099639711168901197440416590226524192782842896415414611688187391232048327738965820265934093108172054875188246591760877131657895633586576611857277011782497943522945011248430439201297015119468730712364007639373910811953430309476832453230123996750235710787086641070310288725389595138936784715274150426495416196669832679980253436807864187160054589045664027158817958549374490512399055448819148487049363674611664609890030088549591992466360050042566270348330911795487647045949301286614658650071299695652245266080672989921799342509291635330827874264789587306974472327718704306352445925996155619153783913237212716010410294999877569745287353422903443387562746452522860420416689019732913798073773281533570910205207767157128174184873357050830752777900041943256738499067821488421053870869022738698816059810579221002560882999884763252161747566893835178558961142349304466506402373556318707175710866983035313122068321102457824112014969387225476259342872866363550383840720010832906695360553556647545295849966279980830561242960013654529514995113584909050813015198928283202189194615501403435553060147713139766323195743324848047347575473228198492343231496580885057330510949058490527738662697480293583612233134502078182014347192522391449087738579081585795613547198599661273567662441490401862839817822686573112998663038868314974259766039340894024308383451039874674061160538242392803580758232755749310843694194787991556647907091849600704712003371103926967137408125713631396699343733288014254084819379380555174777020843568689927348949484201042595271932630685747613835385434424807024615161848223715989797178155169951121052285149157137697718850449708843330475301440373094611119631361702936342263219382793996895988331701890693689862459020775599439506870005130750427949747071390095256759203426671803377068109744629909769176319526837824364926844730545524646494321826241925107158040561607706364484910978348669388142016838792902926158979355432483611517588605967745393958061959024834251565197963477521095821435651996730128376734574843289089682710350244222290017891280419782767803785277960834729869249991658417000499998999
Simple: it does it the same way you do, ever since first grade. Except it doesn't compute in base 10, it computes in base 4 billion (and change).
Think about it: with our number system, we can only represent numbers from 0 to 9. So, how can we compute 6+7 without overflowing? Easy: we do actually overflow! We cannot represent the result of 6+7 as a number between 0 and 9, but we can overflow to the next place and represent it as two numbers between 0 and 9: 3×10⁰ + 1×10¹. If you want to add two numbers, you add them digit-wise from the right and overflow ("carry") to the left. If you want to multiply two numbers, you have to multiply every digit of one number individually with the other number, then add up the intermediate results.
BigNum arithmetic (this is what this kind of arithmetic, where the numbers are bigger than the native machine numbers, is usually called) works basically the same way. Except that the base is not 10, and it's not 2, either – it's the size of a native machine integer. So, on a 32-bit machine, it would be base 2³² or 4 294 967 296.
Specifically, in Ruby Integer is actually an abstract class that is never instantiated. Instead, it has two subclasses, Fixnum and Bignum, and numbers automagically migrate between them, depending on their size. In MRI and YARV, Fixnum can hold a 31- or 63-bit signed integer (one bit is used for tagging), depending on the native word size of the machine. In JRuby, a Fixnum can hold a full 64-bit signed integer, even on a 32-bit machine.
The simplest operation is adding two numbers. And if you look at the implementation of +, or rather bigadd_core, in YARV's bignum.c, it's not too bad to follow. I can't read C either, but you can clearly see how it loops over the individual digits.
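For the adventurous, here is a stripped-down C sketch of that digit-by-digit loop (a schoolbook addition in base 2³², not MRI's actual bigadd_core):

#include <stdint.h>
#include <stdio.h>

/* Schoolbook addition of two bignums stored as little-endian arrays of 32-bit "digits".
   Each step adds one digit from each number plus the carry, exactly like column addition
   in base 2^32. 'out' must have room for n+1 digits. */
static void big_add(const uint32_t *a, const uint32_t *b, uint32_t *out, int n) {
    uint64_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;  /* widen so the carry can't be lost */
        out[i] = (uint32_t)t;                        /* the low 32 bits are this digit   */
        carry  = t >> 32;                            /* anything above carries leftwards */
    }
    out[n] = (uint32_t)carry;
}

int main(void) {
    /* 0xFFFFFFFF + 1: the carry spills into a second digit, growing the number */
    uint32_t a[1] = {0xFFFFFFFFu}, b[1] = {1}, out[2];
    big_add(a, b, out, 1);
    printf("%08x %08x\n", out[1], out[0]);   /* prints 00000001 00000000 */
    return 0;
}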
You could read the source for bignum.c...
At a very high level, without going into any implementation details, bignums are calculated "by hand" like you used to do in grade school. Now, there are certainly many optimizations that can be applied, but that's the gist of it.
I don't know the implementation details, so I'll cover how a basic big number implementation would work.
Basically, instead of relying on CPU "integers", it creates its own out of multiple CPU integers. To store arbitrary precision, well, let's say you have 2 bits. So the current integer is 11. You want to add one. With normal CPU integers, this would roll over to 00.
But for a big number, instead of rolling over and keeping a "fixed" integer width, it would allocate another bit and simulate the addition so that the number becomes the correct 100.
Try looking up how binary math can be done on paper. It's very simple and is trivial to convert to an algorithm.
Beaconaut APICalc 2, just released on Jan. 18, 2011, is an arbitrary-precision integer calculator for bignum arithmetic, cryptography analysis and number theory research.
http://www.beaconaut.com/forums/default.aspx?g=posts&t=13
It uses the Bignum class
irb(main):001:0> (999**999).class
=> Bignum
RDoc is available, of course.
