I was wondering how to find the GCD of two inputs on the Super Simple CPU. I have been struggling because there are only 16 bits of memory, so I am not sure how to edit the GCD program to accept two inputs without going beyond the memory limit. Could someone help, please? Thank you!
You could use the Euclidean GCD algorithm. If both inputs fit in 16 bits, it will work for you.
This is the pseudocode:
function gcd(a, b)
    while b ≠ 0
        t := b;
        b := a mod b;
        a := t;
    return a;
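The pseudocode above translates directly into a short Python sketch. The loop only ever holds two values at a time, which is what makes it attractive on a machine with very little memory:

```python
def gcd(a, b):
    # iterative Euclid, mirroring the pseudocode above
    while b != 0:
        a, b = b, a % b
    return a
```

If the target instruction set has no remainder operation, the same loop can be rewritten with repeated subtraction (`while a != b: subtract the smaller from the larger`), at the cost of more iterations.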
I'll make examples in Python, since I use Python, but the question is not about Python.
Let's say I want to increment a variable by a specific value so that it stays within given boundaries.
So for increment and decrement I have these two functions:
def up(a, s, Bmax):
    r = a + s
    if r > Bmax: return Bmax
    else: return r

def down(a, s, Bmin):
    r = a - s
    if r < Bmin: return Bmin
    else: return r
Note: it is assumed that the initial value of the variable "a" is already within the boundaries (min <= a <= max), so an additional initial check does not belong in these functions. What makes me curious is that almost every program I write needs these functions.
The question is:
Are these classified as typical operations, and do they have specific names?
If yes, is there a correspondence to intrinsic processor functionality, so that they are optimized by some compilers?
The reason I ask is pure curiosity; of course I cannot optimize it in Python, and I know little about CPU architecture.
To be more specific, on a lower level, for an unsigned 8-bit integer, the increment would, I suppose, look like this:
def up(a, s, Bmax):
    counter = 0
    while True:
        if counter == s: break
        if a == Bmax: break
        if a == 255: break
        a += 1
        counter += 1
    return a
I know the latter would not make much sense in Python, so treat it as my naive attempt to imagine low-level code that adds the value in place. There are some nuances, e.g. signed vs. unsigned, but I am interested mainly in unsigned integers since I come across them more often.
It is called saturation arithmetic. It has native support on DSPs and GPUs (not a random pair: both deal with signals).
For example, the NVIDIA PTX ISA lets the programmer choose whether an addition saturates or not:
add.type d, a, b;
add{.sat}.s32 d, a, b; // .sat applies only to .s32
.sat
limits result to MININT..MAXINT (no overflow) for the size of the operation.
The TI TMS320C64x/C64x+ DSP has support for
Dual 16-bit saturated arithmetic operations
and instructions like sadd to perform a saturated add, and even a whole register (the Saturation Status Register) dedicated to collecting precise information about saturation while executing a sequence of instructions.
Even mainstream x86 supports saturation, with instructions like vpaddsb and similar (including conversions).
Another example is the GLSL clamp function, used to make sure color values are not outside the range [0, 1].
In general, if an architecture must be optimized for signal/media processing, it has support for saturation arithmetic.
Much rarer is support for saturation with arbitrary bounds, e.g. asymmetric bounds, non-power-of-two bounds, or non-word-sized bounds.
However, saturation can be implemented easily as min(max(v, b), B) where v is the result of the unsaturated (and not overflowed) operation, b the lower bound and B the upper bound.
So any architecture that supports finding the minimum and the maximum without a branch can implement any form of saturation efficiently.
See also this question for a more realistic example of how saturated addition is implemented.
As a side note, the default behavior is wrap-around: for 8-bit quantities the sum 255 + 1 equals 0 (i.e. operations are modulo 2^8).
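A minimal Python sketch of the three behaviors discussed above: saturating addition for unsigned 8-bit values, the default wrap-around addition, and the general `min(max(...))` clamp with arbitrary bounds:

```python
U8_MAX = 255  # upper bound for an unsigned 8-bit value

def sat_add_u8(a, b):
    # saturating add: the result is clamped at 255 instead of wrapping
    return min(a + b, U8_MAX)

def wrap_add_u8(a, b):
    # default wrap-around add: arithmetic modulo 2^8
    return (a + b) & 0xFF

def clamp(v, lo, hi):
    # general saturation with arbitrary bounds, as min(max(v, lo), hi)
    return min(max(v, lo), hi)
```

The `clamp` form is exactly what the question's `up`/`down` functions compute, which is why branchless min/max instructions are enough to implement it efficiently.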
Someone asked me this question. I am confused: how is it possible to multiply two numbers without using the multiplication operator? Please share your ideas.
It's quite simple. See this code:
int multiplication(int a, int b){
    if (b == 0)
        return 0;      // a * 0 is 0
    else if (b == 1)
        return a;
    else
        return a + multiplication(a, --b);
}
I have not tested it; I am just sharing the idea.
Assuming the terms you are multiplying are non-negative integers, you don't even need full addition, just a successor function (i.e. the ability to add one). That's because ...
a*b is the same as add together b lots of a
a+b is the same as add 1 to a, b times
So you could program a*b with nested loops like this:
answer = 0
for iMultiply from 1 to b
    for iAdd from 1 to a
        answer++
    next iAdd
next iMultiply
return answer
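The nested-loop pseudocode above translates to Python as follows (assuming, as stated, that both operands are non-negative integers):

```python
def multiply(a, b):
    # a*b using only a successor operation (+1)
    answer = 0
    for _ in range(b):      # add together b lots of a ...
        for _ in range(a):  # ... where each addition is "+1, a times"
            answer += 1
    return answer
```

This also fixes the edge case of the recursive version: multiplying by zero naturally yields zero because the outer loop never runs.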
I have to multiply arrays A and B element by element, calculate the sum over the first dimension, and return the result in C. A is an N-by-M-by-L array. B is an N-by-1-by-L array. N and M are less than 30, but L is very large. My code is:
C=zeros(size(B));
parfor i=1:size(A,2)
C(i,1,:) = sum(bsxfun(@times, A(:,i,:), B(:,1,:)), 1);
end
The problem is that the code is slow. Can anyone help make it faster? Thank you very much.
How about something along the lines of this:
C = permute(sum(A.*repmat(B,1,M)),[2,1,3]);
This speeds computation on my PC up by a factor of ~4. Interestingly enough, you can actually speed up the computation by a factor of 2 (at least on my PC) simply by changing the parfor loop to a for loop.
If I understand correctly, just do this:
C = squeeze(sum(bsxfun(@times, A, B)));
This gives C with size M x L.
Taking the comments from Luis Mendo into account, I propose this command:
C = reshape(sum(bsxfun(@times, A, B), 1), size(B))
I think this is the fastest.
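As a cross-check of what the reduction computes, here is a NumPy sketch of the same operation (the sizes are made up for illustration; the real L is very large):

```python
import numpy as np

# made-up small sizes for illustration
N, M, L = 4, 3, 5
rng = np.random.default_rng(0)
A = rng.random((N, M, L))   # N-by-M-by-L
B = rng.random((N, 1, L))   # N-by-1-by-L

# broadcast multiply, then sum over the first dimension --
# the same reduction as sum(bsxfun(@times, A, B), 1) in MATLAB
C = (A * B).sum(axis=0)               # shape (M, L)

# equivalent einsum formulation
C_einsum = np.einsum('nml,nl->ml', A, B[:, 0, :])
```

Both forms avoid the explicit loop over columns entirely, which is where the MATLAB vectorized answers get their speedup.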
A few months ago I asked a question on StackOverflow about an "Algorithm to find factors for primes in linear time".
From the replies it was clear that my assumptions were wrong and that the algorithm cannot find factors in linear time.
However, I would like to know whether the algorithm is a unique way to do division and find factors; that is, is any similar or identical way of doing division known? I am posting the algorithm here again:
Input: a number (whose factors are to be found)
Output: two factors of the number. If one of the factors found is 1, it can be concluded that the number is prime.
Integer N, mL, mR, r;
Integer temp1;               // used for temporary data storage

mR = mL = square root of (N);

/* Check if perfect square */
temp1 = mL * mR;
if temp1 equals N then
{
    r = 0;                   // answer is found
    End;
}

mR = N/mL;                   // so that mL <= mR
r = N%mL;
while r not equals 0 do
{
    mL = mL - 1;
    r = r + mR;
    temp1 = r/mL;
    mR = mR + temp1;
    r = r%mL;
}
End;                         // mL and mR hold the answer
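For anyone who wants to experiment with it, the pseudocode transcribes into runnable Python roughly like this (math.isqrt stands in for "square root of N"; positive integer input is assumed):

```python
import math

def factor(N):
    # transcription of the pseudocode above
    mL = mR = math.isqrt(N)
    if mL * mR == N:              # perfect square: done
        return mL, mR
    mR, r = N // mL, N % mL       # now mL <= mR
    while r != 0:
        mL -= 1
        r += mR                   # shift one mR from mR*mL into r
        mR += r // mL
        r %= mL
    return mL, mR                 # mL * mR == N
```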
Let me know your thoughts. The question is purely out of personal interest: to know whether a similar algorithm exists to do division and find factors, which I have not been able to find.
I understand and appreciate that you may need to understand my funny algorithm to give answers! :)
Further explanation:
Yes, it does work on numbers above 10 (which I tested) and on all positive integers.
The algorithm depends on the remainder r to proceed. I basically formed the idea that for a number, its factors give us the sides of a rectangle whose area is the number itself. For all other candidates which are not factors, there would be a remainder left; consequently the rectangle cannot be formed completely.
Thus the idea is that for each decrease of mL, we can increase r by mR (basically shifting one mR from mR*mL into r), and then this larger r is divided by mL to see by how much we can increase mR (how many times mR can be increased for one decrease of mL). The remaining r is then r mod mL.
I have counted the number of while-loop iterations it takes to find the factors, and it comes out below or equal to 5*N for all numbers. Trial division will take more.
Thanks for your time, Harish
The main loop is equivalent to the following C code:
mR = mL = sqrt(N);
...
mR = N/mL; // have the value of mL less than mR
r = N%mL;
while (r) {
mL = mL-1;
r += mR;
mR = mR + r/mL;
r = r%mL;
}
Note that after each r += mR statement, the value of r is r%(mL-1)+mR. Since r%(mL-1) < mL, the value of r/mL in the next statement is either mR/mL or 1 + mR/mL. I agree (as a result of numerical testing) that it works out that mR*mL = N when you come out of the loop, but I don't understand why. If you know why, you should explain why, if you want your method to be taken seriously.
In terms of efficiency, your method uses the same number of loops as Fermat factorization although the inner loop of Fermat factorization can be written without using any divides, where your method uses two division operations (r/mL and r%mL) inside its inner loop. In the worst case for both methods, the inner loop runs about sqrt(N) times.
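For comparison, here is a minimal sketch of the Fermat factorization loop mentioned above (it assumes N is an odd integer greater than 1; for a prime it returns 1 and N):

```python
import math

def fermat_factor(N):
    # Fermat factorization: find a, b with N = a^2 - b^2 = (a-b)(a+b)
    a = math.isqrt(N)
    if a * a < N:
        a += 1                             # ceil(sqrt(N))
    b2 = a * a - N
    while math.isqrt(b2) ** 2 != b2:       # until a^2 - N is a perfect square
        a += 1
        b2 = a * a - N
    b = math.isqrt(b2)
    return a - b, a + b
```

Note that the loop body needs only additions and a square-root test, which is why it can be written without divisions.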
There are others, for example Pollard's rho algorithm, and GNFS which you were already told about in the previous question.
I have been asked to make a simple sort algorithm to sort a random series of 6 numbers into numerical order. However, I have been asked to do this using Barebones, a theoretical language put forward in the book Computer Science: An Overview.
Some information on the language can be found here.
Just to clarify, I am a student teacher and have been doing analysis on "mini programming languages" and their uses in a teaching environment. I suggested to my tutor that I look at Barebones (the language) and asked what sort of example program I should write. He suggested a simple sort algorithm. Now, having looked at the language, I can't understand how I can do this without using arrays and if statements.
The code to swap the values of two variables would be:
while a not 0 do;
    incr Aux1;
    decr a;
end;
while b not 0 do;
    incr Aux2;
    decr b;
end;
while Aux1 not 0 do;
    incr b;
    decr Aux1;
end;
while Aux2 not 0 do;
    incr a;
    decr Aux2;
end;
However, the language does not provide < or > operators. What could I use as a workaround?
Oh, come on, start thinking about the problem!
What's an array? A list of variables.
So Barebones doesn't have an if statement? It's got while loops.
Get on with your homework.
Interesting exercise.
I would suggest you try to first implement the following:
Swap values of two variables
Set a variable (say z) to zero if value of variable x >= value of variable y.
Since the program is supposed to sort exactly 6 integers, I suppose you can assume they are in the variables x1, x2, .., x6.
In the end we need: x1 <= x2 <= ... <= x6.
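The second sub-task (the comparison) is the tricky part without < or >. A Python sketch of one standard trick, under the usual Barebones convention that decrementing zero leaves zero: drain x and y together, and whatever is left of y tells you how they compare.

```python
def decr(v):
    # Barebones 'decr': decrementing zero leaves zero
    return v - 1 if v > 0 else 0

def ge_to_zero(x, y):
    # z ends up as max(y - x, 0), i.e. z == 0 exactly when x >= y,
    # using only incr/decr and while-not-zero loops
    z = y
    while x != 0:
        z = decr(z)
        x = decr(x)
    return z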