How long for X basic operations to execute? - execution-time

How long for 10^9 basic operations to execute?
First, how would you go about working out how long this will take to execute?
Second, what if instead of 10^9 I am given an arbitrary expression, e.g. x^10, 10^x, or n^x?
Thank you!
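For a rough estimate, a modern CPU at a few GHz retires on the order of 10^9 simple operations per second, so 10^9 basic operations take roughly one second, and 10^x operations take roughly 10^(x-9) seconds. Below is a minimal C++ sketch to measure it empirically; the loop body and the volatile accumulator are illustrative assumptions, chosen so the compiler cannot optimise the loop away:

#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    const std::int64_t N = 1000000000;      // 10^9 basic operations
    volatile std::int64_t sum = 0;          // volatile keeps the loop from being optimised away

    auto start = std::chrono::steady_clock::now();
    for (std::int64_t i = 0; i < N; ++i)
        sum = sum + 1;                      // one "basic operation" per iteration
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = stop - start;
    std::cout << "10^9 additions took " << elapsed.count() << " s\n";
}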

Related

App Inventor app does not take answers in divisions

I am trying to make an app that has the four basic mathematical operations. Addition, subtraction, multiplication and division.
Each operation has a series of exercises, and each correct answer adds a point to a score counter.
Both the exercises and the answer choices are generated at random; they are not questions and answers stored in a list.
Everything is ready and finished, but I have a problem with division, and it is the following.
If, for example, the result of the division has exactly two decimals, the score counter accepts the selected answer as correct. But if the result of the division has more than two decimals, the score counter does not accept the answer as correct.
Example:
20/8 = 2.5. The result has no further decimals, so the score counter accepts it as the correct answer.
9/7 = 1.28571428571... This result has many decimals, and the score counter does not accept it as the correct answer.
The problem is not about rounding the figures or formatting the number of decimals. The problem is that, for some reason, answers whose exact result has more than two decimals are never accepted as correct.
No matter whether I round the result to an integer or format each result to two decimals, the score counter still does not mark the answer as correct.
For example, if I take the division 9/7 = 1.28571428571 and format the result to two decimals, leaving it as 1.28, the score counter does not accept this as a correct result.
Even if I round the result to 1, the same problem occurs.
How can this be fixed?
Many thanks to anyone who can help me find a solution.
P.S.: I'm not a programmer, just an amateur who is getting started, so please keep the answers at a layman's level. Thanks in advance.
Here are the blocks
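The symptom described (results that terminate after two decimals compare as correct, results with repeating decimals do not) is characteristic of testing floating-point values for exact equality. Purely as an illustration of the usual fix, and not as App Inventor blocks, here is a minimal sketch in C++ of a tolerance-based comparison; the tolerance of 0.005 is an assumption matching an answer displayed with two decimals:

#include <cmath>
#include <iostream>

// Compare two floating-point values within a tolerance instead of using ==.
bool answers_match(double given, double expected, double tolerance = 0.005) {
    return std::fabs(given - expected) < tolerance;
}

int main() {
    double exact = 9.0 / 7.0;          // 1.2857142857...
    double displayed = 1.29;           // the same answer rounded to two decimals

    std::cout << (displayed == exact) << "\n";             // 0: exact equality fails
    std::cout << answers_match(displayed, exact) << "\n";  // 1: tolerant comparison succeeds
}

The same idea expressed in App Inventor blocks would be to check whether the absolute difference between the selected answer and the computed result is below a small threshold, rather than testing the two values for equality.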

I need to generate a 32-digit random number using Lua; please suggest a way

I want to generate a 32-character key containing alphanumeric characters, and the keys should always be UNIQUE. Please suggest a way of doing that.
I've used the math.random function, but I get the same random number over and over again; I want each key to be unique.
If you want guaranteed uniqueness you have to keep a list of used numbers; otherwise there is always a chance of getting a used number again, although with 32 digits that is quite unlikely.
You obviously did not read the documentation on math.random:
http://www.lua.org/manual/5.3/manual.html#pdf-math.random
Otherwise you would know that math.random will always give you the same sequence of pseudo-random numbers unless you change the seed value using math.randomseed...
Please make sure to read the documentation on functions before you use them.
The trivial program below generates 1000 random 32-bit keys without repetition and with no wasted effort:
M=2^32-1   -- keys are drawn from [1, 2^32-1]
R={}       -- set of keys seen so far
N=1000     -- how many keys to generate
n=0        -- count of distinct keys
for i=1,N do
  local x=math.random(M)
  if R[x]==nil then n=n+1 R[x]=n print(n,i,n==i,x) end
end
The point is that M is so large that repetitions are very unlikely, even if we increase N to one million.
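The question asks for Lua, but purely as an illustration of the two points above (seed the generator once, then draw characters from an alphanumeric alphabet), here is a sketch in C++; the 62-character alphabet and the key length of 32 come from the question, everything else is an assumption:

#include <iostream>
#include <random>
#include <string>

// Build a 32-character alphanumeric key from an already-seeded generator.
std::string make_key(std::mt19937 &gen) {
    static const std::string alphabet =
        "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    std::uniform_int_distribution<std::size_t> pick(0, alphabet.size() - 1);
    std::string key;
    for (int i = 0; i < 32; ++i)
        key += alphabet[pick(gen)];
    return key;
}

int main() {
    std::mt19937 gen(std::random_device{}());   // seed once, not on every call
    for (int i = 0; i < 3; ++i)
        std::cout << make_key(gen) << "\n";
}

With 62^32 possible keys, an accidental repetition is negligible, which is the same argument the Lua answer above makes for its smaller 2^32 key space.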

handle the total of Integers exceeding Long

I have the following code:
Dim L as Integer
Dim R as Integer
Dim a as Integer
a=((L+R)/2)
Now (L + R) can exceed the limit of Integer.
In order to handle this case,
I have the following three options:
Define L (or R) as Long
Write a= ((CLng(L)+R)/2)
Declare new variable as Long :
Like this
Dim S as Long
S=S+L+R
I am confused about which one is best to implement.
Change all the variables to Long.
The code will be more robust.
The code will execute faster.
The additional 2 bytes of memory per variable is totally insignificant, unless you have many millions of these integer variables in use simultaneously.
You've already posted several questions here about integer overflow errors. With all respect, I really advise you to just change all your Integer variables to Long and get on with your coding.
I'd pick #2. I think (though I'm not sure) that it uses a little less memory than #1, because there is only one Long value in the expression, whereas changing L or R to Long would require space for two Long values.
I suspect #2 and #3 might end up looking the same (or pretty close) after compilation, and I personally think that in this case an extra variable wouldn't make the code more readable. The difference, of course, is that in #2 the result of L + R might not need to be saved anywhere, only moved between registers for the calculation.
I'm speculating a lot here, and I'm posting this partly because I hope that if I'm wrong, someone will correct me. Anyway, with the reasoning above, I'd go with #2. Edit: at least I'm quite certain that if any of the options uses less memory than the others, it's #2, but they might all be the same in that regard.
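As an aside, the midpoint computation itself has a well-known rewrite that avoids the overflow without widening anything: compute L + (R - L) / 2, which stays in range as long as 0 <= L <= R. A minimal sketch in C++ (the VB version would use the same expression; the values below are just illustrative):

#include <climits>
#include <iostream>

int main() {
    int L = INT_MAX - 2;   // both values near the top of the Integer range
    int R = INT_MAX - 1;

    // (L + R) / 2 would overflow here; L + (R - L) / 2 stays in range
    // whenever 0 <= L <= R.
    int mid = L + (R - L) / 2;

    // Option #2 from the question, translated: widen one operand before adding.
    long long mid2 = (static_cast<long long>(L) + R) / 2;

    std::cout << mid << " " << mid2 << "\n";   // both print 2147483645
}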

Query on Lambda calculus

Continuing with the exercises in the book Lambda Calculus, the question is as follows:
Suppose a symbol of the λ-calculus alphabet is always 0.5 cm wide. Write down a λ-term with length less than 20 cm having a normal form with length at least (10^10)^10 lightyears. The speed of light is c = 3 * (10^10) cm/sec.
I have absolutely no idea what needs to be done in this question. Can anyone please give me some pointers to help me understand the question and what needs to be done here? Please do not solve it or mention the final answer.
Not knowing anything about lambda calculus, I understand the question as follows:
You have to write a λ-term in less than 20 cm, where each symbol is 0.5 cm, meaning you are allowed fewer than 40 symbols. This λ-term should reduce to a normal form with a length of at least (10^10)^10 = 10^100 lightyears, which works out to (10^100) * 2 * 3*(10^10) * 365*24*60*60 symbols. Basically a very long recursive function.
Here's another hint: in lambda calculus, the typical way to represent an integer is by its Church encoding, which is a unary representation. So if you convert the distances into numbers, one thing that would do the trick would be a small function which, when applied to a small number, terminates and produces a very large number.
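To make that hint concrete without giving anything exercise-specific away, here are the standard definitions (a sketch, nothing more): the Church numeral for n, and the effect of applying one numeral to another.

$c_n \equiv \lambda f.\,\lambda x.\, f^{n}\, x$   (the symbol $f$ applied $n$ times to $x$)
$c_n\, c_m \to_\beta c_{m^{n}}$   (applying one Church numeral to another exponentiates)

So a term only a handful of symbols long, built by applying small Church numerals to one another, normalises to a numeral whose body is astronomically long, which is exactly the gap between fewer than 40 symbols and 10^100 lightyears that the exercise is after.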

What's better: multiplication by 2 or adding the number to itself? (bignums)

I need some help deciding what is better performance-wise.
I'm working with bigints (more than 5 million digits), and most of the computation (if not all) is doubling the current bigint. So I wanted to know: is it better to multiply every cell (part of the bigint) by 2, take the mod, and so on, or is it better to just add the bigint to itself?
I'm also thinking a bit about ease of implementation (adding two bigints is more complicated than multiplying by 2), but I'm more concerned about performance than about code size or ease of implementation.
Other info:
I'll code it in C++; I'm fairly familiar with bigints (I've just never come across this particular problem).
I'm not in need of any source code or the like; I just need an informed opinion and an explanation/proof, since I need to make a good decision from the start. The project will be fairly large and mostly built around this part, so it depends heavily on what I choose now.
Thanks.
Try bit-shifting. That is probably the fastest method. When you shift an integer to the left by one, you double it (multiply by 2). If you have several integers chained together, you need to save each one's most significant bit, because after the shift it is gone, and it becomes the least significant bit of the next integer.
This doesn't actually matter a whole lot. Modern 64-bit computers can add two integers in the same time it takes to bit-shift them (one clock cycle), so it will take just as long. I suggest you try the different methods and report back if there are any major time differences. All three methods should be easy to implement, and generating a 5-million-digit number should also be easy using a random number generator.
To store a 5-million-digit integer, you'll need quite a few bits: 5 million if you were referring to binary digits, or about 17 million bits if those were decimal digits. Let's assume the numbers are stored in a binary representation and your arithmetic happens in chunks of some size, e.g. 32 bits or 64 bits.
If adding the number to itself, each chunk is added to itself and to the carry from the addition of the previous chunk. Any carry out is kept for the next chunk. That's a couple of addition operations, plus some bookkeeping to track the carry.
If multiplying by two by left-shifting, that's one left-shift operation for the multiplication, plus one right-shift and an AND with 1 to obtain the carry. The carry bookkeeping is a little simpler.
Superficially, the shift version appears slightly faster. The overall cost of doubling the number, however, is dominated by the size of the number: a 17-million-bit number exceeds the CPU's L1 cache, and processing time is likely dominated by memory fetches. On modern PC hardware, a memory fetch is orders of magnitude slower than an addition or a shift.
With that, you might want to pick the one that's simpler for you to implement. I'm leaning towards the left-shift version.
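A minimal sketch of the left-shift version described above, assuming the bigint is stored little-endian in 64-bit limbs (the vector layout and the function name are assumptions for illustration, not the asker's code):

#include <cstdint>
#include <vector>

// Double a bigint stored as little-endian 64-bit limbs, in place.
void double_in_place(std::vector<std::uint64_t> &limbs) {
    std::uint64_t carry = 0;
    for (std::uint64_t &limb : limbs) {
        std::uint64_t out = limb >> 63;   // bit that would be shifted out of this limb
        limb = (limb << 1) | carry;       // shift left, bring in the carry from below
        carry = out;
    }
    if (carry)                            // the number grew by one limb
        limbs.push_back(1);
}

The add-to-itself version replaces the shift with limb + limb plus the incoming carry; as noted above, at this size both are likely to be limited by memory bandwidth rather than by the ALU.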
Did you try shifting the bits?
<< multiplies by 2
>> divides by 2
Left-shifting by one bit is the same as a multiplication by two!
This link explains the mechanism and gives examples.
int A = 10; //...01010 = 10
int B = A<<1; //..010100 = 20
If it really matters, you need to write all three methods (including the bit-shift!) and profile them on various inputs. (Use small numbers, large numbers, and random numbers to avoid biasing the results.)
Sorry for the "Do it yourself" answer, but that's really the best way. No one cares about this result more than you, which just makes you the best person to figure it out.
Well-implemented multiplication of bignums is O(n log n log log n). Addition is O(n). Therefore, adding the number to itself should be faster than multiplying by two. However, that's only true if you're multiplying two arbitrary bignums; if your library knows you're multiplying a bignum by a small integer, it may be able to optimize that to O(n).
As others have noted, bit-shifting is also an option. It should be O(n) as well, but with a smaller constant factor. It will only work, though, if your bignum library supports bit-shifting.
most of the computation (if not all) is in the part of doubling the current bigint
If all your computation is in doubling the number, why don't you just keep a distinct (base-2) scale field? Then just add one to scale, which can just be a plain-old int. This will surely be faster than any manipulation of some-odd million bits.
IOW, use a bigfloat.
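A sketch of that scale-field idea, assuming doubling really is the dominant operation (the struct and member names are made up for illustration):

#include <cstdint>

// Represent the value as mantissa * 2^scale; doubling only touches the scale.
struct ScaledBigInt {
    // mantissa would be the existing multi-limb bigint; omitted here
    std::int64_t scale = 0;

    void double_value() { ++scale; }   // O(1), no limbs touched
};

int main() {
    ScaledBigInt x;                    // conceptually: mantissa * 2^0
    for (int i = 0; i < 1000000; ++i)
        x.double_value();              // a million doublings cost a million increments
}

The bits only need to be materialised when the full number is actually required, for example when printing it.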
random benchmark
use Math::GMP;
use Time::HiRes qw(clock_gettime CLOCK_REALTIME CLOCK_PROCESS_CPUTIME_ID);
my $n = Math::GMP->new(2);
$n = $n ** 1_000_000;                 # start with a 1,000,000-bit number
my $m = Math::GMP->new(2);
$m = $m ** 10_000;                    # grow by 10,000 bits each round
my $str;
for (my $bits = 1_000_000; $bits <= 2_000_000; $bits += 10_000) {
    my $start = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
    $str = "$n" for (1..3);           # time three binary-to-decimal conversions
    my $stop = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
    print "$bits,@{[ ($stop - $start) / 3 ]}\n";   # average seconds per conversion
    $n = $n * $m;
}
This seems to show that somehow GMP is doing its conversion in O(n) time (where n is the number of bits in the binary number). This may be due to the special case of having a 1 followed by a million (or two million) zeros; the GNU MP docs say it should be slower (but still better than O(n^2)).
http://img197.imageshack.us/img197/6527/chartp.png
