0-999 Up/Down Counter with PIC16F877A

I have a problem with a microprocessor C programming assignment.
Hardware picture is here!
I have to write C code for the given hardware that counts from 0 to 999, outputting the 10^0 digit on port D, the 10^1 digit on port C, and the 10^2 digit on port B. The direction of counting is controlled by pin A0: if pin_a0 = 1 the hardware counts up, otherwise it counts down.
I really appreciate any help, thank you so much
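Since no code was posted, here is a minimal sketch of one way to do it, assuming the CCS C compiler, a 4 MHz crystal, and that each port drives one display digit (e.g. through a 7447-style BCD decoder); all of those details are assumptions rather than givens:

#include <16F877A.h>
#fuses XT, NOWDT, NOPROTECT, NOLVP
#use delay(clock=4000000)

void main(void) {
   int16 count = 0;

   set_tris_a(0xFF);   // port A is input; pin A0 selects the direction
   set_tris_b(0x00);   // ports B, C and D drive the three digits
   set_tris_c(0x00);
   set_tris_d(0x00);

   while (TRUE) {
      output_d(count % 10);          // 10^0 digit on port D
      output_c((count / 10) % 10);   // 10^1 digit on port C
      output_b(count / 100);         // 10^2 digit on port B

      if (input(PIN_A0))
         count = (count == 999) ? 0 : count + 1;   // pin_a0 = 1: count up
      else
         count = (count == 0) ? 999 : count - 1;   // pin_a0 = 0: count down

      delay_ms(500);   // step rate is a guess; adjust as needed
   }
}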

Related

Prime Factorization of numbers of form x^a + b where x is prime

I need to calculate the prime factorization of large numbers; by large I mean on the order of 10^100.
I get an input a[0] <= 10^5 (whose prime factors I have already calculated using a sieve and other optimizations). After that I get a series of inputs a[1], a[2], a[3], ..., all in the range 2 <= a[i] <= 10^5. I need to maintain the product and compute the factorization of the new product each time. I have the following math:
Let X be the product held in memory; X can be represented as
X = (p[0]^c[0]) * (p[1]^c[1]) * (p[2]^c[2]) * ... where the p[i] are its prime factors.
So I save this as
A[p[0]] = c[0], A[p[1]] = c[1], ... Since p[i] <= 100000, this seems to work pretty well,
and as each new number arrives, I just add the powers of its primes into A.
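For concreteness, here is a sketch of that bookkeeping in C (the names LIMIT, spf, A and absorb are illustrative, not from the question):

#include <stdio.h>

#define LIMIT 100000

int spf[LIMIT + 1];   // smallest prime factor of each number up to LIMIT
int A[LIMIT + 1];     // A[p] = exponent of prime p in the running product

void build_sieve(void) {
    for (int i = 2; i <= LIMIT; i++)
        if (spf[i] == 0)                      // i is prime
            for (int j = i; j <= LIMIT; j += i)
                if (spf[j] == 0) spf[j] = i;
}

void absorb(int x) {                          // multiply the product by x
    while (x > 1) {
        int p = spf[x];
        while (x % p == 0) { A[p]++; x /= p; }
    }
}

int main(void) {
    build_sieve();
    absorb(12);                               // product = 2^2 * 3
    absorb(90);                               // product = 2^3 * 3^3 * 5
    for (int p = 2; p <= LIMIT; p++)
        if (A[p]) printf("%d^%d ", p, A[p]);  // prints: 2^3 3^3 5^1
    printf("\n");
    return 0;
}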
So this works really well and is also fast enough. Now I am thinking of optimizing space, even at the cost of some time efficiency.
So if I can represent any number P as x^a + b where x is prime, can I factorize it? P obviously doesn't fit in memory, but 2 <= x, a, b <= 100000. Or is there any other method that would save me the space of A? I am okay with a slower algorithm than the above one.
I don't think representing a number as x^a + b with prime x makes it any easier to factor.
Factoring hundred-digit numbers isn't all that hard these days. A good personal computer with lots of cores running a good quadratic sieve can factor most hundred-digit numbers in about a day, though you should know that hundred-digit numbers are about at the limit of what is reasonable to factor with a desktop computer. Look at Jason Papadopoulos' program msieve for a cutting edge factorization program.
First, you'd better do some math on paper (perhaps some simplifications are possible; I don't know...).
Then you need to use an arbitrary-precision arithmetic (a.k.a. bignum or bigint) library. I recommend GMPlib, but there are other ones.
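For instance, a minimal taste of GMP's C interface (the loop inputs here are arbitrary placeholders):

#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t product;
    mpz_init_set_ui(product, 1);
    for (unsigned long a = 2; a <= 100000; a += 9973)   // sample inputs
        mpz_mul_ui(product, product, a);                // product *= a
    gmp_printf("%Zd\n", product);   // prints an arbitrarily large integer
    mpz_clear(product);
    return 0;
}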
See also this answer.

Fastest method for adding/summing the individual digit components of a number

I saw a question on a math forum a while back where a person was discussing adding up the digits in a number over and over again until a single digit is achieved. (i.e. "362" would become "3+6+2", which would become "11"; then "11" would become "1+1", which would become "2"; therefore "362" would return 2.) I wrote some nice code to get an answer to this and posted it, only to be outdone by a user who suggested that any number modulo 9 is equal to this "infinite digit sum". I checked it and he was right... well, almost right: if zero was returned you had to switch it out with a "9", but that was a very quick fix...
362 → 3+6+2 = 11 → 1+1 = 2
or...
362%9 = 2
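(As an aside, the "switch 0 for 9" fix can be folded into the formula itself; a one-line sketch in C:)

int digital_root(int n) {
    // for n > 0, 1 + (n - 1) % 9 maps multiples of 9 to 9 instead of 0
    return n == 0 ? 0 : 1 + (n - 1) % 9;
}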
Anyway, the mod-9 method works fantastically for infinitely adding the sum of the digits until you are left with just a single digit... but what about only doing it once (i.e. 362 would just return "11")? Can anyone think of fast algorithms?
There's a cool trick for summing the 1 bits in binary with a fixed-width integer. At each iteration, you separate the digits into two values by masking, bit-shift one value down, then add. In the first iteration you separate every other digit; in the second, pairs of digits; and so on.
Given that 27 is 00011011 as 8-bit binary, the process is...
00010001 + 00000101 = 00010110 <- every other digit step
00010010 + 00000001 = 00010011 <- pairs of digits
00000011 + 00000001 = 00000100 <- quads, giving final result 4
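In C, for a 32-bit integer, the same steps look like this (the classic SWAR popcount, continuing the pattern up through halves):

#include <stdint.h>

uint32_t popcount32(uint32_t x) {
    x = (x & 0x55555555u) + ((x >> 1) & 0x55555555u);   // every other bit
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);   // pairs of bits
    x = (x & 0x0F0F0F0Fu) + ((x >> 4) & 0x0F0F0F0Fu);   // quads
    x = (x & 0x00FF00FFu) + ((x >> 8) & 0x00FF00FFu);   // bytes
    x = (x & 0x0000FFFFu) + (x >> 16);                  // halves
    return x;   // popcount32(27) == 4, as in the example above
}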
You could do a similar trick with decimal, but it would be less efficient than a simple loop unless you had a direct representation of decimal numbers with fast operations to zero out selected digits and to do digit-shifting. So for 12345678 you get...
02040608 + 01030507 = 03071115 <- every other digit
00070015 + 00030011 = 00100026 <- pairs
00000026 + 00000010 = 00000036 <- quads, final result
So 1+2+3+4+5+6+7+8 = 36, which is correct, but you can only do this efficiently if your number representation is fixed-width decimal. It always takes lg(n) iterations, where n is the number of digits and lg means the base-two logarithm, rounded upwards.
To expand on this a little (based on in-comments discussions), let's pretend this was sane, for a bit...
If you count single-digit additions, there's actually more work than a simple loop here. The idea, as with the bitwise trick for counting bits, is to re-order those additions (using associativity) and then to compute as many as possible in parallel, using a single full-width addition to implement two half-width additions, four quarter-width additions etc. There's significant overhead for the digit-clearing and digit-shifting operations, and even more if you implement this as a loop (calculating or looking up the digit-masking and shift-distance values for each step). The "loop" should probably be fully unrolled and those masks and shift-distances be included as constants in the code to avoid that.
A processor with support for Binary Coded Decimal (BCD) could handle this. Digit masking and digit shifting would be implemented using bit masking and bit shifting, as each decimal digit would be encoded in 4 (or more) bits, independent of the encoding of other digits.
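For example, with digits packed one per nibble (so 12345678 is stored as 0x12345678), the steps above become a sketch like this; the packing is an assumption for illustration:

#include <stdint.h>

unsigned bcd_digit_sum(uint32_t x) {   // x holds 8 packed BCD digits
    // after the first step the fields hold binary sums, no longer BCD
    x = (x & 0x0F0F0F0Fu) + ((x >> 4) & 0x0F0F0F0Fu);   // every other digit
    x = (x & 0x00FF00FFu) + ((x >> 8) & 0x00FF00FFu);   // pairs of digits
    x = (x & 0x0000FFFFu) + (x >> 16);                  // quads
    return x;   // bcd_digit_sum(0x12345678) == 36
}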
One issue is that BCD support is quite rare these days. It used to be fairly common in the 8 bit and 16 bit days, but as far as I'm aware, processors that still support it now do so mainly for backward compatibility. Reasons include...
Very early processors didn't include hardware multiplication and division, so BCD avoided expensive binary-to-decimal conversions. Now that hardware supports those operations, converting binary to decimal is easy and efficient, so binary is used for almost everything and BCD is mostly forgotten.
There are decimal number representations around in libraries, but few if any high level languages ever provided portable support to hardware BCD, so since assembler stopped being a real-world option for most developers BCD support simply stopped being used.
As numbers get larger, even packed BCD is quite inefficiently packed. Number representations base 10^x have the most important properties of base 10, and are easily decoded as decimal. Base 1000 only needs 10 bits per three digits, not 12, because 2^10 is 1024. That's enough to show you get an extra decimal digit for 32 bits - 9 digits instead of 8 - and you've still got 2 bits left over, e.g. for a sign bit.
The thing is, for this digit-totalling algorithm to be worthwhile at all, you need to be working with fixed-width decimal of probably at least 32 bits (8 digits). That gives 12 operations (6 masks, 3 shifts, 3 additions) rather than 15 additions for the (fully unrolled) simple loop. That's a borderline gain, though - and other issues in the code could easily mean it's actually slower.
The efficiency gain is clearer at 64 bits (16 decimal digits) as there's still only 16 operations (8 masks, 4 shifts, 4 additions) rather than 31, but the odds of finding a processor that supports 64-bit BCD operations seems slim. And even if you did, how often do you need this anyway? It seems unlikely that it could be worth the effort and loss of portability.
Here's something in Haskell:
sumDigits n =
  if n == 0
    then 0
    else let a = mod n 10
         in a + sumDigits (div n 10)
Oh, but I just read you're doing that already...
(then there's also the obvious:
sumDigits n = sum $ map (read . (:[])) . show $ n
)
For short code, try this:
int digit_sum(int n) {
    if (n < 10) return n;
    return n % 10 + digit_sum(n / 10);
}
Or, in words:
- If the number is less than ten, then the digit sum is the number itself.
- Otherwise, the digit sum is the current last digit (a.k.a. n mod 10, or n % 10), plus the digit sum of everything to the left of that digit (n divided by 10, using integer division).
- This algorithm can also be generalized to any base by substituting that base for 10; see the example below.
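For instance:

int digit_sum_base(int n, int base) {
    if (n < base) return n;
    return n % base + digit_sum_base(n / base, base);
}
// digit_sum_base(27, 2) == 4, matching the binary popcount example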
Or, as a loop instead of recursion:

int digit_sum(int n) {
    int sum = 0;
    while (n >= 10) {
        sum += n % 10;
        n /= 10;
    }
    return sum + n;
}

How to create PI sequentially in Ruby

Out of pure interest, I'm curious how to generate the digits of pi sequentially, so that instead of the full number appearing only after the process finishes, the digits are displayed as they are generated. If that's possible, the number could keep producing itself, and I could implement garbage collection on previously seen digits, creating an infinite series. The outcome would just be a digit being generated every second, following the digits of pi.
Here's what I've found sifting through the internets :
This is the popular computer-friendly algorithm, based on a Machin-like formula:
def arccot(x, unity)
  xpow = unity / x
  n = 1
  sign = 1
  sum = 0
  loop do
    term = xpow / n
    break if term == 0
    sum += sign * term
    xpow /= x * x
    n += 2
    sign = -sign
  end
  sum
end

def calc_pi(digits = 10000)
  fudge = 10
  unity = 10**(digits + fudge)
  pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
  pi / (10**fudge)
end

digits = (ARGV[0] || 10000).to_i
p calc_pi(digits)
To expand on "Moron's" answer: what the Bailey-Borwein-Plouffe formula does for you is let you compute binary (or, equivalently, hex) digits of pi without computing all of the digits before them. This formula was used to compute the quadrillionth bit of pi ten years ago. It's a 0. (I'm sure that you were on the edge of your seat to find out.)
This is not the same thing as a low-memory, dynamic algorithm to compute the bits or digits of pi, which I think is what you mean by "sequentially". I don't think that anyone knows how to do that in base 10 or in base 2, although the BBP algorithm can be viewed as a partial solution.
Well, some of the iterative formulas for pi are also sort of like a sequential algorithm, in the sense that each iteration produces more digits. However, that is also only a partial solution, because the number of digits typically doubles or triples with each step: you wait a while at a given number of digits, and then, whoosh, a lot more digits come quickly.
In fact, I don't know of any low-memory, efficient algorithm to produce the digits of any standard irrational number. Even for e, you might think the standard infinite series is an efficient, low-memory formula, but it only looks low-memory at the beginning, and there are faster algorithms for computing many digits of e anyway.
Perhaps you can work with hexadecimal? David Bailey, Peter Borwein and Simon Plouffe discovered a formula for the nth digit after the decimal point in the hexadecimal expansion of pi.
The formula is:

pi = sum for k from 0 to infinity of (1/16^k) * ( 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) )
You can read more about it here: http://www.andrews.edu/~calkins/physics/Miracle.pdf
The question of whether such a formula exists for base 10 is still open.
More info: http://www.sciencenews.org/sn_arc98/2_28_98/mathland.htm
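To make the digit-extraction idea concrete, here is a rough C sketch of the standard BBP technique (each sub-series is evaluated modulo its denominators; plain double precision limits it to modest digit positions, and it is an illustration rather than production code):

#include <stdio.h>
#include <math.h>

/* 16^e mod m, by square-and-multiply in double precision */
static double pow16_mod(long e, long m) {
    double result = 1.0, base = fmod(16.0, (double)m);
    while (e > 0) {
        if (e & 1) result = fmod(result * base, (double)m);
        base = fmod(base * base, (double)m);
        e >>= 1;
    }
    return result;
}

/* fractional part of sum over k >= 0 of 16^(n-k) / (8k + j) */
static double series(int j, long n) {
    double s = 0.0;
    for (long k = 0; k <= n; k++) {
        s += pow16_mod(n - k, 8 * k + j) / (8.0 * k + j);
        s -= floor(s);                        /* keep only the fraction */
    }
    for (long k = n + 1; k <= n + 100; k++)   /* a bit of the tail */
        s += pow(16.0, (double)(n - k)) / (8.0 * k + j);
    return s - floor(s);
}

int main(void) {
    long n = 0;   /* digit position after the hexadecimal point, 0-based */
    double x = 4 * series(1, n) - 2 * series(4, n)
                 - series(5, n) - series(6, n);
    x -= floor(x);
    printf("hex digit %ld of pi is %X\n", n, (unsigned)(16.0 * x));
    return 0;
}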

How do you seed a PRNG with two seeds?

For a game that I'm making, where solar systems have x and y coordinates, I'd like to use the coordinates to randomly generate the features of each solar system. The easiest way to do this seems to be to seed a random number generator with two seeds, the x and y coordinates. Is there any way to get one reliable seed from the two seeds, or is there a good PRNG that takes two seeds and produces long periods?
EDIT: I'm aware of binary operations between the two numbers, but I'm trying to find the method that will lead to the fewest collisions. Addition and multiplication will easily result in collisions, but what about XOR?
Why not just combine the numbers in a meaningful way to generate your seed? For example, you could add them, which could be unique enough, or perhaps stack them using a little multiplication, for example:
seed = (x << 32) + y
seed1 ^ seed2
(where ^ is the bitwise XOR operator)
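XOR alone collides easily, though: every pair with the same x ^ y produces the same seed. A common lower-collision pattern is to pack the two coordinates into one 64-bit value and then scramble it with an integer mixer, so that nearby coordinates give unrelated seeds. A sketch in C, borrowing the finalizer constants from splitmix64:

#include <stdint.h>

uint64_t seed_from_xy(uint32_t x, uint32_t y) {
    uint64_t z = ((uint64_t)x << 32) | y;   // packing is collision-free
    z += 0x9E3779B97F4A7C15ull;
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
    return z ^ (z >> 31);                   // mixing is a bijection
}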
A simple Fibonacci PRNG uses 2 seeds, one of which should be odd. This generator uses a modulus which is a power of 10. The period is long and invariable, being 1.5 times the modulus; thus for modulus 1000000 (10^6), the period is 1,500,000.
The simple pseudocode is:

Input "Enter power m for the 10^m modulus"; m
mod = 10 ^ m
Input "Enter # of iterations"; n
Input "Enter seed #1"; a
Input "Enter seed #2"; b
For loop = 1 To n
    c = a + b
    If c >= mod Then c = c - mod
    a = b
    b = c
    Print c
Next
This generator is very fast and gives an excellent uniform distribution. Hope this helps.
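For reference, a direct C translation of the pseudocode above, with the modulus fixed at 10^6 and arbitrary seeds (one of them odd):

#include <stdio.h>

int main(void) {
    long mod = 1000000;          // 10^6, so the period is 1,500,000
    long a = 12345, b = 67890;   // seed #1 is odd
    for (int i = 0; i < 10; i++) {
        long c = a + b;
        if (c >= mod) c -= mod;
        a = b;
        b = c;
        printf("%ld\n", c);
    }
    return 0;
}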
Why not use some kind of super simple Fibonacci arithmetic or something like it to produce coordinates directly in base 10? Use the two starting numbers as the seeds. It won't produce random numbers suitable for Monte Carlo or anything like that, but they should be all right for a game. I'm not a programmer or a mathematician and have never tried to code anything, so I couldn't do it for you...
edit - something like f1 = some seed, f2 = some seed, and G = (sqrt(5) + 1) / 2...
then some kind of loop: Xn = (Xn-1 + Xn-2 mod G) mod 1 (which should produce a decimal between 0 and 1), then multiply by whatever and take the least significant digits.
and perhaps, to prevent decay for as long as the numbers need to be produced...
an initial reseeding point at which f1 and f2 will be reseeded based on the generator's own output, which will prevent the sequence of numbers from being describable by a closed expression, so...
if counter = initial reseeding point, then f1 = Xn and f2 = Xn - something... and the reseeding point is set to ceiling(Xn * some multiplier).
So its period should end only when identical values for Xn and Xn - something are re-fed into f1 and f2, which shouldn't happen for at least whatever bit length you are using for the numbers.
.... I mean, that's my best guess...
Is there a reason you want to use the co-ordinates? For example, do you want a system generated at a given co-ordinate to always be identical to any other system generated at that same co-ordinate?
I would suggest using the more classical method of just seeding with the current time and using the results of that to continue generating your pseudo-randomness.
If you're adamant about using the coordinates, I would suggest concatenation (as I believe someone else suggested). At least then you're guaranteed to avoid collisions, assuming you don't have two systems at the same co-ords.
I use one of George Marsaglia's PRNGs:
http://www.math.uni-bielefeld.de/~sillke/ALGORITHMS/random/marsaglia-c
It explicitly relies on two seeds, so it might be just what you are looking for.

Entropy repacking

I have been tossing around a conceptual idea for a machine (as in a Turing machine) and I'm wondering if any work has been done on this or related topics.
The idea is a machine that takes in an entropy stream and gives out random symbols in any requested range without losing any entropy.
I'll grant that is far from a rigorous description, so I'll give an example: say I have a generator of random symbols in the range 1 to n, and I want to be able to ask for symbols in any given range, first 1 to 12 and then 1 to 1234. (To keep it practical, I'll only consider deterministic machines where, given the same input stream and requests, the machine always gives the same output.) One necessary constraint is that the output contain at least as much entropy as the input. However, the constraint I'm most interested in is that the machine only reads in as much entropy as it spits out.
E.g. if asked for tokens in the ranges 1 to S1, S2, S3, ..., Sm, it would consume only ceiling( sum(i = 1 to m, log(Si)) / log(n) ) input tokens.
This question asks how to do this conversion while satisfying the first constraint, but does very badly on the second.
Okay, I'm still not sure that I'm following what you want. It sounds like you want a function
f: I → O
where the inputs are a strongly random (uniform distribution etc) sequence of symbols on an alphabet I={1..n}. (So a series of random natural numbers ≤ n.) The outputs are another sequence on O={1..m} and you want that sequence to have as much entropy as the inputs.
Okay, if I've got this right, first off, if m < n, you can't. If m < n then lg m < lg n, so the entropy of the set of output symbols is smaller.
If m ≥ n, then you can do it trivially by just selecting the ith element of {1..m}. Entropy will be the same, since the number of possible output symbols is the same. They aren't going to be "random" in the sense of being uniformly distributed over the whole set {1..m}, though, because necessarily (pigeonhole principle) some symbols won't be selected at all.
If, on the other hand, you'd be satisfied with having a random sequence on {1..m}, then you can do it by selecting an appropriate pseudorandom number generator using your input from the random source as a seed.
My current pass at it:
By adding the following restriction, that you know in advance what the sequence of ranges {S1, S2, S3, ..., Sm} is, base translation with a non-constant base might work:
Find Sp = S1 * S2 * S3 * ... * Sm
Extract k = ceiling(log(Sp)/log(n)) terms from the input: {R1, R2, R3, ..., Rk}
Find X = R1 + R2*n + R3*n^2 + ... + Rk*n^(k-1)
Rewrite X as O1 + S1*O2 + S1*S2*O3 + ... + (S1*S2*...*S(m-1))*Om + (S1*S2*...*Sm)*x, where 0 <= Oi < Si
This might be reformable into a solution that works one value at a time by pushing x back into the input stream. However, I can't convince myself that even the known-output-ranges form is sound, so...
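For what it's worth, here is a small C sketch of those steps for one batch of requests. Everything in it is illustrative: it uses 64-bit integers where a real version would need bignums for X, it assumes the input tokens are uniform in 0..n-1 rather than 1..n, and, like the steps above, it ignores the leftover x = X / Sp:

#include <stdio.h>

#define M 3

int main(void) {
    unsigned long long n = 6;                        // input alphabet size
    unsigned long long S[M] = {12, 1234, 7};         // requested ranges
    unsigned long long R[] = {3, 1, 5, 0, 2, 4, 1};  // entropy stream, base n

    unsigned long long Sp = 1;                 // Sp = S1 * S2 * ... * Sm
    for (int i = 0; i < M; i++) Sp *= S[i];

    // consume tokens while n^used < Sp, i.e. ceiling(log(Sp)/log(n)) of them
    unsigned long long X = 0, pow = 1;
    int used = 0;
    while (pow < Sp) { X += R[used++] * pow; pow *= n; }

    for (int i = 0; i < M; i++) {              // mixed-radix digits of X
        printf("output %d in 0..%llu: %llu\n", i, S[i] - 1, X % S[i]);
        X /= S[i];
    }
    printf("tokens consumed: %d\n", used);     // 7 for these numbers
    return 0;
}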
