It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
I need to design a circuit that accepts a stream of numbers at its input (the stream may be arbitrarily long) and outputs their average. Each input value is in the range [0, 15].
I need to implement this circuit in VHDL, but I cannot find the proper algorithm, which I need in order to design the logic schematic. I understand that I will definitely need a 4-bit adder and some registers to store values. I tried to approach the problem with the moving-average principle, but it just did not work at all.
For input n+1 with value x, the new average equals (average*n + x)/(n+1), which rearranges to average + (x - average)/(n+1).
From this observation a simple algorithm can be derived:
1. Initialize all registers to 0.
2. Get the next input and store it in a temp register.
3. Increase the count register by 1.
4. Subtract the previous average from the temp register.
5. Divide the temp register by count.
6. Add temp to average.
7. Go to step 2.
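The steps above can be sketched in software first (Python here as a behavioural model; in the actual circuit temp, count, and average become registers and the divide needs a real divider). The function name is mine, not from the question:

```python
def running_average(samples):
    """Incremental average: avg_n = avg_{n-1} + (x - avg_{n-1}) / n."""
    avg = 0.0
    count = 0                       # count register
    for x in samples:               # get the next input
        count += 1                  # increase count by 1
        avg += (x - avg) / count    # subtract, divide, add back
    return avg
```

In 4-bit integer hardware the division truncates, so accumulated rounding error is the main design problem with this incremental form.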
Let's see: you'd need as input ports reset, input[3:0], and clock; as output average[3:0]; and internal registers accumulator[a:0] and count[c:0].
I can't remember the syntax of my VHDL and Verilog just now but...
Whenever you get an input, add it to the accumulator, increment the count by 1, then set the average to the accumulator divided by the count.
On reset set the accumulator and count to zero.
If you know the maximum number of input values is countmax, then the accumulator needs to be big enough to hold countmax*15, and count needs enough bits to hold countmax.
This will also give you a size for the divider.
If countmax is unknown then you need to add an overflow output and set it when the accumulator overflows and un-set it on reset.
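A behavioural model of that accumulator/count scheme (Python sketch; the class name, default countmax, and the exact overflow rule are assumptions, not from the answer):

```python
class Averager:
    """Model of the circuit: average = accumulator // count."""
    def __init__(self, countmax=255):
        self.countmax = countmax          # accumulator must hold countmax * 15
        self.reset()

    def reset(self):                      # reset port
        self.accumulator = 0
        self.count = 0
        self.overflow = False

    def sample(self, x):                  # one clocked input, x in [0, 15]
        assert 0 <= x <= 15
        if self.count >= self.countmax:   # assumed overflow behaviour
            self.overflow = True
            return
        self.accumulator += x
        self.count += 1

    @property
    def average(self):                    # integer divide, as a divider would
        return self.accumulator // self.count if self.count else 0
```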
Hope that helps.
Closed 9 years ago.
I'm trying to find an algorithm that uses linear space for the following problem:
Given two strings x and y over an arbitrary alphabet, determine their longest common subsequence.
Note that when computing the next row of the table in the dynamic-programming solution to the LCS problem, you only need the previous row and the current row. So you can modify the dynamic-programming solution to keep track of just those two rows instead of the full m x n table. Every time you reach the end of the current row, the current row becomes the previous row and you start a new row. You do this m times, where m is the number of rows in your table. This uses space linear in the number of columns.
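A minimal sketch of that two-row idea in Python (this computes only the LCS length; recovering the subsequence itself in linear space requires Hirschberg's divide-and-conquer refinement):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y,
    keeping only two rows of the DP table: O(len(y)) space."""
    prev = [0] * (len(y) + 1)
    for xc in x:                              # one pass per row
        cur = [0] * (len(y) + 1)
        for j, yc in enumerate(y, start=1):
            if xc == yc:
                cur[j] = prev[j - 1] + 1
            else:
                cur[j] = max(prev[j], cur[j - 1])
        prev = cur                            # current row becomes previous
    return prev[len(y)]
```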
Closed 10 years ago.
I want to generate a very large pseudorandom permutation p : [0,n-1] -> [0,n-1], and then compute m specific values p[i], where m << n. Is it possible to do this in O(m) time? The motivation is a large parallel computation where each processor only needs to see a small piece of the permutation, but the permutation must be consistent between processors.
Note that in order to help in the parallel case, different processes computing disjoint sets of i values shouldn't accidentally produce p[i] == p[j] for i != j.
EDIT: There is a much more clever algorithm based on block ciphers that I think Geoff will write up.
There are two common algorithms for generating permutations. Knuth's shuffle is inherently sequential, so it is not a good choice for parallelism. The other is random selection, retrying whenever a repetition is encountered. Random selection is clearly equivalent when applied in any order, so I propose the following simple algorithm:
1. Randomly sample a candidate p[i] in [0, n-1] for each i in Needed (in parallel).
2. Remove all non-collided entries from Needed, as well as (optionally) one deterministic choice from each collision (e.g., keep p[i] if i is the smallest index in {j | p[j] = p[i]}).
3. Repeat from step 1 with the new (smaller) set Needed.
Since we haven't lost entropy in this process, the result is essentially equivalent to sequential random sampling in some different order, starting with the locations i that did not collide (we just didn't know that order in advance). Note that if we used the computed value in a comparison, for example, we would have introduced bias.
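A sequential model of those steps (Python; in the real setting the per-index sampling in step 1 runs on different processors, and my collision rule simply resamples every collided index rather than keeping one winner per collision):

```python
import random

def sample_distinct(needed, n, seed=0):
    """Assign distinct pseudorandom values p[i] in [0, n-1] for each i
    in `needed`, by resampling collided indices as in the steps above."""
    rng = random.Random(seed)
    p = {}
    used = set()                      # values already fixed in p
    pending = list(needed)
    while pending:
        # step 1: sample a candidate for every still-needed index
        cand = {i: rng.randrange(n) for i in pending}
        counts = {}
        for v in cand.values():
            counts[v] = counts.get(v, 0) + 1
        # step 2: keep entries that collide with nothing
        pending = []
        for i, v in cand.items():
            if v not in used and counts[v] == 1:
                p[i] = v
                used.add(v)
            else:
                pending.append(i)     # step 3: resample these next round
    return p
```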
An example very low strength version:
Generate 2k = O(1) random integers a_i,b_i in [0,n-1], with a_i relatively prime to n.
Pick a weak permutation wp : [0,n-1] -> [0,n-1], say wp(i) = i with all but the high bit flipped.
p[i] = b_0 + a_0 * wp(b_1 + a_1 * wp(... i ...))
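A concrete version of that construction (Python sketch; I additionally assume n is a power of two, so that "flip all but the high bit" stays in range and any odd a_i is automatically relatively prime to n):

```python
import random

def make_weak_perm(n, k=3, seed=0):
    """Compose k rounds of x -> (b + a * wp(x)) mod n, where wp flips
    every bit except the high one. An affine map with odd a is a
    bijection mod a power of two, and wp is an XOR with a constant,
    so the composition is a permutation computable in O(k) = O(1)
    per index."""
    assert n > 1 and n & (n - 1) == 0          # power of two
    rng = random.Random(seed)
    coeffs = [(rng.randrange(1, n, 2), rng.randrange(n)) for _ in range(k)]
    low_mask = (n >> 1) - 1                    # every bit except the high bit

    def p(i):
        x = i
        for a, b in coeffs:
            x = (b + a * (x ^ low_mask)) % n   # one round: affine after wp
        return x

    return p
```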
Closed 10 years ago.
I have a list of projects; each project takes exactly two days to complete and has a due date. Let P[i].id, P[i].duedate, and P[i].value be the id of project i, its due date, and the value you get if you complete it on time (on or before the due date).
Write an algorithm that takes the array P as input and returns a schedule of which projects you will do and when, maximizing the total value you get.
The output is an array B such that B[i] is the id of the project you will work on during day i, for i >= 1.
You can work on at most one project on any given day, and you don't get a project's value unless you complete it by its due date. Today is day 0 and you start working on day 1 (due dates are integers); e.g., if a project's due date is 5, you can choose to work on it on days 3 and 5.
1. Write the algorithm.
2. Prove that the algorithm is optimal.
3. What is the time complexity of the algorithm?
If all the values are the same, it's simple: a greedy approach that selects the earliest possible due date works well.
When the values are different, you can use a similar approach, but this time with dynamic programming (I'll assume your due dates are discrete).
Create an array V of size max{due date}; V[i] holds the maximum value that can be earned by day i. For each V[i], also keep the set of tasks selected to achieve it. The DP recurrence is:
V[0] = V[1] = 0 (no two-day task can finish by day 1), and for i >= 2: V[i] = max{V[i-2] + value_xi, V[i-1]}
Here value_xi is the value of the maximum-value task whose due date is at least i (it would finish on day i) and which is not already in V[i-2]'s selection; after making the choice, update V[i]'s selection.
Finally, I'll leave it to you to finish your homework by finding the running time of this algorithm and proving its correctness; you can also improve the memory usage.
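An alternative you can check your homework against (not the DP above): the problem reduces to classic job sequencing with deadlines and profits, because each project fills one two-day slot and a project with due date d must occupy one of the first d // 2 slots. The standard greedy with a min-heap then solves it exactly. A Python sketch under that reduction, with a made-up tuple layout (pid, duedate, value):

```python
import heapq

def max_value_projects(projects):
    """projects: iterable of (pid, duedate, value); each takes exactly
    two days, work starts on day 1. Returns (total value, accepted ids).
    Greedy by due date, evicting the cheapest accepted project when
    no free slot remains."""
    accepted = []                                  # min-heap of (value, pid)
    for pid, due, value in sorted(projects, key=lambda t: t[1]):
        slots = due // 2                           # 2-day slots ending by `due`
        if len(accepted) < slots:
            heapq.heappush(accepted, (value, pid))
        elif accepted and accepted[0][0] < value:  # here len(accepted) == slots
            heapq.heapreplace(accepted, (value, pid))
    total = sum(v for v, _ in accepted)
    return total, sorted(pid for _, pid in accepted)
```

Scheduling the accepted projects in due-date order on days 1-2, 3-4, ... meets every deadline; the running time is O(n log n).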
Closed 11 years ago.
How can we fill an n x m chessboard with dominoes when some squares are blocked? The squares are numbered in order.
Test:
Answer like this:
The input gives n, m, and k, where k is the number of blocks; the next k lines give the blocks, such as 6 7 or 4 9.
(Sorry for my English.)
Here's an idea. In your example board, it is immediately obvious that squares 7, 9, and 14 must contain domino "ends"; that is, there must be dominoes covering 2-7, 8-9, and 14-15.
(In case it's not immediately obvious: the rule I used is that a square with "walls" on three sides dictates the orientation of the domino covering that square.)
If we place those three dominoes, it may then be the case that there are more squares to which the same rule now applies (e.g., 20).
By iterating this process, we can certainly make progress towards our goal, or alternatively get to a place where we know it can't be achieved.
See how far that gets you.
Edit: also note that in your example, the lower-left 2x2 corner (squares 11, 12, 16, 17) is not uniquely determined: a 90-degree rotation of the depicted arrangement would also work, so you will have to handle such situations. If you are looking for any solution, you need a way of arbitrarily picking one of many possibilities; if you are trying to enumerate all solutions, you need a way of finding them all!
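The "three walls" propagation can be sketched like this (Python; the row-major square numbering and the blocked-square format are my guesses at the question's input conventions):

```python
def forced_dominoes(n, m, blocked):
    """Repeatedly place a domino wherever a free square has exactly one
    free neighbour. Squares are numbered 1..n*m row by row. Returns the
    forced placements and the free squares left for search/backtracking."""
    free = set(range(1, n * m + 1)) - set(blocked)

    def neighbours(s):
        r, c = divmod(s - 1, m)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m:
                yield nr * m + nc + 1

    placed = []
    progress = True
    while progress:
        progress = False
        for s in sorted(free):
            nbrs = [t for t in neighbours(s) if t in free]
            if len(nbrs) == 1:                 # three walls: orientation forced
                placed.append((s, nbrs[0]))
                free -= {s, nbrs[0]}
                progress = True
                break                          # rescan; neighbour sets changed
    return placed, free
```

A free square with zero free neighbours means no tiling exists, which is worth detecting as well.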
Closed 11 years ago.
Say you wanted to find which input causes a function to output the value y, and you know the (finite) range of possible inputs.
The input and output are both numbers, and positively correlated.
What would be the best way to optimize that?
I'm currently just looping through all of the possible inputs.
Thanks.
One solution would be a binary search over the possible inputs.
Flow:
1. Find the median input x of the current range.
2. Get the output from function(x).
3. If the output equals the desired y, stop. If it is less than y, start over using the larger half of the possible inputs; otherwise start over using the smaller half.
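That flow as a Python sketch for integer inputs (assuming the function is nondecreasing, which is what "positively correlated" suggests):

```python
def invert_monotone(f, y, lo, hi):
    """Binary-search the integers in [lo, hi] for x with f(x) == y,
    given that f is nondecreasing. Returns x, or None if absent."""
    while lo <= hi:
        mid = (lo + hi) // 2
        fx = f(mid)
        if fx == y:
            return mid
        if fx < y:
            lo = mid + 1        # output too small: look at larger inputs
        else:
            hi = mid - 1        # output too large: look at smaller inputs
    return None
```

This needs O(log N) function calls instead of the N calls a full loop makes.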
A binary search algorithm, perhaps?
http://en.wikipedia.org/wiki/Binary_search_algorithm
If the range is finite and small, a precomputed lookup table might be the fastest way
If you have some sets of known x data that yield y, you can divide them into training and test sets and use a neural network.