Digital Logic Counters

What is the minimum number of JK flip-flops required to construct a synchronous counter with the count sequence (0, 0, 1, 1, 2, 2, 3, 3, 0, 0, ...), and how do we construct the circuit design?
My Approach:
I understand that the minimum number of JK flip-flops required is 3, and I have calculated that the MSB pair (J2, K2) is both 1. I can't understand how to realize (J1, K1) and the LSB pair (J0, K0), because there I only get 0s and don't-cares. I don't understand how to implement it using a K-map.

I got it: J2 and K2 will be 1, J1 and K1 will be Q2.Q0, and J0 and K0 will be Q2. That is, J1 = F(Q2, Q1, Q0) = Q2.Q0 = K1, and similarly J0 = F(Q2, Q1, Q0) = Q2 = K0.
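A quick way to sanity-check these equations is to simulate them. Below is a minimal Python sketch (mine, not part of the original answer); it assumes the count is read from Q1 Q0 while Q2 is the phase bit that toggles every clock, and it uses the fact that a JK flip-flop with J = K toggles when J = 1 and holds when J = 0 (i.e. Q' = Q xor J):

def next_state(q2, q1, q0):
    j2 = 1           # J2 = K2 = 1: Q2 toggles every clock
    j1 = q2 & q0     # J1 = K1 = Q2.Q0
    j0 = q2          # J0 = K0 = Q2
    return q2 ^ j2, q1 ^ j1, q0 ^ j0   # with J == K, Q' = Q xor J

q2 = q1 = q0 = 0
counts = []
for _ in range(10):
    counts.append(2 * q1 + q0)   # the visible count lives in Q1 Q0
    q2, q1, q0 = next_state(q2, q1, q0)
print(counts)   # [0, 0, 1, 1, 2, 2, 3, 3, 0, 0]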

Related

Minimum Delete operations to empty the vector

My friend was asked this question in an interview:
We have a vector of integers consisting only of 0s and 1s. A delete consists of selecting consecutive equal numbers and removing them; the remaining parts are then attached to each other. For example, if the vector is [0,1,1,0], then after removing [1,1] we get [0,0]. If no consecutive equal elements exist, one delete removes a single element from the vector.
We need to write a function that returns the minimum number of deletes to make the vector empty.
Example 1:
Input: [0,1,1,0]
Output: 2
Explanation: [0,1,1,0] -> [0,0] -> []
Example 2:
Input: [1,0,1,0]
Output: 3
Explanation: [1,0,1,0] -> [0,1,0] -> [0,0] -> [].
Example 3:
Input: [1,1,1]
Output: 1
Explanation: [1,1,1] -> []
I am unsure of how to solve this question. I feel that we can use a greedy approach:
Remove all consecutive equal elements and increment the delete counter for each;
Remove elements of the form <a, b, c> where a==c and a!=b, because if we had multiple consecutive bs, they would have been deleted in step (1) above. Increment the delete counter once as we delete one b.
Repeat steps (1) and (2) as long as we can.
Increment delete counter once for each of the remaining elements in the vector.
But I am not sure if this would work. Could someone please confirm if this is the right approach? If not, how do we solve this?
Hint
You can simplify this problem greatly by noticing the following fact: a chain of consecutive zeros or ones can be shortened or lengthened without changing the final solution. For example, the following two vectors have the same solution:
[1, 0, 1]
[1, 0, 0, 0, 0, 0, 0, 1]
With that in mind, the solution becomes simpler. So I encourage you to pause and try to figure it out!
Solution
With the previous remark, we can reduce the problem to vectors of alternating zeros and ones. In fact, since zero and one have no special meaning here, it suffices to solve for all such vectors which start with, say, a one.
[] # number of steps: 0
[1] # number of steps: 1
[1, 0] # number of steps: 2
[1, 0, 1] # number of steps: 2
[1, 0, 1, 0] # number of steps: 3
[1, 0, 1, 0, 1] # number of steps: 3
[1, 0, 1, 0, 1, 0] # number of steps: 4
[1, 0, 1, 0, 1, 0, 1] # number of steps: 4
We notice a pattern: the solution seems to be floor(n / 2) + 1 for n > 1, where n is the length of those sequences. But can we prove it?
Proof
We will proceed by induction. Suppose you have a solution for a vector of length n - 2; then any move you make (except deleting one of the two elements at the edges of the vector) will have the following result.
[..., 0, 1, 0, 1, 0 ...]
^------------ delete this one
Result:
[..., 0, 1, 1, 0, ...]
But we already mentioned that a chain of consecutive zeros or ones can be shortened or lengthened without changing the final solution. So the result of the deletion is in fact equivalent to now having to solve for:
[..., 0, 1, 0, ...]
What we did is one deletion in n elements, and we arrived at a case which is equivalent to having to solve for n - 2 elements. So the solution for a vector of size n is...
Solution(n) = Solution(n - 2) + 1
= [floor((n - 2) / 2) + 1] + 1
= floor(n / 2) + 1
Keeping in mind that the solutions for [1] and [1, 0] are respectively 1 and 2, this concludes our proof. Notice that [] turns out to be an edge case.
Interestingly enough, this proof also shows us that the optimal sequence of deletions for a given vector is highly non-unique. You can simply delete any block of ones or zeros, except for the first and last ones, and you will end up with an optimal solution.
Conclusion
In conclusion, given an arbitrary vector of ones and zeros, the smallest number of deletions you will need can be computed by counting the number n of groups of consecutive ones or zeros. The answer is then floor(n / 2) + 1 for n > 1.
Just for fun, here is a Python implementation to solve this problem.
from itertools import groupby

def solution(vector):
    n = 0
    for group in groupby(vector):
        n += 1
    return n // 2 + 1 if n > 1 else n
Intuition: if we first remove all segments of one of the two integers, the remaining elements are all of the same type and can be removed in a single final operation.
Choosing the integer which is not the starting one for these segment removals leads to the optimal result.
Solution:
Take the integer other than the starting one as the flag.
Count the number of contiguous segments of the flag in the vector.
The answer will be the above count + 1 (one operation for removing the merged segment of the starting integer).
So, the answer is:
answer = Count of contiguous segments of flag + 1
Example 1:
[0,1,1,0]
flag = 1
Count of subsegments with flag = 1
So, answer = 1 + 1 = 2
Example 2:
[1,0,1,0]
flag = 0
Count of subsegments with flag = 2
So, answer = 2 + 1 = 3
Example 3:
[1,1,1]
flag = 0
Count of subsegments with flag = 0
So, answer = 0 + 1 = 1
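For completeness, here is a small Python sketch of this flag-counting approach (reusing itertools.groupby, as in the earlier solution):

from itertools import groupby

def min_deletes(vector):
    if not vector:
        return 0
    flag = 1 - vector[0]   # the integer that does not start the vector
    # count the contiguous segments consisting of the flag value
    segments = sum(1 for value, _ in groupby(vector) if value == flag)
    return segments + 1    # +1 for the merged segment of the starting integer

print(min_deletes([0, 1, 1, 0]))   # 2
print(min_deletes([1, 0, 1, 0]))   # 3
print(min_deletes([1, 1, 1]))      # 1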

Minimum common remainder of division

I have n pairs of numbers: ( p[1], s[1] ), ( p[2], s[2] ), ... , ( p[n], s[n] )
where p[i] is an integer greater than 1 and s[i] is an integer with 0 <= s[i] < p[i].
Is there any way to determine the minimum positive integer a such that for each pair:
( s[i] + a ) mod p[i] != 0
Anything better than brute force?
It is possible to do better than brute force. Brute force would be O(A·n), where A is the minimum valid value for a that we are looking for.
The approach described below uses a min-heap and achieves O(n·log(n) + A·log(n)) time complexity.
First, notice that any value of the form (p[i] - s[i]) + k * p[i], for a non-negative integer k, leads to a remainder equal to zero in the ith pair. Thus, the numbers of that form are invalid a values (the solution that we are looking for is different from all of them).
The proposed algorithm is an efficient way to generate the numbers of that form (for all i and k), i.e. the invalid values for a, in increasing order. As soon as the current value differs from the previous one by more than 1, it means that there was a valid a in-between.
The pseudocode below details this approach.
1. construct a min-heap from all the following pairs (p[i] - s[i], p[i]),
where the heap comparator is based on the first element of the pairs.
2. a0 = 0; maxA = lcm(p[1], ..., p[n])   (a0 starts at 0 because we want the minimum positive a; use a0 = -1 if a = 0 is also acceptable)
3. Repeat
3a. Retrieve and remove the root of the heap, (a, p[i]).
3b. If a - a0 > 1 then the result is a0 + 1. Exit.
3c. if a is at least maxA, then no solution exists. Exit.
3d. Insert into the heap the value (a + p[i], p[i]).
3e. a0 = a
Remark: it is possible for such an a to not exist. If a valid a is not found below LCM(p[1], p[2], ... p[n]), then it is guaranteed that no valid a exists.
I'll show below an example of how this algorithm works.
Consider the following (p, s) pairs: { (2, 1), (5, 3) }.
The first pair indicates that a should avoid values like 1, 3, 5, 7, ..., whereas the second pair indicates that we should avoid values like 2, 7, 12, 17, ... .
The min-heap initially contains the first element of each sequence (step 1 of the pseudocode), so it holds {1, 2}.
We retrieve and remove the head of the heap, i.e., the minimum of those values, which is 1. We add into the heap the next element from that sequence (3), so the heap now contains {2, 3}.
We again retrieve the head of the heap, this time the value 2, and add the next element of that sequence (7) into the heap, leaving {3, 7}.
The algorithm continues: we next retrieve the value 3 and add 5 into the heap, leaving {5, 7}.
Finally, we retrieve the value 5. It differs from the previously retrieved value 3 by more than 1, so we realize that the value 4 is not among the invalid values for a; thus that is the solution we are looking for.
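Here is a runnable Python version of the pseudocode above (a sketch of the same heap approach; the names are mine, and math.lcm requires Python 3.9+):

import heapq
from math import lcm

def min_valid_a(pairs):
    # pairs: list of (p, s) with 0 <= s < p; returns the minimum positive a
    # with (s + a) % p != 0 for every pair, or -1 if no such a exists
    max_a = lcm(*(p for p, _ in pairs))
    heap = [(p - s, p) for p, s in pairs]   # smallest positive invalid value per pair
    heapq.heapify(heap)
    a0 = 0                       # searching for the minimum *positive* a
    while True:
        a, p = heapq.heappop(heap)
        if a - a0 > 1:           # gap in the invalid values: a0 + 1 is valid
            return a0 + 1
        if a >= max_a:           # a full period scanned without finding a gap
            return -1
        heapq.heappush(heap, (a + p, p))
        a0 = a

print(min_valid_a([(2, 1), (5, 3)]))   # 4, matching the example above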
I can think of two different solutions. First:
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
for a = 0 to p_max:
    zero_found = false;
    for i = 0 to n:
        if (s[i] + a) mod p[i] == 0:
            zero_found = true;
            break;
    if !zero_found:
        return a;
return -1;
I suppose this is the one you call "brute force". Notice that p_max is the least common multiple of the p[i]s, minus 1 (the solution is either in the closed interval [0, p_max], or it does not exist). The complexity of this solution is O(n * p_max) in the worst case (plus the running time for calculating the lcm!). There is a solution with a better time complexity, but it uses an additional binary array - a classical time-space tradeoff. Its idea is similar to the Sieve of Eratosthenes, but for remainders instead of primes :)
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
int remainders[p_max + 1] = {0};
for i = 0 to n:
    int rem = -((p[i] - s[i]) mod p[i]);   // equals s[i] - p[i], except it is 0 when s[i] == 0
    while rem >= -p_max:
        remainders[-rem] = 1;
        rem -= p[i];
for a = 0 to p_max:
    if !remainders[a]:
        return a;
return -1;
Explanation of the algorithm: first, we create an array remainders that will indicate whether a certain negative remainder exists in the whole set. What is a negative remainder? It's simple; notice that 6 = 2 mod 4 is equivalent to 6 = -2 mod 4. If remainders[i] == 1, it means that if we add i to one of the s[j], we will get a multiple of p[j] (i.e. 0 mod p[j], which is what we want to avoid). The array is populated with all possible negative remainders, down to -p_max. Now all we have to do is search for the first i such that remainders[i] == 0 and return it, if it exists - notice that the solution does not have to exist. In the problem text you indicated that you are searching for the minimum positive integer, but I don't see why zero would not fit (if all s[i] are positive). However, if that is a strong requirement, just change the final for loop to start from 1 instead of 0, and increment p_max.
The complexity of this algorithm is n + sum(p_max / p[i]) = n + p_max * sum(1 / p[i]), where i goes from 0 to n. Since all p[i]s are at least 2, this is asymptotically better than the brute force solution.
An example for better understanding: suppose that the input is (5,4), (5,1), (2,0). p_max is lcm(5,5,2) - 1 = 10 - 1 = 9, so we create array with 10 elements, initially filled with zeros. Now let's proceed pair by pair:
from the first pair, we have remainders[1] = 1 and remainders[6] = 1
second pair gives remainders[4] = 1 and remainders[9] = 1
last pair gives remainders[0] = 1, remainders[2] = 1, remainders[4] = 1, remainders[6] = 1 and remainders[8] = 1.
Therefore, the first index with a zero value in the array is 3, which is the desired solution.
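The sieve variant translates directly into Python (again a sketch with names of my own; math.lcm requires Python 3.9+):

from math import lcm

def min_valid_a_sieve(pairs):
    # mark every invalid a in [0, p_max], then return the first unmarked one
    p_max = lcm(*(p for p, _ in pairs)) - 1
    invalid = [False] * (p_max + 1)
    for p, s in pairs:
        a = (p - s) % p          # smallest non-negative invalid a (0 when s == 0)
        while a <= p_max:
            invalid[a] = True
            a += p
    for a in range(p_max + 1):   # start the range at 1 to exclude zero
        if not invalid[a]:
            return a
    return -1

print(min_valid_a_sieve([(5, 4), (5, 1), (2, 0)]))   # 3, as in the example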

How can I find all possible sums and differences of numbers in a vector?

Let's say I have a vector called numbers, e.g. numbers = {1, 5, 6, 8}. (A possibility I have thought of is to double the size of the vector and include all the negative numbers, but I still don't have a good solution to find all the possible sums.)
Possible solutions:
4 = 5 - 1
1 = 1
19 = 8 + 6 + 5
I want the search to stop when I've found the number I'm looking for, but my main issue is simply finding all of the different sums.
This is very similar to the subset sum problem but I haven't really found a solution that I can understand / that includes negative numbers.
Use dynamic programming.
Let (a_0, ..., a_{n-1}) be your array of numbers.
Let A(k) be the set of all possible sums/differences of (a_0, ..., a_{k-1}).
Then you can easily deduce A(k) from A(k-1): for each x in A(k-1), put x - a_{k-1}, x, and x + a_{k-1} into A(k). Be sure that all repetitions are removed, using a hash table, sorting, or anything similar.
The point is that, if there is an upper bound m of the a_i's, then A(k) contains at most 2mk + 1 elements. Thus the complexity is reduced from O(3^n) to something like O(mn^2).
This is probably the best you can do: for example, if the a_i's grow exponentially, then the size of the final result is also exponential.
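A minimal Python sketch of this dynamic programming idea (the set A(k) is grown one element at a time; note that the result also contains 0, corresponding to choosing no elements at all):

def all_sums_and_differences(numbers):
    reachable = {0}   # A(0): the empty combination
    for a in numbers:
        # A(k) from A(k-1): subtract a, skip a, or add a
        reachable = {x + c * a for x in reachable for c in (-1, 0, 1)}
    return reachable

values = all_sums_and_differences([1, 5, 6, 8])
print(4 in values, 19 in values)   # True True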
Think in ternary representation {0, 1, 2}: you have a set of numbers where every number can appear with a positive sign, with a negative sign, or not at all. You can represent these possibilities as ternary digits {0, 1, 2} mapped to coefficients {-1, 0, 1}, and then compute every combination easily.
Digit to coefficient: ternary '0' -> value '-1', ternary '1' -> value '0', ternary '2' -> value '1'.
Numbers -> {n0, n1, n2, ..., ni}
Ternary representation -> (t0)*3^0 + (t1)*3^1 + (t2)*3^2 + ... + (ti)*3^i
Coef -> {c0, c1, c2, ..., ci}
Result -> c0*n0 + c1*n1 + c2*n2 + ... + ci*ni
An example:
Numbers = {1, 5, 6, 8}
Total combinations -> 3^4 = 81
combination number (c) ∈ [0 , 80].
e.g. c = 79 -> 2221 (ternary)
Ternary (reverse) {1, 2, 2, 2} to coef {0, 1, 1, 1}
result (79d/2221t): 0*1 + 1*5 + 1*6 + 1*8 = 19
To calculate all combinations, you must perform these steps in a loop over c from 0 to 3^4 - 1.
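In Python, this ternary enumeration might look like the following sketch (digits are extracted least significant first, matching the reversed representation above):

def enumerate_combinations(numbers):
    results = set()
    for c in range(3 ** len(numbers)):   # one counter value per combination
        total, x = 0, c
        for value in numbers:
            digit = x % 3                # ternary digit -> coefficient in {-1, 0, 1}
            total += (digit - 1) * value
            x //= 3
        results.add(total)
    return results

print(19 in enumerate_combinations([1, 5, 6, 8]))   # True (reached at c = 79)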

Understanding Recursion / how are subproblems combined (Max-Subarray Algorithm)

I'm having some problems understanding divide and conquer algorithms. I've read that in order to apply recursion successfully you need to take a "recursive leap of faith" and shouldn't bother with the details of every step. But I'm not satisfied with just accepting that recursion works as long as all the conditions are fulfilled; at the moment it seems like magic to me, and I want to understand why it works.
So I'm given the following recursive algorithm of finding a maximum subarray in pseudocode:
Find-Maximum-Subarray(A, low, high)
    if high == low
        return (low, high, A[low])
    else
        mid = floor((low + high) / 2)
        (left-low, left-high, left-sum) = Find-Maximum-Subarray(A, low, mid)
        (right-low, right-high, right-sum) = Find-Maximum-Subarray(A, mid + 1, high)
        (cross-low, cross-high, cross-sum) = Find-Max-Crossing-Subarray(A, low, mid, high)
        if left-sum >= right-sum and left-sum >= cross-sum
            return (left-low, left-high, left-sum)
        else if right-sum >= left-sum and right-sum >= cross-sum
            return (right-low, right-high, right-sum)
        else
            return (cross-low, cross-high, cross-sum)
where the Find-Max-Crossing-Subarray algorithm is given by the following pseudocode:
Find-Max-Crossing-Subarray(A, low, mid, high)
    left-sum = -INF
    sum = 0
    for i = mid down to low
        sum = sum + A[i]
        if sum > left-sum
            left-sum = sum
            max-left = i
    right-sum = -INF
    sum = 0
    for j = mid + 1 to high
        sum = sum + A[j]
        if sum > right-sum
            right-sum = sum
            max-right = j
    return (max-left, max-right, left-sum + right-sum)
Now when I try to apply this algorithm to an example, I'm having a hard time understanding all the steps.
The array is "broken down" (using the indices, without actually changing the array itself) until high equals low. I think this corresponds to the first call, so Find-Maximum-Subarray is first called for all the terms on the left of the array, until high == low == 1. Then (low, high, A[low]) is returned, which would be (1, 1, A[1]) in this case. Now I don't understand how those values are processed in the remainder of the call.
Furthermore I don't understand how the algorithm actually compares subarrays of lengths > 1. Can anybody explain to me how the algorithm continues once one of the calls of the function has bottomed out, please?
In short:
Let A be an array of length n. You want to compute the max subarray of A, so you call Find-Maximum-Subarray(A, 0, n-1). Now try to make the problem easier:
Case high == low:
In this case the array has only one element, so the solution is trivial.
Case high != low:
In this case the solution is too hard to find directly, so try to make the problem smaller. What happens if we cut the array A into arrays B1 and B2 of half the length? There are only 3 new cases:
a) the max subarray of A is also a subarray of B1 but not of B2
b) the max subarray of A is also a subarray of B2 but not of B1
c) the max subarray of A overlaps with both B1 and B2
So you compute the max subarrays of B1 and B2 separately, look for an overlapping (crossing) solution, and finally take the largest one.
The trick is now that you can do the same thing with B1 and B2.
Example:
A = [-1, 2, -1, 1]
Call Find-Maximum-Subarray(A, 0, 3);
    Call Find-Maximum-Subarray(A, 0, 1); -> returns (1, 1, 2) (because 2 > 1 > -1, see the subcalls)
        Call Find-Maximum-Subarray(A, 0, 0); -> returns (0, 0, -1)
        Call Find-Maximum-Subarray(A, 1, 1); -> returns (1, 1, 2)
        Call Find-Max-Crossing-Subarray(A, 0, 0, 1); -> returns (0, 1, 1)
    Call Find-Maximum-Subarray(A, 2, 3); -> returns (3, 3, 1) (because 1 > 0 > -1, see the subcalls)
        Call Find-Maximum-Subarray(A, 2, 2); -> returns (2, 2, -1)
        Call Find-Maximum-Subarray(A, 3, 3); -> returns (3, 3, 1)
        Call Find-Max-Crossing-Subarray(A, 2, 2, 3); -> returns (2, 3, 0)
    Call Find-Max-Crossing-Subarray(A, 0, 1, 3); -> returns (1, 3, 2)
        Here you have to take at least the elements A[1] and A[2], with a sum of 1,
        but if you also take A[3] = 1 the sum becomes 2. Taking A[0] does not help,
        because A[0] is negative.
    Now you only have to look at which subarray has the larger sum. In this case there
    are two with the same sum, A[1..1] and A[1..3]; the tie-breaking in the pseudocode
    returns the left one, (1, 1, 2).
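To make the control flow concrete, here is the pseudocode transcribed into runnable Python (0-indexed, as in the example above; max with a key picks the leftmost winner on ties, matching the pseudocode's preference order):

import math

def find_max_crossing_subarray(A, low, mid, high):
    # best subarray ending exactly at mid, scanning leftwards
    left_sum, s, max_left = -math.inf, 0, mid
    for i in range(mid, low - 1, -1):
        s += A[i]
        if s > left_sum:
            left_sum, max_left = s, i
    # best subarray starting exactly at mid + 1, scanning rightwards
    right_sum, s, max_right = -math.inf, 0, mid + 1
    for j in range(mid + 1, high + 1):
        s += A[j]
        if s > right_sum:
            right_sum, max_right = s, j
    return max_left, max_right, left_sum + right_sum

def find_maximum_subarray(A, low, high):
    if low == high:
        return low, high, A[low]                  # base case: single element
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    return max(left, right, cross, key=lambda t: t[2])

print(find_maximum_subarray([-1, 2, -1, 1], 0, 3))   # (1, 1, 2)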

Non-restoring division algorithm

Does anyone know the steps for dividing unsigned binary integers using non-restoring division?
It's hard to find any good sources online.
i.e., if A = 101110 and B = 010111,
how do we find A divided by B in non-restoring division? What do the registers look like in each step?
Thanks!
(My answer is a little late, but I hope it will be useful to future visitors.)
The algorithm for non-restoring division is given in the flowchart below:
[image: non-restoring division flowchart]
In this problem, Dividend (A) = 101110, i.e. 46, and Divisor (B) = 010111, i.e. 23.
Initialization:
Set Register A = 000000 (the accumulator starts at zero, not at the dividend)
Set Register Q = Dividend = 101110
( So AQ = 000000 101110, Q0 = LSB of Q = 0 )
Set M = Divisor = 010111, M' = 2's complement of M = 101001
Set Count = 6, since we are operating on 6-bit numbers.
After this we run the algorithm, which was shown step by step in a table:
[table: register contents A and Q after each of the 6 steps]
In the table, SHL(AQ) denotes shifting AQ left by one position, leaving Q0 blank; a square symbol in the Q0 position denotes a value that is calculated later.
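Since the table image is not reproduced here, the following Python sketch (my own) simulates the algorithm and prints the register contents at each step; A is kept as a plain signed integer so negative intermediate values are not truncated:

def non_restoring_divide(dividend, divisor, n):
    A, Q, M = 0, dividend, divisor
    for step in range(1, n + 1):
        # SHL(AQ): shift A and Q combined left by one; Q0 is filled in below
        A = (A << 1) | (Q >> (n - 1))
        Q = (Q << 1) & ((1 << n) - 1)
        A = A - M if A >= 0 else A + M    # subtract when A >= 0, else add
        if A >= 0:
            Q |= 1                        # Q0 = 1 when the sign of A is 0
        print(f"step {step}: A = {A}, Q = {Q:0{n}b}")
    if A < 0:
        A += M                            # final correction of the remainder
    return Q, A                           # quotient, remainder

print(non_restoring_divide(0b101110, 0b010111, 6))   # (2, 0): 46 = 2 * 23 + 0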
1) Set the value of register A as 0 (N bits).
2) Set the value of register M as the divisor (N bits).
3) Set the value of register Q as the dividend (N bits).
4) Concatenate A with Q: {A, Q}.
5) Repeat the following N times (here N is the number of bits in the divisor):
    If the sign bit of A equals 0,
        shift A and Q combined left by 1 bit and subtract M from A;
    else
        shift A and Q combined left by 1 bit and add M to A.
    Now if the sign bit of A equals 0, set Q[0] to 1; else set Q[0] to 0.
6) Finally, if the sign bit of A equals 1, add M to A.
7) A is the remainder and Q is the quotient.
