Is the use of the don't care condition (X) in a Karnaugh map correct/precise?

I have a little confusion regarding the don't care condition in the Karnaugh map. As we all know, the Karnaugh map is used to obtain the complete, accurate/precise, optimal output equation of a 16-bit or sometimes 32-bit binary solution. Up to there everything was alright, but the problem arises when we insert don't care conditions into it.
My question is this: the don't care conditions are themselves generated from the 0/1 rows of the truth table, and in the Karnaugh map we sometimes include them in our groups and sometimes ignore them. So is it an ambiguity in the Karnaugh map that we ignore don't care conditions, because we don't know what is behind a don't care, a 1 or a 0? How can we then confidently say that our solution is complete or accurate while we are ignoring them? Maybe a don't care we are ignoring is taken as a 1 in the SOP form and a 0 in the POS form, so the result may contain an error.

A "don't care" is just that. Something that we don't care about. It gives us an opportunity for additional optimization, because that value is not restricted. We are able to make it whatever we wish in order to achieve the most optimal solution.
Because we don't care about it, it doesn't matter what the value is. We will use whatever suits us best (lowest cost, fastest, etc... "optimal"). If it's better as a 1 in one implementation and a 0 in another, so be it, it doesn't matter.
Yes there is always another case with the don't care, but we can say it's complete/accurate because we don't care about the other one. We will treat it whichever way makes our implementation better.

Let's take a very simple example to understand what "don't care conditions" exactly mean.
Let F be a two-variable Boolean function defined by a user as follows:
A B F
0 0 1
0 1 0
This function is not defined when the value of A is 1.
This can mean one of two things:
The user doesn't care about the value that F produces when A = 1.
The user guarantees that A = 1 will never be given as an input to F.
Both of these cases are collectively known as "don't care conditions".
Now, the person implementing this function can use this fact to their advantage by extending the definition of F to include the cases when A = 1.
Using a K-Map,
B' B
A' | 1 | 0 |
A | X | X |
Without these don't care conditions, the algebraic expression of F would be written as A'B', i.e. F = A'B'.
But, if we modify this map as follows,
B' B
A' | 1 | 0 |
A | 1 | 0 |
then F can be expressed as F = B'.
By modifying this map, we have basically extended the definition of F as follows:
A B F
0 0 1
0 1 0
1 0 1
1 1 0
This method works because the person implementing the function already knows that either the user will not care what happens when A = 1, or the user will never use A = 1 as an input to F.
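To make that concrete, here is a minimal check (a Python sketch, not part of the original answer; the names are mine) that the reduced expression F = B' agrees with the user's original definition on every row where F was actually defined:

spec = {(0, 0): 1, (0, 1): 0}        # (A, B) -> F, only the rows the user defined
simplified = lambda a, b: 1 - b      # the candidate F = B'
assert all(simplified(a, b) == f for (a, b), f in spec.items())

The rows with A = 1 never enter the check, which is exactly what makes the simplification legitimate.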
Another example is a four-variable Boolean function in which each of the 4 variables denotes one distinct bit of a BCD value, and the function outputs 1 if the equivalent decimal number is even and 0 if it is odd. Here, since it is guaranteed that the input will never be one of 1010, 1011, 1100, 1101, 1110 or 1111, the person implementing the function can extend its definition to their advantage by including these cases as well.
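A tiny illustration of that idea (a Python sketch, not from the answer): the implementer only has to be right on the ten codes that can actually occur, so a candidate simplification is checked against those alone.

valid_bcd = range(10)             # 1010..1111 can never occur, so they are don't cares
even = lambda n: (n & 1) ^ 1      # candidate simplified circuit: NOT of the low bit
assert all(even(n) == (1 if n % 2 == 0 else 0) for n in valid_bcd)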

Related

How to sort an array with minimum swaps of adjacent elements

I had an algorithm to solve a problem where a professor has to sort students by their class score, 1 for good and 0 for bad, in the minimum number of swaps, where only adjacent students can be swapped. For example, if the students are given in the sequence [0,1,0,1], only one swap is required to get [0,0,1,1]; in the case of [0,0,0,0,1,1,1,1] no swap is required.
From the problem description I immediately knew that it was a classic minimum-adjacent-swaps problem, i.e. the inversion-counting problem that we can solve with merge sort. I tried my own algorithm as well as the ones listed here and on this site, but none passed all the tests.
The most test cases passed when I tried to sort the array in reverse order. I also tried to sort the array in an order based on whether the first element is 0 or 1: for example, if the first element is 1 then I sort the array in descending order, otherwise in ascending order, since the students can be in either grouping. Still none worked; some test cases always failed. The thing is, when I sort in ascending order, the one test case that failed with reverse sorting passes, along with some others, but not all of them. So I don't know what I was doing wrong.
It feels to me that the term "sorting" is an exaggeration when it comes to an array of 0's and 1's. You can simply count the 0's and 1's in O(n) and produce the output.
To address the "minimal swaps" part, I constructed a toy example; two ideas came to mind. So, the example: we're sorting students a...f:
a b c d e f
0 1 0 0 1 1
a c d b e f
0 0 0 1 1 1
As you see, there isn't much of a sorting here. Three 0's, three 1's.
First, I framed this as an edit distance problem, i.e. you need to convert abcdef into acdbef using only the "swap" operation. But how do you come up with acdbef in the first place? My hypothesis here is that you merely need to drag the 0's and 1's to opposite sides of the array without disturbing their order. E.g.
A B C D
0 0 ... 0 ... 1 ... 0 ... 1 ... 1 1
0 0 0 0 ... 1 1 1 1
A C B D
I'm not 100% sure that it works and really yields the minimal number of swaps. But it seems reasonable - why would you spend an additional swap on, e.g., A and C?
Regarding whether you should place 0's first or last - I don't see an issue with running the same algorithm twice and comparing the number of swaps.
Regarding how to find the number of swaps, or even the sequence of swaps - thinking in terms of edit distances can help you with the latter. Finding just the number of swaps can be a simplified form of edit distance too. Or perhaps something even simpler - e.g. find something (a 0 or 1) that is nearest to its "cluster", and move it. Then repeat until the array is sorted.
If we had to sort the zeros before the ones, this would be straightforward inversion counting.
def count(lst):
    # count inversions: pairs where a 1 appears before a 0
    total = 0
    ones = 0
    for x in lst:
        if x:
            ones += 1
        else:
            total += ones  # this 0 must move past every 1 seen so far
    return total
Since sorting the ones before the zeros is also an option, we just need to run this algorithm twice, once interchanging the roles of zeros and ones, and take the minimum.
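Putting that together, a minimal sketch (Python; the wrapper name is mine, not from the original answer):

def min_adjacent_swaps(lst):
    # cost of pushing the 0's to the front vs. pushing the 1's to the front
    zeros_first = count(lst)
    ones_first = count([1 - x for x in lst])
    return min(zeros_first, ones_first)

print(min_adjacent_swaps([0, 1, 0, 1]))              # 1
print(min_adjacent_swaps([0, 0, 0, 0, 1, 1, 1, 1]))  # 0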

Cartesian product in J

I'm trying to reproduce APL code for the game of life function in J. A YouTube video explaining this code can be found after searching "Game of Life in APL". Currently I have a matrix R which is:
0 0 0 0 0 0 0
0 0 0 1 1 0 0
0 0 1 1 0 0 0
0 0 0 1 0 0 0
0 0 0 0 0 0 0
I wrote J code which produces the neighbour counts (the number of living cells in squares adjacent to each cell), which is the following:
+/ ((#:i.4),-(#:1+i.2),(1 _1),.(_1 1)) |. R
And produces:
0 0 1 2 2 1 0
0 1 3 4 3 1 0
0 1 4 5 3 0 0
0 1 3 2 1 0 0
0 0 1 1 0 0 0
My main issue with this code is that it isn't elegant, as ((#:i.4),-(#:1+i.2),(1 _1),.(_1 1)) is needed just to produce:
0 0
0 1
1 0
1 1
0 _1
_1 0
_1 1
1 _1
This is really just the outer product, or Cartesian product, of the vector 1 0 _1 with itself. I could not find an easy way to produce this Cartesian product, so my question is: how would I produce the required array more elegantly?
A Complete Catalog
Michael Berry's answer is very clear and concise. A sterling example of the J idiom table (f"0/~). I love it because it demonstrates how the subtle design of J has permitted us to generalize and extend a concept familiar to anyone from 3rd grade: arithmetic tables (addition tables, +/~ i. 10, and multiplication tables, */~ i.12¹), which even in APL were relatively clunky.
In addition to that fine answer, it's also worth noting that there is a primitive built into J to calculate the Cartesian product, the monad {.
For example:
> { 2 # <1 0 _1 NB. Or i:_1 instead of 1 0 _1
1 1
1 0
1 _1
0 1
0 0
0 _1
_1 1
_1 0
_1 _1
Taking Inventory
Note that the input to monad { is a list of boxes, and the number of boxes in that list determines the number of elements in each combination. A list of two boxes produces an array of 2-tuples, a list of 3 boxes produces an array of 3-tuples, and so on.
A Tad Excessive
Given that full outer products (Cartesian products) are so expensive (O(n^m)), it occurs to one to ask: why does J have a primitive for this?
A similar misgiving arises when we inspect monad {'s output: why is it boxed? Boxes are used in J when, and only when, we want to consolidate arrays of incompatible types and shapes. But all the results of { y will have identical types and shapes, by the very definition of {.
So, what gives? Well, it turns out these two issues are related, and justified, once we understand why the monad { was introduced in the first place.
I'm Feeling Ambivalent About This
We must recall that all verbs in J are ambivalent perforce. J's grammar does not admit a verb which is only a monad, or only a dyad. Certainly, one valence or another might have an empty domain (i.e. no valid inputs, like monad E. or dyad ~. or either valence of [:), but it still exists.
A valence with an empty domain is "real", but its range of valid inputs is empty (an extension of the idea that the range of valid inputs to e.g. + is numbers, and anything else, like characters, produces a "domain error").
Ok, fine, so all verbs have two valences, so what?
A Selected History
Well, one of the primary design goals Ken Iverson had for J, after long experience with APL, was ditching the bracket notation for array indexing (e.g. A[3 4;5 6;2]), and recognizing that selection from an array is a function.
This was an enormous insight, with a serious impact on both the design and use of the language, which unfortunately I don't have space to get into here.
And since all functions need a name, we had to give one to the selection function. All primitive verbs in J are spelled with either a glyph, an inflected glyph (in my head, the ., :, .: etc suffixes are diacritics), or an inflected alphanumeric.
Now, because selection is so common and fundamental to array-oriented programming, it was given some prime real estate (a mark of distinction in J's orthography), a single-character glyph: {².
So, since { was defined to be selection, and selection is of course dyadic (i.e. having two arguments: the indices and the array), that accounts for the dyad {. And now we see why it's important to note that all verbs are ambivalent.
I Think I'm Picking Up On A Theme
When designing the language, it would be nice to give the monad { some thematic relationship to "selection"; having the two valences of a verb be thematically linked is a common pattern in J, for elegance and mnemonic purposes.
That broad pattern is also a topic worthy of a separate discussion, but for now let's focus on why catalog / Cartesian product was chosen for monad {. What's the connection? And what accounts for the other quirk, that its results are always boxed?
Bracketectomy
Well, remember that { was introduced to replace -- replace completely -- the old bracketing subscript syntax of APL (and of many other programming languages and notations). This at once made selection easier and more useful, and also simplified J's syntax: in APL, the grammar, and consequently the parser, had to have special rules for indexing like:
A[3 4;5 6;2]
The syntax was an anomaly. But boy, wasn't it useful and expressive from the programmer's perspective, huh?
But why is that? What accounts for the multi-dimensional bracketing notation's economy? How is it that we can say so much in such little space?
Well, let's look at what we're saying. In the expression above A[3 4;5 6;2], we're asking for the 3rd and 4th rows, the 5th and 6th columns, and the 2nd plane.
That is, we want
plane 2, row 3, column 5, and
plane 2, row 3, column 6, and
plane 2, row 4, column 5 and
plane 2, row 4, column 6
Think about that a second. I'll wait.
The Moment Ken Blew Your Mind
Boom, right?
Indexing is a Cartesian product.
Always has been. But Ken saw it.
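As an illustration of that observation outside J (a Python sketch, nothing to do with the J primitives themselves): the bracket expression above asks for exactly the Cartesian product of its per-axis index lists.

from itertools import product

planes, rows, cols = [2], [3, 4], [5, 6]   # the indices from A[3 4;5 6;2]
print(list(product(planes, rows, cols)))
# [(2, 3, 5), (2, 3, 6), (2, 4, 5), (2, 4, 6)]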
So, now, instead of saying A[3 4;5 6;2] in APL (with some hand-waving about whether []IO is 1 or 0), in J we say:
(3 4;5 6;2) { A
which is, of course, just shorthand, or syntactic sugar, for:
idx =. { 3 4;5 6;2 NB. Monad {
idx { A NB. Dyad {
So we retained the familiar, convenient, and suggestive semicolon syntax (what do you want to bet link being spelled ; is also not a coincidence?) while getting all the benefits of turning { into a first-class function, as it always should have been³.
Opening The Mystery Box
Which brings us back to that other, final, quibble. Why the heck are monad {'s results boxed, if they're all regular in type and shape? Isn't that superfluous and inconvenient?
Well, yes, but remember that an unboxed, i.e. numeric, LHA in x { y only selects items from y.
This is convenient because it's a frequent need to select the same item multiple times (e.g. in replacing 'abc' with 'ABC' and defaulting any non-abc character to '?', we'd typically say ('abc' i. y) { 'ABC','?', but that only works because we're allowed to select index 3 (the fourth item), which is '?', multiple times).
But that convenience precludes using straight numeric arrays to also do multidimensional indexing. That is, the convenience of unboxed numbers to select items (most common use case) interferes with also using unboxed numeric arrays to express, e.g. A[17;3;8] by 17 3 8 { A. We can't have it both ways.
So we needed some other notation to express multi-dimensional selections, and since dyad { has left-rank 0 (precisely because of the foregoing), and a single, atomic box can encapsulate an arbitrary structure, boxes were the perfect candidate.
So, to express A[17;3;8], instead of 17 3 8 { A, we simply say (< 17;3;8) { A, which again is straightforward, convenient, and familiar, and allows us to do any number of multi-dimensional selections simultaneously, e.g. ((< 17;3;8) , (< 42;7;2)) { A, which is what you'd want and expect in an array-oriented language.
Which means, of course, that in order to produce the kinds of outputs that dyad { expects as inputs, monad { must produce boxes⁴. QED.
Oh, and PS: since, as I said, boxing permits arbitrary structure in a single atom, what happens if we pass not a box, or even a list of boxes, but a box containing a boxed box? Well, have you ever wanted a way to say "I want every index except the last", or the 3rd, or the 42nd and 55th? Well...
Footnotes:
¹ Note that in the arithmetic tables +/~ i.10 and */~ i.12, we can elide the explicit "0 (present in ,"0/~ _1 0 1) because arithmetic verbs are already scalar (obviously)
² But why was selection given that specific glyph, {?
Well, Ken intentionally never disclosed the specific mnemonic choices used in J's orthography, because he didn't want to dictate such a personal choice for his users, but to me, Dan, { looks like a little funnel pointing right-to-left. That is, a big stream of data on the right, and a much smaller stream coming out the left, like a faucet dripping.
Similarly, I've always seen dyad |: like a little coffee table or Stonehenge trilithon kicked over on its side, i.e. transposed.
And monad # is clearly mnemonic (count, tally, number of items), but the dyad was always suggestive to me because it looked like a little net, keeping the items of interest and letting everything else "fall through".
But, of course, YMMV. Which is precisely why Ken never spelled this out for us.
³ Did you also notice that while in APL the indices, which are control data, are listed to the right of the array, whereas in J they're now on the left, where control data belong?
⁴ Though this Jer would still like to see monad { produce unboxed results, at the cost of some additional complexity within the J engine, i.e. at the expense of the single implementer, and to the benefit of every single user of the language
There is a lot of interesting literature which goes into this material in more detail, but unfortunately I do not have time now to dig it up. If there's enough interest, I may come back and edit the answer with references later. For now, it's worth reading Mastering J, an early paper on J by one of the luminaries of J, Donald McIntyre, which mentions the eschewing of the "anomalous bracket notation" of APL, and perhaps a tl;dr version of this answer that I posted to the J forums in 2014.
,"0/ ~ 1 0 _1
will get you the Cartesian product you ask for (but you may want to reshape it to 9 by 2).
The Cartesian product is the monadic verb catalog: {
{ ;~(1 0 _1)
┌────┬────┬─────┐
│1 1 │1 0 │1 _1 │
├────┼────┼─────┤
│0 1 │0 0 │0 _1 │
├────┼────┼─────┤
│_1 1│_1 0│_1 _1│
└────┴────┴─────┘
Ravel (,) and unbox (>) for a 9,2 list:
>,{ ;~(1 0 _1)
1 1
1 0
1 _1
0 1
0 0
0 _1
_1 1
_1 0
_1 _1

Number of ways of counting nothing is one?

This is something that I routinely err in while solving problems. How do we decide what is the value of a recursive function when the argument is at the lowest extreme. An example will help:
Given n, find the number of ways to tile a 3xN grid using 2x1 blocks only. Rotation of blocks is allowed.
The DP solution is easily found as
f(n): the number of ways of tiling a 3xN grid
g(n): the number of ways of tiling a 3xN grid with a 1x1 block cut off at the rightmost column
f(n) = f(n-2) + 2*g(n-1)
g(n) = f(n-1) + g(n-2)
I initially thought that the base cases would be f(0)=0, g(0)=0, f(1)=0, g(1)=1. However, this yields a wrong answer. I then read somewhere that f(0)=1 and reasoned it out as
The number of ways of tiling a 3x0 grid is one, because there is exactly one way: we do not use any tiles (2x1 blocks) at all.
My question is, by that logic, shouldn't g(0) also be one? But in the correct solution, g(0) = 0. In general, when can we say that the number of ways of using nothing is one?
About your specific question of tiling, think this way:
How many ways are there to "tile a 3*0 grid"?
I would say: just one way, do nothing! And you can't "do nothing" in any other way. (f(0) = 1)
How many ways are there to "tile a 3*0 grid, cutting that specific block off"?
I would say: Hey! That's impossible! You can't cut the specific block off, since there is nothing there. So there is no way to accomplish the task at all. (g(0) = 0)
Now, let's get to the general case:
There's no "general" rule about zero cases.
Depending on your problem, you may be able to somehow "interpret" the situation and find the reasonable value. Most of the time (depending on your definition of "ways"), the number of ways of doing "nothing" is 1, and the number of ways of doing something impossible is 0.
Warning! Being able to somehow "interpret" the zero case is not enough for the relation to be correct! You should recheck your recursive relation (i.e. the way you get the n-th value from the previous ones) to make sure it also holds for the zero case, since most of the time this is the "tricky" case.
You may find it easier to base your recursive relation on some non-zero case if you find the zero case tricky or confusing.
The way I see it, g(0) is invalid, since there is no way to cut a 1x1 block out of a 3x0 grid.
Invalid values are typically represented as 0, -∞ or ∞, but this largely depends on the problem. The number of ways to place something would make sense to be 0 to represent invalid values (otherwise your counts will be off). When working with min, you'd generally use ∞. When working with max, you'd generally use -∞ (or possibly 0).
Generally, it makes sense for the number of ways to place 0 objects, or to place objects in a 0-sized space, to be 1 (i.e. placing no objects), so f(0) = 1. In a lot of other cases the valid value would be 0.
But these are far from rules (and avoid blindly following rules, because you'll get hurt by the exceptions); the best advice I can give is: when in doubt, throw a few values in and see what happens.
In this case you can easily determine what the actual values for g(1), f(1), g(2) and f(2) should be, and use these to calculate g(0) and f(0):
g(1) = 1
f(1) = 0
g(2): (all invalid, since ? is not populated)
|X |X ?X
|? || --
-- ?| --
g(2) = 0, thus g(0) = 0 - f(1) = 0 - 0 = 0
f(2):
|| -- --
|| -- ||
-- -- ||
f(2) = 3, thus f(0) = 3 - 2*g(1) = 3 - 2 = 1
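As a quick check, here is a small Python sketch (my own, assuming only the recurrences and base cases discussed above); it confirms that f(0) = 1 and g(0) = 0 reproduce f(2) = 3 and the known count f(4) = 11 for 3x4 domino tilings:

def tilings(n):
    f = {0: 1, 1: 0}  # f(0) = 1: the empty grid has exactly one tiling
    g = {0: 0, 1: 1}  # g(0) = 0: there is no cut-off cell in an empty grid
    for k in range(2, n + 1):
        f[k] = f[k - 2] + 2 * g[k - 1]
        g[k] = f[k - 1] + g[k - 2]
    return f[n]

print([tilings(n) for n in range(7)])  # [1, 0, 3, 0, 11, 0, 41]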

Defining "<" and ">" in Ruby

I am a freshman in high-school who has some time on his hands, and I decided it would be beneficial to write some programs that demonstrate what commonly used functions do. I have always wondered what exactly goes into the greater than and less than operators, so I have set out to define them by myself. The only roadblock that I have encountered is how one can assert that a value is negative or positive, without using the greater than or less than operators. So far, I have something that looks like this:
a = 34
b = 42
c = a - b
puts "A is Greater than B" while is_positive?(c)
Does anybody have ideas on how I would define is_positive?(c)?
This question should not be tagged ruby but mathematics.
Then you absolutely do need the equality operator.
If you want to restrict yourself to just the + and - operators, you have no other way of deciding whether a or b is greater than to count up from 0 and see which value you hit first (which of course is tested using the equality operator).
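The counting idea looks roughly like this (a Python sketch rather than Ruby, with a made-up helper name, purely to illustrate; it assumes c is a finite integer and uses only +, -, and ==):

def is_positive(c):
    if c == 0:
        return False
    k = 0
    while True:
        k = k + 1
        if k == c:       # reached c counting upwards: c is positive
            return True
        if 0 - k == c:   # reached c counting downwards: c is negative
            return False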
Do you mean the <=> operator, which returns -1 if the first argument is less than the second, 0 if they are equal, and 1 if it is greater? Or maybe you mean the sign function, which returns -1 if its argument is less than 0, 0 if it is 0, and 1 if it is greater than 0?

Check whether a point is inside a rectangle using bit operators

Days ago, my teacher told me it was possible to check if a given point is inside a given rectangle using only bit operators. Is it true? If so, how can I do that?
This might not answer your question but what you are looking for could be this.
These are the tricks compiled by Sean Eron Anderson, and he even put a bounty of $10 on finding a single bug. The closest thing I found there is a macro that determines whether a word x contains a byte whose value lies between m and n:
Determine if a word has a byte between m and n
When m < n, this technique tests if a word x contains an unsigned byte value, such that m < value < n. It uses 7 arithmetic/logical operations when n and m are constant.
Note: Bytes that equal n can be reported by likelyhasbetween as false positives, so this should be checked by character if a certain result is needed.
Requirements: x>=0; 0<=m<=127; 0<=n<=128
#define likelyhasbetween(x,m,n) \
((((x)-~0UL/255*(n))&~(x)&((x)&~0UL/255*127)+~0UL/255*(127-(m)))&~0UL/255*128)
This technique would be suitable for a fast pretest. A variation that takes one more operation (8 total for constant m and n) but provides the exact answer is:
#define hasbetween(x,m,n) \
((~0UL/255*(127+(n))-((x)&~0UL/255*127)&~(x)&((x)&~0UL/255*127)+~0UL/255*(127-(m)))&~0UL/255*128)
It is possible if the numbers are finite positive integers.
Suppose we have a rectangle represented by its corners (a1,b1) and (a2,b2). Given a point (x,y), we only need to evaluate the expression (a1<x) & (x<a2) & (b1<y) & (y<b2). So the problem now is to find the corresponding bit operations for a comparison c < d.
Let ci be the i-th bit of the number c (which can be obtained by masking and bit shifting). We claim that for numbers with at most n bits, c < d is equivalent to r_(n-1), where
r_i = ((ci^di) & ((!ci)&di)) | (!(ci^di) & r_(i-1))
Proof sketch: when ci and di differ, the left term decides the result (it equals ((!ci)&di)); otherwise the right term decides it (it equals r_(i-1), the comparison of the remaining, less significant bits).
The expression ((!ci)&di) is equivalent to the single-bit comparison ci < di. Hence this recursive relation returns true exactly when, comparing bit by bit from the most significant bit downwards, we can decide that c is smaller than d.
Hence there is a purely bitwise expression corresponding to the comparison operator, and so it is possible to test whether a point lies inside a rectangle using only bitwise operations.
Edit: There is actually no need for a conditional statement; just expand r_(n-1) fully, and you are done.
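A direct transcription of that recurrence (a Python sketch, not from the answer; it assumes non-negative values that fit in nbits bits) looks like this:

def bit_less(c, d, nbits=32):
    # r_(n-1) from the recurrence above, accumulated from bit 0 (LSB) upwards
    r = 0
    for i in range(nbits):
        ci = (c >> i) & 1
        di = (d >> i) & 1
        diff = ci ^ di
        r = (diff & (ci ^ 1) & di) | ((diff ^ 1) & r)
    return r  # 1 if c < d, else 0

def in_rect(x, y, a1, b1, a2, b2):
    # (a1<x) & (x<a2) & (b1<y) & (y<b2), using only the bitwise comparator and &
    return bit_less(a1, x) & bit_less(x, a2) & bit_less(b1, y) & bit_less(y, b2)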
x,y is in the rectangle {x0<x<x1 and y0<y<y1} if {x0<x and x<x1 and y0<y and y<y1}
If we can simulate < with bit operators, then we're good to go.
What does it mean to say something is < in binary? Consider
a: 0 0 0 0 1 1 0 1
b: 0 0 0 0 1 0 1 1
In the above, a > b, because a contains the first 1 whose counterpart in b is 0. We are thus seeking the leftmost bit such that myBit != otherBit. (== or equivalence is a bitwise operation which can be expressed with and/or/not.)
However, we need some way to propagate information from one bit to many bits. So we ask ourselves: can we "code" a function, using only "bit" operators, which is equivalent to if(q,k,a,b) = if q[k] then a else b? The answer is yes:
We create a bit-word consisting of replicating q[k] onto every bit. There are two ways I can think of to do this:
1) Left-shift by k so that q[k] becomes the top bit, then arithmetic right-shift by wordsize-1 (efficient, but only works if you have a right shift that duplicates the top bit)
2) Inefficient but theoretically correct way:
We left-shift q by k bits
We take this result and and it with 10000...0
We right-shift this by 1 bit, and or it with the non-right-shifted version. This copies the bit in the first place to the second place. We repeat this process until the entire word is the same as the first bit (e.g. 64 times)
Calling this result mask, our function is (mask and a) or (!mask and b): the result will be a if the kth bit of q is true, otherwise the result will be b.
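Roughly in code (a Python sketch with an assumed 8-bit word and bit 0 taken as the leftmost/top bit, so the word size and helper names are mine, not the answer's):

WORD = 8  # assumed word size for this sketch

def replicate(q, k):
    # copy q[k] (k counted from the left, as in the description) onto every bit
    m = (q << k) & (1 << (WORD - 1))  # left-shift by k, then AND with 1000...0
    for _ in range(WORD - 1):
        m = m | (m >> 1)              # right-shift by 1 and OR, repeatedly
    return m                          # all ones if q[k] was 1, else all zeros

def if_(q, k, a, b):
    # if q[k] then a else b, using only bitwise operations
    mask = replicate(q, k)
    return (mask & a) | (~mask & b & ((1 << WORD) - 1))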
Taking the bit-vector c = a != b (with the result constants chosen as whole words, say B_LESSTHAN_A == 1111..1 and A_LESSTHAN_B == 0000..0), we use our if function to successively test whether the first bit of c is 1, then the second bit, etc.:
a<b :=
if(c,0,
if(a,0, B_LESSTHAN_A, A_LESSTHAN_B),
if(c,1,
if(a,1, B_LESSTHAN_A, A_LESSTHAN_B),
if(c,2,
if(a,2, B_LESSTHAN_A, A_LESSTHAN_B),
if(c,3,
if(a,3, B_LESSTHAN_A, A_LESSTHAN_B),
if(...
if(c,64,
if(a,64, B_LESSTHAN_A, A_LESSTHAN_B),
A_EQUAL_B)
)
...)
)
)
)
)
This takes wordsize steps. It can however be written in 3 lines by using a recursively-defined function, or a fixed-point combinator if recursion is not allowed.
Then we just turn that into an even larger function: xMin<x and x<xMax and yMin<y and y<yMax
