I'm a beginner to logic circuits and I'm trying to construct a truth table for a LED dice circuit.
I've got 7 outputs in my table, 1 for each LED, but I can't figure out how many inputs I need.
I've been told that the formula below gives the number of inputs, but I don't know what Y is. Can anyone confirm that the formula is correct, and tell me what Y is so I can work this out? Thanks
n = log(Y + 1) / log(2)
There is no fixed number of inputs you must have; you could use as many as you wish. But there is a minimum.
Every input line can be thought of as a digit in a binary number. So in order to identify 7 different numbers we would need at least three binary digits (000 to 111). So the formula is ceil(log2(Y)), where Y is the number of output lines.
A great example of such a circuit is a demultiplexer. You will notice that the number of selector bits in a DEMUX is ceil(log2(Y)), where Y is the number of output lines.
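As a quick check of that minimum (a Python illustration of my own, not part of the original answer):

import math

# Minimum number of binary inputs needed to distinguish Y output states.
def min_inputs(y):
    return math.ceil(math.log2(y))

print(min_inputs(7))  # 3: three inputs are enough to select 7 LED outputs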
I recently saw a logic/math problem called 4 Fours, where you need to use four 4s and a range of operators to create expressions that equal each of the integers from 0 to N.
How would you go about writing an elegant algorithm to come up with, say, the first 100...
I started by creating base calculations like 4-4, 4+4, 4x4, 4/4, 4!, Sqrt 4 and made these values integers.
However, I realized that this was going to be a brute force method, testing the combinations to see if they equal 0, then 1, then 2, then 3, etc...
I then thought of finding all possible combinations of the above values, checking that each result was less than 100, filling an array, and then sorting it... again inefficient, because it may find thousands of numbers over 100.
Any help on how to approach a problem like this would be helpful...not actual code...but how to think through this problem
Thanks!!
This is an interesting problem. There are a couple of different things going on here. One issue is how to describe the sequence of operations and operands that go into an arithmetic expression. Using parentheses to establish order of operations is quite messy, so instead I suggest thinking of an expression as a stack of operations and operands: - 4 4 for 4-4, + 4 * 4 4 for (4*4)+4, * 4 + 4 4 for (4+4)*4, etc. It's like Reverse Polish Notation on an HP calculator. Then you don't have to worry about parentheses, and having a data structure for expressions will help below when we build up larger and larger expressions.
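For concreteness, here is a minimal Python sketch (my own illustration) of evaluating that operator-first form; the operator set is deliberately tiny:

import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

# Evaluate an expression written operator-first, e.g. "+ 4 * 4 4" for (4*4)+4.
def evaluate(tokens):
    token = tokens.pop(0)
    if token in OPS:
        left = evaluate(tokens)
        right = evaluate(tokens)
        return OPS[token](left, right)
    return float(token)

print(evaluate("+ 4 * 4 4".split()))  # 20.0
print(evaluate("- 4 4".split()))      # 0.0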
Now we turn to the algorithm for building expressions. Dynamic programming doesn't work in this situation, in my opinion, because (for example) to construct some numbers in the range from 0 to 100 you might have to go outside of that range temporarily.
A better way to conceptualize the problem, I think, is as breadth first search (BFS) on a graph. Technically, the graph would be infinite (all positive integers, or all integers, or all rational numbers, depending on how elaborate you want to get) but at any time you'd only have a finite portion of the graph. A sparse graph data structure would be appropriate.
Each node (number) on the graph would have a weight associated with it, the minimum number of 4's needed to reach that node, and also the expression which achieves that result. Initially, you would start with just the node (4), with the number 1 associated with it (it takes one 4 to make 4) and the simple expression "4". You can also throw in (44) with weight 2, (444) with weight 3, and (4444) with weight 4.
To build up larger expressions, apply all the different operations you have to those initial nodes. For example, unary negation, factorial, square root; binary operations like * 4 at the bottom of your stack for multiply by 4, + 4, - 4, / 4, ^ 4 for exponentiation, and also + 44, etc. The weight of an operation is the number of 4s it requires; unary operations have weight 0, + 4 has weight 1, * 44 has weight 2, etc. You add the weight of the operation to the weight of the node it operates on to get the new weight, so for example + 4 acting on node (44), with weight 2 and expression "44", results in a new node (48) with weight 3 and expression "+ 4 44". If the new result for 48 has a better weight than the existing result for 48, substitute the new node for (48).
You will have to use some sense when applying functions. factorial(4444) would be a very large number; it would be wise to set a domain for your factorial function which would prevent the result from getting too big or going out of bounds. The same with functions like / 4; if you don't want to deal with fractions, say that non-multiples of 4 are outside of the domain of / 4 and don't apply the operator in that case.
The resulting algorithm is very much like Dijkstra's algorithm for calculating distance in a graph, though not exactly the same.
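A rough Python sketch of that search, under simplifying assumptions of my own (only +, -, *, the operands 4 and 44, and a bound to keep the graph finite; unary operators are left out):

import heapq

def four_fours(max_fours=4, bound=10_000):
    best = {}                                # value -> (weight, expression)
    heap = [(1, 4, "4"), (2, 44, "44")]      # (weight, value, expression)
    while heap:
        w, v, e = heapq.heappop(heap)
        if v in best and best[v][0] <= w:    # already reached more cheaply
            continue
        best[v] = (w, e)
        for op in ("+", "-", "*"):
            for w2, v2 in ((1, 4), (2, 44)):
                nv = {"+": v + v2, "-": v - v2, "*": v * v2}[op]
                if w + w2 <= max_fours and abs(nv) <= bound:
                    heapq.heappush(heap, (w + w2, nv, f"{op} {v2} {e}"))
    return best

best = four_fours()
print(best[48])   # (3, '+ 4 44'): 48 reached with three 4s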
I think that the brute force solution here is the only way to go.
The reasoning behind this is that each number has a different way to get to it, and getting to a certain x might have nothing to do with getting to x+1.
Having said that, you might be able to make the brute force solution a bit quicker by using obvious moves where possible.
For instance, if I got to 20 using "4" three times (4*4+4), it is obvious how to get to 16, 24 and 80. Holding an array of 100 bits and marking the numbers reached as you go lets you skip targets that are already solved.
Similar to the subset sum problem, this can be solved using Dynamic Programming (DP) with the following recursive formulas:
D(0, 0) = true
D(x, 0) = false, for x != 0
D(x, i) = D(x-4, i-1) OR D(x+4, i-1) OR D(x*4, i-1) OR D(x/4, i-1)
By computing the above using DP technique, it is easy to find out which numbers can be produced using these 4's, and by walking back the solution, you can find out how each number was built.
The advantage of this method (when implemented with DP) is that you do not recalculate the same values more than once. I am not sure it will actually be effective for four 4s, but I believe theoretically it could be a significant improvement for a less restricted generalization of this problem.
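A sketch of that recurrence in Python, run forwards (my own illustration; note that, like the formulas above, it only extends a single running total by +4, -4, *4 or /4, so it cannot form 44 or combine two subexpressions):

def reachable_values(max_fours=4, bound=10_000):
    reachable = {1: {4}}                     # with one 4 you can make only 4
    for i in range(2, max_fours + 1):
        current = set()
        for x in reachable[i - 1]:
            current.update((x + 4, x - 4, x * 4))
            if x % 4 == 0:                   # keep every partial result an integer
                current.add(x // 4)
        reachable[i] = {x for x in current if abs(x) <= bound}
    return reachable

print(sorted(reachable_values()[4]))         # values reachable with four 4s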
This answer is just an extension of Amit's.
Essentially, your operations are:
Apply a unary operator to an existing expression to get a new expression (this does not use any additional 4s)
Apply a binary operator to two existing expressions to get a new expression (the new expression has number of 4s equal to the sum of the two input expressions)
For each n from 1..4, calculate Expressions(n), a list of (Expression, Value) pairs, as follows (for a fixed n, only store one expression in the list that evaluates to any given value):
1. Initialise the list with the concatenation of n 4s (i.e. 4, 44, 444, 4444).
2. For i from 1 to n-1, and each permitted binary operator op, add an expression (and value) e1 op e2, where e1 is in Expressions(i) and e2 is in Expressions(n-i).
3. Repeatedly apply unary operators to the expressions/values calculated so far in steps 1-3. When to stop (applying 3 recursively) is a little vague; certainly stop if an iteration produces no new values. Potentially limit the magnitude of the values you allow, or the size of the expressions.
Example unary operators are !, Sqrt, -, etc. Example binary operators are +-*/^ etc. You can easily extend this approach to operators with more arguments if permitted.
You could do something a bit cleverer about step 3 never terminating for a given n. The simple way (described above) does not start calculating Expressions(i) until Expressions(j) is complete for all j < i, which requires that we know when to stop. The alternative is to build expressions up to a certain maximum length for each n, and then, if you haven't found certain values, extend that maximum length in an outer loop.
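Here is a condensed Python sketch of this scheme (my own illustration: only +, -, * and exact division are included, unary operators are left out, and values are bounded):

from itertools import product

def expressions(max_fours=4, bound=10_000):
    table = {}                                       # n -> {value: expression}
    for n in range(1, max_fours + 1):
        exprs = {int("4" * n): "4" * n}              # step 1: 4, 44, 444, 4444
        for i in range(1, n):                        # step 2: split n = i + (n-i)
            for (v1, e1), (v2, e2) in product(table[i].items(),
                                              table[n - i].items()):
                candidates = [(v1 + v2, f"({e1}+{e2})"),
                              (v1 - v2, f"({e1}-{e2})"),
                              (v1 * v2, f"({e1}*{e2})")]
                if v2 != 0 and v1 % v2 == 0:         # exact division only
                    candidates.append((v1 // v2, f"({e1}/{e2})"))
                for v, e in candidates:
                    if abs(v) <= bound:
                        exprs.setdefault(v, e)       # one expression per value
        table[n] = exprs
    return table

print(expressions()[4].get(20))                      # one way to make 20 from four 4s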
Problem:
The numbers from 1 to 10 are given. Put the equal sign (somewhere between them) and any arithmetic operators {+ - * /} so that a perfect integer equality is obtained (both the final result and the partial results must be integers).
Example:
1*2*3*4*5/6+7=8+9+10
1*2*3*4*5/6+7-8=9+10
My first idea for solving this was to use backtracking:
Generate all possibilities of putting operators between the numbers
For each such possibility, replace the operators, one by one, with the equal sign and check if we get two equal results
But this solution takes a lot of time.
So, my question is: Is there a faster solution, maybe something that uses the operator properties or some other cool math trick ?
I'd start with the equals sign. Pick a possible location for that, and split your sequence there. For left and right side independently, find all possible results you could get for each, and store them in a dict. Then match them up later on.
Finding all 226 solutions took my Python program, based on this approach, less than 0.15 seconds. So there certainly is no need to optimize further, is there? Along the way, I computed a total of 20683 subexpressions for a single side of one equation. They are fairly well balanced: 10327 expressions for left-hand sides and 10356 expressions for right-hand sides.
If you want to be a bit more clever, you can try to reduce the places where you even attempt division. In order to allow for division without remainder, the prime factors of the divisor must be contained in those of the dividend. So the dividend must be some product, and that product must contain the factors of the number by which you divide. 2, 3, 5 and 7 are prime numbers, so they can never be such divisors. 4 will never have two even numbers before it. So the only possible ways are 2*3*4*5/6, 4*5*6*7/8 and 3*4*5*6*7*8/9. But I'd say it's far easier to check whether a given division is possible as you go, without any need for cleverness.
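A compact Python sketch of this approach (my own; it assumes strict left-to-right evaluation, which is one reading of the "partial results must be integer" condition, and keeps just one expression per reachable value):

def side_values(numbers):
    results = {numbers[0]: str(numbers[0])}          # value -> expression
    for num in numbers[1:]:
        nxt = {}
        for value, expr in results.items():
            nxt.setdefault(value + num, f"{expr}+{num}")
            nxt.setdefault(value - num, f"{expr}-{num}")
            nxt.setdefault(value * num, f"{expr}*{num}")
            if value % num == 0:                     # integer partial results only
                nxt.setdefault(value // num, f"{expr}/{num}")
        results = nxt
    return results

nums = list(range(1, 11))
for split in range(1, len(nums)):                    # try every '=' position
    left = side_values(nums[:split])
    right = side_values(nums[split:])
    for value in left.keys() & right.keys():         # match the two dicts
        print(f"{left[value]} = {right[value]}")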
I want to make a power function in VHDL where the exponent is a floating-point number and the base is an integer (it will always be 2):
2 ^ (some floating-point number).
I use the IEEE library (fixed_float_types.all, fixed_pkg.all, and float_pkg.all).
I thought of calculating all the possible outputs and saving them in a ROM, but I don't know the range of the exponent.
How can I implement this function, and if something like it has already been implemented, where can I find it?
thanks
For simulation, you will find suitable power functions in the IEEE.math_real library
library IEEE;
use IEEE.math_real.all;
...
X <= 2 ** Y;
or
X <= 2.0 ** Y;
This is probably not synthesisable. If I needed a similar operation for synthesis, I would use a lookup table of values, slopes and second derivatives, and a quadratic interpolator. I have used this approach for reciprocal and square root functions to single precision accuracy; 2**n over a reasonable range of n is smooth enough that the same approach should work.
If an approximation would do, I think I would use the integer part of the exponent to determine the integer power of 2: if the floating-point number is 111.011010111, you know that the integer power-of-2 part is 0b10000000. Then I would do a left-to-right conditional add based on the fractional bits, so for 111.011010111 you need to add 0b10000000 times (0*(1/2) + 1*(1/4) + 1*(1/8) + 0*(1/16)... and so on). 1/2, 1/4, 1/8, et cetera are right shifts of 0b10000000. This implements the integer part of the exponentiation, and then approximates the fractional part as a multiplication of the integer part.
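A quick Python model of that approximation (my own illustration; real hardware would keep everything in fixed point): 2**x becomes 2**int(x) plus one conditional right-shifted add per fraction bit, i.e. 2**int(x) * (1 + frac(x)):

def pow2_approx(x, frac_bits=12):
    int_part = int(x)                       # assumes x >= 0 for simplicity
    base = 1 << (int_part + frac_bits)      # 2**int_part in fixed point
    frac = x - int_part
    result = base
    for i in range(1, frac_bits + 1):       # one conditional add per bit
        frac *= 2
        if frac >= 1:
            frac -= 1
            result += base >> i             # add base * 2**-i
    return result / (1 << frac_bits)

print(pow2_approx(7.4), 2 ** 7.4)  # ~179.2 vs ~168.9; the 1+f fit overshoots near .5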
As simple as it gets: 0.1 in binary is 0.5 in decimal, and raising to the power 0.5 is the same as taking a square root.
I've been working on floating-point numbers, and it took about 4-5 hours to figure this out for an implementation of the power function in the most simple and synthesizable way. Just go on with repeated square roots: for b"0.01" you want a double square root, sqrt(sqrt(x)), and for b"0.11" a square root times a double square root, sqrt(x)*sqrt(sqrt(x)), and so on...
This is a synthesizable implementation of the pow function...
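A Python model of the idea (not the VHDL itself; my own illustration): each fractional exponent bit costs one more nested square root, multiplied into the result when the bit is set:

import math

def pow2_frac(frac_bits):                 # frac_bits: binary digits after the point
    result = 1.0
    root = 2.0
    for bit in frac_bits:
        root = math.sqrt(root)            # one extra square root per bit position
        if bit:
            result *= root
    return result

print(pow2_frac([1, 1]))                  # 2**0.11b = sqrt(2) * sqrt(sqrt(2))
print(2 ** 0.75)                          # same value: 0.11b = 0.75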
I've noticed recently that there are a great many algorithms out there based in part or in whole on clever uses of numbers in creative bases. For example:
Binomial heaps are based on binary numbers, and the more complex skew binomial heaps are based on skew binary numbers.
Some algorithms for generating lexicographically ordered permutations are based on the factoradic number system.
Tries can be thought of as trees that look at one digit of the string at a time, for an appropriate base.
Huffman encoding trees are designed to have each edge in the tree encode a zero or one in some binary representation.
Fibonacci coding is used in Fibonacci search and to invert certain types of logarithms.
My question is: what other algorithms are out there that use a clever number system as a key step of their intuition or proof? I'm thinking about putting together a talk on the subject, so the more examples I have to draw from, the better.
Chris Okasaki has a very good chapter in his book Purely Functional Data Structures that discusses "Numerical Representations": essentially, take some representation of a number and convert it into a data structure. To give a flavor, here are the sections of that chapter:
Positional Number Systems
Binary Numbers (Binary Random-Access Lists, Zeroless Representations, Lazy Representations, Segmented Representations)
Skew Binary Numbers (Skew Binary Random Access Lists, Skew Binomial Heaps)
Trinary and Quaternary Numbers
Some of the best tricks, distilled:
Distinguish between dense and sparse representations of numbers (usually you see this in matrices or graphs, but it's applicable to numbers too!)
Redundant number systems (systems that have more than one representation of a number) are useful.
If you arrange the first digit to be non-zero or use a zeroless representation, retrieving the head of the data structure can be efficient.
Avoid cascading borrows (from taking the tail of the list) and carries (from consing onto the list) by segmenting the data structure.
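To give a taste of why these representations matter, here is a small Python sketch (my own, not Okasaki's code) of incrementing a skew binary number, the trick behind O(1) cons on skew binary random-access lists. The number is stored as the list of weights 2^k - 1 of its nonzero digits, smallest first; a duplicated first weight encodes the single allowed digit 2:

def skew_increment(weights):
    if len(weights) >= 2 and weights[0] == weights[1]:
        # lowest nonzero digit is 2: carry, since 2*(2^k - 1) + 1 = 2^(k+1) - 1
        return [2 * weights[0] + 1] + weights[2:]
    return [1] + weights                  # otherwise just prepend a 1-digit

ws, n = [], 0
for _ in range(10):
    ws = skew_increment(ws)               # constant work per increment
    n += 1
    assert sum(ws) == n                   # the weights always sum to the number

print(ws)                                 # [3, 7]: 10 = 3 + 7 in skew binary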
Here is also the reference list for that chapter:
Guibas, McCreight, Plass and Roberts: A new representation for linear lists.
Myers: An applicative random-access stack.
Carlsson, Munro, Poblete: An implicit binomial queue with constant insertion time.
Kaplan, Tarjan: Purely functional lists with catenation via recursive slow-down.
"Ternary numbers can be used to convey
self-similar structures like a
Sierpinski Triangle or a Cantor set
conveniently." source
"Quaternary numbers are used in the
representation of 2D Hilbert curves." source
"The quater-imaginary numeral system
was first proposed by Donald Knuth in
1955, in a submission to a high-school
science talent search. It is a
non-standard positional numeral system
which uses the imaginary number 2i as
its base. It is able to represent
every complex number using only the
digits 0, 1, 2, and 3." source
"Roman numerals are a biquinary system." source
"Senary may be considered useful in the
study of prime numbers since all
primes, when expressed in base-six,
other than 2 and 3 have 1 or 5 as the
final digit." source
"Sexagesimal (base 60) is a numeral
system with sixty as its base. It
originated with the ancient Sumerians
in the 3rd millennium BC, it was
passed down to the ancient
Babylonians, and it is still used — in
a modified form — for measuring time,
angles, and the geographic coordinates
that are angles." source
etc...
This list is a good starting point.
I read your question the other day, and today was faced with a problem: How do I generate all partitionings of a set? The solution that occurred to me, and that I used (maybe due to reading your question) was this:
For a set with n elements, where I need p partitions, count through all n-digit numbers in base p.
Each number corresponds to a partitioning. Each digit corresponds to an element in the set, and the value of the digit tells you which partition to put the element in.
It's not amazing, but it's neat. It's complete, causes no redundancy, and uses arbitrary bases. The base you use depends on the specific partitioning problem.
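A short Python sketch of that counting scheme (an illustration of my own):

from itertools import product

# Each n-digit base-p number assigns every element to one of p blocks.
def partitionings(elements, p):
    for digits in product(range(p), repeat=len(elements)):
        blocks = [[] for _ in range(p)]
        for element, d in zip(elements, digits):
            blocks[d].append(element)
        yield blocks

for blocks in partitionings(['a', 'b', 'c'], 2):
    print(blocks)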
I recently came across a cool algorithm for generating subsets in lexicographical order based on the binary representations of the numbers between 0 and 2^n - 1. It uses the numbers' bits both to determine which elements should be chosen for the set and to locally reorder the generated sets to get them into lexicographical order. If you're curious, I have a writeup posted here.
Also, many algorithms are based on scaling (such as a weakly-polynomial version of the Ford-Fulkerson max-flow algorithm), which uses the binary representation of the numbers in the input problem to progressively refine a rough approximation into a complete solution.
Not exactly a clever base system but a clever use of the base system: Van der Corput sequences are low-discrepancy sequences formed by reversing the base-n representation of numbers. They're used to construct the 2-d Halton sequences which look kind of like this.
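A quick Python sketch (my own) of a Van der Corput sequence: reflect the base-b digits of k about the radix point:

def van_der_corput(k, base=2):
    value, denom = 0.0, 1.0
    while k:
        denom *= base
        k, digit = divmod(k, base)        # peel off the lowest base-b digit
        value += digit / denom            # ...and place it after the point
    return value

print([van_der_corput(k) for k in range(1, 8)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]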
I vaguely remember something about double base systems for speeding up some matrix multiplication.
A double base system is a redundant system that uses two bases for one number:
n = Sum_{i=1..l} c_i * 2^{a_i} * 3^{b_i}, where c_i in {-1, 1}
Redundant means that one number can be specified in many ways.
You can look for the article "Hybrid Algorithm for the Computation of the Matrix Polynomial" by Vassil Dimitrov, Todor Cooklev.
Trying to give the best short overview I can.
They were trying to compute matrix polynomial G(N,A) = I + A + ... + A^{N-1}.
Supposing N is composite, G(N,A) = G(J,A) * G(K, A^J); if we apply this for J = 2, we get:
/ (I + A) * G(K, A^2) , if N = 2K
G(N,A) = |
\ I + (A + A^2) * G(K, A^2) , if N = 2K + 1
also,
/ (I + A + A^2) * G(K, A^3) , if N = 3K
G(N,A) = | I + (A + A^2 + A^3) * G(K, A^3) , if N = 3K + 1
\ I + A * (A + A^2 + A^3) * G(K, A^3) , if N = 3K + 2
As it's "obvious" (jokingly) that some of these equations are fast in the first system and some better in the second - so it is a good idea to choose the best of those depending on N. But this would require fast modulo operation for both 2 and 3. Here's why the double base comes in - you can basically do the modulo operation fast for both of them giving you a combined system:
/ (I + A + A^2) * G(K, A^3) , if N = 0 or 3 mod 6
G(N,A) = | I + (A + A^2 + A^3) * G(K, A^3) , if N = 1 or 4 mod 6
| (I + A) * G(3K + 1, A^2) , if N = 2 mod 6
\ I + (A + A^2) * G(3K + 2, A^2) , if N = 5 mod 6
Look at the article for a better explanation, as I'm not an expert in this area.
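For what it's worth, here is just the base-2 branch of that recursion as a Python sketch (my own, using NumPy; the paper's contribution is mixing the base-2 and base-3 branches):

import numpy as np

# G(N, A) = I + A + ... + A^(N-1), computed with O(log N) matrix products.
def G(N, A):
    I = np.eye(A.shape[0])
    if N == 1:
        return I
    K, r = divmod(N, 2)
    S = G(K, A @ A)
    return (I + A) @ S if r == 0 else I + (A + A @ A) @ S

A = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.allclose(G(5, A), sum(np.linalg.matrix_power(A, k) for k in range(5)))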
Radix sort can use various number bases.
http://en.wikipedia.org/wiki/Radix_sort
It's a pretty interesting application of bucket sort.
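A compact Python sketch of LSD radix sort with a configurable base (my own illustration):

def radix_sort(nums, base=10):            # non-negative integers only
    place = 1
    while any(n >= place for n in nums):
        buckets = [[] for _ in range(base)]
        for n in nums:
            buckets[(n // place) % base].append(n)   # stable bucket pass
        nums = [n for bucket in buckets for n in bucket]
        place *= base
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66], base=4))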
Here is a good post on using ternary numbers to solve the "counterfeit coin" problem (where you have to detect a single counterfeit coin in a bag of regular ones, using a balance as few times as possible).
String hashes (e.g. in the Rabin-Karp algorithm) often evaluate the string as a base-b number consisting of n digits (where n is the length of the string, and b is some chosen base that is large enough). For example, the string "ABCD" can be hashed as:
'A'*b^3+'B'*b^2+'C'*b^1+'D'*b^0
Substituting ASCII values for the characters and taking b to be 256, this becomes:
65*256^3+66*256^2+67*256^1+68*256^0
Though, in most practical applications, the resulting value is taken modulo some reasonably sized number to keep the result sufficiently small.
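For example, a minimal Python version of that hash (the constants here are illustrative, not canonical):

def poly_hash(s, base=256, mod=1_000_000_007):
    h = 0
    for ch in s:                          # Horner's rule: h = h*b + digit
        h = (h * base + ord(ch)) % mod
    return h

print(poly_hash("ABCD"))   # 65*256^3 + 66*256^2 + 67*256 + 68, mod 1e9+7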
Exponentiation by squaring is based on binary representation of the exponent.
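A minimal Python sketch: each binary digit of the exponent decides whether the current square gets multiplied into the result:

def power(base, exp):                     # exp: non-negative integer
    result = 1
    while exp:
        if exp & 1:                       # low bit set: include this square
            result *= base
        base *= base                      # square for the next binary digit
        exp >>= 1
    return result

assert power(3, 13) == 3 ** 13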
In Hacker's Delight (a book every programmer should know, in my eyes) there is a complete chapter about unusual bases, like base -2 (yes, negative bases) or base -1+i (where i is the imaginary unit, sqrt(-1)).
There is also a nice calculation of what the best base is in terms of hardware design. For everyone who doesn't want to read it: the solution of the equation is e, so you can go with 2 or 3; 3 would be a little bit better (a factor of 1.056 better than 2), but 2 is technically more practical.
Other things which come to mind are Gray code counters (when you count in this system, only one bit changes at a time; this property is often exploited in hardware design to reduce metastability issues) and the generalisation of the already-mentioned Huffman encoding: arithmetic encoding.
Cryptography makes extensive use of integer rings (modular arithmetic) and also finite fields, whose operations are intuitively based on the way polynomials with integer coefficients behave.
I really like this one for converting binary numbers into Gray codes: http://www.matrixlab-examples.com/gray-code.html
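The conversion itself is a one-liner, shown here in Python for reference:

def binary_to_gray(n):
    return n ^ (n >> 1)                   # XOR each bit with its left neighbour

print([bin(binary_to_gray(i)) for i in range(8)])
# consecutive codes differ in exactly one bit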
Great question. The list is long indeed.
Telling time is a simple instance of mixed bases (days | hours | minutes | seconds | am/pm)
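In Python terms, reading off those mixed-radix digits is just repeated divmod (a small illustration of my own):

def to_dhms(total_seconds):
    minutes, seconds = divmod(total_seconds, 60)   # base 60
    hours, minutes = divmod(minutes, 60)           # base 60
    days, hours = divmod(hours, 24)                # base 24
    return days, hours, minutes, seconds

print(to_dhms(123456))   # (1, 10, 17, 36)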
I've created a meta-base enumeration n-tuple framework if you're interested in hearing about it. It's some very sweet syntactic sugar for base numbering systems. It's not released yet. Email my username (at gmail).
One of my favourites using base 2 is Arithmetic Encoding. It's unusual because the heart of the algorithm uses representations of numbers between 0 and 1 in binary.
Maybe AKS, the polynomial-time primality test, is such a case.