I'm looking into implementing a 4-bit BitSet function at the logic gate level so that it can be written in structural Verilog. I have looked elsewhere for an answer to this question, but can only find C/C++ resources, which operate at a higher level and are mostly unhelpful to me.
The inputs to my interface are a 4-bit number x; a two-bit number index, containing the index in x to be set or cleared; and a one-bit number value, containing the value x[index] should be set to (1 or 0 to set or clear, respectively). The output is a 4-bit number y, which is the final outcome of x.
To my understanding, setting a bit in x follows the logic y |= 1 << index,
and clearing a bit follows y &= ~(1 << index), such that if value is equal to 1, sending it through an OR gate with the bit already at that index of x will result in a 1, and if value is equal to 0, sending it through an AND gate with the bit already at that index of x will result in a 0. This makes sense to me.
It also makes sense that if I am starting with a 4-bit number x, that I might put it through a 1-to-4 DEMUX block (aside from the basic logic gates, I have MUX, DEMUX, magnitude comparators, and binary adders at my disposal) to obtain the individual bits.
What I am unsure about is how to get from the four separate bits to selecting one of them to modify based on the value stored in index using only basic logic gates. Any ideas or pointers for me to start from? Am I thinking about this the right way?
I think you are looking for something like this:
reg [3:0] x;
reg [3:0] y;
reg [1:0] index;
// won't work in synthesis
x[index] = 0;
In Verilog you can access each bit individually using [Bit_Number].
But in your case the index is not constant, which will end up failing during synthesis. What you can do is write an if/else chain to check the index and change the right bit, like so:
if (index == 1)       // change bit one
    x[1] = 1'b0;      // value to assign is zero here
else if (index == 2)
    x[2] = 1'b0;
A side note:
You can also assign values
// Here x gets the bit 1 and bit 0 of y concatenated with the value of index
x = {y[1:0],index};
So assume
y = 4'b1011;
index = 2'b00;
x = {y[1:0], index}; // x = 4'b1100
I need to generate random numbers in a very large range, 128-bit integers, and I will generate many of them. I'll generate so many of them that I cannot fit a list of the generated numbers into memory.
I also have the requirement that the generated numbers do not repeat, or at least that the probability of repetition is vanishingly small.
Is there an algorithm that does this?
Build a 128-bit linear congruential generator or a 128-bit linear feedback shift register generator. With properly chosen coefficients, either of those will achieve a full cycle, meaning no repeats until you've exhausted all outcomes.
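As an illustration, here is a minimal C sketch of a 128-bit LCG, assuming a compiler that provides unsigned __int128 (GCC/Clang). The multiplier shown is only illustrative; the full 2^128 period relies on the Hull-Dobell conditions (multiplier congruent to 1 mod 4, increment odd), not on these particular constants.

#include <stdint.h>

// Minimal sketch: 128-bit LCG, state wraps mod 2^128.
// Full period needs: increment odd, multiplier % 4 == 1 (Hull-Dobell).
typedef unsigned __int128 u128;

static u128 lcg_state = 1;

static u128 lcg128_next(void)
{
    // multiplier % 4 == 1 (low hex digit is 5), increment = 1 (odd)
    const u128 a = ((u128)0x2360ed051fc65da4ULL << 64) | 0x4385df649fccf645ULL;
    lcg_state = lcg_state * a + 1;
    return lcg_state;
}

Note that the low bits of a raw power-of-two LCG are weak, which is exactly the quality issue the next answer's tempering discussion addresses.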
Any full-period PRNG with a 128-bit state will do what you need in principle. Unfortunately many of these generators tend to produce only 32 or 64 bits per iteration while the rest of the state goes through a predictable permutation (LFSRs being the worst case, producing only 1 bit per iteration). Each 128-bit state is unique, but many of its bits would show a trivial relation to the previous state.
This can be overcome with tempering -- taking your questionable-quality PRNG state with a known-good period, and permuting it through a 1:1 transform to hide the not-so-random factors.
For example, borrowing from the example xorshift+ shown on Wikipedia:
static uint64_t s[2] = { 1, 0 };
void random128(uint64_t result[]) {
uint64_t x = s[0];
uint64_t y = s[1];
x ^= x << 23;
x ^= y ^ (x >> 17) ^ (y >> 26);
s[0] = y;
s[1] = x;
At this point we know that s[0] is just the old value of s[1], which would be a terrible PRNG if all 128 bits were exposed (normally only s[1] is exposed). To overcome this we permute the result to disguise that relationship (following the same principle as a Feistel network to ensure that the transform is 1:1).
y += x * 1630144151483159999;
x ^= y >> 3;
result[0] = x;
result[1] = y;
}
This seems to be sufficient to pass diehard. So long as the original generator has full(ish) period, the whole generator should be full period too.
The logical conclusion to tempering a low-quality generator is to use AES-128 in counter mode. Simply run a counter from 0 to 2**128-1 (an extremely low-quality generator), and encrypt each value using AES-128 and a consistent key (an ideal temper) for your final output.
If you do this, don't get distracted by full cryptographic RNG requirements. Those involve re-seeding and consequently can produce the same number more than once (which is more random, but it's what you want to avoid).
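As a rough C sketch of the counter-mode idea: the aes128_encrypt_block helper below is hypothetical and stands in for whatever single-block encryption call your AES library actually provides.

#include <stdint.h>
#include <string.h>

// Hypothetical helper from your AES library of choice:
// encrypts the 16-byte block 'in' with 'key' into 'out'.
void aes128_encrypt_block(const uint8_t key[16], const uint8_t in[16], uint8_t out[16]);

static uint64_t ctr_lo = 0, ctr_hi = 0;   // 128-bit counter, starts at 0

void random128_ctr(const uint8_t key[16], uint8_t out[16])
{
    uint8_t block[16];
    memcpy(block, &ctr_lo, 8);            // pack the counter into one block;
    memcpy(block + 8, &ctr_hi, 8);        // exact byte order only needs to be consistent
    aes128_encrypt_block(key, block, out);
    if (++ctr_lo == 0) ++ctr_hi;          // 128-bit increment
}

Since AES-128 under a fixed key is a bijection on 128-bit blocks, distinct counter values always encrypt to distinct outputs, which gives the no-repeat property for free.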
I'm currently working on a design in which I need to do sgn(x)*y, where both x and y are signed vectors. What is the preferred method to implement a synthesizable sgn function in VHDL with signed vectors? I would use the SIGN function in the IEEE.math_real package but it seems like I won't be able to synthesize it.
You don't really need a sign function to accomplish what you need. In numeric_std the leftmost bit is always the sign bit for the signed type. Examine this bit to decide if you need to negate y.
variable z : signed(y'range);
...
if x(x'left) = '1' then -- Negative
z := -y; -- z := -1 * y
else
z := y; -- z := 1 * y
end if;
-- The same as a VHDL-2008 one-liner
z := -y when x(x'left) else y;
Modify as needed if you need to do this with signal assignments.
If you are only interested in implementing the sign function itself, the best option is to define a block whose input is the signed vector x and whose output is another vector carrying the value of the sign.
The way to design that block is to take the most significant bit of the vector, since it indicates the sign. The output should then be one (bit 0 set to '1', all other bits '0') if that bit is 0 (positive or zero value), or all ones (others => '1', i.e. minus one) if that bit is 1 (negative value).
You can also define that if the input value is 0, the output is 0 as well.
Let me be clear at start that this is a contrived example and not a real world problem.
Suppose I have the problem of creating a random number between 0 and 10. I do this 11 times, making sure that a previously drawn number is not drawn again; if I get a repeated number,
I create another random number to make sure it has not been seen earlier. So essentially I get a sequence of unique numbers from 0 to 10 in a random order,
e.g. 3 1 2 0 5 9 4 8 10 6 7 and so on
Now, to come up with logic to make sure that the random numbers are unique and not ones we have drawn before, we could use many approaches:
Use C++ std::bitset and set the bit corresponding to the value of each random number, then check it the next time a new random number is drawn.
Or
Use a std::map<int,int> to count occurrences, or even a simple C array with sentinel values stored in it to indicate whether a number has occurred or not.
If I have to avoid these methods above and use some mathematical/logical/bitwise operation to find whether a random number has been drawn before or not, is there a way?
You don't want to do it the way you suggest. Consider what happens when you have already selected 10 of the 11 items; your random number generator will cycle until it finds the missing number, which might be never, depending on your random number generator.
A better solution is to create a list of the numbers 0 to 10 in order, then shuffle the list into a random order. The standard algorithm for doing this is due to Fisher, Yates and Knuth: starting at the last element, swap each element with a randomly chosen element at an index no greater than the current one.
function shuffle(a, n)
for i from n-1 to 1 step -1
j = randint(i)
swap(a[i], a[j])
We assume an array with indices 0 to n-1, and a randint function that returns j in the range 0 <= j <= i.
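A minimal C version of that shuffle, assuming the slightly biased rand() % (i + 1) is acceptable as a stand-in for a proper randint:

#include <stdlib.h>

// Fisher-Yates shuffle of a[0..n-1].
void shuffle(int a[], int n)
{
    for (int i = n - 1; i >= 1; i--) {
        int j = rand() % (i + 1);   // 0 <= j <= i (slightly biased, for brevity)
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

For the question's case, fill an array with the values 0 to 10, call shuffle(a, 11), and read the elements off in order.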
Use an array and add all possible values to it. Then pick one out of the array and remove it. Next time, pick again until the array is empty.
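A small C sketch of that idea (the names pool and draw are illustrative); removing the chosen value by swapping it with the last remaining element keeps each draw O(1):

#include <stdlib.h>

// Draw one value without replacement from pool[0..*count-1].
int draw(int pool[], int *count)
{
    int i = rand() % *count;        // pick one of the remaining values
    int value = pool[i];
    pool[i] = pool[*count - 1];     // remove it by overwriting with the last element
    (*count)--;
    return value;
}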
Yes, there is a mathematical way to do it, but it is a bit expensive.
Have an array primes[] where primes[i] is the i'th prime number, so its beginning is [2, 3, 5, 7, 11, ...].
Also store a number mult. Now, once you draw a number (let it be i), check whether mult % primes[i] == 0. If it is, the number was drawn before; if it isn't, the number was not, so choose it and do mult = mult * primes[i].
However, it is expensive because it might require a lot of space for large ranges (the possible value of mult grows exponentially).
(This is a nice mathematical approach because we are really working with a set of primes p_i; the array of primes is only the implementation of that abstract set.)
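For the small 0-10 range this can even be demonstrated with plain 64-bit arithmetic, since the product of the first eleven primes (200560490130) fits in a uint64_t. A minimal sketch with illustrative names:

#include <stdint.h>

static const uint64_t primes[11] = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31 };
static uint64_t mult = 1;           // product of the primes of all numbers drawn so far

// Returns 1 if i was already drawn; otherwise records i and returns 0.
int seen_before(int i)
{
    if (mult % primes[i] == 0)
        return 1;                   // primes[i] already divides mult
    mult *= primes[i];
    return 0;
}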
A bit-manipulation alternative for small values is using an int or long as a bitset.
With this approach, to check that a candidate i is not in the set you only need to check:
if ((set & (1L << i)) == 0) // not in the set
else // already in the set
To add an element i to the set:
set = set | (1L << i)
A better approach would be to populate a list with all the numbers, shuffle it with a Fisher-Yates shuffle, and iterate over it to get new random numbers.
If I have to avoid these methods above and use some
mathematical/logical/bitwise operation to find whether a random number
has been draw before or not, is there a way?
Subject to your contrived constraints, yes, you can imitate a small bitset using bitwise operations:
You can choose different integer types on the right according to what size you need.
bitset code              bitwise code
std::bitset<32> x;       unsigned long x = 0;
if (x[i]) { ... }        if (x & (1UL << i)) { ... }
                         // assuming v is 0 or 1
x[i] = v;                x = (x & ~(1UL << i)) | ((unsigned long)v << i);
x[i] = true;             x |= (1UL << i);
x[i] = false;            x &= ~(1UL << i);
For a larger set (beyond the size in bits of unsigned long long), you will need an array of your chosen integer type. Divide the index by the width of each value to know what index to look up in the array, and use the modulus for the bit shifts. This is basically what bitset does.
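A minimal sketch of that multi-word scheme in C (names and the array size are illustrative):

#include <limits.h>

#define BITS (sizeof(unsigned long) * CHAR_BIT)

// Bit i lives in w[i / BITS], at position i % BITS.
void set_bit(unsigned long w[], unsigned i)   { w[i / BITS] |= 1UL << (i % BITS); }
void clear_bit(unsigned long w[], unsigned i) { w[i / BITS] &= ~(1UL << (i % BITS)); }
int  test_bit(const unsigned long w[], unsigned i) { return (w[i / BITS] >> (i % BITS)) & 1; }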
I'm assuming that the various answers that tell you how best to shuffle 10 numbers are missing the point entirely: that your contrived constraints are there because you do not in fact want or need to know how best to shuffle 10 numbers :-)
Keep a variable to map the drawn numbers. The i'th bit of that variable will be 1 if the number i was drawn before:
int mapNumbers = 0;

int generateRand() {
    // return if all 11 numbers (bits 0..10) have been generated
    if ((mapNumbers & ((1 << 11) - 1)) == ((1 << 11) - 1)) return -1;
    int x;
    do {
        x = newVal();                    // draw a candidate
    } while (mapNumbers & (1 << x));     // redraw while the bit for x is already set
    mapNumbers |= (1 << x);
    return x;
}
How do I compute the integer absolute value without using an if condition?
I guess we need to use some bitwise operation.
Can anybody help?
Same as existing answers, but with more explanations:
Let's assume a two's-complement number (as it's the usual case and you don't say otherwise) and let's assume 32 bits.
First, we perform an arithmetic right shift by 31 bits. This shifts in all 1s for a negative number and all 0s for a positive one. (Note that the actual >> operator's behaviour in C or C++ is implementation-defined for negative numbers, but it will usually perform an arithmetic shift; let's just assume pseudocode or actual hardware instructions, since this sounds like homework anyway.)
mask = x >> 31;
So what we get is 111...111 (-1) for negative numbers and 000...000 (0) for positives
Now we XOR this with x, getting the behaviour of a NOT for mask=111...111 (negative) and a no-op for mask=000...000 (positive):
x = x XOR mask;
And finally subtract our mask, which means +1 for negatives and +0/no-op for positives:
x = x - mask;
So for positives we perform an XOR with 0 and a subtraction of 0 and thus get the same number. And for negatives, we get (NOT x) + 1, which is exactly -x in two's-complement representation.
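Put together as a small C function (assuming 32-bit int and an arithmetic right shift, as above):

int abs32(int x)
{
    int mask = x >> 31;   // all 1s for negative x, all 0s otherwise
    x = x ^ mask;         // NOT for negatives, no-op for positives
    return x - mask;      // subtracting -1 adds 1; subtracting 0 does nothing
}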
Set the mask as right shift of integer by 31 (assuming integers are stored as two's-complement 32-bit values and that the right-shift operator does sign extension).
mask = n>>31
XOR the mask with number
mask ^ n
Subtract mask from result of step 2 and return the result.
(mask^n) - mask
Assume int is of 32-bit.
int my_abs(int x)
{
int y = (x >> 31);
return (x ^ y) - y;
}
One can also perform the above operation as:
return n*(((n>0)<<1)-1);
where n is the number whose absolute value needs to be calculated.
In C, you can use unions to perform bit manipulations on doubles. The following will work in C, and the same approach can be used for integers, floats, and doubles.
#include <stdint.h>

/**
 * Calculates the absolute value of a double.
 * @param x An 8-byte floating-point double
 * @return A positive double
 * @note Uses bit manipulation and does not care about NaNs
 */
double abs(double x)
{
union{
uint64_t bits;
double dub;
} b;
b.dub = x;
//Sets the sign bit to 0
b.bits &= 0x7FFFFFFFFFFFFFFF;
return b.dub;
}
Note that this assumes that doubles are 8 bytes.
I wrote my own, before discovering this question.
My answer is probably slower, but still valid:
int abs_of_x = ((x*(x >> 31)) | ((~x + 1) * ((~x + 1) >> 31)));
If you are not allowed to use the minus sign you could do something like this:
int absVal(int x) {
return ((x >> 31) + x) ^ (x >> 31);
}
For assembly the most efficient would be to initialize a value to 0, subtract the integer, and then take the max:
pxor   mm1, mm1 ; set mm1 to all zeros
psubw  mm1, mm0 ; make each mm1 word contain the negative of each mm0 word
pmaxsw mm1, mm0 ; mm1 will contain only the positive (larger) values - the absolute value
In C#, you can implement abs() without using any local variables:
public static long abs(long d) => (d + (d >>= 63)) ^ d;
public static int abs(int d) => (d + (d >>= 31)) ^ d;
Note: regarding 0x80000000 (int.MinValue) and 0x8000000000000000 (long.MinValue):
As with all of the other bitwise/non-branching methods shown on this page, this gives the single non-mathematical result abs(int.MinValue) == int.MinValue (likewise for long.MinValue). These represent the only cases where result value is negative, that is, where the MSB of the two's-complement result is 1 -- and are also the only cases where the input value is returned unchanged. I don't believe this important point was mentioned elsewhere on this page.
The code shown above depends on the value of d used on the right side of the xor being the value of d updated during the computation of left side. To C# programmers this will seem obvious. They are used to seeing code like this because .NET formally incorporates a strong memory model which strictly guarantees the correct fetching sequence here. The reason I mention this is because in C or C++ one may need to be more cautious. The memory models of the latter are considerably more permissive, which may allow certain compiler optimizations to issue out-of-order fetches. Obviously, in such a regime, fetch-order sensitivity would represent a correctness hazard.
If you don't want to rely on the implementation performing sign extension when right-shifting, you can modify the way you calculate the mask:
mask = ~((n >> 31) & 1) + 1
then proceed as was already demonstrated in the previous answers:
(n ^ mask) - mask
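As a complete function, under the same 32-bit two's-complement assumption:

// Sketch: abs() without relying on an arithmetic (sign-extending) right shift.
int abs_no_sign_ext(int n)
{
    int mask = ~((n >> 31) & 1) + 1;   // 0 if the sign bit is clear, -1 (all 1s) if set
    return (n ^ mask) - mask;
}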
What is the programming language you're using? In C# you can use the Math.Abs method:
int value1 = -1000;
int value2 = 20;
int abs1 = Math.Abs(value1);
int abs2 = Math.Abs(value2);
If I have a 32-bit two's complement number, what is the easiest way to know if two numbers are equal? What would be the fastest bitwise operator for this? I know XORing both numbers and checking whether the result is zero works well... any other ones?
And how about checking whether a number is greater than 0? I can check the 31st bit to see whether it's greater than or equal to 0... but how about bgtz?
Contrary to your comments, '==' is part of Verilog, and unless my memory is a lot worse than usual tonight, it should synthesize just fine. Just for example, you could write something like:
// warning: untested, incomplete and utterly useless in any case.
// It's been a while since I wrote much Verilog, so my syntax is probably a bit off
// anyway (might well be more like VHDL than it should be).
//
module add_when_equal(clock, a, b, x, y, z);
input clock;
input [31:0] a, b, x, y;
output [31:0] z;
reg [31:0] z;

always @(posedge clock) begin: main
    if (a == b)
        z <= x + y;
end
endmodule
Verilog also supports the other comparison operators you'd normally expect (!=, <=, etc.). Synthesizers are fairly "smart", so something like x != 0 will normally synthesize to an N-input OR gate instead of a comparator.
// this should work as a comparator for equality
wire [31:0] Cmp1, Cmp2;
wire Equal;
// use only one of the two assignments (a single driver for Equal):
assign Equal = &(Cmp1 ~^ Cmp2);    // reduction AND of bitwise XNOR
// assign Equal = ~|(Cmp1 ^ Cmp2); // reduction NOR of bitwise XOR
If you can XOR two values and then compare the result with zero, then you can compare a result with some value; and if you can compare something with a value, then you can just compare the two values directly, without using an XOR and a 32-bit zero.