Implementing a ceil function in Xilinx FPGA

I would like to take the ceiling of a signal in Simulink (Xilinx library). So, if for instance the signal value is 1.5, the output would be 2.
Any suggestion on how I can implement it in Simulink?
Also, I am keen to understand how, for instance, the floor and round functions could be implemented.
Are there any blocks in the Xilinx library that do this?
Thanks
Kiran

Not sure there's a block for it, but you could use an MCode block and put the MATLAB ceil function in it.
Or you could build a block that uses Slice blocks to separate the integer and fractional parts and increments the integer part if the fractional part is not zero.
For rounding and flooring, the Cast block will round or truncate for you; you have to manage the output type yourself, though.
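That Slice-based scheme can be sketched outside the tools. Here is a minimal Python sketch, assuming an unsigned fixed-point value held as a raw integer with a known number of fractional bits (the function name and layout are illustrative, not any Xilinx API):

```python
def fixed_point_ceil(raw, frac_bits):
    """Ceil of an unsigned fixed-point value stored as a raw integer.

    Mirrors the Slice-block idea: split off the integer and fractional
    fields, then increment the integer field if the fraction is non-zero.
    """
    int_part = raw >> frac_bits               # upper slice: integer bits
    frac_part = raw & ((1 << frac_bits) - 1)  # lower slice: fractional bits
    return int_part + (1 if frac_part else 0)

# 1.5 with 4 fractional bits is raw 0b11000 = 24
print(fixed_point_ceil(24, 4))  # -> 2
```

In hardware the increment is a one-bit add on the upper slice, and the non-zero test on the fractional slice is just an OR-reduction of its bits.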


Ada random Integer in range of array length

It's a simple question, yet I can't find anything that could help me...
I want to create some random connections between graph nodes. To do this I want to draw two random indexes and then connect the nodes.
declare
   type randRange is range 0..100000;
   n1 : randRange;
   n2 : randRange;
   package Rand_Int is new ada.numerics.discrete_random(randRange);
   use Rand_Int;
   gen : Generator;
begin
   n1 := random(gen) mod n; -- first node
   n2 := random(gen) mod n;
I wanted to define the range with the length of my array, but I got errors. Still, it doesn't compile.
Also, I can't perform modulo, as n is Natural.
75:15: "Generator" is not visible
75:15: multiple use clauses cause hiding
75:15: hidden declaration at a-nudira.ads:50, instance at line 73
75:15: hidden declaration at a-nuflra.ads:47
And I have no idea what these errors mean - obviously, something is wrong with my generator.
I would appreciate it if someone showed me a proper way to do this simple thing.
As others have answered, the invisibility of Generator is due to you having several "use" clauses for packages all of which have a Generator. So you must specify "Rand_Int.Generator" to show that you want the Generator from the Rand_Int package.
The problem with the "non-static expression" happens because you try to define a new type randRange, and that means the compiler has to decide how many bits it needs to use for each value of the type, and for that the type must have compile-time, i.e. static, bounds. You can instead define it as a subtype:
subtype randRange is Natural range 0 .. n-1;
and then the compiler knows that it can use the same number of bits as it uses for the Natural type. (I assume here that "n" is an Integer, or Natural or Positive; otherwise, use whatever type "n" is.)
Using a subtype should also resolve the problem with the "expected type".
You don't show us the whole code necessary to reproduce the errors, but the error messages suggest you have another use clause somewhere, a use Ada.Numerics.Float_Random;. Either remove that, or specify which generator you want, i.e. gen : Rand_Int.Generator;.
As for mod, you should instead specify the exact range you want when instantiating Discrete_Random:
subtype randRange is Natural range 0 .. n-1; -- but why start at 0? A list of nodes is better described with 1..n
package Rand_Int is new ada.numerics.discrete_random(randRange);
Now there's no need for mod.
The error messages you mention have to do with the concept of visibility in Ada, which differs from most other languages. Understanding visibility is key to understanding Ada. I recommend that beginners avoid use <package> in order to avoid the visibility issues involved with such use clauses. As you gain experience with the language you can experiment with using common packages such as Ada.Text_IO.
As you seem to come from a language in which arrays have to have integer indices starting from zero, I recommend Ada Distilled, which does an excellent job of describing visibility in Ada. It covers Ada as defined by ISO/IEC 8652:2007, but you should have no difficulty picking up Ada 2012 from that basis.
If you're interested in the issues involved in obtaining a random integer value in a subrange of an RNG's result range, or from a floating-point random value, you can look at PragmARC.Randomness.Real_Ranges and PragmARC.Randomness.U32_Ranges in the PragmAda Reusable Components.

Chicken Scheme - How to convert a complex number (for example (sqrt 2)) to an integer, regardless of rounding strategy?

I am working on a C extension for Chicken Scheme and have everything in place, but I am running into an issue with complex number types.
My code can only handle integers, and when any math is done that involves, say, a square root, my extension may end up having to handle complex numbers.
I just need to remove the decimal place and get whatever integer is close by. I am not worried about accuracy for this.
I have looked around and through the code but did not find anything.
Thanks!
Well, you can inspect the number type from the header tag. A complex number is a block object which has two slots: the real and imaginary parts. Those numbers themselves can be ratnums, flonums, fixnums or bignums. You'll need to handle those situations as well if you want to do it all in C.
It's probably a lot easier to declare your C code as accepting an integer and do any conversion necessary in Scheme.

Unconstrained std_logic_vector

I have an assignment to create a test bench for an N-bit multiplier. The code looks odd to me: it is an N-bit black box, but the std_logic_vectors have no specified size. I'm guessing this is handled by the test bench. I have not seen this before and was hoping someone could explain how it works.
VHDL supports unconstrained std_logic_vectors. This means that you can design a block that doesn't specify the length of the inputs and outputs. There can be a number of pitfalls to doing this (see this article), but in the case of something like a multiplier it can help with code reuse. You get to define what the input and output widths of the block are by connecting them to a std_logic_vector of the desired width.
Since you indicated that the block is a multiplier, I would check the documentation or interface to see if there are generics associated with the port widths. That is a common way of creating a generic block interface with I/O specific logic inside. That way you can specify how "wide" the logic needs to be, without having to have a separate block for every possible width.

Integer to Binary Conversion in Simulink

This might look like a repetition of my earlier question, but I think it's not.
I am looking for a technique to convert a signal in decimal format to binary format.
I intend to use the Simulink blocks in the Xilinx library to convert decimal to binary format.
So if the input is 3, the expected output should be 11 (over 2 clock cycles). I am looking for the output to be produced serially.
Please suggest how to do it; any pointers on the internet would be helpful.
Thanks
You are correct: what you need is the Parallel to Serial block from System Generator.
It is described in this document:
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_ref.pdf
This block is a rate changing block. Check the mentions of the parallel to serial block in these documents for further descriptions:
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_gs.pdf
http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_1/sysgen_user.pdf
Use a normal constant block with a Matlab variable in it; this already gives the output in "normal" binary (assuming you set the properties on it to be unsigned and the binary point at 0).
Then you need to write a small serialiser block, which takes that input, latches it into a shift register and then shifts the register once per clock cycle, with the bit that "falls off the end" becoming your output bit. Depending on which way you shift, you can make it come out MSB first or LSB first.
You'll have to build the shift register out of ordinary registers, with a mux before each one to select whether you are doing a parallel load or shifting. (This is the sort of thing which is a couple of lines of code in VHDL, but a right faff in graphics.)
If you have to increase the serial rate, you need to clock it from a faster clock - you could use a DCM to generate this.
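The latch-and-shift behaviour described above can be modelled in a few lines of Python (a sketch only; `serialise` is a hypothetical name, and each loop iteration stands in for one clock cycle of the hardware):

```python
def serialise(value, width):
    """Yield the bits of `value` MSB-first, one per 'clock cycle'.

    Models a parallel-load shift register: the word is latched, then
    shifted once per cycle, the bit falling off becoming the output.
    """
    reg = value & ((1 << width) - 1)  # parallel load into the register
    for _ in range(width):
        yield (reg >> (width - 1)) & 1          # MSB falls off the end
        reg = (reg << 1) & ((1 << width) - 1)   # shift left by one

# 3 over 2 clock cycles -> bits 1, 1
print(list(serialise(3, 2)))  # -> [1, 1]
```

Shifting right and emitting bit 0 instead would give the LSB-first ordering mentioned above.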
Matlab has a dec2bin function that will convert from a decimal number to a binary string. So, for example, dec2bin(3) would return '11'.
There's also a corresponding bin2dec which takes a binary string and converts to a decimal number, so that bin2dec('11') would return 3.
If you're wanting to convert a non-integer decimal number to a binary string, you'll first want to determine what's the smallest binary place you want to represent, and then do a little bit of pre- and post-processing, combined with dec2bin to get the results you're looking for. So, if the smallest binary place you want is the 1/512th place (or 2^-9), then you could do the following (where binPrecision equals 1/512):
function result = myDec2Bin(decNum, binPrecision)
    % Convert a decimal number to a signed binary string, down to the
    % binary place given by binPrecision (e.g. 1/512 for 2^-9).
    isNegative = (decNum < 0);
    intPart = floor(abs(decNum));
    binaryIntPart = dec2bin(intPart);                 % integer part as a binary string
    fracPart = abs(decNum)-intPart;
    scaledFracPart = round(fracPart/binPrecision);    % fraction scaled to an integer
    scaledBinRep = dec2bin(scaledFracPart);
    % Zero-pad the fractional string to log2(1/binPrecision) digits by
    % prepending a 1 (in decimal), converting, and dropping it again.
    temp = num2str(10^log2(1/binPrecision)+str2num(scaledBinRep),'%d');
    result = [binaryIntPart,'.',temp(2:end)];
    if isNegative
        result = ['-',result];
    end
end
The result of myDec2Bin(0.256, 1/512) would then be 0.010000011, and the result of myDec2Bin(-0.984, 1/512) would be -0.111111000. (Note that the output is a string.)
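The scaling those examples rely on can be sanity-checked in plain Python (same arithmetic the MATLAB function performs for 0.256 and 0.984 at 1/512 precision):

```python
# Scale each fraction to an integer count of 1/512ths, then print it as
# a zero-padded 9-digit binary string (9 = log2(512)).
for frac in (0.256, 0.984):
    scaled = round(frac * 512)          # 131 and 504
    print(format(scaled, '09b'))
# -> 010000011
# -> 111111000
```

These match the fractional digits of the quoted results 0.010000011 and -0.111111000.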

Derivative of a program

Let us assume you can represent a program as a mathematical function; that's possible. What does the program representation of the first derivative of that function look like? Is there a way to transform a program into its "derivative" form, and does this make sense at all?
Yes, it does make sense; it's known as Automatic Differentiation. There are one or two experimental compilers which can do this, for example NAGware's Differentiation Enabled Fortran Compiler Technology. And there are a lot of research papers on the topic. I suggest you get Googling.
First, it only makes sense to try to get the derivative of a pure function (one that does not affect external state and returns the exact same output for every input). Second, the type system of many programming languages involves a lot of step functions (e.g. integers), meaning you'd have to get your program to work in terms of continuous functions in order to get a valid first derivative. Third, getting the derivative of any function involves breaking it down and manipulating it symbolically. Thus, you can't get the derivative of a function without knowing what operations it is made of. This could be achieved with reflection.
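One standard way to differentiate a program without symbolic manipulation or reflection is forward-mode automatic differentiation. A minimal Python sketch using dual numbers (the `Dual` class and `derivative` helper are illustrations, not any library's API):

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):  # product rule on the derivative slot
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the derivative slot with 1 and read it back off the result.
    return f(Dual(x, 1.0)).der

# f(x) = x*x + 3*x  ->  f'(x) = 2*x + 3
print(derivative(lambda x: x * x + 3 * x, 2.0))  # -> 7.0
```

Each arithmetic operation propagates both the value and its derivative, so the result is exact for the operations overloaded, with no rewriting of the program text.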
You could create a derivative approximation function if your programming language supports closures (that is, nested functions and the ability to put functions into variables and return them). Here is a JavaScript example taken from http://en.wikipedia.org/wiki/Closure_%28computer_science%29 :
function derivative(f, dx) {
    return function(x) {
        return (f(x + dx) - f(x)) / dx;
    };
}
Thus, you could say:
function f(x) { return x*x; }
f_prime = derivative(f, 0.0001);
Here, f_prime will approximate function(x) {return 2*x;}
If a programming language implemented higher-order functions and enough algebra, one could implement a real derivative function in it. That would be really cool.
See Lambda the Ultimate discussions on Derivatives and dissections of data types and Derivatives of Regular Expressions
How do you define the mathematical function of a program?
A derivative represent the rate of change of a function. If your function isn't continuous its derivative will be undefined over most of the domain.
I'm just gonna say that this doesn't make a lot of sense, as a program is much more abstract and "ruleless" than a mathematical function. As a derivative is a measure of the change in output as the input changes, there are certainly some programs where this could apply. However, you'd need to be able to quantify your input and output in numerical terms.
Since input and output would both be numerical, it's reasonable to assume that your program represents or operates similarly to a mathematical function, or a series of functions. Hence, you can easily represent a derivative, but it would be no different from converting the mathematical derivative of a function to a computer program.
If the program is denoted as a distribution (in the Schwartz sense) then you have some notion of derivative, assuming that test functions model your postcondition (you can still take the limit to get a characteristic function). For instance, the assignment x:=x+1 is associated to the Dirac distribution \delta_{x_0+1}, where x_0 is the initial value of the variable x. However, I have no idea what the computational meaning of \delta_{x_0+1}' is.
I am wondering: what if the program you're trying to "derive" uses some form of heuristics? How can it be derived then?
Half-jokingly, we all know that all real programs use at least a rand().
