I'm trying to use the Eigen library. In particular I'm using the SVD.
After the calculation of the singular values I need to perform this operation:
svd.singularValues()/svd.singularValues().row(1)
which is a vector divided by a scalar.
My questions are:
1) Why does this operation give me:
main.cpp:149:56: error: no match for ‘operator/’ (operand types are
‘const SingularValuesType {aka const Eigen::Matrix}’ and
‘Eigen::DenseBase >::ConstRowXpr {aka
const Eigen::Block, 1, 1, false>}’)
2) How can I copy the value contained in svd.singularValues().row(1) into a standard "double" variable?
Note that svd.singularValues().row(1) is not a scalar but a 1x1 matrix, which is why your code does not compile. Indexing the element with operator() instead yields a plain double, which also answers your second question. Solution:
svd.singularValues()/svd.singularValues()(1)
and also note that, as usual in C/C++, Eigen's matrices and vectors are 0-based indexed, so if you want to normalize by the largest singular value you should do:
svd.singularValues()/svd.singularValues()(0)
Related
I am working in sympy with symbolic matrices.
Once made explicit I can not return to implicit representations.
I tried to work something out with the pair .as_explicit() and MatrixExpr.from_index_summation(expr), but the latter seems to expect an explicit sigma-notation sum, not a sum of indexed elements.
As a minimal working example here is my approach on matrix multiplication:
from sympy import MatrixSymbol, MatrixExpr

A = MatrixSymbol('A', 3, 4)
B = MatrixSymbol('B', 4, 3)
Matrix_Notation = A * B
Expanded = (A * B).as_explicit()
FromSummation = MatrixExpr.from_index_summation(Expanded)
Here we can see that FromSummation is still the same as Expanded.
I suppose the Expanded expression should first be converted to sigma sums, so that .from_index_summation can be expected to work. But how can this be done?
I am trying to implement an equation in VHDL that has multiplication by some constants and addition. The equation is below:
y<=-((x*x*x)*0.1666)+(2.5*(x*x))- (21.666*x) + 36.6653; ----error
I got the error
HDLCompiler:1731 - found '0' definitions of operator "*",
can not determine exact overloaded matching definition for "*".
The entity is:
entity eq1 is
Port ( x : in signed(15 downto 0);
y : out signed (15 downto 0) );
end eq1;
I tried using the RESIZE function and x as an integer, but it gives the same error. Should I use another data type? x holds pure integer values like 2, 4, 6, etc.
Since x and y are of datatype signed, you can multiply them. However, there is no multiplication of signed with real. Even if there was, the result would be real (not signed or integer).
So first, you need to figure out what you want (the semantics). Then you should add type casts and conversion functions.
y <= x*x; -- OK
y <= 0.5 * x; -- not OK
y <= to_signed(integer(0.5 * real(to_integer(x))),y'length); -- OK
This is another case where simulating before synthesis might be handy. ghdl, for instance, tells you which "*" operator the first error is found for:
ghdl -a implement.vhdl
implement.vhdl:12:21: no function declarations for operator "*"
y <= -((x*x*x) * 0.1666) + (2.5 * (x*x)) - (21.666 * x) + 36.6653;
---------------^ character position 21, line 12
The expressions with x multiplied have both operands with a type of signed.
(And for later, we also note that the complex expression on the right hand side of the signal assignment operation will eventually be interpreted as a signed value with a narrow subtype constraint when assigned to y).
VHDL determines the type of the literal 0.1666: it's an abstract literal, that is, a decimal literal or floating-point literal (IEEE Std 1076-2008, 5.2.5 Floating-point types, 5.2.5.1 General, paragraph 5):
Floating-point literals are the literals of an anonymous predefined type that is called universal_real in this standard. Other floating-point types have no literals. However, for each floating-point type there exists an implicit conversion that converts a value of type universal_real into the corresponding value (if any) of the floating-point type (see 9.3.6).
There's only one predefined floating-point type in VHDL, see 5.2.5.2, and the floating-point literal of type universal_real is implicitly converted to type REAL.
9.3.6 Type conversions paragraph 14 tells us:
In certain cases, an implicit type conversion will be performed. An implicit conversion of an operand of type universal_integer to another integer type, or of an operand of type universal_real to another floating-point type, can only be applied if the operand is either a numeric literal or an attribute, or if the operand is an expression consisting of the division of a value of a physical type by a value of the same type; such an operand is called a convertible universal operand. An implicit conversion of a convertible universal operand is applied if and only if the innermost complete context determines a unique (numeric) target type for the implicit conversion, and there is no legal interpretation of this context without this conversion.
Because you haven't included a package containing another floating-point type that leaves us searching for a "*" multiplying operator with one operand of type signed and one of type REAL with a return type of signed (or another "*" operator with the opposite operand type arguments) and VHDL found 0 of those.
There is no
function "*" (l: signed; r: REAL) return REAL;
or
function "*" (l: signed; r: REAL) return signed;
found in package numeric_std.
Phillipe suggests one way to overcome this by converting signed x to integer.
Historically synthesis doesn't encompass type REAL, prior to the 2008 version of the VHDL standard you were likely to have arbitrary precision, while 5.2.5 paragraph 7 now tells us:
An implementation shall choose a representation for all floating-point types except for universal_real that conforms either to IEEE Std 754-1985 or to IEEE Std 854-1987; in either case, a minimum representation size of 64 bits is required for this chosen representation.
And that doesn't help us unless the synthesis tool supports floating-point types of REAL and is -2008 compliant.
VHDL has the float_generic_pkg package, introduced in the 2008 version, which performs synthesis-eligible floating-point operations and is compatible with the use of signed types by converting to and from its float type.
Before we suggest something as drastic as performing all these calculations as 64-bit floating-point numbers and synthesizing all that, let's again note that the result is a 16-bit signed, which is an array type of std_ulogic and represents a 16-bit integer.
You can model the multiplications on the right-hand side as distinct expressions, executed in both floating-point and signed representation, to determine when the error is significant.
Because you are using a 16-bit signed value for y, significant would mean a difference greater than 1 in magnitude. Flipped signs or unexpected 0s between the two methods will likely tell you there's a precision issue.
I wrote a little C program to look at the differences and right off the bat it tells us 16 bits isn't enough to hold the math:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int16_t x = -32767, y, Y;   /* the leftmost value of x */
    int16_t a, b, c, d;
    double A, B, C, D;

    a = x*x*x * 0.1666;     /* 16 bit results */
    A = x*x*x * 0.1666;     /* double results */
    b = 2.5 * x*x;
    B = 2.5 * x*x;
    c = 21.666 * x;
    C = 21.666 * x;
    d = 36;                 /* 36.6653 truncated in the 16 bit domain */
    D = 36.6653;
    y = -(a + b - c + d);
    Y = (int16_t) -(A + B - C + D);

    printf("x = %d, a = %d, b = %d, c = %d, y = %d, Y = %d\n", x, a, b, c, y, Y);
    printf("x = %d, A = %f, B = %f, C = %f\n", x, A, B, C);
    printf("x = %d , y = %d , Y= %d, double = %f\n", x, y, Y, -(A + B - C + D));
    return 0;
}
And it outputs, for the leftmost value of x:
x = -32767, a = 11515, b = 0, c = 10967, y = -584, Y = 0
x = -32767, A = -178901765.158200, B = 2684190722.500000, C = -709929.822000
x = -32767 , y = -584 , Y= 0, double = -2505998923.829100
The first line of output is for 16 bit multiplies and you can see all three expressions with multiplies are incorrect.
The second line says double has enough precision, yet Y (-(A + B - C + D)) doesn't fit in a 16 bit number. And you can't cure that by making the result size larger unless the input size remains the same. Chaining operations then becomes a matter of picking best product and keeping track of the scale, meaning you might as well use floating point.
You could of course do clamping if it were appropriate. The double value on the third line of output is the non truncated value. It's more negative than x'LOW.
You could also do clamping in the 16 bit math domain, though all this tells you this math has no meaning in the hardware domain unless it's done in floating point.
So if you were trying to solve a real math problem in hardware it would require floating point, likely accomplished using package float_generic_pkg, and wouldn't fit meaningfully in a 16 bit result.
As stated in found '0' definitions of operator "+" in VHDL, the VHDL compiler is unable to find a matching operator for your operation, e.g. multiplying x*x. You probably want to use numeric_std in order to make the operators for signed (and unsigned) available.
But note that VHDL is not a programming language but a hardware description language. That is, if your long-term goal is to move the code to an FPGA or CPLD, these functions might no longer work, because they are not synthesizable.
I'm pointing this out because you will run into more problems when you try to multiply by e.g. 0.1666, since VHDL usually has no knowledge of floating-point numbers out of the box.
I have to write a piece of Prolog where I have to calculate which position in an array is used to store a value. However, the result of these calculations should be an integer, so I use the floor/1 predicate to get the integer part of the value, but this doesn't work in my code: it keeps returning a number with a decimal point, for example 3.0 instead of 3.
The following is my code:
assign_value(El, NumberArray, RowNumber, I) :-
    ground(El),
    Number is NumberArray[El],
    Col is I/3,
    Row is RowNumber/3*3,
    Sum is floor(Col + Row + 1),
    subscript(Number, [Sum], El).
assign_value(_, _, _, _).
The result of the Sum is floor(Col + Row + 1) is never an integer and I don't know why. Can anyone help me with this?
In ISO Prolog, the evaluable functor floor/1 has as signature (9.1.1 in ISO/IEC 13211-1):
floor(F) → I
So it expects a float and returns an integer.
However, I do not believe that first creating floats out of integers and then flooring them back to integers is what you want; instead, consider using (div)/2 in place of (/)/2, thereby staying with integers all the time.
From the documentation of floor/2 (http://www.eclipseclp.org/doc/bips/kernel/arithmetic/floor-2.html):
The result type is the same as the argument type. To convert the type to integer, use integer/2.
For example:
...,
Floor is floor(Col+Row+1), Sum is integer(Floor).
Reading the documentation for floor/2, we see that
[floor/2] works on all numeric types. The result value is the largest integral value that is smaller than Number (rounding down towards minus infinity). The result type is the same as the argument type. To convert the type to integer, use integer/2.
So you get the same type you supplied as the argument. Looking further at your predicate, we see the use of the / operator. Reading the documentation further, we see that
'/'/3 is used by the ECLiPSe compiler to expand evaluable arithmetic expressions.
So the call to /(Number1, Number2, Result) is equivalent to
Result is Number1 / Number2
which should be preferred for portability.
The result type of the division depends on the value of the global flag prefer_rationals.
When it is off, the result is a float, when it is on, the result is a rational.
Your division operation never returns an integer, meaning that things get upcast to floating point.
If you want to perform integer division, you should use the operators // or div.
This article talks about code optimization and discusses instruction-level parallelism. It gives an example of GPU vector math where float4 math can be performed on the whole vector rather than on the individual scalars. Example given:
float4 x_neighbor = center.xyxy + float4(-1.0f, 0.0f, 1.0f, 0.0f);
Now my question is can it be used for comparison purposes as well? So in the reduction example, can I do this:
accumulator.xyz = (accumulator.xyz < element.xyz) ? accumulator.xyz : element.xyz;
Thank you.
As already stated by Austin, comparison operators apply to vectors as well.
Point d in section 6.3 of the standard is the relevant part for you. It says:
The relational operators greater than (>), less than (<), greater than
or equal (>=), and less than or equal (<=) operate on scalar and
vector types.
It also spells out the valid cases:
The two operands are scalars. (...)
One operand is a scalar, and the other is a vector. (...) The scalar type is then widened to a vector that has the same number of
components as the vector operand. The operation is done component-wise
resulting in the same size vector.
The two operands are vectors of the same type. In this case, the operation is done component-wise resulting in the same size vector.
And finally, what these comparison operators return:
The result is a scalar signed integer of type int if the source
operands are scalar and a vector signed integer type of the same size
as the source operands if the source operands are vector types.
For scalar types, the relational operators shall return 0 if the
specified relation is false and 1 if the specified relation is true.
For vector types, the relational operators shall return 0 if the
specified relation is false and –1 (i.e. all bits set) if the
specified relation is true. The relational operators always return 0
if either argument is not a number (NaN).
EDIT:
To complete the return-value part a bit, especially after #redrum's comment: it seems odd at first that the true value is -1 for vector types. However, since OpenCL behaves as much as possible like C, it doesn't make a big difference, since everything different from 0 is true.
As an example, if you have the vector:
int2 vect = (int2)(0, -1);
This statement will evaluate to true and do something:
if(vect.y){
//Do something
}
Now, note that this isn't valid (not related to the value returned, but only to the fact it is a vector):
if(vect){
//do something
}
This won't compile, however, you can use the function all and any to evaluate all elements of a vector in an "if statement":
if(any(vect)){
//this will evaluate to true in our example
}
Note that the returned value is (from the quick reference card):
int any (Ti x): 1 if MSB in any component of x is set; else 0
So any negative number will do.
But still, why not keep 1 as the return value for true?
I think the important part is the fact that all bits are set, because that makes it easy to do bitwise operations on vectors. Say you want to eliminate the elements smaller than a given value: thanks to the fact that the "true" value is -1, i.e. 111111...111, you can do something like this:
int4 vect = (int4)(75, 3, 42, 105);
int ref = 50;
int4 result = (vect < ref) & vect;
and result's elements will be: 0, 3, 42, 0
On the other hand, if the returned value were 1 for true, the result would be: 0, 1, 0, 0
The OpenCL 1.2 Reference Card from Khronos says about operators:
Operators [6.3]
These operators behave similarly as in C99 except that
operands may include vector types when possible:
+ - * % / -- ++ == != &
~ ^ > < >= <= | ! && ||
?: >> << = , op= sizeof
I would like to define a value with a float type; to be more exact, with the square-root function.
It should look something like this: #define hyp sqrt(pow(50,2) + pow(50,2)). But this value
seems not to be constant, so I get some warnings and type conflicts. Why is it not constant?
Are float numbers always generated at run time, and all integers at compile time?
Or is it conflicting because the sqrt call has to be inside the scope of some function?
[edit]
To be more clear:
The warnings are because of some operation with the defined sqrt value, #define P + hyp - and for that I get the warnings. P is then put into an array, double arr_ps[] = {P,...}. There is no problem with integers, just with that sqrt value.[/edit]
@Simon
I have a header file points.h:
#define x 10
#define y 10
#define distance 100
#define P1x x
#define P1y y
#define hyp sqrt(pow(50,2) + pow(50,2))
#define P1x_new P1x + distance
#define P1y_new P1y + hyp
Then I have the C file:
#include <stdio.h>
#include <math.h>   /* for the sqrt/pow calls expanded from points.h */
#include "points.h"

double arr_x[2] = {P1x, P1x_new};
double arr_y[2] = {P1y, P1y_new};

int main() {
    printf("Px: %f, Py: %f \n", arr_x[0], arr_y[0]);
    printf("Px_new: %f, Py_new: %f \n", arr_x[1], arr_y[1]);
    return 0;
}
The warning: initializer element is not constant (near initialization for 'arr_y') - and I get three of them.
Clause 6.6, paragraph 3 of the standard says
Constant expressions shall not contain assignment, increment, decrement, function-call, or comma operators, except when they are contained within a subexpression that is not evaluated.
In other words, a constant expression must not contain a function call that is evaluated.
That is because
A constant expression can be evaluated during translation rather than runtime, and accordingly may be used in any place that a constant may be.
(paragraph 2 ibid), and a function-call may not be possible to evaluate during translation.
In a constant expression - and such are needed to initialise objects of static storage duration - you can only use basic arithmetic, +-*/, and sizeof (but only if the result is an integer constant expression) and _Alignof:
An arithmetic constant expression shall have arithmetic type and shall only have operands that are integer constants, floating constants, enumeration constants, character constants, sizeof expressions whose results are integer constants, and _Alignof expressions. Cast operators in an arithmetic constant expression shall only convert arithmetic types to arithmetic types, except as part of an operand to a sizeof or _Alignof operator.
The term "constant expression" has a technical meaning that is much narrower than the everyday sense.