initializer element is not constant (sqrt function) - gcc

I would like to define a value with a float type; to be more exact, with the square root function. It should look something like this: #define hyp sqrt(pow(50,2) + pow(50,2)). But this value does not seem to be constant, so I get warnings and type conflicts. Why is it not constant?
Is a float value always generated at run time, while integers are computed at compile time?
Or is it conflicting because the sqrt call has to be inside the scope of a function?
[edit]
To be more clear: the warnings come from an operation with the defined sqrt value, #define P ... + hyp ..., and P is then put into an array: double arr_ps[] = {P, ...}. There is no problem with integers, just with that sqrt value. [/edit]
@Simon
I have a header file points.h:
#define x 10
#define y 10
#define distance 100
#define P1x x
#define P1y y
#define hyp sqrt(pow(50,2) + pow(50,2))
#define P1x_new P1x + distance
#define P1y_new P1y + hyp
Then I have the C file:
#include <stdio.h>
#include <math.h>
#include "points.h"
double arr_x[2] = {P1x, P1x_new};
double arr_y[2] = {P1y, P1y_new}; /* initializer uses hyp, hence the warning */
int main(void)
{
    printf("Px: %f, Py: %f \n", arr_x[0], arr_y[0]);
    printf("Px_new: %f, Py_new: %f \n", arr_x[1], arr_y[1]);
    return 0;
}
The warning: initializer element is not constant (near initialization for 'arr_y') - and I get three of them.

Clause 6.6, paragraph 3 of the standard says
Constant expressions shall not contain assignment, increment, decrement, function-call, or comma operators, except when they are contained within a subexpression that is not evaluated.
In other words: a constant expression must not contain a function call that is evaluated.
That is because
A constant expression can be evaluated during translation rather than runtime, and accordingly may be used in any place that a constant may be.
(paragraph 2, ibid.), and a function call may not be possible to evaluate during translation.
In a constant expression (and one is needed to initialise objects of static storage duration) you can only use basic arithmetic (+ - * /), sizeof (but only where the result is an integer constant), and _Alignof:
An arithmetic constant expression shall have arithmetic type and shall only have operands that are integer constants, floating constants, enumeration constants, character constants, sizeof expressions whose results are integer constants, and _Alignof expressions. Cast operators in an arithmetic constant expression shall only convert arithmetic types to arithmetic types, except as part of an operand to a sizeof or _Alignof operator.
The term "constant expression" has a technical meaning that is much narrower than the everyday sense.

Related

Misunderstanding Go Language specification on floating-point rounding

The Go language specification, in the section on Constant expressions, states:
A compiler may use rounding while computing untyped floating-point or complex constant expressions; see the implementation restriction in the section on constants. This rounding may cause a floating-point constant expression to be invalid in an integer context, even if it would be integral when calculated using infinite precision, and vice versa.
Does the sentence
This rounding may cause a floating-point constant expression to be invalid in an integer context
point to something like the following:
package main

import "fmt"

func main() {
	a := 853784574674.23846278367
	fmt.Println(int8(a)) // output: 0
}
The quoted part from the spec does not apply to your example, as a is not a constant expression but a variable, so int8(a) is converting a non-constant expression. This conversion is covered by Spec: Conversions, Conversions between numeric types:
When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).
[...] In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Since you convert the non-constant expression a, whose value is 853784574674.23846278367, to an integer, the fraction is discarded; and since the result does not fit into an int8, the result is not specified: it's implementation-dependent.
The quoted part means that while constants are represented with much higher precision than the built-in types (e.g. float64 or int64), the precision that a compiler has to implement is not infinite (for practical reasons), so even if a floating-point literal is representable precisely, performing operations on constants may involve intermediate roundings and may not give a mathematically correct result.
The spec includes the minimum supportable precision:
Implementation restriction: Although numeric constants have arbitrary precision in the language, a compiler may implement them using an internal representation with limited precision. That said, every implementation must:
Represent integer constants with at least 256 bits.
Represent floating-point constants, including the parts of a complex constant, with a mantissa of at least 256 bits and a signed binary exponent of at least 16 bits.
Give an error if unable to represent an integer constant precisely.
Give an error if unable to represent a floating-point or complex constant due to overflow.
Round to the nearest representable constant if unable to represent a floating-point or complex constant due to limits on precision.
For example:
package main

import "fmt"

const (
	x = 1e100000 + 1
	y = 1e100000
)

func main() {
	fmt.Println(x - y)
}
This code should output 1, as x is 1 greater than y. Running it on the Go Playground outputs 0, because the constant expression x - y is evaluated with roundings, and the +1 is lost in the process. Both x and y are integers (they have no fraction part), so in an integer context the result should be 1. But representing the number 1e100000 requires around ~333000 bits, which a compiler is not required to support (according to the spec, a 256-bit mantissa is sufficient).
If we lower the constants, we get the correct result:
package main

import "fmt"

const (
	x = 1e1000 + 1
	y = 1e1000
)

func main() {
	fmt.Println(x - y)
}
This outputs the mathematically correct result, 1. Try it on the Go Playground. Representing the number 1e1000 requires around ~3333 bits, which is evidently supported (and is well above the minimum 256-bit requirement).
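If you need this kind of exact arithmetic at run time rather than at compile time, math/big can provide it. A hedged sketch (the 400000-bit precision is my own choice, just enough to hold 1e100000 plus a trailing +1):
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Parse 1e100000 with enough mantissa bits to represent it exactly.
	x, _, err := big.ParseFloat("1e100000", 10, 400000, big.ToNearestEven)
	if err != nil {
		panic(err)
	}
	y := new(big.Float).Copy(x)
	x.Add(x, big.NewFloat(1))             // the +1 is not lost at this precision
	fmt.Println(new(big.Float).Sub(x, y)) // 1
}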
An int8 is a signed 8-bit integer, which can hold values from -128 to 127. That's why you are seeing an unexpected value from the int8(a) conversion.
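Since an out-of-range conversion is implementation-dependent, a defensive variant of the question's snippet would check the range first. A small sketch (math.MinInt8/math.MaxInt8 require Go 1.17+):
package main

import (
	"fmt"
	"math"
)

func main() {
	a := 853784574674.23846278367
	// Convert only when the value actually fits in an int8.
	if a >= math.MinInt8 && a <= math.MaxInt8 {
		fmt.Println(int8(a))
	} else {
		fmt.Println("out of int8 range")
	}
}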

Does Go compiler's evaluation differ for constant expression and other expression

Why does the code below fail to compile?
package main
import (
"fmt"
"unsafe"
)
var x int = 1
const (
ONE int = 1
MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1)
)
func main() {
fmt.Println(MIN_INT)
}
I get an error
main.go:12: constant 2147483648 overflows int
That statement is correct: 2147483648 does overflow int (on a 32-bit architecture). But the shift operation should result in a negative value, i.e. -2147483648.
But the same code works if I change the constants into variables, and I get the expected output.
package main
import (
"fmt"
"unsafe"
)
var x int = 1
var (
ONE int = 1
MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1)
)
func main() {
fmt.Println(MIN_INT)
}
There is a difference in evaluation between constant and non-constant expressions, which arises from constants being precise:
Numeric constants represent exact values of arbitrary precision and do not overflow.
Typed constant expressions cannot overflow; if the result cannot be represented by its type, it's a compile-time error (this can be detected at compile-time).
The same thing does not apply to non-constant expressions, as this can't be detected at compile-time (it could only be detected at runtime). Operations on variables can overflow.
In your first example ONE is a typed constant with type int. This constant expression:
ONE << (unsafe.Sizeof(x)*8 - 1)
Is a constant shift expression, the following applies: Spec: Constant expressions:
If the left operand of a constant shift expression is an untyped constant, the result is an integer constant; otherwise it is a constant of the same type as the left operand, which must be of integer type.
So the result of the shift expression must fit into an int because this is a constant expression; but since it doesn't, it's a compile-time error.
In your second example ONE is not a constant, it's a variable of type int. So the shift expression here may (and will) overflow, resulting in the expected negative value.
Notes:
Should you change ONE in the 2nd example to a constant instead of a variable, you'd get the same error (as the expression in the initializer would then be a constant expression). Should you change ONE to a variable in the first example, it wouldn't work either, as variables cannot be used in constant expressions (and it must be a constant expression because it initializes a constant).
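If you do want MIN_INT as a constant, one way (a sketch using strconv.IntSize, not from the original question) is to shift the negative untyped constant -1, so the result is already negative and never exceeds int:
package main

import (
	"fmt"
	"strconv"
)

// -1 is an untyped constant; shifting it left by IntSize-1 bits yields
// the minimum int value directly, so nothing overflows.
const MIN_INT int = -1 << (strconv.IntSize - 1)

func main() {
	fmt.Println(MIN_INT) // -9223372036854775808 on 64-bit platforms
}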
Constant expressions to find min-max values
You may use the following solution which yields the max and min values of uint and int types:
package main

import "fmt"

const (
	MaxUint = ^uint(0)
	MinUint = 0
	MaxInt  = int(MaxUint >> 1)
	MinInt  = -MaxInt - 1
)

func main() {
	fmt.Printf("uint: %d..%d\n", MinUint, MaxUint)
	fmt.Printf("int: %d..%d\n", MinInt, MaxInt)
}
Output (try it on the Go Playground):
uint: 0..4294967295
int: -2147483648..2147483647
The logic behind it lies in the Spec: Constant expressions:
The mask used by the unary bitwise complement operator ^ matches the rule for non-constants: the mask is all 1s for unsigned constants and -1 for signed and untyped constants.
So the typed constant expression ^uint(0) is of type uint and is the max value of uint: it has all its bits set to 1. Given that integers are represented using 2's complement, shifting this right by one bit gives the value of max int, from which the min int value is -MaxInt - 1 (the extra -1 because 0 takes up one of the non-negative representations).
Reasoning for the different behavior
Why is there no overflow for constant expressions and overflow for non-constant expressions?
The latter is easy: in most other programming languages there is overflow, so this behavior is consistent with other languages, and it has its benefits.
The real question is the first: why isn't overflow allowed for constant expressions?
Constants in Go are more than values of typed variables: they represent exact values of arbitrary precision. Staying with the word exact: if you have a value that you want to assign to a typed constant, allowing overflow and assigning a completely different value doesn't really live up to exact.
Going forward, this type checking and disallowing overflow can catch mistakes like this one:
type Char byte
var c1 Char = 'a' // OK
var c2 Char = '世' // Compile-time error: constant 19990 overflows Char
What happens here? c1 Char = 'a' works because 'a' is a rune constant, and rune is an alias for int32; 'a' has numeric value 97, which fits into byte's valid range (0..255).
But c2 Char = '世' results in a compile-time error because the rune '世' has numeric value 19990, which doesn't fit into a byte. If overflow were allowed, your code would compile and assign the numeric value 22 ('\x16') to c2, but obviously this wasn't your intent. By disallowing overflow this mistake is easily caught, at compile time.
To verify the results:
var c1 Char = 'a'
fmt.Printf("%d %q %c\n", c1, c1, c1)
// var c2 Char = '世' // Compile-time error: constant 19990 overflows Char
r := '世'
var c2 Char = Char(r)
fmt.Printf("%d %q %c\n", c2, c2, c2)
Output (try it on the Go Playground):
97 'a' a
22 '\x16'
To read more about constants and their philosophy, read the blog post: The Go Blog: Constants
And a couple more questions (and answers) that are related and/or interesting:
Golang: on-purpose int overflow
How does Go perform arithmetic on constants?
Find address of constant in go
Why do these two float64s have different values?
How to change a float64 number to uint64 in a right way?
Writing powers of 10 as constants compactly

How does Go perform arithmetic on constants?

I've been reading this post on constants in Go, and I'm trying to understand how they are stored and used in memory. You can perform operations on very large constants in Go, and as long as the result fits in memory, you can coerce that result to a type. For example, this code prints 10, as you would expect:
const Huge = 1e1000
fmt.Println(Huge / 1e999)
How does this work under the hood? At some point, Go has to store 1e1000 and 1e999 in memory, in order to perform operations on them. So how are constants stored, and how does Go perform arithmetic on them?
Short summary (TL;DR) is at the end of the answer.
Untyped arbitrary-precision constants don't live at runtime: constants live only at compile time (during compilation). That being said, Go does not have to represent constants with arbitrary precision at runtime, only when compiling your application.
Why? Because constants do not get compiled into the executable binaries. They don't have to be. Let's take your example:
const Huge = 1e1000
fmt.Println(Huge / 1e999)
There is a constant Huge in the source code (and it will be in the package object), but it won't appear in your executable. Instead, a call to fmt.Println() will be recorded with a value passed to it whose type will be float64. So only a float64 value of 10.0 is recorded in the executable. There is no trace of any number 1e1000 in the executable.
This float64 type is derived from the default type of the untyped constant Huge. 1e1000 is a floating-point literal. To verify it:
const Huge = 1e1000
x := Huge / 1e999
fmt.Printf("%T", x) // Prints float64
Back to the arbitrary precision:
Spec: Constants:
Numeric constants represent exact values of arbitrary precision and do not overflow.
So constants represent exact values of arbitrary precision. As we saw, there is no need to represent constants with arbitrary precision at runtime, but the compiler still has to do something at compile time. And it does!
Obviously "infinite" precision cannot be dealt with. But there is no need, as the source code itself is not "infinite" (size of the source is finite). Still, it's not practical to allow truly arbitrary precision. So the spec gives some freedom to compilers regarding to this:
Implementation restriction: Although numeric constants have arbitrary precision in the language, a compiler may implement them using an internal representation with limited precision. That said, every implementation must:
Represent integer constants with at least 256 bits.
Represent floating-point constants, including the parts of a complex constant, with a mantissa of at least 256 bits and a signed binary exponent of at least 16 bits.
Give an error if unable to represent an integer constant precisely.
Give an error if unable to represent a floating-point or complex constant due to overflow.
Round to the nearest representable constant if unable to represent a floating-point or complex constant due to limits on precision.
These requirements apply both to literal constants and to the result of evaluating constant expressions.
However, note that even with all of the above said, the standard library provides you the means to represent and work with values (constants) of "arbitrary" precision: see package go/constant. You may look into its source to get an idea of how it's implemented.
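For a quick taste, a hedged sketch that evaluates the question's division exactly, the way the compiler does:
package main

import (
	"fmt"
	"go/constant"
	"go/token"
)

func main() {
	huge := constant.MakeFromLiteral("1e1000", token.FLOAT, 0)
	div := constant.MakeFromLiteral("1e999", token.FLOAT, 0)
	// BinaryOp performs the constant arithmetic with full precision.
	fmt.Println(constant.BinaryOp(huge, token.QUO, div)) // expected output: 10
}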
Implementation is in go/constant/value.go. Types representing such values:
// A Value represents the value of a Go constant.
type Value interface {
// Kind returns the value kind.
Kind() Kind
// String returns a short, human-readable form of the value.
// For numeric values, the result may be an approximation;
// for String values the result may be a shortened string.
// Use ExactString for a string representing a value exactly.
String() string
// ExactString returns an exact, printable form of the value.
ExactString() string
// Prevent external implementations.
implementsValue()
}
type (
unknownVal struct{}
boolVal bool
stringVal string
int64Val int64 // Int values representable as an int64
intVal struct{ val *big.Int } // Int values not representable as an int64
ratVal struct{ val *big.Rat } // Float values representable as a fraction
floatVal struct{ val *big.Float } // Float values not representable as a fraction
complexVal struct{ re, im Value }
)
As you can see, the math/big package is used to represent untyped arbitrary-precision values. For example, big.Int is (from math/big/int.go):
// An Int represents a signed multi-precision integer.
// The zero value for an Int represents the value 0.
type Int struct {
neg bool // sign
abs nat // absolute value of the integer
}
Where nat is (from math/big/nat.go):
// An unsigned integer x of the form
//
// x = x[n-1]*_B^(n-1) + x[n-2]*_B^(n-2) + ... + x[1]*_B + x[0]
//
// with 0 <= x[i] < _B and 0 <= i < n is stored in a slice of length n,
// with the digits x[i] as the slice elements.
//
// A number is normalized if the slice contains no leading 0 digits.
// During arithmetic operations, denormalized values may occur but are
// always normalized before returning the final result. The normalized
// representation of 0 is the empty or nil slice (length = 0).
//
type nat []Word
And finally, Word is (from math/big/arith.go):
// A Word represents a single digit of a multi-precision unsigned integer.
type Word uintptr
Summary
At runtime: predefined types provide limited precision, but you can "mimic" arbitrary precision with certain packages such as math/big and go/constant. At compile time: constants seemingly provide arbitrary precision, but in reality a compiler may not live up to this (it doesn't have to); still, the spec sets a minimum precision for constants that all compilers must support, e.g. integer constants must be represented with at least 256 bits, which is 32 bytes (compared to int64, which is "only" 8 bytes).
When an executable binary is created, results of constant expressions (with arbitrary precision) have to be converted and represented as values of finite-precision types, which may not be possible and thus may result in compile-time errors. Note that only results, not intermediate operands, have to be converted to finite precision: constant operations are carried out with arbitrary precision.
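A tiny example of that rule (my own illustration): an intermediate constant may exceed every Go type, as long as the result that reaches a typed context fits:
package main

import "fmt"

const huge = 1 << 100 // needs 101 bits: fine, constants have arbitrary precision

func main() {
	fmt.Println(huge >> 90) // 1024: only this result must fit in an int
	// fmt.Println(huge)    // would not compile: constant overflows int
}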
How this arbitrary or enhanced precision is implemented is not defined by the spec; math/big, for example, stores the "digits" of the number in a slice, where a "digit" is not a digit of the base-10 representation but a Word (a uintptr), which amounts to a base-4294967296 representation on 32-bit architectures, and an even larger base on 64-bit architectures.
Go constants are not allocated to memory. They are used in context by the compiler. The blog post you refer to gives the example of Pi:
Pi = 3.14159265358979323846264338327950288419716939937510582097494459
If you assign Pi to a float32 it will lose precision to fit, and if you assign it to a float64 it will lose less precision; in each case the compiler determines the type to use from the context.
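To see that in action, a small sketch (my own, not from the blog post):
package main

import "fmt"

const Pi = 3.14159265358979323846264338327950288419716939937510582097494459

func main() {
	var f32 float32 = Pi // rounded to float32's ~7 significant digits
	var f64 float64 = Pi // rounded to float64's ~16 significant digits
	fmt.Println(f32)     // 3.1415927
	fmt.Println(f64)     // 3.141592653589793
}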

Implement equation in VHDL

I am trying to implement in VHDL an equation that involves multiplication by constants and addition. The equation is below:
y <= -((x*x*x) * 0.1666) + (2.5 * (x*x)) - (21.666 * x) + 36.6653; -- error
I got the error
HDLCompiler:1731 - found '0' definitions of operator "*",
can not determine exact overloaded matching definition for "*".
The entity is:
entity eq1 is
    port ( x : in  signed(15 downto 0);
           y : out signed(15 downto 0) );
end eq1;
I tried using the RESIZE function and x as an integer, but it gives the same error. Do I have to use another data type? x holds pure integer values like 2, 4, 6, etc.
Since x and y are of datatype signed, you can multiply them. However, there is no multiplication of signed with real. Even if there was, the result would be real (not signed or integer).
So first, you need to figure out what you want (the semantics). Then you should add type casts and conversion functions.
y <= x*x; -- OK
y <= 0.5 * x; -- not OK
y <= to_signed(integer(0.5 * real(to_integer(x))),y'length); -- OK
This is another case where simulating before synthesis might be handy. ghdl, for instance, tells you which "*" operator it finds the first error for:
ghdl -a implement.vhdl
implement.vhdl:12:21: no function declarations for operator "*"
y <= -((x*x*x) * 0.1666) + (2.5 * (x*x)) - (21.666 * x) + 36.6653;
---------------^ character position 21, line 12
The expressions with x multiplied have both operands with a type of signed.
(And for later, we also note that the complex expression on the right hand side of the signal assignment operation will eventually be interpreted as a signed value with a narrow subtype constraint when assigned to y).
VHDL determines the type of the literal 0.1666, it's an abstract literal, that is decimal literal or floating-point literal (IEEE Std 1076-2008 5.2.5 Floating-point types, 5.2.5.1 General, paragraph 5):
Floating-point literals are the literals of an anonymous predefined type that is called universal_real in this standard. Other floating-point types have no literals. However, for each floating-point type there exists an implicit conversion that converts a value of type universal_real into the corresponding value (if any) of the floating-point type (see 9.3.6).
There's only one predefined floating-point type in VHDL, see 5.2.5.2, and the floating-point literal of type universal_real is implicitly converted to type REAL.
9.3.6 Type conversions paragraph 14 tells us:
In certain cases, an implicit type conversion will be performed. An implicit conversion of an operand of type universal_integer to another integer type, or of an operand of type universal_real to another floating-point type, can only be applied if the operand is either a numeric literal or an attribute, or if the operand is an expression consisting of the division of a value of a physical type by a value of the same type; such an operand is called a convertible universal operand. An implicit conversion of a convertible universal operand is applied if and only if the innermost complete context determines a unique (numeric) target type for the implicit conversion, and there is no legal interpretation of this context without this conversion.
Because you haven't included a package containing another floating-point type, that leaves us searching for a "*" operator with one operand of type signed and one of type REAL, with a return type of signed (or another "*" operator with the operand types reversed), and VHDL found 0 of those.
There is no
function "*" (l: signed; r: REAL) return REAL;
or
function "*" (l: signed; r: REAL) return signed;
found in package numeric_std.
Phillipe suggests one way to overcome this: converting the signed x to an integer.
Historically, synthesis doesn't encompass type REAL. Prior to the 2008 version of the VHDL standard you were likely to have arbitrary precision, while 5.2.5 paragraph 7 now tells us:
An implementation shall choose a representation for all floating-point types except for universal_real that conforms either to IEEE Std 754-1985 or to IEEE Std 854-1987; in either case, a minimum representation size of 64 bits is required for this chosen representation.
And that doesn't help us unless the synthesis tool supports floating-point types of REAL and is -2008 compliant.
VHDL has the float_generic_pkg package, introduced in the 2008 version, which performs synthesis-eligible floating-point operations and is compatible with the use of signed types by converting to and from its float type.
Before we suggest something as drastic as performing all these calculations on 64-bit floating-point numbers and synthesizing all that, let's note again that the result is a 16-bit signed, which is an array type of std_ulogic and represents a 16-bit integer.
You can model the multiplications on the right-hand side as distinct expressions evaluated in both floating-point and signed representation to determine when the error is significant.
Because you are using a 16-bit signed value for y, significant would mean a difference greater than 1 in magnitude. Flipped signs or unexpected 0s between the two methods will likely tell you there's a precision issue.
I wrote a little C program to look at the differences, and right off the bat it tells us 16 bits isn't enough to hold the math:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t x = -32767, y, Y;
    int16_t a, b, c, d;
    double A, B, C, D;

    a = x*x*x * 0.1666;   /* 16 bit results truncate/overflow */
    A = x*x*x * 0.1666;   /* double results keep the magnitude */
    b = 2.5 * x*x;
    B = 2.5 * x*x;
    c = 21.666 * x;
    C = 21.666 * x;
    d = 36;
    D = 36.6653;
    y = -(a + b - c + d);
    Y = (int16_t) -(A + B - C + D);

    printf("x = %d, a = %d, b = %d, c = %d, y = %d, Y = %d\n", x, a, b, c, y, Y);
    printf("x = %d, A = %f, B = %f, C = %f\n", x, A, B, C);
    printf("x = %d , y = %d , Y= %d, double = %f\n", x, y, Y, -(A + B - C + D));
    return 0;
}
And it outputs, for the leftmost value of x:
x = -32767, a = 11515, b = 0, c = 10967, y = -584, Y = 0
x = -32767, A = -178901765.158200, B = 2684190722.500000, C = -709929.822000
x = -32767 , y = -584 , Y= 0, double = -2505998923.829100
The first line of output is for the 16-bit multiplies, and you can see all three expressions with multiplies are incorrect.
The second line says double has enough precision, yet Y (-(A + B - C + D)) doesn't fit in a 16-bit number. And you can't cure that by making the result size larger unless the input size remains the same. Chaining operations then becomes a matter of picking the best product width and keeping track of the scale, meaning you might as well use floating point.
You could of course clamp, if that were appropriate. The double value on the third line of output is the non-truncated value; it's more negative than x'LOW.
You could also do clamping in the 16 bit math domain, though all this tells you this math has no meaning in the hardware domain unless it's done in floating point.
So if you were trying to solve a real math problem in hardware it would require floating point, likely accomplished using package float_generic_pkg, and wouldn't fit meaningfully in a 16 bit result.
As stated in "found '0' definitions of operator "+" in VHDL", the VHDL compiler is unable to find a matching operator for your operation, e.g. for multiplying x*x. You probably want to use numeric_std in order to make operators for signed (and unsigned) available.
But note that VHDL is not a programming language but a hardware description language. That is, if your long-term goal is to move the code to an FPGA or CPLD, these functions might not work any longer, because they are not synthesizable.
I'm pointing this out because you will run into more problems when you try to multiply by e.g. 0.1666, since VHDL usually has no knowledge of floating-point numbers out of the box.
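If the value range really is small (the question mentions x values like 2, 4, 6), one synthesizable compromise, sketched here with an assumed scale factor of 4096 and not taken from the answers above, is to pre-scale the real coefficients to integers and divide the result back down:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity eq1_fixed is
    port ( x : in  signed(15 downto 0);
           y : out signed(15 downto 0) );
end eq1_fixed;

architecture rtl of eq1_fixed is
    -- coefficients scaled by 2**12 = 4096; rounding error ~1/4096 per term
    constant C3 : integer := integer(0.1666 * 4096.0);   -- 682
    constant C2 : integer := integer(2.5 * 4096.0);      -- 10240
    constant C1 : integer := integer(21.666 * 4096.0);   -- 88744
    constant C0 : integer := integer(36.6653 * 4096.0);  -- 150181
begin
    process (x)
        variable xi  : integer;
        variable acc : integer;
    begin
        xi  := to_integer(x);
        -- only safe for small |x|: xi*xi*xi*C3 must stay inside a
        -- 32-bit VHDL integer, which limits |x| to roughly 145
        acc := -(xi * xi * xi * C3) + (xi * xi * C2) - (xi * C1) + C0;
        y   <= to_signed(acc / 4096, y'length);
    end process;
end architecture rtl;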

OpenCL - GPU Vector Math (Instruction Level Parallelism)

This article talks about code optimization and discusses instruction-level parallelism. It gives an example of GPU vector math where float4 vector math can be performed on the whole vector rather than on the individual scalars. Example given:
float4 x_neighbor = center.xyxy + float4(-1.0f, 0.0f, 1.0f, 0.0f);
Now my question is: can it be used for comparison purposes as well? So, in the reduction example, can I do this:
accumulator.xyz = (accumulator.xyz < element.xyz) ? accumulator.xyz : element.xyz;
Thank you.
As already stated by Austin, comparison operators apply to vectors as well.
Point d. in section 6.3 of the standard is the relevant part for you. It says:
The relational operators greater than (>), less than (<), greater than or equal (>=), and less than or equal (<=) operate on scalar and vector types.
It explains as well the valid cases:
The two operands are scalars. (...)
One operand is a scalar, and the other is a vector. (...) The scalar type is then widened to a vector that has the same number of components as the vector operand. The operation is done component-wise resulting in the same size vector.
The two operands are vectors of the same type. In this case, the operation is done component-wise resulting in the same size vector.
And finally, what these comparison operators return:
The result is a scalar signed integer of type int if the source operands are scalar and a vector signed integer type of the same size as the source operands if the source operands are vector types.
For scalar types, the relational operators shall return 0 if the specified relation is false and 1 if the specified relation is true.
For vector types, the relational operators shall return 0 if the specified relation is false and –1 (i.e. all bits set) if the specified relation is true. The relational operators always return 0 if either argument is not a number (NaN).
EDIT:
To complete the return-value part a bit, especially after #redrum's comment: it seems odd at first that the true value is -1 for vector types. However, since OpenCL behaves as much as possible like C, it doesn't change much, since everything different from 0 is true.
As an example, if you have the vector:
int2 vect = (int2)(0, -1);
This statement will evaluate to true and do something:
if(vect.y){
//Do something
}
Now, note that this isn't valid (not because of the value returned, but simply because it is a vector):
if(vect){
//do something
}
This won't compile; however, you can use the functions all and any to evaluate all the elements of a vector in an "if statement":
if(any(vect)){
//this will evaluate to true in our example
}
Note that the returned value is (from the quick reference card):
int any (Ti x): 1 if MSB in any component of x is set; else 0
So any negative number will do.
But still, why not keep 1 as the returned value when the evaluation is true?
I think the important part is the fact that all bits are set. My guess is that this makes it easy to apply bitwise operations to vectors; say you want to keep only the elements smaller than a given value. Thanks to the fact that the "true" value is -1, i.e. 111111...111, you can do something like this:
int4 vect = (int4)(75, 3, 42, 105);
int ref = 50;
int4 result = (vect < ref) & vect;
and result's elements will be: 0, 3, 42, 0
On the other hand, if the returned value were 1 for true, the result would be: 0, 1, 0, 0.
The OpenCL 1.2 Reference Card from Khronos says of operators:
Operators [6.3]
These operators behave similarly as in C99 except that operands may include vector types when possible:
+ - * % / -- ++ == != & ~ ^ > < >= <= | ! && || ?: >> << = , op= sizeof
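Putting the pieces together, here is a hedged kernel sketch (the kernel name and arguments are invented for illustration) of the component-wise minimum from the question:
// Reduction step: keep the component-wise minimum of two float4 values.
// (acc < elt) yields an int4 mask of 0 / -1 per component, and ?: then
// selects component-wise, as described above.
__kernel void vec_min_step(__global float4 *accumulator,
                           __global const float4 *element)
{
    size_t i = get_global_id(0);
    float4 acc = accumulator[i];
    float4 elt = element[i];
    acc = (acc < elt) ? acc : elt;
    accumulator[i] = acc;
}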
