What are the limitations of event arguments?

Is there any limit on the number of arguments that can be sent in an event?
I have a function in which I want to trigger an event that has 12 arguments, 6 of which are arrays. I get "Stack too deep, try using less variables." Without the event, the function works normally.
I am guessing event arguments have some limitations, or count towards the maximum number of arguments in a Solidity function, but I cannot find any documentation about it.
Can anyone clarify this?
Edit:
I'm using SafeMath, and _getAddressSubArrayTo is an internal pure function that returns a sub-array from one index to another. The contract looks something like this:
event LogTemp(address a,
address b,
address[] c,
uint256[] d,
address[] e,
uint256[] f,
address[] g,
uint256[] h,
uint256 i,
uint256 j,
uint256 k,
bytes32 l);
function test(address[] _addresses,
uint256[] _uints,
uint8 _v,
bytes32 _r,
bytes32 _s,
bool test)
public
returns (bool)
{
Temp memory temp = Temp({
a: _addresses[0],
b: _addresses[1],
c: _getAddressSubArrayTo(_addresses, 2, _uints[3].add(2)),
d: _getUintSubArrayTo(_uints, 5, _uints[3].add(5)),
e: _getAddressSubArrayTo(_addresses, _uints[3].add(2), (_uints[3].add(2)).add(_uints[4])),
f: _getUintSubArrayTo(_uints, _uints[3].add(5), (_uints[3].add(5)).add(_uints[4])),
g: _getAddressSubArrayTo(_addresses, (_uints[3].add(2)).add(_uints[4]), _addresses.length),
h: _getUintSubArrayTo(_uints,(_uints[3].add(5)).add(_uints[4]), _uints.length),
i: _uints[0],
j: _uints[1],
k: _uints[2],
l: hash(
_addresses,
_uints
)
});
LogTemp(
temp.a,
temp.b,
temp.c,
temp.d,
temp.e,
temp.f,
temp.g,
temp.h,
temp.i,
temp.j,
temp.k,
temp.l
);
}

Yes, there are limits. You can have up to three indexed arguments in your event. Non-indexed arguments are less restricted: they are not limited by the event data structure itself, but by the block gas limit (at a cost of 8 gas per byte of data stored in the log).
Solidity event documentation
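As a small illustration of that limit (the event below is illustrative, not taken from the question), up to three parameters of a non-anonymous event can be marked indexed; indexed values become log topics, while the rest go into the log data:
event Transfer(
    address indexed from,   // topic
    address indexed to,     // topic
    uint256 value           // not indexed: stored in the log data
);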

I was able to find an answer:
If you look at ContractCompiler.cpp, where FunctionDefinition is handled, you can see there is a limit of 17 elements on the stack:
if (stackLayout.size() > 17)
BOOST_THROW_EXCEPTION(
CompilerError() <<
errinfo_sourceLocation(_function.location()) <<
errinfo_comment("Stack too deep, try removing local variables.")
);
Events are defined as functions, as can be seen in ExpressionCompiler.cpp.
Simply put, events are treated as functions, so they have a limit of 17 arguments on the stack. An array counts as 2, so in my example, where I have 6 arrays + 6 normal arguments, this equals 18 and I'm breaking the stack limit by 1.
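If you hit this, one possible workaround (a sketch under the assumption that splitting the log entry is acceptable for your use case; it is not from the original answer) is to split the event so that no single declaration needs more than 17 stack slots:
event LogTempParties(
    address a, address b,
    address[] c, uint256[] d,
    address[] e, uint256[] f,
    address[] g, uint256[] h);   // 2 + 6 arrays * 2 = 14 slots

event LogTempMeta(uint256 i, uint256 j, uint256 k, bytes32 l);   // 4 slots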

Related

Returning the last return values of an iterator without storing a vararg in a table

Writing a function that takes a generic for loop iterator (the iterator function, the invariant state, and the loop control variable) and returns the value the control variable has in the last iteration is straightforward:
function iterator_last_value(iterator, state, control_var)
local last
for value in iterator, state, control_var do
last = value
end
return last
end
print(iterator_last_value(("hello world"):gmatch"%a+")) -- world
This could be easily extended to support arbitrary constant numbers of arguments up to Lua's local register limit. We can also add vararg iterator return value support by always storing the last vararg in a table; this requires us to get rid of Lua's for loop syntactic sugar:
function iterator_last(iterator, state, control_var)
local last = {}
local last_n = 0
local function iter(...)
local control_var = ...
if control_var == nil then
return table.unpack(last, 1, last_n)
end
last = {...}
last_n = select("#", ...)
return iter(iterator(state, control_var))
end
return iter(iterator(state, control_var))
end
print(iterator_last(ipairs{"a", "b", "c"})) -- 3, c
which works well but creates a garbage table every iteration. If we replace
last = {...}
last_n = select("#", ...)
with
last_n = select("#", ...)
for i = 1, last_n do
last[i] = select(i, ...)
end
we can get away with reusing one table, presumably at the cost of manually filling the table using select being less efficient than {...}, but creating significantly fewer garbage tables (only one garbage table per call to iterator_last); the assembled version is sketched below.
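For reference, a sketch of the whole function with that substitution applied (same logic as above, just assembled in one place):
function iterator_last(iterator, state, control_var)
    local last = {}   -- reused across iterations instead of reallocated
    local last_n = 0
    local function iter(...)
        local control_var = ...
        if control_var == nil then
            return table.unpack(last, 1, last_n)
        end
        last_n = select("#", ...)
        for i = 1, last_n do
            last[i] = select(i, ...)
        end
        return iter(iterator(state, control_var))
    end
    return iter(iterator(state, control_var))
end
print(iterator_last(ipairs{"a", "b", "c"})) -- 3, c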
Is it possible to implement a variadic return value iterator_last without storing a vararg with significant overhead using a table, coroutine or the like, leaving it on the stack and only passing the varargs around through function calls? I conjecture that this is not possible, but have been unable to prove or disprove it.

Why does golang implement different behavior for the `[]` operator between slice and map? [duplicate]

This question already has answers here: Why are map values not addressable? (2 answers)
package main

type S struct {
e int
}
func main() {
a := []S{{1}}
a[0].e = 2
b := map[int]S{0: {1}}
b[0].e = 2 // error
}
a[0] is addressable but b[0] is not.
I know the first 0 is an index and the second 0 is a key.
Why does golang implement it like this? Is there any further consideration?
I've read the source code of map in github.com/golang/go/src/runtime, and the map structure already supports indirectkey and indirectvalue if maxKeySize and maxValueSize are small enough.
type maptype struct {
...
keysize uint8 // size of key slot
indirectkey bool // store ptr to key instead of key itself
valuesize uint8 // size of value slot
indirectvalue bool // store ptr to value instead of value itself
...
}
I think that if the golang designers wanted this syntax, it would be easy to make it work now.
Of course, indirectkey and indirectvalue may cost more resources, and the GC would also need to do more work.
So is performance the only reason for not supporting this?
Or is there any other consideration?
In my opinion, supporting syntax like this would be valuable.
As far as I know, that's because a[0] can be replaced with the address of the underlying array: a[1] is simply the address of a[0] plus one element size, so the compiler can compute the address of any slice element directly.
In the case of a map you cannot do that: where a key/value pair lives depends on the hash of the key and on how many pairs the map currently holds, and entries are also rearranged from time to time (for example, when the map grows).
A specific computation is needed in order to get the address of a value. Arrays and slices are easily addressable, but for maps it takes something like multiple function calls or structure look-ups.
If the compiler were to inline whatever computation is needed wherever a map element is taken by address, the binary size would grow considerably, and moreover the hash implementation can keep changing over time.
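To make the practical consequence concrete, here is a minimal sketch (not from the original answer; the names are illustrative) of the two usual ways around the non-addressability of map values: copy the value out, modify it and store it back, or store pointers in the map so the struct itself lives outside the map:
package main

import "fmt"

type S struct {
    e int
}

func main() {
    // Workaround 1: copy out, modify, store back.
    b := map[int]S{0: {1}}
    v := b[0]
    v.e = 2
    b[0] = v

    // Workaround 2: store pointers; the map value (a pointer) is copied,
    // but the struct it points to can be modified in place.
    c := map[int]*S{0: {1}}
    c[0].e = 2

    fmt.Println(b[0].e, c[0].e) // 2 2
}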

MPI_Waitall() behavior given MPI_Request array with possibly uninitialized slots for asynchronous send/recv

I have come across a scenario in which I need to allocate a static array of type MPI_Request to keep track of asynchronous send and receive MPI operations. I have a total of 8 Isend and Irecv operations, 4 of which are Isend and the remaining 4 Irecv. However, I do not call these 8 functions all at once. Depending on the incoming data, these functions are called in pairs, which means I may call 1, 2, 3, or all 4 send/receive pairs. That they will be called in pairs is certain, but how many of them will be called is not. Below is a pseudo-code:
MPI_Request reqs[8];
MPI_Status stats[8];
if (Rank A exists){
//The process has to send data to A and receive data from A
MPI_Isend(A, ..., &reqs[0]);
MPI_Irecv(A, ..., &reqs[1]);
}
if(Rank B exists){
//The process has to send data to B and receive data from B
MPI_Isend(B, ..., &reqs[2]);
MPI_Irecv(B, ..., &reqs[3]);
}
if(Rank C exists){
//The process has to send data to C and receive data from C
MPI_Isend(C, ..., &reqs[4]);
MPI_Irecv(C, ..., &reqs[5]);
}
if(Rank D exists){
//The process has to send data to D and receive data from D
MPI_Isend(D, ..., &reqs[6]);
MPI_Irecv(D, ..., &reqs[7]);
}
//Wait for asynchronous operations to complete
MPI_Waitall(8, reqs, stats);
Now, I am not sure what the behavior of the program will be. There are a total of 8 distinct asynchronous send and receive calls, and there is one slot in reqs[8] for each call, but not all of the functions will always be used. When some of them are not called, some slots in reqs[8] will be uninitialized. However, I need MPI_Waitall(8, reqs, stats) to return regardless of whether all slots in reqs[8] are initialized or not.
Could someone explain how the program might behave in this particular scenario?
You could set / initialize those missing requests to MPI_REQUEST_NULL. That said, why not just count the requests you actually start:
int count = 0;
...
Isend(A, &reqs[count++]);
...
MPI_Waitall(count, reqs, stats);
Of course, leaving the value uninitialized and feeding it to some function that reads from it is not a good idea.
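A minimal sketch of the MPI_REQUEST_NULL approach (buffer names, counts, tags and ranks are placeholders, not taken from the question): every slot starts as a null request, only the slots that are actually used get overwritten, and MPI_Waitall treats the null requests as already complete:
#include <mpi.h>

/* Sketch: buf_out, buf_in, n and rank_a are placeholders. */
void exchange_with_a(int rank_a_exists, int rank_a,
                     int *buf_out, int *buf_in, int n)
{
    MPI_Request reqs[8];
    MPI_Status  stats[8];

    /* Unused slots stay MPI_REQUEST_NULL and are ignored by MPI_Waitall. */
    for (int i = 0; i < 8; ++i)
        reqs[i] = MPI_REQUEST_NULL;

    if (rank_a_exists) {
        MPI_Isend(buf_out, n, MPI_INT, rank_a, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(buf_in,  n, MPI_INT, rank_a, 0, MPI_COMM_WORLD, &reqs[1]);
    }
    /* ... same pattern for ranks B, C and D, using reqs[2..7] ... */

    MPI_Waitall(8, reqs, stats);
}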

How to use the method Sort of the class TList in Delphi

How does the Sort method of the TList class work? Does this method only sort the elements of the list in ascending/descending order?
Please, have a look at the code below.
Type
PInteger = ^Integer;
Function Compare(Item1, Item2 : Pointer) : Integer;
Begin
if PInteger(Item1)^ > PInteger(Item2)^ then Result:= 1
else if PInteger(Item1)^ < PInteger(Item2)^ then Result:= -1
else Result:= 0;
End;
{ And, for instance, somewhere we call the method }
List.Sort(Compare);
Now the thing is, after I compile the code, it works well; the list is sorted so that the elements ascend. But I don't understand the following line:
PInteger(Item1)^ // What does this represent?
And what do the Item1 and Item2 pointers point to? Do they not need to be initialized?
First, what does PInteger(Item1)^ do/represent?
Item1 is a Pointer: the address of an item stored in the TList's internal pointer array (TPointerList).
PInteger is a typed pointer; it points to an address where an Integer (four bytes) is expected to be found.
^ is the dereferencing symbol; you use it with a pointer to tell the compiler that you want the data stored at the address the pointer is currently pointing to.
PInteger(Item1)^ performs a typecast: you tell the compiler to treat the pointer Item1 as if it were a PInteger, and then you dereference it to use the value stored at the address Item1.
Now back to your code. Your function receives two pointers to items (Integers) from the list, and you compare the data stored at those addresses (by dereferencing). The list is responsible for the pointers handed to your function; in fact, if the list holds fewer than two items, your function will never be called.
Note: this function will fail (or give undefined behaviour) if the pointers point to something other than Integers.
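To answer the initialization question concretely, here is a minimal sketch (values and names made up for illustration, assuming the Classes unit is used and Compare is the function from the question) of how the pointers that Compare receives typically get into the list:
procedure SortDemo;
var
  List: TList;
  P: PInteger;
  i: Integer;
begin
  List := TList.Create;
  try
    // Each item is a pointer to an Integer allocated by us.
    for i := 1 to 5 do
    begin
      New(P);
      P^ := Random(100);
      List.Add(P);
    end;
    // Sort passes pairs of these stored pointers to Compare as Item1/Item2.
    List.Sort(Compare);
    for i := 0 to List.Count - 1 do
      Writeln(PInteger(List[i])^);
  finally
    for i := 0 to List.Count - 1 do
      Dispose(PInteger(List[i]));  // free the Integers we allocated
    List.Free;
  end;
end;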

Using MPI_TYPE_VECTOR instead of MPI_GATHER

Suppose that k processes compute the elements of a matrix A, whose dimension is (n,m), where n is the number of rows and m is the number of columns. I am trying to use MPI_GATHER to gather these matrices into the matrix B at the root process, where the dimension of B is (n,km). To be more specific, I wrote the example Fortran code below. Here I am passing the columns of the matrix A (not the entire matrix) to the matrix B, but this does not work. When I run the executable using mpirun -n 2 a.out, I get the error:
malloc: *** error for object 0x7ffa89413fb8: incorrect checksum for freed object - object was probably modified after being freed.
1) Why do I get this error message?
2) Can someone explain conceptually why I would have to use MPI_TYPE_VECTOR?
3) How should I correct the MPI_GATHER part of the code? Can I pass the entire matrix A?
PROGRAM test
IMPLICIT NONE
INCLUDE "mpif.h"
INTEGER, PARAMETER :: n=100, m=100
INTEGER, ALLOCATABLE, DIMENSION(:,:) :: A
INTEGER, DIMENSION(n,m) :: B
INTEGER :: ind_a, ind_c
INTEGER :: NUM_PROC, PROC_ID, IERROR, MASTER_ID=0
INTEGER :: c
INTEGER, DIMENSION(m) :: cvec
CALL MPI_INIT(IERROR)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, PROC_ID, IERROR)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NUM_PROC, IERROR)
ALLOCATE(A(n,m/NUM_PROC))
DO ind_c=1,m
cvec(ind_c)=ind_c
END DO
! Fill in matrix A
DO ind_a=1,n
DO ind_c=1,m/NUM_PROC
c=cvec(ind_c+PROC_ID*m/NUM_PROC)
A(ind_a,ind_c)=c*ind_a
END DO
END DO
! Gather the elements at the root process
DO ind_a=1,n
CALL MPI_GATHER(A(ind_a,:),m/NUM_PROC,MPI_INTEGER,B(ind_a,PROC_ID*m/NUM_PROC+1:(PROC_ID+1)*m/NUM_PROC),m/NUM_PROC,MPI_INTEGER,MASTER_ID,MPI_COMM_WORLD,IERROR)
END DO
CALL MPI_FINALIZE(IERROR)
END PROGRAM
There are two types of gather operation that can be performed on a 2-dimensional array:
1. gathering the elements from dimension 2 of all the processes and collecting them in dimension 2 of one process; and
2. gathering the elements from dimension 2 of all the processes and collecting them in dimension 1 of one process.
That said, in this example n is dimension 1 and m is dimension 2, and we know that Fortran is column-major. Hence dimension 1 is contiguous in memory in Fortran.
In your gather statement you are trying to gather dimension 2 of array A from all the processes and collect it into dimension 2 of array B on the MASTER_ID process (type 1 above). Since dimension 2 is non-contiguous in memory, this causes the memory error you see.
A single MPI_GATHER call, as shown below, will achieve the required operation without any of the looping tricks used above:
CALL MPI_GATHER(A, n*(m/NUM_PROC), MPI_INTEGER, &
B, n*(m/NUM_PROC), MPI_INTEGER, MASTER_ID, &
MPI_COMM_WORLD, IERROR)
But if you are attempting to gather elements from dimension 2 of array A on all the processes into dimension 1 of array B on the MASTER_ID process, that is when we need to make use of MPI_TYPE_VECTOR, which creates a new type out of the non-contiguous elements. Let me know if that is the intention, because the current code logic doesn't look like it needs MPI_TYPE_VECTOR.
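For completeness, a sketch of how such a strided type would be declared for one row of A (this only shows the type construction; actually using it on the receive side of a gather also requires adjusting its extent, e.g. with MPI_TYPE_CREATE_RESIZED, so that consecutive contributions land in the right places):
INTEGER :: rowtype
! One row of A: m/NUM_PROC elements, 1 element per block, stride n
! between consecutive elements (Fortran stores columns contiguously).
CALL MPI_TYPE_VECTOR(m/NUM_PROC, 1, n, MPI_INTEGER, rowtype, IERROR)
CALL MPI_TYPE_COMMIT(rowtype, IERROR)
! A send of (1, rowtype) starting at A(ind_a,1) would then describe
! the whole non-contiguous row A(ind_a,:).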
