What is the difference between Static and Dynamic in Data Structure

I want to know the clear-cut difference between static and dynamic in C#.
I have seen many posts on different blogs, but I was not satisfied by their answers.
Please explain it to me clearly.

These terms are used in a number of ways, depending on the specific context. But in general, static refers to something that's specified early, or hard-coded into a program, and is not easily changed. Dynamic refers to something that's intended to be updated on the fly.
For instance, in C, if you declare an array like:
int arr[100];
the size of the array is static: it's always 100 elements. Even if you use a macro, like this:
#define SIZE 100
int arr[SIZE];
you would have to update the macro definition and recompile the program to change the array's size. The compiler will set aside a fixed block of memory to hold the array: if it's a local variable, the memory is allocated in the function's stack frame; if it's a global variable, it is allocated at program start-up in the BSS segment (the specific details are implementation-dependent, but this is the typical arrangement).
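To illustrate the two cases just described, here is a minimal sketch (the function name f and the size 100 are arbitrary):

int global_arr[100];      // global: space reserved at program start-up (typically in BSS)

void f(void) {
    int local_arr[100];   // local: space lives in f's stack frame for the duration of the call
}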
On the other hand, if you use:
int *arr = malloc(n * sizeof(int));
the array's size is dynamic -- it depends on the current value of the variable n, which can depend on program inputs and other state. You can also use realloc() to change the array's size.
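For instance, a minimal sketch of allocating and then growing such an array at run time (the initial size of 4 and the doubling are arbitrary choices for this example; the snippet is valid as both C and C++):

#include <stdlib.h>

int main(void) {
    int n = 4;                                  /* size chosen at run time */
    int *arr = (int *)malloc(n * sizeof(int));  /* dynamically sized array */
    if (arr == NULL)
        return 1;
    for (int i = 0; i < n; i++)
        arr[i] = i;

    /* Grow the array to twice its size; realloc preserves the old contents. */
    int *tmp = (int *)realloc(arr, 2 * n * sizeof(int));
    if (tmp != NULL)
        arr = tmp;

    free(arr);
    return 0;
}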

Related

Mutable data types that use stack allocation

Based on my earlier question, I understand the benefit of using stack allocation. Suppose I have an array of arrays. For example, A is a list of matrices and each element A[i] is a 1x3 matrix. The length of A and the dimensions of A[i] are known at run time (given by the user). Each A[i] is a matrix of Float64, and this is also known at run time. However, throughout the program, I will be modifying the values of A[i] element by element. What data structure would also allow me to use stack allocation? I tried StaticArrays, but it doesn't allow me to modify a static array.
StaticArrays defines MArray (MVector, MMatrix) types that are fixed-size and mutable. If you use these there's a higher chance of the compiler determining that they can be stack-allocated, but it's not guaranteed. Moreover, since the pattern you're using is that you're passing the mutable state vector into a function which presumably modifies it, it's not going to be valid or helpful to stack allocate that anyway. If you're going to allocate state once and modify it throughout the program, it doesn't really matter if it is heap or stack allocated—stack allocation is only a big win for objects that are allocated, used locally and then don't escape the local scope, so they can be “freed” simply by popping the stack.
From the code snippet you showed in the linked question, the state vector is allocated in the outer function, test_for_loop, which shouldn't be a big deal since it's done once at the beginning of execution. Using a variably sized state vector to index into an array with a splat (...) might be an issue, however, and that's done in test_function. Using something with a fixed size like MVector might be better for that. It might, however, be better still to use a state tuple and return a new rather than a mutated state tuple at the end. The compiler is very good at turning that kind of thing into efficient code because of immutability.
Note that by convention test_function should be called test_function! since it modifies its M argument and even more so if it modifies the state vector.
I would also note that this isn't a great question/answer pair since it's not standalone at all and really just a continuation of your other question. StackOverflow isn't very good for this kind of iterative question/discussion interaction, I'm afraid.

How do I set an array size using an atomic variable in Chapel?

In pursuing A3C I need to set multiple global and local parameters. The global parameters need to have shared size. I think this means atomic variables, but it's still new to me.
var n: atomic int,
    x: [1..n] real; // a vector of global size

proc localDude() {
  n += 1; // increase the size of n
}
I understand the array will grow and shrink with the domain, but I'm having a hard time getting the semantics together. Thanks!
So there are a few things.
Domains take their bounds by value, not by reference. Thus, modifying a variable used by a domain to construct its bounds does not modify the domain (example).
Arrays do take their domain by reference, and so assigning to the used domain variable does make the array change (example).
Domains are not a supported atomic type yet, so having a global atomic domain will not work. However, you could use a sync variable as a lock so that modifications are serialized. There is an example of this in the Learn Chapel in Y Minutes tutorial, near the bottom (search for "mutex").

Halide: Filter elements out of vector (Halide::Runtime::Buffer)

I have a Halide::Runtime::Buffer and would like to remove elements that match a criteria, ideally such that the operation occurs in-place and that the function can be defined in a Halide::Generator.
I have looked into using reductions, but it seems to me that I cannot output a vector of a different length -- I can only set certain elements to a value of my choice.
So far, the only way I got it to work was by using an extern "C" call and passing the Buffer I wanted to filter, along with a boolean Buffer (1's and 0's as ints). I read the Buffers into vectors of another library (Armadillo), performed my desired filtering, then read the filtered vector back into Halide.
This seems quite messy. Also, with this code I'm passing a Halide::Buffer object, not a Halide::Runtime::Buffer object, so I don't know how to implement this within a Halide::Generator.
So my question is twofold:
Can this kind of filtering be achieved in pure Halide, preferably in-place?
Is there an example of using extern "C" functions within Generators?
The first part is effectively stream compaction. It can be done in Halide, though the output size will either need to be fixed or a function of the input size (e.g. the same size as the input). One can also get the maximum index produced as an output, to indicate how many results were produced. I wrote up a bit of an answer on how to do a prefix-sum based stream compaction here: Halide: Reduction over a domain for the specific values. It is an open question how to do this most efficiently in parallel across a variety of targets, and we hope to do some work on exploring that space soon.
Whether this is in-place or not depends on whether one can put everything into a single series of update definitions for a Func. For example, it cannot be done in-place on an input passed into a Halide filter, because reductions always allocate a buffer to work on. It may be possible if the input is produced inside the Generator.
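For orientation, here is a rough sketch of the prefix-sum approach (this is not the linked answer's code: the fixed size N, the "keep nonzero elements" predicate, and the choice to scatter rejected elements to a trash slot at index N are all assumptions made for the example, and it is written as a standalone JIT program rather than a Generator purely to show the pattern):

#include "Halide.h"
using namespace Halide;

int main() {
    const int N = 16;                            // placeholder input size
    Buffer<int> in(N);
    for (int i = 0; i < N; i++) in(i) = i % 3;   // arbitrary test data

    Var x;
    Func keep, scan, out;

    // 1 where the element passes the filter (here: nonzero), 0 otherwise.
    keep(x) = select(in(x) != 0, 1, 0);
    keep.compute_root();

    // Serial inclusive prefix sum of the keep flags.
    RDom r(1, N - 1);
    scan(x) = keep(x);
    scan(r) = scan(r) + scan(r - 1);
    scan.compute_root();

    // Scatter kept elements to their compacted positions; rejected elements
    // all land in a trash slot at index N, so the output has N + 1 entries.
    RDom r2(0, N);
    out(x) = 0;
    Expr pos = select(keep(r2) == 1, scan(r2) - 1, N);
    out(clamp(pos, 0, N)) = in(r2);

    Buffer<int> result = out.realize({N + 1});
    // result(0) .. result(count - 1) now hold the surviving elements in order,
    // where count is the final prefix-sum value scan(N - 1).
    return 0;
}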
Re: the second question, are you using define_extern? This is not super well integrated with Halide::Runtime::Buffer, as the external function must be implemented in terms of halide_buffer_t, but it is fairly straightforward to access from within a Generator. We don't have a tutorial on this yet, but there are a number of examples in the tests. E.g.:
https://github.com/halide/Halide/blob/master/test/generator/define_extern_opencl_generator.cpp#L19
and the definition:
https://github.com/halide/Halide/blob/master/test/generator/define_extern_opencl_aottest.cpp#L119
(These do not need to be extern "C", as I implemented C++ name mangling a while back. Just set the name mangling parameter of define_extern to NameMangling::CPlusPlus and remove the extern "C" from the external function's declaration. This is very useful, as it gets you link-time type checking on the external function, which catches a moderately frequent class of errors.)

Implementing a fixed run-time size array. Should move ctor and swap throw exceptions?

The problem with std::array is that it has a fixed compile-time size. I want a container that can be created with a dynamic size, but that size stays fixed throughout the life of the container (so std::vector won't work, because push_back will increment the size by 1).
I am wondering how to implement this. I tried writing a class that contains an internal std::vector for storage and only exposes the members of std::vector that don't change the size. My question is about the copy/move assignment operators, as well as the swap member function. Usually, move assignment is declared noexcept. However, before assigning, I have to check whether the lhs and the rhs are of the same size. If they are not, I must throw an exception, because otherwise assigning rhs to lhs would change the size of lhs, which I don't want. The same happens with swap, which in my implementation is not noexcept for the same reason.
I know I am going against the usual advice to make swap and move assignment noexcept (Item 14 of Scott Meyers' Effective Modern C++), so I am wondering: is this good design? Or is there a better way to implement a fixed run-time size container?
Example: Suppose I have defined my fixed size container with the name FixedSizeArray<T>.
auto arr1 = FixedSizeArray<double>(4, 1.0);
This line defines a FixedSizeArray holding four doubles, each initialized to 1.0. Now define another:
auto arr2 = FixedSizeArray<double>(10, 1.0);
Should the following line:
arr1 = std::move(arr2)
throw? What about:
arr1.swap(arr2)
Do declare move assignment and swap as noexcept, but don't throw on mismatched sizes...
Since your arrays are fixed-size, ending up assigning or swapping two arrays of different sizes can't possibly work, in any circumstances. It's not an exceptional condition, it's a situation where the program doesn't know what it's doing. That's a case for an assertion.
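A minimal sketch of that design, assuming (as in the question) a wrapper over std::vector named FixedSizeArray with a (count, value) constructor; everything else here is illustrative:

#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch only: a thin wrapper whose size is fixed at construction time.
template <typename T>
class FixedSizeArray {
public:
    FixedSizeArray(std::size_t n, const T &value) : data_(n, value) {}

    FixedSizeArray(FixedSizeArray &&) noexcept = default;

    FixedSizeArray &operator=(FixedSizeArray &&other) noexcept {
        // A size mismatch is a programming error, not a recoverable condition.
        assert(data_.size() == other.data_.size() && "size mismatch");
        data_ = std::move(other.data_);
        return *this;
    }

    void swap(FixedSizeArray &other) noexcept {
        assert(data_.size() == other.data_.size() && "size mismatch");
        data_.swap(other.data_);
    }

    T &operator[](std::size_t i) { return data_[i]; }
    const T &operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const noexcept { return data_.size(); }

private:
    std::vector<T> data_;
};

With this design, the arr1 = std::move(arr2) and arr1.swap(arr2) lines from the question trip an assertion in a debug build instead of throwing, and both operations can honestly be declared noexcept.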

Performance of std::vector<Test> vs std::vector<Test*>

In a std::vector of a non-POD data type, is there a difference between a vector of objects and a vector of (smart) pointers to objects? I mean a difference in the implementation of these data structures by the compiler.
E.g.:
class Test {
    std::string s;
    Test *other;
};
std::vector<Test> vt;
std::vector<Test*> vpt;
Could there be no performance difference between vt and vpt?
In other words: when I define a vector<Test>, internally will the compiler create a vector<Test*> anyway?
No, this is not allowed by the C++ standard. The following code is legal C++:
std::vector<Test> vt;                  // assumes Test's members are public
Test t1; t1.s = "1"; t1.other = NULL;
Test t2; t2.s = "2"; t2.other = NULL;
vt.push_back(t1);
vt.push_back(t2);
Test* pt = &vt[0];                     // pointer to the first element's storage
pt++;                                  // step to the next element, as with a plain array
Test q = *pt;                          // q is now a copy of t2
In other words, a vector's elements are guaranteed to be stored contiguously (accessing them through a raw pointer, as with a C array, is legal), so the compiler effectively has to store the elements themselves in a single block; it may not just store pointers to them.
But beware that such a pointer into the vector's storage is valid only as long as the vector is not reallocated (which normally happens only when the size grows beyond the capacity).
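A small illustration of that caveat (the loop fills the vector up to its current capacity so that the next push_back is guaranteed to reallocate):

#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(1);
    int *p = &v[0];                    // points into the vector's current storage
    while (v.size() < v.capacity())
        v.push_back(0);                // staying within capacity keeps p valid
    v.push_back(2);                    // exceeds capacity: reallocation; p now dangles
    // Dereferencing p here would be undefined behavior.
    return 0;
}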
In general, whatever type is stored in the vector, instances of it will be copied. This means that if you are storing std::string objects, each std::string gets copied.
For example, when you push an object into a vector, it is copied into an instance housed inside the vector. Copying a pointer is cheap, but, as Konrad Rudolph pointed out in the comments, that should not be the only thing you consider.
For simple objects like your Test, copying is going to be so fast that it will not matter.
Additionally, with C++11, move semantics can avoid creating an extra copy when one is not necessary.
So in short: a pointer will be copied faster, but copying is not the only thing that matters. I would worry about maintainable, logical code first, and about performance when it becomes a problem (or the situation calls for it).
As for your question about an internal pointer vector: no, vectors are implemented as arrays that are resized when necessary. You can find GNU's libstdc++ implementation of vector online.
The answer gets a lot more complicated at a level lower than C++. Pointers will of course have to be involved, since an entire program cannot fit into registers. I don't know enough about that level to elaborate further, though.
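To make the copying and moving concrete, here is a small example using the Test type from the question (declared as a struct here so the members are public; the string values are arbitrary):

#include <string>
#include <utility>
#include <vector>

struct Test {
    std::string s;
    Test *other;
};

int main() {
    std::vector<Test>  vt;
    std::vector<Test*> vpt;

    Test t{"hello", nullptr};
    vt.push_back(t);             // copies t, including its std::string member
    vt.push_back(std::move(t));  // moves t: the string is moved rather than copied

    Test u{"world", nullptr};
    vpt.push_back(&u);           // copies only the pointer; u itself is untouched
    return 0;
}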
