Hi, I'm a beginner and I'm stuck on range-based for loops.
I know how to write a range-based loop over this:
std::vector<ExampleClass> vec;
But I don't know how to handle this one:
std::vector<ExampleClass*> vec;
Which of the following should I use?
Option 1:
for (auto x : vec)
Option 2:
for (auto& x : vec)
Thank you.
It depends.
The first gives you a copy of the pointer stored in the vector on each loop iteration.
It follows that:
the pointer points to the address in memory where the ExampleClass object resides
you have access to the object by dereferencing it with *x
making the pointer point to a different address will not have any effect outside the scope of the for loop
The second one gives you a reference to the pointer stored in the actual element of the vector.
It follows that:
you can do the same things as with the first option
additionally, you can make the pointer point to a new address in memory, and this change will also take effect outside the scope of the for loop
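Here is a minimal sketch of the difference; the ExampleClass with a single int member and the values used are just assumptions for illustration:

#include <iostream>
#include <vector>

struct ExampleClass {            // assumed stand-in for your real class
    int value;
};

int main() {
    std::vector<ExampleClass*> vec{new ExampleClass{1}, new ExampleClass{2}};

    // Option 1: x is a copy of the pointer stored in the vector.
    for (auto x : vec) {
        x->value += 10;          // changes the pointed-to object: visible outside the loop
        x = nullptr;             // only changes the local copy; vec still holds the old pointers
    }

    // Option 2: x is a reference to the pointer stored in the vector.
    for (auto& x : vec) {
        delete x;                // release the old object first, otherwise it leaks
        x = new ExampleClass{42}; // reseats the pointer inside the vector itself
    }

    for (auto const& x : vec)
        std::cout << x->value << '\n';   // prints 42 twice

    for (auto x : vec)           // manual cleanup of what was created with new
        delete x;
}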
Important
If you use the last option (reseating the pointer through the reference), you may end up with a memory leak if no pointer is left pointing to the ExampleClass object that was created with new!
You have to delete every element of the vector that was created with new!
It is therefore preferable to use std::unique_ptr or std::shared_ptr! Smart pointers take care of the ownership and destruction semantics for you.
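For example, a sketch of the smart-pointer version (again with an assumed minimal ExampleClass); note that for (auto x : vec) would not even compile here, because a unique_ptr cannot be copied:

#include <iostream>
#include <memory>
#include <vector>

struct ExampleClass {            // assumed stand-in for your real class
    int value;
};

int main() {
    std::vector<std::unique_ptr<ExampleClass>> vec;
    vec.push_back(std::make_unique<ExampleClass>());
    vec.push_back(std::make_unique<ExampleClass>());

    for (auto const& x : vec)    // const reference to the unique_ptr: no copy, no ownership transfer
        x->value += 1;

    std::cout << vec.front()->value << '\n';   // prints 1
}   // no manual delete: each unique_ptr destroys its ExampleClass here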
I would use the second one, because it saves you a copy, but the first one is safer, and the copy of a pointer is inexpensive (see the comment below by Olaf).
Or for (auto const& x : vec) ;-)
Based on my earlier question, I understand the benefit of using stack allocation. Suppose I have an array of arrays. For example, A is a list of matrices, and each element A[i] is a 1x3 matrix. The length of A and the dimension of A[i] are known at run time (given by the user). Each A[i] is a matrix of Float64, and this is also known at run time. However, throughout the program I will be modifying the values of A[i] element by element. What data structure would still allow me to use stack allocation? I tried StaticArrays, but it doesn't allow me to modify a static array.
StaticArrays defines MArray (MVector, MMatrix) types that are fixed-size and mutable. If you use these there's a higher chance of the compiler determining that they can be stack-allocated, but it's not guaranteed. Moreover, since the pattern you're using is that you're passing the mutable state vector into a function which presumably modifies it, it's not going to be valid or helpful to stack allocate that anyway. If you're going to allocate state once and modify it throughout the program, it doesn't really matter if it is heap or stack allocated—stack allocation is only a big win for objects that are allocated, used locally and then don't escape the local scope, so they can be “freed” simply by popping the stack.
From the code snippet you showed in the linked question, the state vector is allocated in the outer function, test_for_loop, which shouldn't be a big deal since it's done once at the beginning of execution. Using a variably sized state vector to index into an array with a splat (...) might be an issue, however, and that's done in test_function. Using something with a fixed size like MVector might be better for that. It might, however, be better still to use a state tuple and return a new state tuple at the end rather than mutating one. The compiler is very good at turning that kind of thing into very efficient code because of immutability.
Note that by convention test_function should be called test_function! since it modifies its M argument and even more so if it modifies the state vector.
I would also note that this isn't a great question/answer pair since it's not standalone at all and really just a continuation of your other question. StackOverflow isn't very good for this kind of iterative question/discussion interaction, I'm afraid.
var a = [...]int{1,2,3,4,5,6}
s1 := a[2:4:5]
Suppose s1 stays in scope longer than a. How does the GC know when it may reclaim the memory of s1's underlying array a?
Consider the runtime representation of s1, per the spec:
type SliceHeader struct {
    Data uintptr
    Len  int
    Cap  int
}
The GC doesn't even know about the beginning of a.
Go uses a mark-and-sweep collector as its present implementation.
As per the algorithm, collection starts from a set of root objects, and everything reachable from them forms a graph; on multi-core machines the GC runs alongside the program on a separate core.
The GC traverses this graph, and whatever is not reachable is considered free.
Go objects also carry metadata, as stated in this post.
An excerpt:
We needed to have some information about the objects since we didn't have headers. Mark bits are kept on the side and used for marking as well as allocation. Each word has 2 bits associated with it to tell you if it was a scalar or a pointer inside that word. It also encoded whether there were more pointers in the object so we could stop scanning objects sooner than later.
The reason Go's slices (slice headers) are structs instead of pointers to structs is documented by Russ Cox on this page, under the slice section.
This is an excerpt:
Go originally represented a slice as a pointer to the structure (slice header), but doing so meant that every slice operation allocated a new memory object. Even with a fast allocator, that creates a lot of unnecessary work for the garbage collector, and we found that, as was the case with strings, programs avoided slicing operations in favor of passing explicit indices. Removing the indirection and the allocation made slices cheap enough to avoid passing explicit indices in most cases.
The size (length) of an array is part of its type; the types [1]int and [2]int are distinct.
One thing to remember is that Go is a value-oriented language: instead of storing pointers, it stores values directly.
An array such as [3]int is a value in Go, one value as a whole, so if you pass an array, the whole array is copied.
When you write a[1], you are accessing part of that value.
The SliceHeader's Data field says: treat the address it holds as the base of the slice, which is not necessarily &a[0].
As far as I know, when you request a[4], the address that gets computed is
&a[0] + (sizeof(type) * 4)
Now, if you access elements through the slice s := a[2:4] and request s[1], what you are really requesting is
&a[2] + (sizeof(type) * 1)
that is, the slice's Data pointer plus sizeof(type) * 1.
The problem with std::array is that it has a fixed compile-time size. I want a container that can be created with a dynamic size, but that size stays fixed throughout the life of the container (so std::vector won't work, because push_back will increment the size by 1).
I am wondering how to implement this. I tried writing a class that contains an internal std::vector for storage and only exposes the members of std::vector that don't change the size. My question is about the copy/move assignment operators, as well as the swap member function. Usually, move assignment is declared noexcept. However, before assigning I have to check whether the lhs and the rhs are of the same size. If they are not, I must throw an exception, because otherwise assigning rhs to lhs would change the size of lhs, which I don't want. The same happens with swap, which in my implementation is therefore not noexcept.
I know I am going against the usual advice to make swap and move assignment noexcept (Item 14 of Scott Meyers' Effective Modern C++), so I am wondering: is this good design? Or is there a better way to implement a container with a fixed runtime size?
Example: Suppose I have defined my fixed size container with the name FixedSizeArray<T>.
auto arr1 = FixedSizeArray<double>(4, 1.0);
The last line of code defines a FixedSizeArray of doubles with size 4. Now define another:
auto arr2 = FixedSizeArray<double>(10, 1.0);
Should the following line:
arr1 = std::move(arr2);
throw? What about:
arr1.swap(arr2);
Do declare move assignment and swap as noexcept. But don't throw on mismatched sizes...
Since your arrays are fixed-size, ending up assigning or swapping two arrays of different sizes can't possibly work, in any circumstances. It's not an exceptional condition, it's a situation where the program doesn't know what it's doing. That's a case for an assertion.
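A minimal sketch of what that could look like, assuming the FixedSizeArray interface from the question (member names and the exact constructor set are guesses); it asserts on mismatched sizes instead of throwing, as suggested above:

#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

template <typename T>
class FixedSizeArray {
public:
    FixedSizeArray(std::size_t n, T const& value) : data_(n, value) {}

    // Construction may take any size; only assignment and swap are size-locked.
    FixedSizeArray(FixedSizeArray const&) = default;
    FixedSizeArray(FixedSizeArray&&) noexcept = default;

    // Assigning or swapping arrays of different sizes is a programming error,
    // not an exceptional runtime condition, so assert rather than throw.
    // (Copy assignment omitted for brevity; it would assert the same way.)
    FixedSizeArray& operator=(FixedSizeArray&& other) noexcept {
        assert(data_.size() == other.data_.size());
        data_ = std::move(other.data_);
        return *this;
    }

    void swap(FixedSizeArray& other) noexcept {
        assert(data_.size() == other.data_.size());
        data_.swap(other.data_);
    }

    std::size_t size() const noexcept { return data_.size(); }
    T&       operator[](std::size_t i)       { return data_[i]; }
    T const& operator[](std::size_t i) const { return data_[i]; }

private:
    std::vector<T> data_;  // storage; the resizing members are simply not exposed
};

int main() {
    FixedSizeArray<double> arr1(4, 1.0);
    FixedSizeArray<double> arr2(4, 2.0);
    arr1 = std::move(arr2);   // OK: same size
    // A move from a FixedSizeArray of a different size would trip the assertion
    // in a debug build instead of throwing.
}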
In an std::vector of a non-POD data type, is there a difference between a vector of objects and a vector of (smart) pointers to objects? I mean a difference in how the compiler implements these data structures.
E.g.:
class Test {
public:
    std::string s;
    Test* other;
};
std::vector<Test> vt;
std::vector<Test*> vpt;
Could it be that there is no performance difference between vt and vpt?
In other words: when I define a vector<Test>, internally will the compiler create a vector<Test*> anyway?
No, this is not allowed by the C++ standard. The following code is legal C++:
vector<Test> vt;
Test t1; t1.s = "1"; t1.other = NULL;
Test t2; t2.s = "2"; t2.other = NULL;
vt.push_back(t1);
vt.push_back(t2);
Test* pt = &vt[0];
pt++;
Test q = *pt; // q is now a copy of t2, the second element
In other words, the elements of a vector are guaranteed to be stored contiguously (accessing them like a C array, as above, is legal), so the compiler effectively has to store the elements themselves in an array and cannot just store pointers to them.
But beware that such a pointer into the vector is valid only as long as the vector is not reallocated (which normally happens only when its size grows beyond its capacity).
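A brief sketch of that caveat, using a struct version of the question's Test so its members are accessible (data() and reserve() are standard std::vector members):

#include <string>
#include <vector>

struct Test {                       // struct version of the question's Test
    std::string s;
    Test* other;
};

int main() {
    std::vector<Test> vt;
    vt.reserve(2);                  // room for two elements
    vt.push_back(Test{"1", nullptr});
    vt.push_back(Test{"2", nullptr});

    Test* p = vt.data();            // pointer to the contiguous element array
    // p[0] and p[1] are valid here, exactly like a C array.

    vt.push_back(Test{"3", nullptr});   // size exceeds the reserved capacity: the vector will typically reallocate
    // p may now be dangling and must not be dereferenced.
    p = vt.data();                  // re-fetch the pointer after any potential reallocation
}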
In general, whatever type is stored in the vector, instances of that type may be copied. This means that if you are storing a std::string, instances of std::string will be copied.
For example, when you push a Type into a vector, the Type instance is copied into an instance housed inside the vector. Copying a pointer will be cheap, but, as Konrad Rudolph pointed out in the comments, this should not be the only thing you consider.
For simple objects like your Test, copying is going to be so fast that it will not matter.
Additionally, with C++11, moving avoids creating an extra copy when one is not necessary.
So in short: A pointer will be copied faster, but copying is not the only thing that matters. I would worry about maintainable, logical code first and performance when it becomes a problem (or the situation calls for it).
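For example, a sketch of the copy-versus-move point; the struct version of Test and the deliberately long string are just there to make the copy cost visible:

#include <string>
#include <utility>
#include <vector>

struct Test {                       // struct version of the question's Test
    std::string s;
    Test* other;
};

int main() {
    std::vector<Test> vt;
    vt.reserve(2);                  // avoid reallocation noise in this example

    Test t{std::string(1000, 'x'), nullptr};   // a Test whose string is expensive to copy
    vt.push_back(t);                // copies t, including its 1000-character string
    vt.push_back(std::move(t));     // moves t: the string's buffer is stolen, no element-wise copy

    std::vector<Test*> vpt;
    vpt.push_back(&vt[0]);          // copying a pointer is trivially cheap,
                                    // but lifetime and ownership are now your problem
}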
As for your question about an internal pointer vector: no, vectors are implemented as contiguous arrays that are reallocated to larger storage when necessary. You can find GNU's libstdc++ implementation of vector online.
The answer gets a lot more complicated below the C++ level. Pointers will of course be involved at some point, since an entire program cannot fit into registers. I don't know enough about that level to elaborate further, though.
In C99 the new complex types were defined. I am trying to understand whether a compiler can take advantage of this knowledge in optimizing memory accesses. Are these objects (A-F) of type complex float guaranteed to be 8-byte aligned in memory?
#include "complex.h"
typedef complex float cfloat;
cfloat A;
cfloat B[10];
void func(cfloat C, cfloat *D)
{
    cfloat E;
    cfloat F[10];
}
Note that for D, the question relates to the object pointed to by D, not to the pointer storage itself. And, if that is assumed aligned, how can one be sure that the address passed is of an actual complex and not a cast from another (non 8-aligned) type?
UPDATE 1: I probably answered my own question in the last comment regarding the D pointer. Because there is no way to know what address will be passed as the argument of the function call, there is no way to guarantee that it will be 8-aligned. This is solvable via __builtin_assume_aligned().
The question is still open for the other variables.
UPDATE 2: I posted a follow-up question here.
A float complex is guaranteed to have the same memory layout and alignment as an array of two float (§6.2.5). Exactly what that alignment will be is defined by your compiler or platform. All you can say for sure is that a float complex is at least as aligned as a float.
if that is assumed aligned, how can one be sure that the address passed is of an actual complex and not a cast from another (non 8-aligned) type?
If your caller passes you an insufficiently-aligned pointer, that's undefined behavior and a bug in their code (§6.3.2.3). You don't need to support that (though you may choose to).
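As a quick sketch of how to inspect what a given implementation chooses, here is the C++ analogue using std::complex<float> (which carries the same array-of-two-floats layout guarantee); the exact values printed are implementation-defined, and only "at least as aligned as float" is portable:

#include <complex>
#include <cstdio>

int main() {
    // The portable part of the answer above: a complex is at least as aligned as its real type.
    static_assert(alignof(std::complex<float>) >= alignof(float),
                  "complex<float> must be at least float-aligned");

    // Whether it happens to be 8-byte aligned is up to the implementation; inspect it here.
    std::printf("alignof(float)               = %zu\n", alignof(float));
    std::printf("alignof(std::complex<float>) = %zu\n", alignof(std::complex<float>));
    std::printf("sizeof(std::complex<float>)  = %zu\n", sizeof(std::complex<float>));
}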