Max int reached before the limit defined in &lt;climits&gt;

I get this error
whenever I enter any number bigger than the following, in code compiled with Microsoft Visual C++ 2010 Express:
int size = 276447232;
Though, according to this conversation, this one or that one, I should be able to go up to 2147483646 before encountering any problem, no?

The program is trying to allocate too much stack space:
char outputGwb[size]; // array is created "on the stack"
Use malloc (or new in C++) to allocate memory from the heap. Make sure to free (or delete in C++) the memory later; just don't mix the allocation/deallocation strategies.
char* outputGwb = new char[size]; // C++: note use of the "new" keyword
char* outputGwb = malloc(size); // C: note no cast needed in a C compiler
This issue is thus about the maximum size of a particular resource and is not related to the maximum number an integer value can represent.
See What and where are the stack and heap? for an explanation of the difference between the two memory allocation areas. In addition, while I wouldn't necessarily recommend it, here is a thread that discusses how to change the stack size in Visual C++.


NOTE: We have a lot of segfault questions, with largely the same
answers, so I'm trying to collapse them into a canonical question like
we have for undefined reference.
Although we have a question covering what a segmentation fault
is, it covers the what, but doesn't list many reasons. The top answer says "there are many reasons", and only lists one, and most of the other answers don't list any reasons.
All in all, I believe we need a well-organized community wiki on this topic, which lists all the common causes (and then some) to get segfaults. The purpose is to aid in debugging, as mentioned in the answer's disclaimer.
I know what a segmentation fault is, but it can be hard to spot in the code without knowing what they often look like. Although there are, no doubt, far too many to list exhaustively, what are the most common causes of segmentation faults in C and C++?
WARNING!
The following are potential reasons for a segmentation fault. It is virtually impossible to list all reasons. The purpose of this list is to help diagnose an existing segfault.
The relationship between segmentation faults and undefined behavior cannot be stressed enough! All of the below situations that can create a segmentation fault are technically undefined behavior. That means that they can do anything, not just segfault -- as someone once said on USENET, "it is legal for the compiler to make demons fly out of your nose." Don't count on a segfault happening whenever you have undefined behavior. You should learn which undefined behaviors exist in C and/or C++, and avoid writing code that has them!
More information on Undefined Behavior:
What is the simplest standard conform way to produce a Segfault in C?
Undefined, unspecified and implementation-defined behavior
How undefined is undefined behavior?
What Is a Segfault?
In short, a segmentation fault is caused when the code attempts to access memory that it doesn't have permission to access. Every program is given a piece of memory (RAM) to work with, and for security reasons, it is only allowed to access memory in that chunk.
For a more thorough technical explanation about what a segmentation fault is, see What is a segmentation fault?.
Here are the most common reasons for a segmentation fault error. Again, these should be used in diagnosing an existing segfault. To learn how to avoid them, learn your language's undefined behaviors.
This list is also no replacement for doing your own debugging work. (See that section at the bottom of the answer.) These are things you can look for, but your debugging tools are the only reliable way to zero in on the problem.
Accessing a NULL or uninitialized pointer
If you have a pointer that is NULL (ptr = 0) or that is completely uninitialized (it isn't set to anything at all yet), attempting to access or modify memory through that pointer has undefined behavior.
int* ptr = 0;
*ptr += 5;
Since a failed allocation (such as with malloc or new) will return a null pointer, you should always check that your pointer is not NULL before working with it.
Note also that even reading values (without dereferencing) of uninitialized pointers (and variables in general) is undefined behavior.
Sometimes this access of an uninitialized pointer can be quite subtle, such as when trying to interpret such a pointer as a string in a C print statement.
char* ptr;               // uninitialized
char buf[64];
sprintf(buf, "%s", ptr); // undefined behavior: reads through an uninitialized pointer
See also:
How to detect if variable uninitialized/catch segfault in C
Concatenation of string and int results in seg fault C
Accessing a dangling pointer
If you use malloc or new to allocate memory, and then later free or delete that memory through the pointer, that pointer is now considered a dangling pointer. Dereferencing it (as well as simply reading its value, unless you have since assigned it a new value such as NULL) is undefined behavior and can result in a segmentation fault.
Something* ptr = new Something(123, 456);
delete ptr;
std::cout << ptr->foo << std::endl;
See also:
What is a dangling pointer?
Why my dangling pointer doesn't cause a segmentation fault?
Stack overflow
[No, not the site you're on now, what it was named for.] Oversimplified, the "stack" is like that spike you stick your order paper on in some diners. This problem can occur when you put too many orders on that spike, so to speak. In the computer, any variable that is not dynamically allocated, and the bookkeeping for any function call that has yet to return, goes on the stack.
One cause of this might be deep or infinite recursion, such as when a function calls itself with no way to stop. Because that stack has overflowed, the order papers start "falling off" and taking up other space not meant for them. Thus, we can get a segmentation fault. Another cause might be the attempt to initialize a very large array: it's only a single order, but one that is already large enough by itself.
int stupidFunction(int n)
{
return stupidFunction(n);
}
Another cause of a stack overflow would be having too many (non-dynamically allocated) variables at once.
int stupidArray[600851475143];
One case of a stack overflow in the wild came from a simple omission of a return statement in a conditional intended to prevent infinite recursion in a function. The moral of that story, always ensure your error checks work!
See also:
Segmentation Fault While Creating Large Arrays in C
Seg Fault when initializing array
Wild pointers
Creating a pointer to some random location in memory is like playing Russian roulette with your code - you could easily miss and create a pointer to a location you don't have access rights to.
int n = 123;
int* ptr = (&n + 0xDEADBEEF); //This is just stupid, people.
As a general rule, don't create pointers to literal memory locations. Even if they work one time, the next time they might not. You can't predict where your program's memory will be at any given execution.
See also:
What is the meaning of "wild pointer" in C?
Attempting to read past the end of an array
An array is a contiguous region of memory, where each successive element is located at the next address in memory. However, most arrays don't have an innate sense of how large they are, or what the last element is. Thus, it is easy to blow past the end of the array and never know it, especially if you're using pointer arithmetic.
If you read past the end of the array, you may wind up going into memory that is uninitialized or belongs to something else. This is technically undefined behavior. A segfault is just one of those many potential undefined behaviors. [Frankly, if you get a segfault here, you're lucky. Others are harder to diagnose.]
// like most UB, this code is a total crapshoot.
int arr[3] {5, 151, 478};
int i = 0;
while(arr[i] != 16)
{
std::cout << arr[i] << std::endl;
i++;
}
Or the frequently seen one using for with &lt;= instead of &lt; (reads one element past the end):
char arr[10];
for (int i = 0; i<=10; i++)
{
std::cout << arr[i] << std::endl;
}
Or even an unlucky typo which compiles fine (seen here) and allocates only 1 element initialized with dim instead of dim elements.
int* my_array = new int(dim);
Additionally, it should be noted that you are not even allowed to create (let alone dereference) a pointer that points outside the array; you may only form a pointer to an element within the array, or to one past the end. Otherwise, you are triggering undefined behaviour.
See also:
I have segfaults!
Forgetting a NUL terminator on a C string.
C strings are, themselves, arrays with some additional behaviors. They must be null terminated, meaning they have an \0 at the end, to be reliably used as strings. This is done automatically in some cases, and not in others.
If this is forgotten, some functions that handle C strings never know when to stop, and you can get the same problems as with reading past the end of an array.
char str[3] = {'f', 'o', 'o'};
int i = 0;
while(str[i] != '\0')
{
std::cout << str[i] << std::endl;
i++;
}
With C-strings, it really is hit-and-miss whether a missing \0 makes any visible difference. You should assume it matters and avoid the undefined behavior: better to write char str[4] = {'f', 'o', 'o', '\0'};
Attempting to modify a string literal
If you assign a string literal to a char*, it cannot be modified. For example...
char* foo = "Hello, world!";
foo[7] = 'W';
...triggers undefined behavior, and a segmentation fault is one possible outcome.
See also:
Why is this string reversal C code causing a segmentation fault?
Mismatching Allocation and Deallocation methods
You must use malloc and free together, new and delete together, and new[] and delete[] together. If you mix 'em up, you can get segfaults and other weird behavior.
See also:
Behaviour of malloc with delete in C++
Segmentation fault (core dumped) when I delete pointer
Errors in the toolchain.
A bug in the machine code backend of a compiler is quite capable of turning valid code into an executable that segfaults. A bug in the linker can definitely do this too.
Particularly scary in that this is not UB invoked by your own code.
That said, you should always assume the problem is you until proven otherwise.
Other Causes
The possible causes of Segmentation Faults are about as numerous as the number of undefined behaviors, and there are far too many for even the standard documentation to list.
A few less common causes to check:
UD2 generated on some platforms due to other UB
c++ STL map::operator[] done on an entry being deleted
DEBUGGING
Firstly, read through the code carefully. Most errors are caused simply by typos or mistakes. Make sure to check all the potential causes of the segmentation fault. If this fails, you may need to use dedicated debugging tools to find out the underlying issues.
Debugging tools are instrumental in diagnosing the causes of a segfault. Compile your program with the debugging flag (-g), and then run it with your debugger to find where the segfault is likely occurring.
Recent compilers support building with -fsanitize=address, which typically results in programs that run about 2x slower but can detect address errors more accurately. However, other errors (such as reading from uninitialized memory or leaking non-memory resources such as file descriptors) are not supported by this method, and it is impossible to use many debugging tools and ASan at the same time.
Some Memory Debuggers
GDB | Mac, Linux
valgrind (memcheck)| Linux
Dr. Memory | Windows
Additionally it is recommended to use static analysis tools to detect undefined behaviour - but again, they are a tool merely to help you find undefined behaviour, and they don't guarantee to find all occurrences of undefined behaviour.
If you are really unlucky however, using a debugger (or, more rarely, just recompiling with debug information) may influence the program's code and memory sufficiently that the segfault no longer occurs, a phenomenon known as a heisenbug.
In such cases, what you may want to do is to obtain a core dump, and get a backtrace using your debugger.
How to generate a core dump in Linux on a segmentation fault?
How do I analyse a program's core dump file with GDB when it has command-line parameters?

Max size of a 2-dimensional array in C++

I want to run a large computational program in 2 and 3 dimensions with arrays of size array[40000][40000] or more. The code below illustrates my problem. I commented out the vector version because it has the same problem; when I run it, it crashes inside the vector library. How can I increase the memory available to the program, or delete (clean) some part of it while the program is running?
#include &lt;iostream&gt;
#include &lt;cstdlib&gt;
#include &lt;vector&gt;
using namespace std;

int main() {
    float array[40000][40000];
    //vector< vector<double> > array(1000,1000);
    cout << "bingo" << endl;
    return 0;
}
A slightly better option than vector (and far better than vector-of-vector1), which like vector, uses dynamic allocation for the contents (and therefore doesn't overflow the stack), but doesn't invite resizing:
std::unique_ptr<float[][40000]> array{ new float[40000][40000] };
Conveniently, float[40000][40000] still appears, making it fairly obvious what is going on here even to a programmer unfamiliar with incomplete array types.
1 vector<vector<T> > is very bad, since it would have many different allocations, which all have to be separately initialized, and the resulting storage would be discontiguous. Slightly better is a combination of vector<T> with vector<T*>, with the latter storing pointers created one row apart into a single large buffer managed by the former.

Compiler assumptions about relative locations from memory objects

I wonder what assumptions compilers make about the relative locations of memory objects.
For example if we allocate two stack variables of size 1 byte each, right after another and initialize them both with zero, can a compiler optimize this case by only emitting one single instruction that overwrites both bytes in memory with zeros, because the compiler knows the relative position of both variables?
I am interested specifically in the more well known compilers like gcc, g++, clang, the Windows C/C++ compiler etc.
A compiler can optimize multiple assignments into one.
a = 0;
b = 0;
might become something like
*(short*)&a = 0;
The subtle part is "if we allocate two stack variables of size 1 byte each, right after another" since you cannot really do that. A compiler can shuffle stack positions around at will. Also, simply declaring variables will not necessarily mean any stack allocation. Variables might just be in registers. In C you would have to use alloca and even that does not provide "right after another".
Even more generally, the C standard does not allow relational comparisons between pointers to different objects; such comparisons are undefined behavior.

How to pass a list of arbitrary size to an OpenCL kernel

I have been fiddling with OpenCL recently, and I have run into a serious limitation: You cannot pass an array of pointers to a kernel. This makes it difficult to pass an arbitrarily sized list of, say, images to a kernel. I had a couple of thoughts toward this, and I was wondering if anybody could say for sure whether or not they would work, or offer better suggestions.
Let's say you had x image objects that you wanted to be passed to the kernel. If they were all only 2D, one solution might be to pack them all into a 3D image, and just index the slices. The problem with this is, if the images are different sizes, then space will be wasted, because the 3D image has to have the width of the widest image, the height of the tallest image, and the depth would be the number of images.
However, I was also thinking that when you pass a buffer object to a kernel, it appears in the kernel as a pointer. If you had a kernel that took an arbitrary data buffer, and a buffer designated just for storing pointers, and then appended the pointer to the first buffer to the end of the second buffer, (provided there was enough allocated space of course) then maybe you could keep a buffer of pointers to other buffers on the device. This buffer could then be passed to other kernels, which would then, with some interesting casting, be able to access these arbitrary buffers on the device. The only problem is whether or not a given buffer pointer would remain the same throughout the life of the buffer. Also, when you pass an image, you get a struct as an argument. Now, does this struct actually have a home in device memory? Or is it around just long enough to be passed to the kernel? These things are important in that they would determine whether or not the pointer buffer trick would work on images too, assuming it would work at all.
Does anybody know if the buffer trick would work? Are there any other ways anybody can think of to pass a list of arbitrary size to a kernel?
EDIT: The buffer trick does NOT work. I have tested it. I am not sure why exactly, but the pointers on the device don't seem to stay the same from one invocation to another.
Passing an array of pointers to a kernel does not make sense, because the pointers would point to host memory, which the OpenCL device does not know anything about. You would have to transfer the data to a device buffer and then pass the buffer pointer to the kernel. (There are some more complicated options with mapped/pinned memory and especially in the case of APUs, but they don't change the main fact, that host pointers are invalid on the device).
I can suggest one approach, although I have never actually used it myself. If you have a large device buffer preallocated, you could fill it up with images back to back from the host. Then call the kernel with the buffer and a list of offsets as arguments.
This is easy, and I've done it. You don't use pointers, so much as references, and you do it like this. In your kernel, you can provide two arguments:
kernel void myKernel(      // a kernel needs a name; "myKernel" is a placeholder
    global int *rowoffsets,
    global float *data
) {
Now, on the host, you simply take your 2D data, copy it into a 1D array, and put the index of the start of each row into rowoffsets.
For the last row, you add one additional offset, pointing one past the end of data.
Then in your kernel, to read the data from a row, you can do things like:
kernel void myKernel(      // "myKernel" is a placeholder name
    global int *rowoffsets,
    global float *data,
    const int N
) {
    for (int n = 0; n < N; n++) {
        const int rowoffset = rowoffsets[n];
        const int rowlen = rowoffsets[n + 1] - rowoffset;
        for (int col = 0; col < rowlen; col++) {
            // do stuff with data[rowoffset + col] here
        }
    }
}
Obviously, how you're actually going to assign the data to each workitem is up to you, so whether you're using actual loops, or giving each workitem a single row and column is part of your own application design.

Performance of std::vector<Test> vs std::vector<Test*>

In a std::vector of a non-POD data type, is there a difference between a vector of objects and a vector of (smart) pointers to objects? I mean a difference in how the compiler implements these data structures.
E.g.:
class Test {
std::string s;
Test *other;
};
std::vector<Test> vt;
std::vector<Test*> vpt;
Could it be that there is no performance difference between vt and vpt?
In other words: when I define a vector<Test>, internally will the compiler create a vector<Test*> anyway?
In other words: when I define a vector&lt;Test&gt;, internally will the compiler create a vector&lt;Test*&gt; anyway?
No, this is not allowed by the C++ standard. The following code is legal C++:
vector<Test> vt;
Test t1; t1.s = "1"; t1.other = NULL;
Test t2; t2.s = "2"; t2.other = NULL;
vt.push_back(t1);
vt.push_back(t2);
Test* pt = &vt[0];
pt++;
Test q = *pt; // q is now equal to t2
In other words, a vector's storage is guaranteed to be contiguous (accessing it like a C array is legal), so the compiler effectively has to store the elements internally as an array, and may not just store pointers.
But beware that the array pointer is valid only as long as the vector is not reallocated (which normally only happens when the size grows beyond capacity).
In general, whatever the type being stored in the vector is, instances of that may be copied. This means that if you are storing a std::string, instances of std::string will be copied.
For example, when you push a Type into a vector, the Type instance is copied into an instance housed inside the vector. Copying a pointer will be cheap, but, as Konrad Rudolph pointed out in the comments, this should not be the only thing you consider.
For simple objects like your Test, copying is going to be so fast that it will not matter.
Additionally, with C++11, moving allows avoiding creating an extra copy if one is not necessary.
So in short: A pointer will be copied faster, but copying is not the only thing that matters. I would worry about maintainable, logical code first and performance when it becomes a problem (or the situation calls for it).
As for your question about an internal pointer vector: no, vectors are implemented as arrays that are periodically resized when necessary. You can find GNU's libstdc++ implementation of vector online.
The answer gets a lot more complicated at a lower than C++ level. Pointers will of course have to be involved since an entire program cannot fit into registers. I don't know enough about that low of level to elaborate more though.
