I'm not quite sure why this is happening. I have a program:
std::ifstream file(path, std::ios::binary);
file.seekg(0, file.end);
int length = file.tellg();
file.seekg(0, file.beg);
char* buffer = new char[length];
file.read(buffer, length);
file.close();
It is running well, and reads data correctly. However, if I replace buffer's declaration with:
char buffer[length];
then I get a segmentation fault. The size of data is around a few megabytes. What is the difference?
The difference is that "a few megabytes" is too large to put on your process's "stack", where you are now putting the data.
(Furthermore, you are relying on a GCC extension; length, as a variable whose value is not known until runtime, cannot legally be used as the size of a "normal" array like this.)
Put your code back the way it was, and don't forget to delete[] the buffer when you are done using it.
Actually, this would be better:
std::vector<char> buffer(length);
file.read(&buffer[0], length);
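For instance, a complete sketch along those lines might look like this (error checking omitted; the read_file name is mine):
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

std::vector<char> read_file(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    file.seekg(0, std::ios::end);
    std::streamsize length = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> buffer(static_cast<std::size_t>(length));
    file.read(buffer.data(), length); // the vector releases its storage automatically
    return buffer;
}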
I was looking at the Microsoft site about single inheritance. In the example given (the code is copied at the end), I am not sure how memory is allocated for Name. Memory is allocated for 10 objects, but Name is a pointer member of the class. I guess I can assign a constant string, something like
DocLib[i]->Name = "Hello";
But we cannot change this string. In such a situation, do I need to allocate memory for Name as well, using the new operator in the same for loop, something like
DocLib[i]->Name = new char[50];
The code from the Microsoft site is here:
// deriv_SingleInheritance4.cpp
// compile with: /W3
struct Document {
    char *Name;
    void PrintNameOf() {}
};

class PaperbackBook : public Document {};

int main() {
    Document * DocLib[10]; // Library of ten documents.
    for (int i = 0 ; i < 10 ; i++)
        DocLib[i] = new Document;
}
In short, yes. Name is just a pointer to a char (or char array). Instantiating the structure does not allocate space for that char (or array). You have to allocate the space yourself and make the pointer (Name) point to it. In the following case
DocLib[i]->Name = "Hello";
the memory (for "Hello") is allocated in the read-only data section of the executable (at load time) and your pointer just points to that location. That's why it's not modifiable.
Alternatively you could use string objects instead of char pointers.
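If you do stay with char pointers, a minimal sketch of allocating (and releasing) writable storage for Name might look like this (not from the Microsoft page; the buffer size of 50 is only illustrative):
#include <cstring>

struct Document {
    char *Name;
    void PrintNameOf() {}
};

int main() {
    Document *DocLib[10];                       // library of ten documents
    for (int i = 0; i < 10; i++) {
        DocLib[i] = new Document;
        DocLib[i]->Name = new char[50];         // writable storage owned by us
        std::strcpy(DocLib[i]->Name, "Hello");  // contents can now be modified
    }
    for (int i = 0; i < 10; i++) {
        delete[] DocLib[i]->Name;               // release what we allocated
        delete DocLib[i];
    }
    return 0;
}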
I'm having trouble understanding the offset variable provided to the data applier for a dispatch_io_read function call. I see that the documentation claims the offset is the logical offset from the base of the data object. Looking at the source code for the dispatch_data_apply function confirms that this variable always starts from 0 for the first apply for a data chunk, and then is simply the sum of the range lengths.
I guess I don't understand the purpose of this variable, then. I had originally assumed it was the offset for the entire read, but it's not. It seems you have to keep track of the bytes already read yourself and offset by that amount to properly assemble a read in libdispatch.
// Outside the dispatch_io_read handler...
char * currBufferPosition = destinationBuffer;

// Inside the dispatch_io_read handler...
dispatch_io_read(channel, fileOffset, bytesRequested, queue, ^(bool done, dispatch_data_t data, int error) {
    // Note: Real code would handle error variable.
    dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset, const void * buffer, size_t size) {
        memcpy(currBufferPosition, buffer, size);
        currBufferPosition += size;
        return true;
    });
});
My question is: Is this the right way of using the data returned by dispatch_data_apply? And if so, what is the purpose of the offset variable passed into the applier handler? The documentation does not seem clear about this to me.
A dispatch_data_t is a sequence of bytes. The bytes can be stored in multiple non-contiguous byte arrays. For example, bytes 0-6 can be stored in one array, and bytes 7-12 in a separate array somewhere else in memory.
For efficiency, the dispatch_data_apply function lets you iterate over those arrays in-place (without copying out the data). On each call to your “applier”, you receive a pointer to one of the underlying storage arrays in the buffer argument. The size argument tells you how many bytes are in this particular array, and the offset argument tells you how (logically) far the first byte of this particular array is from the first byte of the entire dispatch_data_t.
Example:
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_data_t aData = dispatch_data_create("Hello, ", 7, nil, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
        dispatch_data_t bData = dispatch_data_create("world!", 6, nil, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
        dispatch_data_t cData = dispatch_data_create_concat(aData, bData);
        dispatch_data_apply(cData, ^bool(dispatch_data_t _Nonnull region, size_t offset, const void * _Nonnull buffer, size_t size) {
            printf("applying at offset %lu, buffer %p, size %lu, contents: [%*.*s]\n", (unsigned long)offset, buffer, (unsigned long)size, (int)size, (int)size, buffer);
            return true;
        });
    }
    return 0;
}
Output:
applying at offset 0, buffer 0x100407970, size 7, contents: [Hello, ]
applying at offset 7, buffer 0x1004087b0, size 6, contents: [world!]
Okay, so that's what the offset argument is for. Now how does this relate to dispatch_io_read?
Well, dispatch_io_read doesn't pass you the same bytes twice. Once it has passed you some bytes, it discards them. The next time it passes you bytes, they are fresh, newly-read bytes. If you want the old bytes, you have to keep them around yourself. If you want to know how many old bytes you were given before the current call to your callback, you have to keep that count yourself. That is not what the offset argument is for.
It's possible that when dispatch_io_read calls you, it passes you a dispatch_data_t that has stored its bytes in multiple non-contiguous arrays, so when you call dispatch_data_apply on it, your applier gets called multiple times, with different offsets and buffers and sizes. But those calls only get you access to the fresh new bytes for the current call to your callback, not to old bytes from any prior calls to your callback.
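So a read that assembles everything into one destination buffer has to combine both pieces of bookkeeping: offset places a region inside the current data object, and your own running total places that data object inside the whole read. A minimal sketch, in the same block-based style as the question (it assumes an Apple toolchain with blocks enabled and a destinationBuffer at least bytesRequested bytes large; the function and variable names are mine):
#include <dispatch/dispatch.h>
#include <string.h>

void read_into_buffer(dispatch_io_t channel, off_t fileOffset, size_t bytesRequested,
                      dispatch_queue_t queue, char *destinationBuffer) {
    // Running total of bytes already copied out of earlier handler invocations.
    __block size_t totalCopied = 0;

    dispatch_io_read(channel, fileOffset, bytesRequested, queue,
                     ^(bool done, dispatch_data_t data, int error) {
        if (error != 0)
            return; // real code would report the error
        dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset,
                                        const void *buffer, size_t size) {
            // offset places this region inside data; totalCopied places data
            // inside the overall read.
            memcpy(destinationBuffer + totalCopied + offset, buffer, size);
            return true;
        });
        totalCopied += dispatch_data_get_size(data);
    });
}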
I get the warning "cast increases required alignment of target type" while compiling the following code for ARM.
char data[2] = "aa";
int *ptr = (int *)(data);
I understand that the alignment requirement for char is 1 byte and that of int is 4 bytes and hence the warning.
I tried to change the alignment of char by using the aligned attribute.
char data[2] __attribute__((aligned (4)));
memcpy(data, "aa", 2);
int *ptr = (int *)(data);
But the warning doesn't go away.
My questions are:
1. Why doesn't the warning go away?
2. Since ARM generates a hardware exception for misaligned accesses, I want to make sure alignment issues don't occur. Is there any other way to write this code so that the alignment issue won't arise?
By the way, when I print alignof(data), it prints 4, which means the alignment of data has changed.
I'm using gcc version 4.4.1. Is it possible that gcc gives the warning even though the alignment was changed using the aligned attribute?
I don't quite understand why you would want to do this... but the problem is that the string literal "aa" isn't stored at an aligned address. The compiler likely optimized away the variable data entirely, and therefore only sees the code as int* ptr = (int*)"aa"; and then gives the misalignment warning. No amount of fiddling with the data variable will change how the literal "aa" is aligned.
To avoid the literal being allocated on a misaligned address, you would have to tweak around with how string literals are stored in the compiler settings, which is probably not a very good idea.
Also note that it doesn't make sense to have a pointer to non-constant data pointing at a string literal.
So your code is nonsense. If you still, for reasons unknown, insist on having an int pointer to a string literal, I'd use some kind of work-around, for example like this:
typedef union
{
    char arr[3];
    int dummy;
} data_t;

const data_t my_literal = { .arr = "aa" };
const int* strange_pointer = (const int*)&my_literal;
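Another common way around alignment problems (not in the original answer, shown here only as a sketch) is to avoid the pointer cast entirely and memcpy the bytes into a properly aligned object; the compiler then generates accesses that are safe on ARM. This assumes at least sizeof(int) valid bytes at the source, which the two-byte "aa" above would not provide:
#include <string.h>

/* Assumes at least sizeof(int) valid bytes at src. */
int load_int(const char *src)
{
    int value;                        /* naturally aligned local */
    memcpy(&value, src, sizeof value);
    return value;
}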
I ran into a problem with a dirty shutdown of my program because of a shared pointer. I found a solution, but I'm not sure if I have the right answer.
This is the minimalistic example:
double var;
boost::shared_ptr<const double> ptr1 (&var); // creates an invalid pointer on program exit
boost::shared_ptr<const double> ptr2 (new double); // works fine
int main (int argc, char **argv)
{
    return 0;
}
Here is my answer, which I would like to verify:
In the case of ptr1 the pointed-to object will be deleted before the pointer, which then points to an invalid address. But in the case of ptr2 the "smartness" of the shared pointer handles the issue above.
True?
Small extra question:
Is there a way to make ptr1 work (I tried reset()), or is that bad programming practice (if so, why)?
Thanks for the clarification!
In the first case, the pointed-to object isn't dynamically allocated, so the shared_ptr should not attempt to destroy it. That can be done by using a custom no-op deleter functor:
http://www.boost.org/doc/libs/1_51_0/libs/smart_ptr/sp_techniques.html#static
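A minimal sketch of that technique, following the linked page (the struct name is illustrative):
#include <boost/shared_ptr.hpp>

struct null_deleter {
    void operator()(const void *) const {} // deliberately does nothing
};

double var;
boost::shared_ptr<const double> ptr1(&var, null_deleter()); // never deletes &var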
I'm trying to get the status of a text field in my application, but I can't get it to work. I'm using SendMessage with WM_GETTEXT, and I save the content to a char *.
I output the char * to a file, but I only get "D" back. This is what I have now:
LRESULT result;
char * output = (char*)malloc(1024);
result = SendMessage(hwnd,WM_GETTEXT,1024,(LPARAM)output);
ofstream file("test.txt");
file << *output;
file.close();
delete [] output;
Pointer concepts:
file << *output; prints only the first character of the string.
file << output; prints the entire string.
C# code:
public const uint WM_GETTEXT = 0xD;
const int bufferSize = 10000;
StringBuilder sb = new StringBuilder(bufferSize);
SendMessageGetText(handle, WM_GETTEXT, new UIntPtr(bufferSize), sb);
Console.WriteLine(sb.ToString());
Works properly for me!
Sophia's answer is correct. However, the default now for a Visual Studio project is to create a Unicode project. You will only get the first letter if your project is Unicode and not MBCS.
Have you examined the buffer returned from WM_GETTEXT to verify it has the entire string?
If not, try declaring your output variable as TCHAR* (to be generic) or as a wchar_t* and see what results you get in the buffer.
p.s. It is bad form to allocate memory with malloc and release it with delete. You should use either malloc/free pairs or new/delete pairs. An even safer way to allocate a char buffer is to use std::string, or std::wstring for a wide string.
p.p.s. Try making sure your project settings are for a multibyte project and not a Unicode project. Then everything in Sophia's answer will work.
One more thing... Just use the GetWindowText() API instead of the SendMessage stuff. That's what it's there for, so you don't have to go through the rigmarole of casting a pointer to an LPARAM or WPARAM. It's more type-safe and will give you a compile-time error (better than a runtime error) if your types don't match up--especially with Unicode/MBCS and wchar_t/char.
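For example, a minimal sketch of that route might look like this (hwnd is assumed to be a valid window handle; the wide-character variant is used explicitly to sidestep the Unicode/MBCS project setting):
#include <windows.h>
#include <fstream>
#include <vector>

void dump_window_text(HWND hwnd)
{
    std::vector<wchar_t> text(1024);
    int len = GetWindowTextW(hwnd, text.data(), static_cast<int>(text.size()));

    // The wide stream narrows the text using the current locale when writing.
    std::wofstream file("test.txt");
    file.write(text.data(), len);
}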