First I'd like to point out that I'm using the GNU GCC compiler, with Code::Blocks as my IDE so I don't have to type all the compiler options into the Windows command prompt. To be more specific about my compiler, the line that shows up at the bottom of Code::Blocks when I successfully compile is
mingw32-g++.exe -std=c++11 -g
Anyways, my question involves using the delete operator to release dynamically allocated memory. When I compile this code snippet:
int* x;
x = new int;
delete x;
delete x;
I don't get any warnings, errors, or crashes. According to the book I'm learning C++ from, releasing a pointer to a dynamically allocated memory chunk can only be done once; after that the pointer is invalid, and if you use delete on the same pointer again there will be problems. However, I don't get this problem.
Likewise, if I pass an object by value to a function so that it is shallow copied, I get no error even though I don't have a copy constructor to ensure a deep copy (the object holds raw pointers). When the function returns, the shallow copy goes out of scope and invokes its destructor (which calls delete on a pointer). When main returns, the original object goes out of scope, its destructor is invoked, and that same shallow-copied pointer is deleted again. But I have no problems.
I tried finding documentation online about the compiler I'm using and couldn't find any. Does this mean that the mingw32 compiler supplies some sort of default copy constructor, so that I don't have to worry about writing my own?
The compiler documentation is not likely to be helpful in this case: If it exists, it is likely to list exceptions to the C++ spec. It's the C++ spec that you need here.
When you delete the same pointer twice, the result--according to the C++ spec--is undefined. The compiler could do anything at all and have it meet spec. The compiler is allowed to recognize the fault and give an error message, or not recognize the fault and blow up immediately or sometime later. That your compiler appeared to work this time is no indication that the double delete is safe. It could be mucking up the heap in a way that results in a seg fault much later.
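One defensive idiom worth knowing (a sketch of common practice, not something the spec requires): deleting a null pointer is defined to be a no-op, so setting the pointer to nullptr immediately after delete makes an accidental second delete harmless:

int* x = new int;
delete x;
x = nullptr; // deleting a null pointer is a well-defined no-op
delete x;    // safe: does nothing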
If you do not define a copy constructor, C++ defines one for you. The default copy constructor does a memberwise copy.
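To make that concrete, here is a minimal sketch (the class and function names are made up) of how the memberwise copy leads to a double delete:

struct Holder {
    int* p;
    Holder() : p(new int(42)) {}
    ~Holder() { delete p; } // every copy deletes the same pointer
    // No user-defined copy constructor: the compiler-generated one
    // copies the pointer value, not the int it points to.
};

void useByValue(Holder h) {} // h is a shallow copy of the argument

int main()
{
    Holder original;
    useByValue(original); // h's destructor deletes original.p here
    return 0;             // original's destructor deletes it again: undefined behavior
}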
When you have the same object pointed to by multiple pointers, as you do here, consider using a smart pointer such as std::shared_ptr.
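For example, a sketch of the class above reworked with std::shared_ptr (again, the name is made up): the compiler-generated copy now shares ownership, and the int is deleted exactly once, when the last copy goes away:

#include <memory>

struct SafeHolder {
    std::shared_ptr<int> p = std::make_shared<int>(42);
    // The compiler-generated copy constructor is now correct:
    // copies share ownership instead of duplicating a raw pointer.
};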
While studying the ELF format, I can see that an object file has a symbol for each function, and that the corresponding symbol table entry has an st_size field giving the size of the function.
The problem is that the executable was still created successfully even after I changed the st_size of a specific function in the object file and linked it. The following is the test code I used.
// In main.c
void myprintf(const char *str); /* declared here so main.c compiles without an implicit-declaration warning */

int main(void)
{
    myprintf("TEST");
}
// In log.c
#include <stdio.h>

void myprintf(const char *str)
{
    printf("%s", str); /* print via "%s" rather than using str as a format string */
}
In the code above, I changed the st_size value of the myprintf function in log.o, then linked log.o and main.o. By default the st_size value was 0x13; I tested changing it to 0x00 and to 0x40. But the myprintf function in the resulting a.out still works fine. How does the linker determine the size of each function?
Well, firstly I'd like to begin with an old saying: it is more likely for humanity to find a theory of everything and unify quantum mechanics with general relativity than to understand the optimizations and decision tree of a linker.
Back to our business: I've played with this on my machine, and the only reasonable explanation I can come to is that the linker doesn't truly need the size of a function in order to combine raw machine instructions from different compilation units into a single executable. Let's discuss why.
Let's say you have two compilation units, each containing three consecutive functions. Why would one need to know the size of each function? Isn't the fixed resolved virtual address granted to that function by the linker enough for relocation? The true answer is: it is sufficient to have nothing but the offset of a function within the object file to link different compilation units into one executable.
However, with that being said, some executable formats such as ELF don't hand you a ready-made offset for a function's machine code within a compilation unit; you must calculate it yourself, using the offset of the section within the ELF file and the size of each symbol entry in the section pointed to by the symbol table. This simply means that if you had, as I said earlier, two compilation units with three functions each, then after corrupting a size entry within the symbol table, the linker would corrupt the executable while resolving the compilation units into one, and the result would quickly give you segfaults. I've attempted this at home, and these are the results I received:
When corrupting the symbol table's size entry in a compilation unit with one function, nothing happens, as the entire text section's size is (for this matter) exactly that function's size, so the linker has no problem resolving it.
When doing the same thing in compilation units with three functions each, it corrupts my executable, as the linker starts copying text from corrupted offsets of one compilation unit into the final executable.
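For reference, here is a minimal C sketch of the calculation described above (simplified, and assuming the symbols are sorted by their position in the section; the function name is made up). It also shows why one corrupted st_size throws off every later offset:

#include <elf.h>
#include <stddef.h>

/* Find where a function's bytes start inside the file, following the
 * scheme described above: take the section's file offset and add the
 * st_size of every symbol that precedes the target in the same section.
 * If any preceding st_size is corrupted, every later result is wrong. */
static size_t function_file_offset(const Elf64_Shdr *text_shdr,
                                   const Elf64_Sym *syms, size_t nsyms,
                                   size_t target)
{
    size_t off = text_shdr->sh_offset;
    for (size_t i = 0; i < target && i < nsyms; i++)
        off += syms[i].st_size;
    return off;
}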
Generally speaking, if you were to use an executable format that offers the linker an immediate offset for each function within the object file, with no calculation from symbol sizes and the section's offset within the file, you'd probably end up with the same results even with more than one function per compilation unit, unless the linker performs some sanity test. In my opinion, the only reason a linker would need the size rather than such an offset is to clean a section of redundant functions or variables that nothing else references (link-time optimization), which then requires recalculating relocation offsets for the other referenced functions within that compilation unit, or recalculating relative jumps within that same compilation unit.
Hope this somehow answers your question. I'd be more than glad to help if you'd like a deeper demonstration of this.
When I compile 32-bit C code with GCC and the -fomit-frame-pointer option, the frame pointer (ebp) is not used unless my function calls a Windows API function that uses stdcall and takes at least one parameter.
For example, if I only use GetCommandLine() from the Windows API, which has no parameters/arguments, GCC will omit the frame pointer and use ebp for other things, speeding up the code and not having that useless prologue.
But the moment I call a stdcall Win32 function that accepts at least one argument, GCC completely ignores -fomit-frame-pointer and uses the frame pointer anyway, and on inspection the code is worse because it can't use ebp for general-purpose things. Not to mention I find the frame pointer quite pointless here: I want to compile for release and distribution, so why should I care about debugging? (If I want to debug, I'll just reproduce the bug in a debug build.)
My stack frame most certainly does NOT contain dynamic allocation like alloca. So the stack has a defined structure, yet GCC chooses the dumb method despite my options? Is there something I'm missing to force it not to use the frame pointer?
My second gripe is that it refuses to use push instructions for Win32 functions. Every other compiler I tried used push instructions to put arguments on the stack, resulting in much better, more compact code; not to mention it is the most natural way to pass arguments for stdcall. Yet GCC stubbornly uses mov instructions to write each argument at an offset relative to esp, because it wants to keep the stack pointer completely static. stdcall is made to be easy on the caller, and yet GCC completely misses the point and generates this crappy code when interfacing with it. What's worse, since the stack pointer is static, why does it still use a frame pointer? Just why?
I tried -mpush-args; it doesn't do anything.
I also noticed that if I make my stack frame big enough to exceed a page (4096 bytes), GCC adds a prologue call to a function that does nothing but a bitwise OR of the stack with zero every 4096 bytes (which changes nothing). I assume it's for touching the stack and automatically committing memory via page faults where the stack was only reserved? Unfortunately, it does this even if I set the initial commit of the stack (not just the reserve) high enough to hold my stack, and it shouldn't even be needed in the first place. Redundant code at its best.
Are these bugs in GCC? Or am I missing something in the options? Should I use something else?
I seriously hope I won't have to write an inline asm macro just to call stdcall functions with push instructions (which would avoid the frame pointer too, I guess). That sounds like overkill for something so basic that it should be in any compiler today. And yes, I use GCC 4.8.1, so not an ancient version.
As an extra question: is it possible to force GCC to not save registers on the stack in a function prologue? I use my own entry point with the -nostartfiles argument, because it is a pure Windows application and works just fine without the standard library startup. If I use __attribute__((noreturn)), GCC discards the epilogue that restores the registers, but it still pushes them in the prologue; I don't know if there's a way to stop it from saving registers in this entry-point function. Either way, not a big deal; it would just feel more complete, I guess. Thanks!
See the answer Force GCC to push arguments on the stack before calling function (using PUSH instruction)
I.e. try -mpush-args -mno-accumulate-outgoing-args. It may also require -mno-stack-arg-probe if gcc complains.
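For example (hypothetical source and output names; the flags are the ones discussed above), the full invocation might look like:

gcc -O2 -fomit-frame-pointer -mpush-args -mno-accumulate-outgoing-args -mno-stack-arg-probe -o app.exe main.c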
It looks like supplying -mpush-args -mno-accumulate-outgoing-args -mno-stack-arg-probe works, specifically the last one. Now the code is cleaner and more normal, like other compilers, and it uses PUSH for arguments; it's even easier to follow in OllyDbg this way.
Unfortunately, this FORCES the stupid frame pointer to be used, even in small functions that absolutely do not need it. Seriously, is there a way to absolutely force GCC to disable the frame pointer?!
I have a Go object whose address in memory I would like to keep constant. In C#, one can pin an object's location in memory. Is there a way to do this in Go?
An object on which you keep a reference won't move. There is no handle or indirection, and the address you get is permanent.
From the documentation:

"Note that, unlike in C, it's perfectly OK to return the address of a local variable; the storage associated with the variable survives after the function returns."
When you declare a variable, you can take its address using the & operator, and you can pass that address around.
tl;dr no - but it does not matter unless you're trying to do something unusual.
Worth noting that the accepted answer is partially incorrect.
There is no guarantee that objects are not moved - either on the stack or on the Go heap - but as long as you don't use unsafe this will not matter to you because the Go runtime will take care of transparently updating your pointers in case an object is moved.
If, OTOH, you use unsafe to obtain a uintptr, invoke raw syscalls, perform CGO calls, or otherwise expose the address (e.g. oldAddr := fmt.Sprintf("%p", &foo)), you should be aware that addresses can change, and that neither the compiler nor the runtime will magically patch things for you.
While currently the standard Go compiler only moves objects on the stack (e.g. when a goroutine stack needs to be resized), there is nothing in the Go language specification that prevents a different implementation from moving objects on the Go heap.
While there is (as yet) no explicit support for pinning objects on the stack or in the Go heap, there is a recommended workaround: manually allocate the memory outside of the Go heap (e.g. via mmap) and use a finalizer to automatically free that allocation once all references to it are dropped. The benefit of this approach is that memory allocated manually outside the Go heap will never be moved by the Go runtime, so its address will never change, but it will still be deallocated automatically when it's no longer needed, so it can't leak.
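A minimal sketch of that workaround (Unix-only; the pinnedInt type and newPinnedInt function are made-up names, and error handling is kept minimal):

package main

import (
	"fmt"
	"runtime"
	"syscall"
	"unsafe"
)

// pinnedInt wraps an int allocated outside the Go heap via mmap,
// so the Go runtime will never move it.
type pinnedInt struct{ p *int }

func newPinnedInt() (*pinnedInt, error) {
	mem, err := syscall.Mmap(-1, 0, int(unsafe.Sizeof(int(0))),
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		return nil, err
	}
	w := &pinnedInt{p: (*int)(unsafe.Pointer(&mem[0]))}
	// Unmap the memory once the wrapper itself becomes unreachable.
	runtime.SetFinalizer(w, func(w *pinnedInt) {
		syscall.Munmap(mem)
	})
	return w, nil
}

func main() {
	w, err := newPinnedInt()
	if err != nil {
		panic(err)
	}
	*w.p = 42
	fmt.Printf("pinned at %p, value %d\n", w.p, *w.p)
}

Because the backing memory never belongs to the Go heap, the runtime will never move it, while the finalizer still ties its lifetime to ordinary garbage collection.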
I'm working on the Pintos toy operating system at university, but there's a strange bug when using GCC 4.6.2. When I push my system call arguments (just three pushl instructions in inline assembly), some mysterious data also appears on the stack, and the arguments end up in the wrong order. Setting -fno-omit-frame-pointer gets rid of the strange data, but the arguments are still in the wrong order. GCC 4.5 works fine. Any idea what specific option could fix this?
NOTE: the problem still occurs with -O0.
Without a code example and a listing of the result from your different compilations, it's difficult to help you. But here are three possible causes for your problems:
Make sure you understand how arguments are pushed onto the stack. Arguments are pushed from last to first. This makes it possible for printf(char *, ...) to examine the first item to find out how many more there are. If you want to call int foo(int a, int b, int c), you'll need to push c, then b, and finally a.
Could the strange data on the stack be a return address or EFLAGS? I don't know Pintos and how system calls are made, but make sure that you understand the difference between CALL/RET and INT/IRET. INT pushes the flags onto the stack.
If your inline assembly has side effects, you might want to write volatile/__volatile__ in front of it. Otherwise GCC is allowed to move it when optimizing.
I need to see your code to better understand what's going on.
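To illustrate the last two points together, here is a minimal sketch of a three-argument syscall wrapper (the interrupt number 0x30 and the result-in-eax convention are assumptions based on how Pintos usually does it):

/* Push the arguments from last to first, then the syscall number, trap,
 * and clean the 16 bytes back off the stack ourselves. "volatile" keeps
 * GCC from moving or dropping the asm; "memory" and "cc" tell it that
 * the stack contents and the flags are touched. */
static inline int syscall3(int number, int a, int b, int c)
{
    int ret;
    __asm__ volatile (
        "pushl %[c]\n\t"
        "pushl %[b]\n\t"
        "pushl %[a]\n\t"
        "pushl %[num]\n\t"
        "int $0x30\n\t"
        "addl $16, %%esp"
        : "=a" (ret)
        : [num] "r" (number), [a] "r" (a), [b] "r" (b), [c] "r" (c)
        : "memory", "cc");
    return ret;
}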
The culprit was -fomit-frame-pointer, which has been enabled by default since 4.6.2. -fno-omit-frame-pointer fixed the issue.
Did you clean up the parameters on the stack after the syscall? GCC may not be aware that you touched the stack and may generate code that depends on the stack pointer value it expects.
-fno-omit-frame-pointer forces GCC to use ebp/rbp for accessing local data, but it just hides the actual problem.
We have an older, massive C++ application that we have been converting to support Unicode as well as 64 bits. The following strange thing has been happening:
Calls to registry functions and windows creation functions, like the following, have been failing:
hWnd = CreateSysWindowExW( ExStyle, ClassNameW.StringW(), Label2.StringW(), Style,
Posn.X(), Posn.Y(),
Size.X(), Size.Y(),
hParentWnd, (HMENU)Id,
AppInstance(), NULL);
ClassNameW and Label2 are instances of our own Text class which essentially uses malloc to allocate the memory used to store the string.
Anyway, when the functions fail and I call GetLastError, it returns the error code for "invalid memory access" (though I can inspect the string arguments just fine in the debugger). Yet if I change the code as follows, it works perfectly:
BSTR Label2S = SysAllocString(Label2.StringW());
BSTR ClassNameWS = SysAllocString(ClassNameW.StringW());
hWnd = CreateSysWindowExW( ExStyle, ClassNameWS, Label2S, Style,
Posn.X(), Posn.Y(),
Size.X(), Size.Y(),
hParentWnd, (HMENU)Id,
AppInstance(), NULL);
SysFreeString(ClassNameWS); ClassNameWS = 0;
SysFreeString(Label2S); Label2S = 0;
So what gives? Why would the original functions work fine with the arguments in local memory, but when used with Unicode, the registry functions require SysAllocString, and when used in 64-bit, the window creation functions also require SysAllocString'd string arguments? Our window procedure functions have all been converted to be Unicode, always, and yes, we use SetWindowLongW, call the correct default Unicode DefWindowProcW, etc. That all seems to work fine and handles and draws Unicode properly.
The documentation at http://msdn.microsoft.com/en-us/library/ms632679%28v=vs.85%29.aspx does not say anything about this. While our application is massive we do use debug heaps and tools like Purify to check for and clean up any memory corruption. Also at the time of this failure, there is still only one main system thread. So it is not a thread issue.
So what is going on? I have read that if string arguments are marshalled anywhere or passed across process boundaries, then you have to use SysAllocString/BSTR, yet we call lots of API functions, and there is lots of code out there that calls these functions with plain local strings.
What am I missing? I have tried Googling this, as someone else must have run into this, but with little luck.
Edit 1: Our StringW function does not create any temporary objects which might go out of scope before the actual API call. The function is as follows:
class Text {
public:
    const wchar_t* StringW () const
    {
        return TextStartW;
    }
private:
    wchar_t* TextStartW; // pointer to current start of text in DataArea
};
I have been running our application with the debug heap and memory checking and other diagnostic tools, and found no source of memory corruption, and looking at the assembly, there is no sign of temporary objects or invalid memory access.
BUT I finally figured it out:
We compile our code with /Zp1, which means byte-aligned structure packing, so data can end up on any byte boundary. SysAllocString in 64-bit always returns a pointer aligned on an 8-byte boundary. Presumably a 32-bit ANSI application goes through an API conversion layer to the underlying Unicode Windows DLLs, which would also align the pointer for you.
But if you use Unicode directly, you do not get the incidental pointer alignment that the conversion layer gives you, and if you use 64 bits, the situation of course gets even worse.
I added a method to our Text class which shifts the string pointer so that it is aligned on an eight-byte boundary, and voilà, everything runs fine!
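For reference, the alignment shift is essentially this (a sketch; the helper name is made up, and the buffer must be over-allocated by at least seven bytes so the shifted pointer stays in bounds):

#include <cstdint>

// Round a pointer up to the next 8-byte boundary.
static wchar_t* AlignTo8(wchar_t* p)
{
    uintptr_t v = reinterpret_cast<uintptr_t>(p);
    v = (v + 7) & ~static_cast<uintptr_t>(7);
    return reinterpret_cast<wchar_t*>(v);
}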
Of course the Microsoft people say it must be memory corruption and that I am jumping to the wrong conclusion, but there is evidence that this is not the case.
Also, if you use /Zp1 and include windows.h in a 64-bit application, the debugger will tell you sizeof(BITMAP)==28, but calling GetObject on a bitmap will fail and tell you it needs a 32-byte structure. So I suspect that some of Microsoft's API is inherently dependent on aligned pointers, and I also know that some optimized assembly (I have seen some from Fortran compilers) takes advantage of that and crashes badly if you ever give it unaligned pointers.
So the moral of all of this is: don't use "funky" compiler arguments like /Zp1. In our case we have to, for historical reasons, but the number of times this has bitten us...
Using a bit of psychic debugging, I'm going to guess that the strings in your application are pooled in a read-only section.
It's possible that CreateSysWindowExW is attempting to write to the memory passed in for the window class or title. That would explain why the calls work when the strings are allocated on the heap (SysAllocString) but not when they are used as constants.
The easiest way to investigate this is to use a low level debugger like windbg - it should break into the debugger at the point where the access violation occurs which should help figure out the problem. Don't use Visual Studio, it has a nasty habit of being helpful and hiding first chance exceptions.
Another thing to try is to enable appverifier on your application - it's possible that it may show something.
Calling a Windows API function does not cross the process boundary, since the various Windows DLLs are loaded into your process.
It sounds like whatever pointer StringW() is returning isn't valid when Windows tries to access it. I would look there: is it possible that the returned pointer went out of scope and was deleted shortly after the call?
If you share some more details about your string class, that could help diagnose the problem here.