I, and a few thousand other people, are getting an error thrown by the Microsoft Visual C++ Runtime:
Which, for the benefit of search engines, says:
Microsoft Visual C++ Runtime Library
Buffer overrun detected!
Program: %s
A buffer overrun has been detected which has corrupted the program's
internal state. The program cannot safely continue execution and must
now be terminated.
Now I understand what a buffer overrun is, and why it is a bad thing. Given Microsoft's new emphasis on "it's just broken", the extra buffer checks in MSVCRT can be a nice thing.
On the other hand, I don't care. It's not that the program can't continue; it's that the program cannot safely continue. Well, I'd rather be unsafe, because it's better than nothing. I enjoy living dangerously.
So can anyone suggest anything? I was thinking of things like:
a registry key to prevent MSVCRT from halting execution
running the application in compatibility mode with a previous operating system (prior to Windows 7)
adding an assembly manifest to the executable folder so that it uses an older version of the MSVCRT, one which doesn't perform this overflow checking
a version number, or download location, of a copy of MSVCRT that doesn't have the overflow checking
I tried searching the support site of the company that wrote the Microsoft Visual C++ Runtime Library, but they make no mention of which functions could be overflowing, or of how to disable overflow checking.
There is an option for this. Set it to No:
Project Properties -> Configuration Properties -> C/C++ -> Code Generation -> Buffer Security Check.
This corresponds to the /GS (Buffer Security Check) compiler option:
Detects some buffer overruns that overwrite the return address, a common technique for exploiting code that does not enforce buffer size restrictions. This is achieved by injecting security checks into the compiled code.
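As an illustration, here is a minimal sketch of the kind of bug the check catches (the function and buffer are hypothetical): a copy that runs past the end of a stack buffer. Built with /GS (the default), the runtime detects the smashed stack cookie when the function returns and terminates the process with a message like the one above.

#include <cstdio>
#include <cstring>

// Hypothetical example: strcpy performs no bounds check, so a long source
// string overruns the eight-byte stack buffer and corrupts the /GS cookie.
void copy_name(const char* src)
{
    char buf[8];
    std::strcpy(buf, src);  // overruns buf whenever src is longer than 7 chars
    std::puts(buf);
}

int main()
{
    copy_name("this string is far longer than eight bytes");
}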
Is this happening in your code, or actually in the library? If it's in the library, I know you say you want to just ignore the error, but what would you do if it were an access violation that crashed the process?
You should treat it the same way, because logically it's the same thing. It's just that the CRT is crashing the process instead of the OS.
But if you're using the debug build of the library, you might get better (?) results using the release build (maybe it will just crash without the dialog-box notification).
If it's in your code, you can disable the overflow check with the /GS- compiler option. But you should really fix the bug.
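And a minimal sketch of fixing the hypothetical bug above instead of reaching for /GS-: bound every copy to the destination size.

#include <cstdio>
#include <cstring>

void copy_name(const char* src)
{
    char buf[8];
    std::strncpy(buf, src, sizeof buf - 1);  // copy at most 7 characters
    buf[sizeof buf - 1] = '\0';              // guarantee termination
    std::puts(buf);
}

int main()
{
    copy_name("this string is far longer than eight bytes");
}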
Related
I've switched over to using the new Modules in Visual Studio (2019 - 16.10.4). Everything compiles and runs just fine if I do a full compilation (i.e. clean the solution first). Very often when I compile normally, though, I get a crash on initialisation. The error is Application Error (0xC0000005) and appears on startup.
In x86 there's no valid call stack to look at at all. In x64 the debugger points to a valid call stack, at exe_common.inl on a line that says __scrt_current_native_startup_state = __scrt_native_startup_state::initialized;. I realise that's a red herring, and the actual line is _initterm(__xc_a, __xc_z);, which I understand is the global memory initialisation.
Is anyone aware of such issues with Modules in VC? The internet doesn't have much information on this case, which leads me to believe it has to do with various configuration options that I might be using. Any pointers? I'm really hoping I won't have to stop using Modules, which I was really looking forward to. There is also a slight chance that a recent update to Visual Studio, or perhaps something obscure I did in the conversion to Modules, is the culprit.
It's just strange that it's guaranteed to work with a full compilation, which would lead me to think that the compiler is to blame.
I am developing an algorithm that uses ARM NEON instructions. I am writing the code in an assembler file (.S, no inline asm).
My question is: what is the best way to debug it, i.e. to view registers, memory, etc.?
Currently, I am using Android NDK to compile and my Android phone to run the algorithm.
Poor man's debug solutions...
You can use gdb / gdbserver to remotely control execution of applications on an Android phone. I'm not giving full details here because they change all the time, but you can start with this answer or do a quick search on the Internet. Learning GDB might seem to have a steep curve, but the material on the web is plentiful; you can easily find something to your taste.
Single-stepping an ARM core via software tools is hard; that's why the ARM ecosystem is full of expensive tools and extra hardware.
The trick I use is to insert BRK instructions manually into the assembly code. BRK is the self-hosted debug breakpoint instruction. When the core sees it, it stops and informs the OS, the OS notifies the debugger, and control passes to the debugger. At that point you can inspect the contents of registers and probably even change them. The last step is to make your process continue: since the PC is still at the breakpoint instruction, you must advance the PC to the instruction after the BRK.
Since you mentioned that you use .S files rather than .s files, you can let gcc do the preprocessing/macro work, which makes enabling and disabling BRK less of an issue, as in the sketch below.
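For example, a hypothetical macro along these lines (assuming AArch64 and the GNU toolchain, which runs .S files through the C preprocessor) lets you compile the breakpoints in or out with a single -D flag:

/* Hypothetical sketch: build with -DENABLE_BRK to keep the breakpoints,
   or without it to strip them out. */
#ifdef ENABLE_BRK
#define DEBUG_BRK brk #0
#else
#define DEBUG_BRK
#endif

    .text
    .global my_neon_kernel
my_neon_kernel:
    DEBUG_BRK                     // gdb stops here when ENABLE_BRK is defined
    add     v0.4s, v0.4s, v1.4s   // stand-in for the real NEON work
    ret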
The big downside of this way of working is turnaround time. If there is a certain point you want to investigate with gdb, you must make sure there is a BRK instruction there, and that will probably require another build/push/debug cycle.
Breakpoints are one of the coolest features supported by most popular debuggers like GDB. But how does a breakpoint work? What code modifications does the compiler make to achieve a breakpoint? Are there any special hardware features used to support breakpoints?
The compiler does not need to "modify" the binary in any way to support breakpoints. However, it is important that:
The compiler includes enough information in the executable (not in the code itself, but in special sections of the same file) for the debugger to relate the source being debugged to the machine code. One typical thing the debugger needs to know in order to set breakpoints (unless you specify addresses directly) is where (at which address) functions and lines of source code start within the machine code.
The code is not optimized by the compiler in any way that makes it impossible to relate source and machine code. Typically you will want to debug code that was not optimized, or code where only carefully selected optimizations were performed.
The rest of the work is then performed by the debugger itself.
Software breakpoints don't necessarily need special hardware features. The debugger here relies on modifying the original binary (the copy that is loaded into memory). When you set a breakpoint, the debugger places a special instruction at its location. This special instruction must somehow let the debugger detect that it is executing: it can be an instruction that causes an interrupt/exception the debugger can hook onto, or one that hands control to a debug unit. If this runs under an OS, the OS needs to support modifying a running program (with something like ptrace poke/peek). The downside of software breakpoints is that the debugger must be able to modify the running program, which is not possible if the program runs from some kind of read-only memory (quite common in the embedded world).
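A minimal sketch of that mechanism on x86-64 Linux, assuming the usual fork/PTRACE_TRACEME setup has already produced a stopped child (the function and variable names are hypothetical):

#include <sys/ptrace.h>
#include <sys/types.h>

/* Plant a software breakpoint: save the original word at addr, then patch
   its low byte with the one-byte INT3 opcode (0xCC). Executing INT3 raises
   SIGTRAP, which the debugger observes via waitpid(); writing the saved
   word back removes the breakpoint. */
long set_breakpoint(pid_t child, void *addr)
{
    long orig = ptrace(PTRACE_PEEKTEXT, child, addr, 0);
    long patched = (orig & ~0xFFL) | 0xCC;
    ptrace(PTRACE_POKETEXT, child, addr, (void *)patched);
    return orig;  /* caller keeps this to restore the original code later */
}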
Hardware breakpoints (which need to be supported by the CPU) implement similar behavior without modifying the program binary. This is CPU-specific, but it usually lets you at least define a program address at which execution should stop. The CPU continuously compares the current PC with these breakpoint addresses and breaks execution when one matches. The number of such breakpoints is always limited.
To set a breakpoint, we first have to add some special information into the binary. We use the -g flag while compiling the C source files to include this info. The software debugger actually uses this info to place breakpoints. The best example of hardware breakpoint support I have experienced is in VxWorks.
Basically, at the breakpoint the processor halts. So internally, anything that raises an exception in the processor can be used to implement a software breakpoint, while a hardware breakpoint works by matching an address stored in hardware registers to cause the exception. Hardware breakpoints are thus very powerful, but they are heavily architecture-dependent.
A very good explanation is here:
What is the difference between hardware and software breakpoints?
A good intro with processor-related information is given here:
http://processors.wiki.ti.com/index.php/How_Do_Breakpoints_Work
We have an application built with Visual C++ 2005, and one customer has reported that he's getting this runtime error:
Microsoft Visual C++ Runtime Library
Runtime Error!
Program: [path to our application]
R6002
- floating point support not loaded
According to Microsoft (on this page), the possible reasons for this are:
the machine does not have an FPU (not the case here: the customer has an Intel Core 2 Duo CPU, and I haven't seen a machine without an FPU since the 486SX)
printf or scanf is used with a floating-point format specification but there are no FP variables in the program (our app contains FP variables but I'm pretty sure we never use printf or scanf with FP formats)
Something to do with FORTRAN (no FORTRAN code in our app)
Also, the error is occurring while they're using our application (specifically, just after they select a file to be processed), not when the application starts up.
I realise this is a long shot, but has anyone seen anything like this anywhere before? Google was pretty unhelpful (there were lots of unsupported claims that it was a symptom of some kind of virus infection but very little apart from that).
Any suggestions gratefully received :-)
Are you linking a static version of the CRT? If so, you need to have floating-point variables in the binary that calls printf(). And these variables have to actually be used (i.e. not optimized out by the compiler).
Another possibility is a race between the CRT initialization and the code that uses these FP routines, but that would be hard to produce.
R6002 can be caused by printf trying to print a string that contains a percent sign.
The most likely root cause of such a printf failure is a program that manipulates arbitrary files and prints their names. Amazingly, there really ARE people who put percent signs in file names! (Yes, I realize that is technically legal.)
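A minimal sketch of that failure mode (the file name is made up): passing the name straight to printf makes any % sequence in it act as a format specifier, and %f in particular demands a floating-point argument that was never passed.

#include <stdio.h>

int main(void)
{
    const char *filename = "sales up 50%f.txt";  /* hypothetical file name */

    /* Wrong: the file name itself becomes the format string. */
    /* printf(filename); */

    /* Right: user data goes through an explicit format. */
    printf("%s\n", filename);
    return 0;
}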
printf("%f\n", (float)rand() / RAND_MAX);
I experienced the same runtime error in a program compiled with the VS2010 command-line cl.
The reported error occurred without the (float) cast and disappeared when I added it.
I've just gotten an application to compile and run by telling my VS 2008 project to ignore libc.lib in the Linker -> Input section of the project properties. Before I did this, VS gave me the old "fatal error LNK1104: cannot open file 'LIBC.lib'" message.
I'm not sure how this app compiles if I'm ignoring the crt, but that's obviously my ignorance speaking.
I checked the C/C++ project settings, and the runtime library setting reads Multi-threaded Debug DLL (the /MDd flag)-- so I must be linking to a VC80*.dll somewhere.
I'm not sure how, though. I've always been confused about the CRT settings in Visual Studio: static or debug, multithreaded or not. From reading this site and Google, my understanding has improved a bit-- if you use the DLLs you don't have as much code bloat, things are linked when the program needs them, and CRT updates can be applied by overwriting the DLL. The usual reasons for using a DLL, in other words.
But what's with the multi- vs single-threaded versions? If I happen to link with a static version, I can't use Windows threads or pthreads-- is that what that means?
One other thing I've heard about but never quite followed-- there are problems with allocating an object in one DLL and deallocating it in another, or something like that, to do with cross-allocation. I'm not explaining that well (because I don't understand it), but I hope you get my point and can explain what's going on there. Does it mean that in my program I can't call new ObjectX() on a class that lives in a DLL? It can't mean that, can it?
Thanks everyone!
But what's with the multi- vs single-threaded versions? If I happen to link with a static version, I can't use Windows threads or pthreads-- is that what that means?
There are no (native) pthreads on Windows - so it's really Windows threading, but that's the basic idea. However, VS 2008 doesn't support the single-threaded versions - they have been dropped, so if you're using VC++ 2008, you'll always use the multi-threaded VC++ runtime libraries.
One other thing I've heard about but never quite followed-- there are problems with allocating an object in one DLL and deallocating it in another, or something like that, to do with cross-allocation. I'm not explaining that well (because I don't understand it), but I hope you get my point and can explain what's going on there. Does it mean that in my program I can't call new ObjectX() on a class that lives in a DLL? It can't mean that, can it?
This is more an issue of allocating objects in a DLL that was compiled against a different VC++ runtime than the calling project. If you compile a DLL with VS2005 and then try to do "new MyClass();" within a VC 2008 project, you can run into problems.
This is due to changes made between VC++ runtime versions. Different versions are free to manage memory allocations and deallocations in their own way. Trying to allocate or delete across a "runtime library" boundary tends to cause bad things to happen - quite often, crashes.
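A common way around this, sketched below with hypothetical names, is to have the DLL export matched create/destroy functions, so the object is always allocated and freed by the same CRT heap no matter which runtime the EXE was built against.

struct MyClass { int value; };  /* shared header, hypothetical */

/* --- inside the DLL: both new and delete run against the DLL's CRT --- */
extern "C" __declspec(dllexport) MyClass* CreateObject()
{
    return new MyClass{};
}

extern "C" __declspec(dllexport) void DestroyObject(MyClass* p)
{
    delete p;
}

/* --- inside the EXE: pair them, and never delete the pointer directly ---
   MyClass* obj = CreateObject();
   DestroyObject(obj);
*/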