Windows CRT and assert reporting (Abort, Retry, Ignore) - visual-studio

The Windows CRT in debug mode shows an "Abort, Retry, Ignore" dialog if the application hits an assert(false), and sometimes the dialog is created many times and fills my screen.
I would love it if the assert would break in the debugger and not ask me any questions.
I have modified the CRT reporting flags, which has had no effect.
I have also tried modifying the reporting hook. It does get called, but only after 25-30 "Abort" dialogs have appeared.
In case it helps: I am building a DLL that is loaded by a separate program, and the host program is not consistent about which thread calls into my code.
It seems like one of the threads was stopped while the others kept running.
How do I configure the CRT to do this?

This works (for me at least, on VS 2008):
(Essentially, return TRUE from the hooked function)
#include <tchar.h>
#include <crtdbg.h>
#include <assert.h>
#include <conio.h>

int __cdecl CrtDbgHook(int nReportType, char* szMsg, int* pnRet)
{
    // Return TRUE  - the Abort, Retry, Ignore dialog will *not* be displayed
    // Return FALSE - the Abort, Retry, Ignore dialog *will* be displayed
    return TRUE;
}

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    _CrtSetReportHook2(_CRT_RPTHOOK_INSTALL, CrtDbgHook);
    assert(false);
    _getch();
    return 1;
}
You could also write your own assert-like macro (note that this will show the "Break, Continue" dialog instead):
#include <windows.h>
#include <tchar.h>
#include <conio.h>

#define MYASSERT(x) { if(!(x)) { DbgRaiseAssertionFailure(); } }

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    MYASSERT(false);
    _getch();
    return 1;
}
Hope that helps!

Liao's answer takes you most of the way there, but I'd like to propose that you add one more thing to your debug hook:
int __cdecl StraightToDebugger(int, char*, int*)
{
    _CrtDbgBreak(); // breaks into the debugger
    return TRUE;    // handled -- don't process further
}
Otherwise your assertions will just disappear and the process will terminate.
The problem with this approach is that -- at least for my home install of VC Express -- the debugger throws up a big "program.exe has triggered a breakpoint" message instead of the normal assertion-failure dialog, so it may not be a great improvement.

I'm not sure if you want this behavior for any assert, or whether you're just trying to use assert(false) specifically as a general-purpose pattern to unconditionally break into the debugger on a given line. If it's the former, see Liao's and Kim's answers. If it's the latter, then you should really use the __debugbreak intrinsic function instead.
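For the latter case, a minimal sketch of what that could look like (the function and variable names here are made up; __debugbreak is the MSVC intrinsic, and the IsDebuggerPresent guard is optional):
#include <windows.h>
#include <intrin.h>

void do_work(int value)  // hypothetical function
{
    if (value < 0)
    {
        // Break straight into the debugger; no Abort/Retry/Ignore dialog is shown.
        // Guarding with IsDebuggerPresent avoids crashing when no debugger is attached.
        if (IsDebuggerPresent())
            __debugbreak();
    }
}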

Why does it assert in the first place? assert(false) looks like "should never happen" code was executed in the CRT. I would be scared if I were you. Is it always on the same line? Are there any comments around it?
EDIT:
I mean: the assert happens in CRT code because some assumption it is checking does not hold (maybe you managed to link against mixed runtimes, or you are building a managed C++ assembly and forgot to manually initialize the CRT, or you are calling LoadLibrary from within DllMain, or some other thing that should never happen).
So before figuring out how to suppress the asserts, find out why exactly it asserts in the first place. Otherwise you'll likely get seemingly unrelated problems later on and will have lots of fun trying to debug them. (From your question it is unclear whether you know what those asserts are about.)
Code like this
if(somebadcondition)
{
    assert(false);
    // recovery code
}
literally means "this branch of code should never be executed".

Why not use the DebugBreak function?
Or even use an opcode?
#ifdef _X86_
#define BreakPoint() _asm { int 3h }
#else
#define BreakPoint() DebugBreak()
#endif
Before Visual C++ 2005, the instruction __asm int 3 did not cause native code to be generated when compiled with /clr; the compiler translated the instruction to a CLR break instruction. Beginning in Visual C++ 2005, __asm int 3 now results in native code generation for the function. If you want a function to cause a break point in your code and if you want that function compiled to MSIL, use __debugbreak.

Related

How to debug if a constexpr function doesn't run at compile time?

For example, I have a constexpr function, but I assign its return value to a runtime variable (not marked constexpr). In this case I'm not sure whether the function runs at compile time or at runtime, so is there any way to debug this?
At first I thought about static_assert, but it looks like static_assert cannot do this. Then I thought about converting the code to assembly, but it is way too difficult to check the assembly code to figure this out.
Before C++20 there is no way to directly handle it from the program itself.
With C++20 you have std::is_constant_evaluated.
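A minimal sketch of how that can be used (hypothetical function names, C++20 only):
#include <cstdio>
#include <type_traits>

constexpr int func(int x)
{
    if (!std::is_constant_evaluated())
        std::printf("func evaluated at runtime\n"); // only reached on the runtime path
    return x * 2;
}

int main()
{
    constexpr int a = func(21); // forced compile-time evaluation: prints nothing
    int n = 21;
    int b = func(n);            // runtime evaluation: prints the message
    return a + b;
}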
If the return type of your constexpr function is a valid non-type template parameter, you can force your function to be evaluated at compile time like this:
constexpr int func( int x )
{
    return x*2;
}

template < auto x >
auto force_constexpr_evaluation()
{
    return x;
}

int main()
{
    int y = force_constexpr_evaluation<func(99)>();
}
If you are using C++20 already, you can directly force compile-time evaluation by using consteval:
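A small sketch of that (again a hypothetical function):
consteval int func(int x)    // consteval: every call must be a constant expression
{
    return x * 2;
}

int main()
{
    int y = func(99);    // OK: evaluated at compile time
    // int n = 21;
    // int z = func(n);  // would not compile: n is not a constant expression
    return y;
}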
Debugging at the assembly level should not be so hard.
If you see a function call to your constexpr func, it is running at runtime.
If you directly see the forwarded value, it was evaluated at compile time.
If the call is inlined, you should still be able to detect it, because the debug symbols associate the function name with the location of the inlined code. Typically, if you set a breakpoint on the constexpr function and it is not evaluated at compile time but is inlined, you get several breakpoint locations rather than a single one. Even if there is only one, it points to the inlined position in that case.
BTW: It is not possible to back-port std::is_constant_evaluated to older compilers, as it needs some implementation magic.

Why does the stdscr variable not work in PDCurses?

My PDCurses program terminates when I pass the stdscr variable to any function that receives a WINDOW* argument (e.g., keypad and wprintw). But it works when I capture the WINDOW* returned by initscr and use it instead.
I assume that once initscr is called, the WINDOW* returned by it and the stdscr variable should be the same. But after comparing their addresses I realized it is not so.
I could keep using the WINDOW* returned by initscr, but that would not work in a multi-terminal program where one has to use newterm, which returns a SCREEN*, not a WINDOW*. In that case I would necessarily need to use the stdscr variable, which still refuses to work.
Here is a sample code that works:
#include <curses.h>

int main()
{
    WINDOW* wnd = initscr();
    wprintw(wnd, "Hello world!");
    refresh();
    endwin();
    return 0;
}
But this one does not:
...
int main()
{
    initscr();
    wprintw(stdscr, "Hello world!"); // the program terminates here
    refresh();
    endwin();
    return 0;
}
This potentially multi-terminal program doesn't work either:
...
int main()
{
    SCREEN* term = newterm(NULL, stdout, stdin);
    set_term(term);
    wprintw(stdscr, "Hello world!"); // the program terminates here
    refresh();
    endwin();
    return 0;
}
So I don't know what is happening with the stdscr variable. I am using Windows 8.1 x64, the x64 VC++ compiler from Visual Studio 2012, and PDCurses 3.4.0.3 (downloaded with the NuGet package manager).
So, referencing GitHub issue #31: https://github.com/wmcbrine/PDCurses/issues/31
it looks like you were probably building without defining PDC_BUILD_DLL. As noted in win32/README (later win32/README.md, wincon/README.md):
"When you build the library as a Windows DLL, you must always define PDCURSES_DLL_BUILD when linking against it. (Or, if you only want to use the DLL, you could add this definition to your curses.h.)"
The described modification was made to the curses.h files bundled with the DLLs I distributed on SourceForge, but not to those from the NuGet project; nor, apparently, is the relevant documentation included in that package.
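In practice that means making sure the macro is defined before curses.h is included whenever you link against the DLL. A sketch (use whichever macro name your bundled curses.h actually checks; the two spellings above come from different README revisions):
// Assumption: the macro name below must match the one your curses.h tests for.
#define PDCURSES_DLL_BUILD
#include <curses.h>

int main()
{
    initscr();
    wprintw(stdscr, "Hello world!"); // with the right macro, stdscr refers to the DLL's variable
    refresh();
    endwin();
    return 0;
}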
The last line of PDCurses' implementation of initscr() (really Xinitscr(), which is called by initscr(), but anyway) is simply return stdscr;. So there's absolutely no difference between stdscr and the return value of initscr().
I don't know what you're doing wrong, but I can't reproduce any problem with your sample program. You might want to specify more about your environment -- OS, compiler, PDCurses version -- and exactly what it is that you're interpreting as a crash. BTW, the inclusion of stdio.h here is unnecessary (but harmless).
PDCurses doesn't support multiple simultaneous terminals, anyway.

register_kprobe is returning -2

I am trying to hook some kernel functions for learning purposes. I wrote the simple kernel module below, but for some reason register_kprobe always returns -2. I couldn't find anything about what this error means and have no idea how to continue. At first I thought it was because list_add is an inline function, so I tried replacing it with kvm_create_vm and got the same result. Then I checked /proc/kallsyms and found that neither of them appears there. So I chose kvm_alloc, which is exported, and I still get error -2. I also tried alloc_uid, and that worked just fine.
My question: What kind of functions can be hooked with kprobes?
#undef __KERNEL__
#define __KERNEL__
#undef MODULE
#define MODULE

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

MODULE_LICENSE("GPL");

static int pre(struct kprobe *kp, struct pt_regs *regs){
    printk(KERN_INFO "It is working!\n");
    return 0;
}

static struct kprobe kp = {
    .symbol_name = "list_add",
    .pre_handler = pre,
    .post_handler = NULL,
    .fault_handler = NULL
};

int init_module(void){
    printk(KERN_INFO "Hi\n");
    printk(KERN_INFO "register_kprobe: %d\n", register_kprobe(&kp));
    return 0;
}

void cleanup_module(void){
    unregister_kprobe(&kp);
    printk(KERN_INFO "Bye\n");
}
Edit
The line I struck through was the main reason I got confused: I misspelled kvm_alloc; it should be kvmalloc, without the underscore. That function got hooked just fine.
To probe inlined functions, you need to find all the PC addresses at which their inlined instances live, and plop those addresses into the struct kprobe's .addr field. A tool such as systemtap searches DWARF debuginfo for such inlined functions to compute PC addresses. See readelf -w vmlinux; DW_TAG_inlined_subroutine, DW_AT_low_pc, etc.
A negative return value can usually be interpreted as a negated errno value. Have a look at http://www.virtsync.com/c-error-codes-include-errno or so:
#define ENOENT 2 /* No such file or directory */
So the problem seems to be that register_kprobe could not find something, probably the list_add symbol. Let's dig into the source to figure out why it is that way.
register_kprobe calls kprobe_addr to resolve the symbol name, which in turn calls kprobe_lookup_name, which is a #define for kallsyms_lookup_name. So it seems that you need to get the symbol you want to hook into kallsyms for this to work.
For documentation about kprobes, have a look at Documentation/kprobes.txt in the kernel source tree. About kprobe'ing inline functions, it says:
If you install a probe in an inline-able function, Kprobes makes no attempt to chase down all inline instances of the function and install probes there. gcc may inline a function without being asked, so keep this in mind if you're not seeing the probe hits you expect.
So, it doesn't really work for inlined functions.
Now that we have figured out the problems, let's look for solutions. You'll probably need to recompile your kernel for this though.
First, make sure that the kernel configuration option CONFIG_KALLSYMS_ALL is turned on – that makes sure that kallsyms knows about more symbols. Then, try moving the implementation of list_add into a separate .c file and adding __attribute__ ((noinline)) to it. That new kernel build is going to be slower, but I think your kprobe module should work with it.

Maintain an MPI version and a non-MPI version in a convenient way

Recently, I used MPI to parallelize my simulation program to speed it up. The approach I adopted was to rewrite one function that is very time-consuming but easy to parallelize.
The simplified model of the non-MPI program is as follows:
int main( int argc, char* argv[] ){
    // some declaration here
    Some_OBJ.Serial_Function_1();
    Some_OBJ.Serial_Function_2();
    Some_OBJ.Serial_Function_3();
    return 0;
}
While my MPI version is,
#include "mpi.h"
int main( int argc, char* argv[] ){
    // some declaration here
    MPI_Init( NULL, NULL );
    Some_OBJ.Serial_Function_1();
    Some_OBJ.Parallel_Function_2(); // I rewrote this function to replace Some_OBJ.Serial_Function_2();
    Some_OBJ.Serial_Function_3();
    MPI_Finalize();
    return 0;
}
I copied my non-MPI code to a new folder, something like mpi_simulation, added the MPI function, and revised the main file as shown above. It works, but it is very inconvenient. If I update some function, say OBJ.Serial_Function_1(), I have to copy the code over carefully even if I just changed a constant. There are still some slight differences between the two versions of the program, and keeping them in sync is exhausting.
So I wonder if there is any way to make the MPI program depend on the non-MPI version, so that my revisions can be applied to both of them safely and conveniently.
Thanks.
Update
I finally adopted haraldkl's suggestion.
The method is to define a macro to enclose all functions that use MPI interfaces, like this:
#ifdef USE_MPI
void Some_OBJ::Parallel_Function_2(){
// ...
}
#endif
To initialize MPI automatically, I define a singleton called MPI_plugin:
#ifdef USE_MPI
class MPI_plugin{
private:
    static MPI_plugin auto_MPI;
    MPI_plugin(){
        MPI_Init( NULL, NULL );
    }
public:
    ~MPI_plugin(){
        MPI_Finalize();
    }
};
MPI_plugin MPI_plugin::auto_MPI; // definition of the static instance (put this in one .cpp file)
#endif
Including MPI_plugin.h in main.cpp saves me from having to add MPI_Init() and MPI_Finalize() to main.cpp when compiling the MPI version.
The last step is to add a PHONY target "mpi" to the makefile:
CPP := mpic++
OTHER_FLAGS := -DUSE_MPI
.PHONY: mpi
mpi: ${MPI_TARGET}
...
I hope this is helpful to anyone who runs into the same problem.
One approach to solving your problem would be to install (if it is not already installed) one of the 'dummy MPI' libraries available. So long as your code runs correctly on one MPI process (I'm sure you've written it so that it does), it should run correctly when linked to a dummy MPI library. If you're not familiar with dummy MPI libraries, Google them.
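If you end up rolling your own instead, a very minimal sketch of such a stub layer might look like this (a hypothetical fake_mpi.h covering only a few calls, with trivial single-process semantics; real dummy MPI libraries cover much more of the API):
// fake_mpi.h - hypothetical single-process stand-ins for a handful of MPI calls
#ifndef FAKE_MPI_H
#define FAKE_MPI_H

typedef int MPI_Comm;
#define MPI_COMM_WORLD 0

inline int MPI_Init(int*, char***)            { return 0; }
inline int MPI_Finalize()                     { return 0; }
inline int MPI_Comm_rank(MPI_Comm, int* rank) { *rank = 0; return 0; }
inline int MPI_Comm_size(MPI_Comm, int* size) { *size = 1; return 0; }

#endif // FAKE_MPI_H
The idea is that the serial build includes this header instead of mpi.h, so the same source compiles unchanged in both configurations.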

Code compiles in VS2008 but not in VS2010 for std::set with boost::trim

The following code
#include "stdafx.h"
#include <string>
#include <set>
#include <boost/algorithm/string/trim.hpp>
int _tmain(int argc, _TCHAR* argv[])
{
std::set<std::string> test;
test.insert("test1 ");
test.insert("test2 ");
for(std::set<std::string>::iterator iter = test.begin(); iter != test.end(); ++iter)
{
boost::algorithm::trim(*iter);
}
return 0;
}
compiles in VS2008 but fails in VS2010 with the error
error C2663: 'std::basic_string<_Elem,_Traits,_Ax>::erase' : 3 overloads have no legal conversion for 'this' pointer \boost\include\boost\algorithm\string\trim.hpp
which indicates there is a problem with const matching in the functions. If I change the set to a vector, everything is fine. I can also do a
boost::algorithm::trim(const_cast<std::string&>(*iter));
but I hate putting that in my code, and it seems like I shouldn't have to, since I'm not using a const_iterator on the set. Does anyone know whether this is the intended behavior, and why?
The elements of a std::set are intended to be immutable. If you could update them in place, it would either require the set to be re-ordered whenever you updated an element (which would be very hard, possibly impossible, to implement), or updating an element would break the set's guarantee of ordering.
Allowing the elements of a set to be mutable was an oversight in the original C++98 standard. It was corrected in C++11; the new standard requires set iterators to dereference to a const element. VS10 implements the new rule; VS08 followed the old standard, where updating a set element in place invokes undefined behaviour.
(See the final draft C++11 standard, section 23.2.4 para 6.)
Yes, it's intended behavior. Even when you don't use a const_iterator, you normally need to treat the contents of a set as const. Even though the modification you're making (probably) shouldn't cause a problem, modifying them in general could require that the order of the elements be changed to maintain set's invariant that the items are always in order.
To ensure that they stay in order, you aren't allowed to modify them at all.
VS 2008 allowed this, but probably shouldn't have (as in: the standard sort of allowed it, but it definitely wasn't a good idea). VS 2010 fixes the problem (and conforms with the new draft standard) by not allowing in-place modification.
The cure is to remove the item from the set, modify as needed, and then re-insert it into the set (or, as you've done, cast away the const-ness, and pray that nothing you do screws up the ordering).
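A sketch of that remove/modify/re-insert approach for the trimming example above (building a fresh set and swapping it in is the simplest safe way to do it):
#include <set>
#include <string>
#include <boost/algorithm/string/trim.hpp>

int main()
{
    std::set<std::string> test;
    test.insert("test1 ");
    test.insert("test2 ");

    std::set<std::string> trimmed;
    for (std::set<std::string>::const_iterator it = test.begin(); it != test.end(); ++it)
    {
        std::string s = *it;       // mutable copy of the element
        boost::algorithm::trim(s);
        trimmed.insert(s);         // the new set keeps itself ordered
    }
    test.swap(trimmed);            // replace the original contents
    return 0;
}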
