I am working on a program that crashes when it is run, but works just fine when debugged in GDB. I have seen this thread, removed optimizations, and tried checking the values of the relevant local and global variables, with nothing seemingly out of place. It is not a concurrent program, so there shouldn't be any race conditions between threads. Windows Event Viewer logs the issue as a heap corruption (a problem with ntdll.dll), and I'm not sure what could be causing it. I am compiling with the 64-bit version of MinGW.
The program itself is rather large, and I'm not even sure which part to post. I don't really know how to proceed or what else I could check for. Any guidance on whether this is a known issue would be greatly appreciated, and if there is any other information I could post, please let me know.
I was able to track down the issue: somewhere in the code, I was using fscanf to read in arrays of type int, but the variables they were being stored in (i.e., the third argument to fscanf) were of type char*. Because the int conversion makes fscanf write a full int through that pointer, it was writing past the char allocations and corrupting the heap. Changing the argument to one of type int* fixed the issue.
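For anyone hitting the same thing, here is a minimal sketch of the kind of mismatch involved; the file name, array size, and "%d" conversion are assumptions for illustration, not the original code:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("values.txt", "r");    /* hypothetical input file */
    if (!f) return 1;

    char *bad  = malloc(4 * sizeof(char)); /* wrong element type */
    int  *good = malloc(4 * sizeof(int));  /* matches the "%d" conversion */
    if (!bad || !good) { fclose(f); return 1; }

    /* Buggy version: "%d" requires an int*, so each call writes
       sizeof(int) bytes into a char array -- undefined behaviour that
       can corrupt adjacent heap metadata. */
    /* for (int i = 0; i < 4; i++) fscanf(f, "%d", &bad[i]); */

    /* Fixed version: the destination type matches the conversion. */
    for (int i = 0; i < 4; i++) {
        if (fscanf(f, "%d", &good[i]) != 1) break;
    }

    free(bad);
    free(good);
    fclose(f);
    return 0;
}
```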
I am trying to track function call counts in a program I'm interested in. If I run the program on its own, it runs fine. If I try to run it with valgrind using the command below, I get a different result.
Command run:
It produces this output immediately, even though normal execution is slow.
I'd say that this is more likely to be related to this issue. However, to be certain, you will need to tell us:
what compilation options are being used (specifically, are you using anything related to AVX or x87?);
what hardware this is running on.
It would help if you could cut this down to a small example and either update this or the frexp bugzilla items.
Valgrind has limited floating point support; you're probably using non-standard or very large floating point types.
UPDATE: since you're using long double, you're out of luck. Unfortunately, your least-worst option is to find a way to make your code work using just standard IEEE 754 64-bit double precision. That probably isn't easy, given that you're working with an existing project.
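To make the precision limitation concrete, here is a small sketch of my own (not taken from the question's code) that can behave differently natively and under valgrind, since valgrind emulates 80-bit x87 long double arithmetic with only 64-bit precision:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* volatile keeps the compiler from folding the addition at compile time */
    volatile long double one = 1.0L;
    volatile long double eps = LDBL_EPSILON;  /* 2^-63 for 80-bit x87 long double */
    long double sum = one + eps;

    /* Natively on x86 this prints 1, because the 80-bit format represents
       1 + 2^-63 exactly. Under valgrind, which emulates long double
       arithmetic with 64-bit precision, the addition may round back to
       1.0 and print 0. */
    printf("sum > 1 : %d\n", sum > one);
    return 0;
}
```

If differences of this kind matter to your results, switching the affected variables to double at least makes native and valgrind runs agree.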
I'm trying to use the Windows WriteProcessMemory API from Rust in a project of mine in order to replicate the process hollowing technique. Although I use it in nearly exactly the same way elsewhere in the project, I'm having trouble getting this one call to work. It looks to me like the whole buffer isn't getting copied to the location I specify, and/or the u8 integers are being squished into u64s when written.
The WriteProcessMemory call returns BOOL(1), which evaluates to true and makes me think it is running successfully. If I provide the lpnumberofbyteswritten argument, it comes back as the same size as the shellcode buffer I intended to write. But the memory doesn't look right when I read it back after writing, and the shellcode doesn't run properly (whereas elsewhere in my project it does). Have I made a silly mistake? If so, does anyone see where?
Thank you!
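For reference (not the asker's code), here is a minimal C sketch of the underlying Win32 call being wrapped here; the handle, remote address, and shellcode bytes are placeholder assumptions, and the key point is that nSize and the bytes-written output are byte counts over a raw byte buffer:

```c
#include <windows.h>
#include <stdio.h>

/* Sketch of the underlying Win32 call, for comparison with the Rust binding.
   hProcess and remote_addr are assumed to come from elsewhere
   (e.g. CreateProcess + VirtualAllocEx); the shellcode bytes are placeholders. */
static BOOL write_shellcode(HANDLE hProcess, LPVOID remote_addr)
{
    unsigned char shellcode[] = { 0x90, 0x90, 0xC3 };  /* placeholder bytes */
    SIZE_T written = 0;

    /* nSize is a byte count, and lpBuffer must point at the raw bytes,
       not at an array of wider integers. */
    if (!WriteProcessMemory(hProcess, remote_addr,
                            shellcode, sizeof(shellcode), &written)) {
        fprintf(stderr, "WriteProcessMemory failed: %lu\n", GetLastError());
        return FALSE;
    }
    if (written != sizeof(shellcode)) {
        fprintf(stderr, "short write: %lu of %lu bytes\n",
                (unsigned long)written, (unsigned long)sizeof(shellcode));
        return FALSE;
    }
    return TRUE;
}
```

If the Rust side passed a length in elements rather than bytes, or a pointer to wider integers, the written region could look wrong in the way described above.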
I'm having a memory leak which is very hard to detect.
Can valgrind tell me the last call at which the address was still accessible, and what the values of the variables were? I use CLion; can it just break when the leak happens?
There is no "instantaneous" leak-detection functionality in valgrind/memcheck that reports a leak exactly at the moment the last pointer to a block is lost. There was an experimental tool that tried to do that, but it was never considered for integration into valgrind, due to various difficulties in making it work properly.
If your leak is easy to reproduce, you can run your application under valgrind + gdb/vgdb. You can then add breakpoints at various points in your program and use monitor commands such as "leak_check" or "who_points_at" to check whether the leak has already happened. By refining the locations where you put a breakpoint, this can help you find where the last pointer to a block is lost.
See e.g. https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands for more info.
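To illustrate what "the last pointer to a block is lost" means in practice, here is a small hypothetical example; breaking just before and just after the second refresh_cache call and issuing the leak_check monitor command would bracket the exact point at which the first block leaks:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical illustration: the last pointer to a heap block is lost when
   `cache` is overwritten without freeing the old block. */
static char *cache = NULL;

static void refresh_cache(const char *data)
{
    /* BUG: the previous allocation is never freed; after this assignment
       nothing points at it any more, so it is leaked. */
    cache = malloc(strlen(data) + 1);
    if (cache)
        strcpy(cache, data);
}

int main(void)
{
    refresh_cache("first");   /* allocates block A */
    refresh_cache("second");  /* block A becomes unreachable here */
    free(cache);
    return 0;
}
```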
I'm doing research on the Ruby interpreter and mJIT.
As a first step, I would like to understand the behavior of both. So I simply ran a very simple Ruby program, puts("hello world!"), without the --jit option and obtained its execution trace. One thing I found is that even without mJIT enabled, some of the mJIT functions get invoked, such as mjit_add_class_serial, mjit_remove_class_serial, mjit_mark, mjit_gc_finish_hook, mjit_free_iseq, and mjit_finish.
I would like to understand why that is. My guess is that the interpreter and mJIT share some of this code, but I'm not 100% sure. In particular, the description of mjit_finish briefly says that it finishes up whatever operation the mJIT compiler is currently performing. In that case, why does this function get invoked during interpreter-only execution?
If anyone has an idea regarding my question, any recommendation would be very much appreciated.
Thank you.
This is for Ruby version 2.6.2. I've gone through the source code as well as the comments explaining it, but they are not very clear.
I am using Intel's FORTRAN compiler to compile a numerical library. The provided test case errors out within libc.so.6. When I attach Intel's debugger (IDB), the application runs through successfully. How do I debug a bug that the debugger prevents? Note that the same bug arose with gfortran.
I am working within OpenSUSE 11.2 x64.
The error is:
forrtl: severe (408): fort: (3): Subscript #1 of the array B has value -534829264 which is less than the lower bound of 1
The error message is pretty clear to me: you are attempting to access a non-existent element of an array. I suspect that the value -534829264 is either junk from an uninitialised variable used to identify the element in the array, or the result of an integer arithmetic overflow. Either way, you should switch on the compilation flag to force array bounds checking and run some tests. I think the flag for the Intel compiler is -CB, but check the documentation.
As to why the program apparently runs successfully in the debugger, I cannot help much, but perhaps the debugger imposes some default values on variables that the run-time system itself doesn't. Or some other factor entirely is responsible.
EDIT:
Doesn't the run-time system tell you which line of code causes the problem? Here are some more things to try to diagnose it. Use the compiler to warn you of:
use of variables before they are initialised;
integer arithmetic overflow (I'm not sure whether the compiler can spot this);
any forced conversions from one type to another, and from one kind to another within the same type.
Also, check that the default integer size is what you expect it to be and, more important, what the rest of the code expects it to be.
I'm not an expert in this area, but here are a couple of things to consider:
1) Is the debugger initialising the variable used as the index to zero, while the non-debug run does not, so the variable starts with a "junk" value? (I had an old version of Pascal that used to do that.)
2) Are you using threading? If so, is the debugger changing the order of execution so that some preparation thread completes in time?