Error "Bad Calling Convention" when debugging VB6 program

I have a standard VB6 EXE (mailviewer).
This program has a "link" to a COBOL DLL:
Declare Sub InkMvwMail Lib "inkvwm" Alias "INKMVWMAIL" ...
When starting the normal EXE from Windows,
EVERYTHING WORKS FINE,
but when I want to debug the call to the COBOL DLL entry point in Visual Studio 6.0 (SP6) (on Windows XP), I get
"Error 49, Bad Calling Convention"
Thanks for any help in advance
Wolfgang

EVERYTHING WORKS FINE,
No, it only looks that way. Everything is not fine: that COBOL function was designed to be called from a C program. It has the wrong calling convention, cdecl instead of stdcall. The stack imbalance this causes can produce extremely hard-to-diagnose runtime failures, like local variables mysteriously having the wrong value, and it includes the hard crash for which this site is named.
When you run from the IDE, the debugger performs an extra check to verify that the stack pointer is properly restored across the function call. It is not, which generates the error 49 diagnostic.
You'll need to follow the guidance in this KB article. It cannot be solved in VB6; this requires writing a little helper function in another language that can make cdecl calls, like C or C++. The KB article shows what such a function could look like, although they intentionally gave it the wrong convention to demonstrate the problem.
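For illustration only, here is a minimal sketch of such a helper in C++, assuming the COBOL routine takes a single string buffer (the real INKMVWMAIL parameter list is not shown in the question, so the signature here is invented):

// Hypothetical shim: the parameter list is an assumption and must match
// the real COBOL entry point, or the same stack imbalance comes right back.
extern "C" void __cdecl INKMVWMAIL(char *mailBuffer);  // resolved from inkvwm's import library

extern "C" __declspec(dllexport)
void __stdcall InkMvwMailStd(char *mailBuffer)
{
    INKMVWMAIL(mailBuffer);  // cdecl call: the shim, as the caller, cleans up the stack
}

The VB6 Declare would then point at the shim DLL instead of inkvwm (a .def file keeps the exported name undecorated), and the debugger's stack check passes because the exported function really is stdcall.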


Why doesn't Qt raise a compile error with invalid class/type names in signal/slot?

Windows 7 SP1
MSVS 2010
Qt libraries 4.8.4 for Windows (VS 2010)
Visual Studio Add-in 1.1.11 for Qt4
At first, I couldn't figure out why this slot didn't fire:
connect (lineEdit, SIGNAL(textChanged(const QString &)),
this, SLOT(enableFindButton(const Qstring &)));
A diff made it clear: Qstring should be QString.
My question: Why did it compile? In fact, it will compile with this:
connect (lineEdit, SIGNAL(textChanged(const nonsense &)),
this, SLOT(enableFindButton(const more_nonsense &)));
Is this expected behavior? Why wouldn't this raise an error?
As far as I know and understand how Qt works, connections are made at run time, not at compile time. This means that all the operations needed to connect a signal and a slot are performed when the code flow reaches that part.
Remember something important about Qt: some of the calls are only macros, not C++ functions. For example, the Q_OBJECT line you add to your class declarations to enable the signals and slots mechanism is a macro, SIGNAL and SLOT are macros, emit is a macro, and so on. moc (a pre-compiler that translates Qt's macros into real code) analyzes your code and generates the supporting code for those classes.
Also, the signal/slot mechanism, I repeat, as far as I know, works at run time, not compile time. If you read the documentation of connect, it says that the SIGNAL and SLOT macros simply convert whatever you put inside them into a string, in a format that would be too complex to write by hand. Since it is a string doing the work, the compiler can't check whether the string is correct; that check happens at run time.
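A small self-contained example (Qt 4 style, invented for illustration, using QLabel::setText as the receiving slot) shows this: the typo compiles cleanly, and only connect's return value, plus a warning on stderr, reveals the problem at run time.

#include <QApplication>
#include <QLineEdit>
#include <QLabel>
#include <QDebug>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QLineEdit edit;
    QLabel label;

    // The misspelled "Qstring" still compiles: SIGNAL()/SLOT() just turn
    // their argument into a string, and the lookup happens at run time.
    bool ok = QObject::connect(&edit, SIGNAL(textChanged(const Qstring &)),
                               &label, SLOT(setText(const QString &)));
    qDebug() << "broken connect:" << ok;  // false, with a warning on stderr

    // Spelled correctly, the runtime lookup now finds the signal.
    ok = QObject::connect(&edit, SIGNAL(textChanged(const QString &)),
                          &label, SLOT(setText(const QString &)));
    qDebug() << "fixed connect:" << ok;   // true

    return 0;
}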
I hope my English is clear enough for you to understand me, and I hope my knowledge is good enough that I'm not saying (too many) incorrect things.

Why are some error messages in executable binary

Answering another question about how const string data is stored in an executable, a question occurred to me: why are run-time error messages stored in the executable rather than generated by the OS?
!This program cannot be run in DOS mode.
That one seems reasonable: DOS can't be expected to detect a newer Windows app and generate an appropriate error message. Some others, such as ones about a corrupted stack frame, might also occur when your program is in a state where it can't reliably call an OS function.
But some make no sense:
- Attempt to use MSIL code from this assembly during native code initialization...etc.
If I were trying to call a .NET assembly (which I'm not; this is pure C++ code), surely the .NET or OS runtime could manage to generate the error message itself.
And the file system error messages are also in the exe. Surely I want these errors to be generated by the OS at run time: if the app is being run on a non-English version of Windows, don't I want system errors to reflect that language rather than the one I used to compile it?
Just a Friday afternoon sort of question, but there doesn't seem to be anything about it on Raymond Chen's site.
The examples I know of are stubs placed specifically because there isn't a good way for the OS or some runtime to produce the error. They are also fundamental problems with the implementation, not the types of errors a user should ever see.
The DOS one is a perfect example. There's no way for DOS to recognize a Windows program and produce a reasonable error, so the stub code is included in Windows programs.
Your MSIL one is very similar. How can the runtime produce this error if the runtime isn't actually loaded (because the program is still initializing)?
Another example I can think of is "Pure virtual function called" in C++ programs. The abstract base class vtables are filled with stubs that emit this message. I suppose they could be stubs that make an OS call to produce the same message, but that seems like overkill for something that's never supposed to happen.
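A minimal sketch of how that stub gets reached; the indirection through helper() is the classic way to sidestep the compile-time diagnosis, and on MSVC this aborts with the R6025 "pure virtual function call" message:

struct Base {
    Base() { helper(); }       // during construction the dynamic type is still Base
    void helper() { init(); }  // virtual dispatch lands on the purecall stub
    virtual void init() = 0;
    virtual ~Base() {}
};

struct Derived : Base {
    void init() {}             // never reached from Base's constructor
};

int main()
{
    Derived d;  // aborts at run time with "pure virtual function called"
    return 0;
}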
There's a similar problem if you try to call into the floating point library from a native C or C++ program that isn't actually linked against the floating point library. This is essentially a floating-point implementation detail of the language runtime library (in cooperation with the compiler and the linker). It's not an OS-level issue (in the sense that something like "file not found" is).
It's so you can globalise your application.
E.g., Ivan can have the error messages in Russian even though the app is running on a French OS.
I'm guessing this is code pulled in by the CRT. Try compiling a test application that does not link with the default libraries, using mainCRTStartup as the entry point; then the only string should be the error message in the DOS stub.
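A sketch of that experiment with MSVC (the command lines are from memory, so treat them as a starting point rather than a recipe):

// tiny.cpp -- build without the CRT so none of its message strings get linked in:
//   cl /c tiny.cpp
//   link /nodefaultlib /subsystem:console tiny.obj kernel32.lib
#include <windows.h>

// Satisfy the linker's default console entry point ourselves, so the
// CRT startup code (and its embedded error strings) is never pulled in.
extern "C" void mainCRTStartup()
{
    ExitProcess(0);
}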

Customer getting R6002 runtime error when using our app

We have an application built with Visual C++ 2005, and one customer has reported that he's getting this runtime error:
Microsoft Visual C++ Runtime Library
Runtime Error!
Program: [path to our application]
R6002
- floating point support not loaded
According to Microsoft (on this page), the possible reasons for this are:
the machine does not have an FPU (not in this case: the customer has an Intel Core 2 Duo CPU and I haven't seen a machine without FPU since the 486SX)
printf or scanf is used with a floating-point format specification but there are no FP variables in the program (our app contains FP variables but I'm pretty sure we never use printf or scanf with FP formats)
Something to do with FORTRAN (no FORTRAN code in our app)
Also, the error is occurring while they're using our application (specifically, just after they select a file to be processed), not when the application starts up.
I realise this is a long shot, but has anyone seen anything like this anywhere before? Google was pretty unhelpful (there were lots of unsupported claims that it was a symptom of some kind of virus infection but very little apart from that).
Any suggestions gratefully received :-)
Are you linking a static version of the CRT? If so, you need to have floating-point variables in the binary that calls printf(), and those variables have to be really used (i.e., not optimized out by the compiler).
Another possibility is a race between the CRT initialization and the code that uses these FP routines, but that would be hard to reproduce.
R6002 can be caused by printf trying to print a string that contains a percent sign.
The most likely root cause of such a printf failure is a program that manipulates arbitrary files and prints their names. Amazingly, there really ARE people who put percent signs in file names! (Yes, I realize that is technically legal.)
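A sketch of that failure mode (the file name is invented, and this assumes a statically linked CRT with no other floating-point usage in the binary, per the answer above):

#include <cstdio>

int main()
{
    const char *fileName = "sales%figures.txt";  // legal, if unfriendly, file name

    printf(fileName);          // BUG: "%f" is parsed as a conversion and can raise R6002
    printf("%s\n", fileName);  // safe: the name is passed as data, not as the format

    return 0;
}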
printf("%f\n", (float)rand() / RAND_MAX);
I experienced the same runtime error in a program compiled with VS2010 command line cl.
The reported error occurred without the (float) cast and disappeared when I added it.

Lisp code debugging

While searching the web, I found the following comment: traditional Lisp debugging practices can still be used.
What are the traditional debugging practices?
Normally, what tools are used for debugging lisp (with/without emacs)?
I don't know what Bill meant specifically, but IME:
Typically your editor will have a running instance connected to it. You can compile functions immediately to insert them into the running image -- since Lisp has its own compiler, you're just telling the running image to read and compile a small section of text. Or you can run functions directly, to see what they do.
When an exception is thrown (or a condition is signaled, if you're lucky enough to be in a dialect with conditions), the debugger will show you the stack trace and let you decide how to continue.
The major difference between Lisp and other high-level compiled languages is that in Lisp you're basically always writing code with the debugger attached.
As clojure was tagged in the question, I'll give our perspective.
Class files generated by the Clojure compiler include line- and method-based debugging info, so any Java debugger will interoperate directly with Clojure code, including breakpoints and object inspection.
If you use emacs/slime as your development environment, integration with slime's debugger has recently been included. As documentation is a little sparse, it's probably best to check out the scope of the support on github directly.
Run edebug-defun in Emacs and you will see that Lisp is magic.
Things that I would say approach a "traditional set of Lisp debugging techniques" are:
- Debug printouts
- Function tracing (each invocation of a traced function is printed with an indentation that corresponds to call depth; on return, the return value is printed)
- Explicit invocation of the in-image debugger
- Ending up in the in-image debugger due to a bug (trying to add an integer and a symbol, for example)
Basically just things like adding code to print out values as it runs so you can see what's happening.

Attempted to read or write protected memory when calling native C DLL

I have a native C DLL that exports one function besides DllEntryPoint: FuncX. I'm trying to find out how FuncX communicates with its caller, because it has a void return type and no parameters. When I call it from a C# harness, I get an AccessViolationException: attempted to read or write protected memory.
I have a hunch that the client application may allocate a buffer for sending values to or receiving values from the DLL. Is this a valid hunch?
I can't debug the client application because for some reason it doesn't run, so I can't start it and attach to the process. I can, however, disassemble it in IDA Pro, but I don't know how (or whether) I can debug it in there.
If the DLL in question has any static or global symbols, it's possible that all communication is done via those symbols. Do you have any API code that looks like it might be doing this?
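For illustration, a sketch of that pattern (every name here is invented; it just shows how a void, parameterless export can still move data through an exported global):

#include <ctype.h>

extern "C" {
    // The client writes its input here and reads the result back.
    __declspec(dllexport) char g_ioBuffer[1024];

    // No parameters, no return value: all traffic goes through g_ioBuffer.
    __declspec(dllexport) void FuncX(void)
    {
        for (char *p = g_ioBuffer; *p != '\0'; ++p)
            *p = (char)toupper((unsigned char)*p);
    }
}

If the real DLL instead expects a pointer stashed in such a global before the call, invoking FuncX from a bare C# harness without that setup would plausibly produce exactly this access violation.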
It is unlikely that the DLL is using a client-supplied buffer, as both client and server would need to know the base address of that buffer, and you can't ask calloc or malloc for a "preferred" address at call time.
You might also try running link /dump /exports and pointing it at your DLL. That will show you the list of exported symbols. Good luck!
I would try loading the DLL itself into IDA Pro. Hopefully C# preserves the native call stack, and you can look at the code around where the DLL crashes.
Side note: the Decompiler plugin is pretty awesome.
