I've just gotten an application to compile and run by telling my VS 2008 project to ignore libc.lib in the Linker -> Input section of the project properties. Before I did this, VS gave me the old "fatal error LNK1104: cannot open file 'LIBC.lib'" message.
I'm not sure how this app compiles if I'm ignoring the crt, but that's obviously my ignorance speaking.
I checked the C/C++ project settings and the runtime library setting reads Multi-threaded Debug DLL (/MDd) -- so I must be linking to an MSVCR90*.dll somewhere.
I'm not sure how, though. I've always been confused about the crt settings in Visual Studio: static or dll, debug or release, multithreaded or not. From reading this site and Google I understand it a bit better now -- if you use the dlls you don't get as much code bloat, things are loaded when the program needs them, and crt updates can be applied by overwriting the dll. The usual reasons for using a dll, in other words.
But what's with the multi-vs-single threaded versions? If I happen to link with a static version I can't use win threads or pthreads, is that what that means?
One other thing which I've heard about but never quite followed-- there are problems in allocating an object in one dll and deallocating it from another, or something like that, to do with cross-allocation. I'm not explaining that well (because I don't understand it) but I hope you get my point and can explain what's going on there. Does it mean that in my program I can't call new ObjectX() on a class that lives in a dll? Can't mean that, can it?
Thanks everyone!
But what's with the multi-vs-single threaded versions? If I happen to link with a static version I can't use win threads or pthreads, is that what that means?
There is no (native) pthreads in Windows - so it's more Windows Threading, but that's the basic idea. However, VS 2008 doesn't support the single-threaded versions - they have been dropped, so if you're using VC++ 2008, you'll always use the multi-threaded VC++ runtime libraries.
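For reference, the runtime-library switches in VS 2008 map like this (the single-threaded /ML and /MLd options, which pulled in libc.lib and libcd.lib, were dropped back in VS 2005 -- which is also why the linker can no longer find LIBC.lib):

    /MT   multithreaded, static           (libcmt.lib)
    /MTd  multithreaded, static, debug    (libcmtd.lib)
    /MD   multithreaded, DLL              (msvcrt.lib import lib -> MSVCR90.DLL)
    /MDd  multithreaded, DLL, debug       (msvcrtd.lib import lib -> MSVCR90D.DLL)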
One other thing which I've heard about but never quite followed-- there are problems in allocating an object in one dll and deallocating it from another, or something like that, to do with cross-allocation. I'm not explaining that well (because I don't understand it) but I hope you get my point and can explain what's going on there. Does it mean that in my program I can't call new ObjectX() on a class that lives in a dll? Can't mean that, can it?
This is more of an issue with allocating objects in a DLL which was compiled with a different VC++ runtime than the calling project. If you compile a DLL with VS2005, then try to do "new MyClass();" within a VC 2008 project, you can run into problems.
This is due to changes that are made between VC++ runtime versions. Different versions are free to manage memory allocations and deallocations in their own way. Trying to allocate or delete across a "runtime library" boundary tends to cause bad things to happen - quite often, you get crashes.
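The usual way to sidestep it is to make sure whichever module does the new is also the one that does the delete, for example by exporting a matching create/destroy pair from the DLL. A minimal sketch -- MyClass, CreateMyClass and DestroyMyClass are placeholder names, and the dllexport/dllimport decoration is omitted for brevity:

    // mydll.h -- shared header
    class MyClass;                                   // opaque to callers
    extern "C" MyClass* CreateMyClass();
    extern "C" void     DestroyMyClass(MyClass* p);

    // mydll.cpp -- built into the DLL, so new *and* delete both run against
    // the DLL's own CRT heap, whatever runtime the caller was built with
    #include "mydll.h"
    class MyClass { /* ... */ };
    MyClass* CreateMyClass()            { return new MyClass(); }
    void     DestroyMyClass(MyClass* p) { delete p; }

    // caller.cpp -- possibly built with a different VC++ version
    #include "mydll.h"
    void UseIt()
    {
        MyClass* obj = CreateMyClass();
        // ... use obj only through other exported functions ...
        DestroyMyClass(obj);             // never 'delete obj' on this side
    }

The same principle is behind the COM and "flat C API" advice you'll see elsewhere: keep allocation and deallocation on the same side of the boundary.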
I've switched over to using the new Modules in Visual Studio (2019 - 16.10.4). Everything compiles and runs just fine if I do a full compilation (i.e. clean the solution first). Very often when I compile normally, though, I get a crash on initialisation. The error is Application Error (0xC0000005) and appears on startup.
In x86, there's no valid call stack to look at, at all. In x64, the debugger points to a valid call stack, at a line in exe_common.inl that says __scrt_current_native_startup_state = __scrt_native_startup_state::initialized;. I realise that's a red herring, and the actual line is _initterm(__xc_a, __xc_z); which I understand is the global memory initialisation.
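(For reference, that call is, as I understand it, the loop that runs the table of constructors for namespace-scope objects, i.e. things like:

    struct Tracer { Tracer() { /* runs before main(), via _initterm */ } };
    Tracer g_tracer;   // its constructor sits in the table between __xc_a and __xc_z

so presumably one of those constructors is what crashes.)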
Is anyone aware of such issues with Modules in VC? The internet doesn't have much information on this case, which leads me to believe it has to do with various configuration options that I might be using. Any pointers? I'm really hoping I won't have to stop using Modules, which I was really looking forward to. There is also a slight chance that a recent Visual Studio update, or something obscure I did in the conversion to Modules, is the culprit.
It's just strange that it's guaranteed to work with a full compilation, which would lead me to think that the compiler is to blame.
I have to deal with one ancient software module which only supports its API being called from VC++ 6.0.
My idea is to add an indirection layer: wrap the module in a dynamic library written with VC++ 6.0 that forwards the API calls to the underlying module. That DLL would then be called from the VC++ toolchain provided in Visual Studio 2015.
My question is:
Is this doable in principle?
Are there any pitfalls one might want to pay attention to?
Can it be done in a better way?
Update: I guess that the C ABI is stable across different Windows versions as well as the corresponding supported VS versions. If the DLL is written in pure C, this might be doable without too much trouble.
There are at least two issues preventing portability between versions/vendors: C++ object/class layout and the C/C++ memory allocators.
To avoid the first issue you should design the visible API as a set of C functions (like most of the Win32 API).
The memory issue can be solved by switching from new/malloc to one of the native Windows functions like LocalAlloc, HeapAlloc or CoTaskMemAlloc. Assuming your old .dll is not statically linked it would theoretically be possible to force newer code to use the malloc in msvcrt.dll but it makes your build setup a bit fragile and it goes against current best practices.
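A rough sketch of what the wrapper DLL's surface could look like following both rules -- all names here (Legacy_Open, Legacy_DoWork and so on) are made up for illustration; the real module's API dictates the details:

    // wrapper.h -- flat C interface exported by the VC++6.0 wrapper DLL
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef void* LEGACY_HANDLE;   // opaque handle instead of a C++ class

    LEGACY_HANDLE __stdcall Legacy_Open(const char* config);
    int           __stdcall Legacy_DoWork(LEGACY_HANDLE h,
                                          const char* input,
                                          char** output);   // freed via Legacy_FreeString
    void          __stdcall Legacy_FreeString(char* s);
    void          __stdcall Legacy_Close(LEGACY_HANDLE h);

    #ifdef __cplusplus
    }
    #endif

The paired Legacy_FreeString keeps allocation and deallocation inside the old DLL; alternatively, buffers allocated with CoTaskMemAlloc can be freed on the VS 2015 side with CoTaskMemFree, since both sides share that allocator.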
Another alternative is for the middleman .dll to implement everything as a COM object:
EXTERN_C HRESULT WINAPI CreateMyObject(IMyObject** ppv) ...
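The nice property of the COM route is that construction and destruction both happen inside the wrapper DLL -- the VS 2015 caller only ever goes through the interface. Roughly (IMyObject being whatever interface the wrapper defines, and DoSomething a stand-in for its methods):

    IMyObject* obj = NULL;
    HRESULT hr = CreateMyObject(&obj);
    if (SUCCEEDED(hr))
    {
        obj->DoSomething();   // forwarded to the VC++6.0 module internally
        obj->Release();       // destruction runs inside the wrapper DLL
    }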
How to debug MinGW-built binaries to detect heap errors? I see there are several questions on the topic, but they are general, and it's hard to find a tool that works well with MinGW. I spent a lot of time finding a solution; I hope this combined topic will be helpful.
Such a tool becomes necessary when for example someone reports a bug in your library while running it under Visual Studio debugger, which stops with a "Heap Error".
There is a tool provided by Microsoft called Application Verifier. It is a GUI tool that changes system settings to run selected applications in a controlled environment. This makes it possible to crash your program if it causes detectable memory errors -- a controlled crash that can be debugged.
Fortunately it is obtainable from Microsoft as a separate download. Another way to get it is to install the Windows SDK with the Application Verifier checkbox checked; the SDK also offers an Application Verifier redistributable option.
After you configure Application Verifier to keep an eye on your app, you need to debug it. Debugging under MinGW is a more common subject, already explained on Stack Overflow. A [mingw] [debugging] query on Stack Overflow turns up interesting articles. One of them is How do I use the MinGW gdb debugger to debug a C++ program in Windows?. Gdb is the one I used.
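For completeness, the gdb part is nothing special once Application Verifier is watching the executable (myapp.exe is just a placeholder name):

    gdb myapp.exe
    (gdb) run                # Application Verifier breaks as soon as it detects a heap error
    (gdb) bt                 # the backtrace points at the code that corrupted the heap
    (gdb) info registers     # optionally inspect the state at the point of the break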
The general questions How to debug heap corruption errors? and Heap corruption detection tool for C++ were helpful to find this tool, but I wasn't sure if it is compatible with MinGW. It is.
I'm in the process of learning VC++, but I wonder: why do end-users also need MSVC++?
As far as I can see in MSDN most if not all of the libraries that my programs use (the actual DLL files) already come with the system itself (user32.dll, kernel32.dll, etc).
But how come Paint and Notepad do not need MSVC++, while my software, which is far simpler than Notepad, requires this runtime? What does the runtime do? How does it work? Is there a way to make my software work without MSVC++?
The runtime provides all the standard functions and classes, like std::string and std::vector, as well as the support code that runs constructors and destructors of global objects, finds exception handlers, etc. Windows comes with a version of all this, and for a while Visual C++ used it, but it was discovered that there were incompatibilities with the Standard, so newer versions of the compiler come with fixes (Windows can't bundle the new fixes in place of the old DLLs, because it would break existing programs).
Yes, you can avoid the need for the runtime redistributable installer. You can use the /MT build option, which bundles all the required library functions right into your executable. After that, you'll only need DLLs that come with Windows.
The setting is in Project Configuration under C/C++ -> Code Generation -> Runtime Library
But note that this will make your executable file somewhat larger, and any bug fixes (especially security fixes distributed via Windows Update) won't affect your program, since you have a particular implementation baked in.
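One way to check what you actually ended up with is dumpbin, which ships with Visual Studio:

    dumpbin /dependents MyApp.exe

With /MD the list includes the CRT DLL for your compiler version (an msvcr*.dll or vcruntime*.dll); with /MT that entry disappears, because the CRT is baked into the executable, and only DLLs that ship with Windows remain.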
Adding to Ben's answer:
The runtime bundles a lot of features for each respective version of Visual Studio. The main advantage of using the DLL version of the runtime is that you get (security) updates "for free" whenever the system updates the DLLs in question.
Another advantage that some people will point out is that it saves resources to use the DLL version if many processes use the runtime via the DLL. This is because Windows has a mechanism to share DLLs in memory across processes (or the major part of them).
You will notice that bundling the runtime into your binary - also called static linking - will make your binary bigger, because each of your binaries now carries its own version of the runtime (that cannot be replaced without linking the program anew).
Also beware of mixing (your own) DLLs that statically link to different versions of the runtime (e.g. Debug vs. Release), or that link to the runtime dynamically in some DLLs and statically in others. The problem here is allocators. The functions to allocate (malloc, calloc, new) and free memory are incompatible across these. The best method in such a case is to use an independent mechanism such as IMalloc -- or to always carry the deallocator inside your object instances, ensuring that the call to free/delete doesn't cross module boundaries, even if the instance is handled in another module.
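A sketch of the "carry the deallocator" idea (the names are illustrative, not from any particular library): destruction goes through a virtual call, so the delete always runs in the module that did the new.

    // Shared header
    struct IExported
    {
        virtual void DoWork() = 0;
        virtual void Destroy() = 0;      // callers use this instead of 'delete obj'
    protected:
        virtual ~IExported() {}          // non-public: forces callers through Destroy()
    };

    // Inside the module that creates the instances
    class ExportedImpl : public IExported
    {
    public:
        void DoWork()  { /* ... */ }
        void Destroy() { delete this; }  // this delete uses the creating module's runtime
    };

    IExported* CreateExported() { return new ExportedImpl(); }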
I'm in an Assembly class focusing on the Intel 8086 architecture (all compiling/linking/execution is done by running DOS on Win7 via DOSBox).
I've finished programming the latest assignment, but as I have yet to get any program working the first time through, I am now stuck trying to debug my code.
I have Visual Studio 2010 and was wondering if there is some built-in feature that would help me debug my assembly code; specifically, I'm looking to track the value of a variable.
Failing that, instructions pointing to a DOS-Box debugger (and instructions!) would be much appreciated. (I think I've been able to run codeview debug, but I couldn't figure out how to do what I was looking for).
You are generating 16-bit code; you'll have to break into a museum to find better tooling. Try Borland's, maybe the debugger included with Turbo C.
Yes, indeed, you can use the debugger in VS to examine pretty much everything. Irvine's site has a section specifically on using the debugger here. You can examine registers, use the watch window, etc. He also has a guide for highlighting asm keywords if you need that.
Edit: as Hans pointed out, if you are using 16-bit instead of 32-bit protected mode, you'll need different tools. There are several choices, listed here.
Borland's tools for DOS were called tasm, tlink, and td (Turbo Debugger).