In our C# code base (hence managed code), we have a class that we use extensively throughout the code.
Given its ubiquity, I decided to write a custom debugger visualizer so that we could easily examine such objects when debugging.
But I hit a snag: when I try to run the visualizer in the IDE, I get a BadImageFormatException.
I am posting this to help others who come across the same error. I know what the issue and the solution are, and I will post them below.
Currently (as of Visual Studio 2019) it's possible to split the visualizer into two halves:
a debuggee-side DLL, which gets injected into the target process, and
a debugger-side DLL, which is loaded into Visual Studio.
The two halves pass data between each other using serialization/deserialization.
This architecture is required for visualizers to target multiple frameworks -- the debugger side is loaded into Visual Studio, so it must target .NET Framework; the debuggee side is injected into the target process, which might target .NET Core or .NET 5+. (I refer you to this repo for a minimal visualizer with this structure; and to other visualizers I've written (1 2) which also use a similar architecture.)
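For reference, here is a minimal single-file sketch of the pattern in C# (type names such as MyType, MyTypeVisualizer and MyTypeObjectSource are hypothetical, and it assumes references to Microsoft.VisualStudio.DebuggerVisualizers and System.Windows.Forms). The debuggee side serializes a simple payload and the debugger side deserializes and displays it; in the split setup described above, the two classes would live in separate projects.

    using System.IO;
    using Microsoft.VisualStudio.DebuggerVisualizers;

    // Registers the visualizer pair for the target type (assembly-level attribute).
    [assembly: System.Diagnostics.DebuggerVisualizer(
        typeof(MyTypeVisualizer),
        typeof(MyTypeObjectSource),
        Target = typeof(MyType),
        Description = "MyType Visualizer")]

    // Stand-in for the class from your own code base.
    public class MyType
    {
        public override string ToString() { return "example state"; }
    }

    // Debuggee side: runs inside the target process and serializes a payload.
    public class MyTypeObjectSource : VisualizerObjectSource
    {
        public override void GetData(object target, Stream outgoingData)
        {
            // Send only what the debugger side needs (here, a plain string).
            var writer = new StreamWriter(outgoingData);
            writer.Write(target.ToString());
            writer.Flush();
        }
    }

    // Debugger side: runs inside Visual Studio and displays the payload.
    public class MyTypeVisualizer : DialogDebuggerVisualizer
    {
        protected override void Show(IDialogVisualizerService windowService,
                                     IVisualizerObjectProvider objectProvider)
        {
            using (var reader = new StreamReader(objectProvider.GetData()))
            {
                System.Windows.Forms.MessageBox.Show(reader.ReadToEnd(), "MyType");
            }
        }
    }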
The same architecture handles bitness. Visual Studio is a 32-bit application, so the debugger side cannot be 64-bit; it must be 32-bit or AnyCPU. But if the target process might be 64-bit, the debuggee side has to match the target process, so it must be 64-bit or AnyCPU.
Per the docs:
Typically, it is best if both the debugger-side DLL and the debuggee-side DLL specify Any CPU as the target platform. The debugger-side DLL must be either Any CPU or 32-bit. The target platform for the debuggee-side DLL should correspond to the debuggee process.
The issue is that Visual Studio itself, the IDE, runs as a 32-bit process. If it is to run a custom data visualizer for you while debugging, the visualizer and all the code that the visualizer loads must be loadable and runnable in a 32-bit process. The visualizer receives the object to display through serialization/deserialization, and to deserialize the object it must be able to load the DLL in which the object's type is defined. Here we run into the snag: if we are building our application for an x64 target (rather than AnyCPU), we're up a creek. It doesn't matter that the custom visualizer itself is built for a 32-bit target, because it is the application's assembly that must be loaded for deserialization.
So if your application is built for a 64-bit target, you cannot run a custom visualizer (big, big OUCH, Microsoft!). To get around the snag, you can build for AnyCPU instead, and then things work well: the application still loads and runs as 64-bit (since it targets AnyCPU), but the IDE is able to load the DLLs as 32-bit for the purposes of the custom data visualizer running in the IDE's process space.
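In project-file terms, the workaround amounts to roughly the following in the application's .csproj (a sketch; Prefer32Bit only exists on newer toolsets, so check whether your project exposes it):

    <PropertyGroup>
      <!-- Build the application as AnyCPU so the 32-bit IDE can load its assemblies
           for deserialization, while the app itself still runs as 64-bit. -->
      <PlatformTarget>AnyCPU</PlatformTarget>
      <!-- Keep 32-bit preference off so the exe runs as a 64-bit process on a 64-bit OS. -->
      <Prefer32Bit>false</Prefer32Bit>
    </PropertyGroup>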
If I am wrong on this and there is a better work-around, I would love to be corrected! Thanks.
Related
While struggling to get the VS2019 remote debugger to load symbols, I noticed that when I go to attach the debugger in VS, the process displays as "Managed", even though this is a native x86 C++ project.
Update: I've deduced that this is not the cause of my problem (now resolved), so the question is simply: why are processes that are purely native C++ (they pre-date .NET) detected as managed?
I am developing a Windows Forms application that needs to access and manipulate a SharePoint 2013 Site Collection, preferably via the Server-Side Object Model (SSOM). I seem to be running into a problem similar to the question "Visual studio designer in x64 doesn't work"
I cannot run the SSOM code in x86 or AnyCPU, and when I try to open a form with user controls in the forms designer after compiling to x64, I get the following error:
Could not find type '[UserControl]'. Please make sure that the assembly that contains this type is referenced. If this type is a part of your development project, make sure that the project has been successfully built using settings for your current platform or Any CPU.
It would seem the UserControls are being compiled as x64 but cannot be rendered by the designer, which I suppose is running as x86. These user controls are all in the same project, and recompiling the whole project as x86 allows the forms designer to render them.
Is my only option to completely rebuild my entire code base each time I want to switch between form design and interfacing with SharePoint?
Until or unless Hans Passant comes along and claims the answer he essentially gave me in the comment thread, I'll post my resolution to the issue:
Solution
Ensure that any user controls used in 64-bit-targeted forms/projects are in a separate project that is compiled to target "AnyCPU", then rebuild and reference that project.
When the forms designer, which runs as x86, loads your user controls for rendering, it runs them as x86; but when you build and run the 64-bit project, the references are resolved from an x64 process, which therefore runs the controls as x64.
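A sketch of what the split looks like in the project files (project names here are hypothetical):

    <!-- MyControls.csproj: the user-control library, built as AnyCPU so the
         32-bit forms designer can load and render the controls. -->
    <PropertyGroup>
      <PlatformTarget>AnyCPU</PlatformTarget>
    </PropertyGroup>

    <!-- MyApp.csproj: the SharePoint/SSOM application, still built as x64. -->
    <PropertyGroup>
      <PlatformTarget>x64</PlatformTarget>
    </PropertyGroup>
    <ItemGroup>
      <ProjectReference Include="..\MyControls\MyControls.csproj" />
    </ItemGroup>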
I have a large solution made up of a combination of C++ and C# projects, most of which output DLLs. We also have an executable which depends on the outputs of those projects. Our entire solution is currently built in VS2005. For numerous reasons we have to target the v80 toolset for our builds, but we've finally found time to move to the 2010 IDE.
When we build in 2010, the solution compiles fine, but we get an access violation when running the app. This exception occurs in a number of scenarios, but always at the same point in the code; it also shows as an "exception encountered during a user callback". If we edit out the line of code where the exception throws, it simply moves somewhere else, which makes sense. The scenarios in which we have the issue are as follows:
All DLLs and the exe built in 2010 against the v80 toolset.
All DLLs built in 2005, the exe built in 2010 against the v80 toolset.
Notably, though, if we use the DLLs built in 2010 (against v80) but the exe built in 2005, everything works fine.
My question then is: What is the difference between the output from a build in 2005 and the output from a build in 2010 using the v80 toolset?
The above probably depends on whether it is possible to exactly match the command-line arguments for the build (i.e. the C++ compiler and linker configuration), as it may be that we haven't quite got those right. If needed I can link the settings from 2005 and those from 2010.
Any help would be much appreciated.
UPDATE:
I've recently created a very simple application in 2005, consisting of a DLL and an exe. The DLL has a function static __declspec(dllexport) int add(int a, int b). The exe is a simple console application which calls the add function from the DLL.
I then ported this to VS2010 and set it to the v80 toolset. Building this produces a DLL with the same size as the original; the exe, however, is 4 KB bigger. I'm using dumpbin to try to find out why, but I don't know it too well at the minute. If anyone else can identify why the exes differ in size in this simple case, it may help solve my overall problem.
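For anyone making a similar comparison, one place to look is dumpbin's header dump; the "DLL characteristics" field of the optional header shows, among other things, whether the image was linked with /NXCOMPAT ("NX compatible"). File names below are placeholders:

    rem Run from a Visual Studio command prompt
    dumpbin /headers app_2005.exe > headers_2005.txt
    dumpbin /headers app_2010.exe > headers_2010.txt
    fc headers_2005.txt headers_2010.txt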
Solved this now; the issue was caused by DEP (Data Execution Prevention) being turned on by default. I can confirm for anyone else, though, that the output from the builds should and will match exactly if you use the same compiler and linker settings.
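If you need to reproduce the 2005 behaviour (and disabling DEP is acceptable for your application), the flag can be turned off at link time. A sketch for a VS2010 C++ project:

    <!-- In the .vcxproj linker settings; equivalent to /NXCOMPAT:NO on the link.exe command line -->
    <Link>
      <DataExecutionPrevention>false</DataExecutionPrevention>
    </Link>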
I need to build an MVC 3.0 site and target x64 specifically. I'm having an issue trying to build my MVC 3.0 site with the Platform Target set to x64 and MvcBuildViews set to True. Everything builds fine until it tries to compile the views. If I set the Platform Target to AnyCPU everything will compile, but when set to x64 I get this error:
Could not load file or assembly 'Mvc64Bit' or one of its dependencies. An attempt was made to load a program with an incorrect format.
This can easily be recreated by creating a blank MVC 3.0 project, unloading the project, editing the project file to set the MvcBuildViews item to "true", reloading the project, changing the Platform Target in the project's Build properties to x64, and then building.
I haven't been able to find anything about the above error online, just that it deals with mismatched DLLs (one 32-bit, one 64-bit), but this doesn't make sense unless the view build engine is 32-bit or something.
Any hints to point me in the right direction will be GREATLY appreciated. Thanks for reading!!
I got a response from Microsoft on this issue. What seems to be happening is that Visual Studio calls a 32-bit compiler that compiles the website into a 64-bit DLL. After that, it calls the 32-bit compiler again for the views. The view compilation needs to load the 64-bit Web project DLLs to get information from the defined models, and this is where the "incorrect format" comes in: the 32-bit compiler tries to load the 64-bit Web project DLLs.
Now, calling the 64-bit aspnet_compiler.exe from the Visual Studio Command Prompt works perfectly. But, I guess, since Visual Studio is a 32-bit application, it can't load the 64-bit compiler. I'm not sure of any way to make it call the 64-bit version, and even if there were a way, Visual Studio probably couldn't give the nice list of errors that it typically does (just an assumption, as I don't know how Visual Studio calls the compiler; a simple command-line execution works, but maybe it actually loads the DLL and calls it from inside the VS code).
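For reference, the manual invocation looks roughly like this (paths are illustrative; MVC 3 targets .NET 4.0, hence the v4.0.30319 framework directory):

    rem Precompile the site's views with the 64-bit compiler
    %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v / -p C:\source\MySite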
So, my workaround was to put the MvcBuildViews=true declaration inside the project file's property group, then put MvcBuildViews=false in the 'Release|AnyCPU' PropertyGroup, and just let IIS compile the views when the site first loads. It's not complete precompilation, but it will work.
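In the .csproj that looks roughly like this (a sketch; your configuration and platform names may differ):

    <!-- Compile views during normal builds... -->
    <PropertyGroup>
      <MvcBuildViews>true</MvcBuildViews>
    </PropertyGroup>
    <!-- ...but skip view compilation for the configuration that IIS will compile on first load. -->
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
      <MvcBuildViews>false</MvcBuildViews>
    </PropertyGroup>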
Is it possible to switch off managed code (and switch on unmanaged code) for C++ coding in Visual Studio 2008, so that the programs (exes) produced run directly as native machine code?
Also, is it true that after the first time a .NET (managed) exe runs (say, one written in C#), the exe gets converted to a native-code one (like the old pre-.NET C++ executables)? Or is there a way to make it compile directly to native code if it was written in C#?
The answer to both of these questions is yes.
You can create unmanaged C++ projects in VS which do not need .NET. You can also link unmanaged C++ code to managed C++ code and (sort of) get the best of both worlds, although matching the calling parameters between the two systems is interesting.
You can also use the ngen .NET utility to pre-compile .NET assemblies to native code. However, in doing so you lose some flexibility: the JIT compiler takes account of the local machine's capabilities when compiling a .NET assembly. So if you distribute a .NET project as generated by VS, then ngen run on the local machine that runs the program will do the compiling; but if you run ngen on your machine, the precompiled code will be tied to the processor capabilities of your system.
As per Joel's comment: regardless of whether you use ngen or not, you still need the .NET Framework on the target machine.
Thinking about it, using ngen to pre-compile a .NET project is probably no worse than compiling an unmanaged C++ project to native code.
To do what you want for C#, you would use ngen.exe, which ships with the .NET Framework alongside the C# compiler. You run that command on the assembly, and a native image is generated and added to the native image cache.
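A sketch of the command, run from an elevated Visual Studio/Developer command prompt (MyApp.exe is a placeholder):

    rem Generate and cache native images for the assembly and its dependencies
    ngen install MyApp.exe

    rem Remove them again if needed
    ngen uninstall MyApp.exe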
As far as I know, you can switch temporarily to unmanaged code, i.e. use unmanaged variables etc. via marshaling. Take a look here: http://msdn.microsoft.com/de-de/library/bb384865.aspx