Okay... here is the deal (and I wish I could provide a code example, but this is a problem at the library level and not limited to a specific function/file):
I am creating a common logging .dll to be used throughout all applications in my organization. As part of the logger, we are using a logging framework named 'NLog.' The solution contains several projects that are merged into one .dll using 'ILMerge.' I, of course, compile it on my machine and make the .dll and relevant configurations available to the other application(s) that will be using it. When the .dll is used in another application, it runs just fine on my machine and on almost every other machine in my organization. However, on one of the other developers' machines, the .dll does not work correctly. The only way that I have been able to get it to work correctly on his machine is to actually re-compile the .dll on his machine.
Has anyone else encountered this problem? I have written like a billion .dll's and have NEVER encountered this issue.
Thank you for helping! If you need more information, let me know.
Brett
Do all dependent libraries have exactly the same versions on your machine and on your colleague's machine? This sounds like a version conflict.
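If it does turn out to be a version conflict in a strong-named dependency, the Fusion Log Viewer (fuslogvw.exe) on the failing machine will show exactly which assembly and version the loader asked for. A binding redirect in the consuming application's config can then paper over the mismatch. A minimal sketch, assuming NLog is the assembly that fails to resolve; the token and version numbers are illustrative and should be taken from the actual load failure:

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="NLog" publicKeyToken="5120e14c03d0593c" culture="neutral" />
            <!-- redirect any older build to the version actually deployed -->
            <bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>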
Related
We are building an audio plugin that can be loaded by a variety of audio production programs. To make it as compatible as possible with all common hosts, we actually build three versions of it (Steinberg VST2 format, Steinberg VST3 format, Avid AAX format), which is achieved by wrapping our core plugin code with wrappers for those three APIs. All three versions are installed in the standard location specified for each format.
Our plugin now depends on the Microsoft onnxruntime, which we want to dynamically link against. Now, what is the right way of deploying and handling this dependency? As the plugin is loaded at runtime by the user's host software of choice, placing the dll dependency next to the executable is not an option, since we don't know which host software the user will use and which of the three plugin formats that software will choose.
Being a macOS developer, I'm unfamiliar with Windows best practice here.
Ideally we would like to install the dll into a custom location. But this would require us to modify the system's PATH variable to ensure that the dll is found for all users when a host loads one of our plugins, right? I'm not sure that is a clean solution.
Another option could be to install the dll into C:\Windows\System32, but my research revealed that there is no versioning information on the dlls located there. So in case some other application installed onnxruntime there as well (or a newer Windows installation already ships with onnxruntime), how could we ensure that its version is equal to or greater than the version needed by our plugin (in which case we wouldn't overwrite it) or below our minimum needed version (in which case we would replace it)? This generally seems like bad practice to me as well.
So what's common best practice on Windows for such scenarios? Am I overlooking a proper solution?
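One pattern I have seen for exactly this situation (a sketch, not established best practice, and onnxruntime.dll sitting next to the plugin binary is an assumption): link with /DELAYLOAD:onnxruntime.dll and load the dll by absolute path before the first call into it. Because the plugin knows where it was installed, it can resolve the dependency itself instead of relying on the host's search path:

    // Sketch: load onnxruntime.dll from the plugin's own directory before any
    // of its exports are called. Assumes the plugin is linked with
    // /DELAYLOAD:onnxruntime.dll so the delay-load helper picks up the module
    // this function already loaded.
    #include <windows.h>

    static HMODULE LoadOnnxRuntimeFromPluginDir()
    {
        // Get the full path of the plugin dll itself (not the host exe).
        HMODULE self = nullptr;
        GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
                           GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
                           reinterpret_cast<LPCWSTR>(&LoadOnnxRuntimeFromPluginDir),
                           &self);

        wchar_t path[MAX_PATH] = {};
        GetModuleFileNameW(self, path, MAX_PATH);

        // Replace the plugin's file name with the dependency's name.
        if (wchar_t* slash = wcsrchr(path, L'\\'))
            slash[1] = L'\0';
        wcscat_s(path, L"onnxruntime.dll");

        // Absolute path plus altered search path, so the dll's own
        // dependencies are resolved from the same directory.
        return LoadLibraryExW(path, nullptr, LOAD_WITH_ALTERED_SEARCH_PATH);
    }

The same function works if the dll lives in a custom shared location: derive the path from a registry value written by your installer instead of from the plugin module. Either way, no PATH edits and no writes to System32 are needed.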
We have recently taken over a project from an outsourcing company. This project uses Moles and Pex for unit testing, but since we have not had the project for long, I am not very familiar with the frameworks.
That being said, we are busy upgrading this project to run on .NET 4. I have resolved most of the issues that have jumped out, but there is one that I cannot get a handle on. Some of the unit tests cannot compile because of this error:
Could not load file or assembly 'Example.Assembly, Version=0.0.0.0,
Culture=neutral, PublicKeyToken=null' or one of its dependencies. The
system cannot find the file specified.
The part that baffles me is that it is a project reference and the assembly is being copied to the output directory of the unit test. Most of the other project references are found and I cannot spot any difference between the ones that work and the ones that do not. I am not sure if this problem has to do with the pex/moles frameworks, but I thought I would mention it.
I have tried the usual things of removing and adding all the references and regenerating the moles assemblies.
Has anyone else run into this problem? Any help would be greatly appreciated.
EDIT1: OK, after some more investigation into the build output, it appears that it is not Moles, but the .accessor files that are not generated correctly. I get the exact same problem as asked in Unit test project cannot find assembly under test (or dependencies), but unlike his problem, mine does not go away after deleting the accessor.
EDIT2: Turns out it is a program called Publicize.exe which falls over with that error. Still no idea why, though. Looking at the Fusion logs, it looks like it does not search under the working directory for the dll that it is trying to generate the accessors for. Running it manually on a bunch of assemblies from our solution, I find it works on some, but not on others. I have not been able to identify a difference between the ones that work and the ones that don't, though.
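For reference, the Fusion log is off by default and has to be enabled in the registry before binding failures are recorded. These are the standard values (the LogPath folder must already exist; the path shown is just an example):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion]
    "EnableLog"=dword:00000001
    "ForceLog"=dword:00000001
    "LogFailures"=dword:00000001
    "LogPath"="C:\\FusionLogs"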
Thanks
Ah, yes. I have read this story many times, and have the T-shirt. I run through my usual Moles first-aid kit when encountering any issue, including this one.
Perhaps, this question will provide some help: Am I the only one getting "Assembly Not Available in the Currently Targeted Framework"?
Ensure the Moles framework is properly installed on the workstation and/or build server
Ensure the Moles assemblies are being built (see the excluded "Moles Assemblies" directory)
Check your build profile -- it may need to be set to the full framework profile (see the project-file snippet after this list)
Triple-check your output destinations and post-build commands -- I have seen some solutions that copy the output to another location
Try using the Visual Studio Pex/Moles extension, if you are not already doing so
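For the build-profile point above, this is the shape of the relevant properties in the test project's .csproj. The Client Profile is missing assemblies the Pex/Moles tooling expects, so the profile element should be empty (full profile) rather than "Client" (a sketch, not a whole project file):

    <PropertyGroup>
      <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
      <!-- empty means the full .NET Framework profile -->
      <TargetFrameworkProfile></TargetFrameworkProfile>
    </PropertyGroup>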
An invasive fix-all process is to simply create an all-new solution, projects, and test projects, and then copy the existing code files into them. It's surprising how many project-related errors this resolves -- basically, a hard reboot for the entire solution.
Since you are updating to .NET 4, you may as well go to 4.5 and use the productized version of Moles, called "Fakes". You'll find Fakes in the Visual Studio 2012 release candidate. This significant feature hasn't received much attention.
I'm looking for general advice. I created a Visual Studio 2010 project that outputs an ocx file that is used on XP and Vista machines. The DLL on which it depends has been updated on our Win7 machines. I simply needed to rebuild for Win7 using the exact same code with an updated .lib file. I created a second project configuration (ReleaseW7) and it only differs from the original project config (Release) in that it points to the new .lib.
So now I have 2 files, both named xx.ocx. Besides looking at the name of the folder each file resides in (or at the creation time of each), there is no way to determine which is which. I thought of using different file version numbers, but as far as I can tell (and I'm relatively new to this, so I could certainly be wrong) that would require two separate projects, each with a slightly modified resource (.rc) file, instead of simply having two configurations within the same project. If nothing more, that seems like a waste of hard drive space. It also feels like the "wrong" way of using file version numbers.
Is there a cleaner or more "standard" way of handling this? All I really want is a way for the folks who install the ocx and support the end user to know for certain that they are working with the correct file.
Long story short, I decided to use different version numbers. I was able to set up a preprocessor definition for the resource compiler and use that to handle different versions of VS_VERSION_INFO in my .rc file.
In case anyone is interested, this is the resource I found:
http://social.msdn.microsoft.com/Forums/en-US/winformssetup/thread/605275c0-3001-45d2-b6c1-652326ca5340/
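A minimal sketch of what that looks like in the .rc file. BUILD_W7 is an illustrative symbol defined only in the ReleaseW7 configuration (Configuration Properties > Resources > Preprocessor Definitions), and the version numbers are just examples:

    // Pick a version per build configuration at resource-compile time.
    #ifdef BUILD_W7
      #define VER_FILEVERSION       1,0,7,0
      #define VER_FILEVERSION_STR   "1.0.7.0\0"
    #else
      #define VER_FILEVERSION       1,0,0,0
      #define VER_FILEVERSION_STR   "1.0.0.0\0"
    #endif

    VS_VERSION_INFO VERSIONINFO
     FILEVERSION VER_FILEVERSION
     PRODUCTVERSION VER_FILEVERSION
    BEGIN
        BLOCK "StringFileInfo"
        BEGIN
            BLOCK "040904b0"
            BEGIN
                VALUE "FileVersion", VER_FILEVERSION_STR
            END
        END
        BLOCK "VarFileInfo"
        BEGIN
            VALUE "Translation", 0x409, 1200
        END
    END

Anyone supporting the end user can then read the version from the file's Properties > Details tab and know for certain which build they have.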
I've got a project that depends on a particular version of MSVCR80.dll (the MS Visual C Runtime) and I'm running into problems where, depending on the particular system configuration, my app doesn't always get the right version of that file. It's been a bit of a crap shoot as to what path it takes to find a file with that name, and it's not always right...
Is there a way, when creating a Deployment Project in VS2005, to ensure that my app will always use the runtime that I provided? When I add the runtime file to the project, it asks about creating a merge module... but I'm not really sure what that does. And regardless of whether I create one, the issue remains.
Martin Richter wrote an article about that on CodeProject:
Create projects easily with private MFC, ATL and CRT assemblies
This solution does not rely on your MSI packages but on the application that uses the CRT files.
I am not sure whether it is your application that doesn't work after installation, or a dll used as part of the installation itself.
To make a very long story, very short: new versions of the C / C++ runtimes are installed as Win32 assemblies, or side-by-side installation. This means the files will go into folders under C:\Windows\winsxs - the Win32 equivalent of the GAC, and several versions of the same file can co-exist here.
Applications compiled with Visual Studio 2005 / 2008 will put a manifest file into the binary, and this manifest specifies what side-by-side runtime version to bind to. It doesn't matter if you put the MSVCR80.dll next to your EXE or even in system32 - the manifest embedded in the EXE will load the file from C:\Windows\winsxs.
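You can see that binding by opening the binary in a resource editor and looking at the RT_MANIFEST resource. For a VS2005 (VC8) build it contains a dependency entry along these lines; the exact version comes from the runtime you compiled against, so treat the numbers here as illustrative:

    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <dependency>
        <dependentAssembly>
          <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
                            version="8.0.50727.6195"
                            processorArchitecture="x86"
                            publicKeyToken="1fc8b3b9a1e18e3b" />
        </dependentAssembly>
      </dependency>
    </assembly>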
This is all "full circle". In the old days runtimes went to System32. This caused the original dll-hell: applications overwriting each other's global runtime files. To remedy this, the idea was to "isolate changes" to each application. Hence the new approach was to isolate a local copy of the runtime file next to the EXE. Now this caused an entirely new problem: how do you make sure security updates for the isolated dll get deployed? In most cases this never happened, and you had lots of applications running with local, unsafe dlls. So what to do? The decision was to introduce the second coming of dll-hell: the side-by-side assembly approach. In this approach runtimes are not local, but global - with the critical difference of supporting side-by-side installations. This way, in theory, applications can function without overwriting each other's runtime dlls.
So that was the quick summary of "how to make runtime deployment complicated". I am not positive it is still possible to do, but did you check whether you can statically link to the runtime? Sometimes old-school really is easier...
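If static linking is an option for you, it is just a compiler switch: use the static CRT instead of the dll CRT, and MSVCR80.dll drops out of the picture entirely (at the cost of bigger binaries and no centrally patched runtime). In the IDE it is C/C++ > Code Generation > Runtime Library; on the command line, roughly:

    rem /MT selects the static multithreaded CRT (release), /MTd the debug
    rem variant, instead of /MD and /MDd which bind to MSVCR80.dll.
    cl /MT /EHsc myapp.cpp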
We're going through a massive migration project at the minute, trying to validate that the code deployed to the live estate matches the code we have in source control.
Obviously the .NET code is easy to compare because we can disassemble it. I don't believe this is possible with VB6 exes because of the manner of compilation.
Does anyone have any ideas on how I could validate that the source code and the compiled executable match the file I have in Live?
Thanks
Visual Basic had (has) two ways of compiling: one to the interpreter (called P-code) that would result in smaller binaries, and a second one that generates a "regular" Windows .exe file (called native), which was introduced because it was supposed to be faster than P-code, although the compiled file size increased with this option.
If your compilation used P-code, it is in theory possible to restore the sources.
Either way is pretty difficult to do, but there are tools that claim they can partially do this. One that I know of (never tried it, but there is a trial version) is VB-Decompiler:
http://www.vb-decompiler.org/
Unfortunately that's almost impossible. Bear in mind that VB6 code compiled on different machines will have different exe sizes and deployment requirements.
This is why the old VB'ers had a dedicated machine to compile their code.
This won't help you with already deployed items, but if you upped the revision number on every compile (there is a project setting to do this for you automatically) then you could easily compare version numbers.
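If you go the version-number route, the numbers can also be pulled out of deployed binaries programmatically, so the comparison can be scripted. A small sketch using the Win32 version APIs (error handling trimmed):

    // filever.cpp - print the FILEVERSION of a binary, e.g. filever MyApp.exe
    #include <windows.h>
    #include <cstdio>
    #include <cstdlib>
    #pragma comment(lib, "version.lib")

    int wmain(int argc, wchar_t** argv)
    {
        if (argc < 2) { wprintf(L"usage: filever <path>\n"); return 1; }

        DWORD handle = 0;
        DWORD size = GetFileVersionInfoSizeW(argv[1], &handle);
        if (size == 0) { wprintf(L"no version resource\n"); return 1; }

        void* data = malloc(size);
        if (!GetFileVersionInfoW(argv[1], 0, size, data)) { free(data); return 1; }

        // The fixed-info block at the root holds the binary FILEVERSION.
        VS_FIXEDFILEINFO* ffi = nullptr;
        UINT len = 0;
        if (VerQueryValueW(data, L"\\", reinterpret_cast<LPVOID*>(&ffi), &len) && ffi)
        {
            wprintf(L"%u.%u.%u.%u\n",
                    HIWORD(ffi->dwFileVersionMS), LOWORD(ffi->dwFileVersionMS),
                    HIWORD(ffi->dwFileVersionLS), LOWORD(ffi->dwFileVersionLS));
        }
        free(data);
        return 0;
    }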
My old company bought a copy of that VB-Decompiler, and as noted before, VB5/6 can additionally generate P-code. The tool did produce some code from it, and where it couldn't, it produced assembly code, which could at least be "read".
If you have all the code you compiled, you could compare the CRCs of that code to what is deployed in the field. But if you don't have the original compiled code, your options depend on how you compiled it: if you used P-code rather than native code, you may be able to decompile, but the result will look nothing like your source code. I doubt you would have shipped the PDBs with the exes, but if you did, you could certainly use those to compare with the source code in your repository.
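For the comparison itself, the stock tooling is enough; no code needed. Hash the archived build output and the deployed file and compare (paths are illustrative; on systems too old for certutil's SHA256 support, fc /b gives a byte-for-byte comparison instead):

    certutil -hashfile C:\BuildArchive\MyApp.exe SHA256
    certutil -hashfile \\LiveServer\Apps\MyApp.exe SHA256

    fc /b C:\BuildArchive\MyApp.exe \\LiveServer\Apps\MyApp.exe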
Have a trusted computer that can check out the various libraries and exes you make and compile them automatically. Keep those in a read-only but accessible location. Then do a binary comparison between the deployed site and your comparison site.
However, I am not sure of the logic of disassembling the compiled units. My company and most other places I know of use a combination of a build computer and unit testing. In our company the EXE we make is a very thin shell over a bunch of libraries. For example, a button click is passed to a UI ActiveX DLL that does the actual processing. What we do after a build is run a special EXE that performs our list of unit tests. If they all pass, we know our libraries, where 90% of our code is, are good. As for the actual EXE, we have a manual procedure that takes about two hours, and then we are good. It is rare for any errors to happen in the EXE.