Looking for a workaround: Edit and Continue not working in linked files (Visual Studio)

I have found that Edit and Continue does not work in linked files. I am working on a suite of 8 different utilities (C# WinForms) which all share several common classes, added to each utility project as linked files. I can set breakpoints in the linked files, and step through the code in them, but not edit it. When I try, I get the error "Changes are not allowed if the project wasn't built when debugging started." I've made sure to perform a clean and build before running, but that doesn't help, and colleagues get the same results on their machines. These linked common classes are key to each of the utilities and much of the code resides in them, so not being able to Edit and Continue makes debugging much more difficult and tedious. Edit and Continue works fine in the non-linked, non-common files. Can anyone suggest a workaround? I have considered merging all 8 utilities into one, but they each take different command-line parameters and are really intended to be used individually.

Related

Recommended tool to automate complicated build procedure

I am developing an OS for embedded devices that runs bytecode; basically, a micro JVM.
As part of that, I can compile Java applications to (something close to) bytecode and flash the result onto, for instance, an ATmega1284P.
I've now added support for C applications: I compile and process them with several tools, and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications and the OS to devices.
Roughly, building a C application consists of the following consecutive manual steps:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) Convert the resulting .class file to a Jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) Manually edit this Jasmin file in a text editor, replacing/adjusting some strings
(4) Convert the modified Jasmin file back to a .class file using jasmin
(5) Put this .class file in a folder where the rest of my makefiles (the ones that already build and deploy the OS and the class files from Java apps) can take over.
The current option seems to be to just keep using makefiles, but that is a bit unwieldy (I already have 5 different makefiles and this would further extend that chain). I've also read a bit about SCons. In essence, I'm wondering what the recommended tools or a good approach for complicated builds like this would be.
Hopefully this helps a bit, though the question as such could easily become the subject of a heated discussion without many helpful results.
As pointed out in the comments by others, you really need to automate the steps from your .c file to the point where you can integrate the result with the rest of your system.
There is generally nothing wrong with make, and you would not gain much by switching to SCons. You would get more ways to express what you want to do: among other things, if you wanted to write that automation directly inside the build system and its rules, you could use Python and not only shell (though if that were the concern, you could just as well call that Python code from make). But the essence of target, prerequisite, recipe is still there, and so is the need to write the automation for those .c-to-integration steps.
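As a concrete illustration, your five steps could be captured as one make rule chain, roughly like the sketch below. The Docker image name, jar locations, the fixups.sed script, and the exact lljvm/ClassFileAnalyzer invocations are placeholder assumptions to check against those tools' docs; this shows the shape, not a drop-in solution.

```make
# Sketch: the five manual steps as a make chain. Image names, tool
# entry points, and fixups.sed are placeholders to adapt.
BUILD   := build
CLASSES := classes

# (1) .c -> .class inside the lljvm container (hypothetical image name)
$(BUILD)/%.class: %.c | $(BUILD)
	docker run --rm -v "$(CURDIR):/work" my-lljvm-image /work/$< /work/$@

# (2) .class -> Jasmin source via ClassFileAnalyzer (check its real CLI)
$(BUILD)/%.j: $(BUILD)/%.class
	java -jar ClassFileAnalyzer.jar $< > $@

# (3) the former manual text edits, captured once as a sed script
$(BUILD)/%.fixed.j: $(BUILD)/%.j fixups.sed
	sed -f fixups.sed $< > $@

# (4)+(5) reassemble with jasmin; -d drops the .class where the
# existing OS/deployment makefiles already look for it (assumes the
# class declared in app.fixed.j is named app)
$(CLASSES)/%.class: $(BUILD)/%.fixed.j
	java -jar jasmin.jar -d $(CLASSES) $<

$(BUILD):
	mkdir -p $@

.PHONY: all
all: $(CLASSES)/app.class
```

With something like this, make all replays the whole pipeline whenever the .c file changes, and the manual edit in step (3) becomes a checked-in sed script instead of a by-hand chore.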
If you really wanted to look into alternative options, Bazel might be of interest. The downside is that the initial effort to write the rules to fit your needs could be costly, and depending on the size of your project it might just be too much. On the other hand, once that is done it is very easy to use (applying those rules to a growing code base), and you could also ditch the container and rely on Bazel's more lightweight sandboxing and external rules to get the tools and bits you need for your build, all with a single system for build description.

Possible issues with multiple solution files for the same set of projects

Is there a list of all (or nearly all) possible issues that could stem from maintaining multiple solution files for the same set of projects? The only reason for doing so is different versions of Visual Studio.
I'm aware of the glaring issue where new projects are added in one solution file, that haven't been synced to the other. What are some others?
Disclaimer: my current company is still entrenched in VS10, mainly for political reasons. So please, spare me the preaching about the need for a single solution and how this is not the optimal "solution".
I've seen this done all the time, and for the most part it is perfectly fine; other than what you mentioned, any files added have to be added to all of the solutions. However, I would recommend you go with a make file of sorts; CMake is a very robust option, but there are plenty of others. The way they work is, basically, that you write one script defining how the project is to be built, and the end user runs CMake.exe on it. It takes that script and generates the proper solution and project files for your entire project in the version of VS you want; it also supports generating things like Xcode and Eclipse projects, so it is very multi-platform.
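For a flavour of that, a minimal CMakeLists.txt might look like the sketch below; the target names and source layout are hypothetical, and the generator names should be checked against your CMake version.

```cmake
# One build description; CMake generates per-IDE solution/project files.
cmake_minimum_required(VERSION 2.8)
project(MyUtilities)

# Hypothetical layout: two utilities sharing common sources.
set(COMMON_SRC src/common/Logging.cpp src/common/Config.cpp)
add_executable(UtilityOne src/utility1/main.cpp ${COMMON_SRC})
add_executable(UtilityTwo src/utility2/main.cpp ${COMMON_SRC})
```

Each developer then generates a solution for their own IDE, e.g. cmake -G "Visual Studio 10" or cmake -G "Visual Studio 9 2008", so no hand-maintained .sln files have to stay in sync.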

Programming experiments

I frequently code numerous experiments to test various algorithms, libraries, or hardware. All code, dependencies, and output of these experiments need to be annotated and saved, so that I can return to them later. Are there good common approaches to this problem? What do you do with your experiments after running them?
At a prior job we had a project in SVN called Area51 where people would write test code. The rules were:
create a package namespace
start via a public static void main
add comments via Javadoc
leave the project in a compilable state
the project can never be a dependency of other code
On a three-person team this worked out OK. We could put "what if" code there to share, and it was easy to run via the IDE or command line.
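A minimal experiment skeleton following those rules might look like this (the package and class names are hypothetical):

```java
package area51.jsmith.collections;

/**
 * What-if: how fast does HashMap fill at our key sizes?
 * Never referenced by production code; run from the IDE, or:
 * java area51.jsmith.collections.Experiment
 */
public class Experiment {
    public static void main(String[] args) {
        long start = System.nanoTime();
        java.util.Map<Integer, Integer> m = new java.util.HashMap<>();
        for (int i = 0; i < 1_000_000; i++) {
            m.put(i, i);
        }
        System.out.printf("HashMap fill: %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }
}
```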
When I do these, they are usually project specific, so they go in a subdirectory of the project (usually named "Investigations" in my case). This gets checked into the version control system with everything else.
Results (where appropriate) go into the same subdirectory of "Investigations" as the code used to produce the results.
http://subversion.tigris.org/
I just have a folder which I call OneOffCode
This is a folder of code I have written while learning a new technology, trying to prove a concept, etc. It is non-production code.
I usually back it up to a jump drive and move it with me from job to job, or computer to computer.
I'm usually switching between C# and C++, so I have a test console application for each in a "Sandbox" location, under source control. The console applications are both set up the same way: there is a Main which calls the test I'm trying at the time. When I'm done I keep the old methods and comments and just clear out Main when the next test comes along.
I don't know if it is the best, but after it is setup then it is pretty quick to get in, get the answers, get out and have it all saved for the next time.
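The C# side of that setup might look like the sketch below (the experiment names are hypothetical); Main only ever calls the current experiment, while finished ones stay behind as a record.

```csharp
using System;

// Sandbox console app: Main calls only the current experiment;
// old experiments are kept below as documentation of past answers.
class Program
{
    static void Main()
    {
        CurrentTest_StringSplit();
    }

    // Current question: how does Split treat empty fields?
    static void CurrentTest_StringSplit()
    {
        foreach (string part in "a,b,,c".Split(','))
            Console.WriteLine("[" + part + "]");
    }

    // Kept from last time: the "o" format round-trips a UTC timestamp.
    static void OldTest_DateTimeRoundTrip()
    {
        string s = DateTime.UtcNow.ToString("o");
        Console.WriteLine(DateTime.Parse(s, null,
            System.Globalization.DateTimeStyles.RoundtripKind));
    }
}
```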

Comparing VB6.exes

We're going through a massive migration project at the minute, trying to validate that the code deployed to the live estate matches the code we have in source control.
Obviously the .NET code is easy to compare because we can disassemble it. I don't believe this is possible with VB6 exes because of the manner of compilation.
Does anyone have any ideas on how I could validate that the source code and the compiled executable match the file I have in Live?
Thanks
Visual Basic had (has) two ways of compiling: one to the interpreter (called P-code), which results in smaller binaries, and a second that generates a "regular" Windows .exe file (called native), introduced because it was supposed to be faster than P-code, although the compiled file size increases with this option.
If your compilation used P-code, it is in theory possible to restore the sources.
Either way it is pretty difficult to do, but there are tools that claim they can partially manage it. One that I know of (never tried it, but there is a trial version) is VB Decompiler:
http://www.vb-decompiler.org/
Unfortunately that's almost impossible. Bear in mind that VB6 code compiled on different machines will have different exe sizes and deployment requirements.
This is why the old VB'ers had a dedicated machine to compile their code.
This won't help you with already deployed items, but if you upped the revision number on every compile (there is a project setting to do this for you automatically) then you could easily compare version numbers.
My old company bought a copy of that VB Decompiler, and as noted before, where VB5/6 generated P-code the tool did produce some source-like code; where it didn't, you got assembly code which could at least be "read".
If you still have the binaries you compiled, you could compare their CRCs to what is deployed in the field. If you don't have the original compiled binaries, it depends on how you compiled the code: if you used P-code rather than native code you may be able to disassemble it, but the disassembly will look nothing like your source code. I doubt you shipped the PDBs with the exes, but if you did, you could certainly use those to compare against the source code in your repository.
Have a trusted computer that can check out the various libraries and exes you make and compile them automatically. Keep the results in a read-only but accessible location, then do a binary comparison between the deployed site and your comparison site.
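A sketch of that comparison, assuming PowerShell 4+ (for Get-FileHash) and with hypothetical paths:

```powershell
# Compare trusted-build binaries against a mirror of the deployed estate.
$built    = Get-ChildItem 'C:\TrustedBuild\Output' -Recurse -Include *.exe,*.dll
$deployed = 'C:\DeployedMirror'

foreach ($file in $built) {
    $liveCopy = Join-Path $deployed $file.Name
    if (-not (Test-Path $liveCopy)) {
        Write-Output "MISSING  $($file.Name)"
    } elseif ((Get-FileHash $file.FullName).Hash -ne (Get-FileHash $liveCopy).Hash) {
        Write-Output "DIFFERS  $($file.Name)"
    }
}
```

Given the earlier point that VB6 output differs between build machines, the hashes are only meaningful when the comparison copy comes from the same trusted build that produced the deployment.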
However, I am not sure of the logic of disassembling the compiled units. My company, and most other places I know of, use a combination of a build computer and unit testing. In our company the EXE we make is a very thin shell over a bunch of libraries; for example, a button click is passed to a UI ActiveX DLL that does the actual processing. After a build we run a special EXE that performs our list of unit tests. If they all pass, we know our libraries, where 90% of our code is, are good. As for the actual EXE, we have a manual procedure that takes about two hours, and then we are good. It is rare for any errors to show up in the EXE.

Structuring projects & dependencies of large winforms applications in C#

UPDATE:
This is one of my most-visited questions, and yet I still haven't really found a satisfactory solution for my project. One idea I read in an answer to another question is to create a tool which can build solutions 'on the fly' for projects that you pick from a list. I have yet to try that though.
How do you structure a very large application?
Multiple smallish projects/assemblies in one big solution?
A few big projects?
One solution per project?
And how do you manage dependencies in the case where you don't have one solution?
Note: I'm looking for advice based on experience, not answers you found on Google (I can do that myself).
I'm currently working on an application which has upwards of 80 DLLs, each in its own solution. Managing the dependencies is almost a full-time job. There is a custom in-house "source control" with added functionality for copying dependency DLLs all over the place. That seems like a suboptimal solution to me, but is there a better way? Working on a solution with 80 projects would be pretty rough in practice, I fear.
(Context: winforms, not web)
EDIT: (If you think this is a different question, leave me a comment)
It seems to me that there are interdependencies between:
Project/Solution structure for an application
Folder/File structure
Branch structure for source control (if you use branching)
But I have great difficulty separating these out to consider them individually, if that is even possible.
I have asked another related question here.
Source Control
We have 20 or 30 projects being built into 4 or 5 discrete solutions. We are using Subversion for SCM.
1) We have one tree in SVN containing all the projects organised logically by namespace and project name. There is a .sln at the root that will build them all, but that is not a requirement.
2) For each actual solution we have a separate trunk folder in SVN with svn:externals references to all the required projects, so that they get updated from their locations under the main tree.
3) In each solution is the .sln file plus a few other required files, plus any code that is unique to that solution and not shared across solutions.
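Item 2 relies on svn:externals; setting that up looks roughly like this (the repository paths are hypothetical, and the ^/ relative-URL syntax needs SVN 1.5+):

```sh
# From the root of the solution's working copy, map in the shared
# projects from their canonical locations under the main tree:
svn propset svn:externals "Company.Core ^/main/src/Company.Core
Company.Data ^/main/src/Company.Data" .
svn update    # fetches the external projects into this checkout
```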
Having many smaller projects is a bit of a pain at times (for example, the TortoiseSVN update messages get messy with all those external links), but it has the huge advantage that dependencies cannot be circular: our UI projects depend on the BO projects, but the BO projects cannot reference the UI (and nor should they!).
Architecture
We have completely switched over to using MS SCSF and CAB enterprise pattern to manage the way our various projects combine and interact in a Win Forms interface. I am unsure if you have the same problems (multiple modules need to share space in a common forms environment) but if you do then this may well bring some sanity and convention to how you architect and assemble your solutions.
I mention that because SCSF tends to merge BO and UI type functions into the same module, whereas previously we maintained a strict 3 level policy:
FW - Framework code. Code whose function relates to software concerns.
BO - Business Objects. Code whose function relates to problem domain concerns.
UI - Code which relates to the UI.
In that scenario the dependencies are strictly UI -> BO -> FW.
We have found that we can maintain that structure even while using SCSF generated modules so all is good in the world :-)
To manage dependencies, whatever the number of assemblies/namespaces/projects you have, you can take a look at the tool NDepend.
Personally, I favour a few large projects, within one or several solutions if needed. I wrote about my motivations for doing so here: Benefit from the C# and VB.NET compilers perf
I think it's quite important that you have a solution that contains all your 80 projects, even if most developers use other solutions most of the time. In my experience, I tend to work with one large solution, but to avoid the pain of rebuilding all the projects each time I hit F5, I go to Solution Explorer, right-click on the projects I'm not interested in right now, and do "Unload Project". That way, the project stays in the solution but it doesn't cost me anything.
Having said that, 80 is a large number. Depending on how well those 80 break down into discrete subsystems, I might also create other solution files that each contain a meaningful subset. That would save me the effort of lots of right-click/Unload operations. Nevertheless, the fact that you have one big solution means there is always a definitive view of their inter-dependencies.
In all the source control systems that I've worked with, their VS integration chooses to put the .sln file in source control, and many don't work properly unless that .sln file is in source control. I find that intriguing, since the .sln file used to be considered a personal thing, rather than a project-wide thing. I think the only kind of .sln file that definitely merits source control is the "one-big-solution" that contains all projects. You can use it for automated builds, for example. As I said, individuals might create their own solutions for convenience, and I'm not against those going into source control, but they're more meaningful to individuals than to the project.
I think the best solution is to break it into smaller solutions. At the company I currently work for, we have the same problem: 80+ projects in one solution. What we have done is split it into several smaller solutions with projects that belong together. Dependent DLLs from other projects are built, linked into the project, and checked into the source control system together with the project. This uses more disk space, but disk is cheap. Doing it this way, we can stay with version 1 of a project until upgrading to version 1.5 is absolutely necessary. You still have the job of adding DLLs when deciding to upgrade to another version of a dependency, though. There is a project on Google Code called TreeFrog that shows how to structure the solution and development tree. It doesn't contain much documentation yet, but I guess you can get an idea of how to do it by looking at the structure.
A method that I've seen work well is having one big solution containing all the projects, so that a project-wide build can be tested (no one really used this to build with, though, as it was too big), and then smaller solutions for developers to use, each grouping various related projects together.
These did have dependencies on other projects but, unless the interfaces changed or they needed to update the version of the DLL they were using, developers could continue to use the smaller solutions without worrying about everything else.
Thus they could check-in projects while they were working on them, and then pin them (after changing the version number), when other users should start using them.
Finally, once or twice a week or even more frequently, the entire solution was rebuilt using pinned code only, thus checking that the integration was working correctly and giving testers a good build to test against.
We often found that huge sections of code didn't change frequently, so it was pointless loading it all the time. (When you're working on the smaller projects.)
Another advantage of this approach is that in certain cases we had pieces of functionality which took months to complete; the above approach meant these could continue without interrupting other streams of work.
I guess one key criteria for this is not having lots of cross dependencies all over your solutions, if you do, this approach might not be appropriate, if however the dependencies are more limited, then this might be the way to go.
For a couple of systems I've worked on we had different solutions for different components. Each solution had a common Output folder (with Debug and Release sub-folders)
We used project references within a solution and file references between them. Each project used Reference Paths to locate the assemblies from other solutions. We had to manually edit the .csproj.user files to add a $(Configuration) msbuild variable to the reference paths as VS insists on validating the path.
For builds outside of VS I've written msbuild scripts that recursively identify project dependencies, fetch them from subversion and build them.
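The .csproj.user edit mentioned above looks roughly like this (the shared output root is a hypothetical path); VS's Reference Paths UI validates the literal path, so the variable has to go in by hand:

```xml
<!-- .csproj.user: point reference resolution at the common output
     folder for whichever configuration is being built. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <ReferencePath>C:\Source\SharedOutput\$(Configuration)\</ReferencePath>
  </PropertyGroup>
</Project>
```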
I gave up on project references (although your macros sound wonderful) for the following reasons:
It wasn't easy to switch between different solutions where sometimes dependency projects existed and sometimes didn't.
Needed to be able to open the project by itself and build it, and deploy it independently from other projects. If built with project references, this sometimes caused issues with deployment, because a project reference caused it to look for a specific version or higher, or something like that. It limited the mix and match ability to swap in and out different versions of dependencies.
Also, I had projects pointing to different .NET Framework versions, so a true project reference wasn't always happening anyway.
(FYI, everything I have done is for VB.NET, so not sure if any subtle difference in behavior for C#)
So, I:
I build against any project that is open in the solution; those that aren't are resolved from a global folder, like C:\GlobalAssemblies.
My continuous integration server keeps this up to date on a network share, and I have a batch file to sync anything new to my local folder.
I have another local folder, like C:\GlobalAssembliesDebug, where each project has a post-build step that copies its bin folder's contents to this debug folder, but only when in DEBUG mode (see the build-event sketch after this list).
Each project has these two global folders added to its reference paths (first C:\GlobalAssembliesDebug, then C:\GlobalAssemblies). I have to add these reference paths to the .vbproj files manually, because Visual Studio's UI adds them to the .vbproj.user file instead.
I have a pre-build step that, if in RELEASE mode, deletes the contents from C:\GlobalAssembliesDebug.
In any project that is the host project, if there are non-DLL files that I need to copy (text files output to other projects' bin folders that I need), then I put a pre-build step on that project to copy them into the host project.
I have to manually specify the project dependencies in the solution properties, to get them to build in the correct order.
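As an illustration of the post-build and pre-build steps above, the Build Events might look like this (folder names as in this answer; the exact xcopy/del flags are a reasonable guess to adapt):

```bat
REM Post-build event: publish DEBUG output to the shared debug folder.
if "$(ConfigurationName)" == "Debug" xcopy "$(TargetDir)*.*" "C:\GlobalAssembliesDebug\" /Y /Q

REM Pre-build event: in RELEASE, clear the debug folder so release
REM builds resolve only against C:\GlobalAssemblies.
if "$(ConfigurationName)" == "Release" del /Q "C:\GlobalAssembliesDebug\*.*"
```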
So, what this does is:
Allows me to use projects in any solution without messing around with project references.
Visual Studio still lets me step into dependency projects that are open in the solution.
In DEBUG mode, it builds against open loaded projects. So, first it looks to the C:\GlobalAssembliesDebug, then if not there, to C:\GlobalAssemblies
In RELEASE mode, since it deletes everything from C:\GlobalAssembliesDebug, it only looks to C:\GlobalAssemblies. The reason I want this is so that released builds aren't built against anything that was temporarily changed in my solution.
It is easy to load and unload projects without much effort.
Of course, it isn't perfect. The debugging experience is not as nice as a project reference. (Can't do things like "go to definition" and have it work right), and some other little quirky things.
Anyways, that's where I am on my attempt to make things work for the best for us.
We have one gigantic solution in source control, on the main branch.
But every developer/team working on a smaller part of the project has its own branch, which contains one solution with only the few projects that are needed. That way, the solution is small enough to be easily maintained and does not influence the other projects/DLLs in the larger solution.
However, there is one condition for this: there shouldn't be too many interconnected projects within the solution.
OK, having digested this information, and also answers to this question about project references, I'm currently working with this configuration, which seems to 'work for me':
One big solution, containing the application project and all the dependency assembly projects
I've kept all project references, with some extra tweaking of manual dependencies (right click on project) for some dynamically instantiated assemblies.
I've got three Solution folders (_Working, Synchronised and XTernal); given that my source control isn't integrated with VS (sob), this allows me to quickly drag and drop projects between _Working and Synchronised so I don't lose track of changes. The XTernal folder is for assemblies that 'belong' to colleagues.
I've created myself a 'WorkingSetOnly' configuration (last option in Debug/Release drop-down), which allows me to limit the projects which are rebuilt on F5/F6.
As far as the disk is concerned, all my project folders sit in just a few folders (so just one level of categorisation above projects).
All projects build (dll, pdb & xml) to the same output folder, and have the same folder as a reference path. (And all references are set to Don't copy) - this leaves me the choice of dropping a project from my solution and easily switching to file reference (I've got a macro for that).
At the same level as my 'Projects' folder, I have a 'Solutions' folder, where I maintain individual solutions for some assemblies - together with Test code (for example) and documentation/design etc specific to the assembly.
This configuration seems to be working ok for me at the moment, but the big test will be trying to sell it to my colleagues, and seeing if it will fly as a team setup.
Currently unresolved drawbacks:
I still have a problem with the individual assembly solutions, as I don't always want to include all the dependent projects. This creates a conflict with the 'master' solution. I've worked around this with (again) a macro which converts broken project references to file references, and restores file references to project references if the project is added back.
There's unfortunately no way (that I've found so far) of linking Build Configuration to Solution Folders - it would be useful to be able to say 'build everything in this folder' - as it stands, I have to update this by hand (painful, and easy to forget). (You can right click on a Solution Folder to build, but that doesn't handle the F5 scenario)
There is a (minor) bug in the Solution folder implementation which means that when you re-open a solution, the projects are shown in the order they were added, and not in alphabetical order. (I've opened a bug with MS, apparently now corrected, but I guess for VS2010)
I had to uninstall the CodeRushXPress add-in, because it was choking on all that code, but this was before having modified the build config, so I'm going to give it another try.
Summary - things I didn't know before asking this question which have proved useful:
Use of solution folders to organise solutions without messing with disk
Creation of build configurations to exclude some projects
Being able to manually define dependencies between projects, even if they are using file references
This is my most popular question, so I hope this answer helps readers. I'm still very interested in further feedback from other users.