How to use Subversion with HelpNDoc - Windows

I am writing documentation for a project that involves multiple developers. We use Subversion (SVN) to work on our code base.
I wrote the first draft of the documentation using HelpNDoc, which I like for the nice tree view and ease of use; the problem is that the project is a single file, so I don't know how to use SVN to allow other developers to contribute to the documentation and update it.
Do you know if it's possible? If not, can you recommend a nice, easy-to-use tool with a tree view of the documentation that can be used with SVN, or that otherwise makes it possible for multiple users to update it? We use Windows.

HelpNDoc projects are binary files based on the SQLite open source database engine. The advantage is that the whole documentation is stored in a single file, so it can easily be copied, moved, shared, backed up...
However, one drawback is that it has to be checked in as binary content in any version control system, including Subversion: diff and merge are not possible on those files.
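If the project file does go into SVN anyway, it's worth forcing binary handling and serializing edits with a lock; a minimal sketch (the file name is an assumption):

svn propset svn:mime-type application/octet-stream Manual.hnd
svn propset svn:needs-lock "*" Manual.hnd
svn commit -m "Treat the HelpNDoc project as binary and require a lock" Manual.hnd

With svn:needs-lock set, working copies stay read-only until someone runs svn lock, so two authors can't silently produce unmergeable versions in parallel.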
One possible solution would be to use external documents in HelpNDoc's library: each user works on her own document (which can be a Word document, an HTML web page...) and a master HelpNDoc project is created to include those documents at generation time. See "Include a file at generation time" in the following step-by-step guide: How to add an item to the library
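A repository layout for that approach might look like this (names are illustrative): each chapter is a mergeable text file owned by one author, and only the thin master project remains binary:

docs/
  Master.hnd          <- HelpNDoc project; includes the chapters at generation time
  chapters/
    introduction.html
    installation.html
    reference.html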

The number of files doesn't matter; the actual format (text or binary) does. If SVN, or any VCS, can merge two HelpNDoc files with diverged history (just try it by hand), you'll be happy.
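To try it by hand (file name assumed): check out two working copies, change Manual.hnd in both, commit one, then run svn update in the other. Because the file is binary, SVN can't merge the two histories; it flags a conflict and leaves the .mine and revision copies next to the file, so someone has to pick a winner manually.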

I once used Helpinator for software documentation; it's pretty close to HelpNDoc, but its storage format is more suitable for version control.

Related

Should NDepend's output folder be added to source control?

Background
I am new to NDepend and wish to use it on a project that will be maintained by multiple developers in Subversion.
I am quite keen on keeping historic NDepend analysis results, and notice that NDepend does that quite well by default by placing such results in the folder defined by $(NdProjectOutputDir) - usually a subfolder called NDependOut immediately beneath where the .ndproj file is located.
This means, however, that the generated NDepend XML and binary files are located alongside my source code.
I have read the following articles by NDepend:
Trend Monitoring
Logging Trend Metrics
...and even tried a Google search for "ndepend ndependout source control", which at the time of writing was not particularly useful for my questions below.
Questions
Should NDependOut be added to SCM? Its contents strike me as output artifacts derived from the source code, in the same way the output of compiling a project is, and so should not be added. But I am not sure that philosophy applies here.
If the NDependOut folder is added to source control for the benefit of other users, do I run the risk of conflict?
Should we instead nominate one person (or perhaps a build machine) to be the sole publisher of such reports?
I note that in the root NDependOut folder there are two files:
NDependAnalysisResult_VIP_2015Jun11_18h33.ndar - binary
InfoWarnings.xml - obviously text
As you noticed, you are quite free to decide. When you wrote:
This means, however, that the generated NDepend XML and binary files are located alongside my source code.
...maybe it means you haven't seen that you can customize both...
the historic analysis result + report storage root folder,
and the trend metrics storage root folder,
...both from NDepend Project Properties > Analysis.
To answer your questions in reverse order:
3) I'd say that if you choose to store this data in your SCM, it should be done automatically by your build process after a successful build + NDepend analysis (see the sketch after this list).
2) No, there is no risk of conflict for historic analysis results + reports, since they are stored in a hierarchy of folders named after the build date/time. Concerning trend metrics, yes, there is a risk of conflict, since the storage is one XML file per year.
1)
It is certainly worth sharing trend metrics with the team through SCM.
It is certainly worth sharing the baseline for diff analysis results as well through SCM. Typically the baseline for diff represents a code snapshot of the last release in production.
It is certainly worth sharing the most recent report generated by the build process (not necessarily through SCM; it should be available through a URL).
Concerning all intermediate analysis results, it is up to you, but if they are not used as a baseline for diff, they will probably be useless.
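Regarding 3), a minimal sketch of what that automated build step might run once the analysis has succeeded (the working-copy layout and the commit message are assumptions):

svn add --force NDependOut
svn commit -m "NDepend analysis results for latest build" NDependOut

svn add --force walks the already-versioned folder and picks up any unversioned result files before the commit.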

Why do vcxproj.filters files exist?

Shouldn't vcxproj.filters be embedded in the .vcxproj? As it stands, I have to check it in to source control so others can see the folder structure in the solution.
According to what Dan Moseley says in this question, they also wanted to separate the tree structure from the build-specific information, because changing the tree structure would cause an update to the project file, and that in turn would trigger a rebuild. Moving the logical view of the project to a separate file avoids this.
They were embedded, in fact, in previous versions of Visual Studio. Back then the extension was .vcproj and the filters were stored inside the project file. However, as of Visual Studio 2010 it was decided to pull the filter information out into a separate file.
It is really up to each team to decide whether to add this file to source control or not. If you want all developers to have the same structure (for ease of communication), it might be wise to check the filters in. If you want to allow each developer to use their own logical view, then don't.
The .vcxproj file contains the commands for the MSBuild environment: the files that should be built and the arguments telling the compiler and linker how to build the source files.
Because of this, the development team decided that the 'view' of the files in Solution Explorer should not be contained in the MSBuild file, but in a separate file.
So this was done to separate the build settings from the view you have.
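For illustration, here is roughly what a minimal .vcxproj.filters file looks like (names and the GUID are arbitrary); note that it carries only the logical grouping and none of the build settings:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <!-- The folder as shown in Solution Explorer -->
    <Filter Include="Source Files">
      <UniqueIdentifier>{4fc737f1-c7a5-4376-a066-2a32d752a2ff}</UniqueIdentifier>
      <Extensions>cpp;c;cc;cxx</Extensions>
    </Filter>
  </ItemGroup>
  <ItemGroup>
    <!-- Mapping of each source file to a filter; the build itself ignores this file -->
    <ClCompile Include="main.cpp">
      <Filter>Source Files</Filter>
    </ClCompile>
  </ItemGroup>
</Project>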

AnkhSVN + Visual Studio - working with linked files

I could use some advice.
I'm in the process of adopting Subversion, and I'm trying to put some existing Visual Studio 2010 projects into a repository. I have the current version of AnkhSVN.
The projects I have are organised as;
VS2010_projects\Project_A
VS2010_projects\Project_B
VS2010_projects\Project_C
VS2010_projects\Common_code
Where Project_A, Project_B and Project_C may all refer to one or more files in "Common_code".
In Visual Studio, these files will have been added using "add as link".
There is no actual project in "Common_code", just a collection of useful code files which we're likely to re-use in different projects.
(If we have a module or class which is re-used in various projects, then we often keep a single master copy in 'common-code', and link to it.)
Visual Studio has no problem with this.
When I add any of the actual projects to subversion, all of their own files are added just fine, but the linked files are ignored.
(And as a consequence, if I then get a working copy of those files, it's just the project files that get handled; I won't get a copy of the linked files.)
If I right-click on any of the linked files, the only Subversion options I get are to refresh their status or to select the working folder.
I was wondering what the correct way to handle this situation is.
Any advice would be much appreciated.
Thanks!
Robert
If I understand your question correctly, then I think SVN is acting in the desired way. A linked file is merely a reference to another file; that reference exists only in the .csproj file, which is checked in. It would not make sense to have two copies of the same file in source control, and it could lead to versioning issues. The first time you check out your repository, building your projects should copy the files from Common_code to the places where they're linked.
As an aside, we've had a lot of random issues with .csproj linked files and SVN, and so we try to avoid linked files where possible. A better way to re-use files across projects is obviously just to embed them in a library and then reference that library. This should work fine, with the exception of certain files like JavaScript/CSS.
Also, you may want to check out SVN externals; a workmate mentioned they can be used to share common libraries between multiple projects, although as a disclaimer I haven't tried this myself and can't comment on the merits or drawbacks of the approach.
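For reference, an svn:externals definition is a property on the directory that should contain the shared code; a minimal sketch (the repository URL is an assumption):

svn propset svn:externals "Common_code https://server/svn/repo/trunk/Common_code" VS2010_projects
svn commit -m "Share Common_code via svn:externals" VS2010_projects

After the next svn update, VS2010_projects\Common_code is checked out from that URL into every working copy automatically.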
Thanks for the advice, I actually did something similar to your suggestion.
I didn't want to make a full-blown library, but I did make up a dummy project and put my shared files into that.
Then I added the dummy project to the repository.
AnkhSVN now seems to be satisfied that the linked files are under Subversion control, and seems to handle them just fine.
(I haven't added any reference to the dummy project to my existing projects - they just use the linked files as before - but now AnkhSVN shows me their status, and allows me to get the latest version and commit changes.)
I can see the case for having a proper library - but that would have meant modifying a large body of existing projects. This approach lets me get up and running with Subversion without requiring those changes first.

Should a .sln be committed to source control?

Is it a best practice to commit a .sln file to source control? When is it appropriate or inappropriate to do so?
Update
There were several good points made in the answers. Thanks for the responses!
I think it's clear from the other answers that solution files are useful and should be committed, even if they're not used for official builds. They're handy to have for anyone using Visual Studio features like Go To Definition/Declaration.
By default, they don't contain absolute paths or any other machine-specific artifacts. (Unfortunately, some add-in tools don't properly maintain this property, for instance, AMD CodeAnalyst.) If you're careful to use relative paths in your project files (both C++ and C#), they'll be machine-independent too.
Probably the more useful question is: what files should you exclude? Here's the content of my .gitignore file for my VS 2008 projects:
*.suo
*.user
*.ncb
Debug/
Release/
CodeAnalyst/
(The last entry is just for the AMD CodeAnalyst profiler.)
For VS 2010, you should also exclude the following:
ipch/
*.sdf
*.opensdf
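If you are on Subversion rather than Git, the same exclusions can be expressed with the svn:ignore property on the project directory; a minimal sketch:

svn propset svn:ignore "*.suo
*.user
*.ncb
Debug
Release
ipch
*.sdf
*.opensdf" .

(Unlike .gitignore, svn:ignore is not recursive; set it on each directory where such files appear, or use the -R flag.)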
Yes -- I think it's always appropriate. User-specific settings are in other files.
Yes you should do this. A solution file contains only information about the overall structure of your solution. The information is global to the solution and is likely common to all developers in your project.
It doesn't contain any user specific settings.
You should definitely have it. Besides the reasons other people mentioned, it's needed to make a one-step build of the whole solution possible.
I generally agree that solution files should be checked in; however, at the company I work for we have done something different. We have a fairly large repository, and developers work on different parts of the system from time to time. To support the way we work, we would need either one big solution file or several smaller ones. Both options have a few shortcomings and require manual work on the developers' part. To avoid this, we made a plug-in that handles all that.
The plug-in lets each developer check out a subset of the source tree to work on, simply by selecting the relevant projects from the repository. It then generates a solution file and modifies project files on the fly for the given solution. It also handles references. In other words, all the developer has to do is select the appropriate projects, and the necessary files are generated/modified. This also allows us to customize various other settings to ensure company standards.
Additionally, we use the plug-in to enforce various check-in policies, which generally prevent users from submitting faulty/non-compliant code to the repository.
Yes, things you should commit are:
solution files (*.sln),
project files,
all source files,
app config files,
build scripts.
Things you should not commit are:
solution user options (*.suo) files,
build-generated files (e.g. produced by a build script) [Edit:] - only if all necessary build scripts and tools are available under version control (to ensure builds can be reproduced authentically from the VCS history).
Regarding other automatically generated files, there is a separate thread.
Yes, it should be part of the source control.
Whenever you add or remove projects in your application, the .sln gets updated, and it is good to have it under source control. It allows you to pull out your application code as it was two versions back and directly do a build (if required).
Yes, you always want to include the .sln file; it contains the links to all the projects that are in the solution.
Under most circumstances, it's a good idea to commit .sln files to source control.
If your .sln files are generated by another tool (such as CMake) then it's probably inappropriate to put them into source control.
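For example, when CMake is the source of truth, the .sln is a build artifact that anyone can regenerate with a command like the following (the generator name varies by CMake and Visual Studio version):

cmake -G "Visual Studio 10 2010" ..

In that setup you commit the CMakeLists.txt files and ignore the generated solution and project files.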
We do, because it keeps everything in sync. All the necessary projects are located together, and no one has to worry about missing one. Our build server (AnthillPro) also uses the .sln to figure out which projects to build for a release.
We usually put all of our solution files in a solutions directory. This way we separate the solution from the code a little bit, and it's easier to pick out the project I need to work on.
The only case where you would even consider not storing it in source control would be if you had a large solution with many projects in source control, and you wanted to create a small solution with some of the projects from the main solution for some private, transient requirement.
Yes - Everything used to generate your product should be in source control.
We keep our solution files in TFS Version Control. But since our main solution is really large, most developers have a personal solution containing only what they need. The main solution file is mostly used by the build server.
.slns are the only thing we haven't had problems with in TFS!

Tools for diffing Windows binaries?

Our QA team wants to focus their testing on the EXEs and DLLs that have actually changed between builds. We have a nice SVN change report, but the relationship between source and changed binaries isn't always obvious. The builds we're comparing are always full clean builds, so we can't use file-system timestamps. I'm looking for tools to compare Windows (and Windows CE) PE binaries that ignore the embedded timestamps and other cruft. Any recommendations for tools, or other ways to generate a reliable 'what binaries have really changed' report? Thanks.
Clarification: Thanks for the answers so far, but we can't generate the report by doing straightforward byte-for-byte compares or comparing checksums, because every file appears different on every build, even when the sources haven't changed, due to timestamps the compiler inserts. The problem is how to ignore these false positives. The disassemble-and-compare idea is closest to what we need, I think...
Answered! BinDiff is just what I was looking for. Many thanks.
Have you had a look at BinDiff?
I ran into this problem before. My solution was to write a tool that set all the timestamps in an .EXE/.DLL to a known value. I ran it as a post-build step; then binary diffs worked just fine.
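For illustration, a minimal sketch of such a post-build tool in Python; it only clears the TimeDateStamp in the COFF header (linkers may also stamp the debug directory and the export table, which a complete tool would need to handle):

import struct
import sys

def zero_pe_timestamp(path):
    """Overwrite the TimeDateStamp field of a PE file's COFF header with zero."""
    with open(path, "r+b") as f:
        dos_header = f.read(0x40)
        # e_lfanew at offset 0x3C points to the "PE\0\0" signature
        pe_offset = struct.unpack_from("<I", dos_header, 0x3C)[0]
        f.seek(pe_offset)
        if f.read(4) != b"PE\x00\x00":
            raise ValueError(path + " is not a PE file")
        # COFF header layout: Machine (2 bytes), NumberOfSections (2 bytes), then TimeDateStamp (4 bytes)
        f.seek(pe_offset + 8)
        f.write(struct.pack("<I", 0))

if __name__ == "__main__":
    for name in sys.argv[1:]:
        zero_pe_timestamp(name)

Run it over the freshly built EXEs/DLLs as a post-build step, and byte-for-byte comparison (or hashing) becomes meaningful again.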
You could perhaps disassemble the binary and then do a diff on the assembly...
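For example, with the MSVC toolchain (fc is the stock Windows file comparer; file names are assumptions):

dumpbin /disasm /out:old.asm old_build\MyApp.exe
dumpbin /disasm /out:new.asm new_build\MyApp.exe
fc old.asm new.asm

Addresses embedded in the disassembly can still differ between builds, so some noise may remain.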
This sounds like your QA team is taking the wrong approach, though... It shouldn't matter to them what the code looks like, just that it does what it's supposed to do.
Edit:
Oh! After reading it again, I realized that I misinterpreted your question. I thought they wanted to test the methods that had changed...
In that case, why not compute the MD5 hashes and compare those? The tiniest change will cause a totally different hash to be generated.
I'm not sure what kind of binaries you mean (DLLs? PE/WinCE executables only? Other?). Is it possible to embed version information in the binaries, e.g. using a source-control tag that updates the version in the source code on commit? Then when a new build is created, the binary's version string would be updated as well. Instead of having to diff a binary file, you could use the version string and check that for changes.
Look at NDepend.
When I was working on the "home grown" tool for installation verification at my company, we used Beyond Compare as a backend for the comparison.
It has great file/folder comparison (binary as well) and scripting capabilities, and can output XML reports.
Project dependency graph generator and Dependency-Grapher for C++-Projects both use GraphViz to visualize dependencies. I imagine that you could use either of them as a basis for your needs with special highlighting of the branches in the dependency graph where source files or other leaves have changed.
MD5 hashes or checksums (as suggested above), a simple diff ignoring whitespace and filtering out comment changes, or changelist information from your version control system can signal which files have changed.
GNU binutils, specifically strings.
