We have a huge codebase with around 27000 files in ClearCase UCM. Our build process is as follows:
Copy files from the dynamic view of the stream to the local machine (say directory D:\ABC)
Start compilation
The next time we compile we clean up D:\ABC and repeat the above process. The copying takes around 50 minutes.
The reasons we prefer dynamic views over snapshot views are:
We can always be sure that we are using the latest code
We generate a lot of code and modify a few existing ones during compilation. This may turn snapshot views dirty.
We are saved from the trouble of cleaning up the snapshot views, rebasing them, etc.
The troubles with snapshot views are:
We need to clean up the code we generated for the last build (these files are shown as view-private)
We need to undo hijacks (we remove read-only from some files as they have to be modified at compile time)
We have to clean up the output directories and files therein, created by Visual Studio during compilation
We need to rebase the snapshot view every time we intend to compile
We do not trust the snapshot view's cleanliness
My questions:
Are we doing the right thing by copying files from the dynamic views?
I wanted to know if there is some way we can use snapshot views and still be sure they are clean?
Is there any other option or best practices that we can adopt to improve our process?
Any help would be appreciated.
1/ No:
Copying from a dynamic view is waaay longer than directly using a snapshot view that you would simply update (to catch the latest code)
Plus, during the copy, a file can be updated (new version checked in), and would then be copied by your process (because the dynamic view would... dynamically pick up said new version). In short: you don't know what you are copying.
An update of a snapshot view is an incremental process.
Copying a dynamic view is not (it will copy everything instead of downloading only the delta)
2/ You would run update -overwrite to make sure any hijacked file is overwritten with the version selected by the view
3/ Using a Baseline is safer, in order to get a fixed point in time of the code base
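As a minimal sketch of that workflow, assuming a snapshot view rooted at D:\ABC (the view path comes from the question; the command is standard cleartool):
rem incrementally download only the changed versions; -overwrite replaces any hijacked files
cleartool update -overwrite D:\ABC
Because only the delta is downloaded, this should replace the 50-minute full copy.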
For years I have just blindly accepted that once in a while I need to delete the Derived Data folder.
The Internet mostly comes up with ways to delete it :-)
Can someone explain why we need Derived Data and not just have output relative to each project in Xcode - I am sure it is something smart, but what?
Note:
I know how to change it, but it is more a question of whether there is any thinking behind having it.
I also know how to git ignore.
So if it is for speeding up builds, there must be a way to reference other Derived Data frameworks in projects?
Thanks
The module-based nature of Swift building and linking requires the creation of dozens of ancillary files (apinotesc and pcm files) in the module cache. It is cheaper and (subsequently) faster to create these once for all projects. Thus the default is that there is one location for one module cache.
Another advantage is that when cleaning up the derived data files (which take up a lot of room), as you yourself admit one needs to do from time to time, it is easier to find them all if they are in one location together. Imagine if they were distributed inside every individual project folder!
Can someone explain why we need Derived Data and not just have output relative to each project in Xcode - I am sure it is something smart, but what?
The files in the derived data folder are intermediate files. Having them around lets Xcode avoid doing work that it has already done previously, and so speeds up your builds. If you delete those files, there's no long-term harm done -- Xcode just has to go and create them again. That takes time, so your build will take longer, but otherwise you'll get the same result.
The reason not to put them in the project folder is that they're not really part of the project. If you use version control (you do, right?), you wouldn't want to have to configure your software to ignore parts of the project, and you wouldn't want to commit any of those derived data files either. And again, removing the derived data files doesn't change the project at all; it only changes what Xcode remembers about the project from one build to the next.
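To illustrate the trade-off, a hedged sketch of opting into project-local derived data for a one-off command-line build (the scheme name is made up; -derivedDataPath is a standard xcodebuild flag):
# build with project-local derived data instead of the shared cache
xcodebuild -scheme MyScheme -derivedDataPath ./DerivedData build
# if you do that, keep it out of version control, e.g. in .gitignore:
DerivedData/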
I'm new on a project and the building is quite slow.
Now I see the following action as a post-build event for a lot of projects:
<PostBuildEvent>rd "$(ProjectDir)obj" /S /Q</PostBuildEvent>
I've read that the obj folder keeps track of the builds so incremental builds can be faster, so I thought maybe this has something to do with it.
However, nobody on my team knows why this is done (the removal of this folder), so I'm a bit hesitant to just remove the build action.
What can be a reason to perform this action?
A couple of things come to mind (all rather questionable by themselves):
Custom build steps in the same or, God forbid, another project that require it (for the next build to succeed).
A (misguided) attempt to preserve disk space (since everything "precious" is in "bin" after the build, you technically don't need "obj").
A (misguided) attempt to implement "clean, clobber, etc." semantics.
One needs more information about the complete build system, other projects, etc. that you have in place to find out more or better reasons - if at all ;-)
The only plausible reason to perform this kind of action is a lack of knowledge about the power of the MSBuild utility.
I believe the underlying requirement (if it exists) could be achieved another way, one that does not defeat the incremental build feature.
Try to find the author of that line in the VCS you are using; if the author is unavailable or cannot answer the question, warn your colleagues, remove it, and see what happens.
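For example, if the requirement really is clean-up semantics, a hedged sketch of expressing it with standard MSBuild instead of a post-build rd (RemoveDir and BaseIntermediateOutputPath are standard MSBuild; whether this matches the original author's intent is an assumption):
<!-- runs only on an explicit Clean, so normal incremental builds keep their obj state -->
<Target Name="RemoveIntermediateDir" AfterTargets="Clean">
  <RemoveDir Directories="$(BaseIntermediateOutputPath)" />
</Target>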
There is a bug in Visual Studio where, if you move the obj directory by defining IntermediateOutputPath in the project file, the compiler still creates an empty obj directory anyway. I do both myself, but with VS2010. If VS2015 has fixed this, you may be able to remove it.
I have a project with a sizable codebase. Associated with that codebase is a large amount of documentation that needs to be maintained at the same version as the source code and which also needs to be easily accessible from within the codebase. However, when our build machine builds the codebase, I do not want the length of our build process extended by having the build machine check out hundreds of megabytes of development documentation which is not needed for the build.
If this was on Unix I could simply have a 'docs' directory at the peer level of the codebase's 'source' directory. Then individual projects in the source tree could reference documentation in the docs tree using symlinks, and when the build machine does a build it would just check out the source directory and so not waste time checking out the unneeded docs directory.
However using SVN on Windows I don't see any way to set this up in a sensible way at all since SVN doesn't support symbolic links on Windows, even though Windows has them.
The only workaround I've come up with so far is to create batch files in the source tree which use cmd.exe and a relative file reference to open the documentation files in the docs tree. It works, but for some reason I can't quite put my finger on, it leaves a nasty taste in the mouth.
Can anyone think of a better way of achieving this?
After some research I think I have a solution using the externals property.
First, use the svn:externals property to reference a directory in the same repository. Set this on trunk/Proj1 to create Proj1/Docs referencing the contents of DocsDir/Proj1Docs:
../DocsDir/Proj1Docs Docs
This creates a disconnected child working copy inside Proj1/Docs which references /DocsDir/Proj1Docs. Proj1/Docs must not previously exist as part of the outer working copy (which makes sense, since that would make it part of two working copies at once). If you edit the contents of Proj1/Docs, then executing svn status inside the 'parent' working copy will list the changes to the child working copy, but you have to commit the changes to the child copy separately. Which is not a big deal.
Second, use the svn:externals property to reference a file in the same repository. Set this on trunk/Proj1 to create Proj1/Readme.txt, which references DocsDir/Readme.txt:
../DocsDir/Readme.txt Readme.txt
In the case of a file reference, the directory into which the referenced file is imported must already be part of the owning working copy. In this case no child working copy is created, and if you edit the file it is committed seamlessly as part of the owning working copy.
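Putting the two together, a hedged sketch of how the property would be set from inside a working copy of trunk/Proj1 (svn propset -F is standard; file externals require SVN 1.6 or later):
rem externals.txt contains the two definition lines shown above
svn propset svn:externals -F externals.txt .
svn commit -m "Reference shared docs via svn:externals"
svn update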
In both cases the build machine can execute
svn checkout --ignore-externals <path>
to checkout our codebase without all the bulky documentation.
Can anyone see a problem with this strategy?
I'm trying to improve our build process, and to that end I've been looking at turning off copy local and having the whole solution build to a common \bin directory.
What, however, is best practice for getting the no-longer-copied references into the bin directory? I don't want to do this in one of the actual implementation projects, as many of them use the same referenced components, and it would mean a proliferation of post-build steps.
I know I could create a custom MSBuild file, but then that would need to be run manually outside of Visual Studio (I think), which seems like friction. Is there a way I can create an MSBuild project, for example, and then have that as part of my solution?
Or is it best just to manage this outside my solution build and have a copy_references.bat file which the dev has to run once to set up their environment, getting the references into the /bin/debug and /bin/release directories? This seems a bit fragile, but better than checking /bin and the files into svn directly.
One idea I've had is to create an empty c# component project and add the references to it, with copy local turned on. If this was then made a dependency of all other projects it would manage the copying.
Next question is how to manage this with nuget references? My preference is to not check the references into svn, but tell nuget to grab them. So this would also need to be a build step, but again at the solution level.
Additional Info
For a bit more background on why I am evaluating this approach, have a look here:
http://www.ndepend.com/Res%5CNDependWhiteBook_Assembly.pdf
The goal is to massively speed up compilation time by stopping all these redundant copies. A side benefit, if it works, might be no longer having to manually work around the times dependency evaluation doesn't work, which forces one to pull referenced assemblies' dependencies into the top-level project to ensure they end up in the bin folder.
I suppose in some ways the desire to turn off copy local is an artifact of the inefficiency of the MSBuild process at both tracing dependencies and evaluating the need to copy things.
You can override the $(OutDir) property globally and keep CopyLocal enabled. Since every project is copying to the same $(OutDir), you won't end up with too much duplication. This is pretty straightforward.
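For example, a hedged one-liner (the solution name and path are made up; note that OutDir traditionally wants a trailing slash):
rem build the whole solution into one shared output directory
msbuild MySolution.sln /p:OutDir=C:\build\bin\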
Much more involved, you can also create a shared import file that wires into the standard build and performs a custom post-build deployment. For example,
<Target Name="Deploy"
        AfterTargets="Build">
  <!-- copy all output files here, e.g.: -->
  <!-- use wildcards: $(OutDir)\*.dll -->
  <!-- or the primary output: $(OutDir)\$(TargetName)$(TargetExt) -->
  <!-- plus referenced assemblies, see below -->
</Target>
To get the references, you can call the ResolveAssemblyReferences target and use Returns, or create your own target to get a specific collection, as shown in the answer here:
Return the ReferenceCopyLocalPaths from <MSBuild> task
It can be rather involved, but easily configured if you can declare your own "rules" in an item array with metadata.
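As a hedged illustration of that last point (@(ReferenceCopyLocalPaths) is the standard item group populated by ResolveAssemblyReferences; $(CommonBinDir) is a made-up property for the shared bin):
<Target Name="CopyReferencesToCommonBin" AfterTargets="ResolveAssemblyReferences">
  <!-- copy every assembly that would have been copied locally into the shared bin -->
  <Copy SourceFiles="@(ReferenceCopyLocalPaths)"
        DestinationFolder="$(CommonBinDir)"
        SkipUnchangedFiles="true" />
</Target>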
I'm new to Xcode and I find the file management a huge pain. In most IDEs, you can simply have the project source tree reference a directory structure on disk. This makes it easy to add new files to your project - you simply put them on disk, and they will get compiled automatically.
With Xcode, it appears I have to both create the file and separately add it to the project (or be forced to manipulate the filesystem through the UI). But this means that sharing the .xcodeproj through source control is fraught with problems - often, we'll get merge conflicts on the .xcodeproj file - and when we don't, we often get linker errors, because during the merge some of the files that were listed in the project get excised. So I have to go and re-add them to the project file until I can get it to compile, and then re-check in the project file.
I'm sure I must be missing something here. I tried using 'folder references' but the code in them doesn't seem to get compiled. It seems insane to build an IDE that forces everyone to modify a single shared file whenever adding or removing files in a project.
Other answers notwithstanding, this is absolutely a departure from other IDEs, and a major nuisance. There's no good solution I know of.
The one trick I use a lot to make it a little more bearable — especially with resource directories with lots of files in them — is:
select a directory in the project tree,
hit the delete key,
choose "Remove References Only", then
drag the directory into the project to re-add it.
This clobbers any manual reordering of files, but it does at least make syncing an O(1) operation, instead of being O(n) in the number of files changed.
I'm intrigued which IDEs you're using that automatically compile everything in a directory, as no IDE I've ever used does that (at least for C++). I think it's pretty standard to have a project file containing a list of all the files. Often you may want to only include certain files for different targets, have per-file compiler settings, etc.
Anyway, given that that's how it does work, you really shouldn't have too many problems from merge conflicts. The best advice would be commit early and often so that you don't get out of step with other people's changes. Merely adding files to the project shouldn't result in a conflict unless they happen to be added at exactly the same point in the project tree. We've been using Xcode in our team for years and we very rarely get conflicts: only if someone has restructured the project.
Fortunately, because the Xcode file format is text, it's generally quite easy to resolve conflicts when they occur, unlike the Bad Old Days of CodeWarrior with its binary format.