Why Derived Data - why do we need it - xcode

For years I have just blindly accepted that once in a while I need to delete the Derived Data folder.
Searching the Internet mostly turns up ways to delete it :-)
Can someone explain why we need Derived Data rather than just having output relative to each project in Xcode? I am sure it is something smart, but what?
Note:
I know how to change it, but my question is more about whether there is any thinking behind having it.
I also know how to git-ignore it.
So if it is for speeding up builds, is there a way to reference frameworks from another project's Derived Data?
Thanks

The module-based nature of Swift building and linking requires the creation of dozens of ancillary files (apinotesc and pcm files) in the module cache. It is cheaper and faster to create these once for all projects. Thus the default is a single location holding a single module cache.
Another advantage is that when cleaning up the derived data files (which take up a lot of room) - as you yourself admit one needs to do from time to time - it is easier to find them all if they are in one location together. Imagine if they were distributed inside every individual project folder!

Can someone explain why we need Derived Data and not just have output relative to each project in Xcode - I am sure it is something smart, but what?
The files in the derived data folder are intermediate files. Having them around lets Xcode avoid redoing work it has already done, which speeds up your builds. If you delete those files, there's no long-term harm done - Xcode just has to go and create them again. That takes time, so your build will take longer, but otherwise you'll get the same result.
The reason not to put them in the project folder is that they're not really part of the project. If you use version control (you do, right?), you wouldn't want to have to configure it to ignore parts of the project, and you wouldn't want to commit any of those derived data files either. And again, removing the derived data files doesn't change the project at all; it only changes what Xcode remembers about the project from one build to the next.

Related

Why should you delete the 'obj' folder on every build?

I'm new on a project and the building is quite slow.
Now I see the following post-build event configured for a lot of the projects:
<PostBuildEvent>rd "$(ProjectDir)obj" /S /Q</PostBuildEvent>
I've read that the obj folder keeps track of the builds so incremental builds can be faster, so I thought maybe this has something to do with it.
However, nobody on my team knows why this is done (removing this folder), so I'm a bit hesitant to just remove the build action.
What can be a reason to perform this action?
A couple of things come to mind (all rather questionable by themselves):
Custom build steps in the same - or, God forbid, another - project that require it (for the next build to succeed).
A (misguided) attempt to preserve disk space (since everything "precious" ends up in "bin" after the build, you technically don't need "obj").
A (misguided) attempt to implement "clean, clobber, etc." semantics.
One would need more information about the complete build system, other projects, etc. you have in place to find more or better reasons - if there are any ;-)
The only plausible reason to perform this kind of action is a lack of knowledge about the power of MSBuild.
I believe the underlying requirement (if it exists) could be achieved another way that does not defeat the incremental build feature.
Try to find the author of that line in the VCS you are using; if the author is unavailable or cannot answer the question, warn your colleagues, remove it, and see what happens.
There is a bug in Visual Studio where, if you move the obj directory by defining IntermediateOutputPath in the project file, the compiler still creates an empty obj directory anyway. I do both myself, but with VS2010. If VS2015 has this fixed, you may be able to remove it.
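If the only goal behind the delete step was to keep obj out of the working folder, a gentler alternative (a sketch, not the original author's setup; the relative path below is illustrative) is to redirect the intermediate output in the project file rather than deleting it, which preserves incremental builds:

<PropertyGroup>
  <!-- Redirect intermediate build output away from the project folder.
       The trailing backslash matters; adjust the relative path to your layout. -->
  <IntermediateOutputPath>..\build\obj\$(Configuration)\</IntermediateOutputPath>
</PropertyGroup>

As noted above, some Visual Studio versions may still create an empty obj directory next to the project even when this property is set.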

Include in Project slowness with Visual Studio 2010

Our company won a web project from a new client. Their old vendor basically zipped up the code (C#/ASP.NET, including an enormous number of media files) and FTPed it to us and is no longer answering phone calls/supporting it in any way. There's no solution file, no project files, just code.
So I created an empty project locally and moved it to a network path and moved their code inside it because I don't even have enough space to host it locally. Their architecture is suspect, but I need to get it back up and running ASAP so I don't have time to reconsider that at the moment. I opened the project I created, selected "show all files" and attempted to include all of the paths (both media files and code paths) and the application hung. One of the media folders has something like 65,000 files in it. Do I even need to include this?
Regardless, "Include in Project" seems to take forever; I've spent hours wrestling with it, trying to do one folder at a time... but often it just hangs and I have to kill the process. Is there a faster way to deal with this? I tried editing the project file directly, but including this media folder made the solution take absolutely forever to load.
Any general suggestions on how to approach this situation?
As long as there is no direct reference, you don't need to include the media files in the project.
I bet those files are just loaded at runtime by some procedure. To make sure, do a full search of the sources for the media folder.
IMHO, get just the code files onto local storage, create a solution, and then add all resources and sources. If needed, you can copy the media files into the project again later.
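If the media folder does eventually have to live in the project, a hand-edited wildcard item is far lighter than 65,000 individual entries. A minimal sketch, assuming the folder is named Media and the files should be plain content items (both assumptions; adjust to your layout):

<ItemGroup>
  <!-- Pull in the whole asset folder recursively instead of listing every file. -->
  <Content Include="Media\**\*.*" />
</ItemGroup>

Be aware that Visual Studio 2010 still enumerates every matched file when loading the project, so a folder that size will slow the solution down regardless, and the IDE may expand the wildcard into individual entries if the folder is later touched through Solution Explorer.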
I had the same problem with local files. I probably killed VS2010 three times since it would always seem to lock up. I then recreated the folder structure, but not with the correct names, then saved the project, opened it with a text editor, and changed the names to match the actual structure. Finally, I used "Add > Existing Item". It's still slow, but a bit faster.
It's not hanging - if you leave it long enough it will finish. Know what you mean though - it took half a day to include dojo on one of my projects.
You may want to try SharpDevelop to include large folders into your projects - it seems much, much faster than Visual Studio at this task. You can then just re-open the project in VS. Hope this helps.

XCode: Project portability: How to handle code files shared between applications?

As I create more applications, my /code/shared/* increases.
This creates a problem: zipping and sending a project is no longer trivial. It looks like my options are:
1. In Xcode, set shared files to use absolute paths. Then every time I zip and send, I must also zip and send /code/shared/* and give instructions, and hope the recipient doesn't already have anything at that location. This is really not practical, and it makes the zip file too big.
2. Maintain a separate copy of my library files for each project. This is not really acceptable, as a modification/improvement would have to be implemented everywhere separately, which makes maintenance unreasonably cumbersome.
3. Some utility that goes through every file in the Xcode project, figures out the lowest common folder, and creates a zipped file structure containing only the necessary files, but in their correct relative folder locations, so that the code will still build.
(3) is what I'm looking for, but I have a feeling it doesn't yet exist.
Anyone?
You should rethink your current process. The workflow you're describing in (3) is not normal. This all sounds very complicated, and it would all basically be handled with relative ease if you were using source control. (3) just doesn't exist and likely never will.
A properly configured SCM will allow you to manage multiple versions of multiple libraries (packages) and to share projects (in branches) without ever requiring you to zip up anything.

How do you keep Xcode project source files in sync with your file system directories?

I'm new to XCode and I find the file management a huge pain. In most IDEs, you can simply have the project source tree reference a directory structure on disk. This makes it easy to add new files to your project - you simply put them on disk, and they will get compiled automatically.
With XCode, it appears I have to both create the file and separately add it to the project (or be forced to manipulate the filesystem through the UI). But this means that sharing the .xcodeproj through source control is fraught with problems - often, we'll get merge conflicts on the xcodeproj file - and when we don't, we often get linker errors, because during the merge some of the files that were listed in the project get excised. So I have to go and re-add them to the project file until I can get it to compile, and then re-check in the project file.
I'm sure I must be missing something here. I tried using 'reference folders' but the code in them doesn't seem to get compiled. It seems insane to build an IDE that forces everyone to modify a single shared file whenever adding or removing files to a project.
Other answers notwithstanding, this is absolutely a departure from other IDEs, and a major nuisance. There's no good solution I know of.
The one trick I use a lot to make it a little more bearable — especially with resource directories with lots of files in them — is:
select a directory in the project tree,
hit the delete key,
choose "Remove References Only", then
drag the directory into the project to re-add it.
This clobbers any manual reordering of files, but it does at least make syncing an O(1) operation, instead of being O(n) in the number of files changed.
I'm intrigued which IDEs you're using that automatically compile everything in a directory, as no IDE I've ever used does that (at least for C++). I think it's pretty standard to have a project file containing a list of all the files. Often you may want to only include certain files for different targets, have per-file compiler settings, etc.
Anyway, given that that's how it does work, you really shouldn't have too many problems from merge conflicts. The best advice would be commit early and often so that you don't get out of step with other people's changes. Merely adding files to the project shouldn't result in a conflict unless they happen to be added at exactly the same point in the project tree. We've been using Xcode in our team for years and we very rarely get conflicts: only if someone has restructured the project.
Fortunately, because the Xcode file format is text, it's generally quite easy to resolve conflicts when they occur, unlike the Bad Old Days of CodeWarrior with its binary format.

Structuring projects & dependencies of large winforms applications in C#

UPDATE:
This is one of my most-visited questions, and yet I still haven't really found a satisfactory solution for my project. One idea I read in an answer to another question is to create a tool which can build solutions 'on the fly' for projects that you pick from a list. I have yet to try that though.
How do you structure a very large application?
Multiple smallish projects/assemblies in one big solution?
A few big projects?
One solution per project?
And how do you manage dependencies in the case where you don't have one solution?
Note: I'm looking for advice based on experience, not answers you found on Google (I can do that myself).
I'm currently working on an application which has upwards of 80 DLLs, each in its own solution. Managing the dependencies is almost a full-time job. There is a custom in-house 'source control' with added functionality for copying dependency DLLs all over the place. That seems like a suboptimal solution to me, but is there a better way? Working on a solution with 80 projects would be pretty rough in practice, I fear.
(Context: winforms, not web)
EDIT: (If you think this is a different question, leave me a comment)
It seems to me that there are interdependencies between:
Project/Solution structure for an application
Folder/File structure
Branch structure for source control (if you use branching)
But I have great difficulty separating these out to consider them individually, if that is even possible.
I have asked another related question here.
Source Control
We have 20 or 30 projects being built into 4 or 5 discrete solutions. We are using Subversion for SCM.
1) We have one tree in SVN containing all the projects, organised logically by namespace and project name. There is a .sln at the root that will build them all, but that is not a requirement.
2) For each actual solution we have a new trunks folder in SVN with svn:externals references to all the required projects, so that they get updated from their locations under the main tree.
3) In each solution is the .sln file plus a few other required files, plus any code that is unique to that solution and not shared across solutions.
Having many smaller projects is a bit of a pain at times (for example, the TortoiseSVN update messages get messy with all those external links), but it does have the huge advantage that dependencies are not allowed to be circular, so our UI projects depend on the BO projects but the BO projects cannot reference the UI (and nor should they!).
Architecture
We have completely switched over to using MS SCSF and CAB enterprise pattern to manage the way our various projects combine and interact in a Win Forms interface. I am unsure if you have the same problems (multiple modules need to share space in a common forms environment) but if you do then this may well bring some sanity and convention to how you architect and assemble your solutions.
I mention that because SCSF tends to merge BO and UI type functions into the same module, whereas previously we maintained a strict 3 level policy:
FW - Framework code. Code whose function relates to software concerns.
BO - Business Objects. Code whose function relates to problem domain concerns.
UI - Code which relates to the UI.
In that scenario, dependencies are strictly UI -> BO -> FW.
We have found that we can maintain that structure even while using SCSF generated modules so all is good in the world :-)
To manage dependencies, whatever the number of assemblies/namespaces/projects you have, you can take a look at the tool NDepend.
Personally, I favour a few large projects, within one or several solutions if needed. I wrote about my motivations for doing so here: Benefit from the C# and VB.NET compilers perf
I think it's quite important that you have a solution that contains all your 80 projects, even if most developers use other solutions most of the time. In my experience, I tend to work with one large solution, but to avoid the pain of rebuilding all the projects each time I hit F5, I go to Solution Explorer, right-click on the projects I'm not interested in right now, and do "Unload Project". That way, the project stays in the solution but it doesn't cost me anything.
Having said that, 80 is a large number. Depending on how well those 80 break down into discrete subsystems, I might also create other solution files that each contain a meaningful subset. That would save me the effort of lots of right-click/Unload operations. Nevertheless, the fact that you'd have one big solution means there's always a definitive view of their inter-dependencies.
In all the source control systems that I've worked with, their VS integration chooses to put the .sln file in source control, and many don't work properly unless that .sln file is in source control. I find that intriguing, since the .sln file used to be considered a personal thing, rather than a project-wide thing. I think the only kind of .sln file that definitely merits source control is the "one-big-solution" that contains all projects. You can use it for automated builds, for example. As I said, individuals might create their own solutions for convenience, and I'm not against those going into source control, but they're more meaningful to individuals than to the project.
I think the best solution is to break it into smaller solutions. At the company I currently work for, we have the same problem: 80+ projects in one solution. What we have done is split it into several smaller solutions containing projects that belong together. Dependent DLLs from other projects are built and linked into the project and checked in to the source control system together with the project. This uses more disk space, but disk is cheap. Doing it this way, we can stay with version 1 of a project until upgrading to version 1.5 is absolutely necessary. You still have the job of adding DLLs when deciding to upgrade to another version of a DLL, though.
There is a project on Google Code called TreeFrog that shows how to structure the solution and development tree. It doesn't contain much documentation yet, but I guess you can get an idea of how to do it by looking at the structure.
A method that I've seen work well is having one big solution which contains all the projects, allowing a project-wide build to be tested (no one really used this to build on, though, as it was too big), and then having smaller solutions for developers to use, each with various related projects grouped together.
These did have dependencies on other projects but, unless the interfaces changed or they needed to update the version of the DLL they were using, they could continue to use the smaller solutions without worrying about everything else.
Thus they could check in projects while they were working on them, and then pin them (after changing the version number) when other users should start using them.
Finally, once or twice a week, or even more frequently, the entire solution was rebuilt using pinned code only, thus checking that the integration was working correctly and giving testers a good build to test against.
We often found that huge sections of code didn't change frequently, so it was pointless loading it all the time (when you're working on the smaller solutions).
Another advantage of this approach is that in certain cases we had pieces of functionality which took months to complete; using the above approach meant this work could continue without interrupting other streams of work.
I guess one key criterion for this is not having lots of cross-dependencies all over your solutions. If you do, this approach might not be appropriate; if, however, the dependencies are more limited, then this might be the way to go.
For a couple of systems I've worked on, we had different solutions for different components. Each solution had a common Output folder (with Debug and Release sub-folders).
We used project references within a solution and file references between solutions. Each project used Reference Paths to locate the assemblies from the other solutions. We had to manually edit the .csproj.user files to add a $(Configuration) MSBuild variable to the reference paths, as VS insists on validating the path.
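For illustration, that manual edit might look roughly like this in a .csproj.user file; the relative path is invented, and ReferencePath is the property that the Reference Paths page normally writes:

<PropertyGroup>
  <!-- Resolve file references from the other solution's shared output folder,
       switching between Debug and Release via $(Configuration). -->
  <ReferencePath>$(ProjectDir)..\..\OtherSolution\Output\$(Configuration)\</ReferencePath>
</PropertyGroup>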
For builds outside of VS I've written msbuild scripts that recursively identify project dependencies, fetch them from subversion and build them.
I gave up on project references (although your macros sound wonderful) for the following reasons:
It wasn't easy to switch between different solutions where sometimes dependency projects existed and sometimes didn't.
I needed to be able to open a project by itself, build it, and deploy it independently of other projects. If it was built with project references, this sometimes caused issues with deployment, because a project reference made it look for a specific version or higher, or something like that. It limited the mix-and-match ability to swap different versions of dependencies in and out.
Also, I had projects pointing to different .NET Framework versions, so a true project reference wasn't always happening anyway.
(FYI, everything I have done is for VB.NET, so not sure if any subtle difference in behavior for C#)
So, I:
I build against any project that is open in the solution; those that aren't open are resolved from a global folder, like C:\GlobalAssemblies.
My continuous integration server keeps this up to date on a network share, and I have a batch file to sync anything new to my local folder.
I have another local folder, like C:\GlobalAssembliesDebug, where each project has a post-build step that copies its bin folder's contents to this debug folder, but only when in DEBUG mode.
Each project has these two global folders added to its reference paths (first C:\GlobalAssembliesDebug, then C:\GlobalAssemblies). I have to add these reference paths to the .vbproj files manually, because Visual Studio's UI adds them to the .vbproj.user file instead.
I have a pre-build step that, if in RELEASE mode, deletes the contents of C:\GlobalAssembliesDebug (a sketch of these build events follows this list).
In any project that is the host project, if there are non-DLL files that I need to copy (text files output to other projects' bin folders that I need), then I put a pre-build step on that project to copy them into the host project.
I have to manually specify the project dependencies in the solution properties, to get them to build in the correct order.
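As promised above, here is a rough sketch of what the Debug-mode post-build copy and the Release-mode pre-build delete might look like as MSBuild build-event properties in each project file; the folder names are the ones used in this answer, but the exact commands are illustrative, not the author's:

<PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
  <!-- After a Debug build, mirror this project's output into the shared debug folder. -->
  <PostBuildEvent>xcopy "$(TargetDir)*.*" "C:\GlobalAssembliesDebug\" /Y /E</PostBuildEvent>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <!-- Before a Release build, clear out anything left over from Debug sessions. -->
  <PreBuildEvent>if exist "C:\GlobalAssembliesDebug" del /Q "C:\GlobalAssembliesDebug\*.*"</PreBuildEvent>
</PropertyGroup>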
So, what this does is:
Allows me to use projects in any solution without messing around with project references.
Visual Studio still lets me step into dependency projects that are open in the solution.
In DEBUG mode, it builds against open, loaded projects. Failing that, it looks first in C:\GlobalAssembliesDebug and then, if the assembly is not there, in C:\GlobalAssemblies.
In RELEASE mode, since it deletes everything from C:\GlobalAssembliesDebug, it only looks to C:\GlobalAssemblies. The reason I want this is so that released builds aren't built against anything that was temporarily changed in my solution.
It is easy to load and unload projects without much effort.
Of course, it isn't perfect. The debugging experience is not as nice as with a project reference (you can't do things like "Go To Definition" and have it work right), and there are some other little quirks.
Anyways, that's where I am on my attempt to make things work for the best for us.
We have one gigantic solution in source control, on the main branch.
But every developer/team working on a smaller part of the project has its own branch containing one solution with only the few projects that are needed. That way, the solution is small enough to be easily maintained, and it does not influence the other projects/DLLs in the larger solution.
However, there is one condition for this: there shouldn't be too many interconnected projects within the solution.
OK, having digested this information, and also answers to this question about project references, I'm currently working with this configuration, which seems to 'work for me':
One big solution, containing the application project and all the dependency assembly projects
I've kept all project references, with some extra tweaking of manual dependencies (right click on project) for some dynamically instantiated assemblies.
I've got three Solution folders (_Working, Synchronised and Xternal) - given that my source control isn't integrated with VS (sob), this allows me to quickly drag and drop projects between _Working and Synchronised so I don't lose track of changes. The Xternal folder is for assemblies that 'belong' to colleagues.
I've created myself a 'WorkingSetOnly' configuration (last option in Debug/Release drop-down), which allows me to limit the projects which are rebuilt on F5/F6.
As far as disk layout is concerned, I have all my project folders in just one of a few folders (so just one level of categorisation above projects).
All projects build (dll, pdb & xml) to the same output folder, and have that same folder as a reference path, with all references set to Don't copy (see the sketch after this list) - this leaves me the choice of dropping a project from my solution and easily switching to a file reference (I've got a macro for that).
At the same level as my 'Projects' folder, I have a 'Solutions' folder, where I maintain individual solutions for some assemblies - together with Test code (for example) and documentation/design etc specific to the assembly.
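For reference, the shared-output arrangement mentioned a couple of points up (same output folder everywhere, references set to Don't copy) might look something like this in each project file; the assembly name and relative paths are invented for the sketch:

<PropertyGroup>
  <!-- Every project in the working set builds into one common folder per configuration. -->
  <OutputPath>..\..\Output\$(Configuration)\</OutputPath>
</PropertyGroup>
<ItemGroup>
  <Reference Include="Company.SharedLibrary">
    <!-- File reference resolved from the common output folder... -->
    <HintPath>..\..\Output\$(Configuration)\Company.SharedLibrary.dll</HintPath>
    <!-- ...with "Copy Local" switched off so nothing gets duplicated. -->
    <Private>False</Private>
  </Reference>
</ItemGroup>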
This configuration seems to be working ok for me at the moment, but the big test will be trying to sell it to my colleagues, and seeing if it will fly as a team setup.
Currently unresolved drawbacks:
I still have a problem with the individual assembly solutions, as I don't always want to include all the dependent projects. This creates a conflict with the 'master' solution. I've worked around this with (again) a macro which converts broken project references to file references, and restores file references to project references if the project is added back.
There's unfortunately no way (that I've found so far) of linking Build Configuration to Solution Folders - it would be useful to be able to say 'build everything in this folder' - as it stands, I have to update this by hand (painful, and easy to forget). (You can right click on a Solution Folder to build, but that doesn't handle the F5 scenario)
There is a (minor) bug in the Solution folder implementation which means that when you re-open a solution, the projects are shown in the order they were added, and not in alphabetical order. (I've opened a bug with MS, apparently now corrected, but I guess for VS2010)
I had to uninstall the CodeRushXPress add-in, because it was choking on all that code, but this was before having modified the build config, so I'm going to give it another try.
Summary - things I didn't know before asking this question which have proved useful:
Use of solution folders to organise solutions without messing with disk
Creation of build configurations to exclude some projects
Being able to manually define dependencies between projects, even if they are using file references
This is my most popular question, so I hope this answer helps readers. I'm still very interested in further feedback from other users.
