Should I duplicate an Xcode project or #define sections?

I have an iPad app that can come in several different releasable flavors, and I'm trying to decide how best to produce these alternate releases. The flavors have only a few source-code differences and primarily differ in resource data files (XML and a very large number of binary files).
1) Should I duplicate the project, branch the handful of differing source files, and include the appropriate resources separately for each? This seems like more of a maintenance hassle whenever I add files to the project or do anything beyond editing shared files.
2) Or should I use #defines to build whichever flavor I want at any time, and ifdef out entire files accordingly? This seems simpler, but my suspicion is that I won't find an easy way to exclude/include resource files, and that would be a deal-breaker.
Any suggestions on how to deal with the resource issue in option 2, or is there an altogether better approach?

What about creating separate targets within a single Xcode project?
Make each target include the files that are appropriate for that app; no need for ifdefs that way.
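If a handful of shared source lines still differ between flavors, each target can additionally define its own macro under Build Settings ("Preprocessor Macros"), so only those few spots need conditionals. A minimal sketch, where the macro and constant names are hypothetical:
// Each target defines exactly one of FLAVOR_FULL / FLAVOR_LITE
// in its "Preprocessor Macros" build setting.
#if defined(FLAVOR_FULL)
static const int kMaxLevels = 50;  /* full release */
#elif defined(FLAVOR_LITE)
static const int kMaxLevels = 5;   /* lite release */
#else
#error "No flavor macro defined for this target"
#endif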

Should the STM32 HAL be included as a precompiled library

I have a Keil STM32 project for an STM32L0. More often than I'd like, I have to change the include paths or global defines. This triggers a complete recompile of all the code, because the toolchain needs to check for behaviour changes caused by those edits. The problem is that I didn't necessarily change any parameters relevant to the HAL, so (as far as I understand) a full recompile of those files shouldn't be needed. This recompilation takes up quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
P.S. The question is not necessarily only useful for this specific example, but the example gives the question some scope.
P.P.S. For people who aren't familiar with the STM32 HAL: it is the standardized interface through which a program talks to the underlying hardware. It is supplied as .c and .h files rather than in the precompiled form of the STD/STL.
Update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx, USE_B_BOARD, USE_HAL_DRIVER, REGION_EU868, DEBUG, TRACE
Only STM32L072xx and DEBUG matter for configuring the HAL library, so there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. It therefore seems to me that the HAL could be managed separately.
Edit
Seeing as a close vote has been cast: I've read the don't-ask section, and my question seeks to constructively add to the knowledge of building STM32 programs and to find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, so this question at least qualifies as unique. It is also meant to invite a rich answer that elaborates on the pros/cons of building the HAL as a separate static library.
The answer here is... it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having the HAL sources directly in your project means rebuilding the HAL every time anything in its (and underlying) headers changes, which you've already noticed. The downside is longer build times. The upside: you are sure that what you build is what you get.
Option #2 - having the HAL as a precompiled static library. Upside - shorter build times; downside - you can no longer be absolutely certain that the HAL library you include actually works as you want it to. In particular, you'd need to make sure in some way that all the #defines are exactly the same as when the library was built. This includes project-wide definitions (DEBUG, STM32L072xx etc.) as well as anything in the HAL config files (stm32l0xx_hal_conf.h).
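One way to mitigate that risk is a small compile-time guard that fails the build when the critical defines drift from the ones the library was built with. A minimal sketch in C, assuming the library was prebuilt for an STM32L072xx (the header name is made up):
/* hal_lib_guard.h - include this from the main project when linking the
   prebuilt HAL; it only covers defines known to affect the HAL build. */
#ifndef STM32L072xx
#error "The prebuilt HAL library was compiled for STM32L072xx; define it here too"
#endif
#ifndef USE_HAL_DRIVER
#error "The prebuilt HAL library expects USE_HAL_DRIVER to be defined"
#endif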
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. The HAL library isn't so large that build times should be a concern when rebuilding its source files.
If I were to express my opinion and experience - personally I wouldn't do it, as it may lead to lower reliability or to side effects that are very hard to diagnose and that only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and explaining to them how they "need to remember to rebuild X library when they change a given set of header files or project-wide definitions".
In fact, we've run into the same dilemma in the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. Because this configuration is done through headers, however, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I were to give some general tips that help avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration into one place, with pretty comments and sections for each software module in that one file. It's easier to manage this way (until the file becomes too big), but because the file is so widely included, any change made to it will cause a full rebuild. Split such files into separate headers corresponding to each module in your project.
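For illustration (the header and macro names here are hypothetical), splitting a monolithic project_config.h per module might look like:
/* adc_config.h - included only by ADC-related sources */
#define ADC_RESOLUTION_BITS  12
/* uart_config.h - included only by UART-related sources */
#define UART_BAUDRATE  115200U
Changing UART_BAUDRATE then rebuilds only the UART-related sources instead of everything that used to include the single global header.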
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and that bundle header is then included everywhere. In this case, making a change to any of the "smaller" headers forces a recompile of all source files that include the larger one. If the bundle file didn't exist, only the sources explicitly including the one small header would have to be recompiled. Obviously there's a line to be drawn here - including headers that are too "low level" may not be the greatest idea either; e.g. internal library headers may not be meant to be included directly and may change at any time.
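A hypothetical illustration of the "bundle" anti-pattern:
/* drivers.h - a "bundle" header; editing adc.h, uart.h or spi.h
   recompiles every source file that includes drivers.h */
#include "adc.h"
#include "uart.h"
#include "spi.h"
A source file that only needs the ADC is better off including "adc.h" directly.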
Prioritize including headers in source files over including them in other headers. If you have a pair of your own *.c (*.cpp) and *.h files - let's say temp_logger.c/.h - and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header in your temp_logger.c file. That way, none of the files using the temp_logger functions have to be recompiled when the HAL headers change.
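Sketched out with the file names from that tip (the ADC driver header and its functions are made up for illustration):
/* temp_logger.h - exposes the interface only; no HAL/ADC includes here */
#ifndef TEMP_LOGGER_H
#define TEMP_LOGGER_H
void  temp_logger_init(void);
float temp_logger_read_celsius(void);
#endif

/* temp_logger.c - the ADC dependency stays hidden in the implementation */
#include "temp_logger.h"
#include "adc.h"                    /* hypothetical ADC driver header */

void temp_logger_init(void)
{
    adc_init();                     /* hypothetical driver call */
}

float temp_logger_read_celsius(void)
{
    return adc_read_raw() * 0.25f;  /* hypothetical conversion */
}
Files that include only temp_logger.h are untouched when adc.h changes.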
My opinion is yes, build the HAL into a library. The benefit of faster build times outweighs the risk of the library getting out of date. Past some point early in the project, it's unusual for me to change something that would affect the HAL, but the faster build time pays off many times over.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.

Why Derived Data - why do we need it

For years I have just blindly accepted that once in a while I need to delete the Derived Data folder.
Searching the Internet mostly turns up ways to delete it :-)
Can someone explain why we need Derived Data, rather than just having output relative to each project in Xcode? I am sure there is something smart behind it, but what?
Note:
I know how to change its location, but I'm more interested in whether there is any thinking behind having it.
I also know how to gitignore it.
So if it is there to speed up builds, is there a way to reference another project's Derived Data frameworks?
Thanks
The module-based nature of Swift building and linking requires the creation of dozens of ancillary files (apinotesc and pcm files) in the module cache. It is cheaper and (consequently) faster to create these once for all projects. Thus the default is that there is one location for one module cache.
Another advantage is that when cleaning up the derived data files (which take up a lot of room) - as you yourself admit one needs to do from time to time - it is easier to find them all if they are in one location together. Imagine if they were distributed inside every individual project folder!
Can someone explain why we need Derived Data, rather than just having output relative to each project in Xcode? I am sure there is something smart behind it, but what?
The files in the derived data folder are intermediate files. Having them around lets Xcode avoid redoing work it has already done, and so speeds up your builds. If you delete those files, there's no long-term harm done - Xcode just has to go and create them again. That takes time, so your build will take longer, but otherwise you'll get the same result.
The reason not to put them in the project folder is that they're not really part of the project. If you use version control (you do, right?), you wouldn't want to have to configure your software to ignore parts of the project, and you wouldn't want to commit any of those derived data files either. And again, removing the derived data files doesn't change the project at all; it only changes what Xcode remembers about the project from one build to the next.

Mono for Android - One Solution for many clients

I have created three different solutions for three different clients, but those solutions are for an app that has the same features, classes, methods, and resolution, except for the images, XML resource files, and a web service reference, which are specific to each client.
I would like to have just one solution for all those apps, one that I could open in the VS2010 IDE for editing without errors. Then, when I need to build or publish a specific app, I would just select the client I need and go ahead with building or publishing.
It is important to note that the XML file names will be the same, as will the class and image names. The content will differ, but the names will always be the same.
My intention is to reduce the effort of maintaining many solutions by having just one solution to work with.
In my company we will soon have more than those three clients, so I am worried about how to maintain all of this. The best approach would be to have just one solution, so that when I need to generate an app for a new client I only have to change/include a few things (like some resources and images) and compile to a new client folder.
Is it possible? If so how?
One option would be to have a master solution containing the following:
A "Template" project that contains your actual application and all of the shared code
Projects for each of your clients
In the client projects, you could add links to the files that come from your shared Template project, and then add the files that are specific to each client.
With this kind of structure, whenever you make a change to your Template project, all of the client projects are updated as well, because they just have pointers back to the Template project.
A good reference for this kind of setup is the Json.NET code base. There you'll find a solution and a project for each of the different configurations, but they all share the same files.
To ensure that the XML files are named properly, you might want to add checks to your main application to verify that it has all of the files it needs, or potentially add a check to your build process.
There are many ways you could look to tackle this.
My favorite would be to run some sort of pre-build step - probably outside of Visual Studio - which simply replaces the files with the correct ones before you do a build. This would be easy to automate and easy to scale.
If you are going to be building for many more than three customers, then I think you should switch from building in Visual Studio to some other automated build system - e.g. MSBuild from the command line, or something like TeamCity or CruiseControl. You'll find it much easier to scale if your build is automated (and robust).
If you don't like the file-replacement idea, then there are plenty of other things you could try:
You could do a similar step to the above, but inside VS using a pre-build step.
You could use Conditional nodes within the .csproj file to switch files via a project configuration
You could look to shift the client-specific resources into another assembly - and then use GetResourceStream (or similar) at runtime to extract the resources.
But none of these feel as nice to me!

Xcode: Project portability: How to handle code files shared between applications?

As I create more applications, my /code/shared/* grows.
This creates a problem: zipping and sending a project is no longer trivial. It looks like my options are:
1) In Xcode, set shared files to use an absolute path. Then every time I zip and send, I must also zip and send /code/shared/* and give instructions, and hope the recipient doesn't have anything already at that location. This is really not practical; it makes the zip file too big.
2) Maintain a separate copy of my library files for each project. This is not really acceptable, as modifications/improvements would have to be implemented separately everywhere, which makes maintenance unreasonably cumbersome.
3) Use some utility that goes through every file in the Xcode project, figures out the lowest common folder, and creates a zipped file structure containing only the necessary files, in their correct relative folder locations, so that the code will still build.
(3) is what I'm looking for, but I have a feeling it doesn't yet exist.
Anyone?
You should rethink your current process. The workflow you're describing in (3) is not normal. This all sounds very complicated, and it would all basically be handled with relative ease if you were using source control. (3) just doesn't exist and likely never will.
A properly configured SCM will allow you to manage multiple versions of multiple libraries (packages) and allow you to share projects (in branches) without ever requiring zipping up anything.

Should a .sln be committed to source control?

Is it a best practice to commit a .sln file to source control? When is it appropriate or inappropriate to do so?
Update
There were several good points made in the answers. Thanks for the responses!
I think it's clear from the other answers that solution files are useful and should be committed, even if they're not used for official builds. They're handy to have for anyone using Visual Studio features like Go To Definition/Declaration.
By default, they don't contain absolute paths or any other machine-specific artifacts. (Unfortunately, some add-in tools don't properly maintain this property, for instance, AMD CodeAnalyst.) If you're careful to use relative paths in your project files (both C++ and C#), they'll be machine-independent too.
Probably the more useful question is: what files should you exclude? Here's the content of my .gitignore file for my VS 2008 projects:
*.suo
*.user
*.ncb
Debug/
Release/
CodeAnalyst/
(The last entry is just for the AMD CodeAnalyst profiler.)
For VS 2010, you should also exclude the following:
ipch/
*.sdf
*.opensdf
Yes - I think it's always appropriate. User-specific settings are in other files.
Yes you should do this. A solution file contains only information about the overall structure of your solution. The information is global to the solution and is likely common to all developers in your project.
It doesn't contain any user-specific settings.
You should definitely have it. Besides the reasons other people mentioned, it's needed to make a one-step build of the whole project possible.
I generally agree that solution files should be checked in. However, at the company I work for we have done something different. We have a fairly large repository, and developers work on different parts of the system from time to time. To support the way we work, we would need either one big solution file or several smaller ones. Both of these have shortcomings and require manual work on the developers' part. To avoid this, we have made a plug-in that handles all of that.
The plug-in lets each developer check out a subset of the source tree to work on, simply by selecting the relevant projects from the repository. The plug-in then generates a solution file and modifies project files on the fly for the given solution. It also handles references. In other words, all the developer has to do is select the appropriate projects, and the necessary files are generated/modified. This also allows us to customize various other settings to ensure company standards.
Additionally we use the plug-in to support various check-in policies, which generally prevents users from submitting faulty/non-compliant code to the repository.
Yes, things you should commit are:
solution (*.sln),
project files,
all source files,
app config files,
build scripts.
Things you should not commit are:
solution user options (.suo) files,
build-generated files (e.g. produced by a build script) [Edit:] - only if all necessary build scripts and tools are available under version control (to ensure builds are reproducible from the version-control history)
Regarding other automatically generated files, there is a separate thread.
Yes, it should be part of the source control.
Whenever you add or remove projects from your application, the .sln gets updated, and it is good to have it under source control. It allows you to pull your application code from two versions back and directly do a build (if required).
Yes, you always want to include the .sln file, it includes the links to all the projects that are in the solution.
Under most circumstances, it's a good idea to commit .sln files to source control.
If your .sln files are generated by another tool (such as CMake) then it's probably inappropriate to put them into source control.
We do, because it keeps everything in sync. All the necessary projects are located together, and no one has to worry about missing one. Our build server (Ant Hill Pro) also uses the .sln to figure out which projects to build for a release.
We usually put all of our solution files in a solutions directory. This way we separate the solution from the code a little bit, and it's easier to pick out the project I need to work on.
The only case where you would even consider not storing it in source control would be if you had a large solution with many projects in source control, and you wanted to create a small solution with some of the projects from the main solution for some private, transient requirement.
Yes - Everything used to generate your product should be in source control.
We keep our solution files in TFS Version Control. But since our main solution is really large, most developers have a personal solution containing only what they need. The main solution file is mostly used by the build server.
.slns are the only thing we haven't had problems with in TFS!
