Projects folder structure recommendation [closed]

I'm getting ready to implement a source control system (Subversion), but I have some doubts about how to structure my folders.
I use Delphi for all my development and compile the projects from within the IDE.
My current projects folder structure is as follows:
-E:\Work\1. Shared
--Forms (shared forms across all projects)
--Units (shared units/classes across all projects including 3rd party like JCL)
-E:\Work\2. Company Name
--Admin (stuff related to admin work, like a license key generator and a Windows CGI to handle order processing automatically, all developed in Delphi)
--Projects
----ProjectA
-----5.x (version 5.x)
------BIN (where all the binaries for this project go)
------Build Manager (where the FinalBuilder project lives)
-------Install (NSIS file that create the setup.exe)
-------Protection (Project files to protect the compiled exe)
-------Update (inf files related with the auto-update)
------Docs (where the readme.txt, license.txt and history.txt files included in the setup are kept)
-------Defects (docs for any testing done by me or others)
-------HTMLHelp (html help for the project)
------R&D (where screenshots, design ideas and other R&D stuff goes to)
------Releases (when building a release with FinalBuilder the setup file created by nsis is placed here)
------Resources (Images and other resources used by this project)
------Source (if a sub-project exists it will compile to BIN since they are all related)
-------SubprojectA
-------SubprojectB
-------SubprojectC
--Sites
---companywebsite.com (the only one at the moment, but if we decide to have individual web sites for products they would all be placed in the Sites folder)
The "-" signs mark directory nesting levels.
Does anyone care to comment on the current structure, or have any suggestions to improve it?
Thanks!

Having setup literally hundreds of projects over the years, and having specialized in software configuration management and release engineering, I would recommend that you first focus on how you want to build/release your project(s).
If you only use an IDE to build (compile and package) your project(s), then you might as well just follow the conventions typical for that IDE, plus any "best practices" you may find.
However, I would strongly recommend that you not build only with an IDE--or even with an IDE at all. Instead, create an automated build/release script using one or more of the many wonderful open-source tools available. Since you appear to be targeting Windows, I recommend starting with a look at Ant, Ivy, and the appropriate xUnit framework (JUnit for Java, NUnit for .NET, etc.) for testing.
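For example, here is a minimal sketch of such a build script in Ant. It assumes (hypothetically) that the Delphi command-line compiler dcc32 is on the PATH and that the layout matches the tree above; names and paths are illustrative only:

    <project name="ProjectA" default="build" basedir=".">
        <property name="src.dir" value="Source"/>
        <property name="bin.dir" value="BIN"/>

        <target name="build">
            <mkdir dir="${bin.dir}"/>
            <!-- invoke the Delphi command-line compiler instead of the IDE -->
            <exec executable="dcc32" failonerror="true">
                <arg value="-E${bin.dir}"/>
                <arg value="${src.dir}/ProjectA.dpr"/>
            </exec>
        </target>
    </project>

Once a script like this exists, the IDE becomes optional: anyone (or any build machine) can produce the same binaries with a single command.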
Once you start down that path, you will find lots of advice regarding project structure, designing your build scripts, testing, etc. Rather than overwhelm you with detailed advice now, I will simply leave you with that suggestion--you will readily find answers to your question there, as well as find a whole lot more questions worth investigating.
Enjoy!
Based on comments, it seems that some detail is needed.
A particular recommendation that I would make is that you separate your codebase into individual subprojects that each produce a single deliverable. The main application (.EXE) should be one, any supporting binaries would each be separate projects, the installer would be a separate project, etc.
Each project produces a single primary deliverable: an .EXE, a .DLL, a .HLP, etc. That deliverable is "published" to a single, shared, local, output directory.
Make a directory tree where the subprojects are peers (no depth or hierarchy, because it does not help), and do NOT let projects "reach" into each other's subtree--each project should be completely independent, with dependencies ONLY on the primary deliverables of the other subprojects, referenced in the shared output directory.
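To make that concrete, a subproject's Ivy file might declare its dependencies like this--only by name and revision of the published deliverable, never by a path into another project's tree (the organisation and module names are placeholders):

    <ivy-module version="2.0">
        <info organisation="mycompany" module="mainapp"/>
        <dependencies>
            <!-- resolved from the shared output/repository, not from source trees -->
            <dependency org="mycompany" name="subprojectA" rev="1.2"/>
            <dependency org="mycompany" name="subprojectB" rev="latest.release"/>
        </dependencies>
    </ivy-module>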
Do NOT create a hierarchy of build scripts that invoke each other; I did, and found that it does not add value but does exponentially increase the maintenance effort. Instead, make a continuous integration script that invokes your stand-alone build script, but first does a clean checkout into a temporary directory.
Do NOT commit any deliverables or dependencies into source control--not your build output, not the libraries that you use, etc. Use Ivy against a Maven-like binary repository that you deploy separately from source control, and publish your own deliverables to it for sharing within your organization.
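A minimal Ivy settings sketch for such a shared repository, assuming a plain filesystem layout (the path is hypothetical; an HTTP-based repository manager works the same way):

    <ivysettings>
        <settings defaultResolver="shared"/>
        <resolvers>
            <!-- a shared binary repository that lives outside source control -->
            <filesystem name="shared">
                <ivy pattern="/srv/ivyrepo/[organisation]/[module]/[revision]/ivy.xml"/>
                <artifact pattern="/srv/ivyrepo/[organisation]/[module]/[revision]/[artifact].[ext]"/>
            </filesystem>
        </resolvers>
    </ivysettings>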
Oh, and don't use Maven--it is too complicated, obfuscates the build process, and therefore is not cost-effective to customize.
I am moving towards SCons, BuildBot, Ant, Ivy, NAnt, etc., based on my target platform.
I have been composing a whitepaper on this topic, which I see may have an audience.
EDIT: Please see my detailed answer to How do you organize your version control repository?

Why the 5.x (under ProjectA)?
I don't think it is useful to encode versions in the tree--that is what Subversion etc. is for.

Related

How to manage stable binaries and avoid risk of CI rebuilds when install packaging?

I am looking for a tool to manage the collection of binary files (input components) that make up a software release. This is a software product and we have released multiple versions each year for the last 20 years. The details and types of files may vary, but this is something many software teams need to manage.
What's a Software Release made of?
A mixture of files go into our software releases, including:
Windows executables/binaries (40 DLLs and 30+ EXE files).
Scripts used by the installer to create a database
API assemblies for various platforms (.NET, ActiveX, and Java)
Documentation files (HTML, PDF, CHM)
Source code for example applications
The full collected files for a single version of the release are about 90MB. Most are built from source code, but some are 3rd party.
Manual Process
Long ago we managed this manually.
1. When starting each new release, the files used to build the last release would be copied to a new folder on a shared drive.
2. The developers would manually add or update files in this folder (hoping nothing was lost or deleted accidentally).
3. The software installer script would be compiled using the files in this folder to produce a SETUP.EXE (output).
4. Iterate steps 2 and 3 during validation & testing until release.
Automatic Process
Some years ago we adopted CI (building our binaries nightly or on-demand).
We resorted to putting 3rd party binaries under version control since they usually don't change as often.
Then we automated the process of collecting & updating files for a release based on the CI build outputs. Finally we were able to automate the construction of our SETUP.EXE.
Remaining Gaps
Great so far, but this leaves us with two problems:
Rebuilding Assemblies: The CI mostly builds projects when something has changed, but when forced it will re-compile a binary that doesn't have any code change. The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?).
Latest vs Stable: Mostly our CI machine builds the latest versions of each project. In some cases this is ok, but often we want to release an older tested or stable version. To do this we have separate CI projects for the latest and stable builds - this works but is clumsy.
Thanks for your patience if you've got this far :-)
I Still Haven't Found What I'm Looking For
After some time searching for solutions it seems it might be easier to build our own solution, but surely someone else has solved these problems before!?
What we want is a way to store and manage binary files (either outputs from CI, or 3rd party files) such that each is tagged with a version (v1.2.3.4) that allows:
The CI to publish new versions of each binary (but reject rebuilt versions that already exist).
The development team to make a recipe for a software release (kinda like NuGet packages.config; see the sketch after this list) that specifies components to include:
package name
version
path/destination in the release folder
The automatic packaging script to use the recipe to collect the required files and compile the install package (e.g. SETUP.EXE).
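To illustrate, such a recipe could be as simple as the following--a purely hypothetical format, modeled on packages.config, with made-up component names:

    <release product="MyProduct" version="5.2.0">
        <component name="Server.exe"    version="5.2.0.1234" dest="bin\"/>
        <component name="ClientAPI.dll" version="5.1.9.876"  dest="bin\api\"/>
        <component name="Manual.pdf"    version="5.2.0.0"    dest="docs\"/>
    </release>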
I am aware of past debates about storing binaries in a VCS. For now I am looking for a better solution. That approach does not appear ideal for long-term ongoing use (e.g. how to prune old binaries)... amongst other issues.
I have tried some artifact repositories currently available. From my investigation these provide a solution for component/artifact storage and version control. However they do not provide tools for managing a list of components/artifacts to include in a software release.
Does anybody out there know of tools for this?
Have you found a way to get your CI infrastructure to address these remaining issues?
If you're using an artifact repository to solve this problem, how do you manage and automate the process?
This is a very broad topic, but it sounds like you want a release management tool (e.g. BuildMaster, developed by my company Inedo), possibly in conjunction with a package management server like ProGet (which you tagged, and is how I discovered this question).
To address some specific questions you have, I'll associate it with a feature that would solve the problem:
A mixture of files go into our software releases, including...
This is handled in BuildMaster with artifacts. This video gives a basic overview of how they are manually added to releases and deployed to a file system: https://inedo.com/support/tutorials/buildmaster/deployments/deploying-a-simple-web-app-to-iis
Of course, once that works to satisfaction, you can automate the import of artifacts from your existing CI tool, create them from a BuildMaster deployment plan itself, pull them from your package server, whatever. Down the line you can also have your CI tool call the BuildMaster release management API to create a release and automatically have it include all the artifacts and components you want (this is what most of our customers do now, i.e. have a build step in TeamCity create a release from a template).
Rebuilding Assemblies ... The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?)
You can mostly assume they are equivalent functionally, but it's only the times that they are not that problems arise. This is especially true with package managers that do not lock dependencies to specific version numbers (i.e. NuGet, npm). You should be releasing exactly the same binary that was tested in previous environments.
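For illustration, this is why pinning exact versions matters; a NuGet packages.config locks each dependency so a rebuild resolves the same binaries (the package and version shown are just examples):

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
        <!-- exact version: a rebuild pulls the same dependency, not "latest" -->
        <package id="Newtonsoft.Json" version="12.0.3" targetFramework="net472" />
    </packages>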
[we want] the development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
This is handled with releases. A developer can choose the release's name, dates, etc., and associate it with a pipeline (i.e. a set of testing stages that the artifacts are deployed to), then can "click the deploy button" and have the automation do all the work.
Releases are grouped by "application", similar to a project in TeamCity. As a more advanced use case, you can use deployables. Deployables are essentially individual components of an application you include in a release; in your case the "Documentation" could be a deployable, and maybe contain an artifact of the .pdf and .docx files. Deployables from other applications (maybe a different team is responsible for them, or whatever) can then be referenced and "included" in a release, or you can reference ones from a past release.
Hopefully that provides some overview and fits your needs. Getting into this space is a bit overwhelming because there are so many terms, technologies, and methodologies, but my advice is to start simple and then slowly build upon it, e.g.:
deploy a single, manually uploaded component through BuildMaster to a share drive, then manually deploy it from there
add a deployment plan that imports the component
add a second plan and associate it with the 2nd stage that takes the uploaded artifact and deploys it to the target, bypassing the need for the share drive
add more deployment plans and associate them with pipeline stages and promote through them all to "close out" a release
add an agent and deploy to that instead of the default localhost server
add more components and segregate their deployment with deployables
add event listeners to email team members at points in the process
start adding approvals if you require gated "sign-offs"
and so on.

How to set Xcode project dependencies with different build configurations? [closed]

I have an Xcode 7.3 workspace with three projects, App, FrameworkA, and FrameworkB. Each project has a single target. This is iOS, so the framework targets are Cocoa Touch Frameworks, which means frameworks containing dynamically linked shared libraries.
App depends on Framework A, which depends on Framework B. These dependencies are working, insofar as A properly links to the build product of B, and App properly links to and embeds both frameworks A and B. (Because you cannot have one framework embed another, it seems an application bundle needs to link and embed both its direct and transitive dependencies.)
But here is my problem. Frameworks A and B have the usual build configurations, Debug and Release. App has an additional build configuration, LocalRelease, which is triggered by the Run build action, and used for building an optimized build (like Release) but code signed with a developer identity (like Debug).
When I try to build App with this LocalRelease build configuration, the build fails because the dependencies on frameworks A and B break. I believe that is because those frameworks do not have LocalRelease build configurations, so Xcode never puts their build products into a LocalRelease-iphoneos folder, as it does with App.
So my narrow question is, how do I configure build settings so that a project with a nonstandard build configuration name (like LocalRelease) can depend on other projects that use only the standard build configuration names? I'm hoping there's a simple way to do this that does not require adding scripts or xcconfig files, but if those are necessary I'd love to understand why.
And my broader question is, is it in general a bad idea to introduce additional build configurations because they do not allow smooth interaction of dependencies between projects in a shared workspace? I was led to defining this third configuration because I wanted an optimized local build, I did not want to define a new scheme, and I wanted the choice of build type to be expressed by the various build actions (run, profile, release) of a single scheme.
But maybe this was the wrong way to do it. As long as it is the case that build configuration names drive build product directory paths, and dependent projects need to find each other's build products in a shared directory, it seems like introducing a non-standard build configuration name to a project will break interoperation with depended-upon other projects.
I raised a Developer Technical Support ticket with Apple about this, and spoke to the Xcode engineers at WWDC.
Answers to my own questions
how do I configure build settings so that a project with a nonstandard build configuration name (like LocalRelease) can depend on other projects that use only the standard build configuration names?
Answer: cannot be done.
is it in general a bad idea to introduce additional build configurations because they do not allow smooth interaction of dependencies between projects in a shared workspace?
Answer: yes, this is a bad idea.
So creating a new named build configuration is not the smart way to do what I was trying to do. Unfortunately, it seems the "simplest" solution is to embrace xcconfig files and change config files manually for this sort of thing.
In Xcode 11, it appears that if no corresponding configuration is found in the project dependency, the build system falls back to the setting Use {configuration here} for command-line builds under project configurations.
[Screenshot: Xcode settings UI]
Cleaning, building, and then inspecting my built products directory shows that the dependency is only built for the selected configuration.

How do you manage common software on a large project? [closed]

I work on a project far too big to reside in a single Visual Studio / Eclipse / NetBeans project and we have a "common software" team responsible for developing and maintaining software libraries used by other teams.
I'm struggling with how to manage the development of and changes to the common software. When method signatures and classes change, do I keep the old versions and mark them deprecated? The current plan is to distribute a new build of common libraries every two weeks.
Definitely set up a repository. If you are a Maven-hater, check out Gradle, which uses Ivy. Maven has a reputation for being complex, but it does have better tool support: IDEs support Maven either out-of-the-box or with plugins, and they give you graphs showing what the jars in your project depend on, so you can see conflicts easily.
Either Ivy or Maven will sort out your dependencies so your projects are using the right versions. Each of your projects should list (in the pom.xml, for Maven) the version of each of your common libraries that it uses.
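For example, a project's pom.xml would pin a common library like this (the group and artifact names are placeholders for your own):

    <dependencies>
        <!-- each project states exactly which version of the common library it uses -->
        <dependency>
            <groupId>com.mycompany.common</groupId>
            <artifactId>common-utils</artifactId>
            <version>1.4.2</version>
        </dependency>
    </dependencies>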
A common feature of most version control systems is support for externals (for example, Subversion's svn:externals): common software is fetched from a shared repository and integrated into each project on update.
A key difficulty lies in documenting changes to the public API of the common software. I see two solutions: good communication of deprecated signatures, and continuous integration, where finding out about deprecated methods can prove painful.
There are a few options you can have.
Option A: use a repository
For Java-based systems, I would recommend that you use Ant+Ivy or Maven and create an internal repository with the code in those common projects.
Option B: Classpath Project
If setting up a repository is too much, what you can do is create an Eclipse project called classpath with the following three directories in it:
classpath\
    docs\
    sources\
    jars\
The team working on the common project can have a build script which compiles the common code and places it into the classpath project; all that the rest of the dev team needs to do is check out the classpath project and reference the files in it during development.
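A sketch of such a build script in Ant, assuming the classpath project is checked out next to the common project (paths and names are illustrative):

    <project name="common" default="publish" basedir=".">
        <property name="classpath.dir" value="../classpath"/>

        <target name="publish">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
            <!-- publish the compiled jar into the shared classpath project -->
            <jar destfile="${classpath.dir}/jars/common.jar" basedir="build/classes"/>
            <!-- keep sources alongside the jar so IDEs can attach them -->
            <copy todir="${classpath.dir}/sources">
                <fileset dir="src"/>
            </copy>
        </target>
    </project>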
Personally I am a fan of option B unless there is a full time person dedicated to doing builds in which case I go for option A.
The way to manage changes in method signatures is to follow a common versioning convention: when you do a major version number increase, you can say dependent code will have to be changed; if it is a minor version number increase, then dependent code does not need to change. Marking code as deprecated is a very practical option, because IDEs and build systems should issue warnings and allow the coders to switch to newer versions. If the same team is changing both the common code and the main project, then you will need to have the actual Eclipse projects all checked out in the same workspace so that refactoring tools can do their job.
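If you use Maven, that convention can even be expressed in the dependency declaration; a version range like the one below accepts any minor update but requires a deliberate change to move to the next major version (the names are placeholders):

    <dependency>
        <groupId>com.mycompany.common</groupId>
        <artifactId>common-utils</artifactId>
        <!-- any 1.x is accepted; moving to 2.0 requires editing this line -->
        <version>[1.0,2.0)</version>
    </dependency>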
Unless the code in common will be used across many projects, I would keep it all in one project; you can use multiple source folders to make navigating to various parts of the code easy. If you are having trouble with developers checking in stuff that is breaking things, then I would recommend more frequent check-ins, or have developers work on branches where they merge from the trunk to their work branch frequently to eliminate sync problems; when done, they can merge from the branch back to the trunk. The latest versions of Subversion have decent support for this, and DVCS source control systems like Mercurial and Git are excellent at it.

Managing internal 3rd Party Dependencies

We have a lot of different solutions/projects which are managed by different teams. Our solution needs to reference several projects that another team owns. We don't want to add these dependencies as project references because we do not intend to modify that code; we just want to use it. Also, we already have quite a few projects in our solution and don't want to add a bunch more, since that will slow down Visual Studio. So we are building these projects in a separate solution and adding them as file references to our solution.
My question is, how do people manage these types of dependencies? Should I just have some automated process what looks for changes to those projects, builds them and checks the dlls into our source control, after which we treat them like other 3rd party dependencies? Is there a recommended way of doing this?
One solution, although it may not necessarily be what you are looking for, is to have each dependent sub-system perform a release. This release could be in the form of an MSI install, or just a network share of assemblies. When a significant change is made, that team could let you know, and you could run the install or a script to copy the files.
Once you get the release, you could put the assemblies into the GAC; that way you would not have to worry about copying them to your project bin folders.
Another solution, assuming you are using a build server or continuous integration of some kind, is to have a post-build step or process stage the files. Then, at any given moment, the developers on the other teams could grab the new files, or have a script or bat file pull them down locally.
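As a sketch of the "pull them down locally" side, an MSBuild target in the consuming .csproj could copy the staged binaries before the build (the share path and names are hypothetical):

    <Target Name="PullSharedBinaries" BeforeTargets="ResolveAssemblyReferences">
        <ItemGroup>
            <SharedBinaries Include="\\buildserver\drops\TeamA\latest\*.dll" />
        </ItemGroup>
        <!-- copy the other team's latest staged release into a local lib folder -->
        <Copy SourceFiles="@(SharedBinaries)" DestinationFolder="$(ProjectDir)lib\" />
    </Target>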
EDIT - ANOTHER SOLUTION
It might be best to ask why you have these dependencies. Do you really need them locally when building your part of the application? Could you mock out the dependencies in your solution, allowing you to code, build, and run unit tests? Then the actual application would wire these up in your DEV/Test/Prod environments. Keeping your solution decoupled and dependency-free may be a better approach for the individual team. Leave the integration and coupling to when the application runs in a real setting.
(Not a complete answer, but still:)
Any delivery is better stored in a file/binary repository, as opposed to a VCS used to manage source history.
We prefer managing those deliveries in a repository like Nexus, and we use Maven to retrieve the right dependencies.
Even if those tools can be more Java-oriented, Nexus can store anything, and Maven is only there to read the pom.xml of each artifact and compute the right dependencies.
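For example, a non-Java delivery can still be declared in a pom.xml as a plain archive and resolved from Nexus (the coordinates are placeholders):

    <dependency>
        <groupId>com.mycompany.deliveries</groupId>
        <artifactId>teamA-release</artifactId>
        <version>2.3.1</version>
        <!-- stored and fetched as a plain zip; no Java involved -->
        <type>zip</type>
    </dependency>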

How to work on a Cocoa app and plugins in parallel?

I have a relatively simple goal: I want to create a Cocoa application which doesn't have much functionality itself, but is extendable through plugins. In addition I want to work on a few plugins to supply users with real functionality (and working examples).
As I am planning to make the application and each plugin separate open-source projects (and Git repositories), I'm now searching for the best way to organize my files and the Xcode projects. I'm not very experienced with Xcode and right now I don't see a simple way to get it working without copying files after building.
This is the simple monolithic setup I used for development up until now:
There's only one Xcode project with multiple products:
The main application
A framework for plugin development
Several plugin bundles
What I'm searching for is a comfortable way to split these into several Xcode projects: one for the application and framework, and one for each plugin. As my application is still in an early stage of development, I'm still changing lots of things in both the application and the plugins. So what I mean by "comfortable" is that I don't want to copy files manually or put up with similar inconveniences.
What I need is for the plugin projects to know where they can find the current development framework, and for the application to know where it can find the development plugins. The best would be something like an inter-project dependency, but I couldn't find a way to set up something like that in Xcode.
One possible solution I have in mind is to copy both the plugins and the framework in a "Copy Files" build phase to a known location, e.g. /tmp/development, so production and development files aren't mixed up.
I think that my solution would be enough, but I'm curious if there's a better way to achieve what I want. So any suggestions are welcome.
First, don't use a static "known location" like you mention. I've worked in this kind of project; it's a royal pain. As soon as you get to the point of needing a couple of different copies of the project around (for fixing bugs in parallel, for testing a "clean" build versus your latest changes, for working on multiple branches), the builds start trashing each other and you find yourself having to do completely clean builds much more often than you'd want.
You can create inter-project dependencies by adding the dependent project (Add File), right-clicking the Target and choosing "Get Info," and then adding a Direct Dependency on the General pane.
In terms of structure, you can either put the main app and framework together, or put them in separate projects. In either case, I recommend a directory tree like:
/MyProject
    /Framework
    /Application
    /Plugins
        /Plugin1
        /Plugin2
Projects should then refer to each other by relative paths. This means you can easily work on multiple copies of the project in parallel.
You can also look at a top-level build script that changes into each directory and runs "xcodebuild". I dislike complex build scripts (we have one; it's called Xcode), but if all it does is call "xcodebuild" with parameters if needed, then a simple build script is useful.
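As a sketch of how thin such a script can stay--written here as an Ant file, though a plain shell script would do just as well--each step only changes into a subproject directory and runs xcodebuild (directory names match the tree above):

    <project name="MyProject" default="build-all" basedir=".">
        <target name="build-all">
            <!-- build each subproject in dependency order; no other logic -->
            <exec executable="xcodebuild" dir="Framework" failonerror="true"/>
            <exec executable="xcodebuild" dir="Application" failonerror="true"/>
            <exec executable="xcodebuild" dir="Plugins/Plugin1" failonerror="true"/>
            <exec executable="xcodebuild" dir="Plugins/Plugin2" failonerror="true"/>
        </target>
    </project>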
