Composer - Multiple Dependencies Issue - composer-php

I have three separate projects (say P1, P2 and P3) each with composer setup.
P1 requires part of P2 and P2 requires part of P3, which in turn causes Composer within P1 to force me to require P3. Is there a way to remove this dependency, given that none of the code in P3 is needed by P1 (other than deleting it outside of Composer)?
I tried setting P3 as require-dev within the P1 composer file and then passing the --no-dev flag when composer install/update is run, but it is still added to the vendor directory, I imagine because of the requirement from P2.

You cannot prevent P3 from appearing in the vendor directory of P1. That's how Composer works.
It might be that the P2 parts used by P1 do not use anything inside P3, but that does not matter. As long as P2 states that it needs P3 to work, any project requiring P2 will also include P3.
I wonder why you say you are forced to require P3. The only reason that would be true is if you also have to name the repositories that host your code, because you do not want to publish it via packagist.org.
But even then this is only half the truth. You do not have to require P3 - you only have to state all the repositories that contain all the code that is eventually required.
If you want to avoid including huge lists of private repositories in your library modules, I'd suggest you have a look at Satis. It is a script that can create a package index of all your repositories; then you only have to name the location of the generated packages.json file, not every repository you might use. You do have to think of a way to add new repositories to Satis, though - but you'd need to do it only once, and only there.
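For illustration, a minimal composer.json for P1 that pulls packages from a Satis-generated index instead of listing every private repository might look like this (the URL and package name are placeholders, not from the question):

```json
{
    "repositories": [
        {
            "type": "composer",
            "url": "https://satis.example.com"
        }
    ],
    "require": {
        "myvendor/p2": "^1.0"
    }
}
```

Composer reads the packages.json that Satis publishes at that URL, so P3 (and any other transitive private dependency) resolves without being named in P1's composer.json.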

Related

Using environment variables instead of reverse.dep to make builds "suitable"

Context:
We're running the free version of TeamCity to manage our projects. Some of those projects have dependencies on each other.
The problem
Some projects have chained Snapshot Dependencies, and those dependencies are always being built instead of the latest artifacts from those dependencies being used.
Example: A depends on B, B depends on C. Push A triggers a build of C, followed by a build of B and finally a build of A.
Ideally: A would be built based on the latest built versions of B and C
Where I think the problem lies (but I might be wrong)
Each of our projects has a number of Snapshot dependencies, and each snapshot dependency is configured with the following parameters turned on:
[x] Do not run new build if there is a suitable one
[x] Only use successful builds from suitable ones
For the first option, the documentation says:
If this option is set, TeamCity will not run a new dependency build if another dependency build, in progress or completed, with the appropriate source revision already exists. See also Suitable Builds (https://www.jetbrains.com/help/teamcity/2022.10/snapshot-dependencies.html#Suitable+Builds).
If we look in the Suitable Builds doc, it shows a list of requirements for a build to be considered suitable. The one I think is relevant is here:
It must not have any custom settings, including those defined via reverse.dep. (related feature request: TW-23700: (http://youtrack.jetbrains.com/issue/TW-23700)).
However, we currently have reverse.dep.*.env.SOME_PARAMETER as a Configuration Parameter in every single one of our builds (it's inherited through a template).
Based on that, it seems to me that the "Do not run new build if there is a suitable one" option is doing nothing, and therefore that's why all our dependencies are built every time (or am I wrong?)
We also have, in every one of our builds, an environment variable called env.SOME_PARAMETER which has the same value as the reverse.dep configuration parameter.
My question
Is there a way to avoid using reverse.dep in my situation so that the "Do not run new build if there is a suitable one" option works? Perhaps by using the environment variable instead?
I asked the senior developer at the company I work for, and he said that in theory it should work, but in practice it doesn't; he seems reticent to explain further. I'm just a beginner in TeamCity, so detailed explanations are welcome.
First things first: what is a Snapshot Dependency in a nutshell?
A Snapshot Dependency, in a nutshell, is a dependency between two build configurations which are linked by shared VCS Roots. VCS Roots are sort of like timelines in a book: they represent a chain of events (e.g. a Git commit history) and let you build from a given point in time (e.g. a commit).
Now, TeamCity excels at doing what it is intended to do: Continuous Integration and Deployment. It does so by being tied closely to VCS Roots and, effectively, to changes in these (optionally narrowed-down scopes of the) VCS Roots. A Snapshot Dependency links two build configurations together based on their VCS Roots and the changes in them.
An example
Build A has two VCS Roots, Foo and Bar. Foo and Bar are, say, different Git repositories which Build A needs to fetch before it is able to build the "A" artifact. Build B only needs Foo, and thus only has Foo attached as a VCS Root. Build A has a Snapshot Dependency on Build B, which we configure like yours: "Do not run new build if there is a suitable one" and "Only use successful builds from suitable ones".
So far so good. Now let's push a new commit to the "Foo" repository. TeamCity notices this and potentially triggers a new build of Build A, because the latest build of A is now outdated (it did not include our latest Foo commit). The Snapshot Dependency on B in A links these two build configurations together so that - with the dependency configured as above - we require a build of B which includes the same revision of Foo that build A was kicked off with (e.g. the latest commit). Because such a build does not (yet) exist, a build of B is started and put above build A in the queue.
Simply put: the VCS Root is a timeline, the Snapshot Dependency is a link between two build configurations based on the timeline(s) they have in common, and the configuration of the dependency dictates what should happen when a dependency is needed (e.g. "Do not run new build if there is a suitable one").
If we had manually started a build B with the latest Foo revision included, this would have been a suitable candidate for reuse, and TeamCity would simply remove the queued B build once it discovered that a build of B already exists, which shares the same changes that build A is started with (the latest push to Foo).
If you want just the latest artifacts of B and C...
...use Artifact Dependencies and only these. Removing the Snapshot Dependency of the build removes your need of having the dependency build every time Build A is triggered by a new change in its VCS Root. It however also means that there is no timeline linkage between the two builds and that you yourself need to ensure or be sure that the artifacts produced by B and C are not tightly linked to A. E.g. Build C could produce a driver as an artifact, B could produce a user manual of the driver and build A could just be using the driver, only expecting that it is in a working condition (but otherwise does not depend on changes in it).
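As a sketch, the artifact-only setup could look like this in TeamCity's Kotlin DSL (the build IDs, artifact paths, and DSL package version are hypothetical; adjust to your installation):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*

object BuildA : BuildType({
    name = "Build A"

    dependencies {
        // Artifact dependency only: reuse the newest successful B build's
        // output without forcing B (and, transitively, C) to rebuild.
        artifacts(RelativeId("BuildB")) {
            buildRule = lastSuccessful()
            artifactRules = "driver.zip => lib/"
        }
        // No snapshot() block here, so a push to A's VCS Root no longer
        // queues builds of B and C ahead of A.
    }
})
```

The trade-off described above applies: without the snapshot link, TeamCity makes no guarantee that the artifacts were built from sources consistent with A's revision.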
Your question about reverse.dep.*...
I've not heard about this causing trouble before. I would however expect that a Snapshot Dependency (and not just an artifact dependency) is required by TeamCity for you to be allowed to use it.
Question is: do you need it? It sounds like you've got the value elsewhere already, and honestly fetching values from previous builds is likely going to cause you trouble in the long run, especially if you don't have a specifically good reason to do so.

Gradle: substitute dependency if not found

Maybe I’m approaching this incorrectly but say we have a module A which is built on by B (and others). We will make some changes to each of these two modules on dev branches before merging back to trunk. Sometimes we will make a change to both A and B at the same time. In this case, as they are built as independent modules, we have to publish a snapshot of the A branch and change the A-Snapshot dependency in B to point to the branch snapshot of A.
We currently use the branch information to determine the A dependency: when building the trunk, the version ends up as a trunk snapshot; on a release branch, it defaults to a snapshot of that branch. What would be nice would be to detect that we are on a task branch of B (which we could do), then try to resolve a matching A dependency, and if that doesn't exist fall back to the current approach, which would give us a trunk snapshot, for example.
So in B we would have something like
ifExists('myOrg:A:dev-snapshot').orElse('myOrg:A:trunk-snapshot')
I've seen there are some substitution mechanisms, but I can't see that any of them deal with missing dependencies, so I'm not sure it's possible.
We obviously can and do deal with this manually when it occurs; it would just be nice to incorporate it into the Gradle script somehow, to avoid any accidental merge of the dev snapshot dependency onto the B trunk.
Thanks for any suggestions.
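There is no built-in ifExists/orElse, but the fallback could be approximated in a Gradle Kotlin DSL script by probing for the branch snapshot with a detached configuration before declaring the real dependency. This is a sketch only, using the hypothetical coordinates from the question:

```kotlin
// build.gradle.kts of B -- sketch, assumes the repositories block is
// already configured so both snapshots could be resolved if they exist.
val branchCandidate = "myOrg:A:dev-snapshot"
val fallback = "myOrg:A:trunk-snapshot"

// Probe for the branch snapshot in isolation; resolve() throws if the
// module cannot be found, in which case we fall back to the trunk snapshot.
val chosen = try {
    configurations.detachedConfiguration(dependencies.create(branchCandidate)).resolve()
    branchCandidate
} catch (e: Exception) {
    fallback
}

dependencies {
    implementation(chosen)
}
```

Note that this resolves the probe at configuration time, which adds a network round trip to every build; gating it on the detected branch name would limit that cost to task branches.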

Best practice to manage multiple projects with a (Maven) repository and Maven/Gradle?

This is not exactly a Gradle problem, but a more general build question. We have some projects structured as follows:
-- A (by team 1)
   -- build.gradle
   -- settings.gradle
-- B (by team 1,3)
   -- build.gradle
-- C (by team 2)
-- D (by team 3)
They are separate CVS modules. A, B and D are Java projects managed by two teams (teams 1 and 3). C is a C++ project managed by another team (team 2). B uses C via JNI. (The fact that C is C++ is not that important; the key is that it is developed and managed by another team.) There are two applications: A is the entry point of application 1 (A->B->C), while D is the entry point of application 2 (D->B->C). Development often involves changes at all three levels. All three teams sit together and communicate constantly. In practice, we (team 1) might need some changes for application 1 in C; team 2 works on C and gives us a temporary copy; we might need some further changes after integration. We will go back and forth for several rounds for one problem. Similarly for application 2.
Currently, A, B and D are managed by various Ant scripts and C by make. We started to explore new tools, in particular Gradle. In our first cut, A includes B as a subproject (in the Gradle sense), and they are always built and published together. We also always use the head of C (compiled ourselves from source on Windows, or grabbed from the latest Jenkins build), and when we are happy with the build, we tag all three projects. We recently adopted an internal Artifactory repository and are thinking about how to manage dependencies and versioning. Once we have finished, we will introduce it to team 3 for module D as well.
1. We can include C as a subproject of A and then always build all three from scratch. Similarly, include C and B as subprojects of D. The application name can be part of the version name, for example.
2. Alternatively, we can always depend on a fixed version of C in the repository.
With option 2, we cannot fully depend on the head/snapshot of C, because it may include in-progress development and be unstable. But we need their changes so frequently that it seems impractical to publish a new version for every change in C.
We are wondering what the best practice is for such a setup. A quick internet search does not yield many results. Can anyone give us some advice or point me to some books or documents?
Thank you so much.
From the description, it seems that during development (and, I suppose, in production as well) all three projects are really tightly coupled, so I'd use the first scenario for convenience. I'd also build all the projects together, tag them together, and keep the same versioning pattern - in general, even if it's split up, this is a single project.
The first scenario can be carried on as long as no third-party (external) application, library or team uses any of the projects (I suppose that could only be project C). Once that happens, backward-compatibility issues, pull requests and the like may arise, and the projects should be separated. If there's no chance of such a situation, you shouldn't worry too much. But remember about good (semantic) versioning, and tag the repository so you are always sure which version is deployed to which environment.
UPDATE
After question update.
If you have dependency paths like A->B->C and D->B->C, I'd reorganize the project structure. B should become a subproject of both A and D - or maybe not a subproject but an external library that is added to those projects. B and C should form a separate project, with B depending on C.
All three projects (A, D, and B with C) should be versioned separately (especially B with C), because this is the common part for its clients (A and D).
Now, for convenient development, you can have A and D depend on snapshot libs of B that are built frequently on the CI server and uploaded to Artifactory. This allows you to introduce changes to B and have them visible quickly in projects A and D. But stable releases of B (and hence C) should be maintained completely separately.
I hope I understood the problem well and helped a bit.
P.S. Please also consider using gitflow, a branching model. Then you can use SNAPSHOT versions in the dev branch and a stable version of B in release branches.
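As a sketch of that setup in a Gradle Kotlin DSL build for A (the repository URLs and coordinates are placeholders, not from the question), the dev branch would point at B's snapshot while release branches pin a stable version:

```kotlin
// build.gradle.kts of A -- sketch only
repositories {
    // Internal Artifactory: CI republishes the snapshot on every change to B
    maven { url = uri("https://artifactory.example.com/libs-snapshot") }
    maven { url = uri("https://artifactory.example.com/libs-release") }
}

dependencies {
    // dev branch: pick up the latest CI build of B automatically
    implementation("myorg:b:1.3.0-SNAPSHOT")
    // release branch: pin a stable, separately released version instead
    // implementation("myorg:b:1.2.0")
}
```

Gradle re-checks snapshot modules periodically (every 24 hours by default, tunable via `resolutionStrategy.cacheChangingModulesFor`), which is what makes B's frequent CI builds visible in A and D without version bumps.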

Bamboo's 3.1.1 manual dependency management / dependency blocking feature not working as expected?

We're using Bamboo as our CI environment and have several build dependencies in place (using manual dependency management & the dependency blocking feature). We're using SVN polling as our build strategy with all projects having the same polling frequency.
Assume we have the following build plan structure:
a parent build plan PA for project A,
a child build plan CB for project B being dependent on PA, having selected 'Block build if parent Plans have unbuilt changes' as the dependency blocking strategy
Our goal is to setup a dependency tree so that:
if project B should get built, first check whether A has changes; if so, build PA first and block CB, resuming CB as soon as PA has finished
do this for manual as well as automatic builds (builds triggered as a result of the SVN polling)
The described goal above seems to be exactly what the dependency blocking feature (see http://confluence.atlassian.com/display/BAMBOO/Setting+up+Build+Dependencies) is all about. However, I either have a configuration error or don't correctly understand this feature.
To test, I've constructed the following case:
create a class DummyClassA in A
create a class DummyClassB in B which references DummyClassA so that project A has first to be built for project B to compile
manually invoke CB
I would have expected that through the configuration mentioned above, CB realizes that the parent project has changes and thus needs to block CB, build PA and resume CB.
However, what happened was that a build of CB was attempted and obviously failed (compile error), as DummyClassA was not yet known to project B. It seems as if there is no active checking of the SVN repository for the parent project A when CB is manually triggered - is that correct?
What am I missing here? I'm pretty sure that there must be a simple solution as this scenario comes up in virtually every serious software project so I expect Bamboo to handle this out of the box correctly.
Can anyone shed some light on this?
Best,
Chris

Migrating from SVN to TFS (what to do about externals)

So we are trying out TFS; for the most part you can pretty much figure it out - we created a test project, added it, etc. All went well. The thing I haven't really grokked is what to do about externals: we have two large projects that both reference a shared core project, and some of the other parts of those two solutions are separate projects themselves, which of course are externals as well.
We don't care about the history (we are at a point where we can make a clean break) so I was planning to go the route of 1) create new TFS 2) add projects exported from SVN.
Again, my question is how to handle the externals - if someone could point me in the right direction, that would be great.
Thanks.
Unlike Subversion, TFS branches exist in "path space." So, you could check in your "externals" and create a branch for each distinct version of them you want to reference. Then, you can configure your workspace to reference the appropriate version from the corresponding branch path. Alternatively, you could consider managing these components via NuGet, setting up a private NuGet feed (can be as simple as a UNC path).
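For the NuGet route, a private feed can indeed be just a UNC share; a minimal NuGet.config in the solution root might look like this (the share path and key name are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Shared network folder holding the .nupkg files for the core project -->
    <add key="InternalPackages" value="\\fileserver\nuget-packages" />
  </packageSources>
</configuration>
```

Each consuming solution then references a specific package version of the shared core project, replacing what the svn:externals pins used to do.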
