Installing Drupal 8.x using composer BUT using a local mirror

I'm installing Drupal 8.x via Composer, downloading all dependencies from the Internet, and everything works fine.
However, this way there is no guarantee that the same versions of the dependencies will be available every time I install. One server might end up with a newer version of a module than another Drupal server if I install at a different time. I would like to guard against this by using a local mirror.
Is it possible to provide a local mirror to composer and how?
Any example / reference / suggestions?

If you are worried about the versions, the best way would be to define the exact versions you want in your composer.json if needed. But apart from that, after you install your dependencies you have a composer.lock file that records the exact versions. This file is committed to your version control and used as the base for installs: this way you always get the same versions (until you update, of course).
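For example, a composer.json with fully pinned versions might look like this (the package names and version numbers below are purely illustrative):

{
    "require": {
        "drupal/core-recommended": "8.9.20",
        "drush/drush": "10.6.2"
    }
}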
A separate problem might be that there is no Internet access, or that the specific versions are not available for some reason. This shouldn't happen (often), but in that case you should catch it before you 'release'.
The best practice would be to build first (finding out whether all packages are available) and then release. You could even create a separate build server that assembles your project, including the vendor dir, and push from there. The fact that your vendor dir is not in your version control does not mean you have to fetch all dependencies on your production server every time.
This means you have a local copy of your vendor directory, which is not a local mirror for Composer per se, but close enough for comfort.
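A rough sketch of that build-then-push flow, assuming an rsync-based deploy (the host and paths here are made up):

# on the build server: resolve and download everything once
composer install --no-dev --prefer-dist

# ship the fully built project, vendor dir included, to production
rsync -az --delete ./ deploy@prod-server:/var/www/drupal/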

Related

How can I use the maven rpm plugin to deploy a patch?

I'm using Maven's RPM plugin to create an RPM to package up my project (Java, JSP, JavaScript) and that is working fine. I've now had to make some small changes to a JSP page and a JavaScript file. I'd like to be able to create an RPM that just does an update or patch, instead of the whole project. Is that possible? If so, how?
You can't. Or maybe not in the way that you're hoping for.
You can create a delta RPM, but delta RPMs were designed to save bandwidth when distributing RPMs, not to apply patches. That is, they must be served from a repository, and your package manager (YUM/DNF) takes the base RPM that is already installed plus the delta, reconstructs a full new RPM, and installs that.
Note that to keep YUM/DNF happy, you must still supply the full new RPM so that clients that don't have the base RPM installed can do an install.
You would also have to consider what you would do for the next version that comes out. Are you going to provide deltas from both previous RPMs?
So yes, you can achieve the effect that you're looking for, but unless you're really trying to save bandwidth it may not help you. Typically, unless your download is slow, it takes longer to reconstitute the new RPM from the base and the delta than it does to download the full version.
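For reference, this is roughly what building and applying a delta looks like with the deltarpm tools (the package file names below are made up):

# build a delta between the old and the new full RPM
makedeltarpm myproject-1.0-1.noarch.rpm myproject-1.1-1.noarch.rpm myproject-1.0_1.1-1.noarch.drpm

# reconstruct the new full RPM from the installed base plus the delta, then install it
applydeltarpm myproject-1.0_1.1-1.noarch.drpm myproject-1.1-1.noarch.rpm
rpm -U myproject-1.1-1.noarch.rpm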

Getting leiningen to cache packages

In a ClojureScript project I'd like Leiningen to be less reliant on the Internet connection during our CI builds. I was hoping to get it to cache packages on a network disc (using the :local-repo setting to create a "shared cache") and then add it as a repository in such a way that it fetches from there first, and only goes to Clojars and other external sites when it can't find a package in the "shared cache".
I read this, removed my ~/.m2 folder, and added the following to my project.clj:
:profiles {:local-cache
           {:local-repo "/shared/disc/clojars-cache"
            :repositories {"local" {:uri "file:///shared/disc/clojars-cache"
                                    :releases {:checksum :ignore}}}}}
The initial build with lein with-profile +local-cache cljsbuild does indeed populate the cache, but:
- my ~/.m2/repository folder is recreated and filled with stuff, though it seems to be only the Clojure artifacts needed by Leiningen itself, and
- after removing ~/.m2, subsequent rebuilds don't seem to use the local repository at all and instead download from Clojars anyway.
Clearly I'm missing something... or maybe I'm going about this in the completely wrong way.
In short, how can I get Leiningen to:
- create a cache of packages on a network disc, and
- prefer this cache as the source of packages (over external sources like Clojars)?
Leiningen already prefers ~/.m2 by default. It will only go to Clojars if it doesn't already have a copy of the requested JAR stored locally in ~/.m2. The exception to this rule is SNAPSHOT versions, where by default it will go out to the network once a day to check whether the SNAPSHOT version it has is the latest.
You can set the :offline? key as true if you don't want Leiningen to go to the network at all.
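As a minimal sketch (the project name here is illustrative), that looks like:

(defproject my-app "0.1.0-SNAPSHOT"
  ;; never touch the network; fail if a dependency is not already in the local repository
  :offline? true)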
Answering your questions:
How can I get Leiningen to create a cache of packages on a network disk?
Leiningen already creates a cache of packages in ~/.m2. You could symlink that directory to your network disk, or use :local-repo as you are now, although it sounds like :local-repo isn't working for you?
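One way to do the symlink option (the paths are illustrative) would be:

# move the existing cache onto the network disc and point ~/.m2 at it
mv ~/.m2 /shared/disc/m2-cache
ln -s /shared/disc/m2-cache ~/.m2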
How can I get Leiningen to prefer this cache over external sources?
Leiningen already does this. It sounds like :local-repo either isn't working, hasn't been configured correctly, or that directory isn't writable by Leiningen?
Stepping back to look at the wider problem, you want to prevent unnecessary network traffic in your CI builds. Leiningen already caches every dependency by default. You haven't said which CI tool you're using, but they should all have the ability to cache the ~/.m2 folder between runs. Depending on the tool, you'll have to download your deps either once per project or once per machine. I'd recommend sticking with that, rather than trying to share deps over the network, as that could lead to hard-to-debug test failures.
If that's not going to work for you, can you provide more details about your setup, and why you'd like Leiningen to be less reliant on the network in your CI builds?
UPDATE: After seeing that GitLab CI is being used, it looks like you may need to add a caching config:
cache:
  paths:
    - ~/.m2

What is the best way to save composer dependencies across multiple builds

I am currently using the Atlassian Bamboo build server (cloud based, using AWS) and have an initial task that simply does a composer install.
This single task can take quite a bit of time, which is a pain when developers have committed multiple times, giving the build server four builds that all download dependencies (these do not run in parallel).
I wish to speed this process up but cannot figure out a way to save the dependencies to a common location for use across multiple builds while still allowing the application (Laravel) to run as intended.
Answer
Remove composer.lock from your .gitignore
Explanation
When you run composer install for the first time, Composer has to check all of your dependencies (and their dependencies, etc.) for compatibility. Running through the whole dependency tree is quite expensive, which is why it takes so long.
After figuring out all of your dependencies, composer then writes the exact versions it uses into the composer.lock file so that subsequent composer install commands will not have to spend that much time running through the whole graph.
If you commit your composer.lock file, it'll come along to your Bamboo server. The composer install command will be waaaayy faster.
Committing composer.lock is a best practice regardless. To quote the docs:
Commit your application's composer.lock (along with composer.json) into version control.
This is important because the install command checks if a lock file is present, and if it is, it downloads the versions specified there (regardless of what composer.json says).
This means that anyone who sets up the project will download the exact same version of the dependencies. Your CI server, production machines, other developers in your team, everything and everyone runs on the same dependencies, which mitigates the potential for bugs affecting only some parts of the deployments.
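In practice that boils down to something like this (a sketch; the flags and commit message are just one reasonable choice):

# locally, after removing composer.lock from .gitignore as described above
git add composer.json composer.lock
git commit -m "Commit composer.lock so builds install pinned versions"

# on the Bamboo build: install exactly what the lock file specifies
composer install --no-dev --prefer-dist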

composer and satis tags for testing and prod

We're using composer, satis and SVN to manage our in-house PHP libraries.
We commit changes to SVN trunk during development, then tag versions (following semantic versioning) when they're ready for testing.
Once a library version is tagged, we can use composer as part of our deployment to the testing environment. Following successful testing, we'd then deploy that same version to production.
The issue here is that once we've tagged a version for testing, we have to be very careful, as the newly tagged version will be picked up by composer when preparing the next prod release.
What I'm imagining is that we'd tag a version as a beta or RC (e.g. v1.1RC1) and somehow configure our deployment process so that it will refuse to deploy an RC or beta to production. If a version is tested successfully, we'd re-tag it as a released version (v1.1RC1 -> v1.1) and release that.
Can this be achieved?
From what you are saying, I understand that you are actually afraid of tagging a new version of a library because that code could actually be used and break that other application, right?
One approach would be good testing. I don't see why it should be a problem to tag a version of a library. If the tests are all green, there should be no reason not to tag it. This would work even if the tests are basically just "let's see if it works, manually".
Now the second step is to integrate that new version into the application: Run composer update and see if the application is still running, i.e. start all the tests and wait for green.
I guess it might be a good idea to have a separate area where you check out the application, intentionally run composer update to fetch all the newest libraries, run all the tests and report that a) there are updates and b) they work. A developer should then confirm the update, i.e. do it again manually and commit the resulting composer.lock file, or grab the resulting lock file from that update test.
I don't think there is a benefit in using non-production release versions. You have to deal with the next version anyway, and constantly toggling the minimum-stability setting or adding @RC or @beta flags to the version requirements of the library doesn't really help.
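For completeness, this is what those stability constraints look like in composer.json (the package name is made up); the point above is that maintaining them is usually more trouble than it is worth:

{
    "require": {
        "acme/library": "1.1.*@RC"
    },
    "minimum-stability": "RC"
}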

Can I force Fuse Fabric Maven Proxy to push an updated version of the same jar to containers

I've developed a project that has a bundle whose only purpose is to write a file to a certain location on all of the containers running it.
This file will change often, but the changes do not really warrant an increase in the version number. I also don't want to have 100 versions of this bundle in my repository, so I have left it as a snapshot. This question would also apply if I were doing active development on a project for Fuse Fabric.
Once built, I deploy the bundle to my fabric's maven proxy with:
mvn deploy:deploy-file -Dfile=target/file-1.0-SNAPSHOT.jar -DartifactId=file -DgroupId=com.some.id -Dversion=1.0.0 -Dtype=bundle -Durl=http://username:password@hostname:port/maven/upload
I can then add my bundle to a profile with:
mvn:com.some.id/file/1.0.0
This works the first time.
Then I make a change to the file, rebuild the bundle, and deploy with exactly the same command. I remove the bundle from the profile and add it back in. The maven proxy on the fabric server has the new bundle in it if I check $FUSE_HOME/data/maven/proxy/com/some/id/file/1.0.0/
But on all of the containers running the profile on a separate server, the bundle is not updated. I assume because the version has not changed. However, fabric should be smart enough to tell the difference, as the md5 should be different.
For now I can change the version number and my problem is solved, or clear the maven proxy by hand. But in production I will not be able to clear the proxy on every server, nor can I expect someone to come up with a unique version for the bundle every time they make a small change to this file (which should happen often).
I have already tried adding updatePolicy=always to the fabric maven configuration, but I believe that only affects repositories that it is pulling from, not the proxy.
Any advice on the best way to solve this problem is welcome.
If you are using containers, your old artefacts must be cached in
$FUSE_HOME/instances/CONTAINER_NAME/data/maven/agent/
Delete the old artefacts from here and stop/start your container.
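Roughly, on each affected container (using the group/artifact from the question; the exact restart mechanism depends on how the container is managed):

# remove the cached copy of the bundle from the container's agent cache
rm -rf $FUSE_HOME/instances/CONTAINER_NAME/data/maven/agent/com/some/id/file/1.0.0

# then stop and start the container so the agent downloads the bundle again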
