Automate updating outdated dependencies in CI/CD using `yarn outdated` - continuous-integration

My team is developing a React component library which relies on MaterialUI components. Our employer's customer wants to automatically signal and/or upgrade outdated dependencies (at the very least when the dependency on MaterialUI becomes outdated). We are using yarn as our dependency manager.
I found that yarn lists all outdated dependencies (or a specific dependency, if specified) through the yarn outdated command. One can then upgrade said dependencies using the yarn upgrade command, to which the dependency to be updated is supplied as a parameter. Alternatively, running yarn upgrade-interactive lists the outdated dependencies in a single command and lets the user select which ones to update.
I am wondering if there are ways to automate this process. I tried piping the results of yarn outdated to yarn upgrade as well as to yarn version, but yarn upgrade seems to ignore whatever input it receives and updates every package regardless, and yarn version throws errors saying the versions are not proper semvers.
I realise yarn upgrade-interactive makes this process easy and quick for developers; however, the project is intended to become open source over time, and the customer prefers a centralised solution rather than relying on every individual contributor to track this themselves. As far as I am aware, yarn upgrade-interactive cannot be automated, as it requires user input to select the package(s) to be updated.
Other solutions I found, such as Dependabot or packages like 'yarn-outdated-notifier', seem to only work with GitHub. The project is currently running on Azure DevOps and, when it goes public, will run on GitLab.
Is there any way we could do this in our CI/CD environment or with any (free) solutions? The customer prefers to have as few dependencies as possible.
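One way to get at least the "signal" half without adding dependencies is to lean on the exit code of yarn outdated in a pipeline step. The sketch below is only illustrative: it assumes Yarn 1.x, an Azure DevOps agent (the ##vso logging command is Azure-specific), and uses @material-ui/core as a placeholder for whichever MaterialUI package the library depends on; yarn outdated is generally reported to exit non-zero when newer versions exist, which is worth verifying against your Yarn version.

```bash
#!/usr/bin/env bash
# Sketch: signal (and optionally fail on) an outdated MaterialUI dependency in CI.
# Assumptions: Yarn 1.x, Azure DevOps agent, "@material-ui/core" as a placeholder name.
set -u

if yarn outdated @material-ui/core; then
  echo "MaterialUI dependency is up to date."
else
  # Azure DevOps logging command: surfaces a warning in the pipeline summary.
  echo "##vso[task.logissue type=warning]@material-ui/core is outdated; consider 'yarn upgrade @material-ui/core --latest'."
  # exit 1  # uncomment to fail the build instead of merely warning
fi
```

The same script should carry over to GitLab CI by replacing the logging command with a plain echo, or by letting the non-zero exit status fail a job marked with allow_failure.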

Related

lerna advantage with yarn workspaces, why use lerna at all

I'm trying to set up a monorepo and came across Lerna + yarn workspaces. My problem is: what actual advantage do I get from using Lerna? I can use yarn workspaces alone and have the same functionality. I also came across "Are there any advantages to using Lerna with Yarn workspaces?", but there is no specific answer there as to what Lerna adds.
As stated in the yarn workspaces documentation,
workspaces are the primitives that tools like lerna can use
You might have a project that only needs workspaces as "primitives", and not Lerna. Both are tools, but Lerna, as a higher-level tool than yarn workspaces, helps you organize your monorepo when you are working on open source or in a team:
The install (bootstrap) / build / start scripts live at the project root, meaning one install and one package-resolution pass instead of one per package.
The node_modules at the root is 'fat' while each sub-project's is tiny: one place to look when you resolve package conflicts.
Scripts in sub-projects can be executed in parallel: e.g. starting both the server and the frontend development build in watch mode can be a single command.
Publishing and versioning are much easier.
There is certainly a lot of overlap between the two. The big difference is that Lerna handles package management, versioning and deployment.
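As a rough illustration of those last two points, this is approximately what the day-to-day commands look like with Lerna layered on top of yarn workspaces (a sketch, assuming Lerna 3+ is already configured in the repo and each package defines a "dev" script):

```bash
# Sketch, assuming a repo already configured with Lerna on top of yarn workspaces.
npx lerna run dev --parallel       # run the "dev" script of every package at once, e.g. server + frontend watchers
npx lerna version patch --yes      # bump the version of changed packages and create git tags
npx lerna publish from-git --yes   # publish the packages that were just tagged
```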

NPM caching similar to a local Maven cache

Gradle's dependency management system stores downloaded artifacts in a local Maven cache. When a build requests that same dependency again the dependency is simply retrieved from the cache, avoiding any network transfer of the artifact.
I'm trying to replicate this behavior with NPM for building JavaScript projects. I was expecting NPM to support a global node_modules cache, but installing a package "globally" in NPM has a different meaning: the package's binaries are added to the PATH so that it can be used as a CLI tool.
Reading the documentation for npm install, the standard behavior is to install packages into a local node_modules directory. But this would mean many duplicated packages on the system, wasting valuable disk space. It also poses a problem for clean production builds, since ideally the node_modules should be blown away each time.
Does NPM support something like Gradle's Maven caching? The documentation on npm cache doesn't make it any clearer how it is to be used. What's more, it's not obvious whether a caching strategy with NPM is safe across multiple parallel builds.
This seems like such a basic requirement for busy CI environments that it must have been solved before. I found the npm-cache tool which seems to offer this support, but it would be much better if caching was supported natively in npm itself.
Thanks!
IMHO it is a pity that the makers did not learn from things like Maven that were already there. If you are doing microservices and have many apps on your machine, and perhaps multiple branches or a local Jenkins as well, you will have each dependency N*M times on disk, which is an extraordinary waste of disk space and performance. So you have to be aware that Java and .NET/C# are mature ecosystems, while the JavaScript ecosystem is still in its childhood with lots of flaws and rough edges. But JavaScript is evolving fast, so let's hope for the best. Feel free to discuss your pain with the npm makers (https://github.com/npm/npm/issues/).
However, a partial cure comes if you move away from npm and switch to yarn: http://yarnpkg.com/
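That partial cure mostly comes from yarn's global package cache and its optional offline mirror; a minimal sketch of using them (assuming Yarn 1.x, with an illustrative mirror path):

```bash
# Sketch (Yarn 1.x): reuse yarn's global cache and an offline mirror across builds.
yarn cache dir                                        # print the location of the global cache
yarn config set yarn-offline-mirror ./offline-mirror  # also keep package tarballs next to the project
yarn install                                          # the first run populates the cache and the mirror
yarn install --offline                                # later builds can install without touching the network
```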
npm cache already comes bundled with npm out of the box (it is listed under the CLI commands), and its main utility is to avoid transferring the same package over the network again and again.
Regarding the duplicate packages issue, as of npm v3 there has been an effort to deduplicate dependencies. But it still does not work exactly like Gradle, since it is still possible to end up with duplicates of the same package in your node_modules folder.
Per NPM documentation:
Your node_modules directory structure and therefore your dependency tree are dependent on install order
Although a fresh npm install from the same package.json always produces the same dependency tree:
The npm install command, when used exclusively to install packages from a package.json, will always produce the same tree. This is because install order from a package.json is always alphabetical. Same install order means that you will get the same tree.
So there is at least a way to get consistent dependency trees, albeit with no guarantee that they will be the most efficient ones. At least those differences do not interfere with the correct functioning of npm.
Hope that helps.
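To make the built-in cache actually pay off on a CI agent, one approach is to point it at a directory the agent preserves between builds. A sketch, assuming npm 5.7+ (where npm ci is available) and an illustrative shared path:

```bash
# Sketch (npm 5.7+): share npm's package cache between builds while keeping node_modules clean.
npm config set cache /shared/ci/npm-cache   # illustrative path that the CI agent keeps between builds
rm -rf node_modules                         # clean production build, as described above
npm ci --prefer-offline                     # reproducible install; hits the network only for missing tarballs
```

Because npm ci installs strictly from the lock file, the resulting tree no longer depends on install order, which sidesteps the consistency caveat quoted above.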

What is the best way to save composer dependencies across multiple builds

I am currently using the Atlassian Bamboo build server (cloud based, using AWS) and have an initial task that simply does a composer install.
This single task can take quite a bit of time, which can be a pain when developers have committed multiple times, giving the build server four builds that all download dependencies (these do not run in parallel).
I wish to speed this process up but cannot figure out a way to save the dependencies to a common location for use across multiple builds while still allowing the application (Laravel) to run as intended.
Answer
Remove composer.lock from your .gitignore
Explanation
When you run composer install for the first time, composer has to check all of your dependencies (and their dependencies, etc.) for compatibility. Running through the whole dependency tree is quite expensive, which is why it takes so long.
After figuring out all of your dependencies, composer then writes the exact versions it uses into the composer.lock file so that subsequent composer install commands will not have to spend that much time running through the whole graph.
If you commit your composer.lock file, it will come along to your Bamboo server, and the composer install command will be way faster.
Committing composer.lock is a best practice regardless. To quote the docs:
Commit your application's composer.lock (along with composer.json) into version control.
This is important because the install command checks if a lock file is present, and if it is, it downloads the versions specified there (regardless of what composer.json says).
This means that anyone who sets up the project will download the exact same version of the dependencies. Your CI server, production machines, other developers in your team, everything and everyone runs on the same dependencies, which mitigates the potential for bugs affecting only some parts of the deployments.
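On the Bamboo side, once the lock file is committed, the install step reduces to something like the sketch below. The cache directory path is illustrative; Composer honours the COMPOSER_CACHE_DIR environment variable, so pointing it at a directory shared between builds also avoids re-downloading packages.

```bash
# Sketch: CI install once composer.lock is committed. The cache path is illustrative.
export COMPOSER_CACHE_DIR=/shared/bamboo/composer-cache   # reuse downloaded packages across builds
composer install --prefer-dist --no-interaction --no-progress
```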

composer and satis tags for testing and prod

We're using composer, satis and SVN to manage our in-house PHP libraries.
We commit changes to SVN trunk during development, then tag versions (following semantic versioning) when they're ready for testing.
Once a library version is tagged, we can use composer as part of our deployment to the testing environment. Following successful testing, we'd then deploy that same version to production.
The issue here is that once we've tagged a version for testing, we have to be very careful, as the newly tagged version will be picked up by composer when preparing the next prod release.
What I'm imagining is that we'd tag a version as a beta or RC (e.g. v1.1RC1) and somehow configure our deployment process such that it will refuse to deploy an RC or beta to production. If a version is tested successfully, we'd re-tag it as a released version (v1.1RC1 -> v1.1) and release that.
Can this be achieved?
From what you are saying, I understand that you are afraid of tagging a new version of a library because that code could then be used and break the other application, right?
One approach would be to do good testing. I don't see why it should be a problem to tag a version of a library: if the tests are all green, there should be no reason not to tag it. This would work even if the tests are basically only "let's see if it works, manually".
Now the second step is to integrate that new version into the application: Run composer update and see if the application is still running, i.e. start all the tests and wait for green.
I guess it might be a good idea to have a separate area where you check out the application, intentionally run composer update to fetch all the newest libraries, run all the tests and report that a) there are updates and b) they work. A developer should then confirm the update, i.e. do it again manually and commit the resulting composer.lock file, or grab the resulting lock file from that update test.
I don't think there is a benefit in using non-production release versions. You have to deal with the next version anyway; constantly toggling the minimum-stability setting or adding @RC or @beta flags to the version requirements of the library doesn't really help.
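For reference, if you did want to experiment with the flags mentioned above, this is roughly how they are written ("acme/widgets" is a hypothetical package name):

```bash
# Sketch with a hypothetical package name: stability flags in Composer version constraints.
composer require "acme/widgets:1.1.*@RC"   # testing environment: accepts 1.1 release candidates
composer require "acme/widgets:^1.1"       # production: only stable 1.1.x tags
```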

What is the most effective way to lock down external dependency "versions" in Golang?

By default, Go pulls imported dependencies by grabbing the latest version in master (GitHub) or default (Mercurial) if it cannot find the dependency on your GOPATH. And while this workflow is quite simple to grasp, it has become somewhat difficult to control tightly. Because every software change incurs some risk, I'd like to reduce the risk of such changes in a manageable and repeatable way and avoid inadvertently picking up changes to a dependency, especially when running clean builds via a CI server or preparing to deploy.
What is the most effective way I can pin (i.e. lock down or capture) a package dependency so I don't find myself unable to reproduce an old package, or even worse, unexpectedly broken when I'm about to release?
---- Update ----
Additional info on the Current State of Go Packaging. While I ended up (as of 7.20.13) capturing dependencies in a third-party folder and managing updates (à la Camlistore), I'm still looking for a better way...
Here is a great list of options.
Also, be sure to see the Go 1.5 vendor/ experiment to learn about how Go might deal with the problem in future versions.
You might find the way Camlistore does it interesting.
See the third-party directory and in particular the update.pl and rewrite-imports.sh scripts. These scripts update the external repositories, change imports if necessary, and make sure that a static version of the external repositories is checked in with the rest of the Camlistore code.
This means that Camlistore has a completely repeatable build, as it is self-contained, but the third-party components can be updated under the control of the Camlistore developers.
There is a project to help you manage your dependencies. Check out gopack.
godep
I started using godep early last year (2014) and have been very happy with it (it addressed the concerns I raised in my original question). I am no longer using custom scripts to manage the vendoring of dependencies, as godep just takes care of it. It has been excellent for ensuring that no drift is introduced regardless of timing or a machine's package state. It works with the existing go get mechanism and introduces the ability to pin (godep save) and restore (godep restore) dependencies based on Godeps/Godeps.json.
Check it out:
https://github.com/tools/godep
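A minimal sketch of that workflow (godep has since been superseded by Go modules, but the commands are as described):

```bash
# Sketch of the godep workflow: pin, then restore, the exact dependency revisions.
go get github.com/tools/godep   # install the tool into GOPATH
godep save ./...                # record current revisions in Godeps/Godeps.json (and vendor the source, depending on the godep version)
godep restore                   # on CI or a fresh machine: check out exactly those pinned revisions
```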
There is no built-in tooling for this in Go. However, you can fork the dependencies yourself, either on local disk or in a cloud service, and only merge in upstream changes once you've vetted them.
The third-party repositories are completely under your control. 'go get' clones tip, you're right, but you're free to check out any revision of the cloned-by-go-get or cloned-by-you repository. As long as you don't run 'go get -u', nothing touches the third-party repositories already sitting on your hard disk.
Effectively, your external, locally cloned dependencies are always locked down by default.
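Concretely, that manual pinning looks roughly like this (the import path and revision are purely illustrative):

```bash
# Sketch: pin a dependency manually by checking out a vetted revision inside GOPATH.
go get github.com/example/somedep                # hypothetical import path; 'go get' clones tip into GOPATH
cd "$GOPATH/src/github.com/example/somedep"
git checkout a1b2c3d                             # illustrative commit; pin to the revision you have vetted
# As long as you never run 'go get -u', builds keep using this checkout.
```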
