I have many private repositories at Bitbucket, about 100 (+/-). Building them with php bin/satis build takes a long time, around 3 minutes. How can I refresh just one repository, or otherwise optimize the build time? I have seen Satis configurations whose config.json contains more than 4000 repositories; I cannot imagine how long building all of those must take.
The more private repos you have, the longer Satis will need to build the static repository.
reduce the number of packages, if possible
configure Satis to provide only some, and not all, versions of a repo
switch from "require-all": true to manually listing the repos and specific versions (beware: this is tedious to maintain, but fast; see the sketch after this list)
configure Satis with skip-dev when generating archives (to skip dev branches)
do you need archives at all, or only sources? if only sources, disable archive generation
run Satis from a cron job; the first run takes a while, but after that the cache is used
with a repo count of 100+, I would suggest setting up and switching to a private Packagist server
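To illustrate a few of these points, here is roughly what the manual-listing approach looks like in satis.json; the names, Bitbucket URLs, and version constraints below are placeholders. Listing packages under require replaces "require-all": true, skip-dev stops dev branches from being archived, and dropping the archive block entirely disables archive generation if you only need sources:

```json
{
    "name": "acme/private-satis",
    "homepage": "https://satis.example.com",
    "repositories": [
        { "type": "vcs", "url": "git@bitbucket.org:acme/library-a.git" },
        { "type": "vcs", "url": "git@bitbucket.org:acme/library-b.git" }
    ],
    "require": {
        "acme/library-a": "^1.0",
        "acme/library-b": "~2.3"
    },
    "archive": {
        "directory": "dist",
        "format": "zip",
        "skip-dev": true
    }
}
```

php bin/satis build then only resolves and packages the listed constraints instead of every branch and tag of every repository.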
Satis does not yet support "selective update of repos".
It's a long-standing issue / feature request; see https://github.com/composer/satis/issues/40
I have Artifactory set up and working, serving other artifacts (RPM, etc.).
I would like to have local copies of public and private Go programs and libraries
to ensure version consistency
to allow time for bugs in public repositories to be shaken out
to be protected from unauthorized alterations to public repositories
I've created a Go repository in Artifactory and populated it with, as an example, spf13/viper using the JFrog CLI (which created a zip file and a mod file).
Questions:
Is the zip file the proper way to store Go modules in Artifactory?
How does one use the zip file in a Go program? E.g. the URL to get the zip file is http://hostname/artifactory/reponame/github.com/spf13/viper/@v/v1.6.1.zip (and .mod for the mod file). For example, do I set GOPATH to some value?
Is there a way to ensure that all requirements are automatically included in the local Artifactory repository at the time the primary package (e.g. viper) is added to it?
Answering the 3rd question first:
Here's an article that will help: https://jfrog.com/blog/why-goproxy-matters-and-which-to-pick/. There are two ways to publish private Go modules to Artifactory. The first is the traditional way, i.e. via the JFrog CLI, as highlighted in another article.
The other way is to point a remote repository at a private GitHub repository; this capability was added recently. In this case, the virtual repository will have two remotes: the first defaults to GoCenter, through which public Go modules are fetched, and the second points to the private VCS system.
Setting GOPROXY to ONLY the virtual Go modules repository will ensure that Artifactory continues to be the source of truth for both public and private Go modules. If you want to store compiled Go binaries, you can use a local generic repository, but I would advise using a custom layout to structure the contents of that generic repository.
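As a minimal sketch of the client side, assuming a virtual Go repository named go-virtual on a host artifactory.example.com (both placeholders):

```sh
# Point the Go toolchain exclusively at the Artifactory virtual Go repository.
export GOPROXY="https://<user>:<api-key>@artifactory.example.com/artifactory/api/go/go-virtual"

# Optionally disable the public checksum database if Artifactory is the only
# allowed source (e.g. in an air-gapped environment).
export GOSUMDB=off

# Dependencies in go.mod are now resolved through Artifactory.
go mod download
```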
Answering the first 2 questions:
Go modules are Go's package management system, similar to what Maven is for Java. In Artifactory, there are 3 files for every Go module version: the go.mod file, an .info file, and the archive (zip) file.
Artifactory follows the GOPROXY protocol, so the dependencies listed in go.mod are automatically fetched from the virtual repository. This includes the archive file, which is a collection of the module's packages (source files).
Additional metadata is stored for public Go modules, such as tile and lookup requests, since GoSumDB requests are cached to ensure that Artifactory remains the source of truth for modules and metadata even in an air-gapped environment.
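To relate this back to questions 1 and 2: with GOPROXY set as above, you normally don't download the zip by hand or point GOPATH anywhere special; the go command itself requests the module files over the GOPROXY protocol, which for the viper example maps onto paths roughly like these (hostname and repository name are placeholders):

```
/artifactory/api/go/go-virtual/github.com/spf13/viper/@v/list         -> available versions
/artifactory/api/go/go-virtual/github.com/spf13/viper/@v/v1.6.1.info  -> version metadata
/artifactory/api/go/go-virtual/github.com/spf13/viper/@v/v1.6.1.mod   -> go.mod file
/artifactory/api/go/go-virtual/github.com/spf13/viper/@v/v1.6.1.zip   -> module archive
```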
Our organization has a locally running instance of Artifactory, and also a local instance of Bitbucket. We are trying to get them to play well together so that Artifactory can serve up our private PHP packages right out of Bitbucket.
Specifically, we'd like to create a Composer Remote Repository in Artifactory that serves up our private PHP packages, where those packages are sourced from git repositories on our local Bitbucket server.
Note that we'd rather not create and upload our own package zip files for each new package version, as suggested here. Ideally, we just want to be able to commit changes to a PHP package in BitBucket, tag those changes as a new package version, and have that new version be automatically picked up and served by Artifactory.
The Artifactory Composer documentation suggests that this is possible:
A Composer remote repository in Artifactory can proxy packagist.org
and other Artifactory Composer repositories for index files, and
version control systems such as GitHub or BitBucket, or local
Composer repositories in other Artifactory instances for binaries.
We've spent a lot of time trying to make this work, but haven't been able to. The remote repository that we create always remains empty, no matter what we do. Can anyone offer an example to help, or even just confirm that what we're attempting isn't possible?
For reference, we've been trying to find the right settings to put into the remote repository setup page.
Thanks!
Artifactory won't download and pack the sources for you, it expects to find binary artifacts.
The mention of source control in the documentation refers to downloading the archives from source control systems, either uploaded there as archives (don't do that), or packed by the source control system on download request (that is what you are looking for).
You can use the Bitbucket REST API to download automatically generated zips. If you can configure the Composer client to look for the packages in the right place, you're all set.
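A rough sketch of that Composer-side configuration, using a static package entry whose dist URL points at the Bitbucket archive endpoint; every name, version and URL here is a placeholder, and the exact archive URL depends on your Bitbucket Server version (older releases needed an archive plugin):

```json
{
    "repositories": [
        {
            "type": "package",
            "package": {
                "name": "acme/private-lib",
                "version": "1.2.0",
                "dist": {
                    "type": "zip",
                    "url": "https://bitbucket.example.com/rest/api/latest/projects/ACME/repos/private-lib/archive?at=refs%2Ftags%2Fv1.2.0&format=zip"
                }
            }
        }
    ]
}
```

Each new tag still needs its own entry here (or in a Satis/packages.json index), which is the main downside of this approach.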
I'm developing some libraries that I frequently use across more than one project, and I use Git to version them.
Now, I'd like to use them through Composer.
Here comes my question: Composer lets me specify private repositories from which I can pull source code to include in my apps (https://getcomposer.org/doc/05-repositories.md#using-private-repositories).
Then I found Satis: https://getcomposer.org/doc/articles/handling-private-packages-with-satis.md#satis
Now, I don't really understand the difference between the two, or what advantages using Satis has over using private repositories through Composer's own capabilities.
Do I really have to set up a Satis server? What advantages does it bring me?
By default, Composer looks up the dependencies from your composer.json on a special public package repository named Packagist.
Packagist stores each added repository location and its dependencies.
When you run composer install, Composer asks Packagist for dependencies and their locations and then downloads them.
But when you have a really big project with a lot of dependencies, and/or you build your project rather frequently, you can soon run into two problems.
The first and main problem is speed. If you don't have a fast internet connection, having every member of your team build the app at the same time can take plenty of time.
The second problem is that public repository hosting services like GitHub usually have API rate limits.
You can solve both of these problems by setting up a mirror of Packagist with Satis in your local infrastructure. In that case Composer won't go to Packagist for your dependencies, but will ask your Satis server for them instead.
Packagist is a public service, but sometimes you want to add one of your own private repositories as a dependency. You can add a special entry to your composer.json to make Composer download that package from another location.
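Such an entry looks roughly like this (the repository URL, package name and branch are placeholders); Composer then clones the repository itself and reads the package's own composer.json:

```json
{
    "repositories": [
        { "type": "vcs", "url": "git@git.example.com:acme/private-lib.git" }
    ],
    "require": {
        "acme/private-lib": "dev-main"
    }
}
```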
And if you want, you can also make Satis mirror your private repositories just as it does the public ones.
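With Satis in the picture, every consuming project replaces those per-repository entries with a single pointer to the Satis-generated repository (the URL is a placeholder for wherever you host the Satis output):

```json
{
    "repositories": [
        { "type": "composer", "url": "https://satis.example.com" }
    ],
    "require": {
        "acme/private-lib": "^1.0"
    }
}
```

Composer then downloads the pre-resolved metadata (and optionally pre-built archives) from your own server instead of cloning each private repository and hitting GitHub's API.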
Hudson provides the option to have a Maven build job use a private local repository, or use the common one from the Maven installation, i.e. one shared with other build jobs. I have the sense that our builds should use private local repositories to ensure that they are clean builds. However, this causes performance issues, particularly with respect to the bandwidth needed to download all dependencies for each job. We also have the jobs configured to start with a clean workspace, which seems to nuke the private Maven repo along with the rest of the build space.
For daily, continuous integration builds, what are the pros and cons of choosing whether or not to use a private local maven repository for each build job? Is it a big deal to share a local repo with other jobs?
Interpreting the Jenkins documentation, you would use a private Maven repository if:
You end up having builds incorrectly succeed just because you have all the dependencies in your local repository, despite the fact that none of the repositories in the POM might have them.
You have problems with concurrent Maven processes trying to use the same local repository.
Furthermore
When using this option, consider setting up a Maven artifact manager
so that you don't have to hit remote Maven repositories too often.
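Wiring in such an artifact manager is usually just a mirror entry in settings.xml; a minimal sketch, with the repository-manager URL as a placeholder:

```xml
<!-- ~/.m2/settings.xml (or a Jenkins-managed settings file):
     route all remote repository requests through the local repository manager. -->
<settings>
  <mirrors>
    <mirror>
      <id>local-repo-manager</id>
      <name>Local repository manager</name>
      <mirrorOf>*</mirrorOf>
      <url>https://nexus.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```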
Also, you could explore your SCM's clean option (rather than a full workspace clean) to avoid this repository getting nuked.
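One related workaround, if the built-in private-repository option keeps getting wiped with the workspace: pass Maven an explicit per-job repository that lives outside the workspace. The path and the use of $JOB_NAME below are assumptions about a freestyle shell build step:

```sh
# Per-job local repository kept outside the workspace:
#  - isolated from other jobs (no concurrent access to the same local repo)
#  - survives the "clean workspace" option, so dependencies stay cached
mvn -Dmaven.repo.local="$HOME/.m2/job-repos/$JOB_NAME" clean install
```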
I believe Sonatype recommends using a local Nexus instance, even though their own research (the 2015 State of the Software Supply Chain report) shows that less than 5% of traffic to Maven Central comes from such repositories.
To get back to the question: assuming you have a local Nexus instance and high-bandwidth connectivity (tens of Gbps at least) between your build server (e.g. Jenkins) and Nexus, I can see few drawbacks to using a private local repo; in fact, I would call the decrease in build performance a reasonable trade-off.
That said, what exactly are we trading off? We accept a small performance penalty on the downside, and on the upside we know with 100% certainty that independent, clean builds work with our local Nexus instance as the proxy.
The latter is important: consider the scenario where the local repo on the build server (probably in the jenkins user's home directory) has an artefact that is not cached in Nexus (not improbable if you started off your builds against Maven Central). This out-of-sync situation is suboptimal because your cache TTL settings in Nexus can mean that builds fail if Nexus' upstream connectivity to Central goes down temporarily.
Finally, to add more to the benefits side of the trade-off: I spent hours today on an artefact issue in the shared Jenkins user's .m2/repository. Earlier in the day, upstream connectivity to Central was up and down for hours (a mysterious issue in an enterprise context). In the end I deleted the entire shared .m2/repository so that everything would be retrieved from the local Nexus.
It's worth considering having some builds use the shared .m2/repository (in the jenkins user's home directory) as well as builds that use private local repositories (fast and less-fast builds). In my case, however, I may opt for private local repositories only in the first instance; I may be able to accept the penalty if I optimise the build by focusing on low-hanging fruit (e.g. splitting up a multi-module build).