In Google Cloud Artifact Registry I've created a registry using the Maven type with the remote functionality that is supposed to mirror Maven Central.
It just doesn't work. My local test project is not able to fetch anything from that registry. I assumed it would populate the GCP registry cache from Maven Central and propagate any artifacts to my local .m2 directory.
In the GCP web console the remote type is marked as private preview, but it is not explained what that actually means. I can definitely access this configuration, so I assumed it should work :/
Any ideas?
The public documentation already mentions that GCP remote Artifact Registry is not generally available to all customers:
Feature availability: This feature is only available to users who signed up for Artifact Registry private preview. For more information, contact your Google representative.
The term Private Preview means that the feature is ready for testing by customers and has limited support, but it is not complete yet.
Have a look at this document for details about launch stages.
I have access to a private Nexus Repository and would like to speed up my CI builds, and I thought I could use the private repository to store and access my build cache. Is this a possibility or a dead end?
It works like a breeze.
Just create a "Raw" repository and give a user write permission for it.
This user is then used to fill the cache, and you can use another user or anonymous access to read from it.
I just tried it minutes ago.
Any web server that supports PUT for storing files and GET for retrieving the same files should be fine with the default HttpBuildCache implementation.
You can even provide your own client-side implementation to use any remote service you want as a build cache.
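For illustration, a minimal settings.gradle sketch, assuming a Nexus raw repository named build-cache on a hypothetical host (the user name and environment variable are placeholders too):

```groovy
// settings.gradle: host, repository name, user, and env var are placeholders
buildCache {
    remote(HttpBuildCache) {
        url = 'https://nexus.example.com/repository/build-cache/'
        credentials {
            username = 'cache-writer'   // the user with write permission on the raw repo
            password = System.getenv('NEXUS_CACHE_PASSWORD')
        }
        push = true   // enable pushing only where the cache should be filled, e.g. on CI
    }
}
```

Developer machines would typically keep push = false and read anonymously, matching the write-user/read-user split described above.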
No.
Gradle's remote build cache is one of the selling points of Gradle Enterprise, so it's not something you can just "plug in" to another piece of software like Nexus.
There is however a Docker image that is designed to work with Gradle Enterprise. Maybe you could make use of that somehow.
But again, the remote build cache is a selling point of Gradle Enterprise and as a result is designed to work with Gradle Enterprise.
https://gradle.com/build-cache/
I am playing around with Nexus 3.1.0-04 OSS. I created a new Maven-style repository called test that proxies http://repo1.maven.org/maven2/org/apache/maven. After setting this up, I tried to view the contents of the test repository, but nothing is shown; I get a "no component found in repository" message. Why is this? What am I missing? If I open http://repo1.maven.org/maven2/org/apache/maven in a browser I am able to see all its contents.
By default, the local proxy is empty. The best way to get components in is to build a Maven project. Of course, make sure your Maven settings are configured to point to Nexus: https://books.sonatype.com/nexus-book/reference3/maven.html#maven-sect-single-group
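As a sketch, the relevant ~/.m2/settings.xml fragment could look like this, assuming a default Nexus 3 install on a hypothetical host (host, port, and group name are placeholders):

```xml
<!-- ~/.m2/settings.xml: host, port, and repository group are placeholders -->
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.com:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```

Once a build runs against this configuration, the proxy repository fills with whatever the build pulled, and those components become visible when browsing.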
I should also add that Nexus 3 provides a task for this: Publish Maven indexes. Maven indexes can be used to download an index of the components available in your repository, allowing users connecting to it to discover components. The task publishes the index for all Maven repositories or a specific one (hosted, group, or proxy). This task will not populate the Browse UI; we did that intentionally so you only see which components and assets are available locally. More on the task here: https://books.sonatype.com/nexus-book/reference3/admin.html
I use Gradle on Android Studio 1.1, so I can use the "maven-publish" plugin. With it, I can publish to repositories.
Can I use Google Cloud Storage to host a Maven repository?
Or do I have to create a Bitbucket/GitHub repository for this?
Thanks
You can have a Google Cloud Storage bucket act as a remote Maven repository to host your artifacts and dependencies, as long as you do not need those artifacts and dependencies updated more than once per second. While there is no limit on the number of objects you can create in a Google Cloud Storage bucket, and no limit on writes across multiple objects, each object has an update limit of once per second, so rapid writes to a single object won't scale.
That being said, you can configure a Google Cloud Storage Bucket as a Website and use it as your remote repository.
By default, Maven will download from the central repository. To override this, you need to specify your GCS Bucket in your pom.xml as shown in Using Mirrors for Repositories.
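As a sketch, one way to declare the bucket as a repository in pom.xml, assuming the bucket is served through the public storage.googleapis.com endpoint (the bucket name is a placeholder):

```xml
<!-- pom.xml: the bucket name is a placeholder -->
<repositories>
  <repository>
    <id>gcs-maven-repo</id>
    <url>https://storage.googleapis.com/my-maven-bucket</url>
  </repository>
</repositories>
```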
To upload your artifacts to your bucket you must authenticate with a username and password in your ~/.m2/settings.xml file. The username and password need to be retrieved from the Google Cloud Console:
In the menu on the left, select Storage → Settings → Interoperability.
If no keys are listed under Interoperable storage access keys, select "Create a new key".
Use the Access Key as the username and the Secret as the password.
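Those values then go into ~/.m2/settings.xml in a sketch like the following, where the <id> must match the repository id used in your pom.xml and the key values are placeholders:

```xml
<!-- ~/.m2/settings.xml: the key values below are placeholders from the Cloud Console -->
<settings>
  <servers>
    <server>
      <id>gcs-maven-repo</id>
      <username>GOOG1EXAMPLEACCESSKEY</username>
      <password>exampleInteroperabilitySecret</password>
    </server>
  </servers>
</settings>
```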
You should set up a private Maven repository using https://github.com/renaudcerrato/appengine-maven-repository.
Make a separate App Engine project, e.g. yourcompanyname-maven.appspot.com.
Set up the username/password configuration and you're all set.
I want to set up a development environment that allows reusing some artifacts from public Maven repositories like Maven Central and Codehaus. Specifically, I like the concept of transitive dependencies.
In our company, our production network cannot export any data outside, but we can push data inside. We already have some gateways to copy files from the outside into our network. I could therefore use them to copy the required packages manually, but we would lose the power of Maven. In our case, the perfect solution would be to be able to get data from public repositories while being forbidden to deploy to any external repo.
So I would like to have your expert view on this problem.
We can use various means, as long as it remains guaranteed that no data can be exported outside our network:
External packages are created on a disk area that is read-only from production servers.
Some HTTP requests are filtered.
Using a repository manager, such as Nexus.
In the repository management guide, Nexus describes this possibility (http://books.sonatype.com/nexus-book/reference/confignx-sect-manage-repo.html). I would like confirmation from you about how secure this is; specifically, it must be updatable only by the IT manager.
Regards,
Loïc.
This is completely feasible and a common setup with Nexus. Here are the steps, roughly:
Lock all developers and CI server inside the network disallowing direct access to outside servers
Setup Nexus to proxy external repositories like Central as desired
Allow Nexus to reach those external repositories via the proxy
Configure developers and CI server machines to access Nexus to get the dependencies (and transitive dependencies) as desired
Optionally you can also
Configure CI servers to deploy any internal packages to Nexus (see the sketch after this list)
Configure deployment tools to get components for deployment from Nexus
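For the deployment step, a sketch of the pom.xml distributionManagement section; host and repository names are placeholders, and matching <server> credentials go into the CI server's settings.xml:

```xml
<!-- pom.xml: host and repository names are placeholders -->
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>http://nexus.internal.example.com:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>http://nexus.internal.example.com:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```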
Also note this can be done with different repository formats and toolchains. The common one is Maven, but Nexus also supports npm, NuGet, RubyGems, sites, YUM and others.
And if you want to make some of your packages in Nexus available to the outside you can configure this as well following multiple options.
Also note that a proxy repository is by definition read-only in terms of deploying to it directly. That's what a hosted repository is for...
Do you know a way to configure Nexus OSS so that it publishes the artifact repository to a remote server in a form that can be statically served, e.g. by Apache httpd? I'd like to use this static copy to serve only my own artifacts, so the Nexus server could actively trigger an update whenever something new is published.
Technically, I think it should be possible to create the metadata for the repo and store it in a static file, but I'm not sure about that. Any hints appreciated.
If another repo manager can achieve that, it would be fine for me as well.
I clearly understand the advantages of using the repo manager directly, but due to IT rules I can run Nexus only internally, and it would be necessary to have these artifacts available in a (private) repo copy on the Internet as well.
A typical way to satisfy this kind of IT requirement of only exposing known servers like Apache httpd is to set up Apache httpd as a reverse proxy, as documented here.
You can use that approach in a more restrictive way by exposing only a specific repository, or better a repository group (so you can combine snapshots and releases), and tying that together with a specific user or a suitably restricted setup of the anonymous user that is used by default when no credentials are passed through.
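A minimal httpd fragment sketching that idea, assuming mod_proxy is enabled; the internal host and repository group names are placeholders:

```apache
# httpd.conf fragment: internal host and repository group are placeholders
ProxyRequests Off
# Expose only a single repository group, not the whole Nexus instance
ProxyPass        /repository/public-group/ http://nexus.internal:8081/repository/public-group/
ProxyPassReverse /repository/public-group/ http://nexus.internal:8081/repository/public-group/
```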
Also, if you need more help, feel free to contact us on the user mailing list or on HipChat.