How to force authentication to an Archiva internal repository?

Big problem: my Archiva internal repo (and maybe the snapshot repo too, though I can't tell yet since I haven't put any snapshots in there) seems to be accessible to the public.
I.e. if someone wanted the surefire plugin from my repo, they could download it by simply going to https://my.repo.url.com/archiva/repository/internal/org/apache/maven/surefire/surefire-junit3/2.7.1/surefire-junit3-2.7.1.jar
They could download the file right then and there. It would be a shame if actual project jars and such were also available to the general public, and I can't figure out how to disable anonymous access for the life of me.
I authenticate via LDAP.
Thanks!

To expand on Raghuram's answer, you should consider using separate managed repositories for your own releases, as opposed to artifacts proxied from an internet repository (which is what internal is configured to do by default).
Part of the confusion here is the legacy name of internal, which no longer accurately represents its meaning.

One possibility is that you have a guest user which has the Repository Observer role. You can either remove the user or remove the role from it. There is an FAQ entry that describes the opposite of what you need.
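Once guest access is removed, builds will need to supply credentials. As a minimal sketch (the server id "archiva.internal" and the credentials are placeholders), the matching entry in ~/.m2/settings.xml would look like this; the id must match the repository or mirror id used in your build:

<settings>
  <servers>
    <server>
      <!-- placeholder id; must match the <id> of the repository/mirror in your POM -->
      <id>archiva.internal</id>
      <username>your-ldap-user</username>
      <password>your-ldap-password</password>
    </server>
  </servers>
</settings>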

Related

How does Maven authentication work?

I want to create a private Maven repository where the access rules are not based on groups/patterns but on completely custom rules. I've checked both Nexus and JFrog Artifactory; both of them keep the simple user/group/pattern approach. And (AFAICS), although they provide custom ways to authenticate, they don't provide a way to define custom access rules.
For this reason I have started thinking about the opposite approach: what if I create a simple repository with my own custom rules? But when I searched the Apache documentation, there was no clear explanation of how authentication is performed on the server side.
Does anyone know how this is done, and can maybe point me to the correct documentation?
Authentication is done via HTTP Basic Authentication, which concatenates the username and password (separated by a colon) and Base64-encodes the result into an Authorization header. So Maven and Apache understand each other out of the box.
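For illustration, with made-up credentials alice/s3cr3t, an artifact request carries a header like this:

GET /repository/internal/org/example/app/1.0/app-1.0.jar HTTP/1.1
Host: repo.example.com
Authorization: Basic YWxpY2U6czNjcjN0

Here YWxpY2U6czNjcjN0 is simply Base64("alice:s3cr3t"), which is trivially reversible, so Basic authentication should only ever be used over HTTPS.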
But out of the box, Apache authorization is based on, you guessed it: directories (which map to Maven group IDs), usernames, and groups. So unless you are willing to write a custom Apache authorization module you won't gain a lot. IP-based access control can probably be done better with Apache alone than with Nexus/JFrog, but I haven't looked at those authentication settings for ages.
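To make the stock model concrete, here is a hedged httpd sketch (paths, realm name, and file names are placeholders) that serves a repository from disk, requires a valid account everywhere, and restricts one groupId's directory tree to a group:

<Directory /var/www/maven-repo>
    # whole repository: any valid account may read
    AuthType Basic
    AuthName "Private Maven Repository"
    AuthUserFile /etc/httpd/maven.htpasswd
    Require valid-user
</Directory>

<Directory /var/www/maven-repo/com/mycompany>
    # the com.mycompany groupId: members of "developers" only
    AuthType Basic
    AuthName "Private Maven Repository"
    AuthUserFile /etc/httpd/maven.htpasswd
    AuthGroupFile /etc/httpd/maven.groups
    Require group developers
</Directory>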
In Artifactory, what you can do in order to achieve what you mentioned is to create a permission target per user. That means all of your Maven users will deploy to the same repository, BUT each to a different namespace, for example 'com/{company}/{project}/' (replace the company and project with real values).
This is done on the permission target using the 'Include Pattern'. So let's say my company name is JFrog and I'm working on a project named 'artifactory': I would have a permission target with the following include pattern: '/com/jfrog/artifactory/**/*'.
You can also create those permission targets with a script that automates it for you via the REST API.
With that include pattern in place, I will only be able to reach this namespace.
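As a sketch of that automation (the host, repository name, user, and permission codes are placeholders, and the exact JSON payload can vary between Artifactory versions, so check the REST API documentation for yours):

# create or replace a permission target scoped to one namespace
curl -u admin:password -X PUT \
  -H "Content-Type: application/json" \
  "https://artifactory.example.com/artifactory/api/security/permissions/artifactory-project" \
  -d '{
        "name": "artifactory-project",
        "includesPattern": "com/jfrog/artifactory/**/*",
        "repositories": ["libs-release-local"],
        "principals": { "users": { "alice": ["r", "w", "n"] } }
      }'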
Does that help?

Can Nexus/Artifactory store copies from a public repository?

The requirements are as follows. We need copies of the binaries our projects depend on stored on our own repository server. We can't just proxy the public repository, because we have had several cases in the past where a binary on the public repository was changed without a change to its release number, and we want to avoid the problems that causes; we want to decide manually when to download a binary from the public repository and when to update it. No changes are ever to be made to a binary stored on our repository server without manual interaction.
Is there a way to achieve this, i.e. to say "I want artifacts X, Y, Z copied to my repository server" (preferably including their dependencies)? Is this possible with either Nexus or Artifactory?
Yes. In Nexus, define your own local repository, manually download the versions you want, and add them to that repository. You may have to set up "manual routing" for dependency resolution to ensure that Nexus consults the repositories in the correct order.
Then make sure your pom files refer to the specific versions you have downloaded.
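For the upload step, a hedged example using the deploy plugin's deploy-file goal (the coordinates, repositoryId, and URL are placeholders; the repositoryId must match a <server> entry holding your Nexus credentials in settings.xml):

mvn deploy:deploy-file \
  -Dfile=library-1.2.3.jar \
  -DgroupId=com.example \
  -DartifactId=library \
  -Dversion=1.2.3 \
  -Dpackaging=jar \
  -DrepositoryId=my-releases \
  -Durl=https://nexus.example.com/content/repositories/my-releases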
One thing that will make this a little easier is that you can place the downloaded artifacts directly into the local storage directory of a Nexus repository (you don't need to upload them into Nexus).
See here for details: https://support.sonatype.com/entries/38605563

Starting to use Artifactory

At the company where I work, we are starting to use Artifactory as our repository management tool, so I'm reading its user guide. We began the configuration by creating a virtual repository and a few local and remote repositories. In the user guide I found the following:
Prevent disclosing sensitive business information derived from your artifact queries to whomever can intercept the queries, including the owners of the remote repository itself.
I saw that this could be addressed through the exclude pattern functionality on the virtual repository. Can you give us some suggestions about this? What kinds of requests should we avoid making?
You should avoid requests for internal artifacts being sent to remote repositories (directly or via virtual repositories). This can happen when projects depend on internal libraries, or within multi-module projects where modules depend on each other. When working with virtual repositories, Artifactory will always search for such artifacts in local repositories first. However, if someone asks for a wrong version or has a typo in the artifact name, the artifact will not be found in any local repository, and Artifactory will then look for it in the remote repositories configured in that virtual repository.
To avoid exposing sensitive business information as described above, we strongly recommend the following best practices:
The list of remote repositories used in an organization should be managed under a single virtual repository to which all requests are directed
All internal artifacts should be specified in the Excludes Pattern field of the virtual repository (or alternatively, of each remote repository) using wildcard characters to encapsulate the widest possible specification of internal artifacts.
Assuming all of your projects/modules are using some kind of namespace, for example com.mycompany, you can configure an exclusion pattern for artifacts under this namespace: com/mycompany/**.
For more information, take a look at "Avoiding Security Risks with an Excludes Pattern" in the Artifactory documentation.

When using maven-release-plugin, why not just detect scm info from local repo?

As far as I am aware, in order to use the maven-release-plugin you have to drop an scm section into your POM file, e.g.:
<scm>
  <connection>scm:hg:ssh://hg@bitbucket.org/my_account/my_project</connection>
  <developerConnection>scm:hg:ssh://hg@bitbucket.org/my_account/my_project</developerConnection>
  <url>ssh://hg@bitbucket.org/my_account/my_project</url>
  <tag>HEAD</tag>
</scm>
I understand this data is used to determine what to tag and where to push changes. But isn't this information already available if you have the code cloned/checked out? I'm struggling a bit with the concept that I need to tell Maven what code it needs to tag when it could, at least in theory, just ask Git/Hg/SVN/CVS what code it's dealing with. I suspect I'm missing something in the details, but I'm not sure what. Could the maven-release-plugin code be changed to remove this as a requirement, or at least make auto-detection the default? If not, could someone provide some context on why that wouldn't work?
For one thing, Git and Subversion can have different SCM URIs for read-write and read-only access.
This is what the separate <connection> and <developerConnection> URIs are supposed to capture: the first is a URI that guarantees read access, the second is a URI that guarantees write access.
Very often from a checked out repository, it is not possible to infer the canonical URIs.
For example, I might check out the Subversion repository in-house via the svn: protocol and the IP address of the server, but external contributors would need to use https:// with the hostname.
Or even with Git repositories: on GitHub you have different URIs for different access mechanisms, e.g.
https://github.com/stephenc/eaio-uuid.git (read-write using Username / Password or OAuth)
git@github.com:stephenc/eaio-uuid.git (read-write using SSH private key identification)
git://github.com/stephenc/eaio-uuid.git (anonymous read only)
Never mind that you may have checked out git://github.com/zznate/eaio-uuid.git or cloned a local checkout; in other words, your local Git repository may think that "upstream" is ../eaio-uuid-from-nate and not git@github.com:stephenc/eaio-uuid.git.
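To make the mapping concrete, here is a sketch of how those GitHub URIs would land in a POM, with the anonymous read-only URI as the connection and the SSH read-write URI as the developerConnection:

<scm>
  <connection>scm:git:git://github.com/stephenc/eaio-uuid.git</connection>
  <developerConnection>scm:git:git@github.com:stephenc/eaio-uuid.git</developerConnection>
  <url>https://github.com/stephenc/eaio-uuid</url>
</scm>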
I agree that for some SCM tools you could auto-detect... for example, if you know the source is checked out from, say, AccuRev, you should be OK assuming its details... until you hit a Subversion or Git or CVS module checked out into the AccuRev workspace (true story) so that the tag being pulled in could be updated.
So in short, the detection code would have to be damn sure that you were not using two SCM systems at the same time in order to know which is the master SCM... and the other SCM may not even leave marker files on disk to sniff out (AccuRev, for example, doesn't... hence why I've picked on it).
The only safe way is to require the POM to define at least the SCM system, and, for those SCM systems where the URI cannot be reliably inferred (think CVS, Subversion, Git, Hg; in fact most of them), to require the URI to be specified.

Nexus OSS: publish to static mirror

Do you know a way to configure Nexus OSS so that it publishes the artifact repository to a remote server in a form that can be served statically, e.g. by Apache httpd? I'd like to use this static copy to serve only my own artifacts, so the Nexus server could actively trigger an update whenever something new is published.
Technically, I think it should be possible to create the metadata for the repo and store it in static files, but I'm not sure about that. Any hints appreciated.
If there is another repository manager that can achieve this, that would be fine for me as well.
I clearly understand the advantages of using the repository manager directly, but due to IT rules I can run Nexus only internally, and it would be necessary to have these artifacts available in a (private) repository copy on the Internet as well.
A typical way to solve this IT requirement of exposing only known servers like Apache httpd is to set up Apache httpd as a reverse proxy, as documented in the Nexus documentation.
You can use that approach in a more restrictive way by exposing only a specific repository or, better, a repository group (so you can combine snapshots and releases), and tying that together with a specific user or a specifically restricted setup of the anonymous user that is used by default when no credentials are passed through.
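A minimal reverse-proxy sketch (hostnames, port, and the repository path are placeholders that depend on your Nexus version; mod_proxy and mod_ssl are assumed to be enabled) exposing only a single repository group:

<VirtualHost *:443>
    ServerName repo.example.com
    SSLEngine on
    ProxyRequests Off
    # expose only one repository group from the internal Nexus instance
    ProxyPass        /nexus/content/groups/public/ http://nexus.internal:8081/nexus/content/groups/public/
    ProxyPassReverse /nexus/content/groups/public/ http://nexus.internal:8081/nexus/content/groups/public/
</VirtualHost>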
Also, if you need more help, feel free to contact us on the user mailing list or on HipChat.
