Maven fails to download artifacts from local Nexus

I have a problem with downloading artifacts from our local Nexus; sorry if this is a bit long.
Our source tree is divided into several projects; let's call them A and B. B depends on a release version of A that is deployed to our local Nexus server.
Whenever I release a new A, the next several builds (in TeamCity) fail to download the new artifacts, and I see the error:
Could not resolve dependencies for project B-groupId:B-artifactId:jar:B-version:
Could not find artifact A-groupId:A-artifactId:jar:A-newVersion
Here are some relevant facts:
We are building with Maven's -T 1C option (one build thread per CPU core; see the sketch below)
The artifact DOES exist in Nexus - if I go to the download URL, it works
When I build it locally it works
Eventually things work out: the first build fails to download a certain artifact, the next one succeeds on it but fails on another, and so on until all artifacts have been downloaded
Another project also released to the same local repository works fine when its version is updated
I see multiple downloading lines like this in the log:
Downloading: http://nexus.company.com:8081/nexus/content/groups/public/com/company/group/artifact/1.0.10/artifact-1.0.10.pom
This line repeats several times for each of the just-released artifacts
It doesn't seem to be an issue with the Nexus index (like I mentioned, building locally works fine, and it also works on some of the TeamCity agents)
It also doesn't seem to be a network problem, because the TeamCity agents and the Nexus server are in the same datacenter
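For reference, the build invocation is essentially this (a sketch; the exact goals are an assumption on my part, only the -T 1C option is from the build config):
# TeamCity build step (goals assumed). -T 1C runs one build thread per CPU
# core, so several threads can request the freshly released A artifacts from
# Nexus at the same time.
mvn -T 1C clean install
# Re-running with -U forces Maven to re-check remote repositories, even when a
# previous failed lookup has been cached in the local repository.
mvn -T 1C -U clean install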
Sorry if this was a long read, but I would really appreciate any help. This thing is driving us crazy.
Thanks

Deploying multiple SNAPSHOT artifacts into the same repository in Sonatype OSSRH

I've seen "Building and deploying native code using Maven", but I can't get this (very similar) deployment working as I'd like.
I have a C++ project that builds with Maven and the Maven CMake Plugin. This involves several Maven profiles to select the correct settings for the various C++ compilers I use on the platforms I'm building on (Windows 10, Ubuntu 16.04, Ubuntu 18.04, CentOS 7, Raspbian, macOS High Sierra). I use Jenkins to run this build on the various VMs/Raspberry Pi, yielding a .tar.gz or .zip via the Maven Assembly Plugin. The final result is six archive files that vary in their classifier/type. They all have the same groupId/artifactId.
I wanted each of these jobs to deploy its archive to Sonatype's OSSRH Nexus system, using the nexus-staging-maven-plugin.
I had this plugin configured to not automatically close the repository, so that the multiple builds could run via Jenkins (sequentially), and deploy to the same repo. I would then review this in the web UI, then Release or Drop it appropriately.
This worked fine, when the project had a version number of 0.0.1-SNAPSHOT. However, when I decided to (manually) release this, by setting the version to 0.0.1, and run my Jenkins builds... the deployment behaviour was different to what I'd seen when it was a SNAPSHOT.
Each platform-specific deployment created its own staging repository in the OSSRH Snapshots repo.
After reading https://github.com/sonatype/nexus-maven-plugins/tree/master/staging/maven-plugin, I have tried a variety of these settings, but nothing seems to work:
<skipStagingRepositoryClose>true</skipStagingRepositoryClose>
<skipStaging>true</skipStaging>
<autoReleaseAfterClose>false</autoReleaseAfterClose>
<stagingRepositoryId>${project.artifactId}-${project.version}-repo</stagingRepositoryId>
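For context, here is roughly how these settings sit in the plugin declaration (a sketch; the version, serverId and nexusUrl shown are placeholders, not necessarily the values in my pom):
<plugin>
  <groupId>org.sonatype.plugins</groupId>
  <artifactId>nexus-staging-maven-plugin</artifactId>
  <version>1.6.8</version>
  <extensions>true</extensions>
  <configuration>
    <serverId>ossrh</serverId>
    <nexusUrl>https://oss.sonatype.org/</nexusUrl>
    <skipStagingRepositoryClose>true</skipStagingRepositoryClose>
    <autoReleaseAfterClose>false</autoReleaseAfterClose>
  </configuration>
</plugin>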
The nexus-staging:rc-open goal looks like it might help, allowing me to open a named staging repository (as I tried to do with stagingRepositoryId above). But it requires a staging profile id. I've used the rc-list-profiles goal to find mine, but when I give this to rc-open, it's reported as "missing or invalid".
It looks like this should be possible: https://github.com/sonatype/nexus-maven-plugins/blob/master/staging/maven-plugin/WORKFLOWS.md - although it suggests that you can't create a new staging profile id yourself; they're allocated by Nexus.
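For completeness, the goal invocations look roughly like this (a sketch; serverId and nexusUrl are placeholders, and <profile-id> stands for the id returned by rc-list-profiles):
# List the staging profiles visible to my account (where I found my profile id).
mvn nexus-staging:rc-list-profiles -DserverId=ossrh -DnexusUrl=https://oss.sonatype.org/
# Try to open a named staging repository under that profile - the step that
# fails with "missing or invalid".
mvn nexus-staging:rc-open -DserverId=ossrh -DnexusUrl=https://oss.sonatype.org/ -DstagingProfileId=<profile-id>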
Why is this mechanism different between SNAPSHOT and non-SNAPSHOT deployments?
Kind regards, Matt
I've updated https://stackoverflow.com/a/40954957/14731. Per point 6.6, SNAPSHOTs cannot be released atomically. There is no known workaround.

How to bring a repository group up to date

I created a repository group and linked a few hosted repositories under it: one release repo and one snapshot repo.
I noticed that the group repository is out of sync with the linked repositories: it contains old, unused snapshot artifacts that no longer exist in my snapshot repository, since I created a scheduled task to clean up snapshots older than 14 days.
In this case I always delete the problematic folder on the filesystem, and the next time I run a Maven build against the public group repository URL, the artifacts get fetched again and up-to-date versions appear.
To make it clearer:
Hi, I am trying to explain the situation. I have a Jenkins job which had previously published an artifact with version e.g. 3.28-SNAPSHOT to the snapshot repo. Since I have snapshot and release hosted repos, I created a repo group and added both to it. Then some changes were made to the mentioned Jenkins job and the version number (build number) started from 0 again (3.0-SNAPSHOT)...
From this point on, the lower versions point to the newer artifacts and the higher versions point to the older ones. As I mentioned, I also have a housekeeping script (a shell script rather than a Nexus scheduled task, because the latter is a little slow) which deletes snapshots older than 14 days; then I run the "update index" Nexus scheduled task to bring local storage and Nexus in sync (see the sketch below).
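The housekeeping script is essentially of this shape (a hypothetical sketch; the storage path and details are assumptions, the real script isn't shared here):
# Delete snapshot files older than 14 days from the hosted snapshot repo's
# local storage (path is an assumption).
find /sonatype-work/nexus/storage/snapshots -type f -mtime +14 -delete
# Then the "Update Repositories Index" scheduled task is run in Nexus to bring
# the index back in sync with what is left on disk.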
After this cleanup, the 3.28-... versions were deleted from the snapshot repo, but these outdated artifacts remained under the repository group. So my question comes from this point: why are artifacts duplicated (I mean on local storage, consuming disk space) when I create a repository group pointing to other repositories (release, snapshot)?
How can I force the group not to copy each artifact from the linked repos, and instead just maintain metadata about where requested artifacts can be downloaded from? Or, if the implementation does not support that, how can I resync my repository group to follow the snapshot and release repo changes (the outcome of the cleanup shell script + update index)?
How can I resolve this situation?
Thanks in advance!

Maven - Not able to open jars it is downloading

Whenever I try to run mvn clean install on my code, Maven runs and starts downloading jars; after downloading some of them it gives an error, i.e. "Not able to open xxxxx.jar".
At first I replaced that particular jar, but the error kept appearing for more jars, so I tried using my friend's repository.
That works fine for the jars already available in my friend's repository, but whenever it has to download new jars from the central repository, the same error occurs.
I have deleted the .m2 folder and let it be recreated a hundred times, but no luck.
I also tried switching Maven installations and versions, taken from different friends and from the official Maven website, but still no luck.
I am fed up with this; I have been trying since last week.
Please help.
Just a quick suggestion off the top of my head:
You could take a look at the .jar files that get downloaded, using a text editor or, even better, a hex editor. Depending on the network from which you are accessing the repository, there might be some kind of proxy server that intercepts the jar download requests and sends back an HTML page instead - at least it might do so in our company network.
If the text editor just shows strange characters, try opening the jar file with a zip tool (e.g. 7-Zip) and see if that reports an error.
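A quick way to do the same check from the command line (a sketch; the junit path is just an example of the local repository layout):
# A real jar is a zip archive, so "file" should report zip/Java archive data;
# an intercepted download typically shows up as HTML or ASCII text instead.
file ~/.m2/repository/junit/junit/4.13.2/junit-4.13.2.jar
# "unzip -t" tests the archive's integrity and fails loudly on a corrupt download.
unzip -t ~/.m2/repository/junit/junit/4.13.2/junit-4.13.2.jar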

How to get artifactory to update the maven-metadata.xml for a virtual repo?

long time reader, first time asker...
I have a standalone network (no internet access). It has an Artifactory server which has virtual libs-snapshot and libs-release repos. Under libs-snapshot there are 4 local snapshot repos. The reason for this is that we get a dump of all the Artifactory repos from somewhere else (non-connected) and import it into this network. But we have to modify a subset of the snapshot artifacts there. So we created another local snapshot repo, call it mine-snapshot-local (Maven 2 repo, set as unique, max artifacts=1?), and added it to the top of the libs-snapshot virtual. In theory, this would allow us to modify the handful of artifacts we needed to, deploy them to our own repo, and local developers would pick those up. But we would still have access to the 99% of other artifacts from the periodic dump from the other non-connected system. In addition, we can import the drops from the other network, which are concurrently being modified, on a wholesale basis without touching our standalone network repo (mine-snapshot-local). I guess we're "branching" Artifactory repos...
I realize we could probably just deploy straight into one of the imported repos, but the next time we get a dump from the other network, all those custom-modified artifacts would go away... so I'd really like to get this method to work if possible.
From my local Eclipse, the Maven plugin deploys artifacts explicitly, and without error, to the mine-snapshot-local repo. The issue I'm seeing is that the maven-metadata.xml for the virtual libs-snapshot is not being updated. The timestamp of that file changes, and if I browse with a web browser to libs-snapshot/whatever_package, I can see my newly deployed artifacts, with newer timestamps than the existing snapshots. But the maven-metadata.xml file still contains pointers to the "older" snapshot.
maven-metadata.xml is successfully updated in the mine-snapshot-local repo, but it is as if Artifactory is not merging all the metadata files together correctly for the virtual repo. Or, more likely, I have misconfigured something that causes it to ignore our top-layer local repo somehow (but then why would the snapshot jar/pom still show up there?).
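One way to see the discrepancy directly is to fetch the metadata from both places and compare (a sketch; the host and artifact path are placeholders for the pattern described above):
# Metadata as served by the virtual repository (the merged view that Maven sees).
curl -s 'http://<artifactory-host>/artifactory/libs-snapshot/<group-path>/<artifact>/<version>-SNAPSHOT/maven-metadata.xml'
# Metadata as stored in the local repo I deploy to - this one does list the new build.
curl -s 'http://<artifactory-host>/artifactory/mine-snapshot-local/<group-path>/<artifact>/<version>-SNAPSHOT/maven-metadata.xml'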
We are using artifactory 2.6.1 (and do not have the option to upgrade).
I've tried a number of things: setting the snapshot repos to unique, nonunique, deployer, limiting the number of snapshots, etc. None of it seems to make much of a difference.
The one thing I do see as possibly being an issue is the build number assigned to a snapshot. For example, in the imported repo, an artifact might have a timestamp that is a week old but a build number of 4355. In my new repo, when I deploy, I get a much newer timestamp, but the build number is 1 (or something much, much smaller than 4355).
Am I barking up the wrong tree by trying to have multiple local snapshot repos like this? It seems like this should be ok, but maybe not.
You are using a very (but very) old version of Artifactory, and it could be that you are suffering from an issue that is long gone. The normal behavior is that if you have 4 Maven repositories and you update/deploy new artifacts into one of them, the virtual repository should aggregate the metadata from all of the listed repositories.
Just to verify: you mentioned that you are deploying from Eclipse - are you referring to P2? If so, a side note: Artifactory will not calculate metadata for P2 artifacts.

How to debug the performance of a misconfigured build machine?

We have to set up new build environments regularly, and the process is not so simple. Today I got a new build machine, and the first Maven build was so slow that I wanted to find out why the performance was so bad. But how do you do that?
Our context is:
We use multiple build machines, each project gets its own.
Each build machine has a similar setup, so that projects can start immediately and don't have to configure a lot.
We have the following tools preconfigured:
Hudson (currently 2.1.1, but will change)
Artifactory 2.3.3.1
Sonar
Hudson, Artifactory and Sonar have their own Tomcat configured
Maven 2.2.1 and Maven 3.0.3 (with no user configuration, only the installation has a settings.xml)
Ant 1.7.1 and Ant 1.8.2 (not relevant here)
Subversion 1.6 client
All tools should work together, especially the repository chain should be:
Build machine Maven repository
Build machine Artifactory
Central company Artifactory (is working as mirror and cache for the world)
Maven central (and other repository)
So when the Maven build needs to resolve a dependency, it is first looked up in the local Maven repo, then in the build machine's Artifactory, then in the central Artifactory, and only then on the internet.
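On the build machine, that chain boils down to a settings.xml which mirrors everything through the machine's Artifactory (a sketch; the URL placeholder follows the notation used further below - the hops to the central Artifactory and the internet are configured server-side in Artifactory, not here):
<settings>
  <mirrors>
    <mirror>
      <id>build-machine-artifactory</id>
      <!-- Every lookup that misses the local Maven repo goes to the build
           machine's Artifactory virtual repository. -->
      <mirrorOf>*</mirrorOf>
      <url>https://<my-project-artifactory>/repo</url>
    </mirror>
  </mirrors>
</settings>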
We normally have to use proxies to connect to the internet; we don't need them in our intranet.
The first build (a Maven Hello World) took around 45 minutes. All the bootstrapping happened in that time, but I would have thought that with our chain of repositories (where the central repository is well filled) the build would be much faster. So I think the focus of the debugging should be the network; the local build is not the problem. That puts the configuration and interaction of Maven and Artifactory under consideration.
How do you debug such an environment? I have access to the build machine (with sudo) and to the central repository, but I do not know where to start, what to check, or where to look. So what is your experience? What tips and tricks would you like to share?
Here are a few things I have done up to now. If you have additional advice, you are welcome!
I suspected the chain of repositories to be the source of evil, so I addressed that first. The reasons are:
The real build on the local machine (of a hello world program) may differ in milliseconds, but not minutes.
Network makes a difference, so attack that first.
The chain of repositories only comes into play if something is not found locally. Here are the steps to ensure that that is the case:
For Maven, delete the contents of the local cache. If the local cache is filled, you don't know whether a resource is found in the local cache or elsewhere. (Do that at least at the end, once everything else is working again.)
For Artifactory, find that cache as well, and clean it by deleting its contents. It is only a cache, so it will be filled anew.
If you use a clever browser for measuring the lookup, ensure that what you asked for is not in the cache of the browser.
Otherwise, use a tool like wget to ask for a resource (see the sketch after this list).
Try to minimize the sources for failure. So try to divide the long distance of your lookup in smaller segments that you control.
Don't use Maven for the lookup at first; start with the Artifactory repository only, and bring Maven in later.
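Concretely, a clean-slate measurement looks like this (a sketch; <my-pom> is a placeholder for the path of a single POM, as in the tests below):
# Empty the local Maven cache so every lookup has to leave the machine.
rm -rf ~/.m2/repository
# Time a single artifact lookup, bypassing the HTTP proxy.
time wget --no-proxy -O /dev/null 'https://<my-project-artifactory>/repo/<my-pom>'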
This led to the following tests. Each time, I ensured that the previous prerequisites were met:
Ask for https://<my-project-artifactory>/repo/<my-pom>. Expectation:
The local lookup will fail, so Artifactory has to find the resource in a remote repository, the central company Artifactory.
Possible effects could come from the proxy or from the Artifactory lookup.
Result: Lookup for a simple POM needed ~ 30 seconds. That is too much.
Remove the proxy. With wget, there is an option --no-proxy which does just that. Expectation:
Faster lookup.
Result: No change at all, so the proxy was not the reason.
Ask for https://<my-project-artifactory>/libs-snapshots-company/<my-pom>. So change the virtual repository to a real remote repository. Expectation:
Artifactory knows where to do the lookup, so it will be much faster.
Result: The POM was found immediately, so the 30 seconds are Artifactory doing the lookup. But what could be the reason for that?
Removed all remote and virtual repositories in Artifactory (leaving only our company's ones and the cached Maven central), but asked again for https://<my-project-artifactory>/repo/<my-pom>. Expectation:
Artifactory will find the repository much faster.
Result: POM came in an instant, not measurable.
I was then courageous and just started the build (with an empty local cache). The build then needed 5 seconds (instead of the 15 minutes it had taken the same morning).
So I think I have now better understood what can go wrong, but a lot of questions remain. Please add your ideas as answers; you will get reputation for them!