I have a Sonatype Nexus repository on an older machine, and I have purchased a newer server which will become my new repository host. In the installation of Nexus on the older machine I have an extensive collection of artifacts, the vast majority of which are now obsolete and can be safely removed from Nexus.
I know it is possible for me to move all of the artifacts from the old installation into the new installation by simply copying the sonatype-work directory to the new box. My question is this: If I want to prune the artifacts in that directory down to only what I need right now (probably about 20% of the repository contents) what steps would I have to take other than deleting the unwanted artifacts? For example, would I need to force Nexus to rebuild indexes? Thanks for the help!
You could install the new Nexus and proxy the old one via a proxy repository, in addition to Central and your other proxy repos. Run this setup for a while: only artifacts not found in the public repositories you configure will be fetched from the old Nexus instance.
At a later stage you could run a scheduled task on the old instance that removes old items.
When you are satisfied that you have everything you need, do one last backup and then take the old Nexus instance offline.
Of course, the other option is to not worry about it and migrate everything. In the end you really only have to migrate what you actually deployed (so probably the releases and 3rd-party repos).
The easiest option, by the way, is to copy the whole sonatype-work folder over to the new machine, fire it up with a fresh Nexus install there, and flick the switch.
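A minimal sketch of that copy, assuming a Nexus 2.x install under /opt with the default sonatype-work location next to it (the hostname and paths are illustrative):

    # On the old machine: stop Nexus so the work directory is consistent,
    # then copy it to the new host
    /opt/nexus/bin/nexus stop
    rsync -a /opt/sonatype-work/ newhost:/opt/sonatype-work/

    # On the new machine: install the same Nexus version, make sure
    # conf/nexus.properties points nexus-work at the copied directory,
    # then start it
    /opt/nexus/bin/nexus start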
I want to upgrade my Nexus repo from 2.10.0-02 to the latest 3.x edition.
So I followed the documentation, which states I should first upgrade to the latest 2.y release (nexus-2.14.13-01).
I followed the steps as mentioned in the documentation, viz. (a shell sketch of these steps follows the list):
Stop the old Nexus server.
Extract nexus-2.14.13-01.zip at the same location as the previous Nexus, so that 'sonatype-work' becomes a sibling directory.
Point soft links and environment variables to the new Nexus.
Start the new Nexus server.
Change the owner to the nexus user (as in the previous version).
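A minimal shell sketch of those steps, assuming the installs live under /opt, the service runs as a "nexus" user, and /opt/nexus is the soft link in use (all paths are illustrative; here the ownership change is done before startup):

    # 1. Stop the old Nexus
    /opt/nexus-2.10.0-02/bin/nexus stop

    # 2. Extract the new version next to the old one, so sonatype-work
    #    remains a sibling directory
    unzip nexus-2.14.13-01.zip -d /opt

    # 3. Repoint the soft link (and any env variables) at the new version
    ln -sfn /opt/nexus-2.14.13-01 /opt/nexus

    # 4. Give the nexus user ownership, as with the previous version
    chown -R nexus:nexus /opt/nexus-2.14.13-01

    # 5. Start the new Nexus
    /opt/nexus/bin/nexus start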
But in my new nexus-2.14.13-01 instance, I cannot see my own artifacts, whether proxied or hosted.
I can still see all my artifacts in the /sonatype-work/nexus/storage folder, though.
What am I missing? Please advise.
Thank you
The browse storage UI in Nexus Repo 2.x displays what is on disk; it's as simple as that. So if the artifacts are not visible, the work directory is not set to the right location. Edit $installdir/conf/nexus.properties and set "nexus-work=/sonatype-work/nexus".
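As a one-liner, using the path from the question:

    # Set nexus-work in conf/nexus.properties to the real work directory;
    # if the file already defines nexus-work, edit that line instead
    echo "nexus-work=/sonatype-work/nexus" >> $installdir/conf/nexus.properties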
I have two instances of Sonatype Nexus.
One of them is installed on a processing-focused machine; in other words, it has almost no disk space. So I use this instance to keep only our lightweight dependencies, such as small jars used by a lot of projects.
The other machine has enough disk space, but it is outside our network. I want to use it to keep the bigger dependencies, the ones we don't use frequently.
That said, I want to create a proxy repository in the first Nexus installation pointing to the installation that holds the bigger artifacts. But I don't want it to cache the dependencies it fetches from the other instance, because that would cause a lot of disk-space headaches. Do you have any idea how to do this? To clarify: I need a proxy repository, pointing to another Nexus installation, that does not store artifacts locally when they are requested. Please help.
I created a repository group and linked a few hosted repositories under it: one release repo and one snapshot repo.
I noticed that the group repository is out of sync with the linked repositories: it contains snapshot artifacts (old, no longer used) that do not exist in my snapshot repository, since I created a scheduled task to clean up snapshots older than 14 days.
When this happens I delete the problematic folder on the filesystem, and the next time I run a maven build using the public group repository URL, the artifacts get fetched and the up-to-date versions are visible.
To make it more clear:
Hi, let me explain the situation. I have a Jenkins job which had previously published an artifact with a version such as 3.28-SNAPSHOT to the snapshot repo. Since I have snapshot and release hosted repos, I created a repo group and added both to it. Then some changes were made to the Jenkins job, and the version number (build number) started from 0 again (3.0-SNAPSHOT)...
From this point on, the lower versions point to the newer artifacts and the higher versions point to the older ones. As I mentioned, I also have a housekeeping script (a shell script rather than a Nexus scheduled task, because the latter is a little slow) which deletes the snapshots older than 14 days; afterwards I run the "update index" Nexus scheduled task to bring local storage and Nexus in sync.
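A minimal sketch of such a housekeeping script, assuming the default sonatype-work storage layout (the storage path and repo name are illustrative, and which files you match will depend on your layout):

    #!/bin/sh
    # Remove snapshot artifacts not touched in the last 14 days, then
    # prune the directories the cleanup leaves empty
    STORAGE=/sonatype-work/nexus/storage/snapshots
    find "$STORAGE" -type f -mtime +14 \( -name '*.jar' -o -name '*.pom' \) -delete
    find "$STORAGE" -type d -empty -delete
    # Follow up with the "update index" scheduled task in Nexus so the
    # index matches what is left on disk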
After this cleanup, the 3.28-... versions were deleted from the snapshot repo, but those outdated artifacts remained under the repository group. So here is my question: why are artifacts duplicated (on local storage, consuming disk space) when I create a repository group pointing to other repositories (release, snapshot)?
How can I stop the group from copying each artifact from the linked repos, and instead have it just maintain metadata about where requested artifacts can be downloaded from? Or, if that is not possible because the implementation does not support it, how can I resync my repository group to follow the snapshot and release repo changes (the outcome of the cleanup shell script + update index task)?
How can I resolve this situation?
Thanks in advance!
Long time reader, first time asker...
I have a standalone network (no internet access). It has an Artifactory server with virtual libs-snapshot and libs-release repos. Under libs-snapshot there are 4 local snapshot repos. The reason for this is that we get a dump of all the Artifactory repos from somewhere else (non-connected) and import it into this network, but we have to modify a subset of the snapshot artifacts there.

So we created another local snapshot repo, call it mine-snapshot-local (a maven 2 repo, set as unique, max artifacts=1?), and added it to the top of the libs-snapshot virtual. In theory, this would allow us to modify the handful of artifacts we needed to, deploy them to our own repo, and have local developers pick them up, while still having access to the other 99% of artifacts from the periodic dump from the non-connected system. In addition, we can import the drops from the other network, which are concurrently being modified, on a wholesale basis without touching our standalone network repo (mine-snapshot-local). I guess we're "branching" Artifactory repos...
I realize we could probably just deploy straight into one of the imported repos, but the next time we get a dump from the other network, all those custom-modified artifacts would go away... so I'd really like to get this method to work if possible.
From my local Eclipse, the maven plugin deploys artifacts explicitly, and without error, to the mine-snapshot-local repo. The issue I'm seeing is that the maven-metadata.xml for the virtual libs-snapshot is not being updated. The timestamp of that file is updated, and if I browse with a web browser to libs-snapshot/whatever_package, I can see my newly deployed artifacts, with newer timestamps than the existing snapshots. But the maven-metadata.xml file still points to the "older" snapshot.
maven-metadata.xml is successfully updated in the mine-snapshot-local repo, but it is as if Artifactory is not merging all the metadata files together correctly for the virtual repo. Or, more likely, I have misconfigured something that causes it to ignore our top-layer local repo somehow (but then why would the snapshot jar/pom still show up there?).
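One way to see the merge problem directly is to compare the metadata the virtual repo serves against what the local repo holds; a diagnostic sketch (the host, port, and artifact path are illustrative):

    # Metadata as merged by the virtual repo
    curl -s http://artifactory:8081/artifactory/libs-snapshot/com/example/app/1.0-SNAPSHOT/maven-metadata.xml

    # Metadata as deployed to the top-layer local repo
    curl -s http://artifactory:8081/artifactory/mine-snapshot-local/com/example/app/1.0-SNAPSHOT/maven-metadata.xml

    # Compare the <snapshot><timestamp> and <buildNumber> elements: Maven
    # resolves -SNAPSHOT to whatever is listed there, so a stale merge
    # keeps clients on the older build even though newer files exist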
We are using Artifactory 2.6.1 (and do not have the option to upgrade).
I've tried a number of things: setting the snapshot repos to unique, non-unique, deployer; limiting the number of snapshots; etc. None of it seems to make much difference.
The one thing I do see as a possible issue is the build number assigned to a snapshot. For example, in the imported repo an artifact might have a timestamp that is a week old but a build number of 4355. In my new repo, when I deploy, I get a much newer timestamp, but the build number is 1 (or something much, much smaller than 4355).
Am I barking up the wrong tree by trying to have multiple local snapshot repos like this? It seems like this should be ok, but maybe not.
You are using a very (but very) old version of Artifactory, and it could be that you are suffering from an issue that has long since been fixed. The normal behavior is that if you have 4 maven repositories and you update/deploy new artifacts into one of them, the virtual repository should aggregate the metadata from all of the listed repositories.
Just to verify: you mentioned that you are deploying from Eclipse; are you referring to P2? If so, just a side note: Artifactory will not calculate metadata for P2 artifacts.
Every time I start with a fresh new workspace, m2eclipse downloads nexus-maven-repository-index.gz from the maven central repository.
This is good, but sometimes I just want to start a new workspace without waiting for the download.
I tried copying the whole .metadata directory from an old workspace to the new one, but the list of maven artifacts is still empty.
Is there a way I can cache it? Or at least download the file once, then copy/extract/repackage it so that m2eclipse thinks it has already downloaded it and lets me search for maven artifacts?
Or, the short version of the question: where, and in what format, is the nexus-maven-repository-index.gz file stored in the workspace?
The index is stored in the plugin's metadata location, i.e.
[workspace root]/.metadata/.plugins/org.maven.ide.eclipse/nexus
There will be one folder for each remote repository index in use.
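So one way to seed a new workspace is to copy that folder across before starting Eclipse; a sketch, with the workspace paths as placeholders:

    # Copy the cached repository indexes from an old workspace to a new
    # one (do this while Eclipse is not running)
    mkdir -p ~/new-workspace/.metadata/.plugins/org.maven.ide.eclipse
    cp -r ~/old-workspace/.metadata/.plugins/org.maven.ide.eclipse/nexus \
          ~/new-workspace/.metadata/.plugins/org.maven.ide.eclipse/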
You can configure the plugin not to download the index at startup, too: go to Window->Preferences->Maven and uncheck "Download repository indexes at startup". You'll have to remember to re-enable it to get any updates, though.
Update:
I just verified that copying the metadata works. M2Eclipse will still contact the repository to download the deltas (assuming the above option is checked), but that only takes a few moments.
Depending on your situation, you may want to try running a managed repository such as Artifactory or Nexus.
Configure it as the one true source of everything in Maven; that way the initial download comes from a local location and hence is fast.
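A sketch of the Maven side of that, writing a minimal ~/.m2/settings.xml with the repository manager as the mirror of everything (the URL is illustrative, and if you already have a settings.xml you should merge the mirror entry in by hand rather than overwrite the file):

    # Write a minimal settings.xml routing all Maven traffic through the
    # local repository manager
    cat > ~/.m2/settings.xml <<'EOF'
    <settings>
      <mirrors>
        <mirror>
          <id>internal-repo</id>
          <mirrorOf>*</mirrorOf>
          <url>http://nexus.example.com/content/groups/public/</url>
        </mirror>
      </mirrors>
    </settings>
    EOF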
We have a similar problem in my company: due to network/security restrictions, the index file can't be downloaded by m2eclipse.
I tried using Apache to redirect maven.org to my localhost so I could serve the index (it should work; you can try it). But again, network restrictions disable DNS resolution at the local PC level.
The last solution is to download nexus-maven-repository-index.zip, extract everything inside the zip except the timestamp file, and replace everything in the corresponding index folder for the central repository.
It works. :-D
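For reference, that last step might look roughly like this in shell; the index folder name under the plugin's nexus directory varies per repository, so the path here is a placeholder:

    # Seed the central-repository index by hand (Eclipse not running);
    # replace <central-index-folder> with the actual folder name
    cd ~/workspace/.metadata/.plugins/org.maven.ide.eclipse/nexus/<central-index-folder>
    unzip -o ~/Downloads/nexus-maven-repository-index.zip -x timestamp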