Artifactory Delay - pip

We have noticed that artifacts uploaded to Artifactory do not become available via pip straight away. It takes at least 5 minutes before they can be downloaded and installed via pip. It seems they are not indexed immediately, or are waiting for some timeslot to do so. I could not find any configuration related to this, which is not helpful.

I found this, which might be helpful to you:
When you upload many PyPI packages to the same repository within a close period of time, the indexing does not happen immediately. It waits for a "quiet period", which can be adjusted. This can be done in the $ARTIFACTORY_HOME/etc/artifactory.system.properties file by setting the values of the artifactory.pypi.index.quietPeriodSecs and the artifactory.pypi.index.sleepMilliSecs properties to an amount of seconds that meets your needs. If those parameters do not exist, please add them to the file. You will need to restart Artifactory for this setting to take effect.
From what I can tell, if these values aren't in that file, both default to 60. Also, sleepMilliSecs appears to be a number of seconds, not milliseconds as the name would suggest.
I believe this works as follows: Artifactory waits for the repository to "settle", i.e. until there have been no changes (packages deployed or removed) for at least quietPeriodSecs seconds, and it checks for this every sleepMilliSecs seconds.
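As an illustration only (the values here are made up; per the above, both properties default to 60 and are read in seconds), the entries in $ARTIFACTORY_HOME/etc/artifactory.system.properties might look something like this, followed by an Artifactory restart:

    artifactory.pypi.index.quietPeriodSecs=10
    artifactory.pypi.index.sleepMilliSecs=5
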
Five minutes seems like a long time. If you're making a series of changes with under a minute between them, that might explain why it's taking a while. Also, the larger your repository is, the longer the indexing will take once it starts, so that might also be a factor.

Related

SSIS Pre-Validation taking a long time - only on server

I have an SSIS package (SQL Server 2017) that takes about 5 seconds to run in Visual Studio 2019, and a similar amount of time to execute after being deployed to my local database server on my development computer. However, when I deploy it to another server and run it, it takes about 26 seconds. Looking at the execution reports, almost all of the extra time is spent in the pre-validation phase of one step.
The two log entries are the first two messages in the log, and the first one is for the pre-validation of the whole package. All the rest of the entries look similar to those I see on my development server.
One other note: I had previously deployed this package to this server without seeing this issue. I then added two tasks: one to pull XML into a result set, and another to email the results of the package. Although one of them does load an external DLL to do the emailing, neither of these two tasks takes more than a second to validate or execute.
Does anybody have an idea of why I would see a 20-second delay in the package pre-validation - but only on another server - and how I might be able to get rid of it?
Further Note:
I re-deployed an earlier version without the latest changes, and the 20 seconds went away. Then step by step I added the functionality back. The 20 seconds did not come back.
So, just to validate this, I rebuilt the current version (that originally had the problem) and deployed it... and it is now back to taking 5 to 6 seconds to execute!
It could be the rebuild, or it could be that that server had just been rebooted. I don't know!
I will leave this question open for a day or two to see if it comes back.

Why is uploading a model to a HuggingFace repository so slow?

I'm trying to push a model to a HuggingFace repository. The problem is that it has said "uploading" for the past 16 hours, and that's just the pytorch_model.bin file, which is about 850 MB. I am using LFS. I have tried manually adding the files to the repository, which takes an eternity I am not willing to wait out, and since there's no completion percentage you can't tell whether it's progressing or hanging.
I have tried using the git commands; same long wait.
However, if I upload to GitHub rather than HuggingFace, it doesn't take an eternity: 30 minutes at most. I feel as if I wasted my time doing all this preprocessing and training for nothing.
Any suggestions, or has anyone run into similar problems?
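For reference, one way to push a single large file programmatically (instead of adding files by hand or going through raw git/LFS commands) is the huggingface_hub client, which in recent versions shows a progress bar for large uploads. This is only a minimal sketch; the repository id and local path are assumptions, not taken from the question:

    from huggingface_hub import HfApi

    api = HfApi()
    api.upload_file(
        path_or_fileobj="pytorch_model.bin",  # local file to upload (~850 MB in this case)
        path_in_repo="pytorch_model.bin",     # destination path inside the repo
        repo_id="your-username/your-model",   # hypothetical repo id - replace with yours
        repo_type="model",
    )
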

Azure DevOps on-premise, workspace mapping really slow

We are using the on-premise version of Azure DevOps Server 2019 (currently Update 1) with self-hosted agents (the agents have the latest version available from GitHub) in combination with TFVC (2019).
The DevOps server is running in one virtual machine and the TFVC server in a different virtual machine.
The communication between them is fast; I already tested this by simply copying big test data from one to the other over the network. There is no problem there.
On every run, at the very beginning, the workspace mapping from the previous run is deleted, a new one is created, and then a new workspace mapping to every source path defined in the repository is established. This takes about 30-60 minutes on each and every pipeline run.
We don't have just a single path defined in the repository; there are a lot of mappings, so that the amount of code taken from TFS stays small and only represents the source code needed by the solution being built.
This can't be changed and has to stay as it is, and we also can't simply move to GitHub. (Just saying, in case someone is about to advise moving to GitHub :))
Has anyone experienced the same behaviour in the past, where the repository path mapping in the first build step takes about 30-60 minutes when a build is executed?
Thanks in advance for any hints.
The solution in the end was installing everything from scratch on a new machine.
After that, the mappings run in a tenth of the time they took before.

How to create time reports by user/project in Phabricator Phrequent

Is there a way to create a custom reporting system under the Phrequent section in Phabricator?
In the Maniphest app there is a report feature; however, it only counts the total number of tasks by person or project. My organization also requires the total time spent on a project and task.
Inside Phrequent you can already sort by user; however, I need to go one step further: total time spent on a task by user or project. Currently this requires a manual process of totaling each time entry per task by hand.
This is not "yet" a feature and there is no implemented way of doing it right now.
Phrequent is still in an early stage of development and a lot of work remains on it.
Per-project tracking is definitely a must-have feature and is being tracked here:
https://secure.phabricator.com/T4853
Finally, the current focus for Phabricator right now seems to be the CI part (Harbormaster and Drydock), so the roadmap does not mention upcoming work on Phrequent in the short term:
https://secure.phabricator.com/w/roadmap/
but only in the long term:
https://secure.phabricator.com/w/starmap/
On a side note, I considered using Phrequent but I believe it's too far from being production-ready right now, so using another time tracking system seems to be the only viable solution.

How to speed up the eclipse project 'refresh'

I have a fairly large PHP codebase (10k files) that I work with using Eclipse 3.4/PDT 2 on a Windows machine, while the files are hosted on a Debian fileserver. I connect via a mapped drive on Windows.
Despite having a 1 Gbit Ethernet connection, doing an Eclipse project refresh is quite slow: up to 5 minutes. And I am blocked from working while this happens.
This normally wouldn't be such a problem, since Eclipse theoretically shouldn't have to do a full refresh very often. However, I also use the Subclipse plugin, which triggers a full refresh each time it completes a switch/update.
My hunch is that the slowest part of the process is Eclipse checking the 10k files one by one for changes over Samba.
There is a large number of files in the codebase that I would never need to access from Eclipse, so I don't need it to check them at all. However, I can't figure out how to prevent it from doing so. I have tried marking them 'derived', which prevents them from being included in the build process etc., but it doesn't seem to speed up the refresh at all. It seems that Eclipse still checks their changed status.
I've also removed the unneeded folders from PDT's 'build path'. This does speed up the 'building workspace' process but again it doesn't speed up the actual refresh that precedes building (and which is what takes the most time).
Thanks all for your suggestions. Basically, JW was on the right track. Work locally.
To that end, I discovered a plugin called FileSync:
http://andrei.gmxhome.de/filesync/
This automatically copies the changed files to the network share. Works fantastically. I can now do a complete update/switch/refresh from within Eclipse in a couple of seconds.
Do you have to store the files on a share? Maybe you can set up some sort of automatic mirroring, so you work with the files locally, and they get automatically copied to the share. I'm in a similar situation, and I'd hate to give up the speed of editing files on my own machine.
Given it's under Subversion, why not have the files locally and use a post-commit hook to update to the latest version on the dev server after every commit? (Or put a specific string in the commit log (e.g. '##DEPLOY##') when you want to update dev, and only run the update when the post-commit hook sees this string.)
Apart from refresh speed-ups, the advantage of this technique is that you can have broken files that you are working on in Eclipse, and the dev server is still OK (albeit with an older version of the code).
The disadvantage is that you have to do a commit to push your saved files onto the dev server.
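A minimal sketch of such a post-commit hook, written in Python for illustration (the working-copy path is an assumption, and if the dev server is a separate machine the update command would need to run over ssh instead):

    #!/usr/bin/env python
    # hooks/post-commit - Subversion invokes this with the repository path and revision.
    import subprocess
    import sys

    repos, rev = sys.argv[1], sys.argv[2]

    # Read the commit message for this revision.
    log = subprocess.check_output(["svnlook", "log", "-r", rev, repos]).decode()

    # Only update the dev working copy when the marker string is present.
    if "##DEPLOY##" in log:
        subprocess.check_call(["svn", "update", "/srv/www/dev-checkout"])  # hypothetical path
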
I solved this problem by changing the "File Transfer Buffer Size" at:
Window -> Preferences -> Remote Systems -> Files
and changing the "File transfer buffer size" Download (KB) and Upload (KB) values to a high value. I set them to 1000 KB; by default it is 40 KB.
Use the offline folders feature in Windows by right-clicking and selecting "Make available offline".
It can save a lot of time and round-trip delay in the file sharing protocol.
Using svn externals with the revision flag for the non-changing stuff might prevent Subclipse from refreshing those files on update. Then again, it might not. Since you'd have to make some changes to the structure of your Subversion repository to get it working, I would suggest doing some simple testing before doing it for real.

Resources