SSIS pre-validation taking a long time - only on server - performance

I have a SQL Server 2017 SSIS package that takes about 5 seconds to run in Visual Studio 2019, and a similar amount of time to execute after being deployed to the local database server on my development computer. However, when I deploy it to another server and run it, it takes about 26 seconds. Looking at the execution reports, almost all of the extra time is spent in the pre-validation phase of one step.
The two log entries are the first two messages in the log, and the first one is for the pre-validation of the whole package. All the rest of the entries look similar to those I see on my development server.
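(For reference, the gap between those first two messages can also be read straight from the SSISDB catalog with a query along these lines, where 12345 is a placeholder for the execution ID of the slow run:)
SELECT message_time, message_type, message
FROM SSISDB.catalog.operation_messages
WHERE operation_id = 12345   -- placeholder: ID of the slow execution
ORDER BY message_time;
The time between the first two rows returned is the pre-validation window described above.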
One other note: I had previously deployed this package to this server without seeing this issue. I then added two tasks: one to pull XML into a result set, and another to email the results of the package. Although one of them does load an external DLL to do the emailing, neither of these two tasks takes more than a second to validate or execute.
Does anybody have an idea why I would see a 20-second delay in the package pre-validation - but only on the other server - and how I might be able to get rid of it?
Further Note:
I re-deployed an earlier version without the latest changes, and the 20 seconds went away. Then, step by step, I added the functionality back, and the 20 seconds did not come back.
So, just to validate this, I rebuilt the current version (the one that originally had the problem) and deployed it... and it is now back to taking 5 to 6 seconds to execute!
It could be the rebuild, or it could be that the server had just been rebooted. I don't know!
I will leave this question open for a day or two to see if it comes back.

Related

Azure DevOps on-premises, workspace mapping really slow

We are using the on-premises version of Azure DevOps Server 2019 (currently Update 1) with self-hosted agents (running the latest version available from GitHub) in combination with TFVC (2019).
The DevOps server runs in one virtual machine and the TFVC server runs in a different virtual machine.
The communication between them is fast; I have already tested this by copying large test data from one to the other over the network. That is not the problem.
At the very beginning of each and every run, the workspace mapping from the previous run is deleted, a new one is created, and then a new workspace mapping to every source path defined in the repository is established. This takes about 30-60 minutes on each and every pipeline run.
We don't have just a single path defined in the repository; there are a lot of mappings, so that the amount of code taken from TFS stays small and only covers the source code needed by the solution being built.
This can't be changed and has to stay as it is, and we can't simply move to GitHub (just saying, in case someone would like to advise moving to GitHub :)).
Has anyone experienced the same behaviour in the past, where the repository path mapping in the first build step takes about 30-60 minutes when a build is executed?
Thanks in advance for any hints.
The solution, in the end, was installing everything from scratch on a new machine.
After that, the mappings run in a tenth of the time they took before.

Artifactory Delay

We noticed that artifacts uploaded to Artifactory do not become available via pip straight away; it takes a minimum of 5 minutes before they can be downloaded and installed via pip. It seems like they are not indexed straight away, or are waiting for some time slot to do so. I could not find any configuration related to this, which is not helpful.
I found this, which might be helpful to you:
When you upload many PyPI packages to the same repository within a short period of time, the indexing does not happen immediately. It waits for a "quiet period", which can be adjusted. This can be done in the $ARTIFACTORY_HOME/etc/artifactory.system.properties file by setting the values of the artifactory.pypi.index.quietPeriodSecs and artifactory.pypi.index.sleepMilliSecs properties to an amount of seconds that meets your needs. If those parameters do not exist, please add them to the file. You will need to restart Artifactory for this setting to take effect.
From what I can tell, if these values aren't in that file, both default to 60. Also, sleepMilliSecs appears to be a number of seconds, not milliseconds as the name would suggest.
I believe this works as follows: Artifactory waits for the repository to "settle", i.e. until there have been no changes (deployed or removed packages) for at least quietPeriodSecs seconds, and it checks for this every sleepMilliSecs seconds.
Five minutes seems like a long time. If you're making a series of changes with less than a minute between them, that might explain why it's taking a while. Also, the larger your repository is, the longer the indexing will take once it starts, so that might also be a factor.
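For reference, a minimal sketch of what those entries might look like in artifactory.system.properties (the values here are illustrative, not recommendations):
# both properties default to 60 if not set
artifactory.pypi.index.quietPeriodSecs=30
artifactory.pypi.index.sleepMilliSecs=30
Lowering both should make the index rebuild start sooner after the last upload, at the cost of re-indexing more often while uploads are still in progress.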

Why does Visual Studio 2013 take a long time (45 minutes) to build an SP 2013 workflow project?

I am developing a SharePoint 2013 workflow (SP 2013 on-premises) using Visual Studio 2013. Everything works fine - the workflow designer, opening files, etc. all work with no delays at all - but when a build starts it takes around 45 minutes. Why?
The solution contains just 3 workflow projects and one C# class-library project (DLL). If I build the class-library project it builds in a few seconds, and any other project from other solutions builds in normal time, but the workflow projects take a long time and I don't understand why.
I ran the "Process Monitor" application against devenv.exe when the build started and found where the lag is occurring: it's on NotifyChangeDirectory. There are more than 15,000 entries for "NotifyChangeDirectory". When I filtered to show only the ones with a duration of 0.5 seconds or more, there were still 5,000+ of them.
Can anyone help me here?
I found the solution. Now the build completes within a minute.
As I mentioned earlier, the build takes time when it calls "ExpressionTranslationBuildTask". I researched it and found that "during build, any expressions in the workflows are translated into expression activities before the workflow is uploaded to Workflow Manager". So I worked on reducing the expression-translation work and was able to solve it.
Earlier, in the workflow email body (and in the task email body) I used string concatenation, which caused the build problem.
For example,
"...."+
"..."+
""
I simply removed all the concatenation from the email body string, and the problem was solved.
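As a rough illustration of the idea (the text here is made up, not the original body): an email body built as a chain of concatenated literals, e.g.
"Hello team, " +
"a new task has been assigned to you. " +
"Regards"
becomes a single literal with no "+" operators:
"Hello team, a new task has been assigned to you. Regards"
Each "+" is one more expression for ExpressionTranslationBuildTask to translate, so fewer pieces should mean less work at build time.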

IBM WebSphere Application Server 7 publishing issue

My application is a combination of Spring/Hibernate/JPA. Recently my development environment was migrated to RAD 7 with WAS7; previously I was using version 6 of both RAD and WAS.
The problem is this:
when I make a Java change, the server publishes for a long time; sometimes it takes up to 10 minutes for a single-line change to take effect. Even JSP changes alone take a long time to publish!
This was not the case in WAS6. Publishing Java changes was not even a concern there; changes took effect almost immediately, as the publish process completed within a few seconds.
The publishing process keeps running over and over as I make changes to my code, and I have to wait (for long intervals during work hours) until it completes before I can verify/test my changes at runtime. This is horrible!
Is there a way to make WAS7 publish JSP/Java changes in a few seconds, as WAS6 did? Is there any fix/refresh pack for this?
Can someone help me with this?
Thanks in advance.
This problem can be overcome if you take control of publishing rather than letting it happen automatically: you can wait until you have made all your changes and then publish.
To do that:
In the Servers view, double-click the server you are working on and, under Publishing, select the option "Never publish automatically".
Also, if you can use the option "Run server with resources within the workspace", that will reduce the time spent copying files from your workspace to the server space while publishing.

Ruby Redmine SVN post-commit hook is extremely slow

I use the following post-commit hook for SVN:
"path\to\ruby.exe" "path\to\redmine\script\runner" "Repository.fetch_changesets; Rails.logger.flush" -e production
It works correctly, but it takes about 1-2 minutes.
I also thought that a lot of time might be required only for the first commit, but successive commits take the same amount of time.
Is it possible to improve this behaviour?
I know Ruby is slow on Windows, by a factor of about 3, but in my case it is much slower than that.
The configuration is as follows: Windows Vista, Redmine 1.1.1, Ruby 1.8.7 with RubyGems 1.8.7; all packages are installed, and testing is performed on the same PC.
The problem is that script/runner starts up a new Rails process from scratch each time it's run, which will make your commit pause. So 3 commits == 3 startups and 3 shutdowns.
There are a few things you can do to improve this:
Run the script/runner process in the background so it doesn't slow down your commit. On Linux you can do this by adding an & at the end of the command, but I don't remember how to do it on Windows.
Instead of fetching changesets on each commit, you can run it regularly through cron or a scheduled task. The rake task redmine:fetch_changesets is built for this purpose; it will iterate through each project and run fetch_changesets for you (see the sketch after this list).
The command you are running goes through every project and fetches the changes. If you know the project identifier you can change the query so it only gets the changes for the project you are working on:
script\runner "Project.find('your-project').try(:repository).try(:fetch_changesets); Rails.logger.flush"
Replace 'your-project' with the project identifier (found in most of the URLs in Redmine). The try calls are there so the command doesn't fail if the project or its repository doesn't exist.
Use the web service to fetch changesets instead of script/runner. The web service will use an existing Ruby process, so Redmine should already be loaded and the only slowdown will be while Redmine downloads and processes the changes. This can also be combined with the first option (i.e. background the web service request). Docs: http://www.redmine.org/projects/redmine/wiki/HowTo_setup_automatic_refresh_of_repositories_in_Redmine_on_commit
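A rough sketch of option 2 on Windows (the path is a placeholder, and this assumes the rake from your Ruby installation is on the PATH; schedule it hourly with Task Scheduler, or with cron on Linux):
cd /d "C:\path\to\redmine"
rake redmine:fetch_changesets RAILS_ENV=production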
Personally, I just run a cron job every hour (option 2); having the Repository view completely up to date isn't that important to me. Hope this gives you some ideas.
