Max. Load Test Run Duration in Visual Studio Load Tests - visual-studio

We are using Visual Studio Enterprise 2017 and a self-provisioned rig for load generation.
Are there any limits on the test run duration?
For example, in a soak-test scenario, can a test run continuously for two weeks with load test agents in a self-provisioned rig?

I believe that it can run for two weeks, but with a caveat: a lot of data is held in the computer's RAM as the test runs, and much more is written to the SQL database, and I have no idea what the limits on that storage would be. A run of that duration also risks losing a lot of data should something unexpected happen mid-run.
See here, where a 672-hour test failed with an integer overflow in a value holding milliseconds. Some quick sums suggest a signed 32-bit millisecond counter would overflow at about 600 hours, which is roughly 25 days.
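Those sums are easy to verify; a minimal check, assuming the elapsed time is held as a signed 32-bit count of milliseconds:

```python
# Assuming elapsed time is stored as a signed 32-bit millisecond counter,
# it overflows once the run exceeds Int32.MaxValue milliseconds.
INT32_MAX_MS = 2**31 - 1  # 2,147,483,647 ms

hours_to_overflow = INT32_MAX_MS / 1000 / 3600
days_to_overflow = hours_to_overflow / 24

print(f"Overflow after ~{hours_to_overflow:.0f} hours (~{days_to_overflow:.1f} days)")
# → Overflow after ~597 hours (~24.9 days)
```

So a two-week (336-hour) run sits comfortably under that limit, while the 672-hour run above did not.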

Related

DB synchronization on Visual Studio 2015 hangs

I tried to synchronize the database in Visual Studio 2015 after creating a project, an EDT, an enum, and a table in order to create a new screen in Dynamics 365.
The synchronization stops partway through the schema-checking process. It seems to run without problems for the first few minutes, but it always stalls at this point, as described below.
Log Details:
"Schema has not changed between new table 'DPT_TableDT' and old table
'DPT_TableDT' with table id '3997'. Returning from
ManagedSyncTableWorker.ExecuteModifyTable() Syncing Table Finished:
DPT_TableDT. Time elapsed: 0:00:00:00.0010010"
Could you tell me how to solve this issue?
Thanks in advance.
Full database synchronization log
DB Sync Log
From what you've described and also shown in your screenshot, this does not look like an error but is simply describing X++ and Dynamics AX/365FO behaviour.
When you say that it "doesn't have a problem for the first few minutes", I suspect you're just not being patient enough. Full database syncs typically take 10-30 minutes, but can be shorter or longer depending on a variety of factors, such as how much horsepower your development environment has and how many changes are being synced. I would wait at least an hour before considering the possibility that the sync engine has a genuine error (or even run it overnight and see what information it has for you in the morning).
The message you've posted from the log ("Schema has not changed") isn't an error message; it is just an informational log from the sync engine. It is simply letting you know that the table did not have any changes to propagate to SQL Server.
Solution: Run the sync overnight and post a screenshot of the results or the error list window in Visual Studio.
I've recently been stymied by a long-running application in which Access 2003 replicas refused to synchronize. The message returned was "not enough memory". This was on machines running Windows 10. The only way I could force synchronization was to move the replicas onto an old machine still running Windows 98 with Office XP, which allowed synchronization and conflict resolution. When I moved the synchronized files back to the Windows 10 machine, they still would not synchronize.
I finally had to create a blank database and link to a replica, then use make-table queries to select only data fields to create new tables. I was then able to create new replicas that would synchronize.
From this I've come to suspect the following:
Something in Windows 10 has changed and caused the problem with synchronizing/conflict resolution.
Something in the hidden/protected fields added to the replica sets is seen as a problem under Windows 10 that is not a problem under Windows 98.
One thing I noticed is that, over the years, the number of replicas in the synchronization list had grown to over 900 sets, and the only way to clear the table was to create a new, clean database.

Watin tests running much slower on Teamcity

I have a set of Watin GUI tests that run on Teamcity. They each run in about 300 seconds locally. They previously ran in about the same time on a TeamCity agent (PC specs are about the same). For some reason they now run incredibly slow. The tests have a timeout of 1500 seconds and this is being triggered to cause them to fail.
I have checked the tests again locally and they run in the expected time (about 300 seconds). When I remote into the TeamCity agent, I can see the tests keying in text at a very slow pace; everything the tests do is slower, almost as if they were running at quarter speed.
These tests are only run about once every two weeks, so it's possible something on the agent has changed to affect their behaviour.
I am at a loss as to what could be wrong. I've never encountered this before; on paper everything is correct, but it is just slow.
Does anyone have any ideas?

How do I improve performance of SSIS calling a WCF service?

Edit
I've discovered that the service does not actually perform any database calls when running my process, although it does for the normal application. I am going to troubleshoot my SSIS package and code on the assumption that something is silently failing or looping where it shouldn't.
Original Question
I have an SSIS package that reads a table of user accounts. I need to loop through them (386,000 rows) and call a WCF service to create the users in our system. The WCF call is sub-second, but even at half a second per call the process would take over 50 hours. To improve throughput, I modified the package to divide the result set into 5 batches and process them asynchronously.
The process works, and I get the expected one-fifth run time when debugging in Visual Studio. The problem is that after roughly 40,000 users, Visual Studio crashes.
So I attempted an execution using the SQL Server Execute Package Utility, but the per-iteration run time seems to grow over time, destroying any improvement I made.
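A quick check of that sequential-run estimate (Python used purely for the arithmetic):

```python
# 386,000 sequential WCF calls at roughly 0.5 s each:
rows = 386_000
seconds_per_call = 0.5
total_hours = rows * seconds_per_call / 3600

print(f"{total_hours:.1f} hours")  # → 53.6 hours, consistent with "over 50 hours"
```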
Execution start
Execution after 2 hours of processing
Any solution to my problem would be much appreciated, thank you.
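The batching approach described in the question can be sketched in a few lines. This is an illustrative outline only, not the actual SSIS package: `create_user` stands in for the real WCF call, and the batch count of 5 mirrors the split described above.

```python
from concurrent.futures import ThreadPoolExecutor

def create_user(account):
    # Placeholder for the real WCF call (~0.5 s per user in the question).
    return account["id"]

def process_batches(accounts, batch_count=5):
    # Divide the result set into N batches and process them concurrently,
    # mirroring the 5-way asynchronous split described above.
    batches = [accounts[i::batch_count] for i in range(batch_count)]
    with ThreadPoolExecutor(max_workers=batch_count) as pool:
        results = pool.map(lambda batch: [create_user(a) for a in batch], batches)
    return [r for batch in results for r in batch]

accounts = [{"id": n} for n in range(10)]
print(len(process_batches(accounts)))  # → 10
```

Note that this pattern only helps if each call is independent; the growing per-iteration time reported above suggests something stateful (connection pools, memory, logging) accumulating across calls, which batching alone will not fix.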

nUnit Test Adapter 10 second limit: long-running tests in nUnit

I have some tests in NUnit that call an external program that sometimes takes more than 10 seconds to finish. This works fine when I run them in NUnit's "Unit Test Sessions" panel. However, when I use the test adapter (which has the benefits of more thorough error output and automatic test discovery), I get the following error:
The request has taken more than 10 seconds to respond, aborting it.
Exception has been thrown by the target of an invocation.
Is there any way I can extend this time limit for my tests? Is this an issue with the adapter, or with Visual Studio itself?
Edit: To clarify, I know that mocking out time-consuming functionality is the proper thing to do. Right now, though, it's not worth the time (in my estimation) that it would take to refactor my unit tests.
It has nothing to do with Visual Studio itself, as ReSharper is able to run tests well exceeding 10 seconds. This is most likely a limitation of the adapter.
This issue (https://github.com/nunit/nunit-vs-adapter/issues/24) is fixed in version 1.2 of the adapter.

rspec tests under jruby on Windows running very slow

We are considering a move to jruby, and as part of this have been researching testing approaches and frameworks. I've been testing rspec on my local development machine and am finding that the total time taken to run 2 tests for a single class with 2 very simple methods is 7-8 seconds. By simple, I mean one setter and one that returns true.
The rspec output shows that the tests run in roughly 2 seconds, so 5-6 seconds of the total time is spent loading and initializing rspec. I'm running from the command line using
C:\rubycode\rspec_tutorial>rspec --profile user_spec.rb
..
Top 2 slowest examples:
User User should NOT be in any roles not assigned to it
0.023 seconds ./user_spec.rb:15
User User should be in any roles assigned to it
0.006 seconds ./user_spec.rb:10
Finished in 2 seconds
2 examples, 0 failures
I'm running jruby 1.6.5 and rspec 2.7.1
I've read this post, Faster RSpec with JRuby
but it's over 1.5 years old, and the answer relates to running suites of tests as opposed to short bursts of a small number of tests locally to aid TDD, which is how we want to develop. Down the line we'll incorporate a CI server.
My question is: is this the expected execution time? Is there any way to speed up running rspec tests on the local development machine under JRuby?
EDIT:
The biggest performance gain came from switching from the 64-bit "server" JVM to 32-bit "client" mode; I saw about a 40% reduction in the time taken to run a simple test. I got Nailgun up and running as well, but its performance varied. The link provided below by banzaiman was the most helpful.
It is not RSpec loading that you are feeling, but JVM startup time.
See https://github.com/jruby/jruby/wiki/Improving-startup-time for more information.
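For example, both of the fixes mentioned in the accepted edit can be driven from the command line. These are the standard JRuby flags for the client JVM and for Nailgun, but check them against your JRuby version, as flag support has varied across releases:

```shell
# Force the HotSpot "client" compiler: faster startup, lower peak speed.
jruby -J-client -S rspec user_spec.rb

# Or keep a JVM warm with Nailgun and run specs against it:
jruby --ng-server &      # start the background Nailgun server once
jruby --ng -S rspec user_spec.rb
```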
