I have been looking for a test case that can showcase the "scalability" of Scala/Akka.
I referred to the akka-actor-test/akka.performance.trading test case, but it seems like a unit test rather than a performance benchmark.
I have run a simple Akka actor-based ping-pong test case, which gives 650K ping-pongs per second within the same JVM. But it drops to 2K ping-pongs per second if I make them remote actors running in different JVMs on the same machine.
But I guess this is not enough to show why someone should use Scala instead of Java. The same test case, if run in Java, may give better results, so why should someone go for Scala-based Akka actors? What is a test scenario which, if written in C++/Java, will not scale beyond a certain point, and for which Scala is more suitable?
Is there such a test case available on GitHub? I have seen jboner/akka-bench, but it seems very old (the last update was about 3 years ago). Is there another that I am missing? If yes, please share it with me. If not, kindly suggest a scenario and I will develop the test case and upload it to GitHub.
Here is a benchmark that tests message-sending throughput and message-exchange latency in the same JVM for different actor libraries written in Scala (Akka vs. Lift vs. ProxyActors vs. Scala vs. Scalaz): https://github.com/plokhotnyuk/actors
I was doing a project that needs to support a cluster of 30k nodes; all those nodes periodically call the API to get data.
I want to support the maximum number of concurrent GET operations per second, and because they are GET operations, they must be handled synchronously.
My local PC has 32 GB of RAM and 8 cores, the Spring Boot version is 2.6.6, and the configuration is:
server.tomcat.max-connections=10000
server.tomcat.threads.max=800
I use JMeter to run a concurrency test; the throughput is around 1k/s and the average response time is 2 seconds.
Is there any way to make it support more requests per second?
Hard to say without details of the web service, how it is implemented, what it actually does, and where the bottleneck actually is (threads, connections, CPU, memory, or something else). As a general recommendation, using non-blocking APIs would help, but the stack then needs to be fully non-blocking to make a real difference.
I mean that just adding WebFlux while keeping a blocking database layer would not improve things much.
Furthermore, any improvement in execution time would help, so check whether you can improve the code, and maybe have a look at going native (which will come "built in" with Boot 3.x, by the way).
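As an illustration only, a fully non-blocking handler for this kind of read-only endpoint could look roughly like the sketch below with WebFlux and Spring Data R2DBC. The class names, endpoint path, and entity are made up, and entity-mapping details are simplified; the point is that neither the web layer nor the data-access layer parks a thread while waiting on I/O.

    import org.springframework.data.annotation.Id;
    import org.springframework.data.repository.reactive.ReactiveCrudRepository;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;
    import reactor.core.publisher.Mono;

    // Hypothetical entity and repository: with R2DBC the query itself is non-blocking.
    record NodeData(@Id String id, String payload) {}

    interface NodeDataRepository extends ReactiveCrudRepository<NodeData, String> {}

    @RestController
    class NodeDataController {
        private final NodeDataRepository repository;

        NodeDataController(NodeDataRepository repository) {
            this.repository = repository;
        }

        @GetMapping("/data/{nodeId}")
        Mono<NodeData> data(@PathVariable String nodeId) {
            // Returns immediately; the response is written when the DB result arrives.
            return repository.findById(nodeId);
        }
    }

Whether this actually raises throughput depends on where the bottleneck really is, as noted above.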
I have a number of feature files in my cucumber scenario test suite.
I run the tests by launching Cucumber using the CLI.
These are the steps which occur when the test process is running:
We create a static instance of a class which manages the lifecycle of testcontainers for my cucumber tests.
This currently involves three containers: (i) Postgres DB (with our schema applied), (ii) Axon Server (event store), (iii) a separate application container.
We use Spring's new @DynamicPropertySource to set the values of our data source, event store, etc., so that the Cucumber process can connect to the containers (see the sketch after these steps).
@Before each scenario we perform some clean-up on the testcontainers.
This is so that each scenario has a clean slate.
It involves truncating data in tables (postgres container), resetting all events in our event store (Axon Server container), and some other work for our application (resetting relevant tracking event processors), etc.
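For context, the shared container setup and property wiring look roughly like this. It is a simplified sketch: image names, ports, and property keys are illustrative, and in practice the @DynamicPropertySource method sits on the Spring/Cucumber glue configuration class.

    import org.springframework.test.context.DynamicPropertyRegistry;
    import org.springframework.test.context.DynamicPropertySource;
    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.containers.PostgreSQLContainer;

    // Simplified sketch of the shared static containers plus property wiring.
    public class SharedContainersConfig {

        static final PostgreSQLContainer<?> POSTGRES =
                new PostgreSQLContainer<>("postgres:13");
        static final GenericContainer<?> AXON_SERVER =
                new GenericContainer<>("axoniq/axonserver:latest").withExposedPorts(8124);

        static {
            POSTGRES.start();
            AXON_SERVER.start();
        }

        @DynamicPropertySource
        static void containerProperties(DynamicPropertyRegistry registry) {
            registry.add("spring.datasource.url", POSTGRES::getJdbcUrl);
            registry.add("spring.datasource.username", POSTGRES::getUsername);
            registry.add("spring.datasource.password", POSTGRES::getPassword);
            registry.add("axon.axonserver.servers", () ->
                    AXON_SERVER.getHost() + ":" + AXON_SERVER.getMappedPort(8124));
        }
    }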
Although the tests pass fine, the problem is that, by default, the test suite takes far too long to run. So I am looking for a way to increase parallelism to speed it up.
Adding the argument --threads <n> will not work because the static containers will be in contention (I have tried this and, as expected, it fails).
The way I see it, there are two different options for parallelism which would work:
Each scenario launches its own spring application context (essentially forking a JVM), gets its own containers deployed and runs tests that way.
Each feature file launches its own Spring application context (essentially forking a JVM), gets its own containers deployed, and runs each scenario serially (as it would normally).
I think in an ideal world we would go for option 1 (see *). But this would require a machine with a lot of memory and CPUs (which I do not have access to), so option 2 would probably make the most sense for me.
My questions are:
Is it possible to configure Cucumber to fork JVMs which run assigned feature files (matching option 2 above)?
What is the best way to parallelise this situation (with Testcontainers)?
* Having each scenario deployed and tested independently agrees with the cucumber docs which state: "Each scenario should be independent; you should be able to run them in any order or in parallel without one scenario interfering with another. Each scenario should test exactly one thing so that when it fails, it fails for a clear reason. This means you wouldn’t reuse one scenario inside another scenario."
This isn't really a question for Stack Overflow. There isn't a single correct answer; mostly it depends. You may want to try https://softwareengineering.stackexchange.com/ in the future.
No, this is not possible: Cucumber does not support forking the JVM. Surefire, however, does support forking, and you may be able to utilize this by creating a runner for each feature file.
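As a rough sketch (the feature path and glue package are illustrative, using the io.cucumber JUnit 4 runner), a dedicated runner per feature file looks like this; Surefire can then run these runner classes in separate forked JVMs (for example forkCount greater than 1 with reuseForks disabled):

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    // One runner per feature file, so Surefire can fork a JVM per runner class.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "classpath:features/orders.feature",
            glue = "com.example.glue")
    public class OrdersFeatureIT {
    }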
However I would reconsider the testing strategy and possibly the application design too.
To execute tests in parallel your system has to support parallel invocations. So I would not consider resetting your database and event store for each test a good practice.
Instead, consider writing your tests in such a way that each test uses its own isolated set of resources. So, for example, if you are testing users, you create randomized users for each test. If these users are part of an organization, you create a random organization, etc.
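As a minimal sketch of that idea (names are made up): instead of truncating tables between scenarios, each scenario generates its own unique data and only asserts against what it created itself.

    import java.util.UUID;

    // Each scenario builds unique, isolated data instead of relying on a global reset.
    public class ScenarioTestData {
        public final String organizationName = "org-" + UUID.randomUUID();
        public final String userEmail = "user-" + UUID.randomUUID() + "@example.test";
        // The scenario creates its organization/user through the normal application API
        // using these values, so parallel scenarios never touch each other's rows.
    }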
This isn't always possible. Some applications are designed with implicit singleton resources in the code. In this case you'll have to refactor the application to make these resources explicit.
Alternatively consider pushing your Cucumber tests down the stack. You can test business logic at any abstraction level. It doesn't have to be an integration test. Then you can use JUnit with Surefire instead and use Surefire to create multiple forks.
I am using Selenium 2.30.0 to run a single test (on Windows) which runs for many hours (~8 hours). I was using the Firefox driver, but it runs out of memory after just 45 minutes or less, and the test execution just hangs. I was unable to use HtmlUnitDriver (I thought a pure Java solution was the answer) to run the same way as the Firefox driver (as it needs to wait for page loads, and I definitely didn't want to put random thread sleeps in my code or implement any new functionality by extending HtmlUnitDriver).
I cannot break the test case into multiple smaller units.
I cannot reload the driver as and when I see heavy memory utilization.
Is there any way to get this working?
I found this link: creating-firefox-profile-for-your-selenium-rc-tests, and it was quite helpful. I created a new Firefox profile with absolutely minimal settings, and the test has been running without issues for the last 4 hours. Thanks a lot for the help, guys!
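For reference, the shape of the profile setup was roughly like the following; the specific preferences below are illustrative examples of the kind of thing worth trimming, not necessarily the exact ones to use (Selenium 2.x API).

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.firefox.FirefoxProfile;

    // Stripped-down Firefox profile; which preferences matter depends on the test.
    public class MinimalFirefoxFactory {
        public static WebDriver create() {
            FirefoxProfile profile = new FirefoxProfile();
            profile.setPreference("browser.cache.disk.enable", false);      // keep the disk cache from growing
            profile.setPreference("browser.cache.memory.enable", false);    // and the in-memory cache
            profile.setPreference("browser.sessionhistory.max_entries", 3); // limit history kept per tab
            return new FirefoxDriver(profile);
        }
    }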
What sort of testing are you doing? Selenium is used primarily for Acceptance tests. It sounds like what you're trying to do is more like a soak test on your system.
If that's the case, take a look at JMeter, it's much more suited to this type of work. However, a rather significant difference between the two technologies is that JMeter works at the protocol (HTTP Request) level as opposed to Selenium's use of the rendered HTML.
What actually crashes: your Java test code or Firefox itself? If it's the Java test code, are you sure that you're not leaking memory? Or maybe the memory leak is on the server side?
I want to run a load test on my production server using JMeter to verify that the server can handle 1 million requests per 10 seconds. How do I configure a JMeter thread group for 1 million requests in 10 seconds? How many client machines do I need for this test?
Please share your experience if you have done this type of load test before.
First, you should ensure you really need 1 million requests in 10 seconds (what kind of site are you testing?).
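To put that target in perspective: 1,000,000 requests / 10 seconds = 100,000 requests per second, sustained. As a rough illustration only, if a single well-tuned JMeter load generator can drive a few thousand requests per second for your kind of request (this varies enormously with payload size, keep-alive, and assertions), you would need on the order of tens of coordinated generators, which is why the points below about distributed testing and cloud solutions matter.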
Then if you want to use JMeter, ensure:
You use the latest version
You tune memory correctly
You will certainly have to use distributed testing if you're not using the cloud
Follow best practices
http://www.dzone.com/links/r/see_how_to_make_jmeter_run_thousands_of_threads_w.html (Disclaimer : I'm the writer of this)
http://jmeter.apache.org/usermanual/best-practices.html
You might try the Constant Throughput Timer to create a kind of barrier
Alternatively you can try "Delay Thread creation until needed"
And finally, try a cloud solution to reach this load; see this French blog post about the kinds of issues you will face in all areas (not only the load-testing software):
http://blog.milamberspace.net/index.php/2012/07/14/rapport-de-tres-gros-test-de-charge-avec-la-solution-blazemeter-1161.html
But I have never tried a load this high myself, so I cannot tell whether it will work; it is kind of an unexplored field.
You can accomplish this using Gatling and scaling out to several machines that run the test in parallel. In the end, Gatling can aggregate the results into one report.
The Gatling documentation doesn't provide much info on this:
http://gatling.io/docs/1.5.6/user_documentation/cookbooks/scaling_out.html
but you can check my blog post to see how this can be done (I've written a script for this purpose):
Blog - http://www.nimrodstech.com/gatling-cluster-load-testing/
Gist - https://gist.github.com/Nimrod007/5cfed34eeffedfd7ec76
It looks like you'd be better off looking at another, more suitable tool for this kind of scenario,
e.g. Tsung
Tsung homepage
Tsung @ GitHub
Scaling to 30K: Tsung
or at least Gatling instead:
Gatling homepage
Gatling: scaling out
Perhaps, if you would like to use JMeter in any case, you can look at the BlazeMeter Load Testing Cloud solution.
I'm currently in an environment where we are parsing data off of the client's website. I want to use my tests to ensure that when the client changes their site, I know when we are no longer receiving the information.
My first approach was to do pure integration tests where my tests hit the client's site and assert that the data was found. However, halfway through and 500 tests in, the test run has become unbearable and in some cases has started timing out. So I cleared out as many tests as I could without losing the core protection they provide, and I'm down to 350 or so. I'm now afraid that adding more tests will only break all the tests. I also find myself no longer running the 5+ minute suite (some clients will take longer, as this depends on the speed of communication with their site) when I make changes. I consider this a complete failure.
I've been putting a lot of thought into this and asking around the office. My plan for my next attempt is to pull down the client's pages and write tests against these embedded resources in my projects. This will give me higher test coverage and allow me to go back to testing in isolation. However, I would need to be notified when they make changes and then re-pull the pages to test against. I don't think the clients will adhere to this.
A suggestion was made to me to augment this with a suite of 'random' integration tests that serve the same function as my failed tests (hitting the client's site), but in far smaller numbers than before. I really don't like the idea of random testing, where the same code can sometimes give red lights and sometimes green lights. But so far this sounds like the best idea I've heard for still becoming aware of when the client's site has changed and my code no longer finds the data.
Has anyone found themselves testing an environment like this? Any suggestions from the testing community for me?
When you say the big test suite has become unbearable, it suggests that you are running it manually. You shouldn't have to. It should just be running constantly in the background, at whatever speed it takes to complete the suite, and then start over again (perhaps after a delay if there are associated costs). Only when something goes wrong should you get an alert.
If there is something about your tests that causes them to get slower as their number grows - find it and fix it. Tests should be independent of one another, so simply having more of them shouldn't cause individual tests to time out.
My recommendation would be to isolate, as much as possible, the part of the code that deals with the uncertainty. This part should be an API that acts as a service used by all the other code. This way you would be protecting most of your code against changes.
The stable parts of the code should be unit-tested. With that part being independent of the connection to the client's site, running the tests should be much quicker, and it would also make those tests more reliable.
The part that has to deal with the changes on the client's websites can then be kept small. This way you are not solving the problem, but at least you're minimising it and centralising it in only one module of your code.
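For illustration (shown in Java, with made-up names; the same shape works in any language), the isolation could look like a small interface with one scraping implementation and one fake used by the fast tests:

    // Only ScrapingClientDataSource knows about the client's HTML; everything else,
    // including the fast unit tests, depends on the interface.
    public interface ClientDataSource {
        String fetchOrderStatus(String orderId);
    }

    class ScrapingClientDataSource implements ClientDataSource {
        private final PageFetcher fetcher; // small abstraction over the HTTP call

        ScrapingClientDataSource(PageFetcher fetcher) { this.fetcher = fetcher; }

        @Override
        public String fetchOrderStatus(String orderId) {
            String html = fetcher.get("/orders/" + orderId);
            // The brittle parsing lives here, and only here.
            return html.replaceAll("(?s).*id=\"status\">([^<]+)<.*", "$1");
        }
    }

    class FakeClientDataSource implements ClientDataSource {
        @Override
        public String fetchOrderStatus(String orderId) { return "SHIPPED"; }
    }

    interface PageFetcher { String get(String path); } // hypothetical HTTP helper

The slow tests that actually hit the client's site then only need to cover the scraping implementation, which keeps their number small.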
Suggesting that the clients expose the data as a web service would be the best option for you. But I guess that doesn't depend on you :P
You should look at dividing your tests up, maybe into separate assemblies that can be run independently. I typically have a unit tests assembly and a slower running integration tests assembly.
My unit tests assembly is very fast (because the code is tested in isolation using mocks) and gets run very frequently as I develop. The integration tests are slower and I only run them when I finish a feature / check in or if I have a bad feeling about breaking something.
Maybe you could do something similar or even take the idea further and have 3 test suites with the third containing even slower client UI polling tests.
If you don't have a continuous integration server / process you should look at setting one up. This would continuously build your software and execute the tests. This could be set up to monitor check-ins and work in the background, sending out a notification if anything fails. With this in place you wouldn't care how long your client UI polling tests take because you wouldn't ever have to run them yourself.
Definitely split the tests out - separate unit tests from integration tests as a minimum.
As Martyn said, get a Continuous Integration system in place. I use TeamCity, which is excellent, easy to use, free for the first 20 builds, and you can happily run it on your own machine if you don't have a server at your disposal: http://www.jetbrains.com/teamcity/
Set up one build to run on every check in, and make that build run your unit tests, or fast-running tests if you will.
Set up a second build to run at midnight every night (or some other convenient time), and include in this the longer running client-calling integration tests. With this in place, it won't matter how long the tests take, and you'll get a big red flag first thing in the morning if your client has broken your stuff. You can also run these manually on demand, if you suspect there might be a problem.