I have written code in Hyperledger Composer Playground and now I want to find out how my program performs in terms of computing power, latency, and response time.
Is there any way to get this information in Hyperledger Composer?
There is no way to test this directly in Composer, but Hyperledger has a dedicated project for benchmarking: Caliper. Caliper allows you to get statistics regarding a number of performance indicators, such as TPS (Transactions Per Second), transaction latency, resource utilisation and more.
I am working on a project that needs to support a cluster of 30k nodes, all of which periodically call the API to get data.
I want to support the maximum possible number of concurrent GET operations per second, and since these are GET operations they have to be served synchronously.
My local PC has 32 GB of RAM and 8 cores, the Spring Boot version is 2.6.6, and the configuration looks like
server.tomcat.max-connections=10000
server.tomcat.threads.max=800
I use JMeter for concurrency testing; the throughput is around 1k/s and the average response time is 2 seconds.
Is there any way to make it support more requests per second?
Hard to say without details on the web service, what it actually does, and where the bottleneck actually is (threads, connections, CPU, memory or something else), but as a general recommendation, using non-blocking APIs would help. The stack then needs to be fully non-blocking to actually make a real difference.
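For example, applying Little's law to the numbers in the question suggests the worker thread pool may already be saturated:

\[ L = \lambda \times W = 1000\ \text{req/s} \times 2\ \text{s} = 2000\ \text{requests in flight} \]

With only 800 Tomcat worker threads, roughly 1,200 of those requests are queued at any given moment, which by itself inflates response times.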
That said, just adding WebFlux while keeping a blocking database driver would not improve things much.
Furthermore, any improvement in execution time helps, so check whether you can optimise the code, and maybe have a look at going native (which will come "built in" in Spring Boot 3.x, by the way).
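As a minimal sketch of what "fully non-blocking" could look like, assuming the reactive MongoDB driver (or R2DBC for a relational database) is on the classpath; the entity and repository names here are hypothetical:

```java
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

// Hypothetical entity returned to the polling nodes
record NodeData(String id, String payload) {}

// Reactive repository: the query does not tie up a thread while it runs
interface NodeDataRepository extends ReactiveCrudRepository<NodeData, String> {}

@RestController
class NodeDataController {

    private final NodeDataRepository repository;

    NodeDataController(NodeDataRepository repository) {
        this.repository = repository;
    }

    // Returning Mono frees the thread while the query is in flight, so a handful
    // of event-loop threads can serve far more concurrent requests than a
    // fixed pool of 800 blocking worker threads
    @GetMapping("/nodes/{id}")
    Mono<NodeData> getNode(@PathVariable String id) {
        return repository.findById(id);
    }
}
```

The point is that the whole chain (controller, repository, driver) returns reactive types, so no thread is ever parked waiting on I/O.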
I conducted performance testing on an e-commerce website and I have the test results with some metrics. I have already found problems in some components, for example high response times and errors on checkout and post-login. But I would also like to find the issues that are limiting the application's ability to scale. I only tested the application server, and I observed that CPU and I/O rates are very stable. Still, the application has high response times. Is there any other way I can determine from the test results why it is not scaling well? Thanks!
From the JMeter test results alone - unlikely. JMeter just sends requests, waits for the responses and measures the time in between, plus collects some extra metrics like connect time and latency; see the JMeter Glossary for the full list with explanations.
The integrated system runs at the speed of its slowest component; possible reasons could be:
Network issues (e.g. lack of bandwidth, a faulty router, long DNS resolution times, etc.)
Your application is not properly configured for high loads. Inspect the current setup of the application in terms of thread pools, maximum number of open connections, any limitations on resource usage, etc. Look for documentation on performance tuning of the individual middleware components as well.
Repeat your test run with profiler telemetry enabled, or look at the APM tool output for the test time frame if such a tool is in place. This will let you take a deep dive into what is going on under the hood of each function call, as the culprit might be an inefficient algorithm or a slow database query.
I ran a few tests using k6 (OSS) by Load Impact and found it great in terms of usability compared to JMeter.
I am doing a feasibility study to choose a load testing tool for API testing. I am inclined towards k6 because I believe it is developer-friendly, but I could not find resources about the maximum load I can simulate with k6.
Would it be possible to simulate 1 million RPS (requests per second) using k6? If so, how should I go about achieving this?
In theory, yes: if you use multiple k6 instances, you can achieve however many requests per second you want. A single k6 instance can produce anywhere from thousands to tens of thousands of requests per second, depending on a lot of different factors - machine specs, script complexity, VUs, sleep times, network conditions, etc.
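To make the scale concrete, here is a back-of-the-envelope estimate (the per-instance figure is an assumption within the range above, not a benchmark): if one instance sustains around 20,000 rps, then

\[ N \approx \frac{1{,}000{,}000\ \text{rps}}{20{,}000\ \text{rps per instance}} = 50\ \text{instances} \]

so a 1 million rps test is feasible, but firmly in multi-machine territory.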
Right now k6 doesn't have a native distributed execution mode though, so you'd have to schedule the different instances yourself. There's a REST API (https://docs.k6.io/docs/rest-api) and you can output metrics to a centralized collector like InfluxDB (https://docs.k6.io/docs/results-output), but it'd take some work to execute a single test on multiple machines. A native k6 distributed execution mode is planned, but work on it hasn't started yet.
You can run k6 on the Load Impact (https://loadimpact.com) cloud (Cloud Execution mode) to access multiple k6 instances executing in parallel. Then, as noted, you can generate a large number of requests per second with the specific RPS being highly dependent on your script and other factors.
Here is the scenario:
We are deploying the build in a Cloud Foundry container (IaaS) along with a New Relic binding service. This is hosted in Asia South-East.
My JMeter instance resides in the same region, but it runs on Amazon EC2 (Asia South-East).
When I run JMeter, the response time looks higher compared to my New Relic app response time. Why is there sometimes so much variation? Is it due to latency? How do I explain this to my client when they compare the New Relic and JMeter results? I am sure both are correct and need to find the RCA.
Please help.
JMeter response time includes network latency, which we cannot avoid, so the difference might well be due to latency. If it is a huge difference, try running the test from a machine that is very close to the app server (or in the same data center) and see if that minimizes the latency.
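A rough way to frame it for the client (a simplification that ignores JMeter's own client-side processing overhead):

\[ t_{\text{JMeter}} \approx t_{\text{app (New Relic)}} + t_{\text{network round trip}} \]

New Relic measures the time spent inside the application, while JMeter measures the whole round trip, so JMeter's number should consistently be the larger of the two.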
What is the maximum number of users you are trying to simulate? What are the CPU and memory utilization of the load generator? Usually I would keep them below 80%.
Ensure that your results satisfy Little's law (the formula is sketched after the links below); check the links for more details. If they do not, you are trying to simulate more load than your load generator can handle; go for distributed load testing in that case.
http://www.testautomationguru.com/jmeter-performance-testing-application-of-littles-law-to-workload-models/
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-rancheros-in-cloud/
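For reference, the form of Little's law used for workload models in the first link relates concurrent users to throughput, response time R, and think time Z:

\[ N_{\text{users}} = X_{\text{throughput}} \times (R + Z) \]

If the user count, throughput, and times you measure do not roughly fit this equation, the load generator itself is likely struggling to produce the intended load.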
I have an iOS social app.
This app talks to my server fairly often to do updates and retrieval, mostly small text as JSON. Sometimes users will upload pictures, which my web server then uploads to an S3 bucket. No pictures or any other type of file will be retrieved from the web server.
The EC2 micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and NGINX. Caching is handled by ElastiCache using Redis, and the database is a separate m1.large MongoDB server. The content can be fairly dynamic, as a newsfeed tends to be.
I am a total newbie when it comes to configuring NGINX for performance, and I am trying to see whether I have configured my server properly.
I am using Siege to load test my server, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so I don't know whether I've done something right or wrong.
How many concurrent users / page loads should my server be able to handle?
If I can't get hold of statistics from experience, what should count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried NGINX on a micro instance, for the reasons Jonathan pointed out: if you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path, I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold that triggers CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability; if you go over the cap you are screwed.
Try to use fastcgi_cache, if possible. You want to hit php5-fpm only if really needed.
Keep in mind that gzipping on the fly will eat a lot of CPU, and I mean a lot (for an instance that has almost no CPU power). If you can use gzip_static, do it, but I believe you cannot.
As for statistics, you will need to gather them yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of only a few KB. Run Siege in benchmark mode with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine:
siege -b -c10 -t600s 'http://<private-ip>/test.html'
You will probably see the effects of CPU throttling just by doing that! What you want to keep an eye on is transactions per second and how much throughput nginx can serve. Keep in mind that m1.small maxes out around 35 MB/s, so a micro instance will be even less.
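To make that concrete (the response size here is a made-up example, not a measurement): at 35 MB/s of network throughput and, say, 10 KB per response, the network alone caps you at roughly

\[ \frac{35\ \text{MB/s}}{10\ \text{KB per response}} \approx 3{,}500\ \text{responses/s} \]

and a micro instance sits below that ceiling before CPU throttling is even considered.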
Then move to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, once the instance uses up its burst allotment it will be limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this is going to come down to how CPU- and memory-heavy your application is. It sounds like you have it pretty well optimized already.