I have a website hosted on a server (Ubuntu 18.04) with a 2-core CPU and 4 GB of RAM. My website usually has about 200 concurrent real-time sessions (online users) on average.
For those 200 online users, resource usage is roughly:
50% of the RAM
65% of the CPU
It should be noted that my website is a Q/A website, so users come to my website and ask questions in different fields. Sometimes a question is asked in a TV contest and people immediately come to my website to search for it, or they search on Google, find a link to my website, and flood the site.
In that case, my server's CPU usage goes over 90%, mostly because of the MySQL service.
There is another scenario as well: when the Googlebot crawler starts indexing my website's links or checking for broken links, the same CPU usage occurs. The point is, I cannot increase the server resources at the moment; I will do that in the future when I get a sponsor for my website.
So, as a workaround, I'm trying to write a script that restarts the MySQL service automatically when the CPU usage is over 90%. Currently, I do that manually when I see that my website is down or pages are loading slowly.
After some research, I found I can get the real-time CPU usage percentage with this command:
echo $[100-$(vmstat 1 2|tail -1|awk '{print $15}')]
Also, I restart MySQL this way:
systemctl restart mysql
My exact question is: how can I write that condition as a Linux bash script?
#!/bin/bash
if <how to check CPU usage> then
systemctl restart mysql
fi
If you really want to go this route, just check whether the usage is over 90%, then run the script periodically from cron (a sample crontab entry follows the script).
#!/bin/bash
# Column 15 of vmstat's last sample is the idle CPU percentage,
# so usage is 100 minus idle.
(( usage = 100 - $(vmstat 1 2 | tail -n1 | awk '{print $15}') ))
if (( usage > 90 )); then
    systemctl restart mysql
fi
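For example, assuming the script is saved as /usr/local/bin/check-mysql-cpu.sh (a hypothetical path) and made executable, a root crontab entry that runs it every minute could look like this:
# m h dom mon dow command
* * * * * /usr/local/bin/check-mysql-cpu.sh
Bear in mind this is only a stopgap: restarting MySQL drops every open connection, so the site will be briefly unavailable each time it triggers.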
I have a 32 GB, i7 machine running Windows 10 and I am trying to generate a 10k VU concurrent load via JMeter. For some reason I am unable to go beyond 1k concurrent users, at which point I start getting BindException or socket connection errors. Can someone help me with the settings to achieve that kind of load? Also, if someone is up for freelancing I am happy to consider that as well. Any help would be great as I am nearing production and am unable to load test this use case. If you have any other tools that I can use effectively, that would also help.
You have reached the limit of one computer, so you need to run the test in a distributed environment across multiple machines.
You can set up JMeter's distributed testing in your own environment, or use BlazeMeter or another cloud-based load testing tool.
We can use BlazeMeter, which provides an easy way to handle our load tests. All we need to do is upload our JMX file to BlazeMeter. We can also upload a consolidated CSV file with all the necessary data, and BlazeMeter will take care of splitting it depending on the number of engines we have set.
On BlazeMeter we can set the number of users, or the combination of engines (slave systems) and threads, that we want to apply to our tests. We can also configure additional values like multiple locations.
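For JMeter's own distributed mode, the controller drives the remote engines from the command line; a rough sketch (test.jmx, host1 and host2 are placeholders for your test plan and for machines already running jmeter-server):
jmeter -n -t test.jmx -R host1,host2 -l results.jtl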
1k concurrent sounds low enough that the bottleneck is probably something else ... 1024 is also the default open file descriptor limit on a lot of Linux distributions, so maybe try to raise the limit.
ulimit -Sn
will show you your current limit and
ulimit -Hn
will show you the hard limit you can go up to before you have to touch configuration files. Edit /etc/security/limits.conf as root and set something like
yourusername soft nofile 50000
yourusername hard nofile 50000
where yourusername is the username of the user you run JMeter as.
After this you will probably have to log out and back in (or reboot) for the changes to take effect. If you are not on Linux, I don't know how to do this; you will have to google it :D
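If the hard limit is already high enough, you can also raise just the soft limit for the current shell before starting JMeter, without editing any files (a sketch; 50000 is the same example value as above, and the change only lasts for that shell session):
ulimit -Sn 50000
ulimit -Sn    # verify the new soft limit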
Recommendation:
As a k6 developer I can propose it as an alternative tool, but running 10k VUs on a single machine will be hard with it as well. Every VU takes some memory, at least 1-3 MB, and this goes up the larger your script is. But with 32 GB you could still run up to 1-2k VUs and use http.batch to make concurrent requests, which might simulate the 10k VUs depending on what your actual workflow looks like.
I managed to run the stages sample with 300 VUs on a single 3770 i7 CPU and 4 GB of RAM in a virtual machine, and got 6.5k+ rps to another virtual machine on a neighboring physical machine (the latency is very low). So maybe 1.5-2k VUs is realistic with a somewhat more interesting script and some higher latency, as this gives the Go runtime time to actually run GC while waiting for TCP packets. I highly recommend using discardResponseBodies if you don't need the bodies, and requesting the response only for the ones you actually need. This helps a lot with the memory consumption of each VU.
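For reference, a k6 test is launched from the shell with the VU count and duration set per run; a minimal sketch (script.js stands in for your own test script, and the numbers are only examples):
k6 run --vus 300 --duration 5m script.js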
I'm trying to send a large number of queries to my server. When I open a certain website (with certain parameters), it sends a query to my server, and computation is done on my server.
Right now I'm opening the website repeatedly using curl, but when I do that, the website contents are downloaded to my computer, which takes a long time and is not necessary. I was wondering how I could either open the website without using curl, or use curl without actually downloading the webpage.
Do the requests in parallel, like this:
#!/bin/bash
url="http://your.server.com/path/to/page"
for i in {1..1000} ; do
    # Start curl in background, throw away results
    curl -s "$url" > /dev/null &
    # Probably sleep a bit (randomize if you want)
    sleep 0.1 # Yes, GNU sleep can sleep less than a second!
done
# Wait for background workers to finish
wait
curl still downloads the contents to your computer, but a test where the client does not download the content would not be very realistic anyway.
Obviously the above solution is limited by the network bandwidth of the test machine, which is usually lower than the bandwidth of the web server. For realistic bandwidth tests you would need to use multiple test machines.
However, especially with dynamic web pages, the bottleneck might not be bandwidth but memory or CPU. For such stress tests, a single test machine might be enough.
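If you want to cap how many curl processes run at once instead of backgrounding all of them, a variant of the loop above built on xargs is an option (a sketch; the URL and the concurrency of 20 are placeholders):
url="http://your.server.com/path/to/page"
seq 1 1000 | xargs -I{} -P 20 curl -s -o /dev/null "$url"
xargs keeps at most 20 requests in flight at any time, which is gentler on both the test client and the server.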
I have an iOS social app.
This app talks to my server to do updates and retrieval fairly often, mostly small text as JSON. Sometimes users will upload pictures, which my web server then uploads to an S3 bucket. No pictures or any other type of file are retrieved from the web server.
The EC2 micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and NGINX. Caching is handled by ElastiCache using Redis, and the database is a separate m1.large MongoDB server. The content can be fairly dynamic, as the newsfeed is dynamic.
I am a total newbie when it comes to configuring NGINX for performance, and I am trying to see whether I've configured my server properly or not.
I am using Siege to test my server load, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so that I know whether I've done something right or wrong.
What number of concurrent users / page loads should my server be able to handle?
If I can't get hold of statistics from experience, what would count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance, for the reasons Jonathan pointed out. If you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold of CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability. If you go over the cap you are screwed.
Try to use fastcgi_cache, if possible. You want to hit php5-fpm only if really needed.
Keep in mind that gzipping on the fly will eat a lot of CPU. I mean a lot of CPU (for an instance that has almost no CPU power). If you can use gzip_static, do it. But I believe you cannot.
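For any static files you do serve, gzip_static only kicks in when a pre-compressed copy sits next to the original; a minimal sketch of preparing one (the path is just an example):
gzip -9 -c /var/www/test.html > /var/www/test.html.gz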
As for statistics, you will need to gather them yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of just a few KB. Run a siege benchmark with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine.
siege -b -c10 -t600s 'http://private-ip/test.html'
You will probably see the effects of CPU throttling just by doing that! What you want to keep an eye on is the transactions per second and how much throughput nginx can serve. Keep in mind that the m1.small maxes out around 35 MB/s, so a micro instance will be even less.
Then move on to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
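A sketch of that step, reusing the siege command from above with a gzip Accept-Encoding header against a hypothetical JSON endpoint (siege's --header option sets the request header):
siege -b -c10 -t600s --header="Accept-Encoding: gzip" 'http://private-ip/test.json'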
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, once the instance uses up its burst allotment it will be limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this is going to come down to how CPU/Memory heavy your application is. It sounds like you have it pretty well optimized already.
Update (2013 Oct, 1st):
Today I finally found out what the issue was. After debugging for hours with xdebug and trying to execute multiple scripts, I noticed that the body of the script was not the problem. When running a test worker with a sleep of 10-20 seconds, I noticed that the CPU was idling most of the time, and so deduced that what was consuming most of the CPU was bootstrapping Symfony.
My scripts were executed very quickly and then killed to spawn a new script, and so on. I've fixed it by adding a do{}while() loop that exits after a random number of seconds (to avoid all the workers restarting at the same time).
I've reduced the load from an average of 35-45% to an average of 0.5-1.5%. That's a HUGE improvement. To summarize: Symfony is bootstrapped once, and afterwards the script just waits until a random timeout, kills itself, and launches a new instance of itself. This is to avoid the script hanging, the database connection timing out, etc.
If you have a better solution, do not hesitate to share it. I'm so happy to go from 100% CPU usage (x4 servers because of the auto-scaling) to less than 1% (and only one server) for the same amount of work; it's even faster now.
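The post doesn't say how the worker gets relaunched after it kills itself; one minimal sketch, assuming a plain shell wrapper around the console command mentioned in the update below, would be:
#!/bin/bash
# Relaunch the worker every time it exits after its random timeout
while true ; do
    ./app/console --env=prod my:command:action
    sleep 1   # brief pause so a crashing worker cannot spin the CPU
done
In practice a process manager such as supervisord does the same job more robustly.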
Update (2013 Sep, 24th):
Just noticed that the console component of Symfony uses the dev environment by default.
I've specified prod on the command line: ./app/console --env=prod my:command:action, and the execution time is divided by 5, which is pretty good.
Also, I have the feeling that curl_exec is eating a lot of CPU, but I'm not sure.
I'm trying to debug the CPU usage using xdebug, reading the generated cachegrind files, but there is no reference to CPU cycles used per function or class, only the time spent and memory used.
If you want to use xdebug on the PHP command line, just put #!/usr/bin/env php -d xdebug.profiler_enable=On at the top of the script.
If anyone has a tip to debug this with xdebug I'll be happy to hear it ;)
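An alternative to the shebang trick is to pass the ini overrides directly to the php binary; a sketch using the xdebug 2 profiler settings (the output directory is only an example):
php -d xdebug.profiler_enable=On -d xdebug.profiler_output_dir=/tmp ./app/console --env=prod my:command:action
The cachegrind.out.* files it writes can then be opened in a viewer such as KCachegrind, though as noted above they report time and memory rather than CPU cycles.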
I'm asking this question without real hope.
I have a server that I use to run workers that process some background tasks. This server is an EC2 instance (m1.small) inside an auto-scaling group with a high-CPU alarm set up.
I have something like 20 workers (PHP script instances) waiting for jobs to process. To run the script I'm using the console component of the Symfony 2.3 framework.
There is not much happening in each job: fetching data from a URL, looping over the results and inserting them row by row (~1000 rows per job) into MySQL (an RDS server).
The thing is that with 1 or 2 workers running, the CPU is at 100% (I don't think it's at 100% all the time, but it's spiking every second or so), which causes the auto-scaling group to launch new instances.
I'd like to reduce the CPU usage, which is not justified at all. I was looking at PHP-FPM (FastCGI), but it looks like it's for web servers only. The PHP CLI wouldn't use it, right?
Any help would be appreciated,
Cheers
I'm running PHP 5.5.3 with the FPM SAPI, and as @datasage pointed out in his comment, this only affects the web-based side of things. If you run php -v on the CLI you'll notice:
PHP 5.5.3 (cli) (built: Sep 17 2013 19:13:27)
So FPM isn't really part of the CLI stuff.
I'm also in a similar situation to yours, except I'm running jobs via Zend Framework 2. I've found that jobs which loop over data can be resource intensive at times, but I've also found that this was caused by the way I had originally written the loop myself; it had nothing to do with PHP in general.
I'm not sure about your setup, but in one of my jobs which runs forever I've found that this works best, and my server load is almost nil.
[root@router ~]$ w
12:20:45 up 4:41, 1 user, load average: 0.00, 0.01, 0.05
USER     TTY      LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    11:52    5.00s  0.02s  0.00s w
Here is just an "example":
do {
sleep(2);
// Do your processing here
} while (1);
In place of // Do your processing here I'm actually running several DB queries, processing files, and running server commands, depending on the job requirements.
So in short, I wouldn't blame PHP-FPM for your problems. I'd start by looking at how you've written the code that runs and make the necessary changes.
I currently have four jobs which run forever, continuously checking my DB for work to process, and the server load has never spiked. I've even tested this with 1,000 jobs pending.
I have a Windows Server 2008 machine with Plesk running two web sites.
Sometimes the server slows down and a named.exe process pushes the CPU to 100%.
It lasts a short period of time and after a while it comes back.
I would like to know what this process is for and how to configure it so that it stops consuming CPU and making my sites slow.
This must be the DNS service, also known as BIND. High CPU usage may indicate one of the following:
DNS is re-reading its configuration. In this case high CPU usage will be aligned with your activities in Plesk, i.e. adding and removing domains.
Someone (normally another DNS server) is pulling data from your DNS server. This is a normal process. Since, as you say, it only lasts a short period of time, it doesn't look like a DNS DDoS.
AFAIK there is no default way in Windows to restrict software from taking 100% CPU when no other apps need the CPU at that moment.
See "DNS Treewalk Suite" system, off the process, and uses the antivirus.
Check the error "log" in the system.