I'm trying to run Apache Bench via Ruby with the backticks method. Normal Apache Bench output looks like this:
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking stackoverflow.com (be patient).....done
Server Software:
Server Hostname: stackoverflow.com
Server Port: 80
Document Path: /
Document Length: 0 bytes
Concurrency Level: 2
Time taken for tests: 0.752 seconds
Complete requests: 10
Failed requests: 0
Total transferred: 4600 bytes
HTML transferred: 0 bytes
Requests per second: 13.30 [#/sec] (mean)
Time per request: 150.421 [ms] (mean)
Time per request: 75.210 [ms] (mean, across all concurrent requests)
Transfer rate: 5.97 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       72    73   0.4     73     73
Processing:    74    77   2.2     77     81
Waiting:       74    77   2.2     77     81
Total:        147   150   2.2    151    153
The Ruby command I'm trying is:
results = `ab -n 10 -c 2 http://stackoverflow.com/`
It seems to work fine in IRB and I get all the results back. But when I try to hook it into my Sinatra app, I only get this back:
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking stackoverflow.com (be patient).....
It's as if the command isn't waiting for the ab tests to finish. What's going on here?
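One way to narrow this down (a hedged diagnostic sketch, not a fix; it assumes nothing about the Sinatra app beyond what is shown above) is to capture stdout, stderr, and the exit status separately with Open3 from Ruby's standard library, since backticks only return stdout and Open3.capture3 waits for the child process to exit:
require 'open3'

stdout, stderr, status = Open3.capture3("ab -n 10 -c 2 http://stackoverflow.com/")
puts "exit status: #{status.exitstatus}"
puts "--- stdout ---", stdout
puts "--- stderr ---", stderr
If the full report appears in stdout here but not when the same call runs inside the app, the difference is likely in how the app invokes the command (environment, PATH, timeouts) rather than in ab itself.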
Related
I just bought, for one month, an extremely cheap VPS with 16 GB RAM and 6 cores (from Contabo).
Now my question is: how can I get some benchmark results so I can compare it with other VPSes, like the ones Hostinger provides?
I did a Geekbench benchmark on it and the results can be seen here: https://browser.geekbench.com/v4/cpu/15852309
The problem with Geekbench is that I feel it's not really web-oriented, as the scores are influenced by the GPU as well.
What should I use to compare these VPSes against each other?
Would the plan be enough to host a Magento 2 website, or possibly more?
For web server performance, network, disk (random read), and CPU performance are the most important factors.
I like to benchmark and compare each one separately.
For disk I/O performance, you can use sysbench:
apt install sysbench
sysbench fileio --file-num=4 prepare
sysbench fileio --file-num=4 --file-test-mode=rndrw run
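Note that the fileio test leaves its prepared test files on disk; you can remove them afterwards with the matching cleanup step (same --file-num as used above):
sysbench fileio --file-num=4 cleanup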
For CPU performance, you can use stress-ng:
apt install stress-ng
stress-ng -t 5 -c 2 --metrics-brief
-c 2 uses 2 logical processors. Adjust if necessary.
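For example, to run one stressor per available logical processor (this assumes nproc from GNU coreutils is available, as it is on most Linux systems):
stress-ng -t 5 -c $(nproc) --metrics-brief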
For network performance, you can use speedtest-cli:
apt install speedtest-cli
speedtest-cli
Example output:
# sysbench fileio --file-num=4 --file-test-mode=rndrw run
<skip>
Throughput:
read, MiB/s: 45.01
written, MiB/s: 30.00
# stress-ng -t 5 -c 2 --metrics-brief
stress-ng: info: [14993] dispatching hogs: 2 cpu
stress-ng: info: [14993] successful run completed in 5.00s
stress-ng: info: [14993] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
stress-ng: info: [14993] (secs) (secs) (secs) (real time) (usr+sys time)
stress-ng: info: [14993] cpu 3957 5.00 9.99 0.00 790.92 396.10
# speedtest-cli
Retrieving speedtest.net configuration...
Testing from <skip> ...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Uganda Hosting Limited (Helsinki) [0.20 km]: 1.807 ms
Testing download speed................................................................................
Download: 575.68 Mbit/s
Testing upload speed................................................................................................
Upload: 499.89 Mbit/s
I just installed Linux and Intel MPI on two machines:
(1) A quite old (~8 years) SuperMicro server with 24 cores (Intel Xeon X7542 x 4) and 32 GB memory.
OS: CentOS 7.5
(2) A new HP ProLiant DL380 server with 32 cores (Intel Xeon Gold 6130 x 2) and 64 GB memory.
OS: OpenSUSE Leap 15
After installing the OS and Intel MPI, I compiled the Intel MPI benchmark and ran it:
$ mpirun -np 4 ./IMB-EXT
It is quite surprising that I get the same error when running IMB-EXT and IMB-RMA, even though the OS and everything else differ (even the GCC version used to compile the Intel MPI benchmark is different: on CentOS I used GCC 6.5.0, and on OpenSUSE I used GCC 7.3.1).
On the CentOS machine, I get:
#---------------------------------------------------
# Benchmarking Unidir_Put
# #processes = 2
# ( 2 additional processes waiting in MPI_Barrier)
#---------------------------------------------------
#
# MODE: AGGREGATE
#
#bytes #repetitions t[usec] Mbytes/sec
0 1000 0.05 0.00
4 1000 30.56 0.13
8 1000 31.53 0.25
16 1000 30.99 0.52
32 1000 30.93 1.03
64 1000 30.30 2.11
128 1000 30.31 4.22
and on the OpenSUSE machine, I get
#---------------------------------------------------
# Benchmarking Unidir_Put
# #processes = 2
# ( 2 additional processes waiting in MPI_Barrier)
#---------------------------------------------------
#
# MODE: AGGREGATE
#
#bytes #repetitions t[usec] Mbytes/sec
0 1000 0.04 0.00
4 1000 14.40 0.28
8 1000 14.04 0.57
16 1000 14.10 1.13
32 1000 13.96 2.29
64 1000 13.98 4.58
128 1000 14.08 9.09
When I don't use mpirun (meaning there is only one process running IMB-EXT), the benchmark runs through, but Unidir_Put needs >= 2 processes, so that doesn't help much, and I also find that the functions using MPI_Put and MPI_Get are much slower than I expected (from my experience). Using MVAPICH on the OpenSUSE machine did not help either. The output is:
#---------------------------------------------------
# Benchmarking Unidir_Put
# #processes = 2
# ( 6 additional processes waiting in MPI_Barrier)
#---------------------------------------------------
#
# MODE: AGGREGATE
#
#bytes #repetitions t[usec] Mbytes/sec
0 1000 0.03 0.00
4 1000 17.37 0.23
8 1000 17.08 0.47
16 1000 17.23 0.93
32 1000 17.56 1.82
64 1000 17.06 3.75
128 1000 17.20 7.44
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 49213 RUNNING AT iron-0-1
= EXIT CODE: 139
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
Update: I tested Open MPI, and it runs through smoothly (although my application does not recommend using Open MPI, and I still don't understand why Intel MPI or MVAPICH doesn't work...):
#---------------------------------------------------
# Benchmarking Unidir_Put
# #processes = 2
# ( 2 additional processes waiting in MPI_Barrier)
#---------------------------------------------------
#
# MODE: AGGREGATE
#
#bytes #repetitions t[usec] Mbytes/sec
0 1000 0.06 0.00
4 1000 0.23 17.44
8 1000 0.22 35.82
16 1000 0.22 72.36
32 1000 0.22 144.98
64 1000 0.22 285.76
128 1000 0.30 430.29
256 1000 0.39 650.78
512 1000 0.51 1008.31
1024 1000 0.84 1214.42
2048 1000 1.86 1100.29
4096 1000 7.31 560.59
8192 1000 15.24 537.67
16384 1000 15.39 1064.82
32768 1000 15.70 2086.51
65536 640 12.31 5324.63
131072 320 10.24 12795.03
262144 160 12.49 20993.49
524288 80 30.21 17356.93
1048576 40 81.20 12913.67
2097152 20 199.20 10527.72
4194304 10 394.02 10644.77
Is there any chance that I am missing something when installing MPI, or when installing the OS on these servers? Actually, I assume the OS is the problem, but I'm not sure where to start...
Thanks a lot in advance,
Jae
Although this question is well written, you were not explicit about:
* the Intel MPI Benchmark version (please add the header)
* the versions of Intel MPI, Open MPI, and MVAPICH
* the supported host network fabrics, for each MPI distribution
* the fabric selected while running the MPI benchmark (see the example command after this list)
* compilation settings
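As a sketch of those last two points (the exact fabric names depend on the Intel MPI version, so treat shm:tcp below as an illustrative value, not a recommendation): pinning the fabric explicitly and raising the debug level makes Intel MPI report which fabric/provider it actually picked, which helps show whether the crash is related to fabric autodetection:
I_MPI_DEBUG=5 I_MPI_FABRICS=shm:tcp mpirun -np 4 ./IMB-EXT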
Debugging this kind of trouble with disparate host machines, multiple Linux distributions and compiler versions can be quite hard. Remote debugging on StackOverflow is even harder.
First of all, ensure reproducibility; this seems to be the case here. Of the many debugging approaches, the one I would recommend is to reduce the complexity of the system as a whole, test smaller sub-systems, and start shifting responsibility to third parties. You can replace self-compiled executables with software packages provided by the distribution's package repositories or by third parties like Conda.
Intel recently started to provide its libraries through YUM/APT repos, as well as for Conda and PyPI. I have found that this helps a lot with reproducible deployments of HPC clusters and even runtime/development environments. I recommend using it for CentOS 7.5.
YUM/APT repository for Intel MKL, Intel IPP, Intel DAAL, and Intel® Distribution for Python* (for Linux*):
Installing Intel® Performance Libraries and Intel® Distribution for Python* Using YUM Repository
Installing Intel® Performance Libraries and Intel® Distribution for Python* Using APT Repository
Conda* package/ Anaconda Cloud* support (Intel MKL, Intel IPP, Intel DAAL, Intel Distribution for Python):
Installing Intel Distribution for Python and Intel Performance Libraries with Anaconda
Available Intel packages can be viewed here
Install from the Python Package Index (PyPI) using pip (Intel MKL, Intel IPP, Intel DAAL)
Installing the Intel® Distribution for Python* and Intel® Performance Libraries with pip and PyPI
I do not know much about OpenSUSE Leap 15.
After encountering situations where the rethinkdb service was down for unknown reasons, I noticed that it uses a lot of memory:
# free -m
total used free shared buffers cached
Mem: 7872 7744 128 0 30 68
-/+ buffers/cache: 7645 226
Swap: 4031 287 3744
# top
top - 23:12:51 up 7 days, 1:16, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 133 total, 1 running, 132 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8061372k total, 7931724k used, 129648k free, 32752k buffers
Swap: 4128760k total, 294732k used, 3834028k free, 71260k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1835 root 20 0 7830m 7.2g 5480 S 1.0 94.1 292:43.38 rethinkdb
29417 root 20 0 15036 1256 944 R 0.3 0.0 0:00.05 top
1 root 20 0 19364 1016 872 S 0.0 0.0 0:00.87 init
# cat log_file | tail -9
2014-09-22T21:56:47.448701122 0.052935s info: Running rethinkdb 1.12.5 (GCC 4.4.7)...
2014-09-22T21:56:47.452809839 0.057044s info: Running on Linux 2.6.32-431.17.1.el6.x86_64 x86_64
2014-09-22T21:56:47.452969820 0.057204s info: Using cache size of 3327 MB
2014-09-22T21:56:47.453169285 0.057404s info: Loading data from directory /rethinkdb_data
2014-09-22T21:56:47.571843375 0.176078s info: Listening for intracluster connections on port 29015
2014-09-22T21:56:47.587691636 0.191926s info: Listening for client driver connections on port 28015
2014-09-22T21:56:47.587912507 0.192147s info: Listening for administrative HTTP connections on port 8080
2014-09-22T21:56:47.595163724 0.199398s info: Listening on addresses
2014-09-22T21:56:47.595167377 0.199401s info: Server ready
That seems like a lot considering the size of the data files:
# du -h
4.0K ./tmp
156M .
Do I need to configure a different cache size? Do you think this has something to do with the service unexpectedly going down? I'm using v1.12.5.
There were a few memory leaks in previous versions, the main one being https://github.com/rethinkdb/rethinkdb/issues/2840
You should probably update RethinkDB -- the current version is 1.15.
If you are running 1.12, you will need to export your data, but that should be the last time you need to do so, since 1.14 introduced seamless migrations.
From Understanding RethinkDB memory requirements - RethinkDB
By default, RethinkDB automatically configures the cache size limit according to the formula (available_mem - 1024 MB) / 2, where available_mem is the amount of memory available when the server starts.
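For the 8 GB box above, that formula lands at roughly (7700 MB - 1024 MB) / 2 ≈ 3300 MB, which is consistent with the "Using cache size of 3327 MB" line in the log.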
You can change this via a config file as they document, or change it with a size (in MB) from the command line:
rethinkdb --cache-size 2048
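The equivalent setting in the instance configuration file looks like the snippet below; the path shown is just the Debian/Ubuntu packaging default and may differ on your install:
# /etc/rethinkdb/instances.d/instance1.conf (path may vary)
cache-size=2048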
Possible Duplicate:
'ab' program freezes after lots of requests, why?
Here's a simple test server:
require 'rubygems'
require 'rack'
require 'thin'

class HelloWorld
  def call(env)
    # Body wrapped in an array so it responds to #each, as Rack expects
    [200, {"Content-Type" => "text/plain"}, ["OK"]]
  end
end

Rack::Handler::Thin.run HelloWorld.new, :Port => 9294
# I've tried with these added too: 'rack.multithread' => true, 'rack.multiprocess' => true
Here's a test run:
$ ab -n 20000 http://0.0.0.0:9294/sdf
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 0.0.0.0 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
apr_poll: The timeout specified has expired (70007)
Total of 16347 requests completed
It breaks down at around 16,500 requests. Why? How can I find out what's going on? Is it GC in Ruby, or is it something to do with the number of available network sockets on an OS X machine? I have an MBP, 2.5 GHz, 6 GB memory.
Edit
After some discussion here and testing various things, it seems like changing net.inet.tcp.msl from 15000 ms to 1000 ms makes the problem of testing high-frequency web servers with ab go away.
sudo sysctl -w net.inet.tcp.msl=1000 # this is only good for local development
See the referenced question, which has the answer to this problem: 'ab' program freezes after lots of requests, why?
I'll add the solution here for clarity's sake. The correct solution for running high-frequency tests with ab on OS X is to change the 'net.inet.tcp.msl' setting from 15000 ms to 1000 ms. This should only be done on development boxes.
sudo sysctl -w net.inet.tcp.msl=1000 # this is only good for local development
This answer was found after the good detective work performed in the comments here, and it comes from an answer to a very similar question: https://stackoverflow.com/a/6699135/155031
I think I've got it.
When ab makes connections to your test server, it opens a source port (say, 50134) and makes a connection to the destination port (9294).
The ports that ab opens for the source port are determined by the sysctl settings net.inet.ip.portrange.first and net.inet.ip.portrange.last. For example, on my machine:
philippotter ~ $ sysctl -a | grep ip.portrange
net.inet.ip.portrange.lowfirst: 1023
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535
net.inet.ip.portrange.hifirst: 49152
net.inet.ip.portrange.hilast: 65535
This means that ab's source ports will be in the range from 49152 to 65535, which is a total of 16384.
HTTP is a TCP protocol. When a TCP connection is closed, it goes into the TIME_WAIT state, while it waits for any remaining in-transit packets to reach their destinations. This means that the port is not usable for any other purpose until the timeout is reached.
So, putting all of this together, ab uses up all available source ports very quickly; they go into the TIME_WAIT state; they can't be reused; ab is unable to create any more connections.
You can see this if you kill ab when it hangs, and run it again -- it won't be able to create any connections!
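If you want to watch this happen, two commands are enough (this is an OS X-oriented sketch; the same idea works on Linux with different sysctl names). While ab is hanging, count the sockets stuck in TIME_WAIT and check the current MSL; the count should be close to the 16384 available source ports computed above, which lines up with the ~16347 requests ab completed before stalling:
netstat -an | grep TIME_WAIT | wc -l
sysctl net.inet.tcp.msl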
My Sphinx configuration is:
config/sphinx.yml:

development:
  bin_path: "/usr/local/bin"
  searchd_binary_name: searchd
  indexer_binary_name: indexer
But every time I run rake ts:index, I get:
Sphinx cannot be found on your system. You may need to configure the following
settings in your config/sphinx.yml file:
* bin_path
* searchd_binary_name
* indexer_binary_name
For more information, read the documentation:
http://freelancing-god.github.com/ts/en/advanced_config.html
Generating Configuration to config/development.sphinx.conf
Sphinx 2.0.1-beta (r2792)
Copyright (c) 2001-2011, Andrew Aksyonoff
Copyright (c) 2008-2011, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'config/development.sphinx.conf'...
indexing index 'post_core'...
collected 2 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 2 docs, 675 bytes
total 0.006 sec, 110510 bytes/sec, 327.43 docs/sec
skipping non-plain index 'post'...
total 6 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 12 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=19438).
Generating Configuration to config/development.sphinx.conf
Sphinx 2.0.1-beta (r2792)
Copyright (c) 2001-2011, Andrew Aksyonoff
Copyright (c) 2008-2011, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'config/development.sphinx.conf'...
indexing index 'post_core'...
collected 2 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 2 docs, 675 bytes
total 0.006 sec, 105567 bytes/sec, 312.79 docs/sec
skipping non-plain index 'post'...
total 6 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 12 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=19438).
So what's the problem? Why does the rake task say it can't find Sphinx even though it's installed?
The warning from Thinking Sphinx could definitely be clearer... the problem is very likely to be how old your version of Thinking Sphinx is. Older TS versions don't know about Sphinx 2.0.x - so I'd recommend updating to the latest version of Thinking Sphinx (either 1.4.6 for Rails 1.2 and 2.x, or 2.0.5 for Rails 3).
There are two things that help to solve this problem. First, as Pat says, it is useful to update the Thinking Sphinx plugin or gem to the latest version (either 1.4.x for Rails 2, or 2.0.x for Rails 3). Second, it sometimes helps to specify the version of Sphinx in the configuration file (you can find it out by running "indexer"; see the example after the config below), especially if Sphinx is running on a remote server and Thinking Sphinx does not have access to Sphinx locally:
production:
  ..
  version: 2.0.4   # <------- version of Sphinx on the remote server 192.168.1.10
  port: 9312
  address: 192.168.1.10
  ..
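As noted above, you can confirm which Sphinx version is actually installed by running indexer with no arguments; the first line it prints is the version banner, in the same style as the "Sphinx 2.0.1-beta (r2792)" lines in the rake output earlier:
indexer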
I was facing the same issue and looked everywhere for an answer without any resolution.
The trick that worked for me was to install an older version of Sphinx: v0.9 instead of the latest beta.
Using the latest Thinking Sphinx with this version of Sphinx resolved the issue.