UPDATED: OK, problem solved.
The way I started the server makes the difference in the results.
# this gives 2800 req/s on a production server, served by Thin
$ bundle exec thin start -R config.ru -e production
# this gives 1600 req/s on the same server, seemingly served through Rack
$ bundle exec rackup config.ru -s thin
So the ways of starting Sinatra:
wrong: $ ruby main.rb (based on Rack?)
wrong: $ rackup config.ru (based on Rack)
wrong: $ rackup config.ru -s thin (even this is based on Rack)
correct: $ thin start -R config.ru -e production
------------------- Original question --------------------
Today I am writing a Sinatra API application, and I found that:
Classic Sinatra code can process:
1.1 1800 requests/s, with Thin as the server.
1.2 2000 requests/s, with Puma as the server.
Modular Sinatra code can only process:
2.1 1100 requests/s, with Thin as the server.
2.2 800 requests/s, with Puma as the server.
How to reproduce this:
Classic Sinatra:
# test_classic_sinatra.rb
require 'sinatra'

get '/' do
  'hihihi'
end
Run it:
siwei $ ruby test.rb
== Sinatra (v2.0.5) has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.7.2 codename Bachmanity)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
test:
$ ab -n 1000 -c 100 http://localhost:4567/
And got this result:
Concurrency Level: 100
Time taken for tests: 0.530 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 211000 bytes
HTML transferred: 6000 bytes
Requests per second: 1885.43 [#/sec] (mean)
Time per request: 53.038 [ms] (mean)
Time per request: 0.530 [ms] (mean, across all concurrent requests)
Transfer rate: 388.50 [Kbytes/sec] received
Modular Sinatra:
# config.ru
require 'sinatra/base'

class App < Sinatra::Application
  set :environment, :production

  get '/' do
    'hihihi'
  end
end

run App
With Thin as the server, run:
$ rackup config.ru -s thin
Thin web server (v1.7.2 codename Bachmanity)
Maximum connections set to 1024
Listening on localhost:9292, CTRL+C to stop
test:
Concurrency Level: 100
Time taken for tests: 0.931 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 211000 bytes
HTML transferred: 6000 bytes
Requests per second: 1073.58 [#/sec] (mean)
Time per request: 93.146 [ms] (mean)
Time per request: 0.931 [ms] (mean, across all concurrent requests)
Transfer rate: 221.22 [Kbytes/sec] received
With Puma as the server, run:
siwei$ rackup config.ru
Puma starting in single mode...
* Version 3.11.4 (ruby 2.3.8-p459), codename: Love Song
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://localhost:9292
Use Ctrl-C to stop
test:
$ ab -n 1000 -c 100 http://localhost:9292/
And got this result:
Concurrency Level: 100
Time taken for tests: 1.266 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 178000 bytes
HTML transferred: 6000 bytes
Requests per second: 789.62 [#/sec] (mean)
Time per request: 126.643 [ms] (mean)
Time per request: 1.266 [ms] (mean, across all concurrent requests)
Transfer rate: 137.26 [Kbytes/sec] received
Before deciding to use Sinatra, I read many posts about "Sinatra, Grape and Rails API", ran tests against these frameworks, and finally decided to use Sinatra.
But now I find that Modular Sinatra doesn't seem as good as expected. Could someone give me a clue about how to use "Classic Sinatra" or "Modular Sinatra"?
If I don't use Modular Sinatra, how should I write code for big applications?
Thanks a lot!
Your benchmark is incorrect.
According to the snippets you posted, in the first case you use Thin as the server and in the second it's Puma.
These servers implement completely different concurrency models: as far as I remember, a single-threaded event loop for the former and multiple threads for the latter. As a result, Thin performs better for light non-blocking tasks, while Puma beats it in scenarios with relatively heavy computations or blocking calls.
Your dummy example simply fits Thin's model better, and that causes the difference... Modular Sinatra should be absolutely fine; just compare apples to apples :)
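To make this concrete, here is a rough sketch (my made-up example, not code from your question: the /fast and /slow routes and the 50 ms sleep are only stand-ins for non-blocking vs blocking work):
# blocking_demo.ru -- illustration only
require 'sinatra/base'

class BlockingDemo < Sinatra::Base
  # Trivial, non-blocking handler: ideal for Thin's single-threaded event loop.
  get '/fast' do
    'hihihi'
  end

  # Simulated blocking work, a stand-in for a slow DB call or external API.
  # Puma's threads keep serving other requests while one of them sleeps;
  # Thin's single thread cannot.
  get '/slow' do
    sleep 0.05
    'done'
  end
end

run BlockingDemo
Hit both paths with the same ab -n 1000 -c 100 command, once under Thin and once under Puma: /fast should behave like your benchmark above, while on /slow Puma should pull far ahead.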
OK, I found the root cause, and following The Tin Man's suggestion, I'm posting my answer here.
The problem is the "web server", not the "framework".
The way I started the server makes the difference in the results.
# this gives 2800 req/s on a production server, served by Thin
$ bundle exec thin start -R config.ru -e production
# this gives 1600 req/s on the same server, seemingly served through Rack
$ bundle exec rackup config.ru -s thin
So the ways of starting Sinatra:
wrong: $ ruby main.rb (based on Rack?)
wrong: $ rackup config.ru (based on Rack)
wrong: $ rackup config.ru -s thin (even this is based on Rack)
correct: $ thin start -R config.ru -e production
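To check what a given start command actually gives you, a small throwaway diagnostic route helps. This is just a sketch (the /info route is my own addition, and SERVER_SOFTWARE is not guaranteed to be set by every server); I left out the hard-coded set :environment line so the route reflects whatever the start command provides:
# config.ru -- diagnostic sketch
require 'sinatra/base'

class App < Sinatra::Application
  get '/' do
    'hihihi'
  end

  get '/info' do
    # settings.environment shows what Sinatra thinks the environment is;
    # SERVER_SOFTWARE is filled in by Thin and most other servers.
    "env=#{settings.environment} server=#{request.env['SERVER_SOFTWARE'] || 'unknown'}"
  end
end

run App
With thin start -R config.ru -e production this should report env=production, while a bare rackup config.ru should stay in development unless RACK_ENV is set.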
Related
What version of Go are you using?
go version go1.13 linux/amd64
What OS and processor architecture are you using?
OS: CentOS-7 x86_64 GNU/Linux
What did I do?
Spawn 100 goroutines; each goroutine reads data from Redis. The data is JSON containing a csv_file path key, and each CSV contains 1000 tokens. Each goroutine pops data from Redis, reads the CSV, and spawns one goroutine per token (so 1000 more goroutines) to call the APNS push. During the push I get "EOF"; 90% of the calls fail with this error.
I have set my OS ulimit to 500000.
What did you expect to see?
It should process 10M tokens in 1 minute.
What did I see instead?
I am getting the following error when calling the APNS service while load testing with a dev certificate:
time="2020-02-25T08:54:44-05:00" level=info msg="Push Error:%!(EXTRA *url.Error=Post https://api.sandbox.push.apple.com/3/device/eoQtFQtlL4s:APA91bGrV0HqQH4qbxeZCJrX-XMHj63: EOF)"
90% of the calls fail with this error. Each goroutine publishing its 1000 tokens takes 2s with the EOF error, which is extremely slow.
Further information:
Aim:
My aim is to publish 10M tokens in 1 minute.
Where I run the code:
I am running the Go code on an AWS EC2 instance in Virginia (us-east-1).
My question:
Why does this error occur, and how can I fix it?
It would be great if I could get some help.
I updated one of my apps to Rails 5 and upgraded the Ruby version to 2.3.1 as well. The app already used Puma prior to the Rails 5 upgrade and was deployed on a Digital Ocean droplet.
When I start rails server locally, I get the normal output in my Rails log, which I've copied below.
=> Booting Puma
=> Rails 5.0.0 application starting in development on http://localhost:3000
=> Run `rails server -h` for more startup options
[14669] Puma starting in cluster mode...
[14669] * Version 3.4.0 (ruby 2.3.1-p112), codename: Owl Bowl Brawl
[14669] * Min threads: 5, max threads: 5
[14669] * Environment: development
[14669] * Process workers: 2
[14669] * Preloading application
[14669] * Listening on tcp://localhost:3000
[14669] Use Ctrl-C to stop
[14669] - Worker 1 (pid: 14684) booted, phase: 0
[14669] - Worker 0 (pid: 14683) booted, phase: 0
Everything looks normal to me. But when I visit localhost:3000, the browser's request stays pending indefinitely, and there is no further activity in the Rails log acknowledging that any request is being received.
Has anyone encountered this type of issue, or does anyone know of any potential causes?
Resolved this issue, confirmed by #marvindanig who was experiencing the same thing: the 'tmp' folder needed to be cleared. There is a rake task in Rails to do so...
rake tmp:clear
I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue. I was hoping somebody here could help me with some answers or point me in the right direction. I have the following setup:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3, and I've also installed Nginx as a reverse proxy. But I'm sure Nginx isn't the problem here, because I tried to verify whether the socket was working using this script, and it wasn't: I got a timeout there as well, so I could rule out Nginx.
This is a testing environment, so the server isn't experiencing any extreme load. I've also checked memory consumption; it still has several GBs of free space, so that couldn't be the issue either.
What triggered me to look at the Puma socket was the error message I got in my Nginx error log:
upstream timed out (110: Connection timed out) while reading response header from upstream
Also, I couldn't find anything in the Puma logs indicating what is going wrong. Here is my Puma setup:
threads 0, 16
app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']
daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  # Re-establish the ActiveRecord connection in each forked worker,
  # since connections opened before the fork can't be shared.
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this it the output in my puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
cli_options:
conf:
options:
:min_threads: 0
:max_threads: 16
:quiet: false
:debug: false
:binds:
- unix:///APP/sockets/puma.sock
:workers: 1
:daemon: true
:mode: :http
:before_fork: []
:worker_timeout: 60
:worker_boot_timeout: 60
:worker_shutdown_timeout: 30
:environment: staging
:redirect_stdout: "/APP/log/puma.stdout.log"
:redirect_stderr: "/APP/log/puma.stderr.log"
:redirect_append: true
:pidfile: "/APP/pids/puma.pid"
:state: "/APP/pids/puma.state"
:control_url: unix:///tmp/puma-status-1449260516541-37
:config_file: config/puma.rb
:control_url_temp: "/tmp/puma-status-1449260516541-37"
:control_auth_token: cda8879717be7a645ea323d931b88d4b
:tag: APP
The application itself is a Rails app on the latest version, 4.2.5, and it's deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, it would be very much appreciated, because right now I don't see any output anywhere that could help me.
EDIT
I replaced the unix socket with a TCP connection to Puma, with the same result: it still hangs after some amount of time.
I'd start with:
How many requests get processed successfully per instance of Puma?
Make sure you log the beginning and end of each request, with the thread id of the thread executing it (see the sketch below). What do you see?
Not knowing more about your application, I'd say it's likely the threads get stuck doing long/blocking calls without timeouts, or spinning on some computation, until the whole thread pool gets depleted.
We'll see.
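For the request logging, a rough sketch of Rack middleware would do (the RequestLogger name is mine, not from any library):
# request_logger.rb -- rough sketch of per-request logging with thread ids
require 'logger'

class RequestLogger
  def initialize(app, logger = Logger.new($stdout))
    @app    = app
    @logger = logger
  end

  def call(env)
    tid     = Thread.current.object_id
    started = Time.now
    @logger.info "start tid=#{tid} #{env['REQUEST_METHOD']} #{env['PATH_INFO']}"
    @app.call(env)
  ensure
    # Runs even if the request raises, so a stuck thread shows up as a
    # "start" line with no matching "end".
    @logger.info "end   tid=#{tid} #{env['PATH_INFO']} (#{(Time.now - started).round(3)}s)"
  end
end
In a Rails app it can go in with config.middleware.insert_before 0, RequestLogger; in a plain Rack config.ru a use RequestLogger line before run does the same.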
I finally found out why my application was behaving the way it was.
After trying a TCP connection and switching to Unicorn, I started looking into other possible sources.
That's when I thought my connection to Google Cloud SQL might be the problem. The Cloud SQL FAQ mentions that you have to tweak your Compute Engine instances to make sure they keep your DB connection open. So I performed the steps they recommend, and that solved the problem for me. I've added them here just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
When I run my specs using just spork, I get quite a significant performance increase
$ time rspec .
.....
Finished in 11.39 seconds
5 examples, 0 failures
real 0m11.780s
user 0m10.318s
sys 0m1.180s
and with spork
$ time rspec . --drb
.....
Finished in 107.24 seconds
5 examples, 0 failures
real 0m1.968s
user 0m0.488s
sys 0m0.095s
which is really awesome. But once I put Guard into play, everything seems to run slowly, as if there were no Spork at all.
$ guard
Guard is now watching at '/Users/darth/projects/scvrush'
Starting Spork for RSpec
Using RSpec
Preloading Rails environment
Loading Spork.prefork block...
Spork is ready and listening on 8989!
Spork server for RSpec successfully started
Guard::RSpec is running, with RSpec 2!
Running all specs
.....
Finished in 10.77 seconds
5 examples, 0 failures
Even if I ignore the "Finished in 10.77 seconds", I can count at least 6-8 seconds every time it tries to run a spec, even for just one model.
I made some minor edits to the Guardfile, such as :wait => 120, but that should only affect Guard's startup.
You have to pass the --drb option for rspec in your Guardfile, like this:
guard 'rspec', :version => 2, :cli => '--drb' do
...
end
How can I use EventMachine.connect_unix_domain while running Thin as a service (using the init script excerpt and configuration below)? The first code example below is the problem (I get an "eventmachine not initialized: evma_connect_to_unix_server" error). The second code example works, but doesn't allow me to daemonize Thin (I don't think). Doesn't Thin already have a running instance of EventMachine?
UPDATE: Oddly enough, stopping the server (with service thin stop) seems to get into the config.ru file and run the app (so it works, until the stop command times out and kills the process). What happens when Thin stops that could be causing this behavior?
Problematic Code
class Server < Sinatra::Base
  # Webserver code removed
end

module Handler
  def receive_data(data)
    $received_data_changed = 1
    $received_data = data
  end
end

$sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)
Working Code
EventMachine.run do
  class Server < Sinatra::Base
    # Webserver code removed
  end

  module Handler
    def receive_data(data)
      $received_data_changed = 1
      $received_data = data
    end
  end

  $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)

  Server.run!(:port => 4567)
end
Init Script excerpt
DAEMON=/usr/local/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
Thin Config
---
chdir: /var/www
environment: development
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
socket: /tmp/thin.server.sock
daemonize: true
Thin is built on top of EventMachine, so I think you should use EventMachine for serving your app. Try to debug further why Thin won't daemonize (what version are you using?). You can also run Thin on another port, such as 4000, and then pass that as the upstream server to your proxy-forwarding server, if that is what you want to achieve.
What I ended up doing was removing the EventMachine.run do ... end and simply enclosing the socket connection in an EM.next_tick{ $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler) }.
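Roughly, the config.ru ends up like this (a sketch of the change; the require lines and the final run Server are my assumptions about the parts of the file that weren't shown):
# config.ru -- sketch of the fix; Thin starts the EventMachine reactor
# itself, so the socket connection only needs to be deferred until that
# reactor is running.
require 'eventmachine'
require 'sinatra/base'

class Server < Sinatra::Base
  # Webserver code removed
end

module Handler
  def receive_data(data)
    $received_data_changed = 1
    $received_data = data
  end
end

# next_tick queues the block until the reactor is running, which avoids
# the "eventmachine not initialized" error from connecting too early.
EM.next_tick { $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler) }

run Server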
Could swear I tried this once before... but it works now.
EDIT: Idea for next_tick came from here.