How do I allow concurrent access to the same route? - ruby

I have a simple Sinatra application with one long-running route:
get '/jobs/new' do
logger.info "jobs/new start. Thread = #{Thread.current.inspect}"
sleep 10
logger.info "end new..."
erb :'jobs/new'
end
get '/jobs' do
erb :'jobs/index'
end
I get concurrent access across different routes, but not to the same route.
For example, while one client invokes /jobs/new (the long-running request), another client can invoke /jobs in parallel. But parallel calls to the same route don't work: in that case Puma, the web server, always handles the route on the same thread:
jobs/new started. Thread = #<Thread:0x007f42b128e600 run>
10 seconds later...
jobs/new ended. Thread = #<Thread:0x007f42b128e600 run>
jobs/new started. Thread = #<Thread:0x007f42b128e600 run> <-- new call. Has to wait till first has finished
The other route is served by a different thread, even while the first route is still running:
jobs/new started. Thread = #<Thread:0x007f42b128e600 run>
2 seconds later...
jobs started. Thread = #<Thread:0x007f541f581a40 run> <--other thread
8 seconds later...
jobs/new ended. Thread = #<Thread:0x007f42b128e600 run>
jobs/new started. Thread = #<Thread:0x007f42b128e600 run>
I tried running the app with Thin in threaded mode and with Puma; the behavior is the same.

Whatever you did to test this, I don't think it was right.
Running this code:
# config.ru
require 'bundler'
Bundler.require
get '/jobs/new' do
logger.info "jobs/new start. Thread = #{Thread.current.inspect}"
sleep 10
logger.info "end new..."
"jobs/new"
end
run Sinatra::Application
with puma:
Puma starting in single mode...
* Version 2.7.1, codename: Earl of Sandwich Partition
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://0.0.0.0:9292
Use Ctrl-C to stop
I, [2013-12-12T14:04:48.820907 #9686] INFO -- : jobs/new start. Thread = #<Thread:0x007fa5667eb7c0 run>
I, [2013-12-12T14:04:50.282718 #9686] INFO -- : jobs/new start. Thread = #<Thread:0x007fa566731e38 run>
I, [2013-12-12T14:04:58.821509 #9686] INFO -- : end new...
127.0.0.1 - - [12/Dec/2013 14:04:58] "GET /jobs/new HTTP/1.1" 200 8 10.0132
I, [2013-12-12T14:05:00.283496 #9686] INFO -- : end new...
127.0.0.1 - - [12/Dec/2013 14:05:00] "GET /jobs/new HTTP/1.1" 200 8 10.0015
^C- Gracefully stopping, waiting for requests to finish
- Goodbye
Results in 2 different threads!

Related

I lose user session with Ruby + Sinatra + puma + sequel only when worker process puma> 1

My app on Heroku with Ruby + Sinatra + Puma + Sequel works fine while the worker process count is 1. When I increase the worker processes to 2, or scale to 2 dynos, I start losing the user session randomly at different points in the system, which makes it very difficult to locate the specific error through the Heroku logs.
The same app works fine with a single Puma worker, but you lose the value of session[:usuario] with more than one.
My Rack/Sinatra app class:
class Main < Sinatra::Application
  use Rack::Session::Pool
  set :protection, :except => :frame_options

  def usuarioLogueado?
    if defined?(session[:usuario])
      if session[:usuario].nil?
        return false
      else
        return true
      end
    else
      return false
    end
  end

  get "/" do
    if usuarioLogueado?
      redirect "/app"
      .....
    else
      redirect "/home"
    end
  end
end
My sequel connection:
pool_size = 10
# db = Sequel.connect(strConexion, :max_connections => pool_size)
# db.extension(:connection_validator)
# db.pool.connection_validation_timeout = -1
My puma.rb (max 20 DB connections):
workers Integer(ENV['WEB_CONCURRENCY'] || 1)
threads_count = Integer(ENV['MAX_THREADS'] || 10)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
Rack::Session::Pool is a simple memory based session store. Each process has its own store and they are not shared between processes or hosts. When a request gets directed to a different dyno or different process on the same dyno, the session data will not be available.
You could look at sticky sessions, but they won’t work in all situations (e.g. when dynos are created or destroyed) and won’t work at all if you have multiple processes on a single dyno.
You should look at using cookie based sessions, or set up a shared server side store such as memcached with Dalli, so that it doesn’t matter which dyno or process each request is routed to.
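The per-process isolation is easy to demonstrate with plain Ruby: a hash mutated in a forked child process (standing in for a second Puma worker) is invisible to the parent. All names here are illustrative:

```ruby
# Minimal sketch of why an in-process session store breaks with multiple
# workers: each forked process gets a copy-on-write copy of memory,
# not shared state.
store = { "abc" => { usuario: "alice" } }  # stand-in for Rack::Session::Pool's hash

reader, writer = IO.pipe
pid = fork do
  reader.close
  store["abc"][:usuario] = "bob"      # "login" handled by this worker only
  writer.puts store["abc"][:usuario]
  writer.close
end
writer.close
child_view = reader.read.strip
Process.wait(pid)

puts "child saw:  #{child_view}"              # bob
puts "parent saw: #{store['abc'][:usuario]}"  # still alice
```

The child's write never reaches the parent's copy of the hash, which is exactly what happens to session data when the next request lands on a different worker.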

idle connection in postgres causing process to stuck or result in error

We have Postgres as our backend database. Our process runs a job (i.e. it does some inserts/updates in the DB) and then sleeps for an hour.
This is what we have noticed: while our process is sleeping, its Postgres connection status shows as idle.
postgres 5045 0.3 0.4 231220 33780 ? Ss 12:13 0:16 postgres: scp scp_test x.x.x.x(60400) idle
Now my question is: if I have a process that sleeps for an hour, does Postgres close the idle connection after some time? I ask because on the next run the process is not able to insert/update any records in the DB.
Here is what my code looks like:
require 'logger'

$logger = Logger.new('log/checker.log')
last_modified = Time.now
loop do
  if last_modified == File.mtime(file_path)
    $logger.info "Sleeping for 1 hour"
    sleep 3600
  else
    $logger.info "Inserting, the file mtime changed .."
    last_modified = File.mtime(file_path)
    $logger.info "File modified ....."
    attributes = Test.testFile(file_path)
    index = 0
    $logger.info "........Index set......"
    header_attributes = attributes.shift
    $logger.info "...........Header removed..........."
    trailer_attributes = attributes.pop
    $logger.info "...Trailer removed......."
    count = attributes.count
    $logger.info "............Count calculated #{count} ........."
    attributes.each_slice(50000) do |records|
      _records = initialize_records(records)
      _records.each do |record|
        record.save
        index += 1
        $logger.info "Inserting ...... [#{index}/#{count}]"
      end
    end
    $logger.info "Completed insertion of #{count}"
  end
end
Tested this with
Ruby-2.2.2 - ActiveRecord-4.2.6 - pg-0.18.0
Ruby-2.3.0 - ActiveRecord-4.2.6 - pg-0.18.0
Jruby-9.0.5.0 - ActiveRecord-4.2.6 - activerecord-jdbc-adapter
Postgres version:
PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit
There is one difference between the Ruby and JRuby output:
While both processes get stuck after they wake up from sleep,
the Ruby process dies with a PG::UnableToSend: SSL SYSCALL error: EOF detected error,
but the JRuby process hangs forever (it doesn't die).
Not sure where the issue is, since I can't pinpoint any specific library or code at this point.
Note: it works perfectly fine on a loopback interface. The Postgres server in question is a remote one.
Inside the data folder there is a file called postgresql.conf; you have to configure it to send keepalive messages to the client.
Configuring postgresql.conf
Your postgresql.conf file has a line with the tcp_keepalives_idle setting.
If you want to send a keepalive message to the client every 5 minutes, update the tcp_keepalives_idle line like this:
tcp_keepalives_idle = 300
Make sure to uncomment that line by removing the # mark.
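The same mechanism can also be enabled from the client side on the raw TCP socket. A sketch using Ruby's stdlib; note that the TCP_KEEPIDLE constant name is Linux-specific, and libpq-based drivers generally expose an equivalent keepalives_idle connection parameter:

```ruby
require 'socket'

# Enable TCP keepalives on a client socket (Linux constant names assumed):
sock = Socket.new(:INET, :STREAM)
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, true)
sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPIDLE, 300)  # first probe after 300s idle

puts sock.getsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE).bool  # true
puts sock.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPIDLE).int  # 300
```

Either side sending keepalive probes is enough to stop intermediate firewalls or NAT devices from silently dropping the idle connection, which is the usual cause of the EOF error above.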

Ruby multithreading queues

I have a problem synchronizing threads and no idea how to do it. Can someone help me?
The thing is that I have to launch the threads in a specific order.
The order is the following:
Threads 1 and 7 can run simultaneously, and after one of them finishes, the next thread is launched (i.e. thread 2 and/or thread 6); the same goes for threads 3 and 5.
Finally, after both threads 3 and 5 have finished running, the last one, thread 4, runs.
This is the code I had begun with, but I am stuck on the queue implementation:
require 'thread'

MUTEX = Mutex.new
high_condition = ConditionVariable.new
low_condition = ConditionVariable.new

# Defined before the threads start so they can call it:
def you_shall_not_pass
  order = Thread.current["number"]
end

threads = []
7.times do |i|
  threads << Thread.new do
    MUTEX.synchronize {
      Thread.current["number"] = i
      you_shall_not_pass
    }
  end
end
threads.map(&:join)
Use Ruby's Queue as a counting semaphore. It has blocking push and pop operations that you can use to hand out a limited number of tokens to threads, requiring that each thread acquire a token before it runs and release the token when it's finished. If you initialize the queue with 2 tokens, you can ensure only 2 threads run at a time, and you can create your threads in whatever order you like.
require 'thread'

semaphore = Queue.new
2.times { semaphore.push(1) } # Add two concurrency tokens
puts "#{semaphore.size} available tokens"

threads = []
[1, 7, 2, 6, 3, 5, 4].each do |i|
  puts "Enqueueing thread #{i}"
  threads << Thread.new do
    semaphore.pop # Acquire token
    puts "#{Time.now} Thread #{i} running. #{semaphore.size} available tokens. #{semaphore.num_waiting} threads waiting."
    sleep(rand(10)) # Simulate work
    semaphore.push(1) # Release token
  end
end
threads.each(&:join)
puts "#{semaphore.size} available tokens"
$ ruby counting_semaphore.rb
2 available tokens
Enqueueing thread 1
Enqueueing thread 7
2015-12-04 08:17:11 -0800 Thread 7 running. 1 available tokens. 0 threads waiting.
2015-12-04 08:17:11 -0800 Thread 1 running. 0 available tokens. 0 threads waiting.
Enqueueing thread 2
Enqueueing thread 6
2015-12-04 08:17:11 -0800 Thread 2 running. 0 available tokens. 0 threads waiting.
Enqueueing thread 3
Enqueueing thread 5
Enqueueing thread 4
2015-12-04 08:17:19 -0800 Thread 6 running. 0 available tokens. 3 threads waiting.
2015-12-04 08:17:19 -0800 Thread 5 running. 0 available tokens. 2 threads waiting.
2015-12-04 08:17:21 -0800 Thread 3 running. 0 available tokens. 1 threads waiting.
2015-12-04 08:17:22 -0800 Thread 4 running. 0 available tokens. 0 threads waiting.
2 available tokens
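Note that the counting semaphore only bounds concurrency to 2; it does not guarantee which waiting thread acquires a token first. If the pairwise dependencies need to be strict (2 only after 1, 6 only after 7, 4 only after both 3 and 5), Thread#join can express that graph directly. A sketch, with short sleeps standing in for real work:

```ruby
require 'thread'

finished = Queue.new                      # records completion order thread-safely
work = ->(i) { sleep(rand / 20.0); finished << i }

t1 = Thread.new { work.(1) }
t7 = Thread.new { work.(7) }
t2 = Thread.new { t1.join; work.(2) }     # 2 waits for 1
t6 = Thread.new { t7.join; work.(6) }     # 6 waits for 7
t3 = Thread.new { t2.join; work.(3) }     # 3 waits for 2
t5 = Thread.new { t6.join; work.(5) }     # 5 waits for 6
t4 = Thread.new { [t3, t5].each(&:join); work.(4) }  # 4 waits for 3 and 5

[t1, t2, t3, t4, t5, t6, t7].each(&:join)
order = []
order << finished.pop until finished.empty?
puts order.inspect   # e.g. [7, 1, 2, 6, 3, 5, 4]; 4 is always last
```

Each chain (1→2→3 and 7→6→5) runs concurrently with the other, and thread 4 only starts once both chains are done.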

Ruby TCP/IP Client Threading

I'm using Ruby sockets for a simple ping-pong scenario.
(The client sends a string to the server, and the server sends the string back; that's all.)
Simple Client:
socket = TCPSocket.new "localhost", 5555
socket.write "test-string\n"
puts socket.gets.inspect
It's working fine, until Threads come into play:
socket = TCPSocket.new "localhost", 5555
threads = []
5.times do |t|
threads << Thread.new(t) do |th|
socket.write "#{t}\n"
puts "THREAD: #{t} --> [ #{socket.recv(1024).inspect} ]"
end
end
threads.each { |th| th.join }
# Output: THREAD: 3 --> [ "0\r\n1\r\n2\r\n3\r\n4\r\n" ]
The problem here is that each thread is reading from the same socket with socket.recv, and as a result an arbitrary thread can receive ALL the responses from the server, as you can see from the output.
Preferably each thread should receive its own response; the output should not look like
THREAD: 3 --> [ "0\r\n1\r\n2\r\n3\r\n4\r\n" ]
but rather like:
THREAD: 0 --> [ "0\r\n" ]
THREAD: 1 --> [ "1\r\n" ]
THREAD: 2 --> [ "2\r\n" ]
THREAD: 3 --> [ "3\r\n" ]
THREAD: 4 --> [ "4\r\n" ]
What is the deal here?
All your threads are sharing the same socket. You write your messages to the socket and then all 5 threads are sitting waiting for data to be available to read.
Depending on the behaviour of the other end, the buffering in the network stack etc. that could come back in one chunk or multiple chunks. In your particular set of circumstances the data appears in one chunk and one thread happens to get lucky.
To get the behaviour you want you should use one socket per thread.
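A self-contained sketch of that fix, with a throwaway echo server standing in for the real ping-pong server (the server and port here are illustrative):

```ruby
require 'socket'

# Throwaway echo server playing the part of the ping-pong server:
server = TCPServer.new('localhost', 0)          # 0 = pick a free port
port = server.addr[1]
acceptor = Thread.new do
  loop do
    Thread.new(server.accept) do |c|
      while (line = c.gets)
        c.write(line)                           # echo the line back
      end
      c.close
    end
  end
end

# One socket per thread: every thread reads only its own reply.
threads = 5.times.map do |t|
  Thread.new do
    socket = TCPSocket.new('localhost', port)
    socket.write("#{t}\n")
    reply = socket.gets
    socket.close
    [t, reply]
  end
end
results = threads.map(&:value).to_h
results.sort.each { |t, reply| puts "THREAD: #{t} --> [ #{reply.inspect} ]" }
acceptor.kill
server.close
```

Because each thread owns its connection, there is no way for one thread to consume another thread's response, whatever the server's buffering behavior.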

Detect which worker returned a TTR-expired job to the queue?

I have multiple workers processing requests in a beanstalkd queue using beanstalk-client-ruby.
For testing purposes, the workers randomly dive into an infinite loop after picking up a job from the queue.
Beanstalk notices that a job has been reserved for too long and returns it to the queue for other workers to process.
How could I detect that this has happened, so that I can kill the malfunctioning worker?
Looks like I can detect that a timeout has happened:
> job.timeouts
=> 0
> sleep 10
=> nil
> job.timeouts
=> 1
Now how can I do something like this:
> job=queue.reserve
=> 189
> job.MAGICAL_INFO_STORE[:previous_worker_pid] = $$
=> extraordinary magic happened
> sleep 10
=> nil
> job=queue.reserve
=> 189
> job.timeouts
=> 1
> kill_the_sucker(job.MAGICAL_INFO_STORE[:previous_worker_pid])
=> nil
Found a working solution myself:
Reserve a job.
Set up a new tube named after the job_id.
Push a job with your PID in the body to the new tube.
When a job with timeouts > 0 is found, pop the PID task from the job_id tube.
Kill the worker.
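The bookkeeping in those steps can be sketched with plain Ruby, using a hash of Queues as a hypothetical in-memory stand-in for beanstalkd tubes (the tube name, job id, and PID below are made up; a real worker would use the beanstalk client's put/reserve and Process.kill):

```ruby
require 'thread'

# Hypothetical stand-in for beanstalkd tubes: one Queue per tube name.
tubes = Hash.new { |h, k| h[k] = Queue.new }
job_id = 189

# Steps 1-3: worker A reserves the job and records its PID in a per-job tube.
worker_a_pid = 4242                          # a real worker would use $$
tubes["pid.#{job_id}"] << { previous_worker_pid: worker_a_pid }

# Steps 4-5: worker B later reserves the same job, sees timeouts > 0,
# pops the PID record and kills the stuck worker.
timeouts = 1
if timeouts > 0
  record = tubes["pid.#{job_id}"].pop
  puts "would kill worker #{record[:previous_worker_pid]}"  # Process.kill('TERM', ...) for real
end
```

The per-job tube acts as a tiny key-value store keyed by job_id, which is all the "MAGICAL_INFO_STORE" from the question needs to be.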
