I mean the connection that was previously established as
DB = Sequel.sqlite('my_blog.db')
or
DB = Sequel.connect('postgres://user:password@localhost/my_db')
or
DB = Sequel.postgres('my_db', :user => 'user', :password => 'password', :host => 'localhost')
and so on.
The Sequel::Database class has no public instance method called "disconnect" or anything similar, though it does have a "connect" method.
Maybe somebody has already faced this problem. I would appreciate any ideas.
As Mladen Jablanović points out, you can just do:
DB.disconnect
Which will disconnect all of the available connections in that Sequel::Database instance's connection pool. You can't choose a specific connection to disconnect, and it wouldn't make sense to. The sharded connection pools do support disconnecting all connections for a specific shard, though.
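For illustration, here is a minimal sketch of one place this tends to be useful: disconnecting the pool before forking, so each child process opens its own connections. The fork scenario and the posts table are my own illustrative assumptions, not something from the question.

DB = Sequel.connect('postgres://user:password@localhost/my_db')

# Close every available connection in DB's pool; connections currently in
# use are closed as they are returned to the pool.
DB.disconnect

pid = fork do
  # the child lazily opens fresh connections on first use
  puts DB[:posts].count
end
Process.wait(pid)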
I'm new to node and to feathersjs, and for my first app, I'm trying to have different parts of it communicate using channels. I understand the operations and how they're used, but I don't understand how to establish a connection to a channel in the first place.
For example, here's some code from the official documentation:
app.on('login', (payload, { connection }) => {
  if (connection && connection.user.isAdmin) {
    // Join the admins channel
    app.channel('admins').join(connection);

    // Calling a second time will do nothing
    app.channel('admins').join(connection);
  }
});
Where does "connection" come from? There is no built-in function (unless I'm missing something obvious) in feathersjs to do this.
Thanks!
Channels are used in Feathers to achieve real-time functionality.
On the server you need to configure Socket.io; the client then also needs to be connected to the server via Socket.io.
Where does "connection" come from?
connection is a js object that represents the connection the user has established by logging in.
Try doing a console.log(connection) to see what it contains.
In this case, connection is passed by the Feathers framework as an argument to the login handler you have quoted.
Once you have this connection object, you can use it to add the user to a channel, among other things.
I am building a service in Ruby 2.4.4, with Sinatra 2.0.5, ActiveRecord 5.2.2, and Puma 3.12.0. (I'm not using Rails.)
I have an endpoint which opens a DB connection (to a Postgres DB) and runs some queries, like this:
post '/endpoint' do
  # open a connection
  ActiveRecord::Base.establish_connection(@@db_configuration)

  # run some queries
  db_value = TableModel.find_by(xx: yy)
  return whatever
end

after do
  # after the endpoint finishes, close all open connections
  ActiveRecord::Base.clear_all_connections!
end
When I get two parallel requests to this endpoint, one of them fails with this error:
2019-01-12 00:22:07 - ActiveRecord::ConnectionNotEstablished - No connection pool with 'primary' found.:
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:1009:in `retrieve_connection'
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/connection_handling.rb:118:in `retrieve_connection'
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/connection_handling.rb:90:in `connection'
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/core.rb:207:in `find_by'
...
My discovery process has gone this way so far:
1. I looked at the connection usage in Postgres, thinking I might be leaking connections - no, I didn't seem to be.
2. Just in case, I increased the connection pool to 16 (corresponding to 16 Puma threads) - it didn't help.
3. Then I looked into the ActiveRecord sources. Here I realized why 2) didn't help. The problem is not that I can't get a connection, but that I can't get a connection pool (yes, yes, it says exactly that in the exception). The @owner_to_pool map variable, from which a connection pool is obtained, uses the process_id as its key; the values are connection pools (actually, each value is another map, where the key is a connection specification and the value, I presume, is an actual pool instance). In my case, I have only one connection spec to my only DB.
But Puma is a multithreaded webserver. It runs all requests in the same process but in different threads.
Because of that, I think, the following happens:
1. The first request, starting in process_id=X, thread=Y, "checks out" the connection pool in establish_connection, based on process_id=X, and "takes" it. Now it's not present in @owner_to_pool.
2. The second request, starting in the same process_id=X but a different thread=Z, tries to do the same - but the connection pool for process_id=X is no longer present in @owner_to_pool. So the second request doesn't get a connection pool and fails with that exception.
3. The first request finishes successfully and puts the connection pool for process_id=X back in place by calling clear_all_connections!.
4. Another request, starting after all that and with no parallel requests in other threads, will succeed, because it will pick up the connection pool and put it back again with no problems.
I am not sure I understand everything 100% correctly, but it seems to me that something like this is happening.
Now, my question is: what do I do with all this? How do I make the multithreaded Puma webserver work correctly with ActiveRecord's connection pool?
Thanks a lot in advance!
This question seems similar, but unfortunately it doesn't have an answer, and I don't have enough reputation to comment on it and ask the author if they solved it.
So, basically, I didn't realize that establish_connection creates a connection pool. (Yes, yes, I said so myself in the question. Still, I didn't quite realize it.)
What I ended up doing is this:
require ....

# create the connection pool with the required configuration - once;
# it'll belong to the process
ActiveRecord::Base.establish_connection(db_configuration)

at_exit {
  # close all connections on app exit
  ActiveRecord::Base.clear_all_connections!
}

class SomeClass < Sinatra::Base
  post '/endpoint' do
    # run some queries - they'll automatically use a connection from the pool
    db_value = TableModel.find_by(xx: yy)
    return whatever
  end
end
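One refinement worth considering (my own addition, not part of the fix above, so treat it as an assumption): with a threaded server like Puma, each request thread checks a connection out of the pool, and it can help to return it explicitly when the request finishes, so idle threads don't keep holding connections. A minimal sketch:

class SomeClass < Sinatra::Base
  after do
    # return the connection held by this request's thread to the pool
    ActiveRecord::Base.clear_active_connections!
  end
end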
I need to perform several operations during an SSH session. At the moment I am using SSH.start and SCP.start for the remote operations and uploads respectively. This is an example:
Net::SSH.start(host, user, options) do |session|
  output = session.exec!(cmd)
end

Net::SCP.start(host, user, options) do |scp|
  scp.upload!(src, dst, :recursive => true)
  scp.upload!(src1, dst1, :recursive => true)
end

Net::SSH.start(host, user, options) do |session|
  output = session.exec!(cmd)
end
The problem is that for every operation the SSH connection needs to be re-established and this is affecting the overall performance.
Is there a way to open a session and then perform all the required operations such as commands, uploads and downloads?
The SSH protocol allows multiple channels per connection, so technically it is possible.
I do not know the Ruby net-ssh implementation, but judging from its API it seems to support this.
The constructor of the Net::SCP class takes an existing SSH session:
# Creates a new Net::SCP session on top of the given Net::SSH +session+
# object.
def initialize(session)
So pass your existing Net::SSH instance to the Net::SCP constructor, instead of starting a new session with the .start method.
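Put together, a sketch based on the snippets from the question might look like this (the variables are the question's; I have not run this exact code):

require 'net/ssh'
require 'net/scp'

Net::SSH.start(host, user, options) do |session|
  output = session.exec!(cmd)

  # reuse the already-open SSH session for the file transfers
  scp = Net::SCP.new(session)
  scp.upload!(src, dst, :recursive => true)
  scp.upload!(src1, dst1, :recursive => true)

  more_output = session.exec!(cmd)
end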
I am getting this error:
'could not obtain a database connection within 5 seconds (waited 5.001017 seconds). The max pool size is currently 16; consider increasing it.'
When I first got this error, I bumped the count up from 5 to 16. But it's still happening, and I am the only one testing out the database. Why is this happening when I am the only user?
I am not on Rails, btw. I am using:
ActiveRecord::Base.establish_connection({
  :adapter  => 'mysql2',
  :database => 'ck',
  :host     => 'localhost',
  :username => 'root',
  :password => '',
  :pool     => 16,
})
and using Sinatra.
Thanks
As Frederick pointed out, you need to return opened ActiveRecord connections to the connection pool.
If you're using the Thin server, in threaded mode, then you need to add this to your Sinatra app:
after do
  ActiveRecord::Base.connection.close
end
...instead of using the ConnectionManagement suggestion. The reason is that Thin splits the request processing over 2 threads and the thread that is closing the ActiveRecord connection is not the same as the thread that opened it. As ActiveRecord tracks connections by thread ID it gets confused and won't return connections correctly.
It sounds like you are not returning connections to the pool at the end of the request. If not, then each request that uses the DB will consume one connection, and eventually you'll exhaust the pool and start getting the error messages you describe.
Active Record provides a rack middleware to handle this, ActiveRecord::ConnectionAdapters::ConnectionManagement, which should take care of things as long as it's earlier in the middleware chain than anything that will access Active Record.
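For example, in a plain Rack/Sinatra setup the wiring might look roughly like this (a sketch assuming a classic config.ru layout, a hypothetical ./app file, and an ActiveRecord version that still ships this middleware):

# config.ru
require './app'

# insert the middleware before the app so connections are returned
# to the pool at the end of each request
use ActiveRecord::ConnectionAdapters::ConnectionManagement
run Sinatra::Application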
You can also take care of the connection management yourself. The docs have more details, but one way of doing it is to stick all of your DB accesses in a block like this:
ActiveRecord::Base.connection_pool.with_connection do
  ...
end
Which checks out a connection at the start of the block and checks it back in afterwards.
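In a Sinatra route that might look something like this (the route and model names are made up for illustration):

post '/endpoint' do
  ActiveRecord::Base.connection_pool.with_connection do
    # the connection is checked out here and checked back in when the
    # block exits, even if it raises
    record = TableModel.find_by(xx: yy)
    record.to_json
  end
end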
It's better to use the middleware provided with ActiveRecord:
use ActiveRecord::ConnectionAdapters::ConnectionManagement
As Frederick pointed out, you need to return opened ActiveRecord connections to the connection pool.
As kuwerty suggests, when you are using Thin, ConnectionManagement will not return connections to the pool. I suggest that, instead of closing the current connection as kuwerty says, you return the connection to the pool like so:
after do
  ActiveRecord::Base.clear_active_connections!
end
For those who want to reproduce the problem, try this example.
EDIT:
I made an explanation of why using the middleware ActiveRecord::ConnectionAdapters::ConnectionManagement won't work with Thin in threaded mode, which you can find here.
I have a Ruby TCPSocket client that is connected to a server.
How can I check to see if the socket is connected before I send the data?
Do I try to "rescue" a disconnected TCPSocket, reconnect and then resend? If so, does anyone have a simple code sample, as I don't know where to begin :(
I was quite proud that I managed to get a persistently connected client TCPSocket in Rails. Then the server decided to kill the client and it all fell apart ;)
Edit:
I've used this code to get around some of the problems - it will try to reconnect if not connected, but it won't handle the case where the server is down (it will keep retrying). Is this the start of the right approach? Thanks
def self.write(data)
  begin
    @@my_connection.write(data)
  rescue Exception => e
    @@my_connection = TCPSocket.new 'localhost', 8192
    retry
  end
end
What I usually do in these types of scenarios is keep track of consecutive retries in a variable and have some other variable that sets the retry roof. Once we hit the roof, throw some type of exception that indicates there is a network or server problem. You'll want to reset the retry count variable on success, of course.
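A rough sketch of that pattern applied to the write method from the question (the ceiling of 3, the back-off, and the exception classes rescued are my choices for illustration, not established ones):

require 'socket'

class Client
  MAX_RETRIES = 3  # hypothetical retry roof; tune to taste
  @@my_connection = nil

  def self.write(data)
    retries = 0
    begin
      # (re)connect lazily so a failed connect also counts as a retry
      @@my_connection ||= TCPSocket.new('localhost', 8192)
      @@my_connection.write(data)
    rescue SystemCallError, IOError => e
      @@my_connection = nil  # drop the broken socket; reconnect on the next attempt
      retries += 1
      # once we hit the roof, give up and surface the problem to the caller
      raise "server unreachable after #{MAX_RETRIES} attempts: #{e.message}" if retries > MAX_RETRIES
      sleep 1  # brief back-off before retrying
      retry
    end
  end
end

Because retries is local to each call, the count is effectively reset on every successful write.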