I am using the [com.impossibl.pgjdbc-ng/pgjdbc-ng "0.7.1"] library to connect to a postgres database.
The connection is saved inside an atom.
I then arm multiple listeners like so:
(doto (.createStatement (connection f))
  (.execute (format "LISTEN %s;" event))
  (.closeOnCompletion))
f in this case is a function called when the event triggers.
For some reason it does not take long until the connection seems to be garbage collected, which obviously breaks the listeners.
WARNING: Cleaning up leaked connection ( jdbc:pgsql://my-container/database )
This warning is followed by a stacktrace to where I opened the connection in the arm-listeners method.
I tried several things, like storing the connection in a let, but none of them helped with this specific issue.
The complete functions I use to establish the connection and start the listeners are here: https://github.com/n2o/postgres-listener/blob/master/src/postgres_listener/core.clj
This is how I start the listeners:
(defn start-listeners
  "Start all important listeners."
  []
  (connect {:host     (System/getenv "DB_HOST")
            :port     (read-string (System/getenv "DB_PORT"))
            :database (System/getenv "DB_NAME")
            :user     (System/getenv "DB_USER")
            :password (System/getenv "DB_PW")})
  (arm-listener handle-textversions "textversions_changes")
  (arm-listener handle-statements "statements_changes")
  (arm-listener handle-arguments "arguments_changes"))
It seems that binding the connection in a let and then returning it keeps the JVM from collecting the reference.
So something like this would help:
(let [conn (connect <...>)
      a    (arm-listener f name)]
  conn)
I'll keep the question open for a while, in case somebody has another answer.
I am building a service in Ruby 2.4.4, with Sinatra 2.0.5, ActiveRecord 5.2.2, Puma 3.12.0. (I'm not using rails.)
I have an endpoint which opens a DB connection (to a Postgres DB) and runs some queries, like this:
post '/endpoint' do
  # open a connection
  ActiveRecord::Base.establish_connection(@@db_configuration)
  # run some queries
  db_value = TableModel.find_by(xx: yy)
  return whatever
end

after do
  # after the endpoint finishes, close all open connections
  ActiveRecord::Base.clear_all_connections!
end
When I get two parallel requests to this endpoint, one of them fails with this error:
2019-01-12 00:22:07 - ActiveRecord::ConnectionNotEstablished - No connection pool with 'primary' found.:
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:1009:in `retrieve_connection'
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/connection_handling.rb:118:in `retrieve_connection'
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/connection_handling.rb:90:in `connection'
C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/activerecord-5.2.2/lib/active_record/core.rb:207:in `find_by'
...
My discovery process has gone this way so far:
1. I looked at the connection usage in Postgres, thinking I might be leaking connections - no, I didn't seem to be.
2. Just in case, I increased the connection pool to 16 (corresponding to 16 Puma threads) - it didn't help.
3. Then I looked into the ActiveRecord sources. Here I realized why step 2 didn't help. The problem is not that I can't get a connection, but that I can't get a connection pool (yes, yes, it says that in the exception). The #owner_to_pool map variable, from which a connection pool is obtained, stores the process_id as key and connection pools as values (actually, the value is also a map, where the key is a connection specification and the value, I presume, is an actual pool instance). In my case, I have only one connection spec to my only DB.
But Puma is a multithreaded webserver. It runs all requests in the same process but in different threads.
Because of that, I think, the following happens:
1. The first request, starting in process_id=X, thread=Y, "checks out" the connection pool in establish_connection, keyed by process_id=X, and "takes" it. Now it's not present in #owner_to_pool.
2. The second request, starting in the same process_id=X but a different thread=Z, tries to do the same - but the connection pool for process_id=X is no longer present in #owner_to_pool. So the second request doesn't get a connection pool and fails with that exception.
3. The first request finishes successfully and puts the connection pool for process_id=X back in place by calling clear_all_connections!.
4. Another request, starting after all that and with no parallel requests in other threads, will succeed, because it will pick up the connection pool and put it back again with no problems.
I am not sure I understand everything 100% correctly, but something like this seems to be happening.
Now, my question is: what do I do with all this? How do I make the multithreaded Puma webserver work correctly with ActiveRecord's connection pool?
Thanks a lot in advance!
This question seems similar, but unfortunately it doesn't have an answer, and I don't have enough reputation to comment on it and ask the author if they solved it.
So, basically, I didn't realize that establish_connection creates a connection pool. (Yes, yes, I said so myself in the question. Still, I didn't quite realize it.)
What I ended up doing, is this:
require ....

# create the connection pool with the required configuration - once; it'll belong to the process
ActiveRecord::Base.establish_connection(db_configuration)

at_exit {
  # close all connections on app exit
  ActiveRecord::Base.clear_all_connections!
}

class SomeClass < Sinatra::Base
  post '/endpoint' do
    # run some queries - they'll automatically use a connection from the pool
    db_value = TableModel.find_by(xx: yy)
    return whatever
  end
end
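One more knob worth checking in this setup: the pool needs at least as many connections as Puma has threads, or a busy server will queue on checkout. A hedged sketch, assuming db_configuration is the same Hash used above and a 16-thread Puma configuration (both assumptions, not taken from this answer):

```ruby
# Sketch only: size the pool to cover every Puma worker thread.
# Assumes db_configuration is the Hash passed to establish_connection above,
# and that Puma runs with at most 16 threads.
ActiveRecord::Base.establish_connection(
  db_configuration.merge(pool: 16)
)
```

If the pool is smaller than the thread count, requests don't fail immediately; they wait up to the checkout timeout and then raise a ConnectionTimeoutError.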
I'm quite confused about why sente is sending the client a message all on its own around 8 seconds after the page is initially loaded. What's also strange is that if I quickly send a message to the server over the websocket before this happens, everything stabilizes and the view doesn't crash.
For reference, the front-end is ClojureScript with reagent, and it's a Luminus project. For further reference, it's pretty much exactly the sample application from chapter 5 of "Web Development with Clojure".
I can tell that it's the server pushing a message to the client that's causing the problem, I just don't know enough about Sente to understand why it would even be doing this.
Here's what I think the relevant code is:
Server side:
(defn save-message! [message]
  (if-let [errors (validate-message message)]
    {:errors errors}
    (do
      (db/save-message! message)
      message)))

(defn handle-message! [{:keys [id client-id ?data]}]
  (when (= id :guestbook/add-message)
    (let [response (-> ?data
                       (assoc :timestamp (java.util.Date.))
                       save-message!)]
      (if (:errors response)
        (chsk-send! client-id [:guestbook/error response])
        (doseq [uid (:any @connected-uids)]
          (chsk-send! uid [:guestbook/add-message response]))))))
Client side (with reagent):
(defn response-handler [messages fields errors]
  (fn [{[_ message] :?data}]
    (if-let [response-errors (:errors message)]
      (reset! errors response-errors)
      (do
        ;; Fires right before the view crashes!
        (.log js/console "response-handled")
        (reset! errors nil)
        (reset! fields nil)
        (swap! messages conj message)))))
(defn home []
  (let [messages (atom nil)
        fields   (atom nil)
        errors   (atom nil)]
    (ws/start-router! (response-handler messages fields errors))
    (get-messages messages)
    (fn []
      [:div
       [:div.row
        [:div.span12
         [message-list messages]]]
       [:div.row
        [:div.span12
         [message-form fields errors]]]])))
The problem is that when sente sends this message on its own there is no data to update the messages with (or at least that's my best guess), so the atoms become nil and reagent (react.js) throws while trying to diff and patch the vdom.
If anyone knows what Sente is doing, it would be very much appreciated. This exact same setup works fine when you use Immutant's async socket support and do a lot of the work yourself (serialize/deserialize, handle connections, etc.).
As a follow up, I solved the issue by filtering for non nil messages:
(defn response-handler [messages fields errors]
  (fn [{[_ message] :?data}]
    (if-let [response-errors (:errors message)]
      (reset! errors response-errors)
      (when (not= message nil)
        (reset! errors nil)
        (reset! fields nil)
        (swap! messages conj message)))))
Still, this is kind of a band-aid solution; it would be nice to know why Sente sends me a message after the page loads if the socket isn't used immediately.
You can inspect what is happening by looking at the network tab in the developer tools; there should be a sub-tab for websocket frames.
Sente sends some events on its own, and the event name (a keyword which is the first element of the event vector) is in the chsk namespace if I remember correctly. I believe that you should use some kind of dispatch on the event names anyway, and not assume that only one kind of event will arrive.
In the context of re-frame, I have been filtering the unwanted events and dispatching the rest to the re-frame event loop. I guess that you could do something similar in luminus. On the server side, I have been using multimethods in a similar setting.
I have a ruby script that needs to run continually on the server. I've daemonized it using the daemon gem, and in my script I have it running in an infinite loop, since the daemon gem handles starting and stopping of the process that kicks off my script. In my script, I start out by setting up my DB instance using the Sequel gem and tiny_tds. Like so:
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase, user: MSSQLUser, password: MSSQLPassword)
Then I have a loop do block that is my infinite loop. Inside that, I test whether I have a connection using DB.test_connection, and then I query the DB every second or so to check if there is new content, using a query such as:
DB['SELECT * FROM dbo.[MyTable]'].all do |row|
  # My logic here
  # As part of my logic I test to see if I need to delete this row in the table and if so I use
  DB.run('DELETE FROM dbo.[MyTable] WHERE some condition')
end
Then at the end of my logic, just before I loop again, I do:
sleep 1
DB.disconnect
All of this works great for about an hour to an hour and a half, with everything checking the table, doing the logic, deleting rows, etc. Then it dies with this error message: TinyTds::Error: Adaptive Server connection timed out
My question: why is that happening? Do I need to restructure my code in a different way? Why doesn't DB.test_connection do what it is advertised to do? The documentation says it checks for a connection in the connection pool, uses it if it finds one, and creates a new one otherwise.
Any help would be much appreciated
DB.test_connection just acquires a connection from the connection pool; it doesn't check that the connection is still valid (it must have been valid at one point or it wouldn't be in the pool). There's no way to know that a connection is still valid without actually sending a query. You can use the connection_validator extension that ships with Sequel if you want to do that automatically.
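A sketch of how enabling that extension might look, reusing the connection parameters from the question (the -1 timeout is my choice, not a requirement; it makes Sequel validate on every checkout):

```ruby
require 'sequel'

# Same connection parameters as in the question.
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost,
                    database: MSSQLDatabase, user: MSSQLUser,
                    password: MSSQLPassword)

# Check connections for liveness when they are checked out of the pool;
# dead ones are replaced transparently instead of raising mid-query.
DB.extension(:connection_validator)

# By default connections are only validated if they've been idle for
# 3600 seconds; -1 validates on every checkout (a cheap query each time).
DB.pool.connection_validation_timeout = -1
```

With validation on every checkout you pay one lightweight query per acquisition, which is usually a good trade for a daemon that polls once a second anyway.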
If you are loading Sequel before forking, you need to make sure you call DB.disconnect before forking, otherwise you can end up with multiple forked processes sharing the same connection, which can cause many different issues.
I finally ended up just putting a rescue statement in there that caught the error and re-ran the line that creates the DB instance. Yes, it puts a warning in my log about that constant already being set, but I guess I could just make it not a constant and that would go away. Anyway, it appears to be working now, and on the occasions it does time out, I recover gracefully. I just wish I could have figured out why it was/is disconnecting like it is.
I am getting this error:
'could not obtain a database connection within 5 seconds (waited 5.001017 seconds). The max pool size is currently 16; consider increasing it.'
When I first got this error, I bumped the pool size from 5 to 16. But it's still happening, and I am the only one testing the database. Why is this happening when I am the only user?
I am not on rails btw. I am using:
ActiveRecord::Base.establish_connection({
  :adapter  => 'mysql2',
  :database => 'ck',
  :host     => 'localhost',
  :username => 'root',
  :password => '',
  :pool     => 16,
})
and using Sinatra.
Thanks
As Frederick pointed out you need to return opened ActiveRecord connections to the connection pool.
If you're using the Thin server, in threaded mode, then you need to add this to your Sinatra app:
after do
  ActiveRecord::Base.connection.close
end
...instead of using the ConnectionManagement suggestion. The reason is that Thin splits the request processing over 2 threads and the thread that is closing the ActiveRecord connection is not the same as the thread that opened it. As ActiveRecord tracks connections by thread ID it gets confused and won't return connections correctly.
It sounds like you are not returning connections to the pool at the end of the request. If so, each request that uses the DB will consume one connection, and eventually you'll exhaust the pool and start getting the error messages you describe.
Active Record provides a Rack middleware to handle this, ActiveRecord::ConnectionAdapters::ConnectionManagement, which should take care of things as long as it's earlier in the middleware chain than anything that accesses Active Record.
You can also take care of the connection management yourself. The docs have more details, but one way of doing it is sticking all of your DB accesses in a block like this:
ActiveRecord::Base.connection_pool.with_connection do
  ...
end
This checks out a connection at the start of the block and checks it back in afterwards.
It's better to use the middleware provided with ActiveRecord:
use ActiveRecord::ConnectionAdapters::ConnectionManagement
As kuwerty suggests, when you are using Thin, ConnectionManagement will not return connections to the pool. I suggest that instead of closing the current connection like kuwerty says, you return the connection to the pool like so.
after do
  ActiveRecord::Base.clear_active_connections!
end
For those who want to reproduce the problem, try this example.
EDIT:
I made an explanation of why using the middleware ActiveRecord::ConnectionAdapters::ConnectionManagement won't work with Thin in threaded mode, which you can find here.
I have a Ruby TCPSocket client that is connected to a server.
How can I check whether the socket is connected before I send the data?
Do I try to "rescue" a disconnected TCPSocket, reconnect and then resend? If so, does anyone have a simple code sample, as I don't know where to begin :(
I was quite proud that I managed to get a persistent connected TCPSocket client in Rails. Then the server decided to kill the client and it all fell apart ;)
edit
I've used this code to get around some of the problems - it will try to reconnect if not connected, but it won't handle the case where the server is down (it will keep retrying). Is this the start of the right approach? Thanks
def self.write(data)
  begin
    @@my_connection.write(data)
  rescue Exception => e
    @@my_connection = TCPSocket.new 'localhost', 8192
    retry
  end
end
What I usually do in these types of scenarios is keep track of consecutive retries in a variable and have some other variable that sets the retry roof. Once we hit the roof, throw some type of exception that indicates there is a network or server problem. You'll want to reset the retry count variable on success of course.
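As a sketch of that pattern (write_with_retry, the reconnect block, and the max_retries value are all my own naming, not an established API):

```ruby
require 'socket'

# Sketch of write-with-reconnect plus a retry roof. The block passed in
# must return a fresh connection, e.g. { TCPSocket.new 'localhost', 8192 }.
def write_with_retry(sock, data, max_retries: 5)
  retries = 0
  begin
    sock.write(data)
    retries = 0            # success: reset the consecutive-failure counter
    sock                   # hand back whichever socket is now live
  rescue SystemCallError, IOError => e
    retries += 1
    # Hitting the roof suggests a real network/server problem: surface it.
    raise e if retries > max_retries
    sock = yield           # reconnect and try again
    retry
  end
end
```

Callers keep the returned socket for the next write, so a reconnect sticks; rescuing only SystemCallError and IOError (rather than Exception) avoids swallowing unrelated bugs.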