I'm using neo4j in a Ruby CLI app.
Each time a command is run from the command line, "session = Neo4j::Session.open(:server_db)" is re-established, which is quite slow.
Is there any way to persist the "session" on first use and re-use it in subsequent command invocations from the command line?
Regards
The neo4j-core gem uses the faraday gem to make persistent HTTP connections. That's defined here:
https://github.com/neo4jrb/neo4j-core/blob/master/lib/neo4j-server/cypher_session.rb#L24
That uses the NetHttpPersistent Faraday adapter here:
https://github.com/lostisland/faraday/blob/master/lib/faraday/adapter/net_http_persistent.rb
Which I believe uses the net-http-persistent library:
https://github.com/drbrain/net-http-persistent
When calling open on Session, you can pass a Hash of options as a second argument. You can specify a connection key in that hash, which is a Faraday connection object you've created. That might allow you to save some token/string somewhere and then reload the Faraday object from it each time, picking the session up from where it left off.
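For illustration, a minimal sketch of that approach, assuming the default local server URL (the URL, the adapter choice, and the exact open signature are assumptions; you may also need to mirror the middleware neo4j-core sets up in the file linked above):
require 'faraday'
require 'neo4j-core'

# Build one Faraday connection up front, using the persistent adapter.
faraday = Faraday.new(url: 'http://localhost:7474') do |f|
  f.adapter :net_http_persistent
end

# Hand it to neo4j-core via the :connection key in the options hash.
session = Neo4j::Session.open(:server_db, 'http://localhost:7474', connection: faraday)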
The other option is to have a daemon in the background which keeps the connection open.
I'm running a very basic Sinatra server, which simply shows a Chartkick graph of some data I have through the Sequel gem. I'm noticing that the data on the chart doesn't seem to update unless I quit the Sinatra server script and rerun it. I don't really understand how that would be possible... the only non-standard option I'm using when reading my database with Sequel is the read-only option. Would that cause this?
It turns out, from reading another post on here:
First, by default, multiple processes can have the same SQLite database open at the same time, and several read accesses can be satisfied in parallel.
In case of writing, a single write to the database locks the database for a short time; nothing, even reading, can access the database file at all.
Beginning with version 3.7.0, a new “Write Ahead Logging” (WAL) option is available, in which reading and writing can proceed concurrently.
By default, WAL is not enabled. To turn WAL on, refer to the SQLite documentation.
I currently have script A, which maintains a connection to the DB file and writes to it regularly, and script B, which is my Sinatra server that reads information from that DB file. I worked around this issue by connecting inside a block in my Sinatra script, so the connection is opened and closed around each read. I don't know how to turn on WAL with Sequel, though; one possibility is sketched below.
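One way that should work is issuing the PRAGMA yourself through Sequel (a sketch; the file name is hypothetical):
require 'sequel'

DB = Sequel.sqlite('data.db')
# Switch the journal mode to Write-Ahead Logging so reads are not
# blocked by the writer. WAL is stored in the database file itself,
# so it persists for future connections as well.
DB.run("PRAGMA journal_mode=WAL")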
I am using ActiveRecord with Sinatra and PostgreSQL. When the database connection drops (due to a temporary network failure or the Postgres server restarting), my app doesn't re-acquire the connection automatically. I have to restart the app in order to connect to Postgres again. I remember I didn't have this problem when I was using Rails in another project.
Do I need to put some configuration or code to tell ActiveRecord to reconnect to PostgreSQL automatically?
ActiveRecord::Base.verify_active_connections! was removed back in 2012 in Rails commit 9d1f1b1ea9e5d637984fda4f276db77ffd1dbdcb, so we can't use that method.
What follows is the result of a short investigation. I am no expert in Rails ActiveRecord, so take this with caution (but I hope it's helpful).
A comment in connection_pool.rb says:
# 1. Simply use ActiveRecord::Base.connection as with Active Record 2.1 and
# earlier (pre-connection-pooling). Eventually, when you're done with
# the connection(s) and wish it to be returned to the pool, you call
# ActiveRecord::Base.clear_active_connections!. This will be the
# default behavior for Active Record when used in conjunction with
# Action Pack's request handling cycle.
So maybe you (and I, since I have the same situation as you) have to return the connection to the pool.
To return the connection to the pool in Sinatra, as in Action Pack's request handling cycle, use ActiveRecord::ConnectionAdapters::ConnectionManagement.
And then, as stated in Rails commit 9d1f1b1ea9e5d637984fda4f276db77ffd1dbdcb, the pool now takes a different approach, as in this line: it always does checkout_and_verify when you use Base.connection, obeying the Action Pack lifecycle.
def connection
  # this is correctly done double-checked locking
  # (ThreadSafe::Cache's lookups have volatile semantics)
  @reserved_connections[current_connection_id] || synchronize do
    @reserved_connections[current_connection_id] ||= checkout
  end
end
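For a Sinatra app (Rails 4.x and earlier; this middleware was removed in Rails 5), wiring that in might look like the following sketch, where the route and the User model are hypothetical:
require 'sinatra'
require 'active_record'

# Check connections back in to the pool at the end of every request,
# mirroring Action Pack's request handling cycle.
use ActiveRecord::ConnectionAdapters::ConnectionManagement

get '/users/count' do
  # A connection is checked out (and verified) on first use in this request.
  User.count.to_s
end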
UPDATED 2019-01-11: As of Rails 4.2 I have to use
ActiveRecord::Base.clear_active_connections!
and ActiveRecord will reconnect on the next query. It also works from the Rails console, which is rather convenient.
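In Sinatra, an after filter is a convenient place for that call; a minimal sketch:
require 'sinatra'
require 'active_record'

after do
  # Return this thread's connection to the pool; ActiveRecord will
  # check out and verify a fresh one on the next query.
  ActiveRecord::Base.clear_active_connections!
end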
From https://www.new-bamboo.co.uk/blog/2010/04/11/automatic-reconnection-of-mysql-connections-in-active-record/
If you use Active Record outside Rails or at least outside controller actions you have to verify connections on your own before executing a database statement. This can be done with the following code:
ActiveRecord::Base.verify_active_connections!
Since Active Record uses one connection per thread, in multi-threaded applications this verification has to be executed for each thread separately.
The blog post is about reconnecting to MySQL, but I'm guessing it would be the same regardless of the engine used, as it's abstracted away. The blog also mentions a reconnect option in the configuration, but you'll have to find out whether that works for Postgres.
I have a Ruby file that I'm running on my Mac with OS X 10.9; it is a combination of Sinatra and Neography, both of which I have installed. When I use require 'sinatra' in the file everything is fine, but when I insert require 'neography' it gives me this error when trying to run the file:
/Users/AJ/.rvm/gems/ruby-2.1-head/gems/sinatra-1.4.4/lib/sinatra/base.rb:1488:in `start_server': undefined method `run' for HTTP:Module (NoMethodError)
from /Users/AJ/.rvm/gems/ruby-2.1-head/gems/sinatra-1.4.4/lib/sinatra/base.rb:1426:in `run!'
from /Users/AJ/.rvm/gems/ruby-2.1-head/gems/sinatra-1.4.4/lib/sinatra/main.rb:25:in `block in <module:Sinatra>'
What could be a possible reason for this error? Thanks in advance
Neography depends on httpclient, which in turn defines a module named HTTP.
When Sinatra tries to determine which server to use one of the options it tries is the net-http-server, whose Rack handler class is also named HTTP. This causes a name collision where Sinatra thinks the HTTP module in httpclient is the net-http-server and tries to run it as such, causing the error you see.
If you have another server installed, e.g. Thin, it will likely be detected before HTTP so you won't see this error, but you are probably better off explicitly setting the server to use. You can add something like
set :server, :thin
to your application file to specify Thin as your server (you'll need to install the thin gem first; you could also use WEBrick). You could also specify this on the command line if you wanted: ruby my_app.rb -s thin, but I think you'd be better off adding it to your code to avoid problems in the future.
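Putting it together, a minimal application file might look like this (the route is just for illustration):
require 'sinatra'
require 'neography'

# Pick Thin explicitly so Sinatra's server detection never reaches
# httpclient's HTTP module.
set :server, :thin

get '/' do
  'hello'
end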
I am not able to use memcached as the session store with Rails 4 using the dalli gem.
Here's what I have done.
I added dalli gem to Gemfile
gem 'dalli'
I added the following line in config/initializers/session_store.rb
Rails.application.config.session_store ActionDispatch::Session::CacheStore, :expire_after => 20.minutes
And I added the following line in development.rb
config.cache_store = :dalli_store
Now when I start my development server (Thin) without starting the memcached server, I can still log in as usual. Shouldn't I get some error like "no memcached server running" or something like that?
I am not sure whether Rails is using memcached as the session store or not.
Can someone tell me what I have missed in using memcached as the session store in the development environment?
For your information, I have been using Devise as the authentication gem.
Thanks
Yes, you should see an error like this in the console:
DalliError: No server available
However, you will still get the session cookie, since Rails will generate it and send it to the browser; it's just that Rails does not have anywhere to store the data associated with the cookie.
So, for a demo, try this:
Stop memcached. In some controller action do this:
def some_action
  puts session[:test]
  session[:test] = "hello"
end
You should not see "hello" in STDOUT.
Now, restart memcached and hit the action again (you might need to refresh the browser twice).
This time, you should see "hello".
If you again stop memcached, the "hello" will no longer be displayed.
I hope that makes it clear that the generation of the cookie (containing the session key)
and the storage of data against the value of the cookie (i.e. the session key) are two different things. And of course, ensure that memcached really is stopped.
As for being able to log in even with memcached stopped, check that you have cleared all cookies for the domain (localhost) and that you have restarted the Rails server after making the change. Also, clear out the tmp/cache directory.
PS. If you do not see the error DalliError: No server available, then memcached is probably still running somewhere. Try accessing memcached through Dalli from the Rails console, as below, and see if you are able to store/get data.
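For example (the key and value are arbitrary):
# In the Rails console, with config.cache_store = :dalli_store,
# these calls go through memcached; write logs DalliError and
# returns a falsy value if no server is reachable.
Rails.cache.write('ping', 'pong')
Rails.cache.read('ping')   # => "pong" only if memcached stored it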
PPS. If you see files being stored in tmp (like tmp/cache/D83/760/_session_id%3A4d65e5827354d0e1e8153a4664b3caa1), then that means that Rails is falling back to FileStore for storing the session data.
We're using the MongoHQ addon on Heroku, with the Mongoid 3.0 adapter. The addon plans come with a size limit, and Mongo will silently fail writing when the DB limit has been reached (unless configured for safe mode--in which case it'll throw exceptions).
I'm trying to query from within the app how close we are and send an alert if we've reached the limit. How can I run something like the db.stats() command but using Mongoid?
I've found out how to do this in Mongoid 3.x, which uses Moped as its driver rather than the Ruby driver from 10gen.
It was the author of Moped himself who answered a github issue raised on the matter.
Mongoid.default_session.command(collstats: 'collection_name')
This will return the same results as db.collection_name.stats() from the Mongo console (collstats is per-collection; the database-wide equivalent of db.stats() is the dbstats command). As an additional bonus, if the collection is capped, there'll be a flag in the return values indicating that.
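For the original goal of comparing usage against the plan's size limit, a rough sketch using the database-wide command (LIMIT_BYTES and send_size_alert are hypothetical):
# Database-wide stats, equivalent to db.stats() in the Mongo console.
stats = Mongoid.default_session.command(dbstats: 1)

LIMIT_BYTES = 512 * 1024 * 1024   # hypothetical plan limit
# 'dataSize' is reported in bytes.
send_size_alert if stats['dataSize'] > 0.9 * LIMIT_BYTES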
You can call the ".db" method on your object (e.g. a Document) and call ".stats" on it.
For example:
MyBlog.db.stats
For versions prior to Mongoid 3.0.0, Mongoid.master.stats should also work.