Ruby - SQLite3 - set pragmas from code

I am using Ruby SQLite3 (1.13.11) on macOS (Ruby 2.0.0-p247) to create a few databases for my application. I need to set some pragmas, but I am not sure I am doing the right thing. This is what I do to set PRAGMA synchronous = OFF:
db = SQLite3::Database.new("test.db")
db.synchronous      # => 2
db.synchronous = 0
db.synchronous      # => 0
This seems to work, but when I open my test.db with DB Browser for SQLite, synchronous is still set to Full.
I also tried
db.execute("PRAGMA synchronous = OFF")
with the same result.
Is synchronous associated with the connection? Is this the case for all PRAGMAs?

Some PRAGMAs are associated with the current database connection, so they are not persisted between sessions. synchronous is one of them; journal_mode is another (except journal_mode = WAL, which is persistent).
For a list of all PRAGMAs, refer to the SQLite PRAGMA documentation.
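For illustration, a minimal sketch (using the sqlite3 gem as in the question; open_db is just a hypothetical helper name) that re-applies connection-scoped pragmas every time the database is opened:
require "sqlite3"

# Connection-scoped PRAGMAs are not persisted, so re-apply them on every open.
def open_db(path)
  db = SQLite3::Database.new(path)
  db.execute("PRAGMA synchronous = OFF")  # affects this connection only
  db
end

db = open_db("test.db")
db.synchronous  # => 0 here; a fresh connection starts back at the default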

Related

ActiveRecord with Ruby script, without Rails, connection pool timeout

I'm using ActiveRecord in a Ruby script, without Rails - just running the Ruby script. The script kicks off threads which access the database. Eventually, I get:
could not obtain a database connection within 20.000 seconds
database configuration:
The pool is set to 10 connections. The timeout is set to 20 seconds.
I tried without using connection pool calls directly, but I'm not clear on how that is supposed to be managed. So I put cxn = ActiveRecord::Base.connection_pool.checkout and ActiveRecord::Base.connection_pool.checkin(cxn) around the database-access portions of the code. I still get the error.
I put some debug output in the script, and I can see the checkout/checkin calls happening. There are 7 successful checkout/checkin pairs and 13 total checkouts, so 6 are left open.
I'm also seeing:
undefined method `run_callbacks'
when the timeout occurs.
Thank you,
Jerome
You need to explicitly release connections back to the ActiveRecord connection pool. Either call ActiveRecord::Base.clear_active_connections! explicitly, or define a helper method which does the following:
def with_connection
  ActiveRecord::Base.connection_pool.with_connection do
    yield
  end
end
which you would use as follows:
def my_method
  with_connection do
    User.where(:id => 1)
  end
end
which should release your connection when the block exits.
Be warned that ActiveRecord uses the current thread ID to manage connections, so if you are spawning threads, each thread needs to release its connections once it is done with them. Your threads are most likely not clearing connections properly, or you are spawning more threads than you have connections available in your connection pool.
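For illustration, a minimal sketch of thread usage under those constraints (assuming the with_connection helper above; User.where is just a placeholder query):
threads = 10.times.map do |i|
  Thread.new do
    begin
      with_connection do
        User.where(:id => i).to_a  # placeholder query, runs on a pooled connection
      end
    ensure
      # Each thread returns anything it still holds to the pool before exiting.
      ActiveRecord::Base.clear_active_connections!
    end
  end
end
threads.each(&:join)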

How to keep a persistent connection to SQL Server using Ruby Sequel and Tiny_TDS while in a loop

I have a Ruby script that needs to run continually on the server. I've daemonized it using the daemon gem, and in my script I have it running in an infinite loop, since the daemon gem handles starting and stopping of the process that kicks off my script. In my script, I start out by setting up my DB instance using the Sequel gem and tiny_tds, like so:
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase, user: MSSQLUser, password: MSSQLPassword)
Then I have a loop do block that is my infinite loop. Inside it, I test whether I have a connection using DB.test_connection, and then I query the DB every second or so to check whether there is new content, using a query such as:
DB['SELECT * FROM dbo.[MyTable]'].all do |row|
  # My logic here.
  # As part of my logic I test whether I need to delete this row, and if so I use:
  DB.run('DELETE FROM dbo.[MyTable] WHERE some condition')
end
Then at the end of my logic, just before I loop again, I do:
sleep 1
DB.disconnect
All of this works great for about an hour to an hour and a half, with everything checking the table, running the logic, deleting rows, etc. Then it dies with this error message: TinyTds::Error: Adaptive Server connection timed out
My question: why is that happening? Do I need to restructure my code in some way? Why doesn't DB.test_connection do what it is advertised to do? The documentation says it checks for a connection in the connection pool, uses it if it finds one, and creates a new one otherwise.
Any help would be much appreciated
DB.test_connection just acquires a connection from the connection pool; it doesn't check that the connection is still valid (it must have been valid at one point or it wouldn't be in the pool). There's no way to know that a connection is still valid without actually sending a query. You can use the connection_validator extension that ships with Sequel if you want to do that automatically.
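A minimal sketch of enabling it, assuming the same connection parameters as in the question (-1 forces validation on every checkout, the safest but slowest setting):
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase, user: MSSQLUser, password: MSSQLPassword)
DB.extension(:connection_validator)
# Validate connections on every checkout instead of only after an hour idle.
DB.pool.connection_validation_timeout = -1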
If you are loading Sequel before forking, you need to make sure you call DB.disconnect before forking, otherwise you can end up with multiple forked processes sharing the same connection, which can cause many different issues.
I finally ended up just putting a rescue statement in there that catches this and re-runs my line of code to create the DB instance. Yes, it puts a warning in my log about that constant already being set, but I guess I could just make it not a constant and that would go away. Anyway, it appears to be working now, and the times it does time out, I'm recovering gracefully. I just wish I could have figured out why it was/is disconnecting like it is.
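For reference, a hedged sketch of that rescue-and-reconnect workaround (Sequel normally wraps driver errors in Sequel::DatabaseError, so rescuing that should also catch the TinyTds::Error above):
begin
  DB['SELECT * FROM dbo.[MyTable]'].all do |row|
    # logic here
  end
rescue Sequel::DatabaseError => e
  warn "Connection lost (#{e.message}); reconnecting"
  # Reassigning the constant triggers the "already initialized" warning
  # mentioned above; an ordinary variable would avoid it.
  DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase, user: MSSQLUser, password: MSSQLPassword)
  retry
end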

Clear Oracle session state

A database connection on Oracle can have session state that persists for the lifetime of the connection, e.g. in the form of package variables.
Is there a way of flushing/clearing all that state between calls during a connection, without killing the connection and establishing a new one?
I.e. consider a package variable first set in the package initialization and later modified within some procedure in that package: how do you "reset" the package so that multiple calls to the procedure from one connection always lead to a re-initialization of the package?
In general: how do you "reset" any session state between statements executed by a client on that connection?
dbms_session.reset_package is the closest I can think of. See the Oracle documentation for details.
Other than dbms_session.reset_package (proposed in René Nyffenegger's answer), which resets all packages, you'll have to write your own package procedure to reset the state of a single package. The procedure would just set all package variables to NULL (or whatever is appropriate).
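Untested sketch: from a client, resetting all package state is just a matter of executing an anonymous PL/SQL block over the existing connection (shown here with ruby-oci8, as used elsewhere on this page):
require 'oci8'

conn = OCI8.new('scott', 'tiger')
# Wipe all package state for this session; packages re-initialize on next use.
conn.exec('BEGIN dbms_session.reset_package; END;')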

jdbc batch performance

I'm batching updates with JDBC:
ps = con.prepareStatement("");
ps.addBatch();
ps.executeBatch();
but in the background it seems that the Postgres driver sends the queries to the database one at a time:
org.postgresql.core.v3.QueryExecutorImpl:398
for (int i = 0; i < queries.length; ++i)
{
    V3Query query = (V3Query)queries[i];
    V3ParameterList parameters = (V3ParameterList)parameterLists[i];
    if (parameters == null)
        parameters = SimpleQuery.NO_PARAMETERS;

    sendQuery(query, parameters, maxRows, fetchSize, flags, trackingHandler);

    if (trackingHandler.hasErrors())
        break;
}
Is there a way to make it send, say, 1000 at a time to speed things up?
AFAIK there is no server-side batching in the FE/BE protocol, so PgJDBC can't use it. Update: well, I was wrong. PgJDBC (accurate as of 9.3) does send batches of queries to the server if it doesn't need to fetch generated keys. It just queues a bunch of queries up in the send buffer without syncing up with the server after each individual query.
See:
Issue #15: Enable batching when returning generated keys
Issue #195: PgJDBC does not pipeline batches that return generated keys
Even when generated keys are requested the extended query protocol is used to ensure that the query text doesn't need to be sent every time, just the parameters.
Frankly, JDBC batching isn't a great solution in any case. It's easy to use for the app developer, but pretty sub-optimal for performance as the server still has to execute every statement individually - though not parse and plan them individually so long as you use prepared statements.
If autocommit is on, performance will be absolutely pathetic because each statement triggers a commit. Even with autocommit off running lots of little statements won't be particularly fast even if you could eliminate the round-trip delays.
A better solution for lots of simple UPDATEs can be to:
COPY new data into a TEMPORARY or UNLOGGED table; and
Use UPDATE ... FROM to UPDATE with a JOIN against the copied table
For COPY, see the PgJDBC docs and the COPY documentation in the server docs.
You'll often find it's possible to tweak things so your app doesn't have to send all those individual UPDATEs at all.

Ruby OCI8 not logging off connection consequences

What are the consequences (if any) of not calling the conn.logoff() method after the following script, when connecting to an Oracle database using the Ruby OCI8 library?
conn = OCI8.new('scott', 'tiger')
num_rows = conn.exec('SELECT * FROM emp') do |r|
  puts r.join(',')
end
puts num_rows.to_s + ' rows were processed.'
The reason I'm asking is that we're experiencing slowdowns with other applications that connect to this same Oracle DB.
Thanks
I would imagine that when the Ruby process exits, the session will be killed automatically.
You could check by querying v$session to see if the ruby process is still connected to Oracle after Ruby exits.
Given only the information in your question, it's really impossible to say what could be causing the slowdowns - there are so many variables.
If you don't call conn.logoff(), the connection stays alive until the Ruby process exits, even after the OCI8 object has been garbage-collected.
The problem is fixed in ruby-oci8 2.1, which has not been released yet, though.
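Until then, a minimal sketch that always releases the session, even if the query raises:
require 'oci8'

conn = OCI8.new('scott', 'tiger')
begin
  num_rows = conn.exec('SELECT * FROM emp') do |r|
    puts r.join(',')
  end
  puts "#{num_rows} rows were processed."
ensure
  conn.logoff  # frees the server-side session immediately, not at process exit
end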
