Unable to close sqlite3 database with ruby

begin
  db = SQLite3::Database.open "dbfile.db"
  dbins = db.prepare("INSERT INTO table(a,b,c) VALUES (?,?,?);")
  dbins.execute(vala, valb, valc)
rescue SQLite3::Exception => e
  puts "Something went wrong: #{e.message}"
ensure
  db.close if db
end
That is the code I use to open an SQLite3 database and write data to it.
My problem is that this code always gives me the following error:
unable to close due to unfinalized statements or unfinished backups
If I remove the db.close if db part it works, but after several hours of the script running I hit the "too many open files" error. Please do not advise me to raise my file descriptor limit; that would only be a temporary fix for a greater problem.
I do not want the script to just keep the database open forever; whenever an event happens, I want it to open the DB, write the data, and close it again, just how it's expected to work.
Note that this answer doesn't help, for the reason given in the comment, which is correct.
What do I have to do to "finish" the statement so I can close the database? I have tried to just add a sleep(5) before closing the database, but that had no effect.
I've found this question suggesting finalize on the statement, but that seems to be relevant only to the C/C++ interface, not to Ruby's sqlite3 gem.

Reading the source code of the Ruby gem helped, specifically the following block in statement.c:
/* call-seq: stmt.close
 *
 * Closes the statement by finalizing the underlying statement
 * handle. The statement must not be used after being closed.
 */
static VALUE sqlite3_rb_close(VALUE self)
{
    sqlite3StmtRubyPtr ctx;
    Data_Get_Struct(self, sqlite3StmtRuby, ctx);
    REQUIRE_OPEN_STMT(ctx);
    sqlite3_finalize(ctx->st);
    ctx->st = NULL;
    return self;
}
So calling .close on the statement (e.g. dbins.close after the .execute in my code) finalizes the statement and lets me close the database file.
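Putting it together, a minimal sketch of the corrected flow (table, vala, valb, and valc are placeholders from the question):
begin
  db = SQLite3::Database.open "dbfile.db"
  dbins = db.prepare("INSERT INTO table(a,b,c) VALUES (?,?,?);")
  dbins.execute(vala, valb, valc)
rescue SQLite3::Exception => e
  puts "Something went wrong: #{e.message}"
ensure
  dbins.close if dbins # finalizes the underlying statement handle
  db.close if db       # no longer fails with "unfinalized statements"
end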

Related

How can I intercept ActiveRecord::Base.logger output?

Context: I am constructing a really long sql string and executing it with ActiveRecord. When it fails, it logs out the error (which includes the original query) and takes up 5 pages of screen space. Because I am already catching the exception, I don't need to be notified there was an error, and it just clutters the logging. All attempts to temporarily turn off the logger or to hijack IO streams have been futile.
Problem: How do I prevent logging of that one exception?
Example: (I know a lot of this code is redundant, but my point is that even all together it doesn't work)
really_long_query = "select * from posts where ..."
ActiveRecord::Base.logger.level = 10
$stderr = $stdout = $stdin = STDOUT = STDERR = STDIN = IO.new(IO.sysopen('/dev/null', 'w+'))
silence_stream(STDOUT) {
  ActiveRecord::Base.connection.execute really_long_query # TURN LOGGING OFF FOR THIS LINE
} # => still logs the exception to the console, despite all the above code
My conclusions: based on the above results, I assume that ActiveRecord must be using
- a different logger, AND
- a stream not covered by the redirection on the third line of the example
ActiveRecord exceptions are like normal exceptions; you can rescue them, so try:
begin
  sql = ActiveRecord::Base.connection.execute really_long_query
rescue => e
  # do whatever you want with the error
end
I'm not able to try it right now, but I hope it helps :)
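If the goal is also to keep that one failure out of the log entirely, another thing worth trying is to swap ActiveRecord::Base.logger for a null logger around the call and restore it afterwards. A minimal sketch, assuming the adapter consults ActiveRecord::Base.logger at call time (some versions cache their own logger reference at connect time, which could explain why the level change above had no effect):
require 'logger'

old_logger = ActiveRecord::Base.logger
ActiveRecord::Base.logger = Logger.new('/dev/null') # discard all output
begin
  ActiveRecord::Base.connection.execute(really_long_query)
rescue ActiveRecord::StatementInvalid
  # already handled elsewhere; stay silent
ensure
  ActiveRecord::Base.logger = old_logger # always restore the real logger
end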

inserting row into sqlite3 database from play 2.3/anorm: exception being thrown non-deterministically

I have a simple web application based on the Play Framework 2.3 (scala), which currently uses sqlite3 for the database. I'm sometimes, but not always, getting exceptions caused by inserting rows into the DB:
java.sql.SQLException: statement is not executing
    at org.sqlite.Stmt.checkOpen(Stmt.java:49) ~[sqlite-jdbc-3.7.2.jar:na]
    at org.sqlite.PrepStmt.executeQuery(PrepStmt.java:70) ~[sqlite-jdbc-3.7.2.jar:na]
    ...
The problem occurs in a few different contexts, all originating from SQL(statement).executeInsert()
For example:
val statementStr = "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (%d,'%s',%d,%d)".format(conditionId, participantId, clientTime, serverTime)
DB.withConnection( implicit c => {
  val ps = SQL(statementStr)
  val pKey = ps.executeInsert()
  // ...
})
When an exception is not thrown, pKey contains an Option with the table's auto-incremented primary key. When an exception is thrown, the database's state indicates that the statement was executed anyway, and if I take the logged SQL statement and run it by hand, it also executes without a problem.
Insert statements that aren't executed with "executeInsert" also work. At this point, I could just use ".execute()" and get the max primary key separately, but I'm concerned there might be some deeper problem I'm missing.
Some configuration details:
In application.conf:
db.default.driver=org.sqlite.JDBC
db.default.url="jdbc:sqlite:database/mySqliteDb.db"
My sqlite version is 3.7.13 2012-07-17
The JDBC driver I'm using is "org.xerial" % "sqlite-jdbc" % "3.7.2" (via build.sbt).
I ran into this same issue today with the latest driver, and using execute() was the closest thing to a solution I found.
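For what it's worth, a hedged sketch of that fallback with Anorm on Play 2.3 (identifiers reused from the question); note that the driver comment quoted below means even this is racy if two threads insert on the same connection:
DB.withConnection { implicit c =>
  SQL("insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values ({cid}, {pid}, {ct}, {st})")
    .on('cid -> conditionId, 'pid -> participantId, 'ct -> clientTime, 'st -> serverTime)
    .execute()
  // fetch the key in a separate query instead of via executeInsert()
  val pKey = SQL("select last_insert_rowid()").as(anorm.SqlParser.scalar[Long].single)
}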
For the sake of completeness, here is the comment in Stmt.java for getGeneratedKeys():
/**
 * As SQLite's last_insert_rowid() function is DB-specific not statement
 * specific, this function introduces a race condition if the same
 * connection is used by two threads and both insert.
 * @see java.sql.Statement#getGeneratedKeys()
 */
This pretty much confirms that it's a hard-to-fix bug in the driver, rooted in SQLite's design, that makes executeInsert() non-thread-safe.
First, it would be better not to use format to pass parameters to the statement; use either SQL("INSERT ... {aParam}").on('aParam -> value) or SQL"INSERT ... $value" (Anorm interpolation). Then, if the exception is still there, I would suggest testing the connection/statement in a plain vanilla standalone Java test app.
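A hedged sketch of the interpolated form (Anorm on Play 2.3, reusing the identifiers from the question); each $value is bound as a parameter rather than formatted into the string:
DB.withConnection { implicit c =>
  val pKey: Option[Long] =
    SQL"""insert into session_record
          (condition_id, participant_id, client_timestamp, server_timestamp)
          values ($conditionId, $participantId, $clientTime, $serverTime)"""
      .executeInsert()
  // ...
}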

AWS S3 NoSuchBucket Exception Not Caught in Rescue Clause

I'm trying to get a bucket in Ruby using the AWS SDK, and trying to catch a NoSuchBucket error. Problem is, my rescue block is not catching the error, and so my app crashes. Here is the relevant code:
begin
  b = s3.buckets[bucket_name]
rescue AWS::S3::Errors::NoSuchBucket
  puts "Invalid bucket name."
  exit 1
end
and the error message is:
C:/Ruby193/lib/ruby/gems/1.9.1/gems/aws-sdk-1.5.6/lib/aws/core/client.rb:277:in
`return_or_raise': The specified bucket does not exist (AWS::S3::Errors::NoSuchBucket)
Am I just making a stupid beginner syntax error, or is there a bug in the AWS code that's not actually throwing the error? I've also tried catching all errors and still no dice.
b = s3.buckets[bucket_name]
Doesn't actually make any requests and won't ever throw exceptions like NoSuchBucket.
It just returns a bucket object that knows its name. A request only happens when you actually try to do something with the bucket (list its contents, add a file to it), and it is at that point that NoSuchBucket is raised. This is outside of your begin block, so your rescue doesn't handle it. If you need to rescue that exception, put your begin/rescue around the places where you actually use the bucket.
If you are just trying to validate that the bucket actually exists, you could do something like
s3.buckets[bucket_name].exists?
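For example (aws-sdk v1, as in the question; the object listing is just a stand-in for whatever the bucket is actually used for):
bucket = s3.buckets[bucket_name] # no request is made here
begin
  bucket.objects.each { |obj| puts obj.key } # the request happens here
rescue AWS::S3::Errors::NoSuchBucket
  puts "Invalid bucket name."
  exit 1
end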

Why is this plain simple LINQ expression closing the app but not throwing an exception

So I have this LINQ expression that simply tries to retrieve an Entity from the database, but when it runs, the app just closes and no exception is thrown. I put a try/catch around it to see if I could see the exception, but the debugger simply stops at the LINQ expression and never reaches the catch or anything after it, for example the folderId assignment; like I said, it just closes the program. Any ideas?
Item folder = null;
try
{
    folder = entities.Items.Where(i => i.Path + "\\" == folderPath).FirstOrDefault();
}
catch (Exception)
{
    Console.WriteLine("What is it??!!");
}
int folderId = folder == null ? 0 : folder.ID;
folderPath is a valid string. Already checked, and it's what I expect it to be.
What would you expect? Do you do anything after you have folder?
FirstOrDefault() either returns a default value or the first element.
If you don't do anything with it afterwards, nothing will happen. An application which runs to its end terminates automatically.
Regarding your update: are you sure you are debugging the latest source files? Try a rebuild and check whether the compiled files and the debug files are up to date.

Help troubleshoot a consistently repeatable mod_perl2 / $SIG{__DIE__} bug

This is mod_perl2 on Apache 2.2, ActiveState Perl 5.10 for win32.
I override $SIG{__DIE__} and turn on DBI's RaiseError flag, which, AFAICT from the docs, should call my override when a database call fails. It almost always does, except in one case, and I can't understand why.
My script has an our $page variable, and being mod_perl2, I can get at this from the override like so:
use Carp::Trace;
my $full_trace = Carp::Trace::trace;
$full_trace =~ m/^(ModPerl::ROOT::ModPerl::Registry::.*::)handler .*$/m;
my $page;
if (defined $1)
{
    eval '$page = $' . $1 . 'page';
    if (defined $page)
    {
        $json = 1 if defined $$page{json_response};
        if (defined $$page{dbh})
        {
            my $errno = $$page{dbh}->state;
            if ($errno ~~ $$page{error_handling}{allowed})
            {
                # allowed to let it go--no report, expected possible user error at some level that couldn't be caught sooner (usually db level)
                my $errmsg = $$page{error_handling}{translation_map}{$errno};
                if (defined $errmsg)
                {
                    ...
This works fine. Now, within that $page, I have an array ref of 'allowed' error values that I want to do something different with when they come back from the DB. When the DB throws one of these errors, I want to translate it into a user-friendly message, $r->print that in JSON, and stop execution (behaviour A). For some reason, it instead returns control to the script (behaviour B).
Here's the main part of my script:
{
    $$page{error_handling}{allowed} = ['22007'];
    $$page{json_response}{result} = $page->one_liner("select 'aa'::timestamp");
    $$page{json_response}{test} = $$page{error_handling}{state};
}
$page->make_json; # just JSONifies $$page{json_response} and prints it
If I comment out the first line, I get a normal error (handling something unexpected) (behaviour C), which is what I expect, because I haven't added the error that's occurring to the list of allowed errors.
What's really strange: if I cut that first line and paste it into my $SIG{__DIE__} override, it works. The JSON response is overridden, printed, and execution stops before {test} is assigned (behaviour A). Stranger still, I can set {allowed} to any set of numbers, and as long as it contains '22007', I get behaviour B; if it doesn't, I get behaviour C.
Even more strange, I can fill my override with anything (warnings, calls to CORE::die, etc.--as long as it compiles) and I still get behaviour B, even though the override no longer contains any of the code that would make that possible. I also don't get any of the expected output from the calls to warn and CORE::die, just silence in the logs, so I can't even attempt to manually trace the path of execution through my override.
I have restarted Apache 2.2 between every script save. I have even moved the override into the script file itself, out of the module where it normally lives, commented out that entire module file, and restarted.
If I take out that first line, or take '22007' out of it, I can warn and die and otherwise manually debug all I like, and everything works as expected. What is it about '22007' that it never outputs anything different despite server resets? There are no references to '22007' anywhere else in the entire project, except the translation map, and I can delete it from that file entirely and restart and the result is no different. It's behaving as if it has cached my override from earlier in the day and will never ever forget. It's not a browser cache issue either, because I can add random query strings and the results are no different.
This is the strangest and most frustrating mod_perl2 experience I've ever had, and I've run out of ideas. Does anybody have any hints? The only thing I can think of is that it's a caching problem, yet I've restarted the service countless times.
Since it was the end of the day I thought I would try fully restarting the server computer, and it still didn't change anything. I even, before restarting the server, changed the only line where {state} is assigned to this:
$$page{error_handling}{state} = 'my face'; # $errno;
And yet, the output afterwards had {test} as '22007', which is what it should be only if I had left = $errno intact.
Even if it was, say, the reverse proxy it goes through doing the caching, this situation doesn't make sense to me, since the request can be different. After a full server restart, how can it still be assigning a value that is no longer in the code, i.e., how can it be using my old $SIG{__DIE__} override after a full restart, when it no longer exists in any file?
Update: I also tried changing the allowed errors to '42601' and changing the db call to 'select', which produces that error code, but did not add it to the translation map. It still gives me behaviour B, setting {state} to '42601', so it's not specific to '22007'. Any error code that is put into {allowed}, if that error actually occurs, it's running the old version of the override. Cause an error that's not in {allowed} and it runs the current version. But how does it know whether the current error is in {allowed}, or that that even means anything, before getting to the override? (Because the override is the only place where {allowed} is grepped for the current error.)
This is my temporary workaround, but I would like to solve the mystery and not have to add the extra line everywhere I have a DB call with allowed errors.
package MyModule::ErrorLogging;

sub InsanityWorkaround # duplicates part of $SIG{__DIE__} override for allowed errors
{
    my ($page) = @_;
    my $r = $$page{r};
    my $errno = $$page{error_handling}{state};
    if ($errno ~~ $$page{error_handling}{allowed})
    {
        # allowed to let it go--no report, expected possible user error at some level that couldn't be caught sooner (usually db level)
        my $errmsg = $$page{error_handling}{translation_map}{$errno};
        if (defined $errmsg)
        {
            use JSON::XS qw(encode_json);
            $$page{json_response} =
            {
                error => $errmsg,
            };
            my $response = encode_json($$page{json_response});
            $r->content_type("application/json");
            $r->print($response);
            exit(0);
        }
        else
        {
            return 0; # get back to the script, where {state} can be checked and output customized even further
        }
    }
    return;
}
Then my script becomes:
{
    $$page{error_handling}{allowed} = ['22007']; # don't be bothered by the invalid timestamp error
    $$page{json_response}{result} = $page->one_liner("select 'aa'::timestamp");
    MyModule::ErrorLogging::InsanityWorkaround($page);
}
This is giving behaviour A.
