I am performing insert/update operations in MongoDB using Ruby. When an insert/update operation fails, I can see the errors in the returned result. But when there is a warning, I don't see it in the returned result.
I only see this
#<Mongo::Operation::Insert::Result:0x70353913223340 documents=[{"n"=>1, "ok"=>1.0}]>
However, checking my mongod logs, I see that a warning was generated when the insert happened:
2019-07-31T17:43:27.959+0530 W STORAGE [conn429] Document would fail validation collection:
I want to see this warning in the Ruby insert operation result.
I have tried setting the Mongo logger level to DEBUG:
Mongo::Logger.logger.level = Logger::DEBUG
But this does not help either.
You can't achieve that. But if you are getting validation warnings on insert, then perhaps you should set validationAction to "error" (which is the default unless you changed it).
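If you do want invalid documents rejected so the failure surfaces in Ruby, you can flip the action back with a collMod command. A minimal sketch; the connection string and the people collection are assumptions for illustration:

require 'mongo'

client = Mongo::Client.new('mongodb://localhost:27017/mydb') # connection details assumed

# Switch the collection's validation action from "warn" to "error".
client.database.command(collMod: 'people', validationAction: 'error')

begin
  client[:people].insert_one(age: 'not a number') # assume the validator requires a numeric age
rescue Mongo::Error::OperationFailure => e
  puts e.message # the validation failure now raises in Ruby instead of only logging a server-side warning
end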
I have a collection of Person, stored in a legacy MongoDB server (2.4) and accessed with the mongoid gem via the Ruby MongoDB driver.
If I perform a
Person.where(email: 'some.existing.email@server.tld').first
I get a result (let's assume I store the id in a variable called "the_very_same_id_obtained_above")
If I perform a
Person.find(the_very_same_id_obtained_above)
I get a
Mongoid::Errors::DocumentNotFound
exception
If I use the javascript syntax to perform the query, the result is found
Person.where("this._id == #{the_very_same_id_obtained_above}").first # this works!
I'm trying to migrate the data to a newer version; at the moment I'm mongorestore-ing onto Amazon DocumentDB (MongoDB 3.6 compatible) to run tests, and the issue remains.
One thing I noticed is that those object ids are peculiar:
5ce24b1169902e72c9739ff6 (this one works anyway)
59de48f53137ec054b000004 (this one requires the trick)
The run of zeroes toward the end of the id seems to be highly correlated with the problem (I have no idea of the reason).
That's the default:
# Raise an error when performing a #find and the document is not found.
# (default: true)
raise_not_found_error: true
Source: https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#anatomy-of-a-mongoid-config
If this doesn't answer your question, it's very likely the find method is overridden somewhere in your code!
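To illustrate the difference that setting controls (a minimal sketch, reusing the variable from the question; the environment name in the config fragment is a placeholder):

# With the default raise_not_found_error: true
Person.find(the_very_same_id_obtained_above)               # raises Mongoid::Errors::DocumentNotFound on no match
Person.where(_id: the_very_same_id_obtained_above).first   # returns nil on no match, never raises

# To make find return nil instead of raising, set this in mongoid.yml:
# production:
#   options:
#     raise_not_found_error: false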
For handling entities in Drupal I'm using Entity Metadata Wrappers (the "Drupal way").
It's really easy to start coding and see all the advantages it has... except when you get a fatal error and you are not clear where it comes from.
This is what the database log shows:
EntityMetadataWrapperException: Unknown data property field_whatever. at EntityStructureWrapper->getPropertyInfo() (line 335 of /var/www/html/sites/all/modules/entity/includes/entity.wrapper.inc).
Sadly, many times that "field_whatever" is "nid", "uid" or some other very common property, so its name is spread all over my code, which makes it difficult for me to get to the origin of the error.
I'm currently doing this:
Writing a tiny piece of code and then running it to see if something fails.
Using getPropertyInfo() when handling entities with "not so common" fields.
Losing hair.
What is worse is that sometimes the error does not appear while you are coding, but a week later. So it could be anywhere...
Is there any way of handling entity metadata wrapper errors better? Can I get better information in the database log and not just a line? A backtrace maybe?
Thanks.
Well, having the devel module active (just to see the nice krumo message), we can do something like this inside our module:
<?php

// Register a handler so any uncaught exception dumps its backtrace
// on the next page load, via devel's dpm()/krumo.
set_exception_handler('exception_with_trace');

function exception_with_trace($e) {
  dpm($e->getTrace());
}
That will display the backtrace of the exception thrown by the entity metadata wrapper on the next page load (some page in your site where everything is running fine).
You can also register the exception handler more selectively and more elegantly: only for some pages, only for users with some role, when some parameter is present in the URL, or when some state of your Drupal site is met (e.g. when a boolean persistent variable 'exception_with_trace' is true). Under certain conditions and with some control, you can even use it in production.
If the site does not work at all, you can include the handler in your settings.php file; but instead of printing the trace, you must write it to a file and inspect it in a different context (not Drupal, but some standalone PHP file).
If the traces are too long and are causing memory problems, getting the trace as a string is also possible. See http://php.net/manual/es/exception.gettraceasstring.php
Hope that helps.
I have a simple web application based on the Play Framework 2.3 (Scala), which currently uses sqlite3 for the database. Sometimes, but not always, I get exceptions caused by inserting rows into the DB:
java.sql.SQLException: statement is not executing
at org.sqlite.Stmt.checkOpen(Stmt.java:49) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.PrepStmt.executeQuery(PrepStmt.java:70) ~[sqlite-jdbc-3.7.2.jar:na]
...
The problem occurs in a few different contexts, all originating from SQL(statement).executeInsert()
For example:
val statementStr = "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (%d,'%s',%d,%d)".format(conditionId,participantId,clientTime,serverTime)
DB.withConnection( implicit c => {
val ps = SQL(statement)
val pKey = populatedStatement.executeInsert()
// ...
}
When an exception is not thrown, pKey contains an Option with the table's auto-incremented primary key. When an exception is thrown, the database's state indicates that the basic statement was executed, and if I take the logged SQL statement and try it by hand, it also executes without a problem.
Insert statements that aren't executed with "executeInsert" also work. At this point, I could just use ".execute()" and get the max primary key separately, but I'm concerned there might be some deeper problem I'm missing.
Some configuration details:
In application.conf:
db.default.driver=org.sqlite.JDBC
db.default.url="jdbc:sqlite:database/mySqliteDb.db"
My sqlite version is 3.7.13 2012-07-17
The JDBC driver I'm using is "org.xerial" % "sqlite-jdbc" % "3.7.2" (via build.sbt).
I ran into this same issue today with the latest driver, and using execute() was the closest thing to a solution I found.
For the sake of completeness, here is the comment on Stmt.java for getGeneratedKeys():
/**
* As SQLite's last_insert_rowid() function is DB-specific not statement
* specific, this function introduces a race condition if the same
* connection is used by two threads and both insert.
* @see java.sql.Statement#getGeneratedKeys()
*/
This most certainly confirms that this is a hard-to-fix bug in the driver, due to SQLite's design, that makes executeInsert() not thread-safe.
First, it would be better not to use format for passing parameters to the statement, but to use either SQL("INSERT ... {aParam}").on('aParam -> value) or SQL"INSERT ... $value" (with Anorm interpolation). Then, if the exception is still there, I would suggest you test the connection/statement in a plain vanilla standalone Java test app.
Consider the following stored procedure:
CREATE OR REPLACE FUNCTION get_supported_locales()
RETURNS TABLE(
code character varying(10)
) AS
...
And the following method that calls it:
def self.supported_locales
query = "SELECT code FROM get_supported_locales();"
res = ActiveRecord::Base.connection.execute(query)
res.values.flatten
end
I'm trying to write a test for this method but I'm getting some problems while mocking:
it "should list an intersection of locales available on the app and on last fm" do
res = mock(PG::Result)
res.should_receive(:values).and_return(['en', 'pt'])
ActiveRecord::Base.connection.stub(:execute).and_return(res)
Language.supported_locales.should =~ ['pt', 'en']
end
This test succeeds, but any test that runs after this one gives the following message:
WARNING: there is already a transaction in progress
Why does this happen? Am I doing the mocking wrong?
The database is Postgres 9.1.
Your test is running inside a database-level transaction. When the test completes, the transaction is rolled back so that none of the changes made in the test are actually saved to the database. In your case, this rollback can't happen because you have stubbed out the execute method on the ActiveRecord connection, so the ROLLBACK statement never reaches the database; the transaction is left open, and the next test's BEGIN triggers the warning.
You can disable transactions globally and switch to using DatabaseCleaner to enable/disable transactions for various tests. You could then set up to use transactions through DatabaseCleaner by default so your existing tests don't change, and then in this one test choose to disable transactions in favor of some other strategy (such as the null strategy since there is no cleaning to be done for this test).
This other SO post indicates you may be able to avoid disabling transactions globally and turn them off on a per-test basis as well; I have not tried that myself, though.
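A sketch of the setup described above, assuming RSpec and the database_cleaner gem; the :no_transaction tag is a made-up name for illustration:

RSpec.configure do |config|
  # Stop Rails from wrapping every example in its own transaction.
  config.use_transactional_fixtures = false

  config.around(:each) do |example|
    if example.metadata[:no_transaction]
      # Null case: the connection is stubbed, so there is nothing to clean.
      example.run
    else
      DatabaseCleaner.strategy = :transaction
      DatabaseCleaner.cleaning { example.run }
    end
  end
end

The stored-procedure spec then opts out with it "should list ...", no_transaction: true do ... end, and no longer leaves a transaction open.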
I'm using splunk-client to extract results from splunk. Here's the code:
query = "sourcetype=collection #{order_id}"
search = @splunk_client.search(query)
search.wait
The search is happening fine, and it seems like I'm doing everything according to the example (https://github.com/cbrito/splunk-client), but I get this error on the 'search.wait' line:
Undefined namespace prefix: //s:key[@name='isDone']
Any ideas what could be going wrong? Running these commands in irb works fine. Is there some sort of blocking issue?
There is currently very little error checking within the gem itself. The reason for the error is that wait looks for the status of the isDone key to change to true.
Since your credentials were not properly set up in the first place, the gem creates a search object with an invalid session. The search does not initially fail, because enough of a response comes back from Splunk that Nokogiri processes it into an object, just without a Splunk search sid.
In the future I should likely raise an exception if a proper sid is not returned to avoid confusion.
Source: I wrote the gem.
I found the issue -- the splunk client wasn't authenticating properly, so search was actually a broken SplunkJob object (with a nil username and authentication key). It's strange that no error was raised until the wait call, but upon inspecting the search object, one of the fields stated that the object was malformed.
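Until the gem raises on a missing sid, a defensive check along these lines can fail fast. This is only a sketch: the constructor arguments follow the gem's README, and the sid accessor on the returned SplunkJob is an assumption about the gem's internals:

splunk_client = SplunkClient.new('username', 'password', 'splunk-host') # credentials assumed
search = splunk_client.search("sourcetype=collection #{order_id}")

# If authentication failed, the job comes back without a sid and wait
# later dies with the confusing XPath error shown above.
raise 'No search sid returned; check Splunk credentials' if search.sid.to_s.empty?

search.wait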