RSpec - How to mock a stored procedure - ruby

Consider the following stored procedure:
CREATE OR REPLACE FUNCTION get_supported_locales()
RETURNS TABLE(
code character varying(10)
) AS
...
And the following method that calls it:
def self.supported_locales
query = "SELECT code FROM get_supported_locales();"
res = ActiveRecord::Base.connection.execute(query)
res.values.flatten
end
I'm trying to write a test for this method, but I'm running into problems with the mocking:
it "should list an intersection of locales available on the app and on last fm" do
res = mock(PG::Result)
res.should_receive(:values).and_return(['en', 'pt'])
ActiveRecord::Base.connection.stub(:execute).and_return(res)
Language.supported_locales.should =~ ['pt', 'en']
end
This test succeeds, but any test that runs after this one gives the following message:
WARNING: there is already a transaction in progress
Why does this happen? Am I doing the mocking wrong?
The database is postgres 9.1.

Your test is running using database level transactions. When the test completes, the transaction is rolled back so that none of the changes made in the test are actually saved to the database. In your case, this rollback can't happen because you have stubbed out the execute method on the ActiveRecord connection.
You can disable transactions globally and switch to using DatabaseCleaner to enable/disable transactions for various tests. You could then set up to use transactions through DatabaseCleaner by default so your existing tests don't change, and then in this one test choose to disable transactions in favor of some other strategy (such as the null strategy since there is no cleaning to be done for this test).
This other SO post indicates that you may be able to avoid disabling transactions globally and turn them off on a per-test basis as well, though I have not tried that myself.
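One way the per-test setup described above might be wired together is sketched below. This assumes the database_cleaner gem; the :no_transaction metadata tag is a name invented here for illustration, not a DatabaseCleaner built-in:

```ruby
# spec/spec_helper.rb (sketch; assumes the database_cleaner gem)
RSpec.configure do |config|
  # Let DatabaseCleaner manage transactions instead of Rails.
  config.use_transactional_fixtures = false

  config.before(:each) do |example|
    # Specs tagged :no_transaction (a tag invented for this example) fall
    # back to the null strategy -- there is nothing to roll back when the
    # connection's execute method is stubbed out.
    DatabaseCleaner.strategy = example.metadata[:no_transaction] ? nil : :transaction
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end
```

The problematic spec would then be declared as `it "...", :no_transaction do ... end`, while all other specs keep their transactional behavior unchanged.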

Related

Apache Geode - Creating region on DUnit Based Test Server/Remote Server with same code from client

I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not a server" error. When this is fixed, I receive:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Another question: currently the code checks whether the region exists, and if not, it calls the function. In both cases it also tries to create the clientRegion. Is this the correct approach?
Region<?, ?> cache = instance.getRegion(name);
if (cache == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList<String> argList = new ArrayList<>();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance
    .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
    .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not a server" error. When this is fixed, I receive
Once the Function is registered on server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the Serialization filter error. As an example, FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
Another question: currently the code checks whether the region exists, and if not, it calls the function. In both cases it also tries to create the clientRegion. Is this the correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a Region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.

Get mongodb warnings in ruby when performing insert/update

I am performing insert/update operations in MongoDB using Ruby. When the insert/update operation fails, I can see the errors in the resulting cursor. But when there is a warning, I don't see it in the resulting cursor.
I only see this
#<Mongo::Operation::Insert::Result:0x70353913223340 documents=[{"n"=>1, "ok"=>1.0}]>
However, checking my mongo logs, I see that a warning was generated when the insert happened
2019-07-31T17:43:27.959+0530 W STORAGE [conn429] Document would fail validation collection:
I want to see this warning in the Ruby insert operation result.
I have tried setting the Mongo logger level to Debug
Mongo::Logger.logger.level = Logger::DEBUG
But this does not help either.
You can't achieve that. If you are getting validation warnings on insert, then perhaps you should set validationAction to "error" (which is the default unless you changed it).
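For an existing collection, the validation action can be switched with MongoDB's collMod command. A sketch using the mongo Ruby driver follows; the connection string and the 'users' collection name are assumptions for illustration:

```ruby
# Sketch: change validationAction from "warn" to "error" on an existing
# collection, so invalid documents fail the insert instead of only logging
# a server-side warning. Assumes the `mongo` gem and a local mongod;
# the database and collection names are hypothetical.
require 'mongo'

client = Mongo::Client.new('mongodb://127.0.0.1:27017/mydb')
client.database.command(
  collMod: 'users',
  validationAction: 'error'
)
```

With validationAction set to "error", the failed insert surfaces as a write error in the operation result rather than as a log-only warning.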

How to detect watch/exec failure when using Jedis?

This is the sequence of operations I am trying to do:
WATCH somekey
MULTI
...
SET somekey somevalue
EXEC
I do this via the execute(SessionCallback) method of RedisTemplate (pseudocode):
l = template.execute({
ops.watch("somekey");
ops.multi()
ops.opsForValue().set("somekey", "somevalue")
return ops.exec()
})
My problem is that when using jedis, l is not null but an empty list, and thus exec failures are indistinguishable from exec successes as there are no operations inside multi that return results.
This seems to be confirmed by the unit tests here: https://github.com/spring-projects/spring-data-redis/blob/1.8.4.RELEASE/src/test/java/org/springframework/data/redis/core/RedisTemplateTests.java#L740 where to test failure this is done:
if (redisTemplate.getConnectionFactory() instanceof JedisConnectionFactory) {
assertThat(results, is(empty()));
} else {
assertNull(results);
}
Compare to the testUnwatch test just below it that tests successful exec after unwatch and also expects an empty list (results.isEmpty()).
How do I distinguish between these two cases when using Jedis?
TL;DR
You can't detect a transaction rollback using Jedis.
Explanation
Jedis returns, in any case, a List object in the currently available versions (2.8.2, 2.9.0). This change was part of code cleanup to prevent Jedis from returning null on exec(…) if the Redis transaction was rolled back.
Jedis has since rolled back that change, but there has been no Jedis release for about a year now.
If detecting transaction rollbacks is essential to your requirements, try a different Redis client.

inserting row into sqlite3 database from play 2.3/anorm: exception being thrown non-deterministically

I have a simple web application based on the Play Framework 2.3 (scala), which currently uses sqlite3 for the database. I'm sometimes, but not always, getting exceptions caused by inserting rows into the DB:
java.sql.SQLException: statement is not executing
at org.sqlite.Stmt.checkOpen(Stmt.java:49) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.PrepStmt.executeQuery(PrepStmt.java:70) ~[sqlite-jdbc-3.7.2.jar:na]
...
The problem occurs in a few different contexts, all originating from SQL(statement).executeInsert()
For example:
val statementStr = "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (%d,'%s',%d,%d)".format(conditionId, participantId, clientTime, serverTime)
DB.withConnection( implicit c => {
  val populatedStatement = SQL(statementStr)
  val pKey = populatedStatement.executeInsert()
  // ...
})
When an exception is not thrown, pKey contains an option with the table's auto-incremented primary key. When an exception is thrown, the database's state indicate that the basic statement was executed, and if I take the logged SQL statement and try it by hand, it also executes without a problem.
Insert statements that aren't executed with "executeInsert" also work. At this point, I could just use ".execute()" and get the max primary key separately, but I'm concerned there might be some deeper problem I'm missing.
Some configuration details:
In application.conf:
db.default.driver=org.sqlite.JDBC
db.default.url="jdbc:sqlite:database/mySqliteDb.db"
My sqlite version is 3.7.13 2012-07-17
The JDBC driver I'm using is "org.xerial" % "sqlite-jdbc" % "3.7.2" (via build.sbt).
I ran into this same issue today with the latest driver, and using execute() was the closest thing to a solution I found.
For the sake of completeness, the comment in Stmt.java for getGeneratedKeys():
/**
* As SQLite's last_insert_rowid() function is DB-specific not statement
* specific, this function introduces a race condition if the same
* connection is used by two threads and both insert.
* @see java.sql.Statement#getGeneratedKeys()
*/
This most certainly confirms that it is a hard-to-fix bug in the driver, due to SQLite's design, which makes executeInsert() not thread-safe.
First, it would be better not to use format for passing parameters to the statement, but instead either SQL("INSERT ... {aParam}").on('aParam -> value) or SQL"INSERT ... $value" (with Anorm interpolation). Then, if the exception is still there, I would suggest testing the connection/statement in a plain vanilla standalone Java test app.

Autorerun failed rspec tests

I have some code in my tests which works with an external service. This service is not very stable, so sometimes it crashes for no reason. But in 80% of runs it works well.
So I want a way to automatically rerun all failed rspec examples several times (for example, 2 or 3 times).
Is there any way to do it?
Many people would say that your tests should never hit external services, and this is one of the reasons why: your tests should not fail because some external service is down.
TL;DR: use mocks and stubs to replace those external service calls.
Instead of re-running failed specs, couldn't you just run the method accessing the service a set number of times and run the expectation on the logical OR of the results?
So instead of:
it "returns expected value for some args" do
unstable_external_service(<some args>).should == <expected return value>
end
just do something like this:
def run_x_times(times, args)
  return nil if times == 0
  unstable_external_service(args) || run_x_times(times - 1, args)
end
it "returns expected value for some args" do
run_x_times(10, <some args>).should == <expected return value>
end
You can use the same wrapper method throughout your tests anytime you access the service. I'm assuming here that your service returns nil on a failure, but if not you could change this to fit your particular case -- you get the general idea.
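A variation that also copes with the service raising an exception rather than returning nil: a small generic retry helper (a sketch, not tied to any particular service) that retries a block up to a given number of times.

```ruby
# Calls the block up to `times` times and returns the first truthy result.
# A falsy return value or a raised StandardError both count as a failure;
# returns nil when every attempt fails.
def retry_up_to(times)
  times.times do
    begin
      result = yield
      return result if result
    rescue StandardError
      # swallow the error and try again
    end
  end
  nil
end
```

In a spec this would be used as `retry_up_to(3) { unstable_external_service(args) }.should == expected`.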
