What is the meaning of "Fb::Error: A transaction has already been started" - ruby

I have a Ruby application that crashes sometimes with this error message:
Fb::Error: A transaction has already been started
I'm now wondering what this message means. I searched a little and read that Firebird does not support nested transactions. Could the message be hinting at this? If not, what else could it mean?

This is not a Firebird error message. It is an error message in the driver you're using. Specifically here:
static void fb_connection_transaction_start(struct FbConnection *fb_connection, VALUE opt)
{
    char *tpb = 0;
    long tpb_len;

    if (fb_connection->transact) {
        rb_raise(rb_eFbError, "A transaction has been already started");
    }

    if (!NIL_P(opt)) {
        tpb = trans_parseopts(opt, &tpb_len);
    } else {
        tpb_len = 0;
        tpb = NULL;
    }

    isc_start_transaction(fb_connection->isc_status, &fb_connection->transact, 1, &fb_connection->db, tpb_len, tpb);
    xfree(tpb);
    fb_error_check(fb_connection->isc_status);
}
Without in-depth familiarity with this driver, I'm guessing the problem is that you're trying to start a transaction on a connection that already has an active transaction.
Firebird itself supports multiple parallel transactions on a single connection, and it supports nested transactions in the form of SQL standard savepoints, but it looks like the driver you're using doesn't support this.
The solution (or workaround) would seem to be to either not start a transaction when you already have an active transaction, or to first commit or roll back the existing transaction before starting a new one.
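For illustration, a minimal sketch of that workaround with the fb gem (the connection details are hypothetical, and method names such as transaction_started, commit and rollback are my reading of the driver's Connection API - verify them against your version):
require 'fb'

db = Fb::Database.new(
  :database => 'localhost:/var/db/app.fdb',   # hypothetical connection details
  :username => 'sysdba',
  :password => 'masterkey'
)
conn = db.connect

# Only one transaction can be active per connection with this driver,
# so finish the current one before starting another.
conn.commit if conn.transaction_started
conn.transaction

begin
  conn.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
  conn.commit
rescue Fb::Error
  conn.rollback
  raise
end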

Related

MassTransit - How to fault messages in a batch

I am trying to utilize the MassTransit batching technique to process multiple messages together, to reduce the individual queries to the database (read and write).
If there is an exception while processing one or more of the messages, the expectation is to fault only those messages and have the ability to process the rest.
This is a common scenario in my use case; what I am trying to establish here is a way to perform batch processing that caters for poisoned messages.
For example, if I have a batch size of 10 messages, 10 in the queue and 1 persistently fails, I still need a means of ensuring the other 9 can be processed successfully. It is fine if all 10 need to be returned to the queue and a subset re-consumed - but the poisoned message needs to be eliminated somehow. Does this requirement discount the use of batching?
I have tried the below, though it did not solve my use case:
Catching the exception and raising NotifyFaulted for that specific message.
Modifying the sample-twitch application to throw an exception, along the lines of the code below, based on the file https://github.com/MassTransit/Sample-Twitch/blob/master/src/Sample.Components/BatchConsumers/RoutingSlipBatchEventConsumer.cs
public Task Consume(ConsumeContext<Batch<RoutingSlipCompleted>> context)
{
    if (_logger.IsEnabled(LogLevel.Information))
    {
        _logger.Log(LogLevel.Information, "Routing Slips Completed: {TrackingNumbers}",
            string.Join(", ", context.Message.Select(x => x.Message.TrackingNumber)));
    }

    for (int i = 0; i < context.Message.Length; i++)
    {
        try
        {
            if (i % 2 != 0)
                throw new System.Exception("business error - message failed");
        }
        catch (System.Exception ex)
        {
            context.Message[i].NotifyFaulted(TimeSpan.Zero, "batch routing slip faulted", ex);
        }
    }

    return Task.CompletedTask;
}
I have dug into a few more threads that look similar to this issue; for reference:
Masstransit error handling for batch consumer
If you want to use batch, and have a message in that batch that cannot be processed, you should catch the exception and do something else with the poison message. You could write it someplace else, publish some type of event, or whatever else. But MassTransit does not allow you to partially complete/fault messages of a batch.
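Following that advice, a minimal sketch of the consumer above, reworked to route the poison message elsewhere instead of faulting it (the PoisonRoutingSlip event and the ProcessAsync helper are hypothetical, added only to illustrate the pattern):
public record PoisonRoutingSlip(Guid TrackingNumber, string Reason); // hypothetical event

public async Task Consume(ConsumeContext<Batch<RoutingSlipCompleted>> context)
{
    foreach (ConsumeContext<RoutingSlipCompleted> message in context.Message)
    {
        try
        {
            await ProcessAsync(message.Message); // your per-message business logic
        }
        catch (Exception ex)
        {
            // Don't fault the whole batch: hand the poison message off
            // somewhere else (an event, an error table, a dedicated queue)
            // and carry on with the remaining messages.
            await context.Publish(new PoisonRoutingSlip(
                message.Message.TrackingNumber, ex.Message));
        }
    }
}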

Jdbi transaction - multiple methods - Resources should be closed

Suppose I want to run two sql queries in a transaction I have code like the below:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = handle.createUpdate("some query")
        .executeAndReturnGeneratedKeys()
        .mapTo(Long.class)
        .findOne()
        .orElseThrow(() -> new IllegalStateException("No id"));

    handle.createUpdate("INSERT INTO SOMETABLE (id) " +
            "VALUES (:id, xxx);")
        .bind("id", id)
        .execute();
}));
Now, as the complexity grows, I want to extract each update into its own method:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = someQuery1(h);
    someQuery2(id, h);
}));
...with someQuery1 looking like:
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
        .executeAndReturnGeneratedKeys()
        .mapTo(Long.class)
        .findOne()
        .orElseThrow(() -> new IllegalStateException("No id"));
}
Now when I refactor to the latter I get a SonarQube blocker bug on the someQuery1 handle.createUpdate stating:
Resources should be closed
Connections, streams, files, and other classes that implement the Closeable interface or its super-interface, AutoCloseable, needs to be closed after use...
I was under the impression that, because I'm using jdbi.useHandle (and passing the same handle to the called methods), a callback would be used and the handle released immediately upon return. As per the jdbi docs:
Both withHandle and useHandle open a temporary handle, call your
callback, and immediately release the handle when your callback
returns.
Any help / suggestions appreciated.
TIA
SonarQube doesn't know any specifics of the JDBI implementation and simply triggers on an AutoCloseable/Closeable not being closed. Just suppress the Sonar issue and/or file a feature request with the SonarQube team to improve this behavior.
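For example, a minimal sketch of suppressing it at the method level (java:S2095 is the "Resources should be closed" rule key; a // NOSONAR comment on the offending line is an alternative):
// The Update returned by createUpdate is flagged as an unclosed resource,
// but its lifecycle is tied to the handle that jdbi.useHandle manages.
@SuppressWarnings("java:S2095")
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
        .executeAndReturnGeneratedKeys()
        .mapTo(Long.class)
        .findOne()
        .orElseThrow(() -> new IllegalStateException("No id"));
}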

io.vertx.oracleclient.OracleException: Error : 1000, Position : 0, Sql = SET TRANSACTION ISOLATION LEVEL READ COMMITTED

I have a question regarding Oracle cursors. I upgraded my project to the latest Vert.x version and have started to see some errors. I have one SQL verticle which is actively used, and it throws an exception after a while. You can see it below:
"io.vertx.oracleclient.OracleException: Error : 1000, Position : 0, Sql = SET TRANSACTION ISOLATION LEVEL READ COMMITTED, OriginalSql = SET TRANSACTION ISOLATION LEVEL READ COMMITTED, Error Msg = ORA-01000: maximum open cursors exceeded"
I am using the withTransaction method from vertx-sql-client:4.2.7, and I am not sure whether I should manage the cursors myself or Vert.x manages them. I expected that cursors were already managed by Vert.x. Has anybody seen such an error, or does anyone have any comment? Thanks in advance.
Also, any idea why Vert.x opens a cursor for the SQL "SET TRANSACTION ISOLATION LEVEL READ COMMITTED" and does not close it?
Example code :
override suspend fun processItem(callback: Callback) {
    pool.withTransaction { conn ->
        conn.query("select * from TABLE where ROWNUM='1'").execute()
    }
}

http_listener cpprestsdk how to handle multiple POST requests

I have developed a client-server application with Casablanca (cpprestsdk).
Every 5 minutes each client sends information from its task manager (processes, CPU usage, etc.) to the server via a POST method.
The project should be able to manage about 100 clients.
Every time the server receives a POST request, it opens an output file stream ("uploaded.txt"), extracts some initial info from the client (login, password), processes this info, saves all of it in append mode to a file named after the client (for example: client1.txt, client2.txt), and finally replies to the client with a status code.
This is basically my POST handler code on the server side:
void Server::handle_post(http_request request)
{
    auto fileBuffer =
        std::make_shared<Concurrency::streams::basic_ostream<uint8_t>>();
    try
    {
        auto stream = concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary)
        .then([request, fileBuffer](pplx::task<Concurrency::streams::basic_ostream<unsigned char>> Previous_task)
        {
            *fileBuffer = Previous_task.get();
            try
            {
                request.body().read_to_end(fileBuffer->streambuf()).get();
            }
            catch (const exception&)
            {
                wcout << L"<exception>" << std::endl;
                //return pplx::task_from_result();
            }
            //Previous_task.get().close();
        })
        .then([=](pplx::task<void> Previous_task)
        {
            fileBuffer->close();
            //Previous_task.get();
        })
        .then([](task<void> previousTask)
        {
            // This continuation is run because it is value-based.
            try
            {
                // The call to task::get rethrows the exception.
                previousTask.get();
            }
            catch (const exception& e)
            {
                wcout << e.what() << endl;
            }
        });
        //stream.get().close();
    }
    catch (const exception& e)
    {
        wcout << e.what() << endl;
    }

    ManageClient();
    request.reply(status_codes::OK, U("Hello, World!")).then([](pplx::task<void> t) { handle_error(t); });
    return;
}
Basically it works, but if I try to send info from two clients at the same time, sometimes it works and sometimes it doesn't.
Obviously the problem arises when I open the "uploaded.txt" file stream.
Questions:
1) Is Casablanca's http_listener really multitasking? How many tasks is it able to handle?
2) I didn't find an example in the documentation similar to mine; the only one that comes close is the "Casalence120" project, but it uses the Concurrency::reader_writer_lock class (which seems to be a mutex approach). What can I do in order to manage multiple POSTs?
3) Is it possible to read some client info before starting to open uploaded.txt? I could open an output file stream directly with the name of the client.
4) If I lock access to the uploaded.txt file via a mutex, the server becomes sequential, and I think this is not a good way to use cpprestsdk.
I'm still getting up to speed with cpprestsdk, so any suggestions would be helpful.
Yes, the REST SDK processes every request on a different thread.
I confirm there are not many examples using the listener.
The official sample using the listener can be found here:
https://github.com/Microsoft/cpprestsdk/blob/master/Release/samples/CasaLens/casalens.cpp
I see you are working with VS. I would strongly suggest moving to VC++ 2015 or, better, VC++ 2017, because the most recent compilers support co-routines.
Using co_await dramatically simplifies the readability of the code.
Essentially, every time you co_await a function, the compiler refactors the code into a "continuation", avoiding the penalty of freezing the threads executing the function itself. This way, you get rid of the ".then" statements.
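To give an idea, a rough sketch of the handler above rewritten with co_await (assuming the /await compiler option and a cpprestsdk build whose pplx tasks are awaitable; note that the handler's return type becomes a task so it can be a coroutine, and error handling is trimmed):
pplx::task<void> Server::handle_post(http_request request)
{
    try
    {
        // Each co_await resumes here when the async operation completes,
        // without blocking a thread in the meantime.
        auto stream = co_await concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary);

        co_await request.body().read_to_end(stream.streambuf());
        co_await stream.close();
    }
    catch (const std::exception& e)
    {
        std::wcout << e.what() << std::endl;
    }

    co_await request.reply(status_codes::OK, U("Hello, World!"));
}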
The file problem is a different story from the REST SDK. Accessing the file system concurrently is something that you should test in a separate project. You can probably cache the data in memory and share it between threads instead of accessing the disk every time.
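A sketch of that caching idea: keep per-client content in memory behind a lock and flush it to disk separately, instead of re-opening the file inside every handler invocation (the class and member names here are illustrative, not part of cpprestsdk):
#include <mutex>
#include <string>
#include <unordered_map>

class ClientStore
{
public:
    void append(const std::string& client, const std::string& data)
    {
        // Only the in-memory map update is serialized; disk I/O can be
        // done later by a single writer, so handlers never block on files.
        std::lock_guard<std::mutex> guard(m_lock);
        m_buffers[client] += data;
    }

private:
    std::mutex m_lock;
    std::unordered_map<std::string, std::string> m_buffers;
};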

Why does h2 ignore slf4j messages on the first connection when LOG is set?

See sample code & output below (with Slf4j/Logback on stdout). I can't find any bug reports on this. I'm using H2 version 1.3.176 (last stable), in-memory mode. It doesn't seem to matter what value is set for LOG (0, 1 or 2); it just has to be set.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class H2TraceTest {
    public static void main(String[] args) throws SQLException {
        System.out.println("Query connection 1");
        Connection myConn = DriverManager.getConnection("jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4;LOG=2");
        myConn.createStatement().execute("SELECT 1");

        System.out.println("Query connection 2");
        DriverManager.getConnection("jdbc:h2:mem:tracetest").createStatement().execute("SELECT 1");

        System.out.println("Query connection 1 again");
        myConn.createStatement().execute("SELECT 1");

        System.out.println("End");
    }
}
Output:
Query connection 1
Query connection 2
16:17:02.955 INFO h2database - jdbc[3]
/**/Connection conn2 = DriverManager.getConnection("jdbc:h2:mem:tracetest", "", "");
16:17:02.958 DEBUG h2database - jdbc[3]
/**/Statement stat2 = conn2.createStatement();
16:17:02.959 DEBUG h2database - jdbc[3]
/**/stat2.execute("SELECT 1");
16:17:02.959 INFO h2database - jdbc[3]
/*SQL #:1*/SELECT 1;
Query connection 1 again
End
I know that the H2 documentation says about TRACE_LEVEL_FILE: it affects all connections. But that's not (fully) correct:
Every connection keeps a lazy reference to the logging system. If you change that with the special marker TRACE_LEVEL_FILE=4, that reference isn't changed for all existing connections - only for those which do their first logging after the change.
So if you use the connection string "jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4", everything is as expected, because your session writes no logging message before changing the logging system. Unfortunately, the LOG=2 in jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4;LOG=2 is evaluated first, because both parameters are written into and read from an unordered Map. And because LOG=2 generates a log statement, the reference to the log adapter (=4) is never applied to the current session, only to the next one.
What you can do:
Use only "jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4" - LOG=2 is the default anyway. If you need any other log mode you can use connection.createStatement().executeUpdate("SET LOG 1")
Add some default parameters to the connection string until the TRACE_LEVEL_FILE parameter is the first parameter in the map (not really reliable, as the order may depend on the VM)
Discard the first connection at once
File a bug report and wait for the fix (or fix it yourself), as I think this is somehow a bug
I know this is an old question, but here is a reliable way to do it (i.e. you can ensure that TRACE_LEVEL_FILE is set to 4 first):
String url = "jdbc:h2:mem:tracetest;INIT=SET TRACE_LEVEL_FILE=4\\;SET DB_CLOSE_DELAY=-1/* for example, i.e. do other stuff */";
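Put together, a minimal sketch of that workaround in context (the INIT statements run on the session before anything else logs; SET DB_CLOSE_DELAY=-1 is just the example's second chained statement, and the class name is illustrative):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class H2TraceWorkaround {
    public static void main(String[] args) throws SQLException {
        // INIT executes before the first log message is written, so the
        // TRACE_LEVEL_FILE=4 adapter is installed in time for this session.
        String url = "jdbc:h2:mem:tracetest;INIT=SET TRACE_LEVEL_FILE=4\\;SET DB_CLOSE_DELAY=-1";
        Connection conn = DriverManager.getConnection(url);
        conn.createStatement().execute("SELECT 1"); // now routed through slf4j
    }
}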
