Suppose I want to run two SQL queries in a transaction. I have code like the below:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
    handle.createUpdate("INSERT INTO SOMETABLE (id) " +
            "VALUES (:id , xxx);")
            .bind("id", id)
            .execute();
}));
Now, as the complexity grows, I want to extract each update into its own method:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = someQuery1(h);
    someQuery2(id, h);
}));
...with someQuery1 looking like:
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}
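...and someQuery2 extracted the same way (a sketch mirroring the inline insert above):
private void someQuery2(Long id, Handle handle) {
    // Same insert as before, now bound to the generated id passed in.
    handle.createUpdate("INSERT INTO SOMETABLE (id) " +
            "VALUES (:id , xxx);")
            .bind("id", id)
            .execute();
}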
Now when I refactor to the latter I get a SonarQube blocker bug on the someQuery1 handle.createUpdate stating:
Resources should be closed
Connections, streams, files, and other classes that implement the Closeable interface or its super-interface, AutoCloseable, need to be closed after use…
I was under the impression that, because I'm using jdbi.useHandle (and passing the same handle to the called methods), a callback would be used and the handle would be released immediately upon return. As per the jdbi docs:
Both withHandle and useHandle open a temporary handle, call your
callback, and immediately release the handle when your callback
returns.
Any help / suggestions appreciated.
TIA
SonarQube doesn't know anything about the specifics of the JDBI implementation; it simply triggers on an AutoCloseable/Closeable that is never closed. Just suppress the Sonar issue and/or file a feature request with the SonarQube team to improve this behavior.
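For example, a targeted suppression on the extracted method. A minimal sketch, assuming the rule key for "Resources should be closed" is java:S2095 (SonarQube honors @SuppressWarnings annotations carrying rule keys):
// Suppress only this rule on this method; the Handle's lifecycle is owned
// by jdbi.useHandle(...)/useTransaction(...), not by someQuery1 itself.
@SuppressWarnings("java:S2095")
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}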
I have a Ruby application that crashes sometimes with this error message:
Fb::Error: A transaction has already been started)
I'm now wondering what this message means. I searched a little and read that Firebird does not support nested transactions. Could the message be hinting at this? If not, what else could it mean?
This is not a Firebird error message. It is an error message in the driver you're using. Specifically here:
static void fb_connection_transaction_start(struct FbConnection *fb_connection, VALUE opt)
{
    char *tpb = 0;
    long tpb_len;

    if (fb_connection->transact) {
        rb_raise(rb_eFbError, "A transaction has been already started");
    }

    if (!NIL_P(opt)) {
        tpb = trans_parseopts(opt, &tpb_len);
    } else {
        tpb_len = 0;
        tpb = NULL;
    }

    isc_start_transaction(fb_connection->isc_status, &fb_connection->transact, 1, &fb_connection->db, tpb_len, tpb);
    xfree(tpb);
    fb_error_check(fb_connection->isc_status);
}
Without in-depth familiarity with this driver, I'm guessing the problem is that you're trying to start a transaction on a connection that already has an active transaction.
Firebird itself supports multiple parallel transactions on a single connection, and it supports nested transactions in the form of SQL standard savepoints, but it looks like the driver you're using doesn't support this.
The solution (or workaround) would seem to be to either not start a transaction when you already have an active one, or to first commit or roll back the existing transaction before starting a new one.
I have developed a client-server application with Casablanca (cpprestsdk).
Every 5 minutes each client sends information from its task manager (processes, CPU usage, etc.) to the server via a POST method.
The project should be able to manage about 100 clients.
Every time the server receives a POST request, it opens an output file stream ("uploaded.txt"), extracts some initial info from the client (login, password), processes that info, saves all of it to a file named after the client (for example client1.txt, client2.txt) in append mode, and finally replies to the client with a status code.
This is basically my POST handle code from server side:
void Server::handle_post(http_request request)
{
    auto fileBuffer =
        std::make_shared<Concurrency::streams::basic_ostream<uint8_t>>();
    try
    {
        auto stream = concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary).then([request, fileBuffer](pplx::task<Concurrency::streams::basic_ostream<unsigned char>> Previous_task)
        {
            *fileBuffer = Previous_task.get();
            try
            {
                request.body().read_to_end(fileBuffer->streambuf()).get();
            }
            catch (const exception&)
            {
                wcout << L"<exception>" << std::endl;
                //return pplx::task_from_result();
            }
            //Previous_task.get().close();
        }).then([=](pplx::task<void> Previous_task)
        {
            fileBuffer->close();
            //Previous_task.get();
        }).then([](task<void> previousTask)
        {
            // This continuation is run because it is value-based.
            try
            {
                // The call to task::get rethrows the exception.
                previousTask.get();
            }
            catch (const exception& e)
            {
                wcout << e.what() << endl;
            }
        });
        //stream.get().close();
    }
    catch (const exception& e)
    {
        wcout << e.what() << endl;
    }

    ManageClient();

    request.reply(status_codes::OK, U("Hello, World!")).then([](pplx::task<void> t) { handle_error(t); });
    return;
}
Basically it works, but if I try to send info from two clients at the same time, sometimes it works and sometimes it doesn't.
Obviously the problem is where I open the "uploaded.txt" file stream.
Questions:
1) Is the Casablanca http_listener really multitasking? How many tasks is it able to handle?
2) I didn't find an example similar to mine in the documentation; the closest is the "Casalence120" project, but it uses the Concurrency::reader_writer_lock class (which seems to be a mutex-like mechanism).
What can I do in order to manage multiple POSTs?
3) Is it possible to read some client info before opening uploaded.txt?
I could then open an output file stream named directly after the client.
4) If I lock access to the uploaded.txt file via a mutex, the server becomes sequential, and I think that's not a good way to use cpprestsdk.
I'm still getting to grips with cpprestsdk, so any suggestions would be helpful.
Yes, the REST SDK processes every request on a different thread.
I confirm there are not many examples using the listener.
The official sample using the listener can be found here:
https://github.com/Microsoft/cpprestsdk/blob/master/Release/samples/CasaLens/casalens.cpp
I see you are working with VS. I would strongly suggest moving to VC++ 2015 or, better, VC++ 2017, because the most recent compilers support coroutines.
Using co_await dramatically improves the readability of the code.
Essentially, every time you co_await a function, the compiler refactors the code into a "continuation", avoiding the penalty of freezing the threads executing the function itself. This way, you get rid of the .then statements.
The file problem is a different story from the REST SDK. Accessing the file system concurrently is something that you should test in a separate project. You can probably cache the first read and share the content with the other threads instead of accessing the disk every time.
Instead of using a logger or database server I'd like to append information to one file from possibly many verticle instances.
There are versions of methods for writing asynchronously to a file.
Can I assume that Vert.x handles the synchronisation between the writes, so that they don't interfere, when using those versions of the methods marked as "async"?
There seems to be a rule that one can rely on Vert.x providing all isolation between concurrent processing out of the box. But is that true in the case of file write access?
Could you please include a code snippet in the answer that shows how to open and write to one file from many verticle instances with the finest possible granularity, e.g. for logging requests?
I wouldn't recommend writing to a single file with many different "writers". Regarding concurrent logging I would stick to the Single Writer principle.
Create a Verticle which subscribes to the Event Bus and listens for messages to be logged. Let's call this Verticle Logger; it listens on the system.logger address.
EventBus eb = vertx.eventBus();
eb.consumer("system.logger", message -> {
// write to file
});
Verticles that want to log something send a message to the Logger Verticle:
eventBus.send("system.logger", "foobar");
Appending to an existing file works something like this (didn't test):
vertx.fileSystem().open("file.log", new OpenOptions().setAppend(true), result -> {
    if (result.succeeded()) {
        Buffer buff = Buffer.buffer(message.body()); // message from the consumer above
        AsyncFile file = result.result();
        file.write(buff, ar -> {
            if (ar.succeeded()) {
                System.out.println("done");
            } else {
                System.err.println("write failed: " + ar.cause());
            }
        });
    } else {
        System.err.println("open file failed " + result.cause());
    }
});
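Putting the pieces together, a single-writer Logger verticle could look like this. A sketch (untested, like the snippet above): it opens the log file once at startup in append mode, and all writes are funneled through the one event-bus consumer, so no further synchronisation is needed:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.file.AsyncFile;
import io.vertx.core.file.OpenOptions;

public class LoggerVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.fileSystem().open("file.log", new OpenOptions().setAppend(true), result -> {
            if (result.succeeded()) {
                AsyncFile file = result.result();
                // Single consumer == single writer: every other verticle logs
                // by sending to "system.logger"; only this verticle ever
                // touches the file.
                vertx.eventBus().<String>consumer("system.logger", message ->
                        file.write(Buffer.buffer(message.body() + "\n")));
            } else {
                System.err.println("open file failed " + result.cause());
            }
        });
    }
}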
http://boost-log.sourceforge.net/libs/log/doc/html/log/detailed/sink_backends.html
On this page there is sample code to initialise the Boost.Log Windows event log backend,
but when I run it, it fails with a memory error at the first line.
void init_logging()
{
    // Create an event log sink
    boost::shared_ptr< sink_t > sink(new sink_t());

    sink->set_formatter
    (
        expr::format("%1%: [%2%] - %3%")
            % expr::attr< unsigned int >("LineID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::smessage
    );

    // We'll have to map our custom levels to the event log event types
    sinks::event_log::custom_event_type_mapping< severity_level > mapping("Severity");
    mapping[normal] = sinks::event_log::info;
    mapping[warning] = sinks::event_log::warning;
    mapping[error] = sinks::event_log::error;
    sink->locked_backend()->set_event_type_mapper(mapping);

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}
Here is where it fails to create the sink_t object:
boost::shared_ptr< sink_t > sink(new sink_t());
Any idea what the problem is and how I can solve it?
Also, if you know any other source from which I can learn to use Boost.Log event logging, please share it.
No answer yet...
But I have found a solution in a blog post by Timo Geusch:
http://www.lonecpluspluscoder.com/2011/01/boost-log-preventing-the-unhandled-exception-in-windows-7-when-attempting-to-log-to-the-event-log/
The reason for this problem was the registry key under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog\ that the application needs access to: it has to be present (if you're not an administrator, you don't have the privileges to create it), and the user who runs the application needs to be able to both read from and write to it.
I was asked to enhance my assertions to provide some better log messaging within my JMeter test plan that tests APIs using basic CRUD methodology. The test plan is being checked into Jenkins and being run automatically. When something goes wrong, the level of messaging is not adequate for the support team.
Within the first thread group, I have an HTTP Request to create a new record within the database based on the payload being passed in. Under this request, I have a BeanShell Assertion as follows:
if (ResponseCode.equals("200") == true)
{
SampleResult.setResponseOK();
}
I'm now trying to enhance this to account for 409, and 500 responses.
I've attempted the following, but it does not seem to work:
if (ResponseCode.equals("200") == true)
{
SampleResult.setResponseOK();
}
else if (ResponseCode.equals("409") == true)
{
FailureMessage = "Creation of a new CAE record failed: Attempting to create a duplicate record.";
}
else (ResponseCode.equals("500") == true)
{
FailureMessage = "Creation of a new CAE record failed: Unable to connect to server";
}
Additionally, if the ResponseCode is not 200, then I need to drop out of the entire thread group and go to the next thread group.
I've read several questions on this site, as well as How to Use BeanShell: JMeter's Favorite Built-in Component and How to Use JMeter Assertions in Three Easy Steps, but I'm still confused. Not being a developer and still new to JMeter, I'm in need of guidance.
Any and all help is much appreciated.
Selecting 'Stop Thread' in the Thread Group would help you stop the thread group in case of any error, assuming you have other thread groups to execute consecutively; if not, the test will stop.
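If you prefer to keep that decision inside the assertion itself, SampleResult also exposes setStopThread. A sketch:
// Programmatic alternative to the Thread Group's 'Stop Thread' setting:
// stop only the current thread when the response code is unexpected.
if (!ResponseCode.equals("200")) {
    SampleResult.setStopThread(true);
}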
In the BeanShell Assertion, include:
else if (ResponseCode.equals("409") == true)
{
Failure = true;
FailureMessage = "Creation of a new CAE record failed: Attempting to create a duplicate record.";
}
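Putting it together, the whole assertion could look like the sketch below. Note two things: your original attempt is missing the if in the last else branch, and it is Failure = true (not FailureMessage alone) that actually marks the sampler as failed:
if (ResponseCode.equals("200")) {
    SampleResult.setResponseOK();
}
else if (ResponseCode.equals("409")) {
    Failure = true;
    FailureMessage = "Creation of a new CAE record failed: Attempting to create a duplicate record.";
}
else if (ResponseCode.equals("500")) { // 'else if', not a bare 'else'
    Failure = true;
    FailureMessage = "Creation of a new CAE record failed: Unable to connect to server";
}
else {
    Failure = true;
    FailureMessage = "Creation of a new CAE record failed: Unexpected response code " + ResponseCode;
}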