I've built a Xamarin.Forms app that uses a DLL from a third-party application to send SQL-like commands (it is not actually SQL!).
The problem is that the DLL only provides synchronous methods, so my app becomes "not responding". How can I write an asynchronous method that calls the synchronous one and waits for its result without hanging the UI?
I tried the following, but it seems to wait forever, as if the thread never finishes.
public async Task<ExecuteCommandResult> ExecuteMocaCommandAsync(String ps_command)
{
    return await Task<ExecuteCommandResult>.Run(() =>
    {
        return ExecuteMocaCommand(ps_command);
    });
}
and I'm calling it like this:
ExecuteCommandResult l_res = l_con.ExecuteMocaCommandAsync("list users where usr_id = '" + gs_UserName + "'").Result;
I'm clearly missing something and hope you can point me in the right direction.
Regards,
Yoz
The Task.Run looks good (though you can simplify it by changing Task<ExecuteCommandResult>.Run to just Task.Run). That's the proper way to push blocking work to a background thread in a UI application.
However, you can't use Result; that can deadlock. You'll need to call your method using await:
ExecuteCommandResult l_res = await l_con.ExecuteMocaCommandAsync("list users where usr_id = '" + gs_UserName + "'");
I have a flow in my application that groups requests and sends them in batches.
It is implemented with the Flux.window operator, and I have a question regarding this.
How does window behave when the application is shutting down?
Should I expect pushed events to be lost?
Or, if I define a timeout on the window, will the application wait until the window ends and then shut down?
Or maybe I can define the application's behaviour for such a situation myself.
Thanks for any suggestions.
You can use the Disposable object returned when subscribing to the Flux to check whether the Flux (and therefore the window) has completed or not.
int count = 0;
Disposable subscribe = Flux.just(1, 2, 3)
        .map(number -> {
            try {
                Thread.sleep(1000);            // simulate slow processing
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return number;
        })
        .subscribe();

// Poll until the subscription is disposed (i.e. the Flux has completed) or we give up.
while (!subscribe.isDisposed() && count < 100) {
    Thread.sleep(500);
    count++;
    System.out.println("Waiting......");
}
System.out.println("disposable:" + subscribe.isDisposed());
Our plugin is running slowly on the "Retrieve" message, so I placed a few timestamps in the code to determine where the bottleneck is. I realized there is a 7-second delay which happens intermittently between the end of the pre-operation stage and the start of the post-operation stage.
END PRE - 3/22/2018 11:57:55 AM
POST STAGE START - 3/22/2018 11:58:02 AM
protected virtual void RetrievePreOperation()
{
    var message = $"END PRE - {DateTime.Now}";
    PluginExecutionContext.SharedVariables.Add("message", message);
}

protected virtual void RetrievePostOperation()
{
    // Stop recursive calls
    if (PluginExecutionContext.Depth > 1) return;

    if (PluginExecutionContext.MessageName.ToLower() != Retrieve ||
        !PluginExecutionContext.InputParameters.Contains("Target") ||
        PluginExecutionContext.Stage != (int)PipelineStages.PostOperation)
        return;

    var entity = (Entity)PluginExecutionContext.OutputParameters["BusinessEntity"];
    string message = PluginExecutionContext.SharedVariables["message"].ToString();
    message += $"POST STAGE START - {DateTime.Now}";
}
Any ideas on how to minimize this delay would be appreciated. Thanks
If your plugin step is registered in Asynchronous execution mode, this delay depends entirely on the Async service load and the pipeline of waiting calls/jobs. You can switch it to Synchronous.
If it is registered in Synchronous mode and the delay is still there intermittently, it can depend on many things, such as which entity is involved, the query, and any complex logic in the plugin.
Consider a blocking function: this_thread::sleep_for(milliseconds(3000));
I'm trying to get the following behavior:
Trigger Blocking Function
|---------------------------------------------X
I want to trigger the blocking function and if it takes too long (more than two seconds), it should timeout.
I've done the following:
my_connection = observable<>::create<int>([](subscriber<int> s) {
        auto s2 = observable<>::just(1, observe_on_new_thread()) |
            subscribe<int>([&](auto x) {
                this_thread::sleep_for(milliseconds(3000));
                s.on_next(1);
            });
    }) |
    timeout(seconds(2), observe_on_new_thread());
I can't get this to work. For starters, I think s can't call on_next from a different thread.
So my question is, what is the correct reactive way of doing this? How can I wrap a blocking function in rxcpp and add a timeout to it?
Subsequently, I want to get an RX stream that behaves like this:
Trigger Cleanup
|------------------------X
(Delay) Trigger Cleanup
|-----------------X
Great question! The above is pretty close.
Here is an example of how to adapt blocking operations to rxcpp. It does libcurl polling to make http requests.
The following should do what you intended.
auto sharedThreads = observe_on_event_loop();

auto my_connection = observable<>::create<int>([](subscriber<int> s) {
        this_thread::sleep_for(milliseconds(3000));
        s.on_next(1);
        s.on_completed();
    }) |
    subscribe_on(observe_on_new_thread()) |
    //start_with(0) | // workaround bug in timeout
    timeout(seconds(2), sharedThreads);
    //skip(1); // workaround bug in timeout

my_connection.as_blocking().subscribe(
    [](int) {},
    [](exception_ptr ep) { cout << "timed out" << endl; });
subscribe_on will run the create on a dedicated thread, and thus create is allowed to block that thread.
timeout will run the timer on a different thread that can be shared with others, and transfer all the on_next/on_error/on_completed calls onto that same thread.
as_blocking will make sure that subscribe does not return until it has completed. This is only used to prevent main() from exiting - most often in test or example programs.
EDIT: added workaround for bug in timeout. At the moment, it does not schedule the first timeout until the first value arrives.
EDIT-2: timeout bug has been fixed, the workaround is not needed anymore.
I have developed a client-server application with Casablanca (cpprestsdk).
Every 5 minutes each client sends information from its task manager (processes, CPU usage, etc.) to the server via a POST request.
The project should be able to manage about 100 clients.
Every time the server receives a POST request, it opens an output file stream ("uploaded.txt"), extracts some initial info from the client (login, password), processes it, saves everything in append mode to a file named after the client (for example: client1.txt, client2.txt), and finally replies to the client with a status code.
This is basically my POST handle code from server side:
void Server::handle_post(http_request request)
{
    auto fileBuffer =
        std::make_shared<Concurrency::streams::basic_ostream<uint8_t>>();
    try
    {
        auto stream = concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary)
        .then([request, fileBuffer](pplx::task<Concurrency::streams::basic_ostream<unsigned char>> Previous_task)
        {
            *fileBuffer = Previous_task.get();
            try
            {
                request.body().read_to_end(fileBuffer->streambuf()).get();
            }
            catch (const exception&)
            {
                wcout << L"<exception>" << std::endl;
                //return pplx::task_from_result();
            }
            //Previous_task.get().close();
        })
        .then([=](pplx::task<void> Previous_task)
        {
            fileBuffer->close();
            //Previous_task.get();
        })
        .then([](task<void> previousTask)
        {
            // This continuation is run because it is value-based.
            try
            {
                // The call to task::get rethrows the exception.
                previousTask.get();
            }
            catch (const exception& e)
            {
                wcout << e.what() << endl;
            }
        });
        //stream.get().close();
    }
    catch (const exception& e)
    {
        wcout << e.what() << endl;
    }

    ManageClient();

    request.reply(status_codes::OK, U("Hello, World!"))
        .then([](pplx::task<void> t) { handle_error(t); });
    return;
}
Basically it works, but if I try to send info from two clients at the same time, sometimes it works and sometimes it doesn't.
Obviously the problem is when I open the "uploaded.txt" file stream.
Questions:
1) Is the Casablanca http_listener really multitasking? How many tasks is it able to handle?
2) I didn't find an example similar to mine in the documentation; the only one that comes close is the "Casalence120" project, but it uses the Concurrency::reader_writer_lock class (which seems to be a mutex-like approach).
What can I do in order to manage multiple POSTs?
3) Is it possible to read some client info before opening uploaded.txt?
That way I could open an output file stream named directly after the client.
4) If I lock access to the uploaded.txt file via a mutex, the server becomes sequential, and I don't think that is a good way to use cpprestsdk.
I'm still getting to grips with cpprestsdk, so any suggestions would be helpful.
Yes, the REST SDK processes every request on a different thread.
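If you want to see this for yourself, here is a minimal, self-contained check (an illustrative sketch, not part of your server code): it logs the id of the thread servicing each request, and under concurrent POSTs you will typically see different ids.

#include <cpprest/http_listener.h>
#include <iostream>
#include <string>
#include <thread>

int main()
{
    web::http::experimental::listener::http_listener listener(U("http://localhost:8080/test"));

    // Each incoming request is dispatched to a thread from the SDK's pool.
    listener.support(web::http::methods::POST, [](web::http::http_request request)
    {
        std::cout << "POST handled on thread " << std::this_thread::get_id() << std::endl;
        request.reply(web::http::status_codes::OK);
    });

    listener.open().wait();        // start listening
    std::string line;
    std::getline(std::cin, line);  // keep the process alive until Enter is pressed
    listener.close().wait();
}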
I confirm there are not many examples using the listener.
The official sample using the listener can be found here:
https://github.com/Microsoft/cpprestsdk/blob/master/Release/samples/CasaLens/casalens.cpp
I see you are working with VS. I would strongly suggest moving to VC++ 2015 or, better, VC++ 2017, because the most recent compiler supports coroutines.
Using co_await dramatically improves the readability of the code.
Essentially, every time you co_await a function, the compiler refactors the code into a "continuation", avoiding the penalty of blocking the thread that executes the function itself. This way you get rid of the ".then" statements, as sketched below.
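For illustration only, here is a rough sketch of how your handler could look with co_await instead of the ".then" chain. It assumes a toolchain where pplx::task is awaitable (e.g. VS2017 with the /await switch), and handle_post_async is a made-up name; this shows the shape of the code, not a drop-in replacement.

pplx::task<void> Server::handle_post_async(web::http::http_request request)
{
    try
    {
        // Awaiting the task suspends this coroutine instead of blocking a thread.
        auto fileStream = co_await concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary);

        // Stream the request body to the file, then close it.
        co_await request.body().read_to_end(fileStream.streambuf());
        co_await fileStream.close();
    }
    catch (const std::exception& e)
    {
        std::wcout << e.what() << std::endl;
    }

    ManageClient(); // same post-processing as in the original handler

    co_await request.reply(web::http::status_codes::OK, U("Hello, World!"));
}

Each co_await point corresponds to a .then continuation in the original code, and the error handling collapses into an ordinary try/catch.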
The file problem is a different story from the REST SDK. Accessing the file system concurrently is something you should test in a separate project. You can probably cache the first read and share the content with the other threads instead of accessing the disk every time. One possible direction, building on your question 3, is sketched below.
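This is an illustrative sketch only, not tested production code: it buffers the body in memory, extracts the client name, and appends to a per-client file, so concurrent requests never contend for a single "uploaded.txt". ExtractLogin and handle_post_per_client are made-up names standing in for your own parsing and handler.

#include <cpprest/http_listener.h>
#include <cpprest/asyncrt_utils.h>
#include <fstream>

// Hypothetical placeholder for your own parsing of login/password.
static utility::string_t ExtractLogin(const utility::string_t& body)
{
    auto pos = body.find(U('\n'));   // e.g. treat the first line as the login
    return pos == utility::string_t::npos ? body : body.substr(0, pos);
}

void handle_post_per_client(web::http::http_request request)
{
    request.extract_string(true)     // read the whole body as text
        .then([](utility::string_t body)
        {
            auto login = ExtractLogin(body);
            auto path = utility::conversions::to_utf8string(login) + ".txt";

            // Plain synchronous append inside the continuation; each client
            // gets its own file, so there is no shared stream to race on.
            std::ofstream file(path, std::ios::app);
            file << utility::conversions::to_utf8string(body) << std::endl;
        })
        .then([request](pplx::task<void> t)
        {
            try
            {
                t.get();             // rethrow any failure from the chain
                request.reply(web::http::status_codes::OK);
            }
            catch (const std::exception&)
            {
                request.reply(web::http::status_codes::InternalError);
            }
        });
}

Because each client writes to its own file, no locking is needed for the common case; only two simultaneous requests from the same client would still need coordination.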
I am launching an app and then trying to generate a key event, say "HOME". If I call a sleep function after generating the event, the APIs don't generate the events. Without the sleep it works properly.
pAc = AppManager::FindAppControlN(appId, operationId);
AppLog(GetErrorMessage(GetLastResult()));

if (pAc)
{
    AppLog("Launching Application");
    result r = pAc->Start(&URI, &MIME, null, null);
    AppLog(GetErrorMessage(r));
    delete pAc;
}

AppLog("Home Key Generated");
SystemUtil::GenerateKeyEvent(KEY_EVENT_TYPE_PRESSED, KEY_HOME);
AppLog(GetErrorMessage(GetLastResult()));
SystemUtil::GenerateKeyEvent(KEY_EVENT_TYPE_RELEASED, KEY_HOME);
AppLog(GetErrorMessage(GetLastResult()));

sleep(2);
All the AppLogs display E_SUCCESS, but the events are still not generated. Can somebody help with what is wrong here?