WaitHandle.WaitAll throws a NotSupportedException when executed on Windows Phone (7.1). Is there an alternative to this method?
Here's my scenario: I am firing off a bunch of HTTP web requests, and I want to wait for all of them to return before I continue. I want to make sure that if the user ends up waiting more than X seconds (in total) for all of these requests to return, the operation is aborted.
You can try with a global lock.
Start a new thread, and use a lock to block the caller thread with the timeout value you want.
In the new thread, loop over the handles and call WaitOne on each. When the loop is done, signal the lock.
Something like:
private WaitHandle[] handles;

private void MainMethod()
{
    // Start a bunch of requests and store the waithandles in the this.handles array
    // ...
    var allDone = new ManualResetEvent(false); // a signal, not actually a mutex
    var waitingThread = new Thread(this.WaitLoop);
    waitingThread.Start(allDone);
    bool completed = allDone.WaitOne(2000); // Wait with timeout; false means we timed out
}

private void WaitLoop(object state)
{
    var allDone = (ManualResetEvent)state;
    for (int i = 0; i < handles.Length; i++)
    {
        handles[i].WaitOne(); // blocks until this request's handle is signaled
    }
    allDone.Set(); // every handle fired; release the caller
}
Another version using Thread.Join instead of a shared lock:
private void MainMethod()
{
    WaitHandle[] handles;
    // Start a bunch of requests and store the waithandles in the handles array
    // ...
    var waitingThread = new Thread(this.WaitLoop);
    waitingThread.Start(handles);
    bool completed = waitingThread.Join(2000); // Wait with timeout; false means we timed out
}

private void WaitLoop(object state)
{
    var handles = (WaitHandle[])state;
    for (int i = 0; i < handles.Length; i++)
    {
        handles[i].WaitOne();
    }
}
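If you'd rather avoid the extra thread, you can also wait on the handles one at a time while shrinking the remaining time, so the total wait never exceeds your X seconds. A minimal sketch, assuming Stopwatch and WaitHandle.WaitOne(int) are available on your platform:

private bool WaitAllWithTimeout(WaitHandle[] handles, int timeoutMs)
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    foreach (WaitHandle handle in handles)
    {
        // Give each handle only whatever is left of the total budget.
        int remaining = timeoutMs - (int)stopwatch.ElapsedMilliseconds;
        if (remaining <= 0 || !handle.WaitOne(remaining))
        {
            return false; // the overall timeout elapsed
        }
    }
    return true; // every request signaled in time
}

The order of waiting doesn't matter: a handle that has already been signaled returns from WaitOne immediately, so the loop only ever blocks on requests that are still outstanding.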
Related
I have a loop that pushes back calls of std::async that are used to create objects in the pointed-to function and emplace them into another vector. All the calls are pushed into the futures vector, and the results show as ready when I inspect them with the VS debugger. However, of the 507 calls, only 30 objects are actually created, and I can't seem to pinpoint why. I have tried setting the launch policy to both std::launch::async and std::launch::deferred, but I get the same result.
void load_sec_p(vector<Security>* secs,
                map<string, map<string, vector<daySec>>>* psa_timeline,
                Security sec) {
    Security tmp = Security(psa_timeline, &sec.tsymb, &sec.gicsInd);
    std::lock_guard<std::mutex> lock(s_SecsMutex); // guard the shared vector
    secs->emplace_back(tmp);
}
Above is the function being executed in the async call; below is the loop that pushes back the futures:
for (auto& sec : security_list) {
m_SecFutures.emplace_back(std::async(load_sec_p,&async_secs, &psa_timeline, sec));
}
Watching both variables in the VS debugger after the above loop completes shows the entire futures vector reporting ready.
I have tried creating the objects with a regular for loop, appending them synchronously, but that simply takes too long (2 hours and 11 minutes). If anyone has any advice on alternatives, or on how to fix my vector problem, it would be greatly appreciated.
The code that checks whether all the futures are ready is shown below:
bool done = false;
cout << "Waiting...";
do {
done = futures_ready(m_SecFutures);
} while (!done);
The function is:
template<class T>
bool futures_ready(std::vector<std::future<T>>& futures) {
    std::chrono::milliseconds span(5);
    bool finished = false;
    int pends = 0;
    while (!finished) {
        // Sleep so the futures can progress a bit more, and so this
        // thread doesn't sit at max CPU usage.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        for (auto& x : futures) {
            if (x.wait_for(span) == std::future_status::timeout) {
                ++pends;
            }
        }
        if (pends == 0) {
            finished = true;
        } else {
            pends = 0;
        }
    }
    return finished;
}
I have a .NET console application that is supposed to be long-running and continuous, basically 24 hours a day. It's a RabbitMQ consumer client. I am opening 30 channels on one connection, and each channel is responsible for 7 different queues.
Task creation:
tokenSource2 = new CancellationTokenSource();
cancellationToken = tokenSource2.Token;
for (int i = 0; i < 30; i++) //MAX 100 MODEL
{
List<string> partlist = tmpDBList.Take(7).ToList();
tmpDBList = tmpDBList.Except(partlist).ToList();
new Task(delegate { StartConsuming(partlist, cancellationToken); }, cancellationToken, TaskCreationOptions.LongRunning).Start();
}
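As an aside, the same long-running task can be created and started in one call; a minimal sketch using Task.Factory.StartNew (same behavior as the new Task(...).Start() above):

// Creates and starts the long-running task in a single call.
Task.Factory.StartNew(
    () => StartConsuming(partlist, cancellationToken),
    cancellationToken,
    TaskCreationOptions.LongRunning,
    TaskScheduler.Default);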
The consumer method:
internal void StartConsuming(List<string> dbNames, CancellationToken cancellationToken)
{
cancellationToken.ThrowIfCancellationRequested();
using (IModel channel = Consumer.CreateModel())
{
foreach (string item in dbNames)
{
//Queue creation, exchange declare, bind, + basic eventhandler etc..
channel.BasicConsume(queue: item,
autoAck: true,
consumer: consumerEvent);
}
cancellationToken.ThrowIfCancellationRequested();
while (!cancellationToken.IsCancellationRequested)
{
cancellationToken.WaitHandle.WaitOne(5000);
}
}
}
Since I want the task to never stop, I have the endless while loop at the end of the using block; otherwise the task completes and the channels are disposed.
while (!cancellationToken.IsCancellationRequested)
{
cancellationToken.WaitHandle.WaitOne(5000);
}
Is this an optimal solution?
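For comparison, the polling loop can be collapsed into a single blocking wait on the token's wait handle; a minimal sketch with the same effect but no periodic wake-ups:

// Blocks this task until the token is cancelled; no 5-second polling needed.
cancellationToken.WaitHandle.WaitOne();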
Furthermore, each consumer event handler creates a DbContext for a specific database inside the
EventingBasicConsumer consumerEvent = new EventingBasicConsumer(channel);
consumerEvent.Received += (sender, basicDeliveryEventArgs) =>
{
cancellationToken.ThrowIfCancellationRequested();
//dbContext creation
};
event handler. Will the memory be freed after the event handler finishes? Do I need to dispose of the DbContext and each class I am using inside the event handler?
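For illustration, one common pattern is to scope the context to the handler with a using block, so it is disposed deterministically after each message. A minimal sketch, where SomeDbContext is a placeholder for the real context type:

consumerEvent.Received += (sender, basicDeliveryEventArgs) =>
{
    cancellationToken.ThrowIfCancellationRequested();
    using (var dbContext = new SomeDbContext()) // hypothetical context type
    {
        // handle the delivered message with dbContext here
    } // dbContext is disposed here, releasing its resources
};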
I'm trying to download the page source from multiple URLs, using tasks to download multiple sites at once. The issue is that I want to keep the UI updated as each individual task completes. When I wait on all the tasks, the UI stops updating until they all finish. Here is the current code I am using.
EDIT: I'm assuming I was downvoted because I didn't explain well enough. A better way to put it: why is the ContinueWith not being run before Task.WaitAll? I want the UI to update on each completion of a downloaded source. Once that has all finished, the listbox should be updated to let the user know everything is done.
private void btnGetPages_Click(object sender, EventArgs e)
{
for (int i = 1; i < 11; i++)
{
string url = $"http://someURL/page-{i}.html";
listBoxStatus.Items.Add($"Downloading source from {url}...");
Task t = new Task(() =>
{
DownloadSource(url);
});
t.ContinueWith(prevTask => listBoxStatus.Items.Add($"Finished Downloading {url} source..."), TaskScheduler.FromCurrentSynchronizationContext());
tasks.Add(t);
t.Start();
}
    Task.WaitAll(tasks.ToArray()); // blocks the UI thread until every download finishes, so the queued ContinueWith updates can only run afterwards
listBoxStatus.Items.Add("All Source files have completed...");
}
private void DownloadSource(string url)
{
var web = new HtmlWeb();
var doc = web.Load(url);
pageSource += doc.Text;
}
You really should use an asynchronous download method based on HttpClient instead of the synchronous method you are showing. Lacking that, I'll use this one:
private async Task DownloadSourceAsync(string url)
{
await Task.Run(() => DownloadSource(url));
listBoxStatus.Items.Add($"Finished Downloading {url} source...");
}
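If you do switch to HttpClient, a sketch of what DownloadSourceAsync could look like instead (assuming the pageSource field from the question; the HtmlAgilityPack parsing is omitted):

// requires using System.Net.Http;
private static readonly HttpClient httpClient = new HttpClient();

private async Task DownloadSourceAsync(string url)
{
    // Truly asynchronous: no thread is blocked while the download runs.
    string source = await httpClient.GetStringAsync(url);
    pageSource += source; // we resume on the UI thread, so this is not a data race
    listBoxStatus.Items.Add($"Finished Downloading {url} source...");
}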
Then, you can make your btnGetPages_Click method something like this:
private async void btnGetPages_Click(object sender, EventArgs e)
{
var tasks = new List<Task>();
for (int i = 1; i < 11; i++)
{
string url = $"http://someURL/page-{i}.html";
listBoxStatus.Items.Add($"Downloading source from {url}...");
tasks.Add(DownloadSourceAsync(url));
}
    await Task.WhenAll(tasks); // await instead of Task.WaitAll, so the UI thread stays free to run the updates
listBoxStatus.Items.Add("All Source files have completed...");
}
I am using a Volley singleton and add all Volley requests to it.
Sample code for adding a Volley request to the queue:
MyApplication.getInstance().addToReqQueue(jsObjRequest, "jreq1");
I have an onClick function.
buttonId.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        for (int i = 0; i < 4; i++) {
            // ....... here I call for async Volley requests which get added to the queue of the Volley singleton
        }
        // ******how do I ensure all my Volley requests are completed before I move to the next step here?*****
        // calling for new intent
        Intent m = new Intent(PlaceActivity.this, Myplanshow.class);
        m.putExtra("table_name", myplansLists.get(myplansLists.size() - 1).table_name);
        m.putExtra("table_name_without_plan_number", myplansLists.get(myplansLists.size() - 1).place_url_name);
        m.putExtra("changed", "no");
        m.putExtra("plannumber", myplansLists.size());
        // moving to new intent
        v.getContext().startActivity(m);
    }
});
Inside onClick I have a for loop which executes multiple Volley requests.
After the for loop it starts a new activity through an intent.
But for my new activity to show, I need the data from all the Volley requests in the for loop to be complete before it leaves this activity and goes to the new one.
My approach is basically to set up two counters, successCount and errorCount, to monitor the Volley requests. In the onResponse of each request I increment successCount, and in onErrorResponse I increment errorCount. At the end, I check whether the sum of both equals the number of requests made; if it doesn't yet, a background thread waits in a loop.
Check this:
buttonId.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(final View v) {
        // needs: import java.util.concurrent.atomic.AtomicInteger;
        // AtomicInteger so the Volley callbacks can safely increment the counters
        final AtomicInteger successCount = new AtomicInteger();
        final AtomicInteger errorCount = new AtomicInteger();
        // Wait on a background thread, not the main thread: Volley delivers
        // its callbacks on the main thread, so spinning there would block them.
        new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 4; i++) {
                    // ....... here I call for async Volley requests which get added to the queue of the Volley singleton
                    // in the onResponse of each of the volley requests, call successCount.incrementAndGet();
                    // also in onErrorResponse of each of the volley requests, call errorCount.incrementAndGet();
                }
                // wait here till all requests are finished
                while (successCount.get() + errorCount.get() < 4) {
                    Log.d("Volley", " waiting");
                    try {
                        Thread.sleep(100); // don't spin at full speed
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                // calling for new intent
                Intent m = new Intent(PlaceActivity.this, Myplanshow.class);
                m.putExtra("table_name", myplansLists.get(myplansLists.size() - 1).table_name);
                m.putExtra("table_name_without_plan_number", myplansLists.get(myplansLists.size() - 1).place_url_name);
                m.putExtra("changed", "no");
                m.putExtra("plannumber", myplansLists.size());
                // moving to new intent
                v.getContext().startActivity(m);
            }
        }).start();
    }
});
I'm looking into ZeroMQ for its PGM support.
Running on Windows (in VirtualBox with macOS as host, if that could matter), using the NetMQ library.
The test I want to do is very simple: send messages from A to B as fast as possible...
First I used TCP as the transport; this easily reached >150,000 messages per second, with two receivers keeping pace.
Then I wanted to test PGM; all I did was to replace the address "tcp://*:5556" with "pgm://239.0.0.1:5557" on both sides.
Now, the PGM tests give very strange results: the sender easily gets to >200,000 messages/s; the receiver, though, manages to process only about 500 messages/s!?
So, I don't understand what is happening.
After slowing down the sender (sleeping 10 ms after each message, since otherwise it's practically impossible to investigate the flow), it appears that the receiver tries to keep up: initially it sees every message pass by, then it chokes, misses a range of messages, then tries to catch up again...
I played with the HWM and Recovery Interval settings, but that didn't seem to make much difference (?!).
Can anyone explain what's going on?
Many thanks,
Frederik
Note: not sure if it matters: as far as I understand, I don't use OpenPGM - I just downloaded the ZeroMQ setup and enabled 'Multicasting Support' in Windows.
This is the Sender code:
class MassSender
{
private const string TOPIC_PREFIX = "Hello:";
private static int messageCounter = 0;
private static int timerCounter = 0;
public static void Main(string[] args)
{
Timer timer = new Timer(1000);
timer.Elapsed += timer_Elapsed;
SendMessages_0MQ_NetMQ(timer);
}
private static void SendMessages_0MQ_NetMQ(Timer timer)
{
using (NetMQContext context = NetMQContext.Create())
{
using (NetMQSocket publisher = context.CreateSocket(ZmqSocketType.Pub))
{
//publisher.Bind("tcp://*:5556");
publisher.Bind("pgm://239.0.0.1:5557"); // IP of interface is not specified so use default interface.
timer.Start();
while (true)
{
string message = GetMessage();
byte[] body = Encoding.UTF8.GetBytes(message);
publisher.Send(body);
}
}
}
}
private static string GetMessage()
{
return TOPIC_PREFIX + "Message " + (++messageCounter).ToString();
}
static void timer_Elapsed(object sender, ElapsedEventArgs e)
{
Console.WriteLine("=== SENT {0} MESSAGES SO FAR - TOTAL AVERAGE IS {1}/s ===", messageCounter, messageCounter / ++timerCounter);
}
}
and the Receiver:
class MassReceiver
{
private const string TOPIC_PREFIX = "Hello:";
private static int messageCounter = 0;
private static int timerCounter = 0;
private static string lastMessage = String.Empty;
static void Main(string[] args)
{
// Assume that sender and receiver are started simultaneously.
Timer timer = new Timer(1000);
timer.Elapsed += timer_Elapsed;
ReceiveMessages_0MQ_NetMQ(timer);
}
private static void ReceiveMessages_0MQ_NetMQ(Timer timer)
{
using (NetMQContext context = NetMQContext.Create())
{
using (NetMQSocket subscriber = context.CreateSocket(ZmqSocketType.Sub))
{
subscriber.Subscribe(""); // Subscribe to everything
//subscriber.Connect("tcp://localhost:5556");
subscriber.Connect("pgm://239.0.0.1:5557"); // IP of interface is not specified so use default interface.
timer.Start();
while (true)
{
    byte[] body = subscriber.Receive();
    messageCounter++;
    string message = Encoding.UTF8.GetString(body);
    lastMessage = message; // Only show the message when the timer elapses, otherwise throughput drops dramatically.
}
}
}
}
static void timer_Elapsed(object sender, ElapsedEventArgs e)
{
Console.WriteLine("=== RECEIVED {0} MESSAGES SO FAR - TOTAL AVERAGE IS {1}/s === (Last: {2})", messageCounter, messageCounter / ++timerCounter, lastMessage);
}
}
What is the size of each message? And what kind of network are you using?
You are not using OpenPGM; you are using what is called ms-pgm (Microsoft's implementation of PGM).
Anyway, you might have to change the MulticastRate of the socket; it defaults to 100 kbit/s.
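Since the question uses NetMQ, here is a hedged sketch of raising that limit there, assuming NetMQ's SocketOptions exposes MulticastRate in kbit/s (mirroring ZMQ_RATE); publisher and subscriber are the sockets from the question's code:

// Assumption: NetMQ exposes ZMQ_RATE as Options.MulticastRate (kbit/s).
// Set it before Bind/Connect on the pgm:// endpoint.
publisher.Options.MulticastRate = 1000000;  // roughly 1 Gbit/s instead of the 100 kbit/s default
subscriber.Options.MulticastRate = 1000000;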
I ran into the same issue: the sender could send thousands of messages per second, but my receiver could only receive about two hundred messages per second.
I think the sending/receiving rate was being limited. I checked ZMQ_RATE ("Set multicast data rate") at http://api.zeromq.org/3-0:zmq-setsockopt.
The default rate is just 100 kbit/s. When I increased it to 1 Gbit/s, everything was OK:
const int rate = 1000000; // ZMQ_RATE is in kbit/s, so this is a 1 Gbit/s TX and RX rate
m_socket.setsockopt(ZMQ_RATE, &rate, sizeof(rate));