ZeroMQ NetMQ TrySend always succeeds

public static void ZeroMQ()
{
    try
    {
        TimeSpan timeout = TimeSpan.FromMilliseconds(2000);
        AsyncIO.ForceDotNet.Force();
        using (PairSocket client = new PairSocket("tcp://127.0.0.1:5555"))
        {
            client.Options.SendHighWatermark = 0;
            client.Options.Linger = TimeSpan.Zero;
            bool success = client.TrySendFrame(timeout, "Hello");
            Debug.Log($"Success = {success}");
            string msg = string.Empty;
            success = client.TryReceiveFrameString(timeout, out msg);
            Debug.Log($"Success = {success} - {msg}");
            success = client.TryReceiveFrameString(timeout, out msg);
            Debug.Log($"Success = {success} - {msg}");
        }
    }
    catch (Exception e)
    {
        Debug.Log(e);
    }
    finally
    {
        NetMQConfig.Cleanup();
    }
}
The code above has two problems:
If I run this code with no server listening on :5555, the program prints "Success = true" for the TrySendFrame() and false for both receives. I would expect the send to fail, and I can't get it to fail even with Linger set to zero and the send high water mark also set to zero.
The second issue is that once execution reaches the finally block, the program freezes forever.
Can someone better versed in ZMQ give me any pointers as to why this might be happening?

Related

Transmitting 1500KB (hex file) data over UDS using CAPL test module

I am trying to download my hex file of size 1500KB via UDS with a CAPL test module, with:
P2 timer = 50 ms
P2* timer = 5000 ms
Here is a snippet of my code for the data transfer:
void TS_transferData()
{
    byte transferData_serviceid = 0x36;
    byte blockSequenceCounter = 0x1;
    byte buffer[4093];
    byte binarydata[4095];
    long i,ret1,ret2,ret3,temp,timeout = 0,Counter = 0;
    char filename[30] = "xxx.bin";
    dword readaccess_handle;
    diagrequest ECU_QUALIFIER.* request;
    long valueleft;
    readaccess_handle = OpenFileRead(filename,1);
    if (readaccess_handle != 0 )
    {
        while( (valueleft = fileGetBinaryBlock(buffer,elcount(buffer),readaccess_handle))==4093 )
        {
            binarydata[0] = transferData_serviceid;
            binarydata[1] = blockSequenceCounter;
            for(i=0;i<elcount(buffer);i++)
            {
                binarydata[i+2] = buffer[i];
            }
            diagResize(request, elCount(binarydata));
            DiagSetPrimitiveData(request,binarydata,elcount(binarydata));
            DiagSendRequest(request);
            write("length of binarydata %d ",elcount(binarydata));
            // Wait until the request has been completely sent
            ret1 = TestWaitForDiagRequestSent(request, 20000);
            if(ret1 == 1) // Request sent
            {
                ret2=TestWaitForDiagResponse(request,50);
                if(ret2==1) // Response received
                {
                    ret3=DiagGetLastResponseCode(request); // Get the code of the response
                    if(ret3==-1) // Is it a positive response?
                    {
                        ;
                    }
                    else
                    {
                        testStepFail(0, "4.0","Binary Datatransfer on server Failed");
                        break;
                    }
                }
                else if(ret2 == timeout)
                {
                    testStepFail(0, "4.0","Binary Datatransfer on server Failed");
                    write("timeout occured while TestWaitForDiagResponse with block %d ",blockSequenceCounter);
                }
            }
            else if(ret1 == timeout)
            {
                testStepFail(0, "4.0","Binary Datatransfer on server Failed");
                write("timeout occured while TestWaitForDiagRequestSent %d ",blockSequenceCounter);
            }
            if(blockSequenceCounter == 255)
                blockSequenceCounter = 0;
            else
                ++blockSequenceCounter;
        }
    }
    //handle the rest of the bytes to be transmitted
    fileClose (readaccess_handle);
}
The software download is happening, but it is taking a very long time.
For the TestWaitForDiagRequestSent() function, any timeout value less than 20000 gives me a timeout error.
Is there any other way I can reduce the transfer time, or where am I going wrong with the calculation?
Is there any example I can refer to that shows how to transmit such long data using CAPL?
Sorry, I am a beginner with CAPL and the UDS protocol.

Why does my for loop increment past where it should stop?

I am attempting to increase the speed at which files in my application download by downloading them in parallel. Previously I was downloading them sequentially and it worked fine, but when I attempted to download them in parallel I ran into unexplained issues.
Here is my method in which I downloaded the files in sequence:
public IActionResult DownloadPartFiles([FromBody] FileRequestParameters parameters)
{
    List<InMemoryFile> files = new List<InMemoryFile>();
    for (int i = 0; i < parameters.FileNames.Length; i++)
    {
        InMemoryFile inMemoryFile = GetInMemoryFile(parameters.FileLocations[i], parameters.FileNames[i]).Result;
        files.Add(inMemoryFile);
    }
    byte[] archiveFile = null;
    using (MemoryStream archiveStream = new MemoryStream())
    {
        using (ZipArchive archive = new ZipArchive(archiveStream, ZipArchiveMode.Create, true))
        {
            foreach (InMemoryFile file in files)
            {
                ZipArchiveEntry zipArchiveEntry = archive.CreateEntry(file.FileName, CompressionLevel.Optimal);
                using (MemoryStream originalFileStream = new MemoryStream(file.Content))
                using (Stream zipStream = zipArchiveEntry.Open())
                {
                    originalFileStream.CopyTo(zipStream);
                }
            }
        }
        archiveFile = archiveStream.ToArray();
    }
    return File(archiveFile, "application/octet-stream");
}
Here is the method changed to download the files in parallel:
public async Task<IActionResult> DownloadPartFiles([FromBody] FileRequestParameters parameters)
{
    List<Task<InMemoryFile>> fileTasks = new List<Task<InMemoryFile>>();
    for (int i = 0; i < parameters.FileNames.Length; i++)
    {
        if (i == parameters.FileNames.Length - 1)
        {
            int breakpoint = 0;
        }
        if (i == parameters.FileNames.Length)
        {
            int breakpoint = 0;
        }
        fileTasks.Add(Task.Run(() => GetInMemoryFile(parameters.FileLocations[i], parameters.FileNames[i])));
    }
    InMemoryFile[] fileResults = await Task.WhenAll(fileTasks);
    byte[] archiveFile = null;
    using (MemoryStream archiveStream = new MemoryStream())
    {
        using (ZipArchive archive = new ZipArchive(archiveStream, ZipArchiveMode.Create, true))
        {
            foreach (InMemoryFile file in fileResults)
            {
                ZipArchiveEntry zipArchiveEntry = archive.CreateEntry(file.FileName, CompressionLevel.Optimal);
                using (MemoryStream originalFileStream = new MemoryStream(file.Content))
                using (Stream zipStream = zipArchiveEntry.Open())
                {
                    originalFileStream.CopyTo(zipStream);
                }
            }
        }
        archiveFile = archiveStream.ToArray();
    }
    return File(archiveFile, "application/octet-stream");
}
Here is the method that does the actual downloading:
private async Task<InMemoryFile> GetInMemoryFile(string fileLocation, string fileName)
{
    InMemoryFile file;
    using (HttpClient client = new HttpClient())
    using (HttpResponseMessage response = await client.GetAsync(fileLocation))
    {
        byte[] fileContent = await response.Content.ReadAsByteArrayAsync();
        file = new InMemoryFile(fileName, fileContent);
    }
    return file;
}
Now the issue I run into, after changing DownloadPartFiles to get all the files in parallel, is that my for loop now seems to run past its stop condition. For example, if parameters.FileNames.Length returns 12, the loop should not run when i = 12 and should exit. However, in my testing it continues to run when i = 12, and, as one might expect, I get an out-of-bounds error. I tried setting breakpoints in my code to confirm that it was actually running past the stop condition, and more weird behavior arose. In my for loop I included two if statements with breakpoint variables to break on. It always breaks when i is on its last expected iteration, but it never breaks when i is one past the expected last iteration; it seems to skip that breakpoint. It runs fine if I step through the code while debugging but throws the out-of-bounds error when I let it run normally.
I'm not sure why this is happening, but I am still new to asynchronous programming, so maybe it's just an oversight somewhere. Let me know if I need to explain anything further.
I made a critical mistake in that I wrapped an asynchronous method (my GetInMemoryFile method) in Task.Run(), which is intended for wrapping synchronous methods so that they run asynchronously. Because Task.Run() only schedules the lambda to run later, the lambda captured the shared loop variable i and did not index the arrays until after i had already moved on (eventually to one past the end of the array), which caused the weird behavior. Passing the task returned by the async method directly evaluates the arguments immediately, with the current value of i.
So, in short, I changed
fileTasks.Add(Task.Run(() => GetInMemoryFile(parameters.FileLocations[i], parameters.FileNames[i])));
To
fileTasks.Add(GetInMemoryFile(parameters.FileLocations[i], parameters.FileNames[i]));
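If you did want to keep Task.Run (for example, to start each download on a thread-pool thread), a sketch like the following would avoid the capture problem by copying the loop variable into a local before the lambda captures it (this reuses parameters, fileTasks, and GetInMemoryFile from the code above):
for (int i = 0; i < parameters.FileNames.Length; i++)
{
    int index = i; // each lambda captures its own copy rather than the shared loop variable
    fileTasks.Add(Task.Run(() => GetInMemoryFile(parameters.FileLocations[index], parameters.FileNames[index])));
}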

NetMQ why is "SendReady" needed for Req-Rep?

I have a problem that I managed to fix... however, I'm a little concerned because I don't really understand why the solution worked.
I am using NetMQ, specifically a NetMQPoller which manages a number of sockets, one of which is the REQ end of a REQ-REP pair.
I have a queue of requests that get dequeued and sent, and the server handles each request type as required and sends back an appropriate response. This had been working without issue; however, when I tried to add an additional request type the system stopped working as expected: the request would reach the server, the server would send the response... and the client would not receive it. The message would not arrive at the client until the server was shut down (unusual behavior!).
I had been managing the REQ-REP pair with a flag that I set before sending a request and reset on receipt of a reply. I managed to fix the issue by only triggering sends within the "SendReady" event of the REQ socket - this automagically fixed all of my issues, however I can't really find anything in the documentation that tells me why the socket might not have been in the "SendReady" state, or what this event actually does.
Any information that could be shed on why this is working now would be great :)
Cheers.
Edit: Source
Client:
"Subscribe" is run as a separate thread to the UI
private void Subscribe(string address)
{
    using (var req = new RequestSocket(address + ":5555"))
    using (var sub = new SubscriberSocket(address + ":5556"))
    using (var poller = new NetMQPoller { req, sub })
    {
        // Send program code when a request for a code update is received
        sub.ReceiveReady += (s, a) =>
        {
            var type = sub.ReceiveFrameString();
            var reply = sub.ReceiveFrameString();
            switch (type)
            {
                case "Type1":
                    manager.ChangeValue(reply);
                    break;
                case "Type2":
                    string[] args = reply.Split(',');
                    eventAggregator.PublishOnUIThread(new MyEvent(args[0], (SimObjectActionEventType)Enum.Parse(typeof(MyEventType), args[1])));
                    break;
            }
        };
        req.ReceiveReady += Req_ReceiveReady;
        poller.RunAsync();
        sub.Connect(address + ":5556");
        sub.SubscribeToAnyTopic();
        sub.Options.ReceiveHighWatermark = 10;
        reqQueue = new Queue<string[]>();
        reqQueue.Enqueue(new string[] { "InitialiseClient", "" });
        req_sending = false;
        while (programRunning)
        {
            if (reqQueue.Count > 0 && !req_sending)
            {
                req_sending = true;
                string[] request = reqQueue.Dequeue();
                Console.WriteLine("Sending " + request[0] + " " + request[1]);
                req.SendMoreFrame(request[0]).SendFrame(request[1]);
            }
            Thread.Sleep(1);
        }
    }
}
private void Req_ReceiveReady(object sender, NetMQSocketEventArgs e)
{
    var req = e.Socket;
    var messageType = req.ReceiveFrameString();
    Console.WriteLine("Received {0}", messageType);
    switch (messageType)
    {
        case "Reply1":
            // Receive action
            break;
        case "Reply2":
            // Receive action
            break;
        case "Reply3":
            // Receive action
            break;
    }
    req_sending = false;
}
Server:
using (var rep = new ResponseSocket("@tcp://*:5555"))
using (var pub = new PublisherSocket("@tcp://*:5556"))
using (var beacon = new NetMQBeacon())
using (var poller = new NetMQPoller { rep, pub, beacon })
{
    // Send program code when a request for a code update is received
    rep.ReceiveReady += (s, a) =>
    {
        var messageType = rep.ReceiveFrameString();
        var message = rep.ReceiveFrameString();
        Console.WriteLine("Received {0} - Content: {1}", messageType, message);
        switch (messageType)
        {
            case "InitialiseClient":
                // Send
                rep.SendMoreFrame("Reply1").SendFrame(repData);
                break;
            case "Req2":
                // do something
                rep.SendMoreFrame("Reply2").SendFrame("RequestOK");
                break;
            case "Req3":
                args = message.Split(',');
                if (args.Length == 2)
                {
                    // Do Something
                    rep.SendMoreFrame("Reply3").SendFrame("RequestOK");
                }
                else
                {
                    rep.SendMoreFrame("Ack").SendFrame("RequestError - incorrect argument format");
                }
                break;
            case "Req4":
                args = message.Split(',');
                if (args.Length == 2)
                {
                    requestData = //do something
                    rep.SendMoreFrame("Reply4").SendFrame(requestData);
                }
                else
                {
                    rep.SendMoreFrame("Ack").SendFrame("RequestError - incorrect argument format");
                }
                break;
            default:
                rep.SendMoreFrame("Ack").SendFrame("Error");
                break;
        }
    };
    // setup discovery beacon with 1 second interval
    beacon.Configure(5555);
    beacon.Publish("server", TimeSpan.FromSeconds(1));
    // start the poller
    poller.RunAsync();
    // run the simulation loop
    while (serverRunning)
    {
        // todo - make this operate more efficiently
        // push updated variable values to clients
        foreach (string[] message in pubQueue)
        {
            pub.SendMoreFrame(message[0]).SendFrame(message[1]);
        }
        pubQueue.Clear();
        Thread.Sleep(2);
    }
    poller.StopAsync();
}
You are using the request socket from multiple threads, which is not supported: you are sending on your Subscribe thread and receiving on the poller thread.
Instead of using a regular Queue, try NetMQQueue: you can add it to the poller and enqueue from the UI thread (or any other thread). Then the sending happens on the poller thread, as does the receiving.
You can read the docs here:
http://netmq.readthedocs.io/en/latest/queue/
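For illustration, here is a minimal sketch of that pattern (the address, frame layout, and queue element type are assumptions based on the client code above, not tested drop-in code):
using System;
using NetMQ;
using NetMQ.Sockets;

class QueueOnPollerSketch
{
    static void Main()
    {
        using (var req = new RequestSocket(">tcp://127.0.0.1:5555")) // '>' means connect; address assumed
        using (var sendQueue = new NetMQQueue<string[]>())
        using (var poller = new NetMQPoller { req, sendQueue })
        {
            // Runs on the poller thread whenever something is enqueued.
            sendQueue.ReceiveReady += (s, e) =>
            {
                string[] request = e.Queue.Dequeue();
                // Note: a REQ socket still requires strict send/receive alternation.
                req.SendMoreFrame(request[0]).SendFrame(request[1]);
            };
            // Replies are handled on the same poller thread.
            req.ReceiveReady += (s, e) =>
            {
                Console.WriteLine("Received {0}", e.Socket.ReceiveFrameString());
            };
            poller.RunAsync();

            // Any other thread (UI or otherwise) only ever touches the queue:
            sendQueue.Enqueue(new[] { "InitialiseClient", "" });

            Console.ReadLine(); // keep the sketch alive
            poller.Stop();
        }
    }
}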
The only thing I can think of is that the REP socket is ready to send only after it has actually received a message fully (all parts).

EWS The server cannot service this request right now

While exporting email from an Office 365 account using the EWS Managed API, I am seeing the error "The server cannot service this request right now. Try again later." Why is that error occurring, and what can be done about it?
I am using the following code for this work:
_GetEmail = (EmailMessage)item;
bool isread = _GetEmail.IsRead;
sub = _GetEmail.Subject;
fold = folder.DisplayName;
historicalDate = _GetEmail.DateTimeSent.Subtract(folder.Service.TimeZone.GetUtcOffset(_GetEmail.DateTimeSent));
props = new PropertySet(EmailMessageSchema.MimeContent);
var email = EmailMessage.Bind(_source, item.Id, props);
bytes = new byte[email.MimeContent.Content.Length];
fs = new MemoryStream(bytes, 0, email.MimeContent.Content.Length, true);
fs.Write(email.MimeContent.Content, 0, email.MimeContent.Content.Length);
Demail = new EmailMessage(_destination);
Demail.MimeContent = new MimeContent("UTF-8", bytes);
// 'SetExtendedProperty' used to maintain historical date of items
Demail.SetExtendedProperty(new ExtendedPropertyDefinition(57, MapiPropertyType.SystemTime), historicalDate);
// PR_MESSAGE_DELIVERY_TIME
Demail.SetExtendedProperty(new ExtendedPropertyDefinition(3590, MapiPropertyType.SystemTime), historicalDate);
if (isread == false)
{
    Demail.IsRead = isread;
}
if (_source.RequestedServerVersion == flagVersion && _destination.RequestedServerVersion == flagVersion)
{
    Demail.Flag = _GetEmail.Flag;
}
_lstdestmail.Add(Demail);
_objtask = new TaskStatu();
_objtask.TaskId = _taskid;
_objtask.SubTaskId = subtaskid;
_objtask.FolderId = Convert.ToInt64(folderId);
_objtask.SourceItemId = Convert.ToString(_GetEmail.InternetMessageId.ToString());
_objtask.DestinationEmail = Convert.ToString(_fromEmail);
_objtask.CreatedOn = DateTime.UtcNow;
_objtask.IsSubFolder = false;
_objtask.FolderName = fold;
_objdbcontext.TaskStatus.Add(_objtask);
try
{
    if (counter == countGroup)
    {
        Demails = new EmailMessage(_destination);
        Demails.Service.CreateItems(_lstdestmail, _destinationFolder.Id, MessageDisposition.SaveOnly, SendInvitationsMode.SendToNone);
        _objdbcontext.SaveChanges();
        counter = 0;
        _lstdestmail.Clear();
    }
}
catch (Exception ex)
{
    ClouldErrorLog.CreateError(_taskid, subtaskid, ex.Message + GetLineNumber(ex, _taskid, subtaskid), CreateInnerException(sub, fold, historicalDate));
    counter = 0;
    _lstdestmail.Clear();
    continue;
}
This error occurs only when I try to export to Office 365 accounts; it works fine with Outlook 2010, 2013, 2016, etc.
Usually this happens when you exceed the EWS throttling limits in Exchange. It is explained here.
Make sure you are familiar with the throttling policies and that your code complies with them.
You can inspect the throttling policies using Get-ThrottlingPolicy if you have access to the server.
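To give an idea of what complying can look like in code, here is a rough sketch (the helper name, retry budget, and batching are assumptions, not part of the original code): the EWS Managed API raises ServerBusyException when a request is throttled, and its BackOffMilliseconds property reports how long the server wants you to wait before retrying.
using System;
using System.Collections.Generic;
using System.Threading;
using Microsoft.Exchange.WebServices.Data;

static class ThrottlingAwareExport
{
    // Hypothetical helper: retries a batched CreateItems call, honoring the
    // back-off interval the server reports when it throttles the request.
    public static void CreateItemsWithBackOff(ExchangeService service,
                                              IEnumerable<EmailMessage> batch,
                                              FolderId destinationFolderId)
    {
        const int maxAttempts = 5; // assumed retry budget
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                service.CreateItems(batch, destinationFolderId,
                    MessageDisposition.SaveOnly, SendInvitationsMode.SendToNone);
                return; // success
            }
            catch (ServerBusyException ex)
            {
                // The server reports how many milliseconds to back off before retrying.
                Thread.Sleep(ex.BackOffMilliseconds);
            }
        }
        throw new InvalidOperationException("Batch was still throttled after all retries.");
    }
}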
One way to solve the throttling issue you are experiencing is to implement paging instead of requesting all items in one go. You can refer to this link.
For instance:
using Microsoft.Exchange.WebServices.Data;

static void PageSearchItems(ExchangeService service, WellKnownFolderName folder)
{
    int pageSize = 5;
    int offset = 0;
    // Request one more item than your actual pageSize.
    // This will be used to detect a change to the result
    // set while paging.
    ItemView view = new ItemView(pageSize + 1, offset);
    view.PropertySet = new PropertySet(ItemSchema.Subject);
    view.OrderBy.Add(ItemSchema.DateTimeReceived, SortDirection.Descending);
    view.Traversal = ItemTraversal.Shallow;
    bool moreItems = true;
    ItemId anchorId = null;
    while (moreItems)
    {
        try
        {
            FindItemsResults<Item> results = service.FindItems(folder, view);
            moreItems = results.MoreAvailable;
            if (moreItems && anchorId != null)
            {
                // Check the first result to make sure it matches
                // the last result (anchor) from the previous page.
                // If it doesn't, that means that something was added
                // or deleted since you started the search.
                if (results.Items.First<Item>().Id != anchorId)
                {
                    Console.WriteLine("The collection has changed while paging. Some results may be missed.");
                }
            }
            if (moreItems)
                view.Offset += pageSize;
            anchorId = results.Items.Last<Item>().Id;
            // Because you're including an additional item on the end of your results
            // as an anchor, you don't want to display it.
            // Set the number to loop as the smaller value between
            // the number of items in the collection and the page size.
            int displayCount = results.Items.Count > pageSize ? pageSize : results.Items.Count;
            for (int i = 0; i < displayCount; i++)
            {
                Item item = results.Items[i];
                Console.WriteLine("Subject: {0}", item.Subject);
                Console.WriteLine("Id: {0}\n", item.Id.ToString());
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception while paging results: {0}", ex.Message);
        }
    }
}
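A caller could then exercise the sample like this (the credentials and mailbox below are placeholders, not values from the original post):
ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2013_SP1);
service.Credentials = new WebCredentials("user@contoso.com", "password"); // placeholder account
service.AutodiscoverUrl("user@contoso.com", redirectionUrl => redirectionUrl.StartsWith("https://"));
PageSearchItems(service, WellKnownFolderName.Inbox);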

Compiler error when combining Linq + "RangeVariables" + TPL + DynamicTableEntity

I'm looking at the Microsoft-provided sample "Process Tasks as they Finish" and adapting that TPL sample for Azure Storage.
The problem I have is marked below, where the variable domainData produces this compiler error: Unknown method 'Select(?)' of TableQuerySegment<DynamicTableEntity> (fully qualified namespace removed).
I also get the following error on DynamicTableEntity domainData: Unknown type of variable domainData.
/// if you have the necessary references the following most likely should compile and give you same error
CloudStorageAccount acct = CloudStorageAccount.DevelopmentStorageAccount;
CloudTableClient client = acct.CreateCloudTableClient();
CloudTable tableSymmetricKeys = client.GetTableReference("SymmetricKeys5");
TableContinuationToken token = new TableContinuationToken() { };
TableRequestOptions opt = new TableRequestOptions() { };
OperationContext ctx = new OperationContext() { ClientRequestID = "ID" };
CancellationToken cancelToken = new CancellationToken();
List<Task> taskList = new List<Task>();
var task2 = tableSymmetricKeys.CreateIfNotExistsAsync(cancelToken);
task2.Wait(cancelToken);
int depth = 3;
while (true)
{
    Task<TableQuerySegment<DynamicTableEntity>> task3 = tableSymmetricKeys.ExecuteQuerySegmentedAsync(query, token, opt, ctx, cancelToken);
    // Run the method
    task3.Wait();
    Console.WriteLine("Records retrieved in this attempt = " + task3.Result.Count());// + " | Total records retrieved = " + state.TotalEntitiesRetrieved);
    // HELP! This is where I'm doing something the compiler doesn't like
    //
    IEnumerable<Task<int>> getTrustDataQuery =
        from domainData in task3.Result select QueryPartnerForData(domainData, "yea, search for this.", client, cancelToken);
    // Prepare for next iteration or quit
    if (token == null)
    {
        break;
    }
    else
    {
        token = task3.Result.ContinuationToken;
        // todo: persist token token.WriteXml()
    }
}
//....
private static object QueryPartnerForData(DynamicTableEntity domainData, string p, CloudTableClient client, CancellationToken cancelToken)
{
    throw new NotImplementedException();
}
Your code is missing a query. In order to test the code I created the following query:
TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>()
    .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "temp"));
I also added the method QueryPartnerForData, which doesn't do anything (it simply returns null), and everything works fine. So maybe it's an issue with the QueryPartnerForData method? The best way to find the actual error is to set a breakpoint here and there.
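For reference, a stand-in along these lines is enough to let the question's code compile; the return type needs to be Task<int> for the IEnumerable<Task<int>> assignment to type-check, and the query syntax also needs using System.Linq in scope (the body below is just a placeholder):
private static Task<int> QueryPartnerForData(DynamicTableEntity domainData, string p, CloudTableClient client, CancellationToken cancelToken)
{
    // Does nothing yet - simply returns null, which is enough to verify that the LINQ query compiles.
    return null;
}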
A StackOverflowException usually indicates runaway recursion rather than a simple endless loop. Run through the breakpoints a few times and see where your code is stuck. Could it be that QueryPartnerForData calls the other method and that the other method calls QueryPartnerForData again?

Resources