Am I hitting a thread limit? - windows-phone-7

I have the following working code to get a stream from a url:
private Stream GetDownloadStream(string url)
{
    Stream stream = null;
    AutoResetEvent downloadCompleted = new AutoResetEvent(false);

    httpRequest = (HttpWebRequest)WebRequest.Create(url);
    httpRequest.AllowReadStreamBuffering = false;
    httpRequest.BeginGetResponse(
        result =>
        {
            try
            {
                httpResponse = (HttpWebResponse)httpRequest.EndGetResponse(result);
                stream = httpResponse.GetResponseStream();
            }
            catch (WebException)
            {
                downloadCompleted.Set();
                Abort();
            }
            finally
            {
                downloadCompleted.Set();
            }
        },
        null);

    bool completed = downloadCompleted.WaitOne(15 * 1000);
    if (completed)
    {
        return stream;
    }
    return null;
}
It doesn't matter which streams I choose to play: the first six requests always return a stream, and the seventh returns null.
I already tried increasing the timeout to 30 seconds, but on the seventh request the httpRequest.BeginGetResponse callback is never entered.
Any ideas why?

You're hitting the limit on the number of concurrent web requests (which is 6).
Try closing the stream when you've finished with it or staggering your requests so that you're not trying to make too many at once.
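For illustration, here is a minimal sketch of that cleanup, assuming httpResponse is the field assigned inside GetDownloadStream and PlayStream is a hypothetical caller; once the stream has been consumed, closing both the stream and the response frees the connection so later requests can use one of the six slots:
private void PlayStream(string url)
{
    Stream stream = GetDownloadStream(url);
    if (stream == null)
    {
        return;
    }
    try
    {
        // ... consume the stream here ...
    }
    finally
    {
        // Dispose the stream and the response so the underlying
        // connection is returned to the pool.
        stream.Close();
        if (httpResponse != null)
        {
            httpResponse.Close();
            httpResponse = null;
        }
    }
}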

Related

ZeroMQ NetMQ TrySend always succeeds

public static void ZeroMQ()
{
    try
    {
        TimeSpan timeout = TimeSpan.FromMilliseconds(2000);
        AsyncIO.ForceDotNet.Force();
        using (PairSocket client = new PairSocket("tcp://127.0.0.1:5555"))
        {
            client.Options.SendHighWatermark = 0;
            client.Options.Linger = TimeSpan.Zero;

            bool success = client.TrySendFrame(timeout, "Hello");
            Debug.Log($"Success = {success}");

            string msg = string.Empty;
            success = client.TryReceiveFrameString(timeout, out msg);
            Debug.Log($"Success = {success} - {msg}");

            success = client.TryReceiveFrameString(timeout, out msg);
            Debug.Log($"Success = {success} - {msg}");
        }
    }
    catch (Exception e)
    {
        Debug.Log(e);
    }
    finally
    {
        NetMQConfig.Cleanup();
    }
}
The code above has two problems:
If I run it with no server listening on :5555, the program prints "Success = true" for TrySendFrame() and false for both receives. I would expect the send to fail, but I can't get it to fail even with Linger set to zero and the send high water mark also set to zero.
The second issue is that after execution reaches the finally block, the program freezes forever.
Can someone better versed in ZMQ give me any pointers as to why this might be happening?

gRPC slow serialization on large dataset

I know that Google states that protobufs don't handle large messages well (i.e. greater than 1 MB), but I'm trying to stream a dataset of tens of megabytes using gRPC, and some people seem to say that's fine, at least with some splitting...
However, when I try to send the array this way (repeated uint32), it takes about 20 seconds on the same local machine.
#proto
service PAS {
  // analyze single file
  rpc getPhotonRecords (PhotonRecordsRequest) returns (PhotonRecordsReply) {}
}

message PhotonRecordsRequest {
  string fileName = 1;
}

message PhotonRecordsReply {
  repeated uint32 PhotonRecords = 1;
}
where PhotonRecordsReply needs to be ~10 million uint32 in length...
Does anyone have an idea on how to speed this up? Or what technology would be more appropriate?
So I think I've implemented streaming based on comments and answers given, but it still takes the same amount of time:
#proto
service PAS {
  // analyze single file
  rpc getPhotonRecords (PhotonRecordsRequest) returns (stream PhotonRecordsReply) {}
}
class PAS_GRPC(pas_pb2_grpc.PASServicer):

    def getPhotonRecords(self, request: pas_pb2.PhotonRecordsRequest, _context):
        raw_data_bytes = flb_tools.read_data_bytes(request.fileName)
        data = flb_tools.reshape_flb_data(raw_data_bytes)

        index = 0
        chunk_size = 1024
        len_data = len(data)
        while index < len_data:
            # last chunk
            if index + chunk_size > len_data:
                yield pas_pb2.PhotonRecordsReply(PhotonRecords=data[index:])
            # all other chunks
            else:
                yield pas_pb2.PhotonRecordsReply(PhotonRecords=data[index:index + chunk_size])
            index += chunk_size
Min repro
Github example
If you change it over to use streams, that should help. It took less than 2 seconds to transfer for me. Note this was without SSL and on localhost. This is code I threw together; I did run it and it worked. I'm not sure what happens if the file size is not a multiple of 4 bytes, for example. Also, the bytes are read using Java's default endian order.
I made my 10 MB test file like this:
dd if=/dev/random of=my_10mb_file bs=1024 count=10240
Here's the service definition. The only thing I added is the stream keyword on the response.
service PAS {
  // analyze single file
  rpc getPhotonRecords (PhotonRecordsRequest) returns (stream PhotonRecordsReply) {}
}
Here's the server implementation.
public class PhotonsServerImpl extends PASImplBase {
#Override
public void getPhotonRecords(PhotonRecordsRequest request, StreamObserver<PhotonRecordsReply> responseObserver) {
log.info("inside getPhotonRecords");
// open the file, I suggest using java.nio API for the fastest read times.
Path file = Paths.get(request.getFileName());
try (FileChannel fileChannel = FileChannel.open(file, StandardOpenOption.READ)) {
int blockSize = 1024 * 4;
ByteBuffer byteBuffer = ByteBuffer.allocate(blockSize);
boolean done = false;
while (!done) {
PhotonRecordsReply.Builder response = PhotonRecordsReply.newBuilder();
// read 1000 ints from the file.
byteBuffer.clear();
int read = fileChannel.read(byteBuffer);
if (read < blockSize) {
done = true;
}
// write to the response.
byteBuffer.flip();
for (int index = 0; index < read / 4; index++) {
response.addPhotonRecords(byteBuffer.getInt());
}
// send the response
responseObserver.onNext(response.build());
}
} catch (Exception e) {
log.error("", e);
responseObserver.onError(
Status.INTERNAL.withDescription(e.getMessage()).asRuntimeException());
}
responseObserver.onCompleted();
log.info("exit getPhotonRecords");
}
}
The client just logs the size of the array received.
public long getPhotonRecords(ManagedChannel channel) {
  if (log.isInfoEnabled())
    log.info("Enter - getPhotonRecords");

  PASGrpc.PASBlockingStub photonClient = PASGrpc.newBlockingStub(channel);

  PhotonRecordsRequest request = PhotonRecordsRequest.newBuilder().setFileName("/udata/jdrummond/logs/my_10mb_file").build();

  photonClient.getPhotonRecords(request).forEachRemaining(photonRecordsReply -> {
    log.info("got this many photons: {}", photonRecordsReply.getPhotonRecordsCount());
  });

  return 0;
}

Google API Client for .NET: How to implement Exponential Backoff

I've created a method that adds members to a Google group in a batch request, using .NET Core and Google's .NET client library. The code looks like this:
private void InitializeGSuiteDirectoryService()
{
    _directoryServiceCredential = GoogleCredential
        .FromJson(GlobalSettings.Instance.GSuiteSettings.Credentials)
        .CreateScoped(_scopes)
        .CreateWithUser(GlobalSettings.Instance.GSuiteSettings.User);

    _directoryService = new DirectoryService(new BaseClientService.Initializer()
    {
        HttpClientInitializer = _directoryServiceCredential,
        ApplicationName = _applicationName
    });
}

public OperationResult<int> AddGroupMembers(Group group, IEnumerable<Member> members)
{
    var result = new OperationResult<int>();
    var memberList = members.ToList();
    var batchRequestCount = 0;

    if (memberList.Any())
    {
        var request = new BatchRequest(_directoryService);
        foreach (var member in memberList)
        {
            batchRequestCount++;
            request.Queue<Members>(_directoryService.Members.Insert(member, group.Id), (content, error, i, message) =>
            {
                if (message.IsSuccessStatusCode)
                {
                    // log OK
                }
                else
                {
                    // Implement exponential backoff only on the request that failed.
                }
            });

            if (batchRequestCount == 30 || member.Equals(memberList.Last()))
            {
                request.ExecuteAsync().Wait();
                request = new BatchRequest(_directoryService); // Clear queue
            }
        }
    }
    return result;
}
The logic works fine if the number of members is small; however, when the member count is, say, 100 (the maximum number of users in my Google test instance), I get an error from Google that reads "quotaExceeded". According to Google's documentation, the limit for a batch request on their Admin SDK is 1000, and I've set my logic to execute the batch when it reaches 30 queued requests.
The QUESTION is: how do I implement error handling so the request is retried whenever I get this error? Their documentation suggests implementing 'exponential backoff' when a response contains a 'retry-able error' (which I don't see when I inspect my response).
So here's what I ended up doing to implement exponential backoff on my call to add members to a G Suite group. Since I'm using .NET Core, I was able to use Polly, a resilience and transient-fault-handling library that offers this functionality out of the box. There may be some room for refactoring, but here's what the code looks like for now:
public OperationResult<int> AddGroupMembers(Group group, IEnumerable<Member> members)
{
    var result = new OperationResult<int>();
    var memberList = members.ToList();
    var batchRequestCount = 0;

    if (memberList.Any())
    {
        var request = new BatchRequest(_directoryService);
        foreach (var member in memberList)
        {
            // retryRequest is declared at the class level so the value set on the
            // callback thread is visible to the original thread running the process.
            retryRequest = false;
            batchRequestCount++;
            request.Queue<Members>(_directoryService.Members.Insert(member, group.Id), (content, error, i, message) =>
            {
                // If the error indicates 'quotaExceeded' (HTTP 403), flag the batch for retry.
                // (You can add as many error codes as you'd like to retry here.)
                if (error != null && error.Code == 403)
                {
                    retryRequest = true;
                }
            });

            // Execute the batch request, adding members in batches of 30 max.
            if (batchRequestCount == 30 || member.Equals(memberList.Last()))
            {
                // Below is what the retry code using Polly looks like.
                var response = Policy
                    .HandleResult<HttpResponseMessage>(message => message.StatusCode == HttpStatusCode.Conflict)
                    .WaitAndRetry(new[]
                    {
                        TimeSpan.FromSeconds(1),
                        TimeSpan.FromSeconds(2),
                        TimeSpan.FromSeconds(4)
                    }, (results, timeSpan, retryCount, context) =>
                    {
                        // Log a warning saying a retry was required.
                    })
                    .Execute(() =>
                    {
                        var httpResponseMsg = new HttpResponseMessage();

                        // Execute the batch request synchronously.
                        request.ExecuteAsync().Wait();

                        if (retryRequest)
                        {
                            httpResponseMsg.StatusCode = HttpStatusCode.Conflict;
                            retryRequest = false;
                        }
                        else
                        {
                            httpResponseMsg.StatusCode = HttpStatusCode.OK;
                        }
                        return httpResponseMsg;
                    });

                if (response.IsSuccessStatusCode)
                {
                    // Log info
                }
                else
                {
                    // Log warn
                }

                batchRequestCount = 0; // reset the counter for the next batch
                request = new BatchRequest(_directoryService);
                batchCompletedCount++; // another class-level counter
            }
        }
    }
    return result;
}
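As an aside, Polly can also compute an exponential backoff from the attempt number instead of taking a hard-coded list of delays. A minimal sketch, assuming the same HttpResponseMessage/Conflict convention used above (the retry count of 5 is arbitrary):
// Waits 2, 4, 8, 16 and 32 seconds between attempts.
var backoffPolicy = Policy
    .HandleResult<HttpResponseMessage>(m => m.StatusCode == HttpStatusCode.Conflict)
    .WaitAndRetry(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));
// Then call backoffPolicy.Execute(...) with the same delegate as above.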

NetMQ why is "SendReady" needed for Req-Rep?

I have a problem that I managed to fix... however, I'm a little concerned because I don't really understand why the solution worked.
I am using NetMQ, and specifically a NetMQ poller which has a number of sockets, one of which is a REQ-REP pair.
I have a queue of pending requests that get dequeued and sent one at a time, and the server handles each request type as required and sends back an appropriate response. This had been working without issue; however, when I tried to add an additional request type the system stopped working as expected: the request would reach the server, the server would send the response... and the client would not receive it. The message would not arrive at the client until the server was shut down (unusual behavior!).
I had been managing the REQ-REP pair with a flag that I set before sending a request, and reset on receipt of a reply. I managed to fix the issue by only triggering replies within the "SendReady" event of the REQ socket - this automagically fixed all of my issues, however I can't really find anything in the documentation that tells me why the socket might not have been in the "sendready" state, or what this actually does.
Any information that could be shed on why this is working now would be great :)
Cheers.
Edit: Source
Client:
"Subscribe" is run as a separate thread to the UI
private void Subscribe(string address)
{
    using (var req = new RequestSocket(address + ":5555"))
    using (var sub = new SubscriberSocket(address + ":5556"))
    using (var poller = new NetMQPoller { req, sub })
    {
        // Handle updates published by the server
        sub.ReceiveReady += (s, a) =>
        {
            var type = sub.ReceiveFrameString();
            var reply = sub.ReceiveFrameString();
            switch (type)
            {
                case "Type1":
                    manager.ChangeValue(reply);
                    break;
                case "Type2":
                    string[] args = reply.Split(',');
                    eventAggregator.PublishOnUIThread(new MyEvent(args[0], (SimObjectActionEventType)Enum.Parse(typeof(MyEventType), args[1])));
                    break;
            }
        };

        req.ReceiveReady += Req_ReceiveReady;

        poller.RunAsync();

        sub.Connect(address + ":5556");
        sub.SubscribeToAnyTopic();
        sub.Options.ReceiveHighWatermark = 10;

        reqQueue = new Queue<string[]>();
        reqQueue.Enqueue(new string[] { "InitialiseClient", "" });
        req_sending = false;

        while (programRunning)
        {
            if (reqQueue.Count > 0 && !req_sending)
            {
                req_sending = true;
                string[] request = reqQueue.Dequeue();
                Console.WriteLine("Sending " + request[0] + " " + request[1]);
                req.SendMoreFrame(request[0]).SendFrame(request[1]);
            }
            Thread.Sleep(1);
        }
    }
}

private void Req_ReceiveReady(object sender, NetMQSocketEventArgs e)
{
    var req = e.Socket;
    var messageType = req.ReceiveFrameString();
    Console.WriteLine("Received {0}", messageType);
    switch (messageType)
    {
        case "Reply1":
            // Receive action
            break;
        case "Reply2":
            // Receive action
            break;
        case "Reply3":
            // Receive action
            break;
    }
    req_sending = false;
}
Server:
using (var rep = new ResponseSocket("@tcp://*:5555"))
using (var pub = new PublisherSocket("@tcp://*:5556"))
using (var beacon = new NetMQBeacon())
using (var poller = new NetMQPoller { rep, pub, beacon })
{
    // Send program code when a request for a code update is received
    rep.ReceiveReady += (s, a) =>
    {
        var messageType = rep.ReceiveFrameString();
        var message = rep.ReceiveFrameString();
        Console.WriteLine("Received {0} - Content: {1}", messageType, message);
        switch (messageType)
        {
            case "InitialiseClient":
                // Send
                rep.SendMoreFrame("Reply1").SendFrame(repData);
                break;
            case "Req2":
                // do something
                rep.SendMoreFrame("Reply2").SendFrame("RequestOK");
                break;
            case "Req3":
                args = message.Split(',');
                if (args.Length == 2)
                {
                    // Do something
                    rep.SendMoreFrame("Reply3").SendFrame("RequestOK");
                }
                else
                {
                    rep.SendMoreFrame("Ack").SendFrame("RequestError - incorrect argument format");
                }
                break;
            case "Req4":
                args = message.Split(',');
                if (args.Length == 2)
                {
                    requestData = // do something
                    rep.SendMoreFrame("Reply4").SendFrame(requestData);
                }
                else
                {
                    rep.SendMoreFrame("Ack").SendFrame("RequestError - incorrect argument format");
                }
                break;
            default:
                rep.SendMoreFrame("Ack").SendFrame("Error");
                break;
        }
    };

    // set up the discovery beacon with a 1-second interval
    beacon.Configure(5555);
    beacon.Publish("server", TimeSpan.FromSeconds(1));

    // start the poller
    poller.RunAsync();

    // run the simulation loop
    while (serverRunning)
    {
        // todo - make this operate more efficiently
        // push updated variable values to clients
        foreach (string[] message in pubQueue)
        {
            pub.SendMoreFrame(message[0]).SendFrame(message[1]);
        }
        pubQueue.Clear();
        Thread.Sleep(2);
    }

    poller.StopAsync();
}
You are using the Request socket from multiple threads, which is not supported. You are sending on the main thread and receiving on the poller thread.
Instead of using a regular Queue, try NetMQQueue: you can add it to the poller and enqueue from the UI thread. The sending then happens on the poller thread, as does the receiving.
You can read the docs here:
http://netmq.readthedocs.io/en/latest/queue/
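For what it's worth, here is a minimal sketch of that approach (not from the answer; the endpoint, frame layout and lifetime handling are placeholders). Both the queue's and the socket's ReceiveReady handlers run on the poller thread:
var reqQueue = new NetMQQueue<string[]>();
using (var req = new RequestSocket(">tcp://127.0.0.1:5555"))
using (var poller = new NetMQPoller { req, reqQueue })
{
    // Fires on the poller thread whenever something is enqueued.
    reqQueue.ReceiveReady += (s, e) =>
    {
        string[] request = e.Queue.Dequeue();
        req.SendMoreFrame(request[0]).SendFrame(request[1]);
    };
    // Fires on the poller thread when the reply arrives.
    req.ReceiveReady += (s, e) =>
    {
        var replyType = e.Socket.ReceiveFrameString();
        // ... handle the reply ...
    };
    poller.RunAsync();
    // Safe to call from the UI thread.
    reqQueue.Enqueue(new[] { "InitialiseClient", "" });
    Console.ReadLine(); // keep the poller alive for the demo
}
Note that the REQ socket's strict send/receive alternation still applies, so only dequeue and send the next request after the previous reply has been handled.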
The only thing I can think of is that the REP socket is ready to send only after it has actually received a message fully (all parts).

Downloading files from remote server

I am using C# in a console app, and I am using this script to download files from a remote server. There are a couple of things I want to add. First, when it writes to a file it doesn't preserve the original line breaks: it seems to write a certain number of bytes and then start a new line. I would like it to keep the same format as the file it is reading from. Second, there are multiple .jpg files on the server that I need to download. How can I use this script to download multiple .jpg files?
public static int DownLoadFiles(String remoteUrl, String localFile)
{
    int bytesProcessed = 0;

    // Assign values to these objects here so that they can
    // be referenced in the finally block
    StreamReader remoteStream = null;
    StreamWriter localStream = null;
    WebResponse response = null;

    // Use a try/catch/finally block as both the WebRequest and Stream
    // classes throw exceptions upon error
    try
    {
        // Create a request for the specified remote file name
        WebRequest request = WebRequest.Create(remoteUrl);
        request.PreAuthenticate = true;
        NetworkCredential credentials = new NetworkCredential("id", "pass");
        request.Credentials = credentials;
        if (request != null)
        {
            // Send the request to the server and retrieve the
            // WebResponse object
            response = request.GetResponse();
            if (response != null)
            {
                // Once the WebResponse object has been retrieved,
                // get the stream object associated with the response's data
                remoteStream = new StreamReader(response.GetResponseStream());

                // Create the local file
                localStream = new StreamWriter(File.Create(localFile));

                // Allocate a 1k buffer
                char[] buffer = new char[1024];
                int bytesRead;

                // Simple do/while loop to read from stream until
                // no bytes are returned
                do
                {
                    // Read data (up to 1k) from the stream
                    bytesRead = remoteStream.Read(buffer, 0, buffer.Length);

                    // Write the data to the local file
                    localStream.WriteLine(buffer, 0, bytesRead);

                    // Increment total bytes processed
                    bytesProcessed += bytesRead;
                } while (bytesRead > 0);
            }
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
    finally
    {
        // Close the response and streams objects here
        // to make sure they're closed even if an exception
        // is thrown at some point
        if (response != null) response.Close();
        if (remoteStream != null) remoteStream.Close();
        if (localStream != null) localStream.Close();
    }

    // Return total bytes processed to caller.
    return bytesProcessed;
}
Why don't you use WebClient.DownloadData or WebClient.DownloadFile instead?
WebClient client = new WebClient();
client.Credentials = new NetworkCredential("id", "pass");
client.DownloadFile(remoteUrl, localFile);
By the way, what you did is not the correct way to copy one stream to another. You shouldn't read into a char[] at all, since you can run into encoding and end-of-line issues when downloading a binary file. The WriteLine call is problematic too. The right way to copy the contents of one stream to another is:
void CopyStream(Stream destination, Stream source)
{
    int count;
    byte[] buffer = new byte[BUFFER_SIZE];
    while ((count = source.Read(buffer, 0, buffer.Length)) > 0)
        destination.Write(buffer, 0, count);
}
The WebClient class is much easier to use and I suggest using that instead.
The reason you're getting spurious newlines in the result file is that StreamWriter.WriteLine() puts them there. Try using StreamWriter.Write() instead.
Regarding downloading multiple files, can't you just run the function several times, passing it the URLs of the different files you need?
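For example, a minimal sketch of that loop (the URL list, credentials, and target folder are placeholders):
string[] urls =
{
    "http://example.com/images/photo1.jpg",
    "http://example.com/images/photo2.jpg"
};
using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential("id", "pass");
    foreach (string url in urls)
    {
        // Save each file under its original name in C:\Downloads.
        string localFile = Path.Combine(@"C:\Downloads", Path.GetFileName(url));
        client.DownloadFile(url, localFile);
    }
}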
