I recently discovered Parse, and I want to use the push service. So this is how I plan to do targeted push:
Store the device tokens I get from Parse on my server, and associate every device token with the matching user ID.
When it's time to push, my server will tell Parse which devices I want to push to, using the device tokens as the identifier.
I prefer sending all device tokens to the Parse server at once instead of one by one, to reduce the number of requests. What I'm worried about is that Parse will reject my request because of the large list of device tokens. Does Parse have a limit on uploaded data size?
Yes, there is a limit. Neither the where clause nor the data may exceed 64 kB.
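Given that limit, one way to stay under it is to split the token list into batches and issue one push request per batch. A minimal sketch against the legacy Parse REST push endpoint; the batch size, credentials, and payload shape are placeholders, so verify the exact where-clause syntax against the Parse docs:

```typescript
// Sketch: push to many device tokens in batches so the serialized
// "where" clause stays well under Parse's 64 kB limit.
const PARSE_PUSH_URL = "https://api.parse.com/1/push";
const BATCH_SIZE = 500; // assumption: keeps each request far below 64 kB

async function pushToTokens(tokens: string[], alert: string): Promise<void> {
  for (let i = 0; i < tokens.length; i += BATCH_SIZE) {
    const batch = tokens.slice(i, i + BATCH_SIZE);
    const res = await fetch(PARSE_PUSH_URL, {
      method: "POST",
      headers: {
        "X-Parse-Application-Id": process.env.PARSE_APP_ID!, // placeholder
        "X-Parse-REST-API-Key": process.env.PARSE_REST_KEY!, // placeholder
        "Content-Type": "application/json",
      },
      // Target installations whose deviceToken is in this batch.
      body: JSON.stringify({
        where: { deviceToken: { $in: batch } },
        data: { alert },
      }),
    });
    if (!res.ok) throw new Error(`Push batch failed: ${res.status}`);
  }
}
```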
I am creating an application that allows users to submit JSON or Base64 image data via socket.io.
The goal I am trying to achieve is:
if JSON is submitted, the message can have a maximum size of 1MB
if a Base64 image is submitted, the message can have a maximum size of 5MB
From the socket.io docs I can see that:
you can specify a maxHttpBufferSize option value that allows you to limit the maximum message size
namespaces allow you to split logic over a single connection
However, I can't figure out the correct way to get the functionality to work the way I have described above.
Would I need to:
set up 2 separate io instances on the server, one for JSON data and the other for Base64 images (therefore allowing me to set separate maxHttpBufferSize values for each), and then the client can use the correct instance, depending on what they want to submit (if so, what is the correct way of doing this?)
set up 1 instance with a maxHttpBufferSize of 5MB, and then add in my own custom logic to determine message sizes and prevent further actions if the data is JSON and over 1MB in size
set this up in some totally different way that I haven't thought of
Many thanks
From what I can see in the API, maxHttpBufferSize is a parameter for the underlying Engine.IO server (of which there is one instance per Socket.IO server). Obviously you're free to set up two servers, but I doubt it makes sense to split the system into two entirely different applications.
Talk of using namespaces to separate logic is more about handling different messages at different endpoints (for example, you would register a removeUserFromChat message handler for a user connecting via an /admin namespace, but you wouldn't want to register it for a user connecting via the /user namespace).
In the most recent socket server I set up, I defined my own protocol in which part of the response contains an HTTP status code, along with a description that can be displayed to the user. For example, I return 200 on success. If I were uploading a file via a REST HTTP interface, I would expect a 400 (Bad Request) response if my request couldn't be processed, and I believe the same approach makes sense for your use case. Alternatively, you could use 413 (Payload Too Large), or define your own custom 4XX code for the file-too-large case, and handle it in your UI purely based on the code returned. Obviously you don't need to follow the HTTP protocol, and the design decisions are ultimately up to you, but in my opinion it makes sense to return some kind of error response in your message handler.
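For what it's worth, option 2 from the question can be sketched roughly like this: a single server whose maxHttpBufferSize covers the larger limit, with a per-event size check that answers through the acknowledgement callback. The event names and the response shape are purely illustrative, not anything Socket.IO prescribes:

```typescript
import { Server } from "socket.io";

// One Socket.IO server; the transport-level cap covers the larger (5 MB) case.
const io = new Server(3000, { maxHttpBufferSize: 5 * 1024 * 1024 });

const JSON_LIMIT = 1 * 1024 * 1024; // 1 MB for JSON messages

type Reply = { code: number; message: string };

io.on("connection", (socket) => {
  socket.on("json", (payload: unknown, ack: (res: Reply) => void) => {
    // Measure the serialized size and enforce the tighter JSON limit here.
    if (Buffer.byteLength(JSON.stringify(payload)) > JSON_LIMIT) {
      ack({ code: 413, message: "JSON payload exceeds 1 MB" });
      return;
    }
    // ... process the JSON ...
    ack({ code: 200, message: "OK" });
  });

  socket.on("image", (base64: string, ack: (res: Reply) => void) => {
    // Base64 images only need the transport-level 5 MB cap, so just accept.
    // ... process the image ...
    ack({ code: 200, message: "OK" });
  });
});
```

Anything larger than maxHttpBufferSize never reaches these handlers at all: Engine.IO drops the connection, so the in-handler check only has to enforce the tighter 1 MB JSON limit.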
I suspect that maxHttpBufferSize serves a different purpose at a lower level than your use case. When content is sent over the network, it is split into packets of n bytes; when an application writes n bytes, the network sends a packet (the smaller n, the more overhead from network headers; the larger n, the higher the latency, because of the waiting involved in accumulating n bytes before sending). The documentation is not clear about maxHttpBufferSize, but it could be the packet-size (n) configuration, not a limit on the maximum data on the connection.
It seems the HTTP request header Content-Length might serve your purpose: it gives the actual object size, and based on that you can make a decision.
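In plain Node terms that check looks something like the sketch below on an ordinary HTTP endpoint. Note this is illustrative only: whether a Socket.IO transport exposes a Content-Length per message depends on the transport (WebSocket frames don't carry one at all):

```typescript
import { createServer } from "node:http";

// Illustrative only: reject a request whose declared body size exceeds a cap.
const server = createServer((req, res) => {
  const declared = Number(req.headers["content-length"] ?? 0);
  if (declared > 5 * 1024 * 1024) {
    res.statusCode = 413; // Payload Too Large
    res.end("Payload Too Large");
    return;
  }
  // ... read and handle the body ...
  res.statusCode = 200;
  res.end("OK");
});

server.listen(3000);
```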
Currently I'm working on a SaaS with support for multiple tenants that can enable push notifications for their user bases.
I'm thinking of using a message queue to store all pushes and send them with a separate service. That new service would need to read from the queue and send the push notifications.
My question now is: do I need to come up with a complex sending strategy? I know that GCM has a limit of 1000 devices per request, so this needs to be considered. I also can't wait for x pushes to accumulate, as that might delay an earlier push from being sent. My next thought was to create a global array and fill it with pushes from the queue. A loop would then drain that array every second, say, and send the pushes. That way pushes would reliably get sent and I wouldn't exceed the 1000-device limit.
So... although this might work, I'm not sure an infinite loop is the best way to go. I'm wondering if GCM/FCM even has a request limit? If not, I wouldn't need to aggregate the pushes in the first place and could ditch the loop: I could simply fire a request for each push that gets pulled from the queue.
Any enlightenment on this topic or improvement of my prototypical algorithm would be great!
Do I need to come up with a complex sending strategy?
Not really. GCM/FCM is simple enough: just send the message to the GCM/FCM server and it will queue it on its own, then (as per its behavior) send it as soon as possible.
I know that GCM has a limit of 1000 devices per request, so this needs to be considered.
I think you're misunderstanding the 1000-devices-per-request limit. It refers to the number of registration tokens you add to the list when using the registration_ids parameter:
This parameter specifies a list of devices (registration tokens, or IDs) receiving a multicast message. It must contain at least 1 and at most 1000 registration tokens.
This means a single request can deliver the same message payload to at most 1000 devices (if you need more, send batch requests of 1000 tokens each).
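In practice that batching is just a chunking loop. A rough sketch against the legacy GCM/FCM HTTP endpoint, with the server key and payload shape as placeholders:

```typescript
const FCM_URL = "https://fcm.googleapis.com/fcm/send";
const MAX_TOKENS_PER_REQUEST = 1000; // documented cap for registration_ids

async function sendMulticast(tokens: string[], data: object): Promise<void> {
  // Slice the token list into batches of at most 1000 and send one request each.
  for (let i = 0; i < tokens.length; i += MAX_TOKENS_PER_REQUEST) {
    const res = await fetch(FCM_URL, {
      method: "POST",
      headers: {
        Authorization: `key=${process.env.FCM_SERVER_KEY}`, // placeholder key
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        registration_ids: tokens.slice(i, i + MAX_TOKENS_PER_REQUEST),
        data,
      }),
    });
    if (!res.ok) throw new Error(`FCM request failed: ${res.status}`);
  }
}
```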
I'm wondering if GCM / FCM even has a request limit?
AFAIK, there is no such limit. Ditch the loop. Whenever you successfully send a message to the GCM/FCM server, it will enqueue and keep the message until it is able to deliver it.
I have an application that uses the Bing Maps API to retrieve coordinates for a postal code and then performs spatial queries based on the result. There are times when I get empty results, but when I wait a few minutes it succeeds. I added logic that retries a handful of times on failure, but that doesn't seem to help. Here's the empty result I get back:
{"authenticationResultCode":"ValidCredentials","brandLogoUri":"http://dev.virtualearth.net/Branding/logo_powered_by.png","copyright":"Copyright © 2014 Microsoft and its suppliers. All rights reserved. This API cannot be accessed and the content and any results may not be used, reproduced or transmitted in any manner without express written permission from Microsoft Corporation.","resourceSets":[{"estimatedTotal":0,"resources":[]}],"statusCode":200,"statusDescription":"OK","traceId":"7a6bfca3f89b4f94a4693a410da4feb7|CH10043840|02.00.107.2300|CH1SCH050102529"}
And here's the URL I'm calling:
http://dev.virtualearth.net/REST/v1/Locations?q=50613&o=json&key=MyApiKey
Is there a way I can retrieve further information based on the traceId? Or is this something that's just accepted when using Bing Maps API?
You should first check the number of requests you're making in a given period and relate that to the type of Bing Maps key you're using. Basic keys are rate limited, which means that if you exceed the allowed number of requests within a given window, you will be blocked.
Bing Maps Trial and basic key and rate limitation information
Those types of keys are rate limited for security and logical reasons (over a 24-hour period and with minimum time between requests), and that's why you're getting a blank response with no indication that the geocoding failed.
See the Terms of Use regarding the limitations and other restrictions (load and stress tests as well as hammering are part of it): http://www.microsoft.com/maps/product/terms.html
So, in order to try to analyze where your problem comes from, you might:
Check the type of key you're using and how many calls you're making in a given period
Check the response headers; the X-MS-BM-WS-INFO header is set to 1 if you are being rate limited (see the sketch after this list)
See the MSDN about error handling: http://msdn.microsoft.com/en-us/library/ff701703.aspx
If you're not in this case (i.e. you have an enterprise account), reach out to technical support so they can officially get back to you and check the key.
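If you want to detect the throttling programmatically, a minimal sketch of the header check (the key is a placeholder; the header name comes from the Bing Maps documentation):

```typescript
// Sketch: detect Bing Maps rate limiting via the X-MS-BM-WS-INFO header.
async function geocode(postalCode: string): Promise<unknown> {
  const url =
    `http://dev.virtualearth.net/REST/v1/Locations?q=${postalCode}` +
    `&o=json&key=${process.env.BING_MAPS_KEY}`; // placeholder key
  const res = await fetch(url);
  if (res.headers.get("X-MS-BM-WS-INFO") === "1") {
    // Rate limited: this is the 200 OK / empty resourceSets case shown above.
    throw new Error("Rate limited by Bing Maps; back off before retrying");
  }
  return res.json();
}
```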
I'm currently working with an API that receives all the data sent to it via query strings. Some of the query strings I have to send are rather long, and the request just dies with an unknown exception; I assume it's because they exceed the maximum length.
The ideal solution would be to switch to using POST data but as I don't control the API I'd have to wait until the owners of the API can update it.
Is there a way to increase the maximum query string length on Windows Phone to get around this?
I wasn't able to find a solution to this so we're asking the API vendor to switch to POST data.
For reference (as I wasn't able to find this information anywhere) Windows Phone doesn't seem to be able to make requests to URLs over 2078 characters long.
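If you're stuck with GET in the meantime, a trivial guard against that limit may save some head-scratching; the 2078 figure is just the empirical ceiling reported above, so treat it as an assumption:

```typescript
// Guard against the empirically observed ~2078-character URL limit.
const MAX_URL_LENGTH = 2078;

function assertUrlWithinLimit(url: string): void {
  if (url.length > MAX_URL_LENGTH) {
    throw new Error(
      `URL is ${url.length} chars, over the ${MAX_URL_LENGTH}-char limit; ` +
        "send the payload as POST data instead",
    );
  }
}
```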
I'm using the VB6 Winsock control. When I do a POST to a server I get back the response as multiple Data arrival events.
How do you know when all the data has arrived?
(I'm guessing it's when the Winsock_Close event fires)
I have used VB6 Winsock controls in the past, and what I did was format my messages in a certain way so I could tell when all the data had arrived.
Example: Each message starts with a "[" and ends with a "]".
"[Message Text]"
When data comes in through the DataArrival event, check for the end-of-message marker "]". If it's there, you've received at least one whole message, and possibly the start of the next one. If the rest of the message hasn't arrived yet, store the partial data in a form-level variable and append to it the next time DataArrival fires.
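The same accumulate-and-split idea, sketched in TypeScript for brevity (in VB6 the buffer would be the form-level variable mentioned above, and the bracket delimiters are the ones chosen in the example):

```typescript
// Sketch of delimiter-based framing: accumulate incoming chunks and emit
// complete "[...]"-delimited messages as they become available.
let buffer = "";

function onDataArrival(chunk: string, handle: (msg: string) => void): void {
  buffer += chunk;
  let end: number;
  while ((end = buffer.indexOf("]")) !== -1) {
    const start = buffer.indexOf("[");
    if (start !== -1 && start < end) {
      handle(buffer.slice(start + 1, end)); // message text without the brackets
    }
    buffer = buffer.slice(end + 1); // keep whatever follows (a partial message)
  }
}
```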
In HTTP, you have to parse and analyze the reply data that the server is sending back to you in order to know how to read it all.
First, the server sends back a list of CRLF-delimited header lines, which are terminated by a blank CRLF-delimited line by itself. You then have to look at the actual values of the 'Content-Length' and 'Transfer-Encoding' headers to know how to read the remaining data.
If there is no 'Transfer-Encoding' header, or if it does not contain a 'chunked' item in it, then the 'Content-Length' header specifies how many remaining bytes to read. But if the 'Transfer-Encoding' header contains a 'chunked' item, then you have to read and parse the remaining data in chunks, one at a time, in order to know when the data ends (each chunk reports its own size, and the last chunk reports a size of 0).
And no, you cannot rely on the connection being closed after the reply has been sent, unless the 'Connection' header explicitly says 'close'. For HTTP 1.1, that header is usually set to 'keep-alive' instead, which means the socket is left open so the client can send more requests on the same socket.
Read RFC 2616 for more details.
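Condensed into TypeScript, the decision described above looks like this (header parsing is simplified; a real client must also handle header case, folding, and so on):

```typescript
type Framing =
  | { mode: "chunked" }               // read size-prefixed chunks until a 0-size chunk
  | { mode: "length"; bytes: number } // read exactly this many bytes
  | { mode: "until-close" };          // read until the server closes the socket

// Decide how to read an HTTP reply body from already-parsed headers
// (keys assumed lowercased).
function bodyFraming(headers: Map<string, string>): Framing {
  const te = headers.get("transfer-encoding");
  if (te !== undefined && te.toLowerCase().includes("chunked")) {
    return { mode: "chunked" };
  }
  const cl = headers.get("content-length");
  if (cl !== undefined) {
    return { mode: "length", bytes: Number(cl) };
  }
  return { mode: "until-close" };
}
```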
No, the Close event doesn't fire when all the data has arrived; it fires when the connection is closed. It's not the Winsock control's job to know when all the data has been transmitted; it's yours. As part of your client/server communication protocol implementation, you have to tell the client what to expect.
Suppose your client wants the contents of a file from the server. The client doesn't know how much data is in the file. The exchange might go something like this:
client sends request for the data in the file
the server reads the file, determines the size, prepends the size to the data (let's say as 4 bytes) so the client knows how much data to expect, and starts sending
your client code knows to strip the first 4 bytes off any data that arrives after a file request and store them as the amount of data to follow, then accumulate the subsequent data, through any number of DataArrival events, until it has that amount
Ideally, the server would append a checksum to the data as well, and you'll have to implement some sort of timeout mechanism, figure out what to do if you don't get the expected amount of data, etc.
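A compact sketch of that length-prefix scheme, using a 4-byte big-endian size header as the answer suggests (again TypeScript for brevity; the VB6 version would accumulate into a form-level variable the same way):

```typescript
// Length-prefixed framing: a 4-byte big-endian size header, followed by
// exactly that many bytes of payload.
let pending = Buffer.alloc(0);

function onDataArrival(chunk: Buffer, handle: (payload: Buffer) => void): void {
  pending = Buffer.concat([pending, chunk]);
  // A complete frame needs the 4-byte header plus the advertised byte count.
  while (pending.length >= 4) {
    const size = pending.readUInt32BE(0);
    if (pending.length < 4 + size) break; // wait for more DataArrival events
    handle(pending.subarray(4, 4 + size));
    pending = pending.subarray(4 + size);
  }
}
```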