Bing Maps API inconsistently fails on certain postal code lookups

I have an application that uses the Bing Maps API to retrieve coordinates for a postal code, and then I perform spatial queries based on the result. There are times when I get empty results, but when I wait a few minutes it succeeds. I added logic that retries a handful of times on failure, but that doesn't seem to help. Here's the empty result I get back:
{"authenticationResultCode":"ValidCredentials","brandLogoUri":"http://dev.virtualearth.net/Branding/logo_powered_by.png","copyright":"Copyright © 2014 Microsoft and its suppliers. All rights reserved. This API cannot be accessed and the content and any results may not be used, reproduced or transmitted in any manner without express written permission from Microsoft Corporation.","resourceSets":[{"estimatedTotal":0,"resources":[]}],"statusCode":200,"statusDescription":"OK","traceId":"7a6bfca3f89b4f94a4693a410da4feb7|CH10043840|02.00.107.2300|CH1SCH050102529"}
And here's the URL I'm calling:
http://dev.virtualearth.net/REST/v1/Locations?q=50613&o=json&key=MyApiKey
Is there a way I can retrieve further information based on the traceId? Or is this something that's just accepted when using Bing Maps API?

First, check the number of requests you're making in a given time window and relate it to the type of Bing Maps key you're using. Basic keys are rate limited, which means that if you exceed the allowed number of requests in a given period, you will be blocked.
Bing Maps Trial and basic key and rate limitation information
Those types of key are rate limited for security and logical reasons (over a 24-hour period, and with a minimum time between requests), and that's why you're getting a blank response with no indication that geocoding failed.
See the Terms of Use regarding the limitations and other restrictions (load and stress tests as well as hammering are part of it): http://www.microsoft.com/maps/product/terms.html
So, to analyze where your problem comes from, you might:
Check the type of key you're using and how many calls you're making in a given period
Check the headers of the response: a specific header, X-MS-BM-WS-INFO, is set to 1 if you are rate limited (see the sketch below)
See the MSDN about error handling: http://msdn.microsoft.com/en-us/library/ff701703.aspx
If that's not your case (if you have an enterprise account), contact technical support so they can officially get back to you and check the key.
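For instance, a minimal retry loop that inspects that header might look like the following Python sketch (assuming the requests library; the function name, backoff policy, and key are my own placeholders):

import time
import requests

BING_GEOCODE_URL = "http://dev.virtualearth.net/REST/v1/Locations"

def geocode(postal_code, api_key, max_attempts=5):
    # Geocode a postal code, backing off when Bing signals rate limiting.
    for attempt in range(max_attempts):
        resp = requests.get(
            BING_GEOCODE_URL,
            params={"q": postal_code, "o": "json", "key": api_key},
        )
        # X-MS-BM-WS-INFO is 1 when the request was rate limited, even though
        # the HTTP status is still 200 with an empty resource set.
        if resp.headers.get("X-MS-BM-WS-INFO") == "1":
            time.sleep(2 ** attempt)  # back off before retrying
            continue
        resources = resp.json()["resourceSets"][0]["resources"]
        if resources:
            return resources[0]["point"]["coordinates"]
        return None  # a genuine "no match", not a rate-limit block
    raise RuntimeError("rate limited on every attempt")

# coords = geocode("50613", "MyApiKey")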

Related

What is "sf_max_daily_api_calls"?

Does anyone know what the "sf_max_daily_api_calls" parameter in Heroku mappings does? I do not want to assume it is a daily limit for write operations per object, and I cannot find an explanation.
I tried to open a ticket with Heroku, but in their support ticket form the "Which application?" drop-down is required, and none of the support categories offer anything to choose from there; the only option is "Please choose..."
I tried to find any reference to this field and can't; I can only see it used in Heroku's Quick Start guide, but without an explanation. I'm working on a very busy object, heavy on reads and writes, and want to understand any limitations I need to account for.
Salesforce orgs have a rolling 24-hour limit on daily API calls. The limit is generally very generous in test orgs (sandboxes), 5M calls, because you can make stupid mistakes there. In production it's lower. A bit counterintuitive, but it protects their resources and forces you to write optimised code/integrations...
You can see your limit in Setup -> Company Information. There's a formula in the documentation; roughly speaking, you gain more of that limit with every user license you purchase (more for "real" internal users, less for community users), the same as with data storage limits.
Also, every API call is supposed to return the current usage (in a special tag for the SOAP API, in a header for the REST API), so I'm not sure why you'd have to hardcode anything...
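For illustration, reading that REST header might look like this minimal Python sketch (using the requests library; the header is Sforce-Limit-Info with a value like api-usage=18/15000, while the instance URL, token, and API version are placeholders):

import requests

def api_usage(instance_url, access_token):
    # Any REST call returns the Sforce-Limit-Info header; /limits is a cheap one.
    resp = requests.get(
        f"{instance_url}/services/data/v52.0/limits",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    usage = resp.headers["Sforce-Limit-Info"]  # e.g. "api-usage=18/15000"
    used, limit = usage.split("=", 1)[1].split("/")
    return int(used), int(limit)

# used, limit = api_usage("https://myorg.my.salesforce.com", token)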
If you write your operations right, the limit can be very generous. I have no idea how Heroku Connect works. Ideally you'd spot some mention of "Bulk API 2.0" in its documentation, or try to find synchronous vs asynchronous in there.
A normal, old-school synchronous update via the SOAP API lets you process 200 records at a time, spending 1 API call. The REST Bulk API accepts CSV/JSON/XML of up to 10K records and processes them asynchronously; you poll for an "is it done yet" result. So starting a job, uploading files, committing the job and then checking, say, once a minute can easily total 4 API calls, and you can process millions of records before hitting the limit.
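As a rough illustration of that four-call pattern, here's a hedged Python sketch of the Bulk API 2.0 ingest flow (Bulk API 2.0 itself takes CSV; treat the endpoint paths, API version, and object name as assumptions to verify against the docs):

import time
import requests

def bulk_update(instance_url, token, csv_payload):
    base = f"{instance_url}/services/data/v52.0/jobs/ingest"
    auth = {"Authorization": f"Bearer {token}"}

    # 1. Create the ingest job (1 API call).
    job = requests.post(base, headers=auth, json={
        "object": "Account", "operation": "update", "contentType": "CSV",
    }).json()

    # 2. Upload the CSV data (1 API call, many records at once).
    requests.put(
        f"{base}/{job['id']}/batches",
        headers={**auth, "Content-Type": "text/csv"},
        data=csv_payload,
    )

    # 3. Mark the upload complete so processing starts (1 API call).
    requests.patch(f"{base}/{job['id']}", headers=auth,
                   json={"state": "UploadComplete"})

    # 4. Poll once a minute until the job finishes (1 API call per poll).
    while True:
        state = requests.get(f"{base}/{job['id']}", headers=auth).json()["state"]
        if state in ("JobComplete", "Failed", "Aborted"):
            return state
        time.sleep(60)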
When all else fails, you've exhausted your options, can't optimise any further, can't purchase more user licenses... I think they sell "packets" of additional API call limit; contact your account representative. But there are lots of things you can try before that, not the least of them being setting up a warning when you hit, say, a 30% threshold.

D365 Same Tracking Token was assigned to Email/Case in Customer Service

One customer had a problem where an incorrect email (from another customer) was assigned to a case. The incorrectly assigned email is a response to a case that was deleted. However, the current case has the same tracking token as the deleted one. It seems that the CRM system reuses a tracking token as soon as it becomes available again. This should not happen! From our point of view, Microsoft has a real programming error here. The only solution we see is to increase the number of digits to the maximum so that it takes longer until all tracking tokens are used up. But in the end, you still reach the limit.
Is there another possibility, or has Microsoft really made a big mistake in the way emails are allocated?
We also activated Smart Matching, but that didn't help in this case either, because the allocation was made via the tracking token first.
Thanks
The structure of the tracking token can be configured and is set to 3 digits by default. This means that as soon as 999 emails are reached, the tracking token starts again at 1, which is basically a design flaw on Microsoft's part.
If you have set up "Automatic replies", that range is exhausted in the shortest possible time. We therefore had to increase the number to 9 digits, which is also not a 100% solution: at some point this number of emails is also reached, and then emails are again assigned to cases that do not belong together. Microsoft has to come up with another solution.

Proper way to obtain the N latest videos of a channel: search vs playlistItems vs activities endpoints

I'm developing a web app that needs to retrieve the last 10 videos of a user (channel).
First approach
Was to use the search endpoint with the param 'forMine', ordering by date, but then I figured that maybe that param could retrieve videos uploaded by the user to a different channel or whatever...
First result with channel ID and date - 1st Approach
Second approach
Was to use the search endpoint with the param 'channelId', ordering by date, but then I realized that descriptions were incomplete and, most importantly, some videos were missing compared with the first approach, even though the missing videos belonged to the same channel (as shown in the pic links)
First result with channel ID and date - 2nd Approach
So, then I googled to find some solution and found other way.
Third approach
Was to use the playlistItems endpoint, as I found on Google, and it seemed OK (I supposed) because it returned the same videos as the first approach and consumed less quota. But this method left me with doubts, as I didn't know whether the videos would be the latest, or whether they would be sorted by position in the playlist and couldn't be trusted to be the most recent
That said, what would be the correct way to get the N most recent videos from a channel, please?
Regardless of the quota consumption (the less quota the better, of course, but an accurate result is essential)
I'm so confused by the API response...
Thank you so much!
-- EDITED: NEW APPROACH AND FURTHER INVESTIGATIONS --
Fourth approach
Was to use the activities endpoint, as stated by #stvar in his answer. I found that this way, as with the second approach, some videos were missing compared with the first and third approaches. It was also required to retrieve everything without the 'maxResults' param, because there were activities not related to video uploads, making it mandatory to paginate and filter by type 'upload' after retrieving the response in order to get N videos (or to be confident of getting N uploaded videos in the first 50 retrieved items)
Self Investigations
Further investigations and tests brought me the answer to the issue of the 'missing videos' in some approaches.
The status of those missing videos was 'unlisted': they were videos uploaded to the channel, property of the channel, uploaded by a user of the channel... but not retrieved by some methods, which seemed to retrieve only 'public' videos, not 'unlisted' (hidden) or 'private' ones.
NOTE: I did my tests with the Google API PHP Client Library; this behaviour does not seem to occur in 'Try this API', which returns only 'public' items, so be careful about trusting 'Try this API' results, as it seems to use some hidden filters or something...
I also tested the channel uploads playlist to verify that its order cannot be changed and that it has LIFO ordering
CONCLUSIONS
At this point, my own conclusion is that there is no single proper way to solve this, but rather several ways to do it depending on your requirements regarding video status and the amount of free quota
The search endpoint seems to work all right. If you have a good amount of unused quota (100 units per call), it is the most direct and easiest way, as you can sort and filter as needed with a bunch of params, taking care to use the 'forMine' param if you need every uploaded video, or 'channelId' if you need only 'listed' and 'public' ones.
The playlistItems endpoint is a proper way if you are in a quota crisis (1 unit per call), as the result is sorted by most recent date. Take care to paginate and post-filter if only 'public' videos are needed, until you retrieve the desired amount of video IDs; otherwise you can go all the way easily (see the sketch after these conclusions).
Note that the date used for ordering is the upload date, not the post date (thanks to #stvar for bringing this to my attention)
The activities endpoint, also for quota crises (1 unit per call), could be more accurate than the others if you only want public videos (it is ordered by most recent 'first publish date', so not 100% accurate either), but for me it is the one that takes the most work, as it retrieves activities other than 'video upload', so you cannot skip pagination and post-filtering to retrieve the desired amount of video IDs. Besides, that way you only have access, as said before, to public videos (which is fine if that meets your needs).
Anyway, if you need more than 50 IDs, you need to paginate whichever approach you use.
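Here is the kind of playlistItems flow I mean, sketched with the google-api-python-client library (I tested with the PHP client, so treat this Python version as an untested equivalent; the API key is a placeholder). The channel's uploads playlist ID comes from Channels.list, under contentDetails.relatedPlaylists.uploads:

from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")  # placeholder key

def latest_uploads(channel_id, n=10):
    # Every channel has an "uploads" playlist; fetch its ID first (1 quota unit).
    channel = youtube.channels().list(part="contentDetails", id=channel_id).execute()
    uploads_id = channel["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]

    # Page through the uploads playlist (1 quota unit per page of up to 50).
    video_ids, page_token = [], None
    while len(video_ids) < n:
        page = youtube.playlistItems().list(
            part="contentDetails", playlistId=uploads_id,
            maxResults=50, pageToken=page_token,
        ).execute()
        video_ids += [i["contentDetails"]["videoId"] for i in page["items"]]
        page_token = page.get("nextPageToken")
        if not page_token:
            break
    return video_ids[:n]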
Hope this helps someone else, and thanks so much to the contributors
PS: People in charge of the YouTube API: perhaps a filter by status, among some others, would be interesting. Thanks!!!
You may employ the Activities.list API endpoint, queried with:
mine=true,
part=snippet,contentDetails,
fields=items(snippet(type),contentDetails(upload)), and
maxResults=50.
To obtain your desired N uploads, you have to implement pagination. That is, you have to successively call the endpoint until you reach N result set items that have snippet.type equal to upload (a sketch follows below).
Note that you may well use channelId=CHANNEL_ID instead of mine=true, if you're interested in the most recent uploads of a channel identified by its ID CHANNEL_ID rather than your own channel.
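A minimal sketch of that pagination loop, assuming the google-api-python-client library and a placeholder API key (mine=true would require OAuth credentials instead, so this version uses channelId; note also that the fields filter has to let nextPageToken through, or the loop cannot continue):

from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")  # placeholder key

def latest_upload_ids(channel_id, n=10):
    ids, page_token = [], None
    while len(ids) < n:
        page = youtube.activities().list(
            part="snippet,contentDetails",
            channelId=channel_id,
            fields="items(snippet(type),contentDetails(upload)),nextPageToken",
            maxResults=50,
            pageToken=page_token,
        ).execute()
        # Keep only the 'upload' activities, as described above.
        ids += [item["contentDetails"]["upload"]["videoId"]
                for item in page["items"]
                if item["snippet"]["type"] == "upload"]
        page_token = page.get("nextPageToken")
        if not page_token:
            break
    return ids[:n]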
According to the docs, you'll get from this endpoint a result set made of Activities resource items that will contain the following info:
contentDetails.upload (object)
The upload object contains information about the uploaded video. This property is only present if the snippet.type is upload.
contentDetails.upload.videoId (string)
The ID that YouTube uses to uniquely identify the uploaded video.
The official docs state that each call to Activities.list endpoint has a quota cost of one unit.
Furthermore, upon obtaining a set of video IDs, you may invoke the Videos.list endpoint with a properly assigned id parameter, to obtain from the endpoint all the details you need for each and every video of your interest.
Note that if you have a set of video IDs of cardinality K, since the parameter id of the Videos.list endpoint can be specified as a comma-separated list of video IDs, you may reduce the number of calls to the Videos.list endpoint from K to ceil(K / 50), that is floor(K / 50) + (K % 50 ? 1 : 0), by appropriately using this feature of id.
According to the official docs, each call to Videos.list endpoint has also a quota cost of one unit.
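For instance, that batching might look like the following, continuing the same assumed Python client, where video_ids is the list gathered from the activities calls:

def video_details(youtube, video_ids):
    details = []
    # Videos.list accepts up to 50 comma-separated IDs per call, so a set of
    # K IDs costs ceil(K / 50) calls instead of K.
    for start in range(0, len(video_ids), 50):
        batch = video_ids[start:start + 50]
        page = youtube.videos().list(
            part="snippet,contentDetails,statistics",
            id=",".join(batch),
        ).execute()
        details += page["items"]
    return details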
Clarifications upon OP's request:
Question no. 1: The Activities.list endpoint produces only the activities specified by the Activities resource. The type property enumerates them all:
snippet.type (string)
The type of activity that the resource describes.
Valid values for this property are: channelItem, comment (not currently returned), favorite, like, playlistItem, promotedItem, recommendation, social, subscription, upload, bulletin (deprecated).
Indeed, your remark is correct. For example, when getting the most recent 10 uploads, it is possible that you'll have to scan a number of pages P of result sets, with P >= 2, until you have collected the desired 10 upload items. (Actual tests have confirmed this to me to be factual.)
Question no. 2: The Activities.list endpoint produces items that are sorted by publishedAt; just replace the above fields with:
fields=items(snippet(type,publishedAt),contentDetails(upload))
and see that for yourself.
I could make the following argument justifying the necessity that the items resulting from an invocation of the Activities.list endpoint be ordered chronologically by publishedAt (the newest first). One may note that, indeed, the official docs quoted above do not explicitly specify the ordering condition I just mentioned; but bear with me for a while:
My argument is of a pragmatic kind: if the result set of Activities.list is not ordered as mentioned, then this endpoint becomes useless. This is so since, in that case, to obtain the most recent upload activity one would have to fetch locally all the upload activities, and then scan that result set for the most recent one. Being compelled to fetch all upload activities only to obtain the newest one is pragmatically nonsense. Therefore, by way of contradiction, the result set has to be ordered chronologically by publishedAt, with the newest first.
Question no. 3: Indeed Search.list is not precise -- it has a fuzzy behavior. I can confirm this based on my own experience; but, unfortunately, I cannot point you to official docs (from Google or YouTube) that acknowledge and explain this behavior. As unfortunate as it is, for its users Search.list is completely opaque.
On the other hand, Activities.list is precise; it has to be. If it weren't precise, that would be a serious bug in the implementation (in my educated opinion).

Parse.com how to investigate excessive amount of requests

I'm developing a basic messaging system on the Parse.com at the moment and I have noticed in the Events Analytics screen I'm hitting 30,000+ requests per day. This is a shock considering I'm the only person using the system at the moment. Obviously with a few users I would blow my API request limit straight away.
I'm pretty experienced with Parse.com these days, so I'm lean with queries and alert to not putting finds, saves, retrieves, etc. in for loops. I also understand that saveAll() on an array of ParseObjects doesn't always limit the request count to 1 (depending on relationships inside that object).
So how does one track down where the excessive calls are coming from?
I see the above Analytics > Performance > Served Requests data, but how do I drill down to see if cloud code or iOS is the culprit?
Current solution is to effectively unit test each block of Parse code and look at the results in above screen.
For the benefit of others who may happen upon this thread with the same questions, I found some techniques to hunt down where excessive requests are coming from.
1) Parse's documentation on the APIs themselves is really good, but there isn't a lot of information or guidance for the admin interfaces. Under Analytics -> Explorer -> Make a table, there is a capability to download all the requests for a specific day (to import into a spreadsheet). The data isn't very detailed, though, and the dates are epoch timestamps, so it's hard to follow. At least you can see [Request Type, Class, Installation ID], e.g. ["find", "MyParseClass", "Cloud Code"].
2) My other technique was to add custom Analytic events to the code. So in Cloud Code for example, I added the following line to each beforeSave and afterSave event:
Parse.Analytics.track('MyClass_beforeSave', null);
3) Obviously, Parse logs these calls in the Logs window, but given that you can only see the most recent transactions and can't clear them, I found it mostly unhelpful in tracking down the excessive calls.

Exhaustive Search on Google Places

I'm trying to use the Google Places API for a business locator app, but am having trouble creating an exhaustive database of businesses.
1. The API call only returns 20 results.
2. The "type" restriction (e.g. type=restaurant) does not pick up all businesses of that type in a given zip. I could use "keyword", but not all restaurants have "restaurant" in their name, and not all spas have "spa" in their name.
3. Each call produces the same set of results from day to day, and with only 20 returns per call, how am I to build a more exhaustive database of businesses?
I can try to get around the above three constraints by looping through a very finely broken-down search of businesses: say by zip code, some list of keywords, and category type. But I still won't get close to picking up the 50 million or so businesses in Google Places.
In fact, even when I make a call for restaurants and bars in my own neighborhood, I don't pick up popular places down the block from me.
How is the API usable for an app that locates places then?
Any suggestions on how to create a more exhaustive search?
Thanks,
Nad
I'm not able to answer your question regarding Google Places API.
But for your requirements ('business locator app', 'I don't pick up popular places down the block from me') I suggest you try Yelp Search API:
Yelp's API program enables you to access trusted Yelp information in real time, such as business listing info, overall business ratings and review counts, deals and recent review excerpts.
Yelp is a popular review website with a capable API, and you may gauge the quality of the database and the devoted user base they have at the Yelp homepage.
Note:
They keep some data for themselves and do not return everything in response.
The (free) dev account has a limit of 100 calls per 24 hours.
I know I'm late, but maybe this helps someone these days.
By default, each Nearby Search or Text Search returns up to 20 establishment results per query; however, each search can return as many as 60 results, split across three pages.
You need to use the next_page_token field that you receive with the first page of results to request the next page.
https://developers.google.com/places/web-service/search
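A small Python sketch of that paging loop against the Text Search web service (using the requests library; in the JSON web service the field is next_page_token and the request parameter is pagetoken, and the token reportedly needs a brief delay before it becomes valid, hence the sleep):

import time
import requests

SEARCH_URL = "https://maps.googleapis.com/maps/api/place/textsearch/json"

def places_search(query, api_key):
    # Collect up to 60 results (three pages of 20) from a Text Search.
    results, params = [], {"query": query, "key": api_key}
    while True:
        page = requests.get(SEARCH_URL, params=params).json()
        results += page.get("results", [])
        token = page.get("next_page_token")
        if not token:
            break
        time.sleep(2)  # give the token a moment to become valid
        params = {"pagetoken": token, "key": api_key}
    return results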
An issue on Stack Overflow says:
There is no way to get more than 60 results in the Places API. Some people tried to file a feature request in the Google issue tracker, but Google rejected it with the following comment: Unfortunately Places API is not in a position to return more than 60 results. Besides technical reasons (latency, among others), returning more than 60 results would make the API be more like a database or general-purpose search engine. We'd rather improve search quality so that users don't need to go so far down a long list of results.
google places api more than 60 results
I faced the same difficulties that you did and decided to use the Yelp API instead. It is free, very complete, and returns up to 1000 results. You should, however, check the terms of service before doing anything. It does not provide the website of the business (only the Yelp page link).
https://www.yelp.com/developers/documentation/v3/business_search
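As a sketch of how that pagination works (Python with the requests library; the Fusion v3 search caps limit at 50 per call and offset + limit at 1000, which is where the 1000-result figure comes from, though treat those parameter details as assumptions to verify):

import requests

YELP_SEARCH = "https://api.yelp.com/v3/businesses/search"

def yelp_businesses(term, location, api_key, max_results=1000):
    businesses = []
    # Page in steps of 50, the per-call maximum, up to the 1000-result cap.
    for offset in range(0, max_results, 50):
        page = requests.get(
            YELP_SEARCH,
            headers={"Authorization": f"Bearer {api_key}"},
            params={"term": term, "location": location,
                    "limit": 50, "offset": offset},
        ).json()
        batch = page.get("businesses", [])
        businesses += batch
        if len(batch) < 50:  # no more results
            break
    return businesses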
Other options I investigated at that time:
Foursquare Venues. (It was very expensive, and only returned up to around 100 results)
Here places API
Factual Places (I don't think this one is an API)
Sygic Travel API (Specific for touristical spots)
Planet.osm (OpenStreetMap)
