I have a Teams messaging extension that is returning search results. The documentation mentions paging functionality, but I cannot get this to work:
https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/search-commands/respond-to-search?tabs=dotnet
I would expect my search response to tell Teams that there are additional results (e.g. via a total-results parameter) so that paging becomes available, but I cannot see anywhere that this is set.
As a result, users only see the first page of results.
This is undocumented, but apparently as long as you return a number of results equal to the requested page size, Teams will request another page.
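For illustration, here is a minimal sketch of that trick in Python (the names are placeholders, not SDK identifiers; skip and count would come from the queryOptions of the incoming messaging extension query payload):

    # Minimal sketch: slice the result set by the skip/count the query requests.
    # 'all_results', 'skip' and 'count' are placeholders; skip and count come
    # from the queryOptions of the incoming messaging extension query payload.
    def get_result_page(all_results, skip, count):
        page = all_results[skip:skip + count]
        # Returning exactly 'count' results signals Teams that more may exist,
        # so it will issue another query with an increased 'skip'.
        return page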
I'm developing a web app that needs to retrieve the last 10 videos of a user (channel).
First approach
Was to use the search endpoint with the param 'forMine', ordering by date. But then I figured that this param might also retrieve videos uploaded by the user to a different channel, or whatever...
First result with channel ID and date - 1st Approach
Second approach
Was to use the search endpoint with the param 'channelId', ordering by date. But then I realized that descriptions were incomplete and, most importantly, there were some videos missing compared with the first approach, even though the missing videos belonged to the same channel (as shown in the linked pictures).
First result with channel ID and date - 2nd Approach
So then I googled for a solution and found another way.
Third approach
Was to use the playlistItems endpoint, as I found on Google. It seemed OK (I supposed) because it returned the same videos as the first approach and consumed less quota, but this method left me with doubts: I didn't know whether the videos would be the latest, or whether they might be sorted by position in the playlist and so couldn't be trusted to be the most recent.
That said, what would be the correct way to get the N most recent videos from a channel, please?
This is regardless of quota consumption (the less quota the better, of course, but an accurate result is essential).
I'm so confused by the API response...
Thank you so much!
-- EDITED: NEW APPROACH AND FURTHER INVESTIGATIONS --
Fourth approach
Was to use the activities endpoint, as stated by #stvar in his answer. I found that this way, as with the second approach, some videos were missing compared with the first and third approaches. It was also necessary to retrieve everything without the 'maxResults' param, because there are activities not related to video uploads; that makes it mandatory to paginate and then filter by type 'upload' after retrieving the response in order to get N videos (or to be confident of getting N uploaded videos within the first 50 retrieved items).
Self Investigations
Further investigation and tests brought me the answer to the issue of the 'missing videos' in some approaches.
The status of those missing videos was 'unlisted': they were videos uploaded to the channel, property of the channel, uploaded by the user of the channel... but not retrieved by some methods, which seem to retrieve only 'public' videos, not 'unlisted' (hidden) or 'private' ones.
NOTE: I did my tests with the Google API PHP Client Library; this behaviour does not seem to occur with 'Try this API', as it returns only 'public' items, so be careful about trusting 'Try this API' results, as it seems to use some hidden filters or something...
I also tested the channel uploads playlist to verify that its order cannot be changed and that it has a LIFO sorting.
CONCLUSIONS
At this point, my own conclusion is that there is no single proper way to solve this, but rather several ways to do it, depending on your requirements regarding video status and the amount of free quota.
The search endpoint seems to work all right. If you have a good amount of unused quota (100 units per call), it is the most direct and easiest way, as you can sort and filter as needed via a bunch of params, taking care to use the 'forMine' param if you need every uploaded video, or 'channelId' if you need only 'listed' and 'public' ones.
The playlistItems endpoint is a proper way if you are in a quota crisis (1 unit per call), as the result is sorted by most recent date. Take care to paginate and post-filter if only 'public' videos are needed, until you retrieve the desired number of video IDs; otherwise you can go the easy way (see the sketch after the note below).
Note that the date used for ordering is the upload date, not the publish date
(thanks to #stvar for bringing this to my attention).
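For reference, here is a minimal sketch of this playlist approach using the google-api-python-client library (an authorized 'youtube' client and the function name are my assumptions; the uploads playlist ID is first read from the channel's contentDetails):

    # Minimal sketch: N most recent uploads via the channel uploads playlist.
    # Assumes an authorized 'youtube' client from googleapiclient.discovery.build.
    def uploads_playlist_items(youtube, channel_id, n):
        channels = youtube.channels().list(
            part="contentDetails", id=channel_id).execute()
        uploads_id = (channels["items"][0]["contentDetails"]
                      ["relatedPlaylists"]["uploads"])
        items, page_token = [], None
        while len(items) < n:
            kwargs = dict(part="snippet,status", playlistId=uploads_id,
                          maxResults=50)
            if page_token:
                kwargs["pageToken"] = page_token
            response = youtube.playlistItems().list(**kwargs).execute()
            items.extend(response.get("items", []))
            page_token = response.get("nextPageToken")
            if not page_token:
                break
        # status.privacyStatus is available on each item if you need to
        # post-filter for 'public' videos only.
        return items[:n]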
The activities endpoint, also an option in a quota crisis (1 unit per call), could be more accurate than the others if you only want public videos (it is ordered by most recent 'first publish date', so not 100% accurate either), but for me it is the one that takes the most work, as it retrieves activities other than 'video upload'; so you cannot skip pagination and post-filtering to retrieve the desired number of video IDs. Besides, that way you only have access, as said before, to public videos (which is fine if that meets your needs).
Anyway, if you need more than 50 IDs, you need to paginate whatever approach you use.
Hope this helps someone else, and thanks so much to the contributors.
PS: To the people in charge of the YouTube API: perhaps a filter by status, among some others, would be interesting. Thanks!!!
You may employ the Activities.list API endpoint, queried with:
mine=true,
part=snippet,contentDetails,
fields=items(snippet(type),contentDetails(upload)), and
maxResults=50.
To obtain your desired N uploads, you have to implement pagination; that is, you have to call the endpoint successively until you have accumulated N result set items whose snippet.type equals upload.
Note that you may well use channelId=CHANNEL_ID instead of mine=true if you're interested in the most recent uploads of a channel identified by its ID CHANNEL_ID rather than your own channel.
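By way of illustration only, here is a minimal sketch of that pagination loop with the google-api-python-client library (the authorized 'youtube' client and the function name are assumptions, not part of the docs):

    # Minimal sketch: collect the N most recent upload video IDs.
    # Assumes an authorized 'youtube' client from googleapiclient.discovery.build.
    def most_recent_uploads(youtube, n):
        video_ids, page_token = [], None
        while len(video_ids) < n:
            kwargs = dict(
                mine=True,  # or channelId=CHANNEL_ID
                part="snippet,contentDetails",
                fields="items(snippet(type),contentDetails(upload)),nextPageToken",
                maxResults=50,
            )
            if page_token:
                kwargs["pageToken"] = page_token
            response = youtube.activities().list(**kwargs).execute()
            for item in response.get("items", []):
                if item["snippet"]["type"] == "upload":
                    video_ids.append(item["contentDetails"]["upload"]["videoId"])
                    if len(video_ids) == n:
                        break
            page_token = response.get("nextPageToken")
            if not page_token:
                break  # no more pages
        return video_ids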
According to the docs, you'll get from this endpoint a result set made of Activities resource items that will contain the following info:
contentDetails.upload (object)
The upload object contains information about the uploaded video. This property is only present if the snippet.type is upload.
contentDetails.upload.videoId (string)
The ID that YouTube uses to uniquely identify the uploaded video.
The official docs state that each call to Activities.list endpoint has a quota cost of one unit.
Furthermore, upon obtaining a set of video IDs, you may invoke the Videos.list endpoint with a properly assigned id parameter, to obtain from the endpoint all the details you need for each and every video of your interest.
Note that if you have a set of video IDs of cardinality K, then, since the id parameter of the Videos.list endpoint can be specified as a comma-separated list of video IDs, you may reduce the number of calls to the Videos.list endpoint from K to ceil(K / 50), i.e. floor(K / 50) + (K % 50 ? 1 : 0), by appropriately using this feature of id.
According to the official docs, each call to the Videos.list endpoint also has a quota cost of one unit.
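A minimal batching sketch along those lines (same assumed 'youtube' client as above; the 50-ID chunk size follows from the limit just mentioned):

    # Minimal sketch: fetch details for K video IDs in ceil(K / 50) calls.
    def video_details(youtube, video_ids):
        details = []
        for i in range(0, len(video_ids), 50):
            response = youtube.videos().list(
                part="snippet,contentDetails,status",
                id=",".join(video_ids[i:i + 50]),
            ).execute()
            details.extend(response.get("items", []))
        return details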
Clarifications upon OP's request:
Question no. 1: The Activities.list endpoint produces only the activities specified by the Activities resource. The type property enumerates them all:
snippet.type (string)
The type of activity that the resource describes.
Valid values for this property are: channelItem, comment (not currently returned), favorite, like, playlistItem, promotedItem, recommendation, social, subscription, upload, bulletin (deprecated).
Indeed, your remark is correct. For example, when getting the most recent 10 uploads, it is possible that you'll have to scan a number of pages P of result sets, with P >= 2, until you have collected the desired 10 upload items. (Actual tests have confirmed to me that this is factual.)
Question no. 2: The Activities.list endpoint produces items that are sorted by publishedAt; just replace the above fields with:
fields=items(snippet(type,publishedAt),contentDetails(upload))
and see that for yourself.
I can make the following argument justifying the necessity that the items resulting from an invocation of the Activities.list endpoint be ordered chronologically by publishedAt (the newest first). One may note that, indeed, the official docs quoted above do not explicitly specify the ordering condition I just mentioned; but bear with me for a while:
My argument is of a pragmatic kind: if the result set of Activities.list were not ordered as mentioned, then this endpoint would become useless, since, in that case, to obtain the most recent upload activity, one would have to fetch all the upload activities locally and then scan that result set for the most recent one. Being compelled to fetch all upload activities only to obtain the newest one is pragmatically nonsense. Therefore, by way of contradiction, the result set has to be ordered chronologically by publishedAt, with the newest first.
Question no. 3: Indeed Search.list is not precise -- it has a fuzzy behavior. I can confirm this based on my own experience; but, unfortunately, I cannot point you to official docs (from Google or YouTube) that acknowledge and explain this behavior. As unfortunate as it is, for its users Search.list is completely opaque.
On the other hand, Activities.list is precise; it has to be. If it weren't precise, then that would be a serious bug in the implementation (in my educated opinion).
I've read the official documentation of the Google Contacts API version 3.0.
(https://developers.google.com/contacts/v3/)
In the 'Retrieving all contacts' section, there is a note saying:
The feed may not contain all of the user's contacts, because there's a default limit on the number of results returned. For more information, see the max-results query parameter in Retrieving contacts using query parameters.
I'm wondering about that 'default limit', because I would like to refer to Google's standard when developing.
Does anyone know the value of the default limit?
The default max depends on the API and the method itself. Some of the YouTube methods only return 50 at most; others return 500.
Unfortunately, the Google Contacts API is a very old API and not well documented. If you don't send max-results with your request, then you will get the default.
You can also send something really big, like 100000; if the API refuses it, it should return an error stating its max.
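As a quick way to probe that, here is a minimal sketch using Python's requests library (the access token is a placeholder, and the oversized max-results value is just for the probe):

    import requests

    # Minimal sketch: probe the Contacts API v3 default/max limits.
    # ACCESS_TOKEN is a placeholder for a valid OAuth 2.0 token.
    response = requests.get(
        "https://www.google.com/m8/feeds/contacts/default/full",
        headers={"Authorization": "Bearer ACCESS_TOKEN",
                 "GData-Version": "3.0"},
        params={"max-results": 100000},  # deliberately oversized
    )
    # If the value exceeds the allowed max, the error body should say so.
    print(response.status_code, response.text[:200])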
I perform a Bing API search for webpages with the query cameras.
The first "page" of results (offset=0, count=50) returns 49 actual results. It also returns a totalEstimatedMatches of 114000000 -- 114 million. Neat, that's a lot of results.
The second "page" of results (offset=49, count=50) performs similarly...
...until I reach page 7 (offset=314, count=50). Suddenly totalEstimatedMatches is 544.
And the actual count of results returned per-page trails off precipitously from there. In fact, over 43 "pages" of results, I get 413 actual results, of which only 311 have unique URLs.
This appears to happen for any query after a small number of pages.
Is this expected behavior? There's no hint from the API documentation that exhaustive pagination should lead to this behavior... but there you have it.
Each time the API is called, the search API obtains a group of possible matches starting at the requested offset in the result set, and then filters out results based on different parameters (e.g. spam, duplicates, safe-search setting, etc.), finally leaving a final result set. If the final result set after filtering and optimization contains more than count results, then a number of results equal to count is returned. If the count parameter is greater than the size of the final result set, then the whole final result set is returned, which will be fewer than count results. If the search API is called again, passing in the offset parameter to get the next set of results, then the filtering process happens again on the next set of results, which means it may also return fewer than count.
You should not expect the full count number of results to always be returned for each API call. If further search results beyond the number returned are required, then the query should be called again, passing in an offset parameter with a value equal to the number of results returned in the previous API call. This also means that when making subsequent API calls, the offset parameter should never be a hard-coded value, and should always be calculated based on the results of previous queries.
totalEstimatedMatches can also add to confusion around the Bing Search API results. The word ‘estimated’ is important because the number is an estimation based on an initial quick result set, prior to the filtering described above. Additionally, the totalEstimatedMatches value can change as you iterate through the result set by making subsequent API calls with increasing offset values. The totalEstimatedMatches should only be used as a rough guide indicating the magnitude of the possible result set, and it should not be used to determine the number of results that will ultimately be returned. To query all of the possible results you should continue making API calls, passing in offset with a value of the sum of the results returned in previous calls, until that sum is greater than totalEstimatedMatches of the most recent API call.
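Here is a minimal sketch of that pagination pattern in Python with requests (the endpoint shown is the current v7 one and the subscription key is a placeholder; adjust both to your deployment):

    import requests

    ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"
    HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_KEY"}  # placeholder

    def fetch_all(query, count=50):
        offset, results = 0, []
        while True:
            response = requests.get(ENDPOINT, headers=HEADERS, params={
                "q": query, "count": count, "offset": offset,
            }).json()
            page = response.get("webPages", {}).get("value", [])
            if not page:
                break
            results.extend(page)
            # Advance by the number actually returned, never a hard-coded step.
            offset += len(page)
            if offset >= response["webPages"]["totalEstimatedMatches"]:
                break
        return results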
Note that you can see this same behavior by going to bing.com directly and using a query such as https://www.bing.com/search?q=bill+gates&count=50. Notice that you will get around 34 results with a totalEstimatedMatches of ~567,000 (valid as of June 2017; future searches may change), and if you click the 'next page' arrow you will see that the next query executed will start at the offset of the 34 results returned by the first query (i.e. https://www.bing.com/search?q=bill+gates&count=50&first=34). If you click 'next' several more times, you may see the totalEstimatedMatches also change from page to page.
This seems to be expected behavior. The Web Search API is not a crawler API; it only delivers results that the algorithms deem relevant for a human. Simply put, most humans won't skim through more than a few pages of results; furthermore, they expect to find relevant results on the first page.
If you could retrieve the results in the millions, you could simply copy their search index and Bing would be out of business.
Search indices seem to be things of political and economic power; as far as I know, there are only four relevant search indices worldwide: Google's, Microsoft's (Bing), Russia's, and China's.
Those who control the search, control the Spice... ;-)
I am wondering how I can use the Slack API to feed message history into GSA (Google Search Appliance) and keep it up to date.
Has anyone written a script for this?
I don't have a ready-made script, but it should be possible as you've imagined. IMO (without being familiar with the Slack API, but with some knowledge of Slack archive sizes, i.e., >500K messages), the main challenge would be to identify and extract only the pieces of information that are important to you from the Slack archive; choosing your GSA feed record elements too discretely can easily make you run out of your GSA document index license limit (e.g., imagine if every message were a separate feed record).
In other words, you need to identify the discrete feed records, keeping them as large as possible in order to keep document license usage to a minimum, while keeping them discrete enough to yield accurate results.
Once that's done, or if your GSA index license limit is not a problem, one possible solution is to create an incremental/full feed by reading updates from the Slack archive using its API, compiling the new records found into the GSA feed format (with information that you want to be able to search on, or omit, contained within the tags as appropriate, and info that you need to present in the results contained in HTML meta tags), and pushing those new records into the GSA, as sketched below.
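As a rough illustration of that push step, here is a minimal sketch that wraps a single record in the GSA XML feed format and posts it to the appliance's feed port (the host, data source name, record URL, and content are all placeholders):

    import requests

    # Minimal sketch: push one incremental feed record to a GSA.
    # GSA_HOST, 'slack-archive' and the record URL/content are placeholders.
    feed_xml = """<?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "gsafeed.dtd">
    <gsafeed>
      <header>
        <datasource>slack-archive</datasource>
        <feedtype>incremental</feedtype>
      </header>
      <group>
        <record url="http://slack-archive.example.com/channel/msg-group-42"
                mimetype="text/html">
          <content><![CDATA[
            <html><head><meta name="channel" content="general"/></head>
            <body>...one batched group of messages...</body></html>
          ]]></content>
        </record>
      </group>
    </gsafeed>"""

    requests.post(
        "http://GSA_HOST:19900/xmlfeed",
        data={"feedtype": "incremental", "datasource": "slack-archive"},
        files={"data": ("feed.xml", feed_xml, "text/xml")},
    )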
Another solution, if you are able to host a few web application pages that the GSA can crawl, would even allow you to keep its index up to date with a continuous crawl. For this you'd need at least one "jump page", which would just be a list of links, each populated with query string parameters to be passed to your detail record page; these parameters would identify a set of Slack message archive element IDs that you've determined should be indexed as a discrete record. You'd then need to set your "jump page" URL to be crawled by the GSA, and also develop your XSLT or other search results consumer service to read/render the returned results using the info contained in the meta tags. Note: when the consumer service makes the search call to the GSA, it'll need to pass in a "&getfields=*" query string parameter to get the GSA to return all the info contained in the meta tags.
I hope that my wording is not too esoteric and helps you in some way in designing your solution.
I'm trying to use the Google Places API for a business locator app, but am having trouble creating an exhaustive database of businesses.
1. The API call only returns 20 results.
2. The "type" restriction (e.g. type=restaurant) does not pick up all businesses of that type in a given zip code. I could use "keyword", but not all restaurants have "restaurant" in their name, and not all spas have "spa" in their name.
3. Each call produces the same set of results from day to day, and with only 20 returns per call, how am I to get a more exhaustive database of businesses?
I can try to get around the above three constraints by looping through a very finely broken-down search of businesses: say, by zip code, some list of keywords, and category type. But I still won't get close to picking up the 50 million or so businesses in Google Places.
In fact, even when I make a call for restaurants and bars in my own neighborhood, I don't pick up popular places down the block from me.
How is the API usable for an app that locates places then?
Any suggestions on how to create a more exhaustive search?
Thanks,
Nad
I'm not able to answer your question regarding Google Places API.
But for your requirements ('business locator app', 'I don't pick up popular places down the block from me'), I suggest you try the Yelp Search API:
Yelp's API program enables you to access trusted Yelp information in real time, such as business listing info, overall business ratings and review counts, deals and recent review excerpts.
Yelp is a popular review website with a capable API; you can judge the quality of the database and the devoted user base they have at the Yelp homepage.
Note:
They keep some data for themselves and do not return everything in response.
The (free) dev account has a limit of 100 calls per 24 hours.
I know I'm late, but maybe this helps someone these days.
By default, each Nearby Search or Text Search returns up to 20 establishment results per query; however, each search can return as many as 60 results, split across three pages.
You need to use the next_page_token field that you receive in the first response, passing it back via the pagetoken parameter to get the next page.
https://developers.google.com/places/web-service/search
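A minimal sketch of that token handoff against the web service, using Python's requests (the API key, location, and function name are placeholders; the short pause is needed because the token takes a moment to become valid):

    import time
    import requests

    ENDPOINT = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

    # Minimal sketch: collect all (up to 60) results of a Nearby Search.
    def nearby_all_pages(api_key, location, radius, place_type):
        params = {"key": api_key, "location": location,
                  "radius": radius, "type": place_type}
        results = []
        while True:
            data = requests.get(ENDPOINT, params=params).json()
            results.extend(data.get("results", []))
            token = data.get("next_page_token")
            if not token:
                break  # at most 60 results, split across three pages
            time.sleep(2)  # the token takes a moment to become valid
            params = {"key": api_key, "pagetoken": token}
        return results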
A post on Stack Overflow says:
There is no way to get more than 60 results in the Places API. Some people tried to file a feature request in the Google issue tracker, but Google rejected it with the following comment: "Unfortunately Places API is not in a position to return more than 60 results. Besides technical reasons (latency, among others), returning more than 60 results would make the API be more like a database or general-purpose search engine. We'd rather improve search quality so that users don't need to go so far down a long list of results."
google places api more than 60 results
I faced the same difficulties that you did and decided to use the Yelp API instead. It is free, very complete, and returns up to 1,000 results. You should, however, check the terms of service before doing anything. It does not provide the business's website (only the Yelp page link).
https://www.yelp.com/developers/documentation/v3/business_search
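For illustration, a minimal sketch of the business search call with Python's requests (the API key is a placeholder; per the docs, limit maxes out at 50 per call and offset + limit at 1,000):

    import requests

    # Minimal sketch: page through Yelp Fusion business search results.
    # YELP_API_KEY is a placeholder for your Fusion API key.
    def yelp_search(term, location, pages=3):
        businesses = []
        for page in range(pages):
            response = requests.get(
                "https://api.yelp.com/v3/businesses/search",
                headers={"Authorization": "Bearer YELP_API_KEY"},
                params={"term": term, "location": location,
                        "limit": 50, "offset": page * 50},
            ).json()
            businesses.extend(response.get("businesses", []))
        return businesses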
Other options I investigated at that time:
Foursquare venues (it was very expensive, and only returned up to around 100 results)
Here places API
Factual Places (I don't think this one is an API)
Sygic Travel API (Specific for touristical spots)
Planet.osm (OpenStreetMap)