API Platform: MaxDepth by request parameter - api-platform.com

I'm new to API Platform and I'm creating some relations in my API, but I want to limit the maximum depth of the output (for example, a menu has a submenu, which has a submenu, etc., and I want to limit this depth).
Is there any way to set max depth by a GET parameter (or some other parameter in a request)?

Related

jsPlumb - How do you get all of the actual connections from a scrollable list when some connected elements are scrolled out of view?

I need to allow mapping between two long scrollable lists, and I want to be able to get a list of all the connections created. Using the demo code from the jsPlumb community website I can create the lists and connect items. But when I try to get a list of all the connections, any item whose connector has been "collapsed" to an upper or lower corner of the list container doesn't show the actual connection. For example:
List 1, Item 5 is connected to List 2, Item 2. If I query all connections:
const connections = this.jsPlumbInstance.getAllConnections();
console.log(connections);
And drill into the object I can see the source and target are correct:
If I then scroll List 2, Item 2 out of view and get all connections again:
I now get:
Somewhere, the connection to L2I2 is being maintained - because it "reconnects" when scrolled back into view, but I want to programmatically extract all connected items without resorting to scrolling the lists. Where can I find the "real" mappings between items regardless of their visibility and the connector stacking? Thanks!
I think in fact this information is not readily available. Connections are maintained in the background via a "proxy" concept, handled by the proxyConnection method of JsPlumbInstance (it's the same method that is used when collapsing a group and moving child connections to the collapsed group, incidentally). This method manipulates a proxies array on each connection it is managing.
So, theoretically, you could examine the connections in the return value of getAllConnections() and find the real connection via the proxies array. It could be a bit annoying though. A better solution might be for the list manager to expose a method, maybe? We could follow this up on Github if you like.
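A minimal sketch of that "examine the proxies array" idea might look like the following. Note the assumptions: the exact shape of each `proxies` entry (an object with an `originalEp` endpoint, indexed by source/target) is an internal jsPlumb detail inferred from the description above, so treat the property access as hypothetical rather than a documented API.

```javascript
// Given the array from getAllConnections(), resolve each connection back
// to its "real" source/target element, preferring the original endpoint
// recorded in the (assumed) proxies array over the collapsed proxy.
function resolveRealEndpoints(connections) {
  return connections.map((conn) => {
    const proxies = conn.proxies || [];
    // Assumption: proxies[0] proxies the source endpoint, proxies[1] the
    // target, and each entry carries the original endpoint as `originalEp`.
    const source = (proxies[0] && proxies[0].originalEp)
      ? proxies[0].originalEp.element
      : conn.source;
    const target = (proxies[1] && proxies[1].originalEp)
      ? proxies[1].originalEp.element
      : conn.target;
    return { source, target };
  });
}
```

If the internal shape differs in your jsPlumb version, the same idea applies: fall back to `conn.source`/`conn.target` unless a proxy entry records an original endpoint.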

Google Drive API: list all files at once

Currently, I'm using Files: list API below to get all the files
https://developers.google.com/drive/api/v3/reference/files/list
The attached screenshot contains the parameters used, where I've provided pageSize=1000. This only returns 1000 files per call; I have to set pageToken to the nextPageToken value from the previous response.
Is there a way for the API to return all the files instead of having to set pageToken from the previous response? Please advise.
Answer: No, there is no way to list more than 1000 files without pagination.
Additional information:
If you check the documentation that you yourself have linked,
you will notice that it states that the default page size is 100. That means that if you don't send the pageSize parameter, it will automatically be set to 100 by the system.
You will also notice that it states "Acceptable values are 1 to 1000, inclusive." This means that you can set pageSize to at most 1000.
If you want additional files, you need to use the nextPageToken to get another set of rows.
There is no way around pagination if you want more than 1000 rows. I don't know what you're doing, but maybe you could use the q parameter to search for just the files you are looking for, and thereby limit the response to under 1000.
Also note that my testing suggests you get 100 back for My Drives and 460 (?) for Shared Drives. I have never gotten 1000 back for either type. You will need to iterate each "page" in turn.
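The page iteration above boils down to a standard loop. Here is a sketch, assuming a `listPage` helper that wraps the `files.list` call and returns an object shaped like the Drive v3 response ({ files, nextPageToken }); the helper itself is a placeholder for illustration, not part of any client library.

```javascript
// Accumulate every file by following nextPageToken until it is absent.
async function listAllFiles(listPage) {
  const allFiles = [];
  let pageToken; // undefined on the first call
  do {
    // Ask for the maximum allowed page size (1000) on every call.
    const res = await listPage({ pageSize: 1000, pageToken });
    allFiles.push(...(res.files || []));
    pageToken = res.nextPageToken; // undefined once the last page is reached
  } while (pageToken);
  return allFiles;
}
```

The key point is that `pageToken` must come from the previous response each time; there is no single call that returns everything.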

Query for a cache hit rate graph with prometheus

I'm using Caffeine cache with Spring Boot application. All metrics are enabled, so I have them on Prometheus and Grafana.
Based on cache_gets_total metric I want to build a HitRate graph.
I've tried to get a cache hits:
delta(cache_gets_total{result="hit",name="myCache"}[1m])
and all gets from cache:
sum(delta(cache_gets_total{name="myCache"}[1m]))
Both of these queries work correctly and return values. But when I try to compute the hit ratio, I get no data points. The query I've tried:
delta(cache_gets_total{result="hit",name="myCache"}[1m]) / sum(delta(cache_gets_total{name="myCache"}[1m]))
Why doesn't this query work, and how can I get a HitRate graph based on the information I have from Spring Boot and Caffeine?
Run both queries ("cache hits" and "all gets") individually in Prometheus and compare the label sets on the results.
For the "/" operation to work, both sides have to have exactly the same labels (and values). Usually some aggregation is required to "drop" unwanted dimensions/labels (for example, if you already have a single value from each query, just wrap them both in sum() before dividing).
First of all, it is recommended to use increase() instead of delta for calculating the increase of the counter over the specified lookbehind window. The increase() function properly handles counter resets to zero, which may happen on service restart, while delta() would return incorrect results if the given lookbehind window covers counter resets.
Next, Prometheus searches for pairs of time series with identical sets of labels when performing / operation. Then it applies individually the given operation per each pair of time series. Time series returned from increase(cache_gets_total{result="hit",name="myCache"}[1m]) have at least two labels: result="hit" and name="myCache", while time series returned from sum(increase(cache_gets_total{name="myCache"}[1m])) have zero labels because sum removes all the labels after the aggregation.
Prometheus provides the solution to this issue - on() and group_left() modifiers. The on() modifier allows limiting the set of labels, which should be used when searching for time series pairs with identical labelsets, while the group_left() modifier allows matching multiple time series on the left side of / with a single time series on the right side of / operator. See these docs. So the following query should return cache hit rate:
increase(cache_gets_total{result="hit",name="myCache"}[1m])
/ on() group_left()
sum(increase(cache_gets_total{name="myCache"}[1m]))
There are alternative solutions:
To remove all the labels from increase(cache_gets_total{result="hit",name="myCache"}[1m]) with sum() function:
sum(increase(cache_gets_total{result="hit",name="myCache"}[1m]))
/
sum(increase(cache_gets_total{name="myCache"}[1m]))
To wrap the right part of the query in the scalar() function. This enables the vector-op-scalar matching rules described here:
increase(cache_gets_total{result="hit",name="myCache"}[1m])
/
scalar(sum(increase(cache_gets_total{name="myCache"}[1m])))
It is also possible to get cache hit rate for all the caches with a single query via sum(...) by (name) template:
sum(increase(cache_gets_total{result="hit"}[1m])) by (name)
/
sum(increase(cache_gets_total[1m])) by (name)

What's the expected behavior of the Bing Search API v5 when deeply paginating?

I perform a bing API search for webpages and the query cameras.
The first "page" of results (offset=0, count=50) returns 49 actual results. It also returns a totalEstimatedMatches of 114000000 -- 114 million. Neat, that's a lot of results.
The second "page" of results (offset=49, count=50) performs similarly...
...until I reach page 7 (offset=314, count=50). Suddenly totalEstimatedMatches is 544.
And the actual count of results returned per-page trails off precipitously from there. In fact, over 43 "pages" of results, I get 413 actual results, of which only 311 have unique URLs.
This appears to happen for any query after a small number of pages.
Is this expected behavior? There's no hint from the API documentation that exhaustive pagination should lead to this behavior... but there you have it.
Here's a screenshot:
Each time the API is called, the search API obtains a group of possible matches starting at the specified offset in the result set, and then filters out results based on different parameters (e.g. spam, duplicates, SafeSearch setting, etc.), finally leaving a final result set. If the final result set after filtering and optimization contains more results than the count parameter, then a number of results equal to count is returned. If the count parameter exceeds the size of the final result set, then the entire final result set is returned, which will be less than count. If the search API is called again, passing in the offset parameter to get the next set of results, then the filtering process happens again on the next set of results, which means it may also return fewer than count.
 
You should not expect the full count parameter number of results to always be returned for each API call.  If further search results beyond the number returned are required then the query should be called again, passing in the offset parameter with a value equal to the number of results returned in the previous API call.  This also means that when making subsequent API calls, the offset parameter should never be a hard coded value and should always be calculated based on the results of previous queries. 
 
totalEstimatedMatches can also add to confusion around the Bing Search API results.  The word ‘estimated’ is important because the number is an estimation based on an initial quick result set, prior to the filtering described above.  Additionally, the totalEstimatedMatches value can change as you iterate through the result set by making subsequent API calls with increasing offset values.  The totalEstimatedMatches should only be used as a rough guide indicating the magnitude of the possible result set, and it should not be used to determine the number of results that will ultimately be returned.  To query all of the possible results you should continue making API calls, passing in offset with a value of the sum of the results returned in previous calls, until that sum is greater than totalEstimatedMatches of the most recent API call.
 
Note that you can see this same behavior by going to bing.com directly and using a query such as https://www.bing.com/search?q=bill+gates&count=50. Notice that you will get around 34 results with a totalEstimatedMatches of ~567,000 (valid as of June 2017; future searches may change), and if you click the 'next page' arrow you will see that the next query executed starts at the offset of the 34 results returned by the first query (i.e. https://www.bing.com/search?q=bill+gates&count=50&first=34). If you click 'next' several more times you may see the totalEstimatedMatches also change from page to page.
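The offset bookkeeping described above can be sketched as follows. The `searchPage` helper here is a placeholder standing in for the actual Bing Web Search call, assumed to return { results, totalEstimatedMatches }; the key points from the answer are that the offset advances by the number of results actually returned (never by a hard-coded count) and that the loop trusts the most recent totalEstimatedMatches.

```javascript
// Paginate until the running offset reaches the latest estimate.
async function collectAllResults(searchPage, count = 50) {
  const all = [];
  let offset = 0;
  let totalEstimatedMatches = Infinity;
  while (offset < totalEstimatedMatches) {
    const page = await searchPage({ offset, count });
    if (!page.results || page.results.length === 0) break; // nothing left
    all.push(...page.results);
    // Advance by what actually came back, not by `count`.
    offset += page.results.length;
    // The estimate can shrink as you paginate; always use the latest value.
    totalEstimatedMatches = page.totalEstimatedMatches;
  }
  return all;
}
```

Given the filtering behavior described above, expect this loop to terminate long before reaching the initial multi-million estimate.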
This seems to be expected behavior. The Web Search API is not a crawler API, thus it only delivers results, that the algorithms deem relevant for a human. Simply put, most humans won't skim through more than a few pages of results, furthermore they expect to find relevant results on the first page.
If you could retrieve the results in the millions, you could simply copy their search index and Bing would be out of business.
Search indices seem to be things of political and economic power, as far as I know there are only four relevant search indices world wide: from Google, from Microsoft (Bing), from Russia, and from China.
Those who control the search, control the Spice... ;-)

Google Places API inconsistency

Adding the expected types parameter changes the response in an unexpected way.
Request 1: https://maps.googleapis.com/maps/api/place/search/json?location=38.4551,-122.672045&radius=100&sensor=false&key=
Request 2: https://maps.googleapis.com/maps/api/place/search/json?location=38.4551,-122.672045&radius=100&sensor=false&types=park&key=
Both requests should return the place named "Howarth Park", since it is of type park. And the funny thing is that increasing the radius to 500 brings back the expected result. But then how come it is returned in the first request, with no types parameter and the same radius?
This is due to the way the Google Places API processes Search Requests.
The Places API will return up to 20 establishment results within the specified radius. Additionally, area identity results may be returned to help identify the area in which the establishments are located.
If no type has been specified in the Places API Search Request, these area identity results are not strictly limited to the radius specified in the request, however when a type has been specified, additional area identity results are strictly limited to the radius specified in the request.
