We have a web service where we'd like to deliver responses that vary depending on the client's local time. Conceptually, let's say we have a REST endpoint to report which meal is coming up next (good for Hobbits):
GET http://.../nextMeal
Response: "Next meal is lunch!"
I want to be able to make the GET request without encoding the time as a query parameter; I would think this should be possible using an HTTP header. Googling around, I see there are some old drafts proposing this, but nothing official nor widely adopted.
Is there such a standard? If not, why not...?
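For illustration, here is a rough sketch of what the non-standard approach might look like with a custom request header. The header name X-Client-Timezone is made up (there is no standard header for this), and the meal schedule is just an example:

const express = require('express');
const app = express();

app.get('/nextMeal', (req, res) => {
  // "X-Client-Timezone" is a made-up header carrying an IANA zone name; default to UTC.
  const tz = req.get('X-Client-Timezone') || 'UTC';
  const hour = Number(new Intl.DateTimeFormat('en-US', {
    hour: 'numeric', hour12: false, timeZone: tz,
  }).format(new Date()));
  const meal = hour < 9 ? 'breakfast' : hour < 11 ? 'second breakfast'
             : hour < 13 ? 'lunch' : hour < 19 ? 'dinner' : 'supper';
  res.send(`Next meal is ${meal}!`);
});

app.listen(3000);

// Client side: a browser can discover its own zone and send it along.
fetch('http://localhost:3000/nextMeal', {
  headers: { 'X-Client-Timezone': Intl.DateTimeFormat().resolvedOptions().timeZone },
}).then(r => r.text()).then(console.log);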
I am making an application that shows real-time status for a Valorant game: players alive, the type of weapons each player has, time remaining, etc.
Is it possible to use Riot Valorant API to do this for live matches or for previously played matches?
As far as I know, you can't. But I think you should try with Riot Games' official production API, not the development API.
Let me know if you find something relevant.
(This is adding onto Sanskar's answer, which I cannot comment on as I lack the required 'reputation')
I'm aware that this is an old question, but for anyone who happens to have stumbled upon it: there is no way to obtain real-time in-game events. However, there is a way to retrieve certain data from a match, just not in an official way; it goes against Riot Games' ToS on using third-party software. That said, I wouldn't worry about this too much as long as you do not ruin the competitive integrity of the game by giving yourself an in-game advantage over others. I personally have been using this for over a year now and have not received any form of punishment for doing so.
Anyhow, back to the actual question of this thread: check out this document of API endpoints that have been scraped by monitoring the HTTP traffic of the Riot Client. https://github.com/techchrism/valorant-api-docs/tree/trunk/docs/ You'll need to obtain certain authorization tokens for the Valorant account through whatever methods are available to you (I pray that it is through lawful means :) ), which depends heavily on the type of endpoint. There are wrappers for these endpoints already made by other users on GitHub, and you can always ask for help in the small community of developers using these endpoints, linked in the README of the GitHub page above.
REMEMBER NOT TO USE THIS FOR ANYTHING THAT WOULD CREATE AN UNFAIR ADVANTAGE, OR ANYTHING ELSE THAT A RIOT EMPLOYEE WOULD NOT APPROVE OF :)
I wonder if someone else has experienced the same issue, and might have an answer to it.
I am using the Google Places API. There I make two kinds of requests:
https://maps.googleapis.com/maps/api/place/textsearch/
and
https://maps.googleapis.com/maps/api/place/details/
After I have made about 20,000 of these requests, my quota of 150,000 has been eaten up and I get an error message.
The strange thing is, when I look at the Google API Console I can see the following:
In the APIs & Services section I can see request counts that reflect the real requests I have made,
while in the IAM & Admin quota section I see a much higher value.
This looks artificially high and is limiting the service far too early.
Does anyone else have the same issue?
I figured out why there is this difference between the requests in the API view and the requests in the quota view.
When using the Text Search Places API, each text search request is multiplied by a factor of 10 towards your free quota.
It is mentioned on this page:
TextSearchRequests
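That multiplier lines up with the numbers in the question; a rough back-of-the-envelope check (assuming most of the 20,000 requests were text searches):

// Rough check only; assumes the requests were (mostly) Text Search requests.
const textSearchRequests = 20000;  // requests shown in APIs & Services
const quotaMultiplier = 10;        // each Text Search counts 10x against the quota
const quotaUnitsUsed = textSearchRequests * quotaMultiplier; // 200,000
console.log(quotaUnitsUsed > 150000); // true -> the 150,000 quota is already exhausted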
We use the Google Calendar API v3, and Google has said they'll be discontinuing support for JSON-RPC: Discontinuing support for JSON-RPC and Global HTTP Batch Endpoints.
I can't find whether they plan a compliant v4 version or whether the current version is already compliant. The documentation doesn't mention it. Java Quickstart
Any information about that?
It's not just Calendar that is affected; all Google discovery-based APIs are affected. The batching endpoint
POST /batch HTTP/1.1
Authorization: Bearer your_auth_token
Host: www.googleapis.com
Content-Type: multipart/mixed; boundary=batch_foobarbaz
Content-Length: total_content_length
will be discontinued around March 25, 2019. That being said, I am skeptical that the client libraries have all been updated to remove it already. I am a contributor on two of them and haven't heard anything yet about removing the batching ability from the libraries.
Google API Client Libraries have been regenerated to no longer make
requests to the global HTTP batch endpoint. Clients using these
libraries must upgrade to the latest version. Clients not using the
Google API Client Libraries and/or making custom calls to the JSON-RPC
endpoint or HTTP batch endpoint will need to make the changes outlined
below.
The global batching endpoint is
www.googleapis.com/batch
the new one is
www.googleapis.com/batch/<api>/<version>
I think the choice of words is incorrect here; they will be regenerated if needed. The change should not affect users, with one exception: heterogeneous batch requests. A single batch request containing more than one API within the call won't work, because the endpoint is API specific.
Now for the bad news: to my knowledge, there is nothing that is going to replace it. You will not be able to make heterogeneous batch requests. The Google APIs Java client library appears to use the old endpoint (BatchRequest.java), so if you are using heterogeneous batching you're going to have to change your code by the time they update the library to support the new API-specific endpoint.
Update
After a lot of back and forth with Google over the last 24 hours I have gotten some clarification on that post.
Batching will still work with the client libraries.
Most of the client libraries appear to already use this endpoint so there should be no change.
You will only be able to call one API within a batch request. For example, you can't call the Drive API and the Calendar API in the same batch request; you will have to make two batch requests, one for Drive and one for Calendar.
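For illustration, here is a sketch of what two homogeneous batches against the new API-specific endpoints could look like if you build the multipart body yourself (the inner request paths and ACCESS_TOKEN are placeholders; the client libraries normally construct this for you):

const https = require('https');

// Sends one homogeneous batch to a single API-specific endpoint, e.g. '/batch/calendar/v3'.
function sendBatch(apiPath, innerRequests, token) {
  const boundary = 'batch_foobarbaz';
  const body = innerRequests.map(r =>
    `--${boundary}\r\nContent-Type: application/http\r\n\r\n${r}\r\n`
  ).join('') + `--${boundary}--`;

  const req = https.request({
    host: 'www.googleapis.com',
    path: apiPath,
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': `multipart/mixed; boundary=${boundary}`,
      'Content-Length': Buffer.byteLength(body),
    },
  }, res => res.pipe(process.stdout));
  req.end(body);
}

// Two separate homogeneous batches instead of one heterogeneous /batch call:
sendBatch('/batch/calendar/v3', ['GET /calendar/v3/users/me/calendarList HTTP/1.1'], ACCESS_TOKEN);
sendBatch('/batch/drive/v3', ['GET /drive/v3/files HTTP/1.1'], ACCESS_TOKEN);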
There may be some edits coming to that post to clear up the language a little to be more clear.
I have updated my answer to reflect the clarifications from Google
It is not removing batching entirely.
Per the blog, they are removing heterogeneous batching, i.e. putting requests to several different APIs into a single batch call. They are also consolidating homogeneous batching (batching requests that all go to the same API) onto "API specific batch endpoints".
From my understanding of the blog, if you are batching several different kinds of requests, i.e. a Foo request and a Bar request in the same batch call, you will have to adjust your code to use one batch for the Foo requests and another for the Bar requests. If you are already doing that, it is unclear whether or not you will have to change your code; perhaps newly released libraries will have a new way to handle these requests.
We have Intermec CK71 mobile devices (WiFi). There will always be a scenario in which the device sends a request (GET, PUT, or POST), then loses connection. What methods can we use to prevent duplicate PUTs or POSTs? How does the client device know whether or not the server processed its request before losing the connection?
I have seen similar posts like this but the marked answer doesn't go into much detail. I'm not even sure where to begin. Should I be looking into caching (ETag, last modified), or some type of handshaking?
The client device has the .Net Compact Framework 3.5 on it and is hitting the server via its Web API 2 endpoints.
If someone can point me to the right direction or offer any suggestions it would be much appreciated. Thanks.
I am not using REST, but from what I have read there is no easy way to get an acknowledgement and avoid duplicate POSTs.
As with other high-level API frameworks, you are tied to what the API offers, and it looks like the designers did not think about connection aborts.
The easiest workaround seems to be to send a unique ID with every POST and have the server keep track of these UIDs. If the server does not respond with OK for a POST, you have to assume the connection broke or something else went wrong. Then query the server for the UID you posted to find out whether the previous POST was successful before you try another POST with the same data and UID.
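A rough sketch of that idea (illustrative JavaScript even though the real client is .NET CF; the Request-Id header and the /requests/{uid} lookup endpoint are made up and would have to be implemented by your Web API, which must remember processed IDs):

const { randomUUID } = require('crypto');

// POST with a client-generated UID; if the response is lost, ask the server
// whether it already processed that UID before re-posting the same data.
async function postOnce(url, payload, uid = randomUUID(), attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'Request-Id': uid },
        body: JSON.stringify(payload),
      });
      if (res.ok) return res;          // server acknowledged this POST
    } catch (e) {
      // Connection dropped: unknown whether the server processed the request.
    }
    const check = await fetch(`${url}/requests/${uid}`); // made-up lookup endpoint
    if (check.ok) return check;        // it was processed, do not POST again
    // Otherwise loop and re-POST with the same data and the same UID.
  }
  throw new Error('POST could not be confirmed after retries');
}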
Possibly there is some transaction encapsulation available with REST, as there is for SQL Server. A 'transaction' protocol would ensure that a POST has either been processed successfully or is rolled back if something failed.
Sorry, but I do not know much about REST.
I am trying to make a web app using Express.js and CoffeeScript that pulls data from Amazon's, Last.fm's, and Bing's web APIs.
Users can request data such as the prices for a specific album from a specific band, upcoming concert times and locations for a band, etc... stuff like that.
My question is: should I make these API calls client-side using jQuery and getJSON or should they be server-side? I've done client-side requests; how would I even make an API call from the server side?
I just want to know what the best practice is, and also if someone could point me in the right direction for making server-side API requests, that would be very helpful.
Thanks!
There are two key considerations for this question:
1. Do the calls incur any data access? Are the results just going to be written to the screen?
2. How and where do you plan to handle errors? How do you handle throttling?
Item #2 is really important here because web services go down all the time for a whole host of reasons. Your calls to Bing, Amazon, and Last.fm will probably fail 1% or 0.1% of the time (based on my experience).
To make requests using server-side JS, you probably want to take a look at the Request package on NPM.
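For example (sketch only; the URL is a placeholder and you would want the error handling from point #2 above):

// npm install request
const request = require('request');

request('https://api.example.com/albums?artist=Radiohead', (err, response, body) => {
  if (err || response.statusCode !== 200) {
    // Remote APIs fail some fraction of the time; decide here whether to retry or degrade.
    return console.error('Upstream call failed:', err || response.statusCode);
  }
  console.log(JSON.parse(body));
});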
It's often good to abstract away your storage and dependent services to isolate changes and offer a consolidated, consistent web API for your application. But sometimes, if you have a good hypermedia web API (RESTful responses link to other resources), you could reference a resource link from another service in the response from your service (e.g. an SO response could reference a user's Gravatar image/resource). There's no one-size-fits-all answer: it depends on whether you want to encapsulate the dependency or integrate with it.
It might be beneficial to make the web API requests from your own service and expose the results via Express.js as your own web APIs.
Making HTTP web API requests from Node is easy. Here's another SO post covering that:
HTTP GET Request in Node.js Express
Well, the way you describe it, I think you may want to fetch data from Amazon, Last.fm, and so on, process it with Node, save it in your database, and provide your own API.
You can use Node's http.request() to fetch the data and build your own REST API with Express.js.
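A minimal sketch of that shape (the upstream URL, query parameters, and route are placeholders; https.get() is just the convenience wrapper around the request() call):

const express = require('express');
const https = require('https');

const app = express();

// Your own API endpoint; server-side it fetches from the upstream web API.
app.get('/albums/:artist', (req, res) => {
  const upstream = 'https://ws.audioscrobbler.com/2.0/?method=artist.gettopalbums' +
                   '&artist=' + encodeURIComponent(req.params.artist) + '&format=json';
  https.get(upstream, apiRes => {
    let data = '';
    apiRes.on('data', chunk => (data += chunk));
    apiRes.on('end', () => {
      try {
        res.json(JSON.parse(data));  // process / cache / store before exposing, if needed
      } catch (e) {
        res.status(502).send('Bad upstream response');
      }
    });
  }).on('error', err => res.status(502).send(err.message));
});

app.listen(3000);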