Find usage of HTTP Batch in storage api - google-api

I got notified that Google's JSON-RPC and Global HTTP Batch Endpoints are deprecated. In my case the affected API is "storage#v1" with "Global Batch Endpoints".
I tried to find out where the deprecated API call comes from, but I'm using 24 buckets with several tools accessing them. So is there a way to log deprecated calls? I enabled logging for the buckets, but I could not find any difference in the access log between batch requests and single requests.

Yes "Batching across multiple APIs in a single request" is being discontinued Discontinuing support for JSON-RPC and Global HTTP Batch Endpoints
But what its hard to understand is exactly what is being discontinued.
There are two batching endpoints. The global one is www.googleapis.com/batch,
and the API-specific one is www.googleapis.com/batch/<api>/<version>.
So what's changing?
The global batching endpoint is going away. This means you won't be able to make calls to www.googleapis.com/batch anymore. In the worst case, if you were making batch requests that mixed two APIs, for example Drive and Gmail, you won't be able to do that anymore.
In the future you are going to have to split batch requests up by API.
Will this affect you?
Code-wise, this depends on which client library you are currently using. Some of them have already been updated to use the single-API endpoint (JavaScript and .NET); a few have not been updated yet (PHP and Java, last I checked).
Now, as for your buckets: if I understand them correctly, they all insert into the same place, so you're using the same API, and this probably isn't going to affect you. You are also using Google's SDK, and they are going to keep that updated.
Note
The blog post is very confusing, and there are some internal emails going around Google right now in an attempt to clear up what this means for developers.

You have to find where you are doing heterogeneous batch requests, either directly or through libraries, in your code. In any case, batch requests are not reflected in your bucket logs, because no API or API method per se was deprecated, just a way of sending them.
In detail
You can bundle many requests to different APIs into one batch request. This batch is sent to one magical Google server that splits the batch and routes each API request in it to its respective service.
This Google server is going to be removed, so everything has to be sent directly to the service.
What should you do?
It doesn't look like you are making heterogeneous batch requests, since only one service, Storage, is mentioned. Still, you should probably do one of these:
If you are using Cloud Libraries, update them.
Find out whether you are accessing the URL below
www.googleapis.com/batch
and replace it with the appropriate homogeneous batch API, which in your case is
www.googleapis.com/batch/storage/v1
(a rough sketch of such a homogeneous batch request follows these options).
In case you use batchPath, this seems to be a relevant article.
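For orientation, a homogeneous batch request to the Storage endpoint is just a multipart/mixed POST in which each part is itself an HTTP request. A rough sketch (the boundary string, bucket names, and label values are made up for illustration; see the Storage batch documentation referenced under Official Documentation below for the exact format):

POST /batch/storage/v1 HTTP/1.1
Host: www.googleapis.com
Authorization: Bearer <access-token>
Content-Type: multipart/mixed; boundary=batch_example

--batch_example
Content-Type: application/http

PATCH /storage/v1/b/example-bucket-one?fields=labels HTTP/1.1
Content-Type: application/json

{"labels": {"environment": "test"}}
--batch_example
Content-Type: application/http

PATCH /storage/v1/b/example-bucket-two?fields=labels HTTP/1.1
Content-Type: application/json

{"labels": {"environment": "test"}}
--batch_example--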
Otherwise, if you make heterogeneous calls with gapi, which doesn't seem to be your case, split something like this:
request1 = gapi.client.urlshortener(...)
request2 = gapi.client.storage.buckets.update(...)
request3 = gapi.client.storage.buckets.update(...)
heterogeneousBatchRequest = gapi.client.newBatch();
heterogeneousBatchRequest.add(request1);
heterogeneousBatchRequest.add(request2);
heterogeneousBatchRequest.add(request3);
into something like this
request1 = gapi.client.urlshortener(...)
urlshortnerbatch = gapi.client.newBatch();
urlshortnerbatch.add(request1);
request2 = gapi.client.storage.buckets.update(...)
request3 = gapi.client.storage.buckets.update(...)
storagebatch = gapi.client.newBatch();
storagebatch.add(request2);
storagebatch.add(request3);
Official Documentation
This describes how to make batch requests specifically with the Storage API.


Should an API do more than one thing?

I am a Spring Boot dev.
I develop RESTful web services.
One of my colleagues developed an API that does two things based on an operation type.
If opType = Set, the API sets/unsets a flag at the backend, and if opType = Get, the API gets the status of the flag.
Doesn't this break the architecture of REST APIs?
We have POST/PUT to change some data at the backend, either create or update.
And we have GET, to get the status of something from the backend.
Now, I want the opinion of better developers!
Should this be allowed, i.e. having multiple operations behind one API call, or should we create a separate API for each task?
Also, the front-end devs in my team don't like integrating multiple APIs, suggesting that the more API calls there are, the poorer the user experience the customer will have.
Is this normal practice among app developers?
Comments requested.
GET requests in REST are not supposed to change the state of the server; they are read operations, whereas PUT/POST modify the state of the server, in the most general sense.
So usually you should have two endpoints: GET to read the state of the flag, and PUT/POST to create and modify that state.
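As a rough illustration of that split (sketched with Node/Express to match the rest of this page rather than Spring Boot; the /flag route and in-memory variable are just placeholders):

const express = require('express');
const app = express();
app.use(express.json());

let flag = false; // stand-in for whatever backend state holds the flag

// Read operation: GET only reports the state, it never changes it
app.get('/flag', (req, res) => {
  res.json({ flag });
});

// Write operation: PUT sets or unsets the flag
app.put('/flag', (req, res) => {
  flag = Boolean(req.body.flag);
  res.json({ flag });
});

app.listen(3000);

In Spring Boot the same idea is simply two handler methods, one mapped to GET and one to PUT, instead of a single endpoint switching on opType.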
Having said that, there is nothing that technically restricts you from implementing everything in one API. Such an API won't adhere to REST conventions, that's true, but from the client-server communication standpoint (usually HTTP based), it's still perfectly doable.
Sure, the separation into two endpoints makes the API clearer and the code easier to debug and maintain, but beyond being "RESTful", that can be treated as an opinionated claim.
I didn't really get the argument about integrating multiple APIs - in my understanding the effort is the same, and it's even clearer to front-enders, but they might have their own arguments.

Batched requests with modern Google APIs Node.js client

I've recently been trying to refactor some code that takes advantage of the global batch requests feature for the Google APIs, which was recently deprecated. Currently we use the npm package google-batch, but since it dangerously edits the filesystem and uses the deprecated global endpoint, I'd like to move away from it before the endpoint gets fully removed.
How can I create a batch request using (ideally) only the Node.js client? I want to use methods already present in the client as much as possible since it natively provides Promise and TypeScript support, which I intend to use.
I've looked into the Batchelor package suggested in this answer, but it requires you to manually write the HTTP request object instead of using the Node.js client.
This GitHub issue discusses the use of batch requests in the new Node client.
According to that thread, the new intended way of "batching" (outside of a poorly documented set of endpoints that support it) is to make use of the HTTP/2 feature shipped with the client, and then simply make your requests all at once.
The reason "batching" is in quotes is that I don't believe this explanation matches my definition of batching: the client isn't queuing requests to be executed for you, it's just managing network traffic better when you execute them yourself.
If I understand it correctly, this HTTP/2 feature doesn't actually batch requests; it requires you to queue things yourself and instead tidies up some TCP overhead. In short, I do not believe that batching itself is possible with the API client alone.
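To make that concrete, here is a minimal sketch of the "queue it yourself" approach with the Node.js client. The http2 option and the bucket names are assumptions based on the HTTP/2 support discussed in that GitHub issue, so verify them against the client version you're on:

// Fire the requests yourself; HTTP/2 multiplexes them over one connection,
// but nothing here bundles them into a single batch HTTP request.
const { google } = require('googleapis');

async function run() {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/devstorage.read_only'],
  });
  google.options({ auth, http2: true }); // assumed flag, per the linked issue

  const storage = google.storage('v1');
  const [a, b] = await Promise.all([
    storage.buckets.get({ bucket: 'example-bucket-one' }), // hypothetical buckets
    storage.buckets.get({ bucket: 'example-bucket-two' }),
  ]);
  console.log(a.data.name, b.data.name);
}

run().catch(console.error);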
(FWIW, I would have preferred to comment with a link as I'm uncertain I explained this well, but reputation didn't let me)

Microservice requests

I'm trying to start a little microservice application, but I'm a bit stuck on some technicalities.
I'm trying to build an issue tracker application as an example.
It has 2 database tables, issues and comments. These will also be separate microservices, for the sake of the example.
It has to be a separate API that can be consumed by multiple types of clients, e.g. mobile, web, etc.
When using a monolithic approach, all the codebase is coupled together, and when making a request to, let's say, the REST API, I would handle for example the '/issues/19' request to fetch the issue with the id '19' and its corresponding comments by means of the following pseudocode.
on_request_issue(id):  # handler for the route '/issues/<id>'
    issue = IssuesModel.findById(id)
    issue.comments = CommentsModel.findByIssueId(id)
    return issue
But I'm not sure how I should approach this with microservices. Let's say that we have microservice-issues and microservice-comments.
I could let the client send a request to both '/issues/19' and '/comments/byissueid/19'. But that doesn't look nice from my point of view, since if we have multiple things like this, we're sending a lot of requests for one page.
I could also make a request to microservice-issues and, inside that one, make a request to microservice-comments, but that looks even worse to me than the above, since from what I've read microservices should not be coupled, and this couples them pretty hard.
So then I read about API gateways, that they could/should receive a request and fan out to the other microservices, but I couldn't really figure out how to use an API gateway. Should I write code in there, for example, to catch the '/issues/19' request, then fan out to both microservice-issues and microservice-comments, assemble the result and return it?
In that case I feel I'm doing the work twice; won't the API gateway become a new monolith then?
Thank you for your time
An API gateway sounds like what you need.
If you keep it simple, just triggering the internal APIs, it will not become your new monolith.
It will also allow you to do better processing when your application grows with new microservices, or when you have to support different clients (browser, mobile apps, watch, IoT, etc.).
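A minimal sketch of that fan-out with Express (the service hostnames and port are made up for illustration; Node 18+ assumed for the global fetch):

const express = require('express');
const app = express();

app.get('/issues/:id', async (req, res) => {
  try {
    // Fan out to both services in parallel...
    const [issueRes, commentsRes] = await Promise.all([
      fetch(`http://microservice-issues/issues/${req.params.id}`),
      fetch(`http://microservice-comments/comments?issueId=${req.params.id}`),
    ]);
    const issue = await issueRes.json();
    issue.comments = await commentsRes.json();
    // ...and assemble a single response for the client.
    res.json(issue);
  } catch (err) {
    res.status(502).json({ error: 'upstream service unavailable' });
  }
});

app.listen(3000);

The gateway stays thin this way: it only routes and assembles, while all the business logic stays in the services.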
BTW, the example you show sounds like a good exercise; in reality, for most web apps, it looks like over-design. I would not break every DB call into its own microservice.
One of the motivations for breaking something into small(er) services is service autonomy. In this case the question is: when the comments service is down, should you display the issue or not? If they are always coupled anyway, they probably shouldn't reside in two services; if they aren't, then making two calls will give you that decoupling.
That said, you may still need an API Gateway to solve CORS issues with your client.
Lastly, comments/byissueid is not a good REST interface; the issueId should be a parameter: /comments/?issueId=..

Parse Cloud - Why need this?

I'm new to Parse and I've just set up my server and dashboard on my local machine.
For my use case, I don't just need the simple API from Parse; I need to write a server (with NodeJS + Express) to handle user requests.
I've just seen how to integrate an Express application with Parse, so my application, instead of talking to Parse Server directly, will use my server, which will serve:
The standard Parse API (/classes etc.)
All my other routes, which do not depend on the Parse API
Is this correct?
Reading online, I've seen that Parse Cloud is needed to extend Parse functionality with additional "routing" (if I have understood correctly).
So, in my application I will have:
The standard API (as described above)
All other routes (which do not depend on Parse)
Other routes (that come from Cloud) which use the Parse API
So, is Parse Cloud just a "simple" way to write additional routing? (I've seen that the job function exists too, but I haven't studied it yet.)
My question comes from being a little confused about the real need; I would just like to have more info on when to use it.
Thanks
EDIT
I'll provide an example here (which in part comes from the Parse docs).
I have a Video class with a director name field.
In my application (iOS, Android, etc.) I set up a view that needs to show all the Videos from a particular director.
I have three options:
Get all Videos (/classes/videos) and then filter them directly in the app
Write a NodeJS + Express router endpoint (http://blabla.com/videos/XXX), where XXX is the director, get the result with the Parse JS API, and send it back to the app
Write a Cloud function (which, if I have understood, responds to /functions/) that does the same as the router option (see the sketch below)
This is just a little example, but is this the intended usage of Parse Cloud? (Or at least, one of them :))
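For reference, option 3 could look something like this as a Cloud Code function; a rough sketch, assuming the Video class and director field from the example above (the function name is made up):

// Callable from the app via /functions/videosByDirector or Parse.Cloud.run
Parse.Cloud.define('videosByDirector', async (request) => {
  const query = new Parse.Query('Video');
  query.equalTo('director', request.params.director);
  return query.find();
});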

Design ideas for passing data out of Salesforce

Within Salesforce, we're envisioning someone clicking a quote button on an Account record and having that pass a number of fields' information to one of two systems. One system would be a web application; the other, a Windows application. I was thinking it would be a JavaScript call to the systems, but I'm not sure. What are some of my potential options? How would you go about doing this?
Thanks, and sorry it's so broad.
One thing to look into is Outbound Messaging in Salesforce. Outbound messages are triggered as part of a workflow rule. I think you'll find outbound messaging to be a much more robust solution than an AJAX call to a web service. For instance, if your web service cannot process an incoming request, the outbound message will queue up on the Salesforce side. Then Salesforce will attempt to resend the message at regular intervals.
Outbound messaging is a great approach and I'd choose that direction for single SObject integrations when possible. However, if you need to pass any form of related list (master-detail/lookup relationship) you'll need to tackle this another way since outbound messaging only fires on a single object at a time. You can configure multiple outbound messages to get around this but this can quickly become unmanageable. JavaScript is certainly doable but using SOAP or REST from within Apex is more sturdy and secure.
I prefer REST/HTTP, since Apex has had trouble consuming complex WSDLs from external systems. In fact, Apex is not able to consume the Force.com API or the Metadata API for size reasons. But the built-in HTTPRequest/HTTPResponse classes in Apex, combined with either the built-in XMLStream/DOM or System.JSON classes to parse results, work really well, IMO.

Resources