Is there a batch "get messages" call in the Go (golang) client library?
I don't see one:
https://godoc.org/google.golang.org/api/gmail/v1
I can get a list of message IDs, but then I have to fetch each message one at a time, by ID.
Answer
There is a GitHub issue on the Go client's repo regarding this topic, and apparently this feature is unlikely to be supported anytime soon. It may, however, be implemented in the next generation of the client.
Possible workaround
You can implement the batching feature yourself by making HTTP calls to the www.googleapis.com/batch or www.googleapis.com/batch/<api>/<version> endpoints. The former is due to be discontinued on August 12, 2020, but you can still use the latter past that date for homogeneous requests (in your case, doing GET requests based on messageId, you should have no problem). You can read more about this in the following official Google Developers Blog post: https://developers.googleblog.com/2018/03/discontinuing-support-for-json-rpc-and.html
Related
I've recently been trying to refactor some code that takes advantage of the global batch request feature for the Google APIs, which was recently deprecated. Currently we use the npm package google-batch, but since it dangerously edits the filesystem and uses the deprecated global endpoint, I'd like to move away from it before the endpoint gets fully removed.
How can I create a batch request using (ideally) only the Node.js client? I want to use methods already present in the client as much as possible since it natively provides Promise and TypeScript support, which I intend to use.
I've looked into the Batchelor package suggested in this answer, but it requires you to manually write the HTTP request object instead of using the Node.js client.
This Github issue discusses the use of batch requests in the new node client.
According to that thread, the new intended method of "batching" (outside of a poorly documented set of endpoints that support it) is to use the HTTP/2 support being shipped with the client, and then simply make all your requests at once.
"Batching" is in quotes because I don't believe this matches the usual definition of batching: the client isn't queuing requests into a single combined execution; it's just managing network traffic better when you execute them yourself.
If I understand it correctly, this HTTP/2 feature doesn't actually batch requests. You still have to issue them yourself; it merely reduces some TCP overhead by multiplexing the calls over one connection. In short, I do not believe that batching itself is possible with the API client alone.
(FWIW, I would have preferred to comment with a link as I'm uncertain I explained this well, but reputation didn't let me)
I got notified that Google's JSON-RPC and Global HTTP Batch Endpoints are deprecated. In my case the affected API is "storage#v1" and "Global Batch Endpoints".
I tried to find where the deprecated API call comes from, but I'm using 24 buckets with several tools accessing them. So, is there a way to log deprecated calls? I enabled logging for the buckets, but I could not find any difference in the access log between batch requests and single requests.
Yes, "Batching across multiple APIs in a single request" is being discontinued: Discontinuing support for JSON-RPC and Global HTTP Batch Endpoints
What's hard to understand, though, is exactly what is being discontinued.
There are two batching endpoints. The global one: www.googleapis.com/batch
And the API-specific one: www.googleapis.com/batch/<api>/<version>.
So what's changing?
The global batching endpoint is going away, meaning you won't be able to make calls to www.googleapis.com/batch anymore. In the worst case, if you were making batch requests that mixed two APIs, for example Drive and Gmail, you won't be able to do that anymore.
In the future you are going to have to split your batch requests up by API.
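A minimal sketch of that split, assuming you track the pending calls yourself: group them by API so each group can be sent to its own homogeneous batch endpoint. The API names, versions, and paths below are illustrative only.

```go
// Sketch: bucketing pending calls per API, since each batch may now
// only target one service's homogeneous endpoint.
package main

import "fmt"

type call struct {
	api, version, path string
}

// endpointFor returns the homogeneous batch URL for one API.
func endpointFor(api, version string) string {
	return fmt.Sprintf("https://www.googleapis.com/batch/%s/%s", api, version)
}

// groupByEndpoint buckets calls so each slice becomes one batch request.
func groupByEndpoint(calls []call) map[string][]call {
	groups := make(map[string][]call)
	for _, c := range calls {
		key := endpointFor(c.api, c.version)
		groups[key] = append(groups[key], c)
	}
	return groups
}

func main() {
	calls := []call{
		{"drive", "v3", "/drive/v3/files/abc"},
		{"gmail", "v1", "/gmail/v1/users/me/messages/123"},
		{"gmail", "v1", "/gmail/v1/users/me/messages/456"},
	}
	// A mixed Drive+Gmail workload becomes two separate batches.
	for endpoint, group := range groupByEndpoint(calls) {
		fmt.Printf("%s -> %d call(s)\n", endpoint, len(group))
	}
}
```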
Will this affect you?
Code-wise, this depends on which client library you are currently using. Some of them have already been updated to use the single-API endpoint (JavaScript and .NET), while a few have not been updated yet (PHP and Java, last I checked).
As for your buckets: if I understand correctly, they all go through the same API (Storage), so this probably isn't going to affect you. You are also using Google's SDK, and they are going to keep that updated.
Note
The blog post is very confusing, and there are some internal emails going around Google right now in an attempt to clear up what this means for developers.
You have to find where you are doing heterogeneous batch requests, either directly or through libraries in your code. In any case, batch requests are not reflected in your bucket logs, because no API or API method per se was deprecated, just one way of sending the calls.
In detail
You can bundle many requests to different APIs into one batch request. That batch is sent to a single Google server that splits it and routes each contained request to its respective service.
This routing server is going to be removed, so everything will have to be sent directly to the relevant service.
What should you do?
Since only one service, Storage, is mentioned, it looks like your batch requests are homogeneous but are probably being sent to the global endpoint. You should do one of the following:
if you are using Cloud Libraries -> update them.
find whether you are accessing the URL below
www.googleapis.com/batch
and replace it with the appropriate homogeneous batch endpoint, which in your case is
www.googleapis.com/batch/storage/v1
in case you use batchPath, this seems to be a relevant article
Otherwise, if you make heterogeneous calls with gapi (which doesn't seem to be your case), split something like this:
request1 = gapi.client.urlshortener(...);
request2 = gapi.client.storage.buckets.update(...);
request3 = gapi.client.storage.buckets.update(...);
heterogeneousBatchRequest = gapi.client.newBatch();
heterogeneousBatchRequest.add(request1);
heterogeneousBatchRequest.add(request2);
heterogeneousBatchRequest.add(request3);
into something like this
request1 = gapi.client.urlshortener(...);
urlshortenerbatch = gapi.client.newBatch();
urlshortenerbatch.add(request1);
request2 = gapi.client.storage.buckets.update(...);
request3 = gapi.client.storage.buckets.update(...);
storagebatch = gapi.client.newBatch();
storagebatch.add(request2);
storagebatch.add(request3);
Official Documentation
Here it's described how to make batch requests specifically with the Storage API.
We create classes for multiple school districts via Google's Classroom API. We noticed that on 2017-12-18 multiple classes had their aliases removed (we ended up creating duplicate classes, since we use the alias as our unique ID). We use a domain-scoped alias as defined here: https://developers.google.com/classroom/reference/rest/v1/courses.aliases
Any ideas? I'll keep this updated as we find more information.
Google found the root cause on their end; this was a Google issue.
Note: I removed some parts of the messages from Google, but nothing context-related, just some specifics about our company.
Hello *****,
I hope this message finds you well. This is a follow up message
concerning your case with Google Cloud Support.
I would like to inform you that we just received an update from our
Engineering Team regarding the bug that was affecting the Classroom
Aliases. According to the update received, our Engineering Team was
able to identify the root cause of the issue and the issue is now
fully resolved. This means that the aliases will no longer clear when
making updates to the courses via the API. If possible, we would like
to know whether you are still seeing aliases that are deleted or
seemingly disappearing?
According to the information and due to the nature of this bug, our
Engineering Team will not be able to restore the previously deleted
aliases but they are working on creating an alternative solution in
case anyone else encounters this issue in the future. This means that
they will have a way to recover aliases in the future in case they are
deleted by a bug or issue in our end. Please rest assured that this
API issue should not occur again but if it does, we will have a way to
get those aliases restored.
I suggest that you recreate the aliases, test to see if there is any
other issues and let me know if you need any additional help or if you
have any questions regarding the above response. Again, we really
appreciate your patience and continued support while we work to
identify the root cause of this issue.
The best way to reach our API Team is by opening a support case and
waiting for us to reply back via email, or waiting for a call back. We
will normally send an email requesting the best time and phone number
to contact you. You can also reply to any of your cases during a
period of 30 days; your case will be reopened and I will be able to
contact you back as soon as possible.
Sincerely,
Google Cloud Support
I've come to you today in hopes of getting some support regarding the Google Distance Matrix API. Currently I'm using it in a very simple way, with Web Service requests through the HTTP interface, and am having no problems getting results. Unfortunately my project seems to be running into query limits due to the 2,500-query quota. I have added billing to the project to allow going over 2,500 queries, and the increased quota is reflected in my project. What's odd, though, is that the console is not showing any usage, so I'm not sure whether these requests are being run against what I have set up.
I am using a single API key for the project, which is present in my requests, and as I said before the requests ARE working. But I'm hoping someone can shed some light on why my queries might not be reflected in my usage, and on how I can verify that my requests are being run under the project to which I have attached billing.
If there is any information I can provide to help assist in finding an answer, please feel free to let me know and I'll be happy to give what information I can.
After doing some digging I was able to find the following relevant thread to answer my question:
Google API Key hits max request limit despite billing enabled
Context
We have an on-premise CRM (8.0) application, which is integrated with different legacy systems. There are approx 20 entities which are created/updated/upserted via the standard SOAP API by the legacy systems.
Question
I would like to log all the incoming requests and responses as SOAP/XML for diagnostics reasons. How can I accomplish this task?
(Note: I know the trivial, but not exactly fit solution to have workflows for create/update on all affected entities. This seems to be not universal enough + we ultimately must log the request text and response text itself)
I haven't tried it yet, but I think it should be possible to configure native WCF tracing for the Organization Service. This is really easy to do (it only requires adding some configuration to the web.config file), and you will be able to log any request and response. You can take a look at how to configure it here.
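As a sketch of that idea, WCF message logging is enabled through standard configuration added to web.config; the log path and limits below are example values, not a verified CRM-specific setup:

```xml
<!-- Inside <system.serviceModel>: log whole SOAP messages at the service level. -->
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="false"
                    maxMessagesToLog="3000" />
  </diagnostics>
</system.serviceModel>

<!-- Route the logged messages to a file listener. -->
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\crm_messages.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```

The resulting .svclog file can be opened with the Service Trace Viewer (SvcTraceViewer.exe) that ships with the Windows SDK, which shows each request/response pair as XML.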
EDIT:
In this link you will be able to see what I've just described in action (it was done for CRM 2011, but it should work in newer versions): link