I am using the Google Sheets API and consistently getting the following error. I am only getting it for a specific sheet with a specific service key; my other credential works just fine. Also, the load is relatively low from what I can tell. I'm not hammering the API or anything.
{
  "error": {
    "code": 500,
    "message": "Internal error encountered.",
    "errors": [
      {
        "message": "Internal error encountered.",
        "domain": "global",
        "reason": "backendError"
      }
    ],
    "status": "INTERNAL"
  }
}
I have found the culprit here. It turned out I had to remove two sheets with pivot tables on them that referenced the sheet I was trying to query. Once I did that, all was well.
This has to do with their internal timeout: if an operation you are trying to complete takes too long, it bails. Until they fix this, one solution is to reduce the size of the data so the operation completes quicker. In my case, I update the spreadsheet in smaller chunks.
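For anyone wanting to try the chunked approach, here is a minimal sketch using the official Python client; the spreadsheet ID, sheet name, and chunk size are placeholders, not values from the question.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SPREADSHEET_ID = "your-spreadsheet-id"  # placeholder
CHUNK_SIZE = 500  # rows per request; small enough to finish before the internal timeout

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/spreadsheets"],
)
service = build("sheets", "v4", credentials=creds)

def update_in_chunks(rows):
    # Write the rows in CHUNK_SIZE batches so each request stays small.
    for start in range(0, len(rows), CHUNK_SIZE):
        chunk = rows[start:start + CHUNK_SIZE]
        service.spreadsheets().values().update(
            spreadsheetId=SPREADSHEET_ID,
            range="Sheet1!A%d" % (start + 1),
            valueInputOption="RAW",
            body={"values": chunk},
        ).execute()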
A workaround is to have a central sheet that the Google Sheets API writes to, and then reference that data range with a formula like IMPORTRANGE on the sheet that holds your charts and analysis. That way, the sheet the API accesses contains no charts and avoids the issue.
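As a hedged illustration (the spreadsheet URL key and the range are placeholders), the formula on the analysis sheet could look like:

=IMPORTRANGE("https://docs.google.com/spreadsheets/d/API_SHEET_ID", "Data!A1:F1000")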
On my AWS Lambda dashboard, I see a spike in failed invocations. I want to investigate these errors by looking at the logs for those invocations. Currently, the only way I can filter them is to get the timeline of the failed invocations and then look through the logs.
Is there a way I can search for failed invocations, i.e. ones that did not return a 200, and get a request ID that I can then look up in CloudWatch Logs?
You can use AWS X-Ray for this by enabling it in the AWS Lambda dashboard.
In the X-Ray dashboard you can:
view traces
filter them by status code
see all the details of an invocation, including the request ID and total execution time, such as:
{
  "Document": {
    "id": "ept5e8c459d8d017fab",
    "name": "zucker",
    "start_time": 1595364779.526,
    "trace_id": "1-some-trace-id-fa543548b17a44aeb2e62171",
    "end_time": 1595364780.079,
    "http": {
      "response": {
        "status": 200
      }
    },
    "aws": {
      "request_id": "abcdefg-69b5-hijkl-95cc-170e91c66110"
    },
    "origin": "AWS::Lambda",
    "resource_arn": "arn:aws:lambda:eu-west-1:12345678:function:major-tom"
  },
  "Id": "52dc189d8d017fab"
}
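If you want to pull the same information programmatically, here is a minimal sketch using boto3 (assuming X-Ray tracing is already enabled on the function); the one-hour window and the filter expression are illustrative.

import boto3
from datetime import datetime, timedelta

xray = boto3.client("xray")

# Find traces whose HTTP status was not 200 in the last hour.
paginator = xray.get_paginator("get_trace_summaries")
for page in paginator.paginate(
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    FilterExpression="http.status != 200",
):
    for summary in page["TraceSummaries"]:
        # Each trace ID can be opened in the X-Ray console to find the request ID.
        print(summary["Id"])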
What I understand from your question is that you are more interested in finding out why your Lambda invocation failed than in finding the request ID of a failed invocation.
You can do this by following the steps below:
Go to your lambda function in the AWS console.
There will be three tabs: Configuration, Permissions, and Monitoring.
Click on the Monitoring tab. Here you can see the number of invocations, the error count and success rate, and other metrics as well. Click on the Error metric to see at what time the invocation errors happened. You can read more at Lambda function metrics.
If you already know the time at which your function failed, ignore Step 3.
Now scroll down. You will find the section titled CloudWatch Logs Insights. Here you will see logs for all the invocations that happened within the specified time range.
Adjust your time range under this section. You can choose a predefined time range like 1h, 3h, 1d, etc., or a custom time range.
Now click on the Log stream link after the above filter has been applied. It will take you to the CloudWatch console, where you can see the logs.
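If you would rather query Logs Insights programmatically, a minimal sketch with boto3 might look like the following; the log group name is a hypothetical example, and filtering on "ERROR" is just one way to spot failed invocations.

import time
import boto3

logs = boto3.client("logs")

query = logs.start_query(
    logGroupName="/aws/lambda/major-tom",  # hypothetical function name
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, @requestId, @message "
        "| filter @message like /ERROR/ "
        "| sort @timestamp desc"
    ),
)

# Poll until the query completes, then print the matching rows,
# including the @requestId to look up in the log streams.
while True:
    result = logs.get_query_results(queryId=query["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
for row in result.get("results", []):
    print({f["field"]: f["value"] for f in row})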
I am getting the following error when trying to upload files. How can I get this limit increased?
POST https://content.googleapis.com/drive/v2/files?alt=json
{"title":"4394480","mimeType":"application/vnd.google-apps.folder","labels":{"restricted":false},"params":{"fields":"items(id)","quotaUser":"U1VQRVI="},"parents":[{"id":"0B_driE4U__5YWlhvd3VGRm5famc"}]}
Response 403
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "numChildrenInNonRootLimitExceeded",
        "message": "The limit for this folder's number of children (files and folders) has been exceeded."
      }
    ],
    "code": 403,
    "message": "The limit for this folder's number of children (files and folders) has been exceeded."
  }
}
Error message code:
"numChildrenInNonRootLimitExceeded"
If you check the Google Drive errors page here,
you will see that the problem is that your directory simply has too many files in it and you cannot add more. This is a hard limit within Google Drive and not something that can be changed (see Google Drive limits).
Create another directory and upload to that.
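As a hedged sketch of that rollover idea (assuming an authorized Drive v3 service object built with the Python client; the folder name and the string match on the error reason are illustrative):

from googleapiclient.errors import HttpError

def create_in_folder(service, name, parent_id):
    body = {
        "name": name,
        "mimeType": "application/vnd.google-apps.folder",
        "parents": [parent_id],
    }
    try:
        return service.files().create(body=body, fields="id").execute()
    except HttpError as err:
        if err.resp.status == 403 and "numChildrenInNonRootLimitExceeded" in str(err):
            # The parent is full: create a sibling "overflow" folder and retry there.
            overflow = service.files().create(
                body={"name": "overflow", "mimeType": "application/vnd.google-apps.folder"},
                fields="id",
            ).execute()
            body["parents"] = [overflow["id"]]
            return service.files().create(body=body, fields="id").execute()
        raise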
I have many languages for my docs and am following this pattern: one index per language. There they suggest searching across all indices with the
/blogs-*/post/_count
pattern. In my case I am getting a count across the indices of how many docs I have. I am running my code concurrently, so I am making many requests at the same time. If I search
/blogs-en/post/_count
or any other single language, then all is fine. However, if I search
/blogs-*/post/_count
I soon encounter:
"Error 429 (Too Many Requests): [reduce] [type=reduce_search_phase_exception]
"
Is there a workaround for this? The same number of requests is made regardless of whether I use /blogs-en/post/_count or /blogs-*/post/_count.
I have always used the same number of workers in my code, but rearranging the indices to one index per language suddenly broke my code.
EDIT: It is a brand-new index without any documents when I start the program, and when I get the error I have about 5,000 documents, so it is not under any heavy load.
EDIT: I am using the mapping found in the above-referenced link and running on a local machine with all the ES defaults; in my case shards=5 and replicas=1. I am really just following the example from the link.
EDIT: The errors appear when as few as 13-20 requests are made, and I know ES can handle more than that. Searching /blogs-en/post/_count instead of /blogs-*/post/_count, etc. can easily handle thousands with no errors.
EDIT: I have removed all concurrency but can still only make 40-50 requests before I get the error.
I don't get an error for that request, and it returns the total document count.
Is your cluster under load?
Anyway, using a simple aggregation you can get the total document count in hits.total and the per-index document count in the count_per_index part of the result:
GET /blogs-*/post/_search
{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "aggs": {
    "count_per_index": {
      "terms": {
        "field": "_index"
      }
    }
  }
}
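If the 429 persists, one workaround (not from the answer above, just a suggestion) is to throttle the wildcard searches client-side, since each /blogs-*/ request fans out to every shard of every index. A sketch with the elasticsearch-py client; the limit value is illustrative:

import threading
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
MAX_IN_FLIGHT = 4  # keep well below the search thread-pool queue size
gate = threading.Semaphore(MAX_IN_FLIGHT)

def safe_count(index_pattern="blogs-*"):
    # The semaphore caps concurrent fan-out searches so the reduce
    # phase is not overwhelmed.
    with gate:
        return es.count(index=index_pattern)["count"]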
I am trying to manage the "cost" of API requests, and so to generate a delta of the videos that were added to a playlist since the last API request.
I would like to make the zero-cost request of just fetching the video IDs before getting additional details about the videos in the playlist.
GET https://www.googleapis.com/youtube/v3/playlistItems?part=id&playlistId=PLlTLHnxSVuIyeEZPBIQF_krewJkY2JSwi&key={YOUR_API_KEY}
The response looks like this:
"items": [
{
"kind": "youtube#playlistItem",
"etag": "\"5g01s4-wS2b4VpScndqCYc5Y-8k/2wturocJM7aMkvG4Zrmv45tbyWY\"",
"id": "UExsVExIbnhTVnVJeWVFWlBCSVFGX2tyZXdKa1kySlN3aS4xMjU2MjFGMDJBNEUzQzcw"
},
The playlistItem id cannot be used in the videos list to get additional info about the video; instead, part=snippet, which has a cost associated with it, has to be added to the playlistItems request. Is this a bug or intentional? Also, is there a way to map the playlistItem id to a videoId/resourceId?
Firstly, all calls have a cost, no matter what; just how much depends on your request.
Yes, this is by design. They want to limit the number of calls to the system as much as possible. This makes for better streamlining of call requests, as well as reducing strain on the site.
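In practice, then, you pay for the snippet part and read the videoId from snippet.resourceId. A minimal sketch in Python (the playlist ID is the one from the question; API_KEY is a placeholder):

import requests

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/playlistItems",
    params={
        "part": "snippet",
        "playlistId": "PLlTLHnxSVuIyeEZPBIQF_krewJkY2JSwi",
        "key": "API_KEY",  # placeholder
        # Trim the response to just the video IDs and the paging token.
        "fields": "items(snippet/resourceId/videoId),nextPageToken",
    },
)
video_ids = [item["snippet"]["resourceId"]["videoId"]
             for item in resp.json()["items"]]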
I need to use the Google Custom Search API https://developers.google.com/custom-search/v1/overview. From that page, it says:
For CSE users, the API provides 100 search queries per day for free.
If you need more, you may sign up for billing in the Developers
Console. Additional requests cost $5 per 1000 queries, up to 10k
queries per day.
I have already signed up for billing inside the Developer Console. However, I still cannot retrieve more than 100 results. What else do I need to do? https://www.googleapis.com/customsearch/v1?cx=CSE_INSTANCE&key=API_KEY&q=QUERY&start=100
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Invalid Value"
      }
    ],
    "code": 400,
    "message": "Invalid Value"
  }
}
Query: Definition
https://support.google.com/customsearch/answer/1361951
Any actual user query from a Google Site Search engine, including but
not limited to search engines installed on your website using XML,
iFrame, or the Custom Search Element.
That means you would probably need to send eleven queries to get more than 100 results.
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=1
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=11
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=21
GET ...
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=81
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=91
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=101
Check every response, and if the error code is 400 you can stop: there is probably no need to send the next (&start=previous+10) request.
Now you can merge the responses and start building the results page.
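A minimal sketch of that loop in Python (CSE_INSTANCE and API_KEY are placeholders, as in the question):

import requests

def fetch_all(query, cx="CSE_INSTANCE", key="API_KEY"):
    items = []
    for start in range(1, 102, 10):  # start=1, 11, 21, ..., 101
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"q": query, "cx": cx, "key": key, "start": start},
        )
        if resp.status_code == 400:
            # Past the last reachable page; stop, as suggested above.
            break
        items.extend(resp.json().get("items", []))
    return items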
Google Custom Search and Google Site Search return up to 10 results
per query. If you want to display more than 10 results to the user,
you can issue multiple requests (using the start=0, start=11 ...
parameters) and display the results on a single page. In this case,
Google will consider each request as a separate query, and if you are
using Google Site Search, each query will count towards your limit.
There might be a better way to do this than I described above. (But I'm not sure about batching API calls.)
And (finally) a possible answer to your question: I ran more than a few tests, but I haven't had any luck with start greater than 100 (I was getting the same as you: <Response [400]>). I'm using a "Browser key" from my billing-enabled project. That could mean we can't get the 101st, 102nd, 103rd, etc. results with the CSE API.
The API documentation says it never returns more than 100 items.
https://developers.google.com/custom-search/v1/reference/rest/v1/cse/list
start
integer (uint32 format)
The index of the first result to return. The default number of results
per page is 10, so &start=11 would start at the top of the second page
of results. Note: The JSON API will never return more than 100
results, even if more than 100 documents match the query, so setting
the sum of start + num to a number greater than 100 will produce an
error. Also note that the maximum value for num is 10.