I am running the sitespeed.io Docker container and using a config file to run multiple pages at a time. I've also scheduled it to run every hour, but in the job output I am seeing the error below about an exceeded quota:
[2021-11-02 01:16:58] ERROR: Error: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'pagespeedonline.googleapis.com' for consumer 'project_number:583797351490'.
at Gaxios.<anonymous> (/gpsi/node_modules/gaxios/build/src/gaxios.js:72:27)
at Generator.next (<anonymous>)
at fulfilled (/gpsi/node_modules/gaxios/build/src/gaxios.js:16:58)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
Any ideas on what this could be and how we can fix it? In the near future I will be testing around 100 pages with this, and I am sure it would fail with this error as well.
The sitespeed.io Docker container command:
docker run --shm-size=1g --rm -v "$(pwd):/sitespeed.io" $DOCKER_IMAGE --graphite.addSlugToKey true --slug shasha_test --config configs.json urls.txt
The error I am getting is below. It works for some pages and then fails for others. How do I check the quota here?
Status: Downloaded newer image for XXXXXX.XXX.ecr.eu-west-2.amazonaws.com/sitespeedio/sitespeedio:latest
Google Chrome 92.0.4515.131
Mozilla Firefox 92.0b2
Microsoft Edge 92.0.902.8 dev
[2021-11-02 01:16:57] INFO: Versions OS: linux 4.14.248-189.473.amzn2.x86_64 nodejs: v14.17.1 sitespeed.io: 19.1.0 browsertime: 14.0.2 coach: 6.4.3
[2021-11-02 01:16:58] INFO: Will run Lighthouse tests after Browsertime has finished
[2021-11-02 01:16:58] INFO: Sending url https://www.virginmedia.com to test on Page Speed Insights
[2021-11-02 01:16:58] INFO: Sending url https://www.virginmedia.com/broadband/packages to test on Page Speed Insights
[2021-11-02 01:16:58] INFO: Sending url https://www.virginmedia.com/broadband to test on Page Speed Insights
[2021-11-02 01:16:58] INFO: Sending url https://www.virginmedia.com/broadband/speed-test to test on Page Speed Insights
[2021-11-02 01:16:58] ERROR: Error: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'pagespeedonline.googleapis.com' for consumer 'project_number:XXXXXXX'.
at Gaxios.<anonymous> (/gpsi/node_modules/gaxios/build/src/gaxios.js:72:27)
at Generator.next (<anonymous>)
at fulfilled (/gpsi/node_modules/gaxios/build/src/gaxios.js:16:58)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
[2021-11-02 01:16:58] ERROR: Error: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'pagespeedonline.googleapis.com' for consumer 'project_number:XXXXXXX'.
at Gaxios.<anonymous> (/gpsi/node_modules/gaxios/build/src/gaxios.js:72:27)
at Generator.next (<anonymous>)
at fulfilled (/gpsi/node_modules/gaxios/build/src/gaxios.js:16:58)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
A quota is the number of requests your application can make to an API. This is normally split into per-day and per-minute limits.
In the Google Cloud console, open your project's API Library and search for the PageSpeed Insights API (which you have already enabled). Click Manage, and on the next screen you will find Quotas on the left.
This is the limit on the requests you can make to the API per day and per minute.
You are exceeding the 'Queries per minute' quota, which is essentially flood protection: you are going too fast, so slow your application down. You can only make 240 requests a minute.
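One way to stay under that limit is to throttle how fast URLs are sent to the API, for example by splitting urls.txt into batches and pausing between runs. A minimal sketch, reusing the Docker command from above (the batch size of 20 and the 60-second pause are illustrative assumptions, not tuned values):

# Hypothetical throttle: split the URL list into batches and pause
# between batches so the per-minute PSI quota is never hit.
split -l 20 urls.txt batch_
for batch in batch_*; do
  docker run --shm-size=1g --rm -v "$(pwd):/sitespeed.io" "$DOCKER_IMAGE" \
    --graphite.addSlugToKey true --slug shasha_test --config configs.json "$batch"
  sleep 60  # wait a minute before the next batch
done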
Related
As part of an AWS CodePipeline, in an AWS CodeBuild action, I deploy resources created with the Serverless Framework to a "UAT" (user acceptance testing) stage.
The pipeline runs in its own tooling AWS account, first deploying cross-account into a separate "UAT" account, then deploying cross-account into a separate "Production" account.
The first deployment to "UAT" completes successfully, whereas the subsequent deployment to "Production" fails with the error ...
Serverless Error ----------------------------------------
An error occurred: <some name>LambdaFunction - Resource handler returned message: "Code uncompressed size is greater than max allowed size of 272629760. (Service: Lambda, Status Code: 400, Request ID: <some request id>, Extended Request ID: null)" (RequestToken: <some request token>, HandlerErrorCode: InvalidRequest).
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.17.2
Framework Version: 2.68.0 (local)
Plugin Version: 5.5.1
SDK Version: 4.3.0
Components Version: 3.18.1
This started to happen once I introduced a private Lambda Layer. The total size of all files seems well below the maximum allowed size.
This question isn't so much about the actual error (a similar question already exists).
Rather, I wonder why the behavior is inconsistent, varying with the deployment target, because the limits for the Lambda function package size (including the usage of Lambda Layers) should be the same for all environments.
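For what it's worth, 272629760 bytes is exactly 260 MB, and the limit appears to count the uncompressed size of the function package plus all of its layers, so it can help to measure the unzipped sizes directly rather than the zip sizes. A sketch, assuming the deployment artifacts have been downloaded locally (the file names function.zip and layer.zip are hypothetical):

# Hypothetical check: sum the uncompressed sizes of the function
# package and each layer; the total must stay under 272629760 bytes.
for zip in function.zip layer.zip; do
  mkdir -p "unpacked/${zip%.zip}"
  unzip -q "$zip" -d "unpacked/${zip%.zip}"
done
du -sb unpacked/*   # uncompressed size per artifact, in bytes
du -sb unpacked     # combined total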
This used to work perfectly until exactly 4 days ago. When I run gcloud app deploy now, it completes the build and then, straight after, fails at the Updating service step.
Here is the output:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/just-sleek/regions/us-central1/operations/8260bef8-b882-4313-bf97-efff8d603c5f error [INTERNAL]: An internal error occurred while processing task /appengine-flex-v1/insert_flex_deployment/flex_create_resources>2020-05-26T05:20:44.032Z4316.jc.11: Deployment Manager operation just-sleek/operation-1590470444486-5a68641de8da1-5dfcfe5c-b041c398 errors: [
code: "RESOURCE_ERROR"
location: "/deployments/aef-default-20200526t070946/resources/aef-default-20200526t070946"
message: {
\"ResourceType\":\"compute.beta.regionAutoscaler\",
\"ResourceErrorCode\":\"403\",
\"ResourceErrorMessage\":{
\"code\":403,
\"errors\":[{
\"domain\":\"usageLimits\",
\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200526t070946\'. Limit: 8.0\",
\"reason\":\"limitExceeded\"
}],
\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200526t070946\'. Limit: 8.0\",
\"statusMessage\":\"Forbidden\",
\"requestPath\":\"https://compute.googleapis.com/compute/beta/projects/just-sleek/regions/us-central1/autoscalers\",
\"httpMethod\":\"POST\"
}
}"]
I tried the following ways to resolve the error:
I deleted all my previous versions and left only the running version.
I ran gcloud components update; it still fails.
I created a new project, changed the region from [REGION1] to [REGION2], deployed, and I am still getting the same error.
I also ran gcloud app deploy --verbosity=debug, which does not give me any different result.
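For reference, the effective instance quota and current usage in the region can be inspected with gcloud; a sketch, using the project and region from the error above:

# Sketch: list regional Compute Engine quotas (metric, limit, usage)
# for the project named in the error.
gcloud compute regions describe us-central1 \
  --project just-sleek \
  --flatten="quotas[]" \
  --format="table(quotas.metric, quotas.limit, quotas.usage)"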
I have no clue what is causing this issue or how to solve it. Please assist.
Google is already aware of this issue and it is currently being investigated.
There is a Public Issue Tracker you may 'star' and follow to receive further updates on this. In addition, you may see workarounds posted there that can be applied temporarily if they suit your preferences.
Currently there is no ETA for the resolution, but an update will be provided as soon as the team makes progress on the issue.
I resolved this by adding this to my app.yaml:
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 7
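Note that max_num_instances is set below the limit of 8 reported in the error, so the autoscaler no longer requests more instances than the project's quota allows.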
I found the solution here:
https://issuetracker.google.com/issues/157449521
And I was also redirected to:
gcloud app deploy - updating service default fails with code 13 Quota for instances limit exceeded, and 401 unauthorized
I'm running the AWS Lambda tutorial at https://aws.amazon.com/getting-started/hands-on/run-serverless-code/ and I'm getting a strange error. I'm on step 5, where it says to run the test a few times to generate some metrics to view. After running my test 3 times, I started getting this error message:
Calling the invoke API action failed with this message: Rate Exceeded.
What? I'm new to Lambda, but I'm not doing anything complicated or time-consuming; I'm just running the AWS tutorial. Can anyone tell me how to get past this?
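The throttle is applied against account-level Lambda limits, which can be inspected from the CLI; a sketch (assumes configured AWS credentials):

# Sketch: show the account-level Lambda limits and current usage.
# ConcurrentExecutions is the usual ceiling behind throttling errors.
aws lambda get-account-settings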
I am running a Parse Server on Heroku with mLab.
When I run a search query on a table (which has 188K records), I receive the error below:
Process running mem=558M(109.2%)
2019-11-18T18:16:48.355162+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
I tried to run the server via node --max-old-space-size=4096 parse/server.js, but the issue is still not resolved.
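For context, on Heroku such Node flags are usually passed via the NODE_OPTIONS config var rather than the start command, and R14 means the dyno itself is out of RAM, so the heap cap has to stay below the dyno's memory. A sketch, assuming a 512 MB standard-1x dyno (which the mem=558M(109.2%) line suggests) and an illustrative 384 MB cap:

# Hypothetical sketch: cap V8's old space below the dyno's 512 MB,
# leaving headroom for the rest of the process.
heroku config:set NODE_OPTIONS="--max-old-space-size=384"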
Please advise a solution.
I am running a load test in JMeter with 200 users. Around 10 percent of the requests sent for each sampler fail with a 404 Not Found status code. However, if I run my test with a load of 100 users, I do not encounter 404 errors. Please advise me on what the issue might be and a possible solution.
It's a server-side issue. Some applications handle server errors in a strange way.
So you would need to:
analyze the access logs (for example, count 404s per URL, as sketched below)
add monitoring and APM to help diagnose
check the error logs
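A sketch of the first step, assuming a combined-format access log at a hypothetical path: count 404 responses per requested URL to see which pages fail under load.

# Hypothetical sketch: field 9 is the status code and field 7 the
# request path in the combined log format; list the top 404'd URLs.
awk '$9 == 404 {print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head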