Create_Failed S3BatchProcessor, AWS Lambda - aws-lambda

I am running cdk deploy in my Textract pipeline folder for large document processing. However, when I run this program I get the following error:
| CREATE_FAILED | AWS::Lambda::Function | S3BatchProcessor6C619AEA
Resource handler returned message: "Specified ReservedConcurrentExecutions for function decreases account's UnreservedConcurrentExecution below its minimum value of [10]. (Service: Lambda, Status Code: 400, Request ID: 7f6d1305-e248-4745-983e-045eccde562d)" (RequestToken: 9c84827d-502e-5697-b023-e
0be45f8d451, HandlerErrorCode: InvalidRequest)

By default, AWS gives each account a limit of 1,000 concurrent Lambda executions per region.
In your case, the reserved concurrency configured across the Lambdas in your account pushes the unreserved pool below its minimum of 10. The following must hold:
1000 - (sum of reserved concurrency across all functions) >= 10
The deployment fails because the ReservedConcurrentExecutions you are setting would violate that constraint.
There are two solutions here:
Reduce the reserved concurrency of your Lambdas so that the inequality above holds (sketched below), or
Raise the account concurrency limit by contacting AWS Support.
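As a minimal sketch of the first option, assuming the function is defined with aws-cdk-lib in TypeScript (the stack name, runtime, handler, and asset path below are illustrative, not taken from the original pipeline):

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class BatchStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'S3BatchProcessor', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      // Either lower this value so that
      //   account limit - sum(reserved concurrency) >= 10
      // still holds, or omit the property entirely so the function
      // draws from the unreserved pool.
      reservedConcurrentExecutions: 5,
    });
  }
}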


AWS: Specified ConcurrentExecutions for function decreases account's UnreservedConcurrentExecution below its minimum value of [10]

I keep getting this error when deploying a Laravel project with GitHub Actions:
==> Updating Function Configurations
Deployment Failed
AWS: Specified ConcurrentExecutions for function decreases account's UnreservedConcurrentExecution below its minimum value of [10].
23613KB/23613KB [============================] 100% (< 1 sec remaining)
Error: Process completed with exit code 1.
This means you have reserved more capacity than your account/regional maximum allows. By default, Lambda gives you 1000 concurrent executions in each region. When you create a function, you can optionally reserve a portion of that concurrency for it. You can't reserve 100% of your account/region's concurrency, otherwise Lambdas without reserved concurrency would not be able to run at all; that is what this error is saying.
You have 2 options:
Reduce the amount of reserved/provisioned concurrency for this and your other Lambda functions in the region (or just don't specify any reserved/provisioned concurrency at all if this is only an experiment). You can check how much unreserved concurrency is left with the sketch after the links below.
Request a limit increase with AWS Support.
Some reading material:
What is reserved concurrency: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
Default limits (including concurrency) for Lambda: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
How to request an increase of concurrency limits: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-concurrency-limit-increase/
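If you want to see how much unreserved concurrency is actually left in your account/region before choosing, a small check along these lines can help (a sketch using the AWS SDK for JavaScript v3; the region is just an example):

import { LambdaClient, GetAccountSettingsCommand } from '@aws-sdk/client-lambda';

// Prints the account-level concurrency limit and the unreserved portion
// shared by all functions without reserved concurrency.
async function showConcurrency(region: string): Promise<void> {
  const client = new LambdaClient({ region });
  const { AccountLimit } = await client.send(new GetAccountSettingsCommand({}));
  console.log('Total concurrent executions:', AccountLimit?.ConcurrentExecutions);
  console.log('Unreserved concurrent executions:', AccountLimit?.UnreservedConcurrentExecutions);
}

showConcurrency('us-east-1').catch(console.error);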

429 issues on Solana/metaplex

I thought Solana/Metaplex etc. should be able to handle large numbers of transactions in quick succession. I just wrote a load test that does 50 mints of an existing SPL token (which has Metaplex token data associated with it).
In my code I don't specify any particular node/RPC, just the cluster, i.e. testnet.
What should I be doing here?
{"name":"Error","message":"failed to get info about account 2fvtsp6U6iDVhJvox5kRpUS6jFAStk847zATX3cpsVD8: Error: 429 Too Many Requests: {\"jsonrpc\":\"2.0\",\"error\":{\"code\": 429, \"message\":\"Too many requests from your IP, contact your app developer or support#rpcpool.com.\"}, \"id\": \"76627a31-4522-4ebb-ae22-5861fa6781f0\" } \r\n","stack":"Error: failed to get info about account 2fvtsp6U6iDVhJvox5kRpUS6jFAStk847zATX3cpsVD8: Error: 429 Too Many Requests: {\"jsonrpc\":\"2.0\",\"error\":{\"code\": 429, \"message\":\"Too many requests from your IP, contact your app developer or support#rpcpool.com.\"}, \"id\": \"76627a31-4522-4ebb-ae22-5861fa6781f0\" } \r\n\n at Connection.getAccountInfo (/Users/ffff/dev/walsingh/TOKENPASS/tpass-graphql/graphql/node_modules/#metaplex/js/node_modules/#solana/web3.js/lib/index.cjs.js:5508:13)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Token.getAccountInfo (
The 429 issue you are running into is an RPC rate limit. Testnet has the following rate limits at the time of writing:
Maximum number of requests per 10 seconds per IP: 100
Maximum number of requests per 10 seconds per IP for a single RPC: 40
Maximum concurrent connections per IP: 40
Maximum connection rate per 10 seconds per IP: 40
Maximum amount of data per 30 seconds: 100 MB
You probably ran into one of these limits. The general recommendation is to get access to one of the RPCs without rate limits, as the public endpoints are not meant for testing how many transactions you can get through.
QuickNode, Triton, and GenesysGo provide RPC infrastructure you can use.
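If you have to stay on the public endpoint for now, spacing out the calls and backing off on 429 responses on the client side can also keep you under those limits. A rough sketch (not Metaplex-specific; the delays and retry count are arbitrary):

import { Connection, PublicKey, clusterApiUrl } from '@solana/web3.js';

const connection = new Connection(clusterApiUrl('testnet'), 'confirmed');

// Retry a web3.js call with exponential backoff when the RPC answers 429.
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
  let delayMs = 500;
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const msg = String((err as Error).message ?? err);
      if (attempt >= maxRetries || !msg.includes('429')) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2;
    }
  }
}

// Example: fetch account info for the minted accounts sequentially instead of all at once.
async function fetchAccounts(addresses: string[]): Promise<void> {
  for (const address of addresses) {
    const info = await withBackoff(() => connection.getAccountInfo(new PublicKey(address)));
    console.log(address, info?.lamports);
  }
}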

How To Prevent DynamoDb exception ProvisionedThroughputExceededException in Laravel

I am updating a certain number of records over a long period of time, and I have no certainty about the timing at which the records will be produced. Sometimes, when many records are produced at the same time, I get an error log entry saying that I hit the ProvisionedThroughputExceededException.
I'd like to prevent this exception from happening, or at least be able to catch it (and then re-throw it so that I don't alter the logic), but all I get is the error log below:
[2019-02-12 15:50:48] local.ERROR: Error executing "UpdateItem" on "https://dynamodb.eu-central-1.amazonaws.com"; AWS HTTP error: Client error: `POST https://dynamodb.eu-central-1.amazonaws.com` resulted in a `400 Bad Request` response:
The log continues and we can find a little more detail:
ProvisionedThroughputExceededException (client): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. -
{
"__type": "com.amazonaws.dynamodb.v20120810#ProvisionedThroughputExceededException",
"message": "The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API."
}
{"exception":"[object] (Aws\\DynamoDb\\Exception\\DynamoDbException(code: 0): Error executing \"UpdateItem\" on \"https://dynamodb.eu-central-1.amazonaws.com\"; AWS HTTP error: Client error: `POST https://dynamodb.eu-central-1.amazonaws.com` resulted in a `400 Bad Request` response:
So, the exception was thrown, but it looks like it's already being caught somewhere, while I'd love to catch it myself, even if only to keep track of it, and possibly to avoid the exception altogether.
Is there a way to do so?
To prevent the exception, the obvious answer would be "use auto scaling on the DynamoDB capacity". And that's what I did, with a certain degree of luck: when a spike in requests arose I still got the exception, but on average autoscaling worked pretty well. Here is the CloudFormation snippet for autoscaling:
MyTableWriteScaling:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 250
    MinCapacity: 5
    ResourceId: !Join ["/", ["table", !Ref myTable]]
    ScalableDimension: "dynamodb:table:WriteCapacityUnits"
    ServiceNamespace: "dynamodb"
    RoleARN: {"Fn::GetAtt": ["DynamoDbScalingRole", "Arn"]}

WriteScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: !Join ['-', [!Ref 'AWS::StackName', 'MyTable', 'Write', 'Scaling', 'Policy']]
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref MyTableWriteScaling
    ScalableDimension: dynamodb:table:WriteCapacityUnits
    ServiceNamespace: dynamodb
    TargetTrackingScalingPolicyConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: DynamoDBWriteCapacityUtilization
      ScaleInCooldown: 1
      ScaleOutCooldown: 1
      TargetValue: 60
  DependsOn: MyTableWriteScaling
That said, I still got the exception. I knew that the throttled requests would eventually be written, but I was looking for a way to prevent the exception, since I could not catch it.
The way to do it was introduced by Amazon on November 28, 2018, and it is DynamoDB on-demand.
Quite usefully, in the announcement we read:
DynamoDB on-demand is useful if your application traffic is difficult to predict and control, your workload has large spikes of short duration, or if your average table utilization is well below the peak.
Configuring on-demand in CloudFormation couldn't be easier:
HotelStay:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    ...
Changing the BillingMode and removing the ProvisionedThroughput settings prevented this kind of exception from being thrown; they're just gone forever.
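For the "catch it myself" part of the question, the pattern is simply to catch the SDK's modeled exception around the write, track it, and re-throw. A rough sketch of that idea, shown with the AWS SDK for JavaScript v3 rather than the PHP SDK the question uses (key and attribute names are invented):

import {
  DynamoDBClient,
  UpdateItemCommand,
  ProvisionedThroughputExceededException,
} from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'eu-central-1' });

async function updateWithTracking(id: string): Promise<void> {
  try {
    await client.send(new UpdateItemCommand({
      TableName: 'HotelStay',
      Key: { pk: { S: id } },
      UpdateExpression: 'SET #s = :s',
      ExpressionAttributeNames: { '#s': 'status' },
      ExpressionAttributeValues: { ':s': { S: 'processed' } },
    }));
  } catch (err) {
    if (err instanceof ProvisionedThroughputExceededException) {
      // Log/track the throttle, then re-throw so the caller's logic is unchanged.
      console.warn('DynamoDB throttled the request:', err.message);
    }
    throw err;
  }
}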

Solidity | Truffle | Web3 | Gas Limit

I am trying to deploy a contract using the code below on the Rinkeby test net:
const result = await new web3.eth.Contract(JSON.parse(interface))
.deploy({data: bytecode, arguments: [100, accounts[0]]})
.send({gas: 1000000, from: accounts[0]});
Attempting to deploy from acount 0xBE80D3f83530f2Ed1214BE5a7434E0cd32177047
(node:3862) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: The contract code couldn't be stored, please check your gas limit.
When I increase the gas limit to 10000000, I get the error below. I am not able to understand what is wrong with the deployment:
Attempting to deploy from acount 0xBE80D3f83530f2Ed1214BE5a7434E0cd32177047
(node:3870) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: exceeds block gas limit
You're exceeding the gas limit. You may be doing too much work in your constructor, or you're simply sending a gas limit that is too low.
Rinkeby's block gas limit is around 7.4M, so you can try increasing the gas from 1M to ~7.4M.
If your contract is too big, you can split it into multiple contracts, or, as said before, reduce the work being done in the constructor.
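One way to avoid guessing the number is to ask the node for an estimate before sending the deployment. A sketch against the same web3.eth.Contract API used above (the 20% margin is arbitrary; `abi` stands in for the question's `interface` variable):

// Assumes `web3` is connected to Rinkeby and `abi`/`bytecode` come from the compiler output.
async function deployWithEstimate(web3: any, abi: string, bytecode: string) {
  const accounts = await web3.eth.getAccounts();

  const deployment = new web3.eth.Contract(JSON.parse(abi))
    .deploy({ data: bytecode, arguments: [100, accounts[0]] });

  // Ask the node how much gas the deployment is expected to need, then add a
  // safety margin while staying under Rinkeby's ~7.4M block gas limit.
  const estimated = await deployment.estimateGas({ from: accounts[0] });
  const gas = Math.min(Math.floor(estimated * 1.2), 7400000);

  const result = await deployment.send({ gas, from: accounts[0] });
  console.log('Contract deployed to', result.options.address);
  return result;
}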

How can i resolve HTTPSConnectionPool(host='www.googleapis.com', port=443) Max retries exceeded with url (Google cloud storage)

I have created an API using Django Rest Framework.
The API communicates with GCP Cloud Storage to store profile images (around 1 MB per picture).
While performing load testing (around 1000 requests/s) against that server, I encountered the following error.
It seems to be a GCP Cloud Storage max-request issue, but I am unable to figure out the solution:
Exception Type: SSLError at /api/v1/users
Exception Value: HTTPSConnectionPool(host='www.googleapis.com', port=443): Max retries exceeded with url: /storage/v1/b/<gcp-bucket-name>?projection=noAcl (Caused by SSLError(SSLError("bad handshake: SysCallError(-1, 'Unexpected EOF')",),))
Looks like you have the answer to your question here:
"...buckets have an initial IO capacity of around 1000 write requests
per second...As the request rate for a given bucket grows, Cloud
Storage automatically increases the IO capacity for that bucket"
Therefore it auto-scales automatically. The only thing is that you need to increase the requests/s gradually, as described here:
"If your request rate is expected to go over these thresholds, you should start with a request rate below or near the thresholds and then double the request rate no faster than every 20 minutes"
It looks like your bucket should get an increase in I/O capacity that will work in the future.
You are actually right at the edge (1000 req/s), and I guess this is what is causing your error.
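Following that guidance, the load test should ramp up to the target rate instead of starting at it. A tiny sketch of such a ramp schedule (the 1000 req/s starting point and 20-minute doubling interval come from the quoted docs; the target rate is illustrative):

// Request rate (req/s) the load test should allow `minutes` after it starts:
// begin at the documented ~1000 req/s threshold and double no faster than
// every 20 minutes until the desired target rate is reached.
function allowedRate(minutes: number, targetRate: number, initialRate = 1000): number {
  const doublings = Math.floor(minutes / 20);
  return Math.min(targetRate, initialRate * 2 ** doublings);
}

// Example: aiming for 8000 req/s
console.log(allowedRate(0, 8000));   // 1000
console.log(allowedRate(25, 8000));  // 2000
console.log(allowedRate(65, 8000));  // 8000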
