Solidity | Truffle | Web3 | Gas Limit - mocha.js

I am trying to deploy a contract using the code below on the Rinkeby test net:
const result = await new web3.eth.Contract(JSON.parse(interface))
.deploy({data: bytecode, arguments: [100, accounts[0]]})
.send({gas: 1000000, from: accounts[0]});
Attempting to deploy from acount 0xBE80D3f83530f2Ed1214BE5a7434E0cd32177047
(node:3862) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: The contract code couldn't be stored, please check your gas limit.
When I increase the gas limit to 10000000, I get the error below. I am not able to understand what is wrong with the deployment.
Attempting to deploy from acount 0xBE80D3f83530f2Ed1214BE5a7434E0cd32177047
(node:3870) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: exceeds block gas limit

You're exceeding the gas limit. You may be doing too much work in your constructor, or you may simply be sending a gas limit that is too low.
Rinkeby's block gas limit is around 7.4M, so you can try increasing the gas from 1M to ~7.4M.
If your contract is too big, you can split it into multiple contracts, or, as mentioned above, reduce the work being done in the constructor.
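For example, here is the same deployment call with only the gas raised to sit under Rinkeby's ~7.4M block limit (a sketch; abi stands in for the asker's interface string, and bytecode/accounts are assumed to be the same values used in the question):

// abi, bytecode and accounts are assumed to hold the same values as in the question
// (abi = the compiler's interface JSON).
const result = await new web3.eth.Contract(JSON.parse(abi))
  .deploy({ data: bytecode, arguments: [100, accounts[0]] })
  .send({ gas: 7000000, from: accounts[0] }); // below Rinkeby's ~7.4M block gas limit
console.log('Contract deployed to', result.options.address);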

Related

AWS: Specified ConcurrentExecutions for function decreases account's UnreservedConcurrentExecution below its minimum value of [10]

I keep getting this error when deploying a Laravel project with GitHub Actions:
==> Updating Function Configurations
Deployment Failed
AWS: Specified ConcurrentExecutions for function decreases account's UnreservedConcurrentExecution below its minimum value of [10].
23613KB/23613KB [============================] 100% (< 1 sec remaining)
Error: Process completed with exit code 1.
This means you have reserved more capacity than your account/regional maximum allows. By default, Lambda allows you to have 1000 concurrent executions in each region. When you create a function, you can optionally reserve a portion of that concurrency for it. You can't reserve 100% of your account/region's concurrency, or Lambdas without this setting wouldn't be able to run at all; that is what this error is saying.
You have 2 options:
Reduce the amount of reserved/provisioned concurrency you have for this and your other Lambda functions in the region, or just don't specify any reserved/provisioned concurrency if this is only an experiment (a small sketch follows the links below).
Request a limit increase with AWS Support.
Some reading material:
What is reserved concurrency: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
Default limits (including concurrency) for Lambda: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
How to request an increase of concurrency limits: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-concurrency-limit-increase/
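For the first option, here is a minimal sketch of checking how much unreserved concurrency is left and removing the reserved concurrency from a function, using the AWS SDK for JavaScript v3 (the region and function name are placeholders):

import {
  LambdaClient,
  GetAccountSettingsCommand,
  DeleteFunctionConcurrencyCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" }); // placeholder region

// Check the account-wide limit and how much of it is still unreserved.
const settings = await lambda.send(new GetAccountSettingsCommand({}));
console.log(
  settings.AccountLimit?.ConcurrentExecutions,            // e.g. 1000
  settings.AccountLimit?.UnreservedConcurrentExecutions,  // must not drop below the minimum
);

// Remove the reserved concurrency from a function that doesn't need it (placeholder name).
await lambda.send(new DeleteFunctionConcurrencyCommand({ FunctionName: "my-laravel-function" }));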

Create_Failed S3BatchProcessor, AWS Lambda

I am running cdk deploy in my Textract pipeline folder for large document processing. However, when I run this program I get the error below.
The error
| CREATE_FAILED | AWS::Lambda::Function | S3BatchProcessor6C619AEA
Resource handler returned message: "Specified ReservedConcurrentExecutions for function decreases account's UnreservedConcurrentExecution below its minimum value of [10]. (Service: Lambda, Status Code: 400, Request ID: 7f6d1305-e248-4745-983e-045eccde562d)" (RequestToken: 9c84827d-502e-5697-b023-e0be45f8d451, HandlerErrorCode: InvalidRequest)
By default, AWS provides an account concurrency limit of at most 1000.
In your case, the reserved concurrencies of all the Lambdas in your account leave less than the minimum UnreservedConcurrentExecution of 10, i.e. the following must hold:
1000 - (sum of reservedConcurrency across all Lambdas) >= 10
This is causing the deployment failure, as you're trying to exceed the concurrency limit.
There can be two solutions here:
Reduce the reserved concurrency of your Lambdas so that the above inequality holds (see the CDK sketch below), or
Raise the account concurrency limit by contacting AWS Support (refer to the limit-increase link in the previous answer).
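Since the stack is deployed with CDK, the first option amounts to lowering, or dropping, reservedConcurrentExecutions on the function definition. A minimal sketch in CDK TypeScript; the stack class name, runtime, handler and asset path are illustrative, and only the reservedConcurrentExecutions line is the point:

import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";

class LargeDocumentStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambda.Function(this, "S3BatchProcessor", {
      runtime: lambda.Runtime.PYTHON_3_9,
      handler: "index.handler",
      code: lambda.Code.fromAsset("lambda"),
      // Keep 1000 - (sum of all reserved concurrency) >= 10,
      // or omit this property so the function draws from the unreserved pool.
      reservedConcurrentExecutions: 10,
    });
  }
}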

Data too large ElasticSearch issue along with Readiness probe failed

We have set up an EFK stack for our project, and since yesterday Kibana seems to be down. When we initially troubleshot it, we found the following errors:
Readiness probe failed: Error: Got HTTP code 503 but expected a 200 & Readiness probe failed: Error: Got HTTP code 000 but expected a 200
Later we found the same issue with the Elasticsearch pod as well. Along with this, we found the following issue with the data request limit:
FATAL
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent]
Data too large, data for [indices:admin/template/get] would be
[1036909172/988.8mb], which is larger than the limit of
[1020054732/972.7mb], real usage: [1036909056/988.8mb], new bytes
reserved: [116/116b], usages [request=0/0b, fielddata=420/420b,
in_flight_requests=67310/65.7kb, model_inference=0/0b,
eql_sequence=0/0b,
accounting=110294544/105.1mb]","bytes_wanted":1036909172,"bytes_limit":1020054732,"durability":"PERMANENT"}],"type":"circuit_breaking_exception","reason":"[parent]
Data too large, data for [indices:admin/template/get] would be
[1036909172/988.8mb], which is larger than the limit of
[1020054732/972.7mb], real usage: [1036909056/988.8mb], new bytes
reserved: [116/116b], usages [request=0/0b, fielddata=420/420b,
in_flight_requests=67310/65.7kb, model_inference=0/0b,
eql_sequence=0/0b,
accounting=110294544/105.1mb]","bytes_wanted":1036909172,"bytes_limit":1020054732,"durability":"PERMANENT"},"status":429}
We have tried changing the READINESS_PROBE_TIMEOUT, initial delay, timeout, probe period, success threshold, and failure threshold. We also tried increasing the indices breaker limit, but it doesn't take effect; the error still shows the old limits. We tried fixing the circuit_breaking_exception by adding ES_JAVA_OPTS values as well.
Nothing seems to be working; any help would be appreciated.
The same phenomenon occurred during our service operation. The issue comes down to a memory shortage, so there are several ways to approach it:
Physical memory expansion (scale out)
Add equipment/nodes, since the available memory is insufficient.
Lower the load through monitoring
If circuit_breaking_exception keeps appearing in the logs, put monitoring in place that lowers the load (a small monitoring sketch follows below).
Setting ES_JAVA_OPTS
You can control the JVM memory usage this way, but it's meaningless if the machine doesn't have enough physical memory.
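For the monitoring point, here is a small sketch that watches JVM heap pressure with the Elasticsearch JavaScript client (v8-style API; the endpoint is a placeholder). If heap_used_percent keeps sitting near the parent breaker limit (95% of the heap by default), that confirms the nodes simply need more memory:

import { Client } from "@elastic/elasticsearch";

const client = new Client({ node: "http://elasticsearch:9200" }); // placeholder endpoint

// Log per-node JVM heap usage; the parent circuit breaker trips when heap
// pressure gets too high, which is what produces the "Data too large" error.
const stats = await client.nodes.stats({ metric: ["jvm"] });
for (const [id, node] of Object.entries(stats.nodes)) {
  console.log(id, `${node.jvm?.mem?.heap_used_percent}% heap used`);
}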

How To Prevent DynamoDb exception ProvisionedThroughputExceededException in Laravel

I am updating a certain number of records over a long period of time, and I have no certainty about the timing at which the records will be produced. Sometimes, when many records are produced at the same time, I get an error log entry saying that I hit the ProvisionedThroughputExceededException.
I'd like to prevent this exception from happening, or at least be able to catch it (and then re-throw it so that I don't alter the logic), but all I get is the error log below:
[2019-02-12 15:50:48] local.ERROR: Error executing "UpdateItem" on "https://dynamodb.eu-central-1.amazonaws.com"; AWS HTTP error: Client error: `POST https://dynamodb.eu-central-1.amazonaws.com` resulted in a `400 Bad Request` response:
The log continues with a little more detail:
ProvisionedThroughputExceededException (client): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. -
{
"__type": "com.amazonaws.dynamodb.v20120810#ProvisionedThroughputExceededException",
"message": "The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API."
}
{"exception":"[object] (Aws\\DynamoDb\\Exception\\DynamoDbException(code: 0): Error executing \"UpdateItem\" on \"https://dynamodb.eu-central-1.amazonaws.com\"; AWS HTTP error: Client error: `POST https://dynamodb.eu-central-1.amazonaws.com` resulted in a `400 Bad Request` response:
So, the exception was thrown, but it looks like it is already caught somewhere, while I'd love to catch it myself, even if only to keep track of it, and possibly to avoid the exception altogether.
Is there a way to do so?
To prevent the exception, the obvious answer would be "use autoscaling on the DynamoDB capacity". And that's what I did, with a certain degree of success: when a spike in requests arose, I still got the exception, but on average autoscaling worked pretty well. Here is the CloudFormation snippet for autoscaling:
MyTableWriteScaling:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 250
    MinCapacity: 5
    ResourceId: !Join ["/", ["table", !Ref myTable ]]
    ScalableDimension: "dynamodb:table:WriteCapacityUnits"
    ServiceNamespace: "dynamodb"
    RoleARN: {"Fn::GetAtt": ["DynamoDbScalingRole", "Arn"]}

WriteScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: !Join ['-', [!Ref 'AWS::StackName', 'MyTable', 'Write', 'Scaling', 'Policy']]
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref MyTableWriteScaling
    ScalableDimension: dynamodb:table:WriteCapacityUnits
    ServiceNamespace: dynamodb
    TargetTrackingScalingPolicyConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: DynamoDBWriteCapacityUtilization
      ScaleInCooldown: 1
      ScaleOutCooldown: 1
      TargetValue: 60
  DependsOn: MyTableWriteScaling
That said, I still had the Exception. I knew that the throttled requests would eventually be written, but I was looking for a way to prevent the exception, since I could not catch it.
The way to do it was introduced by Amazon on November 28 and it is DynamoDB on demand.
Quite usefully, in the announcement we read:
DynamoDB on-demand is useful if your application traffic is difficult to predict and control, your workload has large spikes of short duration, or if your average table utilization is well below the peak.
Configuring on-demand in CloudFormation couldn't be easier:
HotelStay:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    ...
Changing the BillingMode and removing the ProvisionedThroughput prevented this kind of exception from being thrown; they're just gone forever.
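On the original "can I catch it myself" question, here is a minimal sketch of catching the throttling error and re-throwing it, shown with the AWS SDK for JavaScript v3 as an illustration rather than the asker's Laravel/PHP setup (the table name, key and attributes are placeholders):

import {
  DynamoDBClient,
  UpdateItemCommand,
  ProvisionedThroughputExceededException,
} from "@aws-sdk/client-dynamodb";

// maxAttempts lets the SDK retry throttled requests with backoff before giving up.
const client = new DynamoDBClient({ region: "eu-central-1", maxAttempts: 10 });

try {
  await client.send(new UpdateItemCommand({
    TableName: "HotelStay",                      // placeholder table
    Key: { id: { S: "booking-123" } },           // placeholder key
    UpdateExpression: "SET #s = :s",
    ExpressionAttributeNames: { "#s": "status" },
    ExpressionAttributeValues: { ":s": { S: "confirmed" } },
  }));
} catch (err) {
  if (err instanceof ProvisionedThroughputExceededException) {
    // Track/log the throttle here, or queue the write for a later retry...
  }
  throw err; // ...then rethrow so the surrounding logic is not altered
}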

How can I resolve HTTPSConnectionPool(host='www.googleapis.com', port=443) Max retries exceeded with url (Google Cloud Storage)

I have created an API using Django Rest Framework.
The API communicates with GCP Cloud Storage to store profile images (around 1 MB each).
While performing load testing (around 1000 requests/s) against that server, I encountered the following error.
It seems to be a GCP Cloud Storage max-request issue, but I am unable to figure out a solution.
Exception Type: SSLError at /api/v1/users
Exception Value: HTTPSConnectionPool(host='www.googleapis.com', port=443): Max retries exceeded with url: /storage/v1/b/<gcp-bucket-name>?projection=noAcl (Caused by SSLError(SSLError("bad handshake: SysCallError(-1, 'Unexpected EOF')",),))
Looks like you have the answer to your question here:
"...buckets have an initial IO capacity of around 1000 write requests
per second...As the request rate for a given bucket grows, Cloud
Storage automatically increases the IO capacity for that bucket"
Therefore it auto-scales automatically. The only thing is that you need to increase the request rate gradually, as described here:
"If your request rate is expected to go over these thresholds, you should start with a request rate below or near the thresholds and then double the request rate no faster than every 20 minutes"
It looks like your bucket should get an increase of I/O capacity that will make this work in the future.
You are right at the edge (1000 req/s), and I guess this is what is causing your error.
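While ramping the rate up gradually, you can also let the client absorb transient failures instead of surfacing them. Here is a minimal sketch with the Node.js Cloud Storage client as an illustration (the asker is on Python/Django; the bucket name, retryOptions values, and the client-reuse pattern are assumptions, not something from the answer):

import { Storage } from "@google-cloud/storage";

// autoRetry/maxRetries make the client retry transient failures with
// exponential backoff instead of throwing on the first error.
const storage = new Storage({
  retryOptions: { autoRetry: true, maxRetries: 5 },
});
const bucket = storage.bucket("my-profile-images"); // placeholder bucket name

// Reuse the single client/bucket handle across requests instead of creating
// a new one per upload.
export async function saveProfileImage(userId: string, image: Buffer): Promise<void> {
  await bucket.file(`profiles/${userId}.jpg`).save(image);
}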
