How are Chainlink requests processed when forking mainnet in Brownie? - chainlink

From the Brownie chainlink-mix, why does the PriceFeed work fine on mainnet-fork while the ApiConsumer's request is never fulfilled on the same network?
Are prices cached on the Aggregator?

When you fork mainnet, it literally forks the blockchain state at that point in time. So when you query the Price Feed Aggregator contract, you get the price as of the time of forking.
However, because there are no Chainlink oracles connected to your forked chain, there's no way to fulfill a real API or VRF request... and the latest price data in the Price Feed contracts won't update either.
Check out the tests to see how mocks are used for local environments where there is no connectivity to Chainlink nodes.

Related

What could cause AWS S3 MultiObjectDeleteException?

In our Spring Boot app, we are using AmazonS3Client.deleteObjects() to delete multiple objects in a bucket. From time to time, the request throws MultiObjectDeleteException and one or more objects won't be deleted. It is not frequent, about 5 failures among thousands of requests, but it could still be a problem. What could lead to the exception?
I also have no idea how to debug this. The log from our app follows the data flow but doesn't show much useful information; it suddenly throws the exception after the request. Please help.
Another thing is that the exception comes back with a 200 code. How could this be possible?
com.amazonaws.services.s3.model.MultiObjectDeleteException: One or
more objects could not be deleted (Service: null; Status Code: 200;
Error Code: null; Request ID: xxxx; S3 Extended Request ID: yyyy;
Proxy: null)
TLDR: Some rate of errors is normal and the application should handle them. 500 and 503 errors are retriable. The MultiObjectDeleteException should provide a clue: getDeletedObjects() gives you the list of objects that were deleted, and the rest you should mostly retry later.
The MultiObjectDeleteException documentation says that the exception should include an explanation of the issue that caused the error:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/MultiObjectDeleteException.html
Exception for partial or total failure of the multi-object delete API, including the errors that occurred. For successfully deleted objects, refer to getDeletedObjects().
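For illustration, here is a minimal sketch of handling a partial failure with the AWS SDK for Java v1 and inspecting both lists; the bucket and key names are made up:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;

public class BatchDeleteExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        DeleteObjectsRequest request = new DeleteObjectsRequest("my-bucket") // hypothetical bucket
                .withKeys("reports/a.csv", "reports/b.csv");                 // hypothetical keys
        try {
            s3.deleteObjects(request);
            // No exception: every key in the request was deleted.
        } catch (MultiObjectDeleteException e) {
            // Partial failure: some keys were deleted, some were not.
            e.getDeletedObjects().forEach(d ->
                    System.out.println("deleted: " + d.getKey()));
            e.getErrors().forEach(err ->
                    System.err.println("failed: " + err.getKey()
                            + " (" + err.getCode() + ": " + err.getMessage() + ")"));
            // Collect the failed keys from getErrors() and retry them later.
        }
    }
}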
According to https://aws.amazon.com/s3/sla/ AWS does not guarantee 100% availability. Again, according to that document:
• “Error Rate” means: (i) the total number of internal server errors returned by the Amazon S3 Service as error status “InternalError” or “ServiceUnavailable” divided by (ii) the total number of requests for the applicable request type during that 5-minute interval. We will calculate the Error Rate for each Amazon S3 Service account as a percentage for each 5-minute interval in the monthly billing cycle. The calculation of the number of internal server errors will not include errors that arise directly or indirectly as a result of any of the Amazon S3 SLA Exclusions.
Usually we think about an SLA in terms of downtime, so it is easy to assume that AWS means the same. That's not the case here: some number of errors is normal and should be expected. In many documents AWS suggests that you implement a combination of slow-downs and retries, e.g. here https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorBestPractices.html
Some 500 and 503 errors are, again, part of normal operation: https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
The document specifically says:
Because Amazon S3 is a distributed service, a very small percentage of 5xx errors is expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can be retried. This means that it's a best practice to have a fault-tolerance mechanism or to implement retry logic for any applications making requests to Amazon S3. By doing so, S3 can recover from these errors.
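One way to follow that advice is to wrap the call in a simple retry loop with exponential backoff. The sketch below (AWS SDK for Java v1; the attempt limit and delays are arbitrary) retries only 5xx responses:
import com.amazonaws.services.s3.model.AmazonS3Exception;

public class S3Retry {
    // Hedged sketch: retry an S3 call on 5xx errors with exponential backoff.
    public static void withRetries(Runnable s3Call) throws InterruptedException {
        int maxAttempts = 5;        // arbitrary limit
        long backoffMillis = 200;   // arbitrary initial delay
        for (int attempt = 1; ; attempt++) {
            try {
                s3Call.run();
                return;
            } catch (AmazonS3Exception e) {
                // Retry only InternalError (500) and ServiceUnavailable/SlowDown (503).
                if (e.getStatusCode() < 500 || attempt == maxAttempts) {
                    throw e;
                }
                Thread.sleep(backoffMillis);
                backoffMillis *= 2; // back off before the next attempt
            }
        }
    }
}
Note that the SDK client already has built-in retries you can tune via ClientConfiguration (e.g. setMaxErrorRetry), so a hand-rolled loop like this is mostly useful around the MultiObjectDeleteException case above, where you retry only the failed keys.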
Edit: A question was added later: "How is it possible that the API call returned status code 200 while some objects were not deleted?"
And the answer to that is very simple: this is how the API is defined. From the Java SDK reference page for deleteObjects you can go directly to the AWS API documentation page (https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html), which says that this is the expected behavior. Status code 200 means that the high-level API call succeeded and was able to request the deletion of the listed objects. Some of those per-object deletions may fail, but the API call reports them in the response.
Why does the Java API throw an exception then? The authors of the AWS Java SDK translated the response into the Java programming language, and they clearly reasoned that while the AWS API works with a non-zero error rate as part of the service agreement, Java developers are more used to anything short of 100% success ending in an exception.
Both of the abstractions are well documented, and it is the programmer who is responsible for a precise implementation. The engineering rule is cheap, fast, reliable - choose two. AWS was able to provide a service that has all three, with the reasonable concession that part of the reliability is implemented on the client side - retries and slow-downs.

Chainlink's dynamic upkeep registration example failing: UpkeepIDConsumerExample.registerAndPredictID errored: execution reverted

I'm playing around with Chainlink's "Register an Upkeep using your own deployed contract" example: https://docs.chain.link/docs/chainlink-keepers/register-upkeep/#register-an-upkeep-using-your-own-deployed-contract
However, once the UpkeepIDConsumerExample is deployed with the Link Token Contract, Registry and Registrar parameters for the respective chain, I am unable to use the UpkeepIDConsumerExample.registerAndPredictID function as it fails.
(Gas estimation errored with the following message (see below). The transaction execution will likely fail. Do you want to force sending?
Internal JSON-RPC error. { "code": -32000, "message": "execution reverted" })
I've tried on Rinkeby, Mumbai and Polygon Mainnet, in case the testnets weren't live yet. I've used the parameters suggested by the docs for calling the function, and I have sufficient LINK in my MetaMask.
Is it correct to use these: https://docs.chain.link/docs/link-token-contracts/ as the Link Token Interface parameter?
Thanks!
I was able to make this work (though I tried only on Goerli) using the code from the official docs that you linked to.
For the benefit of others who read this post, I will break it down into detailed steps - perhaps more than you needed for an answer!
Prerequisites
Get some LINK tokens in your browser wallet
Deploy a Chainlink Keepers-compatible contract -- this is your Upkeep contract, the one that will be automated. Here is the example Upkeep smart contract that you can copy and deploy right away. You can use 10 as the interval -- that's 10 seconds, so you can see the upkeep happen quickly. Note this Upkeep's address.
Next, deploy the UpkeepIDConsumerExample from the example in the docs, which is the smart contract that programmatically registers your Upkeep Contract. This contract handles registering the Upkeep Contract you deployed in Step 2 with Chainlink's Keepers network, so that the Keepers Network can automate the running of functions in your Upkeep contract. Note this contract's address.
Making it work
From your wallet, which should now have LINK in it, send 5 LINK to the deployed UpkeepIDConsumerExample address. This is funding it will need to send onwards to your Upkeep (Upkeeps need funding so they can pay the Keepers Network for the compute work they do in performing the automations).
Using Remix, connect to the right network and then connect to your deployed UpkeepIDConsumerExample contract by using its address.
When Remix shows your contract and its interactions in the DEPLOYED CONTRACTS section of the UI, fill in the parameters for the registerAndPredictID() function using this table in the docs.
While following the table referred to above, please note:
upkeepContract is the Upkeep Contract's address - the one you deployed in Step 2 of Prerequisites
gasLimit - I used 3000000
adminAddress - this can just be your wallet address; the one that you deployed from, are sending LINK from, etc.
Amount - 5 LINK expressed in Juels (LINK's equivalent of Wei), so 5000000000000000000
Sender - this is the UpkeepIDConsumerExample's address. In this example it's the calling contract itself.
Run registerAndPredictID() with the params as per the previous step. It should run successfully.
Verify by going to the Keepers App and checking under "My Upkeeps" for a new Upkeep that you just programmatically created.
Cleanup
In the Keepers App, note the LINK balance of the Upkeep you just created and funded with the 5 LINK -- it may be a bit less than the 5 LINK you sent, because the Keepers network may have already run your Upkeep (we set the interval to 10 seconds in Step 2 of Prerequisites).
And on Etherscan, check whether UpkeepIDConsumerExample has any LINK left in it (it shouldn't, because the 5 LINK you sent from your wallet to this contract was transferred onwards when you ran registerAndPredictID()).
Hope this helps!

Azure Functions - Java CosmosClientBuilder slow on initial connection

We're using Azure Functions with the Java SDK and connect to Cosmos DB using the following Java API:
CosmosClient client = new CosmosClientBuilder()
.endpoint("https://my-cosmos-project-xyz.documents.azure.com:443/")
.key(key)
.consistencyLevel(ConsistencyLevel.SESSION)
.buildClient();
This buildClient() starts a connection to CosmosDB, which takes 2 to 3 seconds.
The subsequent database queries using that client are fast.
Only this first setup of the connection is pretty slow.
We keep the CosmosClient as a static variable, so we can reuse it between multiple http requests that go to our function.
But once the function goes cold (when Azure shuts it down after a few minutes unused), the static variable is lost, and the client has to reconnect when the function starts up again.
Is there a way to make this initial connection to cosmos DB faster?
Or do you think we need to increase the time a function stays online, if we need faster response times?
This is expected behavior; see https://youtu.be/McZIQhZpvew?t=850.
The first request a client makes needs to go through a warm-up step. This warm-up consists of fetching the account information, container information, and routing and partitioning information in order to know where to route requests (as you experienced, further requests do not pay this extra latency). Hence the importance of maintaining a singleton instance.
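To make sure that warm-up cost is paid at most once per host instance, the usual pattern is a lazily initialized static holder like the sketch below (it reuses the builder call from the question; reading the key from an app setting named COSMOS_KEY is just an assumption):
import com.azure.cosmos.ConsistencyLevel;
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;

public final class CosmosClientHolder {
    // One client per Functions host instance, reused across invocations.
    private static CosmosClient client;

    private CosmosClientHolder() {}

    public static synchronized CosmosClient get() {
        if (client == null) {
            // The warm-up cost is paid only on the first invocation after a cold start.
            client = new CosmosClientBuilder()
                    .endpoint("https://my-cosmos-project-xyz.documents.azure.com:443/")
                    .key(System.getenv("COSMOS_KEY")) // assumed app setting
                    .consistencyLevel(ConsistencyLevel.SESSION)
                    .buildClient();
        }
        return client;
    }
}
This does not remove the warm-up itself; it only guarantees you pay it once per instance, which is essentially what your static variable already does.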
In some Functions plans (Consumption), instances get de-provisioned if there is no activity. In that case, any existing instance of the client is destroyed, so when a new instance is provisioned, your first request pays this warm-up cost again.
There is currently no workaround I'm aware of in the Java SDK, but this should not affect your P99 latency since it's just the first request on a cold client.
Hope this and the video help explain the reason.

Laravel - Efficiently consuming large external API into database

I'm attempting to consume the PayPal API transaction endpoint.
I want to grab ALL transactions for a given account. This number could potentially be in the tens of millions. For each of these transactions, I need to store it in the database for processing by a queued job. I've been trying to figure out the best way to pull this many records with Laravel. PayPal has a maximum request limit of 20 items per page.
I initially started with the idea of creating a job when a user gives me their API credentials: it gets the first 20 items and processes them, then dispatches another job from the first one containing the starting index to use. This would loop until it errored out. This doesn't seem to be working well, though, as it causes a gateway timeout when saving those API credentials, and the request to the API eventually times out (before getting all transactions). I should also mention that the total number of transactions is unknown, so chaining doesn't seem to be the answer, as there is no way to know how many jobs to dispatch...
Thoughts? Is getting API data best suited for a job?
Yes, a job is the way to go. I'm not familiar with the PayPal API, but it seems requests are rate limited (PayPal rate limiting), so you might want to delay your API requests a bit. You can also make a class to monitor your API request consumption by tracking the latest requests you made; in the job you can then determine when to fire the next request and record it in the database...
My humble advice:
Please don't pull all the data; your database will get bloated quickly and you'll need to scale each time you add a new account, which is not an easy task.
You could dispatch the same job at the end of the first job, having it query your current database to find the starting index of the transactions for that run.
That way, even if your job errors out, you can dispatch it again and it will resume from where it previously ended.
Maybe you will need to link your app with another data engine like AWS. Anyway, I think the best idea is to create an API, pull only the most important data, indexed, and keep all the big data behind another endpoint where you can reach it if you need to.

Policy for EC2 and ELB based on number of transcoding processes on each instance

I need to transcode a massive number of audio files on a series of auto-scaling instances behind an ELB. The core of the transcoding script is based on Node.js and FFmpeg. Queuing is impossible because users are not patient! I need to control the number of transcodings on each instance to avoid the 100% CPU problem.
My questions:
A - Is there any way to define a policy for the ELB to control the number of connections to each instance? If not, is there any parameter to monitor average CPU utilization on each instance and add a new instance after a trigger level is reached? (I have found this slide but it is not complete.) If it adds a new instance on the fly, how long does it take for the new instance to be fully operative and able to serve users (i.e. does auto scaling have long latency)?
B - Is there an alternative architecture to achieve the same transcoding solution? (I have included my current idea as a drawing.) I cannot use third-party solutions like Transcoding.com; I need to have my own native solution.
C - I use Node.js on each instance and show progress to the user's browser over a socket. From the browser side, I regularly send Ajax requests to the Node.js side to get the progress information. Does this mechanism have a problem with sticky sessions?
Thank you.
If your scaling needs to take place in response to individual requests on the server (i.e. a single request would require X machines to execute in the desired timeframe), then autoscaling is probably not going to be the answer for you, as you will have a delay while the new instances become active. You could also potentially pay much more to run the service in such a manner, as you could scale up and down a number of times in response to individual requests, being charged a one-hour minimum for each instance that is started.
If, however, you are concerned with autoscaling to, for example, increase your fleet by 50% during peak times when request volume spikes (i.e. you already have many servers serving many requests, but you just need to keep latency down during peak hours by adding more instances), then autoscaling should work just fine for you.
There are any number of triggers you can configure to control scaling events in such a case.
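One such trigger, for example, is an alarm on a custom CloudWatch metric that each instance publishes with its number of in-flight transcodes. Here is a sketch using the AWS SDK for Java (the namespace, metric and dimension names are made up; the Node.js SDKs expose the same PutMetricData call):
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import com.amazonaws.services.cloudwatch.model.StandardUnit;

public class TranscodeLoadReporter {
    private final AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();

    // Publish how many transcodes this instance is currently running;
    // a scaling policy alarmed on this metric can then add or remove instances.
    public void report(String instanceId, int activeTranscodes) {
        MetricDatum datum = new MetricDatum()
                .withMetricName("ActiveTranscodes")   // hypothetical metric name
                .withUnit(StandardUnit.Count)
                .withValue((double) activeTranscodes)
                .withDimensions(new Dimension()
                        .withName("InstanceId")
                        .withValue(instanceId));
        cloudWatch.putMetricData(new PutMetricDataRequest()
                .withNamespace("Transcoding")         // hypothetical namespace
                .withMetricData(datum));
    }
}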
ELB does support session affinity ("sticky" sessions).
You will want to use an AWS SDK. Normally you'd use one of the official ones for C#, Ruby, etc. Since you're on Node.js, try using this SDK on GitHub to monitor, throttle, and create instance connection pools, etc.:
https://github.com/awssum/awssum
There's also aws2js:
https://github.com/SaltwaterC/aws2js
