In the Chainlink Keepers documentation there is a configuration value called checkGasLimit, with 6,500,000 as the default value.
Since the computation in checkUpkeep is expected to be performed off-chain, why is there a gas limit (checkGasLimit) at all when the computation is off-chain?
Or is checkGasLimit for the situation where checkUpkeep is supposed to modify some state?
You got it!
checkUpkeep can be used to change the state of the blockchain. The Chainlink nodes will call the checkUpkeep function to see whether it returns true - and if that call costs gas, it will use gas.
The purpose of checkGasLimit, then, is to make sure they don't use too much gas. Per the docs:
The maximum amount of gas that can be used by your checkUpkeep for off-chain computation.
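For context, here is a minimal sketch of a Keepers-compatible contract, assuming the standard checkUpkeep/performUpkeep signatures; the interval-based counter is just an illustrative placeholder, not something from the docs.

pragma solidity ^0.8.7;

contract IntervalCounterUpkeep {
    uint256 public counter;
    uint256 public immutable interval;
    uint256 public lastTimeStamp;

    constructor(uint256 _interval) {
        interval = _interval;
        lastTimeStamp = block.timestamp;
    }

    // Run by the Keeper nodes; its gas usage is bounded by checkGasLimit.
    // Note that it is NOT restricted to `view`, so it could also modify state.
    function checkUpkeep(bytes calldata)
        external
        returns (bool upkeepNeeded, bytes memory performData)
    {
        upkeepNeeded = (block.timestamp - lastTimeStamp) > interval;
        performData = "";
    }

    // Executed on-chain once checkUpkeep has reported upkeepNeeded == true.
    function performUpkeep(bytes calldata) external {
        lastTimeStamp = block.timestamp;
        counter += 1;
    }
}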
There are 3 related questions here:
I can look at an account's NEAR balance before and after a near-api-js account.functionCall to determine how much gas the tx took, but is there a better way to extrapolate that info from the metadata of the call?
Also, is there a cap for how much gas I can feed into a function call?
From within a function call (in Rust), is it possible to use env:: to see how much gas is remaining in the course of operations? I'm trying to set up the control flow so that the function will quit working in time and never throw a 'ran out of gas' error, but only do as much work as it has gas to do.
I can look at an account's NEAR balance before and after a near-api-js account.functionCall to determine how much gas the tx took, but is there a better way to extrapolate that info from the metadata of the call?
Well, to compute the transaction fee you can query the transaction status (tx/EXPERIMENTAL_tx_status JSON RPC endpoints) and sum up the tokens_burnt from all the receipt_outcomes and transaction_outcome. If you want to profile the gas usage, you should sum up gas_burnt from all the receipt_outcomes and transaction_outcome.
Also, is there a cap for how much gas I can feed into a function call?
Yes, on mainnet genesis it is set to 1 PetaGas (1_000 TeraGas).
From within a function call (in Rust), is it possible to use env:: to see how much gas is remaining in the course of operations?
near_sdk::env::prepaid_gas() minus near_sdk::env::used_gas()
I want to use Chainlink to get the price of an asset at a given point in time in the past (which I will refer to as "expiry") in order to settle options.
To get a historical price, Chainlink requires you to pass a roundId as an argument to the getRoundData function. In order to verify that the round implied by the given roundId includes the expiry time, my initial thought was to check inside the smart contract whether startedAt <= expiry && timestamp >= expiry for the received round data.
In order to assess whether this is a feasible approach, I would like to better understand the concept of rounds in Chainlink:
Can I assume that rounds always span adjacent time intervals, i.e. if say a round starts at unixTime t1 and ends at t2, can I assume that the next round will start at t2?
getRoundData(roundId) returns startedAt and timestamp. Does the timeStamp represent the end of the given roundId?
What exactly is the answeredInRound that I receive as output from getRoundData?
Any help is highly appreciated.
There are currently 2 "trigger" parameters that kick off Chainlink node updates. If the real-world price of an asset deviates beyond a set threshold, it will trigger all the nodes to do an update. Right now, most Ethereum data feeds have a 0.5% deviation threshold. If the price stays within the deviation threshold, an update will only be triggered every X minutes/hours. You can see these parameters on data.chain.link.
timeStamp (updatedAt) - the timestamp of when the round was updated.
answeredInRound is a legacy variable from when the Chainlink price feeds were on the Flux Aggregator model instead of OCR (Off-Chain Reporting), and it was possible for a price to be updated slowly and leak into the next "round". This is no longer the case.
More info is available at Data Feeds API Reference.
Additionally, since you want to use Chainlink to get the price of an asset at a given point in time in the past, there is an open-source community project that solves the same issue.
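To make the check from the question concrete, here is a minimal, hedged sketch assuming the standard AggregatorV3Interface getRoundData signature; it only verifies startedAt <= expiry <= updatedAt for a single round and does not by itself answer whether consecutive rounds cover time without gaps.

pragma solidity ^0.8.7;

interface AggregatorV3Interface {
    function getRoundData(uint80 _roundId)
        external
        view
        returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
}

contract ExpiryPriceCheck {
    AggregatorV3Interface public immutable feed;

    constructor(address _feed) {
        feed = AggregatorV3Interface(_feed);
    }

    // Returns the answer for `roundId` if that round plausibly covers `expiry`,
    // i.e. startedAt <= expiry <= updatedAt; reverts otherwise.
    function priceAt(uint80 roundId, uint256 expiry) external view returns (int256) {
        (, int256 answer, uint256 startedAt, uint256 updatedAt, ) = feed.getRoundData(roundId);
        require(updatedAt > 0, "round not complete");
        require(startedAt <= expiry && updatedAt >= expiry, "round does not cover expiry");
        return answer;
    }
}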
Contrary to Ethereum, which uses RANDAO (possibly enhanced with a VDF), Polkadot uses a verifiable random function (VRF) to shuffle validators and select potential block proposers for certain slots. Where does the randomness come from, i.e. how does the randomness work specifically?
A verifiable random function is a function that, in pseudocode, can be expressed like so:
(RESULT, PROOF) = VRF(SECRET, INPUT)
That is, for some secret and some input (which can be public), the result is a tuple of RESULT and PROOF, where PROOF can be used by outside observers to verify legitimacy of the VRF RESULT.
In other words, making a "VRF roll" results in a random number and proof that you got that random number, and didn't just pick it.
Every slot (approx. every 6 seconds) every validator will run the VRF function. The SECRET will be their VRF key, a special key to be used only for this, generated by the validator and kept secret. The INPUT is either a specific value from the genesis block if fewer than 2 epochs exist in the chain, or a hash of all the VRF results in the past 2 epochs.
Once a validator has executed the VRF, the RESULT is compared to a THRESHOLD value which is defined by the protocol. If the RESULT is less than THRESHOLD, the validator is a valid block-proposer candidate for that slot. Otherwise, the validator skips that slot.
This means it is possible for there to be multiple validators who are block-producing candidates for a slot, in which case the block that gets picked up by other nodes is the one that prevails, as long as it is on the chain with the most recent finalized block per the GRANDPA finality gadget. A situation in which no block producer exists for a slot is also possible, in which case the AURA consensus takes over. AURA is basically a fallback which picks a validator for each such block. It runs in parallel to BABE and only matters when a slot has no block producers; otherwise it is ignored.
I know that "how to generate a random number" in Solidity is a very common question. However, after reading the great majority of answers I did not find one that fits my case.
A short description of what I want to do: I have a list of objects that each have a unique id, a number. I need to produce a list that contains 25% of those objects, randomly selected each time the function is called. The person calling the function cannot be depended on to provide input that would predictably influence the resulting list.
The only answer I found that gives a secure random number was here. However, it depends on input coming from the participants and it is meant to address a gambling scenario, so I cannot use it in my implementation.
All other answers mention that the generated number is going to be predictable, and some of them even depend on a single input to produce a single random number. Once again, that does not help me.
Summarising, I need a function that will give me multiple, non-predictable, random numbers.
Thanks for any help.
Here is an option:
function rand()
    public
    view
    returns (uint256)
{
    // Mix several block/transaction fields into a hash-based seed.
    // (`now` in the original code is the pre-0.7 alias of block.timestamp.)
    uint256 seed = uint256(keccak256(abi.encodePacked(
        block.timestamp + block.difficulty +
        ((uint256(keccak256(abi.encodePacked(block.coinbase)))) / (block.timestamp)) +
        block.gaslimit +
        ((uint256(keccak256(abi.encodePacked(msg.sender)))) / (block.timestamp)) +
        block.number
    )));
    // Equivalent to seed % 1000, i.e. a value in the range 0-999.
    return (seed - ((seed / 1000) * 1000));
}
It generates a random number between 0 and 999, and it is basically impossible to predict (it has been used by some famous dapps, such as Fomo3D).
Smart contracts are deterministic, so basically every function is predictable: if we know the input, we can (and should be able to) know the output. And you cannot get a random number without any input - almost every language generates a "pseudo-random number" using the clock. This means you will not get a random number on the blockchain using a simple method.
There are many interesting methods to generate random numbers in a smart contract - using a DAO, an oracle, etc. - but they all have trade-offs.
So, in conclusion, there is no method that does exactly what you are looking for. You need to sacrifice something.
:(
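As one concrete instance of the oracle approach mentioned above, here is a hedged sketch using Chainlink VRF (the v1-style VRFConsumerBase); the coordinator address, LINK token address, keyHash and fee are network-specific values supplied by the deployer and are assumptions for illustration, not part of the original answer.

pragma solidity ^0.8.7;

import "@chainlink/contracts/src/v0.8/VRFConsumerBase.sol";

// Sketch: ask the Chainlink VRF oracle for a verifiable random value and store it.
contract OracleRandomness is VRFConsumerBase {
    bytes32 internal immutable keyHash;
    uint256 internal immutable fee;
    uint256 public randomResult;

    constructor(address vrfCoordinator, address link, bytes32 _keyHash, uint256 _fee)
        VRFConsumerBase(vrfCoordinator, link)
    {
        keyHash = _keyHash;
        fee = _fee;
    }

    // Request a random number; the contract must hold enough LINK to pay the fee.
    function requestRandomNumber() external returns (bytes32 requestId) {
        return requestRandomness(keyHash, fee);
    }

    // Callback invoked by the VRF coordinator once the proof has been verified.
    function fulfillRandomness(bytes32, uint256 randomness) internal override {
        randomResult = randomness;
    }
}

The trade-off, as the answer says, is that you sacrifice something: here you pay LINK per request and must wait for the oracle callback.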
100% randomness is definitely impossible on Ethereum. The reason is that when distributed nodes build the blockchain from scratch, they reconstruct the state by running every single transaction ever created on the blockchain, and all of them have to arrive at the exact same final state. To make that possible, randomness is completely forbidden in the Ethereum Virtual Machine, since otherwise each execution of the exact same code could yield a different result, making it impossible to reach a common final state among all participants of the network.
That being said, there are projects like RANDAO that aim to create trustworthy pseudorandomness on the blockchain.
In any case, there are approaches to achieve pseudorandomness; two of the most important are commit-reveal techniques and using an oracle (or a combination of both).
As an example that just occurred to me: you could use Oraclize to periodically call a trusted external JSON API that returns pseudorandom numbers and verify in the contract that the call has truly been performed.
Of course, the downside of these methods is that you and/or your users will have to spend more gas executing the smart contracts, but in my opinion that is a fair price for the large gains in security.
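For illustration, here is a minimal, hedged sketch of the commit-reveal technique mentioned above; the two-phase flow, the block-number boundary and the way revealed secrets are folded into a shared seed are simplifying assumptions, not a production design.

pragma solidity ^0.8.7;

// Minimal commit-reveal sketch: each participant first commits to a hidden
// secret, then reveals it; the final seed mixes all revealed secrets, so no
// single participant can predict or fully control the outcome on their own.
contract CommitReveal {
    mapping(address => bytes32) public commitments;
    uint256 public seed;
    uint256 public revealStart;

    constructor(uint256 _revealStart) {
        revealStart = _revealStart;
    }

    // Phase 1: submit keccak256(abi.encodePacked(secret, msg.sender)).
    function commit(bytes32 commitment) external {
        require(block.number < revealStart, "commit phase is over");
        commitments[msg.sender] = commitment;
    }

    // Phase 2: reveal the secret; it is folded into the shared seed.
    function reveal(uint256 secret) external {
        require(block.number >= revealStart, "still in commit phase");
        require(
            commitments[msg.sender] == keccak256(abi.encodePacked(secret, msg.sender)),
            "commitment does not match"
        );
        delete commitments[msg.sender];
        seed = uint256(keccak256(abi.encodePacked(seed, secret)));
    }
}

The resulting seed could then feed a selection routine like the 25% sampling described in the question; the usual caveat is that the last revealer sees the current seed before deciding whether to reveal, which is one of the trade-offs mentioned above.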
Does anyone have any comment on how to choose the validation max_fail number?
As you may know, there is no unique criterion for choosing a particular value; I believe it could depend on the number of samples being used for training/validation.
However, it plays a nontrivial role in stopping the training of the neural network.
You're right, this parameter is critical for NN training. In fact, the biggest disadvantage of NNs is the presence of many critical parameters that are strongly problem-dependent, like the number of neurons and training-algorithm parameters such as the learning rate or the early-stopping criteria (as in this case). In some applications, using a value of 3 or 30 makes little difference, because after some point the NN's generalization does not improve anymore. So I suggest you try different values, including 0 and inf (i.e. no early stopping), and observe the training/validation error curves. Of course, DO NOT consider only a single run; do at least 5-10 runs for each configuration. At that point, you can get an idea of the "error landscape".
Use: nnparam.max_fail
For the trainlm training function, you could type:
net.trainParam.max_fail = 10 (if you want to increase the allowed validation failures to 10)
From the MATLAB documentation:
Maximum Validation Checks (max_fail) function parameter
max_fail is a training function parameter. It must be a strictly positive integer scalar.
max_fail is the maximum number of validation checks before training is stopped.
This parameter is used by trainb, trainbfg, trainbr, trainc, traincgb, traincgf, traincgp, traingd, traingda, traingdm, traingdx, trainlm, trainoss, trainrp, trains and trainscg.