Chainlink VRF very slow: 20+ minutes for response to fulfillRandomness on Rinkeby

I have implemented a basic Chainlink VRF function in Solidity and deployed it to the Rinkeby Ethereum test network:
function getRandomNumber() public returns (bytes32 requestId) {
    require(LINK.balanceOf(address(this)) >= fee, "Not enough LINK");
    return requestRandomness(keyHash, fee);
}

function fulfillRandomness(bytes32 requestId, uint256 randomness) internal override {
    randomResult = randomness;
    _requestId = requestId;
}
This works fine, deducts the LINK fee from my wallet, and eventually does call back fulfillRandomness with a random hex value. However, it takes forever!
I just did a test and it took 30 minutes. Could I be doing something wrong, or why is it taking so long? I know it's async and there's lots of stuff going on to fetch the random number, but at these speeds it's basically unusable, right?

The Rinkeby testnet has had some downtime issues this weekend. You can use the Kovan testnet and VRF until Rinkeby is operating normally again.

I switched to Kovan and I'm reliably getting ~1 minute VRF responses.

I set the Chainlink VRF fee to 0.1 LINK instead of 2 LINK on mainnet; how can I fix this? (already deployed)

We have deployed the VRF contract on mainnet and added it to our main contract.
But there is an issue: we set the fee to 0.1 LINK instead of 2 LINK, and now it looks like it doesn't work.
Can you give us a solution for this issue?
constructor()
    VRFConsumerBase(
        0xf0d54349aDdcf704F77AE15b96510dEA15cb7952, // VRF Coordinator
        0x514910771AF9Ca656af840dff83E8264EcF986CA  // LINK Token
    )
{
    keyHash = 0xAA77729D3466CA35AE8D28B3BBAC7CC36A5031EFDC430821C02BC31A238AF445;
    fee = 0.1 * 10 ** 18; // 0.1 LINK
}
Your only solution here is to redeploy your contracts with the correct fee set.
Code deployed to a blockchain is immutable, and since you have hardcoded the fee, it can't be changed. In the future, you could add a setter function, accessible only to the contract owner, that allows you to change the fee. An example:
function setFee(uint256 _fee) public onlyOwner {
    fee = _fee;
}
This function uses the OpenZeppelin Ownable contract to get the onlyOwner modifier.
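For illustration, here is a minimal sketch of how the pieces could fit together (the contract name RandomConsumer, the constructor parameters, and the import paths are assumptions for the example, not the asker's actual code):

pragma solidity ^0.8.0;

import "@chainlink/contracts/src/v0.8/VRFConsumerBase.sol";
import "@openzeppelin/contracts/access/Ownable.sol";

// Hypothetical consumer combining VRFConsumerBase with OpenZeppelin's Ownable
contract RandomConsumer is VRFConsumerBase, Ownable {
    uint256 public fee;

    constructor(address vrfCoordinator, address link)
        VRFConsumerBase(vrfCoordinator, link)
    {
        fee = 2 * 10 ** 18; // 2 LINK, the mainnet fee
    }

    // Only the contract owner can change the fee after deployment
    function setFee(uint256 _fee) public onlyOwner {
        fee = _fee;
    }

    function fulfillRandomness(bytes32 requestId, uint256 randomness) internal override {
        // handle the random value here
    }
}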

Caching works for one hour when it should last for days

I have created an API using .NET Core 2.0. The API is connected to an Oracle database to retrieve the data it needs. One of the functions takes too much time, so I decided to use caching in order to retrieve the data faster.
Function description: Get ranking
Caching period: Data should be renewed in cache memory each Monday
I am using IMemoryCache, but the problem is that the data is not being cached for multiple days; it only lasts for one hour, after which the data is retrieved from the database again, which takes too much time (10 s). Below is my code:
var dateNow = DateTime.Now;
int diff = 7; // if today is Monday then we should add 7 days to get the next Monday's date
if (dateNow.DayOfWeek != DayOfWeek.Monday)
{
    var daysToStartWeek = dateNow.DayOfWeek - DayOfWeek.Monday;
    diff = (7 - (daysToStartWeek)) % 7;
}
var nextMonday = dateNow.AddDays(diff).Date;
var totalDays = (nextMonday - dateNow).TotalDays;

if (_cache.TryGetValue("GetRanking", out IEnumerable<GetRankingStruct> objRanking))
{
    return Ok(objRanking);
}

var dp = new DataProvider(Configuration);
var response = dp.GetRanking(userName, asAtDate);
_cache.Set("GetRanking", response, TimeSpan.FromDays(diff));
return Ok(response);
Could it be related to the token lifetime, since that's only 1 hour?
Firstly - have you tried checking whether your worker process is being restarted? You don't specify how you are hosting your application but, obviously, if the application (worker process) is restarted, your memory cache will be empty.
If your worker process is restarting, then you could load the cache on startup.
Secondly - I believe that the implementation may choose to empty the cache due to inactivity or memory constraints. You can set the priority to never remove - https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.caching.memory.cacheitempriority?view=dotnet-plat-ext-3.1
I believe you can set this by passing a MemoryCacheOptions object to the constructor of the memory cache https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.caching.memory.memorycache.-ctor?view=dotnet-plat-ext-3.1#Microsoft_Extensions_Caching_Memory_MemoryCache__ctor_Microsoft_Extensions_Options_IOptions_Microsoft_Extensions_Caching_Memory_MemoryCacheOptions__.
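As a minimal sketch (using the question's own variable names; note that NeverRemove is actually set per entry via MemoryCacheEntryOptions rather than on the cache itself):

var cacheEntryOptions = new MemoryCacheEntryOptions()
    .SetPriority(CacheItemPriority.NeverRemove) // never evict under memory pressure
    .SetAbsoluteExpiration(nextMonday);         // expire at the computed next Monday
_cache.Set("GetRanking", response, cacheEntryOptions);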
Finally - I assume you've made your _cache object static so it is shared by all instances of your class. (Or made the controller, if that's what it is, a singleton).
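For completeness, a sketch of the dependency-injection route (standard ASP.NET Core Startup; AddMemoryCache registers IMemoryCache as a singleton, so every controller instance shares the same underlying cache):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache(); // one shared MemoryCache for the whole application
    services.AddMvc();
}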
These are my suggestions.
Good luck.

Querying data with Micrometer

We have a fancy monitoring system in which our Spring Boot services post metrics to an InfluxDB instance via Micrometer. There's a nice Grafana frontend, but the problem is that we're now at a stage where we need some of these metrics available in other services to reason about.
The whole system was set up by my predecessor, and my current understanding of it is practically zero. I can add and post new metrics, but I can't for the life of me get anything out of it.
Here's a short example:
Our gateway increments the counter for each image that a camera posts to it. The definition of the counter looks like this:
private val imageCounters = mutableMapOf<String, Counter>()

private val imageCounter = { camera: String ->
    imageCounters.getOrPut(camera) {
        registry.counter("gateway.image.counter", "camera", camera)
    }
}
And the counter is incremented in the code like this:
imageCounter("placeholder-id").increment()
Now we're improving our billing, and the billing service needs to know how many images for a certain camera went through the gateway. So naturally the first thing I try looks like this:
class MonitoringService(val metrics: MeterRegistry) {
    private val log = logger()

    private val imageCounters = mutableMapOf<String, Counter>()
    private val imageCounter = { camera: String ->
        imageCounters.getOrPut(camera) {
            metrics.counter("gateway.image.counter", "camera", camera)
        }
    }

    fun test() {
        val test = imageCounter("16004").count()
        val bugme = true
        log.info("influx test: $test")
    }
}
There are two problems with this: first, it always returns zero, so obviously I'm doing it wrong; I just can't figure out what.
Second, even if it returned a reasonable value, I don't see a way to limit the query by time (I'll usually need the number of images uploaded during the current month).
What worries me is that while I can find a lot of documentation on how to post data with Micrometer, I can't seem to find any documentation on how to query it. Is Micrometer only designed to post monitoring data, not to query it? The .count() method would suggest it can do both, but since querying data seems undocumented as far as I can tell, that might be a misconception on my part.
There is an InfluxDB client for Java, which I'll try next, but at the end of the day I don't want multiple components in my application doing the same thing just because I'm not familiar with the tools I inherited.
InfluxMeterRegistry is a StepMeterRegistry, so a Counter created from it is a StepCounter. StepCounter.increment() increments the count in the current step, but StepCounter.count() returns the count from the previous step. That's why you're seeing 0 from count() although you've already invoked increment() several times. The value becomes visible in the next step, and the default step is 1 minute, so you have to wait a minute to see it.
See the following test to get an idea on how it works: https://github.com/izeye/sample-micrometer-spring-boot/blob/influx/src/test/java/com/izeye/sample/InfluxMeterRegistryTests.java
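For a self-contained illustration of the step behavior, here is a minimal sketch (Kotlin, assuming micrometer-registry-influx is on the classpath; publishing is disabled so nothing is actually sent to InfluxDB, and the step is shortened to one second):

import io.micrometer.core.instrument.Clock
import io.micrometer.influx.InfluxConfig
import io.micrometer.influx.InfluxMeterRegistry
import java.time.Duration

fun main() {
    val config = object : InfluxConfig {
        override fun get(key: String): String? = null         // accept defaults for everything else
        override fun enabled() = false                        // don't actually publish to InfluxDB
        override fun step(): Duration = Duration.ofSeconds(1) // shorten the step for the demo
    }
    val registry = InfluxMeterRegistry(config, Clock.SYSTEM)

    val counter = registry.counter("gateway.image.counter", "camera", "16004")
    counter.increment()
    println(counter.count()) // 0.0 -- the increment sits in the current, unfinished step
    Thread.sleep(1500)       // let the step roll over
    println(counter.count()) // 1.0 -- count() now reports the completed previous step
}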

Firestore transaction produces console error: FAILED_PRECONDITION: the stored version does not match the required base version

I have written a bit of code that allows a user to upvote / downvote recipes in a manner similar to Reddit.
Each individual vote is stored in a Firestore collection named votes, with a structure like this:
{username,recipeId,value} (where value is either -1 or 1)
The recipes are stored in the recipes collection, with a structure somewhat like this:
{title,username,ingredients,instructions,score}
Each time a user votes on a recipe, I need to record their vote in the votes collection, and update the score on the recipe. I want to do this as an atomic operation using a transaction, so there is no chance the two values can ever become out of sync.
Following is the code I have so far. I am using Angular 6; however, I couldn't find any TypeScript examples showing how to handle multiple get()s in a single transaction, so I ended up adapting some Promise-based JavaScript code that I found.
The code seems to work, but there is something happening that is concerning. When I click the upvote/downvote buttons in rapid succession, some console errors occasionally appear. These read POST https://firestore.googleapis.com/v1beta1/projects/myprojectname/databases/(default)/documents:commit 400 (). When I look at the actual response from the server, I see this:
{
  "error": {
    "code": 400,
    "message": "the stored version (1534122723779132) does not match the required base version (0)",
    "status": "FAILED_PRECONDITION"
  }
}
Note that the errors do not appear when I click the buttons slowly.
Should I worry about this error, or is it just a normal result of the transaction retrying? As noted in the Firestore documentation, a "function calling a transaction (transaction function) might run more than once if a concurrent edit affects a document that the transaction reads."
Note that I have tried wrapping try/catch blocks around every single operation below, and there are no errors thrown. I removed them before posting for the sake of making the code easier to follow.
Very interested in hearing any suggestions for improving my code, regardless of whether they're related to the HTTP 400 error.
async vote(username, recipeId, direction) {
    let value;
    if (direction == 'up') {
        value = 1;
    }
    if (direction == 'down') {
        value = -1;
    }
    // assemble vote object to be recorded in votes collection
    const voteObj: Vote = { username: username, recipeId: recipeId, value: value };
    // get references to both vote and recipe documents
    const voteDocRef = this.afs.doc(`votes/${username}_${recipeId}`).ref;
    const recipeDocRef = this.afs.doc('recipes/' + recipeId).ref;
    await this.afs.firestore.runTransaction(async t => {
        const voteDoc = await t.get(voteDocRef);
        const recipeDoc = await t.get(recipeDocRef);
        const currentRecipeScore = recipeDoc.get('score'); // snapshot reads are synchronous
        if (!voteDoc.exists) {
            // This is a new vote, so add it to the votes collection
            // and apply its value to the recipe's score
            t.set(voteDocRef, voteObj);
            t.update(recipeDocRef, { score: (currentRecipeScore + value) });
        } else {
            const voteData = voteDoc.data();
            if (voteData.value == value) {
                // existing vote is the same as the button that was pressed, so delete
                // the vote document and revert the vote from the recipe's score
                t.delete(voteDocRef);
                t.update(recipeDocRef, { score: (currentRecipeScore - value) });
            } else {
                // existing vote is the opposite of the one pressed, so update the
                // vote doc, then apply it to the recipe's score by doubling it.
                // For example, if the current score is 1 and the user reverses their
                // +1 vote by pressing -1, we apply -2 so the score will become -1.
                t.set(voteDocRef, voteObj);
                t.update(recipeDocRef, { score: (currentRecipeScore + (value * 2)) });
            }
        }
        return Promise.resolve(true);
    });
}
According to Firebase developer Nicolas Garnier, "What you are experiencing here is how Transactions work in Firestore: one of the transactions failed to write because the data has changed in the meantime, in this case Firestore re-runs the transaction again, until it succeeds. In the case of multiple Reviews being written at the same time some of them might need to be run again after the first transaction because the data has changed. This is expected behavior and these errors should be taken more as warnings."
In other words, this is a normal result of the transaction retrying.
I used RxJS throttleTime to prevent the user from flooding the Firestore server with transactions by clicking the upvote/downvote buttons in rapid succession, and that greatly reduced the occurrences of this 400 error. In my app, there's no legitimate reason someone would need to click upvote/downvote dozens of times per second. It's not a video game.
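For reference, a sketch of that throttling in the component (RxJS 6 as shipped with Angular 6; the subject and handler names here are hypothetical, not my exact code):

import { Subject } from 'rxjs';
import { throttleTime } from 'rxjs/operators';

// Inside the component class:
voteClicks = new Subject<{ username: string, recipeId: string, direction: string }>();

ngOnInit() {
    // Let at most one vote event through per second; extra clicks are
    // dropped before they can ever start a Firestore transaction.
    this.voteClicks.pipe(throttleTime(1000))
        .subscribe(v => this.vote(v.username, v.recipeId, v.direction));
}

// Wired to the upvote/downvote buttons in the template
onVoteClick(username: string, recipeId: string, direction: string) {
    this.voteClicks.next({ username, recipeId, direction });
}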

Load test randomization: How to set up WCAT to use a different scenario for each virtual client?

I would like to run a load test of one of the POST actions in my web application. The problem is that the action can be completed only if it receives a unique email address in the POST data. I generated a WCAT script with a few thousand requests, each with a unique email, like:
transaction
{
    id = "1";
    weight = 1;

    request
    {
        verb = POST;
        postdata = "Email=test546546546546%40loadtest.com&...";
        setheader { name = "Content-Length"; value = "..."; }
    }

    // more requests like that
}
My UBR settings file looks like this:
settings
{
    counters
    {
        interval = 10;
        counter = "Processor(_Total)\\% Processor Time";
        counter = "Processor(_Total)\\% Privileged Time";
        counter = "Processor(_Total)\\% User Time";
        counter = "Processor(_Total)\\Interrupts/sec";
    }

    clientfile = "<above-wcat-script>";
    server = "<host name>";
    clients = 3;
    virtualclients = 100;
}
When I run the test, 3 × 100 = 300 virtual clients start sending requests, but they all send them in the same order: the first request from the first client is processed, and then the same request from the other 299 clients is no longer unique. Then the second request from some client is processed, and the 299 identical requests from the other clients are not unique either.
I need a way to randomize the requests, run them in a different order, or set up a separate scenario script for each virtual client, so that each request carries a unique email address.
Is it possible to do that with WCAT?
Or maybe there is some other tool that can run such a test?
Have you considered using the rand(x,y) WCAT internal function to add a randomized integer to the email address? By doing so you could conceivably have a single transaction with a single request that uses a randomized email address. So instead of manually creating (say) 1000 requests with unique email addresses, you can use the single randomized transaction 1000 times.
Your new randomized transaction might look something like this:
transaction
{
    id = "1";
    weight = 1;

    request
    {
        verb = POST;
        postdata = "Email=" + rand("100000", "1000000") + "%40loadtest.com&...";
        setheader { name = "Content-Length"; value = "..."; }
    }
}
If using rand(x,y) doesn't make it random enough, you could experiment with additional functions to make the data more random. Perhaps something like this:
postdata = "Email=" + rand("100000", "1000000") + "%40loadtest" + clientindex() + vclientindex() + ".com&...";
You can find the WCAT 6.3 documentation here, including a list of the internal functions that are available. If the built-in functions don't suffice, you can even build your own.
